In category theory, a branch of mathematics, profunctors are a generalization of relations and also of bimodules.

Definition

A profunctor (also named distributor by the French school and module by the Sydney school) from a category $C$ to a category $D$, written $\phi \colon C \nrightarrow D$, is defined to be a functor

$$\phi \colon D^{\mathrm{op}} \times C \to \mathbf{Set}$$

where $D^{\mathrm{op}}$ denotes the opposite category of $D$ and $\mathbf{Set}$ denotes the category of sets. Given morphisms $f \colon d \to d'$, $g \colon c \to c'$ respectively in $D, C$ and an element $x \in \phi(d',c)$, we write $xf \in \phi(d,c)$, $gx \in \phi(d',c')$ to denote the actions.
Using the cartesian closure of $\mathbf{Cat}$, the category of small categories, the profunctor $\phi$ can be seen as a functor

$$\hat{\phi} \colon C \to \hat{D}$$

where $\hat{D}$ denotes the category $\mathrm{Set}^{D^{\mathrm{op}}}$ of presheaves over $D$.

A correspondence from $C$ to $D$ is a profunctor $D \nrightarrow C$.

Composition of profunctors
The composite $\psi\phi$ of two profunctors $\phi \colon C \nrightarrow D$ and $\psi \colon D \nrightarrow E$ is given by

$$\psi\phi = \mathrm{Lan}_{Y_{D}}(\hat{\psi}) \circ \hat{\phi}$$

where $\mathrm{Lan}_{Y_{D}}(\hat{\psi})$ is the left Kan extension of the functor $\hat{\psi}$ along the Yoneda functor $Y_{D} \colon D \to \hat{D}$ of $D$ (which to every object $d$ of $D$ associates the functor $D(-,d) \colon D^{\mathrm{op}} \to \mathrm{Set}$).

It can be shown that

$$(\psi\phi)(e,c) = \left(\coprod_{d \in D} \psi(e,d) \times \phi(d,c)\right) \Bigg/ \sim$$

where $\sim$ is the least equivalence relation such that $(y',x') \sim (y,x)$ whenever there exists a morphism $v$ in $D$ such that $y' = vy \in \psi(e,d')$ and $x'v = x \in \phi(d,c)$.
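As a toy illustration (my own, not from the article): for discrete categories (only identity morphisms, so the equivalence relation above is trivial) with $\mathbf{Set}$ truncated to truth values, a profunctor is just a relation between the object sets, and the coend formula reduces to ordinary composition of relations.

```python
def compose(psi, phi):
    """Compose profunctors-as-relations.

    phi: set of pairs (d, c), representing phi : C -|-> D
    psi: set of pairs (e, d), representing psi : D -|-> E
    Returns the composite psi.phi : C -|-> E, which holds at (e, c)
    iff some middle object d connects them.
    """
    return {(e, c)
            for (e, d1) in psi
            for (d2, c) in phi
            if d1 == d2}

phi = {("d1", "c")}                  # phi(d1, c) is inhabited
psi = {("e", "d2")}                  # psi(e, d2) is inhabited
print(compose(psi, phi))             # set(): no middle object connects e to c
print(compose({("e", "d1")}, phi))   # {('e', 'c')}
```

In the general case the coproduct over $d$ records a witness for each connecting object, and the quotient by $\sim$ identifies witnesses that differ by a morphism of $D$; truncating to truth values collapses exactly that information.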
The bicategory of profunctors

Composition of profunctors is associative only up to isomorphism (because the product is not strictly associative in Set). The best one can hope for is therefore to build a bicategory Prof whose 0-cells are small categories, whose 1-cells between two small categories are the profunctors between those categories, and whose 2-cells between two profunctors are the natural transformations between those profunctors.

Properties

Lifting functors to profunctors
A functor $F \colon C \to D$ can be seen as a profunctor $\phi_F \colon C \nrightarrow D$ by postcomposing with the Yoneda functor: $\phi_F = Y_D \circ F$.

It can be shown that such a profunctor $\phi_F$ has a right adjoint. Moreover, this is a characterization: a profunctor $\phi \colon C \nrightarrow D$ has a right adjoint if and only if $\hat{\phi} \colon C \to \hat{D}$ factors through the Cauchy completion of $D$, i.e. there exists a functor $F \colon C \to D$ such that $\hat{\phi} = Y_D \circ F$.

References

- Bénabou, Jean (2000). "Distributors at Work" (PDF).
- Borceux, Francis (1994). Handbook of Categorical Algebra. CUP.
- Lurie, Jacob (2009). Higher Topos Theory. Princeton University Press.
- Profunctor in nLab
|
I too was unable to derive a direct proof of the formula you have given; the following proof therefore is somewhat indirect. It rests on the following claim.
Claim. Given functions $H(t), \, f(t)$, the renewal equation $g(t) = H(t) + \int_{0}^t g(t-x) f(x) \mathrm d x$ has a unique solution.
Henceforth I will use the standard convolution notation $\phi*\psi(t) = \int_0^t \phi(t-x)\psi(x) \mathrm d x$ to simplify the formulas. The equation above then reads: $g = H + g*f$.
My proof proceeds in two steps:
1. Derive a renewal-type equation for the variance.
2. Prove that it is solved by the right-hand side of the formula you gave.
Part 1. Instead of working with the variance, I will consider $g(t) = \textbf{E}[N(t)^2]$. Exactly as though we were deriving the usual renewal equation, we condition on the first jump time: \begin{align*}g(t) & = \int_{0}^\infty \textbf{E}[N(t)^2 \, | \, X_1 = x] f(x) \mathrm d x \\& = \int_0^t \textbf{E}[N(t)^2 \, | \, X_1 = x] f(x) \mathrm d x + \int_t^\infty 0 \, \mathrm d x \\& = \int_0^t \textbf{E}[(1 + N(t-x) )^2] f(x) \mathrm d x \\& = \int_0^t f(x)\Big(1 + 2 \textbf{E}[N(t-x)] + \textbf{E}[N(t-x)^2] \Big ) \mathrm d x \\& = F(t) + 2(m * f)(t) + (g*f)(t).\end{align*} In particular, $g$ satisfies a renewal-type equation with $H = F + 2 m*f$.
Part 2. According to the formula you gave, we want to show $g(t) = 2(m*m')(t) + m(t)$; note that this is exactly the right-hand side of your formula but with the $m(t)^2$ term removed, since we are not working with the variance. Substituting this formula into the right-hand side of the renewal equation we have \begin{align}H + (2(m*m') + m)*f & = F + 2(m*f) +2(m*m')*f + m*f\\& = (m - m*f) + 2(m*f) +2(m*m')*f + m*f\\& = m + 2( m*f + m*m'*f),\end{align} where in the second line we substituted in the renewal equation $F = m - m*f$. It remains then to show \begin{align} m*f + m*m'*f = m*m'. \tag{1}\end{align} But again using the renewal equation and differentiating it, \begin{align} m' = (F + m*f)' = f + m'*f,\end{align} which when rearranged is $$m'*f = m' - f.$$ Substituting into (1), \begin{align} m*f + m*m'*f & = m*f + m*(m' -f) \\& = m*m',\end{align} as required.
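As a quick numerical sanity check (my addition, not part of the proof above): for a Poisson process with rate $\lambda$, $m(t)=\lambda t$ and $m'(t)=\lambda$, so the formula gives $\textbf{E}[N(t)^2] = 2(m*m')(t)+m(t) = \lambda^2 t^2 + \lambda t$, matching $\operatorname{Var} N(t) + (\textbf{E}N(t))^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, reps = 2.0, 3.0, 500_000

# N(t) for a rate-lam Poisson process is Poisson(lam * t).
counts = rng.poisson(lam * t, size=reps)
print((counts.astype(float) ** 2).mean())  # Monte Carlo estimate of E[N(t)^2]
print(lam**2 * t**2 + lam * t)             # formula: 42.0 for lam = 2, t = 3
```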
|
Let $\mathbb{X}$ be a normed space that is complete and $\mathbb{Y}$ be another normed space which is not complete. Then can a bounded linear map $A:\mathbb{X} \to \mathbb{Y}$ be bijective or not?
Edit: In fact there's a very simple theorem here that gives the whole truth: Given a bounded linear bijection $T:X\to Y$, where $X$ is complete, $Y$ is complete if and only if $T^{-1}$ is bounded. (If $Y$ is complete the open mapping theorem shows that $T^{-1}$ is bounded. On the other hand if $T^{-1}$ is bounded it's trivial to show that $Y$ is complete: A Cauchy sequence in $Y$ comes from a Cauchy sequence in $X$, which converges...) Original:
Yes, it's possible. This surprises me; I thought the answer was no. The reason I thought the answer was no was something like this:
Let's agree that an
isomorphism in the present context is a bounded linear bijection whose inverse is also bounded. Now (i) a bounded linear bijection between Banach spaces must be an isomorphism, (ii) if $X$ and $Y$ are isomorphic normed spaces and $X$ is complete then $Y$ is complete. Of course I never thought that was actually a proof here; all it proves is that $Y$ is complete if $Y$ is complete. But those facts in my head made me think the answer was no.
Anyway, here's an example. Let $X=\ell^2$, the usual space of square-summable sequences. Define $T:X\to X$ by $$Tx=(x_1,x_2/2,x_3/3,\dots).$$
Then $T$ is certainly bounded and injective.
Now let $Y=T(X)$, and give $Y$ the norm it inherits from $\ell^2$. Regard $T$ as a map from $X$ to $Y$. It's still bounded and injective, and now it's surjective.
So $T:X\to Y$ is a bounded linear bijection. And $Y$ is not complete. (Proof: If $Y$ were complete then the open mapping theorem would show that $T^{-1}:Y\to X$ was bounded, but $T^{-1}$ is certainly not bounded.)
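Numerically the unboundedness of $T^{-1}$ is easy to see: $\|Te_n\| = 1/n$ while $\|e_n\| = 1$, so $T^{-1}$ stretches $Te_n$ back by a factor of $n$. A minimal sketch on finite truncations of $\ell^2$ (my own illustration):

```python
import numpy as np

def T(x):
    """Tx = (x_1, x_2/2, x_3/3, ...) on a finite truncation of l^2."""
    return x / np.arange(1, len(x) + 1)

for n in (1, 10, 100, 1000):
    e_n = np.zeros(n)
    e_n[-1] = 1.0                      # the n-th standard basis vector
    print(n, np.linalg.norm(T(e_n)))   # ||T e_n|| = 1/n, so ||T^{-1}|| >= n
```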
$\newcommand{nrm}[1]{\left\lVert{#1}\right\rVert}$ Long story short: yes, and it happens quite often.
For instance, let $I=(0,1)$. Consider the map \begin{align}\psi:W^{1,p}(I)&\hookrightarrow L^p(I)\\u&\mapsto u\end{align}
Since $\nrm{u}_{W^{1,p}}=\nrm{u'}_p+\nrm{u}_p$, it holds $\nrm\psi\le1$.
But $\psi\left(W^{1,p}(I)\right)=F$ contains $C^\infty_c(I)$, therefore it is dense in $L^p(I)$. Obviously, $F\ne L^p(I)$ because all functions in $F$ have finite $\operatorname{ess\,sup}$.
Hence $(F,\nrm{\bullet}_p)$ is not Banach and $\psi:W^{1,p}(I)\to F$ is continuous and bijective.
|
Here is a sketch of proof. Since the eigenvalues of a linear operator are independent of the choice of basis, for ease of presentation, when we mention $A,G,Q$ or $M$ below, they are viewed as linear operators rather than matrices.
Consider an ordered orthonormal basis $\mathcal{U}=\left\{u_1,\ u_2,\ \ldots,\ \right\}$ where$$\begin{align*}u_1&=\frac1R(1,1,\ldots,1)^T,\\u_2&=\frac1R(r,\,-1/r,\,\ldots,\,-1/r)^T\end{align*}$$with $r=\sqrt{n-1}$ and $R=\sqrt{n}$. One can verify that under $\mathcal{U}$, the matrix of $A$ is given by $A'\oplus0_{(n-2)\times(n-2)}$ and the matrix of $G$ is given by $G'\oplus0_{(n-2)\times(n-2)}$, where\begin{equation}A'=\frac{r}{R^2}\begin{pmatrix}2r&r^2-1\\ r^2-1&-2r\end{pmatrix},\quad G'=\frac{A'+r^2\theta I_2}{1-r^2\theta^2}.\tag{1}\end{equation}We will now prove your assertion under the assumption that $u_2\notin W^\perp$. (The case $u_2\in W^\perp$ is similar but simpler and hence it is omitted here.) Let $\mathcal{V}=\{v_1,\ldots,v_n\}$ be an orthonormal basis such that $v_1,\ldots,v_k$ form a basis of $W$ and the other $v_i$s form a basis of $W^\perp$. Here $v_1,v_2$ and $v_{k+1}$ are chosen as follows:$$\begin{align}v_1&=u_1,\\v_{k+1}&=\frac{Mu_2}{\|Mu_2\|},\tag{2}\\v_2&=\frac{(I-M)u_2}{\|(I-M)u_2\|}\tag{3}.\end{align}$$Recall that $u_1\perp u_2$ and by assumption, $u_1\in W \perp Mu_2\in W^\perp$. Therefore $u_1,\,Mu_2$ and $(I-M)u_2$ are orthogonal to each other. Also, as the matrix of $A$ is $A'\oplus0_{(n-2)\times(n-2)}$ under the basis $\{u_1,u_2,\ldots\}$, some two eigenvectors of $A$ are spanned by $u_1$ and $u_2$. However, by the given conditions, $u_1\in W$ and the eigenvectors of $A$ do not lie inside $W$. Therefore $u_2$ must not lie inside $W$. Hence $Mu_2\not=0$. Yet we have assumed that $u_2\notin W^\perp$. So $(I-M)u_2$ is also nonzero. Hence the denominators in $(2)$ and $(3)$ are nonzero and $\{v_1,v_2,v_{k+1}\}$ is indeed an orthonormal set of vectors.
Now, since $\mathcal{V}$ is orthonormal, for $i\not=1,2,k+1$, we have $v_i\perp\operatorname{span}\{v_1,v_2,v_{k+1}\}$ and hence $v_i\perp u_1, u_2$. Therefore the matrix of $G$ under $\mathcal{V}$ is of the form\begin{equation}\left[\begin{array}{ccc|cc}\ast&\ast&&a\\\ast&\ast&&b\\&&0\\\hlinea&b&&\ast\\&&&&0\end{array}\right]\tag{4}\end{equation}where the two zero matrices are of sizes $(k-2)\times(k-2)$ and $(n-k-1)\times(n-k-1)$ respectively. By $(1)$, $Gu_1$ has a nonzero component in $u_2$. Since $u_2\notin W$ and $W^\perp$, it follows that $a\not=0$ in $(4)$. Therefore the matrix of $QM+MQ$ under $\mathcal{V}$ is of the form\begin{equation}\left[\begin{array}{ccc|cc}0&&&a\\&0&&b\\&&0_{(k-2)\times(k-2)}\\\hlinea&b&&c\\&&&&sI_{n-k-1}\end{array}\right]\end{equation}where $s=-\frac1n\operatorname{trace}(G)=\frac{-2r^2\theta}{n(1-r^2\theta^2)}$. This matrix is permutation similar to\begin{equation}\left[\begin{array}{ccc|cc}0&0&a\\0&0&b\\a&b&c\\\hline&&&0_{(k-2)\times(k-2)}\\&&&&2sI_{n-k-1}\end{array}\right]=S\oplus0_{(k-2)\times(k-2)}\oplus(2sI_{n-k-1}).\end{equation}So, if $\theta\not=0$ and $n-k-1>0$ (i.e. if $\theta\not=0$ and $\dim W\le n-2$, otherwise your assertion is not true), one nonzero but perhaps repeated eigenvalue of $QM+MQ$ is $\lambda=2s=\frac{-4r^2\theta}{n(1-r^2\theta^2)}$, which is contributed by the block $2sI_{n-k-1}$. Also, the characteristic equation of $S$ is $x(x^2-cx-a^2-b^2)=0$. As $a\not=0$, this equation has one zero root and two distinct nonzero roots.
If you can show that these two nonzero roots are not equal to $2s$, then we are done. However, since $W$ is chosen arbitrarily, I think there might be a measure-zero set of failure cases.
|
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question.
Notation/ Lagrangians
Let me first provide the respective Lagrangians and elucidate the notation.
I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ Where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$
The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part.
Noether currents of particles
Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing local $U(1)$ gauge we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\nu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ which by the same procedure leads to $$\mathcal{J}^\nu = \frac{e}{m} \Im(B^*_\mu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\nu$$
Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ since it does not contain any $\partial_\mu$.
"Self-charge"
Now consider very slowly moving or even static particles; we have $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we have thus approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
For the interpretation let us pass back to SI units; in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ to the rest mass of the particle $mc^2$. The sign of this extra density depends only on the sign of the electrostatic potential, and both frequency parts contribute with the same sign (which is superweird). This would mean that
classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved; only this generalized charge is.
After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $m(c^2)/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons.
Now to the questions:
1. On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.)
2. Is there an intuitive physical reason why such a violation does not occur for fermions even on a classical level?
3. Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation anyhow reflected in them, and does it have associated experimental phenomena?

Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e \cdot 10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e \cdot 10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why?
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
|
Consider $\mathcal{N}=1,d=4$ SUSY with $n$ chiral superfields $\Phi^i,$ Kaehler potential $K,$ superpotential $W$ and action ($\overline{\Phi}_i$ is complex conjugate of $\Phi^i$) $$ S= \int d^4x \left[ \int d^2\theta d^2\overline{\theta} K(\Phi,\overline{\Phi})+ \int d^2\theta W(\Phi) + \int d^2\overline{\theta} \overline{W}(\overline{\Phi})\right].$$
As in Weinberg (QFT, volume 3, page 89), requiring invariance of $K$ and $W$ under \begin{align}\delta\Phi^i &= i\epsilon Q^{i}_{~j}\Phi^j \\ \delta\overline{\Phi}_i &=- i\epsilon \overline{\Phi}_jQ_{~i}^{j}\end{align} with $\epsilon$ small positive constant real parameter and $Q$ hermitian matrix, one obtains some conditions on $K$ and $W,$ namely (denoting $K_i=\frac{\partial K}{\partial\Phi^i}$ and $K^i=\frac{\partial K}{\partial\overline{\Phi}_i},$ and the same for $W$) $$ K_i Q^i_{~j} \Phi^j = \overline{\Phi}_j Q^j_{~i} K^i $$ $$ W_i Q^i_{~j}\Phi^j=0=\overline{\Phi}_j Q^j_{~i} \overline{W}^i $$ and can then compute the Noether current associated to such transformations, promote $\epsilon$ to a full chiral superfield $\epsilon\Lambda$, etc.
The obtained invariance implies $\delta_{\epsilon}S=0$ for the above transformations.
QUESTION: Is the reverse also true? Namely, by requiring $\delta_{\epsilon}S=0$ (instead of the apparently stronger invariance condition on $K$ and $W$), does one obtain the same constraints as above on $K$ and $W$?
|
Webster's Revised Unabridged Dictionary
AS. rest, ræst, rest; akin to D. rust, G. rast, OHG. rasta, Dan. & Sw. rast, rest, repose, Icel. röst, the distance between two resting places, a mile, Goth. rasta, a mile, also to Goth. razn, house, Icel. rann, and perhaps to G. ruhe, rest, repose, AS. rōw, Gr. ἐρωή. Cf. Ransack
In literature:
The world seemed to be divided into two parts, the rest of it and that tunnel.
"The Trail of '98" by Robert W. Service
I always wish, madam, to rest my eyes where my wife's have rested.
"The Long Roll" by Mary Johnston
To the rest of the world little or nothing was explained.
"The Best Short Stories of 1920" by Various
We rested there two days.
"Old Rail Fence Corners" by Various
He had a mind to sit down upon the bank beneath the apple trees and rest.
"Fair Harbor" by Joseph Crosby Lincoln
Of the eight men aboard two were killed outright and the rest thrown into the sea.
"The Pirate of Panama" by William MacLeod Raine
The turf ran in to the very edge of the precipice, and on the same level with the rest of the plain.
"The White Chief" by Mayne Reid
The rest is in the Lord's hands.
"Janet's Love and Service" by Margaret M Robertson
Others are spheroidal in shape, resting upon the surface of the parched earth.
"The Scalp Hunters" by Mayne Reid
While at rest the leg is flexed at the joint affected, and the toe rests on the ground.
"Special Report on Diseases of the Horse" by United States Department of Agriculture
While the rest of the party were collecting materials, I went aloft, anxious to see what the negroes on shore were about.
"In the Wilds of Africa" by W.H.G. Kingston
The office was entirely separate, that is, it had its own entrance door and no communication with the rest.
"The Heart of Unaga" by Ridgwell Cullum
All went right for the rest of the morning.
"Deerbrook" by Harriet Martineau
If you mean that my father divided the rest among you all, he only did what was right, just.
"Floyd Grandon's Honor" by Amanda Minnie Douglas
The rest of the members of the mess were variously employed.
"The Three Midshipmen" by W.H.G. Kingston
Winthrop & y^e rest said they could doe no more then they had done thus to requeste them, y^e blame must rest on them.
"Bradford's History of 'Plimoth Plantation'" by William Bradford
An awful crime rested upon my soul, a crime only the shadow of which had rested upon me before.
"Roger Trewinion" by Joseph Hocking
Rest for the weary one!
"The Wagnerian Romances" by Gertrude Hall
That which went on after the rest of the household had retired to rest was known to only two others.
"The Watchers of the Plains" by Ridgewell Cullum
The talk lasted until it was broken up by Mrs. Eberstein, who declared Dolly must go to rest.
"The End of a Coil" by Susan Warner ***
In poetry:
Then cleave more closely to my breast,
And I will closer cleave to thine:
Thy bosom is my sweetest rest—
Oh, rest thy weary head on mine!
"To--" by Thomas Cooper
We ceased: a gentler feeling crept
Upon us: surely rest is meet:
"They rest," we said, "their sleep is sweet,"
And silence follow'd, and we wept.
"In Memoriam A. H. H. Obiit MDCCCXXXIII" by Alfred Lord Tennyson
You must not weep; I believe I'd hear your tears
Tho' sleeping in a tomb:
My rest would not be rest, if in your years
There floated clouds of gloom.
"M * * *" by Abram Joseph Ryan
Rest, my little baby dear,
I will watch thy rest,
Thou shalt feel the waters near
Only on my breast ;
In the strong and tender tide
Still my love shall be thy guide.
"To My Children" by Dollie Radford
Roses! round the shrine and aisle!
Which of all I loved the best,
I have gone to rest awhile
Where the wavelets never rest --
Ye are dearer far to me
Than the ever restless sea.
"Sea Rest" by Abram Joseph Ryan
Of all the works we do — to serve the Lord,
Is the most needful, and by much the best —
It always does the surest gains afford,
And brings in greater int'rest than the rest.
"Advice To Serve God" by Rees Prichard
In news:
Fame and Fortune Laid to Rest.
Check out the rest of the list that we dread about the holiday season.
Her left elbow rests on his right elbow in line with his shoulders.
This issue and the rest of the Rolling Stone archives are available via Rolling Stone Plus, Rolling Stone's premium subscription plan.
Transfer to cutting board, tent with foil, and let rest 10 minutes.
A portion of the wine ages in new French oak barrels, and the remainder rests for 5 months in large Slovenian casks before bottling.
John Baizley and the rest of Baroness are back in the US, recovering from a brutal bus accident in the U.K.
"I have no problem doing it for the rest of my life," the 17-year-old said.
William Perlman/The Star Ledger Brett Gardner may be a spectator for the rest of the season.
Stephen Curry's right ankle still isn't 100% and the point guard could miss the rest of the season for the Golden State Warriors while in rehabilitation.
Rutgers QB Gary Nova expected to be full go Thursday after resting arm today.
Remove steak from pan and let rest.
Today, in a simple Mass, the family of Leobardo López laid their hopes, but not his body, to rest.
Baffert said Lookin At Lucky has earned a rest.
Yergin's bullish view has something in common with the views of the pessimists -- it rests on unknowns. ***
In science:
$E(Z) = M(Z)c^2$, $M(Z)$ being the equivalent mass of this state. $M(Z)$ is the rest mass for this state in the frame $S_0$ in which the state is at rest.
A Sketch for a Quantum Theory of Gravitity
$G$ is the constant of gravitation, $m_p$ is the rest mass of a proton, and $m_e = m_0$ is the rest mass of an electron. $\alpha$ is the dimensionless coupling constant for the electromagnetic field.
A Sketch for a Quantum Theory of Gravitity
In other words, any proton can have a high gravitational velocity relative to its distant rest mass generating graviton’s rest frame.
A Sketch for a Quantum Theory of Gravitity
The first boost transforms the spectrum from the rest frame of a (in which h is injected with energy Eprod ) to the rest frame of A.
Spectra of neutrinos from dark matter annihilations
This procedure arose from concerns that the rest frame V light curves for the high-redshift sample are more poorly sampled than the rest frame B light curves, which is not the case for the low-redshift sample.
Measurement of \Omega_m, \Omega_{\Lambda} from a blind analysis of Type Ia supernovae with CMAGIC: Using color information to verify the acceleration of the Universe
The plan of the rest of this paper is a follows: Sect. 3 is devoted to some basic percolation estimates needed in the rest of the paper.
Functional CLT for random walk among bounded random conductances
As in Paper I, we determine the rest-frame equivalent widths directly from the spectra according to $W^{\rm rest}_\lambda = (F_\ell/f_{\lambda,r})/(1 + z)$, where $F_\ell$ is the flux in the emission line and $f_{\lambda,r}$ is the measured red-side continuum flux density.
A Luminosity Function of Lyman Alpha Emitting Galaxies at Redshift 4.5
Fig. 7.— Histogram of the spectroscopic rest-frame equivalent widths for the $z = 4.5$ population, determined with $W^{\rm rest}_\lambda = (F_\ell/f_{\lambda,r})/(1 + z)$, where $F_\ell$ is the flux in the emission line and $f_{\lambda,r}$ is the measured red-side continuum flux density.
A Luminosity Function of Lyman Alpha Emitting Galaxies at Redshift 4.5
The rest-frame equivalent widths were determined with $W^{\rm rest}_\lambda = (F_\ell/f_{\lambda,r})/(1 + z)$, where $F_\ell$ is the flux in the emission line and $f_{\lambda,r}$ is the measured red-side continuum flux density.
A Luminosity Function of Lyman Alpha Emitting Galaxies at Redshift 4.5
B (νr , Td ) where νo and νr are the observed and rest-frame frequencies, respectively, Sνo is the observed flux density, DL is the luminosity distance, and κ(νr ) is the mass absorption coefficient in the rest frame.
Wide-field mid-infrared and millimetre imaging of the high-redshift radio galaxy, 4C41.17
The rest of this section is devoted to motivating the results and establishing the context and is not logically necessary to read the rest of the paper.
From random matrices to random analytic functions
Besides, what Einstein really proved was that if a body at rest emits a total energy of E remaining at rest, then the mass of this body decreases by E/ c2.
Foundations of Information Theory
Corresponding to a rest frequency of 492.16065 GHz, from $\nu_{\rm obs} = \nu_{\rm rest}/(1 + z)$, with $z = 0.88582$. b Note that in the 230 GHz band, the eSMA bandwidth is limited by that of the JCMT's receiver A3.
The eSMA: description and first results
We devote the rest of this section to introduce few useful notation on quasi *-algebras, which will be used in the rest of this paper.
$O^\star$-algebras and quantum dynamics: some existence results
Since it is not possible to pinpoint the rest frame of a spontaneous random event, it will not be possible to pinpoint the rest frame of a collection of such events exhibiting complete spatial-temporal randomness.
Invariant lengths using existing Special Relativity ***
|
For students of the maths stream, the 2nd PUC year under the Karnataka board is very critical, as it is when they acquire more in-depth knowledge in their stream. Students who pass this year successfully can go on to professional colleges or other institutions as per their preference. The marks obtained in these exams are considered for admission to universities and to technical and medical courses. For this, students have to practice and learn every subject, including maths, very thoroughly, and the Karnataka 2nd PUC Maths Important Questions will be a very useful tool.
To help with the preparation of these important exams, BYJU'S has compiled a list of important questions from the II PU maths textbook of the Karnataka board.
List of 2nd PUC Maths Important Questions

1. Find the area of the triangle whose vertices are (-2, -3), (3, 2) and (-1, -8) by using the determinant method.
2. Write the simplest form of \(tan^{-1}\left ( \frac{cosx-sinx}{cosx+sinx} \right )\), \(0< x< \frac{\pi }{2}\).
3. Find \(\frac{dy}{dx}\), if \(x^{2}+xy+y^{2} = 100\).
4. Integrate \(\frac{e^{tan^{-1}x}}{1+x^{2}}\).
5. Show that the relation R in the set A = {1, 2, 3, 4, 5} given by R = {\((a,b):\left | a-b \right |\) is even} is an equivalence relation.
6. Find \(\int \frac{x\,dx}{(x+1)(x+2)}\).
7. Find the area of the region bounded by the curve \(y= x^{2}\) and the line \(y = 4\).
8. A bag contains 4 red and 4 black balls; another bag contains 2 red and 6 black balls. One of the two bags is selected at random and a ball is drawn from the bag, which is found to be red. Find the probability that the ball is drawn from the first bag.
9. Sand is pouring from a pipe at a rate of 12 cubic cm/s. The falling sand forms a cone on the ground in such a way that the height of the cone is always one-sixth of the radius of the base. How fast is the height of the sand cone increasing when the height is 4 cm?
10. Derive the equation of the line in space passing through two given points, both in vector and Cartesian form.
11. If A is a square matrix with \(\left | A \right |=8\), then find the value of \(\left | A A' \right |\).
12. Define collinear vectors.
13. Find the direction cosines of a line which makes equal angles with the positive coordinate axes.
14. Find the approximate change in the volume of a cube of side x metres caused by increasing the side by 3%.
15. Find the probability distribution of the number of heads in two tosses of a coin.
16. Form the differential equation of the family of circles having centre on the y-axis and radius 3 units.
18. Verify the Mean Value Theorem for the function \(f(x)=x^{2}\) in the interval [2, 4].
19. A ladder 5 m long is leaning against a wall. The bottom of the ladder is pulled along the ground away from the wall at a rate of 2 cm/sec. How fast is its height on the wall decreasing when the foot of the ladder is 4 m away from the wall?
20. Find the two positive numbers whose sum is 15 and the sum of whose squares is minimum.
Why solve Karnataka 2nd PUC Maths Important Questions?

- Gain practice and confidence to do exams
- Be better prepared, as students get to gauge their performance and ways to bridge the knowledge gap
- Learn more about the important topics of the subject
- Get an idea about the repeated questions and trends
|
How can I prove that $$2^n=2\left({n \choose 0}+{n \choose 2}+{n \choose 4}+\dots\right)$$ using the binomial theorem. I've tried expanding $(x-y)^n$ with multiple different values of $x$ and $y$ but nothing pans out. I can see why the above is true when looking at Pascal's triangle as the sum of the $n-1$ row gives $2^{n-1}$ and is indeed the sum above, but I can't seem to pin it down algebraically. Thank you.
Writing $2^n$ (for $n \ge 1$) as $$2^n=2^n+0^n=(1+1)^n+(-1+1)^n$$ and by applying the Binomial Theorem, you have that $$\begin{align*}2^n&=(1+1)^n+(-1+1)^n=\sum_{k=0}^{n}\binom{n}{k}1^k\cdot1^{n-k}+\sum_{k=0}^{n}\binom{n}{k}(-1)^k\cdot1^{n-k}=\\&=\sum_{k=0}^{n}\binom{n}{k}+\sum_{k=0}^{n}\binom{n}{k}(-1)^k\end{align*}$$ and since $$(-1)^k=\begin{cases}1, &\text{if } k \in \mathbb{2N}\\-1, &\text{if } k \in \mathbb{2N+1}\end{cases}$$ the above equation can be written as $$\begin{align*}2^n&=\sum_{k=0}^{n}\binom{n}{k}+\sum_{k \in \mathbb{2N}}^{n}\binom{n}{k}-\sum_{k \in \mathbb{2N+1}}^{n}\binom{n}{k}=2\sum_{k \in \mathbb{2N}}^{n}\binom{n}{k}=\\&=2\left(\binom{n}{0}+\binom{n}{2}+\binom{n}{4}+\ldots\right)\end{align*}$$
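A quick numerical check of the identity (note that it requires $n \ge 1$; the step $(-1+1)^n = 0$ fails for $n = 0$):

```python
from math import comb

for n in range(1, 12):
    even_sum = sum(comb(n, k) for k in range(0, n + 1, 2))
    assert 2 * even_sum == 2**n, n
print("identity verified for n = 1, ..., 11")
```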
Hint: $$(x+y)^n = \sum_{k=0}^n \binom nk x^ky^{n-k}$$Now change $y\to -y$ and make the sum and difference of both relations.
take a set $S$ of $n$ things. you are computing the number of subsets even cardinality. it is well-known (or easily proved) that there are $2^n$ subsets in all, and that as many have even cardinality as have odd cardinality.
|
Brief sketch of ARE for one-sample $t$-test, signed test and the signed-rank test
I expect the long version of @Glen_b's answer includes detailed analysis for two-sample signed rank test along with the intuitive explanation of the ARE. So I'll skip most of the derivation. (one-sample case, you can find the missing details in Lehmann TSH).
Testing Problem: Let $X_1,\ldots,X_n$ be a random sample from location model $f(x-\theta)$, symmetric about zero. We are to compute ARE of signed test, signed rank test for the hypothesis $H_0: \theta=0$ relative to t-test.
To assess the relative efficiency of tests, only local alternatives are considered because consistent tests have power tending to 1 against fixed alternative. Local alternatives that give rise to nontrivial asymptotic power is often of the form $\theta_n=h/\sqrt{n}$ for fixed $h$, which is called
Pitman drift in some literature.
Our task ahead is to:

1. find the limit distribution of each test statistic under the null;
2. find the limit distribution of each test statistic under the alternative;
3. compute the local asymptotic power of each test.
Test statistics and asymptotics

t-test (given the existence of $\sigma$): $$t_n=\sqrt{n}\frac{\bar{X}}{\hat{\sigma}}\to_dN(0,1)\quad \text{under the null}$$ $$t_n=\sqrt{n}\frac{\bar{X}}{\hat{\sigma}}\to_dN(h/\sigma,1)\quad \text{under the alternative }\theta=h/\sqrt{n}$$ so the test that rejects if $t_n>z_\alpha$ has asymptotic power function $$1-\Phi\left(z_\alpha-h\frac{1}{\sigma}\right)$$

signed test $S_n=\frac{1}{n}\sum_{i=1}^{n}1\{X_i>0\}$: $$\sqrt{n}\left(S_n-\frac{1}{2}\right)\to_dN\left(0,\frac{1}{4}\right)\quad \text{under the null }$$ $$\sqrt{n}\left(S_n-\frac{1}{2}\right)\to_dN\left(hf(0),\frac{1}{4}\right)\quad \text{under the alternative }$$ and has local asymptotic power $$1-\Phi\left(z_\alpha-2hf(0)\right)$$

signed-rank test: $$W_n=\frac{2}{n^{3/2}}\left(\sum_{i=1}^{n}R_i1\{X_i>0\}-\frac{n(n+1)}{4}\right)\to_dN\left(0,\frac{1}{3}\right)\quad \text{under the null }$$ $$W_n\to_dN\left(2h\int f^2,\frac{1}{3}\right)\quad \text{under the alternative }$$ and has local asymptotic power $$1-\Phi\left(z_\alpha-\sqrt{12}h\int f^2\right)$$
Therefore, $$ARE(S_n)=(2f(0)\sigma)^2$$ $$ARE(W_n)=\left(\sqrt{12}\int f^2\,\sigma\right)^2$$ If $f$ is the standard normal density, $ARE(S_n)=2/\pi$, $ARE(W_n)=3/\pi$.
If $f$ is uniform on $[-1,1]$, $ARE(S_n)=1/3$, $ARE(W_n)=1$.
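Plugging densities into these expressions is mechanical; here is a quick check of the quoted values (my sketch, using the closed-form constants $f(0)=1/\sqrt{2\pi}$ and $\int f^2 = 1/(2\sqrt{\pi})$ for the standard normal):

```python
import numpy as np

# Standard normal: sigma = 1.
f0, int_f2 = 1 / np.sqrt(2 * np.pi), 1 / (2 * np.sqrt(np.pi))
print((2 * f0) ** 2, 2 / np.pi)        # ARE of the signed test: 2/pi
print(12 * int_f2 ** 2, 3 / np.pi)     # ARE of the signed-rank test: 3/pi

# Uniform on [-1, 1]: f(0) = 1/2, integral of f^2 = 1/2, sigma^2 = 1/3.
f0, int_f2, sigma2 = 0.5, 0.5, 1 / 3
print((2 * f0) ** 2 * sigma2)          # ARE of the signed test: 1/3
print(12 * int_f2 ** 2 * sigma2)       # ARE of the signed-rank test: 1
```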
Remark on the derivation of distribution under the alternative
There are of course many ways to derive the limiting distribution under the alternative. One general approach is to use Le Cam's third lemma. Simplified version of it states
Let $\Delta_n$ be the log of the likelihood ratio. For some statistic
$W_n$, if
$$ (W_n,\Delta_n)\to_d N\left[\left(\begin{array}{c} \mu\\ -\sigma^2/2 \end{array}\right),\left(\begin{array}{cc} \sigma^2_W & \tau \\ \tau & \sigma^2 \end{array}\right)\right] $$
under the null, then $$W_n\to_d N\left(\mu+\tau,\sigma^2_W\right)\quad\text{under the alternative}$$
For quadratic mean differentiable densities, local asymptotic normality and contiguity are automatically satisfied, which in turn implies Le Cam's lemma. Using this lemma, we only need to compute $\mathrm{cov}(W_n,\Delta_n)$ under the null. $\Delta_n$ obeys LAN: $$\Delta_n\approx \frac{h}{\sqrt{n}}\sum_{i=1}^{n}l(X_i)-\frac{1}{2}h^2I_0$$ where $l$ is the score function and $I_0$ is the information. Then, for instance, for the signed test $S_n$, $$\mathrm{cov}(\sqrt{n}(S_n-1/2),\Delta_n)=-h\,\mathrm{cov}\left(1\{X_i>0\},\frac{f'}{f}(X_i)\right)=-h\int_0^\infty f'=hf(0)$$
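The displayed limits are also easy to confirm by simulating at the local alternative directly; a small sketch for the signed test under a standard normal $f$ (my own, with arbitrary constants):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, reps = 2_500, 2.0, 2_000
theta = h / np.sqrt(n)                       # local alternative theta_n

x = rng.normal(loc=theta, size=(reps, n))
s = np.sqrt(n) * ((x > 0).mean(axis=1) - 0.5)

f0 = 1 / np.sqrt(2 * np.pi)                  # standard normal density at 0
print(s.mean(), h * f0)                      # both approximately 0.798
print(s.var(), 0.25)                         # both approximately 1/4
```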
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity, which we can get by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?
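One standard way to finish (a sketch, not from the question itself): the particle moves left exactly where $v(t) < 0$; factor $v(t) = 3(t-1)(t-3)$ and read off the sign, or do it symbolically:

```python
import sympy as sp

t = sp.symbols("t", real=True)
v = 3*t**2 - 12*t + 9                             # v(t) = x'(t)
print(sp.factor(v))                               # 3*(t - 1)*(t - 3)
print(sp.solve_univariate_inequality(v < 0, t))   # Interval.open(1, 3)
```

So the particle moves to the left for $1 < t < 3$.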
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
We have an argument that 3-coloring bounded-degree graphs is subexponential, with complexity $O(\exp{(\sqrt{n}\log^2{n})})$.

The treewidth of a planar graph on $n$ vertices is $O(\sqrt{n})$, and 3-coloring it takes $O(\exp{\sqrt{n}})$ time, as shown here (p. 8 of the pdf).

This paper gives a reduction from 3-coloring to 3-coloring planar graphs; the main idea is to replace each edge crossing with a small gadget, which preserves colorability.

If for a bounded-degree graph we can find a drawing with $o(n^2)$ crossings, we get a subexponential algorithm for 3-coloring it.

According to the second paper, for bounded-degree graphs we get an approximation with $O(n\log^4{n})$ crossings.

After we have found a drawing with few crossings, we planarize using the gadget. The resulting graph is planar, of bounded degree, and on $n+ C n\log^4{n}$ vertices.

Q1: Is this result correct?

To our knowledge, edge coloring a 3-regular graph takes exponential time.
|
Sandbox
this is a test sandbox. feel free to edit how you like although no changes are guaranteed to stay for long.
test edit
Basic text formatting
You can format this page using Wikitext special characters.
What it looks like What you type
You can italicize text by putting 2 apostrophes on each side.
3 apostrophes will bold the text.
5 apostrophes will bold and italicize the text.
(Using 4 apostrophes doesn't do anything special -- there are just 'left over ones' that are included as part of the text.)
You can ''italicize text'' by putting 2 apostrophes on each side. 3 apostrophes will bold '''the text'''. 5 apostrophes will bold and italicize '''''the text'''''. (Using 4 apostrophes doesn't do anything special -- <br> there are just ''''left over ones'''' that are included as part of the text.)
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the diff function (used internally to compare different versions of a page).
But an empty line starts a new paragraph.
When used in a list, a newline does affect the layout (see below).
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function (used internally to compare different versions of a page). But an empty line starts a new paragraph. When used in a list, a newline ''does'' affect the layout ([[#lists|see below]]).
You can break lines without a new paragraph. Please use this sparingly.
Please do not start a link or italics or bold on one line and close it on the next.
You can break lines<br> without a new paragraph.<br> Please use this sparingly. Please do not start a [[link]] or ''italics'' or '''bold''' on one line and close it on the next.
You should "sign" your comments on talk pages:
You should "sign" your comments on talk pages: <br> - Three tildes gives your signature: ~~~ <br> - Four tildes give your signature plus date/time: ~~~~ <br> - Five tildes gives the date/time alone: ~~~~~ <br> [edit]
You can use some
HTML tags too. For a list of HTML tags that are allowed, see HTML in wikitext. However, you should avoid HTML in favor of Wiki markup whenever possible.
What it looks like What you type
Put text in a typewriter font. The same font is generally used for computer code. Strike out or underline text, or write it in small caps.
Put text in a <tt>typewriter font</tt>. The same font is generally used for <code> computer code</code>. <strike>Strike out</strike> or <u>underline</u> text, or write it <span style= "font-variant:small-caps"> in small caps</span>.
Superscripts and subscripts: X², H₂O. Centered text. The blockquote command will indent both margins when needed instead of the left margin only as the colon does.
Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O <center>Centered text</center> <blockquote> The '''blockquote''' command will indent both margins when needed instead of the left margin only as the colon does. </blockquote>
Invisible comments to editors (<!-- -->) only appear while editing the page.
Invisible comments to editors (<!-- -->) only appear while editing the page. <!-- Note to editors: blah blah blah. -->

Organizing your writing
What it looks like What you type
Section headings

Headings organize your writing into sections. The Wiki software can automatically generate a table of contents from them.

Subsection

Using more "equals" (=) signs creates a subsection.

A smaller subsection

Don't skip levels, like from two to four equals signs.

Start with 2 equals signs, not 1, because 1 creates H1 tags which should be reserved for the page title.
== Section headings == ''Headings'' organize your writing into sections. The Wiki software can automatically generate a table of contents from them. === Subsection === Using more "equals" (=) signs creates a subsection. ==== A smaller subsection ==== Don't skip levels, like from two to four equals signs. Start with 2 equals signs not 1 because 1 creates H1 tags which should be reserved for page title.
Unordered lists are easy to do: start every line with a star; more stars indicate a deeper level. A newline in a list marks the end of the list. Of course you can start again.
* ''Unordered lists'' are easy to do: ** Start every line with a star. *** More stars indicate a deeper level. *: Previous item continues. ** A newline * in a list marks the end of the list. * Of course you can start again.
Numbered lists are very organized and easy to follow. A newline marks the end of the list, and new numbering starts with 1.
# ''Numbered lists'' are: ## Very organized ## Easy to follow A newline marks the end of the list. # New numbering starts with 1.
Here's a definition list:
Word: Definition of the word
A longer phrase needing definition: Phrase defined
A word: Which has a definition; Also a second one; And even a third
Begin with a semicolon. One item per line; a newline can appear before the colon, but using a space before the colon improves parsing.
Here's a ''definition list'': ; Word : Definition of the word ; A longer phrase needing definition : Phrase defined ; A word : Which has a definition : Also a second one : And even a third Begin with a semicolon. One item per line; a newline can appear before the colon, but using a space before the colon improves parsing. * You can even do mixed lists *# and nest them *# inside each other *#* or break lines<br>in lists. *#; definition lists *#: can be *#:; nested : too
A colon (:) indents a line or paragraph. A newline starts a new paragraph. Often used for discussion on talk pages. We use 1 colon to indent once, 2 colons to indent twice, 3 colons to indent 3 times, and so on.
: A colon (:) indents a line or paragraph. A newline starts a new paragraph. <br> Often used for discussion on talk pages. : We use 1 colon to indent once. :: We use 2 colons to indent twice. ::: 3 colons to indent 3 times, and so on.
You can make horizontal dividing lines (----) to separate text.
But you should usually use sections instead, so that they go in the table of contents.
You can make horizontal dividing lines (----) to separate text. ---- But you should usually use sections instead, so that they go in the table of contents.
You can add footnotes to sentences using the ref tag -- this is especially good for citing a source.
There are over six billion people in the world.[1]
References: [1] CIA World Factbook, 2006.
For details, see Wikipedia:Footnotes and Help:Footnotes.
You can add footnotes to sentences using the ''ref'' tag -- this is especially good for citing a source. :There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref> References: <references/> For details, see [[Wikipedia:Footnotes]] and [[Help:Footnotes]].
See also Wikipedia:Picture tutorial#Forcing a break (not just for pictures).
Links
You will often want to make clickable
links to other pages.
What it looks like What you type
Here's a link to a page named Official position. You can even say Official position and the link will show up correctly.
Here's a link to a page named [[Official position]]. You can even say [[Official position]] and the link will show up correctly.
You can put formatting around a link. Example: Wikipedia. The first letter of articles is automatically capitalized, so wikipedia goes to the same place as Wikipedia. Capitalization matters after the first letter.
You can put formatting around a link. Example: ''[[Wikipedia]]''. The ''first letter'' of articles is automatically capitalized, so [[wikipedia]] goes to the same place as [[Wikipedia]]. Capitalization matters after the first letter.
The weather in London is a page that doesn't exist yet. You could create it by clicking on the link.
[[The weather in London]] is a page that doesn't exist yet. You could create it by clicking on the link.
You can link to a page section by its title:
If multiple sections have the same title, add a number. #Example section 3 goes to the third section named "Example section".
You can link to a page section by its title: *[[List of cities by country#Morocco]]. If multiple sections have the same title, add a number. [[#Example section 3]] goes to the third section named "Example section".
You can make a link point to a different place with a piped link. Put the link target first, then the pipe character "|", then the link text.
Or you can use the "pipe trick" so that text in parentheses or text after a comma does not appear.
*[[Help:Link|About Links]] *[[List of cities by country#Morocco| Cities in Morocco]] *[[Spinning (textiles)|]] *[[Boston, Massachusetts|]]
You can make an external link just by typing a URL: http://www.nupedia.com
You can give it a title: Nupedia
Or leave the title blank: [1]
You can make an external link just by typing a URL: http://www.nupedia.com You can give it a title: [http://www.nupedia.com Nupedia] Or leave the title blank: [http://www.nupedia.com] Linking to an e-mail address works the same way: mailto:someone@example.com or [mailto:someone@example.com someone]
You can redirect the user to another page.
#REDIRECT [[Official position]]
Category links do not show up in line but instead at page bottom, and cause the page to be listed in the category.
Add an extra colon to link to a category in line without causing the page to be listed in the category:
[[Help:Category|Category links]] do not show up in line but instead at page bottom ''and cause the page to be listed in the category.'' [[Category:English documentation]] Add an extra colon to ''link'' to a category in line without causing the page to be listed in the category: [[:Category:English documentation]]
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your Preferences:
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your [[Special:Preferences|]]: * [[1969-07-20]] * [[July 20]], [[1969]] * [[20 July]] [[1969]]

Just show what I typed

See also Text formatting examples.
A few different kinds of formatting will tell the Wiki to display things as you typed them.
What it looks like What you type
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre>
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki markup and special characters: →
Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: →

Images, tables, video, and sounds
After uploading, just enter the filename, highlight it and press the "embedded image" button of the edit toolbar.
This will produce the syntax for uploading a file
[[Image:filename.png]]
This is a very quick introduction. For more information, see:
- Help:Images and other uploaded files, for how to upload files
- w:en:Wikipedia:Extended image syntax, for how to arrange images on the page
- Help:Table, for how to create a table
What it looks like What you type
A picture, including alternate text:
You can put the image in a frame with a caption:
A picture, including alternate text: [[Image:Wiki.png|This Wiki's logo]] The image in a frame with a caption: [[Image:Wiki.png|frame|This Wiki's logo]]
A link to Wikipedia's page for the image: Image:Wiki.png
Or a link directly to the image itself: Media:Wiki.png
A link to Wikipedia's page for the image: [[:Image:Wiki.png]] Or a link directly to the image itself: [[Media:Wiki.png]]
Use
Use '''media:''' links to link directly to sounds or videos: [[media:Sg_mrob.ogg|A sound file]]
Embedding a video is described at [2]
This is
a table
{| border="1" cellspacing="0" cellpadding="5" align="center" ! This ! is |- | a | table |- |}

Mathematical formulas
What it looks like What you type
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>

Templates

Templates are segments of Wiki markup that are meant to be copied automatically ("transcluded") into a page. You add them by putting the template's name in {{double braces}}. It is also possible to transclude other pages by using {{:colon and double braces}}.
Some templates take
parameters, as well, which you separate with the pipe character.
What it looks like What you type {{Transclusion demo}} {{Help:Transclusion Demo}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS:
Go to this page to see the H:title template itself: Template:Tl
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS: {{H:title|This is the hover text| Hover your mouse over this text}} Go to this page to see the H:title template itself: {{tl|H:title}}
|
Pressure coefficient, abbreviated as \(C_p\), is a dimensionless number describing the relative pressure throughout a flow field in fluid mechanics.
Pressure Coefficient FORMULA
\(\large{ C_p = \frac { p \;-\; p_{\infty} } { \frac {1}{2} \; \rho_{\infty} \; v_{\infty}^2 } }\)
Where:

\(\large{ C_p }\) = pressure coefficient

\(\large{ p }\) = pressure

\(\large{ p_{\infty} }\) = free stream pressure

\(\large{ \rho _{\infty} }\) (Greek symbol rho) = free stream density

\(\large{ v_{\infty} }\) = free stream velocity
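A direct transcription of the formula into code (a minimal sketch; consistent units are assumed, and the variable names are mine):

```python
def pressure_coefficient(p, p_inf, rho_inf, v_inf):
    """Cp = (p - p_inf) / (0.5 * rho_inf * v_inf**2), dimensionless."""
    return (p - p_inf) / (0.5 * rho_inf * v_inf**2)

# Example: at a stagnation point in incompressible flow, p - p_inf equals
# the dynamic pressure, so Cp = 1.
q = 0.5 * 1.225 * 31.3**2                     # dynamic pressure, Pa
print(pressure_coefficient(101_325.0 + q, 101_325.0, 1.225, 31.3))  # 1.0
```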
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word document editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because, but I don't use it ;-) but have helped enough people with things over the years; these days I'd probably convert to HTML with latexml or tex4ht, then import the HTML into Word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}, make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes; the thing is you just never know, you may see a simple article class document that uses no hard-looking packages, then get halfway through and find \makeatletter and several hundred lines of tricky TeX macros copied from this site that are overwriting LaTeX format internals.
|
Crossing the Line
This post continues our series on developing statistical models to explore the arcane relationship between UFO sightings and population. The previous post is available here: Bayes vs. the Invaders! Part One: The 37th Parallel.
The simple linear model developed in the previous post is far from satisfying. It makes many unsupportable assumptions about the data and the form of the residual errors from the model. Most obviously, it relies on an underlying Gaussian (or normal) distribution for its understanding of the data. For our count data, some basic features of the Gaussian are inappropriate.
Most notably:
- a Gaussian distribution is continuous whilst counts are discrete — you can’t have 2.3 UFO sightings in a given day;
- the Gaussian can produce negative values, which are impossible when dealing with counts — you can’t have a negative number of UFO sightings;
- the Gaussian is symmetrical around its mean value whereas count data is typically skewed.
Moving from the safety and comfort of basic linear regression, then, we will delve into the madness and chaos of generalized linear models that allow us to choose from a range of distributions to describe the relationship between state population and counts of UFO sightings.
Basic Models
We will be working in a Bayesian framework, in which we assign a
prior distribution to each parameter that allows, and requires, us to express some prior knowledge about the parameters of interest. These priors are the initial starting points for parameters, from which the model moves towards the underlying values as it learns from the data. Choice of priors can have significant effects not only on the outputs of the model, but also on its ability to function effectively; as such, it is an important, but also arcane and subtle, aspect of the Bayesian approach 1.
Practically speaking, a simple linear regression can be expressed in the following form:
$$y \sim \mathcal{N}(\mu, \sigma)$$
(Read as “\(y\)
is drawn from a normal distribution with mean \(\mu\) and standard deviation \(\sigma\)”).
In the above expression, the model relies on a Gaussian, or
normal likelihood (\(\mathcal{N}\)) to describe the data — making assertions regarding how we believe the underlying data was generated. The Gaussian distribution is parameterised by a location parameter (\(\mu\)) and a standard deviation (\(\sigma\)).
If we were uninterested in prediction, we could describe the
shape of the distribution of counts (\(y\)) without a predictor variable. In this approach, we could specify our model by providing priors for \(\mu\) and \(\sigma\) that express a level of belief in their likely values:
$$\begin{eqnarray}
y &\sim& \mathcal{N}(\mu, \sigma) \\ \mu &\sim& \mathcal{N}(0, 1) \\ \sigma &\sim& \mathbf{HalfCauchy}(2) \end{eqnarray}$$
This provides an initial belief as to the likely shape of the data that informs, via arcane computational procedures, the model of how the observed data approaches the underlying truth
2.
This model is less than interesting, however. It simply defines a range of possible Gaussian distributions without unveiling the horror of the underlying relationships between unsuspecting terrestrial inhabitants and anomalous events.
To construct such a model, relating a
predictor to a response, we express those relationships as follows:
$$\begin{eqnarray}
y &\sim& \mathcal{N}(\mu, \sigma) \\ \mu &=& \alpha + \beta x \\ \alpha &\sim& \mathcal{N}(0, 1) \\ \beta &\sim& \mathcal{N}(0, 1) \\ \sigma &\sim& \mathbf{HalfCauchy}(1) \end{eqnarray}$$
In this model, the parameters of the likelihood are now probability distributions themselves. From a traditional linear model, we now have an intercept (\(\alpha\)) and a slope (\(\beta\)) that relates the change in the predictor variable (\(x\)) to the change in the response. Each of these parameters is fitted according to the observed dataset.
A New Model
We can now break free from the bonds of pure linear regression and consider other distributions that more naturally describe data of the form that we are considering. The awful power of GLMs is that they can use an underlying linear model, such \(\alpha + \beta x\), as parameters to a range of likelihoods beyond the Guassian. This allows the natural description of a vast and esoteric menagerie of possible data.
For count data the default likelihood is the Poisson distribution, whose sole parameter is the
arrival rate (\(\lambda\)). While somewhat restricted, as we will see, we can begin our descent into madness by fitting a Poisson-based model to our observed data.
Stan
To fit a model, we will use the Stan probabilistic programming language. Stan allows us to write a program defining a statistical model, which can then be fit to the data using Markov-chain Monte Carlo (MCMC) methods. In effect, at a very abstract level, this approach uses random sampling to discover the values of the parameters that best fit the observed data
3.
Stan lets us specify models in the form given above, along with ways to pass in and define the nature and form of the data. This code can then be called from R using the
rstan package.
In this, and subsequent posts, we will be using Stan code directly as both a learning and explanatory exercise. In typical usage, however, it is often more convenient to use one of two excellent R packages
brms or
rstanarm that allow for more compact and convenient specification of models, with well-specified raw Stan code generated automatically.
De Profundis
In seeking to take our first steps beyond the placid island of ignorance of the Gaussian, the Poisson distribution is the natural starting point for count data. Adapting the Gaussian model above, we can propose a predictive model for the entire population of states as follows:
$$\begin{eqnarray}
y &\sim& \mathbf{Poisson}(\lambda) \\ \lambda &=& \alpha + \beta x \\ \alpha &\sim& \mathcal{N}(0, 5) \\ \beta &\sim& \mathcal{N}(0, 5) \end{eqnarray}$$
The sole parameter of the Poisson is the
arrival rate (\(\lambda\)) that we construct here from a population-wide intercept (\(\alpha\)) and slope (\(\beta\)).
The Stan code for the above model, and associated R code to run it, is below:
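The original post embeds the Stan program and the R call as a code listing that is not reproduced here. As a hedged sketch of what it might look like (not the author's exact code; the parameters are named a and b to match the summary output shown later, and the data names are assumptions):

// Identity-link Poisson model; the sampler rejects draws with negative rates.
data {
  int<lower=1> N;             // number of observations (states)
  vector[N] pop;              // predictor: state population
  int<lower=0> sightings[N];  // response: counts of UFO sightings
}
parameters {
  real a;                     // intercept (alpha)
  real b;                     // slope (beta)
}
model {
  a ~ normal(0, 5);
  b ~ normal(0, 5);
  sightings ~ poisson(a + b * pop);
}
generated quantities {
  int y_rep[N];               // posterior predictive draws, used later
  for (n in 1:N)
    y_rep[n] = poisson_rng(fmax(a + b * pop[n], 1e-9));  // guard stray negative rates
}

From R, the model can then be fit with rstan along these lines, assuming a data frame ufo with population and sightings columns:

library(rstan)
fit_ufo_pop_poisson <- stan(
  file = "ufo_poisson.stan",
  data = list(N = nrow(ufo), pop = ufo$population, sightings = ufo$sightings),
  chains = 4, iter = 2000
)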
With this model encoded and fit, we can now peel back the layers of the procedure to see the extent to which it has endured the horror of our data.
The MCMC algorithm that underpins Stan — specifically Hamiltonian Monte Carlo (HMC) using the No U-Turn Sampler (NUTS) — attempts to find an island of stability in the space of possibilities that corresponds to the best fit to the observed data. To do so, the algorithm spawns a set of Markov chains that explore the parameter space. If the model is appropriate, and the data coherent, the set of Markov chains ends up exploring a similar, small set of possible states.
Validation
When modelling via this approach, a first check of the model’s chances of having fit correctly is to examine the so-called ‘traceplot’ that shows how well the separate Markov chains ‘mix’ — that is, converge to exploring the same area of the space
4. For the Poisson model above, the traceplot can be created using the
bayesplot library:
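The resulting plot is not reproduced here; a minimal sketch of the call, assuming the fit object from the sketch above:

library(bayesplot)
# One coloured line per Markov chain for the intercept and slope.
mcmc_trace(as.array(fit_ufo_pop_poisson), pars = c("a", "b"))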
These traceplots exhibit the characteristic insane scribbling of well-mixed chains, often referred to — in, of course, hushed whispers — as a manifestation of a hairy caterpillar; the separate lines representing each chain are clearly overlapping and exploring the same areas. If, by contrast, the lines were largely separated or did not show the same space, there would be reason to believe that our model had become lost and unable to find a coherent voice amongst the myriad babbling murmurs of the data.
A second check on the sanity of the modelling process is to examine the output of the model itself to show the value of the fitted parameters of interest, and some diagnostic information:
fit_ufo_pop_poisson %>%
    summary(pars = c("a", "b")) %>%
    extract2("summary")

        mean      se_mean           sd       2.5%        25%        50%        75%      97.5%    n_eff     Rhat
a  5.5199115 7.162684e-03 2.701118e-01 4.99950988 5.33593127 5.51805161 5.70399617 6.05461185 1422.118 1.001581
b  0.0107647 1.728273e-06 6.574476e-05 0.01063562 0.01072192 0.01076418 0.01080788 0.01089496 1447.097 1.001551
For assessment of successful model fit, the Rhat value represents the extent to which the various Markov chains exploring the space, of which there are four by default in Stan, are consistent with each other. As a rule of thumb, a value of
Rhat > 1.1 indicates that the model has not converged appropriately and may require a longer set of random sampling iterations, or an improved model. Here, the values of
Rhat are close to the ideal value of 1.
As a final step, we should examine how well our model can reproduce the shape of the original data. Models aim to be eerily lifelike parodies of the truth; in a Bayesian framework, and in the Stan language, we can build into the model the ability to draw random samples from the
posterior predictive distribution — the set of parameters that the model has learnt from the data — to create new possible values of the outcomes based on the observed inputs. This process can be repeated many times to produce a multiplicity of possible outcomes drawn from model, which we can then visualize to see graphically how well our model fits the observed data.
In the Stan code above, this is created in the
generated quantities block. When using more convenient libraries such as
brms or
rstanarm, draws from the posterior predictive distribution can be obtained more simply after the model has been fit through a range of helper functions. Here, we undertake the process manually.
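A hedged sketch of that manual step, reusing the y_rep draws from the generated quantities block sketched earlier (names assumed; colours may differ from the description below):

# Overlay densities of replicated count vectors on the observed counts.
y_rep <- rstan::extract(fit_ufo_pop_poisson, pars = "y_rep")$y_rep
bayesplot::ppc_dens_overlay(y = ufo$sightings, yrep = y_rep[1:50, ])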
We can see, then, how well the Poisson distribution, informed by our selection of priors, has shaped itself to the underlying data.
In the diagram above, the yellow line shows the densities of count values; the blue lines show a sample of twisted mockeries spawned by our piscine approximations. As can be seen, the model has roughly captured the shape of the true distribution, but has notable dissimilarities with the original data.
To appreciate the full horror of what we have wrought, of course, we can plot the predictions of the model against the real data.
This shows an extremely similar line of best fit to that produced from the basic Gaussian model in the previous post. Indeed, a side-by-side comparison shows that the 95% credible intervals around the line are wider in this Poisson-based model
5. This most likely reflects the inflexibility of the Poisson distribution given the nature of our data, something that we will discuss and rectify in the next post.
Unsettling Distributions
In this post we have opened our eyes to the weirdly non-linear possibilities of generalised linear models; sealed and bound this concept within the wild philosophy of Bayesian inference; and unleashed the horrifying capacities of Markov Chain Monte Carlo methods and their manifestation in the Stan language.
Applying the Poisson distribution to our records of extraterrestrial sightings, we have seen that we can, to some extent, create a mindless Golem that imperfectly mimics the original data. In the next post, we will delve more deeply into the esoteric possibilities of other distributions for count data, explore ways in which to account for arcane relationships across and between per-state observations, and show how we can compare the effectiveness of different models to select the final glimpse of dread truth that we inadvisably seek.
Footnotes
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, so the velocity would be $v(t) = 3t^2-12t+9$, but I don't know what to do after that. How could I find the intervals?
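(A hedged sketch of the usual next step, which the question leaves open: the particle moves left exactly when $v(t) < 0$, and the quadratic factors cleanly,
$$v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3) < 0 \iff 1 < t < 3,$$
so the particle moves to the left on the interval $(1, 3)$.)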
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
Going with Patrick87's hash idea, here's a practical construction that almost meets your requirements — the probability of falsely mistaking a new value for an old one is not quite zero, but can be easily made negligibly small.Choose the parameters $n$ and $k$; practical values might be, say, $n = 128$ and $k = 16$. Let $H$ be a secure cryptographic ...
The second paragraph of the Wikipedia article on Bloom filters says the following, with a citation to Bloom's original 1970 paper.Bloom proposed the technique for applications where the amount of source data would require an impracticably large hash area in memory if "conventional" error-free hashing techniques were applied. He gave the example of a ...
No, it is not possible to have an efficient data structure with these properties, if you want to have a guarantee that the data structure will say "new" if it is really new (it'll never, ever say "not new" if it is in fact new; no false negatives allowed). Any such data structure will need to keep all of the data to ever respond "not new". See pents90's ...
I couldn't find the source, but the idea is simple: use an additional Bloom filter to represent the set of deletions. As this is a very simple solution, it might be considered folklore. Anyway, I found a short reference to this solution in the following paper (Theory and Practice of Bloom Filters for Distributed Systems): http://www.dca.fee.unicamp....
I think your reasoning is in principle correct. Perfect hashing is an alternative to Bloom filters. However, classical dynamic perfect hashing is rather a theoretical result than a practical solution. Cuckoo hashing is probably the more "reasonable" alternative.Note that both dynamic perfect hashing and standard cuckoo hashing performance is only expected ...
Using the formula from wikipedia for Bloom filter false positives, your proposal would have a false positive probability of about 0.00726%. This assumes, among other things, that good hash functions are used. The formula is:$(1 - (1 - [1/m])^{kn})^k$where $m$ is the number of bits in the filter, $k$ is the number of hash functions and $n$ is the number ...
Given that you want to insert $n$ words into the Bloom filter, and you want a false positive probability of $p$, the wikipedia page on Bloom filters gives the following formulas for choosing $m$, the number of bits in your table and $k$, the number of hash functions that you are going to use. They give$m = - \frac{n \ln p}{(\ln 2)^2}$and$$k = \frac{m}{...
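A small Python sketch of those sizing formulas (the function name is mine, not from the answer):

import math

def bloom_parameters(n, p):
    # Bit count m and hash count k for n items at false-positive rate p,
    # per the standard formulas above.
    m = -n * math.log(p) / (math.log(2) ** 2)
    k = (m / n) * math.log(2)
    return math.ceil(m), round(k)

print(bloom_parameters(10**6, 0.01))  # (9585059, 7): about 1.2 MB and 7 hashes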
What about just a hash table? When you see a new item, check the hash table. If the item's spot is empty, return "new" and add the item. Otherwise, check to see if the item's spot is occupied by the item. If so, return "not new". If the spot is occupied by some other item, return "new" and overwrite the spot with the new item.You'll definitely always ...
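A sketch of that scheme in Python (class and method names are mine):

class ApproxSeenSet:
    # One stored item per slot; collisions overwrite the previous occupant.
    # Never claims "not new" for a genuinely new item, but may claim "new"
    # for an old item whose slot has since been overwritten.
    def __init__(self, slots=1 << 20):
        self.table = [None] * slots

    def is_new(self, item):
        i = hash(item) % len(self.table)
        if self.table[i] == item:
            return False          # definitely seen before
        self.table[i] = item      # slot was empty or held another item
        return True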
A Bloom filter never gives a false negative. This is the property that makes them desirable in many situations. They are appropriate for any situation where the existence of false positives is a potential performance issue, but not a correctness issue.One obvious example is where the filter is used to avoid a more expensive check. Let's try to quantify ...
This balances two considerations:The more hash functions you have, the more bits will be set in the Bloom filter when you create the filter (because each element you insert causes up to $k$ bits to be set in the filter). The more bits that are set, the higher the risk of false positives.The more hash functions you have, the less likely that one of them ...
In the case where the universe of items is finite, then yes: just use a bloom filter that records which elements are out of the set, rather than in the set. (I.e., use a bloom filter that represents the complement of the set of interest.)A place where this is useful is to allow a limited form of deletion. You keep two bloom filters. They start out empty....
You have asked two questions. I will answer them one by one.Choosing the seed. The usual approach here is to choose the seeds randomly once and for all, and to hard-code them. If the hash family is reasonable, then all small sets of seeds should look "the same". I don't know the Murmur3 family, but it's probably reasonable enough.Note that linear ...
Let me simplify your third step: count the number of elements in your multiset which are not in the map, and add to it the number of elements in the map.Suppose that your elements are $x_1,\ldots,x_n$. For a given hash $h$, let $x_{i_1},\ldots,x_{i_\ell}$ be the elements hashing to $h$ (in order). If $\ell = 1$, then the elements won't be added to the map ...
This is how I use a Bloom filter: suppose that a full dictionary check is 10x slower than checking a bit in the Bloom filter. Then if you make a Bloom filter having $N$ bits per dictionary word, the average time required for one check would be $1+10/N$. For example, with $N=8$ (i.e., use one byte in the Bloom filter per dictionary word), the avg. time will be 2.25 ...
Let's analyze how many hash bits you need in your new scheme versus a Bloom filter.First of all, we need to agree about terminology. I will use $q$ to represent the probability of a false positive.For a Bloom filter the design problem of choosing $m$ and $k$ given that you want to hold $n$ elements with false positive rate $q$ is solved by $k = -\lg_2 q$...
This is explained in Wikipedia. Given $n,m$, the false positive probability is$$\left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^k.$$This is the quantity we want to minimize. While the exact expression is hard to minimize exactly, we can use the approximation$$\left(1 - \left(1 - \frac{1}{m}\right)^{kn}\right)^k \approx (1-e^{-kn/m})^k,$$which is ...
If the query results are mostly false, the answer will be returned in $O(1)$ on average. (In traditional Bloom filters, negative results are faster than positive ones.)This might be slow, however, since the lookups are random and have bad cache behavior. There are a few ways to fix this.I'd suggest:Use a blocked bloom filter (from Cache-, Hash- and ...
You choose the size of your Bloom filter according to the expected occupancy. If your Bloom filter is full of 1s then it's too small for the number of words you're putting it. Words in a document tend to repeat, so when estimating how big the filter should be, count the number of unique words in your document.
I think the Bloom filter gives you something the perfect hash function does not - it can test membership.The PHFs I know return some answer for any key you apply them to. If the key you supplied is not in your hash set, some value is still supplied. This is fine if you are storing all of the keys that are in your set somewhere and the PHF just gives a ...
A bit is set to 1 if it has been hit. It has $k|S|$ chances of being hit, and each time it is hit with probability $1/n$. Using a union bound, the probability that a bit is hit is at most $k|S|/n$.We can improve on this bound by calculating instead the probability that a bit is 0, that is, that it is not hit. For a bit not to be hit, it has to be missed ...
This is related to the notion of "fast path, slow path" from computer systems. In systems, one common optimization method is: if there is a common case that can be handled fast and is common, first test whether the input falls into that case and if so solve it and return immediately; otherwise, fall back to the complex computation. For instance, the slow ...
Build a $k \times k$ table $ans$ of answers, storing in each entry the smallest (according to some total order) element in that intersection or a sentinel value (e.g. -1) if the intersection is empty, and also maintain a mapping from each element to the set of all sets that contains it (e.g. using a hashtable of hashtables).When you add an element $x$ to a ...
A Bloom Filter will typically be used to eliminate mismatches quickly, since it produces true negatives, but some false positives. A join (specifically, an equi-join) which is expected to have some non-matching keys can be sped up by pre-processing the valid keys from one table into a Bloom filter. The join operator tests each key from the other table ...
A Bloom filter doesn't have "buckets".You can make a Bloom filter of any size you want. The smaller it is, the higher the false positive rate.This should be covered in any good introduction to Bloom filters, of which there are many.
In the same way that 0.5 is the error of a coin flip, 0.166666 is the error of a die throw. There's nothing sacred in the probability 0.5.A Bloom filter gives you a guarantee that a coin flip cannot – it has no false negatives. Even if its false positive rate is 0.9, it still gives you some information.Whether or not a high false positive rate is ...
Yes, it is possible. Use $$h_2(x) = \text{hash}(\text{signature}) - h_1(x) \bmod n.$$ The theory behind this: if $c$ is a constant, the function $$f(t) = c - t \bmod n$$ is an involution for any $c$ and any $n$, since $$f(f(t)) = c - (c-t) = t \pmod n.$$ Therefore, we can use the hash of the signature as the constant $c$. This gives you a scheme that ...
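A quick numeric check of the involution claim (values are arbitrary):

n = 97
c = 1234 % n                     # stands in for the hash of the signature

def f(t):
    return (c - t) % n

# f undoes itself, so h1 and h2 = f(h1) are interchangeable probe positions.
assert all(f(f(t)) == t for t in range(n))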
Let $S,T$ be two sets of size $n$. Suppose we hash each to a $m$-bit Bloom filter, using $k$ hash functions; let $x_S$ be the $m$-bit vector corresponding to $S$, and $x_T$ the $m$-bit vector corresponding to $T$.If $S,T$ agree in a $p$ fraction of entries (i.e., $|S \cap T|=pn$), thenthe expected value of the Hamming distance between these two bit-...
I just want to add in here, that if you are in the fortunate situation, that you know all of the values $v_i$ that you might possibly see; then you can use a counting bloom filter.An example might be ip-addresses, and you want to know every time one appears that you have never seen before. But it is still a finite set, so you know what you can expect.The ...
Consider Bloom filters in which $k$ hash functions are used. If we look up a value which is in the filter, then we always conclude that it is indeed in the filter. Things are more complicated when we look up a value which is not in the table; we could erroneously conclude that it were in the table.Suppose the filter is of size $n$, and $m$ values were put ...
|
In mathematics, the fictitious domain method is a method to find the solution of a partial differential equation on a complicated domain $D$, by substituting a given problem posed on the domain $D$ with a new problem posed on a simple domain $\Omega$ containing $D$.

General formulation

Assume that in some area $D \subset \mathbb{R}^n$ we want to find a solution $u(x)$ of the equation

$$Lu = -\phi(x), \quad x = (x_1, x_2, \dots, x_n) \in D$$

with boundary conditions

$$lu = g(x), \quad x \in \partial D$$

The basic idea of the fictitious domain method is to substitute the given problem posed on the domain $D$ with a new problem posed on a simply shaped domain $\Omega$ containing $D$ ($D \subset \Omega$). For example, we can choose an $n$-dimensional parallelotope as $\Omega$.

The problem in the extended domain $\Omega$ for the new solution $u_{\epsilon}(x)$:

$$L_{\epsilon} u_{\epsilon} = -\phi^{\epsilon}(x), \quad x = (x_1, x_2, \dots, x_n) \in \Omega$$
$$l_{\epsilon} u_{\epsilon} = g^{\epsilon}(x), \quad x \in \partial \Omega$$

It is necessary to pose the problem in the extended area so that the following condition is fulfilled:

$$u_{\epsilon}(x) \xrightarrow[\epsilon \to 0]{} u(x), \quad x \in D$$

Simple example, 1-dimensional problem

$$\frac{d^2u}{dx^2} = -2, \quad 0<x<1 \quad (1)$$
$$u(0) = 0, \quad u(1) = 0$$

Prolongation by leading coefficients

Let $u_{\epsilon}(x)$ be the solution of the problem:

$$\frac{d}{dx}\,k^{\epsilon}(x)\,\frac{du_{\epsilon}}{dx} = -\phi^{\epsilon}(x), \quad 0<x<2 \quad (2)$$

The discontinuous coefficient $k^{\epsilon}(x)$ and the right-hand side of the previous equation are obtained from the expressions:

$$k^{\epsilon}(x) = \begin{cases} 1, & 0<x<1 \\ \frac{1}{\epsilon^2}, & 1<x<2 \end{cases} \qquad \phi^{\epsilon}(x) = \begin{cases} 2, & 0<x<1 \\ 2c_0, & 1<x<2 \end{cases} \quad (3)$$

Boundary conditions:

$$u_{\epsilon}(0) = 0, \quad u_{\epsilon}(2) = 0$$

Connection conditions at the point $x = 1$:

$$[u_{\epsilon}] = 0, \quad \left[ k^{\epsilon}(x)\frac{du_{\epsilon}}{dx} \right] = 0$$

where $[\cdot]$ means:

$$[p(x)] = p(x+0) - p(x-0)$$

Equation (1) has an analytical solution, therefore we can easily obtain the error:

$$u(x) - u_{\epsilon}(x) = O(\epsilon^2), \quad 0<x<1$$
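The $O(\epsilon^2)$ rate is easy to check numerically. The following standalone Python sketch (mine, not from the article) discretizes problem (2) with a conservative finite-difference scheme, takes $c_0 = 0$, and compares $u_{\epsilon}$ on $(0,1)$ with the exact solution $u(x) = x(1-x)$ of problem (1):

import numpy as np

def solve_extended(eps, N=800):
    # Conservative finite differences for (k_eps u')' = -phi_eps on (0,2),
    # u(0) = u(2) = 0, prolongation by leading coefficients with c0 = 0.
    h = 2.0 / N
    x = np.linspace(0.0, 2.0, N + 1)
    mid = N // 2                                   # grid index of the interface x = 1
    xm = 0.5 * (x[:-1] + x[1:])                    # cell midpoints, where k is sampled
    k = np.where(xm < x[mid], 1.0, 1.0 / eps**2)   # 1 on (0,1), 1/eps^2 on (1,2)
    phi = np.where(np.arange(N + 1) < mid, 2.0, 0.0)
    phi[mid] = 1.0                                 # average of the one-sided values at x = 1
    n = N - 1                                      # number of interior unknowns
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(k[i] + k[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = k[i] / h**2
        if i < n - 1:
            A[i, i + 1] = k[i + 1] / h**2
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, -phi[1:-1])
    return x, u

for eps in (0.1, 0.05):
    x, u = solve_extended(eps)
    mid = len(x) // 2
    exact = x[:mid + 1] * (1.0 - x[:mid + 1])      # solution of (1) on [0, 1]
    print(eps, np.abs(u[:mid + 1] - exact).max())

Halving $\epsilon$ should cut the maximum error by roughly a factor of four, consistent with the $O(\epsilon^2)$ estimate.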
Prolongation by lower-order coefficients

Let $u_{\epsilon}(x)$ be the solution of the problem:

$$\frac{d^2u_{\epsilon}}{dx^2} - c^{\epsilon}(x)u_{\epsilon} = -\phi^{\epsilon}(x), \quad 0<x<2 \quad (4)$$

where $\phi^{\epsilon}(x)$ is taken the same as in (3), and the expression for $c^{\epsilon}(x)$ is

$$c^{\epsilon}(x) = \begin{cases} 0, & 0<x<1 \\ \frac{1}{\epsilon^2}, & 1<x<2 \end{cases}$$

The boundary conditions for equation (4) are the same as for (2).

Connection conditions at the point $x = 1$:

$$[u_{\epsilon}] = 0, \quad \left[\frac{du_{\epsilon}}{dx}\right] = 0$$

Error:

$$u(x) - u_{\epsilon}(x) = O(\epsilon), \quad 0<x<1$$

Literature
- P.N. Vabishchevich, The Method of Fictitious Domains in Problems of Mathematical Physics, Izdatelstvo Moskovskogo Universiteta, Moskva, 1991.
- Smagulov S., Fictitious Domain Method for the Navier–Stokes equation, Preprint CC SA USSR, 68, 1979.
- Bugrov A.N., Smagulov S., Fictitious Domain Method for the Navier–Stokes equation, Mathematical model of fluid flow, Novosibirsk, 1978, pp. 79–90.
|
I have a problem using
\Aboxed in the following
align*-environment:
\begin{align*}\omega(\sigma_{ij}) &\equiv 1 - \left(\frac{27J_3}{2\sigma_e^3}\right)^2 \\& = \omega(L) = 1 - \frac{\left( 9L - L^3\right)^2}{\left(L^2+3\right)^3}. \end{align*}
I want to use
\Aboxed to put the last part of the final equation in a box, i.e.
\begin{align*}\omega(\sigma_{ij}) &\equiv 1 - \left(\frac{27J_3}{2\sigma_e^3}\right)^2 \\& = \Aboxed{ \omega(L) = 1 - \frac{\left( 9L - L^3\right)^2}{\left(L^2+3\right)^3}. }\end{align*}
This gives me the error message in the title. However, using
\Aboxed in the upper equation of the
align* doesn't cause any problems at all. I've also used
\Aboxed in the last equation of other
align*'s and
align's in my document. What causes the error in this case?
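(Hedged, since the exact error text is not quoted here: the usual cause is that mathtools' \Aboxed must receive the alignment character & inside its braces rather than sit to the right of it. A likely fix is

\begin{align*}
\omega(\sigma_{ij}) &\equiv 1 - \left(\frac{27J_3}{2\sigma_e^3}\right)^2 \\
\Aboxed{ &= \omega(L) = 1 - \frac{\left( 9L - L^3\right)^2}{\left(L^2+3\right)^3}. }
\end{align*}

which keeps the & within the macro's argument.)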
|
We talked about LaTeX before.
This full guide on LaTeX in WordPress explains the JavaScript, self-hosted, and pre-rendered solutions for users seeking math, 2D, or 3D graphs. To self-host the original LaTeX package, you need at least a virtual server or cloud server. Depending on the amount of workload, that self-hosted LaTeX option can perform poorly. Since it is assumed that most WordPress users know very little about the server backend, pre-rendered WordPress plugins are what is most commonly offered for showing LaTeX in WordPress.
LaTeX in WordPress Options : Js, Self Hosted & Rendered
By default,
wordpress.com supports LaTeX via their pre-rendering on their servers. For self-hosted
wordpress.org software users like us, JetPack Plugin enables that
wordpress.com like feature.
wordpress.com users have this official guide :
https://en.support.wordpress.com/latex/
That is closest to what the JetPack Plugin also does.
WP LaTeX is the WordPress Plugin which offers 2 options – using
wordpress.com's LaTeX server or using your server's installation of LaTeX. We discuss it later in a separate section. Let us see the other options first.
There is another plugin named
WP QuickLaTeX. This plugin pre-render on
QuickLaTeX.com‘s LaTeX server. It supports TikZ and pgfplots graphics packages.
QuickLaTeX.com is linkware; it is free for personal and non-commercial use in exchange for a backlink.
Now, among the JavaScript options, the most advanced and beautiful at this moment is KaTeX; MathJax is the older option. KaTeX is a separate project:
https://github.com/Khan/KaTeX
WP-KaTeX is the corresponding plugin, and KaTeX, in short, is a JavaScript library for TeX math rendering on the web. So, including just the JavaScript will do the trick. Its usage is limited to math formulas. It is really beautiful; you can see their GitHub page:
https://khan.github.io/KaTeX/
They used this as example :
f(x) = \int_{-\infty}^\infty
\hat f(\xi)\,e^{2 \pi i \xi x}
\,d\xi
That I can write in this way :
f(x)%20=%20\int_{-\infty}^\infty\hat%20f(\xi)\,e^{2%20\pi%20i%20\xi%20x}\,d\xi
and append it to this URL:
https://chart.googleapis.com/chart?cht=tx&chl=
The result will be this. Just copy-paste this URL in your browser to see the same thing:
https://chart.googleapis.com/chart?cht=tx&chl=f(x)%20=%20\int_{-\infty}^\infty\hat%20f(\xi)\,e^{2%20\pi%20i%20\xi%20x}\,d\xi
Funny. That is an old Google API which can convert the formula to an image. This API is deprecated; the Google Chart API can do that now. However, that is a different topic.
LaTeX in WordPress : Installing LaTeX Packages on Ubuntu Server for WordPress Plugin
We told you that
WP LaTeX is the WordPress plugin which offers using your server's installation of LaTeX. We can install the required packages with this command:
sudo apt-get install texlive-latex-base texlive-latex-recommended dvipng imagemagick texlive-latex-extra texlive-science texlive-fonts-recommended texlive-fonts-extra
I worked out that combination by deduction from here:
http://www.tug.org/texlive/
Be careful about self-hosting LaTeX packages; the server needs good RAM and CPU.
|
This exercise is inspired by exercises 83 and 100 of Chapter 10 in Giancoli's book
A uniform disk ($R = 0.85 m$; $M =21.0 kg$) has a rope wrapped around it. You apply a constant force $F = 35 N$ to unwrap it (at the point of contact ground-disk) while walking 5.5 m. Ignore friction.
a) How much has the center of mass of the disk moved? Explain.
Now derive a formula that relates the distance you have walked and how much rope has been unwrapped when:
b) You don't assume rolling without slipping.
c) You assume rolling without slipping.
a) I have two different answers here, which I guess one is wrong:
a.1) Here there's only one force to consider in the direction of motion: $\vec F$. Thus the center of mass should also move forward.

a.2) You are unwinding the rope out of the spool and thus exerting a torque $FR$ (I am taking counterclockwise as positive); the net force exerted on the CM is zero and thus the wheel only spins and the center of mass doesn't move.
The issue here is that my intuition tells me that there should only be spinning. I've been testing the idea with a paper roll and its CM does move forward, but I think this is due to the roll not being perfectly cylindrical; if the unwrapping paper were touching only at a point on icy ground, the roll's CM shouldn't move.
'What's your reasoning to assert that?'
Tangential velocity points forwards at distance $R$ below the disk's CM but this same tangential velocity points backwards at distance $R$ above the disk's CM and thus translational motion is cancelled out. Actually, we note that opposite points on the rim have opposite tangential velocities (assuming there's no friction so that the tangential velocity is constant).
My book assumes a.1) is OK. I say a.2) is OK. Who's right then?
b) We can calculate the unwrapped distance noting that the arc length is related to the radius by the angle (radian) enclosed:
$$\Delta s = R \Delta \theta$$
Assuming constant acceleration and zero initial angular velocity:
$$\Delta \theta = 1/2 \alpha t^2 = 1/2 \frac{\omega}{t} t^2 = 1/2 \omega t$$
By Newton's second Law (rotation) we can solve for $\omega$ and then plug it into the above equation:
$$\tau = FR = I \alpha = I \frac{\omega}{t} = 1/2 M R^2 \frac{\omega}{t}$$
$$\omega = \frac{2F}{M R}t$$
Let's plug it into the other equation.
$$\Delta \theta = \frac{F}{M R}t^2$$
Mmm we still have to eliminate $t$.
Assuming constant acceleration we get by the kinematic equation (note I am using the time $t$ you take to walk 5.5 m so that we know how much rope has been unwrapped in that time):
$$t^2 = \frac{2M\Delta x}{F}$$
Plugging it into $\Delta \theta$ equation:
$$\Delta \theta = \frac{2\Delta x}{R}$$
Plugging it into $\Delta s$ equation we get the equation we wanted:
$$\Delta s = 2 \Delta x$$
If we calculate both $v$ and $\omega$ we see that $v=R\omega$ is not true so the disk doesn't roll without slipping.
c) Here $v=R\omega$ must be true. We know that if that's the case the tangential velocity must be related to the center of mass' velocity as follows:
$$2v_{cm} = v$$
Assuming that the person holding the rope goes at speed $2v_{cm}$ we get:
$$\Delta x= 2 \Delta s$$
I get reversed equations at b) and c). How can we explain that difference in both equations beyond the fact of rolling without slipping?
|
I'm trying to show:
$A\subset \mathbb{R}^n$ is open if and only if $A$ is the union of a countable collection of open balls.
"$\Leftarrow$" Let $A=\bigcup_{i=1}^{\infty} \mathbb{B_{\varepsilon_i} (x_i)}$
If $y\in A $ then $y\in \mathbb{B}_{\varepsilon_i}(x_i)$ (for some $i$)
But note that $\mathbb{B}_{\varepsilon_i-\|x_i-y\|}(y)\subset \mathbb{B}_{\varepsilon_i}(x_i)$,
then $A$ is open.
"$\Rightarrow$" Let $A$ open set. Let $\{V_{\alpha}\}$ such that $\alpha \in A$, a collection of sets such that $A\subset \bigcup_{\alpha\in A}V_{\alpha}$.
How I can get countable union?
Thanks for your help.
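(A hedged sketch of the standard trick: restrict to balls with rational data. For each $y \in A$ choose $\varepsilon > 0$ with $\mathbb{B}_{\varepsilon}(y) \subset A$, then pick $p \in \mathbb{Q}^n$ with $\|y - p\| < \varepsilon/3$ and a rational $q \in (\varepsilon/3, \varepsilon/2)$, so that $y \in \mathbb{B}_q(p) \subset \mathbb{B}_{\varepsilon}(y) \subset A$. Hence
$$A = \bigcup \left\{ \mathbb{B}_q(p) : p \in \mathbb{Q}^n,\ q \in \mathbb{Q}_{>0},\ \mathbb{B}_q(p) \subset A \right\},$$
and the right-hand side is a countable union because $\mathbb{Q}^n \times \mathbb{Q}_{>0}$ is countable.)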
|
As the name suggests, a vector directs from one distinct point to another. The difference can be seen as that between velocity and speed: speed gives us no clue about the direction in which the object is moving. Therefore, we use this formula, which will enable us to know in which direction the object is moving. In physics, a quantity with both magnitude and direction is expressed as a vector. If we say that a rock is moving at 5 meters per second and the direction is towards the west, then it is represented as a vector.
If x is the horizontal movement and y is the vertical movement, then the formula of direction is
\[\LARGE \theta =\tan^{-1}\frac{y}{x}\]
If ($x_{1}$,$y_{1}$ ) is the starting point and ends with ($x_{2}$,$y_{2}$ ), then the formula for direction is
\[\LARGE \theta =\tan^{-1}\frac{(y_{2}-y_{1})}{(x_{2}-x_{1})}\]
Question 1:
Find the direction of the vector $\overrightarrow{pq}$ whose initial point P is at (5, 2) and end point is at Q is at (4, 3)?
Solution:
Given $(x_{1}$, $y_{1})$ = (5, 2)
$(x_{2}$, $y_{2})$ = (4, 3)
According to the formula we have,
$\theta$ = $tan^{-1}$ $\frac{(y_{2} – y_{1})}{(x_{2} – x_{1})}$
$\theta$ = $tan^{-1}$ $\frac{(3-2)}{(4-5)}$
$\theta$ = $tan^{-1}(-1)$ = $-45^{\circ}$
Since the displacement $(\Delta x, \Delta y) = (-1, 1)$ lies in the second quadrant, the direction measured from the positive x-axis is $180^{\circ} - 45^{\circ} = 135^{\circ}$.
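As a numerical cross-check, math.atan2 handles the quadrant that a bare arctangent of the ratio misses:

import math

x1, y1 = 5, 2
x2, y2 = 4, 3
theta = math.degrees(math.atan2(y2 - y1, x2 - x1))
print(theta)  # 135.0: the raw arctan is -45 degrees, shifted into
              # quadrant II because the x-change is negative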
|
Assume that we have a general one-period market model consisting of d+1 assets and N states.
Using a replicating portfolio $\phi$, determine $\Pi(0;X)$, the price of a European call option, with payoff $X$, on the asset $S_1^2$ with strike price $K = 1$ given that
$$S_0 =\begin{bmatrix} 2 \\ 3\\ 1 \end{bmatrix}, S_1 = \begin{bmatrix} S_1^0\\ S_1^1\\ S_1^2 \end{bmatrix}, D = \begin{bmatrix} 1 & 2 & 3\\ 2 & 2 & 4\\ 0.8 & 1.2 & 1.6 \end{bmatrix}$$
where the columns of D represent the states for each asset and the rows of D represent the assets for each state
What I tried:
We compute that:
$$X = \begin{bmatrix} 0\\ 0.2\\ 0.6 \end{bmatrix}$$
If we solve $D'\phi = X$, we get:
$$\phi = \begin{bmatrix} 0.6\\ 0.1\\ -1 \end{bmatrix}$$
It would seem that the price of the European call option $\Pi(0;X)$ is given by the value of the replicating portfolio
$$S_0'\phi = 0.5$$
On one hand, if we were to try to see if there is arbitrage in this market by seeing if a state price vector $\psi$ exists by solving $S_0 = D \psi$, we get
$$\psi = \begin{bmatrix} 0\\ -0.5\\ 1 \end{bmatrix}$$
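Both linear solves are easy to verify numerically; a hedged sketch, not part of the original question:

import numpy as np

S0 = np.array([2.0, 3.0, 1.0])
D = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 4.0],
              [0.8, 1.2, 1.6]])
X = np.array([0.0, 0.2, 0.6])    # call payoff on S_1^2 with K = 1

phi = np.linalg.solve(D.T, X)    # replicating portfolio from D' phi = X
print(phi, S0 @ phi)             # [ 0.6  0.1 -1. ], price 0.5

psi = np.linalg.solve(D, S0)     # candidate state-price vector from S0 = D psi
print(psi)                       # [ 0.  -0.5  1. ] -- not strictly positive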
Hence there is no strictly positive state price vector $\psi$ s.t. $S_0 = D \psi$. By 'the fundamental theorem of asset pricing' (or 'the fundamental theorem of finance' or '1.3.1' here), there exists arbitrage in this market.
On the other hand the price of 0.5 seems to be confirmed by:
$$\Pi(0;X) = \beta E^{\mathbb Q}[X]$$
where $\beta = \sum_{i=1}^{3} \psi_i = 0.5$ (sum of elements of $\psi$) and $\mathbb Q$ is supposed to be the equivalent martingale measure given by $q_i = \frac{\psi_i}{\beta}$.
Thus we have
$$E^{\mathbb Q}[X] = q_1X(\omega_1) + q_2X(\omega_2) + q_3X(\omega_3)$$
$$ = 0 + \color{red}{-1} \times 0.2 + 2 \times 0.6 = 1$$
$$\to \Pi(0;X) = 0.5$$
I guess $\therefore$ that we cannot determine the price of the European call using $\Pi(0;X) = \beta E^{Q}[X]$ because there is no equivalent martingale measure $\mathbb Q$
So what's the verdict? Can we say the price is 0.5? How can we price even if there is arbitrage?
Edit: I noticed that one of the probabilities, in what was attempted to be the equivalent martingale measure, is negative. I remember reading about
negative probabilities, but these links 1 2 mentioned by Wiki seem to assume absence of arbitrage so I think they are not applicable. Or are they?
Is it perhaps that this market can be considered to be arbitrage-free under some quasiprobability measure that allows negative probabilities?
Edit (to address a deleted answer):
Thanks BKay.
1 So you mean there is no unique price for $X$ but we can find upper bounds? Like in your example the least upper bound so far is 0.3 then we can continue to find lower upper bounds $u_1, u_2, ...$ (or even higher lower bounds $l_1, l_2, ...$) to say the price of $X$ is in $[0,\inf_n u_n]$ (or $[\sup_n l_n,\inf_n u_n]$)?
2 Re stochastic domination, I haven't heard that term in classes, but I think I read about that before. Might that depend on the (quasi)probability measure? Under this probability measure $0.5 S_1^2$ dominates $X$ but what about under some quasiprobability measure?
3 the $q_i$'s, not the $\psi_i$'s are the probabilities
|
Verify Simulations with the Method of Manufactured Solutions
How do we check if a simulation tool works correctly? One approach is the Method of Manufactured Solutions. The process involves assuming a solution, obtaining source terms and other auxiliary conditions consistent with the assumption, solving the problem with those conditions as inputs to the simulation tool, and comparing the results with the assumed solution. The method is easy to use and very versatile. For example, researchers at Sandia National Laboratories have used it with several in-house codes.
Verification and Validation
Before using a numerical simulation tool to predict outcomes from previously unforeseen situations, we want to build trust in its reliability. We can do this by checking whether the simulation tool accurately reproduces available analytical solutions or whether its results match experimental observations. This brings us to two closely related topics of
verification and validation. Let’s clarify what these two terms mean in the context of numerical simulations.
To numerically simulate a physical problem, we take two steps:
1. Construct a mathematical model of the physical system. This is where we account for all of the factors (inputs) that influence observed behavior (outputs) and postulate the governing equations. The result is often a set of implicit relations between inputs and outputs. This is frequently a system of partial differential equations with initial and boundary conditions that collectively are referred to as an initial boundary value problem (IBVP).
2. Solve the mathematical model to obtain the outputs as explicit functions of the inputs. However, such closed-form solutions are not available for most problems of practical interest. In this case, we use numerical methods to obtain approximate solutions, often with the help of computers to solve large systems of generally nonlinear algebraic equations and inequalities.
There are two situations where errors can be introduced. First, they can occur in the mathematical model itself. Potential errors include overlooking an important factor or assuming an unphysical relationship between variables.
Validation is the process of making sure such errors are not introduced when constructing the mathematical model. Verification, on the other hand, is to ascertain that the mathematical model is accurately solved. Here, we are ensuring that the numerical algorithm is convergent and the computer implementation is correct, so that the numerical solution is accurate.
In brief, during
validation we ask if we posed the appropriate mathematical model to describe the physical system, whereas in verification we investigate if we are obtaining an accurate numerical solution to the mathematical model.
Now, we will dive deeper into the verification of numerical solutions to initial boundary value problems (IBVPs).
Different Verification Approaches
How do we check if a simulation tool is accurately solving an IBVP?
One possibility is to choose a problem that has an exact analytical solution and use the exact solution as a benchmark. The method of separation of variables, for example, can be used to obtain solutions to simple IBVPs. The utility of this approach is limited by the fact that most problems of practical interest do not have exact solutions — the
raison d’être of computer simulation. Still, this approach is useful as a sanity check for algorithms and programming.
Another approach is to compare simulation results with experimental data. To be clear, this is combining validation and verification in one step, which is sometimes called
qualification. It is possible but unlikely that experimental observations are matched coincidentally by a faulty solution through a combination of a flawed mathematical model and a wrong algorithm or a bug in the programming. Barring such rare occurrences, a good match between a numerical solution and an experimental observation vouches for the validity of the mathematical model and the veracity of the solution procedure.
The Application Libraries in COMSOL Multiphysics contain many verification models that use one or both of these approaches. They are organized by physics areas.
Verification models are available in the Application Libraries of COMSOL Multiphysics.
What if we want to verify our results in the absence of exact mathematical solutions and experimental data? We can turn to the method of manufactured solutions.
Implementing the Method of Manufactured Solutions
The goal of solving an IBVP is to find an explicit expression for the solution in terms of independent variables, usually space and time, given problem parameters such as material properties, boundary conditions, initial conditions, and source terms. Common forms of source terms include body forces such as gravity in structural mechanics and fluid flow problems, reaction terms in transport problems, and heat sources in thermal problems.
In the Method of Manufactured Solutions (MMS), we flip the script and start with an assumed explicit expression for the solution. Then, we substitute the solution to the differential equations and obtain a consistent set of source terms, initial conditions, and boundary conditions. This usually involves evaluating a number of derivatives. We will soon see how the symbolic algebra routines in COMSOL Multiphysics can help with this process. Similarly, we evaluate the assumed solution at time t = 0 and at the boundaries to obtain initial conditions and the boundary conditions.
Next comes the verification step. Given the source terms and auxiliary conditions just obtained, we use the simulation tool to obtain a numerical solution to the IBVP and compare it to the original assumed solution with which we started.
Let us illustrate the steps with a simple example.
Verifying 1D Heat Conduction
Consider a 1D heat conduction problem in a bar of length L
with initial condition
and fixed temperatures at the two ends given by
The coefficients A_c, \rho, C_p, and k stand for the cross-sectional area, mass density, heat capacity, and thermal conductivity, respectively. The heat source is given by Q.
Our goal is to verify the solution of this problem using the method of manufactured solutions.
First, we assume an explicit form for the solution. Let’s consider the temperature distribution
where \tau is a characteristic time, which for this example is an hour. We introduce a new variable u for the assumed temperature to distinguish it from the computed temperature T.
Next, we find the source term consistent with the assumed solution. We can hand calculate partial derivatives of the solution with respect to space and time and substitute them in the differential equation to obtain Q. Alternatively, since COMSOL Multiphysics is able to perform symbolic manipulations, we will use that feature instead of hand calculating the source term.
In the case of uniform material and cross-sectional properties, we can declare A_c, \rho, C_p, and k as parameters. The general heterogeneous case requires variables, as do time-dependent boundary conditions. Notice the use of the operator
d(), one of the built-in differentiation operators in COMSOL Multiphysics, shown in the screenshot below. The symbolic algebra routine in COMSOL Multiphysics can automate the evaluation of partial derivatives.
We perform this symbolic manipulation with the caveat that we trust the symbolic algebra. Otherwise, any errors observed later could be from the symbolic manipulation and not the numerical solution. Of course, we can plot a hand-calculated expression for Q alongside the result of the symbolic manipulation shown above to verify the symbolic algebra routine.
Next, we compute the initial and boundary conditions. The initial condition is the assumed solution evaluated at t = 0.
The values of the temperature at the two ends of the bar are g_1(t) = g_2(t) = 500 K.
Next, we obtain the numerical solution of the problem using the source term, as well as the initial and boundary conditions we have just calculated. For this example, let us use the
Heat Transfer in Solids physics interface. Add initial values, boundary conditions, and sources derived from the assumed solution.
For the final step, we compare the numerical solution with the assumed solution. The plots below show the temperature after a time period of one day. The first solution is obtained using linear elements, whereas the second is obtained using quadratic elements. For this type of problem, COMSOL Multiphysics chooses quadratic elements by default.
The solution computed using the manufactured solution with linear elements (left) and quadratic elements (right).
Checking Different Parts of the Code
The MMS gives us the flexibility to check different parts of the code. In the example given above, for the purpose of simplicity we have intentionally left many parts of the IBVP unchecked. In practice, every item in the equation should be checked in the most general form. For example, to check if the code accurately handles nonuniform cross-sectional areas, we need to define a spatially variable area before deriving the source term. The same is true for other coefficients such as material properties.
A similar check should be made for all boundary and initial conditions. If, for example, we want to specify the flux on the left end instead of the temperature, we first evaluate the flux corresponding to the manufactured solution, i.e., -n\cdot(-A_ck \nabla u), where n is the outward unit normal. For the assumed solution in this example, the inward flux at the left end becomes \frac{A_ck}{L}\frac{t}{\tau}\cdot 1\,\mathrm{K}.
In COMSOL Multiphysics, the default boundary condition for heat transfer in solids is thermal insulation. What if we want to verify the handling of thermal insulation on the left end? We would need to manufacture a new solution where the derivative vanishes on the left end. For example, we can use
Note that during verification, we are checking if the equations are being correctly solved. We are not concerned with whether the solution corresponds to physical situations.
Remember that once we manufacture a new solution, we have to recalculate the source term, initial conditions, and boundary conditions according to the assumed solution. Of course, when we use the symbolic manipulation tools in COMSOL Multiphysics, we are exempt from the tedium!
Convergence Rate
As shown in the graph above, the solutions obtained by the linear element and the quadratic element converged as the mesh size was reduced. This qualitative convergence gives us some confidence in the numerical solution. We can further scrutinize the numerical method by studying its rate of convergence, which will provide a quantitative check of the numerical procedure.
For example, for the stationary version of the problem, the standard finite element error estimate for error measured in the m-order Sobolev norm is
\|u - u_h\|_m \leq C h^{p+1-m} \|u\|_{p+1},
where u and u_h are the exact and finite element solutions, h is the maximum element size, and p is the order of the approximation polynomials (shape functions). For m = 0, this gives the error estimate
\|u - u_h\|_0 \leq C h^{p+1} \|u\|_{p+1},
where C is a mesh independent constant.
Returning to the method of manufactured solutions, this implies that the solution with linear element (p = 1) should show second-order convergence when the mesh is refined. If we plot the norm of the error with respect to mesh size on a log-log plot, the slope should asymptotically approach 2. If this does not happen, we will have to check the code or the accuracy and regularity of inputs such as material and geometric properties. As the figures below show, the numerical solution converges at the theoretically expected rate.
Left: Use integration operators to define norms. The operator intop1 is defined to integrate over the domain. Right: Log-log plot of error versus mesh size shows second-order convergence in the L_2-norm (m = 0) for linear elements, which is consistent with theoretical prediction.
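The same convergence check is easy to reproduce outside of any particular tool. As a hedged standalone sketch (finite differences rather than finite elements, but likewise second-order for this problem), we can manufacture u = \sin(\pi x) for -u'' = f on (0,1) and watch the L_2 error fall by a factor of about four per mesh halving:

import numpy as np

def l2_error(N):
    # Manufactured solution u = sin(pi x) implies f = pi^2 sin(pi x)
    # and homogeneous Dirichlet conditions u(0) = u(1) = 0.
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    # Second-order central-difference matrix for -u''.
    A = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
    u_h = np.zeros(N + 1)
    u_h[1:-1] = np.linalg.solve(A, f)
    return np.sqrt(h * np.sum((u_h - np.sin(np.pi * x)) ** 2))

for N in (16, 32, 64, 128):
    print(N, l2_error(N))  # each halving of h divides the error by about 4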
While we should always check convergence, the theoretical convergence rate can only be checked for those problems like the one above where
a priori error estimates are available. When you have such problems, remember that the method of manufactured solutions can help you verify if your code shows the correct asymptotic behavior.
Nonlinear Problems and Coupled Problems
In case of constitutive nonlinearity, the coefficients in the equation depend on the solution. In heat conduction, for example, thermal conductivity can depend on the temperature. In such cases, the coefficients need to be derived from the assumed solution.
Coupled (multiphysics) problems have more than one governing equation. Once solutions are assumed for all the fields involved, source terms have to be derived for each governing equation.
Uniqueness
Note that the logic behind the method of manufactured solutions holds only if the governing system of equations has a unique solution under the conditions (source term, boundary, and initial conditions) implied by the assumed solution. For example, in the stationary heat conduction problem, uniqueness proofs require positive thermal conductivity. While this is straightforward to check for in the case of isotropic uniform thermal conductivity, in the case of temperature dependent conductivity or anisotropy more thought should be given when manufacturing the solution to not violate such assumptions.
When using the method of manufactured solutions, the solution exists by construction. In addition, uniqueness proofs are available for a much larger class of problems than the class for which we have exact analytical solutions. Thus, the method gives us more room to work with than starting from source terms and initial and boundary conditions and searching for exact solutions.
Try It Yourself
The built-in symbolic manipulation functionality of COMSOL Multiphysics makes it easy to implement the MMS for code verification as well as for educational purposes. While we do extensive testing of our codes, we welcome scrutiny on the part of our users. This blog post introduced a versatile tool that you can use to verify the various physics interfaces. You can also verify your own implementations when using equation-based modeling or the Physics Builder in COMSOL Multiphysics. If you have any questions about this technique, please feel free to contact us!
Resources
For an extensive discussion of the method of manufactured solutions, including its relative strengths and limitations, see this report from Sandia National Laboratories. The report details a set of blind tests in which one author planted a series of code mistakes unbeknownst to the second author, who had to mine-sweep using the method described in this blog post.
For a broader discussion of verification and validation in the context of scientific computing, check out W. J. Oberkampf and C. J. Roy, Verification and Validation in Scientific Computing, Cambridge University Press, 2010.
Standard error estimates for the finite element method are available in texts such as Thomas J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Publications, 2000, and B. Daya Reddy, Introductory Functional Analysis: With Applications to Boundary Value Problems and Finite Elements, Springer-Verlag, 1997.
|
Netchaiev gave you a straightforward answer. Here is a second way to get the number of operations.
This way of thinking might be helpful for computer-science-oriented people, who see algorithms as computer code and not as maths. This way is also useful if you have written code to analyse.
The pseudo-code for this algorithm is

for i = 1:n
    sum = 0
    for j = 1:i
        temp = t_ij * x_j
        sum = sum + temp
    x_i = b_i / sum
Now the rule is quite easy: $\text{for} = \sum$
$$\sum_{i=1}^n\left\{ \sum_{j=1}^i [1\odot + 1\oplus] + 1\oslash\right\}$$
Then the number of operations is:
$$\sum_{i=1}^n\{ 2i + 1\} = \sum_{i=1}^n 2i + \sum_{i=1}^n 1 = 2 \sum_{i=1}^n i + n = 2 \frac{n(n+1)}{2} + n = n^2 + 2n$$
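If you prefer to let the machine do the counting, instrumenting the loops directly confirms the formula (a Python sketch; the matrix entries themselves are irrelevant to the count):

def count_ops(n):
    # Count the *, + and / performed by the pseudo-code above.
    mults = adds = divs = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            mults += 1   # temp = t_ij * x_j
            adds += 1    # sum = sum + temp
        divs += 1        # x_i = b_i / sum
    return mults + adds + divs

for n in (5, 10, 100):
    assert count_ops(n) == n**2 + 2 * n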
Small remark: in this calculation I counted addition and multiplication separately. In maths you will often see 1 addition and 1 multiplication (as in $x=a+bc$) grouped into one "arithmetic operation". The term "elementary operation" for addition, multiplication, division, etc. can also be found in the literature. This grouping of a multiplication and an addition is a fused multiply-add.
Then again, if you consider this optimized operation, you should also take into account that divisions take much longer than additions and multiplications (roughly 4 times longer).
So when you find an amount of operations for an algorithm always check what exactly the author is counting.
|
I want to evaluate the integral $$\frac{1}{2\pi} \int_0^{2\pi} \frac{1}{1 - 2r \cos \theta + r^2} d\theta. $$ I thought first to substitute $\cos(\theta)$ for $\frac{1}{2} (e^{i\theta} + e^{-i\theta} ) $, reducing the problem to a complex integral over the unit circle, then using $ z = re^{i\theta}$ to change the variable of integration from $d\theta$ to $dz$. However, I am stuck with the $r$ in the term when I try to change coordinates using $dz = ire^{i\theta} d\theta$. Basically I want to be able to integrate over $dz$ instead of $d\theta$, but for some reason I'm finding it difficult to transform the coordinates, due to the lingering polar terms when I try to make the transformation. Is there some generic method that I can use when faced with the problem of making such coordinate changes?
Here's a way that one might be "led by the nose" to the correct contour integral.. Looking at the denominator $r^2 - 2r\cos\theta + 1$ as a function of $r$, you can use the quadratic formula to get the roots, given by $$r = \cos(\theta) \pm {1 \over 2}\sqrt{4\cos^2(\theta) - 4}$$ $$= \cos(\theta) \pm {1 \over 2}\sqrt{-4\sin^2(\theta)}$$ $$ = \cos(\theta) \pm i \sin(\theta)$$ $$ = e^{i\theta}, e^{-i\theta}$$ So your integral is the same as $${1 \over 2\pi} \int_0^{2\pi} {d\theta \over (r - e^{i\theta})(r - e^{-i\theta})}$$ This suggests doing a contour integral over the unit circle, with $z = e^{i\theta}$, ${1 \over z} = e^{-i\theta}$, and $dz = ie^{i\theta}d\theta$, so that $d\theta = {dz \over iz}$. The resulting contour integral is $${1 \over 2\pi i} \int_{|z| = 1}{dz \over z(r - z)(r - {1 \over z})}$$ $$= {1 \over 2\pi i} \int_{|z| = 1}{dz \over (r - z)(rz - 1)}$$ Note this is the same contour integral Didier Piau got, and as he indicated it's a pretty routine application of the residue theorem.
The idea to transform everything into an integral of a function of the complex variable over the unit circle $C$ is a good one. As you wrote, this means you want to use $z=\mathrm{e}^{\mathrm{i}\theta}$, $\mathrm{d}z=\mathrm{i}\mathrm{e}^{\mathrm{i}\theta}\mathrm{d}\theta=\mathrm{i}z\mathrm{d}\theta$, and $2\cos\theta=z+1/z$. Your integral becomes $$ \frac1{2\pi\mathrm{i}}\int_Cf(z)\mathrm{d}z, $$ with $$ f(z)=\frac1{z(1+r^2-r(z+1/z))}=\frac1{z(1+r^2)-rz^2-r}=\frac1{(r-z)(rz-1)}. $$ The rest is a matter of residue computation: you will want to know the poles of $f$ inside the circle $C$, hence whether $r<1$ or $r>1$ will matter (the case $r=1$ being excluded since the integral then diverges) and, surprise, the result will involve an absolute value sign.
Hint: This is related to the Poisson Kernel. Full Solution: I assume $0<r<1$. Notice that
$$ \frac{1-r^2}{1-2r\cos\theta +r^2} = \operatorname{Re}\left(\frac{1+re^{i\theta}}{1-re^{i\theta}}\right)$$ so that
$$\frac{1}{2\pi} \int_0^{2\pi} \frac{1}{1 - 2r \cos \theta + r^2} d\theta=\frac{1}{2\pi(1-r^2)} \operatorname{Re}\left(\int_0^{2\pi} \left(\frac{1+re^{i\theta}}{1-re^{i\theta}}\right)d\theta\right).$$ Making the change of variables $z=re^{i\theta}$, $dz=ire^{i\theta}d\theta$ this becomes
$$\frac{1}{1-r^2}\operatorname{Re}\left(\frac{1}{2\pi i}\int_{C_r}\frac{1+z}{z(1-z)}dz\right)=\frac{1}{1-r^2}$$ where the last equality comes from evaluating the residue at $0$.
Remark: If $r>1$ the same solution will work, but we will pick up a residue at $z=1$ of $-2$ and instead get $$\frac{1}{2\pi} \int_0^{2\pi} \frac{1}{1 - 2r \cos \theta + r^2} d\theta=\frac{1}{r^2-1}$$ as the final answer.
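A quick numerical spot-check of both cases, as a sketch with scipy (not part of the argument above):

import numpy as np
from scipy.integrate import quad

def poisson_integral(r):
    f = lambda t: 1.0 / (1.0 - 2.0 * r * np.cos(t) + r * r)
    val, _ = quad(f, 0.0, 2.0 * np.pi)
    return val / (2.0 * np.pi)

for r in (0.3, 0.7, 1.5, 3.0):
    exact = 1.0 / (1.0 - r * r) if r < 1.0 else 1.0 / (r * r - 1.0)
    print(r, poisson_integral(r), exact)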
Hope that helps,
|
Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?
$$\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^\pi}dx=\int_0^\infty\frac1{1+x^2}\cdot\frac1{1+x^e}dx$$
Let $f$ be a symbol with the property that $f^n = n!$. Consider $d_n$, the number of ways of putting $n$ letters in $n$ envelopes so that no letter gets to the right person (aka derangements). Many people initially think that $d_n = (n-1)! = f^{n-1}$ (the first object has $n-1$ legal locations, the second $n-2$, ...). The correct answer isn't that different actually:
$d_n = (f-1)^n$.
Best near miss
$$\int_{0}^{\infty }\cos\left ( 2x \right )\prod_{n=0}^{\infty}\cos\left ( \frac{x}{n} \right )~\mathrm dx\approx \frac{\pi}{8}-7.41\times 10^{-43}$$
One can easily be fooled into thinking that it is exactly $\dfrac{\pi}{8}$.
References:
Wikipedia
Future Prospects for Computer-Assisted Mathematics, by D.H. Bailey and J.M. Borwein
I actually think currying is really cool:
$$(A \times B) \to C \; \simeq \; A \to (B \to C)$$
Though not strictly an identity, but an isomorphism.
When I met it for the first time it seemed to be a bit odd but it is so convenient and neat. At least in programming.
$$ 71 = \sqrt{7! + 1}. $$
Besides the amusement of reusing the decimal digits $7$ and $1$, this is conjectured to be the last solution of $n!+1 = x^2$ in integers. ($n=4$ and $n=5$ also work.) Even finiteness of the set of solutions is not known except using the ABC conjecture.
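A brute-force scan, as a quick sketch, turns up nothing else in a modest range, consistent with the conjecture:

import math

# Search for solutions of n! + 1 = x^2 (Brocard's problem).
fact = 1
for n in range(1, 500):
    fact *= n
    x = math.isqrt(fact + 1)
    if x * x == fact + 1:
        print(f"n = {n}: {n}! + 1 = {x}^2")  # prints n = 4, 5, 7 only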
The Cayley-Hamilton theorem:
If $A \in \mathbb{R}^{n \times n}$ and $I_{n} \in \mathbb{R}^{n \times n}$ is the identity matrix, then the characteristic polynomial of $A$ is $p(\lambda) = \det(\lambda I_n - A)$. Then the Cayley-Hamilton theorem can be "obtained" by substituting $\lambda = A$, since $$p(A) = \det(AI_n-A) = \det(A-A) = 0$$
By the block partition rule for determinants we have $$ \det \left[ \begin{array}{cc} U & R \\ L & D \end{array} \right] = \det U\cdot \det ( D-LU^{-1}R) $$ But if $U,R,L$ and $D$ commute, we have that $$ \det \left[ \begin{array}{cc} U & R \\ L & D \end{array} \right] = \det (UD-LR) $$
$$\frac{1}{998901}=0.000001002003004005006...997999000001...$$
\begin{eqnarray} \zeta(0) = \sum_{n \geq 1} 1 = -\frac{1}{2} \end{eqnarray}
Here's an interesting one:
$$3435=3^3+4^4+3^3+5^5$$
$$ \frac{1}{2}=\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\frac{\frac{1}{2}}{\frac{1}{2}+\cdots}}}}}} $$
and more generally we have $$ \frac{1}{n+1}=\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\frac{\frac{1}{n(n+1)}}{\frac{1}{n(n+1)}+\ddots}}}}}} $$
$(x-a)(x-b)(x-c)\ldots(x-z) = 0$
\begin{eqnarray} \sum_{k = 0}^{\lfloor q - q/p \rfloor} \left \lfloor \frac{p(q - k)}{q} \right \rfloor = \sum_{k = 1}^{q} \left \lfloor \frac{kp}{q} \right \rfloor \end{eqnarray}
\begin{align}\frac{64}{16}&=\frac{6\!\!/\,4}{16\!\!/}\\&=\frac41\\&=4\end{align}
For more examples of these weird fractions, see "How Weird Are Weird Fractions?", Ryan Stuffelbeam, The College Mathematics Journal, Vol. 44, No. 3 (May 2013), pp. 202-209.
$\lnot$(A$\land$B)=($\lnot$A$\lor$$\lnot$B) and $\lnot$(A$\lor$B)=($\lnot$A$\land$$\lnot$B): De Morgan's laws, which say that negation exchanges the two connectives.
$$2592=2^5\,9^2$$ Found this in one of Dudeney's puzzle books
$$ \sin \theta \cdot \sin \bigl(60^\circ - \theta \bigr) \cdot \sin \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \sin 3\theta$$
$$ \cos \theta \cdot \cos \bigl(60^\circ - \theta \bigr) \cdot \cos \bigl(60^\circ + \theta \bigr) = \frac{1}{4} \cos 3\theta$$
$$ \tan \theta \cdot \tan \bigl(60^\circ - \theta \bigr) \cdot \tan \bigl(60^\circ + \theta \bigr) = \tan 3\theta $$
$$\sum_{n=1}^{+\infty}\frac{\mu(n)}{n}=1-\frac12-\frac13-\frac15+\frac16-\frac17+\frac1{10}-\frac1{11}-\frac1{13}+\frac1{14}+\frac1{15}-\cdots=0$$ This relation was discovered by Euler in 1748 (before Riemann's studies of the $\zeta$ function as a function of a complex variable, from which this relation becomes much easier!).
Then one of the most impressive formulas is the functional equation for the $\zeta$ function, in its symmetric form: it highlights a very deep connection between the $\Gamma$ and the $\zeta$: $$ \pi^{-\frac s2}\Gamma\left(\frac s2\right)\zeta(s)= \pi^{-\frac{1-s}2}\Gamma\left(\frac{1-s}2\right)\zeta(1-s)\;\;\;\forall s\in\mathbb C\;. $$
Moreover, no one seems to have written the Basel problem (Euler, 1735): $$ \sum_{n=1}^{+\infty}\frac1{n^2}=\frac{\pi^2}{6}\;\;. $$
$\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3) = \pi$ (using the principal value), but if you blindly use the addition formula $\tan^{-1}(x) + \tan^{-1}(y) = \tan^{-1}\dfrac{x+y}{1-x y}$ twice, you get zero:
$\tan^{-1}(1) + \tan^{-1}(2) = \tan^{-1}\dfrac{1+2}{1-1*2} =\tan^{-1}(-3)$; $\tan^{-1}(1) + \tan^{-1}(2) + \tan^{-1}(3) =\tan^{-1}(-3) + \tan^{-1}(3) =\tan^{-1}\dfrac{-3+3}{1-(-3)(3)} = 0$.
$$27\cdot56=2\cdot756,$$ $$277\cdot756=27\cdot7756,$$ $$2777\cdot7756=277\cdot77756,$$ and so on.
$$\frac{\pi}{4}=\sum_{n=1}^{\infty}\arctan\frac{1}{f_{2n+1}}, $$ where the $f_{2n+1}$ are Fibonacci numbers, $n=1,2,...$
$$\lim_{\omega\to\infty}3=8$$ The "proof" is by rotation through $\pi/2$. More of a joke than an identity, I suppose.
For all $n\in\mathbb{N}$ and $n\neq1$ $$\prod_{k=1}^{n-1}2\sin\frac{k \pi}{n} = n$$
For some reason, the proof involves complex numbers and polynomials.
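It is also easy to check numerically, as a quick sketch:

import math

for n in range(2, 12):
    prod = 1.0
    for k in range(1, n):
        prod *= 2.0 * math.sin(k * math.pi / n)
    print(n, round(prod, 10))  # prints n back, up to rounding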
Here's one clever trigonometric identity that impressed me in my high-school days. Add $\sin \alpha$ to both the numerator and the denominator of $\sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$, get rid of the square root, and nothing changes. In other words:
$$\frac{1 - \cos \alpha + \sin \alpha}{1 + \cos \alpha + \sin \alpha} = \sqrt{\frac{1-\cos \alpha}{1 + \cos \alpha}}$$
If you take a closer look you'll notice that the RHS is the formula for tangent of a half-angle. Actually if you want to prove those, nothing but the addition formulas are required.
\begin{align} E &= \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} = mc^{2} + \left[\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}\right] \\[3mm]&= mc^{2} + {\left(pc\right)^{2} \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}} = mc^{2} + {p^{2}/2m \over 1 + {\sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2} \over 2mc^{2}}} \\[3mm]&= mc^{2} + {p^{2}/2m \over 1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} + mc^{2}}} = mc^{2} + {p^{2}/2m \over 1 + {p^{2}/2m \over 1 + {p^{2}/2m \over \sqrt{\left(pc\right)^{2} + \left(mc^{2}\right)^{2}} - mc^{2}}}} \end{align}
$$ \begin{array}{rcrcl} \vdots & \vdots & \vdots & \vdots & \vdots \\[1mm] \int{1 \over x^{3}}\,{\rm d}x & = & -\,{1 \over 2}\,{1 \over x^{2}} & \sim & x^{\color{#ff0000}{\large\bf -2}} \\[1mm] \int{1 \over x^{2}}\,{\rm d}x & = & -\,{1 \over x} & \sim & x^{\color{#ff0000}{\large\bf -1}} \\[1mm] \int{1 \over x}\,{\rm d}x & = & \ln\left(x\right) & \sim & x^{\color{#0000ff}{\LARGE\bf 0}} \color{#0000ff}{\LARGE\quad ?} \\[1mm] \int x^{0}\,{\rm d}x & = & x^{1} & \sim & x^{\color{#ff0000}{\large\bf 1}} \\[1mm] \int x\,{\rm d}x & = & {1 \over 2}\,x^{2} & \sim & x^{\color{#ff0000}{\large\bf 2}} \\[1mm] \vdots & \vdots & \vdots & \vdots & \vdots \end{array} $$
$$ \frac{\pi}{2}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{2^{2k}} $$ $$ \frac{\pi}{3}=1+2\sum_{k=1}^{\infty}\frac{\eta(2k)}{6^{2k}} $$ where $ \eta(n)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{n}} $
Voronoi summation formula:
$\sum \limits_{n=1}^{\infty}d(n)(\frac{x}{n})^{1/2}\{Y_1(4\pi \sqrt{nx})+\frac{2}{\pi}K_1(4\pi \sqrt{nx})\}+x \log x +(2 \gamma-1)x +\frac{1}{4}=\sum \limits _{n\leq x}'d(n)$
$\textbf{Claim:}\quad$$$\frac{\sin x}{n}=6$$ for all $n,x$ ($n\neq 0$).
$\textit{Proof:}\quad$$$\frac{\sin x}{n}=\frac{\dfrac{1}{n}\cdot\sin x}{\dfrac{1}{n}\cdot n}=\frac{\operatorname{si}x}{1}=\text{six}.\quad\blacksquare$$
Let $\sigma(n)$ denote the sum of the divisors of $n$.
If $$p=1+\sigma(k),$$ then $$p^a=1+\sigma(kp^{a-1})$$ where $a,k$ are positive integers and $p$ is a prime such that $p\not\mid k$.
|
Exercise
If $(x_{\alpha})_{\alpha \in \Lambda}$ is a net, then $x$ is an accumulation point of the net if for every $A \in \mathcal F_x$, the set $\{\alpha \in \Lambda: x_{\alpha}\in A\}$ is cofinal in $\Lambda$. Prove that $x$ is an accumulation point of the net if and only if there is a subnet of $(x_{\alpha})_{\alpha \in \Lambda}$ that converges to $x$.
I couldn't do much of the exercise. Here is what I've thought for the first implication:
If $S=\{\alpha \in \Lambda: x_{\alpha}\in A\}$ is cofinal in $\Lambda$, then, by definition, for every $\alpha \in \Lambda$ there is $\beta \in S$ such that $\beta \geq \alpha$. I want to construct a subnet that converges to $x$. For every open set $U$ with $x \in U$, there must be an element $y_U$ of the subnet such that $y_U \in U$. I know that the elements of the subnet are going to be some (or all) of the elements of $S$, but I am pretty lost on how to construct the subnet.
As for the other implication I have no idea how to show it. Any suggestions would be appreciated.
|
$V(r)=\frac{1}{r}$ means that for any two electrons at positions $r_1$ and $r_2$, the electric potential is given by $\frac{1}{|r_1-r_2|}$.
The Fourier transform of $\frac{1}{r}$ is $\frac{1}{q^2}$.
How can we interpret $V(q)=\frac{1}{q^2}$ physically?
This is what I'm thinking about:
(1)
$V(q)=\frac{1}{|q_1-q_2|^2}$ does not describe the potential between two electrons with wave vector $q_1$ and $q_2$
If we have two electrons with wave vectors $q_1$ and $q_2$ (in other words, wavefunctions $e^{iq_1r}$ and $e^{iq_2r}$), the probability distributions are uniform in space, so the potential $V(q)$ should be constant for all combinations of $q_1$ and $q_2$.
(2)
$V(q)$ must be quantum mechanical in nature?
It is hard to explain in classical physics, but if we think about quantum Hilbert basis, $$ \langle r_1',r_2'|\hat{V}|r_1,r_2\rangle =\frac{1}{|r_1-r_2|} \delta_{r_1,r_1'} \delta_{r_2,r_2'}$$ $$ \langle q_1',q_2'|\hat{V}|q_1,q_2\rangle =\frac{1}{|q_1-q_1'|^2} \delta_{q_1+q_2,q_1'+q_2'} $$ $\hat{V}$ is two particles' operator. $\frac{1}{r}$ or $\frac{1}{q^2}$ are just the representation in certain basis.
Therefore $V(q)$ is an off-diagonal term, so it must be something quantum.
(3)
what does the energy value $V(q)$ corresponds to?
$V(q)$ has the dimensions of energy; my naive understanding is that it corresponds to the energy you need to "push" $|q_1,q_2 \rangle$ to a new state $|q_1-q,q_2+q\rangle$.
In the classical picture, in both the $|q_1,q_2 \rangle$ and $|q_1-q,q_2+q\rangle$ configurations the electrons are uniformly distributed, so no work needs to be done to change the Coulomb potential.
In quantum mechanics, let's take the analogy of two level system $$|\uparrow\rangle :=|q_1,q_2 \rangle $$ $$|\downarrow\rangle := |q_1-q,q_2+q\rangle $$
Then the off-diagonal term $\Delta=\langle \uparrow |\hat{V}|\downarrow \rangle =\frac{1}{q^2}$ is more like a tunnelling rate (hopping rate ~ frequency ~ energy).
So, "push" means tunnelling. The energy value $V(q)$ is tunnelling rate.
People talk about the $\frac{1}{q^2}$ divergence, but I don't understand it. Does it mean two electrons exchange their momentum very often, and the rate diverges for small momentum exchanges ($q\rightarrow 0$)?
(4)
$V(q)$ as a Laplace transformation in classical E&M
Contradict to (2), we do have $V(q)$ in purely classical systems.
This often appears in problems such as the screened plasma potential, the classical Drude model of a metal, etc.:
$$ \nabla^2 \phi(r)=-\delta(r) $$ The Fourier transform gives: $$ q^2 \phi(q)=1 $$
And $q$ does not mean the momentum of an electron wavefunction $e^{iqr}$ (there is no concept of a wavefunction in classical physics). $q$ means the oscillation of the charge density: given a charge density wave $\rho(r)=\cos(qr)$, we have the potential wave $V(r) = \frac{1}{q^2}\cos(qr)$.
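A one-dimensional numerical sketch of this classical statement (numpy assumed; sign convention $\nabla^2 V = -\rho$, matching the line above):

import numpy as np

# Periodic 1D Poisson solve, d2V/dx2 = -rho, done mode-by-mode via FFT.
N, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
q0 = 5  # wavenumber of the charge-density wave
rho = np.cos(q0 * x)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # Fourier wavenumbers
rho_hat = np.fft.fft(rho)
V_hat = np.zeros_like(rho_hat)
V_hat[k != 0] = rho_hat[k != 0] / k[k != 0] ** 2  # V(q) = rho(q)/q^2

V = np.fft.ifft(V_hat).real
print(np.max(np.abs(V - np.cos(q0 * x) / q0**2)))  # ~ machine precision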
Classical: a point charge can be represented by a superposition of cosine charge waves from all wave-vector. $$ \rho(x)=\delta(x-r)=\sum_q \cos(q(x-r)) $$
Quantum: a point charge state can be represented by a superposition of momentum q eigen-waves from all wave-vector. $$ |r\rangle =\sum_q e^{iqr}|q\rangle $$
I couldn't find a natural way to connect quantum and classical. Firstly, the classical density $\cos(qr)$ changes its sign, while the quantum $e^{iqr}$ is uniform in density; secondly, $\sum_q$ is a coherent superposition in the quantum case. This might be a bad analogy.
What is your interpretation of $V(q)=\frac{1}{q^2}$?
|
INTRODUCTION
Ion traps are a robust and promising platform for quantum information processing and for the implementation of a quantum computer. However, major challenges exist in scaling these systems to the level required for full-scale quantum computing. I will describe work concerned with addressing two of these challenges; namely connecting multiple trap zones in more than one dimension,...
Low energy positron beams are used in a variety of applications, including studies of atomic and condensed matter systems.[1] We study here the properties of a magnetically guided cold positron beam (generated in the conventional manner from a 22Na radioisotope source and solid neon moderator [2]) and their relationship to the different magnetic fields along the axis of a 7 m long scattering...
The thorium-229 nucleus possesses a unique first excited state at an energy of only about 7.8 eV, coupled to the ground state by a transition with a natural linewidth in the mHz range. This transition can be used as a reference for an optical clock that is highly immune to field-induced frequency shifts and as a sensitive probe of temporal variations of fundamental constants [1]. Despite many...
Precise measurements of the fundamental properties of the proton such as its mass, lifetime, charge radius, and magnetic moment are important for our understanding of the physics of atomic and nuclear structure as well as for tests of fundamental symmetries. As one of very few particle-antiparticle pairs which are directly comparable, the proton and antiproton serve as an important laboratory...
The coupling of ions stored in different traps through the charges they induce in a common electrode was proposed in Ref. [1], but it has not been accomplished yet. The completion of such a system would be an outstanding technological breakthrough in quantum electronics and would pave the way for the implementation of hybrid systems for quantum information [2]. A pioneering work using...
In contrast to a conventional electron-ion plasma, the electron-positron pair plasma is characterized by the mass balance of the two components. Theoretical studies thus predicted long ago that studying these plasmas would yield fundamentally new insights into plasma physics. Only recently have experimental activities become more precise, e.g. the APEX project, which aims for the creation of a...
Quantum resource theories seek to quantify sources of non-classicality that bestow quantum technologies their operational advantage. Chief among these are studies of quantum correlations and quantum coherence. The former to isolate non-classicality in the correlations between systems, the latter to capture non-classicality of quantum superpositions within a single physical system. Here we...
Following results of laser cooling a single ion of $^{40}$Ca$^+$ to its motional ground state ($\bar{n}_z=0.02(1)$) in the axial domain of a Penning trap [1], we report simultaneous sideband cooling of both radial modes to near their ground state in the same apparatus. Sideband cooling is performed on the $S_{1/2} \leftrightarrow D_{5/2}$ electric quadrupole transition at 729 nm, and average...
The former $g$-factor experiment located in Mainz performed various $g$-factor measurements on highly charged ions, resulting in tests of bound state QED [1] and the most precise value for the atomic mass of the electron [2]. These measurements will be continued within a new experiment at the MPIK with access to heavier highly charged ions. Meanwhile the follow-up experiment in Mainz, which is...
The high-precision Penning-trap mass spectrometer PENTATRAP (1) is currently being commissioned at the Max-Planck-Institut für Kernphysik in Heidelberg. It aims at mass-ratio measurements of stable and long-lived highly-charged ions with a relative uncertainty of below $10^{-11}$, a precision so far only achieved for a few relatively light elements (2).
The mass-ratio measurement is carried...
Abstract: Low temperature ion-atom interactions have been the object of growing interest over the past decade. Due to the availability of laser cooling for many atoms (Li, Na, K, Rb, Cs, Ca, Sr, Ba, Yb, etc.)[1,2,3] and ions (Ca+, Sr+, Ba+, Yb+, etc.)[4,6,7], the interactions between such ions and atoms have been explored experimentally at mK temperatures[5,7,8]. In the case of optically...
A number of upgrades and stabilisation techniques to the BASE apparatus [1], motivated by improving upon the recent 1.5 ppb measurement of the antiproton $g$-factor [2] and other fundamental properties of the antiproton, are presented.
A new modified-cyclotron mode detection system has been commissioned and installed into the BASE apparatus. The primary function of this instrument is to...
Antihydrogen, the bound state of a positron and an antiproton, is being studied by the ALPHA collaboration at CERN so that it can be compared to its matter counterpart, hydrogen. Antihydrogen is synthesised by merging plasmas of antiprotons and positrons in a magnetic trap, allowing a small fraction of the antihydrogen atoms created, the coldest, to remain trapped.
Decreasing the temperature...
Radio-frequency (rf) Paul traps operated with multifrequency rf trapping potentials provide the ability to independently confine charged particle species with widely different charge-to-mass ratios. In particular, these traps may find use in the field of antihydrogen recombination, allowing antiproton and positron clouds to be trapped and confined in the same volume without the use of large...
Several experiments at CERN aim at testing the CPT-theorem and weak equivalence principle using antimatter, among them the AEgIS experiment. Here, antihydrogen - produced via resonant charge exchange - will be used for precision measurements where the achievable sensitivity is determined by the temperature of the antiprotons.
We are investigating laser-cooling of anionic molecules to...
Collinear laser spectroscopy (CLS) is a powerful tool, with a long and successful history at COLLAPS/ISOLDE, to access nuclear ground state properties such as spin, charge radius, and electromagnetic moments with high precision and accuracy [1]. Conventional CLS is based on the optical detection of fluorescence photons from laser-excited ions or atoms. It is limited to radioactive ion beams...
Bound state quantum electrodynamics is one of the most thoroughly tested theories in physics, but was recently challenged by measurements done on muonic atoms, where a discrepancy of >5$\sigma$ was reported between the nuclear charge radii extracted from spectroscopic measurements on muonic hydrogen and electronic hydrogen [1-4]. To gain new insights into the “proton radius puzzle” we aim to...
The ${}^{171}$Yb${}^+$ ion employed in our single-ion optical clocks features two transitions used for the realization of frequency standards, the ${}^2$S${}_{1/2}$ to ${}^2$D${}_{3/2}$ electric quadrupole (E2) $[1]$ and the ${}^2$S${}_{1/2}$ to ${}^2$F${}_{7/2}$ electric octupole (E3) $[2]$ transition. The E2 transition frequency shows a significantly higher sensitivity to frequency shifts...
Steven A. Jones (1) from the ALPHA collaboration (2)
1) Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark. steven.armstrong.jones@cern.ch 2) CERN, CH-1211 Geneve 23, Switzerland
Antihydrogen offers a unique way to test matter/antimatter symmetry. Antihydrogen can reproducibly be synthesised and trapped in the laboratory for extended periods of time [1][2],...
We present techniques tailored for sympathetic cooling and manipulation of a single (anti-)proton in a Penning trap system. Inside our trap a double-well potential is engineered for co-trapping an atomic ion, which enables for the use of quantum logic spectroscopy inspired cooling and readout schemes [1, 2]. These should allow for preparation at sub-Doppler temperatures and a readout of the...
The trends of nuclear binding energies, obtained from high-precision atomic mass values, are sensitive to a wide range of nuclear structure phenomena such as shell effects or onsets of collectivity. Hence, binding energies make it possible to track the evolution of nuclear structure in yet unexplored regions of the nuclear chart, while also providing essential inputs to many nuclear models.
Three...
The main goal of the GBAR (Gravitational Behaviour of Antihydrogen at Rest) experiment is to test the weak equivalence principle for $\bar{H}$, which has never been measured directly. In order to perform that very complex experiment, ultracold antihydrogen atoms ($\approx 10\ \mu K$) are needed. As it is impossible to cool neutrals down to the required temperature, GBAR is going to produce the...
Nearly complete quantum control of individual trapped ions has become commonplace in many precision spectroscopy, metrology, and quantum information experiments. However, while measurements of the properties of fundamental particles for CPT tests have had remarkable recent successes [1, 2, 3], they have been limited to 0.3 ppb precision by long measurement times and particle temperatures on...
Recent advances in trapped ion quantum technology have led to impressive results including the demonstration of four qubit GHZ states using subsequent entanglement gates [1] and a dc magnetometer with quantum enhanced sensitivity [2]. We will present the underlying technological advancements, starting with a high-speed multi-channel waveform generator developed in Mainz. The system delivers...
Quantum electrodynamics (QED) is one of the best tested theories in physics [1]. However, energy levels in atomic hydrogen have been determined with much higher accuracy than what QED theory can provide, because it is hampered by the uncertainty in the experimentally determined proton charge radius. Therefore, spectroscopic measurements on muonic hydrogen ($\mu$H) were performed which improved...
The ALPHATRAP experiment is a Penning-trap setup dedicated to testing bound-state quantum electrodynamics by determining the $g$-factor of the bound electron in the electric field of highly charged ions (HCI) with ultra-high precision. The ALPHATRAP experiment is currently in the final stage of commissioning. The setup consists of a cryogenic double Penning-trap tower in which the HCI can be stored...
The Penning-trap mass spectrometer ISOLTRAP located at the radioactive ion beam facility ISOLDE at CERN performs high-precision mass measurements of short-lived nuclides. This gives access to the study of nuclear structure effects like the location of shell and subshell closures and provides precision $\beta$-decay $Q$-values to test nuclear models and fundamental interactions. For three...
A cloud of trapped ions, represented here by a four-level atomic system $|S_{1/2}\rangle, |P_{1/2}\rangle, |D_{3/2}\rangle$ and $|D_{5/2}\rangle$, is probed by the collection of photons from the transition $|P_{1/2}\rangle$ to $|S_{1/2}\rangle$.
Figure: Four-level atomic system of Ca+ ions
The lambda configuration with lasers at 866nm and 397nm allows for a two-photon dark state to take...
Cooling internal degrees of freedom by inelastic collisions is a widely applied method in cold molecule physics, especially for species that lack the possibility of laser cooling. [1] Despite this, the state specific rate coefficients are commonly unknown. Experimental data for ions at low temperatures, of the order of a few Kelvin, is particularly sparse. Our group has previously reported...
We report on our hybrid experiment which aims at studying trapped ions interacting with ultracold atoms that are off-resonantly coupled to Rydberg states. Since the polarisability of the Rydberg-dressed atoms can be extremely large, the interaction strength between ions and atoms increases tremendously as compared to the ground state case. Such interactions may be mediated over micrometers and...
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ that can be present.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$: in this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you don't mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
|
I think that gauge invariance of a Lagrangian is not a sufficient condition for the Ward identity to be valid. So why does the Ward identity happen to hold in Yang-Mills theory, and maybe in many other gauge theories which I'm not familiar with? Or why do physical states with equivalent polarizations have the following relation: $$ |e^\prime,p\rangle=|e,p\rangle+Q|\mathrm{some\,state}\rangle $$ where $e,e^\prime$ are polarizations that can be related by a gauge transformation with the coupling constant set to be zero, and $Q$ is the BRST charge. Is it a coincidence or is there a way to determine whether the Ward identity holds in a general gauge theory other than to explicitly carry out the calculations?
The Ward identities are the statement that if we write the scattering amplitude for an external photon with polarization $\zeta$ and momentum $k$ as $M = \zeta^\mu M_\mu$, then we have $k^\mu M_\mu = 0$. This equation is important because it shows that the spurious longitudinal polarizations proportional to the photon momentum decouple from all physical processes because their scattering amplitudes are generically zero.
The Ward identity holds for every Yang-Mills gauge theory in which the gauge field couples to a conserved current, and it also holds for massive vector field theories that don't have gauge symmetries, provided they still couple to a current, with the interaction Lagrangian of the form $A^\mu j_\mu$ where $j_\mu$ is some function of the matter fields the gauge field couples to. Gauge invariance of the Lagrangian and this form of coupling lead directly to the Ward identities through the general Ward-Takahashi identity:
For $j^\mu$ the conserved current of a global continuous symmetry (which is part of every gauge symmetry), we have that $$ \partial_\mu \langle T j^\mu(x) \phi(x_1)\dots \phi(x_n)\rangle = - \mathrm{i}\sum_{j=1}^n \delta(x-x_j)\langle \phi(x_1)\dots\phi(x_{j-1}) \delta\phi(x_j)\phi(x_{j+1})\dots \phi(x_n)\rangle\tag{1}$$ holds for all fields $\phi$, where $\delta \phi$ is the variation of $\phi$ under the symmetry. A scattering amplitude involving an external photon is schematically $$ M(k) = \zeta^\mu \mathrm{i} \int \mathrm{e}^{-\mathrm{i}kx} \partial_x^2 \langle T A_\mu(x)\cdot\text{other stuff}\rangle$$ and since $\frac{\delta S}{\delta A^\mu} = \partial_x^2 A_\mu(x) - j_\mu$ holds classically, we get $$ \partial_x^2 \langle TA_\mu(x) \dots\rangle = \langle T j_\mu(x) \dots \rangle + \text{contact terms}$$ upon use of the Dyson-Schwinger equation. The contact terms are $(n-1)$-point functions and don't contribute to the connected $n$-point function we're trying to compute, so we may neglect them.
Finally, for $\zeta^\mu = k^\mu$, we may use the Fourier relationship between $k$ and $\partial$ to pull the $\zeta^\mu$ into the integral, giving us $\partial_\mu\langle j^\mu\dots\rangle$ inside the integral. By Ward-Takahashi (eq. (1)), this also only consists of contact terms that don't contribute to the scattering amplitude, and therefore $\zeta^\mu M_\mu = 0$.
The only assumptions that went into this argument are that we have a global continuous symmetry with conserved current $j_\mu$, and that the equation of motion for the gauge field is $\partial^2 A_\mu = j_\mu$. This is for the abelian case.
For the non-Abelian Yang-Mills case, the corresponding identities get more complicated, although they still follow from the general logic of the Ward-Takahashi identities. The non-Abelian versions are called Slavnov-Taylor identities and also involve the Faddeev-Popov ghost fields, but effectively also mean that the longitudinal polarization decouple from all physical processes.
Finally, we can address the "equality" $$ \lvert e,p\rangle = \lvert e',p'\rangle + Q\lvert \psi\rangle.$$ This should be thought of as an equality we impose on the space of states to get rid of negative/zero-norm states. A priori, $\lvert e,p\rangle$ and $\lvert e',p'\rangle$ are different states, but we force this equation in the usual manner of a quotient space onto the physical space of states. Two elements that differ by an image of $Q$ are declared to be equal; more precisely, the physical Hilbert space is the cohomology of $Q$, that is, $\ker(Q)/\mathrm{im}(Q)$. The Ward identity ensures that this quotient is physically harmless: only because the longitudinal polarizations (which correspond to $\mathrm{im}(Q)$ in the non-Abelian BRST formulation) decouple are we allowed to say that two states differing only by such a polarization are equal, since this guarantees that the scattering amplitudes of all states we just declared equal actually are equal. Without the Ward identity, taking the quotient by the zero-norm states is physically inconsistent.
|
In this summary, I assume you are familiar with Markov Decision Processes. In a Markov Decision Process (MDP), an agent interacts with the environment, by taking actions that induce a change in the state of the environment.
MDP: The robot knows the position of the fuze bottle. Image courtesy of the Personal Robotics Lab [6, 7]
An important assumption in MDPs is that the agent knows the true state of the world. So, our robotic arm would know exactly where the fuze bottle is located. Partially observable Markov decision processes (POMDPs) relax this assumption: the agent does not know the true state, but it receives observations through its sensors that enable it to estimate that state.
POMDP: The robot retains a probability distribution over possible fuze bottle locations. Image courtesy of the Personal Robotics Lab [6, 7]
For each state and action, the agent receives a reward $r_t$. As in the case of MDPs, the goal of the agent is to maximize the expected accumulated reward that it will receive over a time horizon $T$:
\begin{equation}
\mathbb{E}\left[ \sum_{t=0}^{T-1} r_t \right]
\end{equation}
Formally, we define a partially observable Markov decision process as a tuple $<S, A, \mathcal{T}, \Omega, O, R>$, where:
$S$: finite set of world states; in our example, that is the set of possible robot configurations and positions of the fuze bottle.
$A$: set of robot actions; for our robotic arm, an action can be a displacement of the robot end-effector as well as a grasping action.
$\mathcal{T}$: $S \times A \rightarrow \Pi(S)$ is a state-transition function, mapping a world state and robot action to a probability distribution over states. It represents how the true world state changes given a robot action; in our example it encodes the uncertainty in the motion of the robot and of the fuze bottles.
$\Omega$ is a finite set of observations; for the object manipulation example, it can be a set of possible point clouds from a depth sensor or readings from the contact sensors in the robot's fingers.
$O: S \times A \rightarrow \Pi(\Omega)$ is the observation function, which maps a world state and robot action to a probability distribution over observations. It represents the uncertainty in the observations by the robot's sensors.
$R: S \times A \rightarrow \mathbb{R}$ is a reward function that the agent receives given an action at a particular world state, for instance a positive reward if the robot succeeds in picking up the object and a negative reward for robot failure.
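To make the tuple concrete, here is a minimal container for a discrete POMDP. This is a plain-Python sketch for illustration, not code from any of the referenced papers:

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class POMDP:
    states: Sequence[str]                 # S
    actions: Sequence[str]                # A
    observations: Sequence[str]           # Omega
    T: Callable[[str, str, str], float]   # T(s, a, s') = P(s' | s, a)
    O: Callable[[str, str, str], float]   # O(s', a, o) = P(o | s', a)
    R: Callable[[str, str], float]        # R(s, a)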
The agent cannot observe the true state directly, so it maintains a belief $b$: a probability distribution over states, with $b(s)$ the probability that the world is in state $s$. After taking action $a$ and receiving observation $o$, the belief is updated by Bayes' rule:
\begin{align}
b'(s') &= P(s'|o, a, b) \\
&= \frac{P(o | s', a, b) P(s' | a, b)}{P(o | a,b)} &(\textit{from Bayes' rule}) \\
&= \frac{P(o|s',a,b) \sum_{s \in S} P(s' | a,b,s)P(s | a, b)}{P(o | a, b)} & (\textit{marginalization})\\
&= \frac{P(o|s',a) \sum_{s \in S} P(s' | a, s)P(s | b)}{P(o | a, b)} & (\textit{from conditional independence})\\
& = \frac{O(s',a, o) \sum_{s \in S} \mathcal{T}(s, a, s') b(s)}{P(o|a,b)} & (\textit{by definition of $~\mathcal{T},O,b$})
\end{align}
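This derivation translates almost line-by-line into code. A sketch using the POMDP container above, with beliefs represented as dicts mapping states to probabilities:

def belief_update(m, b, a, o):
    # b'(s') ~ O(s', a, o) * sum_s T(s, a, s') * b(s), then normalize.
    new_b = {s2: m.O(s2, a, o) * sum(m.T(s, a, s2) * b[s] for s in m.states)
             for s2 in m.states}
    norm = sum(new_b.values())  # this is P(o | a, b)
    if norm == 0.0:
        raise ValueError("observation o has zero probability under belief b")
    return {s2: p / norm for s2, p in new_b.items()}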
Now, how should the agent optimize its expected accumulated reward? The agent can control only the action $a$ that it will take, given its belief $b$. We call the policy of the agent a mapping from beliefs to actions:
$\pi: B \rightarrow A$
The optimal policy $\pi_t^*(b)$ is the policy that returns the optimal action, that is the action that maximizes the expected reward that the agent will receive over the next $t$ steps.
To understand the computation of the optimal policy, we need to first note the concept of a policy tree. A policy tree $p$ is a tree describing sequences of actions and observations. Each arc in the tree is labelled with an observation, and each node with an action to be performed when the observation leading to that node is received.
Now, to compute the optimal policy, we need a metric of how good a given policy tree p is. The value of executing a policy tree p in state s is the immediate reward by executing the action $a(p)$ at the root node of the tree, plus the expected value of the future.
\begin{align}
V_t^p(s)&= R(s,a(p)) +(\textit{Expected value of the future})
\end{align}
The value of the future depends (i) on which state the agent will end up, and (ii) on which observation $o_i$ the agent will make. The agent does not have this information in advance, so we will take the expectation over subsequent states and observations: \begin{align}
V_t^p(s)&= R(s,a(p)) + \sum_{s'\in S} P(s'|s, a(p)) \sum_{o_i \in \Omega} P(o_i|s',a(p)) V_{t-1}^{p,o_i}(s')
\end{align}
Based on our definitions of the transition function $\mathcal{T}$ and observation function $O$, we have:
\begin{align}
V_t^p(s)&= R(s,a(p)) + \sum_{s'\in S} \mathcal{T}(s, a(p),s') \sum_{o_i \in \Omega}O(s', a(p),o_i) V_{t-1}^{p,o_i}(s')
\end{align}
We see that the value of a state can be computed by a one-step look-ahead using $V_{t-1}$. We can then compute the value for every state $s$ recursively, using dynamic programming [4].
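As a sketch of that recursion, with a policy tree represented as a nested dict holding an action and one subtree per observation (a hypothetical representation chosen for illustration):

def tree_value(m, tree, s):
    # V_t^p(s) = R(s, a(p)) + sum_{s'} T(s,a,s') * sum_o O(s',a,o) * V_{t-1}^{p,o}(s')
    a = tree["action"]
    v = m.R(s, a)
    for s2 in m.states:
        for o, subtree in tree.get("subtrees", {}).items():
            v += m.T(s, a, s2) * m.O(s2, a, o) * tree_value(m, subtree, s2)
    return v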
Sadly, the agent does not know the true state, but has a belief $b$, which is a probability distribution over states. Therefore, we need to compute the value of executing a policy tree $p$ given an agent belief $b$. We do this by marginalizing over the agent's belief $b$:
\begin{align}
V_t^p(b) &=\sum_{s \in S} b(s) V_t^p(s)
\end{align}
We can write the above equation more compactly by writing the sum as an inner product of the belief with a vector $\alpha_t^p = \langle V_t^p(s_1), \dots , V_t^p(s_n)\rangle$
\begin{align}
V_t^p(b) = b \cdot \alpha_t^p
\end{align}
So, we see that each policy tree $p$ is associated with a vector $\alpha_t^p$; these are called "alpha vectors." We have also computed the value of a given policy tree $p$ at a given belief $b$.
So, given a set of $t$ step policy trees $P_t$, we can compute the value of the optimal policy as the value of starting in belief $b$ and executing the best policy tree in that belief:
\begin{align}
V_t(b) = \max_{p \in P_t} b \cdot \alpha_t^p \end{align}
This is equivalent to executing the best root action at time $t$, and the best assignment of policy trees, with their corresponding $\alpha$ vectors, at $t-1$:
\begin{align} V_t(b) = \max_{a \in A}\left [ \sum_{s \in S}R(s,a)b(s) + \gamma \sum_{o_i \in \Omega} \max_{\alpha \in P_{t-1}} \sum_{s \in S} \sum_{s'\in S} P(o_i|s',a)P(s'|s,a)b(s) \alpha(s') \right ] \end{align}
The equation above can be solved with exact value iteration: start with a set of $t-1$ step policy trees, and iteratively use it to construct a superset of $t$ step policy trees. Unfortunately, the number of policy trees grows exponentially at each step: we have $|A|$ actions, and for every action we need to consider $|P_{t-1}|^{|\Omega|}$ new trees. This makes the representation intractably large over time.
Instead, we can use sampling-based algorithms to efficiently find approximate solutions. Most such algorithms alternate between sampling in belief space and performing a one-step look-ahead, which propagates the information from the children of $b$ back to the point $b$.
Here is an example of how this is possible [3]: let's assume a sampled set of belief points $b \in B$, and substitute $\alpha^{a,*}(s)=R(s,a)$ and $\alpha^{a,o_i}(s)= \sum_{s'\in S} P(o_i|s',a)P(s'|s,a)\alpha(s')$. Then, we can rewrite the value function as:
\begin{align} V_t(b) =\max_{a \in A}\left [ \alpha^{a,*}\cdot b + \sum_{o_i \in \Omega} \max_{\alpha^{a, o_i} \in P_{t-1}} \alpha^{a, o_i}\cdot b \right] \end{align}
Then, for each sampled belief $b$ we can compute the best $\alpha^{a, o_i}$ and store it ahead of time. Each time we update the value at $t$ from the $t-1$ policy trees, we then only need to consider $|A|$ new trees!
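A sketch of this one-step point-based backup for a single sampled belief, again using the container above (alpha vectors as dicts over states; gamma is the discount factor from the equation above):

def backup(m, alphas, b, gamma=0.95):
    # Return the best new alpha vector at belief b, built from the set `alphas`.
    best_val, best_alpha = float("-inf"), None
    for a in m.actions:
        g = {s: m.R(s, a) for s in m.states}
        for o in m.observations:
            # alpha^{a,o}(s) = sum_{s'} O(s',a,o) T(s,a,s') alpha(s')
            cands = [{s: sum(m.O(s2, a, o) * m.T(s, a, s2) * alpha[s2]
                             for s2 in m.states) for s in m.states}
                     for alpha in alphas]
            best = max(cands, key=lambda v: sum(b[s] * v[s] for s in m.states))
            for s in m.states:
                g[s] += gamma * best[s]
        val = sum(b[s] * g[s] for s in m.states)
        if val > best_val:
            best_val, best_alpha = val, g
    return best_alpha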
This, however, raises the question: which beliefs should we sample? Recent algorithms [5] have achieved dramatic performance improvements by attempting to sample from the belief space that is reachable by the optimal policy, since it is usually much smaller than the full belief space.
We thank Siddhartha Srinivasa and Vaibhav Unhelkar for their feedback.
References
[1] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. "Planning and acting in partially observable stochastic domains." Artificial intelligence, 1998.
[2] N. Roy, 16420: Planning Under Uncertainty, Massachusetts Institute of Technology, delivered Fall 2010.
[3] J. Pineau, G. Gordon, and S. Thrun. "Point-based value iteration: An anytime algorithm for POMDPs." IJCAI, 2003.
[4] D. Bertsekas, Dynamic Programming and Optimal Control, Athena scientific, 1995.
[5] H. Kurniawati, D. Hsu, W. Sun Lee. "SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces." Robotics: Science and Systems, 2008.
[6] Michael Koval. "Robust Manipulation via Contact Sensing." PhD Thesis. Carnegie Mellon University, 2016.
[7] Personal Robotics Lab. University of Washington, 2018.
|
I'm reading Matsuzaka's Topology and it says a bijective function $f$ is continuous iff it is open. The book says this is obvious, but it's not obvious to me. How can I prove it?
This isn't true: consider the function $f:[0,1) \rightarrow S^1$ given by $f(x) = e^{2\pi ix},$ where the notation $S^1$ refers to the $1$-sphere, defined by: $$S^1 = \{z \in \mathbb{C} : |z|=1\}.$$ Then:
- $f$ is a bijection;
- $f$ is continuous;
- $f$ isn't open: for instance, the set $f([0,1/2))$ isn't open in $S^1$.
The author probably means that a bijective function $f:X \to Y$ is open iff its inverse is continuous, and that $f$ is continuous iff its inverse is open. This is just the observation that if $g: Y \to X$ is the inverse of $f$, then for any (open) subset $O$ of $Y$ we have $f^{-1}[O] = g[O]$.
Also true for bijections: $f$ is open iff $f$ is closed, because $f[X\setminus A] = Y \setminus f[A]$ for all $A$ (so if $f$ is open, the image of a closed set, which is of the form $f[X\setminus O]$ is again the complement of an open set $f[O]$, hence closed, and vice versa).
|
Okay. After 12 days, your (hypothetical) homework is past due. Let's look at a couple of solutions.
"when you're lost, draw what you know."
Undergrads' wisdom. Let's see:
Without loss of generality, set up a framework placing the fence center at $O=(0,0)$ and the tether anchor at $A=(r_0,0)$. In this framework, a point $(x,y)$ is edible by the goat if it is both inside the fence ($x^2+y^2\le r_0^2$) and within reach of the tether ($(x-r_0)^2+y^2\le r_1^2$).
In other words, the edible region is the intersection of two disks $D_0$, centered in $O$ with radius $r_0$ and $D_1$, centered in $A$ with radius $r_1$. This region can be exactly partitioned by the straight line segment joining the intersections of the two circles boundaries of these disks:
Each of these regions is the intersection of a disk of radius $r$ and a half-plane cutting the disk at a distance $X\le r$ from its center.
Therefore, a first, (almost) purely geometrical, solution to your problem is to derive the area of such a surface, and apply it to both the "yellow" and the "red" surfaces. Let's consider the "yellow" one, delimited by the arc $B'AB$ and the straight segment $BB'$. This surface is symmetric about $OA$; therefore, its area is twice that of the surface bounded by the arc $AB$ and the straight segments $BX$ and $XA$.
This latter surface can in turn be considered as the difference between the circular sector $OAB$ and the triangle $OXB$.
Let's call $\theta$ the angle $\widehat{AOB}$. It can be determined from the position of $B$. We have of course $\cos\theta=\frac{OX}{r}=\frac{B_x}{r}$ and $\tan\theta=\frac{B_y}{B_x}$. At the high school level, this "determination" is a bit fuzzy, but will be clarified in the first year of college...
The circular sector has area $\frac{r^2\theta}{2}$, the triangle has area $\frac{OX\cdot XB}{2}=\frac{B_x B_y}{2}$. The yellow surface has area twice their difference, therefore $r_0^2 \theta - B_x B_y$.
For similar reasons (left to the reader as an exercise ; beware the signs !), with $\cos\theta_1=\frac{B_y}{r_1}$, the "red" sector has area $r_1^2\theta_1-(r_0-B_x) B_y$.
All that is lacking are the coordinates of $B$ which are determined by the fact that it belongs to both the boundary circles (the fence and the tether). Since there are two solution points, we'll select the one with positive ordinate :
x, y, r_0, r_1 = var("x,y,r_0,r_1")
assume(x, "real", y, "real", r_0 > 0, r_1 > 0, r_1 < 2*r_0)
O = vector([0, 0])      # fence center
A = vector([r_0, 0])    # tether anchor
b = vector([x, y])
E0 = (b - O).norm()^2 == r_0^2   # on the fence circle
E1 = (b - A).norm()^2 == r_1^2   # at the tether's full reach
Sol = solve([E0, E1], [x, y], solution_dict=True)
# Keep the intersection point with positive ordinate:
if bool(Sol[0].get(y) > 0):
    B = vector([Sol[0].get(x), Sol[0].get(y)])
else:
    B = vector([Sol[1].get(x), Sol[1].get(y)])
X = B[0]
Y = B[1]
That gives us $B=\left(\frac{2 \, r_{0}^{2} - r_{1}^{2}}{2 \, r_{0}},\ \frac{\sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{1}}{2 \, r_{0}}\right)$.
This also determines the angle $\theta=\widehat{AOB}$ by either theta = arccos(X/r_0) or theta = arctan2(Y, X). (Note: the third "obvious" choice, theta = arcsin(Y/r_0), is unsuitable here, given the choice of branches in Sage's implementation of arcsin.) We determine $\theta_1$ in a similar way (again, mind the signs...).
Summing the partial areas and finding the root (value of $r_1$ leaving an edible area of 10 when $r_0=10$) is left as an exercise to the reader...
This first answer will be edited by comparing it to an analytical solution using calculus (much easier, but with a surprise Sage bug...).
EDIT : Now, an analytical solution using all-singin', all-dancin' calculus.
Let's consider an element of the "yellow" surface: an (infinitely) small vertical strip delimited by the lines of equation $x=t$ and $x=t+dt$. The area of this strip will be $(y_u(t)-y_l(t))\,dt$, where $(t, y_l(t))$ and $(t, y_u(t))$ satisfy $E0$. The total "yellow" area will be the summation of such elemental areas from $t=X$ to $t=r_0$, i.e. $\displaystyle{\int_X^{r_0} (y_u(t)-y_l(t))\, dt}$.
Our previous work already gave us $X$. We need to compute explicitly the bounds $y_l$ and $y_u$, and the integral:
Sol0 = solve(E0, y)
Y0l(x) = Sol0[0].rhs()
Y0u(x) = Sol0[1].rhs()
g0(x) = (Y0u-Y0l)(x).integrate(x)
Yellow = (g0(r_0)-g0(X)).expand().factor()
A couple of notes:
We should be able to obtain the definite integral directly via integrate((Y0u-Y0l)(t), t, X, r_0); but we can't: Trac#27816 will crash Sage...
The integral, $\frac{2 \, \pi r_{0}^{3} + 4 \, r_{0}^{3} \arcsin\left(-\frac{2 \, r_{0}^{2} - r_{1}^{2}}{2 \, r_{0}^{2}}\right) - 2 \, \sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{0} r_{1} + \frac{\sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{1}^{3}}{r_{0}}}{4 \, r_{0}}$, bears some similarity to the geometrical solution seen above. This is not random...
For the same reasons, we can get the area of the "red" surface:
Sol1 = solve(E1, y)
Y1l(x) = Sol1[0].rhs()
Y1u(x) = Sol1[1].rhs()
g1(x) = (Y1u-Y1l)(x).integrate(x)
Red = (g1(X)-g1(r_0-r_1)).expand().factor()
which allows us to compute the expression of the edible fraction as a function of the fence radius and the tether length:
EdibleFraction(r_0,r_1)=((Yellow+Red)/(pi*r_0^2)).expand().factor()
That is
$$ \frac{\frac{{\left(2 \, \pi r_{0} - 4 \, r_{0} \arcsin\left(\frac{r_{1}}{2 \, r_{0}}\right) - \frac{\sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{1}}{r_{0}}\right)} r_{1}^{2}}{r_{0}} + \frac{2 \, \pi r_{0}^{3} + 4 \, r_{0}^{3} \arcsin\left(-\frac{2 \, r_{0}^{2} - r_{1}^{2}}{2 \, r_{0}^{2}}\right) - 2 \, \sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{0} r_{1} + \frac{\sqrt{4 \, r_{0}^{2} - r_{1}^{2}} r_{1}^{3}}{r_{0}}}{r_{0}}}{400 \, \pi} $$
which looks reasonable:
The numerical answer is immediate:
sage: find_root(lambda u:EdibleFraction(10,u)-1/2, 0, 20)
11.587284730181219
One can check that trying to get the definite integral directly gives curious results:
- sympy can't give us a usable answer;
- fricas gives a different answer, using arctan instead of arcsin (we saw that this was another possibility);
- giac can do the integration, but its result can't be used numerically; this is discussed on sage-devel, but not yet filed as a bug;
- mathematica needs a bit of rewriting (it doesn't tolerate underscores in variable names...), and gives a solution close to the one given by fricas.
Next installment: numerical integration:
Why you shouldn't.
Why you sometimes have to.
How not to do it.
How to speed it up.
Use of Cython.
Trading speed for precision with stochastic integration.
Stay tuned...
|
A minimization problem with free boundary related to a cooperative system.
Speaker: Karen Yeressian
Date: Thursday, June 14, 2018
Time: 15:00
Location: Conference room
Subject: Free Boundary Problems
Abstract
I will talk about the minimum problem for the functional
$$\int_{\Omega}\left( \vert \nabla u \vert^{2} + Q^{2}\chi_{\{\vert u\vert>0\}}\right)dx$$
with the constraint $u_i\geq 0$ for $i=1,\ldots,m$ where $\Omega\subset\mathbb{R}^{n}$ is a bounded domain and $u=(u_1,\cdots,u_m)\in H^{1}(\Omega;\mathbb{R}^{m})$.
First we derive the Euler equation satisfied by each component. Then we show that the noncoincidence set $\{\vert u\vert>0\}$ is (locally) nontangentially accessible. Having this we are able to establish sufficient regularity of the force term appearing in the Euler equations and derive the regularity of the free boundary $\Omega\cap\partial\{\vert u\vert>0\}$.
This is a joint work with Professor Luis A. Caffarelli from University of Texas at Austin and Professor Henrik Shahgholian from KTH Royal Institute of Technology.
|
Neural Stack API
Warning
This API is deprecated since v1.5.2; users are advised to use the new Tensorflow API.
Summary
In v1.4.2, the neural stack API was introduced. It defines some primitives for modular construction of neural networks. The main idioms are computational layers & stacks.
The neural stack API extends these abstract skeletons by defining the following kinds of primitives.
Computational layers (NeuralLayer): define how inputs are propagated forward.
Activation functions (Activation[I]): wrap an activation and its derivative.
Computational stacks (GenericNeuralStack): composed of a number of layers.
Note
The key point to understand is that once a layer or stack is defined, it is immutable, i.e. the parameters defining its forward computation can't be changed. The API rather provides factory objects which can spawn a particular layer or stack with any parameter assignments.

Activation Functions
Activation functions are implemented using the Activation[I] object; its apply method requires two arguments: the implementation of the activation, and the implementation of the derivative of the activation.
//Define forward mapping
val actFunc: (I) => I = _

//Define derivative of forward mapping
val gradAct: (I) => I = _

val act = Activation(actFunc, gradAct)
The dynaml.models.neuralnets package also contains implementations of the following activations.
Sigmoid: $g(x) = \frac{1}{1 + \exp(-x)}$
val act = VectorSigmoid
Tanh: $g(x) = \tanh(x)$
val act = VectorTansig
Linear: $g(x) = x$
val act = VectorLinear
Rectified Linear: $g(x) = \begin{cases} x & x \geq 0\\ 0 & \text{else}\end{cases}$
val act = VectorRecLin
Computational Layers
Computational layers are the most basic unit of neural networks. They define transformations of their inputs and with that define the forward data flow.
Every computational layer generally has a set of parameters describing how this transformation is going to be calculated given the inputs.
Creating Layers
Creating an immutable computational layer can be done using the
NeuralLayer object.
import scala.math._

val compute = MetaPipe(
  (params: Double) => (x: Double) => 2d*Pi*params*x
)

val act = Activation(
  (x: Double) => tanh(x),
  (x: Double) => tanh(x)/(sinh(x)*cosh(x)))

val layer = NeuralLayer(compute, act)(0.5)
Vector feed forward layers
A common layer is the feed forward vector-to-vector layer, which is given by: $$ \mathbf{h} = \sigma(\mathbf{W} \mathbf{x} + \mathbf{b}) $$
Layer Factories
Since the computation and activation are the only two relevant inputs required to spawn any computational layer, the
NeuralLayerFactory[Params, Input, Output] class is the
factory for creating layers on the fly. Layer factories are data pipes which take the layer parameters as input and create computational layers on demand.
A layer factory can be created as follows.
val fact = NeuralLayerFactory(compute, act)

val layer1 = fact(0.25)
Vector layer factory
Vector layers can be created using the
Vec2VecLayerFactory
val layerFactory = new Vec2VecLayerFactory(VectorTansig)(inDim = 4, outDim = 5)

Neural Stacks
A neural stack is a sequence of computational layers. Every layer represents some computation, so the neural stack is nothing but a sequence of computations or forward data flow. The top level class for neural stacks is
GenericNeuralStack. Extending the base class, there are two stack implementations.
Eagerly evaluated stack: Layers are spawned as soon as the stack is created.
val layers: Seq[NeuralLayer[P, I, I]] = _

//Variable argument apply function
//so the elements of the sequence
//must be enumerated.
val stack = NeuralStack(layers:_*)
Lazy stack: Layers are spawned only as needed, but once created they are
memoized.

val layers_func: (Int) => NeuralLayer[P, I, I] = _

val stack = LazyNeuralStack(layers_func, num_layers = 4)

Stack Factories
Stack factories like layer factories are pipe lines, which take as input a sequence of layer parameters and return a neural stack of the spawned layers.
val layerFactories: Seq[NeuralLayerFactory[P, I, I]] = _

//Create a stack factory from a sequence of layer factories
val stackFactory = NeuralStackFactory(layerFactories:_*)

//Create a stack factory that creates
//feed forward neural stacks that take as inputs
//breeze vectors.
//Input, Hidden, Output
val num_units_by_layer = Seq(5, 8, 3)
val acts = Seq(VectorSigmoid, VectorTansig)
val breezeStackFactory = NeuralStackFactory(num_units_by_layer)(acts)
|
E C G Sudarshan
Articles written in Pramana – Journal of Physics
Volume 61 Issue 4 October 2003 pp 645-653 Research Articles
We examine a number of recent proofs of the spin-statistics theorem. All, of course, get the target result of Bose-Einstein statistics for identical integral spin particles and Fermi-Dirac statistics for identical half-integral spin particles. It is pointed out that these proofs, distinguished by their purported simple and intuitive kinematic character, require assumptions that are outside the realm of standard quantum mechanics. We construct a counterexample to these non-dynamical kinematic ‘proofs’ to emphasize the necessity of a dynamical proof as distinct from a kinematic proof. Sudarshan’s simple non-relativistic dynamical proof is briefly described. Finally, we make clear the price paid for any kinematic ‘proof’.
Volume 73 Issue 6 December 2009 pp 961-968
We propose to substitute Newton’s constant $G_{N}$ for another constant $G_{2}$, as if the gravitational force would fall off with the $1/r$ law, instead of the $1/r^{2}$; so we describe a system of natural units with $G_{2} , c$ and $\hbar$. We adjust the value of $G_{2}$ so that the fundamental length $L = L_{\text{Pl}}$ is still the Planck’s length and so $G_{N} = L \times G_{2}$. We argue for this system as (1) it would express longitude, time and mass without square roots; (2) $G_{2}$ is in principle disentangled from gravitation, as in (2 + 1) dimensions there is no field outside the sources. So $G_{2}$ would be truly universal; (3) modern physics is not necessarily tied up to $(3 + 1)$-dim. scenarios and (4) extended objects with $p = 2$ (membranes) play an important role both in M-theory and in F-theory, which distinguishes three $(2, 1)$ dimensions.
As an alternative we consider also the clash between gravitation and quantum theory; the suggestion is that non-commutative geometry $[x_{i} , x_{j}] = \Lambda^{2} \theta_{ij}$ would cure some infinities and improve black hole evaporation. Then the new length 𝛬 shall determine, among other things, the gravitational constant $G_{N}$.
|
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$; during what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that after taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals?
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q|(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
|
Here's a problem restated from Ross Starr's
General Equilibrium Theory.
Consider a two-commodity economy with an excess demand function $Z(p)=(Z_1(p),Z_2(p))$. The price space is $p \in P = \left \{ p | p \in \mathbb{R}^{2}, p \geq 0, p_1 + p_2 =1 \right \}$. Let $Z(p)$ be continuous, bounded and fulfill Walras' Law as an equality, that is $p_1Z_1(p)+p_2Z_2(p)=0$. Assume $Z_1(0,1)>0$, $Z_1(1,0)<0$, $Z_2(0,1)<0$, $Z_2(1,0)>0$. Use the
intermediate value theorem and Walras' Law to show that the economy has a competitive equilibrium. That is, demonstrate that there is a price vector $p^* \in P$ so that $Z(p^*)=(0,0)$.
And I have a hint: Characterize $Z(p)$ as $Z(\alpha , 1- \alpha)$ for $0 \leq \alpha \leq 1$. Use the intermediate value theorem to find $0 \leq \alpha \leq 1$ so that $Z_1(\alpha , 1- \alpha)=0$. Then apply Walras' Law.
I'm having trouble finding $\alpha$; how could I find it?
|
Regression is a very fundamental concept in statistics, machine learning, and neural networks. Imagine plotting, back in high school, the correlation between rainfall frequency and agricultural production. An increase in rainfall generally increases agricultural production. Fitting a line to those points enables us to predict the production rate under different rain conditions. That was actually the simplest form of linear regression.
In simple words, regression is a study of how to best fit a curve to summarize a collection of data. It's one of the most powerful and well-studied types of supervised learning algorithms. In regression, we try to understand the data points by discovering the curve that might have generated them. In doing so, we seek an explanation for why the given data is scattered the way it is. The best-fit curve gives us a model for explaining how the dataset might have been produced. There are many types of regression, e.g. simple linear regression, polynomial regression, multivariate regression. In this post, we will discuss simple linear regression only, and later we will discuss the rest. We will also provide the Python code from scratch at the end of the post.
Simple regression, as the name implies, is just a very simple form of regression, where we assume that we have just one input and we're trying to fit a line.
Data Set
Consider a data set containing age and the number of homicide deaths in the US in the year 2015:
age    num_homicide_deaths
21     652
22     633
23     653
24     644
25     610
If we plot the dataset and the line that best fits it, we see:
When we are talking about regression, our goal is to predict a continuous variable output given some input variables. For simple regression, we only have one input variable x which is the age in our case and our desired output y which is num of homicide deaths for each age. Our dataset then consists of many examples of x and y, so: $$ D = \{(x_1,y_1), (x_2,y_2), …, (x_N,y_N)\} $$ where \(N\) is the number of examples in the data set. So, our data set will look like: $$ D=\{(21,652),(22,633), …,(50,197)\} $$
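As a quick illustration, the rows shown above can be encoded as parallel Python lists (hypothetical variable names; the full data set continues up to age 50, as noted):

x = [21, 22, 23, 24, 25]         # age
y = [652, 633, 653, 644, 610]    # number of homicide deaths
N = len(x)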
Model
So, how can we mathematically model simple linear regression? Since the goal is to find the perfect line, let's start by defining the
model (the mathematical description of how predictions will be created) as a line. It’s very simple. We’re assuming we have just one input, which in this case is, age of people and one output which is the number of homicide deaths and we’re just gonna fit a line. And what’s the equation of a line? $$f(x) = w_0 + w_1*x$$
what this regression model then specifies is that each one of our observations \( y_i\) is simply that function evaluated at \(x_i\), that is \(w_0\) plus \(w_1*x_i\), plus an error term which we call \(\epsilon_i\). So this is our regression model $$y_i = w_0 + w_1*x_i + \epsilon_i$$ and to be clear, this error, \(\epsilon_i\), is the distance from our specific observation back down to the line. The parameters of this model, \(w_0\) and \(w_1\), are the intercept and the slope, and we call these the regression coefficients.
Quality Metric
We have chosen our model with two regression coefficients \(w_0\) and \(w_1\). For our data set, there can be infinitely many choices of these parameters. So our task is to find the best choice of parameters and we have to know how to measure the quality of the parameters or measure the quality of the fit. So in particular, we define a loss function (also called a cost function), which measures how good or bad a particular choice of \(w_0\) and \(w_1\) is. Values of \(w_0\) and \(w_1\) that seem poor should result in a large value of the loss function, whereas good values of \(w_0\) and \(w_1\) should result in small values of the loss function. So what’s the cost of using a specific line? It has many forms. But the one we’re gonna talk about here is Residual Sum of Squares (RSS): $$ RSS(w_0, w_1)= \sum_{i=1}^N(y_i-[ w_0 + w_1*x_i])^2$$
What the residual sum of squares does is add up the squared errors between what we've estimated the relationship between \(x\) and \(y\) to be, and what the actual observation \(y\) is. And we talked about the error as the \(\epsilon_i\).
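As a sketch, this loss is a one-liner in Python (reusing the x, y lists from above):

def rss(w0, w1, x, y):
    # Residual Sum of Squares: squared vertical errors to the line w0 + w1*x.
    return sum((yi - (w0 + w1 * xi)) ** 2 for xi, yi in zip(x, y))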
Model Optimization
Our cost function is the residual sum of squares, and for any given line we can compute its cost. So, for example, given two different lines with two different residual sums of squares, how do we know which choice of parameters is better? The one with the minimum RSS.
Our goal is to minimize over all possible \(w_0\) and \(w_1\), intercepts and slopes respectively, but a question is, how are we going to do this? The mathematical notation for this minimization over all possible \(w_0\), \(w_1\) is $$\min_{w_0,w_1}\sum_{i=1}^N(y_i - [w_0 + w_1x_i])^2$$ So we want to find the specific values of \(w_0\) and \(w_1\), which we'll call \(\hat{w_0}\) and \(\hat{w_1}\) respectively, that minimize this residual sum of squares.
The red dot on the plot above shows where the desired minimum is. We need an algorithm to find this minimum. We will discuss two approaches: the closed-form solution and gradient descent.

Closed-form Solution for Linear Regression
From calculus, we know that at the minimum the derivatives will be \(0\). So, if we compute the gradient of our RSS: $$ \begin{aligned} RSS(w_0, w_1) &= \sum_{i=1}^N(y_i-[ w_0 + w_1*x_i])^2 \end{aligned}$$ $$\begin{aligned} &\nabla RSS(w_0, w_1) = \begin{bmatrix} \frac{\partial RSS}{\partial w_0} \\ \\ \frac{\partial RSS}{\partial w_1} \end{bmatrix} \\ &=\begin{bmatrix} -2\sum_{i=1}^N{[y_i - (w_0 + w_1 * x_i)]} \\ \\ -2\sum_{i=1}^N[y_i - (w_0 + w_1 * x_i)] *x_i \end{bmatrix} \end{aligned}$$
Take this gradient, set it equal to zero and find the estimates for \(w_0\), \(w_1\). Those are going to be the estimates of the two parameters of our model that define our fitted line. $$ \begin{aligned} &\nabla RSS(w_0, w_1) = 0 \end{aligned} $$ implies, $$\begin{aligned} &\begin{bmatrix} -2\sum_{i=1}^N{[y_i - (w_0 + w_1 * x_i)]} \\ \\ -2\sum_{i=1}^N[y_i - (w_0 + w_1 * x_i)] *x_i \end{bmatrix} = 0 \\ &-2\sum_{i=1}^N{[y_i - (w_0 + w_1 * x_i)]} = 0, \\ &-2\sum_{i=1}^N{[y_i - (w_0 + w_1 * x_i)]} * x_i = 0 \end{aligned}$$ Solving for \(w_0\) and \(w_1\) we get, $$ \begin{aligned} \hat{w_0} = \frac{\sum_{i = 1}^N y_i}{N} - \hat{w_1}\frac{\sum_{i=1}^N x_i}{N} \\ \hat{w_1} = \frac{\sum y_i x_i - \frac{\sum y_i \sum x_i}{N}}{\sum x_i^2 - \frac{\sum x_i \sum x_i}{N}} \end{aligned} $$
Now that we have the solutions, we just have to compute \( \hat{w}_1\) and then plug that in and compute \(\hat{w}_0\). To compute \( \hat{w}_1\) we need to compute a couple of terms e.g. sum over all of our observations \(\sum y_i\) and sum over all of our inputs \(\sum x_i\) and then a few other terms that are multipliers of our input and output \(\sum y_i x_i\) and \(\sum x_i^2\). Plug them into these equations and we get out what our optimal \(\hat{w}_0\) and \(\hat{w}_1\) are, that minimize our residual sum of squares.
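A minimal Python sketch of these two estimators (not the post's final code; the prediction at the end is just an illustration):

def fit_closed_form(x, y):
    # Closed-form least-squares estimates for simple linear regression.
    N = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    w1 = (sxy - sy * sx / N) / (sxx - sx * sx / N)   # slope
    w0 = sy / N - w1 * sx / N                        # intercept
    return w0, w1

w0_hat, w1_hat = fit_closed_form(x, y)
print(w0_hat + w1_hat * 30)   # predicted number of deaths at age 30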
Gradient Descent for Linear Regression
The other approach that we will discuss is gradient descent, where we're walking down this surface of residual sum of squares trying to get to the minimum. Of course, we might overshoot it and go back and forth, but that's the general idea of this iterative procedure. $$\begin{aligned} &\nabla RSS(w_0, w_1) \\ &= \begin{bmatrix} -2\sum_{i=1}^N{[y_i - (w_0 + w_1 * x_i)]} \\ \\ -2\sum_{i=1}^N[y_i - (w_0 + w_1 * x_i)] *x_i \end{bmatrix} \\&= \begin{bmatrix} -2\sum_{i=1}^N{[y_i - \hat{y}_i (w_0, w_1)]} \\ \\ -2\sum_{i=1}^N[y_i - \hat{y}_i(w_0,w_1)] *x_i \end{bmatrix} \end{aligned}$$
Then our gradient descent algorithm will be: $$\begin{aligned}&while \; not \; converged: \begin{bmatrix} w_0^{(t+1)} \\ w_1^{(t+1)} \end{bmatrix}\\ &= \begin{bmatrix}w_0^{(t)} \\ w_1^{(t)} \end{bmatrix} - \eta* \begin{bmatrix} -2\sum{[y_i - \hat{y}_i (w_0, w_1)]}\\ \\-2\sum[y_i - \hat{y}_i(w_0,w_1)] *x_i \end{bmatrix}\\ &= \begin{bmatrix} w_0^{(t)} \\ w_1^{(t)} \end{bmatrix} +2\eta* \begin{bmatrix} \sum{[y_i - \hat{y}_i (w_0, w_1)]} \\ \\ \sum[y_i - \hat{y}_i(w_0,w_1)] *x_i \end{bmatrix} \end{aligned}$$
So gradient descent does this, we’re going to repeatedly update our weights. So set \(W\) to \(W\) minus \(\eta\) times the derivative, where \(W\) is the vector. We will repeatedly do that until the algorithm converges. \(\eta\) here is the learning rate and controls how big a step we take on each iteration of gradient descent and the derivative quantity is basically the update or the change we want to make to the parameters \(W\).
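The same fit via the iterative update above. This is only a sketch: the learning rate, tolerance and iteration cap are illustrative choices, not values from the post, and on un-scaled inputs like these, convergence can be slow (hence the generous cap).

def fit_gradient_descent(x, y, eta=1e-4, tol=1e-9, max_iter=200000):
    w0, w1 = 0.0, 0.0
    for _ in range(max_iter):
        # Gradient of RSS with respect to (w0, w1).
        g0 = -2 * sum(yi - (w0 + w1 * xi) for xi, yi in zip(x, y))
        g1 = -2 * sum((yi - (w0 + w1 * xi)) * xi for xi, yi in zip(x, y))
        w0, w1 = w0 - eta * g0, w1 - eta * g1
        if g0 * g0 + g1 * g1 < tol:   # gradient ~ 0: stop iterating
            break
    return w0, w1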
Prediction for Linear Regression
After all the hard work, now we need to test our machine learning model. The dataset we work on is generally split into two parts: one part is called the training data, where we do all the training, and the other is called the test data, where we test our model. We have developed equations for training, and using them we have obtained a calibrated set of weights. We will then use this set of weights to predict the result for new data using the equation $$ Prediction = \hat{w}_0 + \hat{w}_1 * data $$ where \( \hat{w}_0\) and \(\hat{w}_1\) are the optimized weights.
Now that we have finished the theoretical part of the tutorial now you can see the code and try to understand different blocks of the code.
|
Suppose $X$ and $Y$ are convex sets in $\mathbb{R}^d$ such that the origin is in each of their interiors. Then the dual of $X$, $X'$, is defined as the set of linear functionals $\alpha$ such that $\alpha(x) \leq 1$ for all $x \in X$. Is there a way to derive $(X+Y)'$ from $X'$ and $Y'$? I was thinking $(X+Y)' = X' \cap Y'$ but have been unable to prove this and am not sure if it is true.
Disappointingly, there is no simpler expression than the definition itself. In Combinatorial Convexity and Algebraic Geometry by Günter Ewald, page 105, the author states:
The polar body $(K+L)^*$ of a sum of convex bodies has, in general, no plausible interpretation in terms of $K^*, L^*$. Only in the case of direct sums do we present such an interpretation.
Namely: if $K$ and $L$ both contain $0$ and are contained in linear subspaces $A,B$ with $A\cap B=\{0\}$, then $(K+L)^*$ is the convex hull of $K^{*_A}\cup L^{*_B}$ where the subscripts on asterisks mean that the polar is taken within the indicated subspace.
An example: the polar of the Minkowski sum of line segments is the convex hull of (different) line segments: this matches the description of the unit ball of $\ell^1$.
The above does not apply in your case since you assume nonempty interior.
|
$\forall n\in\mathbb{N}^*, u_n = \sqrt{n+u_{n-1}}$ with $u_1 = 1$.
I've already shown that $ u_n \sim \sqrt{n}$ and that $\underset{n\rightarrow \infty}{\lim} u_n - \sqrt{n} = \dfrac{1}{2}$
How to show that, as $n\rightarrow \infty$, $u_n - \sqrt{n} - \dfrac{1}{2} \sim \dfrac{1}{8\sqrt{n}}$? (Or it might not be this, maybe something else? I don't really know... I intuited this result from the asymptotic expansion of $\sqrt{n}$.)
I know that I have to use the asymptotic expansion, but I don't know how to use it correctly (I've some difficulties to keep the $o(1)$).
What I've done: $u_n = \sqrt{n+u_{n-1}}= \sqrt{n}\left(1+\dfrac{1}{2\sqrt{n}} + o\!\left(\dfrac{1}{\sqrt{n}}\right)\right) = \sqrt{n} + \dfrac{1}{2} + o(1)$
|
Visualization for 2D Axisymmetric Electromagnetics Models
Today we’ll look at how to make 3D plots of vector fields that are computed using the 2D axisymmetric formulation found in the
Electromagnetic Waves, Frequency Domain interface within the RF and Wave Optics modules.
Creating 3D Plots from 2D Axisymmetric Solutions
Recall that the time harmonic
ansatz in the COMSOL software assumes that the field components oscillate in time according to e^{j\omega t}, where \omega is the angular frequency. In the 2D axisymmetric formulation, the angular dependence of the electric field is given by e^{-j m \phi}, where m is an integer that is specified by the user. As a result of the combined temporal and angular dependence, e^{j(\omega t-m \phi)}, the fields rotate about the z-axis. Our goal is to produce 3D plots from 2D axisymmetric solutions that contain this angular dependence. Using Revolved 2D Data Set for 3D Plots
After computing the solution to a 2D axisymmetric problem, COMSOL Multiphysics automatically produces a revolution of the 2D data set called “Revolution 2D”, which is listed under “Data Sets”, as shown below.
The revolved data set can be used in 3D plots. For the plots that we’re interested in, we’ll make a complete revolution from 0 to 360 degrees. The settings for “Revolution 2D 1” are shown below. You can see that under “Revolution Layers”, the start angle is set to 0 and the revolution angle is set to 360.
The coordinates for the plane in the 2D axisymmetric calculation are (r,z). The angle \phi is not defined since it is not part of the computational domain. However, it can be added as a coordinate in the 3D data set by checking the box beside “Define variables”. The variable name for the angle in the “Revolution 2D 1” data set is “rev1phi”. This can be used in expressions for plots and derived values, which we’ll do below.
Consider an axisymmetric resonant cavity with a rectangular cross section, shown below. Only the rectangular cross section is modeled in the 2D axisymmetric formulation.
We can use an Eigenfrequency study to compute the resonant modes. Suppose that we would like to plot the field quantities for a mode with m = 1. The magnitude of the electric field is plotted in the
rz-plane below on the left. Let's also plot the magnitude of the electric field on a surface that bisects the cavity. This is done with a 3D Slice plot of "emw.normE" in an xy-plane where the number of planes is set to 1. The magnitude of the field is plotted below on the right. It is axially symmetric since the field is a traveling wave rotating about the z-axis, which also follows since | e^{j(\omega t - m \phi)} | = 1.

Plotting the Electric Field's Radial Component
Now let’s plot the real part of the radial component of the electric field in the plane that bisects the cavity. Specifically, we want to plot Re \{ E_r(r,z) \, e^{j(\omega t-m \phi)} \} at t=0, where m=1. To get the angular dependence, we can use the variable “rev1phi” that was defined above in the Revolution 2D data set. In the expression field for the slice plot we’ll enter “Er*exp(-j*rev1phi)”. The color table for the Slice plot is set to Wave. Let’s also deform the Slice plot in the
z-direction by an amount that is proportional to that quantity. We can do this by adding a Deformation node to the Slice node that is already there. The expression field for the z-component of the deformation plot contains the same quantity “Er*exp(-j*rev1phi)” and the other components are 0.
The plot of the radial component of the electric field is shown below. Note that this quantity is complex and that by default COMSOL plots the real part. This is equivalent to entering “real(Er*exp(-j*rev1phi))”. The imaginary part could be plotted by entering “imag(Er*exp(-j*rev1phi))”.
Generating an Electric Field Arrow Plot in 3D
Next up, we’ll make a 3D arrow plot of the electric field. The expressions for the Arrow Plot require the Cartesian components of the field whereas the dependent variables in the axisymmetric model are the cylindrical components. For this plot, we’ll use the variable “rev1phi” to convert the cylindrical vector components to Cartesian and also to account for the angular variation of the field in the Revolution 2D data set as we did above. The expressions for the Arrow Plot are listed below.
x-component: (Er*cos(rev1phi)-Ephi*sin(rev1phi))*exp(-j*rev1phi)
y-component: (Er*sin(rev1phi)+Ephi*cos(rev1phi))*exp(-j*rev1phi)
z-component: Ez*exp(-j*rev1phi)
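As a sanity check, the same rotation can be written out in a few lines of Python/NumPy; this is only an illustration of the math above (with hypothetical names), not COMSOL syntax:

import numpy as np

def cartesian_field(Er, Ephi, Ez, phi, m=1):
    # Rotate cylindrical components into Cartesian ones and apply
    # the exp(-j*m*phi) angular dependence of the 2D axisymmetric ansatz.
    phase = np.exp(-1j * m * phi)
    Ex = (Er * np.cos(phi) - Ephi * np.sin(phi)) * phase
    Ey = (Er * np.sin(phi) + Ephi * np.cos(phi)) * phase
    return Ex, Ey, Ez * phase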
The grid for the arrow positions is 25 \times 25 \times 1.
Let’s also add the lower surface of the cavity and the inner cylinder to the plot to provide some background for the electric field arrows. One way to do this is by including a uniform gray Surface plot on the lower surface z = 0 and on the inner cylinder r = 0.025. This is accomplished using conditional statements in the expression for the Surface plot, for example, “(z<eps)+(r<(0.025+eps))”, where “eps” is the machine precision. Since the expression is non-zero on the surfaces of interest, then in the settings for the Surface plot the data range is manually set so that the minimum value is 0.5 and the maximum value is 2, as shown below.
Animating the Plot
Finally, this plot is turned into an animation using the “Player” or “Animation” features. Both are found under Export in the model tree. Player creates an animation to view in the COMSOL graphics window, while Animation writes it to a file. To make a frequency domain data set oscillate in time, the sequence type is changed to dynamic data extension. Below you can see the electric field rotating about the
z-axis:
Postprocessing Video Tutorials
A while back we ran a video tutorial series on the blog, featuring postprocessing tips and tricks. Check it out below:
|
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of f̂(z) = ∏_{m=1}^{∞} (cos...
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesnt't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
|
Short introduction
I created a function that interpolates the IES luminous intensities (candelas) using Hermite interpolation, so in my code all light sources have an $I(\theta, \phi)$ function; but for simplicity let's assume we are dealing with a hemispherical light source with constant intensity $I = 100\,cd$ and with monochromatic light.
I managed to compute the direct illumination part from an IES point light source by calculating the illuminance at each shading point using $E = \frac{I \cos(\theta_i)}{r^2}$, which I can easily convert to luminance $L_o$ by multiplying it by the BRDF (since all my materials are diffuse, $f_r = \frac{\rho}{\pi}$).
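In Python-like pseudocode, that direct term is just (a sketch of the formulas above, with hypothetical names):

from math import pi

def direct_luminance(I, cos_theta_i, r, rho):
    E = I * max(cos_theta_i, 0.0) / r ** 2   # E = I*cos(theta_i)/r^2
    return (rho / pi) * E                    # L_o = f_r * E, with f_r = rho/pi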
This time I decided to implement
Instant Radiosity to approximate the global illumination using Virtual Point Lights. I did some research and found a presentation with equations on slides 10 and 11. So I spent my last month experimenting with it, but the results I've got are too dark.
My current implementation is split into two applications. The first one traces random rays from the light source. The second one is an OpenGL application for shading.
VPL generation (for a single light path):
1. Create a new ray in a random direction.
2. Since I'm dealing with a point light, I believe that we can't calculate $L_i$, but I know that $L_o = f_r \times E$ and $E = \frac{I \cos{\theta_i}}{r^2}$, so now I know the outgoing luminance. Since I'm using a Lambertian BRDF, I know that $L_o$ is constant; that's why I store it as the light intensity (I know that this is not "luminous intensity" per se, but at this point I don't know the outgoing direction).
3. Randomize a new outgoing reflection direction.
4. This time I know $L_i$ (it is equal to $L_o$ from the previous iteration; no participating medium). I use the equation from slide 10, so $L_o = \frac{\rho}{\pi} L_i$. Again I store $L_o$ as the VPL's intensity.
5. Randomize a new outgoing (reflected) direction.
6. if (depth++ < max_depth) goto step_4;
Shading with VPLs:
1. Set the illumination accumulator to $E = 0$.
2. foreach (vpl in vpls):
a. Check light visibility $V$ (from the shadow map); if in shadow then continue.
b. Compute the distance $r$ to the light and a normalized vector $\hat{l}$ "to the light".
c. $E_{vpl} = I_{vpl} \frac{\cos{\theta_i} \cos{\theta_e}}{r^2}$, where $\cos \theta_e = \hat{l} \circ \hat{n}_{vpl}$, based on the equation from slide 11.
d. $E \mathrel{+}= \frac{E_{vpl}}{N}$, where $N$ is the number of traced light paths.
3. $L_o = f_r \times E$
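Put together, the two passes look roughly like this in Python-style pseudocode. This is only a sketch of my description above; trace, random_dir, visible, direction_and_distance, dot and the VPL record are placeholders, not a real API:

from math import pi

def generate_vpls(light, rho, max_depth, n_paths):
    vpls = []
    for _ in range(n_paths):
        hit = trace(light.pos, random_dir())
        L_o = (rho / pi) * light.I * hit.cos_theta_i / hit.r ** 2
        vpls.append(VPL(hit.pos, hit.normal, L_o))
        for _ in range(max_depth):
            hit = trace(hit.pos, random_dir())
            L_o = (rho / pi) * L_o            # slide 10: L_o = (rho/pi) * L_i
            vpls.append(VPL(hit.pos, hit.normal, L_o))
    return vpls

def shade(point, normal, vpls, rho, n_paths):
    E = 0.0
    for vpl in vpls:
        if not visible(point, vpl.pos):       # V from the shadow map
            continue
        l_hat, r = direction_and_distance(point, vpl.pos)
        cos_i = max(dot(normal, l_hat), 0.0)
        cos_e = max(dot(vpl.normal, l_hat), 0.0)   # slide 11 convention
        E += vpl.I * cos_i * cos_e / r ** 2 / n_paths
    return (rho / pi) * E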
Problem
The problem is that the resulting $L_o$ values are always below $1$, while an identical scene rendered in Radiance (and limited to 1 bounce) has values above $20$.
(If my method is wrong) How should the VPLs' intensity be computed?
In the mentioned presentation, on page 13 there is an algorithm (which I don't fully understand). Can somebody enlighten me how to calculate "average_reflectivity"? As you can see, my implementation is derived from The Rendering Equation and the equations look similar.
|
I'm interested in testing independence of two groups (e.g. case and control) in a $2\times 2$ table: i.e. $H_0: \theta=1$ against the
two-sided alternative $H_1:\theta\neq 1$, where $\theta$ is the odds-ratio. Suppose that the margins of the table are fixed, then the random variable for the number of hits among cases is given by $X\sim \text{HyperGeom}(n,N_1,N_2)$, where $n$ is the total number of hits, $N_1$ is the total number of cases and $N_2$ is the total number of controls. The pmf is given by$$Pr(X=x)=\frac{\binom{N_1}{x}\binom{N_2}{n-x}}{\binom{N_1+N_2}{n}}$$for $\max(0,n-N_2)\leq x\leq \min(n,N_1)$.
For testing, I'm using the mid $p$-value as it is one way to reduce the conservativeness of the Fisher's exact test without resorting to randomized tests. Suppose that the observed number of hits among cases is $x_0$. I've seen two formulations of the
two-sided mid $p$-value in the literature. Formulation 1 (which appears to be what midpval_f1 below implements) sums the probabilities of all outcomes less probable than the observed one, plus half the probability of the equally probable outcomes:$$p^{(1)}_{\text{mid}}=\sum_{x:\,Pr(X=x)<Pr(X=x_0)}Pr(X=x)+\frac{1}{2}\sum_{x:\,Pr(X=x)=Pr(X=x_0)}Pr(X=x)$$Formulation 2 doubles the smaller one-sided mid $p$-value:$$p_{lt} = Pr(X<x_0)+0.5~Pr(X=x_0)\\p_{gt} = Pr(X>x_0)+0.5~Pr(X=x_0)\\p^{(2)}_{\text{mid}}=2\min(p_{lt},p_{gt})=2\min(p_{lt},1-p_{lt})$$where the one-sided versions, $p_{lt}$ or $p_{gt}$, can be found in: Eq 1.7 or Section 5.1, to name a few.
In fact, Formulation 2 is the one used in SAS PROC FREQ and in certain functions in R packages such as epitools::ormid.test.
From a simple test on the $2\times 2$ table below in R, I noticed that these two formulations sometimes don't produce the same $p$-values. In fact, trying several tables seems to suggest that Formulation 1 can be much less conservative compared to Formulation 2. Additionally, Formulation 2 can be more conservative than the two-sided Fisher's exact test, as shown below.
Question: Which formulation is appropriate (and under what situations)?
midpval_f1 <- function(ct){
  x  <- ct[1,1]
  n  <- sum(ct[1,])
  N1 <- sum(ct[,1])
  N2 <- sum(ct[,2])
  lo <- max(0L, n - N2)
  hi <- min(n, N1)
  support <- lo:hi
  out <- dhyper(support, N1, N2, n)
  return(sum(out[out < out[x - lo + 1]]) + sum(out[out == out[x - lo + 1]])/2)
}

midpval_f2 <- function(ct){
  x  <- ct[1,1]
  n  <- sum(ct[1,])
  N1 <- sum(ct[,1])
  N2 <- sum(ct[,2])
  plt <- phyper(x - 1, N1, N2, n) + 0.5*dhyper(x, N1, N2, n)
  pgt <- phyper(x, N1, N2, n, lower.tail = FALSE) + 0.5*dhyper(x, N1, N2, n)
  return(2*min(plt, pgt))
}

test_ct <- matrix(c(3, 5, 7, 9), ncol = 2, byrow = TRUE)
> midpval_f1(test_ct)
[1] 0.8366761
> midpval_f2(test_ct)
[1] 0.7956208

test_ct2 <- matrix(c(5, 10, 2, 38), ncol = 2, byrow = TRUE)
> midpval_f1(test_ct2)
[1] 0.006789634
> midpval_f2(test_ct2)
[1] 0.01357927
> fisher.test(test_ct2)$p.value
[1] 0.012561
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?
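One standard way to finish from here, for what it's worth: factor the velocity and examine its sign,
$$v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3),$$
which is negative exactly when the two factors have opposite signs, i.e. for $1 < t < 3$; so the particle moves to the left on the interval $(1, 3)$.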
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer $O(n-1)$ at a point
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
On Math SE, I've seen several questions which relate to the following. By abusing the laws of exponents for rational exponents, one can come up with any number of apparent paradoxes, in which a number seems to be shown as equal to its opposite (negative). Possibly the most concise example:
$-1 = (-1)^1 = (-1)^\frac{2}{2} = (-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2} = (1)^\frac{1}{2} = \sqrt{1} = 1$
Of the seven equalities in this statement, I'm embarrassed to say that I'm not totally sure which one is incorrect. Restricting the discussion to real numbers and rational exponents, we can look at some college algebra/precalculus books and find definitions like the following (here, Ratti & McWaters,
Precalculus: a right triangle approach, section P.6):
The thing that looks the most suspect in my example above is the 4th equality, $(-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2}$, which seems to violate the spirit of Ratti's definition of rational exponents ("no common factors")... but technically, that translation from rational exponent to radical expression was not used at this point. Rather, we're still only manipulating rational exponents, which seems fully compliant with Ratti's 2nd property: $(a^r)^s = a^{rs}$, where indeed "all of the expressions used are defined". The rational-exponent-to-radical-expression switch (via the rational exponent definition) doesn't actually happen until the 6th equality, $(1)^\frac{1}{2} = \sqrt{1}$, and that seems to undeniably be a true statement. So I'm a bit stumped at
exactly where the falsehood lies.
We can find effectively identical definitions in other books. For example, in Sullivan's
College Algebra, his definition is (sec. R.8): "If $a$ is a real number and $m$ and $n$ are integers containing no common factors, with $n \ge 2$, then: $a^\frac{m}{n} = \sqrt[n]{a^m} = (\sqrt[n]{a})^m$, provided that $\sqrt[n]{a}$ exists"; and he briefly states that "the Laws of Exponents hold for rational exponents", but all examples are restricted to positive variables only. OpenStax College Algebra does the same (sec. 1.3): "In these cases, the exponent must be a fraction in lowest terms... All of the properties of exponents that we learned for integer exponents also hold for rational exponents."
So what exactly are the restrictions on the Laws of Exponents in the real-number context, with rational exponents? As one example, is there a reason missing from the texts above why $(-1)^{2 \cdot \frac{1}{2}} = ((-1)^2)^\frac{1}{2}$ is a false statement, or is it one of the other equalities that fails?
Edit: Some literature that discusses this issue:
Goel, Sudhir K., and Michael S. Robillard. "The Equation: $-2 = (-8)^\frac{1}{3} = (-8)^\frac{2}{6} = [(-8)^2]^\frac{1}{6} = 2$." Educational Studies in Mathematics 33.3 (1997): 319-320.
Tirosh, Dina, and Ruhama Even. "To define or not to define: The case of $(-8)^\frac{1}{3}$." Educational Studies in Mathematics 33.3 (1997): 321-330.
Choi, Younggi, and Jonghoon Do. "Equality Involved in 0.999... and $(-8)^\frac{1}{3}$." For the Learning of Mathematics 25.3 (2005): 13-36.
Woo, Jeongho, and Jaehoon Yim. "Revisiting 0.999... and $(-8)^\frac{1}{3}$ in School Mathematics from the Perspective of the Algebraic Permanence Principle." For the Learning of Mathematics 28.2 (2008): 11-16.
Gómez, Bernardo, and Carmen Buhlea. "The ambiguity of the sign √." Proceedings of the Sixth Congress of the European Society for Research in Mathematics Education. 2009.
Gómez, Bernardo. "Historical conflicts and subtleties with the √ sign in textbooks." 6th European Summer University on the History and Epistemology in Mathematics Education. HPM: Vienna University of Technology, Vienna, Austria (2010).
|
My question is about the rigorous proof of the fact that heat goes from the hot body to the cold one. There is a part of the proof I don't understand.
We consider 2 systems : one is at temperature $T_1$ and the other at $T_2$. The ensemble of those two systems is isolated. I assume they only exchange heat (no work).
I apply the second principle of thermodynamics to the ensemble:
$$dS=dS_1+dS_2=\delta S_1^e+\delta S_2^e + \delta S_1^c + \delta S_2^c=\delta S^c \geq 0$$
Where : $\delta S^e$ is the entropy exchanged and $\delta S^c$ is the entropy created.
Now, I know : $\delta S_1^e = \frac{\delta Q_1}{T_1} $, $\delta S_2^e = \frac{\delta Q_2}{T_2} $
Applying the first principle on $1+2$ I find that $\delta Q_1 = - \delta Q_2$, I end up with :
$$ dS=\delta Q_1(\frac{1}{T_1}-\frac{1}{T_2})+\delta S_1^c + \delta S_2^c = \delta S^c \geq 0$$
To prove the direction of the heat transfer, I need to have : $\delta S^c -(\delta S_1^c + \delta S_2^c) \geq 0$
But using only classical thermodynamics (first and second principles), I don't see why this would be true.
Do we need an extra postulate to prove it? I thought that the heat transfer direction could be shown directly using classical thermodynamics.
[edit]: What I tried, in connection with the answer.
Ok, let's assume I have two systems $1$ and $2$ with temperatures $T_1 \neq T_2$.
I take into consideration an interface system $I$ between those two systems. I'm forced to include it, because otherwise I couldn't have thermodynamic equilibrium and two different temperatures for my systems.
I write the variation in internal energy : $dU=dU_1+dU_2+dU_I$.
I assume the system $I$ is very small, so its internal energy variation is negligible (conceptually I can take it as small as I want, thus in the limit its variation of energy can be considered as $0$).
$$dU_I=0$$
My whole system $\{1+2+I\}$ is isolated, so $dU=dU_1+dU_2=0$. What's more, $1$ only exchanges heat (as does $2$), so I have:
$$dU_1=C_1(T_1) dT_1=\delta Q_1=T_1(dS_1-\delta S^c_1)$$
$$dU_2=C_2(T_2) dT_2=\delta Q_2=T_2(dS_2-\delta S^c_2)$$
$$dS=dS_1+dS_2+dS_I=C_1 \frac{dT_1}{T_1}+C_2 \frac{dT_2}{T_2}+(\delta S^c_1+\delta S^c_2 + \delta S^c_I)=\delta S^c \geq 0$$
where the last equality uses the fact that the entropy of the whole system must increase. And I don't necessarily have $\delta S^c=\delta S_1^c+\delta S_I^c+\delta S_2^c$: the created entropy is not additive.
In the end, I have :
$$C_1 \frac{dT_1}{T_1}+C_2 \frac{dT_2}{T_2}=(\delta S^c-(\delta S^c_1+\delta S^c_2 + \delta S^c_I))$$
which is neither positive nor negative, so I don't really see how to conclude. And I don't find the same entropy variation that you have. How did you end up with such a result?
If I assume $C_1=C_2=C$ independent of temperature (I would like to avoid any such assumptions, but let's assume it just to see some of my problems), I would have something like:
$$ C \ln\left(\frac{T_1^f}{T_1^i}\frac{T_2^f}{T_2^i}\right)=S^c-(S^c_1+S^c_2 +S^c_I)$$
And I don't see how to conclude anything from here... :S
[edit 2] :
As you suggested, for now I don't take into account the entropy creation terms.
I assume: $T_1^i \leq T_2^i$. I thus need to prove $T_2^f-T_2^i \leq 0$ (the hot system gets colder, and vice versa).
I assume that my $\Delta S$ is only due to the log (I ignore the creation terms, as suggested).
Thus I have the following inequality :
$$\Delta S \geq 0 \Leftrightarrow 1-(\frac{T_2^f-T_1^f}{T_2^i+T_1^i})^2 \geq 1-(\frac{T_2^i-T_1^i}{T_2^i+T_1^i})^2 \Leftrightarrow (T_2^f-T_1^f)^2 \leq (T_2^i-T_1^i)^2 $$
Thus :
$$ -(T_2^i-T_1^i) \leq T_2^f-T_1^f \leq T_2^i-T_1^i$$
So, I find :
$$ T_2^f-T_2^i = \Delta T_2 \leq T_1^f-T_1^i = \Delta T_1$$
$$ \Delta T_2 \leq - \Delta T_2 \Leftrightarrow \Delta T_2 \leq 0 $$
We recover the expected result.
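As a numerical sanity check of that inequality (my own sketch, assuming two equal, temperature-independent heat capacities and dropping the creation terms as above; 300 K and 400 K are just illustrative values):

import math

C = 1.0                          # common heat capacity (arbitrary units)
T1i, T2i = 300.0, 400.0          # initial temperatures, with T1i <= T2i
Tf = 0.5 * (T1i + T2i)           # common final temperature from dU1 + dU2 = 0

dS = C * math.log(Tf / T1i) + C * math.log(Tf / T2i)
print(dS)                        # ~0.0206 * C > 0: total entropy increases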
But now, why could I "forget" about those creation terms?
Do you use an argument like: the entropy is a state function, and thus its variation only depends on the initial and final states?
So we choose a reversible transformation in all the reservoirs and in the global system that has the same initial and final temperatures?
Using this we find a positive variation of entropy.
Is it the final idea ?
The little thing that confuses me is that, whether the transformation is reversible or not, we would have the same heat exchanged (because of the same starting and ending temperatures in both systems). So it is as if "nothing changes" physically here, whether the transformation is reversible or not.
But maybe that is not the idea!
|
August 14th, 2014, 03:48 AM
# 1
how to find max shear stress at a given point?
The state of plane stress at a point is given by $\sigma_x = -200$ MPa, $\sigma_y = 100$ MPa, $\sigma_{xy} = 100$ MPa.
The maximum shear stress (in MPa) is:
A) 111.8
B) 150.1
C) 180.3
D) 223.6
Please explain the procedure along with the answer.
August 26th, 2014, 08:42 AM
# 2
According to the Wikipedia page on plane stress, the maximum shear stress is given by
$\displaystyle \tau_{max} = \frac{1}{2}\left(\sigma_1 - \sigma_2\right)$
where
$\displaystyle \sigma_{1} = \frac{1}{2}\left(\sigma_x + \sigma_y\right) + \sqrt{\left[\frac{1}{2}\left(\sigma_x - \sigma_y\right)\right]^2 + \tau_{xy}^2}$
and
$\displaystyle \sigma_{2} = \frac{1}{2}\left(\sigma_x + \sigma_y\right) - \sqrt{\left[\frac{1}{2}\left(\sigma_x - \sigma_y\right)\right]^2 + \tau_{xy}^2}$
Plugging in your numbers ($\displaystyle \sigma_x = -200$MPa, $\displaystyle \sigma_y = 100$MPa and $\displaystyle \tau_{xy} = 100$MPa) I get
$\displaystyle \tau_{max} = 180.28$ MPa
so the answer is C). I would make sure you understand all the mathematics on that wiki page!
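If it helps, the calculation is easy to script; here is a quick sketch (mine, not from the thread) that reproduces the number:

import math

sx, sy, txy = -200.0, 100.0, 100.0   # plane-stress components in MPa

mean = 0.5 * (sx + sy)
radius = math.sqrt((0.5 * (sx - sy))**2 + txy**2)  # Mohr's circle radius
s1, s2 = mean + radius, mean - radius              # principal stresses

tau_max = 0.5 * (s1 - s2)   # equal to the Mohr's circle radius
print(s1, s2, tau_max)      # ~130.28, ~-230.28, ~180.28 MPa, hence answer C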
|
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest.
Nah, I have a pretty garbage question. Let me spell it out.
I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$.
For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$.
This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin.
Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle.
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$
$$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$
@user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure).
The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$.
@RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea.
The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described
It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation
The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$ possible.
Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinal of an $\varepsilon$-cover $P$ of $M$; that is for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$....
The same result should be true for abstract Riemannian manifolds. Do you know how to prove it in that case?
I think there you really do need some kind of PDEs to construct good charts.
I might be way overcomplicating this.
If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$?
I think so by the squeeze theorem or something.
this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$
but then we can replace all of those $U_i$'s with balls, incurring some fixed error
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space Rn, or more generally in a metric space (X, d). It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.To calculate this dimension for a fractal S, imagine this fractal lying on an evenly spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid...
@BalarkaSen what is this
ok but this does confirm that what I'm trying to do is wrong haha
In mathematics, Hausdorff dimension (a.k.a. fractal dimension) is a measure of roughness and/or chaos that was first introduced in 1918 by mathematician Felix Hausdorff. Applying the mathematical formula, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension. However, formulas...
Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$. If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities with the greatest integer function?
I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$...that's the motivation.
|
Meshing Considerations for Linear Static Problems
In this blog entry, we introduce meshing considerations for linear static finite element problems. This is the first in a series of postings on meshing techniques that is meant to provide guidance on how to approach the meshing of your finite element model with confidence.
About Finite Element Meshing
The finite element mesh serves two purposes. It first subdivides the CAD geometry being modeled into smaller pieces, or
elements, over which it is possible to write a set of equations describing the solution to the governing equation. The mesh is also used to represent the solution field to the physics being solved. There is error associated with both the discretization of the geometry as well as the discretization of the solution, so let's examine these separately.

Geometric Discretization
Consider two very simple geometries, a block and a cylindrical shell:
There are four different types of elements that can be used to mesh these geometries: tetrahedra (tets), hexahedra (bricks), triangular prisms (prisms), and pyramid elements:
The grey circles represent the corners, or
nodes, of the elements. Any combination of the above four elements can be used. (For 2D modeling, triangular and quadrilateral elements are available.) You can see by examination that both of these geometries could be meshed with as few as one brick element, two prisms, three pyramids, or five tets. As we learned in the previous blog post about solving linear static finite element problems, you will always arrive at a solution in one Newton-Raphson iteration. This is true for linear finite element problems regardless of the mesh. So let’s take a look at the simplest mesh we could put on these structures. Here’s a plot of a single brick element discretizing these geometries:
The mesh of the block is obviously a perfect representation of the true geometry, while the mesh of the cylindrical shell appears quite poor. In fact, it only appears that way when plotted. Elements are always plotted on the screen as having straight edges (this is done for graphics performance purposes) but COMSOL usually uses a second-order Lagrangian element to discretize the geometry (and the solution). So although the element edges always appear straight, they are internally represented as:
The white circles represent the midpoint nodes of these second-order element edges. That is, the lines defining the edges of the elements are represented by three points, and the edges are approximated via a polynomial fit. There are also additional nodes at the center of each of these quadrilateral faces and in the center of the volume for these second-order Lagrangian hexahedral elements (omitted for clarity). Clearly, these elements do a better job of representing the curved boundaries of the geometry. By default, COMSOL uses second-order elements for most physics; the two exceptions are problems involving chemical species transport and solving for a fluid flow field. (Since those types of problems are convection dominated, the governing equations are better solved with first-order elements.) Higher-order elements are also available, but the default second-order elements usually represent a good compromise between accuracy and computational requirements.
The figure below shows the geometric discretization error when meshing a 90° arc in terms of the number of first- and second-order elements:
The conclusion that can be made from this is that at least two second-order elements, or at least eight first-order elements, are needed to reduce the geometric discretization error below 1%. In fact, two second-order elements introduce a geometric discretization error of less than 0.1%. Finer meshes will more accurately represent the geometry, but will take more computational resources. This gives us a couple of good practical guidelines:
- When using first-order elements, adjust the mesh such that there are at least eight elements per 90° arc (see the sketch below)
- When using second-order elements, use two elements per 90° arc
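As a quick sanity check of the first-order rule, here is a small sketch of my own (not from COMSOL); note the exact numbers depend on the error metric, and here I simply use the maximal radial deviation of a straight chord from the arc:

import math

def max_chord_error(n_elements, arc_deg=90.0):
    # Approximate a circular arc with n straight chords; the worst relative
    # radial error is the sagitta over the radius: 1 - cos(half chord angle).
    half_angle = math.radians(arc_deg / n_elements) / 2.0
    return 1.0 - math.cos(half_angle)

for n in (1, 2, 4, 8):
    print(n, round(max_chord_error(n) * 100, 2), "%")
# 1 -> 29.29 %, 2 -> 7.61 %, 4 -> 1.92 %, 8 -> 0.48 %: eight straight-edged
# elements per 90-degree arc bring this error measure below 1%.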
With these rules of thumb, we can now estimate the error we’ve introduced by meshing the geometry, and we can do so with some confidence before even having to solve the model. Now let’s turn our attention to how the mesh discretizes the solution.
Solution Discretization
The finite element mesh is also used to represent the solution field. The solution is computed at the node points, and a polynomial basis is used to interpolate this solution throughout the element to recover the total solution field. When solving linear finite elements problems, we are always able to compute a solution, no matter how coarse the mesh, but it may not be very accurate. To understand how mesh density affects solution accuracy, let’s look at a simple heat transfer problem on our previous geometries:
A temperature difference is applied to opposing faces of the block and the cylindrical shell. The thermal conductivity is constant, and all other surfaces are thermally insulated.
The solution for the case of the square block is that the temperature field varies linearly throughout the block. So for this model, a single, first-order, hexahedral element would actually be sufficient to compute the true solution. Of course, you will rarely be that lucky!
Therefore, let’s look at the slightly more challenging case. We’ve already seen that the cylindrical shell model will have geometric discretization error due to the curved edges, so we would start this model with at least two second-order (or eight first-order) elements along the curved edges. If you look closely at the above plot, you can see that the element edges on the boundaries are curved, while the interior elements have straight edges.
Along the axis of the cylinder, we can use a single element, since the temperature field will not vary in this direction. However, in the radial direction, from the inside to the outside surface, we also need enough elements to discretize the solution. The analytic solution for this case goes as $\ln(r)$ and can be compared against our finite element solution. Since the polynomial basis functions cannot perfectly describe this function, let's plot the error in the finite element solution for both the linear and quadratic elements:
What you can see from this plot is that, as you increase the number of elements in the model, the error goes down. This is a fundamental property of the finite element method: the more elements, the more accurate your solution. Of course, there is also a cost associated with this. More computational resources, both time and hardware, are required to solve larger models. Now, you’ll notice that there are no units to the
x-axis of this graph, and that is on purpose. The rate at which error decreases with respect to mesh refinement will be different for every model, and depends on many factors. The only important point is that it will always go down, monotonically, for well-posed problems.
You’ll also notice that, after a point, the error starts to go back up. This will happen once the individual mesh elements start to get very small, and we run into the limits of numerical precision. That is, the numbers in our model are smaller than can be accurately represented on a computer. This is an inherent problem with all computational methods, not just the finite element method; computers cannot represent all real numbers accurately. The point at which the error starts to go back up will be around \sqrt{2^{-52}} \approx 1.5 \times 10^{-8} and to be on the safe and practical side, we often say that the minimal achievable error is 10
-6. Thus, if we integrate the scaled difference between the true and computed solution over the entire model:
We say that the error, $\epsilon$, can typically be made as small as $10^{-6}$ in the limits of mesh refinement. In practice, the inputs to our models will usually have much greater uncertainty than this anyway. Also keep in mind that in general we don't know the true solution; we will instead have to compare the computed solutions between different sized meshes and observe what values the solution converges toward.

Adaptive Mesh Refinement
I would like to close this blog post by introducing a better way to refine the mesh. The plots above show that error decreases as all of the elements in the model are made smaller. However, ideally you would only make the elements smaller in regions where the error is high. COMSOL addresses this via
Adaptive Mesh Refinement, which first solves on an initial mesh, iteratively inserts elements into regions where the error is estimated to be high, and then re-solves the model. This can be continued for as many iterations as desired. This functionality works with triangular elements in 2D and tetrahedra in 3D. Let's examine this in the context of a simple structural mechanics problem: a plate under uniaxial tension with a hole, as shown in the figure below. Using symmetry, only one quarter of the model needs to be solved.
The computed displacement fields, and the resultant stresses, are quite uniform some distance away from the hole, but vary strongly nearby. The figure below shows an initial mesh, as well as the results of several adaptive mesh refinement iterations, along with the computed stress field.
Note how COMSOL preferentially inserts smaller elements around the hole. This should not be a surprise, since we already know there will be higher stresses around the hole. In practice, it is recommended to use a combination of adaptive mesh refinement, engineering judgment, and experience to find an acceptable mesh.
Summary of Main Points

- You will always want to perform a mesh refinement study and compare results on different sized meshes
- Use your knowledge of geometric discretization error to choose as coarse a starting mesh as possible, and refine from there
- You can use adaptive mesh refinement, or your own engineering judgment, to refine the mesh
|
So, I started writing my second raytracer - this time focusing on photometric rendering that uses IES lights and standard photometric units. I've got the basic raytracing up and running (using pure geometry), and I've read every article I could find on radio- and photometry (I think I've developed an intuition for it), but the issues started when I tried to validate/play with the underlying math.
I use IES lights as a function of spherical coordinates that gives me the intensity of light in a given direction, in candelas (lumens per unit solid angle). Also, in the equations for luminance/radiance there are a differential solid angle and a differential surface area. On paper everything looks good; however:
My question is: how should I deal with those parameters in my raytracer? I assume (but I'm not sure) that my (differential) solid angle is just a (unit) vector in the discrete world, and the surface area is simply a point - but what values should I plug into those equations? I saw on some websites that people just ignored those terms and replaced the integral with a sum, but no explanation was given...
And my second question is related to those IES lights - how can I simply convert intensity to radiance? The intensity already has the solid angle, but for radiance I need the surface area... Or should I somehow integrate the luminous intensity to get luminous flux and then somehow calculate the radiance from that?
EDIT: I decided to try to grab a sheet of paper, try again and share with you my thought process. In order to simplify a lot of things:
each plane has a single scalar value as its material (let's name it reflectivity, between 0 and 1): $p_1$ has $reflectivity = 0.5$, and $p_2$ has $reflectivity = 1$
all materials are Lambertian (no specular reflections, no emission)
Step 1 - Illuminance at $p1$
The light's luminous intensity at vertical angle $V=0$ is $I = 6619$ [cd]. In order to compute the illuminance at point $p_1$, we can create a sphere with radius equal to the distance between the light and the intersection point $p_1$, which is $r = 2$ meters. That virtual sphere intersects the first plane, creating a small differential area $dA$. Knowing that $d\omega = \frac {dA}{r^2}$, we get $E_L(r) = \frac {I}{r^2} [\frac {\mathrm{lm}}{m^2}]$, which is the illuminance measured at a plane perpendicular to the light. We have to project it to know how much light hits $p_1$, so $E = E_L \times \cos \theta$. I assume that $L_i = E$, but that is just a guess. I hope that at this point I got it right...

Step 2 - Outgoing luminance
In order to calculate the outgoing luminance, I think this is the place for the rendering equation to kick in: $$ L_o = L_e + \int_{\Omega} f_r(\omega_i, \omega_o) L_i(p, \omega_i) \cos \theta \, d\omega $$ Since I ignore emission and my BRDF is $f_r(\omega_i, \omega_o) = \frac {reflectivity}{\pi}$, which is constant across the integral, this gives me: $$ L_o = \frac {reflectivity}{\pi} \int_{\Omega} L_i(p, \omega_i) \cos \theta \, d\omega $$ Now I have to discretize it, which I don't fully understand. Based on the notes I've compiled from many pages, I've got: $L_o = \frac {reflectivity}{\pi} \sum L_i(p, \omega_i) \cos \theta \Rightarrow L_o = \frac {0.5}{\pi} L_i \cos \theta$, where $L_i$ was calculated in step 1.
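Putting Steps 1 and 2 into numbers (a sketch of my current understanding, using the toy values above; treating the point light's contribution directly as the irradiance $E$ is exactly the guess I describe in Step 1):

import math

I = 6619.0           # luminous intensity toward p1, in cd (from the IES profile)
r = 2.0              # distance from the light to p1, in meters
cos_theta = 1.0      # incidence cosine at p1 (vertical angle V = 0 here)
reflectivity = 0.5   # p1's Lambertian reflectivity

E = I / r**2 * cos_theta            # Step 1: illuminance at p1, in lux
L_o = reflectivity / math.pi * E    # Step 2: outgoing luminance, in cd/m^2
print(E, L_o)                       # 1654.75 and ~263.3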
Step 3 - Luminance at $p2$
Can I assume that $L_{i_{p2}} = L(p_1 \rightarrow p_2) \times \cos \theta = L_{o_{p1}} \times \cos \theta$? I'm not sure if I have to apply the $\cos \theta$ twice (for the outgoing and incoming $dA$s).
I also would like to apologize for making this question sooo long - but I had never taken any course that would introduce radiometry and I was forced to (try to) learn it by myself.
|
Exercise :
Find a maximum likelihood estimator of $\theta$ for : $f(x) = \theta x^{-2}, \; \; 0< \theta \leq x < \infty$.
Attempt :
$$L(x;\theta) = \prod_{i=1}^n \theta x_i^{-2} \mathbb{I}_{[\theta, + \infty)}(x_i) = \theta^n \left(\prod_{i=1}^n x_i\right)^{-2} \mathbb{I}_{[\theta, + \infty)}(\min x_i)$$
How should one proceed from here to find the MLE?
I think it should be something like:
$$\begin{cases} \theta \; \text{sufficiently large} \\ \min x_i \geq \theta \end{cases} \implies \hat{\theta} = \min x_i$$
Is my approach correct ?
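A quick numerical sanity check of the candidate (my own sketch): since the log-likelihood $n\log\theta - 2\sum_i \log x_i$ is increasing in $\theta$, the maximum sits at the boundary $\hat{\theta} = \min x_i$, which should hover just above the true value.

import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
# Sample from f(x) = theta / x^2 on [theta, inf) by inverse CDF:
# F(x) = 1 - theta/x, so X = theta / (1 - U) with U uniform on [0, 1).
u = rng.uniform(size=1000)
x = theta_true / (1.0 - u)
print(x.min())   # the MLE candidate min(x_i): always above, and close to, theta_true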
|
Hi! I am trying to understand the problem above, and was wondering if someone could help me with the last question. I think I am fine with all other questions.
Here is my attempt:
(i) The change in output is $\Delta Y = \frac{1}{1-c} \Delta G$, as it is the infinite sum $ \Delta G + c\Delta G + c^2\Delta G + \ldots $, so the fiscal multiplier is $\frac{1}{1-c}$.
It depends on $c$ because the increase in government spending first leads to an increase in $Y$, which leads to an increase in $C$, which then leads to another increase in $Y$; but this time the increase is only a fraction $c$ of the previous one, as part of the income is saved.
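To see the geometric series converge numerically (a small sketch of my own; $c = 0.8$ is just an illustrative value):

c, dG = 0.8, 1.0
dY, injection = 0.0, dG
for _ in range(200):          # round after round of induced consumption
    dY += injection
    injection *= c            # only the fraction c of new income is re-spent
print(dY, dG / (1 - c))       # both ~5.0: the multiplier is 1/(1-c)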
(ii) Balanced budget multiplier is 1, as $\Delta Y = \frac{1}{1-c} \Delta G + \frac{-c}{1-c} \Delta T$ with $ \Delta T = \Delta G$
This means that an increase in government spending that is matched by an identical increase in lump-sum taxes increases output by the same amount as the increase in G (and T).
(iii) $$\Delta Y = \frac{\Delta X}{0.01} = 100$$
This means that giving me an additional unit of currency will increase total income by 100, as it feeds back into income and consumption infinitely.
(iv) This is the question where I would like some guidance. Here is my work so far:
We have: $$ C= a + c(Y-T) - d\alpha Y$$
The multiplier associated with an increase in G should now be: $$\frac{1}{1-c} + \frac{1}{1-d\alpha}$$ The first part is the "old" multiplier; the second is the effect of the new part of the consumption function? I am not sure that this is correct, happy to receive comments!
This would mean that $c=d\alpha$.
And then, the multiplier associated with an increase in T:
1st round: Y increases by $c\Delta T$
2nd round: C increases by $c^2\Delta T + d\alpha \Delta T = 2 c^2\Delta T$
Then Y increases by $2 c^2\Delta T$
3rd round: C increases by $4 c^3\Delta T$
Does this mean that the multiplier is this sum: $$2^0c\,\Delta T + 2^1c^2\Delta T + 2^2c^3\Delta T + \ldots $$
And then the balanced budget multiplier would be this last result + 1 (the other multiplier, from the increase in G)?
I'd be super grateful to anyone who could help me with question (iv)!
|
Why, in the following proof, is $$\sum_nA_n(X_n,X_m)=A_m(X_m,X_m)?$$
The author says it's because of orthogonality, but orthogonality means $(f,g)=\int_a^b fg\,dx=0$. So how does orthogonality help to prove it?
Could someone explain this please?
Thanks
To elaborate on other answers/comments, observe that $$ \sum_{n}A_{n}(X_{n}, X_{m}) = A_{1}(X_{1}, X_{m}) + A_{2}(X_{2}, X_{m}) + \ldots + A_{m}(X_{m}, X_{m}) + \ldots $$ and all of the terms where the index of $A$ is not $m$ are zero. So, $$ \sum_{n}A_{n}(X_{n}, X_{m}) = 0 + 0 + \ldots + 0 + A_{m}(X_{m}, X_{m}) + 0 +\ldots = A_{m}(X_{m}, X_{m}). $$
Because $ (X_n,X_m)=0$ if $n\ne m$. That's the orthogonality.
|
To perform the gluing, we need:
- two metric spaces $X$ and $Y$
- a set $A\subset X$; this is the part of $X$ covered in glue
- an isometric embedding $f:A\to Y$, which is a way to put the glue-covered part of $X$ over $Y$.
After we firmly press the spaces together and let them sit for a while, a point $x\in A$ becomes identical with the point $f(x)\in Y$. The resulting space can be described as the quotient $(X\sqcup Y)/(x\sim f(x))$. It is usually denoted $X\cup_A Y$. Its metric is the standard quotient metric; however, in this case the formula for quotient metric can be simplified to $$\tilde d(p,q) = \begin{cases}d_X(p, q) & \mbox{if } p, q \in X \\d_Y(p, q) & \mbox{if } p, q \in Y \\\inf_{a\in A} [d_X(p, a)+d_Y(q, f(a))] & \mbox{if } p \in X \mbox{ and } q \in Y \\\inf_{a\in A} [d_Y(p, f(a)) + d_X(q, a)] & \mbox{if } p \in Y \mbox{ and } q \in X\end{cases}$$
A simple example to start with: $X=(-\infty,0]$, $A=\{0\}$, $Y=[0,\infty)$, $f(0)=0$. This means we glue two half-lines together at the point $0$. The result is $\mathbb R$ with the standard metric.
Another example: glue two closed disks of the same radius along their boundaries. This means $A$ is the boundary of one disk, and $f(A)$ is the boundary of the other. The resulting space is homeomorphic to a sphere, though the metric on each disk is still flat. It's like a sphere that someone sat on.
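To make the simplified quotient-metric formula concrete, here is a small numerical sketch of my own for finite metric spaces given as distance matrices (all names here are mine):

import numpy as np

def glued_distance(dX, dY, glue, p_in_X, p, q_in_X, q):
    """Distance in X glued to Y along A, following the simplified formula
    above. `glue` is a list of index pairs (a, f(a)) with a in X, f(a) in Y;
    dX and dY are the two metrics given as square matrices."""
    if p_in_X and q_in_X:
        return dX[p, q]
    if not p_in_X and not q_in_X:
        return dY[p, q]
    if not p_in_X:                       # make p the X-side point, by symmetry
        p, q = q, p
    # infimum over glue points of d_X(p, a) + d_Y(f(a), q)
    return min(dX[p, a] + dY[fa, q] for a, fa in glue)

# Two half-lines {0, 1, 2} glued at a single point 0 ~ 0, as in the first example:
dX = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
dY = dX.copy()
print(glued_distance(dX, dY, [(0, 0)], True, 2, False, 2))  # 4.0, as on the real line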
|
Suppose you have two equations: $$2xy + y^2 = 0$$ $$x^2 + 2xy + 1 = 0$$ Subtracting the second from the first yields $y^2 - x^2 - 1 = 0$. Isolating $y$, we discover that $y = \pm\sqrt{x^2 + 1}$. However, by inspection we can see that the $2xy$ term in both equations must be negative, which means that a single value of $x$ cannot have both a positive and a negative corresponding $y$-value (i.e. if $x$ is positive, then $y$ must be negative). It seems that the equation $y = \pm\sqrt{x^2 + 1}$ contains an "extraneous root", but I'm struggling to wrap my mind around how that could be. After all, I solved for $y$ w.r.t. $x$ without squaring both sides. Could anyone help me understand what is going on?
The problem is that your reasoning isn't reversible. Your two equations together imply $y^2-x^2-1=0$, but the
converse is not true: that equation does not imply your original system.
Compare your question to the following argument. The system $$x+y=1$$ $$2x+y=1$$ yields, by subtraction, that $x=0$. But this second equation allows $y$ to be arbitrary! So we get a whole lot of "extraneous solutions": $(0,y)$, for any $y$. The problem, of course, is that the equation $x=0$ does not, in turn, imply the original system. I take it you do not find this situation puzzling.
You generally lose information when you replace a system of equations with a linear combination of them. The combination will
include the solutions of your original equation, but it likely will include non-solutions, as well.
The form of the reasoning is: "any solution of the original equations will be a solution of this equation, too." Yes, but then you've only found a superset containing the solution set. To characterize the solution set exactly, you must worry about the converse.
When you solve a system of equations (whether it be by elimination or other means), you cannot discard the individual equations you started with. The reason is that each one of these equations carry more information (restrictions for example) than the one equation you end up with.
This goes for other example as well. The function $f(x) = \sqrt{x} - \sqrt{x}$ is not equal to $0$. The reason being any $x < 0 $ is not in the domain of $f$. By eliminating $\sqrt{x}$ , you have lost a key piece of information about this function (its domain).
I think the issue here is at least partially that you're assuming the rest of the solution will go something like:
We know that $y=\pm\sqrt{x^2+1}$. We'll use this to find the possible values for $x$, then substitute in for $y$.
and this is missing something: When we go to solve for $x$, we're going to need to know whether $y$ is positive or negative. That is, we need to split into cases $y=\sqrt{x^2+1}$ or $y=-\sqrt{x^2+1}$ and
then solve for $x$ by substituting in $y$. This is to say that when you write:
After all, I solved for y w.r.t. x without squaring both sides.
you're missing something. You didn't solve for $y$. You reduced one statement to saying that $y$ was among two possibilities. You have to proceed to the end of the argument with each possibility separate, since you can't substitute "either this or that" in for $y$.
To be sure, what elimination does is we start with a system like $$2xy+y^2=0$$ $$x^2+2xy+1=0$$ And then you've arranged into an equivalent statement: $$y=\pm\sqrt{x^2+1}$$ $$x^2+2xy+1=0$$ - or whatever you take the second equation to be. This system is equivalent to the first since all of your steps are reversible (noting that you've correctly used $y=\pm\sqrt{z}$ as the equivalent to $y^2=z$). So, you aren't introducing extraneous roots - it's just that your formula for $x$ depends on the $\pm$, so you don't get to choose $x$, then choose the sign of $y$. Another way to say this is that your error is in this statement:
A more simple example might be a system like: $$y^2=1$$ $$x+y=0$$ You get that $y=\pm 1$, but you can't really substitute that into the second equation itself since it depends on a variable $\pm$.
You have taken a square root. Whenever you take a square root you can introduce an extraneous root. Even for the simple system $x^2=y^2$, $x=y$: if you took a square root of the first equation, you'd get an extraneous root. You must always consider all the equations, as they all contain information about the solution. If you had gone about solving it via elimination or substitution (rather than merging the two equations together randomly like you did), then you would not have encountered this.
If you solve your equation fully (and use complex numbers) you get two pairs of roots: $$\left(\frac{1}{\sqrt3},-\frac{2}{\sqrt3}\right), \left(-\frac{1}{\sqrt3},\frac{2}{\sqrt3}\right),(i,0),(-i,0)$$ The second pair of roots does fit your reduced equation (with a small rearrangement), namely $x=\pm\sqrt{y^2-1}$.
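For what it's worth, the full solution set can be confirmed with a computer algebra system; a quick check of my own using sympy:

import sympy as sp

x, y = sp.symbols('x y')
# Solve the original system exactly; note sqrt(3)/3 = 1/sqrt(3).
print(sp.solve([2*x*y + y**2, x**2 + 2*x*y + 1], [x, y]))
# Returns the two real pairs (1/sqrt(3), -2/sqrt(3)), (-1/sqrt(3), 2/sqrt(3))
# and the two purely imaginary pairs (I, 0), (-I, 0), as listed above.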
|
I am working through Susan Lea's
Mathematics for Physicists however I am stuck on problem 31)b):
31) Prove
b)
$\int_{S}(\hat{n}\times\vec{\nabla})\times\vec{u}\hspace{0.5mm}dA = \oint_{C}d\vec{l}\times\vec{u} $
I have looked at the solution and understand that first a differential rectangle is constructed in the x-y plane, and the integral on the right is then broken down after applying the cross product. However, I get stuck afterwards with how the cross product is computed. I use the determinant method to compute cross products, where the first row is the x, y, z unit vectors, but I believe the next two rows are done incorrectly, which is why I don't get the appropriate solution:
Can someone please explain this?
|
Q4. Suppose that $P$ is an $n \times n$ matrix such that $P^{2} = P$. Show that $\mathbb{R}^{n}$ is the direct sum of the range $R(P)$ and the nullspace $N(P)$ of $P$. Show also that $P$ represents the projection from $\mathbb{R}^{n}$ onto $R(P)$.
A4. Again, not sure how to start. It's idempotent... so?
Q5. Prove that for any real matrix $A$, $N(A) = \text{orthogonal complement of } \ R(A^{T})$. Prove that for any real matrix, $N(A^{T}) = \text{orthogonal complement of} \ R(A)$.
A5. I get the first bit of this. It's the second part I'm not sure about. Let $A^{\ast} = A^{T}$. Let $R(A)^{\ast}$ be the orthogonal complement of $R(A)$. What I said was: let $x \in N(A)$. Then $Ax = 0$. So $A^{\ast}(Ax)=A^{\ast}0=0$. So $A^{\ast}(Ax)=0$. So $A(A^{\ast}x)=0$, so $A^{\ast}x=0$. So $x$ is in the nullspace of $A^{\ast}$. I'm not sure how to show the row space bit.
Q6. Let $A$ be a real matrix and let $R(A)$ denote its range. Show that the projection of a vector $b$ onto $R(A)$, parallel to the orthogonal complement of $R(A)$ (denoted $R(A)^{\ast}$), is the vector in $R(A)$ closest to $b$.
Let $A$ be a real matrix and let $R(A)$ and $R(A)^{\ast}$ denote the range of $A$ and the orthogonal complement of $R(A)$ respectively. Show that the projection of a vector $s$ onto $R(A)$ parallel to $R(A)^{\ast}$ is the vector in $R(A)$ closest to $s$.
A6. I think both of these questions are the same, right? No idea where to start! :(
I'm guessing these are all similar proofs, hence they're on one thread.
|
Let $(X,\mathcal{M},\mu)$ be a measure space. Let $A_1, A_2, \ldots \in \mathcal{M}$.
Then, I want to show that: $$\mu\left(\bigcup_N \bigcap_{n=N}^{\infty} A_n \right) \leq \lim \inf \mu(A_n)$$
There is a solution in lecture notes:
Let $B_N = \bigcap_{n=N}^{\infty} A_n$. The sets $B_N$ form an increasing sequence, so by continuity from below:
$$\mu\left(\bigcup_N \bigcap_{n=N}^{\infty} A_n \right) = \mu\left(\bigcup_N B_N\right) = \lim_{N \to \infty} \mu(B_N) \leq \lim_{N \to \infty} \inf_{n \geq N} \mu(A_n) = \lim \inf \mu(A_n)$$
OK. I do not understand how $\mu\left(\bigcup_N B_N\right) = \lim_{N \to \infty} \mu(B_N)$, and then the following inequality and equality. Can somebody give a detailed explanation? Thank you very much.
|
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range $1 < p_{\rm T} < 8$ GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_{\rm T} \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
|
Definition:Ordering on Natural Numbers

Informal Definition
Let $\N$ denote the natural numbers.
The ordering on $\N$ is the relation $\le$ everyone is familiar with.
For example, we use it when we say:
James has $6$ apples, which is more than Mary, who has $4$.
which can be symbolised as:
$6 \ge 4$
The same holds for any construction of $\N$ in an ambient theory.
Definition
Let $\left({P, 0, s}\right)$ be a Peano structure.
The ordering of $P$ is the relation $\le$ defined by: $\forall m, n \in P: m \le n \iff \exists p \in P: m + p = n$
where $+$ denotes addition in $\left({P, 0, s}\right)$.
Let $\left({S, \circ, \preceq}\right)$ be a naturally ordered semigroup.
The relation $\preceq$ in $\left({S, \circ, \preceq}\right)$ is called the ordering.
Let $\N_{>0}$ be the axiomatised $1$-based natural numbers.
The strict ordering of $\N_{>0}$, denoted $<$, is defined as follows: $\forall a, b \in \N_{>0}: a < b \iff \exists c \in \N_{>0}: a + c = b$ The (weak) ordering of $\N_{>0}$, denoted $\le$, is defined as: $\forall a,b \in \N_{>0}: a \le b \iff a = b \lor a < b$
Let $\omega$ be the minimal infinite successor set.
The strict ordering of $\omega$ is the relation $<$ defined by: $\forall m,n \in \omega: m < n \iff m \in n$ The (weak) ordering of $\omega$ is the relation $\le$ defined by: $\forall m,n \in \omega: m \le n \iff m < n \lor m = n$
Let $\left({\R, +, \times, \leq}\right)$ be the field of real numbers.
Let $\N$ be the natural numbers in $\R$.
Then the ordering of $\N$ is the restriction of $\le$ to $\N$.

Also defined as
Some sources define the ordering on the natural numbers as the strict ordering $<$ rather than the weak ordering $\le$. However, by Reflexive Reduction of Ordering is Strict Ordering and Reflexive Closure of Strict Ordering is Ordering, this is seen to be immaterial.
|
How can water be absorbed into a paper towel such that it continues to be absorbed up into the material against the force of gravity? What property of paper towels/water causes this? I can understand that cohesive forces cause water molecules to follow other water molecules, but why do the molecules begin this climb in the first place?
This is simply an example of capillary action expanded to a sheet of many tiny capillaries. In a narrow glass tube immersed in water, water will climb the inside of the tube to a level higher than that of the water outside because it is energetically favourable for the water to wet the surface due to adhesive forces and the cohesive forces of the water pull the bulk of the water up with the contact line of the water and the glass. Glass and water have strong adhesive forces as glass has many silicate groups which can interact with water through hydrogen bonding and other polar interactions. As the liquid rises, the volume of water above the surrounding liquid increases, thus increasing the weight of the column. When the weight of the column equals the adhesive and cohesive forces drawing the water upwards, the system is at equilibrium and the water will rise no further.
This can be calculated from Jurin's law: $$h = \frac{2\gamma\cos\theta}{\rho gr}$$ where $h$ is the height of the column, $\gamma$ is the liquid-air surface tension, $\theta$ is the equilibrium contact angle between the glass and the water, $\rho$ is the density of water, $g$ is the acceleration due to gravity, and $r$ is the radius of the tube. As you can see, increasing the surface tension (which comes from water's cohesive forces) or decreasing the contact angle (the lower the contact angle, the stronger the adhesive forces between liquid and surface) will increase the height the column will reach, and conversely a denser liquid or larger tube will decrease the height (greater increase in weight for a given rise).
I won't do the whole calculation, but for a 1 mm radius glass tube with water ($\theta \approx 50°$, $\gamma = 72\ \mathrm{mN/m}$), the equilibrium height is about 9.4 mm. Now, if we shrink the tube much further to approximate something like a pore in a piece of paper towel, we can make the liquid go much further. (It's obviously not perfectly cylindrical, but close enough for illustration, and the cellulose-water contact angle seems to be close enough to glass', as cellulose's abundance of hydroxyl groups is similarly adhesive to water.) At a radius of 10 µm, we get a height of roughly 94 cm.
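For anyone who wants to reproduce the numbers, a short script of my own implementing the formula above (the material constants are the illustrative values already quoted):

import math

def jurin_height(gamma, theta_deg, r, rho=1000.0, g=9.81):
    """Equilibrium capillary rise h = 2*gamma*cos(theta) / (rho*g*r)."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

print(jurin_height(0.072, 50, 1e-3))   # ~9.4e-3 m for a 1 mm radius tube
print(jurin_height(0.072, 50, 10e-6))  # ~0.94 m for a 10 um radius pore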
When we expand this concept to a large sheet of interconnected pores, we find that capillary action can draw a relatively large volume of water across considerable distances, which in addition to being great for cleaning up household spills, opens the possibility of using paper as a substrate for analytical devices that don't need pumps, including the lateral flow assay (most commonly seen as the pregnancy test stick) and paper microfluidic devices.
|
An unknown function $y=f(x)$ is infinitely differentiable everywhere on $\mathbb{R}$.
Given the values of every point $ (x,y) $ where $0<x<1 $,
could you please show whether there exists a unique solution for $y=f(x)$?
If not, could you please show that there exist infinitely many solutions for $y=f(x)$?
My approach:
$$y=f(x)=\sum_n a_n x^{n}$$
we have an infinite number of data points $(x_i,y_i)$ to solve for the $a_n$, through an infinite system of linear equations:
$Xa = y$
Where
- $a$ is a single-column, infinite matrix $(a_1,a_2,\ldots,a_n,\ldots)$
- $y$ is a single-column, infinite matrix $(y_1,y_2,\ldots,y_m,\ldots)$
- $X$ is a doubly infinite matrix
$$\begin{pmatrix} x_1^{1} & x_1^{2} & \cdots & x_1^{n} & \cdots\\ x_2^{1} & x_2^{2} & \cdots & x_2^{n} & \cdots \\ \vdots \\ x_m^{1} & x_m^{2} & \cdots & x_m^{n} & \cdots \\ \vdots \end{pmatrix}$$
Using definitions from paper 'SEQUENCE SPACES AND INVERSE OF AN INFINITE MATRIX', we could evaluate the inverse of an infinite matrix.
Then
$a = \textbf{X}^{-1} y$
This way we solve for all the $a_n$, and the solution to $f(x)$ is found.
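In practice one would have to truncate this system; here is a small sketch of my own (not from the cited paper) that fits finitely many coefficients by least squares, and also hints at how ill-conditioned the Vandermonde-type matrix $X$ becomes:

import numpy as np

f = np.exp                          # a stand-in smooth function on (0, 1)
m, n = 50, 10                       # number of sample points and of coefficients
xs = np.linspace(0.01, 0.99, m)
X = np.vander(xs, n, increasing=True)   # powers x^0 .. x^(n-1) (constant term included)
a, *_ = np.linalg.lstsq(X, f(xs), rcond=None)

print(np.linalg.cond(X))            # enormous: the truncated system is ill-conditioned
print(np.abs(X @ a - f(xs)).max())  # yet the fitted values themselves are very accurate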
However, my intuition tells me that there should be an infinite number of solutions for $f(x)$.
What branch of Math shall I explore to solve this kind of problems?
{Math Newbie and lover here. Thanks for your help!!}
Case 2: instead of knowing the values of every point $ (x,y) $ where $0<x<1 $,
we know the values of every point $ (x,y) $ where $x \in \mathbb{N} $.
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Wed Mar 15, 2006 10:53 pm Post subject: Y&Y TeX vs. MiKTeX with MathTimePro2 fonts
I used updated fonts just prior to the posting of 0.98.
Initially I had different page breaks with a test document of my own in the two TeX systems, but that seems to have disappeared once I updated geometry.sty in Y&Y to the same version I use with MiKTeX.
There are two other peculiarities, indirectly related to the fonts:
1. With Y&Y, when mtpro2.sty is loaded, I get messages that \vec is already defined, and then ditto for \grave, \acute, \check, \breve, \bar, \hat, \dot, \tilde, \ddot. Perhaps this is due to different versions of amsmath.sty?
2. In Y&Y, I must include
\usepackage[T1]{fontenc}
for otherwise I get message:
OT1/ptm/m/n/10.95=ptm7t at 10.95pt not loadable:
Metric (TFM) file not found
I am clearly using different psnfss package files with Y&Y than with MiKTeX. I tried updating the Y&Y versions to be the same as those for MiKTeX, but then all hell breaks loose over encodings -- with Y&Y expecting to find TeXnAnsi encodings and not finding them. (It may be that in Y&Y I have to update tfm's for Times, too. But I am loath to mess further with Y&Y with respect to a working font configuration.) WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Thu Mar 16, 2006 3:47 am Post subject: please, send me <w.a.schmidt@gmx.net> your test document and the
log files that would result with and without \usepackage[T1]{fontenc}
TIA
Walter WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany
Posted: Fri Mar 17, 2006 9:27 am Post subject: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX.
Y&Y-TeX supports Times and other fonts from the non-TeX world
only with LY1 encoding.
2) Updating psnfss on Y&Y-TeX is pointless. The psnfss collection
supports the Base35 fonts with OT1 and T1/TS1 encoding, which
does not work on Y&Y-TeX; see above.
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
More info on Sunday. murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA
Posted: Fri Mar 17, 2006 7:49 pm Post subject: Your answers identified the problems & solutions!
WaS wrote: Preliminary answers:
1) Using T1 encoding with Times cannot work on Y&Y-TeX....
2) Updating psnfss on Y&Y-TeX is pointless....
3) Loading fontenc should not be necessary at all, but
I do not yet understand why you get the error re. OT1/ptm.
Does it help to issue \usepackage[LY1]{fontenc} before loading
mtpro2?
4) The errors re. \vec etc. may be due to an obsolete amsmath.sty,
as compared with MikTeX. Please, run a minimal test document
that does not use amsmath to check this.
Re 3): Yes, \usepackage[LY1]{fontenc} in my test document avoids the error about OT1.
5) Yes, the error about \vec, etc., was due to an obsolete amsmath.sty. Refreshing the amsmath files fixed this.
Thank you!
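For later readers, a minimal test file reflecting the fix discussed in this thread might look like this (a sketch; exact package options may vary by installation):

\documentclass{article}
\usepackage[LY1]{fontenc} % Y&Y TeX supports Times-world fonts with LY1, not T1
\usepackage{mtpro2}       % load MathTimePro2 after setting the encoding
\begin{document}
Test: $\vec{x} + \hat{y} + \ddot{z}$
\end{document}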
|
2017-04-19 08:09
Flavour anomalies after the $R_{K^*}$ measurement / D'Amico, Guido (CERN) ; Nardecchia, Marco (CERN) ; Panci, Paolo (CERN) ; Sannino, Francesco (CERN ; Southern Denmark U., CP3-Origins ; U. Southern Denmark, Odense, DIAS) ; Strumia, Alessandro (CERN ; Pisa U. ; INFN, Pisa) ; Torre, Riccardo (EPFL, Lausanne, LPTP) ; Urbano, Alfredo (CERN) The LHCb measurement of the $\mu/e$ ratio $R_{K^*}$ indicates a deficit with respect to the Standard Model prediction, supporting earlier hints of lepton universality violation observed in the $R_K$ ratio. We show that the $R_K$ and $R_{K^*}$ ratios alone constrain the chiralities of the states contributing to these anomalies, and we find deviations from the Standard Model at the $4\sigma$ level. [...] arXiv:1704.05438; CP3-ORIGINS-2017-014; CERN-TH-2017-086; IFUP-TH/2017; CP3-Origins-2017-014.- 2017-09-04 - 31 p. - Published in : JHEP 09 (2017) 010 Article from SCOAP3: PDF; Fulltext: PDF; Preprint: PDF
2017-04-15 08:30
Multi-loop calculations: numerical methods and applications / Borowka, S. (CERN) ; Heinrich, G. (Munich, Max Planck Inst.) ; Jahn, S. (Munich, Max Planck Inst.) ; Jones, S.P. (Munich, Max Planck Inst.) ; Kerner, M. (Munich, Max Planck Inst.) ; Schlenk, J. (Durham U., IPPP) We briefly review numerical methods for calculations beyond one loop and then describe new developments within the method of sector decomposition in more detail. We also discuss applications to two-loop integrals involving several mass scales. CERN-TH-2017-051; IPPP-17-28; MPP-2017-62; arXiv:1704.03832.- 2017-11-09 - 10 p. - Published in : J. Phys. : Conf. Ser. 920 (2017) 012003 Fulltext from Publisher: PDF; Preprint: PDF; In : 4th Computational Particle Physics Workshop, Tsukuba, Japan, 8 - 11 Oct 2016, pp.012003
2017-04-15 08:30
Anomaly-Free Dark Matter Models are not so Simple / Ellis, John (King's Coll. London ; CERN) ; Fairbairn, Malcolm (King's Coll. London) ; Tunney, Patrick (King's Coll. London) We explore the anomaly-cancellation constraints on simplified dark matter (DM) models with an extra U(1)$^\prime$ gauge boson $Z'$. We show that, if the Standard Model (SM) fermions are supplemented by a single DM fermion $\chi$ that is a singlet of the SM gauge group, and the SM quarks have non-zero U(1)$^\prime$ charges, the SM leptons must also have non-zero U(1)$^\prime$ charges, in which case LHC searches impose strong constraints on the $Z'$ mass. [...] KCL-PH-TH-2017-21; CERN-TH-2017-084; arXiv:1704.03850.- 2017-08-16 - 19 p. - Published in : JHEP 08 (2017) 053 Article from SCOAP3: PDF; Preprint: PDF
2017-04-13 08:29
Single top polarisation as a window to new physics / Aguilar-Saavedra, J.A. (Granada U., Theor. Phys. Astrophys.) ; Degrande, C. (CERN) ; Khatibi, S. (IPM, Tehran) We discuss the effect of heavy new physics, parameterised in terms of four-fermion operators, in the polarisation of single top (anti-)quarks in the $t$-channel process at the LHC. It is found that for operators involving a right-handed top quark field the relative effect on the longitudinal polarisation is twice larger than the relative effect on the total cross section. [...] CERN-TH-2017-013; arXiv:1701.05900.- 2017-06-10 - 5 p. - Published in : Phys. Lett. B 769 (2017) 498-502 Article from SCOAP3: PDF; Elsevier Open Access article: PDF; Preprint: PDF
2017-04-12 07:18
Colorful Twisted Top Partners and Partnerium at the LHC / Kats, Yevgeny (CERN ; Ben Gurion U. of Negev ; Weizmann Inst.) ; McCullough, Matthew (CERN) ; Perez, Gilad (Weizmann Inst.) ; Soreq, Yotam (MIT, Cambridge, CTP) ; Thaler, Jesse (MIT, Cambridge, CTP) In scenarios that stabilize the electroweak scale, the top quark is typically accompanied by partner particles. In this work, we demonstrate how extended stabilizing symmetries can yield scalar or fermionic top partners that transform as ordinary color triplets but carry exotic electric charges. [...] MIT-CTP-4897; CERN-TH-2017-073; arXiv:1704.03393.- 2017-06-23 - 34 p. - Published in : JHEP 06 (2017) 126 Article from SCOAP3: PDF; Preprint: PDF
2017-04-11 08:06
Where is Particle Physics Going? / Ellis, John (King's Coll. London ; CERN) The answer to the question in the title is: in search of new physics beyond the Standard Model, for which there are many motivations, including the likely instability of the electroweak vacuum, dark matter, the origin of matter, the masses of neutrinos, the naturalness of the hierarchy of mass scales, cosmological inflation and the search for quantum gravity. So far, however, there are no clear indications about the theoretical solutions to these problems, nor the experimental strategies to resolve them [...] KCL-PH-TH-2017-18; CERN-TH-2017-080; arXiv:1704.02821.- 2017-12-08 - 21 p. - Published in : Int. J. Mod. Phys. A 32 (2017) 1746001 Preprint: PDF; In : HKUST Jockey Club Institute for Advanced Study : High Energy Physics, Hong Kong, China, 9 - 26 Jan 2017
2017-04-05 07:33
Radiative symmetry breaking from interacting UV fixed points / Abel, Steven (Durham U., IPPP ; CERN) ; Sannino, Francesco (CERN ; U. Southern Denmark, CP3-Origins ; U. Southern Denmark, Odense, DIAS) It is shown that the addition of positive mass-squared terms to asymptotically safe gauge-Yukawa theories with perturbative UV fixed points leads to calculable radiative symmetry breaking in the IR. This phenomenon, and the multiplicative running of the operators that lies behind it, is akin to the radiative symmetry breaking that occurs in the Supersymmetric Standard Model. CERN-TH-2017-066; CP3-ORIGINS-2017-011; IPPP-2017-23; arXiv:1704.00700.- 2017-09-28 - 14 p. - Published in : Phys. Rev. D 96 (2017) 056028 Fulltext: PDF; Preprint: PDF
2017-03-31 07:54
Continuum limit and universality of the Columbia plot / de Forcrand, Philippe (ETH, Zurich (main) ; CERN) ; D'Elia, Massimo (INFN, Pisa ; Pisa U.) Results on the thermal transition of QCD with 3 degenerate flavors, in the lower-left corner of the Columbia plot, are puzzling. The transition is expected to be first-order for massless quarks, and to remain so for a range of quark masses until it turns second-order at a critical quark mass. [...] arXiv:1702.00330; CERN-TH-2017-022.- SISSA, 2017-01-30 - 7 p. - Published in : PoS LATTICE2016 (2017) 081 Fulltext: PDF; Preprint: PDF; In : 34th International Symposium on Lattice Field Theory, Southampton, UK, 24 - 30 Jul 2016, pp.081
|
The only time we need to know where a force acts is when we are calculating a torque. For contact forces, it is clear that the force acts at the point of contact. But for a force like gravity, which acts at a distance, it is less clear.
In reality, a rigid object is made up of many particles, and there is a small gravitational force and torque on each of them. When we only care about acceleration we only need the sum of all these forces, which is $\vec{F}_{tot} = \sum_i m_i \vec{g}= M\vec{g}$. But what about the torques?
We would like to pretend that this total gravitational force acts at a single point for the purpose of calculating torque. Is there a point $\vec{x}_{cg}$ such that $\vec{x}_{cg}\times \vec{F}_{tot}$ gives the same total torque as summing up all the small torques?
If we do sum up all the torques we find $\vec{\tau}_{tot} = \sum_i \vec{x}_i\times (m_i\vec{g}) = \left(\frac{1}{M}\sum_i m_i \vec{x}_i\right) \times (M\vec{g})$. This tells us to call $\vec{x}_{cg} = \frac{1}{M}\sum_i m_i \vec{x}_i$ the center of gravity, and if we pretend that the total force of gravity acts at this point, it will always give us the right answer for the gravitational torque. Finally, we notice that it happens to have the same form as the definition of the center of mass!
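Here is a small numerical check of this identity for a uniform field (a sketch using NumPy with random particle data):

import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0.1, 2.0, size=10)           # particle masses
x = rng.uniform(-1.0, 1.0, size=(10, 3))     # particle positions
g = np.array([0.0, 0.0, -9.81])              # uniform gravitational field

tau_sum = np.sum(np.cross(x, m[:, None] * g), axis=0)  # sum of the small torques
M = m.sum()
x_cg = (m[:, None] * x).sum(axis=0) / M      # center of gravity (= center of mass here)
tau_cg = np.cross(x_cg, M * g)               # torque of the total force applied at x_cg

print(np.allclose(tau_sum, tau_cg))          # True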
However! If you do the calculation yourself you might notice that if $\vec{g}$ varies from particle to particle then this derivation does not work. In this case the center of gravity is not actually well defined. There may be no $\vec{x}_{cg}$ that does what we want, and even if there is it is not unique, except in a few special cases.
|
Let $G$ be a finite group and $H \le G$ with $g \in G$ such that all elements of the coset $Hg$ are conjugate in $G$. Let $\chi$ be a $\mathbb C$-character of $G$ such that $[\chi_H, 1_H] = 0$. Show that $\chi(g) = 0$.
This is exercise 2.1 (b) from M. Isaacs, Character Theory of Finite Groups; it is given the hint to look at the trace of $\sum_{h \in H} \mathcal X(hg)$, where $\mathcal X$ is a $\mathbb C$-representation for $\chi$.
A $\mathbb C$-representation is a homomorphism $\mathcal X : G \to GL(n, \mathbb C)$, which leads naturally to a representation of the algebra $\mathbb C[G]$. Further, for two class functions $\varphi$ and $\vartheta$ on a group $G$, the
inner product is$$ [\varphi, \vartheta] = \frac{1}{|G|} \sum_{g\in G} \varphi(g) \overline{\vartheta(g)}.$$For a character $\chi$, the notation $\chi_H$ just means the restriction to $H$.
Do you have any ideas how to solve this problem?
|
I was trying to find the residue of the function
$$f(z) = \frac{z^2 + \sin z}{\cos z - 1}.$$
Here is my attempt:
The given function has a pole of order two at $z = 2n\pi$. So, we use the following formula to calculate the residue of a function when it has a pole of order $m$ at $z=z_0$.
$$\mathrm{Res}(f(z))_{z=z_0}=\frac{1}{(m-1)!}\lim_{z\to z_0}\left[\frac{d^{m-1}}{dz^{m-1}}(z-z_0)^m f(z)\right]$$
But I am not able to apply this formula, as I get zero in the denominator when taking the limit $z\to 2n\pi$. Any help or suggestions will be very helpful for me.
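As a sanity check, the residue at a particular pole, say $z = 2\pi$, can be computed with SymPy:

import sympy as sp

z = sp.symbols('z')
f = (z**2 + sp.sin(z)) / (sp.cos(z) - 1)
print(sp.residue(f, z, 2 * sp.pi))  # expect -8*pi - 2 at z = 2*pi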
Thanks
|
In the literature on social welfare functionals, the only example I've seen of a functional which meets all of Arrow's conditions (or at least utility analogues of Arrow's conditions) plus invariance regarding ordinal level comparability is Rawls' maximin. E.g. Sen in
On Weights and Measures (1977, p. 1544) cites maximin as his case of a functional meeting all of these conditions. Maximin orders the alternatives by the welfare of the individual who is worst off. I assume that the inverse of maximin (i.e. where the alternatives are ordered by the welfare of the individual who is best off) would also meet these conditions.
Is there any work on other social welfare functionals which meet all these conditions? (I'm aware that if we tweak these conditions slightly we can derive other functionals, but I'm interested in the case in which we keep them unaltered.)
If not, is this evidence that maximin, and its inverse, are the only normatively sensible social welfare functionals that meet all these conditions? Or is it just evidence that people aren't so interested in this set of conditions? (If there is a clear reason why this set of conditions is uninteresting, I'd love to hear it.)
Thanks for any help!
Utility analogues of Arrow’s conditions:
Utility analogues of Arrow’s conditions are Arrow’s conditions redefined for Sen’s welfare functional framework. Instead of taking a profile of orderings as input, Sen's functional takes a profile of utility functions as input: $U \ = \ <u_{i_1}(X), \ u_{i_2}(X), \ \dots \ , \ u_{i_n}(X)>$. $U$ is defined on $X \times N$; each individual, $i \in N $, is paired with each alternative, $x \in X$, and the result of each pairing is the utility derived by $i$ from $x$. $\mathcal{U} \ = \ \{U^1, \ U^2, \ \dots \ , \ U^n \}$ is the set of all possible utility profiles. $\mathcal{U^*}$ is the set of all utility profiles which meet a particular domain restriction. $\mathcal{R}$ is the set of all possible orderings of $X$. A social welfare functional can then be defined as: $f: \ \mathcal{U^*} \longrightarrow \mathcal{R}$. The final ordering given by profile $U^1$, $f(U^1)$, is denoted: $R_{U^1}$. We can then define utility analogues of Arrow's conditions:
Unrestricted Domain$’$: The domain of $f$ is the set of all possible utility profiles: $\mathcal{U}^* \ = \ \mathcal{U}$.
Weak Pareto$’$: $\forall x, y \in X$: $(\forall i \in N: u_i(x) > u_i(y)) \Longrightarrow (xPy)$.
Non-Dictatorship$’$: $f$ does not single out one individual $i \in N$ such that, $\forall U \in \mathcal{U^*}, \forall x, y \in X$: $(u_i(x) > u_i(y)) \Longrightarrow (xPy)$.
Independence of Irrelevant Utilities: $\forall U^1, U^2 \in \mathcal{U^*}, \forall x, y \in X$: $\left(\forall i \in N: (u^1_i(x) = u^2_i(x)) \land (u^1_i(y) = u^2_i(y))\right) \Longrightarrow ((x R_{U^1} y) \Longleftrightarrow (x R_{U^2} y))$.
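To make the objects concrete, here is a toy sketch of the maximin functional in Python (the data structure and names are just for illustration):

def maximin_ranking(U):
    # U: dict mapping individual -> dict mapping alternative -> utility
    alternatives = next(iter(U.values())).keys()
    return sorted(alternatives,
                  key=lambda x: min(u[x] for u in U.values()),
                  reverse=True)

profile = {"i1": {"x": 3, "y": 5}, "i2": {"x": 4, "y": 1}}
print(maximin_ranking(profile))  # ['x', 'y']: x's worst-off utility 3 beats y's 1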
|
Edit: As noted by @Branimir, I can remove the conjugations from the earlier draft. $T^*$ is the Banach-space adjoint conjugated by a conjugate-linear isomorphism, so constants end up being preserved.
As this is a homework problem, I will attempt to give a hint rather than the complete solution (which is elegant). I here adapt p. 192-194 of (the 1972 edition of) Reed and Simon,
Functional Analysis, so that may be a good reference [in particular, I exchange $T$ and $T$'s adjoint, and change the underlying spaces, from that book].
This operator is qualitatively similar to the "stretching" shift operator $T: \ell^2 \to \ell^2$, where$$T(x_0, x_1, x_2, \dotsc) = (0, x_0, x_1, x_2, \dotsc).$$
It helps for this operator to look at its adjoint, which we can determine by just using the definition of adjoint and looking at its effects on the $j$th basis vector $e_j$, where
$$e_j := (0, 0, \dotsc, 0, \overbrace{1}^{j\text{th slot}}, 0, 0, \dotsc ).$$
Note that for any $x$, $$ \begin{align}\left\langle T^* e_j, x \right\rangle & = \left\langle e_j, T x \right\rangle\\& = (Tx)_j\\& = \begin{cases} 0, & j = 0, \\ x_{j - 1}, & j \neq 0 \end{cases} \end{align}. $$
Therefore, $\left\langle T^* e_0, x \right\rangle = 0$ for all $x$, so $T^* e_0 = 0$, and letting $x$ range over $\left\lbrace e_k \right\rbrace$, we see that $T^* e_j = e_{j-1}$ if $j \geq 1$. Since the adjoint is a linear operator, we have$$ T^* (x_0, x_1, x_2, \dotsc ) = ( x_1, x_2, x_3, \dotsc )$$which is a "squishing" shift operator.
Why do we like squishing shift operators? Because we can explicitly describe some eigenvectors. Here, take $\lambda$ with $\vert \lambda \vert < 1$, and define$$x_{\lambda} : = (1, \lambda, \lambda^2, \lambda^3, \lambda^4, \dotsc)$$ Since $\vert \lambda \vert < 1$ and the $j$-th entry has modulus $\vert \lambda \vert^j$, this is an $\ell^2$ (indeed, $\ell^1$ !) vector. Then convince yourself that $T^*(x_{\lambda}) = \lambda x_{\lambda}$. So $\sigma(T^*)$ contains the open unit disk. Yet spectra are closed, so it contains the closed unit disk. Since $\lambda \in \sigma(T^*)$ if and only if $\overline{\lambda} \in \sigma(T)$, and since the closed unit disk is its own image under conjugation, $\sigma(T)$ contains the closed unit disk.
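As a quick numerical sanity check of this eigenvector claim (truncated to finitely many coordinates, so the identity holds up to a boundary term of size $|\lambda|^N$):

import numpy as np

N = 200
lam = 0.5 + 0.3j                            # any |lambda| < 1
x = lam ** np.arange(N)                     # truncation of x_lambda
Tstar_x = np.roll(x, -1)                    # left ("squishing") shift
Tstar_x[-1] = 0
print(np.linalg.norm(Tstar_x - lam * x))    # ~ |lam|**N, essentially zero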
Now argue with norms and the spectral radius rules that $\sigma(T)$ is contained in the closed unit disk, and you're done.
Try this idea with your operator.
|
This module demonstrates how to convert the lower triangular covariance table from the Excel Analysis ToolPak to a full covariance matrix for use with the
MMULT function.
First, a brief review of square matrices.
A
matrix is a rectangular array of numbers of the form $$A=\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$$ and is called an m × n matrix because it has m rows and n columns. Matrix \(A\) can also be written as \(A=(a_{ij})\) where \(a_{ij}\) is the element of \(A\) in the i-th row and j-th column, where \(1 \leq i \leq m\) and \(1 \leq j \leq n\). The diagonal elements are \((a_{ij}) \; \forall \; i=j\).
Square matrices
$$B=\begin{bmatrix} 5 & -2 & 6 \\ -2 & 11 & 9.1 \\ 6 & 9.1 & 10 \end{bmatrix}$$
The matrix \(B\) is a 3 × 3 square matrix because it has equal numbers of rows and columns, that is, \(m = n = 3\). The diagonal elements are the values \(5, 11, 10\).
1. Upper triangular matrix
The \(n \times n\) matrix \(C\) is
upper triangular if all elements below the main diagonal are 0. $$C_1=\begin{bmatrix} 5 & 2 & 6 & 7 \\ 0 & 11 & 9 & 4 \\ 0 & 0 & 10 & 1 \\ 0 & 0 & 0 & 8\end{bmatrix}, \qquad C_2=\begin{bmatrix} 5 & 2 & 6 & 7 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 8\end{bmatrix}$$
2. Lower triangular matrix
The \(n \times n\) matrix \(C\) is
lower triangular if all elements above the main diagonal are 0. $$C_3=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 3 & 9 & 0 & 0 \\ 7 & 2 & 1 & 0 \\ 4 & 5 & 8 & 6\end{bmatrix}, \qquad C_4=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 7 & 4 & 4 & 0 \\ 6 & 9 & 1 & 8\end{bmatrix}$$
3. Diagonal matrix
The \(n \times n\) matrix \(C\) is
diagonal if all elements off the main diagonal are 0. $$C_5=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 0 & 9 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 6\end{bmatrix}, \qquad C_6=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 8\end{bmatrix}$$ \(C_5\) has diagonal elements \(5,9,1,6\). \(C_6\) has diagonal elements \(5,0,4,8\).
4. Symmetrical matrix
The \(n \times n\) matrix \(C\) is
symmetrical if the matrix is equal to its transpose, that is, \(C=C^T\). $$C_7=\begin{bmatrix} 5 & 3 & 7 & 4 \\ 3 & 9 & 2 & 5 \\ 7 & 2 & 1 & 8 \\ 4 & 5 & 8 & 6\end{bmatrix}$$
Lower triangular covariance table
The Analysis ToolPak includes tools to estimate Covariance and Correlation. Both procedures produce output that is lower triangular. The omission of the upper triangle was originally based on the need to save several bytes of (expensive) computer memory - the same reason the ToolPak is an Add-In, activated only when required. An example of the Covariance table for four stock returns is shown in figure 1.
It is clear from figure 1, however, that the output is not a lower triangular matrix, as described in point 2 above, because the upper triangle is blank rather than containing zeros. The output is better described as a
lower triangular table.
To convert the lower triangular table to a symmetrical matrix for use in an
MMULT equation, do the following:
1. Select the 4 x 4 lower triangular variance-covariance array (with the red frame in figure 1), and
2. Copy the selection to the Clipboard
3. Select the top left cell of a temporary work area (cell H8) and Paste > Special > Transpose. You can also select Paste > Values to avoid format issues
4. Select the transposed array from step 3 (with the green frame in figure 1), and copy it to the Clipboard
5. Select the top left cell, C3, of the variance-covariance array from step 1, then Paste > Special > Skip Blanks
This example was developed in Excel 2013 Pro 64 bit. Last modified: 26 Oct 2018, 7:18 am [Australian Eastern Standard Time (AEST)]
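Outside Excel, the same completion can be sketched in a few lines of NumPy (reading the ToolPak's blank upper triangle as zeros; values from matrix \(B\) above):

import numpy as np

L = np.array([[ 5.0,  0.0,  0.0],
              [-2.0, 11.0,  0.0],
              [ 6.0,  9.1, 10.0]])

full = L + L.T - np.diag(np.diag(L))  # mirror the lower triangle; count the diagonal once
print(full)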
|
When discussing ordinary correlations we looked at tests for the null hypothesis that the ordinary correlation is equal to zero, against the alternative that it is not equal to zero. If that null hypothesis is rejected, then we look at confidence intervals for the ordinary correlation. Similar objectives can be considered for the partial correlation.
First, consider testing the null hypothesis that a partial correlation is equal to zero against the alternative that it is not equal to zero. This is expressed below:
\(H_0\colon \rho_{jk\textbf{.x}}=0\) against \(H_a\colon \rho_{jk\textbf{.x}}\ne 0\)
Here we will use a test statistic that is similar to the one we used for an ordinary correlation. This test statistic is shown below:
\(t = r_{jk\textbf{.x}}\sqrt{\frac{n-2-k}{1-r^2_{jk\textbf{.x}}}}\) \(\dot{\sim}\) \(t_{n-2-k}\)
The only difference between this and the previous one is what appears in the numerator of the radical. Before we just took n - 2. Here we take n - 2 - k, where k is the number of variables upon which we are conditioning. In our Adult Intelligence data, we conditioned on two variables, so k would be equal to 2 in this case.
Under the null hypothesis, this test statistic will be approximately
t-distributed, also with n - 2 - k degrees of freedom.
We would reject \(H_0\) if the absolute value of the test statistic exceeds the critical value from the
t-table evaluated at \(\alpha/2\):
\(|t| > t_{n-2-k, \alpha/2}\)
Example 6-3: Wechsler Adult Intelligence Data
For the Wechsler Adult Intelligence Data we found a partial correlation of 0.711879, which we enter into the expression for the test statistic as shown below:
\(t = 0.711879 \sqrt{\dfrac{37-2-2}{1-0.711879^2}}=5.82\)
The sample size of 37, along with the 2 variables upon which we are conditioning, is substituted in. Carry out the math and we get a test statistic of 5.82 as shown above.
Here we want to compare this value to a
t-distribution with 33 degrees of freedom for an α = 0.01 level test. Therefore, we look at the critical value for 0.005 in the table; because 33 does not appear, we use the closest df that does not exceed 33, which is 30. In this case the critical value is 2.75, meaning that \(t _ { ( d f , 1 - \alpha / 2 ) } = t _ { ( 33,0.995 ) } \) is 2.75. Note! Some text tables provide the right tail probability (the graph at the top will have the area in the right tail shaded), while other texts provide a table with the cumulative probability (the graph will be shaded to the left). The concept is the same. For example, if alpha was 0.01 then using the first text you would look under 0.005 and in the second text look under 0.995.
Because \(5.82 > 2.75 = t _ { ( 33,0.995 ) }\), we can reject the null hypothesis \(H_{0}\) at the \(\alpha = 0.01\) level and conclude that there is a significant partial correlation between these two variables. In particular, we would conclude that this partial correlation is positive, indicating that even after taking into account Arithmetic and Picture Completion, there is a positive association between Information and Similarities.
Confidence Interval for the partial correlation, \(\rho_{jk\textbf{.x}}\)
The procedure here is very similar to the procedure we used for ordinary correlation.
Steps
Compute Fisher's transformation of the partial correlation using the same formula as before.
\(z_{jk} = \dfrac{1}{2}\log \left( \dfrac{1+r_{jk\textbf{.X}}}{1-r_{jk\textbf{.X}}}\right) \)
In this case, for large n, this Fisher-transformed variable will be approximately normally distributed, with mean equal to the Fisher transform of the population value of this partial correlation, and variance equal to 1 over n - 3 - k.
\(z_{jk}\) \(\dot{\sim}\) \(N \left( \dfrac{1}{2}\log \dfrac{1+\rho_{jk\textbf{.X}}}{1-\rho_{jk\textbf{.X}}}, \dfrac{1}{n-3-k}\right)\)
Compute a \((1 - \alpha) \times 100\%\) confidence interval for the Fisher-transformed correlation
\( \dfrac{1}{2}\log \dfrac{1+\rho_{jk\textbf{.X}}}{1-\rho_{jk\textbf{.X}}}\)
This yields the bounds \(Z_l\) and \(Z_U\) as before.
\(\left(\underset{Z_l}{\underbrace{Z_{jk}-\dfrac{Z_{\alpha/2}}{\sqrt{n-3-k}}}}, \underset{Z_U}{\underbrace{Z_{jk}+\dfrac{Z_{\alpha/2}}{\sqrt{n-3-k}}}}\right)\)
Back-transform to obtain the desired confidence interval for the partial correlation \(\rho_{jk\textbf{.X}}\)
\(\left(\dfrac{e^{2Z_l}-1}{e^{2Z_l}+1}, \dfrac{e^{2Z_U}-1}{e^{2Z_U}+1}\right)\)
Example 6-3: Wechsler Adult Intelligence Data (Steps Shown)
The confidence interval is calculated substituting in the results from the Wechsler Adult Intelligence Data into the appropriate steps below:
Step 1: Compute the Fisher transform:
\begin{align} Z_{12} &= \dfrac{1}{2}\log \frac{1+r_{12.34}}{1-r_{12.34}}\\[5pt] &= \dfrac{1}{2} \log \frac{1+0.711879}{1-0.711879}\\[5pt] &= 0.89098 \end{align}
Step 2: Compute the 95% confidence interval for \( \frac{1}{2}\log \frac{1+\rho_{12.34}}{1-\rho_{12.34}}\) :
\begin{align} Z_l &= Z_{12}-Z_{0.025}/\sqrt{n-3-k}\\[5pt] & = 0.89098 - \dfrac{1.96}{\sqrt{37-3-2}}\\[5pt] &= 0.5445 \end{align}
\begin{align} Z_U &= Z_{12}+Z_{0.025}/\sqrt{n-3-k}\\[5pt] &= 0.89098 + \dfrac{1.96}{\sqrt{37-3-2}} \\[5pt] &= 1.2375 \end{align}
Step 3: Back-transform to obtain the 95% confidence interval for \(\rho_{12.34}\) :
\(\left(\dfrac{\exp\{2Z_l\}-1}{\exp\{2Z_l\}+1}, \dfrac{\exp\{2Z_U\}-1}{\exp\{2Z_U\}+1}\right)\)
\(\left(\dfrac{\exp\{2\times 0.5445\}-1}{\exp\{2\times 0.5445\}+1}, \dfrac{\exp\{2\times 1.2375\}-1}{\exp\{2\times 1.2375\}+1}\right)\)
\((0.4964, 0.8447)\)
Based on this result, we can conclude that we are 95% confident that the interval (0.4964, 0.8447) contains the partial correlation between Information and Similarities scores given scores on Arithmetic and Picture Completion.
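The computations above can be reproduced with a few lines of Python (a sketch; SciPy is assumed only for the critical values, and r, n, k are taken from this example):

import numpy as np
from scipy import stats

r, n, k = 0.711879, 37, 2
t_stat = r * np.sqrt((n - 2 - k) / (1 - r**2))    # 5.82, matching the text
t_crit = stats.t.ppf(1 - 0.01 / 2, df=n - 2 - k)  # exact value for df = 33 (~2.73; the table used df = 30, giving 2.75)
print(t_stat, t_crit, abs(t_stat) > t_crit)

z = 0.5 * np.log((1 + r) / (1 - r))               # Fisher transform, ~0.89098
half = stats.norm.ppf(0.975) / np.sqrt(n - 3 - k) # 1.96 / sqrt(32)
print(np.tanh(z - half), np.tanh(z + half))       # ~ (0.4964, 0.8447)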
|
Disclaimer
My following answer is the "traditional" explanation of Hund's first rule, which is based on a smaller value of $V_\mathrm{ee}$ (electron-electron repulsions) in the triplet state arising from Fermi holes. According to Levine's
Quantum Chemistry 7th ed.:
This traditional explanation turns out to be wrong in most cases. It is true that the probability that the two electrons are very close together is smaller for the helium $^3S$ $1s2s$ term than for the $^1S$ $1s2s$ term. However, calculations with accurate wave functions show that the probability that the two electrons are very far apart is also less for the $^3S$ term. The net result is that the average distance between the two electrons is slightly less for the $^3S$ term than for the $^1S$ term, and the interelectronic repulsion is slightly greater for the $^3S$ term.
The calculations show that the $^3S$ term lies below the $^1S$ term because of a substantially greater electron nucleus-attraction in the $^3S$ term as compared with the $^1S$ term. Similar results are found for terms of the atoms beryllium and carbon.
Wildcat has an answer which briefly mentions this. Aside from that Levine gives several references:
Katriel, J.; Pauncz, R. Theoretical Interpretation of Hund's Rule. Adv. Quantum Chem. 1977, 10, 143–185. DOI: 10.1016/S0065-3276(08)60580-8. Shim, I.; Dahl, J. P. Theor. Chim. Acta 1978, 48 (2), 165. The DOI doesn't resolve properly, but here's the link to the publisher's website. Boyd, R. J. A quantum mechanical explanation for Hund's multiplicity rule. Nature 1984, 310, 480–481. DOI: 10.1038/310480a0. Oyamada, T.; Hongo, K.; Kawazoe, Y.; Yasuhara, H. Unified interpretation of Hund's first and second rules for 2p and 3p atoms. J. Chem. Phys. 2010, 133, 164113. DOI: 10.1063/1.3488099.
With that said, here is the old-fashioned argument for why parallel spins are favoured over paired spins (all else being equal).
Introduction
The origin of Hund's first rule lies in
"exchange energy", which is a way of saying that electrons with like spin repel each other less than electrons with unlike spin.
To see why, let's consider an excited state of the helium atom: $\mathrm{1s^1 2s^1}$. (The ground state configuration $\mathrm{1s^2}$ is of no use to us because those electrons have to be paired with unlike spin.) In the excited state, the electrons can either have parallel spin (the
triplet case) or paired spin (the singlet case).
The total electronic wavefunction comprises a
spatial part, which describes which orbitals the electrons are in (i.e. it specifies $n,l,m_l$), and a spin part, which describes the, well, spin ($m_s$). According to the Pauli exclusion principle, the total wavefunction must be antisymmetric with respect to interchange of the electron labels: $\Psi(1,2) = -\Psi(2,1)$.
Spin wavefunctions
The spin wavefunctions are simultaneous eigenfunctions of the $\hat{S}^2$ and $\hat{S}_z$ operators. The total spin quantum number, $S$, can be found from the eigenvalue of the operator $\hat{S}^2$:
$$\hat{S}^2\psi = \hbar^2 S(S+1)\psi$$
For more detailed discussion, refer to any QM text on the coupling of angular momenta.
There are three possible triplet spin wavefunctions, which are degenerate.
$$\psi_{\mathrm{spin,triplet}}(1,2) = \begin{cases}\alpha(1)\alpha(2) \\\beta(1)\beta(2) \\\frac{1}{\sqrt{2}}[\alpha(1)\beta(2) + \alpha(2)\beta(1)]\end{cases}$$
It turns out that $\hat{S}^2\psi_\mathrm{triplet} = 2\hbar^2\psi_{\mathrm{triplet}}$, which means that these have the quantum number $S = 1$. This corresponds to a multiplicity of $M = 2S + 1 = 3$, hence "triplet". These wavefunctions represent the case where the electrons have
parallel spin. (Loosely speaking, each electron has a spin of $1/2$, and if these are aligned parallel the total spin is $1/2 + 1/2 = 1$.)
The singlet state is
$$\psi_{\mathrm{spin,singlet}}(1,2) = \frac{1}{\sqrt{2}}[\alpha(1)\beta(2) - \alpha(2)\beta(1)]$$
and as you might guess, $\hat{S}^2\psi_\mathrm{singlet} = 0$, meaning that $S = 0$ and $M = 2S+1 = 1$. This represents the case where the electrons are
paired - one can think of it as the two $1/2$ electrons spins “cancelling each other out”.
The important thing to notice is that the
triplet spin wavefunctions are symmetric with respect to permutation of the labels, whereas the singlet spin wavefunction is antisymmetric. For example:
$$\begin{align}\psi_\mathrm{singlet}(2,1) &= \frac{1}{\sqrt{2}}[\alpha(2)\beta(1) - \alpha(1)\beta(2)] \\&= -\frac{1}{\sqrt{2}}[\alpha(1)\beta(2) - \alpha(2)\beta(1)] \\&= -\psi_\mathrm{singlet}(1,2)\end{align}$$
Spatial wavefunctions
The only way of constructing these are the linear combinations below.
$$\begin{align}\psi_\mathrm{space,symm} &= \frac{1}{\sqrt{2}}[\mathrm{1s}(1)\mathrm{2s}(2) + \mathrm{1s}(2)\mathrm{2s}(1)] \\\psi_\mathrm{space,antisymm} &= \frac{1}{\sqrt{2}}[\mathrm{1s}(1)\mathrm{2s}(2) - \mathrm{1s}(2)\mathrm{2s}(1)]\end{align}$$
Remember that the atomic orbitals above are functions of the coordinates of the electron: $\mathrm{1s}(1) = \mathrm{1s}(\vec{r}_1) = \mathrm{1s}(r_1,\theta_1, \phi_1)$. Now, the
antisymmetric spatial wavefunction has an interesting property. Let's explore what happens when the electrons get very close together, i.e. $\vec{r}_1 = \vec{r}_2 = \vec{r}$.
$$\begin{align}\psi_\mathrm{space,antisymm} &= \frac{1}{\sqrt{2}}[\mathrm{1s}(\vec{r})\mathrm{2s}(\vec{r}) - \mathrm{1s}(\vec{r})\mathrm{2s}(\vec{r})] \\&= 0\end{align}$$
If the wavefunction is 0 when the two electrons are on top of each other, that means that there is
no probability of this happening. The average distance between the two electrons $\left\langle\left|\vec{r}_2 - \vec{r}_1\right|\right\rangle$ will therefore be larger, and on average the electrons will repel each other less, since they are on average further away from each other. This is known as a Fermi hole.
In fact, one can also show that the
symmetric wavefunction has a larger probability amplitude when $\vec{r}_1 = \vec{r}_2$. This is known as a Fermi heap.
The total wavefunction
As I mentioned earlier, the Pauli exclusion principle dictates that the total wavefunction be antisymmetric. This means that the triplet spin wavefunctions, which are symmetric, must be paired with the antisymmetric spatial wavefunction. Likewise, the singlet spin wavefunction, which is antisymmetric, must be paired with the symmetric spatial wavefunction.
$$\begin{array}{c|c|c}\text{Spin} & \text{Spatial} & \text{Total}\\\hline\text{Triplet (S)} & \text{AS} & \text{S } \times \text{ AS} = \text{AS} \\\text{Singlet (AS)} & \text{S} & \text{AS } \times \text{ S} = \text{AS}\end{array}$$
So, there is a decrease in electron-electron repulsion when the electrons are
parallel - because this means they have a triplet spin wavefunction, and hence an antisymmetric spatial wavefunction, which has a Fermi hole.
Exchange integrals
There is another, perhaps more mathematical, way of looking at it, and that is to calculate the expectation value of the electronic energy for both wavefunctions. For the helium atom, under the assumption of an infinitely heavy nucleus (essentially an atomic version of the Born-Oppenheimer approximation), the total Hamiltonian (in atomic units) is
$$\hat{H} = -\frac{1}{2}(\nabla_1^2+\nabla_2^2) - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}$$
where $r_i$ is the distance of electron $i$ from the nucleus, and $r_{12}$ is the distance between the two electrons. The expectation value of the energy is simply $\left\langle\Psi\middle|\hat{H}\middle|\Psi\right\rangle$ (assuming our wavefunction is already normalised). It turns out that the only important term in our present discussion is the electron-electron repulsion term $1/r_{12}$.
Let's look at the
triplet spin wavefunction (of course paired with the antisymmetric spatial wavefunction) first. The operator doesn't depend on spin degrees of freedom, so we can separate those, and the inner product of the spin wavefunction with itself is just unity.
$$\begin{align}\left\langle\Psi_\mathrm{triplet}\middle|\frac{1}{r_{12}}\middle|\Psi_\mathrm{triplet}\right\rangle &= \left\langle\psi_\mathrm{spin,triplet}\middle|\psi_\mathrm{spin,triplet}\right\rangle\left\langle\psi_\mathrm{space,antisymm}\middle|\frac{1}{r_{12}}\middle|\psi_\mathrm{space,antisymm}\right\rangle \\&= \frac{1}{2} \left\langle \mathrm{1s}(1)\mathrm{2s}(2) - \mathrm{1s}(2)\mathrm{2s}(1) \middle| \frac{1}{r_{12}} \middle| \mathrm{1s}(1)\mathrm{2s}(2) - \mathrm{1s}(2)\mathrm{2s}(1) \right\rangle \\&= \frac{1}{2} \Biggr\{ \left\langle \mathrm{1s}(1)\mathrm{2s}(2) \middle|\frac{1}{r_{12}}\middle| \mathrm{1s}(1)\mathrm{2s}(2) \right\rangle \\&\quad\quad\quad - \left\langle \mathrm{1s}(2)\mathrm{2s}(1) \middle|\frac{1}{r_{12}}\middle| \mathrm{1s}(1)\mathrm{2s}(2) \right\rangle \\&\quad\quad\quad - \left\langle \mathrm{1s}(1)\mathrm{2s}(2) \middle|\frac{1}{r_{12}}\middle| \mathrm{1s}(2)\mathrm{2s}(1) \right\rangle \\&\quad\quad\quad + \left\langle \mathrm{1s}(2)\mathrm{2s}(1) \middle|\frac{1}{r_{12}}\middle| \mathrm{1s}(2)\mathrm{2s}(1) \right\rangle\Biggr\}\end{align}$$
Ugly, but that's what it is.
We can simplify it however by noting that the first and the fourth brackets are the same (since the variables in the integral are just dummy variables, interchanging the electron labels does not affect the value of the integral). This integral is traditionally called a
Coulomb integral, since it simply reflects the average Coulombic repulsion between two independent (uncorrelated) electrons, one in the 1s orbital and the second in the 2s orbital. It is given the symbol $J_\mathrm{1s2s}$ (the subscripts indicating the orbitals that the electrons are in). Because this is an expectation value for a potential energy arising from repulsion, it is necessarily a positive quantity.
The second and third terms are actually also the same, again because the variables are just dummy variables and you can interchange the labels. This integral, however, does not have a physical interpretation; it is a purely quantum mechanical effect. It is called the
exchange integral, and is given the symbol $K_\mathrm{1s2s}$. It's a bit harder to show, but the exchange integral is also a positive quantity (there is a question on it here). However, its magnitude is not as large as $J_\mathrm{1s2s}$.
Putting it all together (leaving out subscripts for clarity) we have
$$\left\langle\Psi_\mathrm{triplet}\middle|\frac{1}{r_{12}}\middle|\Psi_\mathrm{triplet}\right\rangle = \frac{1}{2}(J-K-K+J) = J-K$$
and if you were to perform the same calculation on the singlet spin wavefunction (not very difficult, just replace the minus signs with a plus!) you would get
$$\left\langle\Psi_\mathrm{singlet}\middle|\frac{1}{r_{12}}\middle|\Psi_\mathrm{singlet}\right\rangle = \frac{1}{2}(J+K+K+J) = J+K$$
Since $J > K > 0$, the expectation value of the energy for the singlet state is higher.
But I have more than two electrons. What now?
Well, you just have to do a ton more ugly maths and perhaps a few hundred more integrals. But the idea is the same.
For every pair of electrons with parallel spins, there is a corresponding exchange integral that reduces the expectation value of the energy.
Therefore, in a $\mathrm{3d^5}$ manganese atom (we ignore the closed shells) there are
$$\frac{5!}{2!(5-2)!} = 10$$
different pairs of d-d electrons with parallel spins. Therefore if you evaluate the expectation value of the energy, there will be a $-10K_\mathrm{3d3d}$ term, which is stabilising. This is precisely the exchange energy effect that Geoff describes in his answer.
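The pair count above is just a binomial coefficient:

from math import comb
print(comb(5, 2))  # 10 parallel-spin pairs among the five 3d electrons in Mn, each contributing one -K term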
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
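In code the procedure is just the following sketch (the rule set is a toy illustration, not an actual Dehn presentation of any named group):

def dehn_reduce(w, rules):
    # rules: list of pairs (u, v) with len(u) > len(v)
    reduced = True
    while reduced:
        reduced = False
        for u, v in rules:
            if u in w:
                w = w.replace(u, v, 1)  # replace one occurrence of u_i by v_i
                reduced = True
                break
    return w  # the word represented the identity iff this is the empty word

print(dehn_reduce("aabb", [("aa", ""), ("bb", "")]) == "")  # True under these toy rules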
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
|
As far as I understand it, at least in scalar QFT, the canonical variables are the field operator $\hat{\phi}(x)$ and its conjugate momentum $\hat{\pi}_{\phi}(x)=\frac{\partial\mathcal{L}}{\partial\dot{\hat{\phi}}}$ (where $x=(t,\mathbf{x})$ and $c=\hbar =1$).
Someone has recently told me that it is usually assumed that these have
no explicit time dependence, i.e. $\partial_{t}\hat{\phi}(x)=0$ and $\partial_{t}\hat{\pi}_{\phi}(x)=0$ (where $\partial_{t}:=\frac{\partial}{\partial t}$). As such, in the Heisenberg picture, their time evolution is governed by $$\frac{d}{dt}\hat{\phi}(x)=i\left[\hat{H},\hat{\phi}(x)\right]\, ,\quad \frac{d}{dt}\hat{\pi}_{\phi}(x)=i\left[\hat{H},\hat{\pi}_{\phi}(x)\right]$$ where $\hat{H}$ is the Hamiltonian of the theory.
If this is indeed the case, what is the argument (rationale) for why this is a valid assumption?
If I've understood things correctly, in the "standard" case of canonical quantisation the fields are quantised in the Schrödinger picture, where they have no time-dependence, and then through mapping to the Heisenberg picture, they pick up time-dependence through the unitary transformation $\hat{U}(t)=e^{-i\hat{H}t}$, hence I can see why, in this case, they have no explicit time dependence (since $\partial_{t}\hat{\phi}_{H}(t,\mathbf{x})=e^{i\hat{H}t}\partial_{t}\hat{\phi}_{S}(\mathbf{x})e^{-i\hat{H}t}$ and $\partial_{t}\hat{\phi}_{S}(\mathbf{x})=0$).
But what about the case where the Hamiltonian (or Lagrangian) has explicit time dependence? Won't operators have explicit time dependence even in the Schrödinger picture in this case?
|
Mooring¶
A mooring system can be composed of several components, including :
mooring lines, mooring buoys, clumpweights, fairlead, anchor, etc…
Its general purpose is to restrict the motions of a floating structure (ship, platform, etc.) in a specified area. Mooring systems can be more complex than a single line connected to an anchor and fulfill multiple objectives.
Line¶ (In construction)
A mooring line is used to connect different kinds of elements: floating structures, buoys, anchors, etc.
See line theory.
Mooring buoy¶
Mooring buoys are simple, pre-defined bodies with built-in force models: nonlinear hydrostatic and linear damping forces. They are based on a spherical shape for their inertia distribution, collision box and visualization asset. Their CoG is defined at the center of the sphere. Their radius and density are to be specified at creation.
Hydrostatic force¶
They come with a simplified nonlinear hydrostatic (buoyancy) force:
\(\mathbf{F}_{hs}(t) = -\rho_{water} \, V(t) \, \mathbf{g}\)
where
\(\rho_{water}\) is the water density, \(\mathbf{g}\) is the gravity acceleration, \(V(t) = \dfrac{\pi}{3} \left[ h(t) \times \left(3 \times R^2 - h(t)^2 \right) + 2 R^3 \right]\) is the immersed volume of the buoy, \(h(t)\) is the immersed draft of the buoy.
This force is applied at the sphere's CoG, which means it induces no torque.
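As an illustration, the immersed volume and the resulting buoyancy magnitude can be sketched as follows (the function names and default water density are illustrative, not the toolkit's API; per the formula above, \(h\) runs from \(-R\), fully dry, to \(R\), fully submerged):

import numpy as np

def immersed_volume(h, R):
    # spherical-cap volume formula from the docs above
    return np.pi / 3.0 * (h * (3.0 * R**2 - h**2) + 2.0 * R**3)

def hydrostatic_force(h, R, rho_water=1025.0, g=9.81):
    return rho_water * g * immersed_volume(h, R)  # upward buoyancy force, in N

print(hydrostatic_force(0.0, 1.0))  # half-submerged buoy of radius 1 m, ~2.1e4 N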
Clumpweight¶
In construction (and development)
Anchor¶
In construction (and development)
Soil Interaction¶
In construction (and development)
|
Market
1. INTRODUCTION
Market is arguably the best-known and most widely used factor - think about the values of broad-based equity indices such as the S&P 500 reported in the news, index-tracking ETFs, and the whole universe of derivatives with indices' values as underlying assets. The market is also the most empirically studied factor, with solid theoretical foundations - the famous Capital Asset Pricing Model (CAPM), which links returns of individual assets to the market return, is the cornerstone of modern finance and factor theory. Moreover, the model is still extensively used by practitioners, mainly in the corporate sector. For us it is important that the logic of the CAPM can be easily generalized to a multifactor framework, so in the next section we review the CAPM implications which are relevant to factor theory. A rigorous technical derivation of the CAPM can be found in Cochrane (2005) [1] or in Cvitanic and Zapatero (2004) [2].
2. CAPM
The Capital Asset Pricing Model of Sharpe (1964) [3] and Lintner (1965) [4] was the first factor pricing theory, with only a single factor - the market portfolio. The CAPM stems from the mean-variance portfolio theory developed by Markowitz (1952) [5]. In Markowitz's model investors, who care about mean and variance of returns only, hold
mean-variance-efficient portfolios, that is, portfolios that either maximize expected return for a given level of variance, or minimize variance given expected return. So, if there are two portfolios $A$ and $B$ with equal expected returns $E[R_a]=E[R_b]$, but with different variances, for instance $\sigma^2_a<\sigma^2_b$, then portfolio $A$ dominates $B$ in the mean-variance sense, since it allows investors to earn the same return with lower risk. The CAPM in Sharpe's and Lintner's version employs Markowitz's approach and further assumes that i.) there is unrestricted borrowing and lending at the risk-free rate, ii.) investors have homogeneous expectations (or the same perception of the joint distribution of returns). Under these conditions, in equilibrium investors hold different combinations of the same portfolio of risky assets (the one with the highest Sharpe ratio - return per unit of risk) and the risk-free asset, according to their individual attitudes towards risk. Since the portfolio of risky assets is the same for each investor, in equilibrium the weights of the individual assets must equal their market value divided by the total market value of all risky assets. This value-weighted portfolio is the market portfolio. Furthermore, the expected excess return of asset $i$ can be expressed as:
$E[R_i]-r_f = \beta_i(E[R_M] - r_f) \text{ with } \beta_i=\dfrac{Cov(R_i, R_M)}{\sigma^2_M}\tag{1}$
So the expected excess return depends only on its beta, which measures sensitivity to variation in the market return. In other words, the risk of an asset is measured by its
factor exposure - beta; in the case of the CAPM, the only factor is the market risk premium. High-beta securities depend more on market movements and offer higher expected returns in order to compensate investors for losses during bad times (bad times are defined in terms of the factor; here they mean poor performance of the market portfolio). Low-beta assets, on the other hand, have low risk premia - such assets are attractive for investors, who buy them as 'insurance' against losses in a distressed market, pushing their prices up, or equivalently, decreasing their expected returns.
Consider the following simple linear regression implied by the CAPM:
$R_i - r_f = \alpha_i + \beta_i (R_M - r_f) + \varepsilon_i \text{, } \quad \varepsilon_i \sim i.i.d \ N(0, \sigma_{\varepsilon}^2)\tag{2}$
Taking the variance of equation (2) we get:
$\sigma^2_i= \beta^2_i \sigma^2_M + \sigma^2_\varepsilon \tag{3}$
The risk of a security has two components: the first term on the right hand side of (3), $\beta^2_i \sigma^2_M$, is the
systematic risk, which affects all investors (except those who hold the risk-free asset only); the second term represents idiosyncratic risk, which is security-specific and is not rewarded with higher return, because it can be diversified away. In the CAPM framework investors are rewarded only for systematic (factor) risk, which cannot be avoided by diversification and therefore compensates investors with higher return. In other words, investors are better off holding the factor rather than individual assets. The main contribution of the CAPM is that its logic can be generalized to a framework of multiple tradable factors. In a multifactor setting (see Ang (2014) [6] Ch. 6 for an excellent discussion of single-factor vs. multifactor frameworks), the risk of an asset is measured by its factor exposures (or factor betas, with an interpretation similar to the CAPM beta); furthermore, the factors diversify idiosyncratic risk away just as the market portfolio does in the CAPM.
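As an illustration of equations (1)-(3), here is a short sketch estimating beta and the variance decomposition from simulated excess returns (no real data assumed; all names are illustrative):

import numpy as np

rng = np.random.default_rng(42)
T = 1000
mkt = rng.normal(0.0003, 0.01, T)          # market excess returns
eps = rng.normal(0.0, 0.015, T)            # idiosyncratic shocks
asset = 1.2 * mkt + eps                    # asset excess returns, alpha = 0

beta_hat = np.cov(asset, mkt, ddof=0)[0, 1] / np.var(mkt)  # beta from eq. (1)
systematic = beta_hat**2 * np.var(mkt)                     # first term of eq. (3)
idiosyncratic = np.var(asset - beta_hat * mkt)             # residual variance
print(beta_hat)                                   # close to the true beta of 1.2
print(systematic + idiosyncratic, np.var(asset))  # the two sides of eq. (3) agree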
3. MARKET RISK PREMIUM IN PRACTICE
The market risk premium is ubiquitous in practical applications: from M&A, where it is used to estimate the cost of equity, to sophisticated portfolio strategies that attempt to avoid market risk completely (market neutral, or zero beta, strategies). Furthermore, in contrast to other factors, it is also employed as a benchmark in asset management. For the period 1991-2013 the return on US equities was 8.2% p.a. with a Sharpe ratio of 0.54 (see Table 1 in Israel et al. (2014) [7]), higher than for value, size and momentum -- so beating the market is a challenging endeavor. However, the market portfolio possesses characteristics that are undesirable for many investors: procyclicality (co-movement of asset prices with the business cycle), significant drawdowns during crashes, high volatility, and exposure to funding liquidity risk - all of these lead to the increasing popularity of defensive or low risk (see the Low Risk article for more details) and market neutral strategies. For example, the betting-against-beta (BAB) factor of Frazzini and Pedersen (2014) [8] (which takes a long position in low beta stocks and shorts high beta stocks - so the
ex ante beta is zero) earns 8.4% p.a. with a Sharpe ratio of 0.78 for US stocks over the period 1926-2012, with a realized beta of -0.06. Overall, the optimal exposure to market risk should be determined according to investors' needs. For example, very large institutional investors such as sovereign funds have long investment horizons and may simply wait until a distressed market rebounds, since in the long run asset prices return to fundamental values. Obviously, the same strategy may not be acceptable for individual investors with short horizons, who may therefore wish to mitigate their market exposure.
4. MARKET RISK
The mean-variance framework assumes constant correlations between assets. However, empirically asset prices tend to move together during downturns. In other words, diversification benefits vanish exactly when they are needed. One way to address this issue is to introduce dynamic correlations and volatilities, for instance in a regime-switching framework (see Ang and Bekaert (2002) [9] for an application to the international equity market) or in a DCC-GARCH framework (see Engle (2002) [10]). Another way is to understand the fundamental reasons for correlations to increase. For example, Brunnermeier and Pedersen (2009) [11] suggest an appealing liquidity-based explanation: an initial price shock and increased volatility lead to a tightening of margin requirements (or a reduction in
funding liquidity); as a result, investors become more constrained in funding their positions and are forced to liquidate, thus moving prices even further away from fundamentals and decreasing the supply of market liquidity, which in turn leads to higher margins. Such liquidity spirals result in cross-asset contagion and rising correlations, and trigger a 'flight to quality'. So market risk may be partially avoided by investing in factors with low exposure to funding liquidity risk, for example the quality-minus-junk factor or a combination of value and momentum.
5. SUMMARY
Market risk was the first discovered factor, and it marks the single source of risk in the (still widely used) CAPM. Despite its theoretical beauty, the CAPM is not capable of adequately describing asset returns in practice (for a discussion of empirical tests of the CAPM as a single factor model we refer readers to Fama and French (2004) [12]). Importantly, however, the basic intuition of the CAPM is still relevant: factor risks drive assets' risk premia, and asset returns may be represented as a combination of factor risks. Furthermore, the exposure to a specific factor should be determined according to the needs and characteristics of a particular investor.
References
[1] Cochrane, John H., Asset Pricing, (2005)
[2] Cvitanić, Jakša, and Zapatero, Fernando, Introduction to the Economics and Mathematics of Financial Markets, (2004)
[3] Sharpe, William F., "Capital asset prices: a theory of market equilibrium under conditions of risk", Journal of Finance, Volume 19, p.425-442, (1964)
[4] Lintner, John, "The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets", Review of Economics and Statistics, Volume 47, p.13-37, (1965)
[5] Markowitz, Harry, "Portfolio selection", Journal of Finance, Volume 7, p.77-99, (1952)
[6] Ang, Andrew, Asset Management: A Systematic Approach to Factor Investing, (2014)
[7] Israel, Ronen, Frazzini, Andrea, Moskowitz, Tobias J., and Asness, Clifford S., "Fact, Fiction and Momentum Investing", Working Paper, (2014)
[8] Frazzini, Andrea, and Pedersen, Lasse Heje, "Betting against beta", Journal of Financial Economics, Volume 111, Number 1, p.1-25, (2014)
[9] Ang, Andrew, and Bekaert, Geert, "International asset allocation with regime shifts", Review of Financial Studies, Volume 15, p.1137-1187, (2002)
[10] Engle, Robert, "Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models", Journal of Business & Economic Statistics, Volume 20, p.339-350, (2002)
[11] Brunnermeier, Markus K., and Pedersen, Lasse Heje, "Market liquidity and funding liquidity", Review of Financial Studies, Volume 22, p.2201-2238, (2009)
[12] Fama, Eugene F., and French, Kenneth R., "The capital asset pricing model: theory and evidence", Journal of Economic Perspectives, p.25-46, (2004)
|
Take the Hamiltonian you wrote at the end of your question as an example (where your statement about the eigenvector is not right); the procedure to obtain the eigenvalues and eigenvectors of other Hamiltonians is nearly the same.
$$H=\sum_{ij}A_{ij}a_i^\dagger a_j$$
Writing it in matrix form (suppose $A$ is an $n\times n$ matrix) you have:
$$H=\alpha^\dagger A \alpha$$
with:
$$ \alpha^\dagger=(a_1^\dagger, a_2^\dagger,\dotsb,a_n^\dagger )$$
Now you are ready to diagonalize the matrix $A$ analytically or numerically:
$$A=X^\dagger D X$$
where $D=\mathrm{diag}(E_1,E_2,\dotsb,E_n)$ contains the eigenvalues you want, and $X$ is a
unitary matrix that diagonalizes $A$. Indeed, the normalized eigenvectors of $A$ are stored column-wise (in $X^\dagger$), which is also how numerical routines, e.g. in Fortran, return them.
Now substitute this to the original Hamiltonian:
$$H=\alpha^\dagger A \alpha=\alpha^\dagger X^\dagger DX\alpha=\beta^\dagger D \beta$$
where $\beta =X\alpha=(\beta_1,\beta_2,\dotsb,\beta_n)^T$. Because this is a unitary transformation, you can easily check that the commutation or anti-commutation relations still hold for $\beta_1,\beta_2,\dotsb$.
Now you can consider the system as consisting of non-interacting quasi-particles $\beta_1,\beta_2,\dotsb$; the eigenstates of the system are then easily obtained:$$|\psi \rangle=(\beta_1^\dagger)^{n_1}(\beta_2^\dagger)^{n_2}\dotsb(\beta_n^\dagger)^{n_n}|0\rangle$$
$n_i=0,1$ for fermions and $n_i=0,1,2,3\dotsb$ for bosons.
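For a concrete numerical check, here is a minimal sketch using a randomly generated Hermitian coefficient matrix (the matrix itself is assumed for illustration); note that numpy's eigh returns $A = VDV^\dagger$ with eigenvectors as columns of $V$, so its $V$ plays the role of $X^\dagger$ above:
import numpy as np
n = 4
rng = np.random.default_rng(1)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2                 # Hermitian coefficient matrix A_ij
E, V = np.linalg.eigh(A)                 # A = V @ diag(E) @ V^dagger
assert np.allclose(V.conj().T @ A @ V, np.diag(E))   # quasi-particle energies on the diagonal
print(E)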
Also, see this thread with a very similar question.
|
Show that the first excited state of the 8-Be nucleus fits the experimental value of the rotational band $E(2^+) = 92\ \mathrm{keV}$ (this is the first excited state, which is 92 keV above the ground state). To do so, model 8-Be as being made of 2 alpha particles $4.5\ \mathrm{fm}$ apart rotating about their center of mass.
The method I used is:
1) Get the moment of inertia of the system.
For two particles rotating (in the same plane) about their center of mass, one gets:
$$I = m_1 r_1^2 + m_2 r_2^2 = 2m_{\alpha} (d/2)^2 = 6.728 \times 10^{-56}\ \mathrm{kg\,m^2}$$
where:
$$m_{\alpha} = 6.645 \times 10^{-27}\ \mathrm{kg}$$
$$d = 4.5 \times 10^{-15}\ \mathrm{m}$$
I've checked this value and it's correct.
2) Verify the experimental value for the rotational band by applying the rotational energy formula:
$$E = J (J + 1) \frac{\hbar^2}{2I}$$
OK let's first get what's called "characteristic rotational energy": $\frac{\hbar^2}{2I}$
$$\frac{\hbar^2}{2I} = 8.256 \times 10^{-14}\ \mathrm{J} = 0.516\ \mathrm{MeV}$$
where:
$$\hbar = 1.054 \times 10^{-34}\ \mathrm{J\,s}$$
$$1\ \mathrm{eV} = 1.6 \times 10^{-19}\ \mathrm{J}$$
$2^+$ band has $J=2$ associated with it. So it's just a matter of plugging numbers in:
$$E = J (J + 1) \frac{\hbar^2}{2I} = 3096\ \mathrm{keV}$$
which is way off $92\ \mathrm{keV}$.
Where has my method gone wrong? Or did the original question provide an incorrect experimental value?
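For reference, here is the same arithmetic as a short script (constants as given above), which reproduces both intermediate values:
hbar = 1.054e-34        # J s
m_alpha = 6.645e-27     # kg
d = 4.5e-15             # m
eV = 1.6e-19            # J per eV
I = 2 * m_alpha * (d / 2) ** 2          # moment of inertia, ~6.728e-56 kg m^2
E_char = hbar ** 2 / (2 * I)            # characteristic rotational energy
E_J2 = 2 * (2 + 1) * E_char             # J = 2 level
print(I, E_char / eV / 1e6, E_J2 / eV / 1e3)   # ~0.516 MeV and ~3096 keV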
PS/ For anyone interested in rotational energy related to nucleus this is a good video to check: https://www.youtube.com/watch?v=rwdBnwznt3s
|
Let $G$ be a Lie group and $L(G)$ its Lie algebra. We know that every left-invariant vector field $X$ on $G$ is complete, and so one can consider the integral curve defined for all $t\in \mathbb{R}$ by $t\mapsto \exp(tX)$. If $M$ is a smooth manifold on which $G$ acts on the left, then for $a\in M$ we can consider the curve in $M$
$$\gamma_a(t) = \exp(tX)\cdot a$$
which "moves the point $a$ an amount $t$" using the infinitesimal transformation induced by $X$ on $M$.
On the other hand, in Quantum Mechanics, if $\Psi$ is the wavefunction of a particle in one-dimension, for example, we have some interesting things. First of all, let $\varepsilon\in \mathbb{R}$ and suppose $\Psi$ has Taylor series in both variables, then
$$\Psi(x+\varepsilon,t)=\sum_{n=0}^\infty \dfrac{1}{n!}\dfrac{\partial^n\Psi}{\partial x^n}(x,t)\varepsilon^n = \sum_{n=0}^\infty \dfrac{1}{n!}\left(\varepsilon\dfrac{\partial}{\partial x}\right)^n\Psi(x,t),$$
using then the fact that the momentum operator is $\hat{p}=-i\hbar \dfrac{\partial}{\partial x}$ together with the definition of the exponential of an operator we have then
$$\Psi(x+\varepsilon, t)=\exp\left[\dfrac{i\varepsilon }{\hbar}\hat{p}\right]\Psi(x,t).$$
In the same way we can show that
$$\Psi(x,t+\varepsilon)=\exp\left[-\dfrac{i\varepsilon}{\hbar}\hat{H}\right]\Psi(x,t).$$
Those two equations are often used to say that $\hat{p}$ is the generator of spatial translations and $\hat{H}$ is the generator of time translations, in the same way that the vector field $X$ on the Lie group is the generator of the diffeomorphisms it induces on the manifold.
These two situations seem to be very similar. In both cases we have something generating a transformation, where the actual transformation is performed using some notion of exponential (the first is the exponential map on the Lie group, the second is the exponential of the operator).
So I was wondering: is there a relationship between those things? Or is it just a coincidence that they seem to be so closely related?
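Incidentally, the translation-operator identity above is easy to verify numerically. The sketch below (a periodic grid and test function are assumed) uses the Fourier representation, where $\partial/\partial x$ is multiplication by $ik$, so $\exp(\varepsilon\,\partial/\partial x)$ becomes multiplication by $e^{ik\varepsilon}$:
import numpy as np
N, L = 256, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.exp(np.sin(x))                        # smooth periodic test function
eps = 0.3
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers
shifted = np.fft.ifft(np.exp(1j * k * eps) * np.fft.fft(f)).real   # exp(eps d/dx) f
print(np.max(np.abs(shifted - np.exp(np.sin(x + eps)))))   # ~1e-13, i.e. f(x + eps)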
|
Help:Formatting
You can format your text using the formatting toolbar or wiki markup. Wiki markup can be thought of as a simplified version of html and it consists of normal characters like asterisks, single quotes or equation marks which have a special function in the wiki. For example, to format a word in
italic, you include it in two single quotes like
''this''.
Text Formatting
Description You type You get character (inline) formatting – applies anywhere Italic text
''italic''
italic Bold text
'''bold'''
bold Bold and italic
'''''bold & italic'''''
bold & italic Ignore wiki markup
<nowiki>no ''markup''</nowiki>
no ''markup'' section formatting – only at the beginning of the line
Preformatted text:
 preformatted text is done with a '''space''' at the ''beginning'' of the line
This way of preformatting only applies to section formatting, and character formatting markups are still effective.
Organizing Headers & Lines
Description You type You get section formatting – only at the beginning of the line (with no leading spaces) Headings of different levels =level 1= ==level 2== ===level 3=== ====level 4==== =====level 5===== ======level 6======
Level 1 is normally set aside for the article title. An article with 4 or more headings automatically creates a table of contents.
Level 3
Level 4
Level 5
Level 6 Horizontal rule
----
Lists
Description You type You get section formatting – only at the beginning of the line (with no leading spaces) Bullet list * one * two * three ** three point one ** three point two
Inserting a blank line will end the first list and start another.
Numbered list # one # two<br />spanning more lines<br />doesn't break numbering # three ## three point one ## three point two Definition list ;item 1 : definition 1 ;item 2 : definition 2-1 : definition 2-2 Adopting definition list to indent text : Single indent :: Double indent ::::: Multiple indent
This workaround may be controversial from the viewpoint of accessibility.
Mixture of different types of list # one # two #* two point one #* two point two # three #; three item one #: three def one # four #: four def one #: this rather looks like the continuation of # four #: and thus often used instead of <br /> # five ## five sub 1 ### five sub 1 sub 1 ## five sub 2 ;item 1 :* definition 1-1 :* definition 1-2 : ;item 2 :# definition 2-1 :# definition 2-2
The usage of
For even more on list, check out Wikipedia's List Help article.
Signatures
You should always sign your comments, though signatures can be inserted anywhere on a wiki page.
Description You type You get
Signature: three tildes for just a signature, ~~~ gives Cynthia (UBC LSIT)
Signature with date and time: four tildes, ~~~~ gives Cynthia (UBC LSIT) 22:27, 26 May 2010 (UTC)
Only date and time: five tildes, ~~~~~ gives 22:27, 26 May 2010 (UTC)
Note: Once you save, the signature and date/time are automatically created; the next time someone edits, the tildes no longer show.
Links
Please see Help:Links for detailed information on creating hyperlinks.
Paragraphs
MediaWiki ignores single line breaks. To start a new paragraph, leave an empty line. You can force a line break within a paragraph with the HTML tags
<br />.
HTML Formatting
Some HTML tags are allowed in MediaWiki, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them.
Description You type You get Underscore
<u>underscore</u>
underscore Strikethrough
<del>Strikethrough</del> or
<s>Strikethrough</s>
Fixed width text
<tt>Fixed width text</tt> or
<code>source code</code>
Fixed width text or
source code
Blockquotes
(The blockquote example indents a paragraph of placeholder text.)
Typewriter font: Puts text in a <tt>typewriter font</tt>. The same font is generally used for <code>computer code</code>.
Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O
Centered text: <center>Centered text</center> (please note the American spelling of "center")
Comment
<!-- This is a comment -->
Text can only be viewed in the edit window.
Completely preformatted text
 this way, all markups are '''ignored'''.
Customized preformatted text
 this way, all markups are '''ignored''' and formatted with CSS.
Mathematical formulas
MediaWiki allows you to use LaTeX to insert mathematical formulae by typing in <math>Formula here</math>. Included here are a couple of examples and commonly used functions and expressions.
What you type What it looks like Superscript <math> a^2 </math> Subscript <math> a_3 </math> Grouping <math> a_{x,y} </math> Combination <math> a_3^2 or {a_3}^2 </math> or Root <math> ([n] is optional) \sqrt[n]{x} </math> Fraction <math> \frac{3}{4}=0.75 or (small) \tfrac{1}{3}=0.\bar{3} </math> or (small) More Complex Example <math> \sum_{n=0}^\infty \frac{x^n}{n!} </math>
See WikiMedia's Help on Displaying a Formula for a full article on using TeX to display formulae. Beginning at Section 3 (Functions, symbols, special characters) is a comprehensive list of all the symbols.
Footnotes
You can add footnotes to sentences using the <ref> tag -- this is especially good for citing a source.
What it looks like What you type There are over six billion people in the world. [1]References: There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref> References: <references/>
|
How can I show that every monoid $M$ admits a surjection from a free monoid $F(X) \rightarrow M$ ?
The set $X$ is called an alphabet. You can take $M$ itself as the alphabet, and then the morphism $\mu : F(M)\rightarrow M$ is just the multiplication.
To be more specific, the structure of monoid is the data of a triplet $(M,*,1_M)$ where
$M$ is a set, $*$ is an associative internal law on $M$, and $1_M$ is the neutral element.
The elements of $F(M)$ are strings $m_1.m_2.\cdots .m_k$ where the $m_i\in M$; then $$\mu(m_1.m_2.\cdots .m_k)=m_1*m_2*\cdots *m_k$$ the stars standing for the multiplication inside the monoid $M$.
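A small computational sketch of the idea, using a hypothetical example monoid $M = (\mathbb{Z}/4\mathbb{Z}, +)$ (any monoid works the same way): $\mu$ evaluates a string by multiplying its letters, and every $m\in M$ is hit by the length-one string $(m)$, so $\mu$ is surjective:
def mu(word, op, unit):
    # Evaluate a string of letters from M by folding with the monoid operation.
    result = unit
    for m in word:
        result = op(result, m)
    return result
add_mod4 = lambda a, b: (a + b) % 4    # the operation of M = (Z/4Z, +)
print(mu((3, 2, 3), add_mod4, 0))      # the string 3.2.3 maps to 3+2+3 = 0 in M
print(mu((2,), add_mod4, 0))           # the length-one string (2) maps to 2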
|
Trying to make my AI hit people. So I need a formula to know the time until my projectile will hit the target and also the direction the projectile should be shot at.
Here is an example scenario.
The projectile shoots from the origin $(0,0)$.
Both projectile and target move in a straight line.
Speed of projectile: $v = 2000$ m/s
Speed of target: $u = 450$ m/s
Initial position of target: $x = 6000$ m, $y = 0$ m
Direction the target moves (above the $+x$ axis): $b = 135^\circ$
Unknown variables.
Time until the projectile reaches the target: $t$
Angle of the projectile: $a$
Here are 2 equations made from the data.
$(v \cos a)t = x + (u \cos b)t$
$(v \sin a)t = y + (u \sin b)t$
How do I get the $t$ and $a$, but mostly I just need the $t$?
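One standard route: squaring and adding the two equations eliminates $a$ and leaves a quadratic in $t$. Here is a minimal sketch with the numbers above (the positive root is the intercept time, and $a$ then follows from aiming at the intercept point):
import numpy as np
v, u = 2000.0, 450.0
x, y = 6000.0, 0.0
b = np.radians(135.0)
ux, uy = u * np.cos(b), u * np.sin(b)
# (v t)^2 = (x + ux t)^2 + (y + uy t)^2  ->  A t^2 + B t + C = 0
A = ux**2 + uy**2 - v**2        # = u^2 - v^2, negative since v > u
B = 2 * (x * ux + y * uy)
C = x**2 + y**2
t = (-B - np.sqrt(B**2 - 4 * A * C)) / (2 * A)   # the positive root
a = np.arctan2(y + uy * t, x + ux * t)           # aim at the intercept point
print(t, np.degrees(a))                          # ~2.62 s, ~9.2 degrees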
|
I have to calculate this: $$ \lim_{x\to 0}\frac{2-x}{x^3}e^{(x-1)/x^2} $$ Can somebody help me?
Hint: It may be fruitful to substitute $\alpha = 1/x$, in which case you obtain the limit
$$ \lim_{ \alpha \rightarrow \infty} \left(2 - \frac{1}{\alpha} \right) \alpha^3 e^{\alpha - \alpha^2} $$
I should note that, here, I'm taking your limit to in fact be the limit as $x$ approaches $0$ from the positive direction. If you're intending for your limit to be two-sided, then you should think about why that would cause problems.
Letting $w=1/x$, we have $$ \lim_{x\downarrow 0}\frac{2-x}{x^3}e^{(x-1)/x^2} = \lim_{w\to+\infty} \left(2 - \frac 1 w \right) w^3 e^{w^2\left(\frac 1 w - 1\right)} = \lim_{w\to+\infty} (2w^3 - w^2) e^{w-w^2} $$ $$ = \lim_{w\to+\infty} \frac{2w^3-w^2}{e^{w^2-w}}. $$ L'Hopital should handle that.
Maybe I'll post something on $x\uparrow 0$ later . . .
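For a quick numerical sanity check of the right-hand limit, one can evaluate the expression at a few small positive $x$; the exponential factor clearly dominates:
import numpy as np
for x in [0.5, 0.2, 0.1]:
    print(x, (2 - x) / x**3 * np.exp((x - 1) / x**2))
# 0.5 -> ~1.62,  0.2 -> ~4.6e-07,  0.1 -> ~1.6e-36  (decaying rapidly toward 0)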
$$\lim_{x\to0}(f(x)g(x)) = \lim_{x\to0}(f(x)) \cdot \lim_{x\to0}(g(x)), $$ provided both limits on the right exist and are finite.
With that being said, you can let $f(x) = (2-x)/x^3 $ and $ g(x) = e^{(x-1)/x^2} $.
I hope this helps.
|
Prandtl number, abbreviated as Pr, is a dimensionless number used in fluid dynamics, defined as the ratio of momentum diffusivity (kinematic viscosity) to thermal diffusivity. This number helps characterize the relative thickness of the velocity boundary layer to the thermal boundary layer.
\(\large{ Pr= \frac{\nu} {\alpha} }\)
Where:
\(\large{ Pr }\) = Prandtl number
\(\large{ \nu }\) (Greek symbol nu) = kinematic viscosity
\(\large{ \alpha }\) (Greek symbol alpha) = thermal diffusivity
Solve for:
\(\large{ \nu = Pr \; \alpha }\)
\(\large{ \alpha = \frac { \nu }{ Pr } }\)
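A minimal computational sketch, using property values assumed for air at roughly 20 °C:
nu = 1.516e-5      # kinematic viscosity of air [m^2/s], assumed value
alpha = 2.14e-5    # thermal diffusivity of air [m^2/s], assumed value
Pr = nu / alpha
print(Pr)          # ~0.71, typical for air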
|
There are a lot of websites and forums which explain that there is a bijection between $\mathbb{R}$ and $\mathbb{R}^2$, and even give some bijections. (By the way: can you generalize this? Since it works for the natural numbers and the real numbers, does there exist a bijection between any infinite set $X$ and $X^2$?) Some websites claim that there is no continuous bijection. But how would I approach such a proof? (I don't need a complete proof; just a starting point, because I have absolutely no idea how to begin.)
Easy direction: there is no continuous bijection from $\mathbb R^2$ onto $\mathbb R$. For suppose $f : \mathbb R^2 \to \mathbb R$ is continuous. Choose one point $a \in \mathbb R^2$. The deleted set $\mathbb R^2 \setminus \{a\}$ is connected. A continuous image of a connected set is connected. So $f\big(\mathbb R^2 \setminus \{a\}\big)$ is a connected subset of $\mathbb R$. In particular, it is not the deleted set $\mathbb R \setminus \{f(a)\}$, since that is disconnected. So either $f$ is not injective or not surjective.
Note The same argument shows there is no continuous bijection from an open ball in $\mathbb R^2$ onto a subset of $\mathbb R$. You just have to make sure to choose $a$ that does not map to the maximum or minimum of the image.
Hard direction: there is no continuous bijection from $\mathbb R$ onto $\mathbb R^2$. This will use the Baire Category Theorem. Let $f : \mathbb R \to \mathbb R^2$ be continuous and injective. $\mathbb R$ is the countable union of compact sets $[-n,n]$. I will show that $f\big([-n,n]\big)$ has empty interior. Then $f(\mathbb R)$ is a countable union of closed sets with empty interior, so by the Baire Category Theorem, it is not $\mathbb R^2$.
So, why does $K_n:=f\big([-n,n]\big)$ have empty interior? Suppose it has nonempty interior. Since $[-n,n]$ is compact and $\mathbb R^2$ is Hausdorff, the continuous bijection $f$ from $[-n,n]$ onto $K_n$ is a homeomorphism. So the restriction of $f^{-1}$ to an open ball contained in $K_n$ would be a homeomorphism from an open ball in $\mathbb R^2$ onto a subset of $\mathbb R$. Contradiction.
For the question about generalization, consider Netto's theorem;
If $f$ represents a bijective map from an $m$-dimensional smooth manifold $\mu_m$ onto an $n$-dimensional smooth manifold $\mu_n$ and $m \neq n$, then $f$ is necessarily discontinuous.
|
Background: Bombieri and Lagarias showed that a function $f$ with roots $\rho=x+iy$ has all of its roots lying on $x=\frac12$ if and only if
$$\lambda_n :=\sum_\rho 1-\left(1-\frac{1}{\rho}\right)^n$$
is nonnegative for all $n$.
When $f$ is the Riemann zeta function restricted to the critical strip, this gives a statement equivalent to the Riemann Hypothesis: show that $\lambda_n \geq 0$ for all $n$ (a sharp lower bound, I think).
Question:Is there a simple function that serves as an upperbound for $\lambda_n$ for all $n$? Motivation: I was trying to answer this question, and I suspected there would be a nice way to say that the question was hard because it implies the Riemann Hypothesis. I managed to find another way to show the implication, but I would still like to complete this approach.
This formulation of $\lambda_n$ suggests an easy continuous extension to $\lambda:\Bbb R_+\to\Bbb R$. However, for the purposes of the equivalence, I wanted a function which never went above 1, so if we had some upper bound $\lambda\leq B$, then I could use $f(x)=\lambda(x)/B(x)$ [and so $\lfloor f\rfloor$ in the question would be nonzero iff RH.]
However, I'm a completely new to analytic number theory and so I have absolutely no intuition about these numbers. I used WolframAlpha to plot the first two terms in the partial sum, and I played around with it a little bit, so I am pretty sure that $\lambda(x)$ is real. But that's about it.
|
For a particle in an infinite square-well potential in an energy eigenstate, the probability distribution relating to outcomes of position measurements vanishes outside the square well and takes a sinusoidal form inside the well, approaching zero at the edges of the well.
Suppose a particle trapped in an infinite square-well potential has an energy 10 eV in an energy eigenstate for which the position probability distribution has three maxima (or “humps”). If the particle is excited to an energy eigenstate in which the probability has five maxima, what is the energy of the particle in this new state?
I find the question to be rather vague. But here's my best attempt.
First begin by supposing the potential well is of length L.
Solving the Schrodinger equation with V=0,
we get
$$\frac{\mathrm{d}^{2}\psi }{\mathrm{d} x^{2}}=-k^{2}\psi \quad \text{with} \quad k^{2}=\frac{2mE}{\hbar^{2}}$$
The general solution is $$\psi\left ( x \right )=A\sin\left ( k x \right )+B \cos\left ( k x \right ) \quad \text{for } 0\leq x\leq L$$
On physical grounds we require $$\psi\left ( 0 \right )=\psi\left ( L \right )=0$$
The physically acceptable solution is $$\psi\left ( x \right )=A\sin\left ( k x\right ) \quad \text{with} \quad k=\frac{n\pi}{L}$$
Three maxima implies $n=3$:
$$\left | \psi_{n=3}\left ( x \right ) \right |^{2}=\left | A_{3} \right |^{2}\sin^{2}\left ( \frac{3\pi x}{L} \right )$$
The energy of the particle is quantised and is given in general by $$E_{n}=\frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}}$$
For the $n=3$ state, $$E_{n=3}=10\ \mathrm{eV}=\frac{9\pi^{2}\hbar^{2}}{2mL^{2}}$$
This is as far as my attempt goes. For some reason, I do not understand the need for some of the information provided in the question, such as the wave function being sinusoidal and the boundary conditions - likely a red herring? Obviously, if the mass $m$ and width $L$ are known, I can solve for the energy of the level with $n=5$.
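A hint in code form: since $E_n \propto n^2$ and the number of maxima of $|\psi_n|^2$ equals $n$, the ratio $E_5/E_3$ does not depend on $m$ or $L$ at all, which is presumably why the question does not supply them:
E3 = 10.0                  # eV, given for the n = 3 state
E5 = E3 * 5**2 / 3**2      # E_n scales as n^2
print(E5)                  # ~27.8 eV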
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
|
How do these models work? I know they have to take in various data points - what are these, and how does the model use it? Also, how do they come up with a forecast or a map of what will happen in the future?
All numerical atmospheric models are built around calculations derived from primitive equations that describe atmospheric flow. Vilhelm Bjerknes discovered the relationships and thereby became the father of numerical weather prediction. Conceptually, the equations can be thought of as describing how a parcel of air would move in relationship to its surroundings. For instance, we learn at a young age that hot air rises. The hydrostatic vertical momentum equation explains why
and quantifies under what conditions hot air would stop rising. (As the air rises it expands and cools until it reaches hydrostatic equilibrium.) The other equations consider other types of motion and heat transfer.
Unfortunately, the equations are nonlinear, which means that you can't simply plug in a few numbers and get useful results. Instead, weather models are simulations which divide the atmosphere into three-dimensional grids and calculate how matter and energy will flow from one cube of space into another during discrete time increments. Actual atmospheric flow is continuous, not discrete, so by necessity the models are approximations. Different models make different approximations appropriate to their specific purpose.
Numerical models have been improving over time for several reasons:
more and better input data; tighter grids; and better approximations.
Increasing computational power has allowed models to use smaller grid boxes. However, the number of computations increases exponentially with the number of boxes and the process suffers diminishing returns. On the input end of things, more and better sensors improve the accuracy of the initial conditions of the model. Synoptic scale and mesoscale models take input from General Circulation Models, which helps set reasonable initial conditions. On the output end, Model Output Statistics do a remarkable job of estimating local weather by comparing the current model state with historical data of times when the model showed similar results. Finally, ensemble models take the output of several models as input and produce a range of possible outcomes.
Weather models (or, as they are more commonly called in the field, atmospheric models) are computer programs that read in input data (initial conditions) and solve partial differential equations to produce a future state of the atmosphere. Although @JonEricson provides an overall good but anecdotal summary of what models do, here I describe the exact steps of what it takes for an atmospheric model to produce a forecast. This answer generally applies to ocean circulation and climate models as well. Many people believe that weather forecasters sit in front of a map and brainstorm where the cloud will go. This answer aims to provide an easy to understand but thorough explanation about how atmosphere and ocean prediction models work.
The evolution of the atmosphere may be described by a system of partial differential equations (PDEs). Most commonly, these are primitive equations, which consist of the momentum equation (solving for velocity $\mathbf{v}$ or momentum $\rho \mathbf{v}$), the continuity (or mass conservation) equation, and the thermal energy equation (solving for temperature $T$ and specific humidity $q$). The continuity equation is necessary for closure with the momentum equations. These equations may be approximated in many ways, yielding a reduced and/or simplified set of equations. Some of these approximations are hydrostatic, Boussinesq, anelastic etc. In the most complete form of the primitive equations for the atmosphere the prognostic state variables are $u$, $v$, $w$, $p$, $T$, $q$. An idealized atmosphere could also be simulated with just momentum and continuity equations (no thermodynamics), the shallow water equations, or just the absolute vorticity equation. For an example of the latter, see the pioneering paper by Charney, Fjortoft and von Neumann (1950) who numerically predicted 500 mb vorticity by integrating the absolute vorticity equation in time. Because their model was barotropic it could not produce cyclogenesis. However, they achieved the first successful numerical weather prediction in history and their model ran on the first general-purpose computer, ENIAC.
Now take the momentum equation for example:
$$\dfrac{\partial \rho \mathbf{v}}{\partial t} + \nabla (\rho \mathbf{v}^{2}) +2\Omega \times \rho \mathbf{v} = -\nabla p + \nu \nabla^{2}(\rho \mathbf{v}) + \Phi $$
From left to right, we have the time tendency of momentum, advection, Coriolis force, pressure gradient, viscous dissipation, and finally, any external forcing or subgrid tendency. Unfortunately, the advection term $\nabla(\rho\mathbf{v}^{2})$ is non-linear, and it is because of this term that the analytical solution to this equation is not known. This term is also the reason the atmosphere and other fluids are chaotic in nature: small errors in $\mathbf{v}$ grow quickly because they multiply in this term. If the equation is linearized, $\nabla(\rho\mathbf{v}^{2})=0$, analytical solutions can be found. For example, Rossby, Kelvin, and Poincare waves are all analytical solutions for certain reduced sets of linearized momentum or vorticity equations. It is important to recognize that we need the non-linear advection term if we hope to produce accurate forecasts. Thus we solve the equations numerically.
How to solve these PDEs? Processors are not capable of doing derivatives - they know how to
add and multiply numbers. All other operations are derived from these two. We need to somehow approximate partial derivatives using basic arithmetic operations. The domain of interest (say, the globe) is discretized on a grid. Each grid cell will have a value for each of the state variables. For example, take the pressure gradient term in the x-direction:
$$\nabla_{x}p = \dfrac{\partial p}{\partial x} \approx \dfrac{\Delta p}{\Delta x} = \dfrac{p_{i+1,j}-p_{i-1,j}}{2\Delta x_{i,j}} $$
where $i,j$ are the grid indices in $x,y$. This example used finite differences, centered in space. There are many other methods to discretize partial derivatives, and the ones in use in modern models are typically much more sophisticated than this example. If the grid spacing is not uniform, finite volume methods must be used if the predicted quantity is to be conserved. Finite element methods are more common for computational fluid dynamics problems defined on unstructured meshes in engineering, but can be used for atmosphere and ocean solvers as well. Spectral methods are used in some global models like GFS and ECMWF.
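As a toy illustration of the centered-difference formula above, here is a minimal sketch on a uniform 1-D grid (the grid spacing and the synthetic pressure field are assumed):
import numpy as np
nx, dx = 100, 1000.0                       # grid size and spacing [m], assumed
x = np.arange(nx) * dx
p = 101325.0 + 100.0 * np.sin(2 * np.pi * x / (nx * dx))   # synthetic pressure [Pa]
dpdx = np.zeros(nx)
dpdx[1:-1] = (p[2:] - p[:-2]) / (2 * dx)   # centered in space, interior points only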
Processes unresolved on the grid scale (term $\Phi$) are implemented in the form of parameterization schemes. Parameterized processes may include turbulence and boundary layer mixing, cumulus convection, cloud microphysics, radiation, soil physics, chemical composition etc. Parameterization schemes are still a hot research topic and they keep improving. There are many different schemes for all of the physical processes listed above. Some work better than the others in different meteorological scenarios.
Once all the terms in all the equations have been discretized on paper, the discrete equations are written in the form of a computer code. Most atmosphere, ocean circulation, and ocean wave models are written in Fortran. This is mostly for historical reasons - having a long history, Fortran had the luxury of very mature compilers and very optimized linear algebra libraries. Nowadays, with very efficient C, C++ and Fortran compilers available, it is more a matter of preference. However, Fortran code is still most prevalent in atmosphere and ocean modeling, even in recently started projects. Finally, an example code line for the above pressure gradient term would look like this:
dpdx(i,j,k) = 0.5*(p(i+1,j,k)-p(i-1,j,k))/dx(i,j)
The whole code is compiled into machine language that is then loaded on the processor(s). The model program is typically not user friendly with a fancy graphical interface - it is most commonly run from dumb terminals on high-performance multiprocessor clusters.
Once started, the program discretely steps through time into the future. The calculated values for state variables at every grid point are stored in an output file, typically every hour (simulation time). The output files can then be read by visualization and graphics software to produce pretty images of model forecast. These are then used as guidance to forecasters to provide a meaningful and reasonable forecast.
This is not a complete answer. One aspect of weather models consists of
Data assimilation or 4D-var.
I agree that they are amazing, and the question
how do they work is too broad to be answered. So I recommend you read up on data assimilation and in particular 4D-Var. Concepts are somewhat similar in inverse theory, but of much higher dimensionality. In a tiny nutshell: at each time step and grid point, the model has a background consisting of the latest information it has on the entire atmosphere (and ocean!). This is a huge quantity of information. Then, every six hours or so, it feeds in a big chunk of information from measurements. It uses a Bayesian method (see the 4D-Var link above) to combine the background and the measurements to make a new estimate of the state of the atmosphere. Measurements are, naturally, only available for the present and past. The rest is basically extrapolation. But to get a good estimate, the model run starts at some time in the past; so the first part of the forecast is actually for the past or present (they don't show you that one in the models ;-).
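To make the "combine background and measurements" step concrete, here is a toy scalar sketch of the Bayesian update (all numbers assumed; operational 4D-Var solves the analogous problem variationally in enormous dimension):
x_b, var_b = 288.0, 1.0     # background temperature [K] and its error variance (assumed)
y, var_o = 287.2, 0.25      # observation and its error variance (assumed)
K = var_b / (var_b + var_o) # optimal weight (Kalman gain) in the scalar case
x_a = x_b + K * (y - x_b)   # analysis: weighted compromise between the two
print(x_a)                  # 287.36 K, pulled toward the more certain observation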
Can't go into detail, but it's true, they are amazing!
Weather models and forecasts are governed by systems of differential equations. One starts with the current levels or values of causal variables: temperature, humidity, atmospheric pressure etc. One also has to factor in the "derivatives," or rates of change of these variables. Hence the need for differential equations, which incorporate both variables and their derivatives to explain various "wave" phenomena such as heat, light, and sound, etc.
Even with the current large body of raw knowledge over large parts of the earth, forecasting the weather is still a dicey business, because of the apparent "randomness" of various variables. (Some of them are truly random; others get explained better over time.) By "slicing and dicing" variables, and benefiting from past experience, weather forecasts have slowly but surely gotten more accurate over larger spaces. Increased computational power has also helped. (Extending the length of time for accurate forecasts is trickier, because there are still too many moving parts.) For now, it seems that the tools we have are a "drop in the bucket" compared to the size of the earth and universe (some weather patterns may be caused by things happening in interplanetary space), so it is truly amazing that we can often come up with weather forecasts that are more or less accurate.
|
In a 2-person economy, the owner of a firm pays 20% of the total output to the worker.
What is the Gini coefficient in this economy?
The answer is supposed to be 3/5 but I end up getting 3/10 instead.
Gini Coefficient is the ratio of the area that lies between the 45 degree line (line of equality) and the Lorenz curve, and the area below the line of equality. Given the data in the problem Lorenz curve connects points $(0, 0)$, $(50, 20)$ and $(100, 100)$ in the graph. To find the Gini coefficient, we find the area between the line of equality and the Lorenz curve and divide it by the area of the triangular region that lies below the line of equality.
Gini coefficient $\displaystyle = 1 - \frac{\color{blue}{0.5\times 50 \times 20} + \color{red}{(100 - 50) \times 20} + \color{gray}{0.5\times (100-50) \times (100-20)}}{0.5\times 100 \times 100} = 0.3$
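The same computation as a minimal code sketch (the two incomes 20 and 80 are taken from the problem statement):
import numpy as np
incomes = np.array([20.0, 80.0])                      # worker 20%, owner 80%
shares = np.sort(incomes) / incomes.sum()
lorenz = np.concatenate([[0.0], np.cumsum(shares)])   # (0, 0.2, 1.0)
pop = np.linspace(0.0, 1.0, len(lorenz))              # (0, 0.5, 1.0)
gini = 1.0 - 2.0 * np.trapz(lorenz, pop)              # area-based formula
print(gini)                                           # 0.3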
The Gini coefficient is defined as the normalized Gini index. If $G$ is the Gini index, $0 \leq G \leq \frac{n-1}{n}$. The Gini coefficient $G^* = \frac{n}{n-1} G$ is a normalization such that $0\leq G^* \leq 1$. That way you can easier compare different populations with different $n$. In your example it should be that $G= 3/10$ and then $G^*=6/10=3/5$.
|
1. Discrete returns
Let \(P_t\) be the asset price at time \(t\) and \(P_{t-1}\) be the price in the prior period – day, week, month, … For daily returns on stocks, \(P_{t-1}\) is the closing price of the stock on the previous trading day.
Example calculations are provided in figure 1 – Excel Web App #1 – Sheet1, where a sample of stock price data is used.
1.1 Net return
Also called "simple returns"
Measured by the change in price \(\Delta P\), divided by the price at the start of the period
\(R_t = \left(P_t - P_{t-1} \right) / P_{t-1} = \left( P_t / P_{t-1} \right) - 1 \)
With dividends: \(R_{d,t} = \left( P_t - P_{t-1} + D_t \right) / P_{t-1} \)
Rows 12 to 15 of figure 1
1.2 Gross return
\(R_{g,t} = P_t / P_{t-1} = R_t + 1 \)
With dividends: \(R_{g,d,t} = \left( P_t + D_t \right) / P_{t-1} \)
Rows 17 to 20 of figure 1
2. Log returns
Uses continuously compounded returns
Often written as \(r_t = \log P_t / P_{t-1} \), but uses natural logs: the LN function in Excel, and the LOG function in VBA
Natural logs have base \(e \approx 2.71828\), Euler's number
Log return: \(r_t = \ln P_t / P_{t-1} = \ln P_t-\ln P_{t-1} \)
\(\ln P_t\) is called the "log price"
With dividends: \(r_{d,t} = \ln \left(P_t + D_t \right) / P_{t-1} = \ln (P_t + D_t) -\ln P_{t-1} \)
Rows 24 to 28 of figure 1
Converting net returns to log returns: \(r_t = \ln(1 + R_t)\)
Converting log returns to net returns: \(R_t = e^{r_t} - 1 \)
Rows 31 to 34 of figure 1
3. Returns method selector panel
Cell D39 contains a Data Validation selector
Validation criteria - Allow: List
Validation criteria - Source: discrete, log
Cell formula – range D43:D54:
=IF(Return_method="discrete",(RC[-1]-R[1]C[-1])/R[1]C[-1],LN(RC[-1])-LN(R[1]C[-1]))
4. Cumulative returns
4.1 Cumulative discrete returns
Denote \(R_{c,t}\) as the cumulative discrete return at time \(t\)
Cumulative discrete returns are multiplicative
In vector form for the 12 month period: \(\left(1+R_{c,t-12-1}\right) \times \left(1+R_{t-12}\right)-1, \left(1+R_{c,t-12}\right) \times \left(1+R_{t-11}\right)-1, \left(1+R_{c,t-11}\right) \times \left(1+R_{t-10}\right)-1, \dots\)
\(R_{c,t}\) for 12 months at 1 July 15 = 0.0072 (Cell E62). This is the same value as the annual discrete return for the single 12 month period – Cell E76: 0.0072
4.2 Cumulative log returns
Denote \(r_{c,t}\) as the cumulative log return at time \(t\)
Cumulative log returns are additive
In vector form for the 12 month period: \(\left(r_{c,t-12-1} + r_{t-12}\right), \left(r_{c,t-12} + r_{t-11} \right), \left(r_{c,t-11} + r_{t-10} \right), \dots \)
\(r_{c,t}\) for 12 months at 1 July 15 = 0.0072 (Cell E81). This is the same value as the annual log return for the single 12 month period – Cell E95: 0.0072
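The definitions above translate directly into code; here is a minimal sketch on an assumed sample price series (not the workbook data):
import numpy as np
P = np.array([100.0, 101.5, 99.8, 102.3])   # assumed sample prices
R = P[1:] / P[:-1] - 1          # net (simple) returns
r = np.log(P[1:] / P[:-1])      # log returns; note r = ln(1 + R)
R_cum = np.cumprod(1 + R) - 1   # cumulative discrete returns: multiplicative
r_cum = np.cumsum(r)            # cumulative log returns: additive
assert np.allclose(R_cum, np.exp(r_cum) - 1)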
This example was developed in Excel 2013 Pro 64 bit. Last modified: 17 Aug 2015, 7:21 pm [Australian Eastern Standard Time (AEST)]
|
It means the theory is expressed/discussed in the language of tensor fields, where the tensors are quantities that transform between mutually moving inertial frames according to
the Lorentz transformation. All differential equations are expressed as relations between tensor fields and their derivatives. For example, the equation
$$\frac{dp_k}{dt} = q E_k + q \epsilon_{kij} v_i B_j$$
which uses the notation of 3-vectors, is not covariant, because, although it does have the same form in all frames, the quantities involved are not tensors that would transform between moving frames according to the Lorentz transformation. They only transform as Cartesian tensors between frames that are at rest with respect to each other.
The covariant formulation of the same law is $$\frac{d p_\mu}{d\tau} = q F_{\mu\nu} u^\nu,$$ using components of the 4-vectors $p,u$ and the 4-tensor $F$. This is because in any other frame the equation has the same form
and the quantities involved - $p,u,F$ - transform as tensors according to the Lorentz transformation.
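A small numerical sanity check (with $c=1$, metric $\mathrm{diag}(1,-1,-1,-1)$, one common sign convention for $F_{\mu\nu}$, and sample $E$, $B$, $v$ values assumed): the spatial part of $qF_{\mu\nu}u^\nu$ reproduces the 3-vector Lorentz force above:
import numpy as np
E = np.array([0.1, 0.0, 0.2])
B = np.array([0.0, 0.3, 0.0])
v = np.array([0.4, 0.1, 0.0])
q = 1.0
gamma = 1.0 / np.sqrt(1.0 - v @ v)
u = gamma * np.array([1.0, *v])              # contravariant 4-velocity u^nu
eps = np.zeros((3, 3, 3))                    # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
F = np.zeros((4, 4))                         # covariant field tensor F_{mu nu}
F[0, 1:] = E                                 # F_{0i} =  E_i
F[1:, 0] = -E                                # F_{i0} = -E_i
F[1:, 1:] = -np.einsum('ijk,k->ij', eps, B)  # F_{ij} = -eps_{ijk} B_k
dp_dtau = q * F @ u                          # dp_mu / dtau
lorentz_3d = q * (E + np.cross(v, B))
# Covariant spatial components satisfy p_i = -p^i, and dtau = dt / gamma:
assert np.allclose(-dp_dtau[1:] / gamma, lorentz_3d)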
|