Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16~\mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Statement of the problem
Given a continuous map $f:G \rightarrow D^2$ where $G$ is a compact simply connected Lie group and $D^2$ is the unit disk in the plane, I have shown that:
There exists a simple (non-self-intersecting, except at a single point) loop $\gamma \subset G$ that maps injectively to a simple loop $f(\gamma) \subset D^2$.
Furthermore, $f(\gamma)$ intersects $\partial D^2$ at several isolated points. Call $P \subset G$ the isolated set of points in $\gamma$ that map to $\partial D^2$.
Any small neighborhood of $\gamma$ maps to the interior of the region bounded by $f(\gamma)$.
Is this sufficient info to prove that $f(\gamma)$ is the boundary of the image $f(G)$?
The image is this
Attempt at a solution
$f$ continuous, $G$ compact implies $f(G)$ compact and $f(G)\subset D^2$ has a boundary $B$. Let $\gamma\subset G$ be as described above and call $S$ the open set that has boundary $f(\gamma)$. Assume that $B \neq f(\gamma)$ along some path $\phi \subset D^2$. Then $\phi \subset D^2 \setminus S$.
Then $\exists x \in G$ s.t. $f(x) \in \phi$ and $f(x) \not\in S\cup f(\gamma)$. Then $x$ is not in a neighborhood of $\gamma$, by definition of $\gamma$.
Claim: Because $G$ is simply connected, we can define a continuous map
\begin{align} & h:[0,1] \times G\rightarrow G \\ &h(0,x)= x,\hspace{5mm} h(1,x)=y\in \gamma \\ \end{align}
Furthermore, we can choose this $h$ so that $f h(t,x) \not\in f(\gamma), \forall t \neq 1$.
(Is it possible to prove that such an $h$ exists? Speaking roughly: can I always continuously deform a point not in a neighborhood of $\gamma$ to a point in $\gamma$, while avoiding other subsets of $G$ that might map to $f(\gamma)$?)
Then $\forall \epsilon > 0$, $\exists t < 1$, such that $h(t,x)$ is in the open ball $B(y,\epsilon)$. But then for some $\tilde{t}<1, h(\tilde{t},x) \in S$ (by definition of $\gamma$: small neighborhoods of $\gamma$ map to $S$).
But this is impossible, because $fh$ is continuous and $fh(t,x)\not\in f(\gamma)\,,\forall t\neq 1$. Then $\phi$ cannot exist and $f(\gamma)=\partial f(G)$.
I'm looking for either an answer using some other machinery, a resolution of my attempt, or some references (really, my knowledge of topology/homotopy/differential geometry is quite limited).
[2.13.6] An expression representing, in a functional form, the spectral radiance of a blackbody as a function of the wavelength and the temperature:
$$L_{\lambda} = dI_{\lambda}/dA' = c_{1L}\,\lambda^{-5} \cdot f(\lambda T)$$
or
$$L_{\lambda}\, T^{-5} = c_{1L}\,(\lambda T)^{-5} \cdot f(\lambda T), \qquad f(\lambda T) = e^{-(c_{2}/\lambda T)}$$
The two principal corollaries of this law are:
$$\lambda_{m} T = b, \qquad L_{m}/T^{5} = b'$$
These show how the maximum spectral radiance $L_{m}$ and the wavelength $\lambda_{m}$ at which it occurs are related to the absolute temperature $T$.
Note: $b$ is $2.8978 \times 10^{-3}$ m·K, or $2.8978 \times 10^{-1}$ cm·K. From the Planck radiation law, and with the use of the values of $b$, $c_{1}$ and $c_{2}$ as given in that definition, $b'$ is found to be the currently recommended value $4.0956 \times 10^{-14}$ W/(cm²·sr·µm·K⁵).
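As a numerical aside (not part of the definition above), the displacement constant $b$ can be recovered by maximizing the full Planck spectral radiance over wavelength with a simple grid search. The constants $h$, $c$, $k$, the test temperature, and the search range are my additions:

```python
import math

h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Planck spectral radiance of a blackbody, W / (m^2 sr m)."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

T = 5000.0  # any temperature works; lambda_m * T should be constant
lams = [1e-7 + i * (2e-6 - 1e-7) / 200000 for i in range(200001)]
lam_max = max(lams, key=lambda lam: planck(lam, T))
print(lam_max * T)  # ≈ 2.898e-3 m·K, the Wien displacement constant b
```

The grid search is crude but reproduces $b \approx 2.8978 \times 10^{-3}$ m·K to the grid resolution, consistent with the value quoted in the note.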
I was following Coleman's lectures ("Aspects of Symmetry"), particularly the chapter about 't Hooft's model. Then I wandered into older papers, such as 't Hooft's original papers and many others. Concerning the reason why there are no free quarks in this model, it seems to me that their reasoning is the following:
Calculate the dressed propagator and you will get something like :
$$\frac{ip_-}{2p_+p_- -M^2-\frac{g^2|p_-|}{\lambda\pi}+i\varepsilon}$$
This is a propagator depending on the cut-off $\lambda$. Then, because of the infra-red divergence, we have to restore gauge invariance by taking the $\lambda\to 0$ limit. The pole of this propagator is shifted towards $p_-\to \infty$, and we conclude that there is no physical single-quark state.
But Einhorn claims (in Phys. Rev. D 14, 3451) that the dependence on $\lambda$ has nothing whatever to do with the confinement mechanism. In order to show that 't Hooft's argument is wrong, Einhorn switches off the Coulomb potential but retains a constant gauge-dependent term. He finds that the interaction between $\bar{q}q$ pairs cancels the term in the self-energy, so that free quarks are produced; confinement must therefore be obtained by other means.
By the way, it seems that Coleman is using the principal-value method and not the original 't Hooft regularization, so it is not obvious to me whether Coleman agrees or not.
Is the underlying reason for confinement (in 't Hooft's model) currently clear? Are Einhorn's arguments wrong?
This post imported from StackExchange Physics at 2017-06-25 18:17 (UTC), posted by SE-user physics_teacher |
The Interior of Sets in Finite Topological Products
In the following theorem we will show that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces and $A_i \subseteq X_i$ for all $i \in \{ 1, 2, ..., n \}$ then the interior of the product of these sets is equal to the product of the interiors of these sets.
Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a collection of topological spaces and let $A_i \subseteq X_i$ for all $i \in \{1, 2, ..., n \}$. Then $\displaystyle{\mathrm{int} \left ( \prod_{i=1}^{n} A_i \right) = \prod_{i=1}^{n} \mathrm{int} (A_i)}$.
Proof: Let $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \mathrm{int} \left ( \prod_{i=1}^{n} A_i \right )}$. Then there exists a basic open set $\displaystyle{U = \prod_{i=1}^{n} U_i}$ in $\displaystyle{\prod_{i=1}^{n} X_i}$ such that:
$$\mathbf{x} \in U \subseteq \prod_{i=1}^{n} A_i$$
So then for all $i \in \{1, 2, ..., n \}$ we have that $x_i \in U_i \subseteq A_i$. Each of the sets $U_i$ is open in $X_i$, and so $x_i \in \mathrm{int}(A_i)$ for all $i \in \{1, 2, ..., n \}$. Thus $\displaystyle{\mathbf{x} \in \prod_{i=1}^{n} \mathrm{int} (A_i)}$ which shows that:
$$\mathrm{int} \left ( \prod_{i=1}^{n} A_i \right ) \subseteq \prod_{i=1}^{n} \mathrm{int} (A_i) \quad (*)$$
Now let $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \prod_{i=1}^{n} \mathrm{int} (A_i)}$. Then $x_i \in \mathrm{int} (A_i)$ for all $i \in \{1, 2, ..., n \}$. So for each $i$ there exists an open set $U_i$ of $X_i$ such that:
$$x_i \in U_i \subseteq A_i$$
Let $\displaystyle{U = \prod_{i=1}^{n} U_i}$. Then $U$ is open in $\displaystyle{\prod_{i=1}^{n} X_i}$ as it is a product of open sets. Moreover:
$$\mathbf{x} \in U \subseteq \prod_{i=1}^{n} A_i$$
Thus $\displaystyle{\mathbf{x} \in \mathrm{int} \left ( \prod_{i=1}^{n} A_i \right )}$ which shows that:
$$\prod_{i=1}^{n} \mathrm{int} (A_i) \subseteq \mathrm{int} \left ( \prod_{i=1}^{n} A_i \right ) \quad (**)$$
From the inclusions in $(*)$ and $(**)$ we conclude that $\displaystyle{\mathrm{int} \left ( \prod_{i=1}^{n} A_i \right) = \prod_{i=1}^{n} \mathrm{int} (A_i)}$. $\blacksquare$
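Theorem 1 can be sanity-checked computationally on small finite topological spaces (a toy illustration added here, not part of the original page; the two example spaces and subsets are my own). Interiors are computed as unions of open sets contained in the given set, and the product topology's opens are generated as unions of basic boxes $U \times V$:

```python
from itertools import product

# Two tiny topological spaces; opens are listed explicitly as frozensets.
TX = [frozenset(), frozenset({1}), frozenset({1, 2})]        # topology on X = {1, 2}
TY = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]  # topology on Y = {a, b}

def interior(A, opens):
    """Union of all open sets contained in A."""
    return frozenset().union(*(U for U in opens if U <= A))

# Opens of the product topology: arbitrary unions of basic boxes U x V.
boxes = [frozenset(product(U, V)) for U in TX for V in TY]
prod_opens = {frozenset().union(*(boxes[i] for i in range(len(boxes)) if mask >> i & 1))
              for mask in range(1 << len(boxes))}

# Check int(A x B) == int(A) x int(B) for a couple of subsets.
for A, B in [(frozenset({1, 2}), frozenset({'a', 'b'})),
             (frozenset({2}), frozenset({'a'}))]:
    lhs = interior(frozenset(product(A, B)), prod_opens)
    rhs = frozenset(product(interior(A, TX), interior(B, TY)))
    assert lhs == rhs
print("int(A x B) == int(A) x int(B) on the toy spaces")
```

This is of course no proof, but it exercises both inclusions of the theorem on concrete sets, including a case where the interiors are empty.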
amp-mathml
Displays a MathML formula.
Required Script
<script async custom-element="amp-mathml" src="https://cdn.ampproject.org/v0/amp-mathml-0.1.js"></script>
Supported Layouts: container. Example: amp-mathml.amp.html
This extension creates an iframe and renders a MathML formula.
<amp-mathml layout="container" data-formula="\[x = {-b \pm \sqrt{b^2-4ac} \over 2a}.\]"></amp-mathml>
<amp-mathml layout="container" data-formula="\[f(a) = \frac{1}{2\pi i} \oint\frac{f(z)}{z-a}dz\]"></amp-mathml>
<amp-mathml layout="container" data-formula="$$ \cos(θ+φ)=\cos(θ)\cos(φ)−\sin(θ)\sin(φ) $$"></amp-mathml>
This is an example of a formula of
<amp-mathml layout="container" inline data-formula="`x`"></amp-mathml>,
<amp-mathml layout="container" inline data-formula="\(x = {-b \pm \sqrt{b^2-4ac} \over 2a}\)"></amp-mathml> placed inline in the middle of a block of text.
<amp-mathml layout="container" inline data-formula="\( \cos(θ+φ) \)"></amp-mathml> This shows how the formula will fit inside a block of text and can be styled with CSS.
data-formula: Specifies the formula to render.
inline (optional): If specified, the component renders inline (inline-block in CSS).
Validation: See amp-mathml rules in the AMP validator specification.
Precision Measurement of the Boron to Carbon Flux Ratio
Carbon nuclei in cosmic rays are thought to be mainly produced and accelerated in astrophysical sources, while boron nuclei are entirely produced by the collision of heavier nuclei, such as carbon and oxygen, with the interstellar matter. Therefore, the boron to carbon flux ratio (B/C) directly measures the average amount of interstellar material traversed by cosmic rays. In cosmic ray propagation models, where cosmic rays are described as a relativistic gas scattering on a magnetized plasma, the B/C ratio is used to constrain the spatial diffusion coefficient $D$, as the B/C ratio is proportional to $1/D$ at high rigidities $R$. The diffusion coefficient dependence on rigidity is $D \propto R^{-\delta}$, where $\delta$ is predicted to be $\delta = -1/3$ with the Kolmogorov theory of interstellar turbulence [A. N. Kolmogorov, Dokl. Akad. Nauk SSSR 30, 301 (1941)], or $\delta = -1/2$ using the Kraichnan theory [R. H. Kraichnan, Phys. Fluids 8, 1385 (1965)]. The measured B/C spectral index $\Delta$, obtained from a fit of $({\rm B}/{\rm C}) = kR^{\Delta}$ at high rigidities, approaches the diffusion spectral index $\delta$ asymptotically ($\Delta = \delta$); $k$ is the normalization constant.
AMS precisely measured the B/C ratio in cosmic rays in the rigidity range from 1.9 GV to 2.6 TV. This measurement is based on 2.3 million boron nuclei and 8.3 million carbon nuclei collected by AMS during the first 5 years of operation onboard the ISS. In this measurement the total error is ∼3% at 100 GV. Figure 1 shows the measured B/C ratio.
As seen in Figure 1, the B/C ratio increases with rigidity reaching a maximum at 4 GV then decreases. The B/C ratio does not show any significant structures. Above 65 GV the B/C ratio measured by AMS is well fit with a single power law $kR^{\Delta}$ with a $\chi^2/\mathrm{d.o.f.} = 14/24$ and a spectral index $\Delta = -0.333 \pm 0.014\,({\rm fit}) \pm 0.005\,({\rm syst})$ in good agreement with the Kolmogorov theory of turbulence which predicts $\Delta = -1/3$ asymptotically.
Figure 2 shows the AMS B/C ratio together with recent results. Also shown in blue dash line is the prediction for the B/C ratio from an important theoretical model [R. Cowsik and T. Madziwa-Nussinov, Astrophys. J. 827, 119 (2016)], which explains the AMS positron fraction [L. Accardo et al., Phys. Rev. Lett. 113, 121101 (2014)] and antiproton results [M. Aguilar et al., Phys. Rev. Lett. 117, 091103 (2016)] by secondary production in cosmic ray propagation. The model shown is ruled out by this measurement.
In conclusion, the B/C ratio does not show any significant structures. Above 65 GV the B/C ratio can be described by a single power law with spectral index $\Delta = -0.333 \pm 0.014\,({\rm fit}) \pm 0.005\,({\rm syst})$, in good agreement with the Kolmogorov theory of turbulence which predicts $\Delta = -1/3$ asymptotically.
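To make the fitting procedure concrete, here is a minimal sketch (synthetic data, not the AMS measurement; the rigidity grid, noise level, and seed are my own assumptions): a single power law $kR^{\Delta}$ is fit in log-log space to mock B/C points above 65 GV, recovering the input index.

```python
import math, random

random.seed(1)
Delta_true, k_true = -1/3, 2.0
rigidities = [65 * 1.3**i for i in range(20)]  # GV, log-spaced above 65 GV
# Mock B/C points with 1% Gaussian scatter:
ratios = [k_true * R**Delta_true * (1 + random.gauss(0, 0.01)) for R in rigidities]

# Least squares in log-log space: log(B/C) = log k + Delta * log R
xs = [math.log(R) for R in rigidities]
ys = [math.log(r) for r in ratios]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
Delta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar)**2 for x in xs))
print(Delta)  # ≈ -0.333, the input spectral index
```

A real analysis would weight points by their errors and propagate the fit uncertainty, but the log-log slope is the essence of how $\Delta$ is extracted.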
Focus Questions
The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions. What is the polar (trigonometric) form of a complex number? How do we multiply two complex numbers in polar form? How do we divide one complex number in polar form by a nonzero complex number in polar form?
Beginning Activity
If \(z = a + bi\) is a complex number, then we can plot \(z\) in the plane as shown in Figure 5.3. In this situation, we will let \(r\) be the magnitude of \(z\) (that is, the distance from \(z\) to the origin) and \(\theta\) the angle \(z\) makes with the positive real axis as shown in Figure 5.3.
Use right triangle trigonometry to write \(a\) and \(b\) in terms of \(r\) and \(\theta\).
Explain why we can write \(z\) as
\[z = r(\cos(\theta) + i\sin(\theta)). \label{eq1}\]
When we write \(z\) in the form given in Equation \ref{eq1}, we say that \(z\) is written in
trigonometric form (or polar form).
The angle \(\theta\) is called the argument of the complex number \(z\), and the real number \(r\) is the modulus or norm of \(z\). To find the polar representation of a complex number \(z = a + bi\), we first notice that
\[r = |z| = \sqrt{a^{2} + b^{2}}\]
\[a = r\cos(\theta)\]
\[b = r\sin(\theta)\]
Multiplication of complex numbers is more complicated than addition of complex numbers. To better understand the product of complex numbers, we first investigate the trigonometric (or polar) form of a complex number. This trigonometric form connects algebra to trigonometry and will be useful for quickly and easily finding powers and roots of complex numbers.
Note
The word
polar here comes from the fact that this process can be viewed as occurring with polar coordinates.
Figure \(\PageIndex{1}\): Trigonometric form of a complex number.
To find \(\theta\), we have to consider cases.
If \(z = 0 = 0 + 0i\), then \(r = 0\) and \(\theta\) can have any real value. If \(z \neq 0\) and \(a \neq 0\), then \(\tan(\theta) = \dfrac{b}{a}\). If \(z \neq 0\) and \(a = 0\) (so \(b \neq 0\)), then
\[^* \space \theta = \dfrac{\pi}{2} \text{ if } b > 0\]
\[^* \space \theta = -\dfrac{\pi}{2} \text{ if } b < 0\]
Exercise \(\PageIndex{1}\)
Determine the polar form of the complex numbers \(w = 4 + 4\sqrt{3}i\) and \(z = 1 - i\). Determine real numbers \(a\) and \(b\) so that \(a + bi = 3(\cos(\dfrac{\pi}{6}) + i\sin(\dfrac{\pi}{6}))\) Answer
1. Note that \(|w| = \sqrt{4^{2} + (4\sqrt{3})^{2}} = \sqrt{64} = 8\) and the argument of \(w\) is \(\arctan(\dfrac{4\sqrt{3}}{4}) = \arctan\sqrt{3} = \dfrac{\pi}{3}\). So
\[w = 8(\cos(\dfrac{\pi}{3}) + i\sin(\dfrac{\pi}{3}))\]
Also, \(|z| = \sqrt{1^{2} + 1^{2}} = \sqrt{2}\) and the argument of \(z\) is \(\arctan(\dfrac{-1}{1}) = -\dfrac{\pi}{4}\).
So \[z = \sqrt{2}(\cos(-\dfrac{\pi}{4}) + i\sin(-\dfrac{\pi}{4})) = \sqrt{2}(\cos(\dfrac{\pi}{4}) - i\sin(\dfrac{\pi}{4}))\]
2. Recall that \(\cos(\dfrac{\pi}{6}) = \dfrac{\sqrt{3}}{2}\) and \(\sin(\dfrac{\pi}{6}) = \dfrac{1}{2}\). So \[3(\cos(\dfrac{\pi}{6}) + i\sin(\dfrac{\pi}{6})) = 3(\dfrac{\sqrt{3}}{2} + \dfrac{1}{2}i) = \dfrac{3\sqrt{3}}{2} + \dfrac{3}{2}i\]
So \(a = \dfrac{3\sqrt{3}}{2}\) and \(b = \dfrac{3}{2}\).
There is an alternate representation that you will often see for the polar form of a complex number using a complex exponential. We won’t go into the details, but only consider this as notation. When we write \(e^{i\theta}\) (where \(i\) is the complex number with \(i^{2} = -1\)) we mean
\[e^{i\theta} = \cos(\theta) + i\sin(\theta)\]
So the polar form \(r(\cos(\theta) + i\sin(\theta))\) can also be written as \(re^{i\theta}\):
\[re^{i\theta} = r(\cos(\theta) + i\sin(\theta))\]
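As an illustrative aside (not part of the text), Python's `cmath` module implements exactly this polar/rectangular correspondence, and can be used to check the notation on the number \(z = 1 - i\) from Exercise 1:

```python
import cmath, math

z = 1 - 1j
r, theta = cmath.polar(z)   # modulus |z| and argument of z

print(r)                    # ≈ 1.4142  (= sqrt(2))
print(theta)                # ≈ -0.7854 (= -pi/4)

# r*e^{i*theta} reconstructs z = r(cos(theta) + i sin(theta)):
print(r * cmath.exp(1j * theta))   # ≈ (1-1j)
print(cmath.rect(r, theta))        # same value via the library helper
```

`cmath.polar` returns the principal argument in \((-\pi, \pi]\), matching the case analysis for \(\theta\) given above.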
Products of Complex Numbers in Polar Form
There is an important product formula for complex numbers that the polar form provides. We illustrate with an example.
Example \(\PageIndex{1}\): Products of Complex Numbers in Polar Form
Let \(w = -\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i\) and \(z = \sqrt{3} + i\). Using our definition of the product of complex numbers we see that
\[wz = (\sqrt{3} + i)(-\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i) = -\sqrt{3} + i.\]
Now we write \(w\) and \(z\) in polar form. Note that \(|w| = \sqrt{(-\dfrac{1}{2})^{2} + (\dfrac{\sqrt{3}}{2})^{2}} = 1\) and the argument of \(w\) satisfies \(\tan(\theta) = -\sqrt{3}\). Since \(w\) is in the second quadrant, we see that \(\theta = \dfrac{2\pi}{3}\), so the polar form of \(w\) is \[w = \cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3})\]
Also, \(|z| = \sqrt{(\sqrt{3})^{2} + 1^{2}} = 2\) and the argument of \(z\) satisfies \(\tan(\theta) = \dfrac{1}{\sqrt{3}}\).
Since \(z\) is in the first quadrant, we know that \(\theta = \dfrac{\pi}{6}\) and the polar form of \(z\) is \[z = 2[\cos(\dfrac{\pi}{6}) + i\sin(\dfrac{\pi}{6})]\]
We can also find the polar form of the complex product \(wz\). Here we have \(|wz| = 2\), and the argument of \(zw\) satisfies \(\tan(\theta) = -\dfrac{1}{\sqrt{3}}\). Since \(wz\) is in quadrant II, we see that \(\theta = \dfrac{5\pi}{6}\) and the polar form of \(wz\) is \[wz = 2[\cos(\dfrac{5\pi}{6}) + i\sin(\dfrac{5\pi}{6})].\]
When we compare the polar forms of \(w, z\), and \(wz\) we might notice that \(|wz| = |w||z|\) and that the argument of \(zw\) is \(\dfrac{2\pi}{3} + \dfrac{\pi}{6}\) or the sum of the arguments of \(w\) and \(z\). This turns out to be true in general.
The result of Example 5.7 is no coincidence, as we will show. In general, we have the following important result about the product of two complex numbers.
Multiplication of Complex Numbers in Polar Form
Let \(w = r(\cos(\alpha) + i\sin(\alpha))\) and \(z = s(\cos(\beta) + i\sin(\beta))\) be complex numbers in polar form. Then the polar form of the complex product \(wz\) is given by
\[wz = rs(\cos(\alpha + \beta) + i\sin(\alpha + \beta))\]
This states that to multiply two complex numbers in polar form, we multiply their norms and add their arguments.
To understand why this result it true in general, let \(w = r(\cos(\alpha) + i\sin(\alpha))\) and \(z = s(\cos(\beta) + i\sin(\beta))\) be complex numbers in polar form. We will use cosine and sine of sums of angles identities to find \(wz\):
\[wz = [r(\cos(\alpha) + i\sin(\alpha))][s(\cos(\beta) + i\sin(\beta))] = rs([\cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)] + i[\cos(\alpha)\sin(\beta) + \cos(\beta)\sin(\alpha)])\]
We now use the cosine and sine sum identities (see page 291) and see that
\(\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)\) and \(\sin(\alpha + \beta) = \cos(\alpha)\sin(\beta) + \cos(\beta)\sin(\alpha)\).
Using equation (1) and these identities, we see that
\[wz = rs([\cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)] + i[\cos(\alpha)\sin(\beta) + \cos(\beta)\sin(\alpha)]) = rs(\cos(\alpha + \beta) + i\sin(\alpha + \beta))\]
An illustration of this is given in Figure \(\PageIndex{2}\). The formula for multiplying complex numbers in polar form tells us that to multiply two complex numbers, we add their arguments and multiply their norms.
Figure \(\PageIndex{2}\): A Geometric Interpretation of Multiplication of Complex Numbers.
Exercise \(\PageIndex{2}\)
Let \(w = 3[\cos(\dfrac{5\pi}{3}) + i\sin(\dfrac{5\pi}{3})]\) and \(z = 2[\cos(-\dfrac{\pi}{4}) + i\sin(-\dfrac{\pi}{4})]\).
What is \(|wz|\)? What is the argument of \(wz\)? In which quadrant is \(wz\)? Explain. Determine the polar form of wz. Draw a picture of \(w\), \(z\), and \(wz\) that illustrates the action of the complex product. Answer
1. Since \(|w| = 3\) and \(|z| = 2\), we see that
\[|wz| = |w||z| = (3)(2) = 6\]
2. Since the argument of \(w\) is \(\dfrac{5\pi}{3}\) and the argument of \(z\) is \(-\dfrac{\pi}{4}\), the argument of \(wz\) is \[\dfrac{5\pi}{3} + (-\dfrac{\pi}{4}) = \dfrac{20\pi - 3\pi}{12} = \dfrac{17\pi}{12}\]
3. The terminal side of an angle of \(\dfrac{17\pi}{12} = \pi + \dfrac{5\pi}{12}\) radians is in the third quadrant.
4. We know the magnitude and argument of \(wz\), so the polar form of \(wz\) is
\[wz = 6[\cos(\dfrac{17\pi}{12}) + i\sin(\dfrac{17\pi}{12})]\]
5. Following is a picture of \(w, z\), and \(wz\) that illustrates the action of the complex product.
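As a quick numerical check of Exercise 2 (an aside, not part of the text), Python's `cmath` module confirms that norms multiply and arguments add; note that `cmath.phase` reports the principal value in \((-\pi, \pi]\):

```python
import cmath, math

w = cmath.rect(3, 5 * math.pi / 3)   # w = 3(cos(5pi/3) + i sin(5pi/3))
z = cmath.rect(2, -math.pi / 4)      # z = 2(cos(-pi/4) + i sin(-pi/4))
wz = w * z

print(abs(wz))          # ≈ 6.0 : norms multiply, |wz| = |w||z|
# Arguments add: 5pi/3 + (-pi/4) = 17pi/12.  cmath.phase reports the
# coterminal principal value 17pi/12 - 2pi = -7pi/12.
print(cmath.phase(wz))  # ≈ -1.8326 (= -7pi/12)
```

The reported angle \(-7\pi/12\) and the answer's \(17\pi/12\) differ by \(2\pi\), so both name the same ray in the third quadrant.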
Quotients of Complex Numbers in Polar Form
We have seen that we multiply complex numbers in polar form by multiplying their norms and adding their arguments. There is a similar method to divide one complex number in polar form by another complex number in polar form.
Division of Complex Numbers in Polar Form
Let \(w = r(\cos(\alpha) + i\sin(\alpha))\) and \(z = s(\cos(\beta) + i\sin(\beta))\) be complex numbers in polar form with \(z \neq 0\). Then the polar form of the complex quotient \(\dfrac{w}{z}\) is given by \[\dfrac{w}{z} = \dfrac{r}{s}(\cos(\alpha - \beta) + i\sin(\alpha - \beta)).\]
So to divide complex numbers in polar form, we divide the norm of the complex number in the numerator by the norm of the complex number in the denominator and subtract the argument of the complex number in the denominator from the argument of the complex number in the numerator.
The proof of this is similar to the proof for multiplying complex numbers and is included as a supplement to this section.
Exercise \(\PageIndex{3}\)
Let \(w = 3[\cos(\dfrac{5\pi}{3}) + i\sin(\dfrac{5\pi}{3})]\) and \(z = 2[\cos(-\dfrac{\pi}{4}) + i\sin(-\dfrac{\pi}{4})]\).
What is \(|\dfrac{w}{z}|\)? What is the argument of \(\dfrac{w}{z}\)? In which quadrant is \(\dfrac{w}{z}\)? Explain. Determine the polar form of \(\dfrac{w}{z}\). Draw a picture of \(w\), \(z\), and \(\dfrac{w}{z}\) that illustrates the action of the complex quotient. Answer
1. Since \(|w| = 3\) and \(|z| = 2\), we see that
\[|\dfrac{w}{z}| = \dfrac{|w|}{|z|} = \dfrac{3}{2}\]
2. Since the argument of \(w\) is \(\dfrac{5\pi}{3}\) and the argument of \(z\) is \(-\dfrac{\pi}{4}\), the argument of \(\dfrac{w}{z}\) is
\[\dfrac{5\pi}{3} - (-\dfrac{\pi}{4}) = \dfrac{20\pi + 3\pi}{12} = \dfrac{23\pi}{12}\]
3. The terminal side of an angle of \(\dfrac{23\pi}{12} = 2\pi - \dfrac{\pi}{12}\) radians is in the fourth quadrant.
4. We know the magnitude and argument of \(\dfrac{w}{z}\), so the polar form of \(\dfrac{w}{z}\) is \[\dfrac{w}{z} = \dfrac{3}{2}[\cos(\dfrac{23\pi}{12}) + i\sin(\dfrac{23\pi}{12})]\]
5. Following is a picture of \(w, z\), and \(\dfrac{w}{z}\) that illustrates the action of the complex quotient.
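The quotient rule of Exercise 3 can be checked numerically the same way (an aside, not part of the text): norms divide and arguments subtract, with `cmath.phase` again giving the principal value:

```python
import cmath, math

w = cmath.rect(3, 5 * math.pi / 3)
z = cmath.rect(2, -math.pi / 4)
q = w / z

print(abs(q))           # ≈ 1.5 : norms divide, |w/z| = |w|/|z|
# Arguments subtract: 5pi/3 - (-pi/4) = 23pi/12, reported by cmath.phase
# as the coterminal principal value 23pi/12 - 2pi = -pi/12.
print(cmath.phase(q))   # ≈ -0.2618 (= -pi/12)
```

The angle \(-\pi/12\) is coterminal with the answer's \(23\pi/12\), consistent with the fourth-quadrant conclusion above.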
Proof of the Rule for Dividing Complex Numbers in Polar Form
Let \(w = r(\cos(\alpha) + i\sin(\alpha))\) and \(z = s(\cos(\beta) + i\sin(\beta))\) be complex numbers in polar form with \(z \neq 0\). So
\[\dfrac{w}{z} = \dfrac{r(\cos(\alpha) + i\sin(\alpha))}{s(\cos(\beta) + i\sin(\beta))} = \dfrac{r}{s}[\dfrac{\cos(\alpha) + i\sin(\alpha)}{\cos(\beta) + i\sin(\beta)}].\]
We will work with the fraction \(\dfrac{\cos(\alpha) + i\sin(\alpha)}{\cos(\beta) + i\sin(\beta)}\) and follow the usual practice of multiplying the numerator and denominator by \(\cos(\beta) - i\sin(\beta)\). So
\[\dfrac{w}{z} = \dfrac{r}{s}[\dfrac{\cos(\alpha) + i\sin(\alpha)}{\cos(\beta) + i\sin(\beta)}] = \dfrac{r}{s}[\dfrac{\cos(\alpha) + i\sin(\alpha)}{\cos(\beta) + i\sin(\beta)} \cdot \dfrac{\cos(\beta) - i\sin(\beta)}{\cos(\beta) - i\sin(\beta)}] = \dfrac{r}{s}[\dfrac{(\cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta)) + i(\sin(\alpha)\cos(\beta) - \cos(\alpha)\sin(\beta))}{\cos^{2}(\beta) + \sin^{2}(\beta)}].\]
We now use the following identities with the last equation:
\(\cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta) = \cos(\alpha - \beta)\) \(\sin(\alpha)\cos(\beta) - \cos(\alpha)\sin(\beta) = \sin(\alpha - \beta)\) \(\cos^{2}(\beta) + \sin^{2}(\beta) = 1\)
Using these identities with the last equation for \(\dfrac{w}{z}\), we see that
\[\dfrac{w}{z} = \dfrac{r}{s}[\dfrac{\cos(\alpha - \beta) + i\sin(\alpha- \beta)}{1}].\]
Summary
In this section, we studied the following important concepts and ideas:
If \(z = a + bi\) is a complex number, then we can plot \(z\) in the plane. If \(r\) is the magnitude of \(z\) (that is, the distance from \(z\) to the origin) and \(\theta\) the angle \(z\) makes with the positive real axis, then the
trigonometric form (or polar form) of \(z\) is \(z = r(\cos(\theta) + i\sin(\theta))\), where
\[r = \sqrt{a^{2} + b^{2}}, \cos(\theta) = \dfrac{a}{r}\]
and \[\sin(\theta) = \dfrac{b}{r}\]
The angle \(\theta\) is called the
argument of the complex number \(z\) and the real number \(r\) is the modulus or norm of \(z\).
If \(w = r(\cos(\alpha) + i\sin(\alpha))\) and \(z = s(\cos(\beta) + i\sin(\beta))\) are complex numbers in polar form, then the polar form of the complex product \(wz\) is given by
\[wz = rs(\cos(\alpha + \beta) + i\sin(\alpha + \beta))\] and, if \(z \neq 0\), the polar form of the complex quotient \(\dfrac{w}{z}\) is
\[\dfrac{w}{z} = \dfrac{r}{s}(\cos(\alpha - \beta) + i\sin(\alpha - \beta)),\]
This states that to multiply two complex numbers in polar form, we multiply their norms and add their arguments, and to divide two complex numbers, we divide their norms and subtract their arguments.
I am not an expert at all in the subject of Lie groups, lattices, arithmetic groups and rigidity. But, lately I am interested in Margulis superrigidity theorem, which in most versions can be stated as follows:
Theorem.
Let $G$ and $G'$ be semisimple connected real center-free Lie groups without compact factors with $\mathrm{rk}(G)\geq 2$, $\Gamma < G$ be an irreducible lattice, and $\pi: \Gamma \to G'$ a homomorphism with $\pi(\Gamma)$ being Zariski dense in $G'$. Then $\pi$ extends to a rational epimorphism $\pi':G\to G'$.
Here "$H$ is without compact factors" means, for $H$ center-free, that if we write $H=\prod_{i=1}^kS_i$ with $S_i$ simple, then each $S_i$ is non-compact, or equivalently has positive (real) rank; the (real) rank $\mathrm{rk}(H)$ of $H$ is the sum of the ranks of all $S_i$. Irreducibility of a lattice $\Gamma$ means that its projection in $H/S_i$ has a dense image for all $i$.
Questions:
Do you have examples of Lie groups and lattices for which Margulis' theorem applies, and also groups for which the theorem does not hold? In particular, does this theorem apply for $G=G'=\mathrm{PSL}(n,\mathbb{R})$ and $\Gamma=\mathrm{PSL}(n,\mathbb{Z})$? Does this imply that $\mathrm{Out}(\mathrm{PSL}(n,\mathbb{Z}))$ is finite?
Thank you all. |
In David J. Griffiths's
Introduction to Electrodynamics, the author gave the following problem in an exercise.
Sketch the vector function$$ \vec{v} ~=~ \frac{\hat{r}}{r^2}, $$ and compute its divergence, where$$\hat{r}~:=~ \frac{\vec{r}}{r} , \qquad r~:=~|\vec{r}|.$$ The answer may surprise you. Can you explain it?
I found the divergence of this function to be $$ \frac{1}{x^2+y^2+z^2} $$ Please tell me what the surprising thing here is. |
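A numerical experiment helps here (my own sketch, not part of the post): away from the origin the divergence of \(\hat{r}/r^2\) actually works out to zero, even though the flux of this field through any sphere about the origin is \(4\pi\); the full answer involves a Dirac delta at the origin. A central-difference check at an arbitrary off-origin point:

```python
import math

# v = r_hat / r^2 = (x, y, z) / r^3, defined away from the origin
def v(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    return (x / r**3, y / r**3, z / r**3)

# numerical divergence via central differences
def divergence(x, y, z, h=1e-5):
    dvx = (v(x + h, y, z)[0] - v(x - h, y, z)[0]) / (2 * h)
    dvy = (v(x, y + h, z)[1] - v(x, y - h, z)[1]) / (2 * h)
    dvz = (v(x, y, z + h)[2] - v(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

print(divergence(1.0, 2.0, 2.0))  # ≈ 0 at any point off the origin
```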
Table of Contents
Clopen Set Criterion for Disconnected Topological Spaces
Recall from the Connected and Disconnected Topological Spaces page that a topological space $X$ is said to be disconnected if there exist open sets $A, B \subset X$ with $A, B \neq \emptyset$ and $A \cap B = \emptyset$, and such that:

(1) $$X = A \cup B$$
Furthermore we said that $\{ A, B \}$ is a separation of $X$. We also defined $X$ to be connected if $X$ is not disconnected.
We will now look at a nice criterion for determining whether a topological space is connected or disconnected with regards to clopen sets in the topological space of interest.
Theorem 1: A topological space $X$ is disconnected if and only if there exists a clopen set (open and closed set) $A \subset X$ such that $A \neq \emptyset$ and $A \neq X$. Proof: $\Rightarrow$ Suppose that $X$ is disconnected. Then there exist open sets $A, B \subset X$, $A, B \neq \emptyset$, $A \cap B = \emptyset$, and $X = A \cup B$. Since $A \cap B = \emptyset$ and $X = A \cup B$ we see that $A^c = B$. So $A^c$ is an open set in $X$ which implies that $A$ is also closed in $X$. Similarly, $B$ is also a closed set in $X$ since $B = A^c$ and $A$ is open in $X$. Furthermore, the sets $A$ and $B$ are such that $A, B \neq \emptyset$ and $A, B \neq X$. $\Leftarrow$ Suppose that there exists a clopen set $A \subset X$ such that $A \neq \emptyset$ and $A \neq X$. Let $B = A^c$. We claim that $\{ A, B \}$ is a separation of $X$. Since $A$ is open (and closed), $B$ is also open (and closed). Furthermore, $A \neq \emptyset$. Since $A \neq X$ we also have that $B \neq \emptyset$. Obviously, by definition, $A \cap B = \emptyset$. Furthermore, by definition it is not hard to see that $X = A \cup B$. Hence $\{ A, B \}$ is a separation of $X$ which shows that $X$ is disconnected. $\blacksquare$
Sometimes the following (logically equivalent) corollary in terms of connected topological spaces is more useful to remember.
Corollary 1: A topological space $X$ is connected if and only if the only clopen sets in $X$ are the empty set $\emptyset$ and the whole set $X$. Proof: Corollary 1 is logically equivalent to Theorem 1. $\blacksquare$ |
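To make the criterion concrete, here is a small Python sketch (my own illustration, not from the page) that tests finite topologies for disconnectedness exactly as in Theorem 1: a space is disconnected precisely when some open set other than $\emptyset$ and $X$ has an open complement, i.e., is clopen.

```python
# Disconnected iff some open set other than {} and X has an open complement
# (equivalently, iff there is a nontrivial clopen set).
def is_disconnected(X, tau):
    return any(A and A != X and (X - A) in tau for A in tau)

X = frozenset({1, 2})
discrete = {frozenset(), frozenset({1}), frozenset({2}), X}   # {1} is clopen
sierpinski = {frozenset(), frozenset({1}), X}                 # only {} and X are clopen

print(is_disconnected(X, discrete))    # True
print(is_disconnected(X, sierpinski))  # False
```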
Power of a Lens is one of the most interesting concepts in ray optics. The detailed concept of this topic is given in the below article so that learners can understand this chapter more effectively.
Simply put, the power of a lens in Ray Optics is its ability to bend light. The greater the power of a lens, the greater is its ability to refract light that passes through it. For a convex lens, the converging ability is defined by power and in a concave lens, the diverging ability.
Check out the following ray diagrams. Do you notice the connection between the focal length and the bending of the light ray? As the focal length decreases, the amount the light bends increases. Therefore, we can conclude that the power of a lens is inversely proportional to its focal length: a short focal length corresponds to high optical power.
Power of a Lens Formula
To find the power of a lens in Ray Optics, the following formula can be used: P = 1/f, where f is the focal length of the lens.
If the focal length is given in meters (m), the power of the lens is measured in diopters (D); that is, the diopter is the unit of lens power. Another thing to keep in mind is that the optical power is positive for a converging lens and negative for a diverging lens. For example, if the focal length of a lens is 20 cm, converting this to meters gives 0.2 m. To find the power of this lens, take the reciprocal of 0.2, which is 5. Therefore the power of this particular lens is 5 D. If you have previously read about the lens maker's formula, you will have realized that what we calculate there is actually the power of the lens. This means that you can calculate the power of a lens using the radii of curvature of its two surfaces and the refractive index of the lens material. An important application of the power of lenses is in optometry. Optometrists prescribe corrective lenses (either convex or concave) based on deteriorating vision. Your eye is essentially a lens, and you may sometimes have trouble seeing clearly. This can be rectified by wearing corrective lenses of the appropriate power.
Optics Formulas
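The reciprocal relation is easy to sketch in code (a minimal illustration; the function name is mine):

```python
# Power in diopters is the reciprocal of the focal length in meters;
# sign convention: positive for converging, negative for diverging lenses.
def lens_power(focal_length_m):
    return 1.0 / focal_length_m

print(lens_power(0.20))    # 5.0 D, the 20 cm example above
print(lens_power(-0.50))   # -2.0 D, a diverging lens
```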
Following are the list of other optic formulas that are studied in optics:
Snell's law of refraction: \(\frac{n_{1}}{n_{2}}=\frac{\sin r}{\sin i}\)
Critical angle \(\Theta\) (for total internal reflection): \(\sin \Theta =\frac{n_{2}}{n_{1}}\)
Prism formula: \(\mu =\frac{\sin\frac{A+\delta _{m}}{2}}{\sin\frac{A}{2}}\)
Lens maker's formula: \(\frac{1}{f}=(\mu -1)\left(\frac{1}{R_{1}}-\frac{1}{R_{2}}\right)\)
Was this article interesting? Check out other related articles by visiting BYJU’S. |
Well, it becomes a bit clearer when we see the final formulas of Ref. 1:
$$\delta \langle a_f , t_f |a_i , t_i \rangle ~=~ \frac{i}{\hbar} \int_{t_i}^{t_f} \! dt \langle a_f , t_f | \delta L(t) |a_i , t_i \rangle \tag{7.126}$$
$$ \delta^{\prime} \delta \langle a_f , t_f |a_i , t_i \rangle ~=~\frac{1}{2}\left(\frac{i}{\hbar}\right)^2 \int_{t_i}^{t_f} \! dt \int_{t_i}^{t_f} \!dt^{\prime} $$$$\times \langle a_f , t_f |T[ \delta L(t)~\delta^{\prime}L(t^{\prime}) ] |a_i , t_i \rangle. \tag{7.131}$$
Recall that the Schwinger action principle can be described via a time integral $\int_{t_i}^{t_f}\!dt~ \delta L(t)$ of an operator $\delta L(t)$. Imagine that the time interval
$$[t_i,t_f]~=~\cup_{n=1}^N I_n, \qquad I_n~:=~[t_{n-1},t_n],$$
is divided into a sufficiently fine discretization $t_i=t_0<t_1< \ldots < t_{N-1} < t_N=t_f$, where the integer $N$ is sufficiently large.
By inserting many completeness identities $\sum_b |b , t_n \rangle\langle b , t_n |={\bf 1}$, we can split a total variation (7.126) into many small contributions labelled by the time intervals $I_n$, $n\in\{1,2, \ldots, N\}$.
Similarly, when performing a double variation (7.131), we will get $N$ diagonal and $N(N-1)$ off-diagonal contributions labelled by two time intervals $I_n$ and $I_m$, where $n,m\in\{1,2, \ldots, N\}$. (A diagonal contribution $n=m$ refers to the same time interval $I_n=I_m$.) If in the limit $N\to\infty$, the $N(N-1)$ off-diagonal contributions dominate over the $N$ diagonal contributions, the two variations become
effectively independent.
However in hindsight, it seems that Ref. 1 is assuming that the two variations $\delta$ and $\delta^{\prime}$ are
manifestly independent and not just effectively independent. Manifest independence here means that the $\delta^{\prime}$ variation simply doesn't act on the $\delta L(t)$ operator, and vice-versa.
References:
D.J. Toms, The Schwinger Action Principle and Effective Action, 1997, Section 7.6.
This post imported from StackExchange Physics at 2014-04-20 15:03 (UCT), posted by SE-user Qmechanic |
The Interior Points of Sets in a Topological Space Examples 2
Recall from The Interior Points of Sets in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then a point $a \in A$ is called an interior point of $A$ if there exists an open set $U \in \tau$ such that:

(1) $$a \in U \subseteq A$$
We also proved some important results for a topological space $(X, \tau)$ with $A \subseteq X$:
$A$ is open if and only if every $a \in A$ is an interior point of $A$, i.e., $A = \mathrm{int} (A)$. If $U \in \tau$ is such that $U \subseteq A$ then $U \subseteq \mathrm{int} (A)$. $\mathrm{int} (A)$ is the largest open subset of $A$.
We will now look at some examples regarding interior points of subsets of a topological space.
Example 1 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau = \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \}$ and let $A = (-\pi, e)$. What is $\mathrm{int} (A)$?
It's not hard to see that $(-1, 1) \subseteq A$, $(-2, 2) \subseteq A$, and $(-3, 3) \not \subseteq A$ since $2.9 \in (-3, 3)$ and $2.9 \not \in (-\pi, e)$. Therefore the largest open subset of $A$ is $(-2, 2)$ and hence:

(2) $$\mathrm{int} (A) = (-2, 2)$$
Example 2 Let $(X, \tau)$ be a topological space. Prove that if for every $A \subseteq X$ we have that $A = \mathrm{int} (A)$ then $\tau$ is the discrete topology on $X$.
Suppose that for every $A \subseteq X$ we have that $A = \mathrm{int} (A)$. Then every $A \subseteq X$ is open, i.e., every subset of $X$ is open, so $\tau = \mathcal P(X)$ and hence $\tau$ is the discrete topology on $X$.
Example 3 Let $(\mathbb{R}, \tau)$ be a topological space where $\tau$ is the topology of open intervals. What is $\mathrm{int} (\mathbb{N})$?
Consider the point $1 \in \mathbb{N}$. Notice that for any open interval $(a, b)$ such that $1 \in (a, b)$ we have that $(a, b) \not \subseteq \mathbb{N}$. For example, $1 \in (0, 2)$ but $(0, 2) \not \subseteq \mathbb{N}$. In fact, for all $a, b \in \mathbb{R}$ such that $a < 1 < b$ we have that there exists a number $x \in \mathbb{R}$ such that $x$ is not a natural number and where $a < 1 < x < b$. So $x \in (a, b)$ and $x \not \in \mathbb{N}$.
The same applies for all $n \in \mathbb{N}$, and hence $\mathrm{int} (\mathbb{N}) = \emptyset$. |
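For a finite topology, the interior can be computed directly as the union of all open sets contained in $A$, which is exactly the "largest open subset" characterization quoted above. A small Python sketch (my own illustration; the example topology is mine, not from the page):

```python
# int(A) = union of all open sets contained in A (the largest open subset of A)
def interior(A, tau):
    pts = set()
    for U in tau:
        if U <= A:       # U is an open subset of A
            pts |= U
    return frozenset(pts)

X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]  # a topology on X

print(interior(frozenset({1, 3}), tau))  # frozenset({1})
print(interior(frozenset({3}), tau))     # frozenset() -- empty, like int(N) above
```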
The following problem appeared in the January 2013 qualifying exam at Purdue University, which is publicly available here.
Problem 3. Let $\{a_k\}$ be a sequence of positive numbers such that $a_k\to\infty$ as $k\to\infty$. Prove that the following limit exists $$ \lim_{k\to\infty}\int_{0}^{\infty} \frac{e^{-x}\cos(x)}{a_kx^2 + \frac{1}{a_k}}\, dx $$ and find it.
I have hardly come across limits of sequences that involve definite integrals (in my undergraduate education so far), so this problem just seems insurmountable at first glance. I would appreciate any hints.
One of the things that comes to mind is to use the limit comparison test. For example, we can evaluate integrals such as $$\int_{0}^{\infty} e^{-x}\cos(x)\,dx=\frac{1}{2}$$ But for that we would have to bound the integrand somehow. One tempting thing is to interchange the integral and the limit, which would tell us that the integrand is zero in the limit, but I highly doubt this is allowed here.
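A rough numerical experiment (my own sketch, not part of the original post; the function name and parameters are mine) is suggestive here. Note that $\frac{1}{a x^2 + 1/a} = \frac{a}{a^2x^2+1}$, and $\int_0^\infty \frac{a}{a^2x^2+1}\,dx = \pi/2$ for every $a > 0$, with the mass concentrating near $x = 0$ as $a \to \infty$; this hints that the limit may be $\pi/2 \cdot e^{-0}\cos(0) = \pi/2$ rather than $0$:

```python
import math

# Midpoint-rule approximation of the integral for a fixed a_k = a.
def qual_integral(a, upper=10.0, n=400_000):
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += math.exp(-x) * math.cos(x) / (a * x * x + 1.0 / a)
    return total * h

val = qual_integral(100.0)
print(val)  # creeps toward pi/2 ≈ 1.5708 as a grows
```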
Looking forward to hearing your thoughts.
P.S. I am not sure how to make the title informative for this post. Feel free to edit as you see fit. |
Dirr, Nicolas and Luckhaus, Stephen 2001. Mesoscopic limit for non-isothermal phase transition. Markov Processes and Related Fields 7 (3), pp. 355-381.
Abstract
Motivated by the problem of modeling nucleation in non-isothermal systems, we consider the stochastic evolution of a coupled system of a lattice spin variable $\sigma$ and a continuous variable $e$ (corresponding to the phase and the energy density of a continuum system). The spin variables flip with rates depending both on a Kac potential type interaction with the spins and on an interaction with the $e$-field, which plays the role of the external field in ferromagnetics but evolves by a diffusion equation with a forcing depending on the spins. We analyze the mesoscopic limit, where space scales like the diverging interaction range of the Kac potential, $\gamma^{-1},$ while time is not rescaled. By writing $\sigma$ as random time change of a family of independent spins, and thus reducing the problem to investigating integral equations parametrized by independent random variables, we show that as $\gamma\to 0$ the average of the spins over small cubes and the field $e$ converge in probability to the solution of a system of nonlocal evolution equations which is similar to the phase field equations. In some cases the convergence holds until times of order ${\log(\gamma^{-1})}.$
Item Type: Article
Date Type: Publication
Status: Published
Schools: Mathematics
Subjects: Q Science > QA Mathematics
Uncontrolled Keywords: Non-isothermal phase change; Kac potential; Random time change; Microscopic model for phase field equations
Publisher: Polymat
ISSN: 1024-2953
Last Modified: 04 Jun 2017 02:52
URI: http://orca-mwe.cf.ac.uk/id/eprint/13080
Citation Data
Cited 5 times in Google Scholar. |
\$H(\omega) = \frac{1}{\sqrt{2}}\$? That sounds a bit confusing; we usually refer to the cutoff point as the -3 dB point.
That is the same though, -3 dB is
half the power.
Let me explain: take your \$H(\omega) = \frac{1}{\sqrt2}\$
That means that at that \$\omega\$ the
voltage is divided by \$\sqrt2\$, if this voltage is applied across a (load) resistor at the output of the filter then the current through that same load resistor will also be divided by \$\sqrt2\$.
What does that mean for the
Power?
It means that the power is
halved.
On a dB Power scale that means -3 dB
Using that "half of the power" as a reference point is useful because if we divide a wideband signal into a low frequency part and a high frequency part then we can do that using a low pass filter and a highpass filter with the same cutoff frequency. At that cutoff frequency half of the power ends up at the output of the lowpass filter and the other half ends up at the output of the highpass filter. |
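The dB arithmetic above is easy to verify (a minimal sketch of my own, not part of the original answer):

```python
import math

voltage_ratio = 1 / math.sqrt(2)    # |H| at the cutoff
power_ratio = voltage_ratio ** 2    # P is proportional to V^2, so this is 1/2
db = 10 * math.log10(power_ratio)   # equivalently 20*log10(voltage_ratio)

print(round(power_ratio, 12), round(db, 2))  # 0.5 -3.01
```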
This example will use an elementary dipole and loop antenna and analyze the wave impedance behavior of each radiator in space at a single frequency. The region of space around an antenna has been defined in a variety of ways. The most succinct description is using a 2- or 3-region model. One variation of the 2-region model uses the terms near-field and far-field to identify specific field mechanisms that are dominant. The 3-region model splits the near-field into a transition zone, wherein a weakly radiative mechanism is at work. Other terms that have been used to describe these zones include quasistatic field, reactive field, non-radiative field, Fresnel region, induction zone, etc. [1]. Pinning these regions down mathematically presents further challenges, as observed with the variety of definitions available across different sources [1]. Understanding the regions around an antenna is critical for both an antenna engineer as well as an electromagnetic compatibility (EMC) engineer. The antenna engineer may want to perform near-field measurements and then compute the far-field pattern. To the EMC engineer, understanding the wave impedance is required for designing a shield with a particular impedance to keep interference out.
For this analysis the frequency is 1 GHz. The length and circumference of the dipole and loop are selected so that they are electrically short at this frequency.
f = 1e9;
c = physconst('lightspeed');
lambda = c/f;
wavenumber = 2*pi/lambda;
d = dipole;
d.Length = lambda/20;
d.Width = lambda/400;
circumference = lambda/20;
r = circumference/(2*pi);
l = loopCircular;
l.Radius = r;
l.Thickness = circumference/200;
The wave impedance is defined in a broad sense as the ratio of the magnitudes of the total electric and magnetic field, respectively. The magnitude of a complex vector is defined to be the length of the real vector resulting from taking the modulus of each component of the original complex vector. To examine impedance behavior in space, choose a direction and vary the radial distance R from the antenna along this direction. The spherical coordinate system is used with azimuth and elevation angles fixed at (0,0) while R is varied in terms of wavelength. For selected antennas, the maximum radiation occurs in the azimuthal plane. The smallest value of R has to be greater than the structure dimensions, i.e. field computations are not done directly on the surface.
N = 1001;
az = zeros(1,N);
el = zeros(1,N);
R = linspace(0.1*lambda,10*lambda,N);
x = R.*sind(90-el).*cosd(az);
y = R.*sind(90-el).*sind(az);
z = R.*cosd(90-el);
points = [x;y;z];
Since the antennas are electrically small at the frequency of 1 GHz, mesh the structure manually by specifying a maximum edge length. The surface mesh is a triangular discretization of the antenna geometry. Compute electric and magnetic field complex vectors.
md = mesh(d,'MaxEdgeLength',0.0003);
ml = mesh(l,'MaxEdgeLength',0.0003);
[Ed,Hd] = EHfields(d,f,points);
[El,Hl] = EHfields(l,f,points);
The electric and magnetic field results from the function
EHfields are 3-component complex vectors. Calculate the resulting magnitudes of the electric and magnetic fields component-wise, respectively.
Edmag = abs(Ed);
Hdmag = abs(Hd);
Elmag = abs(El);
Hlmag = abs(Hl);
% Calculate resultant E and H
Ed_rt = sqrt(Edmag(1,:).^2 + Edmag(2,:).^2 + Edmag(3,:).^2);
Hd_rt = sqrt(Hdmag(1,:).^2 + Hdmag(2,:).^2 + Hdmag(3,:).^2);
El_rt = sqrt(Elmag(1,:).^2 + Elmag(2,:).^2 + Elmag(3,:).^2);
Hl_rt = sqrt(Hlmag(1,:).^2 + Hlmag(2,:).^2 + Hlmag(3,:).^2);
The wave impedance can now be calculated at each of the predefined points in space as the ratio of the total electric field magnitude to the total magnetic field magnitude. Calculate this ratio for both the dipole antenna and the loop antenna.
ZE = Ed_rt./Hd_rt; ZH = El_rt./Hl_rt;
The material properties of free space, the permittivity and permeability of vacuum, are used to define the free space impedance, eta = sqrt(mu_0/eps_0).
eps_0 = 8.854187817e-12; mu_0 = 1.2566370614e-6; eta = round(sqrt(mu_0/eps_0));
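The same constants can be cross-checked in Python (a minimal sketch mirroring the MATLAB lines above):

```python
import math

eps_0 = 8.854187817e-12   # vacuum permittivity, F/m
mu_0 = 1.2566370614e-6    # vacuum permeability, H/m

eta = math.sqrt(mu_0 / eps_0)
print(round(eta, 2))  # 376.73 ohms, commonly rounded to 377
```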
The behavior of wave impedance for both antennas is given on the same plot. The x-axis is the distance from the antenna in terms of the wavelength λ and the y-axis is the impedance measured in ohms (Ω).
fig1 = figure;
loglog(R,ZE,'--','LineWidth',2.5)
hold on
loglog(R,ZH,'m--','LineWidth',2.5)
line(R,eta.*ones(size(ZE)),'Color','r','LineWidth',1.5);
textInfo = 'Wavenumber, k = 2\pi/\lambda';
text(0.4,310,textInfo,'FontSize',9)
ax1 = fig1.CurrentAxes;
ax1.XTickLabelMode = 'manual';
ax1.XLim = [min(R) max(R)];
ax1.XTick = sort([lambda/(2*pi) 5*lambda/(2*pi) lambda 1 5*lambda ax1.XLim]);
ax1.XTickLabel = {'0.1\lambda';'\lambda/2\pi';'5\lambda/2\pi'; '\lambda'; 'k\lambda/2\pi';'5\lambda';'10\lambda'};
ax1.YTickLabelMode = 'manual';
ax1.YTick = sort([ax1.YTick eta]);
ax1.YTickLabel = cellstr(num2str(ax1.YTick'));
xlabel('Distance from antenna in \lambda (m)')
ylabel('Impedance (\Omega)')
legend('Dipole','Loop')
title('Wave Impedance')
grid on
The plot of the wave impedance variation reveals several interesting aspects.
The wave impedance changes with distance from the antenna and shows opposing behaviors in the case of the dipole and the loop. The dipole, whose dominant radiation mechanism is via the electric field, shows a minimum close to the radian sphere distance, λ/2π, whilst the loop, which can be thought of as a magnetic dipole, shows a maximum in the impedance.
The region below the radian sphere distance shows the first cross-over across 377 Ω. This cross-over occurs very close to the structure, and the quick divergence indicates that we are in the reactive near-field.
Beyond the radian sphere distance (λ/2π), the wave impedance for the dipole and loop decreases and increases, respectively. The impedance starts to converge towards the free space impedance value of 377 Ω.
Even at a distance of 5λ from the antennas, the wave impedance has not converged, implying that we are not yet in the far-field.
At a distance of 10λ and beyond, the values for wave impedance are very nearly equal to 377 Ω. Beyond 10λ, the wave impedance stabilizes and the region of space can be termed the far-field for these antennas at the frequency of 1 GHz.
Note that the dependence on wavelength implies that these regions we have identified will change if the frequency is changed. Thus, the boundary will move in space.
[1] C. Capps, "Near Field or Far Field," EDN, August 16, 2001, pp. 95-102. Online at: http://m.eet.com/media/1140931/19213-150828.pdf |
In the previous chapter we found that we could differentiate functions of several variables with respect to one variable, while treating all the other variables as constants or coefficients. We can integrate functions of several variables in a similar way. For instance, if we are told that \(f_x(x,y) = 2xy\), we can treat \(y\) as staying constant and integrate to obtain \(f(x,y)\):
\[\begin{align*}
f(x,y) &= \int f_x(x,y) \,dx\\ &= \int 2xy \,dx \\ &= x^2y + C. \end{align*}\]
Make a careful note about the constant of integration, \(C\). This "constant'' is something with a derivative of \(0\) with respect to \(x\), so it could be any expression that contains only constants and functions of \(y\). For instance, if \(f(x,y) = x^2y+ \sin y + y^3 + 17\), then \(f_x(x,y) = 2xy\). To signify that \(C\) is actually a function of \(y\), we write:
$$f(x,y) = \int f_x(x,y) \,dx = x^2y+C(y).$$
Using this process we can even evaluate definite integrals.
Example \(\PageIndex{1}\): Integrating functions of more than one variable
Evaluate the integral \(\displaystyle \int_1^{2y} 2xy \,dx.\)
SOLUTION
We find the indefinite integral as before, then apply the Fundamental Theorem of Calculus to evaluate the definite integral:
\[\begin{align*}
\int_1^{2y} 2xy \,dx &= x^2y\Big|_1^{2y}\\ &= (2y)^2y - (1)^2y \\ &= 4y^3-y. \end{align*}\]
We can also integrate with respect to \(y\). In general,
$$\int_{h_1(y)}^{h_2(y)} f_x(x,y) \,dx = f(x,y)\Big|_{h_1(y)}^{h_2(y)} = f\big(h_2(y),y\big)-f\big(h_1(y),y\big),$$
and
$$\int_{g_1(x)}^{g_2(x)} f_y(x,y) \,dy = f(x,y)\Big|_{g_1(x)}^{g_2(x)} = f\big(x,g_2(x)\big)-f\big(x,g_1(x)\big).$$
Note that when integrating with respect to \(x\), the bounds are functions of \(y\) (of the form \(x=h_1(y)\) and \(x=h_2(y)\)) and the final result is also a function of \(y\). When integrating with respect to \(y\), the bounds are functions of \(x\) (of the form \(y=g_1(x)\) and \(y=g_2(x)\)) and the final result is a function of \(x\). Another example will help us understand this.
Example \(\PageIndex{2}\): Integrating functions of more than one variable
Evaluate \(\displaystyle \int_1^x\big(5x^3y^{-3}+6y^2\big) \,dy\).
SOLUTION
We consider \(x\) as staying constant and integrate with respect to \(y\):
\[\begin{align*}
\int_1^x\big(5x^3y^{-3}+6y^2\big) \,dy & = \left(\frac{5x^3y^{-2}}{-2}+\frac{6y^3}{3}\right)\Bigg|_1^x \\ &= \left(-\frac52x^3x^{-2}+2x^3\right) - \left(-\frac52x^3+2\right) \\ &= \frac92x^3-\frac52x-2. \end{align*}\]
Note how the bounds of the integral are from \(y=1\) to \(y=x\) and that the final answer is a function of \(x\).
In the previous example, we integrated a function with respect to \(y\) and ended up with a function of \(x\). We can integrate this as well. This process is known as
iterated integration, or multiple integration.
Example \(\PageIndex{3}\): Integrating an integral
Evaluate \(\displaystyle \int_1^2\left(\int_1^x\big(5x^3y^{-3}+6y^2\big) \,dy\right) \,dx.\)
SOLUTION
We follow a standard "order of operations'' and perform the operations inside parentheses first (which is the integral evaluated in Example \(\PageIndex{2}\).)
\[\begin{align*}
\int_1^2\left(\int_1^x\big(5x^3y^{-3}+6y^2\big) \,dy\right) \,dx &= \int_1^2 \left(\left[\frac{5x^3y^{-2}}{-2}+\frac{6y^3}{3}\right]\Bigg|_1^x\right) \,dx \\ &= \int_1^2 \left(\frac92x^3-\frac52x-2\right) \,dx \\ &= \left(\frac98x^4-\frac54x^2-2x\right)\Bigg|_1^2\\ &= \frac{89}8. \end{align*}\]
Note how the bounds of \(x\) were \(x=1\) to \(x=2\) and the final result was a number.
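The iterated integral in the example can be checked numerically (my own sketch, not part of the text) by applying the midpoint rule to the inner integral for each sample of the outer variable:

```python
# f(x, y) = 5x^3 y^-3 + 6y^2; inner y from 1 to x, outer x from 1 to 2
def iterated(nx=400, ny=400):
    hx = 1.0 / nx                 # outer interval [1, 2]
    total = 0.0
    for i in range(nx):
        x = 1.0 + (i + 0.5) * hx
        hy = (x - 1.0) / ny       # inner interval [1, x] depends on x
        inner = 0.0
        for j in range(ny):
            y = 1.0 + (j + 0.5) * hy
            inner += 5 * x**3 / y**3 + 6 * y * y
        total += inner * hy * hx
    return total

val = iterated()
print(val)  # ≈ 89/8 = 11.125
```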
The previous example showed how we could perform something called an iterated integral; we do not yet know why we would be interested in doing so nor what the result, such as the number \(89/8\),
means. Before we investigate these questions, we offer some definitions.
Definition: Iterated Integration
Iterated integration is the process of repeatedly integrating the results of previous integrations. Integrating one integral is denoted as follows.
Let \(a\), \(b\), \(c\) and \(d\) be numbers and let \(g_1(x)\), \(g_2(x)\), \(h_1(y)\) and \(h_2(y)\) be functions of \(x\) and \(y\), respectively. Then:
\(\displaystyle \int_c^d\int_{h_1(y)}^{h_2(y)} f(x,y) \,dx \,dy = \int_c^d\left(\int_{h_1(y)}^{h_2(y)} f(x,y) \,dx\right) \,dy.\) \(\displaystyle \int_a^b\int_{g_1(x)}^{g_2(x)} f(x,y) \,dy \,dx = \int_a^b\left(\int_{g_1(x)}^{g_2(x)} f(x,y) \,dy\right) \,dx.\)
Again make note of the bounds of these iterated integrals.
With \(\displaystyle \int_c^d\int_{h_1(y)}^{h_2(y)} f(x,y) \,dx \,dy\), \(x\) varies from \(h_1(y)\) to \(h_2(y)\), whereas \(y\) varies from \(c\) to \(d\). That is, the bounds of \(x\) are
curves, the curves \(x=h_1(y)\) and \(x=h_2(y)\), whereas the bounds of \(y\) are constants, \(y=c\) and \(y=d\). It is useful to remember that when setting up and evaluating such iterated integrals, we integrate "from curve to curve, then from point to point.''
We now begin to investigate
why we are interested in iterated integrals and what they mean. Area of a plane region
Consider the plane region \(R\) bounded by \(a\leq x\leq b\) and \(g_1(x)\leq y\leq g_2(x)\), shown in Figure \(\PageIndex{1}\). We learned in Section 7.1 (in Calculus I) that the area of \(R\) is given by
$$\int_a^b \big(g_2(x)-g_1(x)\big) \,dx.$$
Figure \(\PageIndex{1}\): Calculating the area of a plane region R with an iterated integral.
We can view the expression \(\big(g_2(x)-g_1(x)\big)\) as
$$\big(g_2(x)-g_1(x)\big) = \int_{g_1(x)}^{g_2(x)} 1 \,dy =\int_{g_1(x)}^{g_2(x)} \,dy,$$
meaning we can express the area of \(R\) as an iterated integral:
$$\text{area of }R = \int_a^b \big(g_2(x)-g_1(x)\big) \,dx = \int_a^b\left(\int_{g_1(x)}^{g_2(x)} \,dy\right) \,dx =\int_a^b\int_{g_1(x)}^{g_2(x)} \,dy \,dx.$$
In short: a certain iterated integral can be viewed as giving the area of a plane region.
A region \(R\) could also be defined by \(c\leq y\leq d\) and \(h_1(y)\leq x\leq h_2(y)\), as shown in Figure \(\PageIndex{2}\). Using a process similar to that above, we have
$$\text{the area of }R = \int_c^d\int_{h_1(y)}^{h_2(y)} \,dx \,dy.$$
Figure \(\PageIndex{2}\): Calculating the area of a plane region R with an iterated integral.
We state this formally in a theorem.
THEOREM \(\PageIndex{1}\): Area of a plane region
Let \(R\) be a plane region bounded by \(a\leq x\leq b\) and \(g_1(x)\leq y\leq g_2(x)\), where \(g_1\) and \(g_2\) are continuous functions on \([a,b]\). The area \(A\) of \(R\) is $$A = \int_a^b\int_{g_1(x)}^{g_2(x)} \,dy \,dx.$$ Let \(R\) be a plane region bounded by \(c\leq y\leq d\) and \(h_1(y)\leq x\leq h_2(y)\), where \(h_1\) and \(h_2\) are continuous functions on \([c,d]\). The area \(A\) of \(R\) is $$A = \int_c^d\int_{h_1(y)}^{h_2(y)} \,dx \,dy.$$
The following examples should help us understand this theorem.
Example \(\PageIndex{4}\): Area of a rectangle
Find the area \(A\) of the rectangle with corners \((-1,1)\) and \((3,3)\), as shown in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{3}\): Calculating the area of a rectangle with an iterated integral in Example \(\PageIndex{4}\).
Solution
Multiple integration is obviously overkill in this situation, but we proceed to establish its use.
The region \(R\) is bounded by \(x=-1\), \(x=3\), \(y=1\) and \(y=3\). Choosing to integrate with respect to \(y\) first, we have
$$A = \int_{-1}^3\int_1^3 1 \,dy \,dx = \int_{-1}^3 \left(y\ \Big|_1^3\right) \,dx = \int_{-1}^3 2 \,dx = 2x\Big|_{-1}^3=8.$$
We could also integrate with respect to \(x\) first, giving:
$$A = \int_1^3\int_{-1}^3 1 \,dx \,dy =\int_1^3 \left(x\ \Big|_{-1}^3\right) \,dy = \int_1^3 4 \,dy = 4y\Big|_1^3 = 8.$$
Clearly there are simpler ways to find this area, but it is interesting to note that this method works.
Example \(\PageIndex{5}\): Area of a triangle
Find the area \(A\) of the triangle with vertices at \((1,1)\), \((3,1)\) and \((5,5)\), as shown in Figure \(\PageIndex{4}\).
Figure \(\PageIndex{4}\): Calculating the area of a triangle with iterated integrals in Example \(\PageIndex{5}\).
SOLUTION
The triangle is bounded by the lines as shown in the figure. Choosing to integrate with respect to \(x\) first gives that \(x\) is bounded by \(x=y\) to \(x = \frac{y+5}2\), while \(y\) is bounded by \(y=1\) to \(y=5\). (Recall that since \(x\)-values increase from left to right, the leftmost curve, \(x=y\), is the lower bound and the rightmost curve, \(x=(y+5)/2\), is the upper bound.) The area is
\[\begin{align*}
A &= \int_1^5\int_{y}^{\frac{y+5}2} \,dx \,dy \\ &= \int_1^5\left(x\ \Big|_y^{\frac{y+5}2}\right) \,dy \\ &= \int_1^5 \left(-\frac12y+\frac52\right) \,dy \\ &= \left(-\frac14y^2+\frac52y\right)\Big|_1^5\\ &=4. \end{align*}\]
We can also find the area by integrating with respect to \(y\) first. In this situation, though, we have two functions that act as the lower bound for the region \(R\), \(y=1\) and \(y=2x-5\). This requires us to use two iterated integrals. Note how the \(x\)-bounds are different for each integral:
\[\begin{align*}
A &= \int_1^3\int_1^x 1 \,dy \,dx &+& & &\int_3^5\int_{2x-5}^x1 \,dy \,dx\\ &= \int_1^3\big(y\big)\Big|_1^x \,dx & + & & & \int_3^5\big(y\big)\Big|_{2x-5}^x \,dx\\ &= \int_1^3\big(x-1\big) \,dx & + & & & \int_3^5\big(-x+5\big) \,dx \\ &= 2 & + & & & 2 \\ &=4. \end{align*}\]
As expected, we get the same answer both ways.
Example \(\PageIndex{6}\): Area of a plane region
Find the area of the region enclosed by \(y=2x\) and \(y=x^2\), as shown in Figure \(\PageIndex{5}\).
Figure \(\PageIndex{5}\): Calculating the area of a plane region with iterated integrals in Example \(\PageIndex{6}\).
Solution
Once again we'll find the area of the region using both orders of integration.
Using \(\,dy \,dx\):
$$\int_0^2\int_{x^2}^{2x}1 \,dy \,dx = \int_0^2(2x-x^2) \,dx = \big(x^2-\frac13x^3\big)\Big|_0^2 = \frac43.$$
Using \(\,dx \,dy\):
$$\int_0^4\int_{y/2}^{\sqrt{y}} 1 \,dx \,dy = \int_0^4 (\sqrt{y}-y/2) \,dy = \left(\frac23y^{3/2} - \frac14y^2\right)\Big|_0^4 = \frac43.$$
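Both orders of integration can be verified numerically (my own sketch, not part of the text); here the inner integrals of 1 have already been carried out, leaving a single-variable integral for each order:

```python
import math

# Midpoint rule for a single integral of f over [a, b].
def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

area_dydx = midpoint(lambda x: 2 * x - x * x, 0.0, 2.0)           # inner dy done: 2x - x^2
area_dxdy = midpoint(lambda y: math.sqrt(y) - y / 2, 0.0, 4.0)    # inner dx done: sqrt(y) - y/2

print(area_dydx, area_dxdy)  # both ≈ 4/3
```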
Changing Order of Integration
In each of the previous examples, we have been given a region \(R\) and found the bounds needed to find the area of \(R\) using both orders of integration. We integrated using both orders of integration to demonstrate their equality.
We now approach the skill of describing a region using both orders of integration from a different perspective. Instead of starting with a region and creating iterated integrals, we will start with an iterated integral and rewrite it in the other integration order. To do so, we'll need to understand the region over which we are integrating.
The simplest of all cases is when both integrals are bound by constants. The region described by these bounds is a rectangle (see Example \(\PageIndex{4}\)), and so:
$$\int_a^b\int_c^d 1 \,dy \,dx = \int_c^d\int_a^b1 \,dx \,dy.$$
When the inner integral's bounds are not constants, it is generally very useful to sketch the bounds to determine what the region we are integrating over looks like. From the sketch we can then rewrite the integral with the other order of integration.
Examples will help us develop this skill.
Example \(\PageIndex{7}\): Changing the order of integration
Rewrite the iterated integral \(\displaystyle \int_0^6\int_0^{x/3} 1 \,dy \,dx\) with the order of integration \(\,dx \,dy\).
SOLUTION
We need to use the bounds of integration to determine the region we are integrating over.
The bounds tell us that \(y\) is bounded by \(0\) and \(x/3\); \(x\) is bounded by 0 and 6. We plot these four curves: \(y=0\), \(y=x/3\), \(x=0\) and \(x=6\) to find the region described by the bounds. Figure \(\PageIndex{6}\) shows these curves, indicating that \(R\) is a triangle.
Figure \(\PageIndex{6}\): Sketching the region R described by the iterated integral in Example \(\PageIndex{7}\).
To change the order of integration, we need to consider the curves that bound the \(x\)-values. We see that the lower bound is \(x=3y\) and the upper bound is \(x=6\). The bounds on \(y\) are \(0\) to \(2\). Thus we can rewrite the integral as \(\displaystyle \int_0^2\int_{3y}^6 1 \,dx \,dy.\)
Example \(\PageIndex{8}\): Changing the order of integration
Change the order of integration of \(\displaystyle \int_0^4\int_{y^2/4}^{(y+4)/2}1 \,dx \,dy\).
SOLUTION
We sketch the region described by the bounds to help us change the integration order. \(x\) is bounded below and above (i.e., to the left and right) by \(x=y^2/4\) and \(x=(y+4)/2\) respectively, and \(y\) is bounded between 0 and 4. Graphing the previous curves, we find the region \(R\) to be that shown in Figure \(\PageIndex{7}\).
Figure \(\PageIndex{7}\): Drawing the region determined by the bounds of integration in Example \(\PageIndex{8}\).
To change the order of integration, we need to establish curves that bound \(y\). The figure makes it clear that there are two lower bounds for \(y\): \(y=0\) on \(0\leq x\leq 2\), and \(y=2x-4\) on \(2\leq x\leq 4\). Thus we need two double integrals. The upper bound for each is \(y=2\sqrt{x}\). Thus we have
$$\int_0^4\int_{y^2/4}^{(y+4)/2}1 \,dx \,dy = \int_0^2\int_0^{2\sqrt{x}} 1 \,dy \,dx + \int_2^4\int_{2x-4}^{2\sqrt{x}}1 \,dy \,dx.$$
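As a check, the single \(dx\,dy\) integral and the sum of the two \(dy\,dx\) integrals can be evaluated symbolically; this SymPy sketch (a tool choice made here, not part of the text) confirms both give the same area:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Original order: x from y^2/4 to (y+4)/2, y from 0 to 4
lhs = sp.integrate(1, (x, y**2/4, (y + 4)/2), (y, 0, 4))

# Reversed order: the sum of the two dy dx integrals
rhs = (sp.integrate(1, (y, 0, 2*sp.sqrt(x)), (x, 0, 2))
       + sp.integrate(1, (y, 2*x - 4, 2*sp.sqrt(x)), (x, 2, 4)))
```

Both sides equal \(20/3\).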
This section has introduced a new concept, the iterated integral. We developed one application for iterated integration: area between curves. However, this is not new, for we already know how to find areas bounded by curves.
In the next section we apply iterated integration to solve problems we currently do not know how to handle. The "real" goal of this section was not to learn a new way of computing area. Rather, our goal was to learn how to define a region in the plane using the bounds of an iterated integral. That skill is very important in the following sections. |
Source Significance
In the first release of the CSC, detect and flux significance were related, since sources were accepted for inclusion in the catalog if their flux significance exceeded a given threshold.
In CSC 2.0, three separate metrics are used to describe the significance of detection and flux determination.
Detection Significance & Likelihood
The fundamental metric used to decide whether a source is included in CSC 2.0 is the likelihood,
$$L = -2\,\ln(P)$$
where P is the probability that an MLE fit of a point or extended source model, in a region containing no source, would yield a change in fit statistic as large as or larger than that observed, when compared to a fit to background only.
The likelihood is closely related to the probability, P_pois, that a Poisson distribution with the mean background in the source aperture would produce at least the number of counts observed in the aperture. The corresponding significance, called detect_significance, is also reported in CSC 2.0. Smoothed background maps are used to estimate the mean background, and detect_significance is expressed as the number of σ, z, at which a zero-mean, unit-standard-deviation Gaussian distribution yields an upper integral probability P_gaus, from z to ∞, equal to P_pois. That is,
$$P_{\mathrm{gaus}} = \int_{z}^{\infty} \frac{e^{-x^{2}/2}}{\sqrt{2\pi}}\, dx$$
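The conversion from P_pois to the Gaussian-equivalent significance z can be sketched in a few lines. This is an illustration using the Python standard library only; the function names, the observed counts, and the mean background below are hypothetical and are not taken from the catalog pipeline:

```python
import math

def poisson_upper_tail(n_obs, mu):
    """P_pois = P(N >= n_obs) for N ~ Poisson(mu), via the complement of the CDF."""
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))
    return 1.0 - cdf

def gaussian_sigma(p):
    """Find z such that the upper tail of a unit Gaussian, from z to infinity, equals p.

    Uses bisection on P_gaus(z) = 0.5 * erfc(z / sqrt(2)), which decreases in z.
    """
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid  # tail still too heavy: z must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical aperture: 15 observed counts over a mean background of 5 counts
p_pois = poisson_upper_tail(15, 5.0)
z = gaussian_sigma(p_pois)  # detect_significance, in Gaussian sigma
```

For these hypothetical numbers the Poisson tail probability is a few times 10⁻⁴, corresponding to roughly 3.5σ.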
detect_significance vs. likelihood
Flux Significance
Flux significance is a simple estimate of the ratio of the flux measurement to its average error, σ_e. The mode of the marginalized probability distribution for photflux_aper is used as the flux measurement, and the flux significance is the ratio of this mode to σ_e.
Table Columns
Master Sources Table: significance, likelihood
The maximum likelihood and flux significance across all stacked observations and energy bands are reported as the master source likelihood and significance.
Stacked Observation Sources Table: flux_significance, detect_significance, likelihood
Likelihood, detect significance, and flux significance are reported per band for all sources detected in the valid stack. The likelihood reported is the maximum of the likelihood determined from the MLE fit to all valid stack data, and the likelihoods from each individual observation, per band.
Per-Observation Sources Table: flux_significance, likelihood
Likelihood and flux significance are reported per band for all detected sources that fall in the valid field of view. Likelihoods are computed for each source detection in a stack, from MLE fits to data from all valid observations for the source. Likelihoods from each individual observation are also computed. |
The Dimension of The Null Space and Range Examples 1
Recall from The Dimension of The Null Space and Range page that if $T$ is a linear map from $V$ to $W$ and $V$ is finite-dimensional then we have the following formula relating the dimension of $V$ to the dimensions of the null space and range of $T$:(1)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T))$$
We will now look at some examples applying this formula.
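Before the abstract examples, the formula can be sanity-checked numerically. The sketch below uses NumPy (a tool choice made here, not part of the page) with a hypothetical matrix $A$ representing a linear map $T : \mathbb{R}^5 \to \mathbb{R}^3$:

```python
import numpy as np

# Hypothetical matrix for T: R^5 -> R^3; the third row is the sum of the first two,
# so dim(range(T)) = 2 and, by rank-nullity, dim(null(T)) = 5 - 2 = 3.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 0., 2.],
              [1., 3., 1., 1., 5.]])

rank = np.linalg.matrix_rank(A)   # dim(range(T))
nullity = A.shape[1] - rank       # dim(null(T)), from the dimension formula
```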
Example 1 Let $U$, $V$, and $W$ be finite-dimensional vector spaces, and suppose that $S \in \mathcal L (V, W)$ and $T \in \mathcal L (U, V)$. Let $ST \in \mathcal L (U, W)$. Show that $\mathrm{dim} (\mathrm{null} (ST)) ≤ \mathrm{dim} ( \mathrm{null} (S)) + \mathrm{dim} ( \mathrm{null} (T))$.
Using the dimension formula above applied to $ST : U \to W$ we have that:(2)
$$\mathrm{dim}(\mathrm{null}(ST)) = \mathrm{dim}(U) - \mathrm{dim}(\mathrm{range}(ST))$$
Since $\mathrm{dim} (U) = \mathrm{dim} (\mathrm{null} (T)) + \mathrm{dim} (\mathrm{range} (T))$ we can substitute this into the equation above to get that:(3)
$$\mathrm{dim}(\mathrm{null}(ST)) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{range}(ST))$$
Now notice that $\mathrm{range}(ST) = S(\mathrm{range}(T))$, so applying the dimension formula to the restriction of $S$ to the subspace $\mathrm{range}(T)$ gives:(4)
$$\mathrm{dim}(\mathrm{range}(ST)) = \mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{null}(S) \cap \mathrm{range}(T)) \geq \mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{null}(S))$$
Therefore $\mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{range}(ST)) \leq \mathrm{dim}(\mathrm{null}(S))$, and substituting this into equation (3) yields:(5)
$$\mathrm{dim}(\mathrm{null}(ST)) \leq \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{null}(S))$$
Example 2 Let $V$ be a finite-dimensional vector space and let $T \in \mathcal L (V, V)$. Suppose that $V = \mathrm{null}(T) + \mathrm{range}(T)$. Prove that $\mathrm{null} (T) \cap \mathrm{range} (T) = \{ 0 \}$.
If we apply the dimension formula given above, we have that:(7)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T))$$
Furthermore, from The Dimension of a Sum of Subspaces we have that:(8)
$$\mathrm{dim}(\mathrm{null}(T) + \mathrm{range}(T)) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{null}(T) \cap \mathrm{range}(T))$$
Since $V = \mathrm{null} (T) + \mathrm{range} (T)$, the left-hand sides of the two equations above are equal, and so:(9)
$$\mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) - \mathrm{dim}(\mathrm{null}(T) \cap \mathrm{range}(T))$$
The equation above implies that $\mathrm{dim} (\mathrm{null} (T) \cap \mathrm{range} (T)) = 0$, so $\mathrm{null} (T) \cap \mathrm{range} (T) = \{ 0 \}$. |
Can any polynomial $P\in \mathbb C[X]$ be written as $P=Q+R$ where $Q,R\in \mathbb C[X]$ have all their roots on the unit circle (that is to say, with magnitude exactly $1$)?
I don't think it's even trivial with degree-1 polynomials... In this supposedly simple case, with $P(X)=\alpha X + \beta$, this boils down to finding $\alpha_1$ and $\beta_1$ such that $|\alpha_1|=|\beta_1|$ and $|\alpha-\alpha_1|=|\beta-\beta_1|$. I can't prove that geometrically, let alone analytically...
Furthermore I don't think anything can be said about the sum of two polynomials with known roots...
Can someone give me some hints?
The Dimension of The Null Space and Range Examples 2
Recall from The Dimension of The Null Space and Range page that if $T$ is a linear map from $V$ to $W$ and $V$ is finite-dimensional then we have the following formula relating the dimension of $V$ to the dimensions of the null space and range of $T$:(1)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T))$$
We will now look at some examples applying this formula.
Example 1 Let $V$ be a finite-dimensional vector space and define a linear map $T \in \mathcal L (V, V)$ where $\mathrm{dim} (\mathrm{null} (T)) = 1$ and $\mathrm{dim} (\mathrm{range} (T)) = 4$.
By the dimension formula above, we must have that:(2)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) = 1 + 4 = 5$$
Let $\{ v_1, v_2, v_3, v_4, v_5 \}$ be a basis of $V$. Then for every vector $v \in V$ we have that $v = a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4 + a_5v_5$. Define the linear map $T$ by:(3)
$$T(v_1) = v_1, \quad T(v_2) = v_2, \quad T(v_3) = v_3, \quad T(v_4) = v_4, \quad T(v_5) = 0$$
Such a linear map exists as seen on the Linear Maps Defined by Bases page. We have that:(4)
$$T(v) = a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4$$
Notice that $T(v) = 0$ if and only if $a_1 = a_2 = a_3 = a_4 = 0$, and so:(5)
$$\mathrm{null}(T) = \{ a_5v_5 : a_5 \in \mathbb{F} \} = \mathrm{span}(v_5)$$
Since $\mathrm{null} (T) = \mathrm{span} (v_5)$ and $\mathrm{dim} (\mathrm{span} (v_5)) = 1$, we have $\mathrm{dim} (\mathrm{null} (T)) = 1$.
Now we will show that $\mathrm{dim} ( \mathrm{range} (T)) = 4$. Let $w \in \mathrm{range} (T)$. Then there exists a vector $v = a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4 + a_5v_5 \in V$ such that:(6)
$$w = T(v) = a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4$$
Thus we have that:(7)
$$\mathrm{range}(T) = \mathrm{span}(v_1, v_2, v_3, v_4)$$
Since $\{ v_1, v_2, v_3, v_4 \}$ is linearly independent, $\mathrm{dim} (\mathrm{span} (v_1, v_2, v_3, v_4)) = 4$, so $\mathrm{dim} (\mathrm{range} (T)) = 4$.
Example 2 Let $n \in \mathbb{N}$. Find a linear map $T : \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ where $\mathrm{dim} (\mathrm{null} (T)) = \mathrm{dim} (\mathrm{range} (T))$.
Note that $\mathrm{dim} (\mathbb{R}^{2n}) = 2n$. Let $\{ v_1, v_2, ..., v_n, v_{n+1}, v_{n+2}, ..., v_{2n} \}$ be a basis of $\mathbb{R}^{2n}$ and define $T$ by:(8)
$$T(v_i) = v_i \ \text{ for } 1 \leq i \leq n, \qquad T(v_j) = 0 \ \text{ for } n+1 \leq j \leq 2n$$
Now since $\{ v_1, v_2, ..., v_n, v_{n+1}, v_{n+2}, ..., v_{2n} \}$ is a basis of $\mathbb{R}^{2n}$, for every vector $v \in \mathbb{R}^{2n}$ there exist scalars $a_1, a_2, ..., a_{2n} \in \mathbb{R}$ such that:(9)
Apply the linear map $T$ to both sides of the equation above to get that:(10)
We then have that:(11)
As we can see, $\mathrm{null} (T) = \mathrm{span} (v_{n+1}, v_{n+2}, ..., v_{2n})$ and $\mathrm{dim} (\mathrm{span} (v_{n+1}, v_{n+2}, ..., v_{2n})) = n$.
Furthermore we have that:(12)
As we can see, $\mathrm{range} (T) = \mathrm{span} (v_1, v_2, ..., v_n)$ and $\mathrm{dim} (\mathrm{span} (v_1, v_2, ..., v_n)) = n$.
Therefore $T$ is a linear map such that $\mathrm{dim} (\mathrm{null} (T)) = n = \mathrm{dim} (\mathrm{range} (T))$.
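A concrete instance of the map constructed in Example 2 can be checked numerically for $n = 2$; this NumPy sketch (the library and the matrix representation are choices made here) represents $T$ as the projection onto the first $n$ standard coordinates:

```python
import numpy as np

n = 2  # so the space is R^(2n) = R^4

# T(v_i) = v_i for i <= n and T(v_j) = 0 for j > n, as a diagonal matrix
T = np.diag([1.0] * n + [0.0] * n)

rank = np.linalg.matrix_rank(T)   # dim(range(T)), expected to be n
nullity = 2 * n - rank            # dim(null(T)), expected to be n
```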
Example 3 Let $n \in \mathbb{N}$. Show that there exists no linear map $T \in \mathcal L ( \mathbb{R}^{2n-1}, \mathbb{R}^{2n-1})$ such that $\mathrm{dim} (\mathrm{null} (T)) = \mathrm{dim} (\mathrm{range} (T))$.
We see that $\mathrm{dim} (\mathbb{R}^{2n-1} ) = 2n - 1$, which is an odd number. Now suppose that $\mathrm{dim} (\mathrm{null} (T)) = \mathrm{dim} (\mathrm{range} (T))$. Then by the dimension formula from the top of this page, we have that:(13)
$$2n - 1 = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) = 2\,\mathrm{dim}(\mathrm{null}(T))$$
The equation above implies that $\mathrm{dim} (\mathrm{null} (T)) = \frac{2n - 1}{2}$; however, $\frac{2n - 1}{2}$ is not a whole number, and so there exists no linear map $T \in \mathcal L ( \mathbb{R}^{2n-1}, \mathbb{R}^{2n-1})$ such that $\mathrm{dim} (\mathrm{null} (T)) = \mathrm{dim} (\mathrm{range} (T))$.
Action functionals that attain regular minima in presence of energy gaps
1. EPFL, Chaire d’Analyse Mathématiques et Applications, CH-1015 Lausanne, Switzerland
We present three examples of action functionals for which
$$\inf\Big\{\int_a^b L(t,x,\dot x)\,dt : x\in W_0^{1,1}(a,b)\Big\} < \inf\Big\{\int_a^b L(t,x,\dot x)\,dt : x\in W_0^{1,\infty}(a,b)\Big\}$$
(where $W_0^{1,p}(a,b)$ denotes the usual Sobolev space with zero boundary conditions). In the first example the two infima are actually minima; in the second example the infimum in $W_0^{1,\infty}(a,b)$ is attained while the infimum in $W_0^{1,1}(a,b)$ is not; and in the third example neither infimum is attained. We also discuss how to construct energies with a gap between any two spaces, and energies with multiple gaps.
Keywords: Lavrentiev phenomenon, singular phenomena, regularity of minimizers, one-dimensional variational problems.
Mathematics Subject Classification: Primary: 49J30; Secondary: 49J45, 49N6.
Citation: Alessandro Ferriero. Action functionals that attain regular minima in presence of energy gaps. Discrete & Continuous Dynamical Systems - A, 2007, 19 (4) : 675-690. doi: 10.3934/dcds.2007.19.675
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The Dimension of The Null Space and Range Examples 3
Recall from The Dimension of The Null Space and Range page that if $T$ is a linear map from $V$ to $W$ and $V$ is finite-dimensional then we have the following formula relating the dimension of $V$ to the dimensions of the null space and range of $T$:(1)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T))$$
We will now look at some more examples applying this formula.
Example 1 Determine whether there exists a linear map $T \in \mathcal L (\mathbb{F}^6, \mathbb{F}^2)$ such that $\mathrm{null}(T) = \{ (x_1, x_2, x_3, x_4, x_5, x_6) \in \mathbb{F}^6 : x_1 = x_2 = x_3 \text{ and } x_5 = 2x_6 \}$.
Suppose that such a linear map exists. We note that $\mathrm{dim} (\mathbb{F}^6) = 6$. Furthermore, note that:(2)
$$\mathrm{null}(T) = \mathrm{span}\big((1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 0, 0), (0, 0, 0, 0, 2, 1)\big)$$
Since no smaller set of vectors can span $\mathrm{null} (T)$, we have $\mathrm{dim} (\mathrm{null} (T)) = 3$.
If we apply the dimension formula we see that:(3)
$$6 = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) = 3 + \mathrm{dim}(\mathrm{range}(T))$$
Therefore we must have that $\mathrm{dim} (\mathrm{range}(T)) = 3$. However, $\mathrm{range}(T)$ is a subspace of the codomain space $\mathbb{F}^2$ and $\mathrm{dim} (\mathbb{F}^2) = 2$, so any subspace $U$ of $\mathbb{F}^2$ must be such that $\mathrm{dim} (U) ≤ \mathrm{dim} (\mathbb{F}^2)$.
Therefore since $3 > 2$ we have a contradiction, and so there exists no linear map $T \in \mathcal L (\mathbb{F}^6, \mathbb{F}^2)$ such that $\mathrm{null}(T) = \{ (x_1, x_2, x_3, x_4, x_5, x_6) \in \mathbb{F}^6 : x_1 = x_2 = x_3 \text{ and } x_5 = 2x_6 \}$.
Example 2 Let $V$ and $W$ be finite-dimensional vector spaces. Prove that there exists a linear map $T \in \mathcal L(V, W)$ that is injective if and only if $\mathrm{dim} (V) ≤ \mathrm{dim} (W)$.
$\Rightarrow$ Suppose that $T$ is injective. Then $\mathrm{dim} (\mathrm{null} (T)) = 0$ and so by the dimension formula above we have that:(4)
$$\mathrm{dim}(V) = 0 + \mathrm{dim}(\mathrm{range}(T)) = \mathrm{dim}(\mathrm{range}(T))$$
Since $\mathrm{range}(T)$ is a subspace of $W$, we have $\mathrm{dim}(\mathrm{range}(T)) \leq \mathrm{dim}(W)$.
Therefore $\mathrm{dim} (V) ≤ \mathrm{dim} (W)$.
$\Leftarrow$ Now suppose that $\mathrm{dim} (V) = n ≤ m = \mathrm{dim} (W)$. Let $\{ v_1, v_2, ..., v_n \}$ be a basis of $V$ and let $\{ w_1, w_2, ..., w_m \}$ be a basis of $W$. We can define a linear map $T \in \mathcal L (V, W)$, extended by linearity, by:(5)
$$T(v_i) = w_i \quad \text{for } i = 1, 2, ..., n$$
For every vector $u = a_1v_1 + a_2v_2 + ... + a_nv_n \in V$ we have that:(6)
$$T(u) = a_1w_1 + a_2w_2 + ... + a_nw_n$$
Now suppose that $T(u) = T(v)$ where $v = b_1v_1 + b_2v_2 + ... + b_nv_n$. Then we have that:(7)
$$a_1w_1 + a_2w_2 + ... + a_nw_n = b_1w_1 + b_2w_2 + ... + b_nw_n$$
Since $\{w_1, w_2, ..., w_n \}$ is a linearly independent set of vectors (it is a subset of the basis $\{ w_1, w_2, ..., w_m \}$ of $W$) we must have that:(8)
$$(a_1 - b_1)w_1 + (a_2 - b_2)w_2 + ... + (a_n - b_n)w_n = 0$$
Therefore $a_1 = b_1$, $a_2 = b_2$, …, $a_n = b_n$, so $u = v$ and $T$ is injective.
Example 3 Let $V$ and $W$ be finite-dimensional vector spaces. Prove that there exists a linear map $T \in \mathcal L(V, W)$ that is surjective if and only if $\mathrm{dim} (V) ≥ \mathrm{dim} (W)$.
$\Rightarrow$ Suppose that $T$ is surjective. Then $\mathrm{range} (T) = W$ and so $\mathrm{dim} (\mathrm{range} (T)) = \mathrm{dim} (W)$. Using the dimension formula we have that:(9)
$$\mathrm{dim}(V) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(\mathrm{range}(T)) = \mathrm{dim}(\mathrm{null}(T)) + \mathrm{dim}(W)$$
Since $\mathrm{dim} (\mathrm{null} (T)) ≥ 0$ we must have that $\mathrm{dim} (V) ≥ \mathrm{dim} (W)$ as desired.
$\Leftarrow$ Now suppose that $\mathrm{dim} (V) = n ≥ m = \mathrm{dim} (W)$. Let $\{ v_1, v_2, ..., v_n\}$ be a basis of $V$ and let $\{ w_1, w_2, ..., w_m \}$ be a basis of $W$. Extend the set of vectors $\{ w_1, w_2, ..., w_m \}$ to $\{ w_1, w_2, ..., w_m, w_{m+1}, ..., w_n \}$. This set of vectors still spans $W$ but is no longer linearly independent.
Now define a linear map $T \in \mathcal L (V, W)$ by:(10)
$$T(a_1v_1 + a_2v_2 + ... + a_nv_n) = a_1w_1 + a_2w_2 + ... + a_nw_n$$
Notice that for any vector $w \in W$, we have that $w = b_1w_1 + b_2w_2 + ... + b_mw_m$ since $\{ w_1, w_2, ..., w_m \}$ is a basis of $W$. So for any vector $w \in W$, there exists a vector $v = b_1v_1 + b_2v_2 + ... + b_mv_m \in V$ such that:(11)
$$T(v) = b_1w_1 + b_2w_2 + ... + b_mw_m = w$$
Therefore $\mathrm{range} (T) = W$ and so $T$ is surjective. |
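The two constructions above (an injective map when $\mathrm{dim}(V) \leq \mathrm{dim}(W)$ and a surjective map when $\mathrm{dim}(V) \geq \mathrm{dim}(W)$) can be illustrated with explicit matrices; this NumPy sketch uses hypothetical example matrices chosen here, not taken from the page:

```python
import numpy as np

# Injective T: R^2 -> R^3, sending basis vectors to basis vectors
A = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
# T is injective iff null(T) = {0}, i.e. nullity = dim V - rank = 0
nullity_A = A.shape[1] - np.linalg.matrix_rank(A)

# Surjective S: R^3 -> R^2, projecting away the last coordinate
B = np.array([[1., 0., 0.],
              [0., 1., 0.]])
# S is surjective iff rank = dim W
rank_B = np.linalg.matrix_rank(B)
```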
We have been learning how we can understand the behavior of a function based on its first and second derivatives. While we have been treating the properties of a function separately (increasing and decreasing, concave up and concave down, etc.), we combine them here to produce an accurate graph of the function without plotting lots of extraneous points.
Why bother? Graphing utilities are very accessible, whether on a computer, a hand--held calculator, or a smartphone. These resources are usually very fast and accurate. We will see that our method is not particularly fast -- it will require time (but it is not hard). So again: why bother?
We are attempting to understand the behavior of a function \(f\) based on the information given by its derivatives. While all of a function's derivatives relay information about it, it turns out that "most" of the behavior we care about is explained by \(f'\) and \(f''\). Understanding the interactions between the graph of \(f\) and the graphs of \(f'\) and \(f''\) is important. To gain this understanding, one might argue that all that is needed is to look at lots of graphs. This is true to a point, but is somewhat similar to stating that one understands how an engine works after looking only at pictures. It is true that the basic ideas will be conveyed, but "hands--on'' access increases understanding.
The following Key Idea summarizes what we have learned so far that is applicable to sketching graphs of functions and gives a framework for putting that information together. It is followed by several examples.
Key Idea 4: Curve Sketching
To produce an accurate sketch of a given function \(f\), consider the following steps.
1. Find the domain of \(f\). Generally, we assume that the domain is the entire real line, then find restrictions, such as where a denominator is 0 or where negatives appear under the radical.
2. Find the critical values of \(f\).
3. Find the possible points of inflection of \(f\).
4. Find the location of any vertical asymptotes of \(f\) (usually done in conjunction with item 1 above).
5. Consider the limits \(\displaystyle \lim_{x\to-\infty}f(x)\) and \(\displaystyle \lim_{x\to\infty}f(x)\) to determine the end behavior of the function.
6. Create a number line that includes all critical points, possible points of inflection, and locations of vertical asymptotes. For each interval created, determine whether \(f\) is increasing or decreasing, concave up or down.
7. Evaluate \(f\) at each critical point and possible point of inflection. Plot these points on a set of axes. Connect these points with curves exhibiting the proper concavity. Sketch asymptotes and \(x\) and \(y\) intercepts where applicable.
Example \(\PageIndex{1}\): curve sketching
Use Key Idea 4 to sketch \(f(x) = 3x^3-10x^2+7x+5\).
Solution
1. The domain of \(f\) is the entire real line; there are no values \(x\) for which \(f(x)\) is not defined.
2. Find the critical values of \(f\). We compute \(f'(x) = 9x^2-20x+7\). Use the Quadratic Formula to find the roots of \(f'\): $$x = \frac{20\pm \sqrt{(-20)^2-4(9)(7)}}{2(9)} = \frac19\left(10\pm\sqrt{37}\right) \Rightarrow x\approx 0.435,\ 1.787.$$
3. Find the possible points of inflection of \(f\). Compute \(f''(x) = 18x-20\). We have $$f''(x) = 0 \Rightarrow x= 10/9 \approx 1.111.$$
4. There are no vertical asymptotes.
5. We determine the end behavior using limits as \(x\) approaches \(\pm\infty\). $$\lim_{x\to -\infty} f(x) = -\infty \qquad \lim_{x\to \infty}f(x) = \infty.$$ We do not have any horizontal asymptotes.
6. We place the values \(x=(10\pm\sqrt{37})/9\) and \(x=10/9\) on a number line, as shown in Figure \(\PageIndex{1}\). We mark each subinterval as increasing or decreasing, concave up or down, using the techniques used in Sections 3.3 and 3.4.
Figure \(\PageIndex{1}\): Number line for \(f\) in Example \(\PageIndex{1}\).
7. We plot the appropriate points on axes as shown in Figure \(\PageIndex{2a}\) and connect the points with straight lines. In Figure \(\PageIndex{2b}\) we adjust these lines to demonstrate the proper concavity. Our curve crosses the \(y\) axis at \(y=5\) and crosses the \(x\) axis near \(x=-0.424\). In Figure \(\PageIndex{2c}\) we show a graph of \(f\) drawn with a computer program, verifying the accuracy of our sketch.
Figure \(\PageIndex{2}\): Sketching \(f\) in Example \(\PageIndex{1}\).
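The critical values and point of inflection found above can be double-checked symbolically; the following SymPy sketch (a tool choice made here, not part of the text) recovers the same numbers:

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**3 - 10*x**2 + 7*x + 5

# Roots of f'(x) = 9x^2 - 20x + 7: the critical values (10 ± sqrt(37))/9
critical = sorted(float(r) for r in sp.solve(sp.diff(f, x), x))

# Root of f''(x) = 18x - 20: the possible point of inflection, 10/9
inflection = sp.solve(sp.diff(f, x, 2), x)
```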
Example \(\PageIndex{2}\): Curve sketching
Sketch \(f(x) = \dfrac{x^2-x-2}{x^2-x-6}\).
Solution
We again follow the steps outlined in Key Idea 4.
1. In determining the domain, we assume it is all real numbers and look for restrictions. We find that at \(x=-2\) and \(x=3\), \(f(x)\) is not defined. So the domain of \(f\) is \(D = \{\text{real numbers } x\ | \ x\neq -2,3\}\).
2. To find the critical values of \(f\), we first find \(f'(x)\). Using the Quotient Rule, we find $$f'(x) = \frac{-8x+4}{(x^2-x-6)^2} = \frac{-8x+4}{(x-3)^2(x+2)^2}.$$ \(f'(x) = 0\) when \(x = 1/2\), and \(f'\) is undefined when \(x=-2,3\). Since \(f'\) is undefined only where \(f\) is, those points are not critical values. The only critical value is \(x=1/2\).
3. To find the possible points of inflection, we find \(f''(x)\), again employing the Quotient Rule: $$f''(x) = \frac{24x^2-24x+56}{(x-3)^3(x+2)^3}.$$ We find that \(f''(x)\) is never 0 (setting the numerator equal to 0 and solving for \(x\), we find the only roots to this quadratic are imaginary) and \(f''\) is undefined when \(x=-2,3\). Thus concavity will possibly only change at \(x=-2\) and \(x=3\).
4. The vertical asymptotes of \(f\) are at \(x=-2\) and \(x=3\), the places where \(f\) is undefined.
5. There is a horizontal asymptote of \(y=1\), as \(\lim_{x\to -\infty}f(x) = 1\) and \(\lim_{x\to\infty}f(x) =1\).
6. We place the values \(x=1/2\), \(x=-2\) and \(x=3\) on a number line as shown in Figure \(\PageIndex{3}\). We mark in each interval whether \(f\) is increasing or decreasing, concave up or down. We see that \(f\) has a relative maximum at \(x=1/2\); concavity changes only at the vertical asymptotes.
Figure \(\PageIndex{3}\): Number line for \(f\) in Example \(\PageIndex{2}\).
7. In Figure \(\PageIndex{4a}\), we plot the points from the number line on a set of axes and connect the points with straight lines to get a general idea of what the function looks like (these lines effectively only convey increasing/decreasing information). In Figure \(\PageIndex{4b}\), we adjust the graph with the appropriate concavity. We also show \(f\) crossing the \(x\) axis at \(x=-1\) and \(x=2\).
Figure \(\PageIndex{4}\): Sketching \(f\) in Example \(\PageIndex{2}\).
Figure \(\PageIndex{4c}\) shows a computer generated graph of \(f\), which verifies the accuracy of our sketch.
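The quotient-rule computations in this example can be verified symbolically; the SymPy sketch below (a tool choice made here, not part of the text) checks the derivative, the critical value, and the horizontal asymptote:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - x - 2) / (x**2 - x - 6)

fp = sp.simplify(sp.diff(f, x))

critical = sp.solve(sp.Eq(fp, 0), x)   # the single critical value x = 1/2
horizontal = sp.limit(f, x, sp.oo)     # the horizontal asymptote y = 1
```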
Example \(\PageIndex{3}\): Curve sketching
Sketch \(f(x) = \frac{5(x-2)(x+1)}{x^2+2x+4}.\)
Solution
We again follow Key Idea 4.
1. We assume that the domain of \(f\) is all real numbers and consider restrictions. The only restrictions could come when the denominator is 0, but this never occurs. Therefore the domain of \(f\) is all real numbers, \(\mathbb{R}\).
2. We find the critical values of \(f\) by setting \(f'(x)=0\) and solving for \(x\). We find $$f'(x) = \frac{15x(x+4)}{(x^2+2x+4)^2} \quad \Rightarrow \quad f'(x) = 0 \text{ when } \ x=-4,0.$$
3. We find the possible points of inflection by solving \(f''(x) = 0\) for \(x\). We find $$f''(x) = -\frac{30x^3+180x^2-240}{(x^2+2x+4)^3} .$$ The cubic in the numerator does not factor very "nicely." We instead approximate the roots at \(x= -5.759\), \(x=-1.305\) and \(x=1.064\).
4. There are no vertical asymptotes.
5. We have a horizontal asymptote of \(y=5\), as \(\lim_{x\to-\infty}f(x) = \lim_{x\to\infty}f(x) = 5\).
6. We place the critical points and possible points of inflection on a number line as shown in Figure \(\PageIndex{5}\) and mark each interval as increasing/decreasing, concave up/down appropriately.
Figure \(\PageIndex{5}\): Number line for \(f\) in Example \(\PageIndex{3}\).
7. In Figure \(\PageIndex{6a}\) we plot the significant points from the number line as well as the two roots of \(f\), \(x=-1\) and \(x=2\), and connect the points with straight lines to get a general impression about the graph. In Figure \(\PageIndex{6b}\), we add concavity. Figure \(\PageIndex{6c}\) shows a computer generated graph of \(f\), affirming our results.
Figure \(\PageIndex{6}\): Sketching \(f\) in Example \(\PageIndex{3}\).
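Since the cubic in the numerator of \(f''\) does not factor nicely, its roots were approximated numerically; the SymPy sketch below (a tool choice made here, not part of the text) recovers both the critical values and those approximate roots:

```python
import sympy as sp

x = sp.symbols('x')
f = 5*(x - 2)*(x + 1) / (x**2 + 2*x + 4)

# Critical values: zeros of f'(x) = 15x(x+4)/(x^2+2x+4)^2
critical = sorted(sp.solve(sp.diff(f, x), x))

# Possible inflection points: real roots of the cubic 30x^3 + 180x^2 - 240
roots = sorted(float(r) for r in sp.real_roots(30*x**3 + 180*x**2 - 240))
```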
In each of our examples, we found a few, significant points on the graph of \(f\) that corresponded to changes in increasing/decreasing or concavity. We connected these points with straight lines, then adjusted for concavity, and finished by showing a very accurate, computer generated graph.
Why are computer graphics so good? It is not because computers are "smarter" than we are. Rather, it is largely because computers are much faster at computing than we are. In general, computers graph functions much like most students do when first learning to draw graphs: they plot equally spaced points, then connect the dots using lines. By using lots of points, the connecting lines are short and the graph looks smooth.
This does a fine job of graphing in most cases (in fact, this is the method used for many graphs in this text). However, in regions where the graph is very "curvy," this can generate noticeable sharp edges on the graph unless a large number of points are used. High quality computer algebra systems, such as
Mathematica, use special algorithms to plot lots of points only where the graph is "curvy.''
In Figure \(\PageIndex{7}\), a graph of \(y=\sin x\) is given, generated by
Mathematica. The small points represent each of the places Mathematica sampled the function. Notice how at the "bends" of \(\sin x\), lots of points are used; where \(\sin x\) is relatively straight, fewer points are used. (Many points are also used at the endpoints to ensure the "end behavior" is accurate.)
Figure \(\PageIndex{7}\): A graph of \(y=\sin x\) generated by Mathematica.
How does
Mathematica know where the graph is "curvy"? Calculus. When we study curvature in a later chapter, we will see how the first and second derivatives of a function work together to provide a measurement of "curviness." Mathematica employs algorithms to determine regions of "high curvature"' and plots extra points there.
Again, the goal of this section is not "How to graph a function when there is no computer to help.'' Rather, the goal is "Understand that the shape of the graph of a function is largely determined by understanding the behavior of the function at a few key places." In Example \(\PageIndex{3}\), we were able to accurately sketch a complicated graph using only 5 points and knowledge of asymptotes!
There are many applications of our understanding of derivatives beyond curve sketching. The next chapter explores some of these applications, demonstrating just a few kinds of problems that can be solved with a basic knowledge of differentiation.
Contributors
Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/
Integrated by Justin Marshall. |
This answer partially disagrees with Motl's. The crucial point is the difference between the abelian and the non-abelian case. I totally agree with Motl's answer in the non-abelian case, where these identities are usually denominated Slavnov-Taylor's rather than Ward's, so I will refer to the abelian case.
First, a few words about terminology: Ward identities are the quantum counterpart to (first and second) Noether's theorem in classical physics. They apply to both global and gauge symmetries. However, the term is often reserved for the $U(1)$ gauge symmetry in QED. In the case of gauge symmetries, Ward identities yield real identities, such as $k^{\mu}\mathcal M_{\mu}=0$, where $\mathcal M_{\mu}$ is defined by $\mathcal M=\epsilon_{\mu}\,\mathcal M^{\mu}$, in QED, that tell us that photon's polarizations parallel to photon's propagation don't contribute to scattering amplitudes. In the case of global symmetries, however, Ward identities reflect properties of the theory. For example, the S-matrix of a Lorentz invariant theory is also Lorentz invariant or the number of particles minus antiparticles in the initial state is the same as in the final state in a theory with a global (independent of the point in space-time) $U(1)$ phase invariance.
Let's study the case of a massive vectorial field minimally coupled to a conserved current:
$$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}A^2+i\,\bar\Psi\displaystyle{\not}D\, \Psi - m\,\bar\Psi\Psi \\=-{1\over 4}\,F^2+{a^2\over 2}A^2+i\,\bar\Psi\displaystyle{\not}\partial \, \Psi - m\,\bar\Psi\Psi-e\,A_{\mu}\,j^{\mu}$$
Note that this theory has a global phase invariance $\Psi\rightarrow e^{-i\theta}\,\Psi$, with a Noether current
$$j^{\mu}={\bar\Psi\, \gamma^{\mu}}\,\Psi$$
such that (classically) $\partial_{\mu}\,j^{\mu}=0$. Apart from this symmetry, it is well-known that the Lagrangian above is equivalent to a theory that: i) doesn't have an explicit mass term for the vectorial field; and ii) contains a scalar field (a Higgs-like field) with a nonzero vacuum expectation value, which spontaneously breaks a $U(1)$ gauge symmetry (this symmetry is
not the gauged $U(1)$ global symmetry mentioned previously). The equivalence is in the limit where the vacuum expectation value goes to infinity and the coupling between the vectorial field and the Higgs-like scalar goes to zero. Since one has to take this last limit, the charge cannot be quantized and therefore the $U(1)$ gauge symmetry must be topologically equivalent to the addition of real numbers rather than the multiplication of complex numbers with unit modulus (a circumference). The difference between both groups is only topological (does this mean then that the difference is irrelevant in the following?). This mechanism is due to Stueckelberg and I will summarize it at the end of this answer.
In a process in which there is a massive vectorial particle in the initial or final state, the LSZ reduction formula gives:
$$\langle i\,|\,f \rangle\sim \epsilon _{\mu}\int d^4x\,e^{-ik\cdot x}\, \left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)...\langle 0|\mathcal{T}A_{\nu}(x)...|0\rangle$$
From the Lagrangian above, the following classical equations of motion may be obtained
$$\left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)A_{\nu}=ej^{\mu}$$
Then, quantumly,
$$\left(\eta^{\mu\nu}(\partial ^2-a^2)-\partial^{\mu}\partial^{\nu}\right)\langle 0|\mathcal{T}A_{\nu}(x)...|0\rangle = e\,\langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle + \text{contact terms, which don't contribute to the S-matrix}$$
And therefore
$$\langle i\,|\,f \rangle\sim \epsilon _{\mu}\int d^4x\,e^{-ik\cdot x}\,...\langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle +\text{contact terms, which don't contribute}\sim \epsilon_{\mu}\mathcal{M}^{\mu}$$
If one replaces $\epsilon_{\mu}$ with $k_{\mu}$, one obtains
$$k_{\mu}\mathcal{M}^{\mu}\sim k _{\mu}\int d^4x\,e^{-ik\cdot x}\,...\langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle$$
Making use of $k_{\mu}\,e^{-ik\cdot x}\sim \partial_{\mu}\,e^{-ik\cdot x}$, integrating by parts, and getting rid of the surface term (the plane wave is an idealization; what one actually has is a wave packet that goes to zero at spatial infinity), one gets
$$k_{\mu}\mathcal{M}^{\mu}\sim \int d^4x\,e^{-ik\cdot x}\,...\, \partial_{\mu}\,\langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle$$
One can now use the Ward identity for the
global $\Psi\rightarrow e^{-i\theta}\,\Psi$ symmetry (classically $\partial_{\mu}\,j^{\mu}=0$ over solutions of the matter, $\Psi$, equations of motion)
$$\partial_{\mu}\, \langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle = \text{contact terms, which don't contribute to the S-matrix}$$
And hence
$$k_{\mu}\mathcal M^{\mu}=0$$
same as in the massless case.
Note that in this derivation it has been crucial that the explicit mass term for the vector field doesn't break the global $U(1)$ symmetry. This is also related to the fact that the explicit mass term for the vector field can be obtained through a Higgs-like mechanism associated with a hidden $U(1)$ gauge symmetry (hidden in the sense that the Higgs-like field decouples from the rest of the theory).
A more careful calculation should include counterterms in the interacting theory; however, I think this works the same way as in the massless case. We can think of the fields and parameters in this answer as bare fields and parameters.
Stueckelberg mechanism
Consider the following Lagrangian
$$\mathcal L=-{1\over 4}\,F^2+|d\phi|^2+\mu^2\,|\phi|^2-\lambda\, (\phi^*\phi)^2$$
where $d=\partial - ig\, B$ and $F$ is the field strength (Faraday tensor) for $B$. This Lagrangian is invariant under the gauge transformation
$$B\rightarrow B + (1/g)\partial \alpha (x)$$$$\phi\rightarrow e^{i\alpha(x)}\phi$$
Let's take a polar parametrization for the scalar field $\phi$: $\phi\equiv {1\over \sqrt{2}}\rho\,e^{i\chi}$, thus
$$\mathcal L=-{1\over 4}\,F^2+{1\over 2}\rho^2\,(\partial_{\mu}\chi-g\,B_{\mu})^2+{1\over 2}(\partial \rho)^2+{\mu^2\over 2}\,\rho ^2- {\lambda\over 4}\rho^4$$
We may now make the following field redefinition $A\equiv B - (1/g)\partial \chi$ and noting that $F_{\mu\nu}=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is also the field strength for $A$
$$\mathcal L=-{1\over 4}\,F^2+{g^2\over 2}\rho^2\,A^2+{1\over 2}(\partial \rho)^2+{\mu^2\over 2}\,\rho ^2-{\lambda\over 4}\, \rho^4$$
If $\rho$ has a vacuum expectation value different from zero $\langle 0|\rho |0\rangle = v=\sqrt{\mu^2\over \lambda}$, it is then convenient to write $\rho (x)=v+\omega (x)$. Thus
$$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2+g^2\,v\,\omega\,A^2+{g^2\over 2}\,\omega ^2\,A^2+{1\over 2}(\partial \omega)^2-\mu^2\,\omega ^2-\lambda\,v\omega^3-{\lambda\over 4}\, \omega^4+{v^4\,\lambda\over 4}$$
where $a\equiv g\times v$. If we now take the limit $g\rightarrow 0$, $v\rightarrow \infty$, keeping the product, $a$, constant, we get
$$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2+{1\over 2}(\partial \omega)^2-\mu^2\,\omega ^2-\lambda\,v\omega^3-{\lambda\over 4}\, \omega^4+{v^4\,\lambda\over 4}$$
that is, all the interaction terms between $A$ and $\omega$ disappear, so that $\omega$ becomes a self-interacting field with infinite mass that is decoupled from the rest of the theory, and therefore it doesn't play any role. Thus, we recover the massive vector field with which we started:
$$\mathcal L=-{1\over 4}\,F^2+{a^2\over 2}\,A^2$$
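The expansion of the scalar potential about $\rho = v$ can be cross-checked with sympy (a sketch added by me, not part of the original answer; $V$ below collects the non-derivative scalar terms of $-\mathcal L$, so its $\omega^2$ coefficient is the $\omega$ mass term):

```python
import sympy as sp

v, w, mu = sp.symbols('v omega mu', positive=True)

# Potential V(rho) = -(mu^2/2) rho^2 + (lambda/4) rho^4, expanded about the
# minimum rho = v + omega, with lambda fixed by v = sqrt(mu^2/lambda).
lam = mu**2 / v**2
V = sp.expand(-(mu**2 / 2) * (v + w)**2 + (lam / 4) * (v + w)**4)

assert sp.simplify(V.coeff(w, 1)) == 0                    # no tadpole at the minimum
assert sp.simplify(V.coeff(w, 2) - mu**2) == 0            # omega mass term
assert sp.simplify(V.coeff(w, 3) - lam * v) == 0          # cubic coupling lambda*v
assert sp.simplify(V.coeff(w, 0) + lam * v**4 / 4) == 0   # constant -lambda*v^4/4
```

In particular, the $\omega$ quadratic term in $\mathcal L$ comes out as $-\mu^2\,\omega^2$ and the constant as $+\lambda v^4/4$.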
Note that in a non-abelian gauge theory there must be non-linear terms such as $\sim g A^2\,\partial A\;$, $\sim g^2 A^4$, which prevent us from taking the limit $g\rightarrow 0$.

This post imported from StackExchange Physics at 2014-03-31 22:25 (UCT), posted by SE-user drake |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
I am reading some lecture notes that unfortunately don't seem to be available online, but that are quite close in spirit in their treatment of the Dirac equation to Sakurai's "Advanced Quantum Mechanics". At some point the $4\times4$ matrices $\Sigma_k$ are introduced as infinitesimal generators of the action of rotations on Dirac spinors; to be precise, a spatial rotation given by a vector $\theta$, whose direction is the axis and whose length the angle of rotation, acts as $\exp(-\frac i2\theta\cdot\Sigma)$. Since these $\Sigma_k$ are block-diagonal with the Pauli matrices $\sigma_k$ on the diagonal, we already see that the components of Dirac spinors pairwise transform just like spin-1/2 states under rotations.
It is clear that the $\Sigma_k$ formally behave like angular momentum operators (commutation relations), but in these notes the $\Sigma_k$ arose purely as generators of rotations and not as observables (this led to this other question on the dual role of Hermitian operators in quantum mechanics).
It can be seen that the orbital angular momentum $L$ is
not a conserved quantity, but $J = L + \frac12\Sigma$ is ($\hbar$ is set to 1). We have $\left|\frac12\Sigma\right|^2 = \frac34 = \frac12\left(\frac12+1\right)$, which corresponds to an (abstract) angular momentum quantum number $s = \frac12$.
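The two formal facts used here — the angular momentum commutation relations and $\left|\frac12\Sigma\right|^2=\frac34$ — can be checked directly in a concrete representation (a quick numpy sketch; I take $\Sigma_k=\operatorname{diag}(\sigma_k,\sigma_k)$ as stated above):

```python
import numpy as np

# Pauli matrices and the block-diagonal Dirac rotation generators Sigma_k
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Sigma = [np.kron(np.eye(2), s) for s in sigma]   # diag(sigma_k, sigma_k)

# Angular momentum algebra: [S_i, S_j] = i eps_ijk S_k for S = Sigma/2
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    Si, Sj, Sk = Sigma[i] / 2, Sigma[j] / 2, Sigma[k] / 2
    assert np.allclose(Si @ Sj - Sj @ Si, 1j * Sk)

# |Sigma/2|^2 = (3/4) * identity, i.e. s(s+1) with s = 1/2
S2 = sum((Sg / 2) @ (Sg / 2) for Sg in Sigma)
assert np.allclose(S2, 0.75 * np.eye(4))
```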
According to the notes, this should be interpreted as meaning that a particle described by the Dirac equation must have spin $1/2$.
My question is how exactly this conclusion follows. I can think of three possible reasons from which it might follow on physical grounds:
Is it because for physical reasons we expect the total angular momentum to be conserved, and $\frac12\Sigma$, which formally behaves like an angular momentum is exactly the missing quantity and can therefore be interpreted as an intrinsic angular momentum?
Is it because the transformation of a Dirac spinor under rotations is similar to that of spin 1/2 particles as identified in an ad-hoc way before the Dirac equation?
Does it have to do with the fact that $\Sigma$ is the infinitesimal generator of rotations on the finite-dimensional "tensor factor" of the state space, which can be thought of as corresponding to "non-classical degrees of freedom"?
Is any of these facts a compelling reason to draw the conclusion that a particle described by the Dirac equation has spin 1/2? If so, why? |
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \ldots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1}-2$ non-constant polynomials in $R$ dividing it.
But, for $n=2$, I can't find any other non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
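Not an answer, but the pigeonhole idea behind the puzzle (classify prefix products by each face's multiplicity mod 3; there are only $3^6 = 729$ classes against 1001 prefixes) can be illustrated by brute force — a sketch with arbitrarily chosen positive faces:

```python
import random

def cube_interval(rolls, faces):
    """Track each face's count mod 3 over prefixes of the roll sequence.

    Two prefixes with equal signatures bound an interval in which every
    face occurs a multiple-of-3 times, so the interval product is a cube.
    """
    index = {f: i for i, f in enumerate(faces)}
    seen = {(0,) * len(faces): 0}            # signature of the empty prefix
    counts = [0] * len(faces)
    for i, r in enumerate(rolls, start=1):
        counts[index[r]] = (counts[index[r]] + 1) % 3
        sig = tuple(counts)
        if sig in seen:
            return seen[sig] + 1, i          # 1-based interval [lo, hi]
        seen[sig] = i

def icbrt(n):
    """Integer cube root by binary search (floats overflow on huge ints)."""
    lo, hi = 0, 1
    while hi ** 3 < n:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

random.seed(0)
faces = [2, 3, 5, 6, 7, 10]                  # any six integers work the same way
rolls = [random.choice(faces) for _ in range(1000)]
lo, hi = cube_interval(rolls, faces)
prod = 1
for r in rolls[lo - 1:hi]:
    prod *= r
assert icbrt(prod) ** 3 == prod              # the interval product is a cube
```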
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it hasn't been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
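fwiw the whole check fits in a few lines (a throwaway script; the function name is mine):

```python
def overtake_hours(v_lead, v_chase, head_start_h):
    """Hours after the lead train departs until the chaser catches it.

    Solve v_lead * t = v_chase * (t - head_start_h) for t.
    """
    return v_chase * head_start_h / (v_chase - v_lead)

t = overtake_hours(40, 60, 2)
assert t == 6.0                    # noon + 6 h, i.e. 6 pm, not 4 pm
assert 40 * t == 60 * (t - 2)      # both trains at 240 miles
```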
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term $e^{(r_1-r_2)x}$ come from?
It seems like it is taken out of the blue, but it yields the desired result. |
Possible Duplicate: Density of a Set on $\mathbb{R}$?
I have to show that $A=\{ \frac{m}{2^n}:m\in \mathbb {Z},n\in \mathbb {N}\} $ is dense in $\mathbb {R}$.
A set A is dense in $\mathbb {R}$ if $\overline A=\mathbb {R}$.
Alternatively, if $Y$ is a subset of $X$, we say that $Y$ is dense in $X$ if for every $x\in X$ there is a $y \in Y$ that is arbitrarily close to $x$.
So, I have to prove that for every $x \in \mathbb {R}$, there is a number $\frac{m}{2^n}$ arbitrarily close to $x$. That is, $\forall \epsilon>0$ and $\forall x$, $\exists y$ such that $|y-x|<\epsilon$.
I got a little stuck at this point... Could anyone give me a hint? Thanks a lot! |
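The standard construction (pick $n$ with $2^{-n}<\epsilon$, then take $m=\lfloor x\cdot 2^n\rfloor$) can be sanity-checked numerically — a sketch I'm adding, not a proof:

```python
import math
import random

def dyadic_near(x, eps):
    """Return (m, n) with |m/2^n - x| < eps.

    Since 0 <= x*2**n - floor(x*2**n) < 1, the error is below 2**-n,
    and n is chosen so that 2**-n < eps.
    """
    n = max(1, math.ceil(-math.log2(eps)) + 1)
    m = math.floor(x * 2 ** n)
    return m, n

rng = random.Random(1)
for _ in range(1000):
    x = rng.uniform(-100, 100)
    eps = rng.uniform(1e-9, 1.0)
    m, n = dyadic_near(x, eps)
    assert abs(m / 2 ** n - x) < eps
```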
I'm trying to understand the BRST complex in its Lagrangian incarnation, i.e. in the form closest to the original Faddeev-Popov formulation. It looks like the most important part of that construction (the proof of vanishing of the higher cohomology groups) is very hard to find in the literature; at least I was not able to find it. Let me formulate a couple of questions on BRST, but in the form of exercises on Lie algebra cohomology.
Let $X$ be a smooth affine variety, and let $g$ be a (reductive?) Lie algebra acting on $X$. I think we assume $g$ to be at least unimodular, otherwise the BRST construction won't work, and we also assume that the map $g \to T_X$ is injective. In physics language this is a closed and irreducible action of the Lie algebra of a gauge group on the space of fields $X$. The structure sheaf $\mathcal{O}_X$ is a module over $g$, and I can form the Chevalley-Eilenberg complex with coefficients in this module$$C=\wedge g^* \otimes \mathcal{O}_X.$$
The ultimate goal of the BRST construction is to provide a "free model" of the algebra of invariants $\mathcal{O}_X^g$. It is not clear what a "free model" is, but I think the BRST construction is just Tate's procedure of killing cycles applied to the Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra).
My first question is: what exactly is the cohomology of the complex $C$? In other words, before killing the cohomology I'd like to understand what exactly has to be killed. To me this looks like a classical question on Lie algebra cohomology and, perhaps, it was discussed in the literature 60 years ago.
It is not necessary to calculate these cohomology groups first and then follow Tate's approach to construct the complete BRST complex (complete means I have added anti-ghosts and Lagrange multipliers to $C$ and modified the differential), but even if I start with the BRST complex$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g), d_{BRST}=d_{CE}+d_1),$$where could I find a proof that all higher cohomology vanishes?

This post imported from StackExchange MathOverflow at 2014-08-15 09:41 (UCT), posted by SE-user Sasha Pavlov |
In my post Trigonometry Yoga, I discussed how defining sine and cosine as lengths of segments in a unit circle helps develop intuition for these functions.
I learned the circle definitions of sine and cosine in my junior year of high school, in the class that would now be called pre-calculus (it was called “Trig Senior Math”). Two years earlier, I’d learned the triangle definitions of sine, cosine, and tangent in geometry class. I don’t remember any of my teachers ever mentioning a circle definition of the tangent function.
The geometric definition of the tangent function, which predates the triangle definition, is the length of a segment tangent to the unit circle. The tangent really is a tangent! Just as for sine and cosine, this one-variable definition helps develop intuition. Here is the definition, followed by an applet to help you get a feel for it:
Let OA be a radius of the unit circle, let B = (1,0), and let \( \theta =\angle BOA\). Let C be the intersection of \(\overrightarrow{OA}\) and the line x=1, i.e. the tangent to the unit circle at B. Then \(\tan \theta\) is the y-coordinate of C, i.e. the signed length of segment BC.
Move the blue point below; the tangent is the length of the red segment. (If a label is getting in the way, right click and toggle “show label” from the menu).
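If the applet isn't available, the construction can also be checked numerically (a small sketch; for angles in the second and third quadrants the intersection lies on the line through O and A extended beyond O):

```python
import math

def tan_via_unit_circle(theta_deg):
    """y-coordinate of C, the intersection of line OA with the tangent line x = 1."""
    t = math.radians(theta_deg)
    ax, ay = math.cos(t), math.sin(t)   # A = (cos t, sin t) on the unit circle
    s = 1 / ax                          # scale so the x-coordinate reaches 1
    return s * ay                       # = sin t / cos t

for deg in (0, 30, 45, 60, 135, -45, 200):
    assert math.isclose(tan_via_unit_circle(deg),
                        math.tan(math.radians(deg)), abs_tol=1e-12)
```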
The circle definition of the tangent function leads to geometric illustrations of many standard properties and identities. (If this were my class, I would stop here and tell you to explore on your own and with others).
Some things to notice:
\(\left| \tan \theta \right|\) gets big as \(\theta\) approaches \(\pm 90{}^\circ \).
\(\tan (\pm 90{}^\circ)\) is undefined, because at these angles, \(\overline{OA}\) is parallel to x=1, so the two lines don’t intersect, and point C doesn’t exist.
As \(\theta\) increases toward \(90{}^\circ \), \(\tan \theta\) tends toward \(+\infty\); as \(\theta\) decreases toward \(-90{}^\circ \), \(\tan \theta\) tends toward \(-\infty\).
\(\tan \theta\) is positive in the first and third quadrants, negative in the second and fourth quadrants.
\(\tan \theta\)=\(\tan (\theta+180{}^\circ)\) — the angles \(\theta\) and \(\theta +180{}^\circ\) form the same line. Thus the period of the tangent function is \(180 {}^\circ = \pi\) radians.
\(\tan \theta\) = \(- \tan (-\theta)\). Moving from \(\theta\) to \(-\theta\) reflects \(OC\) about the x-axis.
\(\tan \theta\) is equal to the slope of OA (rise = \(\tan \theta\) , run =1), which is also equal to \(\dfrac{\sin\theta}{\cos\theta}\), as well as Opposite over Adjacent for angle \(\theta\) in right triangle CBO.
\(\tan (45{}^\circ)=1\). When \(\theta=45{}^\circ\), triangle CBO is a 45-45-90 triangle, and OB=1. Similarly, \(\tan (-45{}^\circ)=-1\), etc.
For small values of \(\theta\), \(\tan \theta\) is close to \(\sin \theta\), which is close to the arc length of AB, i.e. the measure of \(\theta\) in radians.
If we define \(\arctan\) as the function whose input is the signed length of BC and whose output is the angle \(\theta\) corresponding to that tangent length, then the domain of that function is the reals, and it makes sense to define the range as \(-90 {}^\circ< \theta <90{}^\circ\) (in radians \(-\pi/2<\theta < \pi/2\), and arctan's output is an arc length). This range includes all the angles we need and avoids the discontinuity at \(\theta= \pm 90{}^\circ =\pm \pi/2\) radians.
For \(\left| \theta \right|\leq 45{}^\circ\), \(\left| \tan \theta \right|\leq 1\). Half of the input values of \(\tan \theta\) give outputs with absolute values less than or equal to 1, and the other half give values on the rest of the number line. This mapping also occurs with fractions and slopes, but there's something very compelling about seeing the lengths change dynamically. Applets like the one above could also help students develop intuition about slopes.
\(\tan (180{}^\circ-\theta) = -\tan \theta\). We reflect BC over the x-axis to form \(B{C}’\). Then \(\angle BO{C}’=\theta\) and \(\angle BOD =(180{}^\circ-\theta)\). \(B{C}’\) (the blue segment) is the tangent of \((180{}^\circ-\theta)\).
\(\tan (\theta \pm 90{}^\circ)\) = \(-1/\tan \theta\). The picture below illustrates the geometry of this identity when \(\theta\) is in the first quadrant.
The line formed at \(\theta + 90{}^\circ\) is perpendicular to OC and \(\triangle COB\sim \triangle ODB\). Thus \(\dfrac{BD}{OB}=\dfrac{OB}{BC}\), and with appropriate signs, \(\tan (\theta + 90{}^\circ)\) = \(-1/\tan \theta\). Since \(\tan \theta\)=\(\tan (\theta+180{}^\circ)\), \(\tan (\theta +90{}^\circ)=\tan(\theta-90{}^\circ)\).
The applet below shows the geometry in all quadrants, and it gives a dynamic sense of the relationship between \(\tan\theta\) and \(\tan(-\theta)\). Again, move the blue point:
Special Bonus: The Secant Function
The signed length of the segment OC is called the secant function, \(\sec\theta\).
Using similar triangles, we see that \(\sec \theta = \dfrac{1}{\cos \theta}\).
The Pythagorean Theorem applied to \(\triangle COB\) shows that \(\tan^2\theta+1=\sec^2 \theta\).
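Both identities are easy to confirm numerically (a quick check, with \(\sec\theta\) computed as the signed length of OC, i.e. \(1/\cos\theta\)):

```python
import math

def sec(t):
    """Signed length of OC for angle t (radians): 1 / cos t."""
    return 1 / math.cos(t)

for deg in range(0, 360):
    if deg % 180 == 90:                # tan and sec undefined at +-90 degrees
        continue
    t = math.radians(deg)
    assert math.isclose(math.tan(t) ** 2 + 1, sec(t) ** 2, rel_tol=1e-9)
```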
When the tangent function is big, so is the secant function; and when the tangent function is close to 0 — that is, when \(\overline{OA}\) is close to the x-axis — \(\sec \theta\) is close to \(\pm 1\), its smallest possible magnitude.
The graphs of the two functions look nice together: |
The action shown in the question is a functional of $\phi$, not of $x$. A change of the integration variable $x$ is just a relabeling of the index set. It does not transform the dynamic variables $\phi$ at all, so no: a change of variable does not correspond to a conserved quantity.
More explicitly, if $y(x)$ is a monotonic smooth function of $x$, then$$ \int d^4y\ {\cal L}\left(\phi\big(y(x)\big),\, \frac{\partial}{\partial y_\mu}\phi\big(y(x)\big)\right) = \int d^4x\ {\cal L}\left(\phi(x),\frac{\partial}{\partial x_\mu}\phi(x)\right)\tag{1}$$identically, for any ${\cal L}$ whatsoever (as long as it depends on $x$ only via $\phi$). This is just a change of variable (a relabeling of the index-set), and there is no associated conserved quantity.
In contrast, suppose that the action has this property:$$ \int d^4x\ {\cal L}\left(\phi\big(y(x)\big),\, \frac{\partial}{\partial x_\mu}\phi\big(y(x)\big)\right) = \int d^4x\ {\cal L}\left(\phi(x),\frac{\partial}{\partial x_\mu}\phi(x)\right).\tag{2}$$Unlike equation (1), equation (2) is
not identically true for any ${\cal L}$ and any $y(x)$, though it may be true for some choices of ${\cal L}$ and $y(x)$. The transformation represented in equation (2) replaces the original function of $x$, namely $\phi(x)$, with a new function of $x$, namely $\phi\big(y(x)\big)$. This is the kind of transformation we have in mind when we talk about Poincaré invariance and its associated conserved quantities: it is a change of the function $\phi$ which we then insert into the original action, not a change of the integration variable. |
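A one-dimensional caricature of equation (1) can be checked mechanically with sympy (the particular field configuration, density, and relabeling $y(x)$ below are arbitrary choices of mine):

```python
import sympy as sp

x, u = sp.symbols('x u')
phi = sp.sin                                 # toy 1D field configuration
g = phi(u)**2 + sp.diff(phi(u), u)**4        # L(phi(u), phi'(u)), an arbitrary density
ymap = x**3 + x                              # monotonic smooth relabeling y(x)
a, b = 0, 1

# Relabeled integral (with the Jacobian y'(x)) vs. the original one:
lhs = sp.Integral(g, (u, ymap.subs(x, a), ymap.subs(x, b))).evalf()
rhs = sp.Integral(g.subs(u, ymap) * sp.diff(ymap, x), (x, a, b)).evalf()
assert abs(lhs - rhs) < 1e-9                 # a pure change of variable, no new content
```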
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
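The brute-force check fits in a few lines over exact rationals (a throwaway script; $\delta = 2$ and the random ranges are arbitrary choices):

```python
from fractions import Fraction as F
import random

DELTA = F(2)      # the delta in a + b*sqrt(delta); any non-square rational works

def mul(p, q):
    """Multiplication rule: (a + b sqrt(d))(c + e sqrt(d)) = (ac + be d) + (bc + ae) sqrt(d)."""
    (a, b), (c, e) = p, q
    return (a * c + b * e * DELTA, b * c + a * e)

rng = random.Random(0)
def rand_elem():
    return (F(rng.randint(-9, 9), rng.randint(1, 9)),
            F(rng.randint(-9, 9), rng.randint(1, 9)))

for _ in range(200):
    x, y, z = rand_elem(), rand_elem(), rand_elem()
    assert mul(mul(x, y), z) == mul(x, mul(y, z))   # associativity, exactly
```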
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, minimising $|P(s)|$ proceeds as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
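The partial sums in question are easy to compute exactly (a quick illustration with $b = 10$; each successive term shrinks super-geometrically):

```python
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum  sum_{k=1}^{M} 1 / b**(k!)."""
    total, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k
        total += Fraction(1, b ** fact)
    return total

sums = [liouville_partial(10, M) for M in range(1, 6)]
assert all(s < t for s, t in zip(sums, sums[1:]))     # monotonically increasing
assert sums[4] - sums[3] == Fraction(1, 10 ** 120)    # the M=5 term is 10**-(5!)
```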
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
I'm going to find an example of a uniform algebra and show that it satisfies the definition. Example: show that the algebra of Gelfand transforms $\widehat{\mathcal A} = \{\widehat{f} : f \in \mathcal A\}$ is a uniform algebra.
We know that:
A uniform algebra is a closed subalgebra $\mathcal A$ of the complex algebra $C(X)$ that contains the constants and separates points. Here $X$ is a compact Hausdorff space.
The Gelfand transform of $f$ is the function $\widehat{f}$ defined on $M_{\mathcal A}$ in the following way:
$$\begin{align}\widehat{f}: M_{\mathcal A} &\to \Bbb C\\ \varphi &\mapsto \widehat{f}(\varphi)=\varphi(f), \forall \varphi \in M_{\mathcal A} \end{align}$$
How can we show that $\widehat{\mathcal A}$ satisfies the $3$ conditions above?
=================================
After some time thinking about the problem I posted above, I'll write my attempt here:
Let $\widehat{\mathcal A}=\{\widehat{f}: \ f \in \mathcal A \}$
$1$. $\widehat {\mathcal A}$ contains the constants.
Because $e \in \mathcal A \implies \widehat{e}(\varphi)=\varphi (e)=1, \ \forall \varphi \in M_{\mathcal A}$.
Therefore, $\widehat{(\lambda e)}(\varphi)=\varphi(\lambda e)=\lambda \varphi(e)=\lambda,\ \forall \lambda \in \Bbb C$.
Hence, $\widehat {\mathcal A}$ contains the constants.
$2$. $\widehat {\mathcal A}$ separates points.
We assume that $\varphi_1,\ \varphi_2 \in M_{\mathcal A}$ such that $\widehat{f}(\varphi_1)= \widehat{f}(\varphi_2),\ \forall f \in \mathcal A$.
Whence, $\varphi_1(f)= \varphi_2(f),\ \forall f \in \mathcal A$. So $\varphi_1= \varphi_2$.
Hence, $\widehat {\mathcal A}$ separates points.
===========================
$3$. Now I am stuck when trying to show that $\widehat {\mathcal A}$ is a closed subalgebra of the Banach algebra $C(M_{\mathcal A})$.
I think $\widehat{f}$ is continuous, because $\left |\widehat{f}(\varphi ) \right |=\left | \varphi (f) \right |\le \left \| \varphi \right \|\cdot \left \| f \right \|=\left \| f \right \|$.
But how can we prove the first condition (i.e. that $\widehat {\mathcal A}$ is a closed subalgebra of the Banach algebra $C(M_{\mathcal A})$)?
I don't remember the definition of a closed subalgebra. Can anyone post it to help me?
Any help will be appreciated! Thanks!
From the famous double-slit experiment, it is clear that electrons behave as waves as well as particles. When an electron is detected by a Geiger counter, a "click" appears, and no matter how much the voltage along the cathode tube is decreased, it is always a "click" and never "half a click". So electrons always arrive in lumps, like bullets. However, unlike bullets, the probability of detecting an electron at the backstop behind the slits is not bullet-like but shows interference, like water waves. So the electron does behave as a wave.
Waves of what? Waves of probability. The quantity that varies like a wave, analogous to the electric field in an electromagnetic wave, is $\Psi(x,y,z,t) = \psi(x,y,z)e^{-(iE/\hbar)t}$, a complex entity called the wavefunction. The wave associated with the electron is a purely mathematical construct. It doesn't describe the space-time variation of any measurable quantity. The wave rather relates to the probabilities of observing the electron at different locations in space as a function of time.
Photons do have a wavefunction, but it is not the classical EM wave. It needs a relativistic approach and is quite subtle. However, it can be expressed by means of the electric and magnetic fields, i.e. $\psi(x) = \begin{pmatrix} \vec{E} \\ ic\vec{B} \end{pmatrix}$. You can check this paper for more info on this.
There's a mapping $f:X\rightarrow Y$.
1. If $f(A\cap B)=f(A)\cap f(B)$ for all $A,B\subset X$, prove that $f$ is injective.
2. If $f(A^{c})=[f(A)]^{c}$ for all $A\subset X$, prove that $f$ is bijective.
For 1 you want to show that $f(x) = f(y)$ implies $x = y$. So let $f(x) = f(y)$. Then $f(\{x\}) = f(\{y\})$, and hence $f(\{x\} \cap \{y\}) = f(\{x\}) \cap f(\{y\}) = f(\{x\}) \neq \emptyset$. Since $f(\{x\} \cap \{y\})$ is nonempty, $\{x\} \cap \{y\} \neq \emptyset$, which forces $x=y$.
(1) Suppose $f$ is not injective. That is, there exists $a \neq b$ such that $f(a) = f(b)$. Consider the sets $\{a\}$ and $\{b\}$. Then $f(\{a\} \cap \{b\}) = f(\emptyset) = \emptyset \neq f(\{a\}) \cap f(\{b\})$.
(2) Suppose $f$ is not injective. Then there exists a $y \in Y$ such that $f^{-1}(y)$ contains more than one element. Let $a \in f^{-1}(y)$ and let $A = f^{-1}(y) \setminus \{a\}$, which is nonempty. Then $y \in f(A^c)$, since $a \in A^c$; but $y \in f(A)$, so $y \notin f(A)^c$. So $f(A^c) \neq f(A)^c$.
Suppose that $f$ is not surjective. Then there exists a $y \in Y$ such that $y \notin f(X)$. Let $A = X$. Then $f(A^c) = f(\emptyset) = \emptyset$. However $y \in f(A)^c$. Hence $f(A^c) \neq f(A)^c$.
Thus, it has been shown that $f$ is injective and surjective.
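For part 1, the equivalence between injectivity and preservation of intersections can also be sanity-checked by brute force over every map between small finite sets. A quick Python sketch (the helper name `preserves_intersections` is ours, not from the question):

```python
from itertools import product, combinations

def preserves_intersections(f, X):
    """True iff f(A ∩ B) = f(A) ∩ f(B) for all subsets A, B of X."""
    subsets = [set(s) for r in range(len(X) + 1) for s in combinations(X, r)]
    img = lambda A: {f[x] for x in A}
    return all(img(A & B) == img(A) & img(B) for A in subsets for B in subsets)

X, Y = (0, 1, 2), (0, 1, 2)
for values in product(Y, repeat=len(X)):       # every map f: X -> Y
    f = dict(zip(X, values))
    injective = len(set(f.values())) == len(X)
    # f preserves all intersections exactly when f is injective
    assert preserves_intersections(f, X) == injective
print("checked all", len(Y) ** len(X), "maps")
```

The failing pair for a non-injective map is always the singleton pair $\{a\}, \{b\}$ used in the proof above.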
For some non-negative numbers $m_1$, $m_2$, $E$ I define a function $$ f\left(\boldsymbol{q},E\right)=\frac{1}{2\omega_{1}\omega_{2}}\frac{1}{\omega_{1}+\omega_{2}+E}+\frac{1}{4\omega_{1}\omega_{2}}\frac{1}{\omega_{1}-\omega_{2}-E}+\frac{1}{4\omega_{1}\omega_{2}}\frac{1}{\omega_{2}-\omega_{1}-E} $$ where $$ \omega_{1,2}\left(\boldsymbol{q}\right)=\sqrt{m_{1,2}^{2}+\boldsymbol{q}^{2}} $$ Note that $f$ actually depends on $\boldsymbol{q}^2$.
I want to evaluate, at least numerically, the integral over all of $\mathbb{R}^3$ $$ I\left(E\right)=\int\frac{d^{3}q}{\left(2\pi\right)^{3}}f\left(\boldsymbol{q},E\right)e^{i\boldsymbol{q}\cdot\boldsymbol{n}} $$ where $\boldsymbol{n}$ is a non-negative integer 3-vector.
I've tried NIntegrate using e.g. $m_1=0.2$, $m_2=0.7$, $E=0.8$ and $\boldsymbol{n}=\left(1,1,1\right)$. The calculation takes very long and I just abort it. I've tried changing the input numbers and adding some methods as options. At most, I get warnings like NIntegrate::slwcon. I believe Mathematica runs into trouble because of the oscillating exponential.
With a simpler version, where $f$ is just $$ f\left(\boldsymbol{q}\right)=\frac{1}{2\sqrt{m^{2}+\boldsymbol{q}^{2}}} $$ similar issues appear. When it works, after a long time, the result is quite different from the analytical one.
Are you aware of some magical methods/options combination that would speed up the process and give a reliable result? Of course, if you know some analytical converging expression for the integral, I'd be happy to know.
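One simplification worth trying before tuning NIntegrate options (this is a standard reduction, not something stated in the question): since $f$ depends on $\boldsymbol q$ only through $\boldsymbol q^2$, the angular integration can be done analytically, which trades the oscillating 3D exponential for a 1D $\sin(qr)$ kernel that oscillatory strategies such as Method -> "LevinRule" handle far better:

```latex
% Angular reduction: with q = |\boldsymbol q| and r = |\boldsymbol n|,
% \int d\Omega\; e^{i q r \cos\theta} = 4\pi\,\sin(qr)/(qr), hence
I(E)
  = \int \frac{d^{3}q}{(2\pi)^{3}}\, f(\boldsymbol q^{2}, E)\, e^{i \boldsymbol q \cdot \boldsymbol n}
  = \frac{1}{(2\pi)^{3}} \int_{0}^{\infty} dq\, q^{2} f(q^{2}, E)\, \frac{4\pi \sin(q r)}{q r}
  = \frac{1}{2\pi^{2} r} \int_{0}^{\infty} dq\, q\, \sin(q r)\, f(q^{2}, E).
```

For the quoted parameters ($m_1=0.2$, $m_2=0.7$, $E=0.8$) the denominators $\omega_1 \pm \omega_2 \mp E$ never vanish, so the remaining 1D integrand has no poles on the integration path.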
Effects of the noise level on nonlinear stochastic fractional heat equations
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
We consider the stochastic fractional heat equation $\partial_{t}u=\triangle^{\alpha/2}u+\lambda\sigma(u)\dot{w}$ on $[0,L]$ with Dirichlet boundary conditions, where $\dot{w}$ denotes space-time white noise. For any $\lambda>0$, we prove that the $p$th moment of $\sup_{x\in [0,L]}|u(t,x)|$ grows at most exponentially. If $\lambda$ is small, we prove that the $p$th moment of $\sup_{x\in [0,L]}|u(t,x)|$ is exponentially stable. Finally, we show that the noise excitation index of the $p$th energy of $u(t,x)$ is $\frac{2\alpha}{\alpha-1}$.
Keywords: Fractional heat kernel, stochastic fractional heat equations, Mittag-Leffler function, excitation index. Mathematics Subject Classification: Primary: 60H15, 35K05. Citation: Kexue Li. Effects of the noise level on nonlinear stochastic fractional heat equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5437-5460. doi: 10.3934/dcdsb.2019065
2018 Impact Factor: 1.008
Misconceptions on Galileo's experiment at Pisa Tower
I think a lot of people have misconceptions on the implication of Galileo's experiment at Pisa Tower. Wikipedia says that Galileo “is said to have dropped two spheres of different masses from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass” [1]. This experiment was even repeated on the surface of the Moon.
Understanding Galileo's experiment in this way raises a lot of interesting questions. If two objects have the same shape but different mass, would they have the same terminal velocity? Would they accelerate at an equal rate? Does that also mean two bikes with different mass would arrive at the bottom of a hill at the same time?
Last Friday (19/07/19) my research group went out for dinner, and at one point we talked about skydiving. It was surprising to hear some of them claim that everyone would have the same terminal velocity, especially because two of my labmates have physical-science undergraduate backgrounds.
I once had a debate with a physicist during a bike ride in York, on whether our mass differences affect our speed at going downhill. Now looking back, starting that debate was rather embarrassing, although that conversation did make me think, so it was a useful conversation.
In this article, we discuss some of the physics behind these questions.
What did Galileo's experiment actually demonstrate?
Another part of the Wikipedia article says “Galileo proposed that a falling body would fall with a uniform acceleration, as long as the resistance of the medium through which it was falling remained negligible, or in the limiting case of its falling through a vacuum” [2]. I believe this is a better way to understand Galileo's experiment.
Newton's second law is basically:
$$ F = ma $$
The formula for Newton's law of universal gravitation is:
$$ F = G \frac{m_1 m_2}{r^2}, $$ where $F$ is the gravitational force acting between two objects, $m_1$ and $m_2$ are the masses of the objects, $r$ is the distance between the centers of their masses, and $G$ is the gravitational constant. The gravitational force acting between two objects is also known as weight.
By equating the two expressions for $F$ and setting $m = m_2$, we get:
$$ m_2 a = G \frac{m_1 m_2}{r^2}. $$ Let's also assume that $m_1$ is the mass of planet Earth, and $m_2$ is the mass of the object we are interested in.
It is clear that $m_2$, the mass of the object we are interested in, gets cancelled out, leaving us with:
$$ a = G \frac{m_1}{r^2}.$$ The acceleration due to gravity depends only on the mass of the planet $m_1$ and the distance between the centres of mass $r$. In practice, on planet Earth at sea level, the acceleration due to gravity is about $9.81\,\mathrm{m\,s^{-2}}$ [5].
The terminal velocity of an object
After an object reaches terminal velocity, it no longer accelerates. By Newton's second law of motion, the net force acting on the object must then be zero. The forces acting on a falling object are buoyancy [6], drag [7] and weight [8]. Buoyancy is normally negligible in the atmosphere, so let's look at weight and drag.
The equation for weight $F_g$ is:
$$ F_g = m g, $$ where: $m$ is mass, and $g$ is local acceleration due to gravity.
The equation for drag $F_D$ is:
$$F_D\, =\, \tfrac12\, \rho\, u^2\, C_D\, A$$ where: $\rho$ is the density of the fluid, $u$ is the flow velocity relative to the object, $A$ is the reference area, and $C_D$ is the drag coefficient
At terminal velocity, if we ignore buoyancy, the weight and the drag are balanced. It is clear that weight only depends on mass. If we assume that the object doesn't change its shape while falling, and the air doesn't change its density, then drag only depends on the object's velocity.
In fact, the formula for terminal velocity $V_t$ is:
$$ V_t= \sqrt{\frac{2mg}{\rho A C_D }}, $$ and its derivation can be found here [9].
Does the time of descent of an object depend on its mass?
Quite often, a falling object's velocity does not depend on its mass. This is because most of the time objects don't fall far enough to reach a velocity at which drag matters.
It is important to know that if the object travels at terminal velocity, then its time of descent will depend on its mass. For example, feathers and leaves reach their terminal velocity pretty quickly.
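The dependence of $V_t$ on mass is easy to see numerically. A small Python sketch of the formula above, with illustrative (not measured) values for the drag coefficient, reference area and air density:

```python
from math import sqrt

def terminal_velocity(m, g=9.81, rho=1.225, cd=1.0, area=0.7):
    """V_t = sqrt(2 m g / (rho A C_D)); cd, area and rho are illustrative."""
    return sqrt(2 * m * g / (rho * area * cd))

light, heavy = terminal_velocity(60.0), terminal_velocity(90.0)
print(f"60 kg skydiver: {light:.1f} m/s, 90 kg skydiver: {heavy:.1f} m/s")
# Quadrupling the mass would double V_t; doubling it multiplies V_t by sqrt(2):
assert abs(terminal_velocity(120.0) / terminal_velocity(60.0) - sqrt(2)) < 1e-12
```

With the same shape and drag coefficient, the heavier skydiver always has the higher terminal velocity, scaling as $\sqrt{m}$.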
The speed which a bike rolls downhill
In the previous section, we determined whether mass affects terminal velocity by analysing the forces acting on the falling object. We adopt the same approach in this section.
When a bike rolls downhill, three main forces act on the bike: weight, rolling resistance and drag. If we assume there is no rolling resistance [10], which implies that both the tyre and the ground are perfectly rigid and smooth, then it is a contest between weight and air resistance. It is clear that weight matters in this scenario.
Let us look at rolling resistance. The rolling resistance of a tyre can be calculated as the following:
$$\ F = C_{rr} N, $$ where: $F$ is the rolling resistance force, $C_{rr}$ is the dimensionless rolling resistance coefficient or coefficient of rolling friction, and $N$ is the normal force, the force perpendicular to the surface on which the wheel is rolling.
The normal force depends on the weight, so it is clear that rolling resistance is affected by weight. Interestingly, in this case, the heavier you are, the more rolling resistance you get.
The point of the analysis here is not to derive an equation for calculating the velocity of a bike going downhill at a time point, it is to show that your weight matters when you go downhill.
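To make the point concrete without a full derivation: at steady state on a slope of angle $\theta$, the downhill component of weight, $mg\sin\theta$, balances drag plus rolling resistance $C_{rr}mg\cos\theta$, giving $v=\sqrt{2mg(\sin\theta - C_{rr}\cos\theta)/(\rho C_D A)}$. A Python sketch with illustrative parameter values (none of these numbers come from a measurement):

```python
from math import sqrt, sin, cos, radians

def downhill_speed(m, slope_deg, g=9.81, rho=1.225, cd=0.9, area=0.5, crr=0.004):
    """Steady downhill speed where m g sin(theta) balances drag plus
    rolling resistance: v = sqrt(2 m g (sin t - C_rr cos t) / (rho C_D A)).
    C_D, A and C_rr are illustrative guesses for a rider on a bike."""
    theta = radians(slope_deg)
    return sqrt(2 * m * g * (sin(theta) - crr * cos(theta)) / (rho * cd * area))

light, heavy = downhill_speed(70.0, 8.0), downhill_speed(100.0, 8.0)
print(f"70 kg rider: {light:.1f} m/s, 100 kg rider: {heavy:.1f} m/s")
assert heavy > light   # the heavier rider coasts faster at steady state
```

Whatever the exact parameter values, the steady speed grows as $\sqrt{m}$, which is precisely why your weight matters going downhill.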
Conclusion
Galileo's experiment should be understood as a demonstration that “a falling body would fall with a uniform acceleration, as long as the resistance of the medium through which it was falling remained negligible, or in the limiting case of its falling through a vacuum”. In the other problems, a falling object that has reached terminal velocity and a bike rolling downhill, many other force components are involved. Galileo's experiment did not take those force components into account, so its results cannot be generalised to those scenarios.
I think these misconceptions might point to a bigger problem in science education. A lot of people (including me) dogmatically remember the results and conclusions of experiments done by people in the past, rather than trusting their own empirical observations and then coming up with an explanation.
Some “clever” students (including me) fudged our experimental results when we were in secondary school / sixth form. In these situations, the students place more trust in the authorities of science than in their own experimental results. I do not think this is the correct mentality / attitude.
I think one motto to remember is the Royal Society's motto, the Latin phrase “Nullius in verba”. It can be translated as “on the word of no one” or “take nobody's word for it”. Quite often, it is more important to come up with an explanation of what you observe than to nullify your observation because of some theories.
If $Y_n$ is a Poisson random variable with mean $n^{1/2}$
$S_n = \left(\frac{\sqrt{2} Y_n - \sqrt{2n}}{n^{1/4}}\right)$
If we consider the sequence $S_1, S_2, \ldots, S_n, \ldots$, provide the limiting distribution.
Attempt
Mgf of $Y_n$ is as follows:
$M_{Y_n}(t) = \exp\!\left(\sqrt{n}\,(e^{t} - 1)\right)$
So for all $t \in \mathbb{R}$, we have
$M_{S_n}(t) = \exp\!\left(-\sqrt{2}\,n^{1/4}\,t\right)\exp\!\left(\sqrt{n}\left(e^{\sqrt{2}\,t/n^{1/4}} - 1\right)\right)$
Taking $\lim_{n\to \infty} M_{S_n}(t)$ will lead to a lot of mess that, I think, will involve a Maclaurin series expansion. Any nudge in the right direction or an easier approach would be appreciated.
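As a nudge of the empirical kind: since $\mathrm{Var}(Y_n)=\sqrt n$, the CLT heuristic suggests a centered normal limit with variance $2$, and that guess is cheap to check by simulation before wrestling with the MGF. A stdlib-only Python sketch (the Poisson(1) sampler uses Knuth's multiplication method, and $\sqrt n$ independent Poisson(1) draws are summed to get one Poisson($\sqrt n$) draw):

```python
import random
from math import exp, sqrt

random.seed(0)

def poisson1():
    """One Poisson(1) draw via Knuth's multiplication method."""
    L, k, p = exp(-1.0), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_S(n):
    """One draw of S_n = (sqrt(2) Y_n - sqrt(2n)) / n^(1/4), Y_n ~ Poisson(sqrt(n))."""
    lam = int(round(sqrt(n)))                # n chosen so sqrt(n) is an integer
    y = sum(poisson1() for _ in range(lam))  # sum of lam Poisson(1) is Poisson(lam)
    return (sqrt(2) * y - sqrt(2 * n)) / n ** 0.25

n, reps = 10_000, 20_000                     # sqrt(n) = 100
draws = [sample_S(n) for _ in range(reps)]
mean = sum(draws) / reps
var = sum((d - mean) ** 2 for d in draws) / reps
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")   # expect near 0 and near 2
```

Note that $\mathbb{E}[S_n]=0$ and $\mathrm{Var}(S_n)=2$ hold exactly for every $n$, so only the shape of the distribution is changing with $n$.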
I have an optimization problem where all the constraints are linear except some of the type: $$ y_i = \frac{x_i}{\sum_k x_k} $$
It seems that the equality can be relaxed to an inequality adding the linear constraint: $$ \sum_k y_k =1. $$
I further have the constraints: $$x_i\geq0$$ $$0\leq y_i \leq 1$$
The question is whether the relaxed inequality is (or can be recast as) a convex constraint: $$ y_i \leq \frac{x_i}{\sum_k x_k} $$
If I analyze it as a quadratic constraint $$\mathbf{x}^T A \mathbf{x} - b \mathbf{y} \leq 0,$$ the corresponding matrix $A$ has all zeros in the main diagonal (since there are no squared variables) and shows a negative eigenvalue, so the matrix is not positive semidefinite, and thus, the constraint is not convex.
I have also looked into the possibility of recasting the constraint as a geometric programming one: $$ y_i\, x_i^{-1} \,\sum_k x_k \leq 1 $$ but I don't know whether a geometric programming problem can mix constraints in the transformed variables with constraints in the original ones.
Any hint towards where to look would be much appreciated!
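The eigenvalue argument above can be confirmed more directly: the bilinear constraint $y_i\sum_k x_k - x_i \le 0$ is not convex, as two feasible points can have an infeasible midpoint. A small Python check with two hand-picked points (the specific numbers are only illustrative):

```python
def feasible(x, y1, tol=1e-12):
    """Check y1 <= x[0] / sum(x), rewritten as y1 * sum(x) - x[0] <= 0."""
    return y1 * sum(x) - x[0] <= tol

A = ((1.0, 0.0), 1.0)      # y1 = x1/(x1+x2) = 1, feasible with equality
B = ((1.0, 4.0), 0.2)      # y1 = 1/5, feasible with equality
mid = (tuple((a + b) / 2 for a, b in zip(A[0], B[0])), (A[1] + B[1]) / 2)

assert feasible(*A) and feasible(*B)
assert not feasible(*mid)   # midpoint violates the constraint => set not convex
print("midpoint residual:", mid[1] * sum(mid[0]) - mid[0][0])
```

At the midpoint $x=(1,2)$, $y_1=0.6$, the residual is $0.6\cdot 3 - 1 = 0.8 > 0$, so no convex reformulation in these variables can represent the set exactly.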
Analysis of a nonlinear system for community intervention in mosquito control
1.
Department of Mathematics, Bentley College, 175 Forest Street, Waltham, MA 02452, United States
2.
Department of Population and International Health, Harvard School of Public Health, 665 Huntington Avenue, Boston, MA 02115, United States
$$x_{n+1} = a x_{n}\,h(p y_{n}) + b\,h(q y_{n}), \qquad n = 0, 1, \ldots$$
$$y_{n+1} = c x_{n} + d y_{n}$$
where the function $h \in C^{1}([0,\infty) \to [0,1])$, satisfying certain properties, will denote either $h(t)=h_{1}(t)=e^{-t}$ and/or $h(t)=h_{2}(t)=1/(1+t)$. We give conditions in terms of the parameters for boundedness and stability. This enables us to explore the dynamics of prevalence/community-activity systems as affected by the range of parameters.
Mathematics Subject Classification:39A11; Secondary: 92D4. Citation:M. Predescu, R. Levins, T. Awerbuch-Friedlander. Analysis of a nonlinear system for community intervention in mosquito control. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 605-622. doi: 10.3934/dcdsb.2006.6.605
Some explanations first
The substitution in the question introduces the reduced wave function $u(r)$ by solving the original radial equation in polar coordinates,
$$-\frac{1}{2}\left(R''(r)+\frac {1}{r}R'(r)\right) - \frac{1}{r}R(r) + \frac {m^2}{2r^2}R(r) = E R(r)$$
using the
ansatz
$$R(r)\equiv \frac{1}{\sqrt{r}}u(r)$$
The apparently divergent prefactor is cancelled by the fact that $u(r)$ is then found to go to zero at $r\to 0$ with a power larger than, or equal to, $\frac12$. The equal sign occurs exactly for the case $m=0$, as one can prove analytically. This is essential because the new radial equation for the reduced wave function still contains a strongly divergent term even at $m=0$. The equation reads
$$-\frac{1}{2}u''(r)- \frac{1}{r}u(r) + \frac {m^2-\frac14}{2r^2}u(r) = E u(r)$$
The reason this form is often used is that it is in the form of a Sturm-Liouville equation (with no first derivative), and the analytic solution for $m=0$ yields $u(r)\propto \sqrt{r}$ as $r\to 0$, which means that the original wave function approaches a finite limit. It doesn't go to zero, so Dirichlet boundary conditions don't apply to $R(r)$ at the origin.
The problem with the substitution in the question is that it introduces an effective potential that diverges at $r\to 0$ even when the angular momentum is zero. This is difficult to treat numerically, and you can only get marginally closer to the correct value by decreasing MaxCellMeasure. A similar centrifugal term in the effective potential for $m>0$ is less problematic numerically, because then it enters with a larger prefactor. Since the centrifugal term always leads to a suppression of the wave amplitude near the origin, independently of the boundary conditions, the limit of vanishing amplitude as $r\to 0$ is approached smoothly for nonzero $m$. But for $m=0$ the amplitude has to fall off as $\sqrt{r}$, and that means NDEigensystem has to deliver a result for $u(r)$ whose slope ideally diverges. This is why I think this formulation of the problem is not the right one for a numerical solution.
Below, I therefore use the unmodified radial equation, denoting the radial wave function by $\psi(r)$ instead of $R(r)$. You'll understand why if you try to read $R(r)$ out loud.
The Dirichlet boundary condition at $r=0$ that was needed for $u(r)$ is still correct for $R(r)$ at $m>0$ because these functions vanish as $r^m$ there. But the centrifugal potential is pretty much all by itself able to enforce this condition (see caveat at the end), so the main condition that leads to the quantization of the eigenvalues is the Dirichlet condition at $r\to\infty$.
The remaining issue in the radial equation is that the $r\to\infty$ boundary condition then needs to be faked by choosing a large but finite $r$ at which you expect the wave to have decayed to zero. This distance can in principle be estimated from the classical turning points of the Coulomb potential. However, in my approach you don't need to do that, because I transform $r$ to a different variable defined on a finite interval, so that the large-$r$ variations (which are slow) get compressed into that interval, and the boundary condition can be applied at the finite point to which $r\to\infty$ has been mapped.
Suggested numerical approach
The correct eigenvalues are:
e[n_] := -(1/(2 (n - 1/2)^2))
N[Table[e[n], {n, 0, 10}]]
(*
==> {-2., -2., -0.222222, -0.08, -0.0408163, -0.0246914,
-0.0165289, -0.0118343, -0.00888889, -0.00692042, -0.00554017}
*)
To reproduce this numerically, I would choose the same substitution of variables that I proposed in the linked answer: $r=\tan(\xi)$. This leads to the following modification of the radial equation:
Clear[f, r, ξ, ψ, radialξ];
radialEq = -(1/r) f[r] - 1/2 f''[r] - 1/(2 r) f'[r] + m^2/(2 r^2) f[r];
radialξ[m_] =
Simplify[radialEq /. f -> (ψ[ArcTan[#]] &) /. r -> (Tan[ξ]),
Pi/2 > ξ > 0]
(*
==> 1/4 Cot[ξ] (2 (-2 + m^2 Cot[ξ]) ψ[ξ] -
Cos[ξ]^2 (2 Cos[2 ξ] Derivative[1][ψ][ξ] +
Sin[2 ξ] (ψ^′′)[ξ]))
*)
Now we can't impose a Dirichlet boundary condition at the origin when the angular momentum (called here m instead of l for clarity) vanishes. But I find that this causes no problem. I just leave a free boundary at the origin. The resulting eigenvalues are in very good agreement with expectation, up to roughly the tenth eigenvalue:
With[{max = 20, shift = 10, m = 0}, {ev, ef} =
NDEigensystem[{radialξ[m] + shift ψ[ξ],
DirichletCondition[ψ[ξ] ==
0, ξ == Pi/2]}, ψ[ξ], {ξ, 0, Pi/2}, max,
Method -> {"SpatialDiscretization" -> {"FiniteElement",
{"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}},
"Eigensystem" -> {"Arnoldi", MaxIterations -> 40000}}];
evNew = ev - shift]
(*
==> {-2., -0.222222, -0.08, -0.0408163, -0.0246914, -0.0165289, \
-0.0118343, -0.00888878, -0.00691999, -0.00553879, -0.0045312, \
-0.00377284, -0.00318483, -0.00267246, -0.00233131, -0.00196882, \
-0.00167769, -0.00141128, -0.000979967, -0.000749858}
*)
With[{n = 4, d = 10, amplitudes = {-1, 1, 1, 1}},
Plot[Evaluate[
Table[evNew[[i]] +
amplitudes[[i]] (ef[[i]] /. ξ -> ArcTan[r]), {i, n}]], {r, 0,
d}, PlotRange -> {{0, d}, {-5, 5}},
Epilog -> {Gray, Dashed,
Table[Line[{{0, evNew[[i]]}, {d, evNew[[i]]}}], {i, n}]}]]
With[{max = 20, shift = 10, m = 4}, {ev, ef} =
NDEigensystem[{radialξ[m] + shift ψ[ξ],
DirichletCondition[ψ[ξ] ==
0, ξ == Pi/2]}, ψ[ξ], {ξ, 0, Pi/2}, max,
Method -> {"SpatialDiscretization" -> {"FiniteElement", \
{"MeshOptions" -> {"MaxCellMeasure" -> 0.001}}},
"Eigensystem" -> {"Arnoldi", MaxIterations -> 40000}}];
evNew = ev - shift]
(*
==> {-0.0246914, -0.0165289, -0.0118343, -0.00888886, \
-0.00692024, -0.00553945, -0.00453286, -0.00377563, -0.00318396, \
-0.00269907, -0.00235954, -0.00196281, -0.00168251, -0.0014004, \
-0.00102958, -0.000964247, -0.000478284, 0.00014322, 0.00186455, \
0.0042127}
*)
With[{n = 4, d = 130, amplitudes = {-1, 1, 1, 1}/1000},
Plot[Evaluate[
Table[evNew[[i]] +
amplitudes[[i]] (ef[[i]] /. ξ -> ArcTan[r]), {i, n}]], {r, 0,
d}, PlotRange -> {{0, d}, All},
Epilog -> {Gray, Dashed,
Table[Line[{{0, evNew[[i]]}, {d, evNew[[i]]}}], {i, n}]}]]
The plots are for $m = 0$ (top) and $m = 4$ (bottom).
The reason why I didn't have to specify a boundary condition for $r=0$ is that whenever $m>0$ there is a centrifugal barrier in the effective potential that suppresses the solution near $r=0$ anyway. However, this suppression is a cheat because it doesn't enforce an exactly zero wave function, only exponential suppression. So a slightly more accurate solution is obtained if you replace the DirichletCondition above by
DirichletCondition[ψ[ξ] == 0, If[m == 0, ξ == Pi/2, True]]
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long-term Powder Game player myself, Powder Game does not really have a diffusion-like algorithm written into it. The liquids in Powder Game are sort of dots that move back and forth and are subjected to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
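That two-state rule takes only a few lines to try. A minimal 1D Python sketch (grid size, seeding density and the periodic boundary are arbitrary choices, not anything taken from OE-Cake or Powder Game):

```python
import random

random.seed(1)
N, STEPS, LIFETIME = 80, 50, 10

# cell state: (type, age); start mostly type B with a few random A seeds
grid = [("A", 0) if random.random() < 0.1 else ("B", 0) for _ in range(N)]

def update(grid):
    new = []
    for i, (t, age) in enumerate(grid):
        neighbours = (grid[i - 1][0], grid[(i + 1) % len(grid)][0])  # periodic
        if t == "A" and age >= LIFETIME:
            new.append(("B", 0))              # A turns into B after LIFETIME steps
        elif t == "B" and "A" in neighbours:
            new.append(("A", 0))              # B next to an A becomes A
        else:
            new.append((t, age + 1))
    return new

for s in range(STEPS):
    grid = update(grid)
    if s % 10 == 0:                           # print a few snapshots
        print("".join("#" if t == "A" else "." for t, _ in grid))
```

Fronts of A expand into B territory and then die back, which is exactly the excitable-media flavour of reaction-diffusion patterns mentioned above.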
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled in red marker, and some scribbles
The documentary then showed a bird's-eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid. They are the index arrays, and they denote the range of indices of the tensor array
In some tiles there's a swirl-shaped dirt mound; these represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible lying across the grid, holding up a triangular pool of water
Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snaps in the middle. The boys are then forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that I would be spending quite a bit of time on the theoretical side, doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of a different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the values that the $\alpha,\beta$ indices range over. For example, to encode a patch of nonzero-curl vector field in this object, one might take $\gamma$ from the set $\{4,9\}$ and $\delta$ from $\{2,3\}$
However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could only have suspended him on PSE, and only for a year. That he got. After that his suspension was extended to a ten-year network-wide one, which couldn't have been the work of the site mods. Only the CMs can do that, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I would have liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict into the real world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although this is no longer about Ron Maimon, I don't see the meaning of "campaign" as well-defined here. And yes, it is a bit of a source of fear for me that my behavior could also be read as "campaigning for my own caging".
Let $X$ be a topological space and $X=X_1 \cup X_2$ with $X_1, X_2$ nonempty open irreducible subsets. Then $X$ is irreducible iff $X_1 \cap X_2 \ne \emptyset$.
The easy part: if it were $X_1 \cap X_2 = \emptyset$ then we would have $$ X = (X \setminus X_1) \cup (X \setminus X_2) $$ and this is impossible since $X$ is irreducible, so it can't be written as a union of two proper closed subsets.
The other way gives me some problems. Suppose by contradiction $X=C_1 \cup C_2$ with $C_i$ proper closed subsets. Then... what can I do?
Could you provide any hints, please? Thanks.
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$ even if there is no polynomial over the field with Galois group $S_n$
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period but upon google search I don't see the correct sort of equation when I enter "variable period cosine"
@LucasHenrique I hate them; I tend to find algebraic proofs more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$
I enjoyed developing the Lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows that all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. So can we use these three classes of intervals as a base, which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open set of any topology must contain the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for these class of topologies called British rail topology is quite fitting (with the help of this WfSE to interpret of course mathematica.stackexchange.com/questions/3410/…)
Since, as Leaky has noticed, every point is closest to 0 (other than to itself), to get from A to B, go via 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than go from a to 0 and then 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
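The two path lengths above can be checked directly with the metric in question, $d(x,y)=|x|+|y|$ for distinct points (and $0$ otherwise); the sample points are arbitrary:

```python
# "British Rail" / post-office style metric sketched above:
# every trip between distinct points effectively passes through the origin.
def d(x, y):
    return 0.0 if x == y else abs(x) + abs(y)

a, b, c = 2.0, 5.0, 3.0
via_b = d(a, b) + d(b, c)        # |a| + |b| + |b| + |c|
via_0 = d(a, 0.0) + d(0.0, c)    # |a| + |c|
print(via_b, via_0, d(a, c))     # → 15.0 5.0 5.0
```

Note the detour through 0 costs exactly the same as the "direct" trip, which is the point of the railway picture: every route is a route through the hub.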
so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination
But if the starting point is nonzero, then the distance increases quadratically
Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination at, say, a speed of 3 m/s, then for every metre forward the actual ground covered at 3 m/s decreases (as illustrated by the shrinking open ball of fixed radius)
Only when travelling via the origin does such a quadratic penalty in travelling distance not apply
More interesting things can be said about slight generalisations of this metric:
Hi, looking at a graph isomorphism problem from the perspective of eigenspaces of the adjacency matrix, it gets a geometric interpretation: the question of whether two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
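The Vandermonde setup just described can be sketched numerically; this assumes numpy is available, and the sample cubic is made up for illustration:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs**3 - 2 * xs + 1                 # sample an exact cubic at the nodes

V = np.vander(xs, increasing=True)      # rows [1, x, x^2, x^3]
coeffs = np.linalg.solve(V, ys)         # well-posed since all x_i are distinct
print(np.round(coeffs, 6))              # recovers the coefficients [1, -2, 0, 1]

# det V = prod_{i<j} (x_j - x_i) up to sign: nonzero iff the nodes are distinct
print(round(abs(np.linalg.det(V)), 6))  # → 12.0 for nodes 0, 1, 2, 3
```

The determinant formula $\det = \prod_{i<j}(x_i-x_j)$ is exactly why the interpolation problem is determined precisely when the nodes are pairwise distinct.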
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls, after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu, Apr 13 at 18:24
grr, thought I could get some more intuition in SR by using open balls
tbf there’s actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
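One direction really is short: the cosine matrix of three unit vectors is their Gram matrix, and $x^{\mathsf T}Gx = \lVert x_1u_1+x_2u_2+x_3u_3\rVert^2 \ge 0$, so it is positive semidefinite. A numeric sanity check of that (not a proof; the random vectors are arbitrary):

```python
import math, random

random.seed(0)

def unit_vector():
    # a random point on the unit sphere in R^3
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

us = [unit_vector() for _ in range(3)]
# Gram matrix: 1s on the diagonal, cos a, cos b, cos c off it
G = [[dot(us[i], us[j]) for j in range(3)] for i in range(3)]

min_q = min(
    sum(x[i] * G[i][j] * x[j] for i in range(3) for j in range(3))
    for x in ([random.uniform(-1, 1) for _ in range(3)] for _ in range(1000))
)
print(min_q >= -1e-12)   # the quadratic form is never (meaningfully) negative
```

The converse direction is where the real content is: a PSD matrix with unit diagonal factors as $G = U^{\mathsf T}U$ (e.g. via Cholesky), and the columns of $U$ are the desired unit vectors.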
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming the axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$
It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However, the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint
This means we can only have countably many disjoint open intervals; some irrationals will not be in the union, though uncountably many of them will be
If I consider an open cover of the rationals in [0,1], the sum of whose length is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals.One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals.Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval $[-1,1]$ will be used in the proofs.) Let $\lambda^*$ be the Lebesgue outer measure. We previously proved that $\lambda^*(\{x\})=0$ for $x \in [-1,1]$ by covering it with open intervals $(x-a,x+a)$ and noting that there are nested open intervals whose lengths tend to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consists of overlapping open intervals, then subtract away the overlapping open intervals to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$, thus deducing $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait: as the sequence grows, any rational of the form $\frac{p}{q}$ with $|p-q| > 1$ will be somewhere in between two consecutive terms of the sequence, and the gap $\frac{n+1}{n+2}-\frac{n}{n+1}$ tends to zero as $n \to \infty$, so it follows that all the intervals have infimum zero
However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the total length of the union of them all is nonzero. Need to figure out how this works...
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all of the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But all of the first $N$ clients are now mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. Therefore we have the recurrence $d_{N+1} = 2d_N + 1$,
where $d_1 = 1$.
Yet we have $1 \to 2 \to 1$, which takes $3 = d_2 \neq 2^2$ steps.
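The recurrence unrolls to $d_N = 2^N - 1$ (adding 1 to both sides turns it into pure doubling), which matches the $d_2 = 3$ observation; a quick check:

```python
# d_{N+1} = 2 d_N + 1 with d_1 = 1; the closed form is d_N = 2^N - 1.
def days(N):
    d = 1
    for _ in range(N - 1):
        d = 2 * d + 1
    return d

print([days(N) for N in range(1, 6)])  # → [1, 3, 7, 15, 31]
```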
I am working on a linear analysis problem where we have boiled down the problem to finding a continuous function $f:\mathbb{R} \to \mathbb{R}$ that is bounded, but has infinite derivative at zero. So far, we have conjured up the example $$f_n(x) = \frac{2}{\pi}\arctan(nx)$$ This sequence of functions will have infinite derivative at $0$ when $n\to \infty$, and is bounded by $1$. I believe this will work for the sake of our problem, but I would like to find a function that doesn't depend on $n$. I can picture what this should look like, but I can't come up with an example function. Any ideas? All appreciated.
$f(x)=\arctan(\sqrt[3]{x})$, for example.
A quarter of a unit circle (no, the other quarter) up and down: $$ f(x) = \begin{cases} 0 , & x < -1, \\ 1-\sqrt{1-(x+1)^2}, & -1 \leq x < 0, \\ 1-\sqrt{1-(x-1)^2}, & 0 \leq x < 1, \\ 0 , & 1 \leq x \end{cases} \text{.}$$
$g(x)=\operatorname{arccot}(x^{1/2})$ is another one.
My first thought was something like
$$f(x) = x\sin\left(\frac1x\right)$$ $$f'(x) = \sin\left(\frac1x\right) - \frac1x\cos\left(\frac1x\right)$$
This is bounded between $1$ and $-1$ (note that $\lim_{x\to\infty}f(x)=1$), and its derivative has an infinite oscillatory discontinuity.
Also, $$\lim_{x\to 0}f(x)=0$$ so $f$ itself has a removable discontinuity; it can be made continuous by defining $f(0)=0$.
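For a quick numeric look at the first suggestion, $f(x)=\arctan(\sqrt[3]{x})$: it is bounded by $\pi/2$, while its symmetric difference quotient at $0$ grows like $h^{-2/3}$ as $h \to 0$. A sketch (the sample step sizes are arbitrary):

```python
import math

def f(x):
    # arctan of the real cube root of x (copysign handles negative x)
    return math.atan(math.copysign(abs(x) ** (1 / 3), x))

# symmetric difference quotients at 0 for shrinking h: they blow up
quotients = [(f(h) - f(-h)) / (2 * h) for h in (1e-2, 1e-4, 1e-6)]
print(quotients)   # grows roughly like h**(-2/3)
```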
Well there should be a distinction between types and terms somehow. If we had a "plus 10" function it wouldn't make any sense to apply it to a type. What is a type plus 10? So we need some kind of type system that differentiates between types and terms. This much is unavoidable. I suspect that this is the brunt of your issue.
But lets try and consolidate things a bit.
$$\begin{align}t, u ::=&~x && \text{(variable)} \\ &|~\lambda x : A. t && \text{(term abstraction)} \\ &|~t~u && \text{(application)} \\ &|~\lambda \alpha : Type. t && \text{(type abstraction)}\\\end{align}$$
where Type is a new keyword. Then applications can be type checked to ensure that only terms that have types of the form $\forall a. t$ can have types applied to them and that terms of the form $\lambda x : Type. t$ have type $\forall x. t$
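As an illustration only, the unified syntax can be written down as a plain algebraic data type; the constructor names and the `TYPE` marker below are made up for the sketch:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Var:
    name: str

@dataclass
class Lam:                # one abstraction form covers both binders
    var: str
    ann: Any              # a type expression, or the TYPE marker below
    body: Any

@dataclass
class App:
    fn: Any
    arg: Any

TYPE = "Type"             # the new keyword

# Λa:Type. λx:a. x — the polymorphic identity in the unified syntax
poly_id = Lam("a", TYPE, Lam("x", Var("a"), Var("x")))
print(poly_id.ann is TYPE)   # → True: the outer binder is a type abstraction
```

A type checker over this AST would then enforce the side conditions stated above: only terms of type $\forall a. t$ may be applied to types, and a `Lam` annotated with `TYPE` gets a $\forall$ type.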
This is however just a rebranding of System-F! All I have done is unified the syntax a bit. This does manage to consolidate application but you might then ask what happens when we say "Type is a Type too!" so that we can have only one kind of abstraction as well. So you might try and define things like this
$$\begin{align}A, B ::=&~\alpha && \text{(type variable)} \\ &|~A \rightarrow B && \text{(function type)} \\ &|~\forall \alpha. B && \text{(universal quantification)}\\ &|~Type && \text{(type of types)} \\\end{align}$$
and then try something like this for terms
$$\begin{align}t, u ::=&~x && \text{(variable)} \\ &|~\lambda x : A. t && \text{(abstraction)} \\ &|~t~u && \text{(application)} \\\end{align}$$
Now I have introduced some terms I didn't want, like $\lambda x : Type \to Type. t$. We don't even have the proper machinery to use something like that! So it doesn't seem like we can consolidate much further than our first attempt, lest we introduce functions on types, which would take us outside of System F! In fact the above move takes us outside of System-$F\omega$ as well! Consider the term $\lambda x : \forall a. a \to Type. t$. That's a dependent type!
If we morph the system into something closer to dependent types, we can eventually drop the distinction if we are very careful. Such a system would not be System-F, however, and thus the distinction is needed as long as we are in System-F.
Well, I'll answer anyway, since Wikipedia isn't MSE, but to be clear, while the math doesn't come from Wikipedia, I'm more or less going to directly copy the story of the conjecture from Wikipedia.
Next, I'd like to point out that the statement is now known as the Quillen-Suslin theorem after both gave independent proofs of the conjecture.
Projective Modules
Based on your comments, it seems like the main words that you don't understand are projective and free (also maybe finitely generated, but that's less complicated). So let's talk about projective modules. A module $P$ over a ring $R$ is projective if any one of the following equivalent statements is satisfied:

1. The functor $\newcommand\Hom{\operatorname{Hom}}\Hom_R(P,-)$ is exact,
2. $\Hom_R(P,-)$ is right exact,
3. if $f: M \to N$ is surjective, and $g:P\to N$ is any map, then there exists a lift $\tilde{g}:P\to M$ such that $g = f\circ\tilde{g}$,
4. $P$ is a direct summand of a free module. (Free modules are defined below.)
Drawing a picture of definition 3: given $g$ and $f$ such that the following diagram has exact bottom row, there exists $\tilde{g}$ making the second diagram below commute (apologies for the poor diagrams, but MSE doesn't support commutative diagrams very well).$$\require{AMScd}\begin{CD}@. P @.\\ @. @VgVV\\M @>f>> N @>>> 0\\\end{CD}\qquad\implies\exists_{\tilde{g}}\text{ such that}\qquad\begin{CD}P @= P @.\\ @V\tilde{g}VV @VgVV\\M @>f>> N @>>> 0\\\end{CD}$$
The reason I've put so much effort into the third definition is because it will be the most relevant for us (at least at this stage of the explanation).And now let's introduce free modules.
Free Modules
A module over a ring $A$ is free if it is isomorphic to $\bigoplus_{s\in S} A$ (a possibly infinite direct sum of copies of $A$) for some index set $S$. Free modules are in many ways analogous to vector spaces. Indeed, a module $M$ is free if and only if it has a basis, i.e. a collection of elements $e_s$ for $s\in S$ such that the $e_s$ are linearly independent over $A$ and also span $M$. Thus the fact that every vector space has a basis tells us that every module over a field is free.
Now we can prove that free modules are projective (using definition 3).Let $e_s$ be a basis for a free module $F$, then a map from $F$ to any module $M$ is determined by its values on the basis (since a basis spans $F$), and any choice of values on the basis determines a map (since the basis is linearly independent), so if $f:M\to N$ is surjective and we have a map $g:F\to N$, then for each $s$, since $f$ is surjective, we can choose $m_s\in M$ such that $f(m_s)=g(e_s)$. Then if we define $\tilde{g}(e_s)=m_s$, we have $f(\tilde{g}(e_s))=f(m_s)=g(e_s)$, so $g=f\circ \tilde{g}$. Thus free modules are projective.
Side note, finitely generated
You've indicated you're not familiar with what a finitely generated module means, but the name sort of gives it away. A finitely generated module $M$ is one where there is a finite subset of $M$ such that its span over $A$ is all of $M$ (i.e. there is a finite subset of $M$ that generates $M$). It's analogous to a vector space being finite dimensional.
The relationship between projective and free modules
Since every free module is projective, it's natural to wonder if every projective module is free. Definition 4 of projective modules sort of answers this. We can prove that any direct summand of a free module is a projective module (and vice versa). Thus we can find projective modules that aren't free if we can find a summand of a free module that isn't itself free. See here for an example.
However, if our ring is sufficiently nice, nothing like that happens. In particular if our ring is a PID, all finitely generated projective modules are free.
Thus it is natural to wonder, well, what if our ring isn't a PID, but instead a polynomial ring over a PID. That should still be a fairly "nice" ring, so its modules should also be "nice". And this leads us to the actual story of the conjecture. For more on the relationship between projective and free, see this section of the Wiki page on projective modules.
The story
According to Wiki, Serre remarked that for certain rings $K$ (for example $\Bbb{R}$ or $\Bbb{C}$), facts about smooth and holomorphic manifolds tell us that every finitely generated projective module over $K[x_1,\ldots,x_n]$ is free.
Thus it shouldn't be too surprising if, for any PID $K$, all finitely generated projective modules over $K[x_1,\ldots,x_n]$ are free.
Indeed, this was proved by both Quillen and Suslin independently in 1976 (21 years after the initial conjecture).
Finally, the geometric interpretation
Algebraic geometry allows us to interpret algebraic statements geometrically. Under this correspondence, $K[x_1,\ldots,x_n]$ corresponds to affine space over $K$. If $K$ is an algebraically closed field, this is essentially $K^n$, although it can be much more complicated for a general PID.
Similarly under this correspondence projective modules over a commutative ring correspond to vector bundles on the geometric space the ring corresponds to. It would be fairly difficult for me to explain precisely why this is if you don't know any algebraic geometry, but for those who know some algebraic geometry, this lemma at the stacks project essentially answers this question.
Now if vector bundles correspond to projective modules, it's natural to ask which projective modules correspond to trivial vector bundles, and (perhaps unsurprisingly) the answer is free modules correspond to trivial vector bundles. Thus the fact that there are no nonfree projective modules corresponds to the geometric fact that any vector bundle on affine space over a PID is trivial.
Hopefully this was interesting and helpful.
There are many ways to study approaches to equilibrium, which is obvious as there are many ways to drive a system out of equilibrium. So there is really no unique answer to your question. However, various universal results are known. These include various fluctuation theorems, the most famous of which is usually called simply the fluctuation theorem; it relates the probability of a time-averaged entropy production $\Sigma_t=A$ over time $t$ to that of $\Sigma_t=-A$, $$\frac{P(\Sigma_t=A)}{P(\Sigma_t=-A)}=e^{A t},$$which shows that positive entropy production is exponentially more likely than negative entropy production. Note that the second law follows from this theorem. The fluctuation-dissipation relation may also be derived from it.
There is also, for instance, the Crooks fluctuation theorem which relates the work done on a system, $W$, during a non-equilibrium transformation to the free energy difference, $\Delta F$, between the final and the initial state of the system,$$\frac{P_{A \rightarrow B} (W)}{P_{A\leftarrow B}(- W)} = ~ \exp[\beta (W - \Delta F)],$$where $\beta$, is the inverse temperature, $A \rightarrow B$ denotes a forward transformation, and vice versa.
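As an illustrative numerical check (the Gaussian work distribution and all parameter values below are made up for the sketch): if the forward and reverse work distributions are Gaussians with a common variance $\sigma^2$ and means $\Delta F + \beta\sigma^2/2$ and $-\Delta F + \beta\sigma^2/2$ respectively, the Crooks relation holds exactly:

```python
import math

beta, dF, sigma = 1.0, 0.5, 1.0          # illustrative parameters
mu_F = dF + beta * sigma**2 / 2          # forward mean
mu_R = -dF + beta * sigma**2 / 2         # reverse mean

def gauss(x, mu):
    # normal density with mean mu and standard deviation sigma
    return math.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

for W in (-1.0, 0.0, 0.5, 2.0):
    lhs = gauss(W, mu_F) / gauss(-W, mu_R)   # P_F(W) / P_R(-W)
    rhs = math.exp(beta * (W - dF))          # exp[beta (W - dF)]
    print(W, round(lhs, 9), round(rhs, 9))   # the two columns agree
```

This Gaussian case is a toy model, not a derivation; it just makes visible how the relation pins the work distribution's mean offset from $\Delta F$ to its variance.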
A lot of research has been done in this area so for more information I suggest reading some review articles, such as,
Esposito, M., Harbola, U., and Mukamel, S. (2009). Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems. Reviews of Modern Physics, 81(4), 1665. (arxiv)
and
Campisi, M., Hänggi, P., and Talkner, P. (2011). Colloquium: Quantum fluctuation relations: Foundations and applications. Reviews of Modern Physics, 83(3), 771. (arxiv)
There are various master equations (e.g., Fokker-Planck type, Boltzmann, Lindblad, etc.) in physics which will give you more information than theorems like these, but they are derived using various approximations and/or assumptions, or are system specific. So, like I said, there is no universal answer to your question.
EDIT: Deriving Fourier's law is difficult. In fact there is an article from 2000, F. Bonetto, J.L. Lebowitz and L. Rey-Bellet, "Fourier's Law: a Challenge for Theorists" (arxiv), which states in the abstract: "There is however at present no rigorous mathematical derivation of Fourier's law..."

This post imported from StackExchange Physics at 2015-06-15 19:26 (UTC), posted by SE-user Bubble
I'm reading the following set of notes on Taylor series and big O-notation, written by a professor at Columbia: http://www.math.columbia.edu/~nironi/taylor2.pdf. He repeatedly refers to what he calls "limit comparison", by which he means the theorem that for $a_n, b_n$ sequences of positive real numbers such that $b_n \to C>0$, we have that $\sum a_n <\infty$ iff $\sum a_n b_n < \infty$. On page 13, he is manipulating a series using O notation, and he ends up with $$\sum\limits_{n=1}^{\infty}(-1)^n\left(\frac{1}{n}\right)\frac{\frac{7}{12}+O(1/n^2)}{-\frac{1}{6}+O(1/n^2)},$$at which point he says "At this point we might be tempted to use limit comparison and conclude that the series is convergent; but limit comparison cannot be applied to an alternating series. Instead of using limit comparison we try to separate the part of the series that converges but not absolutely from the part that converges absolutely."
Now, in reference to the theorem he calls "limit comparison", can't we strengthen this theorem to say that if $a_n$ and $b_n$ are sequences of reals, and $b_n \to C \neq 0$, then $\sum a_n <\infty \iff \sum a_n b_n < \infty$? If not, what is a relevant counter-example? And if so, can't we use this to conclude that the above series converges?
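For what it's worth, the strengthened claim fails for conditionally convergent series. The classic counterexample is $a_n = (-1)^n/\sqrt{n}$ with $b_n = 1 + (-1)^n/\sqrt{n} \to 1$, since $a_nb_n = (-1)^n/\sqrt{n} + 1/n$, whose partial sums grow like $\log N$. A numeric illustration (not a proof):

```python
import math

def partial_sum(N, with_b):
    # partial sums of sum a_n (with_b=False) or sum a_n*b_n (with_b=True),
    # for a_n = (-1)^n/sqrt(n) and b_n = 1 + (-1)^n/sqrt(n)
    s = 0.0
    for n in range(1, N + 1):
        a = (-1) ** n / math.sqrt(n)
        b = 1 + (-1) ** n / math.sqrt(n)
        s += a * b if with_b else a
    return s

for N in (10**3, 10**4, 10**5):
    # the first column settles down; the second keeps growing like log N
    print(N, round(partial_sum(N, False), 4), round(partial_sum(N, True), 4))
```

So the hypothesis $b_n \to C \neq 0$ is only enough when $\sum a_n$ converges absolutely; for alternating series the $b_n - C$ correction can smuggle in a divergent positive part, which is exactly why the notes refuse to apply "limit comparison" there.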
My second question is that I do not understand the algebraic manipulation he does directly after the sentence quoted above. I suppose he is trying to "separate the part of the series that converges but not absolutely from the part that converges absolutely", but I don't know what this means, or what he is doing. This is all on page 13.
Thanks for your help.
Algebra is one of the major parts of Mathematics in which general symbols and letters are used to represent quantities and numbers in equations and formulae. The more basic parts of algebra are called elementary algebra and more abstract parts are called modern algebra or abstract algebra. Algebra is very important as it includes everything from elementary equation solving to the study of abstractions such as rings, groups and fields.
Vector Algebra is included in CBSE Class 12 mathematics syllabus as its importance is multifold. Vector Algebra deals with vectors – things that have both directions and magnitudes. It is important in both mathematics and physics. Learning vector algebra will help you in handling geometric transformations and it is very important in understanding Linear Algebra.
Algebra Formulas For Class 12

- If \(\vec{a}=x\hat{i}+y\hat{j}+z\hat{k}\), then the magnitude (length, norm, absolute value) of \(\vec{a}\) is \(\left|\vec{a}\right| = a = \sqrt{x^{2}+y^{2}+z^{2}}\).
- A vector of unit magnitude is a unit vector. If \(\vec{a}\) is a vector, then the unit vector of \(\vec{a}\) is denoted by \(\hat{a}\) and \(\hat{a}=\frac{\vec{a}}{\left|\vec{a}\right|}\). Therefore \(\vec{a}=\left|\vec{a}\right|\,\hat{a}\).
- Important unit vectors are \(\hat{i}, \hat{j}, \hat{k}\), where \(\hat{i} = [1,0,0],\ \hat{j} = [0,1,0],\ \hat{k} = [0,0,1]\).
- If \(l=\cos\alpha,\ m=\cos\beta,\ n=\cos\gamma\), then \(\alpha, \beta, \gamma\) are called the direction angles of the vector \(\vec{a}\), and \(\cos^{2}\alpha + \cos^{2}\beta + \cos^{2}\gamma = 1\).
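A short numeric companion to the magnitude and direction-cosine formulas above (the sample vector is arbitrary):

```python
import math

def magnitude(v):
    # |a| = sqrt(x^2 + y^2 + z^2)
    return math.sqrt(sum(c * c for c in v))

def unit(v):
    # unit vector a / |a|; its components are the direction cosines
    m = magnitude(v)
    return tuple(c / m for c in v)

a = (2.0, 3.0, 6.0)
print(magnitude(a))                  # → 7.0
l, m, n = unit(a)                    # (cos α, cos β, cos γ)
print(round(l*l + m*m + n*n, 12))    # ≈ 1, as the identity requires
```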
In Vector Addition

- \(\vec{a}+\vec{b}=\vec{b}+\vec{a}\)
- \(\vec{a}+\left(\vec{b}+\vec{c}\right)=\left(\vec{a}+\vec{b}\right)+\vec{c}\)
- \(k\left(\vec{a}+\vec{b}\right)=k\vec{a}+k\vec{b}\)
- \(\vec{a}+\vec{0}=\vec{0}+\vec{a}=\vec{a}\); therefore \(\vec{0}\) is the additive identity for vector addition.
- \(\vec{a}+\left(-\vec{a}\right)=-\vec{a}+\vec{a}=\vec{0}\); therefore \(-\vec{a}\) is the additive inverse of \(\vec{a}\).
Learning Objectives
- Evaluate square roots.
- Use the product rule to simplify square roots.
- Use the quotient rule to simplify square roots.
- Add and subtract square roots.
- Rationalize denominators.
- Use rational roots.
A hardware store sells \(16\)-ft ladders and \(24\)-ft ladders. A window is located \(12\) feet above the ground. A ladder needs to be purchased that will reach the window from a point on the ground \(5\) feet from the building. To find out the length of ladder needed, we can draw a right triangle as shown in Figure \(\PageIndex{1}\), and use the Pythagorean Theorem.
\[ \begin{align*} a^2+b^2&=c^2 \label{1.4.1} \\[4pt] 5^2+12^2&=c^2 \label{1.4.2} \\[4pt] 169 &=c^2 \label{1.4.3} \end{align*}\]
Now, we need to find out the length that, when squared, is \(169\), to determine which ladder to choose. In other words, we need to find a square root. In this section, we will investigate methods of finding solutions to problems such as this one.
Evaluating Square Roots
When the square root of a number is squared, the result is the original number. Since \(4^2=16\), the square root of \(16\) is \(4\). The square root function is the inverse of the squaring function just as subtraction is the inverse of addition. To undo squaring, we take the square root.
In general terms, if \(a\) is a positive real number, then the square root of \(a\) is a number that, when multiplied by itself, gives \(a\). The square root could be positive or negative because multiplying two negative numbers gives a positive number. The principal square root is the nonnegative number that when multiplied by itself equals \(a\). The square root obtained using a calculator is the principal square root.
The principal square root of \(a\) is written as \(\sqrt{a}\). The symbol is called a radical, the term under the symbol is called the radicand, and the entire expression is called a
radical expression.
Does \(\sqrt{25} = \pm 5\)?
Solution
No. Although both \(5^2\) and \((−5)^2\) are \(25\), the radical symbol implies only a
nonnegative root, the principal square root. The principal square root of \(25\) is \(\sqrt{25}=5\).
Note
The principal square root of \(a\) is the nonnegative number that, when multiplied by itself, equals \(a\). It is written as a radical expression, with a symbol called a
radical over the term called the radicand: \(\sqrt{a}\).
Evaluate each expression.
a. \(\sqrt{100}\) b. \(\sqrt{\sqrt{16}}\) c. \(\sqrt{25+144}\) d. \(\sqrt{49}-\sqrt{81}\) Solution: a. \(\sqrt{100} =10\) because \(10^2=100\); b. \(\sqrt{\sqrt{16}}= \sqrt{4} =2\) because \(4^2=16\) and \(2^2=4\); c. \(\sqrt{25+144} = \sqrt{169} =13\) because \(13^2=169\); d. \(\sqrt{49} -\sqrt{81} =7−9 =−2\) because \(7^2=49\) and \(9^2=81\)
For \(\sqrt{25+144}\),can we find the square roots before adding?
Solution
No. \(\sqrt{25} + \sqrt{144} =5+12=17\). This is not equivalent to \(\sqrt{25+144}=13\). The order of operations requires us to add the terms in the radicand before finding the square root.
Exercise \(\PageIndex{1}\)
Evaluate each expression.
\(\sqrt{25}\) \(\sqrt{\sqrt{81}}\) \(\sqrt{25-9}\) \(\sqrt{36} + \sqrt{121}\) Answer a
\(5\)
Answer b
\(3\)
Answer c
\(4\)
Answer d
\(17\)
Using the Product Rule to Simplify Square Roots
To simplify a square root, we rewrite it such that there are no perfect squares in the radicand. There are several properties of square roots that allow us to simplify complicated radical expressions. The first rule we will look at is the product rule for simplifying square roots, which allows us to separate the square root of a product of two numbers into the product of two separate radical expressions. For instance, we can rewrite \(\sqrt{15}\) as \(\sqrt{3}\times\sqrt{5}\). We can also use the product rule to express the product of multiple radical expressions as a single radical expression.
Simplify: a. \(\sqrt{300}\) b. \(\sqrt{162a^5b^4}\) Solution
a. \(\sqrt{100\times3}\) Factor perfect square from radicand.
\(\sqrt{100}\times\sqrt{3}\) Write radical expression as product of radical expressions.
\(10\sqrt{3}\) Simplify
b. \(\sqrt{81a^4b^4\times2a}\) Factor perfect square from radicand
\(\sqrt{81a^4b^4}\times\sqrt{2a}\) Write radical expression as product of radical expressions
\(9a^2b^2\sqrt{2a}\) Simplify
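The factor-out-the-largest-perfect-square step can be automated for integer radicands; a small Python sketch (the helper name `simplify_sqrt` is our own):

```python
import math

def simplify_sqrt(n):
    """Return (outside, inside) with sqrt(n) == outside * sqrt(inside),
    where inside has no perfect-square factor (n a positive integer)."""
    outside = 1
    # Search for the largest f with f*f dividing n; then n // (f*f) is squarefree.
    for f in range(int(math.isqrt(n)), 1, -1):
        if n % (f * f) == 0:
            outside, n = f, n // (f * f)
            break
    return outside, n

print(simplify_sqrt(300))  # (10, 3): sqrt(300) = 10*sqrt(3)
print(simplify_sqrt(162))  # (9, 2):  sqrt(162) = 9*sqrt(2)
```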
Exercise \(\PageIndex{2}\)
Simplify \(\sqrt{50x^2y^3z}\)
Answer
\(5|x||y|\sqrt{2yz}\)
Notice the absolute value signs around \(x\) and \(y\)? That’s because \(x\) and \(y\) could be negative, and the principal square root must be nonnegative!
Howto: Given the product of multiple radical expressions, use the product rule to combine them into one radical expression
Express the product of multiple radical expressions as a single radical expression. Simplify.
Simplify the radical expression.
\(\sqrt{12}\times\sqrt{3}\)
Solution
\[\begin{align*} &\sqrt{12\times3}\qquad \text{Express the product as a single radical expression}\\ &\sqrt{36}\qquad \text{Simplify}\\ &6 \end{align*}\]
Exercise \(\PageIndex{3}\)
Simplify \(\sqrt{50x}\times\sqrt{2x}\) assuming \(x>0\).
Answer
\(10x\)
Using the Quotient Rule to Simplify Square Roots
Just as we can rewrite the square root of a product as a product of square roots, so too can we rewrite the square root of a quotient as a quotient of square roots, using the quotient rule for simplifying square roots. It can be helpful to separate the numerator and denominator of a fraction under a radical so that we can take their square roots separately. We can rewrite
\[\sqrt{\dfrac{5}{2}} = \dfrac{\sqrt{5}}{\sqrt{2}}. \nonumber \]
Simplify the radical expression.
\(\sqrt{\dfrac{5}{36}}\)
Solution
\[\begin{align*} &\dfrac{\sqrt{5}}{\sqrt{36}}\qquad \text{Write as quotient of two radical expressions}\\ &\dfrac{\sqrt{5}}{6}\qquad \text {Simplify denominator} \end{align*}\]
Exercise \(\PageIndex{4}\)
Simplify \(\sqrt{\dfrac{2x^2}{9y^4}}\)
Answer
\(\dfrac{|x|\sqrt{2}}{3y^2}\)
We need the absolute value signs for \(x\) because it could be negative, but not for \(y^2\) because that term will always be nonnegative.
Simplify the radical expression.
\(\dfrac{\sqrt{234x^{11}y}}{\sqrt{26x^7y}}\)
Solution
\[\begin{align*} &\sqrt{\dfrac{234x^{11}y}{26x^7y}}\qquad \text{Combine numerator and denominator into one radical expression}\\ &\sqrt{9x^4}\qquad \text{Simplify fraction}\\ &3x^2\qquad \text{Simplify square root} \end{align*}\]
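A quick numeric spot-check of the two quotient-rule examples above (Python; the sample values for \(x\) and \(y\) are arbitrary positive numbers):

```python
import math

# sqrt(5/36) should equal sqrt(5)/6
lhs1, rhs1 = math.sqrt(5 / 36), math.sqrt(5) / 6

# sqrt(234 x^11 y) / sqrt(26 x^7 y) should equal 3 x^2 for positive x, y
x, y = 1.7, 2.3
lhs2 = math.sqrt(234 * x**11 * y) / math.sqrt(26 * x**7 * y)
rhs2 = 3 * x**2

print(abs(lhs1 - rhs1) < 1e-12, abs(lhs2 - rhs2) < 1e-9)  # True True
```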
Exercise \(\PageIndex{5}\)
Simplify \(\dfrac{\sqrt{9a^5b^{14}}}{\sqrt{3a^4b^5}}\)
Answer
\(b^4\sqrt{3ab}\)
Adding and Subtracting Square Roots
We can add or subtract radical expressions only when they have the same radicand and when they have the same radical type such as square roots. For example, the sum of \(\sqrt{2}\) and \(3\sqrt{2}\) is \(4\sqrt{2}\). However, it is often possible to simplify radical expressions, and that may change the radicand. The radical expression \(\sqrt{18}\) can be written with a \(2\) in the radicand, as \(3\sqrt{2}\), so \(\sqrt{2}+\sqrt{18}=\sqrt{2}+3\sqrt{2}=4\sqrt{2}\)
Howto: Given a radical expression requiring addition or subtraction of square roots, solve
Simplify each radical expression. Add or subtract expressions with equal radicands.
Add \(5\sqrt{12}+2\sqrt{3}\).
Solution
We can rewrite \(5\sqrt{12}\) as \(5\sqrt{4\times3}\). According to the product rule, this becomes \(5\sqrt{4}\sqrt{3}\). The square root of \(4\) is \(2\), so the expression becomes \(5\times2\sqrt{3}\), which is \(10\sqrt{3}\). Now the terms have the same radicand, so we can add.
\[10\sqrt{3}+2\sqrt{3}=12\sqrt{3} \nonumber\]
Exercise \(\PageIndex{6}\)
Add \(\sqrt{5}+6\sqrt{20}\)
Answer
\(13\sqrt{5}\)
Subtract \(20\sqrt{72a^3b^4c}-14\sqrt{8a^3b^4c}\)
Solution
Rewrite each term so they have equal radicands.
\[\begin{align*} 20\sqrt{72a^3b^4c} &= 20\sqrt{9}\sqrt{4}\sqrt{2}\sqrt{a}\sqrt{a^2}\sqrt{(b^2)^2}\sqrt{c}\\ &= 20(3)(2)|a|b^2\sqrt{2ac}\\ &= 120|a|b^2\sqrt{2ac} \end{align*}\]
\[\begin{align*} 14\sqrt{8a^3b^4c} &= 14\sqrt{2}\sqrt{4}\sqrt{a}\sqrt{a^2}\sqrt{(b^2)^2}\sqrt{c}\\ &= 14(2)|a|b^2\sqrt{2ac}\\ &= 28|a|b^2\sqrt{2ac} \end{align*}\]
Now the terms have the same radicand so we can subtract.
\[120|a|b^2\sqrt{2ac}-28|a|b^2\sqrt{2ac}=92|a|b^2\sqrt{2ac}\]
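A numeric spot-check of the subtraction above for sample values (Python; the chosen values are arbitrary, with \(a, c > 0\) so the radicands are real):

```python
import math

# Check 20*sqrt(72 a^3 b^4 c) - 14*sqrt(8 a^3 b^4 c) == 92*|a|*b^2*sqrt(2ac)
a, b, c = 2.0, 3.0, 5.0
lhs = 20 * math.sqrt(72 * a**3 * b**4 * c) - 14 * math.sqrt(8 * a**3 * b**4 * c)
rhs = 92 * abs(a) * b**2 * math.sqrt(2 * a * c)
print(abs(lhs - rhs) < 1e-9)  # True
```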
Exercise \(\PageIndex{7}\)
Subtract \(3\sqrt{80x}-4\sqrt{45x}\)
Answer
\(0\)
Rationalizing Denominators
When an expression involving square root radicals is written in simplest form, it will not contain a radical in the denominator. We can remove radicals from the denominators of fractions using a process called rationalizing the denominator.
We know that multiplying by \(1\) does not change the value of an expression. We use this property of multiplication to change expressions that contain radicals in the denominator. To remove radicals from the denominators of fractions, multiply by the form of \(1\) that will eliminate the radical.
For a denominator containing a single term, multiply by the radical in the denominator over itself. In other words, if the denominator is \(b\sqrt{c}\), multiply by \(\dfrac{\sqrt{c}}{\sqrt{c}}\).
For a denominator containing the sum or difference of a rational and an irrational term, multiply the numerator and denominator by the conjugate of the denominator, which is found by changing the sign of the radical portion of the denominator. If the denominator is \(a+b\sqrt{c}\) , then the conjugate is \(a-b\sqrt{c}\).
HowTo: Given an expression with a single square root radical term in the denominator, rationalize the denominator
Multiply the numerator and denominator by the radical in the denominator. Simplify.
Write \(\dfrac{2\sqrt{3}}{3\sqrt{10}}\) in simplest form.
Solution
The radical in the denominator is \(\sqrt{10}\). So multiply the fraction by \(\dfrac{\sqrt{10}}{\sqrt{10}}\). Then simplify.
\[\begin{align*} &\dfrac{2\sqrt{3}}{3\sqrt{10}}\times\dfrac{\sqrt{10}}{\sqrt{10}}\\ &\dfrac{2\sqrt{30}}{30}\\ &\dfrac{\sqrt{30}}{15} \end{align*}\]
Exercise \(\PageIndex{8}\)
Write \(\dfrac{12\sqrt{3}}{\sqrt{2}}\) in simplest form.
Answer
\(6\sqrt{6}\)
How to: Given an expression with a radical term and a constant in the denominator, rationalize the denominator
Find the conjugate of the denominator. Multiply the numerator and denominator by the conjugate. Use the distributive property. Simplify.
Write \(\dfrac{4}{1+\sqrt{5}}\) in simplest form.
Solution
Begin by finding the conjugate of the denominator by writing the denominator and changing the sign. So the conjugate of \(1+\sqrt{5}\) is \(1-\sqrt{5}\). Then multiply the fraction by \(\dfrac{1-\sqrt{5}}{1-\sqrt{5}}\) .
\[\begin{align*} &\dfrac{4}{1+\sqrt{5}}\times\dfrac{1-\sqrt{5}}{1-\sqrt{5}}\\ &\dfrac{4-4\sqrt{5}}{-4}\qquad \text{Use the distributive property}\\ &\sqrt{5}-1\qquad \text{Simplify} \end{align*}\]
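Both rationalized forms can be spot-checked numerically (a short Python sketch):

```python
import math

# The rationalized forms should equal the original fractions
orig1 = 2 * math.sqrt(3) / (3 * math.sqrt(10))
simp1 = math.sqrt(30) / 15

orig2 = 4 / (1 + math.sqrt(5))
simp2 = math.sqrt(5) - 1

print(abs(orig1 - simp1) < 1e-12, abs(orig2 - simp2) < 1e-12)  # True True
```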
Exercise \(\PageIndex{9}\)
Write \(\dfrac{7}{2+\sqrt{3}}\) in simplest form.
Answer
\(14-7\sqrt{3}\)
Using Rational Roots
Although square roots are the most common rational roots, we can also find cube roots, \(4^{th}\) roots, \(5^{th}\) roots, and more. Just as the square root function is the inverse of the squaring function, these roots are the inverse of their respective power functions. These functions can be useful when we need to determine the number that, when raised to a certain power, gives a certain number.
Understanding \(n^{th}\) Roots
Suppose we know that \(a^3=8\). We want to find what number raised to the \(3^{rd}\) power is equal to \(8\). Since \(2^3=8\) , we say that \(2\) is the cube root of \(8\).
The \(n^{th}\) root of \(a\) is a number that, when raised to the \(n^{th}\) power, gives \(a\). For example, \(−3\) is the \(5^{th}\) root of \(−243\) because \({(-3)}^5=-243\). If \(a\) is a real number with at least one \(n^{th}\) root, then the principal \(n^{th}\) root of \(a\) is the number with the same sign as \(a\) that, when raised to the \(n^{th}\) power, equals \(a\).
The principal \(n^{th}\) root of \(a\) is written as \(\sqrt[n]{a}\) , where \(n\) is a positive integer greater than or equal to \(2\). In the radical expression, \(n\) is called the index of the radical.
Note: Principal \(n^{th}\) Root
If \(a\) is a real number with at least one \(n^{th}\) root, then the
principal \(n^{th}\) root of \(a\), written as \(\sqrt[n]{a}\), is the number with the same sign as \(a\) that, when raised to the \(n^{th}\) power, equals \(a\). The index of the radical is \(n\).
Using Rational Exponents
Radical expressions can also be written without using the radical symbol. We can use rational (fractional) exponents. The index must be a positive integer. If the index \(n\) is even, then \(a\) cannot be negative.
We can also have rational exponents with numerators other than \(1\). In these cases, the exponent must be a fraction in lowest terms. We raise the base to a power and take an \(n^{th}\) root. The numerator tells us the power and the denominator tells us the root.
All of the properties of exponents that we learned for integer exponents also hold for rational exponents.
Write \(343^{\tfrac{2}{3}}\) as a radical. Simplify.
Solution
The \(2\) tells us the power and the \(3\) tells us the root.
\(343^{\tfrac{2}{3}}={(\sqrt[3]{343})}^2=\sqrt[3]{{343}^2}\)
We know that \(\sqrt[3]{343}=7\) because \(7^3 =343\) . Because the cube root is easy to find, it is easiest to find the cube root before squaring for this problem. In general, it is easier to find the root first and then raise it to a power.
\[343^{\tfrac{2}{3}}={(\sqrt[3]{343})}^2=7^2=49\]
Exercise \(\PageIndex{11}\)
Write \(9^{\tfrac{5}{2}}\) as a radical. Simplify.
Answer
\({(\sqrt{9})}^5=3^5=243\)
Write \(\dfrac{4}{\sqrt[7]{a^2}}\) using a rational exponent.
Solution
The power is \(2\) and the root is \(7\), so the rational exponent will be \(\dfrac{2}{7}\). We get \(\dfrac{4}{a^{\tfrac{2}{7}}}\). Using properties of exponents, we get \(\dfrac{4}{\sqrt[7]{a^2}}=4a^{\tfrac{-2}{7}}\)
Exercise \(\PageIndex{12}\)
Write \(x\sqrt{{(5y)}^9}\) using a rational exponent.
Answer
\(x(5y)^{\dfrac{9}{2}}\)
Simplify:
a. \(5(2x^{\tfrac{3}{4}})(3x^{\tfrac{1}{5}})\)
b. \(\left(\dfrac{16}{9}\right)^{-\tfrac{1}{2}}\)
Solution
a.
\[\begin{align*} &30x^{\tfrac{3}{4}}\: x^{\tfrac{1}{5}}\qquad \text{Multiply the coefficients}\\ &30x^{\tfrac{3}{4}+\tfrac{1}{5}}\qquad \text{Use properties of exponents}\\ &30x^{\tfrac{19}{20}}\qquad \text{Simplify} \end{align*}\]
b.
\[\begin{align*} &{\left(\dfrac{9}{16}\right)}^{\tfrac{1}{2}}\qquad \text{Use definition of negative exponents}\\ &\sqrt{\dfrac{9}{16}}\qquad \text{Rewrite as a radical}\\ &\dfrac{\sqrt{9}}{\sqrt{16}}\qquad \text{Use the quotient rule}\\ &\dfrac{3}{4}\qquad \text{Simplify} \end{align*}\]
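The worked rational-exponent simplifications can be verified numerically (Python sketch):

```python
# Rational exponents agree with the equivalent radical forms
# (comparisons use a tolerance because of floating-point rounding)
print(abs(343 ** (2 / 3) - 49) < 1e-9)         # 343^(2/3) = (cbrt 343)^2 = 7^2 = 49
print(abs((16 / 9) ** (-1 / 2) - 3 / 4) < 1e-12)  # (16/9)^(-1/2) = sqrt(9/16) = 3/4
print(abs(9 ** (5 / 2) - 243) < 1e-9)          # 9^(5/2) = (sqrt 9)^5 = 3^5 = 243
```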
Exercise \(\PageIndex{13}\)
Simplify \({(8x)}^{\tfrac{1}{3}}\left(14x^{\tfrac{6}{5}}\right)\)
Answer
\(28x^{\tfrac{23}{15}}\)
Key Concepts: The principal square root of a number \(a\) is the nonnegative number that when multiplied by itself equals \(a\). If \(a\) and \(b\) are nonnegative, the square root of the product \(ab\) is equal to the product of the square roots of \(a\) and \(b\). If \(a\) and \(b\) are nonnegative, the square root of the quotient \(\dfrac{a}{b}\) is equal to the quotient of the square roots of \(a\) and \(b\). We can add and subtract radical expressions if they have the same radicand and the same index. Radical expressions written in simplest form do not contain a radical in the denominator. To eliminate the square root radical from the denominator, multiply both the numerator and the denominator by the conjugate of the denominator. The principal \(n^{th}\) root of \(a\) is the number with the same sign as \(a\) that when raised to the \(n^{th}\) power equals \(a\). These roots have the same properties as square roots. Radicals can be rewritten as rational exponents and rational exponents can be rewritten as radicals. The properties of exponents apply to rational exponents. |
Let $R$ be a finite ring with identity $1$, and assume $\exists x,y\in R$ such that $ xy=1$. How can I show it implies $yx=1$?
Hint: $xy=1$ implies that left multiplication by $y$ is one-to-one. Can you draw a conclusion whether or not there is a $z$ such that $yz=1$?
If so, you can complete the argument by showing that $z=x$.
Hint $\ $ As often occurs, this result on numbers is a special case of a result on functions: namely, consider $\rm\:x,y\:$ as left-multiplication maps $\rm\:f(r) = xr,\ g(r) = yr,\:$ then apply the following Lemma $\rm\ fg = 1\ \Rightarrow\ gf = 1\ $ for maps $\rm\:f,g\:$ on a finite set $\rm\:R.$
$\rm(1)\ \ \ fg = 1\ \Rightarrow\ g\ is\ 1\!-\!1\:$ by $\rm\:f\:$ of $\rm\:g(a) = g(b)\ \Rightarrow\ a = b $
$\rm(2)\ \ \ g\ is\ 1\!-\!1\ \Rightarrow\ g\:$ is onto, since $\rm\:R\:$ is finite
$\rm(3)\ \ \ g\ is\ onto\ \Rightarrow\ gf = 1\:$ by $\rm\ a = g(b) = g(fg(b)) = gf(a)$
Remark $\ $ In fact we may view the ring as the set of such maps (left-regular representation), where the elements of $\rm\:R\:$ are essentially viewed as $1$-dimensional matrices. Then the above is analogous to a well-known result about matrices, e.g. see my post here where I prove $\rm\ AB = I\:\Rightarrow\; BA = 1,\:$ or, equivalently, $\rm\:B\:$ injective $\rm \Rightarrow$ $\rm\: B\:$ surjective, by exploiting the pigeonhole principle. See also other posts in that thread which clarify the fundamental role played by the pigeonhole principle. See also this question on Dedekind-finite rings, i.e. rings where $\rm\:xy = 1\:\Rightarrow\: yx = 1.$
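The lemma can also be spot-checked by brute force in a concrete finite ring, e.g. the (noncommutative) ring of $2\times 2$ matrices over $\mathbb{Z}/2\mathbb{Z}$; a Python sketch (helper names are our own):

```python
from itertools import product

# Brute-force check of "xy = 1 implies yx = 1" in a small finite ring:
# the 16-element ring of 2x2 matrices over Z/2Z.
def mul(A, B):
    # Matrix product with entries reduced mod 2
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
ring = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]

for X in ring:
    for Y in ring:
        if mul(X, Y) == I:
            assert mul(Y, X) == I  # one-sided inverses are two-sided here
print("verified for all", len(ring) ** 2, "pairs")  # verified for all 256 pairs
```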
Let $f_y\colon R\rightarrow R,\ z\mapsto yz$; then: $$f_y(z)=f_y(t)\iff yz=yt\Rightarrow x(yz)=x(yt)\Rightarrow (xy)z=(xy)t\Rightarrow z=t,$$ hence $f_y$ is one-to-one. Now since $R$ is finite, the map $f_y$ is bijective, hence there is a unique $z\in R$ such that $f_y(z)=yz=1$. Then $x=x(yz)=(xy)z=z$, so $yx=yz=1$, and we conclude. |
When using SVM, we need to select a kernel.
I wonder how to select a kernel. Any criteria on kernel selection?
The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea.
In the absence of expert knowledge, the Radial Basis Function kernel makes a good default kernel (once you have established it is a problem requiring a non-linear model).
The choice of the kernel and kernel/regularisation parameters can be automated by optimising a cross-validation based model selection criterion (or use the radius-margin or span bounds). The simplest thing to do is to minimise a continuous model selection criterion using the Nelder-Mead simplex method, which doesn't require gradient calculation and works well for sensible numbers of hyper-parameters. If you have more than a few hyper-parameters to tune, automated model selection is likely to result in severe over-fitting, due to the variance of the model selection criterion. It is possible to use gradient based optimisation, but the performance gain is not usually worth the effort of coding it up.
Automated choice of kernels and kernel/regularization parameters is a tricky issue, as it is
very easy to overfit the model selection criterion (typically cross-validation based), and you can end up with a worse model than you started with. Automated model selection also can bias performance evaluation, so make sure your performance evaluation evaluates the whole process of fitting the model (training and model selection), for details, see
G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. (pdf)
and
G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010.(pdf)
If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ...). In this case you can even use a combination of classifiers (if your problem is classification) obtained with different kernels. However, the "advantage" of working with a kernel is that you change the usual "Euclidean" geometry so that it fits your own problem. Also, you should really try to understand what the interest of a kernel is for your problem, what is particular to the geometry of your problem. This can include:
$$ \hat{f}(x)=\sum_{i=1}^n \lambda_i K(x,x_i)$$
If you know that a linear separator would be a good one, then you can use Kernel that gives affine functions (i.e. $K(x,x_i)=\langle x,A x_i\rangle+c$). If you think smooth boundaries much in the spirit of smooth KNN would be better, then you can take a gaussian kernel...
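For concreteness, the affine and Gaussian (RBF) kernels mentioned above can be written as plain functions (a Python sketch; `gamma` and `c` are hyperparameters that would still need tuning, e.g. by cross-validation):

```python
import math

def linear_kernel(x, z, c=0.0):
    # K(x, z) = <x, z> + c   (an affine kernel when c != 0)
    return sum(xi * zi for xi, zi in zip(x, z)) + c

def rbf_kernel(x, z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2)
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

x, z = (1.0, 2.0), (2.0, 0.0)
print(linear_kernel(x, z))  # 2.0
print(rbf_kernel(x, x))     # 1.0: a point is maximally similar to itself
```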
I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search.
In general, the RBF kernel is a reasonable first choice. Furthermore, the linear kernel is a special case of RBF. In particular, when the number of features is very large, one may just use the linear kernel. |
I'm reading "The Variational Principles of Mechanics" by Lanczos.
The author mentions a relation between the work function $U(q_1,q_2,\cdots,q_n,\dot q_1,\dot q_2,\cdots,\dot q_n)$ and the potential energy $V(q_1,q_2,\cdots,q_n)$:
$$V=\sum_{i=1}^n \frac{\partial U}{\partial \dot q_i}\dot q_i-U \tag{1}$$
$q_i$'s are the generalized coordinates
The work function and the generalized force $(Q_j)$ are related as
$$Q_j=\frac{\partial U}{\partial q_j}-\frac{d}{dt}\frac{\partial U}{\partial \dot q_j} \tag{2}$$
Looking at equation $(1)$, I can only tell that $V$ is the Legendre transform of $U$, but I'm not able to prove it. The work function, as we can see, also depends on $\dot q_i$.
And we usually have velocity-independent work functions; in this case equation $(1)$ reduces to $V=-U$, and equation $(2)$ becomes
$$Q_i=-\frac{\partial V}{\partial q_i}$$
which is the well-known equation for conservative forces.
I searched the internet but couldn't find anything close to this. Can somebody give me a clue on how to derive this? Any help is appreciated. |
I am getting stuck in two integrals involving Bessel functions and hoping someone to help me out.
We know that the Bessel function, $$J_v(x)=x^v\sum_{r=0}^{\infty}\frac{(-1)^rx^{2r}}{2^{2r+v}r!\Gamma(r+v+1)},$$ and the modified Bessel function, $$I_v(x)=\sum_{r=0}^{\infty}\frac{1}{r!\Gamma(r+v+1)}\left(\frac{x}{2}\right)^{v+2r}.$$ Now, how can one establish the following two integrals? I highly suspect the correctness of the second one. (P625, Integral Transforms and Their Applications, third edition, Lokenath Debnath and Dambaru Bhatta)
$\int_0^\infty \exp(-a^2t^2)J_v(bt)J_v(ct)\,t\,dt=\frac{1}{2a^2}\exp\!\left(-\frac{b^2+c^2}{4a^2}\right)I_v\!\left(\frac{bc}{2a^2}\right), \quad v>-1$.
$\int_0^\infty t^{2u-v-1}J_v(t)dt=2^{2u-v-1} \frac{ \Gamma(u)}{\Gamma(v-u+1)}, \quad 0<u<\frac{1}{2}, \quad v>-\frac{1}{2}$.
Thanks in advance |
Lindelöf, Countably Compact, and BW Spaces Review
We will now review some of the recent material regarding Lindelöf spaces, countably compact spaces, and BW spaces.
Recall from the Lindelöf and Countably Compact Topological Spaces page that a topological space $X$ is said to be Lindelöf if every open cover of $X$ has a countable subcover. Furthermore, $X$ is said to be Countably Compact if every countable open cover of $X$ has a finite subcover. On The Lindelöf Lemma page we proved a very nice theorem which says that every second countable topological space is Lindelöf. On the Bolzano Weierstrass Topological Spaces page we looked at a new type of topological space. We said that a topological space $X$ is a Bolzano Weierstrass Space, or simply a "BW Space", if every infinite subset of $X$ has an accumulation point. We saw that $\mathbb{R}$ with the usual topology is not a BW space. This is because the infinite subset of integers $\mathbb{Z} \subset \mathbb{R}$ has no accumulation point. On the Hausdorff Spaces Are BW Spaces If and Only If They're Countably Compact page we saw that if $X$ is a Hausdorff space then $X$ is a BW space if and only if $X$ is countably compact. On the Compact Spaces as BW Spaces page we (more generally) saw that if $X$ is a compact space then $X$ is a BW space. Further restrictions need to be applied for the converse of this result to be true though. On The Lebesgue Number Lemma page we looked at a very famous result known as the Lebesgue Number Lemma. It says that if $(X, d)$ is a metric space that is also a BW space, then for every open cover $\mathcal F$ there exists an $\epsilon > 0$ such that for all $x \in X$ there exists a $U \in \mathcal F$ such that:
\begin{align} \quad B(x, \epsilon) \subseteq U \end{align}
Such a number $\epsilon > 0$ satisfying the definition above is called a Lebesgue Number. Using this result, we saw on the Metric Spaces Are Compact Spaces If and Only If They're BW Spaces page that if $X$ is a metric space then $X$ is compact if and only if $X$ is a BW space. As a nice consequence, on the Metric Spaces Are Compact Spaces If and Only If They're Countably Compact page we further saw that if $X$ is a metric space then $X$ is compact if and only if $X$ is countably compact. Compactness implying countable compactness was obvious. The converse was rather simple to show too, since a countably compact metric space $X$ is Hausdorff and countably compact, which implies that $X$ is a BW metric space, which implies that $X$ is compact. |
In this section we describe a systematic method that determines the greatest common divisor of two integers. This method is called the Euclidean algorithm.
[lem1] If \(a\) and \(b\) are two integers and \(a=bq+r\) where also \(q\) and \(r\) are integers, then \((a,b)=(r,b)\).
Note that by theorem 8, we have \((bq+r,b)=(b,r)\).
The above lemma will lead to a more general version of it. We now present the Euclidean algorithm in its general form. It states that the greatest common divisor of two integers is the last nonzero remainder in the successive divisions.
Let \(a=r_0\) and \(b=r_1\) be two positive integers with \(a\geq b\). Apply the division algorithm successively to obtain \[r_j=r_{j+1}q_{j+1}+r_{j+2} \ \ \mbox{where} \ \ 0\leq r_{j+2}<r_{j+1}\] for all \(j=0,1,...,n-2\), with \[r_{n+1}=0.\] Then \((a,b)=r_{n}\).
By applying the division algorithm, we see that \[\begin{aligned} r_0&=&r_1q_1+r_2 \ \ \ \ \ 0\leq r_2<r_1, \\ r_1&=&r_2q_2+r_3 \ \ \ \ \ 0\leq r_3<r_2, \\ &.& \\ &.& \\ &.& \\ r_{n-2}&=&r_{n-1}q_{n-1}+r_{n} \ \ \ \ \ 0\leq r_{n}<r_{n-1}, \\ r_{n-1}&=&r_{n}q_{n}.\end{aligned}\] Notice that, we will have a remainder of \(0\) eventually since all the remainders are integers and every remainder in the next step is less than the remainder in the previous one. By Lemma [lem1], we see that \[(a,b)=(b,r_2)=(r_2,r_3)=...=(r_n,0)=r_n.\]
We will find the greatest common divisor of \(4147\) and \(10672\):
Note that \[\begin{aligned} 10672&=&4147\cdot 2+2378,\\ 4147&=&2378\cdot 1+1769,\\ 2378&=&1769\cdot 1+609,\\ 1769&=&609\cdot 2 +551,\\ 609&=& 551\cdot 1+58, \\ 551&=&58\cdot 9+ 29,\\ 58&=&29\cdot 2,\\\end{aligned}\] Hence \((4147,10672)=29.\)
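The successive divisions above translate directly into a short loop (a Python sketch; Python's built-in `math.gcd` does the same thing):

```python
def gcd(a, b):
    # Repeatedly replace (a, b) by (b, a mod b); the last nonzero
    # remainder is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(10672, 4147))  # 29, matching the worked example
```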
We now use the steps in the Euclidean algorithm to write the greatest common divisor of two integers as a linear combination of the two integers. The following example will actually determine the variables \(m\) and \(n\) described in Theorem [thm9]. The following algorithm can be described by a general form but for the sake of simplicity of expressions we will present an example that shows the steps for obtaining the greatest common divisor of two integers as a linear combination of the two integers.
Express 29 as a linear combination of \(4147\) and \(10672\):
\[\begin{aligned} 29&=&551-9\cdot 58,\\ &=& 551-9(609-551\cdot 1),\\ &=& 10\cdot 551-9\cdot 609,\\ &=& 10\cdot (1769-609\cdot 2)-9\cdot 609,\\ &=& 10\cdot 1769-29\cdot 609,\\ &=& 10\cdot 1769-29(2378-1769\cdot 1),\\ &=& 39\cdot 1769-29\cdot 2378,\\ &=& 39(4147-2378\cdot 1)-29\cdot 2378,\\ &=& 39\cdot 4147-68\cdot 2378,\\ &=& 39\cdot 4147-68(10672-4147\cdot 2),\\ &=& 175\cdot 4147-68\cdot 10672,\end{aligned}\]
As a result, we see that \(29=175\cdot 4147-68\cdot 10672\).
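The back-substitution can also be done in one forward pass (the extended Euclidean algorithm); a Python sketch with our own helper name:

```python
def extended_gcd(a, b):
    # Returns (g, m, n) with g = gcd(a, b) = m*a + n*b, maintaining the
    # invariant old_r = old_m * a + old_n * b at every step.
    old_r, r = a, b
    old_m, m = 1, 0
    old_n, n = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_m, m = m, old_m - q * m
        old_n, n = n, old_n - q * n
    return old_r, old_m, old_n

print(extended_gcd(4147, 10672))  # (29, 175, -68): 29 = 175*4147 - 68*10672
```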
Exercises
Use the Euclidean algorithm to find the greatest common divisor of 412 and 32 and express it in terms of the two integers.
Use the Euclidean algorithm to find the greatest common divisor of 780 and 150 and express it in terms of the two integers.
Find the greatest common divisor of \(70,98, 108\).
Let \(a\) and \(b\) be two positive even integers. Prove that \((a,b)=2(a/2,b/2).\)
Show that if \(a\) and \(b\) are positive integers where \(a\) is even and \(b\) is odd, then \((a,b)=(a/2,b).\) |
This is a model that is used to model soccer scores, so $i$ and $j$ are, respectively, the home and away teams. Random variables $(x,y)$ are the goals scored by the home and away teams, respectively. Parameter $\lambda$ is the known mean number of goals scored by the home team and $\mu$ is the mean number of goals scored by the away team. I have managed to fix all the other parameters except for $\rho$, which I have to estimate via MLE.
$$Pr(X_{i,j}=x, Y_{i,j}=y)=\tau_{\lambda, \mu}(x,y)\frac{\lambda^x \text{exp}(-\lambda)}{x!}\frac{\mu^y\text{exp}(-\mu)}{y!}$$ where $$\lambda=\alpha_{i}\beta_{j}\gamma$$ $$\mu=\alpha_{j}\beta_{i}$$ and $$\tau_{\lambda,\mu}(x,y)=\left\{\begin{array}{cc} 1-\lambda\mu\rho &\text{if $x=y=0$,} \\ 1+\lambda\rho &\text{if $x=0,y=1$,}\\ 1+\mu\rho &\text{if $x=1,y=0$,}\\ 1-\rho &\text{if $x=y=1$,}\\ 1 &\text{otherwise}\end{array} \right.$$
Based on the above equations, all the parameters $(\lambda, \mu, \alpha, \beta, \gamma)$ are known constants.
So, now, the problem that I am having is that I have no clue how to estimate $\rho$ using maximum likelihood, since a piecewise function is involved.
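One way to see past the piecewise factor: $\tau$ simply multiplies the independent-Poisson likelihood, so $\log \tau$ adds a term to the log-likelihood, and the one-dimensional problem in $\rho$ can be solved by a direct search over the admissible range. A Python sketch (the question asks for R, but the same structure carries over; the helper names and the synthetic `matches` list are our own, purely illustrative):

```python
import math

def tau(x, y, lam, mu, rho):
    # The piecewise dependence factor from the model
    if x == 0 and y == 0: return 1 - lam * mu * rho
    if x == 0 and y == 1: return 1 + lam * rho
    if x == 1 and y == 0: return 1 + mu * rho
    if x == 1 and y == 1: return 1 - rho
    return 1.0

def neg_log_lik(rho, matches):
    # matches: list of (x, y, lam, mu) with lam, mu known per match
    total = 0.0
    for x, y, lam, mu in matches:
        t = tau(x, y, lam, mu, rho)
        if t <= 0:
            return float("inf")  # rho outside the admissible range
        total += (math.log(t) + x * math.log(lam) - lam - math.lgamma(x + 1)
                  + y * math.log(mu) - mu - math.lgamma(y + 1))
    return -total

# One-dimensional grid search over rho (all other parameters held fixed)
matches = [(0, 0, 1.4, 1.1), (1, 1, 1.2, 0.9), (2, 0, 1.6, 0.8), (0, 1, 1.1, 1.3)]
grid = [i / 1000 for i in range(-200, 201)]
rho_hat = min(grid, key=lambda r: neg_log_lik(r, matches))
print(rho_hat)
```

In practice one would replace the grid search with a one-dimensional optimiser (e.g. `optimize` in R, or `scipy.optimize.minimize_scalar`).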
Also, it will be great if anyone can do this using R. |
Functional a posteriori error estimates and adaptivity for IgA schemes Dr. Svetlana Matculevich March 28, 2017, 3:30 p.m. S2 059
We are concerned with guaranteed error control of Isogeometric Analysis (IgA) numerical approximations of elliptic boundary value problems (BVPs). The approach is discussed within the paradigm of the classical linear Poisson–Dirichlet model problem: find $u: \overline{\Omega} \rightarrow ℝ$ such that
$$ - \Delta_x u = f \;\; \rm{in} \;\; \Omega, \qquad u = u_D \;\; \rm{on} \;\; \partial \Omega, \qquad (1) $$
where $\Omega \subset ℝ^d$, $d \in \{1, 2, 3\}$, denotes a bounded domain having a Lipschitz boundary $\partial \Omega$, $\Delta_x$ is the Laplace operator in space, $f \in L^{2}(\Omega)$ is a given source function, and $u_D \in H_0^1 (\Sigma)$ is a given load on the boundary.
We conduct the numerical study of the functional a posteriori error estimates integrated into the IgA framework. These so-called majorants and minorants were originally introduced in [1] and later applied to different mathematical models. This type of error estimate can exploit the higher smoothness of B-spline (NURBS, THB-spline) basis functions to its advantage. Since the obtained approximations are generally $C^{p-1}$-continuous (provided that the inner knots have multiplicity $1$), their gradients automatically lie in the space $H(\Omega, \mathtt{div})$. Therefore, there is no need to project the gradient $\nabla u_h \in L^{2}(\Omega, ℝ^d)$ into $H(\Omega, \mathtt{div})$.
The functional approach to error estimation in combination with IgA approximations (generated by tensor-product splines) was investigated in [2] for (1). In the current work, we test the algorithm for the majorant reconstruction suggested in [2], which allows a considerable reduction of the time costs for computing the error estimates and, at the same time, generates guaranteed, sharp, and fully computable bounds on the errors. Moreover, we combine functional error estimates with THB-splines (the implementation provided by G+smo) and demonstrate their efficiency with respect to adaptive mesh generation in IgA schemes.
This is a joint work with Prof. Ulrich Langer (RICAM) and Prof. Sergey Repin (St. Petersburg Department of V.A. Steklov Institute of Mathematics RAS).
[1] S. Repin, A posteriori error estimation for nonlinear variational problems by duality theory, Zapiski Nauch. Sem. V. A. Steklov Math. Institute in St.-Petersburg (POMI), 243, 201--214, 1997. [2] S. K. Kleiss and S. K. Tomar, Guaranteed and sharp a posteriori error estimates in isogeometric analysis, Computers & Mathematics with Applications, 70 (3), 167--190, 2015. [3] G. Kiss, C. Giannelli, U. Zore, B. Jüttler, D. Grossmann, and J. Barner, Adaptive CAD model (re-)construction with THB-splines, Graph. Models, 76, 273--288, 2014. |
A few questions about the equivalence between 2-types and crossed modules. For simplicity, assume everything is connected.
What is the precise statement? Is there an equivalence of categories (or at least a bijection of isomorphism classes) of the form
$$\{\text{2-truncated spaces}\}[\text{weak homotopy equivalence}^{-1}] \simeq \{\text{crossed modules}\}[\mathcal{W}^{-1}]$$
for some class of morphisms $\mathcal W$? If so, what is $\mathcal W$? Is it exactly the isomorphisms? What is a precise reference?
Crossed modules $1 \to A \to H_2 \to H_1 \to G \to 1$ are classified by $H^3(G;A)$ up to zigzags of morphisms of extensions. Does this translate to a classification of connected homotopy 2-types $X$ with $\pi_1(X) = G$ and $\pi_2(X) = A$ up to homotopy equivalence, or is it only up to a coarser equivalence relation? If this is a classification up to homotopy equivalence, then how does one see that a non-invertible morphism of crossed modules induces a homotopy equivalence of classifying spaces?
In any event, from every crossed module $1 \to A \to H_2 \to H_1 \to G \to 1$, I can extract an element of $H^3(G;A)$. Homotopically, this corresponds to a Postnikov invariant for a possibly non-principal Postnikov tower. Where is the theory of non-principal Postnikov invariants written, and in particular does this invariant (exist and) completely classify a 2-type?
I am trying to find the volume inside the sphere $x^2 + y^2 + z^2 = 9$ but outside the hyperboloid $x^2 + y^2 - z^2 = 1$ by using a triple integral. For some reason I just can't seem to come up with the bounds of integration for this problem. To be more precise, it's the region lying to the side of the hyperboloid, wrapping around it, creating a sort of donut shape.
Just a picture, not an answer.
Hint:
The volume inside the sphere but outside the hyperboloid can be seen as the volume of the hyperboloid region subtracted from the volume of the sphere. In Cartesian coordinates, the volume of the hyperboloid region can be written as the iterated integral below.
Equating both surfaces gives $$z^2 = 9 - x^2 - y^2 = x^2 + y^2 - 1 \implies 2(x^2 + y^2) = 10$$ $$\implies x^2 + y^2 = 5, \quad z^2 = 9 - 5 = 4 \implies z = \pm 2.$$ Let us slice the solid parallel to the $xy$-plane; the limits of $z$ are $(-2, 2)$. Each slice is the disk enclosed by the circle $x^2 + y^2 = z^2 + 1$, which is a circle of radius $\sqrt{z^2 + 1}$. To find the limits of $x$ and $y$, we slice this disk vertically, so $x$ ranges over $$[-\sqrt{z^2 + 1},\ \sqrt{z^2 + 1}]$$ and along each slice $y$ goes from the bottom of the circle, $y = -\sqrt{z^2 + 1 - x^2}$, to the top of the circle, $y = \sqrt{z^2 + 1 - x^2}$.
Putting this altogether, the volume of the hyperboloid =
$$ V_{hyperboloid} = \int_{-2}^{2} \int_{-\sqrt{z^2 + 1}}^{\sqrt{z^2 + 1}} \int_{-\sqrt{z^2 + 1 - x^2}}^{\sqrt{z^2 + 1 - x^2}} dydxdz$$
Now the volume of the sphere with a radius of 3 $$V_{sphere} = \frac{4}{3}\pi 3^3$$
Thus the volume inside the sphere but outside the hyperboloid is $$V_{sphere} - V_{hyperboloid} - V = 36\pi - \frac{28}{3}\pi - V = \frac{80}{3}\pi - V,$$ where $V$ is the volume of the two spherical caps with $2 \le |z| \le 3$. That volume is
$$ V = 2\int_{0}^{2\pi} \int_{0}^{\sqrt{5}} \left(\sqrt{9-r^2} - 2\right) r \, dr \, d\theta = \frac{16\pi}{3} \approx 16.755$$
Thus the required volume is $\frac{80}{3}\pi - \frac{16\pi}{3} = \frac{64\pi}{3}$.
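As a numerical sanity check (a sketch of our own, using washers rather than the iterated integral above): at each height $z$ with $|z| \le 2$, the cross-section of the donut is the annulus between the hyperboloid (inner boundary) and the sphere (outer boundary), and integrating its area reproduces $\frac{64\pi}{3} \approx 67.02$.

```python
import numpy as np
from scipy.integrate import quad

# annulus area at height z: outer radius^2 = 9 - z^2 (sphere),
# inner radius^2 = 1 + z^2 (hyperboloid); nonempty only for |z| <= 2
ring_area = lambda z: np.pi * ((9 - z**2) - (1 + z**2))

V, _ = quad(ring_area, -2, 2)
print(V, 64 * np.pi / 3)   # both approximately 67.02
```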
Wind direction (here measured in degrees, presumably as a compass direction clockwise from North) is a circular variable. The test is that the conventional beginning of the scale is the same as the end, i.e. $0^\circ = 360^\circ$. When treated as a predictor it is probably best mapped to sine and cosine. Whatever your software, it is likely to expect angles to be measured in radians, so the conversion will be some equivalent of
$ \sin(\pi\ \text{direction} / 180), \cos(\pi\ \text{direction} / 180)$
given that $2 \pi$ radians $= 360^\circ$. Similarly time of day measured in hours from midnight can be mapped to sine and cosine using
$ \sin(\pi\ \text{time} / 12), \cos(\pi\ \text{time} / 12)$
or
$ \sin(\pi (\text{time} + 0.5) / 12), \cos(\pi (\text{time} + 0.5) / 12)$
depending on exactly how time was recorded or should be interpreted.
Sometimes nature or society is obliging and dependence on the circular variable takes the form of some direction being optimal for the response and the opposite direction (half the circle away) being pessimal. In that case a single sine and cosine term may suffice; for more complicated patterns you may need other terms. For much more detail a tutorial on this technique of circular, Fourier, periodic, trigonometric regression may be found here, with in turn further references. The good news is that once you have created sine and cosine terms they are just extra predictors in your regression.
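To make the recipe concrete, here is a minimal sketch in Python (the data are simulated and all names are ours): each circular variable enters an ordinary least-squares fit as one sine and one cosine term, and the fitted coefficients recover the simulated dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
direction = rng.uniform(0, 360, 200)   # wind direction in degrees
hour = rng.uniform(0, 24, 200)         # time of day in hours from midnight

# map each circular variable to a (sin, cos) pair
rad = np.pi * direction / 180
tod = np.pi * hour / 12
X = np.column_stack([np.ones(200),
                     np.sin(rad), np.cos(rad),
                     np.sin(tod), np.cos(tod)])

# simulated response with one optimal wind direction and a daily cycle;
# note 2*cos(rad - pi/4) = sqrt(2)*sin(rad) + sqrt(2)*cos(rad)
y = 3 + 2 * np.cos(rad - np.pi / 4) + 1.5 * np.sin(tod) + rng.normal(0, 0.1, 200)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # close to [3, 1.41, 1.41, 1.5, 0]
```

The fitted sine and cosine coefficients for a circular predictor can be converted back to an amplitude and a phase (the optimal direction) if those are easier to interpret.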
There is a large literature on circular statistics, itself seen as part of directional statistics. Oddly, this technique is often not mentioned, as focus in that literature is commonly on circular response variables. Summarising circular variables by their vector means is a standard descriptive method but is not required or directly helpful for regression.
Some details on terminology Wind direction and time of day are in statistical terms variables, not parameters, whatever the usage in your branch of science.
Linear regression is defined by linearity in parameters, i.e. for a vector $y$ predicted by $X\beta$ it is the vector of parameters $\beta$, not the matrix of predictors $X$, that is more crucial. So, in this case, the fact that predictors such as sine and cosine are measured on circular scales and also restricted to $[-1, 1]$ is no barrier to their appearing in linear regression.
Incidental comment For a response variable such as particle concentration I'd expect to use a generalised linear model with logarithmic link to ensure positive predictions.
I have found a new proof of the Barwise extension theorem, that wonderful yet quirky result of classical admissible set theory, which says that every countable model of set theory can be extended to a model of $\text{ZFC}+V=L$.
Barwise Extension Theorem. (Barwise 1971) $\newcommand\ZF{\text{ZF}}\newcommand\ZFC{\text{ZFC}}$ Every countable model of set theory $M\models\ZF$ has an end-extension to a model of $\ZFC+V=L$.
The Barwise extension theorem is both (i) a technical culmination of the pioneering methods of Barwise in admissible set theory and infinitary logic, including the Barwise compactness and completeness theorems and the admissible cover, but also (ii) one of those rare mathematical theorems that is saturated with significance for the philosophy of mathematics and particularly the philosophy of set theory. I discussed the theorem and its philosophical significance at length in my paper, The multiverse perspective on the axiom of constructibility, where I argued that it can change how we look upon the axiom of constructibility and whether this axiom should be considered ‘restrictive,’ as it often is in set theory. Ultimately, the Barwise extension theorem shows how wrong a model of set theory can be, if we should entertain the idea that the set-theoretic universe continues growing beyond it.
Regarding my new proof, below, however, what I find especially interesting about it, if not surprising in light of (i) above, is that it makes no use of Barwise compactness or completeness and indeed, no use of infinitary logic at all! Instead, the new proof uses only classical methods of descriptive set theory concerning the representation of $\Pi^1_1$ sets with well-founded trees, the Levy and Shoenfield absoluteness theorems, the reflection theorem and the Keisler-Morley theorem on elementary extensions via definable ultrapowers. Like the Barwise proof, my proof splits into cases depending on whether the model $M$ is standard or nonstandard, but another interesting thing about it is that with my proof, it is the $\omega$-nonstandard case that is easier, whereas with the Barwise proof, the transitive case was easiest, since one only needed to resort to the admissible cover when $M$ was ill-founded. Barwise splits into cases on well-founded/ill-founded, whereas in my argument, the cases are $\omega$-standard/$\omega$-nonstandard.
To clarify the terms, an end-extension of a model of set theory $\langle M,\in^M\rangle$ is another model $\langle N,\in^N\rangle$, such that the first is a substructure of the second, so that $M\subseteq N$ and $\in^M=\in^N\upharpoonright M$, but further, the new model does not add new elements to sets in $M$. In other words, $M$ is an $\in$-initial segment of $N$, or more precisely: if $a\in^N b\in M$, then $a\in M$ and hence $a\in^M b$.
Set theory, of course, overflows with instances of end-extensions. For example, the rank-initial segments $V_\alpha$ end-extend to their higher instances $V_\beta$, when $\alpha<\beta$; similarly, the hierarchy of the constructible universe $L_\alpha\subseteq L_\beta$ are end-extensions; indeed any transitive set end-extends to all its supersets. The set-theoretic universe $V$ is an end-extension of the constructible universe $L$ and every forcing extension $M[G]$ is an end-extension of its ground model $M$, even when nonstandard. (In particular, one should not confuse end-extensions with rank-extensions, also known as top-extensions, where one insists that all the new sets have higher rank than any ordinal in the smaller model.)
Let’s get into the proof.
Proof. Suppose that $M$ is a model of $\ZF$ set theory. Consider first the case that $M$ is $\omega$-nonstandard. For any particular standard natural number $k$, the reflection theorem ensures that there are arbitrarily high $L_\alpha^M$ satisfying $\ZFC_k+V=L$, where $\ZFC_k$ refers to the first $k$ axioms of $\ZFC$ in a fixed computable enumeration by length. In particular, every countable transitive set $m\in L^M$ has an end-extension to a model of $\ZFC_k+V=L$. By overspill (that is, since the standard cut is not definable), there must be some nonstandard $k$ for which $L^M$ thinks that every countable transitive set $m$ has an end-extension to a model of $\ZFC_k+V=L$, which we may assume is countable. This is a $\Pi^1_2$ statement about $k$, which will therefore also be true in $M$, by the Shoenfield absoluteness theorem. It will also be true in all the elementary extensions of $M$, as well as in their forcing extensions. And indeed, by the Keisler-Morley theorem, the model $M$ has an elementary top extension $M^+$. Let $\theta$ be a new ordinal on top of $M$, and let $m=V_\theta^{M^+}$ be the $\theta$-rank-initial segment of $M^+$, which is a top-extension of $M$. Let $M^+[G]$ be a forcing extension in which $m$ has become countable. Since the $\Pi^1_2$ statement is true in $M^+[G]$, there is an end-extension of $\langle m,\in^{M^+}\rangle$ to a model $\langle N,\in^N\rangle$ that $M^+[G]$ thinks satisfies $\ZFC_k+V=L$. Since $k$ is nonstandard, this theory includes all the $\ZFC$ axioms, and since $m$ end-extends $M$, we have found an end-extension of $M$ to a model of $\ZFC+V=L$, as desired.
It remains to consider the case where $M$ is $\omega$-standard. By the Keisler-Morley theorem, let $M^+$ be an elementary top-extension of $M$. Let $\theta$ be an ordinal of $M^+$ above $M$, and consider the corresponding rank-initial segment $m=V_\theta^{M^+}$, which is a transitive set in $M^+$ that covers $M$. If $\langle m,\in^{M^+}\rangle$ has an end-extension to a model of $\ZFC+V=L$, then we’re done, since such a model would also end-extend $M$. So assume toward contradiction that there is no such end-extension of $m$. Let $M^+[G]$ be a forcing extension in which $m$ has become countable. The assertion that $m$ has no end-extension to a model of $\ZFC+V=L$ is actually true and hence true in $M^+[G]$. This is a $\Pi^1_1$ assertion there about the real coding $m$. Every such assertion has a canonically associated tree, which is well-founded exactly when the statement is true. Since the statement is true in $M^+[G]$, this tree has some countable rank $\lambda$ there. Since these models have the standard $\omega$, the tree associated with the statement is the same for us as inside the model, and since the statement is actually true, the tree is actually well founded. So the rank $\lambda$ must come from the well-founded part of the model.
If $\lambda$ happens to be countable in $L^{M^+}$, then consider the assertion, “there is a countable transitive set, such that the assertion that it has no end-extension to a model of $\ZFC+V=L$ has rank $\lambda$.” This is a $\Sigma_1$ assertion, since it is witnessed by the countable transitive set and the ranking function of the tree associated with the non-extension assertion. Since the parameters are countable, it follows by Levy reflection that the statement is true in $L^{M^+}$. So $L^{M^+}$ has a countable transitive set, such that the assertion that it has no end-extension to a model of $\ZFC+V=L$ has rank $\lambda$. But since $\lambda$ is actually well-founded, the statement would have to be actually true; but it isn’t, since $L^{M^+}$ itself is such an extension, a contradiction.
So we may assume $\lambda$ is uncountable in $M^+$. In this case, since $\lambda$ was actually well-ordered, it follows that $L^M$ is well-founded beyond its $\omega_1$. Consider the statement “there is a countable transitive set having no end-extension to a model of $\ZFC+V=L$.” This is a $\Sigma^1_2$ sentence, which is true in $M^+[G]$ by our assumption about $m$, and so by Shoenfield absoluteness, it is true in $L^{M^+}$ and hence also $L^M$. So $L^M$ thinks there is a countable transitive set $b$ having no end-extension to a model of $\ZFC+V=L$. This is a $\Pi^1_1$ assertion about $b$, whose truth is witnessed in $L^M$ by a ranking of the associated tree. Since this rank would be countable in $L^M$ and this model is well-founded up to its $\omega_1$, the tree must be actually well-founded. But this is impossible, since it is not actually true that $b$ has no such end-extension, since $L^M$ itself is such an end-extension of $b$. Contradiction. $\Box$
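For reference, the classical representation used repeatedly in the two cases above is the tree normal form for $\Pi^1_1$ statements (the notation $T_{\varphi,x}$ is ours): a $\Pi^1_1$ assertion $\varphi(x)$ about a real $x$ is equivalent to the well-foundedness of a tree of finite sequences computable from $x$,

```latex
\varphi(x)\quad\Longleftrightarrow\quad T_{\varphi,x}\subseteq\omega^{<\omega}\ \text{is well-founded, i.e. has no infinite branch,}
```

and when $\varphi(x)$ holds, the ordinal rank of the well-founded tree $T_{\varphi,x}$ is the countable ordinal $\lambda$ appearing in the argument.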
One can prove a somewhat stronger version of the theorem, as follows.
Theorem. For any countable model $M$ of $\ZF$, with an inner model $W\models\ZFC$, and any statement $\phi$ true in $W$, there is an end-extension of $M$ to a model of $\ZFC+\phi$. Furthermore, one can arrange that every set of $M$ is countable in the extension model.
In particular, one can find end-extensions to models of $\ZFC+V=L+\phi$, for any statement $\phi$ true in $L^M$.
Proof. Carry out the same proof as above, except in all the statements, ask for end-extensions of $\ZFC+\phi$, instead of end-extensions of $\ZFC+V=L$, and also ask that the set in question become countable in that extension. The final contradictions are obtained by the fact that the countable transitive sets in $L^M$ do have end-extensions like that, in which they are countable, since $W$ is such an end-extension. $\Box$
For example, this yields the following further consequences.
Corollaries. Every countable model $M$ of $\ZFC$ with a measurable cardinal has an end-extension to a model $N$ of $\ZFC+V=L[\mu]$. Every countable model $M$ of $\ZFC$ with extender-based large cardinals has an end-extension to a model $N$ satisfying $\ZFC+V=L[\vec E]$. Every countable model $M$ of $\ZFC$ with infinitely many Woodin cardinals has an end-extension to a model $N$ of $\ZF+\text{AD}+V=L(\mathbb{R})$.
And in each case, we can furthermore arrange that every set of $M$ is countable in the extension model $N$.
This proof grew out of a project on the $\Sigma_1$-definable universal finite set, which I am currently undertaking with Kameryn Williams and Philip Welch.
Jon Barwise. Infinitary methods in the model theory of set theory. In Logic Colloquium ’69 (Proc. Summer School and Colloq., Manchester, 1969), pages 53–66. North-Holland, Amsterdam, 1971.
Focus Questions
The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions.
How do we use the Law of Sines and the Law of Cosines to help solve applied problems that involve triangles?
How do we determine the area of a triangle?
What is Heron’s Formula for the area of a triangle?
In Section 3.2, we used right triangles to solve some applied problems. It should then be no surprise that we can use the Law of Sines and the Law of Cosines to solve applied problems involving triangles that are not right triangles.
In most problems, we will first get a rough diagram or picture showing the triangle or triangles involved in the problem. We then need to label the known quantities. Once that is done, we can see if there is enough information to use the Law of Sines or the Law of Cosines. Remember that each of these laws involves four quantities. If we know the value of three of those four quantities, we can use that law to determine the fourth quantity.
We begin with the example in Exercise \(\PageIndex{1}\). The solution of this problem involved some complicated work with right triangles and some algebra. We will now solve this problem using the results from Section 3.3.
Example \(\PageIndex{1}\): Height to the Top of a Flagpole
Suppose that the flagpole sits on top of a hill and that we cannot directly measure the length of the shadow of the flagpole as shown in Figure \(\PageIndex{1}\).
Some quantities have been labeled in the diagram. Angles \(\alpha\) and \(\beta\) are angles of elevation to the top of the flagpole from two different points on level ground. These points are \(d\) feet apart and directly in line with the flagpole. The problem is to determine \(h\), the height from level ground to the top of the flagpole. The following measurements have been recorded.
\[\alpha = 43.2^\circ\]
\[\beta = 34.7^\circ\]
\[d = 22.75 \text{ feet}\]
Figure \(\PageIndex{1}\): Flagpole on a hill
We notice that if we knew either length \(BC\) or \(BD\) in \(\triangle BDC\), then we could use right triangle trigonometry to determine the length \(CD\), which is equal to \(h\). Now look at \(\triangle ABC\). We are given one angle \(\beta\). However, we also know the measure of angle \(\alpha\). Because angle \(ABC\) and \(\alpha\) form a straight angle, we have \[\angle ABC + \alpha = 180^\circ\]
Hence, \[\angle ABC = 180^\circ - 43.2^\circ = 136.8^\circ\]. We now know two angles in \(\triangle ABC\) and hence, we can determine the third angle as follows: \[\beta + \angle ABC + \angle ACB = 180^\circ\] \[34.7^\circ + 136.8^\circ + \angle ACB = 180^\circ\] \[\angle ACB = 8.5^\circ\]
We now know all angles in \(\triangle ABC\) and the length of one side. We can use the Law of Sines. We have
\[\dfrac{BC}{\sin(34.7^\circ)} = \dfrac{22.75}{\sin(8.5^\circ)}\]
\[BC = \dfrac{22.75\sin(34.7^\circ)}{\sin(8.5^\circ)} \approx 87.620\]
We can now use the right triangle \(\triangle BDC\) to determine \(h\) as follows:
\[\dfrac{h}{BC} = \sin(43.2^\circ)\]
\[h = BC\cdot \sin(43.2^\circ) \approx 59.980\]
So the top of the flagpole is 59.980 feet above the ground. This is the same answer we obtained in Exercise \(\PageIndex{1}\).
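This computation can be verified in a few lines (a sketch; the variable names are ours):

```python
import math

alpha, beta, d = 43.2, 34.7, 22.75           # degrees, degrees, feet

angle_ABC = 180 - alpha                      # straight-angle relation: 136.8
angle_ACB = 180 - beta - angle_ABC           # angle sum in the triangle: 8.5

# Law of Sines in triangle ABC: BC / sin(beta) = d / sin(angle ACB)
BC = d * math.sin(math.radians(beta)) / math.sin(math.radians(angle_ACB))

# right triangle BDC: h = BC * sin(alpha)
h = BC * math.sin(math.radians(alpha))
print(round(BC, 2), round(h, 2))   # BC is about 87.62, h is about 59.98
```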
Exercise \(\PageIndex{1}\)
A bridge is to be built across a river. The bridge will go from point \(A\) to point \(B\) in the diagram on the right. Using a transit (an instrument to measure angles), a surveyor measures angle \(ABC\) to be \(94.2^\circ\) and measures angle \(BCA\) to be \(48.5^\circ\). In addition, the distance from \(B\) to \(C\) is measured to be 98.5 feet. How long will the bridge from point \(B\) to point \(A\) be?
Answer
We first note that \(\angle{BAC} = 180^\circ - 94.2^\circ - 48.5^\circ\) and so \(\angle{BAC} = 37.3^\circ\). We can then use the Law of Sines to determine the length from \(A\) to \(B\) as follows:
\[\dfrac{AB}{\sin(48.5^\circ)} = \dfrac{98.5}{\sin(37.3^\circ)}\]
\[AB = \dfrac{98.5\sin(48.5^\circ)}{\sin(37.3^\circ)}\]
\[AB \approx 121.7\]
The bridge from point \(B\) to point \(A\) will be approximately \(121.7\) feet long.
Area of a Triangle
We will now develop a few different ways to calculate the area of a triangle. Perhaps the most familiar formula for the area is the following:
The area \(A\) of a triangle is \[A = \dfrac{1}{2}bh,\]
where \(b\) is the length of the base of the triangle and \(h\) is the length of the altitude that is perpendicular to that base.
The triangles in Figure \(\PageIndex{2}\) illustrate the use of the variables in this formula.
Figure \(\PageIndex{2}\): Diagrams for the Formula for the Area of a Triangle
A proof of this formula for the area of a triangle depends on the formula for the area of a parallelogram and is included in Appendix C.
Exercise \(\PageIndex{2}\)
Suppose that the length of two sides of a triangle are \(5\) meters and \(7\) meters and that the angle formed by these two sides is \(26.5^\circ\). See the diagram on the right.
For this problem, we are using the side of length \(7\) meters as the base. The altitude of length \(h\) that is perpendicular to this side is shown.
Use right triangle trigonometry to determine the value of \(h\). Determine the area of this triangle.
Answer
Using the right triangle, we see that \(\sin(26.5^\circ) = \dfrac{h}{5}\). So \(h = 5\sin(26.5^\circ)\), and the area of the triangle is
\[A = \dfrac{1}{2}(7)[5\sin(26.5^\circ)] = \dfrac{35}{2}\sin(26.5^\circ) \approx 7.8085\]
The area of the triangle is approximately \(7.8085\) square meters.
The purpose of Progress Check 3.21 was to illustrate that if we know the length of two sides of a triangle and the angle formed by these two sides, then we can determine the area of that triangle.
The Area of a Triangle
The area of a triangle equals one-half the product of two of its sides times the sine of the angle formed by these two sides.
Exercise \(\PageIndex{3}\)
In the diagram on the right, \(b\) is the length of the base of a triangle, \(a\) is the length of another side, and \(\theta\) is the angle formed by these two sides. We let \(A\) be the area of the triangle.
Follow the procedure illustrated in Progress Check 3.21 to prove that \[A = \dfrac{1}{2}ab\sin(\theta)\]
Explain why this proves the formula for the area of a triangle.
Answer
Using the right triangle, we see that \(\sin(\theta) = \dfrac{h}{a}\). So \(h = a\sin(\theta)\), and the area of the triangle is \[A = \dfrac{1}{2}b(a\sin(\theta)) = \dfrac{1}{2}ab\sin(\theta)\]
There is another common formula for the area of a triangle known as Heron’s Formula named after Heron of Alexandria (circa 75 CE). This formula shows that the area of a triangle can be computed if the lengths of the three sides of the triangle are known.
Heron’s Formula
The area \(A\) of a triangle with sides of length \(a\), \(b\), and \(c\) is given by the formula
\[A = \sqrt{s(s - a)(s - b)(s - c)} \label{Heron}\]
where \(s = \dfrac{1}{2}(a+ b + c)\).
For example, suppose that the lengths of the three sides of a triangle are \(a = 3\) ft, \(b = 5\) ft, and \(c = 6\) ft. Using Heron’s Formula (Equation \ref{Heron}), we get
\[s = \dfrac{1}{2}(a+ b + c)\]
\[s = 7\]
\[A = \sqrt{s(s - a)(s - b)(s - c)}\]
\[A = \sqrt{7(7 - 3)(7 - 5)(7 - 6)}\]
\[A = \sqrt{56} \approx 7.4833\]
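This example can be verified in a few lines, together with the \(\frac{1}{2}ab\sin(\gamma)\) formula from which Heron's Formula is derived (a sketch; the function names are ours):

```python
import math

def heron(a, b, c):
    """Area of a triangle from its three side lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

a, b, c = 3, 5, 6
A = heron(a, b, c)                          # sqrt(56), about 7.4833

# cross-check via A = (1/2) a b sin(gamma), with gamma from the Law of Cosines
cos_g = (a**2 + b**2 - c**2) / (2 * a * b)
A2 = 0.5 * a * b * math.sqrt(1 - cos_g**2)
print(A, A2)   # the two formulas agree
```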
This fairly complex formula is actually derived from the previous formula for the area of a triangle and the Law of Cosines. We begin our exploration of the proof of this formula in the following progress check.
Exercise \(\PageIndex{4}\)
Suppose we have a triangle as shown in the diagram below.
1. Use the Law of Cosines that involves the angle \(\gamma\) and solve this formula for \(\cos(\gamma)\). This gives a formula for \(\cos(\gamma)\) in terms of \(a\), \(b\), and \(c\).
2. Use the Pythagorean Identity \(\cos^{2}(\gamma) + \sin^{2}(\gamma) = 1\) to write \(\sin(\gamma)\) in terms of \(\cos^{2}(\gamma)\). Substitute for \(\cos^{2}(\gamma)\) using the formula in (1). This gives a formula for \(\sin(\gamma)\) in terms of \(a\), \(b\), and \(c\). (Do not do any algebraic simplification.)
3. We also know that a formula for the area of this triangle is \(A = \dfrac{1}{2}ab\sin(\gamma)\).
Substitute for \(\sin(\gamma)\) using the formula in (2). (Do not do any algebraic simplification.) This gives a formula for the area \(A\) in terms of \(a\), \(b\), and \(c\).
The formula obtained in Progress Check 3.23 was \[A = \dfrac{1}{2}ab\sqrt{1 - (\dfrac{a^{2} + b^{2} - c^{2}}{2ab})^{2}}\]
This is a formula for the area of a triangle in terms of the lengths of the three sides of the triangle. It does not look like Heron’s Formula, but we can use some substantial algebra to rewrite this formula to obtain Heron’s Formula. This algebraic work is completed in the appendix for this section.
Answer
1. Using the Law of Cosines, we see that \[c^{2} = a^{2} + b^{2} - 2ab\cos(\gamma)\] \[2ab\cos(\gamma) = a^{2} + b^{2} - c^{2}\] \[\cos(\gamma) = \dfrac{a^{2} + b^{2} - c^{2}}{2ab}\]
2. We see that
\[\sin^{2}(\gamma) = 1 - \cos^{2}(\gamma).\]
Since \(\gamma\) is between \(0^\circ\) and \(180^\circ\), we know that \(\sin(\gamma) > 0\) and so
\[\sin(\gamma) = \sqrt{1 - \left(\dfrac{a^{2} + b^{2} - c^{2}}{2ab}\right)^{2}}\]
3. Substituting the equation in part (2) into the formula \(A = \dfrac{1}{2}ab\sin(\gamma)\), we obtain
\[A = \dfrac{1}{2}ab\sin(\gamma) = \dfrac{1}{2}ab\sqrt{1 - \left(\dfrac{a^{2} + b^{2} - c^{2}}{2ab}\right)^{2}}\]
Appendix – Proof of Heron’s Formula
The formula for the area of a triangle obtained in Progress Check 3.23 was \[A = \dfrac{1}{2}ab\sqrt{1 - (\dfrac{a^{2} + b^{2} - c^{2}}{2ab})^{2}}\]
We now complete the algebra to show that this is equivalent to Heron’s formula. The first step is to rewrite the part under the square root sign as a single fraction.
\[A = \dfrac{1}{2}ab\sqrt{1 - (\dfrac{a^{2} + b^{2} - c^{2}}{2ab})^{2}}\]
\[= \dfrac{1}{2}ab\sqrt{\dfrac{(2ab)^{2} - (a^{2} + b^{2} - c^{2})^{2}}{(2ab)^{2}}}\]
\[= \dfrac{1}{2}ab\dfrac{\sqrt{(2ab)^{2} - (a^{2} + b^{2} - c^{2})^{2}}}{2ab}\]
\[= \dfrac{\sqrt{(2ab)^{2} - (a^{2} + b^{2} - c^{2})^{2}}}{4}\]
Squaring both sides of the last equation, we obtain \[A^{2} = \dfrac{(2ab)^{2} - (a^{2} + b^{2} - c^{2})^{2}}{16}\]
The numerator on the right side of the last equation is a difference of squares. We will now use the difference of squares formula, \(x^{2} - y^{2} = (x - y)(x + y)\) to factor the numerator.
\[A^{2} = \dfrac{(2ab)^{2} - (a^{2} + b^{2} - c^{2})^{2}}{16}\]
\[= \dfrac{(2ab - (a^{2} + b^{2} - c^{2}))(2ab + (a^{2} + b^{2} - c^{2}))}{16}\]
\[= \dfrac{(-a^{2} + 2ab - b^{2} + c^{2})(a^{2} + 2ab + b^{2} - c^{2})}{16}\]
We now notice that \(-a^{2} + 2ab - b^{2} = -(a - b)^{2}\) and \(a^{2} + 2ab + b^{2} = (a + b)^{2}\). So using these in the last equation, we have
\[A^{2} = \dfrac{(-(a - b)^{2} + c^{2})((a + b)^{2} - c^{2})}{16}\]
\[= \dfrac{(-[(a - b)^{2} - c^{2}])((a + b)^{2} - c^{2})}{16}\]
We can once again use the difference of squares formula as follows:
\[(a - b)^{2} - c^{2} = (a - b - c)(a - b + c)\]
\[(a + b)^{2} - c^{2} = (a + b - c)(a + b + c)\]
Substituting this information into the last equation for \(A^{2}\), we obtain \[A^{2} = \dfrac{-(a - b - c)(a - b + c)(a + b - c)(a + b + c)}{16}\]
Since \(s = \dfrac{1}{2}(a + b + c)\), we have \(2s = a + b + c\). Now notice that
\[-(a - b - c) = -a + b + c = a + b + c - 2a = 2s - 2a\]
\[a - b + c = a + b + c -2b = 2s - 2b\]
\[a + b - c = a + b + c - 2c = 2s - 2c\]
\[a + b + c = 2s\]
so
\[A^{2} = \dfrac{-(a - b - c)(a - b + c)(a + b - c)(a + b + c)}{16}\]
\[= \dfrac{(2s - 2a)(2s - 2b)(2s - 2c)(2s)}{16}\] \[= \dfrac{16s(s - a)(s - b)(s - c)}{16}\] \[= s(s - a)(s - b)(s - c)\] \[A = \sqrt{s(s - a)(s - b)(s - c)}\]
This completes the proof of Heron’s formula.
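The key identity driving the algebra above can also be machine-checked symbolically (a sketch using the sympy library):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
s = (a + b + c) / 2

# (2ab)^2 - (a^2 + b^2 - c^2)^2  versus  16 s (s-a)(s-b)(s-c)
lhs = (2*a*b)**2 - (a**2 + b**2 - c**2)**2
rhs = 16 * s * (s - a) * (s - b) * (s - c)

assert sp.expand(lhs - rhs) == 0   # the two expressions agree identically
```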
Summary
In this section, we studied the following important concepts and ideas: How to use right triangle trigonometry, the Law of Sines, and the Law of Cosines to solve applied problems involving triangles. Three ways to determine the area \(A\) of a triangle.
\(A = \dfrac{1}{2}bh\), where \(b\) is the length of the base and \(h\) is the length of the altitude.
\(A = \dfrac{1}{2}ab\sin(\theta)\), where \(a\) and \(b\) are the lengths of two sides of the triangle and \(\theta\) is the angle formed by the sides of length \(a\) and \(b\).
Heron’s Formula. If \(a\), \(b\), and \(c\) are the lengths of the sides of a triangle and \(s = \dfrac{1}{2}(a + b + c)\), then \[A = \sqrt{s(s - a)(s - b)(s - c)}.\]
I've been playing with the Von Mises distribution for a project I'm doing in python and I'm confused about it.
I'm drawing the pdf, which is defined by wikipedia here as $p(x|\mu, k) = \frac{\exp{(k \cos(x-\mu))}}{\tau I_0(k)}$ for the angle $x$, centre $\mu$ and dispersion or concentration $k$ or $\kappa$ (and $\tau = 2\pi$).
In python, I attempted to plot the pdf for different kappas to see how the variable $k$ affects the distribution, using the trapezium rule to integrate the area under the curve. When I did this, I found some funny results, specifically: the area under the curve only sums to $1$ for the case $k = 0$; sometimes the pdf goes under the x-axis; and the maximum value periodically spikes to very high figures.
I think I am missing something, probably to do with the circularity and the wrapping, but I can't figure out what a good range for kappa is and how it will affect the results I draw from the distribution. Another reason why I'm confused is that wikipedia also confirms here, in the first paragraph, that the area under a pdf curve should be equal to $1$. Is it because the von Mises distribution is only defined for a segment of length $\tau$? I know I shouldn't be relying on wikipedia so much, so I'm open to redirection towards better resources, though I'd rather not have to deal with anything too heavy; this isn't my area of expertise.
My code is below.
import numpy as np
import scipy.special as sps
import matplotlib.pyplot as plt

tau = np.pi*2
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
n = 10
xs = np.linspace(0, 2, n)  # create a range of kappas
mu = 0
m = 50
width = tau/float(m)
sums = np.empty(n)  # for storing the integrals under the curves for different kappas
for i in range(n):  # plot a pdf for each kappa and sum under the curve
    kappa = xs[i]
    x = np.linspace(-np.pi, np.pi, m)
    y = np.exp(kappa*np.cos(x-mu))/(tau*sps.jn(0, kappa))
    ax.plot(x, y, label="kappa = {}".format(kappa))
    sums[i] = sum([(y[j]+y[j+1])/2*width for j in range(m-1)])
ax.set_axes("tight")
ax.legend()
print "the area under the pdf for different kappas: "
for i in range(n):
    print "kappa =", xs[i], ", integral =", sums[i]
fig.show()
Thanks for any replies shedding light on this!
Edit: It turns out I was correct in my beliefs about the area under a pdf being $1$ and about the equation for the von Mises distribution. My mistake was in following some sample code in the numpy documentation.
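For reference, a sketch of the corrected computation: the normalizing constant requires the modified Bessel function $I_0$ (`scipy.special.i0`), not the ordinary Bessel function $J_0$ (`scipy.special.jn`) used in the code above, which oscillates and goes negative. With $I_0$ the density stays positive and the trapezium rule gives area $1$ over one full period for every $\kappa$.

```python
import numpy as np
from scipy.special import i0

tau = 2 * np.pi
mu = 0.0
x = np.linspace(-np.pi, np.pi, 2001)

for kappa in [0.0, 0.5, 1.0, 2.0, 5.0]:
    # von Mises pdf with the modified Bessel function I_0 as normalizer
    pdf = np.exp(kappa * np.cos(x - mu)) / (tau * i0(kappa))
    area = np.trapz(pdf, x)       # trapezium rule over one full period
    assert pdf.min() > 0 and abs(area - 1) < 1e-6
```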
There is a top-down approach to induction that works in a very general setting. Let $\Omega$ be a set and $f:\Omega^I\to\Omega$ be a function (what I say generalizes to having many such functions), where $I$ is a nonempty set and $\Omega^I$ is the set of all functions from $I$ to $\Omega$. If $I=\{1,2\}$, $f$ is essentially a binary operation. But there are also infinitary operations. We can write infinite sums of nonnegative real numbers together with "infinity" this way. Let $\Omega$ consist of all nonnegative real numbers together with $\infty$, let $I=\mathbb{N}$, and let $f(x_1,x_2,x_3,\ldots)=\sum_{n=1}^\infty x_n$. In this case $f$ is an "infinitary" operation.
Now let $S\subseteq \Omega$ be a set in which each element has a certain property $P$, and let $f$ be such that whenever $x_i$ has the property $P$ for all $i\in I$, then $f\big((x_i)\big)$ has property $P$. Now let $\mathcal{C}$ be the collection of all subsets $C$ of $\Omega$ such that $f$ restricted to $C^I$ takes values only in $C$. The collection $\mathcal{C}$ is nonempty since $\Omega\in\mathcal{C}$. Let $S^*$ be the intersection of all sets in $\mathcal{C}$ that contain $S$. Then all elements of $S^*$ have property $P$.
This kind of argument is often easier than using transfinite induction directly and can be justified by transfinite induction.
To see that it is a form of induction, let $\Omega=\mathbb{R}$, $I=\{1\}$ and $f$ be given by $f(r)=r+1$. So $\mathcal{C}$ consists of all sets of real number such that they contain with every number the same number plus one. Now let $S=\{1\}$. Then $S^*=\mathbb{N}$! So if we have a property that holds for $1$ and such that if it holds for $n$, it holds also for $n+1$, it holds for all of $\mathbb{N}$.
Let’s do an easy infinite induction. Let $\Omega$ again be the set of all nonnegative real numbers together with infinity, let $f$ be infinite summation, and let $S=\{2\}$. Every element in $S$ has the property of being larger than one. Now if $x_1,x_2,\ldots$ are all larger than one, then $\sum_{n=1}^\infty x_n$ is larger than one. So $S^*=\{2,\infty\}$ has indeed the property that all elements are larger than one.
The approach outlined is intimately related to Moore closures.
On the last question, I am not sure how good you are at representation theory, but the following fact is true: take $so(d,2)$ (we need $so(3,2)$ for the case at hand), use the conformal base, i.e. Lorentz generators $L_{ab}$, translations $P_a$, conformal boosts $K_a$ and dilatation $D$, $a,b=1..d$. $P$ and $K$ behave as raising/lowering generators with respect to $D$, $[D,P]=+P$, $[D,K]=-K$. Take the vacuum to carry a spin-$s$ representation of the Lorentz algebra and a weight $\Delta$ with respect to $D$, i.e. $|\Delta\rangle^{a_1...a_s}$. When $\Delta=d+s-2$, there is a singular vector, $P_m|\Delta\rangle^{ma_2...a_s}$. This is standard representation theory: finding raising/lowering operators, defining a vacuum, looking for singular vectors. Actually, singular vectors are exactly the conformally-invariant equations one can impose.
In field language this means that $\partial_m J^{m a_2...a_s}=0$ is a conformally invariant equation iff the conformal dimension of $J$ is $\Delta=d+s-2$. Despite the fact that $J^{a_1...a_s}$ is a good conformal operator for any value of the conformal dimension, only for $\Delta=d+s-2$ does its divergence decouple. (Perhaps you have seen $L_{-2}+\alpha L_{-1}^2$ as a singular vector in the Virasoro algebra; now it is replaced with $P_m$ or $\partial_m$.)
Now, having $J^{a_1..a_s}$ of weight $\Delta$ we can consider its contragredient representation or, in field language, couple it via $\int \phi_{a_1..a_s}J^{a_1...a_s}$ to some other field $\phi$. Requiring a conformally invariant coupling implies $\Delta_\phi=d-\Delta_J=2-s$. Not surprisingly, something special must happen for $\Delta_J=d+s-2$.
$$\int (\phi_{a_1...a_s}+\partial_{a_1}\xi_{a_2...a_s})J^{a_1...a_s}=\int \phi J-\int \xi_{a_2...a_s}\partial_m J^{ma_2...a_s}=\int \phi J$$we see that the statement dual to the conservation of $J$ is the gauge invariance of $\phi$.
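As a quick sanity check of the dimension counting (my own addition, not from the original answer), take the Maxwell case $s=1$, $d=4$:

```latex
% Maxwell case: s = 1, d = 4
\Delta_J = d + s - 2 = 3, \qquad \Delta_A = d - \Delta_J = 2 - s = 1,
\qquad \Big[\int d^4x\, A_\mu J^\mu\Big] = \Delta_A + \Delta_J - d = 0 .
```

The conserved current has the familiar dimension 3, the gauge potential $A_\mu$ the familiar dimension 1, and the coupling is exactly dimensionless, i.e. scale invariant.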
I have not read the paper yet, but as far as I can see they play with the dimension of $J$, and for $d+s-2$ and $2-s$ it describes a conserved tensor and a gauge field, respectively, just because of the representation theory of the conformal group (decoupling of certain null states). At any given point in the paper $J$ has some fixed dimension and is either a conserved tensor, a gauge field, or just a spin-$s$ conformal field of generic dimension $\Delta$.
On the last but one, you are right in that gauge invariance has little to do with conformality. The answer is spin and dimension dependent. For $s=0$ there is an $m^2$ for which the scalar is conformal. For $s=1$ and a certain $m^2$ the Maxwell field is a gauge field, but the Maxwell equation is conformal in $d=4$ only. Beyond $d=4$ a gauge spin-one field is not conformal, or a spin-one conformal field is not a gauge field. For $s\geq2$ the situation is even more tricky: in $AdS_4$ the gauge fields are conformal, but in Minkowski space they are not conformal (in terms of gauge potentials $\phi_{\mu_1...\mu_s}$). You may have a look at http://arxiv.org/abs/0707.1085
On the second, first of all the transversality is in the right place in 5.1. Secondly, your confusion (inspired by my answer to another question) is that there are two different classes of fields people are interested in. First is the class of usual particles, where we talk about representations of the Poincare algebra $iso(d-1,1)$ if we are in $d$-dimensional Minkowski space, or $so(d-1,2)$ and $so(d,1)$ if we are in anti de Sitter or de Sitter space (there we need harmonicity, tracelessness, transversality). Conformal fields are in the second class. Conformal means that it must be a representation of the conformal group $so(d,2)$ for Minkowski-$d$; note that $iso(d-1,1)\subset so(d,2)$. The conformal group of anti de Sitter-$d$ is also $so(d,2)$. Note that the symmetry algebra of AdS-$(d+1)$ is exactly the conformal group of Minkowski-$d$. So when we talk about conformal fields we are interested in representations of $so(d,2)$ (the signature can vary depending on the problem; it is some real form of $so(d+2)$). I would like to stress that conformal fields in $d$ dimensions are in one-to-one correspondence with usual fields in $AdS_{d+1}$, for the algebra is the same, which is at the core of the AdS/CFT correspondence.
For example, a spin-$0$ field in Minkowski space obeys $\square \phi=0$. It gives rise to an irreducible representation of $iso(d-1,1)$. Coincidentally, the same representation turns out to be an irreducible representation of a bigger algebra, $so(d,2)$, the conformal algebra. There exists also a spin-$0$ conformal field of weight $\Delta$, say $\phi_\Delta(x)$. Without imposing any equations it is an irreducible representation of $so(d,2)$. As a representation of its subalgebra $iso(d-1,1)$ it decomposes into an integral of representations (Fourier) and is highly reducible. There is a special weight $\Delta=(d-2)/2$ for which $\phi_\Delta(x)$ is reducible and the decoupling of null states is achieved via $\square \phi=0$ (analogous to the conservation of $J$ above). Note that $J$ above is an irreducible representation of $so(d,2)$ but it is highly reducible under $iso(d-1,1)$. For the special weight $d+s-2$ we have to impose the conservation condition in order to project out the null states, but again the conserved tensor is an irreducible representation of $so(d,2)$ and reducible under $iso(d-1,1)$. So your confusion arises because the fields are conformal: these are representations of a bigger algebra, they are more 'fat' and require fewer equations (sometimes none at all) to project onto an irreducible representation.
$S^3$ is the analog of Minkowski-$3$ (compactified and Euclidean), then $so(4)$ is the analog of $iso(3,1)$ and they are interested in normalizable functions, these are the spherical harmonics or polynomials depending on coordinates. Then they discuss labelling of these representations using $so(4)\sim su(2)\oplus su(2)$ and proceed to do some integrals. This post imported from StackExchange Physics at 2014-03-07 13:47 (UCT), posted by SE-user John
The Dimension of The Null Space and Range
We have looked at the Null Space of a Linear Map and the Range of a Linear Map. Let $T \in \mathcal L (V, W)$. We have already proven that $\mathrm{null} (T)$ is a subspace of the domain $V$ and that $\mathrm{range} (T)$ is a subspace of the codomain $W$. We will now look at an extremely important theorem that relates the dimensions of the null space and range of the linear transformation $T$ to the dimension of the domain vector space $V$, provided that $V$ is a finite-dimensional vector space.
Theorem 1: If $T \in \mathcal L (V, W)$ and $V$ is a finite-dimensional vector space, then $\dim V = \dim ( \mathrm{null} (T)) + \dim (\mathrm{range} (T))$. Proof: Let $T \in \mathcal L (V, W)$ and let $V$ be a finite-dimensional vector space. Since $V$ is finite-dimensional and $\mathrm{null} (T)$ is a subspace of $V$, we also have that $\mathrm{null} (T)$ is finite-dimensional. Let $\{ u_1, u_2, ..., u_m \}$ be a basis of $\mathrm{null} (T)$. Since this set of vectors is linearly independent, it can be extended to a set $\{ u_1, u_2, ..., u_m, v_1, v_2, ..., v_n \}$ that forms a basis of the finite-dimensional vector space $V$. Therefore $\dim V = m + n$ and $\dim ( \mathrm{null} (T)) = m$. We want to prove that $\dim ( \mathrm{range} (T)) = n$ to establish the hypothesized equality. Let $v \in V$. Since $\{ u_1, u_2, ..., u_m, v_1, v_2, ..., v_n \}$ is a basis of $V$, we can write $v = a_1u_1 + a_2u_2 + ... + a_mu_m + b_1v_1 + b_2v_2 + ... + b_nv_n$ where $a_1, a_2, ..., a_m, b_1, b_2, ..., b_n \in \mathbb{F}$. If we apply the linear transformation to both sides we obtain that:
\begin{align} T(v) = T(a_1u_1 + a_2u_2 + ... + a_mu_m + b_1v_1 + b_2v_2 + ... + b_nv_n) \\ \quad T(v) = a_1T(u_1) + a_2T(u_2) + ... + a_mT(u_m) + b_1T(v_1) + b_2T(v_2) + ... + b_nT(v_n) \end{align}
Now notice that since $\{ u_1, u_2, ..., u_m \}$ is a basis of $\mathrm{null} (T)$ then this implies that $u_1, u_2, ..., u_m \in \mathrm{null} (T)$. Therefore $T(u_1) = T(u_2) = ... = T(u_m) = 0$, and so:
\begin{align} \quad T(v) = a_10 + a_20 + ... + a_m0 + b_1T(v_1) + b_2T(v_2) + ... + b_nT(v_n) \\ T(v) = b_1T(v_1) + b_2T(v_2) + ... + b_nT(v_n) \end{align}
Therefore the set of vectors $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $\mathrm{range} (T)$, that is $\mathrm{span} (T(v_1), T(v_2), ..., T(v_n)) = \mathrm{range} (T)$. We now need to show that the set $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is linearly independent. Consider the following vector equation:
\begin{equation} c_1T(v_1) + c_2T(v_2) + ... + c_nT(v_n) = 0 \end{equation}
where $c_1, c_2, ..., c_n \in \mathbb{F}$. Then it follows that $T(c_1v_1 + c_2v_2 + ... + c_nv_n) = 0$, and so $(c_1v_1 + c_2v_2 + ... + c_nv_n) \in \mathrm{null} (T)$. Now since the set $\{ u_1, u_2, ..., u_m \}$ spans $\mathrm{null} (T)$, there exist $d_1, d_2, ..., d_m \in \mathbb{F}$ such that:
\begin{align} c_1v_1 + c_2v_2 + ... + c_nv_n = d_1u_1 + d_2u_2 + ... + d_mu_m \\ \quad c_1v_1 + c_2v_2 + ... + c_nv_n - d_1u_1 - d_2u_2 - ... - d_mu_m = 0 \end{align}
But the set $\{ u_1, u_2, ..., u_m, v_1, v_2, ..., v_n \}$ is linearly independent (as it is a basis for $V$), which implies that $c_1 = c_2 = ... = c_n = d_1 = d_2 = ... = d_m = 0$. In particular, $c_1 = c_2 = ... = c_n = 0$. Therefore $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is a linearly independent set of vectors. Since $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $\mathrm{range} (T)$ and is linearly independent, it is a basis for $\mathrm{range} (T)$, so $\dim ( \mathrm{range} (T)) = n$. Therefore $\dim V = \dim ( \mathrm{null} (T)) + \dim ( \mathrm{range} (T))$. $\blacksquare$
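As a numerical illustration of Theorem 1 (a sketch of my own, assuming NumPy and SciPy; for a matrix $A$ representing $T:\mathbb{R}^n\to\mathbb{R}^m$, we have $\dim V = n$):

```python
import numpy as np
from scipy.linalg import null_space

# Rank-nullity check: for A of shape (m, n) representing T : R^n -> R^m,
# dim range(T) = rank(A) and dim null(T) = number of null-space basis vectors.
rng = np.random.default_rng(0)
m, n, k = 5, 7, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank k generically

rank = np.linalg.matrix_rank(A)    # dim range(T)
nullity = null_space(A).shape[1]   # dim null(T), computed independently via SVD
assert rank + nullity == n         # dim V = dim null(T) + dim range(T)
print(rank, nullity, n)
```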
We will now look at a bunch of corollaries as a result of Theorem 1.
Corollary 1: If $T \in \mathcal L (V, W)$ and $V$ is a finite-dimensional vector space, then $\mathrm{range} (T)$ is finite-dimensional. Proof: If $V$ is finite-dimensional, then $\dim V = m$ for some $m \in \mathbb{N} \cup \{0 \}$. Since $\mathrm{null} (T)$ is a subspace of $V$ we have that $\mathrm{null} (T)$ is also finite-dimensional, so $\dim ( \mathrm{null} (T)) = n ≤ m$. By Theorem 1, it follows that $\dim ( \mathrm{range} (T)) = m - n$ and so $\mathrm{range} (T)$ is finite-dimensional. $\blacksquare$
Corollary 2: If $T \in \mathcal L (V, W)$ and $V$ and $W$ are finite-dimensional vector spaces where $\dim V > \dim W$, then $T$ is not injective. Proof: Suppose that $V$ and $W$ are finite-dimensional vector spaces such that $\dim V > \dim W$, and let $T \in \mathcal L (V, W)$. By Theorem 1 we have that $\dim V = \dim ( \mathrm{null} (T)) + \dim ( \mathrm{range} (T))$ and so rearranging these terms (while noting that $\dim V - \dim ( \mathrm{range} (T)) ≥ \dim V - \dim W$ since $\dim ( \mathrm{range} (T)) ≤ \dim W$) we get that:
\begin{align} \quad \dim ( \mathrm{null} (T)) = \dim V - \dim ( \mathrm{range} (T)) ≥ \dim V - \dim W > 0 \end{align}
Therefore $\dim ( \mathrm{null} (T)) > 0$ and so $\mathrm{null} (T) \neq \{ 0 \}$ and $T$ is not injective. $\blacksquare$.
We will now look at a corollary that is analogous to that of corollary 2.
Corollary 3: If $T \in \mathcal L (V, W)$ and $V$ and $W$ are finite-dimensional vector spaces where $\dim V < \dim W$, then $T$ is not surjective. Proof: Suppose that $V$ and $W$ are finite-dimensional vector spaces such that $\dim V < \dim W$, and let $T \in \mathcal L (V, W)$. By Theorem 1 we have that $\dim V = \dim ( \mathrm{null} (T)) + \dim ( \mathrm{range} (T))$ and so rearranging these terms we get that:
\begin{align} \quad \dim (\mathrm{range} (T)) = \dim V - \dim (\mathrm{null} (T)) ≤ \dim V < \dim W \end{align}
Therefore $\dim (\mathrm{range} (T)) < \dim W$ so $\mathrm{range} (T) \neq W$ so $T$ is not surjective. $\blacksquare$
Corollary 4: If $T \in \mathcal L (V, V)$ and $V$ is a finite-dimensional vector space then $T$ is injective if and only if $T$ is surjective. Proof: $\Rightarrow$ Suppose that $T$ is injective. Then $\mathrm{null} (T) = \{ 0 \}$ so $\dim (\mathrm{null} (T)) = 0$. By Theorem 1 we have that $\dim V = \dim (\mathrm{range} (T))$ which implies that $V = \mathrm{range} (T)$ (since $T : V \to V$), so $T$ is surjective. $\Leftarrow$ Suppose that $T$ is surjective. Then $\mathrm{range} (T) = V$, and so $\dim (\mathrm{range} (T)) = \dim V$. Therefore by Theorem 1, we have that $\dim (\mathrm{null} (T)) = 0$ which implies that $\mathrm{null} (T) = \{ 0 \}$, so $T$ is injective. $\blacksquare$
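A quick numerical illustration of Corollary 4 (my own sketch, not from the page): for square matrices, injectivity (trivial null space, i.e. full column rank) and surjectivity (full row rank) coincide.

```python
import numpy as np

# For T : R^n -> R^n given by a square matrix, injective iff surjective.
A = np.array([[2.0, 1.0], [0.0, 3.0]])  # rank 2: both injective and surjective
B = np.array([[1.0, 2.0], [2.0, 4.0]])  # rank 1: neither
for M in (A, B):
    r = np.linalg.matrix_rank(M)
    injective = (r == M.shape[1])   # null space is {0}
    surjective = (r == M.shape[0])  # range has full dimension
    assert injective == surjective
```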
An interesting application of the spectral gap/algebraic connectivity is to determine the synchronizability of linearly coupled dynamical nodes, which can be formulated as follows:
\[\frac{dx_i}{dt} = R(x_i) + \alpha \sum_{j \in N_i} \left( H(x_j) - H(x_i) \right) \label{(18.6)}\]
Here \(x_i\) is the state of node \(i\), \(R\) is the local reaction term that produces the inherent dynamical behavior of individual nodes, and \(N_i\) is the neighborhood of node \(i\). We assume that \(R\) is identical for all nodes, and it produces a particular trajectory \(x_s(t)\) if there is no interaction with other nodes. Namely, \(x_s(t)\) is given as the solution of the differential equation \(dx/dt = R(x)\). \(H\) is called the output function that homogeneously applies to all nodes. The output function is used to generalize interaction and diffusion among nodes; instead of assuming that the node states themselves are directly visible to others, we assume that a certain aspect of node states (represented by \(H(x)\)) is visible and diffusing to other nodes.
Eq. \ref{(18.6)} can be further simplified by using the Laplacian matrix, as follows:
\[\frac{dx_i}{dt} =R(x_i) -\alpha{L} \begin{pmatrix} H(x_1) \\ H(x_2) \\ \vdots \\ H(x_n)\end{pmatrix} \label{(18.7)}\]
Now we want to study whether this network of coupled dynamical nodes can synchronize or not. Synchronization is possible if and only if the trajectory \(x_i(t) = x_s(t)\) for all \(i\) is stable. This is a new concept, i.e., to study the stability of a dynamic trajectory, not of a static equilibrium state. But we can still adopt the same basic procedure of linear stability analysis: represent the system’s state as the sum of the target state and a small perturbation, and then check if the perturbation grows or shrinks over time. Here we represent the state of each node as follows:
\[x_i(t) =x_s(t) +\Delta{x_i(t)} \label{(18.8)}\]
By plugging this new expression into Eq. \ref{(18.7)}, we obtain
\[\frac{d(x_s+\Delta{x_i})}{dt} =R(x_s+\Delta{x_i})- \alpha{L} \begin{pmatrix} H(x_s+\Delta{x_1}) \\ H(x_s+\Delta{x_2}) \\ \vdots \\ H(x_s +\Delta{x_n})\end{pmatrix} \label{(18.9)}\]
Since \(∆x_i\) are very small, we can linearly approximate \(R\) and \(H\) as follows:
\[\frac{dx_s}{dt} +\frac{d\Delta{x_i}}{dt} =R(x_s) +R'(x_s)\Delta{x_i}-\alpha{L} \begin{pmatrix} H(x_s)+H'(x_s)\Delta{x_1} \\ H(x_s)+H'(x_s)\Delta{x_2} \\ \vdots \\ H(x_s)+ H'(x_s)\Delta{x_n}\end{pmatrix} \label{(18.10)} \]
The first terms on both sides cancel each other out because \(x_s\) is the solution of \(dx/dt = R(x)\) by definition. But what about those annoying \(H(x_s)\)’s included in the vector in the last term? Is there any way to eliminate them? Well, the answer is that we don’t have to do anything, because the Laplacian matrix will eat them all. Remember that a Laplacian matrix always satisfies \(Lh = 0\), where \(h\) is the homogeneous (all-ones) vector. In this case, those \(H(x_s)\)’s constitute a homogeneous vector \(H(x_s)h\) altogether. Therefore, \(L(H(x_s)h) = H(x_s)Lh\) vanishes immediately, and we obtain
\[\frac{d\Delta x_i}{dt} = R'(x_s)\Delta x_i - \alpha H'(x_s) \left( L \begin{pmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end{pmatrix} \right)_i, \label{(18.11)}\]
or, by collecting all the \(∆x_i\)’s into a new perturbation vector \(∆x\),
\[\frac{d\Delta{x}}{dt} =(R'(x_s)I -\alpha{H'}(x_s)L)\Delta{x}, \label{(18.12)}\]
as the final result of linearization. Note that \(x_s\) still changes over time, so in order for this trajectory to be stable, all the eigenvalues of this rather complicated coefficient matrix \((R'(x_s)I −αH'(x_s)L)\) should always indicate stability at any point in time.
We can go even further. It is known that the eigenvalues of a matrix \(aX+bI\) are \(aλ_i+b\), where \(λ_i\) are the eigenvalues of \(X\). So, the eigenvalues of \((R'(x_s)I −αH'(x_s)L)\) are
\[-\alpha{\lambda_{i}}H'(x_s) +R'(x_s), \label{(18.13)}\]
where \(λ_i\) are \(L\)’s eigenvalues. The eigenvalue that corresponds to the smallest eigenvalue of \(L\), 0, is just \(R'(x_s)\), which is determined solely by the inherent dynamics of \(R(x)\) (and thus the nature of \(x_s(t)\)), so we can’t do anything about that. But all the other \(n − 1\) eigenvalues must be negative all the time, in order for the target trajectory \(x_s(t)\) to be stable. So, if we represent the second smallest eigenvalue (the spectral gap for connected networks) and the largest eigenvalue of \(L\) by \(λ_{2}\) and \(λ_n\), respectively, then the stability criteria can be written as
\[\alpha{\lambda_{i}}H'(x_{s}(t)) >R' (x_{s}(t)) \qquad{\text{for all t, and}} \label{(18.14)}\]
\[\alpha\lambda_n H'(x_s(t)) > R'(x_s(t)) \qquad \text{for all } t, \label{(18.15)}\]
because all other intermediate eigenvalues are “sandwiched” by \(λ_2\) and \(λ_n\). These inequalities provide us with a nice intuitive interpretation of the stability condition: the influence of diffusion of node outputs (left hand side) should be stronger than the internal dynamical drive (right hand side) all the time.
Note that, although \(α\) and \(λ_i\) are both non-negative, \(H'(x_s(t))\) could be either positive or negative, so which inequality is more important depends on the nature of the output function \(H\) and the trajectory \(x_s(t)\) (which is determined by the reaction term \(R\)). If \(H'(x_s(t))\) always stays non-negative, then the first inequality is sufficient (since the second inequality naturally follows as \(λ_2 ≤ λ_n\)), and thus the spectral gap is the only relevant information to determine the synchronizability of the network. But if not, we need to consider both the spectral gap and the largest eigenvalue of the Laplacian matrix.
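The eigenvalue-shift fact used above (the eigenvalues of \(aX + bI\) are \(aλ_i + b\)) is easy to verify numerically; this sketch of mine uses a symmetric matrix so that the eigenvalues are real, as for a Laplacian:

```python
import numpy as np

# Eigenvalues of a*X + b*I are a*lambda_i + b, where lambda_i are X's eigenvalues.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
X = X + X.T                      # symmetrize: real eigenvalues, like a Laplacian
a, b = -2.0, 0.7
lam = np.linalg.eigvalsh(X)
shifted = np.linalg.eigvalsh(a * X + b * np.eye(4))
assert np.allclose(np.sort(a * lam + b), shifted)
```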
Here is a simple example. Assume that a bunch of nodes are oscillating in an exponentially accelerating pace:
\[\frac{d\theta_i}{dt} = \beta\theta_i + \alpha \sum_{j \in N_i} (\theta_j - \theta_i) \label{(18.16)}\]
Here, \(θ_i\) is the phase of node \(i\), and \(β\) is the rate of exponential acceleration that homogeneously applies to all nodes. We also assume that the actual values of \(θ_i\) diffuse to and from neighbor nodes through edges. Therefore, \(R(θ) = βθ\) and \(H(θ) = θ\) in this model.
We can analyze the synchronizability of this model as follows. Since \(H'(θ) = 1 > 0\), we immediately know that the inequality \ref{(18.14)} is the only requirement in this case. Also, \(R'(θ) = β\), so the condition for synchronization is given by
\[\alpha\lambda_2 > \beta, \quad \text{or} \quad \lambda_2 > \frac{\beta}{\alpha}. \label{(18.17)}\]
Very easy. Let’s check this analytical result with numerical simulations on the Karate Club graph. We know that its spectral gap is 0.4685, so if \(β/α\) is below (or above) this value, the synchronization should (or should not) occur. Here is the code for such simulations:
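The original code listing is not reproduced here; the following is a hedged reconstruction of such a simulation (my own sketch, assuming NetworkX provides the Karate Club graph; the plotting from the original code is omitted):

```python
import networkx as nx
import numpy as np

# Hedged reconstruction of the simulation (the original listing is not shown
# here); the parameter names alpha, beta follow the text. Plotting is omitted.
alpha, beta = 2.0, 1.0
g = nx.karate_club_graph()
L = nx.laplacian_matrix(g, weight=None).toarray().astype(float)  # unweighted L

# Spectral gap (second smallest Laplacian eigenvalue) quoted in the text:
lam2 = np.sort(np.linalg.eigvalsh(L))[1]
print("lambda_2 = %.4f" % lam2)  # ~0.4685

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, g.number_of_nodes())  # initial phases
dt = 0.01
for _ in range(2000):
    # d(theta_i)/dt = beta*theta_i + alpha*sum_{j in N_i}(theta_j - theta_i)
    theta = theta + (beta * theta - alpha * (L @ theta)) * dt
# With beta/alpha = 0.5 > lambda_2, phase differences grow over time,
# so the network fails to synchronize, matching the text's prediction.
```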
Here I added a second plot that shows the phase distribution in a \((x,y) = (cos{θ},sin{θ})\) space, just to aid visual understanding.
In the code above, the parameter values are set to alpha = 2 and beta = 1, so \(λ_2 = 0.4685 < β/α = 0.5\). Therefore, it is predicted that the nodes won’t get synchronized. And, indeed, the simulation result confirms this prediction (Fig. 18.3.1(a)), where the nodes initially came close to each other in their phases but, as their oscillation speed became faster and faster, they eventually got dispersed again and the network failed to achieve synchronization. However, if beta is slightly lowered to 0.9 so that \(λ_2 = 0.4685 > β/α = 0.45\), the synchronized state becomes stable, which is confirmed again in the numerical simulation (Fig. 18.3.1(b)). It is interesting that such a slight change in the parameter value can cause a major difference in the global dynamics of the network. Also, it is rather surprising that the linear stability analysis can predict this shift so precisely. Mathematical analysis rocks!
Exercise \(\PageIndex{1}\)
Randomize the topology of the Karate Club graph and measure its spectral gap. Analytically determine the synchronizability of the accelerating oscillators model discussed above with \(α = 2\), \(β = 1\) on the randomized network. Then confirm your prediction by numerical simulations. You can also try several other network topologies.
Exercise \(\PageIndex{2}\)
The following is a model of coupled harmonic oscillators where complex node states are used to represent harmonic oscillation in a single-variable differential equation:
\[ \frac{dx_i}{dt} = i\omega x_i + \alpha \sum_{j \in N_i} (x^{\gamma}_{j} - x^{\gamma}_{i}) \label{(18.18)}\]
Here \(i\) in \(i\omega\) is used to denote the imaginary unit, to be distinguished from the node index \(i\). Since the states are complex, you will need to use \(Re(·)\) on both sides of the inequalities \ref{(18.14)} and \ref{(18.15)} to determine the stability.
Analyze the synchronizability of this model on the Karate Club graph, and obtain the condition for synchronization regarding the output exponent \(γ\). Then confirm your prediction by numerical simulations.
You may have noticed the synchronizability analysis discussed above is somewhat similar to the stability analysis of the continuous field models discussed in Chapter 14. Indeed, they are essentially the same analytical technique (although we didn’t cover stability analysis of dynamic trajectories back then). The only difference is whether the space is represented as a continuous field or as a discrete network. For the former, the diffusion is represented by the Laplacian operator \(∇^2\), while for the latter, it is represented by the Laplacian matrix \(L\) (note again that their signs are opposite for historical misfortune!). Network models allow us to study more complicated, nontrivial spatial structures, but there aren’t any fundamentally different aspects between these two modeling frameworks. This is why the same analytical approach works for both.

Figure \(\PageIndex{1}\): Visual outputs of Code 18.2. Time flows from top to bottom. (a) Result with \(α = 2\), \(β = 1\) (\(λ_2 < β/α\)). (b) Result with \(α = 2\), \(β = 0.9\) (\(λ_2 > β/α\)).
Note that the synchronizability analysis we covered in this section is still quite limited in its applicability to more complex dynamical network models. It relies on the assumption that dynamical nodes are homogeneous and that they are linearly coupled, so the analysis can’t generalize to the behaviors of heterogeneous dynamical networks with nonlinear couplings, such as the Kuramoto model discussed in Section 16.2, in which nodes oscillate at different frequencies and their couplings are nonlinear. Analysis of such networks will need more advanced nonlinear analytical techniques, which are beyond the scope of this textbook.
A power series is a type of series with terms involving a variable. More specifically, if the variable is \(x\), then all the terms of the series involve powers of \(x\). As a result, a power series can be thought of as an infinite polynomial. Power series are used to represent common functions and also to define new functions. In this section we define power series and show how to determine when a power series converges and when it diverges. We also show how to represent certain functions using power series.
Form of a Power Series
A series of the form
\[\sum_{n=0}^∞c_nx^n=c_0+c_1x+c_2x^2+\ldots ,\]
where \(x\) is a variable and the coefficients \(c_n\) are constants, is known as a power series. The series
\[1+x+x^2+\ldots =\sum_{n=0}^∞x^n\]
is an example of a power series. Since this series is a geometric series with ratio \(r=x\), we know that it converges if \(|x|<1\) and diverges if \(|x|≥1.\)
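A one-line numerical check of this convergence claim (my own addition):

```python
# Partial sums of the geometric power series sum x^n approach 1/(1-x) for |x| < 1.
x = 0.5
partial = sum(x**n for n in range(60))
print(partial)  # ~2.0 = 1/(1 - 0.5)
assert abs(partial - 1 / (1 - x)) < 1e-12
```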
Definition \(\PageIndex{1}\): Power Series
A series of the form
\[\sum_{n=0}^∞c_nx^n=c_0+c_1x+c_2x^2+\ldots \]
is a power series centered at \(x=0.\) A series of the form
\[\sum_{n=0}^∞c_n(x−a)^n=c_0+c_1(x−a)+c_2(x−a)^2+\ldots \]
is a power series centered at \(x=a\).
To make this definition precise, we stipulate that \(x^0=1\) and \((x−a)^0=1\) even when \(x=0\) and \(x=a\), respectively.
The series
\[\sum_{n=0}^∞\dfrac{x^n}{n!}=1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\ldots \]
and
\[\sum_{n=0}^∞n!x^n=1+x+2!x^2+3!x^3+\ldots \]
are both power series centered at \(x=0.\) The series
\[\sum_{n=0}^∞\dfrac{(x−2)^n}{(n+1)3^n}=1+\dfrac{x−2}{2⋅3}+\dfrac{(x−2)^2}{3⋅3^2}+\dfrac{(x−2)^3}{4⋅3^3}+\ldots \]
is a power series centered at \(x=2\).
Convergence of a Power Series
Since the terms in a power series involve a variable \(x\), the series may converge for certain values of \(x\) and diverge for other values of \(x\). For a power series centered at \(x=a\), the value of the series at \(x=a\) is given by \(c_0\). Therefore, a power series always converges at its center. Some power series converge only at that value of \(x\). Most power series, however, converge for more than one value of \(x\). In that case, the power series either converges for all real numbers \(x\) or converges for all \(x\) in a finite interval. For example, the geometric series \(\sum_{n=0}^∞x^n\) converges for all \(x\) in the interval \((−1,1)\), but diverges for all \(x\) outside that interval. We now summarize these three possibilities for a general power series.
Note \(\PageIndex{1}\): Convergence of a Power Series
Consider the power series \(\displaystyle \sum_{n=0}^∞c_n(x−a)^n.\) The series satisfies exactly one of the following properties:
i. The series converges at \(x=a\) and diverges for all \(x≠a.\)
ii. The series converges for all real numbers \(x\).
iii. There exists a real number \(R>0\) such that the series converges if \(|x−a|<R\) and diverges if \(|x−a|>R\). At the values \(x\) where \(|x−a|=R\), the series may converge or diverge.
Proof
Suppose that the power series is centered at \(a=0\). (For a series centered at a value of \(a\) other than zero, the result follows by letting \(y=x−a\) and considering the series \(\sum_{n=0}^∞c_ny^n\).)
We must first prove the following fact:
If there exists a real number \(d≠0\) such that \(\sum_{n=0}^∞c_nd^n\) converges, then the series \(\sum_{n=0}^∞c_nx^n\) converges absolutely for all \(x\) such that \(|x|<|d|.\)
Since \(\sum_{n=0}^∞c_nd^n\) converges, the nth term \(c_nd^n→0\) as \(n→∞\). Therefore, there exists an integer \(N\) such that \(|c_nd^n|≤1\) for all \(n≥N.\) Writing
\[|c_nx^n|=|c_nd^n| \left|\dfrac{x}{d}\right|^n, \nonumber\]
we conclude that, for all n≥N,
\[|c_nx^n|≤\left|\dfrac{x}{d}\right|^n. \nonumber\]
The series
\[\sum_{n=N}^∞\left|\dfrac{x}{d}\right|^n \nonumber\]
is a geometric series that converges if \(|\dfrac{x}{d}|<1.\) Therefore, by the comparison test, we conclude that \(\sum_{n=N}^∞c_nx^n\) also converges for \(|x|<|d|\). Since we can add a finite number of terms to a convergent series, we conclude that \(\sum_{n=0}^∞c_nx^n\) converges for \(|x|<|d|.\)
With this result, we can now prove the theorem. Consider the series
\[\sum_{n=0}^∞c_nx^n \nonumber\]
and let \(S\) be the set of real numbers for which the series converges. Suppose that \(S=\{0\}\). Then the series falls under case i.
Suppose that the set \(S\) is the set of all real numbers. Then the series falls under case ii. Suppose that \(S≠\{0\}\) and \(S\) is not the set of all real numbers. Then there exists a real number \(x^*≠0\) such that the series does not converge at \(x^*\). Thus, the series cannot converge for any \(x\) such that \(|x|>|x^*|\). Therefore, the set \(S\) must be a bounded set, which means that it must have a least upper bound. (This fact follows from the Least Upper Bound Property for the real numbers, which is beyond the scope of this text and is covered in real analysis courses.) Call that least upper bound \(R\). Since \(S≠\{0\}\), the number \(R>0\). Therefore, the series converges for all \(x\) such that \(|x|<R,\) and the series falls into case iii.
□
If a series \(\sum_{n=0}^∞c_n(x−a)^n\) falls into case iii. of Note, then the series converges for all \(x\) such that \(|x−a|<R\) for some \(R>0\), and diverges for all \(x\) such that \(|x−a|>R\). The series may converge or diverge at the values \(x\) where \(|x−a|=R\). The set of values \(x\) for which the series \(\sum_{n=0}^∞c_n(x−a)^n\) converges is known as the interval of convergence. Since the series diverges for all values \(x\) where \(|x−a|>R\), the length of the interval is \(2R\), and therefore, the radius of the interval is \(R\). The value \(R\) is called the radius of convergence. For example, since the series \(\sum_{n=0}^∞x^n\) converges for all values \(x\) in the interval \((−1,1)\) and diverges for all values \(x\) such that \(|x|≥1\), the interval of convergence of this series is \((−1,1)\). Since the length of the interval is \(2\), the radius of convergence is \(1\).
Definition: radius of convergence
Consider the power series \(\sum_{n=0}^∞c_n(x−a)^n\). The set of real numbers \(x\) where the series converges is the interval of convergence. If there exists a real number \(R>0\) such that the series converges for \(|x−a|<R\) and diverges for \(|x−a|>R,\) then \(R\) is the radius of convergence. If the series converges only at \(x=a\), we say the radius of convergence is \(R=0\). If the series converges for all real numbers \(x\), we say the radius of convergence is \(R=∞\) (Figure \(\PageIndex{1}\)).
To determine the interval of convergence for a power series, we typically apply the ratio test. In Example \(\PageIndex{1}\), we show the three different possibilities illustrated in Figure \(\PageIndex{1}\).
Example \(\PageIndex{1}\): Finding the Interval and Radius of Convergence
For each of the following series, find the interval and radius of convergence.
\(\displaystyle \sum_{n=0}^∞\dfrac{x^n}{n!}\) \(\displaystyle \sum_{n=0}^∞n!x^n\) \(\displaystyle \sum_{n=0}^∞\dfrac{(x−2)^n}{(n+1)3^n}\) Solution
a. To check for convergence, apply the ratio test. We have
\[ \begin{align*} ρ&=\lim_{n→∞} \left|\dfrac{\dfrac{x^{n+1}}{(n+1)!}}{\dfrac{x^n}{n!}}\right| \\[4pt] &=\lim_{n→∞} \left|\dfrac{x^{n+1}}{(n+1)!}⋅\dfrac{n!}{x^n}\right| \\[4pt] &=\lim_{n→∞}\left|\dfrac{x^{n+1}}{(n+1)⋅n!}⋅\dfrac{n!}{x^n}\right| \\[4pt] &=\lim_{n→∞}\left|\dfrac{x}{n+1}\right| \\[4pt] &=|x|\lim_{n→∞}\dfrac{1}{n+1} \\[4pt] &=0<1\end{align*}\]
for all values of \(x\). Therefore, the series converges for all real numbers \(x\). The interval of convergence is \((−∞,∞)\) and the radius of convergence is \(R=∞.\)
b. Apply the ratio test. For \(x≠0\), we see that
\[ \begin{align*} ρ&=\lim_{n→∞}\left|\dfrac{(n+1)!x^{n+1}}{n!x^n}\right| \\[4pt] &=\lim_{n→∞}|(n+1)x| \\[4pt] &=|x|\lim_{n→∞}(n+1) \\[4pt] &=∞. \end{align*}\]
Therefore, the series diverges for all \(x≠0\). Since the series is centered at \(x=0\), it must converge there, so the series converges only at \(x=0\). The interval of convergence is the single value \(x=0\) and the radius of convergence is \(R=0\).
c. In order to apply the ratio test, consider
\[ \begin{align*} ρ&=\lim_{n→∞}\left|\dfrac{\dfrac{(x−2)^{n+1}}{(n+2)3^{n+1}}}{\dfrac{(x−2)^n}{(n+1)3^n}}\right| \\[4pt] &=\lim_{n→∞} \left|\dfrac{(x−2)^{n+1}}{(n+2)3^{n+1}}⋅\dfrac{(n+1)3^n}{(x−2)^n}\right| \\[4pt] &=\lim_{n→∞} \left|\dfrac{(x−2)(n+1)}{3(n+2)}\right|\\[4pt] &=\dfrac{|x−2|}{3}.\end{align*}\]
The ratio \(ρ<1\) if \(|x−2|<3\). Since \(|x−2|<3\) implies that \(−3<x−2<3,\) the series converges absolutely if \(−1<x<5\). The ratio \(ρ>1\) if \(|x−2|>3\). Therefore, the series diverges if \(x<−1\) or \(x>5\). The ratio test is inconclusive if \(ρ=1\). The ratio \(ρ=1\) if and only if \(x=−1\) or \(x=5\). We need to test these values of \(x\) separately. For \(x=−1\), the series is given by
\[ \sum_{n=0}^∞\dfrac{(−1)^n}{n+1}=1−\dfrac{1}{2}+\dfrac{1}{3}−\dfrac{1}{4}+\ldots . \nonumber\]
Since this is the alternating harmonic series, it converges. Thus, the series converges at \(x=−1\). For \(x=5\), the series is given by
\[ \sum_{n=0}^∞\dfrac{1}{n+1}=1+\dfrac{1}{2}+\dfrac{1}{3}+\dfrac{1}{4}+\ldots . \nonumber\]
This is the harmonic series, which is divergent. Therefore, the power series diverges at \(x=5\). We conclude that the interval of convergence is \([−1,5)\) and the radius of convergence is \(R=3\).
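The ratio-test limit in part c. can be spot-checked numerically. The snippet below is my own sanity check, not part of the text: for the series \(\sum (x−2)^n/((n+1)3^n)\), the coefficient ratio \(|c_{n+1}/c_n|\) should approach \(1/3\), which is what makes the radius of convergence \(R=3\).

```python
# Hedged numeric check (not from the text): the coefficients of
# sum (x-2)^n / ((n+1) 3^n) are c_n = 1/((n+1) 3^n), and the ratio
# |c_{n+1}/c_n| = (n+1)/(3(n+2)) should approach 1/3 as n grows,
# giving radius of convergence R = 3.
def c(n):
    return 1.0 / ((n + 1) * 3**n)

ratios = [c(n + 1) / c(n) for n in range(1, 300)]
print(ratios[-1])  # close to 1/3
```

Note that the ratio test says nothing about the endpoints \(x=−1\) and \(x=5\); those still have to be tested separately, as in the solution above.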
Exercise \(\PageIndex{1}\)
Find the interval and radius of convergence for the series
\[ \sum_{n=1}^∞\dfrac{x^n}{\sqrt{n}}. \nonumber\]
Hint
Apply the ratio test to check for absolute convergence.
Answer
The interval of convergence is \([−1,1).\) The radius of convergence is \(R=1.\)
Representing Functions as Power Series
Being able to represent a function by an “infinite polynomial” is a powerful tool. Polynomial functions are the easiest functions to analyze, since they involve only the basic arithmetic operations of addition, subtraction, and multiplication. If we can represent a complicated function by an infinite polynomial, we can use the polynomial representation to differentiate or integrate it. In addition, we can use a truncated version of the polynomial expression to approximate values of the function. So the question is: when can we represent a function by a power series?
Consider again the geometric series
\[1+x+x^2+x^3+\ldots =\sum_{n=0}^∞x^n.\]
Recall that the geometric series
\[a+ar+ar^2+ar^3+\ldots \]
converges if and only if \(|r|<1.\) In that case, it converges to \(\dfrac{a}{1−r}\). Therefore, if \(|x|<1\), the geometric series above converges to \(\dfrac{1}{1−x}\) and we write
\[1+x+x^2+x^3+\ldots =\dfrac{1}{1−x} \quad \text{for } |x|<1.\]
As a result, we are able to represent the function \(f(x)=\dfrac{1}{1−x}\) by the power series
\[1+x+x^2+x^3+\ldots \quad \text{when } |x|<1.\]
We now show graphically how this series provides a representation for the function \(f(x)=\dfrac{1}{1−x}\) by comparing the graph of \(f\) with the graphs of several of the partial sums of this infinite series.
Example \(\PageIndex{2}\): Graphing a Function and Partial Sums of its Power Series
Sketch a graph of \(f(x)=\dfrac{1}{1−x}\) and the graphs of the corresponding partial sums \(S_N(x)=\sum_{n=0}^Nx^n\) for \(N=2,4,6\) on the interval \((−1,1)\). Comment on the approximation \(S_N\) as \(N\) increases.
Solution
From the graph in the figure you see that as \(N\) increases, \(S_N\) becomes a better approximation for \(f(x)=\dfrac{1}{1−x}\) for \(x\) in the interval \((−1,1)\).
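Since the figure is not reproduced here, the same comparison can be made numerically. The sample point \(x=0.5\) below is my own choice; any \(x\) in \((−1,1)\) works.

```python
# Quick numeric sketch (x = 0.5 is an arbitrary sample point): the
# partial sums S_N(x) = sum_{n=0}^{N} x^n should approach
# f(x) = 1/(1 - x), with the error shrinking as N grows.
def S(N, x):
    return sum(x**n for n in range(N + 1))

x = 0.5
f = 1 / (1 - x)  # exact value 2
errors = [abs(S(N, x) - f) for N in (2, 4, 6)]
print(errors)  # [0.25, 0.0625, 0.015625]
```

Each listed error is a factor of \(x^2=0.25\) smaller than the previous one, matching the geometric tail \(x^{N+1}/(1-x)\).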
Exercise \(\PageIndex{2}\)
Sketch a graph of \(f(x)=\dfrac{1}{1−x^2}\) and the corresponding partial sums \(S_N(x)=\sum_{n=0}^Nx^{2n}\) for \(N=2,4,6\) on the interval \((−1,1).\)
Hint

\(S_N(x)=1+x^2+\ldots +x^{2N}=\dfrac{1−x^{2(N+1)}}{1−x^2}\)

Answer
Next we consider functions involving an expression similar to the sum of a geometric series and show how to represent these functions using power series.
Example \(\PageIndex{3}\): Representing a Function with a Power Series
Use a power series to represent each of the following functions \(f\). Find the interval of convergence.
a. \(f(x)=\dfrac{1}{1+x^3}\)

b. \(f(x)=\dfrac{x^2}{4−x^2}\)

Solution
a. You should recognize this function \(f\) as the sum of a geometric series, because
\[ \dfrac{1}{1+x^3}=\dfrac{1}{1−(−x^3)}. \nonumber\]
Using the fact that, for \(|r|<1,\dfrac{a}{1−r}\) is the sum of the geometric series
\[ \sum_{n=0}^∞ar^n=a+ar+ar^2+\ldots , \nonumber\]
we see that, for \(|−x^3|<1,\)
\[ \begin{align*} \dfrac{1}{1+x^3}&=\dfrac{1}{1−(−x^3)} \\[4pt] &=\sum_{n=0}^∞(−x^3)^n \\[4pt] &=1−x^3+x^6−x^9+\ldots . \end{align*}\]
Since this series converges if and only if \(|−x^3|<1\), the interval of convergence is \((−1,1)\), and we have
\[ \dfrac{1}{1+x^3}=1−x^3+x^6−x^9+\ldots \quad \text{for } |x|<1.\nonumber\]
b. This function is not in the exact form of a sum of a geometric series. However, with a little algebraic manipulation, we can relate \(f\) to a geometric series. By factoring \(4\) out of the two terms in the denominator, we obtain
\[ \begin{align*} \dfrac{x^2}{4−x^2}&=\dfrac{x^2}{4\left(1−\dfrac{x^2}{4}\right)}\\[4pt] &=\dfrac{x^2}{4\left(1−\left(\dfrac{x}{2}\right)^2\right)}.\end{align*}\]
Therefore, we have
\[ \begin{align*} \dfrac{x^2}{4−x^2}&=\dfrac{x^2}{4\left(1−\left(\dfrac{x}{2}\right)^2\right)} \\[4pt] &= \dfrac{\dfrac{x^2}{4}}{1−\left(\dfrac{x}{2}\right)^2} \\[4pt] &= \sum_{n=0}^∞\dfrac{x^2}{4}\left(\dfrac{x}{2}\right)^{2n}. \end{align*}\]
The series converges as long as \(\left|\left(\dfrac{x}{2}\right)^2\right|<1\) (note that when \(\left|\left(\dfrac{x}{2}\right)^2\right|=1\) the series does not converge). Solving this inequality, we conclude that the interval of convergence is \((−2,2)\) and
\[ \begin{align*} \dfrac{x^2}{4−x^2}&=\sum_{n=0}^∞\dfrac{x^{2n+2}}{4^{n+1}}\\[4pt] &=\dfrac{x^2}{4}+\dfrac{x^4}{4^2}+\dfrac{x^6}{4^3}+ \ldots \end{align*}\]
for \(|x|<2.\)
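The representation in part b. can be spot-checked numerically. The sample point \(x=1\) below is my own choice; any point with \(|x|<2\) works.

```python
# Hedged numeric check at the sample point x = 1 (an assumption, not
# from the text): the partial sums of sum x^{2n+2}/4^{n+1} should
# approach x^2/(4 - x^2) = 1/3.
x = 1.0
exact = x**2 / (4 - x**2)
partial = sum(x**(2*n + 2) / 4**(n + 1) for n in range(50))
print(exact, partial)  # both close to 1/3
```

At \(x=1\) the series reduces to \(\sum 1/4^{n+1}\), a plain geometric series with sum \(1/3\), so fifty terms already agree with the exact value to machine precision.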
Exercise \(\PageIndex{3}\)
Represent the function \(f(x)=\dfrac{x^3}{2−x}\) using a power series and find the interval of convergence.
Hint
Rewrite f in the form \(f(x)=\dfrac{g(x)}{1−h(x)}\) for some functions \(g\) and \(h\).
Answer
\(\sum_{n=0}^∞\dfrac{x^{n+3}}{2^{n+1}}\) with interval of convergence \((−2,2)\)
In the remaining sections of this chapter, we will show ways of deriving power series representations for many other functions, and how we can make use of these representations to evaluate, differentiate, and integrate various functions.
Key Concepts

- For a power series centered at \(x=a\), one of the following three properties holds:
  i. The power series converges only at \(x=a\). In this case, we say that the radius of convergence is \(R=0\).
  ii. The power series converges for all real numbers \(x\). In this case, we say that the radius of convergence is \(R=∞\).
  iii. There is a real number \(R\) such that the series converges for \(|x−a|<R\) and diverges for \(|x−a|>R\). In this case, the radius of convergence is \(R.\)
- If a power series converges on a finite interval, the series may or may not converge at the endpoints.
- The ratio test may often be used to determine the radius of convergence.
- The geometric series \(\sum_{n=0}^∞x^n=\dfrac{1}{1−x}\) for \(|x|<1\) allows us to represent certain functions using geometric series.

Key Equations

Power series centered at \(x=0\)
\[ \sum_{n=0}^∞c_nx^n=c_0+c_1x+c_2x^2+\ldots \nonumber\]
Power series centered at \(x=a\)
\[ \sum_{n=0}^∞c_n(x−a)^n=c_0+c_1(x−a)+c_2(x−a)^2+\ldots \nonumber \]
Glossary

interval of convergence: the set of real numbers \(x\) for which a power series converges

power series: a series of the form \(\sum_{n=0}^∞c_nx^n\) is a power series centered at \(x=0\); a series of the form \(\sum_{n=0}^∞c_n(x−a)^n\) is a power series centered at \(x=a\)

radius of convergence: if there exists a real number \(R>0\) such that a power series centered at \(x=a\) converges for \(|x−a|<R\) and diverges for \(|x−a|>R\), then \(R\) is the radius of convergence; if the power series only converges at \(x=a\), the radius of convergence is \(R=0\); if the power series converges for all real numbers \(x\), the radius of convergence is \(R=∞\)

Contributors
Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
Vopěnka's principle and Vopěnka cardinals Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following:
For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.
For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there are a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle.
As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. One alternative is to view Vopěnka's principle as an axiom in a class theory, such as von Neumann-Gödel-Bernays. Another is to consider a _Vopěnka cardinal_, that is, a cardinal $\kappa$ that is inaccessible and such that $V_\kappa$ satisfies Vopěnka's principle when "proper class" is taken to mean "subset of $V_\kappa$ of cardinality $\kappa$". These three alternatives are, in the order listed, strictly increasing in strength (see http://mathoverflow.net/questions/45602/can-vopenkas-principle-be-violated-definably).
Equivalent statements
The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [1]
Vopěnka cardinal
References

1. Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549-589.
Graph Isomorphisms
We first look at the definition of an isomorphism between two graphs $G$ and $H$:
Definition: For two graphs $G = (V(G), E(G))$ and $H = (V(H), E(H))$, the graph $G$ is isomorphic to $H$ if there exists a bijection $f: V(G) \rightarrow V(H)$ such that $\left\{ {x, y}\right\} \in E(G)$ if and only if $\left\{ {f(x), f(y)}\right\} \in E(H)$. If this is the case, we say that there exists an Isomorphism from graph $G$ to graph $H$, denoted $G \cong H$.
While this definition may look confusing, it essentially says that there exists a function $f$ such that for every pair of vertices $\{x, y \}$ forming an edge in the edge set $E(G)$, the image pair $\{ f(x), f(y) \}$ forms an edge in the edge set $E(H)$, and conversely. If this holds for all pairs $\{ x, y \}$, both those that form an edge in $E(G)$ and those that do not, then the graph $G$ is said to be isomorphic to the graph $H$.
For example, let's look at the following two graphs $G$ and $H$:
The first step to determine if two graphs are isomorphic is to check whether the number of vertices in graph $G$ is equal to the number of vertices in $H$, that is, whether $|V(G)| = |V(H)|$.
In this case, both graph $G$ and graph $H$ have the same number of vertices. It's also good to check to see if the number of edges are the same in both graphs. In this case, both graphs have $7$ edges.
The next step is to look at the degree sequences of graphs $G$ and $H$, as they should be the same. The degree sequence of $G$ is $(2, 2, 3, 3, 4)$. The degree sequence of $H$ is $(2, 2, 3, 3, 4)$. Now let's see if there exists a bijection $f$ such that $f: V(G) \rightarrow V(H)$.
We can first acknowledge that $\deg (a)$ in graph $G$ is $4$, and $\deg (v)$ in graph $H$ is $4$. Hence $\deg (a) = \deg (v)$, so if a bijection exists, then vertices $a$ and $v$ must correspond to each other. It turns out that these graphs are isomorphic, as we can verify by checking all of the edges:
Let's try the following bijection:
$f(a) = v$ $f(b) = x$ $f(c) = w$ $f(d) = y$ $f(e) = z$
Edge in $G$ | Correspondence in $H$
$\{a, b\} \in E(G)$ | $\{f(a), f(b) \} \in E(H)$
$\{a, c\} \in E(G)$ | $\{f(a), f(c) \} \in E(H)$
$\{a, d\} \in E(G)$ | $\{f(a), f(d) \} \in E(H)$
$\{a, e\} \in E(G)$ | $\{f(a), f(e) \} \in E(H)$
$\{b, c\} \in E(G)$ | $\{f(b), f(c) \} \in E(H)$
$\{b, d\} \in E(G)$ | $\{f(b), f(d) \} \in E(H)$
$\{c, e\} \in E(G)$ | $\{f(c), f(e) \} \in E(H)$
Thus, since there are $7$ edges in both $G$ and $H$ and the bijection $f : V(G) \to V(H)$ preserves edges, we conclude that $G \cong H$.
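The edge check above can be automated. The snippet below is my own illustration: the edge set of $H$ is inferred from the bijection given in the text (an assumption, since the original figure is not reproduced here).

```python
from itertools import permutations

# Edge sets of the worked example (vertex labels from the text).
G_vertices = ['a', 'b', 'c', 'd', 'e']
G_edges = [{'a','b'}, {'a','c'}, {'a','d'}, {'a','e'},
           {'b','c'}, {'b','d'}, {'c','e'}]
H_vertices = ['v', 'w', 'x', 'y', 'z']
# Inferred from the bijection f in the text (assumption).
H_edges = [{'v','x'}, {'v','w'}, {'v','y'}, {'v','z'},
           {'x','w'}, {'x','y'}, {'w','z'}]

def is_isomorphism(f, E_G, E_H):
    """Check that the bijection f maps every edge of G to an edge of H.
    Since f is injective and both edge sets have the same size, this
    forces an exact correspondence between the two edge sets."""
    mapped = [{f[u] for u in e} for e in E_G]
    return len(E_G) == len(E_H) and all(e in E_H for e in mapped)

f = {'a': 'v', 'b': 'x', 'c': 'w', 'd': 'y', 'e': 'z'}
print(is_isomorphism(f, G_edges, H_edges))  # True

# Brute force over all 5! vertex bijections: at least one must work.
count = sum(1 for p in permutations(H_vertices)
            if is_isomorphism(dict(zip(G_vertices, p)), G_edges, H_edges))
print(count >= 1)  # True
```

Brute-force search over all bijections is only feasible for small graphs (it grows as $|V|!$); practical isomorphism testing uses refinement heuristics instead.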
Sequences of Complex Numbers
A sequence of real numbers is an infinite ordered list $(a_n)_{n=1}^{\infty}$ of real numbers. We can similarly define a sequence of complex numbers.
Definition: A Sequence of Complex Numbers is an infinite ordered list of complex numbers $(z_n)_{n=1}^{\infty} = (z_1, z_2, ..., z_n, ...)$ where $z_n \in \mathbb{C}$ for each $n \in \mathbb{N}$. For each $n \in \mathbb{N}$, $z_n$ is called the $n^{\mathrm{th}}$ Term of the sequence. It is convenient to denote the first term of a sequence by $z_1$, though we can start a sequence at any index; this will not cause any problems.
For example, the first few terms of the complex sequence $(n + i^n)_{n=1}^{\infty}$ are $(1 + i, \; 1, \; 3 - i, \; 5, \; 5 + i, ...)$.
Many of the definitions familiar with sequences of real numbers are readily extended to sequences of complex numbers.
Definition: A sequence of complex numbers $(z_n)_{n=1}^{\infty}$ is said to Converge to $Z \in \mathbb{C}$ if for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then $\mid z_n - Z \mid < \epsilon$. If the sequence $(z_n)_{n=1}^{\infty}$ does not converge to any $Z \in \mathbb{C}$ then $(z_n)_{n=1}^{\infty}$ is said to Diverge.
For example, consider the sequence $\left ( \frac{i^n}{n} \right)_{n=1}^{\infty}$. The first few terms of this sequence are $\left ( i, \; -\frac{1}{2}, \; -\frac{i}{3}, \; \frac{1}{4}, \; \frac{i}{5}, ... \right )$.
We claim that this sequence converges to $Z = 0$. To prove this, let $\epsilon > 0$ be given. Then we want $\biggr \lvert \frac{i^n}{n} - 0 \biggr \rvert < \epsilon$. Now:

$\biggr \lvert \frac{i^n}{n} - 0 \biggr \rvert = \frac{\mid i^n \mid}{n} = \frac{1}{n}.$

For any given $\epsilon > 0$, choose $N \in \mathbb{N}$ such that $\displaystyle{N > \frac{1}{\epsilon}}$. Then if $n \geq N$ we have that:

$\frac{1}{n} \leq \frac{1}{N} < \epsilon.$

In other words, for $n \geq N$:

$\biggr \lvert \frac{i^n}{n} - 0 \biggr \rvert < \epsilon.$
Therefore the sequence $\left ( \frac{i^n}{n} \right )_{n=1}^{\infty}$ converges to $Z = 0$.
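The $\epsilon$-$N$ argument above can be illustrated numerically. The snippet below is my own sketch (the value $\epsilon = 10^{-3}$ is an arbitrary sample): choosing $N > 1/\epsilon$ really does force every later term within $\epsilon$ of $0$.

```python
# Numeric illustration of the epsilon-N definition for z_n = i^n / n:
# with N > 1/eps, every term z_n for n >= N satisfies |z_n - 0| < eps,
# since |i^n| = 1 and hence |z_n| = 1/n.
eps = 1e-3
N = int(1 / eps) + 1
tail_ok = all(abs(1j**n / n) < eps for n in range(N, N + 1000))
print(tail_ok)  # True
```

Only a finite window of the tail is inspected here, of course; the full claim $|z_n| = 1/n < \epsilon$ for all $n \geq N$ is what the proof establishes.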
$\frac{d}{dt}|_{t=0} \alpha(t) = [X,Y](p)$, where $\alpha(t):= \phi_{-\sqrt{t}}^Y \circ \phi_{-\sqrt{t}}^X \circ \phi_{\sqrt{t}}^Y \circ \phi_{\sqrt {t}}^X$.
I think Fredrik's proof is nice everywhere except here (the last formula):
Now we use that $(\phi_h^X)^{-1}=\phi_{-h}^X$ to get from $ \lim_{h \to 0} \frac {1}{h^2}\left[ f \circ \phi_h^Y \circ \phi_h^X(p) - f \circ \phi_h^X \circ \phi_h^Y(p) \right] $ to \begin{equation} \lim_{h \to 0} \frac {1}{h^2}\left[ f- f \circ \phi_h^X \circ \phi_h^Y \circ \phi_{-h}^X \circ \phi_{-h}^Y(p) \right] \tag{1} \end{equation}
I wonder why this is correct as given. I think by using that $(\phi_h^X)^{-1}=\phi_{-h}^X$ we can get
$$\lim_{h \to 0} \frac {1}{h^2}\left[ f\circ \phi_h^Y \circ \phi_h^X(p)- f \circ \phi_h^X \circ \phi_h^Y \circ \phi_{-h}^X \circ \phi_{-h}^Y \circ \phi_h^Y \circ \phi_h^X(p) \right] \tag{2}$$
But it is hard to see why (2) actually implies (1). So I even suspect that his proof is not correct. Can anybody explain it to me or show me how to correct it?
Intuitively $\phi_h^Y \circ \phi_h^X(p)$ somehow "converges" to $p$. But in the formal proof, I am afraid we can't take the limit of this part first.
PS: I haven't read through the proof given by Owen Barrett, but I feel like the proof using Taylor expansion is too long, and I would prefer a direct proof.
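As a concrete sanity check of the bracket formula quoted at the top, here is a toy computation of my own (not from the linked proofs): on $\mathbb{R}^2$ take $X = \partial/\partial x$ and $Y = x\,\partial/\partial y$, so $[X,Y] = \partial/\partial y$. Both flows are available in closed form, so $\alpha(t)$ can be evaluated exactly.

```python
import math

# Toy vector fields on R^2 (my own choice): X = d/dx, Y = x d/dy,
# with flows phi_h^X(x, y) = (x + h, y) and phi_h^Y(x, y) = (x, y + h x).
# The Lie bracket is [X, Y] = d/dy, i.e. the constant field (0, 1).
def phi_X(h, p):
    x, y = p
    return (x + h, y)

def phi_Y(h, p):
    x, y = p
    return (x, y + h * x)

def alpha(t, p):
    # alpha(t) = phi_{-s}^Y o phi_{-s}^X o phi_s^Y o phi_s^X (p), s = sqrt(t)
    s = math.sqrt(t)
    q = phi_X(s, p)
    q = phi_Y(s, q)
    q = phi_X(-s, q)
    q = phi_Y(-s, q)
    return q

p = (2.0, 5.0)
t = 1e-6
# (alpha(t) - p) / t should approximate [X, Y](p) = (0, 1).
a = alpha(t, p)
approx = ((a[0] - p[0]) / t, (a[1] - p[1]) / t)
print(approx)
```

For these particular fields one can check by hand that $\alpha(t)(p) = (x_0, y_0 + t)$ exactly, so the difference quotient is $(0,1)$ for every $t>0$, which at least makes the claimed derivative plausible.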
Difference between revisions of "Lower attic"
From Cantor's Attic
Latest revision as of 13:37, 27 May 2018
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
* $\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
* stable ordinals
* The ordinals of infinite time Turing machines
* admissible ordinals and relativized Church-Kleene $\omega_1^x$
* Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
* the omega one of chess:
** $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ = the supremum of the game values for white of all positions in infinite chess
** $\omega_1^{\mathfrak{Ch},c}$ = the supremum of the game values for white of the computable positions in infinite chess
** $\omega_1^{\mathfrak{Ch}}$ = the supremum of the game values for white of the finite positions in infinite chess
* the Takeuti-Feferman-Buchholz ordinal
* the Bachmann-Howard ordinal
* the large Veblen ordinal
* the small Veblen ordinal
* the Extended Veblen function
* the Feferman-Schütte ordinal $\Gamma_0$
* $\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
* indecomposable ordinals
* the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
* Hilbert's hotel and other toys in the playroom
* $\omega$, the smallest infinity
* down to the parlour, where large finite numbers dream
Outline: Derivation of the Laplace equation, the wave equation and the diffusion equation; methods to solve equations: separation of variables, Fourier series and integrals, and characteristics; maximum principles; Green's functions.
Here you can find homework problems and solutions. The problems are selected from the textbook (Partial Differential Equations: An Introduction, Walter A. Strauss, 2nd edition, John Wiley & Sons, Ltd.) and are listed here for your convenience.

Basic Info
Course Name: Partial Differential Equations
Lecturer: Chair Prof. Xiao-Ping Wang
Sessions: T1A (Wednesday 16:30-17:20 at Room 1505), T1B (Monday 17:30-18:20 at Room 1511)
Course Page: http://www.math.ust.hk/~mawang/teaching/math4052/math4052a.html
Semester: 2016-Autumn

Homework Problems
Homework 1 (Due in tutorial class in the week of Sept 12):
1. (Page 5, Q2(a)(b)). Which of the following operators are linear?
(a) $ \mathscr{ L } u = u_x + x u_y $.
(b) $ \mathscr{ L } u = u_x + u u_y $.
2. (Page 5, Q3(c)(d)). For each of the following equations, state the order and whether it is nonlinear, linear inhomogeneous, or linear homogeneous; provide reasons.
(a) $ u_t - u_{xxt} + u u_x = 0 $.
(b) $ u_{tt} - u_{xx} + x^2 = 0 $.
3. (Page 9, Q1). Solve the first-order equation $ 2u_t + 3u_x = 0 $ with the auxiliary condition $ u=\sin x $ when $ t=0 $.
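For the transport equation in Q1.3, the method of characteristics suggests the candidate $u(x,t) = \sin\!\left(x - \tfrac{3}{2}t\right)$. The finite-difference check below is my own working, not the posted solution.

```python
import math

# Hedged check of the candidate u(x, t) = sin(x - 1.5 t) (my own
# working, not the official solution): it should satisfy
# 2 u_t + 3 u_x = 0 and the auxiliary condition u(x, 0) = sin(x).
def u(x, t):
    return math.sin(x - 1.5 * t)

h = 1e-5
x, t = 0.7, 0.3
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)  # central difference in t
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)  # central difference in x
residual = 2 * u_t + 3 * u_x
print(residual)  # near zero
```

The check is at a single sample point $(x,t)=(0.7,0.3)$, chosen arbitrarily; analytically the residual vanishes identically since $2(-\tfrac{3}{2})\cos\theta + 3\cos\theta = 0$.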
Homework 2 (Due in tutorial class in the week of Sept 19):
1. (Page 27, Q3). Solve the boundary problem $ u'' = 0 $ for $ 0<x<1 $ with $ u'(0)+k u(0) = 0 $ and $ u'(1)\pm k u(1)=0 $. Do the $+$ and $-$ cases separately. What is special about the case $k=2$?
2. (Page 31, Q1). What is the type of each of the following equations?
(a) $ u_{xx} - u_{xy} + 2 u _y + u_{yy} - 3 u_{yx} + 4u = 0$.
(b) $ 9 u_{xx} + 6 u_{xy} + u_{yy} + u_{x} = 0$.
3. (Page 32, Q6). Consider the equation $ 3u_y + u_{xy} = 0$. What is its type? Find the general solution. (Hint: Substitute $v=u_y$.) With the auxiliary conditions $u(x,0)= e^{-3x}$ and $u_y(x,0)=0$, does a solution exist? Is it unique?
Homework 3 (Due in tutorial class in the week of Sept 26):
1. (Page 38, Q1). Solve $ u_{tt} = c^2 u_{xx}$, $u(x,0)=e^x$, $u_t(x,0) = \sin x$.
2. (Page 38, Q2). Solve $ u_{tt} = c^2 u_{xx}$, $u(x,0)=\log (1+x^2)$, $u_t(x,0) = 4+x$.
3. (Page 38, Q9). Solve $u_{xx} - 3 u_{xt} - 4u_{tt} = 0$, $u(x,0)=x^2$, $u_t(x,0)=e^x$. (Hint: Factor the operator as we did for the wave equation.)
4. (Page 41, Q4). If $u(x,t)$ satisfies the wave equation $u_{tt} = u_{xx}$, prove the identity $$u(x+h,t+k) + u(x-h,t-k) = u(x+k,t+h) + u(x-k,t-h)$$ for all $x,t,h$, and $k$. Sketch the quadrilateral $Q$ whose vertices are the arguments in the identity.
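For Q1, d'Alembert's formula gives the candidate $u(x,t) = \tfrac{1}{2}\left(e^{x+ct}+e^{x-ct}\right) + \tfrac{1}{2c}\left(\cos(x-ct)-\cos(x+ct)\right)$. The numeric spot check below is my own working (and $c=2$ is an arbitrary sample value), not the posted solution.

```python
import math

# Hedged check of a d'Alembert candidate for u_tt = c^2 u_xx with
# u(x,0) = e^x, u_t(x,0) = sin(x).  (c = 2 is my own sample value.)
c = 2.0

def u(x, t):
    return (math.exp(x + c*t) + math.exp(x - c*t)) / 2 \
        + (math.cos(x - c*t) - math.cos(x + c*t)) / (2*c)

# Second-order central differences at an arbitrary sample point.
h = 1e-4
x, t = 0.4, 0.6
u_tt = (u(x, t+h) - 2*u(x, t) + u(x, t-h)) / h**2
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t)) / h**2
print(u_tt - c**2 * u_xx)  # near zero
```

One can also verify the initial data directly: at $t=0$ the cosine terms cancel, so $u(x,0)=e^x$, and differentiating in $t$ at $t=0$ gives $\sin x$.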
Homework 4 (Due in tutorial class in the week of Oct. 3):
1. (Page 52, Q3). Use the method of Green's function to solve the diffusion equation $u_t = k u_{xx}$, subject to the initial condition $u(x,0)=\phi(x)$, where $ \phi(x) = e^{3x} $.
2. (Page 52, Q4). Solve the diffusion equation above if $\phi(x) = e^{-x}$ for $x>0$ and $\phi(x)=0$ for $x<0$.
3. (Page 52, Q5). Prove properties (a)-(e) seen on Page 47 of the diffusion equation $u_t = k u_{xx}$.
Homework 5 (Due in tutorial class in the week of Oct. 10):
1. (Page 89, Q2). Consider a metal rod ($0<x<l$), insulated along its sides but not at its ends, which is initially at temperature $=1$. Suddenly both ends are plunged into a bath of temperature $=0$. Write the differential equation, boundary conditions, and initial condition. Write the formula for the temperature $u(x,t)$ at later times. In this problem, assume the infinite series expansion $$ 1 = \frac{4}{\pi} \left( \sin \frac{\pi x}{l} + \frac{1}{3}\sin\frac{3 \pi x}{l} + \frac{1}{5}\sin\frac{5 \pi x}{l}+ \cdots \right) $$
2. (Page 89, Q4). Consider waves in a resistant medium that satisfy the problem $$ u_{tt} = c^2 u_{xx} - r u_t \quad \text{for} \quad 0<x<l $$ $$ u= 0 \quad \text{at both ends} $$ $$ u(x,0)= \phi(x) \quad u_{t}(x,0) = \psi(x),$$ where $r$ is a constant, $0<r<2\pi c/l$. Write down the series expansion of the solution.
3. (Page 92, Q2). Consider the equation $u_{tt}=c^2 u_{xx}$ for $0<x<l$, with the boundary conditions $u_x(0,t) = 0, u(l,t)=0$ (Neumann at the left, Dirichlet at the right). Show that the eigenfunctions are $\cos\left[ \left( n+\frac{1}{2}\right)\pi x/l\right]$. Write the series expansion for a solution $u(x,t)$.
4. (Page 92, Q3). Solve the Schrödinger equation $u_t = i k u_{xx}$ for real $k$ in the interval $0<x<l$ with the boundary conditions $u_x(0,t) = 0, u(l,t) = 0$.
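The sine-series expansion of the constant function $1$ quoted in Q1 can be sanity-checked numerically. The sample point $x = l/2$ below is my own choice; there the series reduces to the Leibniz series $\tfrac{4}{\pi}(1 - \tfrac{1}{3} + \tfrac{1}{5} - \cdots) = 1$.

```python
import math

# Numeric check of 1 = (4/pi) * sum over odd k of sin(k pi x / l) / k
# at the sample point x = l/2 (my choice; convergence is slow, so many
# terms are used).
l = 1.0
x = 0.5 * l
terms = 5001  # number of odd k's to include
s = (4 / math.pi) * sum(math.sin(k * math.pi * x / l) / k
                        for k in range(1, 2 * terms, 2))
print(s)  # close to 1
```

Since the series is alternating at this point, the truncation error is bounded by the first omitted term, roughly $\tfrac{4}{\pi}\cdot\tfrac{1}{2\cdot\text{terms}}$.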
Homework 6 (Due in tutorial class in the week of Oct. 16):
1. (Page 45, Q2). Consider a solution of the diffusion equation $u_t = u_{xx}$ in $\{ 0 \leq x \leq l, 0 \leq t < \infty \}$.
(a) Let $M(T) = $ the maximum of $u(x,t)$ in the closed rectangle $\{ 0 \leq x \leq l, 0 \leq t \leq T \}$. Does $M(T)$ increase or decrease as a function of $T$?
(b) Let $m(T) = $ the minimum of $u(x,t)$ in the closed rectangle $\{ 0 \leq x \leq l, 0 \leq t \leq T \}$. Does $m(T)$ increase or decrease as a function of $T$?
2. (Page 46, Q4). Consider the diffusion equation $u_t = u_{xx}$ in $\{ 0<x<1,0<t< \infty \}$ with $u(0,t)=u(1,t)=0$ and $u(x,0)=4x(1-x)$.
(a) Show that $ 0 < u(x,t) < 1 $ for all $t>0$ and $0<x<1$.
(b) Show that $ u(x,t) = u(1-x,t) $ for all $t\geq 0$ and $0 \leq x \leq 1$.
(c) Use the energy method to show that $ \int _0 ^1 u^2 dx$ is a strictly decreasing function of $t$.
Homework 7 (Due in tutorial class in the week of Oct. 31):
1. (Page 111, Q2). Let $\phi(x) \equiv x^2$ for $0 \leq x \leq 1 = l$.
(a) Calculate its Fourier sine series.
(b) Calculate its Fourier cosine series.
2. (Page 111, Q4). Find the Fourier cosine series of the function $ | \sin x | $ in the interval $(-\pi,\pi)$. Use it to find the sums $$ \sum_{n=1}^\infty \frac{1}{4 n^2 - 1} \quad \text{and} \quad \sum_{n=1}^\infty \frac{(-1)^n}{4n^2 - 1}. $$
3. (Page 134, Q1). $\sum_{n=0}^\infty (-1)^n x^{2n}$ is a geometric series.
(a) Does it converge pointwise in the interval $-1<x<1$?
(b) Does it converge uniformly in the interval $-1<x<1$?
(c) Does it converge in the $L^2$ sense in the interval $-1<x<1$? (Hint: You can compute its partial sums explicitly.)
4. (Page 134, Q5). Let $\phi(x)=0$ for $0<x<1$ and $\phi(x)=1$ for $1<x<3$.
(a) Find the first four nonzero terms of its Fourier cosine series explicitly.
(b) For each $x$ $(0\leq x \leq 3)$, what is the sum of this series?
(c) Does it converge to $\phi(x)$ in the $L^2$ sense? Why?
(d) Put $x=0$ to find the sum $$ 1 + \frac{1}{2} - \frac{1}{4} - \frac{1}{5} + \frac{1}{7} + \frac{1}{8} - \frac{1}{10} - \frac{1}{11} + \dots .$$
5. (Page 134~Page 135, Q7). Let $$ \phi(x) = \left\{ \begin{align} -1-x \quad & \text{for}\quad -1 < x < 0 \\ +1-x \quad & \text{for}\quad 0 < x < 1. \end{align} \right.$$
(a) Find the full Fourier series of $\phi(x)$ in the interval $(-1,1)$. Find the first three nonzero terms explicitly.
(b) Does it converge in the mean square sense?
(c) Does it converge pointwise?
(d) Does it converge uniformly to $\phi(x)$ in the interval $(-1,1)$?
Homework 8 (Due in tutorial class in the week of Nov. 7):
1. (Page 160, Q6). Solve $u_{xx} + u_{yy}=1$ in the annulus $a<r<b$ with $u(x,y)$ vanishing on both parts of the boundary $r=a$ and $r=b$.
2. (Page 160, Q9). A spherical shell with inner radius $1$ and outer radius $2$ has a steady-state temperature distribution. Its inner boundary is held at $100\unicode{x2103}$. Its outer boundary satisfies $\partial u / \partial r = - \gamma < 0 $, where $\gamma$ is a constant.
(a) Find the temperature. (Hint: The temperature depends only on the radius.)
(b) What are the hottest and coldest temperatures?
(c) Can you choose $\gamma$ so that the temperature on its outer boundary is $20\unicode{x2103}$?
3. (Page 164~Page 165, Q1). Solve $u_{xx} + u_{yy}=0$ in the rectangle $0<x<a$, $0<y<b$ with the following boundary conditions: $$ \begin{align} & u_x = -a \quad \text{on } x=0 \qquad & u_x = 0 \quad \text{on } x=a \\ & u_y = b \quad \text{on } y=0 \qquad & u_y = 0 \quad \text{on } y=b. \end{align}$$ (Hint: Note that the necessary condition of Exercise 6.1.11 is satisfied. A shortcut is to guess that the solution might be a quadratic polynomial in $x$ and $y$.)
4. (Page 165, Q4). Find the harmonic function in the square $\{ 0<x<1, 0<y<1 \}$ with the boundary conditions $u(x,0)=x$, $u(x,1)=0$, $u_x(0,y)=0$, $u_x( 1,y )=y^2$.
Homework 9 (Due in tutorial class in the week of Nov. 14):
1. (Page 172, Q1). Suppose that $u$ is a harmonic function in the disk $D=\{r<2\}$ and that $u=3 \sin 2 \theta + 1$ for $r=2$. Without finding the solution, answer the following questions:
(a) Find the maximum value of $u$ in $\bar{D}$.
(b) Calculate the value of $u$ at the origin.
2. (Page 172, Q2). Solve $u_{xx} + u_{yy}=0$ in the disk $D=\{r<a\}$ with the boundary condition $$ u = 1 + 3 \sin \theta \quad \text{on } r=a. $$
3. (Page 175, Q1). Solve $u_{xx} + u_{yy}=0$ in the exterior $\{r>a\}$ of a disk, with the boundary condition $u=1+3\sin\theta$ on $r=a$, and the condition at infinity that $u$ be bounded as $r\longrightarrow\infty$.
4. (Page 176, Q10). Solve $u_{xx} + u_{yy}=0$ in the quarter-disk $\{ x^2 + y^2 < a^2, x>0, y>0 \}$ with the following BCs: $$ u=0 \quad \text{on } x=0 \text{ and on } y=0 \quad \text{and} \quad \frac{\partial u}{\partial r} = 1 \quad \text{on } r=a.$$ Write the answer as an infinite series and write the first two nonzero terms explicitly.
Homework 10 (Due in tutorial class in the week of Nov. 21):
1. (Page 184, Q2). Prove the uniqueness up to constants of the Neumann problem using the energy method.
2. (Page 184, Q3). Prove the uniqueness of the Robin problem $ \partial u / \partial n + a(\mathbf{x}) u(\mathbf{x}) = h(\mathbf{x})$ provided that $a(\mathbf{x})>0$ on the boundary.
3. (Page 187, Q1). Derive the representation formula for harmonic functions in two dimensions: $$ u(\mathbf{x}_0) = \frac{1}{2\pi} \int_{\text{bdy} D} \left[ u(\mathbf{x}) \frac{\partial}{\partial n} ( \log |\mathbf{x} - \mathbf{x}_0| ) - \frac{\partial u}{\partial n} \log |\mathbf{x} - \mathbf{x}_0| \right] ds.$$
4. (Page 187, Q2). Let $\phi(\mathbf{x})$ be any $\mathbf{C}^2$ function defined on all of three-dimensional space that vanishes outside some sphere. Show that $$ \phi(\mathbf{0}) = - \iiint \frac{1}{|\mathbf{x}|} \Delta \phi(\mathbf{x}) \frac{d\mathbf{x}}{4\pi}. $$ The integration is taken over the region where $\phi(\mathbf{x})$ is not zero.
Homework 11 (Due in tutorial class in the week of Nov. 28):
1. (Page 196, Q1). Find the one-dimensional Green's function for the interval $(0,l)$. The three properties defining it can be restated as follows.
(a) It solves $G''(x) = 0$ for $x\neq x_0$ ("harmonic").
(b) $G(0) = G(l) = 0$.
(c) $G(x)$ is continuous at $x_0$ and $G(x) + \frac{1}{2} | x - x_0 |$ is harmonic at $x_0$.
2. (Page 196, Q6). Find the Green's function for the half-plane $ \{ (x,y): y>0 \} $. Use it to solve the Dirichlet problem in the half-plane with boundary values $h(x)$. Calculate the solution with $u(x,0) = 1$.
3. (Page 197, Q9). Find the Green's function for the tilted half-space $ \{ (x,y,z): ax+by+cz>0 \} $. (Hint: Either do it from scratch by reflecting across the tilted plane, or change variables in the double integral (3) $$ 0 = \iint_{\text{bdy }D} \left( u \frac{\partial H}{\partial n} - \frac{\partial u}{\partial n} H \right) dS $$ using a linear transformation.)
4. (Page 197, Q17).
(a) Find the Green's function for the quadrant $$ Q = \{ (x,y) : x>0,y>0 \}.$$ (Hint: Either use the method of reflection or reduce to the half-plane problem by the transformation $(x,y) \mapsto (x^2-y^2,2xy)$.)
(b) Use your answer in the previous part to solve the Dirichlet problem $$ \begin{eqnarray} u_{xx} + u_{yy} = 0 \text{ in } Q, \quad u(0,y) = g(y) \text{ for } y>0, \\ u(x,0) = h(x) \text{ for } x>0. \end{eqnarray} $$

Answers and Hints
Homework 1: Answers and Hints.
Homework 2: Answers and Hints.
Homework 3: Answers and Hints.
Homework 4: Answers and Hints.
Homework 5: Answers and Hints.
Homework 6: Answers and Hints.
Homework 7: Answers and Hints.
Homework 8: Answers and Hints.
Homework 9: Answers and Hints.
Homework 10: Answers and Hints.
Homework 11: Answers and Hints.
Introduction to Differential Equations
Consider the equation $f(x) = x^2 -3x + 1 = 0$. To solve this equation is to find the roots of $f$, which we can obtain with the quadratic formula $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. For the example above, the roots are $x = \frac{3 + \sqrt{5}}{2}$ and $x = \frac{3 - \sqrt{5}}{2}$. An equation such as this one involves only a single static unknown, $x$. In real-life applications, however, we often deal with equations that involve a rate of change. For example, consider a function $P$ which measures the population at a given time $t$. If the population grows at a rate proportional to its current size, such an equation for $P$ might be:(1) $$\frac{dP}{dt} = kP$$
In such a case, we have an equation that contains a derivative. Such an equation is known as a differential equation.
Definition: A Differential Equation is an equation that contains a derivative of an unknown function. An Ordinary Differential Equation (abbreviated as O.D.E.) is an equation that contains only ordinary derivatives, while a Partial Differential Equation (abbreviated as P.D.E.) is an equation that contains partial derivatives. The Order of a differential equation is the order of the highest derivative, ordinary or partial, present in the equation.
Differential equations appear in many applications ranging from biology to physics, and even economics as such equations often provide great models of our complex world.
Let's look at some examples of differential equations. One such first order differential equation is $y = y'$. One of the solutions to this differential equation is $y = e^x$ since $y' = e^x$. Another such solution is $y = 0$ since $y' = 0$. More generally, $y = Ce^x$ where $C \in \mathbb{R}$, is a solution to the differential equation $y = y'$.
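The claim that every function of the form $y = Ce^x$ solves $y = y'$ is easy to verify symbolically; here is a minimal sketch using Python's sympy library (an illustration of the check, not part of the original text):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = C * sp.exp(x)  # the candidate family of solutions y = C e^x

# For a solution of y = y', the residual y' - y must vanish identically,
# for every value of the constant C.
residual = sp.simplify(sp.diff(y, x) - y)
print(residual)  # 0
```

Since the residual is zero as a symbolic expression in both $x$ and $C$, this confirms the whole one-parameter family at once, not just particular values of $C$.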
Recall that a non-vertical line on the $xy$ plane can be given by the equation $y = mx + b$. Note that $y' = m$, which should intuitively make sense as the derivative of $y$ tells us the slope at any point on the curve generated by $y$, which in this case is a straight line with constant slope $m$. Therefore we can write the equation of a line in the form $y = xy' + b$. Note that any non-vertical line $y = mx + b$ is a solution to this differential equation.
Another example of a differential equation is Newton's second law in physics, which states that $F = ma$ where $F$ is the net force on an object, $m$ is the mass of that object, and $a$ is the acceleration of that object. The net force $F$ is influenced by the force of gravity, which is equal to the mass of the object $m$ multiplied by the acceleration from gravity near the surface of the earth, $g = 9.8 \, \mathrm{m}/\mathrm{s}^2$. There is also a force as a result of air resistance, which is equal to a specific drag constant multiplied by the velocity of that object, that is $\gamma v$, and so the net force $F$ (taking the downward direction as positive) can be computed as:(2) $$F = mg - \gamma v$$
Recall from calculus that at an arbitrary time $t$, the derivative of the velocity of an object is equal to the acceleration of that object at time $t$, that is $\frac{dv}{dt} = a$, and so we get that:(3) $$m \frac{dv}{dt} = mg - \gamma v$$
Therefore the equation $\frac{dv}{dt} = g - \frac{\gamma v}{m}$ is a differential equation that models a falling object near the surface of the earth.
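This model can be explored numerically. The sketch below integrates $\frac{dv}{dt} = g - \frac{\gamma v}{m}$ with Euler's method; the parameter values $\gamma = 0.5$ and $m = 2$ are illustrative assumptions, not values from the text:

```python
# Euler's method for dv/dt = g - (gamma/m) * v.
# gamma and m below are assumed demo values, not from the text.
g = 9.8        # m/s^2, gravitational acceleration
gamma = 0.5    # kg/s, drag constant (assumed)
m = 2.0        # kg, mass (assumed)
dt = 0.01      # s, time step
v = 0.0        # m/s, initial velocity

for _ in range(10000):            # integrate out to t = 100 s
    v += dt * (g - gamma * v / m)

print(round(v, 2))                # 39.2
```

The velocity levels off at the terminal velocity $v = mg/\gamma = 39.2 \,\mathrm{m/s}$, which is exactly the equilibrium where the right side of the differential equation vanishes.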
In general, finding the solutions to a differential equation can be difficult. For example, consider the following differential equation:(4)
Finding the solutions to this differential equation is much more complicated, and it is not obvious what functions to even test. Clearly $y = 0$ is a solution; however, is it the only solution? We will develop methods for evaluating various types of differential equations.
In this talk we present recent results on the Hall-MHD system. We consider the incompressible MHD-Hall equations in $\mathbb{R}^3$.
$\partial_t u + u \cdot \nabla u + \nabla p = \left ( \nabla \times B \right )\times B + \nu \Delta u,$ $\nabla \cdot u =0, \quad \nabla \cdot B =0, $ $\partial_t B - \nabla \times \left (u \times B\right ) + \nabla \times \left (\left (\nabla \times B\right )\times B \right ) = \mu \Delta B,$ $u\left (x,0 \right )=u_0\left (x\right ) ; \quad B\left (x,0 \right )=B_0\left (x\right ).$ Here $u=\left (u_1, u_2, u_3 \right ) = u \left (x,t \right ) $ is the velocity of the charged fluid, $B=\left (B_1, B_2, B_3 \right ) $ the magnetic field induced by the motion of the charged fluid, and $p=p \left (x,t \right )$ the pressure of the fluid. The positive constants $\nu$ and $\mu$ are the viscosity and the resistivity coefficients. Compared with the usual viscous incompressible MHD system, the above system contains the extra term $\nabla \times \left (\left (\nabla \times B\right )\times B \right ) $, the so-called Hall term. This term is important when the magnetic shear is large, where magnetic reconnection happens. On the other hand, in the case of laminar flows where the shear is weak, one ignores the Hall term, and the system reduces to the usual MHD. Compared to the case of the usual MHD, the history of the fully rigorous mathematical study of the Cauchy problem for the Hall-MHD system is very short. The global existence of weak solutions in the periodic domain is proved in [1] by a Galerkin approximation. The global existence in the whole space $\mathbb{R}^3$, as well as the local well-posedness of smooth solutions, is proved in [2], where the global existence of smooth solutions for small initial data is also established. A refined form of the blow-up criteria and small-data global existence is obtained in [3]. A temporal decay estimate of the global small solutions is deduced in [4]. In the case of zero resistivity we present the finite time blow-up result for the solutions obtained in [5].
We note that this is quite a rare case: as far as the authors know, it is one of the few instances where a blow-up result for incompressible flows has been proved.
35Q35 ; 76W05
We'll now consider the nonhomogeneous linear second order equation
\begin{equation}\label{eq:2.3.1}
y''+p(x)y'+q(x)y=f(x), \end{equation}
where the forcing function \(f\) isn't identically zero. The next theorem, an extension of Theorem \((2.1.1)\), gives sufficient conditions for existence and uniqueness of solutions of initial value problems for \eqref{eq:2.3.1}. We omit the proof, which is beyond the scope of this book.
Theorem \(\PageIndex{1}\)
Suppose \(p,\) \(q,\) and \(f\) are continuous on an open interval \((a,b),\) let \(x_0\) be any point in \((a,b),\) and let \(k_0\) and \(k_1\) be arbitrary real numbers. Then the initial value problem
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f(x), \quad y(x_0)=k_0,\quad y'(x_0)=k_1 \end{eqnarray*}
has a unique solution on \((a,b).\)
To find the general solution of \eqref{eq:2.3.1} on an interval \((a,b)\) where \(p\), \(q\), and \(f\) are continuous, it's necessary to find the general solution of the associated homogeneous equation
\begin{equation}\label{eq:2.3.2}
y''+p(x)y'+q(x)y=0 \end{equation}
on \((a,b)\). We call \eqref{eq:2.3.2} the \( \textcolor{blue}{\mbox{complementary equation}} \) for \eqref{eq:2.3.1}.
The next theorem shows how to find the general solution of \eqref{eq:2.3.1} if we know one solution \(y_p\) of \eqref{eq:2.3.1} and a fundamental set of solutions of \eqref{eq:2.3.2}. We call \(y_p\) a \( \textcolor{blue}{\mbox{particular solution}} \) of \eqref{eq:2.3.1}; it can be any solution that we can find, one way or another.
Theorem \(\PageIndex{2}\)
Suppose \(p,\) \(q,\) and \(f\) are continuous on \((a,b).\) Let \(y_p\) be a particular solution of
\begin{equation}\label{eq:2.3.3}
y''+p(x)y'+q(x)y=f(x) \end{equation}
on \((a,b)\), and let \(\{y_1,y_2\}\) be a fundamental set of solutions of the complementary equation
\begin{equation}\label{eq:2.3.4}
y''+p(x)y'+q(x)y=0 \end{equation}
on \((a,b)\). Then \(y\) is a solution of \eqref{eq:2.3.3} on \((a,b)\) if and only if
\begin{equation}\label{eq:2.3.5}
y=y_p+c_1y_1+c_2y_2, \end{equation}
where \(c_1\) and \(c_2\) are constants.
Proof
We first show that \(y\) in \eqref{eq:2.3.5} is a solution of \eqref{eq:2.3.3} for any choice of the constants \(c_1\) and \(c_2\). Differentiating \eqref{eq:2.3.5} twice yields
\begin{eqnarray*}
y'=y_p'+c_1y_1'+c_2y_2' \quad \mbox{and} \quad y''=y_p''+ c_1y_1''+c_2y_2'', \end{eqnarray*}
so
\begin{eqnarray*}
y''+p(x)y'+q(x)y&=&(y_p''+c_1y_1''+c_2y_2'') +p(x)(y_p'+c_1y_1'+c_2y_2') +q(x)(y_p+c_1y_1+c_2y_2)\\ &=&(y_p''+p(x)y_p'+q(x)y_p)+c_1(y_1''+p(x)y_1'+q(x)y_1) +c_2(y_2''+p(x)y_2'+q(x)y_2)\\ &=& f+c_1\cdot0+c_2\cdot0=f, \end{eqnarray*}
since \(y_p\) satisfies \eqref{eq:2.3.3} and \(y_1\) and \(y_2\) satisfy \eqref{eq:2.3.4}.
Now we'll show that every solution of \eqref{eq:2.3.3} has the form \eqref{eq:2.3.5} for some choice of the constants \(c_1\) and \(c_2\). Suppose \(y\) is a solution of \eqref{eq:2.3.3}. We'll show that \(y-y_p\) is a solution of \eqref{eq:2.3.4}, and therefore of the form \(y-y_p=c_1y_1+c_2y_2\), which implies \eqref{eq:2.3.5}. To see this, we compute
\begin{eqnarray*}
(y-y_p)''+p(x)(y-y_p)'+q(x)(y-y_p)&=&(y''-y_p'')+p(x)(y'-y_p') +q(x)(y-y_p)\\ &=&(y''+p(x)y'+q(x)y) -(y_p''+p(x)y_p'+q(x)y_p)\\ &=&f(x)-f(x)=0, \end{eqnarray*}
since \(y\) and \(y_p\) both satisfy \eqref{eq:2.3.3}.
We say that \eqref{eq:2.3.5} is the \( \textcolor{blue}{\mbox{general solution of \(\eqref{eq:2.3.3}\) on \((a,b)\).}} \)
If \(P_0\), \(P_1\), \(P_2\), and \(F\) are continuous and \(P_0\) has no zeros on \((a,b)\), then Theorem \((2.3.2)\) implies that the general solution of
\begin{equation}\label{eq:2.3.6}
P_0(x)y''+P_1(x)y'+P_2(x)y=F(x) \end{equation}
on \((a,b)\) is \(y=y_p+c_1y_1+c_2y_2\), where \(y_p\) is a particular solution of \eqref{eq:2.3.6} on \((a,b)\) and \(\{y_1,y_2\}\) is a fundamental set of solutions of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=0 \end{eqnarray*}
on \((a,b)\). To see this, we rewrite \eqref{eq:2.3.6} as
\begin{eqnarray*}
y''+{P_1(x)\over P_0(x)}y'+{P_2(x)\over P_0(x)}y={F(x)\over P_0(x)} \end{eqnarray*}
and apply Theorem \((2.3.2)\) with \(p=P_1/P_0\), \(q=P_2/P_0\), and \(f=F/P_0\).
To avoid awkward wording in examples and exercises, we won't specify the interval \((a,b)\) when we ask for the general solution of a specific linear second order equation, or for a fundamental set of solutions of a homogeneous linear second order equation. Let's agree that this always means that we want the general solution (or a fundamental set of solutions, as the case may be) on every open interval on which \(p\), \(q\), and \(f\) are continuous if the equation is of the form \eqref{eq:2.3.3}, or on which \(P_0\), \(P_1\), \(P_2\), and \(F\) are continuous and \(P_0\) has no zeros, if the equation is of the form \eqref{eq:2.3.6}. We leave it to you to identify these intervals in specific examples and exercises.
For completeness, we point out that if \(P_0\), \(P_1\), \(P_2\), and \(F\) are all continuous on an open interval \((a,b)\), but \(P_0\) \( \textcolor{blue}{\mbox{does}} \) have a zero in \((a,b)\), then \eqref{eq:2.3.6} may fail to have a general solution on \((a,b)\) in the sense just defined. Exercises \((2.1E.42)\), \((2.1E.43)\), and \((2.1E.44)\) illustrate this point for a homogeneous equation.
In this section we limit ourselves to applications of Theorem \((2.3.2)\) where we can guess at the form of the particular solution.
Example \(\PageIndex{1}\)
(a) Find the general solution of
\begin{equation}\label{eq:2.3.7}
y''+y=1. \end{equation}
(b) Solve the initial value problem
\begin{equation}\label{eq:2.3.8}
y''+y=1, \quad y(0)=2,\quad y'(0)=7. \end{equation} Answer
(a) We can apply Theorem \((2.3.2)\) with \((a,b)= (-\infty,\infty)\), since the functions \(p\equiv0\), \(q\equiv1\), and \(f\equiv1\) in \eqref{eq:2.3.7} are continuous on \((-\infty,\infty)\). By inspection we see that \(y_p\equiv1\) is a particular solution of \eqref{eq:2.3.7}. Since \(y_1=\cos x\) and \(y_2=\sin x\) form a fundamental set of solutions of the complementary equation \(y''+y=0\), the general solution of \eqref{eq:2.3.7} is
\begin{equation}\label{eq:2.3.9}
y=1+c_1\cos x+c_2\sin x. \end{equation}
(b) Imposing the initial condition \(y(0)=2\) in \eqref{eq:2.3.9} yields \(2=1+c_1\), so \(c_1=1\). Differentiating \eqref{eq:2.3.9} yields
\begin{eqnarray*}
y'=-c_1\sin x+c_2\cos x. \end{eqnarray*}
Imposing the initial condition \(y'(0)=7\) here yields \(c_2=7\), so the solution of \eqref{eq:2.3.8} is
\begin{eqnarray*}
y=1+\cos x+7\sin x. \end{eqnarray*}
Figure \(2.3.1\) is a graph of this function.
Figure: \(2.3.1\)
\(y=1+\cos x+7\sin x\)
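The computation in Example 1 can be reproduced with a computer algebra system; a minimal sketch using sympy's `dsolve` (our choice of tool, not the book's):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' + y = 1 subject to y(0) = 2, y'(0) = 7.
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), 1),
                ics={y(0): 2, y(x).diff(x).subs(x, 0): 7})

# The difference from the closed form in the text should vanish identically.
print(sp.simplify(sol.rhs - (1 + sp.cos(x) + 7 * sp.sin(x))))  # 0
```

The zero residual confirms that \(y = 1 + \cos x + 7\sin x\) is the unique solution of the initial value problem on \((-\infty,\infty)\).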
Example \(\PageIndex{2}\)
(a) Find the general solution of
\begin{equation}\label{eq:2.3.10}
y''-2y'+y=-3-x+x^2. \end{equation}
(b) Solve the initial value problem
\begin{equation}\label{eq:2.3.11}
y''-2y'+y=-3-x+x^2, \quad y(0)=-2,\quad y'(0)=1. \end{equation} Answer
(a) The characteristic polynomial of the complementary equation
\begin{eqnarray*}
y''-2y'+y=0 \end{eqnarray*}
is \(r^2-2r+1=(r-1)^2\), so \(y_1=e^x\) and \(y_2=xe^x\) form a fundamental set of solutions of the complementary equation. To guess a form for a particular solution of \eqref{eq:2.3.10}, we note that substituting a second degree polynomial \(y_p=A+Bx+Cx^2\) into the left side of \eqref{eq:2.3.10} will produce another second degree polynomial with coefficients that depend upon \(A\), \(B\), and \(C\). The trick is to choose \(A\), \(B\), and \(C\) so the polynomials on the two sides of \eqref{eq:2.3.10} have the same coefficients; thus, if
\begin{eqnarray*}
y_p=A+Bx+Cx^2 \quad \mbox{then} \quad y_p'=B+2Cx\mbox{\quad and \quad} y_p''=2C, \end{eqnarray*}
so
\begin{eqnarray*}
y_p''-2y_p'+y_p&=&2C-2(B+2Cx)+(A+Bx+Cx^2)\\ &=&(2C-2B+A)+(-4C+B)x+Cx^2=-3-x+x^2. \end{eqnarray*}
Equating coefficients of like powers of \(x\) on the two sides of the last equality yields
\begin{eqnarray*}
C&=&\phantom{-}1\phantom{.}\\ B-4C&=&-1\phantom{.}\\ A-2B+2C&=& -3, \end{eqnarray*}
so \(C=1\), \(B=-1+4C=3\), and \(A=-3-2C+2B=1\). Therefore \(y_p=1+3x+x^2\) is a particular solution of \eqref{eq:2.3.10} and Theorem \((2.3.2)\) implies that
\begin{equation}\label{eq:2.3.12}
y=1+3x+x^2+e^x(c_1+c_2x) \end{equation}
is the general solution of \eqref{eq:2.3.10}.
(b) Imposing the initial condition \(y(0)=-2\) in \eqref{eq:2.3.12} yields \(-2=1+c_1\), so \(c_1=-3\). Differentiating \eqref{eq:2.3.12} yields
\begin{eqnarray*}
y'=3+2x+e^x(c_1+c_2x)+c_2e^x, \end{eqnarray*}
and imposing the initial condition \(y'(0)=1\) here yields \(1=3+c_1+c_2\), so \(c_2=1\). Therefore the solution of \eqref{eq:2.3.11} is
\begin{eqnarray*}
y=1+3x+x^2-e^x(3-x). \end{eqnarray*}
Figure \(2.3.2\) is a graph of this solution.
Figure: \(2.3.2\)
\(y=1+3x+x^2-e^x(3-x)\)
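As a sanity check on Example 2, one can substitute the answer back into the equation and the initial conditions symbolically; a short sketch with sympy (the library choice is ours, not the text's):

```python
import sympy as sp

x = sp.symbols('x')
y = 1 + 3*x + x**2 - sp.exp(x) * (3 - x)  # solution found in the text

# Evaluate the left side y'' - 2y' + y; the exponential terms cancel.
lhs = sp.expand(y.diff(x, 2) - 2*y.diff(x) + y)
print(lhs)                                  # x**2 - x - 3
print(y.subs(x, 0), y.diff(x).subs(x, 0))   # -2 1
```

The left side reduces to \(x^2 - x - 3\), matching the forcing function \(-3 - x + x^2\), and the values at \(x=0\) match the initial conditions \(y(0)=-2\), \(y'(0)=1\).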
Example \(\PageIndex{3}\)
Find the general solution of
\begin{equation}\label{eq:2.3.13}
x^2y''+xy'-4y=2x^4 \end{equation}
on \((-\infty,0)\) and \((0,\infty)\).
Answer
In Example \((2.3.1)\), we verified that \(y_1=x^2\) and \(y_2=1/x^2\) form a fundamental set of solutions of the complementary equation
\begin{eqnarray*}
x^2y''+xy'-4y=0 \end{eqnarray*}
on \((-\infty,0)\) and \((0,\infty)\). To find a particular solution of \eqref{eq:2.3.13}, we note that if \(y_p=Ax^4\), where \(A\) is a constant, then both sides of \eqref{eq:2.3.13} will be constant multiples of \(x^4\), and we may be able to choose \(A\) so the two sides are equal. This is true in this example, since if \(y_p=Ax^4\) then
\begin{eqnarray*}
x^2y_p''+xy_p'-4y_p=x^2(12Ax^2)+x(4Ax^3)-4Ax^4=12Ax^4=2x^4 \end{eqnarray*}
if \(A=1/6\); therefore, \(y_p=x^4/6\) is a particular solution of \eqref{eq:2.3.13} on \((-\infty,\infty)\). Theorem \((2.3.2)\) implies that the general solution of \eqref{eq:2.3.13} on \((-\infty,0)\) and \((0,\infty)\) is
\begin{eqnarray*}
y={x^4\over6}+c_1x^2+{c_2\over x^2}. \end{eqnarray*}

The Principle of Superposition
The next theorem enables us to break a nonhomogeneous equation into simpler parts, find a particular solution for each part, and then combine their solutions to obtain a particular solution of the original problem.
Theorem \(\PageIndex{3}\)
Suppose \(y_{p_1}\) is a particular solution of
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f_1(x) \end{eqnarray*}
on \((a,b)\) and \(y_{p_2}\) is a particular solution of
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f_2(x) \end{eqnarray*}
on \((a,b)\). Then
\begin{eqnarray*}
y_p=y_{p_1}+y_{p_2} \end{eqnarray*}
is a particular solution of
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f_1(x)+f_2(x) \end{eqnarray*}
on \((a,b)\).
Proof
If \(y_p=y_{p_1}+y_{p_2}\) then
\begin{eqnarray*}
y_p''+p(x)y_p'+q(x)y_p&=&(y_{p_1}+y_{p_2})''+p(x)(y_{p_1}+y_{p_2})' +q(x)(y_{p_1}+y_{p_2})\\ &=&\left(y_{p_1}''+p(x)y_{p_1}'+q(x)y_{p_1}\right) +\left(y_{p_2}''+p(x)y_{p_2}'+q(x)y_{p_2}\right)\\ &=&f_1(x)+f_2(x). \end{eqnarray*}
It's easy to generalize Theorem \((2.3.3)\) to the equation
\begin{equation}\label{eq:2.3.14}
y''+p(x)y'+q(x)y=f(x) \end{equation}
where
\begin{eqnarray*}
f=f_1+f_2+\cdots+f_k; \end{eqnarray*}
thus, if \(y_{p_i}\) is a particular solution of
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f_i(x) \end{eqnarray*}
on \((a,b)\) for \(i=1\), \(2\), \(\dots\), \(k\), then \(y_{p_1}+y_{p_2}+\cdots+y_{p_k}\) is a particular solution of \eqref{eq:2.3.14} on \((a,b)\). Moreover, by a proof similar to the proof of Theorem \((2.3.3)\) we can formulate the principle of superposition in terms of a linear equation written in the form
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F(x) \end{eqnarray*}
(Exercise \((2.3E.39)\)); that is, if \(y_{p_1}\) is a particular solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_1(x) \end{eqnarray*}
on \((a,b)\) and \(y_{p_2}\) is a particular solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_2(x) \end{eqnarray*}
on \((a,b)\), then \(y_{p_1}+y_{p_2}\) is a solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_1(x)+F_2(x) \end{eqnarray*}
on \((a,b)\).
Example \(\PageIndex{4}\)
The function \(y_{p_1}=x^4/15\) is a particular solution of
\begin{equation}\label{eq:2.3.15}
x^2y''+4xy'+2y=2x^4 \end{equation}
on \((-\infty,\infty)\) and \(y_{p_2}=x^2/3\) is a particular solution of
\begin{equation}\label{eq:2.3.16}
x^2y''+4xy'+2y=4x^2 \end{equation}
on \((-\infty,\infty)\). Use the principle of superposition to find a particular solution of
\begin{equation}\label{eq:2.3.17}
x^2y''+4xy'+2y=2x^4+4x^2 \end{equation}
on \((-\infty,\infty)\).
Answer
The right side \(F(x)=2x^4+4x^2\) in \eqref{eq:2.3.17} is the sum of the right sides
\begin{eqnarray*}
F_1(x)=2x^4 \quad \mbox{ and } \quad F_2(x)=4x^2 \end{eqnarray*}
in \eqref{eq:2.3.15} and \eqref{eq:2.3.16}. Therefore the principle of superposition implies that
\begin{eqnarray*}
y_p=y_{p_1}+y_{p_2}={x^4\over15}+{x^2\over3} \end{eqnarray*}
is a particular solution of \eqref{eq:2.3.17}.
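All three claims in Example 4 can be verified symbolically; a brief sketch with sympy (our tooling choice, not part of the original exercise):

```python
import sympy as sp

x = sp.symbols('x')
# Left side of the equation as an operator on a candidate solution.
L = lambda y: x**2 * y.diff(x, 2) + 4*x*y.diff(x) + 2*y

yp1, yp2 = x**4 / 15, x**2 / 3
print(sp.expand(L(yp1)))        # 2*x**4
print(sp.expand(L(yp2)))        # 4*x**2
print(sp.expand(L(yp1 + yp2)))  # 2*x**4 + 4*x**2
```

The third line is exactly the principle of superposition at work: because \(L\) is linear, the output for \(y_{p_1}+y_{p_2}\) is the sum of the outputs for \(y_{p_1}\) and \(y_{p_2}\).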
If $(X,\tau)$ has more than $1$ point and is $T_2$ and connected, do we necessarily have $|X| =|\tau|$?
Consider the topology on $\mathbb{R}^2$ generated by subsets that are open in some line through the origin. This topological space is connected, has the cardinality of the continuum, and has $2^{\mathfrak{c}}$ open subsets.
The $\mathfrak{c}$-long line is $T_2$, connected and of size continuum $\mathfrak{c}$, but has $2^{\mathfrak{c}}$ many open sets, since it contains a discrete subset of size continuum.
The more familiar $\omega_1$-long line is $T_2$, connected (even path connected, also locally connected), and size $2^\omega$. There are at least $2^{\omega_1}$ many open sets, since there is a size $\omega_1$ discrete family (such as the centers of the half-open intervals used to construct the long line), and so you can place intervals around each of them as you like, making $2^{\omega_1}$ many distinct open sets.
If $2^\omega<2^{\omega_1}$, which holds for example under CH (although this hypothesis is strictly weaker than CH), then the ordinary long line itself is an example. But in any case, the $\kappa$-long line is an example for every cardinal $\kappa\geq 2^\omega$.
The density topology $\tau$ on $\mathbb{R}$ is also a natural counterexample. It consists of all (Lebesgue) measurable $X \subseteq \mathbb{R}$ such that every point $x \in X$ is a density one point of $X$ which means: Whenever $\{I_n: n \geq 1\}$ is a sequence of open intervals containing $x$ whose lengths decrease to $0$, $$\lim_{n \to \infty} \frac{\mu(X \cap I_n)}{\mu(I_n)} = 1$$
It is clear that $\tau$ is a ccc topology on $\mathbb{R}$ that extends the usual topology. It is also clear that every conull set is in $\tau$ so that $|\tau| = 2^{\mathfrak{c}}$.
Claim: (Theorem 3 in C. Goffman, D. Waterman, Approximately continuous transformations, Proc. Amer. Math. Soc. 12 (1961), 116-121) Every interval $I \subseteq \mathbb{R}$ is $\tau$-connected.
Proof: Towards a contradiction, suppose $X, Y$ are nonempty members of $\tau$ that partition $I$. It suffices to construct a nested sequence $\{ (a_n, b_n): n \geq 1\}$ of intervals $(a_n, b_n) \subseteq I$ such that $a_n < a_{n+1} < b_{n+1} < b_{n}$, $b_n - a_n \to 0$ and $\mu(X \cap (a_n, b_n)) = 0.5 (b_n - a_n)$, since then $a = \lim a_n \in I$ cannot be a density one point of either of the sets $X, Y$.
To construct such a sequence of intervals, use the following facts.
(1) If $a < b$ are in $I$ and $\mu(X \cap (a, b)) = 0.5 (b-a)$, then both $X, Y$ meet $(a, b)$.
(2) If $a < b$, $a \in X$ and $b \in Y$, then for all sufficiently small $r > 0$, there exists $a < c < d < b$ such that $d - c = r$ and $\mu(X \cap (c, d)) = 0.5 (d - c)$.
(1) is trivial, and (2) holds because for all sufficiently small $r > 0$, the function $h:[a, b] \to \mathbb{R}$ defined by $$h(x) = \frac{\mu((x - r, x + r) \cap X)}{2r}$$ is continuous and satisfies $h(a) > 0.9$ and $h(b) < 0.1$.
The Golomb space $(\mathbb N,\tau)$ (a "universal" counterexample to many questions), also gives a counterexample with $|\mathbb N|=\aleph_0$ and $|\tau|=\mathfrak c$.
I recall that the
Golomb space is the set $\mathbb N$ of natural numbers endowed with the topology $\tau$ generated by the base consisting of the arithmetic progressions $a+\mathbb N_0b=\{a+nb:n\ge 0\}$ where $a,b$ are relatively prime.
It is well-known that the Golomb space is connected and Hausdorff. Since it contains a countably infinite disjoint family of nonempty open sets (like any infinite Hausdorff space), taking unions of subfamilies gives $\mathfrak c\le|\tau|\le|\mathcal P(\mathbb N)|=\mathfrak c$.
In place of the Golomb space one can take any other countable Hausdorff connected space.
Such spaces have appeared in other questions of Dominic van der Zypen.
This is probably overkill, but assuming the Continuum Hypothesis there is a connected Hausdorff space $X$ such that
$|X|=\omega$; and $|\tau|=\omega_1$.
It is Example 2.1 in this paper. http://www.ams.org/journals/proc/1994-122-03/S0002-9939-1994-1287102-2/S0002-9939-1994-1287102-2.pdf. It's fairly clear from the construction that $\omega_1$ of the $U_{\alpha i}$'s (the sub-basic open sets) must be different.
Here are:
1) a metrizable example.
Namely, consider a complete graph on $\alpha\ge\mathfrak{c}$ vertices. Its 1-skeleton, endowed with the geodesic metric with edges of length one, has cardinal $\alpha$ and has $2^{\alpha}$ open subsets (since for any subset of the set of vertices, the open ball of radius $1/2$ around this subset determines this subset).
2) a separable example.
Namely, to fix ideas choose $S=\mathbf{R}/\mathbf{Z}$. Start from $X=S^{\mathfrak{c}}$, which is separable. Choose $D$ a countable dense subgroup (e.g., generated by any dense countable subset), and $F=S^{(\mathfrak{c})}$, the finitely supported subgroup. Then the topological group $G=FD$ is a dense subgroup, is connected because it has the dense connected subgroup $F$, and is separable because it has the dense countable subset $D$ (beware that $F$ is not separable), and $|G|=\mathfrak{c}$. Using suitable inverse images of projections, we see that $G$ has $2^{\mathfrak{c}}$ open subsets.
(Of course, a separable metrizable connected space of cardinal $\ge 2$ has cardinal $\mathfrak{c}$ and the same number of open subsets.)
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory issues, such as vacuum polarization or renormalization.
A particle is characterized by a definite momentum plus possibly other quantum numbers. Thus, one-particle states are by definition states with a definite eigenvalue of the momentum operator; they can carry further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ denotes the set of quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x',$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.)
Thus in general:
$$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$
where the dependence of the coefficients $ C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle. $$ Thus the operator $Q$ acts as a representation on the one-particle states. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy.

This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
Higher Order Homogenous Differential Equations - Complex Roots of The Characteristic Equation
Recall from the Higher Order Homogenous Differential Equations - Constant Coefficients page that if we have an $n^{\mathrm{th}}$ order linear homogeneous differential equation with constant coefficients $a_0, a_1, ..., a_n \in \mathbb{R}$, that is, $a_0 \frac{d^{n}y}{dt^{n}} + a_1 \frac{d^{n-1}y}{dt^{n-1}} + ... + a_{n-1} \frac{dy}{dt} + a_n y = 0$, and if the roots $r_1$, $r_2$, …, $r_n$ of the characteristic equation $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n = 0$ are all real and distinct, then the general solution to this differential equation is given by:(1) $$y = C_1e^{r_1 t} + C_2e^{r_2 t} + ... + C_ne^{r_n t}$$
We will now look at the case in which some of the roots of the characteristic equation are complex.
Recall that when we were dealing with second order linear homogeneous differential equations with constant coefficients, say $a \frac{d^2y}{dt^2} + b \frac{dy}{dt} + cy = 0$ where $a, b, c \in \mathbb{R}$, then if the roots of the characteristic equation $ar^2 + br + c = 0$ were complex numbers, then they were complex conjugates of each other and for $\lambda, \mu \in \mathbb{R}$ we had that $r_1 = \lambda + \mu i$ and $r_2 = \lambda - \mu i$. Also recall that the general solution to this differential equation was $y = Ce^{\lambda t} \cos (\mu t) + De^{\lambda t} \sin (\mu t)$.
Now going back to our $n^{\mathrm{th}}$ order linear homogenous differential equation with constant coefficients, we note that since $a_0, a_1, ..., a_n \in \mathbb{R}$ then the characteristic equation $a_0r^n + a_1r^{n-1} + ... + a_{n-1}r + a_n = 0$ has real coefficients and so once again, it is a property of polynomials with real coefficients that any complex roots come in pairs, namely, if $\lambda + \mu i$ (for $\lambda, \mu \in \mathbb{R}$) is a complex root of the characteristic equation, then so is $\lambda - \mu i$. Assuming that none of these complex roots are repeated, then we have that $e^{\lambda t} \cos (\mu t)$ and $e^{\lambda t} \sin (\mu t)$ will still be solutions to our differential equation.
Thus if we have $n$ distinct roots $r_1$, $r_2$, …, $r_n$ of our characteristic equation, then let $p_1$, $p_2$, …, $p_l$ be the distinct complex roots of the characteristic equation ($l$ is even), and let $q_1$, $q_2$, …, $q_m$ be the distinct real roots of the characteristic equation. Thus $n = l + m$. Now we will have $\frac{l}{2}$ pairs of complex roots, say they're $\lambda_1 \pm \mu_1 i$, $\lambda_2 \pm \mu_2 i$, …, $\lambda_{l/2} \pm \mu_{l/2} i$. Thus for some constants $C_1, C_2, ..., C_{l/2}, D_1, D_2, ..., D_{l/2}, K_1, K_2, ..., K_m$, the general solution to our differential equation is:(2) $$y = \sum_{j=1}^{l/2} e^{\lambda_j t} \left( C_j \cos (\mu_j t) + D_j \sin (\mu_j t) \right) + \sum_{i=1}^{m} K_i e^{q_i t}$$
Example 1 Find the general solution of the differential equation $\frac{d^3y}{dt^3} - 2\frac{d^2y}{dt^2} + \frac{dy}{dt} - 2y = 0$.
We first note that the characteristic equation for this differential equation is:(4) $$r^3 - 2r^2 + r - 2 = 0$$
The roots of this polynomial are $r_1 = i$, $r_2 = -i$, and $r_3 = 2$. For the roots $r_1$ and $r_2$, we have that $\lambda = 0$ and $\mu = 1$. Thus the general solution to our differential equation is:(5) $$y = C \cos t + D \sin t + K e^{2t}$$
Example 2 Find the general solution of the differential equation $\frac{d^3y}{dt^3} + 5 \frac{d^2y}{dt^2} + 17 \frac{dy}{dt} + 13y = 0$.
The characteristic equation for this differential equation is $r^3 + 5r^2 + 17r + 13 = 0$. By trial and error we see that $r_1 = -1$ is a root of the characteristic equation, and upon factoring this root out by long division we get that:(6) $$r^3 + 5r^2 + 17r + 13 = (r + 1)(r^2 + 4r + 13) = 0$$
We will now use the quadratic formula on the factor $r^2 + 4r + 13$:(7) $$r = \frac{-4 \pm \sqrt{16 - 52}}{2} = \frac{-4 \pm \sqrt{-36}}{2} = -2 \pm 3i$$
Therefore we have that $r_2 = -2 + 3i$ and $r_3 = -2 - 3i$ are both roots of our characteristic equation. So $\lambda = -2$ and $\mu = 3$. Thus we have that the general solution to our differential equation is:(8) $$y = K e^{-t} + e^{-2t} \left( C \cos (3t) + D \sin (3t) \right)$$
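The factorization and the final answer in Example 2 can be double-checked symbolically; a brief sympy sketch (our tooling choice, not part of the original exercise):

```python
import sympy as sp

t, r = sp.symbols('t r')
C, D, K = sp.symbols('C D K')

# Roots of the characteristic polynomial r^3 + 5r^2 + 17r + 13
print(sp.solve(r**3 + 5*r**2 + 17*r + 13, r))

# Substitute the claimed general solution into the differential equation;
# all exponential and trigonometric terms should cancel.
y = K*sp.exp(-t) + sp.exp(-2*t) * (C*sp.cos(3*t) + D*sp.sin(3*t))
print(sp.simplify(y.diff(t, 3) + 5*y.diff(t, 2) + 17*y.diff(t) + 13*y))  # 0
```

The root finder returns $-1$ and $-2 \pm 3i$, and the residual is identically zero in $t$, $C$, $D$, and $K$, confirming the general solution.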
Definition
Let \(S\) be a set with a binary operation \(\star\), and with identity \(e\). Let \(a \in S\), then \(b\in S \) is called an inverse of \(a\) if \(a \star b= b \star a=e.\)
Example \(\PageIndex{1}\):
For every \(a \in \mathbb{Z}\), \(-a\) is the inverse of \(a\) with the operation \(+\). For every \(a \in \mathbb{ R} \setminus \{0\}\), \(a^{-1}=\frac{1}{a}\) is the inverse of \(a\) with the multiplication.
Cancellation law
Let \(S\) be a set with a binary operation \(\star\). We say that \(\star\) satisfies the cancellation law if, for all \(a, b, c \in S\), \(a \star b= a \star c\) implies \(b=c\).
Example \(\PageIndex{2}\):
\((1)(0)=(3)(0)=0\), but \(1 \ne 3\); so multiplication on \(\mathbb{Z}\) does not satisfy the cancellation law.
Example \(\PageIndex{3}\):
For any \(a, b, c \in \mathbb{Z}\), if \(a + b= a + c\) then \(b=c\). For any \(a, b,c \in \mathbb{Z}\) with \(a\ne 0\), if \(a b= a c\) then \(b=c\).
Example \(\PageIndex{4}\):
If \(ab=0\) then \(a=0\) or \(b=0\).
Theorem \(\PageIndex{1}\)
For any integers \(a\), and \( b\), the following are true.
1. \(-(-a)=a.\)
2. \(0(a)=0.\)
3. \((-a)b=-ab.\)
4. \((-a)(-b)=ab.\)
Proof
1. Let \(a \in \mathbb{Z}\). Since \(-a\) is the inverse of \(a\), \(a+(-a)=(-a)+a=0\). Therefore the additive inverse of \(-a\) is \(a\).
Thus \(-(-a)=a.\)
2. Let \(a \in \mathbb{Z}\). Then by the distributive law, \(0a+0a=(0+0)a=0a=0a+0.\) Now by the cancellation law, \(0a=0\).
3. Let \(a, b \in \mathbb{Z}\). By distributive law, \( ((-a)+a)b=(-a)b+ab.\) Since \(-a\) is the additive inverse of \(a\), \((-a)+a=0\). By (2), \( 0=(-a)b+ab.\) Thus \((-a)b\) is the additive inverse of \(ab\). Hence \(-ab= (-a)b\).
4. Let \(a, b \in \mathbb{Z}\). We have \( (-a)(-b)+(-a)b=(-a)((-b)+b)=(-a)(0)=0,\) so \((-a)(-b)\) is the additive inverse of \((-a)b\). By (3), \((-a)b=-ab\), whose additive inverse is \(ab\). Thus \((-a)(-b)=ab.\)
Definition
For every \(a, n \in \mathbb{Z_+}\), the binary operation of exponentiation, denoted \(a^n\), is defined as the product of \(n\) copies of \(a\).
Example \(\PageIndex{5}\):
\(2^3=8\)
Example \(\PageIndex{6}\):
Determine whether exponentiation is associative, and whether it is commutative. Solution: Since \((3^2)^3= 9^3 = 3^6\) is not the same as \(3^{(2^3)}= 3^8\), exponentiation is not associative. Since \(3^2= 9\) is not the same as \(2^3= 8\), exponentiation is not commutative.
Theorem \(\PageIndex{2}\)
Exponentiation distributes over multiplication. That is, \((ab)^n=a^nb^n\) for all \(a, b \in \mathbb{Z}\) and \(n \in \mathbb{Z_+}\).
Proof
Since multiplication is associative and commutative, the \(n\) copies of \(ab\) in \((ab)^n\) can be rearranged into \(n\) copies of \(a\) followed by \(n\) copies of \(b\), and the result follows.
Example \(\PageIndex{7}\):
Prove that \(a^m a^n= a^{m+n}\) for all \(a \in \mathbb{Z}\) and \(m, n \in \mathbb{Z_+}\).
Example \(\PageIndex{8}\):
Prove that \((a^m)^n = a^{mn}\) for all \(a \in \mathbb{Z}\) and \(m, n \in \mathbb{Z_+}\).
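A quick numerical spot-check of Theorem 2 and the two exercises (a sketch only, not a proof; it uses repeated multiplication as the definition of \(a^n\), and the helper name `power` is ours):

```python
def power(a, n):
    """Exponentiation as the product of n copies of a (n >= 1)."""
    result = a
    for _ in range(n - 1):
        result *= a
    return result

a, b, m, n = 3, -5, 2, 4
assert power(a * b, n) == power(a, n) * power(b, n)   # (ab)^n = a^n b^n
assert power(a, m) * power(a, n) == power(a, m + n)   # a^m a^n = a^(m+n)
assert power(power(a, m), n) == power(a, m * n)       # (a^m)^n = a^(mn)
print("all identities hold for a=3, b=-5, m=2, n=4")
```
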
In this talk we present a proof of Kodaira's theorem, which gives a sufficient condition for the existence of an embedding of a Kähler manifold into CP^n. The proof is based on the Kodaira vanishing theorem, using a sheaf-cohomological translation of the embedding conditions.
Let X be a complex manifold and let M be a meromorphic connection on X with poles along a normal crossing divisor D. The Levelt-Turrittin theorem asserts that the pull-back of M to the formal neighbourhood of a codimension 1 point in D decomposes (after ramification) into elementary factors that are easy to work with. This decomposition may not hold at some other points of D. When it does, we say that M has good formal decomposition along D. A conjecture of Sabbah, recently proved by Kedlaya and Mochizuki independently, asserts roughly the
Title: Towards Chabauty-Kim loci for the polylogarithmic quotient over an arbitrary number field. Abstract: Let K be a number field and let S be an open subscheme of Spec O_K. Minhyong Kim has developed a method for bounding the set of S-valued points on a hyperbolic curve X over S; his method opens a new avenue in the quest for an "effective Mordell conjecture". But although Kim's approach has led to the construction of explicit bounds in special cases, the problem of realizing the potential effectivity of his methods remains a difficult and beautiful open problem.
I will discuss in the talk David Nadler's "arborealization conjecture" and will sketch its proof. The conjecture states that singularities of a Lagrangian skeleton of a symplectic Weinstein manifold can always be simplified to a finite list of singularities, called "arboreal". This is joint work with Daniel Albarez-Gavela, David Nadler and Laura Starkston.
Title: Equidistribution of expanding translates of curves in homogeneous spaces and Diophantine approximation. Abstract: We consider an analytic curve $\varphi: I \rightarrow \mathbb{M}(n\times m, \mathbb{R}) \hookrightarrow \mathrm{SL}(n+m, \mathbb{R})$, embed it into some homogeneous space $G/\Gamma$, and translate it via some diagonal flow
Speaker: Misha Belolipetsky. Title: Arithmetic Kleinian groups generated by elements of finite order. Abstract: We show that up to commensurability there are only finitely many cocompact arithmetic Kleinian groups generated by rotations. The proof is based on a generalised Gromov-Guth inequality and bounds for the hyperbolic and tube volumes of the quotient orbifolds. To estimate the hyperbolic volume we take advantage of known results towards Lehmer's problem. The tube volume estimate requires study of triangulations of lens spaces, which may be of independent interest.
Consider a sequence of random walks on $\mathbb{Z}/p\mathbb{Z}$ with symmetric generating sets $A= A(p)$. I will describe known and new results regarding the mixing time and cut-off. For instance, if the sequence $|A(p)|$ is bounded then the cut-off phenomenon does not occur, and more precisely I give a lower bound on the size of the cut-off window in terms of $|A(p)|$. A natural conjecture for random walks on graphs is that the total variation mixing time is bounded by the maximum degree times the diameter squared.
Manchester building, Hebrew University of Jerusalem, (Room 209)
Title: Arithmetic of Double Torus Quotients and the Distribution of Periodic Torus Orbits. Abstract: In this talk I will describe some new arithmetic invariants for pairs of torus orbits on inner forms of PGL_n and SL_n. These invariants allow us to significantly strengthen results towards the equidistribution of packets of periodic torus orbits on higher rank S-arithmetic quotients. An important aspect of our method is that it applies to packets of periodic orbits of maximal tori which are only partially split.
Ross building, Hebrew University of Jerusalem, (Room 70)
To every topological group, one can associate a unique universal minimal flow (UMF): a flow that maps onto every minimal flow of the group. For some groups (for example, the locally compact ones), this flow is not metrizable and does not admit a concrete description. However, for many "large" Polish groups, the UMF is metrizable, can be computed, and carries interesting combinatorial information. The talk will concentrate on some new results that give a characterization of metrizable UMFs of Polish groups. It is based on two papers, one joint
The (wrapped) Fukaya category of a symplectic manifold is a category whose objects are Lagrangian submanifolds and which contains a wealth of information about the symplectic topology. I will discuss the construction of the wrapped Fukaya category for certain completely integrable Hamiltonian systems. These are 2n-dimensional symplectic manifolds carrying a system of n commuting Hamiltonians surjecting onto Euclidean space. This gives rise to a Lagrangian torus fibration with singularities. |
My Answer
You should set your limit order to: $s (v+1)^{-0.0314192 \sqrt{t}}$, where $s$ is the current price, $t$ is the time in years you're willing to wait, and $v$ is the annual volatility as a fraction (e.g., 0.15 for 15%).
If you want to be sure with probability $p$ (instead of 0.98), set your limit order to:
$s (v+1)^{-\sqrt{\pi } \sqrt{t} \text{erf}^{-1}(1-p)}$
Of course, this is based on many assumptions and disclaimers later in this message.
Other Answers
It turns out this question has been studied extensively, and there are some papers on it:
http://fiquant.mas.ecp.fr/wp-content/uploads/2015/10/Limit-Order-Book-modelling.pdf
http://arxiv.org/pdf/1311.5661
I'll use a much simpler model (see disclaimers at end of message).
Example
If a stock has a volatility of 15%, that means there's a 68% chance its price after 1 year will be between 87% and 115% of its current price. Note that the lower limit is 87% (= 1/1.15), not 85%.
Overall, the price probability for a stock with volatility 15% forms this bell curve:
Note that:
Because volatility is inherently based on logarithms, the tick marks aren't evenly numbered, and aren't symmetric. For example, the numbers +65% and -39% are symmetric because it takes a 39% loss to offset a 65% gain and vice versa. In other words:
(1+(-39/100))*(1+(65/100)) is approximately one.
The parenthesized numbers under the x axis (for this and the following graphs) refer to change in the logarithm of the security's price. These are evenly numbered and we will use them in the "General Case" section.
The labels on the y axis are relative to each other and don't refer to percentages.
Of course, this isn't the probability curve you're looking for: I drewit just for reference.
Instead, let's look at the probability distribution of the minimum value over the next year for our 15% volatility stock.
The same caveats apply to this graph as the previous one.
Suppose you set your limit order at 5% below the current price (i.e., 95% of its current price). There is a ~77% chance your order will be filled:
You can also see this using the cumulative distribution function (CDF):
In this case, the y values do represent probabilities, namely the cumulative probability that the stock's lowest value will reach the percentage value on the x axis.
For this volatility, if you want to be 98% sure your order is filled, you could only set your limit order 0.44% below the current price.
General Case
Of course, that was for a specific volatility over a specific periodof time.
In general, a volatility of v% means the stock is ~68% (1 standard deviation) likely to remain within v% of its current price in the next year. More conveniently, it means the logarithm of the price is 68% likely to remain within (plus/minus) $\log (v+1)$ of its current value (within the next year). For example, a volatility of 15% means the log of the stock price is 68% likely to remain within 0.1398 of its current value, since $e^{0.1398}$ is approximately $1.15$.
More generally, the $\log (\text{price})$ one year from now has a normal distribution with mean equal to the current $\log (\text{price})$ and standard deviation $\log (v+1)$.
Thus, the change in the $\log (\text{price})$ over one year is normally distributed with a mean of 0 and a standard deviation of $\log (v+1)$.
A standard deviation of $\log (v+1)$ translates to a variance of $\log^2(v+1)$. Since the variance of a process like this scales linearly, the variance for $t$ years is given by $t \log ^2(v+1)$ and the standard deviation for $t$ years is given by $\sqrt{t} \log (v+1)$.
Thus, the change in $\log (\text{price})$ over time $t$ has a normal distribution with mean 0 and standard deviation $\sqrt{t} \log (v+1)$.
As noted below in another section, this means the minimum (most negative) value of this change has a half-normal distribution with parameter $\frac{1}{\sqrt{t} \log (v+1)}$.
The cumulative distribution of a half-normal distribution with parameter $\frac{1}{\sqrt{t} \log (v+1)}$, evaluated at $x>0$ (the only place the half-normal distribution is non-zero), is:
$\text{erf}\left(\frac{x}{\sqrt{\pi } \sqrt{t} \log (v+1)}\right)$
where erf() is the standard error function.
Note that when we draw this cumulative distribution for volatility 15% above, letting the x axis be the change in $\log (\text{price})$ (instead of the percentage change in price), the x axis looks more like we expect.
If our limit order is at a fraction $\lambda$ of the current price (meaning it's $\lambda s$, where $s$ is the current price), it will only be hit if the $\log (\text{price})$ moves more than $\left| \log (\lambda ) \right|$ (note that we need the absolute value since we're measuring the absolute change in $\log (\text{price})$, which is always positive). The chance of that happening is:
$ 1-\text{erf}\left(\frac{\left| \log (\lambda ) \right|}{\sqrt{\pi } \sqrt{t} \log (v+1)}\right)$
Note that we need the "1-" since we're looking for the probability the $\log (\text{price})$ moves more than the given amount.
Of course, in this case, we're given the probability and asked to solve for the limit price. Using $p$ as the probability we find:
$\lambda \to (v+1)^{-\sqrt{\pi } \sqrt{t} \text{erf}^{-1}(1-p)}$
and the price is thus:
$s (v+1)^{-\sqrt{\pi } \sqrt{t} \text{erf}^{-1}(1-p)}$
as in the answer section. Substituting 0.98 for p, we have:
$s (v+1)^{-0.0314192 \sqrt{t}}$
as noted for this specific example.
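The final formula can be sketched in pure standard-library Python (`inv_erf` here is a bisection approximation of the inverse error function mentioned at the end of this answer; the function names are ours):

```python
import math

def inv_erf(y, lo=0.0, hi=6.0, tol=1e-12):
    """Invert the error function on (0, 1) by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def limit_price(s, v, t, p):
    """Limit price that is filled with probability p within t years.
    s: current price; v: annual volatility as a fraction (0.15 for 15%);
    t: horizon in years; p: desired fill probability."""
    return s * (v + 1) ** (-math.sqrt(math.pi) * math.sqrt(t) * inv_erf(1 - p))

# For p = 0.98 the exponent coefficient is ~0.0314192, matching the formula above.
print(math.sqrt(math.pi) * inv_erf(0.02))  # ~0.0314
print(limit_price(100, 0.15, 1, 0.98))     # ~99.56
```
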
Research and "derivation"
It turns out this is a well-known problem and has been studied extensively:
https://stackexchange.com/search?q=brownian+halfnormal
It can also be regarded as the running maximum value of a random walk:
https://stackexchange.com/search?q=brownian+halfnormal
What is the fair price of this option?
Probability of touching
If you use Mathematica (or just want to read even more about this subject), you might look at my:
the latter of which computes the probability that a stock price will be between two given values at two given times (i.e., the fair value of an O&A "box option"), but it can be used to answer your question in the limiting case. See also: https://money.stackexchange.com/questions/4312
Disclaimers and Notices
I made several simplifying assumptions above:
As noted in the references given in "Other Answers" above, the more a stock's price decreases, the less likely it is to decrease further. Why? Other people place limit orders, and the further down the stock gets from its starting price, the more limit orders will be triggered. Generally, the volume of limit orders also increases as the stock price goes down. In other words, the limit orders act as a "buffer", slowing the rate at which a stock's price drops. The simple model I use does not account for this.
Conversely, I also ignore the "volatility smile", which suggests the exact opposite: that a larger change in price is more likely than what the normal distribution would yield, which means that extreme prices are more likely than those given by the half-normal distribution.
The two points above aren't necessarily contradictory: under normal conditions, the "limit order book" buffers price changes, but during unusual circumstances (such as major news), the price can change dramatically.
I also assume that once a stock reaches your limit price, your order will be triggered. However, if there are several orders at that price, the larger orders will trigger first, and the stock price may rise again before your limit order is triggered at all.
Since this is a limit order and not an option, the risk-free interest rate is not an issue: I assume you earn the risk-free interest rate until the order is filled.
If you don't earn the risk-free interest rate while waiting, note that the small gain you get from the limit order may be offset by the loss of interest.
Items Not Appearing in This Answer
Although the inverse error function is well known and "easy" to compute, I was going to include an approximation, but felt that might exceed the scope of the question.
This is my first exercise on state space models and I have a few questions I'd need to resolve before I actually start doing the exercise. Unfortunately, I'm self-teaching (I have no professor to ask) and I'm afraid there's no solution companion for Durbin and Koopman (2012)!
Consider the local level model (2.3).
(a) Give the model representation for $x_t = y_t - y_{t-1}$, for $t = 2, ..., n$.
(b) Show that the model for $x_t$ in (a) can have the same statistical properties as the model given by $x_t = \epsilon_t + \theta \epsilon_{t-1}$ where $\epsilon \sim N(0, \sigma_{\epsilon}^2)$ are independent disturbances with variance $\sigma_{\epsilon}^2 > 0$ and for some value $\theta$.
(c) For what value of $\theta$, in terms of $\sigma_{\epsilon}^2$ and $\sigma_{\eta}^2$, are the model representations for $x_t$ in (a) and (b) equivalent? Comment.
For the record, the local level model (2.3) is given by:
$y_t = \alpha_t + \epsilon_t \quad\quad \epsilon_t \sim N(0, \sigma_{\epsilon}^2)$
$\alpha_{t+1} = \alpha_t + \eta_t \quad\quad \eta_t \sim N(0, \sigma_{\eta}^2)$
Doubts about (a)
First of all, the model proposed in (a) looks like noise (which makes perfect sense since it's the first difference of a random walk). Is the following representation correct?
$$ x_t = y_t - y_{t-1} = \alpha_t + \epsilon_t - \alpha_{t-1} - \epsilon_{t-1} $$ $$ x_t = \alpha_{t-1} + \eta_{t-1} + \epsilon_t - \alpha_{t-1} - \epsilon_{t-1} $$ $$ x_t = \eta_{t-1} + \epsilon_{t} - \epsilon_{t-1} $$
This raises some doubts. First, the state disturbance $\eta_{t-1}$ is now part of the observation equation. Second, what does the state equation mean now that the observation equation doesn't relate to the unobserved states $\alpha_t$? Third, and somewhat related, what is the mean of the unobserved state now that $\alpha_t$ no longer appears in the formula? Zero?
Doubts about (b)
Additionally, I wonder how to show that models have the same statistical properties. What do you have to prove to say they're the same? Same expected value and variance of observation $x_t$, unobserved state $\alpha_t$, prediction error $v_t = x_t - a_t $, filtered unobserved state, updated unobserved state, etc.? Since all random variables are Normal, I guess showing the first two moments match is enough, but a) what distribution (marginal, conditionals, conditionals on what?) of b) what variables (observed, hidden state, prediction error, etc.) should be equal?
Any comment is much appreciated!
Update
This is where I got after the hints provided by @Glen_b and @javlacalle.
(a)
$$ x_t = \eta_{t-1} + \epsilon_t - \epsilon_{t-1}$$
(b)
With respect to the model for $x_t$ given in (a)
$$ E[x_t] = 0 $$ $$ \gamma(0) = Var(x_t) = \sigma_{\eta}^2 + 2\sigma_{\epsilon}^2 $$ $$ \gamma(1) = Cov(x_t, x_{t-1}) = -\sigma_{\epsilon}^2 $$ $$ \gamma(2) = Cov(x_t, x_{t-2}) = 0 $$ $$ \rho(1) = \frac{-\sigma_{\epsilon}^2}{\sigma_{\eta}^2 + 2\sigma_{\epsilon}^2} $$ $$ \rho(2) = 0 $$
With respect to the model for $x_t$ proposed in (b), which I renamed to $z_t$ to avoid confusion
$$ E[z_t] = 0 $$ $$ \gamma(0) = Var(z_t) = \sigma_{\epsilon}^2 (1 + \theta^2) $$ $$ \gamma(1) = Cov(z_t, z_{t-1}) = \theta \sigma_{\epsilon}^2 $$ $$ \gamma(2) = Cov(z_t, z_{t-2}) = 0 $$ $$ \rho(1) = \frac{\theta}{1 + \theta^2} $$ $$ \rho(2) = 0 $$
(c)
$$ E[x_t] = E[z_t] = 0 \quad \qquad (c.1) $$
$$ \gamma_{x_t}(0) = \gamma_{z_t}(0) \leftrightarrow \sigma_{\eta}^2 + 2\sigma_{\epsilon}^2 = \sigma_{\epsilon}^2 (1 + \theta^2) \quad \quad (c.2) $$
$$ \gamma_{x_t}(1) = \gamma_{z_t}(1) \leftrightarrow -\sigma_{\epsilon}^2 = \theta \sigma_{\epsilon}^2 \rightarrow \theta = -1 \quad \quad (c.3) $$
$$ \gamma_{x_t}(2) = \gamma_{z_t}(2) = 0 \quad \quad (c.4) $$
$$ \rho_{x_t}(1) = \rho_{z_t}(1) \leftrightarrow \frac{-\sigma_{\epsilon}^2}{\sigma_{\eta}^2 + 2\sigma_{\epsilon}^2} = \frac{\theta}{1 + \theta^2} \quad \quad (c.5) $$
$$ \rho_{x_t}(2) = \rho_{z_t}(2) = 0 \quad \quad (c.6) $$
Equations c.1, c.4 and c.6 imply no restrictions on $\theta$, but equations c.2, c.3 and c.5 are clearly not consistent.
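One way to read part (c): the apparent inconsistency disappears if the MA(1) in (b) is allowed its own disturbance variance, so that only the autocorrelation $\rho(1) = -1/(q+2)$, where $q = \sigma_\eta^2/\sigma_\epsilon^2$ is the signal-to-noise ratio, has to be matched. A sketch under that assumption (the function names are ours):

```python
import math

def theta_from_q(q):
    """Invertible root of theta^2 + (q+2)*theta + 1 = 0, obtained by
    matching theta/(1+theta^2) = -1/(q+2)."""
    return (-(q + 2) + math.sqrt(q * q + 4 * q)) / 2

def rho1_ma1(theta):
    """Lag-1 autocorrelation of an MA(1) process."""
    return theta / (1 + theta * theta)

for q in [0.0, 0.5, 1.0, 4.0]:
    th = theta_from_q(q)
    assert abs(rho1_ma1(th) - (-1 / (q + 2))) < 1e-12
    print(f"q={q}: theta={th:.4f}")
# q = 0 recovers the boundary case theta = -1 from equation c.3.
```
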
On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian
1. Facultad de Matemáticas, Universidad Católica de Chile, Casilla 306, Correo 22 - Santiago
2. Department of Mathematics, Pontificia Universidad Católica de Chile, Casilla 306, Correo 22, Santiago, Chile
$(P)\qquad\qquad\qquad\qquad -\Delta u=K(|x|)f(u),\quad x\in \mathbb R^n. $
Here $K$ is a positive $C^1$ function defined on $\mathbb R^+$ and $f\in C[0,\infty)$ has one zero at $u_0>0$, is nonpositive and not identically 0 in $(0,u_0)$, and is locally Lipschitz, positive, and satisfies some superlinear growth assumption in $(u_0,\infty)$.
Mathematics Subject Classification: 35J70, 35J6. Citation: C. Cortázar, Marta García-Huidobro. On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian. Communications on Pure & Applied Analysis, 2006, 5 (1): 71-84. doi: 10.3934/cpaa.2006.5.71
I'm aware that there's a simple proof by contradiction for this, namely:
Assume $x_1 \neq x_2$. If $f(x_1)=f(x_2)$, then $g(f(x_1))=g(f(x_2))$ which, since $g \circ f$ is injective, implies $x_1=x_2$. By contradiction, $f$ is injective.
I wanted to see if the method of direct proof I used is valid. Could someone just skim over it and alert me if there are any gaps? Here's the proof I came up with:
$$\exists ! x \in D(g \circ f) = A (\forall w \in R(g \circ f) \subseteq C)$$ $$\implies \exists ! y \in R(f) \subseteq B (\forall w \in R(g \circ f) \subseteq C)$$ $$\implies \exists ! x \in D(g \circ f)=D(f)=A (\forall y \in R(f) \subseteq B)$$ $$\implies \exists ! x \in A (\forall y \in R(f) \subseteq B)$$
Hence, $f$ is injective.
Rewritten in words:
Because $f$ is injective, there's a unique $x$ for each $g(f(x))\in C$. Since $g \circ f$ is a function from $R(f)$ onto $C$, this implies there's a unique $y\in R(f)$ for each $g(f(x))\in C$. Since there exists a unique $x$ and unique $y$ for each $g(f(x))\in C$, then there must exist a unique $x$ for each $y \in R(f)$, hence making $f$ injective.
To me, the second implication I made, assuming there exists a unique $x$ for every $y$ just because there existed a unique $y$ for each $w$, seems like a large step, so I'm really doubting whether that's a valid move.
Search
Now showing items 1-10 of 26
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Unique Factorization Domains (UFDs)
Definition: Let $(R, +, \cdot)$ be an integral domain. Then $R$ is a Unique Factorization Domain if the following properties are satisfied:
1) Every element $a \in R$ that is nonzero and that is not a unit can be expressed as a product of irreducible elements in $R$.
2) If $a = p_1p_2...p_m$ and $a = q_1q_2...q_n$ where $p_1, p_2, ..., p_m, q_1, q_2, ..., q_n$ are irreducible elements in $R$, then $m = n$ and the factors of both expressions can be arranged so that $p_i \sim q_i$ for each $i \in \{ 1, 2, ..., m \}$.
We will soon be able to show that every principal ideal domain is a unique factorization domain. We first need to prove the following lemma.
Lemma 1: Let $(R, +, \cdot)$ be a principal ideal domain and let $I_1, I_2, ...$ be ideals of $R$ such that $I_1 \subseteq I_2 \subseteq ... \subseteq I_n \subseteq ...$. Then there exists a positive integer $m$ such that $I_n = I_m$ for all $n \geq m$.
Proof: Let:
\begin{align} \quad I = \bigcup_{i=1}^{\infty} I_i \end{align}
Then $I$ is an ideal of $R$. Since $R$ is a principal ideal domain we have that $I = aR$ for some generator $a \in R$. Since $a \in I$ there exists at least one set in the union $I$ containing $a$. Let $m$ be the smallest positive integer such that $a \in I_m$. Then:
\begin{align} \quad I = aR \subseteq I_m \end{align}
Since also $I_m \subseteq I$, we have $I = I_m$. Hence for all $n \geq m$, $I_m \subseteq I_n \subseteq I = I_m$, so $I_n = I_m$. $\blacksquare$
Theorem 2: Every principal ideal domain is a unique factorization domain.
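Since $\mathbb{Z}$ is a principal ideal domain, Theorem 2 says it is a UFD. A minimal sketch illustrating both definition properties for $\mathbb{Z}$ (trial division is our choice of algorithm; the sign is absorbed by the unit $-1$):

```python
def irreducible_factors(n):
    """Factor a nonzero non-unit integer into irreducibles (primes)
    by trial division; the result is unique up to order and associates."""
    assert n not in (0, 1, -1)
    if n < 0:
        n = -n  # associates differ by the unit -1
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(irreducible_factors(360))  # [2, 2, 2, 3, 3, 5]
print(irreducible_factors(-84))  # [2, 2, 3, 7]
```
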
In the previous section, we explored the short run behavior of quadratics, a special case of polynomials. In this section, we will explore the short run behavior of polynomials in general.
Short run Behavior: Intercepts
As with any function, the vertical intercept can be found by evaluating the function at an input of zero. Since this is evaluation, it is relatively easy to do for a polynomial of any degree.
To find horizontal intercepts, we need to solve for when the output will be zero. For general polynomials, this can be a challenging prospect. While quadratics can be solved using the relatively simple quadratic formula, the corresponding formulas for cubic and 4\({}^{th}\) degree polynomials are not simple enough to remember, and formulas do not exist for general higher-degree polynomials. Consequently, we will limit ourselves to three cases:
1. The polynomial can be factored using known methods: greatest common factor and trinomial factoring.
2. The polynomial is given in factored form.
3. Technology is used to determine the intercepts.
Other techniques for finding the intercepts of general polynomials will be explored in the next section.
Example 1
Find the horizontal intercepts of \(f(x)=x^{6} -3x^{4} +2x^{2}\).
Solution
We can attempt to factor this polynomial to find solutions for \(f(x) = 0\).
\(x^{6} -3x^{4} +2x^{2} =0\) Factoring out the greatest common factor
\(x^{2} (x^{4} -3x^{2} +2)=0\) Factoring the inside as a quadratic in x\({}^{2}\)
\(x^{2} (x^{2} -1)(x^{2} -2)=0\) Then break apart to find solutions
\(\begin{array}{l} {x^{2} =0} \\ {x=0} \end{array}\) or \(\begin{array}{l} {\left(x^{2} -1\right)=0} \\ {x^{2} =1} \\ {x=\pm 1} \end{array}\) or \(\begin{array}{l} {\left(x^{2} -2\right)=0} \\ {x^{2} =2} \\ {x=\pm \sqrt{2} } \end{array}\)
This gives us 5 horizontal intercepts.
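A quick check of the five intercepts (a sketch only; we simply evaluate \(f\) at each claimed zero, using a small tolerance for the irrational values):

```python
def f(x):
    return x**6 - 3*x**4 + 2*x**2

# The five claimed horizontal intercepts: x = 0, ±1, ±√2
for r in [0.0, 1.0, -1.0, 2**0.5, -(2**0.5)]:
    assert abs(f(r)) < 1e-9
print("all five values are zeros of f")
```
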
Example 2
Find the vertical and horizontal intercepts of \(g(t)=(t-2)^{2} (2t+3)\)
Solution
The vertical intercept can be found by evaluating \(g(0)\).
\(g(0)=(0-2)^{2} (2(0)+3)=12\)
The horizontal intercepts can be found by solving \(g(t) = 0\)
\((t-2)^{2} (2t+3)=0\) Since this is already factored, we can break it apart:
\(\begin{array}{l} {(t-2)^{2} =0} \\ {t-2=0} \\ {t=2} \end{array}\) or \(\begin{array}{l} {(2t+3)=0} \\ {t=\dfrac{-3}{2} } \end{array}\)
We can always check our answers are reasonable by graphing the polynomial.
Example 3
Find the horizontal intercepts of \(h(t)=t^{3} +4t^{2} +t-6\)
Solution
Since this polynomial is not in factored form, has no common factors, and does not appear to be factorable using techniques we know, we can turn to technology to find the intercepts.
Graphing this function, it appears there are horizontal intercepts at \(t\) = -3, -2, and 1.
We could check these are correct by plugging in these values for \(t\) and verifying that \(h(-3)=h(-2)=h(1)=0\).
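In place of a graphing calculator, the numerical root-finding can be sketched in a few lines (a bisection sketch; the bracketing intervals are our choice, found by checking signs of \(h\) on a coarse grid):

```python
def h(t):
    return t**3 + 4*t**2 + t - 6

def bisect_root(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi], assuming f changes sign there."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid               # root lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)  # root lies in [mid, hi]
    return (lo + hi) / 2

brackets = [(-3.5, -2.5), (-2.5, -1.5), (0.5, 1.5)]
roots = [round(bisect_root(h, a, b), 6) for a, b in brackets]
print(roots)  # [-3.0, -2.0, 1.0]
```
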
Exercise
Find the vertical and horizontal intercepts of the function \(f(t)=t^{4} -4t^{2}\).
Answer
Vertical intercept (0, 0). \(0=t^{4} -4t^{2}\) factors as \(0=t^{2} \left(t^{2} -4\right)=t^{2} \left(t-2\right)\left(t+2\right)\) Horizontal intercepts (0, 0), (-2, 0), (2, 0).
Graphical Behavior at Intercepts
If we graph the function \(f(x)=(x+3)(x-2)^{2} (x+1)^{3}\), notice that the behavior at each of the horizontal intercepts is different.
At the horizontal intercept \(x = -3\), coming from the \((x+3)\) factor of the polynomial, the graph passes directly through the horizontal intercept.
The factor \((x+3)\) is linear (has a power of 1), so the behavior near the intercept is like that of a line - it passes directly through the intercept. We call this a single zero, since the zero corresponds to a single factor of the function.
At the horizontal intercept \(x = 2\), coming from the \((x-2)^{2}\) factor of the polynomial, the graph touches the axis at the intercept and changes direction. The factor is quadratic (degree 2), so the behavior near the intercept is like that of a quadratic – it bounces off the horizontal axis at the intercept. Since \((x-2)^{2} =(x-2)(x-2)\), the factor is repeated twice, so we call this a double zero. We could also say the zero has multiplicity 2.
At the horizontal intercept \(x = -1\), coming from the \((x+1)^{3}\) factor of the polynomial, the graph passes through the axis at the intercept, but flattens out a bit first. This factor is cubic (degree 3), so the behavior near the intercept is like that of a cubic, with the same “S” type shape near the intercept that the toolkit \(x^{3}\) has. We call this a triple zero. We could also say the zero has multiplicity 3.
By utilizing these behaviors, we can sketch a reasonable graph of a factored polynomial function without needing technology.
graphical behavior of polynomials at horizontal intercepts
If a polynomial contains a factor of the form \((x-h)^{p}\), the behavior near the horizontal intercept \(h\) is determined by the power on the factor.
For higher even powers (4, 6, 8, etc.), the graph will still bounce off the horizontal axis, but the graph will appear flatter with each increasing even power as it approaches and leaves the axis.
For higher odd powers (5, 7, 9, etc.), the graph will still pass through the horizontal axis, but the graph will appear flatter with each increasing odd power as it approaches and leaves the axis.
Example 4
Sketch a graph of \(f(x)=-2(x+3)^{2} (x-5)\).
Solution
This graph has two horizontal intercepts. At \(x = -3\), the factor is squared, indicating the graph will bounce at this horizontal intercept. At \(x = 5\), the factor is not squared, indicating the graph will pass through the axis at this intercept.
Additionally, we can see the leading term, if this polynomial were multiplied out, would be \(-2x^{3}\), so the long-run behavior is that of a vertically reflected cubic, with the outputs decreasing as the inputs get large positive, and the outputs increasing as the inputs get large negative.
To sketch this we consider the following:
As \(x \to -\infty\) the function \(f(x) \to \infty\) so we know the graph starts in the 2\({}^{nd}\) quadrant and is decreasing toward the horizontal axis.
At (-3, 0) the graph bounces off the horizontal axis and so the function must start increasing.
At (0, 90) the graph crosses the vertical axis at the vertical intercept.
Somewhere after this point, the graph must turn back down or start decreasing toward the horizontal axis since the graph passes through the next intercept at (5,0).
As \(x \to \infty\) the function \(f(x) \to -\infty\), so we know the graph continues to decrease, and we can stop drawing the graph in the 4\({}^{th}\) quadrant.
Using technology we can verify the shape of the graph.
Exercise
Given the function \(g(x)=x^{3} -x^{2} -6x\) use the methods that we have learned so far to find the vertical & horizontal intercepts, determine where the function is negative and positive, describe the long run behavior and sketch the graph without technology.
Answer
Vertical intercept (0, 0), Horizontal intercepts (-2, 0), (0, 0), (3, 0)
The function is negative on (\(-\infty\), -2) and (0, 3)
The function is positive on (-2, 0) and (3,\(\infty\))
The leading term is \(x^{3}\), so as \(x\to -\infty\), \(g(x)\to -\infty\), and as \(x\to \infty\), \(g(x)\to \infty\)
Solving Polynomial Inequalities
One application of our ability to find intercepts and sketch a graph of polynomials is the ability to solve polynomial inequalities. It is very common to ask where a function is positive and where it is negative. We can solve polynomial inequalities either by utilizing the graph or by using test values.
Example 5
Solve \((x+3)(x+1)^{2} (x-4)>0\)
Solution
As with all inequalities, we start by solving the equality \((x+3)(x+1)^{2} (x-4)=0\), which has solutions at \(x\) = -3, -1, and 4. We know the function can only change from positive to negative at these values, so these divide the inputs into 4 intervals.
We could choose a test value in each interval and evaluate the function \(f(x)=(x+3)(x+1)^{2} (x-4)\) at each test value to determine if the function is positive or negative in that interval.

| Interval | Test \(x\) in interval | \(f\)(test value) | > 0 or < 0? |
| \(x < -3\) | -4 | 72 | > 0 |
| \(-3 < x < -1\) | -2 | -6 | < 0 |
| \(-1 < x < 4\) | 0 | -12 | < 0 |
| \(x > 4\) | 5 | 288 | > 0 |
On a number line this would look like:
From our test values, we can determine this function is positive when \(x < -3\) or \(x > 4\), or in interval notation, \((-\infty ,-3)\cup (4,\infty )\).
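The test-value procedure above is mechanical enough to automate. A small Python sketch (the function and helper names are my own) picks one test value per interval and records the sign there:

```python
def signs_between_zeros(f, zeros, pad=1.0):
    """Classify the sign of f on each interval determined by its sorted zeros,
    using one test value per interval (the test-value method)."""
    zs = sorted(zeros)
    tests = [zs[0] - pad]
    tests += [(lo + hi) / 2 for lo, hi in zip(zs, zs[1:])]
    tests += [zs[-1] + pad]
    return [(t, '>0' if f(t) > 0 else '<0') for t in tests]

f = lambda x: (x + 3) * (x + 1) ** 2 * (x - 4)
result = signs_between_zeros(f, [-3, -1, 4])
# signs come out +, -, -, +  →  f > 0 on (-inf, -3) and (4, inf)
```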
We could also have determined on which intervals the function is positive by sketching a graph of the function. We illustrate that technique in the next example.
Example 6
Find the domain of the function \(v(t)=\sqrt{6-5t-t^{2} }\).
Solution
A square root is only defined when the quantity inside it is zero or greater. Thus, the domain of this function will be where \(6-5t-t^{2} \ge 0\).
We start by solving the equality \(6-5t-t^{2} =0\). While we could use the quadratic formula, this equation factors nicely to \((6+t)(1-t)=0\), giving horizontal intercepts \(t = 1\) and \(t = -6\).
Sketching a graph of this quadratic will allow us to determine when it is positive.
From the graph we can see this function is positive for inputs between the intercepts. So \(6 - 5t - t^{2} \ge 0\) for \(-6 \le t \le 1\), and this will be the domain of the \(v(t)\) function.
Writing Equations using Intercepts
Since a polynomial function written in factored form will have a horizontal intercept where each factor is equal to zero, we can form a function that will pass through a set of horizontal intercepts by introducing a corresponding set of factors.
factored form of polynomials
If a polynomial has horizontal intercepts at \(x=x_{1} ,x_{2} , ... ,x_{n}\), then the polynomial can be written in the factored form
\[f(x)=a(x-x_{1} )^{p_{1} } (x-x_{2} )^{p_{2} } \cdots (x-x_{n} )^{p_{n} }\]
where the powers \(p_i\) on each factor can be determined by the behavior of the graph at the corresponding intercept, and the stretch factor a can be determined given a value of the function other than the horizontal intercept.
Example 7
Write a formula for the polynomial function graphed here.
Solution
This graph has three horizontal intercepts: \(x\) = -3, 2, and 5. At \(x\) = -3 and 5 the graph passes through the axis, suggesting the corresponding factors of the polynomial will be linear. At \(x = 2\) the graph bounces at the intercept, suggesting the corresponding factor of the polynomial will be 2\({}^{nd}\) degree (quadratic).
Together, this gives us:
\(f(x)=a(x+3)(x-2)^{2} (x-5)\)
To determine the stretch factor, we can utilize another point on the graph. Here, the vertical intercept appears to be (0,-2), so we can plug in those values to solve for \(a\):
\(\begin{array}{l} {-2=a(0+3)(0-2)^{2} (0-5)} \\ {-2=-60a} \\ {a=\dfrac{1}{30} } \end{array}\)
The graphed polynomial appears to represent the function
\(f(x)=\dfrac{1}{30} (x+3)(x-2)^{2} (x-5)\).
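The stretch-factor step generalizes directly: divide the known output by the product of the factors evaluated at the known input. A hedged Python sketch (the helper name is my own invention):

```python
def stretch_factor(factors, point):
    """Solve y0 = a * prod((x0 - h)**p) for the stretch factor a.
    `factors` is a list of (h, p) pairs: intercept location and multiplicity."""
    x0, y0 = point
    prod = 1.0
    for h, p in factors:
        prod *= (x0 - h) ** p
    return y0 / prod

# Example 7: intercepts -3 (single), 2 (double), 5 (single), passing through (0, -2)
a = stretch_factor([(-3, 1), (2, 2), (5, 1)], (0, -2))
# a = -2 / -60 = 1/30
```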
Exercise
Given the graph, write a formula for the function shown.
Answer
Double zero at \(x= - 1\), triple zero at \(x = 2\). Single zero at \(x = 4\). \(f(x) = a(x - 2)^{3} (x+1)^{2} (x-4)\). Substituting (0,-4) and solving for \(a\), \(f(x) = -\dfrac{1}{8} (x-2)^{3} (x+1)^{2} (x-4)\)
Estimating Extrema
With quadratics, we were able to algebraically find the maximum or minimum value of the function by finding the vertex. For general polynomials, finding these turning points is not possible without more advanced techniques from calculus. Even then, finding where extrema occur can still be algebraically challenging. For now, we will estimate the locations of turning points using technology to generate a graph.
Example 8
An open-top box is to be constructed by cutting out squares from each corner of a 14cm by 20cm sheet of plastic then folding up the sides. Find the size of squares that should be cut out to maximize the volume enclosed by the box.
Solution
We will start this problem by drawing a picture, labeling the width of the cut-out squares with a variable, \(w\).
Notice that after a square is cut out from each corner, it leaves a \((14-2w)\) cm by \((20-2w)\) cm rectangle for the base of the box, and the box will be \(w\) cm tall. This gives the volume:
\(V(w)=(14-2w)(20-2w)w=280w-68w^{2} +4w^{3}\)
Using technology to sketch a graph allows us to estimate the maximum value for the volume, restricted to reasonable values for \(w\): values from 0 to 7.
From this graph, we can estimate the maximum value is around 340, and occurs when the squares are about 2.75cm square. To improve this estimate, we could use advanced features of our technology, if available, or simply change our window to zoom in on our graph.
From this zoomed-in view, we can refine our estimate for the max volume to about 339, when the squares are 2.7cm square.
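Lacking graphing technology, a simple grid scan gives the same estimate. A minimal Python sketch (the grid resolution is an arbitrary choice of mine):

```python
# Estimate the maximum of V(w) = (14 - 2w)(20 - 2w)w on the sensible range 0 < w < 7
# by scanning a fine grid, a stand-in for the "zoom in with technology" step.
def V(w):
    return (14 - 2 * w) * (20 - 2 * w) * w

best_w = max((k / 1000 for k in range(1, 7000)), key=V)
best_V = V(best_w)
# best_w ≈ 2.70 cm, best_V ≈ 339 cubic cm
```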
Exercise
Use technology to find the maximum and minimum values on the interval [-1, 4] of the function \(f(x)=-0.2(x-2)^{3} (x+1)^{2} (x-4)\).
Answer
The minimum occurs at approximately the point (0, -6.5), and the maximum occurs at approximately the point (3.5, 7).
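The same grid-scan idea can cross-check these estimates. A small Python sketch (illustrative only):

```python
def f(x):
    return -0.2 * (x - 2) ** 3 * (x + 1) ** 2 * (x - 4)

xs = [-1 + 5 * k / 5000 for k in range(5001)]   # grid on [-1, 4]
x_min = min(xs, key=f)
x_max = max(xs, key=f)
# min ≈ (0.1, -6.5), max ≈ (3.6, 7.0), matching the graph-based estimates
```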
Important Topics of this Section
Short Run Behavior: Intercepts (Horizontal & Vertical)
Methods to find Horizontal intercepts: Factoring Methods, Factored Forms, Technology
Graphical Behavior at intercepts
Single, Double and Triple zeros (or multiplicity 1, 2, and 3 behaviors)
Solving polynomial inequalities using test values & graphing techniques
Writing equations using intercepts
Estimating extrema
Recall that $\mathbb{Z}[i]=\{a+bi:a,b \in \mathbb{Z}\}$, i.e., the Gaussian integers, and $\mathbb{Z}[\sqrt{2}]=\{a+b\sqrt{2}:a,b \in \mathbb{Z}\}$.
I want to show that $\mathbb{Z}[i] \not\cong \mathbb{Z}[\sqrt{2}]$.
Suppose, to the contrary, that they are isomorphic. Then there exists a bijective ring homomorphism $\phi: \mathbb{Z}[i] \rightarrow \mathbb{Z}[\sqrt{2}]$. Since $\phi$ is a homomorphism, it preserves additive and multiplicative identities, so $\phi(0)=0$ and $\phi(1)=1$. It also preserves sums, so for any $n \in \mathbb{Z}$, $\phi(n)=n$.
We know that $\phi(a+bi)=a'+b'\sqrt{2}$ for some $a', b' \in \mathbb{Z}$. I am trying to find an element of $\mathbb{Z}[i]$ whose image would have to satisfy an equation in $\mathbb{Z}[\sqrt{2}]$ that has no solution. Some help?
Thank you. |
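One standard route (a sketch of one possible approach, not necessarily the intended one): track where $i$ can go. Since $\phi$ fixes the integers, $\phi(i)^2 = \phi(i^2) = \phi(-1) = -1$, so some element $a + b\sqrt{2} \in \mathbb{Z}[\sqrt{2}]$ would have to square to $-1$:

```latex
(a + b\sqrt{2})^2 = (a^2 + 2b^2) + 2ab\sqrt{2} = -1
\quad\Longrightarrow\quad 2ab = 0 \quad\text{and}\quad a^2 + 2b^2 = -1,
```

where equating coefficients uses the irrationality of $\sqrt{2}$. This is impossible because $a^2 + 2b^2 \geq 0$ for all $a, b \in \mathbb{Z}$. In other words, $x^2 = -1$ is an equation solvable in $\mathbb{Z}[i]$ but not in $\mathbb{Z}[\sqrt{2}]$.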
Let $f:[a,b] \to \mathbb{R}^2$ be a continuous curve on the plane.
Question: Are there numbers $a \leq x \leq c \leq y \leq b$ such that $$(c-a)f(x)+(b-c)f(y) = \int_a^b f(t) \, dt \ ?$$
In other words, is there a Riemann sum with two terms that hits the bull's-eye?
EDIT: Prompted by a down-vote, maybe I should give some motivation:
It's not difficult to convince oneself (but not completely trivial to prove (*)) that the barycenter $\frac{1}{b-a}\int_a^b f(t) \, dt$ is a convex combination of
two values of $f$, say $f(x)$ and $f(y)$ with $x<y$. But this doesn't answer my question, because depending on the weights we may be unable to find the partition point $c$.
For curves in $\mathbb{R}^n$, I could ask if there is a Riemann sum with $n$ terms that hits the bull's-eye, but dimension $n=2$ already seems tricky enough.
(*)
Footnote: A theorem of Korobkov [1], improving a theorem of McLeod [2], says that if $F: [a,b] \to \mathbb{R}^n$ is continuous on $[a,b]$ and differentiable on $(a,b)$ then $\frac{F(b)-F(a)}{b-a}$ is a convex combination of $n$ values of the derivative $F'$ (which is not necessarily continuous). References:
[1] Korobkov, M.V. -- A generalization of Lagrange's mean value theorem to the case of vector-valued mappings. Sibirsk. Mat. Zh. 42 (2001), no. 2, 349--353, ii; translation in Siberian Math. J. 42 (2001), no. 2, 297--300, doi: 10.1023/A:1004889013835.
[2] McLeod, R.M. -- Mean value theorems for vector valued functions. Proc. Edinburgh Math. Soc. (2) 14 1964/1965, 197--209, doi: 10.1017/S0013091500008786. |
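Not a proof, but a quick numerical experiment (my own illustration) for the sample curve $f(t) = (t, t^2)$ on $[0, 1]$: a coarse brute-force search over triples $(x, c, y)$ with $x \le c \le y$ lands essentially on the bull's-eye.

```python
import itertools

a, b = 0.0, 1.0
f = lambda t: (t, t * t)
target = (0.5, 1 / 3)   # the exact integral of f over [0, 1]

grid = [k / 50 for k in range(51)]

def residual(x, c, y):
    s0 = (c - a) * f(x)[0] + (b - c) * f(y)[0]
    s1 = (c - a) * f(x)[1] + (b - c) * f(y)[1]
    return abs(s0 - target[0]) + abs(s1 - target[1])

best = min(
    ((residual(x, c, y), x, c, y)
     for x, c, y in itertools.product(grid, repeat=3)
     if x <= c <= y),
    key=lambda r: r[0],
)
# e.g. (x, c, y) = (0.22, 0.5, 0.78) already gives residual < 0.01
```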
Properties of Polynomials
We are about to look at an important concept known as an eigenvalue, but before we do, we must secure a foundation of knowledge on polynomials. We have looked at polynomials throughout the Linear Algebra section of the site, for example, when we looked at $\wp (\mathbb{R})$ as the set of all polynomials.
Definition: A Polynomial is a function of the form $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ where $a_0, a_1, ..., a_n \in \mathbb{F}$. The values $a_0, a_1, ..., a_n$ are called the Coefficients of The Polynomial $p$, and the Degree of $p$, denoted $\mathrm{deg} (p)$, is the largest exponent attached to a variable with a nonzero coefficient.
For example, the function $p(x) = 2x^2 + 3x^4$ is a polynomial with real coefficients $2$ and $3$ and whose degree $\mathrm{deg} (p) = 4$. Another example is the function $f(x) = 4 + x^2 + 4x^7$ which is also a polynomial with real coefficients $4$, $1$, and $4$ and whose degree $\mathrm{deg} (f) = 7$.
By convention, the zero polynomial $p(x) = 0$, that is, the constant function that is zero everywhere, is defined to have degree $-\infty$; that is, $\mathrm{deg} (p) = -\infty$.
The following table shows the graphs of some arbitrary functions of increasing degree:
Degree $0$ Degree $1$ Degree $2$ Degree $3$ Degree $4$ Degree $5$
One important property of polynomials is that polynomials of even degree ($2, 4, 6, ...$) tend to look similar to one another, while polynomials of odd degree greater than $1$ ($3, 5, 7, ...$) also tend to look similar to one another.
We will now look at another important definition that the reader has already likely encountered.
Definition: Let $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ be a polynomial. Then $\lambda \in \mathbb{F}$ is called a Root, Solution, or Zero of $p$ if $p(\lambda) = 0$.
We will now look at an important theorem regarding the factorization of a polynomial and its roots.
Theorem 1: If $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ is a polynomial and $\mathrm{deg} (p) = n \geq 1$, then $\lambda \in \mathbb{F}$ is a root of $p$ if and only if there exists a polynomial $q(x)$ with $\mathrm{deg} (q) = n - 1$ such that $p(x) = (x - \lambda) q(x)$. Proof: $\Rightarrow$ Suppose that $\lambda$ is a root of $p$. Since $p$ has degree $n$, we can write $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ with $a_n \neq 0$. Since $\lambda$ is a root of $p$, we have $p(\lambda) = 0$, that is, $0 = a_0 + a_1\lambda + a_2\lambda^2 + ... + a_n\lambda^n$. Subtracting this equation from the expression for $p(x)$, we get:

$$p(x) = p(x) - p(\lambda) = a_1(x - \lambda) + a_2(x^2 - \lambda^2) + ... + a_n(x^n - \lambda^n)$$

Now for each $k = 2, 3, ..., n$ there is a polynomial $q_k(x)$ with $\mathrm{deg} (q_k) = k - 1$ such that $x^k - \lambda^k = (x - \lambda)q_k(x)$; indeed, $q_k(x) = x^{k-1} + \lambda x^{k-2} + ... + \lambda^{k-1}$. Substituting these factorizations gives:

$$p(x) = (x - \lambda)\left[ a_1 + a_2q_2(x) + ... + a_nq_n(x) \right]$$

Therefore let $q(x) = a_1 + a_2q_2(x) + ... + a_nq_n(x)$, so that $p(x) = (x - \lambda)q(x)$ with $\mathrm{deg} (q) = n - 1$, since $a_n \neq 0$ and $\mathrm{deg} (q_n) = n - 1$. $\Leftarrow$ Suppose that $p(x) = (x - \lambda)q(x)$. Then $p(\lambda) = (\lambda - \lambda)q(\lambda) = 0 \cdot q(\lambda) = 0$, and so $\lambda$ is a root of $p$. $\blacksquare$
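Theorem 1 is also constructive in practice: synthetic division by $(x - \lambda)$ produces the quotient $q$ and a remainder equal to $p(\lambda)$, so the remainder vanishes exactly when $\lambda$ is a root. A minimal Python sketch (the names are my own; coefficients are in ascending order):

```python
def deflate(coeffs, lam):
    """Divide p(x) = coeffs[0] + coeffs[1] x + ... by (x - lam) via synthetic
    division. Returns (q_coeffs, remainder); remainder == 0 iff lam is a root."""
    rev = coeffs[::-1]               # work from the leading coefficient down
    q = [rev[0]]
    for c in rev[1:]:
        q.append(c + lam * q[-1])
    remainder = q.pop()              # the last value is p(lam)
    return q[::-1], remainder

# p(x) = 2 - 3x + x^2 = (x - 1)(x - 2): deflating at the root 1 leaves q(x) = x - 2
q, r = deflate([2, -3, 1], 1)
# q == [-2, 1], r == 0
```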
Theorem 2: If $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ is a polynomial such that $\mathrm{deg} (p) = n \geq 0$, then $p$ has at most $n$ distinct roots.
We should note that Theorem 2 pertains to distinct roots. A polynomial can have repeated roots that are not distinct. For example, the polynomial $p(x) = (x - 2)^2 = (x - 2)(x - 2)$ has the roots $\lambda_1 = 2$ and $\lambda_2 = 2$, but $\lambda_1 = \lambda_2$, so these roots are not distinct. This quadratic has only one distinct root, but notice this does not violate Theorem 2 since $\mathrm{deg} (p) = 2$.

Proof: We carry out this proof by induction. For $n \in \mathbb{N} \cup \{ 0 \}$, let $S(n)$ be the statement that every polynomial $p(x)$ with $\mathrm{deg} (p) = n$ has at most $n$ distinct roots.

$S(0)$ says that the polynomial $p(x) = a_0$ with $a_0 \neq 0$ has no roots, which is true since $p$ represents a horizontal line that does not intersect the $x$-axis. $S(1)$ says that the polynomial $p(x) = a_0 + a_1x$ with $a_1 \neq 0$ has at most one root, which is also true since $\lambda = -\frac{a_0}{a_1}$ is the only root of $p$.

Now suppose that for some $k \in \mathbb{N}$ with $k \geq 1$, $S(k-1)$ is true; that is, every polynomial $q(x)$ with $\mathrm{deg} (q) = k - 1$ has at most $k - 1$ distinct roots. We want to show that $S(k)$ is true; that is, every polynomial $p(x)$ with $\mathrm{deg} (p) = k$ has at most $k$ distinct roots.

If $p$ has no roots, we are done since $k \geq 0$. If $p$ has a root, call it $\lambda$; then by Theorem 1 there is a polynomial $q(x)$ with $\mathrm{deg} (q) = k - 1$ such that

$$p(x) = (x - \lambda) q(x)$$

Every root of $p$ other than $\lambda$ must be a root of $q$, since if $p(\mu) = 0$ and $\mu \neq \lambda$, then $(\mu - \lambda)q(\mu) = 0$ forces $q(\mu) = 0$. By the induction hypothesis, $q$ has at most $k - 1$ distinct roots, so $p$ has at most $k$ distinct roots, and $S(k)$ is true.

Therefore, by the Principle of Mathematical Induction, every polynomial $p(x)$ with $\mathrm{deg} (p) = n \geq 0$ has at most $n$ distinct roots. $\blacksquare$
Corollary 1: If $p(x) = a_0 + a_1x + ... + a_mx^m$ where $a_0, a_1, ..., a_m \in \mathbb{F}$ and $p(x) = 0$ for all $x \in \mathbb{F}$, then $a_0 = a_1 = ... = a_m = 0$. Proof: Suppose instead that some coefficient of $p$ is nonzero. Then $\mathrm{deg} (p) = d$ for some $0 \leq d \leq m$, and by Theorem 2, $p$ has at most $d$ distinct roots. But $p(x) = 0$ for every $x \in \mathbb{F}$, and $\mathbb{F}$ (here $\mathbb{R}$ or $\mathbb{C}$) is infinite, so $p$ has infinitely many distinct roots, a contradiction. Therefore $p$ is the zero polynomial, and so $a_0 = a_1 = ... = a_m = 0$. $\blacksquare$