First, for convenience rewrite $J(p,p')$ as\begin{align}J(p,p') &= -D(p\Vert p')+\sum_{i}p_{i}\sum_{j}Q_{ij}\log\frac{Q_{ij}}{q_{j}}+\sum_{j}q_{j}\log\frac{q_{j}}{q'_{j}} \\&= I({p})-\left[D(p\Vert p')-D(q\Vert q')\right]\\& = I(p) - \Delta D(p\Vert p')\end{align}where I have used your definition of $I(p)$, and defined \begin{align}\Delta D(p\Vert p')= D(p\Vert p')-D(q\Vert q')\end{align}
$\Delta D(p\Vert p')$ reflects the "contraction" of Kullback-Leibler divergence between $p$ and $p'$ under the stochastic matrix $Q$. $\Delta D$ is convex in the first argument. To see why, consider any two distributions $p$ and $\bar{p}$, and define the convex mixture $p^\alpha = (1-\alpha) p + \alpha \bar{p}$. We will show convexity by demonstrating that the second derivative with respect to $\alpha$ is non-negative.
First, compute the first derivative w.r.t. $\alpha$ as \begin{align*}& {\textstyle \frac{d}{d\alpha}}\Delta D(p^{\alpha}\Vert p') ={\textstyle \frac{d}{d\alpha}}\left[\sum_{i}p_{i}^{\alpha}\log p_{i}^{\alpha}-\sum_{i}p_{i}^{\alpha}\sum_{j}Q_{ij}\log q_{j}^{\alpha}+\sum_{i}p_{i}^{\alpha}\sum_{j}Q_{ij}\log\frac{q_{j}'}{p'_{i}}\right]\\ & =\sum_{i}(\bar{p}_{i}-p_{i})\log p_{i}^{\alpha}-\sum_{i}(\bar{p}_{i}-p_{i})\sum_{j}Q_{ij}\log q_{j}^{\alpha}+\sum_{i}(\bar{p}_{i}-p_{i})\sum_{j}Q_{ij}\log\frac{q_{j}'}{p'_{i}}\end{align*}Then, we compute the second derivative at $\alpha=0$ as\begin{align}{\textstyle \frac{d^{2}}{d\alpha^{2}}}\Delta D(p^{\alpha}\Vert p')\vert_{\alpha=0}=\sum_{i}\frac{\left(\bar{p}_{i}-p_{i}\right)^{2}}{p_{i}}-\sum_{j}\frac{\left(\bar{q}_{j}-q_{j}\right)^{2}}{q_{j}}\end{align}$\sum_{i}\frac{\left(\bar{p}_{i}-p_{i}\right)^{2}}{p_{i}}$ is the so-called $\chi^2$ divergence from $p$ to $\bar{p}$, and $\sum_{j}\frac{\left(\bar{q}_{j}-q_{j}\right)^{2}}{q_{j}}$ is the same once $Q$ is applied to $p$ and $\bar{p}$. Note that the $\chi^2$ divergence is a special case of an $f$-divergence, and therefore obeys a data-processing inequality (see e.g. Liese and Vajda, IEEE Trans. on Info. Theory, 2006, Thm. 14). In particular, that means that ${\textstyle \frac{d^{2}}{d\alpha^{2}}}\Delta D(p^{\alpha}\Vert p')\vert_{\alpha=0} \ge 0$.
At the same time, $\Delta D(p\Vert p')$ is
not convex in the second argument. Consider $p = (0.5,0.5,0)$, $q=(0.5,0.25,0.25)$, and $Q = \left( \begin{smallmatrix} 0.95 & 0.05 \\ 1 & 0 \\ 0 & 1\end{smallmatrix} \right)$. Here is a plot of $\Delta D$ where the first argument is $p$ and the second argument is a convex mixture of $p$ and $q$, $\alpha p + (1-\alpha) q$, for different $\alpha$:
(see code at https://pastebin.com/q8XLnGK8)
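For reference, here is a minimal NumPy sketch (my own, independent of the linked pastebin code) that evaluates $\Delta D$ along this mixture; plotting the printed values against $\alpha$ reproduces the behaviour described below.

```python
# Evaluate Delta D(p || p') for p' = alpha*p + (1-alpha)*q, with the Q, p, q above.
import numpy as np

Q = np.array([[0.95, 0.05],
              [1.0,  0.0 ],
              [0.0,  1.0 ]])
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.5, 0.25, 0.25])

def kl(a, b):
    m = a > 0                      # convention: 0 * log(0/b) = 0
    return float(np.sum(a[m] * np.log(a[m] / b[m])))

def delta_D(p1, p2):
    return kl(p1, p2) - kl(p1 @ Q, p2 @ Q)

for alpha in np.linspace(0.05, 0.95, 19):
    p_prime = alpha * p + (1 - alpha) * q
    print(f"{alpha:.2f}  {delta_D(p, p_prime):.6f}")
```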
By visual inspection, it can be verified $\Delta D(p\Vert p')$ is neither convex nor concave in the second argument.
Regarding your questions:
(1) $I({p})$ is known to be concave in $p$ (Theorem 2.7.4 in Cover and Thomas, 2006). As we've shown $\Delta D(p\Vert p')$ is convex in $p$, so $-\Delta D(p\Vert p')$ is concave in $p$. Since the sum of concave functions is concave, $J(p,p') = I(p) - \Delta D(p\Vert p')$ is concave in $p$.
At the same time, as a function of the second argument $p'$,$J(p,p') = \mathrm{const} - \Delta D(p\Vert p')$, and we've shown above that $\Delta D(p\Vert p')$ is neither convex nor concave in the second argument. Thus, $J(p,p')$ is not concave in the second argument.
(2) $\Delta D(p\Vert p') \ge 0$ by the data processing inequality for KL divergence (Csiszar and Körner, 2011, Lemma 3.11). That means that \begin{align}J(p,p') = I(p) - \Delta D(p\Vert p') \le I(p) ,\end{align}from which $J(p,p') \le \max_s I(s)$ follows immediately.
|
How do we check exactly that a distribution is involutive?
I have the following definition in my book: A $k$-dimensional distribution $\Delta$ on a manifold $M$ is a smooth choice of a $k$-dimensional subspace $\Delta_{p} \subseteq T_{p}M$. The distribution is called involutive if $\left [ \Delta , \Delta \right ]\subseteq \Delta$.
In my case I have the following vector fields: for $x=(x,y,\theta ,\varphi )\in \Omega :=\mathbb{R}^{2}\times S^{1}\times(- \frac{\pi}{4},\frac{\pi}{4})$, let $X=\frac{\partial }{\partial \varphi }$ and $Y=\cos\theta \frac{\partial }{\partial x}+\sin\theta \frac{\partial }{\partial y}+ \frac{\tan\varphi }{C}\frac{\partial }{\partial \theta }$, where $C$ is some constant. Find out if the distribution $\Delta = \langle X,Y \rangle$ is involutive or not.
Okay, I have computed the Lie bracket $\left [ X,Y \right ]=\frac{1}{C \cos ^{2}{\varphi }}\frac{\partial }{\partial \theta }$, but I don't really know what to do now. Can anybody help me with this example, please? Thank you in advance!
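For what it's worth, here is a small SymPy sketch (my own, not from the book) of the standard check: $\Delta$ is involutive iff $[X,Y]$ lies in $\mathrm{span}\{X,Y\}$ at every point, which can be tested with a rank computation.

```python
# Compute [X, Y] componentwise and test whether it lies in span{X, Y}.
import sympy as sp

x, y, theta, phi, C = sp.symbols('x y theta varphi C')
coords = [x, y, theta, phi]

# Vector fields as coefficient lists in the basis (d/dx, d/dy, d/dtheta, d/dphi)
X = [sp.Integer(0), sp.Integer(0), sp.Integer(0), sp.Integer(1)]
Y = [sp.cos(theta), sp.sin(theta), sp.tan(phi) / C, sp.Integer(0)]

def lie_bracket(V, W):
    # [V, W]^k = sum_i ( V^i * dW^k/dx^i - W^i * dV^k/dx^i )
    return [sp.simplify(sum(V[i] * sp.diff(W[k], coords[i])
                            - W[i] * sp.diff(V[k], coords[i]) for i in range(4)))
            for k in range(4)]

B = lie_bracket(X, Y)
print(B)   # third component equals 1/(C*cos(varphi)**2) (possibly printed as (tan**2+1)/C)
print(sp.Matrix([X, Y, B]).rank())   # 3 => [X, Y] is not in span{X, Y}, so not involutive
```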
|
Categorical models for linear logic with $\otimes$, $1$, $\&$, $\top$, $\oplus$, $0$, and $\multimap$ are typically symmetric monoidal closed categories (for modeling $\otimes$, $1$, and $\multimap$) with products (for modeling $\&$ and $\top$) and coproducts (for modeling $\oplus$ and $0$). Is it harmful to additionally require that such a category has exponentials? Exponentials would model a binary operator $\Rightarrow$ for which proofs of $A \vdash B \Rightarrow C$ correspond to proofs of $A \mathbin\& B \vdash C$. Are there sensible categorical models for linear logic that have exponentials, or does the introduction of exponentials make the structure collapse?
No, it does not make the structure collapse. For example, consider the category of small categories $\mathbf{Cat}$ --- it is clearly a complete and cocomplete cartesian closed category, and moreover it has another closed monoidal structure induced by the "funny tensor" and linear exponents $\mathbb{C} \multimap \mathbb{D}$ given by the category of functors $\mathbb{C} \rightarrow \mathbb{D}$ together with (unnatural) transformations (i.e. "natural transformations" without the "naturality" requirement).
Much more is true. Every (small) monoidal category $\langle \mathbb{V}, \otimes, I\rangle$ fully embeds into a complete and cocomplete cartesian closed and monoidal closed category. The embedding is given by the usual Yoneda functor $y \colon \mathbb{V} \rightarrow \mathbf{Set}^{\mathbb{V}^{op}}$ and the monoidal closed structure is inherited via the Day convolution: $$(F \otimes G)(X) = \int^{A, B \in \mathbb{V}} F(A) \times G(B) \times \hom(X, A \otimes B)$$ $$(F \overset{L}\multimap G)(X) = \int_{A, B \in \mathbb{V}} G(A)^{F(B) \times \hom(A, X \otimes B)}$$ $$(F \overset{R}\multimap G)(X) = \int_{A, B \in \mathbb{V}} G(A)^{F(B) \times \hom(A, B \otimes X)}$$ where $\overset{L}\multimap$ is the left linear exponent and $\overset{R}\multimap$ is the right linear exponent (which are isomorphic precisely when the tensor $\otimes$ in $\mathbb{V}$ is symmetric). Moreover, it is easy to verify that the Yoneda functor preserves the monoidal closed structures. Therefore, in some sense, every monoidal category can be (co)completed to a cartesian closed category with coproducts.
Michal already answered the question where we consider
intuitionistic multiplicative linear logic, without an involutive linear negation; there are many such models where we can have a symmetric monoidal closed structure concurrent with a cartesian closed structure on a category. Another easy example (in fact a special case of one of his constructions) is the topos of $G$-sets where $G$ is an abelian group, made into a symmetric monoidal closed category by using the Day convolution.
I'd like to supplement his answer by mentioning that if you
do demand an involutive negation as well (i.e., work with classical multiplicative linear logic), which we categorically model using $\ast$-autonomous categories, then we do get a collapse: every such model is posetal. More is true: a cartesian closed category $C$ that is self-dual must be a poset; notice that a classical linear negation gives a self-duality, i.e., an equivalence
$$\neg: C^{op} \stackrel{\cong}{\to} C.$$
This is because the initial object in a cartesian closed category is
strict, i.e., for any object $X$, if there is a morphism $X \to 0$, then $X \cong 0$. Given that fact, consider morphisms $1 \to (A \Rightarrow B)$. Then, letting $(-)^\ast$ be the self-duality functor, these are in bijective correspondence with morphisms
$$(A \Rightarrow B)^\ast \to 0$$
and if such a morphism exists, then $(A \Rightarrow B)^\ast \cong 0$ by strictness. In particular, there is at most one morphism $(A \Rightarrow B)^\ast \to 0$, and therefore at most one morphism $1 \to A \Rightarrow B$, and hence at most one morphism $A \to B$ for any two objects $A$, $B$.
|
I am having trouble finding a mathematical definition of the Brillouin zones beyond the first, which is basically the Voronoi cell or Wigner-Seitz cell. We could imagine the set of points closer to a particular integer than to any other:
$$ \min_{n \in \mathbb{Z}} \big|\big|x - n \big|\big|$$
While straightforward in one dimension, these shapes can be rather nuanced in higher dimensions.
$$ \min_{\vec{n} \in L.\mathbb{Z}^m} \big|\big|\vec{x} - \vec{n} \big|\big|$$
Even for hexagons, the Brillouin zones look rather complicated.
It's pretty clear from the image that Brillouin zones converge to a circular region as $n \to \infty$ but I can't prove as much without a definition.
The picture is taken from Wave Propagation in Periodic Structures by Léon Brillouin.
|
The following analytic continuation for $\zeta(s)$ towards $\Re(s)>-1$ was derived here:
$$\displaystyle \zeta(s) = \frac{1}{2\,(s-1)} \left(\sum _{n=1}^{\infty } {\frac {s-1-2\,n}{{n}^{s}}} + \sum _{n=0}^{\infty } {\frac {s+1+2\,n}{\left( n+1 \right) ^{s}}} \right)$$
I plugged this formula into this question about the zeros of $\zeta(s) \pm \zeta(1-s)$. After rearranging the terms a new function emerges:
$$\displaystyle Z(s) = \frac{1}{2\,(s-1)} \sum _{n=1}^{\infty } {\frac {s-1-2\,n}{{n}^{s}}} \pm \frac{1}{2\,(-s)} \sum _{n=0}^{\infty } {\frac {1-s+1+2\,n}{\left( n+1 \right) ^{1-s}}}$$
with $\zeta(s) \pm \zeta(1-s) = Z(s) \pm Z(1-s)$.
What surprised me is that $Z(s)$ seems to diverge for all values of $s$,
except when $\Re(s)=\frac12$, in which case either only its real part $(\pm = +)$ or only its imaginary part $(\pm = -)$ converges. These are then equal to respectively $\Re(\zeta(\frac12 + s \, i))$ and $\Im(\zeta(\frac12 + s \, i))$ and correctly induce the non-trivial zeros as well as the zeros at $2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \phantom. \Gamma(1-s) = \pm 1$. Question:
Can it be proven that either the real or the imaginary part of $Z(s)$ only converges for $\Re(s)=\frac12$?
Addition:
I think I can show that $\Re(Z(s))$ diverges for $\Re(s)<-1$ and $\Re(s)>2$, but I can't get much closer towards $\frac12$. The solution could lie in the fact that only when $\Re(s)=\frac12$ is the series $Z(s)$ related to the real and imaginary parts of $\zeta(s)$; in all other cases these links simply don't exist.
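For numerical experimentation, here is a rough sketch (assuming mpmath is available) that evaluates partial sums of $Z(s)$ on and off the line $\Re(s)=\frac12$; it proves nothing, but lets you watch how the partial sums behave.

```python
# Partial sums of Z(s) as defined above, for experimentation only.
import mpmath as mp

def Z_partial(s, N, sign=+1):
    s = mp.mpc(s)
    t1 = sum((s - 1 - 2 * n) / mp.power(n, s) for n in range(1, N + 1))
    t2 = sum((1 - s + 1 + 2 * n) / mp.power(n + 1, 1 - s) for n in range(0, N + 1))
    return t1 / (2 * (s - 1)) + sign * t2 / (2 * (-s))

for N in (10**2, 10**3, 10**4):
    on_line  = Z_partial(mp.mpc('0.5', '14'), N, sign=+1)   # a point with Re(s) = 1/2
    off_line = Z_partial(mp.mpc('0.6', '14'), N, sign=+1)   # a point off the line
    print(N, mp.nstr(on_line, 8), mp.nstr(off_line, 8))
```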
Note that the
finite series $Z(s,v)$ can be expressed in terms of zetas and Hurwitz zetas:
$Z(s,v):=\frac{\zeta \left( s \right) -\zeta_{H} \left(s,v+1 \right)}{2} +{ \frac {\zeta \left( s-1 \right) -\zeta_{H} \left(s-1,v+1 \right) }{1- s}}\pm \left( \frac{\zeta \left( 1-s \right) -\zeta_{H} \left(1-s,v+2 \right)}{2} -\frac {\zeta \left( -s \right) +\zeta_{H} \left(-s,v+2 \right) }{s} \right)$
For $\pm=+$ and $s=\frac12$ this becomes:
$Z(\frac12,v):= \zeta \left( \frac12 \right) -\frac{\zeta_{H} \left(\frac12,v+1 \right)+\zeta_{H} \left(\frac12,v+2 \right)}{2} - \frac{\zeta_{H} \left(-\frac12,v+1 \right) -\zeta_{H} \left(-\frac12,v+2 \right)}{\frac12}$
Since $Z(\frac12)=\zeta(\frac12)$, it follows that the other terms must converge to zero when $v \rightarrow \infty$. Under the assumption that $v$ is positive, it follows (with help from Wolfram, and it works out fine) that:
$\displaystyle \lim_{v \to +\infty} \frac {-2\sqrt {v+1}}{4 \,v+3} \, \zeta_{H} \left(\frac12,v+1 \right)=1$
|
Your conclusion is built on the assumption that the bond lengths will be constant. A bond’s dipole moment varies with bond length and thus for the vector addition to give the identical product, all $\ce{C-Cl}$ and all $\ce{C-H}$ bonds would have to be identical. They’re probably not.
Sutton and Brockway studied the bond lengths in chloromethane, dichloromethane and chloroform back in 1935 by electron diffraction.
[1] Their results indicate that the $\ce{C-Cl}$ bond lengths don’t vary between the compounds in question as indicated in the table below. (Note that all chlorine atoms in each molecule are homotopic and thus equal bond lengths are theoretically expected.)
$$\begin{array}{lccc}\hline\text{Compound} & \ce{CH3Cl} & \ce{CH2Cl2} & \ce{CHCl3}\\ \hline d(\ce{C-Cl})/\mathrm{pm} & 177 & 177\phantom{^\circ} & 178\phantom{^\circ} \\\angle(\ce{Cl-C-Cl}) & \text{n. a.} & 111^\circ & 111^\circ \\ \hline\end{array}$$
These results point towards the chlorine side of things not having an impact. But there is still the hydrogen side of things. Hydrogen atoms are generally
very hard to locate in diffraction-type experiments. X-ray diffraction is observed if photons interact with electrons in a certain manner; hydrogen atoms do not have many electrons around them and are thus hardly seen. One needs to observe diffraction at very large angles to be able to locate hydrogens. I assume, although I don’t know, that it’s similar in electron diffraction. Indeed, the paper notes:
[…] and $l_{\ce{C-H}}$ was taken as $1.06~\mathrm{\overset{\circ}{A}}$.
The two carbons in chloromethane and chloroform are, however, very much not identical. One is surrounded by three electronegative chlorine atoms which will strongly remove electron density from it, polarising the $\ce{C-Cl}$ bonds towards each of the three chlorines. The other will have only one such withdrawing partner. The different electron densities around the carbon atoms are well represented by the corresponding $\ce{^13C}$ NMR shifts: $\delta(\ce{CH3Cl}) \approx 22~\mathrm{ppm}$; $\delta(\ce{CHCl3}) \approx 77~\mathrm{ppm}$.
[2] The hydrogen shifts also differ by a similar magnitude, showing that the hydrogen in chloroform is much more polarised than in chloromethane.
These different electronics
must have an influence on the $\ce{C-H}$ bond length of the two molecules. I can’t imagine two such electronically different systems having equal bond lengths. My assumption is that $\ce{CH3Cl}$ should have the shorter $\ce{C-H}$ bond lengths, since carbon is more negatively charged while hydrogen remains positively charged; this should give a slightly greater attractive force, and hydrogen, being such a small element, immediately feels it. Therefore, instead of the final dipole vector $\vec d$ being equal in both cases ($\vec d = \vec x + \vec y$), we have one lesser and one stronger vector.
$$\begin{gather}\vec d_{\ce{CH3Cl}} = \vec x + \vec y + \delta\vec y\tag{1}\\\vec d_{\ce{CHCl3}} = \vec x + \vec y - \delta\vec y\tag{2}\end{gather}$$
References:
[1]: L. E. Sutton, L. O. Brockway,
J. Am. Chem. Soc. 1935, 57, 473. DOI: 10.1021/ja01306a026.
[2]: Experimental spectra found for chloromethane/chloroform on SciFinder.
|
One of the hyperparameters for LSTM networks is temperature. What is it?
Temperature is a hyperparameter of LSTMs (and neural networks generally) used to control the randomness of predictions by scaling the logits before applying softmax. For example, in TensorFlow’s Magenta implementation of LSTMs, temperature represents how much to divide the logits by before computing the softmax.
When the temperature is 1, we compute the softmax directly on the logits (the unscaled output of earlier layers), and using a temperature of 0.6 the model computes the softmax on $\frac{logits}{0.6}$, resulting in a larger value. Performing softmax on larger values makes the LSTM
more confident (less input is needed to activate the output layer) but also more conservative in its samples (it is less likely to sample from unlikely candidates). Using a higher temperature produces a softer probability distribution over the classes, and makes the RNN more “easily excited” by samples, resulting in more diversity and also more mistakes.
Neural networks produce class probabilities from a logit vector $\mathbf{z} = (z_1,\ldots,z_n)$ by applying the softmax function, which produces the probability vector $\mathbf{q} = (q_1,\ldots,q_n)$ by comparing each $z_i$ with the other logits:
$q_i = \frac{\exp{(z_i/T)}}{\sum_j\exp{(z_j/T)}}\tag{1}$
where $T$ is the temperature parameter, normally set to 1.
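As a concrete illustration (my own sketch, not taken from the Magenta code), here is equation (1) in NumPy:

```python
# Temperature-scaled softmax, as in equation (1).
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax_with_temperature(logits, T=1.0))   # plain softmax: ~[0.66, 0.24, 0.10]
print(softmax_with_temperature(logits, T=0.5))   # sharper distribution, more confident
print(softmax_with_temperature(logits, T=5.0))   # flatter distribution, more diverse samples
```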
The softmax function normalizes the candidates at each iteration of the network based on their exponentiated values, ensuring that the network outputs all lie between zero and one at every timestep.
A higher temperature therefore increases the sensitivity to low-probability candidates. In LSTMs, the candidate, or sample, can be a letter, a word, or a musical note, for example:
For high temperatures (${\displaystyle \tau \to \infty }$), all [samples] have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature (${\displaystyle \tau \to 0^{+}}$), the probability of the [sample] with the highest expected reward tends to 1.
Reference
Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015). arXiv
|
I think that the question is why the SI system of units considers one ampere, the unit of current, to be the elementary one, rather than the unit of the electric charge.
Recall that one ampere is defined in SI as
"the constant current that will produce an attractive force of $2\times 10^{–7}$ newton per metre of length between two straight, parallel conductors of infinite length and negligible circular cross section placed one metre apart in a vacuum"
Note that this definition relies on magnetic forces; it is equivalent to saying that the vacuum permeability is $$\mu_0=4\pi\times 10^{-7} {\text{ V s/(A m)}}.$$ It's the magnetic force that has a "simple numerical value" in the SI system of units, and magnetic forces don't exist between static electric charges, just between currents.
If we tried to give a similar definition for the electric charge, using the electrostatic force, the numerical values would be very different.
Now, one may ask why the magnetic forces were chosen to have "simple values" in the SI system. It is a complete historical coincidence. The SI system was designed, up to the rationalized additions of $4\pi$ and different powers of ten, as the successor of CGSM, the magnetic variation of Gauss' centimeter-gram-second (CGS) system of units.
These days, both methods would be equally valid because we use units in which the speed of light in the vacuum is fixed to be a known constant, $299,792,458\,{\rm m/s}$, so both $\mu_0$ and $\epsilon_0=1/(\mu_0 c^2)$, the vacuum permittivity, are equal to known numerical constants, anyway.
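As a quick numerical illustration of that last relation (nothing more than a back-of-the-envelope check):

```python
# epsilon_0 = 1 / (mu_0 * c^2) with the pre-2019 SI values quoted above
import math

mu_0 = 4 * math.pi * 1e-7        # V s / (A m)
c = 299_792_458                  # m / s
epsilon_0 = 1 / (mu_0 * c**2)
print(epsilon_0)                 # ~8.854e-12 F/m
```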
At any rate, the unit of the electric charge is simply "coulomb" which is "ampere times second", so it is as accurately defined as one ampere.
|
1. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
Journal Article
2. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in proton-proton collisions at √s = 8 TeV at the Large Hadron...
Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article
3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the ℓ+ℓ−bb̄, ℓνbb̄, and νν̄bb̄ channels with pp collisions at √s = 13 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32 - 52
Journal Article
4. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
Journal Article
5. Measurement of exclusive γγ→ℓ+ℓ− production in proton–proton collisions at √s = 7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261
Journal Article
6. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1 - 34
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
7. ZZ → ℓ+ℓ−ℓ′+ℓ′− cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector
PHYSICAL REVIEW D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3
Measurements of ZZ production in the ℓ+ℓ−ℓ′+ℓ′− channel in proton-proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are...
PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Couplings | Large Hadron Collider | Particle collisions | Transverse momentum | Sensors | Cross sections | Bosons | Muons | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
8. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′−ℓ′+ and ℓ−ℓ+νν̄ channels (ℓ = e, μ) in...
Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment
Journal Article
9. Measurement of event-shape observables in Z→ℓ+ℓ− events in pp collisions at √s = 7 TeV with the ATLAS detector at the LHC
European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 7, pp. 1 - 40
Journal Article
10. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
EUROPEAN PHYSICAL JOURNAL C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states, where ℓ stands for...
DISTRIBUTIONS | BOSON | WZ | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
11. Search for new phenomena in the WW→ℓνℓ′ν′ final state in pp collisions at √s = 7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 01/2013, Volume 718, Issue 3, pp. 860 - 878
Journal Article
|
There are a number of imprecisions in your question, mostly having to do with confusing the Lie group and its Lie algebra. I suppose this will make it hard to read the mathematical literature. Having said that, the first volume of Kobayashi and Nomizu is probably the canonical reference.
Let me try to summarise. Let me assume that $H$ is connected.
The structure of the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$ of the Lie algebra $\mathfrak{g}$ of $G$ into the Lie algebra $\mathfrak{h}$ of $H$ and the complement $\mathfrak{m} = \mathrm{Span}(\lbrace Y_a\rbrace)$, says that you have a
reductive homogeneous space. Such homogeneous spaces have a canonical invariant connection and hence a canonical notion of covariant derivative.
The map $G \to G/H$ defines a principal $H$-bundle. Your $\Omega$ is a local section of this bundle. On $G$ you have the left-invariant Maurer-Cartan one-form $\Theta$, which is $\mathfrak{g}$-valued. You can use $\Omega$ to pull back $\Theta$ to $G/H$: it is a locally defined one-form on $G/H$ with values in $\mathfrak{g}$. For matrix groups, it is indeed the case that $\Omega^*\Theta = \Omega^{-1}d\Omega$, but you can in fact use this notation for most computations without worrying too much.
Decompose $\Omega^{-1}d\Omega$ according to the split $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{m}$:$$
\Omega^{-1} d\Omega = \omega + \theta
$$where $\omega$ is the $\mathfrak{h}$ component and $\theta$ is the $\mathfrak{m}$ component. It follows that $\theta$ defines pointwise an $H$-equivariant isomorphism between the tangent space of $G/H$ and $\mathfrak{m}$, with $H$ acting on $\mathfrak{m}$ by the restriction to $H$ of the adjoint action of $G$ on $\mathfrak{g}$ and $H$ acting on $G/H$ via the linear isotropy representation. This means that $\theta$ is a
soldering form.
On the other hand $\omega$ is $\mathfrak{h}$-valued and defines a connection one-form. You can check that if you change the parametrisation $\Omega$, then $\omega$ does transform as a connection under the local $H$-transformations.
This then allows you to differentiate sections of homogeneous vector bundles on $G/H$, such as tensors. In your notation and assuming that $\psi$ is a section of one such bundle, associated to a representation $\rho$ of $H$, the covariant derivative would be$$
\nabla \psi = d\psi + \rho(\omega)\psi~,
$$where I also denote by $\rho$ the representation of the Lie algebra of $H$.
The Maurer-Cartan structure equation satisfied by $\Theta$ is$$
d\Theta = - \tfrac12 [\Theta,\Theta]
$$and this pulls back to $G/H$ to give the following equations$$
d\theta + [\omega,\theta] = -\tfrac12 [\theta,\theta]_{\mathfrak{m}}
$$and$$
d\omega + \tfrac12 [\omega,\omega] = - \tfrac12 [\theta,\theta]_{\mathfrak{h}}
$$which say that the torsion $T$ and curvature $K$ of $\omega$ are given respectively by$$
T = -\tfrac12 [\theta,\theta]_{\mathfrak{m}} \qquad\mathrm{and}\qquad K = - \tfrac12 [\theta,\theta]_{\mathfrak{h}}.
$$
One thing to keep in mind is that in general $\nabla$ will not be the Levi-Civita connection of any invariant metric, since it has torsion. (If (and only if) the torsion vanishes, you have a (locally) symmetric space.) If you are interested in the Levi-Civita connection of an invariant metric, then you have to modify the invariant connection by the addition of a contorsion tensor which kills the torsion. The details are not hard to work out.
|
In QFT, a representation of the Lorentz group is specified as follows:$$U^\dagger(\Lambda)\phi(x) U(\Lambda)= R(\Lambda)~\phi(\Lambda^{-1}x)$$Where $\Lambda$ is an element of the Lorentz group, $\phi(x)$ is a quantum field with possibly many components, $U$ is unitary, and $R$ an element in a representation of the Lorentz group.
We know that a representation is a map from a Lie group to the group of linear operators on some vector space.
My question is, for the representation specified as above, what is the vector space that the representation acts on?
Naively it may look like this representation acts on the set of field operators, for $R$ maps some operator $\phi(x)$ to some other field operator $\phi(\Lambda^{-1}x)$, and if we loosely define field operators as things you get from canonically quantizing classical fields, we can possibly convince ourselves that this is indeed a vector space.
But then we recall that the dimension of a representation is simply the dimension of the space that it acts on. This means if we take $R$ to be in the $(1,1)$ singlet representation, this is a rep of dimension 1, hence its target space is of dimension 1. Then if we take the target space to be the space of fields, this means $\phi(x)$ and $\phi(\Lambda x)$ are related by linear factors, which I am certainly not convinced of.
EDIT: This can work if we view the set of all $\phi$ as a field over which we define the vector space, see the added section below.
I guess another way to state the question is the following: we all know that scalar fields and vector fields in QFT get their names from the fact that under Lorentz transformations, scalars transform as scalars, and vectors transform like vectors.
I would like to state the statement "a scalar field transforms like a scalar" by precisely describing the target vector space of a scalar representation of the Lorentz group, how can this be done?
ADDED SECTION:
Let me give an explicit example of what I'm trying to get at: let's take the left-handed spinor representation, $(2,1)$. This is a 2-dimensional representation. We know that it acts on things like $(\phi_1,\phi_2)$.
Let's call the space consisting of things of the form $(\phi_1,\phi_2)$ $V$. Is $V$ 2 dimensional?
Viewed as a classical field theory, yes, because each $\phi_i$ is just a scalar. As a quantum field theory, each $\phi_i$ is an operator.
We see that in order for $V$ space to be 2 dimensional after quantization, we need to able to view the scalar quantum fields as scalar multipliers of vectors in $V$. i.e. we need to view $V$ as a vector space defined over a (mathematical) field of (quantum) fields.
We therefore have to check whether the set of (quantum) fields satisfies the (mathematical) field axioms. Can someone check this? Commutativity seems to hold if we, as in quantum mechanics, take fields and their complex conjugates to live in adjoint vector spaces, rather than the same one. Checking for closure under multiplication would require some axiomatic definition of what a quantum field is.
|
I am trying to evaluate $$\int_0^1\frac {\{2x^{-1}\}}{1+x}\,\mathrm d x$$
where $\{x\}$ is the fractional part of $x$.
I have tried splitting up the integral but it gets quite complicated and confusing. Is there an easy method for such integrals?
$$\begin{eqnarray*} \int_{0}^{1}\frac{\{2x^{-1}\}}{1+x}\,dx=\int_{1}^{+\infty}\frac{\{2x\}}{x(x+1)}\,dx &=&2\int_{2}^{+\infty}\frac{\{x\}}{x(2+x)}\,dx\\&=&\int_{2}^{+\infty}\left(\frac{\{x\}}{x}-\frac{\{x+2\}}{x+2}\right)\,dx\end{eqnarray*}$$ clearly equals $$ \int_{2}^{4}\frac{\{x\}}{x}\,dx = \int_{2}^{3}\frac{x-2}{x}\,dx+\int_{3}^{4}\frac{x-3}{x}\,dx = \color{red}{2-4\log 2+\log 3}.$$
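As a quick sanity check (not part of the original answer), a crude Riemann sum of the original integral agrees with this closed form; both come out to approximately $0.326$:

```python
# Crude numerical check of the closed form 2 - 4*log(2) + log(3).
import numpy as np

N = 2_000_001
x = (np.arange(N) + 0.5) / N                    # midpoints of a uniform grid on (0, 1)
riemann = np.mean(np.mod(2.0 / x, 1.0) / (1.0 + x))
closed = 2 - 4 * np.log(2) + np.log(3)
print(riemann, closed)                          # both approximately 0.326
```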
Splitting up the integral should work. Note that $n\leq 2x^{-1}\leq n+1\iff 2(n+1)^{-1}\leq x\leq 2n^{-1}$. Then splitting into intervals $[2(n+1)^{-1},2n^{-1}]$ for $n\geq 2$, we get \begin{align*} \int_0^1\frac{\{2x^{-1}\}}{1+x}~dx &= \sum_{n=2}^\infty \int_{2(n+1)^{-1}}^{2n^{-1}}\frac{\{2x^{-1}\}}{1+x}~dx = \sum_{n=2}^\infty \int_{2(n+1)^{-1}}^{2n^{-1}}\frac{2x^{-1}-n}{1+x}~dx \\ &= \sum_{n=2}^\infty \int_{2(n+1)^{-1}}^{2n^{-1}}\frac{2}{x(1+x)} - \frac{n}{1+x}~dx. \end{align*}
This is the same as the integral $$\int_1^\infty \frac{\{2x\}}{x^2\left(1+\frac1x\right)} \mathrm dx = \int_1^\infty \frac{\{2x\}}{x(x+1)} \mathrm dx$$ This is equivalent to $$\sum\limits_{n=2}^\infty \int_{n/2}^{\frac{n+1}2} \frac{2x-n}{x(x+1)}\mathrm dx$$
This is easily evaluated.
|
This is the third part of a series consisting of three articles with the goal to introduce some general concepts and concrete algorithms in the field of neural network optimizers. As a reminder, here is the table of contents:
We covered two important concepts of optimizers in the previous sections, namely the introduction of a momentum term and adaptive learning rates. However, other variations, combinations or even additional concepts have also been proposed
Each optimizer has its own advantages and limitations, making it suitable for specific contexts. It is beyond the scope of this series to name or introduce them all. Instead, we briefly explain the well-established Adam optimizer as one example. It also reuses some of the ideas discussed previously.
Before we proceed, we want to stress some thoughts regarding the combination of optimizers. One obvious choice might be to combine the momentum optimizer with the adaptive learning scheme. Even though this is theoretically possible and even an option in an implementation of the RMSProp algorithm, there might be a problem.
The main concept of the momentum optimizer is to accelerate when the direction of the gradient remains the same in subsequent iterations. As a result, the update vector increases in magnitude. This, however, contradicts one of the goals of adaptive learning rates which tries to keep the gradients in “reasonable ranges”. This may lead to issues when the momentum vector \(\fvec{m}\) increases but then gets scaled down again by the scaling vector \(\fvec{s}\).
It is also noted by the authors of RMSProp that the direct combination of adaptive learning rates with a momentum term does not work so well. The theoretical argument discussed might be a cause for these observations.
In the following, we first define the Adam algorithm and then look at the differences compared to previous approaches. The first is the usage of first-order moments which behave differently compared to a momentum vector. We are using an example to see how this choice has an advantage in skipping suboptimal local minima. The second difference is the usage of bias-correction terms necessary due to the zero-initialization of the moment vectors. Finally, we are also going to take a look at different trajectories.
This optimizer was introduced by Diederik P. Kingma and Jimmy Ba in 2014. It mainly builds upon the ideas from AdaGrad and RMSProp, i.e. adaptive learning rates, and extends these approaches. The name is derived from
adaptive moment estimation.
Additionally to the variables used in classical gradient descent, let \(\fvec{m} = (m_1, m_2, \ldots, m_n) \in \mathbb{R}^n\) and \(\fvec{s} = (s_1, s_2, \ldots, s_n) \in \mathbb{R}^n\) be the vectors with the estimates of the first and second raw moments of the gradients (same lengths as the weight vector \(\fvec{w}\)). Both vectors are initialized to zero, i.e. \(\fvec{m}(0) = \fvec{0}\) and \(\fvec{s}(0) = \fvec{0}\). The hyperparameters \(\beta_1, \beta_2 \in [0;1[\) denote the decaying rates for the moment estimates and \(\varepsilon \in \mathbb{R}^+\) is a smoothing term. Then, the Adam optimizer defines the update rules\begin{align} \begin{split} \fvec{m}(t) &= \beta_1 \cdot \fvec{m}(t-1) + (1-\beta_1) \cdot \nabla E\left( \fvec{w}(t-1) \right) \\ \fvec{s}(t) &= \beta_2 \cdot \fvec{s}(t-1) + (1-\beta_2) \cdot \nabla E \left( \fvec{w}(t-1) \right) \odot \nabla E \left( \fvec{w}(t-1) \right) \\ \fvec{w}(t) &= \fvec{w}(t-1) - \eta \cdot \frac{\fvec{m}(t)}{1-\beta_1^t} \oslash \sqrt{\frac{\fvec{s}(t)}{1-\beta_2^t} + \varepsilon} \end{split} \label{eq:AdamOptimizer_Adam} \end{align}
to find a path from the initial position \(\fvec{w}(0)\) to a local minimum of the error function \(E\left(\fvec{w}\right)\). The symbol \(\odot\) denotes the point-wise multiplication and \(\oslash\) the point-wise division between vectors.
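To make the update rules above concrete, here is a minimal NumPy sketch (an illustration of ours, not the reference implementation from the paper); the quadratic error function at the end is only a stand-in.

```python
# Minimal Adam sketch following the update rules above.
import numpy as np

def adam(grad_E, w0, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8, n_iter=5000):
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)   # first raw moment estimate, m(0) = 0
    s = np.zeros_like(w)   # second raw moment estimate, s(0) = 0
    for t in range(1, n_iter + 1):
        g = grad_E(w)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        s_hat = s / (1 - beta2 ** t)
        w = w - eta * m_hat / np.sqrt(s_hat + eps)   # eps inside the root, as above
    return w

# Example: E(w) = w1^2 + 10*w2^2, gradient (2*w1, 20*w2), starting from (3, 2)
grad = lambda w: np.array([2.0 * w[0], 20.0 * w[1]])
print(adam(grad, [3.0, 2.0], eta=0.01))   # ends up close to the minimum (0, 0)
```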
There is a very close relationship to adaptive learning rates. In fact, the update rule of \(\fvec{s}(t)\) in \eqref{eq:AdamOptimizer_Adam} is identical to the one in the adaptive learning scheme. We also see that there is an \(\fvec{m}\) vector, although this one is different compared to the one defined in momentum optimization. We are picking up this point shortly.
In the description of Adam, the arguments are more statistically-driven: \(\fvec{m}\) and \(\fvec{s}\) are interpreted as exponentially moving averages of the first and second raw moment of the gradient. That is, \(\fvec{m}\) is a biased estimate of the means of the gradients and \(\fvec{s}\) is a biased estimate of the uncentred variances of the gradients. In total, we can say that the Adam update process uses information about where the gradients are located on average and how they tend to scatter.
In momentum optimization, we keep track of an exponentially decaying sum whereas in Adam we have an exponentially decaying average. The difference is that in Adam we do not add the full new gradient vector \(\nabla E\left( \fvec{w}(t-1) \right)\). Instead, only a fraction is used while at the same time a fraction of the old momentum is removed (the last part is identical to the momentum optimizer). For example, if we set \(\beta_1 = 0.9\), we keep 90 % of the old value and add 10 % of the new. The bottom line is that we build much less momentum, i.e. the momentum vector does not grow that much.
In the analogy of a ball rolling down a valley, we may think of the moment updates in \eqref{eq:AdamOptimizer_Adam} as a very heavy ball with a lot of friction. It accelerates less and needs more time to take the gradient information into account. The ball rolls down the valley according to the running average of gradients along the track. Since it takes some time until the old gradient information is lost, it is less likely to stop at small plateaus and can hence overshoot small local minima.
We now want to test this argument on a small example function. For this, we leave out the second moments \(\fvec{s}\) for now so that \eqref{eq:AdamOptimizer_Adam} reduces to\begin{align} \begin{split} \fvec{m}(t) &= \beta_1 \cdot \fvec{m}(t-1) + (1-\beta_1) \cdot \nabla E\left( \fvec{w}(t-1) \right) \\ \fvec{w}(t) &= \fvec{w}(t-1) - \eta \cdot \frac{\fvec{m}(t)}{1-\beta_1^t}. \end{split} \label{eq:AdamOptimizer_AdamFirstMoment} \end{align}
We want to compare these first moment updates with classical gradient descent. The following figure shows the example function and allows you to play around with a trajectory which starts near the summit of the hill.
Directly after the first descent is a small local minimum and we see that classical gradient descent (\(\beta_1 = 0\)) gets stuck here. However, with first-order moments (e.g. \(\beta_1 = 0.95\)), we leverage the fact that the moving average decreases not fast enough so that we can still roll over this small hole and make it down to the valley.
We can see from the error landscape that the first gradient component has the major impact on the updates as it is the direction of the steepest hill. It is insightful to visualize the first component \(m_1(t)\) of the first-order moments over iteration time \(t\):
With classical gradient descent (\(\beta_1 = 0\)), we move fast down the hill but then get stuck in the first local minimum. As only local gradient information is used in the update process, the chances of escaping the hole are very low.
In contrast, when using first-order moments, we increase slower in speed as only a fraction of the large first gradients is used. However, \(m_1(t)\) also decreases slower when reaching the first hole. In this case, the behaviour of the moving average helps to step over the short increase and to move further down the valley.
Building momentum and accelerating when we move in the same direction in subsequent iterations is the main concept and advantage of momentum optimization. However, as we already saw in the toy example used in the momentum optimizer article, large momentum vectors may be problematic as they can overstep local minima and lead to oscillations. What is more, as stressed in the argument above, it is not entirely clear if momentum optimization works well together with adaptive learning rates. Hence, it might be reasonable that the momentum optimizer is not used directly in Adam.
The final change in the Adam optimizer compared to its predecessors is the bias correction terms where we divide both moment vectors by either \((1-\beta_1^t)\) or \((1-\beta_2^t)\). This is because the moment vectors are initialized to zero so that the moving averages are, especially in the beginning, biased towards the origin. The factors are a countermeasure to correct this bias.
Practically speaking, these terms boost both vectors in the beginning since they are divided by a number usually \(< 1\). This can speed-up convergence when the true moving averages are not located at the origin but are larger instead. As the factors have the iteration number \(t\) in the exponent of the hyperparameters, the terms approach 1 over time and hence become less influential.
We now consider, once again, a one-dimensional example and define measures to compare the update vectors of the second iteration using either classical gradient descent or the Adam optimizer. To visualize the effect of the bias-correction terms, we repeat the process in which we leave these terms out.
Denoting the gradients of the first two iterations as \(g_t = \nabla E\left( w(t-1) \right)\), we build the moment estimates\begin{align*} m(1) &= \beta_1 \cdot m(0) + (1-\beta_1) \cdot g_1 = (1-\beta_1) \cdot g_1 \\ m(2) &= \beta_1 \cdot m(1) + (1-\beta_1) \cdot g_2 = \beta_1 \cdot (1-\beta_1) \cdot g_1 + (1-\beta_1) \cdot g_2 \\ s(1) &= \beta_2 \cdot s(0) + (1-\beta_2) \cdot g_1^2 = (1-\beta_2) \cdot g_1^2 \\ s(2) &= \beta_2 \cdot s(1) + (1-\beta_2) \cdot g_2^2 = \beta_2 \cdot (1-\beta_2) \cdot g_1^2 + (1-\beta_2) \cdot g_2^2 \end{align*}
so that we can define a comparison measure as\begin{equation} \label{eq:AdamOptimizer_AdamMeasureCorrection} C_A(g_1,g_2) = \left| \eta \cdot \frac{\frac{m(2)}{1-\beta_1^2}}{\sqrt{\frac{s(2)}{1-\beta_2^2} + \varepsilon}} \right| - |\eta \cdot g_2| = \left| \eta \cdot \frac{\sqrt{1-\beta_2^2}}{1-\beta_1^2} \cdot \frac{m(2)}{\sqrt{s(2) + (1-\beta_2^2) \cdot \varepsilon}} \right| - |\eta \cdot g_2|. \end{equation}
To make the effect of the bias correction terms more evident, we moved them out of the compound fraction and used them as prefactor. We define a similar measure without these terms\begin{equation} \label{eq:AdamOptimizer_AdamMeasureNoCorrection} \tilde{C}_A(g_1,g_2) = \left| \eta \cdot \frac{m(2)}{\sqrt{s(2) + \varepsilon}} \right| - |\eta \cdot g_2|. \end{equation}
The following figure compares the two measures by interpreting the gradients of the first two iterations as variables.
With correction terms (left image), we can observe that small gradients get amplified and larger ones attenuated. This is an inheritance from the adaptive learning scheme. Back then, however, this behaviour was more centred around the origin whereas here smaller gradients get amplified less and more independently of \(g_1\). This is likely an effect of the \(m(2)\) term which uses only a small fraction (10 % in this case) of the first gradient \(g_1\) leading to a smaller numerator.
When we compare this result with the one without any bias corrections (right image), we see a much brighter picture. That is, the area of amplification of small and attenuation of large gradients is stronger. This is not surprising, as the prefactor\begin{equation*} \frac{\sqrt{1-\beta_2^2}}{1-\beta_1^2} = \frac{\sqrt{1-0.999^2}}{1-0.9^2} \approx 0.2353 \end{equation*}
is smaller than 1 and hence leads to an overall decrease (the term \((1-\beta_2^2) \cdot \varepsilon \) is too small to have a visible effect). Therefore, the bias correction terms ensure that the update vectors behave also more moderately at the beginning of the learning process.
Like in previous articles, we now also want to compare different trajectories when using the Adam optimizer. For this, we can use the following widget which implements the Adam optimizer.
Basically, the parameters behave like expected: larger values for \(\beta_1\) make the accumulated gradients decrease slower so that we first overshoot the minimum. \(\beta_2\) controls again the preference of direction (\(\beta_2\) small) vs. magnitude (\(\beta_2\) large).
Note that even though the Adam optimizer is much more advanced than classical gradient descent, this does not mean that it is immune to extreme settings. It is still possible that weird effects happen, like oscillations, or that the overshooting mechanism discards good minima (example settings). Hence, it may still be worthwhile to search for good values for the hyperparameters.
We finished with the main concepts of the Adam optimizer. It is a popular optimization technique and its default settings are often a good starting point. Personally, I have had good experience with this optimizer and would definitely use it again. However, depending on the problem, it might not be the best choice or requires tuning of the hyperparameters. For this, it is good to know what they do and also how the other optimization techniques work.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Ex.13.4 Q3 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question
A fez, the cap used by the Turks, is shaped like the frustum of a cone (see Fig. 13.24). If its radius on the open side is \(10\,\rm{cm}\), radius at the upper base is \(4\,\rm{cm}\) and its slant height is \(15\,\rm{cm, }\) find the area of material used for making it.
Text Solution
What is Known?
A fez is shaped like the frustum of a cone with radius of open side \(10\,\rm{cm},\)
radius at the upper base \(4\,\rm{cm}\) and slant height \(15 \,\rm{cm}\)
What is Unknown?
Area of the material used for making Fez
Reasoning:
Since the
fez is in the shape of a frustum of a cone and is open at the bottom.
Therefore,
Area of material used for making fez \(=\) Curved Surface Area of the frustum \(+\) Area of the upper circular end
We will find the Area of material by using formulae;
CSA of frustum of a cone \( = \pi \left( {{r_1} + {r_2}} \right)l\)
where
\(r_1, r_2\) and \(l\) are the radii and slant height of the frustum of the cone respectively.
Area of the circle \( = \pi {r^2}\)
where
\(r\) is the radius of the circle.
Steps:
Slant height, \(l =15\,\rm{cm}\)
Radius of open side \(r_1=10\,\rm{cm} \)
Radius of upper base \({r_2} =4\,\rm{cm}\)
Area of material used for making fez \(=\) Curved Surface area of the frustum \(+\) area of the upper circular end
\[\begin{align}&= \pi \left( {{r_1} + {r_2}} \right)l + \pi {r^2}\\&= \pi \left[ {\left( {{r_1} + {r_2}} \right)l + r_2^2} \right]\end{align}\]
\[\begin{align}&= \frac{{22}}{7}\left[ {\left( {10cm + 4cm} \right)15cm + {{\left( {4cm} \right)}^2}} \right]\\&= \frac{{22}}{7}\left[ {14cm \times 15cm + 16c{m^2}} \right]\\&= \frac{{22}}{7}\left[ {210c{m^2} + 16c{m^2}} \right]\\&= \frac{{22}}{7} \times 226c{m^2}\\&= \frac{{4972}}{7}c{m^2}\\&= 710\frac{2}{7}c{m^2}\end{align}\]
Hence, \(\begin{align} 710\frac{2}{7}c{m^2}\end{align} \) of material is used for making the fez.
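For completeness, a quick numerical check of the arithmetic above (a tiny sketch using \(\pi \approx 22/7\) as in the solution):

```python
# Area of material = CSA of the frustum + area of the upper circular end
r1, r2, l = 10, 4, 15            # cm
pi = 22 / 7                      # the approximation used in the solution
area = pi * ((r1 + r2) * l + r2 ** 2)
print(area)                      # 710.2857... cm^2, i.e. 4972/7 = 710 2/7
```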
|
Consider a non-constant polynomial $f:\Bbb R^n\to \Bbb R$. Let $S_f\subset \Bbb R^n$ be its solution set, i.e. the set of $x\in \Bbb R^n$ such that $f(x)=0$. What's the slickest proof that $S_f$ can't contain any ball, or equivalently, that $S_f^c$ is open and dense?
My try:
let $r_0$ be such that $f(r_0)=0$. Then consider the new polynomial $g(x)=f(x+r_0)$. If $B_\epsilon(r_0)\subset S_f$, then $B_\epsilon(0)\subset S_g$. Also, $g$ is non constant.
Clearly $g$ doesn't have a constant term. Now, suppose $g$ is homogeneous, by which I mean all of $g$'s terms are of the same order $k$. Then, for any $y\in\Bbb R^n$, find $L>0$ so that $\|y\|/L<\epsilon$; then $g(y/L)=0$, so $g(y)=g(y/L)L^k=0$, contradicting the fact that $g$ is non-constant.
If $g$ is not homogeneous, we consider the following two cases:
A) in each term in $g$, $x_1,\cdots,x_n$ all appear. This indicates all terms have a common factor in the form $c(x)=(x_1\cdots x_n)^p$ such that $g(x)=c(x)h(x)$ where in at least one term in $h(x)$, not all $x_1,\cdots,x_n$ are present. (A way to do this is keep factorising out $(x_1\cdots x_n)$ till you can do it no more.) As such, $S_g^c=S_h^c\cap S_c^c$, but it's easy to show $S_c^c$ is open and dense, so by Baire's Theorem it suffices to show $S_h^c$ is open and dense, or $S_h$ doesn't contain any ball. So we just repeat the initial discussion i.e. consider a new polynomial $h(x+r_1)$ (we can't directly jump to the next case since $h$ may contain the constant term). Note that, there may be several cycles, but ultimately we can only visit case A) finitely many times, since each time we visit case A) we come to consider a polynomial of strictly lower order than previously. Hence, we must eventually arrive at the next case;
B) there's at least one term in $g$ in which there's at least one variable, say $x_k$, which is absent. Thus, we fix $x_k=0$, and $g$ becomes a non-zero polynomial in fewer variables.
I find the struck-out arguments futile.
In fact I also tried to generalise the linear scaling argument to non-homogeneous cases, i.e. assigning different scaling rates to different variables, but was quickly disappointed to find it wouldn't work for even the univariate case, say $x^3+x^2+x$.
Yet another attempt (the struck-out part) I made was to try to select a suitable variable and fix all the other variables equal to $0$ to obtain a univariate non-zero polynomial (pretty much like choosing a suitable coordinate axis to move around on). Since this would make the new polynomial vanish on a whole short segment about zero on the real line, we get a contradiction. However, this was unable to deal with cases like $x_1x_2+x_2x_3+x_1x_3$ where no such suitable axis exists.
I really need some enlightenment now. Thanks. PS: I have absolutely no background in algebraic geometry.
|
I would like to ask if there is a way to find the expected value and the variance of the following process $$ dv_t=(a-be^{\alpha v_t})dt+\sigma dW_t, \quad v_{t=0}=v_0 $$ where $a\in (-\infty,+\infty), b>0, \sigma>0, \alpha>0$.
Thank you for your time. I truly appreciate it.
Added (after the question of The Bridge)
Let $V_t=e^{2v_t}$, then applying Ito's lemma we can show that
$$ V_t=\frac{V_0\exp(2at +2\sigma W_t)}{\left(1+\alpha b V_0^{\alpha/2}\int_0^t exp(\alpha(as+\sigma W_s))ds\right)^{\alpha/2}} $$ From here, we can see that the above SDE has a unique solution.
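For concreteness, a simple Euler–Maruyama sketch can at least estimate $\mathbb{E}[v_T]$ and $\mathrm{Var}[v_T]$ by Monte Carlo for a given choice of parameters (the values below are arbitrary placeholders, not part of the question):

```python
# Euler-Maruyama Monte Carlo estimate of E[v_T] and Var[v_T].
import numpy as np

def simulate(a, b, alpha, sigma, v0, T=1.0, n_steps=1000, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        v += (a - b * np.exp(alpha * v)) * dt + sigma * dW
    return v.mean(), v.var()

print(simulate(a=0.1, b=0.2, alpha=1.0, sigma=0.3, v0=0.0, T=1.0))
```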
|
This is an interactive problem.
Khue and Hanh are obsessed with doughnuts although they are very much overweight. Over the last week, they worked very hard setting up the problems for ICPC Hanoi. To prepare for a long night of coding, they bought a super-large ring-shaped doughnut.
The perimeter of this doughnut is $1$ meter. They like things to be super precise so they divided the doughnut into ${10}^{9}$ parts. The outermost length of each of them is $1$ nanometer. They are numbered from $0$ to ${10}^{9}-1$.
This doughnut was topped with exactly $N$ ($N$ is even) sprinkles. No two sprinkles are in the same part. Khue and Hanh want to divide the doughnut so that:
Khue has $5 \cdot {10}^{8}$ consecutive parts. These parts have exactly $N/2$ sprinkles.
Hanh has $5 \cdot {10}^{8}$ consecutive parts. These parts have exactly $N/2$ sprinkles.
In order to divide, they use an image processing program to count the number of sprinkles in consecutive parts.
Your task is to divide the doughnut for Khue and Hanh without using the image processing program too many times. If there are multiple ways to divide the doughnut that satisfy the above conditions, you can answer with any of them.
First your program reads the total number of sprinkles $N$ $(1 \leq N \leq 100\, 000)$ from the standard input. It is guaranteed that $N$ is even. Then the following process repeats:
Your program writes to the standard output one of the following:
QUERY u v $(0 \leq u,v < {10}^{9})$ — count the number of sprinkles from part $u$ to part $v$. Please note that: $u>v$ means count the number of sprinkles from part $u$ to ${10}^{9}-1$ and $0$ to $v$.
YES x $(0 \leq x < {10}^{9})$ — You answer Khue should get parts from $x$ to $(x + 5 \cdot {10}^{8} - 1) \bmod {10}^{9}$.
NO — You answer there is no division that satisfies Khue and Hanh.
If your program asks a query, an integer $S$ will be available in the standard input. Your program should then read it.
Otherwise, your answer will be checked.
Your program should terminate immediately after this.
You are allowed to interact at most $30 + \left\lfloor { \log _{2}{\sqrt {N}} } \right\rfloor $ times, including giving an answer.
Interpretation
- There are 4 sprinkles in this doughnut
- There are 4 sprinkles in these parts
- There are 4 sprinkles in parts 0 to 10
- You want to count sprinkles in parts 0 to 5
- There are 3 sprinkles in parts 0 to 5
- You want to count sprinkles in parts 3 to 5
- There is 1 sprinkle in parts 3 to 5
When you write the solution for an interactive problem, it is important to keep in mind that if you output some data, this data may first be placed into an internal buffer and not be directly transferred to the interactor. To avoid this situation, you have to use a special ‘flush’ operation each time you output some data. These ‘flush’ operations exist in the standard libraries of almost all languages. For example, in C++ you may use fflush(stdout) or cout << flush (depending on whether you use scanf/printf or cout for output). In Java you can use the ‘flush’ method of the output stream, for example System.out.flush(). In Python you can use sys.stdout.flush().
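For illustration only, a minimal Python I/O skeleton following this protocol could look like the sketch below; the single query and the fallback answer are placeholders, not a correct query strategy.

```python
import sys

def ask(u, v):
    print(f"QUERY {u} {v}", flush=True)        # flush after every output line
    return int(sys.stdin.readline())

n = int(sys.stdin.readline())
HALF = 5 * 10**8
s0 = ask(0, HALF - 1)                          # sprinkles in parts 0 .. 5*10^8 - 1
if s0 == n // 2:
    print("YES 0", flush=True)                 # Khue gets parts 0 .. 5*10^8 - 1
else:
    print("NO", flush=True)                    # placeholder only; a real solution
                                               # would keep searching for a valid cut
```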
|
The von Mises yield function is given by:
$ \Phi(\sigma_1,\sigma_2)=\sqrt{\sigma_{1}^{2} +\sigma_{2}^{2}-\sigma_{1} \sigma_{2}} - \sigma_y $
where $\sigma_1$ and $\sigma_2$ are the principal stresses and $\sigma_y$ is the yield stress. If $\Phi(\sigma_1, \sigma_2)=0$, $\sigma_y =200$ and using
ContourPlot:
contourplot = ContourPlot[Sqrt[sig1^2 + sig2^2 - sig1 sig2] - 200 == 0, {sig1, -300, 300}, {sig2, -300, 300}]
I have:
I need to find the parametric version of $\Phi(\sigma_1,\sigma_2)$, but I'm stuck.
Still now based on this question How to plot a rotated ellipse using
ParametricPlot?, I can plot a rotated parametrized ellipse (red and dashed line) obtained from this code:
a = 300; b = a/2;gamma = Pi/4; pmplot = ParametricPlot[{(a Cos[theta] Cos[gamma] - b Sin[theta] Sin[gamma]), a Cos[theta] Sin[gamma] + b Sin[theta] Cos[gamma]}, {theta, 0 ,2 Pi}, PlotStyle -> {Thick, Red, Dashed}]; Show[contourplot, pmplot]
The problem is to find the values of
a and
b to fit the parametric equation with the von Mises ellipse.
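For reference, a short hand derivation (a sketch, using the same rotation angle $\gamma=\pi/4$ as in the code above) suggests the required axis lengths: substituting the rotated coordinates
$$u=\frac{\sigma_1+\sigma_2}{\sqrt2},\qquad v=\frac{\sigma_1-\sigma_2}{\sqrt2}\quad\Longrightarrow\quad \sigma_1^2+\sigma_2^2-\sigma_1\sigma_2=\tfrac12 u^2+\tfrac32 v^2,$$
so $\Phi=0$ is the ellipse $\frac{u^2}{2\sigma_y^2}+\frac{v^2}{(2/3)\,\sigma_y^2}=1$, i.e. $a=\sqrt{2}\,\sigma_y\approx 282.8$ and $b=\sqrt{2/3}\,\sigma_y\approx 163.3$ for $\sigma_y=200$ (rather than $a=300$, $b=150$).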
|
I'm just starting to learn general relativity (GR), and I'm a beginner, but I came across this situation, which is unclear to me: the trajectory of a charged particle in GR is given by the equation:
$$\dot{u}^{\mu} + \Gamma^{\mu}_{\alpha \beta} u^{\alpha} u^{\beta} = \frac{q}{m} F^{\mu}_{\; \nu} \, u^{\nu}$$
So, if I have a neutral particle $q=0$ the equation reduces to the geodesic equation for a free particle, but because of the Einstein-Maxwell equations:
$$R_{\mu \nu} - \frac{1}{2} R g_{\mu \nu} = T^{EM}_{\mu \nu}$$
the EM stress-energy tensor determines the form of the metric, and consequently the Christoffel symbols that appear in the geodesic equation for the neutral particle. So would the trajectory of this neutral particle in an EM field be different from the case of a spacetime with a vanishing EM field?
|
How to Calculate Bending Stress in Beams
In this tutorial we will look at how to calculate the bending stress of a beam, using a bending stress formula that relates the longitudinal stress distribution in the beam to the internal bending moment acting on its cross section. We assume that the beam's material is linear-elastic (i.e. Hooke's Law is applicable). Since beam bending is often the governing result in beam design, it is important to understand how to calculate bending stress.

1. Calculating by Hand
Let’s look at an example. Consider the I-beam shown below:
At some distance along the beam’s length (the x-axis) it is experiencing an internal bending moment (M) which you would normally find using a bending moment diagram. The general formula for bending or normal stress on the section is given by:
[math]
\sigma_{bend} = \dfrac{My}{I} \text{ where:} \\\\ \begin{align} M &= \text{the internal bending moment about the section’s neutral axis} \\ y &= \text{the perpendicular distance from the neutral axis to a point on the section} \\ I &= \text{the moment of inertia of the section area about the neutral axis} \end{align} [/math]
Given a particular beam section, the bending stress is largest at the point farthest from the neutral axis (largest y). Thus, the maximum bending stress will occur either at the TOP or the BOTTOM of the beam section, depending on which distance is larger:
[math]
\sigma_{bend,max} = \dfrac{Mc}{I} \text{ where:} \\\\ c = \text{the perpendicular distance from the neutral axis to the farthest point on the section} [/math]
Let's consider the real example of our I-beam shown above. In our previous moment of inertia tutorial we already found the moment of inertia about the neutral axis to be I = 4.74×10^8 mm^4. Additionally, in the centroid tutorial we found the centroid, and hence the location of the neutral axis, to be 216.29 mm from the bottom of the section. This is shown below:
Obviously, it is very common to require the MAXIMUM bending stress that the section experiences. For example, say we know from our bending moment diagram that the beam experiences a maximum bending moment of 50 kN-m or 50,000 Nm (converting bending moment units). Then we need to find whether the top or the bottom of the section is furthest from the neutral axis. Clearly the bottom of the section is further away with a distance c = 216.29 mm. We now have enough information to find the maximum stress using the bending stress formula above:
[math]
\sigma_{bend,max} = \dfrac{Mc}{I} \text{ where:} \\\\ \begin{align} M &= 50 \text{ kNm} = 50,000 \text{ Nm} \\ c &= 216.29\text{ mm} = 0.21629\text{ m} \text{ (BOTTOM)} \\ I &= 4.74\times10^{8} \text{ mm}^{4} = 4.74\times10^{-4} \text{ m}^{4} \\\\ \end{align} [/math]
[math]
\begin{align} \therefore \sigma_{bend,max} &= \dfrac{(50,000 \text{ Nm})(0.21629\text{ m})}{4.74\times10^{-4} \text{ m}^{4}} \\\\ \sigma_{bend,max} &= 22,815,400 \text{ N/m}^{2} \text{ or Pa} \\ \sigma_{bend,max} &= 22.815 \text { MPa} \\ \end{align} [/math]
Similarly we could find the bending stress at the top of the section, as we know that it is y = 159.71 mm from the neutral axis (NA):
[math]
\begin{align} \sigma_{bend,top} &= \dfrac{My}{I}\\\\ \sigma_{bend,top} &= \dfrac{(50,000 \text{ Nm})(0.15971\text{ m})}{4.74\times10^{-4} \text{ m}^{4}} \\\\ \sigma_{bend,top} &= 16,847,046 \text{ N/m}^{2} \text{ or Pa} \\ \sigma_{bend,top} &= 16.847 \text { MPa} \\ \end{align} [/math]
The last thing to worry about is whether the stress is causing compression or tension of the section’s fibers. If the beam is sagging like a “U” then the top fibers are in compression (negative stress) while the bottom fibers are in tension (positive stress). If the beam is hogging like an upside-down “U” then it is the other way around: the bottom fibers are in compression and the top fibers are in tension.
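The hand calculation above is easy to script. Here is a small Python sketch (not part of the original tutorial) that reproduces the two numbers using sigma = M*y/I in SI units:

# Minimal sketch reproducing the worked example above (values assumed from this tutorial).
M = 50_000        # internal bending moment, N*m
I = 4.74e-4       # second moment of area about the neutral axis, m^4

def bending_stress(M, y, I):
    """Bending (normal) stress sigma = M*y/I, in Pa."""
    return M * y / I

print(bending_stress(M, 0.21629, I) / 1e6)   # bottom fibre: about 22.8 MPa
print(bending_stress(M, 0.15971, I) / 1e6)   # top fibre: about 16.8 MPa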
2. How to Calculate Stress using SkyCiv Beam
Of course you don’t need to do these calculations by hand, because you can use the SkyCiv Beam bending stress calculator to find shear and bending stress in a beam! Simply start by modeling the beam with its supports, and apply loads. Once you hit solve, the software will show the maximum stresses from this bending stress calculator. The image below shows an example of an I-beam experiencing bending stress:
|
Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018
date | speaker | title | host(s)
January 29 (Monday) | Li Chao (Columbia) | Elliptic curves and Goldfeld's conjecture | Jordan Ellenberg
February 2 (Room: 911) | Thomas Fai (Harvard) | The Lubricated Immersed Boundary Method | Spagnolie, Smith
February 5 (Monday, Room: 911) | Alex Lubotzky (Hebrew University) | High dimensional expanders: From Ramanujan graphs to Ramanujan complexes | Ellenberg, Gurevitch
February 6 (Tuesday 2 pm, Room 911) | Alex Lubotzky (Hebrew University) | Groups' approximation, stability and high dimensional expanders | Ellenberg, Gurevitch
February 9 | Wes Pegden (CMU) | The fractal nature of the Abelian Sandpile | Roch
March 2 | Aaron Bertram (University of Utah) | Stability in Algebraic Geometry | Caldararu
March 16 (Room: 911) | Anne Gelb (Dartmouth) | Reducing the effects of bad data measurements using variance based weighted joint sparsity | WIMAW
April 4 (Wednesday) | John Baez (UC Riverside) | TBA | Craciun
April 6 | Edray Goins (Purdue) | Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups | Melanie
April 13 | Jill Pipher (Brown) | TBA | WIMAW
April 16 (Monday) | Christine Berkesch Zamaere (University of Minnesota) | TBA | Erman, Sam
April 25 (Wednesday) | Hitoshi Ishii (Waseda University) | Wasow lecture: TBA | Tran

Spring Abstracts

January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last five decades and, more recently, also in pure math. The first explicit construction of bounded-degree expanding graphs was given by Margulis in the early 70's. In the mid 80's, Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high dimensional theory of expanders has been emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded-degree complexes of dimension d>1.
This question was answered recently in the affirmative (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeletons of (d+1)-dimensional Ramanujan complexes provide such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm.
The strategy is via the notion of "stability": a certain higher-dimensional cohomology vanishing phenomenon is proven to imply stability, and, using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
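As a toy illustration of the toppling rule mentioned above (my own sketch in Python, not code from the talk), stabilizing a chip configuration on a finite grid, with chips falling off the boundary, can be written as follows:

import numpy as np

def stabilize(chips):
    # Repeatedly topple every site holding at least 4 chips; one chip goes to each
    # lattice neighbour, and chips pushed past the edge of the finite grid are lost.
    chips = chips.copy()
    while True:
        unstable = np.argwhere(chips >= 4)
        if len(unstable) == 0:
            return chips
        for i, j in unstable:
            chips[i, j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < chips.shape[0] and 0 <= nj < chips.shape[1]:
                    chips[ni, nj] += 1

# Example: a large pile at the centre spreads out into the fractal-like pattern the talk discusses.
grid = np.zeros((41, 41), dtype=int)
grid[20, 20] = 1000
final = stabilize(grid)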
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
March 16 Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
|
We derive a lower bound on the secrecy capacity of a compound wiretap channel with channel state information at the transmitter which matches the general upper bound on the secrecy capacity of general compound wiretap channels given by Liang et al. [1], thus establishing a full coding theorem in this case. We achieve this with a stronger secrecy criterion and the maximum error probability criterion, and with a decoder that is robust against the effect of randomization in the encoding. This relieves us from the need of decoding the randomization parameter, which is in general impossible within this model. Moreover, we prove a lower bound on the secrecy capacity of a compound wiretap channel without channel state information and derive a multiletter expression for the capacity in this communication scenario.
We consider a family of energy-constrained diamond norms on the set of Hermitian- preserving linear maps (superoperators) between Banach spaces of trace class operators. We prove that any norm from this family generates strong (pointwise) convergence on the set of all quantum channels (which is more adequate for describing variations of infinite-dimensional channels than the diamond norm topology). We obtain continuity bounds for information characteristics (in particular, classical capacities) of energy-constrained infinite-dimensional quantum channels (as functions of a channel) with respect to the energy-constrained diamond norms, which imply uniform continuity of these characteristics with respect to the strong convergence topology.
The impact of diversity on reliable communication over arbitrarily varying channels (AVC) is investigated as follows. First, the concept of an identical state-constrained jammer is motivated. Second, it is proved that symmetrizability of binary symmetric AVCs (AVBSC) caused by identical state-constrained jamming is circumvented when communication takes place over at least three orthogonal channels. Third, it is proved that the deterministic capacity of the identical state-constrained AVBSC is continuous and shows super-activation. This effect was hitherto demonstrated only for quantum communication and for classical communication under secrecy constraints.
We use Hamilton equations to identify most likely scenarios of long queues being formed in ergodic Jackson networks. Since the associated Hamiltonians are discontinuous and piecewise Lipschitz, one has to invoke methods of nonsmooth analysis. Time reversal of the Hamilton equations yields fluid equations for the dual network. Accordingly, the optimal trajectories are time reversals of the fluid trajectories of the dual network. Those trajectories are shown to belong to domains that satisfy a certain condition of being “essential.” As an illustration, we consider a two-station Jackson network. In addition, we prove certain properties of substochastic matrices, which may be of interest in their own right.
We introduce a new wide class of error-correcting codes, called non-split toric codes. These codes are a natural generalization of toric codes where non-split algebraic tori are taken instead of usual (i.e., split) ones. The main advantage of the new codes is their cyclicity; hence, they can possibly be decoded quite fast. Many classical codes, such as (doubly-extended) Reed-Solomon and (projective) Reed-Muller codes, are contained (up to equivalence) in the new class. Our codes are explicitly described in terms of algebraic and toric geometries over finite fields; therefore, they can easily be constructed in practice. Finally, we obtain new cyclic reversible codes, namely non-split toric codes on the del Pezzo surface of degree 6 and Picard number 1. We also compute their parameters, which prove to attain current lower bounds at least for small finite fields.
The operation of Minkowski addition of geometric figures has a discrete analog, addition of subsets of a Boolean cube viewed as a vector space over the two-element field. Subsets of the Boolean cube (or multivariable Boolean functions) form a monoid with respect to this operation. This monoid is of interest in classical discrete analysis as well as in a number of problems related to information theory. We consider several complexity aspects of this monoid, namely structural, algorithmic, and algebraic.
We prove equivalence of using the modulus metric and Euclidean metric in solving the soft decoding problem for a memoryless discrete channel with binary input and Q-ary output. For such a channel, we give an example of a construction of binary codes correcting t binary errors in the Hamming metric. The constructed codes correct errors at the output of a demodulator with Q quantization errors as (t + 1)(Q − 1) − 1 errors in the modulus metric. The obtained codes are shown to have polynomial decoding complexity.
We substantially improve a presently known explicit exponentially growing lower bound on the chromatic number of a Euclidean space with forbidden equilateral triangle. Furthermore, we improve an exponentially growing lower bound on the chromatic number of distance graphs with large girth. These refinements are obtained by improving known upper bounds on the product of cardinalities of two families of homogeneous subsets with one forbidden cross-intersection.
This work is a survey on completely regular codes. Known properties, relations with other combinatorial structures, and construction methods are considered. The existence problem is also discussed, and known results for some particular cases are established. In addition, we present several new results on completely regular codes with covering radius ρ = 2 and on extended completely regular codes.
This paper considers a multimessage network where each node may send a message to any other node in the network. Under the discrete memoryless model, we prove the strong converse theorem for any network whose cut-set bound is tight, i.e., achievable. Our result implies that for any fixed rate vector that resides outside the capacity region, the average error probability of any sequence of length-n codes operated at the rate vector must tend to 1 as n approaches infinity. The proof is based on the method of types and is inspired by the work of Csiszár and Körner in 1982 which fully characterized the reliability function of any discrete memoryless channel with feedback for rates above capacity. In addition, we generalize the strong converse theorem to the Gaussian model where each node is subject to an almost-sure power constraint. Important consequences of our results are new strong converses for the Gaussian multiple access channel with feedback and the following relay channels under both models: the degraded relay channel (RC), the RC with orthogonal sender components, and the general RC with feedback.
An exact expression for the probability of inversion of a large spin is established in the form of an asymptotic expansion in the series of Bessel functions with orders belonging to an arithmetic progression. Based on the new asymptotic expansion, a formula for the inversion time of the spin is derived.
We consider the problem of determining extreme values of the Rényi entropy for a discrete random variable provided that the value of the α-coupling for this random variable and another one with a given probability distribution is fixed.
We develop refinements of the Levenshtein bound in q-ary Hamming spaces by taking into account the discrete nature of the distances versus the continuous behavior of certain parameters used by Levenshtein. We investigate the first relevant cases and present new bounds. In particular, we derive generalizations and q-ary analogs of the McEliece bound. Furthermore, we provide evidence that our approach is as good as complete linear programming and discuss how much faster our calculations are. Finally, we present a table with parameters of codes which, if they exist, would attain our bounds.
We consider the problem of estimating the noise level $\sigma^2$ in a Gaussian linear model $Y = X\beta + \sigma\xi$, where $\xi \in \mathbb{R}^n$ is a standard discrete white Gaussian noise and $\beta \in \mathbb{R}^p$ an unknown nuisance vector. It is assumed that $X$ is a known ill-conditioned $n \times p$ matrix with $n \geq p$ and with large dimension $p$. In this situation the vector $\beta$ is estimated with the help of spectral regularization of the maximum likelihood estimate, and the noise level estimate is computed with the help of adaptive (i.e., data-driven) normalization of the quadratic prediction error. For this estimate, we compute its concentration rate around the pseudo-estimate $\|Y - X\beta\|^2/n$.
We study chromatic numbers of spaces $\mathbb{R}_p^n=(\mathbb{R}^n, \ell_p)$ with forbidden monochromatic sets. For some sets, we obtain for the first time explicit exponentially growing lower bounds for the corresponding chromatic numbers; for some others, we substantially improve previously known bounds.
We introduce a construction of a set of code sequences $\{C_n^{(m)} : n \geq 1, m \geq 1\}$ with memory order $m$ and code length $N(n)$. $\{C_n^{(m)}\}$ is a generalization of polar codes presented by Arıkan in [1], where the encoder mapping with length $N(n)$ is obtained recursively from the encoder mappings with lengths $N(n-1)$ and $N(n-m)$, and $\{C_n^{(m)}\}$ coincides with the original polar codes when $m = 1$. We show that $\{C_n^{(m)}\}$ achieves the symmetric capacity $I(W)$ of an arbitrary binary-input, discrete-output memoryless channel $W$ for any fixed $m$. We also obtain an upper bound on the probability of block-decoding error $P_e$ of $\{C_n^{(m)}\}$ and show that $P_e = O(2^{-N^\beta})$ is achievable for $\beta < 1/[1+m(\phi - 1)]$, where $\phi \in (1, 2]$ is the largest real root of the polynomial $F(m, \rho) = \rho^m - \rho^{m-1} - 1$. The encoding and decoding complexities of $\{C_n^{(m)}\}$ decrease with increasing $m$, which proves the existence of new polar coding schemes that have lower complexity than Arıkan's construction.
We show that converting an n-digit number from binary to Fibonacci representation and back can be realized by Boolean circuits of complexity $O(M(n) \log n)$, where $M(n)$ is the complexity of integer multiplication. For the more general case of r-Fibonacci representations, the obtained complexity estimates are of the form $2^{O(\sqrt{\log n})} n$.
We propose a new version of the proof of Good's theorem stating that the Kronecker power of an arbitrary square matrix can be represented as a matrix power of a sparse matrix Z. We propose new variants of sparse matrices Z. We observe that for another version of the tensor power of a matrix, the b-power, there exists an analog of another Good's expansion but no analog of this theorem.
We consider recurrence sequences over the set of integers with generating functions being arbitrary superpositions of polynomial functions and the sg function, called polynomial recurrence sequences. We define polynomial-register (PR) machines, close to random-access machines. We prove that computations on PR machines can be modeled by polynomial recurrence sequences. On the other hand, computation of elements of a polynomial recurrence sequence can be implemented using a suitable PR machine.
The common wisdom is that the capacity of parallel channels is usually additive. This was also conjectured by Shannon for the zero-error capacity function, which was later disproved by constructing explicit counterexamples demonstrating the zero-error capacity to be super-additive. Despite these explicit examples for the zero-error capacity, there is surprisingly little known for nontrivial channels. This paper addresses this question for the arbitrarily varying channel (AVC) under list decoding by developing a complete theory. The list capacity function is studied and shown to be discontinuous, and the corresponding discontinuity points are characterized for all possible list sizes. For parallel AVCs it is then shown that the list capacity is super-additive, implying that joint encoding and decoding for two parallel AVCs can yield a larger list capacity than independent processing of both channels. This discrepancy is shown to be arbitrarily large. Furthermore, the developed theory is applied to the arbitrarily varying wiretap channel to address the scenario of secure communication over AVCs.
|
Geometry and Topology Seminar

Spring 2017
date | speaker | title | host(s)
Jan 20 | Carmen Rovi (University of Indiana Bloomington) | "The mod 8 signature of a fiber bundle" | Maxim
Jan 27 | | |
Feb 3 | Rafael Montezuma (University of Chicago) | "Metrics of positive scalar curvature and unbounded min-max widths" | Lu Wang
Feb 10 | | |
Feb 17 | Yair Hartman (Northwestern University) | "Intersectional Invariant Random Subgroups and Furstenberg Entropy." | Dymarz
Feb 24 | Lucas Ambrozio (University of Chicago) | "TBA" | Lu Wang
March 3 | Mark Powell (Université du Québec à Montréal) | "TBA" | Kjuchukova
March 10 | Autumn Kent (Wisconsin) | Analytic functions from hyperbolic manifolds | local
March 17 | | |
March 24 | Spring Break | |
March 31 | Xiangwen Zhang (University of California-Irvine) | "TBA" | Lu Wang
April 7 | reserved | | Lu Wang
April 14 | Xianghong Gong (Wisconsin) | "TBA" | local
April 21 | Joseph Maher (CUNY) | "TBA" | Dymarz
April 28 | Bena Tshishiku (Harvard) | "TBA" | Dymarz

Fall Abstracts

Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).
Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surfaces, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \C^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is a joint work with Jeff Viaclovsky.
Sean Howe Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}!
Nan Li Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.
Peyman Morteza
We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed, and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is a joint work with Jeff Viaclovsky.
Caglar Uyanik Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.
Bing Wang The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.
Ben Weinkove Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.
Jonathan Zhu Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.
Yu Zeng Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in 1950’s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits short time solution for any continuous initial Kaehler metric. This is a joint work with Weiyong He.
Spring Abstracts

Lucas Ambrozio
"TBA"
Paul Feehan
"TBA"
Rafael Montezuma
"Metrics of positive scalar curvature and unbounded min-max widths"
In this talk, I will construct a sequence of Riemannian metrics on the three-dimensional sphere with scalar curvature greater than or equal to 6, and arbitrarily large min-max widths. The search for such metrics is motivated by a rigidity result of min-max minimal spheres in three-manifolds obtained by Marques and Neves.
Carmen Rovi The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.
Yair Hartman
"Intersectional Invariant Random Subgroups and Furstenberg Entropy."
In this talk I'll present a joint work with Ariel Yadin, in which we solve the Furstenberg Entropy Realization Problem for finitely supported random walks (finite range jumps) on free groups and lamplighter groups. This generalizes a previous result of Bowen. The proof consists of several reductions which have geometric and probabilistic flavors of independent interests. All notions will be explained in the talk, no prior knowledge of Invariant Random Subgroups or Furstenberg Entropy is assumed.
Bena Tshishiku
"TBA"
Autumn Kent Analytic functions from hyperbolic manifolds
At the heart of Thurston's proof of Geometrization for Haken manifolds is a family of analytic functions between Teichmuller spaces called "skinning maps." These maps carry geometric information about their associated hyperbolic manifolds, and I'll discuss what is presently known about their behavior. The ideas involved form a mix of geometry, algebra, and analysis.
Xiangwen Zhang
"TBA"
Archive of past Geometry seminars
2015-2016: Geometry_and_Topology_Seminar_2015-2016
2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology
|
Last edited: March 22nd 2018
This notebook is an introduction to a set of partial differential equations which are widely used to model aerodynamics, atmosphere and climate, explosive detonations and even astrophysics. It only gives a small taste of the world of hyperbolic PDEs, Riemann problems and computational fluid dynamics, and the interested reader is encouraged to investigate the field further [1].
The Euler equations govern adiabatic and inviscid flow of a fluid. In the Froude limit (no external body forces) in one dimension, with density $\rho$, velocity $u$, total energy per unit volume $E$ and pressure $p$, they are given in dimensionless form as\begin{align*} \frac{\partial \rho}{\partial t} + \frac{\partial (\rho u)}{\partial x} &= 0, \\ \frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^2 + p)}{\partial x} &= 0, \\ \frac{\partial E}{\partial t} + \frac{\partial \left[u(E + p)\right]}{\partial x} &= 0. \end{align*}
These three equations describe conservation of mass, momentum and energy, respectively. In order to solve them numerically, we start by importing NumPy and setting up the plotting environment.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
newparams = {'font.size': 14, 'figure.figsize': (14, 7), 'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral', 'lines.linewidth': 2}
plt.rcParams.update(newparams)
For an ideal gas, the total energy is the sum of the kinetic and internal contributions, i.e.\begin{equation*} E = \frac{1}{2} \rho u^2 + \frac{p}{\gamma - 1}, \end{equation*}
where $\gamma$ is the ratio of specific heats for the material in our system. We shall be considering air, which has $\gamma = 1.4$. Note that $e = \frac{p}{(\gamma - 1)\rho}$ is the specific internal energy for ideal gases. Conversions between energy and pressure will be useful to us later, so we define appropriate functions:
def energy(rho, u, p):
    # Total energy per unit volume; gamma = 1.4 for air, so gamma - 1 = 0.4.
    return 0.5 * rho * u ** 2 + p / 0.4

def pressure(rho, u, E):
    return 0.4 * (E - 0.5 * rho * u ** 2)
Conveniently, this first-order hyperbolic system of PDEs can be written as a set of conservation laws, i.e.\begin{equation*} \partial_t {\bf Q} + \partial_x {\bf F(Q)} = {\bf 0}\,. \end{equation*}
Here, the vector of conserved quantities ${\bf Q}$ and their fluxes ${\bf F(Q)}$ are given by\begin{equation*} {\bf Q} = \begin{bmatrix} \rho \\ \rho u \\ E \end{bmatrix} \,,\quad {\bf F} = \begin{bmatrix} \rho u \\ \rho u^2 + p \\ u(E+p) \end{bmatrix} \,. \end{equation*}
Given the state of the system (i.e. the vector of conserved quantities), we compute the flux as:
def flux(Q):
    rho, u, E = Q[0], Q[1] / Q[0], Q[2]
    p = pressure(rho, u, E)
    F = np.empty_like(Q)
    F[0] = rho * u
    F[1] = rho * u ** 2 + p
    F[2] = u * (E + p)
    return F
Consider a spatial domain $[x_L, x_R]$ and two points in time $t_2 > t_1$. By integrating the Euler equations in differential form in space and time, we acquire the integral form,\begin{equation*} \int_{x_L}^{x_R} {\bf Q}(x, t_2) {\rm d} x = \int_{x_L}^{x_R} {\bf Q}(x, t_1) {\rm d} x + \int_{t_1}^{t_2} {\bf F}({\bf Q}(x_L, t)) {\rm d} t - \int_{t_1}^{t_2} {\bf F}({\bf Q}(x_R, t)) {\rm d} t . \end{equation*}
This relation is the basis for our spatial and temporal discretisations. For simplicity, we take our computational domain to be $[0, 1]$, and divide it into $N$ equal cells of width $\Delta x = 1/N$:
N = 100
dx = 1 / N
x = np.linspace(-0.5 * dx, 1 + 0.5 * dx, N + 2)
Note that we have added one extra cell to each side of the domain. These are so-called ghost cells which allow us to apply appropriate boundary conditions (more on that shortly). It is also necessary to discretise the state vector $\bf Q$ and intercell fluxes $\bf F$. We denote by ${\bf Q}_i^n$ the spatial average within the cell $[x_{i-1/2}, x_{i+1/2}]$ at time $t_n$, i.e.\begin{equation*} {\bf Q}_i^n = \frac{1}{\Delta x} \int_{x_{i-1/2}}^{x_{i+1/2}} {\bf Q}(x, t_n) {\rm d} x , \end{equation*}
and initialise a numpy array to store the values within each cell:
Q = np.empty((3, len(x)))
Similarly, the temporal average of the flux across the cell boundary at $x_{i+1/2}$ is denoted ${\bf F}_{i+1/2}^n$:\begin{equation*} {\bf F}_{i+1/2}^n = \frac{1}{\Delta t^n} \int_{t_n}^{t_{n+1}} {\bf F}({\bf Q}(x_{i+1/2}, t)) {\rm d} t . \end{equation*}
Inserting these discretisations into the integral form of the Euler equations, we get a conservative update formula for each computational cell which is $exact$:\begin{equation*} {\bf Q}_i^{n+1} = {\bf Q}_i^n + \frac{\Delta t^n}{\Delta x} \left( {\bf F}_{i-\frac{1}{2}}^n - {\bf F}_{i+\frac{1}{2}}^n \right) \end{equation*}
This formula is our method for advancing the system forwards in time, and the numerical approximations are solely in the evaluations of the intercell fluxes ${\bf F}_{i \pm 1/2}^n$. We cannot, however, choose the time step $\Delta t^n$ as large as we want, due to restrictions on stability. By choosing a Courant-Friedrichs-Lewy (CFL) coefficient $c \leq 1$, the time step can safely be set to\begin{equation*} \Delta t^n = \frac{c \Delta x}{S_{\rm max}^n}, \end{equation*}
where $S_{\rm max}^n$ is a measure of the maximum wave speed present in the system. We use a common approximation which finds the cell with highest sum of material and sound speeds, i.e.\begin{equation*} S_{\rm max}^n = \max_i ( |u_i^n| + a_i^n ) , \end{equation*}
where the speed of sound for ideal gases is given by\begin{equation*} a = \sqrt{\frac{\gamma p}{\rho}} . \end{equation*}
def timestep(Q, c, dx):
    rho, u, E = Q[0], Q[1] / Q[0], Q[2]
    a = np.sqrt(1.4 * pressure(rho, u, E) / rho)
    S_max = np.max(np.abs(u) + a)
    return c * dx / S_max
Many different procedures exist for approximating the intercell fluxes ${\bf F}_{i \pm 1/2}$. For simplicity, we implement the relatively straight-forward FIrst-ORder CEntred (FORCE) scheme. [2] Given the state of two neighbouring cells, the FORCE flux at the interface is computed as\begin{equation*} {\bf F}_{\rm FORCE} ({\bf Q}_L, {\bf Q}_R) =\frac{1}{2} \left( {\bf F}_0 + \frac{1}{2} ({\bf F}_L + {\bf F}_R) \right) + \frac{1}{4} \frac{\Delta x}{\Delta t^n} ({\bf Q}_L - {\bf Q}_R) . \end{equation*}
Here, ${\bf F}_K = {\bf F}({\bf Q}_K)$ and\begin{equation*} {\bf Q}_0 = \frac{1}{2} ({\bf Q}_L + {\bf Q}_R) + \frac{1}{2} \frac{\Delta t^n}{\Delta x} ({\bf F}_L- {\bf F}_R) , \end{equation*}
These equations correspond to Eqns. (16) and (19) in [2].
At this point, the reason for our previously introduced ghost cells becomes apparent. Since each cell interface requires information from both sides, the first and $N$-th cells in our domain lack information from the left and right, respectively. By initialising a ghost cell on each side and copying information from within the domain outwards at each time step, these fictional cells provide the necessary information for performing our computations.
Implementing the FORCE scheme in Python, we have
def force(Q):
    Q_L = Q[:, :-1]
    Q_R = Q[:, 1:]
    F_L = flux(Q_L)
    F_R = flux(Q_R)
    Q_0 = 0.5 * (Q_L + Q_R) + 0.5 * dt / dx * (F_L - F_R)
    F_0 = flux(Q_0)
    return 0.5 * (F_0 + 0.5 * (F_L + F_R)) + 0.25 * dx / dt * (Q_L - Q_R)
Given an initial condition ${\bf Q}(x, 0)$, we want to evolve the system in time to predict the state at some future time $t=T$. A popular test case for the Euler equations is Sod's shock tube. The test consists of a Riemann problem, which means that the PDE is coupled with a set of piecewise constant initial conditions separated by a single discontinuity. Intuitively, the test can be thought of as a tube with a membrane separating air of two different densities (and pressures). At $t=0$, the membrane is removed, which results in a rarefaction wave, a contact discontinuity and a shock wave. The initial conditions are given by\begin{equation*} {\bf Q}(x, 0) = \begin{cases} {\bf Q}_L \quad {\rm if} \quad x \leq 0.5 \\ {\bf Q}_R \quad {\rm if} \quad x > 0.5 \end{cases} , \quad \begin{pmatrix} \rho \\ u \\ p \end{pmatrix}_L = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} , \quad \begin{pmatrix} \rho \\ u \\ p \end{pmatrix}_R = \begin{pmatrix} 0.125 \\ 0 \\ 0.1 \end{pmatrix} . \end{equation*}
We therefore initialise the vector of conserved variables according to
# Density:
Q[0, x <= 0.5] = 1.0
Q[0, x > 0.5] = 0.125
# Momentum:
Q[1] = 0.0
# Energy:
Q[2, x <= 0.5] = energy(1.0, 0.0, 1.0)
Q[2, x > 0.5] = energy(0.125, 0.0, 0.1)
All prerequisites are now in place for us to evolve the shock tube system in time numerically. We choose a CFL coefficient of $c = 0.9$, and advance to a final time $T=0.25$:
c = 0.9
T = 0.25
t = 0
while t < T:
    # Compute time step size
    dt = timestep(Q, c, dx)
    if t + dt > T:
        # Make sure to end up at specified final time
        dt = T - t
    # Transmissive boundary conditions
    Q[:, 0] = Q[:, 1]      # Left boundary
    Q[:, N + 1] = Q[:, N]  # Right boundary
    # Flux computations using FORCE scheme
    F = force(Q)
    # Conservative update formula
    Q[:, 1:-1] += dt / dx * (F[:, :-1] - F[:, 1:])
    # Go to next time step
    t += dt
In order to compare our results, an exact reference solution is provided in the file "ref.txt". Since the initial data consist of a single discontinuity, this Riemann problem can be solved up to arbitrary accuracy with an exact (iterative) solver. The implementation of the exact solver is outside the scope of this notebook, but the interested reader is referred to the extensive resource by Toro [1].
# Load reference solution
ref_sol = np.loadtxt('ref.txt')
ref_sol = np.transpose(ref_sol)
# Numerical results for density, velocity, pressure and internal energy
num_sol = [Q[0], Q[1] / Q[0], pressure(Q[0], Q[1] / Q[0], Q[2]),
           pressure(Q[0], Q[1] / Q[0], Q[2]) / (0.4 * Q[0])]
With the reference solution in place, we can make plots of how density, velocity, pressure and internal energy are distributed throughout the domain at the final time $t = T$.
fig, axes = plt.subplots(2, 2, sharex='col', num=1)
axes = axes.flatten()
labels = [r'$\rho$', r'$u$', r'$p$', r'$e$']
# For each subplot, plot numerical and exact solutions
for ax, label, num, ref in zip(axes, labels, num_sol, ref_sol[1:]):
    ax.plot(x, num, 'or', fillstyle='none', label='Numerical')
    ax.plot(ref_sol[0], ref, 'b-', label='Exact')
    ax.set_xlim([0, 1])
    ylim_offset = 0.05 * (np.max(num) - np.min(num))
    ax.set_ylim([np.min(num) - ylim_offset, np.max(num) + ylim_offset])
    ax.set_xlabel(r'$x$')
    ax.set_ylabel(label, rotation=0)
    ax.legend(loc='best')
plt.show()
Going from left to right in the domain, it is clear that our scheme has resolved the rarefaction wave, contact discontinuity (evident in $\rho$ and $e$) and the shock wave. Compared to the exact solution, however, it is obvious that the numerical approximation is diffused and fails to capture sharp discontinuities accurately. This inaccuracy is as expected for a scheme which is only first-order accurate, since the second-order error term is diffusive by nature. Every time we advance the system in time, the numerical solution gets slightly smeared out compared to the exact one. After 60 time steps, the result is as shown. For any serious applications, high-resolution schemes with at least second order accuracy should be considered. A few such schemes are given in [3].
[1] E. F. Toro: "Riemann Solvers and Numerical Methods for Fluid Dynamics - A Practical Introduction" (3rd ed, Springer, 2009)
[2] E. F. Toro & A. Hidalgo & M. Dumbser: "FORCE schemes on unstructured meshes I: Conservative hyperbolic systems" (Journal of Computational Physics, 2009)
[3] E. F. Toro & S. J. Billett: "Centred TVD Schemes for Hyperbolic Conservation Laws" (IMA Journal of Numerical Analysis, 2000)
|
The Annals of Applied Probability, Volume 17, Number 1 (2007), 81-101.

On the signal-to-interference ratio of CDMA systems in wireless communications

Abstract

Let $\{s_{ij} : i, j = 1, 2, \ldots\}$ consist of i.i.d. random variables in $\mathbb{C}$ with $\mathsf{E}s_{11}=0$, $\mathsf{E}|s_{11}|^{2}=1$. For each positive integer $N$, let $s_k = s_k(N) = (s_{1k}, s_{2k}, \ldots, s_{Nk})^T$, $1 \le k \le K$, with $K = K(N)$ and $K/N \to c > 0$ as $N \to \infty$. Assume for fixed positive integer $L$, for each $N$ and $k \le K$, $\alpha_k = (\alpha_k(1), \ldots, \alpha_k(L))^T$ is random, independent of the $s_{ij}$, and the empirical distribution of $(\alpha_1, \ldots, \alpha_K)$ converges weakly, with probability one, to a probability distribution $H$ on $\mathbb{C}^L$. Let $\beta_k = \beta_k(N) = (\alpha_k(1) s_k^T, \ldots, \alpha_k(L) s_k^T)^T$ and set $C = C(N) = (1/N)\sum_{k=2}^{K} \beta_k \beta_k^*$. Let $\sigma^2 > 0$ be arbitrary. Then define $\mathrm{SIR}_1 = (1/N)\, \beta_1^*(C + \sigma^2 I)^{-1}\beta_1$, which represents the best signal-to-interference ratio for user 1 with respect to the other $K-1$ users in a direct-sequence code-division multiple-access system in wireless communications. In this paper it is proven that, with probability 1, $\mathrm{SIR}_1$ tends, as $N \to \infty$, to the limit $\sum_{\ell, \ell'=1}^{L} \bar{\alpha}_1(\ell) \alpha_1(\ell') a_{\ell,\ell'}$, where $A = (a_{\ell,\ell'})$ is nonrandom, Hermitian positive definite, and is the unique matrix of such type satisfying $A=\bigl(c\,\mathsf{E}\frac{\mathbf{\alpha}\mathbf{\alpha}^{*}}{1+\mathbf{\alpha}^{*}A\mathbf{\alpha}}+\sigma^{2}I_{L}\bigr)^{-1}$, where $\alpha \in \mathbb{C}^L$ has distribution $H$. The result generalizes those previously derived under more restricted assumptions.

Article information: first available in Project Euclid 13 February 2007; MR2292581; Zbl 1133.94012; MSC Primary 15A52, 60F15; Secondary 60G35, 94A05, 94A15.
Bai, Z. D.; Silverstein, Jack W. On the signal-to-interference ratio of CDMA systems in wireless communications. Ann. Appl. Probab. 17 (2007), no. 1, 81--101. doi:10.1214/105051606000000637. https://projecteuclid.org/euclid.aoap/1171377178
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
The newly realized relationship between the geometric connections in the spacetime and the standard quantum entanglement has been the topic of exciting papers in recent years. One aspect of the papers written so far has made them simple but rather special: the entangled systems were always pretty much pairs of degrees of freedom, and the wormhole correspondingly looked bipartite, like a cylindrical tunnel connecting two pretty much identical throats at its ends.
A newly published 65-page-long hep-th preprint
In particular, they use a setup in \(AdS/CFT\). More precisely, it's \(AdS_3/CFT_2\) in the Euclidean context. The \(CFT\) looks much like the world sheet \(CFT\)s in perturbative string theory. But because it's a holographic boundary \(CFT\), there should be no two-dimensional gravity in it. Two-dimensional gravity has no local dynamical excitations but there is a difference because you shouldn't sum over topologies of the \(CFT_2\) etc.
Their boundary has the two-dimensional, Euclidean, connected geometry \(\Sigma\) – a Riemann surface – but this Riemann surface has several, namely \(n\), circular boundaries. The setup is a bit complicated so don't forget that this whole \(\Sigma\) is a boundary of something else, too.
The dynamical gravitational spacetime capping this two-dimensional Euclidean \(CFT\) boundary is Euclidean three-dimensional, some \(AdS_3\)-like geometry. They argue that they may find the gravitational i.e. geometric dual description to "non-maximally" entangled states similar to the GHZ state\[
{\ket\psi}_{GHZ} = \frac{ \ket{\uparrow\uparrow\uparrow} + \ket{\downarrow\downarrow\downarrow} }{ \sqrt{2} }
\] which is known from the discussions about the intrinsically non-classical or "paradoxical" character of the quantum entanglement. (We often call it the GHZM state and the dominant convention for the relative sign is "minus", but let's not be picky.)
What they find out is that the very topology of the three-dimensional interpolating \(AdS_3\)-based geometry isn't fixed by the discrete data about the two-dimensional Euclidean-signature Riemann surface \(\Sigma\). Instead, even the topology depends on the (shape i.e.) moduli of the Riemann surface \(\Sigma\).
(Once again, recall that these moduli are not integrated over because the boundary \(CFT_2\) isn't a gravitational theory. It is a boundary \(CFT\) which is, by general rules of holography, non-gravitational.)
For some values of the moduli, the connecting three-dimensional surface looks more bipartite while for others, it looks multipartite. The relevant geometry may be guessed from the appropriate – not uniquely determined – way to cut the surface \(\Sigma\) by scissors. The bulk spacetime topology is so dynamical and emergent that it seems to depend on lots of data that you wouldn't expect to matter if you thought that the spacetime topology may be decided in an a priori way.
This dependence of the topology on the point in the moduli space obviously generalizes the Hawking-Page transition. For decades, since the early days of holography, this phase transition between some gas and a black hole has been known to be holographically dual to the confinement/deconfinement transition in the boundary theory.
So in principle, similar topology changes as functions of unexpected parameters have been known except that the topology change in the newest paper is perhaps even more unexpected.
As I have emphasized since the early days of this entanglement-as-glue minirevolution, one should stop thinking about the spacetime topology in quantum gravity as something that is given by some good "topological quantum numbers" that may be decided at the very beginning so that everything else is constrained by these topological assumptions. Instead, the topological invariants in quantum gravity aren't even well-defined quantum numbers (they are not given by Hermitian operators) due to various ER-EPR-like dualities. And the most convenient topology for a given state actually requires some calculation.
The Hilbert space of the microscopic, e.g. the boundary \(CFT\), theory isn't "divided" into sectors of different bulk topologies in any easy way that you might immediately guess. This is a testimony to the fact that the spacetime geometry has become "really dynamical" or, if you won't misinterpret the adjective in a stupid Laughlinian way, "really emergent".
|
Answer:
Given, Shreya's age = 15 yr and Bhoomika's age = 12 yr.
$\therefore$ Ratio of their ages $= \dfrac{\text{Shreya's age}}{\text{Bhoomika's age}} = \dfrac{15\ \text{yr}}{12\ \text{yr}} = \dfrac{15\div 3}{12\div 3} = \dfrac{5}{4} = 5:4$ [$\because$ HCF of 15 and 12 = 3].
Now, the mother wants to divide ₹36 between her daughters in the ratio of their ages.
$\therefore$ Sum of the parts of the ratio = 5 + 4 = 9, and the amount to be divided = ₹36.
Here, we can say that Shreya gets 5 parts and Bhoomika gets 4 parts out of every 9 parts.
$\therefore$ Shreya's share $= \dfrac{5}{9}\times 36 = $ ₹20 and Bhoomika's share $= \dfrac{4}{9}\times 36 = $ ₹16.
Hence, Shreya gets ₹20 and Bhoomika gets ₹16.
|
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
|
So we need to show:
$$\overline{M} = X \leftrightarrow \forall V \subseteq X \text{ open and non-empty } : V \cap M \neq \emptyset$$
Now it will depend on how you define $\overline{M}$. If you define it as the smallest closed subset that contains $M$ (one of the usual definitions) I'd go as follows:
Left to right: assume $\overline{M} = X$ and let $V$ be any non-empty open subset of $X$. Then $M \nsubseteq X \setminus V$, for otherwise the latter set would be a closed subset smaller than $X$ that contained $M$. So there is always a point of $M$ that is not in $X \setminus V$, or put equivalently: $M$ always intersects $V$, as required.
Right to left: suppose the right hand condition holds. Then let $C$ be a closed subset of $X$ with $M \subseteq C$. We want to show that $C = X$ (so that $X$ is then the only, hence smallest, closed superset of $M$). If $C \neq X$, then $V = X\setminus C$ is non-empty and open, but $V \cap M = \emptyset$, which contradicts the right hand condition. So $C = X$.
Another common definition is that $\overline{A}$ is the set of all adherence points of $A$, i.e. all $x \in X$ such that for any open set $O$ that contains $x$, $O \cap A \neq \emptyset$.
In that case the equivalence is more immediate:
Left to right. If $V \neq \emptyset$ is open then for $p \in V$ we have that $p$ is an adherence point of $M$, as $\overline{M} = X$, and so $V$ must intersect $M$.
Right to left: suppose $p \in X$. Then let $O$ be any open set that contains $p$. Then the right hand condition implies that $O \cap M \neq \emptyset$. So $p$ is an adherence point of $M$ and, as $p \in X$ was arbitrary, $\overline{M} = X$.
If your definition of closure is different, please let us know.
|
I'll post some elements of answer to my own question from what I understand. Anyone feel free to make a better, less sloppy answer.
First, there are several versions of this document online. They all have some typographic defects at some point. So when in doubt, better check those 3 versions first.
Second, there's a lot of implicit stuff going on with this first actual proof. So let's clarify some of them.
Reachable bound of $y - \bar{y}$
I think that,
no, the bound on $y - \bar{y}$ cannot be reached. Let's try to go for the worst case with $k = p+1$. Let's assume $p = 6$ and the radix $\beta = 10$.\begin{align}y & = \overbrace{0.000000}^{k=7}\overbrace{999999}^{p=6} \\\bar{y} & = \overbrace{0.000000}^{p+1=7} \\y - \bar{y} & = 0.000000999999 \\ & = 9 \times (10^{-7}+\dots+10^{-12}) \\ & = (\beta - 1)(\beta^{-p-1}+\dots+\beta^{-p-p}) \\ & \lt (\beta - 1)(\beta^{-p-1}+\dots+\beta^{-p-p}+\beta^{-p-(p+1)}) \\ & \lt (\beta - 1)(\beta^{-p-1}+\dots+\beta^{-p-p}+\beta^{-p-k})\end{align}
I had to add an extra term to the sum (i.e. an extra digit) to get the same formula as the paper. Hence the $=$ got transformed to $\lt$.
This exact bound simplifies the second case of the proof, although a looser and simpler bound $y - \bar{y} \lt \beta^{-p}$ would be enough for the first case, when $x - y \ge 1$.
No rounding error when $x - \bar{y} \lt 1$
The assertion that if $x - \bar{y} \lt 1$, then $\delta = 0$ seems true. The rounding error $\delta$ is the error we introduce by removing the guard digit to produce the final result. This rounding is sometimes needed because $\bar{y}$ has $p+1$ digits and $x - \bar{y}$ might also have $p+1$ digits.
For instance:\begin{align}x & = 2.00000 \\y & = 0.0123456 \\\bar{y} & = 0.012345 \\x - \bar{y} & = 1.987655\end{align}The result has $7$ digits therefore has to be rounded to $6$ digits.
But when $x - \bar{y} \lt 1$ then the result starts with the digit $0$. Meaning that at most only $p$ significant digits remain from the $p+1$ digits used to perform the subtraction. For example:\begin{align}x & = 1.00000 \\y & = 0.0123456 \\\bar{y} & = 0.012345 \\x - \bar{y} & = 0.987655\end{align}The result $x - \bar{y}$ already has 6 digits. So no rounding error needs to be introduced.
However, it is worth noting that this doesn't change anything to the error introduced by truncating $y$ to $\bar{y}$.
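For anyone who wants to poke at these two cases numerically, here is a small Python sketch of my own (not from the paper) using the decimal module; the precision settings mimic the $p = 6$, one-guard-digit setup above, and y_bar is just $y$ truncated at the guard-digit position.

from decimal import Decimal, getcontext

x = Decimal("2.00000")        # p = 6 digits
y = Decimal("0.0123456")
y_bar = Decimal("0.012345")   # y truncated at the guard-digit position 1e-6

getcontext().prec = 7         # p + 1 digits available for the subtraction
diff = x - y_bar
print(diff)                   # 1.987655 -> needs 7 digits
getcontext().prec = 6         # the final result must fit in p = 6 digits
print(+diff)                  # 1.98766  -> rounding error delta = 5e-6

# Second case: x - y_bar < 1, the difference already fits in p digits.
getcontext().prec = 7
diff2 = Decimal("1.00000") - y_bar
print(diff2)                  # 0.987655 -> only 6 significant digits
getcontext().prec = 6
print(+diff2)                 # unchanged, so delta = 0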
Minimum value of $x - y$
When the paper says that the smallest value $x - y$ can take is:$$1.0 - 0.\overbrace{0 \dots 0}^k\overbrace{\rho \dots \rho}^k$$there's actually a typo that has been corrected in other documents. What was meant to be written is:$$1.0 - 0.\overbrace{0 \dots 0}^k\overbrace{\rho \dots \rho}^p$$which makes more sense and is the general formula when $k$ is fixed. Maybe a better form could be:\begin{align}x - y & \ge 1.0 - 0.\overbrace{0 \dots 0}^k\overbrace{\rho \dots \rho}^p \\ & \ge 0.\overbrace{\rho \dots \rho}^k\overbrace{0 \dots 0}^{p-1}1 \\ & \gt 0.\overbrace{\rho \dots \rho}^k\end{align}
This leads naturally to the formula written in the paper.
Third case: $x - \bar{y} = 1$
And finally, another implicit point of this proof (not asked but worth writing down): why would having both $x - y \lt 1$ and $x - \bar{y} \ge 1$ imply that $x - \bar{y} = 1$?
I think the informal answer would be that since $x$ is a float with $p$ digits, then $x - 1$ is also a float with $p$ or $p - 1$ significant digits (if $x \ge 2$ or $x < 2$ respectively). Meaning that truncating $y$ to $\bar{y}$ cannot make it less than $x - 1$, since $y > x - 1$ and $x - 1$ is a candidate value for $\bar{y}$ that is less than $y$.
In other words: $x - y \lt 1$ implies that $x - \bar{y} \le 1$. Therefore, having this and $x - \bar{y} \ge 1$ at the same time implies that $x - \bar{y} = 1$.
|
Preprints (rote Reihe) of the Department of Mathematics, keyword: Brownian motion (2)
296
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
|
My question in short:
Under which scenarios is minimising transmission power beneficial in terms of transmitter energy efficiency? Is there any fundamental reason to keep a low-power oriented design strategy?
In particular, small, low-cost transceivers (such as those used for M2M/IoT communications) are designed according to this strategy. I'd like to understand why.
Background
It is quite natural to think that energy efficiency in radio link design is directly linked to a low transmission power. This, however, cannot be concluded from a simple analysis (see baseline model below). For instance, we may feel tempted to decrease the data rate in order to reduce the required transmission power. This in turn increases the transmission time, and the total energy consumed does not change.
Naturally, this question requires a look at the whole transmitter circuitry (which is a little outside my field). After some research, I noted that there must be a trade-off regarding clock rate (high precision and high rate are paid for with extra power) and amplifier efficiency (lower input powers are usually less efficient). But this suggests that minimising power is not always the best option.
The baseline model (optional)
Consider a wireless radio system where a transmitter $A$ wants to communicate with a receiver $B$, at distance $d$. The total attenuation (due to signal propagation) can be simply modelled as $L=d^\alpha$, where typically $2 \leq \alpha \leq 4$.
The carrier-to-noise density ratio $\frac{C}{N_0}$ at $B$'s receiver input is
\begin{equation} \frac{C}{N_0} = \frac{P_t G_A G_B}{d^{\alpha}N_0}, \end{equation}
where $P_t$ is the power at the output of $A$. When designing these systems, there is usually a performance constraint expressed as a maximum tolerated bit error rate, which turns into a minimum $E_b/N_0 = \gamma_0$ requirement.
So in order to achieve this $\gamma_0$, it seems that there are only two system design parameters to play with (assuming all other parameters are fixed): transmission power and data rate, i.e.
\begin{equation} \gamma_0 \propto \Big( \frac{P_t}{R_b} \Big). \end{equation}
Thus, we observe that increasing the transmission power has the same effect as decreasing the data rate, in terms of $E_b/N_0$.
Now consider that the total amount of data to be delivered is $L_b$. The total energy consumed by $A$ to deliver $L_b$ bits is therefore
\begin{equation} E_t \propto P_t\frac{L_b}{R_b} \propto \gamma_0 N_0 L_b, \end{equation}
which depends neither on the operating power nor on the data rate.
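To convince myself of this, here is a tiny Python check of the baseline model; all constants (gains, noise density, distance, path-loss exponent) are made-up illustration values, not a real link budget.

G_A = G_B = 1.0
N0 = 4e-21              # W/Hz, roughly thermal noise density
d, alpha = 100.0, 3.0   # distance and path-loss exponent
gamma_0 = 10.0          # required Eb/N0 (linear scale)
L_b = 1e6               # bits to deliver

for R_b in (1e3, 1e4, 1e5):                              # bit rate, bit/s
    P_t = gamma_0 * N0 * R_b * d**alpha / (G_A * G_B)    # power needed to reach gamma_0
    E_t = P_t * L_b / R_b                                # radiated energy for L_b bits
    print(f"R_b = {R_b:8.0f} bit/s  P_t = {P_t:.3e} W  E_t = {E_t:.3e} J")

Lowering the rate lowers the required power, but the time on air grows in the same proportion, so E_t comes out the same for every rate.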
|
You are correct. Many methods of echo cancellation exist, but none of them are exactly trivial. The most generic and popular method is echo cancellation via an adaptive filter. In one sentence, the adaptive filter's job is to alter the signal that it's playing back by minimizing the amount of information coming from the input.
Adaptive filters
An adaptive (digital) filter is a filter that changes its coefficients and eventually converges to some optimal configuration. The mechanism for this adaptation works by comparing the output of the filter to some desired output. Below is a diagram of a generic adaptive filter:
As you can see from the diagram, the signal $x[n]$ is filtered by (convolved with) $\vec{w}_n$ to produce the output signal $\hat{d}[n]$. We then subtract $\hat{d}[n]$ from the desired signal $d[n]$ to produce the error signal $e[n]$. Note that $\vec{w}_n$ is a vector of coefficients, not a number (hence we don't write $w[n]$). Because it changes every iteration (every sample), we subscript the current collection of these coefficients with $n$. Once $e[n]$ is obtained we use it to update $\vec{w}_n$ by an update algorithm of choice (more on that later). If the input and output satisfy a linear relationship that does not change over time and given a well-designed update algorithm, $\vec{w}_n$ will eventually converge to the optimal filter, and $\hat{d}[n]$ will closely follow $d[n]$.
Echo cancellation
The problem of echo cancellation can be presented in terms of an adaptive filter problem where we're trying to produce some known ideal output given an input by finding the optimal filter satisfying the input-output relationship. In particular, when you grab your headset and say "hello", it's received on the other end of the network, altered by the acoustic response of a room (if it's being played back out loud), and fed back into the network to go back to you as an echo. However, because the system knows what the initial "hello" sounded like and now it knows what the reverberated and delayed "hello" sounds like, we can try and guess what that room response is using an adaptive filter. Then we can use that estimate, convolve all incoming signals with that impulse response (which would give us the estimate of the echo signal) and subtract it from what goes into the microphone of the person you called. The diagram below shows an adaptive echo canceller.
In this diagram, your “hello” signal is $x[n]$. After being played out of a loudspeaker, bouncing off the walls and getting picked up by the device’s microphone it becomes an echoed signal $d[n]$. The adaptive filter $\vec{w}_n$ takes in $x[n]$ and produces output $y[n]$ which after convergence should be ideally tracking echoed signal $d[n]$. Therefore $e[n]=d[n]-y[n]$ should eventually go to zero, given that nobody is talking on the other end of the line, which is usually the case when you’ve just picked up the headset and said “hello”. This is not always true, and some non-ideal case consideration will be discussed later.
Mathematically, the NLMS (normalized least mean square) adaptive filter is implemented as follows. We update $\vec{w}_n$ every step using the error signal of the previous step. Namely, let
$$\vec{x}_n = \left ( x[n], x[n-1], \ldots , x[n-N+1] \right)^T$$
where $N$ is the number of taps (samples) in $\vec{w}_n$. Notice that the samples of $x$ are in reverse order. And let
$$\vec{w}_n = \left ( w[0], w[1], \ldots , w[N-1] \right )^T$$
Then we calculate $y[n]$ by convolution, i.e. by finding the inner product (dot product if both signals are real) of $\vec{x}_n$ and $\vec{w}_n$:
$$y[n] = \vec{x}_n^T \vec{w}_n = \vec{x}_n \cdot \vec{w}_n $$
Now that we can calculate the error, we’re using a normalized gradient descent method for minimizing it. We get the following update rule for $\vec{w}$:
$$\vec{w}_{n+1} = \vec{w}_n + \mu \vec{x}_n \frac{e[n]}{ \vec{x}_n^T \vec{x}_n}= \vec{w}_n + \mu \vec{x}_n \frac{d[n] - \vec{x}_n^T \vec{w}_n}{ \vec{x}_n^T \vec{x}_n}$$
where $\mu$ is the adaptation step size such that $0 \leq \mu \leq 2$.
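If you want something concrete to play with, here is a minimal NumPy sketch of the NLMS recursion described above; the function name, the toy far-end signal and the simulated room impulse response are all invented for illustration, not taken from any particular library.

import numpy as np

def nlms_echo_canceller(x, d, num_taps=128, mu=0.5, eps=1e-8):
    # x: far-end (reference) signal, d: microphone signal containing the echo.
    # Returns the error signal e (echo-suppressed) and the filter estimate w.
    w = np.zeros(num_taps)
    e = np.zeros(len(d))
    for n in range(num_taps, len(d)):
        x_n = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        y_n = np.dot(w, x_n)                    # estimated echo
        e[n] = d[n] - y_n                       # error = mic minus estimate
        # normalized LMS update; eps guards against near-zero signal energy
        w += mu * e[n] * x_n / (np.dot(x_n, x_n) + eps)
    return e, w

# Toy usage: simulate the echo path as a short decaying random impulse response.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                               # far-end signal
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)  # fake room response
d = np.convolve(x, h)[:len(x)]                               # echoed microphone signal
e, w = nlms_echo_canceller(x, d)
print("residual echo power:", np.mean(e[5000:] ** 2))

In a real canceller you would of course feed it the actual far-end and microphone streams and add the double-talk and silence detection discussed below.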
Real life applications and challenges
Several things can present difficulty with this method of echo cancellation. First of all, as mentioned before, it is not always true that the other person is silent whilst they receive your "hello" signal. It can be shown (but is beyond the scope of this reply) that in some cases it can still be useful to estimate the impulse response while there is a significant amount of input present on the other end of the line because the input signal and the echo are assumed to be statistically independent; therefore, minimizing the error will still be a valid procedure. In general, a more sophisticated system is needed to detect good time intervals for echo estimation.
On the other hand, think of what happens when you're trying to estimate the echo when the received signal is approximately silence (noise, actually). In the absence of a meaningful input signal, the adaptive algorithm will diverge and quickly start producing meaningless results, culminating eventually in a random echo pattern. This means that we also need to take into consideration speech detection. Modern echo cancellers look more like the figure below, but the above description is the gist of it.
There is plenty of literature on both adaptive filters and echo cancellation out there, as well as some open source libraries you can tap into.
|
I think you are confused about the meaning of the expression for the error. What it states is:
There is a point $c$ in the interval $(a, b)$ such that the error in calculating the integral $\int_{a}^bf(x)~{\rm d}x$ using the trapezoid rule is given by the expression
$$
\epsilon = (b-a)\frac{h^2}{12}f''(c) \tag{1}
$$
here $h$ is the size of the partition.
To give you an example, take $a = 0$, $b= h=1$, and $f(x) = e^{x}\cos x$, using the trapezoidal rule you get
$$S = \int_0^1{\rm d}x~f(x) \approx \frac{1}{2}(f(0) + f(1)) = 1.2343$$
whereas the actual integral is
$$I = \int_0^1{\rm d}x~f(x) = \frac{1}{2}[-1 + e(\cos 1 + \sin 1)] = 1.37802$$
The statement above just tells you that there exists a number $c$ in $(0,1)$ such that
$$-\frac{f''(c)}{12} = 1.37802 - 1.2343$$
you can actually check this is true, with $c = 0.531375$.
Here is the deal: in most cases we do not know the actual value of the integral, but we can still use $(1)$ to put a constraint on the error we are making. For example, in your case
$$f''(x) = -2e^{x}\sin x$$
So that in the case $a = 0$ and $b = h$ you have
$$|\epsilon| = \frac{h^3}{12} |2e^{c}\sin c | = \frac{h^3}{6}e^c|\sin c| \leq \frac{h^3}{6}e^h\sin h \tag {2}$$
where I have used the fact that for small $h > 0$ the function $e^x\sin x$ is increasing, so its value at $h$ bounds its value at any $c < h$. And there you have it: you don't know the actual error you are making, but you know it will never exceed the value in $(2)$.
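A quick Python check of the numbers above (just a sketch reproducing the worked example, with h = 1):

import numpy as np

f = lambda x: np.exp(x) * np.cos(x)

h = 1.0
S = 0.5 * h * (f(0.0) + f(h))                           # trapezoid estimate
I = 0.5 * (-1.0 + np.e * (np.cos(1.0) + np.sin(1.0)))   # exact integral
err = abs(I - S)
bound = h**3 / 6.0 * np.exp(h) * np.sin(h)              # right-hand side of (2)
print(S, I, err, bound)    # err ~ 0.1437 stays below the bound ~ 0.3812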
|
For a project I'm trying to build a sound engine as realistic as possible and I am trying to add more features to enhance the sound localization. So far I've only implemented one of the three main methods to localize sound: time delay. Another main sound localization method is difference in volume. This however, looks easier than it is. I know very little about sound and the ways to measure intensity and power, so please correct me if I confuse terms and concepts.
In the sound library I'm using I can set the volume of a sound sample between 0 and 100, just like on a volume mixer. This means that when I set it to 50, the sounds played sound half as loud as when it is turned up to 100. I know the intensity of sound can be calculated using this equation:
$$I = \frac{P}{4\pi r^2}$$
But when I select a distance for the virtual source and I want to know at what 'volume' the sound should be played, I decided the power (P) and $4\pi$ were constants so I could leave them out:
$$I \sim \frac{1}{r^2}$$
But this is where I didn't really come any further. I don't know if I should use some reference point that somehow defines the volume to be 100 when the source is at distance zero:
$$Volume = \frac{100}{(r+1)^2}$$
Is this the correct way to calculate at what volume the sound should be played (with the distance in meters)?
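For reference, a minimal sketch of the mapping I am proposing (the +1 offset is only there so that r = 0 gives full volume; r is assumed to be in metres):

def volume_from_distance(r, v_max=100.0):
    # inverse-square falloff, offset by 1 m so that r = 0 gives v_max
    return v_max / (r + 1.0) ** 2

for r in (0.0, 1.0, 2.0, 5.0, 10.0):
    print(r, volume_from_distance(r))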
|
Proceedings of the Eleventh International Conference on Geometry, Integrability and Quantization, Ivaïlo M. Mladenov, Gaetano Vilasi and Akira Yoshioka, eds. (Sofia: Avangard Prima, 2010), 42-67
Multi-Component Nonlinear Schrödinger Equation on Symmetric Spaces with Constant Boundary Conditions: Part I
Abstract
The multi-component nonlinear Schrödinger equations related to $\mathbf{C.I} \simeq \mathrm{Sp}(2p)/\mathrm{U}(p)$ and $\mathbf{D.III}\simeq \mathrm{SO}(2p)/\mathrm{U}(p)$-type symmetric spaces with non-vanishing boundary conditions are solvable by the inverse scattering method (ISM). We focus our attention on the single threshold case. We formulate the spectral properties of the Lax operator $L$, which is the generalized Zakharov-Shabat operator. Next we construct the corresponding fundamental analytic solutions (FAS) and adapt the Wronskian relations for the constant boundary conditions. They allow one to analyze the mapping from the class of allowed potentials $\mathcal{M}$ to each of the minimal sets of scattering data $\mathcal{T}_{i}$, $i = 1,2$. The ISM for the Lax operator $L$ is interpreted as a nonlinear analog of the Fourier-transform method. As appropriate generalizations of the usual exponential functions we use the so-called ‘squared solutions’, which are constructed in terms of the FAS $\chi^{\pm} (x,\lambda)$ of $L$ and the Cartan-Weyl basis of the Lie algebra, relevant to the symmetric space. Finally we derive the completeness relation for the “squared solutions” which turns out to provide the map from $\mathcal{M}$ to each $\mathcal{T}_i$, $i = 1,2$. Such decompositions allow one to derive all fundamental properties of the multi-component nonlinear Schrödinger equations.
Article information
Source: Proceedings of the Eleventh International Conference on Geometry, Integrability and Quantization, Ivaïlo M. Mladenov, Gaetano Vilasi and Akira Yoshioka, eds. (Sofia: Avangard Prima, 2010), 42-67
Dates: First available in Project Euclid: 13 July 2015
Permanent link to this document: https://projecteuclid.org/euclid.pgiq/1436794520
Digital Object Identifier: doi:10.7546/giq-11-2010-42-67
Mathematical Reviews number (MathSciNet): MR2757842
Zentralblatt MATH identifier: 1203.37115
Citation
Gerdjikov, Vladimir S.; Kostov, Nikolay A. Multi-Component Nonlinear Schrödinger Equation on Symmetric Spaces with Constant Boundary Conditions: Part I. Proceedings of the Eleventh International Conference on Geometry, Integrability and Quantization, 42--67, Avangard Prima, Sofia, Bulgaria, 2010. doi:10.7546/giq-11-2010-42-67. https://projecteuclid.org/euclid.pgiq/1436794520
|
Solve equations and simplify expressions
In algebra 1 we are taught that the two rules for solving equations are the addition rule and the multiplication/division rule.
The addition rule for equations tells us that the same quantity can be added to both sides of an equation without changing the solution set of the equation.
Example
$$\begin{array}{lcl} 4x-12 & = & 0\\ 4x-12+12 & = & 0+12\\ 4x & = & 12\\ \end{array}$$
Adding 12 to each side of the equation on the first line of the example is the first step in solving the equation. We did not change the solution by adding 12 to each side since both the second and third equations have the same solution. Equations that have the same solution sets are called equivalent equations.
The multiplication/division rule for equations tells us that every term on both sides of an equation can be multiplied or divided by the same term (except zero) without changing the solution set of the equation.
Example
$$\begin{array}{lcl} 4x-12 & = & 0\\ 4x-12+12 & = & 0+12\\ 4x & = & 12\\ \frac{4x}{4} & = & \frac{12}{4}\\ x & = & 3\\ \end{array}$$
When we simplify an expression we operate in the following order:
1. Simplify the expressions inside parentheses, brackets, braces and fraction bars.
2. Evaluate all powers.
3. Do all multiplications and divisions from left to right.
4. Do all additions and subtractions from left to right.
A useful rule is the denominator-numerator rule which states that the denominator and numerator may be multiplied by the same quantity without changing the value of the fraction.
Example
$$\frac{(2^{2}-2)}{\sqrt{2}}$$
First we simplify the expression inside the parentheses by evaluating the powers and then do the subtraction within it.
$$\frac{(4-2)}{\sqrt{2}}$$
$$\frac{(2)}{\sqrt{2}}$$
We then remove the parentheses and multiply both the denominator and the numerator by √2.
$$\frac{2\cdot \sqrt{2}}{\sqrt{2}\cdot \sqrt{2}}$$
As a last step we do all multiplications and divisions from left to right.
$$\frac{2\cdot \sqrt{2}}{2}$$
$$\sqrt{2}$$
Video lesson
Solve the given equation
$$12(\frac{3b-b}{4a})=36$$
|
I know how to change the phase of a complex number by multiplying by $\cos \theta + i \sin \theta$. And I understand that the phase of a sine wave is reflected in its Fourier transform. So, I am trying to phase-shift a signal by changing the phase of its Fourier transform.
This works for "synthetic" Fourier transforms, but when I try to FFT a signal, apply the phase change, and invert the FFT, I don't get the expected result. I am using MATLAB in the examples below.
Fs = 1000;Tmax = 2;L = Tmax * Fs;
For example, I'll build a synthetic Fourier sequence. The frequency intervals are 0.5 Hz, so a 10 Hz component is at the 20th position after DC. (I'm just spitballing here; I know there's better ways to pick frequency bins than hand-jamming 21.)
Ysynth = zeros(1, L);Ysynth(21) = 1000;
And rotate it by 45°:
Ysynth = Ysynth * exp(j * pi/4);
It has a 45° phase on the 10-Hz component:
polarscatter(angle(Ysynth), abs(Ysynth))
Now take its inverse FFT:
Xsynth = ifft(Ysynth);
So far so good. The generated signal has a phase shift of 45°. MATLAB does complain about the presence of an imaginary part when I plot it; I think this is because I didn't bother with the negative frequency component.
t = (0:L-1)/Fs;plot(t, Xsynth)
Now, I would expect the same principle to apply to a frequency spectrum taken from an actual signal. But I do not get similar results. Quickly:
X = cos(2*pi * 10 * t);Y = fft(X);Yshift = Y * exp(j * pi/4);Xshift = ifft(Yshift);
There is an imaginary component in Xshift, which I do not expect, and which MATLAB again complains about.
plot(t, [Xshift; X])
There is no phase shift here, and the amplitude (of the real part) is different.
I must be misunderstanding something about phase representation in FFTs. Why doesn't my transformation produce a phase-shifted version of the original?
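For completeness, here is a NumPy reproduction of the experiment above (same parameters as the MATLAB code), which shows exactly the behaviour I am asking about:

import numpy as np

Fs, Tmax = 1000, 2
L = Tmax * Fs
t = np.arange(L) / Fs

X = np.cos(2 * np.pi * 10 * t)
Y = np.fft.fft(X)
Yshift = Y * np.exp(1j * np.pi / 4)    # rotate every bin by 45 degrees
Xshift = np.fft.ifft(Yshift)

print(np.max(np.abs(Xshift.imag)))     # clearly nonzero imaginary part (~0.707)
print(np.max(np.abs(Xshift.real)))     # real part scaled down to cos(pi/4) ~ 0.707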
|
Burgers equation
The Burgers equation is a non-linear equation for the advection and diffusion of momentum. Here we choose to write the Burgers equation in two dimensions to demonstrate the use of vector function spaces:
\[ \frac{\partial u}{\partial t} + (u\cdot\nabla)u - \nu\nabla^2 u = 0, \qquad (n\cdot\nabla)u = 0 \ \text{ on } \Gamma, \]
where \(\Gamma\) is the domain boundary and \(\nu\) is a constant scalar viscosity. The solution \(u\) is sought in some suitable vector-valued function space \(V\). We take the inner product with an arbitrary test function \(v\in V\) and integrate the viscosity term by parts:
\[ \int_\Omega \frac{\partial u}{\partial t}\cdot v + \big((u\cdot\nabla)u\big)\cdot v + \nu\,\nabla u : \nabla v \,\mathrm{d}x = 0 \qquad \forall\, v\in V. \]
The boundary condition has been used to discard the surface integral. Next, we need to discretise in time. For simplicity and stability we elect to use a backward Euler discretisation:
\[ \int_\Omega \frac{u^{n+1}-u^{n}}{\Delta t}\cdot v + \big((u^{n+1}\cdot\nabla)u^{n+1}\big)\cdot v + \nu\,\nabla u^{n+1} : \nabla v \,\mathrm{d}x = 0 \qquad \forall\, v\in V. \]
We can now proceed to set up the problem. We choose a resolution and set up a square mesh:
from firedrake import *
n = 30
mesh = UnitSquareMesh(n, n)
We choose degree 2 continuous Lagrange polynomials. We also need a piecewise linear space for output purposes:
V = VectorFunctionSpace(mesh, "CG", 2)
V_out = VectorFunctionSpace(mesh, "CG", 1)
We also need solution functions for the current and the next timestep. Note that, since this is a nonlinear problem, we don’t define trial functions:
u_ = Function(V, name="Velocity")
u = Function(V, name="VelocityNext")
v = TestFunction(V)
For this problem we need an initial condition:
x = SpatialCoordinate(mesh)
ic = project(as_vector([sin(pi*x[0]), 0]), V)
We start with the current value of u set to the initial condition, but we also use the initial condition as our starting guess for the next value of u:
u_.assign(ic)
u.assign(ic)
\(\nu\) is set to a (fairly arbitrary) small constant value:
nu = 0.0001
The timestep is set to produce an advective Courant number of around 1. Since we are employing backward Euler, this is stricter than is required for stability, but ensures good temporal resolution of the system’s evolution:
timestep = 1.0/n
Here we finally get to define the residual of the equation. In the advection term we need to contract the test function \(v\) with \((u\cdot\nabla)u\), which is the derivative of the velocity in the direction \(u\). This directional derivative can be written as dot(u, nabla_grad(u)) since nabla_grad(u)[i,j] \(=\partial_i u_j\). Note once again that for a nonlinear problem, there are no trial functions in the formulation. These will be created automatically when the residual is differentiated by the nonlinear solver:
F = (inner((u - u_)/timestep, v) + inner(dot(u,nabla_grad(u)), v) + nu*inner(grad(u), grad(v)))*dx
We now create an object for output visualisation:
outfile = File("burgers.pvd")
Output only supports visualisation of linear fields (either P1 or P1DG). In this example we project to a linear space by hand. Another option is to let the File object manage the decimation. It supports both interpolation to linears (the default) or projection (by passing project_output=True when creating the File). Outputting data is carried out using the write() method of File objects:
outfile.write(project(u, V_out, name="Velocity"))
Finally, we loop over the timesteps solving the equation each time and outputting each result:
t = 0.0
end = 0.5
while (t <= end):
    solve(F == 0, u)
    u_.assign(u)
    t += timestep
    outfile.write(project(u, V_out, name="Velocity"))
A python script version of this demo can be found here.
|
Edit3 April 17, 2018. I would highly recommend using differential evolution instead of BFGS to perform the optimization. The reason is that the maximum likelihood optimization is likely to have multiple local minima, which may be difficult for the BFGS to overcome without careful use. Edit2 October 20, 2016. I was passing the integer length of the data set, instead of the floating point length, which messed up the math. I’ve corrected this and updated the code. Edit October 19, 2016. There was an error in my code, where I took the standard deviation of the true values, when I should have actually been taking the standard deviation of the residual values. I have corrected the post and the files.
I am going to use maximum likelihood estimation (MLE) to fit a linear (polynomial) model to some data points. A simple case is presented to create an understanding of how model parameters can be identified by maximizing the likelihood as opposed to minimizing the sum of the squares (least squares). The likelihood equation is derived for a simple case, and gradient optimization is used to determine the coefficients of a polynomial which maximize the likelihood with the sample. The polynomial that results from maximizing the likelihood should be the same as a polynomial from a least squares fit, if we assume a normal (Gaussian) distribution and that the data is independent and identically distributed. Thus the maximum likelihood parameters will be compared to the least squares parameters. All of the Python code used in this comparison will be available here.
So let’s generate the data points that we’ll be fitting a polynomial to. I assumed the data is from some second order polynomial, to which I added some noise to make the linear regression a bit more interesting.
The Python code gives us the following data points.
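Since the original script is only linked above, here is a minimal sketch of the kind of data-generation step being described; the true coefficients, the noise level and the x-range are my own assumptions for illustration, not the values used in the original post.

import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-2.0, 2.0, 50)
beta_true = np.array([1.0, 2.0, 3.0])                 # assumed true coefficients
y = beta_true[0] + beta_true[1] * x + beta_true[2] * x**2 \
    + rng.normal(0.0, 1.0, size=x.shape)              # add Gaussian noise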
Now onto the formulation of the likelihood equation that we’ll use to determine coefficients of a fitted polynomial.
Linear regression is generally of some form
\[ \mathbf{Y} = \mathbf{X}\mathbf{\beta} + \mathbf{r} \]
for a true function \( \mathbf{Y} \), the matrix of independent variables \( \mathbf{X} \), the model coefficients \( \mathbf{\beta} \), and some residual difference between the true data and the model \( \mathbf{r} \). For a second order polynomial, \( \mathbf{X} \) is of the form \( \mathbf{X} = [\mathbf{1}, \mathbf{x}, \mathbf{x^2}]\). We can rewrite the equation of linear regression as
\[ \mathbf{r} = \mathbf{Y} - \mathbf{X}\mathbf{\beta}, \]
where the residuals \( \mathbf{r} \) are expressed as the difference between the true model (\( \mathbf{Y} \)) and the linear model (\( \mathbf{X}\mathbf{\beta} \)). If we assume the data to be an independent and identically distributed sample, and that the residual \( \mathbf{r} \) is from a normal (Gaussian) distribution, then we’ll get the following probability density function \( f \):
\[ f(x\,|\,\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right). \]
The probability density function \( f(x|\mu , \sigma^2) \) is for a point \( x \), with a mean \( \mu \) and standard deviation \( \sigma \). If we substitute the residual into the equation and assume that the residual has a mean of zero (\( \mu = 0 \)), we get
\[ f(r_i\,|\,0,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{r_i^2}{2\sigma^2}\right), \]
which defines the probability density function for a given point. The likelihood function \( L \) is defined as
\[ L = \prod_{i=1}^{n} f(r_i\,|\,0,\sigma^2), \]
the multiplication of all probability densities at each \( x_i \) point. When we substitute the probability density function into the definition of the likelihood function, we have the following:
\[ L(\mathbf{\beta},\sigma) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{\left(Y_i - \mathbf{X}_i\mathbf{\beta}\right)^2}{2\sigma^2}\right). \]
It is practical to work with the log-likelihood as opposed to the likelihood equation, as the likelihood value can be nearly zero:
\[ \ln L = -\frac{n}{2}\ln\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(Y_i - \mathbf{X}_i\mathbf{\beta}\right)^2. \]
In Python we have created a function which returns the log-likelihood value given a set of ‘true’ values (\( \mathbf{Y} \)) and a set of ‘guess’ values \( \mathbf{X}\mathbf{\beta} \).
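A sketch of such a log-likelihood function, consistent with the description (and with the corrections noted in the edits: the standard deviation of the residuals and the floating-point length), though not the author's original file:

import numpy as np

def log_likelihood(y_true, y_guess):
    # residuals between the data and the current polynomial guess
    r = y_true - y_guess
    sigma = np.std(r)              # standard deviation of the residuals
    n = float(len(r))              # floating-point length of the data set
    return -(n / 2.0) * np.log(2.0 * np.pi * sigma**2) \
           - np.sum(r**2) / (2.0 * sigma**2)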
Optimization is used to determine which parameters \( \mathbf{\beta} \) maximize the log-likelihood function. The optimization problem is expressed below:
\[ \max_{\mathbf{\beta}} \; \ln L(\mathbf{\beta}). \]
So since our data originates from a second order polynomial, let’s fit a second order polynomial to the data. First we’ll have to define a function which will calculate the log likelihood value of the second order polynomial for three different coefficients (‘var’).
We can then use gradient-based optimization to find which polynomial coefficients maximize the log-likelihood. I used scipy and the BFGS algorithm, but other algorithms and optimization methods should work well for this simple problem. I picked some random variable values to start the optimization. The Python code to run the scipy optimizer is presented in the following lines. Note that maximizing the likelihood is the same as minimizing minus 1 times the likelihood.
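A sketch of the objective function and the scipy call being described, continuing the snippets above; the starting point, the wrapper name and the exact call are my own guesses, though the use of scipy.optimize.minimize with BFGS follows the text.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(var):
    # second-order polynomial evaluated with the candidate coefficients 'var'
    y_guess = var[0] + var[1] * x + var[2] * x**2
    return -log_likelihood(y, y_guess)     # minimize minus the log-likelihood

res = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0, 1.0]), method="BFGS")
beta_mle = res.x
print(beta_mle)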
As it turns out, with the assumptions we have made (Gaussian distribution, independent and identically distributed, \( \mu = 0 \)) the result of maximizing the likelihood should be the same as performing a least squares fit. So let’s go ahead and perform a least squares fit to determine the coefficients of a second order polynomial from the data points. This can be done with scikit-learn easily with the following lines of Python code.
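And a sketch of the corresponding scikit-learn least squares fit for comparison, again continuing the snippets above (the variable names are mine):

import numpy as np
from sklearn.linear_model import LinearRegression

X_design = np.column_stack([x, x**2])     # LinearRegression adds the intercept
ls = LinearRegression().fit(X_design, y)
beta_ls = np.array([ls.intercept_, *ls.coef_])
print(beta_ls)                            # should be close to beta_mle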
I’ve made a plot of the data points, the polynomial from maximizing the log-likelihood, and the least squares fit all on the same graph. We can clearly see that maximizing the likelihood was equivalent to performing a least squares fit for this data set. This was intended to be a simple example, as I hope to transition a maximum likelihood estimation to non-linear regression in the future. Obviously this example doesn’t highlight or explain why someone would prefer to use a maximum likelihood estimation, but hopefully in the future I can explain the difference on a sample where the MLE gives a different result than the least squares optimization.
A few notes from implementing this simple MLE:
It appears that the MLE performs poorly from bad starting points. I suspect that with a poor starting point it may be beneficial to first run a sum-of-squares optimization to obtain a starting point from which an MLE can be performed.
Edit October 19, 2016. I re-ran optimizations from random starting points by minimizing the root mean square error, and it actually appears that the MLE is a better optimization problem than the RMS error. Edit2 October 26, 2016. This led to a follow-up post here, where I show that actually MLE is a more difficult gradient optimization problem.
I’m not sure about how to select an appropriate probability density function. I’m not sure what the benefit of using MLE over least squares is if I always assume Gaussian… I guess I’ll always know the standard deviation, and that the mean may not always be zero.
|
First of all, Andreas' comment is right: a coverage gives no specified way to "pull back" a covering family of $U$ to a covering family of $V$. However, if you consider what Sketches of an Elephant calls "sifted" coverages, meaning that all covering families are sieves, then there is a canonical choice: the pullback of a sieve $R$ on $U$ along $f:V\to U$ is the sieve $f^*(R)$ consisting of all $h:W\to V$ such that $f h\in R$.
For an arbitrary sifted coverage, this pullback sieve may not be a covering family, but it always contains a covering family and thus lies in the "saturation" of the coverage. If a sifted coverage $T$ is closed under pullback of sieves, in this sense, then it does yield a presheaf on $C$, which is in fact a sub-presheaf of the subobject classifier in the presheaf topos $[C^{\mathrm{op}},\mathrm{Set}]$ (which is defined by $\Omega(U) = $ the set of all sieves on $U$). (If $T$ is a Grothendieck topology, then this sub-presheaf $T$ is the classifier of dense sub-presheaves.) See also C2.1.10 in the Elephant.
I claim that this sub-presheaf $T\subseteq \Omega$ is $T$-separated iff the coverage $T$ contains at most one covering sieve of every object. (In particular, if $T$ is a Grothendieck topology, then it must be the trivial topology.) This condition is clearly sufficient; for necessity, suppose $R$ and $S$ are two $T$-covering sieves of an object $U$. Then for any $f:V\to U$ in $R$, the pullback sieve $f^*(S)$ is covering. It follows that any $T$-separated presheaf is also separated for the sieve generated by all composites $f h$ with $h\in f^*(S)$ (this sieve lies in the saturation of $T$ to a Grothendieck topology). But this sieve is precisely $R\cap S$.
Thus, if $T\subseteq \Omega$ is $T$-separated, it is also separated for $R\cap S$ for any $R,S\in T$. However, for any $f:V\to U$ in $R\cap S$, we have $f^*(R) = f^*(S)$ being the maximal sieve on $V$. Thus, since $T$ is separated for $R\cap S$, we must have $R=S$; hence $U$ admits at most one $T$-covering sieve.
Now assuming $T$ satisfies this condition so that $T$ is $T$-separated, then $T$ is a $T$-sheaf precisely when the following holds: if $R$ is a (the) covering sieve of $U$ and for each $f:V\to U$ in $R$ we have a (the) covering sieve $S_V$ of $V$, then there is another covering sieve of $U$ (which, of course, must also be $R$) such that $f^* R = S_V$. But when $f\in R$, then $f^*R$ is the maximal sieve on $V$, so this means that the domain of every morphism in a covering sieve is covered only by its own maximal sieve --- which is already implied by $T$ being a functor. Such objects are called $T$-irreducible (C2.2.18 in the Elephant).
Thus there are three classes of objects in $C$: the irreducible ones, which are covered by their maximal sieve; those that are covered by some non-maximal sieve whose domains are all irreducible; and those that are not covered by any $T$-sieve. The irreducible objects are themselves a sieve in $C$, by functoriality, as are the objects that are covered by any $T$-sieve at all. The objects that are not covered by any $T$-sieve will be covered only by their maximal sieve in the Grothendieck topology generated by $T$, so they will be irreducible there.
In particular, the Grothendieck topology generated by $T$ is rigid in the sense of C2.2.18: every object is covered by the family of morphisms out of irreducible objects. It follows that the category of $T$-sheaves is equivalent to the category of presheaves on the irreducible objects for this topology (which are those that are $T$-covered by the maximal sieve or that are not covered by any $T$-sieve).
This is perhaps not a complete answer to your question, but it shows that the condition of a sifted coverage being a sheaf for itself is very restrictive.
|
Quanta Magazine's Natalie Wolchover wrote a status report about grand unification,
Grand Unification Dream Kept at Bay, whose subtitle – which includes words such as "failed" and "limbo" – is much more negative than the available evidence suggests. No smoking gun – such as the proton decay – that would "almost" prove grand unification has been found. But no good reason to stop loving grand unification has been found, either, and that's why lots of particle physicists keep on thinking about grand unified theories. The subtitle reads: "Physicists have failed to find disintegrating protons, throwing into limbo the beloved theory that the forces of nature were unified at the beginning of time."
What is grand unification? The Standard Model has a gauge group, \(SU(3)\times SU(2)\times U(1)\), and the gauge bosons (gluons, photons, W-bosons, Z-bosons) that are almost unavoidable given a gauge group. Also, it works with fermionic fields that transform as a reducible representation – a collection of separate, irreducible representations – under the gauge group.
Some fermionic fields (quarks) are colored i.e. triplets \({\bf 3}\) under the \(SU(3)\) group, others (leptons) are color-neutral i.e. singlets \({\bf 1}\) under it. Some fermionic fields (the left-handed fermions) are doublets \({\bf 2}\) under the electroweak \(SU(2)\) factor, others (the right-handed ones) are singlets \({\bf 1}\). And all of these bits and pieces carry various values of the hypercharge \(Y\) under the \(U(1)\) factor of the Standard Model group.
You may feel it's a contrived setup. The gauge group has three factors. Why not one? And the quarks and leptons fit into several separate representations. Why not one or at least fewer? Well, the questions "why" may be legitimized because theories where the spectrum is simpler – more unified – exist.
The oldest example is the Georgi-Glashow or minimal \(SU(5)\) model. In 1974, when I was finding diapers helpful, they figured out that the Standard Model gauge group may be embedded into a simple group. The fermions may be organized into representations of this group and the Lagrangian may be written in such a way that we may obtain the Standard Model after a symmetry breaking that is very analogous to the electroweak symmetry breaking by the usual Higgs field in the Standard Model.
How does it work?
Well, \(SU(2)\) and \(SU(3)\) may be visualized as a group of \(2\times 2\) or \(3\times 3\) matrices, respectively. You may take a \(2\times 2\) matrix and a \(3\times 3\) matrix and turn them into blocks on the block diagonal of a larger, \(5\times 5\) matrix, cleverly exploiting the ingenious identity \(2+3=5\). The resulting \(5\times 5\) matrix will still be unitary. Moreover, if the blocks' determinants are one, the determinant of the \(5\times 5\) matrix will also be one. So \(SU(2)\times SU(3)\) is isomorphic to a subgroup of \(SU(5)\).
You may embed the hypercharge \(U(1)\) into the \(SU(5)\), too, by allowing matrices whose blocks' determinants aren't one but whose total determinant is one, namely matrices of the form\[
\begin{pmatrix}
f^{+3}&0&0&0&0\\
0&f^{+3}&0&0&0\\
0&0&f^{-2}&0&0\\
0&0&0&f^{-2}&0\\
0&0&0&0&f^{-2}
\end{pmatrix}, \quad f=\exp(i\alpha)
\] Both blocks are unitary matrices with determinants whose absolute value equals one. The blocks belong to \(U(2)\) and \(U(3)\), respectively, but not to \(SU(2)\) and \(SU(3)\). However, the \(5\times 5\) matrix does have the determinant \(f^{3+3-2-2-2}=f^0=1\).
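As a tiny numerical sanity check of that statement (my own illustration, not from the post), the determinant of such a diagonal matrix is indeed one for any value of \(\alpha\):

import numpy as np

alpha = 0.7                                   # arbitrary test angle
f = np.exp(1j * alpha)
U = np.diag([f**3, f**3, f**-2, f**-2, f**-2])
print(np.linalg.det(U))                       # ~ 1 + 0j, up to rounding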
Great. The dimension of \(SU(5)\) is \(5^2-1=24\). So it has 24 generators. The Standard Model gauge group only has \(8+3+1=12\) generators, so there are 12 new generators in \(SU(5)\). These generators – and their corresponding new gauge bosons – basically transform as the \(2\times 3\) complex rectangles under \(SU(2)\times SU(3)\) outside the diagonal. We haven't observed these W-like bosons, so they must be heavy if they exist. Just like the Standard Model's W-bosons are comparably heavy to the Higgs boson, the new bosons – X-bosons and Y-bosons, as they're sometimes called – should be comparably heavy to the new generalized Higgs bosons that break \(SU(5)\).
This mass scale is likely to be huge, \(10^{13}-10^{16}\GeV\) or so, because with this rather extreme choice, one explains the values of the \(SU(3),SU(2),U(1)\) couplings that need to unify into a single \(SU(5)\) gauge coupling at the symmetry breaking scale. Just like the Standard Model uses the Higgs doublet, the Georgi-Glashow theory may use some more complicated representations of \(SU(5)\) with a potential that wants to break the gauge group from \(SU(5)\) to the Standard Model group.
The fermions may be arranged to \(SU(5)\) representations, too. Each generation of leptons and quarks in the Standard Model contains 15 (or 16 if we include a right-handed neutrino) two-component fermionic fields. They get arranged to \({\bf 5}\oplus \bar{\bf 10}\), the sum of the fundamental 5-component representation of \(SU(5)\) and the antisymmetric 2-index tensor with \(5\times 4 / 2\times 1 = 10\) components (complex conjugated, therefore the bar).
There are actually two ways to clump the numerous lepton-and-quark representations into these 5- and 10-dimensional representations of \(SU(5)\). Georgi and Glashow found the old way where the lepton doublet teams up with the right-handed down-quark singlet to create the 5-dimensional representation, and the remaining fermions belong to the 10-dimensional one.
Surprisingly, people had to wait until 1982–1984, when the flipped \(SU(5)\) model was found by Stephen Barr, Dimitri Nanopoulos, and with some important additions by Antoniadis, Ellis, and Hagelin, the U.S. presidential candidate for the Maharishi-affiliated Natural Law Party. In the flipped \(SU(5)\) models, some pairs of representations are, you know, flipped. So the 5-dimensional representation arises from the lepton doublet and the right-handed up-quark (not down-quark) singlet. The correct hypercharges are obtained as long as you normalize the \(5\times 5\) matrix displayed above differently and correctly.
These original models weren't supersymmetric. You can add supersymmetry and this gives the models some virtues that go beyond the virtues of grand unification and supersymmetry separately. One of them is the gauge coupling unification that seems to work rather well in SUSY GUT. Two straight lines (graphs of \(1/g^2\) as a function of \(E\) or \(\Lambda\)) almost always intersect somewhere in the plane. But a third line only goes through the two lines' intersection if one real parameter – e.g. the slope of the third line – has the right value. In the minimal non-supersymmetric \(SU(5)\) model, the three lines almost intersect at one point but not quite. In the supersymmetric version of this model (and many models that behave almost indistinguishably), the lines exactly intersect, within the known error margins. So at least one real parameter measured experimentally indicates that the three gauge groups want to get unified, and within a supersymmetric theory.
Also, people found other grand unified groups that may do the job. \(SU(5)\) may be represented as a \(10\times 10\) matrix: recall that the complex number \(a+bi\) basically behaves under multiplication just like the \(2\times 2\) matrix \(((a,b),(-b,a))\). So \(SU(5)\) may be embedded into \(SO(10)\), a somewhat larger group. Its additional advantage is that the 5- and 10-dimensional representations of \(SU(5)\) may be embedded into a single irreducible representation of \(SO(10)\), the 16-dimensional spinor – a right-handed neutrino becomes mandatory in \(SO(10)\).
Alternatively, you may pick an even larger simple group, the exceptional group \(E_6\), where the fermions come from the fundamental 27-dimensional representation. There exist various schemes to break the symmetries by Higgs fields in various large representations of the gauge group, or by stringy Wilson lines that don't behave as simple fields in point-like particle-based field theories, and there's a lot of fun to study here.
The most well-known experimental prediction of grand unified theories is the proton decay. The conservation of the lepton and baryon numbers \(L\) and \(B\) no longer holds so with an intermediate X-boson or a Y-boson, you allow some processes in which the proton turns into the (equally charged, equally spinning) positron – plus pure energy (some particles, like photons, whose total conserved charges are zero and the spin is integer).
In Japan, huge tanks with purified water have waited for flashes carrying the gospel of the proton decay for some 20 years. They have found nothing so far. They could have thrown a light-emitting fish into the water tank and receive the Nobel prize but their scientific pride tells them not to use Al Gore's methods to win the prize. ;-) If I trust Wolchover's number which is hopefully up-to-date, the lifetime of the proton is greater than\[
t_{\rm proton} \gt 1.6\times 10^{34}\,{\rm years}.
\] You see that the proton lifetime is a bit longer than the average human's life expectancy. Well, even since the Big Bang, only a very tiny fraction of the protons had enough time to decay (not more than 1 proton in a gram of matter has decayed, roughly speaking) – if they can decay at all. The rather stable proton, as indicated by the experiments, excludes some very specific models based on the grand unification. A picture from the Quanta Magazine:
You see some theories that are either killed (red) or remain viable (green). Supersymmetry tends to prolong the predicted lifetime of the proton which is a good thing. As you can read in Wolchover's article or Brian Greene's
The Elegant Universe, Glashow immediately abandoned grand unification in all forms once their first model, minimal \(SU(5)\), was ruled out by the proton decay experiments.
That's a very unwise reaction, I think, because there's really no good reason why exactly the original Georgi-Glashow model should be the right grand unified model in Nature. (In the 1960s, Glashow wasn't lazy and tried a different model to represent the W-bosons before he ended up with the Standard Model representation. In the case of grand unification, he chose the strategy try-fail-and-relax.) Stephen Barr, the original father of the flipped \(SU(5)\) model, describes the so far null results of the search for the proton decay as follows. You're waiting for your wife and she's already some 10 minutes or 1 hour late. Will you already call the company to organize the funeral? Is Glashow organizing the funerals for his wife this quickly?
There are lots of interesting ideas in grand unification – various ways to deal with the proton decay, low neutrino masses, the doublet-triplet splitting problem etc. And grand unified theories may be naturally realized in several major classes of semi-realistic string compactifications which bring some even better (and perhaps more viable) possibilities to the research.
Grand unification is natural in string theory – but string theory may also lead to models (especially various braneworlds) which don't incorporate the idea of grand unification, at least not in any obvious way. Given the fact that I feel almost certain about the validity of string theory, do I believe in grand unification? I probably think it's more likely than not. But I am in no way certain that grand unification is realized in Nature.
However, I am sure that it's wrong to say that the theory was thrown into "limbo". Just look at the charts above taken from the Quanta Magazine. A significant portion (about one-half) of the theory bars remains green. The wife hasn't arrived in the first 1/2 of the possible times when she could have arrived. It's not a terribly strong piece of evidence that she is dead. Maybe it's sensible and ethical to only plan the funeral party once your wife is really falsified and you see her dead body. And of course, Nature may prefer variations of the grand unification that make the proton decay significantly more long-lived, perhaps by some extra discrete symmetries or other structures. Most of these sub-theories' parameter spaces could have been basically untouched by the proton decay experiments.
So be sure that phenomenologists keep on working on grand unified theories. None of them is terribly sure about the right value of the proton lifetime. But the proton's lifetime may be finite. I think that most phenomenologists believe that it is. If neither the baryon number nor the lepton number \(B,L\) is a gauge symmetry, and they're probably not, then the proton may be considered an ultratiny black hole which may evaporate – decay – and only the truly conserved, gauged charges are conserved. So it should be able to decay to the positron plus neutral stuff (such as a photon or two), at least by some (very slowly acting) Planck-scale operators.
It would be extremely sloppy to evaluate the current situation in the proton decay experiment by saying that the proton is certainly stable – or that there can't be any grand unification in Nature.
|
DBSCAN Engine
Given the graph, we extract the alarms from all the vertices and use these as points as input to the DBSCAN algorithm.
DBSCAN requires a constant \(\epsilon\) and a distance function, which we define as follows:
where:
\(a_{1}\) and \(a_{2}\) are the points representing the alarms
\(\alpha \in (0, \infty)\) is a scaling constant (directly related to \(\epsilon\))
\(\beta \in [0,1]\) is a weighting constant
When \(\beta\) is closer to 0, more weight is given to the temporal component
When \(\beta\) is closer to 1, more weight is given to the spatial component
\(t(a_{k})\) returns the time (timestamp in seconds) of the last occurrence of the given alarm
\(dg(a_{i}, a_{j})\) returns the normalized distance on the shortest path between the vertices for \(a_{i}\) and \(a_{j}\)
If both alarms are on the same vertex, then the distance is 0
If there is no path between both alarms, then the distance is \(\infty\)
In simpler terms, we can think of the distance function as taking a weighted combination of both the distance in time and in space.
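For illustration only, here is a minimal Python sketch of a distance of this shape, using the default constants listed below. The exact way the engine combines and scales the two components is not reproduced here, so the particular combination used in this sketch is an assumption, not the engine's actual definition:

import math

def alarm_distance(dt_seconds, graph_dist, alpha=144.47, beta=0.55):
    # Hypothetical weighted space-time distance between two alarms.
    # dt_seconds: |t(a1) - t(a2)|; graph_dist: normalized shortest-path
    # distance between the alarms' vertices (0 for the same vertex,
    # math.inf if there is no path). The real engine's formula may differ.
    return alpha * math.sqrt(beta * graph_dist**2 + (1.0 - beta) * dt_seconds**2)

print(alarm_distance(0.0, 0.0))        # identical time and vertex -> 0.0
print(alarm_distance(10.0, math.inf))  # disconnected vertices -> inf (never clustered)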
We set the constants with the following defaults:
\(\epsilon = 100\)
\(minpts = 1\)
\(\alpha = 144.47\)
\(\beta = 0.55\)
These were derived empirically during our testing.
Let’s assume that we have the following graph:
Let’s start determining the distance between \(a_{1}\) and \(a_{2}\). We can calculate the time component with:
And given that \(a_{1}\) and \(a_{2}\) are on the same vertex, the spatial component is simply zero:
Placing these results in the original equation gives us:
and \(d(a_{1}, a_{2}) < \epsilon\), so the alarms will be clustered together.
Now let’s determine the distance between \(a_{3}\) and \(a_{4}\). We can calculate the time component with:
To calculate the spatial distance between \(a_{3}\) and \(a_{4}\), we sum up the weights on the edges along the shortest path and divide this result by the default weight (=100), so:
Placing these results in the original equation gives us:
and \(d(a_{3}, a_{4}) < \epsilon\), so the alarms will be clustered together.
Now let’s determine the distance between \(a_{2}\) and \(a_{3}\). We can calculate the time component with:
The value of the spatial component is:
Placing these results in the original equation gives us:
and \(d(a_{2}, a_{3}) > \epsilon\), so the alarms will
not be clustered together.
The DBSCAN algorithm performs well when there are less than 500 candidate alarms. It has a worst-case complexity of \(O(n^2)\).
Note that alarms are only considered to be candidates for correlation when they have been created and/or updated in the last 2 hours (configurable). This means that the engine can still be used on systems with more than 500 active alarms, since many of these will age out over time.
|
Ex.13.1 Q2 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question
A vessel is in the form of a hollow hemisphere mounted by a hollow cylinder. The diameter of the hemisphere is \(14\;\rm{cm}\) and the total height of the vessel is \(13 \;\rm{cm}\). Find the inner surface area of the vessel.
Text Solution What is known?
The diameter of the hemisphere is \(14\rm{ cm}\) and total height of the vessel is \(13\rm {cm.}\)
What is unknown?
The inner surface area of the vessel.
Reasoning:
Create a figure of the vessel according to the given description
From the figure it’s clear that the inner surface area of the vessel includes the CSA of the hemisphere and the cylinder.
Inner surface area of the vessel \(= \) CSA of the hemisphere \(+ \) CSA of the cylinder
We will find the area of the vessel by using formulae;
CSA of the hemisphere \( = 2\pi {r^2}\)
where \(r\) is the radius of the hemisphere
CSA of the cylinder \( = 2\pi rh\)
where \(r\) and \(h\) are the radius and height of the cylinder respectively.
Height of the cylinder \(=\) Total height of the vessel \(–\) height of the hemisphere
Steps:
Diameter of the hemisphere,\(d = 14cm\)
Radius of the hemisphere, \(\begin{align}r = \frac{{14\rm{cm}}}{2} = 7\rm{cm}\end{align}\)
Height of the hemisphere \(=\) radius of the hemisphere, \(r = 7\rm{cm}\)
Radius of the cylinder,\(r = 7cm\)
Height of the cylinder \(=\) Total height of the vessel \(–\) height of the hemisphere
\[h = 13cm - 7cm = 6cm\]
Inner surface area of the vessel \(=\) CSA of the hemisphere \(+\) CSA of the cylinder
\[\begin{align}&= 2\pi {r^2} + 2\pi rh\\&= 2\pi r\left( {r + h} \right)\\&= 2 \times \frac{{22}}{7} \times 7{cm}\left( {7{cm} + 6{cm}} \right)\\&= 2 \times 22 \times 13{c{m^2}}\\&= 572{c{m^2}}\end{align}\]
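A quick check of the arithmetic (a short sketch using the same approximation \(\pi \approx \frac{22}{7}\) as above):

from fractions import Fraction

r, h = 7, 13 - 7                 # radius and cylinder height in cm
pi_approx = Fraction(22, 7)      # approximation of pi used in the text
area = 2 * pi_approx * r * (r + h)
print(area)                      # -> 572 (cm^2)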
|
Your reasoning is correct, but you need to go a little further to deduce the answer.
The question does not say but we must assume that the mean position O is the same for P and Q. The phase difference between them remains the same, but the separation changes.
Suppose that, at some instant, P and Q are moving in the same direction. When the separation is maximum they are moving with the same speed. If the speed of one was greater than the other, they would get closer together or further apart. For each, speed decreases with distance from O in the same manner, because they have the same amplitude. Whichever is closer to O moves faster. They will have the same speed when they are the same distance from O. So at maximum separation they will be positioned symmetrically about O.
If the maximum separation is $A\sqrt2$ then each is $\frac{A\sqrt2}{2}=\frac{A}{\sqrt2}$ from O. The phase of each is then $\phi$ where
$A\sin\phi=\frac{A}{\sqrt2}$ $\sin\phi=\frac{1}{\sqrt2}=\sin45^{\circ}$ $\phi=45^{\circ}$.
So the phase difference at maximum separation (and at all separations) is $2\phi=90^{\circ}$.
Alternative solution :
$x_1=A\sin(\omega t+\phi)$
$x_2=A\sin(\omega t)$ $x_1-x_2=2A\cos(\frac{\omega t+\phi+\omega t}{2})\sin(\frac{\omega t+\phi - \omega t}{2})=2A\cos(\omega t+\frac12 \phi)\sin(\frac12 \phi)$.
$\sin(\frac12 \phi)$ is a constant, so the maximum possible value of $x_1-x_2$ occurs when $\cos(\omega t+\frac12 \phi)=1$. Then
$2A\sin(\frac12 \phi)= A\sqrt2$ $\sin(\frac12 \phi)=\frac{\sqrt2}{2}=\frac{1}{\sqrt2}=\sin 45^{\circ}$ $\phi=90^{\circ}$.
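As a quick numerical sanity check of this result (a sketch with $A=1$ and $\omega=1$ chosen arbitrarily), the maximum of $x_1-x_2$ for a $90^{\circ}$ phase difference is indeed $A\sqrt2$:

import numpy as np

A, omega, phi = 1.0, 1.0, np.pi / 2            # 90 degree phase difference
t = np.linspace(0.0, 4.0 * np.pi, 100001)
sep = A * np.sin(omega * t + phi) - A * np.sin(omega * t)
print(sep.max(), A * np.sqrt(2.0))             # both ~ 1.41421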
|
What causes the melting and boiling points of noble gases to rise when the atomic number increases? What role do the valence electrons play in this?
The melting and boiling points of noble gases are very low in comparison to those of other substances of comparable atomic and molecular masses. This indicates that only weak van der Waals forces or weak London dispersion forces are present between the atoms of the noble gases in the liquid or the solid state.
The van der Waals force increases with the increase in the size of the atom, and therefore, in general, the boiling and melting points increase from $\ce{He}$ to $\ce{Rn}$.
Helium boils at $-269\ \mathrm{^\circ C}$. Argon has a larger mass than helium and has larger dispersion forces. Because of the larger size, the outer electrons are less tightly held in the larger atoms, so that instantaneous dipoles are more easily induced, resulting in greater interaction between argon atoms. Therefore, its boiling point ($-186\ \mathrm{^\circ C}$) is higher than that of $\ce{He}$.
Similarly, because of increased dispersion forces, the boiling and melting points of monoatomic noble gases increase from helium to radon.
For more data on the melting and boiling points of noble gas compounds, read this page.
Other answers have mentioned
that dispersion forces are the key to answering the question but not how they increase from helium to radon (or let’s take xenon because that’s not radioactive so I feel safer breathing it).
The larger the mass of a nucleus the more protons are in there, and the more protons in a nucleus the more electrons are around the outside. Traditionally, one thought of electrons orbiting the nucleus rather like satellites orbiting a planet on more or less fixed orbits at certain heights. But that picture is wrong. It is much better to consider electrons as waves that completely surround the nucleus. If one were to translate these waves into particles (because of the wave-particle dualism at quantum levels), all one would get would be probabilities of finding specific electron $e$ at specific location $x$.
For a neutral atom that is surrounded by nothing, these probabilities depend only on the wave function, and are inherently centrosymmetric or anticentrosymmetric, leading to a net charge distribution of zero. In a sample of xenon gas, however, other atoms approach the atom we are observing. Consider an atom approaching our xenon atom from ‘above’ (i.e. perpendicular to our viewpoint). While the atom as a whole is neutral, the cloud of electrons surrounding the nucleus is negatively charged. This new negative charge changes the potential energy the electrons in our observed atom are perceiving: While we originally had a centrosymmetric potential distribution (decreasing positive charge intensity from the nucleus) we now have a second negative source at 12 o’clock. Therefore, the probability distributions will shift ever so slightly and it will be slightly more likely to detect our electron $e$ at position $y$ below the nucleus rather than at position $z$ above the nucleus.
Note that this simplification relies on the freezing of time at a certain moment when the other atom is approaching and a certain controlled environment that I have invented for example purposes.
With the electrons now more likely to be at $y$ rather than $z$, we can say that we have created a
spontaneous or an induced dipole. A mild positive charge is now pointing towards the other atom and ever so slightly attracting it. This will, if you advance the time flow by one spec, create another induced dipole in the originally approaching atom plus further in every other atom that is close around. We cannot freeze this picture though. Every infinitesimal change in position, movement direction or rotation will change the entire picture, meaning that our induced dipole is extremely short-lived.
It is only the combination of all these induced dipoles and their slight attraction that pulls the different atoms together. Since they rely on the electron distribution changing, the more electrons we have the stronger these induced dipoles can be and the more force can be exerted between atoms. Since the number of electrons loosely correlates with mass (and strictly correlates with nuclear charge), larger atoms are said to display stronger van der Waals forces than smaller ones.
Valence electrons do not have much to do with this, as the outer shell is closed. As the other answer mentioned, dispersion forces are the ones responsible for any interaction between these atoms.
The size dependence therefore is directly coming from the size dependence of dispersion forces:
In a very simplistic way, a random charge fluctuation can polarize the otherwise perfectly apolar atoms. This induced dipole moment is then responsible for the dispersion interactions. The polarizability of an atom increases (it is easier to polarize) as the atomic number increases, and therefore the interactions in noble gases will reflect this behavior.
As mentioned in other answers the dispersion force is responsible for noble gases forming liquids. The calculation of the boiling points is now outlined after some general comments about the dispersion force.
The dispersion force (also called London, charge-fluctuation, induced-dipole-induced-dipole force) is universal, just like gravity, as it acts between all atoms and molecules. The dipole forces can be long-range, >10 nm down to approx 0.2 nm depending on circumstances, and can be attractive or repulsive.
Although the dispersion force is quantum mechanical in origin it can be understood as follows: for a non-polar atom such as argon the time average dipole is zero, yet at any instance there is a finite dipole given by the instantaneous positions of the electrons relative to the nucleus. This instantaneous dipole generates an electric field that can polarise another nearby atom and so induce a dipole in it. The resulting interaction between these two dipoles gives rise to an instantaneous attractive force between the two atoms, whose time average is not zero.
The dispersion energy was derived by London in 1930 using quantum mechanical perturbation theory. The result is
$$U(r)=-\frac{3}{4}\frac{\alpha_0^2I}{(4\pi\epsilon _0)^2r^6}=-\frac{C_{\mathrm{disp}}}{r^6}$$
where $\alpha_0$ is the electronic polarisability, $I$ the first ionisation energy, $\epsilon_0$ the permittivity of free space and $r$ the separation of the atoms. The electronic polarisability $\alpha_0$ arises from the displacement of an atom's electrons relative to the nucleus and it is the constant of proportionality between the induced dipole and the electric field $E$, viz., $\mu_{\mathrm{ind}} = \alpha_0 E$. The polarisability has units of $\pu{J-1 C2 m2}$, which means that in SI units $\alpha_0/(4\pi\epsilon_0)$ has units of $\pu{m3}$ and this polarisability is in effect a measure of electronic volume, or put another way $\alpha_0 = 4\pi\epsilon_0r_0^3$ where experimentally it is found that $r_0$ is approximately the atomic radii. The ionisation energy $I$ arises because to estimate $r_0$ a simple model of an atom is used to calculate the orbital energy and hence radius and in doing so the energy is equated to the ionisation energy since this can be measured.
As can be seen from the formula the energy depends on the product of the square of the polarisability, i.e. volume of molecule or atom and its ionisation energy, and also on the reciprocal of the sixth power of the separation of the molecules/atoms. In a liquid of noble gases this separation may be taken to be the atomic radius, $r_0$. Thus the dependence is much more complex than just size, see table of values below. The increase in polarisability as the atomic number increases, is offset somewhat by the reduction in ionisation energy and increase in atomic radius.
If experimental values are put into the London equation then the attractive energy can be calculated. In addition the boiling point can be estimated by equating the London energy with the average thermal energy as $U(r_0)=3k_\mathrm{B}T/2$ where $k_\mathrm B$ is the Boltzmann constant and $T$ the temperature. The relevant parameters are given in the table below, with values in parentheses being experimental values:
[1]
$$\begin{array}{c|c|c|c|c|c} \text{Noble gas} & (\alpha_0/4\pi\epsilon_0)~/~\pu{10^{-30}m^3} & I~/~\pu{eV} & r_0~/~\pu{nm} & C_\mathrm{disp}~/~\pu{10^{-79} J m6} & T_\mathrm{b}~/~\pu{K} \\ \hline \ce{Ne} & 0.39 & 21.6 & 0.308 & 3.9~(3.8) & 22~(27) \\ \ce{Ar} & 1.63 & 15.8 & 0.376 & 50~(45) & 85~(87) \\ \ce{Xe} & 4.01 & 12.1 & 0.432 & 233~(225) & 173~(165) \end{array}$$
The fit to the data is very good, possibly this is fortuitous, but these are spherical atoms showing only dispersion forces and a good correlation to experiment is expected. However, there are short-range repulsive forces that are ignored, as well as higher-order attractive forces. Nevertheless it does demonstrate that dispersion forces can account for the trend in boiling points quite successfully.
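As a sketch (not taken from the reference), the computed $C_\mathrm{disp}$ and $T_\mathrm{b}$ columns can be approximately reproduced from $\alpha_0$, $I$ and $r_0$ by coding the London coefficient for identical atoms and equating $C_\mathrm{disp}/r_0^6$ with $\tfrac32 k_\mathrm{B}T$; the physical constants are standard values:

import numpy as np

eV = 1.602176634e-19    # J per eV
k_B = 1.380649e-23      # Boltzmann constant, J/K

# gas: (alpha0/(4*pi*eps0) in m^3, ionisation energy in eV, r0 in m)
gases = {"Ne": (0.39e-30, 21.6, 0.308e-9),
         "Ar": (1.63e-30, 15.8, 0.376e-9),
         "Xe": (4.01e-30, 12.1, 0.432e-9)}

for name, (alpha_vol, I_eV, r0) in gases.items():
    C_disp = 0.75 * alpha_vol**2 * I_eV * eV    # (3/4) * alpha'^2 * I for identical atoms
    T_b = 2.0 * C_disp / (3.0 * k_B * r0**6)    # from U(r0) = (3/2) k_B T_b
    print(f"{name}: C_disp = {C_disp / 1e-79:.1f}e-79 J m^6, T_b = {T_b:.0f} K")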
Israelachvili, J. N. Intermolecular and Surface Forces,3rd ed.; Academic Press: Burlington, MA, 2011; p 110.
|
Similar questions have been asked before, you may search. (I'm sorry but I cannot refer to one here)
You want to
identify an analog LTI system from analysing its response $y(t)$ to a known input $x(t)$.
One method of analysis involves the use of continuous Fourier transforms of the input and output signals. It can be shown that (for a stable system)$$H(j\Omega) = \frac{Y(j\Omega)}{X(j\Omega)} $$
Considering a continuous time analog LTI system, this method requires some convenient techniques to compute the Fourier transforms of the input and output signals. This method has practical limitations due to understandable reasons.
With the advent of digital computers and the availability of dedicated hardware, a more implementable approach was developed during the rise of the digital signal processing era, based on the
sampling theorem to represent the continuous time signals by their samples, i.e., the continuous time analog system is represented in the form of an equivalent (and in a one to one relation to its continuous time counterpart) digital LTI system, as long as some necessary conditions are met.
An analysis based on the taken samples of input and output signals, however, should be handled with care, as the most critical element of success in this method relies on the fact that the continuous time system's frequency response function $H(j\Omega)$ be
bandlimited to half of the sampling frequency $\Omega_s = 2\pi F_s$, so that it can be digitally represented by an equivalent digital LTI filter with DTFT $H(e^{j\omega})$.
If you know that this condition is satisfied, then there are other details such as the fact that Fourier transforms provide information on
steady-state behaviour and not on the dynamic transients that will happen in any finite length measurements. (That can be avoided with longer recording durations, as you intend to do.)
Also note that the LTI analog system must be
stable so that its CTFT $H(j\Omega)$ exists. Most often it will be stable I assume, but it's still important to consider.
Finally, any noise and/or distortions on both the input and output signals not only prevent high precision computations but can even lead to misleading results.
So I suggest you begin with a
known setup first and apply your method to see if it can also identify that known LTI system. If yes, then you are ok.
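As a sketch of such a test with a known setup (assuming a sampled setting and using SciPy; the Butterworth filter, the impulse input and the sampling rate are arbitrary stand-ins, not part of the original suggestion):

import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate, Hz
b, a = signal.butter(4, 100.0, fs=fs)         # the "known setup": 4th-order low-pass

n = 2**14
x = np.zeros(n); x[0] = 1.0                   # known input (a unit impulse, for simplicity)
y = signal.lfilter(b, a, x)                   # recorded output of the known system

H_est = np.fft.rfft(y) / np.fft.rfft(x)       # estimate H as Y/X
f = np.fft.rfftfreq(n, d=1.0 / fs)
_, H_ref = signal.freqz(b, a, worN=f, fs=fs)  # reference response of the known system

print(np.max(np.abs(H_est - H_ref)))          # small -> the method recovers the known system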
|
Hello Good people of Emacs!
I'm having trouble exporting unicode math symbols from buffer (org-mode) to pdf file.
1. Problem Description:
Here is source code demonstration:
#+TITLE: Unicode characters export test
#+AUThor:
#+date:

Unicode characters:
ℝ ℤ ℕ ⇒ ∈ ∀

Same symbols in latex format:
$$\Bbb{R} \Bbb{Z} \Bbb{N} \Rightarrow \in \forall$$
Produced .tex file includes unicode symbols:
% Created 2016-03-04 Pá 21:01
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\date{}
\title{Unicode characters export test}
\hypersetup{ pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 24.4.1 (Org mode 8.2.10)}}
\begin{document}
\maketitle
\tableofcontents

Unicode characters:
ℝ ℤ ℕ ⇒ ∈ ∀

Same symbols in latex format:
$$\Bbb{R} \Bbb{Z} \Bbb{N} \Rightarrow \in \forall$$
% Emacs 24.4.1 (Org mode 8.2.10)
\end{document}
Pdf file does not :
2. Things I have tried so far: XeLaTeX and unicode-math: This is included in the answer from Rasmus. Here I have to admit: I'm not using the development version of org-mode he's mentioning (failed to install it). I've tried xelatex and unicode-math anyway. My version of Org mode is 8.2.10.
#+latex_compiler: xelatex
#+latex_header: \usepackage{libertine}
#+latex_header: \usepackage{unicode-math}
Including this in the file produces a not-so-lovely message instead of a pdf file.
org-latex-compile: PDF file ./unicode_export_test.pdf wasn't produced: [package error]
I have checked for correct unicode-math installation:
~ $ kpsewhich unicode-math.sty
/usr/share/texlive/texmf-dist/tex/latex/unicode-math/unicode-math.sty
Installation of Xelatex is the newest version:
~ $ sudo apt-get install texlive-xetex
Reading package lists... Done
Building dependency tree
Reading state information... Done
texlive-xetex is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.
Install development version of org-mode: (failed) Following the "How do I keep current with bleeding edge development?" tutorial, I have been able (I think) to download it and compile it. Since I have no idea how to tell Emacs to run this version, Emacs runs Org-mode version 8.2.10 (release_8.2.10 @ /usr/local/share/emacs/24.4/lisp/org/). I have tried to add the following lines to my config.
(add-to-list 'load-path (expand-file-name "~/elisp/org-mode/lisp"))
(remove 'load-path (expand-file-name "/usr/local/share/emacs/24.4/lisp/org/"))
(add-to-list 'auto-mode-alist '("\\.\\(org\\|org_archive\\|txt\\)$" . org-mode))
(require 'org)
Without pleasing result.
Searching the internet: a solution was not found. Beating myself up for being stupid: didn't help.
3. Question: How do I export unicode characters from org-mode to pdf? Do I have to configure some org-mode variable? Compile with a different LaTeX interpreter? Any other ideas? If the development version fixes this, how can I install it?
|
You have forgotten that there is work done by the battery after $t=0$, so that there is energy lost before S1 is opened and S2 is closed.
The work done by the battery to transfer charge $Q$ onto the 1st capacitor is $W=QV=\frac{Q^2}{C}$.
The final charges on the capacitors are $\frac12Q, \frac14Q, \frac18Q, \frac{1}{16}Q, ...$ The total energy stored in all the capacitors at this time is $$E_{\infty}=\frac{Q^2}{2C}[(\tfrac12)^2+(\tfrac14)^2+(\tfrac{1}{8})^2+(\tfrac{1}{16})^2 +...]=\frac{Q^2}{6C}$$ The total loss of energy after closing S1 is $$W-E_{\infty}=\frac{5Q^2}{6C}=\frac56CV^2=\frac{20}{6}\pi\epsilon_0 RV^2$$
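A quick numerical check of the series (a sketch, assuming the charges halve at each step and all capacitances equal $C$; units are chosen so that $Q = C = 1$):

from fractions import Fraction

E_stored = sum(Fraction(1, 2) * Fraction(1, 2**k)**2 for k in range(1, 60))
W_battery = Fraction(1)                 # W = Q^2/C
print(float(E_stored))                  # -> 0.1666... = 1/6
print(float(W_battery - E_stored))      # -> 0.8333... = 5/6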
So I think option A should be the correct answer. The marking scheme appears to be wrong.
|
Ex.13.4 Q2 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question
The slant height of a frustum of a cone is \(4\,\rm{cm}\) and the perimeters (circumference) of its circular ends are \(18 \,\rm{cm}\) and \(6\, \rm{cm.}\) Find the curved surface area of the frustum.
Text Solution What is Known?
Slant height of a frustum of a cone is \(4\,\rm{cm}\) and the circumference of its circular ends are \(18\, \rm{cm}\) and \(6 \,\rm{cm.}\)
What is unknown?
The curved surface area of the frustum.
Reasoning:
Draw a figure to visualize the shape better
Using the circumferences of the circular ends of the frustum to find the radii of its circular ends
Circumference of the circle \( = 2\pi r\)
where
\(r\) is the radius of the circle.
We will find the CSA of the frustum by using formula;
CSA of frustum of a cone \( = \pi \left( {{r_1} + {r_2}} \right)l\)
where
\(r_1, r_2\) and \(l\) are the radii and slant height of the frustum of the cone respectively.
Steps:
Slant height of frustum of a cone, \(l = 4cm\)
Circumference of the larger circular end, \({C_1} = 18cm\)
Radius of the larger circular end,\(\begin{align} {r_1} = \frac{{{C_1}}}{{2\pi }} = \frac{{18cm}}{{2\pi }} = \frac{9}{\pi }cm\end{align} \)
Circumference of the smaller circular end, \({C_2} = 6cm\)
Radius of the smaller circular end,\(\begin{align} {r_2} = \frac{{{C_2}}}{{2\pi }} = \frac{{6cm}}{{2\pi }} = \frac{3}{\pi }cm\end{align} \)
CSA of frustum of a cone \( = \pi \left( {{r_1} + {r_2}} \right)l\)
\[\begin{align}&= \pi \left( {\frac{9}{\pi }cm + \frac{3}{\pi }cm} \right) \times 4cm\\&= \pi \times \frac{{12}}{\pi }cm \times 4cm\\&= 48c{m^2}\end{align}\]
The curved surface area of the frustum is \(48 \;\rm cm^2.\)
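A quick numerical check of the same computation (a short sketch; the \(\pi\) factors cancel, so the result is exact up to floating-point rounding):

from math import pi

l = 4.0
r1 = 18.0 / (2 * pi)         # from C1 = 2*pi*r1
r2 = 6.0 / (2 * pi)          # from C2 = 2*pi*r2
print(pi * (r1 + r2) * l)    # -> 48.0 cm^2 (up to rounding)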
|
A quadratic equation solver is a free step-by-step solver for solving the quadratic equation to find the values of the variable. It uses the quadratic equation formula:
The quadratic equation solution is obtained using the quadratic formula
\(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\)
If an input is given, it easily shows the solution of the given equation. Use the quadratic equation solver to check your answers, and use it as a reference when you are finding the unknown values of a variable. When you solve quadratic equations numerically, you can check with the solver whether your answer is correct or incorrect. Once you find that your answers are correct, you are on the right path to solving algebraic equations. But if you find that your answers are incorrect, you should figure out where the mistakes were made. The online quadratic equation solver helps to find the exact solution of a quadratic equation.
Solve the Quadratic Equation
Solve an equation in the form of \(ax^2 + bx + c = 0\)
Enter the values of a, b and c in a quadratic equation solver.
How does Quadratic Equation Solver Work?
The input for the quadratic equation solver is of the form \(ax^2 + bx + c = 0\), where \(a\) is not zero, \(a\neq 0\).
If the value of \(a\) is zero, then the equation is not a quadratic equation.
The quadratic equation solution is obtained using the quadratic formula\(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\)
Normally, we get two solutions because of the plus-or-minus symbol “\(\pm\)”. You need to do both the addition and the subtraction operation.
The part of the equation \(b^{2}-4ac\) is called the “discriminant” and it produces the different types of possible solutions:
Case 1: When the discriminant is positive, you get two real solutions
Case 2: When the discriminant is zero, it gives only one solution
Case 3: When the discriminant is negative, you get complex solutions
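For illustration (not part of the original solver), a short Python sketch that covers all three cases by using the complex square root:

import cmath

def solve_quadratic(a, b, c):
    # Return the two roots of a*x**2 + b*x + c = 0 (requires a != 0).
    if a == 0:
        raise ValueError("not a quadratic equation: a must be non-zero")
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles negative discriminants
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

print(solve_quadratic(1, -3, -10))   # discriminant > 0: roots 5 and -2
print(solve_quadratic(9, 12, 4))     # discriminant = 0: repeated root -2/3
print(solve_quadratic(1, 1, 12))     # discriminant < 0: complex conjugate pair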
Quadratic Equation solver class 10 level helps the students of grade 10 to clearly know about the different cases involved in the discriminant producing different solutions. Here are some of the quadratic equation examples
Quadratic Formula Examples
Example for Case 1: \(b^2 - 4ac > 0\)
Consider the example \(x^2 - 3x - 10 = 0\)
Given data : a =1, b = -3 and c = -10
\(b^2 - 4ac = (-3)^2 - 4(1)(-10) = 9 + 40 = 49\)
\(b^2 - 4ac = 49 > 0\)
Therefore, we get two real solutions
The general quadratic formula is given as
\(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\) \(x=\frac{-(-3)\pm \sqrt{(-3)^{2}-4(1)(-10)}}{2(1)}\) \(x=\frac{3\pm \sqrt{9+40}}{2}\) \(x=\frac{3\pm \sqrt{49}}{2}\) \(x=\frac{3\pm 7}{2}\)
x= 10/2 , -4/2
x= 5, -2
Therefore, the solutions are 5 and -2
Case 2: \(b^2 - 4ac = 0\)
Consider the example \(9x^2 + 12x + 4 = 0\)
Given data : a =9, b = 12 and c = 4
\(b^2 - 4ac = (12)^2 - 4(9)(4) = 144 - 144 = 0\)
\(b^2 - 4ac = 0\)
Therefore, we get only one (repeated) solution
The general quadratic formula is given as\(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\) \(x=\frac{-(12)\pm \sqrt{(12)^{2}-4(9)(4)}}{2(9)}\) \(x=\frac{-12\pm \sqrt{144-144}}{18}\) \(x=\frac{-12\pm \sqrt{0}}{18}\) \(x=\frac{-12}{18}\)
x= -6/9 = -2/3
x= -2/3
Therefore, the solution is -2 / 3
Case 3: \(b^2 - 4ac < 0\)
Consider the example \(x^2 + x + 12 = 0\)
Given data : a =1, b = 1 and c = 12
\(b^2 - 4ac = (1)^2 - 4(1)(12) = 1 - 48 = -47\)
\(b^2 - 4ac = -47 < 0\)
Therefore, we get complex solutions
The general quadratic formula is given as\(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\) \(x=\frac{-(1)\pm \sqrt{(1)^{2}-4(1)(12)}}{2(1)}\) \(x=\frac{-1\pm \sqrt{1-48}}{2}\) \(x=\frac{-1\pm \sqrt{-47}}{2}\) \(x=\frac{-1+i\sqrt{47}}{2}\) and \(x=\frac{-1-i\sqrt{47}}{2}\)
Therefore, the solutions are\(x=\frac{-1+i\sqrt{47}}{2}\) and \(x=\frac{-1-i\sqrt{47}}{2}\)
For more information about quadratic equations and other related topics in mathematics, register with BYJU’S – The Learning App and watch interactive videos.
|
Let $M>0$. $$\sum_{n=M}^{\infty} \frac {1}{(x-n)^2}$$
How do I show this converges uniformly for $x ≤ \frac{|M|}{2}$
My actual question is how do I determine the interval of uniform convergence for general series like this one.
Am I allowed to use the Weierstrass M test? Because the sum is between $M$ and $\infty$.
$\sum_{n=M}^{\infty} \frac {1}{(x-n)^2}≤\sum_{n=M}^{\infty} \frac {1}{n^2}$ which converges so the series converges uniformly for all $x$?
|
Consider the following sequence $a_n$:
\begin{align*} a_0 &= \alpha \\ a_k &= \beta a_{k-1} + \kappa \end{align*}
Now consider the implementation of this sequence via lisp:
(defun an (k)
  (if (= k 0)
      alpha
      (+ (* beta (an (- k 1))) kappa)))
What is the running time to compute
(an k) just as it is written? Also, what would be the maximum stack depth reached? My hypothesis would be that both the running time and stack depth are $O(k)$ because it takes $k$ recursive calls to get to the base case and there is one call per step and one stack push per step.
Is this analysis correct?
|
I have to apologize, in fact the answer to the second question is still unknown. Namely, up to now all known symplectic manifolds of dimension 4 that have negative Euler characteristic are blow ups of ruled surfaces. However it is not known if there are no other examples. I have corrected the answer accordingly.
It is difficult to give a precise answer to this question, but personally I am pretty sure that a random four dimensional manifold has no reason at all to have a symplectic structure. What is true for sure is that symplectic manifolds are clustered among all four-manifolds, namely if you look in some places then the probability of finding a symplectic manifold equals zero. Let me give one example. Let us consider the following two questions:
Question 1. Is the probability for a 4-manifold to have negative Euler characteristic $>0$?
Question 2. Is the probability for a symplectic $4$-manifold to have negative Euler characteristic $>0$?
I am sure that one can give a relatively precise answer to Question 1 and my common sense tells me that since the probability for a number to be negative equals $\frac{1}{2}$, the answer to the question is morally yes.
What is very surprising is that the answer to the second question is probably NO. In 1995 Gompf asked if all such manifolds are blow ups of $S^2 \times \Sigma_g$ in at most $4g-5$ points where $\Sigma_g$ is a surface of genus $g$. This is still an open question.
Added. It seems to me that it would not be unfair to say that the majority of constructions of compact symplectic 4-manifolds come for the moment from algebro-geometric analogues. In particular this is the case for Gompf sum, which is the most used construction. So in some sense the cloud of symplectic manifolds in dimension four that we see now is centred around algebraic surfaces. In particular this fact about symplectic 4-manifolds with negative Euler characteristic is inherited from the theory of algebraic surfaces (though one needs SW theory to prove it for symplectic manifolds). I think no one doubts that algebraic surfaces are not distributed among $4$-manifolds evenly.
I want to add that there is a conjecture saying that four manifolds of constant sectional curvature $-1$ are never symplectic. It is not clear (for me) how well this conjecture is justified but still there are no ideas of how to construct symplectic structures on these manifolds...
|
Question:
How can we solve a first-order differential equation of the form $$ \frac{d}{dt}x(t)=g(x(t),t), $$ with the initial condition $x(t_0)=x_0$, if we cannot solve it analytically?
Example 1:
We want to solve the ordinary differential equation (ODE) \begin{equation} \frac{d}{dt}x(t)=\cos(x(t))+\sin(t)\qquad\qquad\qquad\qquad \label{eq:1} \end{equation} with $x(0)=0$, i.e. we need to find the right function $x(t)$ that fulfills the ODE and the initial condition (IC).
Given the initial condition $x(0)=0$, we want to know $x(t)$ for $t>0$. We will now find an approximate numerical solution of the exact solution by computing the values of the function only at discrete values of $t$.
To do so, we define a discrete set of $t$-values, called grid points, by$$ t_n=t_0+n\cdot h~~~~~\mathrm{with}~~~~n=0,1,2,3,...,N. $$
The distance between two adjacent grid points is $h$. The largest value is $t_N=t_0+N*h$. Depending on the problem, $t_N$ might be given and $h$ is then determined by how many grid points $N$ we choose$$ h=\frac{t_N-t_0}{N}. $$
The key is now to approximate the derivative of $x(t)$ at a point $t_n$ by\begin{equation} \frac{dx}{dt}_{t=t_n}\approx \frac{x(t_{n+1})-x(t_n)}{h},~~~~~h>0. \label{eq:2} \end{equation}
We know that this relation is exact in the limit $h\to 0$, since $x(t)$ is differentiable according to equation \eqref{eq:1}. For $h>0$, however, equation \eqref{eq:2} is only an approximation that takes into account the current value of $x(t)$ and the value at the next (forward) grid point. Hence, this method is called a
forward difference approximation.
In equation \eqref{eq:2}, we approximate the slope of the tangent line at $t_n$ ("the derivative") by the slope of the chord that connects the point $(t_n,x(t_n))$ with the point $(t_{n+1},x(t_{n+1}))$. This is illustrated in the figure below: blue - graph; dotted - tangent line; green - chord.
Substituting the approximation \eqref{eq:2} into \eqref{eq:1}, we obtain\begin{equation*} \frac{x(t_{n+1})-x(t_n)}{h} \approx \cos(x(t_n))+\sin(t_n). \end{equation*}
Rearranging the equation, using the notation $x_n=x(t_n)$ and writing this as an equality (rather than an approximation) yields$$ x_{n+1} = x_n + h\left[ \cos(x_n)+\sin(t_n)\right]. $$
This describes an iterative method to compute the values of the function successively at all grid points $t_n$ (with $t_n>0$), starting at $t_0=0$ and $x_0=0$ in our case. This is called
Euler's method.
For example, the value of $x$ at the next grid point, $t_1=h$, after the starting point is\begin{eqnarray*} x_{1} &=& x_0 + h\left[ \cos(x_0)+\sin(t_0)\right] \\ &=& 0 + h\left[ \cos(0) +\sin(0) \right] \\ &=& h. \end{eqnarray*}
Similarly, we find at $t_2=2h$\begin{eqnarray*} x_{2} &=& x_1 + h\left[ \cos(x_1)+\sin(t_1)\right] \\ &=& h + h\left[ \cos(h) +\sin(h) \right] . \end{eqnarray*}
It is now a matter of what value to choose for $h$. To look at this, we will write some code which uses Euler's method to calculate $x(t)$.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# Set common figure parameters
newparams = {'figure.figsize': (16, 6), 'axes.grid': True,
             'lines.linewidth': 1.5, 'lines.markersize': 10, 'font.size': 14}
plt.rcParams.update(newparams)
N = 10000    # number of steps
h = 0.001    # step size

# initial values
t_0 = 0
x_0 = 0

t = np.zeros(N+1)
x = np.zeros(N+1)
t[0] = t_0
x[0] = x_0

t_old = t_0
x_old = x_0

for n in range(N):
    x_new = x_old + h*(np.cos(x_old)+np.sin(t_old))  # Euler's method
    t[n+1] = t_old + h
    x[n+1] = x_new
    t_old = t_old + h
    x_old = x_new

print(r'x_N = %f' % x_old)

# Plot x(t)
plt.figure()
plt.plot(t, x)
plt.ylabel(r'$x(t)$')
plt.xlabel(r'$t$')
plt.grid()
plt.show()
x_N = 1.742849
In the code above, we have chosen $h=0.001$ and $N=10000$, and so $t_N=10$. In the plot of $x(t)$, the discrete points have been connected by straight lines.
What happens to $x_N$ when we decrease $h$ by a factor of $10$? (Remember to increase $N$ simultaneously by a factor of $10$ so as to obtain the same value for $t_N$.)
We see that the value of $x_N$ depends on the step size $h$. In theory, a higher accuracy of the numerical solution in comparison to the exact solution can be achieved by decreasing $h$ since our approximation of the derivative $\frac{d}{dt}x(t)$ becomes more accurate.
However, we cannot decrease $h$ indefinitely since, eventually, we are hitting the limits set by the machine precision. Also, lowering $h$ requires more steps and, hence, more computational time.
For Euler's method, it turns out that the global error (error at a given $t$) is proportional to the step size $h$ while the local error (error per step) is proportional to $h^2$. This is called a first-order method.
We can now summarize
Euler's method.
Given the ODE $$ \frac{d}{dt}x(t)=g(x(t),t)~~~\mathrm{with}~~~x(t_0)=x_0, $$ we can approximate the solution numerically in the following way: choose a step size $h$ and grid points $t_n=t_0+n\cdot h$, set $x_0=x(t_0)$, and iterate $x_{n+1}=x_n+h\, g(x_n,t_n)$ for $n=0,1,2,\dots,N-1$.
Apart from its fairly poor accuracy, the main problem with Euler's method is that it can be unstable, i.e. the numerical solution can start to deviate from the exact solution in dramatic ways. Usually, this happens when the numerical solution grows large in magnitude while the exact solution remains small.
A popular example to demonstrate this feature is the ODE $$ \frac{dx}{dt}=-x~~~\mathrm{with}~~~x(0)=1. $$ The exact solution is simply $x(t)=e^{-t}$. It fulfills the ODE and the initial condition.
On the other hand, our Euler method reads $$ x_{n+1}=x_n+h\cdot(-x_n)=(1-h)x_n. $$
Clearly, if $h>2$, $x(t_n)$ will oscillate between negative and positive numbers and grow without bounds in magnitude as $t_n$ increases (for $1<h<2$ the sign still oscillates, but the magnitude decays). We know that such growth is incorrect since we know the exact solution in this case.
On the other hand, when $0<h<1$, the numerical solution approaches zero as $t_n$ increases, reflecting the behavior of the exact solution.
Therefore, we need to make sure that the step size of the Euler method is sufficiently small so as to avoid such instabilities.
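A minimal sketch illustrating this (printing the Euler approximation of $x(20)=e^{-20}$ for a stable and an unstable step size; the particular step sizes are arbitrary choices for the demonstration):

import numpy as np

def euler_decay(h, t_end=20.0):
    # Euler steps for dx/dt = -x, x(0) = 1: x_{n+1} = (1 - h) x_n
    x = 1.0
    for _ in range(int(round(t_end / h))):
        x = (1.0 - h) * x
    return x

for h in (0.1, 2.1):
    print(h, euler_decay(h))        # h = 0.1 decays; h = 2.1 grows in magnitude
print(np.exp(-20.0))                # exact value, ~2.06e-09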
We will now demonstrate how Euler's method can be applied to second-order ODEs.
In physics, we often need to solve Newton's law which relates the change in momentum of an object to the forces acting upon it. Assuming constant mass, it usually has the form $$ m\frac{d^2}{dt^2}x(t)=F(v(t),x(t),t), $$ where we restrict our analysis to one dimension. (The following ideas can be extended to two and three dimensions in a straightforward manner.)
Dividing by the mass, we find $$ \frac{d^2}{dt^2}x(t)=G(v(t),x(t),t), $$ with $G(v,x,t)\equiv F(v,x,t)/m$. We can re-write this second-order ODE as two coupled, first-order ODEs. By definition, we have $v(t)=\frac{d}{dt}x(t)$. Hence, we obtain \begin{eqnarray*} \frac{dx}{dt} &=& v, \\ \frac{dv}{dt} &=& G(v,x,t). \end{eqnarray*} Now, we only need to specify the initial conditions $x_0=x(t_0)$ and $v_0=v(t_0)$ to have a well-defined problem.
Using the same discretization of time as previously, we can apply the ideas ofEuler's method also to this first-order system. It yields\begin{eqnarray*}x_{n+1} &=& x_n+h\cdot v_n, \\v_{n+1} &=& v_n+h\cdot G(v_n,x_n,t_n),\end{eqnarray*}
where (at $n=0$) $x_0$ and $v_0$ are the initial conditions at $t=t_0$. Example 2:
Let us consider a particle of mass $m$ that is in free fall
towards the center of a planet of mass $M$. Let us assume that the atmosphere exerts a force$$F_\mathrm{drag}=Dv^2$$onto the particle which is proportional to the square of the velocity. Here, $D$ is the drag coefficient. Note that the $x$-axis is pointing away from the planet. Hence, we only consider $v\leq 0$.
The particle motion is described by the following governing equation ($G$: gravitational constant) $$ m\frac{d^2 x}{dt^2}=Dv^2-\frac{GmM}{x^2}. $$
Dividing each side by $m$ gives $$ \frac{d^2 x}{dt^2}=\frac{D}{m}v^2-\frac{GM}{x^2}. $$ Following our recipe above, we re-cast this as two first-order ODEs \begin{eqnarray*} \frac{dx}{dt} &=& v, \\ \frac{dv}{dt} &=& \frac{D}{m}v^2-\frac{GM}{x^2}. \end{eqnarray*} We choose $D=0.0025\,\mathrm{kg}\,\mathrm{m}^{-1}$, $m=1\,\mathrm{kg}$ and $M=M_\mathrm{Earth}$, i.e. the mass of the Earth.
Accordingly, our algorithm now reads\begin{eqnarray*}x_{n+1} &=& x_n+h*v_n, \\v_{n+1} &=& v_n+h*\left[ \frac{D}{m}v_n^2-\frac{GM}{x_n^2} \right] .\end{eqnarray*}
Let us specify the following initial conditions and step size: $$ t_0=0,~~x(t_0)=x_0=7000.0\,\mathrm{km},~~v(t_0)=v_0 =0\,\mathrm{m/s},~~h=0.001\,\mathrm{s}. $$ We could now iterate the above equations until the particle hits the ground, i.e. until $x=R_\mathrm{Earth}$, where $R_\mathrm{Earth}$ is the radius of Earth. This occurs in finite time both in reality and in our code.
Moreover, the particle would also reach $x=0$ in finite time, given the above equations, while the speed grows to infinity. However, the code would crash well before $x$ approaches zero due to the speed reaching very large values.
Note: The governing equation actually changes when $|x|<R$.
Therefore, we need to be careful with our numerical solution procedure. This goes to show that it is often very useful to understand the physical problem under consideration when solving its governing equations numerically.
Let us integrate until $t_N=100\,\mathrm{s}$, equivalent to $N=10^5$ time steps. As it turns out, the particle does not reach the ground by $t=t_N$.
G = 6.67300e-11    # gravitational constant
M = 5.97219e24     # mass of Earth
D = 0.0025         # drag coefficient
m = 1.0            # mass of particle

N = 100000   # number of steps
h = 0.001    # step size

# initial values
t_0 = 0.0
x_0 = 7.0e6
v_0 = 0.0

t = np.zeros(N+1)
x = np.zeros(N+1)
v = np.zeros(N+1)
t[0] = t_0
x[0] = x_0
v[0] = v_0

for n in range(N):
    # Euler's method
    x_new = x[n] + h*v[n]
    v_new = v[n] + h*((D/m)*v[n]**2 - G*m*M/x[n]**2)
    t[n+1] = t[n] + h
    x[n+1] = x_new
    v[n+1] = v_new

plt.figure()
plt.plot(t, x)    # plotting the position vs. time: x(t)
plt.xlabel(r'$t$')
plt.ylabel(r'$x(t)$')
plt.grid()

plt.figure()
plt.plot(t, v)    # plotting the velocity vs. time: v(t)
plt.xlabel(r'$t$')
plt.ylabel(r'$v(t)$')
plt.grid();
What do you observe? What happens if you choose $x_0=10000.0\,\mathrm{km}$ as your initial height?
We have learned how to use Euler’s method to solve first-order and second-order ODEs numerically. It is the simplest method in this context and very easy to implement. However: it is only first-order accurate (the global error scales linearly with the step size $h$), and it can become unstable unless the step size is chosen sufficiently small.
Notwithstanding these issues, Euler’s method is often useful: it is easy to use (the coding is less error prone); it can provide a helpful first impression of the solution; modern-day computer power makes computational expense less of an issue. Simply put, sometimes it is sufficient.
|
In this paper, the existence of a nontrivial least energy solution is considered for the nonlinear fractional Schrodinger-Poisson systems (−Δ)su + V(x)u + ϕu = |u|p−1u and (−Δ)tϕ = u2 in R3, where… (More)
AbstractThe present study is concerned with the following fractional p-Laplacian equation involving a critical Sobolev exponent of Kirchhoff type: … (More)
We develop a statistical model using extreme value theory to estimate the 2000–2050 changes in ozone episodes across the United States. We model the relationships between daily maximum temperature… (More)
Abstract This paper is concerned with the fractional coupled Schrodinger system. By using the Nehari manifold and fibering map, we obtain the multiplicity and concentration of solutions for the given… (More)
Abstract In this paper, we are concerned with the following quasilinear elliptic problems of Kirchhoff type: [ a + b ( ∫ R N | ∇ u | p − μ | u | p | x | p d x ) θ − 1 ] ( − Δ p u − μ | u | p − 2 u |… (More)
In this paper, we study the following nonlinear Kirchhoff problem involving critical growth: $$ \left\{% \begin{array}{ll} -(a+b\int_{\Omega}|\nabla u|^2dx)\Delta u=|u|^4u+\lambda|u|^{q-2}u, u=0\ \… (More)
The present study is concerned with the following Schr\"{o}dinger-Poisson system involving critical nonlocal term $$ \left\{ \begin{array}{ll} -\Delta u+u-K(x)\phi |u|^3u=\lambda f(x)|u|^{q-2}u, &… (More)
The present study is concerned with the following Schrodinger-Poisson system involving critical nonlocal term with general nonlinearity: $$ \left\{ \begin{array}{ll} -\Delta u+V(x)u- \phi |u|^3u=… (More)
This paper is concerned with the existence of ground state solutions for a class of generalized quasilinear Schrödinger–Poisson systems in R3$\mathbb {R}^{3}$ which have appeared in plasma physics,… (More)
|
The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no combinatorial lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].
[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Useful background materials
Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (inactive)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
(700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (active)
There is also a chance that we will be able to improve the known bounds on Moser's cube problem.
Here are some unsolved problems arising from the above threads.
Here is a tidy problem page.
Proof strategies
It is natural to look for strategies based on one of the following:
Szemerédi's original proof of Szemerédi's theorem.
Szemerédi's combinatorial proof of Roth's theorem.
Ajtai-Szemerédi's proof of the corners theorem.
The density increment method.
The triangle removal lemma.
Ergodic-inspired methods.
The Furstenberg-Katznelson argument.
Bibliography
Density Hales-Jewett
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
Behrend-type constructions
M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint.
K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.
Triangles and corners
M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299
I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318
J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR2047239
|
Now I have a beaker containing some amount of $\ce{CH4}$ at standard conditions, and then I put it on a burner.
In thermodynamics we are taught how to find the bond enthalpy of polyatomic molecules. They say that in such molecules (here in this case methane) all four $\ce{C-H}$ bonds are identical. However, the energy required to break the individual $\ce{C-H}$ bond in each successive step differs: $$\begin{alignat}{2} \ce{CH4(g) &-> CH3(g) + H(g)}\qquad &&\Delta_\text{bond}H^\circ=+427\ \mathrm{kJ\cdot mol^{-1}}\\ \ce{CH3(g) &-> CH2(g) + H(g)}\qquad &&\Delta_\text{bond}H^\circ=+439\ \mathrm{kJ\cdot mol^{-1}}\\ \ce{CH2(g) &-> CH(g) + H(g)}\qquad &&\Delta_\text{bond}H^\circ=+452\ \mathrm{kJ\cdot mol^{-1}}\\ \ce{CH(g) &-> C(g) + H(g)}\qquad &&\Delta_\text{bond}H^\circ=+347\ \mathrm{kJ\cdot mol^{-1}} \end{alignat}$$
and now their bond enthalpy is the mean of all those numbers.
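Explicitly, the mean of the four successive values quoted above is
\[
\Delta_\text{bond}H^\circ_\text{mean}=\frac{(427+439+452+347)\ \mathrm{kJ\cdot mol^{-1}}}{4}\approx 416\ \mathrm{kJ\cdot mol^{-1}}.
\]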
Now, going back to the diagram: we heat up the beaker containing methane. Then the energy that is absorbed by any one of the $\ce{C-H}$ bonds should be equal to the energy absorbed by the other $\ce{C-H}$ bonds, because all the bonds in a molecule should absorb an equal amount of energy at the same time, since at time $t$ the
temperature is the same all over the beaker and all bonds are equal. So, while calculating the bond dissociation energy of a polyatomic molecule (take the example of methane), it should be calculated like $4\times 427\ \mathrm{kJ\cdot mol^{-1}}$. (Is this common sense?)
All bonds should break up at once!
But we know that this does not happen, which means that all the bonds in a molecule do not absorb equal amounts of heat at the same time, or are they unequal? And frankly speaking, it is painful for me to visualize this, maybe because I am not able to understand this reality.
How do I comprehend this thing?
In Short
all bonds of $CH_4$ are equal
The temperature of the beaker is the same all over its volume
The energy that is absorbed by any one $\ce{C-H}$ bond should be equal to the energy absorbed by the other $\ce{C-H}$ bonds; the bonds should absorb equal energy at time $t$
They should all break at once, at the same time $t$
This does not happen, so how do I understand this?
|
There are many ways to interpolate data. Interpolation in my mind means that you 'draw' lines between some data points. This can be done many ways. One type of interpolation which is useful in DSP (especially in multirate DSP) is 'Bandlimited interpolation'. If you google that you will get many interesting and useful hits. What you propose is not bandlimited interpolation. In your 'upsampled' x you have frequency components not present in the original x.
Edit (too long to fit into a comment):
There is a quite significant difference between your construction, starting with $X=[A,B,C,D,E,F,G,H]$ and the example in the reference you provide.
Considering real input
$X=[A,B,C,D,E,D^*,C^*,B^*]$
Upsampling by a factor of 2 for fullband input. In this case upsampling can be performed by first placing zeros interleaved in the input (that is $x_0,0,x_1,0,\dots$). The result is a signal with a frequency spectrum containing a compressed version of the frequency spectrum of x (in the range $0-\pi/2$) and an image extending from $\pi/2$ to $\pi$ (considering only the positive frequency axis). If x2 is the upsampled version then
$X2=[A,B,C,D,E,D^*,C^*,B^*,A,B,C,D,E,D^*,C^*,B^*]$
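A small NumPy check of this spectrum-replication property (the numbers in x below are arbitrary stand-ins for the length-8 input):

import numpy as np

x = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 0.0, 1.5])
X = np.fft.fft(x)

x2 = np.zeros(16)
x2[::2] = x                              # interleave zeros: x0, 0, x1, 0, ...
X2 = np.fft.fft(x2)

print(np.allclose(X2, np.tile(X, 2)))    # True: the spectrum of x2 is X repeated (the image)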
In the ideal case an ideal brick-wall filter with cutoff frequency $\pi/2$ is required in order to remove the image. That is (for infinite input)
$y_n = \sum_{k=-\infty}^{\infty} x2_k\, \mathrm{sinc}\!\left(\frac{n-k}{2}\right)$
In practice though there will be some distortion because the brick-wall filter is not realistic. The practical filter can suppress/remove frequencies in the input or it can leave in some of the frequency components of the image in the upsampled signal. Or the filter can make a compromise between the two. I think your frequency-domain construction also reflects this compromise. These two examples represent two different choices:
$Y=[A,B,C,D,E,0,0,0,0,0,0,0,E^*,D^*,C^*,B^*]$
$Y=[A,B,C,D,0,0,0,0,0,0,0,0,0,D^*,C^*,B^*]$
If the input is bandlimited below the nyquist frequency as in your reference this issue disappears.
Maybe it is possible to find a value of $\rho$ below, such that some error function, for instance the squared error between the input spectrum and the upsampled output spectrum, is minimized.
$Y=[A,B,C,D,\rho,0,0,0,0,0,0,0,\rho^*,D^*,C^*,B^*]$
|
Experimental Mathematics, Volume 11, Issue 4 (2002), 527-546. Random Generators and Normal Numbers. Abstract
Pursuant to the authors' previous chaotic-dynamical model for random digits of fundamental constants, we investigate a complementary, statistical picture in which pseudorandom number generators (PRNGs) are central. Some rigorous results are achieved: We establish
b-normality for constants of the form $\sum_i 1/(b^{m_i} c^{n_i})$ for certain sequences $(m_i), (n_i)$ of integers. This work unifies and extends previously known classes of explicit normals. We prove that for coprime $b,c>1$ the constant $\alpha_{b,c} = \sum_{n = c, c^2, c^3,\dots} 1/(n b^n)$ is b-normal, thus generalizing the Stoneham class of normals. Our approach also reproves b-normality for the Korobov class $\beta_{b,c,d}$, for which the summation index n above runs instead over powers $c^d, c^{d^2}, c^{d^3}, \dots$ with $d>1$. Eventually we describe an uncountable class of explicit normals that succumb to the PRNG approach. Numbers of the $\alpha, \beta$ classes share with fundamental constants such as $\pi, \; \log 2$ the property that isolated digits can be directly calculated, but for these new classes such computation tends to be surprisingly rapid. For example, we find that the googol-th (i.e., $10^{100}$-th) binary bit of $\alpha_{2,3}$ is 0. We also present a collection of other results---such as digit-density results and irrationality proofs based on PRNG ideas---for various special numbers.
Bailey, David H.; Crandall, Richard E. Random Generators and Normal Numbers. Experiment. Math. 11 (2002), no. 4, 527--546. https://projecteuclid.org/euclid.em/1057864662
|
The question states:
Prove that if $S\subseteq T \subseteq V$ and if $T$ is a subspace of $V$, then $L(S)\subseteq T$.
My proof is as follows:
All elements in $S$ are in $T$. Now $T$ is a subspace, $\therefore$ all elements in $S$ spanning $L(S)$ are independent elements in $T$.
$\therefore T$ is spanned by $S$ if $\dim T = n$ and there are $n$ elements in $S$; or $T$ is spanned by a set $Q$ containing all independent elements in $S$. In both cases, the independent elements in $S$ (which span $L(S)$) form part of a basis for $T$. $\therefore L(S) \subseteq T$.
Is it correct?
Since $S \subseteq T$, we have $L(S) \subseteq L(T)$. Since $T$ is a subspace of $V$, we get $T=L(T)$. Hence $L(S) \subseteq T$.
Let $v\in L(S)$; then there exist $v_1,\dots , v_n\in S$ and $a_1,\dots , a_n\in \mathbb{R}$ such that
$v=\sum_{i=1}^n a_iv_i$
But $v_i\in S\subseteq T$ and $T$ is a subspace, so each linear combination of elements of $T$ is in $T$. Then
$v\in T$, so $L(S)\subseteq T$.
First, the span operator is monotone with respect to set inclusion. That is, if $S,T$ are subsets of a $K$-vector space $V$, then $$S\subseteq T\Rightarrow L(S)\subseteq L(T).$$
Second, the span operator is idempotent. That is, if $S$ is a subset of a $K$-vector space $V$, then $$L(L(S)) =L(S).$$
Both properties suffice to prove your claim: since $T$ is a subspace it equals its own span, $T=L(T)$, so monotonicity gives $L(S)\subseteq L(T)=T$.
|
Little explorations with HP calculators (no Prime)
03-23-2017, 01:23 PM (This post was last modified: 03-23-2017 01:23 PM by pier4r.)
Post: #21
RE: Little explorations with the HP calculators
(03-23-2017 12:19 PM)Joe Horn Wrote: I see no bug here. Variables which are assigned values should never be used where formal variables are required. Managing them is up to the user.
Ok (otherwise a variable cannot just be fed into a function from a program), but then how come, when I set the flags that let the function return reals, the variable is purged? The behavior could be more consistent.
Nothing bad, just that quirks like this, when not easy to spot, may lead to other solutions (see the advice of John Keit that I followed).
Wikis are great, Contribute :)
03-24-2017, 01:45 PM (This post was last modified: 03-24-2017 03:23 PM by pier4r.)
Post: #22
RE: Little explorations with the HP calculators
Quote:Brilliant.org
Is there a way to solve this without using a wrapping program? (hp 50g)
I'm trying around some functions (e.g: MSLV) with no luck, so I post this while I dig more on the manual, and on search engines focused on "site:hpmuseum.org" or "comp.sys.hp48" (the official hp forums are too chaotic, so I won't search there, although they store great contributions as well).
edit: I don't mind inserting manually new starting values, I mean that there is a function to find at least one solution, then the user can find the others changing the starting values.
Edit. It seems that the numeric solver for one equation can do it, one has to set values to the variables and then press solve, even if one variable has already a value (it was not so obvious from the manual, I thought that variables with given values could not change it). The point is that one variable will change while the others stay constant. In this way one can find all the solutions.
Wikis are great, Contribute :)
03-24-2017, 03:23 PM (This post was last modified: 03-24-2017 03:24 PM by pier4r.)
Post: #23
RE: Little explorations with the HP calculators
Quote:Same site as before.
This can be solved with multiple applications of SOLVEVX (hp 50g) on parts of the equation with proper observations. So it is not that difficult, just I found it nice and I wanted to share.
Wikis are great, Contribute :)
03-24-2017, 10:04 PM (This post was last modified: 03-24-2017 10:05 PM by pier4r.)
Post: #24
RE: Little explorations with the HP calculators
How do you solve this using a calculator to compute the number?
I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as helper.
I also needed to prove to myself that the center on the circle is on a particular location to proceed to build the final equation.
Wikis are great, Contribute :)
03-24-2017, 10:54 PM (This post was last modified: 03-24-2017 11:07 PM by Dieter.)
Post: #25
RE: Little explorations with the HP calculators
I think a calculator is the very last thing required here.
(03-24-2017 10:04 PM)pier4r Wrote: I solved it by translating it into a formula after a bit of tinkering, and I got a number that I may write as "x + 0.343 + O(0.343)" if I'm not mistaken. I used the numeric solver on the hp 50g as helper.
Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
So 6√2 = 2√2 + r + r√2. Which directly leads to r = 4 / (1 + 1/√2) = 2,343...
Dieter
03-25-2017, 08:17 AM (This post was last modified: 03-25-2017 08:19 AM by pier4r.)
Post: #26
RE: Little explorations with the HP calculators
As always, first comes the mental work to create the formula or the model, but to compute the final number one needs a calculator most of the time.
Quote:Numeric solver? The radius can be determined with a simple closed form solution. Take a look at the diagonal through B, O and D which is 6√2 units long.
I used the numeric solver because, instead of grouping r on the left, I just used the formula from one step before - the one without grouping - to find the value. Anyway one cannot just use the diagonal because the picture looks right: one has to prove to oneself that O is on the diagonal (nothing difficult, but required), otherwise it is a step taken for granted.
Wikis are great, Contribute :)
03-27-2017, 12:14 PM
Post: #27
RE: Little explorations with the HP calculators
Quote:Brilliant.org
This one defeated me at the moment. My rusty memory of mathematical relations did not help me. In the end, having the hp50g, I tried to use some visual observations to write down the cartesian coordinates of the points defining the inner square, or observing the lengths of the sides: if the side of the inner square is 2r then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This plus the knowledge that 4 times the triangles plus the inner square add up to 1 as area. Still, those were not enough for a solution (with or without the hp50g). I ended up with too ugly/tedious formulae.
Wikis are great, Contribute :)
03-27-2017, 12:54 PM (This post was last modified: 03-27-2017 03:42 PM by Thomas Okken.)
Post: #28
RE: Little explorations with the HP calculators
Consider a half-unit circle jammed into the corner of the first quadrant (so its center is at (0.5, 0.5)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The tangent on the circle where it meets that radius will intersect the X axis at 1 + tan(phi), and the Y axis at 1 + cot(phi) or 1 + 1 / tan(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 1 (or X = Y - 1). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the diameter of the circle is 1 / sqrt(X^2 + Y^2).
EDIT: No, I screwed up. The intersections at the axes are not at 1 + tan(phi), etc., that relationship is not quite that simple. Back to the drawing board!
Second attempt:
Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)). The tangent on the circle at that point will have a slope of -1 / tan(phi), and so it will intersect the X axis at Px + Py * tan(phi), or (1 + sin(phi)) * tan(phi) + 1 + cos(phi), and it will intersect the Y axis at (1 + cos(phi)) / tan(phi) + 1 + sin(phi). The triangle formed by the X axis, the Y axis, and this tangent, is like the four triangles in the puzzle, and the challenge is to find phi such that X = Y + 2 (or X = Y - 2). The answer to the puzzle is then obtained by scaling everything down so that the hypotenuse of the triangle OXY becomes 1, and then the radius of the circle is 1 / sqrt(X^2 + Y^2).
Because of symmetry, sweeping the angles from 0 to pi/2 is actually not necessary; you can restrict yourself to 0 through pi/4 and the case that X = Y - 2.
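For what it's worth, the condition above is easy to solve numerically; a Python sketch (the bracket [0.1, pi/4] and the function names are just choices made for the sketch):

```python
import numpy as np
from scipy.optimize import brentq

def X(phi):
    return (1 + np.sin(phi)) * np.tan(phi) + 1 + np.cos(phi)

def Y(phi):
    return (1 + np.cos(phi)) / np.tan(phi) + 1 + np.sin(phi)

# restrict to 0 < phi <= pi/4 and look for the case X = Y - 2
phi = brentq(lambda p: Y(p) - X(p) - 2.0, 0.1, np.pi / 4)
hyp = np.hypot(X(phi), Y(phi))       # hypotenuse of triangle OXY before rescaling
print(phi)                           # ~0.5236, i.e. pi/6
print(1.0 / hyp, 2.0 / hyp)          # radius ~0.18301, diameter ~0.36603
```

It returns phi close to pi/6, giving a circle radius of about 0.18301 relative to a unit hypotenuse, i.e. a diameter of (sqrt(3)-1)/2, consistent with the closed-form value found further down the thread.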
03-27-2017, 02:27 PM (This post was last modified: 03-27-2017 02:28 PM by pier4r.)
Post: #29
RE: Little explorations with the HP calculators
(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
I'm a bit blocked.
http://i.imgur.com/IW4QIeU.jpg
Would be possible to add a quick sketch?
Wikis are great, Contribute :)
03-27-2017, 03:44 PM (This post was last modified: 03-27-2017 03:44 PM by Thomas Okken.)
Post: #30
RE: Little explorations with the HP calculators
(03-27-2017 02:27 PM)pier4r Wrote:(03-27-2017 12:54 PM)Thomas Okken Wrote: Consider a unit circle jammed into the corner of the first quadrant (so its center is at (1, 1)). Now consider a radius of that circle sweeping the angles from 0 to pi/2. The point P on the circle where that radius intersects it is at (1 + cos(phi), 1 + sin(phi)).
OK; I attached a sketch to my previous post.
03-27-2017, 03:55 PM
Post: #31
RE: Little explorations with the HP calculators
Thanks and interesting approach. On brilliant.org there were dubious solutions (that did not prove their assumptions) and just one really cool making use of a known relationship with circles enclosed in right triangles.
Wikis are great, Contribute :)
03-27-2017, 05:29 PM (This post was last modified: 03-27-2017 05:31 PM by pier4r.)
Post: #32
RE: Little explorations with the HP calculators
Quote:Brilliant.org
For this I wrote a quick program, remembering the property of the mean that after enough iterations it stabilizes (the law of large numbers; I should find the right statement though).
Code:
But I'm not sure about the correctness of the approach. I'm pretty sure there is a way to compute this with an integral and then a closed form too.
Anyway this is the result at the moment:
Wikis are great, Contribute :)
03-27-2017, 06:01 PM (This post was last modified: 03-27-2017 07:33 PM by Dieter.)
Post: #33
RE: Little explorations with the HP calculators
(03-27-2017 12:14 PM)pier4r Wrote: ...if the side of the inner square is 2r then the sides of the triangle are "s+2r" and "s", from which one can say that "s^2+(s+2r)^2=1". This plus the knowledge that 4 times the triangles plus the inner square add up to 1 as area. Still, those were not enough for a solution
Right, in the end you realize that both formulas are the same. ;-)
The second constraint for s and r could be the formula of a circle inscribed in a triangle. This leads to two equations in two variables s and r. Or with d = 2r you'll end up with something like this:
(d² + d)/2 + (sqrt((d² + d)/2) + d)² = 1
I did not try an analytic solution, but using a numeric solver returns d = 2r = (√3–1)/2 = 0,36603 and s = 1/2 = 0,5.
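A quick check of this equation with a generic numeric solver (a Python sketch, not the HP-specific solver mentioned above):

```python
from math import sqrt
from scipy.optimize import brentq

def F(d):
    # (d^2 + d)/2 + (sqrt((d^2 + d)/2) + d)^2 - 1, the equation above
    s2 = (d * d + d) / 2.0
    return s2 + (sqrt(s2) + d) ** 2 - 1.0

d = brentq(F, 0.1, 0.9)
print(d, (sqrt(3) - 1) / 2)        # both ~0.3660254, so d = 2r = (sqrt(3)-1)/2
print(sqrt((d * d + d) / 2.0))     # s = 0.5
```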
Edit: finally this seems to be the correct solution. ;-)
Dieter
03-27-2017, 06:14 PM (This post was last modified: 03-27-2017 06:15 PM by pier4r.)
Post: #34
RE: Little explorations with the HP calculators
(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
How? One should be from Pythagoras' theorem, a^2+b^2=c^2 (where I use the two sides of the triangle to get the hypotenuse); the other is the composition of the area of the square, made up of 4 triangles and one inner square. To me they sound like different models for different measurements. Could you explain to me why those are the same?
Anyway, to me even the numerical solution is great (actually the one that I was searching for with the hp50g), but I cannot tell you if it is right or not because I did not solve it by myself; other reviewers are needed.
Edit, anyway I remember that a discussed solution mentioned the relationship of a circle inscribed in a triangle, so I guess your direction is right.
Wikis are great, Contribute :)
03-27-2017, 06:30 PM
Post: #35
RE: Little explorations with the HP calculators
(03-27-2017 06:14 PM)pier4r Wrote:(03-27-2017 06:01 PM)Dieter Wrote: Right, in the end you realize that both formulas are the same. ;-)
Just do the math. On the one hand, \( s^2 + (s + 2r)^2 = 1\) from Pythagoras' theorem, as you observed. And your other observation is that
\[ 4 \cdot \underbrace{\frac{1}{2} \cdot s \cdot (s+2r)}_{\text{area of }\Delta}
+ \underbrace{(2r)^2}_{\text{area of } \Box} = 1 \]
Simplify the left hand side:
\[
\begin{align}
4 \cdot \frac{1}{2} \cdot s \cdot (s+2r) + (2r)^2 & =
2s^2+4rs + 4r^2 \\
& = s^2 + s^2 + 4rs + 4r^2 \\
& = s^2 + (s+2r)^2
\end{align} \]
Hence, both formulas are the same.
Graph 3D | QPI | SolveSys
03-27-2017, 06:57 PM (This post was last modified: 03-27-2017 06:57 PM by pier4r.)
Post: #36
RE: Little explorations with the HP calculators
Thanks, I did not work on the formula; I was more stuck (and somewhat still am) on the fact that they should represent different objects/results.
But then again, the square built on the side is the square itself. So now I see it. I wanted to see it in terms of "represented objects", not only formulae.
Wikis are great, Contribute :)
03-27-2017, 07:07 PM
Post: #37
RE: Little explorations with the HP calculators
(03-27-2017 06:57 PM)pier4r Wrote:
Are you familiar with the geometric proofs of Pythagoras' theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
A few geometric proofs: http://www.cut-the-knot.org/pythagoras/
Graph 3D | QPI | SolveSys
03-27-2017, 07:22 PM
Post: #38
RE: Little explorations with the HP calculators
(03-27-2017 07:07 PM)Han Wrote: Are you familiar with the geometric proofs of Pythagoras' theorem? What I wrote above is just a variation of one of the geometric proofs using areas of polygons (triangles, rectangles, squares).
Maybe my choice of words was not the best. I wanted to convey the fact that if I try to model two different events (or objects, in this case) and I get the same formula, for me it is not immediate to say "oh, ok, then they are the same object"; I have to, how can I say, "see it". So in the case of the problem, I saw it when I realized that the 1^2 is not only equal to the area because the area is also 1^2; it is exactly the area, because it models the area of the square itself. (I was visually building 1^2 outside the square, like a duplicate.)
Anyway the link you shared is great. I looked briefly and I can say:
- long ago I saw the proof #1
- in school I saw the proof #9
- oh look, the proof #34 would have helped, as someone mentioned
- how many!
Great!
Wikis are great, Contribute :)
03-27-2017, 07:24 PM (This post was last modified: 03-27-2017 07:28 PM by Joe Horn.)
Post: #39
RE: Little explorations with the HP calculators
(03-27-2017 05:29 PM)pier4r Wrote:Quote:Brilliant.org
After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result:
10 randomize
20 T=0:C=0
30 repeat
40 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
50 until C=99999994
60 repeat
70 T+=sqr((rnd-rnd)^2+(rnd-rnd)^2):C+=1
80 print C;T/C
90 until C=99999999
run
99999995 0.5214158234249566646569152059
99999996 0.5214158242970253667174680247
99999997 0.5214158240318481570747604814
99999998 0.5214158247892039896051570164
99999999 0.5214158253601312510245695897
OK
run
99999995 0.5213642776110289008920452545
99999996 0.5213642752079475043717958065
99999997 0.52136427197858201293861314
99999998 0.5213642744828552963477424429
99999999 0.5213642759132547792130043215
OK
run
99999995 0.5213770659191193073147616413
99999996 0.5213770610000764506616015052
99999997 0.5213770617149058467216528505
99999998 0.5213770589414874167694264508
99999999 0.5213770570854305903944611055
OK
So it SEEMS to be zeroing on something close to -LOG(LOG(2)), but I give up.
<0|ɸ|0>
-Joe-
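For comparison, the quantity being estimated here is the mean distance between two uniform random points in the unit square, whose exact value is (2 + sqrt(2) + 5 ln(1+sqrt(2)))/15 = 0.5214054... A short Python sketch of the same Monte Carlo experiment:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2_000_000
p = rng.random((n, 2))                             # first random point in the unit square
q = rng.random((n, 2))                             # second random point
d = np.hypot(*(p - q).T)                           # distances between the pairs
print(d.mean())                                    # ~0.5214 (give or take a few 1e-4)
print((2 + np.sqrt(2) + 5 * np.arcsinh(1)) / 15)   # exact value: 0.52140543...
```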
03-27-2017, 07:35 PM (This post was last modified: 03-27-2017 07:37 PM by pier4r.)
Post: #40
RE: Little explorations with the HP calculators
(03-27-2017 07:24 PM)Joe Horn Wrote: 99999999 0.5213770570854305903944611055
Interestingly your number is quite different from mine. Ok that you have a couple of iterations more, but my average was pretty stable for the first 3 digits. I wonder why the discrepancy.
Moreover if you round to the 4th decimal place, you always get 0.5214 if the rounding is adding a "+1" to the last digit in the case the first digit to be excluded is higher than 5.
Wikis are great, Contribute :)
|
To avoid confusion, let's clarify some notation. Let $\mathcal{M}^{*}$ denote the $\sigma$-algebra of $\mu^{*}$-measurable subsets of $X$. Let $\bar{\mathcal{M}}$ denote the completion of $\mathcal{M}$ with respect to $\mu$. Let $\widetilde{\bar{\mathcal{M}}}$ denote the $\sigma$-algebra of locally measurable subsets of $(X,\bar{\mathcal{M}},\bar{\mu})$. Since $\mu^{*}$ restricts to a complete measure on $\mathcal{M}^{*}$, we know that $\mu^{*}|_{\mathcal{M}^{*}}$ restricted to $\bar{\mathcal{M}}$ coincides with the completion of $\mu$, so there is no ambiguity in using the notation $\bar{\mu}$.
Lemma. Let $(X,\mathcal{M},\mu)$ be a measure space, and let $\mu^{*}$ be the outer measure induced by $\mu$. For $E\in\mathcal{M}^{*}$, with $\mu^{*}(E)<\infty$, there exists $A\in\mathcal{M}$ such that
\begin{align*} E\subset A, \quad \mu^{*}(E)=\mu(A) \end{align*} In particular, if $\mu^{*}(E)=0$, then $E$ is contained in a $\mu$-null set $N\in\mathcal{M}$.
Proof. Fix $\epsilon>0$. By definition of the infimum defining $\mu^{*}(E)$, there exists a countable collection of sets $\left\{A_{j}\right\}\subset\mathcal{M}$ such that\begin{align*}E\subset A:=\bigcup_{j=1}^{\infty}A_{j}, \quad \sum_{j=1}^{\infty}\mu(A_{j})<\mu^{*}(E)+\epsilon\end{align*}Since $\mathcal{M}$ is a $\sigma$-algebra, $A\in\mathcal{M}$, and by countable subadditivity, $\mu(A)\leq\sum_{j=1}^{\infty}\mu(A_{j})<\mu^{*}(E)+\epsilon$.
We can apply the preceding result for each $\epsilon_{n}=1/n$, to obtain a collection of sets $\left\{A^{n}\right\}\subset\mathcal{M}$ such that $E\subset A^{n}$ and $\mu(A^{n})\leq\mu^{*}(E)+1/n$. If we set $A:=\bigcap_{n}A^{n}$, then $E\subset A$ and by monotonicity,\begin{align*}\mu(A)\leq\mu(A^{n})\leq\mu^{*}(E)+\dfrac{1}{n},\qquad\forall n\end{align*}Letting $n\rightarrow\infty$, we obtain $\mu(A)\leq\mu^{*}(E)$. The reverse inequality holds by monotonicity. $\Box$
Let $E$ be a locally measurable subset of $(X,\bar{\mathcal{M}},\bar{\mu})$. I claim that $E\in\mathcal{M}^{*}$. It suffices to show that for any $F\subset X$ with $\mu^{*}(F)<\infty$, we have\begin{align*}\mu^{*}(F)\geq\mu^{*}(E\cap F)+\mu^{*}(E^{c}\cap F)\end{align*}By the lemma, there exists a set $A\in\mathcal{M}$ such that\begin{align*}F\subset A, \quad \mu^{*}(F)=\mu(A)\end{align*}Since $\mu(A)<\infty$ and $E$ is locally measurable, $E\cap A$ is measurable, whence $(E^{c}\cup A^{c})\cap A=E^{c}\cap A$ is measurable. By monotonicity and additivity, we see that\begin{align*}\mu^{*}(E\cap F)+\mu^{*}(E^{c}\cap F)\leq\mu^{*}(E\cap A)+\mu^{*}(E^{c}\cap A)=\mu(A)=\mu^{*}(F)\end{align*}
The reverse inclusion also holds: $E\in\mathcal{M}^{*}$ implies $E\in\widetilde{\bar{\mathcal{M}}}$. For any set $A\in\bar{\mathcal{M}}$ with $\bar{\mu}(A)<\infty$, $\mu^{*}(E\cap A)<\infty$, whence there exists a set $B\in\mathcal{M}$ such that $E\cap A\subset B$ and $\mu^{*}(E\cap A)=\mu(B)$. Since $E\cap A, B\in\mathcal{M}^{*}$, $\mu^{*}(B\setminus (E\cap A))=0$. But then there exists $N\in\mathcal{M}$, such that\begin{align*}B\setminus (E\cap A)\subset N, \quad \mu^{*}(B\setminus (E\cap A))=\mu(N)=0\end{align*}We conclude that $B\setminus (E\cap A)\in\bar{\mathcal{M}}$, whence\begin{align*}E\cap A=B\setminus (B\setminus (E\cap A))\in\bar{\mathcal{M}}\end{align*}
With $E$ as above, suppose $\mu^{*}(E)<\infty$. Then, as asserted before, there exists a set $A\in\mathcal{M}$ such that $E\subset A$ and $\mu^{*}(E)=\mu(A)$. But then $E=E\cap A\in\bar{\mathcal{M}}$. We conclude that
\begin{align*}\mu^{*}(E)=\begin{cases}\bar{\mu}(E) & {E\in\bar{\mathcal{M}}}\\ \infty & {E\in\widetilde{\bar{\mathcal{M}}}\setminus\bar{\mathcal{M}}} \end{cases},\end{align*}which is the definition of $\widetilde{\bar{\mu}}$, the saturation of $\bar{\mu}$.
|
Magnetic force is a consequence of the electromagnetic force and is caused by the motion of charges. We have learned that moving charges surround themselves with a magnetic field. In this context, the magnetic force can be described as a force that arises due to interacting magnetic fields.
What is Magnetic Force?
If we place a point charge q in the presence of both a magnetic field of magnitude B(r) and an electric field of magnitude E(r), then the total force on the charge q can be written as the sum of the electric force and the magnetic force acting on the object, F electric + F magnetic. Magnetic force can be defined as:
The magnetic force between two moving charges may be described as the effect exerted upon either charge by a magnetic field created by the other.
How do we find magnetic force?
The magnitude of the magnetic force depends on how much charge is in how much motion in each of the objects and how far apart they are.
Mathematically, we can write the force on the point charge as \(F = q\,[E(r) + v \times B(r)]\).
This force is termed the Lorentz force. It is the combination of the electric and magnetic forces on a point charge due to electromagnetic fields. The interaction between the electric field and the magnetic field has the following features:
The magnetic force depends upon the charge of the particle, the velocity of the particle and the magnetic field in which it is placed. The direction of the magnetic force on a negative charge is opposite to that on a positive charge. The magnitude of the force is calculated by the cross product of the velocity and the magnetic field, given by \(q\,[v \times B]\). The resultant force is thus perpendicular to both the velocity and the magnetic field, and its direction is predicted by the right-hand thumb rule. In the case of static charges, the total magnetic force is zero.
Magnetic Force on a Current-Carrying Conductor
Let us now discuss the force due to the magnetic field in a straight current-carrying rod.
We consider a rod of uniform length l and cross-sectional area A. In the conducting rod, let the number density of mobile electrons be given by n.
Then the total number of mobile charge carriers is nAl. Let I be the steady current in the rod. The drift velocity of each mobile carrier is assumed to be given as vd. When the conducting rod is placed in an external magnetic field of magnitude B, the force applied on the mobile charges, i.e. the electrons, can be given as:
\(F=(nAl)\,q\,v_d\times B\), where q is the value of the charge on a mobile carrier.
As nq vd is the current density j and A|nq vd| is the current I through the conductor, we can write \(F = I\,l\times B\), where l is the vector of magnitude equal to the length of the conducting rod, pointing along the direction of the current.
Solved Examples
Q1. The direction of the current in a copper wire carrying a current of 6.00 A through a uniform magnetic field with magnitude 2.20 T is from the left to right of the screen. The direction of the magnetic field is upward-left, at an angle of θ = 3π/4 radians from the current direction. Determine the magnitude and direction of the magnetic force acting on a 0.100 m section of the wire.
Solution:
The magnitude of the magnetic force can be found using the formula:
\(\vec{F}=ILB\sin \theta\; \hat{n}\)
where,
\(\vec{F}\) is the magnetic force vector (N), I is the current magnitude (A), \(\vec{L}\) is the length vector (m), L is the length of the wire (m), \(\vec{B}\) is the magnetic field vector (T), B is the magnetic field magnitude (T), θ is the angle between the length and magnetic field vectors (radians), and \(\hat{n}\) is the cross product direction vector (unitless).
Substituting the values, we get\(F = (6.00\, A)(0.100\, m)(2.20\, T)\sin(3\pi /4\,radians)\) \(F = (6.00\, A)(0.100\, m)(2.20\, T)(1/\sqrt{2})\) \(F = (6.00\, A)(0.100\, m)(2.20\, \frac{kg}{A\cdot s^{2}})(1/\sqrt{2})\) \(F = (6.00)(0.100\, m)(2.20\, \frac{kg}{s^{2}})(1/\sqrt{2})\) \(F = (6.00)(0.100)(2.20)(1/\sqrt{2})\,kg\cdot m/{s^{2}}\) \(F \simeq 0.933\,\,kg\cdot m/s^2\)
The magnitude of the force on the 0.100 m section of wire has a magnitude of 0.933 N.
We use “right-hand rule” to find the direction of the force vector. The direction of the current is to the right, and so point the right index finger in that direction. The magnetic field points upward-left, so curl your fingers up. Your thumb would be pointing away from the page. This means that the direction of the force vector is out of the page.
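A short Python sketch of the same calculation, both as the scalar formula and as an explicit cross product (the coordinate choice, current along +x and field in the x-y plane, is an assumption of the sketch):

```python
import numpy as np

I = 6.00               # current (A)
L = 0.100              # length of the wire section (m)
B = 2.20               # magnetic field magnitude (T)
theta = 3 * np.pi / 4  # angle between current direction and field (rad)

# scalar form F = I L B sin(theta)
print(I * L * B * np.sin(theta))          # ~0.933 N

# vector form F = I (L x B)
L_vec = L * np.array([1.0, 0.0, 0.0])
B_vec = B * np.array([np.cos(theta), np.sin(theta), 0.0])
print(I * np.cross(L_vec, B_vec))         # ~[0, 0, 0.933]: along +z, out of the page
```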
|
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: There exist an infinite subset $S\subseteq S_X$ and a constant $d>1$ satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$ for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite dimensional Banach space X with dim X = n and an Auerbach basis of X, it is proved that there exists a set D of n + 1 linear combinations (with coordinates 0, -1, +1) of the members of the basis, such that each pair of different elements of D has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
* Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
* Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
* Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
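As a quick numerical sanity check of the integral representation and of the relation between [math]H_0[/math] and [math]\xi[/math] above, one can evaluate both sides directly (a Python/mpmath sketch; truncating the series for [math]\Phi[/math] and cutting the integral off at [math]u=2[/math] are harmless because of the double-exponential decay):

```python
import mpmath as mp

mp.mp.dps = 20

def Phi(u):
    return sum((2*mp.pi**2*n**4*mp.exp(9*u) - 3*mp.pi*n**2*mp.exp(5*u))
               * mp.exp(-mp.pi*n**2*mp.exp(4*u)) for n in range(1, 10))

def H0(z):
    return mp.quad(lambda u: Phi(u) * mp.cos(z*u), [0, 0.5, 1, 2])

def xi(s):
    return s*(s-1)/2 * mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)

for z in (mp.mpf(10), 2*mp.mpf('14.134725141734693')):
    print(H0(z), (xi(mp.mpc(0.5, z/2)) / 8).real)
# the two columns agree; the second z is twice the first zero of zeta,
# so both expressions are ~0 there
```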
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
See asymptotics of H_t for asymptotics of the function [math]H_t[/math].
Threads
* Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
* Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
* Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
* Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
Other blog posts and online discussion
* Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
* The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
* Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
* A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Wikipedia and other references
Bibliography
* [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
* [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
* [G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
* [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 222 (2009), 281–306.
* [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
* [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
* [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
* [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
* [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
|
Let $f:X\to[0,+\infty)$ be a non-negative measurable function defined on the space $X$, endowed with a complete, $\sigma$-additive, $\sigma$-finite measure $\mu$ defined on the $\sigma$-algebra of the measurable subsets of $X$.
I have read that the following equality holds for the Lebesgue integral:
$$\int_X f d\mu = \int_{[0,+\infty)} \mu(\{x\in X: f(x)>t\}) d\mu_t$$where $\mu_t$ is the usual Lebesgue linear measure.
I would like to understand why this equality holds, but I am having serious problems even proving to myself the measurability of the function $\phi:t\mapsto \mu(\{x\in X: f(x)>t\})$ (necessary for the Lebesgue integral to be defined), which would be established if we could verify that, for any $c\in\mathbb{R}$, the set $$\{t\in\mathbb{R}_{\ge 0}:\mu(\{x\in X: f(x)>t\})<c\}$$is measurable. How can we prove the equality (including the measurability of $\phi$)? I $\infty$-ly thank anyone answering.
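As a sanity check (not a proof), the equality is easy to verify numerically for a simple function on a finite measure space; a Python sketch, with the outer integral approximated by a Riemann sum:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(0, 10, size=6)      # values of a non-negative simple function
mu = rng.uniform(0, 3, size=6)      # masses of the atoms carrying those values

lhs = np.sum(f * mu)                # integral of f over X with respect to mu

t = np.linspace(0, f.max(), 100_001)
phi = np.array([mu[f > s].sum() for s in t])   # phi(t) = mu({f > t}), non-increasing
dt = t[1] - t[0]
rhs = np.sum(phi[:-1]) * dt         # Riemann sum for the integral of phi over [0, max f]

print(lhs, rhs)                     # agree up to the ~1e-3 discretisation error
```

Incidentally, $\phi$ is non-increasing, which is one route to its measurability.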
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations.I think it's a safe asssumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
I'm trying to understand what are the required conditions for an ideal gas in one spatial dimension to act as a heat reservoir.
I would appreciate answers that don't use the terms heat, temperature, heat capacity, entropy unless they are well defined for the system described in the question. (I'm trying to find reasoning from first principles)
I'll start with a helpful calculation: An ideal gas with $N$ particles in one dimension has energy which is proportional to $\sum_{i=1}^Np_i^2$, where $p_i$ is the momentum of the $i$'th particle. Therefore, the number of microstates of the ideal gas with a total energy $E$ is proportional to $E^{N-1}$. Let's denote the constant of proportionality $\alpha^{N-1}$, therefore the number of microstates with energy $E$ is $\Omega(E)=(\alpha E)^{N-1}$. These three equations follow: $$\log \Omega(E)=(N-1)\log(\alpha E)$$ $$\frac{\partial}{\partial E}\log \Omega(E)=\frac{N-1}{E}$$ $$\frac{\partial^2}{\partial E^2}\log \Omega(E)=-\frac{N-1}{E^2}$$
Now, let's suppose that some thermodynamic system $S$ is coupled to this one-dimensional ideal gas, which I denote by $S'$, and that the combined system is isolated with total energy $E_\text{tot}$. The coupling lets energy flow between the systems, but the interface between them does not allow the "flow" of volume, particles etc..
Using the principle of equal a priori probabilities, we assume that all microstates of the combined system have the same probability. It follows that the probability $p(s)$ of a microstate $s$ of $S$, with energy $E_s$, is proportional to $\Omega({E_\text{tot}-E_s)}$. Taylor expanding around $E_s=0$ we get: $$p(s)\propto e^{\log\Omega({E_\text{tot}-E_s)}}=e^{\log\Omega({E_\text{tot})}-\frac{E_s}{E_\text{tot}}(N-1)+\mathcal{O}\left(\left(\frac{E_s}{E_\text{tot}}\right)^2\right)}$$ where the leading coefficient of the $\mathcal{O}\left(\left(\frac{E_s}{E_\text{tot}}\right)^2\right)$ term is $N-1$.
Denoting $\beta=\frac{N-1}{E_\text{tot}}$, we get: $$p(s)\propto e^{-\beta E_s+\mathcal{O}\left(\left(\frac{E_s}{E_\text{tot}}\right)^2\right)}$$
(The proportionality means that the factor of proportion does not depend on $s$).
You are probably aware that I'm following the derivation of the canonical ensemble. At this point of the derivation, usually the $\mathcal{O}\left(\left(\frac{E_s}{E_\text{tot}}\right)^2\right)$ term is omitted, owing to the fact that $S'$ is a heat reservoir. This is the point I'd like to better understand, and here are my questions:
1. When is it justified to omit the term? I understand that it is required that $\left(\frac{E_s}{E_\text{tot}}\right)^2$ is small compared to its coefficient $N-1$. But when can I expect this condition to hold (especially noting that usually $N-1$ is taken very large)? I mean, suppose that the only variable that I directly control in the experiment is $E_\text{tot}$. How can I know which values of $E_\text{tot}$ I should provide in order for the energy of the system to be distributed between $S$ and $S'$ such that $\left(\frac{E_s}{E_\text{tot}}\right)^2$ is indeed small enough?
2. When can the ideal gas behave as a heat reservoir for small $N$, such as $N=1,2$?
3. Does the mass of the particles in the ideal gas matter?
4. What are we assuming about the interaction between the systems $S$ and $S'$?
5. Suppose that the system $S$ is also an ideal gas (in three dimensions), and that its interaction with $S'$ is such that $S'$ can actually influence only one specific particle in $S$. In other words, the energy flow from $S'$ to $S$ allows the energy to go only to one specific particle in $S$. Could $S'$ still be considered a heat reservoir in this case? If not, what condition is not satisfied?
6. For a general system $S'$ (not necessarily an ideal gas), what are the conditions on $S'$, $S$ and the interactions between them that allow the treatment of $S'$ as a heat reservoir? I'd appreciate it if the answer to this question (question 6) is given in two versions: one without referring to heat, temperature, heat capacity, entropy, and the other which does include these terms (or some of them).
Edit 1:
An additional question:
Does the question whether a system behaves as a heat reservoir depend also on the specific calculation of interest? For example, if we are interested in the extensive property $A$ which is defined as the sum over all particles $i$ of $E_i^{10^{10^{10}}}$, where $E_i$ is the energy of the $i$'th particle. Calculating its expected value involves integrating the integrand $A\cdot p(s)\text{d}s$. Here, clearly the integrand might be dominated by microstates $s$ with high energy, which seems to suggest that we can't use the standard canonical ensemble.
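A small numerical illustration of how the canonical form emerges (my own sketch, with $m=E_\text{tot}=1$ in arbitrary units): sample momenta uniformly on the constant-energy sphere of the one-dimensional gas and look at the marginal distribution of a single tagged particle. As $N$ grows, its variance approaches $2mE_\text{tot}/N$ and its excess kurtosis approaches $0$, i.e. the marginal becomes Gaussian.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
m, E_tot = 1.0, 1.0                    # particle mass and total energy (arbitrary units)

for N in (3, 10, 30, 100):             # number of particles in the 1-D gas
    # points uniform on the sphere sum_i p_i^2 = 2 m E_tot (microcanonical ensemble)
    g = rng.standard_normal((100_000, N))
    p = g / np.linalg.norm(g, axis=1, keepdims=True) * np.sqrt(2 * m * E_tot)
    p1 = p[:, 0]                       # momentum of one tagged particle
    print(N, p1.var(), 2 * m * E_tot / N, kurtosis(p1))
```

The excess kurtosis of this marginal is exactly $-6/(N+2)$, so the Gaussian (canonical) description of the tagged particle is only an $O(1/N)$ statement, which is one concrete way to read the requirement that the reservoir be large.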
|
I am using the following formula to calculate the Hann values for a set of points: $$ w(n)=0.5\left(1-\cos\left(2\pi \frac nN\right)\right) $$
To implement this in java, I am using the following code:
private static void applyHannWindow(final double[] points) {
    for (int i = 0; i < points.length; i++) {
        points[i] = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * points[i] / points.length));
    }
}
To take the inverse, I solve for $n$ by re-using $w(n)$. Hence, the equation I come up with is this: $$ n = \frac{N}{2\pi}\arccos\big(1 - 2\,w(n)\big) $$
To implement this in Java, I am using the following code:
private static void reverseHannWindow(final double[] points) {
    for (int i = 0; i < points.length; i++) {
        points[i] = points.length * (Math.acos(-2.0 * points[i] + 1.0) / (2.0 * Math.PI));
    }
}
I wrote a simple test to make sure that after I apply the Hann window and take its inverse, I get the same thing. However, I seem not to.
Given the original array:
final double[] data = new double[5];
data[0] = 200;
data[1] = 210;
data[2] = 215;
data[3] = 207;
data[4] = 203;
I get the following output:
### applying Hann window
0.0
0.0
0.0
0.9045084971874686
0.9045084971874787
### Reversing Hann window
0.0
0.0
0.0
1.9999999999999862
2.0000000000000138
Not sure what I am doing wrong here in applying the inverse to the Hann window?
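For reference, here is what the round trip looks like when $n$ is taken to be the sample index (the usual convention for the Hann formula above) rather than the sample value; a Python sketch, not a translation of the Java code above. Note that the inverse can only recover indices up to $N/2$, because the cosine is not one-to-one over a full period:

```python
import numpy as np

N = 8
n = np.arange(N)
w = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / N))           # Hann value at each index
n_back = N * np.arccos(1.0 - 2.0 * w) / (2.0 * np.pi)    # attempted inverse
print(np.round(n_back, 6))   # [0. 1. 2. 3. 4. 3. 2. 1.]: indices past N/2 fold back
```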
|
Let us consider the enthalpy as a function of temperature and pressure, such that $H = H(T,p).$
Now, say we want to exchange the pressure $p$ by the volume $V$ by performing a Legendre transformation. Let's call the result $U = U(T,V)$ (the internal energy). Thus the Legendre transformation should take the form
$$U(T,V) = H(T,p) - pV.$$
If we calculate $dU$ we obtain
$$dU = \left(\frac{\partial U}{\partial T}\right)_V dT + \left(\frac{\partial U}{\partial V}\right)_T dV$$ or alternatively \begin{equation} \begin{split} dU &= dH - pdV - Vdp\\ &= \left(\left(\frac{\partial H}{\partial T}\right)_pdT + \left(\frac{\partial H}{\partial p}\right)_T dp\right) - pdV - Vdp. \end{split} \end{equation}
Equating the two expressions for $dU$ and requiring that the coefficient of $dp$ should vanish we obtain the relations \begin{equation} \begin{split} &\left(\frac{\partial U }{\partial T}\right)_V = \left(\frac{\partial H}{\partial T}\right)_p, \quad \quad -p = \left(\frac{\partial U}{\partial V}\right)_T, \quad \quad V = \left(\frac{\partial H}{\partial p}\right)_T. \end{split} \end{equation}
But the relation $-p = \left(\frac{\partial U}{\partial V}\right)_T$ can't be correct because we have the very general relation $$C_p - C_v = \left(p + \left(\frac{\partial U}{\partial V}\right)_T\right)\left(\frac{\partial V}{\partial T}\right)_p \neq 0.$$
So there are two options:
1. Something is wrong about the Legendre transformation I started out with. If this is the case, can anyone see what it is?
2. The pressure $-p = \left(\frac{\partial U}{\partial V}\right)_T$ is not the pressure of the gas. If this is the case, what is the meaning of this pressure?
I know that the natural variables of $U$ is the entropy $S$ and the volume $V$, and that the natural variables of $H$ is entropy $S$ and pressure $p$. But I do not see why I should get something unphysical by treating $U = U(T,V)$ and $H = H(T,p)$ instead of $U = U(S,V)$ and $H = H(S,p)$.
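As a concrete check (my own illustration with a monatomic ideal gas, for which $U$ depends on $T$ only), the relation $-p = (\partial U/\partial V)_T$ indeed fails:

```python
import sympy as sp

n, R, T, V = sp.symbols('n R T V', positive=True)

U = sp.Rational(3, 2) * n * R * T      # monatomic ideal gas: U = U(T) only
p = n * R * T / V                      # equation of state p V = n R T

print(sp.diff(U, V))                   # (dU/dV)_T = 0
print(-p)                              # -nRT/V != 0, so the naive relation fails
```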
|
A function $f: \mathbb R \to \mathbb R$ is called sequentially continuuous if
$x_n \to x$ implies $f(x_n) \to f(x)$.
Every continuous function is sequentially continuous. Let $f$ be a continuous function on $\mathbb R$.
If $y_n = f(x_n)$ is a convergent sequence does it follow that $x_n$ is a convergent sequence?
At first I thought that yes, because: assume $f(x_n) \to f(x)$ but $x_n \not\to x$. Say, $x_n \to z$. Then since $f$ is sequentially continuous, $f(x_n) \to f(z)\neq f(x)$. Although $x_n \not\to x$ is not equivalent to $x_n$ converging to a different value, it seems that the case where $x_n$ diverges can be treated similarly.
But then I came up with an almost counterexample: Let $x_n =n$ and $f(x) = {1\over x}$. Then $f(x_n) \to 0$ but $x_n \to \infty$. The problem is, this $f$ is not defined on $\mathbb R$ and also, there is no $x$ with $f(x_n) \to f(x)$.
Is it possible that $f(x_n) \to f(x)$ implies $x_n \to x$?
|
I have been told that \[
[\hat x^2,\hat p^2]=2i\hbar (\hat x\hat p+\hat p\hat x)
\] illustrates ordering ambiguity.
What does that mean?
I tried googling but to no avail.
Thank you.
LM:The ordering ambiguity is the statement – or the "problem" – that for a classical function \(f(x,p)\), or a function of analogous phase space variables, there may exist multiple operators \(\hat f(\hat x,\hat p)\) that represent it i.e. that have the right form in the \(\hbar\to 0\) limit. In particular, the quantum Hamiltonian isn't uniquely determined by the classical limit.
This ambiguity appears even if we require the quantum operator corresponding to a real function to be Hermitian and \(x^2 p^2\) is the simplest demonstration of this "more serious" problem. (The Hermitian parts of \(\hat x\hat x\hat p\) and \(\hat x \hat p \hat x\) or other cubic expressions are the same which is a fact that simplifies a step in the calculation of the hydrogen spectrum using the \(SO(4)\) symmetry.) On one hand, the Hermitian part of \(\hat x^2 \hat p^2\) is\[
\hat x^2 \hat p^2 - [\hat x^2,\hat p^2]/2 = \hat x^2\hat p^2 -i\hbar (\hat x\hat p+\hat p\hat x)
\] where I used your commutator.
On the other hand, we may also classically write the product and add the hats as \(\hat x \hat p^2\hat x\) which is already Hermitian. But\[
\hat x \hat p^2\hat x = \hat x^2 \hat p^2+\hat x[\hat p^2,\hat x] = \hat x^2\hat p^2-2i\hbar\hat x\hat p
\] where you see that the correction is different because \(\hat x\hat p+\hat p\hat x\) isn't quite equal to \(2\hat x\hat p\) (there's another, \(c\)-valued commutator by which they differ). So even when you consider the Hermitian parts of the operators "corresponding" to classical functions, there will be several possible operators that may be the answer. The \(x^2p^2\) is the simplest example and the two answers we got differed by a \(c\)-number. For higher powers or more general functions, the possible quantum operators may differ by \(q\)-numbers, nontrivial operators, too.
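As a quick sanity check of the commutators used above, one can represent \(\hat x,\hat p\) by truncated harmonic-oscillator matrices; the truncation only spoils the canonical commutator near the cutoff, so the identity holds exactly on a block away from it (a numerical sketch, not part of the original argument):

```python
import numpy as np

N, hbar = 40, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
ad = a.conj().T                                 # creation operator
x = np.sqrt(hbar / 2) * (a + ad)                # position (m = omega = 1)
p = 1j * np.sqrt(hbar / 2) * (ad - a)           # momentum

lhs = x @ x @ p @ p - p @ p @ x @ x             # [x^2, p^2]
rhs = 2j * hbar * (x @ p + p @ x)               # 2 i hbar (x p + p x)

k = N - 6                                       # stay away from the truncation edge
print(np.max(np.abs((lhs - rhs)[:k, :k])))      # ~1e-13, i.e. zero up to rounding
```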
This is viewed as a deep problem (perhaps too excessive a description) by the physicists who study various effective quantum mechanical models such as those with a position-dependent mass – where we need \(p^2/2m(x)\) in the kinetic energy and by an expansion of \(m(x)\) around a minimum or a maximum, we may get the \(x^2p^2\) problem suggested above.
But the ambiguity shouldn't really be surprising because it's the quantum mechanics, and not the classical physics, that is fundamental. The quantum Hamiltonian contains all the information, including all the behavior in the classical limit. On the other hand, one can't "reconstruct" the full quantum answer out of its classical limit. If you know the limit \(\lim_{\hbar\to 0} g(\hbar)\) of one variable \(g(\hbar)\), it clearly doesn't mean that you know the whole function \(g(\hbar)\) for any \(\hbar\).
Many people don't get this fundamental point because they think of classical physics as the fundamental theory and they consider quantum mechanics just a confusing cherry on a pie that may nevertheless obtained by quantization, a procedure they consider canonical and unique (just hat addition). It's the other way around, quantum mechanics is fundamental, classical physics is just a derivable approximation valid in a limit, and the process of quantization isn't producing unique results for a sufficiently general classical limit.
Quantum field theory
The ordering ambiguity also arises in field theory. In that case, all the ambiguous corrections are actually divergent, due to short-distance singularities, and the proper definition of the quantum theory requires one to understand renormalization. At the end, what we should really be interested in is the space of relevant/consistent quantum theories, not "the right quantum counterpart" of a classical theory (the latter isn't fundamental so it shouldn't stand at the beginning or base of our derivations).
In the path-integral approach, one effectively deals with classical fields and their classical functions so the ordering ambiguities seem to be absent; in reality, all the consequences of these ambiguities reappear anyway due to the UV divergences that must be regularized and renormalized. I have previously explained why the uncertainty principle and nonzero commutators are respected by the path-integral approach due to the non-smoothness of the dominant paths or histories. There are additional things to say once we start to deal with UV-divergent diagrams. The process of regularization and renormalization depends on the subtraction of various divergent counterterms, to get the finite answer, which isn't quite unique, either (the finite leftover coupling may be anything and the value has to be determined by a measurement).
That's why the renormalization ambiguities are just the ordering ambiguities in a different language – one that is more physical and less dependent on the would-be classical formalism, however. Whether we study those things as ordering ambiguities or renormalization ambiguities, the lesson is clear: the space of possible classical theories isn't the same thing as the space of possible quantum theories and we shouldn't think about the classical answers when we actually want to do something else – to solve the problems in quantum mechanics.
Incidentally, for TRF readers only: the "space of possible theories" is similar to the "moduli space" and the Seiberg-Witten analysis of the \(\mathcal{N}=2\) gauge theories is the prettiest among the simplest examples of the fact that the quantum moduli space is different than the classical one – often much richer and more interesting. You should never assume the classical answers when you're doing quantum mechanics because quantum mechanics has the right to modify all these answers. And this world is quantum mechanical, stupid.
|
Your alternative method uses the non-relativistic approximation twice : first when you insert a value for $v/c$ in the numerator, second when you insert a value for $v$ in the denominator. Because of this your error is compounded, so you get a result which is less accurate than if you inserted the approximation only once.
Whenever you make approximations, it is most accurate to do so only once, at the last possible step, as follows :
First identify a dimensionless variable $y$ which depends on the independent variable in your problem. Examples: $\beta=v/c$ if the independent variable is $v$, or $x=T/mc^2$ if it is $T=eV$ (see below).
Obtain a formula for the quantity you are interested in, as a function of $y$.
Expand the formula into a power series in $y$, using Taylor's theorem. Check that the expansion is valid for the range of values of $y$ which interest you. For example, the expansion $$(1+y)^n=1+ny+\frac12 n(n-1)y^2+\frac16n(n-1)(n-2)y^3+...$$ is valid provided that $|y|\lt 1$.
Finally throw out higher powers of the dimensionless variable, depending on how much accuracy you want to have.
The accurate formula which you got can be written as $$\lambda=\frac{hc}{\sqrt{2Tmc^2+T^2}}=\lambda_0 (1+\frac12 x)^{-1/2}= \lambda_0 (1-\frac14 x+\frac{3}{32}x^2-\frac{5}{128}x^3-...)$$ where $\lambda_0=\frac{h}{\sqrt{2mT}}$ is the de Broglie wavelength assuming the electron is non-relativistic, and $x=\frac{T}{mc^2}=\frac{eV}{mc^2}$. This expansion is valid because $T=85keV$ and $mc^2=511keV$ (the rest mass of the electron) so $\frac12 x\lt 1$.
Note that the approximation is introduced only at the last step, by ignoring the higher powers of $x$. The full infinite series expansion, and the steps before it, are 100% accurate.
Using the figures given, we get $\lambda_0=4.2066$ pm and $x=0.16634$, so $\lambda\approx4.0317$ pm when we ignore terms in $x^2$ or higher powers. Using the exact equation we get $\lambda=4.0419$ pm.
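If it helps, here is a quick numerical check of these figures (a minimal Python sketch; the physical constants below are standard values that I plugged in myself, they are not part of the original data):

import math

# Constants (approximate standard values; my own inputs, not from the question)
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
m = 9.1093837015e-31    # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C

V = 85e3                # accelerating potential, volts
T = e * V               # kinetic energy, J
mc2 = m * c**2          # rest energy, about 511 keV

x = T / mc2                                        # dimensionless expansion variable
lam0 = h / math.sqrt(2 * m * T)                    # non-relativistic de Broglie wavelength
lam_first_order = lam0 * (1 - x / 4)               # keep only the linear term in x
lam_exact = h * c / math.sqrt(2 * T * mc2 + T**2)  # exact relativistic formula

print(f"x           = {x:.5f}")
print(f"lambda_0    = {lam0 * 1e12:.4f} pm")
print(f"first order = {lam_first_order * 1e12:.4f} pm")
print(f"exact       = {lam_exact * 1e12:.4f} pm")

Running it reproduces $\lambda_0\approx 4.207$ pm, the first-order value $\approx 4.032$ pm and the exact value $\approx 4.042$ pm.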
Your method first makes the classical approximation $T=eV\approx \frac12 mv^2$ before anything else. Then $$\frac{v^2}{c^2}\approx\frac{2T}{mc^2}=2x$$ $$mv\approx\sqrt{2mT}$$ $$\lambda=\frac{h\sqrt{1-\frac{v^2}{c^2}}}{mv}\approx \lambda_0 (1-2x)^{1/2}\approx \lambda_0 (1-x-\tfrac12 x^2-\tfrac12 x^3-...)$$
Compare this power series with the one above. The term in $x$ now has a coefficient of $1$ instead of $\frac14$, so the correction to $\lambda_0$ is $4\times$ what it should be. The previous line was already an approximation, so the power series is also an approximation regardless of how many terms we retain, unlike the series above, which gets arbitrarily close to 100% accuracy as we retain more and more powers.
Ignoring terms in $x^2$ or higher powers, we get $\lambda \approx 3.5069$ pm instead of $\lambda \approx 4.0317$ pm.
The larger the value of $T=eV$, the larger $x$ is, and the greater the difference between the correct approximation $\lambda\approx \lambda_0 (1-\frac14 x)$ and your incorrect approximation $\lambda\approx \lambda_0 (1-x)$. And as noted below, if $x\gt \frac12$ then your approximation $\lambda\approx \lambda_0 (1-2x)^{1/2}$ gives an imaginary result, whereas the exact formula $\lambda= \lambda_0 (1+\frac12x)^{-1/2}$ gives a real result because $x\ge 0$ for all values of $V$.
There is no sharp cut-off point for $V$ at which the electron suddenly switches over from being classical to relativistic, or above which your approximation suddenly ceases to be a good one. The change is gradual.
At $V=33$ kV the value of $x$ is less than half of what it is at $V=85$ kV, so the difference between your erroneous calculation and the correct one is smaller. At both values of $V$ your method gives a first-order correction which is $4\times$ bigger than that of the correct approximate formula.
In fact, if the accelerating potential is such that $T=eV \gt \frac12 mc^2$, then the classical approximation gives $v \gt c$, so the factor $\sqrt{1-\frac{v^2}{c^2}}$ in the numerator would be imaginary, and so would the de Broglie wavelength.
|
Let $b\in \mathbb{R}^n$ and $c>0$. Assume $g \in C(\mathbb{R}^n)$ has compact support and $f = f(x,t)$, $f \in C_1^2(\mathbb{R}^n \times [0,\infty))$ has compact support. I'm trying to solve the following IVP via Fourier transform: $$ \left\{ \begin{array}{rcll} u_t+b \cdot \nabla u-c^2 \Delta u &=& f & \text{ in } \mathbb{R}^n \times(0,\infty)\\ u &=& g& \text{ on } \mathbb{R}^n\times \{t=0\} \end{array} \right. $$
Things are getting a little iffy for me right around when I'm solving for $\mathcal{F}(u(x,t))=\hat{u}(y,t)$ so that I can apply an inverse Fourier transform to get the end solution.
I get to an ODE, namely
$$\left\{ \begin{array}{l} \hat{u}_t + \hat{u}\,(i\, b \cdot y + c^2 \left| y \right|^2)=\hat{f}\\ \hat{u}(y,0) = \hat{g} \end{array} \right.$$
However, it's getting a little awkward solving for $u$ from here, that is, if what I've done so far is correct. How do I proceed in the calculation here?
EDIT: I'm certain that the solution to the ODE IVP is
$$\hat{u}(y,t) = e^{-t(i\, b \cdot y + c^2 |y|^2)}\int_0^t \hat{f}(y,s)\,e^{s(i\, b \cdot y + c^2 \left| y \right|^2)}\,ds + \hat{g}(y)\,e^{-t(i\, b \cdot y + c^2 |y|^2)}$$
So I guess I am a little iffy on how to compute the inverse Fourier transform of that expression.
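In case it is useful, here is a quick symbolic sanity check of that formula (a sympy sketch in a single Fourier variable; the concrete numbers $b=2$, $c=1$, $y=7/10$ and the sample $\hat f$, $\hat g$ are my own illustrative choices, not part of the problem):

import sympy as sp

# Check, at a fixed value of y, that the proposed formula solves
#   u_hat_t + (i b.y + c^2 |y|^2) u_hat = f_hat,   u_hat(y,0) = g_hat.
t, s = sp.symbols('t s', real=True)
b, c, y = 2, 1, sp.Rational(7, 10)       # illustrative fixed numbers
a = sp.I * b * y + c**2 * y**2           # coefficient multiplying u_hat

f_hat = sp.sin(s)                        # sample f_hat(y, s) at this fixed y
g_hat = sp.Rational(3, 5)                # sample g_hat(y) at this fixed y

u_hat = sp.exp(-a * t) * (sp.integrate(f_hat * sp.exp(a * s), (s, 0, t)) + g_hat)

residual = sp.diff(u_hat, t) + a * u_hat - sp.sin(t)   # plug into the ODE
print(sp.simplify(residual))                  # 0, so the ODE is satisfied
print(sp.simplify(u_hat.subs(t, 0) - g_hat))  # 0, so the initial condition holds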
|
Three questions about this equation:
$ \displaystyle\nabla\times\mathbf{E}=-\frac{\partial \mathbf{B}}{\partial t} $
1. If I solve this equation with Mathematica, I find the magnetic field $b(x,y,z,t)$, $b:\mathbb{R}^4\rightarrow\mathbb{R}^3$, right?
2. I have put an arbitrary function $e:\mathbb{R}^4\rightarrow\mathbb{R}^3$ as the electric field for this experiment, but how can I calculate that function for a real case? What do I need to do this?
3. Once I have both the electric and magnetic fields, how can I compose the electromagnetic field?
Needs["VectorAnalysis`"](*Electric field e : R^4->R^3 *)e[x_, y_, z_, t_] := {x - 3 y, 4 y + t, y + z + t};Maxwell = Curl[e[x, y, z, t]] == -D[b[x, y, z, t], t];DSolve[Maxwell, b[x, y, z, t], {x, y, z, t}]
|
During a discussion with my professor, he asked me about relativistic resistance transformation. Firstly I started with the formula $R=\frac{\rho L}{s}$, where $\rho$ is electric resistivity. So if some resistor moves with relativistic speed its length contracts. And this means that its resistance changes like its length. Its resistivity doesn't change because of Ohm's law $E=\rho I$. Current changes like an electric field and that means that $\rho=\text{const}$. But then the professor said that to find this transformation I can't use the first formula, and I should find it from some laws that have resistance in it. So I used $$P=I^2R \tag{2}$$ and $$R=\frac{U}{I} \tag{3}.$$ From formula (3) we find that resistance doesn't change because difference of potentials and current change the same way. But from formula (2) we find that resistance changes like $\frac{R}{\gamma^3} $ (where $\gamma$ is Lorentz factor). So basically I have three variants, but the professor said that I need to find one result. Any ideas?
Repeating what @Dale said in a slightly different way, if you want relativistic laws, it is better to work with relativistic quantities.
The response of matter to applied electromagnetic field is generally expressed through charge density ($\rho$) and current density ($\mathbf{J}$). The 4D equivalent is the four-current:
$J^\mu=\left(c\rho, \mathbf{J}\right)^\mu$
Next, the electric and magnetic fields are not covariant quantities in SR, so use the electromagnetic tensor: $F^{\mu\nu}$ (https://en.wikipedia.org/wiki/Electromagnetic_tensor)
Now, conductivity, which in general is a tensor, arises as a result of postulating a linear relationship between the applied electric field and the current density.
So we could say:
$J^\mu=\sigma^{\mu}_{\nu\eta}F^{\nu\eta}$
Now this general definition hoovers up pretty much all the simple types of electromagnetic response. But that's the point, what looks like conductivity response to one observer will look very different to another one.
You could postulate that in your rest frame ($\bar{S}$) the response is solely that of the isotropic conductivity ($\sigma$), then the only non-zero components are ($c$ is the speed of light):
$\bar{\sigma}^{1}_{01}=-\bar{\sigma}^{1}_{10}=c\sigma/2$
$\bar{\sigma}^{2}_{02}=-\bar{\sigma}^{2}_{20}=c\sigma/2$
$\bar{\sigma}^{3}_{03}=-\bar{\sigma}^{3}_{30}=c\sigma/2$
Then consider the lab-frame relative to which the rest-frame is moving along x with speed $v$. You then transform your tensor $\sigma^\mu_{\eta\nu}=\frac{\partial x^\mu}{\partial \bar{x}^\alpha}\frac{\partial \bar{x}^\beta}{\partial x^\eta}\frac{\partial \bar{x}^\kappa}{\partial x^\nu}\bar{\sigma}^\alpha_{\beta\kappa}$. The only components to change are the ones involving x:
I get
$\sigma^{0}_{10}=-\sigma^{0}_{01}=\left(\frac{v}{c}\right)\gamma\cdot\frac{c\sigma}{2}$
$\sigma^{1}_{10}=-\sigma^{1}_{01}=\gamma\cdot\frac{c\sigma}{2}$
So you could say that in lab-frame the conductivity in x direction will change by $\gamma=1/\sqrt{1-\left(v/c\right)^2}$. This is the second equation, but that's not the end of the story. There is the first equation, which in non-relativistic terms would lead to:
$\rho=\frac{v\sigma}{c^2}\gamma\cdot E_x$
Where $E_x$ is the x-component of the electric field. So you would see a build-up of charge density due to applied electric field.
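For what it's worth, here is a small numerical sketch of that transformation (numpy; the concrete values $\sigma=1$, $v=0.6c$, $c=1$, the boost direction and my index conventions are assumptions of mine, so overall signs can differ from the components written above, but the $\gamma$ factors come out the same):

import numpy as np

# Boost the rest-frame response tensor sigma^mu_{nu eta} to the lab frame and
# read off the gamma factors.  Values below are illustrative only.
c, sigma, v = 1.0, 1.0, 0.6
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)

# Boost along x taking rest-frame components to lab-frame components
L = np.array([[gamma,         gamma * v / c, 0, 0],
              [gamma * v / c, gamma,         0, 0],
              [0,             0,             1, 0],
              [0,             0,             0, 1]])
Linv = np.linalg.inv(L)

# Rest frame: sigma^i_{0i} = -sigma^i_{i0} = c*sigma/2 for i = 1, 2, 3
s_rest = np.zeros((4, 4, 4))
for i in (1, 2, 3):
    s_rest[i, 0, i] = c * sigma / 2
    s_rest[i, i, 0] = -c * sigma / 2

# sigma^mu_{eta nu} = (dx^mu/dxbar^a)(dxbar^b/dx^eta)(dxbar^k/dx^nu) sbar^a_{bk}
s_lab = np.einsum('ma,be,kn,abk->men', L, Linv, Linv, s_rest)

print(abs(s_lab[1, 0, 1]) / (c * sigma / 2))   # gamma: x-conductivity rescaled
print(abs(s_lab[0, 0, 1]) / (c * sigma / 2))   # (v/c)*gamma: charge-density response
print(gamma, (v / c) * gamma)                  # 1.25 and 0.75 for v = 0.6c

The transverse components (indices 2 and 3) come out unchanged, as expected for a boost along x.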
So, as you now see, the picture is more complex than just finding an equation for the resistance $R$.
So basically I have three variants, but the professor said that I need to find one result. Any ideas?
Circuit theory is inherently nonrelativistic and so there is no relativistic circuit theory. This should actually not be surprising since circuit theory does not use the concept of space at all and relativity is a theory of spacetime.
The issue is in the assumptions that circuit theory is based on. The three assumptions are: (1) there is no net charge on any circuit element, (2) there is no magnetic flux outside of any circuit element, (3) the circuit is small so that electromagnetic influences can be assumed to propagate instantaneously.
Assumption (3) is expressly prohibited by relativity, and since a current density in one frame is both a current and a charge density in another, (1) will generally not be satisfied in all frames. Therefore, it is fatally flawed to attempt to find a relativistic version of Ohm's law based on the equations of circuit theory since those laws are inherently non-relativistic.
However, you could take the form of Ohm’s law that is based directly on Maxwell’s equations: $\mathbf J = \sigma \mathbf E$. This law is based on Maxwell’s equations, which are completely relativistic. Since we know how $\mathbf J$ and $\mathbf E$ both transform then we can calculate how $\sigma$ transforms.
|
MathRevolution wrote: [Math Revolution GMAT math practice question]
What is the perimeter of a rectangle?
1) The square of the diagonal is \(52\).
2) The area of the rectangle is \(24\).
\(? = {\text{perim}}\left( {{\text{rectangle}}} \right)\)
Excellent opportunity to GEOMETRICALLY BIFURCATE each statement alone:
\(\left( 1 \right)\,\,\,{\text{dia}}{{\text{g}}^{\,{\text{2}}}} = 52\,\,\,\,\,\,\mathop \Rightarrow \limits^{{\text{diag}}\,\, > \,\,0} \,\,\,{\text{diag}}\,\,{\text{unique}}\,\,\,{\text{but}}\,\,\,{\text{INSUFF}}.\)
\(\left( 2 \right)\,\,\,{\text{area}} = 24\,\,\,\,\, \Rightarrow \,\,\,\,{\text{INSUFF}}{\text{.}}\)
(See the image attached!)
\(\left( {1 + 2} \right)\)
Let L and W be the length and width of our focused rectangle. Hence:
\(? = {\text{2}}\left( {L + W} \right)\)
\(\left( {1 + 2} \right)\,\,\left\{ \begin{gathered}
{L^2} + {W^2} = 52 \hfill \\
2LW = 2 \cdot 24\,\,\,\, \hfill \\
\end{gathered} \right.\,\,\,\,\mathop \Rightarrow \limits^{\left( + \right)} \,\,\,\,\,{\left( {L + W} \right)^2} = 52 + 48 = 100\)
\({\left( {L + W} \right)^2} = 100\,\,\,\,\mathop \Rightarrow \limits^{L + W\,\, > \,\,0} \,\,\,\,L + W\,\,\,{\text{unique}}\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,? = 2\left( {L + W} \right)\,\,\,\,{\text{unique}}\)
This solution follows the notations and rationale taught in the GMATH method.
Regards,
fskilnik.
Attachment: 20Set18_5m.gif
|
Let $S_n$ be the symmetric group of all the permutations of $\{1,\ldots,n\}$. Recall that a permutation $\sigma\in S_n$ is called a derangement if $\sigma(k)\not=k$ for all $k=1,\ldots,n$.
Motivated by the well-known result that $\sum_{k=m}^n\frac1k\not\in\mathbb Z$ whenever $n\ge m>1$ (Kurschak, 1918), here I ask the following question.
QUESTION: Is it true that whenever $n\ge m\ge1$ we have $$\sum_{k=m}^n\frac{\sigma(k)}k\not\in\mathbb Z$$ for all derangements $\sigma\in S_n$?
If $n$ is a prime number $p$ and $\sum_{k=m}^n\frac{\sigma(k)}k\in\mathbb Z$ with $\sigma\in S_n$, then $\sigma$ is not a derangement since $\sigma(p)=p$. Thus the question has a positive answer if $n$ is prime. I conjecture that the question always has a positive answer, and I have verified this for every $n=1,\ldots,11$. For $n=4$, note that $$\frac41+\frac12+\frac33+\frac24\in\mathbb Z$$ but the permutation $(4,1,3,2)$ of $\{1,2,3,4\}$ is not a derangement since it fixes the number $3$.
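For what it's worth, here is a brute-force check in Python for small $n$ (a sketch only; exhausting all derangements is feasible here just for small $n$):

from fractions import Fraction
from itertools import permutations

# Check the conjecture for all derangements sigma of {1,...,n} and all 1 <= m <= n.
def conjecture_holds(n):
    for sigma in permutations(range(1, n + 1)):
        if any(sigma[k - 1] == k for k in range(1, n + 1)):
            continue                      # sigma is not a derangement, skip it
        for m in range(1, n + 1):
            s = sum(Fraction(sigma[k - 1], k) for k in range(m, n + 1))
            if s.denominator == 1:        # the sum is an integer: a counterexample
                return False
    return True

for n in range(1, 8):
    print(n, conjecture_holds(n))         # True for every n tested here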
|
OpenCV 3.4.7
Open Source Computer Vision
This tutorial demonstrates how to use the F-transform for image filtering.
As I showed in the previous tutorial, the F-transform is a tool of fuzzy mathematics that is highly usable in image processing. Let me rewrite the formula using the kernel \(g\) introduced before as well:
\[ F^0_{kl}=\frac{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} \iota_{kl}(x,y) g(x,y)}{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} g(x,y)}, \]
where \(\iota_{kl} \subset I\) is the image patch centered at pixel \((k \cdot h,l \cdot h)\) and \(g\) is a kernel. More details can be found in related papers.
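For readers who prefer code to formulas, here is a minimal numpy sketch of the \(F^0\) components above (the triangular kernel and the reflective border handling are illustrative choices of mine, not the API of OpenCV's fuzzy-transform module):

import numpy as np

def f0_components(img, h):
    # Each component is a kernel-weighted average of the patch centered at (k*h, l*h)
    g1 = 1.0 - np.abs(np.arange(-h, h + 1)) / (h + 1)     # 1-D triangular membership
    g = np.outer(g1, g1)                                  # separable 2-D kernel, size 2h+1
    padded = np.pad(img.astype(float), h, mode='reflect')
    ks = range(0, img.shape[0], h)
    ls = range(0, img.shape[1], h)
    F0 = np.zeros((len(ks), len(ls)))
    for a, k in enumerate(ks):
        for b, l in enumerate(ls):
            patch = padded[k:k + 2 * h + 1, l:l + 2 * h + 1]
            F0[a, b] = np.sum(patch * g) / np.sum(g)      # the F^0_{kl} formula
    return F0

img = np.arange(100, dtype=float).reshape(10, 10)
print(f0_components(img, h=2).shape)   # (5, 5) grid of components for a 10x10 image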
Image filtering changes input in a defined way to enhance or simply change some concrete feature. Let me demonstrate some simple blur.
As a first step, we load input image.
Following the F-transform formula, we must specify a kernel.
So now, we have two kernels that differ in radius. A bigger radius leads to a bigger blur.
The filtering itself is applied as shown below.
Output images look as follows.
|
Let $a$ and $n\ge3$ be integers. Suppose that $a^{n-1} \equiv 1 \pmod n$, while $a^{(n-1)/p} \not\equiv 1 \pmod n$ for every prime $p$ dividing $n-1$, and I want to show that $n$ is prime.
First of all, we know that $(a,n) = 1$ because if $(a,n) = g > 1$, then $g \mid n$ which means $a^{n-1} \equiv 1 \pmod g$, but this is impossible since $g \mid a \implies g \mid a^{n-1}$. Since $a^{n-1} \equiv 1 = a^0 \pmod n$, it follows that $n-1 \equiv 0 \pmod{\phi(n)}$ (this is the part I am having trouble proving, I know that the converse always holds, and I want to show that in this case the statement also holds), which means $n - 1 = k \cdot\phi(n)$, and now we just need to show $k = 1$. But we have that $(n-1)/p \not \equiv 0 \pmod {\phi(n)}$ which means $\phi(n) \nmid (n-1)/p$ for all $p$ dividing $n-1$. Thus, $\phi(n) = n-1$.
Is it possible to prove the statement preceding the text in bold? Otherwise, what is the right approach to take for this problem?
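For experimentation, here is a small Python sketch of the criterion in the question (it is the classical Lucas primality test); the brute-force search for a witness $a$ is only for illustration:

from sympy import primefactors

# Look for a witness a with a^(n-1) = 1 (mod n) and a^((n-1)/p) != 1 (mod n)
# for every prime p dividing n-1.  The criterion says such an a exists only
# when n is prime (for a prime n, any primitive root works).
def lucas_witness(n):
    for a in range(2, n):
        if pow(a, n - 1, n) != 1:
            continue
        if all(pow(a, (n - 1) // p, n) != 1 for p in primefactors(n - 1)):
            return a
    return None

for n in (7, 91, 97, 561):     # 91 = 7*13 and 561 (a Carmichael number) are composite
    print(n, lucas_witness(n)) # a witness is found exactly for the primes 7 and 97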
|
I'm studying the construction of the $\mathrm{Sing}$ functor in Morel-Voevodsky ``$\mathbb{A}^1$-homotopy theory of schemes'' and I was trying to understand the properties of its left adjoint, the realization functor.
To start with, let $\mathcal{C}$ be a site (with an interval, in the sense of M-V) and let $\mathsf{Sh}(\mathcal{C})$ (resp. $\mathsf{sSh}(\mathcal{C})$) be the category of sheaves (resp. simplicial sheaves) on $\mathcal{C}$.
The category of simplicial sheaves is enriched, tensored and cotensored over the category of sheaves in an obvious way. In particular, for a sheaf $F$ and a simplicial sheaf $X$ we can define the tensor product $F\otimes X$ of $F$ and $X$ as the product of the constant simplicial sheaf at $F$ with $X$.
Now, if $X$ is a simplicial sheaf and $D^\bullet$ is a cosimplicial simplicial sheaf, we define the tensor product $X\otimes_{\Delta^{\text{op}}} D^\bullet$ of $X$ and $D^\bullet$ (or realization of $X$ with respect to $D^\bullet$, denoted $|X|_{D^\bullet}$ in Morel-Voevodsky) as the coend of the functor:
\begin{align*} X\otimes D^\bullet \colon \Delta^{\text{op}}\times\Delta&\to \mathsf{sSh}(\mathcal{C})\\ ([n],[m])&\mapsto X_n\otimes D^m \end{align*}
This construction gives rise to a bifunctor:
$$\otimes_{\Delta^{\text{op}}}\colon \mathsf{sSh}(\mathcal{C})\times \mathsf{sSh}(\mathcal{C})^{\Delta}\to \mathsf{sSh}(\mathcal{C})$$
Now, a cosimplicial simplicial sheaf $D^\bullet$ is said to be unaugmentable if the morphism $(d^0,d^1)\colon D^0\amalg D^0\to D^1$ is a monomorphism. Equivalently, $D^\bullet$ is unaugmentable iff the equalizer of $d^0$ and $d^1$ is empty. Those are the cofibrant objects in the Reedy model structure on $\mathsf{sSh}(\mathcal{C})^{\Delta}$, for any injective model structure on simplicial sheaves.
Morel and Voevodsky prove that a cosimplicial simplicial sheaf $D^\bullet$ is unaugmentable if and only if the functor $-\otimes_{\Delta^{\text{op}}}D^\bullet$ preserves monomorphisms. Moreover, for some specific unaugmentable $D^\bullet$ they show that the functor $-\otimes_{\Delta^{\text{op}}}D^\bullet$ also preserves $I$-local equivalences (where $I$ is an interval in the sense of M-V and we localize the local-injective model structure on simplicial sheaves with respect to the morphism $I\to *$).
Question: Is the functor $\otimes_{\Delta^\text{op}}$ defined above a left Quillen bifunctor? Here, we put the $I$-local model structure on the category of simplicial sheaves (in particular cofibrations are monomorphisms) and we put the Reedy model structure on the category of cosimplicial objects in the $I$-local model category of simplicial sheaves (In particular cofibrant objects are unaugmentable cosimplicial simplicial sheaves).
|
A potential deviation from ΛCDM described via "eating"

At least sixteen news outlets ran stories about "dark energy that is devouring dark matter" in the recent two days. I think that the journalists started with a University of Portsmouth press release that described the recent publication of a British-Italian paper in Physical Review Letters. The article by Salvatelli and 4 co-authors has been available since June:
What is the paper about? They try to reconcile the cosmic microwave background (CMB) data from WMAP, Planck, and others, and design a model that correctly incorporates cold dark matter (CDM) as well as the growth rate of the large-scale structure (LSS) – that rate currently looks lower than previously thought.
And when they construct their model, which combines these things slightly differently from existing papers in the literature, they find some deviation slightly greater than 2 standard deviations somewhere. So the next question is how this deviation can be cured if it is real. And they effectively propose two possibilities to change the successful ΛCDM model (cosmological constant plus cold dark matter). Either to allow the neutrinos to be massive – that's represented by adding \(m_\nu\) in front of ΛCDM – or by adding "vacuum interactions" i.e. switching to iVCDM (interacting vacuum cold dark matter).
I would say that the neutrino masses should be favored in that case but all the celebrations are directed at the second possibility which is described by the phenomenological equations
\[
\begin{aligned}
\dot \rho_c + 3 H \rho_c &= -Q,\\
\dot V &= +Q.
\end{aligned}
\]
Here, \(V\) is the dark energy (density) – it's equal to \(\Lambda/8\pi G\) in the normal cosmological constant case – and \(\rho_c\) is the cold dark matter (density). The terms \(\pm Q\) have to be the same due to some kind of energy conservation and their nonzero value is what adds "iV" to the iVCDM model. This phenomenological adjustment was first proposed in the mid 1970s so it's very old. A nonzero \(Q\) is supposed to arise from some interactions in the dark matter sector but I think that no convincing particle-physics-based model of this kind exists as of now.
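For concreteness, here is a toy numerical integration of these two equations (a Python sketch; I rewrite them in terms of \(N=\ln a\) so that \(H\) drops out when \(Q=-q_V H V\), and the initial densities and \(q_V=-0.15\) are only illustrative numbers, not the fitted values of the paper):

import numpy as np
from scipy.integrate import solve_ivp

# d(rho_c)/dN = -3 rho_c + q_V V,   dV/dN = -q_V V,   with N = ln(a)
q_V = -0.15

def rhs(N, y):
    rho_c, V = y
    return [-3.0 * rho_c + q_V * V, -q_V * V]

sol = solve_ivp(rhs, [0.0, 2.0], [0.26, 0.69], dense_output=True)  # today -> future
for N in np.linspace(0.0, 2.0, 5):
    rho_c, V = sol.sol(N)
    print(f"ln a = {N:.1f}:  rho_c = {rho_c:.4f},  V = {V:.4f}")

# The cold dark matter dilutes away slightly faster than a^-3 while V slowly grows,
# which is the "dark energy eating dark matter" picture in the press coverage.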
You know, the whole concept that \(V\), the dark energy, is non-constant is highly problematic. A cosmological constant has to be constant in time (that's why it's called a constant) and it's a far more convincing explanation to keep it constant and add something else (like the neutrino masses). If you want to make \(V\) variable, you have to add at least "some degrees of freedom", and if you add too many, so that it will resemble some other particles, the pressure will be much closer to zero than the \(p=-\rho\) value that you approximately need. However, it's a possibility that \(V\) is (and especially was) changing and the authors make various Bayesian and other exercises to quantify how much they think that it's favored over the ΛCDM.
OK, the content of the paper is rather clear. It makes some sense but it's not conclusive and there's no revolution yet (or around the corner). But let's have a look what the media have made out of this technical paper on cosmology.
The Daily Mail's title is
Is dark energy turning the universe into a 'big, empty, boring' place? Mysterious force may be swallowing up galaxies

As you can see, the title has pretty much nothing to do with the paper. First of all, the positive cosmological constant (dark energy) has been known to turn the Universe into a big, empty, boring place for almost a century, and this scenario has been expected to occur in our actual Universe since the experimental discovery of the cosmological constant in the late 1990s. So the new paper they are trying to describe – but they utterly fail – has surely not discovered that this is what the future of the Universe is going to look like.
Are galaxies being swallowed by a mysterious force? First, the sign of \(Q\) may be positive or negative which converts some dark matter into dark energy or vice versa. The sign is really just a technicality and there shouldn't be too much ado about nothing. If you care about the sign, they prefer \(Q=-q_V HV\) with \(q_V\approx -0.15\) so the minus signs cancel and \(Q\) is positive. The displayed equations above indeed say that the dark energy goes up and the dark matter goes down so the press is right and Richard Mitnick is wrong.
And is it right to suggest that the galaxies are being swallowed now? Swallowed is surely a very strong word here – if something is reducing the amount of dark matter, it is very slow today and it occurs rather uniformly everywhere. Equally importantly, most of the paper is about past cosmological epochs. They divide the life of the Universe to four bins, \(q_1,q_2,q_3,q_4\), and it's the fourth bin that contains the present.
They see some small but potentially tantalizing deviations in all the bins. But because this is based on cosmological observations that basically study the distant past only, it is very problematic to suggest that the finding, even if it were real, primarily tells us something about what is happening today.
The other popular articles mainly differ in the choice of verbs. Sometimes the dark matter is being "swallowed up", sometimes it is being "gobbled up". But whatever the wording is, I think it is fair to say that most cosmologists wouldn't be able to reconstruct the point of the paper – even approximately – from these popular stories in the media.
And make no doubts about it. It is very important whether there exists a functioning, particle-physics-based model (ideally embedded in string theory) that describes the microscopic origin of the required extra term – otherwise the extra term looks like a nearly indefensible fudge factor. For those reasons, I think that the neutrino masses would be preferred as an explanation if the deviation were real (which I find very uncertain). There may be other explanations, too. We just need some correction at a generic place in the equations of energy densities. Perhaps, some cosmic strings or cosmic domain walls could help, too.
Whatever the right calculation of these processes and the explanation of any possible deviations is, don't imagine that a big goblin is walking throughout the Universe and gobbles up the galaxies (or the dark matter in them). Even if this term existed, it's just another boring term that the laymen (and perhaps even most experts) would be annoyed by and most likely, the microscopic explanation would be in terms of some basic particle reactions similar to hundreds of particle reactions that are already known.
The journalists' desire to present even the most mundane and inconclusive suggestions as mysterious discoveries that change absolutely everything is unfortunate. It may be needed to get the readers' attention – but this is largely a fault of the previous overhyped stories. The dynamics is similar to a p@rn consumer who demands increasingly more hardc@re p@rn. Where does this trend lead?
By the way, one more news item on theoretical work on the identity of dark matter. Three days ago, Science promoted SIMP (strongly interacting massive particle) models as a replacement for WIMP. SIMPs could account for the observed multi-keV line and offer their own SIMP miracle, matching the WIMP miracle. See e.g. this paper by Feng and others or this paper in PRL for extra ideas.
On Tuesday, SciAm and others promoted a paper arguing that Hooper-like gamma rays from the center of the Milky Way may be created by dark matter explosions instead.
Guth, Linde, Dijkgraaf, and some other well-known characters were hired as instructors at the World Science U(niversity). Register now.
A new team has created an app to detect cosmic rays with people's (and your) smartphones – with the apparent goal to make the Auger experiment obsolete. ;-)
|
Strategy for Starting a Thesis from Scratch

The CLAS Linux Group have posted this information on behalf of others. This information is likely dated. Use at your own risk.

Create a file called thesis.tex to use as your "master" file. This is the big daddy of them all, and will link together the other files that will make up your thesis. Note that a comment in LaTeX is preceded by a percent sign ('%') and extends to the end of the line. Suppose you plan to have 4 chapters and 2 appendices (which you can change later, of course). Here's a good thesis.tex to start with:
% thesis.tex (starting point of my thesis)
% Declare overall type of document (use 12pt report class):
\documentclass[12pt]{report}
% Import uithesisXX.sty file:
\usepackage{uithesis03}
% Specify which pieces (other .tex files) you plan to include later:
\includeonly{ prelude, newcom, chap1, chap2, chap3, chap4, app0, app1, app2 }
\begin{document}
% Include all of the other pieces of your thesis:
\include{prelude} \include{newcom} \include{chap1} \include{chap2} \include{chap3} \include{chap4} \include{app0} \include{app1} \include{app2}
\end{document}
Create a file prelude.tex to specify which features in uithesisXX.sty you are using, your personal information, and your title & abstract. For example:
% prelude.tex (specification of which features in 'uithesisXX.sty' you
% are using, your personal information, and your title & abstract)
% Specify features of 'uithesisXX.sty' you want to use:
\abtitlepgtrue % special title page announcing your abstract, not to be bound with thesis (required)
\abstractpgtrue % special version of your abstract suitable for microfilming, not to be bound with thesis (required)
\titlepgtrue % main title page (required)
\copyrighttrue % copyright page (optional)
\signaturepagetrue % page for thesis committee signatures (required)
\dedicationtrue % dedication page (optional)
\acktrue % acknowledgments page (optional)
\abswithesistrue % abstract to be bound with thesis (optional)
\tablecontentstrue % table of contents page (required)
\tablespagetrue % table of contents page for tables (required only if you have tables)
\figurespagetrue % table of contents page for figures (required only if you have figures)
\title{MY THESIS TITLE} % use all capital letters
\author{My Name} % use mixed upper & lower case
\advisor{Title and Name of My Advisor} % example: Associate Professor John Doe
\dept{Statistics} % your academic department
\submitdate{December 1998} % month & year of your graduation
\newcommand{\abstextwithesis}
{ This is the abstract that I want bound with my thesis. It's optional. }
\newcommand{\abstracttext}
{ This is the abstract to turn in for microfilming. Specific formatting requirements (eg., limits on \#words and use of symbols and fonts) are described in the UMI Microfilming guide, which you can get from the Graduate College. It's required. }
\newcommand{\dedication}
{ This is optional; start with the word "To"; you do not need to end with a period }
\newcommand{\acknowledgement}
{ This is also optional. Use complete sentences here. }
% Take care of things in 'uithesisXX.sty' behind the scenes (it sounds weird, but you need these commands):
\beforepreface
\afterpreface

Place all of your LaTeX new command definitions in a file called newcom.tex:
% newcom.tex (new command definitions)
% Some examples (yours may be different):
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newcommand{\bfx}{{\ensuremath{\mathbf{x}}}}
\newcommand{\thetadd}{{\ensuremath{\bar{\theta}^{(\cdot)}_\cdot}}}

Create separate files chap1.tex, chap2.tex, chap3.tex, chap4.tex for each chapter. Here is an example for the first of our 4 chapters, illustrating the use of sectioning commands (make sure you capitalize the "C" in \Chapter!):
% chap1.tex (Chapter 1 of my thesis)
\Chapter{INTRODUCTION}
\label{intro}
\section{The Problem}
\label{intro:prob}
Here's the problem I'm trying to solve.
\subsection{Background}
\label{intro:prob:bkgd}
Here's some background on the problem, the model for which is described in section~\ref{intro:prob:model}.
\subsection{The Model}
\label{intro:prob:model}
Here's the model I'm using for the problem, whose background is mentioned in section~\ref{intro:prob:bkgd}. Here's a lemma important to the model (for the proof, see Appendix~\ref{app:proof:easy}):
\begin{lemma} \label{easylemma} This is a Lemma. \end{lemma}
\section{Previous Approaches}
Here are some previous approaches to this problem, which is defined in section~\ref{intro:prob}.
If you have one or more appendices, create a tiny little file called app0.tex to implement the "switch" to appendix mode:
% app0.tex (silly little file to separate chapters from appendices)
\appendix

Create the appendices app1.tex and app2.tex, using the sectioning command \Appendix instead of \Chapter:
% app1.tex (will be Appendix A)
\Appendix{SELECTED PROOFS AND DERIVATIONS}
\label{app:proof}
\newpage
\section{Proof of Lemma~\ref{easylemma}} \label{app:proof:easy}
\noindent \textbf{Proof}:\quad The result is clearly obvious.
Add content to the various .tex files to build your thesis. Always compile thesis.tex (not the other .tex files) when you want to view your changes. To shorten compile time, comment out lines in the \includeonly section of thesis.tex that you are not updating at the moment. LaTeX will then include the most recently compiled version of these files. The page numbers may be off, but you will still have access to labels within the commented-out files. For example, if you are working on Chapter 3, you have not made any recent changes to Chapters 1-2, and you haven't started on Chapter 4 or the appendices, you will make your \includeonly statement look like this:
\includeonly{
prelude,
newcom,
%chap1,
%chap2,
chap3,
%chap4,
app0,
%app1,
%app2
}
There is no need to comment out any \include statements. If you add more chapters or appendices, be sure to add statements for them.
|
Sines
The law of sines is an equation relating the lengths of the sides of an arbitrary triangle to the sines of its angles.
If we have the following triangle
The following holds true
$$\frac{\sin A}{a}=\frac{\sin B}{b}=\frac{\sin C}{c}$$
Example
If we have a triangle (this example is also shown in our video lesson) with one side that measures 2, whose opposite angle is 30°, and another angle that is 40°, what is the measure of the side that is opposite to the 40° angle?
We use the law of sines to solve this problem:
$$\frac{\sin 30^{\circ}}{2}=\frac{\sin 40^{\circ}}{x}$$
$$x=2\cdot \frac{\sin 40^{\circ}}{\sin 30^{\circ}}\approx 2.571$$
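A quick numerical check of this example (a short Python snippet; angles are converted to radians before calling the sine function):

import math

# Law of sines: x = a * sin(B) / sin(A), with a = 2, A = 30 deg, B = 40 deg
a, A, B = 2.0, 30.0, 40.0
x = a * math.sin(math.radians(B)) / math.sin(math.radians(A))
print(round(x, 3))   # 2.571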
Video lesson
The example above in video format.
|
I'm looking at continuum mechanics from the perspective of De Groot and Mazur's "Non-Equilibrium Thermodynamics" - the first reference that I've come across that seems to do a good job of bringing together thermodynamics and continuum mechanics. In the course of reading I've started wondering why I've never seen transport of momentum by mass diffusion discussed. Transport at the bulk-average velocity $\vec{v}$ is included in standard Navier-Stokes, but it seems to me that a continuum element experiencing mass diffusion should experience additional momentum transport
\begin{align*} \int_{\partial \Omega} \sum_\alpha (\vec{v}_\alpha - \vec{v})(-\vec{j}_\alpha \cdot \hat{n})\ \mathrm{d}A \end{align*} because flux by diffusion of each species ($\vec{j}_\alpha$) carries additional momentum due to its relative velocity $(\vec{v}_\alpha - \vec{v})$. This would give the continuum equation
\begin{align*} \frac{\partial}{\partial t}(\rho \vec{v}) + \vec{\nabla}\cdot(\rho \vec{v} \vec{v}) = \rho \frac{\text{D}}{\text{D}t}\vec{v} = \vec{\nabla}\cdot(\sigma\underbrace{-\sum_\alpha\vec{j}_\alpha(\vec{v}_\alpha-\vec{v})}_\text{New})+\rho \vec{g} \end{align*}
With some manipulation, I was able to convince myself that this is the same as
\begin{align*} \frac{\partial}{\partial t}(\rho \vec{v}) + \vec{\nabla}\cdot\left(\sum_\alpha \rho_\alpha \vec{v}_\alpha \vec{v}_\alpha \right) = \rho \frac{\text{D}}{\text{D}t}\vec{v} = \vec{\nabla}\cdot\sigma+\rho \vec{g} \end{align*}
i.e., that adding this term is equivalent to using the true momentum transfer term (including diffusion) on the left-hand side. This suggests that I've math'd correctly and that my new term does indicate the difference between the true momentum transfer and the momentum transfer by the bulk velocity.
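For anyone who wants to re-check that manipulation, here is a quick numerical spot check of the underlying tensor identity (a numpy sketch; the random species data are arbitrary test values of mine):

import numpy as np

# Identity used above:
#   sum_a rho_a v_a (x) v_a = rho v (x) v + sum_a j_a (x) (v_a - v),
# with j_a = rho_a (v_a - v) and rho v = sum_a rho_a v_a.
rng = np.random.default_rng(0)
n_species = 4
rho_a = rng.uniform(0.1, 1.0, n_species)        # partial mass densities
v_a = rng.normal(size=(n_species, 3))           # species velocities

rho = rho_a.sum()
v = (rho_a[:, None] * v_a).sum(axis=0) / rho    # barycentric (bulk) velocity
j_a = rho_a[:, None] * (v_a - v)                # diffusive mass fluxes

lhs = np.einsum('a,ai,aj->ij', rho_a, v_a, v_a)
rhs = rho * np.outer(v, v) + np.einsum('ai,aj->ij', j_a, v_a - v)
print(np.allclose(lhs, rhs))                    # True

Taking the divergence of both sides gives exactly the equivalence of the two momentum balances written above.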
Can anyone suggest a reference that accounts for this effect when formulating continuum equations? I intend to go further with this, particularly into the non-equilibrium thermodynamics side (e.g. deriving force/flux relations), and would like a source to check my math against. I've had a look around myself, but every time I search for the keywords "mass diffusion" and "momentum transport" together, I am completely inundated with introductory-level results discussing the analogy between diffusion and viscosity.
|
Consider a molecule of oxygen in a balloon. You know that at nonzero temperature all those molecules are bouncing around in all directions. Of course, the mass of air doesn't have any net motion in any direction. Indeed, the average velocity of each molecule is zero: $$\langle v \rangle = 0 \, .$$ Of course, the average energy of any particular molecule is (from the equipartition theorem) $\langle E \rangle = (3/2) k_b T$. We can rewrite this as \begin{align}\left\langle \frac{p^2}{2m} \right\rangle &= \frac{3}{2} k_b T \\ \left\langle v^2 \right\rangle &= \frac{3 k_b T}{m} \neq 0 \, . \end{align} The point here is that having zero average velocity definitely doesn't mean you have zero average squared velocity. Zero average velocity just means you go left as often as you go right. You can still be bouncing around all over the place.
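Here is a small Monte Carlo illustration of that point (a Python sketch; the temperature and the O2 molecular mass are rough values I assume for illustration):

import numpy as np

# Each Cartesian velocity component of an ideal-gas molecule is Gaussian with
# variance k_B T / m, so <v> = 0 while <v^2> = 3 k_B T / m.
kB = 1.380649e-23          # J/K
T = 300.0                  # K
m = 5.31e-26               # kg, roughly the mass of an O2 molecule

rng = np.random.default_rng(1)
v = rng.normal(0.0, np.sqrt(kB * T / m), size=(1_000_000, 3))

print(v.mean(axis=0))                        # ~ (0, 0, 0): no net drift
print((v**2).sum(axis=1).mean())             # ~ 3 kB T / m
print(3 * kB * T / m)                        # theoretical value
print(np.sqrt((v**2).sum(axis=1).mean()))    # rms speed, roughly 480 m/s here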
You probably have an intuitive idea that there should be something with dimensions of velocity, not velocity squared, which represents the "typical" speed of a particle. That's precisely what $$\sqrt{\langle v^2 \rangle}$$ is. Think of it like this: if half the particles are moving at velocity $v$ and the other half are moving at $-v$, then you get $$\sqrt{\langle v^2 \rangle} = v \, .$$ So in this simple case the square root of the mean square velocity really is just the average speed. You might think a simpler definition of "typical speed" is $$\langle | v | \rangle$$ because this is literally the average speed. In a sense this is simpler, but the root mean square is more useful because, as you can see from your own post, it's directly related to the energy. Being related to the energy means that the root mean square velocity is also easily related to things like pressure, etc., which is why we often use it instead of the average absolute value of velocity (although both are useful).
Of course, this was all a classical picture, but in some ways the quantum case is similar. For example, if you could cook up an experiment to directly measure the square momentum of the particle in the box, the result averaged over an ensemble of particles would be the result from the original post (I use velocity and momentum interchangeably as they differ only by a scale factor $m$). In that sense, the expression calculated for $\langle p^2 \rangle$ is just exactly what it sounds like: it's the average you would find if you measured the squared momentum on an ensemble of particle-in-a-box quantum systems.

Taking the square root to get $\sqrt{\langle p^2 \rangle}$ you just have, like the classical case, a measure of the typical momentum that happens to be really useful because it easily connects to energy, pressure, etc. Of course, the average here is over an ensemble of quantum systems, so by "typical" we mean typical as averaged over the ensemble, not necessarily as averaged over time for a single particle.
There is at least one important difference though: a quantum system in the ground state has nonzero mean square momentum, but cannot give off any energy. This is very different from the classical case where nonzero mean square momentum means there's internal energy which can be transferred to another system (i.e. measured). This is related to the uncertainty principle. Quantum systems have "fluctuations"$^{[a]}$ which are not thermal in nature.

$[a]$: Calling the fact that quantum systems have nonzero mean square momentum in the ground state "fluctuations" is a dangerous game because it can make the reader think those "fluctuations" are just like classical ones. However, they are not. For example, classical fluctuations have finite bandwidth because the underlying microscopic processes have internal time scales. Quantum "fluctuations" do not. What we typically call "quantum fluctuations" are really the manifestation of sampling a wave function as it shows up in the classical measurement apparatus.
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes.
De Bruijn and Newman showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math].
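For readers who want to play with these definitions numerically, here is a short Python sketch (the series truncation at n <= 50 and the integration cutoff u <= 6 are my own choices, harmless because Phi decays super-exponentially; if I recall the normalization correctly, H_0 is proportional to Riemann's xi function on the critical line, so its first real zero should appear near z = 2 x 14.1347 ~ 28.27):

import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=50):
    # Truncated version of the super-exponentially decaying series above
    n = np.arange(1, nmax + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, umax=6.0):
    # H_t(z) = int_0^infty exp(t u^2) Phi(u) cos(z u) du, truncated at umax
    val, _ = quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(z * u), 0, umax, limit=200)
    return val

for z in np.linspace(27.0, 30.0, 7):
    print(z, f"{H(0.0, z):+.3e}")   # expect a sign change between z = 28 and z = 28.5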
|
A nonlocal concave-convex problem with nonlocal mixed boundary data
1.
Laboratoire d'Analyse Nonlinéaire et Mathématiques Appliquées, Université Abou Bakr Belkaïd, Tlemcen, Tlemcen 13000, Algeria
2.
Département de Mathématiques, Université Ibn Khaldoun, Tiaret, Tiaret 14000, Algeria
3.
University of Melbourne, School of Mathematics and Statistics, Peter Hall Building, Parkville, Melbourne VIC 3010, Australia
4.
School of Mathematics and Statistics, 35 Stirling Highway, Crawley, Perth WA 6009, Australia
5.
Dipartimento di Matematica, Università degli studi di Milano, Via Saldini 50,20133 Milan, Italy
6.
Istituto di Matematica Applicata e Tecnologie Informatiche, Consiglio Nazionale delle Ricerche, Via Ferrata 1,27100 Pavia, Italy
The paper is concerned with the problem

$(P_{\lambda}) \equiv\left\{\begin{array}{rcll}(-\Delta)^s u& = &\lambda u^{q}+u^{p}&{\text{ in }}\Omega,\\ u&>&0 &{\text{ in }} \Omega, \\ \mathcal{B}_{s}u& = &0 &{\text{ in }} \mathbb{R}^{N}\backslash \Omega,\end{array}\right.$

where $0<q<1<p$, $N>2s$, $\lambda> 0$, $\Omega \subset \mathbb{R}^{N}$, and $(-\Delta)^s$ is the fractional Laplacian

$(-\Delta)^su(x) = a_{N,s}\;\mathrm{P.V.}\int_{\mathbb{R}^{N}}\frac{u(x)-u(y)}{|x-y|^{N+2s}}\,dy,$

with $a_{N,s}$ a normalizing constant. The mixed nonlocal boundary operator is

$\mathcal{B}_{s}u = u\chi_{\Sigma_{1}}+\mathcal{N}_{s}u\chi_{\Sigma_{2}},$

where $\Sigma_{1}$ and $\Sigma_{2}$ are subsets of $\mathbb{R}^{N}\backslash \Omega$ with $\Sigma_{1} \cap \Sigma_{2} = \emptyset$ and $\overline{\Sigma}_{1}\cup \overline{\Sigma}_{2} = \mathbb{R}^{N}\backslash \Omega$, and $\mathcal{N}_{s}u$ is the nonlocal Neumann-type operator entering $\mathcal{B}_{s}u$. The paper studies existence and multiplicity of positive solutions of $(P_{\lambda})$ depending on $\lambda$ and $p$.

Keywords: Integro differential operators, fractional Laplacian, weak solutions, mixed boundary condition, multiplicity of positive solution. Mathematics Subject Classification: Primary: 35R11, 35A15, 35A16; Secondary: 35J61, 60G22.

Citation: Boumediene Abdellaoui, Abdelrazek Dieb, Enrico Valdinoci. A nonlocal concave-convex problem with nonlocal mixed boundary data. Communications on Pure & Applied Analysis, 2018, 17 (3): 1103-1120. doi: 10.3934/cpaa.2018053
|
Let X be a normal random variable with mean $\mu$ and variance $\sigma^2$. I am wondering how to calculate the third moment of a normal random variable without having a huge mess to integrate. Is there a quicker way to do this?
This is a general method to calculate any moment:
Let $X$ be standard normal. The moment generating function is:
$$M_X(t) = E(e^{tX}) = e^{t^2/2} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-(x-t)^2/2} \:\:dx = e^{t^2/2}$$
since the integrand is the pdf of $N(t,1)$.
Specifically for the third moment you differentiate $M_X(t)$ three times
$M_X^{(3)}(t) = (t^3 + 3t)e^{t^2/2}$
and
$E[X^3] = M_X^{(3)}(0) = 0$
For a general normal variable $Y = \sigma X + \mu$ we have that
$$M_Y(t) = e^{\mu t} M_X(\sigma t) = e^{\mu t + \sigma^2 t^2 /2}$$
and you calculate the $n$th moment as above, i.e. by differentiating $n$ times and setting $t=0$: $$E[Y^n] = M_Y^{(n)}(0).$$
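If you want to automate the differentiation, here is a short sympy sketch of this recipe (the symbol names are mine):

import sympy as sp

# M_Y(t) = exp(mu*t + sigma^2 t^2 / 2); the n-th moment is the n-th derivative at t = 0.
t, mu, sigma = sp.symbols('t mu sigma', real=True, positive=True)
M_Y = sp.exp(mu * t + sigma**2 * t**2 / 2)

third_moment = sp.diff(M_Y, t, 3).subs(t, 0)
print(sp.expand(third_moment))       # mu**3 + 3*mu*sigma**2

# The central third moment E[(Y - mu)^3] then vanishes, as the other answers note:
second = sp.diff(M_Y, t, 2).subs(t, 0)
first = sp.diff(M_Y, t, 1).subs(t, 0)
central = third_moment - 3 * mu * second + 3 * mu**2 * first - mu**3
print(sp.simplify(central))          # 0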
$\mathbb{E}[(X-\mu)^3]=0$ since $X-\mu$ is normally distributed with mean zero, then expand out the cube.
If the distribution of a random variable $X$ is symmetric about $0$, meaning $\Pr(X>x)=\Pr(X<-x)$ for every $x>0$, then its third moment, if it exists at all, must be $0$, as must all of its odd-numbered moments.
If $\operatorname{E}\left[ \,|X^3|\, \right]<\infty$ then the third moment exists. Furthermore, symmetry shows that the positive and negative parts of $\operatorname{E}\left[ X^3 \right]$ (without the absolute-value sign) cancel each other out.
|
Answer
$$x=-1.25,-1.95$$
Work Step by Step
Using the quadratic formula, we obtain: $$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$$ $$x=\frac{-80\pm \sqrt{(80)^2-4(25)(61)}}{2(25)}$$ $$x=-1.25,-1.95$$
|
Good morning fellow mathematicians! I'm finally back and we are going to dive right in. Today we are going to work with quite the amazing integral, but before we can actually get started, I would like to prove a well-known fact regarding functions. Let us consider some arbitrary function, for example $f:\mathbb{R} \rightarrow \mathbb{R}$. We are now going to go through some simple algebraic manipulations and I hope that you are going to agree with all the steps I'm now doing:
\begin{align*} f(x)&=\frac{f(x)}{2}+\frac{f(x)}{2}\\ &=\frac{f(x)}{2}+\frac{f(x)}{2}+\frac{f(-x)}{2}-\frac{f(-x)}{2}\\ &=\frac{f(x)+f(-x)}{2}+\frac{f(x)-f(-x)}{2} \end{align*} Obviously, adding $f(-x)$ only works if we are not leaving the domain of our function. For reasons that will become apparent in a second, we are going to call the first fraction $e(x)$ and the second one $o(x)$, leaving us with the decomposition $f(x)=e(x)+o(x)$. I am now going to claim that $e(x)$ is an even function…
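As a quick sanity check of this decomposition, here is a short sympy snippet (the sample function $f$ is an arbitrary choice of mine):

import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(x) + x**3 * sp.cos(x)        # any function defined on all of R works

e = (f + f.subs(x, -x)) / 2             # candidate even part
o = (f - f.subs(x, -x)) / 2             # candidate odd part

print(sp.simplify(e + o - f))           # 0: the decomposition reproduces f
print(sp.simplify(e.subs(x, -x) - e))   # 0: e is even
print(sp.simplify(o.subs(x, -x) + o))   # 0: o is odd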
|
Keith Winstein,
EDIT: Just to clarify, this answer describes the example given in Keith Winstein's answer on the King with the cruel statistical game. The Bayesian and frequentist answers both use the same information, which is to ignore the information on the number of fair and unfair coins when constructing the intervals. If this information is not ignored, the frequentist should use the integrated Beta-Binomial likelihood as the sampling distribution in constructing the confidence interval, in which case the Clopper-Pearson confidence interval is not appropriate, and needs to be modified. A similar adjustment should occur in the Bayesian solution.
EDIT: I have also clarified the initial use of the Clopper-Pearson interval.
EDIT: Alas, my alpha is the wrong way around, and my Clopper-Pearson interval is incorrect. My humblest apologies to @whuber, who correctly pointed this out, but whom I initially disagreed with and ignored.
The CI using the Clopper-Pearson method is very good

If you only get one observation, then the Clopper-Pearson interval can be evaluated analytically. Suppose the coin comes up as "success" (heads); then you need to choose $\theta$ such that
$$[Pr(Bi(1,\theta)\geq X)\geq\frac{\alpha}{2}] \cap [Pr(Bi(1,\theta)\leq X)\geq\frac{\alpha}{2}]$$
When $X=1$ these probabilities are $Pr(Bi(1,\theta)\geq 1)=\theta$ and $Pr(Bi(1,\theta)\leq 1)=1$, so the Clopper Pearson CI implies that $\theta\geq\frac{\alpha}{2}$ (and the trivially always true $1\geq\frac{\alpha}{2}$) when $X=1$. When $X=0$ these probabilities are $Pr(Bi(1,\theta)\geq 0)=1$ and $Pr(Bi(1,\theta)\leq 0)=1-\theta$, so the Clopper Pearson CI implies that $1-\theta \geq\frac{\alpha}{2}$, or $\theta\leq 1-\frac{\alpha}{2}$ when $X=0$. So for a 95% CI we get $[0.025,1]$ when $X=1$, and $[0,0.975]$ when $X=0$.
Thus, one who uses the Clopper-Pearson confidence interval will never ever be beheaded. Upon observing the interval, it is basically the whole parameter space. But the C-P interval is doing this by giving 100% coverage to a supposedly 95% interval! Basically, the frequentist "cheats" by giving a 95% confidence interval more coverage than he/she was asked to give (although who wouldn't cheat in such a situation? if it were me, I'd give the whole [0,1] interval). If the king asked for an exact 95% CI, this frequentist method would fail regardless of what actually happened (perhaps a better one exists?).
What about the Bayesian interval? (specifically the Highest Posterior Density (HPD) Bayesian interval)
Because we know a priori that both heads and tails can come up, the uniform prior is a reasonable choice. This gives a posterior distribution of $(\theta|X)\sim Beta(1+X,2-X)$. Now, all we need to do is create an interval with 95% posterior probability. Similar to the Clopper-Pearson CI, the cumulative Beta distribution is analytic here also, so that $Pr(\theta \geq \theta^{e} | x=1) = 1-(\theta^{e})^{2}$ and $Pr(\theta \leq \theta^{e} | x=0) = 1-(1-\theta^{e})^{2}$; setting these to 0.95 gives $\theta^{e}=\sqrt{0.05}\approx 0.224$ when $X=1$ and $\theta^{e}= 1-\sqrt{0.05}\approx 0.776$ when $X=0$. So the two credible intervals are $(0,0.776)$ when $X=0$ and $(0.224,1)$ when $X=1$
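For reference, here is a short Python sketch reproducing these four intervals with scipy (the function names are mine):

from scipy.stats import beta

alpha = 0.05

# Clopper-Pearson CI for a single Bernoulli observation (n = 1)
def clopper_pearson(x, n=1):
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

# One-sided 95% credible interval from the Beta(1 + x, 2 - x) posterior
def bayes_interval(x):
    if x == 1:
        return beta.ppf(alpha, 1 + x, 2 - x), 1.0       # (0.2236..., 1]
    return 0.0, beta.ppf(1 - alpha, 1 + x, 2 - x)       # [0, 0.7763...)

for x in (0, 1):
    print(x, clopper_pearson(x), bayes_interval(x))
# x = 0: CP (0, 0.975), Bayes (0, 0.776);  x = 1: CP (0.025, 1), Bayes (0.224, 1)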
Thus the Bayesian will be beheaded for his HPD credible interval in the case when he gets the bad coin and the bad coin comes up heads, which will occur with a chance of $\frac{1}{10^{12}+1}\times\frac{1}{10}\approx 0$.
First observation, the Bayesian Interval is smaller than the confidence interval. Another thing is that the Bayesian would be closer to the actual coverage stated, 95%, than the frequentist. In fact, the Bayesian is just about as close to the 95% coverage as one can get in this problem. And contrary to Keith's statement, if the bad coin is chosen, 10 Bayesians out of 100 will on average lose their head (not all of them, because the bad coin must come up heads for the interval to not contain $0.1$).
Interestingly, if the CP-interval for 1 observation was used repeatedly (so we have N such intervals, each based on 1 observation), and the true proportion was anything between $0.025$ and $0.975$, then coverage of the 95% CI will always be 100%, and not 95%! This clearly depends on the true value of the parameter! So this is at least one case where repeated use of a confidence interval does not lead to the desired level of confidence.
If one is to quote a genuine 95% confidence interval, then by definition there should be some cases (i.e. at least one) of the observed interval which do not contain the true value of the parameter. Otherwise, how can one justify the 95% tag? Would it not be just as valid or invalid to call it a 90%, 50%, 20%, or even 0% interval?
I do not see how simply stating "it actually means 95% or more" without a complementary restriction is satisfactory. This is because the obvious mathematical solution is the whole parameter space, and the problem is trivial. Suppose I want a 50% CI: if it only bounds the false negatives then the whole parameter space is a valid CI using only this criterion.
Perhaps a better criterion is (and this is what I believe is implicit in the definition by Keith) "as close to 95% as possible, without going below 95%". The Bayesian interval would have a coverage closer to 95% than the frequentist (although not by much), and would not go under 95% in the coverage ($\text{100%}$ coverage when $X=0$, and $100\times\frac{10^{12}+\frac{9}{10}}{10^{12}+1}\text{%} > \text{95%}$ coverage when $X=1$).
In closing, it does seem a bit odd to ask for an interval of uncertainty, and then evaluate that interval by the using the true value which we were uncertain about. A "fairer" comparison, for both confidence and credible intervals, to me seems like
the truth of the statement of uncertainty given with the interval.
|
Schedule of the Workshop "Picard-Fuchs Equations and Hypergeometric Motives"

Monday, March 26
10:15 - 10:50 Registration & Welcome coffee
10:50 - 11:00 Opening remarks
11:00 - 12:00 Frits Beukers: Some supercongruences of arbitrary length
12:00 - 13:45 Lunch break
13:45 - 14:45 Alexander Varchenko: Solutions of KZ differential equations modulo p
15:00 - 16:00 Bartosz Naskręcki: Elliptic and hyperelliptic realisations of low degree hypergeometric motives
16:00 - 16:30 Tea and cake
16:30 - 17:30 Roberto Villaflor Loyola: Periods of linear algebraic cycles in Fermat varieties
afterwards Reception

Tuesday, March 27
09:30 - 10:30 Mark Watkins: Computing with hypergeometric motives in Magma
10:30 - 11:00 Group photo and coffee break
11:00 - 12:00 Madhav Nori: Semi-Abelian Motives
12:00 - 13:45 Lunch break
13:45 - 14:45 Wadim Zudilin: A q-microscope for hypergeometric congruences
15:00 - 16:00 Masha Vlasenko: Dwork Crystals and related congruences
16:00 - 16:30 Tea and cake

Wednesday, March 28
09:30 - 10:30 Jan Stienstra: Zhegalkin Zebra Motives, digital recordings of Mirror Symmetry
10:30 - 11:00 Coffee break
11:00 - 12:00 R. Paul Horja: Spherical Functors and GKZ D-modules
12:00 - 13:45 Lunch break
13:45 - 14:45 Duco van Straten: Frobenius structure for Calabi-Yau operators
15:00 - 16:00 Kiran S. Kedlaya: Frobenius structures on hypergeometric equations: computational methods
16:00 - 16:30 Tea and cake

Thursday, March 29
09:30 - 10:30 Danylo Radchenko: Goursat rigid local systems of rank 4
10:30 - 11:00 Coffee break
11:00 - 12:00 Damian Rössler: The arithmetic Riemann-Roch Theorem and Bernoulli numbers
12:00 - 13:45 Lunch break
13:45 - 14:45 Robert Kucharczyk: The geometry and arithmetic of triangular modular curves
15:00 - 16:00 John Voight: On the hypergeometric decomposition of symmetric K3 quartic pencils
16:00 - 16:30 Tea and cake

Friday, March 30: no talks (holiday)

Abstracts

Frits Beukers: Some supercongruences of arbitrary length
In joint work with Eric Delaygue it is shown that truncated hypergeometric sums with parameters ½,...,½ and 1,...,1 and evaluated at the point 1 are equal modulo p to Dwork's unit-root eigenvalue modulo p^2. Congruences modulo p follow directly from Dwork's work; the fact that the congruence holds modulo p^2 accounts for the name 'supercongruence'.
R. Paul Horja: Spherical Functors and GKZ D-modules
Some classical mirror symmetry results can be recast using the more recent language of spherical functors. In this context, I will explain a Riemann-Hilbert type conjectural connection with the GKZ D-modules naturally appearing in toric mirror symmetry.
Kiran S. Kedlaya: Frobenius structures on hypergeometric equations: computational methods
Current implementations of the computation of L-functions associated to hypergeometric motives in Magma and Sage rely on a p-adic trace formula. We describe and demonstrate (in Sage) an alternate approach based on computing the right Frobenius structure on the hypergeometric equation. This gives rise to a conjectural formula for the residue at 0 of this Frobenius structure in terms of p-adic Gamma functions, related to Dwork's work on generalized hypergeometric functions.
Robert Kucharczyk: The geometry and arithmetic of triangular modular curves
In this talk I will take a closer look at triangle groups acting on the upper half plane. Except for finitely many special cases, which are highly interesting in themselves, these are non-arithmetic groups. However, a notion of congruence subgroup is well-defined for these, and there are natural moduli problems that are classified by quotients of the upper half plane by such subgroups, giving rise to models over number fields. These curves have much to do with very classical mathematics, and they build a bridge between the hypergeometric world and the world of Shimura varieties. This is ongoing joint work with John Voight, who is also present at this conference.
Bartosz Naskręcki: Elliptic and hyperelliptic realisations of low degree hypergeometric motives
In this talk we will discuss what the so-called hypergeometric motives are and how one can approach the problem of their explicit construction as Chow motives in explicitly given algebraic varieties. The class of hypergeometric motives corresponds to Picard-Fuchs equations of hypergeometric type and forms a rich family of pure motives with nice L-functions. Following recent work of Beukers-Cohen-Mellit we will show how to realise certain hypergeometric motives of weights 0 and 2 as submotives of elliptic and hyperelliptic surfaces. An application of this work is the computation of minimal polynomials of hypergeometric series with finite monodromy groups and a proof of identities between certain hypergeometric finite sums, mimicking well-known identities for classical hypergeometric series. This is part of a larger program conducted by Villegas et al. to study hypergeometric differential equations (special cases of differential equations "coming from algebraic geometry") from the algebraic perspective.
Madhav Nori: Semi-Abelian Motives
Joint work with Deepam Patel.
Danylo Radchenko: Goursat rigid local systems of rank 4
I will talk about certain rigid local systems of rank 4 considered by Goursat, with emphasis on explicit constructions and examples. The talk is based on joint work with Fernando Rodriguez Villegas.
Damian Rössler: The arithmetic Riemann-Roch theorem and Bernoulli numbers
(with V. Maillot) We shall show that integrality properties of the zero part of the abelian polylogarithm can be investigated using the arithmetic Adams-Riemann-Roch theorem. This is a refinement of the arithmetic Riemann-Roch theorem of Bismut-Gillet-Soulé-Faltings, which gives more information on denominators of Chern classes than the original theorem. We apply this theorem to the Poincaré bundle on an abelian scheme, and the final calculation involves a variant of von Staudt's theorem.
On a canonical class of Green currents for the unit sections of abelian schemes. Documenta Math. 20 (2015), 631–668
Jan Stienstra: Zhegalkin zebra motives, digital recordings of Mirror Symmetry
I present a very simple construction of doubly-periodic tilings of the plane by convex black and white polygons. These tilings are the motives in the title. The vertices and edges in the tiling form a quiver (= directed graph), which comes with a so-called potential provided by the polygons. Dual to this graph is the bipartite graph formed by the black/white polygons and the edges of the tiling. We deform this structure by putting weights on the edges, and connect it with representations of the Jacobi algebra of the quiver with potential and with the Kasteleyn matrix of the bipartite graph.
Duco van Straten: Frobenius structure for Calabi-Yau operators
This is a report on joint work in progress with P. Candelas and X. de la Ossa on the (largely conjectural) computation of Euler factors from Calabi-Yau operators. The method uses Dwork's deformation method, starting from a simple Frobenius matrix at the MUM-point that involves a p-adic version of ζ(3). We give some new applications, in particular to the determination of congruence levels.
Alexander Varchenko: Solutions of KZ differential equations modulo p
Polynomial solutions of the KZ differential equations over a finite field $\mathbb{F}_p$ will be constructed as analogs of multidimensional hypergeometric solutions.
Roberto Villaflor Loyola: Periods of linear algebraic cycles in Fermat varieties
In this talk we will show how a theorem of Carlson and Griffiths can be used to compute periods of linear algebraic cycles inside Fermat varieties of even dimension. As an application we prove that the locus of hypersurfaces containing two linear cycles whose intersection has low dimension is a reduced component of the Hodge locus in the underlying parameter space. Our method can be used to verify similar statements for other kinds of algebraic cycles (for example, complete intersection algebraic cycles) by means of computer assistance. This is joint work with Hossein Movasati.
Masha Vlasenko: Dwork crystals and related congruences
In the talk I will describe a realization of the p-adic cohomology of an affine toric hypersurface which originates in Dwork's work, and give an explicit description of the unit-root subcrystal based on certain congruences for the coefficients of powers of a Laurent polynomial. This is joint work with Frits Beukers.
John Voight: On the hypergeometric decomposition of symmetric K3 quartic pencils
We study the hypergeometric functions associated to five one-parameter deformations of Delsarte K3 quartic hypersurfaces in projective space. We compute all of their Picard–Fuchs differential equations; we count points using Gauss sums and rewrite this in terms of finite field hypergeometric sums; then we match up each differential equation to a factor of the zeta function, and we write this in terms of global $L$-functions. This computation gives a complete, explicit description of the motives for these pencils in terms of hypergeometric motives.
This is joint work with Charles F. Doran, Tyler L. Kelly, Adriana Salerno, Steven Sperber, and Ursula Whitcher.
Mark Watkins: Computing with hypergeometric motives in Magma
We survey the computational vistas that are available for computing with hypergeometric motives in the computer algebra system Magma. Various examples that exemplify the theory will be highlighted.
Wadim Zudilin: A q-microscope for hypergeometric congruences
By examining the asymptotic behavior of certain infinite basic ($q$-)hypergeometric sums at roots of unity (that is, at a "$q$-microscopic" level) we prove polynomial congruences for their truncations. The latter reduce to non-trivial (super)congruences for truncated ordinary hypergeometric sums, which have been observed numerically but only rarely proven. A typical example is the derivation, from a $q$-analogue of Ramanujan's formula $$ \sum_{n=0}^\infty\frac{\binom{4n}{2n}{\binom{2n}{n}}^2}{2^{8n}3^{2n}}\,(8n+1)=\frac{2\sqrt{3}}{\pi}, $$ of the two supercongruences $$ S(p-1)\equiv p\biggl(\frac{-3}p\biggr)\pmod{p^3} \quad\text{and}\quad S\Bigl(\frac{p-1}2\Bigr) \equiv p\biggl(\frac{-3}p\biggr)\pmod{p^3}, $$ valid for all primes $p>3$, where $S(N)$ denotes the truncation of the infinite sum at the $N$-th place and $(\frac{-3}{\cdot})$ stands for the quadratic character modulo $3$.
|
I am given the problem to find $B(y)$ by solving the following integral numerically:
$$B(y) = \int_{z=0}^{\infty}\left[\frac{1}{\sigma(y,z)}\frac{\partial^2 A(y,z)}{\partial z^2}+\frac{\partial A(y,z)}{\partial z}\frac{\partial}{\partial y}\frac{1}{\sigma(y,z)}\right]dz$$
Here, $A(y,z)$ and $\sigma(y,z)$ are both known on a grid of points along the $y$- and $z$-directions.
I am familiar with numerical integration (e.g. Simpson's rule, Gaussian quadrature) but usually it is for simple functions of the form
$$\int_a^b f(x) \;\;dx $$
I am not sure how to proceed with more complex forms like:
$$\int_a^b f'(x)g''(x)\;\;dx $$
Where and how do I begin? Can someone point me to any resources that deal with these sorts of numerical problems?
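For concreteness, here is the kind of approach I imagine, sketched in Python with NumPy/SciPy. The grids and the functions `A` and `sigma` below are placeholders, and the infinite upper limit is truncated to a finite `z`-range; the idea is to approximate the derivatives on the grid first and then feed the tabulated integrand to a standard quadrature rule.

```python
import numpy as np
from scipy.integrate import simpson

# Placeholder grids and data: in practice A[i, j] = A(y[i], z[j]) and
# sigma[i, j] = sigma(y[i], z[j]) come from the tabulated values.
y = np.linspace(0.0, 1.0, 101)
z = np.linspace(0.0, 10.0, 501)          # the infinite upper limit truncated to z <= 10
A = np.exp(-np.outer(y, z))              # placeholder for the known A(y, z)
sigma = 1.0 + np.outer(y, z) ** 2        # placeholder for the known sigma(y, z)

# Approximate the derivatives on the grid by finite differences.
dA_dz = np.gradient(A, z, axis=1)
d2A_dz2 = np.gradient(dA_dz, z, axis=1)
dinvsigma_dy = np.gradient(1.0 / sigma, y, axis=0)

# Tabulate the integrand at every grid point, then integrate over z for each y.
integrand = d2A_dz2 / sigma + dA_dz * dinvsigma_dy
B = simpson(integrand, x=z, axis=1)      # B[i] approximates B(y[i])
```

Is this the right way to think about it, or is there a more principled treatment of integrands built from derivatives of gridded data?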
Thanks
|
You can do this as follows.
Let $A\subset\mathbb R$ be nonempty and bounded above. Choose $a_0\in A$ and an upper bound $b_0$ for $A$. Then, look at the middle point $m$ of the interval $[a_0,b_0]$. If $m$ is an upper bound for $A$, put $b_1:=m$ and $a_1:=a_0$. Otherwise, put $b_1:=b_0$ and choose a point $a_1\in A$ such that $a_1>m$. In either case, you have $a_0\leq a_1\leq b_1\leq b_0$, and $b_1-a_1\leq \frac12 \,( b_0-a_0)$. Moreover, $a_1\in A$ and $b_1$ is an upper bound for $A$.
Repeating this procedure, you can construct by induction a non-decreasing sequence $(a_n)\subset A$ and a non-increasing sequence $(b_n)$ of upper bounds for $A$, with $a_n\leq b_n$ for all $n$ and $b_{n+1}-a_{n+1}\leq \frac12 (b_n-a_n)$.
By the Archimedean property (which you do need as pointed out by Daniel), the diameter of the interval $[a_n,b_n]$ goes to $0$. It follows that both sequences $(a_n)$ and $(b_n)$ are Cauchy. So they are both convergent, to the same limit since $b_n-a_n\to 0$. If you call this limit $l$, then $l$ is an upper bound for $A$ because the set of all upper bounds for $A$ is closed and $b_n\to l$, and no upper bound for $A$ can be smaller than $l$ because $l$ is in the closure of $A$.
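For intuition, here is a small numerical illustration of the halving construction (a sketch only, not part of the proof: $A$ is taken to be a finite set of reals so that "is $m$ an upper bound?" and "pick an element of $A$ above $m$" are trivially computable, and $\sup A$ is then just $\max A$).

```python
import random

# A is a finite set here so that the two choices in the construction are computable.
A = sorted(random.uniform(0, 10) for _ in range(1000))
a, b = A[0], 11.0                        # a_0 in A, b_0 an upper bound for A

for _ in range(60):
    m = (a + b) / 2
    if all(x <= m for x in A):           # m is an upper bound: move the right endpoint down
        b = m
    else:                                # m is not: move a to an element of A above m
        a = next(x for x in A if x > m)

print(a, b, max(A))                      # a_n and b_n have squeezed onto sup(A) = max(A)
```

Both endpoints squeeze onto $\sup A$, mirroring the argument above.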
|
One way to convert a real-valued signal into a complex-valued one is to set the imaginary part to zero. This has the effect of "mirroring" the spectrum: the negative frequencies are the mirror image of the positive frequencies. This can be seen from the following Fourier transform property, which holds for any real $f(x)$:
$$ \hat f(-\omega) = \overline{ \hat f(\omega) } \tag 1 $$
Here the overline indicates the complex conjugate.
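As a quick sanity check of property (1), the conjugate symmetry of the DFT of a real signal can be verified numerically; a minimal NumPy sketch with an arbitrary test signal:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256)             # an arbitrary real-valued signal
F = np.fft.fft(f)

# For real f, F[N - k] should equal conj(F[k]) for k = 1, ..., N-1.
print(np.allclose(F[1:][::-1], np.conj(F[1:])))   # True
```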
A DDC is usually implemented by multiplying the input by a complex sinusoid to shift the desired signal to 0 Hz, which works due to this Fourier transform pair:
$$ \mathcal{F}\left\{f(x)e^{iax}\right\}(\omega) = \hat f(\omega - a) \tag 2 $$
After shifting the desired signal to 0 Hz, a filter removes out-of-channel interference. Since this filter also removes the negative-frequency image that is present because the real input has its imaginary part set to zero, a DDC implemented this way need not do anything special to generate the complex-valued output.
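A minimal sketch of such a complex-mixing DDC (the sample rate, carrier frequency, filter length, and cutoff below are assumed values chosen for illustration, not taken from the original answer):

```python
import numpy as np
from scipy import signal

fs = 1_000_000                                   # sample rate (assumed), Hz
t = np.arange(100_000) / fs
x = np.cos(2 * np.pi * 200_000 * t)              # desired signal on a 200 kHz carrier
x += 0.5 * np.cos(2 * np.pi * 350_000 * t)       # out-of-channel interferer

lo = np.exp(-2j * np.pi * 200_000 * t)           # complex LO shifts 200 kHz down to 0 Hz
mixed = x * lo

# The low-pass filter keeps the channel around 0 Hz and removes both the
# interferer and the negative-frequency image left over from the real input.
taps = signal.firwin(129, cutoff=50_000, fs=fs)
baseband = signal.lfilter(taps, 1.0, mixed)      # complex-valued DDC output
```

The output of the mix-and-filter chain is already complex baseband; nothing extra has to be done to "create" the imaginary part.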
The alternative is for the DDC to operate entirely with real numbers: rather than computing $f(x)e^{iax}$ as in equation 2, it computes $f(x)\cos(ax)$. As with a simple analog mixer, the output contains both the sum and the difference of the input and LO frequencies, and one of these image frequencies must then be removed by filtering.
Maintaining precise and stable phase and amplitude relationships between the real and imaginary parts is difficult in an analog circuit, but it is a non-issue in a digital implementation, so it is common to see digital processing done almost entirely in the complex domain. In that sense it is actually more work to implement a DDC that does not output complex samples.
|