Quadratic Formula (deterministic---no guess and check about it)
The QF yields that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$: $$\begin{align}3x^2-14x-5&=3\left(x+\frac13\right)(x-5)\\&=(3x+1)(x-5)\end{align}$$
Use Parabola Vertex Form (deterministic---no guess and check about it)
The $x$-coordinate of the vertex of the parabola $y=3x^2-14x-5$ is $-{\frac{b}{2a}}=-{\frac{-14}{2\cdot3}}={\frac73}$. The $y$-coordinate is $3\left(\frac73\right)^2-14\left(\frac73\right)-5=\frac{49}{3}-\frac{2\cdot49}{3}-5=-{\frac{49}{3}}-\frac{15}{3}=-{\frac{64}{3}}$.
So $y=c\left(x-\frac73\right)^2-\frac{64}{3}$. Comparing leading coefficients, $c=3$, so $$\begin{align}y&=3\left(x-\frac73\right)^2-\frac{64}{3}\\&=\frac{1}{3}\left(9\left(x-\frac73\right)^2-64\right)\\&=\frac{1}{3}\left(3\left(x-\frac73\right)-8\right)\left(3\left(x-\frac73\right)+8\right)\\&=\frac{1}{3}\left(3x-15\right)\left(3x+1\right)\\&=\left(x-5\right)\left(3x+1\right)\end{align}$$
Complete the Square (deterministic---no guess and check about it)
Starting with $3x^2-14x-5$, always multiply and divide by $4a$ to avoid fractions:$$\begin{align}&3x^2-14x-5\\&=\frac{4\cdot3}{4\cdot3}\left(3x^2-14x-5\right)\\&=\frac{1}{12}\left(36x^2-12\cdot14x-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)+14^2-14^2-60\right)\\&=\frac{1}{12}\left((6x-14)^2-196-60\right)\\&=\frac{1}{12}\left((6x-14)^2-256\right)\\&=\frac{1}{12}(6x-14-16)(6x-14+16)\\&=\frac{1}{6\cdot2}(6x-30)(6x+2)\\&=(x-5)(3x+1)\\\end{align}$$
AC Method (involves integer factorization and a list of things to inspect)
$$3x^2-14x-5$$
Take $3\cdot(-5)=-15$. List pairs that multiply to $-15$:
$$(-15,1),(-5,3),(-3,5),(-1,15)$$
We could have stopped at the first pair, because $-15+1=-14$, the middle coefficient. Use this to replace the $-14$:
$$3x^2-15x+x-5$$
Group two terms at a time and factor out the GCF:
$$3x(x-5)+1(x-5)$$$$(3x+1)(x-5)$$
Prime Factor what you can version 1 (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$And $(?)(??)=-5$. There are only four possibilities. $(?,??)$ is one of $$(1,-5),(-1,5),(5,-1),(-5,1)$$Multiplying out $(3x+?)(x+??)$ for each of the four cases reveals $3x^2-14x-5=(3x+1)(x-5)$.
Rational Root Theorem (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, there are rational roots. They must be of the form $\pm\frac{a}{b}$ where $a\mid5$ and $b\mid3$. The only options are $\pm5,\pm{\frac53},\pm1,\pm{\frac13}$. Check these eight inputs to $3x^2-14x-5$ and find that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$.
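The candidate check above is mechanical enough to sketch in Python (the helper names below are my own, not part of any standard library):

```python
from fractions import Fraction
from itertools import product

def rational_root_candidates(a0, an):
    """All +-p/q with p dividing the constant term a0 and q dividing the leading coefficient an."""
    divisors = lambda n: [d for d in range(1, abs(n) + 1) if n % d == 0]
    cands = set()
    for p, q in product(divisors(a0), divisors(an)):
        cands.add(Fraction(p, q))
        cands.add(Fraction(-p, q))
    return cands

def poly(x):
    # the polynomial from the text: 3x^2 - 14x - 5
    return 3 * x**2 - 14 * x - 5

# eight candidates: +-5, +-5/3, +-1, +-1/3; exactly two are roots
roots = sorted(x for x in rational_root_candidates(-5, 3) if poly(x) == 0)
print(roots)  # [Fraction(-1, 3), Fraction(5, 1)]
```

Using exact `Fraction` arithmetic avoids the floating-point round-off that could make a true root evaluate to a tiny nonzero number.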
Prime Factor what you can version 2 (using Rational Root Theorem to speed up version 1)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$The latter factor reveals that if the thing factors at all, one of its roots is an integer. Considering the RRT, check if any of $\pm5,\pm1$ are roots, and discover that $5$ is. Conclude $$(3x+?)(x-5)$$ and then conclude $$(3x+1)(x-5)$$
Graphing to improve efficiency of the Rational Root Theorem method
Using the vertex formula again, locate the vertex at $\left(\frac73,-{\frac{64}{3}}\right)$. Since $a=3$, consider the sequence $\{3\cdot1,3\cdot3,3\cdot5,3\cdot7,\ldots\}$. Extend horizontally outward from the vertex by $1$ in each direction, move up $3$ and plot a point. Extend horizontally outward again by $1$, move up $9$ and plot a point. Continue until you've plotted points that cross over the $x$-axis.
Now you have a rough idea where the roots are. Returning to the rational root theorem approach, you can eliminate many of the potential roots from the initial list, speeding up that approach.
I have recently read that the dimensional regularization scheme is "special" because power law divergences are absent. It was argued that power law divergences were unphysical and that there was no fine-tuning problem. I was immediately suspicious.
Let us take $\lambda\phi^4$ theory. For the renormalized mass $m$ (not a physical mass) with dimensional regularization, $$ m^2_\text{phys} = m^2(\mu) + m^2_\text{phys}\frac{\lambda}{16\pi^2}\ln({\mu^2}/{m^2_\text{phys}}) $$ This looks promising, but $m$ is a renormalized mass, not a true parameter of a Lagrangian that will be set by some new physics, like string theory or whatever it is.
For the Lagrangian mass with a cut-off regulator, $$ m^2_\text{phys} = m_0^2(\Lambda) + \frac{\lambda}{16\pi^2} \Lambda^2, $$ which is basically what I understand to be the fine-tuning problem: we would need incredible cancellations for $m_0\ll \Lambda$ at the low scale. Here I understand $m_0$ to be a "real" parameter determining the theory, whereas $m$ in dimensional regularisation was just an intermediate parameter in that scheme.
I suspect that these two equations are related by the wave-function renormalization, $$ m_0^2=Z m^2 = (1+\text{const}\Lambda^2/m^2 + \ldots) m^2 $$ If I am correct, not much has improved with dimensional regularization. We've sort of hidden the fine-tuning in the wave-function renormalization.
You don't see the fine-tuning in dimensional regularization because you are working with a renormalized mass. The bare Lagrangian mass $m_0$ is the one being set at the high-scale by some physics we don't know about. So it's $m_0$ that we need to worry about being fine-tuned. With dimensional regularization, we see that $m$ isn't fine-tuned, but that isn't a big deal.
Have I misunderstood something? I feel like I am missing something. Can dimensional regularization solve the fine-tuning problem? Is dimensional regularization really special?
EDIT
I am not necessarily associating $\Lambda$ with a massive particle, just a massive scale at which $m_0$ is set to a finite value.
It seems to me that dimensional regularisation cannot help me understand how $m_0$ runs, or the tuning associated with setting it at the high scale, especially as it obliterates information about the divergences. I have no idea how quickly to take the $\epsilon\to0$ limit.
I can do something like,
$$ m_0^2 = Z m^2 = ( 1 +\lambda/\epsilon) m^2\\ m_0^2(\epsilon_1) - m_0^2(\epsilon_2) = m^2 \lambda (1/\epsilon_1 - 1/\epsilon_2) $$ Now suppose I take $\epsilon_1$ to correspond somehow to a low scale, and $\epsilon_2$ to a high scale. $m_0^2(\epsilon_1)$ needs to be small for a light scalar, but then I need to fine-tune the massive number on the right-hand side against the bare mass at the high scale. The fine-tuning is still there. Admittedly this is very informal, because I have no idea how to really interpret $\epsilon$.
The Annals of Probability, Volume 26, Number 4 (1998), 1703-1726. The standard additive coalescent. Abstract
Regard an element of the set $$\Delta := \{(x_1, x_2, \dots): x_1 \geq x_2 \geq \dots \geq 0,\ \textstyle\sum_i x_i = 1\}$$ as a fragmentation of unit mass into clusters of masses $x_i$. The additive coalescent of Evans and Pitman is the $\Delta$-valued Markov process in which pairs of clusters of masses $\{x_i, x_j\}$ merge into a cluster of mass $x_i + x_j$ at rate $x_i + x_j$. They showed that a version $(X^{\infty}(t), -\infty < t < \infty)$ of this process arises as a $n \to \infty$ weak limit of the process started at time $-\frac{1}{2} \log n$ with $n$ clusters of mass $1/n$. We show this standard additive coalescent may be constructed from the continuum random tree of Aldous by Poisson splitting along the skeleton of the tree. We describe the distribution of $X^{\infty}(t)$ on $\Delta$ at a fixed time $t$. We show that the size of the cluster containing a given atom, as a process in $t$, has a simple representation in terms of the stable subordinator of index $1/2$. As $t \to -\infty$, we establish a Gaussian limit for (centered and normalized) cluster sizes and study the size of the largest cluster.
Article information. First available in Project Euclid: 31 May 2002. Permanent link: https://projecteuclid.org/euclid.aop/1022855879. DOI: 10.1214/aop/1022855879. Mathematical Reviews (MathSciNet): MR1675063. Zentralblatt MATH: 0936.60064. Citation:
Aldous, David; Pitman, Jim. The standard additive coalescent. Ann. Probab. 26 (1998), no. 4, 1703-1726. doi:10.1214/aop/1022855879. https://projecteuclid.org/euclid.aop/1022855879
Homework Statement: A boy decides to hang his bag from a tree branch, with the purpose of raising it. The boy walks with constant velocity along the x axis, and the bag moves along the y axis only. Taking into account that the length of the rope is constant and that we know the height h of the branch with respect to the floor: a) find the acceleration of the bag. Homework Equations: kinematic equations.
So what I did was first consider the case where the kid is below the branch, so that x=0, t=0. Then I thought that the length L of the rope should be ##L=2h##, because we know the radius from the branch to the kid satisfies ##x^2+y^2=r^2##, and when x=0, y=h. So then I wrote the motion equations for the bag: $$a(t)=a-g$$ $$x(t)=\frac{(a-g)t^2}{2}$$
And I want to find the time t so that the bag is at the branch, then x(t)=h: $$h=\frac{(a-g)t^2}{2} \rightarrow \sqrt{\frac{2h}{a-g}}=t$$ At this moment, ##x=v_0 \sqrt{\frac{2h}{a-g}}##, then we have that the length of the rope satisfies ##h^2+(v_0 \sqrt{\frac{2h}{a-g}})^2=4h^2## and we find that $$a= \frac{2{v_0}^2}{3h}+g$$ Is this correct?
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < $p_T$ < 8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
16.1. List of Main Symbols
The main symbols used in this book are listed below.
16.1.1. Numbers

| Symbol | Type |
|---|---|
| \(x\) | Scalar |
| \(\mathbf{x}\) | Vector |
| \(\mathbf{X}\) | Matrix |
| \(\mathsf{X}\) | Tensor |

16.1.2. Sets

| Symbol | Type |
|---|---|
| \(\mathcal{X}\) | Set |
| \(\mathbb{R}\) | Real numbers |
| \(\mathbb{R}^n\) | Vectors of real numbers in \(n\) dimensions |
| \(\mathbb{R}^{a \times b}\) | Matrix of real numbers with \(a\) rows and \(b\) columns |

16.1.3. Operators

| Symbol | Type |
|---|---|
| \((\cdot)^\top\) | Vector or matrix transposition |
| \(\odot\) | Element-wise multiplication |
| \(\lvert\mathcal{X}\rvert\) | Cardinality (number of elements) of the set \(\mathcal{X}\) |
| \(\|\cdot\|_p\) | \(L_p\) norm |
| \(\|\cdot\|\) | \(L_2\) norm |
| \(\sum\) | Series addition |
| \(\prod\) | Series multiplication |

16.1.4. Functions

| Symbol | Type |
|---|---|
| \(f(\cdot)\) | Function |
| \(\log(\cdot)\) | Natural logarithm |
| \(\exp(\cdot)\) | Exponential function |

16.1.5. Derivatives and Gradients

| Symbol | Type |
|---|---|
| \(\frac{dy}{dx}\) | Derivative of \(y\) with respect to \(x\) |
| \(\partial_{x} y\) | Partial derivative of \(y\) with respect to \(x\) |
| \(\nabla_{\mathbf{x}} y\) | Gradient of \(y\) with respect to \(\mathbf{x}\) |

16.1.6. Probability and Statistics

| Symbol | Type |
|---|---|
| \(\Pr(\cdot)\) | Probability distribution |
| \(z \sim \Pr\) | Random variable \(z\) obeys the probability distribution \(\Pr\) |
| \(\Pr(x \mid y)\) | Conditional probability of \(x\) given \(y\) |
| \(\mathbf{E}_{x} [f(x)]\) | Expectation of \(f\) with respect to \(x\) |

16.1.7. Complexity

| Symbol | Type |
|---|---|
| \(\mathcal{O}\) | Big O notation |
| \(o\) | Little o notation (grows much more slowly than) |
For example, consider an \(11 \times 11\) grid, and choose from the following building blocks:
Five different polyominos
Fail to cover complete grid
\[x_{i,j,k} = \begin{cases} 1 & \text{if we place polyomino $k$ at location $(i,j)$}\\ 0 & \text{otherwise} \end{cases}\]
I used as a rule that the upper-left corner of each polyomino is its anchor, i.e., in the picture above we have \(x_{1,1,4} = 1\), \(x_{2,1,2}=1\), \(x_{1,3,5}=1\), etc.
To formulate a non-overlap constraint I populated a set \(cover_{i,j,k,i',j'}\), with elements that exist if cell \((i',j')\) is covered when we place polyomino \(k\) in cell \((i,j)\). To require each cell \((i',j')\) is covered we can say:
\[ \forall i',j': \sum_{i,j,k|cover_{i,j,k,i',j'}} x_{i,j,k} = 1\]
This constraint is infeasible if we cannot cover each cell \((i',j')\) exactly once. In order to make sure we can show a meaningful solution when we cannot cover each cell, we formulate the following optimization model:
\[\begin{align} \max\>&\sum_{i,j} y_{i,j}\\&y_{i',j'} = \sum_{i,j,k|cover_{i,j,k,i',j'}} x_{i,j,k}\\&x_{i,j,k}\in \{0,1\}\\&y_{i,j} \in \{0,1\}\end{align}\]
Here \(y_{i,j}=1\) indicates cell \((i,j)\) is covered exactly once, and \(y_{i,j}=0\) says the cell is not covered.
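The construction of the \(cover\) set can be sketched in pure Python on a toy \(2\times 2\) grid. The shape encoding, names, and placements below are my own illustration, not from the original model:

```python
# Build cover[(i, j, k)] -> list of cells covered when polyomino k is
# anchored (upper-left corner) at (i, j), then verify that a hand-picked
# set of placements covers a small grid exactly once.
N = 2  # toy 2x2 grid
shapes = {
    1: [(0, 0), (0, 1)],  # horizontal domino (cell offsets from anchor)
    2: [(0, 0), (1, 0)],  # vertical domino
}

def build_cover(n, shapes):
    cover = {}
    for k, offs in shapes.items():
        for i in range(n):
            for j in range(n):
                cells = [(i + di, j + dj) for di, dj in offs]
                # only keep placements that stay inside the grid
                if all(0 <= a < n and 0 <= b < n for a, b in cells):
                    cover[(i, j, k)] = cells
    return cover

cover = build_cover(N, shapes)

def counts(placements):
    """How many times each cell is covered by the placements with x_{ijk}=1."""
    c = {(i, j): 0 for i in range(N) for j in range(N)}
    for key in placements:
        for cell in cover[key]:
            c[cell] += 1
    return c

# Two horizontal dominoes tile the 2x2 grid: every cell covered exactly once.
tiling = [(0, 0, 1), (1, 0, 1)]
print(all(v == 1 for v in counts(tiling).values()))  # True
```

A MIP solver would receive exactly these `cover` lists as the column entries of the equality constraints; here we only check a candidate solution by hand.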
With a little bit of effort we can produce the following:
61 × 61 board with 9 different polyominos
References
1. Polyomino, https://en.wikipedia.org/wiki/Polyomino
2. 2D bin packing on a grid, https://stackoverflow.com/questions/47918792/2d-bin-packing-on-a-grid
Let $L =\{ \langle M\rangle \mid$ the number of words $w\in\Sigma^*$ on which $M$ does not halt is finite$\}$.
I would like to prove that $L\notin RE$.
I can show that $\overline{L}\notin RE $ that is $L\notin CoRE$.
I do this by exhibiting a reduction $\overline{L_{acc}} \leq \overline{L}$; since $\overline{L_{acc}}\notin RE$, it follows that $\overline{L}\notin RE$.
The reduction goes as follows:
input: $\langle M\rangle, \langle w\rangle$; output: $\langle M_w \rangle$,
where $\langle M_w\rangle$ is a TM that, on an input $x$, ignores it and performs the following:
1. Run $M$ on $w$.
2. If $M$ rejected, enter an infinite loop.
3. If $M$ accepted, accept as well.
Now if $\langle M\rangle\langle w\rangle\in\overline{L_{acc}}$ then $w\notin L(M)$, so $M$ either rejects $w$ or does not halt on it; in both cases $M_w$ does not halt on any input. Otherwise, if $\langle M\rangle\langle w\rangle\notin \overline{L_{acc}}$ then $w\in L(M)$, and then $M_w$ halts (and accepts) on every input $x$.
So I have that $\overline{L_{acc}} \leq \overline{L} $ and thus $\overline{L}\notin RE$, that is $L\notin CoRE$.
But I am at a loss on how to prove $L\notin RE$. I tried a reduction from $\overline{L_{acc}}$, but this idea does not work because $M$ might not halt on $w$. I also tried from $L_{\Sigma^*}$ or from $L_d$, but could not find a reduction that proves $L\notin RE$.
Does someone have an idea for such a reduction? Or perhaps another method to prove $L\notin RE$?
After I posted this question a couple of months ago, and got several good hints from MO users, I think I'm ready, after some study, to ask another related question (or rather, to focus on the main point of my previous question, now that I am aware of all the necessary background).
First let me describe the setting:
WEAK TOPOLOGY
Given any Polish space $X$, we denote with $\mathcal{M}(X)$ the set of all Borel-probability measures on $X$. The set $\mathcal{M}(X)$ is endowed with the smallest topology such that the map $\mathcal{M}(f): \mathcal{M}(X)\rightarrow[0,1]$ defined as
$\mu \mapsto \displaystyle \int_{X} f \ d\mu$
is continuous, for every continuous $f:X\rightarrow[0,1]$, where $[0,1]$ has the usual topology. This topology is called the "weak topology" on $\mathcal{M}(X)$, and is itself a Polish space.
As Gerald Edgar pointed out in my previous question, it turns out that the map $\mathcal{M}(g)$ defined as above, is Borel-measurable for every $g:X\rightarrow [0,1]$ Borel measurable function, i.e.
$\Big(\mathcal{M}(g)\Big)^{-1}\big( (\lambda,1] \big)$
is a Borel set in $\mathcal{M}(X)$ for every $\lambda \in [0,1)$.
UNIVERSALLY MEASURABLE SETS and FUNCTIONS
Given a Polish space $X$, a subset $A\subseteq X$ is called
universally measurable if and only if it is measurable with respect to the completion of every $\mu\in\mathcal{M}(X)$. The universally measurable sets form a $\sigma$-algebra. A function $f:X\rightarrow[0,1]$ is called universally measurable if the inverse image of every Borel set is universally measurable. (Of course this is equivalent to requiring it only for the sub-basic open sets $(\lambda,1]$.)
In particular (I look at) the set of universally measurable functions $f$ as the largest set such that $\mathcal{M}(f)$ is well defined.
FACT 1: Universally measurable functions are closed under composition.
FACT 2: Every $\Sigma^{1}_{1}$ set is universally measurable.
FACT 3: ZFC + V=L $\vdash$ "there exists a $\Delta^{1}_{2}$ set which is not universally measurable".
FACT 4: ZFC + $\Sigma_{n}^{1}$-Determinacy implies that every $\Sigma^{1}_{n+1}$ set is universally measurable.
FACT 5: ZFC + $\Sigma_{n}^{1}$-Determinacy implies that every function $f:X\rightarrow[0,1]$ whose graph is a $\Pi_{n}^{1}$ subset of $X\times[0,1]$ is universally measurable. This follows from FACT 4, because the set $f^{-1}\big( (\lambda,1] \big)$ is $\Sigma^{1}_{n+1}$.
QUESTIONS Q1) Prove that if $f:X\rightarrow[0,1]$ is universally measurable, so is $\mathcal{M}(f)$.
I don't have many ideas for proving directly this result. Please let me know if you have any suggestions.
Anyway, I'd be happy enough (and actually equally interested) to prove the following result:
Q2) Prove that if $f:X\rightarrow[0,1]$ is function having $\Pi^{1}_{n}$ graph, so is $\mathcal{M}(f)$.
I'm not sure if the above statement is correct; perhaps it needs to be weakened, for example by saying that if $f:X\rightarrow[0,1]$ has a $\Pi^{1}_{n}$ graph, then $\mathcal{M}(f)$ has a $\Pi^{1}_{m}$ graph for some other $m$, maybe $m=n+1$.
This route would prove that the result of Question 1 holds (under PD), whenever $f$ has a reasonable (i.e. projective) description.
Now I would very much appreciate any suggestion for developing these ideas for Question 2. In particular I don't know precisely how to reason about the graph of $\mathcal{M}(f)$ starting from the graph of $f$. Unfortunately I'm not a mathematician and I lack proper background to work freely on this problem, and this idea of attacking the problem using PD and by considering the complexity of the graph of $f$ is the only one I had so far.
Thank you again for any suggestion!
bye
matteo
Quantum natural gradient
This example demonstrates the quantum natural gradient optimization technique for variational quantum circuits, originally proposed in Stokes et al. (2019).
Background
The most successful class of quantum algorithms for use on near-term noisy quantum hardware is the so-called variational quantum algorithm. As laid out in the Concepts section, in variational quantum algorithms a low-depth parametrized quantum circuit ansatz is chosen, and a problem-specific observable measured. A classical optimization loop is then used to find the set of quantum parameters that minimize a particular measurement expectation value of the quantum device. Examples of such algorithms include the variational quantum eigensolver (VQE), the quantum approximate optimization algorithm (QAOA), and quantum neural networks (QNNs).
Most recent implementations of variational quantum algorithms have used gradient-free classical optimization methods, such as the Nelder-Mead algorithm. However, the parameter-shift rule (as implemented in PennyLane) allows the user to automatically compute analytic gradients of quantum circuits. This opens up the possibility to train quantum computing hardware using gradient descent—the same method used to train deep learning models. Though one caveat has surfaced with gradient descent — how do we choose the optimal step size for our variational quantum algorithms, to ensure successful and efficient optimization?
The natural gradient
In standard gradient descent, each optimization step is given by
\[ \theta_{t+1} = \theta_t - \eta \nabla \mathcal{L}(\theta_t), \]
where \(\mathcal{L}(\theta)\) is the cost as a function of the parameters \(\theta\), and \(\eta\) is the learning rate or step size. In essence, each optimization step calculates the steepest descent direction around the local value of \(\theta_t\) in the parameter space, and updates \(\theta_t\rightarrow \theta_{t+1}\) by this vector.
The problem with the above approach is that each optimization step is strongly connected to a Euclidean geometry on the parameter space. The parametrization is not unique, and different parametrizations can distort distances within the optimization landscape.
For example, consider the following cost function \(\mathcal{L}\), parametrized using two different coordinate systems, \((\theta_0, \theta_1)\), and \((\phi_0, \phi_1)\):
Performing gradient descent in the \((\theta_0, \theta_1)\) parameter space, we are updating each parameter by the same Euclidean distance, and not taking into account the fact that the cost function might vary at a different rate with respect to each parameter.
Instead, if we perform a change of coordinate system (re-parametrization) of the cost function, we might find a parameter space where variations in \(\mathcal{L}\) are similar across different parameters. This is the case with the new parametrization \((\phi_0, \phi_1)\); the cost function is unchanged, but we now have a nicer geometry in which to perform gradient descent, and a more informative stepsize. This leads to faster convergence, and can help avoid optimization becoming stuck in local minima. For a more in-depth explanation, including why the parameter space might not be best represented by a Euclidean space, see Yamamoto (2019).
However, what if we avoid gradient descent in the parameter space altogether? If we instead consider the optimization problem as a probability distribution of possible output values given an input (i.e., maximum likelihood estimation), a better approach is to perform the gradient descent in the distribution space, which is dimensionless and invariant with respect to the parametrization. As a result, each optimization step will always choose the optimum step-size for every parameter, regardless of the parametrization.
In classical neural networks, the above process is known as natural gradient descent, and was first introduced by Amari (1998). The standard gradient descent is modified as follows:
\[ \theta_{t+1} = \theta_t - \eta F^{-1} \nabla \mathcal{L}(\theta_t), \]
where \(F\) is the Fisher information matrix. The Fisher information matrix acts as a metric tensor, transforming the steepest descent in the Euclidean parameter space to the steepest descent in the distribution space.
The quantum analog
In a similar vein, it has been shown that the standard Euclidean geometry is sub-optimal for optimization of quantum variational algorithms (Harrow and Napp, 2019). The space of quantum states instead possesses a unique invariant metric tensor known as the Fubini-Study metric tensor \(g_{ij}\), which can be used to construct a quantum analog to natural gradient descent:
\[ \theta_{t+1} = \theta_t - \eta\, g^{+}(\theta_t) \nabla \mathcal{L}(\theta_t), \]
where \(g^{+}\) refers to the pseudo-inverse.
Note
It can be shown that the Fubini-Study metric tensor reduces to the Fisher information matrix in the classical limit.
Furthermore, in the limit where \(\eta\rightarrow 0\), the dynamics of the system are equivalent to imaginary-time evolution within the variational subspace, as proposed in McArdle et al. (2018).
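The update rule can be illustrated with plain NumPy. This is a toy sketch with a made-up quadratic cost and a made-up stand-in metric, not PennyLane's built-in QNG optimizer:

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, eta=0.1):
    """One step of theta_{t+1} = theta_t - eta * g^{+} grad, with g^{+} the pseudo-inverse."""
    return theta - eta * np.linalg.pinv(metric) @ grad

# Toy quadratic cost L(theta) = 0.5 * theta^T A theta, so grad = A @ theta.
# The metric g is an assumed stand-in for the Fubini-Study metric tensor.
A = np.diag([1.0, 10.0])
g = np.diag([0.5, 5.0])
theta = np.array([1.0, 1.0])
theta_next = natural_gradient_step(theta, A @ theta, g, eta=0.1)
print(theta_next)  # [0.8 0.8]
```

Note how the metric rescales the raw gradient [1, 10] so that both parameters move by the same amount, which is exactly the parametrization-invariance the text describes.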
Block-diagonal metric tensor
A block-diagonal approximation to the Fubini-Study metric tensor of a variational quantum circuit can be evaluated on quantum hardware.
Consider a variational quantum circuit
\[ |\psi(\theta)\rangle = W_{L+1} V_L(\theta_L) W_L \cdots V_1(\theta_1) W_1 V_0(\theta_0) W_0 |\psi_0\rangle, \]
where
\(|\psi_0\rangle\) is the initial state, \(W_\ell\) are layers of non-parametrized quantum gates, \(V_\ell(\theta_\ell)\) are layers of parametrized quantum gates with \(n_\ell\) parameters \(\theta_\ell = \{\theta^{(\ell)}_0, \dots, \theta^{(\ell)}_n\}\).
Further, assume all parametrized gates can be written in the form \(X(\theta^{(\ell)}_{i}) = e^{i\theta^{(\ell)}_{i} K^{(\ell)}_i}\), where \(K^{(\ell)}_i\) is the generator of the parametrized operation.
For each parametric layer \(\ell\) in the variational quantum circuit, the \(n_\ell\times n_\ell\) block-diagonal submatrix of the Fubini-Study tensor \(g_{ij}^{(\ell)}\) is calculated by
\[ g_{ij}^{(\ell)} = \langle \psi_{\ell-1} | K_i K_j | \psi_{\ell-1} \rangle - \langle \psi_{\ell-1} | K_i | \psi_{\ell-1} \rangle \langle \psi_{\ell-1} | K_j | \psi_{\ell-1} \rangle, \]
where \(|\psi_{\ell-1}\rangle\) is the quantum state prior to the application of parameterized layer \(\ell\), and we have \(K_i \equiv K_i^{(\ell)}\) for brevity.
Let’s consider a small variational quantum circuit example coded in PennyLane:
import numpy as np
import pennylane as qml
from pennylane import expval, var

dev = qml.device('default.qubit', wires=3)

@qml.qnode(dev)
def circuit(params):
    # |psi_0>: state preparation
    qml.RY(np.pi/4, wires=0)
    qml.RY(np.pi/3, wires=1)
    qml.RY(np.pi/7, wires=2)
    # V0(theta0, theta1): Parametrized layer 0
    qml.RZ(params[0], wires=0)
    qml.RZ(params[1], wires=1)
    # W1: non-parametrized gates
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    # V_1(theta2, theta3): Parametrized layer 1
    qml.RY(params[2], wires=1)
    qml.RX(params[3], wires=2)
    # W2: non-parametrized gates
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
    return qml.expval(qml.PauliY(0))

params = np.array([0.432, -0.123, 0.543, 0.233])
The above circuit consists of 4 parameters, with two distinct parametrized layers of 2 parameters each.
(Note that in this example, the first non-parametrized layer \(W_0\) is simply the identity.) Since there are two layers, each with two parameters, the block-diagonal approximation consists of two \(2\times 2\) matrices, \(g^{(0)}\) and \(g^{(1)}\).
To compute the first block-diagonal \(g^{(0)}\), we create subcircuits consisting of all gates prior to the layer, and observables corresponding to the generators of the gates in the layer:
g0 = np.zeros([2, 2])

def layer0_subcircuit(params):
    """This function contains all gates that precede parametrized layer 0"""
    qml.RY(np.pi/4, wires=0)
    qml.RY(np.pi/3, wires=1)
    qml.RY(np.pi/7, wires=2)
We then post-process the measurement results in order to determine \(g^{(0)}\), as follows.
We can see that the diagonal terms are simply given by the variance:
@qml.qnode(dev)
def layer0_diag(params):
    layer0_subcircuit(params)
    return var(qml.PauliZ(0)), var(qml.PauliZ(1))

# calculate the diagonal terms
varK0, varK1 = layer0_diag(params)
g0[0, 0] = varK0/4
g0[1, 1] = varK1/4
The following two subcircuits are then used to calculate the off-diagonal covariance terms of \(g^{(0)}\):
@qml.qnode(dev)
def layer0_off_diag_single(params):
    layer0_subcircuit(params)
    return expval(qml.PauliZ(0)), expval(qml.PauliZ(1))

@qml.qnode(dev)
def layer0_off_diag_double(params):
    layer0_subcircuit(params)
    ZZ = np.kron(np.diag([1, -1]), np.diag([1, -1]))
    return expval(qml.Hermitian(ZZ, wires=[0, 1]))

# calculate the off-diagonal terms
exK0, exK1 = layer0_off_diag_single(params)
exK0K1 = layer0_off_diag_double(params)
g0[0, 1] = (exK0K1 - exK0*exK1)/4
g0[1, 0] = (exK0K1 - exK0*exK1)/4
Note that, by definition, the block-diagonal matrices must be real and symmetric.
We can repeat the above process to compute \(g^{(1)}\). The subcircuit required is given by
g1 = np.zeros([2, 2])

def layer1_subcircuit(params):
    """This function contains all gates that precede parametrized layer 1"""
    # |psi_0>: state preparation
    qml.RY(np.pi/4, wires=0)
    qml.RY(np.pi/3, wires=1)
    qml.RY(np.pi/7, wires=2)
    # V0(theta0, theta1): Parametrized layer 0
    qml.RZ(params[0], wires=0)
    qml.RZ(params[1], wires=1)
    # W1: non-parametrized gates
    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[1, 2])
Using this subcircuit, we can now generate the submatrix \(g^{(1)}\).
@qml.qnode(dev)
def layer1_diag(params):
    layer1_subcircuit(params)
    return var(qml.PauliY(1)), var(qml.PauliX(2))
As previously, the diagonal terms are simply given by the variance,
varK0, varK1 = layer1_diag(params)
g1[0, 0] = varK0/4
g1[1, 1] = varK1/4
while the off-diagonal terms require covariance between the two observables to be computed.
@qml.qnode(dev)
def layer1_off_diag_single(params):
    layer1_subcircuit(params)
    return expval(qml.PauliY(1)), expval(qml.PauliX(2))

@qml.qnode(dev)
def layer1_off_diag_double(params):
    layer1_subcircuit(params)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    YX = np.kron(Y, X)
    return expval(qml.Hermitian(YX, wires=[1, 2]))

# calculate the off-diagonal terms
exK0, exK1 = layer1_off_diag_single(params)
exK0K1 = layer1_off_diag_double(params)

g1[0, 1] = (exK0K1 - exK0*exK1)/4
g1[1, 0] = g1[0, 1]
Putting this all together, the block-diagonal approximation to the Fubini-Study metric tensor for this variational quantum circuit is
from scipy.linalg import block_diag

g = block_diag(g0, g1)
print(np.round(g, 8))
Out:
[[ 0.125       0.          0.          0.        ]
 [ 0.          0.1875      0.          0.        ]
 [ 0.          0.          0.24973433 -0.01524701]
 [ 0.          0.         -0.01524701  0.20293623]]
PennyLane contains a built-in method for computing the Fubini-Study metric tensor, QNode.metric_tensor(), which we can use to verify this result:
print(np.round(circuit.metric_tensor(params), 8))
Out:
[[ 0.125       0.          0.          0.        ]
 [ 0.          0.1875      0.          0.        ]
 [ 0.          0.          0.24973433 -0.01524701]
 [ 0.          0.         -0.01524701  0.20293623]]
In contrast to our manual computation, which required six different quantum evaluations, the PennyLane Fubini-Study metric tensor implementation requires only two quantum evaluations, one per layer. This is done by automatically detecting the layer structure, and noting that every observable that must be measured commutes, allowing for simultaneous measurement.
Therefore, by combining the quantum natural gradient optimizer with the analytic parameter-shift rule to optimize a variational circuit with \(d\) parameters and \(L\) parametrized layers, a total of \(2d+L\) quantum evaluations are required per optimization step.
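The count above can be sanity-checked with trivial arithmetic; assuming, as for the circuit in this tutorial, \(d = 4\) parameters and \(L = 2\) parametrized layers:

```python
# Quantum evaluations per QNG optimization step:
# 2 per parameter for the parameter-shift gradient,
# plus 1 per parametrized layer for the block-diagonal metric tensor.
def evals_per_step(d, L):
    return 2 * d + L

print(evals_per_step(4, 2))  # -> 10
```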
Note that the QNode.metric_tensor() method also supports computing the diagonal approximation to the metric tensor:
print(circuit.metric_tensor(params, diag_approx=True))
Out:
[[0.125      0.         0.         0.        ]
 [0.         0.1875     0.         0.        ]
 [0.         0.         0.24973433 0.        ]
 [0.         0.         0.         0.20293623]]
Quantum natural gradient optimization
PennyLane provides an implementation of the quantum natural gradient optimizer, QNGOptimizer. Let's compare the optimization convergence of the QNGOptimizer and the GradientDescentOptimizer for the simple variational circuit above.
steps = 200
init_params = np.array([0.432, -0.123, 0.543, 0.233])
Performing vanilla gradient descent:
gd_cost = []
opt = qml.GradientDescentOptimizer(0.01)

theta = init_params
for _ in range(steps):
    theta = opt.step(circuit, theta)
    gd_cost.append(circuit(theta))
Performing quantum natural gradient descent:
qng_cost = []
opt = qml.QNGOptimizer(0.01)

theta = init_params
for _ in range(steps):
    theta = opt.step(circuit, theta)
    qng_cost.append(circuit(theta))
Plotting the cost vs optimization step for both optimization strategies:
from matplotlib import pyplot as plt

plt.style.use("seaborn")
plt.plot(gd_cost, "b", label="Vanilla gradient descent")
plt.plot(qng_cost, "g", label="Quantum natural gradient descent")
plt.ylabel("Cost function value")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
References

Shun-Ichi Amari. "Natural gradient works efficiently in learning." Neural Computation 10(2), 251-276, 1998.

James Stokes, Josh Izaac, Nathan Killoran, Giuseppe Carleo. "Quantum Natural Gradient." arXiv:1909.02108, 2019.

Aram Harrow and John Napp. "Low-depth gradient measurements can improve convergence in variational hybrid quantum-classical algorithms." arXiv:1901.05374, 2019.

Naoki Yamamoto. "On the natural gradient for variational quantum eigensolver." arXiv:1909.05074, 2019.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}$ 1 MeV $n_{eq}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
I think the main issue is that you are being confused by the Dirac notation, which works fine when it does but can be occasionally confusing when you are worried about that kind of thing. I'll try and give an accessible explanation that also addresses the issues raised by Wouter.
Take a Hilbert space $\mathcal{H}$, and a linear operator $\hat{A}:\mathcal{H}\rightarrow\mathcal{H}$. The Hilbert space comes equipped with an inner product $\langle\cdot,\cdot\rangle:\mathcal{H}\times\mathcal{H}\rightarrow\mathbb{C}$, and this inner product associates to each vector $\psi\in\mathcal{H}$ a linear functional $\langle\psi,\cdot\rangle:\mathcal{H}\rightarrow\mathbb{C}$, which acts in the natural way. Thus there is a correspondence between the Hilbert space and the set $\mathcal{H}^\ast$ of all linear functionals from $\mathcal{H}$ into $\mathbb{C}$. This space is known as the dual space of $\mathcal{H}$ and is where (well-behaved$^1$) bras live.
Consider then a fixed vector $\phi\in\mathcal{H}$ and a linear operator $\hat{A}: \mathcal{H} \rightarrow\mathcal{H}$. These two define a new linear functional which depends only on the one associated with $\phi$:$$\psi\mapsto\langle\phi,\hat{A}\psi\rangle.$$If you denote by $\phi^\dagger=\langle\phi,\cdot\rangle\in\mathcal{H}^\ast$ the linear functional associated with $\phi$, then the new functional defined above is a function of $\phi^\dagger$: it is denoted $\hat{A}^\dagger\phi^\dagger\in\mathcal{H}^\ast$, and this function $\hat{A}^\dagger$ is called the adjoint of $\hat{A}$.
Now, since the Hilbert space and its dual are in a strict correspondence, we can pull this back to define the action of the adjoint
on $\mathcal{H}$ itself. This is done in the obvious way: $\hat{A}^\dagger$ acting on an arbitrary vector $\phi\in\mathcal{H}$ gives the unique vector $\hat{A}^\dagger\phi\in\mathcal{H}$ such that for any $\psi\in\mathcal{H}$ you have$$\langle\hat{A}^\dagger\phi,\psi\rangle=\langle\phi,\hat{A}\psi\rangle.$$ This is really the definition of the adjoint $\hat{A}^\dagger$.
Since the adjoint $\hat{A}^\dagger$ acts on the same space as the original operator $\hat{A}$, they are comparable and we can ask whether they are equal. With the definition just given, the condition for that is$$\hat{A}\textrm{ is hermitian} \Leftrightarrow \hat{A}=\hat{A}^\dagger\Leftrightarrow \langle\hat{A}\phi,\psi\rangle=\langle\phi,\hat{A}\psi\rangle\textrm{ for all }\phi,\psi\in\mathcal{H}.$$The argument I gave above works only for the left-to-right sense of this equivalence. To prove the converse you need the theorem @MarkMitchison mentioned (in essence) earlier, which states modulo subtleties that two operators are equal if and only if all their matrix elements are equal. That is:
$$\hat{A}=\hat{B}\Leftrightarrow \langle\phi,\hat{A}\psi\rangle=\langle\phi,\hat{B}\psi\rangle\textrm{ for all }\phi,\psi\in\mathcal{H}.$$
That's as far as the maths is concerned; now for some physics. Why did I state Dirac notation can be confusing? Well, when you come across a matrix element like $\langle\phi|\hat{A}|\psi\rangle$, it can be hard to see what exactly is acting on what. In the terms laid out above,$$\langle\phi|\hat{A}|\psi\rangle=\langle\phi,\hat{A}\psi\rangle=\langle\hat{A}^\dagger\phi,\psi\rangle=\left(\hat{A}^\dagger\phi^\dagger\right)(\psi).$$The last term means the linear functional $\hat{A}^\dagger\phi^\dagger$ acting on the vector $\psi$. This is what is meant by saying that $\hat{A}^\dagger$ acts on bras.
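In finite dimensions the defining relation of the adjoint is easy to verify numerically. This is my own illustrative sketch (with randomly generated data), identifying the adjoint with the conjugate-transpose matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random linear operator on a 4-dimensional complex Hilbert space.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A_dag = A.conj().T  # in finite dimensions, the adjoint is the conjugate transpose

phi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

def inner(u, v):
    # <u, v>, conjugate-linear in the first argument
    return np.vdot(u, v)

# Defining property of the adjoint: <A† phi, psi> = <phi, A psi>
assert np.isclose(inner(A_dag @ phi, psi), inner(phi, A @ psi))
```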
$^1$I'm going to ignore the fact that functionals may be discontinuous, operators may be unbounded, domains may be restricted, symmetric operators need not be hermitian, and such other difficulties. All of this can be rigorously dealt with using rigged Hilbert spaces. For resources on that see e.g. this question.
I need to use the \ldbrack and \rdbrack symbols from the package mathabx, but when I use mathabx together with the (previously installed) amssymb package it causes an issue with symbols such as \subseteq, \cap, \cup, etc.
\documentclass{book}
\usepackage{amssymb}
\usepackage{mathabx}

\begin{document}
This is an example,
$$A\subseteq B=\emptyset A\cap B\cup C$$
$$x + y \ldbrack z \rdbrack$$
\end{document}
The following picture was created using mathabx and amssymb.
And the following picture was created using only amssymb.
Please notice that in image 1, \cap, \cup, etc. are different, and I require the other ones.
Is there a way to use both packages without messing up my document's symbols? Thanks.
A convex optimization problem is any problem of minimizing a convex function over a convex set. An optimization problem is quasi-convex if its feasible set is convex and its objective function $f_0$ is quasi-convex. Given convex functions $f_i(x), i=0,\dots,m$, the standard form of a convex optimization problem is:$$\begin{align}\min \quad & f_0(x) \\\text{s.t.} \quad & f_i(x) \le 0, && i = 1,\dots,m \\ & \langle a_i,x \rangle = b_i, && i = 1,\dots,p\end{align}$$
Properties of convex optimization problems:
A linear program (LP) is an optimization problem that minimizes a linear function over a polyhedral set.
General form of LP: $$\begin{align} \min \quad & \langle c, x \rangle \\ \text{s.t.} \quad & G x \preceq h && G \in \mathbb{R}^{m×n} \\ & A x = b && A \in \mathbb{R}^{p×n} \end{align}$$
Standard form of LP: $$\begin{align} \min \quad & \langle c, x \rangle \\ \text{s.t.} \quad & A x = b && A \in \mathbb{R}^{m×n} \\ & x \succeq 0 \end{align}$$
Inequality form LP: $$\begin{align} \min \quad & \langle c, x \rangle \\ \text{s.t.} \quad & A x \preceq b && A \in \mathbb{R}^{m×n} \end{align}$$
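To make "minimizing a linear function over a polyhedral set" concrete, here is an illustrative sketch (all problem data are made up) that solves a small inequality-form LP by enumerating the vertices of the feasible polygon; for a bounded, feasible LP at least one vertex is optimal:

```python
import numpy as np
from itertools import combinations

# Inequality-form LP: minimize <c, x> subject to A x <= b (a bounded polygon).
c = np.array([-1.0, -1.0])                       # i.e. maximize x1 + x2
A = np.array([[1.0, 2.0], [4.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 12.0, 0.0, 0.0])

# Candidate vertices: intersections of pairs of constraint boundaries.
vertices = []
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) > 1e-12:
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):            # keep feasible intersections only
            vertices.append(x)

best = min(vertices, key=lambda x: float(c @ x))
# optimum at (8/3, 2/3) with objective value -10/3
print(best, float(c @ best))
```

Real solvers (the simplex or interior-point methods) avoid this exponential enumeration, but the geometric picture is the same.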
A quadratic program (QP) is an optimization problem with a convex quadratic objective function and linear or affine constraint functions: given a real positive semidefinite matrix $P$,$$\begin{align}\min \quad & \langle x, Px/2 + q \rangle \\\text{s.t.} \quad & G x \preceq h && G \in \mathbb{R}^{m×n} \\ & A x = b && A \in \mathbb{R}^{p×n}\end{align}$$
A quadratically constrained quadratic program (QCQP) is an optimization problem with convex quadratic objective and inequality constraint functions and linear/affine equality constraint functions.
A second-order cone program (SOCP) is an optimization problem with a linear objective function, second-order cone constraints on affine transformations of the variables, and linear/affine equality constraints.$$\begin{align}\min \quad & \langle f, x \rangle \\\text{s.t.} \quad & \| A_i x + b_i \|_2 \le \langle c_i, x \rangle + d_i && A_i \in \mathbb{R}^{n_i×n}, i = 1,\dots,m \\ & F x = g && F \in \mathbb{R}^{p×n}\end{align}$$
Properties of quadratic programs:
A monomial is the product of power functions with positive variables and a positive coefficient: $f(x) = c \prod_i x_i^{a_i}$, where $c>0, x_i>0, a_i \in \mathbb{R}, i = 1,\dots,n$. Monomials are closed under multiplication and division.
A posynomial is the sum of monomials: $f(x) = \sum_k c_k \prod_i x_i^{a_{ik}}$, where $c_k>0, x_i>0, a_{ik} \in \mathbb{R}$, $i = 1,\dots,n, k = 1,\dots,K$. Posynomials are closed under addition, multiplication, and nonnegative scaling.
A geometric program (GP) is an optimization problem whose objective and inequality constraint functions $f_i, i = 0,\dots,m$ are posynomials, and whose equality constraint functions $h_i, i = 1,\dots,p$ are monomials (the constraint $x \succ 0$ is implicit):$$\begin{align}\min \quad & f_0(x) \\\text{s.t.} \quad & f_i(x) \le 1 && i = 1,\dots,m \\ & h_i(x) = 1 && i = 1,\dots,p\end{align}$$
Given convex function $f_0$, proper cones $K_i \subseteq \mathbb{R}^{k_i}$, and $K_i$-convex functions $f_i : \mathbb{R}^n \to \mathbb{R}^{k_i}$, standard form of convex optimization problem with generalized inequality constraints is: $$\begin{align} \min \quad & f_0(x) \\ \text{s.t.} \quad & f_i(x) \preceq_{K_i} 0 && i = 1,\dots,m \\ & A x = b && A \in \mathbb{R}^{p×n} \end{align}$$
A conic problem is an optimization problem with a linear objective function, one affine generalized inequality constraint, and linear equality constraints:$$\begin{align}\min \quad & \langle c,x \rangle \\\text{s.t.} \quad & x \succeq_K 0 \\ & A x = b\end{align}$$
A semidefinite program (SDP) is a conic form problem with the cone of positive semidefinite matrices. Given $S^k$ the set of symmetric $k \times k$ matrices, the standard form of an SDP is:$$\begin{align}\min \quad & \text{tr}(C X) && C, X \in S^n \\\text{s.t.} \quad & \text{tr}(A_i X) = b_i && A_i \in S^n, i = 1,\dots,p \\ & X \succeq 0\end{align}$$Inequality form of an SDP:$$\begin{align}\min \quad & \langle c,x \rangle \\\text{s.t.} \quad & \sum_i x_i A_i \preceq B && A_i, B \in S^k, i = 1,\dots,n\end{align}$$
Suppose the $p$ equality constraints define a $k$-dimensional manifold in the $n$-dimensional Euclidean space; then we may introduce $k$ generalized coordinates such that some transformation $\phi: \mathbb{R}^k \to \mathbb{R}^n$ maps the generalized coordinates onto the manifold. For linear equality constraints $Ax=b$, such a transformation has the form $\phi(z) = Fz + x_0$, where $x_0$ is a particular solution to $Ax=b$ and $F$ is a basis of the null space of $A$.
For a convex optimization problem, eliminating its (linear) equality constraints preserves convexity. (Because composition with an affine function preserves convexity of a function.)
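The parametrization \(\phi(z) = Fz + x_0\) can be constructed numerically, e.g. with an SVD for the null-space basis. A sketch with made-up random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 5))   # 2 linear equality constraints in R^5
b = rng.normal(size=2)

# One particular solution x0 of A x = b (A has full row rank almost surely).
x0 = np.linalg.lstsq(A, b, rcond=None)[0]

# Null-space basis F from the SVD: rows of Vt beyond rank(A) span null(A).
_, s, Vt = np.linalg.svd(A)
F = Vt[len(s):].T             # shape (5, 3); satisfies A @ F == 0

# Every x = F z + x0 satisfies the equality constraints, for any z.
z = rng.normal(size=3)
assert np.allclose(A @ (F @ z + x0), b)
```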
Inequality constraint $f_i(x) \le 0$ is equivalent to $f_i(x) + s_i = 0, s_i \ge 0$, where we call $s_i$ a slack variable.
Introducing slack variables for linear inequalities preserves convexity of a problem. (Because when $f_i$ is affine, the new equality constraints $f_i(x) + s_i = 0$ remain affine, and the constraints $s_i \ge 0$ are convex.)
If the constraint functions depend on distinct subsets of variables, then you may first optimize the objective over one subset of variables then the other. For example, optimization problem $$\begin{align} \min \quad & f_0(x_1, x_2) \\ \text{s.t.} \quad & f_i(x_1) \le 0 && i = 1,\dots,m_1 \\ & f'_i(x_2) \le 0 && i = 1,\dots,m_2 \end{align}$$ is equivalent to $$\begin{align} \min \quad & f_0(x_1, z(x_1)) \\ \text{s.t.} \quad & f_i(x_1) \le 0 && i = 1,\dots,m_1 \end{align}$$ , where $f_0(x_1,z(x_1)) = \min \{ f_0(x_1, x_2)~|~f'_i(x_2)≤0, i=1,\dots,m_2 \}$.
Minimizing over some variables preserves convexity of an optimization problem. (Because minimizing a convex function over some variables preserves convexity.)
The solution of a quasi-convex optimization problem can be approximated by solving a sequence of convex optimization problems. Let $\phi_t$ be a family of convex functions such that the $0$-sublevel set of $\phi_t$ is the $t$-sublevel set of $f_0$: $f_0(x) \le t \Leftrightarrow \phi_t(x) \le 0$. If the convex feasibility problem $$\begin{align} \text{find} \quad & x \\ \text{s.t.} \quad & \phi_t(x) \le 0 \\ & f_i(x) \le 0 && i = 1,\dots,m \\ & Ax = b \end{align}$$ is feasible, then $p^∗ \le t$; if it is infeasible, then $p^∗ > t$.
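The bisection scheme can be sketched on a one-dimensional toy problem. Note the convex feasibility problem is replaced here by a brute-force grid check, purely to illustrate the control flow:

```python
import numpy as np

# Toy problem: minimize f0(x) = (x - 3)**2 over the feasible interval [0, 2].
# The grid search below stands in for the convex feasibility solver.
f0 = lambda x: (x - 3.0) ** 2
grid = np.linspace(0.0, 2.0, 20001)

def feasible(t):
    """Is there a feasible x with f0(x) <= t?"""
    return bool(np.any(f0(grid) <= t))

lo, hi = 0.0, float(f0(grid[0]))   # p* is bracketed: feasible(hi), not feasible(lo)
for _ in range(50):                # bisection on the level t
    t = (lo + hi) / 2
    if feasible(t):
        hi = t                     # p* <= t
    else:
        lo = t                     # p* > t

print(round(hi, 6))  # -> 1.0, attained at x = 2
```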
With slack variables $s \in \mathbb{R}_{\ge 0}^m$ and nonnegative variables $x^+, x^-$ with $x = x^+ - x^-$, a general linear program can be expressed in standard form: $$\begin{align} \min \quad & \langle c, (x^+, -x^-) \rangle \\ \text{s.t.} \quad & Gx^+ - Gx^- + s = h && G \in \mathbb{R}^{m×n} \\ & Ax^+ - Ax^- = b && A \in \mathbb{R}^{p×n} \\ & (x^+, x^-, s) \succeq 0 \end{align}$$
Geometric programs can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. With transformation $x_i = e^{y_i}$, posynomial $\sum_k c_k \prod_i x_i^{a_{ik}}$ can be expressed as $\sum_k e^{\langle a_k,y \rangle + b_k}$. Thus GPs are equivalent to convex optimization: $$\begin{align} \min \quad & f_0(y) = \log \sum_k \exp( \langle a_{0k},y \rangle + b_{0k}) \\ \text{s.t.} \quad & f_i(y) = \log \sum_k \exp(\langle a_{ik},y \rangle + b_{ik}) \le 0 && i = 1,\dots,m \\ & h_i(y) = \langle g_i,y \rangle + h_i = 0 && i = 1,\dots,p \end{align}$$
Note the objective and inequality constraint functions $f_i, i = 0,\dots,m$ are convex, because affine functions $\langle a_{ik},y \rangle + b_{ik}$ are convex and log-sum-exp $\log(e^f + e^g)$ preserves convexity.
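The claimed convexity can be spot-checked numerically: after the substitution \(x_i = e^{y_i}\), the log of a posynomial is a log-sum-exp of affine functions, so it should satisfy midpoint convexity for any pair of points. A sketch with a made-up posynomial \(f(x) = x_1 x_2 + 3\sqrt{x_1}/x_2\):

```python
import numpy as np

# Posynomial f(x) = x1*x2 + 3*sqrt(x1)/x2. After x_i = exp(y_i), its log becomes
# F(y) = log( exp(y1 + y2) + exp(log 3 + 0.5*y1 - y2) ): log-sum-exp of affine maps.
a = np.array([[1.0, 1.0], [0.5, -1.0]])   # exponent vectors a_k
bvec = np.log(np.array([1.0, 3.0]))       # b_k = log(c_k)

def F(y):
    return np.log(np.sum(np.exp(a @ y + bvec)))

rng = np.random.default_rng(0)
for _ in range(100):
    y1, y2 = rng.normal(size=(2, 2))
    # midpoint convexity: F((y1 + y2)/2) <= (F(y1) + F(y2))/2
    assert F((y1 + y2) / 2) <= (F(y1) + F(y2)) / 2 + 1e-12
```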
So, a matrix $A$ times its inverse gives you the identity matrix, correct?
Also, if you have AB=BA, what does that tell you about the matrices? Is this only true when B is the inverse of A?
The definition of the inverse of a square matrix $A$ is a matrix $A^{-1}$ such that $A^{-1}A=I=AA^{-1}$.
It is possible for matrices $A$ and $B$ to satisfy the equation $AB=BA$. When that happens, we say that they commute.
This does not imply that they are inverses of one another, however. Take for a trivial counterexample the $1\times 1$ matrices (which act exactly like real numbers). $[3][2]=[6]=[2][3]$ but $[3][2]\neq [1]$ so they are not inverses of one another.
Let $A\in\mathbb{R}^{n\times n}$, $\lambda\in\mathbb{R}$ and $B:=\lambda A\in\mathbb{R}^{n\times n}$, then one has: $$AB=BA.$$ Therefore, if $A\in GL(n,\mathbb{R})$, $B:=A^{-1}$ is not the only matrix such that $AB=BA$. One can define: $$\mathcal{C}(A):=\{B\in\mathbb{R}^{n\times n}\textrm{ s.t. }AB=BA\}\subset\mathbb{R}^{n\times n}.$$ $\mathcal{C}(A)$ is a vector subspace of $\mathbb{R}^{n\times n}$ and we have seen that: $$\{\lambda A;\lambda\in\mathbb{R}\}\subset\mathcal{C}(A).$$
Yes, every invertible matrix $A$ multiplied by its inverse gives the identity.
$AB=BA$ can be true even if $B$ is not the inverse of $A$; for example, the identity matrix or any scalar matrix commutes with every other matrix, and there are other examples.
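A quick numerical illustration of this point: diagonal matrices all commute with one another, yet are almost never inverses of one another:

```python
import numpy as np

A = np.diag([1.0, 2.0])
B = np.diag([3.0, 4.0])   # commutes with A, but is not its inverse

assert np.allclose(A @ B, B @ A)          # AB = BA ...
assert not np.allclose(A @ B, np.eye(2))  # ... yet AB != I

Ainv = np.linalg.inv(A)                   # the actual inverse does give I
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv @ A, np.eye(2))
```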
A co-$A_n$ space is a based space $Y$ equipped with a co-action by the Stasheff associahedron operad $K_\bullet$. This means that $Y$ comes with certain maps $c_n: Y \times K_n \to Y^{\vee n}$, $n = 2,3,\dots$ that are inductively described (the definition of $c_n$ uses $c_{n-1}$ as input; the map $c_2$ is a co-$H$ structure). The suspension of a based space $X$ has the structure of a co-$A_\infty$ space.
Assume $Y$ is $2$-connected and has the homotopy type of a finite complex. Then Schwaenzl, Vogt and I showed that a co-$A_\infty$ space $Y$ desuspends to a space $X$ in the sense that there's a weak equivalence $\Sigma X \simeq Y$.
However we didn't try to check that the given weak equivalence is compatible in the co-$A_\infty$ sense. Part of the problem is that a morphism $f: Y \to Z$ of co-$A_\infty$ spaces should amount to a co-$A_\infty$-structure on its mapping cylinder restricting to the given ones on $Y \times 1$ and $Z$. However, this doesn't form a category: it's an $\infty$-category.
Now to my questions:
Question 1: is there a documented proof somewhere that the functor which assigns to a based space $X$ its suspension (considered as an co-$A_\infty$ space) induces an equivalence between the homotopy category of $1$-connected spaces and $2$-connected co-$A_\infty$ spaces?
Presumably, such a proof should be Hilton-Eckmann dual to one of the main results in the Book of Boardman and Vogt.
Question 2: Do function spaces coincide up to weak equivalence under this functor? That is, is the map $$\hom_{\text{Top}_*}(X,X') \to \hom_{\text{co-}A_\infty}(\Sigma X,\Sigma X')$$ a weak equivalence under suitable hypotheses on $X$ and $X'$?
By $\hom$ in each case, I mean topologized mapping spaces.
How would one go about proving a result like this?
The Diesel cycle is the thermodynamic cycle which approximates the pressure and volume of the combustion chamber in a Diesel engine. In the Diesel cycle, the working medium in the combustion chamber is
compressed adiabatically from \(V_1, p_1\) to \(V_2 < V_1\), \(p_2 > p_1\);
expanded isobarically at \(p_2\) from \(V_2\) to \(V_3 > V_2\);
expanded adiabatically from \(V_3, p_2\) to \(V_1\) and \(p_4 > p_1\);
cooled isochorically at \(V_1\) from \(p_4\) to the initial pressure \(p_1\).
The heat absorbed by the medium is given by \(Q=\int_{2 \rightarrow 3} C_p dT\). Assuming the working medium is an ideal monatomic gas, show that the efficiency of the Diesel cycle can be written in the form
\[ \eta = 1 - \gamma^{-1} {R_{31}^{\gamma} - R_{21}^{\gamma} \over R_{31} - R_{21}} \]
Efficiency of the Diesel Cycle
Because the processes \(1 \rightarrow 2\) and \(3 \rightarrow 4\) are adiabatic, no heat is exchanged during them, so the efficiency can be written in terms of the heat gained and lost, \(Q_H\) and \(Q_L\), as indicated on the diagram.
From the equation given above we can say that the heat gained \(Q_H\) is given in terms of specific heat and temperature by:
\begin{equation}
Q_H = C_p (T_3 - T_2) \label{eq:q1} \end{equation} We can likewise deduce the heat lost as \(Q=\int_{4 \rightarrow 1} C_V dT\) such that \begin{equation} Q_L = C_V (T_1 - T_4) \label{eq:q2} \end{equation}
The thermal efficiency is defined as \(\eta={Q_H+Q_L \over Q_H}\), which we can express using \eqref{eq:q1} and \eqref{eq:q2} in the form of specific heats and temperatures \begin{equation} \eta=1+{C_V(T_1-T_4) \over C_p(T_3-T_2)} \label{eq:etaq1q2} \end{equation}
Using the ideal gas law \(PV=nRT\) where the \(n\)s and \(R\)s cancel, and the adiabatic index \(\gamma = {C_p \over C_V}\) we can further write
\[\eta = 1 + \gamma^{-1} {P_1 V_1 - P_4 V_4 \over P_3 V_3 - P_2 V_2}\]
From the diagram it can be seen that the volume is the same in state 4 as in state 1, i.e. \(V_4 = V_1\). Likewise for pressure, \(P_3 = P_2\). Therefore we can reduce our number of variables so that
\[\eta = 1 + \gamma^{-1} {V_1(P_1 - P_4) \over P_3(V_3-V_2)}\]
Dividing the fraction through by \(P_3 V_1\):
\[\eta = 1+\gamma^{-1} {\frac{P_1}{P_3} - \frac{P_4}{P_3} \over R_{31} - R_{21}}\]
where \(R_{31} := \frac{V_3}{V_1}\) and \(R_{21} := \frac{V_2}{V_1}\).
For an adiabatic expansion, \(PV^{\gamma}\) is constant. Exploiting this fact, we can see from the diagram that the following relations hold:
\begin{eqnarray}
\frac{P_1}{P_3} &=& \left( {V_2 \over V_1} \right)^{\gamma} = R_{21}^{\gamma} \nonumber \\
\frac{P_4}{P_3} &=& \left( {V_3 \over V_1} \right)^{\gamma} = R_{31}^{\gamma} \nonumber
\end{eqnarray}
Finally, putting these back in and playing with minus signs:
\[ \eta = 1 - \gamma^{-1} {R_{31}^{\gamma} - R_{21}^{\gamma} \over R_{31} - R_{21}} \; \blacksquare \]
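The closed form can be checked against a direct computation of the heats around the cycle for an ideal monatomic gas (\(\gamma = 5/3\)); the volumes below are arbitrary made-up choices:

```python
import numpy as np

gamma = 5.0 / 3.0             # monatomic ideal gas
Cv, Cp = 1.5, 2.5             # heat capacities in units of n*R (set n*R = 1)

V1, V2, V3 = 1.0, 0.4, 0.6    # made-up volumes with V2 < V3 < V1
p1 = 1.0

# Trace the cycle: adiabat 1->2, isobar 2->3, adiabat 3->4, isochore 4->1.
p2 = p1 * (V1 / V2) ** gamma
p4 = p2 * (V3 / V1) ** gamma
T1, T2, T3, T4 = p1 * V1, p2 * V2, p2 * V3, p4 * V1   # T = pV with n*R = 1

Q_H = Cp * (T3 - T2)          # heat absorbed on the isobar
Q_L = Cv * (T1 - T4)          # heat (negative) on the isochore
eta_direct = 1 + Q_L / Q_H

R31, R21 = V3 / V1, V2 / V1
eta_formula = 1 - (R31**gamma - R21**gamma) / (gamma * (R31 - R21))

assert np.isclose(eta_direct, eta_formula)
print(round(eta_direct, 3))  # -> 0.371
```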
This question is on the same topic as this one but more complex. So, let's consider the cell with $\ce{Zn}$ and $\ce{Cu}$ electrodes inside the $\ce{NaCl}$ solution. On the $\ce{Zn}$ electrode we have $$\ce{Zn + 2Cl^- -> ZnCl2 + 2e^-},$$ and on the copper one $$\ce{H3+O + e^- -> 1/2 H2 + H2O}.$$
For both reactions we can find standard electrode potentials, but they are given for normal conditions and $1~\mathrm{M}$ concentrations of $\ce{Zn^{2+}}$ and $\ce{H3+O}$. To calculate the EMF for non-$1~\mathrm{M}$ concentrations we use the Nernst equation. In this case we have something like $$E=E_0 - \frac{\mathcal{R}T}{n\mathcal{F}}\ln{\frac{[\ce{Zn^{2+}}]}{[\ce{H3+O}]}}.$$ But in the initial $\ce{NaCl}$ solution we do not really have any $\ce{Zn}$ ions (so their concentration is very close to zero and unknown to us). Of course, in a few microseconds some $\ce{Zn}$ from the electrode will dissolve, creating some finite potential. But in the experiment, when putting a $\ce{Zn}$ electrode into the same solution we always observe approximately the same EMF value, $\approx 0.8~\mathrm{V}$. How can this value be predicted theoretically?
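For a sense of scale, the Nernst correction itself is simple to evaluate. Assuming (purely for illustration) a hypothetical residual \(\ce{Zn^{2+}}\) concentration of \(10^{-6}~\mathrm{M}\):

```python
import math

R = 8.314       # gas constant, J/(mol*K)
T = 298.15      # temperature, K
F = 96485.0     # Faraday constant, C/mol
n = 2           # electrons in Zn2+ + 2e- -> Zn

c_zn = 1e-6     # hypothetical residual Zn2+ concentration, mol/L (made up)

# Nernst equation for the reduction Zn2+ + 2e- -> Zn:
# E = E0 + (R*T)/(n*F) * ln([Zn2+]); the shift from the standard potential is
E_shift = (R * T) / (n * F) * math.log(c_zn)
print(round(E_shift, 3))  # -> -0.177, i.e. ~0.18 V more negative than E0
```

This shows how strongly the unknown near-zero concentration moves the predicted potential, which is part of why the measured value is hard to derive from the standard potential alone.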
Furthermore, my question is how to calculate the EMF of such cell as precisely as possible for different experimental conditions, e.g. different temperature?
I have made an experiment showing that the EMF of a 'cell' with $\ce{Zn}$ and $\ce{Cu}$ electrodes into $\ce{NaCl}$ solution is decreasing when heating the $\ce{NaCl}$ solution.
It's strange, let me explain why. The Nernst equation tells us that EMF is increasing with $T$ when $[\ce{Zn^{2+}}]$ is less than $[\ce{H3+O}]$ (because $\log$ is negative), and otherwise EMF is decreasing with $T$ ($\log$ is positive). But in my opinion the concentration of $\ce{Zn}$ ions is always less than $[\ce{H3+O}]$ because we do not have $\ce{Zn}$ in the initial solution. So then logarithm is always negative so EMF is growing with $T$.
In this case, why do I observe the EMF decreasing for $\ce{Zn-Cu}$ pair?
And the last question. When we talk about the $\ce{Zn-Cu}$ pair we assume that zinc dissolves and hydrogen gas is created on the copper electrode. But why do I get an EMF when putting $\ce{Al}$ ($\ce{Fe, Cr, etc.}$) electrodes into a $\ce{NaCl}$ solution instead of $\ce{Cu}$, while keeping the $\ce{Zn}$ electrode in place? I am not sure that hydrogen is created in such pairs: all these metals have a negative standard electrode potential. The measured EMFs of all these pairs ($\ce{Zn-Al, Zn-Fe, Zn-Cr}$) are significantly different, so the second metal is important in these cases. What reactions are going on in such pairs?
\(V_0\): value of windfall at \(t=0\)
\(V_w\): value of windfall over time
\(V_{ss}\): value of steady saving over time
\(F\): savings per time
\(r\): rate of return on invested capital
\(t\): time when the windfall and consistent savings are equal
Usually, we do not have a choice in the matter of windfalls, whether they come from wealthy relatives passing, insurance settlements, or winning the lottery, we tend to have little say in when and how they happen. Often I hear people justify their lack of saving habits by claiming their future windfalls will solve everything. This is lousy logic for two reasons: first and most obviously, the windfall is never guaranteed. Second, the power of habitual saving is far greater than most people believe. The equations below were derived here.
Windfall: \( \displaystyle V_{w} = V_0 e^{rt}\)

Steady saving: \(\displaystyle V_{ss} = \frac{F}{r} \left( e^{rt} - 1\right)\)

Ratio: \(\displaystyle \frac{V_w}{V_{ss}} = \frac{V_0 e^{rt}}{\frac{F}{r} \left( e^{rt} - 1\right)}\)

Limit of the ratio: \(\displaystyle \lim_{t \rightarrow \infty} \frac{V_w}{V_{ss}} = \frac{V_0}{F/r}\)
The windfall required to put you in a better situation than someone with a steady saving habit is \( F/r\). Windfalls or not, if you want to be ahead with money long term, you have to cultivate a habit of saving aggressively. Unless your windfall is many millions of dollars, you will always lose out against an aggressive middle-class saver who can put away $10,000/year. For the windfall recipient to come out ahead of the steady saver, the windfall (\(V_0\)) must be greater than the saving rate over the return (\(F/r\)). If not, there will always be a crossover point between the two curves. Where is that point?
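Setting \(V_w = V_{ss}\) and solving for \(t\) gives the crossover time explicitly; a quick sketch with hypothetical numbers:

```python
import math

def crossover_time(V0, F, r):
    # Solve V0 * e^{rt} = (F/r) * (e^{rt} - 1) for t.
    # A solution exists only when V0 < F/r; otherwise the windfall stays ahead forever.
    if V0 >= F / r:
        raise ValueError("windfall exceeds F/r: no crossover")
    return (1 / r) * math.log(F / (F - r * V0))

# a $30k windfall vs. $10k/yr saved at a 5% return (hypothetical numbers)
t = crossover_time(30_000, 10_000, 0.05)
print(round(t, 2))  # the steady saver catches up in roughly 3.25 years
```

The takeaway: even a windfall several times the annual saving rate is overtaken within a few years when \(V_0 < F/r\).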
This section was written in direct response to a friend who inherited $30,000. He asked for my advice; I told him he should be saving aggressively, that he could afford to put $5-10k/yr away, that he should have started a decade ago when he first asked for my advice, and that even if he started today, the money from consistent saving would dwarf his windfall rather rapidly. I am not qualified to speculate on the origin of the psychological deficiencies that would lead someone to conclude their $30k windfall is large enough to render saving inconsequential. It would take only 5 or 6 years of saving for him to equal his windfall, even in the absence of investing. Even with rewards on a relatively short timescale, he could not be persuaded to modify his behavior to improve his long-term well-being.
This question is about homomorphism and its definition, which still feels kind of weird and unreliable to me.
To be more concrete, let's take a graph $G = (Vertices(G), Edges(G))$. Then a homomorphism of $G$ to some graph $G'$
consists of two independent functions: $vrt : Vertices(G) \mapsto Vertices(G')$ and $edg : Edges(G) \mapsto Edges(G')$. As such, in this particular case the homomorphism itself becomes a complicated object: it can be decomposed into simpler primitives, and thus one has to define a rule for how to apply those primitives to $G$ in order to get $G'$. Formally it looks something like: $$(\forall v_G \in Vertices(G) : vrt(v_G) \in Vertices(G')) \land (\forall e_G \in Edges(G) : edg(e_G) \in Edges(G')) \tag{1}$$

Observation #1: I can't see how this follows from the definition of a homomorphism. There is no $(\circ)$ given explicitly; what kind of operation is it? What are the elements of that operation? Can someone show me why the definitions of $(vrt, edg)$ follow from it?

Observation #2: I see that a homomorphism takes care not to degrade the existing structure, while saying literally nothing about upgrading it. Since a homomorphism is a complicated object in its own right, it can preserve the existing structure and still add something that does not exist in the original one, and requirement (1) above would hold all the same. To ensure no upgrading took place, one should extend (1) to: $$\text{vertices of } G \text{ did not degrade} \land \text{edges of } G \text{ did not degrade} \land \text{all the vertices of } G' \text{ came from } G \land \text{all the edges of } G' \text{ came from } G \tag{2}$$ where:

- vertices of $G$ did not degrade: $(\forall v_G \in Vertices(G) : vrt(v_G) \in Vertices(G'))$
- edges of $G$ did not degrade: $(\forall e_G \in Edges(G) : edg(e_G) \in Edges(G'))$
- all the vertices of $G'$ came from $G$: $\not\exists v_{G'} \in Vertices(G') : \forall v_G \in Vertices(G) : vrt(v_G) \neq v_{G'}$
- all the edges of $G'$ came from $G$: $\not\exists e_{G'} \in Edges(G') : \forall e_G \in Edges(G) : edg(e_G) \neq e_{G'}$
As a conclusion: I don't have any teacher for learning math; I do it only with the help of books published online and this platform. And to me it feels like homomorphism is both 1) extraordinarily important, and 2) a poorly defined idea. At first sight it seems rigorous, but the deeper one digs, the sloppier it becomes. It feels like mathematicians just don't bother to pin down the loose ends: whenever it comes down to some specific structure, like graphs or monoids or whatever else, homomorphism emerges
not as a direct consequence of $F(g \circ f) = F(g) \bullet F(f)$, but as something intuitive, something like "we all understand that to preserve such-and-such particular structure, one has to fulfill the following requirements". And yet I haven't seen the "no-upgrade" requirement (observation #2), which seems quite valuable.
So, please, help me get deeper into math. What's wrong with my understanding? |
I like using amsthm for my theorems, etc., but to highlight their importance I like exercises to be in tcolorboxes. I want the exercises in the boxes to share the same counter as my theorems.

I managed to create an exercise environment which works as desired, harmoniously with cleveref.
But then (it took me a while to realise this) for some reason, if two exercises in different sections happen to end in the same number (say exercise X.2 from section X and exercise Y.2 from section Y), then \cref{theExercise} will always point me to the first one, even if I refer to the second one.
Here is an MWE.
\documentclass{article}
\usepackage{amsmath,amsthm,lipsum}
\usepackage[most]{tcolorbox}
\usepackage{hyperref}
\usepackage[nameinlink]{cleveref}

% Theorem Environments
\newtheorem{lemma}{Lemma}[section]
\newtheorem{definition}[lemma]{Definition}

% Exercise Environment
\makeatletter
\let\c@exercise\c@lemma
\def\p@exercise{\p@lemma}
\def\theexercise{\thelemma}
\makeatother
\crefname{exercise}{exercise}{exercises}
\newenvironment{exercise}[1][]{%
  \refstepcounter{exercise}\par\medskip
  \begin{tcolorbox}[breakable, enhanced, colback=gray!7!white,
      parbox=false, drop fuzzy shadow]
  \noindent{\textbf{Exercise~\theexercise #1}}\rmfamily\par\medskip
}{%
  \end{tcolorbox}\medskip
}

\begin{document}

\section{The First Section}

\begin{definition}[Continuity]
  \label{def:continuity}
  $f\colon A\to B$ is continuous if for all $U \subseteq B$,
  \[\text{$U$ is open in $B$} \implies \text{$f^{-1}(U)$ is open in $A$.}\]
\end{definition}

\begin{exercise}
  \label{exercise:continuityEpsilonDelta}
  Let $A\subseteq\mathbf R$. Show that $f\colon A\to\mathbf R$ is continuous
  in the sense of \cref{def:continuity} if and only if for all $a\in A$,
  \[(\forall\epsilon > 0)(\exists\delta>0)(\forall x\in A)
    (0<|a-x|<\delta\implies |f(x)-f(a)|<\epsilon).\]
\end{exercise}

{\bfseries Notice here \verb|\theexercise| is \theexercise.}

\section{The Second Section}

\begin{lemma}[Handshaking Lemma]
  \label{lemma:HS}
  Let $G=(V,E)$ be a graph. Then
  \[\sum_{v\in V} \deg(v) = 2|E|.\]
\end{lemma}

\begin{exercise}
  \label{exercise:HS}
  Prove that if $H=(V,E)$ is a hypergraph, then
  \[\sum_{v\in V}\deg(v) = \sum_{e\in E}|e|.\]
  You may use \cref{lemma:HS}.
\end{exercise}

{\bfseries Notice here \verb|\theexercise| is \theexercise.}

\pagebreak
\appendix

\section{Solutions to Exercises}

The solution to \cref{exercise:HS} is the following.

\begin{proof}
  \lipsum[1]
\end{proof}

\end{document}
Clicking the link of \cref{exercise:HS} in the appendix will take you to the first exercise, even though I reference the second.
I appreciate any assistance with this. |
Computer Science > Machine Learning

Title: Finite Precision Stochastic Optimization -- Accounting for the Bias
(Submitted on 22 Aug 2019 (v1), last revised 26 Aug 2019 (this version, v2))
Abstract: We consider first order stochastic optimization where the oracle must quantize each subgradient estimate to $r$ bits. We treat two oracle models: the first where the Euclidean norm of the oracle output is almost surely bounded and the second where it is mean square bounded. Prior work in this setting assumes the availability of unbiased quantizers. While this assumption is valid in the case of almost surely bounded oracles, it does not hold true for the standard setting of mean square bounded oracles, and the bias can dramatically affect the convergence rate. We analyze the performance of standard quantizers from prior work in combination with projected stochastic gradient descent for both these oracle models and present two new adaptive quantizers that outperform the existing ones. Specifically, for almost surely bounded oracles, we first establish a lower bound for the precision needed to attain the standard convergence rate of $T^{-\frac 12}$ for optimizing convex functions over a $d$-dimensional domain. Our proposed Rotated Adaptive Tetra-iterated Quantizer (RATQ) is only a factor of $O(\log \log \log^\ast d)$ away from this lower bound. For mean square bounded oracles, we show that a state-of-the-art Rotated Uniform Quantizer (RUQ) from prior work would need at least $\Omega(d\log T)$ bits to achieve the convergence rate of $T^{-\frac 12}$, using any optimization protocol. However, our proposed Rotated Adaptive Quantizer (RAQ) outperforms RUQ in this setting and attains a convergence rate of $T^{-\frac 12}$ using a precision of only $O(d\log\log T)$. For mean square bounded oracles, in the communication-starved regime where the precision $r$ is fixed to a constant independent of $T$, we show that RUQ cannot attain a convergence rate better than $T^{-\frac 14}$ for any $r$, while RAQ can attain convergence at rates arbitrarily close to $T^{-\frac 12}$ as $r$ increases.
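The paper's quantizers (RATQ, RAQ) are more elaborate, but the basic building block of the setting — rounding a bounded vector to a small grid without introducing bias — can be sketched as follows (an illustrative toy for readers new to the area, not the paper's construction):

```python
import random

def stochastic_uniform_quantize(v, B, levels):
    # Uniform grid of `levels` points on [-B, B]; each coordinate is rounded
    # up to the next grid point with probability proportional to its offset,
    # so that the expected quantized value equals the (clipped) input.
    step = 2 * B / (levels - 1)
    out = []
    for x in v:
        x = max(-B, min(B, x))
        low_idx = min(int((x + B) // step), levels - 2)
        low = -B + low_idx * step
        p_up = (x - low) / step
        out.append(low + step if random.random() < p_up else low)
    return out

random.seed(0)
draws = [stochastic_uniform_quantize([0.3, -0.7], 1.0, 5)[0] for _ in range(20000)]
print(sum(draws) / len(draws))  # close to 0.3 on average: the quantizer is unbiased
```

Unbiasedness of this kind is exactly the assumption the abstract points out fails to hold once the oracle output is only mean square bounded (so it cannot be clipped to a fixed $[-B, B]$ without bias).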
Submission history
From: Himanshu Tyagi [view email]
[v1] Thu, 22 Aug 2019 04:57:22 GMT (49kb)
[v2] Mon, 26 Aug 2019 04:56:31 GMT (49kb)
\[
\int_{-\infty}^{\infty} \mathrm{d}x \exp\left(-\frac{(x-\mu)^4}{2}\right) \]
was given. Using RcppNumerical is straightforward. One defines a class that extends Numer::Func for the function and an interface function that calls Numer::integrate on it:
// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
#include <RcppNumerical.h>

class exp4: public Numer::Func {
private:
    double mean;
public:
    exp4(double mean_) : mean(mean_) {}

    double operator()(const double& x) const {
        return exp(-pow(x - mean, 4) / 2);
    }
};

// [[Rcpp::export]]
Rcpp::NumericVector integrate_exp4(const double &mean, const double &lower, const double &upper) {
    exp4 function(mean);
    double err_est;
    int err_code;
    const double result = Numer::integrate(function, lower, upper, err_est, err_code);
    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = err_est);
}
This works fine for finite ranges:
integrate_exp4(4, 0, 4)
## result error ## 1.077900e+00 9.252237e-08
However, it produces NaN for infinite ones:
integrate_exp4(4, -Inf, Inf)
## result error ## NaN NaN
This is disappointing, since base R's integrate() handles this without problems:
exp4 <- function(x, mean) exp(-(x - mean)^4 / 2)
integrate(exp4, 0, 4, mean = 4)
## 1.0779 with absolute error < 1.3e-07
integrate(exp4, -Inf, Inf, mean = 4)
## 2.155801 with absolute error < 7.9e-06
In this particular case the problem can be easily solved in two different ways. First, the integral can be expressed in terms of the Gamma function:
\[
\int_{-\infty}^{\infty} \mathrm{d}x \exp\left(-\frac{(x-\mu)^4}{2}\right) = 2^{-\frac{3}{4}} \Gamma\left(\frac{1}{4}\right) \approx 2.155801 \]
Second, the integrand is almost zero almost everywhere:
It is therefore sufficient to integrate over a small region around mean to get a reasonable approximation for the integral over the infinite range:
integrate_exp4(4, 1, 7)
## result error ## 2.155801e+00 9.926448e-13
However, the trick of approximating the integral over an infinite range with an integral over a (possibly large) finite range does not work for functions that approach zero more slowly. The help page for integrate() has a nice example of this effect:
## a slowly-convergent integral
integrand <- function(x) { 1/((x+1)*sqrt(x)) }
integrate(integrand, lower = 0, upper = Inf)
## 3.141593 with absolute error < 2.7e-05
## don't do this if you really want the integral from 0 to Inf
integrate(integrand, lower = 0, upper = 10)
## 2.529038 with absolute error < 3e-04
integrate(integrand, lower = 0, upper = 100000)
## 3.135268 with absolute error < 4.2e-07
integrate(integrand, lower = 0, upper = 1000000, stop.on.error = FALSE)
## failed with message 'the integral is probably divergent'
How does integrate() handle the infinite range, and can we replicate this in Rcpp? The help page states:
If one or both limits are infinite, the infinite range is mapped onto a finite interval.
This is in fact done by a different function from R's C API: Rdqagi() instead of Rdqags(). In principle one could call Rdqagi() via Rcpp, but this is not straightforward. Fortunately, there are at least two other solutions.
The GNU Scientific Library provides a function to integrate over the infinite interval \((-\infty, \infty)\), which can be used via the RcppGSL package:
// [[Rcpp::depends(RcppGSL)]]
#include <RcppGSL.h>
#include <gsl/gsl_integration.h>

double exp4(double x, void *params) {
    double mean = *(double *) params;
    return exp(-pow(x - mean, 4) / 2);
}

// [[Rcpp::export]]
Rcpp::NumericVector normalize_exp4_gsl(double &mean) {
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);

    double result, error;
    gsl_function F;
    F.function = &exp4;
    F.params = &mean;

    gsl_integration_qagi(&F, 0, 1e-7, 1000, w, &result, &error);
    gsl_integration_workspace_free(w);

    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = error);
}
normalize_exp4_gsl(4)
## result error ## 2.155801e+00 3.718126e-08
Alternatively, one can apply the transformation used by GSL (and probably R) also in conjunction with RcppNumerical. To do so, one has to substitute \(x = (1-t)/t\) resulting in
\[
\int_{-\infty}^{\infty} \mathrm{d}x f(x) = \int_0^1 \mathrm{d}t \frac{f((1-t)/t) + f(-(1-t)/t)}{t^2} \]
Now one could write the code for the transformed function directly, but it is of course nicer to have a general solution, i.e. a class template that can transform any function in the desired fashion:
// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
#include <RcppNumerical.h>

class exp4: public Numer::Func {
private:
    double mean;
public:
    exp4(double mean_) : mean(mean_) {}

    double operator()(const double& x) const {
        return exp(-pow(x - mean, 4) / 2);
    }
};

// [[Rcpp::plugins(cpp11)]]
template <class T>
class trans_func: public T {
public:
    using T::T;

    double operator()(const double& t) const {
        double x = (1 - t) / t;
        return (T::operator()(x) + T::operator()(-x)) / pow(t, 2);
    }
};

// [[Rcpp::export]]
Rcpp::NumericVector normalize_exp4(const double &mean) {
    trans_func<exp4> f(mean);
    double err_est;
    int err_code;
    const double result = Numer::integrate(f, 0, 1, err_est, err_code);
    return Rcpp::NumericVector::create(Rcpp::Named("result") = result,
                                       Rcpp::Named("error") = err_est);
}
normalize_exp4(4)
## result error ## 2.155801e+00 1.439771e-06
Note that the exp4 class is identical to the one from the initial example. This means one can use the same class to calculate integrals over a finite range and, after transformation, over an infinite range.
in this case, you might consider a different approach, since the numerator is quite simple:
\documentclass[12pt]{article}
\usepackage{amsmath}
\begin{document}
This is the reaction equation:
\begin{equation}
\label{eq: rate}
r = P_{A_b} \bigg/ \! \biggl( \frac{1}{\frac{k_G a_{GL}}{\rho_b \epsilon_s}}
+ \frac{H}{\frac{k_L a_{GL}}{\rho_b \epsilon_s}}
+ \frac{H}{\frac{k_{C_A} S_{x}}{{\rho_s V_\rho}}}
+ \frac{H}{\eta \hat{k} S_g} \biggr)
\end{equation}
\end{document}
addendum: since it has been pointed out that nobody is addressing the fact that, in the original, the G and L in the subscripts in the denominator fractions are the same size as (and look larger than, since they're uppercase) the variables they're subscripted to, here's the explanation.
there are three sizes of fonts in a "default" display: the main size, \scriptstyle, and \scriptscriptstyle. in a fraction, the "main size" in the numerator and denominator is \scriptstyle, and the size of sub- and superscripts within the numerator and denominator is the next size down -- which is also the smallest size available. when another fraction is embedded in a numerator or denominator, and that fraction has sub- or superscripts, there's no smaller font to call on. so all glyphs in the "second-order" fraction are set in the same size -- \scriptscriptstyle. therefore, some drastic measures must be taken to make a visible difference.
just as it is recommended, when there is a complicated exponential function, to use \exp(...) rather than setting everything as a superscript (bringing everything up a size and making it readable), reformulating a complicated expression such as the one presented in this question is a legitimate way of approaching the problem. the best solution is the one that is least visually confusing as well as mathematically valid, and that's not always accomplished by simply changing the size of all the elements to be in "proper" relative sizes.
Source of the problem
2012 AMC (American Mathematics Competitions) 8, Problem 24
Do you really need a hint? Try it first!
Hint figure :
Assume that the radius of the circle is \( r \) units and proceed to compare areas.
Note that the side length of the square is \( 2r \) units.
Clearly, $$ \text{area(red square)} - \text{area(circle)} = \text{area(star)} \\ \Rightarrow \frac{\text{area(red square)}}{\text{area(circle)}} - 1 = \frac{\text{area(star)}}{\text{area(circle)}} \\ \Rightarrow \frac{\text{area(star)}}{\text{area(circle)}} = \frac{(2r)^2}{\pi r^2} - 1 = \frac{4 - \pi}{\pi} $$
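A quick numeric check of the ratio (not part of the original solution, just a sanity check):

```python
import math

r = 1.0  # any radius works; the ratio is scale-invariant
square = (2 * r) ** 2        # area of the circumscribing square
circle = math.pi * r ** 2    # area of the inscribed circle
star = square - circle       # area of the star region
print(star / circle)         # (4 - pi)/pi, approximately 0.273
```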
Connected Program at Cheenta
Math Olympiad is the greatest and most challenging academic contest for school students. Brilliant school students from over 100 countries participate in it every year. Cheenta works with small groups of gifted students through an intense training program. It is a deeply personalized journey toward intellectual prowess and technical sophistication.
Heart Rate Variability Logger is an app I developed to record, plot and export time- and frequency-domain Heart Rate Variability (HRV) features. The app is available for iPhone and Android; however, features on Android are limited. This post is a sort of user guide where I cover the different features and some implementation choices in more detail.
App features:
I also tried to cover most of the questions that I got by email in the last few months. Please comment or email me if you have other comments or requests for future versions. More specifically, the post covers:
1. Introduction on HRV and applications
2. Hardware requirements
3. Quick start guide
4. Data acquisition and signal processing pipeline
5. RR-intervals correction
6. Features extraction
7. Data export and format
1. Introduction on HRV and applications
The cardiovascular system is mostly controlled by autonomic regulation through the activity of sympathetic and parasympathetic pathways of the autonomic nervous system. HRV analysis attempts to assess cardiac autonomic regulation through quantification of sinus rhythm variability. The sinus rhythm times series is derived from RR intervals (R peaks of the QRS complex derived from the electrocardiogram (ECG)), by extracting only normal (NN) intervals.
Since HRV aims at quantifying autonomic regulations, it can be used as marker of sympathetic or parasympathetic predominance, and therefore become relevant in many applications. In athletes for example [1] heavy training is responsible for shifting the cardiac autonomic balance toward a predominance of the sympathetic over the parasympathetic drive, and HRV analysis attempts to quantify this shift (see this post for more on HRV for training).
Other applications span from stress monitoring during public speaking [2] or daily life [3], assessment of pathological conditions [4] and even emotional regulation during financial decision-making (trading) [5].
2. Hardware requirements
Heart Rate Variability Logger requires a Bluetooth Low Energy (also called Bluetooth Smart or BLE or 4.0) heart rate monitor. Personally, the most reliable I tried are Polar's H7 and Under Armour's Armour39. See my other post on HRV for Training for detailed comparisons on RR-intervals and features extracted from different heart rate monitors. The first iPhone to support BLE was the 4S.
3. Quick start guide
Open the app, and push the Bluetooth icon to connect to your heart rate monitor. No pin or passcode is required to connect to Bluetooth low energy monitors, the only thing you need to do is to turn on your Bluetooth radio. The app will prompt you in case your Bluetooth radio is off. Once you are connected, the real-time view will be enabled and HRV logger will start processing RR-intervals and computing features. More details in this slideshow (includes info on experience sampling, activity and location tracking):
4. Data acquisition and signal processing pipeline
The following block diagram represents the signal processing pipeline implemented in HRV logger. The single blocks are explained in more detail in sections 5-6.
5. RR-intervals correction
RR-Intervals correction prevents artifacts due to ectopic beats or motion from affecting features computation, as often reported in literature for HRV analysis.
It is advised to keep RR-interval correction at 20%, meaning that every RR-interval which differs by more than 20% from the previous one will be discarded. HRV Logger provides three options: the correction can be completely disabled, in case you are interested in looking at PVCs or other events; by default the correction is set to 20%; and the third setting is 50%, a less aggressive correction, since only RR-intervals which differ by more than 50% from the previous one will be discarded.
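A minimal sketch of this correction rule (my own illustration, assuming each interval is compared against the last accepted one):

```python
def correct_rr(rr_ms, threshold=0.20):
    # Discard any RR-interval differing from the previous accepted one
    # by more than `threshold` (20% by default, per the app's setting).
    kept = [rr_ms[0]]
    for x in rr_ms[1:]:
        if abs(x - kept[-1]) / kept[-1] <= threshold:
            kept.append(x)
    return kept

print(correct_rr([800, 810, 1200, 805]))  # the 1200 ms artifact is dropped
```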
6. Features extraction
The time window used to compute features can be configured choosing between 30 seconds, 1, 2 or 5 minutes. Windows of at least 2 minutes should be used when using low frequency features (LF) [2]. Given the non-constant frequency at which RR-intervals are received, a buffer of RR-intervals is used to collect 30 seconds to 5 minutes of data, on which features extraction is executed.
The following features are extracted from RR-intervals:
The term NN (e.g. AVNN) is used instead of RR to emphasize the fact that the beats used to extract HRV features are sinus beats (this is also why it is important to keep the RR-interval correction enabled). Assuming we have $n$ beats in our buffer:
RR-intervals Time-domain features
AVNN, mean of NN intervals:
\[AVNN = \frac{1}{n} \sum_{k=1}^n NN_k\]
SDNN, standard deviation of NN intervals:
\[SDNN =\sqrt{\frac{1}{n} \sum_{k=1}^{n} (NN_k - AVNN)^2}\]
where AVNN is computed as above.
rMSSD, square root of the mean squared difference of successive NN intervals:
\[rMSSD = \sqrt{\frac{1}{n-1} \sum_{k=1}^{n-1} (NN_{k+1}-NN_{k})^2}\]
pNN50, percentage of pairs of successive NN intervals that differ by more than 50 ms
The difference between beats is calculated as above:
\[NN_{k+1} - NN_{k}\]
Then, if $n50$ is the number of beats for which we have a difference greater than 50 ms ($(NN_{k+1}-NN_{k}) > 50$), pNN50 is computed as:
\[pNN50 = \frac{n50}{n} 100\]
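Put together, the four time-domain formulas above can be sketched as follows (my own straightforward transcription, not the app's actual code; note that the pNN50 formula above is written without an absolute value, while the absolute difference is used here, as is standard):

```python
import math

def time_domain_features(nn):
    # nn: buffer of NN intervals in ms, already artifact-corrected
    n = len(nn)
    avnn = sum(nn) / n
    sdnn = math.sqrt(sum((x - avnn) ** 2 for x in nn) / n)
    diffs = [nn[k + 1] - nn[k] for k in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / (n - 1))
    n50 = sum(1 for d in diffs if abs(d) > 50)   # pairs differing by > 50 ms
    pnn50 = 100 * n50 / n
    return avnn, sdnn, rmssd, pnn50

print(time_domain_features([800, 850, 790, 860]))
```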
Frequency-domain features
Since RR-intervals come at non-constant frequency, they are linearly interpolated before frequency analysis. After interpolation, a hamming window is applied before performing the FFT.
Accelerometer Motion Intensity
Motion Intensity (MI) is computed after filtering the accelerometer data to remove the static effect due to gravity (basically isolating dynamic movement only). Accelerometer is sampled at 2Hz, to prevent battery drain. Assuming $n$ filtered accelerometer samples, MI is computed as follows:
\[MI = \frac{1}{n} \sum_{k=1}^n ACC_{k} \]
where $ACC_{k}$ is the sum of the absolute signal over the three axis:
\[ACC_{k} = abs(ACCx) + abs(ACCy) + abs(ACCz) \]
MI should be used simply to get an indication of the user's movement while carrying the phone. Especially in situations where HR and HRV can be affected by both movement and stress, it can help in acquiring more context.

For the iPhone 5S, motion intensity is replaced by the actual steps taken, computed using the M7 co-processor.
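The MI formulas above amount to a few lines (a sketch; it assumes gravity has already been filtered out of the samples):

```python
def motion_intensity(samples):
    # samples: (ax, ay, az) triples after high-pass filtering out gravity
    n = len(samples)
    return sum(abs(ax) + abs(ay) + abs(az) for ax, ay, az in samples) / n

print(motion_intensity([(0.1, -0.2, 0.0), (0.0, 0.1, -0.1)]))  # roughly 0.25
```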
Location Tracking
Location tracking is available since version 4.0. By enabling location tracking you can add even more context to your HRV recordings. If coarse location information is sufficient, do not enable the high accuracy location tracking, since it will drain more battery (uses GPS).
7. Data export and format
Data can be exported either by connecting your iPhone to iTunes, or by connecting HRV Logger to your Dropbox account. Four files are exported for each recording:
All files are in
csv format, making it easy to import them in other tools. The example files include approximately 2 hours, split in half sedentary and half running. References
[1] Aubert, André E., Bert Seps, and Frank Beckers. "Heart rate variability in athletes." Sports Medicine 33.12 (2003): 889-919.
[2] Kusserow, Martin, Oliver Amft, and Gerhard Tröster. "Analysis of heart stress response for a public talk assistant system."
Ambient Intelligence. Springer Berlin Heidelberg, 2008. 326-342.
[3] Vrijkotte, Tanja GM, Lorenz JP van Doornen, and Eco JC de Geus. "Effects of work stress on ambulatory blood pressure, heart rate, and heart rate variability."
Hypertension 35.4 (2000): 880-886.
[4] Nolan, James, et al. "Prospective study of heart rate variability and mortality in chronic heart failure results of the United Kingdom heart failure evaluation and assessment of risk trial (UK-Heart)."
Circulation 98.15 (1998): 1510-1516.
[5] Fenton-O'Creevy, Mark, et al. "Emotion regulation and trader expertise: Heart rate variability on the trading floor."
Journal of Neuroscience, Psychology, and Economics 5.4 (2012): 227.
[6] Berntson, Gary G. "Heart rate variability: Origins, methods and interpretive caveats."
Psychophysiology 34 (1997): 623-648. |
Background:
In Subhash Khot's original UGC paper (PDF), he proves the UG-hardness of deciding whether a given CSP instance, with constraints all of the form Not-all-equal$(a, b, c)$ over a ternary alphabet, admits an assignment satisfying $1-\epsilon$ of the constraints or whether no assignment satisfies $\frac{8}{9}+\epsilon$ of the constraints, for arbitrarily small $\epsilon > 0$.
I'm curious whether this result has been generalized to any combination of $\ell$-ary constraints for $\ell \ge 3$ and variable domains of size $k \ge 3$ with $(\ell, k) \ne (3, 3)$. That is,

Question: Are there any known hardness-of-approximation results for the predicate $NAE(x_1, \dots, x_\ell)$ with $x_i \in GF(k)$, for $\ell, k \ge 3$ and $(\ell, k) \ne (3, 3)$?
I'm especially interested in the combination of values $\ell = k$; e.g., the predicate Not-all-equal($x_1, \dots, x_k$) for $x_1 \dots, x_k \in GF(k)$. |
I'm whirling a ball tied to a rope. At any instant the centrifugal force (outward force) equalizes the centripetal force (inward force). Then both should cancel each other, and there should be only the tension of the rope to balance the weight of the ball. So why do we count the centripetal force as the resultant force? For example, at the mean (lowest) position $T = mg + m\frac{v^2}{R}$; why do we also consider the second term?
Suppose you are analysing the situation from your frame of reference. Then the net radial inward force on the ball is $T - mg \sin \theta$, while the inward radial acceleration is $\frac{v^2}{r}$. Applying Newton's second law, $T - mg \sin \theta = \frac{mv^2}{r}$.
Now, if you analyse the motion from the frame of the ball, there is no radial acceleration, so the net radial force must be zero. The inward radial force is, again, $T - mg \sin \theta$, while the centrifugal force (a pseudo-force) acts along the outward radial direction with magnitude $\frac{mv^2}{r}$. So we get $\frac{mv^2}{r}-(T-mg \sin \theta) = 0$. Thus, the equations are the same in both frames.
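To see numerically why the $m\frac{v^2}{R}$ term matters at the lowest point, where $\sin\theta = 1$ and $T = mg + m\frac{v^2}{R}$ (values below are hypothetical):

```python
m, g, r, v = 0.5, 9.81, 1.0, 3.0   # kg, m/s^2, m, m/s (hypothetical values)
T_static = m * g                   # tension if the ball merely hung: 4.905 N
T = m * g + m * v ** 2 / r         # tension while whirling: 4.905 + 4.5 = 9.405 N
print(T_static, T)
```

The rope must pull almost twice as hard as the ball's weight; the excess $m v^2 / r$ is precisely the net inward force producing the circular motion.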
From the reference frame of you whirling the ball, there is no centrifugal force. The centrifugal force is a fictitious force, showing up only in the reference frame of the ball. This is because the ball's reference frame is not inertial (it is traveling in a circle, and therefore is constantly accelerating). Noninertial frames give rise to fictitious forces like the centrifugal and Coriolis forces.
The tension is the centripetal force (it is what keeps the ball moving in a circle). So the only forces acting on the ball, from your reference frame, are the centripetal force (tension) and gravity (neglecting air resistance). There is no centrifugal force, in your frame, to cancel the centripetal one. If there were a force cancelling the centripetal force, there wouldn't be circular motion.
By Murray Bourne, 28 Nov 2010
This is fun. And the best part - it's accurate!
If you don't get it, see Graphs of tan, cot, sec and csc.
Image source: LukeSurl.com
He's got some other math-flavored comics (search "math").
See the 1 Comment below.
Posted in Mathematics category - 28 Nov 2010 [Permalink]
Tags: Graphs
very nice example(sunbather) for tan
To motivate this question, I'm going to try and explain some background notions. This won't be absolutely necessary for experts, but I want to be vaguely honest about where this question comes from. I also want to say that this is the type of question that I would ask at a coffee break during a conference about this subject. If my question is too disorganized for mathoverflow, I apologize, and there will be no hard feelings if this question is closed because it is too speculative.
Given a manifold $M$, one has a canonical symplectic structure on the cotangent bundle $T^{*}M$. Given a Poisson subalgebra of functions $A$ on $T^{*}M$, one aim of quantization is to ask whether there exists an algebra of differential operators $A'$ on $M$ (in general they may be differential operators on sections of line bundles over $M$) for which the principal symbol map $A'\rightarrow A$ is a morphism, where the commutator of differential operators corresponds to the Poisson bracket of functions on $T^{*}M.$
This question, which has its roots in the passage from classical to quantum mechanics, was employed by Beilinson-Bernstein in their localization procedure for producing modules over the universal enveloping algebra of a semisimple complex Lie algebra $\mathfrak{g}$ from $D$-modules on the flag variety $G/B$ where $G$ integrates $\mathfrak{g}$ and $B<G$ is a Borel sub-group.
Beilinson and Drinfeld, in a stroke of genius, realized that for $G$ a connected, complex reductive algebraic group over $\mathbb{C},$ this process can be applied to $T^{*}{Bun}_{G}(X)$, where $X$ is a compact Riemann surface and ${Bun}_{G}(X)$ is the moduli stack of principal $G$-bundles over $X.$ Here, the subalgebra of functions they consider is the Hitchin integrable system. Morally speaking, a cotangent vector $\xi$ to a principal $G$-bundle $E_{G}$ is a global section $\xi\in H^{0}(X, K\otimes E_{G}(\mathfrak{g})),$ and a $G$-invariant symmetric homogeneous polynomial on the Lie algebra $\mathfrak{g}$ (almost) produces a function on $T^{*}{Bun}_{G}(X)$ via evaluation: this quickly leads to a completely integrable system on $T^{*}{Bun}_{G}(X)$ called the Hitchin fibration.
What Beilinson and Drinfeld do is produce a commuting algebra of (twisted) differential operators on $Bun_{G}(X)$ whose principal symbols map to functions appearing in the Hitchin integrable system.
While certainly an attractive story on face, as a pedestrian differential geometer, the theory of stacks and local algebra through which this construction passes hides, for me, what these differential operators actually are.
This is where my question begins. Instead of the stack $Bun_{G}(X),$ consider instead the projective variety (analytic space) parameterizing stable $G$-bundles on $X.$ I'm being very loose here, so maybe I want to switch to $GL(n, \mathbb{C})$, and also add extra hypotheses to make the following question well formed.
Question: Is there an avatar of the Beilinson-Drinfeld commuting algebra of differential operators on the moduli space of stable $G$-bundles on $X?$ For $GL(1, \mathbb{C})$, I think this question has a nice answer, and is basically part of the passage from abelian class field theory to geometric class field theory as espoused by many important figures in the study of the geometric Langlands conjecture.
Moving to non-abelian groups like $GL(2,\mathbb{C}),$ I already have no precise idea what might be going on. In this case, line bundles over the moduli space of stable $G$-bundles are a very important object, and subsequent objects like generalized Theta functions play an important role, for example in the Verlinde formula. It's possible that hidden inside these ideas I should find the differential operators I am seeking, but this is the point of my question.
Refined question: Is there a commuting family of differential operators on the space of stable $G$-bundles which corresponds to the Beilinson-Drinfeld construction, and which has a formulation in terms of generalized theta functions, etc.?
In an attempt to not be a total idiot, I understand that in the Beilinson-Drinfeld application, they're focusing on $G$-bundles which are very unstable, those corresponding to $G$-opers, and therefore the stack point of view is essential. In this vein, I'm not asking for an explanation of their work which can be understood in the language of stable bundles. I'm just asking if their construction produces something interesting, perhaps already studied before, when restricted to the space of stable bundles.
I apologize if this is a series of paragraphs, each compounding the next, resulting in a question that makes no sense. If this is not the case, I appreciate any responses, and thank you for reading to the end. |
Some mathematical elements change their style depending on the context, whether they are in line with the text or in an equation-type environment. This article explains how to manually adjust the display style.
Let's see an example
Depending on the value of $x$ the equation \( f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \) may diverge or converge. \[ f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \]
Superscripts, subscripts and fractions are formatted differently.
The maths styles can be set explicitly. For instance, if you want an in-line mathematical element to display as an equation-like element, put \displaystyle before that element. There are some more maths style-related commands that change the size of the text.
In-line maths elements can be set with a different style: \(f(x) = \displaystyle \frac{1}{1+x}\). The same is true the other way around: \begin{eqnarray*} f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \textstyle f(x) = \textstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptstyle f(x) = \scriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptscriptstyle f(x) = \scriptscriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \end{eqnarray*}
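As a sketch, the commands above can be collected into a minimal compilable document (the preamble here is just for illustration):

```latex
% Minimal document illustrating the style commands discussed above.
\documentclass{article}
\begin{document}

In-line maths forced into display style:
\( \displaystyle f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \).

A displayed equation forced into text style:
\[ \textstyle f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \]

\end{document}
```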
For more information see |
Module curve25519_dalek::backend::vector::avx2

Available on crate feature simd_backend and target feature avx2 (and without target feature avx512ifma) only.
An AVX2 implementation of the vectorized point operation strategy.
Our strategy is to implement 4-wide multiplication and squaring by wordslicing, using one 64-bit AVX2 lane for each field element. Field elements are represented in the usual way as 10 u32 limbs in radix \(25.5\) (i.e., alternating between \(2^{26}\) for even limbs and \(2^{25}\) for odd limbs). This has the effect that passing between the parallel 32-bit AVX2 representation and the serial 64-bit representation (which uses radix \(2^{51}\)) amounts to regrouping digits.
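As a sketch of the "regrouping digits" remark (this is illustrative Python, not the library's Rust code, and the helper names are ours): pairing each even 26-bit limb with the following odd 25-bit limb yields exactly the radix-\(2^{51}\) limbs.

```python
import random

# Bit offsets of the 10 limbs in radix 2^25.5: limb i sits at bit ceil(25.5 * i),
# so even limbs hold 26 bits and odd limbs hold 25 bits.
OFFSETS = [0, 26, 51, 77, 102, 128, 153, 179, 204, 230]

def value_from_limbs10(limbs):
    """Integer represented by 10 limbs in radix 2^25.5."""
    return sum(l << off for l, off in zip(limbs, OFFSETS))

def regroup_to_limbs51(limbs):
    """Limb 2j plus (limb 2j+1 << 26) gives one radix-2^51 limb,
    because consecutive even/odd offsets differ by exactly 26."""
    return [limbs[2 * j] + (limbs[2 * j + 1] << 26) for j in range(5)]

def value_from_limbs51(limbs):
    return sum(l << (51 * j) for j, l in enumerate(limbs))

limbs = [random.randrange(1 << (26 if i % 2 == 0 else 25)) for i in range(10)]
assert value_from_limbs10(limbs) == value_from_limbs51(regroup_to_limbs51(limbs))
```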
The field element representation is oriented around the AVX2 vpmuludq instruction, which multiplies the low 32 bits of each 64-bit lane of each operand to produce a 64-bit result.
(a1 ?? b1 ?? c1 ?? d1 ??)
(a2 ?? b2 ?? c2 ?? d2 ??)

(a1*a2 b1*b2 c1*c2 d1*d2)
To unpack 32-bit values into 64-bit lanes for use in multiplication it would be convenient to use the vpunpck[lh]dq instructions, which unpack and interleave the low and high 32-bit lanes of two source vectors. However, the AVX2 versions of these instructions are designed to operate only within 128-bit lanes of the 256-bit vectors, so that interleaving the low lanes of (a0 b0 c0 d0 a1 b1 c1 d1) with zero gives (a0 00 b0 00 a1 00 b1 00). Instead, we pre-shuffle the data layout as (a0 b0 a1 b1 c0 d0 c1 d1) so that we can unpack the "low" and "high" parts as

(a0 00 b0 00 c0 00 d0 00)
(a1 00 b1 00 c1 00 d1 00)
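To see why the pre-shuffle helps, here is a small Python simulation of the lane-restricted unpacking (function names are ours; this only models the within-128-bit-lane semantics of vpunpck[lh]dq, not real intrinsics):

```python
def unpack_lo(x, y):
    # vpunpckldq: interleave the low 32-bit lanes of each 128-bit half
    return [x[0], y[0], x[1], y[1], x[4], y[4], x[5], y[5]]

def unpack_hi(x, y):
    # vpunpckhdq: interleave the high 32-bit lanes of each 128-bit half
    return [x[2], y[2], x[3], y[3], x[6], y[6], x[7], y[7]]

zero = [0] * 8

# Natural layout: interleaving with zero mixes the 0-limbs and 1-limbs.
natural = ["a0", "b0", "c0", "d0", "a1", "b1", "c1", "d1"]
print(unpack_lo(natural, zero))   # ['a0', 0, 'b0', 0, 'a1', 0, 'b1', 0]

# Pre-shuffled layout: the "low" and "high" parts come out cleanly.
shuffled = ["a0", "b0", "a1", "b1", "c0", "d0", "c1", "d1"]
print(unpack_lo(shuffled, zero))  # ['a0', 0, 'b0', 0, 'c0', 0, 'd0', 0]
print(unpack_hi(shuffled, zero))  # ['a1', 0, 'b1', 0, 'c1', 0, 'd1', 0]
```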
The data layout for a vector of four field elements \( (a,b,c,d) \) with limbs \( a_0, a_1, \ldots, a_9 \) is as [u32x8; 5] in the form

(a0 b0 a1 b1 c0 d0 c1 d1)
(a2 b2 a3 b3 c2 d2 c3 d3)
(a4 b4 a5 b5 c4 d4 c5 d5)
(a6 b6 a7 b7 c6 d6 c7 d7)
(a8 b8 a9 b9 c8 d8 c9 d9)
Since this breaks cleanly into two 128-bit lanes, it may be possible to adapt it to 128-bit vector instructions such as NEON without too much difficulty.
To analyze the size of the field element coefficients during the computations, we can parameterize the bounds on the limbs of each field element by \( b \in \mathbb R \) representing the excess bits above that limb's radix, so that each limb is bounded by either \(2^{25+b} \) or \( 2^{26+b} \), as appropriate.
The multiplication routine requires that its inputs are bounded with \( b < 1.75 \), in order to fit a multiplication by \( 19 \) into 32 bits. Since \( \lg 19 < 4.25 \), \( 19x < 2^{32} \) when \( x < 2^{27.75} = 2^{26 + 1.75} \). However, this is only required for one of the inputs; the other can grow up to \( b < 2.5 \).
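The arithmetic in this bound is easy to sanity-check numerically (a quick sketch, not part of the library):

```python
# Check: lg(19) < 4.25, so 19 * x stays below 2^32 whenever x < 2^27.75,
# i.e. whenever the excess on a 26-bit limb satisfies b < 1.75.
import math

assert math.log2(19) < 4.25
x_max = 2 ** 27.75            # = 2^(26 + 1.75)
assert 19 * x_max < 2 ** 32
```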
In addition, the multiplication and squaring routines do not canonically reduce their outputs, but can leave some small uncarried excesses, so that their outputs are bounded with \( b < 0.007 \).
The non-parallel portion of the doubling formulas is $$ \begin{aligned} (S_5 &&,&& S_6 &&,&& S_8 &&,&& S_9 ) &\gets (S_1 + S_2 &&,&& S_1 - S_2 &&,&& S_1 + 2S_3 - S_2 &&,&& S_1 + S_2 - S_4) \end{aligned} $$
Computing \( (S_5, S_6, S_8, S_9 ) \) as $$ \begin{matrix} & S_1 & S_1 & S_1 & S_1 \\ +& S_2 & & & S_2 \\ +& & & S_3 & \\ +& & & S_3 & \\ +& & 2p & 2p & 2p \\ -& & S_2 & S_2 & \\ -& & & & S_4 \\ =& S_5 & S_6 & S_8 & S_9 \end{matrix} $$ results in bit-excesses \( < (1.01, 1.60, 2.33, 2.01)\) for \( (S_5, S_6, S_8, S_9 ) \). The products we want to compute are then $$ \begin{aligned} X_3 &\gets S_8 S_9 \leftrightarrow (2.33, 2.01) \\ Y_3 &\gets S_5 S_6 \leftrightarrow (1.01, 1.60) \\ Z_3 &\gets S_8 S_6 \leftrightarrow (2.33, 1.60) \\ T_3 &\gets S_5 S_9 \leftrightarrow (1.01, 2.01) \end{aligned} $$ which are too large: it's not possible to arrange the multiplicands so that one vector has \(b < 2.5\) and the other has \( b < 1.75 \). However, if we flip the sign of \( S_4 = S_0^2 \) during squaring, so that we output \(S_4' = -S_4 \pmod p\), then we can compute $$ \begin{matrix} & S_1 & S_1 & S_1 & S_1 \\ +& S_2 & & & S_2 \\ +& & & S_3 & \\ +& & & S_3 & \\ +& & & & S_4' \\ +& & 2p & 2p & \\ -& & S_2 & S_2 & \\ =& S_5 & S_6 & S_8 & S_9 \end{matrix} $$ resulting in bit-excesses \( < (1.01, 1.60, 2.33, 1.60)\) for \( (S_5, S_6, S_8, S_9 ) \). The products we want to compute are then $$ \begin{aligned} X_3 &\gets S_8 S_9 \leftrightarrow (2.33, 1.60) \\ Y_3 &\gets S_5 S_6 \leftrightarrow (1.01, 1.60) \\ Z_3 &\gets S_8 S_6 \leftrightarrow (2.33, 1.60) \\ T_3 &\gets S_5 S_9 \leftrightarrow (1.01, 1.60) \end{aligned} $$ whose right-hand sides are all bounded with \( b < 1.75 \) and whose left-hand sides are all bounded with \( b < 2.5 \), so that we can avoid any intermediate reductions.
Modules

constants — This module contains constants used by the AVX2 backend.

edwards — Parallel Edwards Arithmetic for Curve25519.

field — An implementation of 4-way vectorized 32-bit field arithmetic using AVX2.
The problem of counting such "imperfect" matchings in bipartite graphs is #P-complete. This has been proved by Les Valiant himself, on page 415 of the paper: Leslie G. Valiant, "The Complexity of Enumeration and Reliability Problems", SIAM J. Comput., 8(3), 410–421.
It's NP-complete by a reduction from cliques in graphs. Given an arbitrary graph $G$, construct a bipartite graph from its incidence matrix, by making one side $U$ of the bipartition correspond to the edges of $G$ and the other side correspond to the vertices of $G$. Then $G$ has a clique of size $\omega$ if and only if the constructed bipartite graph has a ...
The answer here seems to imply there is a more general result. For this particular case, here is a self-contained way to reduce the problem to maximum weight perfect matching. Assume $k$ is even. Given $G=(L\cup R, E)$, we construct a new graph $G'=(V',E')$ as follows; let $|R|=n$. Add the vertices in $R$ to $V'$. For each vertex $v \in L$, add vertices $v_1,...
Would you be satisfied with generating planar cubic bipartite maps (i.e., such graphs equipped with a planar embedding specified by a cyclic ordering on half-edges)? That problem was addressed in: Gilles Schaeffer, "Bijective Census and Random Generation of Eulerian Planar Maps with Prescribed Vertex Degrees", The Electronic Journal of Combinatorics 4(1), ...
For an extreme example, chordal graphs can have as many as $\binom{n}{2}$ edges but chordal graphs that happen to also be bipartite can have only $n-1$ edges (they are forests). Or even more extremely, consider complete graphs versus (complete $\cap$ bipartite) graphs. But perhaps it makes sense to restrict your problem only to classes of graphs that are ...
According to the 59th slide of the following pdf: https://grow2015.sciencesconf.org/file/174789 (a talk by D. Kratsch at GROW 2015) we have that: A vertex in a graph is weakly simplicial if its neighborhood is an independent set and the neighborhoods of its neighbors form a chain under inclusion. See the slide for an illustration. The context where ...
If the graph is acyclic, which implies that it is also bipartite, then the perfect matching is unique by the following algorithm: while the graph is not empty, pick a leaf vertex $u$ (which exists because the graph is acyclic), add the edge between $u$ and its unique neighbor $v$ to the matching, and then remove $u$ and $v$. If at some point graphs without ...
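A small Python sketch of this leaf-peeling procedure (the names and graph representation are ours, and it assumes the input really is acyclic):

```python
def unique_perfect_matching(adj):
    """adj: dict vertex -> set of neighbours of an undirected acyclic graph.
    Returns the unique perfect matching as a set of frozenset edges,
    or None if no perfect matching exists."""
    adj = {v: set(ns) for v, ns in adj.items()}
    matching = set()
    while adj:
        # Pick a leaf; in a forest, if no vertex has degree 1, all
        # remaining vertices are isolated and cannot be matched.
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            return None
        v = next(iter(adj[leaf]))           # the leaf's unique neighbour
        matching.add(frozenset((leaf, v)))
        for u in (leaf, v):                 # remove both matched vertices
            for w in adj.pop(u):
                if w in adj:
                    adj[w].discard(u)
    return matching

# Path 1-2-3-4: the unique perfect matching is {1-2, 3-4}.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert unique_perfect_matching(path) == {frozenset((1, 2)), frozenset((3, 4))}
```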
Your problem is NP-hard. Consider a partition problem instance with input $a_1,\ldots,a_n$. We create a complete bipartite graph with $2$ source vertices each with capacity $\frac{1}{2} \sum_{i=1}^n a_i$, and $n$ sink vertices, where the $i$th vertex has capacity $a_i$. Each edge has infinite capacity. It's easy to verify there is a partition with equal ...
The following is a list of results I'm going to prove:
- if $k$ and $l$ are parts of the input and $d = 1$ is a fixed constant then the problem is polynomial time solvable
- if $k$ and $l$ are parts of the input and $d \ge 2$ is a fixed constant then the problem is NP-complete
- if $k$ is a fixed constant then the problem is polynomial time solvable
- if $l$ is a ...
This problem is a special case of the b-matching problem, and hence can be solved in polynomial time. Extensive information on b-matchings can be found for instance in the book: László Lovász and Michael D. Plummer, "Matching Theory", ISBN-10: 0-8218-4759-7, ISBN-13: 978-0-8218-4759-6.
In his 1962 paper "The Maximum Connectivity of a Graph", Harary describes, for integers $p$ and $q$ with $q\ge p-1$, a way to construct a graph with $p$ vertices and $q$ edges that is $k=\lfloor 2q/p\rfloor$-connected. Roughly, the idea is to give indices from $0$ to $p-1$ to the $p$ vertices and then add edges between vertices whose ...
This fact can be found in Godsil, C. D. (1985), "Inverses of trees", Combinatorica 5 (1): 33–39, doi:10.1007/BF02579440 (without proof, near the bottom of the first page): "noting that a tree with a perfect matching has just one perfect matching". The same paper provides a more general characterization of the bipartite graphs with a unique perfect ...
In case anyone else is looking for a practical answer: the program plantri by Brinkmann and McKay can generate small (up to 64 vertices as-is, up to 255 with some hacking) planar bipartite cubic graphs as the duals of Eulerian planar triangulations. The program does not sample uniformly at random, but it is claimed to be efficient enough to generate all ...
You can take any degree 3 bipartite graph $G$ and take its disjoint union $G'$ with a cycle $C$ of length 2m. The new graph $G'$ is bipartite, and has average degree $\frac{3n + 2m}{m+n} = 2 + \frac{n}{n+m}$. Also, the number of perfect matchings in $G'$ is exactly twice the number of perfect matchings in $G$, because the perfect matchings of $G'$ are the ...
Your problem is equivalent to MAX SMTI (Stable Marriage with Ties and Incomplete lists). You can find the current best approximation algorithm for MAX SMTI in the following paper: Z. Kiraly, "Linear Time Local Approximation Algorithm for Maximum Stable Marriage", Algorithms 2013, 6, 471–484. There are many papers on MAX SMTI, but unfortunately I am not aware ...
I think there is no paper solving that exact problem, but "Online Vertex-Weighted Matching" by Aggarwal, Goel, Karande, and Mehta (2011) is very close. If I understood correctly, they solve your problem only with all capacities equal to one. My best guess is that you will have to do some work to extend their guarantees and algorithm to your setting. On the ...
The precoloring extension problem is the following. Input: a number $k$ and a graph $G$ some of whose edges are labeled with labels in $\{1, 2, \ldots, k\}$. Decision: is it possible to color the edges of $G$ with colors $1, 2, \ldots, k$ such that no adjacent edges share a color and such that each initially labeled edge is colored with the color it is ...
If $G$ has a disjoint vertex cycle cover then I agree that $H$ must have a perfect matching, but I don't see the other direction (or I have misunderstood how you define $H$ exactly). I think the construction you were thinking of is the one described in this answer: https://cstheory.stackexchange.com/a/8570/38111. As for your question (assuming that $H$ is ...
After some thinking, I found an answer. If someone has a better one I'll accept it. From a cost matrix of shape $n\times m$ with $n<m$, it is easy to add nodes that will not change anything by giving all their incident edges the same weight $w$; that is, adding $(m-n)\cdot m$ edges.
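A tiny Python illustration of that padding trick (a brute-force sketch on a toy instance, just to show the dummy rows are harmless; not an efficient solver):

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Minimum-cost perfect assignment on a square matrix (tiny inputs only)."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

cost = [[4, 1, 3],
        [2, 0, 5]]           # 2 workers, 3 tasks (n < m)
w = 0                        # any fixed weight works for the dummy rows
padded = cost + [[w] * 3]    # pad to a 3x3 square with one dummy worker

best = brute_force_assignment(padded)
# The dummy row contributes exactly w, so the real optimum is best - w = 3.
assert best - w == 3
```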
Yes, there certainly is. There is a trivial solution. First, solve the resource allocation problem (which can be done in exponential time by enumerating all candidate solutions). Then, use the solution to construct a bipartite graph whose maximum-weight matching corresponds to the solution in some way; this is straightforward. Finally, output that graph. ...
I believe "maximum weight fair bipartite matching" as you've defined it is NP-hard. Even more, determining the existence of a fair bipartite matching is NP-hard. Before I give a proof sketch, for intuition, consider the following small instance. Take $G'=(L, R, E'=L\times R)$ where $L=\{a,b\}$, $R=\{c,d,e,f\}$. Take $p$ such that $p(u,w) = 0$ for $u\in ...
When we are comparing two models against some data, will we obtain the same (set of) posterior odds for the models both when we use the Bayes' factor and when we use the discriminant rule?
If not, which one is considered more "accurate" and why is the other being used?
I have found that the Bayes' factor is described also as:
$B_{i,j} = \frac{\pi_{data}(D|M_i)}{\pi_{data}(D|M_j)} = \frac{L_{max}^{(i)}}{L_{max}^{(j)}} \frac{W_i}{W_j}$

where $W_i$ is the Occam factor: $W_i = \int_{V_{\theta_{i}}} \pi_{prior}(\theta_i|M_i) \frac{L_{(i)}(\theta_i)}{L_{max}^{(i)}} d\theta_i$
Is $\frac{L_{max}^{(i)}}{L_{max}^{(j)}} \frac{W_i}{W_j}$ anyhow relevant to the ratio of 'allocations' at each model ($\frac{allocations-to-model_i}{allocations-to-model_j}$), made by the Bayes' Discriminant rule?
Edit: After the response of conjugateprior, I would like to add this:
I am wondering whether a discriminant analysis (such as the ML discriminant rule, i.e. the Bayesian discriminant rule with priors 1/2 and 1/2) is an acceptable way of doing model selection when we are comparing two models from which we inferred posterior information about some parameter/hyperparameter via Bayes' theorem. That is, although we assume that the data are generated by one model, if we do discriminant analysis, won't the percentage of the mixture reveal the model that the Bayesians describe as 'more likely'?
Condensed Matter > Mesoscale and Nanoscale Physics

Title: Theory of the strongly nonlinear electrodynamic response of graphene: A hot electron model
(Submitted on 13 Aug 2019)
Abstract: The electrodynamic response of graphene to strong electromagnetic radiation is considered. A hot electron model (HEM) is introduced and a corresponding system of nonlinear equations is formulated. Solutions of this system are found and discussed in detail for intrinsic and doped graphene: the hot electron temperature, non-equilibrium electron and hole densities, absorption coefficient and other physical quantities are calculated as functions of the incident wave frequency $\omega$ and intensity $I$, of the equilibrium chemical potential $\mu_0$ and temperature $T_0$, scattering parameters, as well as of the ratio $\tau_\epsilon/\tau_{\rm rec}$ of the intra-band energy relaxation time $\tau_\epsilon$ to the recombination time $\tau_{\rm rec}$. The influence of the radiation intensity on the absorption coefficient $A$ at low ($\hbar\omega\lesssim 2|\mu_0|$, $dA/dI>0$) and high ($\hbar\omega\gtrsim 2|\mu_0|$, $dA/dI<0$) frequencies is studied. The results are shown to be in good agreement with recent experimental data.

Submission history: From Sergey Mikhailov. [v1] Tue, 13 Aug 2019 13:35:28 GMT (601kb)
As dkaeae indicated in his comment, a right-moving or staying Turing machine (TM) is essentially a deterministic finite automaton (DFA). Here's a proof.
Let $M$ be such a machine, whose transition rules are of the form $\delta(q, \gamma)=(t, \beta, d)$, where $q$ is the current state, $\gamma$ is the contents of the current cell, $t$ is the new state, $\beta$ is the symbol that replaces $\gamma$ in the current cell, and $d$ is the direction to move the head, either $R$ or $S$, meaning moving right or staying put respectively.
Let us check the behavior of $M$ when it has just applied a transition rule of the form $\delta(q, \gamma)=(t, \beta, S)$. Now $M$ is in state $t$ on top of $\beta$. In the next step, $M$ will read the symbol $\beta$, applying the unique rule on the pair $(t,\beta)$.
Suppose that rule is $\delta(t, \beta)=(u, \alpha, d)$.
Then $M$ will change state to $u$, rewrite the current cell to $\alpha$ and move in the direction of $d$.
We have found that once $M$ was in state $q$ on top of $\gamma$, it will change state to $u$, rewrite the current cell to $\alpha$ and move in the direction of $d$.
So we can replace the transition rule $\delta(q, \gamma)=(t, \beta, S)$ by $\delta(q, \gamma)=(u, \alpha, d)$ in the specification of $M$ without changing the language accepted by $M$.
Suppose that rule is not defined.
Then $M$ will halt. We can remove the transition rule $\delta(q, \gamma)=(t, \beta, S)$ from the specification of $M$ without changing the language accepted by $M$.
Applying the replacement or removal above repeatedly until there is no rule in $M$ that tells $M$ to stay in the same cell, we find that $L(M)$ is the language accepted by a right-moving-only TM. (If a chain of staying rules cycles back to an earlier state-symbol pair, the machine loops forever in that configuration and never accepts, so all rules in the cycle can simply be removed.)
This question and answer tells us a right-moving-only TM is essentially a DFA. Basically, a rule $\delta(q, \gamma)=(t, \beta, R)$ in a right-moving-only TM corresponds to the rule $\delta(q, \gamma)=t$ in the corresponding DFA. Since the language of a DFA is decidable, so is $L(M)$.
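The rule-rewriting argument can be sketched in Python (the representation and the toy machine below are our own; we assume, as the argument implicitly does, that the stay rules contain no cycle):

```python
def eliminate_stay_rules(delta):
    """Compose each stay rule with the unique rule on its successor
    configuration, or drop it if the machine would halt there."""
    delta = dict(delta)
    changed = True
    while changed:
        changed = False
        for (q, g), (t, b, d) in list(delta.items()):
            if d == 'S':
                nxt = delta.get((t, b))
                if nxt is None:
                    del delta[(q, g)]      # machine halts: drop the rule
                else:
                    delta[(q, g)] = nxt    # compose the two steps
                changed = True
    return delta

def accepts(delta, start, accept, word):
    """Run a right-moving-only TM on word + blank, DFA-style."""
    q = start
    for g in word + '_':
        rule = delta.get((q, g))
        if rule is None:
            return q == accept
        q = rule[0]
    return q == accept

# Toy machine over {a, b} accepting words with an even number of a's;
# the rule on (q0, 'a') takes a detour through a stay rule.
delta = {
    ('q0', 'a'): ('qs', 'a', 'S'),
    ('qs', 'a'): ('q1', 'a', 'R'),
    ('q1', 'a'): ('q0', 'a', 'R'),
    ('q0', 'b'): ('q0', 'b', 'R'),
    ('q1', 'b'): ('q1', 'b', 'R'),
    ('q0', '_'): ('acc', '_', 'R'),
}
delta = eliminate_stay_rules(delta)
assert all(d == 'R' for (_, _, d) in delta.values())
assert accepts(delta, 'q0', 'acc', 'aab')      # two a's: accepted
assert not accepts(delta, 'q0', 'acc', 'ab')   # one a: rejected
```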
Clearly, if $g$ is a deterministic function then for two random variables $X, Y$:
$X \rightarrow Y \rightarrow g(Y)$ i.e. they form a Markov Chain.
If $g$ is a one-to-one mapping we can further state $X \rightarrow g(Y) \rightarrow Y$, as $g(Y)$ is then a sufficient statistic.
I am wondering about the converse of the above statement. If $X \rightarrow g(Y) \rightarrow Y$ holds, can we conclude that $g$ is injective?
I tried proving it but am a bit uncertain about the steps taken. Could somebody have a look at my argumentation and point out the flaws?
Here it is:
Let $X \rightarrow g(Y) \rightarrow Y$. This implies that $X$ and $Y$ are conditionally independent given $g(Y)$. We can thus write: $p(X=x|g(Y)=g(y), Y=y) = p(X=x|g(Y) = g(y))$. However: $p(X=x|g(Y)=g(y), Y=y) = p(X=x|Y = y)$.
Therefore: $p(X=x|g(Y) = g(y)) = p(X=x|Y = y)$. Assume $g$ is not injective. Then there exists at least one pair $y_1, y_2$ such that $g(y_1) = g(y_2)$. Thereby:
$p(X=x|g(Y) = g(y_1)) = \sum_{y:g(y)=g(y_1)} p(X=x|Y = y) = p(X=x|Y = y_1) + p(X=x|Y = y_2) \neq p(X=x|Y = y_1)$
By contradiction, $g$ is injective. |
I am looking for an efficient method to solve $$\sum_{i=1}^{\left|B\right|}\left|Ax_i-b_i\right|^2 \to \min$$ subject to $$\forall i,\; x_i\ge 0,$$ where $A$ is an n-by-m matrix and $B$ is a set of n-dimensional vectors. Only $B$, $n$ and $m$ are given. In other words, the goal is to find $A$ so that the sum of squares of the distances from every point in $B$ to the subspace defined by the columns of $A$ (under the non-negativity constraint) is minimal. $B$ is a finite set with approximately 10 to 1000 elements, $n$ is in the range of 50 to 1000, and $m$ will be rather small, i.e. $\le 10$.
EDIT: Rewritten after clearing up some notational confusion in the comments.
Let $X = \left[x_1, x_2, \ldots x_r\right]$ and $B = \left[b_1, b_2, \ldots b_r\right]$, where $r$ is the number of vectors. Your problem can then be written as

$$ \text{minimize}_{A,\,X\geq 0}\ \|AX-B\|_F $$
You can transform this into a standard form quadratic program by expanding the square term, then distributing the sum (assuming your set B is finite). You can then use one of the standard solvers (see the links at the bottom of the wikipedia page for several options).
If you are looking for more specific advice, I'd suggest adding more detail about your specific problem, including the size of your problem (how big n and m are), and if there is any structure to your matrix A. |
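For a concrete feel, here is a rough alternating-minimization sketch in Python/NumPy (the dimensions, step sizes, and iteration counts are arbitrary illustrations, and this is a heuristic for a non-convex problem, not a certified solver):

```python
# Sketch: fix A and improve each nonnegative column of X by projected
# gradient, then fix X and update A by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 20, 3, 15                      # A is n-by-m, B holds r vectors
B = np.abs(rng.normal(size=(n, r)))

A = rng.normal(size=(n, m))
X = np.abs(rng.normal(size=(m, r)))

def objective(A, X):
    return np.linalg.norm(A @ X - B) ** 2

start = objective(A, X)
for _ in range(50):
    # X-update: projected gradient steps for min_{X >= 0} ||AX - B||^2
    L = np.linalg.norm(A.T @ A, 2)       # spectral norm, sets a safe step
    for _ in range(20):
        X = np.maximum(0.0, X - (A.T @ (A @ X - B)) / L)
    # A-update: unconstrained least squares, min_A ||AX - B||^2
    A = np.linalg.lstsq(X.T, B.T, rcond=None)[0].T

assert objective(A, X) <= start          # each step is non-increasing
```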
We define the following problem as:
Let $M$ be a TM with alphabet $\Gamma$, with $\{a,b,$ #$\} \subset \Gamma$.
We define, for every natural number $n$ the graph $G_{M,n}$ by:
$V_{M,n} = \{a,b\}^n$, the vertices
$E_{M,n} = \{(x,y) : x,y \in V_{M,n}$ and $M$ accepts the string $x$#$y$ with a run that requires at most $n^{73}$ space$\}$
Now define the following language $L$:
$L = \{(<M>, x, y) : x,y \in \{a,b\}^*, |x| = |y|$ and $x,y$ are connected by a path in the above graph$\}$
I want to show $L \in PSPACE$, without using Savitch's theorem.
My attempt:
Define the following algorithm $M_0$
On input $(<M>, x, y)$ with $M$ with the above requirements:
1. Check if $M$ accepts the string $x$#$y$ in space $|x|^{73}$ or less; if so accept, else go to step 2. (Keep track of the set $Strings = \{x,y\}$ on a second tape.)
2. For every string $z \in \{a,b\}^{|x|} - Strings$, add $z$ to $Strings$ and run $M_0$ on
a. $(<M>, x, z)$
b. $(<M>, z, y)$
If both accept, accept. If no string in $\{a,b\}^{|x|}$ yields accept, reject.
Now I think it's rather obvious that $L(M_0) = L$ but I'm unsure it solves it in poly. space:
Step 1. requires computing $|x|^{73}$ in unary, which takes poly. space in $|x|$. From here it is known that testing whether a machine accepts a string in some specific space, given in unary, is possible in poly. space when that unary representation is given in the input, using a universal machine. So step 1. requires poly. space in $|x|$.
Each step of the recursion in 2. requires keeping track of the strings tried which is fine. But the depth of the recursion seems to be $2^{|x|}$ which is problematic.
Can you suggest if this solution may still work, or a hint for another? |
Note: There is a short summary at the bottom.
This is actually also described in Nielsen & Chuang: you don't learn about general measurements first, because they are completely equivalent to projective measurements + unitary time evolution + ancillary systems, all of which is described in your usual QM formalism.

The Measurement Postulate
Let's start from the beginning. Let us first formulate the usual postulate of quantum mechanics, as you know it:
Measurement Postulate (first course):
Measurements are described by projection valued measures defined by the spectral measure of an observable (self-adjoint operator). The post-measurement state is the (normalized) projection of the state onto the subspace corresponding to the measurement outcome.
Now in addition to this, we have a bunch of other postulates, in particular, we have the postulate that the quantum evolution is governed by the Schrödinger equation thus time evolution is a unitary evolution. That's all very nice, but when you go to your lab, you discover that that's not what happens.
As is pointed out in Nielsen & Chuang, it seems that sometimes, the quantum state is destroyed after measurements (the measurement is not a "non-demolition-measurement"), so the state after measurement does not seem to be well-described by a projection onto this eigenspace. But also, you'll actually find out that your evolution is not according to a Hamiltonian and it is not unitary. Energy might enter the system or leave it, depending on what you do.
Why is that? The key problem to realize is that all of the postulates in your first course refer to what we call a "closed system". None of them actually state this requirement, but they all need it. Only in a closed system is energy conserved (much like in classical mechanics), so we can expect time evolution to be unitary. Just as well, only in a closed system can we expect that measurements are always described by projective measurements.
Time Evolution of Open Quantum Systems
So, what about
open quantum systems, i.e. systems where in addition to our system $S$ with a Hilbert space $\mathcal{H}_S$, we have an uncontrolled environment $E$ (such as in the lab)? Let's consider time evolution as a training case, because it is much easier to understand from classical intuition - incidentally, we have the same problem in classical mechanics!
In an open system, as long as we know what the environment is doing, we can assign a Hilbert space $\mathcal{H}_E$, compute the Hamiltonian on the combined system $\mathcal{H}_S\otimes \mathcal{H}_E$, do time evolution and trace out the environment (the partial trace is the equivalent of forgetting the environment and only considering the system $S$). In other words, having prepared a state $\rho_S$ of the system and assuming it is not correlated with an environment state $\rho_E$ (this can be debated upon), the
time-evolved state of the system is given by

$$ T(\rho_S)= \operatorname{tr}_E(U(\rho_S\otimes \rho_E)U^*) $$
where $\operatorname{tr}_E$ is the partial trace. But this is very cumbersome. We don't always know what the environment is doing. So instead of saying that the open quantum system is part of a bigger, closed system which undergoes a unitary time evolution $U$, we can
directly specify the time-evolution by specifying $T$. Then, $T$ will not be a unitary time-evolution, but a completely positive map. In classical mechanics, you do the same: Instead of considering the Lagrangian/Hamiltonian of the whole system, which you might not know, you can also try to consider only a part of that system and describe it by a master equation (this is routinely done in statistical mechanics). The same can be done in quantum mechanics, i.e. by the quantum master equation.
So what I want to argue is the following:
Using unitary time evolution or completely positive maps is ultimately the same (mathematically). In the lab, you will always have noise from the environment so your system will never be closed. Unitary time evolutions are clumsy, because they need you to specify the environment completely, which might be hard or nearly impossible to do, so it is much nicer to only work with the open system. The definition of a completely positive map lets you do that. Therefore, it is a "better" postulate in a physical sense, because it eliminates key problems when applying the model to your lab. Measurements in Open Quantum Systems
Essentially, we now have to do the exact same thing for measurements that we did for unitary time evolution. What do measurements look like if you restrict them to a subsystem?
[A small aside: Let's throw in another complication: Measurements are not really instantaneous, some of them take time. For example, suppose you have an atom with three states with different energies, one very much excited $E_3$ and two less excited states (one may be the ground state, let's call them $E_1$ and $E_2$). So you know that your system will be in either of the last states. Measuring which one of these, you can shine a laser with one of the two transition energies to the excited state, say the laser energy is $E_3-E_1$. If you get induced emission, your system was in state $E_1$, if you don't, it has to be in $E_2$. This of course takes time, so the system will evolve (and it won't be a free evolution, because the laser is doing something), so a simple measurement is not just a projective measurement, but we can hardly ever fully separate it from some time evolution. Often, this is no problem, sometimes it might be.]
What happens if we do this? What does the measurement look like on subsystems? Well, it turns out that just as completely positive maps are the restrictions of unitary time evolutions, POVMs are the restrictions of projective measurements.
You can also see this from Naimark's dilation theorem: This theorem basically tells us that every POVM ultimately is a projective measurement if we factor in some environment. So in this sense, the POVM approach and the usual projective measurements are mathematically equivalent, if one always factors in the environment + maybe some additional unitary evolution. However, we have the same as above:
The formalism of POVMs is better suited to work with, because it does not require us to actually know or even think of the environment. We can get our measurement operators from the experiment and don't have to worry about whether they are projections or not (in the latter case, the system is surely not closed).
So the POVM formalism doesn't give us anything new formally and mathematically, but it is a better way to think about actual quantum systems, which are usually not closed systems.
General Measurements and a new Postulate
Now we have POVMs. We could replace our postulate by the POVM postulate, which would cover the outcomes of experiments very well. So why don't we do it? Why don't Nielsen & Chuang do it?
Because we have actually lost something: The POVM was really only introduced to compute outcome probabilities, but if we start out with a POVM, it's not clear how we obtain a post-measurement state. Very often, we don't care, but sometimes we do, so we should think about this again (for example, when we consider "the optimal way to distinguish a set of quantum states", we at the moment don't care about the post-measurement state, so POVMs are all we need).
This "problem" of the post-measurement state can be addressed in several ways. One way is to take a POVM with effect operators $E_i$, specify a square root $M_i^*M_i=E_i$, and define a general measurement. (Together with the fact that for every general measurement $\{M_m\}_m$, $E_m:=M_m^*M_m$ defines a POVM, this tells you that the formalisms of POVMs and general measurements are mathematically equivalent.) Now, square roots are not unique, so in order to talk about the post-measurement state, you'll have to refer to experiments (or specify the environment and define the measurement there, which will provide you with a unique projective measurement on the closed system).
[If you want yet another way to think about this, you can pick yet another formalism, quantum instruments, which essentially does the same thing.]
So in the end, we replace our old (closed system) postulate by the general (open system) postulate:
Measurement Postulate (Nielsen&Chuang):
Measurements are described by a collection of measurement operators $\{M_m\}_m$ that are not necessarily projections but fulfill $\sum_m M^*_mM_m=\mathbf{1}$. The post-measurement state upon measurement of $m$ is the state after application of $M_m$.
From what I have argued above, it should not come as a surprise that the two postulates are mathematically equivalent. More precisely, if we augment POVMs/general measurements by unitary time evolution and the introduction of environment systems, any such measurement should really come from a projective measurement. This was my original post:
Sketch of Proof of the Equivalence of the two postulates
This is described on page 94 to 95 in Nielsen & Chuang:
Let $\{M_m\}_m$ be a "general measurement" with $m=1,\ldots,n$ on a Hilbert space $\mathcal{H}$. Define an operator $U$ on the subspace $\mathcal{H}\otimes|0\rangle$ of the composite system $\mathcal{H}\otimes\mathbb{C}^n$ by:
$$ U|\psi\rangle|0\rangle= \sum_{m=1}^n (M_m|\psi\rangle)|m\rangle $$
where $|m\rangle$ is the standard orthonormal basis of $\mathbb{C}^n$. Then you can show that $U$ can be extended to a unitary operation $U\in \mathcal{B}(\mathcal{H}\otimes \mathbb{C}^n)$.
Now you define the projective measurement $P$ with projections $$P_m:=\mathbf{1}_{\mathcal{H}}\otimes |m\rangle\langle m|$$
and what you can show is that first performing $U$ and then measuring the projective measurement $P$ and tracing out the system $\mathbb{C}^n$ ("forgetting" about the system) is equivalent to performing the generalized measurement $M_m$. In particular:
$$ \frac{P_m U|\psi\rangle |0\rangle}{\sqrt{\langle \psi|\langle 0|U^*P_mU|\psi\rangle|0\rangle}}= \frac{(M_m|\psi\rangle)|m\rangle}{\sqrt{\langle \psi|M_m^*M_m|\psi\rangle}} $$
and the probabilities also add up. So general measurements add nothing new.
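To make the dilation concrete, here is a minimal numerical sketch (my own toy example, not from Nielsen & Chuang): a two-outcome general measurement on a qubit, the isometry $V|\psi\rangle|0\rangle = \sum_m (M_m|\psi\rangle)|m\rangle$, and a check that the projective measurement on the ancilla reproduces the outcome probabilities.

```python
import numpy as np

# A two-outcome general measurement on a qubit: M_0, M_1 with
# M_0†M_0 + M_1†M_1 = I (an arbitrary diagonal example).
M = [np.diag([np.sqrt(0.8), np.sqrt(0.4)]),
     np.diag([np.sqrt(0.2), np.sqrt(0.6)])]
assert np.allclose(sum(m.conj().T @ m for m in M), np.eye(2))

# Build V|psi> = sum_m (M_m|psi>)|m> as a 4x2 matrix on H ⊗ C^2.
basis = np.eye(2)
V = sum(np.kron(M[m], basis[:, [m]]) for m in range(2))
assert np.allclose(V.conj().T @ V, np.eye(2))  # V is an isometry

# Projective measurement on the ancilla: P_m = I ⊗ |m><m|.
psi = np.array([0.6, 0.8])  # an arbitrary normalized state
for m in range(2):
    P = np.kron(np.eye(2), np.outer(basis[:, m], basis[:, m]))
    p_dilated = np.linalg.norm(P @ V @ psi) ** 2   # prob. from dilation
    p_general = np.linalg.norm(M[m] @ psi) ** 2    # prob. <psi|M_m†M_m|psi>
    assert np.isclose(p_dilated, p_general)
```

The key algebraic fact is that cross terms vanish: $V^*V=\sum_{m,k}M_m^*M_k\,\langle m|k\rangle=\sum_m M_m^*M_m=\mathbf{1}$, so $V$ extends to a unitary on the whole composite space.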
About Closed (Quantum) Systems:
We have of course constructed the environment. Who tells us that this is the "real" physical environment, or that the measurement in the real closed system is actually also projective? No one, actually. This is another assumption that I've been making implicitly. However, I believe this argument has another, deeper problem: coming from the experimental/operational side, what actually is a closed quantum system? Unless (maybe) we consider the whole universe, we can never actually work with a completely closed system - and we can't consider the whole universe. I believe that there are actually arguments (higher level/quantum foundations) that tell us that the postulates are completely equivalent if there exists a closed quantum system, but this is philosophical.
But this means that we did add something "new": We got rid of the necessity of closed systems (if we also replace all the other axioms).
Lessons learned: (tl;dr)
So, what's the essence? I have argued that generalized measurements are nothing new, neither physically, nor mathematically, if we know about the difference of open and closed quantum systems. Therefore, they don't add anything that you didn't get from the old formalism already, so that your Quantum Mechanics 101 course is not wrong (barring problems with the definition of "closed quantum systems").
However, POVMs (or maybe general measurements) are the "right" way to think about measurements. The paradigm of open quantum systems, which is very important for real-world experiments, is inherently inscribed into POVMs, and they also tell us why measurements sometimes seem not to be repeatable in the lab. So POVMs are not some theoretical construct floating in philosophy space (closed quantum systems), but more operational descriptions of measurements. In addition, they are easier to work with when describing real-world situations.
As a final note: general measurements are not considered heavily in the literature. Peter Shor was so kind as to point out an (old) example of their use in this Peres and Wootters paper (paywall!). Usually, however, I find that people work with POVMs instead of general measurements.
A system can be in any one of N states. Using the method of undetermined multipliers to show that for the maximum entropy, $S = -k \sum_i p_i \ln p_i$ where $p_i = 1/N$, $$S = k \ln N\,,$$
the solution is as follows:
Using an undetermined multiplier $\alpha$, write $$f = -k \sum_i(p_i \ln p_i - \alpha p_i)$$ $$\frac{\partial f}{\partial p_j} = -k (\ln p_j + 1 -\alpha ) = 0$$
$\ln p_j = \alpha -1$ for all $j$, therefore all $p_i$ are equal, i.e. $p_i = 1/N$. The maximum entropy is $$S_{max} = -k \sum_i\left(\frac{1}{N}\ln \frac{1}{N}\right) = k \ln N$$
I don't understand the last part on how to go from the sum to the final expression, given $\sum_i p_i = 1\;.$ |
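(For context, the step in question uses that the sum runs over $N$ identical terms, so $-k\sum_{i=1}^N \frac1N \ln\frac1N = -k\,N\cdot\frac1N\ln\frac1N = k\ln N$. A quick numerical sanity check of this identity, with $k=1$:)

```python
import math

N = 1000
p = [1.0 / N] * N  # uniform distribution over N states

# Entropy with k = 1: the sum has N identical terms (1/N) * ln N.
S = -sum(pi * math.log(pi) for pi in p)

assert math.isclose(S, math.log(N))
```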
4.6. Dropout¶
Just now, we introduced the classical approach of regularizing statistical models by penalizing the \(\ell_2\) norm of the weights. In probabilistic terms, we could justify this technique by arguing that we have assumed a prior belief that weights take values from a Gaussian distribution with mean \(0\). More intuitively, we might argue that we encouraged the model to spread out its weights among many features rather than depending too much on a small number of potentially spurious associations.
4.6.1. Overfitting Revisited¶
Given many more features than examples, linear models can overfit. But when there are many more examples than features, we can generally count on linear models not to overfit. Unfortunately, the reliability with which linear models generalize comes at a cost: Linear models can’t take into account interactions among features. For every feature, a linear model must assign either a positive or a negative weight. They lack the flexibility to account for context.
In more formal texts, you’ll see this fundamental tension between generalizability and flexibility discussed as the bias-variance tradeoff. Linear models have high bias (they can only represent a small class of functions), but low variance (they give similar results across different random samples of the data).
Deep neural networks take us to the opposite end of the bias-variance spectrum. Neural networks are so flexible because they aren’t confined to looking at each feature individually. Instead, they can learn interactions among groups of features. For example, they might infer that “Nigeria” and “Western Union” appearing together in an email indicates spam but that “Nigeria” without “Western Union” does not.
Even when we only have a small number of features, deep neural networks are capable of overfitting. In 2017, a group of researchers presented a now well-known demonstration of the incredible flexibility of neural networks. They presented a neural network with randomly-labeled images (there was no true pattern linking the inputs to the outputs) and found that the neural network, optimized by SGD, could label every image in the training set perfectly.
Consider what this means. If the labels are assigned uniformly at random and there are 10 classes, then no classifier can get better than 10% accuracy on holdout data. Yet even in these situations, when there is no true pattern to be learned, neural networks can perfectly fit the training labels.
4.6.2. Robustness through Perturbations¶
Let’s think briefly about what we expect from a good statistical model. We want it to do well on unseen test data. One way we can accomplish this is by asking what constitutes a ‘simple’ model. Simplicity can come in the form of a small number of dimensions, which is what we did when discussing fitting a model with monomial basis functions. Simplicity can also come in the form of a small norm for the basis functions. This led us to weight decay (\(\ell_2\) regularization). Yet a third notion of simplicity that we can impose is that the function should be robust under small changes in the input. For instance, when we classify images, we would expect that adding some random noise to the pixels should be mostly harmless.
In 1995, Christopher Bishop formalized a form of this idea when he proved that training with input noise is equivalent to Tikhonov regularization [Bishop.1995]. In other words, he drew a clear mathematical connection between the requirement that a function be smooth (and thus simple), as we discussed in the section on weight decay, and the requirement that it be resilient to perturbations in the input.
Then in 2014, Srivastava et al. [Srivastava.Hinton.Krizhevsky.ea.2014] developed a clever idea for how to apply Bishop’s idea to the internal layers of the network, too. Namely, they proposed to inject noise into each layer of the network before calculating the subsequent layer during training. They realized that when training deep networks with many layers, enforcing smoothness just on the input-output mapping misses out on what is happening internally in the network. Their proposed idea is called dropout, and it is now a standard technique that is widely used for training neural networks. Throughout training, on each iteration, dropout regularization consists simply of zeroing out some fraction (typically 50%) of the nodes in each layer before calculating the subsequent layer.
The key challenge then is how to inject this noise without introducing undue statistical bias. In other words, we want to perturb the inputs to each layer during training in such a way that the expected value of the layer is equal to the value it would have taken had we not introduced any noise at all.
In Bishop’s case, when we are adding Gaussian noise to a linear model, this is simple: At each training iteration, just add noise sampled from a distribution with mean zero \(\epsilon \sim \mathcal{N}(0,\sigma^2)\) to the input \(\mathbf{x}\) , yielding a perturbed point \(\mathbf{x}' = \mathbf{x} + \epsilon\). In expectation, \(\mathbf{E}[\mathbf{x}'] = \mathbf{x}\).
In the case of dropout regularization, one can debias each layer by normalizing by the fraction of nodes that were not dropped out. In other words, dropout with drop probability \(p\) is applied as follows:

\[h' = \begin{cases} 0 & \text{with probability } p \\ \dfrac{h}{1-p} & \text{otherwise} \end{cases}\]
By design, the expectation remains unchanged, i.e., \(\mathbf{E}[h'] = h\). Intermediate activations \(h\) are replaced by a random variable \(h'\) with matching expectation. The name ‘dropout’ arises from the notion that some neurons ‘drop out’ of the computation for the purpose of computing the final result. During training, we replace intermediate activations with random variables.
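This unbiasedness is easy to verify numerically. The short sketch below (plain NumPy, separate from the MXNet implementation later in this section) averages inverted-dropout outputs over many random masks and compares the result to the original activation:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 2.0, 3.0, 4.0])  # a toy activation vector
p = 0.5                             # drop probability

# Draw many independent inverted-dropout masks at once and average.
trials = 200_000
mask = rng.uniform(size=(trials,) + h.shape) > p
mean = (mask * h / (1.0 - p)).mean(axis=0)

# E[h'] = h up to Monte Carlo noise.
assert np.allclose(mean, h, atol=0.05)
```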
4.6.3. Dropout in Practice¶
Recall the multilayer perceptron (Section 4.1) with a hidden layer and 5 hidden units. Its architecture is given by

\[\mathbf{h} = \sigma(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1), \quad \mathbf{o} = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2, \quad \hat{\mathbf{y}} = \mathrm{softmax}(\mathbf{o})\]
When we apply dropout to the hidden layer, we are essentially removing each hidden unit with probability \(p\) (i.e., setting its output to \(0\)). We can view the result as a network containing only a subset of the original neurons. In the image below, \(h_2\) and \(h_5\) are removed. Consequently, the calculation of \(y\) no longer depends on \(h_2\) and \(h_5\), and their respective gradients also vanish when performing backprop. In this way, the calculation of the output layer cannot be overly dependent on any one element of \(h_1, \ldots, h_5\). Intuitively, deep learning researchers often explain it thus: we do not want the network’s output to depend too precariously on the exact activation pathway through the network. The original authors of the dropout technique described their intuition as an effort to prevent the co-adaptation of feature detectors.
At test time, we typically do not use dropout. However, we note that there are some exceptions: some researchers use dropout at test time as a heuristic approach for estimating the confidence of neural network predictions: if the predictions agree across many different dropout masks, then we might say that the network is more confident. For now we will put off the advanced topic of uncertainty estimation for subsequent chapters and volumes.

4.6.4. Implementation from Scratch¶
To implement the dropout function for a single layer, we must draw as many samples from a Bernoulli (binary) random variable as our layer has dimensions, where the random variable takes value \(1\) (keep) with probability \(1-p\) and \(0\) (drop) with probability \(p\). One easy way to implement this is to first draw samples from the uniform distribution \(U[0,1]\). Then we can keep those nodes for which the corresponding sample is greater than \(p\), dropping the rest.
In the following code, we implement a dropout function that drops out the elements in the ndarray input X with probability drop_prob, rescaling the remainder as described above (dividing the survivors by 1.0-drop_prob).
```python
import d2l
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn

npx.set_np()

def dropout(X, drop_prob):
    assert 0 <= drop_prob <= 1
    # In this case, all elements are dropped out
    if drop_prob == 1:
        return np.zeros_like(X)
    mask = np.random.uniform(0, 1, X.shape) > drop_prob
    return mask * X / (1.0 - drop_prob)
```
We can test out the dropout function on a few examples. In the following lines of code, we pass our input X through the dropout operation, with probabilities 0, 0.5, and 1, respectively.
```python
X = np.arange(16).reshape(2, 8)
print(dropout(X, 0))
print(dropout(X, 0.5))
print(dropout(X, 1))
```

```
[[ 0.  1.  2.  3.  4.  5.  6.  7.]
 [ 8.  9. 10. 11. 12. 13. 14. 15.]]
[[ 0.  0.  0.  0.  8. 10. 12.  0.]
 [16.  0. 20. 22.  0.  0.  0. 30.]]
[[0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0.]]
```
4.6.4.1. Defining Model Parameters¶
Again, we can use the Fashion-MNIST dataset, introduced in Section 3.6. We will define a multilayer perceptron with two hidden layers. The two hidden layers both have 256 outputs.
```python
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256

W1 = np.random.normal(scale=0.01, size=(num_inputs, num_hiddens1))
b1 = np.zeros(num_hiddens1)
W2 = np.random.normal(scale=0.01, size=(num_hiddens1, num_hiddens2))
b2 = np.zeros(num_hiddens2)
W3 = np.random.normal(scale=0.01, size=(num_hiddens2, num_outputs))
b3 = np.zeros(num_outputs)

params = [W1, b1, W2, b2, W3, b3]
for param in params:
    param.attach_grad()
```
4.6.4.2. Define the Model¶
The model defined below concatenates the fully-connected layers and the ReLU activation function, applying dropout to the output of each activation function. We can set the dropout probability of each layer separately. It is generally recommended to set a lower dropout probability closer to the input layer. Below we set it to 0.2 and 0.5 for the first and second hidden layers, respectively. By using the is_training function described in Section 2.5, we can ensure that dropout is only active during training.
```python
drop_prob1, drop_prob2 = 0.2, 0.5

def net(X):
    X = X.reshape(-1, num_inputs)
    H1 = npx.relu(np.dot(X, W1) + b1)
    # Use dropout only when training the model
    if autograd.is_training():
        # Add a dropout layer after the first fully connected layer
        H1 = dropout(H1, drop_prob1)
    H2 = npx.relu(np.dot(H1, W2) + b2)
    if autograd.is_training():
        # Add a dropout layer after the second fully connected layer
        H2 = dropout(H2, drop_prob2)
    return np.dot(H2, W3) + b3
```
4.6.4.3. Training and Testing¶
This is similar to the training and testing of multilayer perceptrons described previously.
```python
num_epochs, lr, batch_size = 10, 0.5, 256
loss = gluon.loss.SoftmaxCrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs,
              lambda batch_size: d2l.sgd(params, lr, batch_size))
```
4.6.5. Concise Implementation¶
Using Gluon, all we need to do is add a Dropout layer (also in the nn package) after each fully-connected layer, passing in the dropout probability as the only argument to its constructor. During training, the Dropout layer will randomly drop out outputs of the previous layer (or equivalently, the inputs to the subsequent layer) according to the specified dropout probability. When MXNet is not in training mode, the Dropout layer simply passes the data through during testing.
```python
net = nn.Sequential()
net.add(nn.Dense(256, activation="relu"),
        # Add a dropout layer after the first fully connected layer
        nn.Dropout(drop_prob1),
        nn.Dense(256, activation="relu"),
        # Add a dropout layer after the second fully connected layer
        nn.Dropout(drop_prob2),
        nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
```
Next, we train and test the model.
```python
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
4.6.6. Summary¶

- Beyond controlling the number of dimensions and the size of the weight vector, dropout is yet another tool to avoid overfitting. Often all three are used jointly.
- Dropout replaces an activation \(h\) with a random variable \(h'\) with expected value \(h\) and with variance given by the dropout probability \(p\).
- Dropout is only used during training.

4.6.7. Exercises¶

1. Try out what happens if you change the dropout probabilities for layers 1 and 2. In particular, what happens if you switch the ones for both layers?
2. Increase the number of epochs and compare the results obtained when using dropout with those when not using it.
3. Compute the variance of the activation random variables after applying dropout. Why should you typically not use dropout at test time?
4. If changes are made to the model to make it more complex, such as adding hidden layer units, will the effect of using dropout to cope with overfitting be more obvious?
5. Using the model in this section as an example, compare the effects of using dropout and weight decay. What if dropout and weight decay are used at the same time?
6. What happens if we apply dropout to the individual weights of the weight matrix rather than the activations?
7. Replace the dropout activation with a random variable that takes on values of \([0, \gamma/2, \gamma]\). Can you design something that works better than the binary dropout function? Why might you want to use it? Why not?
argmax(w) sum(sign(Aw) == sign(b))
This is a strange objective. Basically: find \(w\), with \(v=Aw\) such that we maximize the number of \(v_i\) having the same sign as \(b_i\). I have never seen such an objective.
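To make the objective concrete, here is a small NumPy sketch (with made-up data); it simply evaluates the count of sign matches for a candidate \(w\), before any MIP reformulation:

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 25, 3
A = rng.normal(size=(m, n))

# Construct b from a hidden w_true so that a perfect w is known to exist.
w_true = rng.normal(size=n)
b = A @ w_true

def matches(w):
    """Number of rows i where sign((A w)_i) agrees with sign(b_i)."""
    return int(np.sum(np.sign(A @ w) == np.sign(b)))

assert matches(w_true) == m          # w_true matches every row
print(matches(rng.normal(size=n)))   # a random w typically matches fewer
```

The MIP below maximizes exactly this count over \(w\); the sketch is only the objective evaluator, not the solver.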
As \(A\) and \(b\) are constants, we can precompute \[ \beta_i = \mathrm{sign}(b_i)\] This simplifies the situation a little bit (but I will not need it below).
A different way to say "\(v_i\) and \(b_i\) have the same sign" is to state: \[ v_i b_i > 0 \] I assumed here \(b_i \ne 0\). Similarly, the constraint \( v_i b_i < 0\) means: "\(v_i\) and \(b_i\) have the opposite sign."
If we introduce binary variables: \[\delta_i = \begin{cases} 1 & \text{if $ v_i b_i > 0$}\\ 0 & \text{otherwise}\end{cases}\] a model can look like: \[\begin{align} \max & \sum_i \delta_i \\ &\delta_i =1 \Rightarrow \sum_j a_{i,j} b_i w_j > 0 \\ & \delta_i \in \{0,1\}\end{align}\] The implication can be implemented using
indicator constraints, so we have now a linear MIP model.
Notes:
- I replaced the \(\gt\) constraint by \(\sum_j a_{i,j} b_i w_j \ge 0.001\).
- If the \(b_i\) are very small or very large, we can replace them by \(\beta_i\), i.e. \(\sum_j a_{i,j} \beta_i w_j \gt 0\).
- The case where some \(b_i=0\) is somewhat ignored here. In this model, we assume \(\delta_i=0\) for this special case. We can add explicit support for \(b_i=0\) by: \[\begin{align} \max & \sum_i \delta_i \\ &\delta_i =1 \Rightarrow \sum_j a_{i,j} b_i w_j > 0 && \forall i | b_i\ne 0 \\ & \delta_i =1 \Rightarrow \sum_j a_{i,j} w_j = 0 && \forall i | b_i = 0 \\ & \delta_i \in \{0,1\}\end{align}\]
- We could model this with binary variables or SOS1 variables. Binary variables require big-M values, and it is not always easy to find good values for them. The advantage of indicator constraints is that they allow an intuitive formulation of the problem while not using big-M values. Many high-end solvers (Cplex, Gurobi, Xpress, SCIP) support indicator constraints. Modeling systems like AMPL also support them.

Test with small data set
Let's do a test with a small random data set.
This means, for this 25 row problem we can find \(w\)'s such that 20 rows yield the same sign as \(b_i\).
References

SCIP: What is the function for sign?, https://stackoverflow.com/questions/53030430/scip-what-is-the-function-for-sign
Off the top of my head...
Definitely not this salt. There will be little free ammonia from the solvation of the $\ce{NH4^+}$ cation. The $\ce{Cl^-}$ is a spectator anion.
The $\ce{Na^+}$ is a spectator cation. The $\ce{CO3^{2-}}$ will be mostly $\ce{CO3^{2-}}$ with a tiny bit of $\ce{HCO3^-}$. So the ratio of $\frac{\ce{HCO3^-}}{\ce{CO3^{2-}}}$ (and hence the pH) will change rapidly with the addition of $\ce{H+}$.
Some of the ammonium will protonate the acetate anion. So you end up with a mixture of $\ce{NH4^+}$, $\ce{NH3}$, $\ce{OAc^{-}}$ and $\ce{HOAc}$. I think this solution would have the most buffer capacity for acid.
It seems you need to solve all of these for $\dfrac{d\text{pH}}{d\ce{H+}}$ -- well the last two anyway...
SOLUTION
My calculus-foo is lost on this, so let's just brute force an answer. Let's assume we have 1.00 liters of 0.1 molar solutions of each of the salts. We'll calculate the pH for the salt, add $1.00\times10^{-3}$ moles of $\ce{H+}$ and calculate the new pH. We can then calculate:
$\dfrac{d\text{pH}}{d\ce{H+}} = \dfrac{\text{pH}_2 - \text{pH}_1}{1.00\times10^{-3}}$
$\ce{NH4+ <=> H+ + NH3 }$
$\text{K}_\text{a} = 5.75\times10^{-10} = \dfrac{\ce{[H+][NH3]}}{\ce{[NH4+]}}$
Salt alone
assume $\ce{[H+]=[NH3]}$ and $\ce{[NH4+]=0.1}$ molar then simplify
$\ce{[H+]} = \sqrt{\text{K}_\text{a}\times\ce{[NH4+]}} = 7.56\times10^{-6}$
pH = 5.121
we can now solve for
$\dfrac{\ce{[NH3]}}{\ce{[NH4+]}} = \dfrac{\text{K}_\text{a}}{\ce{[H+]}} = \dfrac{5.75\times10^{-10}}{7.56\times10^{-6}} = 7.61\times 10^{-5} $
$\ce{[NH3] = 7.61\times10^{-6}}$
Salt plus 0.001 moles acid
So there isn't any significant amount of $\ce{NH3}$ to be protonated, and the added acid can simply be added to the initial concentration of $\ce{H+}$ from the salt.
final $\ce{[H+]} = 7.56\times10^{-6} + 1.00\times10^{-3} = 1.008\times10^{-3}$
pH = 2.997
$\dfrac{d\text{pH}}{d\ce{H+}} = \dfrac{\text{pH}_2 - \text{pH}_1}{1.00\times10^{-3}} = \dfrac{2.997 - 5.121 }{1.00\times10^{-3}} = -2124$
$\ce{CO3^{2-} + H2O <=> OH^- + HCO3^{-}}$
$\text{K}_\text{b} = 2.14\times10^{-4} = \dfrac{\ce{[OH^-][HCO3^-]}}{\ce{[CO3^{2-}]}}$
Salt alone
assume $x = \ce{[HCO3^-] = [OH^-]}$ and $\ce{[CO3^{2-}]} = 0.100 -x$ molar then simplify
$0 = x^2 + 2.14\times10^{-4}x - 2.14\times10^{-5}$
$x = 4.52\times10^{-3}$
$\ce{[OH-]} = 4.52\times10^{-3}$
$\ce{[H+]} = \dfrac{K_\rm{w}}{\ce{[OH-]}} =2.21\times10^{-12}$
pH = 11.655
Salt plus 0.001 moles acid
Now if we add 0.001 moles of $\ce{H+}$.
We'll essentially neutralize some $\ce{OH-}$ and make some $\ce{HCO3-}$, but we know that
$\ce{[HCO3-] - [OH-]} = 0.001$ or $\ce{[HCO3-]} = 0.001 + \ce{[OH-]}$
Let $x = \ce{[OH-]}$
$2.14\times10^{-4} = \dfrac{\ce{[OH^-][HCO3^-]}}{\ce{[CO3^{2-}]}}$
$2.14\times10^{-4} = \dfrac{(x)(0.001+x)}{0.1 - (0.001+x)} = \dfrac{x^2 + 0.001x}{0.099-x}$
$0 = x^2 + 1.214\times10^{-3}x - 2.1186\times10^{-5}$
$x = 0.004036$
$\ce{[OH^-]} = 0.004036$
$\ce{[H+]} = \dfrac{1.00\times10^{-14}}{\ce{[OH-]}} = 2.48\times10^{-12}$
pH = 11.606
$\dfrac{d\text{pH}}{d\ce{H+}} = \dfrac{\text{pH}_2 - \text{pH}_1}{1.00\times10^{-3}} = \dfrac{11.606 - 11.655}{1.00\times10^{-3}} = -49$
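For anyone who wants to check the arithmetic, here is a short script (my own addition) that reproduces the four pH values above from the stated constants, solving the carbonate quadratics with the standard formula:

```python
import math

Ka, Kb, Kw, C, acid = 5.75e-10, 2.14e-4, 1.0e-14, 0.100, 1.00e-3

def pH_from_H(H):
    return -math.log10(H)

# Ammonium chloride: [H+] = sqrt(Ka * C), then add the strong acid.
H1 = math.sqrt(Ka * C)
pH1, pH2 = pH_from_H(H1), pH_from_H(H1 + acid)
assert abs(pH1 - 5.121) < 0.01 and abs(pH2 - 2.997) < 0.01

# Sodium carbonate: solve x^2 + Kb*x - Kb*C = 0 for x = [OH-].
x = (-Kb + math.sqrt(Kb**2 + 4 * Kb * C)) / 2
pH3 = pH_from_H(Kw / x)
# After the acid: x^2 + (Kb + acid)*x - Kb*(C - acid) = 0.
x2 = (-(Kb + acid) + math.sqrt((Kb + acid)**2 + 4 * Kb * (C - acid))) / 2
pH4 = pH_from_H(Kw / x2)
assert abs(pH3 - 11.655) < 0.01 and abs(pH4 - 11.606) < 0.01

print((pH2 - pH1) / acid, (pH4 - pH3) / acid)  # ammonium vs carbonate slopes
```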
Salt alone
Salt plus 0.001 moles acid |
Note to the following well known theorem:
Theorem (1): If $\kappa$ is a "measurable" cardinal and $\mathcal{F}$ is a "non-principal $\kappa$-complete normal" ultrafilter on it, then: $\langle V_{\kappa +1},\in \rangle \cong \prod_{\mathcal{F}}\lbrace \langle V_{\alpha +1}, \in \rangle~|~\alpha\in \kappa \rbrace$ Proof: Chang and Keisler, Model Theory, page 241.
In other words, the above theorem says that the "truth" of a particular sentence at the $(\kappa +1)$-th level of von Neumann's cumulative hierarchy is "dependent" on the "truth" of that sentence at lower levels. So if a sentence is "almost everywhere" true below the $(\kappa +1)$-th level, it cannot be false at it. In fact, the lower stages give us a "truth approximation" for the $(\kappa +1)$-th stage. Now consider the following definition:
Definition (1): We say that a cumulative hierarchy $W$ has the truth approximation property at the $\delta$-th level ($tap(W,\delta)$) iff there exists an ultrafilter $\mathcal{F}$ on $\delta$ such that: $\langle W_{\delta +1},\in \rangle \cong \prod_{\mathcal{F}}\lbrace \langle W_{\alpha +1}, \in \rangle~|~\alpha\in \delta \rbrace$

Corollary (1): $ZFC\Longrightarrow \forall~measurable~\kappa~~~tap(V,\kappa)$

Corollary (2): $ZFC + \exists~a~measurable~cardinal \Longrightarrow \exists \delta>\omega~~~tap(V,\delta)$
Now there are some natural questions:
Question (1): Is the use of a non-trivial $\kappa$-additive normal measure in the proof of theorem (1) essential? In other words, can one find a weaker large cardinal axiom than the existence of a measurable cardinal, say $A$, such that: (a) $ZFC+A\Longrightarrow \exists \delta>\omega~~~tap(V,\delta)$
Moreover can $ZFC$ alone prove that there is an ordinal $\delta$ such that "$V$ has a truth approximation property at level $\delta$"? Precisely is the following statement true?
(b) $Con(ZFC)\Longrightarrow Con(ZFC+\forall \delta>\omega~~~\neg tap(V,\delta))$
Question (2): Is the converse of corollary (2) true? In other words, does a truth approximation property of von Neumann's cumulative hierarchy at a certain stage imply the existence of a measurable or weaker large cardinal? Precisely, which of these statements is true?
(a) $ZFC+\exists \delta>\omega~~tap(V,\delta) \Longrightarrow \exists~a~strongly~inaccessible~cardinal$

(b) $ZFC+\exists \delta>\omega~~~tap(V,\delta) \Longrightarrow \exists~a~measurable~cardinal$

Question (3): Are there any known "truth approximation properties" for other famous cumulative hierarchies like $L$ and $J$? Precisely, are there large cardinal axioms $A$ and $B$ such that the following statements are true:
(a) $ZFC+A\Longrightarrow \exists\delta>\omega~~~tap(L,\delta)$
(b) $ZFC+B\Longrightarrow \exists\delta>\omega~~~tap(J,\delta)$
More simply, are there large cardinals $\kappa$ and $\lambda$ and ultrafilters $\mathcal{F}$ and $\mathcal{G}$ on them such that the following statements are true?
(c) $\langle L_{\kappa +1},\in \rangle \cong \prod_{\mathcal{F}}\lbrace \langle L_{\alpha +1}, \in \rangle~|~\alpha\in \kappa \rbrace$ (d) $\langle J_{\lambda +1},\in \rangle \cong \prod_{\mathcal{G}}\lbrace \langle J_{\alpha +1}, \in \rangle~|~\alpha\in \lambda \rbrace$ |
Please tell me where I've gone wrong (if I did in fact make a mistake). I'm pricing a long forward on a stock. The usual setup applies:
This has payoff $S(T) - K$ at time $T$. We are at $t$ now. $S(T) = S(t)e^{(r-\frac12 \sigma^2)(T-t)+\sigma(W(T)-W(t))}$. $W(t)$ is a Wiener process. $K \in \mathbb{R}_+$. $Q$ is the risk-neutral measure. $\beta(t) = e^{rt}$ is the domestic savings account, a tradable asset. $r$ is the constant riskless rate. My Attempt:
$f(t,S) = E^Q[\frac{\beta(t)}{\beta(T)}(S(T)-K)|\mathscr{F}_t]$
$ = E^Q [\frac{\beta(t)}{\beta(T)}S(T)|\mathscr{F}_t] - E^Q [\frac{\beta(t)}{\beta(T)}K|\mathscr{F}_t]$
$ = E^{P_S}[\frac{\beta(t)}{\beta(T)}S(T) \frac{\beta(T)S(t)}{\beta(t)S(T)}|\mathscr{F}_t] - \frac{\beta(t)}{\beta(T)}K$
$ = S(t) - K\frac{\beta(t)}{\beta(T)}$
$ = S(t) - Ke^{-r(T-t)}$
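The result can be sanity-checked by a risk-neutral Monte Carlo simulation (my own addition, with made-up parameters), which should reproduce $S(t) - Ke^{-r(T-t)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 1.0  # made-up parameters

# Simulate S(T) under Q and discount the forward payoff S(T) - K.
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
mc_price = np.exp(-r * tau) * (ST - K).mean()

analytic = S0 - K * np.exp(-r * tau)
assert abs(mc_price - analytic) < 0.1  # agree up to Monte Carlo noise
```

Because the payoff is linear in $S(T)$, no measure change is even needed for the check: $E^Q[e^{-r\tau}S(T)] = S(t)$ exactly.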
This isn't a graded homework assignment. (It is ungraded homework.)
Astrophysics > Cosmology and Nongalactic Astrophysics

Title: Observational constraints to a unified cosmological model
(Submitted on 29 Nov 2014 (v1), last revised 27 Aug 2015 (this version, v2))
Abstract: We propose a phenomenological unified model for dark matter and dark energy based on an equation of state parameter $w$ that scales with the $\arctan$ of the redshift. The free parameters of the model are three constants: $\Omega_{b0}$, $\alpha$ and $\beta$. Parameter $\alpha$ dictates the transition rate between the matter dominated era and the accelerated expansion period. The ratio $\beta / \alpha$ gives the redshift of the equivalence between both regimes. Cosmological parameters are fixed by observational data from Primordial Nucleosynthesis (PN), Supernovae of the type Ia (SNIa), Gamma-Ray Bursts (GRB) and Baryon Acoustic Oscillations (BAO). The calibration of the 138 GRBs events is performed using the 580 SNIa of the Union2.1 data set and a new set of 79 high-redshift GRBs is obtained. The various sets of data are used in different combinations to constrain the parameters through statistical analysis. The unified model is compared to the $\Lambda$CDM model and their differences are emphasized.

Submission history: From R. R. Cuzinatto. [v1] Sat, 29 Nov 2014 19:44:27 GMT. [v2] Thu, 27 Aug 2015 17:31:35 GMT.
There are all sorts of things to consider here and I doubt there can be a definitive answer.
First: how many neutron stars are there, or more pertinently, what is their density in the solar neighbourhood?
There are about 1000 stars within 15pc of the Sun down to about $0.2M_{\odot}$. Most of these are main sequence stars that are less massive (and more long-lived) than the Sun, with odd exceptions like Sirius and Arcturus. About 10% are white dwarfs that have evolved from objects with initial masses of $1-8M_{\odot}$. If we assume there are 900 stars within 15pc that were born with $M\leq 1M_{\odot}$, then we can integrate an assumed initial mass function and further assume that all stars with $8\leq M/M_{\odot}<25$ have already ended their lives as neutron stars. The lower limit is fairly solid; the upper limit is much more uncertain, but because of the steepness of the initial mass function ($N(M) \propto M^{-2.3}$) it doesn't really change the numbers of neutron stars much (but does change the [small] numbers of black holes!).
Thus the fraction of stars that end up as white dwarfs would be $$f_{\rm WD} \sim \frac{\int^{8}_{1} M^{-2.3}\ dM}{\int^{25}_{0.2} M^{-2.3}\ dM} = 0.11,$$ which ignores the negligible contribution of even higher mass stars to the total. This is in reasonable agreement with observation, but will be slightly overestimated because not all stars more massive than $1M_{\odot}$ have died.
Armed with some confidence that this calculation works for white dwarfs, we can do the same calculation for neutron stars. $$f_{\rm NS} \sim \frac{\int^{25}_{8} M^{-2.3}\ dM}{\int^{25}_{0.2} M^{-2.3}\ dM} = 0.006.$$ That is, if there are 1000 stars in total within 15pc, there should be about 6 neutron stars. This is likely to be an overestimate because a large fraction of neutron stars are created in supernovae and receive a large momentum kick that can give them velocities of hundreds of km/s. That means they should be under-represented in the Galactic disc and some will have been ejected from the Galaxy. This is probably a factor of $\sim 2$ effect.
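The two integral ratios above are easy to check, since $\int M^{-2.3}\,dM = M^{-1.3}/(-1.3)$ has a closed form. A quick evaluation (my own addition) confirms the quoted 0.11 and 0.006:

```python
def imf_integral(lo, hi, alpha=2.3):
    """Integral of M^-alpha dM between lo and hi (valid for alpha > 1)."""
    return (lo**(1 - alpha) - hi**(1 - alpha)) / (alpha - 1)

total = imf_integral(0.2, 25)          # all stars, 0.2 to 25 solar masses
f_wd = imf_integral(1, 8) / total      # white dwarf progenitors
f_ns = imf_integral(8, 25) / total     # neutron star progenitors

assert abs(f_wd - 0.11) < 0.01
assert abs(f_ns - 0.006) < 0.001
print(f_wd, f_ns, 1000 * f_ns)  # last value: expected neutron stars per 1000 stars
```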
See also https://astronomy.stackexchange.com/questions/16678/how-far-away-is-the-nearest-compact-star-remnant-likely-to-be?noredirect=1&lq=1 for a similar calculation, where I used some slightly different assumptions and numbers (which gives a flavour of the uncertainties involved).
Second: Could we actually see these nearby neutron stars? Now, the age distribution is likely to be reasonably uniform over the age of the Galaxy or perhaps even weighted to older ages. Neutron stars lose their original "birth heat" on timescales of thousands to millions of years, mostly by the emission of neutrinos. By the time they get to a million years old they can only be kept hot by accretion from the interstellar medium or perhaps by some sort of Ohmic heating driven by the decay of their magnetic fields or frictional processes associated with their spindown and decoupling between superfluids and "normal" fluids in the crust and core.
Of these reheating mechanisms, accretion from the interstellar medium is probably not important for neutron stars close to the Sun, because our local ~100 pc is a local bubble of hot and relatively sparse interstellar gas. But the other processes are very uncertain and until we start detecting the thermal radiation from old neutron stars, we just don't know how luminous they will be.
In Position of Neutron Stars in H R diagrams I gave an estimate of $M_{v} \sim 23$ for the absolute visual magnitude once the neutron star surface has cooled to 10,000 K or so, but I would say this could be uncertain by a factor of a few either way, which results in orders-of-magnitude uncertainties in their luminosities and so about $\pm 5$ magnitudes on $M_v$! If the absolute magnitude were $<20$, then there
is a chance that Gaia might detect one of those $\sim$ few old neutron stars within 15 pc. It would have a large parallax, probably a large proper motion, and a calculation of its luminosity compared with its temperature would quickly reveal it was much smaller than a hot white dwarf. On the other hand, if $M_V>20$ then there is virtually no chance that the Gaia survey would spot it, because that is about its apparent magnitude sensitivity limit. So unless one of those few nearby neutron stars was closer than a few pc it would just be too faint to see. The Large Synoptic Survey Telescope, due to start operation in 2023, should survey the sky to much fainter limits and really does stand a chance of detecting a population of these objects.
Third: I should point you to another possibility that I addressed (and dismissed) in https://astronomy.stackexchange.com/questions/16578/will-gaia-detect-inactive-neutron-stars/16699#16699 That is that Gaia might see the gravitational lensing of background stars by a foreground and nearby neutron star. In the answer referred to, I showed that this is possible but rather unlikely.
In conclusion there is little reassurance to be given. The likelihood of a neutron star disrupting the solar system is very low: they are orders of magnitude less common than "ordinary stars" (which would do just as much damage!), and since none of these ordinary stars shows much likelihood of coming nearer than 10,000 au to us in the foreseeable millions of years (e.g. Bailer-Jones 2018), it would be unfortunate in the extreme to be struck by something much rarer in the next 100 years. Conspiracy theorists and other wackos should focus on the very much more real threats of global warming and antibiotic-resistant bacteria, rather than claiming that something we see $>400$ light years away can reach us in 75 years...
Projections for HH measurements in the bbZZ(4l) final state with the CMS experiment at the HL-LHC
Pre-published on: 2019 August 18
Published on: —
Abstract
Prospects for the study of Higgs boson pair (HH) production in the $\mathrm{HH \to b\bar{b}4l}$ (l = e, $\mu$) channel are studied in the context of the High Luminosity LHC. The analysis is performed using a parametric simulation of the Phase-2 CMS detector response provided by the Delphes software and assuming an average of 200 proton-proton collisions per bunch crossing at a center-of-mass energy of 14 TeV. Assuming a projected integrated luminosity of 3000 fb$^{-1}$, the expected significance for the nonresonant standard model (SM) HH signal is 0.37 $\sigma$; a 95$\%$ confidence level (CL) upper limit on its cross section is set to 6.6 times the SM prediction. The statistical combination of five decay channels ($\mathrm{b\bar{b}b\bar{b}}$, $\mathrm{b\bar{b}\tau\tau}$, $\mathrm{b\bar{b}\gamma\gamma}$, $\mathrm{b\bar{b}WW}$, $\mathrm{b\bar{b}ZZ}$) results in an expected significance for the SM HH signal of 2.6 $\sigma$ and expected 68$\%$ and 95$\%$ CL intervals for the self-coupling modifier $\kappa_{\lambda}$ = $\lambda_{HHH}/\lambda_{HHH}^{SM}$ of [$0.35, 1.9$] and [$-0.18, 3.6$], respectively.
Suppose we have a triangular mesh of a two dimensional shape $\Omega$, and on this mesh we define a P1 finite element structure. I know that given $u,v$ by their values at the vertices of the triangles, we can compute $\int_\Omega \nabla u \cdot \nabla v$ using the stiffness matrix $A = (\int_\Omega \nabla \varphi_i\cdot \nabla\varphi_j)$ (where $\varphi_i$ is the P1 basis). This makes it possible to find expressions of the form $\int_\Omega |\nabla u|^2$ if $u$ is known at the triangle vertices.
How can we compute the approximation of the gradient $\nabla u$, given the values of $u$ at the mesh vertices?
I know that the gradient of $\varphi_i$ is constant on each triangle, but how should we combine these gradients across neighboring triangles, so that the value we obtain in the end is meaningful?
The purpose of doing this: I would like to integrate a quantity of the form $\int_\Omega \varphi(\nabla u)^2$, where $\varphi$ is a norm, different from the euclidean one. Now that I think about it, I guess it is necessary to compute the matrix $A = \int_\Omega b(\nabla \varphi_i,\nabla \varphi_j)$, where $b$ is the scalar product associated to $\varphi$. |
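For concreteness, here is a minimal sketch (Python, hypothetical array layouts) of the per-triangle gradient: since the P1 interpolant is affine on each triangle, its constant gradient can be obtained by solving a 3×3 linear system for the affine coefficients.

```python
import numpy as np

def p1_gradient(pts, u):
    """Constant gradient of the P1 interpolant on a single triangle.
    pts: (3, 2) array of vertex coordinates; u: (3,) nodal values."""
    # u_h(x, y) = a + b*x + c*y interpolates the three nodal values
    M = np.hstack([np.ones((3, 1)), pts])
    a, b, c = np.linalg.solve(M, u)
    return np.array([b, c])

# sanity check on u = 2x + 3y, whose gradient is (2, 3) everywhere
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
g = p1_gradient(tri, 2 * tri[:, 0] + 3 * tri[:, 1])
assert np.allclose(g, [2.0, 3.0])
```

For a vertex value one common recovery is the area-weighted average of the gradients of the adjacent triangles; but note that for the purpose stated above (integrating $b(\nabla u,\nabla u)$ over $\Omega$), the per-triangle constant gradients are all that is needed.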
$\DeclareMathOperator{\diag}{diag}$ Consider the generalized eigenproblem $A\mathbf{x}=\lambda B\mathbf{x}$. When solving PDEs numerically (specifically, I am interested on finding the Dirichlet eigenmodes of the 2D laplacian in orthogonal coordinates, i.e. $\nabla^2 u=\lambda u$) this problem takes the form
$$ A_1X + XA_2^\text{T} = \lambda B \circ X$$
where $X\in\mathbb{R}^{m\times n}$ is our eigenvector (the eigenfunction $u$ sampled at the meshpoints), $A_1\in\mathbb{R}^{m\times m}$ and $A_2\in\mathbb{R}^{n\times n}$ are differentiation matrices, $B\in\mathbb{R}^{m\times n}$ is the Jacobian determinant (sampled at the meshpoints) and the symbol $\circ$ denotes element-wise multiplication. I am only interested on the eigenmodes corresponding to the smallest eigenvalues.
With the help of the Kronecker product, the problem can be vectorized as follows:
$$ (I \otimes A_1 + A_2 \otimes I)\mathbf{x} = \lambda \diag(B)\mathbf{x}$$
where $\mathbf{x}\in\mathbb{R}^{mn}$ is a vector formed by stacking the columns of $X$, and $\diag{(B)}$ is the diagonal matrix whose diagonal is formed by stacking the columns of $B$.
The problem with this is that the resulting matrices are huge and extremely sparse, with size $mn\times mn$, and when I try to solve it for modestly large $m$ and $n$, say 200, my computer runs out of memory. Clearly that is not the way.
I understand that modern eigenvalue algorithms are highly sophisticated, and that coding one by myself is not the best option. So I am looking for an iterative solver that only depends on the computation of the matrix-vector products with $A$ and $B$ and that allows the user to provide them as a black-box. |
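As a sketch of the matrix-free route (assuming SciPy is available; the parameters shown are hypothetical): wrap the action $X \mapsto B^{-1}(A_1X + XA_2^{\mathrm T})$ as a `LinearOperator` and hand it to an iterative eigensolver such as `scipy.sparse.linalg.eigs`, never forming the $mn\times mn$ Kronecker matrices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

m, n = 30, 20
rng = np.random.default_rng(0)
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((n, n))
B = rng.random((m, n)) + 1.0          # Jacobian samples, kept positive

def matvec(x):
    # B^{-1}(A1 X + X A2^T): turns A x = lam diag(B) x into a standard problem
    X = x.reshape((m, n), order="F")  # column-stacking convention of vec()
    Y = (A1 @ X + X @ A2.T) / B       # element-wise division by B
    return Y.reshape(-1, order="F")

op = LinearOperator((m * n, m * n), matvec=matvec)

# consistency check against the explicit Kronecker form
K = np.kron(np.eye(n), A1) + np.kron(A2, np.eye(m))
x = rng.standard_normal(m * n)
assert np.allclose(matvec(x), (K @ x) / B.reshape(-1, order="F"))

vals = eigs(op, k=4, which="LM", return_eigenvectors=False)
```

For the smallest eigenvalues specifically, Krylov solvers converge much faster in shift-invert mode, which requires a linear solve per iteration; one option worth investigating is exploiting the Sylvester structure of the unweighted operator (e.g. a Bartels–Stewart solver) inside that solve.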
In addition to the different situations we have seen above, which involve modular functions and forms, there are also many ways to create new modular forms from given ones.
Vector space or graded ring structure

The most trivial way to construct a modular form from given ones is to use the vector space structure: if $f$ and $g$ have the same weight and multiplier system on some group $G$, then of course so does $\lambda f+\mu g$, for any constants $\lambda$ and $\mu$. Another way is to use the fact that the set of all modular forms with trivial multiplier for a given group has the structure of a graded ring. Even more is true: if $f$ and $g$ are modular for the same group $G$, then so is the product $fg$, whose weight will be the sum of the weights and whose multiplier system will be the product of the multiplier systems.

Differentiation
If we differentiate the modular identity $f(\gamma z)=v(\gamma)(cz+d)^kf(z)$ and use $(\gamma z)'=(cz+d)^{-2}$, we obtain \[f'(\gamma z)=v(\gamma)((cz+d)^{k+2}f'(z)+k(cz+d)^{k+1}f(z))\;.\]Thus $f'$ is almost a modular form (it is in fact a
quasi-modular form) of weight $k+2$, and it is exactly modular if $k=0$. There are many ways to "repair" this defect of modularity: two of them involve modifying the differentiation operator by using the auxiliary functions $1/y=1/\Im(z)$ or the quasi-Eisenstein series $E_2$. Another is to take suitable combinations which remove the extra terms that prevent modularity: for instance, if $f$ is of weight $k$ and $g$ of weight $\ell$, then $f^\ell/g^k$ is of weight $0$, so its derivative is really modular of weight $2$, and expanding shows that $\ell f'g-kfg'$ is modular of weight $k+\ell+2$. This is a special case of a series of bilinear operators called the Rankin–Cohen operators.

Changing the group
If we allow ourselves to modify the group on which a function is modular, there are many other ways to create new modular forms. For instance, if $f$ is modular on some group, then $f(mz)$ will also be modular of the same weight on some other group for any $m\in\mathbb{Q}^{\times}$. A similar construction implies that if $f(z)=\sum_{n\ge n_0}a_nq^{n/N}$ is modular, then so are $\sum_{n\equiv r\pmod{M}}a_nq^{n/N}$ and $\sum_{n\ge n_0}\psi(n)a_nq^{n/N}$ for any periodic function $\psi$. This last construction is called
twisting by $\psi$.

Enlarging the group
Even more interesting is the possibility to
enlarge the group on which a function is modular: if $f$ is modular on some subgroup $H$ of finite index of some other group $G$, say with trivial multiplier system, and if $(\gamma_i)$ is a system of left coset representatives of $H\backslash G$, so that $G=\bigsqcup_iH\gamma_i$, then it is clear that $\sum_i f|_k\gamma_i$ will be modular on $G$. This is a special case of the averaging procedure mentioned at the very beginning. An important example combining the above two methods is the construction of Hecke operators: let $p$ be a prime. If, for instance, $f$ is modular of weight $k$ on the full modular group $\Gamma$, the functions $f((z+j)/p)$ and $f(pz)$ will be modular only on the subgroup $\Gamma_0(p)$ of $\Gamma$, but it is immediate to show that the linear combination $g=\sum_{0\le j\le p-1}f((z+j)/p)+p^kf(pz)$ is again modular on the full modular group $\Gamma$, and we can define the Hecke operator $T(p)$ by $T(p)(f)=g/p$.
Last edited by Andreea Mocanu on 2016-04-01 15:21:10.
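On $q$-expansions in level one, the operator just defined acts by the standard formula $a_n(T(p)f) = a_{pn} + p^{k-1}a_{n/p}$ (with $a_{n/p}=0$ unless $p\mid n$). A small computational sketch, checking that $\Delta = q\prod_{n\ge1}(1-q^n)^{24}$ of weight 12 is a $T(2)$-eigenform with eigenvalue $\tau(2)=-24$:

```python
# q-expansion of Delta = q * prod_{n>=1} (1 - q^n)^24, truncated at q^N
N = 50
c = [0] * (N + 1)
c[0] = 1
for n in range(1, N + 1):
    for _ in range(24):                 # multiply 24 times by (1 - q^n)
        for i in range(N, n - 1, -1):   # in-place, highest power first
            c[i] -= c[i - n]
tau = [0] * (N + 2)
for i in range(N + 1):
    tau[i + 1] = c[i]                   # shift by one power of q

def hecke(p, k, a, M):
    # a_n(T(p) f) = a_{p n} + p^(k-1) * a_{n/p}, level 1, weight k
    return [a[p * n] + (p ** (k - 1) * a[n // p] if n % p == 0 else 0)
            for n in range(M)]

# T(2) Delta = tau(2) Delta on the first 20 coefficients
assert hecke(2, 12, tau, 20) == [tau[2] * tau[n] for n in range(20)]
```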
Homework Statement: In a double slit experiment let d=5.00 D=30.0λ. Estimate the ratio of the intensity of the third order maximum to that of the zero-order maximum.
Homework Equations: interference, diffraction
I guess the goal is this equation:
##I_{(\theta)}=I_0 \cos^2\beta \left( \frac {\sin\alpha} \alpha \right)^2##
then I do
## D\sin \theta = 3\lambda##
##\sin\theta= \frac {3\lambda} D, \space \theta=5.74^\circ##
##\beta= \frac {\pi d} \lambda \sin \theta##
##\alpha = \frac {\pi D} \lambda \sin \theta \space##
substituting the data
##\alpha=\frac {\pi 30.0\lambda} \lambda \frac {3\lambda} {30.0\lambda}\space##
next
##\beta= \frac {\pi 5.00} \lambda \frac {3\lambda} {30.0\lambda}## I don't know how to solve this one and the rest of the problem; how do I get rid of the ##\lambda## in the denominator?
any help please? |
I'm going to try to design an algorithm to find all the rational roots of a polynomial equation in the range [a, b]. Can someone please tell me which algorithm currently solves this problem with the lowest worst-case complexity? This algorithm will be for a general-purpose computer (Turing machine).
The paper Computing Real Roots of Real Polynomials by Sagraloff and Mehlhorn from 2015 provides an almost optimal algorithm and references for simpler algorithms that might be used in practice. The CGAL library (in version 4.9) for example uses the method developed by Arno Eigenwillig in his PhD thesis
Real Root Isolation for Exact and Approximate Polynomials Using Descartes' Rule of Signs.
If you only want to find all
rational roots, you can simply use the rational root theorem. This theorem states that, given a polynomial $a_n x^n + a_{n-1}x^{n-1} + \ldots + a_1x+a_0$, for any rational root $x=p/q$, where $p,q\in \mathbb N$ and $\gcd(p,q)=1$, we have: $p$ is a divisor of $a_0$ and $q$ is a divisor of $a_n$.
So, one possible algorithm is to factorise $a_0$ and $a_n$ to get all possible $p,q$ and simply 'fill in' the combinations as a ratio to see if it is a root. This way, we find all possible roots. The complexity of the root finding is negligible to the factorisation, so the complexity of this method is the complexity of factorising $a_0$ and $a_n$, which will take a long time for large $a_0$ and $a_n$ (but is fast for small $a_0$ and $a_n$, independent of the rest of the equation!)
There is a speedup, however. If a root $p/q\in [a,b]$, this means that $p\in [aq,bq]$ and $q\in [p/b,p/a]$. If $a_0$ is small, but $a_n$ is large, we can find all divisors $p_i$ of $a_0$ and test for all integers in the range $[p_i/b,p_i/a]$ whether they divide $a_n$. If $a_n$ is large and $[a,b]$ not too big, this will be a lot faster than factoring $a_n$. This means that we only have to do one factorisation and can do it on the smallest of $a_0$ and $a_n$.
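A sketch of the basic (no-speedup) version of this procedure in Python, for illustration only — it assumes integer coefficients with $a_0, a_n \neq 0$ and uses naive trial division for the factoring step:

```python
from fractions import Fraction

def divisors(n):
    """All positive divisors of |n| by trial division."""
    n = abs(n)
    ds, i = set(), 1
    while i * i <= n:
        if n % i == 0:
            ds.update((i, n // i))
        i += 1
    return ds

def rational_roots(coeffs, a, b):
    """coeffs = [a_n, ..., a_1, a_0] with a_0, a_n != 0.
    Returns all rational roots in [a, b], exactly (no floating point)."""
    a0, an = coeffs[-1], coeffs[0]
    roots = set()
    for p in divisors(a0):
        for q in divisors(an):
            for r in (Fraction(p, q), Fraction(-p, q)):
                if a <= r <= b and sum(c * r ** k for k, c in
                                       enumerate(reversed(coeffs))) == 0:
                    roots.add(r)
    return sorted(roots)

# 3x^2 - 14x - 5 = (3x + 1)(x - 5) has roots -1/3 and 5
assert rational_roots([3, -14, -5], -1, 6) == [Fraction(-1, 3), Fraction(5, 1)]
```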
So, to get a complete overview of the worst case complexity for the methods described, define $a_{\max}=\max\{a_0,a_n\}$ and $a_{\min} = \min\{a_0,a_n\}$. Assume $b\geq a>1$ (another worst case exists when $a,b<1$, but that will have the same running time, only with $1/a$ and $1/b$). We will factor $a_{\min}$ and consider all its divisors, of which there are $O(\log n)$
on average (The actual worst case upper bound is $\exp(O(\frac{\log n}{\log\log n}))$, but this factor will likely be dominated anyway, so I'd rather keep it simple. A derivation and more is given here).
All divisors of $a_{\min}$ are in the range $[1,\sqrt{a_{\min}}]$, so we do at most $\lceil (b-a)\sqrt{a_{\min}} \rceil$ divisor tests per factor of $a_\min$. Since we know that any factor of $n$ must be in $[1,\sqrt{n}]$, we have that $b-a\leq \sqrt{a_\max}$ to be useful (if not, replace $[a,b]$ by $[1,\sqrt{a_\max}]$). So, we do at most $\lceil \sqrt{a_{\min}a_\max} \rceil$ divisor tests. Testing whether a number is a divisor of $a_\max$ takes $O(\log a_\max)$ time, using the Euclidean algorithm.
Factoring $a_\min$ takes $O(F(a_\min))$, where $F(n):=\exp ((\log n)^{1/3}(\log \log n)^{2/3})$.
So, in total, this algorithm has a worst case complexity of $O(F(a_\min) + (b-a)\sqrt{a_\min}\log{a_\min}\log{a_\max})$ time. Since we can assume $(b-a)\leq a_\max$, the factoring is the only non-polynomial (in $a_\min$ or $a_\max$) part of this formula, so we get that the complexity is simply $O(F(a_\min))$.
I highly doubt that it is possible to find all rational roots within a range without factoring at least one of the coefficients, because that would mean (by the rational root theorem), that we have found a more efficient algorithm for factoring! In that case, the algorithm I gave is asymptotically optimal, as it is the cost of factoring the smallest of the coefficients $a_0$ and $a_n$. |
Why is there 10k resistor at DAC output? Is it used as current limiter? Why is current limiter needed if op-amp sinks very little current at its input?
Why is there a capacitor at the same + input?
The 10k and 470p combination form a low-pass filter. Cut-off frequency is given by \$ f = \frac {1}{2 \pi R C} = \frac {1}{2 \pi 10k \cdot 470p} = 34~kHz \$. It will reduce any high frequency switching noise from the DAC.
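As a quick sanity check of that cut-off arithmetic (component values taken from the answer above):

```python
import math

R, C = 10e3, 470e-12             # 10 kΩ and 470 pF from the schematic
f_c = 1 / (2 * math.pi * R * C)  # first-order RC low-pass cut-off frequency
assert 33e3 < f_c < 35e3         # close to the quoted ~34 kHz
```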
At the feedback resistor (the 453k one), why is there a capacitor across it?
This reduces the gain of the amplifier at high frequencies. Again, this helps eliminate switching noise and improves amplifier stability.
Why is there an extra connection from the DAC REF voltage (2.5 V) to the voltage divider of the negative voltage feedback of the op-amp?
We can see that the DAC is powered from +3 V and GND. These are therefore the maximum and minimum possible output voltages. (It will actually only give up to 2.5 V on the output as that is the Vref setting.) The amplifier, however, is powered from +70 V and -70 V which implies that we are expecting to drive it positive and negative with respect to ground.
Since the DAC can only give a positive output we can assume that 1.25 V (half of Vref) represents the mid-point or "zero" of the AC signal output. We therefore reference the amplifier to that point.
Removing the offset
We have one little problem left: we want 0 V out of the amplifier when DAC out is 1.25 V. To do this we need to raise the inverting input reference point by \$ \frac {1.25}{A} \$ where
A is the gain. This is done by tweaking the ratio slightly with the potential divider ratio of 16k2 and 16k9. (These add to 33k3 which makes me think the values were worked out for convenience on a 3.33 V reference.) In any case it's 51.06% of 2.5 V. The amplifier will have to drive the output to less than 1.25 V to pull the inverting input down to match the non-inverting input. If the calculation has been done correctly the balance will occur when the output is at 0 V.
I'll leave it as an exercise to the reader to confirm the values are correct! |
If you reversed the directions of the two spins, the spin part of the wavefunction is a
singlet and it is anti-symmetric.
The total wavefunction (spatial * spin) needs to be overall antisymmetric for fermions, which means in your case the spatial part of the wavefunction is symmetric - there is the highest probability of finding the particles at the same position!
EDIT after comments: Pauli exclusion principle
The Pauli exclusion principle states that no two fermions can be in the same quantum state.
In an atom, this means that no two electrons can share the same set of quantum numbers.
A quantum number is related to a conserved quantity, like energy ($n$), magnitude of orbital angular momentum ($\ell$), projection of orbital angular momentum along the $z$-axis ($m_\ell$), spin ($s$) or spin projection along the $z$-axis ($m_s$). For more on this, see here.
Let's take two electrons in a bound electronic state, be it the $H^-$ ion, as in the hydrogen atom with two electrons.
I am choosing this so that I can use the hydrogenic wavefunction for both electrons:
$$|\Psi(r)\rangle = |\psi_{n,\ell,m_\ell} (\mathbf{r})\rangle \otimes |\chi_{s, m_s}\rangle, $$
where the first part is the
spatial and the second part the spin part of the wavefunction. This is assuming that spin and space don't talk to each other, so I can split the wavefunction, but it is not always true. You can find a list of tabulated hydrogenic wavefunctions $|\psi_{n,\ell,m_\ell} (\mathbf{r})\rangle$ online.
If I have two electrons, the total wavefunction of the system will be:
$$ |\Phi(\mathbf{r},\mathbf{r'}) \rangle_{\text{attempt}} = |\Psi(\mathbf{r})\rangle_1 \otimes |\Psi(\mathbf{r'})\rangle_2,$$ where the labels refer to the 2 electrons - each label is a set of quantum numbers.
This is where I need to introduce
quantum statistics: electrons are indistinguishable particles. They are only characterised by their intrinsic properties such as mass, charge, spin, lepton number etc. but there is nothing that marks any one electron apart from another. It's like you have two identical twins, Bob and Brian. Bob has glasses. But as soon as you put glasses on Brian, they become once again indistinguishable. Same with removing the glasses. *
So I have to encode the fact that the electrons that I have above labelled as $1$ and $2$ could actually also be $2$ and $1$ without changing the resulting physics in the slightest.
To do this, I create a linear superposition:
$$ |\Phi(\mathbf{r},\mathbf{r'}) \rangle = |\Psi(\mathbf{r})\rangle_1 \otimes |\Psi(\mathbf{r'})\rangle_2 - |\Psi(\mathbf{r'})\rangle_1 \otimes |\Psi(\mathbf{r})\rangle_2,$$
where the minus sign is there to ensure that fermions have an antisymmetric total wavefunction - i.e. they get an overall minus sign to the original wavefunction when you swap them $1\leftrightarrow 2$. The physical origin of this is quantum statistics.
Using the expression for $|\Psi(r)\rangle$ above, and letting the $1$ electron have an
unprimed set of quantum numbers, and $2$ a primed, we have:
$$ |\Phi(\mathbf{r},\mathbf{r'}) \rangle = \underbrace{|\psi_{n,\ell,m_\ell} (\mathbf{r})\rangle \cdot |\chi_{s, m_s}\rangle}_1 \otimes \underbrace{|\psi_{n',\ell',m'_\ell} (\mathbf{r'})\rangle \cdot |\chi_{s', m_s'}\rangle}_2 \\ - \underbrace{|\psi_{n,\ell,m_\ell} (\mathbf{r'})\rangle \cdot |\chi_{s, m_s}\rangle}_2 \otimes \underbrace{|\psi_{n',\ell',m'_\ell} (\mathbf{r})\rangle \cdot |\chi_{s', m_s'}\rangle}_1. $$
What happens if the two electrons occupy the same quantum state? I.e. same energy, angular momentum etc.? Then they would share the same set of quantum numbers. And $|\Phi \rangle = 0$, i.e. there would not be a system existing in such a state.
But all it suffices to prevent the difference from going to $0$ is at least
one quantum number differing. Electrons have spin quantum number $s = 1/2$, so $m_s = \pm 1/2$. If the two electrons have opposite spin, one is in $m_s = 1/2$ and the other is in $m_s = -1/2$, different quantum states, hence they are allowed to live wherever you want - only limited by the particular form of the spatial wavefunction.

Less rigorous argument
A less rigorous argument just states that the total wavefunction $|\Psi\rangle$ for two particles is a function of both labels:
$$ |\Psi\rangle = f(1,2),$$
but if they are fermions, there needs to be antisymmetry upon exchange of labels:
$$ f(1,2) = -f(2,1),$$
and if you insist that your $|\Psi\rangle$ is the same before and after particle exchange, i.e. that they have the same set of quantum numbers, then you have $f = 0$.
Singlets vs Triplets
To make your life easier, you can combine angular momenta so as to re-write $|\Psi\rangle$ from before into:
$$ |\Phi\rangle = |\phi(\mathbf{r})\rangle \otimes |\theta\rangle, $$ where once again I am assuming spin and space are mutually independent so I can split the wavefunction into a spatial $(|\phi(\mathbf{r})\rangle)$ and spin part $(|\theta\rangle)$.
The spin part is obtained from adding the two spins:
If they are parallel, then you have $S = s_1 + s_2 = 1$, $|S = 1\rangle$ with three projections along the $z$-axis, $|m_S = -S, 0, S\rangle$. This state is called a
triplet.
If they are anti-parallel, then $S = s_1 - s_2 = 0$, $|S = 0\rangle$ with only one projection along the $z$-axis, $|m_S = 0\rangle$. This state is called a
singlet.
For the singlet, in particular, $$|\theta\rangle = |S, m_S\rangle = |0,0\rangle = \frac{1}{\sqrt{2}}( \uparrow_1 \downarrow_2 - \downarrow_1 \uparrow_2),$$ where the minus sign ensures that not only $m_S = 0$, but also $S = 0$, because $m_S = 0$ could also be a state of $S=1$.
You see that this spin part is
anti-symmetric, which means that the spatial part will need to be symmetric in order to maintain an overall antisymmetry of the wavefunction.
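One can verify this (anti)symmetry directly with a small numerical sketch: represent the two-spin states in the product basis $(\uparrow\uparrow,\uparrow\downarrow,\downarrow\uparrow,\downarrow\downarrow)$ and apply the SWAP operator that exchanges the two particles.

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)   # |0,0>
triplet0 = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)  # |1,0>

# SWAP exchanges the two tensor factors in the product basis
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

assert np.allclose(SWAP @ singlet, -singlet)    # antisymmetric
assert np.allclose(SWAP @ triplet0, triplet0)   # symmetric
```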
If both electrons are in the
same spatial state, because, for example, you are considering the ground state, then the spatial part of the wavefunction has to be symmetric, since $\psi'$ and $\psi$, marked only by their quantum numbers, are then the same: the ground state would just be $\psi_1 \cdot \psi_2$. This is symmetric under exchange, which means the spin part has to be anti-symmetric, i.e. a singlet, i.e. the two spins have to be antiparallel. Which is the foundation of Hund's rule and of how we fill atomic shells in the periodic table - max two spins per state, one $\uparrow$ and one $\downarrow$. Visual representation
Another cute way of actually
seeing the Pauli exclusion principle is to actually plot a symmetric and anti-symmetric combination of the wavefunctions.
Let's take the eigenstates of a 2D infinite square well, namely the first two: $|\psi_1\rangle \propto \cos(x \, \text{or} \, y)$, $|\psi_2\rangle \propto \cos(2x \, \text{or} \, 2y)$.
Need of excited state
As I said before, if both electrons were in the same spatial state, we could only build a symmetric combination. To see both cases we need to consider one particle in the ground state and one in the excited state.
Symmetric combination
$|\Psi_{\text{symmetric}}|^2 \propto |\cos(x)\,\cos(2y) + \cos(y)\,\cos(2x)|^2, $
There is a maximum where $x=y$, i.e. where the "particles" (excitations of the infinite well) occupy the same position.
Anti-symmetric combination
$|\Psi_{\text{antisymmetric}}|^2 \propto |\cos(x)\,\cos(2y) - \cos(y)\,\cos(2x)|^2, $
There is no probability of finding the particles at $x=y$, i.e. where the "particles" (excitations of the infinite well) occupy the same position.
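A minimal numeric version of these two pictures (using the same $\cos$ forms on a grid; no plotting, just checking the diagonal $x=y$):

```python
import numpy as np

u = np.linspace(-np.pi / 2, np.pi / 2, 201)
X, Y = np.meshgrid(u, u)
sym  = np.cos(X) * np.cos(2 * Y) + np.cos(Y) * np.cos(2 * X)
anti = np.cos(X) * np.cos(2 * Y) - np.cos(Y) * np.cos(2 * X)

d = np.arange(len(u))
# antisymmetric combination vanishes identically where x = y
assert np.allclose(anti[d, d], 0.0)
# symmetric combination attains its global maximum on the diagonal x = y
assert (sym ** 2).max() == (sym ** 2)[d, d].max()
```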
Comment question
if the spins are antiparallel, then the two fermions CAN occupy the same position if we wish them to
Yes. In a Helium atom, both electrons are described by the same wavefunction. This looks exactly the same (spatially) for both, which means they are occupying the same position. "Occupying" means that the wavefunctions perfectly overlap, since electrons are not point particles in quantum mechanics unless measured.
This can only work if their 2 spins are antiparallel. If you had a third electron, e.g. $He^-$ or $Li$, it would need to go to the next energy level (the next
shell), which has a different $n$ and hence a different spatial wavefuntion $\psi_n$.
* : You could distinguish Bob from Brian if you know where they are seated, for example. This is why atoms in solids are
distinguishable, because bonds make them localised - this acts like a label. |
Monte Carlo is most useful when you lack analytic tractability or when you have a highly multidimensional problem.
For example, even using simple lognormal and poisson models, there exist path-dependent payoffs or multi-asset computations such that no analytic solution exists and such that any PDE finite difference solution would require 3 or more dimensions. Other times, you are employing a model where the SDE is not solvable, so an apparently one-dimensional problem still ends up forcing you to generate many incremental paths using Euler or Milstein integration.
Cases Where Monte Carlo Is A Poor Idea

- Weakly path-dependent options (e.g. lookbacks): use PDE or series solutions
- Single-dimensional cases: if your problem is just one dimensional, such as pricing a payoff along the terminal distribution, you should never use Monte Carlo, since numerical quadrature is far superior in this case, even if you just use Riemann sums.
Cases Where Monte Carlo Is A Good Idea

- Strongly path-dependent options such as ratchet range options
- Portfolio risks and exotic baskets where high dimensionality comes into play. CDO tranche protection is a classic example. So are tail risk computations, especially for multi-asset portfolios.
- Intractable models where the terminal distribution cannot be computed, such as some stochastic vol models
To the point about single-dimensional cases -- it sounds like this describes your usage, perhaps because you are using some kind of implied distributional fit to agree with volatility skew. In this case Monte Carlo seems easy, but using a trapezoid rule integrator (or similar) will be as easy and far higher quality by about any measure.
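To illustrate that point with a sketch (hypothetical lognormal parameters): pricing $E[\max(S-K,0)]$ off a one-dimensional terminal distribution by trapezoid quadrature versus plain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
s = 0.25       # std of the log-return; S = exp(X), X ~ N(0, s^2)
K = 1.0

# trapezoid quadrature over the terminal density, in log-space
x = np.linspace(-6 * s, 6 * s, 2001)
pdf = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
g = np.maximum(np.exp(x) - K, 0.0) * pdf
quad = np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2

# Monte Carlo needs many more function evaluations and is still noisier
mc = np.maximum(np.exp(s * rng.standard_normal(100_000)) - K, 0.0).mean()

assert abs(quad - mc) < 0.01   # both target the same expectation
```

With 2001 quadrature nodes the trapezoid value is already accurate to several decimal places, while the 100,000-sample Monte Carlo estimate still carries visible statistical noise.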
Now Monte Carlo does make it tricky to accurately compute greeks. As with any model, we can compute greeks by using a finite difference "parameter bump", computing our greek
$$g_\mu =\frac{ V(\dots,\mu+\Delta \mu,\dots) - V(\dots, \mu,\dots)}{\Delta \mu}$$
but if there is a lot of random noise in those two separate computations of $V()$ then our $g_\mu$ will be inaccurate. Instead it is important to bring the differencing
inside the Monte Carlo formula. That is, we don't want to be doing
$$\hat{g}_\mu =\frac{ \frac1M \sum_{i=1}^M V(x_i,\dots,\mu+\Delta \mu,\dots) - \frac1M \sum_{i=1}^MV(y_i, \dots, \mu,\dots)}{\Delta \mu}$$
for two separate sample sets $x_i$ and $y_i$. Instead, we want to use the
same $x_i$ for both sums, meaning we effectively compute
$$g_\mu =\frac1{M {\Delta \mu}} \sum_{i=1}^M \left[ V(x_i,\dots,\mu+\Delta \mu,\dots) -V(x_i, \dots, \mu,\dots)\right]$$
and end up with a far more accurate estimate, typically better than our estimate of option value.
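A sketch of this common-random-numbers trick (hypothetical Black–Scholes parameters; for these numbers the analytic delta is $\Phi(d_1)\approx0.579$):

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, r, sigma, T, M = 100.0, 100.0, 0.02, 0.2, 1.0, 200_000
dS = 0.5                                   # finite-difference bump in S0

def value_paths(S0, z):
    # discounted call payoff per path under the GBM terminal distribution
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

z = rng.standard_normal(M)
# differencing inside the sum: same samples x_i for both valuations
delta_crn = (value_paths(S0 + dS, z) - value_paths(S0, z)).mean() / dS

# differencing outside: two independent sample sets, much noisier
z2 = rng.standard_normal(M)
delta_noisy = (value_paths(S0 + dS, z).mean() - value_paths(S0, z2).mean()) / dS
```

With common random numbers the per-path differences nearly cancel, so the estimator's standard error is typically orders of magnitude smaller than the independent-samples version.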
I'll make one final note, which is that you feel you "get sound results without relying on the assumptions of the analytical methods". If your terminal distributions are empirically generated, then you are likely to misprice any options because you are not using anything close to a risk-neutral measure. For example, you almost certainly find yourself pricing a forward contract $F$ far higher than the true arbitrage-free value range $F \in [S_0 e^{r_L T}, S_0 e^{r_b T}]$ where $r_b, r_L$ are standard borrow-lend rates.
(Vytautas beat me to some of these points) |
In a previous question, link, I asked about how I could most effectively do a Fourier Transform of a radial function given at certain values and which we knew the asymptotical behaviour of. The Fourier transform reads$$\frac{4\pi}{q}\int^{\infty}_0 dr\, r \sin(qr) f(r).$$I tried several ways and ended up choosing FFT, which approximates it by calculating the DFT over the interval $[0,\Delta r]$ with $N$ points. (More specifically, I used the NAG subroutine C06FAF.)
I still get some issue around $q=0$. Indeed, as can be seen in the figure, I have some weird peak at very low frequencies. The black curve that is flat at $q=0$ is the analytical result while the other curves are FFT calculations with increasing $\Delta r$. ($\color{blue}{\Delta r} >\color{red}{\Delta r} > {\bf \color{black}{\Delta r}} > \color{magenta}{\Delta r} $, the dashed one being the highest $\Delta r$.) As can be seen, this peak is narrower when $\Delta r$ gets larger.
The question is: where does this peak come from? And how could I get rid of it? I already tried computing the mean of the function and subtracting it from the function, but it does not change anything.
One subsidiary question is also the following: the subroutine calculates the integral, and I then have to divide by $q$ to get this $4\pi/q$ factor. At $q=0$, though, this can't be done for obvious reasons. So, what can be done instead?
EDIT: the problem of the wide peak was simply due to the fact that I was doing a bad conversion between $q$ and the frequency from the routine. As far as the problem of the division by $q=0$ is concerned, I'm happy with Endulum's answer.
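For reference, the division at $q=0$ can be avoided entirely by taking the limit analytically: since $\sin(qr)/q \to r$ as $q\to0$, the transform tends to $4\pi\int_0^\infty r^2 f(r)\,dr$. A direct-quadrature sketch (not the FFT route) with a Gaussian test function whose transform is known:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule, to stay self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

r = np.linspace(0.0, 10.0, 4001)
f = np.exp(-r**2)               # test function with a known 3D transform

def radial_ft(q, r, f):
    if q == 0.0:
        # limit of (4*pi/q) * sin(qr) as q -> 0 is 4*pi*r
        return 4 * np.pi * trapz(r**2 * f, r)
    return (4 * np.pi / q) * trapz(r * np.sin(q * r) * f, r)

# analytic transform of exp(-r^2) is pi^(3/2) * exp(-q^2 / 4)
assert np.isclose(radial_ft(0.0, r, f), np.pi**1.5, rtol=1e-4)
assert np.isclose(radial_ft(1.0, r, f), np.pi**1.5 * np.exp(-0.25), rtol=1e-4)
```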
I am new to Stack Exchange, so please excuse the few mistakes that might have happened.
I like the large square roots, parentheses and braces, the wide accents, underbraces, etc. that one gets with
mtpro2. Also, the TeX Gyre Pagella Math font is very much to my liking, especially since it has optical sizes inbuilt.
However, this question is not about my likings, but the fact, that I would like to use TeX Gyre Pagella Math with
unicode-math and take (especially) the square root, braces and parentheses from
mtpro2. Does anybody have an idea?
Here is an MWE: (note how the
sqrt is not slanted and the parentheses are not round)
\documentclass[10pt,a4paper]{article}
\usepackage[lite]{mtpro2}
\usepackage{unicode-math}
\setmainfont[ExternalLocation,
  Numbers={OldStyle,Proportional},
  Mapping=tex-text,
  ItalicFont={texgyrepagella-italic},
  BoldFont={texgyrepagella-bold},
  BoldItalicFont={texgyrepagella-bolditalic}]{texgyrepagella-regular}
\setsansfont[Scale=MatchLowercase,
  Numbers={OldStyle,Proportional},
  Mapping=tex-text]{Calibri}
\setmathfont[ExternalLocation]{texgyrepagella-math}
\begin{document}
\begin{equation}
s_x = \sqrt{\frac{1}{n-1} \displaystyle \sum_{i=1}^{n}\left(x_i-\overline{x}\right)^2}
    = \left(\sqrt[4]{\frac{\frac{1}{n-1} \displaystyle \sum_{i=1}^{n}\left(x_i-\overline{x}\right)^2}{\left(\frac{1}{1}\right)}}\right)^2
\end{equation}
\end{document}
I believe that you will want to process each channel separately over all of the images. Otherwise a mean and variance will have very little meaning. And if you convert to grayscale before training, then the network can only retrieve grayscale images, which doesn't seem to match with what the paper describes.
I have trouble distinguishing between "from each pixel its mean value over all images" and "standard deviation of all pixels over all images".
This processing requires two passes:
On the first pass, you will compute the mean pixel values of each channel, and the variance over the entire set of pixels in a channel. When you are finished, you should have 3075 values: one mean value per pixel per channel (32*32*3 = 3072), and one variance per channel (3).
On the second pass you will modify the images by subtracting from each pixel the mean you found in the first pass, and dividing by the standard deviation from the first pass.
Let $\mu_{r,i,j}$ be the mean value of the red channel at the $i,j$-th pixel, and let $\sigma_r^2$ be the variance of the red channel over all pixels in all images. Then the new value corresponding to the red channel of the $i,j$-th pixel in the $k$-th image $r_{i,j,k}$ becomes
$$ \tilde{r}_{i,j,k} = \left(r_{i,j,k} - \mu_{r,i,j}\right)/\sqrt{\sigma_r^2}$$
Where
$$\mu_{r,i,j} = \frac{1}{K}\sum_k r_{i,j,k}$$ $$\sigma_r^2 = \frac{1}{IJK-1}\sum_{i,j,k} \left(r_{i,j,k} - \bar{r}\right)^2, \qquad \bar{r} = \frac{1}{IJK}\sum_{i,j,k} r_{i,j,k}$$
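As a concrete sketch of the two passes (assuming the 32×32 RGB images are stacked in a single array; centring the data before computing the per-channel standard deviation is one interpretation of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(500, 32, 32, 3))  # K images, height, width, channel

# Pass 1: one mean per pixel per channel (32*32*3 = 3072 values)
# and one standard deviation per channel (3 values)
mu = images.mean(axis=0)                    # shape (32, 32, 3)
sigma = (images - mu).std(axis=(0, 1, 2))   # shape (3,)

# Pass 2: subtract the per-pixel mean, divide by the per-channel std
normalized = (images - mu) / sigma

print(np.abs(normalized.mean(axis=0)).max())  # per-pixel means are now ~0
print(normalized.std(axis=(0, 1, 2)))         # per-channel stds are now ~1
```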
This is my interpretation of the paper. However, since the focus of the paper is on autoencoders and deep belief networks, the image preprocessing step is only mentioned in passing. Like most academic papers, it gives enough details for the reader to understand what was done without necessarily giving enough details to reproduce the result without lots of reading between the lines.
Good luck to you! |
Here's how I think of your result:
Let's look for integers $n$, such that the beginning of the decimal expansion of $n^{\log n}$ agrees with that of $\pi$ (up to some point). Using a for loop, I found the following approximations for $n<100,000$:
$$ \pi \approx \frac{11^{\log 11}}{10^2},\frac{53599^{\log{53599}}}{10^{51}},\frac{59546^{\log{59546}}}{10^{52}}.$$
Note that the last two only approximate $\pi$ to 4 digits after the decimal point.
It seems that for $n<1,000,000$, $n=11$ gives the best approximation of the form $\frac{n^{\log{n}}}{10^{d(n^{\log n})-1}}$ where $d(m)$ is the number of digits of $m$ left to the decimal point.
I'm still trying to find better approximations though...
EDIT:
In Mathematica I used something of the form
For[n = 1, n < 100000, n++,
 If[Floor[n^Log[n]/10^(IntegerLength[Floor[n^Log[n]]] - 5)] == 31415,
  Print[N[n^Log[n], 10], " ", n]]]
This will give you approximations good to 4 decimal places in the range $n<100000$.
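For readers without Mathematica, a rough Python equivalent of the same brute-force search (comparing the leading five digits, which mirrors the Floor/IntegerLength trick above):

```python
import math

# Find n < 100000 where the leading five digits of n^log(n) are 31415
matches = []
for n in range(1, 100000):
    digits = str(n ** math.log(n)).replace('.', '')
    if digits[:5] == '31415':
        matches.append(n)

print(matches)  # 11, 53599 and 59546 are among them
```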
EDIT2:
Using a longer loop for finer approximations I found
$$\pi \approx \frac{3214471^{\log 3214471}}{10^{97}},\frac{3745521^{\log 3745521}}{10^{99}} $$
both to 6 decimal places.
Overall, it seems that the case $n=11$ is extraordinarily good for small values of $n$. I still can't see why nonetheless. |
Using currently available fuels and technology, and with an unlimited budget, how big would a rocket without stages have to be to make it to a stable orbit? And how big would a rocket without stages have to be to escape Earth's gravity? (Edited: or, for the easier question, how big would the rocket have to be if it was massless and only the fuel had mass?)
A rocket "without stages" is a one-stage rocket. There have been proposals of this type before. You can often find mentions of SSTO, which means "single stage to orbit". The X-33 was a fairly recent R&D attempt at this. That was a spaceplane and the rocket all in one.
The present question makes a different attempt. It is asking about SSTO minimum size with no requirements for reentry, and apparently no payload requirement either. That's difficult to answer in a fully theoretical sense, without just saying "it's zero". If you allow engineering into the discussion, it seems there should obviously be an answer.
Further complicating things is the fact that the size of the rocket doesn't necessarily affect the mass fraction of propellant of the tank itself. It is important to look for this number for
tanks specifically. NASA gives these examples:
Propellant Rocket     Percent Propellant for Earth Orbit
Solid Rocket          96%
Kerosene-Oxygen       94%
Hypergols             93%
Methane-Oxygen        90%
Hydrogen-Oxygen       83%
There's no obvious academic reason I can't assume an orbital solid-rocket tank that weighs 1 kg and is still 96% propellant. As far as tanks go, they physically have a mass proportional to the volume of stuff held in them (to first order).
Nonetheless, my 1 kg orbital rocket is sure to be highly ineffective. In fact, the Delta v to LEO will increase because air drag and gravity drag will be higher for this rocket. Air drag, mainly. Out of the normal 10 km/s to orbit, around 1 km/s is typically air drag.
A completely accurate way to answer your question would then be to find out when drag gets so large that the
tank mass itself satisfies the rocket equation as the payload. To do that, I have to get a metric for air drag. I'll assume it goes about the same speed as other rockets (because decreasing the speed increases gravity drag). That means that this is purely a consequence of surface area to mass ratio.
$$ \Delta v_{drag} \propto \frac{A}{M} \propto \frac{R^2}{R^3} \propto \frac{1}{R} \propto M^{-1/3} \\ \Delta v_{drag} = \Delta v_{nominal} \left( \frac{ M }{ M_{nominal} } \right)^{-1/3} $$
I'm ball-parking things, so I'll assume that the nominal case is a shuttle that has delta v from drag of 1 km/s and weighs 1e6 kg. Now, we formalize the total Delta v and plug it into the rocket equation.
$$ \Delta v = 9 km/s + \Delta v_{drag} \\ v_e = 4 km/s \\ \Delta v = v_\text{e} \ln \frac {m_0} {m_1} $$
Plug and chug...
$$ 9 km/s + 1 km/s \left( \frac{ M }{ 1,000,000 kg } \right)^{-1/3} = 4 km/s \times \ln \frac{1}{0.04} \\ M = 17,179 \text{ kg} $$
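The plug-and-chug step can be checked in a few lines (using the ball-park numbers above: 9 km/s base delta-v, 1 km/s nominal drag at $10^6$ kg, $v_e = 4$ km/s, 4% final mass fraction):

```python
import math

ve = 4.0                              # exhaust velocity, km/s
dv_rocket = ve * math.log(1 / 0.04)   # rocket-equation delta-v, ~12.88 km/s
drag = dv_rocket - 9.0                # delta-v budget left for drag, km/s
M = 1e6 * drag ** -3                  # solve 1*(M/1e6)**(-1/3) = drag for M
print(round(M))                       # ~17,180 kg
```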
The more I look at this, the more I realize this really is your answer. This is a rocket that weighs 17 tonnes. Its delta v to orbit is 12.8 km/s, as opposed to the normal (more favorable) 10 km/s, and this is because of the higher air drag because of its size. It will stand 47 feet tall, and it will deliver its own tank into orbit (which weighs 680 kg). Nothing else.
This is your answer, because you cannot go any smaller without adding multiple stages.
Is this an efficient way to deliver tanks into orbit? No. However, as people have pointed out before, if we wanted tanks in orbit, we would have used the shuttle external tank, which was 300 m/s short of full orbit when it was dropped into the ocean.
It's still arguably the most efficient way to deliver a tank to orbit
at that size. If you want more tanks in orbit for cheaper, you'll need larger rockets, so that they'll be more efficient. However, I think the problem would reduce to the same thing if you continued to scale it up and left the rest as empty space. This is a completely insane idea, but it would follow the same mathematics I used here. In other words, you strap a large empty tank onto the solid booster rocket.
Obviously, a better approach would be to take up inflatable modules in a normal rocket. If you absolutely needed rigid tanks, perhaps you could wrap them like Russian dolls around each other, forming a rocket larger than the 17 tonne baseline. Then you would unpack them when you got to orbit. Now you have lots of tanks, and can proceed with your evil plan.
I hereby call our plan "Single Tank To Orbit", abbreviated STTO.
A couple of additional points to AlanSE's answer:
You can experiment with rocket equation by yourself using WolframAlpha. Just plug in your own values for final mass (payload), exhaust velocity etc.
The Lockheed Martin X-33 was supposed to be a technology demonstrator for the VentureStar single-stage-to-orbit spaceplane, so you can use its design specifications to compare with estimates from the rocket equation. VentureStar was expected to deliver 20 tonnes of payload to low Earth orbit, and then safely return to the surface. The mass at launch would have been 1000 tonnes.
One possibility to avoid the tyranny of the rocket equation is to use atmospheric oxygen as oxidizer, that is, to use air-breathing jet engines to provide at least part of the delta-v. Scramjet technology could provide speeds in the atmosphere up to Mach 12-17, thus reducing the delta-v needed for the final acceleration into orbit. |
Jan A. Sanders Vrije Universiteit Amsterdam
Under construction.
Contents introduction
The plan is to give an introduction to Lie algebra cohomology that can be followed on different levels. The development of the cohomological theory will require nothing beyond the basic rules for Lie algebras and representations. To make things more interesting, the theory is developed for Leibniz algebras, which are not so well known. This approach has the advantage of simplifying things because there is less choice. Of course, at some point, this implies that some creativity is needed to do the necessary generalizations.
definition of Lie (and Leibniz) algebra
A
Leibniz algebra \(\mathfrak{g}\) is a module or vector space over a ring or a field \(R\) (think of \(\mathbb{R}\) or \(\mathbb{C}\)) with a bilinear operation \([\cdot,\cdot]\) obeying the following rule: <math Jacobi>[[x,y],z]=[x,[y,z]]-[y,[x,z]],\quad x,y,z\in\mathfrak{g}</math>.
If, moreover, one has that
<math Antisymmetry>[x,y]+[y,x]=0,\quad x,y\in\mathfrak{g}</math>,
then we say that \(\mathfrak{g}\) is a
Lie algebra. Lie algebras have been extensively studied for more than a century; Leibniz algebras are a more recent invention and much less is known about them. corollary
\[ [[x,y]+[y,x],z]=0,\quad x,y,z\in \mathfrak{g}\]
example class of a Lie algebra
Let \(\mathcal{A}\) be an associative algebra, that is, \((xy)z=x(yz)\) for all \(x,y,z\in\mathcal{A}\) (in other words, one can forget the brackets around the multiplication). Then define a bracket by \[ [x,y]=xy-yx\] This defines a Lie algebra structure on \(\mathcal{A}\) (Check!).
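The check can also be spot-checked numerically for matrices, whose product is associative (a sketch; the random matrices are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3, 3))   # three random 3x3 matrices

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

# The (Leibniz form of the) Jacobi identity holds
assert np.allclose(br(br(x, y), z), br(x, br(y, z)) - br(y, br(x, z)))
# Antisymmetry holds too, so this is in fact a Lie algebra
assert np.allclose(br(x, y), -br(y, x))
```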
the Lie algebra \(\mathfrak{sl}_2\)
Consider the triple \(\langle M, N, H \rangle\) with commutation relations \[ [M,N]=H,\quad [H,M]=2M,\quad [H,N]=-2N\] Checking the Jacobi identity is a lot of trivial work, which can be avoided by realizing the Lie algebra as an associative algebra.
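Realizing the triple as 2×2 matrices with the commutator bracket (the standard defining representation, as used again below) makes the check mechanical:

```python
import numpy as np

M = np.array([[0, 1], [0, 0]])
N = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def comm(a, b):
    return a @ b - b @ a   # [a, b] = ab - ba

assert np.array_equal(comm(M, N), H)       # [M, N] = H
assert np.array_equal(comm(H, M), 2 * M)   # [H, M] = 2M
assert np.array_equal(comm(H, N), -2 * N)  # [H, N] = -2N
```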
example of a Leibniz algebra
Let \(\mathcal{A}\) be an associative algebra. Let \( P \) be a projector in \(\mathrm{End}(\mathcal{A})\), that is, \(P^2=P\). Suppose that \(P(aP(b))=P(a)P(b)\) and \(P(P(a)b)=P(a)P(b)\). Denote \(Px\) by \(\bar{x}\). Define \[ [x,y]=\bar{x}y-y\bar{x}\] This provides \(\mathcal{A}\) with a Leibniz algebra structure: \[[[x,y],z]-[x,[y,z]]+[y,[x,z]]=\overline{[x,y]}z-z\overline{[x,y]}-\bar{x}[y,z]+[y,z]\bar{x}+\bar{y}[x,z]-[x,z]\bar{y}\]
\[=\overline{(\bar{x}y-y\bar{x})}z-z\overline{(\bar{x}y-y\bar{x})} -\bar{x}(\bar{y}z-z\bar{y})+(\bar{y}z-z\bar{y})\bar{x} +\bar{y}(\bar{x}z-z\bar{x})-(\bar{x}z-z\bar{x})\bar{y}\] \[=(\bar{x}\bar{y}-\bar{y}\bar{x})z-z(\bar{x}\bar{y}-\bar{y}\bar{x}) -\bar{x}\bar{y}z+\bar{x}z\bar{y}+\bar{y}z\bar{x}-z\bar{y}\bar{x} +\bar{y}\bar{x}z-\bar{y}z\bar{x}-\bar{x}z\bar{y}+z\bar{x}\bar{y}\] \[=(\bar{x}\bar{y}-\bar{y}\bar{x})z-z(\bar{x}\bar{y}-\bar{y}\bar{x}) -\bar{x}\bar{y}z-z\bar{y}\bar{x} +\bar{y}\bar{x}z+z\bar{x}\bar{y}\] \[=0\]
When \(P\) is the identity on \(\mathcal{A}\), then one has a Lie algebra.
An example of this is the following. Let \(\mathcal{A}\) consist of formal power series \(a(z)=\sum_{i\in\mathbb{Z}}a_i z^i\) and let \((P a)(z)=a_0\).
morphism
Let \(\phi:\mathfrak{a}\rightarrow\mathfrak{b}\) be a linear map. If \(\phi([x,y]_{\mathfrak{a}})=[\phi(x),\phi(y)]_{\mathfrak{b}}\) then \(\phi\) is a Lie (Leibniz) algebra morphism.
linear forms
The space of \(n\)-linear (linear in the \(R\)-module structure) forms, with arguments in \(\mathfrak{g}\) and values in \(\mathfrak{a}\), is denoted by \(C^n(\mathfrak{g},\mathfrak{a})\). Notice that these are not required to be antisymmetric, contrary to the common Lie algebra cohomology convention.
super remark
A
super Leibniz algebra is a module \(\mathfrak{g}=\mathfrak{g}^0\oplus\mathfrak{g}^1\) and a bracket such that \[ [\mathfrak{g}^i,\mathfrak{g}^j]\subset\mathfrak{g}^{(i+j) \bmod 2}\] obeying, with \(x\in\mathfrak{g}^{|x|}\) and \(y\in\mathfrak{g}^{|y|}\) (where \(|\cdot|:\mathfrak{g}^i\mapsto i\)) and \(z\in\mathfrak{g}\), the super Jacobi identity \[ [[x,y],z]=[x,[y,z]]-(-1)^{|x||y|}[y,[x,z]]\] Observe that \(\mathfrak{g}^0\) itself is a Leibniz algebra.
Since antisymmetry is not assumed in a Leibniz algebra, the order of the elements in an expression cannot be changed around. This makes it a rather trivial exercise to check that the theory to be developed below immediately applies to the super case. For instance, in the corollary above, we just have to keep track of the interchange in the order to obtain \[ [[x,y]+(-1)^{|x||y|}[y,x],z]=0\] A
super Lie algebra is a super Leibniz algebra with \[ [x,y]=-(-1)^{|x||y|}[y,x],\quad x\in \mathfrak{g}^{|x|},y\in\mathfrak{g}^{|y|}\] Observe that \(\mathfrak{g}^0\) itself is a Lie algebra. representations of Lie algebras
Let \(\mathfrak{g}\) be a Lie algebra and \(\mathfrak{a}\) be a module or a vector space. Then we say that \(d^{(0)}:\mathfrak{g}\rightarrow \mathrm{End}(\mathfrak{a})\) is a
representation of \(\mathfrak{g}\) in \(\mathfrak{a}\) if <math Representation0>d^{(0)}([x,y])=[d^{(0)}(x),d^{(0)}(y)]=d^{(0)}(x)d^{(0)}(y)-d^{(0)}(y)d^{(0)}(x),\quad x,y\in\mathfrak{g}</math>. example of a representation
Take \(\mathfrak{a}=\mathfrak{g}\) and \(d^{(0)}(x)y=[x,y]\). This is called the
adjoint representation and written as \(\mathrm{ad}(x)y\). representation of \(\mathfrak{sl}_2\)
Let \(\mathfrak{a}=\mathbb{R}^2\). Take \[ d^{(0)}(M)=\begin{bmatrix} 0&1\\0&0\end{bmatrix}, \quad d^{(0)}(N)=\begin{bmatrix} 0&0\\1&0\end{bmatrix}, \quad d^{(0)}(H)=\begin{bmatrix} 1&0\\0&-1\end{bmatrix}\] Then \( d^{(0)}([H,M])=[d^{(0)}(H),d^{(0)}(M)]\), etc., that is, \( d^{(0)}\) is a
representation of \(\mathfrak{sl}_2\) in \(\mathbb{R}^2\). Since \( d^{(0)}(x_1 N + x_2 M + x_3 H)=0\) implies \(x_1=x_2=x_3=0\), one can now easily check the Jacobi identity for \(\mathfrak{sl}_2\), since it follows from the Jacobi identity in the case of an associative algebra. representations of Leibniz algebras
The definition of a Leibniz algebra representation is best motivated by the construction in the second lecture. The idea is to form a new Leibniz algebra given a Leibniz algebra \(\mathfrak{g}\) and a module \(\mathfrak{a}\) as follows. One considers the direct sum (as \(R\)-modules) \(\mathfrak{a}\oplus_R \mathfrak{g}\) and one requires the Jacobi identity to hold: \[[[a_1+x_1,a_2+x_2],a_3+x_3]=[a_1+x_1,[a_2+x_2,a_3+x_3]]-[a_2+x_2,[a_1+x_1,a_3+x_3]],\quad a_i\in\mathfrak{a}, x_i\in\mathfrak{g},\quad i=1,2,3\] One defines \[ d_+^{(0)}(x)a=[x,a]\] \[ d_-^{(0)}(x)a=-[a,x]\] Require \([\mathfrak{a},\mathfrak{a}]=0\), \([\mathfrak{a},\mathfrak{g}]\subset\mathfrak{a}\) and \([\mathfrak{g},\mathfrak{a}]\subset\mathfrak{a}\). Then this leads to the following definition.
definition
If \(d_\pm^{(0)}\) obeys the following three axioms
<math Representation>d_+^{(0)}([x,y])=d_+^{(0)}(x)d_+^{(0)}(y)-d_+^{(0)}(y)d_+^{(0)}(x),\quad x,y\in\mathfrak{g}</math>, <math Representation1>d_-^{(0)}([x,y])=d_+^{(0)}(x)d_-^{(0)}(y)-d_-^{(0)}(y)d_{\pm}^{(0)}(x),\quad x,y\in\mathfrak{g}</math>,
then it is called a
Leibniz algebra representation. Notice that the two conditions in (<ref>Representation1</ref>) give rise to the compatibility condition <math Representation2>d_{-}^{(0)}(y)d_{+}^{(0)}(x)=d_{-}^{(0)}(y)d_{-}^{(0)}(x),\quad x,y\in\mathfrak{g}</math>. definition
If there is only one representation \(d^{(0)}=d_+^{(0)}=d_-^{(0)}\), obeying
<math Representation3>d^{(0)}([x,y])=d^{(0)}(x)d^{(0)}(y)-d^{(0)}(y)d^{(0)}(x),\quad x,y\in\mathfrak{g}</math>
one speaks of an
even representation. remark
In the case of a Lie algebra, one simply has \(d_{\pm}^{(0)}=d^{(0)}\) (One could also take \(d_+^{(0)}=d^{(0)}\) and \(d_-^{(0)}=0\); this would however have the later disadvantage that the coboundary operator would not carry antisymmetric forms to antisymmetric forms).\(\quad\square\)
example of a representation
Take \(\mathfrak{a}=\mathfrak{g}\) and \(d^{(0)}(x)y=[x,y]\). This is called the
adjoint representation and written as \(\mathrm{ad}_+(x)y\) or \(-\mathrm{ad}_-(y)x\). final super remark
In the super case this would have to be changed to
<math SRepresentation>d_+^{(0)}([x,y])=d_+^{(0)}(x)d_+^{(0)}(y)-(-1)^{|x||y|}d_+^{(0)}(y)d_+^{(0)}(x),\quad x\in\mathfrak{g}^{|x|},y\in\mathfrak{g}^{|y|}</math> <math SRepresentation1>d_-^{(0)}([x,y])=d_+^{(0)}(x)d_-^{(0)}(y)-(-1)^{|x||y|}d_-^{(0)}(y)d_{\pm}^{(0)}(x),\quad x\in\mathfrak{g}^{|x|},y\in\mathfrak{g}^{|y|}</math> the coboundary operator
We now define the first instance of the
coboundary operator \(d^0\): Let \(a^0\in\mathfrak{a}=C^0(\mathfrak{g},\mathfrak{a})\). Then define \(d^0 a^0\in C^1(\mathfrak{g},\mathfrak{a})\) by <math coboundary0>d^0 a^0 (x)=d_-^{(0)}(x)a^0</math>.
Thus \(d^0 :C^0(\mathfrak{g},\mathfrak{a})\rightarrow C^1(\mathfrak{g},\mathfrak{a})\). By itself, the zeroth order coboundary operator is not much fun. But there is more. Let \(a^1\in C^1(\mathfrak{g},\mathfrak{a})\). Then define \(d^1 a^1\in C^2(\mathfrak{g},\mathfrak{a})\) by
<math coboundary1>d^1 a^1(x,y)=d_+^{(0)}(x)a^1(y)-d_-^{(0)}(y)a^1(x)-a^1([x,y])</math>.
Thus \(d^1:C^1(\mathfrak{g},\mathfrak{a})\rightarrow C^2(\mathfrak{g},\mathfrak{a})\). One checks that \(d^1d^0=0\): \[ d^1d^0 a^0(x,y)=d_+^{(0)}(x)d^0a^0(y)-d_-^{(0)}(y)d^0a^0(x)-d^0a^0([x,y])= d_+^{(0)}(x)d_-^{(0)}(y)a^0-d_-^{(0)}(y)d_-^{(0)}(x)a^0-d_-^{(0)}([x,y])a^0= 0. \]
In general, when one has defined \(d^i:C^i(\mathfrak{g},\mathfrak{a})\rightarrow C^{i+1}(\mathfrak{g},\mathfrak{a})\) such that \(d^{i+1}d^i=0\), then one calls \(d^\cdot\) a coboundary operator. To treat the example of central extensions one needs one more coboundary operator. Let \(a^2\in C^2(\mathfrak{g},\mathfrak{a})\) be a two-form. Then define
<math coboundary2>d^2 a^2(x,y,z)=d_+^{(0)}(x)a^2(y,z)-d_+^{(0)}(y)a^2(x,z)+d_-^{(0)}(z)a^2(x,y)-a^2([x,y],z)-a^2(y,[x,z])+a^2(x,[y,z])</math>. remark
These definitions are motivated by the central extension problem in the second lecture.
exercise
Show that \(d^2 d^1=0\). |
This question pops up in my head every few weeks and I'm struggling to really understand the concept / theory behind it.
We all know there are different kinds of dependency measures out there, from Pearson's $\rho$ to Kendall's $\tau$ to more sophisticated tail dependence measures from copulas:
$$\lim_{q\downarrow0}\frac{C(q,q)}{q}$$
where $C$ is the copula of two random variables $X, Y$.
In portfolio theory we often assess risk with the standard deviation in optimization, i.e. $\sqrt{w^T\Sigma w}$. The nice thing about Pearson correlation is that we know $\mathrm{Var}(w^TX)= w^T\Sigma w$ if $\Sigma$ is the covariance matrix of the random vector $X$. This simple fact allows us to use the dependency measure (Pearson correlation) to define a risk measure which has a simple structure.
If we think about it more deeply: we have a dependency matrix $\Sigma$ containing pairwise dependency measures. Using $f_\Sigma (w):= w^T\Sigma w$ then maps these pairwise dependencies to a single risk number for the total portfolio.
First question: Can we come up with such a meaningful $f$ for other pairwise dependency measures as well, to obtain an overall single risk number for the portfolio? Meaningful in this context simply means that we are really capturing a sort of risk of the overall portfolio.
I often see that the same quadratic $f$ is used but one just replaces the pairwise dependency matrix. For example using Kendall's tau, correlation or tail dependencies as entries in $\Sigma$.
Second question: Does this replacement even make sense from a mathematical and risk perspective? What are the pitfalls of just blindly using such a $\Sigma$ with a different dependency measure? For example, the package FRAPO calculates a minimal tail dependence portfolio, quoting from this link: "Akin to the optimisation of a global minimum-variance portfolio, the minimum tail dependent portfolio is determined by replacing the variance-covariance matrix with the matrix of the lower tail dependence coefficients as returned by tdc."
At least to me it's not clear why this should work and represent a number which tells you something about the risk of the portfolio.
Example for answer Wintermute
I'm not sure if I understand / agree. I did the following example in R. Suppose we have 4 assets with the following correlation matrix:
> cor <- matrix(c(1, 0.9, -0.95, -0.96,
+                 0.9, 1, -0.98, -0.92,
+                 -0.95, -0.98, 1, 0.97,
+                 -0.96, -0.92, 0.97, 1), nrow=4, byrow=T)
> cor
      [,1]  [,2]  [,3]  [,4]
[1,]  1.00  0.90 -0.95 -0.96
[2,]  0.90  1.00 -0.98 -0.92
[3,] -0.95 -0.98  1.00  0.97
[4,] -0.96 -0.92  0.97  1.00
i.e. two assets are highly correlated and two are highly anticorrelated. Assume we have an equally weighted portfolio, and the volatilities (standard deviations) of the assets are $(0.12,0.08,0.15,0.02)$. Then the covariance matrix is
> cov <- matrix(c(cor[1,1]*0.12*0.12, cor[1,2]*0.12*0.08, cor[1,3]*0.12*0.15, cor[1,4]*0.12*0.02,
+                 cor[2,1]*0.12*0.08, cor[2,2]*0.08*0.08, cor[2,3]*0.08*0.15, cor[2,4]*0.08*0.02,
+                 cor[3,1]*0.12*0.15, cor[3,2]*0.15*0.08, cor[3,3]*0.15*0.15, cor[3,4]*0.15*0.02,
+                 cor[4,1]*0.12*0.02, cor[4,2]*0.02*0.08, cor[4,3]*0.02*0.15, cor[4,4]*0.02*0.02), nrow=4, byrow=T)
> cov
          [,1]      [,2]     [,3]      [,4]
[1,]  0.014400  0.008640 -0.01710 -0.002304
[2,]  0.008640  0.006400 -0.01176 -0.001472
[3,] -0.017100 -0.011760  0.02250  0.002910
[4,] -0.002304 -0.001472  0.00291  0.000400
Checking quickly that I didn't mess anything up and that the matrices are positive semi-definite:
> cov2cor(cov)
      [,1]  [,2]  [,3]  [,4]
[1,]  1.00  0.90 -0.95 -0.96
[2,]  0.90  1.00 -0.98 -0.92
[3,] -0.95 -0.98  1.00  0.97
[4,] -0.96 -0.92  0.97  1.00
> eigen(cov)
$values
[1] 4.237828e-02 1.181651e-03 1.278374e-04 1.223631e-05
$vectors
            [,1]        [,2]       [,3]        [,4]
[1,] -0.56773237  0.78783396 -0.2372159  0.02694845
[2,] -0.37734597 -0.49612636 -0.7636994 -0.16802342
[3,]  0.72548256  0.36316206 -0.5520474 -0.19243722
[4,]  0.09468381 -0.03591096 -0.2360838  0.96644184
> eigen(cor)
$values
[1] 3.840405106 0.115703083 0.038349747 0.005542064
$vectors
           [,1]       [,2]        [,3]        [,4]
[1,] -0.4960354  0.5839966 -0.63706112 -0.08396417
[2,] -0.4947472 -0.7033709 -0.19753124 -0.47061241
[3,]  0.5078001  0.2206994 -0.08387845 -0.82848975
[4,]  0.5013114 -0.3398664 -0.74033705  0.29168255
That all looks about right. Now let's calculate $w^T\Sigma w$ and $w^TCw$. According to you we should see a difference. However, they are pretty much in line with each other:
> sum(w*(cov%*%w))
[1] 9.55e-05
> sum(w*(cor%*%w))
[1] 0.0075
> w
[1] 0.25 0.25 0.25 0.25
the risk (using the covariance matrix) is essentially zero, as we would expect. |
Preliminary note: You have not stated the support of your density, so I am going to assume it is $x \geqslant \mu$, since this gives a valid density function. Under this interpretation, this means that you have a shifted exponential distribution with shift parameter $\mu$ and scale parameter $\sigma$.
Firstly, the function $T$ is not a one-to-one function of $T'$, so no, you can't just assert that. Secondly, the result you are trying to prove is in fact incorrect; the statistic $T$ is not sufficient for $\sigma$ in this case. To prove sufficiency you would generally apply the Fisher-Neyman factorisation theorem, which says that $T$ is sufficient if and only if you can write the likelihood function as:
$$L_\mathbf{x}(\theta) \propto g(\theta, T(\mathbf{x})),$$
where $\theta$ is the unknown parameter. (In other words, the likelihood function depends on the data vector $\mathbf{x}$ only through the statistic $T(\mathbf{x})$.) Denoting $m_n \equiv \min (x_1,...,x_n)$ we have:
$$\begin{equation} \begin{aligned}L_\mathbf{x}(\mu,\sigma) &= \prod_{i=1}^n f(x_i|\mu,\sigma) \\[6pt]&= \prod_{i=1}^n \frac{1}{\sigma} \cdot \exp \Big(- \frac{x_i-\mu}{\sigma} \Big) \cdot \mathbb{I}(x_i \geqslant \mu) \\[6pt]&= \frac{1}{\sigma^n} \cdot \exp \Big(- \frac{n(\bar{x} - \mu)}{\sigma} \Big) \cdot \mathbb{I}(m_n \geqslant \mu) \\[6pt]&= \frac{1}{\sigma^n} \cdot \exp \Big(- \frac{n(\bar{x} - m_n)}{\sigma} - \frac{n(m_n - \mu)}{\sigma} \Big) \cdot \mathbb{I}(m_n - \mu \geqslant 0) \\[6pt]&= \frac{1}{\sigma^n} \cdot \exp \Big(- \frac{n(\bar{x} - m_n)}{\sigma} \Big) \cdot \exp \Big(- \frac{n(m_n - \mu)}{\sigma} \Big) \cdot \mathbb{I}(m_n - \mu \geqslant 0). \\[6pt]\end{aligned} \end{equation}$$
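The factorisation can be sanity-checked numerically by comparing the direct product of densities with the final decomposed form (a sketch with assumed parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 1.5, 50
x = mu + rng.exponential(sigma, size=n)   # sample from the shifted exponential
xbar, mn = x.mean(), x.min()

# Direct likelihood: product of the individual densities (indicator is 1 here)
L_direct = np.prod(np.exp(-(x - mu) / sigma) / sigma)

# Factorised form from the last line of the decomposition
L_fact = (sigma ** -n
          * np.exp(-n * (xbar - mn) / sigma)
          * np.exp(-n * (mn - mu) / sigma))

print(np.isclose(L_direct, L_fact))  # True
```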
From this decomposition, we can see that the statistic $(\bar{x} - m_n, m_n)$ is minimal sufficient for both parameters, and also minimal sufficient for $\sigma$ when $\mu$ is known. (We can also see that the statistic $m_n$ is minimal sufficient for $\mu$ when $\sigma$ is known.) |
Theoretically it's possible to do this, although it will often not be practical.
Let's consider this in polynomial space. For a filter of order N you have 2N+1 independent variables (N for the denominator and N+1 for the numerator). Let's look at an arbitrary point $z_{k}$ in the z-plane and say the value of the transfer function at this point is $H(z_{k})$. The relationship between the transfer function and all the filter coefficients can be written as an equation that's linear in all filter coefficients as follows: $$\sum_{n=0}^{N}b_{n}\cdot z_{k}^{-n} - H(z_{k})\cdot \sum_{n=1}^{N}a_{n}\cdot z_{k}^{-n}=H(z_{k})$$ So if you pick M different frequencies $z_{k}$ you'll end up with a set of M complex linear equations, or 2M real equations. Since your number of unknowns is odd (2N+1) you probably always want to pick one frequency where z is real, i.e. z = 1 or $\omega$ = 0.
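A sketch of this coefficient recovery in NumPy, using an assumed first-order filter (N = 1, so 2N+1 = 3 unknowns) and a handful of sample frequencies:

```python
import numpy as np

# Assumed "unknown" filter: H(z) = (b0 + b1*z^-1) / (1 + a1*z^-1)
b0_true, b1_true, a1_true = 1.0, 0.5, -0.3

w = np.linspace(0, np.pi, 8)          # sample frequencies, includes w = 0 (z = 1)
z = np.exp(1j * w)
Hz = (b0_true + b1_true / z) / (1 + a1_true / z)

# One linear equation per sample: b0 + b1*z^-1 - H(z)*a1*z^-1 = H(z)
A = np.column_stack([np.ones_like(z), 1 / z, -Hz / z])
coeffs, *_ = np.linalg.lstsq(A, Hz, rcond=None)
print(coeffs.real)  # recovers [1.0, 0.5, -0.3]
```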
If M is larger than N, then the system of equations is linearly dependent. You can find the filter order by starting at N=1 and increasing N until the equation system becomes linearly dependent. The largest N at which the system is linearly independent is the actual filter order. For this approach, it doesn't even matter what frequencies you pick. As long as they are different, any set of frequencies will work.
However, this is a numerically very tricky problem. The polynomial representation for larger filter orders is numerically very fragile, and the smallest amount of noise or uncertainty leads to very large numerical errors. For example, if you determine the values of the sampled transfer function through measurement, the required measurement accuracy will be prohibitive unless it's a very benign low-order filter. |
How can I determine the convergence of this series $$\sum_{n=1}^\infty\frac{1}{n\sqrt[n]{n}}$$?
I tried the ratio test, the root test, and Raabe's test. However, I'm not getting anywhere. Can you please help me? Thank you.
For $n$ sufficiently large, $\root n\of n<2$; so, $${1\over n\,\root n\of n}>{1\over 2n}$$ for sufficiently large $n$.
Since the series $\sum\limits_{n=1}^\infty {1\over 2n}$ diverges (it is essentially the harmonic series), it follows from the Comparison test that the series $\sum\limits_{n=1}^\infty {1\over n\,\root n\of n}$ diverges.
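Numerically, $\sqrt[n]{n}$ indeed stays below the bound $2$ and tends to $1$, so the partial sums track the harmonic series (a quick illustration):

```python
# the n-th root of n stays below 2 (the bound used above) and tends to 1
nth_root = {n: n ** (1 / n) for n in (2, 3, 10, 100, 10_000, 1_000_000)}
print(nth_root)

# partial sums of 1/(n * n**(1/n)) keep growing, like the harmonic series
s, checkpoints = 0.0, []
for n in range(1, 100_001):
    s += 1 / (n * n ** (1 / n))
    if n in (10, 1_000, 100_000):
        checkpoints.append(s)
print(checkpoints)  # each checkpoint is markedly larger than the last
```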
Hint: search an equivalent of $\sqrt[n]{n}$ as $n$ goes to $+\infty$ and use the Limit comparison test. |
I) Recall that the probability current in 1D QM is
$$\tag{1} J(x)~:=~\frac{i\hbar}{2m}W(\psi,\psi^{\ast}) (x), $$
where
$$\tag{2} W(\psi,\psi^{\ast}) (x)~:=~\psi(x) \psi^{\prime}(x)^{\ast}- \psi^{\prime}(x)\psi(x)^{\ast}$$
is the Wronskian. Unitarity of the $S$-matrix is equivalent to the statement that
$$ \lim_{x\to -\infty} W(\psi,\psi^{\ast}) (x)~=~\lim_{x\to +\infty} W(\psi,\psi^{\ast}) (x), \tag{3}$$
cf. e.g. eq. (11) in my Phys.SE answer here. A sufficient condition for unitarity is if the Wronskian (2) does not depend on the $x$-position.
II) Firstly, there exist many semiclassical WKB approximations in the literature, e.g. to what order in $\hbar$ are we talking? If we consider some truncated WKB approximation scheme, there is no
a priori reason why the unitarity condition (3) should be satisfied.
Secondly, the one-directional nature of the connection formulas often makes it impossible to determine the exponentially growing branch in a classically forbidden side from knowledge about the classically allowed side of a turning point, in particular in situations with many turning points, cf. e.g. Refs. 1-3.
OP is presumably interested in the case of a single potential barrier with two turning points. This case has been completely resolved in the literature using e.g. uniform approximation techniques, and the resulting WKB formula preserves unitarity, see e.g. Ref. 3. However, we shall not consider these methods here.
For the rest of this answer, we will consider the following WKB formula
$$ \psi(x)~=~A(x)\sum_{\sigma=\pm 1} C_{\sigma}(x;x_0)~\exp\left(\frac{i}{\hbar}\sigma S(x;x_0)\right)$$ $$ ~=~A(x)\sum_{\sigma=\pm 1} B_{\sigma}(x;x_0)~\cos\left(\frac{1}{\hbar} S(x;x_0)+\sigma\frac{\pi}{4}\right),\tag{4}$$
written either in terms of an exponential function or a cosine function, where
$$A(x)~:=~\frac{1}{\sqrt{p(x)}} , \qquad S(x;x_0)~:=~ \int_{x_0}^x\!dx^{\prime}~p(x^{\prime}), $$ $$ p(x)~:=~ \sqrt{2m(E-V(x))},\qquad E,V(x)~\in~\mathbb{R}; \tag{5} $$
$x_0\in \mathbb{R}$ is a fixed reference point; $\sigma\in\{\pm 1\}$ is a sign; and $B_{\sigma}(x;x_0),C_{\sigma}(x;x_0)\in\mathbb{C}$ are complex piecewise $x$-independent constants. As we shall see in Section VI, the piecewise constants $B_{\sigma}(x;x_0)$ and $C_{\sigma}(x;x_0)$ may jump at turning points because of the metaplectic correction/Maslov index, cf. e.g. this Phys.SE post and references therein. It is implicitly understood that the two square roots in formula (5) are appropriately chosen branches, not double-valued functions.
III) We have no
a priori argument why the Wronskian of the WKB approximation (4) should be $x$-independent to all orders in $\hbar$. Nevertheless, it is interesting and instructive to analyze its form. The Wronskian of the WKB approximation (4) becomes
$$ W(\psi,\psi^{\ast}) ~\stackrel{(2)+(4)}{=}~\sum_{\sigma=\pm 1}\sum_{\sigma^{\prime}=\pm 1}\exp\left[\frac{i}{\hbar}\left(\sigma S -\sigma^{\prime}S^{\ast}\right)\right]$$$$\left[|A|^2(C_{\sigma} C^{\prime\ast}_{\sigma^{\prime}}- C^{\prime}_{\sigma} C^{\ast}_{\sigma^{\prime}}) + C_{\sigma} C^{\ast}_{\sigma^{\prime}}\left\{ (AA^{\prime \ast} - A^{\prime}A^{\ast})- \frac{i}{\hbar}|A|^2(\sigma p+\sigma^{\prime} p^{\ast} ) \right\}\right] .\tag{6} $$
Firstly, one may argue (using arguments$^1$ similar to what is done below) that the term
$$AA^{\prime \ast} - A^{\prime}A^{\ast}~=~0\tag{7} $$
vanishes in eq. (6). Secondly, the term $ C_{\sigma} C^{\prime\ast}_{\sigma^{\prime}} - C^{\prime}_{\sigma} C^{\ast}_{\sigma^{\prime}} $ is proportional to delta functions with supports at the turning points. Hence, away from the turning points, the Wronskian (6) simplifies to
$$ W^{\neq}(\psi,\psi^{\ast}) ~\stackrel{(6)}{=}~- \frac{i}{\hbar}|A|^2 \sum_{\sigma=\pm 1}\sum_{\sigma^{\prime}=\pm 1}C_{\sigma} C^{\ast}_{\sigma^{\prime}}\left(\sigma p+\sigma^{\prime} p^{\ast} \right)\exp\left[\frac{i}{\hbar}\left(\sigma S -\sigma^{\prime}S^{\ast}\right)\right] $$$$~=~\frac{1}{\hbar}|A|^2 \sum_{\sigma=\pm 1}\sum_{\sigma^{\prime}=\pm 1}B_{\sigma} B^{\ast}_{\sigma^{\prime}} \left[p\sin\left(\frac{1}{\hbar} S+\sigma\frac{\pi}{4}\right)\cos\left(\frac{1}{\hbar} S^{\ast}+\sigma^{\prime}\frac{\pi}{4}\right)\right. $$$$\left. -p^{\ast}\cos\left(\frac{1}{\hbar} S+\sigma\frac{\pi}{4}\right)\sin\left(\frac{1}{\hbar} S^{\ast}+\sigma^{\prime}\frac{\pi}{4}\right)\right]. \tag{8} $$
IV)
Classically allowed region $E>V(x)$. Then $p$ and $S$ are both real, and we must choose the same signs $\sigma=\sigma^{\prime}$ in the exponential version of eq. (8) to get non-zero contributions. The exponential factor is then just $1$. The Wronskian (8) is $x$-independent
$$\tag{9} W^>(\psi,\psi^{\ast}) ~\stackrel{(8)}{=}~- \frac{2i}{\hbar}|A|^2p\sum_{\sigma=\pm 1}\sigma |C_{\sigma}|^2~=~ \frac{1}{\hbar}|A|^2p\sum_{\sigma=\pm 1}\sigma B_{\sigma}B^{\ast}_{-\sigma}, $$
because $|A|^2p$ is $x$-independent, cf. eq. (5). The last expression of eq. (9) follows from the trigonometric version of eq. (8) with the help of a trigonometric addition formula.
V)
Classically forbidden region $E<V(x)$. Then $p$ and $S$ are both imaginary, and we must choose opposite signs $\sigma=-\sigma^{\prime}$ in the exponential version of eq. (8) to get non-zero contributions. The exponential factor is then just $1$. The Wronskian (8) is $x$-independent
$$\tag{10} W^<(\psi,\psi^{\ast}) ~\stackrel{(8)}{=}~- \frac{2i}{\hbar}|A|^2p\sum_{\sigma=\pm 1}\sigma C_{\sigma}C^{\ast}_{-\sigma}~=~ \frac{1}{\hbar}|A|^2p\sum_{\sigma=\pm 1}\sigma |B_{\sigma}|^2. $$
(The superscripts $>$ and $<$ refer to the classically allowed and forbidden regions, respectively.)
VI)
A turning point $V(x_0)=E \Leftrightarrow p(x_0)=0$. Let us for notational simplicity assume $x_0=0$. Assume that in a neighborhood around $x_0=0$, the potential can be approximated with a linear$^2$ potential
$$\tag{11} V(x)-E~\propto ~x,$$
i.e. one side is classically allowed and the other side is classically forbidden. Then eq. (5) implies
$$\tag{12} p(x)~\propto ~x^{\frac{1}{2}}, \qquad S(x)~\propto ~x^{\frac{3}{2}},\qquad A(x)~\propto ~x^{-\frac{1}{4}}. $$
The TISE of the linear potential (11) becomes the Airy equation. From the asymptotic expansion of the Airy functions, one derives the Airy connection formulas
$$\tag{13} \exp\left(\sigma\frac{2}{3}|x|^{\frac{3}{2}}\right)\quad\longleftrightarrow\quad\frac{3-\sigma}{2}\cos\left( \frac{2}{3}|x|^{\frac{3}{2}}+\sigma\frac{\pi}{4}\right) $$
between the positive and negative $x$-axis, where $\sigma\in\{\pm 1\}$. The Airy connection formulas (13) lead to the WKB connection formulas
$$\tag{14} C^<_{+1}~=~e^{\frac{i\pi}{4}}B^>_{+1}, \qquad 2C^<_{-1}~=~e^{\frac{i\pi}{4}}B^>_{-1}, $$
for the complex coefficients $B_{\sigma},C_{\sigma}\in\mathbb{C}$, if we adapt the following sign conventions
$$\tag{15} p^<~=~ip^>\qquad A^<~=~e^{-\frac{i\pi}{4}}A^>$$
for the square roots in eq. (5). In matrix-form we have
$$\tag{16} C^<~=~\begin{pmatrix} 1 & i \cr \frac{i}{2} &\frac{1}{2} \end{pmatrix} C^>\qquad\text{and}\qquad C^>~=~\begin{pmatrix} \frac{1}{2} & -i \cr -\frac{i}{2} &1 \end{pmatrix} C^<.$$
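As a quick numerical sanity check of eq. (16) — the script below and the use of NumPy are my addition, not part of the argument — one can verify that the two connection matrices are indeed mutual inverses:

```python
import numpy as np

# The two connection matrices of eq. (16): C^< = M_lt @ C^> and C^> = M_gt @ C^<.
M_lt = np.array([[1.0, 1.0j],
                 [0.5j, 0.5]])
M_gt = np.array([[0.5, -1.0j],
                 [-0.5j, 1.0]])

# If eq. (16) is consistent, both products must be the 2x2 identity.
print(np.allclose(M_lt @ M_gt, np.eye(2)))  # True
print(np.allclose(M_gt @ M_lt, np.eye(2)))  # True
```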
Note that the connection formulas (14) and (15) imply that the Wronskian
$$\tag{17} W^>(\psi,\psi^{\ast}) ~=~W^<(\psi,\psi^{\ast}) $$
of the WKB approximation (4) does not jump at the turning point, cf. eqs. (9) and (10).
The Airy connection formulas (13) may superficially look like bidirectional connection formulas, but that is an illusion: because of their asymptotic nature, they involve a (possibly illegitimate) implicit choice of direction, which we have made for simplicity.
It is possible to use more general complex methods involving the method of steepest descent, which lead to the principle of exponential dominance:
"A multiplier $C_+$, or $C_-$ can only change on a good path when crossing a Stokes line on which its exponential is subdominant."
One may show that this principle obeys unitarity, see e.g. Ref. 3 for details.
TL;DR: We have argued using various simplifying assumptions that the WKB formula (4) preserves unitarity.
--
References:
1. D. Griffiths, Introduction to Quantum Mechanics, Chapter 8.
2. A. Galindo & P. Pascual, Quantum Mechanics II, Chapter 9.
3. M.V. Berry & K.E. Mount, Semiclassical approximations in wave mechanics, Rep. Prog. Phys. 35 (1972) 315; Chapters 3 & 4.
$^1$ E.g. at a turning point $x_0$, the transformation $x\to -x$ induces a sign flip on the left-hand side of eq. (7). But minus zero is still zero!
$^2$ We leave it to the reader to analyze the cases of higher-order turning-points and the case of an infinite hard wall. |
Center manifold
Jack Carr (2006), Scholarpedia, 1(12):1826. doi:10.4249/scholarpedia.1826
One of the main methods of simplifying dynamical systems is to reduce the dimension of the system.
Centre manifold theory is a rigorous mathematical technique that makes this reduction possible, at least near equilibria.
An Example
We first look at a simple example. Consider \[\tag{1} x' =ax^3 \,, \qquad y' =-y + y^2 \]
where \(a\) is a constant. Since the equations are uncoupled, we see that the stationary solution \( x=y =0\) of (1) is asymptotically stable if and only if \(a < 0\ .\) Suppose now that \[\tag{2} x' =ax^3 + xy - xy^2\,, \qquad y' =-y + bx^2 +x^2 y \]
Since the equations are coupled we cannot immediately decide if the stationary solution \( x=y =0\) of (2) is asymptotically stable. The key is an abstraction of the idea of uncoupled equations.
A curve \(y =h(x)\ ,\) defined for \(|x|\) small, is said to be an invariant manifold for the system \[\tag{3}x' =f(x,y)\,, \qquad y' = g(x,y)\]
if the solution of (3) with \(x(0) =x_0\ ,\) \(y(0) = h(x_0)\) lies on the curve \(y =h(x)\) as long as \(x(t)\) remains small. For the system (1), \(y=0\) is an invariant manifold. Note that in deciding upon the stability of the stationary solution of (1), the only important equation is \(x' = ax^3\ ,\) that is, we need only study a first order equation on a particular invariant manifold.
Center manifold theory tells us that (2) has an invariant manifold \(y =h(x) = \mbox{O}(x^2)\) for small \(x\ .\) Furthermore, the local behaviour of solutions of the two dimensional system (2) can be determined by studying the scalar equation \[\tag{4} u' = au^3 + uh(u) -uh^2(u) \]
The theory also tells us how to compute approximations to the invariant manifold \(y = h(x)\ .\) For (2) we have that \(h(x) = bx^2 + \mbox{O}(x^4)\) and using this information in (4) gives \[\tag{5} u' =(a+b)u^3 + \mbox{O}(u^5) \]
Hence the stationary solution of (2) is asymptotically stable if \(a+b < 0\) and unstable if \(a+b>0\ .\) If \(a+b = 0\) we need a better approximation to the invariant manifold in order to decide on the stability.
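The order-by-order matching can be automated. The sketch below (SymPy; the ansatz and variable names are my own choices) substitutes $y = c_2x^2 + c_4x^4$ into the invariance condition for system (2) and recovers $h(x) = bx^2 + \mbox{O}(x^4)$:

```python
import sympy as sp

x, a, b, c2, c4 = sp.symbols('x a b c2 c4')

# Ansatz for the centre manifold of system (2): y = h(x) = c2*x^2 + c4*x^4 + O(x^6)
h = c2*x**2 + c4*x**4
xdot = a*x**3 + x*h - x*h**2        # x' of (2) restricted to y = h(x)
ydot = -h + b*x**2 + x**2*h         # y' of (2) restricted to y = h(x)

# Invariance: d/dt h(x(t)) = y'(t), i.e. h'(x)*x' - y' = 0 order by order in x
residual = sp.expand(sp.diff(h, x)*xdot - ydot)

c2_val = sp.solve(residual.coeff(x, 2), c2)[0]   # match the x^2 coefficient
print(c2_val)   # b, so h(x) = b*x^2 + O(x^4), as stated above
```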
Centre Manifolds
Consider the system \[\tag{6} x' =Ax + f(x,y)\,, \qquad y' = By+g(x,y)\,, \qquad (x,y) \in \mathbb{R}^n \times \mathbb{R}^m \]
where all the eigenvalues of the matrix \(A\) have zero real parts and all the eigenvalues of the matrix \(B\) have negative real parts. The functions \(f\) and \(g\) are sufficiently smooth and \[ f(0,0) =0\,, \qquad Df(0,0) =0\,, \qquad g(0,0) =0\,, \qquad Dg(0,0) = 0 \] where \(Df\) is the Jacobian matrix of \(f\ .\)
If \(f\) and \(g\) are identically zero then (6) has the two obvious invariant manifolds \(x=0\) and \(y=0\ .\) The invariant manifold \(x=0\) is called the stable manifold, and on the stable manifold all solutions decay to zero exponentially fast. The invariant manifold \(y=0\) is called the centre manifold. In general, an invariant manifold \(y = h(x)\) for (6) defined for small \(|x|\) with \(h(0)=0\) and \(Dh(0)=0\) is called a centre manifold. In more physical terms, the dynamics of \(y\) follows the dynamics of \(x\), and one may say that \(x\) enslaves the variable \(y\ .\) This interpretation has been called the slaving principle.
Main Results
The general theory states that there exists a centre manifold \(y =h(x)\) for (6) and that the equation on the centre manifold \[\tag{7} u' =Au + f(u,h(u))\,, \qquad u \in \mathbb{R}^n \]
determines the dynamics of (6) near \((x, y) =(0,0)\ .\) In particular, if the stationary solution \(u=0\) of (7) is stable, we can represent small solutions of (6) as \(t \rightarrow \infty\) by \[ x(t) =u(t) + \mbox{O}(e^{-\gamma t} )\,, \qquad y(t) =h(u(t)) + \mbox{O}(e^{-\gamma t}) \] where \(\gamma > 0\) is a constant.
To use the above theory, we need to have enough information about the centre manifold \(y = h(x)\) in order to determine the local dynamics of (7). If we substitute \(y(t) = h(x(t))\) into the second equation in (6) we obtain \[\tag{8} N(h(x)) =h'(x)\left[ Ax +f(x,h(x)) \right] - Bh(x) -g(x,h(x)) = 0 \]
The general theory tells us that the solution \(h\) of (8) can be approximated by a polynomial in \(x\ ,\) that is, if \(N(\phi(x)) = \mbox{O}(|x|^q)\) as \(x \rightarrow 0\) then \(h(x) =\phi (x) + \mbox{O}(|x|^q)\ .\)
There is also an \(m\) dimensional invariant manifold \(W^s\) tangential to the y-axis called the stable manifold. On the stable manifold all solutions decay to zero exponentially fast. Figure 1 illustrates the local dynamics for equation (6). The details of the flow on the centre manifold \(y = h(x)\) depend on the higher order terms in equation (7) and we cannot assign directions to the flow without further information.
We have assumed that all of the eigenvalues of the matrix B in (6) have negative real parts. The theory can be extended to the case in which the matrix B has in addition some eigenvalues with positive real parts. In this case the stationary solution \(x=0, y=0\) of (6) is unstable due to the unstable eigenvalues. There exists a centre manifold for (6) which captures the behaviour of small bounded solutions. In particular, this gives a method of studying all sufficiently small equilibria, periodic orbits and heteroclinic orbits.
Local Bifurcations
Centre manifold reduction is central to the development of bifurcation theory. We illustrate this by means of a simple example. Consider \[\tag{9} x' =\epsilon x -x^3 +xy\,, \qquad y' =-y + y^2 -x^2 \]
where \(\epsilon\) is a small scalar parameter. The goal is to study small solutions of (9). The linearised problem about the zero equilibrium has eigenvalues \(-1\) and \(\epsilon\) so the theory does not directly apply. We can write the equations in the equivalent form \[\tag{10} x' =\epsilon x -x^3 +xy\,, \qquad y' = -y + y^2 -x^2 \,, \qquad \epsilon' = 0 \ .\]
When considered as an equation on \(\R^3\) the \(\epsilon x\) term in (10) is nonlinear and the system has an equilibrium at \((x,y,\epsilon) = (0,0,0)\ .\) The linearisation about this equilibrium has eigenvalues \(-1, 0 ,0\ ,\) that is, it has two zero eigenvalues and one negative eigenvalue. The theory now applies so that the extended system (10) has a two dimensional centre manifold \(y =h(x,\epsilon)\) that can be approximated by a polynomial in \(x\) and \(\epsilon\ .\) The equation on the centre manifold is two dimensional and may be written in terms of the scalar variables \(u\) and \(\epsilon\) as \[ u' =\epsilon u - 2u^3 + \mbox{higher order terms} \,, \qquad \epsilon' = 0 \] and the local dynamics of (10) can be deduced from this equation.
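As a sanity check on the reduced equation — the simulation parameters \(\epsilon = 0.02\), step size, and initial data below are my own choices, not from the article — one can integrate the full system (9) numerically and confirm that trajectories settle on the pitchfork branch \(u \approx \sqrt{\epsilon/2}\) predicted by \(u' = \epsilon u - 2u^3\), while \(y\) hugs the centre manifold \(y \approx -x^2\):

```python
import math

def simulate(eps, x0=0.05, y0=0.0, dt=1e-2, steps=60000):
    """Forward-Euler integration of system (9)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = eps*x - x**3 + x*y
        dy = -y + y**2 - x**2
        x, y = x + dt*dx, y + dt*dy
    return x, y

eps = 0.02
x, y = simulate(eps)
print(x, math.sqrt(eps/2))   # x settles near sqrt(eps/2) = 0.1
print(y, -x**2)              # y tracks the centre manifold y ~ -x^2
```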
Notes and Further Reading
The ideas for centre manifolds in finite dimensions have been around for a long time and have been developed by Carr (1981), Guckenheimer and Holmes (1983), Kelly (1967), Vanderbauwhede (1989) and others. For recent developments in the approximation of centre manifolds see Jolly and Rosa (2005). Pages 1-5 of the book by Li and Wiggins (1997) give an extensive list of the applications of centre manifold theory to infinite dimensional problems. Mielke (1996) has developed centre manifold theory for elliptic partial differential equations and has applied the theory to elasticity and hydrodynamical problems. Applications to phase transitions in biological, chemical and physical systems have been investigated by Haken (2004).
In addition, it is interesting to note that there is a stochastic extension of the center manifold theorem, which has been introduced by Boxler (1989). In this case, for instance the center and stable manifolds may fluctuate randomly.
References
J. Carr (1981), Applications of Centre Manifold Theory, Springer-Verlag.
J. Guckenheimer and P. Holmes (1983), Nonlinear Oscillations, Dynamical systems and Bifurcations of Vector Fields. Springer-Verlag.
M. S. Jolly and R. Rosa (2005), Computation of non-smooth local centre manifolds, IMA Journal of Numerical Analysis , 25, no. 4, 698-725.
A. Kelly (1967), The stable, center-stable, center, center-unstable and unstable manifolds. J. Diff. Eqns, 3, 546-570.
Li and S. Wiggins (1997), Invariant manifolds and fibrations for perturbed nonlinear Schrödinger equations. Springer-Verlag.
A. Mielke (1996), Dynamics of nonlinear waves in dissipative systems: reduction, bifurcation and stability. In Pitman Research Notes in Mathematics Series, 352. Longman.
A. Vanderbauwhede (1989). Center Manifolds, Normal Forms and Elementary Bifurcations, In Dynamics Reported, Vol. 2. Wiley.
H. Haken (2004), Synergetics: Introduction and Advanced topics, Springer Berlin
P. Boxler (1989), A stochastic version of center manifold theory, Probability Theory and Related Fields, 83(4), 509-545
I am just working through an argument from Halliday Resnick to derive the Lorentz contraction (see quote below).
Some paragraphs before this, the authors note that:
If the rod is moving, however, you must note the positions of the end points simultaneously (in your reference frame) or your measurement cannot be called a length.
A paragraph later they invoke the following argument:
Length contraction is a direct consequence of time dilation. Consider once more our two observers. This time, both Sally, seated on a train moving through a station, and Sam, again on the station platform, want to measure the length of the platform. Sam, using a tape measure, finds the length to be $L_0$, a proper length because the platform is at rest with respect to him. Sam also notes that Sally, on the train, moves through this length in a time $\Delta t = L_0/v$ where $v$ is the speed of the train; that is, $$ L_0 = v \Delta t \quad \text{(Sam)} $$ This time interval $\Delta t$ is not a proper time interval because the two events that define it (Sally passes the back of the platform and Sally passes the front of the platform) occur at two different places, and therefore Sam must use two synchronized clocks to measure the time interval $\Delta t$.
For Sally, however, the platform is moving past her. She finds that the two events measured by Sam occur at the same place in her reference frame. She can time them with a single stationary clock, and so the interval $\Delta t_0$ that she measures is a proper time interval. To her, the length $L$ of the platform is given by $$ L = v \Delta t_0 \quad \text{(Sally)}. $$
Then they conclude by dividing the two equations above:
$$ \frac{L}{L_0} = \frac{v\Delta t_0}{v \Delta t} = \frac{1}{\gamma}$$ or $$ L = \frac{L_0}{\gamma} $$
which is the length contraction equation.
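Plugging concrete numbers into the two quoted equations makes the bookkeeping explicit (the values $v = 0.6c$ and $L_0 = 100\,\mathrm{m}$ are my own illustration, not from the book):

```python
c = 299_792_458.0        # speed of light, m/s
v = 0.6 * c              # train speed (illustrative value)
L0 = 100.0               # Sam's proper platform length, metres (illustrative)

gamma = 1.0 / (1.0 - (v/c)**2)**0.5   # Lorentz factor, = 1.25 for v = 0.6c
dt  = L0 / v             # Sam's time interval (two synchronized clocks, not proper)
dt0 = dt / gamma         # Sally's proper time interval, from time dilation
L   = v * dt0            # the length Sally assigns to the platform

print(gamma, L)          # gamma = 1.25, so L = 100/1.25 = 80.0 m
```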
However, I don't see in what sense the length was measured simultaneously in the derivation above. What is the detailed connection between the statement that the length measurement has to be simultaneous and the quoted derivation?
The spacetime geometry around a rotating uncharged black hole is described by the Kerr metric. I'll give this below, and it will look terrifying, but bear with me because there's only one small bit of the equation we need to see why the horizon disappears. Anyhow, the Kerr metric is:
$$\begin{align} ds^2 &= -(1 - \frac{r_s r}{\rho^2})dt^2 \\ &+ \frac{\rho^2}{\Delta}dr^2 \\ &+ \rho^2d\theta^2 \\ &+ (r^2 + \alpha^2 + \frac{r_s r\alpha^2}{\rho^2}\sin^2\theta)\sin^2\theta d\phi^2 \\ &+ \frac{2r_sr\alpha\sin^2\theta}{\rho^2}dt d\phi\end{align}$$
Where:
$$\begin{align}r_s &= 2M \\\alpha &= \frac{J}{M} \\\rho^2 &= r^2 + \alpha^2\cos^2\theta \\\Delta &= r^2 - r_sr + \alpha^2\end{align}$$
In the equation $J$ is the angular momentum of the black hole, $r$ is the distance from the centre of the black hole, $\theta$ is the latitude, $\phi$ is the longitude and $t$ is time. The parameter being calculated $ds$ is the total distance moved if you move by a distance $dr$ and angles $d\theta$ and $d\phi$ in a time $dt$.
Now, suppose we stay at some fixed angle relative to the black hole so $d\theta = d\phi = 0$, and we measure the distance along the radius to the centre of the black hole. We'll choose some fixed time $t$ for the measurement, so $dt = 0$. With all these restrictions the metric simplifies drastically to:
$$ ds^2 = \frac{\rho^2}{\Delta}dr^2 $$
and this is what we need to understand the behaviour of the horizon, because the event horizon radius is the value of $r$ for which the value of $ds^2$ goes to infinity. This happens when $\Delta = 0$, because then we get a division by zero. So to find the event horizon radius we just have to solve the equation:
$$ \Delta = r^2 - r_sr + \alpha^2 = 0 $$
and this is just a quadratic in $r$, like we all learned to solve at school. Using the quadratic formula the solution is (given by the larger root):
$$ r = \frac{r_s + \sqrt{r_s^2 - 4\alpha^2}}{2} \tag{1} $$
And the variation of the event horizon radius $r$ with $\alpha/r_s$ looks like:
Note that the line stops at $r/r_s = 0.5$ and $\alpha/r_s = 0.5$. The line stops here because beyond this point the equation (1) for $r$ has no real roots, and this means there is no event horizon. But remember that $\alpha$ is linked to the angular momentum $J$ by:
$$ \alpha = \frac{J}{M} $$
So for any value of the angular momentum $J > M^2$ there is no event horizon, and this is why the event horizon disappears when you spin the black hole too fast.
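A small script makes the disappearance explicit (geometric units $G = c = 1$; the function name and sample values are my own):

```python
import math

def horizon_radius(M, J):
    """Outer event-horizon radius of a Kerr black hole (geometric units G = c = 1).
    Returns None when Delta = r^2 - r_s*r + alpha^2 has no real root,
    i.e. when alpha = J/M exceeds r_s/2 = M, so that J > M^2."""
    r_s = 2.0 * M
    alpha = J / M
    disc = r_s**2 - 4.0 * alpha**2
    if disc < 0:
        return None            # no horizon: spun too fast
    return (r_s + math.sqrt(disc)) / 2.0

print(horizon_radius(1.0, 0.0))   # Schwarzschild: r = r_s = 2.0
print(horizon_radius(1.0, 1.0))   # extremal:      r = r_s/2 = 1.0
print(horizon_radius(1.0, 1.5))   # over-extremal: None
```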
However there are good reasons to suppose that a black hole can never spin this fast, and the disappearance of the event horizon isn't real, but rather a sign that we've tried to apply the Kerr metric to a system that cannot physically exist. There is an article here (160KB PDF) analysing the physics, and the conclusions are that it is physically impossible to spin a black hole that fast. |
I roughly understand the concept of the Lagrangian $L = T - V$ as well as the idea of stationary action $\delta \mathcal{S} =0$. However, I am confused what the Euler-Lagrange equation actually says.
Consider the Euler-Lagrange equation: \begin{equation} \frac{\partial L}{\partial q} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}}\right) = 0 \end{equation}
Here's my confusion:
To me, this looks like an empty mathematical exercise. If I know the partial derivatives $\frac{\partial L}{\partial q}$ and $\frac{\partial L}{\partial \dot{q}}$ and can take the derivative with respect to $t$ of the latter, what is the use of plugging all that into this setup? That's like saying after I show $2 + 3 = 5$, then show $x + y = z$, where $x=2,y=3,z=5$. In short, an empty exercise since it would be the same proof showing $x + y = z$ as showing $2 +3 = 5$ in this case.
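For concreteness (this worked example is an illustration I am adding, using SymPy), here is what "plugging in" produces for the harmonic oscillator $L = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}kq^2$:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# L = T - V for a harmonic oscillator (illustrative choice of Lagrangian)
L = sp.Rational(1, 2)*m*sp.diff(q(t), t)**2 - sp.Rational(1, 2)*k*q(t)**2

# Euler-Lagrange: dL/dq - d/dt (dL/dq') = 0
eom = sp.diff(L, q(t)) - sp.diff(sp.diff(L, sp.diff(q(t), t)), t)
print(eom)   # -k*q - m*q'' = 0, i.e. Newton's law m*q'' = -k*q
```

Here the partial derivatives treat $q$ and $\dot{q}$ as independent slots, and the result is a differential equation for the unknown trajectory $q(t)$.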
Can someone explain what I am misunderstanding? |
Let's pretend we have a spatially discretized PDE of the following form:
\begin{align} \frac{\partial^2 \boldsymbol{u}^k}{\partial t^2} = D\boldsymbol{u}^k \end{align}
where $D$ can take any form for now and $k$ refers to this being the discretization at time $t_k$. Then let's suppose, for example, we use a central difference approximation for the time derivative. We would then arrive at the following:
\begin{align} \frac{\partial^2 \boldsymbol{u}^k}{\partial t^2} &= D\boldsymbol{u}^k \\ \frac{\boldsymbol{u}^{k+1} - 2\boldsymbol{u}^k + \boldsymbol{u}^{k-1}}{\Delta t^2} &= D\boldsymbol{u}^k \\ \boldsymbol{u}^{k+1} &= \left( 2I + \Delta t^2 D \right)\boldsymbol{u}^k - \boldsymbol{u}^{k-1}\\ \end{align}
Given the formulation above, how could one approach stability? First thoughts go to using Von Neumann stability analysis, but I have never seen it used in a vector-wise fashion such that one could take into account an arbitrary $D$ matrix. Any thoughts or references would be very useful.
Edit
The references provided in the comments were useful, but I found that only a few simple connections were needed to approach stability for this scenario. The key link was to take the expression above and cast it into a different difference equation with a more suitable form.
A more suitable form comes from defining $\boldsymbol{w}^k = \left[(\boldsymbol{u}^{k})^{T}, (\boldsymbol{u}^{k-1})^{T}\right]^T$. We can then recast our difference equation into the form:
\begin{align} \boldsymbol{w}^{k+1} = G \boldsymbol{w}^k \end{align}
where
\begin{align} G = \begin{bmatrix} (2I + \Delta t^2 D) & -I \\ I & 0 \end{bmatrix} \end{align}
Then we know this system is stable if, given the set of eigenvalues $\lbrace \lambda_i \rbrace$ for $G$, the following holds:
\begin{align} \max_{i} \left| \lambda_i\right| \lt 1 \end{align}
With these results, we can check whether some differential operator $D$ is going to be stable with a given time discretization, particularly if it uses older $\boldsymbol{u}$ states. |
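For a concrete check (the wave-equation discretization, grid sizes, and time steps below are my own example, not from the question), one can assemble $G$ and compare its spectral radius against the classic CFL condition $c\,\Delta t/\Delta x \le 1$. Note that for this lossless problem the stable eigenvalues sit exactly on the unit circle, so the strict inequality is met only marginally:

```python
import numpy as np

def amplification_matrix(D, dt):
    """G acting on w^k = [u^k; u^{k-1}] for u^{k+1} = (2I + dt^2 D)u^k - u^{k-1}."""
    n = D.shape[0]
    I = np.eye(n)
    return np.block([[2*I + dt**2 * D, -I],
                     [I, np.zeros((n, n))]])

def spectral_radius(G):
    return np.max(np.abs(np.linalg.eigvals(G)))

# 1D wave equation u_tt = c^2 u_xx with Dirichlet BCs: D is the second-difference matrix
n, dx, c = 50, 0.02, 1.0
D = (c/dx)**2 * (-2*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))

rho_stable   = spectral_radius(amplification_matrix(D, 0.5*dx/c))  # CFL-satisfying
rho_unstable = spectral_radius(amplification_matrix(D, 1.5*dx/c))  # CFL-violating
print(rho_stable, rho_unstable)   # ~1.0 versus clearly greater than 1
```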
It is stated, e.g. on Hackage, that the `ArrowApply` and `Monad` typeclasses are equivalent. I have my doubts about this.
It is obvious that there are morphisms from `ArrowApply` to `Monad` and back, and one can check that one can prove the `Monad` laws and the `ArrowApply` laws in each case. What I doubt is that these morphisms are inverses.
They are defined this way:
Let $\leadsto: \operatorname{Type} \times \operatorname{Type} \to \operatorname{Type}$ be an instance of `ArrowApply`, so there is $arr: (a \to b) \to (a \leadsto b)$, $first: (a \leadsto b) \to (a \times c \leadsto b \times c)$ and $app: ((b \leadsto c) \times b) \leadsto c$, and $\leadsto$ is the hom of some category. I'm leaving out the universal quantifiers on lower-case letters, as in Haskell.
We define a monad $m: \operatorname{Type} \to \operatorname{Type}$ by $m a = 1 \leadsto a$. For example, $\operatorname{return}: a \to m a$ is defined as $\operatorname{return} x = arr (\lambda y. x)$, and $\operatorname{join}$ and the monad laws are not very hard. Or the other way: given a monad $m$, define $a \leadsto b = a \to m b$. This obviously defines a category and also gives rise to an `ArrowApply`.
I cannot see at all that these constructions are inverse to each other. Here is what I believe to be a counter-example:
Define your arrow type as the fixpoint of $a \leadsto b = a \to (b \times (a \leadsto b))$. This is the terminal coalgebra of the functor $x \mapsto a \to (b \times x)$. It represents a stateful, causal stream function. Constructions like this are used in functional reactive programming, e.g. Yampa.
We then define the monad $m$ in terms of $\leadsto$. Intuitively, $m a$ is the type of stream functions from unit to $a$, in other words, streams in $a$. But Kleisli arrows of this monad aren't stream functions: $$m a = 1 \to a \times (1 \leadsto a) \cong a \times (m a)$$ $$\implies a \to m b \simeq a \to b \times (m b) \not\simeq a \to b \times (a \leadsto b) \simeq a \leadsto b$$ So in this case, the constructions are not equivalent, if I haven't made a mistake somewhere.
Are `ArrowApply` and `Monad` inequivalent then, since the constructions aren't inverses of each other? Or is there another construction that makes them equivalent?
Using Kjetil's answer, with process91's comment, we arrive at the following procedure.
Derivation
We are given two unit column vectors, $A$ and $B$ ($\|A\|=1$ and $\|B\|=1$). The $\|\circ\|$ denotes the L-2 norm of $\circ$.
First, note that the rotation from $A$ to $B$ is just a 2D rotation on a plane with the normal $A \times B$. A 2D rotation by an angle $\theta$ is given by the following augmented matrix: $$G=\begin{pmatrix}\cos\theta & -\sin\theta & 0 \\\sin\theta & \cos\theta & 0 \\0 & 0 & 1\end{pmatrix}.$$
Of course we don't want to actually compute any trig functions. Given our unit vectors, we note that $\cos\theta=A\cdot B$, and $\sin\theta=||A\times B||$. Thus $$G=\begin{pmatrix}A\cdot B & -\|A\times B\| & 0 \\\|A\times B\| & A\cdot B & 0 \\0 & 0 & 1\end{pmatrix}.$$
This matrix represents the rotation from $A$ to $B$ in the base consisting of the following column vectors:
normalized vector projection of $B$ onto $A$: $$u={(A\cdot B)A \over \|(A\cdot B)A\|}=A$$
normalized vector rejection of $B$ onto $A$: $$v={B-(A\cdot B)A \over \|B- (A\cdot B)A\|}$$
the cross product of $B$ and $A$: $$w=B \times A$$
Those vectors are all orthogonal, and form an orthogonal basis. This is the detail that Kjetil had missed in his answer. You could also normalize $w$ and get an orthonormal basis, if you needed one, but it doesn't seem necessary.
The basis change matrix for this basis is:$$F=\begin{pmatrix}u & v & w \end{pmatrix}^{-1}=\begin{pmatrix} A & {B-(A\cdot B)A \over \|B- (A\cdot B)A\|} & B \times A\end{pmatrix}^{-1}$$
Thus, in the original base, the rotation from $A$ to $B$ can be expressed as right-multiplication of a vector by the following matrix: $$U=F^{-1}G F.$$
One can easily show that $U A = B$, and that $\|U\|_2=1$. Also, $U$ is the same as the $R$ matrix from Rik's answer.
2D Case
For the 2D case, given $A=\left(x_1,y_1,0\right)$ and $B=\left(x_2,y_2,0\right)$, the matrix $G$ is the forward transformation matrix itself, and we can simplify it further. We note$$\begin{aligned} \cos\theta &= A\cdot B = x_1x_2+y_1y_2 \\\sin\theta &= \| A\times B\| = x_1y_2-x_2y_1\end{aligned}$$
Finally,$$U\equiv G=\begin{pmatrix}x_1x_2+y_1y_2 & -(x_1y_2-x_2y_1) \\x_1y_2-x_2y_1 & x_1x_2+y_1y_2\end{pmatrix}$$and$$U^{-1}\equiv G^{-1}=\begin{pmatrix}x_1x_2+y_1y_2 & x_1y_2-x_2y_1 \\-(x_1y_2-x_2y_1) & x_1x_2+y_1y_2\end{pmatrix}$$
Octave/Matlab Implementation
The basic implementation is very simple. You could improve it by factoring out the common expressions of `dot(A,B)` and `cross(B,A)`. Also note that $||A\times B||=||B\times A||$.
GG = @(A,B) [ dot(A,B) -norm(cross(A,B)) 0; ...
              norm(cross(A,B)) dot(A,B) 0; ...
              0 0 1];
FFi = @(A,B) [ A (B-dot(A,B)*A)/norm(B-dot(A,B)*A) cross(B,A) ];
UU = @(Fi,G) Fi*G*inv(Fi);
Testing:
> a=[1 0 0]'; b=[0 1 0]';
> U = UU(FFi(a,b), GG(a,b));
> norm(U) % is it length-preserving?
ans = 1
> norm(b-U*a) % does it rotate a onto b?
ans = 0
> U
U =
0 -1 0
1 0 0
0 0 1
Now with random vectors:
> vu = @(v) v/norm(v);
> ru = @() vu(rand(3,1));
> a = ru()
a =
0.043477
0.036412
0.998391
> b = ru()
b =
0.60958
0.73540
0.29597
> U = UU(FFi(a,b), GG(a,b));
> norm(U)
ans = 1
> norm(b-U*a)
ans = 2.2888e-16
> U
U =
0.73680 -0.32931 0.59049
-0.30976 0.61190 0.72776
-0.60098 -0.71912 0.34884
Implementation of Rik's Answer
It is computationally a bit more efficient to use Rik's answer. This is also an Octave/MatLab implementation.
ssc = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0]
RU = @(A,B) eye(3) + ssc(cross(A,B)) + ...
     ssc(cross(A,B))^2*(1-dot(A,B))/(norm(cross(A,B))^2)
The results produced are same as above, with slightly smaller numerical errors since there are less operations being done. |
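For completeness, here is a NumPy port of Rik's formula (the function name is my own; it assumes $A \neq \pm B$ so that the cross product is nonzero):

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix R with R @ a == b for unit 3-vectors a, b.
    NumPy port of the Octave one-liners above; assumes a != +/-b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    v = np.cross(a, b)
    c = np.dot(a, b)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])     # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * (1.0 - c) / np.dot(v, v)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
R = rotation_between(a, b)
print(np.allclose(R @ a, b))   # True: R rotates a onto b
```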
Find the degree of the splitting field of $x^6+1$ over $\mathbb{Q}$
First I tried to find the roots of $x^6+1$ over $\mathbb{C}$ so I can create the splitting field:
$$x^6+1 = (x^3-i)(x^3+i)$$
but this strategy showed some complications like in Find the degree of the splitting field of $x^4 + 1$ over $\mathbb{Q}$
So I guess the goal is to find the roots in terms of $e^{\mbox{something}}$. It's easy to see that $i$ is a root of this equation, so we just convert $i$ to polar form involving $e$ like this: $i = e^{i\frac{\pi}{2}}$ by thinking in the complex plane and where the $i$ vector points. Now, to find the other $5$ roots, we just add $\frac{n2\pi}{6}$ for $n=1,2,3,4,5$, or we just say that the $6$ roots are: $e^{i\left(\frac{\pi}{2} + n\frac{2\pi}{6}\right)}, n=0,1,2,3,4,5$.
We know that each root has a negative version, so they form pairs. We can just take the first $3$ representatives, that is: $e^{i\left(\frac{\pi}{2} + n\frac{2\pi}{6}\right)}, n=0,1,2$; the others are the negative versions of these.
We must find $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}, \pm e^{i\frac{5\pi}{6}}, \pm e^{i\frac{7\pi}{6}}\right):\mathbb{Q}\right]$. The idea is to find $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}, \pm e^{i\frac{5\pi}{6}},\pm e^{i\frac{7\pi}{6}} \right):\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}, \pm e^{i\frac{5\pi}{6}}\right)\right]$, $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}, \pm e^{i\frac{5\pi}{6}}\right):\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}\right)\right]$ and $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}\right):\mathbb{Q}\right]$
So we can multiply and use the theorem of multiplicity of degrees to find the main degree.
In order to find the degree of each of those extensions, I must verify the basis of each one of the bigger extension over the smaller. For example, in $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}\right):\mathbb{Q}\right]$, the elements are going to be of the form $a + be^{i\frac{\pi}{2}} + ce^{i\frac{-\pi}{2}}$ for $a,b,c\in\mathbb{Q}$ (the $c$ coefficient is the inverse of $e^{i\frac{\pi}{2}}$). I should now see if repeated multiplication of $e^{i\frac{\pi}{2}}$ gives me its inverse, which it doesn't, so the basis is $\{1, e^{i\frac{\pi}{2}}, e^{i\frac{-\pi}{2}}\}$ and $\left[\mathbb{Q}\left(\pm e^{i\frac{\pi}{2}}\right):\mathbb{Q}\right]=3$? By using this reasoning I guess the other degrees would be greater than $1$ and the main degree would be over $6$, which I think is impossible.
So... am I doing something wrong or am I solving it in a hard way?
Even if you have a better way to solve it, like proving $x^6+1$ is the irreducible polynomial that contains all the $6$ roots, could you tell what is wrong with my reasoning? Thank you! |
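As a side note, one can sanity-check candidate answers with SymPy. Since $e^{i\pi/6} = \frac{\sqrt 3}{2} + \frac{i}{2}$ is itself a root of $x^6+1$, the degree of its minimal polynomial is a useful data point (this numerical check is an addition of mine, not from any textbook):

```python
import sympy as sp

x = sp.symbols('x')
zeta = sp.sqrt(3)/2 + sp.I/2      # = e^{i*pi/6}

# zeta really is a root of x^6 + 1
assert sp.simplify(zeta**6 + 1) == 0

p = sp.minimal_polynomial(zeta, x)
print(p, sp.degree(p, x))          # the 12th cyclotomic polynomial and its degree
```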
I want to prove that the implicit function theorem implies the inverse function theorem. I saw in another post what's written below as the proof for this but I don't understand what they've done.
$$ \text{ For } f : \mathbb{R}^n \to \mathbb{R}^n \text{, consider } F:\mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^n \text{ given by } F({\bf x}, {\bf y}) = f({\bf y}) - {\bf x}$$
Do we consider $f(x)$ to be the implicit function satisfying $F\big(x,f(x)\big)=0$, and by the definition of $F$ we get $F\big(x,f(x)\big)=0=f\big(f(x)\big)-x \Longrightarrow f\big(f(x)\big)=x$. It seems I was wrong in defining $f$ as the implicit function; I should have just called it $g$. So if we instead get the final implication $f\big(g(x)\big)=x$, is that a proof of the inverse function theorem?
Thanks! |
In 1988, the IMO presented a problem: to prove that $k$ must be a square if $a^2+b^2=k(1+ab)$, for positive integers $a$, $b$ and $k$. I am wondering about the solutions, which are not obvious from the proof. Besides the trivial solutions ($a$ or $b$ equal to $0$ or $1$, with $k=0$ or $1$), an obvious solution is $a=b^3$, so that the equation becomes $b^6+b^2=b^2(1+b^4)$. Are there any other solutions?
This is a famous problem; here is one of the solutions that I like the most, which I read in a book previously. Later, in a topic on here, I realized the importance of the problem. (The credit goes to T. Andreescu & R. Gelca if I remember correctly, but I'm not sure, since 11 individuals solved this problem that year and I'm not sure about their solutions):
Solution: Suppose that $\displaystyle \frac{a^2+b^2}{ab+1}=x$. We want to prove that for every non-negative integer pair $(\alpha,\beta)$ with the property that $\displaystyle \frac{\alpha^2+\beta^2}{\alpha\beta+1}=x$ and $\alpha \geq \beta$, the pair that minimizes $\alpha+\beta$ must have $\beta=0$. If that happens, then $x=\alpha^2$. So, suppose that $(\alpha,\beta)$ is such a pair that minimizes $\alpha+\beta$ but $\beta>0$. Then we can obtain the equation $y^2 - \beta xy + \beta^2 -x =0$ from $\displaystyle \frac{y^2+\beta^2}{y\beta+1}=x$. This equation has $\alpha$ as one of its roots, and since the sum of the roots is $\beta x$, the other root must be $\beta x - \alpha$. Now if we prove that $0 \leq \beta x - \alpha < \alpha$ then we're done, because this contradicts the minimality of $(\alpha,\beta)$.
$x = \displaystyle \frac{\alpha^2+\beta^2}{\alpha\beta+1}<\frac{2\alpha^2}{\alpha \beta} = \frac{2 \alpha}{\beta} \implies \beta x-\alpha < \alpha$
and it's also possible to show that $\beta x - \alpha \geq 0$, but honestly I don't remember that part of the proof and I leave it to you. That completes the proof.
(Also check that wikipedia link provided by pre-kidney).
The technique for this type of problem is called "Vieta root jumping". On the wikipedia page describing this technique, this very problem is used as an example. See here.
Of course there are many more solutions; we can enumerate them by flipping repeatedly and using symmetry. For example, your solution $(b, b^3)$ is the result of flipping $(b,0)$ when $k=b^2$. But if we flip around $b^3$ instead, we get the new solution $(b^5-b, b^3)$.
Indeed, $$x^2-b^5x+(b^6-b^2)=(x-b)(x-b^5+b)$$ so we also have $$b^6 + (b^5-b)^2 = b^2(1 + b^3(b^5-b))$$
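For concreteness, the flipping is easy to automate (this sketch is an editorial addition, not part of the original answer): Vieta's formulas say that if $(a,b)$ solves $a^2+b^2=k(1+ab)$, then so does the companion pair $(b,\,kb-a)$.

```python
def vieta_jump(a, b, k):
    """Given a solution (a, b) of a^2 + b^2 = k(1 + a*b), jump to the
    companion solution (b, k*b - a) supplied by Vieta's formulas."""
    assert a * a + b * b == k * (1 + a * b)
    return b, k * b - a

# Start from the trivial solution (0, b) with k = b^2 and jump upward.
b, k = 2, 4
sol = (0, b)
chain = [sol]
for _ in range(3):
    sol = vieta_jump(*sol, k)
    chain.append(sol)
print(chain)  # [(0, 2), (2, 8), (8, 30), (30, 112)]
```

With $b=2$ this reproduces $(2,8)=(b,b^3)$ and $(8,30)=(b^3,\,b^5-b)$ from the discussion above.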
All solutions can be written as: $$ \text{if $n$ is even:}\enspace a_n= { \sum\limits_{i=0}^{{n\over{2}}} {(-1)^{i+{n\over{2}}}{{n\over{2}}+i\choose {n\over{2}}-i}g^{4i+1}}} \\ \text{if $n$ is odd:}\enspace a_n= \sum\limits_{i=0}^{{n-1\over{2}}} (-1)^{i+{n-1\over{2}}}{1+{n-1\over{2}}+i \choose {n-1\over{2}}-i}g^{4i+3} \implies \\ {{a_n}^2+{a_{n+1}}^2\over{a_na_{n+1}+1}}=g^2\\ $$ Or also: $$ a_n={g{\bigg({g^2+\sqrt{g^4-4}\over{2}}\bigg)}^n\over{\sqrt{g^4-4}}} - {g{\bigg({g^2-\sqrt{g^4-4}\over{2}} \bigg)}^n\over{\sqrt{g^4-4}}} \\ $$
Not quite a proof of the original IMO problem, but there is definitely a very easy way to compute all possible answers. It also demonstrates that here, Vieta jumping is basically just using symmetry to jump between the two solutions of the good old quadratic formula. Nothing complicated below, but I haven't seen this explanation anywhere else.
Suppose: $$ {{a^2+b^2} \over {1+ab}} = c $$ Then using the quadratic formula to solve b (and a little rewriting) gives: $$ b = {ac \pm \sqrt {a^2 (c^2-4) + 4c} \over 2} $$ Using this we can compute all answers for any given c. Eg: assume c = 4. Then we get: $$ b = {a \cdot 4 \pm \sqrt {a^2 \cdot 12 + 16} \over 2} $$ We know that a = 0 will always provide a solution. That gives us: $$ a = 0 \Rightarrow b = {{0 \cdot 4\pm \sqrt {0 \cdot 12 + 16}} \over 2} = 2 \Rightarrow (a,b) = (0,2) $$ But because of the symmetry, if (0,2) is a solution, then (2,0) must also be a solution. And since the quadratic formula has 2 solutions, it will produce another solution. $$ a = 2 \Rightarrow \space b = {{2 \cdot 4 \pm \sqrt {4 \cdot 12 + 16}} \over 2} = 0 \space or \space 8 \Rightarrow (a,b) = (2,8) $$ $$ a = 8 \Rightarrow \space b = {{8 \cdot 4 \pm \sqrt {64 \cdot 12 + 16}} \over 2} = {32 \pm 28\over 2} = 2 \space or \space 30 \Rightarrow (a,b) = (8,30) $$ $$ a = 30 \Rightarrow \space b = {{30 \cdot 4 \pm \sqrt {900 \cdot 12 + 16}} \over 2} = {120 \pm 104\over 2} = 8 \space or \space 112 \Rightarrow (a,b) = (30,112) $$ Same can be done for c = 9, 16, 25, etc.
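Here is a small Python sketch of exactly this procedure (my addition; `math.isqrt` verifies that the discriminant stays a perfect square along the chain):

```python
from math import isqrt

def next_b(a, c):
    """Larger root of b^2 - (a*c)*b + (a^2 - c) = 0,
    i.e. b = (a*c + sqrt(a^2*(c^2 - 4) + 4*c)) / 2."""
    disc = a * a * (c * c - 4) + 4 * c
    s = isqrt(disc)
    assert s * s == disc  # stays a perfect square along the solution chain
    return (a * c + s) // 2

c, a = 4, 0
pairs = []
while a < 100:
    b = next_b(a, c)
    pairs.append((a, b))
    a = b
print(pairs)  # [(0, 2), (2, 8), (8, 30), (30, 112)]
```

The same loop with `c = 9, 16, 25, ...` produces the corresponding chains for the other square values.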
Supposing that
$$ \frac{a^2 + b^2}{ab + 1} = k$$ then
$$ a^2 - a(kb) + (b^2 - k) = 0 $$
So using quadratic formula gives: $$ a = \frac{kb \pm \sqrt{k^2b^2 -4(b^2 -k)}}{2}$$
The solutions are when $k = b^2 $ and thus $a= kb = b^3$
So $$ b = 1 , a = 1 , k= 1$$ $$b = 2 , a = 8, k = 4$$
$$ b= 3, a= 27, k = 9 $$ and so on....
There are other ways of getting more solutions than these, such as $b = 8$, $a = 30$ and $k = 4$, where $k$ is not $b^2$ and $a$ is not $b^3$ (and of course the zero one). I have not yet found a way to find more solutions than this.
Consider the following sequence of polynomials\begin{equation*}P_1(n)\,=\,n\,,\,\, P_2(n)\,=\,n^3,\ldots,P_{k+2}(n)\,=\,\frac{P^2_{k+1}(n)-n^2}{P_k(n)}, \,\,\,k\in\mathbb N,\end{equation*}which in fact satisfy the general formula\begin{equation*}P_m(n)\,=\,\sum_{r=0}^{\left[\frac{m-1}{2}\right]}(-1)^r\,\binom{m-r-1}{r}\,n^{2m-4r-1}.\end{equation*}Then one can show the following:
If $a,b$ are positive integers with $b\ge a$, such that$$\frac{a^2+b^2}{1+ab}=k$$is also an integer, then $k$ is a perfect square, i.e., $k=n^2$, and there exists an $m\in\mathbb N$, such that$$a=P_m(n)\quad\text{and}\quad b=P_{m+1}(n).$$
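A quick numerical sanity check of this claim (my addition, not part of the answer). The quotient in the recurrence is always exact, since Vieta's formulas give the equivalent linear recurrence $P_{k+2}=n^2P_{k+1}-P_k$:

```python
def P(m, n):
    """P_1 = n, P_2 = n^3, P_{k+2} = (P_{k+1}^2 - n^2) / P_k (exact division)."""
    a, b = n, n ** 3
    if m == 1:
        return a
    for _ in range(m - 2):
        a, b = b, (b * b - n * n) // a   # equals n^2 * b - a
    return b

# Check (P_m^2 + P_{m+1}^2) == n^2 * (1 + P_m * P_{m+1}) for a range of m, n.
for n in range(2, 6):
    for m in range(1, 6):
        a, b = P(m, n), P(m + 1, n)
        assert a * a + b * b == n * n * (1 + a * b)
print(P(3, 2), P(4, 2))  # 30 112, matching the chain for k = 4
```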
The answer to this question is based on the method of Vieta jumping. Graphing the given equation gives hyperbolas with a certain pattern in their solutions. Wikipedia has a page on this topic.
First, in Excel, I put numbers in a column and arranged the equation in three columns based on the formula, and I noticed that some of the remainders were zero. I became interested in the subject, and it took about eight hours to realize that the equation can be satisfied for all real numbers: if $r$ is an integer we can find integers $a$ and $b$ satisfying the equation, and if $r$ is a real number we can find real $a$ and $b$ satisfying it.
Claim: for any real number $r$ we can arrange $$r^2=\frac{a^2+b^2}{ab+1},\qquad a,b\in\mathbb R.$$ Write $A = \frac{a^2+b^2}{ab+1}$. For every real $a$ we have $a^2=a^2$ and $\frac{a^4+1}{a^4+1}=1$, hence $$a^2 \;=\; a^2\cdot\frac{a^4+1}{a^4+1} \;=\; \frac{a^6+a^2}{a^4+1} \;=\; \frac{a^2+a^6}{a\cdot a^3+1} \;=\; B.$$ If $A=B$, then $$\frac{a^2+b^2}{ab+1} = \frac{a^2+a^6}{a\cdot a^3+1},$$ and cross-multiplying (if $\frac mn = \frac xy$ then $my = nx$) gives $$(a^2+b^2)(a^4+1)=(ab+1)(a^2+a^6),$$ i.e. $$a^6+a^2+a^4b^2+b^2 = a^3b+a^7b+a^2+a^6,$$ so $a^4b^2+b^2=a^3b+a^7b$, that is $b^2(a^4+1)=a^3b(a^4+1)$, hence $b^2=a^3b$. Therefore either $b=0$ (and then $A=a^2$ holds trivially) or $b=a^3$, for any real number $a$.
Since the set of real numbers is closed under multiplication, we can realize $r^2=\frac{a^2+b^2}{ab+1}$ for every real number $r$; in particular, for every integer we can find two integers that satisfy the equation. If we select $r$ from one of these sets, the other numbers must be selected from the same set.
For the hydrogen atom the quantised energy levels are:
$$E_n = \frac{- 13.6 eV}{n^2}\quad\text{with}\quad n = 1,2,3...$$
One peculiar property of this quantisation is that for large $n$ the energy levels are ever closer together and for $E \geq 0$ (that is $n = \infty$) the energy spectrum becomes a continuum. The electron is then free, of course.
For the hydrogen atom the Potential Energy function is:
$$V(r) = \frac{ - e^2}{4 \pi \epsilon_0 r}$$
Obviously for $r = 0, V = - \infty$, for $r = \infty, V = 0$
Suppose that for a quantum system we construct a Potential Energy of the general form:
$$V(r) = - \frac{V_0}{f(r)},$$
with $f(r)$ a
symmetric function of $r$ with a root at $r = 0$ (so that $V(0) = - \infty$).
Let's also assume that $\frac{1}{f(r)}$ tends to $0$ for $r = \infty$ (so that $V(\infty) = 0$).
Intuitively I feel that with such a Potential Energy function, the quantised energy would also smoothly convert to a continuum for high quantum numbers, going fully continuous at $E \geq 0$.
My question is, can this be demonstrated or even proved (or disproved, of course)? |
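Not an answer, but a quick numerical illustration of the premise (my addition): for $E_n=-13.6/n^2\ \mathrm{eV}$ the gaps $E_{n+1}-E_n\approx 27.2/n^3\ \mathrm{eV}$ collapse rapidly toward the continuum at $E=0$.

```python
def E(n):
    """Hydrogen energy levels in eV."""
    return -13.6 / n ** 2

for n in (1, 2, 5, 10, 100):
    gap = E(n + 1) - E(n)
    print(f"n={n:>3}: E_n = {E(n):10.6f} eV, gap to next level = {gap:.2e} eV")
# Gaps fall off like 27.2 / n^3, so the levels merge into a continuum as E -> 0-.
```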
Solutions2tma@gmail.com
Whatsapp: 08155572788
Question: This combined force law \(\widetilde{F}= q(\widetilde{E}+\frac{\widetilde V}{C} \times \widetilde{B})\) is known as
Answer: Lorentz
Question: How would you describe, in continuum mechanics, the response of a linear elastic medium to strain?
Answer: Hooke's law
Question: The total force that acts on a charge if we have both electric and magnetic fields can be called a _______
Answer: Lorentz
Question: How would you describe the motion of two parallel current-carrying wires next to one another (currents in opposite directions)?
Answer: They are repulsed
Question: How would you describe the motion of two parallel current-carrying wires next to one another (currents in the same direction)?
Answer: They are attracted
Question: As with the electric field, the magnetic field is generated by all the other currents in the universe. How would you describe the magnetic field \(\widetilde{B}\) in terms of a vector \(\widetilde{A}\)?
Answer: \(\widetilde{B}=\widetilde\bigtriangledown\times\widetilde{A}\)
Question: It was noted that a moving charge may experience another force which is proportional to its velocity \(\widetilde V\), with regard to the magnetic field \(\widetilde B\). This could be described thus:
Answer: \(\widetilde {F_{\beta}}= q \widetilde {V}\times \widetilde B\)
Question: The electric field can be described by a scalar potential field \(V\), which is related to the electric field by the formula:
Answer: \(E=-\bigtriangledown V\)
Question: How can you describe mathematically those objects that can possess electrical charge, such that the charges can exert a force on each other even through vacuum (where q = electrical charge, E = electric field)?
Answer: \(\widetilde{F_{E}}= q\widetilde {E}\)
Question: List the basic equations of electromagnetism.
Answer: Maxwell's equations and the Lorentz force.
Use past TMAs to deal with new TMAs and Exams
I am currently studying uniform and pointwise convergence and I am stuck at a somewhat basic distinction. Until now our lecturer has been talking about sequences of functions. For example:
$$f_n(x)=\frac{n}{x} \quad \text{or} \quad f_n(x)=x^n$$
However, in our problem set I am supposed to test the following series for uniform convergence:
$$\sum_{n=0}^{\infty}\frac{\sin(nt)}{e^n}$$
Are sequences of functions and series related? Is the process of showing uniform convergence different for series? |
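(Editorial aside, in case a numerical sanity check helps: the series converges uniformly by the Weierstrass M-test, since $|\sin(nt)/e^n|\le e^{-n}$ and $\sum e^{-n}$ converges. The sketch below compares the sup-error of the partial sums against the geometric tail bound.)

```python
import math

def partial_sum(t, N):
    return sum(math.sin(n * t) / math.exp(n) for n in range(N + 1))

ts = [k * 0.01 for k in range(-700, 701)]          # grid covering [-7, 7]
limit = {t: partial_sum(t, 60) for t in ts}        # high partial sum as proxy for the limit

for N in (2, 5, 10):
    sup_err = max(abs(partial_sum(t, N) - limit[t]) for t in ts)
    tail = math.exp(-(N + 1)) / (1 - math.exp(-1)) # M-test tail bound, independent of t
    assert sup_err <= tail
    print(f"N={N:2}: sup error ~ {sup_err:.2e} <= tail bound {tail:.2e}")
```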
Scalar and vector potentials in electromagnetism. The scalar potential is potential energy per unit charge. For potential energy, use the potential-energy tag.
In electromagnetic theory, there are two kinds of potential. First, and most common, is the scalar potential, denoted $\varphi$ or $\phi$, defined as the potential energy per unit charge of a charged object in an electric field.
$$\varphi(\vec{r}) = \frac{U(\vec{r})}{q}$$
Scalar potential should not be confused with the related concept of voltage, denoted $V$, which is the difference between the scalar potential at two points.
In a static system, where charges and currents are constant (and thus the electric and magnetic fields are also constant), the electric-field is the negative gradient of the scalar potential.
$$\vec{E} = -\vec{\nabla}\varphi$$
The other kind of potential is the vector potential, denoted $\vec{A}$, which is not directly related to potential energy but is related to the magnetic-field $\vec{B}$ via the curl:
$$\vec{B} = \vec{\nabla}\times\vec{A}$$
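(Illustrative sketch, my addition to this excerpt: for a uniform field $\vec B_0$ a standard choice of vector potential is $\vec A(\vec r)=\tfrac12\,\vec B_0\times\vec r$, and a finite-difference curl recovers $\vec B_0$.)

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

B0 = (0.3, -1.2, 2.0)
A = lambda r: tuple(0.5 * c for c in cross(B0, r))   # A = (1/2) B0 x r

def curl(F, r, h=1e-5):
    def d(i, j):  # dF_i / dx_j by central differences
        rp, rm = list(r), list(r)
        rp[j] += h
        rm[j] -= h
        return (F(tuple(rp))[i] - F(tuple(rm))[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

print(curl(A, (0.7, -0.4, 1.1)))  # recovers B0 = (0.3, -1.2, 2.0) up to rounding
```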
In special-relativity, these two potentials are combined into a four-vector $A^{\mu}$, defined as
$$A^{\mu} = \biggl(\frac{\varphi}{c}, A_x, A_y, A_z\biggr)$$ |
This problem can be expressed as the original Merton's portfolio problem.
Consider wealth process defined by SDE
$$d X _ { t } = \frac { X _ { t } \alpha _ { t } } { S _ { t } } d S _ { t } + \frac { X _ { t } \left( 1 - \alpha _ { t } \right) } { S _ { t } ^ { 0 } } d S _ { t } ^ { 0 }$$
where $\alpha_t$ is proportion of the investment in the risky asset $S_t$, and $S_t^0$ is the risk-free asset.
Optimality criterion may depend on the risk aversion of the investor, and the problem is to maximize expected utility of the investor for appropriate utility function $U$:
$$E \left[ U \left( X _ { T } \right) \right] \rightarrow \max$$
Classical choice of the utility function is CRRA:
$$u ( x ) = \frac { x ^ { 1 - \gamma } } { 1 - \gamma }$$
where $\gamma$ is constant and corresponds to the risk-aversion of the investor.
If the asset $S_t$ follows Black-Scholes dynamics (in conformance with your assumption of log-normal returns)
$$\begin{aligned} d S _ { t } ^ { 0 } & = r S _ { t } ^ { 0 } d t \\ d S _ { t } & = \mu S _ { t } d t + \sigma S _ { t } d W _ { t } \end{aligned}$$
remarkably, there is a closed-form solution, which is to invest a constant proportion of wealth in the risky asset:
$$\alpha_t = \frac { \mu - r } { \gamma \sigma ^ { 2 } }$$
Notice that the solution can be interpreted as the mean-variance trade-off. |
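A one-line sketch of the rule (the parameter values below are made up for illustration):

```python
def merton_fraction(mu, r, sigma, gamma):
    """Constant optimal fraction of wealth in the risky asset under CRRA utility."""
    return (mu - r) / (gamma * sigma ** 2)

# e.g. 8% drift, 2% risk-free rate, 20% volatility, risk aversion gamma = 3
alpha = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=3.0)
print(alpha)  # ~0.5: invest about half of wealth in the risky asset
```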
I'm having great difficulty with the following problem:
This question concerns the integral $\int_{0}^{2}\int_{0}^{\sqrt{4-y^2}}\int_{\sqrt{x^2+y^2}}^{\sqrt{8-x^2-y^2}}\!z\ \mathrm{d}z\ \mathrm{d}x\ \mathrm{d}y$. Sketch or describe in words the domain of integration. Rewrite the integral in both cylindrical and spherical coordinates. Which is easier to evaluate?
Below is what I believe I have established so far...
The projection of this integral's domain onto the $xy$-plane is the portion of the circle $x^2+y^2=4$ on $0\le x\le2,\ y\ge0$.
The bounds on $z$ correspond to
$z^2=x^2+y^2$ (cone) and $x^2+y^2+z^2=8$ (sphere).
These bounds intersect at
$x^2+y^2=4$.
Below $z=2$ (where the bounds on $z$ intersect), I believe that the cone and cylinder, $x^2+y^2=4$, are completely inside the sphere.
Would it hence be correct to say that the region of integration is the solid lying between the cone and the cylinder, on $x\ge0$, $y\ge0$ and $0\le z\le2$? I'm struggling to visualize this problem.
When I attempt to move on, and evaluate the integral in cylindrical/spherical coordinates, my solutions differ by a factor of 2.
That is, I evaluated this integral as,
$\int_{0}^{\frac{\pi}{2}}\int_{0}^{2}\int_{0}^{\sqrt{8-r^2}}\!z\ r\ \mathrm{d}z\ \mathrm{d}r\ \mathrm{d}\theta=2\pi$
And,
$\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\sqrt{2}}\!\rho\ \cos\phi\ \rho^2 \sin \phi\ \mathrm{d}\rho\ \mathrm{d}\theta\ \mathrm{d}\phi=4\pi$
Can you please help me to identify where I am going wrong?
Thank you very much. |
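(Editorial note, not part of the original post: the value of the original integral is easy to cross-check numerically. Doing the inner $z$-integral analytically gives $\int_{\sqrt{x^2+y^2}}^{\sqrt{8-x^2-y^2}} z\,\mathrm dz = \tfrac12\big[(8-x^2-y^2)-(x^2+y^2)\big] = 4-x^2-y^2$, so the integral reduces to $\iint(4-x^2-y^2)\,\mathrm dA$ over the quarter disk $x,y\ge0$, $x^2+y^2\le4$; a midpoint Riemann sum lands near $2\pi\approx6.283$, which indicates which of the two evaluations is correct.)

```python
import math

def integral(n=800):
    """Midpoint Riemann sum of (4 - x^2 - y^2) over the quarter disk
    x, y >= 0, x^2 + y^2 <= 4 (the reduced form of the triple integral)."""
    total = 0.0
    hy = 2.0 / n
    for i in range(n):
        y = (i + 0.5) * hy
        xmax = math.sqrt(max(4.0 - y * y, 0.0))
        m = max(1, int(n * xmax / 2.0))
        hx = xmax / m
        for j in range(m):
            x = (j + 0.5) * hx
            total += (4.0 - x * x - y * y) * hx * hy
    return total

val = integral()
print(val, 2 * math.pi)  # ~6.283 vs 6.283...
```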
Some history: In the pre-Wilsonian interpretation of quantum field theory, renormalisability of theories was considered an essential requirement. That is, you should be able to remove all ultraviolet divergences of the theory (that occur in perturbation theory) by absorbing them in a finite number of parameters in the lagrangian. If you could not do that, then the theory was called "non-renormalisable" and considered pathological. The renormalisability criteria placed strong constraints of the theory, determining which finite number of interactions were allowed. The standard model is renormalisable.
One effect of the renormalization process was that coupling constants became scale (energy) dependent, and such running coupling constants were understood to be physical. eg asymptotic freedom is the result of the QCD coupling constant becoming weak at large energies.
Wilson's goal and approach in condensed matter was different. He was interested in studying the behaviour of complicated theories near the second order phase transition where the correlation length becomes very large, that is when the theory is essentially scale invariant. That means that you could study the system via an
effective theory near the phase transition point without worrying about the underlying microscopic degrees of freedom. In this approach you write down a simple field theory with the desired symmetries and you use a cut-off $\Lambda$ in all calculations since your theory is only an approximation at low energies $E \ll \Lambda$ (long distance scales).
So in the Wilsonian approach there are no ultraviolet divergences. But it also meant that you had to include many more interactions in your lagrangian. The coupling constants also became $\Lambda$ dependent and you had a running of the couplings here too. In practice, in the limit $E/\Lambda \to 0$, only a finite number of couplings are important: the ones that are "marginal" and "relevant". The ones that are "irrelevant" are the ones that a particle physicist would have labelled as "non-renormalisable".
So it was initially a different philosophy and approach in high-energy physics and condensed matter, but gradually high-energy physicists understood that the Wilsonian perspective of effective field theories is applicable to their studies of quantum field theory.
In the modern perspective, the standard model is believed to be a low-energy approximation of some as yet unknown theory. Although the current model is renormalisable, it is believed that "non-renormaliable = irrelevant" interactions would become important at higher energies, and they would represent new processes. So you can write down what the next possible interaction could be (eg to give neutrinos small masses) and make some predictions.
Such an effective field theory approach is also useful when you want to make low energy predictions from your high energy theories.
So, in summary, the Wilsonian perspective of quantum field theories and the renormalisation group is important because it gives them a very physical and intuitive meaning.
Now, some comments about "conformal field theories". A bigger symmetry than just scale invariance is "conformal invariance". In two dimensional space conformal invariance results in an infinite dimensional symmetry group which places strong constraints on a theory and allows many exact results to be obtained in condensed matter systems. It is also important in string theory as the world sheet is two dimensional. Conformal symmetry is less powerful in higher dimensions as the symmetry group is finite and the symmetry is typically broken by quantum effects through the generation of mass scales. |
Markdown Cells¶
Text can be added to Jupyter Notebooks using Markdown cells. You can change the cell type to Markdown by using the
Cell menu, the toolbar, or the key shortcut
m. Markdown is a popular markup language that is a superset of HTML. Its specification can be found here:
Markdown basics¶
You can make text *italic* or **bold** by surrounding a block of text with a single or double `*` respectively.
You can build nested itemized or enumerated lists:
1. One
    - Sublist
        - This
    - Sublist
        - That
        - The other thing
2. Two
    - Sublist
3. Three
    - Sublist
Now another list:
1. Here we go
    1. Sublist
        2. Sublist
2. There we go
3. Now this
You can add horizontal rules:
Here is a blockquote:
> Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!
And shorthand for links:
You can use backslash to generate literal characters which would otherwise have special meaning in the Markdown syntax.
\*literal asterisks\* *literal asterisks*
Use double backslash to generate the literal $ symbol.
Headings¶
You can add headings by starting a line with one (or multiple)
# followed by a space, as in the following example:
```
# Heading 1
# Heading 2
## Heading 2.1
## Heading 2.2
```
Embedded code¶
You can embed code meant for illustration instead of execution in Python:
```python
def f(x):
    """a docstring"""
    return x**2
```
or other languages:
```c
for (i=0; i<n; i++) {
    printf("hello %d\n", i);
    x += 4;
}
```
LaTeX equations¶
Courtesy of MathJax, you can include mathematical expressions both inline: \(e^{i\pi} + 1 = 0\) and displayed:
Inline expressions can be added by surrounding the latex code with
$:
$e^{i\pi} + 1 = 0$
Expressions on their own line are surrounded by
$$:
$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$
GitHub flavored markdown¶
The Notebook webapp supports Github flavored markdown meaning that you can use triple backticks for code blocks:
```python
print "Hello World"
```

```javascript
console.log("Hello World")
```
Gives:
print "Hello World"
console.log("Hello World")
And a table like this:
| This | is |
|------|------|
| a | table|
A nice HTML Table:
| This | is |
|------|------|
| a | table |
General HTML¶
Because Markdown is a superset of HTML you can even add things like HTML tables:
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
Local files¶
If you have local files in your Notebook directory, you can refer to these files in Markdown cells directly:
[subdirectory/]<filename>
For example, in the images folder, we have the Python logo:
<img src="../images/python_logo.svg" />
and a video with the HTML5 video tag:
<video controls src="../images/animation.m4v" />
These do not embed the data into the notebook file, and require that the files exist when you are viewing the notebook.
Security of local files¶
Note that this means that the Jupyter notebook server also acts as a generic file server for files inside the same tree as your notebooks. Access is not granted outside the notebook folder so you have strict control over what files are visible, but for this reason it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory).
When you run the notebook in a password-protected manner, local file access is restricted to authenticated users unless read-only views are active. |
Let $T_0$ be the set theory axiomatized by $ZFC^-$ (that is $ZFC$ without powerset) + every set is countable + $\mathbb{V}=\mathbb{L}$.
Question 1: Suppose $\phi$ is a sentence of set theory. Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_0$ + "there are transitive set models of $A$ of arbitrarily high ordinal height"? Update: no.

Question 2: Is there a set $T$ of $\Pi_2$ sentences, such that $ZFC^- \cup T$ is complete? Update: no, essentially. See my answer below.

Questions 3 and 4 after comments.

Comments. Question 1 is related to, but different from, several previous questions on Mathoverflow, e.g. Nice Algebraic Statements Independent from ZF + V=L (constructibility), On statements independent of ZFC + V=L, or Natural statements independent from true $\Pi^0_2$ sentences. Regarding the first two---Hamkins gives a long list of examples of things independent of $\mathbb{V}= \mathbb{L}$, but it seems that my schema takes care of all of them. (Of course in those questions $ZFC$ was assumed.) Moreover, Question 1 was partly motivated by Dorais' answer to the second question above, which references a question of Shelah from The Future of Set Theory. I'm leaving "large cardinal axiom" undefined here (so Question 1 can't be formalized), but obviously we should exclude inconsistent axioms, or things like $ZFC +$ "there are no transitive set models of ZFC."
If we pick a definition of ``large cardinal axiom", then "every set is countable" + $\mathbb{V}=\mathbb{L}$ + "there are transitive set models of $A$ of arbitrarily high ordinal height" (for each large cardinal axiom $A$) is a set of $\Pi_2$ sentences. So a negative answer to Question 2 implies a negative answer to Question 1.
We can ignore (recursive) large cardinal axiom schemas $\Gamma$ because we can just replace them by $ZFC_0$+"there is a transitive set model of $\Gamma$" where $ZFC_0$ is some large enough finite fragment of $ZFC$. In particular we don't have to write "$ZFC + A$".
$T_0$ + this axiom schema (loosely speaking) is of personal significance to me: in fact I believe it to be true. (Namely, given whatever universe of sets $\mathbb{V}$ in which we are working, it seems reasonable to suppose there is a larger universe of sets $\mathbb{W} \models \mathbb{V}=\mathbb{L}$ in which $\mathbb{V}$ is countable, or such that $\mathbb{W}$ is a model of a given large cardinal axiom $A$. Ergo,...)
Let $\mathcal{L}_{\mbox{set}}$ be the language of set theory $\{\in\}$ and let $\mathcal{L}_1$ be $\mathcal{L}_{\mbox{set}} \cup \{P\}$, $P$ a new unary relation symbol. Let $T_1$ be $T_0$ + the axioms asserting that $P \subseteq \mbox{ON}$ is stationary (for $\in$-definable classes) and for every $\alpha \in P$, $(\mathbb{V}_\alpha, \in) \preceq (\mathbb{V}, \in)$. We insist that large cardinal axioms $A$ be sentences of $\mathcal{L}_{\mbox{set}}$.
Question 3: Suppose $\phi$ is a sentence of set theory (i.e. of $\mathcal{L}_{\mbox{set}}$). Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_1$ + "there are transitive set models of $A$ of arbitrarily high ordinal height"?

Question 4: Is there a set $T$ of $\Pi_2$ sentences of set theory, such that $T_1 \cup T$ decides every sentence of set theory?
What's the relationship between Kahler differentials and ordinary differential forms?
Let $M$ be a differentiable manifold, $A=C^\infty (M)$ its ring of global differentiable functions and $\Omega^1 (M)$ the A-module of global differential forms of class $C^\infty$. The A-module of Kähler differentials $\Omega_k(A)$ is the free A-module over the symbols $db$ ($b \in A$) divided out by the relations
$d(a+b)=da+db,\quad d(ab)= adb+bda,\quad d\lambda=0 \quad(a,b\in A, \quad \lambda \in k)$
There is an obvious surjective map $\quad \Omega_k(A) \to \Omega^1 (M)$ because the relations displayed above are valid in the classical interpretation of the calculus (Leibniz rule). However, I do not believe at all that it is injective.
For example, if $\: M=\mathbb R$, I see absolutely no reason why $\mathrm{d}\sin(x)=\cos(x)\mathrm{d}x$ should be true in $\Omega_k(A) $ (beware the sirens of calculus). Things would be worse if we considered $C^\infty$ functions which, unlike the sine, are not analytic. The same sort of reasoning applies to holomorphic manifolds and also to local rings of differentiable or holomorphic functions on manifolds.
To sum up: the differentials used in differentiable or holomorphic manifold theory are a quotient of the corresponding Kähler differentials but are not isomorphic to them. (And I think David's claim that they are isomorphic is mistaken.)
There is a discussion of this issue at the $n$-category cafe. I'd encourage people who were interested in this question to head over there and see if they can lend some insight.
Here is a sketch of a proof that $d (e^x) \neq e^x dx$ in the Kahler differentials of $C^{\infty}(\mathbb{R})$. More generally, we should be able to show that, if $f$ and $g$ are $C^{\infty}$ functions with no polynomial relation between them, then $df$ and $dg$ are algebraically independent, but I haven't thought through every detail.
Choose any sequence of points $x_1$, $x_2$, $\ldots$ in $\mathbb{R}$ tending to $\infty$. Inside the ring $\prod_{i=0}^{\infty} \mathbb{R}$, let $X$ and $e^X$ be the sequences $(x_i)$ and $(e^{x_i})$. Choose a nonprincipal ultrafilter on the $x_i$ and let $K$ be the corresponding quotient of $\prod_{i=0}^{\infty} \mathbb{R}$.
$K$ is a field. Within $K$, the elements $X$ and $e^X$ do not obey any polynomial relation with real coefficients. (Because, for any nonzero polynomial $f$, $f(x,e^x)$ only has finitely many zeroes.) Choose a transcendence basis, $\{ z_a \}$, for $K$ over $\mathbb{R}$ and let $L$ be the field $\mathbb{R}(z_a)$.
Any function $\{ z_a \} \to L$ extends to a unique derivation $L \to L$, trivial on $\mathbb{R}$. In particular, we can find $D:L \to L$ so that $D(X)=0$ and $D(e^X) =1$. Since $K/L$ is algebraic and characteristic zero, $D$ extends to a unique derivation $K \to K$. Taking the composition $C^{\infty} \to K \to K$, we have a derivation $C^{\infty}(\mathbb{R}) \to K$ with $D(X)=0$ and $D(e^X)=1$. By the universal property of the Kahler differentials, this derivation factors through the Kahler differentials. So there is a quotient of the Kahler differentials where $dx$ becomes $0$, and $d(e^x)$ does not, so $dx$ does not divide $d(e^x)$.
I'm traveling and can't provide references for most of the facts I am using about derivations of fields, but I think this is all in the appendix to Eisenbud's Commutative Algebra.

UPDATE: My answer essentially just gives the definition of Kahler differentials and differential forms and misses the point of the question. Georges' answer addresses the relationship between the two. As David before me, I also encourage you to vote Georges' answer up and mine down.
Let $M$ be a smooth manifold and $p$ a point in $M$. The usual definition of the tangent space to $M$ at $p$ is as the vector space of linear maps $D: C^{\infty}(M) \to \mathbb{R}$ satisfying the Leibniz rule $$D(fg) = D(f)g(p) + f(p)D(g)$$ Equivalently, let $I$ be the ideal of $C^{\infty}(M)$ consisting of all functions vanishing at $p$. Then $T_p M$ is the dual of the vector space $I/I^2$ (which you hence call the cotangent space to $M$ at $p$). Indeed, $D(f) = 0$ for every $f \in I^2$, and conversely any linear map $r: I/I^2 \to \mathbb{R}$ gives rise to a derivation $D(f) := r(f-f(p))$.
Now let $X$ be a scheme over a field $k$ (you can generalize this to a morphism of schemes) and $x$ a closed point. Consider the local ring $\mathcal{O}_{x, X}$ of functions regular at $X$. Then the stalk at $x$ of the sheaf of Kahler differentials $\Omega^1_X$ corepresents the functor taking an $\mathcal{O}_{x,X}$-module $\mathcal{F}_x$ to $\mathrm{Der}(\mathcal{O}_{x,X}, \mathcal{F}_x)$. In particular, $$\mathrm{Der}(\mathcal{O}_{x,X}, k) \cong \mathrm{Hom}(\Omega^1_{X,x}, k)$$
It is in this sense that you think of $\Omega^1_{X,x}$ as the cotangent space to $X$ at $x$. Indeed, in this case $\Omega^1_{X,x} \cong m/m^2$ where $m \subset \mathcal{O}_{x, X}$ is the ideal of functions vanishing at $x$.
UPDATE: The previous answer that was here was pretty much completely wrong. Thanks to Georges for several corrections. I encourage everybody to vote my answer down and his up.
If $M$ is a $C^{\infty}$ manifold and $A$ is the ring $C^{\infty}(M)$ then there is a natural map from the Kahler differentials $\Omega_{A/\mathbb{R}}$ to the $C^{\infty}$ one-forms. This is surjective but far from injective. For example, $d \sin x \neq \cos x dx$ in the Kahler differentials. The basic problem is that Kahler differentials are only linear for finite sums, so they can't "see" nonpolynomial relations.
If you replace $C^{\infty}$ by complex analytic and $M$ is an open simply connected set in $\mathbb{C}^n$, then the Kahler differentials map to the holomorphic $(1,0)$-forms. This is still true for any complex manifold if interpreted as a statement about sheaves; let $\mathcal{H}$ be the sheaf of holomorphic functions and define the sheaf of Kahler differentials by sheafifying the presheaf $U \mapsto \Omega_{\mathcal{H}(U)/\mathbb{C}}$; then we again have a map from Kahler differentials to holomorphic $(1,0)$ forms.
In algebraic geometry, if $M$ is a smooth complex algebraic variety, one usually considers the sheaf gotten by sheafifiying the presheaf $U \mapsto \Omega_{\mathcal{O}(U)/\mathbb{C}}$. (We are now using the Zariski topology.) These are, by definition, the algebraic $(1,0)$ forms. They are not isomorphic to the holomorphic $(1,0)$ forms but, if $M$ is projective, they have the same cohomology by GAGA.
Hello everybody, I'm not a mathematician :) so sorry not to write the details here. Basically, I studied engineering and control theory. In nonlinear control theory we often use differentials, either Kähler or ordinary, to study the systems. There has been a long discussion whether those two notions are isomorphic or not. I believe we finally have the answer. Together with my colleagues we have shown that differential one-forms are isomorphic to a quotient space (module) of Kähler differentials. These two modules coincide when they are modules over a ring of linear differential operators over the field of algebraic functions. It was published as G. Fu, M. Halás, Ü. Kotta, Z. Li: Some remarks on Kähler differentials and ordinary differentials in nonlinear control theory. In: Systems and Control Letters, article in press, 2011. Available online at http://www.sciencedirect.com/science/article/pii/S0167691111001198
The key ingredient in proofs that derivations are differential forms is the ability to evaluate coefficients: e.g. to go from $\mathrm{d}f(x) = f'(x) \mathrm{d}x$ to $\mathrm{d}f(x)|_a = f'(a) \mathrm{d}x|_a$.
Evaluation maps $\theta_P : \mathcal{C}^\infty(M) \to \mathbb{R} : f \to f(P)$ are examples of ring homomorphisms; abstractly, for any ring homomorphism $\varphi : R \to S$, we can consider the corresponding "evaluation" $$ \varphi_* : \Omega_R \cong R \otimes_R \Omega_R \to S \otimes_R \Omega_R $$ The $S$-module $S \otimes_R \Omega_R$ is isomorphic to the quotient of $\Omega_R$ by the relations $f \mathrm{d}g = \varphi(f) \mathrm{d}g$, and $\varphi_*(f \mathrm{d}g) = \varphi(f) \mathrm{d}g$.
In the case that we use one of the $\theta_P$, $\theta_{P*}$ is evaluating the coefficients of a Kähler differential at $P$, and so the corresponding $\mathbb{R} \otimes_{\mathcal{C}^\infty(M)} \Omega_{\mathcal{C}^\infty(M) / \mathbb{R}}$ looks like the cotangent space to $M$ at $P$.
Write $\mathrm{d}f|_P$ for $\theta_{P*}(\mathrm{d}f)$.
Now, we can apply the usual argument. For $M = \mathbb{R}^n$, to any function $f$, we can take the differential of a Taylor polynomial and get $$ \mathrm{d}f(x)|_P = f'(P) \mathrm{d}x|_P$$ so it's clear that we really do get the cotangent space to $M$ at $P$.
If we take all Kähler differentials and identify differentials that have the same "values" at every point of the manifold, then we get the ordinary differential forms.
Conjecture. If $M$ is a compact manifold, then $\Omega_{\mathcal{C}^\infty(M) / \mathbb{R}} \cong \Omega^1(M)$.
(this conjecture has been posted to Are Kähler differentials the same as one-forms for compact manifolds?)
Consider a ring with radius $R$ and charge density $\lambda=\lambda_0\cos\phi$, where $\phi$ is the angular coordinate in cylindrical coordinates. If I want to find the dipole moment of this charge distribution, then I put it into $$\vec{p}=\int{\mathrm d^3r~\rho(\vec{r})\vec{r}},$$ where I tried $$\rho(\vec{r})=\lambda_0 \cos\phi \times \delta(s-R) \delta(z)$$ and $$\vec{r}=s\hat{s}+z\hat{z}$$ So, the dipole then becomes: $$\vec{p}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty}[\lambda_0 \cos\phi \times \delta(s-R) \delta(z)](s\hat{s}+z\hat{z})s~\mathrm ds~\mathrm d\phi ~\mathrm dz$$ The delta function kills the $z$ component and leaves the $s$ component, so: $$\vec{p}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty}\lambda_0 \cos\phi \times \delta(s-R) \delta(z)s^2~\mathrm ds~\mathrm d\phi ~\mathrm dz\hat{s}=\lambda_0R^2\int_{0}^{2\pi}\cos\phi~\mathrm d\phi~\hat{s}=\vec{0}$$ The answer is $$\vec{p}=\frac{1}{2}\lambda_0R^2\hat{x}$$ What's wrong with my solution? Actually this is problem 4.1 from Zangwill's Modern Electrodynamics. I read its solution, but I don't get why it works in Cartesian coordinates but not in cylindrical coordinates.
The previous answer just repeats what's in the solution manual, and the $p$ it provides is incorrect. I am taking a class for grins and was assigned this problem, and I beat myself up trying to get the stated answer. I finally did it in both spherical and cylindrical coordinates and got the same answer in both. I emailed the professor and he confirmed the answer in the book is incorrect. The actual answer is $\pi \lambda_0R^2\hat{x}$.
Note that the $\phi$ integration is $\int_0^{2\pi}\cos^2\phi~\mathrm d\phi = \pi$,
the $\theta$ integral is $\int_0^{\pi} \sin(\theta)\,\delta\left(\theta - \frac{\pi}{2}\right)\mathrm d\theta = \sin\left(\frac{\pi}{2}\right) = 1$,
and the $r$ integral is $\int_0^{\infty} r^2\,\delta(r-R)\,\mathrm dr = R^2$,
so the answer is $\pi\lambda_0R^2\hat{x}$.
For a linear charge density around a ring, $\lambda=\lambda(\phi)$. The volume charge density would be: $$\rho(r)=\lambda(\phi)\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}$$ As a check, integrating $\rho$ over all space gives the total charge: $$\int_{0}^{2\pi}~\mathrm d\phi\int_{0}^{\pi}~\sin(\theta)~\mathrm d\theta \int_{0}^{\infty}~\mathrm dr~r^{2}\lambda\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}=2\pi R\lambda=Q$$ Since $\hat{r}=\hat{z}\cos\theta +\hat{y}\sin\theta \sin\phi +\hat{x}\sin\theta \cos\phi$, the dipole moment would be: $$\int ~\mathrm d^{3}r~\rho(r) =\int_{0}^{2\pi}~\mathrm d\phi\int_{0}^{\pi}~\sin(\theta)~\mathrm d\theta \int_{0}^{\infty}~\mathrm dr~r^{2}~[\hat{z}\cos\theta +\hat{y}\sin\theta \sin\phi +\hat{x}\sin\theta \cos\phi]~\lambda_{0}\cos\phi\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}$$ Only the $\hat{x}$ integral is nonzero, and it would result in $$\boxed{p=\frac{1}{2}\lambda_{0}R^{2}\hat{x}}$$
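A quick numerical cross-check (my addition, with assumed values $\lambda_0 = R = 1$): keeping the $\phi$-dependent unit vector inside the integral reproduces $\pi\lambda_0R^2\hat{x}$, which is the step the cylindrical-coordinate calculation in the question misses when it pulls $\hat{s}$ out of the integral.

```python
import numpy as np

# Dipole moment of a ring with lambda = lambda0*cos(phi), numerically.
# Assumed illustrative values: lambda0 = 1, R = 1.
lam0, R = 1.0, 1.0
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dphi = 2.0 * np.pi / phi.size

# Charge element dq = lambda(phi) * R dphi at position r = R*(cos phi, sin phi, 0).
# The unit vector s-hat = (cos phi, sin phi, 0) depends on phi and cannot
# be treated as a constant vector outside the integral.
lam = lam0 * np.cos(phi)
r = R * np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)])

p = (lam * r).sum(axis=1) * dphi * R  # p = ∫ lambda(phi) r R dphi

print(p)  # approximately [pi, 0, 0]
```

The $x$ component is $\lambda_0 R^2\int_0^{2\pi}\cos^2\phi~\mathrm d\phi = \pi\lambda_0R^2$, while the $y$ and $z$ components vanish.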
I want to calculate the VaR for a long position (S) in stock prices after one year. Therefore I tried two methods:
analytical solution: $VaR = S\cdot p_0\cdot \sigma_d \cdot \Phi^{-1}(1-\alpha)\cdot \sqrt{252}$
MC with geometric brownian motion:
I. model stock price (assuming $\mu = 0$): $p_{t+1} = p_t + p_t\cdot \sigma \cdot dW_t$
II. perform MC simulation of multiple price series
III. determine $\alpha$-Quantile of price-distribution for t = 252 ( $p^\alpha_{252}$)
IV. $VaR = S\cdot (p^\alpha_{252} - p_0)$
with
$S$: stock position
$\sigma_d$: volatility of daily returns
$\alpha$: risk quantile
$p_t$: stock price
$dW$: Wiener process
Now here are my questions:
Are those methods correct? If yes: I noticed that both methods lead to different results (the VaR for GBM is higher). Why is that so? Which method should I use? I was also wondering why the analytical VaR solution is symmetric with respect to upside/downside risk, although price levels are lognormally distributed.
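To make the comparison concrete, here is a minimal sketch of both methods (my addition; the parameters $p_0=100$, $\sigma_d=0.01$, $\alpha=0.99$, $S=1$ are assumptions for illustration, not values from the question):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not from the question)
S, p0, sigma_d, alpha, T, n_paths = 1.0, 100.0, 0.01, 0.99, 252, 100_000

# (1) Analytical (normal) approximation
var_analytic = S * p0 * sigma_d * NormalDist().inv_cdf(alpha) * np.sqrt(T)

# (2) Monte Carlo with the question's discretisation
#     p_{t+1} = p_t + p_t * sigma * dW_t   (mu = 0)
p = np.full(n_paths, p0)
for _ in range(T):
    p *= 1.0 + sigma_d * rng.standard_normal(n_paths)

# Loss at the alpha level, reported as a positive number
var_mc = S * (p0 - np.quantile(p, 1.0 - alpha))

print(f"analytical VaR: {var_analytic:.2f}")
print(f"GBM-MC VaR:     {var_mc:.2f}")
```

Note that the question's step IV, $VaR = S\cdot(p^\alpha_{252} - p_0)$, gives a negative number; the sketch reports the positive loss $S\cdot(p_0 - p^\alpha_{252})$. Any gap between the two figures comes from comparing a normal approximation of the price change with the (approximately lognormal) simulated terminal prices, which also explains why the analytical formula treats upside and downside symmetrically while the simulated distribution does not.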
8.11. Bidirectional Recurrent Neural Networks
So far we assumed that our goal is to model the next word given what we’ve seen so far, e.g. in the context of a time series or in the context of a language model. While this is a typical scenario, it is not the only one we might encounter. To illustrate the issue, consider the following three tasks of filling in the blanks in a text:
I am _____
I am _____ very hungry.
I am _____ very hungry, I could eat half a pig.
Depending on the amount of information available we might fill the blanks with very different words such as 'happy', 'not', and 'very'. Clearly the end of the phrase (if available) conveys significant information about which word to pick. A sequence model that is incapable of taking advantage of this will perform poorly on related tasks. For instance, to do well in named entity recognition (e.g. to recognize whether Green refers to Mr. Green or to the color) longer-range context is equally vital. To get some inspiration for addressing the problem let's take a detour to graphical models.

8.11.1. Dynamic Programming
This section serves to illustrate the problem. The specific technical details do not matter for understanding the deep learning counterpart but they help in motivating why one might use deep learning and why one might pick specific architectures.
If we want to solve the problem using graphical models we could for instance design a latent variable model as follows: we assume that there exists some latent variable \(h_t\) which governs the emissions \(x_t\) that we observe via \(p(x_t|h_t)\). Moreover, the transitions \(h_t \to h_{t+1}\) are given by some state transition probability \(p(h_t|h_{t-1})\). The graphical model then looks as follows:
For a sequence of \(T\) observations we have thus the following joint probability distribution over observed and hidden states:
Now assume that we observe all \(x_i\) with the exception of some \(x_j\) and it is our goal to compute \(p(x_j|x^{-j})\). To accomplish this we need to sum over all possible choices of \(h = (h_1, \ldots, h_T)\). In case \(h_i\) can take on \(k\) distinct values this means that we need to sum over \(k^T\) terms - mission impossible! Fortunately there's an elegant solution for this: dynamic programming. To see how it works consider summing over the first two hidden variables \(h_1\) and \(h_2\). This yields:
In general we have the forward recursion
The recursion is initialized as \(\pi_1(h_1) = p(h_1)\). In abstract terms this can be written as \(\pi_{t+1} = f(\pi_t, x_t)\), where \(f\) is some learned function. This looks very much like the update equation in the hidden variable models we discussed so far in the context of RNNs. Entirely analogously to the forward recursion we can also start a backwards recursion. This yields:
We can thus write the backward recursion as
with initialization \(\rho_T(h_T) = 1\). These two recursions allow us to sum over all values of \((h_1, \ldots, h_T)\) in \(O(k^2 T)\) time (linear in \(T\)) rather than in exponential time. This is one of the great benefits of probabilistic inference with graphical models. It is a very special instance of the Generalized Distributive Law proposed in 2000 by Aji and McEliece. Combining both forward and backward pass we are able to compute
Note that in abstract terms the backward recursion can be written as \(\rho_{t-1} = g(\rho_t, x_t)\), where \(g\) is some learned function. Again, this looks very much like an update equation, just running backwards unlike what we’ve seen so far in RNNs. And, indeed, HMMs benefit from knowing future data when it is available. Signal processing scientists distinguish between the two cases of knowing and not knowing future observations as filtering vs. smoothing. See e.g. the introductory chapter of the book by Doucet, de Freitas and Gordon, 2001 on Sequential Monte Carlo algorithms for more detail.
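The two recursions above can be sketched numerically (my addition; the transition matrix, emission table and observations below are random stand-ins, and the forward quantity includes the emission at each step). Splicing the forward and backward quantities at any position gives the same likelihood, which is the point of the forward-backward trick:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HMM: k hidden states, m output symbols, T observations (all assumed)
k, m, T = 3, 4, 6
P = rng.random((k, k)); P /= P.sum(axis=1, keepdims=True)  # p(h_{t+1} | h_t)
E = rng.random((k, m)); E /= E.sum(axis=1, keepdims=True)  # p(x_t | h_t)
pi0 = np.full(k, 1.0 / k)                                  # p(h_1)
x = rng.integers(0, m, size=T)                             # observed sequence

# Forward recursion: alpha_t(h) = p(x_1..x_t, h_t = h)
alphas = [pi0 * E[:, x[0]]]
for t in range(1, T):
    alphas.append((alphas[-1] @ P) * E[:, x[t]])

# Backward recursion: rho_t(h) = p(x_{t+1}..x_T | h_t = h), with rho_T = 1
rhos = [None] * T
rhos[-1] = np.ones(k)
for t in range(T - 2, -1, -1):
    rhos[t] = P @ (E[:, x[t + 1]] * rhos[t + 1])

# The likelihood p(x_1..x_T) is the same at every splice point t,
# computed in time linear in T instead of summing over k**T paths.
likelihood = alphas[-1].sum()
for a, r in zip(alphas, rhos):
    assert np.isclose((a * r).sum(), likelihood)
print(likelihood)
```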
8.11.2. Bidirectional Model
If we want to have a mechanism in RNNs that offers comparable look-ahead ability as in HMMs we need to modify the recurrent net design we’ve seen so far. Fortunately this is easy (conceptually). Instead of running an RNN only in forward mode starting from the first symbol we start another one from the last symbol running back to front. Bidirectional recurrent neural networks add a hidden layer that passes information in a backward direction to more flexibly process such information. The figure below illustrates the architecture of a bidirectional recurrent neural network with a single hidden layer.
In fact, this is not too dissimilar to the forward and backward recurrences we encountered above. The main distinction is that in the previous case these equations had a specific statistical meaning. Now they're devoid of such an easily accessible interpretation and we can just treat them as generic functions. This transition epitomizes many of the principles guiding the design of modern deep networks: use the type of functional dependencies common to classical statistical models and use them in a generic form.
8.11.2.1. Definition
Bidirectional RNNs were introduced by Schuster and Paliwal, 1997. For a detailed discussion of the various architectures see also the paper by Graves and Schmidhuber, 2005. Let’s look at the specifics of such a network. For a given time step \(t\), the mini-batch input is \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) (number of examples: \(n\), number of inputs: \(d\)) and the hidden layer activation function is \(\phi\). In the bidirectional architecture: We assume that the forward and backward hidden states for this time step are \(\overrightarrow{\mathbf{H}}_t \in \mathbb{R}^{n \times h}\) and \(\overleftarrow{\mathbf{H}}_t \in \mathbb{R}^{n \times h}\) respectively. Here \(h\) indicates the number of hidden units. We compute the forward and backward hidden state updates as follows:
Here, the weight parameters \(\mathbf{W}_{xh}^{(f)} \in \mathbb{R}^{d \times h}\), \(\mathbf{W}_{hh}^{(f)} \in \mathbb{R}^{h \times h}\), \(\mathbf{W}_{xh}^{(b)} \in \mathbb{R}^{d \times h}\), and \(\mathbf{W}_{hh}^{(b)} \in \mathbb{R}^{h \times h}\), and the bias parameters \(\mathbf{b}_h^{(f)} \in \mathbb{R}^{1 \times h}\) and \(\mathbf{b}_h^{(b)} \in \mathbb{R}^{1 \times h}\), are all model parameters.
Then we concatenate the forward and backward hidden states \(\overrightarrow{\mathbf{H}}_t\) and \(\overleftarrow{\mathbf{H}}_t\) to obtain the hidden state \(\mathbf{H}_t \in \mathbb{R}^{n \times 2h}\) and input it to the output layer. In deep bidirectional RNNs the information is passed on as input to the next bidirectional layer. Lastly, the output layer computes the output \(\mathbf{O}_t \in \mathbb{R}^{n \times q}\) (number of outputs: \(q\)):
Here, the weight parameter \(\mathbf{W}_{hq} \in \mathbb{R}^{2h \times q}\) and bias parameter \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\) are the model parameters of the output layer. The two directions can have different numbers of hidden units.
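A minimal NumPy sketch of these updates (my addition; the sizes, the choice \(\phi=\tanh\), and the random initialization are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: n examples, d inputs, h hidden units, T time steps
n, d, h, T = 4, 3, 5, 6
X = rng.standard_normal((T, n, d))

def init(*shape):
    return 0.1 * rng.standard_normal(shape)

# Separate parameters for the forward (f) and backward (b) directions
Wxf, Whf, bf = init(d, h), init(h, h), np.zeros((1, h))
Wxb, Whb, bb = init(d, h), init(h, h), np.zeros((1, h))

def step(x, H, Wx, Wh, b):
    return np.tanh(x @ Wx + H @ Wh + b)  # phi = tanh

fwd, bwd = [], [None] * T
Hf = np.zeros((n, h))
for t in range(T):                  # left-to-right pass
    Hf = step(X[t], Hf, Wxf, Whf, bf)
    fwd.append(Hf)
Hb = np.zeros((n, h))
for t in reversed(range(T)):        # right-to-left pass
    Hb = step(X[t], Hb, Wxb, Whb, bb)
    bwd[t] = Hb

# H_t in R^{n x 2h}: concatenation of the two directions at each step
H = [np.concatenate([f, b], axis=1) for f, b in zip(fwd, bwd)]
print(H[0].shape)  # (4, 10)
```

An output layer would then map each \(n \times 2h\) block to \(n \times q\) with a single weight matrix, as in the equations above.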
8.11.2.2. Computational Cost and Applications
One of the key features of a bidirectional RNN is that information from both ends of the sequence is used to estimate the output. That is, we use information from future and past observations to predict the current one (a smoothing scenario). In the case of language models this isn’t quite what we want. After all, we don’t have the luxury of knowing the next to next symbol when predicting the next one. Hence, if we were to use a bidirectional RNN naively we wouldn’t get very good accuracy: during training we have past and future data to estimate the present. During test time we only have past data and thus poor accuracy (we will illustrate this in an experiment below).
To add insult to injury bidirectional RNNs are also exceedingly slow. The main reason for this is that they require both a forward and a backward pass and that the backward pass is dependent on the outcomes of the forward pass. Hence gradients will have a very long dependency chain.
In practice bidirectional layers are used very sparingly and only for a narrow set of applications, such as filling in missing words, annotating tokens (e.g. for named entity recognition), or encoding sequences wholesale as a step in a sequence processing pipeline (e.g. for machine translation). In short, handle with care!
8.11.2.3. Training a BLSTM for the Wrong Application
If we were to ignore all advice regarding the fact that bidirectional LSTMs use past and future data and simply applied one to language modeling, we would get estimates with acceptable perplexity. Nonetheless, the ability of the model to predict future symbols is severely compromised, as the example below illustrates. Despite reasonable perplexity numbers, it only generates gibberish even after many iterations. We include the code below as a cautionary example against using them in the wrong context.
import d2l
from mxnet import npx
from mxnet.gluon import rnn
npx.set_np()

# Load data
batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)

# Define model
vocab_size, num_hiddens, num_layers, ctx = len(vocab), 256, 2, d2l.try_gpu()
lstm_layer = rnn.LSTM(num_hiddens, num_layers, bidirectional=True)
model = d2l.RNNModel(lstm_layer, len(vocab))

# Train
num_epochs, lr = 500, 1
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.2, 84277 tokens/sec on gpu(0)
time travellerererererererererererererererererererererererererer
travellerererererererererererererererererererererererererer
The output is clearly unsatisfactory for the reasons described above. For a discussion of more effective uses of bidirectional models see e.g. the sentiment classification in Section 13.9.
8.11.3. Summary

- In bidirectional recurrent neural networks, the hidden state for each time step is simultaneously determined by the data prior to and after the current time step.
- Bidirectional RNNs bear a striking resemblance to the forward-backward algorithm in graphical models.
- Bidirectional RNNs are mostly useful for sequence embedding and the estimation of observations given bidirectional context.
- Bidirectional RNNs are very costly to train due to long gradient chains.

8.11.4. Exercises

1. If the different directions use a different number of hidden units, how will the shape of \(\boldsymbol{H}_t\) change?
2. Design a bidirectional recurrent neural network with multiple hidden layers.
3. Implement a sequence classification algorithm using bidirectional RNNs. Hint: use the RNN to embed each word and then aggregate (average) all embedded outputs before sending the output into an MLP for classification. For instance, if we have \((\mathbf{o}_1, \mathbf{o}_2, \mathbf{o}_3)\) we compute \(\bar{\mathbf{o}} = \frac{1}{3} \sum_i \mathbf{o}_i\) first and then use the latter for sentiment classification.
For the elastic scattering of identical particles, A + A → A + A, what are the Mandelstam variables?
Solution
The Mandelstam variables are defined, for a process 1 + 2 → 3 + 4, as
\begin{eqnarray}
s &=& (p_1 + p_2)^2 \nonumber \\ t &=& (p_1 - p_3)^2 \nonumber \\ u &=& (p_1 - p_4)^2 \nonumber \end{eqnarray}
where the \(p\)s are the four-momenta.
\(s\) variable
For the \(s\) variable, the scattering is something like two particles hitting each other head-on with \(\vec p_2=-\vec p_1\). We can calculate it for the case of identical particles like so:
\begin{eqnarray}
s = (p_1 + p_2)^2 &=& p_1^2+ 2p_1p_2 + p_2^2 \nonumber \\ &=& m_1^2 + 2p_1p_2 + m_2^2 \nonumber \\ &=& 2m_1^2 + 2(E_1E_2 - \vec p_1 \cdot \vec p_2) \nonumber \\ &=& 2m_1^2 + 2(E_1^2 + {\vec p_1}^2) \nonumber \\ &=& 2m_1^2 + 2\big((m_1^2 + {\vec p_1}^2) + {\vec p_1}^2\big) \nonumber \\ &=& 4(m^2 + \vec p^2) \end{eqnarray}

\(t\) variable
The relevant collision for the \(t\) variable is shown in the diagram. The momenta of particles 1 and 2 are equal and opposite. Since they have the same mass, they must have the same energy too. After the collision, the particles again move with equal and opposite momentum, so \(E_1=E_2=E_3=E_4\) and \(|\vec p_1|=|\vec p_2|=|\vec p_3|=|\vec p_4|\). The calculation is at the bottom. The last relation \eqref{eq:lte} is true for any t-channel process.
\(u\) variable
The \(u\) variable is basically the same process as for the \(t\) variable, except the angle of scattering is \(\phi\) in the diagram instead of \(\theta\). This leads to the argument of the cosine being shifted by \(\pi\), thus changing its sign.
Back to \(t\):
\begin{eqnarray} t=(p_1-p_3)^2&=& p_1^2 - 2p_1p_3 + p_3^2 \nonumber \\ &=& 2m_1^2 - 2E_1E_3 + 2 \vec p_1 \cdot \vec p_3 \nonumber \\ &=& 2m_1^2 - 2E_1^2 + 2 {\vec p_1}^2 \cos \theta \nonumber \\ \label{eq:lte} &=& -2 {\vec p}^2 (1-\cos \theta) \le 0 \end{eqnarray}
Therefore, for \(u\):
\begin{eqnarray}
u=-2 {\vec p}^2 (1+\cos \theta) \end{eqnarray}
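These results are easy to verify numerically (my addition; the mass, momentum magnitude and scattering angle below are arbitrary test values). The check also confirms the general elastic-scattering identity \(s+t+u=4m^2\):

```python
import numpy as np

def minkowski_sq(p):
    """Square of a four-vector p = (E, px, py, pz), metric (+,-,-,-)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

# Elastic A + A -> A + A in the CM frame. Assumed test values:
m, pmag, theta = 1.0, 2.0, 0.7
E = np.hypot(m, pmag)  # E = sqrt(m^2 + p^2)

p1 = np.array([E, 0.0, 0.0,  pmag])
p2 = np.array([E, 0.0, 0.0, -pmag])
p3 = np.array([E,  pmag * np.sin(theta), 0.0,  pmag * np.cos(theta)])
p4 = np.array([E, -pmag * np.sin(theta), 0.0, -pmag * np.cos(theta)])

s = minkowski_sq(p1 + p2)
t = minkowski_sq(p1 - p3)
u = minkowski_sq(p1 - p4)

assert np.isclose(s, 4 * (m**2 + pmag**2))
assert np.isclose(t, -2 * pmag**2 * (1 - np.cos(theta)))
assert np.isclose(u, -2 * pmag**2 * (1 + np.cos(theta)))
assert np.isclose(s + t + u, 4 * m**2)
print(s, t, u)
```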
statsmodels.multivariate.factor_rotation.rotate_factors

statsmodels.multivariate.factor_rotation.rotate_factors(A, method, *method_args, **algorithm_kwargs)
Subroutine for orthogonal and oblique rotation of the matrix \(A\). For orthogonal rotations \(A\) is rotated to \(L\) according to \[L = AT,\]
where \(T\) is an orthogonal matrix. And, for oblique rotations \(A\) is rotated to \(L\) according to \[L = A(T^*)^{-1},\]
where \(T\) is a normal matrix.
Parameters

A : numpy matrix (default None)
    non rotated factors
method : str
    should be one of the methods listed below
method_args : list
    additional arguments that should be provided with each method
algorithm_kwargs : dictionary

algorithm : str (default 'gpa')
should be one of:
‘gpa’: a numerical method
‘gpa_der_free’: a derivative free numerical method
‘analytic’ : an analytic method
Depending on the algorithm, there are algorithm specific keyword arguments. For the gpa and gpa_der_free, the following keyword arguments are available:
max_tries : int (default 501)
    maximum number of iterations
tol : float
    stop criterion; the algorithm stops if the Frobenius norm of the gradient is smaller than tol
For analytic, the supported arguments depend on the method, see above.
See the lower level functions for more details.
Returns

the tuple \((L, T)\)
Notes
What follows is a list of available methods. Depending on the method additional argument are required and different algorithms are available. The algorithm_kwargs are additional keyword arguments passed to the selected algorithm (see the parameters section). Unless stated otherwise, only the gpa and gpa_der_free algorithm are available.
Below,
\(L\) is a \(p\times k\) matrix;
\(N\) is \(k\times k\) matrix with zeros on the diagonal and ones elsewhere;
\(M\) is \(p\times p\) matrix with zeros on the diagonal and ones elsewhere;
\(C\) is a \(p\times p\) matrix with elements equal to \(1/p\);
\((X,Y)=\operatorname{Tr}(X^*Y)\) is the Frobenius inner product;
\(\circ\) is the element-wise product or Hadamard product.
oblimin : orthogonal or oblique rotation that minimizes \[\phi(L) = \frac{1}{4}(L\circ L,(I-\gamma C)(L\circ L)N).\]
For orthogonal rotations:
\(\gamma=0\) corresponds to quartimax,
\(\gamma=\frac{1}{2}\) corresponds to biquartimax,
\(\gamma=1\) corresponds to varimax,
\(\gamma=\frac{1}{p}\) corresponds to equamax.
For oblique rotations:
\(\gamma=0\) corresponds to quartimin,
\(\gamma=\frac{1}{2}\) corresponds to biquartimin.
method_args:
gamma : float
    oblimin family parameter
rotation_method : str
    should be one of {orthogonal, oblique}
orthomax : orthogonal rotation that minimizes \[\phi(L) = -\frac{1}{4}(L\circ L,(I-\gamma C)(L\circ L)),\]
where \(0\leq\gamma\leq1\). The orthomax family is equivalent to the oblimin family (when restricted to orthogonal rotations). Furthermore,
\(\gamma=0\) corresponds to quartimax,
\(\gamma=\frac{1}{2}\) corresponds to biquartimax,
\(\gamma=1\) corresponds to varimax,
\(\gamma=\frac{1}{p}\) corresponds to equamax.
method_args:
gamma : float (between 0 and 1)
    orthomax family parameter
CF : Crawford-Ferguson family for orthogonal and oblique rotation which minimizes:\[\phi(L) =\frac{1-\kappa}{4} (L\circ L,(L\circ L)N) -\frac{1}{4}(L\circ L,M(L\circ L)),\]
where \(0\leq\kappa\leq1\). For orthogonal rotations the oblimin (and orthomax) family of rotations is equivalent to the Crawford-Ferguson family. To be more precise:
\(\kappa=0\) corresponds to quartimax,
\(\kappa=\frac{1}{p}\) corresponds to varimax,
\(\kappa=\frac{k-1}{p+k-2}\) corresponds to parsimax,
\(\kappa=1\) corresponds to factor parsimony.
method_args:
kappa : float (between 0 and 1)
    Crawford-Ferguson family parameter
rotation_method : str
    should be one of {orthogonal, oblique}
quartimax : orthogonal rotation method
    minimizes the orthomax objective with \(\gamma=0\)
biquartimax : orthogonal rotation method
    minimizes the orthomax objective with \(\gamma=\frac{1}{2}\)
varimax : orthogonal rotation method
    minimizes the orthomax objective with \(\gamma=1\)
equamax : orthogonal rotation method
    minimizes the orthomax objective with \(\gamma=\frac{1}{p}\)
parsimax : orthogonal rotation method
    minimizes the Crawford-Ferguson family objective with \(\kappa=\frac{k-1}{p+k-2}\)
parsimony : orthogonal rotation method
    minimizes the Crawford-Ferguson family objective with \(\kappa=1\)
quartimin : oblique rotation method
    minimizes the oblimin objective with \(\gamma=0\)
biquartimin : oblique rotation method
    minimizes the oblimin objective with \(\gamma=\frac{1}{2}\)
target : orthogonal or oblique rotation that rotates towards a target matrix \(H\) by minimizing the objective \[\phi(L) =\frac{1}{2}\|L-H\|^2.\]
method_args:
H : numpy matrix
    target matrix
rotation_method : str
    should be one of {orthogonal, oblique}
For orthogonal rotations the algorithm can be set to analytic in which case the following keyword arguments are available:
full_rank : bool (default False)
    if set to True, full rank is assumed
partial_target : orthogonal (default) or oblique rotation that partially rotates towards a target matrix \(H\) by minimizing the objective:\[\phi(L) =\frac{1}{2}\|W\circ(L-H)\|^2.\]
method_args:
H : numpy matrix
    target matrix
W : numpy matrix (default: a matrix with weight one for all entries)
    matrix with weights; entries can be either one or zero
Examples
>>> A = np.random.randn(8, 2)
>>> L, T = rotate_factors(A, 'varimax')
>>> np.allclose(L, A.dot(T))
>>> L, T = rotate_factors(A, 'orthomax', 0.5)
>>> np.allclose(L, A.dot(T))
>>> L, T = rotate_factors(A, 'quartimin', 0.5)
>>> np.allclose(L, A.dot(np.linalg.inv(T.T)))
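For intuition, the varimax case (\(\gamma=1\) in the orthomax family) can be reproduced without statsmodels. The sketch below uses the classic SVD-based iteration rather than the gpa algorithm used by rotate_factors, so treat it as illustrative only:

```python
import numpy as np

def varimax(A, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of A via the SVD-based iteration."""
    p, k = A.shape
    T = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = A @ T
        # Gradient-like update: project onto the orthogonal group via SVD
        u, s, vt = np.linalg.svd(
            A.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        T = u @ vt
        if s.sum() < d * (1.0 + tol):
            break
        d = s.sum()
    return A @ T, T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
L, T = varimax(A)

def crit(M):  # varimax criterion: total variance of squared loadings
    return (M**2).var(axis=0).sum()

assert np.allclose(T.T @ T, np.eye(2))  # T is orthogonal
assert np.allclose(L, A @ T)
assert crit(L) >= crit(A) - 1e-9        # rotation does not decrease it
```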
Lieb-Liniger model of a Bose Gas
Elliott H. Lieb (2008), Scholarpedia, 3(12):8712. doi:10.4249/scholarpedia.8712, revision #91428. The Lieb-Liniger model describes a gas of particles moving in one dimension and satisfying Bose-Einstein statistics.
Introduction
A model of a gas of particles moving in one dimension and satisfying Bose-Einstein statistics was introduced in 1963 (Lieb-Liniger 1963, Lieb 1963) in order to study whether the available approximate theories of such gases, specifically Bogolubov's theory, would conform to the actual properties of the model gas. The model is based on a well-defined Schrödinger Hamiltonian for particles interacting with each other via a two-body potential, and all the eigenfunctions and eigenvalues of this Hamiltonian can, in principle, be calculated exactly.
The ground state as well as the low-lying excited states were computed and found to be in agreement with Bogolubov's theory when the potential is small, except for the fact that there are actually two types of elementary excitations instead of one, as predicted by Bogolubov's and other theories.
The model seemed to be only of academic interest until, with the sophisticated experimental techniques developed in the first decade of the 21\(^{\rm st}\) century, it became possible to produce this kind of gas using real atoms as particles.
Definition and Solution of the Model
There are \(N\) particles with coordinates \(x\) on the line \([0,L]\ ,\) with periodic boundary conditions. Thus, an allowed wave function \(\psi(x_1, x_2, \dots, x_j, \dots,x_N)\) is symmetric, i.e., \(\psi(\dots, x_i,\dots, x_j, \dots) = \psi(\dots, x_j,\dots, x_i, \dots) \) for all \(i \neq j\) and \(\psi\) satisfies \(\psi( \dots, x_j=0, \dots ) =\psi(\dots, x_j=L,\dots )\) for all \(j\ .\) The Hamiltonian, in appropriate units, is
\( H = -\sum\nolimits_{j=1}^N \partial^2/\partial x_j^2 +2c \sum\nolimits_{1\leq i< j\leq N} \delta(x_i-x_j)\ , \)
where \(\delta \) is the Dirac delta function, i.e., the interaction is a contact interaction. The constant \(c\geq 0\) denotes its strength. The delta function gives rise to a boundary condition when two coordinates, say \(x_1 \) and \(x_2\) are equal; this condition is that as \(x_2 \searrow x_1\ ,\) the derivative satisfies \((\frac{\partial}{\partial x_2} - \frac{\partial}{\partial x_1} ) \psi (x_1, x_2)|_{x_2=x_1+}= c \psi (x_1=x_2)\ .\) The hard core limit \(c=\infty\) is known as the Tonks-Girardeau gas (Girardeau 1960).
Schrödinger's time independent equation, \(H\psi = E\psi\) is solved by explicit construction of \(\psi\ .\) Since \(\psi \) is symmetric it is completely determined by its values in the simplex \(\mathcal{R} \ ,\) defined by the condition that \(0\leq x_1\leq x_2 \leq \dots, \leq x_N \leq L\ .\) In this region one looks for a \(\psi \) of the form considered by H.A. Bethe in 1931 in the context of magnetic spin systems -- the Bethe Ansatz. That is, for certain real numbers \(k_1< k_2 < \cdots <k_N\ ,\) to be determined,
\( \psi(x_1, \dots, x_N) = \sum_P a(P)\exp (\ i \sum_{j=1}^N k_{P j} x_j)\ , \)
where the sum is over all \(N !\) permutations, \(P\ ,\) of the integers \(1,2, \dots, N\ ,\) and \(P\) maps \(1,2,\dots,N\) to \( P1,P2,\dots,PN\ .\) The coefficients \(a(P)\ ,\) as well as the \(k\)'s are determined by the condition \(H\psi =E\psi\ ,\) and this leads to
\( E= \sum\nolimits_{j=1}^N\, k_j^2 \)
\( a(P) = \prod\nolimits_{1\leq i<j \leq N} \left(1+\frac{ic}{k_{Pi} -k_{Pj}}\right) \ . \)
T.C. Dorlas (Dorlas, 1993) proved that all eigenfunctions of \(H\) are of this form.
These equations determine \(\psi\) in terms of the \(k\)'s, which, in turn, are determined by the periodic boundary conditions. These lead to the \(N\) equations \[ L\, k_j= 2\pi I_j -2 \sum\nolimits_{i=1}^N \arctan \left(\frac{k_j-k_i}{c} \right) \qquad \qquad {\rm for}\ \ j=1, \, \dots,\, N \ , \]
where \(I_1 < I_2<\cdots < I_N\) are integers when \(N\) is odd and, when \(N\) is even, they take values \(\pm \frac12, \pm \frac32, \dots\ .\) For the ground state the \(I\)'s satisfy
\( I_{j+1} -I_j = 1, \quad {\rm for} \ 1\leq j <N \qquad {\rm and } \ \, I_1=-I_N \ . \)
The first kind of elementary excitation consists in choosing \(I_1,\dots, I_{N-1} \) as before, but increasing \(I_N\) by an amount \(n>0\) (or decreasing \(I_1\) by \(n\)). The momentum of this state is \(p= 2\pi n /L\) (or \(-2\pi n /L\)).
For the second kind, choose some \(0< n \leq N/2\) and increase \(I_i\to I_i+1\) for all \(i\geq n\ .\) The momentum of this state is \(p= \pi - 2\pi n/L\ .\) Similarly, there is a state with \(p= -\pi +2\pi n/L\ .\) The momentum of this type of excitation is limited to \(|p| \leq \pi.\)
These excitations can be combined and repeated many times. Thus, they are bosonic-like. If we denote the ground state (= lowest) energy by \(E_0\) and the energies of the states mentioned above by \(E_{1,2}(p)\) then \(\epsilon_{1}(p) = E_{1}(p)-E_0\) and \(\epsilon_{2}(p) = E_{2}(p)-E_0\) are the excitation energies of the two modes.
Thermodynamic Limit
To discuss a gas we take a limit \(N\) and \(L\) to infinity with the density \(\rho =N/L\) fixed. The ground state energy per particle \(e = E_0/N\ ,\) and the \(\epsilon_{1,2}(p)\) all have limits as \(N\to \infty\ .\) While there are two parameters, \(\rho\) and \(c\ ,\) simple length scaling \(x\to \rho x\) shows that there is really only one, namely \(\gamma =c/\rho\ .\)
To evaluate \(E_0\) we assume that the \(N\) \(k\)'s lie between numbers \(K\) and \(-K\ ,\) to be determined, and with a density \(L\, f(k)\ .\) This \(f\) is found to satisfy the equation (in the interval \(-K \leq k \leq K\))
\( 2c\int\nolimits_{-K}^K \frac{f(p)}{c^2 +(p-k)^2} dp = 2\pi f(k) -1 \quad {\rm and} \quad \int\nolimits_{-K}^K f(p) dp = \rho \ , \)
which has a unique positive solution. An excitation distorts this density \(f\) and similar integral equations determine these distortions. The ground state energy per particle is given by
\( e = \frac{1}{\rho}\int\nolimits_{-K}^K k^2 f(k) dk .\)

Figure 1 shows how \(e\) depends on \(\gamma\) and also shows Bogolubov's approximation to \(e\ .\) The latter is asymptotically exact to second order in \(\gamma\ ,\) namely, \(e\approx \gamma -4\gamma^{3/2}/\pi\ .\) At \(\gamma =\infty\ ,\) \(e = \pi^2/3\ .\)
Figure 2 shows the two excitation energies \(\epsilon_1(p)\) and \(\epsilon_2 (p)\) for a small value of \(\gamma = 0.787\ .\) The two curves are similar to these for all values of \(\gamma >0\ ,\) but the Bogolubov approximation (dashed) becomes worse as \(\gamma \) increases.
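The integral equation above is straightforward to solve numerically by discretizing \([-K, K]\) and inverting the resulting linear system (my addition; the values \(c=1\ ,\) \(K=1\) and the grid size are arbitrary illustrative choices, whereas in practice one would tune \(K\) to reach a prescribed density \(\rho\ \)):

```python
import numpy as np

# Assumed illustrative values: coupling c = 1, cutoff K = 1, n grid points
c, K, n = 1.0, 1.0, 400
k = np.linspace(-K, K, n)
w = np.full(n, 2.0 * K / n)  # simple quadrature weights

# Discretise  2*pi*f(k) - int_{-K}^{K} 2c f(p) / (c^2 + (p-k)^2) dp = 1
kernel = 2.0 * c / (c**2 + (k[None, :] - k[:, None])**2)
M = 2.0 * np.pi * np.eye(n) - kernel * w[None, :]
f = np.linalg.solve(M, np.ones(n))

rho = f @ w                    # density rho = N/L
e = (k**2 * f) @ w / rho       # ground-state energy per particle
gamma = c / rho

print(f"gamma = {gamma:.3f}, e = {e:.4f}")
assert (f > 0).all()           # the unique solution is positive
assert np.allclose(f, f[::-1]) # and symmetric in k
```

In the hard-core limit \(c\to\infty\) the kernel term vanishes, \(f\to 1/(2\pi)\ ,\) and the same construction reproduces \(e \to \pi^2\rho^2/3\ .\)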
From three to one dimension.
This one-dimensional gas can be made using real, three-dimensional atoms as particles. One can prove, mathematically, from the Schrödinger equation for three-dimensional particles in a long cylindrical container, that the low energy states are described by the one-dimensional Lieb-Liniger model. This was done for the ground state in (Lieb-Seiringer-Yngvason 2003) and for excited states in (Seiringer-Yin 2008). The cylinder does not have to be as narrow as the atomic diameter; it can be much wider if the excitation energy in the direction perpendicular to the axis is large compared to the energy per particle \(e\ .\)

References

- Elliott H. Lieb and Werner Liniger, Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State, Physical Review 130:1605-1616, 1963.
- Elliott H. Lieb, Exact Analysis of an Interacting Bose Gas. II. The Excitation Spectrum, Physical Review 130:1616-1624, 1963.
- Teunis C. Dorlas, Orthogonality and Completeness of the Bethe Ansatz Eigenstates of the Nonlinear Schrödinger Model, Communications in Mathematical Physics 154:347-376, 1993.
- Marvin Girardeau, Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension, Journal of Mathematical Physics 1:516-523, 1960.
- Elliott H. Lieb, Robert Seiringer and Jakob Yngvason, One-dimensional Bosons in Three-dimensional Traps, Physical Review Letters 91:150401, 2003.
- Robert Seiringer and Jun Yin, The Lieb-Liniger Model as a Limit of Dilute Bosons in Three Dimensions, Communications in Mathematical Physics 284:459-479, 2008.
In this post of Prof Tao, it shows a lemma about the decomposition of bounded linear functional and a related exercise. The content is
Lemma 9 (Jordan decomposition for functionals). Let $I \in C_c(X \rightarrow \bf{R})^*$ be a (real) continuous linear functional. Then there exist positive linear functionals $I^+, I^- \in C_c(X \rightarrow \bf{R})^*$ such that $I = I^+ - I^-$.
Exercise 15Show that among all possible choices for the functionals $I^+, I^-$ appearing in the above lemma, there is a unique choice which is minimal in the sense that for any other functionals $\tilde I^+, \tilde I^-$ obeying the conclusions of the lemma, one has $\tilde I^+(f) \geq I^+(f)$ and $\tilde I^-(f) \geq I^-(f)$ for all $f \in C_c(X \rightarrow \bf{R})$.
I have a question about the existence of the so-called minimal decomposition, because, roughly speaking, I think the ordering between two decompositions defined in the exercise does not necessarily apply to any two decompositions.

More precisely, given any two decompositions $I=I^+_1-I^-_1=I^+_2-I^-_2$, the only thing we can deduce is that $I^+_1(f)-I^+_2(f)=I^-_1(f)-I^-_2(f)$ holds for every $f\in C_c(X)$. But this does not imply that $I^+_1(f)-I^+_2(f)\ge 0$ (or $\le 0$) for every $f\in C_c(X)$: the difference can be positive for some $f$ and negative for others. All that is required is that $I^+_1-I^+_2$ and $I^-_1-I^-_2$ are the same linear functional. So we can't always say which decomposition is greater or less than the other, right?
In my opinion, this ordering is only a partial order on the set of all possible decompositions. Right?
(I know the uniqueness part is easy to prove if two minimal decompositions exist, because the difference of two minimal positive functionals must be $0$; then they must be equal.)
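For what it is worth, the exercise only requires the minimal pair to be comparable to every other pair; it does not require any two decompositions to be comparable to each other, and a partial order can have a least element without being total. A standard construction of the minimal $I^+$ (a sketch of what I believe the intended argument is) defines, for $f \geq 0$,
$$I^+(f) := \sup\{\, I(g) : g \in C_c(X \rightarrow \mathbf{R}),\ 0 \leq g \leq f \,\},$$
extends $I^+$ to all of $C_c(X \rightarrow \mathbf{R})$ by linearity, and sets $I^- := I^+ - I$. For any other decomposition $I = \tilde I^+ - \tilde I^-$ and any $0 \leq g \leq f$,
$$I(g) = \tilde I^+(g) - \tilde I^-(g) \leq \tilde I^+(g) \leq \tilde I^+(f),$$
so taking the supremum over $g$ gives $I^+(f) \leq \tilde I^+(f)$ for every $f \geq 0$.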
What is a good, single, "periodic table" of all the particles of the Standard Model?
I thought Particle Data Group would have a single-page PDF of this, but I couldn't find a single table listing all the particles.
Usually you see something like this for the fermions: $$ \begin{array}{rccc} \text{Quarks: } & u \choose d & c \choose s & t \choose b \\ \text{Leptons: } & e \choose \nu_e & \mu \choose \nu_\mu & \tau \choose \nu_\tau \end{array} $$ Sometimes the four gauge bosons ($\gamma$, $W^±$, $Z^0$, $g$) will be listed along the side, making a 4×4 block of particles. I find that misleading: there's nothing special about the photon (or any of the others) that should put it in the same row as the quarks with charge +2/3.
For more details, the Contemporary Physics Education Project has some nice posters, like this one:
For composite states, like the different $\pi$ and $\rho$ mesons, the nucleons and delta baryons, etc., the Particle Data Group maintains a large database of particles. There are a few sets of composite particles where an additional symmetry permits arranging the particles by common properties, such as the baryon decuplet. But in chemistry you don't find any periodic table of
molecules; for the most part in physics, the repeating properties of composite particles are just as messy.
Updated: Added the MathML code in source form at the bottom of this post, as requested.
I am now configuring tex4ht to generate MathML instead of .png for the math. Then MathJax is loaded for final display. I found 3 problems (so far) with the MathML code generated by tex4ht.
The first one: \pm (the plus-or-minus sign) generates wrong MathML code. The workaround I just found is to use \mp (minus-or-plus) instead, but I prefer to use \pm as it is more common. For some reason \mp works while \pm does not.
The third problem (the dot) also has a good workaround in LaTeX which I can use. The second problem (the extra large size of the summation sign) has no LaTeX workaround so far.
So I need some help on how to configure tex4ht or the MathML output to avoid this problem, at least for case (2), since I have workarounds for the others for now. But I would prefer to use \pm for case (1).
I am on Linux, using TL 2013. Please assume I am a newbie when giving any instructions on what to do.
Below I show a screenshot of the math generated via MathML with the pdflatex output next to it, and the LaTeX code.
Below that is the MathML code for each case; the cases are numbered 1 to 3. Case (2) (the big summation) has a MathML workaround, which is to remove <mo mathsize="big">. However, I do not know how to tell htlatex about this, where to configure this change, or where in the TL installation to make it. I can't, of course, edit the HTML file by hand each time I build it to fix the MathML; I have hundreds or thousands of small HTML files that I build from LaTeX.
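If no tex4ht-level hook turns up, one blunt option is a post-processing pass over the generated files after each build. This is a sketch, not a tex4ht-sanctioned fix, and the function name is mine:

```shell
# Post-build cleanup (hypothetical helper): strip the mathsize="big"
# attribute that tex4ht puts on the summation <mo>, so MathJax renders
# the sign at normal size. sed -i edits each file in place (GNU sed).
fix_big_mo() {
  for f in "$@"; do
    sed -i 's/<mo mathsize="big">/<mo>/g' "$f"
  done
}
# usage, right after htlatex: fix_big_mo *.htm
```

Since the build already runs from a script or makefile for hundreds of files, one extra line after the htlatex call costs nothing.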
The MWE follows (collection_of_problems.tex)
\documentclass[12pt]{report}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\begin{document}
% (1)
\[
y=\pm\sqrt{x}
\]
\[
y=\mp\sqrt{x} % work around, but I prefer (1) not this one.
\]
% (2)
\[
k= \sum_{i}^{2} m_{i}(x)
\]
% bad dot (3)
\[
\overset{\rightarrow}{a} = \overset{\centerdot}{\omega}_{2}
\]
\[
\overset{\rightarrow}{a} = \overset{\cdot}{\omega}_{2}
\]
\[
\overset{\rightarrow}{a} = \overset{\bullet}{\omega}_{2}
\]
\end{document}
The command to use to build the htm file
htlatex collection_of_problems.tex "my,htm"
Where my.cfg file, in the same folder, is the following:
\Preamble{mathml,ext=htm}
\Configure{VERSION}{}
\Configure{DOCTYPE}{\HCode{<!DOCTYPE html>\Hnewline}}
\Configure{HTML}{\HCode{<html>\Hnewline}}{\HCode{\Hnewline</html>}}
\Configure{@HEAD}{}
\Configure{@HEAD}{\HCode{<meta charset="UTF-8" />\Hnewline}}
\Configure{@HEAD}{\HCode{<meta name="generator" content="TeX4ht (http://www.cse.ohio-state.edu/\string~gurari/TeX4ht/)" />\Hnewline}}
\Configure{@HEAD}{\HCode{<link rel="stylesheet" type="text/css" href="\expandafter\csname aa:CssFile\endcsname" />\Hnewline}}
\Configure{@HEAD}{\HCode{<script type="text/javascript"\Hnewline src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"\Hnewline ></script>\Hnewline}}
\Configure{@HEAD}{\HCode{<style type="text/css">\Hnewline .MathJax_MathML {text-indent: 0;}\Hnewline </style>\Hnewline}}
\begin{document}
\EndPreamble
When the .htm file is created, simply double-click on it and, assuming you are connected to the internet, MathJax will load in your browser and you will see the problems shown in the screenshots above. To build the pdf file, run pdflatex file.tex, of course.
P.S. It is strange that \mp works but \pm does not. Here is the code for \pm.
Updated:
MathML code. Case (1)
<div class="par-math-display"><!--l. 12--><math xmlns="http://www.w3.org/1998/Math/MathML" display="block" ><mrow><mi>y</mi> <mo class="MathClass-rel">=</mo> <mo class="MathClass-bin">±</mo><msqrt><mrow><mi>x</mi></mrow></msqrt></mrow></math>
case (2)
<p class="nopar" ><div class="par-math-display"><!--l. 17--><math xmlns="http://www.w3.org/1998/Math/MathML" display="block" ><mrow><mi>k</mi> <mo class="MathClass-rel">=</mo><munderover accentunder="false" accent="false"><mrow><mo mathsize="big"> ∑</mo></mrow><mrow><mi>i</mi></mrow><mrow><mn>2</mn></mrow></munderover><msub><mrow><mi>m</mi></mrow><mrow><mi>i</mi></mrow></msub><mrow ><mo class="MathClass-open">(</mo><mrow><mi>x</mi></mrow><mo class="MathClass-close">)</mo></mrow></mrow></math>
case (3)
<div class="par-math-display"><!--l. 23--><math xmlns="http://www.w3.org/1998/Math/MathML" display="block" ><mrow><mover class="overset"><mrow><mi>a</mi></mrow><mrow> <mo class="MathClass-rel">→</mo></mrow></mover> <mo class="MathClass-rel">=</mo><msub><mrow> <mover class="overset"><mrow><mi>ω</mi></mrow><mrow> <mo class="MathClass-bin">▪</mo></mrow></mover></mrow><mrow><mn>2</mn></mrow></msub></mrow></math></div>
Update 2:

A workaround was found, thanks to @Khaled Hosny in the comments below: use UTF-8. Changing the defaults in the tex4ht command fixed the problem with \pm. This is the new command to use:
htlatex foo.tex "my,htm,charset=utf-8" " -cunihtf -utf8"
The only problem left is the large size of the summation sign, case (2) above. Changing the default renderer from HTML-CSS to MathML fixed this as well. But I am not sure whether this is something I can control from my end with a MathJax setting, or whether it is purely a client-browser-side choice, in which case I can let it go.
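On the renderer question: with MathJax v2 you can set the default renderer from the page itself, so visitors get the MathML renderer without touching their right-click menu settings. If I read the MathJax v2 configuration docs correctly, a configuration block placed before the loader script (i.e. added to the @HEAD lines in my.cfg) would look roughly like this; treat it as a sketch:

```
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    menuSettings: { renderer: "NativeMML" }
  });
</script>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```

Note that NativeMML relies on the browser having native MathML support (Firefox does; many others do not), so HTML-CSS may still be the safer default for general visitors.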