Sequence data can allow migration/transmission patterns (i.e. who infected whom) to be uncovered. Genetic samples yield trees: information about events ancestral to the samples. We can use a chemical reaction notation to describe rates and effects of possible events. The parameters $\lambda$ and $\mu$ are the probabilities per unit time that any given individual experiences a birth or a death. Additionally, the model allows each surviving lineage at the end of the process (present day) to be sampled with probability $\rho$. This gives rise to differential equations which can be solved to obtain the following tree probability: \begin{equation*} P(T|\lambda,\mu,\psi,r,t_0) = g(t_0) =\lambda^{n+m-1}\psi^{k+m}(4\rho)^n\prod_{i=0}^{n+m-1}\frac{1}{q(x_i)}\prod_{i=1}^{m}p_0(y_i)q(y_i) \end{equation*} where $q(t)=4\rho/g(t)$. [Stadler, J. Theor. Biol., 2010] There are several distinct parameterizations besides the basic $\lambda,\mu,\psi$ parameterization. Coalescent model: probability of coalescence in generation $i-m$: $P(m)=(1-p_{\textrm{coal}})^{m-1}p_{\textrm{coal}}$ Continuous time limit (large $N$, small $g$): $P(t)=e^{-\frac{1}{Ng}t}\frac{1}{Ng}$ Question: How can this be generalized to $k$ samples? Answer: $p_{\text{coal}}=\frac{k(k-1)}{2}\frac{1}{N}=\binom{k}{2}\frac{1}{N}$ Birth-death models can infer effective reproductive number dynamics; coalescent models can infer effective population size dynamics.
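As a quick sanity check on the $k$-sample generalization above, here is a minimal Python sketch (the population size, sample count, and generation time below are illustrative assumptions, not values from the notes) that simulates the geometric per-generation coalescence process and compares the mean waiting time with the exponential approximation of rate $\binom{k}{2}\frac{1}{Ng}$:

```python
import random

def coalescent_wait(N, k, g=1.0):
    """Time until the first coalescence among k lineages in a population of size N,
    with per-generation coalescence probability k(k-1)/(2N) and generation time g."""
    p_coal = k * (k - 1) / (2 * N)
    m = 1
    while random.random() >= p_coal:   # geometric waiting time in generations
        m += 1
    return m * g

# Illustrative parameters (assumptions for demonstration only)
N, k, g, reps = 10_000, 5, 1.0, 20_000
waits = [coalescent_wait(N, k, g) for _ in range(reps)]
rate = k * (k - 1) / (2 * N * g)
print("simulated mean wait:", sum(waits) / reps)   # should approach 1/rate for large N
print("1/rate             :", 1 / rate)
```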
First, the Unruh and Hawking radiation aren't quite "the same thing". They have a similar origin and the Unruh radiation may be considered a flat space (large black hole) limit of the Hawking radiation. Now, the near-horizon metric of an extremal black hole is $AdS_2\times S^2$ while for a non-extremal one, the $AdS_2$ is replaced by the Rindler space. This $AdS_2$ (two-dimensional anti de Sitter space) has consequences. First, in the static coordinates, the proper distance of any observer from the event horizon diverges. On the Wikipedia page I linked to, the formula$$ ds^2=-\frac{r^2}{M^2}\,dt^2+\frac{M^2}{r^2}\,dr^2+M^2\,\big(d\theta^2+\sin^2\theta\,d\phi^2 \big)$$implies that near $r=0$ (which corresponded to the horizon $r=M=Q$ in the original coordinates), $s =\int ds$ is proportional to the integral of $M/r$, and therefore logarithmically diverges. The metric in the displayed formula above is locally $AdS_2$ – the curvature is constant and it's a maximally symmetric space etc. – but it is only a part of the $AdS_2$ space. The coordinates we got were the so-called "Poincaré coordinate" and they only covered a part of the $AdS_2$ space, the so-called Poincaré patch. The Poincaré patch covers the green portion of the "global $AdS_2$" on the right part of the picture above. The observer sitting at the horizon however moves along the upper 45° tilted boundary of the green triangle and his trajectory inevitably is a geodesic. So he experiences no local acceleration – and no Unruh radiation. This is actually related to the fact that the near-horizon metric $AdS_2\times S^2$ with the appropriate electric or magnetic flux is a solution to Einstein's equations by itself – while the non-extremal near-horizon metric isn't a solution by itself. Because the local curvature of these trajectories vanishes, the Unruh temperature vanishes, as also expected from the fact that when the "two horizons" coincide, the gravitational acceleration at the horizon vanishes. So because the acceleration and temperature near this horizon is zero, there is no Unruh or Hawking radiation seen by this observer. In the non-extremal case, there is a radiation that the observer keeping himself a bit above the horizon sees. Locally, it may be interpreted as the Unruh radiation, and the Unruh radiation could be undone in the flat space by using the non-accelerating reference frame. However, in a finite non-extremal black hole spacetime, things are different. The static Schwarzschild coordinates behave at $r=\infty$ as non-accelerating coordinate in the Minkowski space, but near $r=r_0$, they behave as the locally accelerating frame where the Unruh radiation exists. With the Schwarzschild choice of the time and the corresponding energy, we know that the fields aren't in the ground state of this Schwarzschild $H$ near $r=r_0$ because there's the Unruh radiation. Because $H$ is a symmetry of the background, it must be true after some time, too. At $r\to\infty$, these excitations must be still there, even though the curvature may already be neglected at $r\to\infty$. So that's why the Unruh radiation is seen as a real, Hawking radiation by the observer at infinity (where the attraction by the black hole becomes negligible).
$\underline{\bf Background}$ In 2005, Regev [1] introduced the Learning with Errors (LWE) problem, a generalization of the Learning Parity with Error problem. The assumption of this problem's hardness for certain parameter choices now underlies the security proofs for a host of post-quantum cryptosystems in the field of lattice-based cryptography. The "canonical" versions of LWE are described below. Preliminaries: Let $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ be the additive group of reals modulo 1, i.e. taking values in $[0, 1)$. For positive integers $n$ and $2 \le q \le poly(n)$, a "secret" vector ${\bf s} \in \mathbb{Z}_q^n$, a probability distribution $\phi$ on $\mathbb{R}$, let $A_{{\bf s}, \phi}$ be the distribution on $\mathbb{Z}_q^n \times \mathbb{T}$ obtained by choosing ${\bf a} \in \mathbb{Z}_q^n$ uniformly at random, drawing an error term $x \leftarrow \phi$, and outputting $({\bf a}, b' = \langle{\bf a}, s\rangle/q + x) \in \mathbb{Z}_q^n \times \mathbb{T}$. Let $A_{{\bf s}, \overline{\phi}}$ be the "discretization" of $A_{{\bf s}, \phi}$. That is, we first draw a sample $({\bf a}, b')$ from $A_{{\bf s}, \phi}$ and then output $({\bf a}, b) = ({\bf a}, \lfloor b'\cdot q\rceil) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$. Here $\lfloor\circ\rceil$ denotes rounding $\circ$ to the nearest integral value, so we can view $({\bf a}, b)$ as $({\bf a}, b= \langle {\bf a}, {\bf s} \rangle + \lfloor q\cdot x\rceil)$. In the canonical setting, we take the error distribution $\phi$ to be a Gaussian. For any $\alpha > 0$, the density function of a 1-dimensional Gaussian probability distribution over $\mathbb{R}$ is given by $D_{\alpha}(x)=e^{-\pi(x/\alpha)^2}/\alpha$. We write $A_{{\bf s}, \alpha}$ as shorthand for the discretization of $A_{{\bf s}, D_\alpha}$ LWE Definition: In the search version $LWE_{n, q, \alpha}$ we are given $N = poly(n)$ samples from $A_{{\bf s}, \alpha}$, which we can view as "noisy" linear equations (Note: ${\bf a}_i, {\bf s} \in \mathbb{Z}_q^n, b_i \in \mathbb{Z}_q$): $$\langle{\bf a}_1, {\bf s}\rangle \approx_\chi b_1\mod q$$ $$\vdots$$ $$\langle{\bf a}_N, {\bf s}\rangle \approx_\chi b_N\mod q$$ where the error in each equation is independently drawn from a (centered) discrete Gaussian of width $\alpha q$. Our goal is to recover ${\bf s}$. (Observe that, with no error, we can solve this with Gaussian elimination, but in the presence of this error, Gaussian elimination fails dramatically.) In the decision version $DLWE_{n, q, \alpha}$, we are given access to an oracle $\mathcal{O}_{\bf s}$ that returns samples $({\bf a}, b)$ when queried. We are promised that the samples either all come from $A_{{\bf s}, \alpha}$ or from the uniform distribution $U(\mathbb{Z}_q^n)\times U(\mathbb{Z}_q)$. Our goal is to distinguish which is the case. Both problems are believed to be $hard$ when $\alpha q > 2\sqrt n$. Connection to Complexity Theory: It is known (see [1], [2] for details) that LWE corresponds to solving a Bounded Distance Decoding (BDD) problem on the dual lattice of a GapSVP instance. A polynomial time algorithm for LWE would imply a polynomial time algorithm to approximate certain lattice problems such as SIVP and SVP within $\tilde O(n/\alpha)$ where $1/\alpha$ is a small polynomial factor (say, $n^2$). Current Algorithmic Limits When $\alpha q \le n^\epsilon$ for $\epsilon$ strictly less than 1/2, Arora and Ge [3] give a subexponential-time algorithm for LWE. 
The idea is that, from well-known properties of the Gaussian, drawing error terms this small fits into a "structured noise" setting except with exponentially low probability. Intuitively in this setting, every time we would have received 1 sample, we receive a block of $m$ samples with a promise that no more than some constant fraction contain error. They use this observation to "linearize" the problem, and enumerate over the error space. $\underline{\bf Question}$ Suppose we are, instead, given access to an oracle $\mathcal{O}_{\bf s}^+$. When queried, $\mathcal{O}_{\bf s}^+$ first queries $\mathcal{O}_{\bf s}$ to obtain a sample $({\bf a}, b)$. If $({\bf a}, b)$ was drawn from $A_{{\bf s}, \alpha}$, then $\mathcal{O}_{\bf s}^+$ returns a sample $({\bf a}, b, d) \in \mathbb{Z}_q^n \times \mathbb{Z}_q \times \mathbb{Z}_2$ where $d$ represents the "direction" (or $\pm$-valued "sign") of the error term. If $({\bf a}, b)$ was drawn at random, then $\mathcal{O}_{\bf s}^+$ returns $({\bf a}, b, d) \leftarrow U(\mathbb{Z}_q^n)\times U(\mathbb{Z}_q)\times U(\mathbb{Z}_2)$. (Alternatively, we could consider the case when the bit $d$ is chosen adversarially when $b$ is drawn uniformly at random.) Let $n, q, \alpha$ be as before, except that now $\alpha q > c\sqrt n$ for a sufficiently large constant $c$, say. (This is to ensure that the absolute error in each equation remains unaffected.) Define the Learning with Signed Error (LWSE) problems $LWSE_{n, q, \alpha}$ and $DLWSE_{n, q, \alpha}$ as before, except that now we have the additional bit of advice for each error term's sign. Are either version of LWSE significantly easier than their LWE counterparts? E.g. 1. Is there a subexponential-time algorithm for LWSE? 2. What about a polynomial-time algorithm based on, say, linear programming? In addition to the above discussion, my motivation is an interest in exploring algorithmic options for LWE (of which we currently have relatively few to choose from). In particular, the only restriction known to provide good algorithms for the problem is related to the magnitude of the error terms. Here, the magnitude remains the same, but the range of error in each equation is now "monotone" in a certain way. (A final comment: I'm unaware of this formulation of the problem appearing in the literature; it appears to be original.) References: [1] Regev, Oded. "On Lattices, Learning with Errors, Random Linear Codes, and Cryptography," in JACM 2009 (originally at STOC 2005) (PDF) [2] Regev, Oded. "The Learning with Errors Problem," invited survey at CCC 2010 (PDF) [3] Arora, Sanjeev and Ge, Rong. "New Algorithms for Learning in Presence of Errors," at ICALP 2011 (PDF)
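For concreteness, here is a small Python sketch of the sampling process behind both oracles: it draws a canonical discretized LWE sample and, when asked, also reveals the sign bit $d$ that $\mathcal{O}_{\bf s}^+$ would output. The parameters are toy, insecure values chosen only for illustration, and the code is a sketch of the definitions above, not taken from any of the cited papers.

```python
import math
import random

def lwse_sample(s, q, alpha, signed=True):
    """One discretized LWE sample (a, b); with signed=True also return d = [error >= 0],
    the extra "direction" bit that the LWSE oracle reveals."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    # D_alpha has density exp(-pi (x/alpha)^2)/alpha, i.e. standard deviation alpha/sqrt(2*pi)
    x = random.gauss(0.0, alpha / math.sqrt(2 * math.pi))
    e = round(q * x)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return (a, b, int(e >= 0)) if signed else (a, b)

# Toy parameters (illustrative assumptions only; far too small to be secure)
n, q = 8, 97
alpha = 8 / q                  # alpha * q = 8, comfortably above sqrt(n)
s = [random.randrange(q) for _ in range(n)]
print(lwse_sample(s, q, alpha))
```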
Conway’s puzzle M(13) is a variation on the 15-puzzle played with the 13 points in the projective plane $\mathbb{P}^2(\mathbb{F}_3) $. The desired position is given on the left where all the counters are placed at at the points having that label (the point corresponding to the hole in the drawing has label 0). A typical move consists in choosing a line in the plane going through the point where the hole is, choose one of the three remaining points on this line and interchange the counter on it for the hole while at the same time interchanging the counters on the other two points. In the drawing on the left, lines correspond to the little-strokes on the circle and edges describe which points lie on which lines. For example, if we want to move counter 5 to the hole we notice that both of them lie on the line represented by the stroke just to the right of the hole and this line contains also the two points with counters 1 and 11, so we have to replace these two counters too in making a move. Today we will describe the groupoid corresponding to this slide-puzzle so if you want to read on, it is best to play a bit with Sebastian Egner’s M(13) Java Applet to see the puzzle in action (and to use it to verify the claims made below). Clicking on a counter performs the move taking the counter to the hole. For the 15-puzzle I’ve gone to great lengths of detail here and there explaining how a groupoid naturally crops up having as its objects the reachable positions and as its morphisms the legal slide-sequences. Here, I’ll economize on details. We can encode a position by a permutation in $S_{13} $ by recording the counters (the hole having counter 0) as we move along the circle clockwise starting at the point of label 0 (the top-point). Basic moves transpose two pairs of counters so are given by a product of two transpositions. For example, the move described above from the initial position is $~(0,5)(1,11) $. Again it is clear how to make a groupoid from the reachable positions and the legal move-sequences and how all actual calculations can be done inside the group $S_{13} $. Two small remarks. (1) The situation is more symmetric than in the 15-puzzle. Here we have precisely 12 possible basic moves from any given position corresponding to the 12 non-hole counters which can be thrown into the hole. (2) Related to this, we have another way to encode move-sequences here. For each basic move we can jot down the label of the point whose counter we will throw to the hole (note : label, not counter!). The point of this being that we can now describe all reachable positions having the hole at the top point (the label 0 point) as those obtained from a move sequence of the form $~[0-i_1-i_2-\ldots-i_k-0]~ $ for all choices of $i_j $ between 0 and 12. However, not all these sequences give different positions and we want to determine how many distinct such positions we have. They will again form a subgroup of $S_{12} $ and the aim will be to show that this subgroup is the sporadic simple Mathieu group $M_{12} $. We will check now that $M_{12} $ is contained in this group. _Next time_ we will prove the other inclusion. Clearly, there are several different ways to label the 13 points and lines in the projective plane and unfortunately the choice of the Conway-Elkies-Martin paper is different from that of the Java Applet. 
For example, in the Applet-labeling {1,3,4,8} are on a line, whereas the paper-labeling assumes the following point/line labels $l_0 = \{ 0,1,2,3 \}, l_1= \{ 0,4,5,6 \}, l_2 = \{ 0,9,10,11 \}, l_3 = \{ 0,7,8,12 \}, l_4= \{ 1,4,8,9 \} $ $l_5 = \{ 1,6,7,11 \}, l_6= \{ 1,5,10,12 \}, l_7= \{ 3,5,8,11 \}, l_8 = \{ 3,4,7,10 \} $ $l_9=\{ 2,4,11,12 \}, l_{10}=\{ 2,6,8,10 \}, l_{11}=\{ 2,5,7,9 \}, l_{12} = \{ 3,6,9,12 \} $ We need to find a dictionary between the two labeling-systems. Again there are several options, but here is the first one I found. Relabeling the points of the Applet as on the left (also indicated is the labeling of the lines) we get the labeling of the paper. Hence, to all CEM-paper-sequences we have to apply the dictionary 0(0), 1(1), 2(11), 3(5), 4(12), 5(10), 6(4), 7(8), 8(6), 9(2), 10(7), 11(3), 12(9) and use the bracketed labels to perform the sequence in the Java Applet. For example, if Conway-Elkies-Martin compute the effect of the move-sequence [0-11-7-9-8-3-0] (read from left to right) then we first have to translate this via the dictionary to the move-sequence [0-3-8-2-6-5-0]. Then, we perform this sequence in the Java-applet (note again: a basic move is indicated by the label of the point to click on, NOT the counter) and record the final position. Below we depict the final positions for the three move-sequences [0-3-8-2-6-5-0], [0-9-1-2-0-5-6-12-0] and [0-1-8-0-5-4-0-1-8-0] which are our translations of the three basic move-sequences on page 9 of the CEM-paper (from left to right). This gives us three reachable positions having their hole at the top. They correspond to the following permutations in the symmetric group $S_{12} $ (from left to right) $\alpha = (1,10,8,7,2,6,5,3,11,12,4), \beta=(1,9)(2,11)(3,7)(4,10)(5,12)(6,8) $ $ \gamma=(2,6)(3,11)(5,8)(10,12) $ Using GAP (or the arithmetic progression loop description of $M_{12} $ as given in Chapter 11, Section 18 of Conway-Sloane, modulo relabeling) we find that the group generated by these three elements is simple and of order 95040 and is isomorphic to the sporadic Mathieu group $M_{12} $. This corresponds to the messy part of the 15-puzzle in which we had to find enough reachable positions to generate $A_{15} $. The more conceptual part (the OXO-labeling showing that all positions must belong to $A_{15} $) also has a counterpart here. But, before we can tell that story we have to get into linear codes and in particular the properties of the _tetra-code_… Reference John H. Conway, Noam D. Elkies and Jeremy L. Martin “The Mathieu Group $M_{12} $ and its pseudogroup extension $M_{13} $” arXiv-preprint
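For readers without GAP at hand, the order claim can be double-checked with a short SymPy sketch in Python; the three permutations are copied from the text above (0 is the fixed hole label), and the printed order should be 95040, the order of $M_{12}$.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Cyclic-form permutations alpha, beta, gamma from the post (acting on labels 1..12, 0 fixed)
alpha = Permutation([[1, 10, 8, 7, 2, 6, 5, 3, 11, 12, 4]], size=13)
beta  = Permutation([[1, 9], [2, 11], [3, 7], [4, 10], [5, 12], [6, 8]], size=13)
gamma = Permutation([[2, 6], [3, 11], [5, 8], [10, 12]], size=13)

G = PermutationGroup([alpha, beta, gamma])
print(G.order())   # expected: 95040 = |M_12|, as claimed in the post
```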
Consider the powers of $2$: $$ \begin{array}{rcl} 2^1 & = & 2\\ 2^2 & = & 4\\ 2^3 & = & 8\\ 2^4 & = & 16\\ 2^5 & = & 32\\ 2^6 & = & 64\\ 2^7 & = & 128\\ 2^8 & = & 256\\ 2^9 & = & 512\\ 2^{10} & = & 1024\\ \end{array} $$ $$\cdots$$ They always end in $2$, $4$, $6$ or $8$, but we know little about their beginnings. Is there a power of $2$ starting with $9$? Is there a power of $2$ starting with $9999\cdots 9999$ ($99$ nines)? When considering more and more powers of $2$, what proportion of them starts with $9$ in the limit? There are powers of $2$ starting with $9$, for instance $2^{53} = 9007199254740992$. Now we can't proceed by direct inspection, because humongous numbers are involved... The right direction to prove that there are powers of $2$ starting with $9999\cdots 9999$ ($99$ nines) is to work with logarithms to the base $10$ (denoted $\log$): if some power starts with $9999\cdots 9999$ $$2^n=9999\cdots 9999\cdots\cdots\cdots$$ then $$n\log 2=\log(2^n)=\log(9999\cdots 9999\cdots\cdots\cdots)=\log(9.999\cdots 9999\cdots\cdots\cdots)+m$$ where $m$ is the number of digits of $9999\cdots 9999\cdots\cdots\cdots$ minus $1$. Considering the fractional part, $$\{n\log 2\}=\log(9.999\cdots\cdots\cdots)$$ that is, we're looking for some $n$ that satisfies $$\log(9.999\cdots 9999)\leqslant\{n\log 2\}\lt\log(10)=1$$ and this is possible, because $\log 2$ is an irrational number (check it out! $2^q\neq 10^p$ for positive integers $p$ and $q$)! And this ensures that the fractional parts of $n\log 2$ are dense in the interval $[0,1]$: one may find elements of this set arbitrarily close to any given number $0\leqslant x\leqslant 1$. So we only have to wait for some $\{n\log 2\}$ to fall into $[\log(9.999\cdots 9999),1)$; we will have to wait a lot, sure, but there will be such powers. And with the same argument, we find out that there are powers of $2$ starting with any given finite sequence of digits. Cool, isn't it? Since in the limit the values $\{n\log 2\}$ are equidistributed in the interval $[0,1]$, the proportion of powers starting with $9$ is $$\log 10-\log 9\simeq 0.0458$$ That's quite interesting: there are powers of $2$ starting with any digit, but bigger digits appear less often as leading digits. Had you ever thought about it?
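The argument is effective enough to find explicit exponents. Here is a brute-force Python sketch over the fractional parts of $n\log 2$ (the search bound and the use of ordinary floats are pragmatic assumptions, so it works for short prefixes but not for the $99$-nines case):

```python
import math

def first_power_of_2_starting_with(prefix, limit=1_000_000):
    """Smallest n <= limit such that the decimal expansion of 2**n starts with `prefix`."""
    d = len(str(prefix))
    lo = math.log10(prefix / 10 ** (d - 1))        # e.g. prefix=9 -> log10(9) ~ 0.954
    hi = math.log10((prefix + 1) / 10 ** (d - 1))  # e.g. prefix=9 -> log10(10) = 1.0
    log2 = math.log10(2)
    for n in range(1, limit):
        if lo <= (n * log2) % 1.0 < hi:
            return n
    return None

print(first_power_of_2_starting_with(9))    # 53, since 2**53 = 9007199254740992
print(first_power_of_2_starting_with(999))  # a power of 2 beginning with 999
```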
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ... Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
Zagreb Indices and Multiplicative Zagreb Indices of Eulerian Graphs Abstract For a graph \(G = (V(G), E(G))\), let \(d(u)\), \(d(v)\) be the degrees of the vertices u, v in G. The first and second Zagreb indices of G are defined as \( M_1(G) = \sum _{u \in V(G)} d(u)^2\) and \( M_2(G) = \sum _{uv \in E(G)} d(u)d(v)\), respectively. The first (generalized) and second Multiplicative Zagreb indices of G are defined as \(\Pi _{1,c}(G) = \prod _{v \in V(G)}d(v)^c\) and \(\Pi _2(G) = \prod _{uv \in E(G)} d(u)d(v)\), respectively. The (Multiplicative) Zagreb indices have been the focus of considerable research in computational chemistry dating back to Narumi and Katayama in the 1980s. Denote by \({\mathcal {G}}_{n}\) the set of all Eulerian graphs of order n. In this paper, we characterize the Eulerian graphs with the first three smallest and largest Zagreb indices and Multiplicative Zagreb indices in \({\mathcal {G}}_{n}\). Keywords: Extremal bounds, Zagreb index, Multiplicative Zagreb index, Eulerian graphs Mathematics Subject Classification: 05C12, 05C05 Acknowledgements This work is partially supported by National Natural Science Foundation of China (Nos. 11601006, 11471016, 11401004, 11571134, 11371162), Anhui Provincial Natural Science Foundation (Nos. KJ2015A331, KJ2013B105, 1408085QA03), and the Self-determined Research Funds of Central China Normal University from the colleges' basic research and operation of MOE. The authors would like to express their sincere gratitude to the anonymous referees and the editor for many friendly and helpful suggestions, which led to a great deal of improvement of the original manuscript. References 12. Wang, S., Farahani, M., Baig, A., Sajja, W.: The sadhana polynomial and the sadhana index of polycyclic aromatic hydrocarbons PAHk. J. Chem. Pharm. Res. 8, 526–531 (2016) 20. Nikolić, S., Kovačević, G., Miličević, A., Trinajstić, N.: The Zagreb indices 30 years after. Croat. Chem. Acta 76, 113–124 (2003) 22. Narumi, H., Katayama, M.: Simple topological index. A newly devised index characterizing the topological nature of structural isomers of saturated hydrocarbons. Mem. Fac. Eng. Hokkaido Univ. 16, 209–214 (1984) 33. Farahani, M.R.: Zagreb indices and Zagreb polynomials of polycyclic aromatic hydrocarbons PAHs. J. Chem. Acta 2, 70–72 (2013)
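As a quick illustration of the four definitions (not of the paper's extremal results), here is a Python sketch using networkx that evaluates them on an Eulerian graph, the cycle $C_5$ in this case:

```python
import math
import networkx as nx

def zagreb_indices(G, c=1):
    """First and second Zagreb indices and their multiplicative analogues."""
    deg = dict(G.degree())
    M1   = sum(d ** 2 for d in deg.values())                    # sum of squared degrees
    M2   = sum(deg[u] * deg[v] for u, v in G.edges())           # sum over edges
    Pi1c = math.prod(d ** c for d in deg.values())              # generalized multiplicative index
    Pi2  = math.prod(deg[u] * deg[v] for u, v in G.edges())     # second multiplicative index
    return M1, M2, Pi1c, Pi2

G = nx.cycle_graph(5)            # C5 is Eulerian: connected with all degrees even
print(zagreb_indices(G))         # (20, 20, 32, 1024) for c = 1
```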
Suppose we have a normally distributed node: $\theta \sim N(\theta_0, \sigma_0^2)$ whose PDF will be referred to as $g(\theta)\,$. We will make a decision among two choices. Our utility depends upon the value of $\theta$ and the choice we make. We assume that for each choice, the utility function is linear: $U(1, \theta) = m_1\theta + b_1\,$ $U(2, \theta) = m_2\theta + b_2\,$ The utility associated with the “best” decision in this case is given by: $\max_i E_\theta[U(i, \theta)]\,$ $= \max_i E_\theta[m_i\theta + b_i]\,$ $= \max_i (m_i E_\theta[\theta] + b_i)\,$ (using the linearity property of Expected Values) $= \max_i (m_i \theta_0 + b_i)\,$ where the subscript in $E_\theta[]$ means to integrate out $\theta$ when taking the Expected Value. Note that there is a point $\theta_b$ such that $U(1, \theta_b) = U(2, \theta_b)$. When $\theta$ holds this value, we are indifferent to decision 1 vs. decision 2. That is: $U(1, \theta) = U(2, \theta)$ $ m_1\theta_b + b_1 = m_2\theta_b + b_2$ $ (m_1-m_2)\theta_b = b_2 - b_1$ so the breakeven point $\theta_b$ is given by: $\theta_b = \frac{b_1 - b_2}{m_2 - m_1}$ Or $\theta_b = \frac{b_2 - b_1}{m_1 - m_2}$ There is another node $y$ which may or may not be observed: $y|\theta \sim N(\theta, \sigma_y^2)$ Some cost is associated with observing $y$, so it may or may not be worthwhile to make the observation. We will find the Expected Value of Sample Information (EVSI) to help us make this decision. If EVSI is greater than the cost of observing $y$, then we should choose to observe $y$. If it is less than the cost, we should not make the observation. We define $\tilde{\theta}(y)$ to be the posterior mean of $\theta|y$. Recall from earlier class discussions that the posterior is distributed: $\theta|y \sim N(\frac{\sigma_y^2 \theta_0 + \sigma_0^2 y}{\sigma_0^2 + \sigma_y^2}, \frac{\sigma_0^2 \sigma_y^2}{\sigma_0^2 + \sigma_y^2})$ Thus the posterior mean of $\theta|y$ is a linear function of $y$ which we will call$\tilde{\theta}(y)$: $E_\theta[\theta|y]=\tilde{\theta}(y) = \frac{\sigma_y^2 \theta_0 + \sigma_0^2 y}{\sigma_0^2 + \sigma_y^2} = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} y + \frac{\sigma_y^2}{\sigma_0^2 +\sigma_y^2} \theta_0$ This function can be inverted: $\tilde{\theta}(y) - \frac{\sigma_y^2}{\sigma_0^2 +\sigma_y^2} \theta_0 = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} y$ $\frac{\tilde{\theta}(y) - \frac{\sigma_y^2}{\sigma_0^2 +\sigma_y^2} \theta_0} {\frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2}}= y$ So $y= \tilde{\theta}^{-1}(\theta) = \frac{(\sigma_0^2 + \sigma_y^2) \theta - \sigma_y^2 \theta_0}{\sigma_0^2}$ The breakeven observation $y_b$ is the observation which causes the posterior mean $\theta|y_b$ to move to the breakeven point: $y_b = \tilde{\theta}^{-1}(\theta_b)$ If we observe the value $y_b$, we will be indifferent to decision 1 vs. decision 2. Since the utility functions are linear, we will always prefer one of the decisions if we observe a value less than $y_b$, and we will always prefer the other decision if we observe a value greater than $y_b$. 
We can calculate $y_b$: $y_b = \frac{(\sigma_0^2 + \sigma_y^2) \theta_b - \sigma_y^2 \theta_0}{\sigma_0^2} = \theta_b + \frac{\sigma_y^2}{\sigma_0^2} (\theta_b - \theta_0)$ Recall the definition of the Expected Value of Sample Information and the discussion about the Expected Prior Utility: $EVSI = E_y\left[\max_i E_\theta[U(i, \theta)|y]\right] - max_j E_\theta[U(j, \theta)]\,$ $= E_y\left[\max_i (m_i E_\theta[\theta|y] + b_i)\right] - max_j (m_j E_\theta[\theta] + b_j)\,$ $= E_y\left[\max_i (m_i \tilde{\theta}(y) + b_i)\right] - max_j (m_j \theta_0 + b_j)\,$ Without loss of generality, assume that decision 1 is better than decision 2 in the case of the prior. Re-label choice 1 and 2 if this is not the case. That is re-label such that: $E[U(1, \theta)] > E[U(2, \theta)]\,$ This lets us write: $EVSI = E_y\left[\max_i (m_i \tilde{\theta}(y) + b_i)\right] - (m_1 \theta_0 + b_1)\,$ The next step is tricky. First, we show that $E_y\left[\tilde{\theta}(y)\right] = \theta_0$: $E_y[\tilde{\theta}(y)] = E_y[\frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} y + \frac{\sigma_y^2}{\sigma_0^2 + \sigma_y^2} \theta_0]\,$ $= \frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} E_y[y] + \frac{\sigma_y^2}{\sigma_0^2 + \sigma_y^2} \theta_0\,$ $= \frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} \theta_0 + \frac{\sigma_y^2}{\sigma_0^2 + \sigma_y^2} \theta_0 = \theta_0\,$ where the last step is possible because: $Y \sim N(\theta_0, \sigma_0^2 + \sigma_y^2)\,$ Now we can use this fact (in reverse) to make a clever substitution for $\theta_0$ and rewrite the EVSI equation as follows: $EVSI = E_y\left[\max_i (m_i \tilde{\theta}(y) + b_i)\right] - (m_1 E_y[\tilde{\theta}(y)] + b_1)\,$ $= E_y\left[\max_i (m_i \tilde{\theta}(y) + b_i)\right] - E_y[m_1 \tilde{\theta}(y) + b_1]\,$ $= E_y\left[\max_i (m_i \tilde{\theta}(y) + b_i) - (m_1 \tilde{\theta}(y) + b_1)\right]\,$ Using the definition of the expected value, this becomes: $EVSI = \int_{-\infty}^{\infty}\left[\max_i (m_i \tilde{\theta}(y) + b_i) - (m_1 \tilde{\theta}(y) + b_1)\right] \bar{f}(y) dy$ $= \int_{-\infty}^{y_b}\left[\max_i (m_i \tilde{\theta}(y) + b_i) - (m_1 \tilde{\theta}(y) + b_1)\right] \bar{f}(y) dy$ $+ \int_{y_b}^{\infty}\left[\max_i (m_i \tilde{\theta}(y) + b_i) - (m_1 \tilde{\theta}(y) + b_1)\right] \bar{f}(y) dy $ where $\bar{f}(y)$ is the marginal pdf of Y and $Y \sim N(\theta_0, \sigma_0^2 + \sigma_y^2)\,$ as noted above. To get rid of the max in the EVSI equation, we will look at two separate cases. In the first case, we suppose that $\theta_0 > \theta_b$ and note that $\tilde{\theta}(\cdot)$ is monotone increasing. When the observation $y > y_b$, $\tilde{\theta}(y) > \theta_b$. We don't change our mind, and decision 1 is still the best choice. However, when $y < y_b$, decision 2 is the best choice, given the observation. $EVSI = \int_{-\infty}^{y_b}[(m_2 \tilde{\theta}(y) + b_2) - (m_1 \tilde{\theta}(y) + b_1)] \bar{f}(y) dy$ $+ \int_{y_b}^{\infty}[(m_1 \tilde{\theta}(y) + b_1) - (m_1 \tilde{\theta}(y) + b_1)] \bar{f}(y) dy $ $= \int_{-\infty}^{y_b}[(m_2 - m_1) \tilde{\theta}(y) + (b_2 - b_1)] \bar{f}(y) dy + 0$ $= (m_2 - m_1) \int_{-\infty}^{y_b} \left[\tilde{\theta}(y) + \frac{b_2 - b_1}{m_2 - m_1}\right] \bar{f}(y) dy $ $= (m_2 - m_1) \int_{-\infty}^{y_b} \left[\tilde{\theta}(y) - \theta_b\right] \bar{f}(y) dy $ Note that $(m_2 - m_1)$ must be negative if choice 1 is preferred in the prior. Let's flip things around so that these quantities are positive and then convert to use the absolute value. 
The use of the absolute value allows us to drop the requirement that choice one is preferred in the prior. $= |m_1 - m_2| \int_{-\infty}^{y_b} \left[\theta_b - \tilde{\theta}(y)\right] \bar{f}(y) dy $ In the second case, we suppose that $\theta_0 < \theta_b$. Since $\tilde{\theta}(\cdot)$ is monotone increasing, $\tilde{\theta}(y) < \theta_b$ when $y < y_b$ and $\tilde{\theta}(y) > \theta_b$ when $y > y_b$. Thus, decision 1 remains the best choice when $y < y_b$, but decision 2 becomes the best choice when $y > y_b$. $EVSI = \int_{-\infty}^{y_b}[(m_1 \tilde{\theta}(y) + b_1) - (m_1 \tilde{\theta}(y) + b_1)] \bar{f}(y) dy$ $+ \int_{y_b}^{\infty}[(m_2 \tilde{\theta}(y) + b_2) - (m_1 \tilde{\theta}(y) + b_1)] \bar{f}(y) dy $ $= 0 + \int_{y_b}^{\infty}[(m_2 - m_1) \tilde{\theta}(y) + (b_2 - b_1)] \bar{f}(y) dy$ $= (m_2 - m_1) \int_{y_b}^{\infty} \left[\tilde{\theta}(y) + \frac{b_2 - b_1}{m_2 - m_1}\right] \bar{f}(y) dy $ $= |m_1 - m_2| \int_{y_b}^{\infty} \left[\tilde{\theta}(y) - \theta_b\right] \bar{f}(y) dy $ We have found a formula for EVSI when $\theta_0 > \theta_b$, and another formula for when $\theta_0 < \theta_b$. In each case, the sign of $(m_2 - m_1)$ was fixed by which decision is preferred on either side of the breakeven point; once the absolute value is taken, the slopes enter only as the scaling factor $|m_1 - m_2|$ in front of the integral. When $\theta_0 > \theta_b$, $EVSI = |m_1 - m_2| \int_{-\infty}^{y_b} \left[\theta_b - \tilde{\theta}(y)\right] \bar{f}(y) dy $ and when $\theta_0 < \theta_b$, $EVSI = |m_1 - m_2| \int_{y_b}^{\infty} \left[\tilde{\theta}(y) - \theta_b\right] \bar{f}(y) dy $ These expressions differ only in the limits of integration and the orientation of the (non-negative) integrand. The first integral is called the “left-hand linear loss integral,” and the second is the “right-hand linear loss integral.” They look messy, but we can simplify them. For now, let's say: $EVSI_{\theta_0 > \theta_b} = |m_1 - m_2| L_l(y_b)$ $EVSI_{\theta_0 < \theta_b} = |m_1 - m_2| L_r(y_b)$ Before we get started on the simplification of the linear loss functions, recall that the standard normal (usually named “z”) has mean 0 and standard deviation 1. The standard normal PDF is $\phi(\cdot)$, and the standard normal CDF is $\Phi(\cdot)$. Standardized values let us plug values into $\phi$ and $\Phi$.
To standardize a value from another normal distribution, subtract the mean and divide by the standard deviation: $z = \frac{y-\mu}{\sigma}$ Recall that Y is distributed as: $y \sim N(\theta_0, \sigma_0^2 + \sigma_y^2)$ Therefore, the standardized value is: $z = \frac{y - \theta_0}{\sqrt{\sigma_0^2 + \sigma_y^2}}$ Inverting this, we find that: $y = z \sqrt{\sigma_0^2 + \sigma_y^2} + \theta_0$ We will simplify calculations if we define a helper variable: $t = \sqrt{\sigma_0^2 + \sigma_y^2}$ Using $t$, we can simplify the following equations: $y = t z + \theta_0\,$ $\tilde{\theta}(y) = \frac{\sigma_0^2}{\sigma_0^2 + \sigma_y^2} y + \frac{\sigma_y^2}{\sigma_0^2 +\sigma_y^2} \theta_0 = \frac{\sigma_0^2}{t^2} y + \frac{\sigma_y^2}{t^2} \theta_0$ Then we substitute in for $y$ in terms of $z$: $\tilde{\theta}(tz + \theta_0) = \frac{\sigma_0^2}{t^2} (tz + \theta_0) + \frac{\sigma_y^2}{t^2} \theta_0 = \frac{\sigma_0^2}{t} z + \frac{\sigma_0^2 + \sigma_y^2}{t^2} \theta_0 = \frac{\sigma_0^2}{t} z + \theta_0$ We begin simplifying the left hand linear loss integral: $L_l(y) = \int_{-\infty}^{y_b} \left[\theta_b - \tilde{\theta}(y)\right] \bar{f}(y) dy$ $= \theta_b \int_{-\infty}^{y_b} \bar{f}(y) dy - \int_{-\infty}^{y_b} \tilde{\theta}(y) \bar{f}(y) dy$ We will perform the following change of variables of $z$ for $y$: $dy = t dz\,$ $\bar{f}(y) dy = \phi(z) dz$ (the t cancels out with the standard deviation in the normal pdf) Substituting this in, we get: $L_l(y_b) = \theta_b \int_{-\infty}^{z_b} \phi(z) dz - \int_{-\infty}^{z_b} \tilde{\theta}(tz + \theta_0) \phi(z) dz$ $= \theta_b \Phi(z_b) - \int_{-\infty}^{z_b} (\frac{\sigma_0^2}{t} z + \theta_0) \phi(z) dz$ $= \theta_b \Phi(z_b) - \theta_0 \int_{-\infty}^{z_b} \phi(z) dz - \frac{\sigma_0^2}{t} \int_{-\infty}^{z_b} z \phi(z) dz$ $= (\theta_b - \theta_0) \Phi(z_b) - \frac{\sigma_0^2}{t} \int_{-\infty}^{z_b} z \phi(z) dz$ The last messy bit left is the integral, which turns out to be very simple: $\int_{-\infty}^{z_b} z \phi(z) dz$ $= \int_{-\infty}^{z_b} z \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} z^2} dz$ $= \frac{1}{\sqrt{2\pi}} \left[ -e^{-\frac{1}{2} z^2} \right]_{-\infty}^{z_b}$ $= - \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} z_b^2} $ $= - \phi(z_b)\,$ Now we can substitute this back in: $L_l(y_b) = (\theta_b - \theta_0) \Phi(z_b) + \frac{\sigma_0^2}{t} \phi(z_b)$ Breakeven point of $\theta$: $\theta_b = -\frac{b_2 - b_1}{m_2 - m_1}$ Breakeven observation: $y_b = \theta_b + \frac{\sigma_y^2}{\sigma_0^2} (\theta_b - \theta_0)$ The normalized breakeven point is: $z_b = \frac{y_b - \theta_0}{\sqrt{\sigma_0^2 + \sigma_y^2}}$ Formula for EVSI: $EVSI_{\theta_0 = \theta_b} = 0$ $EVSI_{\theta_0 > \theta_b} = |m_1 - m_2| \frac{\sigma_0^2}{\sqrt{\sigma_0^2 + \sigma_y^2}} L_N(-z_b)$ $EVSI_{\theta_0 < \theta_b} = |m_1 - m_2| \frac{\sigma_0^2}{\sqrt{\sigma_0^2 + \sigma_y^2}} L_N(z_b)$ $L_N$ in these equations is the linear loss integral for the normal distribution: $L_N(x) = \phi(x) - x (1 - \Phi(x))\,$ where $\phi$ is the normal PDF and $\Phi$ is the normal CDF.
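The closed-form result is easy to put into code. Below is a short Python sketch of the final formulas (the numeric inputs at the bottom are made-up illustration values, not taken from the notes):

```python
import math
from statistics import NormalDist

def evsi(m1, b1, m2, b2, theta0, sigma0, sigmay):
    """Expected value of sample information for two linear utilities,
    a normal prior theta ~ N(theta0, sigma0^2) and y|theta ~ N(theta, sigmay^2)."""
    if m1 == m2:
        return 0.0                                   # parallel utilities: y never changes the decision
    theta_b = (b1 - b2) / (m2 - m1)                  # breakeven value of theta
    t = math.sqrt(sigma0**2 + sigmay**2)             # std dev of the marginal of y
    y_b = theta_b + (sigmay**2 / sigma0**2) * (theta_b - theta0)
    z_b = (y_b - theta0) / t                         # standardized breakeven observation
    std = NormalDist()
    L_N = lambda x: std.pdf(x) - x * (1 - std.cdf(x))   # normal linear-loss integral
    arg = -z_b if theta0 > theta_b else z_b          # the two branches coincide when z_b = 0
    return abs(m1 - m2) * (sigma0**2 / t) * L_N(arg)

# Illustrative numbers (assumptions for demonstration only)
print(evsi(m1=2.0, b1=1.0, m2=-1.0, b2=4.0, theta0=1.2, sigma0=0.5, sigmay=0.8))
```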
In my last post, I covered the initial steps to setting up an LED projection system that can handle arbitrary LED locations. The LEDs were all constrained to a single strip, mostly because I can't commit to dedicating the LEDs to any single project and cutting the strip up. I wrapped the strip around a cylinder and was able to project a few images and a video with reasonable success. The biggest problem was the imperfect placement of the LEDs. My projection system depends on knowing the 3D position of every LED in the display in order to properly project an image. Using simple math to describe the positions like I did before only works if the LEDs are placed very accurately. To allow for projections onto an LED strip that is not placed on any sort of simple mathematical curve, I've worked out a method of automatically locating every LED in 3D space using some webcams and some more math. Single-Camera Mapping To start, let's look at how a webcam can be used to recognize an LED. An appropriately messy pile of LEDs. In an image recorded from the camera, the strip appears as a mostly white strand with very small non-white regions denoting the location of an LED. Not a great situation for automatic feature detection. What happens when we turn on an LED? This is done by sending the appropriate commands to my Arduino LED strip controller to turn on exactly one LED. Now we have a bright spot showing us the location of a single LED. To automatically locate this LED, we could write an algorithm to search through the camera image to find the brightest spot and return its position. Unfortunately, this kind of method is easily confused by anything else in the camera image that is bright. A way to fix this is to subtract off the image with the LED off to eliminate anything that hasn't changed between the two frames. Now all we have left is a single bright point to search for. Using OpenCV to handle capturing and storing the webcam images, the bright spot can be located with a few lines of code (a sketch is given at the end of this section). These positions are not the full 3D positions of each LED that we will eventually need, but just the projection of each LED position onto the webcam field of view. We can loop through each LED in the strip and determine this projected position for every LED. You might notice that when an LED lights up a significant portion of the paper the strip is taped to, the LED-finding algorithm picks a point somewhere between the actual LED and the lit-up paper. This is because the algorithm is finding the center of light in the image as opposed to the peak of light. I feel this is more appropriate in this case due to the fact that some of the LEDs are pointed away and will only be seen in the light they shine on the paper. Instead of treating those LEDs as lost, I might as well register the lit-up paper as the source of light from the viewer's perspective and map it appropriately. This is enough information to perform a projection mapping that is only viewable from the perspective of the webcam. Multi-Camera Mapping Consider a single camera looking at a single lit LED. The LED's position in the camera image tells us the angle, relative to the direction the camera is pointing, at which the LED is located.
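The post's original code listing is not reproduced here, so the following is a minimal OpenCV sketch of the single-camera localization described above, not the author's exact code; `set_led(i, on)` is a hypothetical stand-in for whatever serial command drives the Arduino strip controller.

```python
import cv2

def grab_gray(cap):
    """Grab one frame from the webcam and convert it to grayscale."""
    ok, frame = cap.read()
    assert ok, "camera read failed"
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def locate_led(cap, set_led, index):
    """Image (x, y) of LED `index` via background subtraction and a center-of-light estimate."""
    set_led(index, False)
    off = grab_gray(cap)
    set_led(index, True)
    on = grab_gray(cap)
    diff = cv2.GaussianBlur(cv2.subtract(on, off), (11, 11), 0)  # keep only what changed, smooth noise
    m = cv2.moments(diff)                                        # intensity-weighted image moments
    return m["m10"] / m["m00"], m["m01"] / m["m00"]              # centroid = center of light, not the peak

# Example usage (assumes a webcam on device 0 and a user-supplied set_led function):
# cap = cv2.VideoCapture(0)
# positions = [locate_led(cap, set_led, i) for i in range(num_leds)]
```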
From the LED position (as determined in the above algorithm), we can work out a vector line that originates from the camera and hits the LED at some point: \[ \vec{f} = \vec{c} + t\hat{\vec{n}} \] Here, $\vec{c}$ is the position of the camera, $\hat{\vec{n}}$ is the normal vector originating from the camera and pointing towards the LED, and $t$ is the vector line distance parameter. The issue here is that we don't know how far along this vector the LED sits, or rather, what the appropriate value of $t$ is. But suppose we add a second camera that can see the single LED but is positioned elsewhere. Again we can define a vector line originating from this new camera that hits the LED. If we know the positions and orientations of the two cameras relative to each other, we know the equations for each line in a common coordinate system. \[ \vec{f_1} = \vec{c_1} + t\hat{\vec{n_1}} \] \[ \vec{f_2} = \vec{c_2} + s\hat{\vec{n_2}} \] Ideally, these two lines will intersect ($\vec{f_1} = \vec{f_2}$) exactly where the LED exists in 3D space. Solving for this intersection point takes only some high school algebra. In this way, we can use two cameras to determine the 3D position of each LED. Unfortunately, these two vector lines will rarely intersect. Due to imperfections in the camera positions, the measurements thereof, and the camera optics, the two vector lines will most often be skew. Instead of finding the point of intersection, we need to find the closest point between these two lines. To do this, we need to find the values for $s$ and $t$ that minimize the distance between the two lines. One way to solve for these values is to write out the distance between the two lines for any ($s$,$t$) and set its derivative with respect to each quantity equal to zero. This is a pain to write out, so here's an easier way of doing it. First we define the distance vector: \[ \vec{d}(s,t) = \vec{f_1} - \vec{f_2} = \vec{c_1} - \vec{c_2} + t\hat{\vec{n_1}} - s\hat{\vec{n_2}} \] We want to know the values of $s$ and $t$ that minimize the length of this vector. We also know from geometry that when this vector distance is minimized, it will be perpendicular to both original lines. We can express this by saying that there will be no component of the distance line along the original lines: \[ \hat{\vec{n_1}} \cdot \vec{d}(s,t) = 0 \] \[ \hat{\vec{n_2}} \cdot \vec{d}(s,t) = 0 \] Expanding these out and expressing the two equations with two unknowns as a matrix problem, \[ \left( \begin{array}{cc} \hat{\vec{n_1}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_1}} \cdot \hat{\vec{n_2}} \\ \hat{\vec{n_2}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_2}} \cdot \hat{\vec{n_2}} \end{array} \right) \begin{pmatrix} t \\ s \end{pmatrix} = \begin{pmatrix} -\hat{\vec{n_1}} \cdot (\vec{c_1} - \vec{c_2}) \\ -\hat{\vec{n_2}} \cdot (\vec{c_1} - \vec{c_2}) \end{pmatrix} \] The diagonal of the 2x2 matrix can of course be simplified to 1 and -1. Solving this matrix problem for $t$ and $s$ allows us to compute where on each original vector line the distance vector hits. Since I have no reason to favor one camera over the other, I assume the LED is most likely located half-way between these two points. With the matrix inversion expanded out, the code to find the LED based on the two input vectors takes only a few lines (a sketch follows below). This of course depends on having a decent estimate of the two camera-to-LED vectors. I've decided to place my two cameras so that they both are looking at a common point in space that I decide is the origin of the coordinate system.
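Since the post's own implementation is not included above, here is an independent NumPy sketch of that closest-approach computation: solve the 2x2 system for $t$ and $s$, then average the two closest points.

```python
import numpy as np

def triangulate(c1, n1, c2, n2):
    """Midpoint of the shortest segment between the lines c1 + t*n1 and c2 + s*n2.
    n1 and n2 are unit direction vectors from each camera toward the LED."""
    c1, n1, c2, n2 = map(np.asarray, (c1, n1, c2, n2))
    b = c1 - c2
    A = np.array([[np.dot(n1, n1), -np.dot(n1, n2)],
                  [np.dot(n2, n1), -np.dot(n2, n2)]])     # diagonal is 1, -1 for unit vectors
    rhs = np.array([-np.dot(n1, b), -np.dot(n2, b)])
    t, s = np.linalg.solve(A, rhs)
    p1 = c1 + t * n1                                      # closest point on line 1
    p2 = c2 + s * n2                                      # closest point on line 2
    return (p1 + p2) / 2                                  # assume the LED sits halfway between

# Made-up example: an LED at (1, 1, 2) seen from cameras at the origin and at (2, 0, 0)
led = np.array([1.0, 1.0, 2.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
n1 = (led - c1) / np.linalg.norm(led - c1)
n2 = (led - c2) / np.linalg.norm(led - c2)
print(triangulate(c1, n1, c2, n2))                        # ~ [1, 1, 2]
```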
This way, I can simple measure the location of each camera with a ruler, and know not only the vector positions but also the normal vector that is produced when looking at the middle of each camera image. When a camera locates an LED in its field of view, the normal vector is then determined relative to the one pointing to the origin. The process of finding these normal vectors is a little tedious and requires a few 3D rotation matrices. I'll leave the math and code for that process out of this post for simplicity. To test out this method, I set up two identical webcams in two identical 3D printed frames pointed at a sphere of LEDs: Spiral Sphere of LEDs, for another project Similar to before, I march through each LED turning it off and on to produce background-subtracted images that make locating the LED easy. Instead of mapping the image coordinates directly to a source image, I run the two estimates of LED location through the 3D mapping algorithm described above. The resulting point cloud looks promising: The reason so few LEDs made it into the final point cloud is that only LEDs visible through both cameras can be located in 3D space. Spacing the cameras out more increases the accuracy of this method, but decreases the number of LEDs on the sphere they can both see. One solution to this problem would be to add more cameras around the sphere. This would increase the number of LEDs seen with two cameras, and introduces the possibility of using three cameras at once to locate a single LED. When using three or more cameras, finding the point in space that minimizes the distance to each camera-to-LED vector line becomes a least squares problem, similar in form to the one I went through on my robot arm project. The next step in this project on LEDs is to do something visually interesting with this mapping technique. So far I've just been projecting the colorbar test image to verify the mapping, but the real point is to project more complicated images and videos onto a mass of LEDs. In the next few weeks, I will be talking about the process of using this projection mapping to do something interesting with the 3D printed spiral sphere I've shown at the end of this post.
I saw that curtain rods at my house were bent due to the weight of curtains on them and wondered whether a beam analysis can be carried out to verify its deformation under the load. Here is a picture of the initially straight curtain rod without curtains. And here is a picture of the bent curtain rod when curtains were hung from it. I analyzed this scenario as a linear, elastic, small deformation simply supported beam problem with uniformly distributed loads on it to check whether the maximum deformation (sagging) of the beam (curtain rod) at its center matches with theoretical calculations. Though the problem appears to be a pretty straightforward one, the challenge was to measure or estimate physical quantities (dimensions, weight, material properties) with reasonable accuracy to validate the actual results theoretically. First I measured the dimensions. The span length of the beam was easy to measure using an inch tape. It turned out to be \(L=2884\) mm. Next I needed the cross-section dimensions with reasonable accuracy (something like \(\frac{1}{10}\)th of an mm). Additionally the OD of the rod was corrugated which prompted me to find out the mean OD to simplify the analysis. Here’s a picture of the cross-section (The ends are slightly worn out here, but still you’ll get the idea.) To measure the OD I rolled up three rounds of thread on the outer surface of the rod, unrolled it, measured its length and divided by \(3\pi\). Next using a scale I visually estimated the height of the corrugations to be around 0.8 mm. Using the measured OD and approximate height of corrugations I calculated the mean OD of the rod which turned out to be \(OD_{mean}=238/3\pi-0.8=24.45\) mm. Here’s a picture showing thread rolled over the rod. To estimate the ID, I inserted a sheet of rolled paper inside the hole of the rod to match its ID. Then I repeated a similar procedure, this time rolling thread over the paper to estimate the ID as \(ID=211/3\pi=22.39\) mm. Here’s a picture of the thread rolled over the paper inserted inside the curtain rod hole. The curtain rod was made of Aluminium with a brown paint over it. The Young’s Modulus assumed for the Aluminium rod was \(E=69\) GPa. Next I set out to compute the uniformly distributed load on the curtain rod with the hanging curtains. To accomplish this I weighed the curtains on a weighing machine and figured out their weight as \(wt_{curtains}=3\) kg. Here’s a picture showing the curtain weight. Next I estimated the weight of the Aluminium rod as its volume*density. \(wt_{Al}=\frac{\pi}{4}*(24.45^2-22.39^2)*2884*(2700*10^{-9})=0.59\) kg. Knowing the weights I calculated the UDL intensity as \(w=\frac{wt_{curtains}+wt_{Al}}{L}=\frac{3.59}{2884}=0.001245\) kg/mm \(=0.0122\) N/mm Next I calculated the moment of inertia for the cross-section before moving to the last part of this analysis. \(I=\frac{\pi(OD_{mean}^4-ID^4)}{64}=5206\) \(mm^4\) Now the final part. For the sake of brevity, I won’t do a derivation here, but the maximum central deformation for a simply supported beam with a uniformly distributed load is given by \(\delta=\frac{5wL^4}{384EI}=\frac{5*0.0122*2884^4}{384*69*1000*5206}=30.6\) mm. When I measured the central deformation of the rod it came out to be 31 mm (I used the shadow of the rod on the wall before and after hanging the curtains to measure this). That the result got validated with such high accuracy in spite of the rough method used in the analysis actually amused me, as it might amuse you.
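For anyone who wants to reproduce the numbers, here is a short Python sketch of the same calculation, using the measured and assumed values quoted in the post:

```python
import math

L       = 2884.0                           # span, mm
OD_mean = 238 / (3 * math.pi) - 0.8        # mean outer diameter, mm (~24.45)
ID      = 211 / (3 * math.pi)              # inner diameter, mm (~22.39)
E       = 69e3                             # Young's modulus of aluminium, N/mm^2 (69 GPa)
rho     = 2700e-9                          # density of aluminium, kg/mm^3
wt_curtains = 3.0                          # curtain weight, kg
g       = 9.81                             # gravitational acceleration, m/s^2

area  = math.pi / 4 * (OD_mean**2 - ID**2)        # cross-section area, mm^2
wt_al = area * L * rho                            # rod weight, kg (~0.59)
w     = (wt_curtains + wt_al) * g / L             # UDL intensity, N/mm (~0.0122)
I     = math.pi * (OD_mean**4 - ID**4) / 64       # second moment of area, mm^4 (~5.2e3)
delta = 5 * w * L**4 / (384 * E * I)              # central deflection, mm

print(round(wt_al, 2), round(w, 4), round(delta, 1))   # ~0.59, ~0.0122, ~30.6
```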
In these lecture notes by Ola Svensson: http://theory.epfl.ch/osven/courses/Approx13/Notes/lecture4-5.pdf, it is said that we don't know if Euclidean TSP is in NP: The reason being that we do not know how to calculate square roots efficiently. On the other hand there is this paper by Papadimitriou: http://www.sciencedirect.com/science/article/pii/0304397577900123 saying it is NP-complete, which also means it is in NP. Although he doesn't prove it in the paper, I think he considers the membership in NP trivial, as is usually the case with such problems. I am confused by this. Honestly, the claim that we don't know if Euclidean TSP is in NP shocked me, since I just assumed it is trivial -- taking the TSP tour as a certificate, we can easily check that it is a valid tour. But the problem is that there can be some square roots. So the lecture notes basically claim that we cannot in polynomial time solve the following problem: Given rational numbers $q_1,\ldots,q_n,A\in\mathbb{Q}$, decide if $\sqrt{q_1}+\cdots+\sqrt{q_n}\leq A$. Question 1: What do we know about this problem? This suggests the following simplification, which I was unable to find: Question 2: Is this reducible to the special case when $n=1$? Is this special case polynomial-time solvable? Thinking about it for a while, I came to this. We want polynomial time complexity with respect to the number of bits of the input, i.e., not the size of the numbers themselves. We can easily work out the sum to a polynomial number of decimal digits. To get a bad case, we need an instance of $q_{1,k},\ldots,q_{n,k},A_k\in\mathbb{Q}$ for $k=1,2,\ldots$ such that for every polynomial $p$, there exists an integer $k$ such that $\sqrt{q_{1,k}}+\cdots+\sqrt{q_{n,k}}$ and $A_k$ agree on the first $p(\text{input-size})$ digits of decimal expansion. Question 3: Is there such a family of rational numbers? But what is $\text{input-size}$? This depends on the way the rational numbers are represented! Now I am curious about this: Question 4: Is it algorithmically important whether rational numbers are given as a ratio of two integers (such as $24/13$) or in decimal expansion (such as $2.5334\overline{567}$)? In other words, is there a family of rational numbers such that the size of the decimal expansion is not polynomially bounded in the size of the ratio representation, or the other way around?
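For concreteness, here is how the naive high-precision comparison looks in Python with mpmath; the working precision is an arbitrary assumption, which is exactly the issue raised in Question 3 (no fixed polynomial number of digits is known to suffice in general):

```python
from fractions import Fraction
from mpmath import mp, mpf, sqrt

def compare_sqrt_sum(qs, A, digits=60):
    """Compare sum(sqrt(q) for q in qs) with A at `digits` working digits.
    Returns -1, 0 or +1; 0 means 'undecided at this precision'."""
    mp.dps = digits
    total = sum(sqrt(mpf(q.numerator) / q.denominator) for q in qs)
    a = mpf(A.numerator) / A.denominator
    eps = mpf(10) ** (-(digits - 5))        # crude guard band for rounding error
    if abs(total - a) < eps:
        return 0
    return -1 if total < a else 1

# sqrt(2) + sqrt(3) = 3.14626436... which exceeds 3.14626, so this prints 1
print(compare_sqrt_sum([Fraction(2), Fraction(3)], Fraction(314626, 100000)))
```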
A Remark on the Continuous Subsolution Problem for the Complex Monge-Ampère Equation Abstract We prove that if the modulus of continuity of a plurisubharmonic subsolution satisfies a Dini-type condition, then the Dirichlet problem for the complex Monge-Ampère equation has a continuous solution. The modulus of continuity of the solution is also given if the right-hand side is locally dominated by capacity. Keywords: Dirichlet problem, Complex Monge-Ampère equation, Weak solutions, Subsolution problem Mathematics Subject Classification (2010): 32W20, 32U40 1 Introduction Let $\psi$ be a continuous function on the boundary of $\Omega$. We look for the solution to the equation (1.1), where $PSH$ stands for plurisubharmonic functions, and $d^{c} = i(\overline{\partial} - \partial)$. It was shown in [9] and [10] that for measures satisfying a certain bound in terms of the Bedford-Taylor capacity [4], the Dirichlet problem has a (unique) solution. The precise statement is as follows. If $h$ is admissible, then so is $Ah$ for any number $A > 0$. Define $F_{h}(x)$ in terms of $h$, and suppose a Borel measure $\mu$ satisfies the inequality (1.2) for every $E \subset \Omega$. Then, by [9], the Dirichlet problem (1.1) has a solution. If $\mu$ has density with respect to the Lebesgue measure in $L^{p}$, $p > 1$, then this bound is satisfied [9]. By the recent results in [12, 13], if $\mu$ is bounded by the Monge-Ampère measure of a Hölder continuous plurisubharmonic function $\varphi$, then (1.2) holds for a suitable $h$ and, consequently, the Dirichlet problem (1.1) is solvable with a Hölder continuous solution. The main result in this paper says that we can considerably weaken the assumption on $\varphi$ and still get a continuous solution of the equation. Let $\varpi$ denote the modulus of continuity of $\varphi$ on $\bar{\Omega}$, i.e., $|\varphi(z) - \varphi(w)| \leq \varpi(|z - w|)$ for every $z, w \in \bar{\Omega}$. Let us state the first result. Theorem 1.1 Let $\varphi \in PSH(\Omega) \cap C^{0}(\bar{\Omega})$, $\varphi = 0$ on $\partial\Omega$. Assume that its modulus of continuity satisfies the Dini-type condition. If $\mu$ satisfies $\mu \leq (dd^{c}\varphi)^{n}$ in $\Omega$, then the Dirichlet problem (1.1) admits a unique solution. Let us mention in this context that it is still an open problem whether a continuous subsolution $\varphi$ implies the solvability of (1.1). The modulus of continuity of the solution to the Dirichlet problem (1.1) was obtained in [3] for $\mu = f\,dV_{2n}$ with $f(x)$ continuous on $\bar{\Omega}$. We also wish to study this problem for the measures which satisfy the inequality (1.2). For simplicity, we restrict ourselves to measures belonging to $\mathcal{H}(\alpha,\Omega)$. In other words, we take the function $h(x) = Cx^{n\alpha}$ for positive constants $C, \alpha > 0$ in the inequality (1.2). We introduce the following notion, which generalizes the one in [8]. Consider a continuous increasing function $F_{0}:[0,\infty) \to [0,\infty)$ with $F_{0}(0) = 0$. Definition 1.2 $\mu$ is called uniformly locally dominated by capacity with respect to $F_{0}$ if for every cube $I(z, r) =: I \subset B_{I} := B(z, 2r) \subset\subset \Omega$ and for every set $E \subset I$, the inequality (1.4) holds. According to [1], the Lebesgue measure $dV_{2n}$ satisfies this property with $F_{0} = C_{\alpha}\exp(-\alpha x^{-1/n})$ for every $0 < \alpha < 2n$. The case $F_{0}(x) = Cx$ was considered in [8]. We refer the reader to [5] for more examples of measures satisfying this property. Here is our second result. Theorem 1.3 Assume $\mu \in \mathcal{H}(\alpha,\Omega)$ has compact support and satisfies the condition (1.4) for some $F_{0}$. Then, the modulus of continuity of the solution $u$ of the Dirichlet problem (1.1) satisfies, for $0 < \delta < R_{0}$ and $2R_{0} = \operatorname{dist}(\operatorname{supp}\mu, \partial\Omega) > 0$, an estimate in which $C, \alpha_{1}$ depend only on $\alpha, \mu, \Omega$.
2 Preliminaries Kin a domain \({\varOmega }\subset \mathbb {C}^{n}, \) its relative extremal function u is given by K u is continuous, we call K Ka regular set. It is easy to see that the 𝜖-envelope Kis regular, and thus any compact set can be approximated from above by regular compact sets. Kwith respect to Ω(now usually called the Bedford-Taylor capacity) is defined by the formula μbelongs to \(\mathcal H(\alpha ,{\varOmega })\), α> 0, if there exists a uniform constant C> 0 such that for every compact set E⊂ Ω, 3 Proof of Theorem 1.1 In this section, we shall prove Theorem 1.1. We need the following lemma. The proof of this lemma is based on a similar idea as the one in [11, Lemma 3.1] where the complex Hessian equation is considered. The difference is that we have much stronger volume-capacity inequality for the Monge-Ampère equation. Lemma 3.1 Assume the measure μ is compactly supported. Fix0 < α< 2 n and τ= α/(2 n+ 1) . There exists a uniform constantC such that for every compact set K⊂ Ω , K) := cap( K, Ω). Proof K⊂⊂ Ω. Without loss of generality, we may assume that Kis regular. Denote by φ the standard regularization of ε φin the terminology of [10]. We choose ε> 0 so small that Ω = { ε z∈ Ω: dist( z, ∂ Ω) > ε}. Since for every \(K \subset {\varOmega }^{\prime \prime }\) we have C 0depending only on \(\varOmega , {\varOmega }^{\prime }\)) in what follows we shall write cap( K) for either one of these capacities. We have u be the relative extremal function of K Kwith respect to \({\varOmega }^{\prime }\). Consider the set \(K^{\prime } = \{ 3\delta u_{K} + \varphi _{\varepsilon } < \varphi - 2\delta \}\). Then, εis so small that it satisfies (3.2), otherwise the inequality (3.1) holds true by increasing the constant) and plug in the formula for δto get that μonce we have s= 1/ x, and then t= e −, this is equivalent to τ/ s ϖ( t) implies that his admissible in the case of μwith compact support. Then, by [10, Theorem 5.9] the Dirichlet problem (1.1) has a unique solution. Ωby compact sets μ to be the restriction of j μto E . Denote by j u the solution of (1.1) with j μreplaced by μ . By the comparison principle j u tends to \(u=\lim u_{j}\) uniformly and the continuity of j ufollows. The proof is complete. 4 The Modulus of Continuity of Solutions In this section, we study the modulus of continuity of the solution of the Dirichlet problem with the right hand side in the class \(\mathcal H(\alpha ,{\varOmega })\) under the additional condition that a given measure is locally dominated by capacity. In what follows we need [8, Lemma 2] whose proof is based on the lemma due to Alexander and Taylor [2, Lemma 3.3]. For the reader’s convenience, we give the proofs. The latter can be simplified by using the Błocki inequality [6]. Lemma 4.1 Let\(B^{\prime } = \{|z-z_{0}| <r \} \subset \subset B= \{|z-z_{0}| <R\}\) be two concentric balls centered at z 0 in\(\mathbb C^{n}\) . Let\(u \in PSH(B) \cap L^{\infty }(B)\) with u< 0 . There is a constant\(C = C(n, \frac {R}{r})\) independent ofu such that R/ r= 3 then the constant Cdepends only on n. Proof z 0= 0. Set ρ:= ( r+ R)/2 and B( ρ) = {| z− z 0| < ρ}. We use the Błocki inequality [6] for v( z) = | z| 2− ρ 2and β:= d d c v= d d | c z| 2, to get σ 2is the area of the unit sphere, n− 1 n( t)/ t 2is increasing, we have n− 2 u< 0, it follows that N( R) < − u(0). Hence, R= 3 r, then Cis also independent of r. □ Lemma 4.2 Denote for ρ≥ 0 , B = {| ρ z− z 0| < e ρ R 0}. 
Given z 0∈ Ω and two numbers R> 1 , R 0> 0 such that B ⊂⊂ M Ω, and given v∈ P S H( Ω) such that− 1 < v< 0 , denote byE the set δ∈ (0,1). Then, there exists C 0depending only on nsuch that Proof z∈ B ∖ R B 0and \(a_{0}:= \sup _{B_{0}} v\) we have Ewith respect to B 2. One has z 1∈ Ḅ 0, we have E⊂{| z− z 1| < 2 R 0}⊂ B 2. Therefore, Lemma 4.1 gives u. Define for δ> 0 small z∈ Ω set δ δ 0> 0 such that z∈ ∂ Ω and 0 < δ δ< δ 0. Here, we used the result of Bedford and Taylor [3, Theorem 6.2] (with minor modifications) to extend ψplurisubharmonically onto Ωso that its modulus of continuity on \(\bar {\varOmega }\) is controlled by the one on the boundary. Therefore, for a suitable extension of u to δ Ω, using the stability estimate for measure in \(\mathcal H(\alpha ,{\varOmega })\) as in [7, Theorem 1.1] (see also [12, Proposition 2.10]), we get Lemma 4.3 There are uniform constants C, α 1 depending only on Ω, α, μ such that δ< δ 0. Thanks to this lemma, we know that the right hand side tends to zero as δ decreases to zero. We shall use the property “locally dominated by capacity” to obtain a quantitative bound via Lemma 4.2. μby K. Since \(\|u\|_{\infty }\) is controlled by a constant C= C( α, Ω, μ), without loss of generality, we may assume that ε< 1 Ω⊂⊂ [0,1] 2. Let us write \(z = (x^{1}, \dots ,x^{2n}) \in \mathbb R^{2n}\) and denote the semi-open cube centered at a point n z 0, of diameter 2 rby μsatisfies for every cube E⊂ I, the inequality F 0(0) = 0. 2congruent cubes of diameter 3 n s −= 2 s δ, where \( s \in \mathbb N\). Then I = s I( z , s δ) and \(B_{I_{s}} = B(z_{s}, 2\delta )\) for some z ∈ s I 0. Hence, r= 2 δand R= 2 R 0, we have for B := s B( z ,4 s δ) corresponding to each cube I s R 0= dist( K, ∂ Ω). Therefore, combining the above inequalities, we get that Notes Acknowledgements The first author was partially supported by NCN grant 2017/27/B/ST1/01145. The second author was supported by the NRF Grant 2011-0030044 (SRC-GAIA) of The Republic of Korea. He also would like to thank Kang-Tae Kim for encouragement and support. References 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.Kołodziej, S., Nguyen, N.-C.: An inequality between complex Hessian measures of Hölder continuous m-subharmonic functions and capacity. Geometric Analysis. In: Chen, J., Lu, P., Lu, Z., Zhang, Z. (eds.) Honor of Gang Tian’s 60th Birthday, Progress in Mathematics series by Birkhauser. to appear (2019)Google Scholar 12. 13.Nguyen, N. -C.: On the Hölder continuous subsolution problem for the complex Monge-Ampère equation, II. preprint arXiv:http://arXiv.org/abs/1803.02510 Copyright information Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"The Complexity of Songs" was a journal article published by computer scientist Donald Knuth in 1977, as an in-joke about computational complexity theory. The article capitalizes on the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts with little or no meaningful content. The article notes how some songs can reach a complexity level, for a song of length N words, as formula: O(log N). The gist of the article is repeated below, maintaining the wit of the key concepts. Article Summary Knuth writes that "our ancient ancestors invented the concept of refrain" to reduce the space complexity of songs, which becomes crucial when a large number of songs is to be committed to one's memory. Knuth's Lemma 1 states that if N is the length of a song, then the refrain decreases the song complexity to cN, where the factor c < 1. Knuth further demonstrates a way of producing songs with O(√N N {\displaystyle {\sqrt {N}}} ) complexity, an approach "further improved by a Scottish farmer named O. MacDonald". More ingenious approaches yield songs of complexity O(logN log N {\displaystyle \log N} ), a class known as " m bottles of beer on the wall". Finally, the progress during the 20th century — stimulated by the fact that "the advent of modern drugs has led to demands for still less memory" — leads to the ultimate improvement: Arbitrarily long songs with space complexity O(1), e.g. for a song to be defined by the recurrence relation. Prof. Kurt Eisemann of San Diego State University in his letter to the Communications of the ACM further improves the latter seemingly unbeatable estimate. He begins with an observation that for practical applications the value of the "hidden constant" c in the Big Oh notation may be crucial in making the difference between the feasibility and unfeasibility: for example a constant value of 10 80 would exceed the capacity of any known device. He further notices that a technique has already been known in Mediaeval Europe whereby textual content of an arbitrary tune can be recorded basing on the recurrence relation S k = C 2 S k-1 S k = C 2 S k − 1 {\displaystyle S_{k}=C_{2}S_{k-1}} , where C 2 =' la' C 2 = ′ l a ′ {\displaystyle C_{2}='la'} , yielding the value of the big-Oh constant c equal to 2. However it turns out that another culture achieved the absolute lower bound of O(0). As Prof. Eisemann puts it: "When the Mayflowervoyagers first descended on these shores, the native Americans proud of their achievement in the theory of information storage and retrieval, at first welcomed the strangers with the complete silence. This was meant to convey their peak achievement in the complexity of songs, namely the demonstration that a limit as low as c= 0 is indeed obtainable." However the Europeans were unprepared to grasp this notion, and the chiefs, in order to establish a common ground to convey their achievements later proceeded to demonstrate an approach described by the recurrent relation S k = C 1 S k − 1 {\displaystyle S_{k}=C_{1}S_{k-1}} S k = C 1 S k-1, where C 1 =' i' C 1 = ′ i ′ {\displaystyle C_{1}='i'} , with a suboptimal complexity given by c = 1. The O(1) space complexity result was also implemented by Guy L. Steele, Jr., perhaps challenged by Knuth's article. Dr. Steele's TELNET Song used a completely different algorithm based on exponential recursion, a parody on some implementations of TELNET. 
It has been suggested that the complexity analysis of human songs can be a useful pedagogic device for teaching students complexity theory. In the article "On Superpolylogarithmic Subexponential Functions", Prof. Alan Sherman writes that Knuth's article was seminal for the analysis of a special class of functions.

Source: The Complexity Of Songs
I’m currently taking a (meta)logic class. There are assigned problem sets. A lot of people either don’t know how to type logical symbols or else cannot be bothered to fight with Word. I’m a fan of LaTeX. I like it for several reasons, one of them being easy use of logical symbols. There are a lot of guides to using LaTeX. To my knowledge, none start from nothing and end with just what’s needed for a logic class. So here I fill in that void. My goal is to be comprehensive enough to cover what’s needed to type up assignments for a logic class while not including anything else so someone can be up and running with just this guide in a few minutes. Setting Up First, you need something to edit your text and something to compile it to a PDF or whatever other format you like. I personally use Overleaf. It’s a free, online application that lets you type in one column with live updates to what it looks like on the page in the other column. It also has templates, allows collaboration, and has some other nice features that are not important to our purposes here. (Full disclosure: The link is a referral link. If you refer people, you get extra storage space and pro features for free. The default free features and space are fine, though.) There are other popular options. If you need to compile offline, I suggest TeXmaker. If you go this route, you need to download MiKTeX. If you want to write something very long, you may want to type into a text editor and then copy and paste into Overleaf or TeXmaker. (By “long” I mean over fifty pages, give or take based on things like included pictures.) Onto the actual typing process. If you’re using Overleaf, go to the “My Projects” page and then create a new project. Choose “blank paper”. Then you’ll have this code: \documentclass{article} \usepackage[utf8]{inputenc} \begin{document} (Type your content here.) \end{document} If you’re not using Overleaf, go ahead and put that code into your document. There is a bit of tweaking to the basic template to make this better. Before the \begin{document} line, add a line containing just \usepackage{amsmath}. Then add lines with add \title{TITLE} and \author{NAME}. Then after the \begin{document} line, add a line saying \maketitle. If you want it to not be huge, type \small\maketitle\normalsize. (The \small makes it small. The \normalsize makes the stuff after it normal size.) At this point my document looks like this. \documentclass{article} \usepackage[utf8]{inputenc} \title{Phil 125 Homework Set 2} \author{Nichole Smith} \begin{document} \small\maketitle\normalsize (Type your content here.) \end{document} Typing the Document Everything after this replaces “(Type your content here.)”. Typing letters and numbers works as you would expect. Certain symbols are used by the code so typing them is not straightforward. (The & and squiggle brackets are the most notable here.) Single line breaks are ignored. So if you type some stuff, hit return/enter, and then type some more, it will show up as one paragraph. (This can be useful. I like to type every step of a proof in a new line. Then it compiles into a paragraph.) Double line breaks give you a new paragraph. If you want extra space, use \vspace{1cm} as its own paragraph. You can choose lengths other than 1cm if you want. Onto the logic specific stuff. Of critical importance is math mode. Whenever you surround text with dollar signs ($) LaTeX treats it as mathematical symbols. So, if you type $x$ it will be italicized like a variable should be. Math mode does not have spaces. 
So $two words$ will not have a space between them. (If you need a space while in math mode for some reason, “\ ” gives you a space. That is a backslash with a space after it.) Note all logical symbols have to be typed in math mode. The logical symbols:

\land gives you the and symbol
\lor gives you the or symbol
\lnot gives you the not symbol
\rightarrow gives you the material conditional arrow
\Rightarrow gives you the logical implication arrow
\leftrightarrow gives you the biconditional arrow
\Leftrightarrow gives you the logical equivalence arrow
(So, capitalizing the arrow tags makes them the bigger arrows)
= is the equal sign
Parentheses are parentheses
\subset gives you the strict subset symbol
\subseteq gives you the subset symbol
In general, typing \not immediately before another symbol puts a slash through it. E.g. \not\subseteq gives you the not a subset symbol
\in gives you the element symbol
\times gives you the times sign
\neq gives you the not equal sign
> and < can be typed directly. To get the or equal to versions, type \geq or \leq
\emptyset gives you the empty set symbol
\{ and \} give you squiggle brackets
\& gives you the & symbol
\top and \bot give you the tautology and contradiction symbols.
\alpha gives you lower case alpha, and the other lower case Greek letters are similar (\beta, \gamma, and so on). Capital Greek letters that differ from Latin ones have their own commands, such as \Gamma and \Delta. (There is no \Alpha command: a capital alpha is just the letter A.)
| gives you the Sheffer stroke and \downarrow gives you the Peirce dagger.
An underscore gives you subscript. A caret gives you superscript. E.g. p sub 1 is typed $p_1$.
\hdots gives you a nice ellipsis. Use \cdots if you want them elevated to the middle of the line.
Anything on a line after % will not be compiled. So if you want to make a note to self, you can.

I think this covers it. Most of them are pretty straightforward. If you do need more, this webpage has a nifty list. Or, detexify lets you just draw what you want, and it gives you the code.

At this point you’re ready to type stuff. I will provide an example now. Say problem 2 asks you to symbolize “neither both p and q, nor q only if p” with the and, material conditional, and nor operators. Then you type:

2. The sentence “neither both $p$ and $q$, nor $q$ only if $p$” symbolized with the and, material conditional, and nor operators is $(p\land q)\downarrow(q\rightarrow p)$.

Truth Tables

LaTeX can also handle tables very nicely. If you’re lazy, there are online tools to make tables. They have quite a few options. You’re probably fine using that. I prefer more control for my truth tables. Again, you’re fine without. But in case anyone is interested, I’ll explain. Maybe you’ll want to be able to edit the code the generator spits out. (I often use a generator to start and then tweak as needed.)

First, here’s the code for the truth table for p_1 or not p_1:

\begin{tabular}{c|cccc}
$p_1$ & $p_1$ & $\lor$ & $\lnot$ & $p_1$ \\
\hline
T & & \textbf{T} & F & \\
F & & \textbf{T} & T & \\
\end{tabular}

How do you construct this thing? First set up the tabular environment:

\begin{tabular}{}
\end{tabular}

The second set of squiggle brackets after \begin let you set up the columns. Each c gives a center aligned column. If you want left or right aligned columns, use l or r instead of c. Yes, you can mix the three. The | gives a vertical line going down the entire table. Note for truth tables you want a column for every single symbol. That way nothing is under the variables and you can have a straight line of Ts and Fs under the connectives.
So, for p_1 or not p_1 we want a column for p_1, a bar, then columns for each of p_1, or, not, and p_1. That’s four more. So, we have:

\begin{tabular}{c|cccc}
\end{tabular}

We have the table set up. Now to fill it in. The first line of the table has the atomic sentences on the left and then the sentence in question on the right. Type the content of each column, separated by &. Then end the line with \\. So, to have the first line of the truth table:

\begin{tabular}{c|cccc}
$p_1$ & $p_1$ & $\lor$ & $\lnot$ & $p_1$ \\
\end{tabular}

To have the horizontal line, type \hline on its own line. Then move on to the next row, doing the same thing you did for the first row. Note that if you want nothing in a certain spot, just leave the space between the two &s empty. So, for the second row, you want a T under the first p_1 (the one on the left side of the table), then nothing under the first one on the right, then a T under the or sign, an F under the not sign, and then nothing under the last p_1. The third line is similar. Now we have:

\begin{tabular}{c|cccc}
$p_1$ & $p_1$ & $\lor$ & $\lnot$ & $p_1$ \\
\hline
T & & T & F & \\
F & & T & T & \\
\end{tabular}

This is a fine truth table. But, maybe you want to bold the truth values for the main connective. To make T bold, type \textbf{T}. You can replace “T” with other text, of course. If you’re using Overleaf, highlighting the text and pressing Ctrl+B will put the tag in automatically. This brings us to the complete table as quoted in the beginning of this section.

The comment section is open. Questions and suggestions are welcome.

(Edit notes: As Soren pointed out, I originally put the wrong symbol for commenting. I also realized the amsmath package is not needed, so I removed that. Since these are usually printed in black and white anyway, I got rid of color in favor of boldface type. This has the added benefit of avoiding the need for packages entirely. In the third edit I added the \leq and \geq tags as well as \hdots because I realized they’re needed for indexing variables. \hdots requires the amsmath package, so I added that line back in. Using bold instead of color still seems to be better.)
I was reading about Ito's formula and Girsanov theorem, but I am still struggling to grasp how in reality these are combined to compute the price of an option. What are the main source to understand this topic in a very practical manner? In a practical manner, here is how you get to the PDE of your option: Use Girsanov theorem to go from the real-world measure to the risk-neutral measure (basically subtract the market price of risk $\mathrm dW^Q_t = \mathrm d W^P_t - \frac{\mu -r}{\sigma} \mathrm dt$). This will change your SDE. Discounted option price $e ^{-rt} v(t, S_t)$ has to be a martingale in the risk-neutral world. Hence use the Ito's formula to calculate the differential $\mathrm d (e ^{-rt} v(t, S_t))$ and set the drift term to zero, which will give you the PDE that your option price must satisfy. Set the boundary conditions for the PDE based on the payoffs of the option. The PDE and boundary conditions are individual to each option, but the derivation is always similar to that of the Black-Scholes PDE (sometimes you will have other differentials involved, for example, running maximum in the look-back options etc). "Stochastic Calculus for Finance II - Continuous-time models" by Shreve Chapters 4 and 7 are good references for this topic.
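Concretely, for the Black-Scholes case the computation in step 2 looks as follows (a standard textbook sketch, assuming risk-neutral geometric Brownian motion $\mathrm dS_t = rS_t\,\mathrm dt + \sigma S_t\,\mathrm dW^Q_t$; this is not specific to any particular exotic option). Ito's formula applied to the discounted price gives
\begin{equation*}
\mathrm d\big(e^{-rt} v(t,S_t)\big) = e^{-rt}\Big(-r v + v_t + r S_t v_S + \tfrac{1}{2}\sigma^2 S_t^2 v_{SS}\Big)\mathrm dt + e^{-rt}\sigma S_t v_S \,\mathrm dW^Q_t,
\end{equation*}
and setting the drift term to zero yields the Black-Scholes PDE
\begin{equation*}
v_t + r S v_S + \tfrac{1}{2}\sigma^2 S^2 v_{SS} - r v = 0,
\end{equation*}
with a terminal condition given by the payoff, e.g. $v(T,S) = (S-K)^+$ for a European call.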
Assume that $y/ \log x \rightarrow \infty$ and that $y/x \rightarrow 0$. Then, from a conjecture by Montgomery and Soundararajan, we expect the number of primes in the interval $[x,x+y]$ to be normally distributed with mean $y/\log x$ and standard deviation $\sqrt{y(\log x/y)/(\log x)^2}$. But numerical testing produces a slight and systematic deviation from this conjecture. Specifically, I have tested for different interval lengths $y=x^c$. For each value of $c$, I calculated $\pi(x+y)-\pi(x)$, the number of primes in $[x,x+y]$, for $N$ non-overlapping intervals. Then I normalized the data by subtracting the mean and dividing by the standard deviation corresponding to each interval, as stated by the conjecture. We should therefore expect the resulting $N$ samples to be normally distributed with mean 0 and standard deviation $\sigma=1$. However, what I get is the following: \begin{matrix} c& \sigma& N\\ \hline 0.20 & 0.967 & 240000\\ 0.25 & 0.966 & 240000\\ 0.30 & 0.965 & 240000\\ 0.40 & 0.958 & 240000\\ 0.50 & 0.947 & 240000\\ 0.55 & 0.917 & 40000\\ 0.60 & 0.899 & 20000\\ 0.65 & 0.891 & 10000\\ 0.70 & 0.889 & 5000\\ \end{matrix} While each data set is indeed normally distributed, what appears to be happening is that the standard deviation decreases with increasing $c$, as compared to the conjecture by Montgomery and Soundararajan. The numerical results seem to be reasonably consistent, so I don't think they are an artifact of sampling only a finite number of intervals (however, please point out if I made any apparent mistakes in the above). I am not fluent enough in the theory to walk through Montgomery and Soundararajan's arguments myself, so I would greatly appreciate any comments on or explanation of this finding. EDIT Including the lower order term in Montgomery and Soundararajan's conjecture, as suggested in the answer below by Lucia, we have the following revised numerics: \begin{matrix} c& \sigma& N\\ \hline 0.20 & 1.000 & 240000\\ 0.25 & 1.001 & 240000\\ 0.30 & 1.003 & 240000\\ 0.40 & 1.002 & 240000\\ 0.50 & 1.000 & 240000\\ 0.55 & 1.001 & 40000\\ 0.60 & 0.992 & 20000\\ 0.65 & 0.995 & 10000\\ 0.70 & 1.009 & 5000\\ \end{matrix} These numbers strongly support Montgomery and Soundararajan's conjecture, so it is clear that the missing lower order term was indeed responsible for the observed discrepancy.
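For reference, here is a minimal R sketch of the kind of sampling procedure described above (my own toy reconstruction, not the code used for the tables; the starting point, the number of intervals and the naive primality test are chosen for speed, and the conjectured standard deviation is read as $\sqrt{y\log(x/y)}/\log x$):

is_prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n < 4) return(TRUE)
  if (n %% 2 == 0) return(FALSE)
  d <- 3
  while (d * d <= n) {
    if (n %% d == 0) return(FALSE)
    d <- d + 2
  }
  TRUE
}
count_primes <- function(a, b) sum(vapply(a:b, is_prime, logical(1)))

x  <- 1e6     # start of the first interval (toy size; the real test used larger x)
cc <- 0.5     # interval length exponent, y = x^cc
N  <- 200     # number of non-overlapping intervals (small, for speed)

z <- numeric(N)
for (i in 1:N) {
  y  <- floor(x^cc)
  k  <- count_primes(x + 1, x + y)      # primes in (x, x+y]
  mu <- y / log(x)                      # conjectured mean
  sg <- sqrt(y * log(x / y)) / log(x)   # conjectured standard deviation
  z[i] <- (k - mu) / sg
  x <- x + y + 1                        # move on to the next disjoint interval
}
c(mean = mean(z), sd = sd(z))           # roughly (0, 1) if the conjecture holds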
Physics > Atomic Physics

Title: Sensitivity of isotope shift to distribution of nuclear charge density
(Submitted on 17 Jul 2019)

Abstract: It is usually assumed that the field isotope shift (FIS) is completely determined by the change of the averaged squared values of the nuclear charge radius $\langle r^2\rangle$. Relativistic corrections modify the expression for FIS, which is actually described by the change of $\langle r^{2 \gamma}\rangle$, where $\gamma=\sqrt{1 - Z^2 \alpha^2}$. In the present paper we consider corrections to FIS which are due to the nuclear deformation and due to the predicted reduced charge density in the middle of the superheavy nuclei produced by a very strong proton repulsion (hole in the nuclear centre). Specifically, we investigate effects which can not be completely reduced to the change of $\langle r^2 \rangle$ or $\langle r^{2 \gamma}\rangle$.

Submission history: From: Vladimir Dzuba [v1] Wed, 17 Jul 2019 10:49:58 GMT (19kb)
Two years ago I have written a post “Naughty APEs and the quest for the holy grail“, where I have discussed why percentage-based error measures (such as MPE, MAPE, sMAPE) are not good for the task of forecasting performance evaluation. However, it seems to me that I did not explain the topic to the full extent – the time has shown that there are some other issues that need to be discussed in detail, so I have decided to write another post on the topic, possibly repeating myself a little bit. This time we won’t have imaginary forecasters, we will be more serious. Introduction We start from a fact, well-known in statistics. MSE is minimised by mean value, while MAE is minimised by the median. There is a lot of nice examples, papers and post, explaining, why this happens and how, so we don’t need to waste time on that. But there are two elements related to this that need to be discussed. First, some people think that this property is applicable only to the estimation of models. For some reason, it is implied that forecasts evaluation is a completely different activity, unrelated to the estimation. Something like: as soon as you estimate a model, you are done with “MSE is minimised by mean” thingy, the evaluation is not related to this. However, when we select the best performing model based on some error measure, we inevitably impose properties of that error on the forecasts. So, if a method performs better than the other in terms of MAE, it means that it produces forecasts closer to the median of the data than the others do. For example, zero forecast will always be one of the most accurate forecasts for intermittent demand in terms of MAE, especially when number of zeroes in the data is greater than 50%. The reason for this is obvious: if you have so many zeroes, then saying that we won’t sell anything in the next foreseeable future is a safe strategy. And this strategy works, because MAE is minimised by median. The usefulness of such forecast is a completely different topic, but a thing to carry out from this, is that in general, MAE-based error measures should not be used on intermittent demand. Second, some researchers think that if a model is optimised, for example, using MSE, then it should always be evaluated using the MSE-based error measure. This is not completely correct. Yes, the model will probably perform better if the loss function is aligned with the error measure used for the evaluation. However, this does not mean that we cannot use MAE-based error measures, if the loss is not MAE-based. These are still slightly different tasks, and the selection of the error measures should be motivated by a specific problem (for which the forecast is needed) not by a loss function used. For example, in case of inventory management neither MAE nor MSE might be useful for the evaluation. One would probably need to see how models perform in terms of safety stock allocation, and this is a completely different problem, which does not necessarily align well with either MAE or MSE. As a final note, in some cases we are interested in estimating models via the likelihood maximisation, and selecting an aligned error measure in those cases might be quite challenging. So, as a minor conclusion, MSE-based measures should be used, when we are interested in identifying the method that outperforms the others in terms of mean values, while the MAE-based should be preferred for the medians, irrespective to how we estimate our models. 
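As a small illustration of this point (my own sketch in R, not part of the original post), one can search over constant forecasts for a skewed sample and see which constant minimises each loss:

set.seed(42)
y <- rlnorm(1000, meanlog = 0, sdlog = 1)   # skewed data, so mean and median differ

# Loss of a constant "forecast" f over the sample
mseLoss <- function(f) mean((y - f)^2)
maeLoss <- function(f) mean(abs(y - f))

bestMSE <- optimize(mseLoss, range(y))$minimum
bestMAE <- optimize(maeLoss, range(y))$minimum

round(c(bestMSE = bestMSE, mean = mean(y),
        bestMAE = bestMAE, median = median(y)), 3)
# The MSE-optimal constant sits next to the sample mean,
# while the MAE-optimal constant sits next to the sample median.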
As one can already see, there might be some other losses, focusing, for example, on specific quantiles (such as pinball loss). But the other question is, what statistics minimise MAPE and sMAPE. The short answer is: “we don’t know”. However, Stephan Kolassa and Martin Roland (2011) showed on a simple example that in case of strictly positive distribution the MAPE prefers the biased forecasts, and Stephan Kolassa (2016) noted that in case of log normal distribution, the MAPE is minimised by the mode. So at least we have an idea of what to expect from MAPE. However, the sMAPE is a complete mystery in this sense. We don’t know what it does, and this is yet another reason not to use it in forecasts evaluation at all (see the other reasons in the previous post). We are already familiar with some error measures from the previous post, so I will not rewrite all the formulae here. And we already know that the error measures can be in the original units (MAE, MSE, ME), percentage (MAPE, MPE), scaled or relative. Skipping the first two, we can discuss the latter two in more detail. Scaled measures Scaled measures can be quite informative and useful, when comparing different forecasting methods. For example, sMAE and sMSE (from Petropoulos & Kourentzes, 2015): \begin{equation} \label{eq:sMAE} \text{sMAE} = \frac{\text{MAE}}{\bar{y}}, \end{equation} \begin{equation} \label{eq:sMSE} \text{sMSE} = \frac{\text{MSE}}{\bar{y}^2}, \end{equation} where \(\bar{y}\) is the in-sample mean of the data. These measures have a simple interpretation, close to the one of MAPE: they show the mean percentage errors, relative to the mean of the series (not to each specific observation in the holdout). They don’t have problems that APEs have, but they might not be applicable in cases of non-stationary data, when mean changes over time. To make a point, they might be okay for the series on the graph on the left below, where the level of series does not change substantially, but their value might change dramatically, when the new data is added on the graph on the right, with trend time series. MASE by Rob Hyndman and Anne Koehler (2006) does not have this issue, because it is scaled using the mean absolute in-sample first differences of the data: \begin{equation} \label{eq:MASE} \text{MASE} = \frac{\text{MAE}}{\frac{1}{T-1}\sum_{t=2}^{T}|y_t -y_{t-1}|} . \end{equation} The motivation here is statistically solid: while the mean can change over time, the first difference of the data are usually much more stable. So the denominator of the formula becomes more or less fixed, which solves the problem, mentioned above. Unfortunately, MASE has a different issue – it is uninterpretable. If MASE is equal to 1.3, this does not really mean anything. Yes, the denominator can be interpreted as a mean absolute error of in-sample one-step-ahead forecasts of Naive, but this does not help with the overall interpretation. This measure can be used for research purposes, but I would not expect practitioners to understand and use it. And let’s not forget about the dictum “MAE is minimised by median”, which implies that, in general, neither MASE nor sMAE should be used on intermittent demand. Relative measures Finally, we have relative measures. For example, we can have relative MAE or RMSE. Note that Davydenko & Fildes, 2013 called them “RelMAE” and “RelRMSE”, while the aggregated versions were “AvgRelMAE” and “AvgRelRMSE”. I personally find these names tedious, so I prefer to call them “rMAE”, “rRMSE” and “ArMAE” and “ArRMSE” respectively. 
They are calculated the following way: \begin{equation} \label{eq:rMAE} \text{rMAE} = \frac{\text{MAE}_a}{\text{MAE}_b}, \end{equation} \begin{equation} \label{eq:rRMSE} \text{rRMSE} = \frac{\text{RMSE}_a}{\text{RMSE}_b}, \end{equation} where the numerator contains the error measure of the method of interest, and the denominator contains the error measure of a benchmark method (for example, Naive method in case of continuous demand or forecast from an average for the intermittent one). Given that both are aligned and are evaluated over the same part of the sample, we don’t need to bother about the changing mean in time series, which makes both of them easily interpretable. If the measure is greater than one, then our method performed worse than the benchmark; if it is less than one, the method is doing better. Furthermore, as I have mentioned in the previous post, both rMAE and rRMSE align very well with the idea of forecast value, developed by Mike Gilliland from SAS, which can be calculated as: \begin{equation} \label{eq:FV} \text{FV} = 1-\text{relative measure} \cdot 100\%. \end{equation} So, for example, rMAE = 0.96 means that our method is doing 4% better than the benchmark in terms of MAE, so that we are adding the value to the forecast that we produce. As mentioned in the previous post, if you want to aggregate the relative error measures, it makes sense to use geometric means instead of arithmetic ones. This is because the distribution of relative measures is typically asymmetric, and the arithmetic mean would be too much influenced by the outliers (cases, when the models performed very poorly). Geometric one, on the other hand, is much more robust, in some cases aligning with the median value of a distribution. The main limitation of relative measures is that they cannot be properly aggregated, when the error (either MAE or MSE) is equal to zero either in the numerator or in the denominator, because the geometric mean becomes either equal to zero or infinity. This does not stop us from analysing the distribution of the errors, but might cause some inconveniences. To be honest, we don’t face these situations very often in real world, because this implies that we have produced perfect forecast for the whole holdout sample, several steps ahead, which can only happen, if there is no forecasting problem at all. An example would be, when we know for sure that we will sell 500 bottles of beer per day for the next week, because someone pre-ordered them from us. The other example would be an intermittent demand series with zeroes in the holdout, where the Naive would produce zero forecast as well. But I would argue that Naive is not a good benchmark in this case, it makes sense to switch to something like simple mean of the series. I struggle to come up with other meaningful examples from the real world, where the mean error (either MAE or MSE) would be equal to zero for the whole holdout. Having said that, if you notice that either rMAE or rRMSE becomes equal to zero or infinite for some time series, it makes sense to investigate, why that happened, and probably remove those series from the analysis. It might sound as I am advertising relative measures. To some extent I do, because I don’t think that there are better options for the comparison of the point forecasts than these at the moment. I would be glad to recommend something else as soon as we have a better option. They are not ideal, but they do a pretty good job for the forecasts evaluation. 
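To make the definitions above concrete, here is a small hand-rolled R sketch of the scaled and relative measures (my own illustration: the series, the methods and the toy numbers are made up, and in practice the accuracy output of the smooth/greybox functions used in the example below reports these measures for you):

# "actual"/"insample" are the holdout and in-sample parts of one series,
# "fc" is the forecast of the method of interest, "fcBench" the benchmark.
errorMeasures <- function(actual, fc, fcBench, insample) {
  maeA  <- mean(abs(actual - fc));      maeB  <- mean(abs(actual - fcBench))
  rmseA <- sqrt(mean((actual - fc)^2)); rmseB <- sqrt(mean((actual - fcBench)^2))
  c(sMAE  = maeA / mean(insample),
    sMSE  = mean((actual - fc)^2) / mean(insample)^2,
    MASE  = maeA / mean(abs(diff(insample))),
    rMAE  = maeA / maeB,
    rRMSE = rmseA / rmseB)
}

# Toy example: Naive benchmark vs a simple mean forecast on a random-walk series
set.seed(1)
y <- cumsum(rnorm(120, 10, 2))
insample <- y[1:110]; actual <- y[111:120]
fcBench <- rep(tail(insample, 1), 10)        # Naive
fc      <- rep(mean(tail(insample, 20)), 10) # mean of the last 20 observations
em <- errorMeasures(actual, fc, fcBench, insample)
round(em, 3)
(1 - em["rMAE"]) * 100                       # forecast value in %, as defined above

# Aggregating relative measures across many series: use the geometric mean
geomMean <- function(x) exp(mean(log(x)))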
So, summarising, I would recommend using relative error measures, keeping in mind that MSE is minimised by mean and MAE is minimised by median. And in order to decide, what to use between the two, you should probably ask yourself: what do we really need to measure? In some cases it might appear that none of the above is needed. Maybe you should look at the prediction intervals instead of point forecasts… This is something that we will discuss next time. Examples in R To make this post a bit closer to the application, we will consider a simple example with smooth package v2.5.3 and several series from the M3 dataset. Load the packages: library(smooth) library(Mcomp) Take a subset of monthly demographic time series (it’s just 111 time series, which should suffice for our experiment): M3Subset <- subset(M3, 12, "demographic") Prepare the array for the two error measures: rMAE and rRMSE (these are calculated based on the measures() function from greybox errorMeasures <- array(NA, c(length(M3Subset),2,3), dimnames=list(NULL, c("rMAE","rRMSE"), c("CES","ETS(Z,Z,Z)","ETS(Z,X,Z)"))) Do the loop, applying the models to the data and extracting the error measures from the accuracy variable. By default, the benchmark in rMAE and rRMSE is the Naive method. for(i in 1:length(M3Subset)){ errorMeasures[i,,1] <- auto.ces(M3Subset[[i]])$accuracy[c("rMAE","rRMSE")] errorMeasures[i,,2] <- es(M3Subset[[i]])$accuracy[c("rMAE","rRMSE")] errorMeasures[i,,3] <- es(M3Subset[[i]],"ZXZ")$accuracy[c("rMAE","rRMSE")] cat(i); cat(", ") } Now we can analyse the results. We start with the ArMAE and ArRMSE: exp(apply(log(errorMeasures),c(2,3),mean)) CES ETS(Z,Z,Z) ETS(Z,X,Z) rMAE 0.6339194 0.8798265 0.8540869 rRMSE 0.6430326 0.8843838 0.8584140 As we see, all models did better than Naive: ETS is approximately 12 – 16% better, while CES is more than 35% better. Also, CES outperformed both ETS options in terms of rMAE and rRMSE. The difference is quite substantial, but in order to see this clearer, we can reformulate our error measures, dividing rMAE of each option by (for example) rMAE of ETS(Z,Z,Z): errorMeasuresZZZ <- errorMeasures for(i in 1:3){ errorMeasuresZZZ[,,i] <- errorMeasuresZZZ[,,i] / errorMeasures[,,"ETS(Z,Z,Z)"] } exp(apply(log(errorMeasuresZZZ),c(2,3),mean)) CES ETS(Z,Z,Z) ETS(Z,X,Z) rMAE 0.7205050 1 0.9707448 rRMSE 0.7270968 1 0.9706352 With these measures, we can say that CES is approximately 28% more accurate than ETS(Z,Z,Z) both in terms of MAE and RMSE. Also, the exclusion of the multiplicative trend in ETS leads to the improvements in the accuracy of around 3% for both MAE and RMSE. We can also analyse the distributions of the error measures, which sometimes can give an additional information about the performance of the models. The simplest thing to do is to produce boxplots: boxplot(errorMeasures[,1,]) abline(h=1, col="grey", lwd=2) points(exp(apply(log(errorMeasures[,1,]),2,mean)),col="red",pch=16) Given that the error measures have asymmetric distribution, it is difficult to analyse the results. But what we can spot is that the boxplot of CES is located lower than the boxplots of the other two models. This indicates that the model is performing consistently better than the others. The grey horizontal line on the plot is the value for the benchmark, which is Naive in our case. Notice that in some cases all the models that we have applied to the data do not outperform Naive (there are values above the line), but on average (in terms of geometric means, red dots) they do better. 
Producing the boxplot in log scale might sometimes simplify the analysis:

boxplot(log(errorMeasures[,1,]))
abline(h=0, col="grey", lwd=2)
points(apply(log(errorMeasures[,1,]),2,mean),col="red",pch=16)

The grey horizontal line on this plot still corresponds to Naive, which in log scale is equal to zero (log(1)=0). In our case this plot does not give any additional information, but in some cases it might be easier to work in logarithms rather than in the original scale due to the potential magnitude of the positive errors. The only thing that we can note is that CES was more accurate than ETS for the first, second and third quartiles, but it seems that there were some cases where it was less accurate than both ETS and Naive (the upper whisker and the outliers). There are other things that we could do in order to analyse the distribution of error measures more thoroughly. For example, we could do statistical tests (such as Nemenyi) in order to see whether the difference between the models is statistically significant or if it is due to randomness. But this is something that we should leave for the future posts.
Sample Quantiles

The generic function quantile produces sample quantiles corresponding to the given probabilities. The smallest observation corresponds to a probability of 0 and the largest to a probability of 1.

Keywords: univar

Usage

quantile(x, …)
# S3 method for default
quantile(x, probs = seq(0, 1, 0.25), na.rm = FALSE, names = TRUE, type = 7, …)

Arguments

x: numeric vector whose sample quantiles are wanted, or an object of a class for which a method has been defined (see also ‘details’). NA and NaN values are not allowed in numeric vectors unless na.rm is TRUE.

probs: numeric vector of probabilities with values in \([0,1]\). (Values up to 2e-14 outside that range are accepted and moved to the nearby endpoint.)

na.rm: logical; if true, any NA and NaN's are removed from x before the quantiles are computed.

names: logical; if true, the result has a names attribute. Set to FALSE for speedup with many probs.

type: an integer between 1 and 9 selecting one of the nine quantile algorithms detailed below to be used.

…: further arguments passed to or from other methods.

Details

A vector of length length(probs) is returned; if names = TRUE, it has a names attribute. The default method works with classed objects sufficiently like numeric vectors that sort and (not needed by types 1 and 3) addition of elements and multiplication by a number work correctly. Note that as this is in a namespace, the copy of sort in base will be used, not some S4 generic of that name. Also note that there is no check on the ‘correctly’, and so e.g. quantile can be applied to complex vectors which (apart from ties) will be ordered on their real parts.

Types

quantile returns estimates of underlying distribution quantiles based on one or two order statistics from the supplied elements in x at probabilities in probs. One of the nine quantile algorithms discussed in Hyndman and Fan (1996), selected by type, is employed. All sample quantiles are defined as weighted averages of consecutive order statistics. Sample quantiles of type \(i\) are defined by: $$Q_{i}(p) = (1 - \gamma)x_{j} + \gamma x_{j+1}$$ where \(1 \le i \le 9\), \(\frac{j - m}{n} \le p < \frac{j - m + 1}{n}\), \(x_{j}\) is the \(j\)th order statistic, \(n\) is the sample size, the value of \(\gamma\) is a function of \(j = \lfloor np + m\rfloor\) and \(g = np + m - j\), and \(m\) is a constant determined by the sample quantile type.

Discontinuous sample quantile types 1, 2, and 3

For types 1, 2 and 3, \(Q_i(p)\) is a discontinuous function of \(p\), with \(m = 0\) when \(i = 1\) and \(i = 2\), and \(m = -1/2\) when \(i = 3\).

Type 1: Inverse of empirical distribution function. \(\gamma = 0\) if \(g = 0\), and 1 otherwise.

Type 2: Similar to type 1 but with averaging at discontinuities. \(\gamma = 0.5\) if \(g = 0\), and 1 otherwise.

Type 3: SAS definition: nearest even order statistic. \(\gamma = 0\) if \(g = 0\) and \(j\) is even, and 1 otherwise.

Continuous sample quantile types 4 through 9

For types 4 through 9, \(Q_i(p)\) is a continuous function of \(p\), with \(\gamma = g\) and \(m\) given below. The sample quantiles can be obtained equivalently by linear interpolation between the points \((p_k,x_k)\) where \(x_k\) is the \(k\)th order statistic. Specific expressions for \(p_k\) are given below.

Type 4: \(m = 0\). \(p_k = \frac{k}{n}\). That is, linear interpolation of the empirical cdf.

Type 5: \(m = 1/2\). \(p_k = \frac{k - 0.5}{n}\). That is a piecewise linear function where the knots are the values midway through the steps of the empirical cdf.
This is popular amongst hydrologists.

Type 6: \(m = p\). \(p_k = \frac{k}{n + 1}\). Thus \(p_k = \mbox{E}[F(x_{k})]\). This is used by Minitab and by SPSS.

Type 7: \(m = 1-p\). \(p_k = \frac{k - 1}{n - 1}\). In this case, \(p_k = \mbox{mode}[F(x_{k})]\). This is used by S.

Type 8: \(m = (p+1)/3\). \(p_k = \frac{k - 1/3}{n + 1/3}\). Then \(p_k \approx \mbox{median}[F(x_{k})]\). The resulting quantile estimates are approximately median-unbiased regardless of the distribution of x.

Type 9: \(m = p/4 + 3/8\). \(p_k = \frac{k - 3/8}{n + 1/4}\). The resulting quantile estimates are approximately unbiased for the expected order statistics if x is normally distributed.

Further details are provided in Hyndman and Fan (1996), who recommended type 8. The default method is type 7, as used by S and by R < 2.0.0.

References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.

Hyndman, R. J. and Fan, Y. (1996) Sample quantiles in statistical packages, American Statistician 50, 361--365. doi:10.2307/2684934.

See Also

Aliases: quantile, quantile.default

Examples

library(stats)
# NOT RUN {
quantile(x <- rnorm(1001)) # Extremes & Quartiles by default
quantile(x, probs = c(0.1, 0.5, 1, 2, 5, 10, 50, NA)/100)

### Compare different types
quantAll <- function(x, prob, ...)
  t(vapply(1:9, function(typ) quantile(x, prob=prob, type = typ, ...),
           quantile(x, prob, type=1)))
p <- c(0.1, 0.5, 1, 2, 5, 10, 50)/100
signif(quantAll(x, p), 4)

## for complex numbers:
z <- complex(re=x, im = -10*x)
signif(quantAll(z, p), 4)
# }

Documentation reproduced from package stats, version 3.6.1, License: Part of R 3.6.1
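As a quick cross-check of the type 7 formula above (my own example, not part of the reproduced documentation): with m = 1 - p we get j = floor((n-1)p + 1) and gamma = (n-1)p + 1 - j, which can be compared directly with quantile():

x <- sort(rnorm(10))
p <- 0.25
h <- (length(x) - 1) * p + 1            # = np + m with m = 1 - p
j <- floor(h); g <- h - j               # gamma = g for the continuous types
byHand  <- (1 - g) * x[j] + g * x[j + 1]
builtIn <- quantile(x, p, type = 7, names = FALSE)
c(byHand = byHand, builtIn = builtIn)   # the two values should agree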
David Mumford received earlier this year the 2007 AMS Leroy P. Steele Prize for Mathematical Exposition. The jury honors Mumford for “his beautiful expository accounts of a host of aspects of algebraic geometry”. Not surprisingly, the first work they mention is his mimeographed notes of the first 3 chapters of a course in algebraic geometry, usually called “Mumford’s red book” because the notes were wrapped in a red cover. In 1988, the notes were reprinted by Springer-Verlag. Unfortunately, the only red they preserved was in the title.

The AMS describes the importance of the red book as follows. “This is one of the few books that attempt to convey in pictures some of the highly abstract notions that arise in the field of algebraic geometry. In his response upon receiving the prize, Mumford recalled that some of his drawings from The Red Book were included in a collection called Five Centuries of French Mathematics. This seemed fitting, he noted: “After all, it was the French who started impressionist painting and isn’t this just an impressionist scheme for rendering geometry?””

These days it is perfectly possible to get a good grasp on difficult concepts from algebraic geometry by reading blogs, watching YouTube or plugging in equations to sophisticated math-programs. In the early seventies though, if you wanted to know what Grothendieck’s scheme-revolution was all about you had no choice but to wade through the EGA’s and SGA’s and they were notorious for being extremely user-unfriendly regarding illustrations… So the few depictions of schemes available, drawn by people sufficiently fluent in Grothendieck’s new geometric language, had no less than treasure-map-cult-status and were studied in minute detail.

Mumford’s red book was a gold mine for such treasure maps. Here’s my favorite one, scanned from the original mimeographed notes (it looks somewhat tidier in the Springer-version).

It is the first depiction of $\mathbf{spec}(\mathbb{Z}[x]) $, the affine scheme of the ring $\mathbb{Z}[x] $ of all integral polynomials. Mumford calls it the “arithmetic surface” as the picture resembles the one he made before of the affine scheme $\mathbf{spec}(\mathbb{C}[x,y]) $ corresponding to the two-dimensional complex affine space $\mathbb{A}^2_{\mathbb{C}} $. Mumford adds that the arithmetic surface is ‘the first example which has a real mixing of arithmetic and geometric properties’.

Let’s have a closer look at the treasure map. It introduces some new signs which must have looked exotic at the time, but have since become standard tools to depict algebraic schemes. For starters, recall that the underlying topological space of $\mathbf{spec}(\mathbb{Z}[x]) $ is the set of all prime ideals of the integral polynomial ring $\mathbb{Z}[x] $, so the map tries to list them all as well as their inclusions/intersections.

The doodle in the right upper corner depicts the ‘generic point’ of the scheme. That is, the geometric object corresponding to the prime ideal $~(0) $ (note that $\mathbb{Z}[x] $ is an integral domain). Because the zero ideal is contained in any other prime ideal, the algebraic/geometric mantra (“inclusions reverse when shifting between algebra and geometry”) asserts that the geometric object corresponding to $~(0) $ should contain all other geometric objects of the arithmetic plane, so it is just the whole plane! Clearly, it is rather senseless to depict this fact by coloring the whole plane black as then we wouldn’t be able to see the finer objects.
Mumford’s solution to this is to draw a hairy ball, which, in this case, is sufficiently thick to include fragments going in every possible direction. In general, one should read these doodles as saying that the geometric object represented by this doodle contains all other objects seen elsewhere in the picture if the hairy-ball-doodle includes stuff pointing in the direction of the smaller object. So, in the case of the object corresponding to $~(0) $, the doodle has pointers going everywhere, saying that the geometric object contains all other objects depicted.

Let’s move over to the doodles in the lower right-hand corner. They represent the geometric object corresponding to principal prime ideals of the form $~(p(x)) $, where $p(x) $ is an irreducible polynomial over the integers, that is, a polynomial which we cannot write as the product of two smaller integral polynomials. The objects corresponding to such prime ideals should be thought of as ‘horizontal’ curves in the plane.

The doodles depicted correspond to the prime ideal $~(x) $, containing all polynomials divisible by $x $, so when we divide it out we get, as expected, a domain $\mathbb{Z}[x]/(x) \simeq \mathbb{Z} $, and the one corresponding to the ideal $~(x^2+1) $, containing all polynomials divisible by $x^2+1 $, which can be proved to be a prime ideal of $\mathbb{Z}[x] $ by observing that after factoring out we get $\mathbb{Z}[x]/(x^2+1) \simeq \mathbb{Z}[i] $, the domain of all Gaussian integers $\mathbb{Z}[i] $. The corresponding doodles (the ‘generic points’ of the curvy-objects) have a predominant horizontal component as they have to express the fact that they depict horizontal curves in the plane. It is no coincidence that the doodle of $~(x^2+1) $ is somewhat bulkier than the one of $~(x) $, as the latter one must only depict the fact that all points lying on the straight line to its left belong to it, whereas the former one must claim inclusion of all points lying on the ‘quadric’ it determines.

Apart from these ‘horizontal’ curves, there are also ‘vertical’ lines corresponding to the principal prime ideals $~(p) $, containing the polynomials all of whose coefficients are divisible by the prime number $p $. These are indeed prime ideals of $\mathbb{Z}[x] $, because their quotients $\mathbb{Z}[x]/(p) \simeq (\mathbb{Z}/p\mathbb{Z})[x] $ are domains, being the rings of polynomials over the finite field $\mathbb{Z}/p\mathbb{Z} = \mathbb{F}_p $. The doodles corresponding to these prime ideals have a predominant vertical component (depicting the ‘vertical’ lines) and have a uniform thickness for all prime numbers $p $, as each of them only has to claim ownership of the points lying on the vertical line under them.

Right! So far we managed to depict the zero prime ideal (the whole plane) and the principal prime ideals of $\mathbb{Z}[x] $ (the horizontal curves and the vertical lines). It remains to depict the maximal ideals. These are all known to be of the form $\mathfrak{m} = (p,f(x)) $ where $p $ is a prime number and $f(x) $ is an irreducible integral polynomial, which remains irreducible when reduced modulo $p $ (that is, if we reduce all coefficients of the integral polynomial $f(x) $ modulo $p $ we obtain an irreducible polynomial in $~\mathbb{F}_p[x] $).
By the algebra/geometry mantra mentioned before, the geometric object corresponding to such a maximal ideal can be seen as the ‘intersection’ of a horizontal curve (the object corresponding to the principal prime ideal $~(f(x)) $) and a vertical line (corresponding to the prime ideal $~(p) $). Because maximal ideals are not contained in any other prime ideals, there is no reason to have a doodle associated to $\mathfrak{m} $ and we can just depict it by a “point” in the plane, more precisely the intersection-point of the horizontal curve with the vertical line determined by $\mathfrak{m}=(p,f(x)) $.

Still, Mumford’s treasure map doesn’t treat all “points” equally. For example, the point corresponding to the maximal ideal $\mathfrak{m}_1 = (3,x+2) $ is depicted by a solid dot $\mathbf{.} $, whereas the point corresponding to the maximal ideal $\mathfrak{m}_2 = (3,x^2+1) $ is represented by a fatter point $\circ $. The distinction between the two ‘points’ becomes evident when we look at the corresponding quotients (which we know have to be fields). We have $\mathbb{Z}[x]/\mathfrak{m}_1 = \mathbb{Z}[x]/(3,x+2)=(\mathbb{Z}/3\mathbb{Z})[x]/(x+2) = \mathbb{Z}/3\mathbb{Z} = \mathbb{F}_3 $ whereas $\mathbb{Z}[x]/\mathfrak{m}_2 = \mathbb{Z}[x]/(3,x^2+1) = \mathbb{Z}/3\mathbb{Z}[x]/(x^2+1) = \mathbb{F}_3[x]/(x^2+1) = \mathbb{F}_{3^2} $; because the polynomial $x^2+1 $ remains irreducible over $\mathbb{F}_3 $, the quotient $\mathbb{F}_3[x]/(x^2+1) $ is no longer the prime field $\mathbb{F}_3 $ but a quadratic field extension of it, that is, the finite field consisting of 9 elements, $\mathbb{F}_{3^2} $.

That is, we represent the ‘points’ lying on the vertical line corresponding to the principal prime ideal $~(p) $ by a solid dot . when their quotient (aka residue field) is the prime field $~\mathbb{F}_p $, by a bigger point $\circ $ when the residue field is the finite field $~\mathbb{F}_{p^2} $, by an even fatter point $\bigcirc $ when the residue field is $~\mathbb{F}_{p^3} $, and so on, and on. The larger the residue field, the ‘fatter’ the corresponding point.

In fact, the ‘fat-point’ signs in Mumford’s treasure map are an attempt to depict the fact that an affine scheme contains a lot more information than just the set of all prime ideals. Indeed, an affine scheme determines (and is determined by) a “functor of points”. That is, to every field (or even every commutative ring) the affine scheme assigns the set of its ‘points’ defined over that field (or ring). For example, the $~\mathbb{F}_p $-points of $\mathbf{spec}(\mathbb{Z}[x]) $ are the solid . points on the vertical line $~(p) $, the $~\mathbb{F}_{p^2} $-points of $\mathbf{spec}(\mathbb{Z}[x]) $ are the solid . points and the slightly bigger $\circ $ points on that vertical line, and so on.

This concludes our first attempt to decipher Mumford’s drawing, but if we delve a bit deeper, we are bound to find even more treasures… (to be continued).
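As a small computational postscript (my own addition, not part of Mumford's notes or the text above): one can check directly on which vertical lines $~(p) $ the horizontal curve $~(x^2+1) $ passes through a 'fat' point, simply by testing whether $x^2+1 $ keeps or loses its irreducibility modulo $p $. A sketch in R:

# On which vertical lines (p) does the curve (x^2+1) give a "fat" point?
# x^2+1 stays irreducible mod p exactly when x^2 = -1 has no solution mod p.
primes <- c(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)

for (p in primes) {
  roots <- which((0:(p - 1))^2 %% p == (p - 1) %% p) - 1   # solutions of x^2 = -1 mod p
  if (length(roots) == 0) {
    cat(sprintf("p = %2d: x^2+1 irreducible mod p -> (p, x^2+1) is a fat point, residue field F_%d\n",
                p, p^2))
  } else {
    cat(sprintf("p = %2d: x^2+1 = (x-%d)(x+%d) mod p -> ordinary solid points on the line (p)\n",
                p, roots[1], roots[1]))
  }
}
# Fat points show up exactly at p = 3, 7, 11, 19, 23, ... (the primes p = 3 mod 4).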
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q|(p-1) and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix! 
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
I rarely saw any equation in physics which involved cube roots or odd roots.Even while solving problems I rarely saw any odd root or cube root. So why nature prefers even powers of physical quantities? Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up.Sign up to join this community I suspect this has to do with the fact that everybody knows the quadratic formula, while the general solution to higher degree polynomials is either impractical or non existent. These are rarer but they do occur: for instance in GR the horizons of the Schwarzschild-de Sitter metric are the solution of a third degree polynomial, while in fluid mechanics the radius of a circle of viscous fluid on a table grows like $t^{1/8}$ As others pointed out, cube roots do occur in many physical problems. Still, the question remains, why are they so much less common than square roots? One fundamental reason is the Pythagoras theorem: square roots pop up when calculating distances, regardless of the dimension. If you take a bit more general perspective, a great deal of examples also fit into the same category. For example, the $\sqrt{n}$ that pops up in the Central Limit theorem is the standard deviation of the sum of $n$ i. i. d. random variables, and the standard deviation is the (in fact, $L^2$) distance in the space of random variables. Similarly, in QM/QFT, you often get square roots because you work in $L^2$. This MathOverflow question even goes as far as to ask whether there are any other natural reasons for square roots to pop up. It turns out there are, but they are not so trivial to pinpoint! Now, what would be a reason for the third root to occur that would be as fundamental? For one thing, our world is $3$-dimensional, so that the volume scales as the third power of the linear dimension. This underlines the "turkey cooking time" example in the comments above. Of course, there is a share of examples where you add some exponents, and occasionally get $3$ without any "deep" reason, just as the third simplest natural number. Then you can invert a problem and get a third root. In my view, Kepler's law falls in this cathegory: for a circular orbit of radius $r$, we have $T=2\pi r/|v|$, and the velocity $v$ needed to generate the centrifugal force to offset a $\frac{1}{r^2}$ gravity is $1/\sqrt{r}$. So, the ultimate reason for the $\frac32$ exponent is that $\frac32=1+\frac12$. One "systematic" reason for the $1/3$ power to occur comes from saddle point approximations (a k. a Laplace/stationary phase methods). Assume that you are trying to compute the asymptotics of an integral of the form $\int_\gamma g(z)e^{nf(z)}dz$. over some contour $\gamma$ in the complex plane. The general recipe is to drag the contour so that it passes through a critical point $z_0$ of $f$, and then argue that only the part near $z_0$ contributes, where $f$ can be replaced by its Taylor polynomial. In the "generic" case, this Taylor series will be quadratic, so that the change of variables $w=n(z-z_0)^2$ which eliminates $n$ from the integral leads to $n^{-\frac12}$ factor. However, in the "second simplest" case, the first non-trivial term is cubic, leading instead to $n^{-\frac13}$ factor. To my understanding, this is the source of $1/3$ scaling and Tracey-Widom distributions in random matrices/interacting particle systems/ KPZ universality class.
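To spell the last mechanism out with a worked equation (a heuristic sketch, ignoring contour and convergence issues): if the quadratic term vanishes and the first nontrivial term of the Taylor expansion at the critical point is cubic, say $f(z)\approx f(z_0)+c\,(z-z_0)^3$, then rescaling the integration variable gives
\begin{equation*}
\int_\gamma g(z)e^{nf(z)}\,dz \approx g(z_0)\,e^{nf(z_0)} \int e^{n c (z-z_0)^3}\,dz = g(z_0)\,e^{nf(z_0)}\, n^{-1/3} \int e^{c w^3}\,dw, \qquad w = n^{1/3}(z-z_0),
\end{equation*}
so the prefactor scales as $n^{-1/3}$, in contrast with the generic quadratic case, where the substitution $w = n^{1/2}(z-z_0)$ produces the familiar $n^{-1/2}$.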
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$ even if there is no polynomial over the field with Galois group $S_n$ Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period, but upon a Google search I don't see the correct sort of equation when I enter "variable period cosine" @LucasHenrique I hate them, I tend to find algebraic proofs are more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$ I enjoyed developing the Lebesgue integral though. I thought that was cool But since every singleton except 0 is open, and the union of open sets is open, it follows that all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base, which then intersect to give the nonzero singletons? uh wait a sec... ... I need arbitrary intersections to produce singletons from open intervals... hmm... 0 does not even have a nbhd, since any set containing 0 is closed I have no idea how to deal with points having an empty nbhd o wait a sec... every topology must contain the whole set itself as an open set, so I guess the nbhd of 0 is $\Bbb{R}$ Btw, looking at this picture, I think the alternate name for this class of topologies, the British rail topology, is quite fitting (with the help of this WfSE to interpret, of course: mathematica.stackexchange.com/questions/3410/…) Since, as Leaky has noticed, the closest point to any given point (other than itself) is 0, to get from A to B, go via 0. The null line is then like a railway line which connects all the points together in the shortest time So going from a to b directly is no more efficient than going from a to 0 and then 0 to b hmm... $d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$ $d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$ so the distance of travel depends on where the starting point is. If the starting point is 0, then the distance only increases linearly for every unit increase in the value of the destination But if the starting point is nonzero, then the distance increases quadratically Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination at, say, a speed of 3 m/s, then for every meter forward the actual ground covered at 3 m/s decreases (as illustrated by the shrinking open ball of fixed radius); only when travelling via the origin does such a quadratic penalty in travelling distance not apply More interesting things can be said about slight generalisations of this metric: Hi, looking at a graph isomorphism problem from the perspective of eigenspaces of the adjacency matrix, it gets a geometrical interpretation: the question of whether two sets of points differ only by a rotation - e.g. 16 points in 6D, forming a very regular polyhedron ... To test if two sets of points differ by a rotation, I thought to describe them as an intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then a generalization of the characteristic polynomial would allow us to test if our sets differ by a rotation ... 1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, with the well-known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$.
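For the interpolation remark at the end, here is a small numerical sketch (illustrative data only): build the Vandermonde system for $p(x_i)=y_i$, solve it, and compare the determinant with the product formula (up to the sign convention of the rows):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.array([1.0, -2.0, 0.5, 3.0])

# Vandermonde matrix: row i is (1, x_i, x_i**2, ...)
V = np.vander(x, increasing=True)
coeffs = np.linalg.solve(V, y)               # coefficients of the interpolating polynomial

det_formula = np.prod([x[j] - x[i] for i in range(len(x)) for j in range(i + 1, len(x))])
print(np.isclose(np.linalg.det(V), det_formula))    # det V = prod_{i<j}(x_j - x_i)
print(np.allclose(np.polyval(coeffs[::-1], x), y))  # p(x_i) = y_i
```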
Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d... Any alg geom guys on? I know zilch about alg geom to even start analysing this question Meanwhile I am going to analyse the SR metric later using open balls after the chat proceeds a bit To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric on semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu, Apr 13 at 18:24 grr, thought I could get some more intuition in SR by using open balls tbf there's actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two. The first two statements go like this: Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$. And all it has in the proof is the assertion that the above is clearly true. I've a mesh specified as a half-edge data structure; more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k... Consider $a,b$ both irrational and the interval $[a,b]$ Assuming the axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$ It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However the rationals are countable, thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint This means we can only have countably many disjoint open intervals, so that some irrationals are not in the union, but uncountably many of them will be If I consider an open cover of the rationals in [0,1], the sum of whose lengths is less than $\epsilon$, and I now consider [0,1] with every set in that cover excluded, I have a set with no rationals, and no intervals. One way for an irrational number $\alpha$ to be in this new set is b... Suppose you take an open interval I of length 1, divide it into countably many sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals. Since all the rationals are covered, it seems that the sub-intervals (if they don't overlap) are separated by at most a single irrat... (For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with open intervals $(x-a,x+a)$ for some $a \in (0,1]$ and then noting that there are nested open intervals whose lengths tend to zero. We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$.
Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consisting of overlapping open intervals, then subtract away the overlaps to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of the lengths of such intervals for which $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi… We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deduce $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$ However, what I am interested in here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below: We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages: First denote an enumeration of the rationals as follows: $\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short: Actually wait, since, as the sequence grows, any rational of the form $\frac{p}{q}$ with $|p-q| > 1$ will lie somewhere in between two consecutive terms of the sequence, and the gaps $\frac{n+1}{n+2}-\frac{n}{n+1}$ tend to zero as $n \to \infty$, it follows that the lengths of the intervals have infimum zero However, any interval must contain uncountably many irrationals, so (somehow) the infimum for the union of them all is nonzero. Need to figure out how this works... Let's say that for $N$ clients, Lotta will take $d_N$ days to retire. For $N+1$ clients, clearly Lotta will have to make sure all the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously the client won't feel mistreated anymore. But all the first $N$ clients are now mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. And therefore we have the recurrence $d_{N+1} = 2d_N + 1$ where $d_1 = 1$. Yet we have $1 \to 2 \to 1$, which has $3 = d_2 \neq 2^2$ steps.
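A small numeric sketch of the standard cover for $\Bbb{Q}\cap[-1,1]$ (my own illustration of the construction discussed above): give the $n$-th rational in an enumeration an open interval of length $\epsilon/2^{n}$; the total length is then at most $\epsilon$ no matter how the enumeration is ordered, which is what makes $\lambda^*(\Bbb{Q}\cap[-1,1])=0$ work while the irrational case behaves completely differently.

```python
from fractions import Fraction

def rationals_in_interval(max_q):
    """Enumerate rationals p/q in [-1,1], truncated at denominator max_q (illustration only)."""
    seen = set()
    for q in range(1, max_q + 1):
        for p in range(-q, q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r

eps = 0.01
total_length = 0.0
for n, r in enumerate(rationals_in_interval(50), start=1):
    total_length += eps / 2**n      # cover the n-th rational by an interval of this length

print(total_length)                 # always < eps, however far the enumeration is continued
```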
Hyperbolic--parabolic singular perturbation for mildly degenerate Kirchhoff equations: Global-in-time error estimates 1. Dipartimento di Matematica, University of Pisa, Italy 2. Dipartimento di Matematica Applicata, University of Pisa, Italy We consider the second-order Cauchy problem $\varepsilon u_\varepsilon''+ u_\varepsilon'+m(|A^{1/2}u_\varepsilon|^2)Au_\varepsilon=0, \quad u_\varepsilon(0)=u_0,\quad u_\varepsilon'(0)=u_1,$ and the first order limit problem $u'+m(|A^{1/2}u|^2)Au=0, \quad u(0)=u_0,$ where $\varepsilon>0$, $H$ is a Hilbert space, $A$ is a self-adjoint nonnegative operator on $H$ with dense domain $D(A)$, $(u_0,u_1)\in D(A^{3/2})\times D(A^{1/2})$, and $m:[0,+\infty)\to [0,+\infty)$ is a function of class $C^1$. We prove global-in-time estimates for the difference $u_\varepsilon(t)-u(t)$ provided that $u_0$ satisfies the nondegeneracy condition $m(|A^{1/2}u_0|^2)>0$, and the function $\sigma m(\sigma^2)$ is nondecreasing in a right neighborhood of its zeroes. The abstract results apply to parabolic and hyperbolic partial differential equations with non-local nonlinearities of Kirchhoff type. Keywords: singular perturbations, degenerate damped hyperbolic equations, degenerate parabolic equations, Kirchhoff equations. Mathematics Subject Classification: 35B25, 35B40, 35L8. Citation: Marina Ghisi, Massimo Gobbino. Hyperbolic--parabolic singular perturbation for mildly degenerate Kirchhoff equations: Global-in-time error estimates. Communications on Pure & Applied Analysis, 2009, 8 (4): 1313-1332. doi: 10.3934/cpaa.2009.8.1313
Characterizing the Flow and Choosing the Right Interface Fluid flow is involved in many engineering applications. In addition to typical CFD simulations, which replace experiments in wind tunnels, flow must also be considered in the cooling of electronic devices or in the chemical industry, where reacting species are transported by a fluid. COMSOL Multiphysics offers dedicated interfaces for various flow types. When should we use the Laminar Flow or Turbulent Flow interface? The $1 Million Problem: Understanding the Nature of Flow The nature of flow is very complex and the governing equations — the Navier-Stokes equations — are numerically challenging. The British applied mathematician Sir Horace Lamb is reported to have said: “I am an old man now, and when I die and go to Heaven, there are two matters on which I hope for enlightenment. One is quantum electrodynamics and the other is the turbulent motion of fluids. And about the former I am really rather optimistic.” Maybe he was lucky and also got the answer to the latter one, but here on earth it is one of the Millennium Prize Problems of the Clay Mathematics Institute. You can be rewarded U.S. $1 million if you prove that the Navier-Stokes equations, in three dimensions, have a solution and that the solution has no singularities. The proof would help us understand the nature of turbulence, which is still the biggest challenge for CFD codes. Of course, nature always has a solution ready and we can see it in the cloud formations in the sky, the waves in the sea, and the boiling water in the pot. But we also want a numerical solution for our applications to predict and optimize their performance. For that, COMSOL Multiphysics contains a number of interfaces that solve equations derived from the Navier-Stokes equations and are suitable for different situations. Here, we want to give clarity on when the Laminar Flow and Turbulent Flow interfaces are suitable to describe some characteristic flow patterns. Characterizing the Flow After you choose the dimension, the first thing to consider in terms of your flow simulation is whether or not you need to take temperature variations into account. This determines whether you choose the Non-Isothermal Flow interface, where you solve the Navier-Stokes equations together with the heat transfer equation, or can neglect temperature variations and solve only the Navier-Stokes equations. That sounds easy enough. Deciding whether you now need to select one of the turbulent interfaces or whether the laminar approach is sufficient is not as easy. Reynolds Number and Flow Regime Technical applications mainly deal with forced flow. The fluid is set in motion by an external source, like a pump or fan, and enters the modeling domain with a certain velocity. Dimensionless numbers help describe the flow characteristics without starting a simulation. The dimensionless number characterizing the flow regime is the Reynolds number, which describes the ratio of inertial to viscous forces: $$\mathrm{Re}=\frac{\rho U d}{\mu}$$ where $\rho$ is the density, $U$ is the characteristic velocity, $d$ is the characteristic length scale, and $\mu$ is the dynamic viscosity. Here, you can see that $\mathrm{Re}$ is not uniquely defined. What are the characteristic velocities and length scales? What if the material properties depend on the temperature? At very low Reynolds numbers, $\mathrm{Re}\ll1$, the viscous forces dominate over the inertial forces. Thus, the latter may be neglected in the Navier-Stokes equations. For that, COMSOL Multiphysics contains the Creeping Flow interface.
Alternatively, if you have non-isothermal flow, you can activate the “Neglect inertial term (Stokes flow)” check-box in the associated settings window. For higher Reynolds numbers but below some critical value, $\mathrm{Re}_c$ (the critical Reynolds number), the flow is laminar. Above $\mathrm{Re}_c$, it becomes turbulent. The value for $\mathrm{Re}_c$ must be determined manually through experimental set-ups or numerical experiments for each configuration. Luckily, this has already been done for typical engineering applications and can be found in the related literature. But even if you have found the value for $\mathrm{Re}_c$ in the literature, the transition from laminar to turbulent is not immediate and there is a transitional zone where both regimes exist. In that case, it is not obvious which approach is suitable: laminar or turbulent? Rayleigh Number and Natural Convection In natural convection, the flow is driven by buoyancy forces. Typically, buoyancy forces arise due to temperature differences, but concentration gradients can also be the driving mechanism. Natural convection plays an important role in the geosciences. In Earth’s outer core, natural convection creates Earth’s magnetic field, and in the atmosphere it dictates the world’s climate. The cooling due to natural convection (in building physics or electronic devices, for example) is often modeled using heat transfer coefficients, which were determined through experiments or numerical calculations. Modeling free convection is always a coupling of the Heat Transfer and Flow interfaces. Therefore, the Non-Isothermal Flow interface is a good choice. The dimensionless number characterizing free convection is called the Rayleigh number, which is the product of the Grashof number, $\mathrm{Gr}$, the square of the ratio of viscous to buoyant timescales, and the Prandtl number, $\mathrm{Pr}$, the ratio of conductive to viscous timescales: $$\mathrm{Ra}=\mathrm{Gr}\,\mathrm{Pr}=\frac{g\,\alpha\,\Delta T\,d^3}{\nu\,\kappa}$$ where $g$ is the gravitational acceleration, $\nu$ the kinematic viscosity, $\kappa$ the thermal diffusivity, $\alpha$ the thermal expansion coefficient, $\Delta T$ the temperature difference, and $d$ is now the height of the layer over which the buoyancy forces are active. Similar to the Reynolds number, there is also a critical Rayleigh number, $\mathrm{Ra}_c$. Here, $\mathrm{Ra}<\mathrm{Ra}_c$ means that heat is only transported by conduction. At $\mathrm{Ra}=\mathrm{Ra}_c$, convection becomes the dominating heat transfer process in a steady laminar flow regime. With an increasing Rayleigh number, the steady flow becomes unstable and, finally, turbulent. Flow Regimes in Different Configurations Now, we want to discuss the different flow regimes and which interface and study type are suitable for each. 1. Flow Around a Cylinder With increasing Reynolds numbers, this type of flow develops a Kármán vortex street, which is a benchmark example for CFD validation without temperature variations. The Reynolds number for flow around a cylinder uses the diameter of the cylinder as the characteristic length while the material properties are constant (this is similar for other obstacles, too). Over a large range of Reynolds numbers, the flow field behind the obstacle forms periodically swirling vortices, as shown in the example below. Stationary velocity field for $\mathrm{Re}\approx 2$. The flow is truly laminar and has a stationary solution. This type of flow can be solved with the Laminar Flow interface and a stationary study. Time-dependent velocity field for $\mathrm{Re}\approx 100$ for 7 s. The velocity field changes in space and time.
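A back-of-the-envelope calculator for the two dimensionless numbers (a Python sketch with illustrative material properties; this is not COMSOL code):

```python
def reynolds(rho, U, d, mu):
    """Re = rho*U*d/mu: ratio of inertial to viscous forces."""
    return rho * U * d / mu

def rayleigh(g, alpha, dT, d, nu, kappa):
    """Ra = g*alpha*dT*d**3/(nu*kappa): Grashof number times Prandtl number."""
    return g * alpha * dT * d**3 / (nu * kappa)

# water in a 15 mm pipe at 1 m/s (illustrative values)
print(reynolds(rho=998.0, U=1.0, d=0.015, mu=1.0e-3))   # ~15 000, above Re_c ~ 2300 for pipe flow
# 10 cm air layer heated by 10 K (illustrative values)
print(rayleigh(g=9.81, alpha=3.4e-3, dT=10.0, d=0.1, nu=1.5e-5, kappa=2.2e-5))  # ~1e6
```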
With a proper mesh and time step size, this flow can be solved with the Laminar Flow interface and a time-dependent study. A further increase of the Reynolds number will raise the frequency of the eddies and finally result in turbulent flow. Particularly in the transition regime, 3D instabilities arise and must be resolved with a 3D laminar flow interface. Once the flow gets fully turbulent, you can switch back to 2D and use a turbulent flow interface. 2. A Shell-and-Tube Heat Exchanger The shell-and-tube heat exchanger is a common type of heat exchanger and is a typical example of non-isothermal flow with forced convection. Water flows through the tube side and air flows through the shell side of the heat exchanger. Both materials have temperature-dependent properties, which need to be considered for the calculation of the Reynolds number. The characteristic length inside the tubes is the tube diameter, but in the inlet and outlet regions, it is not clear what the characteristic length is. It is equally unclear when it comes to the air flow around the tubes and baffles. These guide the air flow, thus increasing the amount of heat transferred between the two fluids. You can refer to literature, such as this resource, to find example calculations that provide a good estimation for $\mathrm{Re}$. It is interesting to see how the Reynolds number can vary over the modeling domain. In COMSOL Multiphysics, you can plot $\mathrm{Re}$ after the simulation. To do so, add a 3D volume plot for the water domain and enter nitf.U*nitf.rho*0.015[m]/nitf.mu in the Expression field. Then, for every point, the Reynolds number is calculated from the local velocity, density, and viscosity, with the pipe diameter as the characteristic length. Where this length scale applies, the Reynolds number exceeds the critical value for flow inside pipes and is high enough for the flow to be turbulent. For this case, we use a turbulent flow interface with a stationary study. This means that we do not resolve the space- and time-dependent behavior of all eddies that may appear. Rather, we calculate a mean velocity field where the influence of the eddies on the heat exchanger’s properties is taken into account by additional variables. Streamline plot along the velocity field for the tube side. Colors indicate the Reynolds number. 3. Natural Convection in a Spherical Shell The last example originates from geophysical topics and deals with buoyancy-driven natural convection in a spherical shell (without rotation). When convection starts, it first forms stationary convection cells (Rayleigh-Bénard cells). This increases the buoyancy force and thereby $\mathrm{Ra}$, causing these cells to start to move. Finally, they break up and smaller eddies, on a shorter timescale, dominate the flow regime and turbulence arises. The animation below shows the natural convection for a spherical shell with a buoyancy force acting in the radial direction. The Navier-Stokes equations are defined with dimensionless parameters instead of material properties while the buoyancy force is expressed in terms of the Rayleigh number. The model was solved using a laminar flow approach with a time-dependent study. Starting from a linear temperature profile, buoyancy first forms symmetric convection cells, but soon an asymmetric velocity and temperature profile is obtained for $\mathrm{Ra} = 250$. Conclusion We have examined different flow regimes and can conclude that it is not always obvious which flow interface to choose.
If the dimensionless numbers $\mathrm{Re}$ or $\mathrm{Ra}$ are significantly smaller or larger than their critical value for the configuration, the choice is clear. For truly laminar flow, which is often present in microfluidic devices, you would choose the Laminar Flow interface. If $\mathrm{Re}\ll 1$, you should choose the Creeping Flow interface. Many industrial applications deal with high velocities and high Reynolds numbers, in which case a turbulent flow interface is required. Read our previous blog post for a detailed discussion about which turbulence model you should choose. After all, CFD simulations are challenging and the nature of flow is still not fully understood. The COMSOL software provides interfaces to model all flow regimes using up-to-date numerical techniques. Examples from our Model Gallery (listed below) can help you get an idea of which interface is suitable for your application. Model Downloads Natural convection with laminar flow: Buoyancy Flow in Free Fluids (the instructions contain a detailed explanation of the dimensionless parameters and their influence on the thermal behavior) Natural and forced convection with turbulent flow: Turbulent flow through a 90-degree pipe elbow
Answer $\approx 5.97$ inches Work Step by Step Given, $r=\sqrt{\frac{V}{\pi h}}$ Squaring, $r^{2}=\frac{V}{\pi h}$ Thus, $h=\frac{V}{\pi r^{2}}$, so $h=\frac{75}{4\pi}\approx 5.97$ inches.
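The same computation in a couple of lines (illustrative only):

```python
from math import pi, sqrt

V, r = 75.0, 2.0             # V = 75 cubic inches, r = 2 inches, so pi*r**2 = 4*pi
h = V / (pi * r**2)
print(round(h, 3))           # ~5.968 inches (about 5.97)
print(sqrt(V / (pi * h)))    # recovers r = 2, consistent with the given formula
```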
$\newcommand{\I}{\mathbb{I}}\newcommand{\E}{\mathbb{E}}$Given our observed variables $x = \{x_i\}$, hidden variables $z = \{z_i\}$ and distribution parameters $\theta = \{(\mu_j, \sigma_j)\}$, let's define the complete-data log-likelihood as$$\begin{align*}\ell_c(\theta) &= \sum_{i=1}^n \log \Pr[x_i, z_i|\theta] = \sum_{i=1}^n \sum_{j=1}^k \I[z_i = j] \log \Pr[x_i, z_i = j| \theta]\\&= \sum_{i=1}^n \sum_{j=1}^k \I[z_i = j] \log (\Pr[x_i|\theta_j] \Pr[z_i=j|\theta])\end{align*}$$ where $\I[\cdot]$ is the indicator function, $\Pr[x_i|\theta_j]$ is the probability of observing $x_i$ given the $j$th Gaussian and $\Pr[z_i=j|\theta]$ is the mixing weight of the $j$th distribution (i.e., the probability of observing the $j$th hidden variable). This is the model we have if we somehow knew which Gaussian generated a given data-point. We obviously don't know, so we would like to compute the expected value of this complete-data log-likelihood. That is, what is the expected outcome of this full-knowledge formulation? To do that let us define the $Q(\theta^t|\theta^{t-1})$ function as$$\begin{align*}Q(\theta^t|\theta^{t-1}) &= \E\left[\sum_{i=1}^n \sum_{j=1}^k \I[z_i = j] \log \Pr[x_i, z_i = j| \theta]\right]\\ &= \sum_{i=1}^n \sum_{j=1}^k \E[\I[z_i = j]] \log \Pr[x_i, z_i = j| \theta]\\&= \sum_{i=1}^n \sum_{j=1}^k \Pr[z_i = j | x_i] \log \Pr[x_i, z_i = j| \theta]\\&= \sum_{i=1}^n \sum_{j=1}^k \underbrace{\Pr[z_i = j | x_i]}_{r_{ij}} \log (\Pr[x_i|\theta_j]\underbrace{\Pr[z_i = j| \theta]}_{\pi_j})\\&= \sum_{i=1}^n \sum_{j=1}^k r_{ij} \log \Pr[x_i|\theta_j] + \sum_{i=1}^n \sum_{j=1}^k r_{ij} \log \pi_j.\end{align*}$$Computing $r_{ij}$ for the E-step is done by a simple application of Bayes' theorem,$$ r_{ij} = \frac{\pi_j \Pr[x_i|\theta_j^{t-1}]}{\sum_{j'=1}^k \pi_{j'} \Pr[x_i|\theta_{j'}^{t-1}]}.$$ Now for the M-step we would like to compute the mixing weights $\pi$ and parameters $\theta$ that maximize $Q$. Luckily for us, we can optimize for each independently as seen by the final formulation! By taking the derivative of $Q$ wrt $\pi$ with the Lagrange constraint that $\sum_j \pi_j = 1$ we have$$\pi_j = \frac{\sum_{i=1}^n r_{ij}}{n}.$$ In order to optimize $Q$ wrt $(\mu_j, \sigma_j)$ let's look at the part only dependent on them (ignoring some constants). $$\sum_{i=1}^n r_{ij} \log \Pr[x_i|\theta_j] =-\frac{1}{2}\sum_{i=1}^n r_{ij} \left(\log \sigma_j + \frac{(x_i - \mu_j)^2}{\sigma_j}\right)$$Using this information helps when taking the derivative of $Q$ wrt each parameter. Finally we have, $$\begin{align*}\mu_j &= \frac{\sum_{i=1}^n r_{ij} x_i}{\sum_{i=1}^n r_{ij}},&\sigma_j &= \frac{\sum_{i=1}^n r_{ij} x_i^2}{\sum_{i=1}^n r_{ij}} - \mu_j^2\end{align*}$$ (so $\sigma_j$ here plays the role of the variance of the $j$th component). In your case, $x = (1, 3, 4, 5, 7, 8, 9, 13, 14, 15, 16, 18, 23)$, $z = (z_1,\dotsc, z_{13})$, $k=2$, and $\theta = ((\mu_1, \sigma_1),(\mu_2, \sigma_2))$. EM begins by randomly initializing $\pi = (\pi_1, \pi_2)$ and $\theta$ and then alternating between E-step and M-step. Once the parameters haven't changed much (or better, the observed-data log-likelihood hasn't changed much) you can stop and output your current values $\theta^t$ and $\pi$. One benefit from using $|\ell(\theta^t) - \ell(\theta^{t-1})|$ over $||\theta^t - \theta^{t-1}||$ as a stopping condition is for bug checking. EM guarantees that the likelihood will always monotonically increase; therefore, you can simply check the values each iteration to make sure that your likelihood is greater than or equal to the last iteration's. If it isn't, then something is wrong.
Finally, one important thing to note is that computing an MLE for a mixture of Gaussians will tend to overfit. One way to overcome this is by incorporating priors for $\pi$ and $\theta$ and then computing the MAP estimate instead. This would only change the M-step; however, its derivation is dependent on which prior you choose. EDIT: Since it is clear you are more interested in just a working example than really understanding any of this, here is Python code that will run EM for a basic GMM. Running this in ipython gives: In [1]: import em In [2]: x = (1, 3, 4, 5, 7, 8, 9, 13, 14, 15, 16, 18, 23) In [3]: em.em(x, 2) Out[3]: (array([ 0.49645528, 0.50354472]), array([ 15.89753149, 5.10207907]), array([ 3.8611819 , 2.65861623])) where the outputs are the mixing weights, means for each Gaussian, and stdevs for each Gaussian.
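The em module itself is not reproduced above. Here is a minimal sketch of what such an implementation might look like (the module/function name em.em(x, k) and its return values are taken from the usage shown; everything else is an illustrative reconstruction following the E-step and M-step formulas above, not the original answer's code):

```python
# em.py -- minimal EM for a 1-D Gaussian mixture (illustrative sketch)
import numpy as np

def em(x, k, iters=200, seed=0):
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.choice(x, size=k, replace=False)   # random initial means
    var = np.full(k, x.var())                   # initial variances

    for _ in range(iters):
        # E-step: responsibilities r_ij = P(z_i = j | x_i)
        dens = np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)

        # M-step: update weights, means and variances
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * x[:, None]**2).sum(axis=0) / Nk - mu**2

    return pi, mu, np.sqrt(var)                 # weights, means, stdevs
```

With the data and k=2 from the answer this converges to weights, means and stdevs of the same flavour as the output shown above; the exact numbers depend on the initialization.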
I'm finding the integral $$\int_{0}^{\infty} \frac{\log(x)}{x^{3/4}(1+x)} dx $$ I do this by considering $$ \oint_V \frac{\log(z)}{z^{3/4}(1+z)} \,dz$$ over the closed loop shown. I take the limit as the radius of the larger circle tends to infinity and as the radius of the smaller circle tends towards zero. The integrand approaches zero when $z$ tends to infinity (along $C$, the larger circle). Assuming that the contour integral along the smaller circle tends towards zero as its radius tends towards zero, I find by the residue theorem (using the residue at $z = -1$) that: $$2\int_{0}^{\infty} \frac{\log(x)}{x^{3/4}(1+x)}\, dx + 2\pi i\int_{0}^{\infty} \frac{1}{x^{3/4}(1+x)}\, dx = \frac{-2\pi^{2}}{e^{\frac{3i\pi}{4}}} = \sqrt{2}\pi^{2}(1+i).$$ If I take the real part of both sides, I find that $I = \frac{\pi^{2}}{\sqrt{2}}$ (which is wrong; the correct value is $ I = -\sqrt{2}\pi^{2} $). Trying to find my error, my question is whether, when I take the limit of the radius of the smaller circle tending to zero, the line integral along that smaller circle really tends to zero, since I have a $\ln(r)$ term which tends to $-\infty$. Does this mean I should take an alternative contour because of the log function? I am confused by this because the textbook suggests I use the contour I have shown.
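A quick numerical cross-check of the target value (my own sanity check, not part of the question): the integral should come out to $-\sqrt{2}\pi^2 \approx -13.96$.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(x) / (x**0.75 * (1 + x))

# split at 1 to help quad with the integrable singularity at 0
I = quad(f, 0, 1, limit=200)[0] + quad(f, 1, np.inf, limit=200)[0]
print(I, -np.sqrt(2) * np.pi**2)    # both are approximately -13.9577
```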
Numerical Blow-up Solutions for Nonlinear Parabolic Equations Abstract This paper concerns the study of the numerical approximation for the following initial-boundary value problem: \begin{equation} u_t(x,t)=(u^{m}(x,t))_{xx}+\alpha u^{p}(x,t),\quad x\in(0,1),\quad t\in(0,T), \end{equation} \begin{equation} u(0,t)=0,\quad u(1,t)=0,\quad t\in(0,T), \end{equation} \begin{equation} u(x,0)=u_{0}(x)\geq 0,\quad x\in[0,1], \end{equation} where $p\geq m>1$ and $\alpha>0$. When $p=m$, we show that there exists a positive number $\alpha^{*}$ such that if $\alpha>\alpha^{*}$ then any solution of a semidiscrete form of (0.1)--(0.3) blows up in a finite time whereas if $\alpha<\alpha^{*}$ then any solution exists globally and decays to zero. We have obtained the same result using a discrete form of (0.1)--(0.3). When $p>m$, we prove that any solution of a semidiscrete form of (0.1)--(0.3) blows up in a finite time and estimate its semidiscrete blow-up time. In this last case, under some assumptions, we show that the semidiscrete blow-up time converges to the real one when the mesh size goes to zero. Finally, we give some numerical experiments to illustrate our analysis.
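For readers who want to experiment with the problem (0.1)–(0.3), here is a crude method-of-lines sketch (second-order differences in space, explicit Euler in time with a step adapted to the degenerate diffusion). It is only an illustration of the setup, with arbitrary example parameters, and not one of the schemes analyzed in the paper:

```python
import numpy as np

def blowup_demo(m=2.0, p=3.0, alpha=5.0, J=100, t_max=0.5, u_stop=1e4):
    x = np.linspace(0.0, 1.0, J + 1)
    h = x[1] - x[0]
    u = np.sin(np.pi * x)                  # a nonnegative initial datum with u(0) = u(1) = 0
    t = 0.0
    while t < t_max and u.max() < u_stop:
        # time step respecting the parabolic CFL condition for the degenerate diffusion
        dt = 0.2 * h**2 / (m * u.max() ** (m - 1) + 1e-12)
        v = u ** m
        lap = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h**2   # (u^m)_xx at interior nodes
        u[1:-1] += dt * (lap + alpha * u[1:-1] ** p)
        u[0] = u[-1] = 0.0                              # Dirichlet boundary conditions
        t += dt
    return t, u.max()

# the time at which the discrete solution exceeds u_stop crudely approximates the blow-up time
print(blowup_demo())
```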
EDIT: Philip Ball has updated his article on Nature News, correcting the most serious of its errors. While everyone makes mistakes, few actually admit to them, so I think this action is rather praiseworthy. Correspondingly, I’m removing criticism of that mistake in my post. Recently I have read an excellent essay by Philip Ball on the measurement problem: clear, precise, non-technical, free of bullshit and mysticism. I was impressed: a journalist managed to dispel confusion about a theme that even physicists themselves are confused about. It might be worth checking out what this guy writes in the future. I was not so impressed, however, when I saw his article about quantum teleportation, reporting on Jian-Wei Pan's group's amazing feat of teleporting a quantum state from a ground station to a satellite. While Philip was careful to note that nothing observable is going on faster than light, he still claims that something unobservable is going on faster than light, and that there is some kind of conspiracy by Nature to cover that up. This is not only absurd on its face, but also needs the discredited notion of wavefunction collapse to make sense, which Philip himself noted was replaced by decoherence as a model of how measurements happen. For these reasons, very few physicists still take this description of the teleportation protocol seriously. It would be nice if the media would report on the current understanding of the community instead of repeating misconceptions from the 90s. But enough ranting. I think the best way to counter the spreading of misinformation about quantum mechanics is not to just criticize people who get it wrong, but instead to give the correct explanation of the phenomena. I’m going to explain it twice, first in a non-technical way in the hope of helping interested laypeople, and then in a technical way, for people who do know quantum mechanics. So, without further ado, here’s how quantum teleportation actually works (this is essentially Deutsch and Hayden‘s description): Alice has a quantum bit, which she wants to transmit to Bob. Quantum bits are a bit like classical bits as they can be in the states 0 or 1 (and therefore used to store information like blogs or photos1), and entirely unlike classical bits as they can also be in a superposition of 0 and 1. Now if Alice had a classical bit, it would be trivial to transmit it to Bob: she would just use the internet. But the internet cannot handle superpositions between 0 and 1: if you tried to send a qubit via the internet you would lose this superposition information (the Dutch are working on this, though). To preserve this superposition information Alice would need an expensive direct optical fibre connection to Bob’s place, which we assume she doesn’t have. What can she do? She can try to measure this superposition information, record it in classical bits, and transmit those via the internet. But superposition information is incredibly finicky: if Alice has only one copy of the qubit, she cannot obtain it. She can only get a good approximation to it if she measures several copies of the qubit. Which she might not have, or even if she does, it will be only an approximation to her qubit, not the real deal. So again, what can she do? That’s where quantum teleportation comes in. If Alice and Bob share a Bell state (a kind of entangled state), they can use it to transmit this fragile superposition information perfectly.
Alice needs to do a special kind of measurement — called a Bell basis measurement — on the qubit she wants to transmit together with her part of the Bell state. Now, this is where everyone’s brains melt and all the faster-than-light nonsense comes from. It appears that after Alice does her measurement the part of the Bell state that belongs to Bob instantaneously becomes the qubit Alice wanted to send, just with some error that depends on her measurement result. In order to correct the error, Bob then needs to know Alice’s measurement result, which he can only find out after a light signal has had time to propagate from her lab to his. So it is as if Nature did send the qubit faster than light, but cleverly concealed this fact with this error, just so that we wouldn’t see any violation of relativity. Come on. Trying to put ourselves back in the centre of the universe, are we? Anyway, this narrative only makes sense if you believe in some thoroughly discredited interpretations of quantum mechanics2. If you haven’t kept your head buried in the sand in the last decades, you know that measurements work through decoherence: Alice’s measurement is not changing the state of Bob in any way. She is just entangling her qubit with the Bell state and herself and anything else that comes in the way. And this entanglement spreads just through normal interactions: photons going around, molecules colliding with each other. Everything very decent and proper, nothing faster than light. Now, in this precious moment after she has done her measurement and before this cloud of decoherence has had time to spread to Bob’s place, we can compare the silly story told in the previous paragraph with reality. We can compute the information about Alice’s qubit that is available in Bob’s place, and see that it is precisely zero. Nature is not trying to conceal anything from us, it is just a physical fact that the real quantum state that describes Alice and Bob’s systems is a complicated entangled state that contains no information about Alice’s qubit at Bob’s end. But the cool thing about quantum teleportation is that if Bob knows the measurement result he is able to sculpt Alice’s qubit out of this complicated entangled state. But he doesn’t, because the measurement result cannot get to him faster than light. Now, if we wait a couple of nanoseconds more, the cloud of decoherence hits Bob, and then we are actually in the situation where Bob’s part of the Bell state has become Alice’s qubit, modulo some easily correctable error. But now there is no mystery to it: the information got there via decoherence, no faster than light. Now, for the technical version: Alice has a qubit $\ket{\Gamma} = \alpha\ket{0} + \beta\ket{1}$, which she wishes to transmit to Bob, but she does not have a good noiseless quantum transmission channel that she can use, just a classical one (aka the Internet). So what can they do? Luckily they have a maximally entangled state $\ket{\phi^+} = \frac1{\sqrt2}(\ket{00}+\ket{11})$ saved from the time when they did have a good quantum channel, so they can just teleport $\ket{\Gamma}$.
To do that, note that the initial state they have, written in the order Alice’s state, Alice’s part of $\ket{\phi^+}$, and Bob’s part of $\ket{\phi^+}$, is \[ \ket{\Gamma}\ket{\phi^+} = \frac{1}{\sqrt2}( \alpha\ket{000}+\alpha\ket{011} + \beta\ket{100} + \beta\ket{111}), \] and if we rewrite the first two subsystems in the Bell basis we obtain \[ \ket{\Gamma}\ket{\phi^+} = \frac{1}{2}( \ket{\phi^+}\ket{\Gamma} + \ket{\phi^-}Z\ket{\Gamma} + \ket{\psi^+}X\ket{\Gamma} + \ket{\psi^-}XZ\ket{\Gamma}),\] so we see that conditioned on Alice’s state being a Bell state, Bob’s state is just a simple function of $\ket{\Gamma}$. Note that at this point nothing was done to the quantum system, so Bob’s state did not change in any way. If we calculate the reduced density matrix at his lab, we see that it is the maximally mixed state, which contains no information about $\ket{\Gamma}$ whatsoever. Now, clearly we want Alice to measure her subsystems in the Bell basis to make progress. She does that, first applying an entangling operation to map the Bell states to the computational basis, and then she makes the measurement in the computational basis.3 After the entangling operation, the state is \[ \frac{1}{2}( \ket{00}\ket{\Gamma} + \ket{01}Z\ket{\Gamma} + \ket{10}X\ket{\Gamma} + \ket{11}XZ\ket{\Gamma}),\] and making a measurement in the computational basis — for now modelled in a coherent way — and storing the result in two extra qubits results in the state \[ \frac{1}{2}( \ket{00}\ket{00}\ket{\Gamma} + \ket{01}\ket{01}Z\ket{\Gamma} + \ket{10}\ket{10}X\ket{\Gamma} + \ket{11}\ket{11}XZ\ket{\Gamma}).\] Now something was done to this state, but still there is no information at Bob’s: his reduced density matrix is still the maximally mixed state. Looking at this entangled state, though, we see that if Bob applies the operations $\mathbb{I}$, $X$, $Z$, or $ZX$ to his qubit conditioned on the measurement result he will extract $\ket{\Gamma}$ from it. So Alice simply sends the qubits with the measurement result to Bob, who uses it to get $\ket{\Gamma}$ on his side, the teleportation protocol is over, and Alice and Bob lived happily ever after. Nothing faster than light happened, and the information from Alice to Bob clearly travelled through the qubits with the measurement results. The interesting thing we saw was that by expending one $\ket{\phi^+}$ and by sending two classical bits we can transmit one quantum bit. Everything ok? No, no, no, no, no!, you complain. What was this deal about modelling a measurement coherently? This makes no sense, measurements must by definition cause lots of decoherence! Indeed, we’re getting there. Now with decoherence, the state after the measurement in the computational basis is \[ \frac{1}{2}( \ket{E_{00}}\ket{00}\ket{00}\ket{\Gamma} + \ket{E_{01}}\ket{01}\ket{01}Z\ket{\Gamma} + \ket{E_{10}}\ket{10}\ket{10}X\ket{\Gamma} + \ket{E_{11}}\ket{11}\ket{11}XZ\ket{\Gamma}),\] where $\ket{E_{ij}}$ is the state of the environment, labelled according to the result of the measurement. You see that there is no collapse of the wavefunction4: in particular Bob’s state is in the same entangled superposition as before, and his reduced density matrix is still the maximally mixed state.
Moreover, like any physical process, decoherence spreads at most as fast as the speed of light, so even after Alice has been engulfed by the decoherence and has obtained a definite measurement result, Bob will still for some time remain unaffected by it, with the state still being adequately described by the above superposition. Only after a relativity-respecting time interval will he become engulfed as well, coherence will be killed, and the state relative to him and Alice will be adequately described by (e.g.) \[ \ket{E_{10}}\ket{10}\ket{10}X\ket{\Gamma}.\] Now we are in the situation people usually describe: his qubit is in a definite state, and he merely does not know which it is. Alice then sends him the measurement result — 10 — via the Internet, from which he deduces that he needs to apply the operation $X$ to recover $\ket{\Gamma}$, and now the teleportation protocol is truly over.
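To make the two calculations above concrete, here is a small numpy sketch (my own illustration, using the usual CNOT-then-Hadamard convention for the Bell-basis rotation, which labels the outcomes slightly differently from the post): it builds $\ket{\Gamma}\ket{\phi^+}$, checks that Bob's reduced density matrix is maximally mixed before any classical communication, and then verifies that the conditional corrections recover $\ket{\Gamma}$.

```python
import numpy as np

# single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

gamma = np.array([0.6, 0.8j])                     # the qubit |Gamma> = 0.6|0> + 0.8i|1>
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)    # |phi+> shared by Alice (qubit 1) and Bob (qubit 2)
psi = np.kron(gamma, phi_plus)                    # order: Alice's Gamma, Alice's half, Bob's half

# Bell-basis rotation on Alice's two qubits: CNOT (control 0, target 1), then H on qubit 0
CNOT = np.kron(np.diag([1, 0]), I2) + np.kron(np.diag([0, 1]), X)
psi = np.kron(CNOT, I2) @ psi
psi = np.kron(np.kron(H, I2), I2) @ psi

# Bob's reduced density matrix before any classical communication: maximally mixed
rho = np.outer(psi, psi.conj()).reshape(4, 2, 4, 2)
rho_bob = np.einsum('aiaj->ij', rho)
print(np.allclose(rho_bob, I2 / 2))               # True: no information about Gamma at Bob's side yet

# for each measurement outcome (a, b) on Alice's qubits, Bob applies Z^a X^b
for a in (0, 1):
    for b in (0, 1):
        block = psi.reshape(4, 2)[2 * a + b]      # Bob's (unnormalized) state given outcome (a, b)
        block = block / np.linalg.norm(block)
        corrected = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b) @ block
        print((a, b), np.isclose(abs(np.vdot(corrected, gamma)), 1.0))   # Gamma recovered
```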
Conjugacy classes of finite index subgroups of the modular group $\Gamma = PSL_2(\mathbb{Z}) $ are determined by a combinatorial gadget: a modular quilt. By this we mean a finite connected graph drawn on a Riemann surface such that its vertices are either black or white. Moreover, every edge in the graph connects a black to a white vertex and the valency (that is, the number of edges incident to a vertex) of a black vertex is either 1 or 2, that of a white vertex is either 1 or 3. Finally, for every white vertex of valency 3, there is a prescribed cyclic order on the edges incident to it. On the left is a modular quilt consisting of 18 numbered edges (some vertices and edges re-appear) which gives a honeycomb tiling on a torus. All white vertices have valency 3 and the order of the edges is given by walking around a point in the counterclockwise direction. For example, the order of the edges at the top left vertex (which re-appears at the middle right vertex) can be represented by the 3-cycle (6,11,14), that around the central vertex gives the 3-cycle (2,7,16). As we will see _another time_, the modular group $\Gamma $ is generated by the two elements $U=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}~\qquad~V=\begin{bmatrix} 1 & 1 \\ -1 & 0 \end{bmatrix} $ and remark that $U^2=V^3=1 $ (in fact $\Gamma$ is the free product of the cyclic groups they generate). To a modular quilt having d edges we can associate a transitive permutation representation of $\Gamma $ on d letters (the labels of the edges) such that the action of U is given by the order two permutation given by the product of all 2-cycles of edges incident to black 2-valent vertices and the action of V is given by the order three permutation given by the cyclic ordering of edges around white 3-valent vertices in the quilt. For the example above we have $U \rightarrow (1,7)(2,15)(3,9)(4,17)(5,11)(6,13)(8,18)(10,14)(12,16) $ $V \rightarrow (1,13,8)(2,7,16)(3,15,10)(4,9,18)(5,17,12)(6,11,14) $ The (index d) subgroup of $\Gamma $ corresponding to the modular quilt is then the stabilizer subgroup of a fixed edge. Note that choosing a different edge gives a conjugate subgroup. Conversely, given an index d subgroup G we can label the d left-cosets in $\Gamma/G $ by the numbers 1,2,…,d and describe the action of left multiplication by U and V on the cosets as permutations in the symmetric group $S_d $. Because U has order two, its permutation will be a product of 2-cycles, which we can interpret as giving the information on edges incident to 2-valent black vertices. Similarly, V has order three and hence its permutation consists of 3-cycles giving the ordering of edges around 3-valent white vertices. Edges not appearing in U (resp. V) have as their leaf-vertex a black (resp. white) vertex of valency 1. Because the permutation action is transitive, this procedure gives a connected graph on d edges, d white and d black vertices and is a modular quilt. In order to connect modular quilts to special hyperbolic polygons we need the intermediate concept of cuboid tree diagrams. These are trees (that is, connected graphs without cycles) such that all internal vertices are 3-valent (and have an order on the incident edges) and the leaf-vertices are tinted either red or blue. In addition, there is an involution on the red vertices. The tree on the left is a cuboid tree, the involution interchanges the two top red vertices (indicated by having the same number).
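The two permutations from the honeycomb-quilt example are easy to sanity-check by machine (a small sympy sketch, with the cycles copied from above and shifted to 0-based labels): $U$ should have order 2, $V$ order 3, and the action should be transitive on the 18 edges since the quilt is connected.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# 0-based versions of the cycles for U and V given above (edge i becomes i-1)
U = Permutation([[0, 6], [1, 14], [2, 8], [3, 16], [4, 10], [5, 12], [7, 17], [9, 13], [11, 15]], size=18)
V = Permutation([[0, 12, 7], [1, 6, 15], [2, 14, 9], [3, 8, 17], [4, 16, 11], [5, 10, 13]], size=18)

G = PermutationGroup([U, V])
print(U.order(), V.order())       # 2 3
print(G.is_transitive())          # expected True: the quilt is connected
```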
We associate to such a cuboid tree diagram a modular quilt by taking as the white vertices all internal vertices together with the blue leaf-vertices, and as the black vertices the midpoints of internal edges, together with the midpoints of edges connecting a blue leaf-vertex, together with all red leaf-vertices. If two red leaf-vertices correspond under the involution, we glue the corresponding black vertices together. That is, the picture on the right is the resulting modular quilt. Conversely, starting with a modular quilt we can always construct from it a cuboid tree diagram by breaking cycles in black vertices until there are no cycles left. All black leaf-vertices in the resulting tree are tinted red and correspond under the involution when they came from the same black quilt-vertex. Remaining leaf-vertices are tinted blue. All internal black vertices are removed (and the edges incident to them glued into larger edges) and all internal white vertices become the internal vertices of the cuboid tree. While a cuboid tree diagram determines the modular quilt uniquely, there are in general several choices of breaking up cycles in a modular quilt, so also several cuboid tree diagrams determining the same modular quilt. That is, we have shown that there are natural maps cuboid tree —->> modular quilt <----> conjugacy class of finite index subgroup where the first map is finite to one and the second map is a bijection. Observe that we can also use modular quilts (or their associated cuboid trees) as a mnemonic device to remember the construction of groups generated by an order two and an order three element and having a low-dimensional faithful permutation representation. For example, the sporadic simple Mathieu group $M_{12} $ has a 12-dimensional permutation representation encoded by the above left quilt, which we call the M(12) quilt. That is, $M_{12} $ is generated by the two permutations (1,2)(3,4)(5,6)(7,8)(9,10)(11,12) and (1,2,3)(4,7,5)(8,9,11) Hence the cuboid tree on the right can be called the M(12) tree. Similarly, the sporadic simple Mathieu group $M_{24} $ has a 24-dimensional permutation representation which can be represented by the modular quilt, the M(24) quilt That is, $M_{24} $ is generated by the permutations (1,2,3)(4,5,7)(8,9,15)(10,11,13)(16,17,19)(20,21,23) and (1,2)(3,4)(5,6)(7,8)(9,10)(11,12)(13,14)(15,16)(17,18)(19,20)(21,22)(23,24) with corresponding M(24) tree with the two red vertices interchanging under the involution. References Ravi S. Kulkarni, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math. 113 (1991) 1053-1133
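The $M_{12}$ claim can also be checked by machine (a sympy sketch; the generators are the two permutations quoted above, shifted to 0-based labels, and according to the post the group they generate should have order $|M_{12}| = 95040$):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# 0-based versions of (1,2)(3,4)(5,6)(7,8)(9,10)(11,12) and (1,2,3)(4,7,5)(8,9,11)
u = Permutation([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]], size=12)
v = Permutation([[0, 1, 2], [3, 6, 4], [7, 8, 10]], size=12)

G = PermutationGroup([u, v])
print(G.order())    # the post's claim is that this equals 95040 = |M_12|
```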
An Inequality from Morocco, with a Proof, or Is It? Statement $1\le a,b,c,d\le 2.$ Prove that $4|(a-b)(b-c)(c-d)(d-a)|\le abcd.$ Solution Without loss of generality, we may assume $a\le b\le c\le d.$ If a pair of (cyclically) successive numbers is equal, the inequality obviously holds. So assume $1\le a\lt b\lt c\lt d\le 2.$ Consider the polynomial $f(x)=(a-x)(x-c).$ It attains its maximum at $\displaystyle x=\frac{a+c}{2},$ which is equal to $\displaystyle f\left(\frac{a+c}{2}\right)=\frac{(c-a)^2}{4}.$ This is the maximal value $|f(x)|$ attains in the interval $[a,c].$ Since $b\in (a,c),$ it follows that $\displaystyle |(a-b)(b-c)|\le\frac{(c-a)^2}{4}\le\frac{1}{4}.$ On the other hand, the difference of any two numbers in the interval $[1,2]$ does not exceed $1,$ which implies $|(c-d)(d-a)|\le 1.$ Combining the two inequalities we obtain $\displaystyle |(a-b)(b-c)(c-d)(d-a)|\le\frac{1}{4}\cdot 1\le\frac{abcd}{4}.$ What's wrong? The problem imposes a certain order on the numbers $a,b,c,d$ that assumes nothing about their magnitudes. The additional assumption $a\le b\le c\le d$ may simply not be true (which is in fact what happens).
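A quick random check of the statement itself (not a proof, just an experiment): sample $a,b,c,d\in[1,2]$ and look at the largest observed value of $4|(a-b)(b-c)(c-d)(d-a)|/(abcd)$.

```python
import random

worst = 0.0
for _ in range(10**6):
    a, b, c, d = (random.uniform(1.0, 2.0) for _ in range(4))
    ratio = 4 * abs((a - b) * (b - c) * (c - d) * (d - a)) / (a * b * c * d)
    worst = max(worst, ratio)

print(worst)   # stays <= 1 in these experiments, consistent with the inequality
```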
Is there anything reliable known about who actually discovered the Chebyshev polynomials and what the motivation and circumstances were? The reason why I am interested in knowing is that I needed a solution for a variant of those polynomials: instead of all extrema having the same magnitude, I wanted to have them attain predefined values in a fixed order (I have found a solution for that problem, but it involves a system of polynomial equations), and I wonder whether the definition of the Chebyshev polynomials has been "guessed" or developed for a specific problem. Edit: at the request of @Hans, here is a formal definition of my problem: given a sequence $(y_1,\dots,y_{n-1})$ of values with $(y_{i+2}-y_{i+1})(y_{i+1}-y_i)<0$, determine a polynomial $p(x)$ of degree $n$ and $n-1$ abscissas $\xi_1 <\dots<\xi_{n-1}$ such that $p(\xi_i)=y_i$ and $p'(\xi_i)=0$. It should be noted that the polynomials that I am looking for have no special properties, except for the predefined values at the extrema. The leading coefficient can be set to $1$ and the constant term to $0$. Construction of polynomials with a predefined sequence of function values at the local extrema: we can w.l.o.g. assume that the sought polynomial has leading coefficient $1$, a local extremum at the origin, and that all other local extrema are located at positive abscissas. Then the polynomial is $$p(x) =\frac{1}{n}\int x\prod_{i=2}^{n-1}(x-\xi_i)\,dx$$ and $$p(\xi_i)=y_i$$ would be a system of polynomial equations for determining the $\xi_i$ and thus $p(x)$; the only problem being that, because of the symmetry, in the current formulation there is no control over the ordering of the $\xi_i$ (and hence over which extremum attains which $y_i$). That can however easily be fixed by defining $$\xi_k=\sum_{i=2}^{k}a_i^2$$ and solving the system of polynomial equations $$p\left(\sum_{i=2}^{k}a_i^2\right)=y_k$$
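For the smallest nontrivial case $n=3$ the system can be solved explicitly; here is a short sketch (my own worked example, with the normalizations $\xi_1=0$ and constant term $0$, hence $y_1=0$, a prescribed minimum value $y_2=-1$, and the overall scale of the leading coefficient left as it comes out of the integral):

```python
import sympy as sp

x, xi = sp.symbols('x xi', positive=True)

# degree-3 polynomial with critical points at 0 and xi and constant term 0
p = sp.integrate(x * (x - xi), x)                    # x**3/3 - xi*x**2/2

y2 = -1                                              # prescribed value at the second extremum
xi_val = sp.solve(sp.Eq(p.subs(x, xi), y2), xi)[0]   # p(xi) = -xi**3/6 = -1  =>  xi = 6**(1/3)
p_final = p.subs(xi, xi_val)

print(xi_val)                                        # 6**(1/3)
print(sp.simplify(sp.diff(p_final, x).subs(x, 0)),
      sp.simplify(sp.diff(p_final, x).subs(x, xi_val)),
      sp.simplify(p_final.subs(x, xi_val)))          # 0 0 -1
```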
Consider a system of \(N\) classical particles. The particles are confined to a particular region of space by a "container" of volume \(V\). The particles have a finite kinetic energy and are therefore in constant motion, driven by the forces they exert on each other (and any external forces which may be present). At a given instant in time \(t\), the Cartesian positions of the particles are \(r_1(t), \ldots , r_N(t)\). The time evolution of the positions of the particles is then given by Newton's second law of motion: \[ m_i \ddot {r} _i = F_i ( r_1, \cdots , r_N ) \] where \(F_1, \cdots , F_N \) are the forces on each of the \(N\) particles due to all the other particles in the system. The notation \(\ddot {r} _i\) means \(\frac {d^2 r_i}{dt^2}\). Newton's equations of motion constitute a set of \(3N\) coupled second order differential equations. In order to solve these, it is necessary to specify a set of appropriate initial conditions on the coordinates and their first time derivatives, \( \{r_1 (0), \cdots , r_N(0), \dot {r} _1 (0), \cdots , \dot {r} _N (0) \} \). Then, the solution of Newton's equations gives the complete set of coordinates and velocities for all time \(t\).
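A minimal illustration of integrating these equations numerically (a velocity-Verlet sketch for a single particle; the harmonic force law and all parameters are made up for the example and are not part of the text above):

```python
import numpy as np

def velocity_verlet(r0, v0, force, m=1.0, dt=1e-3, steps=10000):
    """Integrate m * r'' = F(r) with the velocity-Verlet scheme."""
    r, v = np.array(r0, float), np.array(v0, float)
    traj = [r.copy()]
    f = force(r)
    for _ in range(steps):
        r = r + v * dt + 0.5 * (f / m) * dt**2
        f_new = force(r)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
        traj.append(r.copy())
    return np.array(traj)

# a harmonic force F = -k*r as a stand-in for the interparticle forces
k = 1.0
traj = velocity_verlet(r0=[1.0, 0.0, 0.0], v0=[0.0, 1.0, 0.0], force=lambda r: -k * r)
print(traj[-1])   # stays on a bounded orbit, as expected for a conservative force
```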
I want to show that if $(x,y)$ is a solution to the negative Pell equation ($x^2-dy^2=-1$), then $\frac{x}{y}$ is a convergent of the continued fraction expansion of $\sqrt{d}.$ I think it's easier to see the connection in the other direction. Here's a slightly imprecise way to see this. Let $[a_0; a_1, a_2, \dots]$ be the regular continued fraction of $\sqrt{d}$. Cutting this infinite expression off at $a_m$ gives the convergent $$\frac{h_m}{k_m} \approx \sqrt{d},$$ which gives the best approximation by any rational with denominator less than or equal to $k_m$. So $$h_m \approx \sqrt{d}k_m,$$ $$h_m^2 \approx dk_m^2,$$ $$h_m^2 - dk_m^2 \approx 0.$$ But this is an integer expression, so being as close to $0$ as possible without actually equaling $0$ means $|h_m^2 - dk_m^2| = 1.$ So $(h_m, k_m)$ satisfies the Pell equation. The fact that the convergents give the "best possible rational approximation" corresponds to the minimality condition on $x$ and $y$ in the Pell equation! This is also the fact that allows us to (sometimes) use the Pell equation to find the fundamental unit of the real quadratic number field $\mathbb{Q}[\sqrt{d}]$.
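A small computational illustration of this correspondence (my own sketch): compute the continued-fraction convergents of $\sqrt{d}$ with the standard recurrence and evaluate $h_m^2 - d k_m^2$ along the way.

```python
from math import isqrt

def convergents_of_sqrt(d, count=12):
    """Convergents of the regular continued fraction of sqrt(d), d not a perfect square."""
    a0 = isqrt(d)
    m, q, a = 0, 1, a0
    h_prev, h = 1, a0
    k_prev, k = 0, 1
    for _ in range(count):
        yield h, k, h * h - d * k * k
        m = q * a - m                 # standard recurrence for the partial quotients
        q = (d - m * m) // q
        a = (a0 + m) // q
        h, h_prev = a * h + h_prev, h # convergent recurrences
        k, k_prev = a * k + k_prev, k

for h, k, val in convergents_of_sqrt(13):
    print(h, k, val)   # x**2 - 13*y**2 = -1 appears at the convergent 18/5 (18**2 - 13*25 = -1)
```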
We saw that the icosahedron can be constructed from the alternating group $A_5 $ by considering the elements of a conjugacy class of order 5 elements as the vertices and edges between two vertices if their product is still in the conjugacy class. This description is so nice that one would like to have a similar construction for the buckyball. But, the buckyball has 60 vertices, so they surely cannot correspond to the elements of a conjugacy class of $A_5 $. But, perhaps there is a larger group, somewhat naturally containing $A_5 $, having a conjugacy class of 60 elements? This is precisely the statement contained in Galois’ last letter. He showed that 11 is the largest prime p such that the group $L_2(p)=PSL_2(\mathbb{F}_p) $ has a (transitive) permutation representation on p elements. For p=11 the group $L_2(11) $ is of order 660, so its permuting 11 elements means that this set must be of the form $X=L_2(11)/A $ with $A \subset L_2(11) $ a subgroup of 60 elements… and it turns out that $A \simeq A_5 $… Actually there are TWO conjugacy classes of subgroups isomorphic to $A_5 $ in $L_2(11) $ and we have already seen one description of these using the biplane geometry (one class is the stabilizer subgroup of a ‘line’, the other the stabilizer subgroup of a point). In the very same paper containing the first depiction of the Dedekind tessellation, Klein found that there should be a degree 11 cover $\mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} $ with monodromy group $L_2(11) $, ramified only in the three points $\{ 0,1,\infty \} $ such that there is just one point lying over $\infty $, seven over 1 of which four are points where two sheets come together, and finally 5 points lying over 0 of which three are points where three sheets come together. In 1879 he wanted to determine this cover explicitly in the paper “Ueber die Transformationen elfter Ordnung der elliptischen Funktionen” (Math. Annalen) by describing all Riemann surfaces with this ramification data and picking out those with the correct monodromy group. He manages to do so by associating to all these covers their ‘dessins d’enfants’ (which he calls Linienzuges), that is the pre-image of the interval [0,1] in which he marks the preimages of 0 by a bullet and those of 1 by a +, such as in the innermost darker graph on the right above. He even has these two wonderful pictures explaining how the 11 sheets fit together. (More examples of dessins and the correspondences of sheets were drawn in the 1878 paper.) The ramification data translates to the following statements about the Linienzuges: (a) it must be a tree ($\infty $ has one preimage), (b) there are exactly 11 (half)edges (the degree of the cover), (c) there are 7 +-vertices and 5 o-vertices (the preimages of 1 and 0, respectively) and (d) there are 3 trivalent o-vertices and 4 bivalent +-vertices (the sheet-information). Klein finds that there are exactly 10 such dessins and lists them in his Fig. 2 (left). Then, he claims that only the two dessins of type I give the correct monodromy group. Recall that the monodromy group is found by giving each of the half-edges a number from 1 to 11 and looking at the permutation $\tau $ of order two pairing the half-edges adjacent to a +-vertex and the order three permutation $\sigma $ listing the half-edges by cycling counter-clockwise around an o-vertex. The monodromy group is the group generated by these two elements.
For example, if we label the type V-dessin by the numbers of the white regions bordering the half-edges (as in the picture Fig. 3 on the right above) we get $\sigma = (7,10,9)(5,11,6)(1,4,2) $ and $\tau=(8,9)(7,11)(1,5)(3,4) $. Nowadays, it is a matter of a few seconds to determine the monodromy group using GAP and we verify that this group is $A_{11} $. Of course, Klein didn’t have GAP at his disposal, so he had to rule out all these cases by hand.

gap> g:=Group((7,10,9)(5,11,6)(1,4,2),(8,9)(7,11)(1,5)(3,4));
Group([ (1,4,2)(5,11,6)(7,10,9), (1,5)(3,4)(7,11)(8,9) ])
gap> Size(g);
19958400
gap> IsSimpleGroup(g);
true

Klein used the fact that $L_2(11) $ only has elements of orders 1,2,3,5,6 and 11. So, in each of the remaining cases he had to find an element of a different order. For example, in type V he verified that the element $\tau.(\sigma.\tau)^3 $ is equal to the permutation (1,8)(2,10,11,9,6,4,5)(3,7) and consequently is of order 14. Perhaps Klein knew this but GAP tells us that the monodromy group of all the remaining 8 cases is isomorphic to the alternating group $A_{11} $ and in the two type I cases is indeed $L_2(11) $. Anyway, the two dessins of type I correspond to the two conjugacy classes of subgroups $A_5 $ in the group $L_2(11) $. But, back to the buckyball! The upshot of all this is that we have the group $L_2(11) $ containing two classes of subgroups isomorphic to $A_5 $ and the larger group $L_2(11) $ does indeed have two conjugacy classes of order 11 elements containing exactly 60 elements (compare this to the two conjugacy classes of order 5 elements in $A_5 $ in the icosahedral construction). Can we construct the buckyball out of such a conjugacy class? To start, we can identify the 12 pentagons of the buckyball from a conjugacy class C of order 11 elements. If $x \in C $, then so are $x^3,x^4,x^5 $ and $x^9 $, whereas the powers ${ x^2,x^6,x^7,x^8,x^{10} } $ belong to the other conjugacy class. Hence, we can divide our 60 elements into 12 subsets of 5 elements and taking an element x in each of these, the vertices of a pentagon correspond (in order) to $~(x,x^3,x^9,x^5,x^4) $. Group-theoretically this follows from the fact that the factorgroup of the normalizer of x modulo the centralizer of x is cyclic of order 5 and this group acts naturally on the conjugacy class of x with orbits of size 5. Finding out how these pentagons fit together using hexagons is a lot subtler… and in The graph of the truncated icosahedron and the last letter of Galois Bertram Kostant shows how to do this. Fix a subgroup isomorphic to $A_5 $ and let D be the set of all its order 2 elements (recall that they form a full conjugacy class in this $A_5 $ and that there are precisely 15 of them). Now, the startling observation made by Kostant is that for our order 11 element $x $ in C there is a unique element $a \in D $ such that the commutator $b=[x,a]=x^{-1}a^{-1}xa $ belongs again to D. The unique hexagonal side having vertex x connects it to the element $b.x $ which belongs again to C as $b.x=(ax)^{-1}.x.(ax) $. Concluding, if C is a conjugacy class of order 11 elements in $L_2(11) $, then its 60 elements can be viewed as corresponding to the vertices of the buckyball. Any element $x \in C $ is connected by two pentagonal sides to the elements $x^{3} $ and $x^4 $ and one hexagonal side connecting it to $\tau x = b.x $.
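For readers without GAP, the same computation can be reproduced with SymPy's permutation groups (a small sketch on my part, not from the original post):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Klein's type V dessin: the two permutations acting on the 11 half-edges
sigma = Permutation([[7, 10, 9], [5, 11, 6], [1, 4, 2]], size=12)  # around o-vertices
tau   = Permutation([[8, 9], [7, 11], [1, 5], [3, 4]], size=12)    # around +-vertices

G = PermutationGroup([sigma, tau])
print(G.order())            # 19958400 = |A_11|, so not L_2(11), which has order 660

# the element Klein used to rule this case out: tau.(sigma.tau)^3 has order 14,
# while L_2(11) has no elements of order 14
elt = tau * (sigma * tau)**3
print(elt.order())          # 14
```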
I have this second-order differential equation: $$x''(t) + \frac{1}{(\tau + t)}x'(t) + k^2x(t) = 0$$ I want to make the solution to this ODE amenable to a closed-form Bessel function, and so a suggested way is to make a change of variables so that we can compare the differential equation above to the transformation equation below (where this $x$ is analogous to my $t$, and this $y$ is analogous to my $x(t)$). Transformation equation: The goal (or at least the way I did it for a simple function) was to compare and identify what values the parameters $\alpha, \beta, C, m$ must have so that the form of the differential equation is captured by a Bessel function that makes use of these parameters (such as a linear combination of $x^{\alpha}J_m(Cx^{\beta})$ and $x^{\alpha}Y_m(Cx^{\beta})$). This method allowed me to solve a simple equation like the Airy equation. But if I try to do that in this case, the moment I divide the boxed equation on both sides by $x^2$, you get a $\frac{1}{x}$ as the coefficient for the first-derivative term, which doesn't match the form of my differential equation's 2nd term (which has $\frac{1}{\tau + t}$ as its coefficient). I am wondering if I am missing something here, or perhaps there's an intermediary step that's required before I can use this method. Ultimately, I just need a solution to that differential equation that is represented as a Bessel function.
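One way to see the missing step (an observation added here, not part of the original question): the shift $s = \tau + t$ turns the equation into $\frac{d^2x}{ds^2} + \frac{1}{s}\frac{dx}{ds} + k^2 x = 0$, which is Bessel's equation of order zero in the variable $ks$, so the general solution is $x(t) = A\,J_0\big(k(\tau+t)\big) + B\,Y_0\big(k(\tau+t)\big)$. The sketch below checks this numerically with SciPy; the parameter values are arbitrary test choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import j0, j1

k, tau = 2.0, 1.5          # arbitrary test values (assumed, not from the question)

# initial conditions matching x(t) = J_0(k*(tau+t)) exactly:
# x(0) = J_0(k*tau),  x'(0) = -k*J_1(k*tau)   since d/dz J_0(z) = -J_1(z)
x0, v0 = j0(k * tau), -k * j1(k * tau)

def rhs(t, y):
    x, v = y
    return [v, -v / (tau + t) - k**2 * x]

t_eval = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (0.0, 10.0), [x0, v0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

exact = j0(k * (tau + sol.t))
print("max |numerical - J_0(k(tau+t))| =", np.abs(sol.y[0] - exact).max())
```

The printed deviation is at the level of the integration tolerance, consistent with the closed-form Bessel solution above.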
Homogenization of trajectory attractors of Ginzburg-Landau equations with randomly oscillating terms

Gregory A. Chechkin, Vladimir V. Chepyzhov, Leonid S. Pankratov

1. Baku branch of M.V. Lomonosov Moscow State University, Universitetskaya st., 1, Xocasan, Binagadi district, Baku, AZ 1144, Azerbaijan
2. M.V. Lomonosov Moscow State University, Moscow, 119991, Russian Federation
3. Institute for Information Transmission Problems, Russian Academy of Sciences, Bolshoy Karetniy 19, Moscow 127051, Russian Federation
4. Voronezh State University, Universitetskaya sq. 1, Voronezh 394018, Russian Federation
5. Laboratory of Fluid Dynamics and Seismic

The paper studies homogenization of trajectory attractors for the Ginzburg-Landau equation $$\partial_t u = (1 + \alpha i)\Delta u + R\,u + (1 + \beta i)|u|^2 u + g,$$ with randomly oscillating terms $R$, $\beta$, and $g$.

Keywords: Trajectory attractors, random homogenization, Ginzburg-Landau equations, nonlinear equations.

Mathematics Subject Classification: Primary: 35B40, 35B41, 35B45, 35Q30.

Citation: Gregory A. Chechkin, Vladimir V. Chepyzhov, Leonid S. Pankratov. Homogenization of trajectory attractors of Ginzburg-Landau equations with randomly oscillating terms. Discrete & Continuous Dynamical Systems - B, 2018, 23 (3): 1133-1154. doi: 10.3934/dcdsb.2018145
Sigmoidal kinetic profiles are the result of enzymes that demonstrate positive cooperative binding. Cooperativity refers to the observation that binding of the substrate or ligand at one binding site affects the affinity of other sites for their substrates. For enzymatic reactions with multiple substrate binding sites, this increased affinity for the substrate causes a rapid and coordinated increase in the velocity of the reaction at higher \([S]\) until \(V_{max}\) is achieved. Plotting \(V_0\) vs. \([S]\) for a cooperative enzyme, we observe the characteristic sigmoidal shape with low enzyme activity at low substrate concentration and a rapid and immediate increase in enzyme activity to \(V_{max}\) as \([S]\) increases. The phenomenon of cooperativity was initially observed in the oxygen-hemoglobin interaction that functions in carrying oxygen in blood. Positive cooperativity implies allosteric binding: binding of the ligand at one site increases the enzyme’s affinity for another ligand at a different site. Enzymes that demonstrate cooperativity are defined as allosteric. There are several types of allosteric interactions: (positive & negative) homotropic and heterotropic. Figure 1: Rate of Reaction (velocity) vs. Substrate Concentration. Positive and negative allosteric interactions (as illustrated through the phenomenon of cooperativity) refer to the enzyme's binding affinity for other ligands at other sites, as a result of ligand binding at the initial binding site. When the interacting ligands are all the same compound, the allosteric effect is considered homotropic. When the interacting ligands are different, the allosteric effect is considered heterotropic. It is also very important to remember that allosteric interactions tend to be driven by ATP hydrolysis. Hill Coefficient The degree of cooperativity is determined by the Hill equation (Equation 1) for non-Michaelis-Menten kinetics. The Hill equation accounts for allosteric binding at sites other than the active site. \(n\) is the "Hill coefficient": when \(n < 1\), there is negative cooperativity; when \(n = 1\), there is no cooperativity; when \(n > 1\), there is positive cooperativity. \[ \theta = \dfrac{[L]^n}{K_d+[L]^n} = \dfrac{[L]^n}{K_a^n+[L]^n} \label{1}\] where \( \theta \) is the fraction of ligand binding sites filled, \([L]\) is the ligand concentration, \(K_d\) is the apparent dissociation constant derived from the law of mass action (equilibrium constant for dissociation), \(K_a\) is the ligand concentration producing half occupation (the ligand concentration occupying half of the binding sites), that is, the microscopic dissociation constant, and \(n\) is the Hill coefficient that describes the cooperativity. Taking the logarithm of both sides of the equation leads to an alternative formulation of the Hill equation. \[ \log \left( \dfrac{\theta}{1-\theta} \right) = n\log [L] - \log K_d \label{2}\] (A short numerical sketch of the Hill equation is given at the end of this section.) Currently, there are 2 models for illustrating cooperativity: the concerted model and the sequential model. The concerted model illustrates cooperativity by assuming that proteins have two or more subunits, and that each part of the protein molecule is able to exist in either the relaxed (R) state or the tense (T) state; the tense state of a protein molecule is favored when it doesn't have any substrates bound. All aspects, including binding and dissociation constants, are the same for each ligand at the respective binding sites.
This model can also be referred to as the Monod-Wyman-Changeux model, named after its founders. The sequential model aims to demonstrate cooperativity by assuming that the enzyme/protein molecule's affinity is relative and changes as substrates bind. Unlike the concerted model, the sequential model accounts for different species of the protein molecule. References Raymond Chang. Physical Chemistry for the Biosciences. University Science Books. 2005 Biological Sciences Review Notes. Kaplan, Inc. 2007 "The Hill equation revisited: uses and misuses." J N Weiss Contributors Tinuke Fashokun
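Here is the short numerical sketch of the Hill equation (Equation 1) mentioned above, written in Python with illustrative, assumed parameter values:

```python
import numpy as np

def hill_fraction(L, Ka, n):
    """Fraction of occupied binding sites: theta = L^n / (Ka^n + L^n)."""
    return L**n / (Ka**n + L**n)

Ka = 1.0                      # ligand concentration giving half occupation (assumed units)
L = np.linspace(0.01, 5.0, 6)

for n in (0.5, 1.0, 4.0):     # negative, no, and positive cooperativity
    theta = hill_fraction(L, Ka, n)
    print(f"n={n}: theta at [L]={L.round(2)} -> {theta.round(3)}")
# n > 1 gives the sigmoidal, switch-like binding curve; n = 1 reduces to the
# hyperbolic (Michaelis-Menten-type) curve.
```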
A microscopic state or microstate of a classical system is a specification of the complete set of positions and momenta of the system at any given time. In the language of phase space vectors, it is a specification of the complete phase space vector of a system at any instant in time. For a conservative system, any valid microstate must lie on the constant energy hypersurface, \(H (x) = E \). Hence, specifying a microstate of a classical system is equivalent to specifying a point on the constant energy hypersurface. The concept of classical microstates now allows us to give a more formal definition of an ensemble. An ensemble is a collection of systems sharing one or more macroscopic characteristics but each being in a unique microstate. The complete ensemble is specified by giving all systems or microstates consistent with the common macroscopic characteristics of the ensemble. The idea of ensemble averaging can also be expressed in terms of an average over all such microstates (which comprise the ensemble). A given macroscopic property, \(A\), and its microscopic function \(a = a (x) \), which is a function of the positions and momenta of a system, i.e. the phase space vector, are related by \[ A = \langle a \rangle_{ensemble} = \frac {1}{N} \sum _{\lambda = 1}^N a(x_{\lambda})\] where \(x_{\lambda}\) is the microstate of the \(\lambda \) th member of the ensemble. Ergodic Hypothesis However, recall the original problem of determining the microscopic detailed motion of each individual particle in a system. In reality, measurements are made only on a single system and all the microscopic detailed motion is present. However, what one observes is still an average, but it is an average over time of the detailed motion, an average that also washes out the microscopic details. Thus, the time average and the ensemble average should be equivalent, i.e. \[ A = \langle a \rangle_{ensemble} = \lim _{T \to \infty } \frac {1}{T} \int _0^T dt a(x (t)) \] This statement is known as the ergodic hypothesis. A system that is ergodic is one for which, given an infinite amount of time, it will visit all possible microscopic states available to it (for Hamiltonian dynamics, this means it will visit all points on the constant energy hypersurface). No one has yet been able to prove that a particular system is truly ergodic, hence the above statement cannot be more than a supposition. However, it states that if a system is ergodic, then the ensemble average of a property \(A(x) \) can be equated to a time average of the property over an ergodic trajectory.
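As a toy illustration of the equivalence of the two averages (an example added here, not part of the original text), consider a single one-dimensional harmonic oscillator: its constant-energy "surface" is a single closed orbit, so ergodicity holds trivially and the time average of a property such as \(x^2\) must match its average over the energy shell.

```python
import numpy as np

# H = p^2/(2m) + m*w^2*x^2/2; on the shell H = E the motion is x(t) = A sin(w t)
m, w, E = 1.0, 2.0, 3.0
A = np.sqrt(2.0 * E / (m * w**2))          # amplitude on the energy shell

# time average of a(x) = x^2 along the trajectory
t = np.linspace(0.0, 200.0, 400001)
time_avg = np.mean((A * np.sin(w * t))**2)

# ensemble average over the energy shell; for this system the Liouville-induced
# measure on the shell is uniform in the phase angle phi = w*t
phi = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
ens_avg = np.mean((A * np.sin(phi))**2)

print(time_avg, ens_avg, E / (m * w**2))   # all approximately equal to A^2/2
```

Both averages converge to \(A^2/2 = E/(m\omega^2)\), as the ergodic hypothesis requires for this (trivially ergodic) system.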
Probability Seminar

Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice $Z^d$. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G.
For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
OpenCV 4.0.0 Open Source Computer Vision This tutorial demonstrates how to use the F-transform for image filtering. You will see: As shown in the previous tutorial, the F-transform is a tool of fuzzy mathematics that is highly useful in image processing. Let me rewrite the formula using the kernel \(g\) introduced before as well: \[ F^0_{kl}=\frac{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} \iota_{kl}(x,y) g(x,y)}{\sum_{x=0}^{2h+1}\sum_{y=0}^{2h+1} g(x,y)}, \] where \(\iota_{kl} \subset I\) is a subimage centered at pixel \((k \cdot h,l \cdot h)\) and \(g\) is a kernel. More details can be found in the related papers. Image filtering changes the input in a defined way to enhance or simply change some concrete feature. Let me demonstrate a simple blur. As a first step, we load the input image. Following the F-transform formula, we must specify a kernel. So now, we have two kernels that differ in radius. A bigger radius leads to a stronger blur. The filtering itself is applied as shown below. The output images look as follows.
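Since the tutorial's code listings are not reproduced here, below is a small NumPy sketch (my own, not the OpenCV fuzzy-module API) of the direct F⁰-transform components defined by the formula above, using a triangular basic function as the kernel \(g\); reconstructing the image from these components gives the blur, and a larger radius \(h\) gives a stronger blur.

```python
import numpy as np

def triangular_kernel(h):
    # 1-D triangular basic function of radius h; outer product gives the 2-D kernel g
    t = 1.0 - np.abs(np.arange(-h, h + 1)) / float(h + 1)
    return np.outer(t, t)

def ft0_components(img, h):
    # Direct F0-transform: kernel-weighted averages of (2h+1)x(2h+1) windows
    # centered at the nodes (k*h, l*h), following the formula in the text.
    g = triangular_kernel(h)
    H, W = img.shape
    comps = []
    for cy in range(0, H, h):
        row = []
        for cx in range(0, W, h):
            y0, y1 = max(cy - h, 0), min(cy + h + 1, H)
            x0, x1 = max(cx - h, 0), min(cx + h + 1, W)
            win = img[y0:y1, x0:x1].astype(float)
            ker = g[(y0 - cy + h):(y1 - cy + h), (x0 - cx + h):(x1 - cx + h)]
            row.append((win * ker).sum() / ker.sum())
        comps.append(row)
    return np.array(comps)

demo = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(ft0_components(demo, h=4).shape)   # coarse grid of F0 components
```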
Category:Continuous Functions This category contains results about Continuous Functions. Let $f: A \to \R$ be a real function and let $x \in A$ be a point of $A$. Then $f$ is continuous at $x$ if and only if $\displaystyle \lim_{y \to x} f \left({y}\right) = f \left({x}\right)$. Subcategories This category has only the following subcategory. C ► Continuous Real Functions (4 C, 6 P) Pages in category "Continuous Functions" The following 53 pages are in this category, out of 53 total. C Combination Theorem for Continuous Functions Combination Theorem for Continuous Functions/Combined Sum Rule Combination Theorem for Continuous Functions/Multiple Rule Combination Theorem for Continuous Functions/Product Rule Combination Theorem for Continuous Functions/Quotient Rule Combination Theorem for Continuous Functions/Sum Rule Combination Theorem for Continuous Mappings/Topological Ring/Combined Rule Combined Sum Rule for Continuous Functions Complex Modulus Function is Continuous Concave Real Function is Continuous Condition for Continuity on Interval Constant Function is Continuous Constant Function is Continuous/Real Function Constant Real Function is Continuous Continuity of Root Function Continuous Function is Riemann Integrable Continuous Function on Closed Interval is Bijective iff Strictly Monotone Continuous Function on Compact Space is Bounded Continuous Function on Compact Space is Uniformly Continuous Continuous Image of Closed Interval is Closed Interval Continuous Image of Connected Space is Connected/Corollary 2 Continuous Image of Connected Space is Connected/Corollary 3 Continuous Injection of Interval is Strictly Monotone Continuous Replicative Function Convex Real Function is Continuous
If you look at the points of these toposes you get horribly complicated ‘non-commutative’ spaces, such as the finite adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}^f_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (in case of the arithmetic site) and the full adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (for the scaling site). In Vienna, Connes gave a nice introduction to the arithmetic site in two lectures. The first part of the talk below also gives an historic overview of his work on the RH. However, not everyone is as optimistic about the topos-approach as he seems to be. Here’s an insightful answer on MathOverflow by Will Sawin to the question “What is precisely still missing in Connes’ approach to RH?”. Especially interesting is section 2, “The geometry behind the zeros of $\zeta$”, in which they explain how looking at the zero locus inevitably leads to the space of adele classes and why one has to study this space with the tools from noncommutative geometry. Perhaps further developments will be disclosed in a few weeks time when Connes is one of the speakers at Toposes in Como. The Fibonacci sequence reappears a bit later in Dan Brown’s book ‘The Da Vinci Code’ where it is used to log in to the bank account of Jacques Sauniere at the fictitious Parisian branch of the Depository Bank of Zurich. Last time we saw that the Hankel matrix of the Fibonacci series $F=(1,1,2,3,5,\dots)$ is invertible over $\mathbb{Z}$\[H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in SL_2(\mathbb{Z}) \]and we can use the rule for the co-multiplication $\Delta$ on $\Re(\mathbb{Q})$, the algebra of rational linear recursive sequences, to determine $\Delta(F)$. For a general integral linear recursive sequence the corresponding Hankel matrix is invertible over $\mathbb{Q}$, but rarely over $\mathbb{Z}$. So we need another approach to compute the co-multiplication on $\Re(\mathbb{Z})$. Any integral sequence $a = (a_0,a_1,a_2,\dots)$ can be seen as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the integral polynomial ring $\mathbb{Z}[x]$ to $\mathbb{Z}$ itself via the rule $\lambda_a(x^n) = a_n$. If $a \in \Re(\mathbb{Z})$, then there is a monic polynomial $f(x)$ with integral coefficients of a certain degree $n$ such that $\lambda_a$ vanishes on the ideal generated by $f(x)$ (this is just the linear recurrence relation satisfied by $a$). Alternatively, we can look at $a$ as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the quotient ring $\mathbb{Z}[x]/(f(x))$ to $\mathbb{Z}$. The multiplicative structure on $\mathbb{Z}[x]/(f(x))$ dualizes to a co-multiplication $\Delta_f$ on the set of all such linear maps $(\mathbb{Z}[x]/(f(x)))^{\ast}$ and we can compute $\Delta_f(a)$. We see that the set of all integral linear recursive sequences can be identified with the direct limit\[\Re(\mathbb{Z}) = \underset{\underset{f|g}{\rightarrow}}{lim}~(\frac{\mathbb{Z}[x]}{(f(x))})^{\ast} \](where the directed system is ordered via division of monic integral polynomials) and so is equipped with a co-multiplication $\Delta = \underset{\rightarrow}{lim}~\Delta_f$. Btw. the ring structure on $\Re(\mathbb{Z}) \subset (\mathbb{Z}[x])^{\ast}$ comes from restricting to $\Re(\mathbb{Z})$ the dual structures of the co-ring structure on $\mathbb{Z}[x]$ given by\[\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \] From this description it is clear that you need to know a hell of a lot of number theory to describe this co-multiplication explicitly.
As most of us prefer to work with rings rather than co-rings, it is a good idea to begin the study of this co-multiplication $\Delta$ by looking at the dual ring structure of\[\Re(\mathbb{Z})^{\ast} = \underset{\underset{ f | g}{\leftarrow}}{lim}~\frac{\mathbb{Z}[x]}{(f(x))} \]This is the completion of $\mathbb{Z}[x]$ at the multiplicative set of all monic integral polynomials. In fact, Habiro got interested in a certain subring of $\Re(\mathbb{Z})^{\ast}$ which we now know as the Habiro ring and which seems to be a red herring in all the stuff about the field with one element, $\mathbb{F}_1$ (more on this another time). Habiro’s ring is a completion of $\mathbb{Z}[q]$ whose elements can all be written as formal power series of the form\[a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + \dots + a_n (q^n-1)(q^{n-1}-1) \dots (q-1) + \dots \]with all coefficients $a_n \in \mathbb{Z}$. Here’s a funny property of such series. If you evaluate them at $q \in \mathbb{C}$ these series are likely to diverge almost everywhere, but they do converge in all roots of unity! Some people say that these functions are ‘leaking out of the roots of unity’. If the ring $\Re(\mathbb{Z})^{\ast}$ is controlled by the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, then Habiro’s ring is controlled by the abelianization $Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} \simeq \hat{\mathbb{Z}}^{\ast}$. Recall that the co-multiplication we are after is a map\[\Delta~:~\Re(\mathbb{Z}) \rightarrow \Re(\mathbb{Z}) \otimes_{\mathbb{Z}} \Re(\mathbb{Z}) \]with properties dual to those of the usual multiplication. To describe this co-multiplication in general will have to await another post. For now, we will describe it on the easier ring $\Re(\mathbb{Q})$ of all rational linear recursive sequences. For such a sequence $q = (q_0,q_1,q_2,\dots) \in \Re(\mathbb{Q})$ we consider its Hankel matrix. From the sequence $q$ we can form symmetric $k \times k$ matrices such that the $i+1$-th antidiagonal consists of entries all equal to $q_i$\[H_k(q) = \begin{bmatrix} q_0 & q_1 & q_2 & \dots & q_{k-1} \\ q_1 & q_2 & & & q_k \\ q_2 & & & & q_{k+1} \\ \vdots & & & & \vdots \\ q_{k-1} & q_k & q_{k+1} & \dots & q_{2k-2} \end{bmatrix} \]The Hankel matrix of $q$, $H(q)$, is $H_k(q)$ where $k$ is maximal such that $det~H_k(q) \not= 0$, that is, $H_k(q) \in GL_k(\mathbb{Q})$. Let $S(q)=(s_{ij})$ be the inverse of $H(q)$; then the co-multiplication map\[\Delta~:~\Re(\mathbb{Q}) \rightarrow \Re(\mathbb{Q}) \otimes \Re(\mathbb{Q}) \]sends the sequence $q = (q_0,q_1,\dots)$ to\[\Delta(q) = \sum_{i,j=0}^{k-1} s_{ij} (D^i q) \otimes (D^j q) \]where $D$ is the shift operator on sequences\[D(a_0,a_1,a_2,\dots) = (a_1,a_2,\dots) \] If $a \in \Re(\mathbb{Z})$ is such that $H(a) \in GL_k(\mathbb{Z})$ then the same formula gives $\Delta(a)$ in $\Re(\mathbb{Z})$. For the Fibonacci sequence $F$ the Hankel matrix is\[H(F) = \begin{bmatrix} 1 & 1 \\ 1& 2 \end{bmatrix} \in GL_2(\mathbb{Z}) \quad \text{with inverse} \quad S(F) = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \]and therefore\[\Delta(F) = 2\, F \otimes F - DF \otimes F - F \otimes DF + DF \otimes DF \]There’s a lot of number theoretic and Galois-information encoded into the co-multiplication on $\Re(\mathbb{Q})$. To see this we will describe the co-multiplication on $\Re(\overline{\mathbb{Q}})$ where $\overline{\mathbb{Q}}$ is the field of all algebraic numbers.
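In down-to-earth terms (my paraphrase), $\Delta$ is dual to the multiplication $x^m \cdot x^n = x^{m+n}$, so the formula for $\Delta(F)$ amounts to the addition law $F_{m+n} = 2F_mF_n - F_{m+1}F_n - F_mF_{n+1} + F_{m+1}F_{n+1}$ for the indexing $F_0 = F_1 = 1$ used above. A quick Python check:

```python
# Check the addition law encoded by Delta(F) for the Fibonacci sequence
# F = (1, 1, 2, 3, 5, ...), i.e. F[0] = F[1] = 1.
F = [1, 1]
for _ in range(40):
    F.append(F[-1] + F[-2])

def delta_pairing(m, n):
    # coefficients s_ij from S(F) = [[2, -1], [-1, 1]] applied to shifted sequences
    return (2 * F[m] * F[n]
            - F[m + 1] * F[n]        # -(DF x F)
            - F[m] * F[n + 1]        # -(F x DF)
            + F[m + 1] * F[n + 1])   # +(DF x DF)

assert all(delta_pairing(m, n) == F[m + n] for m in range(15) for n in range(15))
print("Delta(F) reproduces F_{m+n} for all tested m, n")
```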
One can show that the co-multiplication on $\Re(\overline{\mathbb{Q}})$ can be described completely in terms of three types of elements. Here, $\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}_{\times}^{\ast}]$ is the group-algebra of the multiplicative group of non-zero elements $x \in \overline{\mathbb{Q}}^{\ast}_{\times}$ and each $x$, which corresponds to the geometric sequence $x=(1,x,x^2,x^3,\dots)$, is a group-like element\[\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \] $\overline{\mathbb{Q}}[d]$ is the universal enveloping algebra of the $1$-dimensional Lie algebra on the primitive element $d = (0,1,2,3,\dots)$, that is\[\Delta(d) = d \otimes 1 + 1 \otimes d \quad \text{and} \quad \epsilon(d) = 0 \] Finally, the co-algebra maps on the elements $S_i$ are given by\[\Delta(S_i) = \sum_{j=0}^i S_j \otimes S_{i-j} \quad \text{and} \quad \epsilon(S_i) = \delta_{0i} \] That is, the co-multiplication on $\Re(\overline{\mathbb{Q}})$ is completely known. To deduce from it the co-multiplication on $\Re(\mathbb{Q})$ we have to consider the invariants under the action of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ as\[\Re(\overline{\mathbb{Q}})^{Gal(\overline{\mathbb{Q}}/\mathbb{Q})} \simeq \Re(\mathbb{Q}) \] Unlike the Fibonacci sequence, not every integral linear recursive sequence has a Hankel matrix with determinant $\pm 1$, so to determine the co-multiplication on $\Re(\mathbb{Z})$ is even a lot harder, as we will see another time.

Big news on Mochizuki's groundbreaking IUT: Over 1000 comments on his 4 papers have been addressed & the final versions sent back to the journal for approval. Hopefully, will be published soon. Here's Ivan Fesenko's interview about IUT on the AMS website. https://t.co/6GLk3Xh0lm

In case you prefer an English translation: The big ABC. Here’s her opening paragraph: “In a children’s story written by the Swiss author Peter Bichsel, a lonely man decides to invent his own language. He calls the table “carpet”, the chair “alarm clock”, the bed “picture”. At first he is enthusiastic about his idea and always thinks of new words, his sentences sound original and funny. But after a while, he begins to forget the old words.” The article is less optimistic than other recent popular accounts of Mochizuki’s story, including: “Table is called “carpet”, chair is called “alarm clock”, bed is called “picture”. In the story by Peter Bichsel, the lonely man ends up having so much trouble communicating with other people that he speaks only to himself. It is a very sad story.” Perhaps things will turn out for the better, and we’ll hear about it sometime. In several of his talks on #IUTeich, Mochizuki argues that usual scheme theory over $\mathbb{Z}$ is not suited to tackle problems such as the ABC-conjecture. The idea appears to be that ABC involves both the additive and multiplicative nature of integers, making rings into ‘2-dimensional objects’ (and clearly we use both ‘dimensions’ in the theory of schemes). The idea of the comultiplication is that if we have two elements $r,s \in R$ with corresponding ring maps $f_r~:~\mathbb{Z}[x] \rightarrow R \quad x \mapsto r$ and $f_s~:~\mathbb{Z}[x] \rightarrow R \quad x \mapsto s$, composing their tensor product with the comultiplication determines another element $v \in R$, which can be either the product $v=r.s$ or the sum $v=r+s$, depending on the comultiplication map $\Delta$. The role of the counit is merely sending $x$ to the identity element of the operation.
Thus, if we want to represent the functor forgetting the addition and retaining the multiplication, we have to put on $\mathbb{Z}[x]$ the structure of a biring \[\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1 \] (making $x$ into a ‘group-like’ element for Hopf-ists). The functor $F_{\times}$ forgetting the multiplication but retaining the addition is represented by the Hopf-ring $\mathbb{Z}[x]$, this time with \[\Delta(x) = x \otimes 1 + 1 \otimes x \quad \text{and} \quad \epsilon(x) = 0 \] (that is, this time $x$ becomes a ‘primitive’ element). Perhaps this adds another feather of weight to the proposal in which one defines algebras over the field with one element $\mathbb{F}_1$ to be birings over $\mathbb{Z}$, with the co-ring structure playing the role of descent data from $\mathbb{Z}$ to $\mathbb{F}_1$.
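To make the functor-of-points remark concrete, here is a tiny SymPy illustration (my own, not from the post): identifying $\mathbb{Z}[x] \otimes \mathbb{Z}[x] \cong \mathbb{Z}[x,y]$, composing the pair of points $(f_r, f_s)$ with the ‘multiplicative’ comultiplication $x \mapsto x \otimes x$ returns $r\cdot s$, while the ‘additive’ one $x \mapsto x \otimes 1 + 1 \otimes x$ returns $r+s$.

```python
from sympy import symbols

x, y = symbols('x y')

# Z[x] (x) Z[x]  ~=  Z[x, y]; the two comultiplications on the generator:
delta_mult = x * y        # Delta(x) = x (x) x   (group-like)
delta_add  = x + y        # Delta(x) = x (x) 1 + 1 (x) x   (primitive)

def compose_points(r, s, delta):
    # (f_r (x) f_s) o Delta : evaluate Delta(x) under x -> r, y -> s
    return delta.subs({x: r, y: s})

r, s = 7, 5
print(compose_points(r, s, delta_mult))   # 35 = r*s
print(compose_points(r, s, delta_add))    # 12 = r+s
```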
How to Couple Radiating and Receiving Antennas in Your Simulations In Part 3 of our series on multiscale modeling in high-frequency electromagnetics, let’s turn our attention to the receiving antenna. We’ve already covered theory and definitions in Part 1 and radiating antennas in Part 2. Today, we will couple a radiating antenna at one location with a receiving antenna 1000 λ away. For verification, we will calculate the received power via line-of-sight transmission and compare it with the Friis transmission equation that we covered in Part 1. Simulating the Background Field In the simulation of our receiving antenna, we will use the Scattered Field formulation. This formulation is extremely useful when you have an object in the presence of a known field, such as in radar cross section (RCS) simulations. Since there are a number of scattered field simulations in the Application Gallery, and it has been discussed in a previous blog post, we will assume a familiarity with this technique and encourage you to review those resources if the Scattered Field formulation is new to you. The Scattered Field formulation is useful for computing a radar cross section. When comparing the implementation we will use here with the scattering examples in the Application Gallery, there are two differences that need to be referenced explicitly. The first is that, unlike the scattering examples, we will use a receiving antenna with a Lumped Port. With the Lumped Port excitation set to Off, it will receive power from the background field. This is automatically calculated in a predefined variable, and since the power is going into the lumped port, the value will be negative. The second difference, which we will spend more time discussing, is that the receiving antenna will be in a separate component from the emitting antenna and we will have to reference the results of one component in the other to link them. Multiple Components in the Same Model What does it mean when we have two or more components in a model? The defining feature of a component is that it has its own geometry and spatial dimension. If you would like to have a 2D axisymmetric geometry and a 3D geometry in the same simulation, then they would each require their own component. If you would like to do two 3D simulations in the same model, you only need one component, although in some situations it can be beneficial to separate them anyway. Let’s say, for example, that you have two devices with relatively complicated geometries. If they are in the same component, then anytime you make a geometric change to one, they both need to be rebuilt (and remeshed). In separate components this would not be the case. Another common use of multiple components is submodeling, where the macroscopic structure is analyzed first and then a more detailed analysis is performed on a smaller region of the model. When we split into components, however, we then need to link the results between the simulations. In our case, we have two antennas at a distance of 1000 λ. Separating them into distinct components is not strictly required, but we are going to do it anyway to keep things general. We will add in ray tracing later in this series and some users may find this multiple-component method useful with an arbitrarily complex ray tracing geometry. While we go through the details, it’s important that we have a clear image of the big picture.
The main idea that we are pursuing in this post is that we first simulate an emitting antenna and calculate the radiated fields in a specific direction. Specifically, this is the direction of the receiving antenna. We then account for the distance between the antennas and use the calculated fields as the background field in a Scattered Field formulation for the receiving antenna. The emitting antenna is centered at the origin in component 1 and the receiving antenna is centered at the origin in component 2. Everything we will discuss here is simply the technical details of determining the emitted fields from the first simulation and using them as a background field in a second simulation. Note: The overwhelming majority of COMSOL Multiphysics® software models only have one component and should only have one component. Ensure that you have a sufficient need for multiple components in your model before implementing them, as there is a very real possibility of causing yourself extra work without benefit. Connecting Components with Coupling Operators There are a number of coupling operators, also known as component couplings, available in COMSOL Multiphysics. Generally speaking, these operators map the results from one spatial location to another. Said in another way, you can call for results in one location (the destination), but have the results evaluated at a separate location (the source). While this may seem trivial at first glance, it is an incredibly powerful and general technique. Let’s look at a few specific examples: We can evaluate the maximum or minimum value of a variable in a 3D domain, but call that result globally. This is a 3D to 0D mapping and allows us to create a temperature controller. Note that this can also be used with boundaries or edges, as well as averages or spatial integrations. We can extrude 2D simulation results to a 3D domain. This allows you to exploit translation symmetry in one physics (2D) and use the results in a more complex 3D model. We can project 3D data onto a 2D boundary (or 2D to 1D, etc.). A simple example of this is creating shadow puppets on a wall, but it can also be useful for analyzing averages over a cross section. As mentioned above, we want to simulate the emitting antenna (just like we did in Part 2 of the series) and calculate the radiated fields at a distance of 1000 λ. We then use a component coupling to map the fields to being centered about the origin in component 2. Mapping the Radiated Fields If we look at the far-field evaluation discussed in Part 2, we know that the x-component of the far field at a specific location is given by the scattering amplitude in that direction multiplied by $e^{-jkr}/r$. The only complication is determining where to calculate the scattering amplitude. This is because component couplings need the source and destination to be locations that exist in the geometry. We don’t want to define a sphere in component 1 at the actual location of the receiving antenna, since that defeats the entire purpose of splitting the two antennas into two components. What we will do instead is create a variable for the magnitude of r, and then evaluate the scattering amplitude at a point in the geometry that shares the same angular coordinates, $(\theta,\phi)$, as the point we are actually interested in. In the image below, we show the point where we would like to evaluate the scattering amplitude. Image showing where the scattering amplitude should be calculated and how the coordinates of that point can be determined.
Defining the Point and Coupling Operator We add a point to the geometry using the rescaling of the Cartesian coordinates shown in the above figure. Only x is shown in the figure, but the same scaling is also applied to y and z. For the COMSOL Multiphysics implementation, shown below, we have assumed that the receiving antenna is centered at a location of (1000 λ, 0, 0), and the two parameters used are ant_dist = |\vec{r}_1| and sim_r = |\vec{r}|. The required point for the correct scattering amplitude evaluation. Note that we create a selection group from this point. This is so that it can be referenced without ambiguity. We then use this selection for an integration operator. Since we are integrating only over a single point, we simply return the value of the integrand at that point similar to using a Dirac delta function. The integration operator is defined using the selection group for the evaluation point. Running the Background Field Simulation in COMSOL Multiphysics® The above discussion was all about how to evaluate the scattering amplitude at the correct location. The only remaining step is to use this in a background field simulation of the half-wavelength dipole discussed in Part 1. When we add in the known distance between the antennas, we get the following: The variable definition for r. Note that this is defined in component 2. The background field settings. In the settings, we see that the expression used for the background field in x is comp1.intop1(emw.Efarx)*exp(-j*k*r)/(r/1[m]), which matches the equation cited above. Also note that r is defined in component 2, while intop1() is defined in component 1. Since we are calling this from within component 2, we need to include the correct scope for the coupling operator, comp1.intop1(). The remainder of the receiving antenna simulation is functionally equivalent to other Scattered Field simulations in the Application Gallery, so we will not delve into the specifics here. It is interesting to note that running either the emission or background field simulations by themselves is quite straightforward. All of the complication in this procedure is in correctly calculating the fields from component 1 and using them in component 2. All of this heavy lifting has paid off in that we can now fully simulate the received power in an antenna-to-antenna simulation, and the agreement between the simulated power and the Friis transmission equation is excellent. We can also obtain much more information from our simulation than we can purely from the Friis equation, since we have full knowledge of the electromagnetic fields at every point in space. It is worth mentioning one final point before we conclude. We have only evaluated the far field at an individual point, so there is no angular dependence in the field at the receiving antenna. Because we are interested in antennas that are generally far apart, this is a valid approximation, although we will discuss a more general implementation in Part 4. Concluding Thoughts on Coupling Radiating and Receiving Antennas We have now reached a major benchmark in this blog series. After discussing terminology in Part 1 and emission in Part 2, we can now link a radiating antenna to a receiving antenna and verify our results against a known reference. The method we have implemented here can also be more useful than the Friis equation, as we have fully solved for the electromagnetic fields and any polarization mismatch is automatically accounted for. 
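As an independent sanity check on that reference value (a standalone sketch added here, not COMSOL code), the free-space Friis equation from Part 1 can be evaluated directly. For two half-wavelength dipoles (gain about 1.64, i.e., 2.15 dBi) separated by 1000 λ:

```python
import numpy as np

# Friis transmission equation: P_r = P_t * G_t * G_r * (lambda / (4*pi*R))^2
P_t = 1.0            # transmitted power, W (assumed value)
G_t = G_r = 1.64     # approximate gain of an ideal half-wavelength dipole
wavelength = 1.0     # work in units of the wavelength
R = 1000.0 * wavelength

P_r = P_t * G_t * G_r * (wavelength / (4.0 * np.pi * R))**2
print(f"received power = {P_r:.3e} W  ({10*np.log10(P_r/P_t):.1f} dB below P_t)")
```

This line-of-sight estimate is the value against which the coupled two-component simulation can be compared.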
There is one remaining issue, however, that we have not discussed. The method used here is only applicable to line-of-sight transmission through a homogeneous medium. If we had an inhomogeneous medium between the antennas or multipath transmission, that would not be appropriately accounted for either by this technique or the Friis equation. To solve that issue, we will need to use ray tracing to link the emitting and receiving antennas. In Part 4 of this blog series, we will show you how we can link a radiating source to a ray optics simulation. Further Reading Browse previous posts in the Multiscale Modeling in High-Frequency Electromagnetics blog series
Feller M. N.

Ukr. Mat. Zh. - 2012. - 64, № 11. - pp. 1492-1499 We present solutions of the boundary-value problem $U(0, x) = u_0, \;U(t, 0) = u_1$, and the external boundary-value problem $U(0, x) = v_0,\; U(t, x)|_{Γ} = v_1,\; \lim_{||x||_H→∞} U(t, x) = v_2$ for the nonlinear hyperbolic equation $$\frac{∂^2U(t, x)}{∂t^2} + α(U(t, x)) \left[\frac{∂U(t, x)}{∂t}\right]^2 = ∆_LU(t, x)$$ with infinite-dimensional Lévy Laplacian $∆_L$.

Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 237-244 We propose an algorithm for the solution of the boundary-value problem $U(0,x) = u_0,\;\; U(t, 0) = u_1$ and the external boundary-value problem $U(0, x) = v_0, \;\;U(t, x) |_{\Gamma} = v_1, \;\; \lim_{||x||_H \rightarrow \infty} U(t, x) = v_2$ for the nonlinear hyperbolic equation $$\frac{\partial}{\partial t}\left[k(U(t,x))\frac{\partial U(t,x)}{\partial t}\right] = \Delta_L U(t,x)$$ with divergent part and infinite-dimensional Lévy Laplacian $\Delta_L$.

Boundary-value problems for a nonlinear parabolic equation with Lévy Laplacian resolved with respect to the derivative Ukr. Mat. Zh. - 2010. - 62, № 10. - pp. 1400-1407 We present the solutions of boundary-value and initial boundary-value problems for a nonlinear parabolic equation with Lévy Laplacian $∆_L$ resolved with respect to the derivative $$\frac{∂U(t,x)}{∂t}=f(U(t,x),Δ_LU(t,x))$$ in fundamental domains of a Hilbert space.

Ukr. Mat. Zh. - 2009. - 61, № 11. - pp. 1564-1574 We present the solutions of the initial-value problem in the entire space and the solutions of the boundary-value and initial-boundary-value problems for the wave equation $$\frac{∂^2U(t,x)}{∂t^2} = Δ_LU(t,x)$$ with infinite-dimensional Lévy Laplacian $Δ_L$ in the class of Gâteaux functions.

Ukr. Mat. Zh. - 2000. - 52, № 5. - pp. 690-701 We present a method for the solution of the Cauchy problem for three broad classes of nonlinear parabolic equations $$\frac{∂U(t,x)}{∂t} = f(Δ_L U(t,x)), \qquad \frac{∂U(t,x)}{∂t} = f(t, Δ_L U(t,x)),$$ and $$\frac{∂U(t,x)}{∂t} = f(U(t,x), Δ_L U(t,x))$$ with the infinite-dimensional Laplacian $Δ_L$.

Ukr. Mat. Zh. - 1999. - 51, № 3. - pp. 423-427 We present a method of solving the nonlinear equation $f(U(x), Δ_L^2 U(x)) = Δ_L U(x)$ (here $Δ_L$ is the infinite-dimensional Lévy Laplacian).

Ukr. Mat. Zh. - 1998. - 50, № 11. - pp. 1574-1577 Solutions are found for the nonlinear equation $Δ_L^2 U(x) = f(U(x))$ (here, $Δ_L$ is the infinite-dimensional Lévy Laplacian).

Ukr. Mat. Zh. - 1996. - 48, № 5. - pp. 719-721 We propose a method for the solution of the nonlinear equation $f(U(x), Δ_L U(x)) = F(x)$ ($Δ_L U(x) = γ$, $γ ≠ 0$), unsolved with respect to the infinite-dimensional Laplacian, and for the solution of the Dirichlet problem for this equation.

Necessary and sufficient conditions of harmonicity of functions of infinitely many variables (Jacobian case) Ukr. Mat. Zh. - 1994. - 46, № 6. - pp. 785-788 A criterion of harmonicity of functions in a Hilbert space is given in the case of weakened mutual dependence of the second derivatives.

Ukr. Mat. Zh. - 1990. - 42, № 12. - pp. 1687-1693
Ukr. Mat. Zh. - 1989. - 41, № 7. - pp. 997-1001
Ukr. Mat. Zh. - 1983. - 35, № 2. - pp. 200-206
Ukr. Mat. Zh. - 1980. - 32, № 1. - pp. 69-79
As you suggest in your question and Todd Trimble mentions in a comment, one interesting choice of morphism between Poisson manifolds is that of a coisotropic correspondence: if $M, M'$ are Poisson manifolds, depending on exactly how you work you either think about coisotropic submanifolds in $\bar M \times M'$, or maps $N \to \bar M \times M'$ with coisotropic image, where $\bar M$ is the same manifold as $M$ but with the opposite Poisson structure (and I give $\bar M \times M'$ the product Poisson structure that you're rightly not fond of). Then it is a straightforward fact that a correspondence $N \subseteq M\times M'$ which is the graph of a smooth map $M \to M'$ is coisotropic in $\bar M \times M'$ iff the map is a Poisson map. Note that this all generalizes the category in which objects are symplectic manifolds and morphisms are Lagrangian correspondences --- then a correspondence that is the graph of a smooth function is the graph of a symplectomorphic open embedding iff it is Lagrangian. It also has just as many bad properties. Notably, only composition between generic morphisms is defined, as in the non generic case some intersections may not be transverse. So to make it into a category requires the same kind of $A_\infty$ work (or Wehrheim-Woodward method, or...). I know that some of Alan Weinstein's recent papers discuss this category. This category generalizes easily to the algebraic case that you ask about. Recall that an ideal in a Poisson algebra is coisotropic if it is a Lie subalgebra for the bracket (not necessarily a Lie ideal!), and that a submanifold of a Poisson manifold is coisotropic iff its vanishing ideal is coisotropic. So what I'm suggesting is that if $P,P'$ are Poisson algebras, and writing $\bar P$ for $P$ with the opposite Poisson structure, then one interesting notion of "morphism" $P \to P'$ is a coisotropic ideal in $\bar P \otimes P'$. Dima Shlyakhtenko has suggested more or less the same category in another answer. There is the following philosophy: Poisson manifolds / algebras are a sort of "infinitesimal" piece of noncommutative algebra, and under this rough relationship coisotropic submanifolds are supposed to correspond to (left, say) modules. Then coisotropic correspondences are roughly the same as bimodules. Recall that from an algebra point of view, bimodules are a fairly natural notion of morphism: they are precisely the left adjoints (say, or right adjoints, or adjunctions) between the corresponding categories of modules. The module theory of an algebra knows a lot about the algebra, including its Hochschild homology and cohomology (and hence its center, its perturbative deformation theory, and so on). Of course, it is far from the case that the tensor product of algebras has much to do with the (co)product in any category. Rather, remembering only the Morita theory of algebras helps to explain what is their tensor product: it is the tensor product in the 2-category of (nice) categories with left-adjoints as morphisms, in the sense of being universal for "bilinear" maps. One can be quite precise about this: the 2-category of algebras and bimodules is a categorification of the 1-category of abelian groups. Actually, if you remember the underlying algebra, then that's the same as remembering its module theory along with the data of a "rank-1 free module", and so this is a categorification of the 1-category of abelian groups with a distinguished element. (Morita theory is like linear maps that ignore the distinguished element.) 
Incidentally, it is now straightforward to invent the notion of "sesquialgebra", which is an algebra object in the 2-category of algebras and bimodules, or equivalently a closed monoidal category structure on the module theory of said algebra. The same notion in Poisson manifolds is an algebra object in the category of Poisson manifolds and coisotropic correspondences, so this includes the Poisson Lie monoids. Alan Weinstein and collaborators a few years ago tried to write down a good notion of "Hopfish algebra" for controlling when this map would be invertible, but my opinion is that their paper doesn't quite get it right. What you should do is the following. Recall that a functor between monoidal categories is strong-monoidal if it comes equipped with a natural isomorphism between the two ways of composing the functor and the corresponding monoidal structures (and maybe extra data for associativity, etc.). A strong monoidal functor between closed monoidal categories also determines a natural transformation between "inner homs", which need not be a natural iso. If it is, call the monoidal functor "hopfish" or "strongly closed". A bialgebra is a sesquialgebra with a marked right adjoint to Vect (equivalently, a marked "rank 1 free algebra", the image of the 1-dimensional vector space under the corresponding left adjoint) which is strong monoidal; a Hopf algebra is a bialgebra in which the strong monoidal functor is hopfish.
Search Now showing items 1-10 of 76 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ... Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ... System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at sNN=5.02 TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ... Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ... Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ... Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE (Elsevier, 2017-11) Two-particle correlations in relative azimuthal angle (Δ ϕ ) and pseudorapidity (Δ η ) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ... Electroweak boson production in p–Pb and Pb–Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV with ALICE (Elsevier, 2017-11) W and Z bosons are massive weakly-interacting particles, insensitive to the strong interaction. They provide therefore a medium-blind probe of the initial state of the heavy-ion collisions. The final results for the W and ... Investigating the Role of Coherence Effects on Jet Quenching in Pb-Pb Collisions at $\sqrt{s_{NN}} =2.76$ TeV using Jet Substructure (Elsevier, 2017-11) We report measurements of two jet shapes, the ratio of 2-Subjettiness to 1-Subjettiness ($\it{\tau_{2}}/\it{\tau_{1}}$) and the opening angle between the two axes of the 2-Subjettiness jet shape, which is obtained by ...
A number $\alpha \in \mathbb{R}$ is called algebraic if there exists a polynomial $p(x)$ with rational coefficients such that $p(\alpha) = 0$. Let $S \subset \mathbb{R}$ denote the set of algebraic numbers. Which of the following is true of $S$?
OpenCV 4.1.2-pre Open Source Computer Vision In this tutorial you will learn how to: A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In which sense is the hyperplane obtained optimal? Let's consider the following simple problem: For a linearly separable set of 2D-points which belong to one of two classes, find a separating straight line. In the above picture you can see that there exist multiple lines that offer a solution to the problem. Is any of them better than the others? We can intuitively define a criterion to estimate the worth of the lines: A line is bad if it passes too close to the points because it will be noise sensitive and it will not generalize correctly. Therefore, our goal should be to find the line passing as far as possible from all points. Then, the operation of the SVM algorithm is based on finding the hyperplane that gives the largest minimum distance to the training examples. Twice this distance is known as the margin within SVM theory. Therefore, the optimal separating hyperplane maximizes the margin of the training data.

Let's introduce the notation used to formally define a hyperplane: \[f(x) = \beta_{0} + \beta^{T} x,\] where \(\beta\) is known as the weight vector and \(\beta_{0}\) as the bias. The optimal hyperplane can be represented in an infinite number of different ways by scaling of \(\beta\) and \(\beta_{0}\). As a matter of convention, among all the possible representations of the hyperplane, the one chosen is \[|\beta_{0} + \beta^{T} x| = 1\] where \(x\) symbolizes the training examples closest to the hyperplane. In general, the training examples that are closest to the hyperplane are called support vectors. This representation is known as the canonical hyperplane.

Now, we use the result of geometry that gives the distance between a point \(x\) and a hyperplane \((\beta, \beta_{0})\): \[\mathrm{distance} = \frac{|\beta_{0} + \beta^{T} x|}{||\beta||}.\] In particular, for the canonical hyperplane, the numerator is equal to one and the distance to the support vectors is \[\mathrm{distance}_{\text{ support vectors}} = \frac{|\beta_{0} + \beta^{T} x|}{||\beta||} = \frac{1}{||\beta||}.\] Recall that the margin introduced in the previous section, here denoted as \(M\), is twice the distance to the closest examples: \[M = \frac{2}{||\beta||}\]

Finally, the problem of maximizing \(M\) is equivalent to the problem of minimizing a function \(L(\beta)\) subject to some constraints. The constraints model the requirement for the hyperplane to classify correctly all the training examples \(x_{i}\). Formally, \[\min_{\beta, \beta_{0}} L(\beta) = \frac{1}{2}||\beta||^{2} \text{ subject to } y_{i}(\beta^{T} x_{i} + \beta_{0}) \geq 1 \text{ } \forall i,\] where \(y_{i}\) represents each of the labels of the training examples. This is a problem of Lagrangian optimization that can be solved using Lagrange multipliers to obtain the weight vector \(\beta\) and the bias \(\beta_{0}\) of the optimal hyperplane.

The training data of this exercise is formed by a set of labeled 2D-points that belong to one of two different classes; one of the classes consists of one point and the other of three points.
Set up SVM's parameters In this tutorial we have introduced the theory of SVMs in the simplest case, when the training examples are spread into two classes that are linearly separable. However, SVMs can be used in a wide variety of problems (e.g. problems with non-linearly separable data, an SVM using a kernel function to raise the dimensionality of the examples, etc). As a consequence of this, we have to define some parameters before training the SVM. These parameters are stored in an object of the class cv::ml::SVM. Here: Regions classified by the SVM The method cv::ml::SVM::predict is used to classify an input sample using a trained SVM. In this example we have used this method in order to color the space depending on the prediction done by the SVM. In other words, an image is traversed interpreting its pixels as points of the Cartesian plane. Each of the points is colored depending on the class predicted by the SVM; in green if it is the class with label 1 and in blue if it is the class with label -1. Support vectors We use here a couple of methods to obtain information about the support vectors. The method cv::ml::SVM::getSupportVectors obtains all of the support vectors. We have used these methods here to find the training examples that are support vectors and highlight them.
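The tutorial's code listings are not reproduced above, so here is a minimal Python sketch of the same workflow using OpenCV's cv2.ml.SVM bindings; the training points, labels, and parameter values below are illustrative stand-ins rather than the tutorial's exact listing.

```python
import cv2
import numpy as np

# Four labeled 2D points: one class with a single point, the other with three,
# mirroring the setup described in the tutorial (coordinates are illustrative).
training_data = np.array([[501, 10], [255, 10], [501, 255], [10, 501]], dtype=np.float32)
labels = np.array([1, -1, -1, -1], dtype=np.int32)

# Set up the SVM's parameters: C-Support Vector Classification, linear kernel.
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(0.01)
svm.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6))

# Train on row-ordered samples.
svm.train(training_data, cv2.ml.ROW_SAMPLE, labels)

# Classify a new sample; predict returns (retval, results).
sample = np.array([[400, 100]], dtype=np.float32)
_, response = svm.predict(sample)
print("predicted label:", int(response[0, 0]))

# Retrieve the support vectors found during training.
print("support vectors:\n", svm.getSupportVectors())
```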
1. In a previous section, we showed that matrix multiplication is not commutative, that is, [latex]AB\ne BA[/latex] in most cases. Can you explain why matrix multiplication is commutative for matrix inverses, that is, [latex]{A}^{-1}A=A{A}^{-1}?[/latex] 2. Does every [latex]2\times 2[/latex] matrix have an inverse? Explain why or why not. Explain what condition is necessary for an inverse to exist. 3. Can you explain whether a [latex]2\times 2[/latex] matrix with an entire row of zeros can have an inverse? 4. Can a matrix with an entire column of zeros have an inverse? Explain why or why not. 5. Can a matrix with zeros on the diagonal have an inverse? If so, find an example. If not, prove why not. For simplicity, assume a [latex]2\times 2[/latex] matrix. In the following exercises, show that matrix [latex]A[/latex] is the inverse of matrix [latex]B[/latex]. 6. [latex]A=\left[\begin{array}{cc}1& 0\\ -1& 1\end{array}\right],B=\left[\begin{array}{cc}1& 0\\ 1& 1\end{array}\right][/latex] 7. [latex]A=\left[\begin{array}{cc}1& 2\\ 3& 4\end{array}\right],B=\left[\begin{array}{cc}-2& 1\\ \frac{3}{2}& -\frac{1}{2}\end{array}\right][/latex] 8. [latex]A=\left[\begin{array}{cc}4& 5\\ 7& 0\end{array}\right],B=\left[\begin{array}{cc}0& \frac{1}{7}\\ \frac{1}{5}& -\frac{4}{35}\end{array}\right][/latex] 9. [latex]A=\left[\begin{array}{cc}-2& \frac{1}{2}\\ 3& -1\end{array}\right],B=\left[\begin{array}{cc}-2& -1\\ -6& -4\end{array}\right][/latex] 10. [latex]A=\left[\begin{array}{ccc}1& 0& 1\\ 0& 1& -1\\ 0& 1& 1\end{array}\right],B=\frac{1}{2}\left[\begin{array}{ccc}2& 1& -1\\ 0& 1& 1\\ 0& -1& 1\end{array}\right][/latex] 11. [latex]A=\left[\begin{array}{ccc}1& 2& 3\\ 4& 0& 2\\ 1& 6& 9\end{array}\right],B=\frac{1}{4}\left[\begin{array}{ccc}6& 0& -2\\ 17& -3& -5\\ -12& 2& 4\end{array}\right][/latex] 12. [latex]A=\left[\begin{array}{ccc}3& 8& 2\\ 1& 1& 1\\ 5& 6& 12\end{array}\right],B=\frac{1}{36}\left[\begin{array}{ccc}-6& 84& -6\\ 7& -26& 1\\ -1& -22& 5\end{array}\right][/latex] For the following exercises, find the multiplicative inverse of each matrix, if it exists. 13. [latex]\left[\begin{array}{cc}3& -2\\ 1& 9\end{array}\right][/latex] 14. [latex]\left[\begin{array}{cc}-2& 2\\ 3& 1\end{array}\right][/latex] 15. [latex]\left[\begin{array}{cc}-3& 7\\ 9& 2\end{array}\right][/latex] 16. [latex]\left[\begin{array}{cc}-4& -3\\ -5& 8\end{array}\right][/latex] 17. [latex]\left[\begin{array}{cc}1& 1\\ 2& 2\end{array}\right][/latex] 18. [latex]\left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right][/latex] 19. [latex]\left[\begin{array}{cc}0.5& 1.5\\ 1& -0.5\end{array}\right][/latex] 20. [latex]\left[\begin{array}{ccc}1& 0& 6\\ -2& 1& 7\\ 3& 0& 2\end{array}\right][/latex] 21. [latex]\left[\begin{array}{ccc}0& 1& -3\\ 4& 1& 0\\ 1& 0& 5\end{array}\right][/latex] 22. [latex]\left[\begin{array}{ccc}1& 2& -1\\ -3& 4& 1\\ -2& -4& -5\end{array}\right][/latex] 23. [latex]\left[\begin{array}{ccc}1& 9& -3\\ 2& 5& 6\\ 4& -2& 7\end{array}\right][/latex] 24. [latex]\left[\begin{array}{ccc}1& -2& 3\\ -4& 8& -12\\ 1& 4& 2\end{array}\right][/latex] 25. [latex]\left[\begin{array}{ccc}\frac{1}{2}& \frac{1}{2}& \frac{1}{2}\\ \frac{1}{3}& \frac{1}{4}& \frac{1}{5}\\ \frac{1}{6}& \frac{1}{7}& \frac{1}{8}\end{array}\right][/latex] 26. [latex]\left[\begin{array}{ccc}1& 2& 3\\ 4& 5& 6\\ 7& 8& 9\end{array}\right][/latex] For the following exercises, solve the system using the inverse of a [latex]2\times 2[/latex] matrix. 27. [latex]\begin{array}{l}\text{ }5x - 6y=-61\hfill \\ 4x+3y=-2\hfill \end{array}[/latex] 28. 
[latex]\begin{array}{l}8x+4y=-100\\ 3x - 4y=1\end{array}[/latex] 29. [latex]\begin{array}{l}3x - 2y=6\hfill \\ -x+5y=-2\hfill \end{array}[/latex] 30. [latex]\begin{array}{l}5x - 4y=-5\hfill \\ 4x+y=2.3\hfill \end{array}[/latex] 31. [latex]\begin{array}{l}-3x - 4y=9\hfill \\ 12x+4y=-6\hfill \end{array}[/latex] 32. [latex]\begin{array}{l}-2x+3y=\frac{3}{10}\hfill \\ -x+5y=\frac{1}{2}\hfill \end{array}[/latex] 33. [latex]\begin{array}{l}\frac{8}{5}x-\frac{4}{5}y=\frac{2}{5}\hfill \\ -\frac{8}{5}x+\frac{1}{5}y=\frac{7}{10}\hfill \end{array}[/latex] 34. [latex]\begin{array}{l}\frac{1}{2}x+\frac{1}{5}y=-\frac{1}{4}\\ \frac{1}{2}x-\frac{3}{5}y=-\frac{9}{4}\end{array}[/latex] For the following exercises, solve a system using the inverse of a [latex]3\text{}\times \text{}3[/latex] matrix. 35. [latex]\begin{array}{l}3x - 2y+5z=21\hfill \\ 5x+4y=37\hfill \\ x - 2y - 5z=5\hfill \end{array}[/latex] 36. [latex]\begin{array}{l}\text{ }4x+4y+4z=40\hfill \\ \text{ }2x - 3y+4z=-12\hfill \\ \text{ }-x+3y+4z=9\hfill \end{array}[/latex] 37. [latex]\begin{array}{l}\text{ }6x - 5y-z=31\hfill \\ \text{ }-x+2y+z=-6\hfill \\ \text{ }3x+3y+2z=13\hfill \end{array}[/latex] 38. [latex]\begin{array}{l}6x - 5y+2z=-4\hfill \\ 2x+5y-z=12\hfill \\ 2x+5y+z=12\hfill \end{array}[/latex] 39. [latex]\begin{array}{l}4x - 2y+3z=-12\hfill \\ 2x+2y - 9z=33\hfill \\ 6y - 4z=1\hfill \end{array}[/latex] 40. [latex]\begin{array}{l}\frac{1}{10}x-\frac{1}{5}y+4z=\frac{-41}{2}\\ \frac{1}{5}x - 20y+\frac{2}{5}z=-101\\ \frac{3}{10}x+4y-\frac{3}{10}z=23\end{array}[/latex] 41. [latex]\begin{array}{l}\frac{1}{2}x-\frac{1}{5}y+\frac{1}{5}z=\frac{31}{100}\hfill \\ -\frac{3}{4}x-\frac{1}{4}y+\frac{1}{2}z=\frac{7}{40}\hfill \\ -\frac{4}{5}x-\frac{1}{2}y+\frac{3}{2}z=\frac{1}{4}\hfill \end{array}[/latex] 42. [latex]\begin{array}{l}0.1x+0.2y+0.3z=-1.4\hfill \\ 0.1x - 0.2y+0.3z=0.6\hfill \\ 0.4y+0.9z=-2\hfill \end{array}[/latex] For the following exercises, use a calculator to solve the system of equations with matrix inverses. 43. [latex]\begin{array}{l}2x-y=-3\hfill \\ -x+2y=2.3\hfill \end{array}[/latex] 44. [latex]\begin{array}{l}-\frac{1}{2}x-\frac{3}{2}y=-\frac{43}{20}\hfill \\ \frac{5}{2}x+\frac{11}{5}y=\frac{31}{4}\hfill \end{array}[/latex] 45. [latex]\begin{array}{l}12.3x - 2y - 2.5z=2\hfill \\ 36.9x+7y - 7.5z=-7\hfill \\ 8y - 5z=-10\hfill \end{array}[/latex] 46. [latex]\begin{array}{l}0.5x - 3y+6z=-0.8\hfill \\ 0.7x - 2y=-0.06\hfill \\ 0.5x+4y+5z=0\hfill \end{array}[/latex] For the following exercises, find the inverse of the given matrix. 47. [latex]\left[\begin{array}{cccc}1& 0& 1& 0\\ 0& 1& 0& 1\\ 0& 1& 1& 0\\ 0& 0& 1& 1\end{array}\right][/latex] 48. [latex]\left[\begin{array}{rrrr}\hfill -1& \hfill 0& \hfill 2& \hfill 5\\ \hfill 0& \hfill 0& \hfill 0& \hfill 2\\ \hfill 0& \hfill 2& \hfill -1& \hfill 0\\ \hfill 1& \hfill -3& \hfill 0& \hfill 1\end{array}\right][/latex] 49. [latex]\left[\begin{array}{rrrr}\hfill 1& \hfill -2& \hfill 3& \hfill 0\\ \hfill 0& \hfill 1& \hfill 0& \hfill 2\\ \hfill 1& \hfill 4& \hfill -2& \hfill 3\\ \hfill -5& \hfill 0& \hfill 1& \hfill 1\end{array}\right][/latex] 50. [latex]\left[\begin{array}{rrrrr}\hfill 1& \hfill 2& \hfill 0& \hfill 2& \hfill 3\\ \hfill 0& \hfill 2& \hfill 1& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 3& \hfill 0& \hfill 1\\ \hfill 0& \hfill 2& \hfill 0& \hfill 0& \hfill 1\\ \hfill 0& \hfill 0& \hfill 1& \hfill 2& \hfill 0\end{array}\right][/latex] 51. 
[latex]\left[\begin{array}{rrrrrr}\hfill 1& \hfill 0& \hfill 0& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 1& \hfill 0& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 1& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0& \hfill 1& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0& \hfill 0& \hfill 1& \hfill 0\\ \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1\end{array}\right][/latex] For the following exercises, write a system of equations that represents the situation. Then, solve the system using the inverse of a matrix. 52. 2,400 tickets were sold for a basketball game. If the prices for floor 1 and floor 2 were different, and the total amount of money brought in is $64,000, how much was the price of each ticket? 53. In the previous exercise, if you were told there were 400 more tickets sold for floor 2 than floor 1, how much was the price of each ticket? 54. A food drive collected two different types of canned goods, green beans and kidney beans. The total number of collected cans was 350 and the total weight of all donated food was 348 lb, 12 oz. If the green bean cans weigh 2 oz less than the kidney bean cans, how many of each can was donated? 55. Students were asked to bring their favorite fruit to class. 95% of the fruits consisted of banana, apple, and oranges. If oranges were twice as popular as bananas, and apples were 5% less popular than bananas, what are the percentages of each individual fruit? 56. A sorority held a bake sale to raise money and sold brownies and chocolate chip cookies. They priced the brownies at $1 and the chocolate chip cookies at $0.75. They raised $700 and sold 850 items. How many brownies and how many cookies were sold? 57. A clothing store needs to order new inventory. It has three different types of hats for sale: straw hats, beanies, and cowboy hats. The straw hat is priced at $13.99, the beanie at $7.99, and the cowboy hat at $14.49. If 100 hats were sold this past quarter, $1,119 was taken in by sales, and the amount of beanies sold was 10 more than cowboy hats, how many of each should the clothing store order to replace those already sold? 58. Anna, Ashley, and Andrea weigh a combined 370 lb. If Andrea weighs 20 lb more than Ashley, and Anna weighs 1.5 times as much as Ashley, how much does each girl weigh? 59. Three roommates shared a package of 12 ice cream bars, but no one remembers who ate how many. If Tom ate twice as many ice cream bars as Joe, and Albert ate three less than Tom, how many ice cream bars did each roommate eat? 60. A farmer constructed a chicken coop out of chicken wire, wood, and plywood. The chicken wire cost $2 per square foot, the wood $10 per square foot, and the plywood $5 per square foot. The farmer spent a total of $51, and the total amount of materials used was [latex]14{\text{ ft}}^{2}[/latex]. He used [latex]{\text{3 ft}}^{2}[/latex] more chicken wire than plywood. How much of each material in did the farmer use? 61. Jay has lemon, orange, and pomegranate trees in his backyard. An orange weighs 8 oz, a lemon 5 oz, and a pomegranate 11 oz. Jay picked 142 pieces of fruit weighing a total of 70 lb, 10 oz. He picked 15.5 times more oranges than pomegranates. How many of each fruit did Jay pick?
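As a quick way to check answers to exercises like these, here is a small NumPy sketch (my addition, not part of the exercise set) that verifies a candidate inverse and solves the system from exercise 27 with the inverse of its coefficient matrix.

```python
import numpy as np

# Exercise 6: confirm that B is the inverse of A by checking AB = BA = I.
A6 = np.array([[1., 0.], [-1., 1.]])
B6 = np.array([[1., 0.], [1., 1.]])
print(np.allclose(A6 @ B6, np.eye(2)) and np.allclose(B6 @ A6, np.eye(2)))  # True

# Exercise 27: solve 5x - 6y = -61, 4x + 3y = -2 via x = A^{-1} b.
A = np.array([[5., -6.], [4., 3.]])
b = np.array([-61., -2.])
x = np.linalg.inv(A) @ b
print(x)  # [-5.  6.], i.e. x = -5, y = 6
```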
Difference between revisions of "Algebraic Geometry Seminar Fall 2016" (→Botong Wang) (→Fall 2016 Schedule) Line 63: Line 63: |TBA |TBA |Daniel and Jordan |Daniel and Jordan + + + + + |} |} Revision as of 15:39, 26 September 2016 The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Here is the schedule for the previous semester. Contents Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Fall 2016 Schedule date speaker title host(s) September 16 Alexander Pavlov (Wisconsin) Betti Tables of MCM Modules over the Cones of Plane Cubics local September 23 PhilSang Yoo (Northwestern) Classical Field Theories for Quantum Geometric Langlands Dima October 7 Botong Wang (Wisconsin) Enumeration of points, lines, planes, etc. local October 14 Luke Oeding (Auburn) Border ranks of monomials Steven October 28 Adam Boocher (Utah) TBA Daniel November 4 Reserved TBA Daniel November 11 Daniel Litt (Columbia) TBA Jordan November 18 David Stapleton (Stony Brook) TBA Daniel December 2 Rohini Ramadas (Michigan) TBA Daniel and Jordan December 9 Robert Walker (Michigan) TBA Daniel Abstracts Alexander Pavlov Betti Tables of MCM Modules over the Cones of Plane Cubics Graded Betti numbers are classical invariants of finitely generated modules over graded rings describing the shape of a minimal free resolution. We show that for maximal Cohen-Macaulay (MCM) modules over a homogeneous coordinate rings of smooth Calabi-Yau varieties X computation of Betti numbers can be reduced to computations of dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case of a smooth elliptic curve embedded into projective plane as a cubic we use our formula to get explicit answers for Betti numbers. In this case we show that there are only four possible shapes of the Betti tables up to a shifts in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies. PhilSang Yoo Classical Field Theories for Quantum Geometric Langlands One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory. Botong Wang Enumeration of points, lines, planes, etc. It is a theorem of de Brujin and Erdős that n points in the plane determines at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher dimensional generalization to this theorem. Let E be a generating subset of a d-dimensional vector space. Let [math]i_k[/math] be the number of k-dimensional subspaces that is generated by a subset of E. We show that [math]i_k\leq i_{d-k}[/math], when [math]k\leq d/2[/math]. This confirms a "top-heavy" conjecture of Dowling and Wilson in 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem. This is joint work with June Huh. 
Luke Oeding Border ranks of monomials What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits? Here are some minimal examples: [math]4xy = (x+y)^2 - (x-y)^2[/math] [math]24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3[/math] [math]192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4[/math] The monomial [math]x^2y[/math] has a minimal expression as a sum of 3 cubes: [math]6x^2y = (x+y)^3 + (-x+y)^3 -2y^3[/math] But you can use only 2 cubes if you allow a limit: [math]3x^2y = \lim_{\epsilon \to 0} \frac{(x^3 - (x-\epsilon y)^3)}{\epsilon}[/math] Can you do something similar with xyzw? Previously it wasn't known whether the minimal number of powers in a limiting expression for xyzw was 7 or 8. I will answer this and the analogous question for all monomials. The polynomial Waring problem is to write a polynomial as a linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini--Catalisano--Geramita, and independently shortly thereafter by Buczynska--Buczynski--Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank were known since Landsberg-Teitler, 2010 and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors. We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials. To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package.
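The power-sum identities quoted in the abstract are easy to confirm symbolically; this short SymPy check (my addition, not part of the abstract) expands the first two and evaluates the limit expression.

```python
import sympy as sp

x, y, z, eps = sp.symbols('x y z epsilon')

# 4xy as a difference of two squares.
print(sp.expand((x + y)**2 - (x - y)**2))            # 4*x*y

# 24xyz as a sum of four cubes.
expr3 = ((x + y + z)**3 + (x - y - z)**3
         + (-x - y + z)**3 + (-x + y - z)**3)
print(sp.expand(expr3))                               # 24*x*y*z

# x^2*y from a limit of a difference of two cubes.
print(sp.limit((x**3 - (x - eps*y)**3) / eps, eps, 0))  # 3*x**2*y
```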
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q|(p-1) and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix! 
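Returning to the word-problem procedure for Dehn presentations described earlier in this exchange, here is a toy Python sketch (my own illustration, not from the chat). It encodes inverse generators as capital letters, freely reduces words, and greedily rewrites any occurrence of a long half $u_i$ of a relator by the shorter half $v_i$; it assumes the supplied pairs really do come from a Dehn presentation.

```python
def free_reduce(word: str) -> str:
    """Cancel adjacent inverse pairs, e.g. 'aA' or 'Bb'."""
    out = []
    for ch in word:
        if out and out[-1] != ch and out[-1].lower() == ch.lower():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def dehn_is_trivial(word: str, rules: list[tuple[str, str]]) -> bool:
    """Dehn's algorithm: repeatedly replace a subword u_i by the shorter v_i.

    `rules` is a list of pairs (u_i, v_i) with |u_i| > |v_i| coming from a
    Dehn presentation; the word represents the identity iff we reach ''.
    """
    word = free_reduce(word)
    changed = True
    while word and changed:
        changed = False
        for u, v in rules:
            if u in word:
                word = free_reduce(word.replace(u, v, 1))
                changed = True
                break
    return word == ""

# Smallest possible example: a free group needs no rules at all, so the
# algorithm reduces a word and asks whether it freely reduces to the identity.
print(dehn_is_trivial("abBA", []))   # True
print(dehn_is_trivial("ab", []))     # False
```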
Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$, which is not a polynomial for $C' \neq 0$, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
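A quick symbolic check of the kernel computation discussed above (my addition): take a general cubic, apply $F$, and see which coefficients are forced to vanish.

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')

# General element of R_3[x] and the transformation F(P) = x P'' + (x + 1) P'''.
P = a + b*x + c*x**2 + d*x**3
FP = sp.expand(x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3))

# F(P) = 0 as a polynomial in x forces every coefficient to vanish.
coeffs = sp.Poly(FP, x).all_coeffs()
print(sp.solve(coeffs, [a, b, c, d], dict=True))   # c = 0, d = 0 (a, b free)
```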
Article Keywords: replicated regression model; best unbiased estimators Summary: The aim of the paper is to estimate a function $\gamma=\operatorname{tr}(D\beta\beta')+\operatorname{tr}(C\Sigma)$ (with $D, C$ known matrices) in a regression model $(Y, X\beta,\Sigma)$ with an unknown parameter $\beta$ and covariance matrix $\Sigma$. Stochastically independent replications $Y_1,\ldots, Y_m$ of the stochastic vector $Y$ are considered, where the estimators of $X\beta$ and $\Sigma$ are $\bar{Y}=\frac 1 m \sum ^m _{i=1} Y_i$ and $\hat{\Sigma}=(m-1)^{-1} \sum^m_{i=1}(Y_i-\bar{Y})(Y_i-\bar{Y})'$, respectively. Locally and uniformly best unbiased estimators of the function $\gamma$, based on $\bar{Y}$ and $\hat{\Sigma}$, are given.
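As a numerical illustration of the two estimators used in the summary (my sketch, not from the paper, using simulated data), this is how $\bar Y$ and $\hat\Sigma$ are formed from independent replications of $Y$.

```python
import numpy as np

rng = np.random.default_rng(0)

# m independent replications of an n-dimensional response Y (simulated data).
m, n = 50, 3
true_mean = np.array([1.0, -2.0, 0.5])
Y = rng.multivariate_normal(true_mean, np.eye(n), size=m)

# Y_bar = (1/m) sum_i Y_i  and  Sigma_hat = (m-1)^{-1} sum_i (Y_i - Y_bar)(Y_i - Y_bar)'
Y_bar = Y.mean(axis=0)
Sigma_hat = (Y - Y_bar).T @ (Y - Y_bar) / (m - 1)

print(Y_bar)
print(np.allclose(Sigma_hat, np.cov(Y, rowvar=False)))  # True: the usual sample covariance
```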
For two particular (twelve- and thirteen-dimensional) sets of two-retrit states (corresponding to 9 x 9 density matrices with real off-diagonal entries), I have been able to calculate the Hilbert-Schmidt probabilities that members of the sets have positive partial transposes (the PPT property). The first set is composed of the two-retrit $X$-states--having non-zero diagonal and anti-diagonal entries, and all others zero. For this set, the Hilbert-Schmidt PPT-probability is $\frac{16}{3 \pi^2} \approx 0.54038$. (For the rebit-retrit and two-rebit $X$-states [https://arxiv.org/abs/1501.02289 p.3], the [now, separability] probability is--somewhat surprisingly--the very same. For still higher [than two-retrit] dimensional states, the PPT-probabilities of their $X$-states seem not to be presently known--and also not known for the $8 \times 8$ $X$-states.) The second (thirteen-dimensional) set is a one-parameter enlargement of the two-retrit $X$-states, in which the (1,2) (and (2,1)) entries are now unrestricted. For this set, the HS PPT-probability increases to $\frac{65}{36 \pi} \approx 0.574726$. (It remains an interesting research question to what extent, if any, this probability changes if entries other than the (1,2) (and (2,1)) entries are chosen to be similarly unrestricted.) So, now is there any manner by which I can determine to what extent these two sets of (12- and 13-dimensional) PPT-states are bound entangled or separable? Also, along similar lines, it would be of interest to try to compute the Hilbert-Schmidt PPT-probability of the eight-dimensional "magic simplex" $\mathcal{W}$ presented in sec. IV of the paper "The geometry of bipartite qutrits including bound entanglement" https://arxiv.org/abs/0705.1403 . However, at this point, I have not yet been able to fully implement in Mathematica the steps required.
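While it does not settle the bound-entanglement question asked above, the basic PPT test itself is easy to automate; here is a minimal NumPy sketch (my addition) that partially transposes a two-qutrit density matrix and checks positivity.

```python
import numpy as np

def partial_transpose(rho: np.ndarray, dims=(3, 3)) -> np.ndarray:
    """Partial transpose on the second subsystem of a bipartite density matrix."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)                 # indices i, j, k, l of rho_{(i,j),(k,l)}
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap j <-> l

def is_ppt(rho: np.ndarray, dims=(3, 3), tol=1e-12) -> bool:
    return np.linalg.eigvalsh(partial_transpose(rho, dims)).min() >= -tol

# Sanity check on the maximally mixed two-qutrit state (separable, hence PPT).
rho_mixed = np.eye(9) / 9
print(is_ppt(rho_mixed))   # True
```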
For a typical macroscopic system, the total number of particles \(N \sim 10^{23}\). Since an essentially infinite amount of precision is needed in order to specify the initial conditions (due to exponentially rapid growth of errors in this specification), the amount of information required to specify a trajectory is essentially infinite. Even if we contented ourselves with quadruple precision, however, the amount of memory needed to hold just one phase space point would be about 128 bytes = \(2^7 \sim 10^2\) bytes for each number or \(10^2 \times 6 \times 10^{23} \sim 10^{17} \) Gbytes. The largest computers we have today have perhaps \(10^3\) Gbytes of memory, so we are off by 14 orders of magnitude just to specify 1 point in phase space. Do we need all this detail? (Yes and No).

Yes: There are plenty of chemically interesting phenomena for which we really would like to know how individual atoms are moving as a process occurs. Experimental techniques such as ultrafast laser spectroscopy can resolve short time scale phenomena and, thus, obtain important insights into such motions. From a theoretical point of view, although we cannot follow \(10^{23}\) particles, there is some hope that we could follow the motion of a system containing \(10^4\) or \(10^5\) particles, which might capture most of the features of true macroscopic matter. Thus, by solving Newton's equations of motion numerically on a computer, we have a kind of window into the microscopic world. This is the basis of what are known as molecular dynamics calculations.

No: Intuitively, we would expect that if we were to follow the evolution of a large number of systems all described by the same set of forces but starting from different initial conditions, these systems would have essentially the same macroscopic characteristics, e.g. the same temperature, pressure, etc., even if the microscopic detailed evolution of each system in time would be very different. This idea suggests that the microscopic details are largely unimportant. Since, from the point of view of macroscopic properties, precise microscopic details are largely unimportant, we might imagine employing a construct known as the ensemble concept in which a large number of systems with different microscopic characteristics but similar macroscopic characteristics is used to "wash out" the microscopic details via an averaging procedure. This is an idea developed by individuals such as Gibbs, Maxwell, and Boltzmann.

Ensemble: Consider a large number of systems each described by the same set of microscopic forces and sharing some common macroscopic property (e.g. the same total energy). Each system is assumed to evolve under the microscopic laws of motion from a different initial condition so that the time evolution of each system will be different from all the others. Such a collection of systems is called an ensemble. The ensemble concept then states that macroscopic observables can be calculated by performing averages over the systems in the ensemble. For many properties, such as temperature and pressure, which are time-independent, the fact that the systems are evolving in time will not affect their values, and we may perform averages at a particular instant in time. Thus, let \(A\) denote a macroscopic property and let \(a\) denote a microscopic function that is used to compute \(A\). An example of \(A\) would be the temperature, and \(a\) would be the kinetic energy (a microscopic function of velocities).
Then, \(A\) is obtained by calculating the value of \(a\) in each system of the ensemble and performing an average over all systems in the ensemble: \[ A = \frac {1}{N} \sum _{\lambda = 1}^N a_{\lambda} \] where \(N\) is the total number of members in the ensemble and \(a_{\lambda}\) is the value of \(a\) in the \(\lambda\) th system. The questions that naturally arise are: How do we construct an ensemble? How do we perform averages over an ensemble? How many systems will an ensemble contain? How do we distinguish time-independent from time-dependent properties in the ensemble picture? Answering these questions will be our main objective in Statistical Mechanics.
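To make the ensemble-average prescription concrete, here is a small NumPy sketch (my illustration, with made-up numbers): the macroscopic observable is the temperature, obtained by averaging the microscopic kinetic-energy function over the members of the ensemble and using equipartition, \( \langle KE \rangle = \tfrac{3}{2} n k_B T \).

```python
import numpy as np

rng = np.random.default_rng(1)

# An "ensemble" of N systems, each with n_particles and unit-mass velocities.
N, n_particles = 1000, 100
k_B, T_true = 1.0, 2.0
velocities = rng.normal(0.0, np.sqrt(k_B * T_true), size=(N, n_particles, 3))

# Microscopic function a_lambda: total kinetic energy of system lambda.
a = 0.5 * (velocities**2).sum(axis=(1, 2))

# Ensemble average A = (1/N) * sum_lambda a_lambda, converted to a temperature.
A = a.mean()
T_estimate = 2.0 * A / (3.0 * n_particles * k_B)
print(T_estimate)   # close to T_true = 2.0
```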
OpenCV 4.0.0 Open Source Computer Vision In this tutorial, the basic concept of fuzzy transform is presented. You will learn: The presented explanation demands knowledge of basic math. All related papers are cited and mostly accessible on https://www.researchgate.net/. In the last years, the theory of F-transforms has been intensively developed in many directions. In image processing, it has had successful applications in image compression and reduction, image fusion, edge detection and image reconstruction [159] [45] [201] [157] [156] [207]. The F-transform is a technique that places a continuous/discrete function in correspondence with a finite vector of its F-transform components. In image processing, where images are identified by intensity functions of two arguments, the F-transform of the latter is given by a matrix of components. Let me introduce F-transform of a 2D grayscale image \(I\) that is considered as a function \(I:[0,M]\times [0,N]\to [0,255]\) where \([0,M]=\{0,1,2,\ldots,M\}; [0,N]=\{0,1,2,\ldots,N\}\). It is assumed that the image is defined at points (pixels) that belong to the set \(P\), where \(P=\{(x,y)\mid x=0,1,\ldots, M;y=0,1,\ldots, N\}\). Let \(A_0, \dots ,A_m\) and \(B_0, \dots ,B_n\) be basic functions, \(A_0, \dots ,A_m : [0,M] \to [0, 1]\) be fuzzy partition of \([0,M]\) and \(B_0, \dots ,B_n :[0,N]\to [0, 1]\) be fuzzy partition of \([0,N]\). Assume that the set of pixels \(P\) is sufficiently dense with respect to the chosen partitions. This means that for all \(k\in{0,\dots, m}(\exists x\in [0,M]) \ A_k(x)>0\), and for all \(l\in{0,\dots, n}(\exists y\in [0,N])\ B_l(y)>0\). We say that the \(m\times n\)-matrix of real numbers \(F^0_{mn}[I] = (F^0_{kl})\) is called the (discrete) F-transform of \(I\) with respect to \(\{A_0, \dots,A_m\}\) and \(\{B_0, \dots,B_n\}\) if for all \(k=0,\dots,m,\ l=0,\dots,n\): \[ F^0_{kl}=\frac{\sum_{y=0}^{N}\sum_{x=0}^{M} I(x,y)A_k(x)B_l(y)}{\sum_{y=0}^{N}\sum_{x=0}^{M} A_k(x)B_l(y)}. \] The coefficients \(F^0_{kl}\) are called components of the \(F^0\)-transform. \(F^1\)-transform has been presented in [158]. We say that matrix \(F^1_{mn}[I] = (F^1_{kl}), k=0,\ldots, m, l=0,\ldots, n\), is the \(F^1\)-transform of \(I\) with respect to \(\{A_k\times B_l\mid k=0,\ldots, m, l=0,\ldots, n\}\), and \(F^1_{kl}\) is the corresponding \(F^1\)-transform component. The \(F^1\)-transform components of \(I\) are linear polynomials in the form \[ F^1_{kl}(x,y)= c^{00}_{kl} + c^{10}_{kl}(x-x_k) + c^{01}_{kl}(y-y_l), \] where the coefficients are given by \[ c_{kl}^{00} =\frac{\sum_{y=0}^{N}\sum_{x=0}^{M} I(x,y)A_k(x)B_l(y)}{\sum_{y=0}^{N}\sum_{x=0}^{M} A_k(x)B_l(y)}, \\ c_{kl}^{10} =\frac{\sum_{y=0}^{N}\sum_{x=0}^{M} I(x,y)(x - x_k)A_k(x)B_l(y)}{\sum_{y=0}^{N}\sum_{x=0}^{M} (x - x_k)^2A_k(x)B_l(y)}, \\ c_{kl}^{01} =\frac{\sum_{y=0}^{N}\sum_{x=0}^{M} I(x,y)(y - y_l)A_k(x)B_l(y)}{\sum_{y=0}^{N}\sum_{x=0}^{M} (y - y_l)^2A_k(x)B_l(y)}. \] The technique of F-transforms uses two steps: direct and inverse. The direct step is described in the previous section whereas the inverse is as follows \[ O(x,y)=\sum_{k=0}^{m}\sum_{l=0}^{n} F^d_{kl}A_k(x)B_l(y), \] where \(O\) is the output (reconstructed) image and \(d\) is F-transform degree. In fact, the algorithm computes the F-transform components of the input image \(I\) and spreads the components afterwards to the size of \(I\). For details see [156]. Application to image processing is possible to take from two different views. 
The pixels are processed one by one in a way that appropriate basic functions are found for each of them. There will be exactly four of them, two in each direction. We need some helper structure in memory for collecting their values. The values will be used in the numerator of the related fuzzy component. The implementation of this approach uses the keyword FL, standing for fast processing (it includes more optimizations) with a linear basic function. In the second approach, the image is divided into regular areas. Each area is processed separately using a kernel window. This approach benefits from easy-to-understand, matrix-based processing with straightforward parallelization. This approach uses a kernel \(g\). Let us show the linear case with radius \(h = 2\) as an example. \[ A = (0, 0.5, 1, 0.5, 0) \\ B^T = (0, 0.5, 1, 0.5, 0) \\ g = AB^T=\left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.25 & 0.5 & 0.25 & 0 \\ 0 & 0.5 & 1 & 0.5 & 0 \\ 0 & 0.25 & 0.5 & 0.25 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) \]
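The kernel above is just the outer product of the sampled linear basic function with itself; the following NumPy lines (my addition, with a random patch standing in for real image data) rebuild it and use it to compute one \(F^0\) component of a small patch as the weighted average from the formula in the previous section.

```python
import numpy as np

# Linear basic function with radius h = 2, sampled at integer points.
A = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
g = np.outer(A, A)          # the 5x5 kernel shown above
print(g)

# One F^0 component over a 5x5 patch: weighted average of intensities,
# F0_kl = sum(I * A_k * B_l) / sum(A_k * B_l).
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(5, 5)).astype(float)
F0 = (patch * g).sum() / g.sum()
print(F0)
```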
Peter Saveliev Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA. My current projects are these two books: In part, the latter book is about Discrete Calculus, which is based on a simple idea:$$\lim_{\Delta x\to 0}\left( \begin{array}{c}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$I have been involved in research in algebraic topology and several other fields but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance. Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture called Painting Hands: Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases: (a) One $-1$: mirror symmetry, then pen draws pen. No! (b) Two $-1$s: $180$ degrees rotation, then we have two right (or two left) hands. No! (c) Three $-1$s: central symmetry. Yes! -Why is discrete calculus better than infinitesimal calculus? -Why? -Because it can be integer-valued! -And? -And the integer-valued calculus can detect if the space is non-orientable! Read Integer-valued calculus, an essay making a case for discrete calculus by appealing to topology and physics. -The political “spectrum” might be a circle! -So? -Then there can be no fair decision-making system! Read The political spectrum is a circle, an essay based on the very last section of the topology book. Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over that years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter.
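The three involution cases can be checked with a one-line determinant computation (my illustration, not part of the original page): the determinant records whether the map reverses orientation, i.e. whether it can turn a right hand into a left hand.

```python
import numpy as np

# Diagonalized involutions of 3-space: the diagonal holds only +1 / -1 entries.
mirror   = np.diag([1.0, 1.0, -1.0])    # (a) one -1: reflection in a plane
rotation = np.diag([-1.0, -1.0, 1.0])   # (b) two -1s: 180-degree rotation
central  = np.diag([-1.0, -1.0, -1.0])  # (c) three -1s: central symmetry

for name, M in [("mirror", mirror), ("rotation", rotation), ("central", central)]:
    # Each M is an involution (M @ M = I); det = -1 means orientation-reversing
    # (right hand <-> left hand), det = +1 means a rotation, as in case (b).
    print(name, np.allclose(M @ M, np.eye(3)), np.linalg.det(M))
```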
Contained in-between each level of the polynomial hierarchy are various complexity classes, including $\Delta_i^{\text{P}}$, $\text{DP}$, $\text{BH}_k$, and $\Sigma_i^\text{P} \cap \Pi_i^\text{P}$. For lack of better terminology, I will refer to these and any others as intermediate classes between levels $i$ and $i+1$ in the polynomial hierarchy. For the purposes of this question, assume they are the classes contained in $\Sigma_{i+1}^\text{P} \cap \Pi_{i+1}^\text{P}$ but contain $\Sigma_i^\text{P}$ and/or $\Pi_i^\text{P}$. We want to avoid including $\Sigma_{i+1}^\text{P} \cap \Pi_{i+1}^\text{P}$, if possible, as it is trivially equivalent to $\text{PH}$ if it collapses to the ${i+1}^{th}$ level. In addition, define the following: $\text{DP}_i = \left \{ L \cap L' : L \in \Sigma_i^\text{P} \text{ and } L' \in \Pi_i^{\text{P}} \right \}$ The above is a generalization of the class $\text{DP}$ (also written $\text{D}^\text{P}$). In this definition, $\text{DP}$ is equivalent to $\text{DP}_1$. It is considered in another cstheory.se question. It is easy to see that $\text{DP}_i \subseteq \Delta_{i+1}^{\text{P}}$ and contains both $\Sigma_i^\text{P}$ and $\Pi_i^\text{P}$. Reference Diagram: Question: Suppose that the polynomial hierachy collapses to the ${i+1}^{th}$ level, but does not collapse to the $i^{th}$ level. That is, $\Sigma_{i+1}^\text{P}=\Pi_{i+1}^\text{P}$ and $\Sigma_{i}^\text{P}\neq\Pi_{i}^\text{P}$. Can we say anything more about the relationships between these intermediate classes themselves and others in any level below $i+1$? Is there a schema for a collection of complexity classes where, for every collection, the classes are equivalent if and only if the $\text{PH}$ collapses exactly to an arbitrarily chosen level? Just as a followup, suppose that the hierarchy collapsed to any particular one of these intermediate classes (such as $\Delta_{i+1}^{\text{P}}$). Depending on the class selected, do we know if this collapse must continue to extend downwards, perhaps even to the $i^{th}$ level? The above question was partially explored and answered in a paper by Hemaspaandra et. al: A Downward Collapse within the Polynomial Hierarchy Does someone happen to know of additional examples not mentioned in this paper or have further intuition as to what needs to happen in order for a class to accomplish this?
Mathematics > Differential Geometry
Title: Spectral sections, twisted rho invariants and positive scalar curvature
(Submitted on 23 Sep 2013 (v1), last revised 25 Apr 2014 (this version, v3))
Abstract: We had previously defined the rho invariant $\rho_{spin}(Y,E,H, g)$ for the twisted Dirac operator $\not\partial^E_H$ on a closed odd dimensional Riemannian spin manifold $(Y, g)$, acting on sections of a flat hermitian vector bundle $E$ over $Y$, where $H = \sum i^{j+1} H_{2j+1} $ is an odd-degree differential form on $Y$ and $H_{2j+1}$ is a real-valued differential form of degree ${2j+1}$. Here we show that it is a conformal invariant of the pair $(H, g)$. In this paper we express the defect integer $\rho_{spin}(Y,E,H, g) - \rho_{spin}(Y,E, g)$ in terms of spectral flows and prove that $\rho_{spin}(Y,E,H, g)\in \mathbb Q$, whenever $g$ is a Riemannian metric of positive scalar curvature. In addition, if the maximal Baum-Connes conjecture holds for $\pi_1(Y)$ (which is assumed to be torsion-free), then we show that $\rho_{spin}(Y,E,H, rg) =0$ for all $r\gg 0$, significantly generalizing our earlier results. These results are proved using the Bismut-Weitzenb\"ock formula, a scaling trick, the technique of noncommutative spectral sections, and the Higson-Roe approach.
Submission history: From: Varghese Mathai. [v1] Mon, 23 Sep 2013 09:48:25 GMT (23kb) [v2] Sat, 2 Nov 2013 20:05:59 GMT (326kb,D) [v3] Fri, 25 Apr 2014 20:53:53 GMT (313kb,D)
pandas.Series.ewm

Series.ewm(com=None, span=None, halflife=None, alpha=None, min_periods=0, freq=None, adjust=True, ignore_na=False, axis=0)

Provides exponentially weighted functions. New in version 0.18.0.

Parameters:
com : float, optional
    Specify decay in terms of center of mass, \(\alpha = 1 / (1 + com),\text{ for } com \geq 0\)
span : float, optional
    Specify decay in terms of span, \(\alpha = 2 / (span + 1),\text{ for } span \geq 1\)
halflife : float, optional
    Specify decay in terms of half-life, \(\alpha = 1 - exp(log(0.5) / halflife),\text{ for } halflife > 0\)
alpha : float, optional
    Specify smoothing factor \(\alpha\) directly, \(0 < \alpha \leq 1\). New in version 0.18.0.
min_periods : int, default 0
    Minimum number of observations in window required to have a value (otherwise result is NA).
freq : None or string alias / date offset object, default None
    Deprecated since version 0.18.0. Frequency to conform to before computing statistic.
adjust : boolean, default True
    Divide by decaying adjustment factor in beginning periods to account for imbalance in relative weightings (viewing EWMA as a moving average).
ignore_na : boolean, default False
    Ignore missing values when calculating weights; specify True to reproduce pre-0.15.0 behavior.

Returns: a Window sub-classed for the particular operation

Notes

Exactly one of center of mass, span, half-life, and alpha must be provided. Allowed values and the relationship between the parameters are specified in the parameter descriptions above; see the link at the end of this section for a detailed explanation. The freq keyword is used to conform time series data to a specified frequency by resampling the data. This is done with the default parameters of resample() (i.e. using the mean).

When adjust is True (default), weighted averages are calculated using weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1. When adjust is False, weighted averages are calculated recursively as: weighted_average[0] = arg[0]; weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].

When ignore_na is False (default), weights are based on absolute positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and (1-alpha)**2 and alpha (if adjust is False). When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on relative positions. For example, the weights of x and y used in calculating the final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is True), and 1-alpha and alpha (if adjust is False).

More details can be found at http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows

Examples

>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
     B
0  0.0
1  1.0
2  2.0
3  NaN
4  4.0
>>> df.ewm(com=0.5).mean()
          B
0  0.000000
1  0.750000
2  1.615385
3  1.615385
4  3.670213
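To see the adjust=False recursion from the Notes in action, here is a short check (my addition, using illustrative data without missing values so the recursion is unambiguous):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 4.0])
alpha = 1.0 / (1.0 + 0.5)        # com = 0.5  ->  alpha = 2/3

# Manual recursion: y[0] = x[0]; y[i] = (1 - alpha) * y[i-1] + alpha * x[i]
manual = [s.iloc[0]]
for value in s.iloc[1:]:
    manual.append((1 - alpha) * manual[-1] + alpha * value)

print(pd.Series(manual))
print(s.ewm(com=0.5, adjust=False).mean())   # matches the manual recursion
```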
Commun. Math. Anal. Volume 20, Number 1 (2017), 69 - 82

Nonlinear Eigenvalue Problem for the p-Laplacian

Abstract: This article is devoted to the study of the nonlinear eigenvalue problem
\begin{eqnarray*}
-\Delta_{p} u &=& \lambda |u|^{p-2}u~~\text{in}~~\Omega,\\
|\nabla u|^{p-2}\frac{\partial u}{\partial \nu}&+&\beta |u|^{p-2}u=\lambda |u|^{p-2}u ~~\text{on }~~~\partial\Omega,
\end{eqnarray*}
where $\nu$ denotes the unit exterior normal, $1 < p < \infty$, and $\beta > 0$. Using Ljusternik-Schnirelman theory, we prove the existence of a nondecreasing sequence of positive eigenvalues and show that the first eigenvalue is simple and isolated. Moreover, we prove that the second eigenvalue coincides with the second variational eigenvalue obtained via the Ljusternik-Schnirelman theory.
Geometry and Topology Seminar

Fall 2013 (date: speaker, title, host)
- September 6
- September 13, 10:00 AM in 901!: Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph (Kent)
- September 20
- September 27
- October 4
- October 11
- October 18: Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics (Kent)
- October 25: Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT (local)
- November 1: Anton Lukyanenko (Illinois), Uniformly quasi-regular mappings on sub-Riemannian manifolds (Dymarz)
- November 8: Neil Hoffman (Melbourne), Verified computations for hyperbolic 3-manifolds (Kent)
- November 15: Khalid Bou-Rabee (Minnesota), On generalizing a theorem of A. Borel (Kent)
- November 22: Morris Hirsch (Wisconsin), Common zeros for Lie algebras of vector fields on real and complex 2-manifolds (local)
- Thanksgiving Recess
- December 6: Sean Paul (Wisconsin), (Semi)stable Pairs I (local)
- December 13: Sean Paul (Wisconsin), (Semi)stable Pairs II (local)

Fall Abstracts

Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph
Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions. Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics
Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey fractions. This work gives some possible notions of `randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT
Many problems in differential geometry can be reduced to solving a PDE of the form [math] \mu(x)=0 [/math] where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE. It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math] X^s/G^c=X//G:=\mu^{-1}(0)/G [/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient.
In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar. Anton Lukyanenko (Illinois) Uniformly quasi-regular mappings on sub-Riemannian manifolds Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In a joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping f. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces. Neil Hoffman (Melbourne) Verified computations for hyperbolic 3-manifolds Abstract: Given a triangulated 3-manifold M a natural question is: Does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu. Khalid Bou-Rabee (Minnesota) On generalizing a theorem of A. Borel The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups. Borel’s Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (i.e., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math]. What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel’s Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion on how this new theory provides an answer to a question of Brueillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen. Morris Hirsch (Wisconsin) Common zeros for Lie algebras of vector fields on real and complex 2-manifolds. 
The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and [math]M[/math] has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215-247), stated below. When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

• Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

• Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset[/math].

Related results on Lie group actions, and nonanalytic vector fields, will also be treated.

Sean Paul (Wisconsin), (Semi)stable Pairs I
Sean Paul (Wisconsin), (Semi)stable Pairs II

Spring 2014 (date: speaker, title, host)
- January 24
- January 31
- February 7
- February 14
- February 21
- February 28
- March 7
- March 14
- Spring Break
- March 28
- April 4: Matthew Kahle (Ohio), TBA (Dymarz)
- April 11
- April 18
- April 25
- May 2
- May 9

Spring Abstracts

Matthew Kahle (Ohio): TBA
JingZhou Sun (Stony Brook): TBA
The Oscar in the category The Best Rejected Research Proposal in Mathematics(ever) goes to … Alexander Grothendieck for his proposal Esquisse d’un Programme, Grothendieck\’s research program from 1983, written as part of his application for a position at the CNRS, the French equivalent of the NSF. An English translation is available. Here is one of the problems discussed : Give TWO non-trivial elements of$Gal(\overline{\mathbb{Q}}/\mathbb{Q}) $ the _absolute_ Galois group of the algebraic closure of the rational numbers $\overline{\mathbb{Q}} $, that is the group of all $\mathbb{Q} $-automorphisms of $\overline{\mathbb{Q}} $. One element most of us can give (complex-conjugation) but to find any other element turns out to be an extremely difficult task. To get a handle on this problem, Grothendieck introduced his _’Dessins d’enfants’_ (Children’s drawings). Recall from last session the pictures of the left and right handed Monsieur Mathieu The left hand side drawing was associated to a map $\mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} $ which was defined over the field $\mathbb{Q} \sqrt{-11} $ whereas the right side drawing was associated to the map given when one applies to all coefficients the unique non-trivial automorphism in the Galois group $Gal(\mathbb{Q}\sqrt{-11}/\mathbb{Q}) $ (which is complex-conjugation). Hence, the Galois group $Gal(\mathbb{Q}\sqrt{-11}/\mathbb{Q}) $ acts _faithfully_ on the drawings associated to maps $\mathbb{P}^1_{\mathbb{Q}\sqrt{-11}} \rightarrow \mathbb{P}^1_{\mathbb{Q}\sqrt{-11}} $ which are ramified only over the points ${ 0,1,\infty } $. Grothendieck’s idea was to extend this to more general maps. Assume that a projective smooth curve (a Riemann surface) X is defined over the algebraic numbers $\overline{\mathbb{Q}} $ and assume that there is a map $X \rightarrow \mathbb{P}^1_{\mathbb{C}} $ ramified only over the points ${ 0,1,\infty } $, then we can repeat the procedure of last time and draw a picture on X consisting of d edges (where d is the degree of the map, that is the number of points lying over another point of $\mathbb{P}^1_{\mathbb{C}} $) between white resp. black points (the points of X lying over 1 (resp. over 0)). Call such a drawing a ‘dessin d\’enfant’ and look at the collection of ALL dessins d’enfants associated to ALL such maps where X runs over ALL curves defined over $\overline{\mathbb{Q}} $. On this set, there is an action of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q}) $ and if this action would be faithful, then this would give us insight into this group. However, at that time even the existence of a map $X \rightarrow \mathbb{P}^1 $ ramified in the three points ${ 0,1,\infty } $ seemed troublesome to prove, as Grothendieck recalls in his proposal In more erudite terms, could it be true that every projective non-singular algebraic curve defined over a number field occurs as a possible ‚ modular curve‚ parametrising elliptic curves equipped with a suitable rigidification? Such a supposition seemed so crazy that I was almost embarrassed to submit it to the competent people in the domain. Deligne when I consulted him found it crazy indeed, but didn’t have any counterexample up his sleeve. 
Less than a year later, at the International Congress in Helsinki, the Soviet mathematician Bielyi announced exactly that result, with a proof of disconcerting simplicity which fit into two little pages of a letter of Deligne ‚ never, without a doubt, was such a deep and disconcerting result proved in so few lines! In the form in which Bielyi states it, his result essentially says that every algebraic curve defined over a number field can be obtained as a covering of the projective line ramified only over the points 0, 1 and infinity. This result seems to have remained more or less unobserved. Yet, it appears to me to have considerable importance. To me, its essential message is that there is a profound identity between the combinatorics of finite maps on the one hand, and the geometry of algebraic curves defined over number fields on the other. This deep result, together with the algebraic- geometric interpretation of maps, opens the door onto a new, unexplored world within reach of all, who pass by without seeing it. Belyi’s proof is indeed relatively easy (full details can be found in the paper Dessins d’enfants on the Riemann sphere by Leila Schneps). Roughly it goes as follows : as both X and the map are defined over $\overline{\mathbb{Q}} $ the map is only ramified over (finitely many) $\overline{\mathbb{Q}} $-points. Let S be the finite set of all Galois-conjugates of these points and consider the polynomial $f_0(z_0) = \prod_{s \in S} (z_0 -s) \in \mathbb{Q}[z_0] $ Now, do a resultant trick. Consider the polynomial $f_1(z_1) = Res_{z_0}(\frac{d f_0}{d z_0},f_0(z_0)-z_1) $ then the roots of $f_1(z_1) $ are exactly the finite critical values of $f_0 $, $f_1 $ is again defined over $\mathbb{Q} $ and has lower degree (in $z_1 $) than $f_0 $ in $z_1 $. Continue this trick a finite number of times untill you have constructed a polynomial $f_n(z_n) \in \mathbb{Q}[z_n] $ of degree zero. Composing the original map with the maps $f_j $ in succession yields that all ramified points of this composition are $\mathbb{Q} $-points! Now, we only have to limit the number of these ramified $\mathbb{Q} $-points (let us call this set T) to three. Take any three elements of T, then there always exist integers $m,n \in \mathbb{Z} $ such that the three points go under a linear fractional transformation (a Moebius-function associated to a matrix in $PGL_2(\mathbb{Q}) $) to ${ 0,\frac{m}{m+n},1 } $. Under the transformation $z \rightarrow \frac{(m+n)^{m+n}}{m^m n^n}z^m(1-z)^n $ the points 0 and 1 go to 0 and $\frac{m}{m+n} $ goes to 1 whence the ramified points of the composition are one less in number than T. Continuing in this way we can get the set of ramified $\mathbb{Q} $-points of a composition at most having three elements and then a final Moebius transformation gets them to ${ 0,1,\infty } $, done! As a tribute for this clever argument, maps $X \rightarrow \mathbb{P}^1 $ ramified only in 0,1 and $\infty $ are now called Belyi morphisms. Here is an example of a Belyi-morphism (and the corresponding dessin d’enfants) associated to one of the most famous higher genus curves around : the Klein quartic (if you haven’t done so yet, take your time to go through this marvelous pre-blog post by John Baez). 
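Before turning to that example, here is a rough illustration of the resultant step in Belyi's argument, as a small SymPy sketch (the polynomial f0 and its roots are made-up toy data, not taken from the post). It computes $f_1(z_1) = \mathrm{Res}_{z_0}(f_0', f_0 - z_1)$, whose roots are the finite critical values of $f_0$ and whose degree in $z_1$ is lower than that of $f_0$:

```python
import sympy as sp

z0, z1 = sp.symbols('z0 z1')

# toy choice of the set S (hypothetical ramification points); f0 has these as roots
f0 = (z0 - 2) * (z0 - 3) * (z0 + 1)

# f1(z1) = Res_{z0}(f0', f0 - z1): it vanishes exactly at the critical values of f0
f1 = sp.resultant(sp.diff(f0, z0), f0 - z1, z0)

print(sp.Poly(f0, z0).degree(), sp.Poly(f1, z1).degree())  # 3 and 2: the degree drops
print(sp.factor(f1))
```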
One can define the Klein quartic as the plane projective curve K with defining equation in $\mathbb{P}^2_{\mathbb{C}}$ given by $X^3Y+Y^3Z+Z^3X = 0$. K has a large group of automorphisms, namely the simple group of order 168, $G = PSL_2(\mathbb{F}_7) = SL_3(\mathbb{F}_2)$. It is a classical fact (see for example the excellent paper by Noam Elkies, The Klein quartic in number theory) that the quotient map $K \rightarrow K/G = \mathbb{P}^1_{\mathbb{C}}$ is ramified only in the points 0, 1728 and $\infty$, and the numbers of points of K lying over them are resp. 56, 84 and 24. Now, compose this map with the Moebius transformation taking $\{ 0,1728,\infty \} \rightarrow \{ 0,1,\infty \}$; then the resulting map is a Belyi map for the Klein quartic. A topological construction of the Klein quartic is obtained by fitting 24 heptagons together so that three meet in each vertex; see below for the gluing-data picture in the hyperbolic plane (the different heptagons are given a number, but each number appears several times, telling how the pieces must fit together). The resulting figure has exactly $\frac{7 \times 24}{2} = 84$ edges, and the 84 points of K lying over 1 (the white points in the dessin) correspond to the midpoints of the edges. There are exactly $\frac{7 \times 24}{3}=56$ vertices corresponding to the 56 points lying over 0 (the black points in the dessin). Hence, the dessin d'enfant associated to the Klein quartic is the figure traced out by the edges on K. Giving each of the 168 half-edges a different number, one assigns to the white points a permutation of order two and to the three-valent black points a permutation of order three, whence to the Belyi map of the Klein quartic corresponds a 168-dimensional permutation representation of $SL_2(\mathbb{Z})$, which is not so surprising as the group of automorphisms is $PSL_2(\mathbb{F}_7)$ and the permutation representation is just the regular representation of this group. Next time we will see how one can always associate to a curve defined over $\overline{\mathbb{Q}}$ a permutation representation (via the Belyi map and its dessin) of one of the congruence subgroups $\Gamma(2)$ or $\Gamma_0(2)$ or of $SL_2(\mathbb{Z})$ itself.
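A quick sanity check on the heptagon counts above (a small Python snippet, not from the original post): the tiling has V = 56 vertices, E = 84 edges and F = 24 faces, so the Euler characteristic is $56 - 84 + 24 = -4 = 2 - 2g$, confirming that the Klein quartic has genus 3.

```python
V, E, F = 7 * 24 // 3, 7 * 24 // 2, 24   # vertices, edges, heptagons: 56, 84, 24
chi = V - E + F                          # Euler characteristic of the surface
print(chi, (2 - chi) // 2)               # -4 and genus g = 3
```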
Definition:Square Number Contents Definition Square numbers are those denumerating a collection of objects which can be arranged in the form of a square. They can be denoted: $S_1, S_2, S_3, \ldots$ $\exists m \in \Z: n = m^2$ where $m^2$ denotes the integer square function. Euclid's Definition In the words of Euclid: $S_n = \begin{cases} 0 & : n = 0 \\ S_{n-1} + 2 n - 1 & : n > 0 \end{cases}$ $\displaystyle S_n = \sum_{i \mathop = 1}^n \left({2 i - 1}\right) = 1 + 3 + 5 + \cdots + \left({2 n - 1}\right)$ $\forall n \in \N: S_n = P \left({4, n}\right) = \begin{cases} 0 & : n = 0 \\ P \left({4, n - 1}\right) + 2 \left({n - 1}\right) + 1 & : n > 0 \end{cases}$ where $P \left({k, n}\right)$ denotes the $k$-gonal numbers. The first few square numbers are as follows: $0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, \ldots$ Also known as A square number is often referred to as a square. For emphasis, a square number is sometimes referred to as a perfect square, but this could cause confusion with the concept of perfect number, so its use is discouraged. Also see Odd Number Theorem which shows that $\displaystyle n^2 = \sum_{j \mathop = 1}^n \paren {2 j - 1}$ Definition:Polygonal Number: Results about square numberscan be found here. Figurate numbers, that is: and so on, were classified and investigated by the Pythagorean school in the $6$th century BCE. This was possibly the first time this had ever been done. Sources 1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra... (previous) ... (next): Chapter $2$: Some Properties of $\Z$: Exercise $2.13$ 1986: David Wells: Curious and Interesting Numbers... (previous) ... (next): Glossary 1992: George F. Simmons: Calculus Gems... (previous) ... (next): Chapter $\text {A}.2$: Pythagoras (ca. $580$ – $500$ B.C.) 1992: George F. Simmons: Calculus Gems... (previous) ... (next): Chapter $\text {A}.13$: Fermat ($1601$ – $1665$) 1997: David Wells: Curious and Interesting Numbers(2nd ed.) ... (previous) ... (next): Glossary
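Looping back to the recursive definition quoted above, here is a tiny Python sketch (my own illustration) that generates the sequence via $S_n = S_{n-1} + (2n - 1)$:

```python
def squares(n):
    """Return [S_0, ..., S_n] using S_k = S_{k-1} + (2k - 1)."""
    s, out = 0, [0]
    for k in range(1, n + 1):
        s += 2 * k - 1            # add the k-th odd number
        out.append(s)
    return out

print(squares(11))                # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
```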
It’s 0, except on the trivial cases where it is 1. But clearly this is the wrong way to formulate the question, as there are interesting things to be said about the probabilities of infinite sequences of coin tosses. The situation is analogous to uniformly sampling real numbers from the $[0,1]$ interval: the probability of obtaining any specific number is just 0. The solution, however, is simple: we ask instead what is the probability of obtaining a real number in a given subinterval. The analogous solution works for the case of coin tosses: instead of asking the probability of a single infinite sequence, one can ask the probability of obtaining an infinite sequence that starts with a given finite sequence. To be more concrete, let’s say that the probability of obtaining Heads in a single coin toss is $p$, and for brevity let’s denote the outcome Heads by 1 and Tails by 0. Then the probability of obtaining the sequence 010 is $p(1-p)^2$, which is the same as the probability of obtaining the sequence 0100 or the sequence 0101, which is the same as the probability of obtaining a sequence in the set {01000, 01001, 01010, 01011}, which is the same as the probability of obtaining an infinite sequence that starts with 010. There is nothing better to do with infinite sequences of zeroes and ones than mapping them into a real number in the interval $[0,1]$, so we shall do that. The set of infinite sequences that start with 010 are then very conveniently represented by the interval $[0.010,0.010\bar1]$, also known as $[0.010,0.011]$ for those who do not like infinite strings of ones, or $[0.25,0.375]$ for those who do not like binary. Saying then that the probability of obtaining a sequence in $[0.010,0.010\bar{1}]$ is $p(1-p)^2$ is assigning a measure to this interval, which we write as \[ \rho([0.010,0.010\bar{1}]) = p(1-p)^2 \] Now if we can assign a sensible probability to every interval contained in $[0,1]$ we can actually extend it into a proper probability measure over the set of infinite sequences of coin tosses using standard measure-theoretical arguments. For me this is the right answer to the question posed on the title of this post. So, how do we go about assigning a sensible probability to every interval contained in $[0,1]$? Well, the argument of the previous paragraph can clearly be extended to any interval of the form $[k/2^n, (k+1)/2^n]$. We just need write $k$ in the binary basis, padded with zeroes on the left until it reaches $n$ binary digits, and count the number of 0s and 1s. In symbols: \[ \rho\left(\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right]\right) = p^{n_1(k,n)}(1-p)^{n_0(k,n)} \] The extension to any interval where the extremities are binary fractions is straightforward. We just break them down into intervals where the numerators differ by one and apply the previous rule. In symbols: \[ \rho\left(\left[\frac{k}{2^n}, \frac{l+1}{2^n}\right]\right) = \sum_{i=k}^{l} p^{n_1(i,n)}(1-p)^{n_0(i,n)} \] We are essentially done, since we can approximate any real number as well as we want we want by using binary fractions 1. But life is more than just binary fractions, so I’ll show explicitly how to deal with the interval \[[0,1/3] = [0,0.\bar{01}] \] The key thing is to choose a nice sequence of binary fractions $a_n$ that converges to $1/3$. It is convenient to use a monotonically increasing sequence, because then we don’t need to worry about minus signs. 
If furthermore the sequence starts with $0$, then \[ [0,1/3] = \bigcup_{n\in \mathbb N} [a_n,a_{n+1}] \] and \[ \rho([0,1/3]) = \sum_{n\in \mathbb N} \rho([a_n,a_{n+1}]) \] An easy sequence that does the job is $(0,0.01,0.0101,0.010101,\ldots)$. It lets us write the interval as \[ [0,1/3] = [0.00, 0.00\bar{1}] \cup [0.0100, 0.0100\bar{1}] \cup [0.010100, 0.010100\bar{1}] \cup … \] which gives us a simple interpretation of $\rho([0,1/3])$: it is the probability of obtaining a sequence of outcomes starting with 00, or 0100, or 010100, etc. The formula for the measure of $[a_n,a_{n+1}]$ is also particularly simple: \[ \rho([a_n,a_{n+1}]) = p^{n-1}(1-p)^{n+1} \] so the measure of the whole interval is just a geometric series: \[ \rho([0,1/3]) = (1-p)^2\sum_{n\in\mathbb N} \big(p(1-p)\big)^{n-1} = \frac{(1-p)^2}{1-p(1-p)} \] It might feel like something is missing because we haven’t examined irrational numbers. Well, not really, because the technique used to do $1/3$ clearly applies to them, as we only need a binary expansion of the desired irrational. But still, this is not quite satisfactory, because the irrationals that we know and love like $1/e$ or $\frac{2+\sqrt2}4$ have a rather complicated and as far as I know patternless binary expansion, so we will not be able to get any nice formula for them. On the other hand, one can construct some silly irrationals like the binary Liouville constant \[ \ell = \sum_{n\in\mathbb N} 2^{-n!} \approx 0.110001000000000000000001\]whose binary expansion is indeed very simple: every $n!$th binary digit is a one, and the rest are zeroes. The measure of the $[0,\ell]$ interval is then \[ \rho([0,\ell]) = \sum_{n\in \mathbb N} \left(\frac{p}{1-p}\right)^{n-1} (1-p)^{n!} \]Which I have no idea how to sum (except for the case $p=1/2$ ;) But I feel that something different is still missing. We have constructed a probability measure over the set of coin tosses, but what I’m used to think of as “the probability” for uncountable sets is the probability density, and likewise I’m used to visualize a probability measure by making a plot of its density. Maybe one can “derive” the measure $\rho$ to obtain a probability density over the set of coin tosses? After all, the density is a simple derivative for well-behaved measures, or the Radon-Nikodym derivative for more naughty ones. As it turns out, $\rho$ is too nasty for that. The only condition that a probability measure needs to satisfy in order to have a probability density is that it needs to attribute measure zero to every set of Lebesgue measure zero, and $\rho$ fails this condition. To show that, we shall construct a set $E$ such that its Lebesgue measure $\lambda(E)$ is zero, but $\rho(E)=1$. Let $E_n$ be the set of infinite sequences that start with a $n$-bit sequence that contains at most $k$ ones2. Then \[ \rho(E_n) = \sum_{i=0}^k \binom ni p^i (1-p)^{n-i} \] and \[ \lambda(E_n) = 2^{-n} \sum_{i=0}^k \binom ni \] These formulas might look nasty if you haven’t fiddled with entropies for some time, but they actually have rather convenient bounds, which are valid for $p < k/n < 1/2$: \[ \rho(E_n) \ge 1 - 2^{-n D\left( \frac kn || p\right)} \] and \[ \lambda(E_n) \le 2^{-n D\left( \frac kn || \frac 12\right)} \] where $D(p||q)$ is the relative entropy of $p$ with respect to $q$. 
They show that if $k/n$ is smaller than $1/2$ then $\lambda(E_n)$ is rather small (loosely speaking, the number of sequences whose fraction of ones is strictly less than $1/2$ is rather small), and that if $k/n$ is larger than $p$ then $\rho(E_n)$ is rather close to one (so again loosely speaking, what this measure does is weight the counting of sequences towards $p$ instead of $1/2$: if $k/n$ were smaller than $p$ then $\rho(E_n)$ would also be rather small). If we now fix $k/n$ in this sweet range (e.g. by setting $k = \lfloor n(p + 0.5)/2\rfloor$)3 then \[ E = \bigcap_{i \in \mathbb N} \bigcup_{n \ge i} E_n,\] is the set we want, some weird kind of limit of the $E_n$. Then I claim, skipping the boring proof, that \[ \rho(E) = 1 \]and \[ \lambda(E) = 0 \] But don’t panic. Even without a probability density, we can still visualize a probability measure by plotting its cumulative distribution function \[ f(x) = \rho([0,x]) \]which for $p = 1/4$ is this cloud-like fractal:
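One way to actually draw that fractal CDF is to exploit the self-similarity of the measure. The following is a minimal sketch (my own reconstruction, not the post's code): writing $f(x) = \rho([0,x])$ and conditioning on the first coin toss gives $f(x) = (1-p)f(2x)$ for $x < 1/2$ and $f(x) = (1-p) + p\,f(2x-1)$ for $x \geq 1/2$, which can be evaluated recursively to any desired depth.

```python
def F(x, p, depth=60):
    """Approximate f(x) = rho([0, x]) by unrolling the self-similar recursion."""
    if x <= 0 or depth == 0:
        return 0.0
    if x >= 1:
        return 1.0
    if x < 0.5:
        return (1 - p) * F(2 * x, p, depth - 1)        # first toss was 0 (probability 1-p)
    return (1 - p) + p * F(2 * x - 1, p, depth - 1)    # first toss was 1 (probability p)

p = 0.25
# sanity check against the closed form derived above for rho([0, 1/3])
print(F(1 / 3, p), (1 - p) ** 2 / (1 - p * (1 - p)))

# sampling the CDF densely and plotting xs against ys reproduces the cloud-like fractal
xs = [i / 4096 for i in range(4097)]
ys = [F(x, p) for x in xs]
```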
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q|(p-1) and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
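As an aside to the order formula quoted earlier in this exchange (this snippet is mine, added only for illustration), one can count the invertible 2x2 matrices over $\mathbb{F}_p$ by brute force and compare with $p(p+1)(p-1)^2$:

```python
from itertools import product

def gl2_order(p):
    """Brute-force count of invertible 2x2 matrices over the field with p elements."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4) if (a * d - b * c) % p)

for p in (2, 3, 5, 7):
    print(p, gl2_order(p), p * (p + 1) * (p - 1) ** 2)   # the two counts agree
```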
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix! 
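To make the word-problem procedure above concrete, here is a toy Python sketch (my own illustration; words are strings over a/A, b/B with uppercase denoting inverses, and the rule list is assumed to satisfy the Dehn-presentation conditions):

```python
def free_reduce(w):
    """Cancel adjacent inverse pairs such as 'aA' or 'Bb'."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def is_trivial(w, rules):
    """Dehn's algorithm: rules is a list of pairs (u, v) with len(u) > len(v)."""
    w = free_reduce(w)
    progress = True
    while w and progress:
        progress = False
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = free_reduce(w[:i] + v + w[i + len(u):])
                progress = True
                break
    return w == ""
```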
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) \implies G = (1+x)e^(-x) + C$ which is obviously not a polyonomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
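A quick SymPy check of that kernel computation (a sketch I added, not part of the chat): apply F to a general cubic and solve for the coefficients that make the result vanish identically.

```python
import sympy as sp

x, a0, a1, a2, a3 = sp.symbols('x a0 a1 a2 a3')
P = a3 * x**3 + a2 * x**2 + a1 * x + a0

F = x * sp.diff(P, x, 2) + (x + 1) * sp.diff(P, x, 3)
print(sp.expand(F))   # 6*a3*x**2 + 2*a2*x + 6*a3*x + 6*a3

# setting every coefficient (in x) to zero forces a2 = a3 = 0, so ker F = {a1*x + a0}
print(sp.solve(sp.Poly(sp.expand(F), x).all_coeffs(), [a0, a1, a2, a3]))
```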
Further reading: Statistics of deadly quarrels by Richardson, 1975. ISBN:0910286108

Zipf's law does not just apply to word frequencies. It's been found to apply to city sizes, income distributions, social networks, and a variety of other contexts. One common form of data we might encounter in a study is what people call a "bag" or "multiset". A bag is like a set in that the elements have no intrinsic order, but a bag differs from a set in one important way. In a set an element is either present or absent, so when we list the elements, each thing in the set is listed only once. In a bag, each element can occur once, twice, or any arbitrary number of times, and two bags are equal if and only if each element occurs the same number of times in both. Bags are such a natural idea that they have been used in mathematics for centuries even though they weren't formally recognized and named until the 1970's. Often, rather than list out all the elements of a multiset repeatedly, we can save space by listing each element along with an integer representing how many times that element appears (its multiplicity). So the bag \[\{ a,a,a,f,b,b,a,a \}\] might be written \[\begin{gather*} \{ (a,5), (b,2), (f,1) \} \quad\text{or just}\quad a^5 b^2 f \end{gather*}\] In computations, vectors are often used to represent bags.

The commonness of words in the King James Bible is an easily accessible example, as the data can be found online (https://www.kingjamesbibleonline.org/Popular-Bible-Words.php): the number of times each of the 250 most common words shows up in the King James version of the Bible, ranked from most to least. (The counts themselves are omitted here.)

You might think "So what? What can we hope to do with a data set of the number of times each word is used? Without the order, the words are a nearly meaningless random jumble." That's true, in the sense that the order is key in communicating meaning. But in another way, it turns out there is at least one somewhat surprising thing lurking in these data. Words are unequal. Some words are used much more commonly than others -- "the" and "and" are used all the time, while "gerrymander" and "upholster" are used rarely, and only in specific contexts. One might argue that the linguistic importance of a word is predicted by how often it is used relative to other words. One way to represent this is to plot the empirical CDF of word usage. To make an empirical CDF, make a list of the number of times each word appears (but forget about the words themselves).
Sort the numbers from smallest (most rarely used) to largest (most commonly used), and plot the value as a function of normalized rank, i.e., \[\mathcal{F} _ n(t)=\frac{\textrm{num of values }\leq t}{\textrm{num of samples, }n}\] where \(n\) is the total number of different words:

plot(value, rank/float(n))

This picture shows that there are lots of words that are rather uncommon. There are almost 800,000 words in the Bible. Half of those words occur less than 2000 times each, while a few words occur more than 10,000 times each. But this picture doesn't reveal much more of a pattern than that. A better picture can be made by ranking all the words, in order from most used to least used, and then plotting the frequency of each word's use as a function of its rank. These are called rank-order statistics. Rank-order statistics are particularly useful if you don't know the denominator. What we'd need for the empirical CDF, above, is frequency, but we don't know how many words there are in the Bible. (OK, there are 783,137 words, but in many contexts the denominator is uncertain or inconvenient to obtain. Recall, for example, our analysis of telephone line loading.) If plotted on a log-log plot, they reveal a surprising pattern.

n = len(values)
values.sort(reverse=True)
rank = range(1, n + 1)
subplot(2, 1, 1)
plot(rank, values, 'bo')
subplot(2, 1, 2)
loglog(rank, values, 'go')

Notice that the rank-order statistic is a straight line on a log-log plot! This indicates a power-law relationship between rank and abundance, \(y=ax^{-b}\). Using linear least squares or polyfit or whatever you prefer, we find the elasticity \(b \approx 0.93\). The predictive power here is a function of rank. We predict that in the Bible, the \(1000^{\textrm{th}}\) most common word will appear roughly 129 times. This phenomenon is an example of Zipf's law: given some body of written work, the frequency of any word is inversely proportional to its rank. Today, Zipf's law is used to refer to any discrete power-law distribution of ranked data: \[f(r;b) \propto 1/r^b.\] The associated probability generating function is \[\hat{r}(s) = \frac{1}{\zeta(b)} \sum _ {r=1}^{\infty} \frac{s^r}{r^b}\] where \(\zeta(b)\) is the Riemann zeta-function. The normalization is, on the one hand, limiting: without normalizing we were able to extrapolate to less frequent words not observed. However, with the normalization we can compare with other works. For example, word frequency in "A Connecticut Yankee in King Arthur's Court" by Mark Twain (rank-frequency data and plot omitted here): they look pretty close! In the case of A Connecticut Yankee, word frequency in the first 250 words is only slightly different, approximately \(f(r)\propto r^{-1}\). (Fun fact: In both cases, the top two words are 'and' and 'the', but the order is switched! It's 'and' first in the Bible and 'the' first in Twain's book.) Warning: Be careful of the interpretation. The empirical CDF at the start of lecture, with words on the \(x\)-axis, gave us the probability that a word chosen from the list of unique words appears in the Bible fewer than \(n\) times. Zipf's law is closely related to the Pareto distribution and power-law distributions.
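For completeness, here is a small sketch of the least-squares fit described above (my own code; it assumes the per-word counts have been loaded into a Python list called word_counts, e.g. from the data source linked earlier):

```python
import numpy as np

counts = np.array(sorted(word_counts, reverse=True), dtype=float)  # word_counts: assumed list of counts
ranks = np.arange(1, len(counts) + 1)

# fit log(count) = log(a) - b*log(rank)
slope, log_a = np.polyfit(np.log(ranks), np.log(counts), 1)
b = -slope
print("elasticity b ~", round(b, 2))
print("predicted count at rank 1000:", np.exp(log_a) * 1000.0 ** (-b))
```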
These are important particularly because the way a distribution's tail thins out can have important consequences in many theories. One proposal to explain the commonness of data fit well by Zipf's law is that there is a network underlying these data and that this network has self-similar properties reflected by power-law behaviors. The more connections you have, the faster you make new connections. Consider a network where each node represents a person and an edge between two nodes represents a "friendship" between two people. What happens to the network as new people join? When someone new joins the network, they are represented by a new node corresponding to this person. And let's suppose that the new person immediately adds \(m\) relationships to distinct existing network members. Let's let \(N(k,t)\) be the expected number of nodes of degree \(k\) (i.e., a person connected to \(k\) others) after \(t\) new people have joined the network. The new member is more likely to find friends that are well-connected. So, suppose the new member's relationships are essentially drawn at random from all the other connections among members. If the network initially had \(T _ 0\) connections, then there will be \(T(t) = T _ 0 + 2mt\) connections total out there after \(t\) new people have joined, since \(t\) people each add \(m\) relationship edges, and each edge is connected to two individuals. A degree-\(k\) individual (a person with \(k\) friends) has \(k\) connections, so the chance that that individual is picked by the new member will be \(k/(2mt + T _ 0)\). The precise process for the evolution of \(N(k,t)\) requires finding the expectation of \(P(x,k,m)\) where \(P\), the probability that \(x\) nodes of degree \(k\) are picked after \(m\) draws of edge connections, with replacement, is determined by the difference equation \[\begin{gather*} P(x+1,k,s+1) = \left( \frac{k N(k)-k x }{T} \right) P(x,k,s) + \left(1 - \frac{k (N(k) - (x+1))}{T} \right) P(x+1,k,s), \\ P(0,k,0) = 1, \quad P(x>0,k,0) = 0, \quad P(x > N(k),k,s) = 0. \end{gather*}\] But when the number of connections is large enough (\(T \gg k N(k)\), \(T\gg m\)) and there are many people (\(N(k) \gg m\)), then, on average, \(m k N(k,t) /(2mt + T _ 0)\) people with \(k\) connections gain a new relationship and now have \(k+1\) connections. If we apply this to people of all degrees, taking \(T _ 0 = 0\), we expect \(N(k,t)\) to obey the following difference equation as each new person joins the network: \[\begin{gather*} N(k,t+1) = N(k,t) + \frac{ m (k-1)}{2 m t} N(k-1, t) - \frac{ m k}{2 m t} N(k, t) + 1_{k = m} \end{gather*}\] If \(k > m\), then we can simplify this to the difference equation \[N(k, t+1) - N(k, t) = \frac{(k-1)}{2t} N(k-1, t) - \frac{k}{2 t} N(k,t).\] At the lower boundary, where \(k = m\), we have the slightly simpler equation \[N(m,t+1) - N(m,t) = 1 - \frac{mN(m,t)}{2t}.\] We can't get \(k < m\) since everyone starts with \(m\) connections! This should make us throw our assumption of \(T _ 0 = 0\) kind-of out the window, but I'll leave it as something for you to ponder. So, we now have a system of linear recurrence equations, where the recurrences occur in two variables: the degree \(k\) of nodes in our network and the number of people \(t\) who have joined so far. The variable \(t\) does not appear in our coefficients, but the variable \(k\) does. Thus, our recurrence system is similar to a linear, variable-coefficient, partial differential equation.
Variable-coefficient differential equations rarely have solutions that can be expressed without recourse to special functions, and so we don't have much prior reason to be optimistic here. But it turns out that if we try the ansatz \[N(k,t) = \frac{Ct}{k(k+1)(k+2)}\] we discover that it works provided \(C = 2 m (m+1)\). And so, we discover the solution \[N(k,t) = \frac{2 m t \left(m + 1\right)}{k \left(k + 1\right) \left(k + 2\right)}\] That's the average number of people with \(k\) friends. (For \(m=10\): the figure and the accompanying SymPy helper code are omitted here.) This is a scale-free relationship, in the sense that the curve is approximately self-similar for a range of observations -- for large \(k\), \(N(k,t) \sim k^{-3}\). From this calculation, we can compute the number of friends the average person will have: \[\textrm{num of friends }=\frac{\sum_{k=m}^\infty k N(k,t)}{\sum_{k=m}^\infty N(k,t)}=2m.\] And the fraction of people who have less than the average number is \[\frac{\sum_{k=m}^{2m-1}N(k,t)}{\sum_{k=m}^\infty N(k,t)}=\frac{(3m+1)}{2(2m+1)}\geq\frac{2}{3}\] (since \(2/3\leq (3m+1)/(2(2m+1))<3/4\) for finite \(m\geq 1\)). But that means more than half the people have less than the average number of friends! That's why your friends probably have more friends than you do...

Another important instance of Zipf's law is the pattern of citations found in scientific articles. Rather than doing equations, let's make a simulation of a simplified version of this. Start with a list containing the integers 1 and 2. Choose one number randomly from the list and append it back to the list. You can use the sample() function from the random module to do this. Append the next number (3) to the list. Repeat these last two steps 1,000 times, incrementing the new number each time, until you get a list of 2,002 integers. Now, count up how many times each integer appears in the list and plot a rank-value plot on a log-log scale. Explain how we know that the plot exhibits a power-law scaling, and approximate the scaling exponent. A sketch of one possible implementation is given below.
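Here is one possible sketch of that simulation (my own code, using random.choice for the single draw; the exercise's sample() would work just as well):

```python
import random
from collections import Counter

random.seed(0)
papers = [1, 2]                           # start with the integers 1 and 2
for new_id in range(3, 1003):             # repeat the two steps 1,000 times
    papers.append(random.choice(papers))  # an old number is repeated in proportion to its count
    papers.append(new_id)                 # then the next integer is appended once
# papers now has 2 + 2*1000 = 2002 entries

counts = sorted(Counter(papers).values(), reverse=True)
ranks = range(1, len(counts) + 1)
for r, c in list(zip(ranks, counts))[:10]:
    print(r, c)   # plotting log(counts) against log(ranks) shows an approximate power law
```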
Exact Controllability of Semilinear Stochastic Evolution Equations Exact Controllability of Semilinear Stochastic Evolution Equations Abstract In this paper we study the exact controllability of the following semilinear stochastic evolution equation in a Hilbert space $X$ $dx(t)=\{Ax(t)+Bu(t)+f(t,\omega,x(t),u(t)) \}dt + \{\Sigma(t) +\sigma(t,\omega,x(t),u(t)) \}dw(t),$ where the control u is a stochastic process in the Hilbert space $A:D(A)\subset X\rightarrow X$, is the infinitesimal generator of a strongly continuous semigroup $\left\{S(t)\right\}_{t\geq 0}$ on $X$ and $B\in L(U,X)$. To this end, we give necessary and sufficient conditions for the exact controllability of the linear part of this system $dx(t)=Ax(t)dt+Bu(t)dt+\Sigma(t)dw(t).$. Then, under a Lipschitzian condition on the non linear terms $f$ and $\sigma$ we prove that the exact controllability of this linear system is preserved by the semilinear stochastic system. Moreover, we obtain explicit formulas for a control steering the system from the initial state $\xi_0$ to a final state $\xi_1$ on time $T >0$, for both system, the linear and the nonlinear one. Finally, we apply this result to a semilinear damped stochastic wave equation.
Learn what to expect in the new updates The documentation for matplotlib is generated from ReStructured Text using the Sphinx documentation generation tool. Sphinx-1.0 or later and numpydoc 0.4 or later is required. The documentation sources are found in the doc/ directory inthe trunk. To build the users guide in html format, cd into doc/ and do: python make.py html or: ./make.py html you can also pass a latex flag to make.py to build a pdf, or pass noarguments to build everything. The output produced by Sphinx can be configured by editing the conf.pyfile located in the doc/. The actual ReStructured Text files are kept in doc/users, doc/devel, doc/api and doc/faq. The main entry point is doc/index.rst, which pulls in the index.rst file for the usersguide, developers guide, api reference, and faqs. The documentation suite isbuilt as a single document in order to make the most effective use of crossreferencing, we want to make navigating the Matplotlib documentation as easy aspossible. Additional files can be added to the various guides by including their base file name (the .rst extension is not necessary) in the table of contents. It is also possible to include other documents through the use of an include statement, such as: .. include:: ../../TODO In addition to the “narrative” documentation described above, matplotlib also defines its API reference documentation in docstrings. For the most part, these are standard Python docstrings, but matplotlib also includes some features to better support documenting getters and setters. Matplotlib uses artist introspection of docstrings to supportproperties. All properties that you want to support through setpand getp should have a set_property and get_propertymethod in the Artist class. Yes, this isnot ideal given python properties or enthought traits, but it is ahistorical legacy for now. The setter methods use the docstring withthe ACCEPTS token to indicate the type of argument the method accepts.e.g., in matplotlib.lines.Line2D: # in lines.pydef set_linestyle(self, linestyle): """ Set the linestyle of the line ACCEPTS: [ '-' | '--' | '-.' | ':' | 'steps' | 'None' | ' ' | '' ] """ Since matplotlib uses a lot of pass-through kwargs, e.g., in everyfunction that creates a line ( plot(), semilogx(), semilogy(), etc...), it can be difficult forthe new user to know which kwargs are supported. Matplotlib usesa docstring interpolation scheme to support documentation of everyfunction that takes a **kwargs. The requirements are: The functions matplotlib.artist.kwdocd and matplotlib.artist.kwdoc() to facilitate this. They combinepython string interpolation in the docstring with the matplotlibartist introspection facility that underlies setp and getp.The kwdocd is a single dictionary that maps class name to adocstring of kwargs. Here is an example from matplotlib.lines: # in lines.pyartist.kwdocd['Line2D'] = artist.kwdoc(Line2D) # in axes.pydef plot(self, *args, **kwargs): """ Some stuff omitted The kwargs are Line2D properties: %(Line2D)s kwargs scalex and scaley, if defined, are passed on to autoscale_view to determine whether the x and y axes are autoscaled; default True. 
See Axes.autoscale_view for more information """ passplot.__doc__ = cbook.dedent(plot.__doc__) % artist.kwdocd Note there is a problem for Artist __init__ methods, e.g., matplotlib.patches.Patch.__init__(),which supports Patch kwargs, since the artist inspector cannotwork until the class is fully defined and we can’t modify the Patch.__init__.__doc__ docstring outside the class definition.There are some some manual hacks in this case, violating the“single entry point” requirement above – see the artist.kwdocd['Patch'] setting in matplotlib.patches. The Sphinx website contains plenty of documentation concerning ReST markup and working with Sphinx in general. Here are a few additional things to keep in mind: Please familiarize yourself with the Sphinx directives for inlinemarkup. Matplotlib’s documentation makes heavy use of cross-referencing andother semantic markup. For example, when referring to external files, use the :file: directive. Function arguments and keywords should be referred to using the emphasisrole. This will keep matplotlib’s documentation consistent with Python’sdocumentation: Here is a description of *argument* Please do not use the default role: Please do not describe `argument` like this. nor the literal role: Please do not describe ``argument`` like this. Sphinx does not support tables with column- or row-spanning cells for latex output. Such tables can not be used when documenting matplotlib. Mathematical expressions can be rendered as png images in html, and in the usual way by latex. For example: :math:`\sin(x_n^2)` yields: , and: .. math:: \int_{-\infty}^{\infty}\frac{e^{i\phi}}{1+x^2\frac{e^{i\phi}}{1+x^2}} yields: Interactive IPython sessions can be illustrated in the documentation using the following directive: .. sourcecode:: ipython In [69]: lines = plot([1,2,3]) which would yield: In [69]: lines = plot([1,2,3]) Footnotes [1] can be added using [#]_, followed later by: .. rubric:: Footnotes.. [#] Footnotes [1] For example. Use the note and warning directives, sparingly, to draw attention toimportant comments: .. note:: Here is a note yields: Note here is a note also: Warning here is a warning Use the deprecated directive when appropriate: .. deprecated:: 0.98 This feature is obsolete, use something else. yields: Deprecated since version 0.98: This feature is obsolete, use something else. Use the versionadded and versionchanged directives, which have similarsyntax to the deprecated role: .. versionadded:: 0.98 The transforms have been completely revamped. New in version 0.98: The transforms have been completely revamped. Use the seealso directive, for example: .. seealso:: Using ReST :ref:`emacs-helpers`: One example A bit about :ref:`referring-to-mpl-docs`: One more yields: Please keep the Glossary in mind when writing documentation. You cancreate a references to a term in the glossary with the :term: role. The autodoc extension will handle index entries for the API, but additional entries in the index need to be explicitly added. Please limit the text width of docstrings to 70 characters. Keyword arguments should be described using a definition list. Note matplotlib makes extensive use of keyword arguments as pass-through arguments, there are a many cases where a table is used in place of a definition list for autogenerated sections of docstrings. Figures can be automatically generated from scripts and included in the docs. 
It is not necessary to explicitly save the figure in the script, this will be done automatically at build time to ensure that the code that is included runs and produces the advertised figure. The path should be relative to the doc directory. Any plotsspecific to the documentation should be added to the doc/pyplotsdirectory and committed to git. Plots from the examples directorymay be referenced through the symlink mpl_examples in the docdirectory. e.g.: .. plot:: mpl_examples/pylab_examples/simple_plot.py The :scale: directive rescales the image to some percentage of theoriginal size, though we don’t recommend using this in most casessince it is probably better to choose the correct figure size and dpiin mpl and let it handle the scaling. A directive for including a matplotlib plot in a Sphinx document. By default, in HTML output, plot will include a .png file with alink to a high-res .png and .pdf. In LaTeX output, it will include a.pdf. The source code for the plot may be included in one of three ways: A path to a source fileas the argument to the directive:.. plot:: path/to/plot.py When a path to a source file is given, the content of the directive may optionally contain a caption for the plot:.. plot:: path/to/plot.py This is the caption for the plot Additionally, one my specify the name of a function to call (with no arguments) immediately after importing the module:.. plot:: path/to/plot.py plot_function1 Included as inline contentto the directive:.. plot:: import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np img = mpimg.imread('_static/stinkbug.png') imgplot = plt.imshow(img) Using doctestsyntax:.. plot:: A plotting example: >>> import matplotlib.pyplot as plt >>> plt.plot([1,2,3], [4,5,6]) The plot directive supports the following options: format : {‘python’, ‘doctest’} Specify the format of the input include-source : bool Whether to display the source code. The default can be changed using the plot_include_sourcevariable in conf.py encoding : str If this source file is in a non-UTF8 or non-ASCII encoding, the encoding must be specified using the :encoding:option. The encoding will not be inferred using the -*- coding -*-metacomment. context : bool or str If provided, the code will be run in the context of all previous plot directives for which the :context:option was specified. This only applies to inline code plot directives, not those run from files. If the :context: resetis specified, the context is reset for this and future plots. nofigs : bool If specified, the code block will be run, but no figures will be inserted. This is usually useful with the :context:option. Additionally, this directive supports all of the options of the image directive, except for target (since plot will add its owntarget). These include alt, height, width, scale, align and class. The plot directive has the following configuration options: plot_include_source Default value for the include-source option plot_html_show_source_link Whether to show a link to the source in HTML. plot_pre_code Code that should be executed before each plot. plot_basedir Base directory, to which plot::file names are relative to. (If None or empty, file names are relative to the directory where the file containing the directive is.) plot_formats File formats to generate. List of tuples or strings:[(suffix, dpi), suffix, ...] that determine the file format and the DPI. For entries whose DPI was omitted, sensible defaults are chosen. 
plot_html_show_formats
    Whether to show links to the files in HTML.

plot_rcparams
    A dictionary containing any non-standard rcParams that should be applied before each plot.

plot_apply_rcparams
    By default, rcParams are applied when the context option is not used in a plot directive. This configuration option overrides this behavior and applies rcParams before each plot.

plot_working_directory
    By default, the working directory will be changed to the directory of the example, so the code can get at its data files, if any. Also its path will be added to sys.path so it can import any helper modules sitting beside it. This configuration option can be used to specify a central directory (also added to sys.path) where data files and helper modules for all code are located.

plot_template
    Provide a customized template for preparing restructured text.

Any figures that rely on optional system configurations need to be handled a little differently. These figures are not to be generated during the documentation build, in order to keep the prerequisites to the documentation effort as low as possible. Please run the doc/pyplots/make.py script when adding such figures, and commit the script and the images to git. Please also add a line to the README in doc/pyplots for any additional requirements necessary to generate a new figure. Once these steps have been taken, these figures can be included in the usual way:

.. plot:: pyplots/tex_unicode_demo.py
   :include-source:

The source of the files in the examples directory is automatically included in the HTML docs. An image is generated and included for all examples in the api and pylab_examples directories. To exclude the example from having an image rendered, insert the following special comment anywhere in the script:

# -*- noplot -*-

We have a matplotlib google/gmail account with username mplgithub which we used to set up the github account but which can be used for other purposes, like hosting google docs or youtube videos. You can embed a matplotlib animation in the docs by first saving the animation as a movie using matplotlib.animation.Animation.save(), then uploading it to matplotlib's youtube channel and inserting the embedding string youtube provides, like:

.. raw:: html

   <iframe width="420" height="315"
     src="http://www.youtube.com/embed/32cjc6V0OZY"
     frameborder="0" allowfullscreen>
   </iframe>

An example save command to generate a movie looks like this:

ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y)),
                              interval=25, blit=True, init_func=init)
ani.save('double_pendulum.mp4', fps=15)

Contact Michael Droettboom for the login password to upload youtube videos or google docs to the mplgithub account.

In the documentation, you may want to refer to a document in the matplotlib src, e.g., a license file or an image file from mpl-data. Refer to it via a relative path from the document where the rst file resides; e.g., in users/navigation_toolbar.rst, we refer to the image icons with:

.. image:: ../../lib/matplotlib/mpl-data/images/subplots.png

In the users subdirectory, if I want to refer to a file in the mpl-data directory, I use the symlink directory. For example, from customizing.rst:

.. literalinclude:: ../../lib/matplotlib/mpl-data/matplotlibrc

One exception to this is when referring to the examples dir. Relative paths are extremely confusing in the sphinx plot extensions, so without getting into the dirty details, it is easier to simply include a symlink to the files at the top doc level directory.
This way, API documents like matplotlib.pyplot.plot() can refer to the examples in a known location.

In the top level doc directory we have symlinks pointing to the mpl examples:

home:~/mpl/doc> ls -l mpl_*
mpl_examples -> ../examples

So we can include plots from the examples dir using the symlink:

.. plot:: mpl_examples/pylab_examples/simple_plot.py

We used to use a symlink for mpl-data too, but the distro becomes very large on platforms that do not support links (e.g., the font files are duplicated and large).

To maximize internal consistency in section labeling and references, use hyphen separated, descriptive labels for section references, e.g.:

.. _howto-webapp:

and refer to it using the standard reference syntax:

See :ref:`howto-webapp`

Keep in mind that we may want to reorganize the contents later, so let's avoid top level names in references like user or devel or faq unless necessary, because for example the FAQ "what is a backend?" could later become part of the users guide, so the label:

.. _what-is-a-backend

is better than:

.. _faq-backend

In addition, since underscores are widely used by Sphinx itself, let's prefer hyphens to separate words.

For everything but top level chapters, please use Upper lower for section titles, e.g., Possible hangups rather than Possible Hangups.

Class inheritance diagrams can be generated with the inheritance-diagram directive. To use it, you provide the directive with a number of class or module names (separated by whitespace). If a module name is provided, all classes in that module will be used. All of the ancestors of these classes will be included in the inheritance diagram.

A single option is available: parts controls how many parts of the path to the class are shown. For example, if parts == 1, the class matplotlib.patches.Patch is shown as Patch. If parts == 2, it is shown as patches.Patch. If parts == 0, the full path is shown.

Example:

.. inheritance-diagram:: matplotlib.patches matplotlib.lines matplotlib.text
   :parts: 2

There is an emacs mode rst.el which automates many important ReST tasks like building and updating table-of-contents, and promoting or demoting section headings. Here is the basic .emacs configuration:

(require 'rst)
(setq auto-mode-alist
      (append '(("\\.txt$" . rst-mode)
                ("\\.rst$" . rst-mode)
                ("\\.rest$" . rst-mode)) auto-mode-alist))

Some helpful functions:

C-c TAB - rst-toc-insert
  Insert table of contents at point
C-c C-u - rst-toc-update
  Update the table of contents at point
C-c C-l rst-shift-region-left
  Shift region to the left
C-c C-r rst-shift-region-right
  Shift region to the right
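Returning to the docstring-interpolation convention described at the start of this section, here is a stripped-down sketch of the "single entry point" idea. This is my own simplification, not matplotlib's actual code, and the kwdocd contents below are invented for illustration:

import textwrap

# hypothetical shared kwarg documentation, maintained in exactly one place
kwdocd = {'Line2D': 'linewidth: float\nlinestyle: one of -, --, :\ncolor: any matplotlib color'}

def plot(*args, **kwargs):
    """
    Plot lines and/or markers.

    The kwargs are Line2D properties:
    %(Line2D)s

    See Axes.autoscale_view for more information.
    """
    pass

# interpolate the shared kwarg documentation into the docstring after the function is defined
plot.__doc__ = textwrap.dedent(plot.__doc__) % kwdocd
print(plot.__doc__)

The point of the pattern is that the shared text lives in one dictionary and every pass-through function interpolates it, which is exactly why the Patch.__init__ case mentioned above needs a manual workaround.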
Definition:Square/Function

Definition

Let $\F$ denote one of the standard classes of numbers: $\N$, $\Z$, $\Q$, $\R$, $\C$.

The square (function) on $\F$ is the mapping $f: \F \to \F$ defined as:

$\forall x \in \F: \map f x = x \times x$

where $\times$ denotes multiplication.

The square (function) on $\F$ is the mapping $f: \F \to \F$ defined as:

$\forall x \in \F: \map f x = x^2$

where $x^2$ denotes the $2$nd power of $x$.

Square Function in Specific Number Systems

Specific contexts in which the square function is used include the following:

The (real) square function is the real function $f: \R \to \R$ defined as:

$\forall x \in \R: \map f x = x^2$

The (integer) square function is the integer function $f: \Z \to \Z$ defined as:

$\forall x \in \Z: \map f x = x^2$

Also see

Results about the square function can be found here.
The quantity pH, or "power of hydrogen," is a numerical representation of the acidity or basicity of a solution. It can be used to calculate the concentration of hydrogen ions \([H^+]\) or hydronium ions \([H_3O^+]\) in an aqueous solution. Solutions with low pH are the most acidic, and solutions with high pH are the most basic.

Definitions

Although pH is formally defined in terms of activities, it is often estimated using the free proton or hydronium concentration:

\[ pH \approx -\log[H_3O^+] \label{eq1}\]

or

\[ pH \approx -\log[H^+] \label{eq2}\]

\(K_a\), the acid ionization constant, is the equilibrium constant for chemical reactions involving weak acids in aqueous solution. The numerical value of \(K_a\) is used to predict the extent of acid dissociation. A large \(K_a\) value indicates a stronger acid (more of the acid dissociates) and a small \(K_a\) value indicates a weaker acid (less of the acid dissociates). For a chemical equation of the form

\[ HA + H_2O \leftrightharpoons H_3O^+ + A^- \]

\(K_a\) is expressed as

\[ K_a = \dfrac{[H_3O^+][A^-]}{[HA]} \label{eq3} \]

where \(HA\) is the undissociated acid and \(A^-\) is the conjugate base of the acid. Since \(H_2O\) is a pure liquid, it has an activity equal to one and is ignored in the equilibrium constant expression in Equation \ref{eq3}, as in other equilibrium constants.

Howto: Solving for \(K_a\)

When given the pH value of a solution, solving for \(K_a\) requires the following steps:

Set up an ICE table for the chemical reaction.

Solve for the concentration of \(\ce{H3O^{+}}\) using the equation for pH: \[ [H_3O^+] = 10^{-pH} \]

Use the concentration of \(\ce{H3O^{+}}\) to solve for the concentrations of the other products and reactants.

Plug all concentrations into the equation for \(K_a\) and solve.

Example \(\PageIndex{1}\)

Calculate the \(K_a\) value of a 0.2 M aqueous solution of propionic acid (\(\ce{CH3CH2CO2H}\)) with a pH of 4.88.

\[ \ce{CH_3CH_2CO_2H + H_2O \leftrightharpoons H_3O^+ + CH_3CH_2CO_2^- } \nonumber\]

SOLUTION

ICE table:

ICE                           | \(\ce{ CH_3CH_2CO_2H }\) | \(\ce{ H_3O^+ }\) | \(\ce{ CH_3CH_2CO_2^- }\)
Initial Concentration (M)     | 0.2                      | 0                 | 0
Change in Concentration (M)   | -x                       | +x                | +x
Equilibrium Concentration (M) | 0.2 - x                  | x                 | x

According to the definition of pH (Equation \ref{eq1}),

\[\begin{align*} \log[H_3O^+] &= -pH = -4.88 \\[4pt] [H_3O^+] &= 10^{-4.88} \\[4pt] &= 1.32 \times 10^{-5} \\[4pt] &= x \end{align*}\]

According to the definition of \(K_a\) (Equation \ref{eq3}),

\[\begin{align*} K_a &= \dfrac{[H_3O^+][CH_3CH_2CO_2^-]}{[CH_3CH_2CO_2H]} \\[4pt] &= \dfrac{x^2}{0.2 - x} \\[4pt] &= \dfrac{(1.32 \times 10^{-5})^2}{0.2 - 1.32 \times 10^{-5}} \\[4pt] &= 8.69 \times 10^{-10} \end{align*}\]

References

Petrucci, et al. General Chemistry: Principles & Modern Applications; Ninth Edition, Pearson/Prentice Hall; Upper Saddle River, New Jersey 07.

Contributors

Paige Norberg (UCD) and Gabriela Mastro (UCD)
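As a quick numerical companion to the worked example (my own sketch, not part of the original page), the same steps can be carried out in a few lines of Python; the numbers are the ones assumed above:

pH = 4.88                 # given pH of the solution
C0 = 0.2                  # initial acid concentration in M

x = 10 ** (-pH)           # [H3O+] at equilibrium, from pH = -log10[H3O+]
Ka = x ** 2 / (C0 - x)    # Ka = [H3O+][A-]/[HA], with [H3O+] = [A-] = x

print(x)    # ~1.32e-05 M
print(Ka)   # ~8.7e-10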
Search Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. 
Centrality classes are determined via the energy ...
Peter Saveliev

Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA. My current projects are these two books: In part, the latter book is about Discrete Calculus, which is based on a simple idea:
$$\lim_{\Delta x\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$
I have been involved in research in algebraic topology and several other fields but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance.

Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture called Painting Hands. Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases:

(a) One $-1$: mirror symmetry, then pen draws pen. No!
(b) Two $-1$s: $180$ degree rotation, then we have two right (or two left) hands. No!
(c) Three $-1$s: central symmetry. Yes!

-Why is discrete calculus better than infinitesimal calculus? -Why? -Because it can be integer-valued! -And? -And the integer-valued calculus can detect if the space is non-orientable! Read Integer-valued calculus, an essay making a case for discrete calculus by appealing to topology and physics.

-The political "spectrum" might be a circle! -So? -Then there can be no fair decision-making system! Read The political spectrum is a circle, an essay based on the very last section of the topology book.

Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over that years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter.
ROTATIONAL ANALYSIS OF THE $A^{3}\Pi(1)-X^{1}\Sigma^{+}$ SYSTEM OF ICl IN EMISSION AND IN ABSORPTION

Issue Date: 1978

Publisher: Ohio State University

Abstract: Detailed rotational analyses have been made of the $A^{3}\Pi(1) \leftrightarrow X^{1}\Sigma^{+}$ system of ICl, both in emission and absorption. The A $\rightarrow$ X emission system was excited by recombination of ground state I$(^{2}P_{3/2})$ and Cl$(^{2}P_{3/2})$ atoms in a flow system at total pressures near 2 Torr. Ten bands with $0\leq v^{\prime}\leq 2$ and $6\leq v^{\prime\prime}\leq 9$ of $\mathrm{I}^{35}\mathrm{Cl}$ were recorded photoelectrically in the range $\lambda$ 8000-9800 \AA, and were analysed to provide the first reliable rotational constants and term values for these levels. The A $\leftarrow$ X system in absorption was photographed at high resolution using a 3.5 m Ebert spectrograph in the range $\lambda$ 7000-8000 \AA. Sixteen bands of $\mathrm{I}^{35}\mathrm{Cl}$ with $3\leq v^{\prime}\leq 8$ and $2\leq v^{\prime\prime}\leq 5$ were analyzed. The present data and those obtained by refitting the precisely measured wavenumbers from the absorption spectrum at shorter wavelengths$^{1}$ were merged$^{2}$ to provide a self-consistent and extensive set of molecular constants for both the $A^{3}\Pi(1)$ and $X^{1}\Sigma^{+}$ states. The RKR turning points for these states and the Franck-Condon factors for the A - X system have been calculated.

Description: $^{1}$E. Hulthen et al. Arkiv Fysik, 14, 31 (1958); ibid 18, 479 (1960). $^{2}$D. L. Albritton, A. L. Schmeltekopf and R. N. Zare, J. Mol. Spectrosc. 67, 132 (1977).

Author Institution: Department of Chemistry, Dalhousie University

Type: article

Other Identifiers: 1978-Sigma-07

Items in Knowledge Bank are protected by copyright, with all rights reserved, unless otherwise indicated.
Take the expression $\sum_{k=1}^\infty a_k$. Sometimes this expression refers to the sequence of partial sums $\left(\sum_{k=1}^n a_k\right)_{n\in\mathbb N}$ and sometimes to the limit of this sequence, $\lim_{n\to\infty} \sum_{k=1}^n a_k$ (when this limit exists). For example, in the statement

The series $\sum_{k=1}^\infty \left(\frac 12\right)^k$ converges.

the term $\sum_{k=1}^\infty \left(\frac 12\right)^k$ is a sequence. In the statement

It is $\sum_{k=1}^\infty \left(\frac 12\right)^k = 1$.

the same expression $\sum_{k=1}^\infty \left(\frac 12\right)^k$ is a real number.

For me this situation is unsatisfactory. Any expression in mathematics should have a unique interpretation and should not be ambiguous in its meaning. For example, we justify the need for the expression $\sum_{k=1}^n a_k$ because it might not be clear how to interpret an expression like $a_1 + a_2 + a_3 + \ldots + a_n$ (i.e. how to fill the dots). So there should also be no ambiguity in $\sum_{k=1}^\infty a_k$.

My question: Is there a textbook or lecture script which distinguishes the two interpretations of $\sum_{k=1}^\infty \left(\frac 12\right)^k$, i.e. the sequence of partial sums $\left(\sum_{k=1}^n a_k\right)_{n\in\mathbb N}$ and the limit $\lim_{n\to\infty} \sum_{k=1}^n a_k$? I am looking for a textbook where the author omits the expression $\sum_{k=1}^\infty \left(\frac 12\right)^k$, or uses this expression only for the partial sum sequence, or only for the limit.

Reason for my question: I am looking for a clear notation I can use for my textbook. So suggestions for a better notation are also welcome ;-)
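For concreteness, here is a tiny numerical illustration (my own, not part of the question) of the two readings of the same symbol for $a_k=(1/2)^k$: the partial-sum sequence on the one hand, and its limit on the other.

partial_sums = []
s = 0.0
for n in range(1, 11):
    s += (1 / 2) ** n           # add the n-th term (1/2)^n
    partial_sums.append(s)      # the "sequence" reading: (sum_{k=1}^n (1/2)^k)_n

print(partial_sums[:5])   # [0.5, 0.75, 0.875, 0.9375, 0.96875]
print(partial_sums[-1])   # 0.999..., the partial sums approach 1, the "limit" reading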
I am about halfway through the most important part of Onsager's paper, so I'll try to summarize what I've understood so far; I'll edit later when I have more to say. Onsager starts by using the 1D model to illustrate his methodology and fix some notations, so I'm gonna follow him but I'll use some more "modern" notations. In the 1D Ising model, only neighbouring spins interact, therefore the interaction energy is $$E=-J\mu^{(k)}\mu^{(k-1)}$$ where $J$ is the interaction strength. The partition function is $$Z = \sum_{\mu^{(1)},\ldots,\mu^{(N)}=\pm 1} e^{\sum_k J\mu^{(k)}\mu^{(k-1)}/kT}$$ Onsager notes that the exponential can be seen as a matrix component: $$\langle \mu^{(k-1)}| V | \mu^{(k)} \rangle = e^{J\mu^{(k)}\mu^{(k-1)}/kT}$$ The partition sum becomes a product of matrices, summed over the boundary spins, in this notation: $$Z = \sum_{\mu^{(1)},\mu^{(N)}=\pm 1} \langle \mu^{(1)}| V^{N-1} | \mu^{(N)} \rangle$$ So for large powers $N$ of $V$, the largest eigenvalue will dominate. In this case, $V$ is just a $2\times 2$ matrix and the largest eigenvalue is $2\cosh(J/kT)=2\cosh(H)$, introducing $H=J/kT$. Now, to construct the 2D Ising model, Onsager proposes to build it by adding a 1D chain to another 1D chain, and then repeat the procedure to obtain the full 2D model. First, he notes that the energy of the newly added chain $\mu$ will depend on the chain $\mu'$ to which it is added as follows: $$E = -\sum_{j=1}^n J \mu_j \mu'_j $$ But if we exponentiate this to go to the partition formula, we get the $n$th power of the matrix we defined previously, so using notation that Onsager introduced there $$ V_1 = (2 \sinh(2H))^{n/2} \exp(H^{*}B)$$ with $H^{*}=\tanh^{-1}(e^{-2H})$ and $B=\sum_j C_j$ with $C_j$ the matrix operator that works on a chain as follows $$C_j |\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle = |\mu_1,\ldots,-\mu_j,\ldots,\mu_n \rangle $$ Then, to account for the energy contribution from spins within a chain, he notes that the total energy is $$E = -J' \sum_{j=1}^n \mu_j\mu_{j+1}$$ adding periodicity, that is, the $n$th atom is a neighbour of the 1st. Also note that the intrachain interaction strength need not be equal to the interchain interaction strength. He introduces new matrix operators $s_j$ which act on a chain as $$s_j|\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle = \mu_j |\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle $$ and in this way constructs a matrix $$V_2 = \exp(H'A) = \exp(H'\sum_j s_j s_{j+1})$$ Now, the 2D model can be constructed by adding a chain through application of $V_1$ and then defining the internal interactions by using $V_2$. So one gets the following chain of operations $$\cdots V_2 V_1 V_2 V_1 V_2 V_1 V_2 V_1 V_2 V_1$$ It is thus clear that the matrix to be analyzed in our 2D model is $V=V_2 V_1$. This is our new eigenvalue problem: $$\lambda | \mu_1,\ldots,\mu_n \rangle = \exp(H'\sum_j s_j s_{j+1}) \sum_{\mu'_1,\ldots,\mu'_n=\pm 1} \exp(H\sum_j \mu_j \mu'_{j})| \mu'_1,\ldots,\mu'_n \rangle$$ Now, the quaternions come into play. Onsager notes that the operators $s_j$ and $C_j$ he constructed form a quaternion algebra. Basically, the basis elements $(1,s_j,C_j,s_jC_j)$ generate the quaternions and since for different $j$'s the operators commute, we have a tensor product of quaternions, thus a quaternion algebra. -- To be continued -- This post imported from StackExchange Physics at 2014-04-01 16:36 (UCT), posted by SE-user Raskolnikov
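As a small sanity check of the transfer-matrix statement for the 1D chain (my own sketch, not from Onsager's paper or the post; the value of H is an arbitrary test value), the largest eigenvalue of the 2x2 matrix is indeed $2\cosh(H)$ and dominates the partition sum of a long closed chain:

import numpy as np

H = 0.7                                    # dimensionless coupling J/kT (arbitrary test value)
V = np.array([[np.exp(H),  np.exp(-H)],    # <mu|V|mu'> = exp(H mu mu') for mu, mu' = +1, -1
              [np.exp(-H), np.exp(H)]])

eigvals = np.linalg.eigvalsh(V)
print(eigvals.max(), 2 * np.cosh(H))       # largest eigenvalue equals 2 cosh(H)

# partition function of a periodic chain of N spins: Z = Tr(V^N), dominated by lambda_max^N
N = 20
Z = np.trace(np.linalg.matrix_power(V, N))
print(Z, eigvals.max() ** N)               # nearly equal for large N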
The Monster is the largest of the 26 sporadic simple groups and has order

808 017 424 794 512 875 886 459 904 961 710 757 005 754 368 000 000 000 = 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71.

It is not so much the size of its order that makes it hard to do actual calculations in the Monster, but rather the dimensions of its smallest non-trivial irreducible representations (196 883 for the smallest, 21 296 876 for the next one, and so on). In characteristic two there is an irreducible representation of one dimension less (196 882) which appears to be of great use to obtain information. For example, Robert Wilson used it to prove that The Monster is a Hurwitz group. This means that the Monster is generated by two elements g and h satisfying the relations

$g^2 = h^3 = (gh)^7 = 1 $

Geometrically, this implies that the Monster is the automorphism group of a Riemann surface of genus g satisfying the Hurwitz bound 84(g-1)=#Monster. That is,

g=9619255057077534236743570297163223297687552000000001=42151199 * 293998543 * 776222682603828537142813968452830193

Or, in analogy with the Klein quartic which can be constructed from 24 heptagons in the tiling of the hyperbolic plane, there is a finite region of the hyperbolic plane, tiled with heptagons, from which we can construct this monster curve by gluing the boundary in a specific way so that we get a Riemann surface with exactly 9619255057077534236743570297163223297687552000000001 holes. This finite part of the hyperbolic tiling (consisting of #Monster/7 heptagons) we'll call the empire of the monster and we'd love to describe it in more detail.

Look at the half-edges of all the heptagons in the empire (the picture above shows that every edge is cut in two by a blue geodesic). There are exactly #Monster such half-edges and they form a dessin d'enfant for the monster-curve. If we label these half-edges by the elements of the Monster, then multiplication by g in the Monster interchanges the two half-edges making up a heptagonal edge in the empire and multiplication by h in the Monster takes a half-edge to the one encountered first by going counter-clockwise in the vertex of the heptagonal tiling. Because g and h generate the Monster, the dessin of the empire is just a concrete realization of the Monster.

Because g is of order two and h is of order three, the two permutations they determine on the dessin give a group epimorphism

$C_2 \ast C_3 = PSL_2(\mathbb{Z}) \rightarrow \mathbb{M} $

from the modular group $PSL_2(\mathbb{Z}) $ onto the Monster-group.

In noncommutative geometry, the group-algebra of the modular group $\mathbb{C} PSL_2(\mathbb{Z}) $ can be interpreted as the coordinate ring of a noncommutative manifold (because it is formally smooth in the sense of Kontsevich-Rosenberg or Cuntz-Quillen) and the group-algebra of the Monster $\mathbb{C} \mathbb{M} $ itself corresponds in this picture to a finite collection of 'points' on the manifold. Using this geometric viewpoint we can now ask the question: What does the Monster see of the modular group?

To make sense of this question, let us first consider the commutative equivalent: what does a point P see of a commutative variety X? Evaluation of polynomial functions in P gives us an algebra epimorphism $\mathbb{C}[X] \rightarrow \mathbb{C} $ from the coordinate ring of the variety $\mathbb{C}[X] $ onto $\mathbb{C} $ and the kernel of this map is the maximal ideal $\mathfrak{m}_P $ of $\mathbb{C}[X] $ consisting of all functions vanishing in P.
Equivalently, we can view the point $P= \mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P $ as the scheme corresponding to the quotient $\mathbb{C}[X]/\mathfrak{m}_P $. Call this the 0-th formal neighborhood of the point P. This sounds pretty useless, but let us now consider higher-order formal neighborhoods. Call the affine scheme $\mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P^{n+1} $ the n-th formal neighborhood of P; then the first neighborhood, that is, with coordinate ring $\mathbb{C}[X]/\mathfrak{m}_P^2 $, gives us tangent-information. Alternatively, it gives the best linear approximation of functions near P. The second neighborhood $\mathbb{C}[X]/\mathfrak{m}_P^3 $ gives us the best quadratic approximation of functions near P, etc. etc.

These successive quotients by powers of the maximal ideal $\mathfrak{m}_P $ form a system of algebra epimorphisms $\ldots \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n+1}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} \rightarrow \ldots \ldots \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{2}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P} = \mathbb{C} $ and its inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} = \hat{\mathcal{O}}_{X,P} $ is the completion of the local ring in P and contains all the infinitesimal information (to any order) of the variety X in a neighborhood of P. That is, this completion $\hat{\mathcal{O}}_{X,P} $ contains all information that P can see of the variety X. In case P is a smooth point of X, then X is a manifold in a neighborhood of P and then this completion $\hat{\mathcal{O}}_{X,P} $ is isomorphic to the algebra of formal power series $\mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ where the $x_i $ form a local system of coordinates for the manifold X near P.

Right, after this lengthy recollection, back to our question: what does the Monster see of the modular group? Well, we have an algebra epimorphism $\pi~:~\mathbb{C} PSL_2(\mathbb{Z}) \rightarrow \mathbb{C} \mathbb{M} $ and in analogy with the commutative case, all information the Monster can gain from the modular group is contained in the $\mathfrak{m} $-adic completion $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} = \underset{\leftarrow}{lim}~\frac{\mathbb{C} PSL_2(\mathbb{Z})}{\mathfrak{m}^n} $ where $\mathfrak{m} $ is the kernel of the epimorphism $\pi $ sending the two free generators of the modular group $PSL_2(\mathbb{Z}) = C_2 \ast C_3 $ to the permutations g and h determined by the dessin of the heptagonal tiling of the Monster's empire.

As it is a hopeless task to determine the Monster-empire explicitly, it seems even more hopeless to determine the kernel $\mathfrak{m} $, let alone the completed algebra… But, (surprise) we can compute $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ as explicitly as in the commutative case, where we have $\hat{\mathcal{O}}_{X,P} \simeq \mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ for a point P on a manifold X.

Here the details: the quotient $\mathfrak{m}/\mathfrak{m}^2 $ has a natural structure of $\mathbb{C} \mathbb{M} $-bimodule. The group-algebra of the Monster is a semi-simple algebra, that is, a direct sum of full matrix-algebras of sizes corresponding to the dimensions of the irreducible Monster-representations. That is, $\mathbb{C} \mathbb{M} \simeq \mathbb{C} \oplus M_{196883}(\mathbb{C}) \oplus M_{21296876}(\mathbb{C}) \oplus \ldots \ldots \oplus M_{258823477531055064045234375}(\mathbb{C}) $ with exactly 194 components (the number of irreducible Monster-representations).
For any $\mathbb{C} \mathbb{M} $-bimodule $M $ one can form the tensor-algebra $T_{\mathbb{C} \mathbb{M}}(M) = \mathbb{C} \mathbb{M} \oplus M \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus \ldots \ldots $ and applying the formal neighborhood theorem for formally smooth algebras (such as $\mathbb{C} PSL_2(\mathbb{Z}) $) due to Joachim Cuntz (left) and Daniel Quillen (right) we have an isomorphism of algebras $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} \simeq \widehat{T_{\mathbb{C} \mathbb{M}}(\mathfrak{m}/\mathfrak{m}^2)} $ where the right-hand side is the completion of the tensor-algebra (at the unique graded maximal ideal) of the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $, so we’d better describe this bimodule explicitly. Okay, so what’s a bimodule over a semisimple algebra of the form $S=M_{n_1}(\mathbb{C}) \oplus \ldots \oplus M_{n_k}(\mathbb{C}) $? Well, a simple S-bimodule must be either (1) a factor $M_{n_i}(\mathbb{C}) $ with all other factors acting trivially or (2) the full space of rectangular matrices $M_{n_i \times n_j}(\mathbb{C}) $ with the factor $M_{n_i}(\mathbb{C}) $ acting on the left, $M_{n_j}(\mathbb{C}) $ acting on the right and all other factors acting trivially. That is, any S-bimodule can be represented by a quiver (that is a directed graph) on k vertices (the number of matrix components) with a loop in vertex i corresponding to each simple factor of type (1) and a directed arrow from i to j corresponding to every simple factor of type (2). That is, for the Monster, the bimodule $\mathfrak{m}/\mathfrak{m}^2 $ is represented by a quiver on 194 vertices and now we only have to determine how many loops and arrows there are at or between vertices. Using Morita equivalences and standard representation theory of quivers it isn’t exactly rocket science to determine that the number of arrows between the vertices corresponding to the irreducible Monster-representations $S_i $ and $S_j $ is equal to $dim_{\mathbb{C}}~Ext^1_{\mathbb{C} PSL_2(\mathbb{Z})}(S_i,S_j)-\delta_{ij} $ Now, I’ve been wasting a lot of time already here explaining what representations of the modular group have to do with quivers (see for example here or some other posts in the same series) and for quiver-representations we all know how to compute Ext-dimensions in terms of the Euler-form applied to the dimension vectors. Right, so for every Monster-irreducible $S_i $ we have to determine the corresponding dimension-vector $~(a_1,a_2;b_1,b_2,b_3) $ for the quiver $\xymatrix{ & & & & \vtx{b_1} \\ \vtx{a_1} \ar[rrrru]^(.3){B_{11}} \ar[rrrrd]^(.3){B_{21}} \ar[rrrrddd]_(.2){B_{31}} & & & & \\ & & & & \vtx{b_2} \\ \vtx{a_2} \ar[rrrruuu]_(.7){B_{12}} \ar[rrrru]_(.7){B_{22}} \ar[rrrrd]_(.7){B_{23}} & & & & \\ & & & & \vtx{b_3}} $ Now the dimensions $a_i $ are the dimensions of the +/-1 eigenspaces for the order 2 element g in the representation and the $b_i $ are the dimensions of the eigenspaces for the order 3 element h. So, we have to determine to which conjugacy classes g and h belong, and from Wilson’s paper mentioned above these are classes 2B and 3B in standard Atlas notation. So, for each of the 194 irreducible Monster-representations we look up the character values at 2B and 3B (see below for the first batch of those) and these together with the dimensions determine the dimension vector $~(a_1,a_2;b_1,b_2,b_3) $. For example take the 196883-dimensional irreducible. 
Its 2B-character is 275 and the 3B-character is 53. So we are looking for a dimension vector such that $a_1+a_2=196883, a_1-275=a_2 $ and $b_1+b_2+b_3=196883, b_1-53=b_2=b_3 $, giving us for that representation the dimension vector of the quiver above $~(98579,98304,65663,65610,65610) $.

Okay, so for each of the 194 irreducibles $S_i $ we have determined a dimension vector $~(a_1(i),a_2(i);b_1(i),b_2(i),b_3(i)) $; then standard quiver-representation theory asserts that the number of loops in the vertex corresponding to $S_i $ is equal to $dim(S_i)^2 + 1 - a_1(i)^2-a_2(i)^2-b_1(i)^2-b_2(i)^2-b_3(i)^2 $ and that the number of arrows from vertex $S_i $ to vertex $S_j $ is equal to $dim(S_i)dim(S_j) - a_1(i)a_1(j)-a_2(i)a_2(j)-b_1(i)b_1(j)-b_2(i)b_2(j)-b_3(i)b_3(j) $

This data then determines completely the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $ and hence the structure of the completion $\widehat{\mathbb{C} PSL_2}_{\mathfrak{m}} $ containing all information the Monster can gain from the modular group.

But then, one doesn't have to go for the full regular representation of the Monster. Any faithful permutation representation will do, so we might as well go for the one of minimal dimension. That one is known to correspond to the largest maximal subgroup of the Monster, which is known to be a two-fold extension $2.\mathbb{B} $ of the Baby-Monster. The corresponding permutation representation is of dimension 97239461142009186000 and decomposes into Monster-irreducibles $S_1 \oplus S_2 \oplus S_4 \oplus S_5 \oplus S_9 \oplus S_{14} \oplus S_{21} \oplus S_{34} \oplus S_{35} $ (in standard Atlas-ordering) and hence, repeating the arguments above, we get a quiver on just 9 vertices! The actual numbers of loops and arrows (I forgot to mention this, but the quivers obtained are actually symmetric) were found after laborious computations mentioned in this post and the details I'll make available here. Anyone who can spot a relation between the numbers obtained and any other part of mathematics will obtain quantities of genuine (i.e. non-Inbev) Belgian beer…
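For the record, here is a short script (mine, not from the post) that reproduces the dimension vector quoted above for the 196883-dimensional irreducible and then evaluates the stated loop-count formula at that vertex:

dim, chi_2B, chi_3B = 196883, 275, 53      # dimension and character values at classes 2B and 3B

a1 = (dim + chi_2B) // 2                   # +1 eigenspace of the order-2 element g
a2 = (dim - chi_2B) // 2                   # -1 eigenspace of g
b1 = (dim + 2 * chi_3B) // 3               # eigenspaces of the order-3 element h; the character
b2 = b3 = (dim - chi_3B) // 3              # is real here, so the two non-trivial ones coincide

print(a1, a2, b1, b2, b3)                  # 98579 98304 65663 65610 65610

loops = dim**2 + 1 - a1**2 - a2**2 - b1**2 - b2**2 - b3**2
print(loops)                               # loop count at this vertex, per the formula above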
Search Now showing items 11-20 of 55 Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... 
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and/or install it and does a test LaTeX rendering

Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.

In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^\infty(\cos\dots$

Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval

@AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's some food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?

@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.

> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.

I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...

Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions.

@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area

It is asking how many terms of the Euler-Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
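To make the Euler-Maclaurin question concrete, here is a rough sketch (my own, not from the exchange) of evaluating $\zeta(s)$ with $N$ direct terms plus $q$ Bernoulli-number correction terms; it accepts complex $s$, which is the point of the analytic-continuation remark above:

def zeta_em(s, N=10, q=4):
    B = {1: 1/6, 2: -1/30, 3: 1/42, 4: -1/30}   # Bernoulli numbers B_{2k}, enough for q <= 4
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + 0.5 * N ** (-s)
    fact = 1.0      # running value of (2k)!
    rising = 1.0    # running value of s (s+1) ... (s + 2k - 2)
    for k in range(1, q + 1):
        fact *= (2 * k - 1) * (2 * k)
        rising *= s if k == 1 else (s + 2 * k - 3) * (s + 2 * k - 2)
        total += B[k] / fact * rising * N ** (-s - 2 * k + 1)
    return total

print(zeta_em(2))                       # ~1.6449340668, i.e. pi^2/6
print(abs(zeta_em(0.5 + 14.134725j)))   # small in magnitude: near the first zero on the critical line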
for numerical working there is a useful rule which applies when the denominator polynomial is a product of distinct linear factors. suppose $f(x)=\frac{P(x)}{Q(x)}$ where $deg(Q)=n \gt deg(P)$ and $$Q(x) = \prod_{k=1}^n (x-\alpha_k)$$where the $\alpha_k$ are all different. then define $Q_k(x) = \frac{Q(x)}{(x-\alpha_k)}$ in these happy circumstances we may write:$$\frac{P(x)}{Q(x)} = \sum_{k=1}^n \frac{P(\alpha_k)}{Q_k(\alpha_k)} (x-\alpha_k)^{-1}$$ in the example given we have $\alpha_1=0$ and $\alpha_2=-2$ giving $Q_1(x)=x+2$, $Q_1(\alpha_1) = 2$ and $Q_2(x)=x$, $Q_2(\alpha_2) = -2$ with $P(\alpha_1)=P(0)=-8$ and $P(\alpha_2)=P(-2)=8$. you will see that this gives the answer already obtained. don't be put off by the explication - with a little practice this is a very straightforward procedure where it is applicable. for example this can be written straight down: $$\frac{x^2+x+1}{(x-1)(x-2)(x-3)} = \frac{\frac32}{(x-1)} - \frac7{(x-2)} + \frac{\frac{13}2}{(x-3)}$$
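as a quick check of the rule (my own addition, not part of the answer), sympy reproduces the same coefficients for the worked example:

import sympy as sp

x = sp.symbols('x')
P = x**2 + x + 1
roots = [1, 2, 3]

coeffs = []
for a in roots:
    Qk = sp.prod([x - b for b in roots if b != a])   # Q(x) with the (x - a) factor removed
    coeffs.append(P.subs(x, a) / Qk.subs(x, a))      # P(alpha_k) / Q_k(alpha_k)

print(coeffs)                                             # [3/2, -7, 13/2]
print(sp.apart(P / sp.prod([x - a for a in roots]), x))   # sympy's own decomposition agrees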
Here's the question: Evaluate $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}$ if $\boldsymbol{F} = (x+y) \boldsymbol{\hat{i}} + x \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$ and $S$ is the surface of the cube bounded by the planes $x=0$, $x=1$, $y=0$, $y=1$, $z=0$ and $z=1$. Here's my attempt: Suppose the faces whose equations are $x=0$, $x=1$, $y=0$, $y=1$, $z=0$ and $z=1$ are named $S_1$, $S_2$ and so on respectively, and let $\boldsymbol{\hat{n}}$ denote the unit vector normal to them. Now on $S_1$, $\boldsymbol{F} = y \boldsymbol{\hat{i}} +z \boldsymbol{\hat{k}}$, $\boldsymbol{\hat{n}}=\boldsymbol{\hat{i}}$. Therefore $\iint_{S_1} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \int_{0}^{1} \int_{0}^{1} y \,\mathrm{d}y \,\mathrm{d}z = \frac{1}{2}$. Similarly we have $\iint_{S_2} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{3}{2}$, $\iint_{S_3} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_4} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_5} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 0$ and $\iint_{S_6} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 1$. Hence overall we have $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 4$. But the answer in the textbook seems to be $2$. I checked everything over and there doesn't seem to be any error on my part, but I was wondering why the answer doesn't match up.
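A cross-check, not part of the original question: the flux can be recomputed face by face with sympy using outward unit normals. Note that on the faces $x=0$ and $y=0$ the outward normal points in the negative coordinate direction, which appears to be where the computation above and the textbook answer diverge.

import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x + y, x, z])

faces = [  # (substitution fixing the face, outward unit normal, variables to integrate over)
    ({x: 0}, sp.Matrix([-1, 0, 0]), (y, z)),
    ({x: 1}, sp.Matrix([1, 0, 0]), (y, z)),
    ({y: 0}, sp.Matrix([0, -1, 0]), (x, z)),
    ({y: 1}, sp.Matrix([0, 1, 0]), (x, z)),
    ({z: 0}, sp.Matrix([0, 0, -1]), (x, y)),
    ({z: 1}, sp.Matrix([0, 0, 1]), (x, y)),
]

total = 0
for sub, n, (u, v) in faces:
    integrand = (F.T * n)[0].subs(sub)               # F . n restricted to the face
    total += sp.integrate(integrand, (u, 0, 1), (v, 0, 1))

print(total)   # 2, matching div F = 2 integrated over the unit cube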
A particle, traveling at $0.5c$ relative to a stationary observer, travels $3.95 \rm ~cm$ in its frame of reference. What is the distance the particle travels in the observer's frame of reference? Since $3.95\rm ~cm$ is the proper length (the distance it travels in its rest frame, which is its own frame since it is at rest there, is the proper length by definition), the distance measured in the laboratory is $$ L_O = \frac{L_P}{\gamma}$$ where $\gamma = \frac{2}{\sqrt3}$. Thus $L_O = 3.42\rm ~cm$. But isn't this contradictory, since the measured length in the laboratory should be larger?
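A quick arithmetic check of the numbers used above (my own addition; whether $3.95~\rm cm$ really is the proper length is exactly what the question is asking about):

import math

v_over_c = 0.5
L_p = 3.95                                  # cm, taken as the proper length in the attempt above
gamma = 1 / math.sqrt(1 - v_over_c ** 2)    # Lorentz factor, = 2/sqrt(3) ~ 1.1547
print(gamma, L_p / gamma)                   # contracted length ~ 3.42 cm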
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write G using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non trivially on the sylow $p$-subgroup of $F$. Then $q|(p-1) and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math. Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix! 
Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$. I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$, since only polynomials of degree at most 1 give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
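A quick sympy check of that kernel computation (my own, not from the chat): apply $F$ to a general cubic and see which coefficients are forced to vanish.

import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
P = a*x**3 + b*x**2 + c*x + d
F = sp.expand(x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3))

print(sp.Poly(F, x).all_coeffs())   # [6*a, 6*a + 2*b, 6*a] -> F = 0 forces a = b = 0, so ker F = {cx + d}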
2019-06-21 12:21 [GSI-2019-00752] Report/Journal Article et al Exploring the sensitivity of gravitational wave detectors to neutron star physics [arXiv:1901.03885] ddc:530 Detailed record - Similar records 2019-06-14 13:20 [GSI-2019-00743] Report/Journal Article et al Evidence of a resonant structure in the $e^+e^-\to \pi^+D^0D^{*-}$ cross section between 4.05 and 4.60 GeV [arXiv:1808.02847] The cross section of the process e+e-→π+D0D*- for center-of-mass energies from 4.05 to 4.60 GeV is measured precisely using data samples collected with the BESIII detector operating at the BEPCII storage ring. Two enhancements are clearly visible in the cross section around 4.23 and 4.40 GeV. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-31 13:40 [GSI-2019-00697] Report/Journal Article Volume Dependence of N-Body Bound States [arXiv:1701.00279] We derive the finite-volume correction to the binding energy of an N -particle quantum bound state in a cubic periodic volume. Our results are applicable to bound states with arbitrary composition and total angular momentum, and in any number of spatial dimensions. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-31 13:05 [GSI-2019-00694] Report/Journal Article Prompt photon production and photon-jet correlations at the LHCi [MS-TP-17-09; arXiv:1709.04154] Next-to-leading order predictions matched to parton showers are compared with recent ATLAS data on isolated photon production and CMS data on associated photon and jet production in pp and pPb collisions at different centre-of-mass energies of the LHC. We find good agreement and, as expected, considerably reduced scale uncertainties compared to previous theoretical calculations. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-31 12:45 [GSI-2019-00693] Report/Journal Article Resonance decay dynamics and their effects on $p_T$-spectra of pions in heavy-ion collisions [arXiv:1705.01514] The influence of resonance decay dynamics on the momentum spectra of pions in heavy-ion collisions is examined. Taking the decay processes ω→3π and ρ→2π as examples, I demonstrate how the resonance width and details of decay dynamics (via the decay matrix element) can modify the physical observables. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-29 14:19 [GSI-2019-00692] Report/Journal Article Fierz-complete NJL model study. II. Toward the fixed-point and phase structure of hot and dense two-flavor QCD [arXiv:1801.08338] Nambu-Jona-Lasinio-type models are often employed as low-energy models for the theory of the strong interaction to analyze its phase structure at finite temperature and quark chemical potential. In particular, at low temperature and large chemical potential, where the application of fully first-principles approaches is currently difficult at best, this class of models still plays a prominent role in guiding our understanding of the dynamics of dense strong-interaction matter. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-28 12:54 [GSI-2019-00691] Report/Journal Article et al Elliptic flow of electrons from beauty-hadron decays extracted from Pb--Pb collision data at $\sqrt{s_{\rm NN}}$ = 2.76 TeV [arXiv:1705.00161] We present a calculation of the elliptic flow of electrons from beauty-hadron decays in semi-central Pb–Pb collisions at centre-of-mass energy per colliding nucleon pair, represented as $\sqrt{s_\mathrm{NN}}$ , of 2.76 TeV. 
The result is obtained by the subtraction of the charm-quark contribution in the elliptic flow of electrons from heavy-flavour hadron decays in semi-central Pb–Pb collisions at $\sqrt{s_\mathrm{NN}} = 2.76\ \hbox {TeV}$ recently made publicly available by the ALICE collaboration.. ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-28 12:47 [GSI-2019-00690] Report/Journal Article et al Discriminating WIMP-nucleus response functions in present and future XENON-like direct detection experiments [INT-PUB-18-006; arXiv:1802.04294] The standard interpretation of direct-detection limits on dark matter involves particular assumptions of the underlying WIMP-nucleus interaction, such as, in the simplest case, the choice of a Helm form factor that phenomenologically describes an isoscalar spin-independent interaction. In general, the interaction of dark matter with the target nuclei may well proceed via different mechanisms, which would lead to a different shape of the corresponding nuclear structure factors as a function of the momentum transfer q. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-28 10:20 [GSI-2019-00689] Report/Journal Article et al Polyakov loop fluctuations in the presence of external fields [arXiv:1801.08040] We study the implications of the spontaneous and explicit Z(3) center symmetry breaking for the Polyakov loop susceptibilities. To this end, ratios of the susceptibilities of the real and imaginary parts, as well as of the modulus of the Polyakov loop are computed within an effective model using a color group integration scheme. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records 2019-05-28 10:03 [GSI-2019-00688] Report/Journal Article et al Double-folding potentials from chiral effective field theory [arXiv:1708.02527] The determination of nucleus–nucleus potentials is important not only to describe the properties of the colliding system, but also to extract nuclear-structure information and for modelling nuclear reactions for astrophysics. We present the first determination of double-folding potentials based on chiral effective field theory at leading, next-to-leading, and next-to-next-to-leading order. [...] ddc:530 OpenAccess: PDF ; Detailed record - Similar records
TL;DR: It depends on how you choose to measure entanglement on a pair of qubits. If you trace out the extra qubits, then "No". If you measure the qubits (with the freedom to choose the optimal measurement basis), then "Yes". Let $|\Psi\rangle$ be a pure quantum state of 3 qubits, labelled A, B and C. We will say that A and B are entangled if $\rho_{AB}=\text{Tr}_C(|\Psi\rangle\langle\Psi|)$ is not positive under the action of the partial transpose map. This is a necessary and sufficient condition for detecting entanglement in a two-qubit system. The partial trace formalism is equivalent to measuring qubit C in an arbitrary basis and discarding the result. There's a class of counter-examples that show that entanglement is not transitive, of the form$$|\Psi\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|1\phi\phi\rangle),$$provided $|\phi\rangle\neq |0\rangle,|1\rangle$. If you trace out qubit $B$ or qubit $C$, you'll get the same density matrix both times:$$\rho_{AC}=\rho_{AB}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|00\rangle\langle 1\phi|\langle\phi|0\rangle+|1\phi\rangle\langle 00|\langle0|\phi\rangle\right)$$You can take the partial transpose of this (taking it on the first system is the cleanest):$$\rho^{PT}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|10\rangle\langle 0\phi|\langle\phi|0\rangle+|0\phi\rangle\langle 10|\langle0|\phi\rangle\right)$$Now take the determinant (which is equal to the product of the eigenvalues). You get$$\text{det}(\rho^{PT})=-\frac{1}{16}|\langle 0|\phi\rangle|^2(1-|\langle 0|\phi\rangle|^2)^2,$$which is negative, so there must be a negative eigenvalue. Thus, $(AB)$ and $(AC)$ are entangled pairs. Meanwhile$$\rho_{BC}=\frac12(|00\rangle\langle 00|+|\phi\phi\rangle\langle\phi\phi |).$$Since this is a valid density matrix, it is non-negative. However, the partial transpose is just equal to itself. So, there are no negative eigenvalues and $(BC)$ is not entangled. Localizable Entanglement One might, instead, talk about the localizable entanglement. Before further clarification, this is what I thought the OP was referring to. In this case, instead of tracing out a qubit, one can measure it in a basis of your choice, and calculate the results separately for each measurement outcome. (There is later some averaging process, but that will be irrelevant to us here.) In this case, my response is specifically about pure states, not mixed states. The key here is that there are different classes of entangled state. For 3 qubits, there are 6 different types of pure state: a fully separable state 3 types where there is an entangled state between two parties, and a separable state on the third a W-state a GHZ state Any type of quantum state can be converted into one of the standard representatives of each class just by local measurements and classical communication between the parties. Note that the conditions of $(q_1,q_2)$ and $(q_2,q_3)$ being entangled remove the first 4 cases, so we only have to consider the last 2 cases, W-state and GHZ-state. Both representatives are symmetric under exchange of the particles:$$|W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)\qquad |GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle)$$(i.e. if I swap qubits A and B, I still have the same state).So, these representatives must have the required transitivity properties: If A and B are entangled, then B and C are entangled, as are A and C. 
In particular, both of these representatives can be measured in the X basis in order to localize the entanglement. Thus, for any pure state you're given, you can fold the measurement that converts it into the standard representative into the measurement that localizes the entanglement, and you're done!
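As an illustration, the partial-transpose test above is easy to check numerically. The following sketch (assuming NumPy; the particular $|\phi\rangle$ is an arbitrary example choice) builds $|\Psi\rangle=(|000\rangle+|1\phi\phi\rangle)/\sqrt{2}$, traces out one qubit at a time, and confirms that $\rho_{AB}$ and $\rho_{AC}$ have a negative partial-transpose eigenvalue while $\rho_{BC}$ does not.

```python
import numpy as np

# |phi> != |0>, |1>; an arbitrary example choice
phi = np.array([np.cos(0.7), np.sin(0.7)])
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# |Psi> = (|000> + |1 phi phi>)/sqrt(2) on qubits A, B, C
psi = (kron(zero, zero, zero) + kron(one, phi, phi)) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)   # indices A,B,C,A',B',C'

def reduced(rho6, keep):
    # trace out the one qubit not listed in `keep` (keep is a pair from {0,1,2})
    drop = ({0, 1, 2} - set(keep)).pop()
    r = np.trace(rho6, axis1=drop, axis2=drop + 3)
    # remaining axes are (kept1, kept2, kept1', kept2')
    return r.reshape(4, 4)

def min_pt_eigenvalue(rho_pair):
    # partial transpose on the first qubit, then the smallest eigenvalue
    r = rho_pair.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return np.linalg.eigvalsh(r).min()

for pair in [(0, 1), (0, 2), (1, 2)]:        # (A,B), (A,C), (B,C)
    print(pair, min_pt_eigenvalue(reduced(rho, pair)))
# expected: negative for (A,B) and (A,C), non-negative for (B,C)
```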
Hiroshima Mathematical Journal, Volume 47, Number 2 (2017), 155-179.

Bounds on Walsh coefficients by dyadic difference and a new Koksma-Hlawka type inequality for Quasi-Monte Carlo integration

Abstract. In this paper we give a new Koksma-Hlawka type inequality for Quasi-Monte Carlo (QMC) integration. QMC integration of a function $f\colon[0,1)^s\rightarrow\mathbb{R}$ by a finite point set $\mathcal{P}\subset[0,1)^s$ is the approximation of the integral $I(f):=\int_{[0,1)^s}f(\mathbf{x})\,d\mathbf{x}$ by the average $I_{\mathcal{P}}(f):=\frac{1}{|\mathcal{P}|}\sum_{\mathbf{x} \in \mathcal{P}}f(\mathbf{x})$. We treat a certain class of point sets $\mathcal{P}$ called digital nets. A Koksma-Hlawka type inequality is an inequality providing an upper bound on the integration error $\text{Err}(f;\mathcal{P}):=I(f)-I_{\mathcal{P}}(f)$ of the form $|\text{Err}(f;\mathcal{P})|\le C\cdot \|f\|\cdot D(\mathcal{P})$. We can obtain a Koksma-Hlawka type inequality by estimating bounds on $|\hat{f}(\mathbf{k})|$, where $\hat{f}(\mathbf{k})$ is a generalized Fourier coefficient with respect to the Walsh system. In this paper we prove bounds on the Walsh coefficients $\hat{f}(\mathbf{k})$ by introducing an operator called 'dyadic difference' $\partial_{i,n}$. By converting dyadic differences $\partial_{i,n}$ to derivatives $\frac{\partial }{\partial x_i}$, we get a new bound on $|\hat{f}(\mathbf{k})|$ for a function $f$ whose mixed partial derivatives up to order $\alpha$ in each variable are continuous. This new bound is smaller than the known bound on $|\hat{f}(\mathbf{k})|$ in some instances. The new Koksma-Hlawka type inequality is derived using this new bound on the Walsh coefficients.

Article information: Received 8 November 2016; revised 7 December 2016.

Citation: Yoshiki, Takehito. Bounds on Walsh coefficients by dyadic difference and a new Koksma-Hlawka type inequality for Quasi-Monte Carlo integration. Hiroshima Math. J. 47 (2017), no. 2, 155-179. doi:10.32917/hmj/1499392824. https://projecteuclid.org/euclid.hmj/1499392824
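To make the objects in the abstract concrete: QMC integration replaces $I(f)$ by the equal-weight average $I_{\mathcal{P}}(f)$ over a deterministic low-discrepancy point set. A minimal sketch, assuming SciPy's `scipy.stats.qmc` module is available (a scrambled Sobol' net stands in here for the digital nets treated in the paper), compares the QMC error with plain Monte Carlo for a smooth test integrand:

```python
import numpy as np
from scipy.stats import qmc

s = 4                                                    # dimension
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)    # smooth test integrand on [0,1)^s
exact = 1.0                                              # its exact integral (product of 1-d integrals, each equal to 1)

rng = np.random.default_rng(0)

for m in range(6, 13):                                   # n = 2^m points
    n = 2 ** m
    p_qmc = qmc.Sobol(d=s, scramble=True, seed=0).random_base2(m=m)   # QMC point set P
    p_mc = rng.random((n, s))                                          # plain Monte Carlo points
    err_qmc = abs(f(p_qmc).mean() - exact)               # |Err(f; P)| for the QMC average I_P(f)
    err_mc = abs(f(p_mc).mean() - exact)
    print(f"n = 2^{m:2d}   QMC error {err_qmc:.2e}   MC error {err_mc:.2e}")
```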
I've got a fun question, which is somewhat testing my topology skills. We're working with the quotient map $\mathbb{R} \rightarrow \mathbb{R}/\sim$, which sends $x$ to $[x] = \{y \in \mathbb{R}: x-y \in \mathbb{Q} \}$, and what I'm trying to show is that $\mathbb{R}/\sim$ isn't Hausdorff. What I'm struggling with is proving that, for distinct $[x],[y] \in \mathbb{R}/\sim$, ALL open neighbourhoods $U_{[x]}, U_{[y]}$ have a non-empty intersection. Intuition says that these open sets overlap, since any open set around $[x]$ or $[y]$ must contain a rational, so this open set must also contain the class of all of $\mathbb{Q}$, and hence the two open sets have that class in common. Formalizing this is giving me trouble. How do I describe an arbitrary open set around such an equivalence class? Is it just $[B_\varepsilon(x)]=\{[z] \in \mathbb{R}/\sim: z \in B_\varepsilon(x)\}$?
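One possible way to formalize this (a sketch, not necessarily the intended argument): work with saturated open sets in $\mathbb{R}$ rather than with "balls of classes". Let $q:\mathbb{R}\to\mathbb{R}/\sim$ be the quotient map. If $U\subseteq\mathbb{R}/\sim$ is open and nonempty, then $q^{-1}(U)$ is open, nonempty, and saturated: it is a union of equivalence classes, hence invariant under translation by every rational. Since it contains some interval $(a,b)$, it contains $(a,b)+\mathbb{Q}$, which is dense in $\mathbb{R}$. Now let $U_{[x]},U_{[y]}$ be any open neighbourhoods of $[x]$ and $[y]$ in $\mathbb{R}/\sim$. Then $q^{-1}(U_{[x]})$ is open and nonempty while $q^{-1}(U_{[y]})$ is dense, so some $z$ lies in both preimages, and therefore $[z]\in U_{[x]}\cap U_{[y]}$. Hence no two points of $\mathbb{R}/\sim$ can be separated by disjoint open sets.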
Let’s begin by choosing a simple quantitative problem requiring a single measurement—What is the mass of a penny? As you consider this question, you probably recognize that it is too broad. Are we interested in the mass of a United States penny or of a Canadian penny, or is the difference relevant? Because a penny’s composition and size may differ from country to country, let’s limit our problem to pennies from the United States. There are other concerns we might consider. For example, the United States Mint currently produces pennies at two locations (Figure 4.1). Because it seems unlikely that a penny’s mass depends upon where it is minted, we will ignore this concern. Another concern is whether the mass of a newly minted penny is different from the mass of a circulating penny. Because the answer this time is not obvious, let’s narrow our question to—What is the mass of a circulating United States penny?

Figure 4.1: An uncirculated 2005 Lincoln head penny. The “D” below the date indicates that this penny was produced at the United States Mint at Denver, Colorado. Pennies produced at the Philadelphia Mint do not have a letter below the date. Source: United States Mint image (www.usmint.gov).

A good way to begin our analysis is to examine some preliminary data. Table 4.1 shows masses for seven pennies from my change jar. In examining this data it is immediately apparent that our question does not have a simple answer. That is, we cannot use the mass of a single penny to draw a specific conclusion about the mass of any other penny (although we might conclude that all pennies weigh at least 3 g). We can, however, characterize this data by reporting the spread of individual measurements around a central value.

Table 4.1: Masses of seven circulating United States pennies

Penny   Mass (g)
1       3.080
2       3.094
3       3.107
4       3.056
5       3.112
6       3.174
7       3.198

4.1.1 Measures of Central Tendency

One way to characterize the data in Table 4.1 is to assume that the masses are randomly scattered around a central value that provides the best estimate of a penny’s expected, or “true” mass. There are two common ways to estimate central tendency: the mean and the median.

Mean

The mean, $\overline{X}$, is the numerical average for a data set. We calculate the mean by dividing the sum of the individual values by the size of the data set

\[ \overline{X}=\frac{\sum_{i}X_i}{n} \]

where $X_i$ is the $i$th measurement and $n$ is the size of the data set.

Example 4.1

What is the mean for the data in Table 4.1?

Solution

To calculate the mean we add together the results for all measurements

\[\mathrm{3.080 + 3.094 + 3.107 + 3.056 + 3.112 + 3.174 + 3.198 = 21.821\: g}\]

and divide by the number of measurements

\[\overline{X} = \mathrm{\dfrac{21.821\: g}{7}=3.117\:g}\]

The mean is the most common estimator of central tendency. It is not a robust estimator, however, because an extreme value—one much larger or much smaller than the remainder of the data—strongly influences the mean’s value. For example, if we mistakenly record the third penny’s mass as 31.07 g instead of 3.107 g, the mean changes from 3.117 g to 7.112 g!

Note: An estimator is robust if its value is not affected too much by an unusually large or unusually small measurement.

Median

The median, \(\widetilde{X}\), is the middle value when we order our data from the smallest to the largest value. When the data set includes an odd number of entries, the median is the middle value. For an even number of entries, the median is the average of the $n/2$th and the $(n/2) + 1$th values, where $n$ is the size of the data set.
Note When n = 5, the median is the third value in the ordered data set; for n = 6, the median is the average of the third and fourth members of the ordered data set. Example 4.2 What is the median for the data in Table 4.1? Solution To determine the median we order the measurements from the smallest to the largest value 3.056 3.080 3.094 3.107 3.112 3.174 3.198 Because there are seven measurements, the median is the fourth value in the ordered data set; thus, the median is 3.107 g. As shown by Examples 4.1 and 4.2, the mean and the median provide similar estimates of central tendency when all measurements are comparable in magnitude. The median, however, provides a more robust estimate of central tendency because it is less sensitive to measurements with extreme values. For example, introducing the transcription error discussed earlier for the mean changes the median’s value from 3.107 g to 3.112 g. 4.1.2 Measures of Spread If the mean or median provides an estimate of a penny’s expected mass, then the spread of individual measurements provides an estimate of the difference in mass among pennies or of the uncertainty in measuring mass with a balance. Although we often define spread relative to a specific measure of central tendency, its magnitude is independent of the central value. Changing all measurements in the same direction, by adding or subtracting a constant value, changes the mean or median, but does not change the spread. (Problem 12 at the end of the chapter asks you to show that this is true.) There are three common measures of spread: the range, the standard deviation, and the variance. Range The range, w, is the difference between a data set’s largest and smallest values. \[w = X_\ce{largest} - X_\ce{smallest}\] The range provides information about the total variability in the data set, but does not provide any information about the distribution of individual values. The range for the data in Table 4.1 is \[w = \mathrm{3.198\: g - 3.056\: g = 0.142\: g}\] Standard Deviation The standard deviation, s, describes the spread of a data set’s individual values about its mean, and is given as \[ s=\sqrt{\frac{\sum_{i}^{ }(X_i-\overline{X})^2}{n-1}} \tag{4.1}\] where X i is one of n individual values in the data set, and X is the data set’s mean value. Frequently, the relative standard deviation, s r, is reported. \[ s_r =\frac{s}{\overline{X}} \] The percent relative standard deviation, % s r, is s r × 100. Example 4.3 What are the standard deviation, the relative standard deviation and the percent relative standard deviation for the data in Table 4.1? Solution To calculate the standard deviation we first calculate the difference between each measurement and the mean value (3.117), square the resulting differences, and add them together to give the numerator of equation 4.1. \[\begin{align} (3.080-3.117)^2 = (-0.037)^2 = 0.001369\\ (3.094-3.117)^2 = (-0.023)^2 = 0.000529\\ (3.107-3.117)^2 = (-0.010)^2 = 0.000100\\ (3.056-3.117)^2 = (-0.061)^2 = 0.003721\\ (3.112-3.117)^2 = (-0.005)^2 = 0.000025\\ (3.174-3.117)^2 = (+0.057)^2 = 0.003249\\ (3.198-3.117)^2 = (+0.081)^2 = \underline{0.006561}\\ 0.015554 \end{align}\] For obvious reasons, the numerator of equation 4.1 is called a sum of squares. Next, we divide this sum of the squares by n – 1, where n is the number of measurements, and take the square root. 
\[ s = \sqrt{\frac{0.015554}{7-1}}=\mathrm{0.051\:g} \]

Finally, the relative standard deviation and percent relative standard deviation are

\[\mathrm{\mathit{s}_r= \dfrac{0.051\: g}{3.117\: g} = 0.016 \hspace{20px} \%\mathit{s}_r = (0.016) × 100\% = 1.6\%}\]

It is much easier to determine the standard deviation using a scientific calculator with built-in statistical functions.

Note: Many scientific calculators include two keys for calculating the standard deviation. One key calculates the standard deviation for a data set of n samples drawn from a larger collection of possible samples, which corresponds to equation 4.1. The other key calculates the standard deviation for all possible samples. The latter is known as the population’s standard deviation, which we will cover later in this chapter. Your calculator’s manual will help you determine the appropriate key for each.

Variance

Another common measure of spread is the square of the standard deviation, or the variance. We usually report a data set’s standard deviation, rather than its variance, because the mean value and the standard deviation have the same unit. As we will see shortly, the variance is a useful measure of spread because its values are additive.

Example 4.4

What is the variance for the data in Table 4.1?

Solution

The variance is the square of the absolute standard deviation. Using the standard deviation from Example 4.3 gives the variance as

\[s^2 = (0.051)^2 = 0.0026\]

Practice Exercise 4.1

The following data were collected as part of a quality control study for the analysis of sodium in serum; results are concentrations of Na$^+$ in mmol/L.

140 143 141 137 132 157 143 149 118 145

Report the mean, the median, the range, the standard deviation, and the variance for this data. This data is a portion of a larger data set from Andrews, D. F.; Herzberg, A. M. Data: A Collection of Problems for the Student and Research Worker, Springer-Verlag: New York, 1985, pp. 151–155.
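The calculations in Examples 4.1–4.4 are easy to reproduce programmatically. A minimal sketch (assuming Python with NumPy) recomputes the mean, median, range, standard deviation, and variance for the penny masses in Table 4.1; the results should match the worked examples above (mean 3.117 g, median 3.107 g, s = 0.051 g, s² = 0.0026).

```python
import numpy as np

masses = np.array([3.080, 3.094, 3.107, 3.056, 3.112, 3.174, 3.198])  # Table 4.1, in grams

mean = masses.mean()
median = np.median(masses)
w = masses.max() - masses.min()     # range
s = masses.std(ddof=1)              # sample standard deviation, equation 4.1 (n - 1 in the denominator)
s_r = s / mean                      # relative standard deviation
variance = masses.var(ddof=1)       # square of the sample standard deviation

print(f"mean     = {mean:.3f} g")
print(f"median   = {median:.3f} g")
print(f"range    = {w:.3f} g")
print(f"std dev  = {s:.3f} g   (%RSD = {100 * s_r:.1f}%)")
print(f"variance = {variance:.4f} g^2")
```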
In the OP's particular case, the situation is somehwat simpler than the general case that José discusses. That's because the family of left-invariant metrics on $\mathrm{SU}(4)$ that the OP wants to consider has special properties, although just how special does not become apparent until one looks at the problem from a rather different viewpoint, using the fact that $\mathrm{SU}(4)$ is $\mathrm{Spin}(6)$. (In fact, one has $\mathrm{SU}(4)/\{\pm I_4\}=\mathrm{SO}(6)$, and the problem is much easier to describe and treat as a problem on $\mathrm{SO}(6)$, as will be seen.) First, though, a quick review of the geodesic equations for a left-invariant metric on a compact, semi-simple Lie group $G$: If $\kappa:{\frak{g}}\times{\frak{g}}\to\mathbb{R}$ is the Killing form on ${\frak{g}} = T_eG$, and $\omega:TG\to{\frak{g}}$ is the canonical left-invariant form, then the standard bi-invariant metric on $G$ is given by $\mathrm{d}s^2 = -\kappa(\omega,\omega)$. Any other left-invariant metric on $G$ can be written uniquely in the form $\mathrm{d}\bar s^2 = -\kappa(B\omega,\omega)$, where $B:{\frak{g}}\to{\frak{g}}$ is a positive definite $\kappa$-symmetric linear isomorphism. To find the $\mathrm{d}\bar s^2$-geodesic passing through $g_0\in G$ with initial velocity $L'_{g_0}(v_0)\in T_{g_0}G = L'_{g_0}\bigl({\frak{g}}\bigr)$, one has a $2$-step procedure: First, one finds the curve $v:\mathbb{R}\to{\frak{g}}$ that satisfies the Euler equation (a nonlinear ODE initial value problem)$$v'(t) = B^{-1}\bigl[v(t),Bv(t)\bigr],\qquad v(0) = v_0$$and then the curve $g:\mathbb{R}\to G$ satisfying the Lie equation$$\omega\bigl(g'(t)\bigr) = v(t),\qquad g(0) = g_0\,.$$(When $G$ is a matrix group, this latter equation is just $g'(t) = g(t) v(t)$, with initial value $g(0) = g_0$.) Note that, when $v_0$ is an eigenvector of $B$, the solution of the Euler equation is $v(t) = v_0$, and so the geodesic is just $g(t) = g_0 \exp(tv_0)$ (i.e., the left-translation of a $1$-parameter subgroup). More generally, if $B$ preserves a subalgebra ${\frak{s}}\subset {\frak{g}}$ that contains $v_0$, then the problem reduces to finding the geodesic in the corresponding subgroup $S\subset G$ (which is totally geodesic in $G$ with respect to the metric $\mathrm{d}\bar s^2$). Next, in the OP's specific case, one has ${\frak{g}} = {\frak{su}}(4) = {\frak{so}}(6)$ and the OP has prescribed an orthogonal basis $\mathbf{b}$ consisting of 15 elements in ${\frak{su}}(4)$ and wants to consider, all together, the $15$-dimensional cone of metrics determined by the set of positive definite symmetric transformations $B:{\frak{su}}(4)\to {\frak{su}}(4)$ that preserve the $15$ lines spanned by the elements of $\mathbf{b}$. What is not apparent in the OP's description is the great deal of symmetry that the basis $\mathbf{b}$ possesses. This is much more apparent when one, instead, uses the alternative form ${\frak{so}}(6)$, i.e., the skew-symmetric linear transformations of $\mathbb{R}^6$ with its standard inner product. In this form, one can describe the OP's basis $\mathbf{b}$ as follows: Let $e_1,\dots,e_6$ be an orthonormal basis of $\mathbb{R}^6$ and let $E_{ij}\in {\frak{so}}(6)$ for $1\le i<j\le 6$ be the rank $2$ linear transformation that satisfies $E_{ij}(e_i) = e_j$ and $E_{ij}(e_j) = - e_i$. 
Then the basis $\mathbf{b} = \bigl(E_{ij}\bigr)_{i<j}$ is orthonormal with respect to the Killing form of ${\frak{so}}(6)$, and it corresponds, under an appropriate isomorphism, to the OP's prescribed basis of ${\frak{su}}(4)$, at least up to signs (which are immaterial to the problem). (Verifying this is an interesting exercise for the reader.) Now, a subalgebra ${\frak{s}}\subset {\frak{so}}(6)$ is invariant under all of the positive definite linear transformations $B:{\frak{so}}(6)\to {\frak{so}}(6)$ that preserve $\mathbf{b}$ up to multiples if and only if it has a basis that is a subset of $\mathbf{b}$. There are many such subspaces, and this makes it easy to compute the geodesics for many initial values $v_0$: There are $15$ such maximal tori ${\frak{t}}\subset {\frak{so}}(6)$: For any permutation $\pi = \bigl(\pi(1),\ldots,\pi(6)\bigr)$ let ${\frak{t}}_\pi$ be spanned by the three elements $E_{\pi(1)\pi(2)}$, $E_{\pi(3)\pi(4)}$, and $E_{\pi(5)\pi(6)}$. Then, for $v_0\in {\frak{t}}_\pi$, the solution to the Euler equation is $v(t) = v_0$, so the corresponding geodesics for all of the $15$-parameter family of left-invariant metrics are left-translates of $1$-parameter subgroups. There are $20$ such copies of ${\frak{so}}(3)\subset {\frak{so}}(6)$: For any triple $(i,j,k)$ with $1\le i<j<k\le 6$, let ${\frak{so}}(3)_{ijk}$ be spanned by the elements$E_{ij}$, $E_{ik}$, and $E_{jk}$. Then this defines a subgroup of $\mathrm{SO}(6)$ that is totally geodesic for all of the metrics in the OP's class, and each of these metrics restricts to be a left-invariant metric on each such $\mathrm{SO}(3)$. (Unfortunately, these include the general left-invariant metrics on $\mathrm{SO}(3)$, and, as is well-known, the geodesic equations for the generic such metric on $\mathrm{SO}(3)$ can only be integrated using the Jacobian elliptic functions. [See any good book on mechanics for this integration, where it is described as solving the rigid body problem. Also, note the onset of chaos already in this simple case.] As a result, it follows that it is hopeless to expect a general solution in any explicit form, even for the Euler equation.) Note, by the way, that these $20$ copies of totally geodesic $\mathrm{SO}(3)$s in $\mathrm{SO}(6)$ can be grouped into $10$ pairs that commute with each other, which generates $10$ totally geodesic copies of $\mathrm{SO}(3)\times\mathrm{SO}(3)$ on which the geodesic equations for all the metrics in the family reduce to solving independent pairs of $3$-dimensional rigid body problems. There are, of course, other subgroups that are totally geodesic for the entire $15$-dimensional cone of metrics, such as $15$ copies of $\mathrm{SO}(2)\times\mathrm{SO}(4)$, and $6$ copies of $\mathrm{SO}(5)$. But the Euler equations become progressively harder to solve, and, as far as I know, there is no general solution known for this family of left-invariant metrics on $\mathrm{SO}(5)$ and maybe not even for $\mathrm{SO}(4)$. (Even the $\mathrm{SO}(2)\times\mathrm{SO}(4)$ case is not easy: Even though the Lie algebra of $\mathrm{SO}(4)$ splits as the direct sum of two subalgebras, this splitting is not preserved by the generic linear transformation $B$ in the $15$-dimensional family, and, as a result, the Euler equations do not usually uncouple to simpler equations.) 
My conclusion is that, while one can compute the geodesics for this family of left-invariant metrics on $\mathrm{SO}(6)$ for special subspaces of initial conditions for the Euler equations, to get the general solution in any explicit form is probably not possible.
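To make the two-step procedure concrete, here is a minimal numerical sketch (assuming NumPy/SciPy). It integrates the Euler equation $v'=B^{-1}[v,Bv]$ followed by the Lie equation $g'=g\,v$ for $\mathrm{SO}(3)$, with $B$ acting diagonally on the basis $E_{12},E_{13},E_{23}$ with hypothetical eigenvalues; $\mathrm{SO}(6)$ works the same way with the fifteen $E_{ij}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3  # work in so(3) for concreteness; so(6) is identical with n = 6

# Basis E_ij (i < j) of so(n): E_ij e_i = e_j, E_ij e_j = -e_i
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
basis = np.zeros((len(pairs), n, n))
for k, (i, j) in enumerate(pairs):
    basis[k, j, i] = 1.0
    basis[k, i, j] = -1.0

# Hypothetical eigenvalues of B on the lines spanned by the E_ij (any positive numbers)
b = np.array([1.0, 2.0, 3.0])

def to_coeffs(v):                 # coefficients of a skew matrix in the basis E_ij
    return np.array([v[j, i] for (i, j) in pairs])

def from_coeffs(c):               # skew matrix with the given coefficients
    return np.tensordot(c, basis, axes=1)

def rhs(t, y):
    g = y[:n * n].reshape(n, n)
    v = from_coeffs(y[n * n:])
    Bv = from_coeffs(b * to_coeffs(v))
    v_dot = from_coeffs(to_coeffs(v @ Bv - Bv @ v) / b)   # Euler equation: v' = B^{-1}[v, Bv]
    g_dot = g @ v                                          # Lie equation:  g' = g v
    return np.concatenate([g_dot.ravel(), to_coeffs(v_dot)])

g0 = np.eye(n)
v0 = np.array([0.3, 0.5, 0.7])    # generic initial velocity (not an eigenvector of B)
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([g0.ravel(), v0]),
                rtol=1e-10, atol=1e-12, dense_output=True)

g_final = sol.y[:n * n, -1].reshape(n, n)
print("deviation from orthogonality:", np.abs(g_final.T @ g_final - np.eye(n)).max())
print("final Euler variable v(T):", sol.y[n * n:, -1])
```

If instead `v0` is taken along a single basis element $E_{ij}$, the integrated $v(t)$ stays constant and $g(t)$ reduces to the one-parameter subgroup $\exp(tv_0)$, as noted above.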
I know that this function ($g$ means coupling) is non-analytic at $g=0$, so it is only accessible through non-perturbative calculations; it is a non-perturbative phenomenon. This function appears in many critical/crossover temperatures, for example in the Kondo problem and in superconductors. It also appears in QCD, when we fix the physical coupling equal to one. It is always of the form
$$E=E_0\,e^{-\frac{1}{\rho |g|}}$$
or the same with $|g|$ replaced by $g^2$, where $\rho$ is some density of states. When we realize (physically) that the perturbative series doesn't converge (as in Dyson's argument), we treat our series as asymptotic. If the series diverges like $n!$, we can use Borel summation and end up with an integral of a meromorphic function over $(0,\,\infty)$. After some calculation, the poles of this meromorphic function give contributions like $e^{-\frac{1}{\rho |g|}}$. From this site, it seems to me that only instantons make this kind of contribution (instanton corrections). But renormalons could give the same contribution (no?). Bound states, nearly-bound states and tunnelling mechanisms that connect different nearly-bound states seem to me to be the reason for the appearance of these terms and for the divergence of the perturbative calculation. But it is very interesting that these corrections added to perturbative calculations are very tiny, exponentially tiny, ... set by a far scale, ... the typical scale of the bound state or the width of a tunnelling barrier that holds a nearly-bound state. In the physical examples that I gave, the Kondo temperature tells us the size of the cloud around the impurity, the QCD energy gives us the size of the proton, the Cooper instability gives the size of the electron-electron pair, a QM double-well problem gives the distance between the wells, ... and so on. Always the formation of a bound state across scales: short distances plus small interactions giving long-distance bound states. I came to this by physical intuition. Can someone give a mathematical proof of it?

This post imported from StackExchange Physics at 2016-05-31 07:24 (UTC), posted by SE-user Nogueira
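A small illustration of the non-analyticity at $g=0$ (a sketch, assuming SymPy): every right-hand derivative of $e^{-1/g}$ vanishes as $g\to 0^+$, so the function has an identically zero Taylor series in $g$ and is invisible to perturbation theory at any finite order, even though it is nonzero for every $g>0$.

```python
import sympy as sp

g = sp.symbols('g', positive=True)
f = sp.exp(-1 / g)

# All right-hand derivatives at g = 0 vanish, so the Taylor series of f at 0 is identically zero.
for k in range(6):
    print(k, sp.limit(sp.diff(f, g, k), g, 0, dir='+'))

# Yet the function itself is not zero for any g > 0:
print(sp.N(f.subs(g, sp.Rational(1, 10))))   # e^{-10}, exponentially small but nonzero
```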
Peter Saveliev

Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA. My current projects are these two books: In part, the latter book is about Discrete Calculus, which is based on a simple idea:$$\lim_{\Delta x\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$I have been involved in research in algebraic topology and several other fields but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance. Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture called Painting Hands: Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases: (a) One $-1$: mirror symmetry, then pen draws pen. No! (b) Two $-1$s: a $180$ degree rotation, then we have two right (or two left) hands. No! (c) Three $-1$s: central symmetry. Yes!

-Why is discrete calculus better than infinitesimal calculus? -Why? -Because it can be integer-valued! -And? -And the integer-valued calculus can detect if the space is non-orientable! Read Integer-valued calculus, an essay making a case for discrete calculus by appealing to topology and physics.

-The political “spectrum” might be a circle! -So? -Then there can be no fair decision-making system! Read The political spectrum is a circle, an essay based on the very last section of the topology book.

Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over that years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter.
Given $N$ points $X_i$ in a metric space and a measure of "middleness"

$ \qquad \qquad \mathsf{middle}( X_i ) \equiv \frac{1}{N} \sum_j \mathsf{metric}( X_i, X_j ) $

can one find an $X_i$ near the middle of all $N$ points, i.e. roughly minimizing $\mathsf{middle}( X_i )$, in time and space both better than $O( N^2 )$? If not in general, are there cases that can be done — trees, Euclidean metrics?

Clarification: by "space better than $O(N^2)$" I mean: are there approximate methods that give many nearby pairs after looking at only $O(N^{1+\epsilon})$ of all pairs? This is broader than just middles. Of course, guarantees are then gone, or only empirical or statistical. But methods that work for $N$ of 10000 or 1000000 would have broad application. (Is that clear? Is it worth a separate question?)
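One simple approach in this spirit (a sketch, not a method from the question): estimate $\mathsf{middle}(X_i)$ for every point by averaging distances to a random sample of $s \ll N$ reference points, then return the argmin. This looks at only $O(Ns)$ pairs and gives a statistical rather than guaranteed answer. Assuming NumPy and a Euclidean metric for concreteness:

```python
import numpy as np

def approx_medoid(X, s=100, seed=0):
    """Pick the point roughly minimizing the mean distance to all others,
    using a random reference sample of size s (O(N*s) distance evaluations)."""
    rng = np.random.default_rng(seed)
    N = len(X)
    ref = X[rng.choice(N, size=min(s, N), replace=False)]
    # mean distance of every point to the reference sample (Euclidean metric here)
    d = np.linalg.norm(X[:, None, :] - ref[None, :, :], axis=-1).mean(axis=1)
    return int(np.argmin(d))

# toy usage
X = np.random.default_rng(1).normal(size=(10_000, 3))
i = approx_medoid(X, s=200)
print("approximate medoid index:", i,
      "exact middleness of that point:", np.linalg.norm(X - X[i], axis=1).mean())
```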
Hill's Equations

There's a name for objects in "toroidal orbits" - Hill's equations, in the "Hill frame." This paper:

Burns, R., McLaughlin, C., Leitner, J., & Martin, M. (2000). TechSat 21: formation design, control, and simulation. In Aerospace conference proceedings, 2000 IEEE (Vol. 7, pp. 19-25).

shows the three equations below, and claims they are found here:

Hill, George William. "Researches in the lunar theory." American Journal of Mathematics 1, no. 1 (1878): 5-26.

but I don't see them in that three-equation form. I suspect these are versions from a more recent paper.

$$\ddot x - 2 \omega \dot y - 3 \omega^2 x = f_x \qquad (x \text{ is the radial direction})$$
$$\ddot y + 2 \omega \dot x = f_y \qquad (y \text{ is the orbital direction})$$
$$\ddot z + \omega^2 z = f_z \qquad (z \text{ is perpendicular to the orbital plane})$$

A right-handed triad is mentioned in Burns; the equations work with a left-handed triad, too.
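These are the Hill (Clohessy-Wiltshire) relative-motion equations; with $f_x=f_y=f_z=0$ they are linear with constant coefficients and easy to propagate numerically. A minimal sketch (assuming SciPy; the reference-orbit period is an illustrative choice) integrates a relative state in the Hill frame:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi / 5400.0        # mean motion of the reference orbit, rad/s (illustrative ~90-minute orbit)

def hill_rhs(t, s, f=(0.0, 0.0, 0.0)):
    x, y, z, vx, vy, vz = s
    fx, fy, fz = f
    ax = 2 * omega * vy + 3 * omega**2 * x + fx   # radial equation
    ay = -2 * omega * vx + fy                     # orbital-direction equation
    az = -omega**2 * z + fz                       # out-of-plane equation
    return [vx, vy, vz, ax, ay, az]

# initial relative position (m) and velocity (m/s) in the Hill frame;
# choosing vy0 = -2*omega*x0 removes the secular along-track drift (a bounded relative orbit)
s0 = [100.0, 0.0, 50.0, 0.0, -2 * omega * 100.0, 0.0]
sol = solve_ivp(hill_rhs, (0.0, 2 * 5400.0), s0, max_step=10.0, rtol=1e-9)

print("final relative position (m):", sol.y[:3, -1])
```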
Tuned Mass Dampers

A Tuned Mass Damper (TMD) is a mechanical device designed to add damping to a structure for a certain range of exciting frequencies. The extra damping will reduce the movement of the structure to an acceptable level. A tuned mass damper contains a mass that is able to oscillate in the same direction as the structure. The oscillation frequency of the mass can be tuned using springs, suspension bars, or ball transfers. When the structure starts to oscillate, the mass of the TMD will initially remain stationary due to inertia. A frictional or hydraulic component connected between the structure and the TMD mass then turns the kinetic energy of the structure into thermal energy, which results in a lower vibration amplitude of the structure. The design of a TMD depends on the oscillation frequency and mass of the structure, the direction of the movements (one horizontal direction, two horizontal directions, or vertical), and the available space.

Guarantee

Flow Engineering has more than 20 years of experience in the calculation, design, fabrication, and installation of Tuned Mass Damper (TMD) systems. Our TMD systems are applied to bridges, flagpoles, chimneys, distillation columns and other slender structures. Our TMD systems have proven themselves in practice. We guarantee that the solutions we offer ensure the reduction of unwanted vibrations to an acceptable level, thereby preventing fatigue damage.

Tuning

By cleverly designing a Tuned Mass Damper (TMD), a single device can add sufficient damping for two or more of the structure's natural frequencies, reducing the number of necessary TMDs. In order to perform such optimizations, we model the structure with a finite element package to calculate the modes of vibration of the structure. The TMD is then tuned to add at least the necessary damping for each natural frequency.

Design Considerations

In simple situations a structure with a connected Tuned Mass Damper (TMD) can be modelled as in the following figure. Here \(k\) is the spring constant, \(c\) is the damper constant, and \(m\) is the mass. Subscript \(1\) pertains to the structure and subscript \(2\) to the TMD. A TMD can significantly reduce the response of a structure, as can be seen from the following graph. The effects of varying several design parameters are given below.

Mass Ratio \(\mu\)

Increasing the mass ratio \(\mu\) (increasing the damper mass) will decrease the structural displacement. The normalized structural displacement amplitude can be computed with the formula given by J.P. Den Hartog in "Mechanical Vibrations":

$$ \frac{\left| z_{1} \right|}{x_{st}} = \sqrt{1+\frac{2}{\mu}} $$

As can be seen from the figure, Den Hartog's approach, calculating with \(\zeta_{1}=0\), is slightly conservative for steel structures (\(\zeta_{1}=0.2\%\)) at the lower mass ratios.

Damper Frequency \(f\)

The eigenfrequencies of a structure may not be known to a sufficient level of accuracy at the time that the TMDs are designed. It is then useful to define a range in which the frequency of the eigenmode to be damped is sure to reside. By designing an appropriate TMD for the entire range, the need for measuring a structure's eigenfrequencies before a TMD can be produced is negated. In the case that the structure has multiple eigenfrequencies relatively near to each other, a wide-range TMD may be used to add damping to several eigenmodes, reducing the cost of the vibration damping system.
Internal Damping Ratio \(\zeta_{2}\)

The increase in amplitude from mis-tuned internal damping can be significant. It is because of this effect that we advise changing the internal dampers at set intervals of 15 to 25 years, depending on the damper used. Our ongoing research into maintenance-free tuned mass dampers has solved this issue for linear tuned mass dampers. See our solution for linear tuned mass dampers: Magnovisco Linear Dampers.
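A minimal sketch of the two-degree-of-freedom structure-plus-TMD model described under Design Considerations (assuming NumPy; the parameter values are illustrative, not from a real design, and the optimal-tuning formulas used for $k_2$ and $c_2$ are the standard Den Hartog textbook ones, not necessarily Flow Engineering's procedure). For each excitation frequency it solves the complex steady-state equations of motion and reports the peak normalized displacement of the structure, with and without the TMD, alongside the $\sqrt{1+2/\mu}$ estimate quoted above.

```python
import numpy as np

# Illustrative parameters: structure (1) and TMD (2)
m1, k1 = 1.0e5, 4.0e6
c1 = 2 * 0.002 * np.sqrt(k1 * m1)                 # 0.2% structural damping
mu = 0.02                                          # mass ratio m2/m1
m2 = mu * m1
f_opt = 1.0 / (1.0 + mu)                           # Den Hartog optimal frequency ratio
zeta2 = np.sqrt(3 * mu / (8 * (1 + mu) ** 3))      # Den Hartog optimal TMD damping ratio
wn1 = np.sqrt(k1 / m1)
k2 = m2 * (f_opt * wn1) ** 2
c2 = 2 * zeta2 * m2 * f_opt * wn1

F = 1.0
x_st = F / k1                                      # static deflection of the structure
omegas = np.linspace(0.8, 1.2, 2001) * wn1

def amplitude(with_tmd):
    out = []
    for w in omegas:
        if with_tmd:
            A = np.array([[-w**2 * m1 + 1j * w * (c1 + c2) + k1 + k2, -(1j * w * c2 + k2)],
                          [-(1j * w * c2 + k2), -w**2 * m2 + 1j * w * c2 + k2]])
            x = np.linalg.solve(A, np.array([F, 0.0]))
            out.append(abs(x[0]))
        else:
            out.append(abs(F / (-w**2 * m1 + 1j * w * c1 + k1)))
    return np.array(out) / x_st

print("peak |z1|/x_st without TMD:", amplitude(False).max())
print("peak |z1|/x_st with TMD:   ", amplitude(True).max())
print("Den Hartog estimate sqrt(1 + 2/mu):", np.sqrt(1 + 2 / mu))
```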
Article: "On the dynamics of endomorphisms of the two-torus with one-dimensional basic sets" (О динамике эндоморфизмов двумерного тора с одномерными базисными множествами). In this paper we consider endomorphisms of a 2-manifold satisfying Axiom A. F. Przytycki obtained necessary and sufficient conditions for $\Omega$-stability of such endomorphisms. He also showed that in every neighborhood of an $\Omega$-unstable endomorphism there exists a countable number of pairwise $\Omega$-non-conjugate endomorphisms. Here we introduce an example of a one-parameter family of endomorphisms of the 2-torus that are pairwise topologically non-conjugate but $\Omega$-conjugate.

In 1978 J. Palis found a continuum of topologically non-conjugate systems in a neighbourhood of a system with a heteroclinic contact (moduli). W. de Melo and S. van Strien in 1987 described a class of diffeomorphisms with a finite number of moduli: a chain of saddles taking part in the heteroclinic contact of such a diffeomorphism includes not more than three saddles. Surprisingly, such an effect does not happen in flows. Here we consider gradient flows of the height function for an orientable surface of genus $g>0$. Such flows have a chain of $2g$ saddles. We found that the number of moduli for such flows is $2g-1$, which is a direct consequence of the sufficient conditions for topological conjugacy of such systems given in our paper. A complete topological equivalence invariant for such systems is a four-colour graph carrying the information about the relative position of its cells. Equipping the graph's edges with analytical parameters (moduli associated with the saddle connections) gives sufficient conditions for topological conjugacy of the flows.

In the present paper we consider a class of diffeomorphisms of 3-manifolds lying on the boundary of the set of gradient-like systems and differing from the latter by at most one orbit of tangency of two-dimensional separatrices. It is proved that for the diffeomorphisms under study a necessary and sufficient condition for topological conjugacy of two diffeomorphisms from this class is the coincidence of the equivalence classes of their schemes and of the moduli of stability corresponding to the tangency orbits. Keywords: topological conjugacy, heteroclinic tangencies, moduli of stability.

This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems" which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.

We consider a class $G$ of Morse-Smale diffeomorphisms on the sphere $S^n$ of dimension $n\geq 4$ such that the invariant manifolds of different saddle periodic points of any diffeomorphism from $G$ have no intersection. The dynamics of an arbitrary diffeomorphism $f\in G$ can be represented as "sink-source" dynamics, where the "sink" $A_f$ (the "source" $R_f$) is the connected union of the one- and zero-dimensional unstable (stable) manifolds.
We study the structure of the space $V_f=S^n\setminus (A_f\cup R_f)$ and the topology of the embedding in $V_f$ of the separatrices of dimension $(n-1)$. We prove that the orbit space $\widehat{V}_f=V_f/_f$ is homeomorphic to the direct product $\mathbb{S}^{n-1}\times \mathbb{S}^1$, and the projection $l_\sigma\subset \widehat{V}_f$ of the $(n-1)$-dimensional separatrix of a saddle periodic point $\sigma$ is either homeomorphic to the direct product $\mathbb{S}^{n-2}\times \mathbb{S}^1$ and bounds in $\widehat{V}_f$ a manifold homeomorphic to $\mathbb{B}^{n-1}\times \mathbb{S}^1$, or homeomorphic to a non-orientable locally trivial fibre bundle over the circle $\mathbb{S}^1$ with fibre $\mathbb{S}^{n-2}$, and at most one of the separatrix projections can be a manifold of the latter type.

A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for a manufacturing industry located in another region, where the other node station is. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by the prescribed control rule. For such a model, one must determine possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method it was possible to construct these quasi-solutions numerically and determine their rate of growth. Let us note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of solutions on a number of parameters of the model characterizing the control rule, the technologies for transportation of cargo, and the intensity of cargo arrival at the node station.

Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively k(g), be the field of k-rational functions on G, respectively g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers.
As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.

Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra $k[G]^G$ of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of $k[G]^G$ and that of the representation ring of G, and answer two of Grothendieck's questions on constructing generating sets of $k[G]^G$. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$, where T is a maximal torus of G and W the Weyl group.
Even a virtual course needs an opening line, so here it is: Take your favourite $SL_2(\mathbb{Z})$-representation. Here is mine: the permutation representation of the Mathieu group(s). Émile Léonard Mathieu is remembered especially for his discovery (in 1861 and 1873) of five sporadic simple groups named after him, the Mathieu groups $M_{11},M_{12},M_{22},M_{23}$ and $M_{24}$. These were studied in his thesis on transitive functions. He had a refreshingly direct style of writing. I’m not sure what Cauchy would have thought (Cauchy died in 1857) about this ‘acknowledgement’ in his 1861 paper in which Mathieu describes $M_{12}$ and claims the construction of $M_{24}$. Also the opening sentences of his 1873 paper are nice, something along the lines of “if no expert was able to fill in the details of my claims made twelve years ago, I’d better do it myself”. However, even after this paper opinions remained divided on the issue of whether or not he did really achieve his goal, and the matter was settled decisively by Ernst Witt connecting the Mathieu groups to Steiner systems (if I recall well from Mark Ronan’s book Symmetry and the Monster).

As Mathieu observed, the quickest way to describe these groups would be to give generators, but as these groups are generated by two permutations on 12 respectively 24 elements, we need to have a mnemotechnic approach to be able to reconstruct them whenever needed. Here is a nice approach, due to Gunther Malle in a Luminy talk in 1993 on “Dessins d’enfants” (more about them later). Consider the drawing of “Monsieur Mathieu” on the left. That is, draw the left-handed bandit picture on 6 edges and vertices, divide each edge into two and give numbers to both parts (the actual numbering is up to you, but for definiteness let us choose the one on the left). Then, $M_{12}$ is generated by the order two permutation describing the labeling of both parts of the edges $s=(1,2)(3,4)(5,8)(7,6)(9,12)(11,10)$ together with the order three permutation obtained from cycling counterclockwise around a trivalent vertex and calling out the labels one encounters. For example, the three-cycle corresponding to the ‘neck vertex’ is $(1,2,3)$ and the total permutation is $t=(1,2,3)(4,5,6)(8,9,10)$. A quick verification using GAP tells us that these elements do indeed generate a simple group of order 95040. Similarly, if you have to reconstruct the largest Mathieu group from scratch, apply the same method to the picture above or to the “ET Mathieu” drawing on the left. This picture I copied from Alexander Zvonkin‘s paper How to draw a group, as well as the computational details below.

This is all very nice and well but what do these drawings have to do with Grothendieck’s “dessins d’enfants”? Consider the map from the projective line onto itself $\mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}}$ defined by the rational map $f(z) = \frac{(z^3-z^2+az+b)^3(z^3+cz^2+dz+e)}{Kz}$ where N. Magot calculated that $a=\frac{107+7 \sqrt{-11}}{486}, b=-\frac{13}{567}a+\frac{5}{1701}, c=-\frac{17}{9}, d=\frac{23}{7}a+\frac{256}{567}, e=-\frac{1573}{567}a+\frac{605}{1701}$ and finally $K = -\frac{16192}{301327047}a+\frac{10880}{903981141}$. One verifies that this map is 12 to 1 everywhere except over the points $\{ 0,1,\infty \}$ (that is, there are precisely 12 points mapping under f to a given point of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$).
From the expression of f(z) it is clear that over 0 there lie 6 points (3 of them with multiplicity three, the other 3 with multiplicity one). Over $\infty$ there are two points, one with multiplicity 11 and one with multiplicity one. The difficult part is to compute the points lying over 1. The miraculous fact of the given values is that $f(z)-1 = \frac{-B(z)^2}{Kz}$ where $B(z)=z^6+\frac{1}{11}(10c-8)z^5+(5a+9d-7c)z^4+(2b+4ac+8e-6d)z^3+(3ad+bc-5e)z^2+2aez-be$ and hence there are 6 points lying over 1, each with multiplicity two.

Right, now consider the complex projective line $\mathbb{P}^1_{\mathbb{C}}$ as the Riemann sphere $S^2$ and mark the six points lying over 1 by a white vertex and the six points lying over 0 with a black vertex (in the source sphere). Now, lift the real interval $[0,1]$ in the target sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ to its inverse image on the source sphere. As there are exactly 12 points lying over each real number $0 \lneq r \lneq 1$, this inverse image will consist of 12 edges which are noncrossing and each end in one black and one white vertex. The obtained graph will look like the “Monsieur Mathieu” drawing above, with the vertices corresponding to the black vertices and the three points over 0 of multiplicity three corresponding to the trivalent vertices, those of multiplicity one to the three end-vertices. The white vertices correspond to mid-points of the six edges, so that we do get a drawing with twelve edges, one corresponding to each number.

From the explicit description of f(z) it is clear that this map is defined over $\mathbb{Q}(\sqrt{-11})$, which is also the smallest field containing all character values of the Mathieu group $M_{12}$. Further, the Galois group of the extension, $\mathrm{Gal}(\mathbb{Q}(\sqrt{-11})/\mathbb{Q}) = \mathbb{Z}/2\mathbb{Z}$, is generated by complex conjugation. So, one might wonder what would happen if we replaced in the definition of the rational map f(z) the value of a by $a = \frac{107-7\sqrt{-11}}{486}$. It turns out that this modified map has the same properties as $f(z)$, so again one can draw on the source sphere a picture consisting of twelve edges each ending in a white and a black vertex. If we consider the white vertices (which incidentally each lie on two edges, as all points lying over 1 are of multiplicity two) as mid-points of longer edges connecting the black vertices, we obtain a drawing on the sphere which looks like “Monsieur Mathieu” but this time as a right-handed bandit, and applying our mnemotechnic rule we obtain _another_ (non-conjugated) embedding of $M_{12}$ in the full symmetric group on 12 vertices.

What is the connection with $SL_2(\mathbb{Z})$-representations? Well, the permutation generators s and t of $M_{12}$ (or $M_{24}$ for that matter) have orders two and three, whence there is a projection from the free group product $C_2 \star C_3$ (here $C_n$ is just the cyclic group of order n) onto $M_{12}$ (respectively $M_{24}$). Next time we will say more about such free group products and show (among other things) that $PSL_2(\mathbb{Z}) \simeq C_2 \star C_3$, whence the connection with $SL_2(\mathbb{Z})$. In a following lecture we will extend the Monsieur Mathieu example to arbitrary dessins d’enfants, which will allow us to assign to curves defined over $\overline{\mathbb{Q}}$ permutation representations of $SL_2(\mathbb{Z})$ and other _cartographic groups_ such as the congruence subgroups $\Gamma_0(2)$ and $\Gamma(2)$.
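The GAP check mentioned above is easy to reproduce in other systems. A minimal sketch (assuming SymPy; its permutations are 0-indexed, so every label from the drawing is shifted down by one) builds the two generators s and t and confirms the order of the group they generate:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# s and t read off the Monsieur Mathieu drawing, labels 1..12 shifted to 0..11
s = Permutation([[0, 1], [2, 3], [4, 7], [6, 5], [8, 11], [10, 9]], size=12)
t = Permutation([[0, 1, 2], [3, 4, 5], [7, 8, 9]], size=12)

G = PermutationGroup([s, t])
print(G.order())          # expected, per the GAP check quoted above: 95040 = |M_12|
print(G.is_transitive())  # the action on the 12 labels is transitive (M_12 is in fact 5-transitive)
```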
Bound state solutions of Schrödinger-Poisson system with critical exponent

School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China

We consider the Schrödinger-Poisson system with the critical exponent $2^*=6$:
$$\tag{P} \begin{cases}- \Delta u+V(x)u+K(x)\varphi u=|u|^{2^*-2}u, &x\in \mathbb{R}^3,\\ -\Delta \varphi=K(x)u^2, &x\in \mathbb{R}^3,\end{cases}$$
where $K\in L^{\frac{1}{2}}(\mathbb{R}^3)$ and $V\in L^{\frac{3}{2}}(\mathbb{R}^3)$; the assumptions on the potentials involve the quantity $|V|_{\frac{3}{2}}+|K|_{\frac{1}{2}}$.

Mathematics Subject Classification: Primary: 35J20; Secondary: 35J6.

Citation: Xu Zhang, Shiwang Ma, Qilin Xie. Bound state solutions of Schrödinger-Poisson system with critical exponent. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1): 605-625. doi: 10.3934/dcds.2017025
Answer

80 degrees

Work Step by Step

We know that there are 180 degrees per pi radians. Thus, we find: $$=\frac{4\pi}{9} \ radians \times \frac{180^{\circ}}{\pi\ radians}=80^{\circ}$$