When a differential equation is of the form \(y' = f(x)\), we can just integrate: \(y = \int f(x)\, dx + C\). Unfortunately this method no longer works for the general form of the equation \(y' = f(x, y)\). Integrating both sides yields \[y = \int f(x, y)\, dx + C.\] Notice the dependence on \(y\) in the integral.

1.3.1 Separable equations

Let us suppose that the equation is separable. That is, let us consider \[y' = f(x)g(y),\] for some functions \(f(x)\) and \(g(y)\). Let us write the equation in the Leibniz notation \[\frac{dy}{dx} = f(x)g(y).\] Then we rewrite the equation as \[\frac{dy}{g(y)} = f(x)\, dx.\] Now both sides look like something we can integrate. We obtain \[\int \frac{dy}{g(y)} = \int f(x)\, dx + C.\] If we can find closed form expressions for these two integrals, we can, perhaps, solve for \(y\).

Example \(\PageIndex{1}\): Take the equation \[y' = xy.\] First note that \(y = 0\) is a solution, so assume \(y \ne 0\) from now on. Write the equation as \(\frac{dy}{dx} = xy\), then \[\int \frac{dy}{y} = \int x\, dx + C.\] We compute the antiderivatives to get \[\ln \left \vert y \right \vert = \frac{x^2}{2} + C,\] or \[\left \vert y \right \vert = e^{\frac{x^2}{2}} e^{C} = De^{\frac{x^2}{2}},\] where \(D > 0\) is some constant. Because \(y = 0\) is a solution and because of the absolute value we actually can write \(y = De^{\frac{x^2}{2}}\) for any number \(D\) (including zero or negative). We check: \[y' = Dxe^{\frac{x^2}{2}} = x \left ( De^{\frac{x^2}{2}} \right ) = xy.\]

We should be a little bit more careful with this method. You may be worried that we were integrating in two different variables; we seemed to be doing a different operation to each side. Let us work this method out more rigorously. \[\frac{dy}{dx} = f(x)g(y)\] We rewrite the equation as follows.
Note that \(y = y(x)\) is a function of \(x\) and so is \(\frac{dy}{dx}\)! \[\frac{1}{g(y)} \frac{dy}{dx} = f(x)\] We integrate both sides with respect to \(x\): \[\int \frac{1}{g(y)} \frac{dy}{dx}\, dx = \int f(x)\, dx + C.\] We can use the change of variables formula: \[\int \frac{dy}{g(y)} = \int f(x)\, dx + C.\] And we are done.

1.3.2 Implicit solutions

It is clear that we might sometimes get stuck even if we can do the integration. For example, take the separable equation \[y' = \frac{xy}{y^2 + 1}.\] We separate variables, \[\frac{y^2 + 1}{y}\, dy = \left ( y + \frac{1}{y} \right ) dy = x\, dx.\] We integrate to get \[\frac{y^2}{2} + \ln \left \vert y \right \vert = \frac{x^2}{2} + C,\] or perhaps the easier looking expression (where \(D = 2C\)) \[y^2 + 2\ln \left \vert y \right \vert = x^2 + D.\] It is not easy to find the solution explicitly, as it is hard to solve for \(y\). We, therefore, leave the solution in this form and call it an implicit solution. It is still easy to check that an implicit solution satisfies the differential equation. In this case, we differentiate with respect to \(x\) to get \[y' \left ( 2y + \frac{2}{y} \right ) = 2x.\] It is simple to see that the differential equation holds. If you want to compute values for \(y\), you might have to be tricky. For example, you can graph \(x\) as a function of \(y\), and then flip your paper. Computers are also good at some of these tricks. We note that the above equation also has the solution \(y = 0\). The general solution is \(y^2 + 2\ln \left \vert y \right \vert = x^2 + C\) together with \(y = 0\). These outlying solutions such as \(y = 0\) are sometimes called singular solutions.
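To see concretely that an implicit solution can still be used numerically (a sketch of my own, not part of the text): solve \(y^2 + 2\ln y = x^2 + D\) for \(y>0\) by bisection and compare a difference quotient of the resulting \(y(x)\) against the right-hand side \(xy/(y^2+1)\) of the differential equation.

```python
import math

def y_of_x(x, D):
    """Solve y^2 + 2*ln(y) = x^2 + D for y > 0 by bisection.

    The left-hand side is strictly increasing in y, so bisection works."""
    target = x * x + D
    g = lambda y: y * y + 2 * math.log(y)
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

D = 1.0
x = 0.7
h = 1e-6
y = y_of_x(x, D)
# Numerical derivative of the implicitly defined y(x)
dy_dx = (y_of_x(x + h, D) - y_of_x(x - h, D)) / (2 * h)
print(abs(dy_dx - x * y / (y * y + 1)))  # should be tiny
```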
Example \(\PageIndex{2}\): Solve \(x^2y' = 1 - x^2 + y^2 - x^2y^2\), \(y(1) = 0\). First factor the right-hand side to obtain \[x^2y' = \left ( 1- x^2 \right ) \left ( 1 + y^2 \right ).\] We separate variables, integrate, and solve for \(y\): \[\frac{y'}{1 + y^2} = \frac {1 - x^2}{x^2},\] \[\frac{y'}{1 + y^2} = \frac {1}{ x^2} -1,\] \[\arctan(y) = -\frac{1}{x} - x + C,\] \[y = \tan \left( -\frac{1}{x} - x + C \right ).\] The initial condition \(y(1) = 0\) gives \(0 = \tan(C - 2)\), so we can take \(C = 2\) and \(y = \tan \left( -\frac{1}{x} - x + 2 \right )\).

Example \(\PageIndex{3}\): Bob made a cup of coffee, and Bob likes to drink his coffee only once it has cooled to 60 degrees so it will not burn him. Initially at time \(t = 0\) minutes, Bob measured the temperature and the coffee was 89 degrees Celsius. One minute later, Bob measured the coffee again and it was 85 degrees. The temperature of the room (the ambient temperature) is 22 degrees. When should Bob start drinking? Let \(T\) be the temperature of the coffee, and let \(A\) be the ambient (room) temperature. Newton’s law of cooling states that the rate at which the temperature of the coffee is changing is proportional to the difference between the ambient temperature and the temperature of the coffee. That is, \[\frac{dT}{dt} = k(A - T),\] for some constant \(k\). For our setup \( A = 22\), \(T(0) = 89\), \(T(1) = 85\). We separate variables and integrate (let \(C\) and \(D\) denote arbitrary constants): \[\frac{1}{T -A} \frac {dT}{dt} = -k,\] \[\ln (T - A) = -kt + C, \, \, \, \, \, \left ( \text {note that } T - A > 0 \right )\] \[ T - A = De^{-kt},\] \[ T = A + De^{-kt}.\] Plugging in the first condition gives \(89 = 22 + D\), so \(D = 67\). The second condition gives \(85 = 22 + 67e^{-k}\), so \(k = \ln \frac{67}{63} \approx 0.0616\). Solving \(60 = 22 + 67e^{-kt}\) gives \(t = \frac{\ln(67/38)}{k} \approx 9.21\) minutes, so Bob can start drinking just over nine minutes after making the coffee.

Example \(\PageIndex{4}\): Find the general solution of \(y' = \frac{-xy^2}{3}\) (including singular solutions). First note that \(y = 0\) is a (singular) solution. Now assume \(y \ne 0\) and write \[ -\frac {3}{y^2} y' = x ,\] \[ \frac {3}{y} = \frac {x^2}{2} + C,\] \[ y = \frac {3}{ \frac{x^2}{2} + C} = \frac {6}{x^2 + 2C}.\]
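Example 3's numbers can be checked with a few lines of Python (my sketch, not part of the text), assuming the solution form \(T = A + De^{-kt}\) derived above:

```python
import math

A = 22.0              # ambient temperature (degrees C)
T0, T1 = 89.0, 85.0   # measured at t = 0 and t = 1 minute
target = 60.0         # Bob's preferred drinking temperature

# T(t) = A + D*exp(-k t); fit D and k to the two measurements
D = T0 - A                          # 67
k = math.log((T0 - A) / (T1 - A))   # ln(67/63)

# Solve A + D*exp(-k t) = target for t
t = math.log(D / (target - A)) / k
print(round(t, 2))  # about 9.21 minutes
```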
UNIQUE FACTORIZATION, FERMAT’S LAST THEOREM, BEAL’S CONJECTURE

Abstract. In this paper the following statement of Fermat's Last Theorem is proved: if $x, y, z$ are positive integers, $\pi$ is an odd prime, and $z^\pi=x^\pi+y^\pi$, then $x, y, z$ are all even. Also proved in this paper is Beal's conjecture: the equation $z^\xi=x^\mu+y^\nu$ has no solution in relatively prime positive integers $x, y, z$ with $\xi, \mu, \nu$ primes at least $3$.

Journal of Progressive Research in Mathematics, 10(1), 1434-1439. Retrieved from http://scitecresearch.com/journals/index.php/jprm/article/view/936
TL;DR: "Scientifically correct" (according to current established science) and "faster-than-light travel" cannot be used in the same context without some form of negation. What you are asking for is not possible within the boundaries of science as we know it.

Here's why: our best model for this type of effect, insofar as I know, is special and general relativity. Special relativity postulates that colinear velocities are added according to the formula $$ s = \cfrac{v+u}{1 + \cfrac{vu}{c^2}} $$ for an initial velocity $v$ and a total velocity change $u$ (accumulated over some period of time), yielding a final velocity $s$. For small values of $v$ and $u$, this behaves like we are used to, because for such values, the fraction $\frac{vu}{c^2}$ is very small, so the term $1 + \frac{vu}{c^2}$ is very close to 1, giving $s \approx v+u$. Of course, in some situations, even with everyday velocities this approximation might not be good enough.

However, look what happens if we set $v = 0.90c$ and $u = 0.10c$ (meaning that in an inertial reference frame, our initial velocity is 0.90 times the speed of light, and we increase our velocity by 0.10 times the speed of light). Intuitively, the velocity would come out as $(0.90 + 0.10)c = c$, but it turns out that this is not the case at all. Rather, using units of $c$ for simplicity: $$ s = \cfrac{0.90 + 0.10}{1 + \cfrac{0.90 \times 0.10}{1^2}} \approx 0.9174 $$ See what happens? In an inertial reference frame, our velocity only rose from $0.900c$ to about $0.917c$, an increase of 1.9%, even though we tried to raise the velocity by 11% ($0.10c$ out of $0.90c$).

This effect becomes even more pronounced as your initial velocity approaches $c$ ($v \to c$). For example, look what happens if we are moving at $0.99c$ and increase our velocity by $0.10c$ (yes, I really mean that): $$ s = \cfrac{0.99 + 0.10}{1 + \cfrac{0.99 \times 0.10}{1^2}} \approx 0.9918 $$ for a 0.18% increase for the same effort that got us 1.9% starting at 90% of $c$.
And of course, in the real world, these are both absurdly high values for $u$, reminiscent of instantaneous acceleration. Instead, we should be working with $u \to 0$ (because in the real world, the time over which we measure the velocity change goes to 0), but since that's difficult to show in a single equation, I'll settle for $u = 10^{-12}c \approx 0.3~\text{mm/s}$, which isn't a totally unrealistic change of velocity over a short period of time for something resembling a real-world device trying to propel itself. Now look what we get if we start out at $0.90c$: $$ s = \cfrac{0.90 + 10^{-12}}{1 + \cfrac{0.90 \times 10^{-12}}{1^2}} = 0.900~000~000~000~189~999... $$ Our velocity increase, which we tried to make $\frac{10^{-12}}{0.90} \approx 1 \times 10^{-12}$, became $\frac{0.90000000000019 - 0.90}{0.90} \approx 2 \times 10^{-13}$. We only got 1/5 of the increase that we spent the effort for, and at 90% of $c$, we are still a good long way away from $c$. It only gets worse from there.

Eventually, this means that the energy cost of increasing your velocity grows without bound as you approach $c$. If you work the math all the way (the relativistic kinetic energy is $(\gamma - 1)mc^2$, with $\gamma = 1/\sqrt{1 - v^2/c^2}$), you end up with an energy requirement that grows toward infinity as you get closer and closer to the speed of light. Because instantaneous velocity changes are not possible (because of inertia, for one thing), you can't simply "jump past" the difficult part of the acceleration curve. Because your spacecraft will, at every instant, have an instantaneous velocity (along some vector) and an instantaneous acceleration ($\vec{a} = \Delta\vec{v} / \Delta t$ for some $\Delta t \to 0$), you will thus only ever be able to (with humongous energy expenditure) approach the speed of light, but you will never be able to reach the speed of light.
As your velocity increases, the marginal utility of any given acceleration (within the local frame of reference) decreases; you get less and less (inertial reference frame) acceleration out of any given amount of effort. Because inertial reference frames are what we are generally concerned with when going places, this means you work ever harder but get ever less utility for your efforts. If you want a single formula that explains why faster-than-light travel is impossible in the real world as we currently understand it, the mass-energy equivalence $E=mc^2$ (as suggested by AndreiROM) isn't what you are looking for (in fact, it might even, to a limited extent, be your friend, if you can figure out how to do the mass-energy conversion); rather, the one you want is the relativistic colinear velocity addition formula and an understanding of how it behaves as $v \to c$.
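The velocity-addition arithmetic above is easy to reproduce; a minimal sketch (the function name is mine), working in units of $c$:

```python
def add_velocities(v, u, c=1.0):
    """Relativistic addition of colinear velocities v and u."""
    return (v + u) / (1 + v * u / c**2)

# The worked examples from the text (velocities in units of c):
print(add_velocities(0.90, 0.10))   # ~0.9174, not 1.0
print(add_velocities(0.99, 0.10))   # ~0.9918
print(add_velocities(0.90, 1e-12))  # ~0.90000000000019
```

However large the inputs below $c$, the result stays below $c$.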
Degree $n$: $40$
Transitive number $t$: $19$
Group: $C_5\times D_4:C_2$
Parity: $1$
Primitive: No
Nilpotency class: $2$
Generators: (3,4)(7,8)(11,12)(13,14)(19,20)(23,24)(25,26)(29,30)(33,34)(39,40), (1,30,18,7,35,23,10,39,28,13)(2,29,17,8,36,24,9,40,27,14)(3,31,19,5,34,21,11,37,25,15)(4,32,20,6,33,22,12,38,26,16), (1,34,28,19,10,3,35,25,18,11)(2,33,27,20,9,4,36,26,17,12)(5,40,31,24,15,8,37,29,21,14)(6,39,32,23,16,7,38,30,22,13)
$|\mathrm{Aut}(F/K)|$: $20$

|G/N| Galois groups for stem field(s):
2: $C_2$ x 7
4: $C_2^2$ x 7
5: $C_5$
8: $C_2^3$
10: $C_{10}$ x 7
16: $Q_8:C_2$

Resolvents shown for degrees $\leq 10$:
Degree 2: $C_2$ x 3
Degree 4: $C_2^2$
Degree 5: $C_5$
Degree 8: $Q_8:C_2$
Degree 10: $C_{10}$ x 3
Degree 20: 20T3

There are no siblings with degree $\leq 10$. A number field with this Galois group has no arithmetically equivalent fields. There are 50 conjugacy classes of elements. Data not shown.

Order: $80=2^{4} \cdot 5$
Cyclic: No
Abelian: No
Solvable: Yes
GAP id: [80, 48]
Character table: Data not available.
I have faced some difficulties with the following integral: $$ I=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\int_{0}^{\infty}dr~r^2\frac{3x^2y^2\cos(u r \sin\theta \cos\phi)\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}\mathrm e^{-\frac{r^2}{2}} \tag{1} $$ where $x$, $y$, and $u$ are real positive constants. I tried at least two ways to solve this integral.

First attempt: I began with the $r$ integral. Using Mathematica, $$ I=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\frac{3x^2y^2(1-u^2\sin^2\theta\cos^2\phi)\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}\mathrm e^{-\frac{u^2}{2}\sin^2\theta\cos^2\phi} \tag{2} $$ After that, I looked for a solution of the $\phi$ integral. My best attempt was: $$ I_\phi(x,y,u,\theta)=\frac{2}{B}\left[B\left(\frac{1}{2}\right)F_1\left(\frac{1}{2},1,-;1;\nu,-\frac{a}{2}\right)-aB\left(\frac{1}{2}\right)F_1\left(\frac{3}{2},1,-;2;\nu,-\frac{a}{2}\right)\right], $$ where $B=x^2\sin^2\theta+x^2y^2\cos^2\theta$, $a=u^2\sin^2\theta$, and $\nu=\frac{x^2-y^2}{x^2+x^2y^2\cot^2\theta}$. In this way, the final result is something like this: $$ I= \int_{0}^{\pi} \mathrm d \theta~3x^2y^2\sin\theta \cos^2\theta~ I_\phi(x,y,u,\theta). \tag{3} $$ Eq. $(3)$ cannot be further simplified in general, and this is the final result.

Second attempt: To avoid the hypergeometric function $F_1$, I tried to start with the $\phi$ integral. In this case, my initial problem is an integral of the form $$ \int_{0}^{2\pi} \mathrm d \phi \frac{\cos(A \cos\phi)}{a^2\cos^2\phi+b^2\sin^2\phi}. \tag{4} $$ Integral $(4)$ can be solved by series (see Vincent's answer and Jack's answer). However, those solutions, at least for me, do not have a closed form. This is my final step on this second attempt :( What is the point? It turns out that someone has managed to solve the integral $(1)$, at least the integrals in $r$ and $\phi$.
The final result found by this person was: $$ I_G=\frac{12 \pi x~y}{(1-x^2)^{3/2}}\int_{0}^{\sqrt{1-x^2}} \mathrm dk \frac{k^2 \exp\left(-\frac{u^2}{2}\frac{x^2k^2}{(1-x^2)(1-k^2)}\right)}{\sqrt{1-k^2}\sqrt{1-k^2\frac{1-y^2}{1-x^2}}}, $$ where, I believe, $k=\sqrt{1-x^2}\cos\theta$. As you can see from the following Mathematica code,

IG[x_, y_, u_] := Sqrt[Pi/2] NIntegrate[(12 Pi x y)/(1 - x^2)^(3/2) (v^2 Exp[-(u^2 x^2 v^2)/(2 (1 - x^2) (1 - v^2))])/(Sqrt[1 - v^2] Sqrt[1 - v^2 (1 - y^2)/(1 - x^2)]), {v, 0, Sqrt[1 - x^2]}]
IG[.3, .4, 1]
(* 4.53251 *)

I[x_, y_, u_] := NIntegrate[(r^2 Sin[a] Cos[u r Sin[a] Cos[b]] 3 x^2 y^2 Cos[a]^2 Exp[-r^2/2])/((y^2 Cos[b]^2 + x^2 Sin[b]^2) Sin[a]^2 + x^2 y^2 Cos[a]^2), {r, 0, Infinity}, {a, 0, Pi}, {b, 0, 2 Pi}]
I[.3, .4, 1]
(* 4.53251 *)

the integrals $I$ and $I_G$ are equal. Indeed they should be, since they emerge from the same physical problem. So, my question is: what are the steps that take the integral $I$ to the integral $I_G$?

Edit: Since my question has not been solved yet (I think because it is a tough question), I will show a particular case of the integral $I$, letting $u=0$. I hope this helps you to help me. In this case, the $r$ integral in $(1)$ is trivial and the integral takes the form $$ I_P=\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta~\sin\theta\frac{3x^2y^2\cos^2\theta}{(y^2\cos^2\phi+x^2\sin^2\phi)\sin^2\theta+x^2y^2\cos^2\theta}. \tag{5} $$ The $\phi$ integral can be done with the help of Eq. 3.642.1 in Gradshteyn and Ryzhik's tables of integrals. Thereby, $I_P$ takes the form $$ I_P=3xy\int_{0}^{\pi}d\theta\frac{\sin\theta\cos^2\theta}{\sqrt{1+(x^2-1)\cos^2\theta}\sqrt{1+(y^2-1)\cos^2\theta}}. \tag{6}$$ Now the change of variable $k=\sqrt{1-x^2}\cos\theta$ brings expression $(6)$ to the form $$ I_P= \frac{(const)\, x~y}{(1-x^2)^{3/2}}\int_{0}^{\sqrt{1-x^2}} \mathrm dk \frac{k^2}{\sqrt{1-k^2}\sqrt{1-k^2\frac{1-y^2}{1-x^2}}}. $$ Did you notice how $I_G$ and $I_P$ are similar?
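As a partial sanity check of the $u=0$ case (my own sketch; under the stated substitution the unspecified constant works out to 6), a stdlib-only midpoint rule confirms that $(6)$ and its $k$-form agree numerically:

```python
import math

def midpoint(f, a, b, n=20000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

x, y = 0.3, 0.4  # sample values, as in the Mathematica check

# theta form, Eq. (6)
def f_theta(t):
    c2 = math.cos(t) ** 2
    return (math.sin(t) * c2 /
            (math.sqrt(1 + (x * x - 1) * c2) * math.sqrt(1 + (y * y - 1) * c2)))

I6 = 3 * x * y * midpoint(f_theta, 0.0, math.pi)

# k form after k = sqrt(1 - x^2) cos(theta); the constant comes out to 6
r = (1 - y * y) / (1 - x * x)
def f_k(k):
    return k * k / (math.sqrt(1 - k * k) * math.sqrt(1 - r * k * k))

Ik = 6 * x * y / (1 - x * x) ** 1.5 * midpoint(f_k, 0.0, math.sqrt(1 - x * x))

print(I6, Ik)  # the two should agree to several decimals
```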
Do you think a similar approach can be applied to my original problem? Please let me know.

Edit 2: The integral $(1)$ is also evaluated in Appendix A.4 of this thesis. However, there he used cylindrical symmetry.

Edit (bounty ended): My bounty ended and, unfortunately, I don't have enough reputation to offer another one. My question has not been solved. Perhaps solving it requires some physical consideration. Anyway, I thank all who helped me. If I solve this, I will put the solution here.

Update: I've solved this problem by applying the Schwinger proper-time substitution: $$\frac{1}{q^2}=\int_{0}^{\infty}\mathrm{d}\xi~\mathrm{e}^{-q^2\xi}.$$
In the paper ``Analytic Continuation Of Chern-Simons Theory'' (arXiv:1001.2933), Witten postulates that the hyperbolic volume of a 3-dimensional manifold coincides with the value of the Chern-Simons functional of the hyperbolic connection (see section 5.3.4). Let me state this more precisely.

Let $M$ be a three-dimensional spin manifold. Consider a Riemannian metric $\rho$ on $M$ with constant negative curvature $-1$. The universal cover ($\tilde{M},\tilde{\rho}$) is isometric to the hyperbolic space (${H^3},\rho^{st}$). The fundamental group of $M$ acts on $H^3$ by isometries. It therefore defines a homomorphism $g:\pi_1(M)\to \text {Isom}(H^3)=\text {PSL}(2,{\mathbb{C}})$.

Def. The hyperbolic connection $A_{\rho}$ on the trivial $\text{PSL}(2,\mathbb{C})$-bundle $E$ on $M$ is a flat connection with monodromy representation $g$.

Rem. The inclusion $\text{SO}(3)\subset \text{PSL}(2,\mathbb{C})$ is a homotopy equivalence. Since $M$ has a spin structure, we can lift $A_{\rho}$ to an $\text{SL}(2,\mathbb{C})$-connection.

Def. The value of the Chern-Simons functional on an $\text{SL}(2,\mathbb{C})$-connection $A$ in the trivial bundle on $M$ is given by \begin{equation}CS(A):=\int_{M}tr[A,dA]+\frac{2}{3}tr[A,A\wedge A].\end{equation} Here $tr[\cdot,\cdot]$ is defined as follows: \begin{equation}tr[\cdot,\cdot]: \Omega^n(M, g)\otimes\Omega^m(M, g) \xrightarrow{\wedge} \Omega^{m+n} (M, g\otimes g) \xrightarrow{tr} \Omega^{m+n} (M,\mathbb{C}), \end{equation} where $g$ is a simple Lie algebra and the trace in the last arrow is the standard non-degenerate invariant symmetric bilinear form on $g$.

Rem. A gauge transformation $s \in \Omega^0(M,E)$ changes $CS$ by an integer multiple of $2\pi$: $CS(A)-CS(s^*A)\in 2\pi \mathbb{Z}$.

Finally, what I am seeking is a reference for the formula (which seems to be well known) \begin{equation} 2\pi \text{ Im } \text{CS}(A_{\rho})=\text{Vol}_{\rho}. \end{equation}
(I'm new to Poisson processes, so please edit if my terminology is incorrect.)

Edit: per the comments, here is a (more) general version of the originally posted problem (which is now at the bottom, below the line); I hope it's more clear/helpful. The context is approximating a distribution defined by the $\arg\min$ of a (non-Poisson) process in Theorem 1 of this paper.

Setup: I want to know the first two moments of $\Delta_m \stackrel{d}{=}\arg\min_u Z(u)$, where $u=(u_1,\ldots,u_d)'\in\mathbb R^d$. The $d$ elements correspond to regressors, where the first element is one (the constant term) and the other elements are regressors distributed according to measure $\mu$. Centering the non-constants at zero, $$Z(u)\equiv -(mu_1,0,0,\ldots,0)' + D + \sum_{k\ne0} \int_0^{u'X_k}\!\left[1(\Gamma_k\le s)- 1(\Gamma_k<0)\right]\,\mathrm ds,$$ where $m$ is a positive integer, the index $k$ takes nonzero integer values, the $X_k$ are iid with distribution $\mu$, $\Gamma_k=E_1+\cdots+E_k$ for $k>0$ with $E_i\stackrel{iid}{\sim}\rm{Exp}(1)$ independent of all $X_k$, $\Gamma_{-k}=-(E_{-1}+\cdots+E_{-k})$ for $k>0$, and $D$ is an independent random vector with mean zero and known (if a bit complicated) distribution that I hope I can lay aside for now and add back later.

The original paper says the points $\{(\Gamma_k,X_k')':k\ne0\}$ are points of a Poisson process with mean measure $$m(d\epsilon,dx)\equiv \lambda(d\epsilon)\mu(dx),$$ where $\lambda$ is Lebesgue measure. With $d=2$, we can graph the points $(X_k,\Gamma_k)$ in two dimensions, using only the non-constant element of $X_k$. Picking $(u_1,u_2)'$ determines a line in this plane, $\Gamma=u_1+u_2X$. The first first-order condition (FOC) says that the number of points in the green region minus the number in the red region equals $m$: http://econ.ucsd.edu/~dkaplan/images/FOC1_v4_small.png The second FOC is similar, but weights each point by its $X_k$ value, and should equal zero.
Or if you weight the points by $|X_k|$, the red and green for $X_k<0$ switch: http://econ.ucsd.edu/~dkaplan/images/FOC2_v1_small.png Note: these are not exact equalities, because the integral term in $Z(u)$ is discontinuous; the FOCs are more like "the smallest $u$ such that these are $\ge m$ and $\ge0$." Question: what are the first two moments of $\Delta_m\stackrel{d}{=}\arg\min_u Z(u)$? I think the mean is $(\Gamma_m,0)$, but can't prove it; I have no ideas for the variance. Original post, for special case $d=1$. Original setup: consider a Poisson process on $[0,\infty)$, with points $\{ \Gamma_k\}_{k=1}^\infty$, $\Gamma_k=E_1+\cdots+E_k$, $E_i\stackrel{iid}{\sim}\rm{Exponential}(1)$. Consider $u=\inf\{t:1\le\sum_{k=1}^\infty \mathbb 1(\Gamma_k\le t)\}$, where $1(\cdot)$ is the indicator function (one if true, zero if false). In this special case, $u=\Gamma_1=E_1$, and thus $u\sim\rm{Exp}(1)$. We have a closed form solution for $u$, and the first two (central) moments are $\mathbb E(u)=1$ and $\rm{Var}(u)=1$. Original question: can we derive $\mathbb E(u)=1$ and $\rm{Var}(u)=1$ without using the closed form of $u$? (In my general problem, there is no closed form.) So we know $u=\inf\{t:1\le\sum_{k=1}^\infty \mathbb 1(\Gamma_k\le t)\}$, but we can't just use $u=\Gamma_1$ to calculate moments. Original notes/thoughts: As a reminder, the mean measure of the considered Poisson process is $m(A)=\lambda(A)$, where $\lambda$ is Lebesgue measure (e.g., $\lambda([a,b])=b-a$; just the total length of set $A\subset [0,\infty)$). This means that the expected number of points in any interval is equal to the length of the interval. Also, the probability of one event occurring in $[t,t+dt]$ is $dt$ since the rate is one here. Also potentially helpful: define the random counting measure $\hat N(t)=\sum_{k=1}^\infty 1(\Gamma_k\le t)$. Then, $\mathbb E(\hat N(t))=\lambda([0,t])=t$, and $u=\inf\{t:1\le \hat N(t)\}$. 
For the first moment, I noticed that in this case, $\inf\{t:1\le \mathbb E(\hat N(t))\}=\inf\{t:1\le t\}=1=\mathbb E(u)$. That is, solving for $u$ after plugging in the mean measure happens to yield the mean of $u$. This struck me as not true generally: $\mathbb E(u(X))\ne u(\mathbb E(X))$. But maybe some property of the Poisson process and/or $u$'s characterization means that $u$ is a "linear" function of the process, so this is generally true here? For the second moment, I haven't made any progress. I tried thinking about the expectation $\mathbb E\left[(u-\mathbb E(u))^2\right]=\mathbb E[(u-1)^2]$, but don't know how I could get that without cheating. I also wondered if there is a "variance of the process" I could use, if the plug-in approach for the first moment turns out to be valid. But I am basically stuck. Last, recall that either $\rm{Var}(u)$ or $\mathbb E(u^2)$ is sufficient, since $\rm{Var}(u)=\mathbb E(u^2)-(\mathbb E(u))^2$.
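A quick Monte Carlo check of the original special case (my sketch, stdlib only): simulate the arrival times $\Gamma_k$, compute $u=\inf\{t:1\le\hat N(t)\}$ from its counting characterization, and compare the sample mean and variance with 1.

```python
import random

random.seed(0)
N = 200_000

def sample_u():
    """One draw of u = inf{t : 1 <= sum_k 1(Gamma_k <= t)} for a unit-rate
    Poisson process with arrivals Gamma_k = E_1 + ... + E_k, E_i ~ Exp(1).
    The counting measure first reaches 1 at an arrival time, so we step
    through arrivals until the count condition is met."""
    gamma = 0.0
    count = 0
    while count < 1:          # stop as soon as N-hat(t) would reach 1
        gamma += random.expovariate(1.0)
        count += 1
    return gamma

samples = [sample_u() for _ in range(N)]
mean = sum(samples) / N
var = sum((s - mean) ** 2 for s in samples) / N
print(mean, var)  # both should be close to E(u) = Var(u) = 1
```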
Imagine this game. I pick a permutation $p$ of $1..n$ and give you an oracle. When the oracle is queried with any sequence of $n$ numbers $\in 1..n$, it masks each number by applying some unknown bijection $f$ and then permutes the masked values using $p$. Your goal is to repeatedly query the oracle to recover $p$ with as few queries as possible. A $\Theta(\log n)$ solution is obvious. First query $1, 2, 3, 4, \ldots$ and $2, 1, 4, 3, \ldots$, the second query reversing each of the $\lfloor \frac{n}{2} \rfloor$ pairs. As swapping a pair in the input swaps the corresponding pair in the output, we identify the $\lfloor \frac{n}{2} \rfloor$ pairs that they map to, albeit with unknown correspondence. (If $n$ is odd, the $n$th item is the one remaining in place, so $p(n)$ is known.) Then query $1, 1, 3, 3, \ldots$ to identify which item in each output pair corresponds to the first item of an input pair. With 3 queries we have reduced the problem to one of size $\lfloor \frac{n}{2} \rfloor$ by querying from now on with identical pairs of items; recurse. We use $3\log n \pm O(1)$ queries. Can we do better? For example, is there a strategy using $\log n \pm O(1)$ queries? Regarding lower bounds, if $q$ queries is optimal for $\frac{n}{2}$, we can run the strategy twice in parallel, side-by-side, in $q$ queries for $n$, but we still lack information about which $\frac{n}{2}$ of the input maps to the corresponding $\frac{n}{2}$ elements of the output, suggesting we need at least one additional query for $n$. This argument seems to imply $\Omega(\log n)$ query complexity - can it be made rigorous?
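The first step of the strategy (identifying the output pairs from two queries) can be sketched as follows. The oracle convention (output slot $i$ receives the masked value of input slot $p[i]$) and all names are my assumptions, with positions and values 0-indexed:

```python
import random

def make_oracle(p, f):
    """Hidden permutation p and hidden value-bijection f, both on 0..n-1.
    Assumed convention: output slot i gets the masked value of input slot p[i]."""
    def oracle(q):
        masked = [f[v] for v in q]
        return [masked[p[i]] for i in range(len(p))]
    return oracle

rng = random.Random(42)
n = 8
p = list(range(n)); rng.shuffle(p)
f = list(range(n)); rng.shuffle(f)
oracle = make_oracle(p, f)

# Query 1: 0,1,2,3,...; Query 2: each adjacent pair swapped.
q1 = list(range(n))
q2 = [i ^ 1 for i in range(n)]   # 1,0,3,2,5,4,...
a, b = oracle(q1), oracle(q2)

# Output positions i and j form a pair iff swapping the inputs swapped them:
# b[i] == a[j] and b[j] == a[i].
pos_of = {v: i for i, v in enumerate(a)}
pairs = sorted({tuple(sorted((i, pos_of[b[i]]))) for i in range(n)})

# Each recovered output pair must be the image of one input pair {2k, 2k+1}.
for i, j in pairs:
    assert {p[i], p[j]} in [{2 * k, 2 * k + 1} for k in range(n // 2)]
print(pairs)
```

A third query with identical pairs (0,0,2,2,...) then disambiguates within each pair, as described above.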
Partial Fractions

The integration technique of partial fractions is a way to integrate rational functions of the form $f(x)=\frac{P(x)}{Q(x)}$. (Recall that $f$ is a rational function when $P(x)$ and $Q(x)$ are polynomials.) When considering $\int\frac{P(x)}{Q(x)}\,dx$, first look for a simple substitution, as with any integral. If you see a way to use integration by parts, or even trig substitution, you should probably try that first, as those methods can be a little simpler. Sometimes partial fraction decomposition is the obvious and only choice. For example, $\displaystyle\frac{2}{x^2-1} = \frac{1}{x-1} - \frac{1}{x+1}$.

The process: 1) If the degree of $P(x)$ is greater than or equal to the degree of $Q(x)$, use long division to write $\frac{P(x)}{Q(x)}=S(x)+\frac{R(x)}{Q(x)}$, so that the degree of the remainder $R$ is less than the degree of $Q$. 2) To decompose $\frac{P(x)}{Q(x)}$ or $\frac{R(x)}{Q(x)}$ (if you did long division), first factor $Q(x)$ into linear and irreducible quadratic factors. 3) Write one term $\frac{A}{(ax+b)^k}$ for each power $k$ of each linear factor and one term $\frac{Bx+C}{(ax^2+bx+c)^k}$ for each power $k$ of each irreducible quadratic factor, then solve for the constants. Finally, we have to integrate the resulting terms. Linear factors give logs. Substitution or trig substitution will usually take care of the other factors.
Example: $\displaystyle\int \frac{x^3}{x^2- 1}\,dx$ DO: Justify each equal sign below with the work needed to get from the LHS to the RHS of the equal sign. $\displaystyle \int \frac{x^3}{x^2- 1}\,dx = \int \left(x + \frac{x}{x^2- 1}\right)\,dx= \int x\,dx+ \frac{1}{2}\int\left( \frac{1}{x-1} + \frac{1}{x+1} \right)\,dx $ $\displaystyle\quad\qquad\qquad= \frac{x^2}{2}+ \frac{1}{2}\Bigl(\ln\lvert x-1\rvert +\ln\lvert x+1\rvert \Bigr)+ C$ DO: Just for practice, evaluate this integral using trig substitution. Which method do you prefer here?
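A stdlib-only numeric sanity check (my own sketch, not part of the page) of both the decomposition and the antiderivative in this example:

```python
import math

def f(x):
    return x**3 / (x**2 - 1)

def F(x):
    # Antiderivative found above (constant of integration omitted)
    return x**2 / 2 + 0.5 * (math.log(abs(x - 1)) + math.log(abs(x + 1)))

# Decomposition check: x^3/(x^2-1) = x + (1/2)(1/(x-1) + 1/(x+1))
for x in (2.0, 3.5, -4.0, 0.5):
    lhs = f(x)
    rhs = x + 0.5 * (1 / (x - 1) + 1 / (x + 1))
    assert abs(lhs - rhs) < 1e-12

# Antiderivative check: F'(x) == f(x), via a central difference
h = 1e-6
for x in (2.0, 3.5, -4.0):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5

print("checks passed")
```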
Here are the details of the iguanodon series. Start with the Farey sequence $F(n)$ of order $n$, which is the sequence of completely reduced fractions between 0 and 1 which, when in lowest terms, have denominators less than or equal to $n$, arranged in order of increasing size. Here are the first eight Farey sequences: F(1) = {0⁄1, 1⁄1} F(2) = {0⁄1, 1⁄2, 1⁄1} F(3) = {0⁄1, 1⁄3, 1⁄2, 2⁄3, 1⁄1} F(4) = {0⁄1, 1⁄4, 1⁄3, 1⁄2, 2⁄3, 3⁄4, 1⁄1} F(5) = {0⁄1, 1⁄5, 1⁄4, 1⁄3, 2⁄5, 1⁄2, 3⁄5, 2⁄3, 3⁄4, 4⁄5, 1⁄1} F(6) = {0⁄1, 1⁄6, 1⁄5, 1⁄4, 1⁄3, 2⁄5, 1⁄2, 3⁄5, 2⁄3, 3⁄4, 4⁄5, 5⁄6, 1⁄1} F(7) = {0⁄1, 1⁄7, 1⁄6, 1⁄5, 1⁄4, 2⁄7, 1⁄3, 2⁄5, 3⁄7, 1⁄2, 4⁄7, 3⁄5, 2⁄3, 5⁄7, 3⁄4, 4⁄5, 5⁄6, 6⁄7, 1⁄1} F(8) = {0⁄1, 1⁄8, 1⁄7, 1⁄6, 1⁄5, 1⁄4, 2⁄7, 1⁄3, 3⁄8, 2⁄5, 3⁄7, 1⁄2, 4⁄7, 3⁄5, 5⁄8, 2⁄3, 5⁄7, 3⁄4, 4⁄5, 5⁄6, 6⁄7, 7⁄8, 1⁄1} Farey sequences have plenty of mysterious properties. For example, in 1924 J. Franel and Edmund Landau proved that an asymptotic density result about Farey sequences is equivalent to the Riemann hypothesis. More precisely, let $a(n)$ be the number of terms in the Farey sequence $F(n)$ (that is, $a(1)=2$, $a(2)=3$, \ldots, $a(8)=23$; this is sequence A005728 in the On-Line Encyclopedia of Integer Sequences). Let $F(n)_j$ denote the $j$-th term in $F(n)$; then the following statement is equivalent to the Riemann hypothesis: for every $\epsilon > 0$ there is a constant $C$ depending on $\epsilon$ such that $\sum_{j=1}^{a(n)} | F(n)_j - \frac{j}{a(n)} | < C n^{\frac{1}{2}+\epsilon} $ for all $n$. Anyway, let us continue our construction. Farey sequences are clearly symmetric around $\frac{1}{2}$, so let us take just half of them: we jump to 1 once we have reached $\frac{1}{2}$. Let us extend this halved Farey on both sides with $\infty$ and call it the modified Farey sequence $f(n)$.
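For what it's worth, a little Python (my own sketch) that generates $F(n)$ with the classic next-term recurrence and reproduces the counts $a(n)$ quoted above:

```python
from fractions import Fraction

def farey(n):
    """Farey sequence F(n), via the standard next-term recurrence:
    given consecutive terms a/b and c/d, the next term is (kc-a)/(kd-b)
    with k = floor((n + b) / d)."""
    a, b, c, d = 0, 1, 1, n
    seq = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

# a(n) = number of terms: a(1)=2, a(2)=3, ..., a(8)=23 (OEIS A005728)
print([len(farey(n)) for n in range(1, 9)])  # [2, 3, 5, 7, 11, 13, 19, 23]
```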
For example, $f(3) = {~\infty,0,\frac{1}{3},\frac{1}{2},1,\infty } $ Now consider the Farey code in which we identify the two sides connected to $\infty $ and mark two consecutive Farey numbers as [tex]\xymatrix{f(n)_i \ar@{-}[r]_{\bullet} & f(n)_{i+1}}[/tex] That is, the Farey code associated to the modified sequence f(3) is [tex]\xymatrix{\infty \ar@{-}[r]_{1} & 0 \ar@{-}[r]_{\bullet} & \frac{1}{3} \ar@{-}[r]_{\bullet} & \frac{1}{2} \ar@{-}[r]_{\bullet} & 1 \ar@{-}[r]_{1} & \infty}[/tex] Recall from earlier that to a Farey-code we can associate a special polygon by first taking the hyperbolic convex hull of all the terms in the sequence (the region bounded by the vertical lines and the bottom red circles in the picture on the left) and adding to it for each odd interval [tex]\xymatrix{f(n)_i \ar@{-}[r]_{\bullet} & f(n)_{i+1}}[/tex] the triangle just outside the convex hull consisting of two odd edges in the Dedekind tessellation (then we obtain the region bounded by the black geodesics for the sequence f(3)). Next, we can associate to this special polygon a cuboid tree diagram by considering all even and odd vertices on the boundary (which are tinted red, respectively blue) together with all odd vertices in the interior of the special polygon. These are indicated in the left picture below. If we connect these vertices with the geodesics in the polygon we get a cuboid tree diagram. The obtained cuboid tree diagram is depicted on the right below. Finally, identifying the red points (as they lie on geodesics connected to $\infty $ which are identified in the Farey code), adding even points on the remaining geodesics and numbering the obtained half-lines we obtain the dessin d’enfant given on the left hand side. 
To such a dessin we can associate its monodromy group, a permutation group on the half-lines generated by an order-two element indicating which half-lines make up a line and an order-three element indicating which half-lines one encounters by walking counter-clockwise around a three-valent vertex. For the dessin on the left the group is therefore the subgroup of $S_{12} $ generated by the elements $\alpha = (1,2)(3,4)(5,6)(7,8)(9,10)(11,12) $ $\beta = (1,2,3)(4,5,7)(8,9,11) $ and a verification with GAP tells us that this group is the sporadic Mathieu group $M_{12} $. This concludes the description of the second member of the Iguanodon series. If you would like to check that the first 8 iguanodons are indeed the simple groups $L_2(7), M_{12}, A_{16}, M_{24}, A_{28}, A_{40}, A_{48}, A_{60}, \ldots $ the following dissection of the Iguanodon may prove useful.
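The GAP check can also be sketched in Python with sympy's permutation groups (my re-verification with 0-indexed points, not the original GAP session):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# The two generators above, shifted to 0-indexed points 0..11.
alpha = Permutation([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]])
beta = Permutation([[0, 1, 2], [3, 4, 6], [7, 8, 10]], size=12)

G = PermutationGroup([alpha, beta])
# The sporadic Mathieu group M_12 has order 95040.
order = G.order()
```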
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = 1-y$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$ I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand; it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
> C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
> C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
> C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
> C -2.0858929200000000 0.3286240400000000 0.3378783500000000
> C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
> C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
> C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity a use of the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots, a_{n-1}$ to be zero because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
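A quick sympy illustration of the big-O bookkeeping discussed above (my sketch; sympy's `O` is big-O as $x\to 0$):

```python
import sympy as sp

x = sp.symbols('x')
O = sp.O  # big-O as x -> 0

# "O(x^2) - O(x^2) = O(x^2)": the difference of two unknown O(x^2)
# quantities is again O(x^2), not zero.
diff_order = O(x**2) - O(x**2)

# A concrete instance from the chat: x^2 + (x^3 - x^2) = x^3, still O(x^2).
concrete = sp.expand(x**2 + (x**3 - x**2))
```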
Gibbs energy came from the relation $$\Delta S_\text{universe}= \Delta S_\text{surroundings}+\Delta S_\text{system}\ge 0.$$ With a little algebra, we can arrive at $$\Delta S_\text{universe}= \frac{-\Delta H_\text{system}}{T} + \Delta S_\text{system} \\ \implies -T\Delta S_\text{universe} = \Delta H_\text{system} - T\Delta S_\text{system} $$ From the relation of Gibbs free energy $$\Delta G_\text{system} = \Delta H_\text{system}- T\Delta S_\text{system}\;$$ we get $$\Delta S_\text{universe}= \frac{-\Delta G_\text{system}}{T}.$$ So that means $-\Delta G_\text{system}$ is the energy dispersed to the universe from the system to increase the entropy of the universe. Now, we all know that energy that has been dispersed for the sake of entropy cannot be used for work, viz.: $$d S= \frac{\delta Q}{T}\; ;$$ we can't use $\delta Q$ to do work as it is used up for increasing entropy. Similarly, how could we use $-\Delta G_\text{system}$ for doing work if it has been used for increasing the entropy of the universe? As written by Frank Lambert on his site: [...] Strong and Halliwell rightly maintained that $- \Delta G\; ,$ the "free energy", is not a true energy because it is not conserved. Explicitly ascribed in the derivation of the preceding paragraph (or implicitly in similar derivations), $- \Delta G$ is plainly the quantity of energy that can be dispersed to the universe, the kind of entity always associated with entropy increase, and not simply energy in one of its many forms. Therefore, it is not difficult to see that $\Delta G$ is indeed not a true energy. Instead, as dispersible energy, when divided by $T\; ,$ $\frac{\Delta G}{T}$ is an entropy function — the total entropy change associated with a reaction, not simply the entropy change in a reaction, i.e., $S_\text{products} - S_\text{reactants}\; ,$ that is the $\Delta S_\text{system}.$ ... My questions are: $\bullet$ Why is $-\Delta G_\text{system}$ not a true energy as written above? Why can't it be conserved?
I want to understand what those lines actually mean. $\bullet$ If $-\Delta G_\text{system}$ is used to increase the entropy of the universe, then how could it be used for work, as is commonly told$^1$? $^1$ I'm referring to this statement: Gibbs energy is the energy of the system available to do work.
The Annals of Probability, Volume 27, Number 3 (1999), 1324-1346.
How Often Does a Harris Recurrent Markov Chain Recur?
Abstract
Let $\{X_n\}_{n\geq 0}$ be a Harris recurrent Markov chain with state space $(E,\mathscr{E})$, transition probability $P(x, A)$ and invariant measure $\pi$. Given a nonnegative $\pi$-integrable function $f$ on $E$, the exact asymptotic order is given for the additive functionals $$\sum_{k=1}^{n} f(X_k), \quad n=1,2,\dots$$ in the forms of both weak and strong convergence. In particular, the frequency of $\{X_n\}_{n \geq 0}$ visiting a given set $A\in \mathscr{E}$ with $0 < \pi (A) < + \infty$ is determined by taking $f = I_A$. Under the regularity assumption, the limits in our theorems are identified. The one- and two-dimensional random walks are taken as examples of applications.
Article information
First available in Project Euclid: 29 May 2002
Permanent link: https://projecteuclid.org/euclid.aop/1022677449
Digital Object Identifier: doi:10.1214/aop/1022677449
Mathematical Reviews number (MathSciNet): MR1733150
Zentralblatt MATH identifier: 0981.60023
Citation: Chen, Xia. How Often Does a Harris Recurrent Markov Chain Recur?. Ann. Probab. 27 (1999), no. 3, 1324--1346. doi:10.1214/aop/1022677449. https://projecteuclid.org/euclid.aop/1022677449
If you need a primer covering various domains of math then Dan Stefanica's text will do the job. The text covers multivariable calculus, Lagrange multipliers, the Black–Scholes PDE, Greeks & hedging, Newton's method, bootstrapping, Taylor series, numerical integration, and risk-neutral valuation. It also includes a mathematical appendix. If you want an ...

Ledoit and Wolf shrinkage methods ("Honey, I Shrunk the Sample Covariance Matrix")
Ceria and Stubbs - Robust optimization literature (2006)
Stock & Watson (2002ab) - papers on large N small P estimation
Rockafellar & Uryasev (2000) - "Optimization of CVaR and coherent risk measures"
Sorensen, Qian, Hua - "Quantitative Portfolio Management"
Ang ...

I doubt you will find one book that covers everything you need, but here are a few that I continually come back to whenever I have some questions on the mathematics.
Analysis of Financial Time Series by Ruey Tsay
An Introduction to High-Frequency Finance by Dacorogna et al
Probability and Statistics by DeGroot and Schervish
Statistical Inference by Casella ...

Big Picture: Time-series variance is driven mostly by discount rates, whereas expected cash flows dominate the cross-sectional variance. These results are important because they highlight the value of focusing on both dimensions of stock prices and returns: time-series and cross-section. On the other hand, however, they also show that a single mechanism is ...

Of course making money is always the key issue. That (not completely facetious) comment aside: On the practical side, in many firms IT is struggling with being clear, transparent, and intuitive in their handling of multiple curves and their associated risks. Stumbling over your own systems is an annoying way to lose money. These risks can be surprisingly ...

Sure. The formula for vega (you probably recall) is $$v(\sigma) = S n( d_1(\sigma) )\sqrt{T-t}$$ The Gaussian pdf, $n(\cdot)$, is strictly non-convex, having a local maximum at zero.
There is therefore a corresponding maximum of vega occurring where the strike $K_\text{max}$ solves $$d_1(\sigma)=0$$ which works out to $$K_\text{max} = S \exp((...

I have only seen one framework that works in a research-oriented development environment, which is the spiral model. Trying to use agile methodologies is impossible because the frontier of tasks is not known. Agile is very useful for building/maintaining known applications with known functionality and problem spaces. It is not useful for research-oriented ...

Grinold and Kahn (2000) remains the bible for people just starting to get into quantitative portfolio management. Some readers may prefer the treatment in Litterman (2003). Both of these, however, are thorough books covering all the foundational material. Most of the recent work in portfolio management has built upon the research covered in those books. ...

"Extreme programming" is a buzzword that has received a lot of hype in the past few years. However it's important to note that it's only one item in the long list of SW development philosophies and that it's not - contrary to its proponents' claims - a panacea. On the other side it's very beneficial to follow a few simple rules while writing even small ...

Theta Calculus, a system for representation of complex financial instruments. Kupper & Drapeau's unification of risk concepts. Several papers by Schmid, Bodnar, Okhrin on optimal portfolio weights and tests of same. For example, A test for the weights of the global minimum variance portfolio in an elliptical model. Similarly, Kan and Smith's work on the ...

To see the connection between put-call parity and option price you should read this highly insightful paper by Espen Gaarder Haug & Nassim Nicholas Taleb: Option traders use (very) sophisticated heuristics, never the Black–Scholes–Merton formula. It shows how you can heuristically derive option pricing formulas by adapting the tails and skewness by ...
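The vega-maximizing strike mentioned above is easy to confirm numerically; a small sketch of mine (standard no-dividend Black–Scholes notation, made-up parameter values):

```python
import math

def bs_vega(S, K, r, sigma, tau):
    """Black-Scholes vega: S * n(d1) * sqrt(tau), with n the standard normal pdf."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return S * math.exp(-0.5 * d1 * d1) / math.sqrt(2 * math.pi) * math.sqrt(tau)

S, r, sigma, tau = 100.0, 0.02, 0.25, 1.0

# Closed form: d1 = 0 gives K_max = S * exp((r + sigma^2/2) * tau).
K_closed = S * math.exp((r + 0.5 * sigma**2) * tau)

# Brute-force maximum over a fine strike grid.
strikes = [50 + 0.01 * i for i in range(10001)]
K_grid = max(strikes, key=lambda K: bs_vega(S, K, r, sigma, tau))
```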
Two day-traders went out to eat (after the market closed, of course!). One of the traders orders a steak and has a hard time cutting it. He then asks for a sharper knife... when he unfolds his cutlery the knife hits his plate, falls off the table, and lands right by his foot, which was slightly sticking out from under the table. The other trader looks at him and ...

There is no stealing of data unless you delete it from the original source. Let me elaborate, as the semantics are very important here. Stealing, even with quotes around it, "stealing", requires that something is removed from the original place. You steal a car. You copy a file, as such data is protected via copyright when it can be, and other subsequent acts that ...

"Individual Craftsmanship"... I am not sure how you want to apply this skill set later. Craftsman to me means someone who simply applies a tool set; it does not imply (according to the dictionary definition) whether professionally to earn money or in order to teach or treat it as a personal hobby. So please let me comment on all three: Professionally, in the ...

There are some Agile benefits that you will reap, even if you are the sole programmer. You may feel silly doing a scrum by yourself in the morning. But you may find it to be a benefit to plan what you would like to work on that day, and to think about what you might need that day (especially if you need to read about solving a quant problem). Planning out ...

Alpha is easier to measure and easier to obtain in the cross-section than in the time-series. Low information coefficient combined with high breadth still makes for a decent information ratio. The breadth of your strategies is always lower than you think. When markets collapse, correlation goes to one.

As an agile developer and quant finance programmer, I think that unit testing is invaluable, because you really never know if your code is doing what it is supposed to do without tests.
How do you know that your code is calculating your proprietary indicators correctly? You probably ran your new code and checked the result against some other code or system ...

Summary answer: those who sell such index products (pricing data, trade marks, rights to use and publish) are interested in benchmarking against indexes, and of course portfolio managers, because they generally look much better when indexed against indexes than when assessed through risk-adjusted returns. The general public is sadly just too uninformed to ...

The first reason is the answer to this question: should I bother to invest in your fund and not simply invest in the S&P 500 ETF? The second is: are you a fraud? If someone claims to use a long-only strategy with stocks from the S&P 500, you expect his fund returns to be correlated to the S&P 500 to some extent. If it is not the case => fraud.

There are a few things. Non-cynical: active absolute-return managers tend to underperform passive benchmarks after fees. So if you can get a manager that can outperform a passive benchmark (perhaps one who has a mostly passive strategy with some active tilts), then you are doing well. Your scenario of a portfolio dropping 48% is not realistic. Most asset ...
In older NeverEndingBooks posts (and here) proofs were given that the modular group $\Gamma = PSL_2(\mathbb{Z}) $ is the free product $C_2 \ast C_3 $, so let's just skim over the details here. First one observes that $\Gamma $ is generated by (the images of) the invertible 2×2 matrices $U= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} $ and $V= \begin{bmatrix} 0 & 1 \\ -1 & 1 \end{bmatrix} $ A way to see this is to consider $X=U\cdot V $ and $Y=V\cdot U $ and notice that multiplying with powers of X adds multiples of the second row to the first (multiply on the left) or multiples of the first column to the second (multiply on the right), and the other cases are handled by taking multiples with powers of Y. Use this together with the fact that matrices in $GL_2(\mathbb{Z}) $ have their rows and columns made of coprime numbers to bring any such matrix, by multiplication on the left or right by powers of X and Y, into the form $\begin{bmatrix} \pm 1 & 0 \\ 0 & \pm 1 \end{bmatrix} $, and because $U^2=V^3=\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} $ we see that $\Gamma $ is an epimorphic image of $C_2 \ast C_3 $. To prove isomorphism one can use the elegant argument due to Roger Alperin considering the action of the Moebius transformations $u(z) = -\frac{1}{z} $ and $v(z) = \frac{1}{1-z} $ (with $v^{-1}(z) = 1-\frac{1}{z} $) induced by the generators U and V on the sets $\mathcal{P} $ and $\mathcal{N} $ of all positive (resp. negative) irrational real numbers. Observe that $u(\mathcal{P}) \subset \mathcal{N} $ and $v^{\pm}(\mathcal{N}) \subset \mathcal{P} $ Hence, if $w $ is a word in $u $ and $v^{\pm} $ of odd length we either have $w(\mathcal{P}) \subset \mathcal{N} $ or $w(\mathcal{N}) \subset \mathcal{P} $, so $w $ can never be the identity. If the length is even we can conjugate $w $ so that it starts with $v^{\pm} $.
If it starts with $v $ then $w(\mathcal{P}) \subset v(\mathcal{N}) $ is a subset of the positive irrationals less than 1, whereas if it starts with $v^{-1} $ then $w(\mathcal{P}) \subset v^{-1}(\mathcal{N}) $ is a subset of the positive irrationals greater than 1, so again it cannot be the identity. Done! By a result of Aleksandr Kurosh it follows that every modular subgroup is the free product of copies of $C_2, C_3 $ or $C_{\infty} $, and we would like to determine the free generators explicitly for a cofinite subgroup, starting from the Farey code of a special polygon corresponding to the subgroup. To every even interval [tex]\xymatrix{x_i = \frac{a_i}{b_i} \ar@{-}[r]_{\circ} & x_{i+1}= \frac{a_{i+1}}{b_{i+1}}}[/tex] in the Farey code one associates the generator of a $C_2 $ component $A_i = \begin{bmatrix} a_{i+1}b_{i+1}+ a_ib_i & -a_i^2-a_{i+1}^2 \\ b_i^2+b_{i+1}^2 & -a_{i+1}b_{i+1}-a_ib_i \end{bmatrix} $ to every odd interval [tex]\xymatrix{x_i = \frac{a_i}{b_i} \ar@{-}[r]_{\bullet} & x_{i+1} = \frac{a_{i+1}}{b_{i+1}}}[/tex] in the Farey code we associate the generator of a $C_3 $ component $B_i = \begin{bmatrix} a_{i+1}b_{i+1}+a_ib_{i+1}+a_ib_i & -a_i^2-a_ia_{i+1}-a_{i+1}^2 \\ b_i^2+b_ib_{i+1} + b_{i+1}^2 & -a_{i+1}b_{i+1} - a_{i+1}b_i - a_i b_i \end{bmatrix} $ and finally, to every pair of free intervals [tex]\xymatrix{x_k \ar@{-}[r]_{a} & x_{k+1}} \ldots \xymatrix{x_l \ar@{-}[r]_{a} & x_{l+1}}[/tex] we associate the generator of a $C_{\infty} $ component $C_{k,l} = \begin{bmatrix} a_l & -a_{l+1} \\ b_l & -b_{l+1} \end{bmatrix} \begin{bmatrix} a_{k+1} & a_k \\ b_{k+1} & b_k \end{bmatrix}^{-1} $ Kulkarni's result states that these matrices are free generators of the cofinite modular subgroup determined by the Farey code.
For example, for the M(12) special polygon on the left (bounded by the thick black geodesics), the Farey code for this Mathieu polygon is [tex]\xymatrix{\infty \ar@{-}[r]_{1} & 0 \ar@{-}[r]_{\bullet} & \frac{1}{3} \ar@{-}[r]_{\bullet} & \frac{1}{2} \ar@{-}[r]_{\bullet} & 1 \ar@{-}[r]_{1} & \infty}[/tex] Therefore, the structure of the subgroup must be $C_{\infty} \ast C_3 \ast C_3 \ast C_3 $ with the generator of the infinite factor being $\begin{bmatrix} -1 & 1 \\ -1 & 0 \end{bmatrix} $ and those of the cyclic factors of order three $\begin{bmatrix} 3 & -1 \\ 13 & -4 \end{bmatrix}, \begin{bmatrix} 7 & -3 \\ 19 & -8 \end{bmatrix} $ and $\begin{bmatrix} 4 & -3 \\ 7 & -5 \end{bmatrix} $ This approach also gives another proof of the fact that $\Gamma = C_2 \ast C_3 $ because the Farey code of the subgroup of index 1 is [tex]\xymatrix{\infty \ar@{-}[r]_{\circ} & 0 \ar@{-}[r]_{\bullet} & \infty}[/tex] corresponding to the fundamental domain on the left. This finishes (for now) this thread on Kulkarni's paper (or rather, part of it). On the Lost? page I will try to list threads in a logical ordering when they materialize. Reference: Ravi S. Kulkarni, "An arithmetic-geometric method in the study of the subgroups of the modular group", Amer. J. Math. 113 (1991) 1053-1133
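The order claims in these posts are easy to check numerically; a numpy sketch of mine verifying that $U^2 = V^3 = -I$ and that the odd-interval Farey-code formula produces an element of order three in $PSL_2(\mathbb{Z})$:

```python
import numpy as np

I = np.eye(2, dtype=int)
U = np.array([[0, -1], [1, 0]])
V = np.array([[0, 1], [-1, 1]])

def odd_gen(a, b, a1, b1):
    """C_3 generator attached to an odd interval a/b -- a1/b1 of a Farey code."""
    return np.array([[a1*b1 + a*b1 + a*b, -a*a - a*a1 - a1*a1],
                     [b*b + b*b1 + b1*b1, -a1*b1 - a1*b - a*b]])

U2 = U @ U                          # expect -I: order 2 in PSL_2(Z)
V3 = np.linalg.matrix_power(V, 3)   # expect -I: order 3 in PSL_2(Z)

B = odd_gen(1, 3, 1, 2)             # odd interval 1/3 -- 1/2 of the code above
B3 = np.linalg.matrix_power(B, 3)   # trace -1, det 1, so expect B^3 = I
```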
Definition: Latin square A Latin square of order \(n\) is an \(n\times n\) grid filled with \(n\) symbols so that each symbol appears exactly once in each row and each column. Example \(\PageIndex{1}\) Here is a Latin square of order 4:
♥ ♣ ♠ ♦
♣ ♠ ♦ ♥
♠ ♦ ♥ ♣
♦ ♥ ♣ ♠
Usually we use the integers \(1\ldots n\) for the symbols. There are many, many Latin squares of order \(n\), so it pays to limit the number by agreeing not to count Latin squares that are "really the same" as different. The simplest way to do this is to consider reduced Latin squares. A reduced Latin square is one in which the first row is \(1\ldots n\) (in order) and the first column is likewise \(1\ldots n\). Example \(\PageIndex{2}\) Consider this Latin square:
4 2 3 1
2 4 1 3
1 3 4 2
3 1 2 4
The order of the rows and columns is not really important to the idea of a Latin square. If we reorder the rows and columns, we can consider the result to be in essence the same Latin square. By reordering the columns, we can turn the square above into this:
1 2 3 4
3 4 1 2
2 3 4 1
4 1 2 3
Then we can swap rows two and three:
1 2 3 4
2 3 4 1
3 4 1 2
4 1 2 3
This Latin square is in reduced form, and is essentially the same as the original. Another simple way to change the appearance of a Latin square without changing its essential structure is to interchange the symbols. Example \(\PageIndex{3}\) Starting with the same Latin square as before:
4 2 3 1
2 4 1 3
1 3 4 2
3 1 2 4
we can interchange the symbols 1 and 4 to get:
1 2 3 4
2 1 4 3
4 3 1 2
3 4 2 1
Now if we swap rows three and four we get:
1 2 3 4
2 1 4 3
3 4 2 1
4 3 1 2
Notice that this Latin square is in reduced form, but it is not the same as the reduced form from the previous example, even though we started with the same Latin square. Thus, we may want to consider some reduced Latin squares to be the same as each other.
Definition: Isotopic and Isotopy Classes Two Latin squares are isotopic if each can be turned into the other by permuting the rows, columns, and symbols. This isotopy relation is an equivalence relation; the equivalence classes are the isotopy classes. Latin squares are apparently quite difficult to count without substantial computing power. The number of Latin squares is known only up to \(n=11\). Here are the first few values for all Latin squares, reduced Latin squares, and non-isotopic Latin squares (that is, the number of isotopy classes):
\(n\)  All  Reduced  Non-isotopic
1  1  1  1
2  2  1  1
3  12  1  1
4  576  4  2
5  161280  56  2
How can we produce a Latin square? If you know what a group is, you should know that the multiplication table of any finite group is a Latin square. (Also, any Latin square is the multiplication table of a quasigroup.) Even if you have not encountered groups by that name, you may know of some. For example, considering the integers modulo \(n\) under addition, the addition table is a Latin square. Example \(\PageIndex{4}\) Here is the addition table for the integers modulo 6:
0 1 2 3 4 5
1 2 3 4 5 0
2 3 4 5 0 1
3 4 5 0 1 2
4 5 0 1 2 3
5 0 1 2 3 4
Example 4.3.7 Here is another way to potentially generate many Latin squares. Start with first row \(1,\ldots, n\). Consider the sets \(A_i=[n]\backslash\{i\}\). From exercise 1 in section 4.1 we know that this set system has many sdrs; if \(x_1,x_2,\ldots,x_n\) is an sdr, we may use it for row two. In general, after we have chosen rows \(1,\ldots,j\), we let \(A_i\) be the set of integers that have not yet been chosen for column \(i\). This set system has an sdr, which we use for row \(j+1\). Definition 4.3.8 Suppose \(A\) and \(B\) are two Latin squares of order \(n\), with entries \(A_{i,j}\) and \(B_{i,j}\) in row \(i\) and column \(j\). Form the matrix \(M\) with entries \(M_{i,j}=(A_{i,j},B_{i,j})\); we will denote this operation as \(M=A\cup B\).
We say that \(A\) and \(B\) are orthogonal if \(M\) contains all \(n^2\) ordered pairs of symbols; that is, if the symbols are \(0,1,\ldots,n-1\), all elements of \(\{0,1,\ldots,n-1\}\times\{0,1,\ldots,n-1\}\). As we will see, it is easy to find orthogonal Latin squares of order \(n\) if \(n\) is odd; not too hard to find orthogonal Latin squares of order \(4k\); and difficult, but possible, to find orthogonal Latin squares of order \(4k+2\), with the exception of orders \(2\) and \(6\). In the 1700s, Euler showed that there are orthogonal Latin squares of all orders except possibly orders of the form \(4k+2\), and he conjectured that there are no orthogonal Latin squares of order \(4k+2\). In 1901, the amateur mathematician Gaston Tarry showed that indeed there are none of order \(6\), by showing that all possibilities for such Latin squares failed to be orthogonal. In 1959 it was finally shown that there are orthogonal Latin squares of all other orders. Theorem 4.3.9 There are pairs of orthogonal Latin squares of order \(n\) when \(n\) is odd. Proof This proof can be shortened by using ideas of group theory, but we will present a self-contained version. Consider the addition table for addition mod \(n\): $$\begin{array}{c|ccccc} & 0 & \cdots & j & \cdots & n-1 \\ \hline 0 & 0 & \cdots & j & \cdots & n-1 \\ \vdots & & & & & \\ i & i & \cdots & i+j & \cdots & n+i-1 \\ \vdots & & & & & \\ n-1 & n-1 & \cdots & n+j-1 & \cdots & n-2 \end{array}$$ We claim first that this (without the first row and column, of course) is a Latin square with symbols \(0,1,\ldots,n-1\). Consider two entries in row \(i\), say \(i+j\) and \(i+k\). If \(i+j\equiv i+k \pmod{n}\), then \(j\equiv k\), so \(j=k\). Thus, all entries of row \(i\) are distinct, so each of \(0,1,\ldots,n-1\) appears exactly once in row \(i\). The proof that each appears once in any column is similar. Call this Latin square \(A\). (Note that so far everything is true whether \(n\) is odd or even.)
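The claim just proved is easy to confirm by machine for small \(n\); a quick Python sketch (not part of the text's proof):

```python
def addition_square(n):
    """Addition table mod n: entry in row i, column j is (i + j) % n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(sq):
    """Each of the n symbols 0..n-1 appears exactly once in every row and column."""
    n = len(sq)
    symbols = set(range(n))
    return (all(set(row) == symbols for row in sq) and
            all({sq[i][j] for i in range(n)} == symbols for j in range(n)))

all_latin = all(is_latin(addition_square(n)) for n in range(1, 11))
```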
Now form a new square \(B\) with entries \(B_{i,j}=A_{2i,j}=2i+j\), where by \(2i\) and \(2i+j\) we mean those values mod \(n\). Thus row \(i\) of \(B\) is the same as row \(2i\) of \(A\). Now we claim that in fact the rows of \(B\) are exactly the rows of \(A\), in a different order. To do this, it suffices to show that if \(2i\equiv 2k\pmod{n}\), then \(i=k\). This implies that all the rows of \(B\) are distinct, and hence must be all the rows of \(A\). Suppose without loss of generality that \(i\ge k\). If \(2i\equiv 2k\pmod{n}\) then \(n\mid 2(i-k)\). Since \(n\) is odd, \(n\mid (i-k)\). Since \(i\) and \(k\) are in \(0,1,\ldots,n-1\), \(0\le i-k\le n-1\). Of these values, only \(0\) is divisible by \(n\), so \(i-k=0\). Thus \(B\) is also a Latin square. To show that \(A\cup B\) contains all \(n^2\) elements of \(\{0,1,\ldots,n-1\}\times\{0,1,\ldots,n-1\}\), it suffices to show that no two positions of \(A\cup B\) hold the same pair. Suppose that \((i_1+j_1,2i_1+j_1)=(i_2+j_2,2i_2+j_2)\) (arithmetic is mod \(n\)). Then by subtracting equations, \(i_1=i_2\); with the first equation this implies \(j_1=j_2\). \(\square\) Example 4.3.10 When \(n=3\), $$\left[\matrix{ 0&1&2\cr 1&2&0\cr 2&0&1\cr}\right]\cup \left[\matrix{ 0&1&2\cr 2&0&1\cr 1&2&0\cr}\right]= \left[\matrix{ (0,0)&(1,1)&(2,2)\cr (1,2)&(2,0)&(0,1)\cr (2,1)&(0,2)&(1,0)\cr}\right]. $$ One obvious approach to constructing Latin squares, and pairs of orthogonal Latin squares, is to start with smaller Latin squares and use them to produce larger ones. We will produce a Latin square of order \(mn\) from a Latin square of order \(m\) and one of order \(n\). Let \(A\) be a Latin square of order \(m\) with symbols \(1,\ldots,m\), and \(B\) one of order \(n\) with symbols \(1,\ldots,n\). Let \(c_{i,j}\), \(1\le i\le m\), \(1\le j\le n\), be \(mn\) new symbols. Form an \(mn\times mn\) grid by replacing each entry of \(B\) with a copy of \(A\).
Then replace each entry \(i\) in this copy of \(A\) with \(c_{i,j}\), where \(j\) is the entry of \(B\) that was replaced. We denote this new Latin square \(A\times B\). Here is an example, combining a \(4\times 4\) Latin square with a \(3\times 3\) Latin square to form a \(12\times 12\) Latin square. (The figure showing the two factor squares and their \(12\times 12\) product is omitted here.) Theorem 4.3.11 If \(A\) and \(B\) are Latin squares, so is \(A\times B\). Proof Consider two symbols \(c_{i,j}\) and \(c_{k,l}\) in the same row. If the positions containing these symbols are in the same copy of \(A\), then \(i\not=k\), since \(A\) is a Latin square, and so the symbols \(c_{i,j}\) and \(c_{k,l}\) are distinct. Otherwise, \(j\not=l\), since \(B\) is a Latin square. The argument is the same for columns. \(\square\) Remarkably, this operation preserves orthogonality: Theorem 4.3.12 If \(A_1\) and \(A_2\) are Latin squares of order \(m\), \(B_1\) and \(B_2\) are Latin squares of order \(n\), \(A_1\) and \(A_2\) are orthogonal, and \(B_1\) and \(B_2\) are orthogonal, then \(A_1\times B_1\) is orthogonal to \(A_2\times B_2\). Proof We denote the contents of \(A_i\times B_i\) by \(C_i(w,x,y,z)\), meaning the entry in row \(w\) and column \(x\) of the copy of \(A_i\) that replaced the entry in row \(y\) and column \(z\) of \(B_i\), which we denote \(B_i(y,z)\). We use \(A_i(w,x)\) to denote the entry in row \(w\) and column \(x\) of \(A_i\). Suppose that \((C_1(w,x,y,z),C_2(w,x,y,z))=(C_1(w',x',y',z'),C_2(w',x',y',z'))\), where \((w,x,y,z)\not=(w',x',y',z')\). Either \((w,x)\not=(w',x')\) or \((y,z)\not=(y',z')\). If the latter, then \((B_1(y,z),B_2(y,z))= (B_1(y',z'),B_2(y',z'))\), a contradiction, since \(B_1\) is orthogonal to \(B_2\). Hence \((y,z)=(y',z')\) and \((w,x)\not=(w',x')\). But this implies that \((A_1(w,x),A_2(w,x))=(A_1(w',x'),A_2(w',x'))\), a contradiction. Hence \(A_1\times B_1\) is orthogonal to \(A_2\times B_2\). \(\square\) We want to construct orthogonal Latin squares of order \(4k\).
Write \(4k=2^m\cdot n\), where \(n\) is odd and \(m\ge 2\). We know there are orthogonal Latin squares of order \(n\), by theorem 4.3.9. If there are orthogonal Latin squares of order \(2^m\), then by theorem 4.3.12 we can construct orthogonal Latin squares of order \(4k=2^m\cdot n\). To get orthogonal Latin squares of order \(2^m\), we also use theorem 4.3.12. It suffices to find two orthogonal Latin squares of order \(4=2^2\) and two of order \(8=2^3\). Then repeated application of theorem 4.3.12 allows us to build orthogonal Latin squares of order \(2^m\), \(m\ge 2\). Two orthogonal Latin squares of order 4: $$\left[\matrix{ 1&2&3&4\cr 2&1&4&3\cr 3&4&1&2\cr 4&3&2&1\cr}\right] \left[\matrix{ 1&2&3&4\cr 3&4&1&2\cr 4&3&2&1\cr 2&1&4&3\cr }\right], $$ and two of order 8: $$ \left[\matrix{ 1&3&4&5&6&7&8&2\cr 5&2&7&1&8&4&6&3\cr 6&4&3&8&1&2&5&7\cr 7&8&5&4&2&1&3&6\cr 8&7&2&6&5&3&1&4\cr 2&5&8&3&7&6&4&1\cr 3&1&6&2&4&8&7&5\cr 4&6&1&7&3&5&2&8\cr }\right] \left[\matrix{ 1&4&5&6&7&8&2&3\cr 8&2&6&5&3&1&4&7\cr 2&8&3&7&6&4&1&5\cr 3&6&2&4&8&7&5&1\cr 4&1&7&3&5&2&8&6\cr 5&7&1&8&4&6&3&2\cr 6&3&8&1&2&5&7&4\cr 7&5&4&2&1&3&6&8\cr }\right]. $$
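Both constructions are easy to spot-check mechanically. The sketch below (Python; the helper names `is_latin`, `are_orthogonal`, and `cyclic_squares` are my own, not the text's) builds the pair \(A_{i,j}=i+j\), \(B_{i,j}=2i+j\) (mod \(n\)) for odd \(n\), and also transcribes the displayed order-4 pair.

```python
from itertools import product

def is_latin(M, symbols):
    """Every row and every column must contain each symbol exactly once."""
    syms = set(symbols)
    n = len(M)
    rows_ok = all(set(row) == syms for row in M)
    cols_ok = all({M[i][j] for i in range(n)} == syms for j in range(n))
    return rows_ok and cols_ok

def are_orthogonal(A, B):
    """Superimposing A and B must give n^2 distinct ordered pairs."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n

def cyclic_squares(n):
    """The construction for odd n: A[i][j] = i + j, B[i][j] = 2i + j (mod n)."""
    A = [[(i + j) % n for j in range(n)] for i in range(n)]
    B = [[(2 * i + j) % n for j in range(n)] for i in range(n)]
    return A, B

# The order-4 orthogonal pair displayed above, symbols 1..4.
A4 = [[1, 2, 3, 4], [2, 1, 4, 3], [3, 4, 1, 2], [4, 3, 2, 1]]
B4 = [[1, 2, 3, 4], [3, 4, 1, 2], [4, 3, 2, 1], [2, 1, 4, 3]]
```

For even \(n\) the map \(i \mapsto 2i\) mod \(n\) is not injective, so `cyclic_squares(4)` fails the `is_latin` check on its second square, matching the requirement that \(n\) be odd.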
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector change?
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring properties of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set). theorem ...
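For what it's worth, the multiplication rule above is easy to sanity-check numerically. A minimal sketch (my own, assuming a sample value $\delta = 2$; any rational $\delta$ behaves the same for this spot check, which is of course not a proof of associativity):

```python
from fractions import Fraction as F
from itertools import product

DELTA = F(2)  # assumed sample value of delta

def mul(a, b):
    """The stated rule on coefficient pairs (a0, a1) ~ a0 + a1*sqrt(delta):
    (a0 + a1*sqrt(d))(b0 + b1*sqrt(d))
      = (a0*b0 + a1*b1*d) + (a1*b0 + a0*b1)*sqrt(d)."""
    return (a[0] * b[0] + a[1] * b[1] * DELTA,
            a[1] * b[0] + a[0] * b[1])

def associativity_holds(pts):
    """Check (x*y)*z == x*(y*z) for every triple drawn from pts."""
    return all(mul(mul(x, y), z) == mul(x, mul(y, z))
               for x, y, z in product(pts, repeat=3))
```

Running it over a small grid of rational points (including non-integers) exercises every triple exactly, since `Fraction` arithmetic is exact.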
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that imply CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I've been wondering how to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I've failed Put it another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $$0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus: Theorem 7 Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
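The finitist partial sums discussed above are easy to compute exactly with rationals. A small sketch of my own (taking $b = 10$, the usual Liouville-style base, as an assumed example):

```python
from fractions import Fraction
from math import factorial

def partial_sum(b, M):
    """The partial sums from the discussion: sum_{k=1}^M 1/b^{k!}."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

# With b = 10 these are truncations of Liouville's constant 0.110001000000...
sums = [partial_sum(10, M) for M in range(1, 6)]
```

Each partial sum is an explicit rational number, so the monotonicity and boundedness used in the argument can be checked term by term with no limit taken.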
Question: Is it true that $L_1 = \{01^*0\}$ is $m$-complete in the class of decidable languages? $L_1$ is defined as: $$L_1 = \{01^*0\} := \{01^n0: n \in \mathbb{N}\}$$ Definition of an $m$-complete language: A language $L^*$ is $m$-complete in the class of decidable languages if for any decidable language $L$ we have that $L \leq_m L^*$ (that is, $L$ reduces to $L^*$). I'm really new to this realm, so I'm quite lost about how to tackle this exercise. I'm not sure if my approach is correct, but I was trying to prove that the statement is not true by using a reduction from the Busy Beaver function to $L_1$ (then, from this reduction and from $L_1$ being decidable, we would have that the BB problem is decidable, which is not possible). Let $A$ be the algorithm deciding $L_1$ and $w \in L_1$ a word with $|w| = m$ ones. Now when running $A$ on $w \in L_1$ we keep track of the number of steps $t$ it made just before halting. We have $t \leq m|\Gamma|^m|Q|$, where $\Gamma$ is the set of tape symbols and $Q$ is the set of states. From here we can find the number $|Q|$ of states required for printing $|w| = m$ ones... I stopped here because I feel there is already something wrong. First, even if we find that number $|Q|$ for which $A$ performed $t$ steps before $A(w) = \text{Accept}$, that does not necessarily mean that $|w| = m$ ones could have been achieved by a TM with a smaller number of states. Second, "knowing" the number of ones I'm determining the number of states and not the other way around, which is how $BB$ is defined. Another alternative I was exploring was, on input $w$, to run all TMs $T_1, \ldots, T_n$ that return $|w| = m$ ones. Then we record the input $w$ and the description of the machine $T_i,\ (i \in [1,n])$, that had the least number of states.
If we do this $\forall w \in L_1$, we end up with a UTM $T'$, such that: $$T' = \{ \langle T^*, w\rangle : |Q|_{T^*} = \min(|Q|_{T_1}, \ldots, |Q|_{T_n}) \}$$ Then clearly $T'$ decides $L_1$ (just have to run it on input $w \in L_1$) and at the same time allows us to compute $BB(n)$ for a fixed $n \in \mathbb{N}$ (since $n$ is encoded in the description of $T^*$). Probably there are some flaws in my reasoning, but in this case, do I have to wait until, on input $w$, all those TMs halt, or can I assume that the first machine that halts is the one with the least number of states? I'd appreciate any help.
The composition of differentiable functions is differentiable. In particular, if $x(t)$ and $y(t)$ are differentiable functions of a parameter $t$, and if $f(x,y)$ is a differentiable function of $x$ and $y$, then $f(x(t),y(t))$ is a differentiable function of $t$. We compute its derivative with the chain rule: $$\frac{d f(x(t),y(t))}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}.$$ The reason behind the chain rule is simple. Since $f(x,y)$ is differentiable, we can approximate changes in $f$ by its linearization, so $$\Delta f \approx f_x \Delta x + f_y \Delta y.$$ Dividing by $\Delta t$ and taking a limit as $\Delta t \to 0$ gives the chain rule. For functions of three or more variables, we just add a term for each variable. If $f(x,y,z)$ is a function of three variables, then $$\frac{d f({\bf r}(t))}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt} + \frac{\partial f}{\partial z} \frac{dz}{dt}.$$ Let $C$ be a curve defined by an equation $f(x,y)=c$, and let ${\bf r}(t)$ be a parametrization of that curve. Since $f({\bf r}(t))=c$ for all $t$, $df/dt=0$.
But by the chain rule, $$f_x \frac{dx}{dt} + f_y \frac{dy}{dt} = \frac{df}{dt} = 0,$$so $$\frac{dy}{dx}=\frac{dy/dt}{dx/dt} = -\frac{f_x}{f_y}.$$Note that the vector $\langle -f_y, f_x \rangle$ is
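As a concrete sanity check of the two displayed identities, one can take the circle $f(x,y) = x^2 + y^2 = 1$ with the standard parametrization ${\bf r}(t) = (\cos t, \sin t)$ (my choice of example, not the text's) and verify numerically that $df/dt = 0$ along the curve and that the two expressions for the slope agree:

```python
import math

# Level curve f(x,y) = x^2 + y^2 = 1, parametrized by r(t) = (cos t, sin t).
def fx(x, y): return 2 * x  # partial derivative f_x
def fy(x, y): return 2 * y  # partial derivative f_y

t = 0.7
x, y = math.cos(t), math.sin(t)
dxdt, dydt = -math.sin(t), math.cos(t)

# Chain rule along the curve: df/dt = f_x x'(t) + f_y y'(t), which should be 0.
dfdt = fx(x, y) * dxdt + fy(x, y) * dydt

# Slope of the curve two ways: (dy/dt)/(dx/dt) and -f_x/f_y.
slope_param = dydt / dxdt
slope_implicit = -fx(x, y) / fy(x, y)
```

Both slopes reduce to $-\cot t$ here, matching the implicit differentiation formula.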
The prime numbers are defined in terms of the multiplicative structure of the natural numbers, \(\mathbf{N}\). Specifically, the definition of prime–that \(p \in \mathbf{N}\) is prime if it has no non-trivial divisors–only refers to multiplication, not addition. The additive structure of the prime numbers is much less well understood, and indeed, many of the most famous open problems and deepest theorems in number theory concern the additive structure of the primes. For example, see the twin primes conjecture, the Goldbach conjecture, the abc conjecture, and the Green-Tao theorem. In this post, we consider a much weaker variant of the Goldbach Conjecture: Theorem 1. Every natural number \(m\) with \(1 < m \leq 2^n\) can be written as the sum of at most \(n\) primes. The main tool is Bertrand’s Postulate. Bertrand’s Postulate. For every natural number \(m > 1\) there exists a prime \(p\) satisfying \(m \geq p > (m - 1) / 2\). The intuition is that we can greedily pick a prime \(p_1\) with \(p_1 > (m - 1) / 2\). Thus, \(m_1 = m - p_1 \leq m / 2\). Since each iteration of our procedure for finding \(p_1, \ldots, p_k\) cuts the size of the problem in half, the process should terminate after \(\log_2 m\) iterations. We are now ready to formalize this intuition with a proof. Proof of Theorem 1. We argue by induction on \(n\). For the base case, \(n = 1\), the only natural number \(m\) with \(1 < m \leq 2\) is \(m = 2\), which is prime. Now suppose the theorem holds for some \(n \geq 1\). We will show that Theorem 1 also holds for \(n+1\). Suppose \(m\) satisfies \(1 < m \leq 2^{n+1}\). By Bertrand's Postulate, there is a prime \(p_1\) with \(m \geq p_1 > (m - 1) / 2\), thus \(m_1 = m - p_1\) satisfies \(m_1 \leq m/2 \leq 2^n\).
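The greedy halving procedure sketched above can be simulated directly. This is my own illustrative implementation, not the post's code; the one wrinkle I add is backing off to a smaller prime when the remainder would be 1, since 1 is not a sum of primes:

```python
import math

def is_prime(k):
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    f = 3
    while f * f <= k:
        if k % f == 0:
            return False
        f += 2
    return True

def largest_prime_at_most(m):
    return next(q for q in range(m, 1, -1) if is_prime(q))

def greedy_primes(m):
    """Write m > 1 as a sum of primes: take a large prime p <= m
    (Bertrand guarantees one with p > (m-1)/2) and recurse on the
    remainder, backing off to the next smaller prime if the remainder
    would be 1."""
    parts = []
    while m > 0:
        p = largest_prime_at_most(m)
        if m - p == 1:
            p = largest_prime_at_most(p - 1)
        parts.append(p)
        m -= p
    return parts
```

Since each step roughly halves the remainder, the number of parts grows like $\log_2 m$, as the intuition predicts.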
What are the steps to compute the theta Greek from the Black–Scholes solution: $$c(t, x) = xN(d_+(T-t,x)) - K e ^{-r(T-t)}N(d_-(T-t,x))$$ with: $$ d_\pm (T-t, x) = \dfrac{1}{\sigma \sqrt{T-t}} \left[ \ln \left( \dfrac{x}{K} \right) + \left( r \pm \dfrac{\sigma^2}{2} \right) (T-t) \right] $$ I know that the answer is: $$ c_t(t,x) = -rKe^{-r(T-t)}N(d_-(T-t,x)) - \dfrac{\sigma x}{2 \sqrt{T-t}}N'(d_+(T-t,x)) $$ Now, for me it is clear how to obtain the first term: $-rKe^{-r(T-t)}N(d_-(T-t,x))$; the problem is how to differentiate $d_-(T-t,x)$ in order to obtain: $$ - \dfrac{\sigma x}{2 \sqrt{T-t}} $$ Thanks in advance.
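One way to double-check the quoted theta formula without redoing the calculus is a finite-difference comparison. Below is a sketch with illustrative parameter values of my own choosing (the helper names are mine too):

```python
import math

def N(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Nprime(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def d_plus_minus(tau, x, K, r, sigma):
    dp = (math.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    dm = (math.log(x / K) + (r - 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return dp, dm

def call(t, x, T, K, r, sigma):
    dp, dm = d_plus_minus(T - t, x, K, r, sigma)
    return x * N(dp) - K * math.exp(-r * (T - t)) * N(dm)

def theta(t, x, T, K, r, sigma):
    """The closed-form c_t quoted in the question."""
    dp, dm = d_plus_minus(T - t, x, K, r, sigma)
    return (-r * K * math.exp(-r * (T - t)) * N(dm)
            - sigma * x / (2.0 * math.sqrt(T - t)) * Nprime(dp))
```

A central difference of `call` in $t$ should agree with `theta` to many decimal places, confirming the sign and the $N'(d_+)$ term.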
There's no problem with summing over an uncountable indexing set, but over the reals you don't really get anything interesting by moving from the countable case to the uncountable case. Suppose we have such an uncountable summation of nonnegative real numbers: $\sum_{r \in \Gamma} x_r$. Then we have a few cases: (i) One of the $x_r = \infty$. Then our sum, $\sum_{r \in \Gamma} x_r$, is also $\infty$. (ii) Suppose none of the $x_r = \infty$. Now we utilize dyadic decomposition to write the positive reals as a countable union of sets: $(0, \infty) = \bigcup_{j \in \mathbb{Z}} (2^j, 2^{j+1}]$. By the pigeonhole principle, either (a) there is a $j$ such that there are uncountably many nonzero $x_r$ in $(2^j, 2^{j+1}]$, in which case our sum is $\infty$ again, or (b) there are only countably many nonzero $x_r$ in each interval for all $j$. In the latter case, we're back to our countable sum, as a countable union of countable sets is still countable. Now, if our indexing set $\Gamma$ is essentially countable in this way, we can motivate our sum as usual by defining $\sum_{r \in \Gamma} x_r = \sup_{E \subset \Gamma} \sum_{r \in E} x_r$, where $E$ is a finite set. So an uncountable sum can be defined in the same way, without really picking up many problems (or interesting cases, sadly).
Difference between revisions of "Upper and lower bounds"

Revision as of 17:29, 16 February 2009

Upper and lower bounds for [math]c_n[/math] for small values of n. [math]c_n[/math] is the size of the largest subset of [math][3]^n[/math] that does not contain a combinatorial line. A spreadsheet for all the latest bounds on [math]c_n[/math] can be found here. In this page we record the proofs justifying these bounds.

n: 0 1 2 3 4 5 6 7
[math]c_n[/math]: 1 2 6 18 52 150 450 [1302,1350]

Basic constructions

For all [math]n \geq 1[/math], a basic example of a mostly line-free set is [math]D_n := \{ (x_1,\ldots,x_n) \in [3]^n: \sum_{i=1}^n x_i \neq 0 \ \operatorname{mod}\ 3 \}[/math]. (1) This has cardinality [math]|D_n| = 2 \times 3^{n-1}[/math]. The only lines in [math]D_n[/math] are those with a number of wildcards equal to a multiple of three, and with the number of 1s not equal to the number of 2s modulo 3 among the fixed coordinates.
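These small cases can be verified by brute force. A sketch (the helper names are my own) that enumerates combinatorial lines, checks the cardinality of [math]D_n[/math] using the coordinate-sum-nonzero-mod-3 description, which is what gives [math]2 \times 3^{n-1}[/math], and recovers [math]c_2 = 6[/math] by exhausting all subsets of [math][3]^2[/math]:

```python
from itertools import product, combinations

def lines(n):
    """All combinatorial lines in [3]^n: templates over {1,2,3,'*'} with at
    least one wildcard; all wildcards take the common value 1, 2, or 3."""
    for tpl in product((1, 2, 3, '*'), repeat=n):
        if '*' in tpl:
            yield [tuple(v if t == '*' else t for t in tpl) for v in (1, 2, 3)]

def line_free(S, n):
    S = set(S)
    return not any(all(p in S for p in ln) for ln in lines(n))

def D(n):
    """The basic example D_n: strings whose coordinate sum is nonzero mod 3."""
    return [p for p in product((1, 2, 3), repeat=n) if sum(p) % 3 != 0]

# Brute-force c_2 over all 2^9 subsets of [3]^2.
cells = list(product((1, 2, 3), repeat=2))
c2 = max(len(S) for r in range(len(cells) + 1)
         for S in combinations(cells, r) if line_free(S, 2))
```

For [math]n \leq 3[/math] any line inside [math]D_n[/math] would need three wildcards, i.e. the main diagonal, whose points all have coordinate sum divisible by 3, so [math]D_2[/math] and [math]D_3[/math] are genuinely line-free.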
One way to construct line-free sets is to start with [math]D_n[/math] and remove some additional points. Another useful construction proceeds by using the slices [math]\Gamma_{a,b,c} \subset [3]^n[/math] for [math](a,b,c)[/math] in the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c = n \}[/math], (2) where [math]\Gamma_{a,b,c}[/math] is defined as the strings in [math][3]^n[/math] with [math]a[/math] 1s, [math]b[/math] 2s, and [math]c[/math] 3s. Note that [math]|\Gamma_{a,b,c}| = \frac{n!}{a! b! c!}.[/math] (3) Given any set [math]B \subset \Delta_n[/math] that avoids equilateral triangles [math] (a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r > 0[/math], the set [math]\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}[/math] (4) is line-free and has cardinality [math]|\Gamma_B| = \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!},[/math] (5) and thus provides a lower bound for [math]c_n[/math]: [math]c_n \geq \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!}.[/math] (6) All lower bounds on [math]c_n[/math] have proceeded so far by choosing a good set of B and applying (6). Note that [math]D_n[/math] is the same as [math]\Gamma_{B_n}[/math], where [math]B_n[/math] consists of those triples [math](a,b,c) \in \Delta_n[/math] in which [math]a \neq b\ \operatorname{mod}\ 3[/math]. Note that if one takes a line-free set and permutes the alphabet [math]\{1,2,3\}[/math] in any fashion (e.g. replacing all 1s by 2s and vice versa), one also gets a line-free set. This potentially gives six examples from any given starting example of a line-free set, though in practice there is enough symmetry that the total number of examples produced this way is less than six. (These six examples also correspond to the six symmetries of the triangular grid [math]\Delta_n[/math] formed by rotation and reflection.) Another symmetry comes from permuting the [math]n[/math] indices in the strings of [math][3]^n[/math] (e.g. replacing every string by its reversal).
But the sets [math]\Gamma_B[/math] are automatically invariant under such permutations and thus do not produce new line-free sets via this symmetry. The basic upper bound Because [math][3]^{n+1}[/math] can be expressed as the union of three copies of [math][3]^n[/math], we have the basic upper bound [math]c_{n+1} \leq 3 c_n.[/math] (7) Note that equality only occurs if one can find an [math]n+1[/math]-dimensional line-free set such that every n-dimensional slice has the maximum possible cardinality of [math]c_n[/math]. n=0 [math]c_0=1[/math]: This is clear. n=1 [math]c_1=2[/math]: The three sets [math]D_1 = \{1,2\}[/math], [math]\{2,3\}[/math], and [math]\{1,3\}[/math] are the only two-element sets which are line-free in [math][3]^1[/math], and there are no three-element sets. n=2 [math]c_2=6[/math]: There are four six-element sets in [math][3]^2[/math] which are line-free, which we denote [math]x[/math], [math]y[/math], [math]z[/math], and [math]w[/math] and are displayed graphically as follows (columns are indexed by the first digit, and rows, from top to bottom, by the second digit 3, 2, 1):

    13 .. 33        .. 23 33        13 23 ..        13 23 ..
x = 12 22 ..    y = 12 .. 32    z = .. 22 32    w = 12 .. 32
    .. 21 31        11 21 ..        11 .. 31        .. 21 31

[math]z[/math] is also the same as [math]D_2[/math]. Combining this with the basic upper bound (7) we see that [math]c_2=6[/math]. n=3 [math]c_3=18[/math]: We describe a subset [math]A[/math] of [math][3]^3[/math] as a string [math]abc[/math], where [math]a, b, c \subset [3]^2[/math] correspond to strings of the form [math]1**[/math], [math]2**[/math], [math]3**[/math] in [math][3]^3[/math] respectively. Thus for instance [math]D_3 = xyz[/math], and so from (7) we have [math]c_3=18[/math]. It turns out that [math]D_3 = xyz[/math] is the only 18-element line-free subset of [math][3]^3[/math]. To create a 17-element set, the only way is to remove a single element from one of xyz, yzx, or zxy.
Proof: as [math]17=6+6+5[/math], and [math]c_2=6[/math], at least two of the slices of a 17-element line-free set must be from x, y, z, w, with the third slice having 5 points. If two of the slices are identical, the last slice can have only 3 points, a contradiction. If one of the slices is a w, then the 5-point slice will contain a diagonal, contradiction. By symmetry we may now assume that two of the slices are x and y, which force the last slice to be z with one point removed. Now one sees that the slices must be in the order xyz, yzx, or zxy, because any other combination has too many lines that need to be removed. n=4 [math]c_4=52[/math]: Indeed, divide a line-free set in [math][3]^4[/math] into three blocks [math]1***, 2***, 3***[/math] of [math][3]^3[/math]. If two of them are of size 18, then they must both be xyz, and the third block can have at most 6 elements, leading to an inferior bound of 42. So the best one can do is [math]18+17+17=52[/math]. In fact, there are exactly three ways to get to 52, namely xyz yz’x zxy’ y’zx zx’y xyz z’xy xyz yzx’ where x' is x with either 2222 or 3333 removed (depending on whether the x' appears in the second block or the third) y' is y with either 1111 or 3333 removed z' is z with either 1111 or 2222 removed The second example here can also be described as [math]D_4[/math] with 1111 and 2222 removed. Indeed, given a 52-point line-free set, one of the blocks must be xyz and the others must be yzx and zxy, each with one point removed. If one of the xyz, yzx, zxy patterns is used twice, then the third block can have at most 8 points, a contradiction, so each pattern must be used exactly once. A pattern such as xyz zxy yzx cannot occur since the columns xzy, yxz, zyx of this pattern can contain at most 16 points each. So we must remove two points from xyz yzx zxy or a cyclic permutation thereof, and we can then easily reduce to the above classification. The same logic can classify 51-point line-free sets.
Theorem: a 51-point line-free set is formed by removing three points from xyz yzx zxy, yzx zxy xyz, or zxy xyz yzx.

Proof. Suppose first that we can slice the set into three slices of 17 points each. Each slice is then formed by removing one point from one of xyz, yzx, zxy. Arguing as before we obtain the claim. If no such slicing is available, then every slicing of the 51-point set must slice it into an 18-point set, a 17-point set, and a 16-point set. By symmetry we may assume the 18-point slice is the first one and the 17-point set is the next one: xyz ??? ??? Looking at the vertical slices, we see that the first column must also be an xyz: xyz y?? z?? This forces the second slice, which has 17 points, to be yzx with one point removed; in fact, the point removed must be either 2222 or 2333. This forces the third slice to be contained in zxy, and the claim follows.

n=5

[math]c_5=150[/math]: We have the upper bound [math]c_5 \leq 154[/math]. Suppose for contradiction that we had a pattern with [math]155 = 3 \times 52 - 1[/math] points; then two of the [math][3]^4[/math] slices must have 52 points and the third has 51. The slices with 52 points come from removing two points from one of

yzx zxy xyz
zxy xyz yzx
xyz yzx zxy

while the slices with 51 points can also be verified to come from removing three points from one of the above lines. So, after permutation, one is now removing seven points from

yzx zxy xyz
zxy xyz yzx
xyz yzx zxy

Now the major diagonal of the cube is yyy, and six points must be removed from that. Four of the off-diagonal cubes must also lose points. That leaves at most 152 points, which contradicts the 155 points we started with. We have the lower bound [math]c_5 \geq 150[/math]: one way to get 150 is to start with [math]D_5[/math] and remove the slices [math]\Gamma_{0,4,1}, \Gamma_{0,5,0}, \Gamma_{4,0,1}, \Gamma_{5,0,0}[/math].
Another pattern of 150 points is this: take the 450 points in [math][3]^6[/math] lying in the slices [math]\Gamma_{a,b,c}[/math] with (a,b,c) a permutation of (1,2,3) or (0,2,4), then select the 150 whose final coordinate is 1. That gives this many points in each cube:

17 18 17
17 17 18
12 17 17

An integer programming method has established the upper bound [math]c_5\leq 150[/math], with 12 extremal solutions. This file contains the extremisers, one point per line, with different extremisers separated by a line containing "—". This is the linear program, readable by GNU's glpsol linear programming solver, which also quickly proves that 150 is the optimum. Each variable corresponds to a point in the cube, numbered according to the lexicographic ordering. If a variable is 1 then the point is in the set; if it is 0 then it is not. There is one linear inequality for each combinatorial line, stating that at least one point must be missing from the line.

n=6

[math]c_6=450[/math]: The upper bound follows since [math]c_6 \leq 3 c_5[/math]. The lower bound can be formed by gluing together all the slices [math]\Gamma_{a,b,c}[/math] where (a,b,c) is a permutation of (0,2,4) or (1,2,3).

n=7

[math]1302 \leq c_7 \leq 1350[/math]: The upper bound follows since [math]c_7 \leq 3 c_6[/math]. The lower bound can be formed by removing 016,106,052,502,151,511,160,610 from [math]D_7[/math].

Larger n

The following construction, based on triangle-free sets of triples, gives lower bounds of the order [math]2.7 \sqrt{\log(N)/N}\,3^N[/math] points for large N (N ~ 5000). It applies when N is a multiple of 3. For N=3M-1, restrict the first digit of a 3M sequence to be 1; this construction therefore has exactly one-third as many points for N=3M-1 as it has for N=3M. For N=3M-2, restrict the first two digits of a 3M sequence to be 12. This leaves roughly one ninth as many points for N=3M-2 as for N=3M.
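The size of the [math]c_6[/math] lower-bound construction is a quick multinomial count; a Python sketch:

```python
from itertools import permutations
from math import factorial

def gamma_size(a, b, c):
    """|Gamma_{a,b,c}|: strings of length a+b+c with a 1s, b 2s, c 3s."""
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

triples = set(permutations((0, 2, 4))) | set(permutations((1, 2, 3)))
total = sum(gamma_size(*t) for t in triples)
print(total)  # 450
```

The (0,2,4) permutations contribute 6 x 15 = 90 points and the (1,2,3) permutations 6 x 60 = 360, giving 450.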
The current lower bounds for [math]c_{3m}[/math] are built like this, with abc being shorthand for [math]\Gamma_{a,b,c}[/math]:

[math]c_3[/math] from (012) and permutations
[math]c_6[/math] from (123,024) and perms
[math]c_9[/math] from (234,135,045) and perms
[math]c_{12}[/math] from (345,246,156,02A,057) and perms (A=10)
[math]c_{15}[/math] from (456,357,267,13B,168,04B,078) and perms (B=11)

To get the triples in each row, add 1 to the triples in the previous row; then include new triples that have a zero. A general formula for these points is given below. I think that they are triangle-free. (For N<21, ignore any triple with a negative entry.) There are thirteen groups of points in the centre, the same for all N=3M:

(M-7, M-3, M+10) and perms
(M-7, M, M+7) and perms
(M-7, M+3, M+4) and perms
(M-6, M-4, M+10) and perms
(M-6, M-1, M+7) and perms
(M-6, M+2, M+4) and perms
(M-5, M-1, M+6) and perms
(M-5, M+2, M+3) and perms
(M-4, M-2, M+6) and perms
(M-4, M+1, M+3) and perms
(M-3, M+1, M+2) and perms
(M-2, M, M+2) and perms
(M-1, M, M+1) and perms

There is also a string of points that is slightly different for odd and even N.

For N=6K:

(2x, 2x+2, N-4x-2) and perms (x=0..K-4)
(2x, 2x+5, N-4x-5) and perms (x=0..K-4)
(2x, 3K-x-4, 3K+x+4) and perms (x=0..K-4)
(2x, 3K-x-1, 3K+x+1) and perms (x=0..K-4)
(2x+1, 2x+5, N-4x-6) and perms (x=0..K-5)
(2x+1, 2x+8, N-4x-9) and perms (x=0..K-5)
(2x+1, 3K-x-1, 3K-x) and perms (x=0..K-5)
(2x+1, 3K-x-4, 3K-x+3) and perms (x=0..K-5)

For N=6K+3: the thirteen central groups mentioned above, and:

(2x, 2x+4, N-4x-4) and perms (x=0..K-4)
(2x, 2x+7, N-4x-7) and perms (x=0..K-4)
(2x, 3K+1-x, 3K+2-x) and perms (x=0..K-4)
(2x, 3K-2-x, 3K+5-x) and perms (x=0..K-4)
(2x+1, 2x+3, N-4x-4) and perms (x=0..K-4)
(2x+1, 2x+6, N-4x-7) and perms (x=0..K-4)
(2x+1, 3K-x, 3K-x+2) and perms (x=0..K-4)
(2x+1, 3K-x-3, 3K-x+5) and perms (x=0..K-4)

An alternate construction for N=6K: First define a sequence of all non-negative numbers which, in base 3, do not
contain a 1. Add 1 to each multiple of 3 in this sequence. The resulting sequence does not contain a length-3 arithmetic progression; it starts 1, 2, 7, 8, 19, 20, 25, 26, 55, … Second, list all the (abc) triples for which the larger two entries differ by a number from the sequence, excluding the case when the smaller two differ by 1, but then including the case when (a,b,c) is a permutation of N/3+(-1,0,1).

Asymptotics

DHJ(3) is equivalent to the upper bound [math]c_n = o(3^n)[/math]. In the opposite direction, observe that if we take a set [math]S \subset [3n][/math] that contains no 3-term arithmetic progressions, then the set [math]\bigcup_{(a,b,c) \in \Delta_n: a+2b \in S} \Gamma_{a,b,c}[/math] is line-free. From this and the Behrend construction it appears that we have the lower bound [math]c_n \geq 3^{n-O(\sqrt{\log n})}[/math], though this has to be checked. Numerics suggest that the first large-n construction given above gives a lower bound of roughly [math]2.7 \sqrt{\log(n)/n} \times 3^n[/math], which would asymptotically be inferior to the Behrend bound. The second large-n construction had numerical asymptotics for [math]\log(c_n/3^n)[/math] close to [math]1.2-\sqrt{\log(n)}[/math] between n=1000 and n=10000, consistent with the Behrend bound.

Numerical methods

A greedy algorithm was implemented here. The results were sharp for [math]n \leq 3[/math] but were slightly inferior to the constructions above for larger n.
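The base-3 sequence is easy to generate and spot-check; a Python sketch (the choice of 24 terms is arbitrary):

```python
def seq(count):
    """First `count` terms: non-negative n with no digit 1 in base 3,
    then add 1 to each multiple of 3."""
    out, n = [], 0
    while len(out) < count:
        m, ok = n, True
        while m:
            if m % 3 == 1:
                ok = False
                break
            m //= 3
        if ok:
            out.append(n + 1 if n % 3 == 0 else n)
        n += 1
    return out

s = seq(24)
print(s[:9])  # [1, 2, 7, 8, 19, 20, 25, 26, 55]
# no three-term arithmetic progression among these terms
terms = set(s)
ap_free = not any(2 * b - a in terms for a in terms for b in terms if a < b)
print(ap_free)  # True
```

The AP check uses the fact that a < b < c is a progression exactly when c = 2b - a.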
$Version

"10.1.0 for Microsoft Windows (64-bit) (March 24, 2015)"

The following integral was nicely and quickly done by Mathematica:

i1 = Integrate[1/(1 - x) (PolyLog[3, -x] + 3/4 Zeta[3]), {x, 0, 1}]
(* Out[1027]= (3 Zeta[3])/4 *)

% // N
(* Out[1028]= 0.901543 *)

But unfortunately the result is wrong. This can be seen by looking at the numerical integral:

i2 = NIntegrate[1/(1 - x) (PolyLog[3, -x] + 3/4 Zeta[3]), {x, 0, 1}]
(* Out[1029]= 0.859247 *)

The results differ appreciably. Also, it can be shown [1] that the integral is equivalent to the following infinite sum

$$s= \sum _{k=1}^{\infty } \frac{(-1)^{k+1} H_k}{k^3}$$

Numerically this is

i3 = NSum[(-1)^(k + 1)/k^3 HarmonicNumber[k], {k, 1, 10^4}, WorkingPrecision -> 10, Method -> "AlternatingSigns"]
(* Out[1036]= 0.8592466552 *)

This result is in agreement with $i2$. Hence we conclude that Mathematica returns a wrong result for the integral $i1$.

References
Maybe every algebraic topology student, at some moment, will ask himself/herself the question: why are $\pi_*$ so difficult and mysterious, especially when compared with (co)homology? Think about the weird connections between $\pi_n(S^k)$ and number theory... it is insane! But one day I realized this idea may be wrong and biased. Homology is probably not really easier than homotopy. One tends to think $H_*$ is easy only because most of us care mainly about finite-dimensional manifolds in our daily life, and then one has that nice vanishing result for higher-dimensional $H_*$. Consider the infinite-dimensional Eilenberg−MacLane space $K(\mathbb{Z}, n)$ when $n>2$. Its $\pi_*$ is surely as easy as one could hope, but how about its $H_*$, especially the torsion? It appears to me the better (?) statement might be: "Spaces with simple $\pi_*$ tend to have complicated $H_*$. Spaces with simple $H_*$ tend to have complicated $\pi_*$." This gives one a strange feeling. It is almost like some kind of "Fourier transform", some kind of duality. And this makes some spaces particularly interesting: spaces with both simple $H_*$ and simple $\pi_*$ at the same time. Let me start with some simple examples: 1) $S^1 \simeq K(\mathbb{Z}, 1)$. 2) $\Sigma^g$, that is, Riemann surfaces. 3) $K(\mathbb{Z}, 2) \simeq \mathbb{CP}^\infty$. 4) $K(G, 1)$ when $G$ is a finite group. And I am looking forward to your examples of such spaces, especially comments from AT experts. Thank you very much.
The video below works out some integration by parts examples. After watching the video, see if you can compute the same examples without looking back at the video:

1) DO: Compute $\displaystyle \int x \ln(x)\,dx$, letting $u=\ln(x)$ and $dv=x\,dx$. Be careful and precise with where you write $u$ and $dv$ as in the video - do it the same way every time to keep from making errors. (Notice that you cannot let $dv=\ln x\,dx$, since you do not (yet) know the antiderivative of $\ln x$.)

2) DO: Compute $\displaystyle\int x \sin(x)\,dx$, letting $u=x$ and $dv=\sin(x)\,dx$.

3) DO: Set up $\int x \sin(x)\,dx$ (the same integral as above), but this time letting $u=\sin x$ and $dv=x\, dx$. What do you think about the resulting integral $\displaystyle\int v\,du$? It is fine to try one way, then decide it might be better another way!

4) DO: Compute $\int x^2 e^x\,dx$. What would you choose for $u$ and $dv$ and why? Remember, you want your resulting integral $\displaystyle\int v\,du$ to be simple to compute.

If you need help on these examples, rewatch the video.
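If you want to check your answer to example 1 numerically, here is a plain-Python sketch (the interval $[1,2]$ and the midpoint rule are arbitrary choices, not part of the exercise):

```python
import math

def midpoint(f, a, b, n=100000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# u = ln x, dv = x dx  gives the antiderivative  x^2/2 ln x - x^2/4
F = lambda x: x * x / 2 * math.log(x) - x * x / 4
lhs = midpoint(lambda x: x * math.log(x), 1.0, 2.0)
print(abs(lhs - (F(2.0) - F(1.0))) < 1e-8)  # True
```

Agreement of the quadrature with $F(2)-F(1)$ confirms the antiderivative obtained by parts.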
Suppose two players, Alice and Bob, each hold equal-sized subsets of \([n] = \{1, 2, \ldots, n\}\). A third party, Carole, wishes to convince Alice and Bob that their subsets are disjoint, i.e., that their sets have no elements in common. How efficiently can Carole prove to Alice and Bob that their sets are disjoint? To formalize the situation, suppose \(k\) is an integer with \(1 \leq k \leq n\), and let \(A\) and \(B\) be, respectively, Alice and Bob's subsets of \([n]\), both of size \(k\). Carole produces a certificate, or proof, that \(A \cap B = \emptyset\) and sends this certificate to Alice and Bob. Alice and Bob individually verify that Carole's certificate is valid with respect to their individual inputs. Alice and Bob then send each other (very short) messages saying whether or not they accept Carole's proof. If they both accept the proof, they can be certain that their sets are disjoint. In order for the proof system described above to be valid, it must satisfy the following two properties:

Completeness: If \(A \cap B = \emptyset\) then Carole can send a certificate that Alice and Bob both accept.

Soundness: If \(A \cap B \neq \emptyset\) then any certificate that Carole sends must be rejected by at least one of Alice and Bob.

Before giving a "clever" solution to this communication problem, we describe a naive solution. Since Carole sees \(A\) and \(B\), her proof of their disjointness could simply be to send Alice and Bob the (disjoint) pair \(C = (A, B)\). Then Alice verifies the validity of the certificate \(C\) by checking that her input \(A\) is equal to the first term in \(C\), and similarly Bob checks that \(B\) is equal to the second term. Clearly, if \(A\) and \(B\) are disjoint, Alice and Bob will both accept \(C\), while if \(A\) and \(B\) intersect, Alice or Bob will reject every certificate that Carole could send. Let us quickly analyze the efficiency of this protocol. The certificate that Carole sends consists of a pair of \(k\)-subsets of \([n]\).
The naive encoding of simply listing the elements of \(A\) and \(B\) requires \(2 k \log n\) bits: each \(i \in A \cup B\) requires \(\log n\) bits, and there are \(2 k\) such indices in the list. In fact, even if Carole is much cleverer in her encoding of the sets \(A\) and \(B\), she cannot compress the proof significantly, for information-theoretic reasons. Indeed, there are $$ {n \choose k}{n-k \choose k} $$ distinct certificates that Carole must be able to send, hence her message must be of length at least $$ \log {n \choose k} \geq \log ((n/k)^k) = k \log n - k \log k. $$ Is it possible for Carole, Alice, and Bob to devise a more efficient proof system for disjointness? The key observation in our more efficient protocol for set disjointness is the following: \(A\) and \(B\) are disjoint if and only if there exists some \(S \subseteq [n]\) such that \(A \subseteq S\) and \(B \subseteq \bar{S}\), where we use \(\bar{S}\) to denote the complement of \(S\). If \(k\) is relatively small, say \(k = O(\log n)\), there are many such sets \(S\): in fact \(2^{n - 2k}\) of them. So our strategy is to try to find a relatively small family \(\mathcal{F}\) of subsets such that for every disjoint pair \((A, B)\), \(\mathcal{F}\) contains a witness \(S\) to the disjointness of \(A\) and \(B\), in the sense that \(A \subseteq S\) and \(B \subseteq \bar{S}\). Finding an explicit family \(\mathcal{F}\) seems daunting, so we apply the probabilistic method. Specifically, we choose the sets \(S \in \mathcal{F}\) uniformly at random, and show that with positive probability, a relatively small family \(\mathcal{F}\) will have the desired property.

Theorem. There exists a family \(\mathcal{F}\) of size $$ |\mathcal{F}| \leq 2^{O(k + \log \log n)} $$ such that for every disjoint pair \((A, B)\) of \(k\)-subsets of \([n]\), there exists \(S \in \mathcal{F}\) such that \(A \subseteq S\) and \(B \subseteq \bar{S}\).
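To put concrete numbers on the gap (n = 1024, k = 10 are arbitrary illustrative choices):

```python
from math import comb, log2

n, k = 1024, 10
naive = 2 * k * log2(n)                    # bits to list both sets
lower = log2(comb(n, k) * comb(n - k, k))  # information-theoretic bound
print(naive)         # 200.0
print(round(lower))  # about 156
```

So the naive certificate is already within a small constant factor of the counting lower bound; beating it requires changing what is certified, which is what the witness-set idea below does.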
The theorem implies that Carole can produce a certificate for the disjointness of \(A\) and \(B\) of length \(O(k + \log \log n)\). Indeed, it takes Carole only \(\log |\mathcal{F}| = O(k + \log \log n)\) bits to specify the suitable set \(S \in \mathcal{F}\). Once Alice and Bob are told \(S\), Alice can easily verify that \(A \subseteq S\) and Bob that \(B \subseteq \bar{S}\). In the case where \(k\) is a constant independent of \(n\), this certificate is exponentially shorter than the certificate in the naive protocol described above.

Proof. We employ the probabilistic method to give a randomized "construction" of such a family \(\mathcal{F}\). Choose \(N\) sets $$ S_1, S_2, \ldots, S_N \subseteq [n] $$ independently, uniformly at random from \(\mathcal{P}([n])\). For any fixed disjoint pair \((A, B)\), we compute $$ \begin{align} \mathrm{Pr}_{S_1, \ldots, S_N} (\text{for all } i,\ A \not\subseteq S_i \text{ or } B \not\subseteq \bar{S_i}) &= \left(\mathrm{Pr}_{S \subseteq [n]} (A \not\subseteq S \text{ or } B \not\subseteq \bar{S})\right)^N \\ &= \left(1 - \frac{1}{4^k}\right)^N \\ &\leq e^{-N / 4^k}. \end{align} $$ The first equality holds by the independence of the choices of \(S_1, \ldots, S_N\). The second equality holds because for each \(i \in A\), \(\mathrm{Pr}(i \in S) = \frac 1 2\), similarly for \(j \in B\), \(\mathrm{Pr}(j \in \bar S) = \frac 1 2\), and these \(2k\) events are independent. The inequality follows from the fact that for all \(x > 1\), \((1 - 1/x)^x \leq e^{-1}\). We need \(\mathcal{F}\) to contain a witness for every disjoint pair \((A, B)\) simultaneously. Thus, we apply the union bound to estimate $$ \mathrm{Pr}_{S_1, \ldots, S_N} \left((\exists \text{ disjoint } (A, B))(\forall i \in [N])[A \not\subseteq S_i \text{ or } B \not\subseteq \bar{S_i}]\right) \leq {n \choose k}{n-k \choose k} e^{-N/4^k}, $$ which is less than 1 for some \(N = O(4^k \cdot k \log n)\). For such an \(N\), some choice of \(\mathcal{F} = \{S_1, \ldots, S_N\}\) has the desired property, and \(\log |\mathcal{F}| = O(k + \log \log n)\), as claimed.
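The argument is constructive enough to simulate. A Python sketch (n = 10, k = 2, N = 300 and the seed are arbitrary choices; N is taken comfortably above \(4^k \ln(\#\text{pairs}) \approx 115\)):

```python
import random
from itertools import combinations

random.seed(0)
n, k, N = 10, 2, 300
# draw the random family: each element joins each S with probability 1/2
family = [frozenset(i for i in range(n) if random.random() < 0.5)
          for _ in range(N)]

def separated(A, B):
    """Does some S in the family witness A subset of S, B subset of complement?"""
    return any(A <= S and not (B & S) for S in family)

pairs = [(set(A), set(B))
         for A in combinations(range(n), k)
         for B in combinations(range(n), k) if not set(A) & set(B)]
print(all(separated(A, B) for A, B in pairs))  # True (with overwhelming probability)
```

By the union bound, the chance that any of the 1260 disjoint ordered pairs is left unseparated is below \(1260 \cdot (15/16)^{300} \approx 5 \times 10^{-6}\).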
Metric distance on a set $X$

A function $\rho$ with non-negative real values, defined on the Cartesian product $X\times X$ and satisfying for any $x, y, z\in X$ the conditions:

1) $\rho(x,y)=0$ if and only if $x = y$ (the identity axiom);
2) $\rho(x,y) + \rho(y,z) \geq \rho(x,z)$ (the triangle axiom);
3) $\rho(x,y) = \rho(y,x)$ (the symmetry axiom).

Examples. 1) On any set there is the discrete metric \begin{equation} \rho(x,y) = 0 \text{ if } x=y \quad \text{and} \quad \rho(x,y) = 1 \text{ if } x\ne y. \end{equation} 2) In the space $\mathbb R^n$ various metrics are possible, among them: \begin{equation} \rho(x,y) = \sqrt{\sum_i(x_i-y_i)^2}; \end{equation} \begin{equation} \rho(x,y)=\sup\limits_i|x_i-y_i|; \end{equation} \begin{equation} \rho(x,y)=\sum_i|x_i-y_i|; \end{equation} here $\{x_i\}, \{y_i\} \in \mathbb{R}^n$. 3) In a Riemannian space a metric is defined by a metric tensor, or a quadratic differential form (in some sense, this is an analogue of the first metric of example 2)). For a generalization of metrics of this type see Finsler space. 4) In function spaces on a (countably) compact space $X$ there are also various metrics; for example, the uniform metric \begin{equation} \rho(f,g)=\sup\limits_{x\in X}|f(x)-g(x)| \end{equation} (an analogue of the second metric of example 2)), and the integral metric \begin{equation} \rho(f,g)=\int\limits_X|f-g|\, dx. \end{equation} 5) In normed spaces over $\mathbb R$ a metric is defined by the norm $\|\cdot\|$: \begin{equation} \rho(x,y) = \|x-y\|. \end{equation} 6) In the space of closed subsets of a metric space there is the Hausdorff metric. If, instead of 1), one requires only that $\rho(x,x) = 0$ for all $x$ (so that $\rho(x,y)=0$ is allowed for some $x \ne y$), one obtains a pseudo-metric. A metric (and even a pseudo-metric) makes the definition of a number of additional structures on the set $X$ possible: first of all a topology (see Topological space), and in addition a uniformity (see Uniform space) or a proximity (see Proximity space) structure.
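The three metrics on $\mathbb R^n$ from example 2) can be spot-checked against the axioms numerically; a small Python sketch (dimension 4, the sample range and the number of random triples are arbitrary):

```python
import random

def d2(x, y):    # Euclidean metric
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def dsup(x, y):  # sup metric
    return max(abs(a - b) for a, b in zip(x, y))

def d1(x, y):    # l1 metric
    return sum(abs(a - b) for a, b in zip(x, y))

random.seed(1)
ok = True
for _ in range(1000):
    x, y, z = ([random.uniform(-5, 5) for _ in range(4)] for _ in range(3))
    for rho in (d2, dsup, d1):
        ok &= rho(x, y) + rho(y, z) >= rho(x, z) - 1e-12  # triangle axiom
        ok &= abs(rho(x, y) - rho(y, x)) < 1e-12          # symmetry axiom
print(ok)  # True
```

The small tolerance only absorbs floating-point rounding; the axioms themselves hold exactly for all three metrics.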
The term metric is also used to denote more general notions which do not have all the properties 1)–3); such are, for example, an indefinite metric, a symmetry on a set, etc. References [1] P.S. Aleksandrov, "Einführung in die Mengenlehre und die allgemeine Topologie" , Deutsch. Verlag Wissenschaft. (1984) (Translated from Russian) [2] J.L. Kelley, "General topology" , Springer (1975) [3] K. Kuratowski, "Topology" , 1 , PWN & Acad. Press (1966) (Translated from French) [4] N. Bourbaki, "Elements of mathematics. General topology" , Addison-Wesley (1966) (Translated from French) Comments Potentially, any metric space $(X,\rho)$ has a second metric $\sigma \geq \rho$ naturally associated: the intrinsic or internal metric. Potentially, because the definition may give $\sigma(x,y)=\infty$ for some pairs of points $x, y$. One defines the length (which may be $\infty$) of a continuous path $f:[0,1]\to X$ by $L(f)=\lim\limits_{\epsilon\to 0}\sup L_{\epsilon}(f)$, where $L_{\epsilon}(f)$ is the infimum of all finite sums $\sum \rho(x_i,x_{i+1})$ with $\{x_i\}$ a finite subset of $[0,1]$ which is an $\epsilon$-net (cf. Metric space) and is listed in the natural order. Then $\sigma(x,y)$ is the infimum of the lengths of paths $f$ with $f(0)=x$, $f(1)=y$, but $\sigma(x,y)=\infty$ if there is no such path of finite length. No reasonable topological restriction on $(X,\rho)$ suffices to guarantee that the intrinsic "metric" (or écart) $\sigma$ will be finite-valued. If $\sigma$ is finite-valued, suitable compactness conditions will assure that minimum-length paths, i.e. paths from $x$ to $y$ of length $\sigma(x,y)$, exist. When every pair of points $x, y$ is joined by a path (non-unique, in general) of length $\sigma(x,y)$, the metric is often called convex. (This is much weaker than the surface theorists' convex metric.) The main theorem in this area is that every locally connected metric continuum admits a convex metric [a1], [a2]. References [a1] R.H. 
Bing, "Partitioning a set" Bull. Amer. Math. Soc. , 55 (1949) pp. 1101–1110 [a2] E.E. Moïse, "Grille decomposition and convexification" Bull. Amer. Math. Soc. , 55 (1949) pp. 1111–1121

How to Cite This Entry: Metric. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Metric&oldid=29410
Now showing items 1-8 of 8 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\mathbf{\sqrt{s_{{\rm NN}}} = 5.02}$ TeV (Elsevier, 2015-01) We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...
Elliptic flow of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Springer, 2015-06-29) The elliptic flow coefficient ($v_{2}$) of identified particles in Pb--Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV was measured with the ALICE detector at the LHC. The results were obtained with the Scalar Product ... Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at $\sqrt{s}$ = 2.76 TeV (American Physical Society, 2015-01-07) The pT-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at midrapidity in proton-proton collisions at $\sqrt{s}$ = 2.76 TeV in the transverse momentum range ... Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2015-02-04) Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ...
How do you find the 9th derivative of $(\cos(5 x^2)-1)/x^3$ and evaluate it at $x=0$ without differentiating straightforwardly with the quotient rule? The teacher's hint is to use Maclaurin series, but I don't see how. My attempts at differentiating yielded this: $$-10\sin(5x^2)/x^2 - 3(\cos(5x^2) - 1)/x^4$$ $$-100\cos(5x^2)/x + 50\sin(5x^2)/x^3 + 12(\cos(5x^2) - 1)/x^5$$ $$1000\sin(5x^2) + 600\cos(5x^2)/x^2 - 270\sin(5x^2)/x^4 - 60(\cos(5x^2) - 1)/x^6$$ As a programmer, I used sympy to calculate the derivative to be $$- 1000000000 x^{6} \sin{\left (5 x^{2} \right )} + 900000000 x^{4} \cos{\left (5 x^{2} \right )} + 540000000 x^{2} \sin{\left (5 x^{2} \right )} + 378000000 \cos{\left (5 x^{2} \right )} - 472500000 \frac{\sin{\left (5 x^{2} \right )}}{x^{2}} - 481950000 \frac{\cos{\left (5 x^{2} \right )}}{x^{4}} + 393120000 \frac{\sin{\left (5 x^{2} \right )}}{x^{6}} + 240408000 \frac{\cos{\left (5 x^{2} \right )} - 1}{x^{8}} - 97977600 \frac{\sin{\left (5 x^{2} \right )}}{x^{10}} - 19958400 \frac{\cos{\left (5 x^{2} \right )} - 1}{x^{12}}$$ which is $378000000$ at $x=0$. Is there a simpler method to doing this by hand? Chegg seems to agree that the answer is $378000000$.
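Following the teacher's hint: expand $\cos(5x^2)-1=\sum_{m\ge 1}(-1)^m 25^m x^{4m}/(2m)!$, divide by $x^3$, and the 9th derivative at $0$ is $9!$ times the coefficient of $x^9$ (take $4m-3=9$, i.e. $m=3$). A sketch in exact arithmetic (note this yields $-7875000$, not $378000000$: plugging $x=0$ into just the non-singular term of the long expression is not valid, since the $\sin(5x^2)/x^2$-type terms also contribute in the limit):

```python
from fractions import Fraction
from math import factorial

# (cos(5x^2) - 1)/x^3 = sum_{m>=1} (-1)^m 25^m x^(4m-3) / (2m)!
m = 3                                                 # 4m - 3 = 9
c9 = Fraction((-1) ** m * 25 ** m, factorial(2 * m))  # coefficient of x^9
print(factorial(9) * c9)                              # -7875000
```

Expanding every term of the sympy expression about $x=0$ gives the same limit, $-7875000$.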
\begin{equation} \frac{\partial C_i}{\partial t} = D_i \nabla^2 C_i - \frac{I \cdot \nabla t_i}{z_i F} - \sum_{i'} \frac{z_{i'}}{z_i} D_{i'}\nabla \cdot (t_i\nabla C_{i'}) \end{equation} \begin{equation} \nabla \eta = \frac{I}{\kappa} + \frac{F}{\kappa} \sum_i z_i D_i \nabla C_i \end{equation} The dependent variables are $C_i$ and $\eta$ ($t_i$ is a function of $C_i$); I am working in the 1D case. I have some thoughts on a difference scheme (TR-BDF2) to try on the first equation. The second equation looks simple, but I'm afraid a simple centered difference will not work. I've tried something like \begin{equation} \left. D_i\frac{\partial C_i}{\partial x}\right \vert_{k-1/2} = D_{i, k-1/2}\frac{C_{i,k}-C_{i,k-1}}{\Delta x} \approx \frac{D_{i, k}+D_{i,k-1}}{2}\frac{C_{i,k}-C_{i,k-1}}{\Delta x} \end{equation} but with no luck when implementing it in the nonlinear solver MINPACK. What would be a good scheme for differencing these two equations? Also, should I run a stability analysis every time I come up with a difference scheme, before computing with it? Another set of equations looks like this, where the dependent variables are $C_i$, $\eta$, $i_2$, $\epsilon$ and $\epsilon_k$. I've tried some difference schemes and put in some effort, but in vain. Is there any general guideline as to the way of discretization? I'm using the method of lines, by the way.
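For the face value $D_{i,k-1/2}$, one common alternative to the arithmetic average used above is the harmonic mean, which keeps the flux continuous when $D$ jumps between cells. A minimal Python sketch of the conservative (flux-form) discretization of $\partial_x(D\,\partial_x C)$ on a uniform grid (the sample profiles are assumptions for illustration only):

```python
# Conservative differencing of d/dx ( D(x) dC/dx ) on a uniform grid:
# a flux is built on each face k-1/2 from a face value of D,
# then differenced again to get the divergence at interior nodes.
def div_flux(C, D, dx, face):
    n = len(C)
    F = [face(D[k - 1], D[k]) * (C[k] - C[k - 1]) / dx for k in range(1, n)]
    return [(F[k] - F[k - 1]) / dx for k in range(1, n - 1)]

arith = lambda a, b: 0.5 * (a + b)        # arithmetic face average
harm = lambda a, b: 2 * a * b / (a + b)   # harmonic face average

C = [x * x for x in range(6)]             # sample profile, C = x^2
D = [1.0] * 6                             # constant D: both averages agree
print(div_flux(C, D, 1.0, arith))  # [2.0, 2.0, 2.0, 2.0], i.e. d2(x^2)/dx2
print(div_flux(C, D, 1.0, harm) == div_flux(C, D, 1.0, arith))  # True
```

The two averages only differ when $D$ varies; for strongly varying $D$ (e.g. concentration-dependent $D_i$), the harmonic mean is often the more robust choice because it bounds the flux by the smaller of the two cell values.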
\begin{equation} \frac{\partial \epsilon C_i}{\partial t} = \nabla \cdot \biggl( \epsilon D_i \nabla C_i \biggr) -\frac{i_2\cdot \nabla t_i}{z_i F} - \sum_j a i_j\biggl( \frac{t_i}{z_i F}+\frac{s_{i,j}}{n_j F} \biggr) - \sum_{i'} \frac{z_{i'}}{z_i}\epsilon D_{i'}\nabla \cdot (t_i\nabla C_{i'}) - R_i \end{equation} \begin{equation} \nabla \eta = -\biggl( \frac{I-i_2}{\sigma} \biggr) + \frac{i_2}{\epsilon \kappa} + \frac{F}{\kappa} \sum_i z_i D_i \nabla C_i \end{equation} \begin{equation} \nabla \cdot i_2 = a \sum_j i_j \end{equation} \begin{equation} \frac{\partial \epsilon}{\partial t} = -\sum_k \overset{\sim}{V_k} k_k\epsilon_k\biggl(\prod_i C_i^{\gamma_i,k}-K_{sp,k}\biggr) \end{equation} \begin{equation} \frac{\partial \epsilon_k}{\partial t} = \overset{\sim}{V_k} k_k\epsilon_k\biggl(\prod_i C_i^{\gamma_i,k}-K_{sp,k}\biggr) \end{equation} EDIT: To be more specific, I'm now trying to solve for the initial conditions for $\eta$ and $i_2$, which are not given. In the separator (of the battery), since there is no concentration gradient, \begin{equation} \nabla \eta = \frac{I}{\kappa} \end{equation} \begin{equation} i_2=I \end{equation} In the porous cathode \begin{equation} \nabla \eta = -\biggl( \frac{I-i_2}{\sigma} \biggr) + \frac{i_2}{\epsilon \kappa} \end{equation} \begin{equation} \nabla \cdot i_2 = a \sum_j i_j \end{equation} Here $i_j$ is given by the Butler-Volmer equation, which in Newton's method tends to explode when the initial guess is bad.
\begin{equation} i_j = i_{o,j}\biggl[ \prod_i \biggl(\frac{C_i}{C_{i,o}}\biggr)^{p_{i,j}}\exp\biggl(\frac{\alpha_{aj} F}{RT}(\eta_j-U_j)\biggr) - \prod_i \biggl(\frac{C_i}{C_{i,o}}\biggr)^{q_{i,j}}\exp\biggl(-\frac{\alpha_{cj} F}{RT}(\eta_j-U_j)\biggr) \biggr] \end{equation} I discretize the spatial derivative with second order difference, and the output shows a jump every other mesh point (I did not encounter this when I wrote my own code with Newton's method and analytic Jacobian, but I am afraid later on I will make another error when writing down analytic Jacobian since there are so many variables. So I try the MINPACK with numerical Jacobian but with no luck). I felt like something is wrong in the boundary. Since $\eta$ is not continuous in the separator/porous cathode interface, so I imposed $\nabla \eta_s = \nabla \eta_c$. This is correct because $\eta=\phi_1-\phi_2$, and I assumed $\phi_2$ is continuous in the interface, $\phi_1$ does not exist in the separator but $\nabla \phi_1=0$ in the interface. Particularly in the interface mesh points I wrote \begin{equation} \frac{-3\eta_{s,n}+4\eta_{s,n-1}-\eta_{s,n-2}}{2\Delta x} = \frac{3\eta_{c,1}-4\eta_{c,2}+\eta_{c,3}}{2\Delta x} \end{equation} Thanks!
I am following the derivation of the field equations on the Wikipedia page for $f(R)$ gravity. But I do not understand the following step: $$ \delta S = \int \frac{1}{2\kappa} \sqrt{-g} \left(\frac{\partial f}{\partial R} (R_{\mu\nu} \delta g^{\mu\nu}+g_{\mu\nu}\Box \delta g^{\mu\nu}-\nabla_\mu \nabla_\nu \delta g^{\mu\nu}) -\frac{1}{2} g_{\mu\nu} \delta g^{\mu\nu} f(R) \right) $$ The Wikipedia article says the next step is to integrate the second and third terms by parts to yield: $$ \delta S = \int \frac{1}{2\kappa} \sqrt{-g}\delta g^{\mu\nu} \left(\frac{\partial f}{\partial R} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu} f(R)+[g_{\mu\nu}\Box -\nabla_\mu \nabla_\nu] \frac{\partial f}{\partial R} \right)\, \mathrm{d}^4x $$ In other words, integrating by parts should yield: $$ \int \sqrt{-g} \left(\frac{\partial f}{\partial R} (g_{\mu\nu}\Box \delta g^{\mu\nu}-\nabla_\mu \nabla_\nu \delta g^{\mu\nu}) \right)\, d^4x $$ $$= \int \sqrt{-g}\delta g^{\mu\nu} \left([g_{\mu\nu}\Box -\nabla_\mu \nabla_\nu] \frac{\partial f}{\partial R} \right) \mathrm{d}^4x $$ From there, getting the usual $f(R)$ field equations is trivial. What I'm confused by is how to integrate by parts to get that. I have tried many different ways; the one I think is most correct is: assuming $g_{\mu \nu} \Box$ and $\nabla_\mu \nabla_\nu$ are differential operators, take $u' = g_{\mu \nu} \Box \delta g^{\mu\nu}$ and $v = f'$, and similarly with the $\nabla_\mu \nabla_\nu$ term. Using the formula for integration by parts, $$ \int u'v = uv -\int uv', $$ I get: $$ \int \sqrt{-g} \left(f' (g_{\mu\nu}\Box \delta g^{\mu\nu}-\nabla_\mu \nabla_\nu \delta g^{\mu\nu}) \right)\, d^4x $$ $$= -\int \sqrt{-g}\delta g^{\mu\nu} \left([g_{\mu\nu}\Box -\nabla_\mu \nabla_\nu] f' \right) \mathrm{d}^4x $$ because the $uv$ term will disappear. So can anyone explain to me why I have the minus sign and Wikipedia doesn't? Is it OK to use $g_{\mu \nu} \Box$ as a differential operator?
I have tried other ways, such as writing $\Box$ explicitly and using integration by parts twice, but I also couldn't get the correct answer, as I end up with terms such as $\nabla_\nu \nabla_\mu$, which can't be correct. There is a similar post on Physics Forums about this step, but it does not answer my question and is now closed.
The formula you found comes from the barometric formula. It assumes a hydrostatic atmosphere (no vertical accelerations) and a constant temperature with height (or a mean temperature), and uses the ideal gas law: $$h= -\frac{RT}{g} \cdot \log \frac{p}{p_{sfc}}$$ where h is height above ground, R is the specific gas constant for dry air, T is the mean temperature, g is the gravitational acceleration, p is the pressure of the cloud top in your case and $p_{sfc}$ is surface pressure (not reduced). The prefactor $-\frac{RT}{g}$ comes to about $-8$ km if $T \approx 273$ K; its magnitude is known as the scale height. $p_{sfc}$ is approx. 1013 hPa if you're at sea level. So LOGS would most probably mean the natural logarithm. I don't think there is a simple equation for the cloud top height from ground observations, because e.g. how high a cumulus grows depends on the temperature of the surrounding air. Cloud base height and cloud top height are two totally different things. The cloud base height formula does not depend on the upper air temperature. Whether a thermal (which evolves into the cloud later on) reaches its cloud base or not is not certain. As long as it's warmer than its surroundings, it rises. The same goes for the cloud: it rises as long as it is warmer than its surroundings. You may know the temperature of the cloud at each height, but how cold the air at, say, 15,000 ft is varies a lot.
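The barometric formula above is easy to evaluate directly. A minimal sketch, assuming the specific gas constant for dry air ($R_d \approx 287$ J kg$^{-1}$ K$^{-1}$) is the intended $R$:

```python
import math

def cloud_top_height(p_top_hPa, p_sfc_hPa=1013.0, T_mean=273.0):
    """Height (m) of a pressure level via the barometric formula,
    assuming a hydrostatic, isothermal atmosphere.
    R_d is the specific gas constant for dry air (an assumption
    about the units intended in the formula above)."""
    R_d = 287.0   # J/(kg K)
    g = 9.81      # m/s^2
    return -(R_d * T_mean / g) * math.log(p_top_hPa / p_sfc_hPa)
```

For example, a cloud top at 500 hPa comes out around 5.6 km for $T \approx 273$ K, consistent with the roughly 8 km scale height quoted above.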
4.4.1: Odd and even periodic functions You may have noticed by now that an odd function has no cosine terms in the Fourier series and an even function has no sine terms in the Fourier series. This observation is not a coincidence. Let us look at even and odd periodic functions in more detail. Recall that a function \(f(t)\) is odd if \(f(-t) = -f(t)\). A function \(f(t)\) is even if \(f(-t) = f(t)\). For example, \(\cos{(nt)}\) is even and \(\sin{(nt)}\) is odd. Similarly the function \(t^k\) is even if \(k\) is even and odd when \(k\) is odd. Exercise \(\PageIndex{1}\): Take two functions \(f(t)\) and \(g(t)\) and define their product \(h(t) =f(t)\,g(t)\). a) Suppose both are odd, is \(h(t)\) odd or even? b) Suppose one is even and one is odd, is \(h(t)\) odd or even? c) Suppose both are even, is \(h(t)\) odd or even? If \(f(t)\) and \(g(t)\) are both odd, then \(f(t) + g(t)\) is odd. Similarly for even functions. On the other hand, if \(f(t)\) is odd and \(g(t)\) even, then we cannot say anything about the sum \(f(t) + g(t)\). In fact, the Fourier series of any function is a sum of an odd (the sine terms) and an even (the cosine terms) function. In this section we consider odd and even periodic functions. We have previously defined the \(2L\)-periodic extension of a function defined on the interval \([-L,L]\). Sometimes we are only interested in the function on the range \([0,L]\) and it would be convenient to have an odd (resp. even) function. If the function is odd (resp. even), all the cosine (resp. sine) terms will disappear. What we will do is take the odd (resp. even) extension of the function to \([-L,L]\) and then extend periodically to a \(2L\)-periodic function. Take a function \(f(t)\) defined on \([0,L]\). On \((-L,L]\) define the functions \[ F_{\rm{odd}}(t) \overset{ \rm{def}}{=} \left\{ \begin{array}{ccc} f(t) & \rm{if} & 0 \leq t \leq L, \\ -f(-t) & \rm{if} & -L < t < 0, \end{array} \right.
\\ F_{\rm{even}}(t) \overset{ \rm{def}}{=} \left\{ \begin{array}{ccc} f(t) & \rm{if} & 0 \leq t \leq L, \\ f(-t) & \rm{if} & -L < t < 0. \end{array} \right.\] Extend \(F_{odd}(t)\) and \(F_{even}(t)\) to be \(2L\)-periodic. Then \(F_{odd}(t)\) is called the odd periodic extension of \(f(t)\), and \(F_{even}(t)\) is called the even periodic extension of \(f(t)\). Exercise \(\PageIndex{2}\): Check that \(F_{odd}(t)\) is odd and that \(F_{even}(t)\) is even. Example \(\PageIndex{1}\): Take the function \(f(t) = t(1-t)\) defined on \([0,1]\). Figure 4.11 shows the plots of the odd and even extensions of \(f(t)\). Figure 4.11: Odd and even 2-periodic extension of \( f(t) = t(1-t),~ 0 \leq t \leq 1.\) 4.4.2 Sine and cosine series Let \(f(t)\) be an odd \(2L\)-periodic function. We write the Fourier series for \(f(t)\). First, we compute the coefficients \(a_n\) (including \( n=0 \)) and get \[a_n = \dfrac{1}{L} \int _{-L}^L f(t) \cos \left(\frac{n\pi}{L}t \right) dt = 0. \] That is, there are no cosine terms in the Fourier series of an odd function. The integral is zero because \( f(t) \cos{(\frac{n\pi}{L}t)}\) is an odd function (product of an odd and an even function is odd) and the integral of an odd function over a symmetric interval is always zero. The integral of an even function over a symmetric interval \([-L,L]\) is twice the integral of the function over the interval \([0,L]\). The function \(f(t) \sin{(\frac{n\pi}{L}t)}\) is the product of two odd functions and hence is even. \[ b_n = \dfrac{1}{L} \int_{-L}^L f(t) \sin{\left( \dfrac{n\pi}{L}t \right)}\,dt = \dfrac{2}{L} \int_{0}^L f(t) \sin{\left( \dfrac{n\pi}{L}t \right)}\,dt.\] We now write the Fourier series of \(f(t)\) as \[ \sum_{n=1}^{\infty} b_n \sin{\left( \dfrac{n\pi}{L}t \right)}.\] Similarly, suppose \(f(t)\) is an even \(2L\)-periodic function.
For the same exact reasons as above, we find that \(b_n=0\) and \[a_n = \dfrac{2}{L} \int_0^{L} f(t) \cos{\left( \dfrac{n\pi}{L}t\right)} \, dt.\] The formula still works for \(n=0\), in which case it becomes \[a_0= \dfrac{2}{L}\int_0^L f(t)\, dt.\] The Fourier series is then \[ \dfrac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos{\left( \dfrac{n\pi}{L} t \right)}.\] An interesting consequence is that the coefficients of the Fourier series of an odd (or even) function can be computed by just integrating over the half interval \([0,L]\). Therefore, we can compute the Fourier series of the odd (or even) extension of a function by computing certain integrals over the interval where the original function is defined. Theorem 4.4.1. Let \(f(t)\) be a piecewise smooth function defined on \([0,L]\). Then the odd extension of \(f(t)\) has the Fourier series \[ F_{odd}(t) = \sum_{n=1}^{\infty} b_n \sin{\left(\dfrac{n\pi}{L}t\right)},\] where \[ b_n = \dfrac{2}{L} \int_0^{L} f(t) \sin{\left( \dfrac{n\pi}{L} t \right)} \,dt. \] The even extension of \(f(t)\) has the Fourier series \[ F_{even}(t) = \dfrac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos{\left(\dfrac{n\pi}{L}t\right)},\] where \[ a_n = \dfrac{2}{L} \int_0^{L} f(t) \cos{\left( \dfrac{n\pi}{L} t \right)} \,dt. \] The series \( \sum_{n=1}^{\infty} b_n \sin{\left(\dfrac{n\pi}{L}t\right)}\) is called the sine series of \(f(t)\) and the series \( \dfrac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos{\left(\dfrac{n\pi}{L}t\right)}\) is called the cosine series of \(f(t)\). We often do not actually care what happens outside of \([0,L]\). In this case, we pick whichever series fits our problem better.
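The coefficient formulas of Theorem 4.4.1 are easy to sanity-check numerically. As a sketch (not part of the text), here the sine-series coefficients of \(f(t)=t(1-t)\) from Example 1 are approximated with the trapezoid rule and compared with the closed form \(b_n = \frac{8}{n^3\pi^3}\) for odd \(n\) (and \(b_n = 0\) for even \(n\)), which follows from integrating by parts twice:

```python
import math

def b_n(n, steps=20000):
    """Sine-series coefficient b_n = 2 * integral_0^1 t(1-t) sin(n pi t) dt
    of the odd periodic extension of f(t) = t(1-t), approximated with the
    trapezoid rule as a quick numerical check."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * t * (1.0 - t) * math.sin(n * math.pi * t)
    return 2.0 * total * h

# closed form: 8/(n^3 pi^3) for odd n, 0 for even n
```

The rapid \(1/n^3\) decay of these coefficients reflects the fact that the odd extension of \(t(1-t)\) is continuous with a continuous derivative.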
It is not necessary to start with the full Fourier series to obtain the sine and cosine series. The sine series is really the eigenfunction expansion of \( f(t)\) using the eigenfunctions of the eigenvalue problem \( x'' + \lambda x=0, x(0) =0, x(L)=0\). The cosine series is the eigenfunction expansion of \( f(t)\) using the eigenfunctions of the eigenvalue problem \( x'' + \lambda x=0, x'(0) =0, x'(L)=0\). We could have, therefore, gotten the same formulas by defining the inner product \[ \left \langle f(t),g(t)\right \rangle =\int _0^L f(t)g(t)\,dt,\] and following the procedure of § 4.2. This point of view is useful, as we commonly use a specific series that arose because our underlying question led to a certain eigenvalue problem. If the eigenvalue problem is not one of the three we covered so far, you can still do an eigenfunction expansion, generalizing the results of this chapter. We will deal with such a generalization in chapter 5. Example \(\PageIndex{2}\): Find the Fourier series of the even periodic extension of the function \( f(t)= t^2\) for \( 0 \leq t \leq \pi\). We want to write \[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt),\] where \[ a_0= \frac{2}{\pi} \int_0^{\pi}t^2dt= \frac{2 \pi^2}{3},\] and \[ a_n= \frac{2}{\pi} \int_0^{\pi}t^2 \cos(nt) dt= \frac{2 }{\pi} \left[ t^2 \frac{1}{n} \sin(nt)\right]_0^{\pi} - \frac{4}{n \pi} \int_0^{\pi} t \sin(nt)dt \\ = \frac{4}{n^2 \pi} \left[ t \cos(nt)\right]_0^{\pi} - \frac{4}{n^2 \pi} \int_0^{\pi} \cos(nt)dt = \frac{4(-1)^n}{n^2}.\] Note that we have “detected” the continuity of the extension since the coefficients decay as \(\frac{1}{n^2}\). That is, the even extension of \(t^2\) has no jump discontinuities.
It does have corners, since the derivative, which is an odd function and a sine series, has jumps; it has a Fourier series whose coefficients decay only as \(\frac{1}{n}\). Explicitly, the first few terms of the series are \[ \dfrac{\pi^2}{3}-4\cos{(t)}+\cos{(2t)} - \dfrac{4}{9}\cos{(3t)} + \cdots \] Exercise \(\PageIndex{3}\): a) Compute the derivative of the even extension of \(f(t)\) above and verify it has jump discontinuities. Use the actual definition of \(f(t)\), not its cosine series! b) Why is it that the derivative of the even extension of \(f(t)\) is the odd extension of \(f'(t)\)? 4.4.3 Application Fourier series ties in to the boundary value problems we studied earlier. Let us see this connection in more detail. Suppose we have the boundary value problem for \( 0<t <L\). \[ x''(t) + \lambda x(t) = f(t),\] for the Dirichlet boundary conditions \(x(0) = 0, x(L)=0\). By using the Fredholm alternative (Theorem 4.1.2) we note that as long as \(\lambda\) is not an eigenvalue of the underlying homogeneous problem, there exists a unique solution. Note that the eigenfunctions of this eigenvalue problem are the functions \(\sin{\left(\frac{n\pi}{L}t\right)}\). Therefore, to find the solution, we first find the Fourier sine series for \(f(t)\). We write \(x\) also as a sine series, but with unknown coefficients. We substitute the series for \(x\) into the equation and solve for the unknown coefficients. If we have the Neumann boundary conditions \(x'(0) = 0\) and \(x'(L)=0\), we do the same procedure using the cosine series. Let us see how this method works on examples. Example \(\PageIndex{3}\): Take the boundary value problem for \(0<t<1\), \[x''(t) + 2x(t) = f(t),\] where \(f(t)=t\) on \(0<t<1\), and satisfying the Dirichlet boundary conditions \(x(0) = 0\) and \(x(1)=0\). 
We write \(f(t)\) as a sine series \[ f(t) = \sum_{n=1}^{\infty} c_n \sin{(n\pi t)},\] where \[ c_n = 2\int_0^1 t\, \sin{(n\pi t)}\,dt = \dfrac{2(-1)^{n+1}}{n\pi}.\] We write \(x(t)\) as \[x(t)=\sum_{n=1}^{\infty} b_n \sin{(n\pi t)}.\] We plug in to obtain \[ x''(t)+2x(t)= \sum_{n=1}^{\infty} -b_n n^2 \pi^2 \sin(n \pi t)+2 \sum_{n=1}^{\infty} b_n \sin(n \pi t) \\ = \sum_{n=1}^{\infty} b_n (2-n^2 \pi^2) \sin(n \pi t) \\ = f(t)= \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n \pi} \sin(n \pi t).\] Therefore, \[ b_n (2-n^2\pi^2) = \dfrac{2(-1)^{n+1}}{n\pi} .\] or \[ b_n = \dfrac{2(-1)^{n+1}}{n\pi(2-n^2\pi^2)} .\] We have thus obtained a Fourier series for the solution \[ x(t) = \sum_{n=1}^{\infty} \dfrac{2(-1)^{n+1}}{n\pi (2-n^2\pi^2)} \sin{(n\pi t)}.\] Example \(\PageIndex{4}\): Similarly we handle the Neumann conditions. Take the boundary value problem for \(0<t<1\), \[x''(t) + 2x(t) = f(t),\] where again \(f(t)=t\) on \(0<t<1\), but now satisfying the Neumann boundary conditions \(x'(0) = 0\) and \(x'(1)=0\). We write \(f(t)\) as a cosine series \[ f(t) = \dfrac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos{(n\pi t)},\] where \[ c_0=2\int_0^1 t \,dt =1\] and \[ c_n= 2 \int_0^1 t \cos(n \pi t)dt= \frac{2((-1)^n-1)}{\pi^2 n^2}= \left\{ \begin{array}{ccc} \frac{-4}{\pi^2 n^2} & \rm{if~} \it{n} \rm{~odd}, \\ 0 & \rm{if~} \it{n} \rm{~even}. 
\end{array} \right.\] We also write \(x(t)\) as a cosine series \[ x(t) = \dfrac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos{(n\pi t)}.\] We plug in to obtain \[ x''(t)+2x(t)= \sum_{n=1}^{\infty} \left[-a_n n^2 \pi^2 \cos(n \pi t) \right]+a_0+2 \sum_{n=1}^{\infty} \left[a_n \cos(n \pi t) \right] \\ =a_0 + \sum_{n=1}^{\infty} a_n (2-n^2 \pi^2) \cos(n \pi t) \\ = f(t)=\frac{1}{2}+ \sum_{\underset{n ~ odd}{n=1}}^{\infty} \frac{-4}{ \pi^2 n^2} \cos(n \pi t).\] Therefore, \(a_0 = \frac{1}{2}\) and \(a_n=0\) for \(n\) even (\(n \geq 2\)) and for \(n\) odd we have \[ a_n(2-n^2\pi^2) = \dfrac{-4}{\pi^2n^2},\] or \[ a_n=\dfrac{-4}{n^2\,\pi^2(2-n^2\pi^2)}.\] The Fourier series for the solution \(x(t)\) is \[ x(t)= \frac{1}{4}+ \sum_{\underset{n ~ odd}{n=1}}^{\infty} \frac{-4}{ n^2 \pi^2(2- n^2 \pi^2)} \cos(n \pi t). \]
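As a sanity check on Example 3 (a check added here, not part of the text), the truncated sine series can be compared against the closed-form solution \(x(t) = \frac{t}{2} - \frac{\sin(\sqrt{2}\,t)}{2\sin\sqrt{2}}\), obtained by solving the BVP directly (particular solution \(t/2\) plus a homogeneous \(\sin(\sqrt2\,t)\) term fitted to the boundary conditions):

```python
import math

def x_sine_series(t, terms=2000):
    """Truncated sine-series solution of x'' + 2x = t, x(0) = x(1) = 0,
    from Example 3: b_n = 2(-1)^{n+1} / (n pi (2 - n^2 pi^2))."""
    s = 0.0
    for n in range(1, terms + 1):
        b = 2.0 * (-1)**(n + 1) / (n * math.pi * (2.0 - n**2 * math.pi**2))
        s += b * math.sin(n * math.pi * t)
    return s
```

Since the coefficients decay like \(1/n^3\), a couple of thousand terms already match the closed form to many digits at interior points.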
Quote: Originally Posted by yaser Correct. The solution was also given in slide 11 of Lecture 12 (regularization). Yes, my point was: how do you solve this numerically? Given that people will already have a good least-squares code (doing an SVD on Z to avoid numerical ill-conditioning), there is no need to implement (poorly) a new regularized least-squares solver; you can just add a few data points at the end of your training data and feed it into your least-squares solver. I.e. \lambda \|w\|^2 = \sum_i (0-\sqrt{\lambda}\,w_i)^2, so if w is d-dimensional you append to your Z matrix the additional matrix sqrt(lambda)*eye(d) and append a d-vector of zeros to your y (eye(d) is the d-by-d identity matrix). [But this is much better explained in the notes I linked to.]
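The augmentation trick described above is exact: the appended rows contribute exactly the ridge penalty \(\lambda\|w\|^2\) to the least-squares objective, so a plain solver returns the regularized solution. A small NumPy sketch (illustrative, not the poster's code):

```python
import numpy as np

def ridge_by_augmentation(Z, y, lam):
    """Regularized least squares via the augmentation trick: append
    sqrt(lam) * I rows to Z and zeros to y, then call an ordinary
    least-squares solver."""
    d = Z.shape[1]
    Z_aug = np.vstack([Z, np.sqrt(lam) * np.eye(d)])
    y_aug = np.concatenate([y, np.zeros(d)])
    w, *_ = np.linalg.lstsq(Z_aug, y_aug, rcond=None)
    return w

def ridge_direct(Z, y, lam):
    """Closed-form ridge solution (Z'Z + lam I)^{-1} Z'y, for comparison."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)
```

Both routes minimize the same objective, so the two solutions agree to numerical precision; the augmented route inherits whatever conditioning benefits the underlying least-squares solver has.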
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
As the two nice answers of @marmot and @Mico did not at all address the main query (to code only once and get both the typesetting and the value) (see the answer by @JosephWright), I feel at liberty to add one more answer not addressing the request, but doing the computation with xfp (but see the update at the bottom for a \printandeval). \documentclass{article} \usepackage[fleqn]{amsmath} \usepackage{xfp} \usepackage{siunitx} \begin{document} \begin{equation*} N = \frac{19.32\times1\times10^6\times6.023\times10^{23}}{197} =\num[scientific-notation=true, round-mode=figures, round-precision=4]{\fpeval{19.32*1*10^6*6.023*10^23/197}} \end{equation*} \end{document} I did not know how to instruct \fpeval to output in scientific notation (with a given number of places), but the options of \num came to the rescue. I will also mention xintexpr (as I authored it), despite the fact that I still have to add math functions to it (only sqrt is currently available). Now, its syntax \xintthefloatexpr...\relax causes issues with the way siunitx's \num parses its argument. One does not run into such a problem with \fpeval, because \fpeval uses braces. So let's just add one user-interface macro that uses braces too, and this makes \num happy. \documentclass{article} \usepackage[fleqn]{amsmath} \usepackage{xintexpr} % make \num of siunitx happy, let xintexpr do the rounding to 4 digits % of float precision (after having computed with 16 digits of % precision, per default) % #1 = final precision for printing, #2 = expression to evaluate \newcommand\floatround[2]{\xintthefloatexpr [#1]#2\relax} \usepackage{siunitx} \begin{document} \begin{equation*} N = \frac{19.32\times1\times10^6\times6.023\times10^{23}}{197} =\num{\floatround{4}{19.32*1*10^6*6.023*10^23/197}} % one can also use ** in place of ^ for powers \end{equation*} \end{document} Finally, the package numprint has much to recommend it for printing numbers according to the language of the document.
And its \numprint macro (or \np with package option np) will accept the \xintthefloatexpr directly, with no hiding within braces, contrary to \num of siunitx. \documentclass[english]{article} \usepackage{babel} \usepackage[fleqn]{amsmath} \usepackage{xintexpr} \usepackage[np, autolanguage]{numprint} \begin{document} \begin{equation*} N = \frac{19.32\times1\times10^6\times6.023\times10^{23}}{197} =\np{\xintthefloatexpr [4] 19.32*1*10^6*6.023*10^23/197\relax} \end{equation*} \end{document} In the simple example considered here one can do it like this: \documentclass[english]{article} \usepackage{babel} \usepackage[fleqn]{amsmath} \usepackage{xintexpr} \usepackage[np, autolanguage]{numprint} \newcommand\printandeval[1]{#1=\begingroup \def\frac##1##2{(##1)/(##2)}% \def\times{*}% % etc... \edef\x{{\xintthefloatexpr[4]#1\relax}}% \expandafter\endgroup\expandafter\np\x } % (in the above we need to re-enact the standard meaning of things % such as \times, before \np does the typesetting of the value; % this is the reason for the `\expandafter` chain.) \begin{document} \begin{equation*} N = \printandeval{\frac{19.32\times1\times10^6\times6.023\times10^{23}}{197}} \end{equation*} \end{document} I do not try here to provide a completely general one, but in practice adding a few more redefinitions will cover many test cases, possibly enough for real-life usage. Notice that braces in the input for typesetting did not have to be replaced by parentheses before evaluation (cf. 10^{23}). Here is the same with xfp + siunitx: \documentclass{article} \usepackage[fleqn]{amsmath} \usepackage{xfp} \usepackage{siunitx} \newcommand\printandeval[1]{#1=\begingroup \def\frac##1##2{(##1)/(##2)}% \def\times{*}% % etc... \edef\x{[scientific-notation=true, round-mode=figures, round-precision=4]{\fpeval{#1}}}% \expandafter\endgroup\expandafter\num\x } \begin{document} \begin{equation*} N = \printandeval{\frac{19.32\times1\times10^6\times6.023\times10^{23}}{197}} \end{equation*} \end{document} Same output.
The code above can be slightly more efficient with \edef\x{\endgroup\noexpand\np{\xintthefloatexpr[4]#1\relax}}% \x in the numprint+xintexpr case and \edef\x{\endgroup\num[scientific-notation=true, round-mode=figures, round-precision=4]{\fpeval{#1}}}% \x in the siunitx+xfp case. Thanks to @egreg for chasing \expandafter's, for pointing out that \num is \protected, and for giving me the opportunity in this edit to discover new ways to mark up multi-line code... (I am too active on GitHub.)
Radiation will kill you on the surface Several other posts hit the same point, but to summarize briefly, Europa's surface receives 540 rem per day, which is probably a fatal dose. However, the 1080 rem you get in two days is definitely a fatal dose. You need shielding; water (and ice) are great shielding, there you go. Water pressure is too high in the ocean Let me make a few super-general assumptions to simplify the math. First let's assume an atmospheric pressure of zero, let's assume that the ice crust is pure water ice (density: 0.9167 g/cm$^3$), and let's assume the equation for hydrostatic pressure ($P=\rho gh$) can be naively applied under an ice sheet, an assumption that I argue is good enough. Other numbers we'll use are the surface gravity on Europa (1.315 m/s$^2$) and a 20 km depth of surface ice on Europa (estimates range between 10-30 km). $$P=\rho gh,$$ $$P = \left(\frac{0.9167 g}{cm^3}\right)\left( \frac{1 kg}{1000 g}\right)\left(\frac{1000000 cm^3}{1 m^3}\right)\left(\frac{1.315m}{s^2}\right)\left(20000m\right)$$ $$P=24.1 MPa = 238 atm$$ That is equivalent to about 2500 m below the ocean surface on Earth. Building habitats with the strength of a submarine wouldn't be hard, but submarines are rated to about 500 m tops. Building a habitat to handle that pressure would be hard. Inside the ice is just right On the other hand, you can find a happy medium in the middle. The 1 MeV gamma tenth-thickness of water is about 0.6 m. The tenth-thickness is the distance of material needed to attenuate radiation by a factor of 10. I couldn't find the tenth-thickness of ice, so I will assume it is the same (possibly a horrible assumption). Therefore, under 85 m of ice, the radiation from Jupiter is about $$540~rem \cdot 10^{-\frac{85\,m}{0.6\,m}} \approx 0. $$ At this depth the pressure is $$P = \left(\frac{0.9167g}{cm^3}\right)\left(1.315 \frac{m}{s^2}\right)\left(85m\right) = 102.5~kPa = 1.01~atm.$$ No radiation and atmospheric pressure. Sounds about right to me!
Of course, this is not to say that there is atmospheric pressure in the air of a habitat just because we are 85 m below the surface of the ice; since ice is solid, it doesn't work like that. But structures built at this depth won't have any pressure-related problems that they don't already have on Earth...unless the ice is moving. What building materials are available at 85 m below the surface of Europa? Well....ice. Everything else you are going to have to bring yourself. The rocky surface is below another 20 km of ice and maybe 100 km of ocean. Pretty technically challenging to get something from there. However, Jupiter is just surrounded by moonlets and rings and what have you. If you want metal, just mine it out of loose material in the Jovian system and bring it down to the surface.
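The two back-of-the-envelope calculations above (overburden pressure and attenuated dose) are simple enough to script; a sketch using the same assumed numbers:

```python
def overburden_pressure(depth_m, rho=916.7, g=1.315):
    """Hydrostatic pressure (Pa) under `depth_m` of ice on Europa,
    using the same rho*g*h assumption as the answer above
    (rho in kg/m^3, g in m/s^2)."""
    return rho * g * depth_m

def surface_dose_rem_per_day(depth_m, tenth_thickness=0.6):
    """Daily dose (rem) under ice: 540 rem/day attenuated by a factor
    of 10 per tenth-thickness (water value assumed for ice, as above)."""
    return 540.0 * 10.0 ** (-depth_m / tenth_thickness)
```

At 85 m the pressure comes out near 1 atm and the dose is utterly negligible, matching the "inside the ice is just right" conclusion.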
Using elementary graph theory identities one can show that the number of loops $L$ in a connected diagram is related to the number of external lines $E$ and the number $V_i$ of vertices of type $i$, each of which has $n_i$ lines attached to it, by $$\sum_i \left(\frac{n_i}{2}-1\right) V_i -\tfrac{1}{2}E +1= L $$ So you can see that for a fixed process (fixed $E$), knowing the number of vertices of each type is equivalent to knowing the number of loops (which can correspond to a multitude of diagrams at the same "order"). In the standard model we have two classes of vertices: those with three or four lines. So, as you can see, specifying the total number of vertices (equivalently, the order with respect to the sum of powers of all coupling constants) isn't going to uniquely fix the number of loops; however, by specifying the number of vertices of each class you get a one-to-one correspondence between loop order and coupling-constant power order, in which case both are equivalent to the quantum mechanical expansion in powers of $\hbar$. Derivation: To derive this formula you can treat each external line as a type of vertex with only one line attached to it. That is, $E\equiv V_1$ and correspondingly $n_1=1$. Then we can rewrite $$\sum_i \left(\frac{n_i}{2}-1\right) V_i +1= L $$ This formula can be understood by recursion. First we check that it is true for zero vertices, but this is obvious since for zero vertices we do have $L=1$: just draw a circle! Now, to prove it by recursion, we assume the formula is correct and prove that if we add one vertex of type $i$, we must introduce $(n_i/2-1)$ new loops. This can easily be seen by taking your diagram and putting a vertex anywhere on an internal line (notice that we no longer distinguish between internal and external lines because $E$ is just another type of vertex now).
When you insert this vertex, two of its legs are already eaten automatically, so we need to connect the remaining $n_i-2$ legs. Note that we must connect them with each other, because all other vertices are already saturated, and leaving a leg hanging is equivalent to introducing an external vertex, which we are not doing by assumption. Now this is only possible if $n_i$ is even, in which case we get $(n_i-2)/2$ new loops, which proves the recursion for even vertices. If the vertices are odd, we must introduce them in pairs, and the same discussion ensues.
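The loop-counting identity is easy to check on small examples; a sketch (the test diagrams in the comments are my own illustrations, not from the answer):

```python
def loop_count(E, vertices):
    """Number of loops from the identity above:
    L = sum_i (n_i/2 - 1) * V_i - E/2 + 1,
    where `vertices` maps n_i (lines per vertex type) -> V_i (count)."""
    return sum((n / 2 - 1) * V for n, V in vertices.items()) - E / 2 + 1

# Examples:
#   tree-level 4-point with one phi^4 vertex: E=4, {4: 1} -> L = 0
#   one-loop phi^4 self-energy:               E=2, {4: 1} -> L = 1
#   one-loop vertex correction (3 cubic
#   vertices, 3 external lines):              E=3, {3: 3} -> L = 1
#   the "circle" with no vertices:            E=0, {}     -> L = 1
```

Note how specifying only the total vertex count would be ambiguous here: one quartic vertex and, say, two cubic vertices with $E=2$ give different loop numbers, as the answer explains.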
I am currently working on determining the nth nearest neighbors of a large survey of galaxies. I am not an astronomer by trade, so I am unsure how to interpret some of my results. I have computed the 9th nearest neighbor distances for the sample of galaxies. It turns out the mode of this distribution is by far between 2 Mpc and 3 Mpc. Since I am unsure of the scale of these implications, I would prefer if a well-versed astrophysicist could tell me whether a distance to the 9th nearest neighbor galaxy of 2-3 Mpc would be considered densely clustered. These galaxies are at intermediate redshift. If galaxies were randomly distributed (a spatial Poisson process), then the probability of having $N$ galaxies inside a radius $r$ sphere is $\Pr[N]=\lambda^N e^{-\lambda} / N!$ where $\lambda = (4\pi /3)\rho r^3$. So the cumulative distribution function of the distance to the $N$th neighbour is $$\Pr[R_N<r]=1-e^{-\alpha r^3}\sum_{n=0}^{N-1} \frac{\alpha^n r^{3n}}{n!}$$ ($\alpha=4\pi\rho/3$ for brevity). Taking the derivative to get the PDF, the sum telescopes and we are left with $$f(r) = \frac{3\alpha^N r^{3N-1}}{(N-1)!} e^{-\alpha r^3}.$$ Setting the derivative of this to zero puts the mode at $\alpha r^3 = N - \tfrac{1}{3}$ (finding the median by setting the CDF to 1/2 still needs a numerical solution). So the mode for the 9th neighbour is $13^{1/3} \approx 2.35$ times the mode of the first neighbour. The problem here is of course that actual galaxies are highly clumped. That will cause a (potentially large) reduction in the median distance. In fact, most galaxies have satellite galaxies, so unless one has a definition in the survey excluding them, the answer will be that the 9th nearest neighbour is very close (in the case of the Milky Way the distance would be about 30 kpc). So any answer will depend on the criteria for inclusion in this particular catalogue.
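To get a feeling for the Poisson baseline without doing the algebra, one can also estimate the Nth-neighbour distance by Monte Carlo; a rough sketch (the box size, point count, and use of the box centre as the reference point are my own choices, not from the answer):

```python
import math
import random

def nth_neighbour_distance(N, n_points=2000, box=10.0, trials=120):
    """Monte Carlo estimate (median over trials) of the distance from
    the centre of a cube to its Nth nearest point, with points placed
    uniformly at random -- a stand-in for the Poisson process above."""
    results = []
    for _ in range(trials):
        pts = [(random.uniform(-box / 2, box / 2),
                random.uniform(-box / 2, box / 2),
                random.uniform(-box / 2, box / 2)) for _ in range(n_points)]
        dists = sorted(math.dist(p, (0.0, 0.0, 0.0)) for p in pts)
        results.append(dists[N - 1])
    results.sort()
    return results[len(results) // 2]
```

With these parameters the density is 2 points per unit volume, so typical neighbour distances are around unity, far from the box walls; the ratio of 9th- to 1st-neighbour distances lands near the analytic value of roughly 2.3.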
1. Review: The function knapsack lacks a docstring that would explain what arguments the function takes (what kind of things are in items? must items be a sequence, or can it be an iterable?) and what it returns. This kind of function is ideal for doctests. The comments say things like "Create an (N+1) by (W+1) 2-d list". But what is N and what is W? ... A few things: Naming: It's neither Fibonaci, nor febonani, nor fibonanci; it's fibonacci. Please get your names to reflect what you're actually talking about and not some disfigured mutation of it :( memoized OTOH is a relatively nice name; I'd probably prefer memoizedFibonacciNumbers, but that's a thing of preference. Calculating: quoting wikipedia: ... No, not \$O(n)\$. Not even close. I'll get back to this later though. First, the code: It's a little difficult to tell what your loop logic is. curr is getting incremented each time, but i isn't... whereas typically we'd use i as a loop index. I'd propose using i as the loop index, and then keeping a humble number count named count (or num_humbles or ... There are some other Java gurus around here who might know some Java-specific tips and tricks, but... I want to look at something else you're doing in the algorithm. First of all, we know from the problem statement that any positive integer will have at least 1 possible answer, that is, itself. What we can also know from basic math knowledge is that the ... With regard to your time complexity: With your calculation of the nth Fibonacci number, you could do it in \$O(1)\$ time by using the relation: $$F_n = \left\lfloor \frac{\varphi^n}{\sqrt{5}} + \frac{1}{2} \right\rfloor$$ So the 484th Fibonacci number would be equal to: $$\left\lfloor \frac{\varphi^{484}}{\sqrt{5}} + \frac{1}{2} \right\rfloor \approx 6....$$ There are some major issues with your approach, the major one being that it is going to terribly break down for really large numbers. But if you want to go with it, here are a couple of improvements: Don't use a dictionary.
If your indices are consecutive integers, then a list is the right data structure. You already know that most of your entries are going ...

Assuming you're using the current C++ standard, since you don't specify in your question: Prefer stoi to atoi; atoi has issues. Prefer standard int types: std::uint128_t instead of unsigned long long int. Create a type alias for your map type since you use it a lot. Don't include unnecessary headers in the fibbonacci.h file. Include them in the ...

Is this a generator or a calculator? Generators are objects that behave like iterators, yielding the next value on every call. std::map<int, unsigned long long int> __fib_result_map = initializeMap(); #ifndef __fibonacci #define __fibonacci Know the rules regarding underscore usage. From the C++ Standard: 17.6.4.3.2 Global names [global.names] ...

Concept

You have called this a "Longest Increasing Subsequence" problem, but it's not, is it? It's counting all possible sequences of a specific length, not just the longest. As a result, I think you may have confused yourself in a few ways....

Naming

Your variable and method names are ... horrible. Variables should always be useful names, and, while N ...

1. Review

The function coinChange carries out three tasks: (i) read the input; (ii) solve the problem; (iii) print the solution. This makes it hard to test, because it's hard to run the code on different problem instances (you have to feed it the problem in text form on standard input), and because it's hard to automatically check the output (you have to ...

If someone were to say: isPowerOf2(9223372036854775808) what would your function do? For a start, it will end up with an array of a very, very large size.... that can't be right....... next, computers are binary machines, and binary numbers are powers of 2.... there has to be a way for the base infrastructure to make that easier? Well, there is. When ...

Recursive Solution

Code-wise, I see no issues.
The one thing I'd point out is that it might be clearer to name your functions A and B instead of Nth and Aux, since the problem has to do with finding A and it isn't readily apparent what Aux means. It's a little unclear, but not a dealbreaker.

Bottom-up method

I don't really understand why your memoized ...

#include <cstdio> If your goal is to write C++, you're really off to a bad start using the C standard I/O library. const unsigned short n = 20; The modern way to do compile-time constants is with constexpr. But I don't understand why you chose to use unsigned short for all your indexing calculations. Any operations you try to do on it will ...

You should have a look at Python's official style guide, PEP8. One of its recommendations is to use PascalCase for class names and lower_case for function and variable names. Since you read all words into memory anyway, you can just write: with open(in_file_name) as fp: self.words = fp.read().split() This is because split splits by default on all ...

@Vogel612 has already mentioned all major defects and areas of improvement in your code. I want to talk about one more thing: your package naming is horrible. You are using com.java.fib; please do not ever do that again, because: although Java classes are prefixed with java.*, it still creates confusion as people might think this is a library class. In ...

Currently, your code is next to unreadable. The variable names are too cryptic. Names like den and S don't tell anything about the context of the variable. Please, use descriptive names. You don't give space for your variables to breathe. You crammed everything up and that makes it harder to read. An example: for(i=0;i<n;i++)temp[i] = S / den[i] ; ...

Bug

Your code is currently too simplistic. All it does is make change from the highest denomination possible. It fails on the following input: Enter the total change you want: 6 Enter the no.
of different denominations of coins available: 3 Enter the different denominations in ascending order: 1 3 4 min no of coins = 3 Your program thought the ...

The loop # Move every element in the previous generation up one index into the new generation for k in range(0, len(previous)): answer[k + 1] = previous[k] makes the next allGenerations list one element longer than the previous one. After n iterations you have lists of lengths 1, 2, ... n, making them \$O(n^2)\$ total. Just populating ...

Why are you using unsigned short? I presume the answer is somewhere along the lines of worrying about memory usage. There isn't a problem with that, but given that p will consume almost all of the memory used by this routine, it would be more productive checking whether you need the whole grid or can get away with just a few rows at once. Consider using ...

I do have a lot of comments, but I find your solution pretty good. The algorithm makes sense, and your comments make it easy to follow. The function is closer to a computation than a search, so I wouldn't call it find…. The N parameter is redundant, since it can be deduced from the dimensions of prices. On the other hand, color_names is hard-coded (and ...

using namespace std;

Count the number of characters you use with this (twenty) and compare to the number of characters you save by having it (five). Wouldn't it be better to just say std::map instead? Even in fib_cpp.cpp, you only save fifteen characters for a net increase of five.

std::vector vs. std::map

map<int, ...

Not a review, but an extended comment: I don't know how this solution can be optimized. The expected solution is radically different, and is based on a couple of theorems from number theory. You should check whether the number is: a perfect square (the answer is obviously 1), or a sum of 2 squares (the answer is obviously 2), or not a Legendre's number, ...

Integer Division

In Python, 10 / 2 is equal to 5.0, not 5.
Python has the integer division operator (//) which produces an integral value after division, instead of a floating point value. To prevent storing both int and float keys in the count dictionary, you should use: count[n] = collatz_count(n // 2) + 1

Cache

Your count cache works nicely. ...

Missing methods

The posted code is missing the schedule_left() and CreateOutputFile() methods.

Style

You should always be consistent in the style you use. Switching from not using braces {} to using them for single for loops should be avoided. The same is true for single if statements. Dead code should be deleted. If you decide not to use braces ...

What quickly jumps out: typedef vector<dimension*> VecDim; // ^ Pointer typedef vector<dimension*>::iterator VecDimIter; // ^ Pointer It's very rare to see "raw" pointers in good C++ code. This is because now you have to do memory management on the code. Pointers are useful for implementing the ...

Everything is in main. It's called main. It's not called everything. You have written no functions here, but you should. Writing functions is a way of giving names to chunks of code that are independent, testable, and give a desired result for a given input (and do that in the same way every time). Plus, it goes a long way to making our code far more ...

This is Code Review, so let us review your code before proceeding to questions and optimisations.

Code and style review

Choose better names: according to the Python style guide, PEP8, you should use snake_case for variables and functions. In addition, what do dp and acc stand for? And in general one should avoid single-letter variables except in tight loops ...

For each i = 0...number, your method stores a std::list containing the complete sequence of intermediate numbers from 1 to i. Each list is created by copying the list from the optimal predecessor and appending i. Even for a single i, path[i] may be assigned a new list up to three times. That is a lot of copying which consumes a lot of time. It can be ...
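The point about `//` in the integer-division excerpt can be illustrated with a minimal memoized Collatz counter. This is a sketch, not the original poster's code; the counting convention (1 counts itself) and the cache-as-default-argument trick are my own choices:

```python
def collatz_count(n, cache={1: 1}):
    """Number of terms in the Collatz sequence starting at n, where 1 counts
    itself. The mutable default argument is a shared memo cache across calls."""
    if n not in cache:
        if n % 2 == 0:
            # Integer division keeps the cache keys as ints (n / 2 would be a float).
            cache[n] = collatz_count(n // 2) + 1
        else:
            cache[n] = collatz_count(3 * n + 1) + 1
    return cache[n]
```

With this convention the well-known chain from 27, which takes 111 steps to reach 1, has 112 terms.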
If I were Alice, the optimal strategy... well, there are two: binary search (in Python terms, the bisection method), or the Fibonacci search method, which is mathematically superior and faster than the binary search method. Anyway, that is what Alice would have to do to get to the number in the least number of guesses. Bob, on the other ...

The problems you've encountered usually signal that the approach is not the best. Consider the very first step in your algorithm: you are certain that if the number is even, the best action is \$n\rightarrow \frac{n}{2}\$ (why?). Try to apply the same logic one step further. Let \$n\$ be odd. Either \$\frac{n+1}{2}\$ or \$\frac{n-1}{2}\$ is also odd (why?)...
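One of the excerpts above quotes the closed form $F_n = \left\lfloor \varphi^n/\sqrt{5} + \tfrac{1}{2} \right\rfloor$. A quick sketch checking it against a plain iterative loop; note that with double-precision floats the rounding is only trustworthy up to roughly $n \approx 70$, so the check stops at 60:

```python
import math

def fib_iter(n):
    """Iterative Fibonacci: O(n) additions, exact integer arithmetic."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    """Binet-style closed form F_n = floor(phi^n / sqrt(5) + 1/2).
    Exact only while floating-point error stays below 1/2."""
    phi = (1 + math.sqrt(5)) / 2
    return math.floor(phi**n / math.sqrt(5) + 0.5)
```

For arbitrarily large n one would switch to exact integer methods (e.g. fast doubling) instead of the floating-point closed form.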
One of the main features of the Hull-White model is that it matches the market at $t = 0$. This means that at $t = 0$, the zero-coupon bond prices depend neither on the volatility nor on the mean reversion level; they depend only on the zero curve observed in the market. Of course, this is not to be confused with future zero-coupon bond prices $P(t,T)$ seen from $t = 0$, which are random variables and as a result have a distribution depending on the volatility and the mean reversion as well.

To show that the ZC bond prices at $t = 0$ match the market and don't depend on the model parameters, I will use a different (more convenient) formulation (see e.g. Andersen and Piterbarg, section 10.1.2.2), which uses $x(t) = r(t) - f(0,t)$ instead of the short rate $r(t)$. This leads to the following SDE (keeping your notations): $$\begin{aligned}x(0) &= 0 \\ dx(t) &= \left( y(t) - a x(t) \right) dt + \sigma dW(t)\end{aligned}$$ with $y(t) = \frac{\sigma^2}{2a} \left(1-e^{-2at} \right)$. The ZC bond price is given by: $$P(t,T) = \frac{P^M(0,T)}{P^M(0,t)}\exp \left(-\frac{1}{2}B(t,T)^2y(t) - B(t,T)x(t) \right)$$ In the formula above, I used the superscript $^M$ to denote that the $P^M$ prices come from the zero curve observed in the market. Taking $t = 0$, since $x(0) = y(0) = 0$ and $P^M(0, 0) = 1$, we get: $$P(0,T) = P^M(0,T)$$
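The reduction at $t = 0$ is easy to check numerically. The sketch below implements the bond-price formula above, using the standard Hull-White $B(t,T) = \left(1 - e^{-a(T-t)}\right)/a$ (not written out in the question, so an assumption on my part) and a hypothetical flat 2% market curve; at $t = 0$ the price coincides with the market discount factor for any $a$ and $\sigma$:

```python
import math

def zc_bond_price(t, T, x_t, a, sigma, market_df):
    """P(t,T) in the x(t) = r(t) - f(0,t) formulation of Hull-White.
    market_df(T) plays the role of the market curve P^M(0,T)."""
    B = (1 - math.exp(-a * (T - t))) / a            # standard Hull-White B(t,T)
    y = sigma**2 / (2 * a) * (1 - math.exp(-2 * a * t))
    return market_df(T) / market_df(t) * math.exp(-0.5 * B**2 * y - B * x_t)

def market_df(T):
    """Hypothetical flat 2% zero curve, for illustration only."""
    return math.exp(-0.02 * T)
```

For instance `zc_bond_price(0.0, 5.0, 0.0, a, sigma, market_df)` equals `market_df(5.0)` for any parameter choice, while prices at $t > 0$ do depend on $a$, $\sigma$ and the realized $x(t)$.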
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs.

Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class.

I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric.

It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly.

And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building).

It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad.

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus.

One of my professors said it to describe a bunch of REUs; basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of.

@TedShifrin I think universities have been ostensibly a game of credentialism for a long time; they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students.

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...

"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$": this took me way longer than it should have.

Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than $2$ or $1/2$.

Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$.

Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$, $y = bc+ad$ and $\gamma = e+f\sqrt{\delta}$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$.

I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D

Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of.

One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally from any set to its power set). theorem ...
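The multiplication rule quoted above is easy to machine-check on sample rationals. This is only a finite sanity check of associativity, not a proof; the coefficient-pair representation is mine:

```python
from fractions import Fraction
from itertools import product

def mult(alpha, beta, delta):
    """(a + b*sqrt(delta)) times (c + d*sqrt(delta)), as coefficient pairs (a, b):
    (a*c + b*d*delta, b*c + a*d)."""
    a, b = alpha
    c, d = beta
    return (a * c + b * d * delta, b * c + a * d)

delta = Fraction(2)
samples = [(Fraction(1), Fraction(2)),
           (Fraction(-3, 2), Fraction(1, 3)),
           (Fraction(0), Fraction(5))]
# Exhaustively check associativity over all sample triples, with exact arithmetic.
for x, y, z in product(samples, repeat=3):
    assert mult(mult(x, y, delta), z, delta) == mult(x, mult(y, z, delta), delta)
```

Using `Fraction` keeps everything in $\Bbb{Q}$, so the check exercises exactly the ring properties mentioned in the chat.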
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious. (But seriously, the best tactic is overpowered...)

Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible; e.g. the surreals are the largest field possible.

It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field?

Here's a question for you: we know that no set of axioms will ever decide all statements, from Gödel's incompleteness theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement.

Well, take ZFC as an example. CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system.

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed.

Put it another way: an equivalent formulation of that (possibly open) problem is: does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed. That is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book.

The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...

Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm...

By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will proceed as follows:

The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.

In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$

Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework.

There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'...

and neither Rolle's theorem nor the mean value theorem needs the axiom of choice.

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment.

typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome

nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
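The partial sums in the message above can be computed exactly. Below is a sketch taking $b = 10$ (my choice, which gives Liouville's constant) with exact rational arithmetic; it also checks that each truncation $p/q$ with $q = b^{M!}$ already satisfies the Liouville-style inequality $0 < x - p/q < 1/q^M$, using a far-out partial sum as a stand-in for the limit:

```python
import math
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum of sum_{k=1}^{M} 1 / b**(k!)."""
    return sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))

x = liouville_partial(10, 5)          # proxy for the full sum L
for M in (2, 3, 4):
    p_over_q = liouville_partial(10, M)
    q = 10 ** math.factorial(M)
    # The tail of the series is far smaller than 1/q**M.
    assert 0 < x - p_over_q < Fraction(1, q ** M)
```

Each partial sum is a rational number obtained in finitely many steps, in line with the finitist framing; only the limit appeals to anything more.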
Exponent Not Equal to Zero

Theorem

Let $x$ and $y$ be ordinals, with $x \ne 0$. Then: $x^y \ne 0$.

Proof

The proof shall proceed by transfinite induction on $y$.

Basis for the Induction

$x^0 = 1$ by the definition of ordinal exponentiation. Therefore, $x^0 \ne 0$. This proves the basis for the induction.

Induction Step

The inductive hypothesis supposes that $x^y \ne 0$. By the definition of ordinal exponentiation, $x^{y^+} = x^y \times x$. Since $x^y \ne 0$ by the inductive hypothesis, and $x \ne 0$ by hypothesis, it follows that $x^y \times x \ne 0$, because ordinals have no zero divisors. This proves the induction step.

Limit Case

Let $y$ be a limit ordinal. The inductive hypothesis says that: $\forall z \in y: x^z \ne 0$. For each $z \in y$, since $x^z \ne 0$, ordinal trichotomy gives $0 \in x^z$. Hence $0 \in \bigcup_{z \in y} x^z$, using the fact that limit ordinals are nonempty. By the definition of ordinal exponentiation at limit ordinals, $\bigcup_{z \in y} x^z = x^y$, so $0 \in x^y$, and therefore $x^y \ne 0$ by the definition of the empty set. This proves the limit case.

$\blacksquare$
Set Less than Cardinal Product

Theorem

Let $S$ and $T$ be sets, with $T$ nonempty. Suppose that $S \times T \sim \left|{ S \times T }\right|$. Then: $\left|{ S }\right| \le \left|{ S \times T }\right|$.

Proof

Let $y \in T$. Define the mapping $f : S \to S \times T$ as follows: $f\left({x}\right) = \left({ x,y }\right)$. If $f\left({x_1}\right) = f\left({x_2}\right)$, then $\left({ x_1 , y }\right) = \left({ x_2 , y }\right)$ by the definition of $f$. It follows that $x_1 = x_2$ by Equality of Ordered Pairs. Thus, $f : S \to S \times T$ is an injection. By Injection implies Cardinal Inequality, it follows that $\left|{S}\right| \le \left|{S \times T}\right|$.

$\blacksquare$
I am interested in studying a problem of the form $\min F(\Omega)$ where $\Omega$ varies in the class of convex, open sets in the plane. An idea is to deform $\Omega$ at each step using a steepest descent algorithm, and if the deformed shape $\Omega_d$ is not convex, project it in some way onto the space of convex sets. I guess there are different ways of doing this.

I've tried replacing $\Omega_d$ by its convex hull. This is not satisfactory, since at some moment we might get into a cyclic situation: for example, if $\Omega$ has a segment in its boundary, the algorithm pulls the segment inwards, but convexification brings it back where it was. To avoid this, we could consider a shape $\Omega'$ which is convex and closest to $\Omega$ in the sense of least squares. For example, if $\rho_0$ is a radial parametrization of $\Omega$, and $(x_i)$ is a discretization of $[0,2\pi)$, then we could solve $$ \min_{\rho} \sum_{i = 1}^n (\rho(x_i)-\rho_0(x_i))^2$$ where $\rho$ is the radial function of a convex shape.

Is there any simple way to impose the convexity constraint on $\rho$ in the discrete setting? In the end, the above optimization problem is equivalent to fitting the closest convex polygon to a set of points in the plane, in the sense of least squares. Is there any known algorithm to find such a best-fit convex polygon? If I write the analytic condition (positive curvature in 2D), it will need the derivatives $\rho',\rho''$, which is not pleasant: if I parametrize the radial function by Fourier coefficients, it comes down to solving a quadratic optimization problem under quadratic constraints.
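On the discrete convexity constraint: with vertices $P_i = \rho(x_i)\,(\cos x_i, \sin x_i)$ listed counterclockwise, the polygon is convex exactly when consecutive edges never make a clockwise turn, i.e. all cross products of successive edge vectors are nonnegative. A minimal checker, as a sketch (names are mine; this is the feasibility test, not the least-squares solver):

```python
def is_convex_polygon(pts):
    """True if the closed polygon with vertices pts, listed in counterclockwise
    order, is convex: the cross product of every pair of consecutive edge
    vectors must be >= 0."""
    n = len(pts)
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        # z-component of (B - A) x (C - B); negative means a clockwise turn.
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross < 0:
            return False
    return True
```

Note that each cross product is quadratic in the values $\rho(x_i)$ (the unit vectors are fixed), which matches the observation that the radial parametrization leads to quadratic constraints.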
Definition A square matrix \(M\) is \(\textit{invertible}\) (or \(\textit{nonsingular}\)) if there exists a matrix \(M^{-1}\) such that \[M^{-1}M=I=MM^{-1}.\] If \(M\) has no inverse, we say \(M\) is \(\textit{singular}\) or \(\textit{non-invertible}\). Remark Let \(M\) and \(N\) be the matrices: \[ M=\begin{pmatrix} a & b \\ c & d \\ \end{pmatrix},\qquad N=\begin{pmatrix} d & -b \\ -c & a \\ \end{pmatrix} \] Multiplying these matrices gives: \[ MN=\begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \\ \end{pmatrix}=(ad-bc)I\, . \] Then \(M^{-1}=\frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \\ \end{pmatrix}\), so long as \(ad-bc\neq 0\). 7.5.1 Three Properties of the Inverse 1. If \(A\) is a square matrix and \(B\) is the inverse of \(A\), then \(A\) is the inverse of \(B\), since \(AB=I=BA\). So we have the identity: \[ (A^{-1})^{-1}=A \] 2. Notice that \(B^{-1}A^{-1}AB=B^{-1}IB=I=ABB^{-1}A^{-1}\). Then: $$(AB)^{-1}=B^{-1}A^{-1}$$ Thus, much like the transpose, taking the inverse of a product \(\textit{reverses}\) the order of the product. 3. Finally, recall that \((AB)^{T}=B^{T}A^{T}\). Since \(I^{T}=I\), then \((A^{-1}A)^{T}=A^{T}(A^{-1})^{T}=I\). Similarly, \((AA^{-1})^{T}=(A^{-1})^{T}A^{T}=I\). Then: \[ (A^{-1})^{T}=(A^T)^{-1} \] 7.5.2 Finding Inverses (Redux) Suppose \(M\) is a square invertible matrix and \(MX=V\) is a linear system. The solution must be unique because it can be found by multiplying the equation on both sides by \(M^{-1}\), yielding \(X=M^{-1}V\). Thus, the reduced row echelon form of the linear system has an identity matrix on the left: \[ \begin{amatrix}{rr} M & V \end{amatrix} \sim \begin{amatrix}{rr} I & M^{-1}V \end{amatrix} \] Solving the linear system \(MX=V\) then tells us what \(M^{-1}V\) is. To solve many linear systems with the same matrix at once, $$MX=V_{1},~MX=V_{2}$$ we can consider augmented matrices with many columns on the right and then apply Gaussian row reduction to the left side of the matrix.
Once the identity matrix is on the left side of the augmented matrix, then the solution of each of the individual linear systems is on the right. \[ \left(\begin{array}{c|cc} \!M & V_{1}&V_{2}\! \end{array}\right) \sim \left(\begin{array}{c|cc} \!I & M^{-1}V_{1} & M^{-1}V_{2}\! \end{array}\right) \] To compute \(M^{-1}\), we would like \(M^{-1}\), rather than \(M^{-1}V\), to appear on the right side of our augmented matrix. This is achieved by solving the collection of systems \(MX=e_{k}\), where \(e_{k}\) is the column vector of zeroes with a \(1\) in the \(k\)th entry. That is, the \(n \times n\) identity matrix can be viewed as a bunch of column vectors \(I_{n}=(e_{1} \ e_{2} \ \cdots e_{n})\). So, putting the \(e_{k}\)'s together into an identity matrix, we get: \[ \begin{amatrix}{1} M & I \end{amatrix} \sim \begin{amatrix}{1} I & M^{-1}I \end{amatrix} =\begin{amatrix}{1} I & M^{-1} \end{amatrix} \] Example 84 Find \(\begin{pmatrix} -1 & 2 & -3 \\ 2 & 1 & 0 \\ 4 & -2 & 5 \\ \end{pmatrix}^{-1} \). We start by writing the augmented matrix, then apply row reduction to the left side. \begin{eqnarray*} \left(\begin{array}{rrr|ccc} -1 & 2 & -3 & 1 & 0 & 0 \\ 2 & 1 & 0 & 0 & 1 & 0 \\ 4 & -2 & 5 & 0 & 0 & 1 \\ \end{array}\right) & \sim & \left(\begin{array}{crr|ccc} 1 & -2& 3 & -1 & 0 & 0 \\ 0 & 5 & -6 & 2 & 1 & 0 \\ 0 & 6 & -7 & 4 & 0 & 1 \\ \end{array}\right) \\ & \sim & \left(\begin{array}{ccr|rrc} 1 & 0 & \frac{3}{5} & -\frac{1}{5} & \frac{2}{5} & 0 \\ 0 & 1 & -\frac{6}{5} & \frac{2}{5} & \frac{1}{5} & 0 \\ 0 & 0 & \frac{1}{5} & \frac{8}{5} & -\frac{6}{5} & 1 \\ \end{array}\right) \\ & \sim & \left(\begin{array}{ccc|rrr} 1 & 0 & 0 & -5 & 4 & -3 \\ 0 & 1 & 0 & 10 & -7 & 6 \\ 0 & 0 & 1 & 8 & -6 & 5 \\ \end{array}\right) \\ \end{eqnarray*} At this point, we know \(M^{-1}\) assuming we didn't goof up.
However, row reduction is a lengthy and arithmetically involved process, so we should \(\textit{check our answer,}\) by confirming that \(MM^{-1}=I\) (or if you prefer \(M^{-1}M=I\)): \[MM^{-1} = \begin{pmatrix} -1 & 2 & -3 \\ 2 & 1 & 0 \\ 4 & -2 & 5 \\ \end{pmatrix}\begin{pmatrix} -5 & 4 & -3 \\ 10 & -7 & 6 \\ 8 & -6 & 5 \\ \end{pmatrix} =\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \] The product of the two matrices is indeed the identity matrix, so we're done. 7.5.3 Linear Systems and Inverses If \(M^{-1}\) exists and is known, then we can immediately solve linear systems associated to \(M\). Example 85 Consider the linear system: \[ \begin{array}{r} -x & +2y & -3z &=& 1 \\ 2x & +\ y\, & &=& 2 \\ 4x & -2y & +5z &=& 0 \end{array} \] The associated matrix equation is \(MX=\begin{pmatrix}1\\2\\0\end{pmatrix},\) where \(M\) is the same as in the previous section. Then: \[ \begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix} -1 & 2 & -3 \\ 2 & 1 & 0 \\ 4 & -2 & 5 \\ \end{pmatrix}^{-1}\begin{pmatrix}1\\2\\0\end{pmatrix} =\begin{pmatrix} -5 & 4 & -3 \\ 10 & -7 & 6 \\ 8 & -6 & 5 \\ \end{pmatrix}\begin{pmatrix}1\\2\\0\end{pmatrix} =\begin{pmatrix}3\\-4\\-4\end{pmatrix} \] Then \(\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}3\\-4\\-4\end{pmatrix}\). In summary, when \(M^{-1}\) exists, then $$MX=V \Leftrightarrow X=M^{-1}V\, .$$ 7.5.4 Homogeneous Systems Theorem A square matrix \(M\) is invertible if and only if the homogeneous system $$MX=0$$ has no non-zero solutions. Proof First, suppose that \(M^{-1}\) exists. Then \(MX=0 \Rightarrow X=M^{-1}0=0\). Thus, if \(M\) is invertible, then \(MX=0\) has no non-zero solutions. On the other hand, \(MX=0\) always has the solution \(X=0\). If no other solutions exist, then \(M\) can be put into reduced row echelon form with every variable a pivot. In this case, \(M^{-1}\) can be computed using the process in the previous section. 
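The procedure of section 7.5.2, row reducing \(\begin{amatrix}{1} M & I \end{amatrix}\), is easy to sketch in code. The version below uses exact rational arithmetic and assumes the pivots come out nonzero in order (so no row swaps are needed), which is enough to reproduce the inverse found in Example 84; the function name is mine:

```python
from fractions import Fraction

def invert(M):
    """Gauss-Jordan elimination on the augmented matrix [M | I], with exact
    rational arithmetic. Assumes M is invertible with nonzero pivots in order."""
    n = len(M)
    # Build [M | I] as one list of rows of Fractions.
    A = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for i in range(n):
        piv = A[i][i]
        A[i] = [x / piv for x in A[i]]        # scale the pivot row to get a 1
        for k in range(n):
            if k != i:                        # clear the rest of the column
                factor = A[k][i]
                A[k] = [x - factor * y for x, y in zip(A[k], A[i])]
    return [row[n:] for row in A]             # the right-hand block is M^{-1}
```

Running this on the matrix of Example 84 returns exactly the inverse computed in the text, with no rounding error to worry about.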
7.5.5 Bit Matrices In computer science, information is recorded using binary strings of data. For example, the following string contains an English word: \[ 011011000110100101101110011001010110000101110010 \] A bit is the basic unit of information, keeping track of a single one or zero. Computers can add and multiply individual bits very quickly. In chapter 5, section 5.2 it is explained how to formulate vector spaces over fields other than the real numbers. In particular, vector spaces make sense over the set \(Z_{2}=\{0,1 \}\), with addition and multiplication given by: \[ \begin{array}{c|cc} + & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \\ \end{array} \qquad \begin{array}{c|cc} \times& 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \\ \end{array} \label{Z2}\] Notice that \(-1=1\), since \(1+1=0\). Therefore, we can apply all of the linear algebra we have learned thus far to matrices with \(Z_{2}\) entries. A matrix with entries in \(Z_{2}\) is sometimes called a \(\textit{bit matrix}\). Example 86 \(\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix}\) is an invertible matrix over \(Z_{2}\): \[\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix}^{-1}=\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix} \] This can be easily verified by multiplying: \[\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix}\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \] Application: Cryptography A very simple way to hide information is to use a substitution cipher, in which the alphabet is permuted and each letter in a message is systematically exchanged for another. For example, the ROT-13 cipher just exchanges a letter with the letter thirteen places before or after it in the alphabet, so HELLO becomes URYYB. Applying the algorithm again decodes the message, turning URYYB back into HELLO.
Substitution ciphers are easy to break, but the basic idea can be extended to create cryptographic systems that are practically uncrackable. For example, a \(\textit{one-time pad}\) is a system that uses a different substitution for each letter in the message. So long as a particular set of substitutions is not used on more than one message, the one-time pad is unbreakable. English characters are often stored in computers in the ASCII format. In ASCII, a single character is represented by a string of eight bits, which we can consider as a vector in \(Z_{2}^{8}\) (which is like a vector in \(\Re^{8}\), except that the entries are zeros and ones). One way to create a substitution cipher, then, is to choose an \(8\times 8\) invertible bit matrix \(M\), and multiply each letter of the message, viewed as a bit vector, by \(M\). Then to decode the message, each eight-bit block would be multiplied by \(M^{-1}\). To make the message a bit tougher to decode, one could consider pairs (or longer sequences) of letters as a single vector in \(Z_{2}^{16}\) (or a higher-dimensional space), and then use an appropriately-sized invertible matrix. For more on cryptography, see "The Code Book" by Simon Singh (Doubleday, 1999).
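The scheme just described can be illustrated in a few lines, scaled down from the \(8\times 8\) ASCII case to the \(3\times 3\) invertible bit matrix of Example 86 acting on 3-bit blocks. The helpers `bit_mat_mul` and `bit_mat_vec` are our own naming, a minimal sketch rather than library code:

```python
def bit_mat_mul(A, B):
    """Multiply two bit matrices, reducing every entry mod 2."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) % 2 for j in range(p)]
            for i in range(n)]

def bit_mat_vec(M, v):
    """Multiply a bit matrix by a bit vector, mod 2."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) % 2 for i in range(len(M))]

M     = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]   # the matrix of Example 86
M_inv = [[0, 1, 1], [1, 0, 1], [1, 1, 1]]

# Verify the claimed inverse over Z_2
assert bit_mat_mul(M, M_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Encode two 3-bit blocks with M, decode with M_inv
message = [[0, 1, 1], [1, 0, 1]]
cipher  = [bit_mat_vec(M, block) for block in message]
decoded = [bit_mat_vec(M_inv, block) for block in cipher]
assert decoded == message
```

The real cipher would use an \(8\times 8\) matrix on eight-bit ASCII blocks; the arithmetic is identical, just with longer vectors.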
If $G_1\cong G_2$ and $H_1\cong H_2$ then $G_1 \times H_1 \cong G_2 \times H_2$. Proof attempt: Let $f_G:G_1\rightarrow G_2$ and $f_H:H_1\rightarrow H_2$ be isomorphisms. Question 1: Is the following statement valid? Does it belong in this proof? $$f_G:G_1\times A \rightarrow G_2 \times A\textrm{ for any group }A$$ Question 2: Can I finish the proof like this? Do I need to show any steps in between? $$\left[\ f_H \circ f_G\right] :G_1\times H_1 \rightarrow G_2 \times H_2$$ $f_G$ and $f_H$ are both bijective. Thus $f_H\circ f_G$ is bijective. Therefore, $G_1 \times H_1 \cong G_2 \times H_2$. Does this work? I was considering breaking it down further and manipulating individual elements of the groups $G_1,G_2,H_1,H_2.$ I would appreciate your advice. Thanks
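One way to build intuition for this question is to test the componentwise map $(g,h) \mapsto (f_G(g), f_H(h))$ (usually written $f_G \times f_H$ rather than a composition) on small concrete groups. The following sketch uses toy groups of our own choosing: $G_1 = (\{0,1\}, \oplus)$ mapped to $G_2 = (\{1,-1\}, \cdot)$, and $H_1 = H_2 = \mathbb{Z}_3$ with the identity map:

```python
from itertools import product

# Toy isomorphic pairs (our own examples, not from the question):
# G1 = ({0,1}, XOR)  ~  G2 = ({1,-1}, *)   via f_G(g) = (-1)**g
# H1 = H2 = (Z_3, + mod 3)                 via f_H = identity
G1, g1_op = [0, 1], lambda a, b: a ^ b
G2, g2_op = [1, -1], lambda a, b: a * b
H, h_op = [0, 1, 2], lambda a, b: (a + b) % 3

f_G = lambda g: (-1) ** g
f_H = lambda h: h

# The componentwise map F(g, h) = (f_G(g), f_H(h)) on G1 x H1
F = lambda gh: (f_G(gh[0]), f_H(gh[1]))

# Componentwise group operations on the two direct products
op1 = lambda x, y: (g1_op(x[0], y[0]), h_op(x[1], y[1]))
op2 = lambda x, y: (g2_op(x[0], y[0]), h_op(x[1], y[1]))

elements = list(product(G1, H))
# F respects the group operation ...
assert all(F(op1(x, y)) == op2(F(x), F(y)) for x in elements for y in elements)
# ... and is a bijection onto G2 x H
assert sorted(map(F, elements)) == sorted(product(G2, H))
print("F is an isomorphism on this example")
```

This only checks one example, of course; the general proof still needs the element-level verification the question contemplates.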
The Lagrangian of a non-relativistic isolated system of point masses is $$L=\sum_i\frac{m_i}{2}\dot{\vec r}_i^2-V,$$ where the potential function $V$ represents all interactions. If we assume Newton's principle of determinacy, then by knowing the initial positions and velocities we know any future state of the system. This implies that the potential may be a function of positions, velocities and time, $V=V(\vec r_i,\dot{\vec r}_i,t)$. Let us now assume Galilean invariance, which means the system is invariant under time translations, spatial translations, rotations and changes of the state of uniform motion. Time translation invariance, $t\rightarrow t+s$, implies that the potential does not depend explicitly on time. Hence $V=V(\vec r_i,\dot{\vec r}_i)$. Spatial translation invariance, $\vec r_i\rightarrow \vec r_i+\vec a$, implies that the position dependence of the potential is only on the relative vectors, $V=V(\vec r_i-\vec r_j,\dot{\vec r}_i)$. If the system is invariant under a change of the state of uniform motion, $\vec r_i\rightarrow \vec r_i+\vec vt$, then the velocity dependence must also be through relative velocities, $V=V(\vec r_i-\vec r_j,\dot{\vec r}_i-\dot{\vec r}_j)$. Invariance under rotations, $\vec r\rightarrow R\vec r$, $R\in SO(3)$, leads to the fact that what matters is length and not orientation, so $V=V(|\vec r_i-\vec r_j|,|\dot{\vec r}_i-\dot{\vec r}_j|)$. The question is: Should not the potential function of such a system be just $V(|\vec r_i-\vec r_j|)$? How does one eliminate the dependence on $|\dot{\vec r}_i-\dot{\vec r}_j|$?
This is something that just occurred to me. If heavier elements sink, then how can the entire ocean be salty? Shouldn't the 'salt', because of its density, all sink to the bottom of the ocean? In theory, only the deepest parts of the ocean should be salty, while the top of the ocean is not. Yet, the only water in the world that isn't salty comes from rain and rivers. How can this be? When dissolved in water, salt breaks up into sodium and chloride ions, which combine with water molecules so they cannot easily sink. However, there is a tendency for streams of fresh water to float on salt water and rise to the top. This caused problems for British submarines in the Dardanelles Straits during WW1. Moving from almost fresh water to the denser salt water, they suddenly became more buoyant and rose involuntarily to the surface, making them visible to Turkish gunners on the shore. There are also parts of the ocean where there are pools of very salty water lying on the bottom in such a way as to clearly show the pool to any diver who happens to see it, as though it were a pool on land, so in some circumstances very salty water can sink. Why does the salt in the oceans not sink to the bottom? Because there isn't any "salt", per se, in the ocean. Salt, as the compound sodium chloride (NaCl), does not exist as a solid in the ocean. It is dissolved into sodium and chloride ions (charged atoms) that exist within the ocean as a homogeneous phase (that is, a "thing"). That said, water with sodium chloride dissolved in it is indeed denser than pure water, because after all, sodium and chlorine atoms are heavier than atoms of hydrogen and oxygen. This leads to an interesting phenomenon: you can have layers of more-salty water and less-salty water that do indeed rise and sink. There are several YouTube videos that demonstrate this very well.
For example this video shows dyed salty and fresh water, separated by a barrier: and then when the barrier is released, the salty water sinks down: This phenomenon is extremely important for planet-scale ocean circulation, and has strong influence on our climate. I'm a regular from the Physics Stack Exchange reporting for duty. Why this is a serious question This is a bigger question than you might be giving it credit for. The question is ultimately similar to asking why all the air molecules in the atmosphere do not fall to the floor. Your question comes from a very solid principle in physics which could be called the minimum energy principle. The basic derivation is that if you define the power exerted by a force $\mathbf F_i$ on a particle with velocity $\mathbf v$ to be $$P_i=\mathbf F_i\cdot\mathbf v = |\mathbf F_i|~|\mathbf v|~\cos\theta,$$then Newton’s law that the sum of forces on a particle $\sum_i \mathbf F_i = m~\dot{\mathbf v}$ is the mass times the change in velocity per unit time, directly implies that the sum of powers exerted on a particle $\sum_i P_i = \dot K$ is the change in kinetic energy per unit time. Drag forces exist and they oppose relative motion, so their $\cos \theta$ is negative and they will decrease kinetic energy, $\dot K < 0.$ Since energy is a conserved quantity (a “stuff,” if you’d like: if you find more or less of it in a box, then it must have come from somewhere else where there is less or more of it), drag forces eventually rob energy from a system until it ends up at the minimum energy. And it is a very useful principle, for example you can use it to very easily derive the principle of buoyancy and the effective force that must be created by the displaced water to produce that effect; you can't do Newton’s laws easily when there are that many tiny little forces of little water molecules but you can absolutely compare total potential energy when an object is at the bottom of the ocean, the middle, and the top. 
It fails to describe certain things like static friction (why is my laptop on my desk and not on my floor?!) because it does not tell you how long such things take and requires an assumption of noise to eventually perturb you out of “local minimums” and such. But surely the air has had enough time to fall to the ground if that were what it wanted to do. The air does not want to fall to the ground. And we can’t steal our normal solutions for other things like “why don’t clouds fall,” “well what you think of as a cloud is actually more like a waterfall, there is constant movement of water droplets, the water gets a boost upward from heating the air around it as it condenses but it does tend to eventually fall but when it falls beneath a certain flat surface it evaporates again and becomes invisible and so the visible puff is constantly being fed by new water droplet formation and constantly sapped by falling water that becomes invisible…”—no. These are concrete particles that somehow avoid falling to the ground and we have to actually solve the problem. Fluctuation-Dissipation theorem to the rescue The minimum-energy principle describes something that we would call dissipation, energy leaving one system to end up in another system. These sorts of gates are always bidirectional: energy goes through in both ways. But mostly you don't notice it, and that’s key to how the principle helps us describe things: energy always flows out, it never flows back in. Until, well, it does. Energy of a bouncing ball spreads out among all of the different degrees of freedom of the floor, the air, but if it really goes all the way to 0 and sits perfectly and completely still, very soon the air will bump it and start it jostling and vibrating and moving again—just not moving very much. The same things that allow energy to dissipate must also be contributing constant energy fluctuations that prevent energy from going all the way to 0. 
These fluctuations are collectively understood as temperature. Temperature is technically only defined for a system where all of its degrees of freedom in the ways it can move have come to the same average energy, and it is measured as that average energy. Temperature defines this average energy and the size of these fluctuations. So at room temperature, for example, the characteristic thermal energy $k_B T$ is about 26 meV, 26 "milli-electron-volts", or 0.026 of the energy that an electron would gain if accelerated by a one-volt battery. So why does the air stay up? It is, basically, because the molecules of the floor are kicking the air molecules with enough energy to hit the upper reaches of the atmosphere. They do not actually go straight there; one air molecule bumps into other air molecules over a very short distance scale: but it transfers that energy and momentum to other particles which transfer that energy and momentum to other particles, and in the end the air "prefers" to "hang out" near the ground but the fluctuations cause it to get bumped to an average height given by our temperature. So if you take the mass of a nitrogen molecule N₂, 28 amu, and the acceleration due to gravity of 9.8 N/kg, you can find out that this 26 meV thermal energy means that the atmosphere is ~9 km high on average, which does get you a good chunk into the troposphere where the air starts to thin out dramatically. Actually the theory says that if nothing else were to happen and the random kicks were to just launch a particle up into the atmosphere, it would have a random height sampled according to an exponential probability distribution, $P(h) \sim e^{-h/(9\text{ km})}$. Similarly, why don't the salt molecules fall to the ocean floor? Well, they do, and then they get kicked back up. The water at the ocean floor is saltier.
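The ~9 km estimate above can be reproduced directly as the thermal energy \(k_B T\) divided by the weight \(m g\) of one N₂ molecule. A rough sketch with standard constants (rounded values, so the result is approximate):

```python
# Rough check of the ~9 km atmospheric scale height k_B * T / (m * g)
k_B = 1.381e-23      # Boltzmann constant, J/K
T   = 300.0          # roughly room temperature, K
amu = 1.661e-27      # atomic mass unit, kg
m   = 28 * amu       # mass of an N2 molecule
g   = 9.8            # gravitational acceleration, N/kg

# Thermal energy in meV (1 eV = 1.602e-19 J)
E_meV = k_B * T / 1.602e-19 * 1000
scale_height = k_B * T / (m * g)   # metres

print(round(E_meV), "meV,", round(scale_height / 1000, 1), "km")  # 26 meV, 9.1 km
```

The same ratio sets the decay length of the exponential distribution $P(h) \sim e^{-h/(9\text{ km})}$ quoted above.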
The key difference is whether the salt in question dissolves in water (if it sticks to water better than it sticks to itself) or precipitates in water (it sticks to itself better): larger chunks of a piece of stuff that get bound together will tend to act as big massive chunks, and then that thermal energy cannot kick them as high. This is the general idea of the fluctuation-dissipation theorem, which states that fluctuation and dissipation (under some extremely broad assumptions called “detailed balance”) always go hand-in-hand. Anything which can absorb light (dissipation) must radiate light into space (blackbody radiation, a sort of fluctuation). Every electrical resistor is also a noise source (Johnson noise). If energy can flow out of a system into some environment, then it will only flow out until they have the same average energy levels, and if you try to go lower, energy fluctuations from the environment flow back into the system. Turbulence: because seawater is almost always on the move, saltier water is mixed with fresher water by wave action and, to a lesser extent in surface waters, by Brownian motion. In Fiordland the annual rainfall is so high (up to 8000 mm) that in the sheltered inlets there is a permanent freshwater layer several metres thick, which you can drink from, sitting over the salt water from the Tasman. Even there this layer doesn't have a clear-cut boundary but rather a mixing layer where the salt and fresh water exchange particles and homogenise over time. In bodies of water that don't experience regular circulation, stagnation and anoxia set in over time, but dissolution of a number of salts still occurs. Saltier water has higher mass density, so the gravitational energy can be lowered that way. The concentration differences go up until the free energy of creating that big a concentration difference balances the gravitational energy change.
Department of Physics, University of Illinois at Urbana-Champaign Making some simplifying assumptions, they find: the equilibrium concentration goes up exponentially with depth, by a factor of e for each 10 km or so. The actual oceans are stirred by currents, so this equilibrium concentration difference isn't present in them. Basically they are saying that it takes energy to separate a homogeneous solution into parts which are more or less concentrated (and hence more or less dense). Taking into account the gravitational energy, it follows that the least-energy state of a column of water is saltier at the bottom. But it does, according to each salt's solubility and density. Soluble salts tend to mix into the water and stay suspended. Insoluble salts separate from the solution and create deposits on the ocean floor. One famous example was the "de-ironing" of the seas, when iron salts were deposited on the bottom due to the oxygenation of the oceanic water, around the time of the emergence of aerobic, photosynthetic organisms. Great Oxidation Event: https://en.wikipedia.org/wiki/Great_Oxidation_Event "The oxygen then combined with dissolved iron in Earth's oceans to form insoluble iron oxides, which precipitated out, forming a thin layer on the ocean floor". https://en.wikipedia.org/wiki/Banded_iron_formation And then there is the saturation issue. Salt can be dissolved in water only up to a certain degree. Once that degree is exceeded, the salt begins to fall out and sink to the bottom. The limit for table salt in water is around 360 g per litre (depending on the temperature); seawater, at roughly 35 g per litre, is far below saturation. Salt does sink to the bottom in the oceans. Why? Your question refers to salt. Salt is a solid chemical compound. Take a lump of rock salt of sodium chloride and throw it into the water: it will sink to the bottom. The reason is that the density of sodium chloride, more than 2 g/cm³, is higher than the density of seawater, less than 1.1 g/cm³.
Of course, the salt lump will eventually dissolve and no longer exist. But then it is no longer salt. There are then only loose, fast-moving cations and anions in the water. Salt does not sink to the bottom in the seas and oceans, because it dissolves in water! If you want to get salt from the seas and oceans, try evaporating the water :-)...
Recursion recursion recursion! We saw two recursive functions for finding the maximal element in a list. We discussed quicksort. Then we wrote two recursive functions: binom and change. We also discussed memoization and demonstrated it using our recursive implementations for binom and change. Recursion: Memoization:

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

Question from an exam: Implement find_steady, a function that receives a list $L$ of sorted (ascending) unique integers, and returns a value $i \geq 0$ s.t. $L[i] == i$. If there is no such $i$, None is returned. If there is more than one such index, any one of them can be returned. For example: find_steady([3, 5, 9, 17]) => None find_steady([-3, 0, 2, 10]) => 2

def find_steady(lst):
    for i in range(len(lst)):
        if lst[i] == i:
            return i
    return None

print(find_steady([3, 5, 9, 17]))
print(find_steady([-3, 0, 2, 10]))

None
2

def find_steady(lst):
    n = len(lst)
    left = 0
    right = n - 1
    while left <= right:
        middle = (right + left) // 2  # middle rounded down
        if middle == lst[middle]:     # fixed point found
            return middle
        elif middle < lst[middle]:    # no fixed point in the top half
            right = middle - 1
        else:                         # no fixed point in the bottom half
            left = middle + 1
    return None

print(find_steady([3, 5, 9, 17]))
print(find_steady([-3, 0, 2, 10]))

None
2

What just happened? The crucial point about this algorithm is the following: if $lst[mid] > mid$ then a fixed point cannot exist above $mid$. Why is that? Assume there exists some $j = mid+k$ for some $k>0$ such that $lst[j] == j$. Note that $lst[mid] \geq mid + 1$. Now, as the elements in $lst$ are sorted, unique integers, they increase by at least 1 from one index to the next, so we must have $lst[j] == lst[mid + k] \geq lst[mid] + k \geq mid + k + 1 > j$, a contradiction. A similar argument shows that if $lst[mid] < mid$ then a fixed point cannot exist below $mid$, and thus we get the correctness of the algorithm.
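As a quick sanity check (not part of the exam question), the two implementations can be compared on random inputs. Since several fixed points may exist and the binary-search version may return a different one, we compare only whether a fixed point was found, and verify that any returned index really is a fixed point:

```python
import random

def find_steady_linear(lst):
    for i in range(len(lst)):
        if lst[i] == i:
            return i
    return None

def find_steady_binary(lst):
    left, right = 0, len(lst) - 1
    while left <= right:
        middle = (left + right) // 2
        if lst[middle] == middle:      # fixed point found
            return middle
        elif lst[middle] > middle:     # no fixed point above middle
            right = middle - 1
        else:                          # no fixed point below middle
            left = middle + 1
    return None

# Random sorted lists of unique integers; the two versions must agree on existence
for _ in range(1000):
    lst = sorted(random.sample(range(-50, 50), 10))
    b = find_steady_binary(lst)
    assert (b is None) == (find_steady_linear(lst) is None)
    if b is not None:
        assert lst[b] == b
print("all random tests passed")
```

Randomized testing like this is a cheap way to gain confidence in the index arithmetic of a binary search.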
# We would like the function to return 3
print(find_steady([-1, 0, 3, 3, 3]))

The maximum is the maximal value between lst[0] and the result of recursively finding the max in lst[1:]. Let $n$ denote the size of lst. Recursion depth: $O(n)$ Time complexity: $O(n^2)$

def max1(L):
    if len(L) == 1:
        return L[0]
    return max(max1(L[1:]), L[0])

max1([2, 5, 10, 2, 100, -10])

100

The maximum is the maximal value between the result of recursively finding the max in lst[:n//2] and the result of recursively finding the max in lst[n//2:], where $n$ denotes the size of lst. Recursion depth: $O(\log{n})$ Time complexity: $O(n\log{n})$

def max2(L):
    if len(L) == 1:
        return L[0]
    l = max2(L[:len(L)//2])
    r = max2(L[len(L)//2:])
    return max(l, r)

max2([2, 5, 10, 2, 100, -10])

100

Since slicing is a costly action we can do better. Instead of slicing the list each time, we will maintain indices for the "active" part of the list (like we did for binary search) and simply recurse after updating the indices according to the same logic. We also add envelope functions for more user-friendly code. How does time/depth change for each function? The depth is clearly unaffected. Time, however, is much better: since we only do $O(1)$ work in each call, the runtime is analogous to computing the size of the recursion tree, which is $O(n)$ in both cases.
def max11(L, left, right):
    if left == right:
        return L[left]
    return max(L[left], max11(L, left + 1, right))

def max22(L, left, right):
    if left == right:
        return L[left]
    mid = (left + right) // 2
    l = max22(L, left, mid)
    r = max22(L, mid + 1, right)
    return max(l, r)

def max1_slice(L):
    return max11(L, 0, len(L) - 1)

def max2_slice(L):
    return max22(L, 0, len(L) - 1)

Quicksort has a very simple recursive logic:

import random

def quicksort(lst):
    """ quick sort of lst """
    if len(lst) <= 1:
        return lst
    else:
        pivot = random.choice(lst)  # select a random element from list
        smaller = [elem for elem in lst if elem < pivot]
        equal = [elem for elem in lst if elem == pivot]
        greater = [elem for elem in lst if elem > pivot]
        return quicksort(smaller) + equal + quicksort(greater)  # two recursive calls

def det_quicksort(lst):
    """ sort using deterministic pivot selection """
    if len(lst) <= 1:
        return lst
    else:
        pivot = lst[0]  # select first element from list
        smaller = [elem for elem in lst if elem < pivot]
        equal = [elem for elem in lst if elem == pivot]
        greater = [elem for elem in lst if elem > pivot]
        return det_quicksort(smaller) + equal + det_quicksort(greater)  # two recursive calls

The worst case and best case analyses were discussed in class. The subset sum problem is described as follows: given a list of integers $L$ and a value $s \in \mathbb{Z}$, is there a subset $S \subseteq L$ such that: $$\sum_{x \in S} x = s$$ If such an $S$ exists we return True, otherwise False. For example, $L = [4, -7, 12, 5, 1]$ and $s = 6$ give True (take $12 - 7 + 1$), while $L = [1, 2, 4, 8, 16]$ and $s = 32$ give False. The base cases are pretty straightforward: if $s = 0$ the empty subset works and we return True, and if $L$ is empty while $s \neq 0$ we return False. What about the recursive call? Well, if $s \neq 0, L \neq []$ then the following holds: a subset summing to $s$ either contains $L[0]$ or it does not. So what do we do? We recursively check if either $L[1:], s - L[0]$ or $L[1:], s$ can be solved, and we only return False if both fail. Note that instead of using slicing (e.g. $L[1:]$), we pass an index that indicates our iteration along the list.
Let's code:

def subset_sum(L, s, i=0):
    # Base cases
    if s == 0:
        return True
    if i == len(L):
        return False
    # Check both cases
    with_first = subset_sum(L, s - L[i], i + 1)
    without_first = subset_sum(L, s, i + 1)
    # If any of the above succeeds we return True, else False
    return with_first or without_first

# Sanity checks
L1 = [4, -7, 12, 5, 1]
L2 = [1, 2, 4, 8, 16]
s1 = 6
s2 = 32
print(subset_sum(L1, s1, 0))
print(subset_sum(L2, s2, 0))

True
False

What is the running time of the code in relation to the size of the list $|L| = n$? The recurrence relation is $T(n) = 2 \cdot T(n-1) + O(1)$, which yields $T(n) = O(2^n)$ (just like the Towers of Hanoi solution). In class we've seen how to transform recursive algorithms which run in exponential time into iterative algorithms that run in linear time (e.g. Fibonacci, factorial). Can you think of a better solution? One that works in time $O(n^2)$? How about $O(n^{100})$? Incredibly, there is a widely held belief that this problem admits no algorithm which runs in time $\mathrm{poly}(n)$. More on that later (much later, say, next year in computational models)... A bus driver needs to give exact change and she has coins of limited types. She has infinitely many coins of each type. Given the amount of change ($amount$) and the coin types (the list $coins$), how many ways are there?

def change(amount, coins):
    if amount == 0:
        return 1
    elif amount < 0 or coins == []:
        return 0
    return change(amount, coins[:-1]) + \
           change(amount - coins[-1], coins)

change(5, [1, 2, 3])

5

Why does it count each solution only once? For example, why is [2,2,1] counted once, and not again as [1,2,2]? Consider the case where $amount = n^2, coins = [1,2,\ldots, n]$. When we call $change(amount, coins)$, the first level of the recursion calls $change(n^2, [1,\ldots, n-1])$ and $change(n^2 - n, [1,\ldots, n])$. This means that the list size in the recursive calls is at least $n-1$ and $amount \geq n^2 - n$.
One can show (using induction) that in the first $k \leq n$ levels of recursion the list size is at least $n - k$ and $amount \geq n^2 - nk$, thus there are two recursive calls at each of these levels. This gives a running time of at least $2^n$ by the same argument as the one we applied to the subset sum problem. It is interesting to note that while the two problems share some similarities, one major difference is that in the subset sum problem we ask whether a solution exists, while in the change problem we count the number of valid solutions. This is a recurring theme in CS that we will encounter many times in the future.
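The memoization mentioned at the top of these notes tames both exponential recursions, since each distinct state is computed only once. A sketch (function and helper names are ours) using functools.lru_cache:

```python
from functools import lru_cache

def subset_sum_memo(L, s):
    """Memoized subset sum: each (remaining, index) state is computed once."""
    L = tuple(L)

    @lru_cache(maxsize=None)
    def rec(s, i):
        if s == 0:
            return True
        if i == len(L):
            return False
        return rec(s - L[i], i + 1) or rec(s, i + 1)

    return rec(s, 0)

def change_memo(amount, coins):
    """Memoized change counter; rec(amount, k) counts ways using coins[:k]."""
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def rec(amount, k):
        if amount == 0:
            return 1
        if amount < 0 or k == 0:
            return 0
        return rec(amount, k - 1) + rec(amount - coins[k - 1], k)

    return rec(amount, len(coins))

print(subset_sum_memo([4, -7, 12, 5, 1], 6))   # True
print(subset_sum_memo([1, 2, 4, 8, 16], 32))   # False
print(change_memo(5, [1, 2, 3]))               # 5
```

Note that this does not contradict the hardness remark above: the number of distinct states depends on the magnitudes of the numbers involved, not only on $n$, which is exactly why memoization gives a pseudo-polynomial rather than polynomial algorithm here.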
Advanced Studies in Pure Mathematics Adv. Stud. Pure Math. Probability and Number Theory — Kanazawa 2005, S. Akiyama, K. Matsumoto, L. Murata and H. Sugita, eds. (Tokyo: Mathematical Society of Japan, 2007), 455-478 The probability of two $\mathbb{F}_q$-polynomials to be coprime Abstract By means of the adelic compactification $\widehat{R}$ of the polynomial ring $R := \mathbb{F}_q [x]$, $q$ being a prime, we give a probabilistic proof to a density theorem: $$ \frac{\# \{(m, n) \in \{0, 1, \dots, N-1\}^2\ ;\ \varphi_m \text{ and }\varphi_n \text{ are coprime}\}}{N^2} \to \frac{q-1}{q}, $$ as $N \to \infty$, for a suitable enumeration $\{\varphi_n\}_{n=0}^{\infty}$ of $R$. Then establishing a maximal ergodic inequality for the family of shifts $\{\widehat{R} \ni f \mapsto f + \varphi_n \in \widehat{R}\}_{n=0}^{\infty}$, we prove a strong law of large numbers as an extension of the density theorem. Article information Dates Received: 17 February 2006 Revised: 20 March 2006 First available in Project Euclid: 27 January 2019 Permanent link to this document https://projecteuclid.org/euclid.aspm/1548550910 Digital Object Identifier doi:10.2969/aspm/04910455 Zentralblatt MATH identifier 1154.60006 Citation Sugita, Hiroshi; Takanobu, Satoshi. The probability of two $\mathbb{F}_q$-polynomials to be coprime. Probability and Number Theory — Kanazawa 2005, 455--478, Mathematical Society of Japan, Tokyo, Japan, 2007. doi:10.2969/aspm/04910455. https://projecteuclid.org/euclid.aspm/1548550910
Accurately stated: Diversification helps during turmoil, but helps less than would be expected by using $w^T \Omega w$ as the portfolio variance with the off-diagonal covariances estimated during tranquil periods. This is because correlations and covariances change during turmoil, typically increasing. This reduces the benefit of diversification, since the off-diagonal elements, $\sigma_{i,j}(t)$, tend to be larger when $t$ belongs to a turmoil regime. This is true only for assets that suffer from contagion. In fact the exact opposite is true for assets that are considered safe-haven assets, such as gold, some currencies (USD, JPY, CHF, perhaps GBP) and some government fixed income instruments. Here, diversification works better during market turmoil, since these assets can be relied on in most cases to increase in value in such conditions. Thus the statement is true only assuming you are investing in assets that suffer from contagion. In addition, the extent to which contagion (technically defined as an increase in general comovement - which itself is not technically defined - between asset prices during crises) exists at all is not a settled question. I refer you to Forbes and Rigobon (2001) and follow-up papers. This is because the fact that $\sigma(t)$ tends to increase during crises causes a bias in $E[\hat{\rho}(t)]$, where $\hat{\rho}$ is the Pearson estimator and $E$ is the expectation. Note that this was the state of the literature in the mid-2000s, and it may have settled the question since then. Wavelet- and copula-based methods, which as far as I am aware are not susceptible to such biases, have each shown contagion in at least one paper; the Forbes-Rigobon (2001) result is for Pearson's correlation. From a portfolio management perspective, I recommend looking into time-varying multivariate copulas of the flavour of the Symmetrized Joe-Clayton Copula and later derivatives such as the SCAR in order to calibrate the left-tail comovement during turmoil periods.
I am aware of only a very small contagion literature that takes this approach, but I feel it is the best approach for this question. However, many practitioners will take a qualitative, not a quantitative, approach. Many in the industry actually refuse to rely on any sort of mathematical optimisation, due to the estimation error in $E[R(t)]$ and $\hat{\rho}(t)$ (as well as pure ignorance of the academic literature since Markowitz), and will instead try to diversify by using a subjective mish-mash of sector exposure, style exposure, factor exposure, country exposure, asset class exposure, and so on. I am not against this approach. Quantitative optimisation is generally only feasible at the asset-class level anyway (although it can still be applied at the stock level, for example, by using Black-Litterman).
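To make the first point concrete, here is a toy two-asset calculation (illustrative numbers of our choosing, not from the answer above): holding weights and volatilities fixed, raising the correlation from a tranquil 0.3 to a turmoil 0.9 pushes the portfolio volatility \(\sqrt{w^T \Omega w}\) toward the undiversified 20% level:

```python
import math

def portfolio_vol(w, sigma, rho):
    """sqrt(w^T Omega w) for two assets with volatilities sigma and correlation rho."""
    cov = rho * sigma[0] * sigma[1]
    var = (w[0] ** 2 * sigma[0] ** 2
           + w[1] ** 2 * sigma[1] ** 2
           + 2 * w[0] * w[1] * cov)
    return math.sqrt(var)

w = (0.5, 0.5)
sigma = (0.20, 0.20)   # 20% annualised vol each, illustrative

tranquil = portfolio_vol(w, sigma, rho=0.3)
turmoil  = portfolio_vol(w, sigma, rho=0.9)
print(round(tranquil, 3), round(turmoil, 3))  # 0.161 0.195
```

The diversification benefit (the gap below 0.20) shrinks from about 4 vol points to about half a point when the off-diagonal term rises, which is exactly the regime-dependence described above.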
Tags: asymptotic bound, asymptotic termination, automated complexity analysis, complexity analysis, ranking functions, termination, termination complexity, and vector addition systems with states Abstract: Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes and parametrized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In the absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\Theta(n^k)$, for some integer $k\leq d$. We present a polynomial-time algorithm computing the optimal $k$. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e. a $k$ such that the termination complexity is $\Omega(n^k)$. Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis. Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS
I gave a talk during the June 2013 algebraic topology summer course at MSRI. Can you see how nervous I am? notes I spoke about the Becker-Gottlieb transfer map as described in J. Becker, D. Gottlieb, “The transfer map and fiber bundles”, Topology, 14 (1975), pp. 1–12 link In their 1975 paper, Becker and Gottlieb proved the existence of a transfer homomorphism involving the total space and the base space of a fiber bundle. The transfer homomorphism is between the nth cohomology of the total space and the nth cohomology of the base space. The transfer map $\hat{\tau}$ is defined such that the composition of the induced pullback $p^*$ of the projection and the transfer map, $$\hat{\tau} \circ p^*: H^*(B) \rightarrow H^*(B),$$ is just multiplication by the Euler characteristic of the fiber. A natural question to pose is whether or not this transfer homomorphism is induced by some map from $B$ to $E$. This would make the transfer homomorphism geometric. The paper ends with a proof of the Adams conjecture, an important result dealing with real vector bundle isomorphism classes and cohomology operations in K-theory. The original proof used algebraic geometry. Quillen then proved the Adams conjecture using the Brauer induction theorem. Becker and Gottlieb's proof is considered exceptional because it only uses algebraic topology. Their proof reduces the problem to elements of $KO(X)$ which are $2n$-dimensional vector bundles, and it uses the transfer map by applying a splitting principle derived from it.
In the Standard Model (SM) of particle physics, the weak interaction couples only to left-handed fermions. The reason is to incorporate the parity violation in weak interactions, which is a fact of nature. If one checks, it will be seen that a $V-A$ type current can account for the inclusion of left-handed fermions (a $V+A$ type current would also be possible, but it is ruled out by experiment). A minimal extension of the SM is the Left-Right symmetric model, proposed by Rabindra Mohapatra and Goran Senjanovic back in the late seventies, based on the gauge group $SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$. In this model the fermion fields are assigned to the doublets $$L_{L}^{i}= \begin{pmatrix}\nu \\ e^- \end{pmatrix}_{L}^{i}\qquad L_{R}^{i}= \begin{pmatrix}\nu \\ e^- \end{pmatrix}_{R}^{i} \qquad Q_{L}^{i}= \begin{pmatrix} u \\ d \end{pmatrix}_{L}^{i} \qquad Q_{R}^{i}= \begin{pmatrix} u \\ d \end{pmatrix}_{R}^{i}$$ It is obvious from the field representations that both left- and right-handed fermions enter the game on an equal footing. The transformation between $L$ and $R$ fields implements parity, and parity invariance is imposed before the spontaneous breaking down to $U(1)_{em}$. This model has interesting features, is anomaly free, and looks more symmetric than the SM itself. But despite intense searches, no data supporting this model has been found yet. An interesting paper was published recently by Roni Harnik et al., in which the authors considered the possibility of a universe without weak interactions. They provided theoretical arguments that it is indeed possible to have a stable universe with nucleosynthesis, matter domination and structure formation in the absence of the weak interaction, which is responsible for parity violation.
Functions The concept of function is one of the most important in mathematics. However, its history is relatively short. M. Kline credits [Kline, p. 338] Galileo (1564-1642) with the first statements of dependency of one quantity on another, e.g., "The times of descent along inclined planes of the same height, but of different slopes, are to each other as the lengths of these slopes." In a 1673 manuscript Leibniz used the word "function" to mean any quantity varying from point to point of a curve, like the length of the tangent or the normal. The curve itself was said to be given by an equation. But in 1714, he already used the word "function" to mean quantities that depend on a variable. The notation f(x) was introduced by Euler in 1734. Still, in the 1930s, a well known Russian mathematician N. Luzin wrote: The function concept is one of the most fundamental concepts of modern mathematics. It did not arise suddenly. It arose more than two hundred years ago out of the famous debate on the vibrating string and underwent profound changes in the very course of that heated polemic. From that time on this concept has deepened and evolved continuously, and this twin process continues to this very day. That is why no single formal definition can include the full content of the function concept. This content can be understood only by a study of the main lines of the development that is extremely closely linked with the development of science in general and of mathematical physics in particular. Functions, especially of the numeric variety, are often confused with formulas by means of which they are defined. In one of the discrete mathematics textbooks, the authors fling a particularly inept remark to the effect that "Whereas classical mathematics is about formulas, discrete mathematics is as much about algorithms as about formulas." 
Charitably, I interpret the maxim as the authors' attempt to emphasize the importance of functions in mathematics in general and discrete mathematics in particular. In their view, I believe, the efficiency of function computations gains prominence when it comes to practical matters. In mathematics, the function of two variables $f(x, y) = x^{2} - y^{2}$ can be equally well defined as $f(x, y) = (x - y)(x + y).$ In algorithmic mathematics there is an important difference between the two definitions: one requires two multiplications and one addition (with a minus sign), the other needs one multiplication and two additions. The latter is faster! But the authors, of course, might have had their own reasons. It is a fact, however, that the definition we currently use was introduced by Johann Peter Gustav Lejeune Dirichlet (1805-1859). The turning point in the common perception of a function as associated with an analytic curve - a curve whose shape in any small region defines its shape everywhere else - occurred with the 1807 publication by Joseph Fourier (1768-1830) of his solution to the heat equation. Fourier represented his solution as (what is now called) a Fourier series: $\displaystyle f(x)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}(a_{n}\cos nx+b_{n}\sin nx),$ where $\displaystyle a_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\cos nt\,dt$ and $\displaystyle b_{n}=\frac{1}{\pi}\int_{0}^{2\pi}f(t)\sin nt\,dt.$ The crucial argument for reconsidering the notion of function was the realization that the Fourier series converges pointwise for a wide range of functions, not necessarily analytic but, for example, defined piece-wise. References M. Kline, Mathematical Thought From Ancient to Modern Times I, Oxford University Press, 1972 N. Luzin, Function, Mathematical Evolutions, A. Shenitzer and J.
Stillwell (eds.), MAA, 2002
The assumption $\mathrm{SUBEXP}\subset \mathrm{P}/\mathrm{poly}$ seems to yield nothing interesting at all. Is that true? Yes, we have to admit so. For any other standard complexity class $\mathcal{C}$, either $\mathcal{C}\subseteq \mathrm{PSPACE}$, in which case we have an interactive protocol whose prover can be replaced by the circuit, or $\mathcal{C} = \mathrm{EXP}$, in which case we use Meyer's theorem to put $\mathcal{C}$ in $\Sigma_2^p\cap \Pi_2^p \subseteq \mathrm{PSPACE}$ and get back to the previous case. But in the case of $\mathrm{SUBEXP}$, neither of the above applies. Note that in Meyer's argument, we have to simulate the machine deciding $\mathrm{L}\in \mathcal{C}$. But since every particular machine $\mathrm{M}$ deciding $\mathrm{L}$ runs in exponential time ($\mathrm{L}$ has infinitely many machines with ever-decreasing running time bounds), we cannot have a multi-output circuit for it.
This thing is driving me crazy: mathematicians and physicists use different notations for spherical polar coordinates. During a vector calculus problem I had the following issue: I had to find $d\underline{S}$ for the surface of a sphere of radius $a$ centred at the origin. In all the books I always find that for a parametrised surface $\underline{r}(s,t)$ we have $d\underline{S} = \left(\frac{\partial \underline{r}}{\partial s}\times\frac{\partial \underline{r}}{\partial t}\right)ds\,dt$ in this order. For the sphere I have $\underline{r}(\theta,\phi) = a\cos(\theta)\sin(\phi)\underline{i}+a\sin(\theta)\sin(\phi)\underline{j}+a\cos(\phi)\underline{k}$ for $0\leq \theta\leq 2\pi$ and $0\leq \phi\leq \pi$, and hence I get $\frac{\partial \underline{r}}{\partial \theta}\times\frac{\partial \underline{r}}{\partial \phi} = -a\sin(\phi)\,\underline{r}$, which points inwards, so I take the opposite of it. In my notes, the order I preserved here is always kept (i.e. the first partial on the left (i.e. $\frac{\partial}{\partial \theta}$) is taken with respect to the first argument of $\underline{r}(\theta,\phi)$). Preserving the order, I should always get the correct normal vector. However, for some weird reason, when in my notes, in the books and online people have to calculate $d\underline{S}$ for a sphere (like here), they always swap the roles of the angles and write the spherical coordinates as $(r,\theta,\phi)$ with $0\leq \theta \leq \pi$ and $0\leq \phi \leq 2\pi$, and for $\underline{r}(\theta,\phi)$ they get $\frac{\partial \underline{r}}{\partial \theta}\times\frac{\partial \underline{r}}{\partial \phi} = a\sin(\theta)\,\underline{r}$. Why does this happen? It's just a notational convention; however, the order of the partials should give the correct normal, yet in my example it clearly gives the opposite, while with the other notation it gives the correct one.
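A quick symbolic check (a sketch using sympy; the two parametrisations below are the two conventions described above, not anything canonical) confirms both signs:

```python
# Verify which way  dr/dtheta x dr/dphi  points for each convention.
import sympy as sp

a, theta, phi = sp.symbols('a theta phi', positive=True)

# Convention 1 (theta azimuthal, phi polar):
r = sp.Matrix([a*sp.cos(theta)*sp.sin(phi),
               a*sp.sin(theta)*sp.sin(phi),
               a*sp.cos(phi)])
n = r.diff(theta).cross(r.diff(phi))
# The cross product equals -a*sin(phi)*r, i.e. it points inwards
# (towards the origin) for 0 < phi < pi:
assert sp.simplify(n + a*sp.sin(phi)*r) == sp.zeros(3, 1)

# Convention 2 (theta polar, phi azimuthal):
r2 = sp.Matrix([a*sp.sin(theta)*sp.cos(phi),
                a*sp.sin(theta)*sp.sin(phi),
                a*sp.cos(theta)])
n2 = r2.diff(theta).cross(r2.diff(phi))
# Here the same ordering of partials gives +a*sin(theta)*r2: outwards.
assert sp.simplify(n2 - a*sp.sin(theta)*r2) == sp.zeros(3, 1)
```

So the sign flip is real: exchanging which angle is polar reverses the orientation of the parametrisation, and with it the direction of $\partial_\theta\underline{r}\times\partial_\phi\underline{r}$.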
(59th Polish Olympiad in Physics) A ball of mass $m$, radius $r$ and moment of inertia $I = \frac 25 mr^2$ is rolling on the floor without sliding with linear velocity $v_0$. It hits a wall perpendicularly. Find the velocity $v_k$ with which the ball recedes from the wall a long time after the collision. The friction coefficient between the ball and the floor equals $\mu$, whereas the friction coefficient between the wall and the ball is very big. The collisions are infinitesimally short, perfectly elastic, and involve no deformation. Ignore rolling resistance and air resistance. As the collision is very short, the forces acting between the wall and the ball are very big, so we may neglect gravity, the friction between the floor and the ball, and the floor's reaction force. Then the angular momentum with respect to the axis of the ball's tangency to the wall is conserved. This means $I' \omega' = \mathrm{const}$, where $I'$ is the moment of inertia of the ball with respect to that axis and $\omega'$ the angular velocity with respect to that axis. But why is this equivalent to the condition that $I\omega + mv_yr = \mathrm{const}$, where $v_y$ is the vertical component of the velocity of the ball? /edit: the official solution: The coordinate system used has the axis $x$ perpendicular to the wall and directed left, while the axis $y$ is perpendicular to the floor and directed upwards. Positive angular velocities mean counterclockwise motion. As the ball is rolling without sliding, it approaches the wall with linear velocity $v_0$ and angular velocity $\omega_0$. The collision with the wall is very short, so the contact force and the reaction force are very big. This means that during the collision we may neglect gravity, the reaction of the floor and the friction of the ball with the floor. In this situation the torques with respect to the axis of the ball's tangency to the wall equal 0.
So the total angular momentum is conserved with respect to that axis $$I\omega + mv_y r = \mathrm{const} \tag{1}$$ Because the wall's friction coefficient is very big, during the collision the ball will stop sliding with respect to the wall. This means that right after the collision the vertical component of the ball's velocity $v_{2y}$ and its angular velocity $\omega_2$ fulfill the formula $v_{2y} = \omega_2 r$. Taking into account that before the collision $\omega = v_0 /r$, $v_y = 0$, from the conservation of angular momentum (1) we get $$\omega_2 = \frac {I}{I + mr^2} \frac {v_0} r = \frac 2 7 \frac {v_0} r$$ $$v_{2y} = \omega_2 r = \frac 2 7 {v_0}$$ The wall and the ball are ideally elastic and the total work done by the reaction forces perpendicular to the wall equals zero, so the kinetic energy in the direction of $x$ is conserved, and thus $v_{2x} = - v_0$. After the collision the ball undergoes projectile motion with initial velocity $(v_{2x}, v_{2y})$. The floor and the ball are ideally elastic, so the ball will bounce infinitely long, reaching the same maximum height each time (this has no importance for finding the final horizontal velocity). During each collision with the floor there is friction until we get $v_{x_{konc}} = \omega_{konc} r$. On the other hand, at each collision with the floor the angular momentum is conserved with respect to the axis of the ball's tangency to the floor $$I \omega + m v_x r = \mathrm {const}$$ Hence $$v_{x_{konc}} = \frac{I \omega_2 + mrv_{2x}}{I+mr^2}r$$ /edit2: It should give $\omega_2 = \frac 2 7 \omega_0$, indeed. We have $$\frac {dp_x}{dt} = N(t)$$ The friction gives an upwards acceleration $$\frac {dp_y}{dt} = fN(t)$$ and diminishes the angular velocity $$I \frac {d\omega}{dt} = -fN(t)r$$ Hence $$I \frac {d\omega}{dt} + r\frac {dp_y}{dt} = 0 \tag{*}$$ So after integrating and finding the constants, using $v_{2y} = \omega_2 r$, $$I(\omega_2 - \omega_0) + mr(v_{2y} - 0) = 0 \quad\Rightarrow\quad (I + mr^2)\,\omega_2 = I\omega_0$$ So $$\omega_2 = \frac 2 7 \omega_0$$ In fact, the formula (*) is the formula I have problems with.
But why is (*) in fact the formula for angular momentum conservation about the axis of tangency?
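Taking the official solution's formulas at face value, a quick arithmetic check with exact fractions (a sketch in units where $m = r = v_0 = 1$; the resulting number is just what those formulas produce, not an independent derivation) reproduces $\omega_2 = \frac 2 7 \omega_0$ and yields the final speed:

```python
# Sanity check of the quoted formulas with exact fractions.
# Units: m = r = v0 = 1, so I = 2/5 and omega_0 = 1.
from fractions import Fraction

I = Fraction(2, 5)          # moment of inertia, in units of m*r^2

# Wall collision: I*omega + m*v_y*r = const, with v_2y = omega_2*r after:
omega2 = I / (I + 1)        # = I/(I + m r^2) * omega_0
assert omega2 == Fraction(2, 7)
v2x = Fraction(-1)          # elastic reflection: v_2x = -v_0

# Floor collisions: I*omega + m*v_x*r = const until rolling, v_x = omega*r:
v_final = (I * omega2 + v2x) / (I + 1)
assert v_final == Fraction(-31, 49)
print(abs(v_final))         # 31/49
```

So under the sign conventions of the quoted solution, the receding speed comes out to $\frac{31}{49}\,v_0$.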
Bound state solutions of Schrödinger-Poisson system with critical exponent. School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China. $\tag{P}\label{0.1} \begin{cases}- Δ u+V(x)u+K(x)φ u=|u|^{2^*-2}u, &x∈ \mathbb{R}^3,\\-Δ φ=K(x)u^2,&x∈ \mathbb{R}^3,\end{cases}$ Here $2^*=6$ is the critical Sobolev exponent in $\mathbb{R}^3$, and the potentials satisfy $K∈ L^{\frac{1}{2}}(\mathbb{R}^3)$ and $V∈ L^{\frac{3}{2}}(\mathbb{R}^3)$, with conditions stated in terms of the norm $|V|_{\frac{3}{2}}+|K|_{\frac{1}{2}}$. Mathematics Subject Classification: Primary: 35J20; Secondary: 35J6. Citation: Xu Zhang, Shiwang Ma, Qilin Xie. Bound state solutions of Schrödinger-Poisson system with critical exponent. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1) : 605-625. doi: 10.3934/dcds.2017025
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Entanglement isn't about interaction or information transfer between entangled particles. Consider spin entanglement of two spin-$\frac{1}{2}$ particles: let them be in the singlet state relative to an arbitrary axis (say the z-axis): $$ |\Psi \rangle = \frac{1}{\sqrt{2}} (\ |\uparrow_z, \downarrow_z \rangle - |\downarrow_z,\uparrow_z\rangle \ ) $$ The probability $P$ of measuring both particles in the state $|i,j \rangle$ with $i,j \in \{ \uparrow, \downarrow \}$, where the axes of the two measurements enclose the angle $\theta$, is given by $$ P_{i,j} = \| \langle i,j | \Psi \rangle \|^2 = \frac{1}{4} (1 - i \cdot j \cdot \cos \theta )$$ if we take $i,j$ to be 1 and -1 for $\uparrow$ and $\downarrow$, respectively. The reduced probability $p_i$ of measuring only one particle (e.g. if we don't care about the other) is given by $$ p_i = \sum_{j \in \{1,-1\}} P_{i,j} = \frac{1}{2} $$ The conditional probability of measuring the other particle (after we already know the result of the first measurement) is given by $$ \tilde{p}_{j|i} = \frac{P_{i,j}}{p_i} = \frac{1}{2} (1 - i \cdot j \cdot \cos \theta ) $$ This does involve the angle $\theta$, and usually one starts here to argue about non-locality and instantaneous actions changing the outcome of the experiment when we change the angle $\theta$ at the first measurement apparatus. This is however not true. If we are talking about conditional probabilities, we have already performed a measurement and set the measurement axis of the first measurement. Changing this axis afterwards will not affect the probability, as the angle $\theta$ is relative to the measured axis. Changing the axis of the second measurement only changes the probability predicting the outcome of the later measurement for the first observer, because he has that extra knowledge.
The probability for the second observer stays the same, as this is the reduced probability (he doesn't know about the first measurement): $$ p_j = \sum_{i \in \{1,-1\}} P_{i,j} = \frac{1}{2} $$ In short: without the extra knowledge of the first measurement, entanglement is not important for the second observer. To gain that extra knowledge there must be an additional information transfer to the second observer, and this is restricted by relativistic causality ($v\le c$ etc.). So entanglement neither breaks causality nor can it transfer any information. $$$$ Sometimes one comes across the argument that the violation of Bell inequalities shows that entanglement is still something more than classical perception would allow. So let's have a look at a certain expectation value. The axes for spin measurement shall be labeled by normalized vectors $\vec{a}$ and $\vec{b}$ such that $\vec{a}\cdot\vec{b} = \cos\theta$. Consider \begin{equation} \langle \Psi|\vec{a}\cdot\vec{S_1} \ \ \vec{b} \cdot \vec{S}_2 | \Psi \rangle = -\frac{\hbar^2}{4}\vec{a}\cdot\vec{b} = -\frac{\hbar^2}{4} \cos\theta \tag{1}\end{equation} which is the expectation value of the product of both measurement results. Here we have $\vec{S} = \frac{\hbar}{2}(\sigma_x, \sigma_y, \sigma_z)^T$ with $\sigma_x, \sigma_y, \sigma_z$ the Pauli matrices. We now follow the reasoning of John Bell in his original work, since other, similar inequalities are based on the same problem. The argument goes like this: Assume a classical, statistical system with non-hidden and hidden variables, all labeled by $\vec{\lambda} = (\lambda_1, \dots, \lambda_n)$ for some $n\in\mathbb N$. Furthermore there exist two functions $A(\vec{a},\vec{\lambda})$ and $B(\vec{b},\vec{\lambda})$ that give the results of the spin measurement on particle 1 and 2, respectively. They can only yield $\pm\frac{\hbar}{2}$, since that is the only outcome of experiment.
Those functions depend on one measurement axis only, because there shall be no action between measurement apparatus 1 and 2 (this is the assumed locality). $$$$ Because the system is studied on a statistical basis, there exists a probability density $ \varrho(\vec{\lambda}) $ that is a function of the system parameters $\vec{\lambda}$ and allows calculation of the expectation value $$ E(\vec{a},\vec{b}) = \int \varrho(\vec{\lambda}) \cdot A(\vec{a},\vec{\lambda}) B(\vec{b},\vec{\lambda}) \ d^n\lambda $$ which should equal the one from above (1) if it is to be interpreted on a classical, local basis (Note: one can incorporate discrete statistical variables by terms like $\sum_j \alpha_j \cdot \delta(c_j-\lambda_m)$). The insidious assumption here is that $\varrho$ is not a function of the axis vectors $\vec{a}$ and $\vec{b}$. This is, however, quite natural for classical systems with correlation. The point is: allowing $\varrho(\vec{\lambda}, \vec{a}, \vec{b})$ or even just $\varrho(\vec{\lambda}, \vec{a} \cdot \vec{b})$, the Bell inequalities cannot be derived! Such probability densities can cause violation of the inequality. To understand that, I will now derive them and point out which step is not possible with the modified density: $$$$ Assume $$ E(\vec{a},\vec{b}) = -\frac{\hbar^2}{4} \vec{a} \cdot \vec{b} \tag2 $$ so that the quantum mechanical description is in agreement with the classical one. For $\vec{a} = \vec{b}$: \begin{equation} \begin{aligned} -\frac{\hbar^2}{4} & = \int \underbrace{\varrho(\vec{\lambda})}_{\ge 0} \cdot \underbrace{A(\vec{a},\vec{\lambda}) B(\vec{a},\vec{\lambda})}_{\ge -\frac{\hbar^2}{4}} \, d^n\lambda \\ & \Leftrightarrow \\ 0 & = \int \underbrace{\varrho(\vec{\lambda})}_{\ge 0} \cdot \left( \underbrace{A(\vec{a},\vec{\lambda}) B(\vec{a},\vec{\lambda}) + \frac{\hbar^2}{4}}_{\ge 0} \right) \, d^n\lambda \end{aligned}\end{equation} because $\varrho$ is a normalized probability density.
It follows that\begin{equation} \begin{aligned} A(\vec{a},\vec{\lambda}) B(\vec{a},\vec{\lambda}) = -\frac{\hbar^2}{4} \end{aligned}\end{equation}is a valid equation under the integral with $\varrho$. This can only hold if\begin{equation} \begin{aligned} B(\vec{a},\vec{\lambda}) = - A(\vec{a},\vec{\lambda}) \end{aligned} \tag3\end{equation}.Note that this holds for any vector $\vec{a}$. Now take another normalized vector $\vec{c}$ and do the following calculations:\begin{align} \frac{\hbar^2}{4}|(-\vec{a}\cdot\vec{b}) - (-\vec{a}\cdot\vec{c})| & = |E(\vec{a},\vec{b}) - E(\vec{a},\vec{c}) | \\ & = \left| - \int \varrho(\vec{\lambda}) \cdot (A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) - A(\vec{a},\vec{\lambda}) A(\vec{c},\vec{\lambda})) \, d^n\lambda \right| \\ & = \left| \int \varrho(\vec{\lambda}) \cdot A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) \cdot (1 - \frac{4}{\hbar^2}A(\vec{b},\vec{\lambda}) A(\vec{c},\vec{\lambda})) \, d^n\lambda \right| \\ & \le \int | \varrho(\vec{\lambda}) | \cdot | A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) | \cdot |1 - \frac{4}{\hbar^2}A(\vec{b},\vec{\lambda}) A(\vec{c},\vec{\lambda})| \, d^n\lambda \\ & = \int \varrho(\vec{\lambda}) \cdot (\frac{\hbar^2}{4} - A(\vec{b},\vec{\lambda}) A(\vec{c},\vec{\lambda})) \, d^n\lambda \\ & = \frac{\hbar^2}{4} + E(\vec{b},\vec{c}) = \frac{\hbar^2}{4} - \frac{\hbar^2}{4}\vec{b}\cdot\vec{c} \tag4\end{align} In the first equality we used (2). In the second we used (3). In the third we used $A(\vec{b},\vec{\lambda})^2 = \frac{\hbar^2}{4}$. The fourth step is the triangle inequality for integrals. In the fifth step we used $A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) = \pm \frac{\hbar^2}{4}$ and $\varrho(\vec{\lambda}) \ge 0$. In the last step we used (2) and the fact that $\varrho$ is normalized. 
So we finally have Bell's inequality\begin{equation} \begin{aligned} |\vec{a}\cdot\vec{b} - \vec{a}\cdot\vec{c}| + \vec{b}\cdot \vec{c} \le 1 \, , \end{aligned} \tag5\end{equation} which can be violated for some choice of $\vec{a},\vec{b},\vec{c}$. This usually shows that our first assumption (2) is false. Therefore, no classical, local system should be able to describe the expectation value (1). $$$$ With the modified probability density the steps in (4) look like this:\begin{align} \frac{\hbar^2}{4}|(-\vec{a}\cdot\vec{b}) - (-\vec{a}\cdot\vec{c})| & = |E(\vec{a},\vec{b}) - E(\vec{a},\vec{c}) | \notag \\ & = \left| - \int \varrho(\vec{\lambda}, \vec{a}, \vec{b}) \cdot A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) - \varrho(\vec{\lambda}, \vec{a}, \vec{c}) \cdot A(\vec{a},\vec{\lambda}) A(\vec{c},\vec{\lambda}) \, d^n\lambda \right| \notag \\ & = \left| \int A(\vec{a},\vec{\lambda}) A(\vec{b},\vec{\lambda}) (\varrho(\vec{\lambda}, \vec{a}, \vec{b}) - \varrho(\vec{\lambda}, \vec{a}, \vec{c}) \frac{4}{\hbar^2}A(\vec{b},\vec{\lambda}) A(\vec{c},\vec{\lambda})) \, d^n\lambda \right| \notag \\ & \le \int \frac{\hbar^2}{4} \cdot \left| \varrho(\vec{\lambda}, \vec{a}, \vec{b}) - \varrho(\vec{\lambda}, \vec{a}, \vec{c}) \frac{4}{\hbar^2}A(\vec{b},\vec{\lambda}) A(\vec{c},\vec{\lambda}) \right| \, d^n\lambda\end{align} Note that one cannot proceed from here since in general $\varrho(\vec{\lambda}, \vec{a},\vec{b}) \ne \varrho(\vec{\lambda}, \vec{a},\vec{c})$. Also the second equality shouldn't work here anyway, since (3) is only valid when multiplied by $\varrho(\vec{\lambda},\vec{a},\vec{a})$. For instance, when $\varrho(\vec{\lambda},\vec{a},\vec{a}) = 0$ equation (3) can be violated in general.
Nevertheless, one could only try to use another triangle inequality on the term $|\dots|$, leaving us finally with the inequality\begin{equation} \begin{aligned} |\vec{a}\cdot\vec{b} - \vec{a}\cdot\vec{c}| \le 2 \, , \end{aligned}\end{equation} which cannot be violated by any choice of $\vec{a},\vec{b},\vec{c}$. $$$$ In summary: if one allows probability densities $\varrho(\vec{\lambda}, \vec{a}, \vec{b})$ that depend on some parameters of the measurement, the derivation of an inequality which is violated by quantum mechanical expectation values is not possible in the usual way. Above, I already argued that the dependence on $\vec{a}, \vec{b}$ is in general no cause for non-local behaviour, as long as the reduced probability of a subsystem depends only on its own parameters. This problem is inherent to inequalities that are derived by the same arguments as Bell's inequality: see for example the CHSH inequality on page 527, equation 2, which is frequently used in experiments! $$$$ So if we could find some functions $A$ and $B$ that satisfy our locality conditions from above, there would be no reason to think of the expectation value (1) as a non-local one. Take\begin{align} p_{i,j}(\vec{a},\vec{b}) & = \frac{1}{4} (1 - i j \ \vec{a}\cdot\vec{b}) \\ A(i,\vec{a}) & = \frac{\hbar}{2} \ i \\ B(j,\vec{b}) & = \frac{\hbar}{2} \ j\end{align}Then we have$$ E(\vec{a}, \vec{b}) = \sum_{i,j \in \{1,-1 \}} p_{i,j}(\vec{a},\vec{b}) \cdot A(i,\vec{a}) B(j,\vec{b}) = - \frac{\hbar^2}{4} \ \vec{a}\cdot\vec{b} = - \frac{\hbar^2}{4} \ \cos\theta$$ which equals (1) on a purely classical, local basis.
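As a numerical sanity check of the formulas above (a sketch in Python; not part of the derivation), one can verify the normalization, the marginals, and the expectation value of the singlet probabilities, and exhibit a violation of inequality (5) with three coplanar axes:

```python
# Numerical check of the singlet-state formulas and of Bell's inequality (5).
import math

def P(i, j, theta):
    """Joint probability P_{i,j} = (1 - i*j*cos(theta))/4, with i, j in {+1, -1}."""
    return (1 - i * j * math.cos(theta)) / 4

theta = 1.234  # arbitrary angle between the two measurement axes
outcomes = [1, -1]

# Probabilities are normalized and both marginals equal 1/2:
assert abs(sum(P(i, j, theta) for i in outcomes for j in outcomes) - 1) < 1e-12
for i in outcomes:
    assert abs(sum(P(i, j, theta) for j in outcomes) - 0.5) < 1e-12

# Expectation value of the product of outcomes (in units of hbar^2/4):
E = sum(i * j * P(i, j, theta) for i in outcomes for j in outcomes)
assert abs(E + math.cos(theta)) < 1e-12   # E = -cos(theta), matching (1)

# Bell's inequality (5): |a.b - a.c| + b.c <= 1.
# Coplanar unit vectors at angles 0, 45 and 90 degrees violate it:
ab = math.cos(math.pi / 4)   # a.b
ac = math.cos(math.pi / 2)   # a.c
bc = math.cos(math.pi / 4)   # b.c
lhs = abs(ab - ac) + bc      # = sqrt(2) > 1, violating (5)
assert lhs > 1
```

The chosen axes give $|\vec a\cdot\vec b - \vec a\cdot\vec c| + \vec b\cdot\vec c = \sqrt 2$, which exceeds the bound in (5).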
Ultimately, you'll need a mathematical proof of correctness. I'll get to some proof techniques for that below, but first, before diving into that, let me save you some time: before you look for a proof, try random testing. Random testing As a first step, I recommend you use random testing to test your algorithm. It's amazing how effective this is: in my experience, for greedy algorithms, random testing seems to be unreasonably effective. Spend 5 minutes coding up your algorithm, and you might save yourself an hour or two trying to come up with a proof. The basic idea is simple: implement your algorithm. Also, implement a reference algorithm that you know to be correct (e.g., one that exhaustively tries all possibilities and takes the best). It's fine if your reference algorithm is asymptotically inefficient, as you'll only run this on small problem instances. Then, randomly generate one million small problem instances, run both algorithms on each, and check whether your candidate algorithm gives the correct answer in every case. Empirically, if your candidate greedy algorithm is incorrect, you'll typically discover this during random testing. If it seems to be correct on all test cases, then you should move on to the next step: coming up with a mathematical proof of correctness. Mathematical proofs of correctness OK, so we need to prove our greedy algorithm is correct: that it outputs the optimal solution (or, if there are multiple optimal solutions that are equally good, that it outputs one of them). The basic principle is an intuitive one: Principle: If you never make a bad choice, you'll do OK. Greedy algorithms usually involve a sequence of choices. The basic proof strategy is that we're going to try to prove that the algorithm never makes a bad choice. Greedy algorithms can't backtrack -- once they make a choice, they're committed and will never undo that choice -- so it's critical that they never make a bad choice. What would count as a good choice?
If there's a single optimal solution, it's easy to see what is a good choice: any choice that's identical to the one made by the optimal solution. In other words, we'll try to prove that, at any stage in the execution of the greedy algorithms, the sequence of choices made by the algorithm so far exactly matches some prefix of the optimal solution. If there are multiple equally-good optimal solutions, a good choice is one that is consistent with at least one of the optima. In other words, if the algorithm's sequence of choices so far matches a prefix of one of the optimal solutions, everything's fine so far (nothing has gone wrong yet). To simplify life and eliminate distractions, let's focus on the case where there are no ties: there's a single, unique optimal solution. All the machinery will carry over to the case where there can be multiple equally-good optima without any fundamental changes, but you have to be a bit more careful about the technical details. Start by ignoring those details and focusing on the case where the optimal solution is unique; that'll help you focus on what is essential. There's a very common proof pattern that we use. We'll work hard to prove the following property of the algorithm: Claim: Let $S$ be the solution output by the algorithm and $O$ be the optimum solution. If $S$ is different from $O$, then we can tweak $O$ to get another solution $O^*$ that is different from $O$ and strictly better than $O$. Notice why this is useful. If the claim is true, it follows that the algorithm is correct. This is basically a proof by contradiction. Either $S$ is the same as $O$ or it is different. If it is different, then we can find another solution $O^*$ that's strictly better than $O$ -- but that's a contradiction, as we defined $O$ to be the optimal solution and there can't be any solution that's better than that. 
So we're forced to conclude that $S$ can't be different from $O$; $S$ must always equal $O$, i.e., the greedy algorithm always outputs the correct solution. If we can prove the claim above, then we've proven our algorithm correct.

Fine. So how do we prove the claim? We think of a solution $S$ as a vector $(S_1,\dots,S_n)$ which corresponds to the sequence of $n$ choices made by the algorithm, and similarly, we think of the optimal solution $O$ as a vector $(O_1,\dots,O_n)$ corresponding to the sequence of choices that would lead to $O$. If $S$ is different from $O$, there must exist some index $i$ where $S_i \ne O_i$; we'll focus on the smallest such $i$. Then, we'll tweak $O$ by changing it a little bit in the $i$th position to match $S_i$, i.e., we'll tweak the optimal solution $O$ by changing the $i$th choice to the one chosen by the greedy algorithm, and then we'll show that this leads to an even better solution. In particular, we'll define $O^*$ to be something like $$O^* = (O_1,O_2,\dots,O_{i-1},S_i,O_{i+1},O_{i+2},\dots,O_n),$$ except that often we'll have to modify the $O_{i+1},O_{i+2},\dots,O_n$ part slightly to maintain global consistency. Part of the proof strategy involves some cleverness in defining $O^*$ appropriately. Then, the meat of the proof will be in somehow using facts about the algorithm and the problem to show that $O^*$ is strictly better than $O$; that's where you'll need some problem-specific insights. At some point, you'll need to dive into the details of your specific problem, but this gives you a sense of the structure of a typical proof of correctness for a greedy algorithm.

A simple example: Subset with maximal sum

This might be easier to understand by working through a simple example in detail. Let's consider the following problem:

Input: A set $U$ of integers, an integer $k$
Output: A set $S \subseteq U$ of size $k$ whose sum is as large as possible

There's a natural greedy algorithm for this problem: Set $S := \emptyset$.
For $i := 1,2,\dots,k$: Let $x_i$ be the largest number in $U$ that hasn't been picked yet (i.e., the $i$th largest number in $U$), and add $x_i$ to $S$.

Random testing suggests this always gives the optimal solution, so let's formally prove that this algorithm is correct. Note that the optimal solution is unique (the elements of $U$ are distinct, so the top-$k$ subset has a strictly larger sum than any other $k$-element subset), so we won't have to worry about ties. Let's prove the claim outlined above:

Claim: Let $S$ be the solution output by this algorithm on input $U,k$, and $O$ the optimal solution. If $S \ne O$, then we can construct another solution $O^*$ whose sum is even larger than $O$'s.

Proof. Assume $S \ne O$, and let $i$ be the index of the first iteration where $x_i \notin O$. (Such an index $i$ must exist, since we've assumed $S \ne O$ and by the definition of the algorithm we have $S=\{x_1,\dots,x_k\}$.) Since (by assumption) $i$ is minimal, we must have $x_1,\dots,x_{i-1} \in O$, and in particular, $O$ has the form $O=\{x_1,x_2,\dots,x_{i-1},x'_i,x'_{i+1},\dots,x'_k\}$, where the numbers $x_1,\dots,x_{i-1},x'_i,\dots,x'_k$ are listed in descending order. Looking at how the algorithm chooses $x_1,\dots,x_i$, we see that we must have $x_i > x'_j$ for all $j\ge i$. In particular, $x_i > x'_i$. So, define $O^* = (O \setminus \{x'_i\}) \cup \{x_i\}$, i.e., we obtain $O^*$ by deleting $x'_i$ from $O$ and adding $x_i$. Now the sum of the elements of $O^*$ is the sum of the elements of $O$ plus $x_i-x'_i$, and $x_i-x'_i>0$, so $O^*$'s sum is strictly larger than $O$'s sum. This proves the claim. $\blacksquare$

The intuition here is that if the greedy algorithm ever makes a choice that is inconsistent with $O$, then we can prove $O$ could be made even better by modifying it to include the element chosen by the greedy algorithm at that stage.
Since $O$ is optimal, there can't possibly be any way to make it even better (that would be a contradiction), so the only remaining possibility is that our assumption was wrong: in other words, the greedy algorithm will never make a choice that is inconsistent with $O$. This argument is often called an exchange argument or exchange lemma. We found the first place where the optimal solution differs from the greedy solution and imagined exchanging that element of $O$ for the corresponding greedy choice (exchanging $x'_i$ for $x_i$). Some analysis showed that this exchange can only improve the optimal solution -- but by definition, the optimal solution can't be improved. So the only conclusion is that there must not be any place where the optimal solution differs from the greedy solution. If you have a different problem, look for opportunities to apply this exchange principle in your specific situation.
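To make the random-testing advice above concrete, here is a minimal Python sketch (the function names are mine): it pits the pick-the-$k$-largest greedy algorithm against a brute-force reference on many small random instances.

```python
import random
from itertools import combinations

def greedy_max_subset(U, k):
    # Greedy: repeatedly take the largest number not yet picked.
    return sorted(U, reverse=True)[:k]

def brute_force_max_subset(U, k):
    # Reference algorithm: exhaustively try every size-k subset.
    return max(combinations(U, k), key=sum)

# Random testing: many small instances, compare the two answers.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    k = random.randint(1, n)
    U = random.sample(range(-50, 50), n)  # distinct integers
    assert sum(greedy_max_subset(U, k)) == sum(brute_force_max_subset(U, k))
print("all random tests passed")
```

If the greedy idea were wrong, an assertion would almost certainly fire within the first few hundred instances; only once this passes is it worth investing in an exchange-argument proof.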
Formulas

Basic

$ { \frac{ 1,200 }{ \mbox{ ship speed } } \cdot \sqrt{ ( x_2 - x_1 ) ^ 2 + ( y_2 - y_1 ) ^ 2 } } $

Notes: This is the formula to use when you first start the game and have not yet discovered any researches that increase your travel speed. The result is in minutes, and the seconds part is always rounded up to the nearest whole number.

Cargo Ships

The speed of a Cargo Ship is 60, so this simplifies to: $ { 20 \cdot \sqrt{ ( x_2 - x_1 ) ^ 2 + ( y_2 - y_1 ) ^ 2 } } $

Example

If your Cargo Ship is going from 30:30 (x1:y1) to 31:30 (x2:y2) (straight line): $ { 20 \cdot \sqrt{ ( 31 - 30 ) ^ 2 + ( 30 - 30 ) ^ 2 } } $ = $ { 20 \cdot ( 1.0 ) } $ = 20 minutes.

If your Cargo Ship is going from 30:30 (x1:y1) to 31:31 (x2:y2) (diagonal): $ { 20 \cdot \sqrt{ ( 31 - 30 ) ^ 2 + ( 31 - 30 ) ^ 2 } } $ = $ { 20 \cdot \sqrt{ 2 } } $ = $ { 20 \cdot 1.41 } $ = 28.28 min ≈ 28 minutes 17 seconds.

Note: A distance of zero (the towns are on the same island) is treated as a distance of 0.5, giving $ { 0.5 \cdot 20 } $ = 10 minutes.

Advanced

$ { \frac { Distance \ast 72,000 } { ( UnitSpeed + \left [ UnitSpeed \ast SeaMapArchiveLevel ~ ( for ~ ships ~ only ) \right] \ast Gov\% ) \ast ( 1 + ( Poseidon\% + Draft\% ) + TritonLevel ) } } $

Distance: The number of island spaces you will be traveling.
UnitSpeed: See the individual units / ships to get their speed, indicated by the speed icon.
SeaMapArchiveLevel: Can be [0 through #.#] for [Levels 0 through 40].
Gov%: The government form Oligarchy increases your speed by 10% (a factor of 1.10); otherwise the factor is 1.0.
Poseidon%: Can be [0.0, 0.10, 0.30, 0.50, 0.70, 1.0] for levels [0, 1, 2, 3, 4, 5].
Draft%: Can be [66.4%, 49.8%, 33.2%, 16.6%, 0%] based on the amount of cargo per merchant ship [100, 200, 300, 400, 500].
TritonLevel: Can be [0, 1, 2, 3] for [+0%, +100%, +200%, +300%].
Notes: This is the more accurate formula to use after you have learned the researches that increase your travel speed. The result is in seconds and is always rounded up to the nearest whole number.

Time to reach a city on the same island

Minutes | Unit
7.5 | Gyrocopter
10 | Doctor, Archer, Hoplite, Slinger, Spearman, Sulphur Carabineer, Swordsman
15 | Catapult, Cook, Mortar, Ram, Steam Giant
30 | Balloon-Bombardier
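As a sanity check of the basic formula, here is a small Python sketch (the function name is mine) that reproduces the two Cargo Ship examples above, including the same-island special case; the rounding of seconds is not modeled.

```python
import math

def cargo_travel_minutes(x1, y1, x2, y2):
    # Basic formula with ship speed 60: 1200/60 = 20 minutes per island space.
    dist = math.hypot(x2 - x1, y2 - y1)
    if dist == 0:
        dist = 0.5  # same island counts as half a space
    return 20 * dist

print(cargo_travel_minutes(30, 30, 31, 30))  # straight line: 20.0 minutes
print(cargo_travel_minutes(30, 30, 31, 31))  # diagonal: ~28.28 minutes
print(cargo_travel_minutes(30, 30, 30, 30))  # same island: 10.0 minutes
```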
Short Answer: Your work is perfectly fine if your lower and upper integration limits satisfy $0 < a \leq b$. In that case your answer $2u - \ln(u)$ even has a nice closed form purely in terms of $x$: $$ \int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} = \Big[\big(1 + \sqrt{4x + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4x + 1}\big)\Big)\Big]_a^b $$ This formula also continues to work for a lower limit of $a = 0$ if you interpret the integral and/or the nested radical $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in the denominator properly enough. Long Answer (Analysis): Since you used $u$-substitution, your method should work as long as the conditions for integration by substitution are met. Say you are integrating over some interval $[a, b]$. You have to verify: Does the function $u(x) = \sqrt{x + u(x)}$ that you defined implicitly actually make sense over $[a, b]$? In other words, is there really a function $u : [a, b] \to \Bbb R$ that satisfies that recursion? Is the function $u(x)$ actually differentiable over $[a, b]$? $\underline{\textit{There is some good news for these questions:}}$ As long as $x > 0$, there is a well-defined expression for $u$ in terms of $x$ when $u = \sqrt{x + u}$. To see this, we need to translate the intuitive expression $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ into the precise language of calculus. Only then can we bring the full power of calculus to bear on this problem. So, formally, what is going on with a nested radical like $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ is this: Let $u_{x,1} = \sqrt{x}$ and define recursively the sequence $u_{x,n + 1} = \sqrt{x + u_{x,n}}$ ($n \in \Bbb Z_+$). If $$u_x = \lim\limits_{n \to \infty}u_{x,n}$$ exists, then we may define our sought-after function $u$ at $x$ to be $u(x) = u_x$. In essence, the limit $\lim\limits_{n \to \infty}u_{x,n}$ is mathematically what we define the expression $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ to be.
And we can easily check that $u_x = \sqrt{x + u_x}$ by taking the limit as $n \to \infty$ on both sides of the equation $u_{x,n + 1} = \sqrt{x + u_{x,n}}$. Now the good news is that, as long as $x > 0$, you can show that the sequence $u_{x,n}$ is bounded and monotonically increasing, so that it does converge to a definite limit, namely $$u_x = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$$ Hence, our function $u(x) = u_x$ is well-defined for $x > 0$. Also, note that the formula above should not surprise you. You can easily see where it originated: informally, if you take your substitution equation $u^2 - u = x$ and write it as a quadratic equation $u^2 - u - x = 0$, you can solve it by thinking of $x$ as a constant. And indeed, one of the solutions that pops out is precisely $u_+ = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$. You can eliminate the other solution $u_- = \frac{1}{2}\big(1 - \sqrt{4x + 1}\big)$ since it is negative if $x > 0$ and by convention square roots are positive. So as long as $x > 0$, you can safely take $$u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$$ as the $u$-substitution function which satisfies $u = \sqrt{x + u}$. In fact, as is apparent from the formula, $u(x)$ is even differentiable in this case. $\underline{\textit{But there are caveats:}}$ $1.\ \textbf{Note that for $x < 0$, the limit does not make sense:}$ the very first sequence element $u_{x,1} = \sqrt{x}$ is not real. So, from this very analysis, you can immediately conclude that you should not be integrating over negative values in your integral. $2.\ \textbf{Next, at $x = 0$, things almost work out but break down anyway:}$ Note, we only managed to eliminate $\frac{1}{2}\big(1 - \sqrt{4x + 1}\big)$ as a candidate for the limit above because it was negative for $x > 0$. Well, if $x = 0$, then $u_- = \frac{1}{2}\big(1 - \sqrt{4x + 1}\big) = 0$ and you can no longer eliminate it that easily.
So, we must go back to our definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in terms of sequences to arbitrate between $u_+$ and $u_-$. Applying that definition when $x = 0$, we see that $u_- = 0$ is the candidate that is chosen this time, not $u_+ = 1$. This is because in this case all the sequence elements $u_{x,n}$ are zero: $$u_{x=0,1} = \sqrt{x} = \sqrt{0} = 0,\quad u_{x=0,2} = \sqrt{x + u_{x=0,1}} = \sqrt{0 + 0} = 0,\quad \ldots \text{ etc}$$ Hence, $\lim\limits_{n \to \infty}u_{x=0,n} = 0$ and $u(0) = 0$. However, approaching $0$ from the right, we see that $$\lim\limits_{x \to 0+}u(x) = \frac{1}{2}\big(1 + \sqrt{4\cdot0 + 1}\big) = 1$$ And therefore, even though $u(x)$ is defined at $x = 0$, it is sadly not continuous there, let alone differentiable. So the $u$-substitution theorem no longer applies. In any case, there is an even worse problem when $x = 0$. Note that the function you are trying to integrate, $f(x) = \frac{1}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}}$, is undefined at $x = 0$ because, as we saw, our definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in terms of sequences gives you a $0$ when $x = 0$, so there would be a $0$ in the denominator of $f(x)$.
$\underline{\textit{Okay, so we have concluded so far that:}}$ As long as your integration interval $[a, b]$ satisfies $0 < a \leq b$, your work should go through and you can use $u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$ as the explicit formula for $u$ to express your final integral answer: $$ \int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} = \Big[\big(1 + \sqrt{4x + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4x + 1}\big)\Big)\Big]_a^b $$ $\underline{\textit{Fixing the breakdown at $x = 0$:}}$ If you really want $x = 0$ as one of the limits, e.g. $$\int_0^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}}$$ for $b > 0$, you can do so in two ways, both of which lead to the same result: You can modify the definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ thus: it defaults to the usual definition via sequences if $x > 0$ and to $\frac{1}{2}(1 + \sqrt{4 \cdot 0 + 1}) = 1$ if $x = 0$. Then you can safely use $u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$ for all $x \geq 0$. And the answer you will get for your integral is exactly what you would expect by plugging $a = 0$ into the closed form I gave above: $$\big(1 + \sqrt{4b + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4b + 1}\big)\Big) - 2$$ On the other hand, you can instead take a limiting integral, in the same spirit that we define $\int_0^b \frac{1}{x^2}dx$ to get around the singularity of $\frac{1}{x^2}$ at $0$. That is, you can define: \begin{align*}\int_0^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} &:= \lim_{a \to 0+}\int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} \\&= \lim_{a \to 0+}\big[2u(x) - \ln(u(x))\big]_a^b\end{align*} This leads to the same answer because ultimately $\lim\limits_{a \to 0^+}u(a) = 1$.
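For readers who want to double-check the closed form numerically, here is a small Python sketch (function names are mine): it compares the iterated radical against $u(x) = \frac{1}{2}(1+\sqrt{4x+1})$, and the closed-form antiderivative against a simple trapezoidal quadrature.

```python
import math

def u(x):
    # Closed-form value of the nested radical sqrt(x + sqrt(x + ...)), x > 0.
    return 0.5 * (1 + math.sqrt(4 * x + 1))

def nested(x, n=60):
    # Direct iteration u_{x,n+1} = sqrt(x + u_{x,n}) starting from sqrt(x).
    v = math.sqrt(x)
    for _ in range(n):
        v = math.sqrt(x + v)
    return v

def closed_form(a, b):
    # [2u - ln u]_a^b, the antiderivative found by the u-substitution.
    F = lambda t: 2 * u(t) - math.log(u(t))
    return F(b) - F(a)

def numeric_integral(a, b, n=100000):
    # Trapezoidal quadrature of the integrand 1/u(x) on [a, b].
    h = (b - a) / n
    s = 0.5 * (1 / u(a) + 1 / u(b))
    for i in range(1, n):
        s += 1 / u(a + i * h)
    return s * h

print(abs(nested(2) - u(2)))                            # tiny: both equal 2
print(abs(closed_form(1, 4) - numeric_integral(1, 4)))  # tiny
```

For $x = 2$ the radical converges to exactly $2$ (since $\sqrt{9}=3$), which makes a convenient spot check.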
I was wondering what tools of algebraic topology are usually used to show that two things have the same homotopy type? Hatcher doesn't really talk about this in his book, even though he defines the concept on page 3. Of course we can compute the homology or homotopy groups of a space, but just showing that they agree is not enough, as far as I know. For example, knowing that the Poincaré conjecture is true, we know that every closed simply-connected 3-manifold is the 3-sphere. It follows that they must have the same homotopy type. Is this any easier to prove than Poincaré itself? If so, how? The reason I picked this example is that I know they are homotopy equivalent and I don't know an obvious map between the spaces. EDIT: Dylan actually gave what's needed to finish off a proof. The map given by the generator of $\pi_3$ can easily be checked to induce isomorphisms on all homology groups. Now replace the $3$-manifold $M$ by a $2$-connected CW-model $Z$ by CW-approximation. Functoriality of CW-models then induces a map $f:S^3\to Z$ which induces isomorphisms on homology. The standard argument that replaces $Z$ by the mapping cylinder of $f$ and then applies Hurewicz on $H_n(M_f,S^3)$ shows that $\pi_n(M_f,S^3)=0$ for all $n$, which implies that $M_f$ deformation retracts onto $S^3$, so they are homotopy equivalent. This gives the following chain of homotopy equivalences $$S^3\simeq M_f\simeq Z\simeq M$$ so it follows that $M$ and $S^3$ have the same homotopy type.
I am facing a simple (at first glance) problem. I need to implement a numerical scheme for the solution of the first-order wave propagation equation with chromatic dispersion included. My original problem is (for a forward propagating wave): \begin{equation} \frac{1}{c} \frac{\partial u(x,t)}{\partial t} = -\frac{ \partial u}{ \partial x} - \frac{i \beta_2}{2} \frac{ \partial^2 u}{ \partial t^2}, \end{equation} where $c$ is the velocity of light, $u$ is the (complex) envelope of the field, and $\beta_2$ is the 2nd-order dispersion coefficient. Assume also that the wave is propagating inside a ring cavity of length, say, $L$, where I take periodic boundary conditions, $u(x+L,t) = u(x,t)$, and also that at $t=0$ we know $u(x,0)$ and $u_t(x,0)$. I am trying to implement a time-stepping numerical scheme and in the process I tried the following: 1) An MOL approach, where I do semidiscretization along $x$, reduce the set of equations to a system of first-order ODEs (by setting $v = \dot{u}$), and establish a system: \begin{equation} \begin{bmatrix} \dot{v} \\ \dot{u} \end{bmatrix} = A \begin{bmatrix} v \\ u \end{bmatrix} . \end{equation} When I solve the corresponding ODEs via 4th-order Runge-Kutta, Crank-Nicolson, or simply precomputing the matrix exponential, unfortunately, all my solutions eventually blow up to Inf. I implemented the periodic boundary conditions by modifying the matrix $A$ as $A \leftarrow PA$, where $P$ is the identity matrix with its first row an identical copy of the last row. I also tried a simple finite-differences approach where the spatial derivative is approximated via an upwind FD (first order), but to no avail. Lastly, I tried a Strang splitting approach based on the two equations: \begin{align} \frac{1}{c}\dot{u} &= -\frac{\partial u }{\partial x} \\ \frac{1}{c}\dot{u} &= -\frac{i\beta_2}{2}\frac{\partial^2 u }{\partial t^2} , \end{align} where this time the solution does not blow up but it looks somehow unphysical.
Does someone here know a stable and possibly higher than first-order time-stepping scheme for this equation? Please note that a solution based on a Fourier transform in $x$ is also not a good option for me, because I would like the flexibility to implement different non-periodic boundary conditions. I would also dislike substituting the second-order derivative in time with one in space, because that complicates the implementation of boundary conditions. Thanks.
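Not an answer to the full dispersive problem, but when debugging a blow-up like the one described above it can help to confirm that the advection part alone is stable with first-order upwinding under the CFL condition $c\,\Delta t/\Delta x \le 1$. A minimal Python sketch (all parameter values are illustrative):

```python
import math

# Pure advection u_t = -c u_x, periodic BCs, first-order upwind in space,
# forward Euler in time; stable (and monotone) when CFL = c*dt/dx <= 1.
c, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.5 * dx / c                      # CFL number 0.5
u = [math.sin(2 * math.pi * i * dx) for i in range(N)]

for _ in range(400):                   # advance one full transit of the ring
    # u[i-1] wraps around at i = 0 via negative indexing: periodic BC.
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(N)]

peak = max(abs(v) for v in u)
print(peak)                            # stays bounded by 1: no blow-up
```

If this stays bounded while the full scheme with the $i\beta_2\,\partial_t^2$ term diverges, the instability is coming from the discretization of the dispersive term, not from the advection.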
Suppose I have the process $X = X(t)$ for $t \ge 0$ given by $X(t) = \sqrt{t}\,Z$ for all $t \ge 0$, where $Z$ has the standard normal distribution $N(0,1)$. Is this a Brownian motion? The solution yields: $$X(t)-X(s)=Z\sqrt{t} - Z\sqrt{s} \sim N\left( 0,(\sqrt{t}-\sqrt{s})^2\right) = N\left(0,t-2\sqrt{s\,t}+s\right)$$ and now we must compare with $X(t-s)$, etc. However, this is not my question. My question is: how does $Z\sqrt{t}-Z\sqrt{s}$ become $N(0,(\sqrt{t}-\sqrt{s})^2)$? Why is it not $N(0,\sqrt{t}-\sqrt{s})$, without squaring? So basically, what does $Z(\sqrt{t}-\sqrt{s})$ mean intuitively and mathematically? Would be grateful for any answer.
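The key fact behind the question is that $\sqrt{t}$ and $\sqrt{s}$ are constants, so $Z\sqrt{t}-Z\sqrt{s} = (\sqrt{t}-\sqrt{s})Z$ is a constant times a standard normal, and scaling a random variable by $a$ multiplies its variance by $a^2$. A quick Monte Carlo sketch in Python illustrates this (the values of $t$ and $s$ are chosen arbitrarily so that $a \ne a^2$):

```python
import math
import random

random.seed(1)
t, s = 9.0, 1.0
a = math.sqrt(t) - math.sqrt(s)        # the constant multiplying Z; here a = 2

# Sample (sqrt(t) - sqrt(s)) * Z and estimate its variance.
samples = [a * random.gauss(0, 1) for _ in range(200000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(var)   # close to a**2 = 4, not to a = 2
```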
I have been trying for 2-3 days now to get L2-regularized logistic regression to work in Matlab (CVX) and Python (CVXPY), but with no success. I am fairly new to convex optimization, so I am quite frustrated. Following is the equation that I am trying to solve using CVX/CVXPY. I have taken this equation from the paper https://intentmedia.github.io/assets/2013-10-09-presenting-at-ieee-big-data/pld_js_ieee_bigdata_2013_admm.pdf In the case of L2-regularized logistic regression the problem becomes: $$ \text{minimize} \frac{1}{m}\sum_{i=1}^{m}\log[1 + \exp(-b_i\mathbf{A}_i^Tx)] + \lambda\Vert x\Vert_2^2$$ where $\lambda$ is the regularization factor. My Matlab (CVX) code is

```matlab
function L2
m = 800; N = 5;
lambda = 0.000001;
A = load('/path/to/training/file');
b = A(:,6);    % label vector (800x1)
A = A(:,1:5);  % feature matrix (800x5)
cvx_begin
    variable x(N)
    minimize( (1/m * sum( log(1+ exp(-1* A' * (b * x')) ) ) ) + lambda*(norm(x,2)) )
cvx_end
```

CVX returns an error saying "Your objective function is not a scalar", which makes sense, but the paper mentions the above equation. How can I solve it? After trying in Matlab, I tried CVXPY. Here is the Python code:

```python
from cvxopt import solvers, matrix, log, exp, mul
from cvxopt.modeling import op, variable
import numpy as np

n = 5
m = 800
data = np.ndarray(shape=(m, n), dtype=float)
bArray = []
file = open('/path/to/training/file')
i = 0
j = 0
for line in file:
    for num in line.split():
        if j == 5:
            bArray.append(float(num))
        else:
            data[i][j] = num
        j = j + 1
    j = 0
    i = i + 1

A = matrix(data)
b_mat = matrix(bArray)
m, n = A.size
lamb_default = 0.000001
x = variable(n)
b = -1 * b_mat
w = exp(A.T * b * x)
f = (1/m) + sum(log(1 + w)) + lamb_default * mul(x, x)
lp1 = op(f)
lp1.solve()
lp1.status
print(lp1.objective.value())
```

I get the error TypeError: incompatible dimensions So, my question is: what am I doing wrong in the code for the calculation of the L2 problem in CVX/CVXPY?
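For what it's worth, the objective itself is easy to evaluate in plain Python before handing it to a solver; the helper below (the name is mine) computes $\frac{1}{m}\sum_i \log(1+\exp(-b_i \mathbf{A}_i^T x)) + \lambda\lVert x\rVert_2^2$ and is a scalar by construction, which is what CVX is demanding of the modeled objective:

```python
import math

def l2_logreg_objective(A, b, x, lam):
    # (1/m) * sum_i log(1 + exp(-b_i * <a_i, x>)) + lam * ||x||_2^2
    m = len(A)
    loss = 0.0
    for a_i, b_i in zip(A, b):
        margin = b_i * sum(a * w for a, w in zip(a_i, x))
        loss += math.log1p(math.exp(-margin))
    reg = lam * sum(w * w for w in x)
    return loss / m + reg

# Tiny synthetic check: at x = 0 every loss term is log(2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -1.0, 1.0]
print(l2_logreg_objective(A, b, [0.0, 0.0], 1e-6))  # log(2) ≈ 0.6931
```

In solver terms, the shape problem in the posted code comes from the outer product `b * x'`: the argument of the log should be a length-$m$ vector built from the elementwise product of `b` with `A*x`, summed to a scalar. Recent CVXPY versions also provide a `logistic` atom for exactly this loss; check the current CVXPY documentation for details, as I have not verified it against the poster's versions.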
DISCLAIMER: Very rough notes from class. Some additional side notes, but otherwise barely edited. These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz. Determinant of Lorentz transformations We require that Lorentz transformations leave the dot product invariant, that is \( x \cdot y = x' \cdot y' \), or \begin{equation}\label{eqn:qftLecture3:20} x^\mu g_{\mu\nu} y^\nu = {x'}^\mu g_{\mu\nu} {y'}^\nu. \end{equation} Explicitly, with coordinate transformations \begin{equation}\label{eqn:qftLecture3:40} \begin{aligned} {x'}^\mu &= {\Lambda^\mu}_\rho x^\rho \\ {y'}^\mu &= {\Lambda^\mu}_\rho y^\rho \end{aligned} \end{equation} such a requirement is equivalent to demanding that \begin{equation}\label{eqn:qftLecture3:500} \begin{aligned} x^\mu g_{\mu\nu} y^\nu &= {\Lambda^\mu}_\rho x^\rho g_{\mu\nu} {\Lambda^\nu}_\kappa y^\kappa \\ &= x^\mu {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu y^\nu, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:60} g_{\mu\nu} = {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu \end{equation} multiplying by the inverse, we find \begin{equation}\label{eqn:qftLecture3:200} \begin{aligned} g_{\mu\nu} {\lr{\Lambda^{-1}}^\nu}_\lambda &= {\Lambda^\alpha}_\mu g_{\alpha\beta} {\Lambda^\beta}_\nu {\lr{\Lambda^{-1}}^\nu}_\lambda \\ &= {\Lambda^\alpha}_\mu g_{\alpha\lambda} \\ &= g_{\lambda\alpha} {\Lambda^\alpha}_\mu. \end{aligned} \end{equation} This is now amenable to expressing in matrix form \begin{equation}\label{eqn:qftLecture3:220} \begin{aligned} (G \Lambda^{-1})_{\mu\lambda} &= (G \Lambda)_{\lambda\mu} \\ &= ((G \Lambda)^\T)_{\mu\lambda} \\ &= (\Lambda^\T G)_{\mu\lambda}, \end{aligned} \end{equation} or \begin{equation}\label{eqn:qftLecture3:80} G \Lambda^{-1} = (G \Lambda)^\T.
\end{equation} Taking determinants (using the usual identities for determinants of products, transposes, and inverses), we find \begin{equation}\label{eqn:qftLecture3:100} det(G) det(\Lambda^{-1}) = det(G) det(\Lambda), \end{equation} or \begin{equation}\label{eqn:qftLecture3:120} det(\Lambda)^2 = 1, \end{equation} that is, \( det(\Lambda) = \pm 1 \). We will generally ignore the case of reflections in spacetime, which have a negative determinant. Smart-alec Peeter pointed out after class last time that we can do the same thing more easily in matrix notation \begin{equation}\label{eqn:qftLecture3:140} \begin{aligned} x' &= \Lambda x \\ y' &= \Lambda y \end{aligned} \end{equation} where \begin{equation}\label{eqn:qftLecture3:160} \begin{aligned} x' \cdot y' &= (x')^\T G y' \\ &= x^\T \Lambda^\T G \Lambda y, \end{aligned} \end{equation} which we require to equal \( x \cdot y = x^\T G y \) for all four vectors \( x, y \), that is \begin{equation}\label{eqn:qftLecture3:180} \Lambda^\T G \Lambda = G. \end{equation} From this we can find the result \ref{eqn:qftLecture3:120} immediately, without having to first translate from index notation to matrices. Field theory The electrostatic potential is an example of a scalar field \( \phi(\Bx) \) unchanged by SO(3) rotations \begin{equation}\label{eqn:qftLecture3:240} \Bx \rightarrow \Bx' = O \Bx, \end{equation} that is \begin{equation}\label{eqn:qftLecture3:260} \phi'(\Bx') = \phi(\Bx). \end{equation} Here \( \phi'(\Bx') \) is the value of the (electrostatic) scalar potential in a primed frame. However, the electrostatic field is not invariant under Lorentz transformation. We postulate that there is some scalar field \begin{equation}\label{eqn:qftLecture3:280} \phi'(x') = \phi(x), \end{equation} where \( x' = \Lambda x \) is an SO(1,3) transformation.
There are actually no stable particles (fields that persist at long distances) described by Lorentz scalar fields, although there are some unstable scalar particles such as the Higgs, pions, and kaons. However, much of our homework and discussion will be focused on scalar fields, since they are the easiest to start with. We first need to understand how derivatives \( \partial_\mu \phi(x) \) transform. Using the chain rule \begin{equation}\label{eqn:qftLecture3:300} \begin{aligned} \PD{x^\mu}{\phi(x)} &= \PD{x^\mu}{\phi'(x')} \\ &= \PD{{x'}^\nu}{\phi'(x')} \PD{{x}^\mu}{{x'}^\nu} \\ &= \PD{{x'}^\nu}{\phi'(x')} \partial_\mu \lr{ {\Lambda^\nu}_\rho x^\rho } \\ &= \PD{{x'}^\nu}{\phi'(x')} {\Lambda^\nu}_\mu \\ &= \PD{{x'}^\nu}{\phi(x)} {\Lambda^\nu}_\mu. \end{aligned} \end{equation} Multiplying by the inverse \( {\lr{\Lambda^{-1}}^\mu}_\kappa \) we get \begin{equation}\label{eqn:qftLecture3:320} \PD{{x'}^\kappa}{} = {\lr{\Lambda^{-1}}^\mu}_\kappa \PD{x^\mu}{} \end{equation} This should be familiar to you, and is an analogue of the invariance of the directional derivative \begin{equation}\label{eqn:qftLecture3:340} d\Br \cdot \spacegrad_\Br = d\Br' \cdot \spacegrad_{\Br'}. \end{equation} Actions We will start with a classical action, and quantize to determine a QFT. In mechanics we have the particle position \( q(t) \), which is a classical field in 1+0 time and space dimensions. Our action is \begin{equation}\label{eqn:qftLecture3:360} S = \int dt \LL(t) = \int dt \lr{ \inv{2} \dot{q}^2 - V(q) }. \end{equation} This action depends on the position of the particle, and is local in time. You could imagine a more complex action where the action depends on future or past times \begin{equation}\label{eqn:qftLecture3:380} S = \int dt' q(t') K( t' - t ), \end{equation} but we don't seem to find such actions in classical mechanics. Principles determining the form of the action.
relativity (the action is invariant under Lorentz transformation)
locality (the action depends on the fields and their derivatives at a given \((t, \Bx)\))
gauge principle (the action should be invariant under gauge transformations)

We won't discuss the gauge principle in detail right now, since we will start by studying scalar fields. Recall that for Maxwell's equations a gauge transformation has the form \begin{equation}\label{eqn:qftLecture3:520} \phi \rightarrow \phi + \dot{\chi}, \BA \rightarrow \BA - \spacegrad \chi. \end{equation} Suppose we have a real scalar field \( \phi(x) \) where \( x \in \mathbb{R}^{1,d-1} \). We will be integrating over space and time \( \int dt d^{d-1} x \), which we will write as \( \int d^d x \). Our action is \begin{equation}\label{eqn:qftLecture3:400} S = \int d^d x \lr{ \text{Some action density to be determined } } \end{equation} The analogue of \( \dot{q}^2 \) is \begin{equation}\label{eqn:qftLecture3:420} \begin{aligned} \lr{ \PD{x^\mu}{\phi} } \lr{ \PD{x^\nu}{\phi} } g^{\mu\nu} &= (\partial_\mu \phi) (\partial_\nu \phi) g^{\mu\nu} \\ &= \partial^\mu \phi \partial_\mu \phi. \end{aligned} \end{equation} This has both time and spatial components, that is \begin{equation}\label{eqn:qftLecture3:440} \partial^\mu \phi \partial_\mu \phi = \dotphi^2 - (\spacegrad \phi)^2, \end{equation} so the desired simplest scalar action is \begin{equation}\label{eqn:qftLecture3:460} S = \int d^d x \lr{ \dotphi^2 - (\spacegrad \phi)^2 }. \end{equation} The measure transforms using a Jacobian, which we have seen is the Lorentz transform matrix, and has unit determinant \begin{equation}\label{eqn:qftLecture3:480} d^d x' = d^d x \Abs{ det( \Lambda^{-1} ) } = d^d x. \end{equation} Problems. Question: Four vector form of the Maxwell gauge transformation.
Show that the transformation \begin{equation}\label{eqn:qftLecture3:580} A^\mu \rightarrow A^\mu + \partial^\mu \chi \end{equation} is the desired four-vector form of the gauge transformation \ref{eqn:qftLecture3:520}, that is \begin{equation}\label{eqn:qftLecture3:540} \begin{aligned} j^\nu &= \partial_\mu {F'}^{\mu\nu} \\ &= \partial_\mu F^{\mu\nu}. \end{aligned} \end{equation} Also relate this four-vector gauge transformation to the spacetime split. Answer \begin{equation}\label{eqn:qftLecture3:560} \begin{aligned} \partial_\mu {F'}^{\mu\nu} &= \partial_\mu \lr{ \partial^\mu {A'}^\nu - \partial^\nu {A'}^\mu } \\ &= \partial_\mu \lr{ \partial^\mu \lr{ A^\nu + \partial^\nu \chi } - \partial^\nu \lr{ A^\mu + \partial^\mu \chi } } \\ &= \partial_\mu {F}^{\mu\nu} + \partial_\mu \partial^\mu \partial^\nu \chi - \partial_\mu \partial^\nu \partial^\mu \chi \\ &= \partial_\mu {F}^{\mu\nu}, \end{aligned} \end{equation} by equality of mixed partials. Expanding \ref{eqn:qftLecture3:580} explicitly we find \begin{equation}\label{eqn:qftLecture3:600} {A'}^\mu = A^\mu + \partial^\mu \chi, \end{equation} which is \begin{equation}\label{eqn:qftLecture3:620} \begin{aligned} \phi' = {A'}^0 &= A^0 + \partial^0 \chi = \phi + \dot{\chi} \\ \BA' \cdot \Be_k = {A'}^k &= A^k + \partial^k \chi = \lr{ \BA - \spacegrad \chi } \cdot \Be_k. \end{aligned} \end{equation} The last of which can be written in vector notation as \( \BA' = \BA - \spacegrad \chi \).
Responsible for performing coarse fitting between two density objects. More... #include <IMP/em/CoarseCC.h> Responsible for performing coarse fitting between two density objects. The pixels involved are derived from the positions of N particles. Definition at line 28 of file CoarseCC.h. static float calc_score (DensityMap *data, SampledDensityMap *model_map, float scalefactor, bool recalc_rms=true, bool resample=true, FloatPair norm_factors=FloatPair(0., 0.)) Calculates the value of the EM fitting term. More... static double cross_correlation_coefficient (const DensityMap *grid1, const DensityMap *grid2, float grid2_voxel_data_threshold, bool allow_padding=false, FloatPair norm_factors=FloatPair(0., 0.)) Calculates the cross correlation coefficient between two maps. More... static float local_cross_correlation_coefficient (const DensityMap *em_map, DensityMap *model_map, float voxel_data_threshold) Local cross correlation function. More... Calculates the value of the EM fitting term. Note The function returns scalefac*(1-ccc) to support minimization optimization. The ccc value (cross correlation coefficient) is calculated by the cross_correlation_coefficient function. Parameters [in] data DensityMap class containing the EM map. note: correct RMSD and mean MUST be in the header! [in] model_map SampledDensityMap class prepared to contain the simulated EM map for the model. [in] scalefactor scale factor to apply to the value of the cross correlation term [in] recalc_rms determines whether the RMS of both maps should be recalculated prior to the correlation calculation.
False is faster, but potentially inaccurate [in] resample if true, the model density map is resampled [in] norm_factors if set, these precalculated terms are used for normalization Returns the value of the cross correlation term: scalefac*(1-ccc) See Also cross_correlation_coefficient static double IMP::em::CoarseCC::cross_correlation_coefficient ( const DensityMap * grid1, const DensityMap * grid2, float grid2_voxel_data_threshold, bool allow_padding = false, FloatPair norm_factors = FloatPair(0., 0.) ) static Calculates the cross correlation coefficient between two maps. Cross correlation coefficient between the em density and the density of a model. The function applied is: \(\frac{\sum_{i=1}^{N}{{td}_i}{{md}_i}-{N} {{mean}_{td}} {{mean}_{md}}} {N\sigma_{{td}}\sigma_{{md}}}\), such that \(N\) is the number of voxels, \({td}\) is the target density, and \({md}\) is the model density. Parameters [in] grid1 The first 3D grid [in] grid2 The second 3D grid [in] grid2_voxel_data_threshold voxels with value lower than threshold in grid2 are not summed (avoids calculating correlation on voxels below the threshold) [in] allow_padding determines whether the two maps should be padded to have the same size before the calculation is performed. If set to false and the grids are not of the same size, the function will throw an exception. [in] norm_factors if set, these precalculated terms are used for normalization Returns the cross correlation coefficient value between two density maps Note This is not the local CC function static float IMP::em::CoarseCC::local_cross_correlation_coefficient ( const DensityMap * em_map, DensityMap * model_map, float voxel_data_threshold ) static Local cross correlation function. The documentation for this class was generated from the following file:
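To make the correlation formula above concrete, here is a small pure-Python sketch (not the IMP implementation; it ignores the threshold, padding, and norm_factors options and treats the maps as flat voxel lists):

```python
import math

def cross_correlation(td, md):
    # (sum td_i*md_i - N*mean_td*mean_md) / (N * sigma_td * sigma_md)
    n = len(td)
    mean_t = sum(td) / n
    mean_m = sum(md) / n
    sig_t = math.sqrt(sum((v - mean_t) ** 2 for v in td) / n)
    sig_m = math.sqrt(sum((v - mean_m) ** 2 for v in md) / n)
    num = sum(t * m for t, m in zip(td, md)) - n * mean_t * mean_m
    return num / (n * sig_t * sig_m)

grid = [0.0, 1.0, 2.0, 4.0, 1.0, 0.5]
print(cross_correlation(grid, grid))                   # 1.0: perfect match
print(cross_correlation(grid, [2 * v for v in grid]))  # 1.0: scale-invariant
```

This also shows why calc_score returns scalefac*(1-ccc): a perfect fit gives ccc = 1, i.e. a score of 0, which suits minimization.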
Motivation: I'm using a 2D regular grid (it's actually a quadtree, but I can still treat it as a finite difference scheme if I weight-average the solution over smaller-scale cells when estimating the Laplacian at neighbor points) to discretize my Poisson equation, and I'd like to solve it iteratively. The discretized Poisson equation $\Delta f = g$ reads $$ \frac{f_{i+1,j} + f_{i-1,j} + f_{i,j+1} + f_{i,j-1} - 4f_{i,j}}{h^2} = g_{i,j} $$ or, as an iterative method, $$ f_{i,j} = \frac{1}{4} \left( f_{i+1,j} + f_{i-1,j} + f_{i,j+1} + f_{i,j-1} - h^2 g_{i,j} \right) $$ meaning that if the $g_{i,j}$ are given, I traverse the grid and update $f_{i,j}$ based on the neighbour values of $f$ from the previous iteration (and since I don't keep the old values in memory, and always use the most recent values, I think the method is actually called "Gauss-Seidel"). I believe that this iterative method is identical to the Jacobi/Gauss-Seidel method used to solve linear equations (when formulated via a matrix and RHS vector). Error estimation: There are several formulas to assess the error (let's call it $d$) of the solution. All have the same property: solution converges $\implies$ $d$ goes to zero, but each has some quirks that I will now describe. The first few formulas anyone tries are ($N$ is the number of grid points, the occasionally appearing $\varepsilon$ is to make sure denominators are not zero, and $\alpha$ is some chosen exponent): $$ d = \sum_{i,j} \left| f_{i,j} - f^{(old)}_{i,j} \right|^\alpha $$ $$ d = \sum_{i,j} \left| \frac{f_{i,j} - f^{(old)}_{i,j}}{|f_{i,j}| + \varepsilon} \right|^\alpha $$ $$ d = \frac{1}{N} \sum_{i,j} \left| f_{i,j} - f^{(old)}_{i,j} \right|^\alpha $$ However, the problem with these is that they smooth out possible local non-convergence (if one point or group of points is not converging as quickly as the others, its contribution gets killed by the $1/N$ term or something similar).
Hence, I started to use something like the following $$ d = \max_{\substack{i, j}} \left| f_{i,j} - f^{(\text{old})}_{i,j} \right| $$ (again, with possible variations like terms divided by the value of $f_{i,j}$ to make it dimensionless, etc.) The problem: even when the cutoff on $d$ is small, like 0.01 (so the computation stops when $d < 0.01$), the solution may still be far from the true solution (certainly farther than within one percent), so it seems like the difference between successive iterations is not enough to truthfully assess the error between the iterative solution and the true solution. No paper I have read on this addresses the question "when to stop the iteration"; it is somehow generally understood that the answer is "when it stops changing", but that might not be enough (sometimes if I let it run for twice as many steps, I still get a much better solution, but $d$ is already ridiculously small, like $10^{-6}$). Of course, I know it's hard to estimate how far we are from the solution, especially since knowing the true values of the solution would eliminate the need to solve the Poisson equation in the first place. The question: What is the best upper bound $d$ on the error of Jacobi iterations (given we don't know the true solution, only these iterative solutions) that still satisfies: iterative solution converges to the true solution $\iff$ $d \to 0$? (and reflects the nature of the "closeness" of the approximate solution to the true solution more faithfully? For example, if $f_{i,j}$ is 5 percent off from the true solution, the value of $d$ should be something more realistic, like $0.05$, rather than $0.001$)
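One standard practical answer to this question is to stop on the residual $r = Af - g$ of the discrete system rather than on the change between iterates: the residual is computable without the true solution, and the true error obeys $\|f - f^\ast\| \le \|A^{-1}\|\,\|r\|$, whereas successive iterates can differ by arbitrarily little while still being far from $f^\ast$. A minimal pure-Python sketch of this idea (my own; `gauss_seidel_poisson` is a made-up helper, Dirichlet boundary values assumed fixed):

```python
def gauss_seidel_poisson(f, g, h, tol=1e-8, max_iter=10_000):
    """Gauss-Seidel sweeps for the 5-point Poisson stencil with fixed
    (Dirichlet) boundary values, stopping on the max-norm of the residual
    r = A f - g rather than on the change between successive iterates."""
    n = len(f)
    for sweep in range(max_iter):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                f[i][j] = 0.25 * (f[i + 1][j] + f[i - 1][j] +
                                  f[i][j + 1] + f[i][j - 1] - h * h * g[i][j])
        # residual: how badly f violates the discrete equation *right now*
        res = max(abs((f[i + 1][j] + f[i - 1][j] + f[i][j + 1] + f[i][j - 1]
                       - 4 * f[i][j]) / (h * h) - g[i][j])
                  for i in range(1, n - 1) for j in range(1, n - 1))
        if res < tol:
            return f, sweep + 1
    return f, max_iter

# usage: x^2 - y^2 is harmonic for the 5-point stencil too, so with its
# boundary values and g = 0 the iteration should recover it exactly
n, h = 6, 0.2
exact = lambda x, y: x * x - y * y
f = [[exact(i * h, j * h) if i in (0, n - 1) or j in (0, n - 1) else 0.0
      for j in range(n)] for i in range(n)]
g0 = [[0.0] * n for _ in range(n)]
f, sweeps = gauss_seidel_poisson(f, g0, h, tol=1e-12)
```

The max-norm over grid points keeps the locality the question asks for: a single stubborn cell keeps the residual large even if the rest of the grid has settled.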
Harmonic Series And Its Parts A series is an expression (a formal sum) with an infinite number of terms, like this: $a_{1} + a_{2} + a_{3} + \ldots + a_{n} + \ldots$ which we write in the concise form $\displaystyle\sum_{i = 1}a_{i} = a_{1} + a_{2} + a_{3} + \ldots + a_{n} + \ldots$ The first index need not be $1\;$ and may be an arbitrary integer (see an example where the difference in starting indices was concealed, leading to a curious observation). When there is no possibility of confusion the first index is omitted altogether: $\displaystyle\sum a_{i} = a_{1} + a_{2} + a_{3} + \ldots + a_{n} + \ldots$ To every series there corresponds a sequence of partial sums: $\displaystyle\begin{align} s_{1} &= a_{1},\\ s_{2} &= a_{1} + a_{2},\\ s_{3} &= a_{1} + a_{2} + a_{3},\\ s_{4} &= a_{1} + a_{2} + a_{3} + a_{4},\\ &\ldots \end{align}$ This sequence may or may not have a limit. If the limit $\displaystyle\lim_{i\rightarrow\infty}s_{i}\;$ exists then the series is said to converge, or to be convergent. In this case the expression $\displaystyle\sum_{i=1}^{\infty} a_{i}\;$ is assigned a value, the limit of the sequence $s_{i}.\;$ Series that do not converge are said to diverge, or to be divergent. Several examples of convergent and divergent series are available elsewhere. Here we focus on convergence properties of the harmonic series, the series of the reciprocals of the positive integers, $\displaystyle a_{n} = \frac{1}{n}.$ The partial sums of the harmonic series grow without bound which, in particular, means that the harmonic series is divergent. There are many proofs of that result. Here is one that is credited to the famous philosopher of the Middle Ages, Nicolas Oresme. It is clear that a series with constant terms, however small, is divergent, because its partial sums grow without bound.
For the harmonic series we can write $\displaystyle\begin{align} 1 + 1/2 &+ 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + \ldots\\ &=(1) + (1/2) + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + (1/9 + \ldots\\ &\gt (1) + (1/2) + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + (1/16 + \ldots\\ &=1 + 1/2 + 1/2 + 1/2 + \ldots \end{align}$ where the first sign of equality "=" tells us that the series in fact remained the same although its terms have been grouped in a peculiar way. The last equality sign "=" is only meant to indicate that we get the same series but with the terms in a simpler form. Uche Eliezer Okeke (Anambra State, Nigeria) gave a concise estimate for Oresme's approach: $\displaystyle\begin{align} H_{2^n}&=\sum_{k=1}^{2^n}\frac{1}{k}\\ &\gt 1 + \frac{1}{2} + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{1}{8} + \frac{1}{8} + \frac{1}{8} +\frac{1}{8}\right) +\ldots+\underbrace{\left(\frac{1}{2^n}+\ldots+\frac{1}{2^n}\right)}_{2^{n-1}\,times}\\ &=1+\sum_{k=1}^n\frac{2^{k-1}}{2^{k}}=1+\frac{1}{2}\sum_{k=1}^n1=1+\frac{n}{2}. \end{align}$ Thus the sequence of partial sums of the harmonic series exceeds term-by-term the sequence of partial sums of a series that diverges to infinity. So, the same can be said of the harmonic series as well. A recent proof due to Leonard Gillman starts with the contrary assumption that the series $\sum 1/n$ converges to a finite number $S$: $\displaystyle S = \sum_{n\ge 1}\frac{1}{n}.$ Then the terms in the series are grouped two at a time: $\displaystyle\begin{align} S &= 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + \ldots\\ &=(1 + 1/2) + (1/3 + 1/4) + (1/5 + 1/6) + (1/7 + 1/8) + (1/9 + \ldots\\ &\gt (1/2 + 1/2) + (1/4 + 1/4) + (1/6 + 1/6) + (1/8 + 1/8) + (1/10 + \ldots\\ &=1 + 1/2 + 1/3 + 1/4 + \ldots\\ &= S, \end{align}$ with the conclusion that $S\gt S,\;$ which is absurd. (Additional details can be found on a separate page.)
Euler has shown that the partial sums of the harmonic series change slowly: $s_{n} \approx \ln n + \gamma,$ where $\ln n\;$ is the natural logarithm of $n\;$ and $\gamma\;$ is a constant that now bears Euler's name: $\gamma\approx 0.57721566490153286061\ldots$ (It is not yet known whether Euler's constant is transcendental or, for that matter, whether it is irrational. Sometimes it is also referred to as the Euler-Mascheroni constant.) The series of the reciprocals of all the natural numbers - the harmonic series - diverges to infinity. There are many ways to thin the series so as to leave a convergent part. For example, if we leave only the reciprocals of the squares, $\displaystyle\sum_{n\ge 1}\frac{1}{n^{2}},\;$ the series will converge. (This is because the reciprocal of a square, say, $\displaystyle\frac{1}{k^{2}},\;$ is bounded from above by the term $\displaystyle\frac{1}{k(k - 1)}\;$ of a convergent telescoping series.) On the other hand, if we remove the reciprocals of all the composite numbers, leaving only those of the primes, the remaining series will still be divergent. In a 1914 paper, A. J. Kempner proved the surprising result that removing the reciprocals of all the integers whose decimal representation contains a specified digit (say 3) leaves a convergent series. The result was strengthened shortly thereafter by F. Irwin in the following form: Theorem (Irwin, 1916) If we strike out from the harmonic series those terms whose denominators contain the digit $9\;$ at least $a\;$ times, and, at the same time, the digit $8\;$ at least $b\;$ times, the digit $7\;$ at least $c\;$ times, and so on, to the digit $0\;$ at least $j\;$ times $(a, b, c, \ldots, j\;$ being any given integers), the series so obtained will converge. (It should be noted that an increase of any of the $10\;$ numbers $a, b, c, \ldots, j\;$ shrinks the set of the removed terms.) Some eighty years later, H.
Behforooz proved a result concerning the density of the terms in the harmonic series that form a convergent series on their own. Theorem (Behforooz, 1995) Suppose that $C\;$ is a subset of the positive integers and $\displaystyle\sum_{n\in C}\frac{1}{n}\;$ is convergent. For any positive integer $k,\;$ let $N_{k}\;$ be the number of elements in $C\;$ that are $\le 10^{k},\;$ and $M_{k} = 10^{k} - N_{k}.\;$ Then $\displaystyle\lim_{k\rightarrow\infty}\frac{N_{k}}{M_{k}} = 0.$ Let's prove Kempner's original theorem. Theorem (Kempner, 1914) If the denominators do not include all natural numbers $1, 2, 3, \ldots\;$ but only those numbers which do not contain any figure $9,\;$ the series converges. The method of proof holds unchanged if, instead of $\,9,\;$ any other figure $1, 2, \ldots, 8\;$ is excluded, but not for the figure $0.$ Proof We split the series of the (remaining) reciprocals into groups with indices $k$ satisfying $10^{n} \le k \lt 10^{n + 1}, n = 0, 1, 2, \ldots$ The sum of the terms in each group is denoted $a_{n}\;$ so that the series is replaced with $T = a_{0} + a_{1} + \ldots + a_{n} + \ldots$ We need to estimate the sums $a_{n}.\;$ As we saw in another discussion, there are at most $9^{n+1}\;$ integers between $10^{n}\;$ and $10^{n + 1}\;$ that miss a particular digit. (We consider $n + 1\;$ letter strings from an alphabet of $9\;$ letters. Not all such strings correspond to valid decimal representations of integers.) On the other hand, each of the reciprocals in the $n$-th group is bounded from above by $10^{-n}.\;$ It follows that $\begin{align} T&= a_{0} + a_{1} + \ldots + a_{n} + \ldots\\ &\le 9 + 9^{2} / 10 + 9^{3} / 10^{2} + 9^{4} / 10^{3} + \ldots\\ &= 9\cdot (1 + (9 / 10)^{1} + (9 / 10)^{2} + (9 / 10)^{3} + \ldots)\\ &= 9\cdot 1 / (1 - 9/10)\\ &= 90. \end{align}$ The sum $T\;$ is bounded from above, implying the convergence of the series.
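The bound $T \le 90$ can be sanity-checked numerically; the sketch below (mine, not part of the article) sums the no-9 reciprocals up to $10^5$. The full depleted series is known to converge to about $22.92$, a value the text does not derive, so any partial sum should sit comfortably below Kempner's bound:

```python
# Partial sums of the harmonic series restricted to denominators
# whose decimal representation contains no digit 9.
total = 0.0
for k in range(1, 100_000):
    if "9" not in str(k):
        total += 1.0 / k

# the partial sums increase but, by Kempner's argument, never reach 90
```

Note how slowly the thinned series grows: the groups $a_n$ shrink geometrically, so most of the sum comes from small denominators.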
The assertion of the theorem may be surprising at first sight, given the related assertion that "almost all" integers have the digit $3\;$ in their decimal representation. On second thought, as R. Honsberger has observed, "most" natural numbers contain millions of digits. For such large numbers it would be more surprising not to contain the digit 3. References H. Behforooz, Thinning Out the Harmonic Series, Mathematics Magazine, Vol. 68, No. 4 (Oct., 1995), pp. 289-293 R. P. Boas, in R. Honsberger, Mathematical Plums, MAA, 1979, 38-61 G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, Fifth Edition, Oxford Science Publications, 1996 R. Honsberger, Mathematical Gems II, MAA, 1976, 98-104 F. Irwin, A Curious Convergent Series, Am Math Monthly, v 23 (1916), 149-152 A. J. Kempner, A Curious Convergent Series, Am Math Monthly, v 21 (1914), 48-50 A. D. Wadhwa, An Interesting Subseries of the Harmonic Series, Am Math Monthly, v 82, n 9 (Nov., 1975), 931-933 Copyright © 1996-2018 Alexander Bogomolny
I'm a little confused by Wikipedia's definition of a regular language: The collection of regular languages over an alphabet $\Sigma$ is defined recursively as follows: The empty language $\emptyset$ is a regular language. For each $a \in \Sigma$ ($a$ belongs to $\Sigma$), the singleton language ${a}$ is a regular language. If $A$ and $B$ are regular languages, then $A \cup B$ (union), $A \cdot B$ (concatenation), and $A^\ast$ (Kleene star) are regular languages. No other languages over $\Sigma$ are regular. This seems to define a regular language as any set that contains a finite number of unique elements, though the set can be countably infinite (it's worth noting that the article doesn't explicitly state that $\left\vert{\Sigma}\right\vert$ is finite in the definition), i.e., a regular language is any set $S$ $$ \left\vert{S}\right\vert \le \left\vert{\infty}\right\vert, {S} \in \Sigma \\ \left\vert{\Sigma}\right\vert \lt \left\vert{\infty}\right\vert $$ I think this is correct because the union, concatenation or Kleene star of a singleton that is part of a finite set or the empty set must be also part of that set, even when operated on recursively. It's been a fair while since I did any maths, and even longer since I've touched set theory. Have I completely misread Wikipedia's definition? If so, how? If I haven't, how is a regular language different from my definition?
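One quick way to see where this reading goes astray: the alphabet $\Sigma$ is finite, but the languages built from it need not be. Even a single application of the Kleene star to a singleton produces an infinite (yet still regular) language. A small Python check (my example, not from the question) using regular expressions, which denote exactly these languages:

```python
import re

# The alphabet {a} is finite, but the regular language {a}* is infinite:
# it contains "", "a", "aa", "aaa", ...  Regular languages are not
# constrained to be finite sets; the finiteness lives in the recognizer
# (a finite automaton), not in the language itself.
lang = re.compile(r"a*")

for n in range(50):
    assert lang.fullmatch("a" * n)   # every power of "a" is in the language

assert not lang.fullmatch("ab")      # but not every string over {a, b} is
```

So "regular" is a closure property of how the set is built from $\emptyset$ and singletons, not a bound on its cardinality; the non-regular languages over a finite alphabet are the ones that cannot be built by any finite sequence of these operations.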
In this section, we define the Jacobi symbol, which is a generalization of the Legendre symbol. The Legendre symbol was defined in terms of primes, while the Jacobi symbol is defined for any odd integer and is given in terms of the Legendre symbol. Let \(n\) be an odd positive integer with prime factorization \[n=p_1^{c_1}p_2^{c_2}...p_m^{c_m}\] and let \(a\) be an integer relatively prime to \(n\); then \[\left(\frac{a}{n}\right)=\prod_{i=1}^m\left(\frac{a}{p_i}\right)^{c_i}.\] Notice that from the prime factorization of 55, we get that \[\left(\frac{2}{55}\right)=\left(\frac{2}{5}\right)\left(\frac{2}{11}\right)=(-1)(-1)=1\] We now prove some properties of the Jacobi symbol that are similar to the properties of the Legendre symbol. Properties of the Jacobi symbol Let \(n\) be an odd positive integer and let \(a\) and \(b\) be integers such that \((a,n)=1\) and \((b,n)=1\). Then: If \(n \mid (a-b)\), then \[\left(\frac{a}{n}\right)=\left(\frac{b}{n}\right).\] \[\left(\frac{ab}{n}\right)=\left(\frac{a}{n}\right)\left(\frac{b}{n}\right).\] Proofs Proof of 1 Note that if \(p\) is in the prime factorization of \(n\), then we have that \(p\mid (a-b)\). Hence by Theorem 70, we get that \[\left(\frac{a}{p}\right)=\left(\frac{b}{p}\right).\] As a result, we have \[\left(\frac{a}{n}\right)=\prod_{i=1}^m\left(\frac{a}{p_i}\right)^{c_i}= \prod_{i=1}^{m}\left(\frac{b}{p_i}\right)^{c_i}\] Proof of 2 Note that by Theorem 71, we have \(\left(\frac{ab}{p}\right)=\left(\frac{a}{p}\right)\left(\frac{b}{p}\right)\) for any prime \(p\) appearing in the prime factorization of \(n\). As a result, we have \[\begin{aligned} \left(\frac{ab}{n}\right)&=&\prod_{i=1}^m\left(\frac{ab}{p_i}\right)^{c_i}\\ &=&\prod_{i=1}^m\left(\frac{a}{p_i}\right)^{c_i}\prod_{i=1}^m\left(\frac{b}{p_i}\right)^{c_i} \\&=&\left(\frac{a}{n}\right)\left(\frac{b}{n}\right).\end{aligned}\] In the following theorem, we determine \(\left(\frac{-1}{n}\right)\) and \(\left(\frac{2}{n}\right)\).
Note Let \(n\) be an odd positive integer. Then \[\left(\frac{-1}{n}\right)=(-1)^{(n-1)/2}.\] \[\left(\frac{2}{n}\right)=(-1)^{(n^2-1)/8}.\] Proofs Proof of 1 If \(p\) is in the prime factorization of \(n\), then by Corollary 3, we see that \(\left(\frac{-1}{p}\right)=(-1)^{(p-1)/2}\). Thus \[\begin{aligned} \left(\frac{-1}{n}\right)&=&\prod_{i=1}^m\left(\frac{-1}{p_i}\right)^{c_i}\\ &=& (-1)^{\sum_{i=1}^mc_i(p_i-1)/2}.\end{aligned}\] Notice that since \(p_i-1\) is even, we have \[p_i^{c_i}=(1+(p_i-1))^{c_i}\equiv 1+c_i(p_i-1) \ (mod \ 4)\] and hence we get \[n=\prod_{i=1}^mp_i^{c_i}\equiv 1+\sum_{i=1}^mc_i(p_i-1) \ (mod \ 4).\] As a result, we have \[(n-1)/2\equiv \sum_{i=1}^mc_i(p_i-1)/2 \ (mod \ 2).\] Proof of 2 If \(p\) is a prime, then by Theorem 72 we have \[\left(\frac{2}{p}\right)=(-1)^{(p^2-1)/8}.\] Hence \[\left(\frac{2}{n}\right)=(-1)^{\sum_{i=1}^mc_i(p_i^2-1)/8}.\] Because \(8 \mid p_i^2-1\), we see similarly that \[(1+(p_i^2-1))^{c_i}\equiv 1+c_i(p_i^2-1) \ (mod \ 64)\] and thus \[n^2\equiv 1+\sum_{i=1}^mc_i(p_i^2-1) \ (mod \ 64),\] which implies that \[(n^2-1)/8\equiv \sum_{i=1}^mc_i(p_i^2-1)/8 \ (mod \ 8).\] We now show that the reciprocity law holds for the Jacobi symbol. Let \((a,b)=1\) be odd positive integers.
Then \[\left(\frac{b}{a}\right)\left(\frac{a}{b}\right)=(-1)^{\frac{a-1}{2}\cdot\frac{b-1}{2}}.\] Notice that since \(a=\prod_{j=1}^mp_j^{c_j}\) and \(b=\prod_{i=1}^nq_i^{d_i}\) we get \[\left(\frac{b}{a}\right)\left(\frac{a}{b}\right)= \prod_{i=1}^n\prod_{j=1}^m\left[\left(\frac{p_j}{q_i}\right)\left(\frac{q_i}{p_j}\right)\right]^{c_jd_i}\] By the law of quadratic reciprocity, we get \[\left(\frac{b}{a}\right)\left(\frac{a}{b}\right)= (-1)^{\sum_{i=1}^n\sum_{j=1}^mc_j\left(\frac{p_j-1}{2}\right)d_i\left(\frac{q_i-1}{2}\right)}\] As in the proof of part 1 of Theorem 75, we see that \[\sum_{j=1}^mc_j\left(\frac{p_j-1}{2}\right)\equiv \frac{a-1}{2} \ (mod \ 2)\] and \[\sum_{i=1}^nd_i\left(\frac{q_i-1}{2}\right)\equiv \frac{b-1}{2} \ (mod \ 2).\] Thus we conclude that \[\sum_{j=1}^mc_j\left(\frac{p_j-1}{2}\right)\sum_{i=1}^nd_i\left(\frac{q_i-1}{2}\right)\equiv \frac{a-1}{2}\cdot\frac{b-1}{2} \ (mod \ 2).\] Exercises Evaluate \(\left(\frac{258}{4520}\right)\). Evaluate \(\left(\frac{1008}{2307}\right)\). For which positive integers \(n\) that are relatively prime to 15 does the Jacobi symbol \(\left(\frac{15}{n}\right)\) equal 1? Let \(n\) be an odd square free positive integer. Show that there is an integer \(a\) such that \((a,n)=1\) and \(\left(\frac{a}{n}\right)=-1\).
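The properties proved in this section (periodicity in \(a\), multiplicativity, the formula for \(\left(\frac{2}{n}\right)\), and the reciprocity law) combine into the standard fast algorithm for evaluating the Jacobi symbol without factoring \(n\). A Python sketch (mine, not part of the text):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via the properties above:
    (a/n) depends only on a mod n; (2/n) = (-1)^((n^2-1)/8), i.e. a sign
    flip exactly when n = 3 or 5 (mod 8); and reciprocity flips the sign
    exactly when both arguments are = 3 (mod 4)."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2 from a
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # swap, then apply reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 signals gcd(a, n) > 1

assert jacobi(2, 55) == 1            # the worked example above
assert jacobi(2, 5) == -1 and jacobi(2, 11) == -1
assert jacobi(6, 35) == jacobi(2, 35) * jacobi(3, 35)   # multiplicativity
```

This runs in time comparable to the Euclidean algorithm, which is precisely why the Jacobi symbol, unlike the Legendre symbol computed from a factorization, is useful in primality testing.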
Yes, your equations aren't quite right. The main issue is that you're assuming a certain form for the normal force that isn't correct. What follows should illuminate why this is so in some detail. When using forces and Newton's Laws to solve this problem, it is overwhelmingly helpful to work in spherical coordinates, not just for locating the position of the mass, but also for writing vector components. In particular, it's advantageous to express all vectors in spherical coordinate unit vectors. There are two forces acting on the mass: normal force which points in the radial direction, and the gravitational force which points in the negative $z$ direction, and this gives the following net force on the particle\begin{align} \mathbf F = N\hat{\mathbf r} -mg\,\hat{\mathbf z}.\end{align}We would like to follow our advice above, and write this in terms of spherical coordinate unit vectors $\hat{\mathbf r}, \hat{\boldsymbol \theta}, \hat{\boldsymbol\phi}$. If you look in the back of Griffiths' Electrodynamics, or better yet work it out for yourself, you will find that $\hat{\mathbf z} = \cos\theta\hat{\mathbf r} - \sin\theta\hat{\boldsymbol\theta}$, so we can write the net force entirely in terms of spherical coordinates and unit vectors as follows:\begin{align} \mathbf F = (N-mg\cos\theta)\hat{\mathbf r} + mg\sin\theta\hat{\boldsymbol\theta}.\end{align}Notice that we do not yet know what $N$, the normal force, is. The normal force is complicated in this problem because it will turn out to depend on the velocity of the mass, a feature that you can immediately tell is missing from your original work. Now, in order to write down Newton's Laws, we need the acceleration in spherical coordinates which is an awful mess in general. 
But notice that since the mass is constrained to the surface of the sphere, and since there are no forces in the tangential direction, provided the mass starts at the top of the sphere, we will have\begin{align} \dot r = 0, \qquad \dot \phi = 0.\end{align}The first of these equations is a constraint we impose by virtue of the particle remaining on the sphere at all times ($r(t) = R$); the second can be argued purely mathematically from Newton's Laws, but it should be clear from the physical argument above, so we omit that step. The result is a drastic simplification in the expression for acceleration in spherical coordinates:\begin{align} \mathbf a = -R\dot\theta^2\hat{\mathbf r} +R\ddot\theta \hat{\boldsymbol\theta}.\end{align}These terms should actually look quite familiar. The first is just the centripetal acceleration, and the second is the tangential acceleration in the $\theta$ direction. Compare these to the standard expressions $a_r = -R\omega^2$ and $a_\theta = R\alpha$. If we now use Newton's Second Law, then we find\begin{align} N - mg\cos\theta = -mR\dot\theta^2, \qquad g\sin\theta = R\ddot\theta.\end{align}This is a system of two ODEs in two unknown functions $N=N(t)$ and $\theta = \theta(t)$, but the second is an ODE entirely for $\theta$. Once you have solved this to find the motion of the mass, you can plug the solution back into the first equation to determine the normal force as a function of time if you wish. Note that this is consistent with the Lagrangian approach of user yohBS. Simply plug $\dot\phi = 0$ into his $\theta$ Euler-Lagrange equation and note that it agrees with the equation above.
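As a numerical sanity check (mine, not part of the answer; the values of $g$, $R$, $m$ below are arbitrary and the result is independent of them), integrating $R\ddot\theta = g\sin\theta$ with RK4 and monitoring $N = m(g\cos\theta - R\dot\theta^2)$ reproduces the classical result that the mass leaves the sphere when $\cos\theta = 2/3$:

```python
import math

# Arbitrary values; the departure angle does not depend on them.
g, R, m = 9.81, 1.0, 1.0

def deriv(state):
    theta, omega = state
    return (omega, (g / R) * math.sin(theta))   # R*theta'' = g*sin(theta)

def rk4_step(state, dt):
    def nudge(s, k, f):
        return (s[0] + f * k[0], s[1] + f * k[1])
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

theta, omega = 1e-3, 0.0      # tiny nudge off the top, starting at rest
dt = 1e-4
while True:
    # radial equation: N = m*(g*cos(theta) - R*omega^2)
    N = m * (g * math.cos(theta) - R * omega ** 2)
    if N <= 0:                # normal force vanishes: mass leaves the sphere
        break
    theta, omega = rk4_step((theta, omega), dt)
```

Energy conservation gives the same answer analytically: $R\dot\theta^2 = 2g(1-\cos\theta)$, and setting $N=0$ yields $\cos\theta = 2/3$.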
How To Compute Iterated Integrals Now that we know what double integrals are, let's see how to compute them. In order to integrate over a rectangle $[a,b] \times [c,d]$, we first integrate with respect to one variable (say, $y$) for each fixed value of $x$. That's an ordinary integral, which we can compute using the fundamental theorem of calculus. We then integrate the result over the other variable (in this case $x$), which we can also compute using the fundamental theorem of calculus. So a double integral over a rectangle can be evaluated as an iterated integral. There are two ways to see the relation between double integrals and iterated integrals. In the bottom-up approach, we evaluate the sum $$\sum_{i=1}^m\sum_{j=1}^n f\left(x_{i}^*,y_{j}^*\right) \,\Delta x\,\Delta y=\sum_{i=1}^m \left(\sum_{j=1}^n f\left(x_{i}^*,y_{j}^*\right) \,\Delta y\right) \Delta x,$$ by first summing over all of the boxes with a fixed $i$ to get the contribution of a column (as indicated with the parentheses on the second sum), and then adding up the columns. (We could do this in the other order, by reversing the summations.) This approach is explained in the following video, and an example is worked out.
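The column-by-column grouping can be mirrored directly in code. A small sketch (my own) that evaluates the double Riemann sum exactly as grouped above, inner sum over $j$ first, using midpoints as the sample points:

```python
def riemann_double(f, a, b, c, d, m, n):
    """Midpoint double Riemann sum for f over [a,b] x [c,d], grouped
    column by column: sum_i ( sum_j f(x_i, y_j) * dy ) * dx."""
    dx = (b - a) / m
    dy = (d - c) / n
    total = 0.0
    for i in range(m):
        x = a + (i + 0.5) * dx          # midpoint sample in x
        # contribution of one column (fixed i): the inner sum over j
        column = sum(f(x, c + (j + 0.5) * dy) * dy for j in range(n))
        total += column * dx
    return total

# f(x, y) = x*y over [0,1] x [0,2]; the iterated integral gives (1/2)*2 = 1
approx = riemann_double(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0, 100, 100)
```

Swapping the two loops (rows first) gives the same total, which is the discrete shadow of reversing the order of integration.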
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2−b^2} \sin\sigma, \quad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is? Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z})$ = $1- \frac{1}{2z^2}+\frac{1}{4!z^4} - ..........$ $= (1-y), where\ \ y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen with so many 000000s and 999999s But I think that to prove the implication for transitivity the inference rule and use of MP seem to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdot\cdot\cdot,a_{n-1}$ to be zero because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Which is the case: $$ \prod_{i \in I}i! = \prod_{i \in I}(i!) $$ or $$ \prod_{i \in I}i! = \Bigg(\prod_{i \in I}i\Bigg)! $$ The convention \begin{align*} \prod_{i \in I}i! = \prod_{i \in I}(i!)\tag{1} \end{align*} is also affirmed by the operator precedence rules stated in OEIS: For standard arithmetic, operator precedence is as follows: Parenthesization, Factorial, Exponentiation, Multiplication and division, Addition and subtraction. And since the product sign $\prod$ is just a short-hand for successively applying the multiplication operator, convention (1) is valid. This would depend on the author, but the former notation would be much more common: $$\prod_{i \in I}i! = \prod_{i \in I}(i!)$$ If the product itself were factorialized, it would most likely be written as the latter: $$\Bigg(\prod_{i \in I}i\Bigg)!$$ Edit: added the bolded word much. I would see it as $$\prod_{i \in I}i! = \prod_{i \in I}(i!)$$ Like $\sum_i a_i^2$, which is $\sum_i (a_i^2)$, not $\left(\sum_i a_i\right)^2$.
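The two readings give very different numbers, which is why the convention matters; a quick check (my own illustration, using Python's `math.prod` and assuming the small index set $I=\{1,2,3\}$):

```python
from math import factorial, prod

I = [1, 2, 3]
per_term = prod(factorial(i) for i in I)  # reading (1): product of the i!
whole = factorial(prod(I))                # other reading: (product of i)!
print(per_term, whole)                    # 12 720
```

Already on a three-element set the two parses disagree by a factor of 60.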
892 results for "using".

- Questions testing understanding of the precedence of operators using BIDMAS; that is, they test Brackets, Indices, Division/Multiplication and Addition/Subtraction. Question, Ready to use, CC BY, Published. Last modified 30/09/2019 15:45.
- Three graphs are given with areas underneath them shaded. The student is asked to calculate their areas using integration. Q1 has a polynomial. Q2 has exponentials and fractional functions. Q3 requires solving a trig equation and integration by parts. Question, Draft, CC BY, Published. Last modified 26/09/2019 09:26.
- Write down the Newton-Raphson formula for finding a numerical solution to the equation $e^{mx}+bx-a=0$. If $x_0=1$, find $x_1$. Included in the Advice of this question are: 6 iterations of the method; a graph of the NR process using jsxgraph; and user interaction allowing a change of starting value to show its effect on the process. Question, Draft, CC BY, Published. Last modified 19/09/2019 13:49.
- Factorise $x^2+cx+d$ into 2 distinct linear factors and then find $\displaystyle \int \frac{ax+b}{x^2+cx+d}\;dx,\;a \neq 0$, using partial fractions or otherwise. Question, Draft, CC BY, Published. Last modified 19/09/2019 11:55.
- Calculate the moment of a force about three points using the definition of a moment. All forces and points are in the same plane. Question, Ready to use, CC BY-NC, Published. Last modified 17/09/2019 19:48.
- $I$ compact interval. $\displaystyle g: I \rightarrow I, g(x)=\frac{ax}{x^2+b^2}$. Find stationary points and local maxima, minima. Using limits, has $g$ a global max, min? Question, Ready to use, CC BY, Published. Last modified 17/09/2019 15:07.
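The Newton-Raphson question in the listing above is easy to prototype; a minimal sketch (the parameter values $m=1$, $b=1$, $a=3$, the starting point, and the tolerance are my own illustrative choices, not taken from the question bank):

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# e^{mx} + bx - a = 0 with illustrative values m = 1, b = 1, a = 3
m, b, a = 1.0, 1.0, 3.0
root = newton(lambda x: math.exp(m * x) + b * x - a,
              lambda x: m * math.exp(m * x) + b,
              x0=1.0)
```

Starting from $x_0=1$, the iteration converges in a handful of steps because convergence is quadratic near a simple root.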
In the last section, we saw how to find series solutions to second order linear differential equations. We did not investigate the convergence of these series. In this discussion, we will derive an alternate method to find series solutions. We will also learn how to determine the radius of convergence of the solutions just by taking a quick glance at the differential equation. Example \(\PageIndex{1}\) Consider the differential equation \[ y'' + y' + ty = 0. \nonumber\] As before we seek a series solution \[ y = a_0 + a_1t + a_2t^2 + a_3t^3 + a_4t^4 + ... \;. \nonumber\] The theory of Taylor series states that \[ n!\; a_n = y^{(n)}(0). \nonumber \] We have \[ y'' = -y' - ty. \nonumber \] Plugging in 0 gives \[ 2!\, a_2 = y''(0) = -y'(0) + 0 = -a_1 \nonumber \] \[ a_2 = -\dfrac{a_1}{2}. \nonumber\] Taking the derivative of the differential equation gives \[ (y'' + y' + ty)' = y''' + y'' + ty' + y = 0 \nonumber \] or \[ y''' = -y'' - ty' - y. \nonumber\] Plugging in zero gives \[ 3!\, a_3 = y'''(0) = a_1 - a_0 \nonumber\] \[ a_3 = \dfrac{a_1}{6} - \dfrac{a_0}{6}. \nonumber\] Taking another derivative gives \[ (y''' + y'' + ty' + y)' = y^{(iv)} + y''' + ty'' + 2y' = 0 \nonumber \] or \[ y^{(iv)} = -y''' - ty'' - 2y' . \nonumber\] Plugging in zero gives \[ 4! \,a_4 = -(a_1 - a_0) - 2a_1 = a_0 - 3a_1 \nonumber \] \[ a_4 = \dfrac{a_0}{24} - \dfrac{a_1}{8}. \nonumber\] The important thing to note here is that all of the coefficients can be written in terms of the first two. To come up with a theorem regarding this, we first need a definition. Definition: Analytic Function A function \(f(x)\) is called analytic at \(x_0\) if \(f(x)\) is equal to its Taylor series about \(x_0\) on some open interval containing \(x_0\). It turns out that if \(p(x)\) and \(q(x)\) are analytic then there always exists a power series solution to the corresponding differential equation. We state this fact below without proof. If \(x_0\) is a point such that \(p(x)\) and \(q(x)\) are analytic, then \(x_0\) is called an ordinary point of the differential equation.
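The derivative-at-zero bookkeeping in the example can be automated. Substituting \(y = \sum a_n t^n\) into \(y'' + y' + ty = 0\) and collecting the coefficient of \(t^k\) gives the recurrence \((k+2)(k+1)a_{k+2} + (k+1)a_{k+1} + a_{k-1} = 0\) (with \(a_{-1} = 0\)). A short sketch of mine that tracks each \(a_n\) as a linear combination of the two free constants \(a_0\) and \(a_1\):

```python
from fractions import Fraction

def series_coeffs(n_max):
    """Coefficients a_n of y = sum a_n t^n solving y'' + y' + t*y = 0,
    stored as pairs (c0, c1) meaning a_n = c0*a0 + c1*a1."""
    a = [(Fraction(1), Fraction(0)),   # a_0
         (Fraction(0), Fraction(1))]   # a_1
    for k in range(n_max - 1):
        prev = a[k - 1] if k >= 1 else (Fraction(0), Fraction(0))  # a_{k-1}
        denom = (k + 2) * (k + 1)
        a.append(tuple(-((k + 1) * a[k + 1][j] + prev[j]) / denom
                       for j in range(2)))
    return a
```

Running it reproduces \(a_2 = -\tfrac{a_1}{2}\) and \(a_3 = \tfrac{a_1 - a_0}{6}\) from the example, and exact rational arithmetic via `Fraction` avoids any floating-point doubt about the small denominators.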
Theorem Let \(x_0\) be an ordinary point of the differential equation \[ L(y) = y'' + p(x)y' + q(x)y = 0. \] Then the general solution can be represented by the power series \[ y= \sum_{n=0}^\infty a_n(x-x_0)^n = a_0\,y_1(x) + a_1\,y_2(x), \] where \(a_0\) and \(a_1\) are arbitrary constants and \(y_1\) and \(y_2\) are analytic at \(x_0\). The radii of convergence for \(y_1\) and \(y_2\) are at least as large as the minimum of the radii of convergence for \(p\) and \(q\). Remark: The easiest way to find the radius of convergence of most functions is by using the following fact: If \(f(x)\) is an analytic function for all \(x\), then the radius of convergence for \(1/f(x)\) is the distance from the center of convergence to the closest root (possibly complex) of \(f(x)\). Example \(\PageIndex{2}\) Find a lower bound for the radius of convergence of series solutions about \(x = 1\) for the differential equation \[ (x^2 + 4)\, y'' + \sin(x)\, y' + e^x y = 0. \nonumber \] Solution We have \[ p(x) = \dfrac{\sin x}{x^2 + 4} \nonumber\] \[ q(x) = \dfrac{e^x}{x^2 + 4} . \nonumber\] Both of these are quotients of analytic functions. The roots of \(x^2 + 4\) are \[2i \;\;\; \text{and} \;\;\; -2i. \nonumber \] The distance from \(1\) to \(2i\) is the same as the distance from \((1,0)\) to \((0,2)\), which is \( \sqrt{5} \). We get the same distance from \(1\) to \(-2i\). Hence the radii of convergence of the solutions are both at least \( \sqrt{5} \). Contributors Larry Green (Lake Tahoe Community College) Integrated by Justin Marshall.
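The distance-to-nearest-root recipe from the example is one line of complex arithmetic; a quick sketch (my own illustration) for \(x^2 + 4\) about \(x_0 = 1\):

```python
# roots of the leading coefficient x^2 + 4 are +2i and -2i
roots = [2j, -2j]
x0 = 1 + 0j                      # center the series at x = 1
radius = min(abs(r - x0) for r in roots)
print(radius)                    # sqrt(5) = 2.23606797...
```

Python's `abs` on a complex number is exactly the distance in the plane used in the example, so no extra geometry is needed.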
There exist continuous functions $f:[0,1] \to \mathbb{R}$ which are nowhere differentiable. The so-called Brownian motion is a stochastic process which has (with probability one) sample paths which are Hölder continuous but nowhere differentiable. This shows, in particular, the existence of functions with the above properties. Moreover, there is a close connection between PDEs and Brownian motion, and therefore Brownian motion can be used to give probabilistic proofs of PDE results, for instance to study existence and uniqueness of solutions to the heat equation or the Dirichlet problem. Take a look at the book Brownian Motion by Schilling & Partzsch if you are interested in the topic. Lipschitz continuous functions are almost everywhere differentiable. There is a probabilistic proof of this statement which relies on the martingale convergence theorem; see this question here for details. Numerical calculation of $\pi$ The strong law of large numbers can be used to compute $\pi$ numerically. Indeed, if we consider a sequence of independent random variables $(X_n)_{n \geq 1}$ which are uniformly distributed on the square $[-1,1] \times [-1,1]$, then $$\frac{1}{n} \sum_{i=1}^n 1_{|X_i| \leq 1}(\omega) = \frac{1}{n} \sharp \{1 \leq i \leq n; |X_i(\omega)| \leq 1\}$$ converges almost surely to $\pi/4$ as $n \to \infty$. Sampling such a sequence $(X_n)_{n \in \mathbb{N}}$ is pretty easy, and therefore this is a nice way to calculate $\pi$ numerically. Fundamental theorem of algebra There is a probabilistic proof of the fundamental theorem of algebra; it relies on a martingale convergence theorem and the (neighbourhood) recurrence of Brownian motion in dimension $d=2$; see here or the book by Rogers & Williams for details. Open mapping theorem There is a probabilistic proof of the open mapping theorem for analytic functions, see this article; the proof relies on the conformal invariance of Brownian motion. There exist normal numbers.
The existence of normal numbers can be shown by applying the strong law of large numbers. Borel used probabilistic methods to prove that Lebesgue-almost all real numbers are normal. Remark: Note that there are two similar threads on mathoverflow (No. 1, No. 2) with plenty of examples!
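The Monte Carlo computation of $\pi$ described above takes only a few lines; a minimal sketch (the sample size and seed are arbitrary choices of mine):

```python
import random

def estimate_pi(n, seed=0):
    """Fraction of uniform points in [-1,1]^2 landing in the unit disk, times 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            inside += 1
    return 4 * inside / n
```

By the strong law of large numbers the estimate converges to $\pi$ almost surely; the standard error decays like $1/\sqrt{n}$, so each extra digit of accuracy costs a factor of 100 in samples.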
What you provide is an equality statement $$e_1 = e_2$$ of two expressions joined by the infix equality operator. The expression $e_1$, $$\frac{0}{0},$$ thus the result of dividing 0 by 0 (which is an instance of dividing a number by zero), is usually considered undefined; e.g. there is no real number that could be used as the result without causing trouble elsewhere. The second expression $e_2$ is $$\frac{100-100}{100-100}$$ We could now argue that both expressions are equal, for example by one of these arguments:
- fractions are equal if they have equal numerators and denominators, and this is the case here, as reduction by partial evaluation of numerator and denominator individually is unproblematic
- each expression is an instance of division by zero and thus the same failure
- each expression evaluates to the same special undefined value, e.g. div, $\bot$, false (used for partial functions) or undefined (e.g. JavaScript), and thus equality of values holds
The first argument would probably be met with the objection that these expressions are constructed like fractions (syntactic equality $e_1 = e_2$: the expressions are the same) but are not proper fractions, because zero denominators are not allowed there; thus equality by value ($\sigma(e_1) = \sigma(e_2)$) cannot apply for lack of values, the evaluation never happening (the semantics $\sigma$ which assigns a value to the expression is not defined). The second might raise the question whether those two are different from other failures like $\infty - \infty$ or $1/-$ or $(1,2,3)+(1,2)$.
Nonetheless we might get convinced that equality holds, and then continue boldly with $e_3$ $$\frac{10^2-10^2}{10(10-10)}$$ and $e_4$ $$\frac{(10+10)(10-10)}{10(10-10)}$$ and then try it again with $e_5$ $$\frac{10+10}{10}$$ This is not equality by argument 1, 2 or 3, but by a fourth argument: two fractions are equal if the numerator and denominator of one are a common multiple of the other's, $$\frac{n_1}{d_1} = \frac{c \, n_1}{c \, d_1}$$ We note that $e_5$ is now a number, while $e_1,\ldots,e_4$ were not (be it $./0$, $\bot$ or false, but certainly no real number). So inequality because of different type applies (comparing apples with pears). Thus argument 4 was invalid: reduction by a common division by $(10-10)=0$ is not valid for deriving a fraction with the same value, as it changes the object from being not a number into a number. We rather have $$e_4 \ne e_5$$ Summary: Division by zero is the culprit, but only at the fourth equality (see above) and not before. And there it is not a single division but a common division to reduce a fraction.
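As a side note of mine (not part of the answer above): IEEE 754 floating point takes the third route, a dedicated "undefined" value NaN, but then deliberately refuses to call two undefined results equal:

```python
import math

undefined_a = math.inf - math.inf   # infinity minus infinity has no value: NaN
undefined_b = math.inf - math.inf
print(math.isnan(undefined_a))      # True
print(undefined_a == undefined_b)   # False: NaN compares unequal to everything,
print(undefined_a == undefined_a)   # False: even to itself
```

So in this particular semantics, "same failure, therefore equal" is explicitly rejected by the standard.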
You are correct until you got to $$\frac{\Delta E_k \ \Delta t}{\Delta d} = mv$$ Remember that $\frac{\Delta d}{\Delta t} =$ average velocity (and I'll explain why further down). When we consider constant acceleration, the average velocity is $\frac{V_o+V_f}{2}$. In your case, the particle starts at rest, so the initial velocity is zero. We can rewrite the work as $$\frac{\Delta E_k}{{v_f}/{2}} = mv_f$$ $$E_k - 0 = \frac{1}{2}mv_f^2$$ So the kinetic energy equation will be $$E_k= \frac{1}{2}mv_f^2$$ Of course, there are many other ways to derive the formula. One easy way to derive it is by integrating the force in terms of distance, but I don't believe your problem requires calculus to obtain the formula. Another way we could solve it is by using Galileo's equation, but honestly, that makes it more confusing if one doesn't know where that equation is derived from (like me). If you think about it in terms of work, you know that $$Work = Force \cdot Distance$$ The reason why we know that $E_p = mgh$ is that we know what the initial position is. We know how much energy will be exerted from that height if it falls down to the ground. But it wouldn't work the same way if you wanted to find kinetic energy from the particle's current position. Instead, some problems will tell you to find the energy given time. You need its instantaneous velocity. We can use calculus to find instantaneous velocity, but we know one thing that allows us to find it without calculus: acceleration is constant. This means that the average velocity IS the instantaneous velocity. And it just so happens that the particle starts at rest. Does that mean the final velocity equals the instantaneous velocity? No.
If there were a problem that asks you to find the kinetic energy gained while the particle was already in motion, the full equation would be $$E_k=m\cdot(V_f - V_o)\cdot\left(\frac{V_o+V_f}{2}\right)$$ where $V_f - V_o$ is the change in velocity and $\frac{V_o+V_f}{2}$ is the average velocity (multiplying out, this is just $\frac{1}{2}m(V_f^2 - V_o^2)$). It was just convenient to have an application where the particle starts at rest and acceleration is constant. Hopefully, this explanation helps!
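The answer's reasoning condenses to one formula: work equals mass, times change in velocity, times average velocity. A sketch of the arithmetic (the function name is my own):

```python
def delta_kinetic_energy(m, v0, vf):
    """Work done under constant acceleration:
    force * distance = m * (dv/dt) * (v_avg * dt) = m * (vf - v0) * (v0 + vf) / 2."""
    return m * (vf - v0) * (v0 + vf) / 2

# starting from rest (v0 = 0) this collapses to the familiar (1/2) m v^2
```

The same expression also equals \(\frac{1}{2}m(v_f^2 - v_0^2)\), which is the general work-energy statement.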
As an example of a hyperbolic equation we study the wave equation. One of the systems it can describe is a transmission line for high-frequency signals, 40 m long, \[\begin{aligned} \dfrac{\partial^2}{\partial x^2} V &= \underbrace{LC}_{\text{inductance}\times \text{capacitance}}\dfrac{\partial^2}{\partial t^2} V, \nonumber\\ \dfrac{\partial}{\partial x} V (0,t) &= \dfrac{\partial}{\partial x} V(40,t) = 0, \nonumber\\ V(x,0) &= f(x),\nonumber\\ \dfrac{\partial}{\partial t} V(x,0) &= 0.\end{aligned}\] Separate variables, \[V(x,t) = X(x) T(t).\] We find \[\frac{X''}{X} = LC \frac{T''}{T} = -\lambda,\] which in turn shows that \[\begin{aligned} X'' &=-\lambda X, \nonumber\\ T'' &= -\frac{\lambda}{LC} T .\end{aligned}\] We can also separate most of the initial and boundary conditions; we find \[X'(0) = X'(40)=0,\;\;T'(0)=0.\] Once again we distinguish the three cases \(\lambda>0\), \(\lambda=0\), and \(\lambda<0\): \(\lambda>0\) (almost identical to the previous problem): \(\lambda_n = \alpha_n^2\), \(\alpha_n = \frac{n\pi}{40}\), \(X_n=\cos(\alpha_n x)\). We find that \[T_n(t) = D_n\cos \left(\frac{n\pi t}{40\sqrt{LC}}\right) + E_n\sin\left(\frac{n\pi t}{40\sqrt{LC}}\right).\] \(T'(0)=0\) implies \(E_n=0\), and taking both together we find (for \(n \geq 1\)) \[V_n(x,t) = \cos\left(\frac{n\pi t}{40\sqrt{LC}}\right) \cos\left(\frac{n\pi x}{40}\right).\] \(\lambda=0\): \(X(x) = A + B x\). \(B=0\) due to the boundary conditions. We find that \(T(t) = D t + E\), and \(D\) is 0 due to the initial condition. We conclude that \[V_0(x,t) = 1.\] \(\lambda<0\): No solution.
Taking everything together we find that \[V(x,t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\left(\frac{n\pi t}{40\sqrt{LC}}\right) \cos\left(\frac{n\pi x}{40}\right).\] The one remaining initial condition gives \[V(x,0) = f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos\left(\frac{n\pi x}{40}\right).\] Use the Fourier cosine series (even continuation of \(f\)) to find \[\begin{aligned} a_0 & = \frac{1}{20} \int_0^{40} f(x)\, dx, \nonumber\\ a_n & = \frac{1}{20} \int_0^{40} f(x)\cos\left(\frac{n\pi x}{40}\right) dx.\end{aligned}\]
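The coefficient formulas above are easy to evaluate numerically for a given initial profile \(f\); a sketch of mine using the trapezoid rule (the step count is an arbitrary choice):

```python
import math

def cosine_coeffs(f, L=40.0, n_max=5, steps=4000):
    """a_n = (2/L) * integral_0^L f(x) cos(n pi x / L) dx, via the trapezoid rule.
    Note 2/L = 1/20 for L = 40, matching the formulas in the text."""
    h = L / steps
    xs = [i * h for i in range(steps + 1)]

    def integrate(g):
        vals = [g(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    return [(2.0 / L) * integrate(lambda x, n=n: f(x) * math.cos(n * math.pi * x / L))
            for n in range(n_max + 1)]
```

As a consistency check, feeding in \(f(x) = \cos(\pi x / 40)\) should return \(a_1 \approx 1\) and all other coefficients near zero, by orthogonality of the cosine modes on \([0, 40]\).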
I want to differentiate w.r.t. $\sigma^2$ the following equation: $$u'(Y)\mu + \frac{u''(Y)}{2}(\sigma^2 + \mu^2) = 0$$ where we can consider $\mu$ (reward) as an implicit function of $\sigma^2$ (risk) of small bets: $$u'(Y)\frac{d\mu}{d\sigma^2} +\frac{u''(Y)}{2} + \mu u''(Y) \frac{d\mu}{d\sigma^2}=0$$ Hence, $$\mu'(\sigma^2)=-\frac{u''(Y)}{2\left(u'(Y) + \mu u''(Y)\right)}$$ But the answer given in the PDF downloaded from the internet is $$\mu'(\sigma^2)=-\frac{u''(Y)}{2\left(u'(Y) + \mu(\sigma^2)\, u''(Y)\right)}$$ Which answer is correct? My other question is: what do the 1st and 2nd derivatives of the utility function imply? My second question is as follows: Consider a financial market with one single period, with interest rate $r$ and one stock $S$. Suppose that $S_0 =1$ and, at $n=1$, $S_1$ can take two different values: $2$ and $1/2$. For which values of $r$ is the market viable? (Viable means free of arbitrage opportunities.) What if $S_1$ can also take the value $1$? Solution provided: We want to calculate the values of $r$ such that there is an arbitrage opportunity. We take a portfolio with zero initial value, $V_0=0$. If we invest the amount $q$ in the riskless asset, we have to invest $-q$ in the risky stock ($q$ can be negative or positive). We calculate the value of this portfolio at time 1: $$V_1(\omega_1)=q(r-1) \tag{1}$$ $$V_1(\omega_2)=q(r+1/2) \tag{2}$$ So, if $r > 1$ there is an arbitrage opportunity taking $q$ positive (money in the bank account and a short position in the risky stock), and if $r < -1/2$ we have an arbitrage opportunity with $q$ negative (borrowing money and investing in the risky stock). The situation does not change if $S_1$ can also take the value $1$. I want to know: now, if $r<-1/2$, how is there an arbitrage opportunity? Would any member explain this in a reply? That means that to have a viable market, $r$ must be between $-1/2$ and $1$.
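The case analysis in the quoted solution can be checked mechanically; a sketch (the function names and the trial positions $q=\pm 1$ are my own choices):

```python
def payoff(q, r, s1):
    """Value at time 1 of: q in the bank at rate r, short q shares (S0 = 1)."""
    return q * (1 + r) - q * s1

def arbitrage_q(r, states=(2.0, 0.5)):
    """Return a position q whose payoff is positive in every state
    (an arbitrage from zero initial wealth), or None if q = +1 and q = -1
    both fail."""
    for q in (1.0, -1.0):
        if all(payoff(q, r, s) > 0 for s in states):
            return q
    return None
```

For $r > 1$ the positive-$q$ portfolio wins in both states; for $r < -1/2$ the negative-$q$ portfolio (borrow and go long the stock) does; for $r$ in between, neither sign of $q$ works, which is the viability region.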
Go back to the cold war era and start a "Russians are trying to warm the planet" scare. You will need a lot of money to fund some big advertising campaigns. You also want to seed a few specific technologies like nuclear and solar power to try and push them along. Let ignorance and paranoia work towards the betterment of mankind for a change. It's ... The climate of Earth has been roughed up quite a bit last century. But it has no idea what it's got coming with this portal of yours. Earth turns into Venus. Update: As R.M. pointed out, the amount of energy is not 'maybe a long term thing', it's the Major Issue. This has been fixed now. How much water are we talking? Let's say your portal is 10 km below ... It is technically possible to burn carbon dioxide, but not in a practical way. The reason burning carbon produces energy is that the total potential energy of carbon and oxygen is minimized by the CO2 configuration. Splitting them up into carbon and oxygen again requires an addition of energy. Therefore, in order to burn carbon dioxide, you need to find ... • 98% of all the landmass is wasteland/desert • The two poles are some big chunks of ice, each covering about 20% of Earth's area. This is rather confusing. Yes, I get that there are also cold deserts, but next to the glacier there will be the melting zone, where the bonanza of water will inevitably favor life. • There are only two climate zones, hot ... This will not work at all. The short version: If you drop an ice cube into a room temperature drink, the drink will be slightly colder for a little while. But what happens then? Answer: the drink warms up again until it has the same temperature as its surroundings. What happens when these melt? Does the drink stay cold? No, it does not. (Image source) The ... What keeps my planet's water from irreversibly concentrating over time on the frigid wastes while the rest of the planet dries up? When ice piles up, it will exert pressure.
The closer to the terminator, the less the ice. As a consequence, the pressure gradient will tend to push the ice sheet toward the terminator, where it will melt, returning water to the ... It takes a lot of energy to move the Earth. In this question, I calculate the energy needed to move the Earth. In order to move the Earth by 1 m in orbit, you will need to expend about $2\times10^{22}$ joules. This is about five orders of magnitude greater than the largest atomic bomb, and only one order of magnitude less than the Chicxulub asteroid that ... Two of my favorites for this scenario. Dig. Fresno is pretty close to what you describe. This guy Forestiere bought land sight unseen thinking he would grow fruit and nut trees, but on getting there realized it was worthless. So he dug. When he got low enough, he planted the trees. Take a tour off hwy 99 and visit Fresno's best kept secret • A hand-... The mad scientist sledgehammer option for this particular nut. Kill a very large slice of the world population. It worked when Europe colonised the Americas; so many natives were killed it actually changed the global climate. America colonisation 'cooled Earth's climate'. He travels back in time to the height of the cold war at its most unstable & ... Comets are tiny. Halley's mass is $2.2\times10^{15}$ kg. A cubic kilometer of water has a mass of $1\times10^{12}$ kg. Oceans hold $1.35\times10^{9}$ cubic kilometers, which gives us $1.35\times10^{21}$ kg of water. So, the mass of Halley is half a millionth of the ocean's. If it is 200 °C colder than Earth's oceans (say $-180$ °C for the comet, $20$ °C for the ocean) when dropped, the calculus of ... First observation: The portals as described in the question create a perpetuum mobile. Salt water under high pressure (from the bottom of the Mariana Trench) wells up at some point in the Sahara desert, becoming a fine source of hydroelectric power. It will create a river of salt water that may fill up some basins and eventually reach the sea (either the ...
You can't use carbon dioxide as fuel, and that's not what the article you cite is about. You can turn carbon dioxide (plus hydrogen or water) into fuel, but the process will need more energy than you will later release by burning the fuel, so you'll need to get that energy from somewhere. But yes; if you get the energy without burning fossil fuels and you ... TL;DR: The equations: $$r_{B} = \sqrt{\frac{F}{2.46\times 10^{-14}}}$$ or, rearranging, $$F = r_{B}^{2} \times 2.46\times 10^{-14}$$ where $F$ is the fraction of light blocked ($F=0.01$ gives your $1\%$) and $r_{B}$ is the radius of your satellite in meters which will achieve this. For a one percent reduction, using the equations above, we need a ... You're trying to cure the sickness by alleviating a symptom. You can't cure global warming by putting more pollution into the air. You may temporarily bring the patient's temperature down, but humanity will respond by turning up the heat. In the end, you'll make global warming much, much worse. Please keep in mind that global-warming/climate-change/name-... Librations. That is, the tidally locked planet is not in a perfectly circular orbit, and so the portion of the planet that is sun-facing is not constant. This is because the rate of rotation is (extremely nearly) constant, but the rate of revolution around the sun changes due to the non-circular nature. For the Earth's Moon, this is only a few degrees. If ... To me, it's the same as if you'd said: Stopping global warming by leaving the fridge open. Ice needs to be produced. This requires large quantities of energy for freezing water, which requires more and more power plants. You need large quantities of ice. There are 1,386 million cubic kilometres of water on Earth (Wiki); they need to be cooled. Even if: you ... You're positing a world that has no surface plants. That's... implausible. There are plants out there that can manage climatic extremes far worse than anything humans could live through.
Plants actively thrive in concentrations of CO2 that humans would find lethal. You might easily have a massive die-off as climate change modified local conditions, but ... According to your link: The process can work with any level of carbon dioxide concentration, Wu says -- they have tested it all the way from 2 percent to 99 percent -- but the higher the concentration, the more efficient the process is. The atmospheric concentration of carbon dioxide is 0.0391%. That's well under 2%. This would not work well at ... The best solution is to not nuke the poles. The poles are really-really-really big. The arctic ice sheet is around $20{,}000\ \text{km}^3$ of ice, or $2\times10^{13}\ \text{m}^3$! That's $1.8334\times10^{16}$ kilograms of ice. Melting ice takes 333.55 kJ/kg, so we'll need about 6,100,000,000 TJ of energy to melt it all. The RS-28 you reference is believed to be able to ... Here is an idea: What if Henry Ford had built his assembly line for an electric car rather than a gas powered car? Before the assembly line brought down the price of the Model T, electric cars were actually less expensive than gas cars. The assembly line would have made these even cheaper. Electric cars and infrastructure would need more electric power ... Direct bootstrap of nuclear fission technology in the 1700s. Sounds crazy, right? Not so fast. In order to reliably prevent runaway climate change, we must prevent the situation that caused it, namely cheap coal and oil power. This is quite well accomplished by getting there first with uranium, plutonium, and thorium reactors all at once. Since mining won't ... Could we do this? Yes, we could. It's been proposed before in a number of forms. Most calculations agree that this sunshade would need to reduce solar insolation by anywhere from 2-10%. If we take an optimistic figure - the lower bound of 2% - then we could achieve this by putting a shade 4.5 million square kilometers in area at the Sun-Earth Lagrange point, ... @SJuan76's answer is incomplete.
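The melt-energy figures in the "don't nuke the poles" answer check out; a back-of-envelope sketch (the density and latent-heat constants are standard values I've filled in):

```python
ICE_VOLUME_M3 = 2.0e13          # ~20,000 km^3 of arctic sea ice, per the answer
ICE_DENSITY = 917.0             # kg/m^3, standard value for ice
HEAT_OF_FUSION = 333.55e3       # J/kg to melt ice at 0 degrees C

mass_kg = ICE_VOLUME_M3 * ICE_DENSITY
energy_tj = mass_kg * HEAT_OF_FUSION / 1e12   # joules -> terajoules
print(f"{mass_kg:.3e} kg, {energy_tj:.2e} TJ")
```

This reproduces the quoted ~1.83e16 kg of ice and ~6.1e9 TJ of melt energy, before even counting the energy to warm the ice up to 0 °C.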
He did the math to show the small cooling effect of a comet, but didn't show the heating effect. I shamelessly copied his cooling numbers and added heating numbers to compare. Comets have a small cooling effect. Halley's mass is $2.2\times10^{15} \text{ kg}$. A cubic kilometer of water has a mass of $1\times10^{12} \text{ kg}$... First: how big would it have to be to appear as big as the moon from a typical satellite orbit? Here is a fine image of the ISS passing in front of the moon, and we can use it to gauge their relative apparent sizes. From https://www.space.com/6870-spot-satellites.html When I blew them up, I measured the diameter of the moon at 567 pixels and the ISS at 14. 567/14 ≈ 40, so a ... The surface conditions you're describing are closer to an ice age than any other situation. In practice the ice caps covered only 35% of the land mass, though they locked up the vast majority of the fresh water. This means that the ice age was one of the driest periods in the planet's history. The more of the water that's locked up in the poles, the drier ... Prevent Chernobyl and Three Mile Island so that adoption of nuclear power isn't regressed. This might not completely solve the problem, but it could cut enough emissions to buy a couple of decades, so that renewables and electric cars and other technologies become economical soon enough to prevent cataclysmic warming. No. The mass extinction you're hoping for (no plants and only microscopic animals) is impossible from these conditions. At the point that humans left the planet, it was possible for animal and plant life to exist, even if it was limited to certain areas. Because otherwise, they'd be dead before they left. Once humans were gone, they stopped their ... Effectiveness 10/10. Move further from the sun, you get less incoming solar radiation, so you cool the planet. Practicality 0/10. Planets are heavy. More specifically, the energy needed to increase the Earth's orbit would be absolutely astronomical (all puns intended).
The solution used in Futurama, as you describe it, is also utterly impractical. The ... I assume you prefer a scientifically correct answer, as in the portal can't violate energy conservation. In that case, we just can't have portals. But we can have a long tube, and let's just ignore friction inside that tube so we simulate most of the portal behaviour. It does not even matter if the portal tube ended in the Mariana Trench. Just a few meters ... Keep traveling back even when the incentive is lost. A big problem with solving problems through time travel is that once the problem is fixed, the incentive to travel back in time is lost, and thereby no one will travel back in time to keep the timeline fixed. So what the time traveler has to do is leave a note. Either to himself, or, if he never gets born in the new ...
“Klein-Fricke” and “Fricke-Klein” — to number theorists these mantras invariably recall memories of early number theory seminars, be it in late undergraduate school or graduate school, or, more likely, both, when modular forms first made their appearance in the curriculum. I’m pretty sure that for me it happened in the late 1970s at UCLA in a memorable seminar run, in his usual inimitable style, by the late Basil Gordon. But doubtless they, Klein and Fricke, made appearances, too, in my graduate seminars at UCSD: my advisor, Audrey Terras, was always sure to refer back to classical sources whenever possible. So it is that Felix Klein and Robert Fricke (his PhD student at Göttingen in its Golden Age), mentioned in that order, are the authors of the (two-volume) Lectures on the Theory of Elliptic Modular Forms, and the same pair, in reverse order, are responsible for the (also two-volume) Lectures on the Theory of Automorphic Forms. The original publication dates of these works are, respectively, 1890 and 1892 for the first set, and 1897 and 1912 for the second. Given that automorphic forms date their official discovery to Henri Poincaré in the 1880s, these encyclopedic tomes appeared as timely, if not avant garde, works. They made a huge impact. As all of us who are familiar with E. T. Bell’s Men of Mathematics and Constance Reid’s Hilbert will immediately recall, this momentous discovery by Poincaré came in the setting of a competition with another young and brilliant mathematician who was hot on the trail, none other than Klein. Evidently the race nigh on brought the latter to nervous exhaustion, and, as I recall one of the aforementioned biographers saying, the result of the competition was essentially a tie.
Here, by the way, for good measure, is Poincaré’s own account of his fabulous discovery (which he at first christened Fuchsian functions): For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours. I am reminded of the aphorism usually ascribed to Paul Erdős, but evidently actually due to Alfréd Rényi, to the effect that a mathematician is a machine for turning caffeine into theorems. In any event, Felix Klein emerged as one of the true early masters of this burgeoning field. While certainly at home in number theory, it required a lot of beautiful and new and living mathematics from other parts of the discipline, including, for example, hyperbolic geometry, the theory of transformation groups (and, before too long, representation theory), and all sorts of marvels from complex analysis, including the theory of Riemann surfaces. The number theoretic results properly so-called were, of course, nothing less than spectacular, and the automorphic forms industry, so to speak, continued to boom, especially after Klein brought David Hilbert to the Georg August Universität in Göttingen in 1895: one has merely to consider what just Carl Ludwig Siegel and Erich Hecke, both at Göttingen, brought about in this subject. And of course there is Robert Fricke, Klein’s student, and we now get to the books under review.
The titles making up Klein-Fricke plus Fricke-Klein are arranged as volumes 1–4 in the Classical Topics in Mathematics series published by Higher Education Press, Beijing, China, and distributed by the AMS. The books in question are offered as a paean to “Klein’s vision of the grand unity of mathematics,” and the books’ back covers accordingly allude to Klein’s vaunted Erlangen program. It is indeed hard to imagine anything more consonant with his philosophy than the theory of modular or automorphic forms, what with Klein’s Erlangen program centered on the fundamental role played by group theory in the structure and ensuing form of mathematics, particularly geometries. Just recall for a moment the now-familiar scenario: a discrete (e.g. Fuchsian) group acts on a symmetric space (e.g. the complex upper half plane), and one looks for a uniform way in which to describe what happens when the action is lifted to complex functions on such a space. A sort of functional equations yoga emerges, with the action by the group’s generators on such functions taking center stage. One defines such notions as weights, levels, etc., and makes certain natural identifications resulting in the delineation of fundamental domains for this data. Then comes some natural topology, the prototype being the emergence of the Riemann surfaces obtained from the (compactified) fundamental domains for the action of subgroups of the special linear group over the integers on the complex upper half plane: the theory of elliptic modular forms. Here the geometry is hyperbolic. This is very beautiful mathematics in its own right, but more should be said regarding connections with perhaps more familiar objects in Zahlentheorie. Arguably the most effective illustration of this role played by modular forms is that of the Hecke correspondence.
In rough terms what this powerful interplay is all about is the fact that via the services of the Mellin and inverse Mellin (integral) transforms, one obtains a correspondence between, on the one hand, certain modular forms, and, on the other hand, certain types of Dirichlet series. Modifying one set, e.g. by playing with weights, levels, and characters, on the modular forms end, brings about mirror effects (after a fashion) at the other end. Thus, results in one setting translate to results in the other setting, and we soon have at our disposal a set of very fecund tools to do analytic number theory. It is worth noting that this material is heavily imbued with Fourier analysis (since we get Fourier expansions of modular forms at cusps of their fundamental domains), and the trailblazer along this magical path is none other than Riemann, and, yes, indeed, once again it’s in his magisterial paper, Über die Anzahl der Primzahlen unter einer gegebenen Größe. There is abundant reason for the famous quip by Martin Eichler that there are not four but five arithmetical operations, viz., \(+\), \(-\), \(\times\), \(\div\), and modular forms. Very well, then, on to the four books under review. They all come equipped with the same (excellent) introduction by Lizhen Ji, titled, “Why should one open and read Klein-Fricke and Fricke-Klein?” This essay is in itself a gem, and addresses the following themes: why these books are (justly) classics; Klein vs. 
Poincaré; Klein's "vision"; the historical role played by these books (and "unearthed treasure"); a rationale for modular and automorphic forms; how to prepare for reading these works; and finally a couple of rather piquant subsections: "Why does Fricke-Klein have a reputation for being difficult and not clear?," and "A few facts about Robert Fricke." These last two themes are interesting in themselves, and I won't give the game away by elaborating too much: the interested reader should crack the books. To convey the special features of each of the four books would be an altogether herculean undertaking, so suffice it to say that the books' tables of contents will give a good idea of what's available. It should be noted, however, that Ji has very good reasons for including §6, "Preparatory reading for Klein-Fricke and Fricke-Klein," in his introductory essay. Here, caveat lector, are a few of his remarks: Both books … are not easy to read, and even the famous and more expository book [that Klein wrote on the icosahedron] is not easy to many people … [and] there have been several recent attempts to explain some ideas of [loc. cit.] in modern language … But these expositions do not seem to convey completely all the major ideas and results …, or its connections with other work of Klein, in particular [the present books]. Therefore, it is very valuable to read the original writing. But even in English translation, the style is no longer what we are used to, and maybe it was even rough going for the readers of the late 19th and early 20th century, when the Satz-Beweis style that we are now so attuned to had not yet swept the field. Ji provides a related comment by none other than Jean-Pierre Serre (in connection with the aforementioned book on the icosahedron): I am sorry to have been slow in answering your query about Klein's icosahedron book. I have been looking at it, off and on, for the past weeks without being able to write anything.
Indeed, as regards the books under review, which are a horse of the same color as far as expository style goes, it is evident that while the authors' scope and their corresponding attention to detail in each of the four books are meritorious indeed, there are no modern books written in the style of Klein-Fricke and Fricke-Klein. Here is a snippet from the first book in the quartet, just to give an illustration of what is going on. Book 1, p. 593: … one can now define the Galoisian problem of 168th degree in form-theoretic form as follows: \(g_2,\,g_3,\,\Delta\) are given numerically, according to their [usual Weierstrassian] relations; from [earlier] equations, to which we also have to add the equation \(f(z_\alpha)=0\), one requires the associated solution system \(z_1,z_2,z_3\) to be calculated. We … designate this … as the form problem of the \(z_\alpha\). However, in order to state our problem in function-theoretic form, we set \(z_1=x,\, z_2=y,\, z_4=1\) say, whereby [we get] the following equations: \(J:J-1:1 = \phi^3(x,y,1):\psi^2(x,y,1):-1728X^7(x,y,1),\, x^3+xy^3+y=0\). Our equation problem of 168th degree is now, to calculate, for given \(J\), from these equations, the associated solution system \(x,y\). What a remarkable application of elliptic modular forms to Galois theory this is, and the reader may well find its appeal irresistible, but it is part of a long narrative, much along the lines of a verbatim transcript of a long lecture, and the same reader has to cultivate the requisite Sitzfleisch to get it all, from soup to nuts. And he had better have a sharp pencil ready. It would perhaps be a worthwhile — but highly non-trivial — enterprise to redo this theorem in modern form: Sätze und Beweise. But the scholar who does this has about 2000 more pages to play with: these four tomes are Teutonically beefy. Even without the benefit of modern vernacular, however, any serious number theorist would do well to read these books.
In fact, the added difficulty of dealing with an unfamiliar style, allowing for so much commentary beyond what today's writing rubrics might dictate, can only add to the impact this beautiful mathematics would have on the patient and diligent reader. After all, Abel's aphorism is ever so true: we should learn from the masters, not their pupils (allowing in this case for the fact that, in the beginning, Fricke was Klein's student). One final comment: these books each contain a copy (all isomorphic) of a set of commentaries by contemporary mathematicians. Richard Borcherds highlights the "bizarre properties" of Klein's elliptic modular function, as well as a lot more. Jeremy Gray addresses historical material. William Harvey writes on Fricke-Klein: automorphic functions as such. Barry Mazur's essay is in itself an excellent review of much of the material in these books. There is one essay jointly by Caroline Series, David Mumford, and David Wright, and one essay by Domingo Toledo. Lastly, there is a collection of five very short commentaries, by Igor Dolgachev, Linda Keen, Robert Langlands, Yuri Manin, and Ken Ono. These address in broad terms why the enterprise of bringing Klein-Fricke and Fricke-Klein to a modern audience, in English, is so commendable. Ono is particularly encouraging: It pays to read the seminal works by the greats, and with these translations these classic works will be reintroduced to generations of mathematicians. Langlands, however, strikes a note of warning: The indifference to the mathematical past and the distance from it will, I suppose, … become more pronounced if or when the Asian nations assume a major role in mathematics and even English ceases to be the principal, or even an adequate, medium of communication. The possibility of reducing mathematics to a trivial pursuit, a struggle for a large number of citations, for a prize, or just for a tenured position, is also there. It is difficult not to be pessimistic!
Against this threat, these four volumes stand as an obvious part of a necessary counteroffensive. It is noteworthy and encouraging that they are published in China: perhaps Langlands’ pessimism might be somewhat mitigated. Time will tell. Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
What happens to the invariant mass of an object when it gets closer to or further from a gravitating body? In Special Relativity, you learn that invariant mass is computed by taking the difference between energy squared and momentum squared. (For simplicity, I'm saying c = 1.) [tex] m^2 = E^2 - \vec{p}^2 [/tex] This can also be written with the Minkowski metric as: [tex] m^2 = \eta_{\mu\nu} p^\mu p^\nu [/tex] More generally, if there is a different metric (for example Schwarzschild), you would write it as: [tex] m^2 = g_{\mu\nu} p^\mu p^\nu [/tex] Now the question is, if invariant mass does not change from one metric to the other, you get the equation: [tex] 0 = (g_{\mu\nu} - \eta_{\mu\nu})p^\mu p^\nu [/tex] This seems to give unphysical results. I solved for a photon in the Schwarzschild metric, and the only physical solution available is if the Schwarzschild radius is 0. So this seems to imply that invariant mass (or lack thereof) is not invariant under gravitational fields. Any help here would be much appreciated. Thank you.
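To see concretely why the last equation need not hold, here is a small numerical sketch (my own illustration with made-up values, not from the post): a 4-momentum that is null with respect to the Schwarzschild metric at some radius has components that are not null with respect to the Minkowski metric, so there is no reason for \((g_{\mu\nu}-\eta_{\mu\nu})p^\mu p^\nu\) to vanish. The null condition is imposed in whichever metric describes the spacetime; one does not equate component expressions across two different metrics.

```python
# Sketch: the photon condition g_{mu nu} p^mu p^nu = 0 is imposed in each
# metric separately; the same components are not null in both metrics.
# Radial motion only, signature (+,-), units G = c = 1; values illustrative.

def norm(g, p):
    """Contract a diagonal 2x2 metric diag(g_tt, g_rr) with (p^t, p^r)."""
    return g[0] * p[0] ** 2 + g[1] * p[1] ** 2

r, rs = 10.0, 2.0                 # radius and Schwarzschild radius (made up)
f = 1.0 - rs / r
g_schw = (f, -1.0 / f)            # radial Schwarzschild metric components
g_mink = (1.0, -1.0)              # Minkowski metric components

p = (1.0, f)                      # p^r = f * p^t solves the Schwarzschild null condition
print(norm(g_schw, p))            # ~0: null in the Schwarzschild metric
print(norm(g_mink, p))            # ~0.36 = 1 - f^2: NOT null w.r.t. Minkowski
```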
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed to the close voter, not the question in meta. When you talk about my original post, you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses MathJax 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen to be a multiple power of 3? Edit: Although this questio...
The major challenge is how to restrict the possible relaxation pathways so that, in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above. If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
How can I convince myself that wavefunctions of electrons on molecular orbitals are indeed standing waves? Is it a consequence of the fact that electrons don't drift away from the molecule? In other words, can one prove from the Schrödinger equation that, unless $\psi(x,t)$ can be represented as $\phi(x)\theta(t)$, then $\lim_{t \to \infty}\int_U |\psi(\bar x,t)|^2d\bar x=0$ for any bounded set $U\subset \mathbb R^3$ (or something along those lines)? Or are there physical considerations that explain the standing waves? Update. Apparently «standing wave» is an ambiguous/controversial term here, so let me reformulate my question in a more mathematical and unambiguous way without referring to standing waves. Let a wavefunction $\psi$ correspond to a stationary state, i.e. $|\psi(x,t)|=\mathrm{const}(t)$. We can conclude, then, that $\psi(x,t)=\phi(x)\theta(x,t)$, where $|\theta(x,t)|=1$. In order to separate the variables and move on to the time-independent Schrödinger equation, we also need to establish that $\theta(x,t)$ doesn't depend on $x$. Where does this assumption follow from?
In this article, we are going to derive and implement an iterative algorithm to compute the variance and mean of an incoming continuous stream of data. Recently, a friend of mine was asked this question in an interview: Suppose you are creating an IoT sensor device that emits output as a continuous stream of integer values. While most of the data is transmitted to a central server for logging, the local node needs to maintain some statistics about the data. This data is modelled by a random variable \(X\) and it is assumed that the values for this random variable are drawn from a uniform distribution. As a designer of the IoT device, you are required to maintain the variance and mean of the incoming stream of data on the device. Conservation of memory is paramount. Design an algorithm that can achieve this. Now, as a designer - for a moment - let us just forget all about random variables and distributions. The problem is, we want to store the mean and variance of all the data that has ever been sensed, and find a way to update them as and when new data arrives. So, we are looking for some kind of recursive formula which we could implement in code. Here, we'll don our statistics cap and find one such formula. Deriving the Recurrence Relation Assume that the data is represented by a sequence \(\{x_1, x_2 \dots x_{n-1}, x_n, x_{n+1} \dots\}\). Further, suppose that we are looking at this sequence of data at instant \(n\). Then, so far, we would have received \(n\) data samples.
Mean of \(n\) data samples: \(\mathbb{E}[X] = \overline{X_n} = \frac{x_1+x_2+\dots +x_{n-1}+ x_n}{n} = \frac{\sum_{i=1}^{n}x_i}{n}\) We can break it down as follows: \(\overline{X_n} = \frac{x_1+x_2+\dots +x_{n-1}}{n} + \frac{x_n}{n} = \frac{\sum_{i=1}^{n-1}x_i}{n}+ \frac{x_n}{n} \) Multiplying and dividing the first term by \((n-1)\) and simplifying, we can reduce this expression to an elegant recursive formula: \(\overline{X_n} = \frac{n-1}{n} \frac{\sum_{i=1}^{n-1}x_i}{n-1}+ \frac{x_n}{n} = \frac{n-1}{n} {\overline{X}}_{n-1} + \frac{x_n}{n}\) Thus, our recursive formula can be written as: \(\overline{X_{n+1}} = \frac{n}{n+1}\overline{X_n} + \frac{x_{n+1}}{n+1}\) Similarly, for the variance, we can write a recurrence relation derived from the standard formula: \(var(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\) Now, we've already found the recurrence relation for \(\mathbb{E}[X]\). If we can find a similar recurrence relation for \(\mathbb{E}[X^2]\), we'll have found our gold. So, \(\mathbb{E}[X^2] = \sum_{i=1}^{n}{x_i}^2P(X=x_i)\). Here, \(P(X=x_i)\) is the probability of the random variable \(X\) taking the value \(x_i\). Treating each of the \(n\) observed samples as equally likely, this probability is \(\frac{1}{n}\) and may be taken out of the summation. Note that we did this step implicitly in the previous derivation. Then, \(\mathbb{E}(X^2) = \frac{\sum_{i=1}^{n} {x_i}^2}{n} = \frac{n-1}{n}\cdot\frac{\sum_{i=1}^{n-1} {x_i}^2}{n-1} + \frac{{x_n}^2}{n}\) With this, we have our desired recurrence relation for the variance: \((var(X))_{n+1} = \left(\frac{n}{n+1}\mathbb{E}_n({X^2}) + \frac{{x_{n+1}}^2}{n+1}\right) - \left(\frac{n}{n+1}\mathbb{E}_n(X) + \frac{x_{n+1}}{n+1}\right)^2\) So, now that we have our recurrence relations, we can store a minimal amount of data and still be able to compute the mean and variance of this data. Note that we are not actually storing the variance of the data, but the second moment of the data.
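The two recurrences can be sanity-checked in a few lines before committing them to a device. Here is a sketch in Python (the article's own implementation below is in MATLAB/Octave); the helper name `online_stats` is mine, not the article's:

```python
import random

def online_stats(stream):
    """Update E[X] and E[X^2] with the recurrences derived above."""
    e_x = 0.0        # running mean, E[X]
    e_x2 = 0.0       # running second moment, E[X^2]
    for n, x in enumerate(stream):   # n = samples seen before x arrives
        e_x = (n / (n + 1)) * e_x + x / (n + 1)
        e_x2 = (n / (n + 1)) * e_x2 + x * x / (n + 1)
    return e_x, e_x2 - e_x ** 2      # mean and (population) variance

random.seed(42)
data = [random.random() for _ in range(50_000)]
mean, var = online_stats(data)

# Compare against the direct batch formulas.
batch_mean = sum(data) / len(data)
batch_var = sum((x - batch_mean) ** 2 for x in data) / len(data)
print(abs(mean - batch_mean) < 1e-9, abs(var - batch_var) < 1e-9)  # True True
```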
From this second moment \(\mathbb{E}(X^2)\) and the first moment (or mean) \(\mathbb{E}(X)\) (or \(\overline{X}\)), we are able to calculate the variance of the data up to the time instant \(n\). All in all, we keep just three variables, E_x, E_x_squared and n, and by recursively updating these values every time a new data value arrives, we satisfy the requirement of saving the mean and variance without using too much memory. This kind of iterative approach, where we learn and update the new values on-the-fly, is often called online learning or sequential learning in a machine learning context. (The other version is batch learning, where we feed our algorithm chunks, or batches, of data and operate on them collectively.) Now, let's put down the statistics hat and wear the programmer's hat. Can we validate these relations? Let us write a very short piece of code to see if they work. Sample Implementation Presented below is a very short program written in MATLAB/Octave, where we've simulated the incoming sequence (pre-generated the sequence) and fed it iteratively into the computational loop. We stop at random points to see if the computed mean and variance are indeed what they should be.

%% Variables
N = 10^6;
stream = rand([N,1]);
pause_at = sort(randperm(N, 10));
E_x = 0;
E_x_squared = 0;

for n = 0:N-1
    E_x = ((n/(n+1))*(E_x)) + (stream(n+1)/(n+1));
    E_x_squared = ((n/(n+1))*(E_x_squared)) + (stream(n+1)^2/(n+1));
    if (find(pause_at == n))
        fprintf('\n Computed: N=%d. Mean=%f, Variance = %f', n, E_x, ...
                E_x_squared - (E_x)^2);
        fprintf('\n Analytical: Mean=%f, Variance = %f', ...
                mean(stream(1:n+1)), var(stream(1:n+1)));
    end
end

Here's a sample output:

Computed: N=22504. Mean=0.500859, Variance = 0.083856
Analytical: Mean=0.500859, Variance = 0.083860
Computed: N=60197. Mean=0.499909, Variance = 0.083266
Analytical: Mean=0.499909, Variance = 0.083268
Computed: N=111533. Mean=0.499907, Variance = 0.083466
Analytical: Mean=0.499907, Variance = 0.083467
Computed: N=121321. Mean=0.500081, Variance = 0.083466
Analytical: Mean=0.500081, Variance = 0.083467
Computed: N=123488. Mean=0.499915, Variance = 0.083462
Analytical: Mean=0.499915, Variance = 0.083463
Computed: N=352519. Mean=0.499968, Variance = 0.083501
Analytical: Mean=0.499968, Variance = 0.083501
Computed: N=392207. Mean=0.499799, Variance = 0.083529
Analytical: Mean=0.499799, Variance = 0.083529
Computed: N=479770. Mean=0.499807, Variance = 0.083528
Analytical: Mean=0.499807, Variance = 0.083528
Computed: N=480705. Mean=0.499824, Variance = 0.083528
Analytical: Mean=0.499824, Variance = 0.083528
Computed: N=903026. Mean=0.500124, Variance = 0.083363
Analytical: Mean=0.500124, Variance = 0.083363

(The tiny discrepancies in the variance column arise because MATLAB's var uses the unbiased n−1 denominator, while our recurrence computes the population variance.) Yep, we're doing fine! Easy!
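One caveat worth flagging that the article does not cover: computing the variance as \(\mathbb{E}[X^2] - (\mathbb{E}[X])^2\) can lose precision to catastrophic cancellation when the variance is small relative to the mean. A standard remedy is Welford's online algorithm, which tracks a centered sum of squares instead. A minimal Python sketch (my code, not the article's):

```python
def welford(stream):
    """Welford's online algorithm: numerically stable streaming mean/variance."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)   # uses both the old and the new mean
    return mean, m2 / n            # population variance (use n-1 for sample)

# A large offset makes E[X^2] - E[X]^2 cancellation-prone; Welford copes.
data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
mean, var = welford(data)
print(mean, var)   # 1000000010.0 22.5
```

It uses the same three pieces of state (count, mean, accumulated spread), so the memory budget of the interview problem is unchanged.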
Difference between revisions of "Talk:Absolute continuity" : Why not? --[[User:Boris Tsirelson|Boris Tsirelson]] 13:21, 30 July 2012 (CEST) : Fine by me [[User:Camillo.delellis|Camillo]] 14:08, 30 July 2012 (CEST) Revision as of 10:45, 10 August 2012 Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure, since: it is very commonly used, almost standard; it would be consistent with the notation for a general measure, $\mu$; calligraphic letters are being used already for $\sigma$-algebras --Jjg 12:57, 30 July 2012 (CEST) Between the metric setting and References I would like to type the following lines. But for some reason which is mysterious to me, any time I try, the page comes out a mess... Camillo 10:45, 10 August 2012 (CEST) if for every $\varepsilon > 0$ there is a $\delta > 0$ such that, for any $a_1<b_1<a_2<b_2<\ldots < a_n<b_n \in I$ with $\sum_i |a_i -b_i| <\delta$, we have \[ \sum_i d (f (b_i), f(a_i)) <\varepsilon\, . \] Absolute continuity guarantees uniform continuity. As for real-valued functions, there is a characterization through an appropriate notion of derivative. Theorem 1 A continuous function $f$ is absolutely continuous if and only if there is a function $g\in L^1_{loc} (I, \mathbb R)$ such that\begin{equation}\label{e:metric}d (f(b), f(a))\leq \int_a^b g(t)\, dt \qquad \forall a<b\in I\,\end{equation}(cp. with ). This theorem motivates the following Definition 2 If $f:I\to X$ is absolutely continuous and $I$ is compact, the metric derivative of $f$ is the function $g\in L^1$ with the smallest $L^1$ norm such that \ref{e:metric} holds (cp. with ) How to Cite This Entry: Absolute continuity. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Absolute_continuity&oldid=27468
Ancillary-file links: Ancillary files (details): python3_src/KW_scaldims.py python3_src/LICENSE python3_src/README python3_src/TNR.py python3_src/cdf_ed_scaldimer.py python3_src/cdf_scaldimer.py python3_src/custom_parser.py python3_src/ed_scaldimer.py python3_src/initialtensors.py python3_src/modeldata.py python3_src/pathfinder.py python3_src/scaldim_plot.py python3_src/scaldimer.py python3_src/scon.py python3_src/scon_sparseeig.py python3_src/tensordispenser.py python3_src/tensors/abeliantensor.py python3_src/tensors/ndarray_svd.py python3_src/tensors/symmetrytensors.py python3_src/tensors/tensor.py python3_src/tensors/tensor_test.py python3_src/tensors/tensorcommon.py python3_src/tensorstorer.py python3_src/timer.py python3_src/toolbox.py Condensed Matter > Strongly Correlated Electrons Title: Topological conformal defects with tensor networks (Submitted on 11 Dec 2015 (v1), last revised 23 Sep 2016 (this version, v3)) Abstract: The critical 2d classical Ising model on the square lattice has two topological conformal defects: the $\mathbb{Z}_2$ symmetry defect $D_{\epsilon}$ and the Kramers-Wannier duality defect $D_{\sigma}$. These two defects implement antiperiodic boundary conditions and a more exotic form of twisted boundary conditions, respectively. On the torus, the partition function $Z_{D}$ of the critical Ising model in the presence of a topological conformal defect $D$ is expressed in terms of the scaling dimensions $\Delta_{\alpha}$ and conformal spins $s_{\alpha}$ of a distinct set of primary fields (and their descendants, or conformal towers) of the Ising CFT. This characteristic conformal data $\{\Delta_{\alpha}, s_{\alpha}\}_{D}$ can be extracted from the eigenvalue spectrum of a transfer matrix $M_{D}$ for the partition function $Z_D$.
In this paper we investigate the use of tensor network techniques to both represent and coarse-grain the partition functions $Z_{D_\epsilon}$ and $Z_{D_\sigma}$ of the critical Ising model with either a symmetry defect $D_{\epsilon}$ or a duality defect $D_{\sigma}$. We also explain how to coarse-grain the corresponding transfer matrices $M_{D_\epsilon}$ and $M_{D_\sigma}$, from which we can extract accurate numerical estimates of $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\epsilon}}$ and $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\sigma}}$. Two key new ingredients of our approach are (i) coarse-graining of the defect $D$, which applies to any (i.e. not just topological) conformal defect and yields a set of associated scaling dimensions $\Delta_{\alpha}$, and (ii) construction and coarse-graining of a generalized translation operator using a local unitary transformation that moves the defect, which only exists for topological conformal defects and yields the corresponding conformal spins $s_{\alpha}$. Submission history From: Markus Hauru [view email] [v1] Fri, 11 Dec 2015 23:01:19 GMT (1352kb,D) [v2] Mon, 16 May 2016 22:16:02 GMT (1920kb,AD) [v3] Fri, 23 Sep 2016 19:16:05 GMT (1919kb,AD)
Solution of the Duffing Equation Part of the Fundamental Theories of Physics book series (FTPH, volume 60) Chapter Abstract Consider the Duffing equation with variable excitation and constant coefficients $\alpha$, $\beta$, $\gamma$: $$\begin{gathered} u'' + \alpha u' + \beta u + \gamma u^3 = \delta(t) \hfill \\ u(0) = c_0, \quad u'(0) = c_1 \hfill \\ \end{gathered}$$ The excitation $\delta(t)$ will be written as a series $\delta(t) = \sum_{n=0}^{\infty} \delta_n t^n$. Let $L = d^2/dt^2$. Then $L^{-1}$ will be the two-fold integration from 0 to $t$. Keywords: Excitation Frequency; White Noise Excitation; Decomposition Solution; Duffing Equation; Sine Series. These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves. Suggested Reading 1. A. Blaquière, Nonlinear System Analysis, Academic Press (1988). 2. J. Hale, Oscillations in Nonlinear Systems, McGraw-Hill (1963). 3. C. Hayashi, Nonlinear Oscillations in Physical Systems, McGraw-Hill (1964). 4. G. Duffing, Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische Bedeutung, Vieweg (1918). 5. J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations, Springer-Verlag (1983). 6. K. Kreith, Oscillation Theory, Springer-Verlag (1973). 7. P. Hagedorn, Nonlinear Oscillations, 2nd ed., Clarendon (1988). 8. J. D. Cole, Perturbation Methods in Applied Mathematics, Blaisdell (1968). 9. A. A. Andronov, A. A. Vitt, and S. E. Khaikin (F. Immirzi, transl.), Theory of Oscillators, Addison-Wesley (1966). © Springer Science+Business Media Dordrecht 1994
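The inverse-operator scheme sketched in the abstract can be prototyped directly on truncated Taylor series. The sketch below is my own illustration (made-up coefficients and forcing, not the chapter's worked example): polynomials are coefficient lists, $L^{-1}$ is two-fold integration from $0$ to $t$, and we iterate $u \leftarrow c_0 + c_1 t + L^{-1}(\delta - \alpha u' - \beta u - \gamma u^3)$, which fixes more Taylor coefficients on each pass.

```python
from fractions import Fraction as F

ORDER = 8  # keep Taylor coefficients of t^0 .. t^7

def padd(*ps):
    n = max(len(p) for p in ps)
    return [sum((p[i] for p in ps if i < len(p)), F(0)) for i in range(n)]

def pscale(c, p):
    return [c * a for a in p]

def pmul(p, q):
    out = [F(0)] * min(len(p) + len(q) - 1, ORDER)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            if i + j < ORDER:
                out[i + j] += x * y
    return out

def pdiff(p):
    return [i * x for i, x in enumerate(p)][1:] or [F(0)]

def pint2(p):
    # L^{-1}: integrate twice from 0 to t
    once = [F(0)] + [x / (i + 1) for i, x in enumerate(p)]
    return ([F(0)] + [x / (i + 1) for i, x in enumerate(once)])[:ORDER]

# Illustrative data: u'' + a u' + b u + g u^3 = delta(t), u(0)=1, u'(0)=0
a, b, g = F(1, 10), F(1), F(1, 2)
c0, c1 = F(1), F(0)
delta = [F(1), F(1)]                       # delta(t) = 1 + t

u = [c0, c1]
for _ in range(6):                         # each pass fixes one more coefficient
    u3 = pmul(pmul(u, u), u)
    rhs = padd(delta, pscale(-a, pdiff(u)), pscale(-b, u), pscale(-g, u3))
    u = padd([c0, c1], pint2(rhs))[:ORDER]

# Check: the residual u'' + a u' + b u + g u^3 - delta vanishes to the
# order the truncation can certify.
u3 = pmul(pmul(u, u), u)
res = padd(pdiff(pdiff(u)), pscale(a, pdiff(u)), pscale(b, u),
           pscale(g, u3), pscale(F(-1), delta))
print(all(x == 0 for x in res[:6]))  # True
```

Exact rational arithmetic (`fractions.Fraction`) makes the zero residual a clean check; the chapter's own decomposition machinery is, of course, far more general than this toy.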
5.4 Average Value of a Function We often need to find the average of a set of numbers, such as an average test grade. Suppose you received the following test scores in your algebra class: 89, 90, 56, 78, 100, and 69. Your semester grade is your average of test scores and you want to know what grade to expect. We can find the average by adding all the scores and dividing by the number of scores. In this case, there are six test scores. Thus, \[\dfrac{89+90+56+78+100+69}{6}=\dfrac{482}{6}≈80.33.\] Therefore, your average test grade is approximately 80.33, which translates to a B− at most schools. Suppose, however, that we have a function \(v(t)\) that gives us the speed of an object at any time t, and we want to find the object’s average speed. The function \(v(t)\) takes on an infinite number of values, so we can’t use the process just described. Fortunately, we can use a definite integral to find the average value of a function such as this. Let \(f(x)\) be continuous over the interval \([a,b]\) and let \([a,b]\) be divided into n subintervals of width \(Δx=(b−a)/n\). Choose a representative point \(x^∗_i\) in each subinterval and calculate \(f(x^∗_i)\) for \(i=1,2,…,n.\) In other words, consider each \(f(x^∗_i)\) as a sampling of the function over each subinterval. The average value of the function may then be approximated as \[\dfrac{f(x^∗_1)+f(x^∗_2)+⋯+f(x^∗_n)}{n},\] which is basically the same expression used to calculate the average of discrete values. But we know \(Δx=\dfrac{b−a}{n},\) so \(n=\dfrac{b−a}{Δx}\), and we get \[\dfrac{f(x^∗_1)+f(x^∗_2)+⋯+f(x^∗_n)}{n}=\dfrac{f(x^∗_1)+f(x^∗_2)+⋯+f(x^∗_n)}{\dfrac{(b−a)}{Δx}}.\] Following through with the algebra, the numerator is a sum that is represented as \(\sum_{i=1}^nf(x^∗_i),\) and we are dividing by a fraction. To divide by a fraction, invert the denominator and multiply.
Thus, an approximate value for the average value of the function is given by \(\dfrac{\sum_{i=1}^nf(x^∗_i)}{\dfrac{(b−a)}{Δx}}=(\dfrac{Δx}{b−a})\sum_{i=1}^nf(x^∗_i)=(\dfrac{1}{b−a})\sum_{i=1}^nf(x^∗_i)Δx.\) This is a Riemann sum. Then, to get the exact average value, take the limit as n goes to infinity. Thus, the average value of a function is given by \(\dfrac{1}{b−a}\lim_{n→∞}\sum_{i=1}^nf(x_i)Δx=\dfrac{1}{b−a}∫^b_af(x)dx.\) Definition: average value of the function Let \(f(x)\) be continuous over the interval \([a,b]\). Then, the average value of the function \(f(x)\) (or \(f_{ave}\)) on \([a,b]\) is given by \[f_{ave}=\dfrac{1}{b−a}∫^b_af(x)dx.\] Example \(\PageIndex{8}\): Finding the Average Value of a Linear Function Find the average value of \(f(x)=x+1\) over the interval \([0,5].\) Solution First, graph the function on the stated interval, as shown in Figure \(\PageIndex{10}\). Figure \(\PageIndex{10}\): The graph shows the area under the function \(f(x)=x+1\) over \([0,5].\) The region is a trapezoid lying on its side, so we can use the area formula for a trapezoid \(A=\dfrac{1}{2}h(a+b),\) where h represents height, and a and b represent the two parallel sides. Then, \(∫^5_0(x+1)dx=\dfrac{1}{2}h(a+b)=\dfrac{1}{2}⋅5⋅(1+6)=\dfrac{35}{2}\). Thus the average value of the function is \(\dfrac{1}{5−0}∫^5_0(x+1)dx=\dfrac{1}{5}⋅\dfrac{35}{2}=\dfrac{7}{2}\). Exercise \(\PageIndex{7}\) Find the average value of \(f(x)=6−2x\) over the interval \([0,3].\) Hint Use the average value formula, and use geometry to evaluate the integral. Answer 3 Key Concepts The definite integral can be used to calculate net signed area, which is the area above the x-axis less the area below the x-axis. Net signed area can be positive, negative, or zero. The average value of a function can be calculated using definite integrals.
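The limit definition above also lends itself to a direct numerical check. Here is a short sketch (not part of the text) that approximates \(f_{ave}\) with a midpoint Riemann sum and reproduces both the worked example and the exercise:

```python
def average_value(f, a, b, n=100_000):
    """Approximate (1/(b-a)) * integral of f over [a,b] by a midpoint Riemann sum."""
    dx = (b - a) / n
    total = sum(f(a + (i + 0.5) * dx) for i in range(n))  # midpoint rule
    return total * dx / (b - a)

print(average_value(lambda x: x + 1, 0, 5))      # ≈ 3.5, i.e. 7/2 as in the example
print(average_value(lambda x: 6 - 2 * x, 0, 3))  # ≈ 3, matching the exercise
```

For linear functions the midpoint rule is exact on every subinterval, so the approximation agrees with the geometric (trapezoid) computation up to floating-point error.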
Key Equations

Definite Integral

\(\displaystyle ∫^b_af(x)dx=\lim_{n→∞}\sum_{i=1}^nf(x^∗_i)Δx\)

Glossary

average value of a function
(or \(f_{ave}\)) the average value of a function on an interval can be found by calculating the definite integral of the function and dividing that value by the length of the interval

variable of integration
indicates which variable you are integrating with respect to; if it is \(x\), then the function in the integrand is followed by \(dx\)

Contributors

Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
The string operator is a way to study the quasiparticle excitations in the string-net model http://arxiv.org/abs/cond-mat/0404617.

It is claimed in the above reference (Eq. (19), p. 9) that for a string operator $W(P)$ defined on an arbitrary path $P = I_1,\dots,I_N$ on the honeycomb lattice, written as a product of simple string operators $W(P) = W_{s_1}(P)\dots W_{s_m}(P)$, with $n_s$ the non-negative integers characterizing the action of $W(P)$ on the vacuum, $W(P)|0\rangle = \sum_{s}n_s |s\rangle$, the matrix elements of $W(P)$ between an initial spin state $i_1,\dots,i_N$ and a final spin state $i_1^{\prime},\dots,i_N^{\prime}$ are always of the form
\begin{equation} W_{i_1,\dots,i_N}^{i_1^{\prime},\dots,i_N^{\prime}}(e_1e_2,\dots,e_N) = \sum_{\{ s_k\}}\Big(\prod_{k=1}^N F_{s_k^* i_{k-1}^{\prime}i_{k}^{\prime *}}^{e_k i_k^{*}i_{k-1}}\Big)\,\text{Tr}\Big(\prod_{k=1}^N \Omega_k(i_k,i_k^{\prime},s_k,s_{k+1})\Big) \ \ \ (1) \end{equation}
where $\Omega_k$ is some $n_{s_k}\times n_{s_{k+1}}$ complex matrix.

The matrix element of a type-$s$ simple string operator is
\begin{equation} W_{s,i_1,\dots,i_N}^{\ i_1^{\prime},\dots,i_N^{\prime}}(e_1e_2,\dots,e_N) = \Big(\prod_{k=1}^N F_{s^* i_{k-1}^{\prime}i_{k}^{\prime *}}^{e_k i_k^{*}i_{k-1}}\Big)\Big(\prod_{k=1}^N \omega_k (i_k,i_k^{\prime},s)\Big) \ \ \ (2) \end{equation}
where $\omega_k$ is some complex number.

I have not been able to show the above claim. I tried inserting $m$ resolutions of the identity, $I =\sum_{i_1^{(l)},\dots,i_N^{(l)}} |i_1^{(l)},\dots,i_N^{(l)}\rangle\langle i_1^{(l)},\dots,i_N^{(l)}|$ for $1\leq l \leq m$, into $\langle i_1,\dots,i_N|W_{s_1}(P)\dots W_{s_m}(P)|i_1^{\prime},\dots,i_N^{\prime}\rangle$, then applying Eq. (2) and simplifying the resulting expression.

One could also try using the graphical representation of the string operators illustrated in Appendix D. In particular, the effect of applying $W_{s_1}(P)\dots W_{s_m}(P)$ to some string-net state is to add $m$ simple strings $s_1,\dots,s_m$ in the fattened honeycomb lattice along the path $P = I_1,\dots,I_N$.
One can then apply the rules of Eq. (D1) (for simple strings) to merge these additional strings with the original strings on the unfattened lattice, and write the state as a linear combination of string-net states on the unfattened lattice. However, in both approaches it is not immediately clear to me that the matrix elements of $W = W_{s_1}\dots W_{s_m}$ are of the form of Eq. (1), for some complex matrices $\Omega_k$ of appropriate dimensions.
Conway’s Big Picture consists of all pairs of rational numbers $M,\frac{g}{h}$ with $M > 0$ and $0 \leq \frac{g}{h} < 1$ with $(g,h)=1$. Recall from last time that $M,\frac{g}{h}$ stands for the lattice
\[ \mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \subset \mathbb{Q}^2 \]
and we associate to it the rational $2 \times 2$ matrix
\[ \alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \]
If $M$ is a natural number we write $M \frac{g}{h}$ and call the corresponding lattice number-like; if $g=0$ we drop the zero and write $M$.

The Big Picture carries a wealth of structures. Today, we will see that it can be factored as the product of Bruhat-Tits buildings for $GL_2(\mathbb{Q}_p)$, over all prime numbers $p$. Here’s the factor-building for $p=2$, which is a $3$-valent tree:

To see this, define the distance between lattices to be
\[ d(M,\frac{g}{h}~|~N,\frac{i}{j}) = log~Det(q(\alpha_{M,\frac{g}{h}}.\alpha_{N,\frac{i}{j}}^{-1})) \]
where $q$ is the smallest strictly positive rational number such that $q(\alpha_{M,\frac{g}{h}}.\alpha_{N,\frac{i}{j}}^{-1})$ has integer entries.

We turn the Big Picture into a (coloured) graph by drawing an edge (of colour $p$, for $p$ a prime number) between any two lattices at distance $log(p)$.
\[ \xymatrix{M,\frac{g}{h} \ar@[red]@{-}[rr]|p & & N,\frac{i}{j}} \qquad~\text{iff}~\qquad d(M,\frac{g}{h}~|~N,\frac{i}{j})=log(p) \]
The $p$-coloured subgraph is $(p+1)$-valent.
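The distance formula can be made concrete with exact rational arithmetic. Here is a small sketch (all names are mine): it computes $Det(q(\alpha_{M,\frac{g}{h}}.\alpha_{N,\frac{i}{j}}^{-1}))$, whose logarithm is the distance, using the fact that the smallest positive rational $q$ clearing all denominators is the lcm of the entries’ denominators divided by the gcd of their numerators.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def det_of_scaled_quotient(M1, gh1, M2, gh2):
    """Det(q * a1 * a2^{-1}) for lattices (M1, gh1) and (M2, gh2), where q is
    the smallest positive rational making all entries of a1 * a2^{-1} integers.
    The distance between the lattices is the log of this determinant."""
    M1, gh1, M2, gh2 = map(Fraction, (M1, gh1, M2, gh2))
    # a1 * a2^{-1} for the upper-triangular matrices [[M, g/h], [0, 1]];
    # the inverse of a2 is [[1/M2, -gh2/M2], [0, 1]]
    entries = [M1 / M2, gh1 - M1 * gh2 / M2, Fraction(0), Fraction(1)]
    lcm_den = reduce(lambda x, y: x * y // gcd(x, y),
                     [e.denominator for e in entries])
    gcd_num = reduce(gcd, [abs(e.numerator) for e in entries if e != 0])
    q = Fraction(lcm_den, gcd_num)
    # determinant of the (upper-triangular) scaled matrix
    return (q * entries[0]) * (q * entries[3])

# d(2 | 1) = log 2, since Det = 2
print(det_of_scaled_quotient(2, 0, 1, 0))  # 2
```

For instance, the lattice $\frac{1}{2},\frac{1}{2}$ also has determinant $2$ against $1$, so both $2$ and $\frac{1}{2},\frac{1}{2}$ sit at distance $log(2)$ from $1$, matching the neighbour list below.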
The $p$-neighbours of the lattice $1 = \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} \vec{e}_2$ are precisely these $p+1$ lattices:
\[ p \qquad \text{and} \qquad \frac{1}{p},\frac{k}{p} \qquad \text{for} \qquad 0 \leq k < p \]
And, multiplying the corresponding matrices with $\alpha_{M,\frac{g}{h}}$ tells us that the $p$-neighbours of $M,\frac{g}{h}$ are then these $p+1$ lattices:
\[ pM,\frac{pg}{h}~mod~1 \qquad \text{and} \qquad \frac{M}{p},\frac{1}{p}(\frac{g}{h}+k)~mod~1 \qquad \text{for} \qquad 0 \leq k < p \]
Here’s part of the $2$-coloured neighbourhood of $1$.

To check that the $p$-coloured subgraph is indeed the Bruhat-Tits building of $GL_2(\mathbb{Q}_p)$ it remains to see that it is a tree. For this it is best to introduce $p+1$ operators on lattices
\[ p \ast \qquad \text{and} \qquad \frac{k}{p} \ast \qquad \text{for} \qquad 0 \leq k < p \]
defined by left-multiplying $\alpha_{M,\frac{g}{h}}$ by the matrices
\[ \begin{bmatrix} p & 0 \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} \frac{1}{p} & \frac{k}{p} \\ 0 & 1 \end{bmatrix} \qquad \text{for} \qquad 0 \leq k < p \]
The lattice $p \ast M,\frac{g}{h}$ lies closer to $1$ than $M,\frac{g}{h}$ (unless $M,\frac{g}{h}=M$ is a number) whereas the lattices $\frac{k}{p} \ast M,\frac{g}{h}$ lie further, so it suffices to show that the $p$ operators
\[ \frac{0}{p} \ast,~\frac{1}{p} \ast,~\dots~,\frac{p-1}{p} \ast \]
form a free non-commutative monoid. This follows from the fact that the operator
\[ (\frac{k_n}{p} \ast) \circ \dots \circ (\frac{k_2}{p} \ast) \circ (\frac{k_1}{p} \ast) \]
is given by left-multiplication with the matrix
\[ \begin{bmatrix} \frac{1}{p^n} & \frac{k_1}{p^n}+\frac{k_2}{p^{n-1}}+\dots+\frac{k_n}{p} \\ 0 & 1 \end{bmatrix} \]
which determines the order in which the $k_i$ occur. A lattice at distance $n log(p)$ from $1$ can be uniquely written as
\[ (\frac{k_{n-l}}{p} \ast) \circ \dots \circ (\frac{k_{l+1}}{p} \ast) \circ (p^l \ast) 1 \]
which gives us the unique path to it from $1$.
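The neighbour formulas above can be checked with a short sketch using exact rationals (`p_neighbours` is my own name for it):

```python
from fractions import Fraction

def p_neighbours(M, gh, p):
    """The p+1 neighbours of the lattice (M, g/h) in the p-coloured subgraph:
    (pM, p*(g/h) mod 1) and (M/p, ((g/h) + k)/p mod 1) for 0 <= k < p."""
    M, gh = Fraction(M), Fraction(gh)
    up = (p * M, (p * gh) % 1)
    down = [(M / p, ((gh + k) / p) % 1) for k in range(p)]
    return [up] + down

# the 2-neighbours of the lattice 1: the lattices 2, (1/2, 0) and (1/2, 1/2)
print(p_neighbours(1, 0, 2))
```

Applying it to the lattice $1$ with $p=2$ reproduces the three neighbours of the $3$-valent tree pictured earlier.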
The Big Picture itself is then the product of these Bruhat-Tits trees over all prime numbers $p$. Decomposing the distance from $M,\frac{g}{h}$ to $1$ as
\[ d(M,\frac{g}{h}~|~1) = n_1 log(p_1) + \dots + n_k log(p_k) \]
will then allow us to find minimal paths from $1$ to $M,\frac{g}{h}$.

But we should be careful in drawing $2$-dimensional cells (or higher dimensional ones) in this ‘product’ of trees, as the operators
\[ \frac{k}{p} \ast \qquad \text{and} \qquad \frac{l}{q} \ast \]
for different primes $p$ and $q$ do not commute, in general. The composition
\[ (\frac{k}{p} \ast) \circ (\frac{l}{q} \ast) \qquad \text{with matrix} \qquad \begin{bmatrix} \frac{1}{pq} & \frac{kq+l}{pq} \\ 0 & 1 \end{bmatrix} \]
has as numerator in the upper-right corner the number $kq + l$ with $0 \leq kq + l < pq$, and this number can be uniquely(!) written as
\[ kq+l = up+v \qquad \text{with} \qquad 0 \leq u < q,~0 \leq v < p \]
That is, there are unique operators $\frac{u}{q} \ast$ and $\frac{v}{p} \ast$ such that
\[ (\frac{k}{p} \ast) \circ (\frac{l}{q} \ast) = (\frac{u}{q} \ast) \circ (\frac{v}{p} \ast) \]
which determine the $2$-cells
\[ \xymatrix{ \bullet \ar@[blue]@{-}[rr]^{\frac{u}{q} \ast} \ar@[red]@{-}[dd]_{\frac{v}{p} \ast} & & \bullet \ar@[red]@{-}[dd]^{\frac{k}{p} \ast} \\ & & \\ \bullet \ar@[blue]@{-}[rr]_{\frac{l}{q} \ast} & & \bullet} \]
These give us the commutation relations between the free monoids of operators corresponding to different primes.
For the primes $2$ and $3$, relevant in the description of the Moonshine Picture, the commutation relations are \[ (\frac{0}{2} \ast) \circ (\frac{0}{3} \ast) = (\frac{0}{3} \ast) \circ (\frac{0}{2} \ast), \quad (\frac{0}{2} \ast) \circ (\frac{1}{3} \ast) = (\frac{0}{3} \ast) \circ (\frac{1}{2} \ast), \quad (\frac{0}{2} \ast) \circ (\frac{2}{3} \ast) = (\frac{1}{3} \ast) \circ (\frac{0}{2} \ast) \] \[ (\frac{1}{2} \ast) \circ (\frac{0}{3} \ast) = (\frac{1}{3} \ast) \circ (\frac{1}{2} \ast), \quad (\frac{1}{2} \ast) \circ (\frac{1}{3} \ast) = (\frac{2}{3} \ast) \circ (\frac{0}{2} \ast), \quad (\frac{1}{2} \ast) \circ (\frac{2}{3} \ast) = (\frac{2}{3} \ast) \circ (\frac{1}{2} \ast) \]
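These relations can be verified mechanically by multiplying the operator matrices and applying the unique decomposition $kq+l = up+v$. A sketch (function names are mine):

```python
from fractions import Fraction

def op(k, p):
    """Matrix of the operator (k/p)*: [[1/p, k/p], [0, 1]]."""
    return ((Fraction(1, p), Fraction(k, p)), (Fraction(0), Fraction(1)))

def mul(A, B):
    """2x2 matrix product. Operators act by left multiplication, so
    mul(A, B) is the matrix of 'apply B first, then A'."""
    return tuple(
        tuple(sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2))
        for i in range(2)
    )

def swap(k, p, l, q):
    """Rewrite (k/p)* o (l/q)* as (u/q)* o (v/p)* via kq + l = up + v,
    with 0 <= u < q and 0 <= v < p."""
    u, v = divmod(k * q + l, p)
    return u, v

# e.g. (0/2)* o (2/3)* = (1/3)* o (0/2)*, matching the list above
print(swap(0, 2, 2, 3))  # (1, 0)
```

Looping `swap` over $0 \leq k < 2$ and $0 \leq l < 3$ and comparing the matrix products on both sides reproduces all six relations for the primes $2$ and $3$.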