Abstract. The Lee-Yang circle theorem describes complex polynomials of degree $n$ in $z$ with all their zeros on the unit circle $|z|=1$. These polynomials are obtained by taking $z_1=\dots=z_n=z$ in certain multiaffine polynomials $\Psi(z_1,\dots,z_n)$ which we call Lee-Yang polynomials (they do not vanish when $|z_1|,\dots,|z_n|<1$ or $|z_1|,\dots,|z_n|>1$). We characterize the Lee-Yang polynomials $\Psi$ in $n+1$ variables in terms of polynomials $\Phi$ in $n$ variables (those such that $\Phi(z_1,\dots,z_n)\ne0$ when $|z_1|,\dots,|z_n|<1$). This characterization gives us a good understanding of Lee-Yang polynomials and allows us to exhibit some new examples. In the physical situation where the $\Psi$ are temperature dependent partition functions, we find that those $\Psi$ which are Lee-Yang polynomials for all temperatures are precisely the polynomials with pair interactions originally considered by Lee and Yang.
Reasonable exceptions allowed on $q$. Example solution: $n=2$. Suppose $q$ is odd. Let $p$ be such that $pq\equiv -1$ (mod 8). Then "$q$ is not a 2nd power (mod $p$)" is the same as $\left(\frac{q}{p}\right)=-1$. We can obtain this from the quadratic reciprocity law: $$\left(\frac{q}{p}\right)\left(\frac{p}{q}\right)=(-1)^{\frac{p-1}{2}\frac{q-1}{2}}=(-1)^{\frac{pq+1}{4}}(-1)^{\frac{p+q}{2}}=(-1)^{\frac{p+q}{2}}$$ where the last equality holds since $pq\equiv -1$ (mod 8) $\Rightarrow$ $\frac{pq+1}{4}\equiv 0$ (mod 2). Then the condition $\left(\frac{q}{p}\right)=-1$ is equivalent to $\left(\frac{p}{q}\right)=-(-1)^{\frac{p+q}{2}} = -(-1)^{\frac{-q^{-1}+q}{2}}$, where the $q^{-1}$ is taken (mod 8). In particular, for any $p\equiv -(-1)^{\frac{q-q^{-1}}{2}}$ (mod $q$) with $pq\equiv -1$ (mod 8), we will have that $q$ is not a 2nd power (mod $p$). I was trying to repeat this argument for higher powers. I can find conditions so that, say, the $n$-th residue symbol will equal 1 (for certain $n$ anyway). However, that does not guarantee that $q$ will not be an $n$-th power (mod $p$). Any thoughts appreciated.
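A quick numerical check of the equivalence derived above, namely that under $pq\equiv -1 \pmod 8$ one has $\left(\frac{q}{p}\right)=-1$ exactly when $\left(\frac{p}{q}\right)=-(-1)^{\frac{p+q}{2}}$ (a sketch; the test ranges and loop structure are my own):

```python
# Sketch: verify the derived condition for small odd primes q and primes p
# with p*q = -1 (mod 8), using sympy's Legendre symbol.
from sympy import primerange, legendre_symbol

for q in [3, 5, 7, 11, 13]:
    for p in primerange(3, 500):
        if p == q or (p * q) % 8 != 7:          # want p*q = -1 (mod 8)
            continue
        lhs = legendre_symbol(q, p) == -1                       # q is a non-residue mod p
        rhs = legendre_symbol(p, q) == -(-1) ** ((p + q) // 2)  # the reciprocity condition
        assert lhs == rhs, (p, q)
print("condition verified for all tested pairs")
```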
First consider the single torus, represented as the square with opposite edges identified; call the edges $a$ and $b$. $U$ is the torus with a single point removed; this retracts to just the border of the square, so it is the union of two copies of $S^1$ and it has fundamental group $\langle a, b\rangle$, the free group on two generators. Now consider a circle around the removed point; this represents a generator of the fundamental group of $U\cap V$, since $U\cap V$ is, in this representation, just the area between two circles around the removed point. This circle represents the element $aba^{-1}b^{-1}\in\pi_1(U)$. Similarly for $V$, the generator gets mapped to $cdc^{-1}d^{-1}$. Hence the fundamental group of the double torus is $\langle a, b, c, d \mid aba^{-1}b^{-1}(cdc^{-1}d^{-1})^{-1} \rangle$. Edit: Filling in some details. First, if you haven't seen it already, here's the picture that I am using for the argument. If you haven't seen this before, I strongly suggest that you look up this construction of the torus before moving on. Note that the paths denoted $A$ and $B$ in the picture are in fact loops. Cut a point (or a small circle) out of the middle of that picture. Consider again a circle around the removed point in the torus as above. This circle generates the fundamental group of $U\cap V$. We wish to figure out where the generator gets mapped under the inclusion map $U\cap V \to U$, or more precisely under the induced map $\pi_1(U\cap V) \to \pi_1(U)$. To do this, we take a representative, i.e. the circle, and compose it with the inclusion map. Since the inclusion maps every point to itself, we still have the same circle. Now recall that the torus minus a point retracts to the border of the square. Under this retraction map, the circle gets mapped to the loop around the border of the square. This loop runs along all four edges, but remember that the edges are identified. In fact they are identified in such a way that when we traverse an edge for the second time, we pass along it in the opposite direction. So our loop can be written as the composition of loops $ABA^{-1}B^{-1}$. This means that there is a corresponding equation for elements of the fundamental group. Technically, at this point you would have to go back from the square to the torus missing a point, where a similar equation holds, because the retraction induces an isomorphism of fundamental groups.
Assuming that the procedure described in the body of the question actually does define a coordinate patch (I haven't checked, but it seems quite plausible so I'll accept its validity), then, as far as finding the geodesic equations is concerned, we are reduced to the simple question of finding them for a metric of the form $\hat g = ds^2 = du^2 + G(u, v)dv^2, \tag{1}$ where $G(u, v)$ meets the presented conditions: $G(0, v) = 1 \; \text{and} \; G_u(0, v) = 0. \tag{2}$ These equations are derived in my answer to this one of B11b's other questions; we have $\ddot u + \dot u^2 \Gamma_{11}^1 + 2\dot u \dot v \Gamma_{12}^1 + \dot v^2 \Gamma_{22}^1 = 0 \tag{3}$ and $\ddot v + \dot u^2 \Gamma_{11}^2 + 2\dot u \dot v \Gamma_{12}^2 + \dot v^2 \Gamma_{22}^2 = 0, \tag{4}$ where the $\Gamma_{\mu \nu}^\alpha$, $\alpha, \mu, \nu = 1, 2$, are the Christoffel symbols for the covariant derivative associated to the metric tensor field $ds^2$ of (1). The $\Gamma_{\mu \nu}^\alpha$ may be simply obtained from the components of $\hat g = ds^2$ via well-known formulas which are given below; before proceeding in that direction, however, I pause for an adjustment of notation. In (3) and (4), $u$ is the $1$-coordinate and $v$ is the $2$-coordinate in the sense that $u$ corresponds to the index "$1$" and $v$ to index "$2$" on the Christoffel symbols. I now define/set $x^1 = u$ and $x^2 = v$, and will use the $x^i$, $i = 1, 2$, in place of $u, v$ for the following arguments/calculations, since it is notationally much more convenient to deal with variables indexed/denoted in a consistent manner throughout; at the end, we can write any results in terms of $u, v$ if so desired. The coefficients $\Gamma_{\mu \nu}^\alpha$ occurring in (3) and (4) are related to the components $g_{\mu \nu}$ of the metric $\hat g$ according to $\Gamma_{\mu \nu}^\alpha = \dfrac{1}{2}g^{\alpha \rho}(g_{\rho \mu, \nu} + g_{\rho \nu, \mu} - g_{\mu \nu, \rho}), \tag{5}$ wherein the coefficients $g^{\alpha \rho}$ are the components of the inverse of the matrix representing the tensor field $\hat g$ in $x^1, x^2$ coordinates (see for example this Wikipedia entry). In the present case, since $\hat g$ takes such a simple form $ds^2 = \hat g = [g_{\mu \nu}] = \begin{bmatrix} 1 & 0 \\ 0 & G(u, v) \end{bmatrix}, \tag{6}$ we have $[g^{\mu \nu}] = \begin{bmatrix} 1 & 0 \\ 0 & G^{-1}(u, v) \end{bmatrix}, \tag{7}$ and since $\hat g$ has only one non-constant component $g_{22} = G(u, v)$, calculation of the $\Gamma_{\mu \nu}^\alpha$ becomes a relatively simple matter; indeed, since $g^{\alpha \beta} = 0$ for $\alpha \ne \beta$, and since $g_{\mu \nu, \sigma} = 0$ unless $\mu = \nu = 2$, we see from (5) that $\Gamma_{22}^1 = \dfrac{1}{2}g^{11}(-g_{22, 1}) = -\dfrac{1}{2}G_{, x_1}; \tag{8}$ $\Gamma_{\mu \nu}^1 = 0 \tag{9}$ if $\mu = 1$ or $\nu = 1$, and $\Gamma_{11}^2 = 0; \tag{10}$ $\Gamma_{12}^2 = \Gamma_{21}^2 = \dfrac{1}{2}g^{22}g_{22, 1} = \dfrac{1}{2}G^{-1}G_{, x_1}; \tag{11}$ $\Gamma_{22}^2 = \dfrac{1}{2}g^{22}g_{22, 2} = \dfrac{1}{2}G^{-1}G_{, x_2}, \tag{12}$ wherein $G_{, x_1} = \partial G / \partial x_1$ and so forth; all the other $\Gamma_{\mu \nu}^\alpha$ vanish.
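A quick symbolic check of the Christoffel symbols (8)-(12) just computed (a sketch; $G$ is kept as an abstract function of $u$ and $v$, and the code itself is mine):

```python
# Compute the Christoffel symbols of the metric diag(1, G(u, v)) from formula (5).
import sympy as sp

u, v = sp.symbols('u v')
G = sp.Function('G')(u, v)
x = [u, v]
g = sp.Matrix([[1, 0], [0, G]])        # the metric (6)
ginv = g.inv()                         # its inverse (7)

# Gamma[a][m][n] = (1/2) g^{a r} (g_{r m, n} + g_{r n, m} - g_{m n, r})
Gamma = [[[sp.simplify(sum(ginv[a, r] * (sp.diff(g[r, m], x[n])
                                         + sp.diff(g[r, n], x[m])
                                         - sp.diff(g[m, n], x[r])) / 2
                           for r in range(2)))
           for n in range(2)] for m in range(2)] for a in range(2)]

print(Gamma[0][1][1])   # Gamma^1_22 = -G_u / 2,     cf. (8)
print(Gamma[1][0][1])   # Gamma^2_12 =  G_u / (2G),  cf. (11)
print(Gamma[1][1][1])   # Gamma^2_22 =  G_v / (2G),  cf. (12)
```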
Apparently, then, the geodesic equations (3)-(4) take the particularly simple form $\ddot x^1 - \dfrac{1}{2}G_{, x_1}(\dot x^2)^2 = 0, \tag{13}$ $\ddot x^2 + G^{-1}G_{, x_1}\dot x^1 \dot x^2 + \dfrac{1}{2}G^{-1}G_{, x_2}(\dot x^2)^2 = 0, \tag{14}$ or, if you like, converted back into the $u, v$ notation: $\ddot u - \dfrac{1}{2}G_{, u}\dot v^2 = 0, \tag{15}$ $\ddot v + G^{-1}G_{, u} \dot u \dot v + \dfrac{1}{2}G^{-1}G_{, v}\dot v^2 = 0; \tag{16}$ (15) and (16) are indeed relatively simple as geodesic equations go. As for the length $l(\gamma, t_0, t_1)$ of a curve $\gamma (t)$ twixt $\gamma(t_0)$ and $\gamma (t_1)$, i.e., for $\gamma:[t_0, t_1] \to S$, it is as usual given by the integral of the magnitude of the tangent vector $\gamma'(t)$, $\vert \gamma' \vert = \sqrt{\hat g(\dot \gamma, \dot \gamma)}$: $l(\gamma, t_0, t_1) = \int_{t_0}^{t_1} \sqrt{\hat g(\dot \gamma, \dot \gamma)} \, dt; \tag{17}$ in the $u$-$v$ coordinate system, with the metric given by (1), this expression also takes a particularly simple form; writing the coordinate expression of $\gamma(t) = (\gamma^u(t), \gamma^v(t))$, so that $\dot \gamma(t) = \dot \gamma^u(t) (\partial / \partial u) + \dot \gamma^v(t) (\partial / \partial v) $, we have $\hat g(\dot \gamma(t), \dot \gamma(t)) = (\dot \gamma^u(t))^2 + G(\gamma^u(t), \gamma^v(t))(\dot \gamma^v(t))^2, \tag{18}$ whence $l(\gamma, t_0, t_1) = \int_{t_0}^{t_1} \sqrt{(\dot \gamma^u(t))^2 + G(\gamma^u(t), \gamma^v(t))(\dot \gamma^v(t))^2} \, dt, \tag{19}$ itself a relatively simple expression as such things go. Hope this helps! Cheerio, and as always, Fiat Lux!!!
Since I posed the question in a comment, here is how to treat the friction case, and of course at the end it reduces to the expected friction-free result. Assume the applied force has a component $f_x$ parallel to (up) the plane and a normal component $f_y$ downward into the plane. The normal force between block and plane is: $$f_n = W\cos(t) + f_y$$ The force $f_x$ required to hold against sliding, assuming static Coulomb friction with coefficient $u$, is $$f_x = W\sin(t) - u f_n = W\sin(t) - u (W\cos(t) + f_y)$$ The magnitude of the applied force is then $$f_{mag} = \sqrt{f_x^2+f_y^2}$$ Without showing all the messy steps, setting $d f_{mag} / d f_y = 0$ we can solve for $f_y$: $$f_y = u W\sin(t)\cdot\dfrac{1 - u /\tan(t)}{1 + u^2}$$ and $$f_x = W\sin(t) \left( 1 - u\cdot\dfrac{1/\tan(t) + u}{1 + u^2}\right)$$ For the friction-free case ($u=0$) this gives the expected $f_x=W\sin(t)$, $f_y=0$ (i.e. a force parallel to the plane), but with friction, angling the force toward the plane reduces the required force magnitude.
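A short sympy sketch of the minimization step skipped above ("without showing all the messy steps"); the symbols follow the post, the code is mine:

```python
# Minimise f_mag^2 = f_x^2 + f_y^2 over f_y and compare with the stated result.
import sympy as sp

W, t, u = sp.symbols('W t u', positive=True)
fy = sp.symbols('f_y')
fn = W * sp.cos(t) + fy                       # normal force
fx = W * sp.sin(t) - u * fn                   # force needed along the plane
fmag2 = fx**2 + fy**2                         # squared magnitude (same minimiser)

fy_opt = sp.solve(sp.diff(fmag2, fy), fy)[0]
print(sp.simplify(fy_opt))                    # u*(W*sin(t) - u*W*cos(t))/(u**2 + 1)
print(sp.simplify(fx.subs(fy, fy_opt)))       # (W*sin(t) - u*W*cos(t))/(u**2 + 1)
print(sp.simplify(fy_opt.subs(u, 0)))         # 0  -> the friction-free case
```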
Is the following provable, and how? I feel like I am missing some proof technique or strong theorems; I'd be grateful for any pointer. Let $(I, \leq)$ be an upwards-directed poset. Define an $f: I \to \{a,b\}$ such that $f$ is not eventually constant. Upwards-directed ("every path can be recombined"): $\forall i_1, i_2 \in I\ \exists j \in I.\quad j \geq i_1, j \geq i_2$. Eventually constant ("a path on which $f$ is constant"): $\exists C \in \{a, b\}\ \exists i \in I\ \forall j \geq i.\quad f(j) = C$. If $I = (\mathbb{N}, \leq)$ with the usual ordering, then my solution would be: Set $f(0) := a$ and inductively choose $f(n + 1) \neq f(n)$. Or put differently, define $f$ recursively by $$f(n) = \begin{cases} a &, n = 0 \\ \mathrm{flip}(f(n-1)) &, n > 0. \end{cases}$$ In general, if $I$ is well-ordered, then such a (non-constructive) definition by transfinite recursion works. What if $I$ is not well-ordered? While the set $I$ can always be well-ordered, that assigned order is not necessarily compatible with the old order. Within a topological space $X$ we have: $$x \text{ is an isolated point} \Leftrightarrow \text{every net converging to } x \text{ is eventually constant}$$ $\Rightarrow$ is clear to me; my attempted solution for the rest: For the direction $\Leftarrow$ I'd like to show the contraposition $$x \text{ is not isolated} \Rightarrow \exists \text{ net } f \text{ converging to } x \text{ which is not eventually constant.}$$ $x$ is not isolated, hence every neighborhood of $x$ has at least two elements. Define $f: (\mathcal{V}(x), \supseteq) \to X$ mapping neighborhoods to one of their elements, so that $f$ is not eventually constant (how?).
Serre's finiteness theorem says if $n$ is an odd integer, then $\pi_{2n+1}(S^{n + 1})$ is the direct sum of $\mathbb{Z}$ and a finite group. By looking at the table of homotopy groups, say on Wikipedia, one empirically observes that if $n \equiv 3 \pmod 4$, then we in fact have $$ \pi_{2n+1}(S^{n+1}) \cong \mathbb{Z} \oplus \pi_{2n}(S^n). $$ This holds for all the cases up to $n = 19$. On the other hand, for $n \equiv 1 \pmod 4$ (and $n \neq 1$), the order of the finite part drops by a factor of $2$ when passing from $\pi_{2n}(S^n)$ to $\pi_{2n+1}(S^{n+1})$. From the EHP sequence, we know that these two are the only possible scenarios. Indeed, we have a long exact sequence $$ \pi_{2n+2}(S^{2n+1}) \cong \mathbb{Z}/2\overset{P}{\to} \pi_{2n}(S^n) \overset{E}{\to} \pi_{2n+1}(S^{n+1}) \overset{H}{\to}\pi_{2n+1}(S^{2n+1}) \cong \mathbb{Z}. $$ Since $H$ kills off all torsion, one sees that the map $E$ necessarily surjects onto the finite part of $\pi_{2n+1}(S^{n+1})$. So the two cases boil down to whether or not $P$ is the zero map. What we observed was that it is zero iff $n \equiv 3 \pmod 4$, up to $n = 19$. Since $\pi_{2n+2}(S^{2n+1}) \cong \mathbb{Z}/2\mathbb{Z}$ has only one non-zero element, which is the suspension of the Hopf map, it seems like perhaps one might be able to check what happens to this element directly. However, without a good grasp of what the map $P$ (or the preceding $H\colon\pi_{2n+2}(S^{n+1}) \to \pi_{2n+2}(S^{2n+1})$) does, I'm unable to proceed. Curiously, I can't seem to find any mention of this phenomenon anywhere. The closest I can find is this MO question, but this phenomenon is not really about early stabilization, since for $n = 3, 7$, the group $\pi_{2n-1}(S^n)$ is not the stable homotopy group, but something smaller. I'd imagine either this pattern no longer holds for larger $n$, or there is some straightforward proof I'm missing. Note: Suggestions for a more descriptive title are welcome.
Your hovercraft device has a mass of 2.25 kg and at max power can travel at a speed of 0.2 m/s. You also learn that the target distance is 180 cm and you are trying to get there in 15 seconds. How many rolls of pennies should you place on the hovercraft in order to get as close to 15 seconds as possible?

Assuming there is no change in lift (which affects friction) and the hovercraft gets to max speed instantly: one penny roll is around the mass of 50 (new) pennies, 50 * 3.11 g or 0.156 kg. Required speed is 1.8 m / 15 s = 0.12 m/s. (2.25 kg)(0.2 m/s) = (2.25 kg + n * 0.156 kg)(0.12 m/s), so n = 9.6. Answer: 10.

Oh sorry, I forgot to mention that you should assume that each roll of pennies has a mass of 125 grams (it's mentioned in the rules manual). However, your answer is technically correct, so you can go ahead with the next one!

Derive the formula for the distance a projectile travels given the angle of elevation of its launch, theta, and the initial speed, v. Assume the projectile encounters no air resistance and starts at ground level.

First, find the individual components of the 2D velocity vector: $v_x = v\cos\theta$ and $v_y = v\sin\theta$. The amount of time the ball will be in the air can be found using $\Delta y = v_{y}t + \frac{at^2}{2}$. Since $\Delta y = 0$ and $a = -9.8$: $0 = t(v\sin\theta - 4.9t)$, so $t = 0$ or $t = \frac{v\sin\theta}{4.9}$. To then get the total distance traveled, we simply use $v_{x}t = \Delta x$: $v\cos\theta \cdot \frac{v\sin\theta}{4.9} = \Delta x$, so $\frac{v^2\sin(2\theta)}{9.8} = \Delta x$ is the final equation.
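A quick numeric check of the two answers above (a sketch; the momentum-based model and the two candidate roll masses come from the posts, the sample projectile numbers are my own):

```python
import math

# Penny-roll problem: (2.25 kg)(0.2 m/s) = (2.25 kg + n * m_roll)(0.12 m/s)
for m_roll in (0.156, 0.125):            # 50 new pennies vs. the rules-manual value
    n = (2.25 * 0.2 / 0.12 - 2.25) / m_roll
    print(m_roll, round(n, 2))           # 0.156 -> ~9.6,  0.125 -> 12.0

# Projectile range: Delta x = v^2 sin(2 theta) / g
v, theta, g = 10.0, math.radians(30), 9.8
t_flight = v * math.sin(theta) / (g / 2)          # t = v sin(theta) / 4.9
print(v * math.cos(theta) * t_flight)             # distance via v_x * t
print(v**2 * math.sin(2 * theta) / g)             # same value from the final formula
```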
There is a 20 kg ball heading east at 26 m/s. A 36 kg ball is traveling southwest at 38 m/s. If both balls undergo an inelastic collision, what is the speed and the direction of the balls after the collision?

It doesn't seem like there's enough information given to solve the problem, but given that the balls "stick together", the total momentum, with east taken as positive in one direction and south as positive in the other, divided by the combined mass gives the final velocity (see the momentum sketch after the next question).

That's correct. Your turn!

1. A uniform disk is rolled down a hill of height 11 m. What is the speed of the disk when it reaches the bottom of the hill? 2. As soon as the disk reaches the bottom of the hill, it hits a horizontal surface of frictional coefficient 0.13. What is the acceleration of the disk? 3. A ball of mass 2 kg is attached to a string of length 0.75 m and rotated vertically with an angular velocity of 7 rad/s. What is the ratio of the string tension at the top of the loop to the string tension at the bottom? Ignore significant figures.
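A sketch of the perfectly-inelastic momentum calculation referred to in the collision reply above (the coordinate conventions and rounding are my own):

```python
import math

m1, m2 = 20.0, 36.0
p1 = (m1 * 26.0, 0.0)                                          # 20 kg ball, east at 26 m/s
p2 = (-m2 * 38.0 / math.sqrt(2), -m2 * 38.0 / math.sqrt(2))    # 36 kg ball, southwest at 38 m/s

vx = (p1[0] + p2[0]) / (m1 + m2)       # combined mass moves with the total momentum
vy = (p1[1] + p2[1]) / (m1 + m2)

print(round(math.hypot(vx, vy), 1))                            # final speed, ~19 m/s
print(round(math.degrees(math.atan(abs(vy) / abs(vx))), 1))    # ~65 degrees south of west
```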
Newform invariants

Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3,\beta_4\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.

Basis of the coefficient ring in terms of a root \(\nu\) of \(x^{5} - 2x^{4} - 5x^{3} + 6x^{2} + 6x - 1\):

\(\beta_{0} = 1\)
\(\beta_{1} = \nu\)
\(\beta_{2} = \nu^{2} - \nu - 2\)
\(\beta_{3} = \nu^{3} - 2\nu^{2} - 3\nu + 3\)
\(\beta_{4} = \nu^{4} - 2\nu^{3} - 4\nu^{2} + 5\nu + 2\)

and conversely

\(1 = \beta_0\)
\(\nu = \beta_{1}\)
\(\nu^{2} = \beta_{2} + \beta_{1} + 2\)
\(\nu^{3} = \beta_{3} + 2\beta_{2} + 5\beta_{1} + 1\)
\(\nu^{4} = \beta_{4} + 2\beta_{3} + 8\beta_{2} + 9\beta_{1} + 8\)

For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists.

\(p\) | Sign
2 | \(-1\)
3 | \(-1\)
167 | \(-1\)

This newform can be constructed as the kernel of the linear operator \(T_{5} + 2\) acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(6012))\).
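As a quick arithmetic sanity check, the change-of-basis identities above can be verified symbolically (a sketch; the polynomial and relations are the ones listed, the code is my own):

```python
# Check that nu^2, nu^3, nu^4 equal the stated combinations of beta_1, ..., beta_4.
import sympy as sp

nu = sp.symbols('nu')
f = nu**5 - 2*nu**4 - 5*nu**3 + 6*nu**2 + 6*nu - 1       # defining polynomial of nu
beta = [sp.Integer(1), nu, nu**2 - nu - 2,
        nu**3 - 2*nu**2 - 3*nu + 3,
        nu**4 - 2*nu**3 - 4*nu**2 + 5*nu + 2]

claims = {nu**2: beta[2] + beta[1] + 2,
          nu**3: beta[3] + 2*beta[2] + 5*beta[1] + 1,
          nu**4: beta[4] + 2*beta[3] + 8*beta[2] + 9*beta[1] + 8}

for lhs, rhs in claims.items():
    assert sp.rem(sp.expand(lhs - rhs), f, nu) == 0      # identities hold (even before reducing mod f)
print("basis relations check out")
```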
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebook site (to which I hope my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" ones in real-analysis. Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's question, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Reading through Titchmarsh's book on the Riemann zeta function, chapter 3 discusses the Prime Number Theorem. One way to prove this result is to check that the zeta function has no zeros on the line $z = 1 + it$: $$ \zeta(1 + it) \neq 0.$$ Indeed the book has 3 or 4 proofs of this result. Actually connecting it to the prime number theorem is another matter. One version of the Prime Number Theorem is: $$ \sum_{n \leq x} \Lambda (n) = x + o(x),$$ involving the von Mangoldt function, but why is this equivalent to the non-vanishing of the Riemann zeta function? I think you can start from Perron's formula $$ \frac{1}{2\pi i}\int_{1-iT}^{1+iT} \left(-\frac{\zeta'(w)}{\zeta(w)}\right) \, \frac{x^w}{w}\,dw \approx \sum_{n \leq x} \Lambda (n), $$ and then I don't know how to proceed.
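Purely as a numerical illustration of the statement $\sum_{n\le x}\Lambda(n) = x + o(x)$ (Chebyshev's $\psi$ function), here is a small sketch (my own, just for intuition; it is not part of the analytic argument asked about):

```python
# Compute psi(x) = sum over prime powers p^k <= x of log p and compare with x.
import math
from sympy import primerange

def chebyshev_psi(x):
    total = 0.0
    for p in primerange(2, int(x) + 1):
        pk = p
        while pk <= x:
            total += math.log(p)     # Lambda(p^k) = log p
            pk *= p
    return total

for x in (10**3, 10**4, 10**5):
    print(x, round(chebyshev_psi(x), 1), round(chebyshev_psi(x) / x, 4))  # ratio tends to 1
```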
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11). Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Navigating

The most natural way of navigating is by clicking wiki links that connect one page with another. The “Front page” link in the navigation bar will always take you to the Front Page of the wiki. The “All pages” link will take you to a list of all pages on the wiki (organized into folders if directories are used). Alternatively, you can search using the search box. Note that the search is set to look for whole words, so if you are looking for “gremlins”, type that and not “gremlin”. The “go” box will take you directly to the page you type.

Creating and modifying pages

Registering for an account

In order to modify pages, you’ll need to be logged in. To register for an account, just click the “register” button in the bar on top of the screen. You’ll be asked to choose a username and a password, which you can use to log in in the future by clicking the “login” button. While you are logged in, these buttons are replaced by a “logout so-and-so” button, which you should click to log out when you are finished. Note that logins are persistent through session cookies, so if you don’t log out, you’ll still be logged in when you return to the wiki from the same browser in the future.

Editing a page

To edit a page, just click the “edit” button at the bottom right corner of the page. You can click “Preview” at any time to see how your changes will look. Nothing is saved until you press “Save.” Note that you must provide a description of your changes. This is to make it easier for others to see how a wiki page has been changed.

Page metadata

Pages may optionally begin with a metadata block. Here is an example:

    ---
    format: latex+lhs
    categories: haskell math
    toc: no
    title: Haskell and
      Category Theory
    ...

    \section{Why Category Theory?}

The metadata block consists of a list of key-value pairs, each on a separate line. If needed, the value can be continued on one or more additional lines, which must begin with a space. (This is illustrated by the “title” example above.) The metadata block must begin with a line --- and end with a line ... optionally followed by one or more blank lines. Currently the following keys are supported:

format: Overrides the default page type as specified in the configuration file. Possible values are markdown, rst, latex, html, markdown+lhs, rst+lhs, latex+lhs. (Capitalization is ignored, so you can also use LaTeX, HTML, etc.) The +lhs variants indicate that the page is to be interpreted as literate Haskell. If this field is missing, the default page type will be used.

categories: A space or comma separated list of categories to which the page belongs.

toc: Overrides the default setting for table-of-contents in the configuration file. Values can be yes, no, true, or false (capitalization is ignored).

title: By default the displayed page title is the page name. This metadata element overrides that default.

Creating a new page

To create a new page, just create a wiki link that links to it, and click the link. If the page does not exist, you will be editing it immediately. You can also type the path to and name of the file in the browser URL window. Note that in that case any new directory included in the path will be created, but only if you tick the corresponding boxes (in the edit window) to confirm their creation.

Deleting a page

The “delete” button at the bottom of the page will delete a page. Note that deleted pages can be recovered, since a record of them will still be accessible via the “activity” button on the top of the page.
Markdown

This wiki’s pages are written in pandoc’s extended form of markdown. If you’re not familiar with markdown, you should start by looking at the markdown “basics” page and the markdown syntax description. Consult the pandoc User’s Guide for information about pandoc’s syntax for footnotes, tables, description lists, and other elements not present in standard markdown. Markdown is pretty intuitive, since it is based on email conventions. Here are some examples to get you started:

- `*emphasized text*` gives emphasized text
- `**strong emphasis**` gives strong emphasis
- `` `literal text` `` gives literal text
- `\*escaped special characters\*` gives *escaped special characters*
- `[external link](http://google.com)` gives an external link
- `![folder](/img/icons/folder.png?1566200379)` gives an inline image
- `Wikilink: [Front Page](Front Page)` gives a wikilink to Front Page
- `H~2~O` gives H2O with the 2 as a subscript
- `10^100^` gives 10 to the power 100 (superscript)
- `~~strikeout~~` gives struck-out text
- `$x = \frac{{ - b \pm \sqrt {b^2 - 4ac} }}{{2a}}$` gives inline TeX math
- `A simple footnote.^[Or is it so simple?]` gives a footnote

An indented paragraph, usually used for quotations:

    > an indented paragraph,
    > usually used for quotations

A code block, indented four spaces:

        #!/bin/sh -e
        # code, indented four spaces
        echo "Hello world"

Lists:

    * a bulleted list
    * second item
        - sublist
        - and more
    * back to main list
    1. this item has an ordered
    2. sublist
        a) you can also use letters
        b) another item

A table with a caption:

    Fruit     Quantity
    -------   --------
    apples    30,200
    oranges   1,998
    pears     42

    Table: Our fruit inventory

For headings, prefix a line with one or more # signs: one for a major heading, two for a subheading, three for a subsubheading. Be sure to leave space before and after the heading.

    # Markdown

    Text...

    ## Some examples

    ...

    Text...

Wiki links

Links to other wiki pages are formed this way: [Page Name](Page Name). (Gitit converts markdown links with empty targets into wikilinks.) To link to a wiki page using something else as the link text: [something else](Page Name). Note that page names may contain spaces and some special characters. They need not be CamelCase. CamelCase words are not automatically converted to wiki links.

Wiki pages may be organized into directories. So, if you have several pages on wine, you may wish to organize them like so:

    Wine/Pinot Noir
    Wine/Burgundy
    Wine/Cabernet Sauvignon

Note that a wiki link [Burgundy](Burgundy) that occurs inside the Wine directory will link to Wine/Burgundy, and not to Burgundy. To link to a top-level page called Burgundy, you’d have to use [Burgundy](/Burgundy). To link to a directory listing for a subdirectory, use a trailing slash: [Wine/](Wine/) will link to a listing of the Wine subdirectory.

Editing the wiki with darcs

The wiki uses darcs as a backend and can thus be edited using a local darcs repository mirroring the web site. To be able to push to the darcs repository, you will need write access permissions, which you need to get from the site administrator (Stéphane Popinet). These rights will be granted based on your SSH public key. Once this is done, you can get the entire content of the web site using:

    darcs get wiki@basilisk.fr:wiki

You can also use Makefiles, make plots, and generate HTML pages, typically in the local copy of your sandbox/, to make sure that everything works before darcs recording and darcs pushing your changes to the web site. Note that the (new) wiki engine is clever enough to detect markdown comments in most files, so that the .page extension is not required anymore.
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
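To make Puzzles 23-25 concrete, here is a small finite-set sketch of the three maps and the two adjunctions (the toy sets \(X\), \(Y\) and the function \(f\) are my own, not from the lecture):

```python
# Check f_! -| f^* and f^* -| f_* on the power sets of small finite sets.
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}              # a function X -> Y

def preimage(T):                                   # f^*(T) = {x : f(x) in T}
    return {x for x in X if f[x] in T}

def image(S):                                      # f_!(S) = {f(x) : x in S}
    return {f[x] for x in S}

def forall_image(S):                               # f_*(S) = {y : every x with f(x)=y lies in S}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(A):
    return [set(c) for c in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

for S in subsets(X):
    for T in subsets(Y):
        assert (image(S) <= T) == (S <= preimage(T))          # f_!(S) <= T  iff  S <= f^*(T)
        assert (preimage(T) <= S) == (T <= forall_image(S))   # f^*(T) <= S  iff  T <= f_*(S)
print("both adjunctions hold for every S and T")
```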
Why can't magic and physics coexist peacefully? After reading a lot of physics and math, I think that the problem may come from the principle of least action. So I edited the definition of the action, in order to make magic and physics coexist peacefully. Definition of the action in my world: $$ S=\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt + \frac{\int_{t_1}^{t_2} M(\mathbf{r-q,\dot{r}-\dot{q},t})\;dt}{1+|\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt|} $$ The first term is the definition of the action in the real world. The second term is added by me. $$ \int_{t_1}^{t_2} M(\mathbf{r-q,\dot{r}-\dot{q},t})\;dt $$ is the total magical power needed to change a body's trajectory; $\mathbf{r}$ is the new position (after the magic's influence) and $\dot{\mathbf{r}}$ is the new velocity (after the magic's influence). $$ 1+|\int_{t_1}^{t_2} L(\mathbf{q,\dot{q},t})\;dt| $$ is used to suppress the influence of magic in daily life. The "1+" is used to prevent values between zero and one from appearing in the denominator. The absolute value is used to prevent negative values. Finally, the magical power needs to have the unit $J^2\,s$ as well. lol Is there any contradiction (in physics or math) caused by this edited definition of the action? (I know that many equations will change, such as F=ma, but it is ok if there is no contradiction.) Response to comments: I didn't plan to define "magical power" using some well-known physics concept, such as energy. I want to make "magical power" something new, so its unit $J^2\,s$ is the only explanation. :p Magic in my world also affects quantum phenomena, but the calculations of quantum field theory are difficult, so I need to spend more time on it.
This post describes some discriminative machine learning algorithms.

Distribution of Y given X → Algorithm to predict Y:
- Normal distribution → Linear regression
- Bernoulli distribution → Logistic regression
- Multinomial distribution → Multinomial logistic regression (Softmax regression)
- Exponential family distribution → Generalized linear regression

Distribution of X → Algorithm to predict Y:
- Multivariate normal distribution → Gaussian discriminant analysis or EM algorithm
- Features conditionally independent, \(p(x_1, x_2|y)=p(x_1|y) \, p(x_2|y) \) → Naive Bayes algorithm

Other ML algorithms are based on geometry, like the SVM and K-means algorithms.

Linear Regression

Below is a table listing house prices by size.

x = House Size (\(m^2\)) | y = House Price (k$)
50 | 99
50 | 100
50 | 100
50 | 101
60 | 110

For the size 50 \(m^2\), if we suppose that prices are normally distributed around the mean μ=100 with a standard deviation σ, then P(y|x = 50) = \(\frac{1}{\sigma \sqrt{2\pi}} \exp(-\frac{1}{2} (\frac{y-μ}{\sigma})^{2})\).

We define h(x) as a function that returns the mean of the distribution of y given x (E[y|x]). We will define this function as a linear function: \(E[y|x] = h_{θ}(x) = \theta^T x\). Then P(y|x; θ) = \(\frac{1}{\sigma \sqrt{2\pi}} \exp(-\frac{1}{2} (\frac{y-h_{θ}(x)}{\sigma})^{2})\).

We need to find θ that maximizes this probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L: \(L(\theta)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\), or equivalently the log likelihood function l: \(l(\theta)=\log(L(\theta )) = \sum_{i=1}^{m} \log(P(y^{(i)}|x^{(i)};\theta )) = \sum_{i=1}^{m} \log(\frac{1}{\sigma \sqrt{2\pi}}) -\frac{1}{2} \sum_{i=1}^{m} (\frac{y^{(i)}-h_{θ}(x^{(i)})}{\sigma})^{2}\).

To maximize l, we need to minimize J(θ) = \(\frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}\). This function is called the Cost function (or Energy function, or Loss function, or Objective function) of a linear regression model. It's also called the "Least-squares cost function".

J(θ) is convex; to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\). A convex function has no local minima other than its global minimum. There are many methods to solve this equation:
- Gradient descent (Batch or Stochastic Gradient descent)
- Normal equation
- Newton method
- Matrix differentiation

Gradient descent is the most used Optimizer (also called Learner or Solver) for learning model weights.

Batch Gradient descent

\(θ_{j} := θ_{j} - \alpha \frac{\partial J(θ)}{\partial θ_{j}} = θ_{j} - α \frac{\partial \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\)

α is called the "Learning rate".

\(θ_{j} := θ_{j} - α \frac{1}{2} \sum_{i=1}^{m} \frac{\partial (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}\)

If \(h_{θ}(x)\) is a linear function (\(h_{θ}(x) = θ^{T}x\)), then: \(θ_{j} := θ_{j} - α \sum_{i=1}^{m} (h_{θ}(x^{(i)}) - y^{(i)}) \, x_j^{(i)} \)

Batch size should fit the size of CPU or GPU memory, otherwise learning speed will be extremely slow. When using Batch gradient descent, the cost function in general decreases without oscillations. (A minimal numpy sketch of this batch update is given below, after the SGD variant.)

Stochastic (Online) Gradient Descent (SGD): use one example for each iteration, passing through all the data N times (N epochs).

\(θ_{j} := θ_{j} - \alpha (h_{θ}(x^{(i)}) - y^{(i)}) \, x_j^{(i)} \)

This learning rule is called the "Least mean squares (LMS)" learning rule. It's also called the Widrow-Hoff learning rule.
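Here is the minimal numpy sketch of the batch update rule referred to above, fitted to the house-price table (the feature scaling, learning rate, and iteration count are my own illustrative choices, not prescribed by the post):

```python
import numpy as np

# x = house size in tens of m^2 (scaled so that plain gradient descent is well
# conditioned); the first column is the intercept term.
X = np.array([[1, 5.0], [1, 5.0], [1, 5.0], [1, 5.0], [1, 6.0]])
y = np.array([99.0, 100.0, 100.0, 101.0, 110.0])

theta = np.zeros(2)
alpha = 0.01                                   # learning rate
for _ in range(50_000):
    # batch update: theta_j -= alpha * sum_i (h_theta(x_i) - y_i) * x_ij
    theta -= alpha * X.T @ (X @ theta - y)

print(theta)                                   # ~ [50., 10.]
print(np.linalg.lstsq(X, y, rcond=None)[0])    # closed-form least squares, same answer
```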
Mini-batch Gradient descent

Run gradient descent on each mini-batch until we pass through the training set (1 epoch). Repeat the operation many times.

\(θ_{j} := θ_{j} - \alpha \sum_{i=1}^{20} (h_{θ}(x^{(i)}) - y^{(i)}) \, x_j^{(i)} \)

Mini-batch size should fit the size of CPU or GPU memory. When using Mini-batch gradient descent, the cost function decreases quickly but with oscillations.

Learning rate decay

This is a technique used to automatically reduce the learning rate after each epoch. The decay rate is a hyperparameter.

\(α = \frac{1}{1+ \text{decay\_rate} \cdot \text{epoch\_num}} \, α_0\)

Momentum

Momentum is a method used to accelerate gradient descent. The idea is to add an extra term to the equation to accelerate descent steps.

\(θ_{j_{t+1}} := θ_{j_t} - α \frac{\partial J(θ_{j_t})}{\partial θ_j} \color{blue} {+ λ (θ_{j_t} - θ_{j_{t-1}})} \)

Below is another way to write the expression:

\(v(θ_{j},t) = α \, \frac{\partial J(θ_j)}{\partial θ_j} + λ \, v(θ_{j},t-1) \\ θ_{j} := θ_{j} - \color{blue} {v(θ_{j},t)}\)

Nesterov Momentum is a slightly different version of the momentum method.

AdaGrad

AdaGrad is another method used to accelerate gradient descent; it rescales the learning rate by an accumulated sum of squared gradients (the grad_squared term referred to below). The problem with this method is that the term grad_squared becomes large after running many gradient descent steps. The term grad_squared is used to accelerate gradient descent when gradients are small, and slow down gradient descent when gradients are large.

RMSprop

RMSprop is an enhanced version of AdaGrad. The term decay_rate is used to apply exponential smoothing to the grad_squared term.

Adam optimization

Adam is a combination of Momentum and RMSprop. (A short code sketch of these update rules is given at the end of this section.)

Normal equation

To minimize the cost function \(J(θ) = \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)\), we need to solve the equation:

\( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial \, trace(J(θ))}{\partial θ} = 0 \\ \frac{\partial \, trace((Xθ - y)^T(Xθ - y))}{\partial θ} = 0 \\ \frac{\partial \, trace(θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 0 \\ \frac{\partial \, \left(trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ) + trace(y^Ty)\right)}{\partial θ} = 0 \\ \frac{\partial \, \left(trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ)\right)}{\partial θ} = 0 \\ \frac{\partial \, \left(trace(θ^TX^TXθ) - trace(y^TXθ) - trace(y^TXθ)\right)}{\partial θ} = 0 \\ \frac{\partial \, \left(trace(θ^TX^TXθ) - 2\, trace(y^TXθ)\right)}{\partial θ} = 0 \\ \frac{\partial \, trace(θθ^TX^TX)}{\partial θ} - 2 \frac{\partial \, trace(θy^TX)}{\partial θ} = 0 \\ 2 X^TXθ - 2 X^Ty = 0 \\ X^TXθ = X^Ty \\ θ = {(X^TX)}^{-1}X^Ty\)

If \(X^TX\) is singular, then we need to calculate the pseudo-inverse instead of the inverse.

Newton method

\(J''(θ_{t}) := \frac{J'(θ_{t+1}) - J'(θ_{t})}{θ_{t+1} - θ_{t}} \;\rightarrow\; θ_{t+1} := θ_{t} - \frac{J'(θ_{t})}{J''(θ_{t})}\)

Matrix differentiation

To minimize the cost function \(J(θ) = \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)\), we need to solve the equation:

\( \frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial (θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 2X^TXθ - \frac{\partial θ^TX^Ty}{\partial θ} - \frac{\partial y^TXθ}{\partial θ} = 2X^TXθ - 2X^Ty = 0\)

(Note: in matrix differentiation, \( \frac{\partial Aθ}{\partial θ} = A^T\) and \( \frac{\partial θ^TAθ}{\partial θ} = 2A^Tθ\) for symmetric A.)

We can deduce \(X^TXθ = X^Ty\) and \(θ = (X^TX)^{-1}X^Ty\).
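As referenced above, here is a short numpy sketch of the momentum, RMSprop, and Adam update rules described in this section (these are the standard textbook forms of the updates; the default hyper-parameter values and the tiny usage example are my own, not taken from the post):

```python
import numpy as np

def sgd_momentum(theta, grad, v, alpha=0.01, lam=0.9):
    # velocity accumulates past gradients; lam is the momentum coefficient
    v = lam * v + alpha * grad
    return theta - v, v

def rmsprop(theta, grad, grad_squared, alpha=0.01, decay_rate=0.9, eps=1e-8):
    # exponentially smoothed sum of squared gradients rescales each coordinate
    grad_squared = decay_rate * grad_squared + (1 - decay_rate) * grad ** 2
    return theta - alpha * grad / (np.sqrt(grad_squared) + eps), grad_squared

def adam(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # first moment (momentum) and second moment (RMSprop), with bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
    return theta - alpha * m_hat / (np.sqrt(v_hat) + eps), m, v

# tiny usage example: minimise J(theta) = ||theta||^2 with Adam
theta, m, v = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 3001):
    grad = 2 * theta
    theta, m, v = adam(theta, grad, m, v, t)
print(theta)          # close to [0, 0]
```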
Logistic Regression

Below is a table that shows tumor types by size.

x = Tumor Size (cm) | y = Tumor Type (Benign=0, Malignant=1)
1 | 0
1 | 0
2 | 0
2 | 1
3 | 1
3 | 1

Given x, y is distributed according to the Bernoulli distribution with probability of success p = E[y|x]:

\(P(y|x;θ) = p^y (1-p)^{(1-y)}\)

We define h(x) as a function that returns the expected value (p) of the distribution. We will define this function as: \(E[y|x] = h_{θ}(x) = g(θ^T x) = \frac{1}{1+\exp(-θ^T x)}\). g is called the Sigmoid (or logistic) function. Then P(y|x; θ) = \(h_{θ}(x)^y (1-h_{θ}(x))^{(1-y)}\).

We need to find θ that maximizes this probability for all values of x. In other words, we need to find θ that maximizes the likelihood function L: \(L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)\), or maximize the log likelihood function l: \(l(θ)=\log(L(θ)) = \sum_{i=1}^{m} \log(P(y^{(i)}|x^{(i)};θ )) = \sum_{i=1}^{m} y^{(i)} \log(h_{θ}(x^{(i)}))+ (1-y^{(i)}) \log(1-h_{θ}(x^{(i)}))\), or minimize \(-l(θ) = \sum_{i=1}^{m} -y^{(i)} \log(h_{θ}(x^{(i)})) - (1-y^{(i)}) \log(1-h_{θ}(x^{(i)})) = J(θ) \).

J(θ) is convex; to minimize it, we need to solve the equation \(\frac{\partial J(θ)}{\partial θ} = 0\). There are many methods to solve this equation, for example gradient descent:

\(θ_{j} := θ_{j} - α \sum_{i=1}^{m} (h_{θ}(x^{(i)}) - y^{(i)}) \, x_j^{(i)} \)

Logit function (inverse of the logistic function)

The logit function is defined as follows: \(\mathrm{logit}(p) = \log(\frac{p}{1-p})\). The idea in the use of this function is to transform the interval of p (the outcome) from [0,1] to (-∞, ∞). So instead of applying linear regression on p, we apply it on logit(p). Once we find θ that maximizes the likelihood function, we can then estimate logit(p) given a value of x (\(\mathrm{logit}(p) = h_{θ}(x) \)). p can then be calculated using the following formula: \(p = \frac{1}{1+\exp(-h_{θ}(x))}\).
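A minimal numpy sketch of the logistic-regression gradient step above, fitted to the binary tumor table (the learning rate and iteration count are my own illustrative choices):

```python
import numpy as np

X = np.array([[1, 1.0], [1, 1.0], [1, 2.0], [1, 2.0], [1, 3.0], [1, 3.0]])  # bias + tumor size
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

def h(theta, X):
    return 1.0 / (1.0 + np.exp(-X @ theta))      # sigmoid of theta^T x

theta = np.zeros(2)
alpha = 0.1
for _ in range(10_000):
    # theta_j -= alpha * sum_i (h_theta(x_i) - y_i) * x_ij
    theta -= alpha * X.T @ (h(theta, X) - y)

print(theta)
print(h(theta, X).round(2))                       # fitted P(y=1 | x) for each tumor size
```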
In other words, we need to find Θ that maximizes the likelihood function L:

\(L(Θ)=P(\overrightarrow{y}|X;Θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};Θ) =\prod_{i=1}^{m} \phi_1^{1\{y^{(i)}=1\}} \cdots \phi_{k-1}^{1\{y^{(i)}=k-1\}} \cdot (1 - \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}} =\prod_{i=1}^{m} \prod_{c=1}^{k-1} \phi_c^{1\{y^{(i)}=c\}} \cdot (1 - \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}\)

where \(\phi_j = \frac{exp(Θ_j^T x^{(i)})}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x^{(i)})}\).

Multinomial Logistic Regression (using cross-entropy minimization)

In this section, we will try to minimize the cross-entropy between Y and the estimated \(\widehat{Y}\). We define \(W \in R^{d \times n}\) and \(b \in R^{d}\) such that \(S(W x + b) = \widehat{Y}\), where S is the softmax function, d is the number of outputs (classes), and \(x \in R^n\). To estimate W and b, we will need to minimize the cross-entropy between the two probability vectors Y and \(\widehat{Y}\). The cross-entropy is defined as:

\(D(\widehat{Y}, Y) = -\sum_{j=1}^d Y_j log(\widehat{Y_j})\)

Example: if \(\widehat{y} = \begin{bmatrix}0.7 \\0.1 \\0.2 \end{bmatrix}\) and \(y=\begin{bmatrix}1 \\0 \\0 \end{bmatrix}\), then \(D(\widehat{Y}, Y) = D(S(W x + b), Y) = -1 \cdot log(0.7)\)

We need to minimize the cross-entropy over all training examples, therefore we will need to minimize the average cross-entropy of the entire training set:

\(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})\), where L is called the loss function.

If we define \(W = \begin{bmatrix} θ_1^T \\ θ_2^T \\ \vdots \\ θ_d^T\end{bmatrix}\) such that \(θ_1=\begin{bmatrix}θ_{1,0}\\θ_{1,1}\\ \vdots \\θ_{1,n}\end{bmatrix}, θ_2=\begin{bmatrix}θ_{2,0}\\θ_{2,1}\\ \vdots \\θ_{2,n}\end{bmatrix}, \dots, θ_d=\begin{bmatrix}θ_{d,0}\\θ_{d,1}\\ \vdots \\θ_{d,n}\end{bmatrix}\),

we can then write

\(L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)}) = -\frac{1}{m} \sum_{i=1}^m \sum_{j=1}^d 1^{\{y^{(i)}=j\}} log\left(\frac{exp(θ_j^T x^{(i)})}{\sum_{k=1}^d exp(θ_k^T x^{(i)})}\right)\)

For d = 2 (number of classes = 2),

\(L(W,b) = -\frac{1}{m} \sum_{i=1}^m \left( 1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}} log\left(\frac{exp(θ_1^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})}\right) + 1^{\{y^{(i)}=\begin{bmatrix}0 \\ 1\end{bmatrix}\}} log\left(\frac{exp(θ_2^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})}\right) \right)\)

\(1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}}\) means that the value is 1 if \(y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\) and 0 otherwise.

To estimate \(θ_1,\dots,θ_d\), we need to calculate the derivative and update \(θ_j := θ_j - α \frac{\partial L}{\partial θ_j}\)

Kernel regression

Kernel regression is a non-linear model. In this model we define the hypothesis as a sum of kernels:

\(\widehat{y}(x) = \phi(x) \, θ = θ_0 + \sum_{i=1}^d K(x, μ_i, λ) θ_i \), where \(\phi(x) = [1, K(x, μ_1, λ),\dots, K(x, μ_d, λ)]\) and \(θ = [θ_0, θ_1,\dots, θ_d]\)

For example, we can define the kernel function as \(K(x, μ_i, λ) = exp(-\frac{1}{λ} ||x-μ_i||^2)\). Usually we select d = number of training examples and \(μ_i = x_i\).

Once the matrix \(\phi(X)\) is calculated, we can use it as a new engineered feature matrix and then use the normal equation to find θ: \(θ = {(\phi(X)^T\phi(X))}^{-1}\phi(X)^Ty\)

Bayes Point Machine

The Bayes Point Machine is a Bayesian linear classifier that can be converted to a nonlinear classifier by using feature expansions or kernel methods, as with the Support Vector Machine (SVM). More details will be provided.
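Returning to the kernel-regression model above, here is a minimal NumPy sketch (Gaussian kernel, centres at the training points); the data, λ and names are illustrative only.

import numpy as np

def gaussian_kernel(x, mu, lam):
    # K(x, mu, lambda) = exp(-||x - mu||^2 / lambda)
    return np.exp(-np.sum((x - mu) ** 2) / lam)

def design_matrix(X, centres, lam):
    # Row i: [1, K(x_i, mu_1, lam), ..., K(x_i, mu_d, lam)]
    return np.array([[1.0] + [gaussian_kernel(x, mu, lam) for mu in centres] for x in X])

# Toy 1-D data; the centres are the training points themselves (d = number of examples).
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
Phi = design_matrix(X, X, lam=0.05)

# Normal equation on the engineered features (pinv for numerical safety).
theta = np.linalg.pinv(Phi.T @ Phi) @ Phi.T @ y
y_hat = Phi @ theta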
Ordinal Regression

Ordinal Regression is used for predicting an ordinal variable. An ordinal variable is a categorical variable for which the possible values are ordered (e.g. size: Small, Medium, Large). More details will be provided.

Poisson Regression

Poisson regression assumes that the output variable Y has a Poisson distribution and that the logarithm of its expected value can be modeled by a linear function: \(log(E[Y|X]) = log(λ) = θ^T x\)
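A hedged sketch of fitting this model by gradient ascent on the Poisson log-likelihood; the data, learning rate and iteration count below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
m = 200
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 1))])   # bias column + one feature
true_theta = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ true_theta))                     # counts with log E[Y|X] = theta^T x

theta = np.zeros(2)
alpha = 0.01
for _ in range(2000):
    # Gradient of the Poisson log-likelihood: X^T (y - exp(X theta)).
    grad = X.T @ (y - np.exp(X @ theta))
    theta += alpha * grad / m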
I need to show that the leading order inner solution is given by the expression below. Thus far, I have rescaled and shown that the boundary layer is of order $\epsilon^{\frac{3}{4}}$. Hence at leading order I then try to solve the second order ODE $Y_0''(X) + X^{2}Y_0'(X)=0$ but I don't get the stated result. Please help.

Here is what I did; note that our layer thicknesses don't agree. We know the boundary layer is at $x = 0,$ so let $\xi = x \epsilon^{-a}.$ Then the equation becomes $$\epsilon^{1-2a} y''(\xi) + \xi^2 \epsilon^{a} y'(\xi) - \xi^3 \epsilon^{3a} y = 0.$$ If $a = 1/3,$ then the first two terms balance and the third term is of higher order, so the thickness is ${\cal O}(\epsilon^{1/3}).$ The equation has now become $$ y'' + \xi^2 y' = 0.$$ Doing the typical expansion $y(\xi) \sim Y_0(\xi) + \cdots,$ the equation you stated is now the problem to solve. Clearly one may integrate this straight up one time to get $$Y'_0 = c_2 e^{-\xi^3 / 3}.$$ This is just your basic use of an integrating factor, $e^{\xi^3/3}.$ So then integrate one more time to get $Y_0(\xi) = c_1 + c_2\int_0^\xi e^{-t^3/3} \; dt.$
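If it helps, here is a quick symbolic check (SymPy; purely illustrative) that this $Y_0$ does satisfy $Y_0'' + \xi^2 Y_0' = 0$:

import sympy as sp

xi, t, c1, c2 = sp.symbols('xi t c1 c2')
# Y0 with the integral left unevaluated; differentiation uses the Leibniz rule.
Y0 = c1 + c2 * sp.Integral(sp.exp(-t**3 / 3), (t, 0, xi))
check = sp.diff(Y0, xi, 2) + xi**2 * sp.diff(Y0, xi)
print(sp.simplify(check.doit()))  # prints 0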
Let $X_1,\cdots,X_n$ be mutually independent and identically distributed exponential random variables. Let $M = \max(X_1, \cdots, X_n)$. Find $P\left( X_1 + \cdots + X_n < 2M\right)$. I have the following. First, we can write \begin{align} P\left( X_1 + \cdots + X_n < 2 M \right) = 1 - P\left( X_1 + \cdots + X_n > 2M\right) \end{align} Then \begin{align} P\left(X_1 + \cdots + X_n > 2M\right) &= P\left(X_1 + \cdots + X_n>2 \max(X_1,\cdots,X_n)\right) \\ &=P\left( X_2 + \cdots + X_n > X_1 \cap \cdots \cap X_1 + \cdots + X_{n-1} > X_n\right) \end{align} I think I can somehow find $$P\left(\sum_{i\ne j} X_i > X_j\right)$$ by using a symmetry argument (e.g. $P(X_1 > X_2 + X_3 + X_4) = P(X_2 > X_1 + X_3 + X_4)$). But the problem now is that I don't know if I can turn the expression into a product of probabilities. Besides, I haven't used the fact that the $X_i$'s are exponential random variables. Any hint would be great.
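For a numerical sanity check, one could simulate the probability; the snippet below is only illustrative (rate 1 and n = 4 are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 10**6
X = rng.exponential(scale=1.0, size=(trials, n))
# Estimate P(X_1 + ... + X_n < 2 * max_i X_i) by Monte Carlo.
p_hat = np.mean(X.sum(axis=1) < 2 * X.max(axis=1))
print(p_hat)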
Summary The main idea behind the "cuboverse" is that spacetime distances are measured by (something close to) the sup norm or infinity norm. Under this norm, spheres (the set of points at a fixed distance from an origin) are the same as cubes, hence the name. Other features that I have been able to derive from this are: Geodesics are straight lines like in our world, but Objects can basically only move at a certain constant speed, and only in one of eight special directions, $(\pm 1, \pm 1, \pm 1)$ in Cartesian coordinates. There is an attractive force of "gravity", and a second cohesive force that allows primordial material to form big planet-like bodies of liquid. My question is: Would planets be cubic in this universe? If not, what shape would they attain (octahedra, ordinary spheres, unstable, something else)? I would like answers based on physical reasoning and supported by mathematical calculations if possible, taking into account the relevant changes to real-world physics (see the details section below). Background I recently discovered the science fiction writer Greg Egan. Many of his novels like Diaspora, the Orthogonal series and Dichronauts share the idea of changing one or more fundamental things about our world's physics (the number of dimensions, the metric signature of these dimensions, changes to particle physics etc.) and exploring the consequences of that change. The author keeps some science notes online relating to these works, and after reading them I got inspired to attempt to build one such world myself. The cuboverse I imagined consists of big planets made of liquid (similar to water), one of them inhabited by a small intelligent species of eight-spiked "sea urchins", along with some other eel-like and carpet-like sentient creatures, all of them living near the surface. There are no stars in this world, so the necessary heat comes from the planet itself. I have already thought of a method of propulsion for the sea urchins and some rough details about their society. I still haven't developed the chemistry and particle physics, and I also have some questions about the biology, but first of all I would like to know whether the setting I imagined (specifically the shape of the planets and their stability) is realistic in the context of this modified physics. Details As a warning, I'm not at all experienced in exploring alternate world physics, it's my first time doing this, so some of the things I derived below could be wrong. Anyways, my basic idea is to change the Minkowski metric $$ds = (-c^2 dt^2 + dx^2 + dy^2 + dz^2)^{1/2}$$ to a $\lambda$-norm $$ds = (-c^\lambda |dt|^\lambda + |dx|^\lambda + |dy|^\lambda + |dz|^\lambda)^{1/\lambda},$$ where $\lambda$ is a very big number (I decided not to choose the sup norm itself $\lambda \to \infty$ because that would make geodesics non-unique). According to this, with this norm spacetime seemingly becomes a kind of Lorentzian analog of a Finsler geometry. We can calculate geodesics (which turn out to be straight lines) and define a four-momentum vector for point particles as usual, using the Lagrangian formalism. 
After some calculations (I can provide details if needed), we arrive at equations for the momentum $p_i = mc\: \gamma^{\lambda-1} \left\vert\dfrac{v_i}{c}\right\vert^{\lambda-1} \operatorname{sign}(v_i)$ and energy $E = mc^2\: \gamma^{\lambda-1}$, where $$\gamma = \dfrac{1}{\left(1-\frac{|v_x|^\lambda+|v_y|^\lambda+|v_z|^\lambda}{c^\lambda}\right)^{1/\lambda}}.$$ Newton's second law $\mathbf{F} = \dfrac{d\mathbf{p}}{dt}$ still holds, so heuristically, for a generic set of particles at generic positions under the action of generic forces, the probability that the momenta have one or more components nearly equal to zero will be very small. Since $\lambda$ is very big, the components of the velocity vector will generically be close to $$v_i = \pm \left\vert\dfrac{p_i c}{E}\right\vert^{1/(\lambda-1)}\: c \approx \pm \left\vert\dfrac{p_i c}{E}\right\vert^0\: c = \pm c,$$ as I claimed in the summary. 1 This implies, among other things, that it is virtually impossible for any object to stay still: its velocity will generically be one of the eight possible vectors pictured below (which one depends on which octant the momentum vector lies). For gravity, the most reasonable thing would be to work with a generalized 2 gravitational potential $V = \dfrac{Gm_1 m_2}{\lVert\mathbf{r}\rVert_\lambda}$, where $\lVert\mathbf{r}\rVert_\lambda = (|r_x|^\lambda + |r_y|^\lambda + |r_z|^\lambda)^{1/\lambda}$ is the sup norm distance between two particles of masses $m_1$ and $m_2$. I managed to make some calculations but the orbits look too funky, so I decided to get rid of stars and planetary systems completely, having instead a single kind of astronomical body. If planets are cubic (that's my question!), I believe the gravity near the surface would be constant. Since it is extremely easy to move a stationary object by applying a very small force, every big structure would become unstable under gravity alone, so I decided to have a secondary cohesive force that sticks particles of primordial matter together, while still allowing for some free fluid-like motion. I am not really sure how this force will look like since the chemistry isn't yet developed, so for the moment I'm forced to work with this rough description of how I want it to behave. For the moment, as a starting point I'm assuming the primordial matter consists of small hard particles, let's say ordinary spheres, and studying the collapse of a cloud of this material under gravity and a perfectly inelastic contact force (these assumptions can be changed if needed for the answer). At this point the analysis becomes more difficult, and I've been unable to find whether the planets are cubes or not. Based on the form of the gravitational potential, I would expect a "yes" answer, but the weird restrictions on the velocity make me doubt. Also, I'm not completely sure that a cohesive force will fully solve the instability problem. On the other hand, things could get complicated by relativistic effects since the velocities are close to $c$. Two final notes: Just to prevent any possible confusion: my intention behind the question isn't to make a world with cubic planets, it's perfectly OK for me if the answer is "these planets can't ever exist in your world, even if you modify the gravitational/cohesive forces". The underlying intention is just to explore the consequences of the main premise; it's not necessary to keep the little sea urchins idea viable. 
Although the context is worldbuilding, this is at its core a mathematical physics problem, and as such it most certainly has a unique right answer. I believe I have developed the basic physics enough for the question to be answerable with the information I provided. If that's not the case (if there's a free variable unaccounted for, or if the answer crucially needs concepts from the corresponding version of, for example, thermodynamics), please point out what else is needed and I'll try to rigorously work it out if it's feasible. EDIT: Inspired by Aric's answer I tried to make a simulation myself. I'm not very good at coding so I haven't been able to make the inelastic collisions work yet. Applying the force of gravity only, it turns out that an initially static cloud of material does seem to collapse into an octahedral shape, as suggested by some of the answers. Here are the results for a spherical slice of primordial material (color represents height): However, since there are some points where particles tend to clump together heavily and the simulation doesn't take into account the cohesive force that would separate them, I think there's still a good chance that the real shape is more cube-like, or perhaps something in between similar to this, as suggested by JBH's answer. There's still the question of stability, that I don't know how to tackle. Just for reference, I also found some slides online about a possible way to treat fluid dynamics in a Finsler spacetime. Most of it is over my head right now, but perhaps someone will find them useful. 1. - If my calculations are correct, the true speed $\lVert\mathbf{v}\rVert_\lambda$ is actually not so close to $c$ but a bit lower for massive particles. For example, for $m = 1.8$ kg, $\lambda = 100$ and a range of kinetic energies within $(1.5 - 180) mc^2$, the speed remains between $95-97\%$ of $c$. I believe the most common relativistic phenomena like length contraction or time dilation aren't expected to play a big role though, because $\gamma$ is practically $1$ for almost all speeds, but I may be wrong on this, I haven't given it much thought yet. 2. - One can consider the analogue of a Klein-Gordon massless field and work out the corresponding "Coulomb" interaction potential as the Green function for a static field background. Dimensional analysis of the resulting integral suggests a law of the form $V \propto \lVert\mathbf{r}\rVert_\lambda^{\lambda/(\lambda - 1)-3}$ (which is an inverse square of the distance for big $\lambda$) rather than $V \propto \lVert\mathbf{r}\rVert_\lambda^{-1}$ (someone asked for my reasoning and I put it here, in case anyone else's interested). For a true gravitational force I guess I would have to look into a suitable coupling between matter and a curved Finsler geometry, and it seems there is already some work being done on this. But I don't think the specifics matter much at this point, for the moment I just want a reasonable-looking attractive force.
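For anyone who wants to experiment, a minimal gravity-only sketch along the lines described above might look like the following (Python). The particle count, λ, masses, time step and initial cloud are placeholder choices, the potential is taken as an attractive $-G/\lVert\mathbf{r}\rVert_\lambda$ between unit masses, and the kinematics are reduced to the crude "velocity components are ±c" approximation from the summary; this is a sketch, not the simulation used for the plots.

import numpy as np

N, lam, G, c, dt, steps = 200, 100.0, 1.0, 1.0, 0.01, 500   # placeholder parameters
rng = np.random.default_rng(0)
pos = rng.normal(size=(N, 3))   # initial cloud of equal-mass particles
mom = np.zeros((N, 3))          # start from rest

def lam_norm(r):
    # lambda-norm ||r||_lambda = (|r_x|^lam + |r_y|^lam + |r_z|^lam)^(1/lam)
    return (np.abs(r) ** lam).sum(axis=-1) ** (1.0 / lam)

for _ in range(steps):
    diff = pos[None, :, :] - pos[:, None, :]      # r_ij = pos_j - pos_i
    dist = lam_norm(diff) + np.eye(N)             # pad the diagonal to avoid division by zero
    unit = diff / dist[:, :, None]
    # Attractive force from V = -G/||r||_lambda: F_k = G |r_k|^(lam-1) sign(r_k) / ||r||^(lam+1)
    force = G * (np.sign(unit) * np.abs(unit) ** (lam - 1) / dist[:, :, None] ** 2).sum(axis=1)
    mom += force * dt
    # Crude kinematics: each velocity component is +/- c, set by the sign of the momentum component.
    pos += c * np.sign(mom) * dt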
Search Now showing items 1-10 of 32 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... 
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I think that the question is sufficiently precise if we think of a realistic meaning of the word “inconsistent”. Also nowadays, for non-logicians the adjective “inconsistent” doesn't really mean “containing a contradiction” (this is only the obvious meaning given by modern Mathematical Logic), but rather it means not acceptable to a large or important part of the scientific community. Also nowadays, some of our works in some parts of modern Mathematics are not accepted as sufficiently rigorous by other parts. These works are hence perceived only as not sufficiently precise “ways of arguing”. Therefore, these “foreign argumentations” are perceived as potentially inconsistent, and need a different reformulation to be accepted. I know of relationships of this type between some parts of Geometry and Analysis, to mention only one example. It is the same problem occurring in the relationship between (some parts of) Physics and Mathematics, because these two disciplines are really completely different “games”: in Physics the most important achievement is the existence of a dialectic between formulas and a part of nature, even if the related Mathematics lacks formal clarity and is hence not accepted by several mathematicians. Analogously, early calculus was consistent in so far as the community accepted these “ways of arguing” and discovered statements which could be verified as true by a dialogue with other parts of knowledge: Physics and geometrical intuition in primis. Since in the early calculus the formal side (in the modern sense of manipulation of symbols, without reference to intuition) was surely weak, the dialectic between proofs and intuition was surely stronger (I mean statistically, in the distribution of 17th century mathematicians). In my opinion, this is the reason for the discovery of true statements, even if the related proofs are perceived as “weak” nowadays. Once the great triumvirate of Cantor, Dedekind, and Weierstrass decided that it was time to take a step further, the notion of “inconsistent” changed for this important part of the community and hence, sooner or later, for all the others. Also from the point of view of rules of inference, the consistency of early calculus has to be understood in the sense of a dialectic between different parts of knowledge and acceptance by the related scientific community. Therefore, in this sense, in my opinion early calculus is as consistent as our (and the future) calculus. I agree with Joel that “we are not in a qualitatively different situation”: probably in the near future all proofs will be computer assisted, in the sense that all the missing steps will be checked by a computer (whose software will be verified, once again, by a large part of the community) and we will only need to provide the main steps. Necessarily, articles will change in nature and, I hope, they will be more focused on the ideas and intuitions thanks to which we were able to create the results we are presenting. Therefore, young students in the future will probably read our papers in disgust, saying: “How were they able to understand how all these results were created? These papers seem like phone books: def, lem, thm, cor, def, lem, thm, cor... without any explanation of the rules of discovery and with several missing formal steps!”. Finally, I think that only formally, but not conceptually, this early calculus may look similar to NSA or SDG.
In my opinion, one of the main reasons for the lack of diffusion of NSA is that its techniques are perceived as “voodoo” by all those modern mathematicians (the majority) who base their work on the dialogue between formal mathematics and informal intuition. Too frequently, the lack of intuition is too strong in both theories. For example, for a person like Cauchy, what is the intuitive meaning of the standard part of the sine of an infinite number (NSA)? For people like Bernoulli, what is the intuitive meaning of properties like $x\le0$ and $x\ge0$ for every infinitesimal, and $\neg\neg\exists h$ such that $h$ is infinitesimal (but not necessarily there exists an infinitesimal; SDG)? Moreover, as soon as discontinuous functions appeared in the calculus, the natural reaction of almost every working mathematician (of the 17th century and of today) looking at the microaffinity axiom is not to change logic by switching to the intuitionistic one, but to change the axiom by inserting a restriction on the quantifier “for every $f:R\longrightarrow R$”. The apparently inconsistent argumentation of setting $h\ne0$ and finally $h=0$ can be faithfully formalized using classical calculus rather than these theories of infinitesimals. We can say that $f:R\longrightarrow R$ (here $R$ is the usual Archimedean real field) is differentiable at $x$ if there exists a function $r:R\times R\longrightarrow R$ such that $f(x+h)=f(x)+h\cdot r(x,h)$ and such that $r$ is continuous at $h=0$. It is easy to prove that this function $r$ is unique. Therefore, we can assume $h\ne0$, calculate freely to discover the unique form of the function $r(x,h)$ for $h\ne0$, and, in the final formula, set $h=0$, because $r$ is clearly continuous for all the examples of functions of the early calculus. This is called the Fermat-Reyes method, and it can also be proved for generalized functions like Schwartz distributions (and hence for an isomorphic copy of the space of all the continuous functions). Moreover, in my opinion, both Cauchy and Bernoulli would have perfectly understood this method and the related intuition. On the contrary, they would not have been able to understand all the intuitive inconsistencies they could easily find in both NSA and SDG.
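To make the Fermat-Reyes idea concrete, here is a toy symbolic computation (SymPy, purely illustrative) for $f(x)=x^2$: one finds $r(x,h)$ for $h\neq0$ and then sets $h=0$ in the final, continuous expression, with no infinitesimals involved.

import sympy as sp

x, h = sp.symbols('x h')
f = lambda u: u**2
# For h != 0, r(x, h) = (f(x+h) - f(x)) / h; here it reduces to a polynomial in h.
r = sp.cancel((f(x + h) - f(x)) / h)   # -> 2*x + h
derivative = r.subs(h, 0)              # setting h = 0 gives 2*x
print(r, derivative)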
A few days ago I found myself in the midst of an interesting knapsack problem. I was preparing for an endurance race and I wanted to minimize the weight of the gear I needed to carry during the race. I was concerned with weight because I was running The Death March, a classic test-piece among endurance athletes that weaves through the Pemigewasset Wilderness. With 20,000 feet of vertical elevation change over 34 miles, The Death March is well-deserving of its name. This post is about The Death March and the algorithm I used to help me select the optimal gear to carry in the race. Endurance racing is an interesting resource allocation problem because of the inherent trade-offs between weight and speed. The longer a racer estimates it will take to complete a race, the more food, water, and supportive gear they need to take with them. However, the more weight a racer carries, the longer it will take them to complete the race. Reducing weight improves speed, but at the expense of safety and comfort. Take too little gear and risk hypothermia and dehydration. Take too much gear and risk not finishing the race in a reasonable time frame. Selecting the most useful combination of gear at the least possible weight is crucial for a successful race. Completing The Death March in October added another layer of complexity to the race. Capricious autumn weather in the Pemi forces racers to carry extra gear to deal with fast moving winter storms. At one point in my race, the weather changed from a calm and sunny 55°F to blasting sleet and 70 mph wind all within about 2 hours. 1 It was crucial that I carry extra gear to accommodate for these weather conditions, while also limiting the amount of added weight in my pack. I formulated my optimization problem as a knapsack problem. To explain how I found the best set of items to carry for my race, I’ll start with some defining some notation. Let $x_i$ denote an item that I considered carrying in the race. Associated with each item were two variables—the physical weight of the item, $w_i$ and a value of importance for the item, $v_i$. If $x_i$ was a topographic map for example, it would have a large value of importance because maps are needed for navigation. The objective of this problem was then to find a subset of items that maximized the value of importance carried in the race without exceeding the total weight capacity, $C$: $$ \begin{eqnarray} \mbox{maximize} && \sum_{i=1}^n v_i x_i \\ \mbox{subject to} && \sum_{i=1}^n w_i x_i \lt C \\ && x_i \in \{0,1\}, \: \{i = 1, \dots, n\} \end{eqnarray} $$ The naive approach to this problem is to compute all $2^n$ possible item combinations and then find the best subset of items to pack. Brute-force would be a good strategy if the item set was not overly large; however, I had dozens of items I was considering carrying in the race, so I needed to use a solution that was faster to compute than exponential time. The knapsack problem is well-studied and many algorithms exist to solve different applications of the problem. A common improvement to the brute-force approach is to use backward induction and dynamic programming techniques to find intermediate solutions to some of the problem’s sub-problems so that these solutions can be reused throughout the main problem. The advantage of using dynamic programming is that it generally leads to faster solutions, but at the expense of a larger memory footprint. I’ll use a minimal working example to help explain the dynamic programming approach I used for my race. 
Consider a racer who wishes to carry a total weight of less than 8 kilograms in a race and has only 5 items to choose from. The goal is to maximize the value of the items in the knapsack such that the total weight is less than 8 kilos. Here are the weights and values of each of the 5 items numbered 1–5:

Most implementations start by building a matrix $A$ with $M$ rows and $N$ columns, where the $j$th column represents a specific integer weight of the knapsack from $0$ to $N$ and the $i$th row represents the index of the item being considered for inclusion in the knapsack, $\{0,\dots,M\}$, where $i=0$ indicates no item. The first objective is to complete the matrix from top left to bottom right and find the value in each cell of the matrix. Once the value in a given cell has been determined, the solution can be reused to help find the solutions in subsequent cells. It's clear that if there are no items in the knapsack and/or the weight of the knapsack is 0, the total value in the knapsack must also be 0. For all cells that satisfy one or both of these statements, the value in the cell is set to 0. For all other cells it is necessary to find the maximum of two possible solutions: 1. inherit the solution from the cell above the current cell, $A_{i-1,j}$, or 2. choose a better solution if it exists. This second solution is found by first computing the residual weight in the knapsack: the difference between the column weight and the weight of the current item. This quantity gives a shifted column index in the row above the current cell, $A_{i-1, residual}$. This index specifies the cell containing the value of the residual weight. Once the residual value has been found using the index, this value is added to the value of the current item, and the maximum of this sum and solution 1 is the value entered into the current cell. Each cell value is computed in this manner until the entire matrix is completed. The completed matrix for this problem looks like this:

Next, the optimal set of items to include in the knapsack can be determined by retracing the correct branch of the recurrence that produced the optimal value in the last cell, $A_{M,N}$. I've drawn the correct branch as a line through the matrix in the drawing above, starting at $A_{5,8}$ and moving backward in the matrix toward $A_{0,j}$. The circles containing values greater than zero denote which items are to be included in the knapsack. These cells were found by starting with the optimal value in $A_{M,N}$, subtracting the weight of each item in the row above the current cell, and then shifting left in the row by this index value. This process essentially reverses the residual computation performed above when the matrix was being filled. The best set of items to carry in this minimal working example, subject to the weight constraint of 8 kilos, is: $\langle 5, 4, 3 \rangle$.

I estimated that I would need to carry no more than 6 kilograms of gear in my race. My goal was then to find the best combination of items to carry in the race which would provide me with the most value subject to this weight constraint. The complete data and code I used in my knapsack problem is available from the GitHub repository that accompanies this blog post. I wrote an implementation for this algorithm in Python.
The main function knapsack takes two parameters: an integer value C indicating the capacity of the knapsack, and an array of dictionaries, items, where each dictionary represents one item and contains a name, an integer weight, an integer value, and a URL:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import sys
import json
from collections import defaultdict, namedtuple, deque
from itertools import product


def item_set(filename):
    # Load the list of candidate items from a JSON file.
    with open(filename) as infile:
        return json.load(infile)


def knapsack(C, items):
    # Solve the 0/1 knapsack problem with dynamic programming.
    # A[i, j] holds the best total value using the first i items with a
    # weight budget of j; missing keys default to 0 (the "no item" row/column).
    Datum = namedtuple('Datum', 'x, w, v, url')
    items = [Datum(**_) for _ in items]
    A = defaultdict(int)
    Q = deque()
    S = len(items)
    N = range(1, S + 1)
    M = range(1, C + 1)
    for i, j in product(N, M):
        abv_val = A[i-1, j]            # solution 1: inherit the cell above
        prev_elem = items[i-1]
        res_wi = j - prev_elem.w       # residual weight if item i is taken
        if j < prev_elem.w:
            A[i, j] = abv_val
        else:
            A[i, j] = max(abv_val, A[i-1, res_wi] + prev_elem.v)
    # Trace back through the matrix to recover the chosen items.
    while S > 0:
        nxt = S - 1
        if A[S, C] != A[nxt, C]:       # the value changed, so item S was taken
            Q.appendleft(items[nxt].x)
            C -= items[nxt].w
        S = nxt
    return Q


def output_fmt(packed_items, knapsack_data):
    # Print the selected items as markdown links.
    item_set = set(packed_items)
    for item in knapsack_data:
        if item['x'] in item_set:
            line = '[{x}]({url})'.format(**item)
            print(line)


def main():
    json_data = item_set(sys.argv[1])
    knapsack_data = [
        {'x': _.get('item'), 'w': _.get('weight'),
         'v': _.get('value'), 'url': _.get('url')}
        for _ in json_data
    ]
    packed_items = knapsack(6000, knapsack_data)   # capacity in grams
    output_fmt(packed_items, knapsack_data)


if __name__ == '__main__':
    main()
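For reference, the script is driven by a JSON file passed on the command line; each object needs item, weight (in grams), value and url fields. A made-up two-item example (the file name, items and URLs below are purely illustrative, not my actual gear list):

    [
      {"item": "Topographic map", "weight": 60, "value": 100, "url": "https://example.com/map"},
      {"item": "Headlamp", "weight": 90, "value": 80, "url": "https://example.com/lamp"}
    ]

Running the script on such a file (for example, python3 knapsack.py gear.json, assuming those names) prints one markdown link per selected item.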
In total, my algorithm suggested I take 27 items with a combined weight of 5,964 grams, which was only 37 grams less than my desired maximum weight capacity. I took almost the exact set of gear my algorithm suggested, with the exception of only a few items. Here are the items my algorithm selected for me to carry in the race:

Below, I've highlighted a few of the items I carried in the race and my rationale for selecting each item.

Choosing the appropriate footwear for racing was paramount. I gave footwear a high value of importance to ensure my algorithm would select shoes for the race; I wasn't going to run barefoot. Footwear selection is critically important for racing because carrying weight at the extremities is costly. Over the course of many miles, repeatedly lifting extra weight on the feet is especially fatiguing. Therefore, I wanted my algorithm to select a race shoe that was light but with enough protection and rigidity to handle moving quickly without damaging the feet from impact with rocks. My algorithm selected the Salomon Speedcross paired with Kahtoola MICROspikes. The Speedcross is a perfect shoe for this race because it is light-weight and breathable while still providing enough rigidity, cushioning, and protection to handle the rock-laden trails. In October, The Death March is frequently veneered with ice, so a traction system was necessary to prevent slipping on icy scree. Kahtoola's microspikes are lightweight, easy to take on and off, and provide most of the functionality of traditional crampons.

Along with footwear, I placed a high value on a layering system of clothing I felt would be important for the race. The logic of layering certain pieces of clothing together was to provide maximum versatility for a broad range of weather conditions, ecosystems, and activity levels. The layering concept is similar in theory to my Matryoshka Hardwear. Each layer can be used alone or in combination depending on conditions. My layering system was tailored for the three main ecosystems I encountered in the race: Acadian forests, the taiga, and alpine tundra.

A good layering system has two main functions: it draws perspiration away from the body and protects the body from the environment. These two jobs require fabrics that are highly breathable, fast drying, and able to remain warm even when wet, all while shielding the body from wind, rain, sun, and snow. Perspiring in alpine climates can be especially dangerous because lingering moisture can quickly cool and induce hypothermia. Selecting outerwear composed of the right materials is vital for keeping the body at the proper temperature. The job of a base layer like the Patagonia Lightweight Crew was to wick moisture away from the body and keep the skin dry. On top of the base layer, I wore a Patagonia R2 fleece as my mid layer to help evaporate moisture, transport it away from the base layer, and vent it into the air. The mid layer also provided added insulation to help retain heat close to the body. When the temperature dropped or winds intensified, I added a Patagonia Nanopuff Pullover synthetic jacket for warmth over my base and mid layers. I prefer synthetic fabrics at this time of year because they stay warm even when wet. My final layer was an Arc'teryx Alpha FL shell jacket and Venta SV gloves, which protected my skin and the other under layers from wind and precipitation. Together or in different combinations, these layers allowed me to stay safe while moving fast and light through the mountains. Altogether, a very fun albeit exhausting trip!
Search Now showing items 1-9 of 9 Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ... J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Search Now showing items 1-2 of 2 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... The ALICE TPC, a large 3-dimensional tracking device with fast readout for ultra-high multiplicity events (Elsevier, 2010-10-01) The design, construction, and commissioning of the ALICE Time-Projection Chamber (TPC) is described. It is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment ...
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Search Now showing items 1-1 of 1 Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
I am currently trying to understand Euler's article E71 on the Riccati differential equation and its connection with continued fractions. Apparently Daniel Bernoulli had shown that the equation $$ y' + ay^2 = bx^\alpha $$ can be solved by elementary functions if $\alpha = -2$ or $\alpha = - \frac{4n}{2n-1}$ for integers $n$. The proof proceeds by reduction: using a series of transformations (which you can find e.g. in Kamke's classical book on differential equations), the differential equation is transformed into $$ y' + Ay^2 = Bx^\beta $$ with $$ \beta = \frac{\alpha+4}{\alpha+3} = \frac{4(n-1)}{2(n-1)-1}. $$ Bernoulli claimed and Liouville proved that the equation cannot be solved by elementary functions for other exponents except $\alpha = -2$, the limit point for $n \to \infty$. To a number theorist, this looks a little bit like descent: every exponent $\alpha$ for which there is an elementary solution can be reached from $\alpha = 0$ by a finite series of transformations. What I would like to know is: Is this a superficial resemblance, or is there something deeper going on? In particular, can we attach some algebraic curve to this differential equation in such a way that its rational points correspond to the exponents for which the Riccati equation has an elementary solution, and can the transformations be explained geometrically? For what it's worth, in Liouville's article we can find the singular quartic $(2n+1)^2(m^2+4m) + 16n^2 + 16n=0$. Edit. I have just discovered an article on the Riccati equation from a group theoretical viewpoint which seems to confirm my suspicion that there is a lot of algebra beneath the integrability of the Riccati equation.
1. Evidence for a Narrow Near-Threshold Structure in the J/psi phi Mass Spectrum in B+ -> J/psi phi K+ Decays. PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2009, Volume 102, Issue 24. Journal Article

2. Study of the $B_c^+ \rightarrow J/\psi D_s^+$ and $B_c^+ \rightarrow J/\psi D_s^{*+}$ decays with the ATLAS detector. The European Physical Journal C, Particles and Fields, ISSN 1434-6044, 2016, Volume 76. The decays ... Regular - Experimental Physics. Journal Article

Physics Letters B, ISSN 0370-2693, 12/2015, Volume 751, Issue C, pp. 63-80. An observation of the decay and a comparison of its branching fraction with that of the decay has been made with the ATLAS detector in proton–proton collisions... PARTICLE ACCELERATORS. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 03/2015, Volume 114, Issue 12, p. 121801. A search for the decays of the Higgs and Z bosons to J/psi gamma and Upsilon(nS)gamma (n = 1,2,3) is performed with pp collision data samples corresponding to... PHYSICS, MULTIDISCIPLINARY | PSI | PHYSICS | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | Physical Sciences | Natural Sciences. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 10/2009, Volume 103, Issue 15, p. 152001. Journal Article

Physical Review Letters, ISSN 0031-9007, 06/2009, Volume 102, Issue 24. Journal Article

7. Prompt and non-prompt $J/\psi$ elliptic flow in Pb+Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ATLAS detector. The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1-23. The elliptic flow of prompt and non-prompt $J/\psi$ was measured in the dimuon decay channel in Pb+Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article
8. Measurement of the prompt J/ψ pair production cross-section in pp collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 2, pp. 1-34. Journal Article

9. Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/ψ production in proton–proton collisions at √s = 7 TeV. Nuclear Physics, Section B, ISSN 0550-3213, 2011, Volume 850, Issue 3, pp. 387-444. The inclusive production cross-section and fraction of mesons produced in b-hadron decays are measured in proton–proton collisions at √s = 7 TeV with the ATLAS detector at... Journal Article

10. Prompt and non-prompt $J/\psi$ and $\psi(2\mathrm{S})$ suppression at high transverse momentum in 5.02 TeV Pb+Pb collisions with the ATLAS experiment. The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1-28. A measurement of $J/\psi$ and $\psi(2\mathrm{S})$ production is presented. It is based on a data sample from Pb+Pb collisions at... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 03/2007, Volume 98, Issue 13, p. 132002. We present an analysis of angular distributions and correlations of the X(3872) particle in the exclusive decay mode X(3872) -> J/psi pi(+)pi(-) with J/psi... PHYSICS, MULTIDISCIPLINARY | CHARMONIUM | DETECTOR | Physics - High Energy Physics - Experiment | PARTICLE DECAY | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | SPIN | PIONS MINUS | QUANTUM NUMBERS | FERMILAB COLLIDER DETECTOR | MUONS PLUS | PARITY | ANGULAR DISTRIBUTION | J PSI-3097 MESONS | MUONS MINUS | TEV RANGE 01-10 | PIONS PLUS | PROTON-PROTON INTERACTIONS | PAIR PRODUCTION. Journal Article

12. Measurement of differential J/ψ production cross sections and forward-backward ratios in p + Pb collisions with the ATLAS detector. Physical Review C - Nuclear Physics, ISSN 0556-2813, 10/2015, Volume 92, Issue 3. Measurements of differential cross sections for J/ψ production in p + Pb collisions at √sNN = 5.02 TeV at the CERN Large Hadron Collider with the ATLAS detector... NUCLEAR PHYSICS AND RADIATION PHYSICS. Journal Article

13. Measurement of the centrality dependence of J/ψ yields and observation of Z production in lead–lead collisions with the ATLAS detector at the LHC. Physics Letters B, ISSN 0370-2693, 03/2011, Volume 697, Issue 4, pp. 294-312. Journal Article

14. Study of B± -> J/ψ π± and B± -> J/ψ K± decays: Measurement of the ratio of branching fractions and search for direct CP violation. Physical Review Letters, ISSN 0031-9007, 2004, Volume 92, Issue 24, p. 241802. Journal Article
European Physical Journal C, ISSN 1434-6044, 2011, Volume 71, Issue 5, pp. 1-17. The production of J/ψ mesons in proton–proton collisions at √s = 7 TeV is studied with the LHCb detector at the LHC. The differential cross-section for prompt... Hadrons | Gravitation | Relativity (Physics) | Quantum field theory | Quantum theory. Journal Article

Physical Review Letters, ISSN 0031-9007, 2012, Volume 109, Issue 23, p. 232001. Journal Article

Physics Letters B, ISSN 0370-2693, 01/2012, Volume 707, Issue 1, pp. 52-59. The production of pairs in proton–proton collisions at a centre-of-mass energy of 7 TeV has been observed using an integrated luminosity of 37.5 pb collected... Journal Article

Physical Review Letters, ISSN 0031-9007, 06/2012, Volume 108, Issue 25. Journal Article

5. Folk art africain? : créations contemporaines en Afrique subsaharienne : Omar Victor Diop, Kifouli Dossou, Samuel Fosso, Romuald Hazoumè, Kiripi Katembo, J.-P. Mika, Gérard Quenum, Sory Sanlé, Amadou Sanogo, Ablaye Thiossane. Book

6. Evidence for the decay $B^0\to J/\psi \omega$ and measurement of the relative branching fractions of $B^0_s$ meson decays to $J/\psi\eta$ and $J/\psi\eta'$. Nuclear Physics B, ISSN 0550-3213, 10/2012, Volume 867, pp. 547-566. Journal Article

European Physical Journal C, ISSN 1434-6044, 05/2011, Volume 71, Issue 5. Journal Article

Physics Letters B, ISSN 0370-2693, 12/2012, Volume 718, Issue 2, pp. 431-440. The prompt production of charmonium and states is studied in proton–proton collisions at a centre-of-mass energy of at the Large Hadron Collider. The and... Journal Article

Physical Review Letters, ISSN 0031-9007, 04/2012, Volume 108, Issue 15. Journal Article

10. Evidence for the decay B0 → J/ψω and measurement of the relative branching fractions of Bs0 meson decays to J/ψη and J/ψη′. Nuclear Physics B, ISSN 0550-3213, 02/2013, Volume 867, Issue 3, pp. 547-566. First evidence of the B0 → J/ψω decay is found and the Bs0 → J/ψη and Bs0 → J/ψη′ decays are studied using a dataset corresponding to an integrated luminosity of 1.0 fb... Journal Article

Physical Review Letters, ISSN 0031-9007, 12/2012, Volume 109, Issue 23. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2012, Volume 108, Issue 25. Journal Article

Physical Review Letters, ISSN 0031-9007, 03/2012, Volume 108, Issue 10. Journal Article

14. Evidence for the decay B0 -> J/psi omega and measurement of the relative branching fractions of Bs(0) meson decays to J/psi eta and J/psi eta'. NUCLEAR PHYSICS B, ISSN 0550-3213, 02/2013, Volume 867, Issue 3, pp. 547-566. Journal Article

Physics Letters B, ISSN 0370-2693, 01/2012, Volume 707, Issue 1, pp. 52-59. Journal Article

Physical Review D, ISSN 1550-7998, 2012, Volume 86, Issue 5. Journal Article

The European Physical Journal C, ISSN 1434-6044, 8/2012, Volume 72, Issue 8, pp. 1-9. The relative rates of B-meson decays into J/ψ and ψ(2S) mesons are measured for the three decay modes in pp collisions recorded with the LHCb detector. The...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Astronomy, Astrophysics and Cosmology | Elementary Particles, Quantum Field Theory | Mesons | Particle decay | Regular - Experimental Physics. Journal Article
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$ Gently taking caring of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. 
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homomorphisms $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-$0$ exterior derivative in a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
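A short check of the claim that $C^\infty(M)$-linearity in the first slot makes $\nabla_X s$ pointwise in $X$ (a minimal sketch; the bump functions needed to make the local frame argument global are suppressed):
If $X(p) = 0$, write $X = \sum_i f_i \partial_i$ in a chart around $p$ with $f_i(p) = 0$. Then
\[
(\nabla_X s)(p) = \Big(\sum_i f_i\, \nabla_{\partial_i} s\Big)(p) = \sum_i f_i(p)\, (\nabla_{\partial_i} s)(p) = 0,
\]
so if $X(p) = Y(p)$ then $(\nabla_X s)(p) = (\nabla_Y s)(p)$, i.e. $\nabla_X s(p)$ only needs $X(p) \in T_pM$.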
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voilà, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$; explicitly, $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\Big[\tfrac{1}{2}\big(\tfrac{X_{1}+\dots+X_{n}}{n}\big)^2\Big] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty. @Ultradark I don't know what you mean, but you seem down in the dumps, champ. Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method. I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a $2$-cycle, a $3$-cycle, and a $4$-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three Sylow $2$-subgroups by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which preserve each of the three pairs of opposite faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are the 180 degree rotations about the $x$-, $y$- and $z$-axes. But composing the 180 rotation about the $x$-axis with a 180 rotation about the $y$-axis gives you a 180 rotation about the $z$-axis, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
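To complement the order-6 example, here is a small brute-force sketch (plain Python, no external libraries; the helper names are mine) that closes a set of generators under composition and confirms the orders 6 and 12:

def compose(p, q):
    # apply q first, then p; permutations are tuples mapping index i -> p[i]
    return tuple(p[q[i]] for i in range(len(p)))

def generated_subgroup(gens, n=4):
    identity = tuple(range(n))
    group = {identity}
    frontier = set(gens)
    while frontier:
        new = set()
        for g in frontier:
            for h in group | frontier:
                for prod in (compose(g, h), compose(h, g)):
                    if prod not in group and prod not in frontier:
                        new.add(prod)
        group |= frontier
        frontier = new
    return group

# permutations on {0,1,2,3}: (1 2) -> swap positions 0,1 ; (1 2 3) -> cycle 0->1->2->0
transposition = (1, 0, 2, 3)
three_cycle = (1, 2, 0, 3)
print(len(generated_subgroup([transposition, three_cycle])))      # 6

# two overlapping 3-cycles generate the even permutations, i.e. A_4
print(len(generated_subgroup([(1, 2, 0, 3), (0, 2, 3, 1)])))      # 12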
In the spirit of the classic four fours, I wonder what's the optimal set of four numbers? Your goal is to make the most consecutive integers using four digits of your choice. Pick four of $0,1,2,3,4,5,6,7,8,9$ (you can pick multiple instances of the same digit). When constructing an integer: all of your four candidates must be used exactly once (order/placing of digits is irrelevant); you may use the basic arithmetic operations $+,-,\times,\div$ and parentheses $()$; you may use $a^b$ and $\sqrt[a]{b}$, but at the expense of 2 numbers, as you can see; you may not form new numbers, i.e. $ab$ is not allowed. If we were to use four $4$s, the best we could do would be up to $9$:
0 = 4 ÷ 4 × 4 − 4
1 = 4 ÷ 4 + 4 − 4
2 = 4 − (4 + 4) ÷ 4
3 = (4 × 4 − 4) ÷ 4
4 = 4 + 4 × (4 − 4)
5 = (4 × 4 + 4) ÷ 4
6 = (4 + 4) ÷ 4 + 4
7 = 4 + 4 − 4 ÷ 4
8 = 4 ÷ 4 × 4 + 4
9 = 4 ÷ 4 + 4 + 4
*10 = 4 ÷ √4 + 4 × √4
*10 = (44 − 4) ÷ 4
Number 10 can't be done and is an example of failing, since it would require either expenseless roots ($\sqrt{4}$ isn't allowed; $\sqrt[2]{4}$ is, which requires you to use $4$ and $2$) or number formation, which isn't allowed either. Zero does not necessarily need to be included; you can start at either $0$ or $1$. For the purposes of freedom of puzzling, if you think you can top your solution for a chosen set of digits by starting at any other positive integer, you can add that to your answer below your initial solution. (I suspect this is unlikely.) If you want, you can extend your consecutive list to negative integers, but this is strictly optional and not necessary in any way, other than for the purposes of fulfillment and mathematical euphoria. Example: There is an example on Puzzling.SE using digits $2,2,4,5$: But this can be expanded, since the given example uses only basic arithmetic operations, not including exponentiation and roots. I also suspect it could be done better using another set. I tried this by hand and I'm stuck at number $29$ using this example set, and at number $34$ using $9,8,3,2$.
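If you would rather let a computer chase these chains than do it by hand, here is a rough brute-force sketch (plain Python; the function name and the search limits are my own choices, it covers only the four basic operations plus small integer exponents, and it ignores the two-digit root rule):

from fractions import Fraction
from itertools import permutations

def reachable(numbers):
    # every value obtainable from all of `numbers` with +, -, *, /, and small integer powers
    if len(numbers) == 1:
        return {numbers[0]}
    results = set()
    # split the multiset into two non-empty parts and combine the sub-results pairwise
    for i in range(1, len(numbers)):
        for left in permutations(numbers, i):
            rest = list(numbers)
            for x in left:
                rest.remove(x)
            for a in reachable(list(left)):
                for b in reachable(rest):
                    results.update({a + b, a - b, b - a, a * b})
                    if b != 0:
                        results.add(a / b)
                    if a != 0:
                        results.add(b / a)
                    if b.denominator == 1 and abs(b) <= 6:   # keep exponents small and integral
                        try:
                            results.add(a ** int(b))
                        except ZeroDivisionError:
                            pass
    return results

values = reachable([Fraction(4)] * 4)
n = 0
while Fraction(n) in values:
    n += 1
print("first unreachable integer:", n)   # expect 10 for four 4s under these rules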
Sampling from a maximal coupling of 2 univariate Gaussians will give you a bivariate distribution which is not bivariate Gaussian. A bivariate Gaussian can be parameterized with a correlation parameter $\rho$. I assume that the maximal coupling distribution will have a larger correlation than the bivariate Gaussian, but I might be wrong on this; the maximal coupling could be less correlated than the bivariate Gaussian. Are there any results on the relationship between maximal coupling and correlation? Pierre Jacob on Statisfaction explains how to implement a maximal coupling between two distributions $p$ and $q$, which is essentially a sequence of two accept-reject steps. From there you can try to estimate the correlation $\varrho$ between both Gaussian variates. Here is, for instance, a comparison of empirical estimates of the correlations, depending on the values of $\mu$ and $\sigma$ (with the other Gaussian being standard): The correlation associated with a maximal coupling can be derived from \begin{align*}\mathbb{E}[XY]&=\mathbb{E}[XY\mathbb{I}_{X=Y}]+\mathbb{E}[XY\mathbb{I}_{X\ne Y}]\\ &=\int x^2 p(x)\wedge q(x)\text{d}x+ \int\int xy \{p(x)-p(x)\wedge q(x)\}\{q(y)-p(y)\wedge q(y)\}\text{d}x\text{d}y \end{align*} Note that, to answer the question, the correlation returned by maximal coupling is a function of the four parameters of the Gaussians, while the maximal correlation is always equal to one.
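For readers who want to reproduce this kind of estimate, below is a minimal sampling sketch (Python with numpy/scipy; the function name is mine). It follows the standard two-step accept-reject construction for a maximal coupling of N(0,1) and N(mu, sigma^2), and the correlation is then simply estimated from the draws:

import numpy as np
from scipy.stats import norm

def maximal_coupling_pair(mu, sigma, rng):
    # one draw (X, Y) with X ~ N(0,1), Y ~ N(mu, sigma^2), coupled maximally
    p = norm(0.0, 1.0)
    q = norm(mu, sigma)
    x = p.rvs(random_state=rng)
    if rng.uniform(0, p.pdf(x)) <= q.pdf(x):
        return x, x                       # coupled: X = Y
    while True:                           # otherwise sample Y from the residual part of q
        y = q.rvs(random_state=rng)
        if rng.uniform(0, q.pdf(y)) > p.pdf(y):
            return x, y

rng = np.random.default_rng(0)
draws = np.array([maximal_coupling_pair(mu=1.0, sigma=2.0, rng=rng) for _ in range(20000)])
print(np.corrcoef(draws[:, 0], draws[:, 1])[0, 1])   # empirical correlation of the coupled pair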
ISSN: 1612-1112. Keywords: Liquid chromatography; Reversed-phase capacity factors; Cavity term; Characteristic volumes. Source: Springer Online Journal Archives 1860-2000. Topics: Chemistry and Pharmacology. Summary: In the correlation of reversed-phase liquid chromatography capacity factors through the equation $$\log k' = \log k'_0 + mV/100 + s\pi_2^* + b\beta_2 + a\alpha_2,$$ the use of McGowan's characteristic volume, $V_x$, which can be trivially calculated, is entirely equivalent to the use of Leahy's computer-calculated intrinsic volumes, $V_1$, for the cavity term $mV/100$. It is shown that for 209 gaseous, liquid, and solid solutes, the two sets of volumes are related through the equation $$V_1 = 0.597 + 0.6823 V_x,$$ with a standard deviation of only 1.24 cm$^3$ mol$^{-1}$ and a correlation coefficient of 0.9988. Type of Medium: Electronic Resource. Permalink: http://dx.doi.org/10.1007/BF02311772
t-test & Welch-test [Power / Sample Size]
Hi Rocco,
» Where does the formula on slide 10.83 in bebac.at/lectures/Leuven2013WS2.pdf for CI for parallel design come from? I cannot seem to find reference anywhere.
Honestly, I don't remember why I simplified the commonly used formula. Algebra: $$s\sqrt{\tfrac{n_1+n_2}{n_1n_2}}=\sqrt{s^2(1/n_1+1/n_2)}\;\tiny{\square}$$
Comparison with the data of the example. Like in the presentation:
mean.log <- function(x) mean(log(x), na.rm = TRUE)
T <- c(100, 103, 80, 110, 78, 87, 116, 99, 122, 82, 68, NA)
R <- c(110, 113, 96, 90, 111, 68, 111, 93, 93, 82, 96, 137)
n1 <- sum(!is.na(T))
n2 <- sum(!is.na(R))
s1.2 <- var(log(T), na.rm = TRUE)
s2.2 <- var(log(R), na.rm = TRUE)
s0.2 <- ((n1 - 1) * s1.2 + (n2 - 1) * s2.2) / (n1 + n2 - 2)
s0 <- sqrt(s0.2)
nu.1 <- n1 + n2 - 2
nu.2 <- (s1.2 / n1 + s2.2 / n2)^2 / (s1.2^2 / (n1^2 * (n1 - 1)) + s2.2^2 / (n2^2 * (n2 - 1)))
t.1 <- qt(p = 1 - 0.05, df = nu.1)
t.2 <- qt(p = 1 - 0.05, df = nu.2)
PE.log <- mean.log(T) - mean.log(R)
CI.1 <- PE.log + c(-1, +1) * t.1 * s0 * sqrt((n1 + n2) / (n1 * n2))
CI.2 <- PE.log + c(-1, +1) * t.2 * sqrt(s1.2 / n1 + s2.2 / n2)
CI.t <- 100 * exp(CI.1)
CI.w <- 100 * exp(CI.2)
fmt <- "%.3f %.3f %.3f %.2f %.2f %.2f"
cat(" method df mean.T mean.R PE CL.lower CL.upper",
    "\n t-test", sprintf(fmt, nu.1, exp(mean.log(T)), exp(mean.log(R)), 100 * exp(PE.log), CI.t[1], CI.t[2]),
    "\nWelch-test", sprintf(fmt, nu.2, exp(mean.log(T)), exp(mean.log(R)), 100 * exp(PE.log), CI.w[1], CI.w[2]), "\n")
 method df mean.T mean.R PE CL.lower CL.upper
 t-test 21.000 93.554 98.551 94.93 83.28 108.20
 Welch-test 20.705 93.554 98.551 94.93 83.26 108.23
More comfortable with the t.test() function, where var.equal = TRUE gives the t-test and var.equal = FALSE the Welch-test:
res <- data.frame(method = c("t-test", "Welch-test"), df = NA,
                  mean.T = NA, mean.R = NA, PE = NA, CL.lower = NA, CL.upper = NA)
var.equal <- c(TRUE, FALSE)
for (j in 1:2) {
  x <- t.test(x = log(T), y = log(R), conf.level = 0.90, var.equal = var.equal[j])
  res[j, 2] <- signif(x[[2]], 5)
  res[j, 3:4] <- signif(exp(x[[5]][1:2]), 5)
  res[j, 5] <- round(100 * exp(diff(x[[5]][2:1])), 2)
  res[j, 6:7] <- round(100 * exp(x[[4]]), 2)
}
print(res, row.names = FALSE)
 method df mean.T mean.R PE CL.lower CL.upper
 t-test 21.000 93.554 98.551 94.93 83.28 108.20
 Welch-test 20.705 93.554 98.551 94.93 83.26 108.23
The t-test is fairly robust against the former but less so for the latter. In the R function t.test(), var.equal = FALSE is the default because: any pre-test is bad practice (it might inflate the type I error) and has low power, especially for small sample sizes. If \({s_{1}}^{2}={s_{2}}^{2}\;\wedge\;n_1=n_2\), the formula given above reduces to the simple \(\nu=n_1+n_2-2\) anyhow. In all other cases the Welch-test is conservative, which is a desirable property.
Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
By my understanding, the Mach Number at a given altitude is calculated by dividing IAS by the speed of sound at that altitude. So how is this speed of sound calculated to display the Mach Number on the Mach Meter? Does the Mach Meter share the same pitot tube used to calculate airspeed?
Most modern jets use an Air Data Computer (ADC) to calculate (among other things) Mach number. An ADC is simply a computer which accepts measurements of atmospheric data to calculate various flight related data. A typical ADC may be connected to:
Inputs: Static System Pressure; Pitot Pressure; Total Air Temperature (TAT).
Outputs (calculated): Pressure Altitude; Baro-Corrected Altitude; Vertical Speed; Mach Number; Total Air Temperature; Calibrated Airspeed; True Airspeed; Digitized Pressure Altitude (Gillham); Altitude Hold; Airspeed Hold; Mach Hold; Flight Control Gain Scheduling.
Each of the inputs and outputs may be analog or digital depending on the design of the system, and they are used for many purposes throughout the airplane. Each output is a purely calculated value based on the various input measurements and data stored within the unit. To answer your question about the pitot source for the Mach meter: yes, they use the same pitot and static sources as the airspeed indicator. In the case of mechanical instruments, they are both connected directly to the pitot-static system. In the case of an ADC, the pitot-static system is connected directly to the ADC, and electrical signals then communicate the airspeed and Mach number to the electric airspeed indicator and Mach meter (or EFIS), which no longer require actual pitot-static connections.
The Math
A simplified example of the Mach number calculation would be based on the pressure inputs: $$M=\sqrt{5\left(\left(\frac{P_T}{P_S}\right)^{0.2857}-1\right)}$$ Where: $P_T$ = Total Pressure and $P_S$ = Static Pressure. The actual calculation makes corrections to the pressure data to compensate for installation errors and nonlinear sensor readings. Note that it doesn't actually calculate the (local) speed of sound (LSS) in order to determine the current Mach number, but with the TAT input and the calculated Mach number it could do so, by computing the outside air temperature (OAT/SAT) first: $$SAT=\frac{TAT}{1+0.2\times{Mach}^2}$$ $$LSS=38.945\sqrt{SAT}$$ For example, let's say that the TAT is -36 C (237.16 K) and we are flying Mach 0.80: $$SAT=\frac{237.16}{1+0.2\times0.8^2}=\frac{237.16}{1.128}=210.25\ \mathrm{K}=-63\ \mathrm{^{\circ}C}$$ $$LSS=38.945\sqrt{210.25}=38.945\times14.5=564.70\ \text{knots}$$ Again, these are simplified formulas, because the actual ones would consider sensor error, etc. An (analog) Machmeter looks something like this: So it's more like a more complex version of the airspeed indicator, in this case correcting for the altitude in the process. That being said, I found this extract, apparently from an FAA publication: Some older mechanical Machmeters not driven from an air data computer use an altitude aneroid inside the instrument that converts pitot-static pressure into Mach number. These systems assume that the temperature at any altitude is standard; therefore, the indicated Mach number is inaccurate whenever the temperature deviates from standard. These systems are called indicated Machmeters. Modern electronic Machmeters use information from an air data computer system to correct for temperature errors. These systems display true Mach number. Most systems today use more detailed data from sensors to give a correct value through a variety of (complex) calculations.
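To make the arithmetic concrete, here is a small sketch (plain Python; the function names are mine) that reproduces the worked example above. For the pressure-ratio step it uses the standard subsonic isentropic relation between total and static pressure, which is what the simplified ADC formula amounts to:

import math

def mach_from_pressures(p_total, p_static):
    # subsonic isentropic relation for gamma = 1.4: M = sqrt(5 * ((Pt/Ps)**(2/7) - 1))
    return math.sqrt(5.0 * ((p_total / p_static) ** (2.0 / 7.0) - 1.0))

def sat_from_tat(tat_kelvin, mach):
    # SAT = TAT / (1 + 0.2 * M^2), assuming a temperature recovery factor of 1
    return tat_kelvin / (1.0 + 0.2 * mach ** 2)

def local_speed_of_sound(sat_kelvin):
    # LSS in knots, using the 38.945 * sqrt(T) rule quoted in the text
    return 38.945 * math.sqrt(sat_kelvin)

# Worked example from the text: TAT = -36 C (237.16 K) at Mach 0.80
sat = sat_from_tat(237.16, 0.80)
print(round(sat, 2), round(local_speed_of_sound(sat), 1))   # ~210.25 K, ~564.7 kt
print(round(mach_from_pressures(1.524, 1.0), 2))            # a pressure ratio of ~1.524 gives M ~0.80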
A little more discussion is available on PPRuNe. Side note: the speed of sound ($a$) itself is solely determined by temperature (that being said, you are able to determine it from pressure, as pressure is a function of temperature in the standard atmosphere), hence the problem with the analog system above. For air: $$a=\sqrt{R{\gamma}T}~\mathrm{m/s}$$ Where: $R=287\ \mathrm{J/(kg\,K)}$ is the specific gas constant for air, $\gamma=1.4$ is the specific heat ratio (dimensionless), and $T$ is the absolute temperature in K. Remember that you're reading off indicated airspeed [IAS] in knots in the cockpit, which is not the same as true airspeed [TAS] converted to m/s, in case you're trying to work out your Mach number manually ($M=\frac{TAS}{a}$). For use without knowledge of airspeed and temperature, Wikipedia gives the following formula for subsonic flows: $$M=\sqrt{5\left(\left(\frac{P_T}{P}\right)^{2/7}-1\right)}$$ Where: $P_T$ = Total Pressure and $P$ = Static Pressure. A Machmeter does not determine the speed of sound. It doesn't even need to: $$M=f\!\left(\frac{P_T-P_S}{P_S}\right)$$ The Mach number is simply a function of the total pressure minus the static pressure, divided by the static pressure. Here is why: $$M=\frac{TAS}{LSS}$$ The Mach number is true airspeed versus the local speed of sound. $$TAS=IAS\sqrt{T}\div\sqrt{P}\div16.97$$ Converting indicated speed to true speed, we need to multiply by the square root of absolute temperature (in K). $$LSS=38.94\sqrt{T}$$ Also, the local speed of sound is directly proportional to the square root of absolute temperature (in K). If you divide $TAS=IAS\sqrt{T}\div\sqrt{P}\div16.97$ by $LSS=38.94\sqrt{T}$, the $\sqrt{T}$ terms cancel each other out: $$M=\frac{IAS}{\sqrt{P}\times 16.97\times 38.94}$$ IAS we already have, as it is determined by the difference between pitot and static pressure, and $P$ is just the static pressure, or, like I said in the beginning: $$M=f\!\left(\frac{P_T-P_S}{P_S}\right)$$ See, no thermometer... only dynamic and static pressure.
Fatigue criteria
Fatigue is progressive structural damage of a material under cyclic loads. Most engineering failures are caused by fatigue, which is why it is very important to perform a fatigue check. SDC Verifier implements two methods of fatigue check: the stress difference method, and the stress difference method with mean stress correction (Smith correction). The first method only takes into account the stress range (the difference between the maximum and minimum stress). It is used by codes and standards such as Eurocode 3, the EN 13001 crane code and the recommendations for fatigue design of welded joints and components of the IIW (the International Institute of Welding), see for example IIW document XIII-1965-03 / XV-1127-03, for both steel and aluminium. The second method takes the influence of the mean stress into account according to Smith, see the figure below: Most of the crane standards, such as DIN 15018, FEM 1.001, NEN 2018/NEN 2019, the DASt-Richtlinie and the FKM guideline, use the Smith correction (a combination of Goodman and yield correction, the red line in the graph above). The implementation of the DIN 15018 standard is used as an example to explain this method.
Translation of the Smith correction into allowable stresses
In contrast with other fatigue calculation methods, no fatigue damage is calculated, but an allowable fatigue stress is used which depends on the usage (= the number of load cycles and the intensity of the load, translated into a load spectrum). Instead of using a fixed allowable alternating stress, the Smith correction results in a lower allowable stress variation with increasing mean stress. In crane codes, instead of using mean stress and alternating stress, the minimum and maximum stress values are used, see below:
The alternating stress is \(\sigma_{alt} = \sigma_{max} - \sigma_{min}\) and the mean stress is \(\sigma_{mean} = (\sigma_{max} + \sigma_{min})/2\).
The limit stress ratio defines the maximum allowable stress and is calculated as \(\kappa = \sigma_{min}/\sigma_{max}\) if \(|\sigma_{max}| > |\sigma_{min}|\) and \(\kappa = \sigma_{max}/\sigma_{min}\) if \(|\sigma_{min}| > |\sigma_{max}|\), and therefore \(-1 \le \kappa < 1\).
The resulting allowable stress range is shown in the figure below, with the limitation for tension and compression stress in red: In the table below this is translated into allowable stress formulas for one stress direction:
Table 18. Equations for the permissible maximum stresses according to figure 9, as a function of κ and of zul σ_D(−1) as specified in table 17.
Alternating stress range, −1 < κ < 0:
Tension: \(zul\,\sigma_{D_{z}(\kappa)} = \frac{5}{3-2\kappa} \cdot zul\,\sigma_{D(-1)}\)
Compression: \(zul\,\sigma_{D_{d}(\kappa)} = \frac{2}{1-\kappa} \cdot zul\,\sigma_{D(-1)}\)
Pulsating stress range, 0 < κ < +1:
Tension: \(zul\,\sigma_{D_{z}(\kappa)} = \frac{zul\,\sigma_{D_{z}(0)}}{1-\left(1-\frac{zul\,\sigma_{D_{z}(0)}}{0.75\cdot\sigma_{B}}\right)\cdot\kappa}\)
Compression: \(zul\,\sigma_{D_{d}(\kappa)} = \frac{zul\,\sigma_{D_{d}(0)}}{1-\left(1-\frac{zul\,\sigma_{D_{d}(0)}}{0.90\cdot\sigma_{B}}\right)\cdot\kappa}\)
This check is repeated for all stress directions.
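As a rough illustration of how the Table 18 equations are applied, here is a small sketch (plain Python; the function names are mine, the values of zul σ_Dz(0) and zul σ_Dd(0) are taken from the alternating-range formulas evaluated at κ = 0, the St 37 ultimate stress of 360 MPa in the example call is an assumed round value, and the basic value 89.1 N/mm² is the St 37 / K2 / loading group 5 entry from the table further below):

def permissible_tension(kappa, sigma_D_minus1, sigma_B):
    # permissible tensile fatigue stress zul sigma_Dz(kappa) per the Table 18 equations
    if kappa < 0:                                   # alternating range
        return 5.0 / (3.0 - 2.0 * kappa) * sigma_D_minus1
    sigma_Dz0 = 5.0 / 3.0 * sigma_D_minus1          # value at kappa = 0 from the alternating formula
    return sigma_Dz0 / (1.0 - (1.0 - sigma_Dz0 / (0.75 * sigma_B)) * kappa)

def permissible_compression(kappa, sigma_D_minus1, sigma_B):
    # permissible compressive fatigue stress zul sigma_Dd(kappa)
    if kappa < 0:
        return 2.0 / (1.0 - kappa) * sigma_D_minus1
    sigma_Dd0 = 2.0 * sigma_D_minus1                # value at kappa = 0 from the alternating formula
    return sigma_Dd0 / (1.0 - (1.0 - sigma_Dd0 / (0.90 * sigma_B)) * kappa)

# Example: St 37 (assumed sigma_B ~ 360 MPa), notch group K2, loading group B5 -> zul sigma_D(-1) = 89.1
print(round(permissible_tension(-1.0, 89.1, 360.0), 1))   # 89.1, pure alternating stress
print(round(permissible_tension(0.0, 89.1, 360.0), 1))    # 148.5, pulsating stress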
For a combination check DIN 15018 uses the formula below: \((\frac{\sigma_{x}}{zul\sigma_{xD}})^2+(\frac{\sigma_{y}}{zul\sigma_{yD}})^2-(\frac{\sigma_{x}\cdot\sigma_{y}}{|zul\sigma_{xD}|\cdot|zul\sigma_{yD}|})+(\frac{\tau}{zul\tau_{D}})^2 \le 1.1\) where \(\sigma_x\), \(\sigma_y\) are the calculated normal stresses in the x and y directions; \(zul\,\sigma_{xD}\), \(zul\,\sigma_{yD}\) are the permissible normal stresses corresponding to \(\sigma_x\) and \(\sigma_y\), respectively; \(|zul\,\sigma_{xD}|\), \(|zul\,\sigma_{yD}|\) are their absolute values; \(\tau\) is the calculated shear stress; and \(zul\,\tau_D\) is the permissible shear stress corresponding to the calculated shear stress \(\tau\). Also, the following material constants are used: \(\sigma_B\) is the ultimate stress of the material and \(\sigma_S\) is the yield stress of the material. With this method the allowable fatigue stress \(\sigma_{D(\kappa)}\) for each stress direction depends on: the material type (yield and ultimate stress levels); the number of load cycles and the load spectrum, combined in an element group (B1-B6); the weld or notch group (W0-W2 for unwelded material and K0-K4 for welded designs); the κ-factor (the ratio between the maximum and the minimum stress); and the sign of the stress (tension or compression).
Implementation in SDC Verifier
The allowable fatigue stress according to the Smith correction method depends, according to crane codes such as DIN 15018, on the following items: the number of load cycles, combined in an element group (B1-B6); the material type (yield and ultimate stress levels); and the weld or notch group (W0-W2 for unwelded material and K0-K4 for welded designs). SDC Verifier defines this for the DIN 15018 with the start window below: For the DIN 15018 standard, classifications have to be set. Manually select elements which belong to welds. With the selection tools and/or the help of the Weld Finder tool it is possible to quickly set all classifications.
Number of load cycles → Element group
The element group or loading group depends on the load spectrum (S0-S3) and the class of utilization (N1-N4). The load spectrum and number of load cycles are shown in the next tables: Class of Utilization
Material properties
The yield and tensile stress of the material need to be filled in for each material, for example in the window below (in the SDC Verifier relevant material properties):
Notch group
Classification tools
The DIN 15018 uses 2 material tables: St 37 (which is equivalent to Fe 360 or S235) and St 52-3 (which is similar to Fe 510 or S355). Each element can be assigned to 1 of the 2 material tables with the window below: For the fatigue resistance the DIN 15018 uses weld or notch groups, where W0-W2 are classes for unwelded material and K0-K4 are for welded designs. Table 17 of the DIN 15018 shows the influence of the material type, the crane group and the notch group on the maximum allowable stress amplitude for pure dynamic loads (minimum stress = −maximum stress, i.e. stress ratio κ = −1). Basic values of the allowable stresses for fatigue, zul σ_D(−1), in N/mm², for κ = −1, in members, for the verification of service strength:
Steel grade St 37. Notch groups: W0 W1 W2 K0 K1 K2 K3 K4. Permissible stresses zul σ_D(−1) for κ = −1, by loading group:
Loading group 1: 180 180 180 180 180 180 180 (152.7)
Loading group 2: (168) (180) 108
Loading group 3: (161.4) 141.3 (178.2) 127.3 76.1
Loading group 4: (169.7) 135.8 118.8 (168) (150) 126 90 54
Loading group 5: 142.7 114.2 99.9 118.8 106.1 89.1 63.6 38.2
Loading group 6: 120 96 84 84 75 63 45 27
Steel grade St 52-3. Notch groups: W0 W1 W2 K0 K1 K2 K3 K4. Permissible stresses zul σ_D(−1) for κ = −1, by loading group:
Loading group 1: 270 270 (247.2) 270 270 270 (254) (152.7)
Loading group 2: (249) 199.2 (252) 180 108
Loading group 3: (152.2) 200.6 160.5 (237.6) (212.1) 178.2 127.3 76.1
Loading group 4: 203.2 161.7 129.3 168 150 126 90 54
Loading group 5: 163.8 130.3 104.2 118.8 106.1 89.1 63.6 38.2
Loading group 6: 132 105 84 84 75 63 45 27
The step ratio between the stresses of two consecutive loading groups is 1.1892 for St 37 and 1.2409 for St 52-3 for notch cases W0 to W2; for notch cases K0 to K4 the step ratio is 1.4142 for both St 37 and St 52-3. According to this table the maximum alternating stress amplitude for, for example, an element with material type St 37, weld type K2 and element group B5 is 89.1 MPa. For non-pure alternating stress, the relations from table 18 of the DIN 15018 are used.
Notch group classification in crane codes such as the DIN 15018
The notch factor library tries to combine information on the weld quality (non-welded sections are collected in the special notch groups W0, W1, and W2) and the stress concentration factor. This same weld atlas is also used for other standards such as the F.E.M. 1.001, NEN 2018/NEN 2019 and the DASt-Richtlinie.
Influence of weld quality
The influence of the weld quality can best be seen in the perpendicularly stressed weld types. For a non-welded plate notch group W0 is used, and for welds the weld qualities K0 to K4 are applicable. The weld is classified by a 3-digit number (see the red encircled number below): the first digit represents the weld group, 0-4 for K0 to K4, the second the stress orientation, and the third a geometric type number. A good quality weld has a low notch group (K0), and the notch group number increases with decreasing fatigue resistance.
No weld, no holes: W0. No. / Description / Symbol: W01 Part without a hole and joint, with a normal state of the surface, without notch behavior.
Slight notch behavior, group K0. No. / Description / Symbol: 011 Parts, joined by a butt weld of special quality, perpendicular to the direction of force.
Moderate notch behavior, group K1. No. / Description / Symbol: 111 Parts, joined by a butt weld of ordinary quality, perpendicular to the direction of force.
Medium notch behavior, group K2. No. / Description / Symbol: 211 Parts, joined by a butt weld of special quality, perpendicular to the direction of force.
Great notch behavior, group K3. No. / Description / Symbol: 311 Parts, joined by a butt weld with a backing strap, without sealing run and perpendicular to the direction of force. Backing strap fixed by tack welding. And a different connection type, 351: Double bevel weld of ordinary quality, perpendicular to the direction of force, between crossing parts.
Very great notch behavior, group K4. No. / Description / Symbol: 412 Parts of different thickness, joined by a butt weld of ordinary quality, perpendicular to the direction of force. Asymmetrical joint without slope. And a different connection type, 451: Fillet welds of normal quality or single bevel weld (including fillet weld) with backing, perpendicular to the direction of force, between crossing parts.
Influence of stress concentrations
The influence of stress concentrations can be seen in the following perpendicularly loaded welds in the plate-to-plate connection types. If a detailed model is used, the influence of stress concentration factors that are already present in the model should not be counted again.
Slight notch behavior, group K0. No. / Description / Symbol: 013 Gusset, joined by butt welds of special quality, perpendicular to the direction of force.
Moderate notch behavior, group K1. No. / Description / Symbol: 113 Gusset, joined by butt welds of ordinary quality, perpendicular to the direction of force.
Medium notch behavior, group K2. No. / Description / Symbol: 213 Butt weld of special quality and continuous part, both perpendicular to the direction of force, at a crossing of flanges with welded-in corner plates. The ends of the welds are ground to prevent them from notch behavior.
Great notch behavior, group K3. No. / Description / Symbol: 313 Butt weld of ordinary quality and continuous part, both perpendicular to the direction of force, at a crossing of flanges with welded-in corner plates. The ends of the welds have been ground to prevent them from notch behavior.
Very great notch behavior, group K4. No. / Description / Symbol: 413 Butt weld of ordinary quality, perpendicular to the direction of force, at a crossing of flanges without corner plates.
Determination of a notch case or weld type
The notch case (also called weld type) can be set to K0-K4 for the different welding types, or W0-W2 for elements without welds. The weld type depends on the shape, structural design, hole pattern, and the type and quality of the welds. With SDC Verifier the specification of the notch case throughout the model is very straightforward. With the weld finder, all welds in a FEM model are found automatically and the element x-orientation can be set parallel to the weld direction (the y-direction is then always perpendicular to the weld). With the selection tools, it becomes very easy to set the different notch classes for the locations without weld, with weld (K2 perpendicular and K1 parallel to the weld), and K2 in the X direction for the intersections.
Fatigue check results
When all classifications are defined and loads are combined into load groups, the fatigue check can be performed. The fatigue utilization factor (stress / fatigue allowable stress) can be calculated for each direction (X, Y and Z are the normal stresses and XY, YZ and ZX the shear stresses). The equivalent direction shows the combined result according to the combination rule given in the formula earlier. As in the example below, the result of the combination rule is not always the highest factor (\(\sqrt{4.09}/1.1 = 1.84\), which is lower than the maximum fatigue check in the x direction, 1.93). The maximum result of the 7 checks is shown in the Overall row. In the overall row, the maximum of all single checks and of the square root of the equivalent check divided by 1.1 is calculated. In this way, if the overall value is below 1, all 7 checks are satisfied. Therefore the best way to get an immediate answer on all fatigue checks at once is by plotting the overall fatigue result. The calculation procedure is repeated for all elements and each point of interest. By plotting the overall fatigue check (all other directions can also be plotted, of course) the result of the complete check is shown:
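A compact sketch of the combination rule and the overall check described above (plain Python; the function names are mine, the 4.09 and 1.93 values come from the example in the text, and the stresses in the first call are placeholder numbers):

def combined_utilization(sx, sy, tau, zul_sx, zul_sy, zul_tau):
    # left-hand side of the DIN 15018 combination formula; must stay <= 1.1
    return ((sx / zul_sx) ** 2 + (sy / zul_sy) ** 2
            - (sx * sy) / (abs(zul_sx) * abs(zul_sy))
            + (tau / zul_tau) ** 2)

def overall_check(direction_factors, combined_value):
    # Overall row: max of the single-direction factors and sqrt(combined)/1.1
    return max(max(direction_factors), (combined_value ** 0.5) / 1.1)

print(round(combined_utilization(120, 80, 30, 150, 140, 90), 2))   # placeholder stresses, ~0.62
print(round((4.09 ** 0.5) / 1.1, 2))                                # 1.84, as in the text
print(round(overall_check([1.93, 1.2, 0.4], 4.09), 2))              # 1.93 governs the overall row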
Hello there! My first series of posts will be about math; or, to be clearer, about the dot product. Since I've started developing games, every once in a while I would need to solve a spatial problem. To my unhappiness, the solution usually involved math. Like many other programmers in the world, I'm not really fond of calculations using angles, trigonometry, vectors, matrices and so on; however, this is knowledge that you will need more and more whenever you try to make something cool. Here, I will assume that you already understand the basics of vectors. Math vectors. Yes, we will dive into boring math a little bit. So, where is a good starting point? The dot product. The dot product is composed of simple calculations using two vectors of the same dimension. Let's say that we have two vectors, A = [a0, a1, ... , an] and B = [b0, b1, ... , bn]. The dot product is defined as: $$\begin{equation}A\cdot B = \sum^n_{i = 0} a_ib_i = a_0b_0 + a_1b_1 + ... + a_nb_n \end{equation}$$ The dot product is nothing more than the sum of the multiplications of the elements that share the same index in each vector, and thus we are happy with it. Computers are really good with sums and multiplications - good enough to do them a lot faster than even other simple calculations such as divisions. We don't want to jump into premature optimization here, but I think that this is a really useful fact to know (after all, in a few months you might be doing math on thousands of vectors that need to finish in a really short time window, or even programming your own graphics shader). But still, we don't have any clue about what the dot product is for or why we would use it. Before thinking about that, we need to know two small facts: $$\begin{equation} A \cdot B = B \cdot A \end{equation}$$ The dot product has the commutative property. This is a mathematical way to say that you don't need to go through the burden of carefully ordering your vectors to reach the correct solution. Of course the dot product has other interesting properties, but I won't be listing them in this post, as we won't use them right now. What we need to know best is the following: $$\begin{equation} A \cdot B = |A| |B| \cos \theta \end{equation}$$ Given that we have the means to calculate the length of the A and B vectors - and usually we DO - and we know the angle $\theta$ between both, we can also calculate the dot product. What is interesting in this fact is that now we have two ways of calculating the dot product of two vectors. And better yet, one of the equations has a cosine! If you look at it, assuming we know both vectors A and B, we can calculate their dot product. Since we can also calculate the length of any vector we know, this means we have just found a way to magically discover the cosine of the angle $\theta$ between them! $$\begin{equation} \cos \theta = \cfrac{A \cdot B}{|A| |B|}\end{equation}$$ Calculating this cosine is what makes the dot product so useful. The cosine. For practical programming, sometimes we won't care about the angle $\theta$ itself; for that, we could simply use a math function called atan2 that exists in your programming language of choice. One last note, though. If we normalize the vectors, their length becomes 1, so the equation is simplified: $$\begin{equation} \cos \theta = A \cdot B \end{equation}$$ By normalizing both vectors, calculating the cosine between two vectors becomes just a sum of multiplications. In this post, I just talked a bit about the math on dot products.
There is a lot more to it, but we can really get started with what we have now. Next post, I'll write about some ways of using the dot product in game development. Some of them will use the resulting cosine while others won't. Till next time!
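As a tiny follow-up, here is what those formulas look like in code (plain Python; the names are mine):

import math

def dot(a, b):
    # sum of the pairwise products of the components
    return sum(x * y for x, y in zip(a, b))

def length(a):
    return math.sqrt(dot(a, a))

def cos_between(a, b):
    # cos(theta) = (A . B) / (|A| |B|)
    return dot(a, b) / (length(a) * length(b))

forward = (1.0, 0.0)
to_enemy = (1.0, 1.0)
print(cos_between(forward, to_enemy))                                 # ~0.707
print(math.degrees(math.acos(cos_between(forward, to_enemy))))        # ~45 degrees off the forward axis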
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. These problems, puzzles and riddles come from many top-notch puzzle folks around the world — including you! There are two types: Riddler Express for those of you who want something bite-sized and Riddler Classic for those of you in the slow-puzzle movement. You can mull them over on your commute, dissect them on your lunch break and argue about them with your friends and lovers. When you’re ready, submit your answer(s) using the buttons below. I’ll reveal the solutions next week, and a correct submission (chosen at random) will earn a shoutout in this column. 1 Before we get to the new puzzles, let’s reveal the winners of last week’s. Congratulations to 👏 Adrianna Whitehouse 👏 of New York and 👏 Hart Levy 👏 of Toronto, our respective Express and Classic winners. You can find solutions to the previous Riddlers at the bottom of this post. First, for Riddler Express, a dinner party puzzle from Robin Stewart: A musician and her partner attended a dinner party with four other couples. At the party, as with most good dinner parties, both members of the couple had the opportunity to hug some of the other people at the party. There are three things we know about that evening: If an attendee hugged someone, they only hugged that person once; no one hugged him or herself; and no one hugged his or her partner. On the way home, the musician, known to be an astute and accurate observer of interesting mathematical happenings, remarked to her partner: “Did you notice that everyone at the party other than me received a unique number of hugs?” How many hugs did the musician’s partner engage in? Riddler Classic is something a little different and cool and colorful — a cartographical game from Dave Moran: Allison and Bob decide to play a map-coloring game. Each turn, Allison draws a simple closed curve on a piece of paper, and Bob must then color the interior of the “country” that curve creates with one of his many crayons. If the new country borders any pre-existing countries, Bob must color the new country with a color that is different from the ones he used for the bordering ones. For example, Allison creates Country 1 and Bob colors it green. Then Allison creates Country 2, which Bob colors purple, and so on. Allison wins the game when she forces Bob to use a sixth color. If they both play optimally, how many countries will Allison have to draw to win? Here’s the solution to last week’s Riddler Express, which asked how quickly four people could cross a very scary bridge. Person 1 takes one minute to cross, Person 2 takes two minutes, Person 5 takes five and Person 10 takes 10; the bridge can support only two people at any time; and whoever is crossing needs to carry the group’s only flashlight. If two people cross at once, they cross at the speed of the slower person. The quickest they can get across is 17 minutes. Here’s how to do it, from the puzzle’s submitter, Brent Edwards: Person 1 crosses with Person 2, which takes two minutes. We’re at two minutes total. Person 1 returns, which takes one minute. That’s three minutes total. The flashlight is handed to Persons 5 and 10, who cross. That takes 10 minutes. We’re at 13 minutes total. The flashlight is handed to Person 2, who returns in two minutes. That’s 15 minutes. Finally, Persons 1 and 2 cross, which takes two minutes. That’s 17 minutes! 
And here’s the solution to last week’s Riddler Classic, which asked if you (yes, you!) would change the results of an election. If the N voters who aren’t you each vote randomly and independently for one of two candidates, with a 50 percent chance of each, and you vote for your preferred candidate, there is about a \(\sqrt{\frac{2}{N \pi}}\) chance that your vote will be decisive. Why? Your vote will be decisive if and only if half of the N voters vote for one candidate and half of them vote for the other candidate. (For simplicity, let’s assume N is even. If N is odd, the best your vote can do is to move the election into a tie. You can tweak the formula above slightly to reflect that, but the distinction won’t make any meaningful difference when N is relatively large.) This distribution of voters, like the flipping of many coins, follows a binomial distribution, with N trials and a 0.5 probability of “success” (voting for a specific one of the candidates) in each trial. According to this distribution, the probability of a specific number, k, of successes if there is a probability p of success in any one trial is given by: $${N \choose k} p^k (1-p)^{N-k}$$ Plugging in the specifics from our voting scenario gives our answer: $${N \choose {N/2}} (1/2)^{(N/2)} (1/2)^{(N/2)} $$ Which simplifies a little to: $${N \choose {N/2}} \frac{1}{2^N}$$ We can stop there and be completely correct, but it’s still a little hard to understand what happens to our chances of turning the election as the number of voters grows simply by looking at that formula. To see that more clearly, the puzzle’s submitter, Andrew Spann, suggests using Stirling’s approximation. That delivers this close approximation of your chances: $$\sqrt{\frac{2}{N \pi}}$$ Laurent Lessard created the following graph of your chances as the number of voters grows: Elsewhere in the puzzling world: Would you get into Oxbridge? [The Guardian] Some more election puzzles, if you can stand them [Expii] Some beautiful sums [The Players’ Tribune] The building is the puzzle[The New York Times] Have a great weekend!
Paper reading. Main idea: the Hierarchical Attention Network (HAN) is designed to capture two basic insights about document structure. First, since documents have a hierarchical structure (words form sentences, sentences form a document), we likewise construct a document representation by first building representations of sentences and then aggregating those into a document representation. Second, it is observed that different words and sentences in a document are differentially informative. A document has this hierarchical structure: the document is made of sentences, and each sentence is made of words. The importance of words and sentences is highly context dependent, i.e. the same word or sentence may be differentially important in different contexts (§3.5). To include sensitivity to this fact, our model includes two levels of attention mechanisms (Bahdanau et al., 2014; Xu et al., 2015), one at the word level and one at the sentence level, that let the model pay more or less attention to individual words and sentences when constructing the representation of the document. Words and sentences are both highly context dependent: the same word or sentence can have a different importance in different contexts. Therefore, this paper uses two attention mechanisms to express the importance of context-aware words and sentences (here, the context-aware words and sentences are simply the hidden states produced by the RNN). Attention serves two benefits: not only does it often result in better performance, but it also provides insight into which words and sentences contribute to the classification decision, which can be of value in applications and analysis (Shen et al., 2014; Gao et al., 2014). Attention not only works well, it also lets us visualize which words or sentences have a large influence on classifying a document into a particular class. The novelty of this paper is that it takes the sentence level of the document hierarchy into account: when classifying a document, the first few sentences may all be filler while the last sentence contains a turn that is decisive for the classification. Previous work only considered the words in a document.
Model Architecture
GRU-based sequence encoder
Reset gate: controls how much the past state contributes to the candidate state. \[r_t=\sigma(W_rx_t+U_rh_{t-1}+b_r)\]
Candidate state: \[\tilde h_t=\tanh(W_hx_t+r_t\circ (U_hh_{t-1})+b_h)\]
Update gate: decides how much past information is kept and how much new information is added. \[z_t=\sigma(W_zx_t+U_zh_{t-1}+b_z)\]
New state: a linear interpolation between the previous state \(h_{t-1}\) and the current new state \(\tilde h_t\) computed with new sequence information. \[h_t=(1-z_t)\circ h_{t-1}+z_t\circ \tilde h_t\]
Hierarchical Attention
Word Encoder
\[x_{it}=W_ew_{it}, \quad t\in [1, T]\]
\[\overrightarrow h_{it}=\overrightarrow {GRU}(x_{it}), \quad t\in[1,T]\]
\[\overleftarrow h_{it}=\overleftarrow {GRU}(x_{it}), \quad t\in [T,1]\]
\[h_{it} = [\overrightarrow h_{it},\overleftarrow h_{it}]\]
Here $i$ denotes the $i^{th}$ sentence in the document, and $t$ denotes the $t^{th}$ word in the sentence.
Word Attention
Not all words contribute equally to the representation of the sentence meaning. Hence, we introduce an attention mechanism to extract such words that are important to the meaning of the sentence and aggregate the representation of those informative words to form a sentence vector. In the end, the attention mechanism simply assigns a weight to each context-aware word in the sentence; the key question is how this weight is determined.
\[u_{it}=\tanh(W_wh_{it}+b_w)\]
\[\alpha_{it}=\dfrac{\exp(u_{it}^Tu_w)}{\sum_t^T\exp(u_{it}^Tu_w)}\]
\[s_i=\sum_t^T\alpha_{it}h_{it}\]
Here $h_{it}$ is first passed through a fully connected layer to obtain the hidden representation $u_{it}$; then the similarity between $u_{it}$ and $u_w$ is computed and normalized with a softmax, giving for each word the probability that it is similar to $u_w$. The more similar a word is, the larger its weight, and the larger its influence on the vector representation of the whole sentence. The key question, then, is how to represent $u_w$. The context vector \(u_w\) can be seen as a high level representation of a fixed query "what is the informative word" over the words, like that used in memory networks (Sukhbaatar et al., 2015, End-to-end memory networks; Kumar et al., 2015, Ask me anything: Dynamic memory networks for natural language processing).
The word context vector \(u_w\) is randomly initialized and jointly learned during the training process.
Sentence Encoder
\[\overrightarrow h_{i}=\overrightarrow {GRU}(s_{i}), \quad i\in[1,L]\]
\[\overleftarrow h_{i}=\overleftarrow {GRU}(s_{i}), \quad i\in [L,1]\]
\[H_i=[\overrightarrow h_{i}, \overleftarrow h_{i}]\]
$H_i$ summarizes the neighboring sentences around sentence $i$ but still focuses on sentence $i$.
Sentence Attention
\[u_i=\tanh(W_sH_i+b_s)\]
\[\alpha_i=\dfrac{\exp(u_i^Tu_s)}{\sum_i^L\exp(u_i^Tu_s)}\]
\[v = \sum_i^L\alpha_iH_i\]
Similarly, \(u_s\) is a sentence-level context vector.
Document Classification
The document vector $v$ is a high-level representation of the document and can be used as features for document classification: \[p=\mathrm{softmax}(W_cv+b_c)\]
Code implementation: things to watch out for include variable scope issues if you use TensorBoard for visualization.
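A minimal numpy sketch of the word-level attention pooling described above (the shapes, sizes and names are mine; in the paper this sits on top of a bidirectional GRU encoder, which is omitted here):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, W_w, b_w, u_w):
    # H: (T, 2d) hidden states of the T words of one sentence
    U = np.tanh(H @ W_w + b_w)        # u_it = tanh(W_w h_it + b_w), shape (T, a)
    alpha = softmax(U @ u_w)          # alpha_it proportional to exp(u_it . u_w)
    return alpha @ H                  # s_i = sum_t alpha_it h_it, shape (2d,)

rng = np.random.default_rng(0)
T, two_d, a = 7, 10, 8                # 7 words, hidden size 10, attention size 8
H = rng.normal(size=(T, two_d))
W_w, b_w, u_w = rng.normal(size=(two_d, a)), np.zeros(a), rng.normal(size=a)
s = attention_pool(H, W_w, b_w, u_w)
print(s.shape)                        # (10,) sentence vector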
In the particle exchange picture, the particles are emitted in all directions, and only the ones going from P in the direction of E that hit E are intercepted and have an effect. The other particles interfere themselves out of existence, as there is no on-shell state they can enter while conserving energy, or else return to P, giving the self-energy modification to P's mass. In fact, most return to P, since the self-energy is divergent, while only a small fraction make it to E by comparison. This process is virtual, so that it is defined by temporary intermediate states which can only stick around until their phase randomizes them away. For the case of a classical force, you need to use particles that go every which way, forward and backward in time. Consider two classical objects interacting with a (free) quantum field according to this Lagrangian: $$\int |\partial\phi|^2 + \phi(x)\, s(x) $$ where the source is two delta functions $s(x) = g\delta(x-x_0) + g\delta(x-x_1)$. Each of these classical sources is steadily spitting out and absorbing particles per unit time at a steady rate g, as you can see from the added source term in the Hamiltonian: $$ g\phi(x_0) = g\int {d^3k\over 2E_k} \left(e^{ikx_0} \alpha_k + e^{-ikx_0}\alpha^\dagger_k\right) $$ The $g$ term multiplies a creation operator and an annihilation operator, so the Hamiltonian has a steady amplitude g per unit time to emit any on-shell particle, and the same amplitude to absorb one. If you have no other source, the particles that are absorbed are those emitted by the source, and you just get an (infinite) self-energy renormalization of the mass. This description is the on-shell old-fashioned perturbation theory, in which the intermediate states are $k$-states and the description is Hamiltonian in time. This is not covariant, but it shows you that particles are spat out and absorbed, and the two sources only interact to the extent that some of the particles spat out by one are absorbed by the other. The old-fashioned picture is useless for actual computations, but it reveals the particle processes most clearly, because it follows the annihilation and creation of physical particles in detail in time. The result of the interaction when there are two sources is altered by those particles produced by one and absorbed by the other later. The covariant Schwinger/Feynman form of this introduces particles that meander around in both space and time. Those that do not get absorbed by the other source make a field around the particle. The fact that you are doing things by loop order means that you are not considering the process of a particle emitted by one source and absorbed by the same source, since this is a loop. The loop-order separation of terms makes the scattering process look weird, since it looks like the emitted particle knew where to go to find the other particle. It didn't. If it came back to the first particle, we would include it at the next order of Feynman diagrams, as part of the self-energy graph.
KöMaL Problems in Mathematics, February 2017 Please read the rules of the competition. Show/hide problems of signs: Problems with sign 'K' K. 535. Is it possible to arrange the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10 in the star pentagon in the figure, so that the sum of the numbers is the same along each line of four points? (6 pont) K. 536. For the angles of the star pentagon in the figure, determine the value of \(\displaystyle \alpha + \beta + \gamma +\delta + \varepsilon\). (6 pont) K. 537. How many multiples does 9 have that consist of even digits, all different? (6 pont) K. 538. The number 2592 may be called ``printer safe'' because it does not cause any error to print the product \(\displaystyle 2^5\, 9^{2}\) as 2592, since the product is equal to this number. Find out in what way the number 13942125 may be considered ``printer safe'', that is, which digits could be interpreted as exponents without changing the value of the result. (6 pont) K. 539. In how many different ways is it possible to write the numbers 1, 2, 3, 4, 5, 6, 7 and 8 on the vertices of a cube, so that the sum of the four numbers is the same around each face of the cube? (Two solutions are considered the same if each number has the same neighbours adjacent to it.) (6 pont) K. 540. On the planet XL, there are 180 days in a year, and 10 days in a month. Every seventh year is made an XL year, which means adding an extra day to the third month, making it 11 days long. The week consists of 5 days, called AX, EX, IX, OX and UX, in this order. Pax was born on the first day of the fourth month of an XL year. Today, on an OX day, he is exactly 25 years old. His son, Felix is 2 years, 3 months and 5 days old today. When Felix is exactly half as many days old as his father, he will have his coming of age ceremony. On what day of the week will the ceremony be held? (Proposed by L. Lorántfy, Dabas) (6 pont) Problems with sign 'C' C. 1399. In a 50-metre running race, if Martin gives a 4-metre advantage to Bill, he will just catch up with him at the finish line. If Bill gives 15 metres of advantage to Henry in a 200-metre race, they will finish side by side. How many metres of advantage may Martin give to Henry in a 1000-metre race so that the two of them finish together? (Assume that each of the three runners maintains the same constant running speed throughout the races.) (5 pont) C. 1400. Prove that if the interior angles of a convex hexagon are equal then the difference of each pair of opposite sides is the same. (5 pont) C. 1401. The positive numbers \(\displaystyle x\), \(\displaystyle y\) satisfy the equation \(\displaystyle x^3+y^3=x-y\). Prove that \(\displaystyle x^2+y^2<1\). (5 pont) C. 1402. On each side of a square, either a regular triangle or a square of the same side length is drawn, on the outside. Altogether, there are two triangles and two new squares drawn. What is the radius of the smallest possible circle that completely covers the complex figure obtained in this way? (5 pont) C. 1403. An \(\displaystyle n\)-element set has half as many \(\displaystyle {(k-1)}\) element subsets as \(\displaystyle k\)-element subsets, and \(\displaystyle \frac 74\) times as many \(\displaystyle {(k + 1)}\) element subsets as \(\displaystyle k\)-element subsets. Determine the number of \(\displaystyle k\)-element subsets of the set. (Proposed by L. Koncz, Budapest) (5 pont) C. 1404. 
\(\displaystyle D\) is the foot of the perpendicular drawn to side \(\displaystyle BC\) of an isosceles triangle \(\displaystyle ABC\) from the midpoint \(\displaystyle F\) of base \(\displaystyle AB\). The midpoint of line segment \(\displaystyle FD\) is \(\displaystyle G\). Prove that \(\displaystyle AD\) is perpendicular to \(\displaystyle CG\). (5 pont) C. 1405. There are 10 balls in a bag, 6 of which are red. We play the following game: four balls are drawn at random. If \(\displaystyle k\) of them are red, we get \(\displaystyle k^2\) forints (HUF Hungarian currency). What is the expected value of the money gained if one game costs 1 forint? (5 pont) Problems with sign 'B' B. 4849. The centre of the inscribed circle of \(\displaystyle ABC\triangle\) is \(\displaystyle K\), and the centre of the excircle drawn to side \(\displaystyle AB\) is \(\displaystyle L\). Prove that the line segment \(\displaystyle KL\) and the arc \(\displaystyle AB\) of the circumscribed circle not containing \(\displaystyle C\) bisect each other. (3 pont) B. 4850. Solve the following simultaneous equations in the set of real numbers: \(\displaystyle \sqrt{x_1}+\sqrt{x_2}+\ldots+\sqrt{x_{2016}} =\sqrt{2017}\,,\) \(\displaystyle x_1+x_2+\ldots+x_{2016} =2017.\) (Proposed by J. Szoldatics, Budapest) (3 pont) B. 4851. Prove that if all three roots of the equation \(\displaystyle x^3-px^2+qx-r=0 \) are positive then the sum of the reciprocals of the roots is at most \(\displaystyle \frac{p^2}{3r}\). (Proposed by M. Kovács, Budapest) (4 pont) B. 4852. Triangle \(\displaystyle A_1B_1C_1\) is inscribed in triangle \(\displaystyle ABC\), and triangle \(\displaystyle A_2B_2C_2\) is circumscribed about it as shown in the figure, where \(\displaystyle A_1B_1\parallel A_2B_2\), \(\displaystyle B_1C_1\parallel B_2C_2\) and \(\displaystyle C_1A_1\parallel C_2A_2\). The areas of triangles \(\displaystyle ABC\), \(\displaystyle A_1B_1C_1\) and \(\displaystyle A_2B_2C_2\) are \(\displaystyle t\), \(\displaystyle t_1\) and \(\displaystyle t_2\), respectively. Prove that \(\displaystyle t^2=t_1\cdot t_2\). (Proposed by S. Róka, Nyíregyháza) (5 pont) B. 4853. Let \(\displaystyle K\) denote the centre of the incircle of a circumscribed quadrilateral \(\displaystyle ABCD\). \(\displaystyle M\) is a point on the line segment \(\displaystyle AK\), and \(\displaystyle N\) is a point on the line segment \(\displaystyle CK\), such that \(\displaystyle 2 MBN\sphericalangle= ABC\sphericalangle\). Prove that \(\displaystyle 2MDN\sphericalangle=ADC\sphericalangle\). (5 pont) B. 4854. Let \(\displaystyle a_1,a_2,\dots,a_n\) be real numbers. Consider the \(\displaystyle 2^n-1\) (nonempty) sums composed out of these numbers. How many of these may be positive? (5 pont) B. 4855. A table is filled in with digits of 0 and 1 such that there are no two identical rows, but there are two identical rows in any \(\displaystyle 4\times 2\) sub-table formed by two columns and four rows. Show that there exists a column in which one kind of digit occurs exactly once. (Proposed by Á. Lelkes) (6 pont) B. 4856. Point \(\displaystyle P\) is said to be a centre of convexity of a point set \(\displaystyle \mathcal{H}\) if \(\displaystyle \mathcal{H}\cup H_P'\) is convex, where \(\displaystyle H_P'\) is the reflection of the point set \(\displaystyle \mathcal{H}\) in the point \(\displaystyle P\). 
Show that \(\displaystyle a)\) every convex quadrilateral of the plane has three distinct non-collinear centres of convexity; \(\displaystyle b)\) a tetrahedron has no centre of convexity. (6 pont) B. 4857. Determine all pairs \(\displaystyle (n,k)\) of positive integers for which \(\displaystyle \big(2^{2^n}+1\big)\big(2^{2^k}+1\big)\) is divisible by \(\displaystyle nk\). ( Bulgarian problem) (6 pont) Problems with sign 'A' A. 689. Let \(\displaystyle f_1,f_2,\ldots\) be an infinite sequence of continuous \(\displaystyle \mathbb{R}\to\mathbb{R}\) functions such that for arbitrary positive integer \(\displaystyle k\) and arbitrary real numbers \(\displaystyle r>0\) and \(\displaystyle c\) there exists a number \(\displaystyle x\in(-r,r)\) with \(\displaystyle f_k(x)\ne cx\). Show that there exists a sequence \(\displaystyle a_1,a_2,\ldots\) of real numbers such that \(\displaystyle \sum_{n=1}^\infty a_n\) is convergent, but \(\displaystyle \sum_{n=1}^\infty f_k(a_n)\) is divergent for every positive integer \(\displaystyle k\). (5 pont) A. 690. In a convex quadrilateral \(\displaystyle ABCD\), the perpendicular drawn from \(\displaystyle A\) to line \(\displaystyle BC\) meets the lines \(\displaystyle BC\) and \(\displaystyle BD\) at \(\displaystyle P\) and \(\displaystyle U\), respectively. The perpendicular drawn from \(\displaystyle A\) to line \(\displaystyle CD\) meets the lines \(\displaystyle CD\) and \(\displaystyle BD\) at \(\displaystyle Q\) and \(\displaystyle V\), respectively. The midpoints of the segments \(\displaystyle BU\) and \(\displaystyle DV\) are \(\displaystyle S\) and \(\displaystyle R\), respectively. The lines \(\displaystyle PS\) and \(\displaystyle QR\) meet at \(\displaystyle E\). The second intersection point of the circles \(\displaystyle PQE\) and \(\displaystyle RSE\), other than \(\displaystyle E\), is \(\displaystyle M\). The points \(\displaystyle A\), \(\displaystyle B\), \(\displaystyle C\), \(\displaystyle D\), \(\displaystyle E\), \(\displaystyle M\), \(\displaystyle P\), \(\displaystyle Q\), \(\displaystyle R\), \(\displaystyle S\), \(\displaystyle U\), \(\displaystyle V\) are distinct. Show that the center of the circle \(\displaystyle BCD\), the center of the circle \(\displaystyle AUV\) and the point \(\displaystyle M\) are collinear. (5 pont) A. 691. Let \(\displaystyle c\ge3\) be an integer, and define the sequence \(\displaystyle a_1,a_2,\dots\) by the recurrence \(\displaystyle a_1=c^2-1\), \(\displaystyle a_{n+1}=a_n^3-3a_n^2+3\) \(\displaystyle (n=1,2,\ldots)\). Show that for every integer \(\displaystyle n\ge2\), the number \(\displaystyle a_n\) has a prime divisor that does not divide any of \(\displaystyle a_1,\ldots,a_{n-1}\). (5 pont) Upload your solutions above or send them to the following address: KöMaL Szerkesztőség (KöMaL feladatok), Budapest 112, Pf. 32. 1518, Hungary
Education is a difficult task, it really is. Teaching takes a few tries to get the hang of. Writing textbooks is even harder. And math is one of those technical fields in which human error is hard to avoid. So usually, when I see a mistake in a math text, it doesn’t bother me much. But some things just hurt my soul. No correspondence between the integers and rationals? Yes there is, Example 2! Yes there is! This horrifying falsehood was stated in the supplementary “Study Guide and Intervention” worksheet for the Glencoe/McGraw-Hill Algebra 2 textbook, and recently pointed out on Reddit. Or at least it was stated in some version of this worksheet. The original file can be found online at various websites, including one download link from Glencoe’s website that shows up on a Google search. There are other versions of the document that don’t contain this example, but this version was almost certainly used in some high schools, as the Reddit thread claims. Luckily, mathematicians are here to set the record straight. The Wolfram blog published a fantastic post about this error already, with several proofs of the countability of the rationals. There are also several excellent older expositions on this topic, including on the Math Less Traveled and the Division by Zero blogs. I’ll discuss two of my favorites here as well. But first, let’s talk about what is wrong with the argument in Example 2. The author is correct in stating that listing all the rationals in order would make a one-to-one and onto correspondence between the rationals and integers, and so they try to do so in a random way and failed. At that point, instead of trying a different ordering, they gave up and figured it couldn’t be done! That’s not a proof, or even logically sound (as my students at this year’s Prove it! Math Academy would certainly recognize.) If one were going to try to prove that a certain set couldn’t be organized into a list, a common tactic would be to use proof by contradiction: assume there was a way to list them, and then show that something goes wrong and you get a contradiction. Of course, this wouldn’t work either in the case of the rationals, because they can be listed. So let’s discuss a correct solution. Getting our definitions straight First, let’s state the precise meaning of a one-to-one and onto correspondence. A function $f$ from a set $A$ to a set $B$, written $f:A\to B$, is an assignment of each element of $a\in A$ to an element $f(a)\in B$. To clear up another misuse of notation in the Glencoe Algebra textbook, the set $A$ is called the domain and $B$ is called the codomain (not the range, as Glencoe would have you think – the range refers to the set of elements of $B$ that are assigned to by the function.) A function is: One-to-one, or injective, if no two elements of $A$ are assigned to the same element of $B$, i.e., if $f(x)=f(y)$ then $x=y$. Onto, or surjective, if every element of $B$ is mapped to, i.e., for all $b\in B$, there exists $a\in A$ such that $f(a)=b$. For instance, if $\mathbb{Z}$ denotes the set of integers, the function $f:\mathbb{Z}\to \mathbb{Z}$ defined by $f(x)=2x$ is injective, since if $2x=2y$ then $x=y$. However, it is not surjective, since an odd number like $3$ is not equal to $2x$ for any integer $x$. A function which is both injective and surjective is said to be bijective, and is called a bijection. This is just a shorter way of saying “one-to-one and onto correspondence,” which is wordy and cumbersome. 
So, we want to find a bijection $f:\mathbb{Z}\to \mathbb{Q}$, where $\mathbb{Z}$ denotes the integers and $\mathbb{Q}$ the rationals. Notice that we can list all the integers in order: $$0,1,-1,2,-2,3,-3,\ldots$$ and so if we list all the rationals in order, $r_0,r_1,r_2,\ldots$, we can define the function $f$ accordingly by $f(0)=r_0$, $f(1)=r_1$, $f(-1)=r_2$, and so on. The function will be bijective if and only if every rational number appears in the list exactly once. Next, let’s be precise about the rationals. Recall that the rational numbers are those numbers which can be written as fractions $a/b$ where $a$ and $b$ are integers with $b\neq 0$. In order to assign every rational number a unique representation, let us restrict to the case where $b>0$ and $a$ is any integer such that $\mathrm{gcd}(a,b)=1$. This condition makes $a/b$ into a reduced fraction. So the number $2/-4$ should be written as $-1/2$ in this convention. It follows that we can think of the set of rational numbers as the set $$\mathbb{Q}=\{(a,b) | b>0\text{ and }\mathrm{gcd}(a,b)=1\text{ and }a,b\in \mathbb{Z}\}.$$
Listing the rationals, naively
One way to construct this list is to think of the rationals as ordered pairs $(a,b)$ of integers with $b>0$ and $\mathrm{gcd}(a,b)=1$ as described above. There is an easy way of ordering all pairs of integers – plot them on a coordinate plane, and use a spiral! Now, to list the rationals, follow the spiral from $(0,0)$ outwards. Each time we reach an ordered pair of integers, say $(a,b)$, write it down if $b>0$ and $\mathrm{gcd}(a,b)=1$, and otherwise skip it and move on. (These are shown as green dots in the figure accompanying the original post.) This process guarantees that we list all the rationals exactly once.
More elegant methods
There are many other elegant enumerations of the rationals, and one particularly nice one is due to Calkin and Wilf. They construct a binary tree in which each rational number appears exactly once. The tree is constructed as follows: start with $1/1$ at the top, and for each node $a/b$ in the tree, assign it the left and right children $a/(a+b)$ and $(a+b)/b$ respectively. This tree turns out to contain every positive rational number exactly once. It also has an incredible property: if we read the entries of each row of the tree successively from left to right, the denominator of each entry will match the numerator of the next entry, giving us a sequence of numerators/denominators: $$1,1,2,1,3,2,3,1,4,3,5,2,5,3,4,1,\ldots$$ such that the consecutive quotients give us a list of the positive rationals. This makes me wonder whether there is a way of listing the integers such that (a) every integer occurs exactly once in the sequence, and (b) the consecutive quotients give all rationals exactly once (allowing the consecutive integers to have common factors greater than one). Thoughts?
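Since the Calkin-Wilf construction above is completely explicit, it is easy to turn into a short program. Below is a small Python sketch of my own (not code from the original post) that lists the first few positive rationals by traversing the tree breadth-first; the function name and output format are arbitrary.

from fractions import Fraction
from collections import deque

def calkin_wilf(count):
    """First `count` positive rationals, in Calkin-Wilf tree order:
    the node a/b has children a/(a+b) and (a+b)/b, starting from 1/1."""
    out = []
    queue = deque([Fraction(1, 1)])
    while len(out) < count:
        q = queue.popleft()
        out.append(q)
        a, b = q.numerator, q.denominator
        queue.append(Fraction(a, a + b))
        queue.append(Fraction(a + b, b))
    return out

print(calkin_wilf(8))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3),
#  Fraction(3, 2), Fraction(2, 3), Fraction(3, 1), Fraction(1, 4)]

Reading off numerators and denominators of this output reproduces the sequence 1, 1, 2, 1, 3, 2, 3, 1, 4, ... quoted above.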
Let $f(z) = \sum_{n=1}^\infty a(n) e^{2i \pi nz}$ be an eigenform of $S_k(\Gamma_0(N))$. Since the Hecke operator acts by $T_p f = a_p f$ the Riemann hypothesis for $f$'s L-function is $$ \!\!\! \!\!\! \!\!\! \!\!\!(\sum_{n=1}^\infty a_n n^{-s})^{-1} =\prod_p (1-a(p) p^{-s}+p^{k-1-2s}) =\frac{\displaystyle[\prod_p (1-T_p p^{-s}+p^{k-1-2s})] f(z)}{f(z)} \tag{1}$$ converges and is analytic for $\Re(s) > k/2$. Then, the Riemann hypothesis for all the eigenforms of $S_k(\Gamma_0(N))$ is that for any $f \in S_k(\Gamma_0(N))$, $$[\prod_p (1-T_p p^{-s}+p^{k-1-2s})] f(z) \quad \text{ is analytic for } \Re(s) > k/2\tag{2} $$ Questions : This suggests to define a Riemann hypothesis for the Hecke operators themselves, and I would like to know if there is a well-known way to think about that. The RH for the Hecke operators could be the convergence in operator norm of $\lim_{x \to \infty} \prod_{p \le x} (1-T_p p^{-s}+p^{k-1-2s}),$ for $\Re(s) > k/2$, with the norm coming from the Petersson inner product. But there are other possible norms, for example $\langle f,g\rangle = \int_0^\infty f(ix) \overline{g(ix)} x^{2 \sigma -1}dx$. Indeed, the choice of the norm is a major problem : when defining $\displaystyle T_p f(z) = p^{k-1}\sum_{ad =p, b \bmod p} d^{-k} f(\frac{az+b}{d}) \tag{3}$ the statement in $(1)$ works the same way whenever $f(z) = \sum_{n=1}^\infty a_n e^{2i \pi n z}$ and $\sum_{n=1}^\infty a_n n^{-s} = \prod_p (1+a_p p^{-s}+ p^{k-1-2s})^{-1}$, no matter that $f$ is modular or not, so that in general $f$ doesn't have a Riemann hypothesis (nor the $T_p$ operator acting on it). So we really need a norm and a statement specific to modular forms, for example about $\prod_p (1-T_pp^{-s}+\langle p \rangle p^{k-1-2s})$ (which works for $S_k(\Gamma_1(N))$ too). For short, RH is believed true for Dirichlet series with Euler product and functional equation. $\prod_p (1-T_p p^{-s}+p^{k-1-2s})$ is just an Euler product. So hopefully, we only need to add a reference to the modularity (implying the functional equation) to make a viable RH statement. Assuming we solved that part, the Hecke operators depend on $N$ only for finitely many $p$, so can we expect the statement to imply the Riemann hypothesis for $\displaystyle\bigcup_N S_k(\Gamma_0(N))$ ? In that case, what's about the dependence on $k$ ? Also looking at the weight-$\frac{1}{2}$ forms $\sum_{n \ge 1}^\infty \chi(n) e^{2i \pi n^2 z}$ could help.
18th SSC CGL Tier II level Question Set, topic Trigonometry 4, questions with answers This is the 18th question set for the 10 practice problem exercise for SSC CGL Tier II exam and 4th on topic Trigonometry. Some of these 10 questions may seem to be a bit more difficult. We repeat the method of taking the test. It is important to follow result bearing methods even in practice test environment. Method of taking the test for getting the best results from the test: Before start,you may refer to our tutorial or any short but good material to refresh your concepts if you so require. Basic and rich Trigonometric concepts and applications Answer the questionsin an undisturbed environment with no interruption, full concentration and alarm set at 12 minutes. When the time limit of 12 minutes is over,mark up to which you have answered, but go on to complete the set. At the end,refer to the answers given at the end to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time. Identify and analyzethe problems that you couldn't doto learn how to solve those problems. Identify and analyzethe problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of your shortcoming in topic knowledgeimprove it by referring to only that part of conceptfrom the best source you can get hold of. You might google it. If it is because of your method of answering,analyze and improve those aspects specifically. Identify and analyzethe problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques. Give a gapbefore you take a 10 problem practice test again. Important:both and practice tests must be timed, analyzed, improving actions taken and then repeated. With intelligent method, it is possible to reach highest excellence level in performance. mock tests Resources that should be useful for you You may refer to: or 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests to access all the valuable student resources that we have created specifically for SSC CGL, but section on SSC CGL generally for any hard MCQ test. If you like,you may to get latest content from this place. subscribe 18th question set- 10 problems for SSC CGL Tier II exam: 4th on Trigonometry - testing time 12 mins Problem 1. The value of $\displaystyle\frac{4\cos(90^0- \theta)\sin^3(90^0+\theta)-4\sin(90^0+\theta)\cos^3(90^0-\theta)}{\cos\left(\displaystyle\frac{180^0+8\theta}{2}\right)}$ is, $0$ $-1$ $1$ $2$ Problem 2. If $\sec \theta(\cos \theta+\sin \theta)=\sqrt{2}$ then the value of $\displaystyle\frac{2\sin \theta}{\cos \theta -\sin \theta}$ is equal to, $\sqrt{2}$ $\displaystyle\frac{1}{\sqrt{2}}$ $3\sqrt{2}$ $\displaystyle\frac{3}{\sqrt{2}}$ Problem 3. The value of $\displaystyle\frac{(\sin x+\sin y)(\sin x - \sin y)}{(\cos x+\cos y)(\cos y - \cos x)}$ is, $0$ $2$ $1$ $-1$ Problem 4. The value of $\displaystyle\frac{1}{\sin^4(90^0-\theta)}+\displaystyle\frac{1}{\cos^2(90^0-\theta)-1}$ is, $\tan^4 \theta$ $\tan^2 \theta\sec^2 \theta$ $\sec^4 \theta$ $\tan^2 \theta\sin^2 \theta$ Problem 5. The value of $\sin(B-C)\cos(A-D)$ $\hspace{22mm}+\sin(A-B)\cos(C-D)$ $\hspace{22mm}+\sin(C-A)\cos(B-D)$ is, $\displaystyle\frac{3}{2}$ $0$ $-3$ $1$ Problem 6. 
The value of $\left[\tan^2(90^0-\theta)-\sin^2(90^0-\theta)\right]$ $\hspace{22mm}\times{\text{cosec}^2(90^0-\theta)\text{cot}^2(90^0-\theta)}$ is, $1$ $0$ $2$ $-1$ Problem 7. The value of $\displaystyle\frac{4}{3}\text{cot}^2\left(\displaystyle\frac{\pi}{6}\right)+3\cos^2(150^0)$ $\hspace{22mm}-4\text{cosec}^245^0+8\sin\left(\displaystyle\frac{\pi}{2}\right)$ is, $1$ $\displaystyle\frac{13}{2}$ $-\displaystyle\frac{7}{2}$ $\displaystyle\frac{25}{4}$ Problem 8. The value of $\displaystyle\frac{\tan 5\theta+\tan 3\theta}{4\cos 4\theta(\tan 5\theta-\tan 3\theta)}$ is, $\tan 4\theta$ $\sin 2\theta$ $\cos 2\theta$ $\text{cot }2\theta$ Problem 9. The value of $\displaystyle\frac{\sin(y-z)+\sin(y+z)+2\sin y}{\sin(x-z)+\sin(x-z)+2\sin x}$ is, $\cos x\sin y$ $\sin x\tan y$ $\sin z$ $\displaystyle\frac{\sin y}{\sin x}$ Problem 10. The value of $\displaystyle\frac{\sin(90^0-10\theta)-\cos(\pi-6\theta)}{\cos\left(\displaystyle\frac{\pi}{2}-10\theta\right)-\sin(\pi-6\theta)}$ is, $\cos \theta$ $\tan 2\theta$ $\text{cot }3\theta$ $\text{cot }2\theta$ The conceptual solutions to the questions are available at , and the answers are given below. SSC CGL Tier II Solution Set 18 Trigonometry 4 Answers to the questions Problem 1. Answer: Option b: $-1$. Problem 2. Answer: Option a: $\sqrt{2}$. Problem 3. Answer: Option c: $1$. Problem 4. Answer: Option b: $\tan^2 \theta\sec^2 \theta$. Problem 5. Answer: Option b: $0$. Problem 6. Answer: Option a: $1$. Problem 7. Answer: Option d: $\displaystyle\frac{25}{4}$. Problem 8. Answer: Option c: $\cos 2\theta$. Problem 9. Answer: Option d: $\displaystyle\frac{\sin y}{\sin x}$. Problem 10. Answer: Option d: $\text{cot }2\theta$. Resources on Trigonometry and related topics You may refer to our useful resources on Trigonometry and other related topics especially algebra. Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving. SSC CGL Tier II level question and solution sets on Trigonometry SSC CGL Tier II level Question Set 18 Trigonometry 4
The mathematics behind L2 regularization
L2 regularization: To avoid parameters from exploding or becoming highly correlated, it is helpful to augment our cost function with a Gaussian prior: this tends to push parameter weights closer to zero, without constraining their direction, and often leads to classifiers with better generalization ability. If we maximize log-likelihood (as with the cross-entropy loss, above), then the Gaussian prior becomes a quadratic term (L2 regularization): \[J_{reg}(\theta)=\dfrac{\lambda}{2}\Big[\sum_{i,j}{W_1}_{i,j}^2+\sum_{i',j'}{W_2}_{i',j'}^2\Big]\] It can be shown that \[W_{ij} \sim N\!\left(0, \tfrac{1}{\lambda}\right).\] Understanding regularization from two perspectives: see the Zhihu discussion.
Why RNNs are prone to vanishing and exploding gradients.
Why ReLU effectively addresses the vanishing-gradient problem: it is hard to see why ReLU solves vanishing gradients so well. The gradient of ReLU is indeed 1 on its active region, but that explanation seems too simple, so the original paper is worth reading: A Simple Way to Initialize Recurrent Networks of Rectified Linear Units.
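As a concrete illustration of the penalty above (my own sketch, not part of the notes being quoted): the term $\frac{\lambda}{2}\sum W^2$ is simply added to the cost, and its contribution to the gradient of each weight matrix is $\lambda W$.

import numpy as np

def l2_penalty_and_grad(weights, lam):
    """Quadratic (Gaussian-prior) penalty (lam/2) * sum of squared entries,
    and its gradient lam * W for each weight matrix.
    Bias vectors are normally left out of the penalty."""
    penalty = 0.5 * lam * sum(float(np.sum(W ** 2)) for W in weights)
    grads = [lam * W for W in weights]
    return penalty, grads

W1, W2 = np.ones((3, 4)), np.ones((4, 2))
penalty, (gW1, gW2) = l2_penalty_and_grad([W1, W2], lam=0.1)
print(penalty)   # 0.5 * 0.1 * (12 + 8) = 1.0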
Superfluids passing an obstacle and vortex nucleation
Fanghua Lin (1) and Juncheng Wei (2)
1. Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012, USA
2. Department of Mathematics, University of British Columbia, Vancouver BC V6T 1Z2, Canada
The paper concerns the equation
$$ \epsilon^2 \Delta u+ u(1-|u|^2) = 0 \ \mbox{in} \ {\mathbb R}^d \backslash \Omega, \qquad \frac{\partial u}{\partial \nu} = 0 \ \mbox{on}\ \partial \Omega, $$
where $\Omega$ is an obstacle in ${\mathbb R}^d$, $d\geq 2$, and $\epsilon>0$ is a small parameter. Writing solutions in the form $u = \rho_\epsilon (x)\, e^{i \frac{\Phi_\epsilon}{\epsilon}}$, the limiting behaviour is $\rho_\epsilon (x) \to 1-|\nabla \Phi^\delta(x)|^2$ and $\Phi_\epsilon (x) \to \Phi^\delta (x)$, where $\Phi^\delta (x)$ solves
$$ \nabla \big( (1-|\nabla \Phi|^2)\nabla \Phi \big) = 0 \ \mbox{in} \ {\mathbb R}^d \backslash \Omega, \quad \frac{\partial \Phi}{\partial \nu} = 0 \ \mbox{on} \ \partial \Omega, \quad \nabla \Phi (x) \to \delta \vec{e}_d \ \mbox{as} \ |x| \to +\infty, $$
for $|\delta | <\delta_{*}$. Particular attention is paid to the case $d = 2$ and to the quantity $|\nabla \Phi^\delta (x)|^2$ in connection with vortex nucleation.
Mathematics Subject Classification: 35J25, 35B25, 35B40, 35Q35.
Citation: Fanghua Lin, Juncheng Wei. Superfluids passing an obstacle and vortex nucleation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 6801-6824. doi: 10.3934/dcds.2019232
Revista Matemática Iberoamericana
Volume 30, Issue 4, 2014, pp. 1265–1280
DOI: 10.4171/RMI/814
Published online: 2014-12-15
A sharp multiplier theorem for Grushin operators in arbitrary dimensions
Alessio Martini (1) and Detlef Müller (2)
(1) University of Birmingham, UK
(2) Christian-Albrechts-Universität zu Kiel, Germany
In a recent work by A. Martini and A. Sikora, sharp $L^p$ spectral multiplier theorems for the Grushin operators acting on $\mathbb R^{d_1}_{x'} \times \mathbb R^{d_2}_{x''}$ and defined by the formula $$L=-\sum_{j=1}^{d_1}\partial_{x'_j}^2 - \Big(\sum_{j=1}^{d_1}|x'_j|^2\Big) \sum_{k=1}^{d_2}\partial_{x''_k}^2$$ are obtained in the case $d_1 \geq d_2$. Here we complete the picture by proving sharp results in the case $d_1 < d_2$. Our approach exploits $L^2$ weighted estimates with "extra weights" depending essentially on the second factor of $\mathbb R^{d_1} \times \mathbb R^{d_2}$ (in contrast to the mentioned work, where the "extra weights" depend only on the first factor) and gives a new unified proof of the sharp results without restrictions on the dimensions.
Keywords: Grushin operator, spectral multiplier, Mihlin–Hörmander multiplier, Bochner–Riesz mean, singular integral operator
Martini Alessio, Müller Detlef: A sharp multiplier theorem for Grushin operators in arbitrary dimensions. Rev. Mat. Iberoam. 30 (2014), 1265–1280. doi: 10.4171/RMI/814
Definition:Expectation/Continuous
Definition
Let $X$ be a continuous random variable. Let $F = \Pr \paren {X < x}$ be the cumulative probability function of $X$. The expectation of $X$ is written $\expect X$, and is defined over the probability measure as:
$\expect X := \displaystyle \int_{x \mathop \in \Omega} x \rd F$
whenever this integral is absolutely convergent, that is, whenever:
$\displaystyle \int_{x \mathop \in \Omega} \size x \rd F < \infty$
If $X$ has a probability density function $f_X$, this can equivalently be written as:
$\displaystyle \expect X := \int_{x \mathop \in \Omega_X} x \ f_X \paren x \rd x$
Also known as
The expectation of $X$ is also called the expected value of $X$ or the mean of $X$, and (for a given continuous random variable) is often denoted $\mu$. The terminology is appropriate, as it can be seen that an expectation is an example of a normalized weighted mean.
The $\LaTeX$ code for \(\expect {X}\) is \expect {X}. When the argument is a single character, it is usual to omit the braces: \expect X
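For instance (an illustration rather than part of the definition), let $X$ be exponentially distributed with probability density function $f_X \paren x = \lambda e^{-\lambda x}$ for $x \ge 0$. Then, integrating by parts:
$\displaystyle \expect X = \int_0^\infty x \, \lambda e^{-\lambda x} \rd x = \Big[ -x e^{-\lambda x} \Big]_0^\infty + \int_0^\infty e^{-\lambda x} \rd x = \frac 1 \lambda$
and the absolute-convergence condition holds, since $\displaystyle \int_0^\infty \size x \, \lambda e^{-\lambda x} \rd x = \frac 1 \lambda < \infty$.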
* Lecture Notes * KAIST undergraduate student, Haesong Seo, wrote a fairly complete lecture notes of all lectures given at this year's topology summer school. One can download the note from the following link [download]. KAIST Advanced Institute for Science-X (KAIX) hosts its first thematic program this summer. As a part of the program, there will be a summer school on mathematics in June. This year's theme is "Introduction to the recent developments in PDE and Topology, and their intersection." Topology session is organized by me, and PDE session is organized by Prof. Soonsik Kwon. Topology session's title is "Topics in Geometric Group Theory". PDE session's title is "Dynamics of Partial Differential Equations". Now here are more information about the topology session of the summer school. One can also take this officially as a course in summer semester of 2018; MAS481(25.481): Topics in Mathematics I<Topics in Geometric Group Theory>. All talks are at the building E6-1, Room 2413. The schedule is as in the table below. Mladen Bestvina (University of Utah) Title: Introduction to Out(F_n) Abstract: The following topics will be covered. 1. Stallings folds and applications. 2. Culler-Vogtmann's Outer space, contractibility, and consequences for Out(F_n) 3. Lipschitz metric on Outer space, train track maps and growth of automorphisms. Notes and Homeworks [Link] Vogtmann's brief introduction of Outer spaces [Link] Koji Fujiwara (Kyoto University) Title: Group actions on quasi-trees and application. Abstract: A quasi-tree is a geodesic metric space that is quasi-isometric to a tree. With Bestvina-Bromberg, I introduced an axiomatic way to construct a quasi-tree and group actions on it. I explain the basic of it, then discuss some applications including some recent ones. Kenichi Ohshika (Osaka University) Title: Kleinian groups and their deformation spaces Abstract: Historically, deformation spaces of Kleinian groups appeared as generalisations of Teichmuller spaces. Thurston’s work in the 1980s gave a quite novel viewpoint coming from his study of hyperbolic 3-manifolds. In this talk, I shall describe the theory of deformations of Kleinian groups starting from classical work of Bers, Maskit and Marden, and then spend most of time explaining Thurston’s framework. If time permits, I should also like to touch upon the continuity/discontinuity of several invariants defined on deformation spaces. Thomas Koberda (University of Virginia) Title: Regularity of groups acting on the circle Abstract: There is a rich interplay between the degree of regularity of a group action on the circle and the allowable algebraic structure of the group. In this series of talks, I will outline some highlights of this theory, culminating in a construction due to Kim and myself of groups of every possible critical regularity $\alpha \in [1,\infty)$.There is a rich interplay between the degree of regularity of a group action on the circle and the allowable algebraic structure of the group. In this series of talks, I will outline some highlights of this theory, culminating in a construction due to Kim and myself of groups of every possible critical regularity $\alpha \in [1,\infty)$. If you have any question, please contact me via email hrbaik(at)kaist.ac.kr.
As explained in the book "Spinors in Hilbert Space" by Plymen and Robinson, if $V$ is a complex (separable) Hilbert space with a real structure, and $\mathrm{Cl}(V)$ the corresponding Clifford algebra, there is a unique completion $\mathrm{Cl}[V]$ of this algebra to a $C^*$-algebra: Any $*$-representation of $\mathrm{Cl}(V)$ on a Hilbert space will induce that same $C^*$-norm on it, and the corresponding closure is the Clifford $C^*$-algebra $\mathrm{Cl}[V]$. The situation is different for von-Neumann algebras. If $\pi: \mathrm{Cl}(V) \rightarrow B(\mathcal{H})$ is a $*$-representation, then the von-Neumann-closure $M_\pi := \pi(\mathrm{Cl}(V))^{\prime\prime}$ will depend drastically on $\pi$. For example, if $\pi$ is the left regular representation, then $M_\pi$ is the hyperfinite $\mathrm{II}_1$ factor, if $\pi$ is a Fock representation, then it is type $\mathrm{I}_\infty$, and there are other settings where we obtain type $\mathrm{III}$ factors. However, there is another canonical von-Neumann algebra containing $\mathrm{Cl}(V)$, namely the universal enveloping algebra, coming from the case where $\pi$ is the universal representation of $\mathrm{Cl}[V]$. Alternatively, it is the double dual of $\mathrm{Cl}[V]$. Q: What can we say about the enveloping von-Neumann-algebra of $\mathrm{Cl}[V]$? Does it happen to be a factor? Is it possibly the hyperfinite $\mathrm{II}_1$ factor?
I would like to automatically replace the align environment by an equation + aligned combination. Specifically, whenever \begin{align} occurs, it should automatically be read as \begin{equation}\begin{aligned}. Similarly, \end{align} should be replaced by \end{aligned}\end{equation}. Occurrences of \nonumber also need to be gobbled up and ignored.
\documentclass[a4paper]{article}
\usepackage{amsmath}
\begin{document}
Any align such as
\begin{align}
x & = y + z, \nonumber \\
\alpha &= \beta + \gamma,
\end{align}
should automatically be replaced by an equation + aligned combination, effectively becoming
\begin{equation}
\begin{aligned}
x & = y + z, \\
\alpha &= \beta + \gamma.
\end{aligned}
\end{equation}
\end{document}
Ideally, I would like to achieve this without having to perform a manual replace-all throughout my document. The simplest solution I see would be to create a new environment that achieves the required effects, but then I would still need to replace all existing uses of the align environment with the new environment. I've not been able to find or create an implementation of this. The most closely related answer I've been able to find is: Modify eqnarray to match amsmath align.
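One possible approach (a minimal, untested sketch rather than a complete solution) is to redefine align itself in the preamble after loading amsmath, making \nonumber and \notag no-ops inside the body. This does not handle align*, \intertext, per-row \label/\ref, or a body that starts with a literal [.
\usepackage{amsmath}
% redefine align as a one-number equation wrapping an aligned block
\renewenvironment{align}
  {\def\nonumber{}\def\notag{}% swallow \nonumber / \notag inside the body
   \begin{equation}\begin{aligned}}
  {\end{aligned}\end{equation}}
With this in the preamble, every existing \begin{align}...\end{align} in the document body should be typeset as a single numbered equation containing an aligned block, so no manual replace-all is needed.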
Revista Matemática Iberoamericana
Volume 30, Issue 4, 2014, pp. 1301–1354
DOI: 10.4171/RMI/816
Published online: 2014-12-15
Linear multifractional stable motion: fine path properties
Antoine Ayache (1) and Julien Hamonier
(1) Université Lille 1, Villeneuve d'Ascq, France
For at least a decade, there has been considerable interest in applied and theoretical issues related to multifractional random models. Nonetheless, only a few results are known in the framework of heavy-tailed stable distributions. In this framework, a paradigmatic example is the linear multifractional stable motion (LMSM) $\{Y(t)\colon t\in\mathbb R\}$. Stoev and Taqqu [30, 29] introduced LMSM by replacing the constant Hurst parameter of classical linear fractional stable motion (LFSM) by a deterministic function $H(\cdot)$ depending on the time variable $t$. The main goal of our article is to make a comprehensive study of the local and asymptotic behavior of $\{Y(t):t\in\mathbb R\}$. To this end, one needs to derive fine path properties of $\{X(u,v)\colon (u,v)\in\mathbb R \times (1/\alpha,1)\}$, the field generating the process (i.e., one has $Y(t)=X(t,H(t))$ for all $t\in\mathbb R$). This leads us to introduce random wavelet series representations of $\{X(u,v)\colon (u,v)\in\mathbb R \times (1/\alpha,1)\}$ as well as of all its pathwise partial derivatives of any order with respect to $v$. Then our strategy consists in using wavelet methods which are reminiscent of those in [2, 5]. Among other things, we solve a conjecture of Stoev and Taqqu concerning the existence for LMSM of a version with almost surely continuous paths; moreover we significantly improve Theorem 4.1 in [29], which provides some bounds for the local Hölder exponent (in other words, the uniform pointwise Hölder exponent) of LMSM. Namely, we obtain a quasi-optimal global modulus of continuity, and also an optimal local one. It is worth noticing that, even in the quite classical case of LFSM, the optimal local modulus of continuity provides a new result which was previously not known.
Keywords: Linear fractional and multifractional stable motions, wavelet series representations, moduli of continuity, Hölder regularity, laws of the iterated logarithm
Ayache Antoine, Hamonier Julien: Linear multifractional stable motion: fine path properties. Rev. Mat. Iberoam. 30 (2014), 1301–1354. doi: 10.4171/RMI/816
Dimensionality reduction is used to remove irrelevant and redundant features. When the number of features in a dataset is bigger than the number of examples, then the probability density function of the dataset becomes difficult to calculate. For example, if we model a dataset \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\) as a single Gaussian N(μ, ∑), then the probability density function is defined as: \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |Σ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {Σ}^{-1} (x-μ))\) such as \(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)} \\ ∑ = \frac{1}{m} \sum_{i=1}^m (x^{(i)} – μ)(x^{(i)} – μ)^T\). But If n >> m, then ∑ will be singular, and calculating P(x) will be impossible. Note: \((x^{(i)} – μ)(x^{(i)} – μ)^T\) is always singular, but the \(\sum_{i=1}^m\) of many singular matrices is most likely invertible when m >> n. Principal Component Analysis Given a set \(S = \{x^{(1)}=(0,1), x^{(2)}=(1,1)\}\), to reduce the dimensionality of S from 2 to 1, we need to project data on a vector that maximizes the projections. In other words, find the normalized vector \(μ = (μ_1, μ_2)\) that maximizes \( ({x^{(1)}}^T.μ)^2 + ({x^{(2)}}^T.μ)^2 = (μ_2)^2 + (μ_1 + μ_2)^2\). Using the method of Lagrange Multipliers, we can solve the maximization problem with constraint \(||u|| = μ_1^2 + μ_2^2 = 1\).\(L(μ, λ) = (μ_2)^2 + (μ_1 + μ_2)^2 – λ (μ_1^2 + μ_2^2 – 1) \) We need to find μ such as \(∇_u = 0 \) and ||u|| = 1 After derivations we will find that the solution is the vector μ = (0.52, 0.85) Generalization Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), to reduce the dimensionality of S, we need to find μ that maximizes \(arg \ \underset{u: ||u|| = 1}{max} \frac{1}{m} \sum_{i=1}^m ({x^{(i)}}^T u)^2\)\(=\frac{1}{m} \sum_{i=1}^m (u^T {x^{(i)}})({x^{(i)}}^T u)\) \(=u^T (\frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T) u\) Let’s define \( ∑ = \frac{1}{m} \sum_{i=1}^m {x^{(i)}} * {x^{(i)}}^T \) Using the method of Lagrange Multipliers, we can solve the maximization problem with constraint \(||u|| = u^Tu\) = 1.\(L(μ, λ) = u^T ∑ u – λ (u^Tu – 1) \) If we calculate the derivative with respect to u, we will find:\(∇_u = ∑ u – λ u = 0\) Therefore u that solves this maximization problem must be an eigenvector of ∑. We need to choose the eigenvector with highest eigenvalue. If we choose k eigenvectors \({u_1, u_2, …, u_k}\), then we need to transform the data by multiplying each example with each eigenvector.\(x^{(i)} := (u_1^T x^{(i)}, u_2^T x^{(i)},…, , u_k^T x^{(i)}) = U^T x^{(i)}\) Data should be normalized before running the PCA algorithm: 1-\(μ = \frac{1}{m} \sum_{i=1}^m x^{(i)}\) 2-\(x^{(i)} := x^{(i)} – μ\) 3-\(σ_j^{(i)} = \frac{1}{m} \sum_{i=1}^m {x_j^{(i)}}^2\) 4-\(x^{(i)} := \frac{x_j^{(i)}}{σ_j^{(i)}}\) To reconstruct the original data, we need to calculate \(\widehat{x}^{(i)} := U^T x^{(i)}\) Factor Analysis Factor analysis is a way to take a mass of data and shrinking it to a smaller data set with less features. Given a set \(S = \{x^{(i)}\}_{i=1}^m,\ x \in R^{n}\), and S is modeled as a single Gaussian. To reduce the dimensionality of S, we define a relationship between the variable x and a laten (hidden) variable z called factor such as \(x^{(i)} = μ + Λ z^{(i)} + ϵ^{(i)}\) and \(μ \in R^{n}\), \(z^{(i)} \in R^{d}\), \(Λ \in R^{n*d}\), \(ϵ \sim N(0, Ψ)\), Ψ is diagonal, \(z \sim N(0, I)\) and d <= n. From Λ we can find the features that are related to each factor, and then identify the features that need to be eliminated or combined in order to reduce the dimensionality of the data. 
Below the steps to estimate the parameters Ψ, μ, Λ.\(E[x] = E[μ + Λz + ϵ] = E[μ] + ΛE[z] + E[ϵ] = μ \) \(Var(x) = E[(x – μ)^2] = E[(x – μ)(x – μ)^T] = E[(Λz + ϵ)(Λz + ϵ)^T]\) \(=E[Λzz^TΛ^T + ϵz^TΛ^T + Λzϵ^T + ϵϵ^T]\) \(=ΛE[zz^T]Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + E[ϵϵ^T]\) \(=Λ.Var(z).Λ^T + E[ϵz^TΛ^T] + E[Λzϵ^T] + Var(ϵ)\) ϵ and z are independent, then the join probability of p(ϵ,z) = p(ϵ)*p(z), and \(E[ϵz]=\int_{ϵ}\int_{z} ϵ*z*p(ϵ,z) dϵ dz\)\(=\int_{ϵ}\int_{z} ϵ*z*p(ϵ)*p(z) dϵ dz\) \(=\int_{ϵ} ϵ*p(ϵ) \int_{z} z*p(z) dz dϵ\) \(=E[ϵ]E[z]\) So:\(Var(x)=ΛΛ^T + Ψ\) Therefore \(x \sim N(μ, ΛΛ^T + Ψ)\) and \(P(x) = \frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x-μ))\) \(Λ \in R^{n*d}\), if d <= m, then \(ΛΛ^T + Ψ\) is most likely invertible. To find Ψ, μ, Λ, we need to maximize the log-likelihood function.\(l(Ψ, μ, Λ) = \sum_{i=1}^m log(P(x^{(i)}; Ψ, μ, Λ))\) \(= \sum_{i=1}^m log(\frac{1}{{(2π)}^{\frac{n}{2}} |ΛΛ^T + Ψ|^\frac{1}{2}} exp(-\frac{1}{2} (x^{(i)}-μ)^T {(ΛΛ^T + Ψ)}^{-1} (x^{(i)}-μ)))\) This maximization problem cannot be solved by calculating the \(∇_Ψ l(Ψ, μ, Λ) = 0\), \(∇_μ l(Ψ, μ, Λ) = 0\), \(∇_Λ l(Ψ, μ, Λ) = 0\). However using the EM algorithm, we can solve that problem. More details can be found in this video: https://www.youtube.com/watch?v=ey2PE5xi9-A Restricted Boltzmann Machine A restricted Boltzmann machine (RBM) is a two-layer stochastic neural network where the first layer consists of observed data variables (or visible units), and the second layer consists of latent variables (or hidden units). The visible layer is fully connected to the hidden layer. Both the visible and hidden layers are restricted to have no within-layer connections. In this model, we update the parameters using the following equations: \(W := W + α * \frac{x⊗Transpose(h_0) – v_1 ⊗ Transpose(h_1)}{n} \\ b_v := b_v + α * mean(x – v_1) \\ b_h := b_h + α * mean(h_0 – h_1) \\ error = mean(square(x – v_1))\). Deep Belief Network A deep belief network is obtained by stacking several RBMs on top of each other. The hidden layer of the RBM at layer i becomes the input of the RBM at layer i+1. The first layer RBM gets as input the input of the network, and the hidden layer of the last RBM represents the output. Autoencoders An autoencoder, autoassociator or Diabolo network is a deterministic artificial neural network used for unsupervised learning of efficient codings. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. A deep Autoencoder contains multiple hidden units. Loss function For binary values, the loss function is defined as: \(loss(x,\hat{x}) = -\sum_{k=1}^{size(x)} x_k.log(\hat{x_k}) + (1-x_k).log(1 – \hat{x_k})\). For real values, the loss function is defined as: \(loss(x,\hat{x}) = ½ \sum_{k=1}^{size(x)} (x_k – \hat{x_k})^2\). Dimensionality reduction Autoencoders separate data better than PCA. Variational Autoencoder Variational autoencoder (VAE) models inherit autoencoder architecture, but make strong assumptions concerning the distribution of latent variables. In general, we suppose the distribution of the latent variable is gaussian. The training algorithm used in VAEs is similar to EM algorithm.
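The PCA recipe described earlier in this post (center, rescale, form the covariance matrix Σ, keep the top-k eigenvectors, project) is short enough to write out directly. The following is a minimal numpy sketch of my own, not code from the post; the function and variable names are arbitrary.

import numpy as np

def pca(X, k):
    """Minimal PCA following the recipe above.
    X has shape (m, n): one example per row.  Returns the projected data
    and the top-k eigenvectors (columns of U)."""
    mu = X.mean(axis=0)                      # steps 1-2: center the data
    Xc = X - mu
    sigma = np.sqrt((Xc ** 2).mean(axis=0))  # steps 3-4: rescale each feature
    Xs = Xc / sigma                          # (assumes no feature is constant)
    cov = (Xs.T @ Xs) / X.shape[0]           # Sigma = (1/m) * sum of x x^T
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues come back ascending
    U = eigvecs[:, ::-1][:, :k]              # keep the k leading eigenvectors
    return Xs @ U, U                         # each row is U^T x for one example

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=200)   # one nearly redundant feature
Z, U = pca(X, 2)
print(Z.shape)   # (200, 2)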
This is an elaboration on wl's comment, which proves that there is no strategy which guarantees success in any number of rounds, because no matter what strategy the prisoners use, there is at least one ordering which will cause no one to be eliminated in the first round. The only information a prisoner has is what the last vote was, so their only decision is whether or not to copy the previous vote. Globally, the team's strategy consists of choosing the number of "copiers" for that round, while everyone who is not a copier is a "flopper." Suppose they choose n prisoners to be copiers in the first round. If n is 100, then everyone votes the same, so no one is eliminated. If n is 99, then the same thing happens if the flopper is picked first. Otherwise, the below arrangements result in tie votes: n is even: (n/2) copiers, flopper, (n/2) copiers, flopper, 100–n–2 floppers n is odd : flopper, (n–1)/2 copiers, flopper, (n+1)/2 copiers, flopper, 100–n–3 floppers By establishing the probabilities of transitioning from $n$ to $n'$ prisoners in a round given then number of floppers, we can use Markov analysis to calculate the best-case chance of success. Except for the first flopper, each flopper reverses the vote of the previous flopper. Let's label the votes that agree with the first flopper Group A, and the opposite Group B. We can make the following statements about the vote: The first Group A vote is always the first flopper, while the remaining Group A floppers can be distributed anywhere in the remaining Group A votes. The first Group B vote can be either a flopper or copier, so the Group B floppers can be distributed anywhere in the Group B votes. The number of floppers is split equally between Group A and Group B, except when there is an odd number of floppers and there is one more flopper in Group A than in Group B. 
If $V_A$ is the number of votes garnered by Group A, $N$ is the number of prisoners voting, and $F$ is the number of floppers, we can establish the following probability while taking care of proper bounds and the edge case of zero floppers: $$\Pr(V_A{=}v {\mid} N{=}n, F{=}f) = \begin{cases}\cfrac {\dbinom{v{-}1}{\left\lceil{\frac{f}{2}}\right\rceil{-}1} {\dbinom{n{-}v}{\left\lfloor{\frac{f}{2}}\right\rfloor}}}{\dbinom{n}{\,f}} , & \text{if $0 \lt f \le n$, $\left\lceil{\frac{f}{2}}\right\rceil \le v \le n{-}\left\lfloor{\frac{f}{2}}\right\rfloor$ }\\{1} , & \text{if $f=0$, $v=0$} \\0, & \text{otherwise}\end{cases}$$ Using the preceding, the probability of transitioning from $n$ prisoners to $n'$ in a round, given $f$ floppers, is: $$\Pr(n_{t+1}{=}n' {\mid} n_t{=}n, f_t{=}f)=\begin{cases}\sum_{v \in \{n',\,n{-}n'\}} \Pr(V_A{=}v {\mid} N{=}n, F{=}f), & \text {if $\frac{n}{2} \lt n' \lt n$} \\[1ex]\sum_{v \in \{0,\,n\}} \Pr(V_A{=}v {\mid} N{=}n, F{=}f), & \text {if $n'{=}n$, $n$ is odd} \\[1ex]\sum_{v \in \{0,\,n/2,\,n\}} \Pr(V_A{=}v {\mid} N{=}n, F{=}f), & \text {if $n'{=}n$, $n$ is even} \\[1ex]0, & \text{otherwise}\end{cases}$$ Implementation of the preceding transition probability function in Python (p_trans):

def memoize(obj):
    cache = obj.cache = {}
    def memoizer(*args, **kwargs):
        if args not in cache:
            cache[args] = obj(*args, **kwargs)
        return cache[args]
    return memoizer

@memoize
def comb(n, k):
    """Combinations of n-choose-k"""
    from math import factorial
    return factorial(n) // factorial(k) // factorial(n - k)   # exact integer arithmetic

@memoize
def p_voteA(v, n, f):
    """Pr(V_A=v | N=n, F=f)"""
    from math import ceil, floor
    if f != 0 and ceil(f / 2.0) <= v <= n - floor(f / 2.0):
        combinations = comb(v - 1, ceil(f / 2.0) - 1) * comb(n - v, floor(f / 2.0))
        return float(combinations) / comb(n, f)
    if f == 0 and v == 0:
        return 1.0
    return 0.0

@memoize
def p_trans(n_p, n, f):
    """Pr(n_{t+1}=n_p | n_t=n, f_t=f)"""
    if n / 2.0 < n_p < n:
        vs = [n_p, n - n_p]
    elif n_p == n and n % 2 == 1:
        vs = [0, n]
    elif n_p == n and n % 2 == 0:
        vs = [0, n // 2, n]   # integer division so the vote count stays an integer
    else:
        vs = []
    return sum(p_voteA(v, n, f) for v in vs)

Having a transition probability function allows us to use Markov analysis. For instance we can use value iteration to calculate the best-case winning percentage based on selecting the optimal number of floppers in each state:

@memoize
def win_pct_max(n, t):
    if n <= 2 and t <= 10:
        return 1.0
    if t >= 10:
        return 0.0
    pcts = []
    for f in range(n + 1):
        pct = 0.0
        for n_p in range(n + 1):
            p = p_trans(n_p, n, f)
            if p != 0:
                pct += p * win_pct_max(n_p, t + 1)
        pcts.append(pct)
    return max(pcts)

print(win_pct_max(100, 0))

The result is 0.9623222013994797. This is the best we can do by optimizing floppers each round. Given that there is no perfect strategy, we turn our attention to finding the best strategy. We assign each number of prisoners that are left to vote a score that represents the average number of rounds needed to win. When there are only 1 or 2 prisoners left, the game is won: $s(1)=s(2)=0$. With 3 prisoners left, we can always get to 2 prisoners left by choosing 3 floppers, therefore $s(3)=s(2)+1$. Below are the scores and best strategy for a few numbers.
$\begin{array}{r|r|l} \text{prisoners} & \text{score} & \text{voting strategy} \\ \hline 1 & 0 \\ \hline 2 & 0 \\ \hline 3 & 1 & \text{3 floppers} \\ \hline 4 & 2.5 & \text{2 floppers} \\ \hline 5 & 2 & \text{5 floppers} \\ \hline 6 & 3.5 & \text{2 floppers} \\ \hline 7 & 3.3 & \text{4 floppers} \\ \hline 8 & 3.75 & \text{6 floppers} \\ \hline 9 & 3 & \text{9 floppers} \\ \hline 10 & 4.5125 & \text{2 floppers} \\ \hline 11 & \approx 4.4317 & \text{7 floppers} (\frac{2659}{600}) \\ \hline 12 & \approx 4.6865 & \text{6 floppers} (\frac{3393}{724}) \\ \hline 13 & 4.3 & \text{13 floppers} \\ \hline 14 & \approx 4.8960 & \text{8 floppers} (\frac{64431}{13160}) \\ \hline 15 & \approx 4.5192 & \text{12 floppers} (\frac{235}{52}) \\ \hline 16 & 4.875 & \text{14 floppers} \\ \hline 17 & 4 & \text{17 floppers} \\ \hline 18 & \approx 5.5901 & \text{2 floppers} (\frac{41544062881}{7431715200}) \\ \hline 19 & \approx 5.4887 & \text{16 floppers} (\frac{11197}{2040}) \\ \hline 20 & \approx 5.6829 & \text{4 floppers} (\frac{41544062881}{7431715200}) \\ \hline 21 & \approx 5.4317 & \text{21 floppers} (\frac{3259}{600}) \\ \hline 22 & \approx 5.7503 & \text{8 floppers} (\frac{54422747555537}{9464289307200}) \\ \hline 23 & \approx 5.5760 & \text{20 floppers} (\frac{141297}{25340}) \\ \hline 24 & \approx 5.7568 & \text{10 floppers} (\frac{21644780716829}{3759863970720}) \\ \hline 25 & 5.3 & \text{25 floppers} \\ \hline 26 & \approx 5.9097 & \text{12 floppers} (\frac{1037335451143696343}{175530959563814400}) \\ \hline 27 & \approx 5.7170 & \text{18 floppers} (\frac{46416921682337}{8119148856000}) \\ \hline 28 & \approx 5.8679 & \text{18 floppers} (\frac{11486474568601}{1957513783680}) \\ \hline 29 & \approx 5.5192 & \text{29 floppers} (\frac{287}{52}) \\ \hline 30 & \approx 6.0077 & \text{22 floppers} (\frac{64203605748067}{10686806457600}) \\ \end{array}$ These values were calculated using brute force. For $2^n+1$ all floppers will always be the best strategy. For $2^(n+1)$ all but 2 floppers is a great strategy because in over 50% of the cases we will get to $2^n+1$ which will guarantee a victory in $n$ rounds. From the table we can see that for 6 prisoners we want to try to avoid a result too close to an even split because removing only one prisoner is actually preferable. With 8 prisoners on the other hand we want to get as close as possible to the even split. The winning strategy of 6 floppers will tie votes $\frac{3}{7}$ of the time and thus have no one advance, but it is still the best strategy because if they advance, they will win in 2 more rounds. By just using these rules while otherwise defaulting odd rounds to all floppers and even rounds to all random votes, the prisoners can already win in over 95% of the cases (up from 85% for only using the two default rules). With perfect play in all rounds this would rise even further, but playing perfect in the last rounds is most important. If the prisoners are smart enough to take all this into account they have a reasonably high chance to survive yet another day in Puzzling Prison.
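One way to reproduce the score table (my reconstruction of the brute force mentioned above, reusing p_trans from the earlier snippet) is the recursion $s(n) = \min_f \big(1 + \sum_{n' < n} \Pr(n'|n,f)\, s(n')\big) / \big(1 - \Pr(n|n,f)\big)$, where the division accounts for rounds in which nobody is eliminated. Floating point is used here, whereas the exact fractions in the table presumably came from rational arithmetic.

def score_table(n_max):
    """s[n] = expected number of rounds to get from n prisoners down to at most 2,
    choosing the number of floppers optimally each round (uses p_trans from above)."""
    s = {1: 0.0, 2: 0.0}
    for n in range(3, n_max + 1):
        best = float('inf')
        for f in range(n + 1):
            p_stay = p_trans(n, n, f)
            if p_stay >= 1.0:
                continue  # this flopper count can never eliminate anyone
            rest = sum(p_trans(n_p, n, f) * s[n_p] for n_p in range(2, n))
            best = min(best, (1.0 + rest) / (1.0 - p_stay))
        s[n] = best
    return s

s = score_table(30)
print(round(s[4], 4), round(s[11], 4))   # expected to match the table: 2.5 and about 4.4317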
When I calculate the variance by hand I get something different from RStudio. The guy in the video, however, calculated it the same way I did, which is apparently wrong. Why is that?
My calculations: Observations $=1,2,3,4,5,6,7,8,9$
Calculation by hand: $$E[X] = \frac{1}{n}*\sum^n_1 x_i = \frac{45}{9} = 5 \\\text{or}\\ E[X] = \sum x_i *\frac{1}{9} = 5 \\ Var(X) = E[(X-\mu)^2]=\sum (x_i - \mu)^2* \frac{1}{9} = \frac{20}{3} \\\text{or}\\ Var(X) = \left(\frac{1}{n} * \sum x_i^2\right)-\mu^2 = \frac{20}{3}$$
However, when I use the following code in R I get different results.
a <- c(1:9)
mean(a)  # = 5
var(a)   # = 7.5
Questions: What is happening here? Why are the results different? Are the formulas I used for the calculation by hand correct?
Functiones et Approximatio Commentarii Mathematici Funct. Approx. Comment. Math. Volume 44, Number 2 (2011), 307-315. On asymptotics of entropy of a class of analytic functions Abstract Let $(K,D)$ be a compact subset of an open set $D$ on a Stein manifold $\Omega$ of dimension $n$, $H^\infty(D)$ the Banach space of all bounded and analytic in $D$ functions endowed with the uniform norm, and $A_{K}^{D}$ be a compact subset in the space of continuous functions $C(K)$ consisted of all restrictions of functions from the unit ball $\mathbb{B}_{H^\infty(D)}$. In 1950s Kolmogorov raised the problem of a strict asymptotics ([K1,K2,KT]) of an entropy of this class of analytic functions: $\mathcal{H}_{\varepsilon}(A_{K}^{D})\sim\tau(\ln\frac{1}{\varepsilon})^{n+1},\varepsilon\rightarrow 0,$ with a constant $\tau $. The main result of this paper, which generalizes and strengthens the Levin's and Tikhomirov's result in [LT], shows that this asymptotics is equivalent to the asymptotics for the widths (Kolmogorov diameters): $\ln d_{k}(A_{K}^{D})\sim -\sigma k^{1/n}, k\rightarrow \infty $, with the constant $\sigma =(\frac{2}{\tau( n+1)})^{1/n}$. This result makes it possible to get a positive solution of the above entropy problem by applying recent results [Z2] on the asymptotics for the widths $d_{k}(A_{K}^{D})$. Article information Source Funct. Approx. Comment. Math., Volume 44, Number 2 (2011), 307-315. Dates First available in Project Euclid: 22 June 2011 Permanent link to this document https://projecteuclid.org/euclid.facm/1308749134 Digital Object Identifier doi:10.7169/facm/1308749134 Mathematical Reviews number (MathSciNet) MR2841189 Zentralblatt MATH identifier 1218.28010 Subjects Primary: 28D20: Entropy and other invariants 47B06: Riesz operators; eigenvalue distributions; approximation numbers, s- numbers, Kolmogorov numbers, entropy numbers, etc. of operators 322A07 Secondary: 32U35: Pluricomplex Green functions 32U20: Capacity theory and generalizations Citation Zakharyuta, Vyacheslav. On asymptotics of entropy of a class of analytic functions. Funct. Approx. Comment. Math. 44 (2011), no. 2, 307--315. doi:10.7169/facm/1308749134. https://projecteuclid.org/euclid.facm/1308749134
Questions regarding easy-to-compute, but hard-to-invert functions. Informally, a function is called one-way if it is easy to compute, but hard to invert. This is formally captured as follows: a function $f \colon \{0,1\}^* \to \{0,1\}^*$ is called one-way if it has the following properties: Easy to compute: There exists a (deterministic) polynomial-time machine $M$ such that, $\forall x \in \{0,1\}^* \quad M(x)=f(x)$. Hard to invert: For all probabilistic polynomial-time machines $A$, for all positive polynomials $p(\cdot)$, and for all sufficiently large $n\in \mathbb{N}$, the inequality $\Pr_{x\leftarrow _R \{0,1\}^n}[f(A(1^n,f(x)))=f(x)] < \tfrac{1}{p(n)}$ holds. There are several definitions which more or less resemble this one. For instance, one might allow the machine $M$ to be probabilistic, and let it fail with negligible probability. One-way permutations are a special class of one-way functions. It is proven that secure secret-key encryption exists iff one-way functions exist; secure public-key encryption can be built from trapdoor one-way permutations.
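As a concrete, heuristic illustration of the asymmetry in the definition (not a proven one-way function): modular exponentiation $x \mapsto g^x \bmod p$ is a standard conjectured example, easy to evaluate but believed hard to invert (the discrete logarithm problem) for suitable parameters. The sketch below uses toy parameters of my own choosing and makes no security claim.

# Candidate one-way function: modular exponentiation.
# Toy parameters, illustration only; real instances use much larger primes.
p, g = 104729, 2

def f(x):
    return pow(g, x, p)          # easy: fast modular exponentiation

def invert_by_search(y):
    # naive inversion by exhaustive search over exponents:
    # time exponential in the bit length of the exponent
    for x in range(p - 1):
        if pow(g, x, p) == y:
            return x

y = f(12345)
x = invert_by_search(y)
print(f(x) == y)   # True: a preimage is found, but only by brute force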
56th SSC CGL level Question Set, topic Trigonometry 5 This is the 56th question set for the 10 practice problem exercise for SSC CGL exam and 5th on topic Trigonometry. We repeat the method of taking the test. It is an important to follow Test preparation method even in practice testenvironment as metrics or performance measurement is built-in. for continuous skill-set improvement, Method of taking the test for getting the best results from the test: Before start,go through or any short but good material to refresh your concepts if you so require. Tutorial on Basic and rich concepts in Trigonometry and its applications Answer the questionsin an undisturbed environment with no interruption, full concentration and alarm set at 12 minutes. When the time limit of 12 minutes is over,mark up to which you have answered, but go on to complete the set. At the end,refer to the answers from the companion solution set to mark your score at 12 minutes. For every correct answer add 1 and for every incorrect answer deduct 0.25 (or whatever is the scoring pattern in the coming test). Write your score on top of the answer sheet with date and time. Identify and analyzethe problems that you couldn't doto learn how to solve those problems. Identify and analyzethe problems that you solved incorrectly. Identify the reasons behind the errors. If it is because of your shortcoming in topic knowledgeimprove it by referring to only that part of conceptfrom the best source you can get hold of. You might google it. If it is because of your method of answering,analyze and improve those aspects specifically. Identify and analyzethe problems that posed difficulties for you and delayed you. Analyze and learn how to solve the problems using basic concepts and relevant problem solving strategies and techniques. Give a gapbefore you take a 10 problem practice test again. Important:both and practice tests must be timed, analyzed, improving actions taken and then repeated. With intelligent method, it is possible to reach highest excellence level in performance. mock tests Resources that should be useful for you Before taking the test it is recommended that you refer to You may also refer to the related resources: or 7 steps for sure success in SSC CGL tier 1 and tier 2 competitive tests to access all the valuable student resources that we have created specifically for SSC CGL, but section on SSC CGL generally for any hard MCQ test. If you like,you may to get latest subscribe content on competitive examspublished in your mail as soon as we publish it. Now set the stopwatch alarm and start taking this test. It is not difficult. 56th question set- 10 problems for SSC CGL exam: 5th on Trigonometry - testing time 12 mins Problem 1. The maximum value of $(2\sin \theta + 3\cos\theta)$ is, $1$ $2$ $\sqrt{13}$ $\sqrt{15}$ Problem 2. Find the minimum value of $9\tan^2 \theta + 4\cot^2 \theta$, $15$ $6$ $9$ $12$ Problem 3. If $\text{cosec} \theta -\cot \theta= \displaystyle\frac{7}{2}$, the value of $\text{cosec} \theta$ is, $\displaystyle\frac{49}{28}$ $\displaystyle\frac{53}{28}$ $\displaystyle\frac{47}{28}$ $\displaystyle\frac{51}{28}$ Problem 4. If $\tan^2 \alpha=1 +2\tan^2 \beta$, where $\alpha$ and $\beta$ both are positive acute angles, find the value of $\sqrt{2}\cos \alpha - \cos \beta$. $0$ $\sqrt{2}$ $-1$ $1$ Problem 5. 
The value of $152(\sin 30^\circ + 2\cos^2 45^\circ + 3\sin 30^\circ + 4\cos^2 45^\circ + \ldots + 17\sin 30^\circ + 18\cos^2 45^\circ)$ is,
a. an irrational number b. a rational number but not an integer c. an integer but not a perfect square d. the perfect square of an integer

Problem 6. If $\tan \theta + \cot \theta=2$, then the value of $\tan^n \theta + \cot^n \theta$ ($0^\circ \lt \theta \lt 90^\circ$, and $n$ an integer) is,
a. $2$ b. $2^{n+1}$ c. $2^n$ d. $2n$

Problem 7. If $\displaystyle\frac{\sin \theta}{1+\cos \theta} + \displaystyle\frac{\sin \theta}{1-\cos \theta} = 4$, the value of $\cot \theta + \sec \theta$ is,
a. $\sqrt{3}$ b. $\displaystyle\frac{\sqrt{3}}{7}$ c. $\displaystyle\frac{\sqrt{3}}{5}$ d. $\displaystyle\frac{5}{\sqrt{3}}$

Problem 8. If $\sin \theta + \cos \theta =\sqrt{2}$ with $\angle \theta$ a positive acute angle, then the value of $\tan \theta + \sec \theta$ is,
a. $\sqrt{3}-1$ b. $\displaystyle\frac{1}{\sqrt{2}-1}$ c. $\sqrt{2}-1$ d. $\sqrt{3}+1$

Problem 9. If $p=a\sec {\theta}\cos \alpha$, $q =b\sec {\theta}\sin \alpha$, and $r =c\tan \theta$, then the value of $\displaystyle\frac{p^2}{a^2} +\displaystyle\frac{q^2}{b^2}-\displaystyle\frac{r^2}{c^2}$ is,
a. 0 b. 1 c. 4 d. 5

Problem 10. $\displaystyle\frac{\sin^2 \theta}{\cos^2 \theta}+\displaystyle\frac{\cos^2 \theta}{\sin^2 \theta}$ is equal to,
a. $\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta}$ b. $\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta} -2$ c. $\displaystyle\frac{1}{\tan^2 \theta - \cot^2 \theta}$ d. $\displaystyle\frac{\sin^2 \theta}{\cot \theta - \sec \theta}$

You will find answers and the detailed conceptual solutions to these questions in the SSC CGL level Solution Set 56 on Trigonometry 5. You may watch the video solutions of the questions in the two-part video below. Part 1: Q1 to Q5. Part 2: Q6 to Q10.

Answers to the questions

Problem 1. Answer: c: $\sqrt{13}$.
Problem 2. Answer: d: $12$.
Problem 3. Answer: b: $\displaystyle\frac{53}{28}$.
Problem 4. Answer: a: $0$.
Problem 5. Answer: d: the perfect square of an integer.
Problem 6. Answer: a: $2$.
Problem 7. Answer: d: $\displaystyle\frac{5}{\sqrt{3}}$.
Problem 8. Answer: b: $\displaystyle\frac{1}{\sqrt{2}-1}$.
Problem 9. Answer: b: 1.
Problem 10. Answer: b: $\displaystyle\frac{1}{\sin^2 {\theta}\cos^2 \theta}-2$.

Resources on Trigonometry and related topics

You may refer to our useful resources on Trigonometry and other related topics, especially algebra: Tutorials on Trigonometry; General guidelines for success in SSC CGL; Efficient problem solving in Trigonometry.

A note on usability: the Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ-type questions, and secondly, the same set of problem solving reasoning and techniques is used for any efficient Trigonometry problem solving.

SSC CGL Tier II level question and solution sets on Trigonometry. SSC CGL level question and solution sets in Trigonometry. SSC CGL level Question Set 56 on Trigonometry 5. Algebraic concepts.

If you like, you may subscribe to get the latest content on competitive exams in your mail as soon as we publish it.
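As a quick illustration of the standard identity behind Problem 1 (my own addition; the question set itself defers full worked solutions to the companion solution set):

% Bound used in Problem 1 (illustration only).
\[
  a\sin\theta + b\cos\theta = \sqrt{a^2+b^2}\,\sin(\theta+\varphi),
  \qquad \tan\varphi = \frac{b}{a},
\]
\[
  \text{so}\quad \max_{\theta}\,\bigl(2\sin\theta + 3\cos\theta\bigr)
  = \sqrt{2^2+3^2} = \sqrt{13}.
\]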
No, the RSA key size is not the size of the private key exponent. It is customarily the number of bits in the public modulus (which is known as $N$). In other words, the key size is the integer $k$ such that $2^{k-1}\le N<2^k$. In most implementations (and all implementations conforming to PKCS#1), a private exponent $d$ has size in bits at most the key size $k$, and typically is a few bits smaller. Notice that the private exponent is not uniquely defined. The smallest private exponent is $d=e^{-1}\bmod\lambda(N)$. It is always less than $\lambda(N)$, where $\lambda()$ is Carmichael's function, with $\lambda(N)=\operatorname{lcm}(p-1,q-1)$ when $N=p\cdot q$ with $p$ and $q$ distinct primes. The smallest private exponent always has size in bits strictly less than the key size $k$, and typically is a few bits smaller. Sometimes, a private exponent is computed as $d=e^{-1}\bmod\varphi(N)$, where $\varphi()$ is Euler's function, with $\varphi(N)=(p-1)\cdot(q-1)$ when $N=p\cdot q$ with $p$ and $q$ distinct primes. That particular $d$ has size in bits at most the key size $k$, often is a few bits smaller, and often is a few bits larger than the smallest private exponent. Yet other methods to define a private exponent give no maximum for $d$, and only require that $d\equiv e^{-1}\pmod{\lambda(N)}$ (which is the necessary and sufficient condition for $d$ to work), or $d\equiv e^{-1}\pmod{\varphi(N)}$.
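A small numeric sketch (my own toy example, with primes far too small for real use) comparing the key size $k$ with the bit lengths of the two common choices of private exponent; it assumes Python 3.8+ for pow(e, -1, m):

# Toy comparison of key size vs. private-exponent sizes.
from math import gcd

p, q, e = 61, 53, 17                           # toy primes and public exponent
N = p * q
k = N.bit_length()                             # the RSA "key size" in the sense above

phi = (p - 1) * (q - 1)                        # Euler's phi(N)
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael's lambda(N) = lcm(p-1, q-1)

d_phi = pow(e, -1, phi)   # d = e^-1 mod phi(N)
d_lam = pow(e, -1, lam)   # smallest private exponent, d = e^-1 mod lambda(N)

print(k, d_phi.bit_length(), d_lam.bit_length())
assert (e * d_phi) % lam == 1 and (e * d_lam) % lam == 1   # both satisfy the working condition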
In the code below, I am only getting a box around part of my equation like this: How do I get the box to appear around the full equation? Thanks!

CODE

\documentclass{article}
\usepackage{mathtools}
\usepackage{unicode-math}
\begin{document}
\fbox{$
  \zeta\nearrow
  \underset
    {\mathrlap{\displaystyle\Searrow\text{ overshoot}\searrow}}
    {\mathrlap{\Rightarrow\text{P.M.} \nearrow}}
$}
\end{document}
(15C) Multiple Linear Regression for HP-15C
01-26-2014, 02:16 PM (This post was last modified: 06-15-2017 01:30 PM by Gene.)
Post: #1
(15C) Multiple Linear Regression for HP-15C
To perform the regression of the following linear model:
Y = a + bX + cZ
We transform it into the following equation:
(Y-Ymean)/(X-Xmean) = b + c*(Z-Zmean)/(X-Xmean)
Then calculate a using:
a = Ymean - b*Xmean - c*Zmean
I have added a ZIP file containing the source code for the HP-15C that Torsten recently introduced to this web site (click here). The drawback of this method is having to enter the data twice. The advantage of this method (over using matrices in the HP-15C to perform multiple regression) is that there is no limit on the number of data points entered.
Memory Map
RCL 0 = Ymean
RCL 1 = Xmean
RCL 8 = Zmean
RCL 9 = X - Xmean
Flags
F0 = When internally set, the program goes into data deletion mode.
Listing
Code:
1 LBL B # Enter Y before pressing B. Use this label to delete data
Usage
Phase 1
In this phase perform the following:
1. Clear the statistical registers.
2. Enter the values of Y.
3. Calculate the mean value of Y and record it.
4. Clear the statistical registers.
5. Enter the values of X.
6. Calculate the mean value of X and record it.
7. Clear the statistical registers.
8. Enter the values of Z.
9. Calculate the mean value of Z and record it.
10. Clear the statistical registers.
11. Enter the means for Y, X, and Z in registers 0, 1, and 8 respectively.
Phase 2
1. Enter the value of Y and press the keys f and A.
2. Enter the value of X and press R/S.
3. Enter the value of Z and press R/S.
4. Repeat steps 1 through 3 for all the observations.
5. To calculate the values of coefficients b and c, press the keys f and C. The program displays the value of coefficient b in the X register.
6. Press the key X<>Y to view and record the value of coefficient c.
7. Press the key X<>Y.
8. Press the key R/S to display the coefficient a.
To delete a data point:
1. Enter the value of Y and press the keys f and B.
2. Enter the value of X and press R/S.
3. Enter the value of Z and press R/S.
Example
Given the following data:
Code:
Y X Z
In phase 1 calculate the mean for Y as 2.4, the mean for X as 3.1, and the mean for Z as 46. Store these means in registers 0, 1, and 8, respectively.
1. Enter the value of -4.5 and press the keys f and A.
2. Enter the value of 1 and press R/S.
3. Enter the value of 33 and press R/S.
4. Repeat steps 1 through 3 for all the above observations.
5. To calculate the values of coefficients b and c, press the keys f and C. The program displays 2.000 as the value of coefficient b in the X register.
6. Press the key X<>Y to view and record -0.30000 as the value of coefficient c.
7. Press the key X<>Y.
8. Press the key R/S to display 10.000 as the coefficient a.
The model is Y = 10 + 2*X - 0.3*Z

01-27-2014, 07:35 AM
Post: #2
RE: Multiple Linear Regression for HP-15C
This example could be calculated without entering data twice or using any program. Make sure to set the calculator to USER mode.
Enter matrix \(A\): \(A = \begin{bmatrix} 1 & 1 & 55 \\ 1 & 2 & 67 \\ 1 & 3.5 & 33 \\ 1 & 4 & 34 \\ 1 & 5 & 41 \\ \end{bmatrix} \)
Code:
5 ENTER 3
Enter matrix \(B\): \( B = \begin{bmatrix} -4.5 \\ -6.1 \\ 7.1 \\ 7.8 \\ 7.7 \\ \end{bmatrix} \)
Code:
5 ENTER 1
Calculate \(C=A^{\top}A\):
Code:
RESULT C
Calculate \(D=A^{\top}B\):
Code:
RESULT D
Calculate \(E=C^{-1}D\):
Code:
RESULT E
Get result:
Code:
RCL E -> 10
Cheers
Thomas

01-27-2014, 11:57 AM
Post: #3
RE: Multiple Linear Regression for HP-15C
What is the size limit on using matrices in the 15C to do multiple regression?

01-27-2014, 12:04 PM
Post: #4
RE: Multiple Linear Regression for HP-15C
Registers will be the limit. You've got to store all the data points, the results and the intermediates inside of 64 registers.
- Pauli

01-27-2014, 07:23 PM
Post: #5
RE: Multiple Linear Regression for HP-15C
(01-27-2014 11:57 AM) Namir Wrote: What is the size limit on using matrices in the 15C to do multiple regression?
15. But you have to calculate D first, then delete B (DIM 0,0) and only then calculate C and E. My emulator on the iPhone sucks: it will just crash when I try to create a matrix that uses all 64 registers. So I can't tell you what happens when you try with 16 on the real thing. If you want to avoid entering the data twice you could use \(\sum{+}\) to calculate \(n\), \(\sum{X}\), \(\sum{X^2}\), \(\sum{Z}\), \(\sum{Z^2}\) and \(\sum{XZ}\). In addition to that you have to calculate \(\sum{Y}\), \(\sum{XY}\) and \(\sum{ZY}\) separately.
Cheers
Thomas

01-28-2014, 11:16 PM
Post: #6
RE: Multiple Linear Regression for HP-15C
(01-27-2014 07:23 PM) Thomas Klemm Wrote: (01-27-2014 11:57 AM) Namir Wrote: What is the size limit on using matrices in the 15C to do multiple regression?
15. But you have to calculate D first, then delete B (DIM 0,0) and only then calculate C and E. My emulator on the iPhone sucks: it will just crash when I try to create a matrix that uses all 64 registers. So I can't tell you what happens when you try with 16 on the real thing.
Thomas,
My main point is to perform multiple linear regression using the L.R. command. One can always use the equations from the HP-67 Stat Pac I to do straightforward calculations. Such calculations will require a longer program BUT have the advantage of entering the data once.
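As a cross-check that readers can run off-calculator (my own sketch, using NumPy), the same normal-equations route C = A^T A, D = A^T B, E = C^-1 D reproduces the coefficients of Y = a + bX + cZ for the example data:

# Normal-equations solution for the thread's example data.
import numpy as np

A = np.array([[1, 1.0, 55],
              [1, 2.0, 67],
              [1, 3.5, 33],
              [1, 4.0, 34],
              [1, 5.0, 41]])
B = np.array([-4.5, -6.1, 7.1, 7.8, 7.7])

C = A.T @ A
D = A.T @ B
E = np.linalg.solve(C, D)   # numerically preferable to forming C^-1 explicitly
print(E)                    # expected: approximately [10.0, 2.0, -0.3]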
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Dennis Soemers' answer is correct: you should use a HashSet or a similar structure to keep track of visited states in BFS Graph Search. However, it doesn't quite answer your question. You're right that, in the worst case, BFS will then require you to store 16! nodes. Even though the insertion and check times in the set will be O(1), you'll still need an ...

Artificial Intelligence is a very broad field and it covers many and very deep areas of computer science, mathematics, hardware design and even biology and psychology. As for the math: I think calculus, statistics and optimization are the most important topics, but learning as much math as you can won't hurt. There are many good free introductory resources ...

Start with Andrew Ng's introduction to Machine Learning course on Coursera. There are not many prerequisites for that course, but you will learn how to make some useful things. And, more importantly, it will clearly show you which subjects you need to learn next.

A neuron is said to be activated when its output is more than a threshold, generally 0. For example: \begin{equation}y = \text{ReLU}(a) > 0\end{equation} when \begin{equation}a = w^T x + b > 0\end{equation} The same goes for sigmoid or other activation functions.

To excel in AI you need a mathematical intuition or point of view. In order to become a full stack AI engineer, it is important that you have a firm understanding of the mathematical foundations of machine learning. My advice to anyone preparing to jump into the field is that learning mathematics is about doing. Remember the 20/80 rule. You need to ...

A really good introduction is the Berkeley CS188 class videos and projects. You can find those materials at http://ai.berkeley.edu/home.html You probably also want to get ahold of a copy of Artificial Intelligence: A Modern Approach by Norvig and Russell. For more on the "machine learning" aspects of AI, including an introduction to Neural Networks, take ...

You can use a set (in the mathematical sense of the word, i.e. a collection that cannot contain duplicates) to store states that you have already seen. The operations you'll need to be able to perform on this are: inserting elements, and testing if elements are already in there. Pretty much every programming language should already have support for a data ...

Supervised learning is typically an attempt to learn a mathematical function, $f(\bf X)=\bf y$. For this, you need both the input vector $\bf X$ and the output vector $\bf y$. The model outputs have whatever dimensionality the target values have. Unsupervised learning models instead learn a structure from the data. A clustering model, for example, is ...

While the answers given are generally true, a BFS in the 15-puzzle is not only quite feasible, it was done in 2005! The paper that describes the approach can be found here: http://www.aaai.org/Papers/AAAI/2005/AAAI05-219.pdf A few key points: In order to do this, external memory was required - that is, the BFS used the hard drive for storage instead of RAM. ...

Utility is fundamental to Artificial Intelligence because it is the means by which we evaluate an agent's performance in relation to a problem. To distinguish between the concept of economic utility and utility-based computing functions, the term "performance measure" is utilized. The simplest way to distinguish between a goal-based agent and a utility-...

Backpropagation is a subroutine often used when training Artificial Neural Networks with a Gradient Descent learning algorithm.
Gradient Descent requires computation of the error gradient, i.e. derivatives, of a cost function with respect to the network parameters. BP allows you to find this gradient a lot faster than using naive methods. Reinforcement ...

Neural networks seem to be (something along the lines of) a type of algorithm that creates a graph which works based on a theory about how neurons interact, in order to create self-learning programs. Technically, a neural network is a combination of: a group of high dimensional arrays/vectors storing 'weights' (more on this soon), and a list of instructions or ...

In a neural network (NN), a neuron can act as a linear operator, but it usually acts as a non-linear one. The usual equation of a neuron $i$ in layer $l$ of an NN is $$o_i^l = \sigma(\mathbf{x}_i^l \cdot \mathbf{w}_i^l + b_i^l),$$ where $\sigma$ is a so-called activation function, which is usually a non-linearity, but it can also be the identity ...

This is fairly boilerplate advice, but, since you're brand new to AI, I'd personally suggest writing a classical Tic-Tac-Toe AI, ideally using minimax. I suggest this because minimax is fundamental to AI, and there are many webpages devoted to this subject, such as How to make your Tic Tac Toe game unbeatable by using the minimax algorithm and Tic Tac Toe: ...

You'll find that both Calculus and Linear Algebra have some application in AI/ML techniques. In many senses, you can argue that most of ML reduces to Linear Algebra, and Calculus is used in, e.g., the backpropagation algorithm for training neural networks. You'd be well served to take a class or two in probability and statistics as well. Programming ...

AI is quite large in scope and it sits at the intersection of several areas. However, there are a few essential fields or topics that you need to know: set theory, logic, linear algebra, calculus, and probability and statistics. I would recommend you to first explore the AI algorithms that you might be interested in. I advise you to start with machine learning and ...

Well, you are definitely mixing two different things. Here are those bits: The function that deep learning approximates is basically a function that best fits the INPUT DATA points. You should not think about its differentiability or optimization aspects. We don't care what type of function it is; we just want the best fit of the input data (of course ...

As it can be easily pointed out that true random numbers cannot be generated fully by programming and some random seed is required. This is true. In fact, it is impossible to solve using software. No software-only technique can generate randomness without an initial random seed or support from hardware. This is also true for AI software. No AI design that ...

Using a machine learning or AI-powered model once it has been built and tested is not directly an AI issue, it is just a development issue. As such, you won't find many machine learning tutorials that focus on this part of the work. But they do exist. In essence it is the same as integrating any other function, which might be in a third-party library: ...

I would suggest you to start with Andrew Ng's Machine Learning course on Coursera. He provides a brief introduction to the mathematics necessary for machine learning. Though not complete, it will be enough to cruise through the course. Next carefully learn logistic regression in the course. The sigmoid function will be widely used in neural networks. In the ...

When I got interested in AI, I started with the most basic things.
My very first book was Russell & Norvig's Artificial Intelligence: A Modern Approach. I think that's a good place to start, even if you're mostly interested in Deep Nets. It treats not just the basic AI concepts and algorithms (expert systems, depth-first and breadth-first search, knowledge ...

When crossover happens and one parent is fitter than the other, the nodes from the more fit parent are carried over to the child. This is the case as disjoint and excess genes are only carried over from the fittest parent. Here's an example:

// Node Crossover
Parent 1 Nodes: {[0][1][2]}    // more fit parent
Parent 2 Nodes: {[0][1][2][3]}
Child Nodes:    {[0]...

In your question you didn't specify the type of pooling that you aren't doing. So it's possible that you could have, for example, a mean pool followed by a max or min pool. What this could do is combine the idea of reducing the dimensionality of your data from a holistic perspective with the mean pool and then choosing the best of your averages with your max ...

The term "activated" is mostly used when talking about activation functions which only output a value (other than 0) when the input to the activation function is greater than a certain threshold. Especially when discussing ReLU the term "activated" may be used. ReLU will be "activated" when its output is greater than 0, which is also when its input is greater ...

In many cases, a production-ready model has everything it needs to make predictions without retaining training data. For example: a linear model might only need the coefficients, a decision tree just needs rules/splits, and a neural network needs architecture and weights. The training data isn't required as all the information needed to make a prediction is ...

What you have could be well described as a Task Allocation problem, which is studied as part of the planning subfield of AI. Chapters 10 & 11 of Russell & Norvig provide a good overview of this area, although I think they don't talk too much about Task Allocation in particular. There are two basic approaches to this problem: centralized approaches, ...

The term you are looking for is stylometry, which is related to a technique in forensic linguistics called writeprint analysis. There are many different techniques to perform stylometric analysis, from the very basic 5-feature analysis classifying features such as the lexicon and idiosyncrasies unique to a person, to more complex analysis utilizing neural ...

Welcome to AI.SE @Kate_Catelena! I teach AI courses at the undergraduate level, and so have seen a lot of semester projects over the years. Here are some templates that often lead to exciting outcomes: Pick a new board or card game, and write a program to play it. Your course has probably covered Adversarial Search, and may also have covered Monte Carlo ...

I think the key part of your question is "as a beginner". For all intents and purposes you can create a state of the art (SoTA) model in various fields with no knowledge of the mathematics whatsoever. This means you do not need to understand back-propagation, gradient descent, or even mathematically how each layer works. Respectively you could just ...

Here are some resources I have found useful to get to know the basics of AI: Andrew Ng's lecture series on AI; Andrew Ng's lecture at the Stanford Business School; Andrew Ng - The State of Artificial Intelligence. Andrew Ng is a visiting professor at Stanford, founder of Coursera and currently the head of research at Alibaba. The above videos should give you (...
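To make the visited-set idea from the search-related answers above concrete, here is a minimal sketch (my own, not from any of the answers); is_goal and neighbours are placeholder callables and states are assumed to be hashable, e.g. tuples encoding a 15-puzzle board:

# Breadth-first search with a set of visited states, so each state is expanded at most once.
from collections import deque

def bfs(start, is_goal, neighbours):
    frontier = deque([start])
    visited = {start}              # hash-set membership test is O(1) on average
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            # reconstruct the path by walking the parent links back to the start
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbours(state):
            if nxt not in visited:  # skip states already queued or expanded
                visited.add(nxt)
                parent[nxt] = state
                frontier.append(nxt)
    return None                     # goal not reachable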
I would like to put code from an outside file into a LaTeX document. This code is able to run on its own. Equation labels from the document are referenced in the comments of this code. I would like to see these equations updated and referenced in the final document, with a caption, label, in a box, numbered lines, etc.

Example LaTeX document with a labelled equation:

%doc.tex
\documentclass[]{article}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{listings}
\begin{document}
  \begin{align}
    \label{eq:my equation}
    y = \sin(x) \cos(x)
  \end{align}
  \lstinputlisting[caption={Code},
    frame=single,
    numbers=left,
    escapeinside={tex:}{:tex}
  ]{code.py}
\end{document}

Example code to be included in the document:

#code.py
import math
def y(x):
    return math.sin(x)*math.cos(x) #tex: Equation \ref{eq:my equation} :tex
print y(math.pi/4)

However, using \lstinputlisting produces errors. How can I reference equations like this? Thank you very much.
Inverse in Group is Unique/Proof 2

Theorem
Let $\struct {G, \circ}$ be a group. Then:
$\forall x \in G: \exists_1 x^{-1} \in G: x \circ x^{-1} = e = x^{-1} \circ x$
where $e$ is the identity element of $\struct {G, \circ}$.

Proof
Let $a \in G$. Suppose that:
$\exists b, c \in G: a \circ b = e = b \circ a, \ a \circ c = e = c \circ a$
that is, that $b$ and $c$ are both inverse elements of $a$. Then:
\begin{align*}
b &= b \circ e && \text{as $e$ is the identity element} \\
  &= b \circ \paren {a \circ c} && \text{as $c$ is an inverse of $a$} \\
  &= \paren {b \circ a} \circ c && \text{Group Axioms: $G_1$: Associativity} \\
  &= e \circ c && \text{as $b$ is an inverse of $a$} \\
  &= c && \text{as $e$ is the identity element}
\end{align*}
So $b = c$ and hence the result.
$\blacksquare$

Sources
1966: Richard A. Dean: Elements of Abstract Algebra: $\S 1.4$: Lemma $5$
1967: George McCarty: Topology: An Introduction with Application to Topological Groups: $\text{II}$: The Group Property
1996: John F. Humphreys: A Course in Group Theory: Chapter $3$: Elementary consequences of the definitions: Proposition $3.2$
I was trying some fonts from this post: which-opentype-math-fonts-are-available. I downloaded and installed STIX Two from http://stixfonts.org/ and have been using it for a few days. It looks really nice, except that sometimes some of the math letters in fractions hit each other. Compare this image (MWE shown below), comparing STIX Two and the default LaTeX font. Notice how the denominator is touching the math in the line below it. Here is the MWE:

\documentclass[11pt]{article}
\usepackage{amsmath}
\usepackage{unicode-math}
\setmainfont{STIX Two Text}
\setmathfont{STIX Two Math}
%\setmainfont{XITS}
%\setmathfont{XITS Math}

\begin{document}
\[
G(x,s) = \left\{
\begin{array}[c]{ccc}%
\frac{\cos s}{\cos(1) }\sin(1-x) & & 0\leq s\leq x\\
\frac{\cos x}{\cos(1) }\sin(1-s) & & x\leq s\leq 1
\end{array}
\right.
\]
\end{document}

Compiled using lualatex foo.tex. This below is the default LaTeX font: The question is, is there something one can do to fix this? Is this a bug in the font? I find it harder to read when the letters are touching each other. Otherwise, it is a nice font. I think they put too much space between the fraction line and the denominator. With the default font, the spacing is much better. To install the fonts, I unzipped the file from the above link, and copied the 3 font folders to the ~/.fonts folder. That is all. This is on cygwin. On Windows or Mac, the fonts need to be moved to wherever the opentype fonts folder is.
Solvers General orthogonal coordinates Data processing Output functions Input functions Interactive Basilisk View Miscellaneous functions/modules Tracking floating-point exceptions See also Solvers \displaystyle \partial_t \int_{\Omega} \mathbf{q} d \Omega = \int_{\partial \Omega} \mathbf{f} ( \mathbf{q}) \cdot \mathbf{n}d \partial \Omega - \int_{\Omega} hg \nabla z_b \displaystyle \mathbf{q} = \left(\begin{array}{c} h\\ hu_x\\ hu_y \end{array}\right), \;\;\;\;\;\; \mathbf{f} (\mathbf{q}) = \left(\begin{array}{cc} hu_x & hu_y\\ hu_x^2 + \frac{1}{2} gh^2 & hu_xu_y\\ hu_xu_y & hu_y^2 + \frac{1}{2} gh^2 \end{array}\right) Semi-implicit scheme Multiple layers \displaystyle \partial_th + \partial_x\sum_{l=0}^{nl-1}h_lu_l = 0 \displaystyle \partial_t(h\mathbf{u}_l) + \nabla\cdot\left(h\mathbf{u}_l\otimes\mathbf{u}_l + \frac{gh^2}{2}\mathbf{I}\right) = - gh\nabla z_b - \partial_z(h\mathbf{u}w) + \nu h\partial_{z^2}\mathbf{u} Green-Naghdi \displaystyle \partial_t \int_{\Omega} \mathbf{q} d \Omega = \int_{\partial \Omega} \mathbf{f} ( \mathbf{q}) \cdot \mathbf{n}d \partial \Omega - \int_{\Omega} hg \nabla z_b + h \left( \frac{g}{\alpha}\nabla \eta - D \right) \displaystyle \alpha h\mathcal{T} \left( D \right) + hD = b \displaystyle b = \left[ \frac{g}{\alpha} \nabla \eta +\mathcal{Q}_1 \left( u \right) \right] \displaystyle \partial_t\left(\begin{array}{c} s_i\\ \mathbf{v}_j\\ \end{array}\right) + \nabla\cdot\left(\begin{array}{c} \mathbf{F}_i\\ \mathbf{T}_j\\ \end{array}\right) = 0 \displaystyle \partial_t\mathbf{q} + \nabla\cdot(\mathbf{q}\mathbf{u}) = - \nabla p + \nabla\cdot(\mu\nabla\mathbf{u}) + \rho\mathbf{a} \displaystyle \partial_t p + \mathbf{u}\cdot\nabla p = -\rho c^2\nabla\cdot\mathbf{u} Navier–Stokes Streamfunction–Vorticity formulation \displaystyle \partial_t\omega + \mathbf{u}\cdot\nabla\omega = \nu\nabla^2\omega \displaystyle \nabla^2\psi = \omega “Markers-And-Cells” (MAC or “C-grid”) formulation Centered formulation \displaystyle \partial_t\mathbf{u}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u}) = \frac{1}{\rho}\left[-\nabla p + \nabla\cdot(2\mu\mathbf{D})\right] + \mathbf{a} \displaystyle \nabla\cdot\mathbf{u} = 0 Azimuthal velocity for axisymmetric flows \displaystyle \partial_t w + u_x \partial_x w + u_y \partial_y w + \frac{u_y w}{y} = \frac{1}{\rho y} \left[ \nabla \cdot (\mu y \nabla w) - w \left( \frac{\mu}{y} + \partial_y \mu \right) \right] Two-phase interfacial flows Electrohydrodynamics Ohmic conduction \displaystyle \partial_t\rho_e = \nabla \cdot(K \nabla \phi) \displaystyle \nabla \cdot (\epsilon \nabla \phi) = - \rho_e Ohmic conduction of charged species \displaystyle \partial_tc_i = \nabla \cdot( K_i c_i \nabla \phi) Electrohydrodynamic stresses \displaystyle M_{ij} = \varepsilon (E_i E_j - \frac{E^2}{2}\delta_{ij}) Viscoelasticity \displaystyle \rho\left[\partial_t\mathbf{u}+\nabla\cdot(\mathbf{u}\otimes\mathbf{u})\right] = - \nabla p + \nabla\cdot(2\mu_s\mathbf{D}) + \nabla\cdot\mathbf{\tau}_p + \rho\mathbf{a} \displaystyle \mathbf{\tau}_p = \frac{\mu_p \mathbf{f_s}(\mathbf{A})}{\lambda} \displaystyle \Psi = \log\mathbf{A} \displaystyle D_t \Psi = (\Omega \cdot \Psi -\Psi \cdot \Omega) + 2 \mathbf{B} + \frac{e^{-\Psi} \mathbf{f}_r (e^{\Psi})}{\lambda} Other equations Hele-Shaw/Darcy flows \displaystyle \mathbf{u} = \beta\nabla p \displaystyle \nabla\cdot(\beta\nabla p) = \zeta Advection \displaystyle \partial_tf_i+\mathbf{u}\cdot\nabla f_i=0 Interfacial forces \displaystyle \phi\mathbf{n}\delta_s Reaction–Diffusion \displaystyle \theta\partial_tf = 
\nabla\cdot(D\nabla f) + \beta f + r Poisson–Helmholtz \displaystyle \nabla\cdot (\alpha\nabla a) + \lambda a = b Runge–Kutta time integrators \displaystyle \frac{\partial\mathbf{u}}{\partial t} = L(\mathbf{u}, t) Signed distance field Okada fault model

General orthogonal coordinates

When not written in vector form, some of the equations above will change depending on the choice of coordinate system (e.g. polar rather than Cartesian coordinates). In addition, extra terms can appear due to the geometric curvature of space (e.g. equations on the sphere). An important simplification is to consider only orthogonal coordinates. In this case, consistent finite-volume discretisations of standard operators (divergence etc…) can be obtained, for any orthogonal curvilinear coordinate system, using only a few additional geometric parameters. The face vector fm is the scale factor for the length of a face, i.e. the physical length is fm\Delta, and the scalar field cm is the scale factor for the area of the cell, i.e. the physical area is cm\Delta^2. By default, these fields are constant and unity (i.e. the Cartesian metric). Several metric spaces/coordinate systems are predefined: Axisymmetric Stokes stream function \displaystyle \frac{\partial^2\psi}{\partial z^2} + \frac{\partial^2\psi}{\partial r^2} - \frac{1}{r}\partial_r\psi = - \omega r Spherical Radial/cylindrical

Data processing

Various utility functions: timing, field statistics, slope limiters, etc. Tagging connected neighborhoods Counting droplets

Output functions

Multiple fields interpolated on a regular grid (text format) Single field interpolated on a regular grid (binary format) Portable PixMap (PPM) image output Volume-Of-Fluid facets Basilisk snapshots Basilisk View Gerris simulation format ESRI ASCII Grid format VTK format

Input functions

Interactive Basilisk View

bview: a script to start the client/server visualisation pipeline. bview-server.c: the server. bview-client.py: the client.

Miscellaneous functions/modules

Tracking floating-point exceptions

On systems which support signaling NaNs (such as GNU/Linux), Basilisk is set up so that trying to use an uninitialised value will cause a floating-point exception to be triggered and the program to abort. This is particularly useful when developing adaptive algorithms and/or debugging boundary conditions. To maximise the "debugging potential" of this approach it is also recommended to use the trash() function to reset any field prior to updates. This will guarantee that older values are not mistakenly reused. Note that this call is quite expensive and needs to be turned on by adding -DTRASH=1 to the compilation flags (otherwise it is just ignored). Doing ulimit -c unlimited before running the code will allow generation of core files which can be used for post-mortem debugging (e.g. with gdb).

Visualising stencils

It is often useful to visualise the values of fields in the stencil which triggered the exception. This can be done using the -catch option of qcc.
We will take this code as an example: Copy and paste this into test.c, then do

ulimit -c unlimited
qcc -DTRASH=1 -g -Wall test.c -o test -lm
./test

you should get

Floating point exception (core dumped)

Then do

gdb test core

you should get

...
Core was generated by `./test'.
Program terminated with signal 8, Arithmetic exception.
#0  0x0000000000419dbe in gradients (f=0x7fff5f412430, g=0x7fff5f412420)
    at /home/popinet/basilisk/wiki/src/utils.h:203
203    v.x[] = (s[1,0] - s[-1,0])/(2.*Delta);

To visualise the stencil/fields which lead to the exception do

qcc -catch -g -Wall test.c -o test -lm
./test

you should now get

Caught signal 8 (Floating Point Exception)
Caught signal 6 (Aborted)
Last point stencils can be displayed using (in gnuplot)
  set size ratio -1
  set key outside
  v=0
  plot 'cells' w l lc 0, \
       'stencil' u 1+3*v:2+3*v:3+3*v w labels tc lt 1 title columnhead(3+3*v), \
       'coarse' u 1+3*v:2+3*v:3+3*v w labels tc lt 3 t ''
Aborted (core dumped)

Follow the instructions i.e.

gnuplot
gnuplot> set size ratio -1
gnuplot> set key outside
gnuplot> v=0
gnuplot> plot 'cells' w l lc 0, \
         'stencil' u 1+3*v:2+3*v:3+3*v w labels tc lt 1 title columnhead(3+3*v), \
         'coarse' u 1+3*v:2+3*v:3+3*v w labels tc lt 3 t ''

With some zooming and panning, you should get this picture. The red numbers represent the stencil the code was working on when the exception occurred. It is centered on the top-left corner of the domain. Cells both inside the domain and outside (i.e. ghost cells) are represented. While the field inside the domain has been initialised, ghost cell values have not. This causes the gradients() function to generate the exception when it tries to access ghost cell values. To initialise the ghost-cell values, we need to apply the boundary conditions i.e. add

boundary ({a});

after initialisation. Recompiling and re-running confirms that this fixes the problem. Note that the blue numbers are the field values for the parent cells (in the quadtree hierarchy). We can see that these are also uninitialised, but this is not a problem since we don't use them in this example. The v value in the gnuplot script is important. It controls which field is displayed. v=0 indicates the first field allocated by the program (i.e. a[] in this example); accordingly ga.x[] and ga.y[] have indices 1 and 2 respectively.

Tracing permissions

Some recent systems disallow tracing of processes for security reasons. The symptom will be an error message from gdb looking like:

Attaching to process 9351
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.

To enable tracing (which weakens your system's security), you need to do:

sudo sh -c 'echo 0 > /proc/sys/kernel/yama/ptrace_scope'
Elham Esfehani

Requirements for this tutorial

This is an advanced tutorial that needs the basics explained in other sections of this learning trail. The following should have been worked through or prepared before this tutorial is started:

Notations and Variables – Superscripts, Subscripts, Indices should have been worked through and the example created. They should be available for reuse in the learner's repository. The equations are also given below for reference.

Parameter lists I – basic use should be completed, as a basic understanding of the use of parameter lists is necessary here.

Example problem

Equation system: Equation System eq_sys_flash, Notation nota_flash

Base Names (Name / Description): Dippr parameter A in –; Bottom molar flow in kmol/h; Dippr parameter B in K; Dippr parameter C in –; Head molar flow in kmol/h; Dippr parameter D in –; Dippr parameter E in –; Feed molar flow in kmol/h; Constant, phase equilibrium and others; Pressure in Pa; Temperature in K; Bottom molar fraction in mol/mol; Head molar fraction in mol/mol; Feed molar fraction in mol/mol

Superscripts (Name / Description): Phase equilibrium; Parameter

Subscripts (Name / Description): Reference value

Indices (Name / Description): Component index 1…NC

Use of a function

For the calculation of the vapor pressure a DIPPR function is used. As the parameters in this function specify the mixture used in this process, it makes sense to specify them in a parameter list.

Function vapor_pressure_dippr, Notation nota_thermo_variables
Base Names (Name / Description): Pressure in Pa; Temperature in K

Parameter List params_thermo_dippr, Notation nota_thermo_dippr
Base Names (Name / Description): parameter A; parameter B; parameter C; parameter D; parameter E; parameter T; parameter p
Superscripts (Name / Description): indicating a scaling/reference

Problem description

Index maximum value: NC = 2
Design variables and their values: F = 1.0, z_{i=1} = 0.5, p = 101325, T = 354.6
Iteration variables and their initial values: B = 0.5, D = 0.5
Specification of physical parameters:
i  Component  A     B        C      D        E
1  Methanol   81.1  -6880.0  -8.71  4.05E-6  2.0
2  Water      72.6  -7210.0  -7.14  7.19E-6  2.0
Specification of scaling/reference parameters (Name / Value / Engineering Unit): 1 K; 1 Pa
Expected simulation results: B = 0.715, D = 0.285

Creating the parameter list

Create 'nota_thermo_dippr'. Create the parameter list 'params_thermo_dippr' which uses 'nota_thermo_dippr'. If you have problems here, refer to the previous sections. Attention: The parameters of the list do not have any index.

Creating the function

Create 'nota_thermo_variables'. Select the Function Editor in the Model Panel. Select 'nota_thermo_variables' in the Notation file panel. Select 'params_thermo_dippr' in the Parameters file panel. Enter a Description for the new function. In the tab, check that Interface & Body Settings and Free Interface are selected in the comboboxes for the interface and body setting, respectively.

Specify formula

Activate the Interface Specification tab. Press [Edit Output] and create the variable of the left hand side of the function: 'p'. Press [Add Input] and create the input variable 'T'. Activate the Body Specification tab. Enter the right hand side of the DIPPR function. The LaTeX expression is:

p^{sca} \cdot \exp\left( A + \frac{B}{T} + C \cdot \ln\left(\frac{T}{T^{sca}}\right) + D\cdot\left(\frac{T}{T^{sca}}\right)^{E}\right)

Save the function.

Creating the equation system and applying the function

Create the notation 'nota_flash'.
If you want to use the notation created in section 2, you have to extend the existing notation by the base name symbols p and T. Create the mentioned equations of the section "Equation system eq_sys_flash". Create a new equation system 'eq_sys_flash' using notation 'nota_flash' and add the above equations (via naming policy 'integrate' without a connector).

Activate the tab Functions. Press [Add Function] and load the function created above. Have a look at the loaded contents:

Preview shows the rendered function.
The field Output Naming shows the output variable as it is defined in the function.
The table Input Namings contains the input variable 'T'.
There is a section Applications (Function Calls). This section contains an empty table for the Output Variable which later on contains the applied output namings.

We have to add an entry into the list for the Applied Namings of the Output Variable. Press [Add Application] below the above mentioned table. This brings up the Edit Function Application dialog. Press [Edit Output] below the field for the Applied Naming for the Output and enter the name for the vapor pressure as it appears in the equation system, with the component index: p_{o,i}^{LV}. Double click at T and enter the naming for the temperature as it should/will appear in the equation system: 'T'. Enter the matchings for the parameters: A -> A^{par}_{i}, B -> B^{par}_{i}, C -> C^{par}_{i}, D -> D^{par}_{i}, E -> E^{par}_{i}, p^{sca} -> p^{par}, T^{sca} -> T^{par}. Press [OK] to leave the Edit Function Application dialog. The list for the applied output namings contains the entry created above. Press [OK] to leave the Add Function Usage dialog. Save the equation system.

Evaluating the equation system

Select the Simulation Editor and activate the Equation System tab. Load the equation system 'eq_sys_flash' created above in the EQ System file panel. You may check if everything is loaded correctly in the Equations and Functions tabs. Activate the Indexing tab, set the maximum values for the indices to 2 and confirm the indexing. Activate the Specifications tab and the Variables sub tab. You will notice that the two $p_{o,i}^{LV}$ are automatically classified as Calculated Variables. Enter the problem specification as specified in the beginning of this tutorial.

HINTS: You may select several variables at the same time using the mouse and the control or shift key. Pressing [<<] or [>>] changes the classification of all selected variables. A right mouse click on the selection brings up a dialog where a value can be entered for all selected variables (useful for initialization values).

Save the variable specification for later use. Activate the Parameters sub tab, enter the parameters as specified above and save this parameter specification. Enter a description for the simulation/evaluation. Save the Evaluation information using the file panel on top of the Evaluation Editor. Generate the code and simulate the flash.
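For readers who want to sanity-check the function definition outside the tool, here is a small stand-alone sketch (my own addition, not part of the MOSAIC workflow above) of the DIPPR-type vapour-pressure correlation exactly as written in the tutorial, with the scaling parameters T^{par} = 1 K and p^{par} = 1 Pa from the problem specification:

# Stand-alone evaluation of the tutorial's vapour-pressure function
# p = p_sca * exp(A + B/T + C*ln(T/T_sca) + D*(T/T_sca)**E).
from math import exp, log

def p_LV(T, A, B, C, D, E, T_sca=1.0, p_sca=1.0):
    Tr = T / T_sca
    return p_sca * exp(A + B / T + C * log(Tr) + D * Tr**E)

# Component parameters from the table above, evaluated at the design temperature T = 354.6 K.
print(p_LV(354.6, 81.1, -6880.0, -8.71, 4.05e-6, 2.0))   # component 1 (methanol)
print(p_LV(354.6, 72.6, -7210.0, -7.14, 7.19e-6, 2.0))   # component 2 (water)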
Definition:Polynomial Ring

One Indeterminate

Let $R$ be a commutative ring with unity. A polynomial ring over $R$ is an ordered triple $\left({S, \iota, X}\right)$ where: $S$ is a commutative ring with unity; $\iota : R \to S$ is a unital ring homomorphism, called the canonical embedding; $X$ is an element of $S$, called the indeterminate. It can be defined in several ways:

Let $R^{\left({\N}\right)}$ be the ring of sequences of finite support over $R$. Let $\iota : R \to R^{\left({\N}\right)}$ be the mapping defined as: $\iota \left({r}\right) = \left \langle {r, 0, 0, \ldots}\right \rangle$. Let $X$ be the sequence $\left \langle {0, 1, 0, \ldots}\right \rangle$. The polynomial ring over $R$ is the ordered triple $\left({R^{\left({\N}\right)}, \iota, X}\right)$.

Let $\N$ denote the additive monoid of natural numbers. Let $R \left[{\N}\right]$ be the monoid ring of $\N$ over $R$. The polynomial ring over $R$ is the ordered triple $\left({R \left[{\N}\right], \iota, X}\right)$ where: $X \in R \left[{\N}\right]$ is the standard basis element associated to $1\in \N$, and $\iota : R \to R \left[{\N}\right]$ is the canonical mapping.

For every pointed $R$-algebra $(A, \kappa, a)$ there exists a unique pointed algebra homomorphism $h : S\to A$, called the evaluation homomorphism. This is known as the universal property of a polynomial ring.

Multiple Indeterminates

Let $R$ be a commutative ring with unity. Let $I$ be a set. A polynomial ring over $R$ in $I$ indeterminates is an ordered triple $\left({S, \iota, f}\right)$ where: $S$ is a commutative ring with unity; $\iota : R \to S$ is a unital ring homomorphism, called the canonical embedding; $f : I \to S$ is a family whose image consists of the indeterminates. It can be defined in several ways:

Let $R \left[{\left\{{X_i: i \in I}\right\}}\right]$ be the ring of polynomial forms in $\left\{{X_i: i \in I}\right\}$. The polynomial ring in $I$ indeterminates over $R$ is the ordered triple $\left({\left({A, +, \circ}\right), \iota, \left\{ {X_i: i \in I}\right\} }\right)$.

Terminology

Single indeterminate: Let $\left({S, \iota, X}\right)$ be a polynomial ring over $R$. The indeterminate of $\left({S, \iota, X}\right)$ is the term $X$.

Multiple indeterminates: Let $I$ be a set. Let $\left({S, \iota, f}\right)$ be a polynomial ring over $R$ in $I$ indeterminates. The unital ring homomorphism $\iota$ is called the canonical embedding into the polynomial ring.

Notation

The unital ring homomorphism $\iota$ is called the canonical embedding into the polynomial ring. The embedding $\iota$ is often then left implicit.

Equivalence of definitions

Also defined as

It is common for an author to define a polynomial ring using a specific construction, and refer to other constructions as the polynomial ring. At $\mathsf{Pr} \infty \mathsf{fWiki}$ we deliberately do not favor any construction. All the more so because at some point it becomes irrelevant. It is also common to call any ring isomorphic to a polynomial ring a polynomial ring. For the precise meaning of this, see Ring Isomorphic to Polynomial Ring is Polynomial Ring.

Also known as

The polynomial ring in one indeterminate over $R$ is often referred to as the polynomial ring over $R$. That is, if no reference is given to the number of indeterminates, it is assumed to be $1$.

Also see

Results about polynomial rings can be found here.
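To make the "sequences of finite support" construction $R^{\left({\N}\right)}$ above concrete, here is a minimal sketch (my own illustration, not part of the page) representing such a sequence as a dictionary from exponents to coefficients, with $R = \Z$:

# Ring of sequences of finite support over the integers: a polynomial is a
# dict {exponent: coefficient}; iota embeds a ring element as the constant
# sequence (r, 0, 0, ...) and X is the sequence (0, 1, 0, ...).
from collections import defaultdict

def iota(r):
    return {0: r} if r != 0 else {}

X = {1: 1}

def add(f, g):
    h = defaultdict(int)
    for poly in (f, g):
        for k, c in poly.items():
            h[k] += c
    return {k: c for k, c in h.items() if c != 0}

def mul(f, g):
    h = defaultdict(int)
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] += a * b
    return {k: c for k, c in h.items() if c != 0}

# (2 + X) * (3 + X) = 6 + 5X + X^2
print(mul(add(iota(2), X), add(iota(3), X)))   # {0: 6, 1: 5, 2: 1}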
My colleague David Harden recently pointed me to Molien’s theorem, a neat little fact about the invariant polynomials under the action by a finite group. It turns out that this has a nice interpretation in the case of the symmetric group $S_n$ that brings in some nice combinatorial and group-theoretic arguments. The general version of Molien’s theorem can be stated thus: Suppose we have a finite subgroup $G$ of the general linear group $GL_n(\mathbb{C})$. Then $G$ acts on the polynomial ring $R=\mathbb{C}[x_1,\ldots,x_n]$ in the natural way, that is, by replacing the column vector of variables $(x_1,\ldots,x_n)$ with their image under left matrix multiplication by $G$. Let $R^G$ be the invariant space under this action. Then $R$ is graded by degree; that is, for each nonnegative integer $k$, the space $R_k^G$ of $G$-invariant homogeneous polynomials of degree $k$ are a finite dimensional subspace of $R^G$, and $R^G$ is the direct sum of these homogeneous components. What is the size (dimension) of these homogeneous components? If $d_k$ denotes the dimension of the $k$th piece, then Molien’s theorem states that the generating function for the $d_k$’s is given by $$\sum_{k\ge 0} d_k t^k=\frac{1}{|G|}\sum_{M\in G} \frac{1}{\det(I-tM)}$$ where $I$ is the $n\times n$ identity matrix. There is a nice exposition of the proof (in slightly more generality) in this paper of Richard Stanley, which makes use of some basic facts from representation theory. Rather than go into the proof, let’s look at the special case $G=S_n$, namely the set of all permutation matrices in $GL_n$. Specialization at $G=S_n$ In this case, the invariant space $R^{S_n}$ is simply the space of symmetric polynomials in $x_1,\ldots,x_n$, and the $k$th graded piece consists of the degree-$k$ homogeneous symmetric polynomials. But we know exactly how many linearly independent homogeneous symmetric polynomials of degree $k$ there can be – as shown in my previous post, the monomial symmetric polynomials $m_\lambda$, where $\lambda$ is any partition of $k$, form a basis of this space in the case that we have infinitely many variables. Since we only have $n$ variables, however, some of these are now zero, namely those for which $\lambda$ has more than $n$ parts. The nonzero $m_\lambda$’s are still linearly independent, so the dimension of the $k$th graded piece in this case is $p(k,n)$, the number of partitions of $k$ into at most $n$ parts. Notice that by considering the conjugate of each partition, we see that the number of partitions of $k$ into at most $n$ parts is equal to the number of partitions of $k$ that use parts of size at most $n$. It is not hard to see that the generating function for $p(k,n)$ is therefore $$\sum_{k\ge 0}p(k,n)t^k=\frac{1}{(1-t)(1-t^2)\cdots (1-t^n)}.$$ Molien’s theorem says that this generating function should also be equal to $$\frac{1}{n!}\sum_{M\in S_n}\frac{1}{\det(I-tM)}$$ where we use the somewhat sloppy notation $M\in S_n$ to indicate that $M$ is an $n\times n$ permutation matrix. What are these determinants? Well, suppose $M$ corresponds to a permutation with cycle type $\lambda$, that is, when we decompose the permutation into cycles the lengths of the cycles are $\lambda_1,\ldots,\lambda_r$ in nonincreasing order. Then notice that, up to simultaneous reordering of the rows and columns, $I-tM$ is a block matrix with blocks of sizes $\lambda_1,\ldots,\lambda_r$. The determinant of a block of size $\lambda_i$ is easily seen to be $1-t^{\lambda_i}$. 
For instance $$\det \left(\begin{array}{ccc} 1 & -t & 0 \\ 0& 1 & -t \\ -t & 0 & 1\end{array}\right)=1-t^3,$$ and in general, the determinant of such a block will have contributions only from the product of the 1’s down the diagonal and from the product of the off-diagonal $-t$’s; all other permutations have a $0$ among the corresponding matrix entries. The sign on the product of $t$’s is always negative since either $\lambda_i$ is odd, in which case the cyclic permutation of length $\lambda_i$ is even, or $\lambda_i$ is even, in which case the permutation is odd. Hence, the determinant of each block is $1-t^{\lambda_i}$, and the entire determinant is $$\det (I-tM)=\prod_i (1-t^{\lambda_i}).$$ So, our summation becomes $$\frac{1}{n!}\sum_{\pi\in S_n} \frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}$$ where $c(\pi)$ denotes the cycle type of a permutation $\pi$. Already we have an interesting identity; we now know this series is equal to $$\frac{1}{(1-t)(1-t^2)\cdots (1-t^n)}.$$ But can we prove it directly? It turns out that the equality of these two series can be viewed as a consequence of Burnside’s Lemma. In particular, consider the action of the symmetric group on the set $X$ of weak compositions of $k$ having $n$ parts, that is, an ordered $n$-tuple of nonnegative integers (possibly $0$) that add up to $k$. Then Burnside’s lemma states that the number of orbits under this action, which correspond to the partitions of $k$ having at most $n$ parts, is equal to $$\frac{1}{n!}\sum_{\pi \in S_n} |X^\pi|$$ where $X^\pi$ is the collection of weak compositions which are fixed under permuting the entries by $\pi$. We claim that this is the coefficient of $t^k$ in $$\frac{1}{n!}\sum_{\pi\in S_n} \frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}$$ hence showing that the two generating functions are equal. To see this, note that if $\pi\in S_n$ has cycle type $\lambda$, then $X^\pi$ consists of the weak compositions which have $\lambda_1$ of their parts equal to each other, $\lambda_2$ other parts equal to each other, and so on. Say WLOG that the first $\lambda_1$ parts are all equal, and the second $\lambda_2$ are equal, and so on. Then the first $\lambda_1$ parts total to some multiple of $\lambda_1$, and the next $\lambda_2$ total to some multiple of $\lambda_2$, and so on, and so the total number of such compositions of $k$ is the coefficient of $t^k$ in the product $$\frac{1}{\prod_{\lambda_i\in c(\pi)} (1-t^{\lambda_i})}.$$ Averaging over all $\pi\in S_n$ yields the result.
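As a quick sanity check of this identity (my own sketch, assuming SymPy is available), one can compare the two generating functions as truncated series for a small $n$:

# Check that (1/n!) * sum over S_n of 1/prod(1 - t^{lambda_i}) equals
# 1/((1-t)(1-t^2)...(1-t^n)) up to a chosen order, here for n = 4.
from itertools import permutations
from math import factorial
from sympy import symbols, series, Mul

t = symbols('t')

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a tuple (perm[i] is the image of i)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        j, length = start, 0
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return lengths

n, order = 4, 12
lhs = sum(1 / Mul(*[1 - t**l for l in cycle_lengths(p)])
          for p in permutations(range(n))) / factorial(n)
rhs = 1 / Mul(*[1 - t**k for k in range(1, n + 1)])

lhs_poly = series(lhs, t, 0, order).removeO().expand()
rhs_poly = series(rhs, t, 0, order).removeO().expand()
print(lhs_poly - rhs_poly)   # expected: 0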
Is there a topology $\tau$ on $\omega$ such that $(\omega,\tau)$ is Hausdorff and path-connected?

No, a path-connected Hausdorff space is arc-connected, whence it would be of (at least) continuum cardinality provided it has more than one point. This follows from a more general (and deep) result that a Peano space (a compact, connected, locally connected, and metrizable space) is arc-connected if it is path-connected, together with the observation that a Hausdorff space that is the continuous image of the unit interval is a Peano space. See this section of the nLab, and references therein.

Todd has already answered the question, but let me give an alternative argument.

Theorem. Every compact Hausdorff space of size less than the continuum is totally disconnected.

Proof. Suppose $a\neq b$ in a compact Hausdorff space $X$ of size less than the continuum. Since every compact Hausdorff space is normal, there is by Urysohn's lemma a continuous function $f:X\to\mathbb{R}$ such that $f(a)=0$ and $f(b)=1$. Since $X$ is smaller than the continuum, there is some real number $z$ strictly between $0$ and $1$ with $z\notin\text{ran}(f)$. So $\text{ran}(f)$ is disconnected and there can be no connected subspace of $X$ containing both $a$ and $b$. So $X$ is totally disconnected. QED

In particular, any such space with at least two points is not connected.

Corollary. There are no nontrivial paths in any Hausdorff space of size less than the continuum.

Proof. The image of a nontrivial path in such a space would be a compact Hausdorff connected space of size less than the continuum, with at least two points, contrary to the theorem. QED

I realise this is quite an old question, but here is an alternative to the already existing elegant answers, which shows that Hausdorff can be replaced by $T_1$:

Proposition. No countable $T_1$ space containing at least two points is the continuous image of an interval.

Proof. Suppose $X$ is a $T_1$ topological space (so points are closed), $I:=[0,1]$, and $f\colon I\to X$ is continuous. Then
$$ \{f^{-1}(x)\colon x\in X\}$$
is a partition of $I$ into closed, hence compact, subsets. But $I$ (more generally, any non-trivial continuum) cannot be written as a disjoint union of countably many (at least two) nonempty closed proper subsets. So $X$ is uncountable. $\square$

Observe that the proof, in contrast to Joel's, does not show that the cardinality of $X$ is that of the continuum. On the other hand, there are non-Hausdorff compact $T_1$ spaces, so his proof does not apply to $T_1$ spaces. Likewise for Todd's argument, as there are $T_1$ spaces that are path-connected, but not arcwise connected. On the other hand, Sierpiński space (consisting of two points, only one of which is closed) is both $T_0$ and path-connected. So $T_1$ cannot be replaced by $T_0$.

EDIT. It seems this argument is also here: http://topospaces.subwiki.org/wiki/Path-connected_and_T1_with_at_least_two_points_implies_uncountable
Hello,

What is a quantum state? Put generalised functions/Schwartz distributions to one side, because a) they're not a Hilbert space, and b) they can't be multiplied, so it's hopeless to even begin to think about Feynman diagrams.

One-particle quantum states seem to be fairly well understood. The state of the system is a function [itex] \psi: \mathbb{R}^3 \rightarrow \mathbb{C} [/itex], and [tex]|\psi(x)|^2[/tex] gives the probability density of finding the particle near the space point x. Let's denote by [itex] \Omega_1 [/itex] the space of one-particle states. [itex] \Omega_1 [/itex] is a Hilbert space with inner product

[tex]
\langle \psi, \phi \rangle = \int d^3x \; \psi^*(x) \phi(x) \qquad \qquad (1)
[/tex]

This Hilbert space is known as [itex]L^2(\mathbb{R}^3)[/itex]. The states [tex]\psi, \phi \in \Omega_1[/tex] evolve in time according to an equation of motion, and the inner product (1) is constant in time. Equivalently, the system evolves by a unitary transformation on [itex] \Omega_1 [/itex].

Building on this, an n-particle state [tex] \psi [/tex] is presumably a function of n space points (x1, x2, ... xn). Assume Bose symmetry, so [tex] \psi [/tex] is totally symmetric with respect to x1, x2, ... xn. So in this case [itex] \psi: (\mathbb{R}^3)^n \rightarrow \mathbb{C} [/itex] and [tex]|\psi(x_1, x_2, \ldots, x_n)|^2[/tex] gives the probability density of finding the n particles near the space points x1, x2, ..., xn. The set [itex] \Omega_n [/itex] of all n-particle states has a canonical inner product

[tex]
\langle \psi, \phi \rangle = \frac{1}{n!} \int d^3x_1 \ldots d^3x_n \; \psi^*(x_1, \ldots, x_n) \phi(x_1, \ldots, x_n)
[/tex]

and is a Hilbert space. So far so good. This is not just rigorous - L^2 spaces are stock concepts in pure math - but it's readily understandable as well. States have a direct physical interpretation at all times, not just at asymptotic [itex] t \rightarrow \pm \infty [/itex]. So is it possible to formulate field theory from this standpoint? In field theory, particle number changes with time, so let's suppose the set of all states is

[tex]
\Omega = \mathbb{C} \oplus \Omega_1 \oplus \Omega_2 \oplus \ldots \oplus \Omega_n \oplus \ldots
[/tex]

(The C is for the vacuum.) Is it possible to define the dynamics of the system in terms of an equation of motion for the n-particle 'wavefunctions'? Would this be another route towards constructing the Feynman series?

Cheers,
Dave
How are the layers in a encoder connected across the network for normal encoders and auto-encoders? In general, what is the difference between encoders and auto-encoders? To answer this rather succinctly, an encoder is a function mapping some input to some different space. An example of this is what the brain does. We have to process the sensory input that the environment gives us in order for it to be storable. An autoencoder's job, on the other hand, is to learn a representation(encoding). An autoencoder will have the same number of output nodes as there are inputs for the purposes of reconstructing the inputs instead of trying to predict the Y target. Autoencoders are usually used in reducing output dimensions in high dimensional data sets. Hope I answered your question! Theory Encoder In general, an Encoder is a mapping $f : X \rightarrow Y $ with $X$ Input Space and $Y$ Code Space In case of Neural Networks, it is a Generative Modelhence a function which is able to compute a Representationout of some input (like GAN) The point is: how would you train such an encoder network ? The general answer is: it depends on what you want your codeto be and ultimately depends on what kind of problem the NN has to solve, so let's pick one Signal Compression The goal is to learn a compressed representation for your input that allows to reconstruct the original input minimizing the loss of information In this case hence you want the dimensionality of $Y$ to be lower than the dimensionality $X$ which in the NN case means the code space will be represented by less neurons than the input space Autoencoder Focusing on the Signal Compression problem, what we want to build is a system which is able to take a given signal with size Nbytes compress it into another signal with size M<Nbytes reconstruct the original signal, starting from the compressed representation, as good as possible To be able to achiebve this goal, we need basically 2 components an Encoder which compresses its input, performing the $f : X \rightarrow Y$ mapping a Decoder which decompresses its input, performing the $f: Y \rightarrow X$ mapping We can approach this problem with the Neural Network Framework, defining an Encoder NN and a Decoder NN and training them It is important to observe this kind of problem can be effectively approached with the convenient learning strategy of unsupervised learning : there is no need to spend any human work (expensive) to build a supervision signal as the original input can be used for this purpose This means we have to build a NN which operates essentially between 2 spaces the $X$ Input Space the $Y$ Latent or Compressed Space The general idea behind the training is to make a certain input go along the encoder + decoder pipeline and then compare the reconstruction result with the original input with some kind of loss function To define this idea a bit more formally The final autoencoder mapping is $f : X \rightarrow Y \rightarrow X$ with the $x$ input the $y$ encoded input or latent representation of the input the $\hat x$ reconstructed input Eventually you will get an architecture similar to You can train this architecture in an unsupervised way, using a loss function like $f : X \times X \rightarrow \mathbb{R}$ so that $f(x, \hat x)$ is the loss associated to the $\hat x$ reconstruction compared with the $x$ input which is also the ideal result Code Now let's add a simple example in Keras related to the MNIST Dataset from keras.layers import Input, Dense from keras.models import Model # Defines 
# Define space sizes
## MNIST 28x28 Input
space_in_size = 28*28
## Latent Space
space_compressed_size = 32

# Define the Input Tensor
in_img = Input(shape=(space_in_size,))
encoder = Dense(space_compressed_size, activation='relu')(in_img)
decoder = Dense(space_in_size, activation='sigmoid')(encoder)
autoencoder = Model(in_img, decoder)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

As an addition to NicolaBernini's answer, here is a full listing which should work with a Python 3 installation that includes Tensorflow:

"""MNIST autoencoder"""
from tensorflow.python.keras.layers import Input, Dense, Flatten, Reshape
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.datasets import mnist
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

"""## Load the MNIST dataset"""
(x_train, y_train), (x_test, y_test) = mnist.load_data()

"""## Define the autoencoder model"""
## MNIST 28x28 Input
image_shape = (28,28)
## Latent Space
space_compressed_size = 25

in_img = Input(shape=image_shape)
img = Flatten()(in_img)
encoder = Dense(space_compressed_size, activation='elu')(img)
decoder = Dense(28*28, activation='elu')(encoder)
reshaped = Reshape(image_shape)(decoder)
autoencoder = Model(in_img, reshaped)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')

"""## Train the autoencoder"""
history = autoencoder.fit(x_train, x_train, epochs=10, shuffle=True, validation_data=(x_test, x_test))

"""## Plot the training curves"""
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['loss', 'val_loss'])
plt.show()

"""## Generate some output images given some input images. This will allow us to see the quality of the reconstruction for the current value of space_compressed_size"""
rebuilt_images = autoencoder.predict([x_test[0:10]])

"""## Plot the reconstructed images and compare them to the originals"""
figure(num=None, figsize=(8, 32), dpi=80, facecolor='w', edgecolor='k')
plot_ref = 0
for i in range(len(rebuilt_images)):
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i==0:
        plt.title("Reconstruction")
    plt.imshow(rebuilt_images[i].reshape((28,28)), cmap="gray")
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i==0:
        plt.title("Original")
    plt.imshow(x_test[i].reshape((28,28)), cmap="gray")
    plot_ref += 1
    plt.subplot(len(rebuilt_images), 3, plot_ref)
    if i==0:
        plt.title("Error")
    plt.imshow(abs(rebuilt_images[i] - x_test[i]).reshape((28,28)), cmap="gray")
plt.show(block=True)

I have changed the loss function of the training optimiser to "mean_squared_error" to capture the grayscale output of the images. Change the value of space_compressed_size to see how that affects the quality of the image reconstructions.
Endorsing Szabolcs' excellent $MaTeX$ package as a way to manage $\LaTeX$ in $Mathematica$ but with an addition that for me at least has removed the significant irritant of having to escape backslashes in strings. For example, given some previous $\LaTeX$ "\tilde{x}=\begin{cases} (\frac{n+1}{2}) \text{th term} & \text{n odd} \\ ((\frac{n}{2}) \text{th} + (\frac{n}{2}+1) \text{th term})/2 & \text{n even} \end{cases}" Applying $MaTeX$ out of the box means having to escape each occurring backslash: Needs["MaTeX`"] MaTeX@"\\tilde{x}=\\begin{cases} (\\frac{n+1}{2}) \\text{th term} & \\text{n odd} \\\\ ((\\frac{n}{2}) \\text{th} + (\\frac{n}{2}+1) \\text{th term})/2 & \\text{n even} \\end{cases}" It's not just the inconvenience of these mechanical modifications but that this also represents an impediment to programmatically modifying underlying $\LaTeX$. But following Simon Rochester's approach (while using a more efficient StringTake and specific MakeExpression - both as pointed out by Alexey Popkov in the comments), we can directly access the original string. Using MaTeX as a wrapper for this interpretative intercept MakeExpression[ RowBox@{"MaTeX", "[", str_String, "]"} | RowBox@{"MaTeX", "@", str_String} | RowBox@{str_String, "//", "MaTeX"} , StandardForm] := MakeExpression[ RowBox@{"MaTeX", "[", "StringTake", "[", ToString@InputForm[str], ",", "{", "2", ",", "-2", "}", "]", "]"}, StandardForm]; we get a more natural MaTeX invocation MaTeX@"\tilde{x}=\begin{cases} (\frac{n+1}{2}) \text{th term} & \text{n odd} \\ ((\frac{n}{2}) \text{th} + (\frac{n}{2}+1) \text{th term})/2 & \text{n even} \end{cases}" Some strings don't expect some of the harmless (in this context) escapes - e.g. MaTeX["X \sim \mathcal{N}(1,0)"] so we'll turn this error message off: Off[Syntax::stresc] For string pre-processing we can do a similar intercept with a RawString wrapper to gain the ability to programmatically generate $\LaTeX$ for subsequent feeding into $MaTeX$ MakeExpression[ RowBox@{"RawString", "[", str_String, "]"} | RowBox@{"RawString", "@", str_String} | RowBox@{str_String, "//", "RawString"} , StandardForm] := MakeExpression[ RowBox@{"StringTake", "[", ToString@InputForm[str], ",", "{", "2", ",", "-2", "}", "]" }, StandardForm]; as illustrated by med[var_String] := StringTemplate[RawString["\tilde{`1`}=\begin{cases} (\frac{n+1}{2}) \text{th term} & \text{n odd} \\ ((\frac{n}{2}) \text{th} + (\frac{n}{2}+1) \text{th term})/2 & \text{n even} \end{cases}"]][var]; vars = {"x", "y", "z"}; MaTeX[med /@ vars] // Column Note that MaTeX operates on (non-string) expressions in the normal way. Hence programmability is preserved for $Mathematica$ expressions and the frontend's 2D formatting/shortcuts can still be used to readily generate $\LaTeX$ code as desired. mean[var_] := HoldForm[ \!\(\*OverscriptBox[\(var\), \(_\)]\) = \!\(TraditionalForm\` \*FractionBox[\( \*UnderoverscriptBox[\(\[Sum]\), \(i = 1\), \(n\)] \*SubscriptBox[\(var\), \(i\)]\), \(n\)]\)]; MaTeX[mean /@ vars] // Column $MaTeX$'s use-cases would seem to be consistency and customising of $\LaTeX$ graphics/formulae, or presenting complex formulae within a Mathematica notebook. Sometimes, however, the need may arise for the original Mathematica code snippet/function to be included either in a paper or as part of a collation within a notebook.
The obvious method, HoldForm, does its holding after the underlying boxes have been parsed, so to cut the parser off at the pass and show how the code was originally entered, we can define a RawHoldForm using this same idiom (I've adapted this from Simon Rochester's other answer). MakeExpression[ RowBox@{"RawHoldForm", "[", expr_, "]"} | RowBox@{"RawHoldForm", "@", expr_} | RowBox@{expr_, "//", "RawHoldForm"}, StandardForm] := HoldComplete[ ExpressionCell[RawBoxes@expr, "Input", ShowStringCharacters -> True]] and observe the difference HoldForm[((x + "df") // f)] RawHoldForm[((x + "df") // f)] (* f[x + df] ((x+"df") // f) *) Putting it all together in a single code block that provides a more natural MaTeX invocation, a string pre-processing RawString wrapper and a RawHoldForm, the following can be loaded after $MaTeX$ (at least in its current version): Needs["MaTeX`"]; MakeExpression[ RowBox@{"MaTeX", "[", str_String, "]"} | RowBox@{"MaTeX", "@", str_String} | RowBox@{str_String, "//", "MaTeX"} , StandardForm] := MakeExpression[ RowBox@{"MaTeX", "[", "StringTake", "[", ToString@InputForm[str], ",", "{", "2", ",", "-2", "}", "]", "]"}, StandardForm]; Off[Syntax::stresc] MakeExpression[ RowBox@{"RawString", "[", str_String, "]"} | RowBox@{"RawString", "@", str_String} | RowBox@{str_String, "//", "RawString"} , StandardForm] := MakeExpression[ RowBox@{"StringTake", "[", ToString@InputForm[str], ",", "{", "2", ",", "-2", "}", "]" }, StandardForm]; MakeExpression[ RowBox@{"RawHoldForm", "[", expr_, "]"} | RowBox@{"RawHoldForm", "@", expr_} | RowBox@{expr_, "//", "RawHoldForm"}, StandardForm] := HoldComplete[ ExpressionCell[RawBoxes@expr, "Input", ShowStringCharacters -> True]] Addenda While it is the ToString@InputForm@str that permits the sought-after programmability, as shown by MrWizard's and Jen's answers, the original strings can also be worked with in the frontend for one-off insertions. The use of StringTake avoids the infinite recursion in MakeExpression's definition. Thus far no downsides have been observed (escapes like \n, \t are not really relevant in latex formatting), although heavier users of $MaTeX$ might notice/uncover issues. From the originator's comments the original syntax was not designed, and hence this would seem to offer an improved syntax for latex formatting in $MaTeX$.
There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group. What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds. The Frobenius map essentially takes a character of a representation of the symmetric group and assigns it a symmetric function. To define it, recall that characters are constant across conjugacy classes, and in $S_n$, the conjugacy classes correspond to the partitions of $n$ by associating a permutation $\pi$ with its cycle type $c(\pi)$. For instance, the permutations $\pi=(12)(345)$ and $\sigma=(35)(142)$ both have cycle type $\lambda=(3,2)$. So, any character $\chi$ satisfies $\chi(\pi)=\chi(\sigma)$. We can now define the Frobenius map $F$. For any character $\chi$ of a representation of $S_n$, define $$F\left(\chi\right)=\frac{1}{n!}\sum_{\pi\in S_n}\chi(\pi)p_{c(\pi)}$$ where $p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_k}$ is the $\lambda$th power sum symmetric function. (Recall that $p_{\lambda_i}=x_1^{\lambda_i}+x_2^{\lambda_i}+\cdots+x_k^{\lambda_i}$.) Then, combining the permutations with the same cycle type in the sum, we can rewrite the definition as a sum over partitions of size n: $$F\left(\chi\right)=\sum_{|\lambda|=n}\frac{1}{z_\lambda}\chi(\lambda)p_{\lambda}$$ for some constants $z_\lambda$. (It’s a fun little combinatorial problem to show that if $\lambda$ has $m_1$ 1’s, $m_2$ 2’s, and so on, then $z_\lambda=1^{m_1}m_1!2^{m_2}m_2!\cdots.$) As an example, let’s take a look at the character table of $S_3$: $$ \begin{array}{c|ccc} & [(1)(2)(3)] & [(12)(3)] & [(123)] \\\hline \chi^{(3)} & 1 & 1 & 1 \\ \chi^{(1,1,1)} & 1 & -1 & 1 \\ \chi^{(2,1)} & 2 & 0 & -1 \end{array} $$ Consider the third row, $\chi^{(2,1)}$, and let us work over three variables $x,y,z$. Then the Frobenius map sends $\chi^{(2,1)}$ to $$F\chi^{(2,1)}=\frac{1}{6}(2p_{(1,1,1)}-2p_3)$$ by our definition above. This simplifies to: $$\frac{1}{3}((x+y+z)^3-(x^3+y^3+z^3))=x^2y+y^2x+x^2z+z^2x+y^2z+z^2y+2xyz$$ which can be written as $m_{2,1}+2m_{1,1,1}$. Notice that, by the combinatorial definition of the Schur functions, this is precisely the Schur function $s_{2,1}$! In fact: The Frobenius map sends the irreducible character $\chi^\lambda$ to the Schur function $s_\lambda$ for all $\lambda$. And therein lies the bridge. Why is this the case? Read on to find out…
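To make the example above easy to check, here is a minimal sympy sketch of my own (not part of the original post) that expands $\frac{1}{3}(p_1^3 - p_3)$ in three variables and confirms it equals $m_{2,1} + 2m_{1,1,1}$, the combinatorial expansion of $s_{2,1}$:

from sympy import symbols, expand, simplify

x, y, z = symbols('x y z')

# Power sum symmetric functions in three variables
p1 = x + y + z
p3 = x**3 + y**3 + z**3

# Frobenius image of chi^(2,1): (1/6)(2*p1^3 - 2*p3) = (1/3)(p1^3 - p3)
frob = expand((p1**3 - p3) / 3)

# Monomial symmetric functions m_{2,1} and m_{1,1,1} in x, y, z
m21 = x**2*y + x*y**2 + x**2*z + x*z**2 + y**2*z + y*z**2
m111 = x*y*z

# The claim: F(chi^(2,1)) = m_{2,1} + 2*m_{1,1,1} = s_{2,1}
print(simplify(frob - (m21 + 2*m111)) == 0)   # True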
The ratio is the same for all aeroplanes if you accept a number of assumptions: The propulsion efficiency is constant, regardless of speed or power setting Aerodynamic drag is the sum of parasite drag and induced drag Parasite drag is proportional to the square of airspeed: $ D_p = k_p \cdot V^2$ Induced drag is inversely proportional to the square of airspeed: $D_i = \frac{k_i }{V^2}$ There is no wind Since we assume efficiency is constant, the fuel consumption rate is directly proportional to the power. Power required is drag times airspeed: $P = D\cdot V = D_p\cdot V + D_i\cdot V = k_p\cdot V^3 + \frac{k_i}{V}$ For the maximum endurance we need to minimise the fuel consumption and thus we need to find the speed that minimises the power. $\frac{dP}{dV} = 3 k_p V^2 - \frac{k_i}{V^2} = 0$ Solving for $V$ results in $V_{endurance} = \sqrt[\uproot{1}4]{ \frac{k_i}{3 k_p}} $ For the maximum range we need to find the speed that minimises the fuel consumption per distance travelled, which is found when the ratio of power to speed over ground is minimal. As we assume there is no wind, the ground speed and the airspeed are equal. Since the ratio of power to airspeed is drag, we have to find the speed for minimum drag: $\frac{dD}{dV} = 2 k_p V - 2\frac{k_i}{V^3} = 0$ Solving for $V$ results in $V_{range} = \sqrt[\uproot{1}4]{ \frac{k_i}{k_p}} $ We can now show that the ratio of maximum range speed to maximum endurance speed is: $\frac{V_{range}}{V_{endurance}} = \left. \sqrt[\uproot{1}4]{ \frac{k_i}{k_p}} \middle/ \sqrt[\uproot{1}4]{ \frac{k_i}{3 k_p}} \right. = \sqrt[\uproot{1}4]{ 3} = 1.316... $ Now as long as the assumptions hold, for any aircraft the ratio of these two speeds will be approximately 1.3. For the F-35 the ratio may be approximately right in the subsonic domain. Effects of compressibility will cause the drag to be higher when the aircraft gets into the transonic domain, so depending on the max-range speed the last two assumptions may not hold. For an autogyro at higher speeds, the induced drag is inversely proportional to the square of the airspeed, just like for conventional fixed-wing aircraft. This is because the fundamental way of creating lift by deflecting the incoming airflow downwards is the same for autogyros and aeroplanes. Therefore autogyros also have a factor of approximately 1.3 between their maximum range and maximum endurance speeds.
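Here is a small numerical sanity check of my own for the derivation above (the drag constants $k_p$ and $k_i$ are arbitrary illustrative values, not real aircraft data): it finds the minimum-power and minimum-drag speeds by brute force over a speed grid and recovers the analytic $\sqrt[4]{3} \approx 1.316$ ratio.

import numpy as np

# Arbitrary illustrative drag constants (assumptions, not real aircraft data)
k_p, k_i = 0.02, 5.0e6

V = np.linspace(10.0, 400.0, 200000)          # candidate airspeeds [m/s]
drag  = k_p * V**2 + k_i / V**2               # parasite + induced drag
power = drag * V                              # power required = D * V

V_endurance = V[np.argmin(power)]             # minimum power -> max endurance
V_range     = V[np.argmin(drag)]              # minimum drag  -> max range

print(V_endurance, (k_i / (3 * k_p))**0.25)   # numerical vs analytic
print(V_range,     (k_i / k_p)**0.25)
print(V_range / V_endurance, 3**0.25)         # ratio ~ 1.316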
This is not actually a research question. It is more an exercise which I posed myself in mathematical/statistical modelling. I have some Whatsapp data of a chat with someone. I want to find a mathematical model to describe the data. I have manually cut the chat into meaningful conversation pieces. So far I have the following Ansatz: Let $t_{j,i}$ be the time at which something is said by Person A or Person B in the whatsapp-chat at conversation $j$. We have the following "waiting times": $0=t_{11}<t_{12}<\cdots<t_{1,a_1}<t_{2,1}<t_{2,2}<\cdots<t_{2,a_2}<\cdots<t_{n,1}<\cdots<t_{n,a_n}$ So we have $n$ "conversations" in this chat by two people. Now my modelling Ansatz is that we have a pause $P_j$ between consecutive conversations: $t_{1,a_1}+P_1 = t_{2,1}$ $t_{2,a_2}+P_2 = t_{3,1}$ $\cdots$ $t_{n-1,a_{n-1}}+P_{n-1} = t_{n,1}$ I have verified with the Kolmogorov-Smirnov test all my assumptions concerning the distribution of the variables. Now we have $P_j \sim Exp(\lambda_P)$ $d_{j,i} = t_{j,i+1}-t_{j,i} \sim Exp(\lambda_d)$ "interarrival times" $a_j \sim Pois(\lambda_a)$ Now one could think of this as a "nested Poisson process", by which I mean that we have a Poisson process which governs the distribution of the conversations, and in each conversation we have a homogeneous Poisson process. Two conversations might have different parameters. Ok, so in reality we cannot observe when one conversation ends and when the next one starts. So the question is: given the data $t_1 < \cdots < t_m$, is it possible to calibrate the above model to find out how many conversations there are in this chat and when a conversation ends / starts, or are there too many parameters in the model which need to be estimated? If it is of help: We also observe at each timestamp who is chatting (Person A / Person B). We have $t_{n,a_n} = \sum_{j=1}^n P_j + \sum_{j=1}^n\sum_{i=1}^{a_j-1}d_{j,i}$ From this I have computed the expected value and the variance of $t_{n,a_n}$: $E(t_{n,a_n}) = n/\lambda_P + n(\lambda_a-1)/\lambda_d$ $Var(t_{n,a_n}) = n/\lambda_P^2 + n(\lambda_a-1)/\lambda_d^2$ Now the question is: given the data $t_1<\cdots<t_m$, how to estimate the parameters $n, \lambda_P, \lambda_d, \lambda_a$? EDIT (by suggestion of Bjørn Kjos-Hanssen): One idea, as suggested by Bjørn Kjos-Hanssen, is to plot the differences (pauses) and then to cut them off at the mean of the pauses: the number of times the pauses are above the mean could be taken as an estimate of $n$, the number of conversations. To make it more precise, let $d_i = t_{i+1}-t_i$, $i=1,\cdots,m-1$. Then $\widehat{d} = 1/(m-1) \sum_{i=1}^{m-1} d_i$. Now let $n = $ number of times we have $d_i > \widehat{d}$. What assumptions should I make to justify this procedure? Suppose that the above procedure can distinguish between a conversation and a pause; then we have $E(m) = \sum_{i=1}^nE(a_i) = n \lambda_a$, hence we can estimate $\lambda_a$ as $\widehat{\lambda_a} = m / n$. On the other hand we can estimate $\lambda_P$ as $\widehat{\lambda_P} = \frac{1}{1/n \sum_{d_j>\widehat{d}}d_j}$ And the Ansatz $t_m = n/\widehat{\lambda_P}+n(\widehat{\lambda_a}-1)/\widehat{\lambda_d}$ gives an estimate of $\widehat{\lambda_d}$ as: $\widehat{\lambda_d} = \frac{m/n-1}{t_m/n-1/n \sum_{d_j>\widehat{d}}d_j}$ So in order to make this argumentation more valid, my question is: What assumptions should I make to justify the procedure above?
The data is:

conversation time person
1 0 A
1 1 A
1 34 B
1 35 A
1 36 B
2 5585 B
2 5586 B
2 5911 A
3 8837 B
3 8838 A
3 8839 B
3 8840 B
3 8841 B
3 8850 A
3 8851 A
3 8870 A
3 8947 B
3 8948 B
3 9592 A
4 14406 B
4 14430 A
4 14435 B
4 14443 B
4 14446 A
4 14447 B
5 14857 B
5 15834 B
5 17125 A
5 17162 B
5 17163 A
5 17165 B
6 17251 A
6 17253 A
7 23330 B
7 23999 A
8 32968 A
8 32969 A
8 32970 B
8 32971 B
8 32972 B
8 32973 B
8 32988 B
9 39365 A
9 39742 B
9 46310 A
9 46330 B
9 46331 A
9 50791 A
9 50866 B
9 51368 A
9 51429 B
9 51441 A
9 51459 B
9 51461 A
9 51462 B
9 51467 A
9 51468 A
10 52890 A
10 52891 B
11 54825 B
11 54830 A
11 54831 A
11 54842 A
11 54843 B
11 54844 A
11 54859 B
11 54860 A
11 54861 A
11 54863 B
11 54865 A
12 70562 A
12 70566 B
12 70568 A
12 70570 A
12 70571 A
12 70572 B
12 70586 A
12 70587 B
13 71609 B
13 71611 A
13 71613 B
13 71617 A
13 71618 B
13 71619 A
14 96595 A
14 96625 A
14 96626 A
14 96627 A
14 96632 B
14 96633 B
14 96634 A
14 96635 A
15 96755 B
15 96782 A
15 96787 A
15 96792 B
15 96794 A
15 96867 A
15 96869 B
15 96870 B
15 96871 A
15 96873 B
15 96905 A
15 96911 A
15 96921 B
16 102817 A
16 102940 B
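As a quick illustration of the thresholding procedure described in the question (my own sketch, not the asker's code; it uses only the first twelve timestamps from the data above and assumes at least one gap exceeds the mean), the estimators $\widehat{\lambda_a}$, $\widehat{\lambda_P}$ and $\widehat{\lambda_d}$ could be computed like this:

import numpy as np

def estimate_parameters(t):
    """Estimate (n, lambda_a, lambda_P, lambda_d) from sorted timestamps t,
    using the 'pauses above the mean gap' heuristic from the question."""
    t = np.asarray(t, dtype=float)
    m = len(t)
    d = np.diff(t)                      # gaps d_i = t_{i+1} - t_i
    d_bar = d.mean()                    # threshold: mean gap
    pauses = d[d > d_bar]               # gaps treated as between-conversation pauses
    n = len(pauses)                     # estimated number of conversations
    lam_a = m / n                       # hat(lambda_a) = m / n
    lam_P = 1.0 / (pauses.sum() / n)    # hat(lambda_P) = 1 / mean pause
    lam_d = (m / n - 1) / (t[-1] / n - pauses.sum() / n)   # from the Ansatz for t_m
    return n, lam_a, lam_P, lam_d

# First twelve timestamps (seconds) from the data above
timestamps = [0, 1, 34, 35, 36, 5585, 5586, 5911, 8837, 8838, 8839, 8840]
print(estimate_parameters(timestamps))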
What is Area & Perimeter of Rhombus? A quadrilateral with four congruent sides is a rhombus or diamond (see the picture below). In some literature it is named an equilateral quadrilateral, since all of its sides are equal in length. That means, if ${\overline{AB}}\cong{\overline{BC}}\cong{\overline{CD}}\cong{\overline{DA}}$, then ${\overline{ABCD}}$ is a rhombus. Since opposite sides are parallel, the rhombus is a parallelogram, but not every parallelogram is a rhombus. That means that all the properties of a parallelogram also apply to the rhombus. To recall, the parallelogram has the following properties: Opposite sides of a parallelogram are congruent; Opposite angles of a parallelogram are congruent; The consecutive angles of a parallelogram are supplementary to each other; The diagonals of a parallelogram bisect each other. A rhombus satisfies two more properties: The diagonals bisect the opposite angles of the rhombus; The diagonals of a rhombus are perpendicular. Any quadrilateral with perpendicular diagonals, in which one diagonal bisects the other, is a kite. So the rhombus is a kite, but not every kite is a rhombus. Any quadrilateral that is both a kite and a parallelogram is a rhombus. Because the diagonals of a rhombus are perpendicular, we can apply the Pythagorean Theorem to find the rhombus side length. Let us consider the rhombus $\overline{ABCD}$. The diagonals of the rhombus bisect each other, $\overline{AO} \cong\overline{OC}$ and $\overline{BO}\cong\overline{OD}.$ Let us denote the side length and the lengths of the diagonals of a rhombus by $a,d_1$ and $d_2$, respectively. Applying the Pythagorean theorem to $\Delta AOB$, we obtain $${a}^2=\Big(\frac {d_1}2\Big)^2+\Big(\frac {d_2}2\Big)^2$$ The rhombus has only two lines of symmetry: a rhombus is symmetrical about its diagonals. A rhombus has central symmetry about the point of intersection of its diagonals, $O$. A rhombus has rotational symmetry because a $180^o$ counterclockwise rotation about the point $O$ transforms the rhombus to itself. The distance around a rhombus is called the perimeter of the rhombus. It is usually denoted by $P$. To find the perimeter of a rhombus we add the lengths of its sides. Thus, the perimeter of a rhombus with side length $a$ is $$P =a+a+a+a= 4 \times a$$ The area of a rhombus is the number of square units needed to fill the rhombus. The area is usually denoted by $A$. A rhombus and a rectangle on the same base and between the same parallels are equal in area. Thus, the area of the rhombus $\overline {ABCD}$ is equal to the area of the parallelogram $\overline{D'C'CD}$ (see picture on the right). The area of a rhombus with side length $a$ and perpendicular distance $h$ between opposite sides is $$A =a\times h$$ In other words, the area of a rhombus is the product of its base and height. If we use the sine ratio in the right triangle $\Delta CC'B$, $h=a\sin m\angle B$, the previous area formula becomes $$A =a^2\times \sin m\angle B $$ Since $\sin(180^o-\alpha)=\sin \alpha$, $\angle A\cong\angle C$ and $\angle B\cong\angle D,$ we can use the measure of any angle of the rhombus, i.e. $$A =a^2\times \sin m\angle B=a^2\times \sin m\angle A $$ The area of a rhombus can also be found in a second way: it is half the product of the lengths of its diagonals. $$A=\frac 12 d_1\times d_2$$ This formula follows from the formula for triangle area. The perimeter is measured in units such as centimeters, meters, kilometers, inches, feet, yards, and miles.
The area is measured in units such as square centimeters $(cm^2)$, square meters $(m^2)$, square kilometers $(km^2)$ etc. The area & perimeter of a rhombus work with steps shows the complete step-by-step calculation for finding the perimeter and area of a rhombus with a side length of $10\;in$ and an angle measure of $30$ degrees, using the perimeter and area formulas. For any other values of the side length and the angle measure of a rhombus, just supply two positive real numbers and click on the GENERATE WORK button. Grade school students may use this area and perimeter of a rhombus calculator to generate the work, verify the results of the perimeter and area of two-dimensional figures, or do their homework problems efficiently.
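As a small illustration of the worked example above (side length 10 in, angle 30°), here is a short sketch of my own (not part of the original calculator) applying the formulas $P = 4a$, $A = a^2 \sin m\angle A$ and $A = \frac{1}{2} d_1 d_2$:

import math

def rhombus_perimeter(a):
    """Perimeter of a rhombus with side length a."""
    return 4 * a

def rhombus_area_from_angle(a, angle_deg):
    """Area of a rhombus with side length a and an interior angle in degrees."""
    return a**2 * math.sin(math.radians(angle_deg))

def rhombus_area_from_diagonals(d1, d2):
    """Area of a rhombus from its diagonal lengths."""
    return 0.5 * d1 * d2

a, angle = 10, 30                                   # side length in inches, angle in degrees
print(rhombus_perimeter(a))                         # 40 (in)
print(round(rhombus_area_from_angle(a, angle), 2))  # 50.0 (sq in)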
Introduction: If you consider the above scribbles of Évariste Galois, who developed Galois theory, you will note that some of the scribbles appear random. Yet, upon closer inspection none of the scribbles are completely random. Many of the scribbles are rather smooth, which would be improbable if the trajectories were generated by some kind of Brownian-type motion. This isn't really surprising if you consider the biomechanical constraints on handwritten text. In fact, some scientists have attempted to distill this observation into a physical law known as the two-thirds power law, which I analyse here. Briefly speaking, here's a breakdown of my analysis: I provide a mathematical description of the law and describe how it may be used as a discriminative model. We may also use this equation as a generative model if we consider symmetries of the equation. Here is the code. The limitations of the 'law' are considered and arguments are given to shift focus on plausible generative models. In spite of its limitations I think that the power law is a very good starting point for understanding biomechanical constraints on realistic drawing tasks. Description of the law: Brief description: The power law for the motion of the endpoint of the human upper limb during drawing motion may be formulated as follows: \begin{equation} v(t) = K \cdot k(t)^\beta \end{equation} where $k(t)$ is the instantaneous curvature of the path and the law is satisfied when $\beta = -\frac{1}{3}$. By taking logarithms of both sides of the equation we have: \begin{equation} \ln v(t) = \ln K - \frac{1}{3} \ln k(t) \end{equation} Frenet-Serret formulas: To clarify what we mean by instantaneous curvature in (2) it's necessary to use a moving reference frame, aka the Frenet-Serret frame, where in two dimensions our reference frame is described by the unit vector tangent to the curve and a unit vector normal to the curve. With this moving frame we may define the curvature of regular curves (i.e. curves whose derivatives never vanish) parametrized by time as follows: \begin{equation} k(t) = \frac{\lvert \ddot{x}\dot{y} - \ddot{y}\dot{x} \rvert}{(\dot{x}^2 + \dot{y}^2)^{3/2}} = \frac{\lvert \ddot{x}\dot{y} - \ddot{y}\dot{x} \rvert}{v^3(t)} \end{equation} Now, if we denote: \begin{equation} \alpha(t) = \lvert \ddot{x}\dot{y} - \ddot{y}\dot{x} \rvert \end{equation} we have: \begin{equation} \ln v(t) = \frac{1}{3} \ln \alpha(t) - \frac{1}{3} \ln k(t) \end{equation} and we note that our law is satisfied when $\alpha(t)$ is constant. Given that this is a linear equation we may use this equation as a discriminative model by performing a linear regression analysis on drawing data. Parallelograms: If we focus on $\alpha(t)$ we may note that this value corresponds to the absolute value of the determinant of a particular matrix: \begin{equation} \alpha(t) = \left\lvert \det \begin{pmatrix} \dot{x} & \dot{y} \\ \ddot{x} & \ddot{y} \end{pmatrix} \right\rvert \end{equation} Furthermore, we may note that this determinant may be identified with the area of a parallelogram with the following vertices: $(0,0)$, $(\dot{x},\dot{y})$, $(\ddot{x},\ddot{y})$ and $(\dot{x}+\ddot{x},\dot{y}+\ddot{y})$. This formulation is useful as invariants of $\alpha(t)$ now correspond to volume-preserving transformations applied to the above parallelogram. Generative modelling via Invariants: Invariance via volume-preserving transforms: Let's first note that if we always have: \begin{equation} \lvert \ddot{x}\dot{y} - \ddot{y}\dot{x} \rvert=K \end{equation} for some $K > 0$ then we must have: \begin{equation} \lvert \ddot{x}(0)\dot{y}(0) - \ddot{y}(0)\dot{x}(0) \rvert=K \end{equation} Now, given that \begin{equation} \mathcal{M} = \{ M \in \mathbb{R}^{2 \times 2}: det(M)=1 \} \end{equation} are volume-preserving transformations, we may use elements of $\mathcal{M}$ to simulate arbitrary trajectories that satisfy the power law.
We may think of this as the Jacobian of a linear, hence differentiable, transformation. Computer simulation: In order to simulate these trajectories, we pick a matrix \begin{equation} M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathcal{M} \end{equation} where the position is updated using: \begin{equation} x_{n+1} = x_n + \dot{x}_n\cdot \Delta t + \frac{1}{2} \ddot{x}_n \cdot \Delta t^2 \end{equation} and in order to make sure that $\det(M) = ad - bc = 1$ we may use the trigonometric identity: \begin{equation} cos^2(\theta) + sin^2(\theta) = 1 \end{equation} so we have: \begin{equation} ad = cos^2(\theta) \end{equation} \begin{equation} bc = -sin^2(\theta) \end{equation} and as a result we have a generative variant of the 2/3 power law. Ok, but are these 'scribbles' ecologically plausible? I don't think so, which is why I call the main Julia function I used to simulate these trajectories 'crazy paths'. Criticism: The law is a pretty weak discriminative model because, as shown by [2], the exponent varies with the viscosity of the drawing medium and, as shown by [1], the exponent also depends on the complexity of the shape drawn. The law is an even weaker generative model as it completely ignores environmental cues. The output 'scribbles' aren't the result of any plausible interaction of an agent with an ecologically realistic environment. This point is even more clear when you consider the underlying minimum-jerk theory that is supposed to justify this 'law'. A verbatim interpretation of jerk minimisation would imply that humans should mainly draw straight lines. However, there's certainly a tradeoff between energy minimisation and the expressiveness of the figure drawn, since drawing is an activity that involves communicating a particular message. References: [1] D. Huh & T. Sejnowski. Spectrum of power laws for curved hand movements. 2015. [2] M. Zago et al. The speed-curvature power law of movements: a reappraisal. 2017. [3] U. Maoz et al. Noise and the two-thirds power law. 2006. [4] M. Richardson & T. Flash. Comparing Smooth Arm Movements with the Two-Thirds Power Law and the Related Segmented-Control Hypothesis. 2002.
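As a supplement to the log-log regression (discriminative model) idea in the description above, here is a small numerical check of my own, not from the original post: for an ellipse traversed as $(A\cos t, B\sin t)$ the quantity $\alpha(t) = \lvert\ddot{x}\dot{y} - \ddot{y}\dot{x}\rvert = AB$ is constant, so a regression of $\ln v$ on $\ln k$ should recover a slope close to $-1/3$.

import numpy as np

# Elliptic trajectory x = A cos t, y = B sin t (illustrative values)
A, B = 3.0, 1.0
t = np.linspace(0, 2 * np.pi, 10000)
x, y = A * np.cos(t), B * np.sin(t)

# Finite-difference derivatives
dt = t[1] - t[0]
xd, yd = np.gradient(x, dt), np.gradient(y, dt)
xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)

v = np.sqrt(xd**2 + yd**2)                      # speed
k = np.abs(xdd * yd - ydd * xd) / v**3          # curvature

# Fit ln v = c + beta * ln k; expect beta ~ -1/3 (trim endpoints of the finite differences)
beta, c = np.polyfit(np.log(k[5:-5]), np.log(v[5:-5]), 1)
print(beta)   # approximately -0.333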
Regularity and uniqueness results in grand Sobolev spaces for parabolic equations with measure data 1. Dipartimento di Costruzioni e Metodi Matematici in Architettura, Universitá di Napoli "Federico II", via Monteoliveto, 3, I-80134 Napoli, Italy 2. Dipartimento di Matematica e Applicazioni "R. Caccioppoli", Universitá di Napoli "Federico II", via Cintia, I-80126 Napoli, Italy 3. Laboratoire d'Applications des Mathématiques, Teleport 2 Département de Mathématiques, Université de Poitiers, B.P. 30179, 86962 Futuroscope Chasseneuil cedex, France $\partial_t u - \Delta_N u=\mu$ in $\mathcal D'(Q) $ $u=0$ on $]0,T[\times\partial \Omega$ $u(0)=u_0$ in $ \Omega,$ where $Q$ is the cylinder $Q=(0,T)\times\Omega$, $T>0$, $\Omega\subset \mathbb R^N$, $N\ge 2$, is an open bounded set having $C^2$ boundary, $\mu\in L^1(0,T;M(\Omega))$ and $u_0$ belongs to $M(\Omega)$, the space of the Radon measures in $\Omega$, or to $L^1(\Omega)$. The results are obtained in the framework of the so-called grand Sobolev spaces, and represent an extension of earlier results on standard Sobolev spaces. Mathematics Subject Classification: 35K60, 35R05, 35K15, 46E30. Citation: Alberto Fiorenza, Anna Mercaldo, Jean Michel Rakotoson. Regularity and uniqueness results in grand Sobolev spaces for parabolic equations with measure data. Discrete & Continuous Dynamical Systems - A, 2002, 8 (4) : 893-906. doi: 10.3934/dcds.2002.8.893
I came across John Duffield Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfy this property: more specifically, all and only Hermitian matrices have this property" ha? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$. Although I'm not sure whether there could be exceptions for non diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial $T - \lambda I$ splits into linear factors like $T - \lambda_iI$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger =A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all.
I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices. If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) The exact set of conditions for which the matrix exponential $e^A$ of a complex matrix $A$, is unitary 2) The exact set of conditions for which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the t parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that gives unitary evolution for a specific t Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal.Then$$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself?That's not fair: It's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
Week 6 = 4 days for dimensionality reduction and recommender systems. It was also sandwiched between 4th of July (holiday) and our week-long break. I think most of us were not in the strongest mindset or focus for matrix algebra, so it's good for me to do some recapping. Dimensionality Reduction In machine learning, our data is represented in a matrix where each row is a set of observations of some features. So a matrix of size m x p would have m observations and p features. When m and p are extremely large, we need dimensionality reduction to help alleviate some problems. High-dimensional matrices suffer from "the curse of dimensionality". Unsupervised learning methods such as k-nearest neighbors and k-means/hierarchical clustering use distance metrics to group together data that is similar, meaning there is small distance between data points. So, a large number of features means that data points will be extremely far from each other (sparse). You would need a LOT more data to learn and describe the high-dimensional space. Or, you can perform dimensionality reduction. A large feature space may exhibit multicollinearity, redundancy, and/or noise, which can be solved by tuning (reducing) the feature space- this reminds me of using regularization for tuning model complexity in regression models. Dimensionality reduction can also bring out latent features! These are the most relevant features of your data that are not explicitly measured in the data (think hidden topics like book/movie genres). Also, computations are nicer with a smaller matrix. We went over 3 methods for dimensionality reduction: Principal Component Analysis (PCA), Singular Value Decomposition (SVD), and Non-negative Matrix Factorization (NMF). PCA takes your data that is described by possibly correlated variables and tries to describe it with uncorrelated (orthogonal) variables. This set of uncorrelated variables are the principal components part of PCA, and the number of principal components is less than or equal to the number of features you originally started with. The principal components are the axes along which your data has the most variance- I liked this image they showed us in lecture- there is no reduction in the number of features, but it has transformed them to a more useful set of axes: To get the components, you must solve an eigenvalue problem of your data's covariance matrix. Here, the eigenvalue corresponds to the variance captured along the direction of its respective eigenvector. So, we can choose a reduced feature space by choosing a subset of the eigenvectors (eliminating the eigenvectors with the smallest eigenvalues first). You can choose an appropriate number of principal components by targeting some ratio of $\frac{principal \ component \ variance}{original \ variance} $ or by using the "elbow" method of a scree plot (y-axis of eigenvalues in decreasing order, x-axis of corresponding component). On top of computing the covariance matrix AND solving an eigenvalue problem to perform PCA (which isn't the most computationally tractable, especially for data already requiring dimensionality reduction), these results aren't as interpretable, which brings us to SVD. SVD breaks down your matrix X ( m x p) into a product of 3 matrices: U ( m x k), D ( k x k), V ( k x p) in that order. The derivation is a bunch of matrix algebra that I'll skip. The important part is that D is a diagonal matrix of singular values corresponding to k latent features.
This means that U is some matrix describing a relationship between observation and latent feature, and V is describing a relationship between latent feature and original feature. Performing SVD on a matrix of users (rows) by their rankings of specific movies (columns) can reveal topics (latent features). These matrices may contain negative elements, which might not be applicable or interpretable in some contexts (like term frequency or inverse document frequency in NLP). In that case, we can use NMF. NMF factors your matrix X ( m x p) into W ( m x k) and H ( k x p). This is done using alternating least squares, where you hold W (or H) constant, solve for H (or W) using simple regression while keeping only the non-negative values, then repeat by switching W & H. It's important to remember that since this is an iterative process, your resulting factorization is only approximate. Recommenders Recommenders work by suggesting an item to a user that is similar to some preference he/she has exhibited. "Similar" can be defined as similarity between users or similarity between items. The number of users is usually larger than the number of items, so item-item similarity may be nicer to build a recommender with. Similarity is measured on a scale of 0 (totally dissimilar) to 1 (identical). Some of the ways similarity can be computed include:
euclidean distance (pretty intuitive)
cosine similarity (similarity of vector directions)
Pearson correlation (normalized covariance) - since this metric is normalized, it will capture similarity between users who rate items consistently high/low. Ex: if one person rates some items as 5, 3, 4, and another person rates the same items as 3, 1, 2, they will have a similarity of 1 since they relatively rate the same way
Jaccard similarity (for sets constructed from booleans) - this is used when we have a boolean measure like bought/didn't buy instead of ratings; the similarity is the ratio of the number of people who bought item 1 & item 2 to the number of people who bought item 1 or item 2
Then, we can construct a similarity matrix where rows and columns correspond to items and are populated with the similarity between two items (this matrix will be symmetric). Given some utility matrix that describes a user's rating of every possible item where rows are users and columns are items, we can expect that this matrix will not be fully populated. But, we can use the similarity matrix to predict a user's ratings for unrated items, and return items with the highest predicted rating as recommendations: $$rating(u, i) = \frac{\sum_{j \in I_u} similarity (i, j) \times r_{u, j}}{\sum_{j \in I_u} similarity (i, j)}$$ where $$I_u = set \ of \ items \ rated \ by \ user \ u$$ $$r_{u, j} = user \ u's \ rating \ of \ item \ j$$ The predicted ratings are computed like a weighted average of all items a user has rated- items that are similar to an item j that user u has rated will be weighted more highly with the user's rating of item j. You can also speed up the prediction by only looking at "neighborhoods" of items. In the above calculation, you would only use similarity matrix values for elements that are most similar to the unrated item you are trying to predict. Issues with Recommenders:
Utility matrix can be sparse! This is when dimensionality reduction is helpful for us.
Hard to validate- to measure that your recommender is effective you could perform hypothesis testing to see if conversions have changed.
Requires a user to rate a number of items before recommendations can be given
Then a long awaited break to celebrate the halfway point of our program! Things I thought I'd accomplish on my break:
figuring out what to work on for my capstone project
revamping my structural engineering resume to a data science resume
brushing up my LinkedIn (adding data sciencey skills to get endorsed, figuring out what description to put for my experience at Galvanize)
updating the blog
exploring topics on my own: word2vec, neural nets, d3js and more things... the list never ends
finding a new place to live after my summer sublet is up
Things I got done on my break:
revamping my structural engineering resume to a data science resume
brushing up my LinkedIn (adding data sciencey skills to get endorsed, figuring out what description to put for my experience at Galvanize)
kinda updating the blog
Fussing with Craigslist Missed Connections as I mentioned in the last post
... there's always an improvement or update to be made
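Circling back to the item-item recommender formula earlier in this post, here is a tiny sketch of my own (the 4x4 ratings matrix is made up purely for illustration) computing cosine item-item similarity and the similarity-weighted rating prediction:

import numpy as np

# Made-up utility matrix: rows = users, cols = items, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)

# Item-item cosine similarity (items are columns)
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)

def predict(user, item):
    """Predicted rating: similarity-weighted average over items the user has rated."""
    rated = np.where(R[user] > 0)[0]
    w = S[item, rated]
    return np.dot(w, R[user, rated]) / w.sum()

print(predict(user=0, item=2))   # predicted rating of item 2 for user 0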
Let $G, H$ be simple finite graphs and $A = G \times H$. Here $ G \times H $ is the tensor product (also called the direct or categorical product) of $ G $ and $ H $. Let $G$ have the smaller chromatic number. Experiments suggest that given a coloring $f$ of $G$ one can color $A = G \times H$. Color the vertices $(a,b)$ of $A$ with $g(a,b)=f(a), \; a \in V(G)$. Experimentally the coloring is valid. This was verified for 1000 random graphs and for $\{ \text{Petersen graph}, K_2, K_6, C_5, \text{Star graph 6}, \text{Random graph of order 14} \} \times \{\text{All graphs up to 7 vertices}\}$. This is related to Hedetniemi's Conjecture, which states $ \chi(G \times H) = \min \{ \chi(G), \chi(H) \} $. Any counterexamples to this coloring? Is it possible to prove this is a valid coloring for certain $G$ or $H$? What types of graphs are potential counterexamples?
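For anyone who wants to reproduce the experiment, here is a small sketch of my own using networkx (not the original verification code): it colors $G$ greedily, copies that coloring to the first coordinate of $G \times H$, and checks that no edge of the tensor product is monochromatic.

import networkx as nx

def coloring_from_first_factor(G, H):
    """Color G x H by g(a, b) = f(a), where f is a greedy proper coloring of G,
    and check the coloring is proper on the tensor (categorical) product."""
    f = nx.coloring.greedy_color(G)            # proper coloring of G
    A = nx.tensor_product(G, H)                # vertices of A are pairs (a, b)
    g = {(a, b): f[a] for (a, b) in A.nodes()}
    return all(g[u] != g[v] for u, v in A.edges())

G = nx.petersen_graph()
H = nx.complete_graph(6)
print(coloring_from_first_factor(G, H))        # True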
For any spherical body with a density $\rho$ and radius $R$ and no atmosphere, we can calculate this easily. Let's assume you launch to a very low orbit that just skims the surface of the sphere. You can add 10% or 20% later for a planet like Earth with its atmosphere. Venus would be a lot harder! (so I've asked separately Launch to orbit delta-v penalty ... The amount of propellant required to achieve a certain delta-V is dependent on the ratio between the starting and ending mass of the spacecraft, according to the Tsiolkovsky rocket equation; a given thruster and fuel supply will get you more delta-V on a smaller spacecraft and less delta-V on a larger one. That is, 0.058 km/s per kg is not an inherent ... Some good practices I'm aware of are: As you mentioned, maneuvers are simulated before they are commanded and their effect is evaluated on ground so that thruster parameters and tank filling are updated, so if anything funny is happening during maneuvers this can be identified. If propulsion is electric (which is still not so common), then thrusting is ... I too am going to start with the orbital speed as a first approximation, but we can do slightly better than that. $$\Delta v = v_{orbit} = \sqrt{\frac{\mu}{r}}$$ If you are using this approximation alone, you would want to use the object's radius for $r$ and not the radius of the low orbit, as that gives a slightly higher cost that better accounts for the ...
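A minimal sketch of my own of the first answer's estimate (the Moon-like density and radius below are purely illustrative placeholders): compute $\mu = G\cdot \frac{4}{3}\pi R^3 \rho$ for a spherical body and take $\Delta v \approx \sqrt{\mu / R}$, the speed of a surface-skimming circular orbit.

import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def surface_orbit_delta_v(density, radius):
    """Rough launch delta-v (m/s) for an airless sphere: circular orbit speed at the surface."""
    mass = density * (4.0 / 3.0) * math.pi * radius**3
    mu = G * mass
    return math.sqrt(mu / radius)

# Illustrative Moon-like values: density ~3344 kg/m^3, radius ~1.737e6 m
print(surface_orbit_delta_v(3344, 1.737e6))   # roughly 1.7 km/s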
As noted in the comments, the question has a positive answer for tame Artin stacks (see e.g. [Ols12, Prop 6.1]) and also for non-tame Deligne–Mumford stacks (see [KV04, Lem. 2]). It is however also true in general. Throughout, let $\mathscr{X}$ be an algebraic stack with finite inertia and let $\pi\colon \mathscr{X}\to X$ denote its coarse moduli space. Proposition 1. The map $\pi^*\colon \mathrm{Pic}(X)\to \mathrm{Pic}(\mathscr{X})$ is injective and if $\mathscr{X}$ is quasi-compact, then $\mathrm{coker}(\pi^*)$ has finite exponent, i.e., there exists a positive integer $n$ such that $\mathcal{L}^{n}\in \mathrm{Pic}(X)$ for every $\mathcal{L}\in \mathrm{Pic}(\mathscr{X})$. For simplicity, assume that $\pi$ is of finite type (as is the case if $\mathscr{X}$ is of finite type over a noetherian base scheme) although this is not necessary for any of the results (the only property that is used is that $\mathscr{X}\to X$ is a universal homeomorphism and that invertible sheaves are trivial over semi-local rings etc). Lemma 1. The functor $\pi^*\colon \mathbf{Pic}(X)\to \mathbf{Pic}(\mathscr{X})$ is fully faithful. In particular: 1. If $\mathcal{L}\in \mathrm{Pic}(X)$, then the adjunction map $\mathcal{L}\to\pi_*\pi^*\mathcal{L}$ is an isomorphism and the natural map $H^0(X,\mathcal{L})\to H^0(\mathscr{X},\pi^*\mathcal{L})$ is an isomorphism. 2. The natural map $\pi^*\colon \mathrm{Pic}(X)\to \mathrm{Pic}(\mathscr{X})$ is injective. 3. Moreover, a line bundle on $\mathscr{X}$ that is locally trivial on $X$ comes from $X$, that is: Let $g\colon X'\to X$ be faithfully flat and locally of finite presentation and let $f\colon \mathscr{X}':=\mathscr{X}\times_X X'\to \mathscr{X}$ denote the pull-back. If $\mathcal{L}\in \mathrm{Pic}(\mathscr{X})$ is such that $f^*\mathcal{L}$ is in the image of $\pi'^*\colon\mathrm{Pic}(X')\to \mathrm{Pic}(\mathscr{X}')$, then $\mathcal{L}\in \mathrm{Pic}(X)$. Proof. Statement 1 follows immediately from the isomorphism $\mathcal{O}_X\to \pi_*\mathcal{O}_{\mathscr{X}}$. For the other statements, let $\mathcal{L}\in \mathrm{Pic}(\mathscr{X})$ and identify $\mathcal{L}$ with a class $c_\mathcal{L}\in H^1(\mathscr{X},\mathcal{O}_{\mathscr{X}}^*)$. If $\mathcal{L}$ is in the image of $\pi^*$ or locally in its image, then there exists an fppf covering $g\colon X'\to X$ such that $f^*\mathcal{L}$ is trivial. This means that we can represent $c_\mathcal{L}$ by a Čech $1$-cocycle for $f\colon \mathscr{X}'\to \mathscr{X}$. But since $H^0(\mathscr{X}\times_X U,\mathcal{O}_{\mathscr{X}}^*)=H^0(U,\mathcal{O}_X^*)$ for any flat $U\to X$, this means that $c_\mathcal{L}$ is given by a Čech $1$-cocycle for the covering $X'\to X$ giving a unique class in $H^1(X,\mathcal{O}_X^*)$. QED As mentioned in the comments, when $\mathscr{X}$ is tame, then 3. can be replaced with: $\mathcal{L}\in \mathrm{Pic}(\mathscr{X})$ is in the image of $\pi^*$ if and only if the restriction to the residual gerbe $\mathcal{L}|_{\mathscr{G}_x}$ is trivial for every $x\in |\mathscr{X}|$ (see [Alp13, Thm 10.3] or [Ols12, Prop 6.1]). Lemma 2. If there exists an algebraic space $Z$ and a finite morphism $p\colon Z\to \mathscr{X}$ such that $p_*\mathcal{O}_Z$ is locally free of rank $n$, then the cokernel of $\pi^*\colon \mathrm{Pic}(X)\to \mathrm{Pic}(\mathscr{X})$ is $n$-torsion. Proof. If $\mathcal{L}\in\mathrm{Pic}(\mathscr{X})$, then $\mathcal{L}^{n}=N_p(p^*\mathcal{L})$ (the norm is defined and behaves as expected since $p$ is flat). Since $Z\to X$ is finite, we can trivialize $p^* \mathcal{L}$ étale-locally on $X$.
This implies that the norm is trivial étale-locally on $X$, i.e., $\mathcal{L}^{n}$ is trivial étale-locally on $X$. The result follows from 3. in the previous lemma. QED Proof of Proposition 1. There exists an étale covering $\{X'_i\to X\}_{i=1}^r$ such that $\mathscr{X}'_i:=\mathscr{X}\times_X X'_i$ admits a finite flat covering $Z_i\to \mathscr{X}'_i$ of some constant rank $n_i$ for every $i$. By the two lemmas above, the integer $n=\mathrm{lcm}(n_i)$ then kills every element in the cokernel of $\pi^*\colon \mathrm{Pic}(X)\to \mathrm{Pic}(\mathscr{X})$. QED It is also simple to prove things like: Proposition 2. Let $f\colon \mathscr{X}'\to \mathscr{X}$ be a representable morphism and let $\pi'\colon \mathscr{X}'\to X'$ denote the coarse moduli space and $g\colon X'\to X$ the induced morphism between coarse moduli spaces. If $\mathscr{X}$ is quasi-compact and $\mathcal{L}\in \mathrm{Pic}(\mathscr{X}')$ is $f$-ample, then there exists an $n$ such that $\mathcal{L}^{n}=\pi'^*\mathcal{M}$ and $\mathcal{M}$ is $g$-ample. Proof. The question is local on $X$ so we may assume that $X$ is affine and $\mathscr{X}$ admits a finite flat morphism $p\colon Z\to \mathscr{X}$ of constant rank $n$ with $Z$ affine. Let $p'\colon Z'\to \mathscr{X}'$ be the pull-back. Then $p'^*\mathcal{L}$ is ample. We have seen that $\mathcal{L}^{n}=\pi'^*\mathcal{M}$ for $\mathcal{M}\in \mathrm{Pic}(X')$. It is enough to show that sections of $\mathcal{M}^m=\mathcal{L}^{mn}$ for various $m$ define a basis for the topology of $X'$. Thus let $U'\subseteq X'$ be an open subset and pick any $x'\in U'$. Since $\mathcal{L}|_{Z'}$ is ample, we may find $s\in \Gamma(Z',\mathcal{L}^m)$ such that $D(s)=\{s\neq 0\}$ is an open neighborhood of the preimage of $x'$ (which is finite) contained in the preimage of $U'$. Let $t=N_p(s)$. Then $t\in H^0(\mathscr{X}',\mathcal{L}^{mn})=H^0(X',\mathcal{M}^m)$. But $\mathscr{X}\setminus D(t)=p(Z\setminus D(s))$ so $D(t)=X\setminus \pi(p(Z\setminus D(s)))$ is an open neighborhood of $x'$ contained in $U'$. QED Acknowledgments I am grateful for comments from Jarod Alper and Daniel Bergh. References [Alp13] Alper, J. Good moduli spaces for Artin stacks, Ann. Inst. Fourier (Grenoble) 63 (2013), no. 6, 2349–2402. [KV04] Kresch, A. and Vistoli, A. On coverings of Deligne–Mumford stacks and surjectivity of the Brauer map, Bull. London Math. Soc. 36 (2004), no. 2, 188–192. [Ols12] Olsson, M. Integral models for moduli spaces of G-torsors, Ann. Inst. Fourier (Grenoble) 62 (2012), no. 4, 1483–1549.
Mini Series: Designing a Satellite for Dummies Are you an aspiring aerospace engineer, a space enthusiast, a parent checking your child’s homework or simply interested in the specifics of how to design certain satellite parts? Then this is the place to be. In this mini series we will go through the basics of designing and scaling a satellite, ranging from solar arrays to propellant tanks and even orbital parameters. If you would like us to cover other space-related topics, feel free to reach out to engineering@valispace.com. Part 1: How to size a Solar Array We will start this series with a tutorial on how to determine the size of the solar panels of a satellite, being one of the parts that almost every satellite needs. The solar arrays are part of the power subsystem of a satellite and normally act as the main source of power. Firstly, we show how the required power will be calculated, after which the other factors influencing the required size of the arrays will be discussed. In the end, you will be able to fully perform the sizing of your own satellite solar array! The required power The power subsystem of a satellite has to provide power to the satellite both in sunlight (subscript d) and in eclipse (subscript e). Usually, this is done by a combination of batteries and solar arrays. When the satellite is in eclipse (so no sunlight reaches the satellite), the batteries are drained to power the satellite. In sunlight, the solar arrays both power the satellite and recharge the batteries. This means that the power required from the solar arrays $P_{SA}$ depends on the power required by the satellite during daylight, the time of daylight, the power required by the satellite during the eclipse, the time of eclipse and the efficiencies of the power transfers (from the arrays directly to the satellite parts (loads) $\chi_d$ and indirectly, so via the batteries $\chi_e$): $$P_{SA} = \frac{P_d \cdot t_d / \chi_d + P_e \cdot t_e / \chi_e}{t_d} \; \; [W]$$ The required solar array size Now that we know how much power the solar arrays have to provide for the satellite, we will use this as input to find the required size of the solar arrays. The ideal flux The first step in this process is to find the amount of power that the solar array will provide per square meter in ideal circumstances ($\Phi_{ideal}$). This is dependent on the amount of solar radiation (in Earth orbit generally taken to be $J_S = 1367 \; \; [W/m^2]$) and the efficiency of the solar cells (e.g. for Ga-As cells $\eta_{cell} = 36\% $). This results in the following ideal flux equation: $$\Phi_{ideal} = J_S \cdot \eta_{cell} \; \; [W/m^2]$$ The Beginning of Life (BOL) flux The actual flux at the start of the satellite’s life is dependent on two more parameters, namely the inherent degradation ($I_d$) and the highest solar incidence angle ($\theta_{max}$). The inherent degradation is directly proportional to the amount of area of the array that is not directly used for power generation. The highest solar incidence angle is the maximum to be expected angle between the normal of the solar array plane and the incoming solar radiation. The Flux at BOL will look like this: $$\Phi_{BOL} = \Phi_{ideal} \cdot I_d \cdot cos(\theta_{max}) \; \; [W/m^2]$$ The End of Life (EOL) flux Over the lifetime of a solar array the individual cells start degrading, mainly due to continued exposure to radiation. 
The amount by which the entire array is degraded is normally calculated using the lifetime degradation factor ($L_d$): $$L_d = (1-F_d)^{N_{y}} \; \; [-] $$ Here, $F_d$ is the yearly degradation factor (for Ga-As cells around 2.75%) and $N_{y}$ is the number of years until EOL is reached. The flux at EOL will then be: $$\Phi_{EOL} = \Phi_{BOL} \cdot L_d \; \; [W/m^2] $$ Sizing of the solar array area Since the satellite still requires the same amount of power at the end of its life, the EOL flux is what ultimately determines the total area of the arrays. The total solar array area is calculated as follows: $$A_{SA} = \frac{P_{SA}}{\Phi_{EOL}} \; \; [m^2]$$ If you followed the steps correctly, you have now performed the sizing of your own solar arrays, congratulations! We hope you liked this mini-tutorial! If you want to learn how the solar array and the power subsystem are related to other subsystems in a satellite, or how to design a complete satellite using Valispace and practical examples, also check our Satellite Tutorial by Calum Hervieu and Paolo Guardabasso. Stay tuned for more and feel free to give us feedback at contact-us@valispace.com! Valispace is a single source of truth and collaboration platform for all your engineering data. Click here to get a demo and try it for free.
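To tie the formulas of this tutorial together, here is a small Python sketch that chains them into a single sizing function. All numerical inputs in the example call (power levels, orbit times, efficiencies, degradation, mission life) are invented illustrative values, not recommended figures.

```python
import math

def solar_array_area(P_d, P_e, t_d, t_e, chi_d, chi_e,
                     eta_cell, I_d, theta_max_deg, F_d, N_years,
                     J_S=1367.0):
    """Chain the sizing formulas: required power -> ideal, BOL and EOL flux -> area."""
    # Power the arrays must deliver during daylight [W]
    P_SA = (P_d * t_d / chi_d + P_e * t_e / chi_e) / t_d
    # Ideal flux of the cells [W/m^2]
    phi_ideal = J_S * eta_cell
    # Beginning-of-life flux: inherent degradation and worst-case incidence angle
    phi_BOL = phi_ideal * I_d * math.cos(math.radians(theta_max_deg))
    # Lifetime degradation and end-of-life flux
    L_d = (1.0 - F_d) ** N_years
    phi_EOL = phi_BOL * L_d
    return P_SA / phi_EOL

# Example values (purely illustrative)
area = solar_array_area(P_d=800.0, P_e=600.0,      # W in daylight / eclipse
                        t_d=60.0, t_e=35.0,        # minutes per orbit
                        chi_d=0.85, chi_e=0.65,    # transfer efficiencies
                        eta_cell=0.30, I_d=0.77,   # cell efficiency, inherent degradation
                        theta_max_deg=23.5,        # worst-case incidence angle [deg]
                        F_d=0.0275, N_years=10)    # yearly degradation, mission life
print(f"Required solar array area: {area:.2f} m^2")
```

Note that the daylight and eclipse times only enter as a ratio, so any consistent time unit works.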
8 results authored by Jeanne Parmentier.

This quiz contains questions on functions, limits, logs, exponential functions, simultaneous equations and quadratic equations. Exam (9 questions). Draft, CC BY, Published. Last modified 07/09/2018 13:15.

Using simple substitution to find $\lim_{x \to a} bx+c$, $\lim_{x \to a} bx^2+cx+d$ and $\displaystyle \lim_{x \to a} \frac{bx+c}{dx+f}$ where $d\times a+f \neq 0$. Question. Draft, CC BY, Published. Last modified 07/09/2018 13:14.

Understanding of intersection and union symbols. Question. Draft, CC BY, Published. Last modified 07/09/2018 13:09.

We have several sets whose elements are integers drawn at random. Elementary operations involving $\cap,\;\cup$ and the notion of the complement have to be carried out. Question. Draft, CC BY, Published. Last modified 07/09/2018 13:07.

Understanding of intersection and union symbols. Question. Draft, CC BY, Published. Last modified 07/09/2018 12:59.

The student is asked to factorise a quadratic $x^2 + ax + b$. A custom marking script uses pattern matching to ensure that the student's answer is of the form $(x+a)(x+b)$, $(x+a)^2$, or $x(x+a)$. To find the script, look in the Scripts tab of part a. Question. Draft, CC BY, Published. Last modified 07/09/2018 12:53.

Use laws for addition and subtraction of logarithms to simplify a given logarithmic expression to an arbitrary base. Question. Draft, CC BY, Published. Last modified 20/07/2018 09:31.
A few simple steps to the solution

At high school level we often find that math problems are solved in a long series of steps. This is what we call the conventional approach to solving problems. This approach not only involves a large number of steps; in most cases the steps themselves introduce a higher level of complexity and increase the chances of error. More importantly, the conventional, inefficient problem solving approach curbs the out-of-the-box thinking skills of the students.

While dealing with school-level Trigonometry problems in a competitive exam scenario, the student is forced to solve such a problem in a minute, not in many minutes. The pressure to find the solution along the shortest path gains immense importance for successful performance in such tests as SSC CGL. Though at school level all steps to the solution are to be written down, that is not what takes up most of the time; the bulk of the time is actually consumed in inefficient problem solving, in finding the path and steps to the solution, in thinking through the barriers to the solution.

We will take up two apparently difficult Trigonometry problems from SSC CGL test level that actually belong to school level and appear in MCQ form in the competitive test scenario. The problem solving thinking process that we will highlight here through the solutions can help SSC CGL aspirants as well as high school students to solve problems efficiently like a problem solver, using deductive reasoning, powerful strategies, techniques and basic subject concepts, rather than being constrained by the costly routine approach.

Problem example 1

If $xcos\theta - ysin\theta = \sqrt{x^2 + y^2}$, and $\displaystyle\frac{cos^2\theta}{a^2} + \displaystyle\frac{sin^2\theta}{b^2}=\frac{1}{x^2 + y^2}$ then the correct relation among the following is,

$\displaystyle\frac{x^2}{b^2} - \displaystyle\frac{y^2}{a^2} = 1$

$\displaystyle\frac{x^2}{b^2} + \displaystyle\frac{y^2}{a^2} = 1$

$\displaystyle\frac{x^2}{a^2} - \displaystyle\frac{y^2}{b^2} = 1$

$\displaystyle\frac{x^2}{a^2} + \displaystyle\frac{y^2}{b^2} =1$

First try to solve this problem yourself, and only then go ahead. You might be able to reach the elegant solution to this problem yourself.

Efficient solution in a few steps

Deductive reasoning, first stage analysis: one must analyze the problem first. Our first observation is that all terms in the target expression and the second given expression are in squares. This urges us straightaway to square up the first given expression and simplify as much as possible. The phrase "as much as possible" is important, as we will see.

$xcos\theta - ysin\theta = \sqrt{x^2 + y^2}$,

Or, $x^2cos^2\theta - 2xysin\theta{cos\theta} + y^2sin^2\theta = x^2 + y^2$.

This gives us the opportunity to apply the principle of collection of friendly terms by collecting the coefficients of $x^2$ and $y^2$ from both sides together. We are all the more interested to do this because we can see that by this action we will get the simplifiable factors $x^2(1 - cos^2\theta)$ and $y^2(1 - sin^2\theta)$.

Rearranging the terms, taking all of them on one side of the equation we get,

$x^2(1 - cos^2\theta) + 2xysin\theta{cos\theta} + y^2(1 - sin^2\theta) = 0$,

Or, $(xsin\theta + ycos\theta)^2 = 0$

Or, $xsin\theta + ycos\theta = 0$,

Or, $\displaystyle\frac{x}{y} = -cot\theta$

Or, $\displaystyle\frac{x^2}{y^2} = cot^2\theta$.
This is what we wanted, because from our intial analysis, it was clear that we need to eliminate $sin\theta$ and $cos\theta$ and to do that we must have had a relationship of a trigonometric function in terms of $x^2$ and $y^2$. We are happy with any trigonometric function because from basic concepts we know, $sin^2\theta = 1 - cos^2\theta$ $cosec^2\theta - 1 = cot^2\theta$ $sec^2\theta - 1 = tan^2\theta$. If we get the value of one of the basic identities we can derive any other. To eliminate we need to get the value of $sin^2\theta$ and $cos^2\theta$ in terms of $x^2$ and $y^2$. Let's get those relations now, $\displaystyle\frac{x^2}{y^2} = cot^2\theta = cosec^2\theta - 1$, Or, $cosec^2\theta = \displaystyle\frac{x^2 + y^2}{y^2}$, Or, $sin^2\theta = \displaystyle\frac{y^2}{x^2 + y^2}$, and so, $cos^2\theta = \displaystyle\frac{x^2}{x^2 + y^2}$. Substituting these two in the target expression we find the factor, $\displaystyle\frac{1}{x^2 + y^2}$ common to both the terms in LHS and also on the RHS and thus it cancels out, leaving just, $\displaystyle\frac{x^2}{a^2} + \displaystyle\frac{y^2}{b^2} =1$. Answer: d: $\displaystyle\frac{x^2}{a^2} + \displaystyle\frac{y^2}{b^2} =1$. Problem example 2 If $x = \displaystyle\frac{cos\theta}{1 - sin\theta}$, then the expression, $\displaystyle\frac{cos\theta}{1 + sin\theta}$ is, $\displaystyle\frac{1}{x + 1}$ $\displaystyle\frac{1}{1 - x}$ $\displaystyle\frac{1}{x}$ $x - 1$ Efficient solution in a few steps Problem analysis We find $1+sin\theta$ in the target expression whereas its complementary expression, $1 - sin\theta$ is already there in the given expression. Why complementary? Because product of the two will be $1 - sin^2\theta = cos^2\theta$, always simplifying expressions in Trigonometry. So we multiply both numerator and denominator of given expression by $1 + sin\theta$, $x=\displaystyle\frac{cos\theta(1 + sin\theta)}{1 - sin^2\theta}$, $\hspace{5mm}=\displaystyle\frac{cos\theta(1 + sin\theta)}{cos^2\theta}$, $\hspace{5mm}=\displaystyle\frac{1 + sin\theta}{cos\theta}$, So just inverting we get, $\displaystyle\frac{cos\theta}{1 + sin\theta} = \frac{1}{x}$. Answer: c: $\displaystyle\frac{1}{x}$. Solutions to both the problems have come quickly and so we won't analyze further. Generally we find this class of problem is solved in longer steps. It would be so because, unless you see through the barriers of the problem very quickly applying the concepts and techniques that are relevant to the problem to take you to the solution actually in a few simple steps, more often than not you will be wasting valuable time in searching for the solution. We will end with our standard advice, Always think: is there any other shorter better way to the solution? And use your brains more than your factual memory and mass of mechanical routine procedures. Resources on Trigonometry and related topics You may refer to our useful resources on Trigonometry and other related topics especially algebra. Tutorials on Trigonometry General guidelines for success in SSC CGL Efficient problem solving in Trigonometry How to solve difficult SSC CGL level School math problems in a few simple steps, Trigonometry 4 A note on usability: The Efficient math problem solving sessions on School maths are equally usable for SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to a MCQ type question, and secondly, the same set of problem solving reasoning and techniques have been used for any efficient Trigonometry problem solving.
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself! Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way! The most fundamental is this: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The pullback is a subset of \( X \). The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches. The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$ Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\). Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is: Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek". The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part: Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \). Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\) This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful. Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \). Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \). What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in this revolutionary paper: By now this observation is part of a big story that "explains" logic using category theory. Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"? Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
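Since all three maps are given by explicit set-builder formulas, they are easy to play with on finite sets. Here is a small Python sketch (my own illustration, not part of the lecture) that implements \(f^{\ast}\), \(f_{!}\) and \(f_{\ast}\) for a toy thermometer map and checks both adjunctions by brute force.

```python
from itertools import combinations

def preimage(f, T):                 # f^*(T) = {x : f(x) in T}
    return {x for x in f if f[x] in T}

def image(f, S):                    # f_!(S) = {f(x) : x in S}
    return {f[x] for x in S}

def direct_image(f, S, X, Y):       # f_*(S) = {y in Y : every x with f(x) = y lies in S}
    return {y for y in Y if all(x in S for x in X if f[x] == y)}

def subsets(A):
    A = list(A)
    return (set(c) for r in range(len(A) + 1) for c in combinations(A, r))

# Toy "thermometer" map: three room states, three possible readings
X = {'hot1', 'hot2', 'cold'}
Y = {35, 10, 0}                     # 0 is a reading no room state maps to
f = {'hot1': 35, 'hot2': 35, 'cold': 10}

for S in subsets(X):
    for T in subsets(Y):
        # f_! is left adjoint to f^*:  f_!(S) <= T  iff  S <= f^*(T)
        assert (image(f, S) <= T) == (S <= preimage(f, T))
        # f_* is right adjoint to f^*:  f^*(T) <= S  iff  T <= f_*(S)
        assert (preimage(f, T) <= S) == (T <= direct_image(f, S, X, Y))
print("Both adjunctions hold on this toy example.")
```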
A typical nonlinear FE problem can be given by: \begin{equation} \mathbf{R} = \mathbf{K}\rho + \mathbf{F}_{NL}(\rho), \end{equation} where $\mathbf{R}$ is the vector of applied forces, $\mathbf{K}$ is the material or linear stiffness matrix, $\mathbf{F}_{NL}$ is the nonlinear force function and $\rho$ is the vector of displacements. The tangent stiffness matrix is: \begin{equation} K_T = K + \dfrac{\partial F_{NL}}{\partial \rho}(\rho) \end{equation} If an incremental procedure is applied, $\dfrac{\partial F_{NL}}{\partial \rho}$ will need to be updated at each load application step, since it is a function of the deformed configuration. But why does the material stiffness matrix $K$ not need to be updated? $K$ is formed by rotating local stiffness matrices and assembling them into a global frame of reference. Surely when the geometry becomes deformed, $K$ will need updating also, even if it is the linear stiffness matrix?
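To make the setup concrete, here is a toy Python sketch of the incremental Newton procedure described in the question, on an invented two-degree-of-freedom system with a cubic nonlinear force. It only illustrates which matrices get reassembled at each step; it does not by itself answer why $K$ may stay fixed.

```python
import numpy as np

# Toy 2-DOF system: R = K*rho + F_NL(rho), with an invented cubic nonlinearity.
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])          # linear (material) stiffness, assembled once

def F_NL(rho):                        # nonlinear internal force (illustrative)
    return 0.1 * rho**3

def dF_NL(rho):                       # its tangent, re-evaluated at the current state
    return np.diag(0.3 * rho**2)

R_total = np.array([1.0, 0.5])        # total applied load
rho = np.zeros(2)
for step in range(1, 11):             # apply the load in 10 increments
    R = R_total * step / 10.0
    for _ in range(20):               # Newton iterations within the increment
        residual = R - K @ rho - F_NL(rho)
        if np.linalg.norm(residual) < 1e-10:
            break
        K_T = K + dF_NL(rho)          # tangent stiffness: K is reused, only dF_NL changes
        rho += np.linalg.solve(K_T, residual)
print("converged displacements:", rho)
```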
Event detail

Student Probability/PDE Seminar: Metastability of the Zero Range Process on a Finite Set Without Capacity Estimates

Seminar | April 6 | 2:10-3:30 p.m. | 891 Evans Hall

Chanwoo Oh, UC Berkeley

In this talk, I'll prove metastability of the zero range process on a finite set without using capacity estimates. The proof is based on the existence of certain auxiliary functions. One such function is inspired by Evans and Tabrizian's article, "Asymptotics for the Kramers-Smoluchowski equations". This function is the solution of a certain equation involving the infinitesimal generator of the zero range process. Another relevant auxiliary function is from a work of Beltran and Landim. We also use martingale problems to characterize Markov processes.

Let $p$ be the jump rates of a random walk on a finite set $S$. Assume for simplicity that the uniform measure on $S$ is an invariant measure of this random walk (we expect that our method is applicable for an arbitrary invariant measure $m$). Consider the zero range process on $S$, where the rate at which a particle jumps from a site $x$ with $k$ particles to a site $y$ is given by $g(k)\, p(x, y)$. Here $g(0) = 0$, $g(1) = 1$, and $g(k) = \left(\frac{k}{k-1}\right)^\alpha$ for $k > 1$ and some $\alpha > 1$. As the total number of particles $N \rightarrow \infty$, most of the particles concentrate on a single site. In the time scale $N^{1+\alpha}$, the site of concentration evolves as a Markov chain whose jump rates are proportional to the capacities of the underlying random walk. This talk is based on joint work with F. Rezakhanlou.
This is a contributed gemstone, written by Sushant Vijayan. Enjoy!

Consider a generic cubic equation $$at^3+bt^2+ct+d=0,$$ where $a,b,c,d$ are real numbers and $a \neq 0$. We can transform this equation into a depressed cubic equation, i.e., one with no $t^2$ term, by means of the Tschirnhaus transformation $t=x-\frac{b}{3a}$, followed by dividing through by $a$. The depressed cubic equation is given by $$x^3+px+q=0$$ where $p$ and $q$ are related to $a,b,c,d$ by the relation given here. Setting $p=-m$ and $q=-n$ and rearranging we arrive at $$x^3 =mx+n \hspace{3cm} (1)$$ We will investigate the nature of roots for this equation. We begin by plotting the graph of $y=x^3$: it is an odd, monotonic, nondecreasing function with an inflection point at $x=0$. The real roots of equation (1) are the $x$-coordinates of the points of intersection between the straight line $y= mx+n$ and the curve $y=x^3$. It is clear geometrically that however we draw the straight line, there would be, as a bare minimum, at least one point of intersection. This corresponds to the fact that all cubic equations with real coefficients have at least one real root. It is immediately clear that if the slope $m$ of the line is less than $0$, there is only one point of intersection and hence only one real root. On the other hand, the condition $m>0$ is equivalent to demanding that the depressed cubic $y(x)= x^3-mx-n$ has two points of local extrema (which is a necessary, but not sufficient, condition for the existence of three real roots). Now the possibility of repeated real roots occurs when the straight line is a tangent to the curve $y=x^3$. Hence consider the slope of the function $y=x^3$: $$\frac{dy}{dx}=3x^2$$ Now equating the slopes (note $m\ge 0$) we get the tangent points: $$3x^2=m$$ $$x=\pm \sqrt{\frac {m}{3} } $$ Equivalently, the tangents are at the two values of $x$ for which $$|x|=\sqrt{\frac {m}{3} }.$$ The corresponding $y$-intercepts for these tangent straight lines (at a tangent point $x_0$ we have $n=x_0^3-mx_0=-\frac{2m}{3}x_0$) are: $$n=\pm \frac{2m}{3}\sqrt{\frac{m}{3}}$$ or, in other words, $$|n|=\frac{2m}{3}\sqrt{\frac{m}{3}}$$ Thus for a given slope $m\ge 0$ there are only two tangents, with the corresponding tangent points and $y$-intercepts. This is the case of equation (1) having one repeated real root together with one more real root. If instead $|n| < \frac{2m}{3}\sqrt{\frac{m}{3}}$, the straight line is parallel to the two tangent lines (since it has the same slope) and lies in the region bounded by them. Hence it would necessarily intersect the curve $y=x^3$ at three points. This corresponds to the situation of equation (1) having three distinct real roots. And the case where $ |n| > \frac{2m}{3}\sqrt{\frac{m}{3}}$ corresponds to straight lines lying outside the region bounded by the two tangent lines, with only one point of intersection. Hence the necessary and sufficient condition for three real roots (including repeated roots) is given by: $$|n| \le \frac{2m}{3}\sqrt{\frac{m}{3}} \hspace{3cm}(2) $$ We note that the condition $m\ge 0$ is subsumed within the above condition (2), for if $m<0$ then condition (2) cannot be satisfied. Condition (2) still involves a radical, so we square both sides and rearrange to arrive at: $$ \frac{4m^3}{27}-n^2 \ge 0$$ Multiply by 27 and set $\bigtriangleup=4m^3-27n^2 $ to get $$ \bigtriangleup \ge 0 $$ The quantity $\bigtriangleup $ is the discriminant of the cubic and has all the information required to determine the nature of the roots of the depressed cubic given by equation (1).
This may then be written in terms of $a,b,c,d$ by inverting the Tschirnhaus transformation. A similar exercise may be carried out for the quartic equation, and a similar, albeit more complicated, expression can be derived for its discriminant. It would be very interesting to carry out the same exercise for the quintic and see where it fails (fail it must, for otherwise it would contradict Abel's famous result on the insolvability of quintic equations by radical expressions).
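As a quick sanity check on the criterion $\bigtriangleup \ge 0$ derived above, here is a small Python sketch (added for illustration) that classifies a depressed cubic $x^3 = mx + n$ from its discriminant, with two hand-picked test cases whose factorizations are easy to verify.

```python
def classify_depressed_cubic(m, n):
    """Classify the roots of x^3 = m*x + n using Delta = 4*m^3 - 27*n^2."""
    delta = 4 * m**3 - 27 * n**2
    if delta > 0:
        return delta, "three distinct real roots"
    if delta == 0:
        return delta, "real roots with a repetition"
    return delta, "one real root and two complex conjugate roots"

# x^3 = 3x + 2, i.e. (x + 1)^2 (x - 2) = 0: repeated root expected
print(classify_depressed_cubic(3, 2))   # Delta = 108 - 108 = 0
# x^3 = 7x + 6, i.e. (x + 1)(x + 2)(x - 3) = 0: three distinct real roots
print(classify_depressed_cubic(7, 6))   # Delta = 1372 - 972 = 400 > 0
```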
Talk:Fujimura's problem (revision as of 03:17, 29 March 2009)

Let [math]\overline{c}^\mu_{n,4}[/math] be the largest subset of the tetrahedral grid: [math] \{ (a,b,c,d) \in {\Bbb Z}_+^4: a+b+c+d=n \}[/math] which contains no tetrahedrons [math](a+r,b,c,d), (a,b+r,c,d), (a,b,c+r,d), (a,b,c,d+r)[/math] with [math]r \gt 0[/math]; call such sets tetrahedron-free. These are the currently known values of the sequence:

n: 0, 1, 2

[math]\overline{c}^\mu_{n,4}[/math]: 1, 3, 7

n=0: [math]\overline{c}^\mu_{0,4} = 1[/math]. There are no tetrahedrons, so no removals are needed.

n=1: [math]\overline{c}^\mu_{1,4} = 3[/math]. Removing any one point on the grid will leave the set tetrahedron-free.

n=2: [math]\overline{c}^\mu_{2,4} = 7[/math]. Suppose the set can be tetrahedron-free in two removals. One of (2,0,0,0), (0,2,0,0), (0,0,2,0), and (0,0,0,2) must be removed. Removing any one of the four leaves three tetrahedrons to remove. However, no point coincides with all three tetrahedrons, therefore there must be more than two removals. Three removals (for example (0,0,0,2), (1,1,0,0) and (0,0,2,0)) leaves the set tetrahedron-free with a set size of 7.

General n

A lower bound of 2(n-1)(n-2) can be obtained by keeping all points with exactly one coordinate equal to zero.

You get a non-constructive quadratic lower bound for the quadruple problem by taking a random subset of size [math]cn^2[/math]. If c is not too large the linearity of expectation shows that the expected number of tetrahedrons in such a set is less than one, and so there must be a set of that size with no tetrahedrons. I think [math] c = \frac{24^{1/4}}{6} + o(\frac{1}{n})[/math].

One upper bound can be found by counting tetrahedrons. For a given n the tetrahedral grid has [math]\frac{1}{24}n(n+1)(n+2)(n+3)[/math] tetrahedrons.
Each point on the grid is part of n tetrahedrons, so [math]\frac{1}{24}(n+1)(n+2)(n+3)[/math] points must be removed to remove all tetrahedrons. This gives an upper bound of [math]\frac{1}{8}(n+1)(n+2)(n+3)[/math].
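For tiny n the values quoted above can be checked directly. The following Python sketch (my own verification aid, not part of the wiki page) enumerates the grid and all tetrahedra and searches for the largest tetrahedron-free subset; it reproduces 1, 3, 7 for n = 0, 1, 2 and is only feasible for such small n.

```python
from itertools import combinations

def largest_tetrahedron_free(n):
    """Brute-force the largest tetrahedron-free subset of {a+b+c+d = n} (tiny n only)."""
    grid = [(a, b, c, d) for a in range(n + 1) for b in range(n + 1)
            for c in range(n + 1) for d in range(n + 1) if a + b + c + d == n]
    tetras = []                      # one tetrahedron per base point (sum n - r) and r > 0
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r - a + 1):
                for c in range(n - r - a - b + 1):
                    d = n - r - a - b - c
                    tetras.append({(a + r, b, c, d), (a, b + r, c, d),
                                   (a, b, c + r, d), (a, b, c, d + r)})
    for size in range(len(grid), 0, -1):
        if any(not any(t <= set(s) for t in tetras)
               for s in combinations(grid, size)):
            return size
    return 0

for n in range(3):
    print(n, largest_tetrahedron_free(n))   # expected: 1, 3, 7
```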
Econometrics is the application of mathematics, statistical methods, and computer science, to economic data and is described as the branch of economics that aims to give empirical content to economic relations.[1] More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference."[2] An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships."[3] The first known use of the term "econometrics" (in cognate form) was by Polish economist Paweł Ciompa in 1910.[4] Ragnar Frisch is credited with coining the term in the sense in which it is used today.[5]

Econometrics is the intersection of economics, mathematics, and statistics. Econometrics adds empirical content to economic theory, allowing theories to be tested and used for forecasting and policy evaluation.[6]

The basic tool for econometrics is the linear regression model. In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis.[7] Estimating a linear regression on two variables can be visualized as fitting a line through data points representing paired values of the independent and dependent variables. For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate ($\Delta\,\text{Unemployment}$) is a function of an intercept ($\beta_0$), a given value of GDP growth multiplied by a slope coefficient $\beta_1$, and an error term $\epsilon$: $$\Delta\,\text{Unemployment} = \beta_0 + \beta_1\,\text{Growth} + \epsilon.$$ The unknown parameters $\beta_0$ and $\beta_1$ can be estimated. Here $\beta_1$ is estimated to be −1.77 and $\beta_0$ is estimated to be 0.83. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 points. The model could then be tested for statistical significance as to whether an increase in growth is associated with a decrease in unemployment, as hypothesized. If the estimate of $\beta_1$ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related.

Econometric theory uses statistical theory to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. An estimator is unbiased if its expected value is the true value of the parameter; it is consistent if it converges to the true value as the sample size gets larger, and it is efficient if the estimator has lower standard error than other unbiased estimators for a given sample size. Ordinary least squares (OLS) is often used for estimation since it provides the BLUE or "best linear unbiased estimator" (where "best" means most efficient, unbiased estimator) given the Gauss-Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used.
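To make the regression example concrete, here is a minimal Python sketch that simulates data in the spirit of Okun's law and recovers the coefficients by ordinary least squares. The quoted estimates (intercept 0.83, slope −1.77) are reused as the "true" values of the simulation purely for illustration; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: unemployment change as a function of GDP growth plus noise
growth = rng.normal(3.0, 2.0, size=200)                          # % GDP growth
u_change = 0.83 - 1.77 * growth + rng.normal(0, 0.5, size=200)   # assumed "true" relation

# Ordinary least squares via the design matrix [1, growth]
X = np.column_stack([np.ones_like(growth), growth])
beta_hat, *_ = np.linalg.lstsq(X, u_change, rcond=None)
print(f"estimated intercept beta_0 ~ {beta_hat[0]:.2f}")
print(f"estimated slope     beta_1 ~ {beta_hat[1]:.2f}")
```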
Estimators that incorporate prior beliefs are advocated by those who favor Bayesian statistics over traditional, classical or "frequentist" approaches. Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analyzing economic history, and forecasting.[8] Econometrics may use standard statistical models to study economic questions, but most often they are with observational data, rather than in controlled experiments.[9] In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may by useful for generating new hypotheses.[10] Economics often analyzes systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous-equation models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system. One of the fundamental statistical methods used by econometricians is regression analysis.[11] Regression methods are important in econometrics because economists typically cannot use controlled experiments. Econometricians often seek illuminating natural experiments in the absence of evidence from controlled experiments. Observational data may be subject to omitted-variable bias and a list of other problems that must be addressed using causal analysis of simultaneous-equation models.[12] Artificial Intelligence has become important for building econometric models and for use in decision making.[13] Artificial intelligence is a nature-inspired computational paradigm which has found usage in many areas. It allows economic models to be of arbitrary complexity and also to be able to evolve as the economic environment also changes. For example, artificial intelligence has been applied to simulate the stock market, to model options and derivatives as well as model and control interest rates. In recent decades, econometricians have increasingly turned to use of experiments to evaluate the often-contradictory conclusions of observational studies. Here, controlled and randomized experiments provide statistical inferences that may yield better empirical performance than do purely observational studies.[14] Data sets to which econometric analyses are applied can be classified as time-series data, cross-sectional data, panel data, and multidimensional panel data. Time-series data sets contain observations over time; for example, inflation over the course of several years. Cross-sectional data sets contain observations at a single point in time; for example, many individuals' incomes in a given year. Panel data sets contain both time-series and cross-sectional observations. Multi-dimensional panel data sets contain observations across time, cross-sectionally, and across some third dimension. 
For example, the Survey of Professional Forecasters contains forecasts for many forecasters (cross-sectional observations), at many points in time (time series observations), and at multiple forecast horizons (a third dimension). In many econometric contexts, the commonly-used ordinary least squares method may not recover the theoretical relation desired or may produce estimates with poor statistical properties, because the assumptions for valid use of the method are violated. One widely used remedy is the method of instrumental variables (IV). For an economic model described by more than one equation, simultaneous-equation methods may be used to remedy similar problems, including two IV variants, Two-Stage Least Squares (2SLS), and Three-Stage Least Squares (3SLS).[15] Computational concerns are important for evaluating econometric methods and for use in decision making.[16] Such concerns include mathematical well-posedness: the existence, uniqueness, and stability of any solutions to econometric equations. Another concern is the numerical efficiency and accuracy of software.[17] A third concern is the usability of econometric software.[18]

Structural econometrics extends the ability of researchers to analyze data by using economic models as the lens through which to view the data. The benefit of this approach is that any policy recommendations are not subject to the Lucas critique since counter-factual analyses take an agent's re-optimization into account. Structural econometric analyses begin with an economic model that captures the salient features of the agents under investigation. The researcher then searches for parameters of the model that match the outputs of the model to the data. There are two ways of doing this. The first requires the researcher to completely solve the model and then use maximum likelihood.[19] However, there have been many advances that can bypass the full solution of the model and that estimate models in two stages. Importantly, these methods allow the researcher to consider more complicated models with strategic interactions and multiple equilibria.[20]

A good example of structural econometrics is in the estimation of first price sealed bid auctions with independent private values.[21] The key difficulty with bidding data from these auctions is that bids only partially reveal information on the underlying valuations: bids shade the underlying valuations. One would like to estimate these valuations in order to understand the magnitude of profits each bidder makes. More importantly, it is necessary to have the valuation distribution in hand to engage in mechanism design. In a first price sealed bid auction the expected payoff of a bidder is given by $$\pi(v,b) = (v-b)\,\Pr(\text{win} \mid b),$$ where $v$ is the bidder valuation and $b$ is the bid. The optimal bid $b^*$ solves a first order condition, $$(v-b^*)\,\frac{\partial \Pr(\text{win} \mid b^*)}{\partial b} - \Pr(\text{win} \mid b^*) = 0,$$ which can be re-arranged to yield the following equation for $v$: $$v = b^* + \frac{\Pr(\text{win} \mid b^*)}{\partial \Pr(\text{win} \mid b^*)/\partial b}.$$ Notice that the probability that a bid wins an auction can be estimated from a data set of completed auctions, where all bids are observed. This can be done using simple non-parametric estimators. If all bids are observed, it is then possible to use the above relation and the estimated probability function and its derivative to pointwise estimate the underlying valuation. This will then allow the investigator to estimate the valuation distribution.
A simple example of a relationship in econometrics from the field of labor economics is: $$\ln(\text{wage}) = \beta_0 + \beta_1\,(\text{years of education}) + \varepsilon.$$ This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. The parameter $\beta_1$ measures the increase in the natural log of the wage attributable to one more year of education. The term $\varepsilon$ is a random variable representing all other factors that may have direct influence on wage. The econometric goal is to estimate the parameters $\beta_0$ and $\beta_1$ under specific assumptions about the random variable $\varepsilon$. For example, if $\varepsilon$ is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.

If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of and the wages paid to people who differ along many dimensions. Given this kind of data, the estimated coefficient on Years of Education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, if those other variables were correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages. The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Exclusion of birthplace, together with the assumption that $\varepsilon$ is uncorrelated with education, produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render $\beta_1$ identifiable.[22] An overview of econometric methods used to study this problem was provided by Card (1999).[23]

The main journals which publish work in econometrics are Econometrica, the Journal of Econometrics, the Review of Economics and Statistics, Econometric Theory, the Journal of Applied Econometrics, Econometric Reviews, the Econometrics Journal,[24] Applied Econometrics and International Development, the Journal of Business & Economic Statistics, and the Journal of Economic and Social Measurement.

Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, McCloskey concluded that economists report p values (following the Fisherian tradition of tests of significance of point null-hypotheses), neglecting concerns of type II errors; economists fail to report estimates of the size of effects (apart from statistical significance) and to discuss their economic importance.
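The omitted-variable problem described above is easy to demonstrate by simulation. In the following Python sketch the "birthplace" variable raises both education and wages; all numerical coefficients are invented for illustration. Omitting birthplace biases the estimated return to education upward, while controlling for it recovers the assumed true value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Simulated data: birthplace raises both education and wages
birthplace = rng.binomial(1, 0.5, n)                   # 1 = high-wage region
education  = 10 + 2 * birthplace + rng.normal(0, 1, n)
log_wage   = 1.0 + 0.08 * education + 0.30 * birthplace + rng.normal(0, 0.1, n)

def ols(y, *regressors):
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

short = ols(log_wage, education)                       # birthplace omitted
long_ = ols(log_wage, education, birthplace)           # birthplace controlled for
print(f"beta_1 without birthplace control: {short[1]:.3f}  (biased upward)")
print(f"beta_1 with    birthplace control: {long_[1]:.3f}  (close to the assumed 0.08)")
```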
Economists also fail to use economic reasoning for model selection, especially for deciding which variables to include in a regression.[25][26] In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects.[27] In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data-sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions".[28]

Economists from the Austrian School argue that aggregate economic models are not well suited to describe economic reality because they waste a large part of specific knowledge. Friedrich Hayek in his The Use of Knowledge in Society argued that "knowledge of the particular circumstances of time and place" is not easily aggregated and is often ignored by professional economists.[29][30]
By Joannes Vermorel, January 2018 The cross-entropy is a metric that can be used to reflect the accuracy of probabilistic forecasts. The cross-entropy has strong ties with the maximum likelihood estimation. Cross-entropy is of primary importance to modern forecasting systems, because if it is instrumental in making possible the delivery of superior forecasts, even for alternative metrics. From a supply chain perspective, cross-entropy is particularly important as it supports the estimation of models that are also good at capturing the probabilities of rare events, which frequently happen to be the costliest ones. This metric departs substantially from the intuition that supports simpler accuracy metrics, like the mean square error or the mean absolute percentage error. Frequentist probability vs Bayesian probability A common way of understanding statistics is the frequentist probability perspective. When trying to make quantitative sense of an uncertain phenomenon, the frequentist perspective states that measurements should be repeated many times, and that by counting the number of occurrences of the phenomenon of interest, it is possible to estimate the frequency of the phenomenon, i.e. its probability. As the frequency rate converges through many experiments, the probability gets estimated more accurately. The cross-entropy departs from this perspective by adopting the Bayesian probability perspective. The Bayesian perspective reverses the problem. When trying to make quantitative sense of an uncertain phenomenon, the Bayesian perspective starts with a model that directly gives a probability estimate for the phenomenon. Then, through repeated observations, we assess how the model fares when confronted with the real occurrences of the phenomenon. As the number of occurrences increase, the measurement of the (in)adequacy of the model improves. The frequentist and the Bayesian perspectives are both valid and useful. From a supply chain perspective, as collecting observations is costly and somewhat inflexible – companies have little control on generating orders for a product – the Bayesian perspective is frequently more tractable. The intuition of cross-entropy Before delving into the algebraic formulation of the cross-entropy, let’s try to shed some light on its underlying intuition. Let’s assume that we have a probabilistic model – or just model in the following - that is intended to both explain the past and predict the future. For every past observation, this model provides an estimate of the probability that this observation should have happened just like it did. While it is possible to construct a model that simply memorize all past observations assigning them a probability of exactly 1, this model would not tell us anything about the future. Thus, an interesting model somehow approximates the past, and thus delivers probabilities that are less than 1 for past events. By adopting the Bayesian perspective, we can evaluate the probability that the model would have generated all the observations. If we further assume all observations to be independent (IID actually), then the probability that this model would have generated the collection of observations that we have is the product of all the probabilities estimated by the model for every past observation. The mathematical product of thousands of variables that are typically less than 0.5 - assuming that we are dealing with a phenomenon which is quite uncertain – can be expected to be an incredibly small number. 
For example, even when considering an excellent model to forecast demand, what would be the probability that this model could generate all the sales data that a company has observed over the course of a year? While estimating this number is non-trivial, it is clear that this number would be astoundingly small. Thus, in order to mitigate this numerical problem known as an arithmetic underflow , logarithms are introduced. Intuitively, logarithms can be used to transform products into sums, which conveniently addresses the arithmetic underflow problem. Formal definition of the cross-entropy For two discrete random variables $p$ and $q$, the cross-entropy is defined as:$$H(p, q) = -\sum_x p(x)\, \log q(x). \!$$This definition is not symmetric. $P$ is intended as the “true” distribution, only partially observed, while $Q$ is intended as the “unnatural” distribution obtained from a constructed statistical model. In information theory, cross-entropy can be interpreted as the expected length in bits for encoding messages, when $Q$ is used instead of $P$. This perspective goes beyond the present discussion and isn’t of primary importance from a supply chain perspective. In practice, as $P$ isn’t known, the cross-entropy is empirically estimated from the observations, by simply assuming that all the collected observations are equally probable, that is, $p(x)=1/N$ where $N$ is the number of observations.$$H(q) = - \frac{1}{N} \sum_x \log q(x). \!$$Interestingly enough, this formula is identical to the average log-likehood estimation . Optimizing the cross-entropy or the log-likelihood is essentially the same thing, both conceptually and numerically. The superiority of cross-entropy From the 1990’s to early 2010, most of the statistical community was convinced that the most efficient way, from a purely numerical perspective, to optimize a given metric, say MAPE (mean absolute percentage error), was to build an optimization algorithm directly geared for this metric. Yet, a critical yet counter-intuitive insight achieved by the deep learning community is that this wasn’t the case. Numerical optimization is a very difficult problem, and most metrics are not suitable for efficient, large scale, numerical optimization efforts. Also during the same period, the data science community at large had come to realize that all the forecasting / prediction problems were actually numerical optimization problems. From a supply chain perspective, the take-away is that even if the goal of the company is to optimize a forecasting metric like MAPE or MSE (mean square error), then, in practice, the most efficient route is to optimize the cross-entropy. At Lokad, in 2017, we have collected a significant amount of empirical evidence supporting this claim. More surprisingly maybe, cross-entropy also outperforms CRPS (continuous-ranked probability score), another probabilistic accuracy metric, even if the resulting models are ultimately judged against CRPS. It is not entirely clear what makes cross-entropy such a good metric for numerical optimization. One of the most compelling arguments, detailed in Ian Goodfellow et all , is that cross-entropy provides very large gradient values, that are especially valuable for gradient descent, which precisely happens to be the most successful scale optimization method that is available at the moment. 
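As a concrete illustration of the empirical formula above, here is a minimal Python sketch that scores a probabilistic demand model against a handful of observations. The demand model and the observed values are made-up examples, and the small epsilon guard against $\log 0$ is an implementation choice, not part of the definition.

```python
import numpy as np

def empirical_cross_entropy(q, observations):
    """Average negative log-probability the model q assigns to what actually happened.

    q: dict mapping each possible outcome to the model's probability for it.
    observations: iterable of observed outcomes.
    """
    eps = 1e-12                              # guard so that an unassigned outcome is not -inf
    logs = [np.log(max(q.get(x, 0.0), eps)) for x in observations]
    return -float(np.mean(logs))

# A toy demand model over units sold per day, and a week of observations
model = {0: 0.35, 1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05}
observed = [1, 0, 2, 0, 1, 3, 0]
print(f"cross-entropy: {empirical_cross_entropy(model, observed):.3f} nats")
```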
CRPS vs cross-entropy As far as supply chain is concerned, cross-entropy largely outperforms CRPS as a metric for probabilistic forecasts simply because it puts a much greater emphasis on rare events. Let’s consider a probabilistic model for demand that has a mean at 1000 units, with the entire mass of the distribution concentrated on the segment 990 to 1010. Let’s further assume that the next quantity observed for the demand is 1011. From the CRPS perspective, the model is relatively good, as the observed demand is about 10 units away from the mean forecast. In contrast, from the cross-entropy perspective, the model has an infinite error: the model did predict that observing 1011 units of demand had a zero probability – a very strong proposition – which turned out to be factually incorrect, as demonstrated by the fact that 1011 units have just been observed. The propensity of CRPS to favor models that can make absurd claims like the event XY will never happen while the event does happen, largely contributes to explain, from the supply chain perspective, why cross-entropy delivers better results. Cross-entropy favors models that aren’t caught “off guard” so to speak when the improbable happens. In supply chain, the improbable does happen, and when it does with no prior preparation, dealing with this event turns out to be very costly.
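The 990-1010 example above can be made numeric. The sketch below is illustrative only: it assumes the model spreads its mass uniformly over 990-1010 and uses a simple discrete-grid CRPS, and it shows the cross-entropy penalty blowing up to infinity while the CRPS stays modest.

```python
import numpy as np

# Model: all mass uniformly on 990..1010; observation: 1011 units of demand.
support = np.arange(0, 2001)                 # grid index equals the demand value
q = np.where((support >= 990) & (support <= 1010), 1.0 / 21, 0.0)
y = 1011

# Cross-entropy contribution of this single observation: -log q(y) is infinite
print("q(1011) =", q[y],
      " so -log q(1011) =", -np.log(q[y]) if q[y] > 0 else np.inf)

# Discrete CRPS: sum over the grid of (F(x) - 1{x >= y})^2
F = np.cumsum(q)                             # forecast CDF
crps = np.sum((F - (support >= y)) ** 2)
print("CRPS =", round(float(crps), 3))       # modest: the mass sits only 1-21 units away
```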
Kakeya problem (revision as of 15:35, 14 March 2009)

Define a Kakeya set to be a subset [math]A[/math] of [math][3]^n\equiv{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, I also found [math]k_3=13[/math] and [math]k_4\le 27[/math].
I suspect that, indeed, [math]k_4=27[/math] holds (meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements), and I am very curious to know whether [math]k_5=53[/math]: notice the pattern in [math]3,7,13,27,53,\ldots[/math]

We have the trivial inequalities [math]k_n\le k_{n+1}\le 3k_n[/math]. The Cartesian product of two Kakeya sets is another Kakeya set; this implies that [math]k_{n+m} \leq k_m k_n[/math]. This implies that [math]k_n^{1/n}[/math] converges to a limit as n goes to infinity.

General lower bounds

Dvir, Kopparty, Saraf, and Sudan showed that [math]k_n \geq 3^n / 2^n[/math].

We have [math]k_n(k_n-1)\ge 3(3^n-1)[/math] since for each [math]d\in {\Bbb F}_3^n\setminus\{0\}[/math] there are at least three ordered pairs of elements of a Kakeya set with difference [math]d[/math]. (I actually can improve the lower bound to something like [math]k_r\gg 3^{0.51r}[/math].)

For instance, we can use the "bush" argument. There are [math]N := (3^n-1)/2[/math] different directions. Take a line in every direction, let E be the union of these lines, and let [math]\mu[/math] be the maximum multiplicity of these lines (i.e. the largest number of lines that are concurrent at a point). On the one hand, from double counting we see that E has cardinality at least [math]3N/\mu[/math]. On the other hand, by considering the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that E has cardinality at least [math]2\mu+1[/math]. If we minimise [math]\max(3N/\mu, 2\mu+1)[/math] over all possible values of [math]\mu[/math] one obtains approximately [math]\sqrt{6N} \approx 3^{(n+1)/2}[/math] as a lower bound of |E|, which is asymptotically better than [math](3/2)^n[/math].

Or, we can use the "slices" argument. Let [math]A, B, C \subset ({\Bbb Z}/3{\Bbb Z})^{n-1}[/math] be the three slices of a Kakeya set E. We can form a graph G between A and B by connecting A and B by an edge if there is a line in E joining A and B. The restricted sumset [math]\{a+b: (a,b) \in G \}[/math] is essentially C, while the difference set [math]\{a-b: (a,b) \in G \}[/math] is all of [math]({\Bbb Z}/3{\Bbb Z})^{n-1}[/math]. Using an estimate from this paper of Katz-Tao, we conclude that [math]3^{n-1} \leq \max(|A|,|B|,|C|)^{11/6}[/math], leading to the bound [math]|E| \geq 3^{6(n-1)/11}[/math], which is asymptotically better still.

General upper bounds

We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. Question: can the upper bound be strengthened to [math]k_{n+1}\le 2k_n+1[/math]?

Another construction uses the "slices" idea and a construction of Imre Ruzsa. Let [math]A, B \subset [3]^n[/math] be the set of strings with [math]n/3+O(\sqrt{n})[/math] 1's, [math]2n/3+O(\sqrt{n})[/math] 0's, and no 2's; let [math]C \subset [3]^n[/math] be the set of strings with [math]2n/3+O(\sqrt{n})[/math] 2's, [math]n/3+O(\sqrt{n})[/math] 0's, and no 1's, and let [math]E = \{0\} \times A \cup \{1\} \times B \cup \{2\} \times C[/math]. From Stirling's formula we have [math]|E| = (27/4 + o(1))^{n/3}[/math]. Now I claim that for most [math]t \in [3]^{n-1}[/math], there exists an algebraic line in the direction (1,t).
Another construction uses the "slices" idea and a construction of Imre Ruzsa. Let [math]A, B \subset [3]^n[/math] be the set of strings with [math]n/3+O(\sqrt{n})[/math] 1's, [math]2n/3+O(\sqrt{n})[/math] 0's, and no 2's; let [math]C \subset [3]^n[/math] be the set of strings with [math]2n/3+O(\sqrt{n})[/math] 2's, [math]n/3+O(\sqrt{n})[/math] 0's, and no 1's, and let [math]E = \{0\} \times A \cup \{1\} \times B \cup \{2\} \times C[/math]. From Stirling's formula we have [math]|E| = (27/4 + o(1))^{n/3}[/math]. Now I claim that for most [math]t \in [3]^n[/math], there exists an algebraic line in the direction (1,t). Indeed, typically t will have [math]n/3+O(\sqrt{n})[/math] 0s, [math]n/3+O(\sqrt{n})[/math] 1s, and [math]n/3+O(\sqrt{n})[/math] 2s, thus [math]t = e + 2f[/math] where e and f are strings with [math]n/3 + O(\sqrt{n})[/math] 1s and no 2s, with the 1-sets of e and f being disjoint. One then checks that the line [math](0,f), (1,e), (2,2e+2f)[/math] lies in E. This already covers a positive fraction of all directions. One can use the random rotations trick to obtain the remaining directions (losing a polynomial factor in n).

Putting all this together, I think we have [math](3^{6/11} + o(1))^n \leq k_n \leq ( (27/4)^{1/3} + o(1))^n[/math], or [math](1.8207\ldots+o(1))^n \leq k_n \leq (1.88988\ldots+o(1))^n[/math].
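For the record, here is a quick numerical check (mine, not from the original page) of the two growth rates quoted above:

```python
# Exponential growth rates of the lower and upper bounds for k_n.
lower = 3 ** (6 / 11)        # from the slices/Katz-Tao argument
upper = (27 / 4) ** (1 / 3)  # from the Ruzsa-type slices construction
print(round(lower, 5), round(upper, 5))   # 1.82074 1.88988
```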
I'm a novice to homotopical algebra, but I've found myself confronted with it by necessity and have some basic questions. I'm going to consider chain complexes over a field $F := \mathbb{F}_2$. Given a chain complex $C$, I'm interested in two operations:

- the "homotopy Sym", where I form $(C \otimes C \otimes E (\mathbb{Z}/2))_{\mathbb{Z}/2}$, where $\mathbb{Z}/2$ acts diagonally on the tensor product (swapping the two copies of $C$). I'll call this $hSym^2 C$.

- if $C$ has a $\mathbb{Z}$-action, then I can form the "homotopy quotient" $C/\mathbb{Z}$, which is $(C \otimes E \mathbb{Z})_{\mathbb{Z}}$, with $\mathbb{Z}$ acting diagonally. I don't know if there is "official" notation for this; I'll just call it $hC/\mathbb{Z}$.

(Edit: I thought I wrote this down but must have deleted it accidentally; $E G$ is a projective $F[G]$-resolution of the complex $F$ in degree $0$, which is supposed to represent a point; thus $EG$ is morally the chain complex of some contractible space on which $G$ acts freely.)

So my question is about how the compositions of these two operations, in either order, are related. If $C$ has a $\mathbb{Z}$-action, then I think $hSym^2 C$ still has a $\mathbb{Z}$-action, so I could form
$$ h( hSym^2 C )/\mathbb{Z} $$
or I could do things in the opposite order:
$$ hSym^2 (hC/\mathbb{Z}). $$
Based on naive intuition about how ordinary quotients work, I guess that there should be an induced map
$$ h( hSym^2 C )/\mathbb{Z} \rightarrow hSym^2 (hC/\mathbb{Z}). $$
Is this right? And if it is, is the above map a (edit: quasi-)isomorphism? (I guess probably not in general.) How can I understand this map explicitly? For instance, if I choose explicit models for $E \mathbb{Z}/2$ and $E \mathbb{Z}$, like the standard ones that spit out $\mathbb{RP}^{\infty}$ and $S^1$, then I should in principle be able to write it down explicitly, but I'm confused about how that goes.
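(For concreteness, and assuming I have the conventions above right: the small models I would reach for are the 2-periodic free resolution for $E\mathbb{Z}/2$ and the two-term free resolution for $E\mathbb{Z}$, which are chain-level versions of the usual contractible models whose quotients are $\mathbb{RP}^{\infty}$ and $S^1$:
$$ E\mathbb{Z}/2 \;=\; \big( \cdots \xrightarrow{\,1+\sigma\,} F[\mathbb{Z}/2] \xrightarrow{\,1+\sigma\,} F[\mathbb{Z}/2] \xrightarrow{\,1+\sigma\,} F[\mathbb{Z}/2] \to 0 \big), $$
where $\sigma$ is the generator and, since $F = \mathbb{F}_2$, every differential is $1+\sigma$; and
$$ E\mathbb{Z} \;=\; \big( 0 \to F[t,t^{-1}] \xrightarrow{\,t-1\,} F[t,t^{-1}] \to 0 \big) $$
concentrated in degrees $1$ and $0$, with $\mathbb{Z}$ acting by multiplication by $t$. With the second model, $hC/\mathbb{Z} = (C \otimes E\mathbb{Z})_{\mathbb{Z}}$ should come out, up to quasi-isomorphism, as the mapping cone of $t-1$ acting on $C$, which seems like a reasonable starting point for writing the comparison map down explicitly.)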
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system.

This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like meet \(\wedge\) and join \(\vee\). We could march much further in this direction. I won't, but try it yourself!

Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.

I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \).

This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way!

The most fundamental is this:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$
The inverse image \( f^{\ast}(S) \) is a subset of \( X \).

The inverse image is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.

The inverse image gives a monotone function
$$ f^{\ast}: P(Y) \to P(X), $$
since if \(S,T \in P(Y)\) and \(S \subseteq T \) then
$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$

Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).

Propositions about the system being measured also give propositions about the measurement apparatus, but this is trickier. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer, because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
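In case it helps to see the backwards map concretely, here is a tiny sketch in Python (my own toy model, not from the lecture): propositions are Python sets, the "measurement" \( f \) is ordinary rounding, and \( f^{\ast} \) is computed by a set comprehension. It also checks monotonicity on this example.

```python
# Inverse image f*(S) = {x in X : f(x) in S}, on a toy "room vs thermometer"
# example: X = possible room temperatures, f = rounding to the nearest degree.

def preimage(f, X, S):
    return {x for x in X if f(x) in S}

X = [20.1, 20.4, 20.6, 21.2, 22.7]   # "states of the room" (toy model)
f = round                            # the thermometer reading

S = {20}        # proposition about the thermometer: "it reads 20"
T = {20, 21}    # weaker proposition: "it reads 20 or 21"

print(preimage(f, X, S))   # {20.1, 20.4}
print(preimage(f, X, T))   # {20.1, 20.4, 20.6, 21.2}

# Monotonicity: S ⊆ T implies f*(S) ⊆ f*(T).
assert preimage(f, X, S) <= preimage(f, X, T)
```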
Here it is:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be
$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$
The image \( f_{!}(S) \) is a subset of \( Y \).

The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".

The image gives a monotone function
$$ f_{!}: P(X) \to P(Y), $$
since if \(S,T \in P(X)\) and \(S \subseteq T \) then
$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$

But here's the cool part:

Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have
$$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$
David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)

This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define
$$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$
This is a subset of \(Y \).

Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).

What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have
$$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$
This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in a revolutionary paper on adjointness in foundations. By now this observation is part of a big story that "explains" logic using category theory.

Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.

Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
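If you like checking such things by brute force, here is a small Python sketch (again my own toy model, not from the lecture) that implements all three maps and verifies both adjunctions on a little example, including the "for some" versus "for all" contrast:

```python
# The three maps between power sets induced by f : X -> Y, and a brute-force
# check of the two adjunctions
#   f_!(S) ⊆ T  iff  S ⊆ f*(T)      (f_! is left adjoint to f*)
#   f*(T) ⊆ S   iff  T ⊆ f_*(S)     (f_* is right adjoint to f*)
from itertools import chain, combinations

X = {0, 1, 2, 3}
Y = {'a', 'b'}

def f(x):                  # a toy "measurement" X -> Y
    return 'a' if x < 2 else 'b'

def image(S):              # f_!(S) = {y : y = f(x) for SOME x in S}
    return {f(x) for x in S}

def preimage(T):           # f*(T) = {x : f(x) in T}
    return {x for x in X if f(x) in T}

def direct_image(S):       # f_*(S) = {y : x in S for ALL x with y = f(x)}
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

def subsets(A):
    A = list(A)
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

for S in map(set, subsets(X)):
    for T in map(set, subsets(Y)):
        assert (image(S) <= T) == (S <= preimage(T))
        assert (preimage(T) <= S) == (T <= direct_image(S))

print(image({0, 1, 2}))         # {'a', 'b'}: SOME x with f(x) = 'b' lies in S
print(direct_image({0, 1, 2}))  # {'a'}: not ALL x with f(x) = 'b' lie in S
```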