The Annals of Statistics, Volume 26, Number 5 (1998), 2014-2048. Strong approximation of density estimators from weakly dependent observations by density estimators from independent observations. Abstract: We derive an approximation of a density estimator based on weakly dependent random vectors by a density estimator built from independent random vectors. We construct, on a sufficiently rich probability space, such a pairing of the random variables of both experiments that the set of observations $\{X_1,\ldots,X_n\}$ from the time series model is nearly the same as the set of observations $\{Y_1,\ldots,Y_n\}$ from the i.i.d. model. With high probability, all sets of the form $(\{X_1,\ldots,X_n\}\,\Delta\,\{Y_1,\ldots,Y_n\})\cap([a_1,b_1]\times\cdots\times[a_d,b_d])$ contain no more than $O(\{[n^{1/2}\prod_i(b_i-a_i)]+1\}\log(n))$ elements. Although this does not imply very much for parametric problems, it has important implications in nonparametric statistics. It yields a strong approximation of a kernel estimator of the stationary density by a kernel density estimator in the i.i.d. model. Moreover, it is shown that such a strong approximation is also valid for the standard bootstrap and the smoothed bootstrap. Using these results we derive simultaneous confidence bands as well as supremum-type nonparametric tests based on reasoning for the i.i.d. model. Article information: Source: Ann. Statist., Volume 26, Number 5 (1998), 2014-2048. First available in Project Euclid: 21 June 2002. Permanent link: https://projecteuclid.org/euclid.aos/1024691367. Digital Object Identifier: doi:10.1214/aos/1024691367. Mathematical Reviews number (MathSciNet): MR1673288. Zentralblatt MATH identifier: 0930.62038. Subjects: Primary 62G07 (Density estimation); Secondary 62G09 (Resampling methods), 62M07 (Non-Markovian processes: hypothesis testing). Citation: Neumann, Michael H. Strong approximation of density estimators from weakly dependent observations by density estimators from independent observations. Ann. Statist. 26 (1998), no. 5, 2014-2048. doi:10.1214/aos/1024691367. https://projecteuclid.org/euclid.aos/1024691367
Existence and location of periodic solutions to convex and non-coercive Hamiltonian systems
1. Department of Mathematics, University of Messina, 98166 Sant'Agata-Messina, Italy
We consider the problem $J\dot u(t)+\nabla H(t,u(t))=0$ a.e. on $[0,T]$, $u(0)=u(T)$, where the function $H:[0,T]\times \mathbb R^{2N}\rightarrow \mathbb R$ is called the Hamiltonian. Our attention is focused on the case in which the Hamiltonian $H$, besides being measurable in $t\in[0,T]$, is convex and continuously differentiable with respect to $u\in \mathbb R^{2N}$. Our basic assumption is that the Hamiltonian $H$ satisfies the following growth condition: Let $1 < p < 2$ and $q=\frac{p}{p-1}$. There exist positive constants $\alpha,\delta$ and functions $\beta,\gamma \in L^q(0,T;\mathbb R^+)$ such that $$\delta|u|-\beta(t)\leq H(t,u)\leq\frac{\alpha}{q}|u|^q+\gamma(t)$$ for all $u\in \mathbb R^{2N}$ and a.e. $t\in[0,T]$. Our main result ensures that, under suitable bounds on $\alpha,\delta$ and the functions $\beta,\gamma$, the problem above has at least one solution belonging to $W_T^{1,p}$. Such a solution corresponds, in the duality, to a function that minimizes the dual action restricted to a subset of $\tilde{W}_T^{1,p}=\{v\in W_T^{1,p}: \int_0^{T} v(t)\,dt=0\}$. Keywords: dual action, Hamiltonian, periodic solutions, convex and non-coercive Hamiltonian systems. Mathematics Subject Classification: 34B1. Citation: Giuseppe Cordaro. Existence and location of periodic solutions to convex and non coercive Hamiltonian systems. Discrete & Continuous Dynamical Systems - A, 2005, 12 (5): 983-996. doi: 10.3934/dcds.2005.12.983
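To make the growth condition concrete, here is a simple example (mine, not from the paper) of a Hamiltonian that satisfies it: the model Hamiltonian $H(t,u)=\frac{1}{q}|u|^q$ obeys
$$\delta|u|-\beta(t)\;\leq\;\frac{1}{q}|u|^q\;\leq\;\frac{\alpha}{q}|u|^q+\gamma(t)$$
with $\alpha=\delta=1$, $\gamma\equiv 0$ and $\beta(t)\equiv 1-\frac{1}{q}$, since $\sup_{s\ge 0}\left(s-\frac{1}{q}s^q\right)=1-\frac{1}{q}$ is attained at $s=1$.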
A common approach to forcing is to use a countable transitive model $M \in V$ with $\mathbb{P} \in M$ and take a filter $G$ which is $\mathbb{P}$-generic over $M$ (such a filter always exists, since $M$ is countable) to form a countable transitive model $M[G]$. Another approach takes $M$ to be countable such that $M \prec H_\theta$ for sufficiently large $\theta$ (and hence possibly not transitive). For example, one definition of proper forcing considers such models. Forcing with transitive models is quite convenient, since many absoluteness results can be used to transfer properties of $x \in M[G]$ which hold in $M[G]$ up to $V$. If $M \prec H_\theta$ is not transitive, then it is not clear which properties of $x$ that hold in $M[G]$ transfer to $V$. For instance, if $M[G] \models x \in {}^\omega\omega$, is $x \in {}^\omega\omega$ in $V$? Of course, one remedy is to Mostowski collapse everything and then use the familiar absoluteness for transitive models. For $x \in {}^\omega\omega$, one could use the fact that $M \prec H_\theta$ implies $\omega \subseteq M$, hence the Mostowski collapse of $M[G]$ maps each real to itself, and then use absoluteness to conclude that $V \models x \in {}^\omega\omega$ as well. Is there a more direct way to prove this type of result rather than collapsing the forcing extension, which seems to suggest one should have collapsed $M$ before starting the forcing construction? So my questions are: 1. First, if one chooses to work with a countable $M \prec H_\theta$, are there any changes that need to be made to the forcing construction and the forcing theorem as they appear in Kunen or Jech? Of course, the definition of a generic filter should be changed to require meeting only those dense sets that belong to $M$. 2. I am aware that if $G$ contains a master condition, then $M[G] \prec H_\theta[G]$. Is $H_\theta[G]$ just the forcing construction applied to $H_\theta$? As $G$ is not necessarily generic over $H_\theta$, it is not clear to me that the forcing theorem applies to $H_\theta[G]$ (or, a priori, that $H_\theta[G]$ models any particular amount of $\text{ZF}- \text{P}$; but since $M[G] \prec H_\theta[G]$, actually $H_\theta[G]$ would model as much as $M[G]$). In general, without additional assumptions like master conditions, does the relation $M[G] \prec H_\theta[G]$ still hold? Also, perhaps I am misunderstanding something, but since $\mathbb{P} \in M$, it appears that if $\theta$ is large enough, every $G \subseteq \mathbb{P}$ which is $\mathbb{P}$-generic over $M$ is already in $H_\theta$. Would this not imply that $H_\theta[G] = H_\theta$ and hence $M[G] \prec H_\theta$? Since $M \prec H_\theta$, $M$ and $M[G]$ would then model exactly the same sentences. This surely cannot happen. Thanks for any help and clarification that can be provided.
Here is a reason. The fourth of Maxwell's macroscopic equations says that $$\nabla \times \vec{H} = \vec{J} +\frac{\partial \vec{D}}{\partial t},$$ where $\vec{J}$ is the free current at a point. In general, it is not possible to rewrite this in terms of the B-field without a detailed knowledge of the microscopic behaviour of the medium (with the exception of vacuum) and of what currents and polarisation charges are present, either inherently or induced by applied fields. Sometimes the approximation is made that $\vec{B} = \mu \vec{H}$, but this runs into trouble in even quite ordinary magnetic materials that have a permanent magnetisation or suffer from hysteresis; the general relationship is $$\vec{B} = \mu_0 (\vec{H} + \vec{M}),$$ where $\vec{M}$ is the magnetisation field (permanent or induced magnetic dipole moment per unit volume). For these reasons, the auxiliary magnetic field strength $\vec{H}$ is invaluable for performing accurate calculations of the fields induced by currents, or vice versa, within magnetic materials. On the other hand, the Lorentz force on charged particles is expressed in terms of the magnetic flux density $\vec{B}$: $$\vec{F} = q\vec{E} + q\vec{v}\times \vec{B}.$$ Indeed this can form the basis of the definition of the B-field and can be used, along with the lack of magnetic monopoles, to derive Maxwell's third equation (Faraday's law), which does not feature the H-field. So both fields are a necessary part of the physicist's toolbox. As Philosophiae Naturalis points out in a comment, the B-field can be thought of as the sum of the contributions from the (applied) H-field and whatever magnetisation (induced or intrinsic) is present. Often we can only control or easily measure the applied H-field. In limited circumstances we can get away with using only one of the B- or H-fields, if the magnetisation is related to the applied H-field in a straightforward way. For other cases (and hence for most ferromagnetic materials or permanent magnets) both fields must be considered.
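To make the two roles concrete, here is a minimal numeric sketch (my own illustration; all field values are made up and not tied to any particular material). It builds $\vec{B}$ from an applied $\vec{H}$ and a magnetisation $\vec{M}$, then evaluates the Lorentz force, which uses $\vec{B}$ rather than $\vec{H}$:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

# Constitutive relation B = mu0 * (H + M).
# Illustrative values only: H is an applied field, M a permanent magnetisation.
H = np.array([0.0, 0.0, 1.0e3])   # A/m
M = np.array([0.0, 0.0, 8.0e5])   # A/m
B = MU0 * (H + M)                 # flux density in tesla

# Lorentz force F = q(E + v x B) acts through B, not H.
q = 1.602e-19                     # C (an electron-scale charge)
E = np.zeros(3)                   # V/m
v = np.array([1.0e6, 0.0, 0.0])   # m/s
F = q * (E + np.cross(v, B))

print(f"B = {B} T")               # ~[0, 0, 1.007] T, dominated by M
print(f"F = {F} N")               # force along -y
```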
Measurable cardinal
A measurable cardinal $\kappa$ is an uncountable cardinal such that it is possible to "measure" the subsets of $\kappa$ using a 2-valued measure on the powerset of $\kappa$, $\mathcal{P}(\kappa)$. There exist several other equivalent definitions: for example, $\kappa$ can also be the critical point of a nontrivial elementary embedding $j:V\to M$. Every measurable is a large cardinal, i.e. $V_\kappa$ satisfies $\text{ZFC}$, therefore $\text{ZFC}$ cannot prove the existence of a measurable cardinal. In fact $\kappa$ is inaccessible, the $\kappa$th inaccessible, the $\kappa$th weakly compact cardinal, the $\kappa$th Ramsey, and similarly bears most of the large cardinal properties below Ramseyness. It is notable that every measurable has the mentioned properties in $\text{ZFC}$, but in $\text{ZF}$ it may not (though its existence remains consistency-wise much stronger than the existence of cardinals with those properties); in fact, under the axiom of determinacy, the first two uncountable cardinals, $\aleph_1$ and $\aleph_2$, are both measurable. Measurable cardinals were introduced by Stanislaw Ulam in 1930.
Definitions
The following definitions are equivalent for every uncountable cardinal $\kappa$: There exists a 2-valued measure on $\kappa$. There exists a $\kappa$-complete (or even just $\sigma$-complete) nonprincipal ultrafilter on $\kappa$. There exists a nontrivial elementary embedding $j:V\to M$ with $M$ a transitive class and such that $\kappa$ is the least ordinal moved (the critical point). There exists an ultrafilter $U$ on $\kappa$ such that the ultrapower $(\text{Ult}_U(V),\in_U)$ of the universe is well-founded and isn't isomorphic to $V$. The equivalence between the first two definitions is due to the fact that if $\mu$ is a 2-valued measure on $\kappa$, then $U=\{X\subset\kappa\mid\mu(X)=1\}$ is a nonprincipal ultrafilter (since $\mu$ is 2-valued) and is also $\sigma$-complete because of $\mu$'s $\sigma$-additivity. Similarly, if $U$ is a $\sigma$-complete nonprincipal ultrafilter on $\kappa$, then $\mu:\mathcal{P}(\kappa)\to[0,1]$ defined by $\mu(X)=1$ whenever $X\in U$, $\mu(X)=0$ otherwise, is a 2-valued measure on $\kappa$. To see that the third definition implies the first two, one can show that if $j:V\to M$ is a nontrivial elementary embedding with critical point $\kappa$, then the set $\mathcal{U}=\{x\subset\kappa\mid\kappa\in j(x)\}$ is a $\kappa$-complete nonprincipal ultrafilter on $\kappa$, and in fact a normal fine measure. To show the converse, one needs to use ultrapower embeddings: if $U$ is a nonprincipal $\kappa$-complete ultrafilter on $\kappa$, then the canonical ultrapower embedding $j:V\to\text{Ult}_U(V)$ is a nontrivial elementary embedding of the universe. The equivalence of the last definition with the other ones is simply due to the fact that the ultrapower $(\text{Ult}_U(V),\in_U)$ of the universe is well-founded if and only if $U$ is $\sigma$-complete, and is isomorphic to $V$ if and only if $U$ is principal.
Properties
See also: Ultrapower. If $\kappa$ is measurable, then it has a measure that takes every value in $[0,1]$. Also there must be a normal fine measure on $\mathcal{P}_\kappa(\kappa)$. Every measurable cardinal is regular, and (under AC) bears most large cardinal properties weaker than it. It is in particular $\Pi^2_1$-indescribable. However, the least measurable cardinal is not $\Sigma^2_1$-indescribable.
Independently of the truth of AC, the existence of a measurable cardinal implies the consistency of the existence of large cardinals with the said properties, even if that measurable is merely $\omega_1$. If $\kappa$ is measurable and $\lambda<\kappa$, then it cannot be true that $\kappa<2^\lambda$. Under AC this means that $\kappa$ is a strong limit (and since it is regular, it must be strongly inaccessible; hence it cannot be $\omega_1$). If there exists a measurable cardinal then $0^\#$ exists, and therefore $V\neq L$. In fact, the sharp of every real number exists, and therefore $\mathbf{\Pi}^1_1$-determinacy holds. Furthermore, assuming the axiom of determinacy, the cardinals $\omega_1$, $\omega_2$, $\omega_{\omega+1}$ and $\omega_{\omega+2}$ are measurable; also, in $L(\mathbb{R})$ every regular cardinal smaller than $\Theta$ is measurable. Every measurable has the following reflection property: let $j:V\to M$ be a nontrivial elementary embedding with critical point $\kappa$. If $x\in V_\kappa$ and $M\models\varphi(\kappa,x)$ for some first-order formula $\varphi$, then the set of all ordinals $\alpha<\kappa$ such that $V\models\varphi(\alpha,x)$ is stationary in $\kappa$ and has the same measure as $\kappa$ itself by any 2-valued measure on $\kappa$. Measurability of $\kappa$ is equivalent to $\kappa$-strong compactness of $\kappa$, and also to $\kappa$-supercompactness of $\kappa$ (fragments of strong compactness and supercompactness respectively). It is also consistent with $\text{ZFC}$ that the first measurable cardinal and the first strongly compact cardinal are equal. If a measurable $\kappa$ has $\kappa$ many strongly compact cardinals below it, then it is strongly compact. If it is a limit of strongly compact cardinals, then it is strongly compact yet not supercompact. If a measurable $\kappa$ has infinitely many Woodin cardinals below it, then the axiom of determinacy holds in $L(\mathbb{R})$, and in particular the axiom of projective determinacy holds. If $\kappa$ is measurable in a ground model, then it is measurable in any forcing extension of that ground model whose notion of forcing has cardinality strictly smaller than $\kappa$. Prikry showed, however, that the cofinality of every measurable can be changed to $\omega$ in a forcing extension in which no cardinal is collapsed.
Failure of $\text{GCH}$ at a measurable
Gitik proved that the following statements are equiconsistent: (1) The generalized continuum hypothesis fails at a measurable cardinal $\kappa$, i.e. $2^\kappa > \kappa^+$. (2) The singular cardinal hypothesis fails, i.e. there is a strong limit singular $\kappa$ such that $2^\kappa > \kappa^+$. (3) There is a measurable cardinal of Mitchell order $\kappa^{++}$, i.e. $o(\kappa)=\kappa^{++}$. Thus violating $\text{GCH}$ at a measurable (or violating the SCH at any strong limit cardinal) is strictly stronger consistency-wise than the existence of a measurable cardinal. However, if the generalized continuum hypothesis fails at a measurable, then it fails at $\kappa$ many cardinals below it.
Real-valued measurable cardinal
A cardinal $\kappa$ is real-valued measurable if there exists a $\kappa$-additive measure on $\kappa$. The smallest cardinal $\kappa$ carrying a $\sigma$-additive 2-valued measure must also carry a $\kappa$-additive measure, and is therefore real-valued measurable; it is also strongly inaccessible under AC. If a real-valued measurable cardinal is not measurable, then it must be smaller than (or equal to) $2^{\aleph_0}$.
Martin's axiom implies that the continuum is not real-valued measurable. Solovay showed that the existence of a measurable cardinal is equiconsistent with the existence of a real-valued measurable cardinal. More precisely, he showed that if there is a measurable then there is a generic extension in which $\kappa=2^{\aleph_0}$ and $\kappa$ is real-valued measurable, and conversely, if there exists a real-valued measurable then it is measurable in some model of $\text{ZFC}$. See also: Ultrapower. Read more: Jech, Thomas - Set Theory; Bering A., Edgar - A brief introduction to measurable cardinals.
Assuming I have the following homogeneous ODE: $$a\cdot y'' + b\cdot y' + c \cdot y = 0$$ Why, for $b^2 - 4ac=0$ (meaning, when $m_1=m_2$), is the solution $$y = C_1\cdot e^{m_{1}x} + C_2\cdot x \cdot e^{m_{2}x}$$ Why isn't it simply $$y = C_1\cdot e^{m_{1}x} + C_2 \cdot e^{m_{2}x}$$? Also, why did they choose to multiply $C_2$ by $x$? Why not take a totally different approach for the solution when $m_1=m_2$ (e.g. dividing the equation by $x$)?

A second order differential equation needs two independent solutions so that it can satisfy two initial conditions like $y(a) = \alpha, y^\prime(a) = \beta$. When the characteristic equation has a repeated root, which happens when $b^2 = 4ac$, you only get one solution $e^{rx}$. You find the other solution by pretending the repeated root is really two roots $r -\epsilon$ and $r + \epsilon$ coming together at $r$, so that $${e^{(r+\epsilon)x} - e^{(r-\epsilon)x} \over 2 \epsilon }= {e^{rx} (e^{\epsilon x} - e^{-\epsilon x}) \over 2 \epsilon} ={e^{rx}[1 + \epsilon x + \cdots -(1 - \epsilon x + \cdots)] \over 2 \epsilon} \to xe^{rx} \mbox{ as } \epsilon \to 0$$ is also a solution. Here we have also used the fact that any linear combination of the two solutions $e^{(r+\epsilon)x}, e^{(r-\epsilon)x}$ is again a solution. So we now have two solutions $e^{rx}$ and $xe^{rx}$ when the characteristic equation $am^2 + bm + c = 0$ has a repeated root.

Maybe oversimplifying here but, for 2nd order ODEs you should have two independent solutions. If $m_1=m_2$, your two solutions would not be independent; multiplying by $x$ takes care of that. The other part (why you wouldn't divide) is that you actually want your solution to be a solution to your ODE!
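As a quick sanity check (my own, using sympy rather than anything from the thread), one can verify symbolically that both $e^{rx}$ and $xe^{rx}$ annihilate $ay''+by'+cy$ once $c=b^2/(4a)$ forces the repeated root $r=-b/(2a)$:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
c = b**2 / (4 * a)       # make the discriminant b^2 - 4ac vanish
r = -b / (2 * a)         # the repeated root of a*m^2 + b*m + c = 0

for y in (sp.exp(r * x), x * sp.exp(r * x)):
    residual = a * sp.diff(y, x, 2) + b * sp.diff(y, x) + c * y
    print(sp.simplify(residual))  # prints 0 for both candidate solutions
```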
Indescribable cardinal
A cardinal $\kappa$ is indescribable if it satisfies a strong form of the reflection principle: any statement of the appropriate complexity that holds in a structure over $V_{\kappa}$ already holds in a structure over some $V_{\alpha}$ with $\alpha<\kappa$. This is important to mathematics because of the central role of reflection. In more detail, a cardinal $\kappa$ is $\Pi_{m}^n$-indescribable if and only if for every $\Pi_{m}$ first-order sentence $\phi$: $$\forall S\subseteq V_{\kappa}(\langle V_{\kappa+n};\in,S\rangle\models\phi\rightarrow\exists\alpha<\kappa(\langle V_{\alpha+n};\in,S\cap V_{\alpha}\rangle\models\phi))$$ Likewise for $\Sigma_{m}^n$-indescribable cardinals. Here are some other equivalent definitions: A cardinal $\kappa$ is $\Pi_m^n$-indescribable for $n>0$ iff for every $\Pi_m$ first-order unary formula $\phi$: $$\forall S\subseteq V_\kappa(V_{\kappa+n}\models\phi(S)\rightarrow\exists\alpha<\kappa(V_{\alpha+n}\models\phi(S\cap V_\alpha)))$$ A cardinal $\kappa$ is $\Pi_m^n$-indescribable iff for every $\Pi_m$ $(n+1)$-th-order sentence $\phi$: $$\forall S\subseteq V_\kappa(\langle V_\kappa;\in,S\rangle\models\phi\rightarrow\exists\alpha<\kappa(\langle V_\alpha;\in,S\cap V_\alpha\rangle\models\phi))$$ In other words, if a cardinal is $\Pi_{m}^n$-indescribable, then every $(n+1)$-th order statement that is $\Pi_m$ and true in $V_{\kappa}$ reflects from $V_{\kappa}$ down to some $V_{\alpha}$. This expresses the fact that these cardinals are so large that $V_\kappa$ begins to resemble $V$ itself. This definition is similar to that of shrewd cardinals, an extension of indescribable cardinals.
Variants
Totally indescribable cardinals are $\Pi_m^n$-indescribable for every natural $m$ and $n$ (equivalently $\Sigma_m^n$-indescribable for every natural $m$ and $n$, equivalently $\Delta_m^n$-indescribable for every natural $m$ and $n$). This means that every (finitary) formula made from quantifiers, $\in$ and a subset of $V_{\kappa}$ reflects from $V_{\kappa}$ onto a smaller rank. $Q$-indescribable cardinals are those which have the property that for every $Q$-sentence $\phi$: $$\forall S\subseteq V_\kappa(\langle V_\kappa;\in,S\rangle\models\phi\rightarrow\exists\alpha<\kappa(\langle V_\alpha;\in,S\cap V_\alpha\rangle\models\phi))$$ $\beta$-indescribable cardinals are those which have the property that for every first order sentence $\phi$: $$\forall S\subseteq V_\kappa(\langle V_{\kappa+\beta};\in,S\rangle\models\phi\rightarrow\exists\alpha<\kappa(\langle V_{\alpha+\beta};\in,S\cap V_\alpha\rangle\models\phi))$$ There is no $\kappa$ which is $\kappa$-indescribable. A cardinal is $\Pi_{<\omega}^m$-indescribable iff it is $\Pi_n^m$-indescribable for every finite $n$. Every $\omega$-indescribable cardinal is totally indescribable.
Facts
Here are some known facts about indescribability: $\Pi_2^0$-indescribability is equivalent to strong inaccessibility, $\Sigma_1^1$-indescribability, $\Pi_n^0$-indescribability given any $n>1$, and $\Pi_0^1$-indescribability.[1] $\Pi_1^1$-indescribability is equivalent to weak compactness.[2],[1] $\Pi_n^m$-indescribability is equivalent to $m$-$\Pi_n$-shrewdness (similarly with $\Sigma_n^m$).[3] $\Pi_n^1$-indescribability is equivalent to $\Sigma_{n+1}^1$-indescribability.
[1] If $m>1$, $\Pi_{n+1}^m$-indescribability is stronger (consistency-wise) than $\Sigma_n^m$- and $\Pi_n^m$-indescribability; every $\Pi_{n+1}^m$-indescribable cardinal is also both $\Sigma_n^m$- and $\Pi_n^m$-indescribable, and a stationary limit of such cardinals for $m>1$.[1] If $m>1$, the least $\Pi_n^m$-indescribable cardinal is less than the least $\Sigma_n^m$-indescribable cardinal, which is in turn less than the least $\Pi_{n+1}^m$-indescribable cardinal.[1] If $\kappa$ is $\Pi_n$-Ramsey, then $\kappa$ is $\Pi_{n+1}^1$-indescribable. If $X\subseteq\kappa$ is a $\Pi_n$-Ramsey subset, then $X$ is in the $\Pi_{n+1}^1$-indescribable filter.[5] If $\kappa$ is completely Ramsey, then $\kappa$ is $\Pi_1^2$-indescribable.[6] Every $n$-Ramsey $\kappa$ is $\Pi^1_{2n+1}$-indescribable. This is optimal, as $n$-Ramseyness can be described by a $\Pi^1_{2n+2}$-formula.[7] Every $<\omega$-Ramsey cardinal is $\Delta^2_0$-indescribable.[7] Every normal $n$-Ramsey $\kappa$ is $\Pi^1_{2n+2}$-indescribable. This is optimal, as normal $n$-Ramseyness can be described by a $\Pi^1_{2n+3}$-formula.[7] Every critical point of a nontrivial elementary embedding $j:M\rightarrow M$ for some transitive inner model $M$ of ZFC is totally indescribable in $M$ (for example, rank-into-rank cardinals, $0^{\#}$ cardinals, and $0^{\dagger}$ cardinals).[2] If $2^\kappa\neq\kappa^+$ for some $\Pi_1^2$-indescribable cardinal, then there is a smaller $\lambda$ such that $2^\lambda\neq\lambda^+$. However, assuming the consistency of the existence of a $\Pi_n^1$-indescribable cardinal $\kappa$, it is consistent for $\kappa$ to be the least cardinal such that $2^\kappa\neq\kappa^+$.[8] Transfinite $\Pi^1_\alpha$-indescribability has been defined via finite games, and it turns out that for infinite $\alpha$, if $\kappa$ is $\Pi_\alpha$-Ramsey, then $\kappa$ is $\Pi^1_{2\cdot(1+\beta)+1}$-indescribable for each $\beta < \min\{\alpha, \kappa^+\}$.[9]
References
1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009.
2. Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded, Springer-Verlag, Berlin, 2003.
3. Rathjen, Michael. The art of ordinal analysis. 2006.
4. Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
5. Feng, Qi. A hierarchy of Ramsey cardinals. Annals of Pure and Applied Logic 49(3):257-277, 1990.
6. Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae 242:49-74, 2018.
7. Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
8. Hauser, Kai. Indescribable cardinals and elementary embeddings. Journal of Symbolic Logic 56(2):439-457, 1991.
9. Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Annals of Pure and Applied Logic 162(11):863-902, 2011.
Exercise from chapter 6 of Stein and Shakarchi's book: Let $u(x, t)$ be a smooth solution of the wave equation and let $E(t)$ denote the energy of this wave, $$E(t) = \int_{\mathbb R^d} \bigg|\frac{\partial u(x,t)}{\partial t}\bigg|^2 dx+\sum_{j=1}^d \int_{\mathbb R^d}\bigg | \frac{\partial u(x,t)}{\partial x_j}\bigg |^2 dx.$$ We have seen that $E(t)$ is constant using Plancherel's formula. One can give an alternate proof of this fact by differentiating the integral with respect to $t$ and showing that $\frac{dE}{dt} = 0.$ I remember that in the 1-dimensional case, to prove a similar formula (which didn't involve absolute values back then), we had to multiply $u_{tt}= u_{xx}$ by $u_t$, integrate with respect to $x$, and then integrate by parts. Noticing that $\frac12(u_t^2)_t = u_{tt}u_t$, and arguing similarly for $u_{xx}u_t$, we would arrive at $\frac{d}{dt}\big(\frac12\int u_t^2\,dx+\frac12\int u_x^2\,dx \big) =0$, which implies the conservation of energy. So the hint given for the problem in $\mathbb R^d$ is the same: use integration by parts. But I don't know exactly what the $d$-dimensional analog of integration by parts is. How can I solve this problem?
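For what it's worth, here is a sketch of the $d$-dimensional computation I believe the hint intends (my own, assuming $u$ is real-valued and that $u_t$ and $\nabla u$ decay fast enough at infinity for all boundary terms to vanish):
$$\frac{dE}{dt} = \int_{\mathbb R^d} 2\,u_t u_{tt}\,dx + \sum_{j=1}^d \int_{\mathbb R^d} 2\,u_{x_j} u_{x_j t}\,dx.$$
Integrating by parts in the $j$-th variable (one-dimensional integration by parts applied inside the iterated integral, or equivalently the divergence theorem) gives $\int u_{x_j} u_{x_j t}\,dx = -\int u_{x_j x_j} u_t\,dx$, so
$$\frac{dE}{dt} = 2\int_{\mathbb R^d} u_t\,\bigl(u_{tt} - \Delta u\bigr)\,dx = 0$$
by the wave equation $u_{tt}=\Delta u$.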
The equivalence I describe below is well-known, but I'd like a simple standard reference for it. Consider $\mathbb{C}\mathbb{P}^1$, the set of one-dimensional subspaces of $\mathbb{C}^2$, which has a metric given by the angle between subspaces (varying between a minimum of $0$ for identical subspaces and a maximum of $\frac\pi2$ for a subspace and its unique orthogonal complement) and which has holomorphic isometry group $\mathrm{PU}(2)$. Consider on the other hand $\frac12 S^2$, the sphere of points distance $\frac12$ from the origin in $\mathbb{R}^3$, which has a metric given by great-circle distance (varying between a minimum of $0$ for identical points and a maximum of $\frac\pi2$ for a point and its unique antipode) and which has orientation-preserving isometry group $\mathrm{SO}(3)$. Now define a map $\varphi : \mathbb{C}\mathbb{P}^1 \to \frac12 S^2$. The subspace spanned by $(0,1)$ is sent by $\varphi$ to the north pole $p = (0,0,\frac12)$. Any other subspace is spanned by a uniquely defined vector $(1,a+bi)$, for $a$ and $b$ real and $i^2 = -1$, and $\varphi$ sends it to the point at which the open ray from $p$ through $(a, b, -\frac12)$ intersects $\frac12 S^2$. (This is a shift of the standard stereographic projection to place the center of the sphere at the origin.) Claim: The map $\varphi$ is an isometry from $\mathbb{C}\mathbb{P}^1$ to $\frac12 S^2$, and the map from $f \in \mathrm{SO}(3)$ to $g = \varphi^{-1} f \varphi \in \mathrm{PU}(2)$ is an isomorphism of Lie groups. The fact that the two Lie groups are isomorphic is mentioned (without reference, by a sequence of isomorphisms) in Wikipedia and the isometry also appears as a special case of something more specialized. I expect that some version of the equivalence I want is covered in any standard text on quantum computing, where $\mathbb{C}\mathbb{P}^1$ is called the Bloch sphere. If possible I would prefer not to use such specialized references for what is essentially a simple (but somewhat tedious to verify) piece of geometry. Is there a good standard reference, ideally requiring minimal background beyond standard undergraduate mathematics, that would suffice to treat a collection of vectors in $\mathbb{C}^2$, considered up to individual scaling and simultaneous action by $\mathrm{U}(2)$, as being equivalent (under an explicit map) to a collection of points in a $2$-sphere, considered up to Euclidean geometry?
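Not a reference, but here is a short numerical check of the claimed isometry (my own sketch; the formula in `to_sphere` is the shifted stereographic projection described above, worked out explicitly):

```python
import numpy as np

def to_sphere(z):
    """Image on the radius-1/2 sphere of the line spanned by (1, z) in C^2,
    via the stereographic projection shifted so the sphere is centered
    at the origin; z -> infinity recovers the north pole (0, 0, 1/2)."""
    a, b, r2 = z.real, z.imag, abs(z) ** 2
    t = 1.0 / (1.0 + r2)            # ray parameter where it meets the sphere
    return np.array([a * t, b * t, 0.5 - t])

def subspace_angle(z, w):
    """Angle between the lines spanned by (1, z) and (1, w) in C^2."""
    u = np.array([1.0, z]) / np.sqrt(1 + abs(z) ** 2)
    v = np.array([1.0, w]) / np.sqrt(1 + abs(w) ** 2)
    return np.arccos(min(1.0, abs(np.vdot(u, v))))

def great_circle(p, q):
    """Great-circle distance between points on the sphere of radius 1/2."""
    return 0.5 * np.arccos(np.clip(np.dot(p, q) / 0.25, -1.0, 1.0))

rng = np.random.default_rng(1)
for _ in range(3):
    z, w = (complex(*rng.normal(size=2)) for _ in range(2))
    print(subspace_angle(z, w), great_circle(to_sphere(z), to_sphere(w)))
    # the two printed numbers agree to machine precision
```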
Literature on Carbon Nanotube Research I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growth and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate! Contents 1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes 2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis 3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology 4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen 5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning 6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation 7 High-Performance Carbon Nanotube Fiber Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes B. G. Demczyk et al., Materials Science and Engineering A, 334, 173-178, 2002 The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% strain. During the measurements, 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes. Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004 The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows one to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below). Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: <math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math> where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter d = 1 nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here (a short numerical sketch of the formula is given at the end of this page). In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry. Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. the lack of embedded amorphous carbon and of imperfections in the carbon bonds in the CNT walls). In their article, G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism.
Oxygen provides this mechanism, because its chemical affinity towards hydrogen is bigger than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting point in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so we could grow infinitely long CNTs. This article can be found in our archive. Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. Longer fibers, which we hope have tensile strengths of a couple of hundred GPa, can hopefully be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10 nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7 mm! Also, in a range from 0.5 to 1.5 mm fiber length, the forests grown with this method can be spun into yarns. The growth rate with this method was initially <math>60{\rm \mu m\ min.^{-1}}</math> and could be sustained for 90 minutes. This is very different from the <math>1{\rm \mu m\ min.^{-1}}</math> reported by G. Zhang et al. (2005, see above), which shows that the growth is very dependent on the method and materials used. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7 mm after 2 h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed them to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007). Overall the paper is somewhat short on the details of the process, but the results are very interesting. Perhaps the 5 mm CNTs are long enough to be spun into a usable yarn. In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM.
In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, as the cap's inherent shape tries to reach a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and thus drives the growth process mechanically. Of course for the SE community the most interesting question in this paper is: can we grow CNTs that are long enough that we can spin them into a yarn that would hold the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing. High-Performance Carbon Nanotube Fiber K. Koziol et al., Science, 318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNT and MWNT that has been formed by carbon vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel at high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values, as the limit of extraction speed from the aerogel was reached and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is on the order of the typical spacing between weak points.
This can be seen from the fact that for the 20 mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20 mm long fiber is 20 times higher than on a 1 mm long fiber. In conclusion, the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000 km and a tensile strength of better than 3 GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
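To get a feel for the M. Zhang et al. (2004) yarn strength formula quoted earlier on this page, here is a minimal numerical sketch (mine, not from the paper). The diameter, friction coefficient and fiber length default to the values quoted above; the migration length Q is not quoted in this summary, so the value passed below is a made-up placeholder.

```python
import numpy as np

def yarn_strength_ratio(alpha_deg, Q, d=1e-9, mu=0.13, L=30e-6):
    """sigma_yarn / sigma_fiber from M. Zhang et al. (2004).

    alpha_deg: helix angle (degrees); d: fiber diameter (m);
    Q: fiber migration length (m); mu: CNT friction coefficient;
    L: fiber length (m). d, mu, L default to the values quoted above."""
    alpha = np.radians(alpha_deg)
    k = np.sqrt(d * Q / mu) / (3 * L)
    return np.cos(alpha) ** 2 * (1 - k / np.sin(alpha))

# Q = 1 micron is purely a placeholder, not a value from the paper:
for angle in (10, 20, 30):
    print(angle, yarn_strength_ratio(angle, Q=1e-6))
```

With these numbers the knock-down factor is dominated by the <math>\cos^2 \alpha</math> term, i.e. by the helix angle rather than by <math>k</math>.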
Notation: $\le$ is used for the subgroup relation; $P$ means polynomial time in the input size; $\Omega = \{1,2,3,\cdots,n\}$ is the input domain; $\mathrm{Sym}(\Omega)$ means the symmetric group on $\Omega$; $G = \langle A \rangle$ means the subgroup $G$ generated by the subset $A$ of $\mathrm{Sym}(\Omega)$. The normal centralizer problem is defined as follows: Given: $G = \langle A \rangle, H = \langle B \rangle \le \mathrm{Sym}(\Omega)$, where $G$ normalizes $H$. Find: $C_G(H) = \{g \in G \mid gh =hg, \forall h \in H\}$. Question: Is this problem in $P$? Give a polynomial time algorithm if the answer is yes. I know that if we drop the normality condition from the above problem, then the new version is not known to be in $P$. Also note that computing the normalizer of a subgroup $H$ is in $P$. Please note that I asked the same question on Theoretical Computer Science Stack Exchange (link) a month back but did not get any response.
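Not an answer to the complexity question, but for experimenting with small instances one can compute $C_G(H)$ with sympy's centralizer routine (a backtrack search, so no polynomial-time guarantee). The toy instance below, $G=\mathrm{Sym}(\{0,1,2,3\})$ normalizing the Klein four-group, is my own choice:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# G = Sym({0,1,2,3}), generated by the transposition (0 1) and the
# 4-cycle (0 1 2 3); V = the Klein four-group, which is normal in G.
G = PermutationGroup(Permutation([1, 0, 2, 3]), Permutation([1, 2, 3, 0]))
V = PermutationGroup(Permutation([1, 0, 3, 2]), Permutation([2, 3, 0, 1]))

assert V.is_normal(G)     # the promise "G normalizes H" from the problem
C = G.centralizer(V)      # C_G(V), found by backtrack search
print(C.order())          # 4: here the centralizer is V itself
```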
Literature on Carbon Nanotube Research I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate! Contents 1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes 2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis 3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology 4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen 5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning 6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation 7 High-Performance Carbon Nanotube Fiber Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes B. G. Demczyk et al., Materials and Engineering, A334, 173-178, 2002 The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual Multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are obseverd by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed as well as Young modulus and bending stiffness. Breaking tension is reached for the SWNT at 150GP and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes. Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304,276-278, 2004 The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you use a mixture of ethene (as the source of carbon), ferrocene, and theophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). Itwas then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,... 
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below). Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa based on the 30micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CNT) on a catalytic substrate, as usual. A very intersting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: <math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math> where <math>\alpha</math> is the helix angle of the spun yarn, i.e. fiber direction relative to yarn axis. The constant <math>k=\sqrt(dQ/\mu)/3L</math> is given by the fiber diameter d=1nm, the fiber migration length Q (distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the quantity <math>\mu=0.13</math> is the friction coefficient of CNTs (the friction coefficent is the ratio of maximum along-fiber force divided by lateral force pressing the fibers together), <math>L=30{\rm \mu m}</math> is the fiber length. A critical review of this formula is given here. In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry. Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen Important aspects of the production of CNTs that are suitable for the SE is the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the Carbon bounds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important, because it removes hydrogen from the reaction. Hydrogen has apparently a very detrimental effect on the growth of CNTs, it even destroys existing CNTs as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compount, it is important to have a removal mechanism. 
Oxygen provides this mechanism, because its chemical affinity towards hydrogen is bigger than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting information in the paper is that you can design the places on the substrate, on which CNTs grow by placing the the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given with 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs. This article can be found in our archive. Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The longer the fibers, which we hope have a couple of 100GPa of tensile strength, can hopefully be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. The found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7mm! Also, in a range from 0.5 to 1.5mm fiber length the forests grown with this method can be spun into yarns. The growth rate with this method was initially <math>60{\rm \mu m\ min.^{-1}}</math> and could be sustained for 90 minutes, This is very different from the <math>1{\rm \mu m\ min.^{-1}}</math> reported by G. Zhang et al. (2005), which shows that the growth is very dependent on the method and materials used. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7mm after 2h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007). Overall the paper is somewhat short on the details of the process, but the results are very interesting. Perhaps the 5mm CNTs are long enough to be spun into a usable yarn. In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscropic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. 
The formation of that cap apparently deforms the catalyst particle due to its inherent shape as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a retoring force exerted by the crystaline stracture of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. On the base of the catalyst particle more carbon atoms attach to the initial cap starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle breathes so drives so the growth process mechanically. Of course for us SE community the most interesting part in this paper is the question: can we grow CNTs that are long enough so we can spin them in a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing. High-Performance Carbon Nanotube Fiber K. Koziol et al., Science, 318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (low-density, porous, solid material) of SWNT and MWNT that has been formed by carbon vapor deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that typically the fibers split in two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample, which have no weak point, and some, which have one or more, provided the length of the fibers is in the order of the frequency of occurrence of weak points. This can be seen by the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood to encounter a weak point on a 20mm long fiber is 20 times higher than encountering one on a 1mm long fiber. 
In conclusion, the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength better than 3GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: use just cm-long fibers and interconnect them with load-bearing structures (perhaps also CNT threads). We have thus shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
The type of symmetry for which the simplex (not necessarily regular) is usually called "the least symmetric convex body" is the symmetry of reflection about a point (e.g. $x\mapsto-x$). There are a few measures of this symmetry that the simplex minimizes. For any convex body $K\subset\mathbf{R}^n$, consider the following symmetric bodies: $$ D = \tfrac12 (K-K) $$ $$ C = \mathrm{conv}\{K\times \{0\}, -K\times\{1\}\}\subset\mathbf{R}^{n+1} $$ $$ R_a = \mathrm{conv}\{K, 2a-K\} $$ and let $R$ be the body of smallest volume among $\{R_a\}$. Note that if $K$ is symmetric under reflection about a point, then $|D|=|C|=|R|=|K|$. When $K$ is not symmetric, $|D|,|C|,|R|>|K|$. And, for a fixed volume of $K$, the simplex maximizes $|D|$, $|C|$, and $|R|$. See "Convex Bodies Associated with a Given Convex Body", C. A. Rogers and G. C. Shephard (1957). For your context it sounds like you want a measure of spherical symmetry. And since you want ellipsoids to be as "symmetric" as spheres, you probably want something affine invariant. I think you are looking for something like the Banach-Mazur distance to the sphere: $$\delta(K,B) = \min\{t: B\subseteq TK+a\subseteq e^t B \text{ for some }T\in GL(n),\ a\in \mathbf{R}^n\}$$ The BM distance is usually defined for point-symmetric bodies, but there are versions out there without that assumption (e.g. Macbeath A.M. (1951), "A compactness theorem for affine equivalence-classes of convex regions", Journal canadien de mathématiques 3, 54-61).
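As a quantitative aside (standard facts, not claims from the cited papers): for the difference body, the Rogers-Shephard inequality together with Brunn-Minkowski pins down both extremes, $$2^n\,|K| \;\le\; |K-K| \;\le\; \binom{2n}{n}\,|K|,$$ with equality on the left exactly for point-symmetric $K$ and on the right exactly for simplices. This is the sense in which, at fixed $|K|$, the simplex maximizes $|D| = 2^{-n}|K-K|$.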
Let $C$ be the space nº74 ("double origin topology") in Steen & Seebach's Counterexamples in Topology, chosen because it is the only one listed there that is T2 and path-connected but not T3: $C$ consists of the set of points of the plane $\mathbb{R}^2$ together with an additional point $0^*$. Neighborhoods of points other than the origin $0$ and the point $0^*$ are the usual open sets of $\mathbb{R}^2\setminus\{0\}$; as a basis of neighborhoods of $0$ and $0^*$, we take $V_n(0) = \{(x,y) : x^2+y^2 < 1/n^2 \land y>0\} \cup \{0\}$ and $V_n(0^*) = \{(x,y) : x^2+y^2 < 1/n^2 \land y<0\} \cup \{0^*\}$. In other words, we have replaced the origin in the plane by an "upper origin" $0$ and a "lower origin" $0^*$, the neighborhoods of the upper origin being sets containing an open half-disk centered at the origin, plus the upper origin itself, and similarly for the lower origin. The space $C$ is Hausdorff, but not T2½ because $0$ and $0^*$ do not have disjoint closed neighborhoods; in particular, it is not T3 or T3½. So there is no continuous function $C \to \mathbb{R}$ taking different values on $0$ and $0^*$. Let $c_0 = 0$ and $c_1 = 0^*$. Then there is a path connecting $c_0$ and $c_1$: indeed, there is a path connecting $c_0$ to, say, $(1,0)$, and one connecting $(1,0)$ to $c_1$ (we can even find "arcs", i.e., injective paths, if we want). If we have a continuous function $C \to X$ taking $c_0$ to $x$ and $c_1$ to $y$, then right-composing it with the path just mentioned gives a path connecting $x$ and $y$: so $(C,c_0,c_1)$-connectedness implies path-connectedness. On the other hand, $\mathbb{R}$ is not $(C,c_0,c_1)$-connected because of what was said in the previous paragraph. I think this answers the question, with the additional constraint that $C$ is Hausdorff and $c_0 \neq c_1$. (I just noticed that Simon Henry had the same idea in the comments.) However, it doesn't answer the question that I think should have been asked, namely to also require $C$ itself to be $(C,c_0,c_1)$-connected (in the above example, it's pretty clear that it's not).
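For concreteness (my own example, not taken from Steen & Seebach), an explicit path from $c_0=0$ to $(1,0)$ is $$\gamma(t)=\begin{cases}0,&t=0,\\(t,\;t(1-t)),&0<t\le 1,\end{cases}$$ which is continuous at $t=0$ because for small $t$ the point $(t,t(1-t))$ lies in the upper half-plane within distance $O(t)$ of the origin, hence eventually inside every basic neighborhood $V_n(0)$; the mirror-image path $(t,-t(1-t))$ through the lower half-plane similarly connects $(1,0)$ to $c_1=0^*$.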
Using the basic operations of relational algebra (RA), it is possible to obtain the largest value assigned to a given attribute of a relation. This post shows how this can be done. To start, consider the following operators of RA (below $R$ is a relation): $\sigma_{\theta}(R)$ selects tuples (rows) from $R$ which satisfy the condition $\theta$, where $\theta$ consists of comparisons of attributes from $R$ using the following binary operators: $\lt$, $\leq$, $=$, $\geq$ and $\gt$; $\Pi_{a_1,\ldots,a_n}(R)$ extracts attributes (columns) $a_1,\ldots,a_n$ from the relation $R$ (duplicate tuples are discarded, so each tuple in the resulting relation is unique); $\rho_{a/b}(R)$ renames attribute $b$ from the relation $R$ to $a$; $R \bowtie_{\theta} S$ is the $\theta$-join of relations $R$ and $S$: it computes all combinations of tuples from $R$ and $S$ that satisfy the condition $\theta$ ($R \bowtie_{\theta} S = \sigma_{\theta}(R \times S)$, with $R \times S$ being the Cartesian product of $R$ and $S$). Consider now the relation $P$ below:

Name Age
Peter 21
Bob 25
Alice 32
John 27

The maximum age of the people listed in $P$ can be retrieved as follows: $$ \max_{P}(\textrm{Age}) := \Pi_{\textrm{Age}} P - \Pi_{\textrm{Age}}\left[ P \underset{\textrm{Age} \lt \textrm{Age2}}{\bowtie} \left(\rho_{\textrm{Name2/Name}}\rho_{\textrm{Age2/Age}} P\right)\right] $$ In other words, we first obtain a relation $\Pi_{\textrm{Age}} P$ which contains a single column with all ages, and subtract from it the set of all ages which are smaller than some other age. To clarify the second part, notice that $$ P \underset{\textrm{Age} \lt \textrm{Age2}}{\bowtie} \left(\rho_{\textrm{Name2/Name}}\rho_{\textrm{Age2/Age}} P\right) $$ is a relation containing four columns ($\textrm{Name}$, $\textrm{Age}$, $\textrm{Name2}$, $\textrm{Age2}$) with each of its tuples satisfying $\textrm{Age} \lt \textrm{Age2}$. Applying $\Pi_{\textrm{Age}}$ to this relation gives us another relation with a single column ($\textrm{Age}$) containing all original age values from $P$ which are smaller than some other age value in $P$. Therefore, the relation $$ \Pi_{\textrm{Age}}\left[ P \underset{\textrm{Age} \lt \textrm{Age2}}{\bowtie} \left(\rho_{\textrm{Name2/Name}}\rho_{\textrm{Age2/Age}} P\right)\right] $$ contains all ages except the largest one, so removing these values from $\Pi_{\textrm{Age}} P$ yields a relation with a single age value: the largest one.
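To make the construction concrete, here is a minimal sketch in Python (illustrative only; relations are modeled as sets of (Name, Age) tuples, which mirrors RA's duplicate-free semantics):

import itertools  # not strictly needed; set comprehensions suffice

# The relation P as a set of (Name, Age) tuples.
P = {("Peter", 21), ("Bob", 25), ("Alice", 32), ("John", 27)}

# Pi_Age(P): project out the Age column (sets discard duplicates, as in RA).
ages = {age for _, age in P}

# Pi_Age of the theta-join P |x|_{Age < Age2} rho(P):
# ages that are smaller than some other age.
smaller = {a for a in ages for a2 in ages if a < a2}

# Set difference leaves only the maximum.
print(ages - smaller)  # {32}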
Wikibooks:Manual of Style

This page contains a draft proposal for a Wikibooks policy or guideline. Discuss changes to this draft at the discussion page. Through consensus, this draft could become an official Wikibooks policy or guideline. The Wikibooks Manual of Style serves to establish good structural and stylistic practices to help editors produce higher quality wikibooks.

Wikibook titles

See also Wikibooks:Naming policy for information on how to name books and their chapters. Wikibooks should be titled based on their aspect. This is a combination of the target audience, scope, and depth of the book. Several books on one topic but with differing aspects can exist. For example, a book on woodwork aimed at the general public may be entitled simply "Woodworking", and a mathematics text for commerce students may be called "Mathematics for Commerce" or "Commercial Mathematics" instead of simply "Mathematics". Some people prefer to use title casing, as books often do, while other people prefer to use sentence casing, as Wikipedia does. Title casing is recommended for book titles as it reduces potential confusion between title- and shelf-categories. Casing of subpage names and sections is entirely a matter of style. Whatever combination of schemes you choose for book titles, pages, and sections, please be consistent and follow the existing style of the books you are editing. Sub-page names that describe what the chapter is about — for example, "Chess/Notating_The_Game" — are preferred over numbered chapters — like "Chess/Chapter_10" — so that inserting new chapters between existing chapters, or reordering chapters, requires less work.

Structure

See also book design for some helpful tips and ideas.

Main page

The main page is generally the first page a new reader sees. It should give a quick overview of the scope, intended audience, and layout of the book. Splash pages should be avoided. Often the layout is simply the full table of contents. Collections, printable versions, and PDF files should be easily accessible from this page. Links to the book from the card catalog office and external websites should point to a book's main page. The subject index on the card catalog office requires the {{shelves}} template to be placed on the main page of the book. The book's main page and category should be placed in any categories that the book belongs to. Indicate a book's completion status with {{status}} to provide readers with an idea of how far along the book is when browsing the shelf pages. If you still require help with categorizing a book, please request assistance in the reading room. Interlingual links should be placed on the book's main page. Books across language projects may be dissimilar even when about the same subject. Be wary about placing interlingual links on any other pages.

Table of contents

In general, the table of contents should reside on the main page, giving readers an overview of the available material. In cases where this is impractical, a special page can be created for it. Common names for such pages are Contents, Table of contents, or similar.

Introduction

An introduction page is the first page where learning begins. A book's introduction page commonly delves into the purpose and goals of the book: what the book aims to teach, who the audience is, what the book's scope is, what topics the book covers, the history of the subject, reasons to learn the subject, any conventions used in the book, or any other information that might make sense in an introductory chapter.
Common names for such pages are Introduction or About. The latter is more commonly used when information about the book is split from the introduction of the subject matter.

About

The local manual of style — when it is not part of the About page — is often named literally "Local style manual", "Manual of style", "How to contribute", "How you can help", "About", etc. — appended to the book name, of course. Whatever it is called in the book you are editing, it would be nice if a link to it is on the same page as the table of contents. Having a local manual of style is further discussed in WB:LMOS.

Navigation aids

Navigation aids are links included on pages to make navigating between pages easier. Navigation aids can help make reading books easier for readers, but can also make maintaining and contributing to books harder. Most web browsers can backtrack through pages already visited, and the wiki software also adds links for navigating back to the table of contents if pages use the slash convention in their name and a book's main page is the table of contents, as suggested. Using a book-specific template to hold navigation aids (rather than copying and pasting onto each page of the book) can help reduce some of the maintenance issues, since only that template must be edited if things change. There is no standard for navigation aids. Navigation aids are optional due to their potential drawbacks.

Bibliography

A bibliography is useful for collecting resources that were cited in the book, for linking to other useful resources on the subject that go beyond the scope of the book, and for including works that can aid in verifying the factual accuracy of what is written in the book. When used, such pages are commonly named References, Further Reading, or similar.

Glossary

A glossary lists terms in a particular domain of knowledge with the definitions for those terms. A glossary is completely optional and is most useful in books aimed at people new to a field of study. The name Glossary should always be used for such a page.

Appendices

An appendix includes important information that does not quite fit into the flow of a book. For example, a list of math formulas might be used as an appendix in a math book. Appendices are optional and books may have more than one appendix. Examples of common ways to name appendices are Appendix:Formulas and Appendix/Keywords.

Cover pages

Cover pages are useful for print versions. These should be separated from the main page (remember: Wikibooks is not paper) but can be used to make print versions. When used, such pages are commonly named Cover.

Print versions

See Help:Print versions for more on print versions. They are often named Book Name/Print version.

PDF versions

Some books have a PDF version of the entire book in a single file, e.g. File:Bookname.pdf. If someone creates a PDF version of a book, it would be nice if that PDF file were mentioned at Wikibooks:PDF versions, and {{PDF version}} were placed on the table of contents to link to that PDF file.

Style

Where appropriate, the first use of a word or phrase akin to the title of the page should be marked in bold, and when a new term is introduced, it is italicised.

Headers

Headers should be used to divide page sections and provide content layout. Primary information should be marked with a "==" header, and information secondary to that should be marked with a "===" header, and so on, for example:

== Animals ==
There are many kinds of animals.
=== Cats ===
Cats are animals.
=== Dogs ===
Dogs are animals.
There is no need to provide systematic navigation between headers; only provide navigation between pages. A list of sections is automatically provided on the page when more than 3 headers are used.

Linking

Books with a deep sub-page hierarchy should include navigation links to help navigate within the hierarchy. Templates can help maintain consistency by making it easy to include navigational aids on the necessary pages.

Footnotes and references

Wikibooks has a couple of really simple implementations for footnotes and references. One uses {{note}} to place notes at the bottom of the page and {{ref}} to reference these at appropriate places in the text. The other places the references between <ref> and </ref> tags right in the text, and then uses {{reflist}} or <references /> to automatically generate a list of these at the bottom of the page.

Mathematics

Format using HTML or in-line MediaWiki markup when using variables or other simple mathematical notation within a sentence. Use <math></math> tags for more complicated forms. Use italics for variables: a + b, etc. Do not use italics for Greek-letter variables, function names, or their parentheses. To introduce mathematical notation in a display format, start a new line, add a single colon, and then the notation within the <math></math> tags. If punctuation follows the notation, include it inside the tags. For example, the markup

: <math>\int_0^\infty {1 \over {x^2 + 1}}\,dx = \frac{\pi}{2}</math>

follows the "display" guidelines correctly. If a notation does not render as a PNG, you may force it to do so by adding a "\,\!" at the end of the formula.

Software

Books that are about computer software or rely on the use of computer software to illustrate examples should clearly indicate which version of the software is relevant to the book, page, or section at hand. Templates to help with this and examples of their usage are available at Wikibooks:Templates.
This is a very interesting question. I don't know if there is a general and definitive answer, but I'll try to make some comments. I apologize if this ends up rambling; I'm figuring this out as I write this answer. Operators have dimensions, since their eigenvalues are physical quantities. For bras and kets it gets more complicated. First, you cannot in general say that they are dimensionless. To see why, consider a state with a definite position $|x\rangle$. Since $\langle x | x' \rangle = \delta(x-x')$ and the Dirac delta has the inverse dimension of its argument, it must be that $[ \langle x | ] \times [ | x \rangle ] = 1/L$. A similar relationship holds for momentum eigenstates. Of course, there are higher powers of $L$ in higher dimensions. However, consider an operator with discrete spectrum, such as the energy in an atom or something like that. Then the appropriate equation is $\langle m | n \rangle = \delta_{mn}$, and since this delta is dimensionless, the bra and ket must have mutually inverse dimensions. This gets even weirder when you consider that the Hamiltonian for a hydrogen atom has both discrete and continuous eigenvalues, so the relationship between the dimensions of the bras and kets will be different depending on the energy (or whatever physical quantity is appropriate). We have the equation $\langle x | p \rangle = \frac1{\sqrt{2\pi\hbar}} \exp(ipx/\hbar)$. I at first thought that this, combined with $[\langle x |] \times [| p \rangle ] = [\langle p |] \times [| x \rangle ]$, would allow us to find the dimensions of $|x\rangle$ (and everything else), but it turns out that the normalization conditions of $|x\rangle$ and $|p\rangle$ force the dimensions of $\langle x | p \rangle$ to come out right. We can find that $[|p\rangle] = \sqrt{T/M}\, [|x \rangle]$, but we can't go any further. Similar relationships will apply for the eigenstates of your favourite operator. Any given ket is a linear combination of eigenkets, but again there are subtleties depending on whether the spectrum is discrete or continuous. Suppose we have two observables $O_1$ and $O_2$ with discrete spectra and eigenstates $|n\rangle_1$ and $|n\rangle_2$. Any state $|\psi\rangle$ can be expressed as a dimensionless linear combination of the eigenstates (dimensionless because, since $\langle n | n \rangle = 1$, the squares of the coefficients make up probabilities): $|\psi\rangle = \sum_n a_n |n\rangle_1 = \sum_n b_n |n\rangle_2$. This implies that the eigenkets of all observables with discrete spectrum have the same dimensions, and likewise for the eigenbras. It gets trickier for observables with continuous spectrum such as $x$ and $p$, because of the integration measure. We have $|\psi\rangle = \int f(x) |x\rangle\ dx = \int g(p) |p\rangle\ dp$. $\langle \psi | \psi \rangle = 1$ implies $\int |f(x)|^2\ dx = 1$, so that $[f] = 1/\sqrt{L}$ and likewise $[g] = \sqrt{T/ML}$. This should be no surprise, since $f$ and $g$ are Fourier transforms of each other, with a $1/\sqrt{\hbar}$ thrown in. From this we can deduce $[|p\rangle] = \sqrt{T/M}\, [|x \rangle]$, which we already knew, and $\sqrt{L}\, [|x \rangle] = [|n \rangle]$. The conclusion seems to be the following. All eigenkets with discrete eigenvalues must have the same dimensions, but it looks like that dimension is arbitrary (so you could take them to be dimensionless). Furthermore, normalized states have that same dimension.
Eigenstates with continuous spectrum are more complicated: if we have an observable $A$ with continuous eigenvalues $a$, then we can use the fact that $|\psi\rangle$ can be written either as an integral over eigenstates of $A$ or as a sum over discrete eigenstates to find that $\sqrt{[a]}\, [|a\rangle] = [|n\rangle]$, where $|n\rangle$ is some discrete eigenket. So once you fix the dimensions of one ket, you fix the dimensions of every other ket.
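As a sanity check on these bookkeeping rules (my own verification, using $[\hbar]=ML^2/T$ together with $[\langle x|][|x\rangle]=1/L$ and $[|p\rangle]=\sqrt{T/M}\,[|x\rangle]$ from above): $$\left[\frac{1}{\sqrt{2\pi\hbar}}\right]=\sqrt{\frac{T}{ML^2}}=\frac{1}{L}\sqrt{\frac{T}{M}},\qquad [\langle x|p\rangle]=[\langle x|]\,[|p\rangle]=\sqrt{\frac{T}{M}}\,[\langle x|]\,[|x\rangle]=\frac{1}{L}\sqrt{\frac{T}{M}},$$ so both sides of $\langle x|p\rangle = e^{ipx/\hbar}/\sqrt{2\pi\hbar}$ carry the same dimension, as claimed.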
Update: Trimok and MBN helped me solve most of my confusion. However, there is still an extra term $-(2/r)T$ in the final result. Brown doesn't write this term, and it seems physically wrong. Update #2: Possible resolution of the remaining issue; see my comment on MBN's answer. Suppose we have a rope hanging statically in a Schwarzschild spacetime. It has constant mass per unit length $\mu$, and we want to find the varying tension $T$. Brown 2012 gives a slightly more general treatment of this, which I'm having trouble understanding. Recapitulating Brown's equations (3)-(5) and specializing them to this situation, I have in Schwarzschild coordinates $(t,r,\theta,\phi)$, with signature $-+++$, the metric $$ ds^2=-f^2 dt^2+f^{-2}dr^2+\ldots, \qquad \text{where } f=(1-2M/r)^{1/2}, $$ and the stress-energy tensor $$ T^\kappa_\nu=(4\pi r^2)^{-1}\operatorname{diag}(-\mu,-T,0,0). $$ He says the equation of equilibrium is $$ \nabla_\kappa T^\kappa_r=0. $$ He then says that if you crank the math, the equation of equilibrium becomes something that in my special case is equivalent to $$ T'+(f'/f)(T-\mu)=0, $$ where the primes are derivatives with respect to $r$. This makes sense because in flat spacetime $f'=0$, and $T$ is a constant. The Newtonian limit also makes sense, because $f'$ is the gravitational field, and $T-\mu\rightarrow -\mu$. There are at least two things I don't understand here. First, isn't his equation of equilibrium simply a statement of conservation of energy-momentum, which would be valid regardless of whether the rope was in equilibrium? Second, I don't understand how he gets the final differential equation for $T$. Since the upper-lower-index stress-energy tensor is diagonal, the only term in the equation of equilibrium is $\nabla_r T^r_r=0$, which means $\mu$ can't come in. Also, if I write out the covariant derivative in terms of the partial derivative and Christoffel symbols (the relevant one being $\Gamma^r_{rr}=-M/\bigl(r(r-2M)\bigr)$), the two Christoffel-symbol terms cancel, so I get $$ \nabla_r T^r_r = \partial _r T^r_r + \Gamma^r_{rr} T^r_r - \Gamma^r_{rr} T^r_r, $$ which doesn't involve $f$ and is obviously wrong if I set it equal to 0. What am I misunderstanding here? References: Brown, "Tensile Strength and the Mining of Black Holes," http://arxiv.org/abs/1207.3342
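For reference, here is a sketch of the computation that seems to resolve both issues above (my own bookkeeping, not Brown's; it assumes only the standard Schwarzschild Christoffel symbols $\Gamma^t_{tr}=f'/f$ and $\Gamma^\theta_{\theta r}=\Gamma^\phi_{\phi r}=1/r$). The mistake is in keeping only the $\Gamma^r_{rr}$ terms: for a mixed tensor, $$ \nabla_\kappa T^\kappa_r=\partial_r T^r_r+\Gamma^\kappa_{\kappa r}T^r_r-\Gamma^\lambda_{\kappa r}T^\kappa_\lambda =\partial_r T^r_r+\frac{f'}{f}\left(T^r_r-T^t_t\right)+\frac{2}{r}\,T^r_r, $$ where the two $\Gamma^r_{rr}$ terms cancel and the angular components of $T$ vanish; the $\Gamma^t_{tr}$ term is how $\mu$ enters. Moreover, with $T^r_r=-T/(4\pi r^2)$ and $T^t_t=-\mu/(4\pi r^2)$, differentiating the $1/r^2$ factor produces $+\frac{2}{r}\frac{T}{4\pi r^2}$, which exactly cancels the $-\frac{2}{r}\frac{T}{4\pi r^2}$ from the trace term; what remains is $-(4\pi r^2)^{-1}\left[T'+(f'/f)(T-\mu)\right]=0$, i.e. Brown's equation with no leftover $(2/r)T$ term.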
To determine whether a parametrized curve is concave up or concave down, we need to see whether $d^2 y/dx^2$ is positive or negative. To compute these higher derivatives, we start with the formula for $dy/dx$ and apply the chain rule: $$\frac{d\hbox{(anything)}}{dt} = \frac{d\hbox{(anything)}}{dx}\frac{dx}{dt}.$$ Dividing both sides by $dx/dt$ gives $$\frac{d\hbox{(anything)}}{dx} = \frac{1}{dx/dt} \frac{d\hbox{(anything)}}{dt}.$$ In this notation, the chain rule is $\displaystyle{\frac{d\hbox{(anything)}}{dx} = \frac{\dot{\hbox{anything}}}{\dot x}}$, and in particular $\displaystyle{\frac{dy}{dx} = \frac{\dot y}{\dot x}.}$ Taking a derivative with respect to $x$ one more time and applying the quotient rule gives \begin{eqnarray*}\frac{d^2y}{dx^2} = \frac{d}{dx}\frac{dy}{dx} &=& \frac{1}{\dot x} \frac{d (\dot y/\dot x)}{dt} \cr &=& \frac{1}{\dot x} \frac{\dot x \ddot y - \dot y \ddot x}{\dot x^2} \cr &=& \frac{\dot x \ddot y - \dot y \ddot x}{\dot x^3}. \end{eqnarray*} We can apply the same reasoning to get even higher derivatives, like $$\frac{d^3y}{dx^3} = \frac{d}{dx}\frac{d^2y}{dx^2} = \frac{1}{\dot x} \frac{d (d^2y/dx^2)}{dt},$$ but these come up a lot less often than the second derivative.
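As a cross-check of the second-derivative formula, here is a small symbolic computation (an illustration only; the cycloid is just a convenient test curve):

import sympy as sp

t = sp.symbols('t')
# Test curve: a cycloid
x = t - sp.sin(t)
y = 1 - sp.cos(t)

xd, yd = sp.diff(x, t), sp.diff(y, t)
# d2y/dx2 via the formula (x' y'' - y' x'') / (x')^3
d2 = (xd * sp.diff(yd, t) - yd * sp.diff(xd, t)) / xd**3
# Cross-check against (1/x') * d/dt (dy/dx)
assert sp.simplify(d2 - sp.diff(yd / xd, t) / xd) == 0
print(sp.simplify(d2))  # -1/(cos(t) - 1)**2, which is negative

The negative sign matches the familiar fact that each arch of the cycloid is concave down.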
There are two approaches to this problem - we can solve it directly, or we can examine the answer choices and come to a conclusion about which one must be correct.

Algebraic approach: Draw a perpendicular from point A to side BC, label it point D. Draw another perpendicular from point B to side AC, label it point E. Now we have a 30-60-90 triangle and two similar 15-75-90 triangles. Let the length of CD = a.

Attachment: Isosceles Triangle.png

Now we can use some ratios to figure out x. Looking at triangle ABD, we know that \(\frac{BD}{AB} = \frac{x-a}{x} = \frac{\sqrt{3}}{2}\) (1). And looking at triangles ADC and BEC (similar triangles) we can see that \(\frac{AC}{DC} = \frac{BC}{EC} = \frac{2\sqrt{2}}{a}=\frac{x}{\sqrt{2}}\) (2). From (2), \(ax = 4\), or \(a=\frac{4}{x}\). Cross multiply (1) and plug in \(a=\frac{4}{x}\):
\(2x-2a = \sqrt{3}x\)
\(2x-2(\frac{4}{x})=\sqrt{3}x\)
\(2x-\frac{8}{x}-\sqrt{3}x=0\) --> multiply by \(x\)
\(2x^2-8-\sqrt{3}x^2=0\) --> gather terms
\((2-\sqrt{3})x^2=8\)
\(x=\sqrt{\frac{8}{2-\sqrt{3}}}\) --> Now this doesn't look like any of our answer choices, so we will have to manipulate it a bit
\(x=\frac{2\sqrt{2}}{\sqrt{2-\sqrt{3}}}\) --> multiply top and bottom by the conjugate of the denominator to rationalize the denominator
\(x=\frac{2\sqrt{2}*\sqrt{2+\sqrt{3}}}{(\sqrt{2-\sqrt{3}})*(\sqrt{2+\sqrt{3}})}\)
\(x=\frac{2\sqrt{2}*\sqrt{2+\sqrt{3}}}{1}\) --> bring the \(\sqrt{2}\) into the square root
\(x=2\sqrt{4+2\sqrt{3}}\) --> Now rearrange the expression to complete the square within the square root
\(x=2\sqrt{1+2\sqrt{3}+3} = 2\sqrt{(1+\sqrt{3})^2}\)
\(x=2(1+\sqrt{3})\)
Answer: E

WHEW! Ok, now the simpler approach that involves almost no calculation, just a few quick estimates and a good idea of what's going on in the triangle: Because \(\angle{B}\) is \(30^{\circ}\), and the side opposite \(\angle{B}\) is \(2\sqrt{2}\), we know that the other two sides, x, will be longer than \(2\sqrt{2}\). \(2\sqrt{2}\) is \(\approx 2.8\). Now if we look at the answer choices: (A) \(\sqrt{3}-1 \approx 0.73\) (B) \(\sqrt{3}+2 \approx 3.73\) (C) \(\frac{\sqrt{3}-1}{2} \approx 0.37\) (D) \(\frac{\sqrt{3}+1}{2} \approx 1.37\) (E) \(2(\sqrt{3}+1) \approx 5.46\). Immediately we can eliminate A, C and D. To choose between B and E, we need to ask whether x should be almost twice as much as AC, or less than 1.5*AC. Go back to the diagram and look at triangle ABD. Since it is a 30-60-90 triangle, we know that AB (i.e. \(x\)) = 2*AD. Now, though the diagram is not to scale, it is accurate enough to surmise that AD is only slightly less than AC. Therefore, \(x\) should be only slightly less than 2*AC, which matches answer E. I recommend the second approach, but it only works well because the answer choices are spread out enough. That being said, I don't think the GMAT will expect you to do the algebraic approach, and will design the answer choices specifically to be able to use approximation.
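As a quick numeric sanity check on answer E (my own addition, not part of the original solution): the law of cosines in the isosceles triangle with apex angle \(B=30^{\circ}\) and legs of length \(x\) requires \(AC^2 = 2x^2(1-\cos 30^{\circ})\).

import math

AC = 2 * math.sqrt(2)
x = 2 * (1 + math.sqrt(3))                          # answer choice E
lhs = AC ** 2                                       # 8.0
rhs = 2 * x ** 2 * (1 - math.cos(math.radians(30)))
print(abs(lhs - rhs) < 1e-9)                        # True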
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)

@ARTICLE{GitmanHamkinsKaragila:KM-set-theory-does-not-prove-the-class-Fodor-theorem,
  author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila},
  title = {Kelley-Morse set theory does not prove the class {F}odor theorem},
  note = {manuscript under review},
  keywords = {under-review},
  eprint = {1904.04190},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1RD},
}

Abstract. We show that Kelley-Morse (KM) set theory does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\newcommand\Ord{\text{Ord}}\Ord$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\Ord$ that there is a class function $F:\Ord\to\lambda$ that is not constant on any stationary class. Strikingly, it is consistent with KM that there is a class $A\subseteq\omega\times\Ord$, such that each section $A_n=\{\alpha\mid (n,\alpha)\in A\}$ contains a class club, but $\bigcap_n A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed.

The class Fodor principle is the assertion that every regressive class function $F:S\to\Ord$ defined on a stationary class $S$ is constant on a stationary subclass of $S$. This statement can be expressed in the usual second-order language of set theory, and the principle can therefore be sensibly considered in the context of any of the various second-order set-theoretic systems, such as Gödel-Bernays (GBC) set theory or Kelley-Morse (KM) set theory. Just as with the classical Fodor’s lemma in first-order set theory, the class Fodor principle is equivalent, over a weak base theory, to the assertion that the class club filter is normal. We shall investigate the strength of the class Fodor principle and try to find its place within the natural hierarchy of second-order set theories. We shall also define and study weaker versions of the class Fodor principle. If one tries to prove the class Fodor principle by adapting one of the classical proofs of the first-order Fodor’s lemma, then one inevitably finds oneself needing to appeal to a certain second-order class-choice principle, which goes beyond the axiom of choice and the global choice principle, but which is not available in Kelley-Morse set theory. For example, in one standard proof, we would want, for a given $\Ord$-indexed sequence of non-stationary classes, to be able to choose for each member of it a class club that it misses. This would be an instance of class-choice, since we seek to choose classes here, rather than sets. The class choice principle $\text{CC}(\Pi^0_1)$, it turns out, is sufficient for us to make these choices, for this principle states that if every ordinal $\alpha$ admits a class $A$ witnessing a $\Pi^0_1$-assertion $\varphi(\alpha,A)$, allowing class parameters, then there is a single class $B\subseteq \Ord\times V$, whose slices $B_\alpha$ witness $\varphi(\alpha,B_\alpha)$; and the property of being a class club avoiding a given class is $\Pi^0_1$ expressible.
Thus, the class Fodor principle, and consequently also the normality of the class club filter, is provable in the relatively weak second-order set theory $\text{GBC}+\text{CC}(\Pi^0_1)$. This theory is known to be weaker in consistency strength than the theory $\text{GBC}+\Pi^1_1$-comprehension, which is itself strictly weaker in consistency strength than KM. But meanwhile, although the class choice principle is weak in consistency strength, it is not actually provable in KM; indeed, even the weak fragment $\text{CC}(\Pi^0_1)$ is not provable in KM. Those results were proved several years ago by the first two authors, but they can now be seen as consequences of the main result of this article (see Corollary 15). In light of that result, however, one should perhaps not have expected to be able to prove the class Fodor principle in KM. Indeed, it follows similarly from arguments of the third author in his dissertation that if $\kappa$ is an inaccessible cardinal, then there is a forcing extension $V[G]$ with a symmetric submodel $M$ such that $V_\kappa^M=V_\kappa$, which implies that $\mathcal M=(V_\kappa,\in, V^M_{\kappa+1})$ is a model of Kelley-Morse, and in $\mathcal M$, the class Fodor principle fails in a very strong sense. In this article, adapting the ideas of Karagila to the second-order set-theoretic context and using similar methods as in Gitman and Hamkins's previous work on KM, we shall prove that every model of KM has an extension in which the class Fodor principle fails in that strong sense: there can be a class function $F:\Ord\to\omega$, which is not constant on any stationary class. In particular, in these models, the class club filter is not $\sigma$-closed: there is a class $B\subseteq\omega\times\Ord$, each of whose vertical slices $B_n$ contains a class club, but $\bigcap_n B_n$ is empty.

Main Theorem. Kelley-Morse set theory KM, if consistent, does not prove the class Fodor principle. Indeed, if there is a model of KM, then there is a model of KM with a class function $F:\Ord\to \omega$, which is not constant on any stationary class; in this model, therefore, the class club filter is not $\sigma$-closed.

We shall also investigate various weak versions of the class Fodor principle.

Definition. For a cardinal $\kappa$, the class $\kappa$-Fodor principle asserts that every class function $F:S\to\kappa$ defined on a stationary class $S\subseteq\Ord$ is constant on a stationary subclass of $S$. The class ${<}\Ord$-Fodor principle is the assertion that the class $\kappa$-Fodor principle holds for every cardinal $\kappa$. The bounded class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is bounded on a stationary subclass of $S$. The very weak class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is constant on an unbounded subclass of $S$.

We shall separate these principles as follows.

Theorem. Suppose KM is consistent.
1. There is a model of KM in which the class Fodor principle fails, but the class ${<}\Ord$-Fodor principle holds.
2. There is a model of KM in which the class $\omega$-Fodor principle fails, but the bounded class Fodor principle holds.
3. There is a model of KM in which the class $\omega$-Fodor principle holds, but the bounded class Fodor principle fails.
4. $\text{GB}^-$ proves the very weak class Fodor principle.

Finally, we show that the class Fodor principle can neither be created nor destroyed by set forcing.

Theorem.
The class Fodor principle is invariant under set forcing over models of $\text{GBC}^-$. That is, it holds in an extension if and only if it holds in the ground model.

Let us conclude this brief introduction by mentioning the following easy negative instance of the class Fodor principle for certain GBC models. This argument seems to be a part of set-theoretic folklore. Namely, consider an $\omega$-standard model of GBC set theory $M$ having no $V_\kappa^M$ that is a model of ZFC. A minimal transitive model of ZFC, for example, has this property. Inside $M$, let $F(\kappa)$ be the least $n$ such that $V_\kappa^M$ fails to satisfy $\Sigma_n$-collection. This is a definable class function $F:\Ord^M\to\omega$ in $M$, but it cannot be constant on any stationary class in $M$, because by the reflection theorem there is a class club of cardinals $\kappa$ such that $V_\kappa^M$ satisfies $\Sigma_n$-collection.

Read more by going to the full article: V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
I learned a lot while building this website; I hope to share it so that it might be helpful for anyone trying to do the same. I'm sure you'll notice that I'm far from an expert in the subjects we're going to explore here; this is my first foray into web development. If you have any corrections, or things I've misunderstood, I'd love to hear about it! Just post a comment. The site is built using Jekyll, using the theme Minimal Mistakes. I host it on GitHub Pages, and purchased and manage my domain through Google Domains. We'll go through each of these steps in detail. I'll assume that you have up-to-date versions of Ruby and Jekyll on your local machine. I'm going through all this in macOS, which may affect some of the shell commands I give, but translating to Windows shouldn't be too hard.

Making a site with Minimal Mistakes

The website for Minimal Mistakes includes a great quick-start guide; I recommend the Starting with jekyll new section as a place to start. Using this you should be able to establish a base site with some simple demonstration content, using the Minimal Mistakes theme. You can always look directly at the source code for my site if you want to see exactly what I have in my Gemfile, or _config.yml, or whatever.

Enabling MathJax

In order to enable MathJax, which renders the mathematical equations you see in my posts, you'll need to edit the file scripts.html contained in the folder _includes/ to include a line enabling MathJax. However, you'll want to avoid overwriting the contents of the default scripts.html. So, we need to find where bundle is storing the Gem for Minimal Mistakes. To find this, do

bundle show minimal-mistakes-jekyll

If you just want to navigate directly to that directory, do

cd $(bundle show minimal-mistakes-jekyll)

If bundle doesn't seem to have minimal-mistakes-jekyll installed, then you should add the line gem "minimal-mistakes-jekyll" to your Gemfile, and run bundle to install the Gems. Now you can copy the default scripts.html into your site:

cp _includes/scripts.html /path/to/site/_includes/scripts.html

Open the copied scripts.html in your editor of choice, 1 and add the lines that load the MathJax script at the end. And you're done! 2 Now, you can type $$x_1$$ to see a rendered $x_1$, and so on. The $$...$$ syntax will generate inline math if used inline, and will generate a display equation if used on its own line. So, if one enters

$$ f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z-a} dz $$

then the equation is rendered in display style. You can also use \\[ and \\] as delimiters for display equations, and \\( and \\) as delimiters for inline equations. The double slashes are only necessary in markdown; if you're writing raw HTML, then a single slash will suffice.

Customize Font Sizes

I found the fonts a bit oversized, so I wanted to change the size for the posts. In order to do this, you need to copy the entire folder which contains all the relevant scss files:

cd $(bundle show minimal-mistakes-jekyll)
cp -r _sass /path/to/site

Now, after much digging through the GitHub issues, 3 I found that the file to edit here is _sass/_reset.scss; adjust the font-size values in that file. Once this file has been edited, you should see the font size reduced in your page.

Getting it on GitHub Pages

Okay, now we write a bunch of nonsense, find some beautiful pictures at Unsplash to use as headers, and we're ready to publish the thing on GitHub Pages. I'll first go through as though we don't want to use a custom domain, so that the website will be exposed at USERNAME.github.io.
Enabling jekyll-remote-theme

First of all, make sure that you're using the remote-theme jekyll plugin, which allows you to use any jekyll theme that is GitHub hosted, rather than only the few that are officially supported. This process is outlined on the Minimal Mistakes website, but I'll go through it here. First, in your _config.yml file, enable the plugin by including it in the plugins list, via

plugins:
  - jekyll-remote-theme

If you have other plugins you want to use (I use jekyll-feed), then add them to this list as well. Designate the remote_theme variable, but do so after setting the theme, so that you have in your config file

theme: "minimal-mistakes-jekyll"
remote_theme: "mmistakes/minimal-mistakes"

Finally, in your Gemfile, add gem "jekyll-remote-theme".

Push it to the repository

GitHub Pages looks for a repository that follows the naming convention USERNAME.github.io. So, for example, since my GitHub username is peterewills, the repository for the source of this site is at https://www.github.com/peterewills/peterewills.github.io. Once you've created such a repository, initialize a git repo on your site by going into path/to/your/site and doing git init. Then, do

git remote add origin https://www.github.com/USERNAME/USERNAME.github.io

and then commit and push. (If you're unfamiliar with using git, I recommend either of these tutorials.) You'll get an email telling you that your page build was successful, but you're "using an unsupported theme." Don't worry about this; it happens whenever you use remote-theme. You now should be able to navigate to USERNAME.github.io and see your page!

Using a Custom Domain

Suppose you'd prefer to use a custom domain, such as mydomain.pizza (this is actually a real, and available, domain name). There are lots of ways to do this; I did it through Google Domains, so I'll go through those steps. First, you go to Google Domains, pick out the domain you want, and register it. For this example, we'll assume you went with mydomain.pizza. You should now see it appear under the My Domains tab on the right side of the page. You should see a domain called mydomain.pizza and a DNS option. This is what we need to edit. We need to configure the DNS behavior of our domain so that it points at the IP address where GitHub Pages is hosting it. On the DNS page, scroll down to Custom Resource Records. You'll want to add three custom resource records: two "host" resource records (designated by an A) and one "alias" resource record (designated by CNAME). GitHub Pages exposes its sites at IP addresses 192.30.252.153 and 192.30.252.154. So, you'll want to add both of these as host resource records. You'll want to add your GitHub Pages url USERNAME.github.io as an alias record. By the time you've added the three, your list of resource records should look like the example below. So, now your url (mydomain.pizza) knows that it is an alias for USERNAME.github.io, but we still have to specify this aliasing on the GitHub end of things. To do this, simply make a text file called CNAME and include on the first line

mydomain.pizza

This is the entire contents of the text file CNAME. Once this is pushed to the repository USERNAME/USERNAME.github.io, the appropriate settings should automatically update themselves.
To check this, go to the repository settings, scroll down to the "GitHub Pages" settings, and look under "Custom domain." You should see something like the following. If the DNS record of your Google domain has not yet been updated, then you will see "Your site is ready to be published at mydomain.pizza" on a yellow background. Note that it sometimes takes up to 48 hours for DNS records to update, so be patient. Once the DNS records have updated, you should be able to see your site at mydomain.pizza.

Enabling Comments via Disqus

Minimal Mistakes has excellent support for comments built into the theme. I chose to use Disqus as my comments platform, but it supports others. If you choose to go another route, you can look at the minimal-mistakes guide for more direct assistance. To get Disqus comments up and running, you should go to the Disqus homepage and sign up. Eventually you should have a "shortname" that you can use for your site. Once you have this, you just need to add the following to your _config.yml:

comments:
  provider: "disqus"
  disqus:
    shortname: "my-shortname"

A Caveat

Even if you include this, you will not see Disqus comments appear in your posts when you do bundle exec jekyll serve and look at the posts on localhost:4000. Why don't they show up? When you build locally, you are by default building in the development environment. In Minimal Mistakes, comments are only active in the production environment. You can test which environment you're in by printing the value of jekyll.environment on a page. For more on environments, you can check out the official Jekyll docs. You should be able to run your local Jekyll server in a production environment by doing

JEKYLL_ENV=production bundle exec jekyll serve

but this never worked for me, so I had to test that comments were working by pushing to GitHub and looking at my posts live in production.

Using Google Analytics to Monitor Traffic

Minimal Mistakes also includes nice support for Google Analytics. Similar to Disqus comments, you'll need to sign up for Google Analytics and follow the steps there to get a tracking_id for your site. Then, in your _config.yml, include the following blob:

analytics:
  provider: "google"
  google:
    tracking_id: "XB-934572345-6"

You should then be able to use the Google Analytics dashboard to monitor traffic on your site. You can look at the Minimal Mistakes documentation on analytics for more details, especially if you plan on using a different analytics provider.

Conclusion

As I said at the beginning of the post, you can check out the repository for my site to see examples of what I've gone through here, including my CNAME file, my _includes/scripts.html file that enables MathJax, and my _config.yml file. Please let me know, either by email or in the comments, if you have any questions or corrections!

Presumably emacs. ↩

Michael, the guy who built Minimal Mistakes, is really wonderful about responding to issues on GitHub, which are really used as a support forum for people using the theme who have no experience in web development (such as myself). ↩
I have been trying to find the sweet middle ground for describing path integration of field theories, in between the physicist way and the mathematician way, but it seems hard to find something that is both rigorous and describes how to actually compute them. So far what I've been able to get is this. This is in the context of Schrödinger functionals, so what I've been trying to compute is something of the form $$\langle \Phi, \Psi \rangle = \int_{\mathcal{S}} d\gamma\, \Phi^*[\phi] \Psi[\phi]$$ $\Phi, \Psi$ are two wavefunctionals of the Hilbert space $L^2(\mathcal{S}(\mathbb{R}^n), d\gamma)$, so that a wavefunctional is a functional $\Psi : \mathcal{S}(\mathbb{R}^n) \to \mathbb{C}$. That part isn't much of an issue; we can just consider the case $|0\rangle = 1$ here, which doesn't change much. So just $$\mathcal{Z} = \int_{\mathcal{S}} d\gamma $$ The measure here is the Gaussian measure on the Schwartz space. There aren't a whole lot of very readable papers on the topic (although this one is the closest I found), but as far as I can tell, given a Gaussian measure on a Banach space $X$, there is the equality $$\int_X d\gamma\, e^{i f(\phi)} = e^{-\frac{1}{2} q(f, f)}$$ for $\phi \in X$, $f \in X'$, where $q$ is some non-negative, symmetric bilinear form on $X'$. Some sources write out $q$ as $$q(f,f) = \langle f, O f\rangle_X$$ for some hermitian product $\langle ., .\rangle$ on $X$ and a positive definite, self-adjoint operator $O$. As there is also the pseudo-equality $$d\gamma \approx e^{-\frac{1}{2} \langle \phi, O^{-1} \phi \rangle} \mathcal{D} \phi$$ I assume that $O$ is the inverse operator to $\Delta$ (since we have to work in Euclidean space), so that our integral is just something like $$\int_{\mathcal{S}} d\gamma\, e^{i f(\phi)} = e^{-\frac{1}{2} \int d^nx\, f \Delta^{-1} f}$$ This certainly seems to work out for the case $f = 0$, as it gives us $$\int_{\mathcal{S}} d\gamma = 1$$ I'm sure there are many issues here (for instance the hermitian product here only works if $f$ can be expressed as an actual function), but is this roughly the sort of formula we need to actually compute path integrals in a rigorous manner?
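The finite-dimensional analogue of that characteristic-functional identity is easy to check numerically, which at least confirms the formula is the right kind of object (a Monte Carlo sketch; the covariance matrix here is an arbitrary positive-definite stand-in for $O$, not the actual inverse Laplacian):

import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # finite-dimensional stand-in for S(R^n)

# An arbitrary positive-definite "covariance operator" O
A = rng.normal(size=(n, n))
O = A @ A.T + n * np.eye(n)

f = 0.3 * rng.normal(size=n)            # a fixed linear functional f in X'

# Draw samples phi ~ gamma, the centered Gaussian measure with covariance O
phi = rng.multivariate_normal(np.zeros(n), O, size=200_000)

lhs = np.exp(1j * phi @ f).mean()       # Monte Carlo estimate of ∫ dγ e^{i f(φ)}
rhs = np.exp(-0.5 * f @ O @ f)          # e^{-q(f,f)/2} with q(f,f) = <f, O f>
print(abs(lhs - rhs))                   # small (Monte Carlo error ~ 1/sqrt(N))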
1. [title not captured]
Angewandte Chemie International Edition, ISSN 1433-7851, 08/2017, Volume 56, Issue 35, pp. 10539-10544
The silver-catalyzed oxidative C(sp3)−H/P−H cross-coupling of 1,3-dicarbonyl compounds with H-phosphonates, followed by a chemo- and regioselective C(sp3)...
Keywords: cross-coupling | P bond formation | silver | 1,3-dicarbonyl compounds | C−C bond cleavage | HYDROGEN EVOLUTION REACTION | MANGANESE OXIDE CATALYSTS | WATER OXIDATION | CHEMISTRY, MULTIDISCIPLINARY | ELECTROCATALYSTS | OXYGEN REDUCTION REACTIONS | IRON-SULFUR CLUSTERS | NICKEL | SITES | 2FE-2S CLUSTERS | COBALT | Chemotherapy | Phosphonates | Cancer | Carbon-carbon composites | Coupling (molecular) | Silver | Cross coupling | Cleavage | Selectivity
Journal Article

2. Evidence of a structure in $\bar{K}^{0} \Lambda_{c}^{+}$ consistent with a charged $\Xi_c(2930)^{+}$, and updated measurement of $\bar{B}^{0} \rightarrow \bar{K}^{0} \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-}$ at Belle — Belle Collaboration
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1-8
We report evidence for the charged charmed-strange baryon $\Xi_{c}(2930)^+$ with a signal significance of 3.9$\sigma$ with systematic errors...
Keywords: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article

3. Asymmetric Organocatalytic Direct C(sp2)−H/C(sp3)−H Oxidative Cross-Coupling by Chiral Iodine Reagents
Angewandte Chemie International Edition, ISSN 1433-7851, 03/2014, Volume 53, Issue 13, pp. 3466-3469
An asymmetric organocatalytic direct C−H/C−H oxidative coupling reaction of N1,N3-diphenylmalonamides has been well established by using chiral...
Keywords: asymmetric catalysis | chirality | C−H functionalization | cross-coupling | organoiodine | BOND FORMATION SYNTHESIS | CATALYSIS | ACTIVATION | IODOARENES | ENANTIOSELECTIVE SYNTHESIS | IN-SITU GENERATION | CHEMISTRY, MULTIDISCIPLINARY | FUNCTIONALIZATION | M-CHLOROPERBENZOIC ACID | DEAROMATIZATION | OXYLACTONIZATION | Joining | Level (quantity) | Catalysis | Asymmetry | Catalysts | Iodine
Journal Article

4. Enantioselective Rh-Catalyzed Carboacylation of C=N Bonds via C−C Activation of Benzocyclobutenones
Journal of the American Chemical Society, ISSN 0002-7863, 01/2016, Volume 138, Issue 1, pp. 369-374
Herein we describe the first enantioselective Rh-catalyzed carboacylation of imines via C−C activation. In this transformation, the benzocyclobutenone...
Keywords: FUNCTIONALIZATION | CROSS-COUPLING REACTIONS | RHODIUM | REAGENTS | OLEFIN INSERTION | COMPLEXES | CHEMISTRY | CLEAVAGE | CHEMISTRY, MULTIDISCIPLINARY | CYCLOBUTENONES | ALKALOIDS | Catalysis | Cyclobutanes - chemistry | Cyclization | Stereoisomerism | Rhodium - chemistry
Journal Article

5. Evidence of a structure in $\bar{K}^{0} \Lambda_{c}^{+}$ consistent with a charged $\Xi_c(2930)^{+}$, and updated measurement of $\bar{B}^{0} \rightarrow \bar{K}^{0} \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-}$ at Belle
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1-8
We report evidence for the charged charmed-strange baryon $\Xi_c(2930)^+$ with a signal significance of 3.9$\sigma$ with systematic errors included. The charged $\Xi_c(2930)^+$...
Keywords: PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article

6. C(sp3)−H Bond Functionalization of Benzo[c]oxepines via C−O Bond Cleavage: Formal [3+3] Synthesis of Multisubstituted Chromans
Journal of Organic Chemistry, ISSN 0022-3263, 03/2018, Volume 83, Issue 6, pp. 3409-3416
An efficient base-promoted C(sp3)−H bond functionalization strategy for the synthesis of multisubstituted chromans from the formal [3+3] cycloaddition of...
Keywords: PRIVILEGED STRUCTURES | HETEROCYCLES | ORTHO-QUINONE METHIDES | REARRANGEMENT AROMATIZATION | CHEMISTRY | CHEMISTRY, ORGANIC | QUINODIMETHANES | SUBSTITUTED NAPHTHALENES | ALDEHYDES | GENERATION | DERIVATIVES
Journal Article

7. [title not captured]
Organic and Biomolecular Chemistry, ISSN 1477-0520, 2016, Volume 14, Issue 20, pp. 4554-4570
Various rhodium-catalyzed double C−H activations are reviewed. These powerful strategies have been developed to construct C−C bonds, which might be widely...
N-SULFONYL KETIMINES | POLYHETEROAROMATIC COMPOUNDS | DIRECT ARYLATION | ONE-POT SYNTHESIS | SEQUENTIAL CLEAVAGE | CHEMISTRY, ORGANIC | BOND ACTIVATION | INTERNAL ALKYNES | CASCADE OXIDATIVE ANNULATION | SIMPLE ARENES | CARBOXYLIC-ACIDS N-SULFONYL KETIMINES | POLYHETEROAROMATIC COMPOUNDS | DIRECT ARYLATION | ONE-POT SYNTHESIS | SEQUENTIAL CLEAVAGE | CHEMISTRY, ORGANIC | BOND ACTIVATION | INTERNAL ALKYNES | CASCADE OXIDATIVE ANNULATION | SIMPLE ARENES | CARBOXYLIC-ACIDS Journal Article Chemistry – A European Journal, ISSN 0947-6539, 11/2014, Volume 20, Issue 47, pp. 15605 - 15610 Among various types of radical reactions, the addition of carbon radicals to unsaturated bonds is a powerful tool for constructing new chemical bonds, in which... radical reactions | nickel | amides | esters | CO bond formation | Amides | Esters | C-O bond formation | Nickel | Radical reactions | ORGANIC-SYNTHESIS | RING EXPANSION | TRANSFER RAFT POLYMERIZATION | FRAGMENTATION CHAIN TRANSFER | BETA-KETO-ESTERS | CHEMISTRY, MULTIDISCIPLINARY | GAMMA-BUTYROLACTONES | REARRANGEMENT | CONSTRUCTION | CHEMISTRY | CYCLIZATION | Mathematical analysis | Unsaturated | Transformations | Bonding | Radicals | Carbonyls radical reactions | nickel | amides | esters | CO bond formation | Amides | Esters | C-O bond formation | Nickel | Radical reactions | ORGANIC-SYNTHESIS | RING EXPANSION | TRANSFER RAFT POLYMERIZATION | FRAGMENTATION CHAIN TRANSFER | BETA-KETO-ESTERS | CHEMISTRY, MULTIDISCIPLINARY | GAMMA-BUTYROLACTONES | REARRANGEMENT | CONSTRUCTION | CHEMISTRY | CYCLIZATION | Mathematical analysis | Unsaturated | Transformations | Bonding | Radicals | Carbonyls Journal Article Chemical Reviews, ISSN 0009-2665, 07/2017, Volume 117, Issue 13, pp. 8977 - 9015 Transition-metal-catalyzed activation of C-H and C-C bonds is a challenging area in synthetic organic chemistry. Among various methods to accomplish these... CHELATION-ASSISTED HYDROACYLATION | CATELLANI ORTHO-ARYLATION | ALKYLATION-ALKENYLATION REACTIONS | CONCOMITANT ORTHO-ALKYLATION | ONE-POT SYNTHESIS | INTERMOLECULAR HYDROACYLATION | CARBON-CARBON BONDS | TOSYLHYDRAZONE INSERTION REACTION | SUBSTITUTED ARYL IODIDES | TETRASUBSTITUTED HELICAL ALKENES | CHEMISTRY, MULTIDISCIPLINARY | Carbon-carbon composites | Transition metals | Catalysts | Metals | Chemical reactions | Activation | Carbon | Substrates | Molecules | Organic chemistry | Hydrogen bonds | Chemical bonds | Catalysis | Reaction mechanisms CHELATION-ASSISTED HYDROACYLATION | CATELLANI ORTHO-ARYLATION | ALKYLATION-ALKENYLATION REACTIONS | CONCOMITANT ORTHO-ALKYLATION | ONE-POT SYNTHESIS | INTERMOLECULAR HYDROACYLATION | CARBON-CARBON BONDS | TOSYLHYDRAZONE INSERTION REACTION | SUBSTITUTED ARYL IODIDES | TETRASUBSTITUTED HELICAL ALKENES | CHEMISTRY, MULTIDISCIPLINARY | Carbon-carbon composites | Transition metals | Catalysts | Metals | Chemical reactions | Activation | Carbon | Substrates | Molecules | Organic chemistry | Hydrogen bonds | Chemical bonds | Catalysis | Reaction mechanisms Journal Article 10. Copper(II)‐Promoted C–C Bond Formation by Oxidative Coupling of Two C(sp3)–H Bonds Adjacent to Carbonyl Group to Construct 1,4‐Diketones and Tetrasubstituted Furans European Journal of Organic Chemistry, ISSN 1434-193X, 02/2015, Volume 2015, Issue 4, pp. 876 - 885 The copper(II)‐promoted C–C bond formation from the coupling of two C(sp 3 )–H bonds that are adjacent to a carbonyl group was achieved. This protocol offers a... 
C–C coupling | Synthetic methods | Oxidation | Oxygen heterocycles | Copper | C-C coupling | ENANTIOSELECTIVE SYNTHESIS | AROMATIC DICARBOXYLIC-ACIDS | CATALYZED SYNTHESIS | TERTIARY-AMINES | 1,4-DICARBONYL COMPOUNDS | CHEMISTRY, ORGANIC | H ACTIVATION | FUNCTIONALIZATION | BENZYL KETONES | POLYSUBSTITUTED FURANS | REGIOSELECTIVE SYNTHESIS | Ketones | Copper compounds | Furans C–C coupling | Synthetic methods | Oxidation | Oxygen heterocycles | Copper | C-C coupling | ENANTIOSELECTIVE SYNTHESIS | AROMATIC DICARBOXYLIC-ACIDS | CATALYZED SYNTHESIS | TERTIARY-AMINES | 1,4-DICARBONYL COMPOUNDS | CHEMISTRY, ORGANIC | H ACTIVATION | FUNCTIONALIZATION | BENZYL KETONES | POLYSUBSTITUTED FURANS | REGIOSELECTIVE SYNTHESIS | Ketones | Copper compounds | Furans Journal Article 11. Rhodium(I)‐Catalyzed Decarbonylative Spirocyclization through CC Bond Cleavage of Benzocyclobutenones: An Efficient Approach to Functionalized Spirocycles Angewandte Chemie International Edition, ISSN 1433-7851, 02/2014, Volume 53, Issue 7, pp. 1891 - 1895 The rhodium‐catalyzed formation of all‐carbon spirocenters involves a decarbonylative coupling of trisubstituted cyclic olefins and benzocyclobutenones through... decarbonylation | CC activation | homogeneous catalysis | spirocycles | rhodium | C=C activation | CATALYSIS | ACTIVATION | INSERTION | CARBOACYLATION | CARBON-CARBON BOND | COMPLEXES | ELIMINATION | CHEMISTRY, MULTIDISCIPLINARY | OLEFINS | SYSTEMS | PALLADIUM(II) | CC activation | Polycyclic Compounds - chemistry | Cyclization | Stereoisomerism | Spiro Compounds - chemical synthesis | Molecular Structure | Catalysis | Rhodium - chemistry | Spiro Compounds - chemistry | Pathways | Olefins | Cleavage | Activation | Joining | Formations | Transformations | Bonding decarbonylation | CC activation | homogeneous catalysis | spirocycles | rhodium | C=C activation | CATALYSIS | ACTIVATION | INSERTION | CARBOACYLATION | CARBON-CARBON BOND | COMPLEXES | ELIMINATION | CHEMISTRY, MULTIDISCIPLINARY | OLEFINS | SYSTEMS | PALLADIUM(II) | CC activation | Polycyclic Compounds - chemistry | Cyclization | Stereoisomerism | Spiro Compounds - chemical synthesis | Molecular Structure | Catalysis | Rhodium - chemistry | Spiro Compounds - chemistry | Pathways | Olefins | Cleavage | Activation | Joining | Formations | Transformations | Bonding Journal Article 12. Palladium(II)-Catalyzed C-H Activation/C-C Cross-Coupling Reactions: Versatility and Practicality ANGEWANDTE CHEMIE-INTERNATIONAL EDITION, ISSN 1433-7851, 2009, Volume 48, Issue 28, pp. 5094 - 5115 In the past decade, palladium-catalyzed C-H activation/C-C bond-forming reactions have emerged as promising new catalytic transformations; however, development... 
C-C coupling | METHOXY GROUPS | X-RAY-STRUCTURE | METHYL-GROUPS | ORGANOTIN REAGENTS | AROMATIC-COMPOUNDS | CHEMISTRY, MULTIDISCIPLINARY | ARYLBORONIC ACIDS | SIGMA-BOND METATHESIS | organometallic reagent | C-H activation | KINETIC RESOLUTION | ORGANOMETALLIC CHEMISTRY | palladium | CATALYZED DIRECT ARYLATION | Alkenes - chemistry | Stereoisomerism | Molecular Conformation | Crystallography, X-Ray | Organometallic Compounds - chemistry | Catalysis | Palladium - chemistry C-C coupling | METHOXY GROUPS | X-RAY-STRUCTURE | METHYL-GROUPS | ORGANOTIN REAGENTS | AROMATIC-COMPOUNDS | CHEMISTRY, MULTIDISCIPLINARY | ARYLBORONIC ACIDS | SIGMA-BOND METATHESIS | organometallic reagent | C-H activation | KINETIC RESOLUTION | ORGANOMETALLIC CHEMISTRY | palladium | CATALYZED DIRECT ARYLATION | Alkenes - chemistry | Stereoisomerism | Molecular Conformation | Crystallography, X-Ray | Organometallic Compounds - chemistry | Catalysis | Palladium - chemistry Journal Article
Let $S$ be a nonempty bounded subset of $\mathbb{R}$. Define the set $kS = \{ks : s \in S\}$. We wish to prove that if $k \geq 0$, then $\sup(kS) = k\cdot \sup S$. I'm pretty sure the upper half of the proof is fine, but it's the lower half that attempts to show that $k\cdot \sup S \leq \sup(kS)$ with which I am concerned. I begin the lower half by, "However, the following argument(?)...". Proof Let $k \geq 0$ be an arbitrary constant. Since $S$ is bounded above, the completeness axiom entails the existence of the least upper bound $\sup S$. Hence, the following inequality is readily established for all $s \in S$ $$s \leq \sup S$$ Since $k \geq 0$ we can multiply the above inequality by $k$ $$ks \leq k\cdot \sup S$$ So the set $kS$ is bounded above. Further, we know that $kS \neq \emptyset$, because if we choose any $s \in S$ where $S \neq \emptyset$, then $ks \in kS$ by definition of $kS$. Hence, $kS$ has the least upper bound $\sup(kS)$. The second inequality shows that $k \cdot \sup S$ is an upper bound of $kS$, so we must have the inequality $$ \sup(kS) \leq k \cdot \sup S$$ (since $\sup(kS)$ is the smallest upper bound of $kS$ and $k \cdot \sup S$ is an upper bound of $kS$). However, the following argument(?) shows that $k \cdot \sup S \leq \sup(kS)$, thereby establishing the fact that $\sup(kS) = k \cdot \sup S$. Since $\sup S$ is the least upper bound of $S$, the number $\sup S - \epsilon$ for $\epsilon > 0$ is not an upper bound of $S$. Therefore, there exists a number $s' \in S$ such that $$\sup S - \epsilon < s'$$ Since $k\geq 0$, we can multiply the above inequality by $k$ to construct the following inequality $$k \cdot \sup S - k \cdot \epsilon \leq k \cdot s'$$ Due to the fact that, for every $k\cdot s \in kS$, $k\cdot s \leq \sup kS$, it follows that $k \cdot s' \leq \sup kS$. Thus by transitivity, we conclude that $$k \cdot \sup S - k \cdot \epsilon \leq \sup(kS)$$ Adding $k\cdot \epsilon$ to both sides of the above inequality, we have $$k \cdot \sup S \leq \sup(kS) + k\cdot \epsilon$$ Because the above inequality is true for every $\epsilon > 0$, we infer $$k \cdot \sup S \leq \sup(kS)$$ as desired. Therefore, $k \cdot \sup S = \sup(kS)$.
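A quick numerical sanity check of the identity (not a proof, of course; for a finite set the supremum is just the maximum, and the parameter values are illustrative):

import random

# For a finite set S, sup S = max S, so the claim sup(kS) = k*sup(S)
# becomes max(kS) = k*max(S) for k >= 0.
random.seed(0)
S = [random.uniform(-5.0, 3.0) for _ in range(1000)]
for k in [0.0, 0.5, 2.0, 7.3]:
    kS = [k * s for s in S]
    assert abs(max(kS) - k * max(S)) < 1e-9
print("max(kS) = k*max(S) holds for all sampled k")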
Let $M$ be a smooth manifold and $\nabla$ a connection. Let $\gamma :[a,b]\longrightarrow M$ be a $\mathcal C^\infty$ curve. I recall that if $X,Y\in \Gamma(M)$ and $f,g\in \mathcal C^\infty (M)$, then $$\nabla_{fX}(gY)= f X(g)Y+fg\nabla _XY.$$ So $\nabla _{\dot \gamma (t)}$ is the covariant derivative (the derivative along $\gamma$). We denote by $Y$ a vector field along $\gamma$, i.e. $Y_t\in T_{\gamma (t)}M$ for all $t$. Let $x^1,...,x^n$ be a coordinate system around $p=\gamma (t)$. I want to understand why $$\nabla _{\dot \gamma (t)}Y_t=\frac{\mathrm d a^j(t)}{\mathrm d t}\partial _j+\dot x^i a^j\nabla _{\partial _i}\partial _j$$ (using the Einstein summation convention and the fact that $\partial _i=\frac{\partial }{\partial x^i}$). My Idea In the coordinates $x^1,...,x^n$, $$\gamma (t)=(x^1(t),...,x^n(t)),$$ $$\dot\gamma (t)=\dot x^{i}(t)\partial_i$$ and $$Y_t=a^j(t)\partial _j|_{\gamma (t)}.$$ Then $$\nabla _{\dot \gamma (t)}Y_t=\nabla _{\dot x^i(t)\partial _i}(a^j\partial _j)=\dot x^{i}(t)\partial _i (a^j(t))\partial _j+\dot x^i(t)a^j\nabla _{\partial _i}\partial _j.$$ Q1) I don't understand why $\dot x^i\,\partial _i(a^j(t))=\frac{\mathrm d a^j(t)}{\mathrm d t}$. Q2) In my formula, is $\dot x^i(t)$ a scalar or a function? I have the impression that with my notation things are a little bit ambiguous, aren't they?
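For reference, here is the standard coordinate computation this question is about (a sketch; the key step is the chain rule applied to $a^j(t)=a^j(\gamma(t))$, which is exactly what Q1 asks): $$\nabla_{\dot\gamma(t)}Y_t=\nabla_{\dot x^i\partial_i}\!\left(a^j\partial_j\right)=\underbrace{\dot x^i\,\partial_i(a^j)}_{=\,\frac{\mathrm{d}a^j}{\mathrm{d}t}\ \text{by the chain rule}}\,\partial_j+\dot x^i a^j\,\nabla_{\partial_i}\partial_j=\left(\frac{\mathrm{d}a^k}{\mathrm{d}t}+\Gamma^k_{ij}\,\dot x^i a^j\right)\partial_k,$$ where the $\Gamma^k_{ij}$ are defined by $\nabla_{\partial_i}\partial_j=\Gamma^k_{ij}\partial_k$. In particular, for each fixed $t$ the quantity $\dot x^i(t)$ is a scalar (the $i$-th component of the velocity vector), which answers Q2.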
This answer combines @AntonioVargas and @GregMartin results. Let us start from the approximation in Laplace's method (https://en.wikipedia.org/wiki/Laplace%27s_method) $$\int_a^b h(x)e^{Mg(x)}dx \approx \sqrt{\frac{2\pi}{M|g''(x_0)|}}h(x_0)e^{Mg(x_0)}$$ and rewrite the general integral in the question as $$\int_0^1 \frac{4x^2}{1+x^2}e^{n\log\left(\frac{x(1-x)^2}{2}\right)} dx,$$ so we may identify $$h(x)=\frac{4x^2}{1+x^2},$$ $$g(x)=\log\left(\frac{x(1-x)^2}{2}\right),$$ $$g'(x)=\frac{1-3x}{x(1-x)},$$ $$g''(x)=-\frac{1-2x+3x^2}{x^2(1-x)^2}.$$ The position of the unique global maximum of $g(x)$ is obtained from $g'(x_0)=0$: $$x_0=\frac{1}{3}.$$ Substituting into $h(x)$, $g(x)$ and $g''(x)$, we obtain $$h(x_0)=h\left(\frac{1}{3}\right)=\frac{4\left(\frac{1}{3}\right)^2}{1+\left(\frac{1}{3}\right)^2}=\frac{2}{5},$$ $$g(x_0)=\log\left(\frac{x_0(1-x_0)^2}{2}\right) = \log\left(\frac{2}{27}\right),$$ $$g''(x_0)=g''\left(\frac{1}{3}\right)=-\frac{27}{2}.$$ Finally, $$\sqrt{\frac{2\pi}{M|g''(x_0)|}}h(x_0)e^{Mg(x_0)} = \sqrt{\frac{2\pi}{n\frac{27}{2}}}\,\frac{2}{5}\,e^{n\log\left(\frac{2}{27}\right)} = \frac{4}{15}\sqrt{\frac{\pi}{3n}}\left(\frac{2}{27}\right)^n,$$ so $$\int_0^1 \frac{x^{n+2}(1-x)^{2n}}{2^{n-2}(1+x^2)}dx \sim \frac{4}{15}\sqrt{\frac{\pi}{3n}}\left(\frac{2}{27}\right)^n$$ for large $n$. The asymptotic quality is therefore $$\theta = \lim_{n \to \infty} -\frac{\log\left(\frac{p_n}{q_n}-\pi\right)}{\log(q_n)} = \lim_{n \to \infty} -\frac{\log\left( \frac{4}{15}\sqrt{\frac{\pi}{3n}}\left(\frac{2}{27}\right)^n \right)}{\log\left( (2e^3)^n\right)} = \frac{3\log(3)-\log(2)}{3+\log(2)} \approx 0.7 < 1,$$ which is low.
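As a sanity check (a sketch, not part of the derivation; the helper names are mine), one can compare the asymptotic formula against direct numerical quadrature:

import numpy as np
from scipy.integrate import quad

def integral(n):
    # Direct numerical evaluation of the integral in the question.
    f = lambda x: x**(n + 2) * (1 - x)**(2 * n) / (2**(n - 2) * (1 + x**2))
    val, _ = quad(f, 0.0, 1.0, epsabs=0.0, epsrel=1e-10)
    return val

def laplace(n):
    # The asymptotic approximation derived above: (4/15) sqrt(pi/(3n)) (2/27)^n.
    return 4.0 / 15.0 * np.sqrt(np.pi / (3 * n)) * (2.0 / 27.0)**n

for n in [5, 20, 80]:
    print(n, integral(n) / laplace(n))   # ratio tends to 1 as n grows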
I have a regression model $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 +u$. It is known that the sample means of both $x_1$ and $x_2$ are zero; moreover, the error term is said to be homoskedastic, and the standard error of the regression and the standard errors of the OLS estimates of the slope coefficients are given. Can we say something about the total sum of squares (i.e. $\sum_{i=1}^n (y_i - \bar{y})^2$), namely, can we somehow estimate its smallest possible value? What I have tried: $\sum_{i=1}^n (y_i - \bar{y})^2 = \sum_{i=1}^n(\hat{\beta_0} + \hat{\beta_1}x_{1i} + \hat{\beta_2}x_{2i} + \hat{u_i} - \bar{y})^2 = \sum_{i=1}^n (\hat{\beta_1}x_{1i} + \hat{\beta_2}x_{2i})^2 + \sum_{i=1}^n \hat{u_i}^2$. We know the second term, since we know the standard error. So it remains to estimate the first one. The standard errors of the OLS estimates of the slope coefficients are $$se(\hat{\beta_1}) = \sqrt{\dfrac{\sum_{i=1}^nx_{2i}^2}{\sum_{i=1}^nx_{1i}^2\sum_{i=1}^nx_{2i}^2- (\sum_{i=1}^nx_{1i}x_{2i})^2}} \hat{\sigma}$$ $$se(\hat{\beta_2}) = \sqrt{\dfrac{\sum_{i=1}^nx_{1i}^2}{\sum_{i=1}^nx_{1i}^2\sum_{i=1}^nx_{2i}^2- (\sum_{i=1}^nx_{1i}x_{2i})^2}} \hat{\sigma}$$ Since we know $se(\hat{\beta_1}), se(\hat{\beta_2}), \hat{\sigma}$, we can compute the ratio $\dfrac{\sum_{i=1}^nx_{1i}^2}{\sum_{i=1}^nx_{2i}^2}$. Moreover, we can express $\sum_{i=1}^nx_{1i}^2$ and $\sum_{i=1}^nx_{2i}^2$ through $(\sum_{i=1}^nx_{1i}x_{2i})^2$. Thus, $$\sum_{i=1}^n (\hat{\beta_1}x_{1i} + \hat{\beta_{2}}x_{2i})^2 = \hat{\beta_1}^2\sum_{i=1}^nx_{1i}^2 + \hat{\beta_2}^2\sum_{i=1}^nx_{2i}^2 + 2 \hat{\beta_1}\hat{\beta_2}\sum_{i=1}^nx_{1i}x_{2i}$$ The idea was to plug the expressions for $\sum_{i=1}^nx_{1i}^2$ and $\sum_{i=1}^nx_{2i}^2$ into the aforementioned expression and try to minimize it over $\sum_{i=1}^nx_{1i}x_{2i}$, treating $\hat{\beta_1}, \hat{\beta_2}$ as constants. But I do not think that this is the right way, since once we change $\sum_{i=1}^nx_{1i}x_{2i}$ we also change $\hat{\beta_1}, \hat{\beta_2}$, because the OLS formulas include $\sum_{i=1}^nx_{1i}x_{2i}$. Plugging in the formulas for the OLS estimates also does not seem to be a good idea, since these formulas include the sample covariances between $y$ and $x_1$, $x_2$, which we do not know. Here I got stuck. Could you please give me any hints on how to proceed? Thanks a lot in advance for any help!
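Not an answer to the minimization itself, but the decomposition used above can be illustrated numerically (a sketch with simulated data and illustrative parameter values):

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Centered regressors, as in the question (sample means of x1 and x2 are zero).
x1 = rng.normal(size=n); x1 -= x1.mean()
x2 = rng.normal(size=n); x2 -= x2.mean()
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.7, size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta
u = y - yhat

tss = np.sum((y - y.mean())**2)
ess = np.sum((yhat - y.mean())**2)  # equals sum (b1*x1 + b2*x2)^2 since x1, x2 are centered
rss = np.sum(u**2)
print(tss, ess + rss)               # the two numbers agree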
It's \(\displaystyle \log\left(\frac{A^-}{HA}\right)\). So are we to solve \((A^-)/(HA)\) as a single quantity? I thought of raising \(10\) to the power of both sides. If I take \(10\) to the power of \(\displaystyle \log\left(\frac{A^-}{HA}\right)\), do the \(10\) and the \(\log\) cancel out to leave just \((A^-)/(HA)\), while the other side, \(10\) to the power of \(4\), is just \(10^4\)? Is there a rule that \(10\) to a base-10 logarithm cancels to give back the argument? Then \((A^-)/(HA)\) is \(10^4\) over \(1\). Thanks. Do hear from you soon. Hi, thanks. It's actually a chemistry question, so it is the ratio of \(A^-\) (ionized) to \(HA\) (un-ionized), and I'm supposed to find that ratio. I'm just asking about the mathematics part: does \(10\) raised to \(\displaystyle \log\left(\frac{A^-}{HA}\right)\) reduce to \((A^-)/(HA)\)? Okay, so "\(A^-\)" and "\(HA\)" are just numbers that we could as well call "x" and "y" (for the non-chemists among us), and you are asking about \(\displaystyle \log\left(\frac{x}{y}\right)\). Yes: since these are base-10 logarithms, \(10^{\log_{10} t}=t\) for any \(t>0\), so raising \(10\) to both sides of \(\displaystyle \log\left(\frac{A^-}{HA}\right)= 4\) gives \(\displaystyle \frac{A^-}{HA}=10^{4}\). Equivalently, use the fact that \(\displaystyle \log\left(\frac{A^-}{HA}\right)= \log(A^-)- \log(HA)= 4\), so that \(\log(A^-)= \log(HA)+ 4\).
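The mathematics part can be checked in a couple of lines (base-10 logarithm and exponentiation undo each other):

import math

ratio = 10.0**4                  # if log10(A-/HA) = 4, then A-/HA = 10^4
print(math.log10(ratio))        # 4.0  -- log10(10^x) = x
print(10.0**math.log10(ratio))  # 10000.0  -- 10^(log10 y) = y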
The Kunen inconsistency, the theorem showing that there can be no nontrivial elementary embedding from the universe to itself, remains a focal point of large cardinal set theory, marking a hard upper bound at the summit of the main ascent of the large cardinal hierarchy, the first outright refutation of a large cardinal axiom. On this main ascent, large cardinal axioms assert the existence of elementary embeddings $j:V\to M$ where $M$ exhibits increasing affinity with $V$ as one climbs the hierarchy. The $\theta$-strong cardinals, for example, have $V_\theta\subset M$; the $\lambda$-supercompact cardinals have $M^\lambda\subset M$; and the huge cardinals have $M^{j(\kappa)}\subset M$. The natural limit of this trend, first suggested by Reinhardt, is a nontrivial elementary embedding $j:V\to V$, the critical point of which is accordingly known as a Reinhardt cardinal. Shortly after this idea was introduced, however, Kunen famously proved that there are no such embeddings, and hence no Reinhardt cardinals in ZFC. Since that time, the inconsistency argument has been generalized by various authors, including Harada [1] (p. 320-321), Hamkins, Kirmayer and Perlmutter [2], Woodin [1] (p. 320-321), Zapletal [3] and Suzuki [4, 5]. There is no nontrivial elementary embedding $j:V\to V$ from the set-theoretic universe to itself. There is no nontrivial elementary embedding $j:V[G]\to V$ of a set-forcing extension of the universe to the universe, and neither is there $j:V\to V[G]$ in the converse direction. More generally, there is no nontrivial elementary embedding between two ground models of the universe. More generally still, there is no nontrivial elementary embedding $j:M\to N$ when both $M$ and $N$ are eventually stationary correct. There is no nontrivial elementary embedding $j:V\to \text{HOD}$, and neither is there $j:V\to M$ for a variety of other definable classes, including gHOD and the $\text{HOD}^\eta$, $\text{gHOD}^\eta$. If $j:V\to M$ is elementary, then $V=\text{HOD}(M)$. There is no nontrivial elementary embedding $j:\text{HOD}\to V$. More generally, for any definable class $M$, there is no nontrivial elementary embedding $j:M\to V$. There is no nontrivial elementary embedding $j:\text{HOD}\to\text{HOD}$ that is definable in $V$ from parameters. It is not currently known whether the Kunen inconsistency may be undertaken in ZF. Nor is it known whether one may rule out nontrivial embeddings $j:\text{HOD}\to\text{HOD}$ even in ZFC.
Metamathematical issues Kunen formalized his theorem in Kelley-Morse set theory, but it is also possible to prove it in the weaker system of Gödel-Bernays set theory. In each case, the embedding $j$ is a GBC class, and the elementarity of $j$ is asserted as $\Sigma_1$-elementarity, which implies $\Sigma_n$-elementarity when the two models have the same ordinals. Reinhardt cardinal Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself. References [1] Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.) [2] Hamkins, Joel David; Kirmayer, Greg; Perlmutter, Norman. Generalizations of the Kunen inconsistency. Ann. Pure Appl. Logic 163(12):1872-1890, 2012. [3] Zapletal, Jindrich. A new proof of Kunen's inconsistency. Proc. Amer. Math. Soc. 124(7):2203-2204, 1996. [4] Suzuki, Akira. Non-existence of generic elementary embeddings into the ground model. Tsukuba J. Math. 22(2):343-347, 1998. [5] Suzuki, Akira. No elementary embedding from $V$ into $V$ is definable from parameters. J. Symbolic Logic 64(4):1591-1594, 1999.
Is it true that if the average of a continuous function $f:\mathbb{R}^2\rightarrow[0,1]$ over a unit circle centered around $(x,y)$ is $f(x,y)$ for all $(x,y)\in\mathbb{R}^2$, then $f$ is necessarily constant? Yes, any such $f$ is constant. In fact, if we relax the condition so that $f$ is only required to be bounded below, but not above, then it is still true that $f$ is constant. This can be proven by martingale theory, as can the statement that harmonic functions bounded below are constant (Liouville's theorem). Let $X_1,X_2,\ldots$ be a sequence of independent random variables uniformly distributed on the unit circle, set $S_n=\sum_{m=1}^nX_m$ and let $\mathcal{F}_n$ be the sigma-algebra generated by $X_1,X_2,\ldots,X_n$. Then $S_n$ is a random walk in the plane, and is recurrent. Your condition is equivalent to $\mathbb{E}[f(S_{n+1})\vert\mathcal{F}_n]=f(S_n)$. That is, $f(S_n)$ is a martingale. It is a standard result that a martingale which is bounded below converges to a limit, with probability one. However, as $S_n$ is recurrent, this only happens if $f$ is constant almost everywhere. By continuity of $f$, it must be constant everywhere. For the same argument applied to functions $f\colon\mathbb{Z}^2\to\mathbb{R}$, see Byron Schmuland's answer to this math.SE question. In general, for a continuous function $f\colon\mathbb{R}^2\to\mathbb{C}$, if $f(x,y)$ is the average of $f$ on the unit circle centered at $(x,y)$ then it does not follow that $f$ is harmonic. So, we cannot prove the result directly by applying Liouville's theorem. As an example (based on the comments by Gerald Edgar and by me), consider $f(x,y)=\exp(ax)$. The average of $f$ on the unit circle centered at $(x,y)$ is $$\frac{1}{2\pi}\int_0^{2\pi}f(x+\cos t,y+\sin t)\,dt=\frac{1}{2\pi}f(x,y)\int_0^{2\pi}e^{a\cos t}\,dt=f(x,y)I_0(a).$$ Here, $I_0(a)$ is the modified Bessel function of the first kind. Whenever $I_0(a)=1$, $f$ satisfies the required property. This is true for $a=0$, in which case $f$ is constant, but there are also nonzero solutions such as $a\approx1.88044+6.94751i$. In that case $f$ satisfies the required property but is not harmonic. Functions whose value at each point equals their average over every circle about that point are called harmonic functions. One of the simplest examples is $f(x,y)=xy$. More generally, the real part of any holomorphic function $\mathbb C\to \mathbb C$ is a harmonic function $\mathbb C\to \mathbb R$. Added later: As mentioned by Gerald, harmonic functions are characterized by the property that $$\int_0^1f(z+re^{2\pi i\theta})\,d\theta=f(z),\qquad \forall r\ge 0,\quad \forall z\in \mathbb C.$$ I don't know whether that property for $r=1$ implies that property for all $r\ge 0$. Partial answer to the edited question: If you require the function to be bounded, then I think that yes, that should force it to be constant. Liouville's theorem states that any bounded holomorphic function $\mathbb C\to \mathbb C$ is constant. There is also a version of Liouville's theorem for harmonic functions, so yes: the function is constant. Gap in the argument: why is the function harmonic?
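The nonzero solution mentioned above can be checked numerically (a sketch; the value of $a$ is the approximate root of $I_0(a)=1$ quoted in the answer):

import numpy as np

a = 1.88044 + 6.94751j   # approximate complex root of I0(a) = 1 from the answer
t = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
# (1/2pi) * integral over [0, 2pi] of e^{a cos t} dt equals I0(a); a uniform
# grid average is an accurate quadrature for this periodic integrand.
I0 = np.exp(a * np.cos(t)).mean()
print(I0)   # approximately 1 + 0j, so f(x,y) = exp(a x) has the unit-circle mean property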
From my understanding, both are estimators that are based on first providing an unbiased statistic $T(X)$ and obtaining the root of the equation: $$c(X) \left( T(X) - E(T(X)) \right) = 0$$ Secondly, both are in some sense "nonparametric" in that, regardless of what the actual probability model for $X$ may be, if you think of $T(\cdot)$ as a meaningful summary of the data, then you will be consistently estimating that "thing" regardless of whether that thing has any probabilistic connection with the actual probability model for the data (e.g., estimating the mean from Weibull-distributed failure times without censoring). However, the method of moments seems to insinuate that the $T(X)$ of interest must be a moment for a readily assumed probability model; yet one estimates it with an estimating equation and not maximum likelihood (even though they may agree, as is the case for means of normally distributed random variables). Calling something a "moment" to me has the connotation of insinuating a probability model. However, supposing for instance we have log-normally distributed data, is the method of moments estimator for the 3rd central moment based on the 3rd sample moment, e.g. $$\hat{\mu}_3 = \frac{1}{n}\sum_{i=1}^n \left( X_i - \bar{X} \right)^3?$$ Or does one estimate the first and second moments, transform them to estimate the probability model parameters $\mu$ and $\sigma$ (whose estimates I will denote with hat notation), and then use these estimates as plug-ins for the derived skewness of lognormal data, i.e. $$ \hat{\mu}_3 = \left( \exp \left( \hat{\sigma}^2 \right) + 2\right) \sqrt{\exp \left( \hat{\sigma}^2\right)-1}?$$
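A small simulation can contrast the two routes the question describes (an illustrative sketch with made-up parameters; since the plug-in formula above is the lognormal skewness, the direct route here uses the standardized third central moment for comparability):

import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# Direct "sample moment" route: standardized third central moment.
m2 = np.mean((x - x.mean())**2)
m3 = np.mean((x - x.mean())**3)
skew_direct = m3 / m2**1.5

# Plug-in route: recover sigma^2 from the first two moments of a lognormal
# (Var/E^2 = exp(sigma^2) - 1), then evaluate the lognormal skewness formula.
m1 = x.mean()
sigma2 = np.log(1.0 + m2 / m1**2)
skew_plugin = (np.exp(sigma2) + 2.0) * np.sqrt(np.exp(sigma2) - 1.0)

print(skew_direct, skew_plugin)   # close, but generally not identical in finite samples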
The distortion factor between size in $uv$-space and size in $xy$-space is called the Jacobian. Let's see why the Jacobian is the distortion factor in general for a mapping $${\bf \Phi} : (u,\, v) \ \to \ (x(u,\,v),\, y(u,\,v)) \ = \ x(u,\,v)\, {\bf i} +y(u,\,v)\, {\bf j}\,, $$ making good use of all the vector calculus we've developed so far. Let $Q = [a,\,a+h]\times [c,\,c+k]$ be a rectangle in the $uv$-plane and ${\bf \Phi}(Q)$ its image in the $xy$-plane. For small $h$ and $k$, the image ${\bf \Phi}(Q)$ is approximately the parallelogram spanned by the edge vectors $${\bf u} \ = \ {\bf \Phi}(a+h,\,c) - {\bf \Phi}(a,\,c)\,, \qquad {\bf v} \ = \ {\bf \Phi}(a,\,c+k) - {\bf \Phi}(a,\,c)\,.$$ The area of the parallelogram spanned by ${\bf u} = u_1 {\bf i} + u_2 {\bf j}$ and ${\bf v} = v_1 {\bf i} + v_2 {\bf j}$ is the absolute value of the determinant $\left | \begin{matrix} u_1 & v_1 \cr u_2 & v_2 \end{matrix}\right |$. By the definition of partial derivatives, $$\frac{{\bf \Phi}(a+h,\,c) - {\bf \Phi}(a,\,c)}{h} \ \approx \ \frac{\partial {\bf \Phi}}{\partial u}\Big|_{(a,c)}\ = \ \frac{\partial x}{\partial u}\Big|_{(a,c)}\, {\bf i} + \frac{\partial y}{\partial u}\Big|_{(a,c)}\, {\bf j} \,,$$ $$\frac{{\bf \Phi}(a,\,c+k) - {\bf \Phi}(a,\,c)}{k} \ \approx \ \frac{\partial {\bf \Phi}}{\partial v}\Big|_{(a,c)}\ = \ \frac{\partial x}{\partial v}\Big|_{(a,c)}\, {\bf i} + \frac{\partial y}{\partial v}\Big|_{(a,c)}\, {\bf j}\,.$$ We then compute $$\hbox{area}({\bf \Phi}(Q)) \approx \left | \begin{matrix} u_1 & v_1 \cr u_2 & v_2 \end{matrix}\right | \ \approx \ \left | \begin{matrix} h \frac{\partial x}{\partial u} & k \frac{\partial x}{\partial v} \cr h \frac{\partial y}{\partial u} & k \frac{\partial y}{\partial v} \end{matrix} \right | \ = \ hk \left | \begin{matrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \cr \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}\end{matrix} \right |. $$ That is, the area of the image is approximately the absolute value of the Jacobian times the area of the corresponding rectangle in $uv$-space. So why didn't we see an absolute value in the change-of-variables formula in one dimension? This had to do with the way we write the limits of integration.
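As a quick numerical illustration (a sketch; the polar-coordinates map is my choice of example, and its Jacobian determinant is $u$):

import numpy as np

def Phi(u, v):
    # Polar-coordinates map (u, v) -> (u cos v, u sin v); Jacobian determinant = u.
    return np.array([u * np.cos(v), u * np.sin(v)])

a, c = 2.0, 0.5          # corner of the rectangle Q in the uv-plane
h = k = 1e-4             # side lengths of Q
# Edge vectors of the image "parallelogram".
U = Phi(a + h, c) - Phi(a, c)
V = Phi(a, c + k) - Phi(a, c)
area_image = abs(U[0] * V[1] - U[1] * V[0])   # |det [U V]|
print(area_image, a * h * k)                  # both approximately |Jacobian| * area(Q)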
I am struggling to understand intuitively why the harmonic series diverges but the p-harmonic series converges. I know there are methods available to prove convergence, but I am only having trouble understanding intuitively why it is so. I know I must never trust my intuition, but this is hard for me to grasp. In both cases, the terms of the series are getting smaller, hence are approaching zero, but the two series behave differently: $$\sum_{n=1}^{\infty}\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots \quad\text{diverges,}$$ $$\sum_{n=1}^{\infty}\frac{1}{n^2}=1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots \quad\text{converges.}$$ Firstly, you should always use your intuition. If you find that your intuition was correct, then smile. If you find that your intuition was wrong, use the experience to fine-tune your intuition. I hope I'm interpreting your question correctly - here goes. Since you are not interested in any of the proofs, I'll just focus on intuition. Now, let's consider a series of the form $\sum _n \frac{1}{n^p}$, with $p>0$ a parameter. Intuitively, the convergence or divergence of the series depends on how fast the general term $\frac{1}{n^p}$ tends to $0$. This is so because the sum is that of infinitely many positive quantities. If these quantities converge to $0$ too slowly, the number of summands in each partial sum will be more dominant than the magnitude of the summands. However, if the quantities converge to $0$ fast enough, then in each partial sum the magnitude of the summands will be dominated by numbers of small magnitude, and thus outweigh the fact that there are lots of summands. So, the question is how fast does $\frac{1}{n^p}$ converge to $0$. Let's look at some extreme values of $p$. If $p$ is very large, say $p=1000$, then $\frac{1}{n^p}$ becomes very small very fast (experiment with computing just a few values to see that). So, when $p$ is large, it seems the general term converges to $0$ very fast, and thus we'd expect the series to converge. However, if the value of $p$ is very small, say $p=\frac{1}{1000}$, then $\frac{1}{n^p}$ is actually pretty large for the first few possibilities for $n$, and while it does monotonically tend to $0$, it does so very slowly. So, we'd expect the series to diverge when $p$ is small. Now, if $0<p<q$ then $\frac{1}{n^q}<\frac{1}{n^p}$, so the bigger the parameter the faster the convergence of the general term to $0$ gets. So, small values of the parameter imply divergence of the series, while large values of the parameter imply convergence of the series. So, somewhere in the middle there has to be a value $b$ for the parameter such that if $p<b$ then the series diverges, while if $p>b$ then the series converges. So, just by this straightforward analysis of the behaviour with respect to varying the parameter $p$, we know (intuitively) that there must be some cut-off value for $p$ that is the gateway between convergence and divergence. What happens at that gateway value for $p$ is unclear, and there is no compelling reason to suspect one behaviour of the series over another. Now, the particular whereabouts of that special gateway value for $p$ should depend strongly on the particularities of the general term. This is thus where you'll have to delve into more rigorous proofs. I hope this rather lengthy answer addresses what you were wondering about.
Basically, it says that a cutoff parameter must exist, but we can't expect to say anything about its whereabouts nor the behaviour at that cutoff value without careful study of the general terms. We produce two series that are close in spirit to the series you mentioned. Perhaps the divergence of the first, and the convergence of the second, will be clearer. Consider the series $$\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\frac{1}{16}+\cdots.$$ So there is $1$ term equal to $\frac{1}{2}$, then a block of $2$ each equal to $\frac{1}{4}$, then a block of $4$ each equal to $\frac{1}{8}$, then a block of $8$ each equal to $\frac{1}{16}$, and so on forever. Each block has sum $\frac{1}{2}$, so if you add enough terms, your sum will be very big. But it will take an awful lot of terms to add up to $1000$, many more terms than there are atoms in the universe. Note that each term is less than the corresponding term in the harmonic series, so if you add together enough terms of the harmonic series, the sum will be very big. Now consider the series $$\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{4^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\frac{1}{8^2}+\cdots.$$ Each term is $\ge$ the corresponding term in the series $1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+\cdots$. Again, we find the sums of the blocks. The first block has sum $1$. The second has sum $\frac{1}{2}$. The third has sum $\frac{1}{4}$. The fourth has sum $\frac{1}{8}$, and so on forever. So if we add up "all" the terms, we get sum $1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots$, a familiar series with sum $2$. Intuitively, the main argument why the harmonic series diverges is that $$\sum_{n=k}^{2k}\frac{1}{n}>k\cdot\frac{1}{2k}=\frac{1}{2}\quad\text{for every }k,$$ since the smallest element is $\frac{1}{2k}$ and there are at least $k$ elements in the interval $[k,2k]$. So the harmonic sum over any finite interval $[k,2k]$ is greater than $0.5$. So if you split the infinite range of summation into the intervals $[1,k],[k,2k],[2k,4k],\ldots,[2^nk,2^{n+1}k],\ldots$, each interval contributes more than $0.5$, and since infinitely many such intervals are needed to cover all of $\mathbb{N}$, the series diverges. Basically the terms get smaller and smaller, but not fast enough to converge to a limit. The terms of the p-harmonic series, on the other hand, because of the square in the denominator, do get smaller fast enough, and the series converges. To see it on an intuitive level, think of it like this: you are adding infinitely many terms of the sequence, so in order to converge to a limit $L$ they have to get smaller at a certain "speed". Even if they get close to $0$, if the speed at which they get smaller is not high enough, then too much keeps being added and the sum never settles down. If you convert the sum to an integral, $$ \int_1^\infty \frac{1}{x^2}\, dx = -\frac{1}{x}\Big|_1^\infty = -\frac{1}{\infty} + 1 $$ converges, but $$ \int_1^\infty \frac{1}{x}\, dx = \ln x\,\Big|_1^\infty = \ln \infty $$ doesn't. I think the best way to understand this intuitively is to look at the graphs of the representative functions, in particular $1/x$ and $1/x^2$. You will see that the latter decreases much faster than the former, which means that it approaches $0$, in a fantastical sense, "faster" than $1/x$. Now, a series/sum can be thought of as a rough integral, so consider the area under each graph.
As you go out to infinity, the area under the graph of $1/x^2$ becomes much smaller much faster since, again, it is decreasing faster than $1/x$. Therefore, less and less area is being added to the sum as you reach higher and higher values of $n$ (the series substitute for $x$). However, with $1/x$ it just so happens that the area under the graph of the function does not decrease as quickly. Rather, the area under the graph is still significant enough to count towards the sum. Therefore, there must be some value of $p$ in $[1,2]$ where the split between converging and diverging occurs. And, though it may not seem so, $1/x^2$ is an extreme case: even exponents much closer to $1$, as long as they are greater than $1$, give convergence. It just so happens that $p=1$ is the end of the range of exponents that allows this series to diverge, and everything greater than $p=1$ converges. Hmm, I like the answer of @Andre Nicolas very much, so I would like to generalize that idea to arbitrary real exponents $p$ in the denominator. First let's state that a lower bound for any such series with real exponent $p$ is $1$, so let's write $L_p=1$. Second, let's restate the upper bounding series $U_2$ for exponent $p=2$, where the repeated terms are written as multiplications, as given by Andre: $$ U_2 = 1 + 2\left(\frac{1}{2^2}\right) + 4\left(\frac{1}{4^2}\right) + 8\left(\frac{1}{8^2}\right) +... $$ Now let's write this explicitly as powers of base 2: $$ U_2 = 1 + 2^1\left(\frac{1}{2^2}\right) + 2^2\left(\frac{1}{2^{2 \cdot 2}}\right) + 2^3\left(\frac{1}{2^{3 \cdot 2}}\right) +... $$ We see how this generalizes to some exponent $p$: $$ U_p = 1 + 2^1\left(\frac{1}{2^p}\right) + 2^2\left(\frac{1}{2^{2 \cdot p}}\right) + 2^3\left(\frac{1}{2^{3 \cdot p}}\right) +... $$ Now let's collect/cancel the exponents: $$ U_p = 1 + 2^{1-p} + 2^{2(1-p)} + 2^{3(1-p)} +... $$ But this is now a geometric series, $$ U_p = { 1\over 1-2^{1-p} }, $$ and this has a finite value for any $p=1+\epsilon$ with $\epsilon>0$, and is infinite for $\epsilon=0$, i.e. $p=1$. And if the upper bounding series has a positive finite value, and the lower bound is $1$, then the series in question must be convergent, with a value in between.
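To see the two behaviours numerically (a quick sketch; the harmonic partial sums grow like $\log n$, while the $p=2$ sums approach $\pi^2/6\approx 1.645$):

import numpy as np

n = np.arange(1, 10**6 + 1)
H = np.cumsum(1.0 / n)      # harmonic partial sums: grow like log(n) + 0.577...
P = np.cumsum(1.0 / n**2)   # p = 2 partial sums: approach pi^2/6 = 1.6449...
for m in [10**2, 10**4, 10**6]:
    print(m, H[m - 1], P[m - 1])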
The Riemann curvature tensor is denoted by $R^a_{bcd}$. If you attempt to parallel transport a tangent vector $\psi$ around an infinitesimal parallelogram whose non-parallel sides are given by two tangent vectors $\eta$ and $\nu$, the following holds true: $$\psi^a_{;cd} - \psi^a_{;dc} = R^a_{cdb}\psi^b,$$ where the $;$ indicates covariant differentiation. Here the $c$ and $d$ on the L.H.S. refer to the two directions of the parallelogram. My professor said that the Riemann curvature tensor measures the extent of the non-commutativity of covariant differentiation along different directions. In flat space, I would expect the Riemann tensor to be zero. But does this imply commutativity of covariant differentiation? Covariant differentiation of a vector along a direction measures how much the vector deviates during parallel transport, and one would not expect a deviation in flat space. So this should make the left hand side zero independently of the right hand side. What do you think? In flat space the covariant derivative reduces to the partial derivative (in Cartesian coordinates, where the connection coefficients vanish). The partial derivatives do commute, hence the L.H.S. vanishes and the R.H.S., that is the contraction of the Riemann tensor with $\psi$, is zero as well. Just a comment on the statement: it should be $$\psi^a_{;cd} - \psi^a_{;dc} = R^a_{bdc}\psi^b.$$ As far as I know, the indices of the non-parallel sides of the infinitesimal parallelogram should be the third and the fourth. A possible ambiguity remains in the sign, as you can decide which way around the loop to take as positive; however, that is a matter of convention. The right hand side in that equation is identically $0$, since the components of the vector are smooth and the covariant derivative reduces to the partial derivative; in fact the $\Gamma$s are zero in flat space in Cartesian coordinates.
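The point that "flat" is a statement about the metric, not the coordinates, can be checked symbolically: in polar coordinates on the flat plane the Christoffel symbols are nonzero, yet every component of the Riemann tensor still vanishes. A small sketch, using the standard formula $R^a_{\ bcd}=\partial_c\Gamma^a_{db}-\partial_d\Gamma^a_{cb}+\Gamma^a_{ce}\Gamma^e_{db}-\Gamma^a_{de}\Gamma^e_{cb}$:

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # flat Euclidean metric in polar coordinates
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
               - sp.diff(g[b, c], x[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def R(a, b, c, d):
    # Riemann tensor R^a_{bcd} from the Christoffel symbols.
    val = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    val += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b]
               for e in range(2))
    return sp.simplify(val)

print(sp.simplify(Gamma[0][1][1]))   # -r: a nonzero Christoffel symbol
print([R(a, b, c, d) for a in range(2) for b in range(2)
       for c in range(2) for d in range(2)])   # all 16 components are zero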
Question: Below is a diagram of the sound waves produced by two speakers, where the collection of dots represents vibrating air molecules and the distance between each compression (i.e., the region where the molecules are closer together and the pressure is higher) is also given. If you are standing near the speakers as they're playing, what beat frequency will you hear, assuming the speed of sound in air is {eq}343\ \dfrac ms {/eq}? Beat Frequency: When two sound waves of slightly different frequencies interfere, we hear a periodic alternation of loud and soft sound called beats. The beat frequency (the number of beats produced per second by the interference of the sound waves) is equal to the difference between the frequencies of the two interfering sounds. Answer and Explanation: Given: The speed of sound is {eq}v = 343 \ m/s {/eq}. (The wavelength of a sound wave is the distance between two consecutive compressions or two consecutive rarefactions.) The wavelength of the sound produced by the first speaker is {eq}\lambda_1 = 0.56 \ m {/eq}. The wavelength of the sound produced by the second speaker is {eq}\lambda_2 = 0.72 \ m {/eq}. The frequency of the sound produced by the first speaker will be {eq}\begin{align*} f_1 &= \dfrac{v}{\lambda_1} \\ f_1 &= \dfrac{343}{0.56} \\ f_1 &= 612.5 \ Hz \end{align*} {/eq} The frequency of the sound produced by the second speaker will be {eq}\begin{align*} f_2 &= \dfrac{v}{\lambda_2} \\ f_2 &= \dfrac{343}{0.72} \\ f_2 &= 476.39 \ Hz \end{align*} {/eq} The beat frequency we will hear will be {eq}\begin{align*} f_b &= f_1 -f_2 \\ f_b &= 612.5 - 476.39 \\ f_b &= \boxed{ 136.1 \ Hz} \end{align*} {/eq}
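The same arithmetic as a few lines of code (values taken from the worked solution above):

v = 343.0                  # speed of sound in air, m/s
lam1, lam2 = 0.56, 0.72    # distances between compressions (wavelengths), m
f1, f2 = v / lam1, v / lam2
print(f1, f2, abs(f1 - f2))   # 612.5 Hz, 476.4 Hz, beat frequency about 136.1 Hz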
This question is followed up from this question How to solve a certain coupled first order PDE system Here I consider the non-homogeneous advection system \begin{equation} \displaystyle\left\{\begin{array}{l} \frac{\partial U}{\partial t}+b_1\frac{\partial U}{\partial x}=(r+l_1)U-l_1V, \\ \frac{\partial V}{\partial t}+b_2\frac{\partial V}{\partial x}=(r+l_2)V-l_2 U, \\ \end{array}\right. t\in [s,T] \end{equation} with the following boundary conditions (where $W$ stands for either $U$ or $V$) \begin{equation} \left\{\begin{array}{l} W(x,t)=0 \ \ \text{as} \ \ x\to-\infty,\\ W(x,t)\sim S_0e^x \ \ \text{as} \ \ x\to\infty,\\ W(x,T)=\max \{ S_0e^x-100,0\}. \\ \end{array}\right. \end{equation} First I tried it with Mathematica using DSolve: DSolve[{ D[u[x, t], t] + b1*D[u[x, t], x] - (r + l1)*u[x, t] + l1*v[x, t] == 0, D[v[x, t], t] + b2*D[v[x, t], x] - (r + l2)*v[x, t] + l2*u[x, t] == 0 }, {u[x, t], v[x, t]}, {x, t} ]; Nothing comes out. I then tried it with Maple 12: sys := [diff(u(x,t),t)+b1*diff(u(x,t),x)=(r+l1)*u(x,t)-l1*v(x,t), diff(v(x,t),t)+b2*diff(v(x,t),x)=(r+l2)*v(x,t)-l2*u(x,t)]: sol:=pdsolve(sys,[u,v]) assuming b1>b2; Here is what I got. It seems that Maple 12 can find the solution, but I don't know what the symbol $\_C1$ in the output means. My questions here are: 1) Can we make Mathematica do the same job as Maple 12 does? One of my friends recommended the following approach: assume that the solution is of the form $u=u_0e^{d_1 t+d_2x}, v=v_0e^{d_1 t +d_2x}$; then find $u_t, v_t, u_x, v_x$, plug them into the system, and collect terms in $(u_0, v_0)$. This gives a linear system whose determinant must be zero in order to have a nontrivial solution of the advection system. At the end, you can find infinitely many pairs $(d_1,d_2)$, hence the solution can be represented in terms of an infinite series (which is not desirable). My next questions are: 1) How can he guess such a form of the solution? (I am not saying my friend is incorrect, but I am not sure whether we can assume such a form.) 2) Using the asymptotic limit $U, V\sim S_0 e^x$, can I assume that $d_2\equiv 1$? 3) Is there any other simple way to solve the above system? Thank you so much for your time. I really appreciate it.
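To make the friend's ansatz concrete (a worked step, not new material: just substitute and cancel the common exponential $e^{d_1t+d_2x}$): since $u_t=d_1u$ and $u_x=d_2u$, and likewise for $v$, the system reduces to the linear equations $$\begin{pmatrix} d_1+b_1d_2-(r+l_1) & l_1 \\ l_2 & d_1+b_2d_2-(r+l_2)\end{pmatrix}\begin{pmatrix}u_0\\ v_0\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},$$ which has a nontrivial solution $(u_0,v_0)\neq(0,0)$ exactly when the determinant vanishes: $$\bigl(d_1+b_1d_2-r-l_1\bigr)\bigl(d_1+b_2d_2-r-l_2\bigr)-l_1l_2=0.$$ In particular, fixing $d_2=1$ to match the boundary behaviour $U,V\sim S_0e^x$ leaves a quadratic in $d_1$ with (generically) two roots, so the ansatz alone produces only a small family of solutions; it is the terminal condition at $t=T$ that would force the infinite superposition mentioned above.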
Of course some math will be involved, but it's not much: Euclid would have understood it well. All you really need to know is how to add and rescale vectors. Although this goes by the name of "linear algebra" nowadays, you only need to visualize it in two dimensions. This enables us to avoid the matrix machinery of linear algebra and focus on the concepts. A Geometric Story In the first figure, $y$ is the sum of $y_{\cdot 1}$ and $\alpha x_1$. (A vector $x_1$ scaled by a numeric factor $\alpha$; Greek letters $\alpha$ (alpha), $\beta$ (beta), and $\gamma$ (gamma) will refer to such numerical scale factors.) This figure actually began with the original vectors (shown as solid lines) $x_1$ and $y$. The least-squares "match" of $y$ to $x_1$ is found by taking the multiple of $x_1$ that comes closest to $y$ in the plane of the figure. That's how $\alpha$ was found. Taking this match away from $y$ left $y_{\cdot 1}$, the residual of $y$ with respect to $x_1$. ( The dot "$\cdot$" will consistently indicate which vectors have been "matched," "taken out," or "controlled for.") We can match other vectors to $x_1$. Here is a picture where $x_2$ was matched to $x_1$, expressing it as a multiple $\beta$ of $x_1$ plus its residual $x_{2\cdot 1}$: (It does not matter that the plane containing $x_1$ and $x_2$ could differ from the plane containing $x_1$ and $y$: these two figures are obtained independently of each other. All they are guaranteed to have in common is the vector $x_1$.) Similarly, any number of vectors $x_3, x_4, \ldots$ can be matched to $x_1$. Now consider the plane containing the two residuals $y_{\cdot 1}$ and $x_{2 \cdot 1}$. I will orient the picture to make $x_{2\cdot 1}$ horizontal, just as I oriented the previous pictures to make $x_1$ horizontal, because this time $x_{2\cdot 1}$ will play the role of matcher: Observe that in each of the three cases, the residual is perpendicular to the match. (If it were not, we could adjust the match to get it even closer to $y$, $x_2$, or $y_{\cdot 1}$.) The key idea is that by the time we get to the last figure, both vectors involved ($x_{2\cdot 1}$ and $y_{\cdot 1}$) are already perpendicular to $x_1$, by construction. Thus any subsequent adjustment to $y_{\cdot 1}$ involves changes that are all perpendicular to $x_1$. As a result, the new match $\gamma x_{2\cdot 1}$ and the new residual $y_{\cdot 12}$ remain perpendicular to $x_1$. (If other vectors are involved, we would proceed in the same way to match their residuals $x_{3\cdot 1}, x_{4\cdot 1}, \ldots$ to $x_2$.) There is one more important point to make. This construction has produced a residual $y_{\cdot 12}$ which is perpendicular to both $x_1$ and $x_2$. This means that $y_{\cdot 12}$ is also the residual in the space (three-dimensional Euclidean realm) spanned by $x_1, x_2,$ and $y$. That is, this two-step process of matching and taking residuals must have found the location in the $x_1, x_2$ plane that is closest to $y$. Since in this geometric description it does not matter which of $x_1$ and $x_2$ came first, we conclude that if the process had been done in the other order, starting with $x_2$ as the matcher and then using $x_1$, the result would have been the same. (If there are additional vectors, we would continue this "take out a matcher" process until each of those vectors had had its turn to be the matcher. In every case the operations would be the same as shown here and would always occur in a plane.) 
Application to Multiple Regression This geometric process has a direct multiple regression interpretation, because columns of numbers act exactly like geometric vectors. They have all the properties we require of vectors (axiomatically) and therefore can be thought of and manipulated in the same way with perfect mathematical accuracy and rigor. In a multiple regression setting with variables $X_1$, $X_2, \ldots$, and $Y$, the objective is to find a combination of $X_1$ and $X_2$ (etc.) that comes closest to $Y$. Geometrically, all such combinations of $X_1$ and $X_2$ (etc.) correspond to points in the $X_1, X_2, \ldots$ space. Fitting multiple regression coefficients is nothing more than projecting ("matching") vectors. The geometric argument has shown that Matching can be done sequentially and The order in which matching is done does not matter. The process of "taking out" a matcher by replacing all other vectors by their residuals is often referred to as "controlling" for the matcher. As we saw in the figures, once a matcher has been controlled for, all subsequent calculations make adjustments that are perpendicular to that matcher. If you like, you may think of "controlling" as "accounting (in the least-squares sense) for the contribution/influence/effect/association of a matcher on all the other variables." References You can see all this in action with data and working code in the answer at https://stats.stackexchange.com/a/46508. That answer might appeal more to people who prefer arithmetic over plane pictures. (The arithmetic needed to adjust the coefficients as matchers are sequentially brought in is nonetheless straightforward.) The language of matching is from Fred Mosteller and John Tukey.
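To make the matching recipe concrete, here is a small numerical sketch (my own illustration, in Python; the data and names are arbitrary) showing that matching $y$ and $x_2$ to $x_1$, then matching residual to residual, reproduces the multiple-regression coefficient of $x_2$.

import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n) + 0.5 * x1          # correlated with x1 on purpose
y = 1.5 * x1 - 2.0 * x2 + rng.normal(size=n)

def match(target, matcher):
    """Least-squares match: return (scale factor, residual)."""
    coef = matcher @ target / (matcher @ matcher)
    return coef, target - coef * matcher

_, y_dot1 = match(y, x1)                    # y.1: y with x1 taken out
_, x2_dot1 = match(x2, x1)                  # x2.1: x2 with x1 taken out
gamma, _ = match(y_dot1, x2_dot1)           # match residual to residual

# gamma equals the x2 coefficient from the full least-squares fit:
beta = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)[0]
print(gamma, beta[1])                       # the two numbers agree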
I confess, in 1965 or so, I first thought that the version of the SvKT (Seifert-van Kampen Theorem) for the fundamental groupoid enabled us to get rid of base points. But then I wanted to calculate the fundamental group of the circle, and gradually realised that we needed $\pi_1(X,A)$, the fundamental groupoid on a set $A$ of base points chosen according to the geometry. Here is an example of the kind of situation which the standard texts do not cover, and for which covering space methods are not ideal. In fact one needs combinatorics and combinatorial group(oid) theory to calculate individual $\pi_1(X,a)$ from $\pi_1(X,A)$. See my book Topology and Groupoids and also the classic 1971 book (downloadable) by Philip Higgins, Categories and Groupoids. Groupoids model homotopy 1-types. So one first determines the 1-type before calculating an individual fundamental group. This idea is nicely modelled in higher dimensions: one can calculate some 2-types and higher types as "big" algebraic objects inside which are the homotopy groups one might want. The methods are explained more in a talk given in Paris on June 5, 2014, at the IHP, available on my preprint page. As Mariano points out, this has been discussed elsewhere on stackexchange and mathoverflow. July 14: The most general version of the SvKT is given in [41] (downloadable) on my publication list, R. Brown and A. Razak, ``A van Kampen theorem for unions of non-connected spaces'', Archiv. Math. 42 (1984) 85-88, in the form of a coequaliser statement when given an open cover and a set $A$ of base points which meets each path component of each 1-, 2-, and 3-fold intersection of the sets of the cover. The style of proof goes back to Crowell's original version, and has the advantage of generalising to higher dimensions, for example in [32] R. Brown and P.J. Higgins, ``Colimit theorems for relative homotopy groups'', J. Pure Appl. Algebra 22 (1981) 11-41. The retraction from the version for the full fundamental groupoid, $A=X$, to this version is quite difficult to manage and is done in May's "Concise..." book, without the refinement on the conditions; I feel it is the wrong way to go, though it is quite elegant for the pushout version in dimension 1. July 17, 2014: Actually I have missed out three reasons why one is interested in the fundamental groupoid $\pi_1 X$. The notion of fibration of groupoids is relevant to topology, particularly in the construction of operations of groupoids on homotopy sets, and exact sequences. If $p: E \to B$ is a fibration of spaces then $\pi_1 p: \pi_1 E \to \pi_1 B$ is a fibration of groupoids. This is exploited in Chapter 7 of Topology and Groupoids. See for example arXiv:1207.6404 for other current uses of the algebra of groupoids, and in particular fibrations. Similarly if $p: E \to B$ is a covering map of spaces then on applying $\pi_1$ we get a covering morphism of groupoids. Thus a map is modelled by a morphism, and this often makes the theory easier to follow (IMHO!), particularly with regard to questions of lifting maps. See Chapter 10 of T&G. If $G$ is a (discrete) group acting on a space $X$ then it also acts on the fundamental groupoid $\pi_1 X$. So we have not only orbit spaces $X/G$ but also orbit groupoids $(\pi_1 X)/\!/G$. There is a canonical morphism $(\pi_1 X)/\!/G \to \pi_1(X/G)$ and there are useful conditions which ensure this is an isomorphism, e.g. $X$ is Hausdorff, has a universal cover, and the action is properly discontinuous. See Chapter 11 of T&G.
One example given there is the cyclic group $Z_2$ acting on $X \times X$, whose orbit space is the symmetric square of $X$. Under useful conditions, its fundamental group is that of $X$ made abelian. It would be good to see lots more examples. Aug 17, 2016: I can now refer to my answer to my own mathoverflow question on the relation of the van Kampen Theorem to the notion of descent. Oct 17, 2016: I should add that a whole area of research into the use of strict higher groupoids in algebraic topology, one part of which is described in the book Nonabelian Algebraic Topology (EMS 2011, 703 pages), arose out of seeking generalisations to higher dimensions of the use of the fundamental groupoid. December 23, 2016: It may be useful to point out that a new volume by Bourbaki, "Topologie algébrique", Ch. 1-4 (Springer) 2016, uses the fundamental groupoid extensively, and relates its use to descent theory. It does have results on orbit spaces, but gives no examples of applications. It does not use the fundamental groupoid on a set of base points.
Basic Electric Guitar Circuits 2: Potentiometers & Tone Capacitors Part 2: Potentiometers and Tone Capacitors What is a Potentiometer? Potentiometers, or "pots" for short, are used for volume and tone control in electric guitars. They allow us to alter the electrical resistance in a circuit at the turn of a knob. Drawing of physical potentiometers depicting terminals 1, 2, and 3 Drawing of potentiometer schematic depicting terminals 1, 2, and 3 It is useful to know the fundamental relationship between voltage, current and resistance, known as Ohm's Law, when trying to understand how electric guitar circuits work. The guitar pickups provide the voltage and current source, while the potentiometers provide the resistance. From Ohm's Law we can see how increasing resistance decreases the flow of current through a circuit, while decreasing the resistance increases the current flow. If two circuit paths are provided from a common voltage source, more current will flow through the path of least resistance. Ohm's Law$$V = I \times R$$ where $V$ = voltage, $I$ = current, and $R$ = resistance Basic Electric Guitar Circuit Alternative functional terminal names: Terminal 1 = "Cold", Terminal 2 = "Wiper", Terminal 3 = "Hot" A visual representation of how a potentiometer works (based on a 300° rotation) We can visualize the operation of a potentiometer from the drawing above. Imagine a resistive track connected from terminal 1 to 3 of the pot. Terminal 2 is connected to a wiper that sweeps along the resistive track when the potentiometer shaft is rotated from 0° to 300°. This changes the resistance from terminals 1 to 2 and 2 to 3 simultaneously, while the resistance from terminal 1 to 3 remains the same. As the resistance from terminal 1 to 2 increases, the resistance from terminal 2 to 3 decreases, and vice-versa. Tone Control: Variable Resistors & Tone Capacitors Tone pots are connected using only terminals 1 and 2 for use as a variable resistor whose resistance increases with a clockwise shaft rotation. The tone pot works in conjunction with the tone capacitor ("cap") to serve as an adjustable high frequency drain for the signal produced by the pickups. Tone Circuit The tone pot's resistance is the same for all signal frequencies; however, the capacitor has an AC impedance which varies depending on both the signal frequency and the value of capacitance, as shown in the equation below.$$\text{Capacitor Impedance} = Z_{\text{capacitor}} = \frac{1}{2 \pi f C}$$ where $f$ = frequency and $C$ = capacitance Capacitor impedance decreases if capacitance or frequency increases. High frequencies see less impedance from the same capacitor than low frequencies. The table below shows impedance calculations for three of the most common tone cap values at a low frequency (100 Hz) and a high frequency (5 kHz):
$C$ (Capacitance) | $f$ (Frequency) | $Z$ (Impedance)
0.022 μF | 100 Hz | 72.3 kΩ
0.022 μF | 5 kHz | 1.45 kΩ
0.047 μF | 100 Hz | 33.9 kΩ
0.047 μF | 5 kHz | 677 Ω
0.10 μF | 100 Hz | 15.9 kΩ
0.10 μF | 5 kHz | 318 Ω
When the tone pot is set to its maximum resistance (e.g. 250 kΩ), all of the frequencies (low and high) face a relatively high-resistance path to ground. As we reduce the resistance of the tone pot to 0 Ω, the impedance of the capacitor has more of an impact and we gradually lose more high frequencies to ground through the tone circuit. If we use a higher value capacitor, we lose more high frequencies and get a darker, fatter sound than if we use a lower value.
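As a quick cross-check of the impedance table, here is a short Python sketch (my own, not from the article) evaluating $Z = 1/(2\pi f C)$ for the three common cap values.

from math import pi

for C in (0.022e-6, 0.047e-6, 0.10e-6):      # capacitance in farads
    for f in (100, 5000):                    # frequency in hertz
        Z = 1 / (2 * pi * f * C)             # impedance in ohms
        print(f"{C * 1e6:.3f} uF @ {f:>4} Hz -> {Z:10.1f} ohms")

Running it reproduces the table values (72.3 kΩ, 1.45 kΩ, 33.9 kΩ, 677 Ω, 15.9 kΩ, 318 Ω).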
Volume Control: Variable Voltage Dividers Volume pots are connected using all three terminals in a way that provides a variable voltage divider for the signal from the pickups. The voltage produced by the pickups (input voltage) is connected between the volume pot terminals 1 and 3, while the guitar's output jack (output voltage) is connected between terminals 1 and 2. Voltage divider equation:$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{R_1 + R_2}$$ From the voltage divider equation we can see that if $R_1 = 0\text{ Ω}$ and $R_2 = 250\text{ kΩ}$, then the output voltage will be equal to the input voltage (full volume).$$V_{\text{out}} = V_{\text{in}} \times \frac{250\text{kΩ}}{0 + 250\text{kΩ}} = V_{\text{in}} \times \frac{250\text{kΩ}}{250\text{kΩ}}$$$$V_{\text{out}} = V_{\text{in}}$$ If $R_1 = 250\text{ kΩ}$ and $R_2 = 0\text{ Ω}$, then the output voltage will be zero (no sound).$$V_{\text{out}} = V_{\text{in}} \times \frac{0}{250\text{kΩ} + 0} = V_{\text{in}} \times \frac{0}{250\text{kΩ}}$$$$V_{\text{out}} = 0$$ Two Resistor Voltage Divider Schematic Example:$$V_{\text{in}} = 60\text{mV} \text{, } R_1 = 125\text{kΩ} \text{, } R_2 = 125\text{kΩ}$$$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{(R_1 + R_2)}$$$$V_{\text{out}} = 60\text{mV} \times \frac{125\text{kΩ}}{(125\text{kΩ} + 125\text{kΩ})}$$$$V_{\text{out}} = 60\text{mV} \times \frac{1}{2}$$$$V_{\text{out}} = 30\text{mV}$$ Potentiometer Taper The taper of a potentiometer indicates how the output to input voltage ratio will change with respect to the shaft rotation. The two taper curves below are examples of the two most common guitar pot tapers as they would be seen on a manufacturer data sheet. The rotational travel refers to turning the potentiometer shaft clockwise from 0° to 300°, as in the previous visual representation drawing. How do you know when to use an audio or linear taper potentiometer? The type of potentiometer you should use will depend on the type of circuit you are designing. Typically, for audio circuits the audio taper potentiometer is used. This is because the audio taper potentiometer functions on a logarithmic scale, which is the scale on which the human ear perceives sound. Even though the taper chart appears to have a sudden increase in volume as the rotation increases, in fact the perception of the sound increase will occur on a gradual scale. The linear scale will actually (counterintuitively) have a more significant sudden volume swell effect because of how the human ear perceives the scale. However, linear potentiometers are often used for other functions in audio circuits which do not directly affect audio output. In the end, both types of potentiometers will give you the same range of output (from 0 to full), but the rate at which that range changes varies between the two. How do you know what value of potentiometer to use? The actual value of the pot itself does not affect the input to output voltage ratio, but it does alter the peak frequency of the pickup. If you want a brighter sound from your pickups, use a pot with a larger total resistance. If you want a darker sound, use a smaller total resistance. In general, 250 kΩ pots are used with single-coil pickups and 500 kΩ pots are used with humbucking pickups. Specialized Pots Potentiometers are used in all types of electronic products, so it is a good idea to look for potentiometers specifically designed to be used in electric guitars.
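Here is a small Python sketch of the volume control as a voltage divider (my own illustration; `rotation` is the wiper position from 0.0 to 1.0, and a 250 kΩ pot is assumed).

def volume_out(v_in, rotation, r_total=250e3):
    """Voltage divider formed by a volume pot at a given wiper position."""
    r2 = rotation * r_total      # resistance from wiper (2) to cold (1)
    r1 = r_total - r2            # resistance from hot (3) to wiper (2)
    return v_in * r2 / (r1 + r2)

print(volume_out(60e-3, 1.0))    # full rotation: 0.060 V (all signal out)
print(volume_out(60e-3, 0.5))    # half rotation, linear taper: 0.030 V
print(volume_out(60e-3, 0.0))    # zero rotation: 0.0 V (no sound)

An audio-taper pot simply replaces the linear map from rotation to r2 with a logarithmic-feeling curve; the divider arithmetic itself is unchanged.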
If you do a lot of volume swells, you will want to make sure the rotational torque of the shaft feels good to you, and most pots designed specifically for guitar will have taken this into account. When you start looking for guitar-specific pots, you will also find specialty pots like push-pull pots, no-load pots and blend pots, which are all great for getting creative and customizing your guitar once you understand how basic electric guitar circuits work. By Kurt Prange (BSEE), Sales Engineer for Antique Electronic Supply - based in Tempe, AZ. Kurt began playing guitar at the age of nine in Kalamazoo, Michigan. He is a guitar DIY'er and tube amplifier designer who enjoys helping other musicians along in the endless pursuit of tone.
Given $q = e^{2\pi i \tau}$ and the Eisenstein series $E_{2k}(\tau)$, i.e., $$E_2(\tau) = 1-24\sum_{n=1}^\infty \frac{n q^n}{1-q^n}$$ $$E_4(\tau) = 1+240\sum_{n=1}^\infty \frac{n^3 q^n}{1-q^n}$$ and so on. Define the function, $$F_{2k}(\tau) = \frac{E_{2k}(\tau)}{\left(E_2(\tau)-\frac{3}{\pi\; \Im(\tau)}\right)^k}$$ for $k \geq 2$, where $\tau = \frac{1+\sqrt{-d}}{2}$, $\Im(\tau)$ is the imaginary part of $\tau$, and $d$ has class number $h(-d) = m$. For example, we have, $$F_4\left(\tfrac{1+\sqrt{-163}}{2}\right) = \frac{5\cdot23\cdot29\cdot163}{2^2\cdot3\cdot181^2}$$ $$F_6\left(\tfrac{1+\sqrt{-163}}{2}\right) = \frac{7\cdot11\cdot19\cdot127\cdot163^2}{2^9\cdot181^3}$$ $$F_8(\tau) = F_4^2(\tau)$$ and so on. Question: In general, is it true that for $k \geq 2$ the value of $F_{2k}(\tau)$, like the value of the j-function, is an algebraic number of degree $m = h(-d)$? (I've tested it with $d$ of higher class number, and it seems to be true.)
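A numerical sanity check (my own, using Python's mpmath; not part of the question) of the $F_4$ value at $\tau = \frac{1+\sqrt{-163}}{2}$, summing the q-series directly:

from mpmath import mp, sqrt, exp, pi, nsum, inf, mpf

mp.dps = 40
tau = (1 + sqrt(-163)) / 2               # sqrt(-163) is purely imaginary
q = exp(2j * pi * tau)                   # q = -exp(-pi*sqrt(163)), tiny

E2 = 1 - 24 * nsum(lambda n: n * q**n / (1 - q**n), [1, inf])
E4 = 1 + 240 * nsum(lambda n: n**3 * q**n / (1 - q**n), [1, inf])

F4 = E4 / (E2 - 3 / (pi * tau.imag))**2  # denominator: E2 - 3/(pi*Im(tau))
print(F4)                                # compare against the claimed value
print(mpf(5 * 23 * 29 * 163) / (2**2 * 3 * 181**2))

The two printed numbers should agree to working precision, matching the stated rational value of $F_4$.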
Learning Objectives To describe the characteristics of ionic bonding. To quantitatively describe the energetic factors involved in the formation of an ionic bond. Ions are atoms or molecules which are electrically charged. Cations are positively charged and anions carry a negative charge. Ions form when atoms gain or lose electrons. Since electrons are negatively charged, an atom that loses one or more electrons will become positively charged; an atom that gains one or more electrons becomes negatively charged. Ionic bonding is the attraction between positively- and negatively-charged ions. These oppositely charged ions attract each other to form ionic networks (or lattices). Electrostatics explains why this happens: opposite charges attract and like charges repel. When many ions attract each other, they form large, ordered, crystal lattices in which each ion is surrounded by ions of the opposite charge. Generally, when metals react with non-metals, electrons are transferred from the metals to the non-metals. The metals form positively-charged ions and the non-metals form negatively-charged ions. Generating Ionic Bonds Ionic bonds form when metals and non-metals chemically react. By definition, a metal is relatively stable if it loses electrons to form a complete valence shell and becomes positively charged. Likewise, a non-metal becomes stable by gaining electrons to complete its valence shell and become negatively charged. When metals and non-metals react, the metals lose electrons by transferring them to the non-metals, which gain them. Consequently, ions are formed, which instantly attract each other—ionic bonding. In the overall ionic compound, positive and negative charges must be balanced, because electrons cannot be created or destroyed, only transferred. Thus, the total number of electrons lost by the cationic species must equal the total number of electrons gained by the anionic species. Example \(\PageIndex{1}\): Sodium Chloride For example, in the reaction of Na (sodium) and Cl (chlorine), each Cl atom takes one electron from a Na atom. Therefore each Na becomes a Na\(^+\) cation and each Cl atom becomes a Cl\(^-\) anion. Due to their opposite charges, they attract each other to form an ionic lattice. The formula (ratio of positive to negative ions) in the lattice is \(\ce{NaCl}\). \[\ce{2Na(s) + Cl2(g) \rightarrow 2NaCl(s)} \nonumber\] These ions are arranged in solid NaCl in a regular three-dimensional arrangement (or lattice): NaCl lattice. (left) 3-D structure and (right) simple 2D slice through the lattice. Images used with permission from Wikipedia and Mike Blaber. The chlorine has a high affinity for electrons, and the sodium has a low ionization energy. Thus the chlorine gains an electron from the sodium atom. This can be represented using Lewis dot symbols (here we will consider one chlorine atom, rather than \(\ce{Cl2}\)): The arrow indicates the transfer of the electron from sodium to chlorine to form the Na\(^+\) metal ion and the Cl\(^-\) chloride ion. Each ion now has an octet of electrons in its valence shell: Na\(^+\): \(2s^2 2p^6\) Cl\(^-\): \(3s^2 3p^6\) Energetics of Ionic Bond Formation Ionic bonds are formed when positively and negatively charged ions are held together by electrostatic forces. Consider a single pair of ions, one cation and one anion. How strong will the force of their attraction be?
According to Coulomb's Law, the energy of the electrostatic attraction (\(E\)) between two charged particles is proportional to the magnitude of the charges and inversely proportional to the internuclear distance between the particles (\(r\)): \[E \propto \dfrac{Q_{1}Q_{2}}{r} \label{Eq1a} \] \[ E = k\dfrac{Q_{1}Q_{2}}{r} \label{Eq1b} \] where each ion's charge is represented by the symbol \(Q\). The proportionality constant \(k\) is equal to \(2.31 \times 10^{-28}\) J·m. This value of \(k\) includes the charge of a single electron (\(1.6022 \times 10^{-19}\) C) for each ion. The equation can also be written using the charge of each ion, expressed in coulombs (C), incorporated in the constant. In this case, the proportionality constant, \(k\), equals \(8.999 \times 10^{9}\) J·m/C\(^2\). In the example given, \(Q_1 = +1(1.6022 \times 10^{-19}\text{ C})\) and \(Q_2 = -1(1.6022 \times 10^{-19}\text{ C})\). If \(Q_1\) and \(Q_2\) have opposite signs (as in NaCl, for example, where \(Q_1\) is +1 for Na\(^+\) and \(Q_2\) is −1 for Cl\(^-\)), then \(E\) is negative, which means that energy is released when oppositely charged ions are brought together from an infinite distance to form an isolated ion pair. Energy is always released when a bond is formed and correspondingly, it always requires energy to break a bond. As shown by the green curve in the lower half of Figure \(\PageIndex{1}\), the maximum energy would be released when the ions are infinitely close to each other, at \(r = 0\). Because ions occupy space and have a structure with the positive nucleus being surrounded by electrons, however, they cannot be infinitely close together. At very short distances, repulsive electron–electron interactions between electrons on adjacent ions become stronger than the attractive interactions between ions with opposite charges, as shown by the red curve in the upper half of Figure \(\PageIndex{1}\). The total energy of the system is a balance between the attractive and repulsive interactions. The purple curve in Figure \(\PageIndex{1}\) shows that the total energy of the system reaches a minimum at \(r_0\), the point where the electrostatic repulsions and attractions are exactly balanced. This distance is the same as the experimentally measured bond distance. Consider the energy released when a gaseous \(Na^+\) ion and a gaseous \(Cl^-\) ion are brought together from \(r = \infty\) to \(r = r_0\). Given that the observed gas-phase internuclear distance is 236 pm, the energy change associated with the formation of an ion pair from an \(Na^+_{(g)}\) ion and a \(Cl^-_{(g)}\) ion is as follows: \[ \begin{align*} E &= k\dfrac{Q_{1}Q_{2}}{r_{0}} \\[4pt] &= (2.31 \times {10^{ - 28}}\rm{J}\cdot \cancel{m} ) \left( \dfrac{( + 1)( - 1)}{236\; \cancel{pm} \times 10^{ - 12} \cancel{m/pm}} \right) \\[4pt] &= - 9.79 \times 10^{ - 19}\; J/ion\; pair \label{Eq2} \end{align*}\] The negative value indicates that energy is released. Our convention is that if a chemical process provides energy to the outside world, the energy change is negative. If it requires energy, the energy change is positive. To calculate the energy change in the formation of a mole of NaCl pairs, we need to multiply the energy per ion pair by Avogadro's number: \[ E=\left ( -9.79 \times 10^{ - 19}\; J/ \cancel{ion pair} \right )\left ( 6.022 \times 10^{ 23}\; \cancel{ion\; pair}/mol\right )=-589\; kJ/mol \label{Eq3} \] This is the energy released when 1 mol of gaseous ion pairs is formed, not when 1 mol of positive and negative ions condenses to form a crystalline lattice.
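The ion-pair arithmetic above is easy to check; here is a minimal Python sketch (my own, using only the constants quoted in the text).

k = 2.31e-28            # J*m; electron-charge factors already folded in
N_A = 6.022e23          # Avogadro's number, per mole

def ion_pair_energy(q1, q2, r_pm):
    """Coulomb energy of one ion pair at internuclear distance r (in pm)."""
    return k * q1 * q2 / (r_pm * 1e-12)

E = ion_pair_energy(+1, -1, 236)     # gaseous Na+ / Cl- pair
print(E)                             # ~ -9.79e-19 J per ion pair
print(E * N_A / 1000)                # ~ -589 kJ/mol, as in the text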
Because of long-range interactions in the lattice structure, this energy does not correspond directly to the lattice energy of the crystalline solid. However, the large negative value indicates that bringing positive and negative ions together is energetically very favorable, whether an ion pair or a crystalline lattice is formed. We summarize the important points about ionic bonding: At \(r_0\), the ions are more stable (have a lower potential energy) than they are at an infinite internuclear distance. When oppositely charged ions are brought together from \(r = \infty\) to \(r = r_0\), the energy of the system is lowered (energy is released). Because of the low potential energy at \(r_0\), energy must be added to the system to separate the ions. The amount of energy needed is the bond energy. The energy of the system reaches a minimum at a particular internuclear distance (the bond distance). Example \(\PageIndex{2}\): LiF Calculate the amount of energy released when 1 mol of gaseous Li\(^+\)F\(^-\) ion pairs is formed from the separated ions. The observed internuclear distance in the gas phase is 156 pm. Given: cation and anion, amount, and internuclear distance Asked for: energy released from formation of gaseous ion pairs Strategy: Substitute the appropriate values into Equation \(\ref{Eq1b}\) to obtain the energy released in the formation of a single ion pair and then multiply this value by Avogadro's number to obtain the energy released per mole. Solution: Inserting the values for Li\(^+\)F\(^-\) into Equation \(\ref{Eq1b}\) (where \(Q_1 = +1\), \(Q_2 = -1\), and \(r = 156\) pm), we find that the energy associated with the formation of a single pair of Li\(^+\)F\(^-\) ions is \[ \begin{align*} E &=k \dfrac{Q_1Q_2}{r_0} \\[4pt] &=\left(2.31 \times 10^{−28} J⋅\cancel{m} \right) \left(\dfrac{\text{(+1)(−1)}}{156\; pm \times 10^{−12} \cancel{m/pm}} \right)\\[4pt] &=−1.48 \times 10^{−18}\; J \end{align*}\] Then the energy released per mole of Li\(^+\)F\(^-\) ion pairs is \[ \begin{align*} E&= \left(−1.48 \times 10^{−18} J/ \cancel{\text{ion pair}}\right) \left(6.022 \times 10^{23} \cancel{\text{ion pair}}/mol\right)\\[4pt] &= −891 \;kJ/mol \end{align*}\] Because Li\(^+\) and F\(^-\) are smaller than Na\(^+\) and Cl\(^-\) (see Section 7.3), the internuclear distance in LiF is shorter than in NaCl. Consequently, in accordance with Equation \(\ref{Eq1b}\), much more energy is released when 1 mol of gaseous Li\(^+\)F\(^-\) ion pairs is formed (−891 kJ/mol) than when 1 mol of gaseous Na\(^+\)Cl\(^-\) ion pairs is formed (−589 kJ/mol). Exercise \(\PageIndex{2}\): Magnesium oxide Calculate the amount of energy released when 1 mol of gaseous \(\ce{MgO}\) ion pairs is formed from the separated ions. The internuclear distance in the gas phase is 175 pm. Answer −3180 kJ/mol = −3.18 × 10\(^3\) kJ/mol Electron Configuration of Ions How does the energy released in lattice formation compare to the energy required to strip away a second electron from the Na\(^+\) ion? Since the Na\(^+\) ion has a noble gas electron configuration, stripping away the next electron from this stable arrangement would require more energy than what is released during lattice formation (for sodium, \(I_2 = 4560\) kJ/mol). Thus, sodium is present in ionic compounds as Na\(^+\) and not Na\(^{2+}\). Likewise, adding an electron to fill a valence shell (and achieve a noble gas electron configuration) is exothermic or only slightly endothermic. To add an additional electron into a new subshell requires tremendous energy - more than the lattice energy. Thus, we find Cl\(^-\) in ionic compounds, but not Cl\(^{2-}\).
Compound | Lattice Energy (kJ/mol)
LiF | 1024
LiI | 744
NaF | 911
NaCl | 788
NaI | 693
KF | 815
KBr | 682
KI | 641
MgF\(_2\) | 2910
SrCl\(_2\) | 2130
MgO | 3938
This amount of energy can compensate for values as large as \(I_3\) for valence electrons (i.e., it can strip away up to 3 valence electrons). Because most transition metals would require the removal of more than 3 electrons to attain a noble gas core, they are not found in ionic compounds with a noble gas core. A transition metal always loses electrons first from the higher 's' subshell, before losing from the underlying 'd' subshell. (The remaining electrons in the unfilled d subshell are the reason for the bright colors observed in many transition metal compounds!) For example, iron ions will not form a noble gas core: Fe: [Ar]\(4s^2 3d^6\); Fe\(^{2+}\): [Ar]\(3d^6\); Fe\(^{3+}\): [Ar]\(3d^5\). Some metal ions can form a pseudo noble gas core (and be colorless), for example: Ag: [Kr]\(5s^1 4d^{10}\), Ag\(^+\): [Kr]\(4d^{10}\) (compound: AgCl); Cd: [Kr]\(5s^2 4d^{10}\), Cd\(^{2+}\): [Kr]\(4d^{10}\) (compound: CdS). The valence electrons do not adhere to the "octet rule" in this case (a limitation of the usefulness of this rule). Note: The silver and cadmium atoms lost the 5s electrons in achieving the ionic state. When a positive ion is formed from an atom, electrons are always lost first from the subshell with the largest principal quantum number. Polyatomic Ions Not all ionic compounds are formed from only two elements. Many polyatomic ions exist, in which two or more atoms are bound together by covalent bonds. They form a stable grouping which carries a charge (positive or negative). The group of atoms as a whole acts as a charged species in forming an ionic compound with an oppositely charged ion. Polyatomic ions may be either positive or negative, for example: NH\(_4^+\) (ammonium) is a cation; SO\(_4^{2-}\) (sulfate) is an anion. The principles of ionic bonding with polyatomic ions are the same as those with monatomic ions. Oppositely charged ions come together to form a crystalline lattice, releasing a lattice energy. Based on the shapes and charges of the polyatomic ions, these compounds may form crystalline lattices with interesting and complex structures. Summary The amount of energy needed to separate a gaseous ion pair is its bond energy. The formation of ionic compounds is usually extremely exothermic. The strength of the electrostatic attraction between ions with opposite charges is directly proportional to the magnitude of the charges on the ions and inversely proportional to the internuclear distance. The total energy of the system is a balance between the repulsive interactions between electrons on adjacent ions and the attractive interactions between ions with opposite charges.
I'm not really $100\%$ sure my solution is correct; also, there remains an indeterminacy in the end result: I only identify the class you are looking for in $P^1\Bbb H\simeq \Bbb S^4$ up to sign. Let $Q$ be the projectivisation of $p:V\rightarrow P^1\Bbb H$. As you note, $Q$ is isomorphic to $P^3\Bbb C$. To be precise, we consider $\Bbb C\subset \Bbb H$ through $1,i\mapsto 1,i$ respectively. Thanks to the standard left $\Bbb H$-vector space structure on $\Bbb H^2$, $\Bbb H^2$ can be seen as a complex vector space isomorphic to $\Bbb C^4$. Now by definition of $Q$ we get a homeomorphism$$\begin{array}{rcl}Q=\coprod_{S\subset\Bbb H^2}P(S)&\longleftrightarrow &P^3\Bbb C\\ l&\mapsto&l\\l\in P(\Bbb H\cdot l)&\gets&l\end{array}$$The disjoint union is taken over all quaternionic (left) lines $S$, and for every such line $S$, $P(S)$ is the complex projective space on $S$. The pullback bundle of $V$ over $Q$ splits into two complex line bundles $L\oplus L'$ over $Q$, with the fiber of $L$ over $l$ being $l$ itself, so that (modulo the above isomorphism), $$L\text{ is isomorphic to the tautological line bundle over }P^3\Bbb C$$ @Matt E gave a convincing argument for the vanishing of the first Chern class of $V$. Computing the total Chern class of the pullback bundle gives $$\pi^*(1+c_2(V))=c(L\oplus L')=c(L)c(L')=1+c_1(L)+c_1(L')+c_1(L)c_1(L')$$ The class $c_1(L)+c_1(L')$ vanishes, and it follows (I believe) that $L'\simeq L^*$, since both are line bundles, and thus completely characterized by their first Chern class. If $c$ is the standard degree $2$ generator of $H^2(Q)=H^2(P^3(\Bbb C))$ (i.e. the first Chern class of the tautological line bundle over $P^3(\Bbb C)$), then $$\pi^*(c_2(V))=-c^2.$$ It remains to understand $\pi$. The canonical map $Q\to P^1\Bbb H$ is a fibration. Actually, it is obtained from the Hopf fibration $\Bbb S^3\hookrightarrow\Bbb S^7\hookrightarrow\Bbb S^4$ by quotienting out the action of $\Bbb S^1$.$$\begin{array}{rc}\Bbb S^2\simeq P^1\Bbb C\hookrightarrow & P^3(\Bbb C)\\&\downarrow\\ &\Bbb S^4\end{array}$$The associated spectral sequence collapses at the $E_2$ page because of how its nonzero entries are placed, and this tells us that the cohomology of $\Bbb S^4$ in degree $4$ is isomorphic to that of $P^3\Bbb C$ in degree $4$ through $\pi^*$. Since $\pi^*(c_2(V))=-c^2$ is a generator in degree $4$, we necessarily have $c_2(V)=$ one of the two generators of $H^4(\Bbb S^4)$. I don't know which one this is.
A. W. Apter, J. Cummings, and J. D. Hamkins, “Singular cardinals and strong extenders,” Central European J.~Math., vol. 11, iss. 9, pp. 1628-1634, 2013. @article {ApterCummingsHamkins2013:SingularCardinalsAndStrongExtenders, AUTHOR = {Apter, Arthur W. and Cummings, James and Hamkins, Joel David}, TITLE = {Singular cardinals and strong extenders}, JOURNAL = {Central European J.~Math.}, FJOURNAL = {Central European Journal of Mathematics}, VOLUME = {11}, YEAR = {2013}, NUMBER = {9}, PAGES = {1628--1634}, ISSN = {1895-1074}, MRCLASS = {03E55 (03E35 03E45)}, MRNUMBER = {3071929}, MRREVIEWER = {Samuel Gomes da Silva}, DOI = {10.2478/s11533-013-0265-1}, URL = {http://jdh.hamkins.org/singular-cardinals-strong-extenders/}, eprint = {1206.3703}, archivePrefix = {arXiv}, primaryClass = {math.LO}, } Brent Cody asked the question whether the situation can arise that one has an elementary embedding $j:V\to M$ witnessing the $\theta$-strongness of a cardinal $\kappa$, but where $\theta$ is regular in $M$ and singular in $V$. In this article, we investigate the various circumstances in which this does and does not happen, that is, the circumstances under which there exist a singular cardinal $\mu$ and a short $(\kappa, \mu)$-extender $E$ witnessing “$\kappa$ is $\mu$-strong”, such that $\mu$ is singular in $Ult(V, E)$.
This is a homework question and I am to show that $$\sigma(n) = \sum_{d|n} \phi(d) d\left(\frac{n}{d}\right)$$ where $\sigma(n) = \sum_{d|n}d$, $d(n) = \sum_{d|n} 1 $ and $\phi$ is the Euler Phi function. What I have. Well I know $$\sum_{d|n}\phi(d) = n$$ I also know that every $n\in \mathbb{Z}^+$ has a prime factorization $n = p_1^{a_1} \ldots p_k^{a_k}$, so since $\sigma$ is a multiplicative function, we have $\sigma(n) = \sigma(p_1^{a_1})\cdots\sigma(p_k^{a_k})$. I also know the Möbius Inversion Formula and the fact that if $f$ and $g$ are arithmetic functions, then $$f(n) = \sum_{d|n}g(d)$$ iff $$g(n) = \sum_{d|n}f(d)\mu\left(\frac{n}{d}\right)$$ Please post no solution, only hints. I will post the solution myself for others when I have figured it out.
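Not a hint toward the proof, just a quick numeric sanity check of the identity using sympy (my own addition, not part of the homework):

from sympy import totient, divisors, divisor_count, divisor_sigma

# sigma(n) = sum over d | n of phi(d) * d(n/d), checked for small n
for n in range(1, 200):
    lhs = divisor_sigma(n)
    rhs = sum(totient(d) * divisor_count(n // d) for d in divisors(n))
    assert lhs == rhs
print("identity verified for n = 1..199")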
I would like to determine a closed-form expression for the following symbolic integral $$ \int_{-1/2}^{1/2} \!\!\!\! \mathrm{d} x \int_{-1/2}^{1/2} \!\!\!\! \mathrm{d} y \, \frac{1 + b x + c y}{1 + e x + f y + i \eta} \, ,$$ where the coefficients appearing in this expression are such that $$ \begin{cases} \displaystyle b,c,e,f \in \mathbb{R} \, , \\ \displaystyle \eta \in \mathbb{R}\, , \, \eta \neq 0 \, . \end{cases} $$ As a consequence, this integral is always well-defined, since its denominator never vanishes. To compute this integral in Mathematica, I used the instruction: int = Integrate[(1 + b x + c y)/(1 + e x + f y + I \[Eta]), {x, -1/2,1/2}, {y, -1/2, 1/2}, Assumptions -> {b \[Element] Reals, c \[Element] Reals, e \[Element] Reals, f \[Element] Reals, \[Eta] \[Element] Reals, \[Eta] != 0}] After a quite long calculation (260 s) on my computer with Mathematica 10.0.2, I obtained a result of the form int = ConditionalExpression[......,(f<-2 && (2+f<e<0 || (e>0 && 2+e+f<0))) || (-2<f<0 && ((2+e+f>0 && e<0) || 0<e<2+f)) || (0<f<2 && (-2+f<e<0 || (e>0 && e+f<2))) || (f>2 && ((e+f>2 && e<0) || 0<e<-2+f))]] Why do I obtain a result with a ConditionalExpression, when my integral should always be well-defined? How should I proceed to convince Integrate that this integral is always well-defined? I noticed that the result inside the ConditionalExpression involves terms of the form ArcTan[(2 \[Eta])/(-2+e+f)], which partially explains the constraints obtained. Another issue is that the result inside the ConditionalExpression also involves complex logarithmic terms of the form Log[2-e-f+2 I \[Eta]]. I know that the complex logarithm has a branch cut, so the result may not be robust when evaluated. Would it also be possible to obtain a result that does not involve expressions with branch cuts, so that the result would always be straightforwardly well-defined?
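One way to convince yourself the integral is finite for every $\eta \neq 0$ is a direct numerical check. Here is a sketch in Python/SciPy (my own, with arbitrary sample parameter values), integrating the real and imaginary parts separately:

from scipy.integrate import dblquad

def integral(b, c, e, f, eta):
    g = lambda y, x: (1 + b*x + c*y) / (1 + e*x + f*y + 1j*eta)
    re, _ = dblquad(lambda y, x: g(y, x).real, -0.5, 0.5, -0.5, 0.5)
    im, _ = dblquad(lambda y, x: g(y, x).imag, -0.5, 0.5, -0.5, 0.5)
    return complex(re, im)

# The denominator never vanishes when eta != 0, so the result is finite:
print(integral(b=1.0, c=2.0, e=3.0, f=-1.0, eta=0.1))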
If you are a physicist or at least know a bit about the theory of relativity, your answer to this question might be a confident "no". Sure, if nothing can exceed the speed of light, one would need an infinite amount of time to travel an infinite distance. But let's put the relativistic laws of physics aside and imagine a pure Newtonian universe. In such a universe, an object can travel arbitrarily fast. But would it be possible for someone to travel "to infinity" in a finite amount of time? The answer is yes. Let's prove this with an example. Imagine an object at $x = 0$ which is at rest at $t = 0$. Between $t = 0$ and some $t = T \gt 0$, a force $F(t)$ pointing along the $x$ axis is applied to the object. This force is given by: $$ F(t) = \displaystyle\frac{\alpha}{(T - t)^3} $$ for some constant $\alpha \gt 0$. For $0 \leq t \lt T$ the value of $F(t)$ is positive and finite, so it points along the positive $x$ axis. Its value increases as $t \rightarrow T$ (here and in what follows, $t \rightarrow T$ means $t$ approaches $T$ from the left side, so $t \lt T$) and diverges at $t = T$. If the mass of the object is $m$, Newton's second law of motion states that (below the notation $\dot{q}$ is used to represent $dq/dt$ and $\ddot{q}$ to represent $d^2q/dt^2$): $$ F(t) = ma(t) = m\ddot{x}(t) = \displaystyle\frac{\alpha}{(T - t)^3} $$ where $a(t)$ is the acceleration experienced by the object at time $t$. This implies that: $$ \ddot{x}(t) = \displaystyle\displaystyle\frac{\alpha}{m}\frac{1}{(T - t)^3} \Longrightarrow \int_{t' = 0}^{t' = t} \ddot{x}(t')dt' = \int_{t' = 0}^{t' = t} \frac{\alpha}{m}\displaystyle\frac{1}{(T - t')^3}dt' $$ The integral of the term on the right-hand side can be computed by letting $w := T - t'$, $dw = -dt'$: $$ \dot{x}(t) - \dot{x}(0) = \frac{\alpha}{m}\int_{w = T}^{w = T - t} \displaystyle\frac{(-1)}{w^3}dw = \frac{\alpha}{m}\displaystyle\left(\frac{1}{2w^2}\right) \bigg|_{w = T}^{w = T - t} $$ Since the object is initially at rest ($\dot{x}(0) = 0$), we get: $$ \boxed{ \dot{x}(t) = \displaystyle\frac{\alpha}{2m}\left[ \displaystyle\frac{1}{(T - t)^2} - \frac{1}{T^2} \right] } \label{vel} $$ This expression gives us the velocity $v(t) = \dot{x}(t)$ of the object at time $t$ (for $0 \leq t \lt T$). For all $0 \lt t \lt T$ we have $\dot{x}(t) \gt 0$ since in this interval $(T - t) \lt T$. The velocity diverges at $t = T$. Integrating equation \eqref{vel} with respect to time yields: $$ \int_{t' = 0}^{t' = t}\dot{x}(t')dt' = \frac{\alpha}{2m}\left[ \int_{t' = 0}^{t' = t}\displaystyle\frac{1}{(T - t')^2}dt' -\int_{t' = 0}^{t' = t}\frac{1}{T^2}dt' \right] $$ Computing the second integral on the right-hand side is a trivial task since the integrand $1/T^2$ is a constant. To integrate the first term on the right-hand side, let $w := T - t'$, $dw = -dt'$. We then obtain: $$ \begin{eqnarray} x(t) - x(0) &=& \frac{\alpha}{2m}\left[ \int_{w = T}^{w = T - t}\displaystyle\frac{(-1)}{w^2}dw - \displaystyle\frac{t}{T^2} \right] \nonumber\\[5pt] &=& \frac{\alpha}{2m}\left[ \displaystyle\left(\frac{1}{w}\right)\bigg|_{w = T}^{w = T - t} - \displaystyle\frac{t}{T^2}\right] \nonumber\\[5pt] &=& \displaystyle\frac{\alpha}{2m}\left[ \displaystyle\frac{1}{T - t} - \displaystyle\frac{1}{T} - \displaystyle\frac{t}{T^2}\right] \nonumber\\[5pt] &=& \displaystyle\frac{\alpha}{2m}\left[ \displaystyle\frac{1}{T - t} - \displaystyle\frac{(T + t)}{T^2} \right] \label{pos} \end{eqnarray} $$ The object is initially at $x = 0$, so $x(0) = 0$.
A bit more algebraic work on the right-hand side of equation \eqref{pos} yields: $$ \boxed{ x(t) = \displaystyle\frac{\alpha}{2mT^2} \displaystyle\frac{t^2}{(T - t)} } $$ Since $\dot{x}(t) \gt 0$ for $0 \lt t \lt T$, $x(t)$ increases monotonically during this time period. As $t \rightarrow T$, $x(t)$ diverges. In other words: $$ x(t) \rightarrow \infty \quad \textrm{when} \quad t \rightarrow T $$ so the object reaches "infinity" in a finite amount of time (namely, in time $T$). Although theoretically possible, one would have to come up with a way to produce such a force on an object. But that task, dear reader, I will leave to you! ;-)
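A tiny numeric illustration of the divergence (my own; the values $\alpha = m = T = 1$ are arbitrary choices):

# x(t) = alpha * t^2 / (2*m*T^2*(T - t)) with illustrative constants
alpha, m, T = 1.0, 1.0, 1.0

def x(t):
    return alpha * t**2 / (2 * m * T**2 * (T - t))

for t in (0.9, 0.99, 0.9999, 0.999999):
    print(f"t = {t:<9} x(t) = {x(t):.4g}")
# x(t) grows without bound as t -> T: infinite distance in finite time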
On the domination and signed domination numbers of zero-divisor graph Ebrahim Vatandoost, Fatemeh Ramezani Abstract Let $R$ be a commutative ring (with 1) and let $Z(R)$ be its set of zero-divisors. The zero-divisor graph $\Gamma(R)$ has vertex set $Z^*(R)=Z(R) \setminus \lbrace0 \rbrace$, and for distinct $x,y \in Z^*(R)$, the vertices $x$ and $y$ are adjacent if and only if $xy=0$. In this paper, we consider the domination number and the signed domination number of the zero-divisor graph $\Gamma(R)$ of a commutative ring $R$ such that for every $0 \neq x \in Z^*(R)$, $x^2 \neq 0$. We characterize the graphs $\Gamma(R)$ for which $\gamma(\Gamma(R))+\gamma(\overline{\Gamma(R)}) \in \lbrace n+1,n,n-1 \rbrace$, where $|Z^*(R)|=n$. Keywords domination number, signed domination number, zero-divisor graph
Moser-lower.tex \section{Lower bounds for the Moser problem}\label{moser-lower-sec} In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ entries equal to $2$ (see Section \ref{notation-sec} for the definition) is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation} holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$. Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or semispheres or applying elementary results from coding theory. Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and that $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low} c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. \end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}. The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea, which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$ and minimal distance $d$. Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right).
\end{equation} With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }} Generally, $A(n,1)=2^n$, $A(n,2)=2^{n-1}$, $A(n-1,2e-1)=A(n,2e)$, and $A(n,d)=2$ if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344. \end{array} \] With $k=3$ \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With $k=4$ \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\] It should be pointed out that these bounds are even numbers, so that $c'_{4,3}=43$ shows that one cannot generally expect this lower bound to give the optimum. The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that using Stirling's formula and explicit bounds on $A(n,d)$ the best value of the constant $C$ in equation \eqref{cpn3} known to date can be worked out, but we refrain from doing so here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$, Chv\'{a}tal \cite{chvatal1} proved that the expression on the right-hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so that the refinement described above gains only a constant factor over the initial construction. For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}.
A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following: Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{\vec a \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles'' $(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 2 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$. Asymptotically, this set occupies about two thirds of the spheres $S_{i,n}$, $S_{i+1,n}$ and one third of the spheres $S_{i+2,n}, S_{i+3,n}$ and (setting $i$ close to $2n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \times \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions. An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}: \begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_{n,3}$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure} More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}. This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}. \begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure} Actually it is possible to improve upon these bounds by a slight amount.
Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$. If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=10$ case, the set $$B = \{(0 0 10),(0 2 8 ),(0 3 7 ),(0 4 6 ),(1 4 5 ),(2 1 7 ),(2 3 5 ), (3 2 5 ),(3 3 4 ),(3 4 3 ),(4 4 2 ),(5 1 4 ),(5 3 2 ),(6 2 2 ), (6 3 1 ),(6 4 0 ),(8 1 1 ),(9 0 1 ),(9 1 0 ) \}$$ generates the lower bound $c'_{10,3} \geq 24786$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding the following twelve elements from $\Gamma_{5,0,5}$ one can increase the lower bound slightly to $24798$: $1111133333$, $1111313333$, $1113113333$, $1133331113$, $1133331131$, $1133331311$, $3311333111$, $3313133111$, $3313313111$, $3331111133$, $3331111313$, $3331111331$. However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions: \begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition} \begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2} \log^{1/2} n)$. If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = \alpha + \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b!
c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof}
Suppose $z$ is a complex number with $\bar{z}$ denoting its conjugate. Do there exist real numbers $\{a_1,\ldots, a_n\}$ such that $$z^k+\bar z^k= a_1^k+a_2^k+\cdots+a_n^k,$$ for all $k\in\mathbb N$? Suppose $z$ is complex. Let us ask whether any real numbers $a_i$ exist for $1 \leq i \leq n$ such that $$z^k+\bar z^k = a_1^k+a_2^k+\cdots+a_n^k \tag1$$ for every positive integer $k.$ This is almost the same as the original question, except that it avoids the simple argument in which setting $k=0$ shows that $n=2.$ In fact, the new conditions are slightly weaker. If $z$ is real then of course for any $n\geq2$ we can set $a_1 = a_2 = z$ and $a_3 = \cdots = a_n = 0.$ But in the case where $z$ is not real, I will prove by contradiction that there is no set of real numbers $a_i$ satisfying Equation $1.$ Assume $z$ is not real and Equation $1$ is true. Write $z = re^{i(\theta + m\pi)}$ where $0 < \lvert\theta\rvert \leq \frac\pi2$ and $m$ is an integer, and consider the following two cases: Case 1: $0 < \lvert\theta\rvert \leq \frac\pi4.$ Then $0 < \lvert 2\theta \rvert \leq \frac\pi2$ and there exists some positive integer $p$ such that $\frac\pi2 \leq p\lvert2\theta\rvert \leq \pi.$ Then $z^{2p} = r^{2p}e^{i(2p\theta + 2pm\pi)} = r^{2p}e^{i(2p\theta)}$ and $\Re(z^{2p}) \leq 0.$ Case 2: $\frac\pi4 < \lvert\theta\rvert \leq \frac\pi2.$ Then $z^2 = r^2e^{i(2\theta + 2m\pi)} = r^2e^{i(2\theta)}$ where $\frac\pi2 < \lvert2\theta\rvert \leq \pi,$ so $\Re(z^2) \leq 0.$ Combining these two cases, if $z$ is not real there is some positive integer $p$ such that $\Re(z^{2p}) \leq 0$ and therefore $z^{2p} + \bar z^{2p} \leq 0$. On the other hand, $a_1^{2p}+a_2^{2p}+\cdots+a_n^{2p} \geq 0$ for any positive integer $p,$ with equality only if $a_1 = \cdots = a_n = 0.$ Therefore either $z^{2p} + \bar z^{2p} < a_1^{2p}+a_2^{2p}+\cdots+a_n^{2p},$ contradicting the assumption, or $z^k+\bar z^k = 0$ for all positive integers $k.$ But $z + \bar z=0$ implies $z = ir$ where $r$ is real, which implies $z^2 + \bar z^2 = -2r^2,$ which implies $r=0,$ which contradicts the assumption that $z$ is not real. By contradiction, no such set of real numbers $a_i$ exists when $z$ is not real.
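The heart of the argument is that every non-real $z$ has a power $z^{2p}$ with non-positive real part; here is a tiny Python check (my own illustration with arbitrary sample values):

def first_bad_power(z):
    """Smallest p >= 1 with Re(z^(2p)) <= 0; exists whenever z is not real."""
    p = 1
    while (z ** (2 * p)).real > 0:
        p += 1
    return p

for z in (1 + 0.1j, 2 + 3j, 0.5 - 0.2j):
    p = first_bad_power(z)
    lhs = (z ** (2 * p) + z.conjugate() ** (2 * p)).real
    print(z, "-> p =", p, ", z^(2p) + conj(z)^(2p) =", lhs)

For such a $p$, the left side $z^{2p}+\bar z^{2p}$ is $\leq 0$, while any sum $a_1^{2p}+\cdots+a_n^{2p}$ of even powers of reals is $\geq 0$; that mismatch is exactly the contradiction used above.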
My textbook says that $\lambda$ is the probability per unit time that 1 particle will decay in one second. This makes absolutely no sense to me - I can see that it is related to probability but cannot see how it is the probability. We have: $$N = N_0 e^{-\lambda t} \tag A$$ Now, if the probability of a particle decaying is $p$ then we can say that at $t=1$, $N=(1-p)N_0$. Therefore: $$1-p= e^{-\lambda}\tag B$$ Rearranging we get: $$\lambda = \ln\biggl(\frac{1}{1-p}\biggr) \tag C$$ What is wrong with this reasoning? EDIT: I just want to add an example - this is what initially confused me. Say we have a probability per unit time of decay of $\frac16$. We would therefore expect the number of particles to go from $N$ to $\frac 56 N$ in one second which, using $(A)$, implies that $\frac 56 = e^{-\lambda}$ (I think this is the dodgy step?). This means that $\lambda = \ln \biggl(\frac 65 \biggr) \approx 0.1823 $ and not $\frac 16 \approx 0.1667$.
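A quick simulation (my own; the values are illustrative) showing how a per-second decay probability $p$ corresponds to $\lambda = \ln\frac{1}{1-p}$ rather than to $\lambda = p$:

import math
import random

random.seed(1)
p = 1 / 6                      # probability a particle decays within 1 s
lam = math.log(1 / (1 - p))    # matching decay constant, ~0.1823 (not 1/6)

N0, t = 100_000, 5
N = N0
for _ in range(t):             # advance one second at a time
    N = sum(1 for _ in range(N) if random.random() > p)

print(N)                         # simulated survivors
print(N0 * (1 - p) ** t)         # exact discrete value: N0*(1-p)^t
print(N0 * math.exp(-lam * t))   # identical via N = N0*exp(-lam*t)

The last two formulas agree exactly at integer seconds by construction; the point is that "$p$ per second" forces $e^{-\lambda} = 1 - p$.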
The likelihood could be defined in several ways, for instance: the function $L$ from $\Theta\times{\cal X}$ which maps $(\theta,x)$ to $L(\theta \mid x)$, i.e. $L:\Theta\times{\cal X} \rightarrow \mathbb{R}$; the random function $L(\cdot \mid X)$; we could also consider that the likelihood is only the "observed" likelihood $L(\cdot \mid x^{\text{obs}})$; in practice the likelihood brings information on $\theta$ only up to a multiplicative constant, hence we could consider the likelihood as an equivalence class of functions rather than a function. Another question occurs when considering a change of parametrization: if $\phi=\theta^2$ is the new parameterization we commonly denote by $L(\phi \mid x)$ the likelihood on $\phi$, and this is not the evaluation of the previous function $L(\cdot \mid x)$ at $\theta^2$ but at $\sqrt{\phi}$. This is an abusive but useful notation which could cause difficulties to beginners if it is not emphasized. What is your favorite rigorous definition of the likelihood? In addition, what do you call $L(\theta \mid x)$? I usually say something like "the likelihood on $\theta$ when $x$ is observed". EDIT: In view of some comments below, I realize I should have made the context precise. I consider a statistical model given by a parametric family $\{f(\cdot \mid \theta), \theta \in \Theta\}$ of densities with respect to some dominating measure, with each $f(\cdot \mid \theta)$ defined on the observation space ${\cal X}$. Hence we define $L(\theta \mid x)=f(x \mid \theta)$ and the question is "what is $L$?" (the question is not about a general definition of the likelihood)
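The reparameterization pitfall in miniature (my own toy example, assuming a single observation from $N(\theta,1)$): writing $L(\phi\mid x)$ for $\phi=\theta^2$ means evaluating the original likelihood at $\sqrt{\phi}$, not at $\phi$.

import math

x_obs = 1.3

def L(theta, x=x_obs):
    """Likelihood of theta for one N(theta, 1) observation x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

def L_phi(phi, x=x_obs):
    """Induced likelihood under phi = theta^2 (theta >= 0 branch)."""
    return L(math.sqrt(phi), x)

theta = 0.7
print(L(theta), L_phi(theta ** 2))   # equal: same point in both notations
print(L(theta ** 2))                 # different: the abuse to avoid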
Along the lines of Glen O's answer, this answer attempts to explain the solvability of the problem, rather than provide the answer, which has already been given. Instead of using the meta-knowledge approach, which, as Glen stated, can get hard to follow, I use the range-based approach used in Rubio's answer, and specifically address some of the objections being raised. The argument has been put forward that when Mark fails to answer on the first morning, he gives Rose no new information. This is actually true (sort of; see the last spoiler section of this answer). Rose could have predicted beforehand with certainty that Mark would fail to answer on the first day, so his failure to answer doesn't tell her anything she didn't know. However, that doesn't make the problem unsolvable. To see why, you must understand the following logical axiom: Additional information never invalidates a valid deduction. In other words, if I know that all of the statements $P_1,\dots P_n$ and $Q$ are true, and that $R$ is definitely true if $P_1, \dots P_n$ are true, I can conclude that $R$ is true. My additional knowledge that $Q$ is true, though unnecessary to deduce $R$, doesn't hamper my ability to deduce $R$ from $P_1,\dots P_n$. I will call this rule LUI for "Law of Unnecessary Information." (It may have some other name, but I don't know it, so I'm giving it a new one.) The line of reasoning goes as follows: Let $R,\;M$ be the number of bars on Rose's and Mark's windows, respectively. Before the first question is asked, both Mark and Rose know the following: $P_1$: Mark knows the value of $M$. $P_2$: Rose knows the value of $R$. $P_3$: $M+R=20 \;\vee \;M+R=18$ ($\vee$ means "or", in case you're unfamiliar with the notation). $P_4$: $M\ge 2\;\wedge\;R \ge2$ ($\wedge$ means "and"). $P_5$: Both of them know every statement on this list, and every statement that can be deduced from statements they both know. To help keep track of $P_5$, I will call a statement $P$ (with some subscript) only if it is known to both prisoners (or neither); thus, $P_5$ becomes "the other prisoner knows every $P$ that I know." Additionally, Mark knows that $M=12$ and Rose knows that $R=8$. Call this knowledge $Q_M$ and $Q_R$, respectively. Finally, as soon as one of them is asked the question for the $k^\text{th}$ time, they both know (and know that one another knows, etc.) $P_{\leftarrow k}$: The other prisoner could not deduce the value of $M+R$ given the information they already had. After Mark doesn't answer on the morning of day one, both prisoners can deduce from $P_1, P_3, P_4, P_5,$ and $P_{\leftarrow 2}$ that $M\le 16$ (call this $P_6$). It is true that both prisoners have more information than this about the value of $M$, but LUI tells us that that doesn't invalidate the deduction. It basically just means that Rose won't be surprised when she gets asked the question. She already knows she will be. By the following morning, both prisoners can deduce from $P_1\dots P_6$ and $P_{\leftarrow 3}$ that $4\le R \le 16$ ($P_7$), and that evening, they can deduce from $P_1,\dots P_7$ and $P_{\leftarrow 4}$ that $4 \le M \le 14$ ($P_8$). Again, both prisoners know all of this already. (But the conclusions are still valid by LUI.) On the next day, in a similar manner, they can deduce in the morning that $6 \le R \le 14$ ($P_9$), and in the evening that $6 \le M \le 12$ ($P_{10}$). Here's where things get interesting.
Mark can deduce from $P_3$ and $Q_M$ that $R$ is either $6$ or $8$, but $R=6\wedge P_{10} \wedge P_3\implies M+R=18$ and $R=6\wedge P_{10} \wedge P_3\wedge\left[R=6\wedge P_{10} \wedge P_3\implies M+R=18\right]\implies \neg P_{\leftarrow 7}$. When he gets asked the question again on the following morning, he learns that $P_{\leftarrow 7}$ is true, and can thus deduce that $R \neq 6$ and therefore $R=8$ and $M+R=20$. This is actually the first time in the sequence that a $P_{\leftarrow k}$ provides any more information about the value of $M+R$ than the prisoner already has, but the sequence of irrelevant questions is necessary to establish the deep metaknowledge Glen talks about. In this formulation, all this metaknowledge is encapsulated in $P_5$. When a prisoner is asked a question, $P_5$ says that they can deduce not only $P_{\leftarrow k}$ but also that both of them know $P_{\leftarrow k}$ and, by repeatedly applying $P_5$, that both of them know that both of them know $P_{\leftarrow k}$, and so on. For any $P_{\leftarrow k}$, there is some level of "we both know that we both know" that can't be deduced from $P_1\dots P_5$ and $Q_M$ or $Q_R$ alone. This is the "new information" being "learned" at each stage. Really nothing new is learned until Rose fails to answer on the $3^\text{rd}$ evening, but the sequence of non-answers $P_{\leftarrow k}$ is necessary to provide the deductive path to $P_{\leftarrow 7}$. In fact, viewing it another way, the fact that not answering provides "no new information" (and in fact doesn't provide any new direct information about the number of bars) is exactly why the puzzle is solvable, because it says that the previous answer provided no new information. Because they both know that the number of bars is either $18$ or $20$ (only two possibilities), any new information about the number of bars (eliminating a possibility) will allow them to give the answer; thus, not answering sends the message "I have not yet received any new information," which, eventually, is new information for the other prisoner. The "conversation" the prisoners have amounts to this: Mark: I don't know how many bars there are. Rose: I already knew that (that you wouldn't know). Mark: I already knew that (that you'd know I wouldn't know). Rose: I already knew THAT (etc.). Mark: I already knew THAT. Rose: I already knew $\mathbf {THAT}$. Mark (to the Evil Logician): There are $20$ bars. But how, you may ask, can a series of messages that provide their recipient with no new information lead to one that does? Simple! The non-answers provide no new information to the recipient, but they do provide information to the sender. If I tell you that I'm secretly a ninja, you might already know that, but even if you do, knowledge is gained, because by telling you, I give myself the knowledge that you know I'm a ninja, and that you know I know you know I'm a ninja, etc. Thus, each message sent, even if the recipient already knows it, provides the sender with information. After several such questions, this is enough information that a message recipient can draw conclusions based on the sender's inability to draw any conclusions from the information they know the sender has. Ok, fine, you might say, but what, exactly, is learned when Mark fails to answer on the first morning, and how can you prove this was not already known? Great question, thanks for asking. You see... At this point, we have to resort to metaknowledge (I know she knows I know...)
even though it can get confusing. However, I'll break it down in such a way as to hopefully satisfy anyone who still objects, by showing that there is (meta)knowledge available after Mark fails to answer the first question that was not available before he did so. Specifically, after failing to answer the first question, Mark gains the information that Rose knows that Mark knows that Rose knows that Mark knows that Rose knows that Mark's window has less than $18$ bars. Now, that's a mouthful, so let's break it down into parts: $R_0$: Mark's window doesn't have $18$ bars. $M_1$: Rose knows $R_0$. $R_2$: Mark knows $M_1$. $M_3$: Rose knows $R_2$. $R_4$: Mark knows $M_3$. $M_5$: Rose knows $R_4$. My claim is that A) before he fails to answer on the first morning, Mark does not know $M_5$, and B) afterwards, he does. Let's examine A) first: To show that Mark doesn't know $M_5$ beforehand, we work backwards from $R_0$. In order for Rose to know that Mark's window doesn't have $18$ bars, her window would have to have more than $2$ bars. Since the rules (and numbers of bars) imply that they both have an even number of bars, in order for Mark to know $M_1$, he would have to know that Rose's window has at least $4$ bars. The only way for him to know that is if his window has less than $16$ bars. Thus, for Rose to know $R_2$, she must know that Mark has no more than $14$ bars, which requires that she have at least $6$ bars. For Mark to know $M_3$, then, he must have no more than $12$ bars, so for Rose to know $R_4$ she must have at least $8$ bars, and for Mark to know $M_5$ he must have no more than $10$ bars. But he does have more than $10$ bars, so he doesn't know $M_5$ beforehand. To see why Mark must know $M_5$ after he fails to answer the question, we must realize that they both know the rules of the game, and one of the rules of the game is that they both know the rules of the game. This creates an infinite loop of meta-knowledge, meaning that they both know that they both know that they both know... the rules, no matter how many times you repeat "they both know". This infinite-depth meta-knowledge extends to anything that can be deduced from the rules. If Mark's window had $18$ bars, he could deduce from the rules that Rose must have $2$, and the tower must have $20$ in total. Because he doesn't answer, Rose will be asked, and when she is, she will know that he couldn't deduce the answer, and therefore has less than $18$ bars. Because this is all deduced directly from the rules, rather than the private knowledge that either prisoner has, it inherits the infinite meta-knowledge of the rules, and Mark knows $M_5$. So, Mark learns $M_5$. Does Rose learn anything? It's tempting to think that she doesn't, because she can predict in advance that Mark won't answer and therefore, one might think, she can draw in advance any conclusions that could be drawn from his not answering. However, as was shown above, by not answering, Mark learns $M_5$. Not answering changes the state of Mark's knowledge. This means that Rose's ability to predict Mark's behavior doesn't prevent her from gaining new information. She can predict in advance both what he will do (not answer) and what he will learn when he does it ($M_5$), but since he doesn't learn $M_5$ until he actually declines to answer, his failure to answer provides her with the information that he knows $M_5$. Since he didn't know $M_5$ beforehand, the knowledge that he does is by definition new information for Rose.
Rose already knew that she would come to know this, but until Mark actually declines to answer, she doesn't know it (because it isn't yet true). By following this prediction logic out, it's possible to show that Rose knows (at the start) that Mark will be unable to answer until the $4^\text{th}$ morning, but not whether or not he'll be able to answer then. Mark, meanwhile, knows that Rose will be unable to answer until the $3^\text{rd}$ evening, but not whether or not she'll be able to answer then. As soon as one of the prisoners observes an event that they were unable to predict at the beginning, they can deduce from it something they didn't know about the state of the other's knowledge. Since the only hidden information is how many bars are in the other prisoner's window, and they know that it must be one of two values, learning new information about that allows them to eliminate one of the values and find the correct result.
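For readers who prefer to see the elimination argument run mechanically, here is a minimal Python sketch. Following the answer above, it assumes even integer bar counts of at least $2$ summing to $18$ or $20$, with Mark questioned each morning and Rose each evening: a prisoner can answer exactly when every remaining candidate pair matching their own count gives the same sum, and each public non-answer deletes every pair on which the questioned prisoner would have answered.

# Epistemic-elimination simulation of the bars puzzle discussed above.
# Candidates (M, R): even bar counts >= 2 with M + R in {18, 20}.
candidates = {(m, s - m) for s in (18, 20) for m in range(2, s - 1, 2)}

def can_answer(cands, who, value):
    # A prisoner knows the sum iff every candidate pair matching their own
    # bar count yields one and the same total.
    sums = {m + r for (m, r) in cands if (m if who == "Mark" else r) == value}
    return len(sums) == 1

M, R = 12, 8
for rnd in range(1, 10):
    who = "Mark" if rnd % 2 == 1 else "Rose"   # Mark mornings, Rose evenings
    if can_answer(candidates, who, M if who == "Mark" else R):
        print(f"Round {rnd}: {who} answers {M + R}")
        break
    # A public non-answer eliminates every pair on which the questioned
    # prisoner would have been able to answer.
    candidates = {(m, r) for (m, r) in candidates
                  if not can_answer(candidates, who, m if who == "Mark" else r)}
    print(f"Round {rnd}: {who} passes; {len(candidates)} pairs remain")

Run as-is, this prints six passes and then has Mark announce $20$ in round $7$, i.e. on the $4^\text{th}$ morning, matching the walkthrough above.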
Group cohomology of dihedral group:D8 This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8. Homology groups for trivial group action FACTS TO CHECK AGAINST (homology group for trivial group action): first homology group: the first homology group for trivial group action equals the tensor product with the abelianization; second homology group: the formula for the second homology group for trivial group action in terms of the Schur multiplier and the abelianization (Hopf's formula for the Schur multiplier); in general: the universal coefficients theorem for group homology, the fact that the homology group for trivial group action commutes with direct product in the second coordinate, and the Künneth formula for group homology. Over the integers The homology groups with coefficients in the ring of integers, and the table of the first few of them (degrees 0 through 8), were rendered as inline formulas in the original page and are not recoverable from this copy. Over an abelian group The homology groups with coefficients in an abelian group $M$ are given by a formula (likewise not recoverable here) involving the 2-torsion subgroup and the 8-torsion subgroup of $M$. Cohomology groups for trivial group action FACTS TO CHECK AGAINST (cohomology group for trivial group action): first cohomology group: the first cohomology group for trivial group action is naturally isomorphic to the group of homomorphisms; second cohomology group: the formula for the second cohomology group for trivial group action in terms of the Schur multiplier and the abelianization; in general: the dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers, the fact that the cohomology group for trivial group action commutes with direct product in the second coordinate, and the Künneth formula for group cohomology. Over the integers The cohomology groups with coefficients in the ring of integers are as follows: $$H^p(D_8;\mathbb{Z}) = \left \lbrace \begin{array}{rl} \mathbb{Z}, & \qquad p = 0 \\ \mathbb{Z}/2\mathbb{Z}, & \qquad p \equiv 2 \pmod 4 \\ \mathbb{Z}/8\mathbb{Z}, & \qquad p \ne 0,\ p \equiv 0 \pmod 4\\ 0, & p \ \operatorname{odd} \\ \end{array}\right.$$ Over an abelian group The cohomology groups with coefficients in an abelian group $M$ are given by a formula (not recoverable here) involving the 2-torsion subgroup and the 8-torsion subgroup of $M$.
Cohomology ring with coefficients in integers PLACEHOLDER FOR INFORMATION TO BE FILLED IN Second cohomology groups and extensions Second cohomology groups for trivial group action: Group acted upon: cyclic group:Z2 (order 2, second part of GAP ID: 1). Second cohomology group for trivial group action: elementary abelian group:E8. Extensions: direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16. Cohomology information: second cohomology group for trivial group action of D8 on Z2. Group acted upon: cyclic group:Z4 (order 4, second part of GAP ID: 1). Second cohomology group for trivial group action: ?. Extensions: ?. Cohomology information: second cohomology group for trivial group action of D8 on Z4.
Howto Raytracer: Ray / Plane Intersection Theory In this tutorial I will derive how to calculate the intersection of a ray and a plane. As already stated in my ray / sphere intersection howto, a ray $$r(t)$$ can be represented by a point on the ray $$e$$ and the ray's direction $$d$$: $$r(t)=e + t d$$. The set $$R$$ of all points on the ray is then given by: $$R = \{r(t) \mid t \in \mathbb{R}\}$$ Similarly, a plane $$P$$ can be represented by a point on the plane $$p$$ and by its normal $$n$$. So how do we characterize the points on a plane? The main insight is that given any two points $$a,b$$ on the plane, the vector from $$a$$ to $$b$$, i.e. $$b-a$$, lies itself inside the plane and is thus, by definition of the plane's normal $$n$$, perpendicular to it. To check for perpendicularity, we check if the dot product $$(b-a)\cdot n$$ is $$0$$. The set of the plane's points $$x$$ is then given by $$P = \{x \in \mathbb{R}^3 \mid (x-p) \cdot n = 0\}$$ To find the intersection of the ray and the plane, we now have to find the points that are in both sets. So we check if a point $$r(t)$$ on the ray also fulfills the plane equation: $$ \begin{aligned} (r(t) - p) \cdot n = 0 \\ (e + t d - p) \cdot n = 0 \\ e \cdot n + t d \cdot n - p \cdot n = 0 \\ t = \frac{p \cdot n - e \cdot n }{d \cdot n} \\ t = \frac{(p - e) \cdot n }{d \cdot n} \end{aligned} $$ So we just calculate $$t$$ and plug it back into the ray's equation $$r(t)$$ to get the hit point, unless the denominator $$d \cdot n$$ is $$0$$, in which case there is no unique intersection. Geometrically this corresponds to the ray and the plane being parallel (and if additionally $$(p - e) \cdot n = 0$$, the ray lies entirely inside the plane). Here's some sample code that implements this collision test:

public override RayTracer.HitInfo Intersect(Ray ray)
{
    RayTracer.HitInfo info = new RayTracer.HitInfo();
    Vector3 d = ray.direction;
    float denominator = Vector3.Dot(d, normal);
    if (Mathf.Abs(denominator) < Mathf.Epsilon)
        return info; // direction and plane parallel, no intersection
    float t = Vector3.Dot(center - ray.origin, normal) / denominator;
    if (t < 0)
        return info; // plane behind ray's origin
    info.time = t;
    info.hitPoint = ray.GetPoint(t);
    info.normal = normal;
    return info;
}

All in all, ray / plane intersection is rather easy if you're working with infinitely large planes. If your planes have a width and height, it gets drastically more complicated using just a normal to represent a plane, because this representation is invariant under rotations around the normal. However, for planes with finite width/height this orientation is important, and you would be better off representing the plane by the (orthogonal) vectors that span it.
Proposition: Such posets have exactly two maximal elements, one of which lies above every non-maximal element; I call this one supermaximal (in accordance with the original definition of Jeremy Rickard in the first link). Also, removing the supermaximal element leaves the incomparability graph connected (this is easy to see: it has degree $1$ in that graph). So these posets are precisely the bichains as defined in multiple ways by Matthew Fayers (2nd link). One of the definitions is the solution Jan Kynčl was proposing: the posets whose incomparability graph is a caterpillar. Lemma: If the incomparability graph of a poset $P$ is disconnected, we can partition $P$ into two parts $X<Y$, that is, $x<y\;\forall x\in X, y\in Y$. Proof: Let $M$ be a connected component of the incomparability graph and $N=P\setminus M$. Now we distinguish two cases: If $\forall y\in N: (\forall x \in M: x<y)\vee(\forall x \in M: x>y)$, then this gives a natural partition of $N$ into elements bigger and smaller than $M$. By transitivity, the set $X$ of smaller elements does the job. If, on the other hand, $\exists y\in N: (\exists m\in M: m<y)\wedge (\exists x \in M: x>y)$, then there is a natural decomposition of $M$ into the part smaller and the part bigger than $y$. Thus $M$ is disconnected, which is a contradiction. Proof of Proposition: If the poset had only one maximal element, this would be the greatest element of the poset and would thus be disconnected from the rest of the incomparability graph. Claim: If such a poset $P$ has at least three maximal elements $x,y,z$, then we can add a cover relation between two of them, in one direction or the other, say $z>y$ or $y>z$. Proof of Claim: Take the transitive closure of $z>y$. This adds only relations of the form $e\le z$ for some elements $e\in P$. Assume this disconnects the incomparability graph, and let $X_z, Y_z$ be the partition classes given by the lemma. Since the incomparability graph was connected before, there is $x_z\in X_z$ (originally) incomparable to $z$ in $P$. Now play the same game adding the relation $y>z$, and you get $x_y\in X_y$ incomparable to $y$ in $P$. This means $x_y\in Y_z$, because $z\in Y_z$ and $(z,x),(x,y),(y,x_y)$ are incomparable pairs. Yet this would imply $x_y>x_z$, which is a contradiction by the symmetry of the construction. Thus the poset has exactly two maximal elements $x,y$. Now assume neither of $x,y$ is supermaximal: then there are $p,q\in P$ incomparable to $x,y$ respectively. They are both non-maximal, so $p<y$ and $q<x$. Now add the cover relation $x>p$ and take the transitive closure. Again, this adds only relations of the form $e<x$. We get an element $x_x\in X_x$ incomparable to $x$ in $P$ in a similar way as above. Now $q\in Y_x$, since $x\in Y_x$ and $(x,y),(y,q)$ are still incomparable. But this already means $x>q>x_x$ in the original poset. Thus we reach a contradiction. As Nik pointed out, while it is quite obvious that the obtained poset has a connected incomparability graph, it is not obvious that it is maximal in that regard. Suppose it is not. Let $P$ be the original poset with maximal elements $x,y$, let $P_x$ be its truncation obtained by deleting the supermaximal element $x$, and let $P_x^>$ be an extension of $P_x$. Case 1: $y$ is maximal in $P_x^>$: Then we can add (back) an element $x$ to $P_x^>$ that is greater than all elements except $y$, yielding a poset $P^>$ that is connected and has all relations that $P$ had.
Also, the number of relations it has is strictly greater than the number of relations of $P$, since the difference between the numbers of relations of $P$ and $P_x$, as well as of $P^>$ and $P_x^>$, is precisely $|P|-2$, the number of relations of $x$. So $P$ was not maximal, which is a contradiction. Case 2: Since the poset $P_x^>$ is still connected, it has at least two maximal elements $a,b$, and these are maximal in $P_x$ as well. In the claim above we convinced ourselves that since $P_x$ has at least three maximal elements, we can add a cover relation between $a$ and $b$ without disconnecting the incomparability graph. Now we could also have chosen this extension to be $P_x^>$ and ended up in the first case, so this is a contradiction as well. My proof shows in particular that minimal connected incomparability graphs are minimal connected graphs, a.k.a. trees, by iteratively pointing at a leaf of the graph, which one can delete. Related links: Has anyone seen these posets before? http://www.maths.qmul.ac.uk/~mf/papers/posets.pdf
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death. In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid. In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid. Question: In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
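The movement rule ("glide until a wall stops you; gliding off the grid is fatal") makes the question a textbook breadth-first search over grid cells. Since the grids in this puzzle are figures that are not reproduced here, the Python sketch below runs on a small grid of my own invention, and it models walls as solid cells ('#') rather than thick edges; both the layout and that simplification are assumptions for illustration only.

# BFS for the minimum number of pushes from X to Y, gliding until the next
# cell is a wall ('#'); gliding past the boundary is fatal, so such a push
# is simply not available. The grid layout is made up for illustration.
from collections import deque

GRID = [".#....",
        ".....#",
        "......",
        "....Y.",
        ".X..#."]

def min_pushes(grid):
    rows, cols = len(grid), len(grid[0])
    find = lambda ch: next((r, c) for r in range(rows)
                           for c in range(cols) if grid[r][c] == ch)
    start, goal = find("X"), find("Y")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), pushes = queue.popleft()
        if (r, c) == goal:
            return pushes
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            rr, cc = r, c
            # glide while the next cell is inside the grid and not a wall
            while 0 <= rr + dr < rows and 0 <= cc + dc < cols \
                    and grid[rr + dr][cc + dc] != "#":
                rr, cc = rr + dr, cc + dc
            # a stop exists only if a wall (not the boundary) ended the glide
            stopped_by_wall = 0 <= rr + dr < rows and 0 <= cc + dc < cols
            if stopped_by_wall and (rr, cc) != (r, c) and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append(((rr, cc), pushes + 1))
    return None  # Y is unreachable

print(min_pushes(GRID))  # -> 3 for this layout (up, right, down)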
I am new to Mathematica, and I am trying to generate a polynomial function of an operator. So for example, the operator $L$ is $\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}$, and I want to generate the polynomial $\sum_{n=0}^{k} (L/2)^n$ and apply that polynomial operator to a function. I was trying to use Nest; any help?

Define

L = (1/2) (D[#, x] + D[#, y]) &

We see that L works as desired. For instance:

Simplify[Nest[L, f[x, y], 3]]
(* (Derivative[0, 3][f][x, y] + 3*Derivative[1, 2][f][x, y] + 3*Derivative[2, 1][f][x, y] + Derivative[3, 0][f][x, y])/8 *)

And the sum can be constructed in a similar manner. For instance:

Simplify[Sum[Nest[L, f[x, y], n], {n, 0, 3}]]
(* (8*f[x, y] + 4*Derivative[0, 1][f][x, y] + 2*Derivative[0, 2][f][x, y] + Derivative[0, 3][f][x, y] + 4*Derivative[1, 0][f][x, y] + 4*Derivative[1, 1][f][x, y] + 3*Derivative[1, 2][f][x, y] + 2*Derivative[2, 0][f][x, y] + 3*Derivative[2, 1][f][x, y] + Derivative[3, 0][f][x, y])/8 *)

Update As kindly pointed out by @b.gatessucks in the comment below, computation of the final result can be simplified with NestList. (Thanks!)

Simplify[Total[NestList[L, f[x, y], 3]]]

Here is a function makeOperator that takes any polynomial together with a replacement rule that maps the desired variable onto the desired operator. It outputs the result as a new operator:

Clear[makeOperator];
makeOperator[poly_, Rule[x_, op_]] /; PolynomialQ[poly, x] :=
 Module[{f},
  Function[#1, #2] & @@ {f, Expand[poly]} /. Power[x, n_: 1] :> Nest[op, f, n]]

I define operators using Function. Since that has the attribute HoldAll, the necessary replacements in the polynomial have to be done outside the Function body, and are injected afterwards using Apply (@@). The pattern Power[x, n_: 1] detects powers of the variable (including first powers) and replaces them by Nest. Expand makes sure that the polynomial is in a canonical form before doing the replacements; in particular, it eliminates parentheses as in $x(x+c)$. Here is a test with the operator L in the question, and a Hermite polynomial:

Clear[x, y];
L = Function[f, D[f, x] + D[f, y]];
hp = HermiteH[5, x]
(* ==> 120 x - 160 x^3 + 32 x^5 *)
hpOp = makeOperator[hp, x -> L];
hpOp[ψ[x, y]]

$$120 \left(\frac{\partial \psi }{\partial x}+\frac{\partial \psi }{\partial y}\right)\\ -160 \left(3 \frac{\partial ^3\psi }{\partial x^2\, \partial y}+3 \frac{\partial ^3\psi }{\partial x\, \partial y^2}+\frac{\partial ^3\psi }{\partial x^3}+\frac{\partial ^3\psi }{\partial y^3}\right)\\+32 \left(5 \frac{\partial ^5\psi }{\partial x^4\, \partial y}+10 \frac{\partial ^5\psi }{\partial x^3\, \partial y^2}+10 \frac{\partial ^5\psi }{\partial x^2\, \partial y^3}+5 \frac{\partial ^5\psi }{\partial x\, \partial y^4}+\frac{\partial ^5\psi }{\partial x^5}+\frac{\partial ^5\psi }{\partial y^5}\right)$$
I think there is a typo in the first formula. Let me propose this (partial) answer for the $3$ first formulae: Because $H(z)H(0) \sim -\ln(z)$, we may write the OPE for any pair of operators $F(H), G(H)$ which are functions of $H$ (in analogy with formula $2.2.10$, p. $39$, vol. $1$): $$:F::G: = e^{- \large \int dz_1 dz_2 \ln z_{12} \frac{\partial}{\partial H(z_1)}\frac{\partial}{\partial H(z_2)}} :FG:\tag{1}$$ This gives, for $F = e^{i \epsilon_1 H(z_1)}, G = e^{i \epsilon_2 H(z_2)}$, $$:e^{i \epsilon_1 H(z_1)}::e^{i \epsilon_2 H(z_2)}: = (z_{12})^{\epsilon_1 \epsilon_2} :e^{i \epsilon_1 H(z_1)}e^{i \epsilon_2 H(z_2)}:\tag{2}$$ So, we have: $$:e^{iH(z)}::e^{-iH(0)}:~ = \frac{1}{z}~:e^{i H(z)}e^{-i H(0)}: ~\sim \frac{1}{z}:e^{i H(0)}e^{-i H(0)}: \sim \frac{1}{z}\tag{3}$$ $$:e^{iH(z)}::e^{iH(0)}:~ = z~:e^{i H(z)}e^{i H(0)}: ~\sim z~:e^{2i H(0)}: \sim O(z)\tag{4}$$ $$:e^{-iH(z)}::e^{-iH(0)}:~ = z~:e^{-i H(z)}e^{-i H(0)}: ~\sim z~:e^{-2i H(0)}: \sim O(z)\tag{5}$$ [EDIT] For the last equation, I think it is the same reasoning as the one done in Vol. $1$, pages $173$ and $174$, formulae $6.2.24$ through $6.2.31$. [EDIT 2] Formula $(1)$ and formula $(2.2.10)$ are not ad hoc formulae. They are consequences of the definition of the normal ordering and the definition of the contractions, that is, of the general formulae $2.2.5$ to $2.2.9$, for instance: $$F = :F:+ ~contractions \tag{2.2.8}$$ $$:F::G: = :FG:+ ~cross\text{-}contractions \tag{2.2.9}$$ Now, we may specialize to holomorphic fields $Y(z)$, so that $Y(z)Y(0) \sim f(z)$, and write: $$:F::G: = e^{ \large \int dz_1 dz_2 f(z_{12}) \frac{\partial}{\partial Y(z_1)}\frac{\partial}{\partial Y(z_2)}} :FG:\tag{6}$$ where $F$ and $G$ are functions of $Y$. The specialization to a holomorphic field does not change the logic and the calculus done in $2.2.5$ to $2.2.9$.
The Einstein solid is a model of a solid based on two assumptions: each atom in the lattice is an independent three-dimensional quantum harmonic oscillator, and all atoms oscillate with the same frequency. While the assumption that a solid has independent oscillations is very accurate, these oscillations are sound waves or phonons, collective modes involving many atoms. In the Einstein model, each atom oscillates independently. Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem of classical mechanics. Historical impact The original theory proposed by Einstein in 1907 has great historical relevance. The heat capacity of solids as predicted by the empirical Dulong-Petit law was required by classical mechanics: the specific heat of solids should be independent of temperature. But experiments at low temperatures showed that the heat capacity changes, going to zero at absolute zero. As the temperature goes up, the specific heat goes up until it approaches the Dulong and Petit prediction at high temperature. By employing Planck's quantization assumption, Einstein's theory accounted for the observed experimental trend for the first time. Together with the photoelectric effect, this became one of the most important pieces of evidence for the need of quantization. Einstein used the levels of the quantum mechanical oscillator many years before the advent of modern quantum mechanics. In Einstein's model, the specific heat approaches zero exponentially fast at low temperatures. This is because all the oscillations have one common frequency. The correct behavior is found by quantizing the normal modes of the solid in the same way that Einstein suggested. Then the frequencies of the waves are not all the same, and the specific heat goes to zero as a $T^3$ power law, which matches experiment. This modification is called the Debye model, which appeared in 1912. When Walther Nernst learned of Einstein's 1907 paper on specific heat, [1] he was so excited that he traveled all the way from Berlin to Zurich to meet with him. [2] [3] Heat capacity (microcanonical ensemble) [Figure: heat capacity of an Einstein solid as a function of temperature; the experimental value of $3Nk$ is recovered at high temperatures.] The heat capacity of an object at constant volume $V$ is defined through the internal energy $U$ as $$C_V = \left({\partial U\over\partial T}\right)_V.$$ The temperature of the system, $T$, can be found from the entropy: $${1\over T} = {\partial S\over\partial U}.$$ To find the entropy, consider a solid made of $N$ atoms, each of which has 3 degrees of freedom. So there are $3N$ quantum harmonic oscillators (hereafter SHOs for "Simple Harmonic Oscillators"): $$N^{\prime} = 3N.$$ Possible energies of an SHO are given by $$E_n = \hbar\omega\left(n+{1\over2}\right);$$ in other words, the energy levels are evenly spaced, and one can define a quantum of energy $$\varepsilon = \hbar\omega,$$ which is the smallest, and the only, amount by which the energy of an SHO can be increased. Next, we must compute the multiplicity of the system, that is, the number of ways to distribute $q$ quanta of energy among $N^{\prime}$ SHOs.
This task becomes simpler if one thinks of distributing $q$ pebbles over $N^{\prime}$ boxes, or separating stacks of pebbles with $N^{\prime}-1$ partitions, or arranging $q$ pebbles and $N^{\prime}-1$ partitions in a row. The last picture is the most telling. The number of arrangements of $n$ objects is $n!$, so the number of possible arrangements of $q$ pebbles and $N^{\prime}-1$ partitions is $\left(q+N^{\prime}-1\right)!$. However, if partition #3 and partition #5 trade places, no one would notice, and the same argument goes for quanta. To obtain the number of possible distinguishable arrangements, one has to divide the total number of arrangements by the number of indistinguishable arrangements. There are $q!$ identical quanta arrangements and $(N^{\prime}-1)!$ identical partition arrangements. Therefore, the multiplicity of the system is given by $$\Omega = {\left(q+N^{\prime}-1\right)!\over q!\,(N^{\prime}-1)!},$$ which, as mentioned before, is the number of ways to deposit $q$ quanta of energy into $N^{\prime}$ oscillators. The entropy of the system has the form $$S/k = \ln\Omega = \ln{\left(q+N^{\prime}-1\right)!\over q!\,(N^{\prime}-1)!}.$$ $N^{\prime}$ is a huge number, so subtracting one from it has no overall effect whatsoever: $$S/k \approx \ln{\left(q+N^{\prime}\right)!\over q!\,N^{\prime}!}.$$ With the help of Stirling's approximation, the entropy can be simplified: $$S/k \approx \left(q+N^{\prime}\right)\ln\left(q+N^{\prime}\right)-N^{\prime}\ln N^{\prime}-q\ln q.$$ The total energy of the solid is given by $$U = {N^{\prime}\varepsilon\over2} + q\varepsilon,$$ since there are $q$ energy quanta in total in the system in addition to the ground state energy of each oscillator. (Some authors, such as Schroeder, omit this ground state energy in their definition of the total energy of an Einstein solid.) We are now ready to compute the temperature: $${1\over T} = {\partial S\over\partial U} = {\partial S\over\partial q}{dq\over dU} = {1\over\varepsilon}{\partial S\over\partial q} = {k\over\varepsilon} \ln\left(1+N^{\prime}/q\right).$$ Elimination of $q$ between the two preceding formulas gives for $U$: $$U = {N^{\prime}\varepsilon\over2} + {N^{\prime}\varepsilon\over e^{\varepsilon/kT}-1}.$$ The first term is associated with the zero point energy and does not contribute to the specific heat; it will therefore be lost in the next step. Differentiating with respect to temperature to find $C_V$, we obtain $$C_V = {\partial U\over\partial T} = {N^{\prime}\varepsilon^2\over k T^2}{e^{\varepsilon/kT}\over \left(e^{\varepsilon/kT}-1\right)^2}$$ or $$C_V = 3Nk\left({\varepsilon\over k T}\right)^2{e^{\varepsilon/kT}\over \left(e^{\varepsilon/kT}-1\right)^2}.$$ Although the Einstein model of the solid predicts the heat capacity accurately at high temperatures, it noticeably deviates from experimental values at low temperatures. See the Debye model for how to calculate accurate low-temperature heat capacities. Heat capacity (canonical ensemble) The heat capacity is obtained through the use of the canonical partition function of a simple harmonic oscillator (SHO): $$Z = \sum_{n=0}^{\infty} e^{-E_n/kT},$$ where $$E_n = \varepsilon\left(n+{1\over2}\right).$$ Substituting this into the partition function formula yields $$\begin{align} Z &= \sum_{n=0}^{\infty} e^{-\varepsilon\left(n+1/2\right)/kT} = e^{-\varepsilon/2kT} \sum_{n=0}^{\infty} e^{-n\varepsilon/kT} = e^{-\varepsilon/2kT} \sum_{n=0}^{\infty} \left(e^{-\varepsilon/kT}\right)^n \\ &= {e^{-\varepsilon/2kT}\over 1-e^{-\varepsilon/kT}} = {1\over e^{\varepsilon/2kT}-e^{-\varepsilon/2kT}} = {1\over 2 \sinh\left({\varepsilon\over 2kT}\right)}. \end{align}$$ This is the partition function of one SHO. Because, statistically, the heat capacity, energy, and entropy of the solid are equally distributed among its atoms (SHOs), we can work with this partition function to obtain those quantities and then simply multiply them by $N^{\prime}$ to get the total. Next, let's compute the average energy of each oscillator: $$\langle E\rangle = u = -{1\over Z}\partial_{\beta}Z,$$ where $\beta = {1\over kT}$. Therefore $$u = -2 \sinh\left({\varepsilon\over 2kT}\right){-\cosh\left({\varepsilon\over 2kT}\right)\over 2 \sinh^2\left({\varepsilon\over 2kT}\right)}{\varepsilon\over2} = {\varepsilon\over2}\coth\left({\varepsilon\over 2kT}\right).$$ The heat capacity of one oscillator is then $$c_V = {\partial u\over\partial T} = -{\varepsilon\over2} {1\over \sinh^2\left({\varepsilon\over 2kT}\right)}\left(-{\varepsilon\over 2kT^2}\right) = k \left({\varepsilon\over 2 k T}\right)^2 {1\over \sinh^2\left({\varepsilon\over 2kT}\right)}.$$ Up to now, we calculated the heat capacity of a unique degree of freedom, which has been modeled as an SHO. The heat capacity of the entire solid is then given by $C_V = 3Nc_V$, where the total number of degrees of freedom of the solid is three (for the three directional degrees of freedom) times $N$, the number of atoms in the solid. One thus obtains $$C_V = 3Nk\left({\varepsilon\over 2 k T}\right)^2 {1\over \sinh^2\left({\varepsilon\over 2kT}\right)},$$ which is algebraically identical to the formula derived in the previous section. The quantity $T_E=\varepsilon / k$ has the dimensions of temperature and is a characteristic property of a crystal. It is known as the Einstein temperature. [4] Hence, the Einstein crystal model predicts that the energy and heat capacity of a crystal are universal functions of the dimensionless ratio $T / T_E$. Similarly, the Debye model predicts a universal function of the ratio $T/T_D$. References [1] A. Einstein, "Die Plancksche Theorie der Strahlung und die Theorie der spezifischen Wärme", Annalen der Physik (ser. 4), vol. 22, pp. 180–190, 1907 (correction on p. 800). [2] Stone, A. Douglas (2013). Einstein and the Quantum. Princeton University Press, p. 146. ISBN 978-0-691-13968-5. [3] http://press.princeton.edu/titles/10068.html [4] Rogers, Donald (2005). Einstein's Other Theory: the Planck-Bose-Einstein Theory of Heat Capacity. Princeton University Press, p. 73.
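As a quick numerical sanity check, the two algebraically identical closed forms for $C_V$ derived above can be evaluated side by side. A minimal Python sketch, in units where $k = 1$ and $\varepsilon = 1$ (so $T$ is measured in units of the Einstein temperature $T_E$), with $N = 1$:

# Evaluate both closed forms of the Einstein heat capacity, with k = 1,
# epsilon = 1 (temperature in units of T_E) and N = 1.
import math

def cv_exponential(T):
    x = 1.0 / T                       # epsilon / (k T)
    return 3 * x**2 * math.exp(x) / (math.exp(x) - 1) ** 2

def cv_sinh(T):
    x = 1.0 / (2 * T)                 # epsilon / (2 k T)
    return 3 * x**2 / math.sinh(x) ** 2

for T in (0.1, 0.5, 1.0, 5.0):
    print(f"T/T_E = {T}: {cv_exponential(T):.6f}  {cv_sinh(T):.6f}")

Both columns coincide, vanish exponentially as $T \to 0$, and approach the Dulong-Petit value $3Nk$ at high temperature, as discussed above.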
In this post, we will show that the Earth's rotation alters the surface of its ocean, with "ocean" here meaning the global ocean of the Earth, i.e., the continuous body of water encircling it. Our model will consist of a spherical Earth with radius $R$ rotating at an angular velocity $\omega$ around a fixed axis. The easiest way to solve this problem is by using a non-inertial frame of reference which rotates with the Earth and with origin at its center. The vertical axis $z'$ of this rotating frame is chosen to be the axis of rotation of the Earth (see figure 1). In this frame, the ocean is at rest since it moves with the Earth. Fig. 1: Earth and the surface of the ocean. The reference frame $S$ sees the Earth rotating with angular velocity $\omega$ around the $z$ axis, while in the frame $S'$, the Earth is at rest since $S'$ is also rotating with respect to $S$ with angular velocity $\omega$, i.e., $S'$ is "glued" to the Earth. The origins of both $S$ and $S'$ coincide with the center of the Earth. The effect of the Earth's rotation on the surface of its ocean is greatly exaggerated in the figure. Let $S$ be a reference frame with origin at the center of the Earth and such that the Earth itself is seen as rotating with angular velocity $\omega$, and let $S'$ be the frame which rotates with the Earth as shown in figure 1. As proven in a previous post, the following equation relates the force ${\bf F}_S$ experienced by a fluid element of the ocean with mass $m$ in $S$ and the effective force ${\bf F}^{\textrm{eff}}_{S'}$ experienced by this same fluid element in $S'$ (terms which trivially vanish were omitted below): $$ {\bf F}^{\textrm{eff}}_{S'} = m{\bf a}_{S'} = {\bf F}_S - m{\pmb\omega} \times ({\pmb\omega}\times{\bf x}') = {\bf F}_S + m\omega^2 \rho\hat{\pmb\rho} \label{post_f9a58faf18bdbd019f06a0f09c123d60_eq_forces1} $$ where ${\bf a}_{S'}$ and ${\bf x}'$ are the acceleration and position of the fluid element as measured in $S'$ respectively, ${\pmb\omega} = \omega\hat{\bf z}$ is the angular velocity of the Earth as seen in $S$, $\rho$ is the distance between the fluid element and the $z$ (or $z'$) axis and $\hat{\pmb\rho}$ is the unit vector which points radially outwards from the $z$ (or $z'$) axis (see figure 2). Fig. 2: Forces acting on a fluid element of mass $m$ on the surface of the Earth as seen in $S'$. The figure shows a cross section of the Earth which passes through the $z$ (or $z'$) axis. The term ${\bf F}_S$ is the sum of the only two forces experienced in the non-rotating frame $S$: the gravitational force $m{\bf g}$ and the force ${\bf F}_{\textrm{fluid}}$ exerted on the fluid element by the surrounding fluid (up to this point, we have not explicitly assumed that the fluid element is on the surface, so the equation below applies to any fluid element of the ocean): $$ {\bf F}_S = m{\bf g} + {\bf F}_{\textrm{fluid}} \label{post_f9a58faf18bdbd019f06a0f09c123d60_eq_forces2} $$ Since in $S'$ the fluid element is at rest, ${\bf a}_{S'} = {\bf 0}$. This fact, together with equations \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_eq_forces1} and \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_eq_forces2}, yields the following: $$ {\bf 0} = {\bf F}_{\textrm{fluid}} + m\left({\bf g} - {\pmb\omega} \times ({\pmb\omega}\times{\bf x}')\right) = {\bf F}_{\textrm{fluid}} + m\left({\bf g} + \omega^2 \rho\hat{\pmb\rho}\right) \label{post_f9a58faf18bdbd019f06a0f09c123d60_eq_acc} $$ Figure 2 shows these three forces.
In equation \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_eq_acc}, the centrifugal acceleration term $\omega^2 \rho\hat{\pmb\rho}$ can be interpreted as an additional component which, together with the gravitational acceleration ${\bf g}$, yields an effective gravitational acceleration ${\bf g}^{\textrm{eff}}$ which is no longer homogeneous in space: $$ {\bf g}^{\textrm{eff}} = {\bf g} - {\pmb\omega} \times ({\pmb\omega}\times{\bf x}') = {\bf g} + \omega^2 \rho\hat{\pmb\rho} = {\bf g} + \omega^2 R \sin\theta\hat{\pmb\rho} \label{post_f9a58faf18bdbd019f06a0f09c123d60_effec_gravity} $$ where above we used the fact that $\rho \approx R\sin\theta$ since the surface of the ocean is close enough to the surface of the spherical Earth, and $\theta$ is the angle between the $z'$ axis and a segment connecting the center of the Earth to the fluid element. The force ${\bf F}_{\textrm{fluid}}$ is a very interesting one since it is always orthogonal to the surface of the ocean for fluid elements on the ocean surface. To understand this point, imagine the case in which there is no rotation, i.e., ${\pmb\omega} = {\bf 0}$. In this case, the frames $S$ and $S'$ coincide and the ocean is at rest on both. From equation \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_eq_acc}, we see that ${\bf F}_{\textrm{fluid}} = - m{\bf g}$, which means ${\bf F}_{\textrm{fluid}}$ is the force on the fluid element exerted by the surrounding fluid against the gravitational force: the fluid element is not moving, so the surrounding fluid must be providing the force ${\bf F}_{\textrm{fluid}} = -m{\bf g}$ to keep it in place. Since the fluid surface is locally horizontal in this case, i.e., orthogonal to ${\bf g}$, then ${\bf F}_{\textrm{fluid}}$ is orthogonal to the fluid surface. When $\omega \neq 0$, ${\bf F}_{\textrm{fluid}}$ plays exactly the same role, but now the effective gravitational force given in equation \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_effec_gravity} is no longer homogeneous in space but depends explicitly on the position ${\bf x}'$: ${\bf F}_{\textrm{fluid}}$ will still be orthogonal to the surface of the ocean for every fluid element on the ocean surface; a tangential force component would set the fluid element in motion since it would not be able to resist this force (but a perpendicular force is resisted by a pressure gradient created inside the water). Since ${\bf F}_{\textrm{fluid}}$ is always perpendicular to every fluid element on the ocean surface, and since, from equations \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_eq_acc} and \eqref{post_f9a58faf18bdbd019f06a0f09c123d60_effec_gravity}: $$ {\bf F}_{\textrm{fluid}} = -m\left({\bf g} + \omega^2 R\sin\theta\hat{\pmb\rho}\right) = -m{\bf g}^{\textrm{eff}} \label{post_f9a58faf18bdbd019f06a0f09c123d60_eq_acc2} $$ then the ocean surface is always perpendicular to the effective gravitational acceleration ${\bf g}^{\textrm{eff}}$. As shown in figure 3, even though ${\bf g}$ always points to the center of the Earth, ${\bf g}^{\textrm{eff}}$ does not, meaning the surface of the ocean will not be spherical. At the poles, we have $\theta = 0$ and $\theta = \pi$, so ${\bf g}^{\textrm{eff}} = {\bf g}$, meaning the ocean surface is perpendicular to ${\bf g}$ at those points. 
Along the equator, $\theta = \pi/2$ and therefore ${\bf g}^{\textrm{eff}} = {\bf g} + \omega^2 R \hat{\pmb\rho}$; since $\hat{\pmb\rho}$ is anti-parallel to ${\bf g}$ there, the resulting ${\bf g}^{\textrm{eff}}$ still points to the center of the Earth and therefore the ocean is perpendicular to ${\bf g}$ there as well, but the fact that $\omega^2 R \hat{\pmb\rho}$ points opposite to ${\bf g}$ means ${\bf g}^{\textrm{eff}}$ is smaller in magnitude at the equator than at the poles (where it attains its highest magnitude). But at any other point on the surface of the Earth, $\bf g$ and $\omega^2 R\sin\theta\hat{\pmb\rho}$ are not collinear, and therefore the surface of the ocean is in general not perpendicular to ${\bf g}$. Fig. 3: Components of ${\bf g}^{\textrm{eff}}$ for fluid elements on diverse points along the surface of the ocean. Except at the poles and at the equator, the surface of the ocean is in general not orthogonal to ${\bf g}$. A final comment is necessary here: when studying the surface of the ocean, we only considered the rotation of the Earth. In reality, the centrifugal acceleration $\omega^2 \rho \hat{\pmb\rho}$ causes the Earth itself to be shaped more like an oblate spheroid with an equatorial bulge of $42.77\textrm{km}$, and the ocean is affected as a result. The gravitational forces of the Sun and the Moon also significantly change the shapes of both the Earth and the ocean due to the tidal forces which they generate. For the curious, $\omega^2 R \approx 0.034\textrm{m/s}^2$, so the rotation of the Earth makes objects on the equator about $\omega^2 R / g \approx 0.35\%$ lighter than at the poles (here we used $g = 9.8\textrm{m/s}^2$). In practice, however, the fact that the Earth is an oblate spheroid means the difference is even larger, since objects at the equator are farther from the center of the Earth than objects at the poles.
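To attach numbers to figure 3, here is a small Python sketch evaluating ${\bf g}^{\textrm{eff}} = {\bf g} + \omega^2 R \sin\theta\,\hat{\pmb\rho}$ in the meridian plane for a few colatitudes $\theta$ (standard values for $g$, $R$ and $\omega$; spherical Earth as in the model above):

# g_eff = g + omega^2 R sin(theta) rho_hat in the meridian plane, with x
# pointing away from the rotation axis and z along it; theta = colatitude.
import math

g, R, omega = 9.8, 6.371e6, 7.292e-5   # m/s^2, m, rad/s

for deg in (0, 30, 45, 60, 90):
    th = math.radians(deg)
    gx, gz = -g * math.sin(th), -g * math.cos(th)  # g points to the center
    gx += omega ** 2 * R * math.sin(th)            # centrifugal term, along rho_hat
    mag = math.hypot(gx, gz)
    # angle between g_eff and the inward radial direction (the deflection)
    cosang = (gx * -math.sin(th) + gz * -math.cos(th)) / mag
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    print(f"colatitude {deg:2d} deg: |g_eff| = {mag:.4f} m/s^2, deflection = {ang:.3f} deg")

The deflection vanishes at the poles and at the equator and peaks near $\theta = 45^\circ$ at roughly $0.1^\circ$: tiny to the eye, but exactly the tilt that reshapes the ocean surface.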
Let $X$ be a Banach space and $X^*$ its dual. We know that the weak* topology is the least topology that makes every $x \in X$ continuous as an evaluation functional. However, this does not imply that every weak*-continuous linear functional is something in $X$, even though this happens to be true. The question is: how can we prove this? What I have thought is: it is enough to show that $\bigcap_{i=1}^{k} \operatorname{Ker} x_i \subset \operatorname{Ker}\phi$ for some $x_i \in X$, $i=1,2,\ldots,k$. I have shown this for infinitely many $x_i$'s (easy, using the weak* continuity and that $0$ is always in the kernel), and in order to pass to finitely many I would need some kind of compactness result (probably by using Banach-Alaoglu somehow), but I do not know how to do this. Can anyone help?
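For the record, one standard route avoids compactness entirely: weak* continuity bounds $\phi$ on a basic neighborhood determined by finitely many vectors of $X$, and a linear-algebra lemma on kernel inclusions finishes the job. A sketch in the poster's notation (my own wording, offered as a hint rather than as the canonical proof): since $\phi$ is weak*-continuous, $\{f \in X^* : |\phi(f)| < 1\}$ is a weak*-neighborhood of $0$, hence it contains a basic neighborhood $$V = \{f \in X^* : |f(x_i)| < \varepsilon,\ i = 1,\dots,k\}$$ for some $x_1,\dots,x_k \in X$ and $\varepsilon > 0$. If $f \in \bigcap_{i=1}^k \operatorname{Ker} x_i$, then $tf \in V$ for every $t > 0$, so $|\phi(f)| < 1/t$ for all $t > 0$, forcing $\phi(f) = 0$. Thus $\bigcap_{i=1}^k \operatorname{Ker} x_i \subset \operatorname{Ker}\phi$, and the finite kernel-inclusion lemma gives scalars $c_i$ with $\phi = \sum_{i=1}^k c_i x_i$, i.e. $\phi$ is evaluation at an element of $X$.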
(1) \(x^2+y^2=1\). Recall that \((x-y)^2 \ge 0\) (the square of any number is greater than or equal to zero). Expand: \(x^2-2xy+y^2 \ge 0\), and since \(x^2+y^2=1\), then \(1-2xy \ge 0\). So \(xy \le \frac{1}{2}\). Sufficient. (2) \(x^2-y^2=0\). Re-arrange and take the square root of both sides: \(|x|=|y|\). Clearly insufficient. Answer: A.

Isn't the same applicable to \((x+y)^2\)? \((x+y)^2 = x^2 + y^2 + 2xy \ge 0\), so \(1 + 2xy \ge 0\), so \(xy \ge -\frac{1}{2}\). Does this prove that statement (1) is insufficient?

You need to consider \((x-y)^2\geq{0}\) to get sufficiency.

Please explain why we need to consider only \((x-y)^2\) to get sufficiency, since in \((x+y)^2\) we also use the premise \(x^2 + y^2=1\); hence it is not clear that (1) always leads to a single conclusion.

shrutikasat wrote: yeah, even I have the same doubt: why not \((x+y)^2\), as harshalnamdeo88 asked?

Guys, the point is that if you consider \((x+y)^2 \ge 0\), you'll get \(xy\geq{-\frac{1}{2}}\), which is not helpful at all. Actually, since both \(xy\geq{-\frac{1}{2}}\) (from \((x+y)^2 \ge 0\)) and \(xy \le \frac{1}{2}\) (from \((x-y)^2 \ge 0\)) are true, we get that \(-\frac{1}{2} \leq{xy} \leq{\frac{1}{2}}\). But only the approach given in the solution above gives you the answer we are looking for, while the other one gives you an inequality which is not helpful.

Hi Bunuel, could you help me if I am wrong? What I did was first to convert the question stem to \(2xy - 1 < 0\); then from (1) I concluded that in order to satisfy \(x^2+y^2=1\), the only \(x\) and \(y\) possible were \(x=1, y=0\); \(x=-1, y=0\); \(x=0, y=1\); \(x=0, y=-1\). So the question stem \(2xy - 1 < 0\) is always true. Is my thinking process wrong or correct? Thanks a lot. Regards, Luis Navarro. Looking for 700.

Your thought process is not precise. We are NOT told that \(x\) and \(y\) are integers, thus \(x^2 + y^2 = 1\) will have infinitely many solutions, not just the ones you mentioned.

I used a geometric approach to solve this problem. Did I get lucky? \(x^2 + y^2 = 1\) would indicate a right triangle with \(x\) and \(y\) as height and base. Maximizing the area of the triangle maximizes the product \(xy\), which happens at \(x=y=1/\sqrt{2}\) (I am not sure this step is rigorous; I don't have a reason for it). Thus the product will always be less than or equal to \(1/2\).

For statement 1 I used a different approach: I considered \(x^2+y^2=1\) as a circle with centre at \((0,0)\) and radius \(1\). The product \(xy\) is represented by a parallelogram whose area is maximized in the first quadrant when the parallelogram is a square. Hence \(xy\) reaches its maximum when \(x=y=1/\sqrt{2}\), so the product can be at most \(1/2\). Though your method is the correct and best one, I couldn't think of it in the first place.

I read the first statement; by this it could mean that (a) \(x\) could be \(0\) and \(y\) could be \(1\), or \(x\) could be \(1\) and \(y\) could be \(0\): in both cases the question stem is TRUE; or (b) \(x = y = \pm 1/\sqrt{2}\): then also the question stem is TRUE. So the answer is A or D. By (2), either \(x = y = 0\), and the question stem is TRUE, or \(x = y = 2\), and the question stem is FALSE.
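As a quick numeric cross-check of the algebra in this thread, sampling the unit circle confirms that \(xy\) ranges over \([-\frac{1}{2}, \frac{1}{2}]\) with the maximum at \(x = y = 1/\sqrt{2}\). A short Python sketch:

# Sample x^2 + y^2 = 1 via (cos t, sin t) and track xy; the bounds
# -1/2 <= xy <= 1/2 from the thread should emerge, extremes at |x| = |y|.
import math

best = max((math.cos(t) * math.sin(t), t)
           for t in (i * 2 * math.pi / 10000 for i in range(10000)))
xy, t = best
print(f"max xy ~ {xy:.6f} at x = {math.cos(t):.4f}, y = {math.sin(t):.4f}")
# Indeed xy = cos(t) sin(t) = sin(2t)/2, so the bound 1/2 is exact.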
Why is $\zeta(2) = \frac{\pi^2}{6}$ almost equal to $\sqrt{e}$? Experimenting a bit, I also found $\zeta(\frac{8}{3}) \approx e^\frac{1}{4}$, $\zeta(\frac{31}{9}) \approx e^\frac{1}{8}$ and $\zeta(\frac{141}{23}) \approx e^\frac{1}{64}$. I also figured out that $\zeta(x)$ approaches $e^{2^{-x}}$ as $x \to \infty$ (both behave like $1 + 2^{-x}$ there), but I'm not sure that helps explain why these almost-equalities exist. How can one quantify how surprising these almost-equalities are, and what is the explanation for them, if any? EDIT: There does seem to be a pattern here: $\log \zeta(n + (\frac{2}{3})^{n-1}) \approx 2^{-n}$ for $n = 1,2,3,4,\ldots$. I think this formula explains the observations, but where does it come from? BONUS, since I've retagged this as a soft-question already: Is there any wrong but somehow plausible argument that two random integers are relatively prime with probability $\frac{1}{\sqrt{e}}$? I guess it would be like a Lucky Larry story.
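The near-misses, and the conjectured pattern in the edit, are easy to check to high precision, for instance with the mpmath library in Python:

# Check zeta(2) against sqrt(e), and the conjectured pattern
# log zeta(n + (2/3)^(n-1)) ~ 2^(-n) for the first few n.
from mpmath import mp, zeta, exp, log, mpf

mp.dps = 30
print(zeta(2), exp(mpf(1) / 2))              # 1.6449... vs 1.6487...
for n in range(1, 5):
    s = n + (mpf(2) / 3) ** (n - 1)
    print(n, log(zeta(s)), mpf(2) ** (-n))

Printing the two sides next to each other at least makes precise how close (and how imperfect) each of the almost-equalities is.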
We have thought a bit about the last paragraph of the above question and have some arguments as to what the answer should be. Since there have been no replies here so far, maybe I am allowed to hereby suggest an answer myself. Recall, the last part of the above question was: is there a nonabelian 7-dimensional Chern-Simons theory holographically related to the nonabelian $(2,0)$-theory on coincident M5-branes, and if so, does it involve the Lagrangian that controls differential 5-brane structures? The following is an argument for the answer: yes. First, in Witten's AdS/CFT correspondence and TFT (hep-th/9812012) a careful analysis of $AdS_5/CFT_4$-duality shows that the spaces of conformal blocks of the 4d CFT are to be identified with the spaces of states of (just) the Chern-Simons-type Lagrangians inside the full type II action. At the very end of the article it is suggested that, similarly, the conformal blocks of the 6d $(2,0)$-CFT are given by the spaces of states of (just) the Chern-Simons part inside 11d supergravity/M-theory. But there only the abelian sugra effective Lagrangian $$ \int_{AdS_7} \int_{S^4} C_3 \wedge G_4 \wedge G_4 = N \int_{AdS_7} C_3 \wedge G_4 $$ is briefly considered. So we need to have a closer look at this: notice that there are two quantum corrections to the 11d sugra Chern-Simons term. First, the 11-dimensional analog of the Green-Schwarz anomaly cancellation changes the above Chern-Simons term to (from (3.14) in hep-th/9506126, and ignoring prefactors here for notational simplicity) $$ \int_{AdS_7} \int_{S^4} C_3 \wedge (G_4 \wedge G_4 + I_8(\omega)) = N \int_{AdS_7} \left( C_3 \wedge G_4 - CS_7(\omega) \right) \,, $$ for $I_8 = \frac{1}{48}(p_2 - (\frac{1}{2}p_1)^2)$, where now the second term is the corresponding Chern-Simons 7-form evaluated in the spin connection (all locally). So, taking quantum anomaly cancellation into account, the argument of the above hep-th/9812012 appears to predict a non-abelian 7d Chern-Simons theory computing the conformal blocks of the 6d (2,0) theory, namely one whose field configurations involve both the abelian higher C-field and the non-abelian spin connection field. But there is a second quantum correction that further refines this statement: by Witten's On Flux Quantization In M-Theory And The Effective Action (hep-th/9609122), the underlying integral 4-class $[G_4]$ of the $C$-field in the 11d bulk is constrained to satisfy $$ 2[G_4] = \frac{1}{2}p_1 - 2a \,, $$ where on the right the first term is the fractional first Pontryagin class on $B Spin$ and where $a$ is the universal 4-class of an $E_8$-bundle, the one that in Horava-Witten compactification yields the $E_8$-gauge field on the boundary of the 11d bulk. In that context, the boundary condition for the C-field is $[G_4]_{bdr} = 0$, reducing the above condition to the 10d Green-Schwarz cancellation condition. If this boundary condition on the $C$-field is also relevant for the asymptotic $AdS_7$-boundary, then this means that what locally looks like a Spin-connection above is really a twisted differential String-2-connection, with $2a$ being the twist. As discussed in detail there, such twisted differential String-2-connections involve a further field $H_3$ such that $d H_3 = tr(F_\omega \wedge F_\omega) - tr(F_{A_{E_8}} \wedge F_{A_{E_8}})$. Plugging this condition into the above 7-dimensional Chern-Simons action adds to the abelian $C_3$-field a Chern-Simons term for the new $H_3$-field, plus a bunch of nonabelian correction terms.
In total this argument produces a certain nonabelian 7d Chern-Simons theory whose fields are twisted String-2-connections and whose states would yield the conformal blocks of a 6d CFT. Notice that by math/0504123 there is a gauge in which $String$-2-connections are given by loop-group valued nonabelian 2-forms (but there are other gauges in which this is not manifest). This is consistent with expectations for the "nonabelian gerbe theory" in 6d. That's the physics argument; a more detailed writeup is in section 4.5.4.3.1 of my notes. Now the point is this: in the next section, 4.5.4.3.2, it is shown that, independently of all of this physics handwaving, there is naturally a fully precise 7-dimensional higher Chern-Simons Lagrangian defined on the full moduli 2-stack of twisted differential String-2-connections, induced via higher Chern-Weil theory from the second fractional Pontryagin class. As discussed there, on local differential form data this reproduces precisely the nonabelian 7d Chern-Simons functional of the above argument. We are in the process of writing this up as Fiorenza, Sati, Schreiber, Nonabelian 7d Chern-Simons theory and the 5-brane. Comments are welcome.
My search for numbers to support any conclusion to this question that included wind factors led me down a rabbit hole of interesting science. I'll try to keep the following answer as clear and concise as I can. I started with a basic question: What are the limits of a wall? After some finagling of my Google search terms, I found what must be the most authoritative source of engineering formulae I've ever had the (mis)fortune to try to understand. This report on the Strength of Masonry Walls Under Compressive and Transverse Loads was both an eye-opener and informative, but incredibly dense, to the point that I spent over an hour trying to understand the equations and what they were telling me. (I'd relate them here, but there's a simplification later, so you can peruse the report if you want.) After seeing the term "cavity wall" in that report, I decided to do some digging on what kinds of walls were out there and what their limits were. That led me to a Study on Stress Performance and Free Brickwork Height Limit of Traditional Chinese Cavity Wall. This report indicated that traditional Chinese cavity walls could survive a 6.0-magnitude earthquake if they weren't more than 12.79 meters tall, and that they could survive a 20-meter-per-second wind if they weren't more than 7.5 meters tall. (Note: handy tool for calculating wind pressure.) But what about other kinds of walls, like a solid wall? Back to the drawing board. Looking for the limits of a structure in general led me to this question on our sister site, Physics SE: How high can be tower or building? (sic) The OP's research led them to a simple equation: $$h = \dfrac{\sigma}{\rho g}$$ The OP did some additional research after asking the question, which produced a further constraint: for shapes other than a cylinder or cone, $\sigma$ must satisfy $$\sigma \geq \dfrac{\rho g V}{S}$$ where $\rho$ is the density of the structure, $g$ is acceleration due to gravity, $V$ is the volume of the structure, and $S$ is the surface area. But wait, there's more! From comments on that question, I made my way over to this answer to a question about ice walls. There, the answerer indicated that [t]he most heavily solicited cross section will be the one at the very bottom, which will be supporting a compressive pressure of $\rho h g$, where $\rho$ is the density of the ice, $h$ the height of the wall, and $g$ the acceleration of gravity. Comparing that resulting value to the compressive strength of the material in question will indicate at which point the wall will fail. However, s/he also noted: As an aside note, if you are willing to sacrifice perfectly vertical walls, having a wall with width growing as $A e^{by}$, where $y$ is vertical distance from the top of the wall, will have every cross section of it standing the exact same compressive pressure. This would allow you to make the wall as high as you wanted.
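To attach a number to the simple self-crushing limit $h = \sigma/(\rho g)$ quoted above, here is a small calculation of my own; the material values are typical handbook figures for brick masonry, assumed for illustration rather than taken from the cited reports:

# Maximum height of a uniform vertical wall before the base fails in compression.
g = 9.81        # m/s^2, gravitational acceleration
sigma = 20e6    # Pa, compressive strength (typical brick masonry, assumed)
rho = 1800.0    # kg/m^3, density (typical brick masonry, assumed)

h_max = sigma / (rho * g)
print(f"h_max = {h_max:.0f} m")  # ~1130 m

So even a generous solid brick wall tops out around a kilometer before crushing itself, wind loading aside.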
Kraemer (1980) proposed a method for assessing inter-rater reliability for tasks in which raters could select multiple categories for each object of measurement. The intuition behind this method is to reframe the problem from one of classification to one of rank ordering: all selected categories are tied for first place and all non-selected categories are tied for second place. Chance-adjusted agreement can then be calculated using rank correlation coefficients or analysis of variance of the ranks. Naturally, this approach also allows multiple categories to be ranked by raters. $$\kappa_0 = \frac{\bar{P} - P_e}{1 - P_e} + \frac{1 - \bar{P}}{Nm_0(1 - P_e)}$$ where $\bar{P}$ is the average proportion of concordant pairs out of all possible pairs of observations for each subject, $P_e=\sum_j p_j^2$ where $p_j$ is the overall proportion of observations in which response category $j$ was selected, $m_0$ is the number of observations per subject, and $N$ is the number of subjects. It can also be shown that, when only one category is selected, $\kappa_0$ asymptotically approaches Cohen's and Fleiss' kappa coefficients. A clever solution, but not one that I've ever seen used in an article. References Kraemer, H. C. (1980). Extension of the kappa coefficient. Biometrics, 36(2), 207–216.
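For what it's worth, the quoted formula is trivial to evaluate once its ingredients are known; the following sketch (mine, with made-up inputs) simply transcribes it:

import numpy as np

def kappa_0(p_bar, p_j, n_subjects, m_0):
    # Kraemer's kappa_0 as quoted above; P_e = sum_j p_j^2 is chance agreement.
    p_e = float(np.sum(np.asarray(p_j) ** 2))
    return ((p_bar - p_e) / (1 - p_e)
            + (1 - p_bar) / (n_subjects * m_0 * (1 - p_e)))

# Made-up example: 4 categories, 20 subjects, 6 observations per subject.
print(kappa_0(p_bar=0.75, p_j=[0.4, 0.3, 0.2, 0.1], n_subjects=20, m_0=6))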
The most general conic section has an equation of the form $$Ax^2 + By^2 + Cxy + Dx + E y + F = 0.$$ So far we have taken $C=D=E=0$, except for parabolas. Here we'll see how to adjust for nonzero values of $D$ and $E$. We'll still assume that $C=0$, which means that our curves will point in the direction of the coordinate axes. $C \ne 0$ describes rotated conic sections. If $A \ne 0$, then we can always absorb the $Dx$ term by completing the square: \begin{eqnarray*} Ax^2 + Dx & = & A \left (x^2 + \frac{D}{A}x \right) \cr \cr & = & A\left (x + \frac{D}{2A}\right )^2 - \frac{D^2}{4A}\end{eqnarray*} Likewise, if $B \ne 0$ then we can absorb the $Ey$ term: $$\displaystyle{By^2 + Ey = B\left (y+\frac{E}{2B}\right )^2-\frac{E^2}{4B}}.$$ If $B=0$ then we cannot absorb the $Ey$ term, and the equation of a parabola winds up looking like $\displaystyle{y-k = \frac{-A}{E} (x-h)^2}$.
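As a quick sanity check of the completing-the-square identities above, one can verify them symbolically; this throwaway script (not part of the course page) uses sympy:

import sympy as sp

x, y, A, B, D, E = sp.symbols('x y A B D E', nonzero=True)
# A x^2 + D x == A (x + D/(2A))^2 - D^2/(4A)
print(sp.simplify(A*x**2 + D*x - (A*(x + D/(2*A))**2 - D**2/(4*A))))  # 0
# B y^2 + E y == B (y + E/(2B))^2 - E^2/(4B)
print(sp.simplify(B*y**2 + E*y - (B*(y + E/(2*B))**2 - E**2/(4*B))))  # 0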
A supernova explosion is caused by the collapse of the core. Some of the gravitational potential energy released in this collapse is (somehow) transferred to the envelope. The transferred energy is sufficient to unbind the envelope. Some numbers: If a 1.25 solar-mass $(M_{\odot})$ stellar core (roughly the Chandrasekhar mass for an iron core), with the size of the Earth, collapses to a 10 km radius (the size of a neutron star), then it releases roughly $2\times 10^{46}$ J. About 10% of this energy dissociates the iron nuclei (8.8 MeV/nucleon), but most of the rest ends up in neutrinos. The gravitational binding energy of the $\sim 10M_{\odot}$ envelope depends on its density profile. An extreme upper limit would be to place this mass just above the original core radius - so less than a few $10^{45}$ J. But more realistically, if we place most of this mass at a solar radius, then its binding energy is a few $10^{43}$ J. Thus only about 1-2% of the gravitational energy released by the collapsing core is required to unbind (and explode) the envelope. How this transfer takes place after core bounce is still debated, but there is no problem with the energy budget. Your second question is a non sequitur. It is not the case that all supernovae eject matter in the form of jets. Some supernovae may do so; the cause is open to question, but will involve rapid rotation and magnetic fields. The jet axis will be the rotation axis of the star. Sometimes this is called the collapsar model of supernovae.
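For the curious, the quoted energy release is easy to reproduce to order of magnitude. A back-of-the-envelope script (mine; the 3/5 prefactor assumes a uniform-density sphere, which a real core is not):

G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
M_sun = 1.989e30    # kg

M_core = 1.25 * M_sun
R_initial = 6.371e6   # m, roughly an Earth radius
R_final = 1.0e4       # m, roughly a neutron-star radius

E_released = 3/5 * G * M_core**2 * (1/R_final - 1/R_initial)
print(f"{E_released:.1e} J")   # ~2.5e46 J, matching the figure above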
In this article, we'll look at the relationship between the rise/fall time of a digital signal and its bandwidth. Creating a connection between these two parameters, one from the time domain and the other from the frequency domain, allows us to more easily analyze a circuit. We'll see that insight into both domains allows us to calculate the increase in the rise time of a signal as it goes through a system with a limited bandwidth. Time and Frequency Domains There are two different representations that are commonly used to analyze the operation of a circuit: the time domain and frequency domain representations. Time domain analysis is based on examining the changes a voltage or current experiences over time. Frequency domain analysis, on the other hand, represents a signal as a sum of several sinusoids with different frequencies and examines the circuit behavior in response to each of these frequency components. Answering a particular question may become easier if we choose a suitable representation to analyze a circuit. For example, using the time-domain representation allows us to more easily understand the wave reflection phenomenon that occurs when driving a relatively long wire with a fast logic gate. However, the EMC performance of a PC board can be better analyzed if we use the frequency domain representation. Since answering a given question can be easier in one domain or the other, we sometimes need to be able to translate time-domain parameters to frequency-domain parameters and vice versa. In this article, we'll look at the relationship between a digital signal's rise/fall time and its bandwidth, which is a frequency-domain parameter. But before that, let's have a look at some important concepts. Rise Time: An Important Time Domain Parameter The rise time of a digital signal is a very important time-domain parameter. For example, rise time can directly affect the ground bounce of a PCB. This is illustrated in Figure 1 below. In this figure, the inductance of the ground path is modeled by a lumped inductor. When the output of gate 1 goes from logic high to logic low, the electric charge stored in \[C_{STRAY}\] discharges through the ground path. This leads to a ground bounce given by \[V = L \frac {\Delta I}{\Delta t}\] where \[L\] is the ground-path inductance, \[\Delta I \] is the discharge current that flows through the ground inductance, and \[\Delta t \] is the discharge time interval, which is related to the rise/fall time of the gate output. Figure 1. Image courtesy of Electromagnetic Compatibility Engineering. The ground bounce can lead to a noise voltage at the output of gate 2 and, if large enough, even cause an unwanted transition at the output of gate 4. This is only one example illustrating the importance of rise time in a high-speed digital design. We discussed in a previous article that a sufficiently small rise time can lead to the wave reflection phenomenon when driving a relatively long wire with a fast logic gate. Bandwidth of a Digital Signal Bandwidth is a common frequency domain parameter used to describe the behavior of a circuit. For example, we usually consider a 3-dB bandwidth to describe the frequency response of a filter or communication channel.
As shown in Figure 2, the 3-dB bandwidth of a low-pass filter is the part of the frequency response that lies within 3 dB of the transfer function magnitude at DC (in this figure, the magnitude at DC is 0 dB and it drops to -3 dB at the edge of the bandwidth). Figure 2. Image created by Robert Keim via AAC. While the above definition of bandwidth describes the behavior of a circuit, there is another definition of bandwidth that describes the frequency content of a digital signal. This definition specifies the highest significant frequency component in the spectral content of a digital signal. We'll explain the word "significant" used in this definition in a minute, but before that, what is the bandwidth of an ideal square wave? The spectral content of a 50% duty cycle ideal square wave (with zero rise/fall time) includes all of the odd harmonics of its fundamental frequency. For this ideal square wave, the bandwidth is infinite. However, we cannot have this ideal signal in the real world because the device that produces this signal or the interconnect that is used to transmit it will inevitably exhibit a finite bandwidth. As a result, all of its harmonics above the 3-dB frequency of our device/interconnect will be attenuated. Due to these suppressed high-frequency harmonics, we will no longer have a zero rise time square wave. Instead, we'll have a trapezoidal-like waveform that needs some time to transition from low to high or vice versa. Figure 3 below compares a trapezoidal signal with an ideal square wave. Figure 3. Image courtesy of Signal and Power Integrity-Simplified. The above figure also shows the frequency content of the two signals. As you can see, the high-frequency harmonics in the spectrum of the trapezoidal waveform are significantly attenuated (compared to those of the ideal square wave). Since the trapezoidal waveform doesn't have these high-frequency components, it cannot exhibit the fast changes that are required for sharp transitions. As discussed above, an ideal square wave has infinite bandwidth, but what is the bandwidth of the above trapezoidal waveform? The book Signal and Power Integrity-Simplified by Eric Bogatin, a well-respected PCB designer, suggests that if a frequency component of a trapezoidal waveform (with arbitrary rise/fall time) is attenuated to less than 70% of the amplitude of the same harmonic of the ideal square wave, then that frequency component is sufficiently attenuated and is not a significant frequency component in the signal spectrum. Considering only the significant frequency components, we can find the bandwidth of a given trapezoidal waveform. For example, by visual inspection of Figure 3, we observe that the 7th harmonic is attenuated to well below 70% of the same harmonic of the ideal square wave. Hence, this harmonic is not significant. However, the 5th harmonic is significant by the above definition. Therefore, the bandwidth of this trapezoidal waveform extends from DC to the 5th harmonic. If the waveform period is 10 ns, the fundamental harmonic will be 100 MHz and the bandwidth will be 500 MHz. Rise Time and Bandwidth We saw that in practice, we cannot have zero rise time. The non-zero rise/fall time of a trapezoidal waveform corresponds to a limited bandwidth in the frequency domain.
There is an approximation that relates the rise time of a signal to its bandwidth: \[BW = \frac {0.35}{T_r}\] In this equation, \[T_r\] is the 10-90% rise time of the signal. The 10-90% rise time is the time interval it takes the signal to go from 10% of its final value to 90% of its final value. For example, if a signal has a rise time of 0.5 ns, its bandwidth will be 700 MHz. How Does an Interconnect Change a Signal Rise Time? In the final section of this article, let's have a brief look at an interesting problem. What happens if we pass a signal with a given rise time \[T_{r, in}\] through a circuit that has bandwidth BW? How does the limited bandwidth of the circuit affect the signal rise time? For example, assume that we pass the signal through a 4-inch-long transmission line that has a bandwidth of BW. If BW is low enough, it can suppress some of the frequency components of the input signal and make them insignificant (we explained above what significant means in this context). Since some of the high-frequency components are suppressed, the signal rise time can increase as it reaches the far end of the interconnect. Therefore, a low-bandwidth circuit/interconnect can increase the rise time of a signal. The following equation allows us to quantify this effect: \[T_{r,out} = \sqrt{T^2 _{r,in} + T^2 _{r,system}}\] Here, \[T_{r, out}\] is the rise time at the interconnect output and \[T_{r, system}\] is the rise time associated with the interconnect. The interconnect rise time can be obtained from its bandwidth using the equation discussed in the previous section. For example, if an interconnect has a bandwidth of 6 GHz, we can associate a rise time of 58.3 ps (picoseconds) with this interconnect. Now, if we send a signal with \[T_{r, in} = 50 ps \] into this interconnect, the rise time of the signal will increase to 76.8 ps at the far end of the interconnect. You can find the mathematical derivation of the above equation in Section 8.5.3 of the book The Design of CMOS Radio-Frequency Integrated Circuits. Unfortunately, the derivation of this equation is not so straightforward. Note that we can easily extend this equation to calculate the rise time at the output of N cascaded systems. Conclusion In this article, we discussed how insight into both time and frequency domains allows us to more easily understand the behavior of a circuit. We looked at the relationship between the rise time and bandwidth of a signal. We saw that the rise time of a signal is inversely proportional to its bandwidth, with the product of these two parameters approximately equal to 0.35. We also saw that as a signal goes through a system with limited bandwidth, its rise time increases. To see a complete list of my articles, please visit this page.
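The two relations above are straightforward to put into code; the following snippet (my own, reproducing the article's worked numbers) computes the interconnect rise time from its bandwidth and the degraded rise time at the output:

import math

def rise_time_from_bw(bw_hz):
    # Tr = 0.35 / BW, the 10-90% rise time associated with a bandwidth
    return 0.35 / bw_hz

def output_rise_time(tr_in, tr_system):
    # Tr_out = sqrt(Tr_in^2 + Tr_system^2)
    return math.sqrt(tr_in**2 + tr_system**2)

tr_system = rise_time_from_bw(6e9)            # 6 GHz interconnect -> ~58.3 ps
tr_out = output_rise_time(50e-12, tr_system)
print(f"{tr_system*1e12:.1f} ps, {tr_out*1e12:.1f} ps")  # 58.3 ps, 76.8 ps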
Difference between revisions of "Kakeya problem"

Revision as of 01:26, 19 March 2009

Define a Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math].
Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

General lower bounds

Trivially, [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, we have [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below.

To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\gtrsim 3^{(n+1)/2}.[/math]

One can get essentially the same conclusion using the "bush" argument. There are [math]N := (3^n-1)/2[/math] different directions. Take a line in every direction, let E be the union of these lines, and let [math]\mu[/math] be the maximum multiplicity of these lines (i.e. the largest number of lines that are concurrent at a point). On the one hand, from double counting we see that E has cardinality at least [math]3N/\mu[/math]. On the other hand, by considering the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that E has cardinality at least [math]2\mu+1[/math]. If we minimise [math]\max(3N/\mu, 2\mu+1)[/math] over all possible values of [math]\mu[/math], we obtain approximately [math]\sqrt{6N} \approx 3^{(n+1)/2}[/math] as a lower bound for [math]|E|[/math].

A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math]

General upper bounds

We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa. Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions.
Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207\ldots+o(1))^n \le k_n \le (1.88988+o(1))^n.[/math]
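The small values quoted at the start ([math]k_1=3[/math], [math]k_2=7[/math]) can be confirmed by brute force. The following Python script (an illustration of mine; hopeless beyond [math]n=2[/math]) searches for the smallest Kakeya set in [math]{\mathbb F}_3^2[/math]:

from itertools import product, combinations

n = 2
points = list(product(range(3), repeat=n))
# Keep one representative d per direction (d and 2d span the same lines).
directions = []
seen = set()
for d in points:
    if d == (0,) * n or d in seen:
        continue
    seen.add(d)
    seen.add(tuple((2 * x) % 3 for x in d))
    directions.append(d)

def is_kakeya(s):
    # s is Kakeya iff for every direction d some a in s has a, a+d, a+2d in s.
    ss = set(s)
    return all(
        any(all(tuple((a[i] + t * d[i]) % 3 for i in range(n)) in ss
                for t in range(3)) for a in ss)
        for d in directions)

for k in range(3, 10):
    if any(is_kakeya(s) for s in combinations(points, k)):
        print("smallest Kakeya set in F_3^2 has size", k)  # prints 7
        break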
I have some open-ended questions on the use of staggered indices in writing Lorentz transformations and their inverses and transposes. What are the respective meanings of $\Lambda^\mu{}_\nu$ as compared to $\Lambda_\mu{}^\nu$? How does one use this staggered index notation to denote the transpose or inverse? If I want to take any of these objects and explicitly write them out as matrices, is there a rule for knowing which index labels the row and which labels the column of a matrix element? Is the rule "(left index, right index) = (row, column)", or is it "(upper index, lower index) = (row, column)", or is there a different rule for $\Lambda^\mu{}_\nu$ as compared to $\Lambda_\mu{}^\nu$? Are there different conventions for any of this used by different authors? As a concrete example of my confusion, let me try to show that two definitions of a Lorentz transformation are equivalent. Definition-1 (typical QFT book): $\Lambda^\mu{}_\alpha \Lambda^\nu{}_\beta \eta^{\alpha\beta} = \eta^{\mu\nu}$ Definition-2 ($\Lambda$ matrix must preserve the pseudo inner product given by the $\eta$ matrix): $(\Lambda x)^T \eta (\Lambda y) = x^T \eta y$, for all $x,y \in \mathbb{R}^4$. This implies, in terms of matrix components (and now I'll switch to linear algebra notation, away from physics-tensor notation): $\sum_{j,k}(\Lambda^T)_{ij} \eta_{jk} \Lambda_{kl} = \eta_{il}$. This last equation is my "Definition-2" of a Lorentz transformation, $\Lambda$, and I can't get it to look like "Definition-1", i.e., I can't manipulate away the slight difference in the ordering of the indices.
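As an aside, "Definition-2" is easy to check numerically for a concrete boost. This little numpy script (mine, with an arbitrary $\beta$) confirms $\Lambda^T \eta \Lambda = \eta$ for a boost along $x$:

import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
L = np.array([[gamma, -gamma*beta, 0, 0],
              [-gamma*beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
print(np.allclose(L.T @ eta @ L, eta))  # True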
I have managed to compute the half width at half maximum (HWHM) and full width at half maximum (FWHM) of a sinc function through the code below.

ClearAll["Global`*"]
Plot[Sin[\[Pi]*x]/(\[Pi]*x), {x, -2, 2}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
Normal[Series[Sin[\[Pi]*x]/(\[Pi]*x), {x, 0, 12}]]
Plot[Evaluate[Normal[Series[Sin[\[Pi]*x]/(\[Pi]*x), {x, 0, 12}]]], {x, -2, 2}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
N[Solve[Normal[Series[Sin[\[Pi]*x]/(\[Pi]*x), {x, 0, 12}]] == 1/2, x]]

where the HWHM is equal to the absolute value of the lowest real root, and the FWHM is two times this value. I have tried to apply the same procedure to compute the HWHM and FWHM of $ \int_{0}^{t} \cos\left( 2\pi f_{c}\tau \right) \cos\left[ 2\pi\left( kt + f_{0}\right)\tau \right]\; d\tau $, where $ f_{c} = 2\times 10^9 $, $ k=30\times 10^{15} $ and $ f_{0}=0.5\times 10^9 $, with the following code.

ClearAll["Global`*"]
Subscript[f, c] = 2*10^9;
k = 30*10^15;
Subscript[f, 0] = 0.5*10^9;
Subscript[s, out] = Integrate[Cos[2*\[Pi]*Subscript[f, c]*\[Tau]]*Cos[2*\[Pi]*(k*t + Subscript[f, 0])*\[Tau]], {\[Tau], 0, t}]
Plot[Subscript[s, out], {t, 49*10^-9, 51*10^-9}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
soutexpansion = Expand[Normal[Series[Subscript[s, out], {t, 50*10^-9, 2}]]]
Plot[Evaluate[soutexpansion], {t, 49.6*10^-9, 50.4*10^-9}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
Solve[soutexpansion == (2.5*10^-8)/2, t]

I know the maximum is at $ t=50\times 10^{-9} $, and that it attains a value of $ 2.5\times10^{-8} $. However, after the Taylor expansion around $ t=50\times 10^{-9} $, I cannot reconstruct the curve around this point.

QUESTION 1: Plotting the expanded second-order polynomial, as in the snippet, the parabola shows a positive concavity. Shouldn't it have a negative concavity?

I have applied a trigonometric identity to rearrange the terms of the integrand and was thus able to reconstruct the expression around $ t=50\times 10^{-9} $ with the code below.

Subscript[s, out2] = Integrate[(1/2)*(Cos[2*\[Pi]*(k*t + Subscript[f, 0] - Subscript[f, c])*\[Tau]] + Cos[2*\[Pi]*(k*t + Subscript[f, 0] + Subscript[f, c])*\[Tau]]), {\[Tau], 0, t}]
Plot[Subscript[s, out2], {t, 49.6*10^-9, 50.4*10^-9}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
Subscript[s, out2series] = Normal[Series[Subscript[s, out2], {t, 50*10^-9, 30}]]
Plot[Evaluate[Subscript[s, out2series]], {t, 49.6*10^-9, 50.4*10^-9}, PlotRange -> Full, Exclusions -> None, GridLines -> Automatic, ImageSize -> Full]
Solve[Subscript[s, out2series] == (2.5*10^-8)/2, t]

However, this time, when I try to find the roots of the expression that would lead me to the HWHM and FWHM, all roots returned have the same value, which does not make sense.

QUESTION 2: Am I missing something? Is what I am doing correct?

QUESTION 3: Is there a better way to find the HWHM and FWHM of the above function?
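Not an answer to the Mathematica questions above, but as an independent cross-check of the first computation: the sinc HWHM can also be found with a numerical root finder, e.g. in Python (note that np.sinc(x) is defined as sin(πx)/(πx)):

import numpy as np
from scipy.optimize import brentq

# Solve sin(pi x)/(pi x) = 1/2 for the first positive root.
hwhm = brentq(lambda x: np.sinc(x) - 0.5, 0.1, 1.0)
print(hwhm, 2 * hwhm)   # HWHM ~ 0.6034, FWHM ~ 1.2067

which agrees with the value obtained from the truncated Taylor series.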
Every day, Danny buys one sweet from the candy store and eats it. The store has $m$ types of sweets, numbered from $1$ to $m$. Danny knows that a balanced diet is important and is applying this concept to his sweet purchasing. To each sweet type $i$, he has assigned a target fraction, which is a real number $f_ i$ ($0 < f_ i \le 1$). He wants the fraction of sweets of type $i$ among all sweets he has eaten to be roughly equal to $f_ i$. To be more precise, let $s_ i$ denote the number of sweets of type $i$ that Danny has eaten, and let $n = \sum _{i=1}^ m s_ i$. We say the set of sweets is balanced if for every $i$ we have

Danny has been buying and eating sweets for a while and during this entire time the set of sweets has been balanced. He is now wondering how many more sweets he can buy while still fulfilling this condition. Given the target fractions $f_ i$ and the sequence of sweets he has eaten so far, determine how many more sweets he can buy and eat so that at any time the set of sweets is balanced.

The input consists of three lines. The first line contains two integers $m$ ($1 \le m \le 10^5$), which is the number of types of sweets, and $k$ ($0 \le k \le 10^5$), which is the number of sweets Danny has already eaten. The second line contains $m$ positive integers $a_1, \ldots , a_ m$. These numbers are proportional to $f_1, \ldots , f_ m$, that is, $\displaystyle f_ i = \frac{a_ i}{\sum _{j = 1}^ m a_ j}$. It is guaranteed that the sum of all $a_ j$ is no larger than $10^5$. The third line contains $k$ integers $b_1, \ldots , b_ k$ ($1 \le b_ i \le m$), where each $b_ i$ denotes the type of sweet Danny bought and ate on the $i^\text {th}$ day. It is guaranteed that every prefix of this sequence (including the whole sequence) is balanced.

Display the maximum number of additional sweets that Danny can buy and eat while keeping his diet continuously balanced. If there is no upper limit on the number of sweets, display the word forever.

Sample Input 1
6 5
2 1 6 3 5 3
1 2 5 3 5

Sample Output 1
1

Sample Input 2
6 4
2 1 6 3 5 3
1 2 5 3

Sample Output 2
forever
LEARNING OBJECTIVES

By the end of this module, you will be able to:

Explain the function of a catalyst in terms of reaction mechanisms and potential energy diagrams
List examples of catalysis in natural and industrial processes

We have seen that the rate of many reactions can be accelerated by catalysts. A catalyst speeds up the rate of a reaction by lowering the activation energy; in addition, the catalyst is regenerated in the process. Several reactions that are thermodynamically favorable in the absence of a catalyst only occur at a reasonable rate when a catalyst is present. One such reaction is catalytic hydrogenation, the process by which hydrogen is added across an alkene C=C bond to afford the saturated alkane product. A comparison of the reaction coordinate diagrams (also known as energy diagrams) for catalyzed and uncatalyzed alkene hydrogenation is shown in Figure 1. Catalysts function by providing an alternate reaction mechanism that has a lower activation energy than would be found in the absence of the catalyst. In some cases, the catalyzed mechanism may include additional steps, as depicted in the reaction diagrams shown in Figure 2. This lower activation energy results in an increase in rate as described by the Arrhenius equation. Note that a catalyst decreases the activation energy for both the forward and the reverse reactions and hence accelerates both the forward and the reverse reactions. Consequently, the presence of a catalyst will permit a system to reach equilibrium more quickly, but it has no effect on the position of the equilibrium as reflected in the value of its equilibrium constant (see the later chapter on chemical equilibrium).

Example 1

Using Reaction Diagrams to Compare Catalyzed Reactions

The two reaction diagrams below represent the same reaction: one without a catalyst and one with a catalyst. Identify which diagram suggests the presence of a catalyst, and determine the activation energy for the catalyzed reaction:

Solution

A catalyst does not affect the energy of reactant or product, so those aspects of the diagrams can be ignored; they are, as we would expect, identical in that respect. There is, however, a noticeable difference in the transition state, which is distinctly lower in diagram (b) than it is in (a). This indicates the use of a catalyst in diagram (b). The activation energy is the difference between the energy of the starting reagents and the transition state—a maximum on the reaction coordinate diagram. The reagents are at 6 kJ and the transition state is at 20 kJ, so the activation energy can be calculated as follows: E_a = 20 kJ - 6 kJ = 14 kJ.

Check Your Learning

Determine which of the two diagrams below (both for the same reaction) involves a catalyst, and identify the activation energy for the catalyzed reaction:

Answer: Diagram (b) is a catalyzed reaction with an activation energy of about 70 kJ.

Homogeneous Catalysts

A homogeneous catalyst is present in the same phase as the reactants. It interacts with a reactant to form an intermediate substance, which then decomposes or reacts with another reactant in one or more steps to regenerate the original catalyst and form product. As an important illustration of homogeneous catalysis, consider the earth's ozone layer. Ozone in the upper atmosphere, which protects the earth from ultraviolet radiation, is formed when oxygen molecules absorb ultraviolet light and undergo the reaction:

[latex]3{\text{O}}_{2}\stackrel{h\nu}{\to}2{\text{O}}_{3}[/latex]

Ozone is a relatively unstable molecule that decomposes to yield diatomic oxygen by the reverse of this equation.
This decomposition reaction is consistent with the following two-step mechanism:

[latex]{\text{O}}_{3}\rightarrow{\text{O}}_{2}+\text{O}[/latex]
[latex]\text{O}+{\text{O}}_{3}\rightarrow 2{\text{O}}_{2}[/latex]

The presence of nitric oxide, NO, influences the rate of decomposition of ozone. Nitric oxide acts as a catalyst in the following mechanism:

[latex]\text{NO}+{\text{O}}_{3}\rightarrow{\text{NO}}_{2}+{\text{O}}_{2}[/latex]
[latex]{\text{O}}_{3}\stackrel{h\nu}{\to}{\text{O}}_{2}+\text{O}[/latex]
[latex]{\text{NO}}_{2}+\text{O}\rightarrow\text{NO}+{\text{O}}_{2}[/latex]

The overall chemical change for the catalyzed mechanism is the same as:

[latex]2{\text{O}}_{3}\rightarrow 3{\text{O}}_{2}[/latex]

The nitric oxide reacts and is regenerated in these reactions. It is not permanently used up; thus, it acts as a catalyst. The rate of decomposition of ozone is greater in the presence of nitric oxide because of the catalytic activity of NO. Certain compounds that contain chlorine also catalyze the decomposition of ozone.

Mario J. Molina

The 1995 Nobel Prize in Chemistry was shared by Paul J. Crutzen, Mario J. Molina (Figure 3), and F. Sherwood Rowland "for their work in atmospheric chemistry, particularly concerning the formation and decomposition of ozone." [1] Molina, a Mexican citizen, carried out the majority of his work at the Massachusetts Institute of Technology (MIT). In 1974, Molina and Rowland published a paper in the journal Nature (one of the major peer-reviewed publications in the field of science) detailing the threat of chlorofluorocarbon gases to the stability of the ozone layer in earth's upper atmosphere. The ozone layer protects earth from solar radiation by absorbing ultraviolet light. As chemical reactions deplete the amount of ozone in the upper atmosphere, a measurable "hole" forms above Antarctica, and an increase in the amount of solar ultraviolet radiation—strongly linked to the prevalence of skin cancers—reaches earth's surface. The work of Molina and Rowland was instrumental in the adoption of the Montreal Protocol, an international treaty signed in 1987 that successfully began phasing out production of chemicals linked to ozone destruction. Molina and Rowland demonstrated that chlorine atoms from human-made chemicals can catalyze ozone destruction in a process similar to that by which NO accelerates the depletion of ozone. Chlorine atoms are generated when chlorocarbons or chlorofluorocarbons—once widely used as refrigerants and propellants—are photochemically decomposed by ultraviolet light or react with hydroxyl radicals. A sample mechanism is shown here using methyl chloride:

[latex]{\text{CH}}_{3}\text{Cl}+\text{OH}\rightarrow\text{Cl}+\text{other products}[/latex]

Chlorine radicals break down ozone and are regenerated by the following catalytic cycle:

[latex]\text{Cl}+{\text{O}}_{3}\rightarrow\text{ClO}+{\text{O}}_{2}[/latex]
[latex]\text{ClO}+\text{O}\rightarrow\text{Cl}+{\text{O}}_{2}[/latex]

A single chlorine atom can break down thousands of ozone molecules. Luckily, the majority of atmospheric chlorine exists as the catalytically inactive forms Cl2 and ClONO2. Since receiving his portion of the Nobel Prize, Molina has continued his work in atmospheric chemistry at MIT.

Glucose-6-Phosphate Dehydrogenase Deficiency

Enzymes in the human body act as catalysts for important chemical reactions in cellular metabolism. As such, a deficiency of a particular enzyme can translate to a life-threatening disease. G6PD (glucose-6-phosphate dehydrogenase) deficiency, a genetic condition that results in a shortage of the enzyme glucose-6-phosphate dehydrogenase, is the most common enzyme deficiency in humans. This enzyme, shown in Figure 4, is the rate-limiting enzyme for the metabolic pathway that supplies NADPH to cells (Figure 5). A disruption in this pathway can lead to reduced glutathione in red blood cells; once all glutathione is consumed, enzymes and other proteins such as hemoglobin are susceptible to damage. For example, hemoglobin can be metabolized to bilirubin, which leads to jaundice, a condition that can become severe.
People who suffer from G6PD deficiency must avoid certain foods and medicines containing chemicals that can trigger damage to their glutathione-deficient red blood cells.

Heterogeneous Catalysts

A heterogeneous catalyst is a catalyst that is present in a different phase (usually a solid) than the reactants. Such catalysts generally function by furnishing an active surface upon which a reaction can occur. Gas and liquid phase reactions catalyzed by heterogeneous catalysts occur on the surface of the catalyst rather than within the gas or liquid phase. Heterogeneous catalysis has at least four steps:

1. Adsorption of the reactant onto the surface of the catalyst
2. Activation of the adsorbed reactant
3. Reaction of the adsorbed reactant
4. Diffusion of the product from the surface into the gas or liquid phase (desorption)

Any one of these steps may be slow and thus may serve as the rate-determining step. In general, however, in the presence of the catalyst, the overall rate of the reaction is faster than it would be if the reactants were in the gas or liquid phase. Figure 6 illustrates the steps that chemists believe to occur in the reaction of compounds containing a carbon–carbon double bond with hydrogen on a nickel catalyst. Nickel is the catalyst used in the hydrogenation of polyunsaturated fats and oils (which contain several carbon–carbon double bonds) to produce saturated fats and oils (which contain only carbon–carbon single bonds). Other significant industrial processes that involve the use of heterogeneous catalysts include the preparation of sulfuric acid, the preparation of ammonia, the oxidation of ammonia to nitric acid, and the synthesis of methanol, CH3OH. Heterogeneous catalysts are also used in the catalytic converters found on most gasoline-powered automobiles (Figure 7).

Automobile Catalytic Converters

Scientists developed catalytic converters to reduce the amount of toxic emissions produced by burning gasoline in internal combustion engines. Catalytic converters take advantage of all five factors that affect the speed of chemical reactions to ensure that exhaust emissions are as safe as possible. By utilizing a carefully selected blend of catalytically active metals, it is possible to effect complete combustion of all carbon-containing compounds to carbon dioxide while also reducing the output of nitrogen oxides. This is particularly impressive when we consider that one step involves adding more oxygen to the molecule and the other involves removing the oxygen (Figure 7). Most modern, three-way catalytic converters possess a surface impregnated with a platinum-rhodium catalyst, which catalyzes the conversion of nitric oxide into dinitrogen and oxygen as well as the conversion of carbon monoxide and hydrocarbons such as octane into carbon dioxide and water vapor:

[latex]2\text{NO}\rightarrow{\text{N}}_{2}+{\text{O}}_{2}[/latex]
[latex]2\text{CO}+{\text{O}}_{2}\rightarrow 2{\text{CO}}_{2}[/latex]
[latex]2{\text{C}}_{8}{\text{H}}_{18}+25{\text{O}}_{2}\rightarrow 16{\text{CO}}_{2}+18{\text{H}}_{2}\text{O}[/latex]

In order to be as efficient as possible, most catalytic converters are preheated by an electric heater. This ensures that the metals in the catalyst are fully active even before the automobile exhaust is hot enough to maintain appropriate reaction temperatures.

Enzyme Structure and Function

The study of enzymes is an important interconnection between biology and chemistry. Enzymes are usually proteins (polypeptides) that help to control the rate of chemical reactions between biologically important compounds, particularly those that are involved in cellular metabolism. Different classes of enzymes perform a variety of functions, as shown in Table 1.
Table 1. Classes of Enzymes and Their Functions

Class            Function
oxidoreductases  redox reactions
transferases     transfer of functional groups
hydrolases       hydrolysis reactions
lyases           group elimination to form double bonds
isomerases       isomerization
ligases          bond formation with ATP hydrolysis

Enzyme molecules possess an active site, a part of the molecule with a shape that allows it to bond to a specific substrate (a reactant molecule), forming an enzyme-substrate complex as a reaction intermediate. There are two models that attempt to explain how this active site works. The most simplistic model is referred to as the lock-and-key hypothesis, which suggests that the molecular shapes of the active site and substrate are complementary, fitting together like a key in a lock. The induced fit hypothesis, on the other hand, suggests that the enzyme molecule is flexible and changes shape to accommodate a bond with the substrate. This is not to suggest that an enzyme's active site is completely malleable, however. Both the lock-and-key model and the induced fit model account for the fact that enzymes can only bind with specific substrates, since in general a particular enzyme only catalyzes a particular reaction (Figure 8).

Key Concepts and Summary

Catalysts affect the rate of a chemical reaction by altering its mechanism to provide a lower activation energy. Catalysts can be homogeneous (in the same phase as the reactants) or heterogeneous (a different phase than the reactants).

Chemistry End of Chapter Exercises

1. Account for the increase in reaction rate brought about by a catalyst.

2. Compare the functions of homogeneous and heterogeneous catalysts.

3. Consider this scenario and answer the following questions: Chlorine atoms resulting from decomposition of chlorofluoromethanes, such as CCl2F2, catalyze the decomposition of ozone in the atmosphere. One simplified mechanism for the decomposition is: [latex]\begin{array}{l}{\text{O}}_{3}\stackrel{\text{sunlight}}{\to }{\text{O}}_{2}+\text{O}\\ {\text{O}}_{3}+\text{Cl}\rightarrow{\text{O}}_{2}+\text{ClO}\\ \text{ClO}+\text{O}\rightarrow\text{Cl}+{\text{O}}_{2}\end{array}[/latex] (a) Explain why chlorine atoms are catalysts in the gas-phase transformation: [latex]2{\text{O}}_{3}\rightarrow 3{\text{O}}_{2}[/latex] (b) Nitric oxide is also involved in the decomposition of ozone by the mechanism: [latex]\begin{array}{l}{\text{O}}_{3}\stackrel{\text{sunlight}}{\to }{\text{O}}_{2}+\text{O}\\ {\text{O}}_{3}+\text{NO}\rightarrow{\text{NO}}_{2}+{\text{O}}_{2}\\ {\text{NO}}_{2}+\text{O}\rightarrow\text{NO}+{\text{O}}_{2}\end{array}[/latex] Is NO a catalyst for the decomposition? Explain your answer.

4. For each of the following pairs of reaction diagrams, identify which of the pair is catalyzed:

5. For each of the following pairs of reaction diagrams, identify which of the pairs is catalyzed:

6. For each of the following reaction diagrams, estimate the activation energy (E_a) of the reaction:

7. For each of the following reaction diagrams, estimate the activation energy (E_a) of the reaction:

8. Based on the diagrams in question 6, which of the reactions has the fastest rate? Which has the slowest rate?

9. Based on the diagrams in question 7, which of the reactions has the fastest rate? Which has the slowest rate?

Selected Answers

1. The general mode of action for a catalyst is to provide a mechanism by which the reactants can unite more readily by taking a path with a lower reaction energy.
The rates of both the forward and the reverse reactions are increased, leading to a faster achievement of equilibrium.

3. (a) Chlorine atoms are a catalyst because they react in the second step but are regenerated in the third step. Thus, they are not used up, which is a characteristic of catalysts. (b) NO is a catalyst for the same reason as in part (a).

5. The lowering of the transition state energy indicates the effect of a catalyst. (a) A; (b) B

7. The energy needed to go from the initial state to the transition state is (a) 10 kJ; (b) 10 kJ.

9. The smaller the activation energy, the faster the reaction. In this case, both have the same activation energy, so they would have the same rate.

Glossary

heterogeneous catalyst: catalyst present in a different phase from the reactants, furnishing a surface at which a reaction can occur

homogeneous catalyst: catalyst present in the same phase as the reactants

"The Nobel Prize in Chemistry 1995," NobelPrize.org, accessed February 18, 2015, http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1995/.
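As a numerical footnote to the Arrhenius point made earlier in this section, here is a small script (with invented numbers, purely for illustration) showing how strongly a lowered activation energy accelerates a reaction at room temperature:

import math

R = 8.314   # J mol^-1 K^-1, gas constant
T = 298.0   # K

def rate_constant(A, Ea):
    # Arrhenius equation: k = A * exp(-Ea / (R T))
    return A * math.exp(-Ea / (R * T))

k_uncat = rate_constant(1e13, 100e3)  # Ea = 100 kJ/mol (assumed)
k_cat = rate_constant(1e13, 70e3)     # Ea lowered to 70 kJ/mol (assumed)
print(f"rate enhancement: {k_cat / k_uncat:.1e}")  # ~1.8e5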
I want to know the heat conduction coefficient of a cup in order to investigate why tea cools at different rates in different containers. I know the cup is mostly plastic and paper. How do I determine the heat conduction? Do I use a known value?

If you are interested in knowing why your tea cools at different rates in different containers, you have already answered the question yourself: it is due to the different thermal conductivities of the containers you store it in. $\dot{Q} = A U \Delta T \tag{1}$ is the heat flux you want to know. This gives you a relationship between the heat that is transferred to the environment and its dependence on the type of your container. So you need to know the thermal transmittance $U$. If you are interested in calculating specific values for different items, the following approach should do:

1. Gather a list of all necessary heat transfer coefficients $\alpha_{ij}$ and thermal conductivity coefficients $\lambda_i$. In my opinion there is no need to calculate or measure these yourself; that's why books with tables of different coefficients exist.
2. Create a simple but accurate enough model of your container. A cup can be regarded as a hollow cylinder, for example. Make a sketch of it!
3. Calculate the thermal transmittance $U$. Notice that different regions of your cup will have different heat transfer and thermal conductivity coefficients, for example the part of a coffee cup that is covered with a plastic lid versus the lower part of the cup with no contact with the lid. Add them with the formulas for series and parallel (2). The attached image gives an example.
4. Calculate the overall heat flux. A higher heat flux means a quicker cooling time.

$$U=\frac{1}{R_{overall}}$$ $$\begin{array}{ll} R_{overall} = \sum_{i=1}^n R_i & \text{in series} \\ \dfrac{1}{R_{overall}} = \sum_{i=1}^n \dfrac{1}{R_i} & \text{in parallel} \end{array} \tag{2}$$ $R_i$ are the resistances for heat transfer and conductivity. $R_{\lambda} = \frac{l}{\lambda A}$ $R_{\alpha} = \frac{1}{\alpha A}$ $l \equiv \text{length}$ $A \equiv \text{cross-sectional area}$
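Here is a minimal numerical sketch of steps 3 and 4 for a flat-wall approximation of a paper cup. Every coefficient below is an assumed, order-of-magnitude value (only the paper conductivity is a typical tabulated figure), so treat the output as illustrative:

A = 0.03           # m^2, lateral area of the cup (assumed)
alpha_in = 500.0   # W/(m^2 K), liquid-to-wall heat transfer (assumed)
alpha_out = 10.0   # W/(m^2 K), wall-to-still-air heat transfer (assumed)
l_wall = 0.4e-3    # m, wall thickness (assumed)
lam_wall = 0.12    # W/(m K), thermal conductivity of paper (tabulated, roughly)

# R_alpha = 1/(alpha A) and R_lambda = l/(lambda A), added in series:
R_total = 1/(alpha_in*A) + l_wall/(lam_wall*A) + 1/(alpha_out*A)
U = 1/(R_total*A)          # thermal transmittance
Q = A * U * (80 - 20)      # equation (1): tea at 80 C, room at 20 C
print(f"U = {U:.1f} W/(m^2 K), Q = {Q:.1f} W")

With these numbers the outer air film dominates the total resistance, which is why the wall material matters less than one might expect for a cup standing in still air.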
Suppose you want to buy some candy at a store. You look at your pocket and find you have (in your local currency) an amount of money equal to $i^i$ ($i$ here is really the complex number $i$). You choose a snack and check its price: it costs $0.10$. Do you have money to buy it? If $i^i$ looks absurd to you, you might be impressed by the fact that there is a thing such as complex exponentiation. It turns out that complex exponentiation is extremely important in mathematics, in the natural sciences and in engineering. Being myself a physicist, I dealt with complex exponentiation countless times when learning topics such as waves, circuits, fluids, classical mechanics, quantum mechanics and many others. But back to the question: how do we compute $i^i$? The answer starts with the definition of the exponential function for a pure imaginary number $i\theta$, where $\theta$ is a real number: $$ \boxed{ e^{i\theta} := \cos\theta + i\sin\theta } \label{post_827f3801cb1a0f2b336b0fc67f9e1abd_euler_eq} $$ Equation \eqref{post_827f3801cb1a0f2b336b0fc67f9e1abd_euler_eq} is called Euler's formula. If you are wondering what could motivate such a strange definition, consider the Taylor series for $e^x$, where $x$ is a real number: $$ e^x = 1 + \displaystyle\frac{x}{1!} + \displaystyle\frac{x^2}{2!} + \displaystyle\frac{x^3}{3!} + \displaystyle\frac{x^4}{4!} + \ldots = \sum_{n=0}^{\infty} \displaystyle\frac{x^n}{n!} \label{post_827f3801cb1a0f2b336b0fc67f9e1abd_exp_real_def} $$ What would happen if we lost our senses and decided to replace $x$ with $i\theta$ in equation \eqref{post_827f3801cb1a0f2b336b0fc67f9e1abd_exp_real_def}? Well, we would obtain: $$ \begin{eqnarray} e^{i\theta} & = & 1 + \displaystyle\frac{(i\theta)}{1!} + \displaystyle\frac{(i\theta)^2}{2!} + \displaystyle\frac{(i\theta)^3}{3!} + \displaystyle\frac{(i\theta)^4}{4!} + \displaystyle\frac{(i\theta)^5}{5!} + \ldots \nonumber\\[5pt] & = & 1 + i\theta - \displaystyle\frac{\theta^2}{2!} - i\displaystyle\frac{\theta^3}{3!} + \displaystyle\frac{\theta^4}{4!} + i\displaystyle\frac{\theta^5}{5!} + \ldots \nonumber\\[5pt] & = & \left(1 - \displaystyle\frac{\theta^2}{2!} + \displaystyle\frac{\theta^4}{4!} + \ldots\right) + i\left(\theta - \displaystyle\frac{\theta^3}{3!} + \displaystyle\frac{\theta^5}{5!} + \ldots\right) \nonumber\\[5pt] & = & \cos\theta + i\sin\theta \end{eqnarray} $$ where above we used the Taylor series for both $\cos\theta$ and $\sin\theta$. This is exactly what equation \eqref{post_827f3801cb1a0f2b336b0fc67f9e1abd_euler_eq} states! In general, since any complex number $z$ is such that $z = a + ib$ with both $a$ and $b$ being real numbers, then we can compute $e^{a+ib}$ through: $$ \boxed{ e^z = e^{a+ib} := e^ae^{ib} = e^a(\cos{b} + i\sin{b}) } $$ Now we can define the logarithm of a complex number. Since every nonzero complex number $z$ can be uniquely written in the form $z = |z|(\cos\theta + i\sin\theta)$ for some $\theta$ such that $0 \leq \theta \lt 2\pi$, then we can write: $$ z = |z|(\cos\theta + i\sin\theta) = |z|e^{i\theta} = e^{\log|z|}e^{i\theta} = e^{\log|z| + i\theta} $$ This leads us to define the logarithm of a complex number $z \neq 0$ as: $$ \boxed{ \log{z} = \log(e^{\log|z| + i\theta}) := \log|z| + i\theta } $$ The value of $\log{z}$ is uniquely defined provided that we enforce $0 \leq \theta \lt 2\pi$.
This restriction is important since in $z = |z|(\cos\theta + i\sin\theta)$ there are infinitely many possible values of $\theta$, as $$ \cos(\theta + 2\pi n) = \cos\theta,\quad \sin(\theta + 2\pi n) = \sin\theta $$ for any integer $n$. As an example, let's compute $\log i$. Since $$ i = 0 + i = \cos(\pi/2) + i\sin(\pi/2) $$ then: $$ \log i = \log|i| + i\pi/2 = \log 1 + i\pi/2 = i\pi/2 \label{post_827f3801cb1a0f2b336b0fc67f9e1abd_log_i} $$ The last ingredient we need is how to compute $z^w$ for two complex numbers $z$ and $w$ with $z \neq 0$. This can be done according to the following definition: $$ \boxed{ z^w := e^{w\log z} } $$ which is motivated by the fact that for real numbers $x \gt 0$ and $y \neq 0$ we have $x^y = e^{\log x^y} = e^{y\log x}$. If you have endured all of this, you are probably eager to know what the value of $i^i$ is. So let's compute it: $$ \boxed{ i^i = e^{i\log i} = e^{i(i\pi/2)} = e^{-\pi/2} \approx 0.2078 } $$ where equation \eqref{post_827f3801cb1a0f2b336b0fc67f9e1abd_log_i} was used. Interestingly, $i^i$ is actually a real number! Well, there you have it. You can now go ahead and buy your candy. You deserve it! :-)
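One can confirm the final result directly in any language with complex arithmetic. Python's built-in complex power uses the principal logarithm, which also gives $\log i = i\pi/2$, so (a quick check of mine):

import cmath

print(1j ** 1j)                  # (0.20787957635076193+0j)
print(cmath.exp(-cmath.pi / 2))  # 0.20787957635076193, i.e. e^(-pi/2)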
I am not sure that it is exactly what you need, but the following is true: for an alphabet with $q\geq 4$ letters and a sequence of forbidden words with lengths $n_1<n_2<\dots$ there exists an infinite word without forbidden subwords (where a subword of a word W is a segment of consecutive letters in W, like "hab" is a subword of "alphabet"). Proof. Choose $c=2$ and prove that for the number $f(n)$ of permitted words of length $n$ (i.e. words without forbidden subwords) we have $f(n)\geq cf(n-1)$. Induction on $n$. The base $n=1$ holds ($f(0)=1$, $f(1)\geq q-1$). We have $f(n)\geq qf(n-1)-\sum_i f(n-n_i)$ (take any permitted word with $n-1$ letters and add an arbitrary letter; if the new word is not permitted, then it ends with some forbidden subword). By the induction hypothesis we have $f(n-n_i)\leq c^{1-n_i}f(n-1)$, thus $f(n)\geq \left(q-\sum_i c^{1-n_i}\right)f(n-1)\geq cf(n-1)$ as desired, since the lengths $n_i$ are distinct and so $\sum_i 2^{1-n_i}\leq 2$. Now we have arbitrarily long permitted words. It follows that there exists an infinite permitted word (proof: choose letters $x_1,x_2,\dots,x_n$ so that the word $x_1\dots x_n$ has arbitrarily long permitted continuations; we may always proceed). The above argument works for $q=2$ and $q=3$ under stronger assumptions on $(n_i)$.
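For readers who like to experiment, here is a small brute-force check of the inequality $f(n)\geq 2f(n-1)$ in Python; the alphabet and the forbidden words below are my own toy choices, not part of the argument:

from itertools import product

alphabet = "abcd"                  # q = 4
forbidden = ["a", "bb", "cdc"]     # distinct lengths n_1 < n_2 < n_3

def f(n):
    # number of permitted words of length n (no forbidden subword)
    return sum(
        1
        for w in product(alphabet, repeat=n)
        if not any("".join(w[i:i + len(F)]) == F
                   for F in forbidden
                   for i in range(n - len(F) + 1))
    )

prev = f(0)  # f(0) = 1, the empty word
for n in range(1, 8):
    cur = f(n)
    assert cur >= 2 * prev, (n, cur, prev)
    prev = cur
print("f(n) >= 2 f(n-1) holds for n = 1..7")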
I just asked this question concerning the application of Noether's theorem. Thinking about this got me wondering about the following. In the usual derivation of the Noether current the assumption is made that: $$\mathcal{L}(\phi'(x'),\partial_\mu'\phi'(x'),x')=\mathcal{L}(\phi(x),\partial_\mu\phi(x),x)+\delta x^\mu\partial_\mu\mathcal{L}(\phi(x),\partial_\mu\phi(x),x).\tag{1}$$ This is usually shown by considering the Lagrangian to be a function of $x$ only; then the statement that: $$\mathcal{L}(x')=\mathcal{L}(x)+\delta x^\mu\partial_\mu\mathcal{L}(x)\tag{2}$$ does indeed hold true by trivial Taylor expansion. But as far as I can tell this derivation is making the assumption that: $$\phi'(x')=\phi(x').\tag{3}$$ I have seen (1) used in cases where this is not the case. Thus, please can someone explain why (1) holds for a general mapping $\phi(x) \mapsto \phi'(x')$?
Does anybody know why an offset is used in Poisson regression? What do you achieve by this? Here is an example of an application. Poisson regression is typically used to model count data. But sometimes it is more relevant to model rates instead of counts. This is relevant when, e.g., individuals are not followed for the same amount of time. For example, six cases over 1 year should not amount to the same as six cases over 10 years. So, instead of having $\log \mu_x = \beta_0 + \beta_1 x$ (where $\mu_x$ is the expected count for those with covariate $x$), you have $\log \tfrac{\mu_x}{t_x} = \beta'_0 + \beta'_1 x$ (where $t_x$ is the exposure time for those with covariate $x$). Now, the last equation can be rewritten as $\log \mu_x = \log t_x + \beta'_0 + \beta'_1 x$, and $\log t_x$ plays the role of an offset.
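To make this concrete, here is a minimal sketch in Python with statsmodels (the data are simulated, and the numbers are my own illustration): passing $\log t_x$ as the offset fixes its coefficient at 1, exactly as in the last equation.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
t = rng.uniform(1, 10, size=n)         # exposure times t_x
mu = t * np.exp(0.3 + 0.5 * x)         # mu_x = t_x * exp(beta0 + beta1 x)
y = rng.poisson(mu)

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(t)).fit()
print(fit.params)                      # estimates close to (0.3, 0.5)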
A professor once asked us in class: for the Hydrogen atom, what is the probability of finding the electron inside the nucleus? The radius of a proton (the nucleus of a Hydrogen atom) is $r_0 \approx 10^{-15}\textrm{m}$. The answer to the question will be hard to find if one tries to compute it analytically by integrating the probability density function $|\psi({\bf x})|^2$ over the volume of the proton: $$ P = \int_{\|{\bf x}\| \leq r_0} |\psi({\bf x})|^2 d{\bf x} \label{post_ec0bf1e2547df878846dd841b8d657b2_eq_prob_r0} $$ where $\|{\bf x}\| \leq r_0$ refers to the fact that the integral should be computed over a sphere of radius $r_0$ centered at the origin (where the center of the proton is assumed to be). Instead, as my professor told us, a much better idea is to compute $P$ "as Fermi would". By that, he meant the problem can be easily solved using a very good approximation: assume that $|\psi({\bf x})|$ is constant over the extension of the nucleus. This is indeed an excellent approximation which turns the computation of the desired probability into a trivial task: $$ \boxed{ \displaystyle P \approx |\psi({\bf 0})|^2 \int_{\|{\bf x}\| \leq r_0} d{\bf x} = \frac{4\pi r_0^3}{3}|\psi({\bf 0})|^2 } \label{post_ec0bf1e2547df878846dd841b8d657b2_eq_prob_approx1} $$ For any state for which the wavefunction $\psi({\bf x})$ is known and varies little within the extension of the nucleus (this is typically true), one can use equation \eqref{post_ec0bf1e2547df878846dd841b8d657b2_eq_prob_approx1} and quickly obtain a very good approximation for $P$. Consider, for instance, the eigenfunctions of the Hydrogen atom. They are specified by three quantum numbers ($n,l,m$) and always have the form: $$ \psi_{nlm}(r,\theta,\phi) = R_{nl}(r)Y^m_l(\theta,\phi) $$ where $Y^m_l(\theta,\phi)$ is the normalized spherical harmonic of degree $l$ and order $m$. The function $R_{nl}(r)$ is a real-valued function of $r$ which has the form (see [1], equations 4.73 - 4.75): $$ R_{nl}(r) = \displaystyle\frac{1}{r}\rho^{l+1} e^{-\rho} v(\rho) \label{post_ec0bf1e2547df878846dd841b8d657b2_eq_R} $$ where $\rho = r / (a_0 n)$, $v(\rho)$ is a polynomial function and $$ a_0 = \displaystyle\frac{4\pi \epsilon_0 \hbar^2}{me^2} = 0.529 \times 10^{-10}\textrm{m} $$ is the Bohr radius ($m$ is the mass of the electron). Equation \eqref{post_ec0bf1e2547df878846dd841b8d657b2_eq_R} implies $R_{nl}(r) \propto r^l$, so for every $l \gt 0$, $R_{nl}(0) = 0$. This means: $$ \psi_{nlm}(0,\theta,\phi) = \psi({\bf 0}) = 0 $$ whenever $l \neq 0$. In other words, writing $P_{nlm}$ for the value of $P$ for the eigenstate specified by the quantum numbers $(n,l,m)$, our approximation yields: $$ P_{nlm} = 0 \quad \textrm{if} \quad l \neq 0 $$ For the Hydrogen atom, the probability of finding the electron inside the nucleus can then only be nonzero if $l = 0$. 
Since $m = 0$ is the only allowed value of $m$ in this case ($m$ can take the values $-l,\ldots,-1,0,1,\ldots,l$), the wave functions we need to consider will have the form (see [1], table 4.2): $$ \psi_{n00}(r,\theta,\phi) = R_{n0}(r)Y^0_0(\theta,\phi) = R_{n0}(r) \displaystyle\frac{1}{\sqrt{4\pi}} \label{post_ec0bf1e2547df878846dd841b8d657b2_eq_psi_n00} $$ These are the equations for $\psi_{n00}(r,\theta,\phi)$ for $n = 1,2,3$ (obtained from [1], table 4.6, and equation \eqref{post_ec0bf1e2547df878846dd841b8d657b2_eq_psi_n00}): $$ \begin{eqnarray} \psi_{100}(r,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{\pi a_0^3}}e^{-r/a_0} \nonumber\\[5pt] \psi_{200}(r,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{8\pi a_0^3}} \left(1 - \frac{1}{2}\frac{r}{a_0} \right)e^{-r/2a_0} \nonumber\\[5pt] \psi_{300}(r,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{27\pi a_0^3}} \left[1 - \frac{2}{3}\frac{r}{a_0} + \frac{2}{27}\left(\frac{r}{a_0}\right)^2\right]e^{-r/3a_0} \end{eqnarray} $$ For $r = 0$, we obtain: $$ \begin{eqnarray} \psi_{100}(0,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{\pi a_0^3}} \nonumber\\[5pt] \psi_{200}(0,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{8\pi a_0^3}} \nonumber\\[5pt] \psi_{300}(0,\theta,\phi) &=& \displaystyle\frac{1}{\sqrt{27\pi a_0^3}} \label{post_ec0bf1e2547df878846dd841b8d657b2_eq_psi_0} \end{eqnarray} $$ We can now use the values of equation \eqref{post_ec0bf1e2547df878846dd841b8d657b2_eq_psi_0} in equation \eqref{post_ec0bf1e2547df878846dd841b8d657b2_eq_prob_approx1} to obtain the probability of finding the electron inside the nucleus for the states $n = 1,2,3$ with $l = m = 0$: $$ \begin{eqnarray} P_{100} &=& \displaystyle\frac{4\pi r_0^3}{3} \frac{1}{\pi a_0^3} = \displaystyle\frac{4}{3}\left(\frac{r_0}{a_0}\right)^3 \approx 9 \times 10^{-15} \nonumber\\[5pt] P_{200} &=& \displaystyle\frac{4\pi r_0^3}{3} \frac{1}{8 \pi a_0^3} = \displaystyle\frac{1}{6}\left(\frac{r_0}{a_0}\right)^3 \approx 1.1 \times 10^{-15} \nonumber\\[5pt] P_{300} &=& \displaystyle\frac{4\pi r_0^3}{3} \frac{1}{27 \pi a_0^3} = \displaystyle\frac{4}{81}\left(\frac{r_0}{a_0}\right)^3 \approx 3.3 \times 10^{-16} \end{eqnarray} $$ Therefore, for a Hydrogen atom in the ground state, the probability of finding the electron inside the nucleus is extremely small. The probability gets even smaller for higher energy states. References [1] D. Griffiths, Introduction to Quantum Mechanics, Prentice Hall; 1st edition (1994)
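As an appendix to the above, here is a quick numerical cross-check (my own, using scipy): integrating $|\psi_{100}|^2$ exactly over the nucleus agrees with the boxed approximation to high accuracy, as expected since the wavefunction barely varies over $r \leq r_0$.

import numpy as np
from scipy.integrate import quad

a0 = 0.529e-10   # Bohr radius (m)
r0 = 1e-15       # proton radius (m)

def integrand(r):
    psi2 = np.exp(-2 * r / a0) / (np.pi * a0**3)   # |psi_100|^2
    return 4 * np.pi * r**2 * psi2

P_exact, _ = quad(integrand, 0, r0)
P_approx = (4 / 3) * (r0 / a0)**3
print(P_exact, P_approx)   # both approximately 9e-15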
I just watched the Blender Guru video How to Make a Beer in Blender and stopped at the point where it advises us to make the liquid slightly larger in diameter than the inner diameter of the glass, approximately half way between the inner and outer walls. I then turned to Fluid in a Glass (and why you've been doing it wrong your whole life), to which the first video points. Although the wording there isn't exactly correct, I think that I understand the point. In the glass shader where you specify IOR, you are not actually specifying the index of refraction of the material inside the volume defined by the mesh. You are actually only specifying the ratio of the indices of refraction of the spaces on either side of the mesh. And that's all that Snell's law actually needs when applied to a single interface: $$ n_1 \sin(\theta_1) = n_2 \sin(\theta_2) $$ $$ \sin(\theta_1) = \frac{n_2}{n_1} \sin(\theta_2) $$ $$ \theta_1 = \sin^{-1}\left(\frac{n_2}{n_1} \sin(\theta_2)\right) $$ The problem is that in the physical world, the interface between liquid and glass has a very small index difference; the ratio is nearly 1.0. This is why some transparent objects can seem to almost completely disappear when submerged in water. But it seems to me that the workaround of embedding the liquid into the glass is not going to have the desired effect, if the desired effect is to get closer to a realistic image. It creates two interfaces, and each has a large ratio of indices of refraction of something like 1.3 or 1.4, or 1/1.3 and 1/1.4, depending on the directions of the normals. Question: Wouldn't embedding the liquid inside the glass wall as described in the video produce physically wrong and unrealistic refraction? Since there is just one physical interface, using two meshes that just touch or overlap seems like it's just asking for trouble. From a rendering point of view, meshes are interfaces between physical materials; even though we say that we assign "materials" to them, we're actually assigning surface characteristics. edit: I've just found an excellent explanation here; yes, the IOR < 1 technique should be correct, with a single mesh used for the glass-liquid interface. I'll use indices of refraction of 1.4 for glass and 1.3 for liquid in the following discussion, just to make it simpler. Wouldn't a more realistic method be to just use a single mesh for the boundary between glass and liquid, and choose IOR = 0.93 with the normals pointing out, or IOR = 1.08 with the normals pointing in? That doesn't mean that the actual index of refraction is 0.93; it just means that when rays pass between the space inside the glass mesh and the space inside the liquid mesh they will be refracted and (Fresnel) reflected based on the correct physics for a single interface between materials with indices of refraction of 1.4 and 1.3. note: This would require the top of the liquid to have a different mesh, or at least a different material, with the IOR set to 1.3 for the correct liquid-air interface behavior. Or you could just put foam on top.
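A quick numerical illustration of the point (my own sketch, with the assumed indices 1.4 and 1.3): the refraction angle across a single interface depends only on the ratio of the two indices, so a relative IOR of $1.3/1.4 \approx 0.93$ reproduces the physical glass-to-liquid bending.

import math

n_glass, n_liquid = 1.4, 1.3

def refract(theta1_deg, n1, n2):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2); assumes the chosen
    # angles do not produce total internal reflection
    s = (n1 / n2) * math.sin(math.radians(theta1_deg))
    return math.degrees(math.asin(s))

# physical glass -> liquid interface:
print(refract(30.0, n_glass, n_liquid))          # ~32.6 degrees
# single mesh with relative IOR n_liquid/n_glass ~ 0.93, "air" on both sides:
print(refract(30.0, 1.0, n_liquid / n_glass))    # same angle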
I am given a matrix $A = (a,b;c,d)$ in $GL(2,\mathbb{C})$ and a real algebra, say $V$, with basis $X,Y,Z$ such that $[X,Y]=0$, $[X,Z]=aX+bY$, $[Y,Z]=cX+dY$. I have to show that $V$ is a real Lie algebra. My attempt: it's a vector space over $\mathbb{R}$ (duh!). I think we first need to find $[X,X], [Y,Y], [Z,Z]$, which I don't know how to do, and then use them to show bilinearity. Similarly, first somehow find $[Y,X], [Z,Y], [Z,X]$ to verify antisymmetry. Assuming both bilinearity and antisymmetry, it's sufficient to verify the Jacobi identity for elements $\alpha, \beta, \gamma$ where $\alpha, \beta, \gamma \in \{X,Y,Z\}$. Consider the three cases: all three elements are different from one another, all are the same, or two of them are the same and one is different from the other two. Now, using antisymmetry and the above computed values of the Lie bracket helps us verify the Jacobi identity. But as trivial as it seems, I don't have a clue about how to show bilinearity and compute the Lie brackets $[X,X],\ldots$ I'd appreciate any hints. Please do not post a solution. Thank you very much!
The lower attic
From Cantor's Attic
Welcome to the lower attic, where the countably infinite ordinals climb ever higher, one upon another, in an eternal self-similar reflecting ascent.
$\omega_1$, the first uncountable ordinal, and the other uncountable cardinals of the middle attic
stable ordinals
the ordinals of infinite time Turing machines, including admissible ordinals and relativized Church-Kleene $\omega_1^x$
Church-Kleene $\omega_1^{ck}$, the supremum of the computable ordinals
the omega one of chess: $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ = the supremum of the game values for white of all positions in infinite chess; $\omega_1^{\mathfrak{Ch},c}$ = the supremum of the game values for white of the computable positions in infinite chess; $\omega_1^{\mathfrak{Ch}}$ = the supremum of the game values for white of the finite positions in infinite chess
the Takeuti-Feferman-Buchholz ordinal
the Bachmann-Howard ordinal
the large Veblen ordinal
the small Veblen ordinal
the Extended Veblen function
the Feferman-Schütte ordinal $\Gamma_0$
$\epsilon_0$ and the hierarchy of $\epsilon_\alpha$ numbers
indecomposable ordinal
the small countable ordinals, such as $\omega,\omega+1,\ldots,\omega\cdot 2,\ldots,\omega^2,\ldots,\omega^\omega,\ldots,\omega^{\omega^\omega},\ldots$ up to $\epsilon_0$
Hilbert's hotel and other toys in the playroom
$\omega$, the smallest infinity
down to the parlour, where large finite numbers dream
Though frameworks like TensorFlow and PyTorch have done the heavy lifting of implementing gradient descent, it helps to understand the nuts and bolts of how it works. After all, a neural network is pretty much a series of derivative functions. In this blog post, let's look at getting the gradient of the loss function used in multi-class logistic regression. The softmax function is used to describe the posterior conditional probability \(P(y^{(i)}|x^{(i)}, w)\). Using the maximum likelihood approach, the loss function is derived as: \[ J_{(w)} = - \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} y_{k}^{(i)} log s_{k}^{(i)} \] To get the gradient, the trick is to take the partial derivative of \(J_{(w)}\) with respect to \(w_{jz}\), which is the weight connecting input \(x_{j}\) and output class \(z\): \[ \begin{align} \frac{\partial J_{(w)}}{\partial w_{jz}} & = - \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} y_{k}^{(i)} \frac{\partial log s_{k}^{(i)}}{\partial w_{jz}} \\ & = - \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \frac{y_{k}^{(i)}}{s_{k}^{(i)}} \frac{\partial s_{k}^{(i)}}{\partial w_{jz}} \end{align} \] With \( s_{k}^{(i)} = \frac{e^{w_{k}^{T}x^{(i)}}}{\sum_{j=1}^{K} e^{w_{j}^{T}x^{(i)}}} \), we can expand the partial derivative of \( s_{k}^{(i)} \) as below: \[ \frac{\partial s_{k}^{(i)}}{\partial w_{jz}} = \begin{cases} x_{j}^{(i)}s_{k}^{(i)}(1 - s_{z}^{(i)}), \text{if } k=z \\ (-x_{j}^{(i)}s_{k}^{(i)}s_{z}^{(i)}), \text{if } k \neq z \end{cases}\] Putting the pieces together, we have: \[ \begin{align}\frac{\partial J_{(w)}}{\partial w_{jz}} & = - \frac{1}{m} \sum_{i=1}^{m} (y_{z}^{(i)}x_{j}^{(i)}(1 - s_{z}^{(i)}) - \sum_{k \neq z}^{K}y_{k}^{(i)}x_{j}^{(i)}s_{z}^{(i)}) \\ & = - \frac{1}{m} \sum_{i=1}^{m} (y_{z}^{(i)}x_{j}^{(i)} - s_{z}^{(i)}x_{j}^{(i)}\sum_{k=1}^{K}y_{k}^{(i)}) \end{align} \] Since each example belongs to a single class, only one \(y_{k}\) is 1 for an input i, hence \(\sum_{k=1}^{K}y_{k}^{(i)} = 1 \). \[\frac{\partial J_{(w)}}{\partial w_{jz}} = - \frac{1}{m} \sum_{i=1}^{m} x_{j}^{(i)} (y_{z}^{(i)} - s_{z}^{(i)}) \] Finally, we have the numpy-friendly vector form: \[\frac{\partial J_{(w)}}{\partial w} = - \frac{1}{m} x^{T} (y - s) \]
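Below is a short numpy sketch of the final vector form (my own illustration; shapes follow the derivation: x is (m, d), y is one-hot (m, K), w is (d, K)), together with a finite-difference check of one weight:

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(w, x, y):
    m = x.shape[0]
    s = softmax(x @ w)                      # (m, K)
    loss = -np.sum(y * np.log(s)) / m       # J(w)
    grad = -(x.T @ (y - s)) / m             # (d, K), the vector form above
    return loss, grad

# finite-difference check of a single weight
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
y = np.eye(4)[rng.integers(0, 4, 50)]
w = rng.normal(size=(3, 4))
eps = 1e-6
_, g = loss_and_grad(w, x, y)
wp = w.copy(); wp[1, 2] += eps
num = (loss_and_grad(wp, x, y)[0] - loss_and_grad(w, x, y)[0]) / eps
print(g[1, 2], num)   # the two numbers should agree to several digits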
In this post, we will derive the components of a rotation matrix in three dimensions. Our derivation favors geometrical arguments over a purely algebraic approach and therefore requires only basic knowledge of analytic geometry. Given a vector ${\bf x} = (x,y,z)$, our goal is to rotate it by an angle $\theta \gt 0$ around a fixed axis represented by a unit vector $\hat{\bf n} = (n_x, n_y, n_z)$; we call ${\bf x}'$ the result of rotating ${\bf x}$ around $\hat{\bf n}$. The rotation is such that if we look into $\hat{\bf n}$, the vector ${\bf x}$ will be rotated along the counter-clockwise direction (see figure 1). Fig. 1: The vector ${\bf x}$ is rotated by an angle $\theta$ around $\hat{\bf n}$. Figure (a) shows the components ${\bf x}_{\parallel}$ and ${\bf x}_{\perp}$ of ${\bf x}$ which are parallel and perpendicular to $\hat{\bf n}$ respectively. Figure (b) shows the rotation as seen from top to bottom, i.e., from the perspective of an observer looking into the head of $\hat{\bf n}$: ${\bf x}_{\parallel}$ remains unchanged after the rotation; it is only ${\bf x}_{\perp}$ which changes. The unit vector $\hat{\bf q}$ is parallel to $\hat{\bf n} \times {\bf x} = \hat{\bf n} \times {\bf x}_{\perp}$. The rotation is in the counterclockwise direction for $\theta \gt 0$. Even though we already anticipated the fact that the transformation which rotates ${\bf x}$ into ${\bf x}'$ can be represented as a matrix, we will prove this explicitly by showing that ${\bf x}' = R(\hat{\bf n},\theta){\bf x}$ for a $3 \times 3$ matrix $R(\hat{\bf n},\theta)$ whose components depend only on $\hat{\bf n}$ and $\theta$. As a first step, notice that ${\bf x}$ can be decomposed into two components ${\bf x}_{\parallel}$ and ${\bf x}_{\perp}$ which are parallel and perpendicular to $\hat{\bf n}$ respectively as shown in figure 1a. This means: $$ {\bf x} = {\bf x}_{\parallel} + {\bf x}_{\perp} $$ Since $\hat{\bf n}$ is a unit vector, then: $$ \begin{eqnarray} {\bf x}_{\parallel} &=& (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_parallel} \\[5pt] {\bf x}_{\perp} &=& {\bf x} - {\bf x}_{\parallel} = {\bf x} - (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_perp} \end{eqnarray} $$ When we rotate ${\bf x}$ around $\hat{\bf n}$, its parallel component ${\bf x}_{\parallel}$ remains unchanged; it is only the perpendicular component ${\bf x}_{\perp}$ that actually rotates around $\hat{\bf n}$. This gives us: $$ {\bf x}_{\parallel}' = {\bf x}_{\parallel} = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_parallel} $$ Let us now define a unit vector $\hat{\bf q}$ which is orthogonal to both $\hat{\bf n}$ and ${\bf x}$ as shown in figure 1b. We can do this by computing and normalizing the cross product of $\hat{\bf n}$ and ${\bf x}$ (below, we implicitly assume that $\hat{\bf n}$ and ${\bf x}$ are not parallel to each other, but if they are, we have trivially that ${\bf x}' = {\bf x} = {\bf x}_{\parallel}$; our final results will be compatible with this corner case as well): $$ \displaystyle\hat{\bf q} = \frac{\hat{\bf n} \times {\bf x}}{\|\hat{\bf n} \times {\bf x}\|} \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_def_q} $$ Since a rotation does not change the length of a vector, we have that $\|{\bf x}'\| = \|{\bf x}\|$; in particular, $\|{\bf x}_{\perp}'\| = \|{\bf x}_{\perp}\|$ as shown in figure 1b. 
When we rotate ${\bf x}_{\perp}$ by an angle $\theta$ around $\hat{\bf n}$, a component proportional to $\|{\bf x}_{\perp}\|\cos\theta$ remains parallel to ${\bf x}_{\perp}$, and a component proportional to $\|{\bf x}_{\perp}\|\sin\theta$ which is parallel to $\hat{\bf q}$ is generated. Therefore: $$ {\bf x}_{\perp}' = \cos\theta\,{\bf x}_{\perp} + \|{\bf x}_{\perp}\|\sin\theta\,\hat{\bf q} = \cos\theta\,{\bf x}_{\perp} + \sin\theta\,(\hat{\bf n}\times{\bf x}) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_perp} $$ where above we used the definition of $\hat{\bf q}$ from equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_def_q} as well as the fact that: $$ \|\hat{\bf n} \times {\bf x}\| = \|\hat{\bf n}\|\|{\bf x}\|\sin\alpha = \|{\bf x}_{\perp}\| $$ with $\alpha$ being the angle between $\hat{\bf n}$ and ${\bf x}$ as shown in figure 1a. We can now obtain an expression relating ${\bf x}'$ and ${\bf x}$ in terms of $\hat{\bf n}$ and $\theta$. Since ${\bf x}' = {\bf x}_{\parallel}' + {\bf x}_{\perp}'$, and using equations \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_parallel} and \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_perp}, we obtain: $$ {\bf x}' = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \cos\theta\,{\bf x}_{\perp} + \sin\theta\,(\hat{\bf n}\times{\bf x}) $$ and now using equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_perp}, we get: $$ {\bf x}' = (\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \cos\theta\,({\bf x} - (\hat{\bf n}\cdot{\bf x})\hat{\bf n}) + \sin\theta\,(\hat{\bf n}\times{\bf x}) $$ Rearranging terms, we obtain a very useful expression for computing ${\bf x}'$: $$ \boxed{ {\bf x}' = \cos\theta\,{\bf x} + (1 - \cos\theta)(\hat{\bf n}\cdot{\bf x})\hat{\bf n} + \sin\theta\,(\hat{\bf n}\times{\bf x}) } \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} $$ As we claimed earlier, equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} can be expressed as ${\bf x}' = R(\hat{\bf n},\theta){\bf x}$, where $R(\hat{\bf n},\theta)$ is a $3 \times 3$ matrix. 
To see that this is true, notice that: $$ \cos\theta\,{\bf x} = \cos\theta \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) $$ and that: $$ (\hat{\bf n}\cdot{\bf x})\hat{\bf n} = \left( \begin{matrix} (\hat{\bf n}\cdot{\bf x})n_x \\ (\hat{\bf n}\cdot{\bf x})n_y \\ (\hat{\bf n}\cdot{\bf x})n_z \end{matrix} \right) = \left( \begin{matrix} n_x^2 & n_y n_x & n_z n_x \\ n_x n_y & n_y^2 & n_z n_y \\ n_x n_z & n_y n_z & n_z^2 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_mat1} $$ and that: $$ \hat{\bf n}\times{\bf x} = \left( \begin{matrix} n_y z - n_z y \\ n_z x - n_x z \\ n_x y - n_y x \end{matrix} \right) = \left( \begin{matrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{matrix} \right) \left( \begin{matrix} x \\ y \\ z \end{matrix} \right) \label{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_mat2} $$ Therefore ${\bf x'} = R(\hat{\bf n},\theta){\bf x}$, with: $$ \boxed{ \begin{eqnarray} R(\hat{\bf n},\theta) &=& \cos\theta \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \right) + (1 - \cos\theta) \left( \begin{matrix} n_x^2 & n_y n_x & n_z n_x \\ n_x n_y & n_y^2 & n_z n_y \\ n_x n_z & n_y n_z & n_z^2 \end{matrix} \right) \nonumber \\[5pt] &+& \sin\theta \left( \begin{matrix} 0 & -n_z & n_y \\ n_z & 0 & -n_x \\ -n_y & n_x & 0 \end{matrix} \right) \nonumber \end{eqnarray} } $$ Whenever $\hat{\bf n}$ and ${\bf x}$ are parallel, we have ${\bf x} = {\bf x}_{\parallel}$ and $\hat{\bf n} \times {\bf x} = {\bf 0}$, so equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_prime_x} together with equation \eqref{post_b155574a293a5cbfdd0fbe82a9b8bf28_eq_x_parallel} gives us ${\bf x}' = {\bf x}$, as expected. Additionally, our derivation did not actually rely on the assumption that $\theta \gt 0$, so it is valid for arbitrary values of $\theta$. Finally, notice that by changing $\hat{\bf n} \rightarrow -\hat{\bf n}$ and $\theta \rightarrow -\theta$, $R(\hat{\bf n},\theta)$ does not change, i.e., $R(\hat{\bf n},\theta) = R(-\hat{\bf n},-\theta)$, so we can always convert a rotation with $\theta \lt 0$ into an equivalent one having $\theta \gt 0$ by inverting the direction of $\hat{\bf n}$ and negating $\theta$.
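As a closing illustration (my own sketch, not part of the derivation), the boxed matrix is easy to transcribe into numpy and check on a simple case such as a 90-degree rotation about the z-axis:

import numpy as np

def rotation_matrix(n, theta):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                    # ensure a unit vector
    c, s = np.cos(theta), np.sin(theta)
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])             # the cross-product matrix
    return c * np.eye(3) + (1 - c) * np.outer(n, n) + s * K

n = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
print(rotation_matrix(n, np.pi / 2) @ x)   # expected ~ (0, 1, 0)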
Find $$\int_0^t A(s)ds$$ if $$A(t)=\begin{pmatrix}\sin(t) & \cos(t)\\ -\sin(t) & \cos(t)\end{pmatrix}$$ I'm a little confused with the format of the question because it asks me to integrate with respect to $s$, but the function is of $t$. Do I just integrate $A(t)$ as an indefinite integral, so the answer would be $$\begin{pmatrix}-\cos(t) & \sin(t)\\ \cos(t) & \sin(t)\end{pmatrix}$$ or do I integrate it as a definite integral from 0 to t, so the answer would be $$\begin{pmatrix}-\cos(t)+1 & \sin(t)\\ \cos(t)-1 & \sin(t)\end{pmatrix}$$ Or do I need to do something completely different? Given: $$A(t)=\begin{bmatrix}~~~\sin(t) & ~~\cos(t)\\ -\sin(t) & ~~\cos(t)\end{bmatrix}$$ Find: $$\int_0^t A(s)~ds = \int_0^t \begin{bmatrix}~~~\sin(s)& ~~\cos(s)\\ - \sin(s)&~~ \cos(s)\end{bmatrix}~ds = \begin{bmatrix} 1 - \cos t & ~~\sin t\\ \cos t - 1 & ~~ \sin t\end{bmatrix}$$ You are integrating each matrix element, one at a time, as a function of $s$, and then applying the limits of integration to that result.
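For what it's worth, the computation is easy to confirm with sympy (a minimal sketch):

import sympy as sp

s, t = sp.symbols("s t")
A = sp.Matrix([[sp.sin(s), sp.cos(s)],
               [-sp.sin(s), sp.cos(s)]])
# integrate each element from 0 to t
print(A.applyfunc(lambda f: sp.integrate(f, (s, 0, t))))
# Matrix([[1 - cos(t), sin(t)], [cos(t) - 1, sin(t)]])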
The physics creating eddy currents and EMFs in inductors is the same: Faraday's law of induction. $$ \oint_C \mathbf{E} \cdot d\boldsymbol{\ell} = - \frac{d}{dt} \int_S B_n \, dA $$ The strength of any induced current and voltage depends on: 1) the amount of magnetic flux ($\int_S B_n \, dA$) 2) the rate at which the flux is changing. So for the loop in your first picture, let's assign some dimensions. We'll call the outer radius $R$ and the inner radius $r$, so that the thickness of the ring is $R-r$. For a very small thickness, the flux through the ring will clearly be much greater than the flux through any small loop drawn on the conductor itself at any given time. So the left side of Faraday's law will be much larger for the circuit of the entire loop than for that of a loop drawn on the conductor. Moreover, the left side of the equation indicates an induced voltage, so the current generated is inversely proportional to the resistance. Eddy currents flow primarily on the surface of the conductor (this is why they are often used for non-destructive testing of materials to find cracks on the surface of sheet metal). So the resistance seen by the eddy currents would, I believe, be much larger than the resistance seen by the currents in the ring, as the $A$ in $$ R = \rho L/A $$ would be much larger for the current induced around the loop. In conclusion, both can exist (and can oppose each other), and certainly do in your example; typically the induced current in the loop is just dominant for any well designed inductor. In brakes that utilize the drag induced by eddy currents, one would design for the opposite effect.
Literature on Carbon Nanotube Research I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growth and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate! Contents 1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes 2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis 3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology 4 Ultra-high-yield growth of vertical single-walled carbon nanotubes - Hidden roles of hydrogen and oxygen 5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning 6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation 7 High-Performance Carbon Nanotube Fiber 8 Tensile and Electrical Properties of Carbon Nanotube Yarns and Knitted Tubes in Pure or Composite Form Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes B. G. Demczyk et al., Materials Science and Engineering, A334, 173-178, 2002 The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes. Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004 The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. 
In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,... The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows the yarn to be drawn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below). Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given: <math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math> where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here. In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry. Ultra-high-yield growth of vertical single-walled carbon nanotubes - Hidden roles of hydrogen and oxygen Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. 
Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is bigger than towards carbon. In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can design the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper. In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs. This article can be found in our archive. Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The longer fibers, which we hope have a couple of hundred GPa of tensile strength, can hopefully be spun into the yarns that will make our SE ribbon. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. In this way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7mm! Also, in a range from 0.5 to 1.5mm fiber length the forests grown with this method can be spun into yarns. The growth rate with this method was initially <math>60{\rm \mu m\ min.^{-1}}</math> and could be sustained for 90 minutes. This is very different from the <math>1{\rm \mu m\ min.^{-1}}</math> reported by G. Zhang et al. (2005), which shows that the growth is very dependent on the method and materials used. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7mm after 2h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed them to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007). Overall the paper is somewhat short on the details of the process, but the results are very interesting. Perhaps the 5mm CNTs are long enough to be spun into a usable yarn. In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. 
In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle due to its inherent shape as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically. Of course, for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough so we can spin them into a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock. If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing. High-Performance Carbon Nanotube Fiber K. Koziol et al., Science, 318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNTs and MWNTs that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel. They show in their results plot (Figure 3A) that typically the fibers split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is comparable to the typical spacing between weak points. 
This can be seen from the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm long fiber is 20 times higher than encountering one on a 1mm long fiber. In conclusion, the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open. Tensile and Electrical Properties of Carbon Nanotube Yarns and Knitted Tubes in Pure or Composite Form The paper by S. Hutton et al. is the latest on yarns spun out of CNTs. The core of the paper is concerned with the effect of different amounts of twist on the tensile strength and on the electrical conductivity of the yarn. The bad news for us is that they arrive only at 1GPa/ccm for the optimum tensile strength of the yarn. However, some insight is given into the spinning process and into the different methods of processing the CNTs. They use relatively short CNTs (0.2 to 0.3mm) grown into a MWNT forest by chemical vapour deposition (CVD) onto a silicon substrate covered with a metal catalyst. The latter method appears to have become standard recently.
Astrid the astronaut is floating in a grid. Each time she pushes off she keeps gliding until she collides with a solid wall, marked by a thicker line. From such a wall she can propel herself either parallel or perpendicular to the wall, but always travelling directly \(\leftarrow, \rightarrow, \uparrow, \downarrow\). Floating out of the grid means death. In this grid, Astrid can reach square Y from square ✔. But if she starts from square ✘ there is no wall to stop her and she will float past Y and out of the grid. In this grid, from square X Astrid can float to three different squares with one push (each is marked with an *). Push \(\leftarrow\) is not possible from X due to the solid wall to the left. From X it takes three pushes to stop safely at square Y, namely \(\downarrow, \rightarrow, \uparrow\). The sequence \(\uparrow, \rightarrow\) would have Astrid float past Y and out of the grid. Question: In the following grid, what is the least number of pushes that Astrid can make to safely travel from X to Y?
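Since the grids themselves are in the figures (not reproduced here), here is a generic solver sketch in Python: the puzzle is a shortest-path search in which each push slides Astrid until she reaches a cell with a solid wall on the travel side, and a breadth-first search over cells counts the minimum number of pushes. The wall encoding walls[(r, c)] (a set of solid sides 'L', 'R', 'U', 'D' for that cell) is my own assumption.

from collections import deque

def min_pushes(rows, cols, walls, start, goal):
    # walls: dict mapping (row, col) -> set of solid sides {'L','R','U','D'}
    moves = {"L": (0, -1), "R": (0, 1), "U": (-1, 0), "D": (1, 0)}

    def slide(cell, d):
        # glide in direction d until a solid wall stops us, or we float out
        dr, dc = moves[d]
        r, c = cell
        while True:
            if d in walls.get((r, c), set()):
                return (r, c)          # a solid wall on this side stops her
            r, c = r + dr, c + dc
            if not (0 <= r < rows and 0 <= c < cols):
                return None            # floated out of the grid: death

    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        cell, pushes = frontier.popleft()
        if cell == goal:
            return pushes
        for d in moves:
            nxt = slide(cell, d)
            if nxt not in (None, cell) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, pushes + 1))
    return None                        # goal not reachable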
The conjecture 'Are all zeros of $\zeta(0+s) \pm \zeta(0-s)$ except a finite few on the line $\Re(s)=0$?' was shown to be unconditionally true. The proof can even be extended to the domain $\sigma_0 \le 0$ with $\zeta(\sigma_0 + s) \pm \zeta(\sigma_0 -s)$. Building on this, I started to experiment with the sum/difference of finite Euler products and now would like to conjecture that with: $$E(s,X) := \prod_{p \le X} \left( \dfrac{1}{1-p^{-s}} \right)$$ and $\sigma_0 \le 0$ and $X \ge 3$, all zeros (i.e. no exceptions) of: $$E(\sigma_0 + s,X) \pm E(\sigma_0 -s,X)$$ are on the line $\Re(s)=0$. Obviously I couldn't test all values of $\sigma_0$, but is this provable? P.S.: I also experimented with the zeros of: $$E(s,X) \pm E(1-s,X)$$ and found that in the critical strip, with $X \ge 2$, by far most of the zeros lie on the line $\Re(s)=\frac12$. For lower values of $X$ I also found zeros off the critical line (within and outside the strip), but with increasing $X$ these appear to 'crawl' towards the lines $\Re(s)=0,\frac12$ or $1$. However, I struggle to figure out what the exact final destiny of these zeros is when $X \rightarrow \infty$. There are also real zeros for this formula, however only at every other prime. So, $E(s,X) + E(1-s,X)$ only has a real root for $s$ at $X=2,5,11,17,23,\dots$, whereas $E(s,X) - E(1-s,X)$ only vanishes for $s=\frac12$ and values of $s$ at $X=3,7,13,19,29,\dots$. I guess this is just something trivial that stems from the odd/even number of factors in the finite Euler product?
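For anyone wanting to reproduce the experiments, here is a rough sketch of the kind of numerical search I mean (Python with mpmath; the parameters are arbitrary choices of mine). Since $E$ has real Dirichlet coefficients, $E(\sigma_0 - it, X) = \overline{E(\sigma_0 + it, X)}$, so on the line $\Re(s)=0$ the '+' combination equals $2\,\Re\,E(\sigma_0+it, X)$, which is real, and its zeros show up as sign changes in $t$:

from mpmath import mp, mpc, power, findroot

mp.dps = 30
primes = [2, 3, 5, 7, 11]                # X = 11

def E(s):
    # finite Euler product over primes p <= X
    out = mpc(1)
    for p in primes:
        out /= (1 - power(p, -s))
    return out

sigma0 = -1
F = lambda t: (E(sigma0 + 1j * t) + E(sigma0 - 1j * t)).real

# scan along the line Re(s) = 0 for sign changes, then polish with findroot
prev_t, prev_v = 0.1, F(0.1)
for k in range(1, 200):
    t = 0.1 + 0.05 * k
    v = F(t)
    if v * prev_v < 0:
        print(findroot(F, (prev_t + t) / 2))
    prev_t, prev_v = t, v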
I am studying Electrodynamics and I have been introduced to the concept of Gauge Invariance. This was introduced by noting that $E$ and $B$ amount to 6 degrees of freedom and the Maxwell equations amount to 3 degrees of freedom. On the other hand, if we write $$E = - \nabla \phi - \frac{\partial A}{\partial t}, \qquad B = \nabla \times A$$ this contains 4 degrees of freedom. The extra degree of freedom forms part of this gauge invariance. My lecture notes go on to talk about the Neumann gauge and the Lorenz gauge and how these are both 'natural' choices for a gauge. I have come to Stack Exchange because I am fairly confused. I am not sure what a 'gauge' even is and what its point is. It's not obvious from what I've put above... Furthermore, I read on in my lecture notes and it says that in the Lorenz gauge, $A$ and $\phi$ satisfy wave equations. Again, I don't see how this is useful, but maybe a user on here can shed some light on gauges and this will make sense.
$X\sim U(0,\theta)$. To find the UMVUE of $\cos\theta$, is it enough to find the UMVUE of $\theta$ and substitute it? The UMVUE of $\theta$ being $(n+1)X_{(n)}/n$, is the answer $\cos\left((n+1)X_{(n)}/n\right)$? Assuming you have a sample of $n$ observations, the density of the complete sufficient statistic $X_{(n)}$ is $$f_{X_{(n)}}(t)=\frac{nt^{n-1}}{\theta^n}\mathbf1_{0<t<\theta}$$ Any function of $X_{(n)}$ that is unbiased for $\cos\theta$ will be the UMVUE of $\cos\theta$. Let $g(\cdot)$ be that function. Set up the equation $$E_{\theta}\left[g(X_{(n)})\right]=\cos\theta\quad,\,\forall\,\theta>0$$ That is, $$\int_0^\theta g(t)t^{n-1}\,dt=\frac{\theta^n\cos\theta}{n}$$ Differentiating both sides of the last equation wrt $\theta$, one can solve for $g(\cdot)$: the left side gives $g(\theta)\theta^{n-1}$ and the right side gives $\theta^{n-1}\cos\theta-\frac{\theta^n\sin\theta}{n}$, so that $g(t)=\cos t-\frac{t\sin t}{n}$. In particular, the UMVUE is not simply $\cos$ of the UMVUE of $\theta$.
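As a sanity check (my own, in Python with sympy), one can verify that this $g$ is indeed unbiased for a few concrete sample sizes:

import sympy as sp

t, theta = sp.symbols("t theta", positive=True)
for n in (1, 2, 3, 5):
    g = sp.cos(t) - t * sp.sin(t) / n
    # E[g(X_(n))] under the density n t^(n-1) / theta^n on (0, theta)
    E = sp.integrate(g * n * t**(n - 1) / theta**n, (t, 0, theta))
    assert sp.simplify(E - sp.cos(theta)) == 0
print("E[g(X_(n))] = cos(theta) for n = 1, 2, 3, 5")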
The original Noether's theorem assumes a Lagrangian formulation. Is there a kind of Noether's theorem for the Hamiltonian formalism? Action formulation. It should be stressed that Noether's theorem is a statement about consequences of symmetries of an action functional (as opposed to, e.g., symmetries of equations of motion, or solutions thereof, cf. this Phys.SE post). So to use Noether's theorem, we first of all need an action formulation. How do we get an action for a Hamiltonian theory? Well, let us for simplicity consider point mechanics (as opposed to field theory, which is a straightforward generalization). Then the Hamiltonian action reads $$ S_H[q,p] ~:=~ \int \! dt ~ L_H(q,\dot{q},p,t). \tag{1}$$ Here $L_H$ is the so-called Hamiltonian Lagrangian $$ L_H(q,\dot{q},p,t) ~:=~\sum_{i=1}^n p_i \dot{q}^i - H(q,p,t). \tag{2}$$ We may view the action (1) as a first-order Lagrangian system $L_H(z,\dot{z},t)$ in twice as many variables $$ (z^1,\ldots,z^{2n}) ~=~ (q^1, \ldots, q^n;p_1,\ldots, p_n).\tag{3}$$ The Euler-Lagrange equations are then Hamilton's equations: $$ 0~\approx~\frac{\delta S_H}{\delta z^I} ~=~\sum_{J=1}^{2n}\omega_{IJ}\dot{z}^J -\frac{\partial H}{\partial z^I} \qquad\Leftrightarrow\qquad \dot{z}^I~\approx~\{z^I,H\} \qquad\Leftrightarrow\qquad $$ $$ \dot{q}^i~\approx~ \{q^i,H\}~=~\frac{\partial H}{\partial p_i}\qquad \text{and}\qquad \dot{p}_i~\approx~ \{p_i,H\}~=~-\frac{\partial H}{\partial q^i}. \tag{4}$$ [Here the $\approx$ symbol means equality on-shell, i.e. modulo the equations of motion (eom).] Equivalently, for an arbitrary quantity $Q=Q(q,p,t)$ we may collectively write Hamilton's eoms (4) as $$ \frac{dQ}{dt}~\approx~ \{Q,H\}+\frac{\partial Q}{\partial t}.\tag{5}$$ Returning to OP's question, the Noether theorem may then be applied to the Hamiltonian action (1) to investigate symmetries and conservation laws. Statement 1: "A symmetry is generated by its own Noether charge." Sketched proof: Let there be given an infinitesimal (vertical) transformation $$ \delta z^I~=~ \epsilon Y^I(q,p,t), \qquad I~\in~\{1, \ldots, 2n\}, \qquad \delta t~=~0,\tag{6}$$ where $Y^I=Y^I(q,p,t)$ are (vertical) generators, and $\epsilon$ is an infinitesimal parameter. Let the transformation (6) be a quasisymmetry of the Hamiltonian Lagrangian $$ \delta L_H~=~\epsilon \frac{d f^0}{dt},\tag{7}$$ where $f^0=f^0(q,p,t)$ is some function. By definition, the bare Noether charge is $$ Q^0~:=~ \sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} Y^I \tag{8}$$ while the full Noether charge is $$ Q~:=~Q^0-f^0. \tag{9} $$ Noether's theorem then guarantees an off-shell Noether identity $$\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I} +\frac{\partial Q}{\partial t} ~=~ \frac{dQ}{dt} ~\stackrel{\text{NI}}{=}~ -\sum_{I=1}^{2n} \frac{\delta S_H}{\delta z^I}Y^I ~\stackrel{(4)}{=}~\sum_{I,J=1}^{2n}\dot{z}^I\omega_{IJ}Y^J + \sum_{I=1}^{2n} \frac{\partial H}{\partial z^I}Y^I . \tag{10}$$ By comparing coefficient functions of $\dot{z}^I$ on the 2 sides of eq. (10), we conclude that the full Noether charge $Q$ generates the quasisymmetry transformation $$ Y^I~=~\{z^I,Q\}.\tag{11}$$ $\Box$ Statement 2: "A generator of symmetry is essentially a constant of motion." 
Sketched proof: Let there be given a quantity $Q=Q(q,p,t)$ (a priori not necessarily the Noether charge) such that the infinitesimal transformation $$ \delta z^I~=~ \{z^I,Q\}\epsilon,\qquad I~\in~\{1, \ldots, 2n\}, \qquad \delta t~=~0,$$ $$ \delta q^i~=~\frac{\partial Q}{\partial p_i}\epsilon, \qquad \delta p_i~=~ -\frac{\partial Q}{\partial q^i}\epsilon, \qquad i~\in~\{1, \ldots, n\},\tag{12}$$ generated by $Q$, and with infinitesimal parameter $\epsilon$, is a quasisymmetry (7) of the Hamiltonian Lagrangian. The bare Noether charge is by definition $$ Q^0~:=~ \sum_{I=1}^{2n}\frac{\partial L_H}{\partial \dot{z}^I} \{z^I,Q\} ~\stackrel{(2)}{=}~ \sum_{i=1}^n p_i \frac{\partial Q}{\partial p_i}.\tag{13}$$ Noether's theorem then guarantees an off-shell Noether identity $$ \frac{d (Q^0-f^0)}{dt} ~\stackrel{\text{NI}}{=}~-\sum_{I=1}^{2n}\frac{\delta S_H}{\delta z^I} \{z^I,Q\} $$ $$~\stackrel{(2)}{=}~ \sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I} +\{H,Q\} ~=~\frac{dQ}{dt}-\frac{\partial Q}{\partial t} +\{H,Q\}. \tag{14}$$ Firstly, Noether theorem implies that the corresponding full Noether charge $Q^0-f^0$ is conserved on-shell $$ \frac{d(Q^0-f^0)}{dt}~\approx~0,\tag{15}$$ which can also be directly inferred from eqs. (5) and (14). Secondly, the off-shell Noether identity (14) can be rewritten as $$ \{Q,H\}+\frac{\partial Q}{\partial t} ~\stackrel{(14)+(17)}{=}~~\frac{dg^0}{dt}~=~\sum_{I=1}^{2n}\dot{z}^I \frac{\partial g^0}{\partial z^I}+\frac{\partial g^0}{\partial t},\tag{16} $$ where we have defined the quantity $$ g^0~:=~Q+f^0-Q^0.\tag{17}$$ We conclude from the off-shell identity (16) that (i) $g^0=g^0(t)$ is a function of time only, $$ \frac{\partial g^0}{\partial z^I}~=~0\tag{18}$$ [because $\dot{z}$ does not appear on the lhs. of eq. (16)]; and (ii) that the following off-shell identity holds $$ \{Q,H\} +\frac{\partial Q}{\partial t} ~=~\frac{\partial g^0}{\partial t}.\tag{19}$$ Note that the quasisymmetry and the eqs. (12)-(15) are invariant if we redefine the generator $$ Q ~~\longrightarrow~~ \tilde{Q}~:=~Q-g^0 .\tag{20} $$ Then the new $\tilde{g}^0=0$ vanishes. Dropping the tilde from the notation, the off-shell identity (19) simplifies to $$ \{Q,H\} +\frac{\partial Q}{\partial t}~=~0.\tag{21}$$ Eq. (21) is the defining equation for an off-shell constant of motion $Q$. $\Box$ Statement 3: "A constant of motion generates a symmetry and is its own Noether charge." Sketched proof: Conversely, if there is given a quantity $Q=Q(q,p,t)$ such that eq. (21) holds off-shell, then the infinitesimal transformation (12) generated by $Q$ is a quasisymmetry of the Hamiltonian Lagrangian $$ \delta L_H ~\stackrel{(2)}{=}~\sum_{i=1}^n\dot{q}^i \delta p_i -\sum_{i=1}^n\dot{p}_i \delta q^i -\delta H +\frac{d}{dt}\sum_{i=1}^np_i \delta q^i \qquad $$ $$~\stackrel{(12)+(13)}{=}~ -\sum_{I=1}^{2n}\dot{z}^I \frac{\partial Q}{\partial z^I}\epsilon -\{H,Q\}\epsilon + \epsilon \frac{d Q^0}{dt}$$ $$~\stackrel{(21)}{=}~ \epsilon \frac{d (Q^0-Q)}{dt} ~\stackrel{(23)}{=}~ \epsilon \frac{d f^0}{dt},\tag{22}$$ because $\delta L_H$ is a total time derivative. Here we have defined $$ f^0~=~ Q^0-Q .\tag{23}$$ The corresponding full Noether charge $$ Q^0-f^0~\stackrel{(23)}{=}~Q \tag{24}$$ is just the generator $Q$ we started with! Finally, Noether's theorem states that the full Noether charge is conserved on-shell $$ \frac{dQ}{dt}~\approx~0.\tag{25}$$ Eq. (25) is the defining equation for an on-shell constant of motion $Q$. $\Box$ Discussion. Note that it is overkill to use Noether's theorem to deduce eq. 
(25) from eq. (21). In fact, eq. (25) follows directly from the starting assumption (21) by use of Hamilton's eoms (5) without the use of Noether's theorem! For the above reasons, as purists, we disapprove of the common praxis to refer to the implication (21)$\Rightarrow$(25) as a 'Hamiltonian version of Noether's theorem'. Interestingly, an inverse Noether's theorem works for the Hamiltonian action (1), i.e. an on-shell conservation law (25) leads to an off-shell quasisymmetry (12) of the action (1), cf. e.g. my Phys.SE answer here. In fact, one may show that (21)$\Leftrightarrow$(25), cf. my Phys.SE answer here. Example 4: The Kepler problem: The symmetries associated with conservation of the Laplace-Runge-Lenz vector in the Kepler problem are difficult to understand via a purely Lagrangian formulation in configuration space: $$ L~=~ \frac{m}{2}\dot{q}^2 + \frac{k}{q}.\tag{26}$$

If your Hamiltonian is invariant, that means there should be a vanishing Poisson bracket for some function $F(q,p)$ of your canonical coordinates, so that $$\{ H(q,p), F(q,p)\} = 0$$ Since the Poisson bracket with the Hamiltonian also gives the time derivative, you automatically have your conservation law. One thing to note: the Lagrangian is a function of position and velocity, whereas the Hamiltonian is a function of position and momentum. Thus, your $T$ and $V$ in $L = T - V$ and $H = T + V$ are not the same functions.
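To illustrate the statement that a conserved quantity has vanishing Poisson bracket with the Hamiltonian, here is a small sympy sketch (my own example, using the planar Kepler Hamiltonian and the angular momentum $L_z$ rather than the Laplace-Runge-Lenz vector):

import sympy as sp

qx, qy, px, py, m, k = sp.symbols("q_x q_y p_x p_y m k", positive=True)

def pb(f, g):
    # canonical Poisson bracket in the variables (q_x, q_y; p_x, p_y)
    return (sp.diff(f, qx) * sp.diff(g, px) - sp.diff(f, px) * sp.diff(g, qx)
          + sp.diff(f, qy) * sp.diff(g, py) - sp.diff(f, py) * sp.diff(g, qy))

H = (px**2 + py**2) / (2 * m) - k / sp.sqrt(qx**2 + qy**2)
Lz = qx * py - qy * px
print(sp.simplify(pb(Lz, H)))   # 0, so L_z is conserved and generates rotations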
I have two functions $c_1, c_2 \colon [0,2\pi] \to \mathbb{R}^2$ defined by $c_1(t) = (\cos(t), \sin(t))$ and $c_2(t) = (\cos(2t), \sin(2t))$. I want to show that they have the same image. It is pretty obvious, but I don't know how to prove it. The key to problems like this is to carefully write down what you want to prove, namely that two sets are equal. In this case, to prove $image(c_1) = image(c_2)$, you first show that $image(c_1) \subset image(c_2)$, and then show the reverse containment. To do the first, you have to know what $image(c_1)$ actually is. It's $$ image(c_1) = \{ (\cos t, \sin t) \mid t \in [0, 2\pi]\}. $$ Thus every element in the image is a cosine-sine pair for some argument. You can do the same for the second image, and then you're ready to go: Take a point in the image of the first function; it must be $(\cos a, \sin a)$ for some $a \in [0, 2\pi]$. You want to show that it's also $(\cos 2b, \sin 2b)$ for at least one point $b \in [0, 2\pi]$. Hint: pick $b = a/2$. Then write out what you get. And confirm that $b$ really is in the specified domain, while you're at it. Then you have to do the same thing in the other direction.
I am aware of the debate on whether the Schrödinger equation was derived or motivated. However, I have not seen the one that I describe below. I wonder if it could be relevant, if not historically then for educational purposes when introducing the equation. Suppose that we have the time-dependent Schrödinger equation for a free particle, $V=0$: $$\frac{i\hbar}{2m} \nabla^2 \Psi_\beta = \frac{\partial \Psi_{\beta}}{\partial t}$$ As the particle moves, its heat is diffused throughout space. Now consider the heat equation, or in general the diffusion equation: $$\alpha\nabla^2 u= \frac{\partial u}{\partial t}$$ where $u$ is temperature. We also have the particle diffusion equation from Fick's second law: $$D \frac{\partial^2 \phi}{\partial x^2}= \frac{\partial \phi}{\partial t}$$ where $\phi$ is concentration. Furthermore, the probability density function obeys the diffusion equation. So as the free particle moves, the heat, the temperature, or the density is diffused. Now we can motivate the Schrödinger equation in an intuitive way: mathematically it is describing the same kind of diffusion, only with an imaginary diffusion coefficient. Am I right? Have you seen a motivation like this elsewhere?
This question already has an answer here: The term "B" denotes both magnetic flux density and magnetic induction. Are the terms the same? If not, what do they mean?

Do you understand flux and surface density? See Magnetic Flux. What you refer to as "magnetic flux density" would be the magnitude of the magnetic field $\mathbf B$. Magnetic flux $\Phi$ is a scalar; it is pictured as the number of field lines passing through a given surface. Since it is a dot product, the flux through a flat surface in a uniform field is the magnitude of the perpendicular component of the magnetic field times the surface area, $$\Phi = \int \mathbf B \cdot \hat n \,\mathrm dA = B_\perp A.$$ A surface density is a quantity per unit area; hence, dividing flux by area leaves you with the magnitude of the magnetic field perpendicular to the surface, $$\frac \Phi A = B_\perp. $$ Finally, magnetic induction refers to the effect of a change in flux, $$\frac {\Delta \Phi} {\Delta t} = \frac{\Delta (B_\perp A)}{\Delta t}; $$ see Faraday's Law.

I think your confusion comes from an older usage of the terms $\mathbf{B}$ and $\mathbf{H}$. Old textbooks called $\mathbf{H}$ the magnetic field and came up with different names for $\mathbf{B}$, such as "magnetic induction" or "magnetic flux density". Nowadays, we just call $\mathbf{B}$ the "magnetic field" and $\mathbf{H}$ is just called the "$\mathbf{H}$ field". The difference is that $\mathbf{B}$ is due to all currents, whereas $\mathbf{H}$ is due to "free current" and the magnetization $\mathbf{M}$.
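For reference, the standard SI relation between the two fields (added here, not part of the answers above) is $$\mathbf{B} = \mu_0\left(\mathbf{H} + \mathbf{M}\right),$$ so in vacuum, where the magnetization $\mathbf{M}$ vanishes, $\mathbf{B}$ and $\mathbf{H}$ differ only by the constant factor $\mu_0$.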
The partial log-likelihood function in the Cox proportional hazards model is given by the formula $${}_{p}\ell(\beta) = \sum\limits_{i=1}^{K}X_i'\beta - \sum\limits_{i=1}^{K}\log\Big(\sum\limits_{l\in \mathscr{R}(t_i)}^{}e^{X_l'\beta}\Big),$$ where $K$ is the number of observations for which we have observed an event (out of $n$ observations in total, so $n-K$ observations were censored) and $\mathscr{R}(t_i)$ is the risk set for time $t_i$, defined as $\mathscr{R}(t_i) := \{X_j: t_j \ge t_i,\ j = 1, \dots, n \}$. I am trying to implement a function that calculates this partial log-likelihood for a given $\beta$ vector and an input data set. I thought it would be clever to first sort the data by observed time, so that a higher row number indicates a longer survival time. For data in that form I have prepared an implementation for 2 explanatory variables:

full_cox_loglik <- function(beta1, beta2, x1, x2, censored) {
  # assumes rows sorted by increasing time; rev() + cumsum() builds the risk-set sums
  sum(rev(censored) * (beta1 * rev(x1) + beta2 * rev(x2) -
                         log(cumsum(exp(beta1 * rev(x1) + beta2 * rev(x2))))))
}

where beta1 is the coefficient for the x1 variable, beta2 is the coefficient for the x2 variable, and censored is a vector indicating whether the observation had an event (then $1$) or was censored (then $0$). Being careful, I have also prepared a second, longer implementation that works for any dimension; it also takes the data in sorted format. It assumes that dCox is a data frame with the explanatory variables and a censoring column, beta is a vector of coefficients, and status_number is the index of the column that carries the censoring information:

library(foreach)
library(dplyr)  # for %>%; note %dopar% also needs a registered backend (e.g. doParallel), otherwise it runs sequentially with a warning

partial_coxph_loglik <- function(dCox, beta, status_number) {
  n <- nrow(dCox)
  foreach(i = 1:n) %dopar% {
    sum(dCox[i, status_number] * (dCox[i, -status_number] * beta))
  } %>% unlist -> part1
  foreach(i = 1:n) %dopar% {
    exp(sum(dCox[i, -status_number] * beta))
  } %>% unlist -> part2
  foreach(i = 1:n) %dopar% {
    part1[i] - dCox[i, status_number] * log(sum(part2[i:n]))
  } %>% unlist -> part3
  sum(part3)
}

To be more readable, the partial log-likelihood computed by this implementation is $${}_{p}\ell(\beta) = \sum\limits_{i=1}^{n}\Big(\text{part1}_i - \delta_i\log\sum\limits_{l=i}^{n}\text{part2}_l\Big),$$ where $\delta_i$ is the event indicator. For this implementation I have tried to calculate the values of the partial log-likelihood for data generated with the true $\beta$ parameters set to beta=c(2,2). But the results tell me that the maximum of the partial log-likelihood is not at the point beta=c(2,2) but far from it. One can simulate survival data for the Cox proportional hazards model, with Weibull event times, using the method explained here: https://stats.stackexchange.com/a/135129/49793.
A similar implementation based on that solution is below (it works for 2 explanatory variables):

set.seed(456)

dataCox <- function(N, lambda, rho, x, beta, censRate) {
  # real Weibull event times
  u <- runif(N)
  Treal <- (-log(u) / (lambda * exp(x %*% beta)))^(1 / rho)
  # censoring times
  Censoring <- rexp(N, censRate)
  # follow-up times and event indicators
  time <- pmin(Treal, Censoring)
  status <- as.numeric(Treal <= Censoring)
  # data set
  data.frame(id = 1:N, time = time, status = status, x = x)
}

x <- matrix(sample(0:1, size = 2000, replace = TRUE), ncol = 2)
dataCox(10^3, lambda = 5, rho = 1.5, x, beta = c(2, 2), censRate = 0.2) -> dCox

So for this implementation and the simulated data I have calculated the values of the partial log-likelihood for beta ranging from c(0,0) to c(2,2) along the diagonal beta1 = beta2, and obtained these results:

library(dplyr)
dCox %>% dplyr::arrange(time) -> dCoxArr

beta1 <- seq(0, 2, 0.05)
beta2 <- seq(0, 2, 0.05)

res <- numeric(length(beta1))
for (i in 1:length(beta1)) {
  full_cox_loglik(beta1[i], beta2[i], dCoxArr$x.1, dCoxArr$x.2, dCoxArr$status) -> res[i]
}

library(ggplot2)
qplot(beta1, res)

res2 <- numeric(length(beta1))
for (i in 1:length(beta1)) {
  partial_coxph_loglik(dCoxArr[, c(4, 5, 3)], c(beta1[i], beta2[i]), 3) -> res2[i]
  cat("\r", i, "\r")
}
qplot(beta1, res2)

It does not look like the maximum of the partial log-likelihood is at the point beta = c(2,2) from which I generated the data. So here is my question: where did I make a mistake? In the data generation? In the partial log-likelihood implementation? Or somewhere else?
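For anyone debugging this, a compact vectorized reference implementation may help as a cross-check. This is my sketch, not code from the question; it assumes the rows are sorted by increasing time and that there are no tied event times:

cox_pll <- function(beta, X, status) {
  # linear predictor for every subject
  eta <- as.vector(X %*% beta)
  # risk-set sums: for row i (data sorted by increasing time),
  # sum of exp(eta) over all rows j with t_j >= t_i
  risk_sums <- rev(cumsum(rev(exp(eta))))
  # events contribute eta_i - log(risk-set sum); censored rows contribute 0
  sum(status * (eta - log(risk_sums)))
}

One can also compare any hand-rolled value against the maximized partial log-likelihood reported by survival::coxph, which fits the same model:

library(survival)
fit <- coxph(Surv(time, status) ~ x.1 + x.2, data = dCox)
coef(fit)      # with n = 1000 these should land reasonably near the true c(2, 2)
fit$loglik[2]  # partial log-likelihood at the fitted coefficients

If cox_pll evaluated at coef(fit) is close to fit$loglik[2] but the grid maximum lies elsewhere, the bug is more likely in the grid evaluation than in the data generation.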
The Annals of Probability Ann. Probab. Volume 19, Number 4 (1991), 1587-1628. Solutions of a Stochastic Differential Equation Forced Onto a Manifold by a Large Drift Abstract We consider a sequence of $\mathbb{R}^d$-valued semimartingales $\{X_n\}$ satisfying $X_n(t) = X_n(0) + \int^t_0\sigma_n(X_n(s-))dZ_n(s) + \int^t_0F(X_n(s-))dA_n(s),$ where $\{Z_n\}$ is a "well-behaved" sequence of $\mathbb{R}^e$-valued semimartingales, $\sigma_n$ is a continuous $d \times e$ matrix-valued function, $F$ is a vector field whose deterministic flow has an asymptotically stable manifold of fixed points $\Gamma$, and $A_n$ is a nondecreasing process which asymptotically puts infinite mass on every interval. Many Markov processes with lower dimensional diffusion approximations can be written in this form. Intuitively, if $X_n(0)$ is close to $\Gamma$, the drift term $F dA_n$ forces $X_n$ to stay close to $\Gamma$, and any limiting process must actually stay on $\Gamma$. If $X_n(0)$ is only in the domain of attraction of $\Gamma$ under the flow of $F$, then the drift term immediately carries $X_n$ close to $\Gamma$ and forces $X_n$ to stay close to $\Gamma$. We make these ideas rigorous, give conditions under which $\{X_n\}$ is relatively compact in the Skorohod topology and give a stochastic integral equation for the limiting process(es). Article information Source Ann. Probab., Volume 19, Number 4 (1991), 1587-1628. Dates First available in Project Euclid: 19 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aop/1176990225 Digital Object Identifier doi:10.1214/aop/1176990225 Mathematical Reviews number (MathSciNet) MR1127717 Zentralblatt MATH identifier 0749.60053 JSTOR links.jstor.org Subjects Primary: 60H10: Stochastic ordinary differential equations [See also 34F05] Secondary: 60J60: Diffusion processes [See also 58J65] 60J70: Applications of Brownian motions and diffusion theory (population genetics, absorption problems, etc.) [See also 92Dxx] Citation Katzenberger, G. S. Solutions of a Stochastic Differential Equation Forced Onto a Manifold by a Large Drift. Ann. Probab. 19 (1991), no. 4, 1587--1628. doi:10.1214/aop/1176990225. https://projecteuclid.org/euclid.aop/1176990225