Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of the interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd-year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directly to the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen so that the exponents are multiples of 3? Edit: Although this questio... The major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above. If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is $$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$ $$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
Remember that an oracle machine isn't really a "complete object" - basically anything interesting we might ask of it depends on what oracle we feed it. For example, whether an oracle machine $\Phi_e^-(e)$ halts or not depends in general on what oracle it's working with. So let me rephrase the fact you're starting with: There is an oracle $X$ and an oracle machine $\Phi_e^-$ such that $\Phi_e^X$ (= $\Phi_e^-$ with oracle $X$) computes the halting problem. Now every specific oracle has a corresponding halting problem: namely, given an oracle $X$ we let $$X':=\{e: \Phi_e^X(e)\mbox{ halts}\}.$$ The usual proof that the halting problem is not computable translates immediately to prove that $X'$ is not $X$-computable - that is, for every oracle $X$, there is no oracle machine $\Phi_e^X$ such that $\Phi_e^X$ computes $X'$. Since $X'$ depends heavily on $X$, there is no "halting problem for oracle machines" - rather, each oracle determines a different "relativized halting problem," and the more complicated we make the oracle the more complicated this becomes, with the result that we never "catch our tail." EDIT: here's how to "relativize" the unsolvability of the halting problem: Fix an oracle $X$. Suppose $c$ "solves the $X$-halting problem" - that is, for each $n$ we have $\Phi_c^X(n)=1$ iff $n\in X'$. As in the non-oracle case, there is$^*$ an oracle machine $\Phi_e^-$ such that for all $n$, we have $\Phi_e^X(n)\downarrow$ iff $\Phi_c^X(n)\downarrow=0$. Then $\Phi_c^X(e)=0$ iff $e\in X'$, contradicting the assumption on $c$. $^*$This uses the relativized version of the existence of a universal machine, which is proved analogously as for non-oracle machines. Note, incidentally, that $e$ is independent from $X$: a single $e$ does the job for every oracle.
L.S., In my book Vector Analysis by Klaus Jänich, three different 'versions' of the tangent space at a point $p$ of a differentiable manifold are discussed. The 'geometrical': the set of differentiable curves which pass through $p$ at $t=0$ (with two curves equivalent when they have the same derivative at $0$ on a chart). The 'algebraic': the set of all derivations on the ring of germs at $p$ that satisfy the product rule. The 'physical': the set of all vectors that send each chart around $p$ to a vector of $\mathbb R^n$, with the property that for any two charts the associated vectors in $\mathbb R^n$ are mapped to each other by the differential of the transition map. Then three maps are given: $\phi_1$ : geometric $\rightarrow$ algebraic by $f\mapsto(f\circ\alpha)'(0)$, $\phi_2$ : algebraic $\rightarrow$ physical by $(U,h) \mapsto (v(h_1), ..., v(h_n))$, $\phi_3$ : physical $\rightarrow$ geometric by $\alpha(t) := h^{-1}(h(p) + tv(U,h))$, and then it is stated that $\phi_1 \circ\phi_2 \circ\phi_3 = Id$, $\phi_2 \circ\phi_3 \circ\phi_1 = Id$, $\phi_3 \circ\phi_1 \circ\phi_2 = Id$. And it is shown for the first equation that it holds. These are pages 27 - 36 of the book. My question is: How to show that the other equations also hold? (this is not a homework question) The book says it goes in a similar way as the first, but I get stuck at a point. For instance the third equation: what I have tried is this. You have a 'physical' tangent vector $v$, which sends each chart around $p$ to a vector of $\mathbb R^n$. Now you want to make it a geometric tangent vector, so we define the curve $\alpha(t) := h^{-1}(h(p) + tv(U,h))$. Now we want to make it an algebraic tangent vector, so we define the vector $v'$ that maps every $f$ in the ring of germs around $p$ to $(f\circ\alpha)'(0)$, so that's now $(f\circ h^{-1}(h(p) + tv(U,h)))'(0)$. Then we want to make it a physical vector again, so it now becomes $v''$, which sends every $(U,h)$ around $p$ to $(v'(h))$, which then becomes $(h \circ h^{-1}(h(p) + tv(U,h)))'(0)$, but how do we know now that that is the same as the original $v$? I thought it should be possible to somehow use the chain rule for derivatives, and then maybe you could throw away the $h(p)$ term since you're taking the derivative with respect to $t$! But then still I don't know how this would be done and if it would help. Any help I would appreciate very much! Thanks, Willem
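For what it's worth, here is a sketch of the step where you got stuck, using nothing beyond the definitions quoted above: since $h$ and $h^{-1}$ cancel, the curve being differentiated is affine in $t$, $$(h \circ \alpha)(t) = h(p) + t\,v(U,h), \qquad\text{so}\qquad (h\circ\alpha)'(0) = v(U,h).$$ In other words, $v''$ assigns to the chart $(U,h)$ exactly the vector $v(U,h)$, which is the original $v$; no chain rule is needed, because $h\circ h^{-1}$ is the identity and the constant term $h(p)$ drops out under differentiation, just as you suspected.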
Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
I am trying to prove that the global maximum of $f(x_1,x_2,...,x_n)=(x_1x_2···x_n)^2$, subject to $\lVert(x_1,x_2,...,x_n)\rVert_2=r$, is $(r^2/n)^n$. I know I have to find the critical points of the Lagrange function, that is $$F(x_1,x_2,...,x_n,\lambda)=(x_1x_2···x_n)^2+\lambda(\sqrt{\sum_{i=1}^nx_i^2}-r)$$ In order to do that, I've built the system in which all partial derivatives are null, so I have: $$ D_1F(x_1,x_2,...,x_n,\lambda)=2(x_1x_2···x_n)(x_2x_3···x_n)+\frac{\lambda x_1}{\sqrt{\sum_{i=1}^nx_i^2}}=0 $$ $$ ... $$ $$ D_jF(x_1,x_2,...,x_n,\lambda)=2(x_1x_2···x_n)(x_1x_2···x_{j-1}x_{j+1}···x_n)+\frac{\lambda x_j}{\sqrt{\sum_{i=1}^nx_i^2}}=0 $$ $$ ... $$ $$ D_nF(x_1,x_2,...,x_n,\lambda)=2(x_1x_2···x_n)(x_1x_2···x_{n-1})+\frac{\lambda x_n}{\sqrt{\sum_{i=1}^nx_i^2}}=0 $$ $$ D_{n+1}F(x_1,x_2,...,x_n,\lambda)=\sqrt{\sum_{i=1}^nx_i^2}-r=0 $$ I think, because I have tried to solve reduced forms of this system in mathematical software, that the solutions are of the following form: $$ (x_1,x_2,...,x_n)=(\pm\frac{r}{\sqrt{n}},\pm\frac{r}{\sqrt{n}},...\pm\frac{r}{\sqrt{n}}) $$ where each of the $x_i$ takes a positive or negative sign, so there are $2^n$ solutions. However, I didn't manage to solve the system. I have tried to replace the summation with $r$ in all equations, to sum all the equations, to reduce by dividing by $x_1···x_n$... but I didn't get to any interesting end. Could you help me, please, by giving me any hint about how I can solve this system? Thank you very much.
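For reference, one standard way to untangle such a system (a sketch of a hint, not the only route): multiply the $j$-th equation by $x_j$ and use the constraint $\sqrt{\sum_{i=1}^nx_i^2}=r$ to get $$2(x_1x_2\cdots x_n)^2+\frac{\lambda x_j^2}{r}=0 \quad\text{for every } j,$$ so $\lambda x_j^2$ has the same value for all $j$. At a critical point where all $x_j\neq 0$ (the only candidates for the maximum, since $f$ vanishes whenever some $x_j=0$), this forces $x_1^2=x_2^2=\cdots=x_n^2$; combined with $\sum_jx_j^2=r^2$ this gives $x_j^2=r^2/n$, hence the $2^n$ sign choices you observed and $f=(r^2/n)^n$.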
In your second edit, you ask whether there exists an example of such a bundle over a lower-dimensional manifold. Four-dimensional example Let $M = (\mathbb{RP}^2\times\mathbb{RP}^2)\#(S^1\times S^3)$. Note that $$H^1(M; \mathbb{Z}_2) \cong H^1(\mathbb{RP}^2\times\mathbb{RP}^2; \mathbb{Z}_2)\oplus H^1(S^1\times S^3;\mathbb{Z}_2).$$ Let $a$ and $b$ denote elements of $H^1(M; \mathbb{Z}_2)$ corresponding to generators of $H^1(\mathbb{RP}^2\times\mathbb{RP}^2; \mathbb{Z}_2)$, and let $c$ denote the element of $H^1(M; \mathbb{Z}_2)$ corresponding to the generator of $H^1(S^1\times S^3; \mathbb{Z}_2)$. Consider the rank four vector bundle $E = L_a \oplus L_b \oplus L_c\oplus L_{a + b + c}$ where $L_x$ is the unique real line bundle over $M$ with $w_1(L_x) = x$; note that $L_{a+b+c} \cong L_a\otimes L_b\otimes L_c$. We have \begin{align*}w_1(E) =&\ w_1(L_a) + w_1(L_b) + w_1(L_c) + w_1(L_{a + b + c})\\ =&\ a + b + c + (a + b + c) = 0\\&\\w_2(E) =&\ w_1(L_a)w_1(L_b) + w_1(L_a)w_1(L_c) + w_1(L_a)w_1(L_{a + b + c})\\ &+ w_1(L_b)w_1(L_c) + w_1(L_b)w_1(L_{a + b + c}) + w_1(L_c)w_1(L_{a + b + c})\\=&\ ab + ac + a(a + b + c) + bc + b(a + b + c) + c(a + b + c)\\=&\ ab + a^2 + b^2 \neq 0\\&\\w_3(E) =&\ w_1(L_a)w_1(L_b)w_1(L_c) + w_1(L_a)w_1(L_b)w_1(L_{a + b + c})\\ &+ w_1(L_a)w_1(L_c)w_1(L_{a + b + c}) + w_1(L_b)w_1(L_c)w_1(L_{a + b + c})\\=&\ abc + ab(a + b + c) + ac(a + b + c) + bc(a + b + c)\\=&\ a^2b + ab^2 \neq 0\\&\\w_4(E) =&\ w_1(L_a)w_1(L_b)w_1(L_c)w_1(L_{a + b + c})\\=&\ abc(a + b + c) = 0.\end{align*} So $E$ is a rank four vector bundle over a four-manifold $M$ with $w(E) = 1 + w_2(E) + w_3(E)$. In fact, we can do better. As $H^4(M; \mathbb{Z}) \cong \mathbb{Z}_2$, reduction mod $2$ defines an isomorphism $H^4(M; \mathbb{Z}) \to H^4(M; \mathbb{Z}_2)$. Under this isomorphism, $e(E)$ is mapped to $w_4(E) = 0$, so $e(E) = 0$ and hence $E \cong F\oplus\varepsilon^1$. Note that $F \to M$ is a rank three vector bundle with $w(F) = 1 + w_2(F) + w_3(F)$. Three-dimensional characterisation Let $X$ be a three-dimensional CW complex. Recall that there is a bijection between isomorphism classes of orientable rank three bundles on $X$ and homotopy classes of maps $X \to BSO(3)$. As $X$ is three-dimensional, we can instead map to $BSO(3)[3]$, the third stage of the Postnikov tower for $BSO(3)$. As $\pi_1(BSO(3)) = 0$, $\pi_2(BSO(3)) = \mathbb{Z}_2$, and $\pi_3(BSO(3)) = 0$, we see that $BSO(3)[3]$ is a $K(\mathbb{Z}_2, 2)$. Moreover, as the map $BSO(3) \to BSO(3)[3]$ induces an isomorphism on $\pi_1$ and $\pi_2$, the map $H^2(BSO(3)[3]; \mathbb{Z}_2) \to H^2(BSO(3); \mathbb{Z}_2)$ is also an isomorphism. It follows that there is a bijection between orientable rank three bundles on $X$ and $H^2(X; \mathbb{Z}_2)$ given by the second Stiefel-Whitney class of the bundle. Now suppose that $X$ is a connected three-dimensional manifold. In order for $w_3(E) \in H^3(X; \mathbb{Z}_2)$ to be non-zero, we need $X$ to be closed. Furthermore, if $X$ is closed, $$w_3(E) = \operatorname{Sq}^1(w_2(E)) = \nu_1(X)w_2(E) = w_1(X)w_2(E)$$ so $X$ must be non-orientable. By Poincaré duality, there is at least one $\alpha \in H^2(X; \mathbb{Z}_2)$ such that $w_1(X)\alpha \neq 0$. For each such $\alpha$, there is a unique $SO(3)$-bundle $E \to X$ with $w(E) = 1 + \alpha + w_1(X)\alpha$. In conclusion, we have the following statement: Let $X$ be a connected, closed three-manifold. There is a real rank three vector bundle $E \to X$ with $w(E) = 1 + w_2(E) + w_3(E)$ if and only if $X$ is non-orientable. 
Moreover, on any non-orientable $X$, for every choice of $\alpha \in H^2(X; \mathbb{Z}_2)$ satisfying $w_1(X)\alpha\neq 0$, there is a unique real rank three bundle $E$ with $w(E) = 1 + \alpha + w_1(X)\alpha$.
Here's an example showing that in general, for pregeometries arising in model theory, you can't characterize the dimension of a union of a chain of models just in terms of the dimensions of the models. In other words, it matters how the models embed into each other. Consider the theory of a single equivalence relation $E$ with infinitely many infinite classes, and define $\dim(M) = |M/E|$. First union: Let $M_0$ be the unique countable model of this theory up to isomorphism. For every countable ordinal $\alpha$, let $M_{\alpha+1}$ be the elementary extension of $M_\alpha$ obtained by adding a single new equivalence class with countably many elements. For a limit ordinal $\lambda$, let $M_\lambda = \bigcup_{\alpha<\lambda} M_\alpha$. Then $\dim(M_\alpha) = \aleph_0$ for all $\alpha<\omega_1$, but $\dim(M_{\omega_1}) = \aleph_1$. Second union: Let $N_0 = M_0$, and pick an equivalence class $C$. This time, for every $\alpha$, let $N_{\alpha+1}$ be the elementary extension of $N_\alpha$ obtained by adding a single new element to $C$. As before, take unions at limit stages. Then $\dim(N_\alpha) = \aleph_0$ for all $\alpha<\omega_1$, and also $\dim(N_{\omega_1}) = \aleph_0$. Now I need to convince you that this dimension function arises in a standard way from a pregeometry studied in model theory. In stability theory, there's the notion of a regular type: a stationary type which is orthogonal to all of its forking extensions. The key point is that if $p(x)$ is a regular type (let's say over $\emptyset$ for simplicity), then forking dependence gives rise to a pregeometry on the realizations of $p$ via the closure operator $\mathrm{cl}(A) = \{b\models p(x)\mid \text{tp}(b/A) \text{ forks over }\emptyset\}$. In my example, the theory is stable, the unique $1$-type is a regular type, and forking is governed by the equivalence relation $E$, so we get a pregeometry on the whole model with closure operator $\mathrm{cl}(A) = \bigcup_{a\in A} [a]_E$, where $[a]_E$ is the $E$-class of $a$. And the induced dimension function is $\dim(M) = |M/E|$. Well, maybe you don't like this kind of pregeometry, and you only want to consider the kind you meet more often in model theory, namely pregeometries induced by the $\text{acl}$ operator. That's fine, but then the dimensions are only interesting for models that are at most the size of the language (so only countable models if the language is countable). Indeed, suppose $T$ is a theory such that $\text{acl}$ induces a pregeometry on every model of $T$, and let $M\models T$ with $|M| > |L|$. Since $|\text{acl}(A)| = \max(|A|,|L|)$ for all $A\subseteq M$, any basis for $M$ must have cardinality $|M|$, and $\dim(M) = |M|$. In this case, the answer to your question about unions is an easy yes. Added in edit: You might also decide that you're only interested in closure operators with the property that when $A\subseteq M\prec N$, $\text{cl}(A)$ in $M$ equals $\text{cl}(A)$ in $N$, i.e. closures don't grow in elementary extensions. This is the case for $\text{cl} = \text{acl}$, and it would salvage the proof in your answer that $\dim$ takes unions of chains to sums, since if $N$ is a proper elementary extension of $M$, the closure of a basis for $M$ is contained in $M$, and we need at least one new element to form a basis for $N$. But we actually don't get anything beyond $\text{acl}$ under this assumption. Indeed, suppose $\text{cl}$ satisfies the condition above, and look at $A\subseteq M$. Embed $M$ in a large monster model $\mathbb{M}$.
Then $\text{cl}_M(A) = \text{cl}_{\mathbb{M}}(A)$. In fact, for any $A\subseteq N\prec \mathbb{M}$, we have $\text{cl}_N(A) = \text{cl}_{\mathbb{M}}(A)$, so $\text{cl}_{\mathbb{M}}(A)\subseteq N$. But $\bigcap\{N\mid A\subseteq N\prec \mathbb{M}\} = \text{acl}(A)$, so $\text{cl}(A)\subseteq \text{acl}(A)$. If you're interested in axiomatizing dimension functions, you might want to look at this paper, which gives a number of equivalent axiom systems for infinite matroids. In particular, look at the axioms in terms of rank functions. Their rank functions take values in $\mathbb{N}\cup \{\infty\}$, but you might as well be in this situation if you're thinking about $\text{acl}$ pregeometries ($\dim(M) = \infty$ means $\dim(M) = |M|$).
I learned this through Ken Brown's textbook "Cohomology of Groups" (and studying under him): I basically use the beginning of his Chapter 5 and solve the two exercises in that section. Let $M$ (resp. $M'$) be an arbitrary $G$-module (resp. $G'$-module), let $F$ (resp. $F'$) be a projective resolution of $\mathbb{Z}$ over $\mathbb{Z}G$ (resp. $\mathbb{Z}G'$), and consider the map $(F\otimes_GM)\otimes(F'\otimes_{G'}M')\rightarrow (F\otimes F')\otimes_{G\times G'}(M\otimes M')$ given by $(x\otimes m)\otimes(x'\otimes m')\mapsto(x\otimes x')\otimes(m\otimes m')$. Note that $(F\otimes_GM)\otimes(F'\otimes_{G'}M') = (F\otimes M)_G \otimes(F'\otimes M')_{G'}$, which is the quotient of $F\otimes M\otimes F'\otimes M'$ by the subgroup generated by elements of the form $gx\otimes gm\otimes g'x'\otimes g'm' - x\otimes m\otimes x'\otimes m'$. The isomorphism $F\otimes M\stackrel{\cong}{\rightarrow}M\otimes F$ of chain complexes ($M$ in dimension $0$) is given by $x\otimes m\mapsto (-1)^{\deg(m)\cdot \deg(x)}m\otimes x=m\otimes x$, and so the aforementioned quotient is isomorphic to $F\otimes F'\otimes M\otimes M'$ modulo the subgroup generated by elements of the form $(g,g')\cdot(x\otimes x'\otimes m\otimes m') - x\otimes x'\otimes m\otimes m'$, where $(g,g')\cdot(x\otimes x'\otimes m\otimes m') = gx\otimes g'x'\otimes gm\otimes g'm'$ is the diagonal $(G\times G')$-action. Now this is precisely $$(F\otimes F'\otimes M\otimes M')_{G\times G'} = (F\otimes F')\otimes_{G\times G'}(M\otimes M')$$ and hence the considered map is an isomorphism. Assuming now that either $M$ or $M'$ is $\mathbb{Z}$-free, we have a corresponding Künneth formula $\bigoplus_{p=0}^nH_p(G,M)\otimes H_{n-p}(G',M')\rightarrow H_n(G\times G',M\otimes M')$ $\rightarrow\bigoplus_{p=0}^{n-1}Tor_1^\mathbb{Z}(H_p(G,M),H_{n-p-1}(G',M'))$ by Proposition I.0.8 [Brown]. Note that in order to apply the proposition we needed one of the chain complexes (say, $F\otimes_G M$) to be dimension-wise $\mathbb{Z}$-free (and so with a free resolution $F$ this means we needed $M$ to be $\mathbb{Z}$-free). Actually, the general Künneth theorem has a more relaxed condition and it suffices to choose $M$ (or $M'$) to be a $\mathbb{Z}$-torsion-free module. Cohomology Künneth Formula (no proofs, just statements/notes) Let $M$ (resp. $M'$) be an arbitrary $G$-module (resp. $G'$-module), let $F$ (resp. $F'$) be a projective resolution of $\mathbb{Z}$ over $\mathbb{Z}G$ (resp. $\mathbb{Z}G'$), and consider the cochain cross-product $Hom_G(F,M)\otimes Hom_{G'}(F',M')\rightarrow Hom_{G\times G'}(F\otimes F',M\otimes M')$ which maps the cochains $u$ and $u'$ to $u\times u'$ given by $\langle u\times u',x\otimes x'\rangle=(-1)^{\deg(u')\cdot \deg(x)}\langle u,x\rangle\otimes\langle u',x'\rangle$. This map is an isomorphism under the hypothesis that either $H_i(G,M)$ or $H_i(G',M')$ is of finite type, that is, the $i$th homology group is finitely generated for all $i$ (alternatively, we could simply require the projective resolution $F$ or $F'$ to be finitely generated). For example, if $M=\mathbb{Z}$ then $Hom_G(-,\mathbb{Z})$ commutes with finite direct sums, so we need only consider the case $F=\mathbb{Z}G$. An inverse to the above map is given by $t\mapsto \varepsilon\otimes \phi$, where $\varepsilon$ is the augmentation map and $\phi:F'\rightarrow M'$ is given by $\phi(f')=t(1\otimes f')$. Note that this does not hold for infinitely generated $P=\bigoplus^\infty \mathbb{Z}G$ because $Hom_G(P,\mathbb{Z})\cong\prod^\infty \mathbb{Z}$ is not $\mathbb{Z}$-projective (i.e. free abelian).
Assuming now that either $M$ or $M'$ is $\mathbb{Z}$-free, we have a corresponding Künneth formula $\bigoplus_{p=0}^nH^p(G,M)\otimes H^{n-p}(G',M')\rightarrow H^n(G\times G',M\otimes M')\rightarrow$ $\bigoplus_{p=0}^{n+1}Tor_1^\mathbb{Z}(H^p(G,M),H^{n-p+1}(G',M'))$ by Proposition I.0.8[Brown].
I need to prove that: $$\sum_{n=1}^{\infty} (-1)^{n+1}\log\left(1+\frac{1}{n}\right)$$ is convergent, but not absolutely convergent. I tried the ratio test: $$\frac{a_{n+1}}{a_n} = -\frac{\log\left(1+\frac{1}{n+1}\right)}{\log\left(1+\frac{1}{n}\right)} = -\log\left({\frac{1}{n+1}-\frac{1}{n}}\right)$$ I know that the thing inside the $\log$ converges to $1$, so $-\log$ converges to $0$? This is not right, I cannot conclude that this series is divergent. Also, for the sequence without the $(-1)^{n+1}$ it would give $0$ too.
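For reference, a standard route (a sketch): the terms $a_n=\log\left(1+\frac{1}{n}\right)$ decrease monotonically to $0$, so the alternating series converges by the Leibniz criterion; and since $\log\left(1+\frac{1}{n}\right)\geq\frac{1}{n+1}$, comparison with the harmonic series shows that $\sum_n\log\left(1+\frac{1}{n}\right)$ diverges, so the convergence is not absolute. The ratio test is inconclusive here because $|a_{n+1}/a_n|\to 1$; note also that $\log a-\log b=\log(a/b)$, not $\log(a-b)$, so the displayed simplification does not hold.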
Now showing items 1-10 of 20 Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < $p_T$ < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Now showing items 31-40 of 167 Two- and three-pion quantum statistics correlations in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
4.7. Forward Propagation, Backward Propagation, and Computational Graphs In the previous sections, we used mini-batch stochastic gradient descent to train our models. When we implemented the algorithm, we only worried about the calculations involved in forward propagation through the model. In other words, we implemented the calculations required for the model to generate output corresponding to some given input, but when it came time to calculate the gradients of each of our parameters, we invoked the backward function, relying on the autograd module to figure out what to do. The automatic calculation of gradients profoundly simplifies the implementation of deep learning algorithms. Before automatic differentiation, even small changes to complicated models would require recalculating lots of derivatives by hand. Even academic papers would too often have to allocate lots of page real estate to deriving update rules. While we plan to continue relying on autograd, and we have already come a long way without ever discussing how these gradients are calculated efficiently under the hood, it's important that you know how updates are actually calculated if you want to go beyond a shallow understanding of deep learning. In this section, we'll peel back the curtain on some of the details of backward propagation (more commonly called backpropagation or backprop). To convey some insight for both the techniques and how they are implemented, we will rely on both mathematics and computational graphs to describe the mechanics behind neural network computations. To start, we will focus our exposition on a simple multilayer perceptron with a single hidden layer and \(\ell_2\) norm regularization. 4.7.1. Forward Propagation Forward propagation refers to the calculation and storage of intermediate variables (including outputs) for the neural network, in order from the input layer to the output layer. In the following, we work in detail through the example of a deep network with one hidden layer step by step. This is a bit tedious but it will serve us well when discussing what really goes on when we call backward. For the sake of simplicity, let's assume that the input example is \(\mathbf{x}\in \mathbb{R}^d\) and there is no bias term. Here the intermediate variable is \(\mathbf{z} = \mathbf{W}^{(1)}\mathbf{x}\), where \(\mathbf{W}^{(1)} \in \mathbb{R}^{h \times d}\) is the weight parameter of the hidden layer. After entering the intermediate variable \(\mathbf{z}\in \mathbb{R}^h\) into the activation function \(\phi\), applied element by element, we obtain a hidden layer variable with the vector length of \(h\), \(\mathbf{h} = \phi(\mathbf{z})\). The hidden variable \(\mathbf{h}\) is also an intermediate variable. Assuming the parameters of the output layer only possess a weight of \(\mathbf{W}^{(2)} \in \mathbb{R}^{q \times h}\), we can obtain an output layer variable with a vector length of \(q\): \(\mathbf{o} = \mathbf{W}^{(2)}\mathbf{h}\). Assuming the loss function is \(l\) and the example label is \(y\), we can then calculate the loss term for a single data example, \(L = l(\mathbf{o}, y)\). According to the definition of \(\ell_2\) norm regularization, given the hyperparameter \(\lambda\), the regularization term is \(s = \frac{\lambda}{2}\left(\|\mathbf{W}^{(1)}\|_F^2 + \|\mathbf{W}^{(2)}\|_F^2\right)\), where the Frobenius norm of the matrix is equivalent to the calculation of the \(L_2\) norm after flattening the matrix to a vector. Finally, the model's regularized loss on a given data example is \(J = L + s\). We refer to \(J\) as the objective function of a given data example and refer to it as the 'objective function' in the following discussion. 4.7.2.
Computational Graph of Forward Propagation Plotting computational graphs helps us visualize the dependencies of operators and variables within the calculation. The figure below contains the graph associated with the simple network described above. The lower-left corner signifies the input and the upper-right corner the output. Notice that the directions of the arrows (which illustrate data flow) are primarily rightward and upward. 4.7.3. Backpropagation Backpropagation refers to the method of calculating the gradient of neural network parameters. In general, backpropagation calculates and stores the intermediate variables of an objective function related to each layer of the neural network and the gradient of the parameters, in order from the output layer to the input layer, according to the 'chain rule' in calculus. Assume that we have functions \(\mathsf{Y}=f(\mathsf{X})\) and \(\mathsf{Z}=g(\mathsf{Y}) = g \circ f(\mathsf{X})\), in which the input and the output \(\mathsf{X}, \mathsf{Y}, \mathsf{Z}\) are tensors of arbitrary shapes. By using the chain rule, we can compute the derivative of \(\mathsf{Z}\) wrt. \(\mathsf{X}\) via \(\frac{\partial \mathsf{Z}}{\partial \mathsf{X}} = \text{prod}\left(\frac{\partial \mathsf{Z}}{\partial \mathsf{Y}}, \frac{\partial \mathsf{Y}}{\partial \mathsf{X}}\right)\). Here we use the \(\text{prod}\) operator to multiply its arguments after the necessary operations, such as transposition and swapping input positions, have been carried out. For vectors, this is straightforward: it is simply matrix-matrix multiplication, and for higher dimensional tensors we use the appropriate counterpart. The operator \(\text{prod}\) hides all the notational overhead. The parameters of the simple network with one hidden layer are \(\mathbf{W}^{(1)}\) and \(\mathbf{W}^{(2)}\). The objective of backpropagation is to calculate the gradients \(\partial J/\partial \mathbf{W}^{(1)}\) and \(\partial J/\partial \mathbf{W}^{(2)}\). To accomplish this, we will apply the chain rule and calculate, in turn, the gradient of each intermediate variable and parameter. The order of calculations is reversed relative to those performed in forward propagation, since we need to start with the outcome of the compute graph and work our way towards the parameters. The first step is to calculate the gradients of the objective function \(J=L+s\) with respect to the loss term \(L\) and the regularization term \(s\): \(\frac{\partial J}{\partial L} = 1\) and \(\frac{\partial J}{\partial s} = 1\). Next, we compute the gradient of the objective function with respect to the variable of the output layer \(\mathbf{o}\) according to the chain rule: \(\frac{\partial J}{\partial \mathbf{o}} = \text{prod}\left(\frac{\partial J}{\partial L}, \frac{\partial L}{\partial \mathbf{o}}\right) = \frac{\partial L}{\partial \mathbf{o}} \in \mathbb{R}^q\). Next, we calculate the gradients of the regularization term with respect to both parameters: \(\frac{\partial s}{\partial \mathbf{W}^{(1)}} = \lambda \mathbf{W}^{(1)}\) and \(\frac{\partial s}{\partial \mathbf{W}^{(2)}} = \lambda \mathbf{W}^{(2)}\). Now we are able to calculate the gradient \(\partial J/\partial \mathbf{W}^{(2)} \in \mathbb{R}^{q \times h}\) of the model parameters closest to the output layer. Using the chain rule yields: \(\frac{\partial J}{\partial \mathbf{W}^{(2)}} = \frac{\partial J}{\partial \mathbf{o}} \mathbf{h}^\top + \lambda \mathbf{W}^{(2)}\). To obtain the gradient with respect to \(\mathbf{W}^{(1)}\) we need to continue backpropagation along the output layer to the hidden layer. The gradient with respect to the hidden layer's outputs \(\partial J/\partial \mathbf{h} \in \mathbb{R}^h\) is given by \(\frac{\partial J}{\partial \mathbf{h}} = {\mathbf{W}^{(2)}}^\top \frac{\partial J}{\partial \mathbf{o}}\). Since the activation function \(\phi\) applies elementwise, calculating the gradient \(\partial J/\partial \mathbf{z} \in \mathbb{R}^h\) of the intermediate variable \(\mathbf{z}\) requires that we use the elementwise multiplication operator, which we denote by \(\odot\): \(\frac{\partial J}{\partial \mathbf{z}} = \frac{\partial J}{\partial \mathbf{h}} \odot \phi'(\mathbf{z})\). Finally, we can obtain the gradient \(\partial J/\partial \mathbf{W}^{(1)} \in \mathbb{R}^{h \times d}\) of the model parameters closest to the input layer. According to the chain rule, we get \(\frac{\partial J}{\partial \mathbf{W}^{(1)}} = \frac{\partial J}{\partial \mathbf{z}} \mathbf{x}^\top + \lambda \mathbf{W}^{(1)}\). 4.7.4. Training a Model When training networks, forward and backward propagation depend on each other.
In particular, for forward propagation, we traverse the compute graph in the direction of dependencies and compute all the variables on its path. These are then used for backpropagation, where the compute order on the graph is reversed. One of the consequences is that we need to retain the intermediate values until backpropagation is complete. This is also one of the reasons why backpropagation requires significantly more memory than plain 'inference': we end up computing tensors as gradients and need to retain all the intermediate variables to invoke the chain rule. Another reason is that we typically train with minibatches containing more than one variable, thus more intermediate activations need to be stored. 4.7.5. Summary Forward propagation sequentially calculates and stores intermediate variables within the compute graph defined by the neural network. It proceeds from the input to the output layer. Backpropagation sequentially calculates and stores the gradients of intermediate variables and parameters within the neural network in the reversed order. When training deep learning models, forward propagation and backpropagation are interdependent. Training requires significantly more memory and storage. 4.7.6. Exercises 1. Assume that the inputs \(\mathbf{x}\) are matrices. What is the dimensionality of the gradients? 2. Add a bias to the hidden layer of the model described in this chapter. Draw the corresponding compute graph. Derive the forward and backward propagation equations. 3. Compute the memory footprint for training and inference in the model described in the current chapter. 4. Assume that you want to compute second derivatives. What happens to the compute graph? Is this a good idea? 5. Assume that the compute graph is too large for your GPU. Can you partition it over more than one GPU? What are the advantages and disadvantages over training on a smaller minibatch?
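To make the mechanics above concrete, here is a minimal NumPy sketch of the forward and backward passes derived in the backpropagation subsection, for a single example (the dimensions, the ReLU choice for \(\phi\), and the softmax cross-entropy choice for \(l\) are illustrative assumptions, not fixed by the text):

```python
import numpy as np

d, h, q, lam = 4, 5, 3, 0.01                  # assumed sizes and weight decay
rng = np.random.default_rng(0)
W1 = rng.normal(size=(h, d))                  # hidden-layer weights W^(1)
W2 = rng.normal(size=(q, h))                  # output-layer weights W^(2)
x, y = rng.normal(size=d), 1                  # one input example and its label

# Forward propagation: z -> h -> o -> L, plus the regularization term s.
z = W1 @ x
h_vec = np.maximum(z, 0)                      # phi = ReLU (assumed)
o = W2 @ h_vec
p = np.exp(o - o.max()); p /= p.sum()         # softmax
L = -np.log(p[y])                             # cross-entropy loss l(o, y)
s = lam / 2 * ((W1**2).sum() + (W2**2).sum())
J = L + s

# Backpropagation, in the reverse of the forward order.
dJ_do = p.copy(); dJ_do[y] -= 1               # dL/do for softmax cross-entropy
dJ_dW2 = np.outer(dJ_do, h_vec) + lam * W2    # dJ/dW2 = (dJ/do) h^T + lam W2
dJ_dh = W2.T @ dJ_do                          # dJ/dh  = W2^T (dJ/do)
dJ_dz = dJ_dh * (z > 0)                       # dJ/dz  = dJ/dh elementwise phi'(z)
dJ_dW1 = np.outer(dJ_dz, x) + lam * W1        # dJ/dW1 = (dJ/dz) x^T + lam W1
print(J, dJ_dW2.shape, dJ_dW1.shape)
```

Note how every intermediate from the forward pass (z, h_vec, p) is reused on the way back, which is exactly why the intermediate values must be retained until backpropagation is complete.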
SCM Repository: view of /branches/lamont/test/unicode-cheatsheet.diderot, revision 2081, Mon Nov 5 23:26:06 2012 UTC by lamonts, file size: 782 byte(s). Creating new development branch based on vis12 /* useful unicode characters for Diderot ⊛ convolution, as in field#2(3)[] F = bspln3 ⊛ image("img.nrrd"); LaTeX: \circledast is probably typical, but \varoast (with \usepackage{stmaryrd}) is slightly more legible × cross product, as in vec3 camU = normalize(camN × camUp); LaTeX: \times π Pi, as in real rad = degrees*π/360.0; LaTeX: \pi ∇ Del, as in vec3 grad = ∇F(pos); LaTeX: \nabla • dot product, as in real ld = norm • lightDir; LaTeX: \bullet, although \cdot more typical for dot products ⊗ tensor product, as in tensor[3,3] Proj = identity[3] - norm⊗norm LaTeX: \otimes ∞ Infinity, as in output real val = -∞; LaTeX: \infty */ strand blah (int i) { output real out = 0.0; update { stabilize; } } initially [ blah(i) | i in 0..0 ];
The minrank of a graph $G$ is the minimum rank of a matrix $M$ that can be obtained from the adjacency matrix of $G$ by switching some ones to zeros (i.e., deleting edges) and then setting all diagonal entries to one. This quantity is closely related to the fundamental information-theoretic problems of (linear) index coding (Bar-Yossef et al., FOCS'06), network coding and distributed storage, and to Valiant's approach for proving superlinear circuit lower bounds (Valiant, Boolean Function Complexity '92). We prove tight bounds on the minrank of random Erd\H{o}s-R\'enyi graphs $G(n,p)$ for all regimes of $p\in[0,1]$. In particular, for any constant $p$, we show that $\mathsf{minrk}(G) = \Theta(n/\log n)$ with high probability, where $G$ is chosen from $G(n,p)$. This bound gives a near quadratic improvement over the previous best lower bound of $\Omega(\sqrt{n})$ (Haviv and Langberg, ISIT'12), and partially settles an open problem raised by Lubetzky and Stav (FOCS '07). Our lower bound matches the well-known upper bound obtained by the "clique covering" solution, and settles the linear index coding problem for random graphs. Finally, our result suggests a new avenue of attack, via derandomization, on Valiant's approach for proving superlinear lower bounds for logarithmic-depth semilinear circuits.
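To make the definition concrete, here is a tiny brute-force sketch of minrank over GF(2) (the field choice, the example graph, and all names here are illustrative assumptions; the paper's results are not specific to this setup). It enumerates every way of switching ones to zeros, forces the diagonal to one, and takes the minimum rank:

```python
import itertools
import numpy as np

def rank_gf2(M):
    # Rank of a 0/1 matrix over GF(2) by Gaussian elimination.
    M = M.copy()
    r, (rows, cols) = 0, M.shape
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # swap pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]                   # clear column c in other rows
        r += 1
    return r

def minrank_gf2(A):
    # Minimum GF(2) rank over all choices of which ones to keep.
    n = A.shape[0]
    ones = [(i, j) for i in range(n) for j in range(n) if i != j and A[i, j]]
    best = n
    for keep in itertools.product([0, 1], repeat=len(ones)):
        M = np.eye(n, dtype=np.uint8)          # diagonal set to one
        for (i, j), k in zip(ones, keep):
            M[i, j] = k                        # keep or delete this entry
        best = min(best, rank_gf2(M))
    return best

# 4-cycle: should print 2, matching the clique-cover upper bound.
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=np.uint8)
print(minrank_gf2(A))
```

The enumeration is exponential in the number of edges, so this is feasible only for toy graphs; the point of the paper is precisely to control this quantity for large random graphs, where brute force is hopeless.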
While trying to solve questions involving impulses and step functions, we are supposed to assume that an uncharged capacitor or an uncharged inductor acts as a short circuit and an open circuit, respectively. But I don't see the theoretical reasoning behind it. Furthermore, can an impulse show up against a capacitor or inductor with only a step source? The basic equation for a capacitor is charge, Q = voltage, V x capacitance, C. If this is differentiated we get: - \$\dfrac{dQ}{dt} = C\cdot\dfrac{dV}{dt}\$ And rate of change of charge is current, therefore: - \$I = C\cdot\dfrac{dV}{dt}\$ A voltage impulse has very high dV/dt, therefore I is also very high. This, to me, represents a short when dV/dt is infinite. I'll leave you to use the formulas for an inductor to satisfy your curiosity and my laziness. Why does a capacitor act like a short-circuit during a current impulse? It doesn't act like a short circuit for a current impulse. Here's the equation that defines the ideal capacitor: $$i_C(t) = C\cdot \frac{d}{dt}v_C(t) $$ Applying the Laplace transform to this equation (assuming zero initial conditions) yields $$I_C(s) = sC\cdot V_C(s)$$ The Laplace transform for the unit impulse is $$\delta(t) \Leftrightarrow 1$$ Thus, if the capacitor current is the unit impulse, the Laplace voltage is $$V_C(s) = \frac{1A}{sC}$$ Going back to the time domain, we have $$v_C(t) = \frac{1}{C}u(t)\, V$$ Thus, the voltage across the capacitor is a scaled step. If we instead approximate the unit current impulse with a current pulse $$i_C(t) = \frac{1}{T}[u(t) - u(t - T)] $$ then the capacitor voltage is $$v_C(t) = \begin{cases} 0 & \text{if } t \le 0\\ \frac{t}{CT} & 0 \lt t \le T\\ \frac{1}{C} & t \gt T \end{cases} $$ In any case, this is different behavior than that of an ideal short circuit, where the voltage across is zero for any current through. Andy gives a great answer addressing the motivation for such statements. This answer will be the theoretical answer requested in the post. Given an input voltage function \$V(t)\$, the current through the capacitor is \$I(t) = C \frac{dV(t)}{dt}\$. Strictly speaking, if \$V(t)\$ is the unit step function then \$I(t)\$ is not defined at \$0\$. The impulse "function" is even more problematic in that it is not even a function. We basically have two options: 1) Realize that while \$C \frac{d}{dt} \$ is not strictly defined on this space, it is defined on the space of differentiable functions. However, this operator is linear on this dense set and extends uniquely to a linear operator on the entire space of functions. In layman's terms, we can estimate the step/impulse functions by differentiable functions \$f_i(t)\$ and then \$C\frac{df_i(t)}{dt}\$ will estimate \$C\frac{dV(t)}{dt}\$. 2) Use that the space of voltage functions, with appropriate mathematical adjectives, is self-dual to figure out how one can write this in the language of distributions. Option 1) is pretty clear. Any reasonable estimates of the step or impulse functions will clearly have derivatives approaching infinity at 0. Option 2) obviously takes more study but the answer is interesting to think about. As a distribution the impulse functional \$\delta\$ is defined as \$\delta(f)=f(0)\$ for all voltage input functions \$f\$ and the resulting current functional is (uniquely defined by) \$(C\frac{d \delta}{dt})(f) = C\frac{df}{dt}|_{t=0} \$ for all (smooth) voltage input functions \$f\$. This is why you will see an "instantaneous" infinite short current when you feed in the impulse function.
You can do a similar computation for the unit step function. First, you must understand what you are dealing with in terms of voltage and current. A capacitor is an electrical component across which the voltage can only change in a continuous manner; i.e. there can be no 'instant' jumps from one voltage to another. This is always true whether the capacitor is charged or not. This happens because the capacitor stores charge on its plates: as an external voltage is applied across a capacitor, it starts charging or discharging until its voltage matches the applied one. Similarly, an inductor forces the current going through it to always be continuous, regardless of whether it is charged or not, because it stores energy in its magnetic field. The question specified uncharged capacitors and inductors as a means of setting both the voltage on the capacitors and the current on the inductors to zero before the impulse or steps occur. So what happens when a step function first hits a capacitor? The voltage across the capacitor, which had been zero, cannot change instantly, so it stays at zero, while the current through it changes instantly to match the step function. For that instant, this is exactly the same behavior any wire or short circuit would have. A step function hitting an inductor results in an instant change in voltage while the current flowing through remains at zero. This is exactly the same behavior as an open circuit. Now, both of these components start changing over time. Given enough time, the capacitor starts acting as an open circuit and the inductor as a short circuit. But you aren't dealing with that right now. You are just dealing with the instantaneous responses. As to whether an impulse can show up against a capacitor or inductor with only a step source, the answer is that it depends entirely on what part of the impulse you are looking for. If you are looking for the voltage across an inductor, for example, it will most definitely show up. If you are looking for a current through the inductor, however, no, an impulse will be invisible. I don't know if this helps, but it's the way I think about it: in short, the reason is that an impulse is imagined to be an infinite current in an infinitely short time. (The total charge is finite.) So when dealing with infinities, anything not infinite is effectively zero. I.e. the capacitor can only pick up a finite charge (Q = CV), therefore it can only develop a finite voltage, which is treated as approximately zero in the heuristic impulse analysis. Resistors will get infinite voltage across them, so they conduct according to V=IR (V is infinite). However, inductors are open circuits for rapidly changing current, so do not conduct. As for the step analysis, I think Brian Drozd above had a good answer. At t=0, Vc(0) (=0) cannot jump to the step input voltage. So, Vc(0)=0, which means that the capacitor is short-circuited at that moment. Thus, the step input voltage is transferred to the resistor.
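As a quick numerical illustration of the pulse argument above (a sketch with assumed component values, not taken from the answers): drive an ideal capacitor with a rectangular current pulse of unit area and shrink its width T; the final voltage stays pinned at 1/C, the scaled step derived earlier, while the pulse amplitude 1/T grows without bound.

```python
import numpy as np

C = 1e-6                                   # assumed capacitance, 1 uF
t = np.linspace(0.0, 2e-3, 200001)         # 2 ms time axis
dt = t[1] - t[0]
for T in (1e-3, 1e-4, 1e-5):               # successively narrower pulses
    i = np.where(t < T, 1.0 / T, 0.0)      # unit-area current pulse
    v = np.cumsum(i) * dt / C              # v(t) = (1/C) * integral of i dt
    print(f"T = {T:.0e} s: peak current = {1.0/T:.0e} A, "
          f"final voltage = {v[-1]:.3e} V")
```

The printed final voltage is (up to discretization error) 1/C volts in every case, which is exactly the scaled step $\frac{1}{C}u(t)$ from the Laplace-domain analysis above.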
I am looking for pointers on how one might approach the following definite integral: $$ \int_{-\infty}^\infty e^{-x^4 + x^2}\, dx$$ Or more generally: $$ \int_{-\infty}^\infty e^{-x^4 + \alpha x^2}\, dx, \quad \alpha > 0$$ Mathematica does return the following result, which seems correct based on numerical verification: $$ \frac{\pi e^{\frac{\alpha ^2}{8}} \sqrt{\alpha } \left(I_{\frac{1}{4}}\left(\frac{\alpha ^2}{8}\right)+I_{-\frac{1}{4}}\left(\frac{\alpha ^2}{8}\right)\right)}{2 \sqrt{2}} $$ Here $I_a$ is the modified Bessel function of the first kind, which I am not very familiar with, though I can see its definition as the solution of a differential equation. Is there anything I can do, other than browse formula tables like this one (p. 21), to see how one may arrive at this result, or perhaps how to arrive at a different (and potentially more useful) representation?
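For what it's worth, the closed form can at least be checked numerically before hunting for a derivation (a sketch assuming scipy is available; iv is scipy's modified Bessel function of the first kind):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def lhs(alpha):
    # Direct numerical evaluation of the integral.
    val, _ = quad(lambda x: np.exp(-x**4 + alpha * x**2), -np.inf, np.inf)
    return val

def rhs(alpha):
    # The Bessel-function expression quoted from Mathematica.
    z = alpha**2 / 8
    return (np.pi * np.exp(z) * np.sqrt(alpha)
            * (iv(0.25, z) + iv(-0.25, z)) / (2 * np.sqrt(2)))

for alpha in (0.5, 1.0, 2.0):
    print(alpha, lhs(alpha), rhs(alpha))
```

The two columns should agree to quadrature precision, which at least confirms the formula is worth deriving, e.g. by expanding $e^{\alpha x^2}$ in a power series, integrating term by term against $e^{-x^4}$, and comparing with the series defining $I_{\pm 1/4}$.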
Main Page The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Useful background materials Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (inactive) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (inactive) (500-599) Possible proof strategies (active) (600-699) A reading seminar on density Hales-Jewett (active) (700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (active) There is also a chance that we will be able to improve the known bounds on Moser's cube problem. Here are some unsolved problems arising from the above threads. Here is a tidy problem page. Bibliography Density Hales-Jewett H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. Behrend-type constructions M. Elkin, "An Improved Construction of Progression-Free Sets ", preprint. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. Triangles and corners M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299 I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318 J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR 2047239
This is College Physics Answers with Shaun Dychko. A particle is being examined using a photon, as most things are, and we are going to show that the uncertainty in the energy of the object or the particle multiplied by the uncertainty in time is approximately equal to Planck's constant. And we are going to use the assumption that the uncertainty in position of the particle is approximately equal to the wavelength of light being used to observe it. And also use the fact that up to all of the photon's energy could be transferred to the particle. And so the measurement we have for the particle's energy can be influenced by the photon making the measurement, and so the photon could be changing the energy of the thing whose energy it's measuring, in the worst-case scenario by an amount equal to the photon's energy. OK. So first let's figure out an expression for the uncertainty in time; it's gonna be the uncertainty in position divided by the speed of the photon, and the uncertainty in position, we are told, is the wavelength of the photon. Then we can turn our attention to the uncertainty in energy. And so the uncertainty in energy, we are told, could be as bad as the energy of the photon. So let's express this energy of the photon, Planck's constant times frequency, in terms of its wavelength. So the wave equation says that the speed of the wave equals the frequency times the wavelength, and we'll divide both sides by lambda to get an expression for frequency: the speed of light over lambda, and then replace f with that here. So the energy is hc over λ, and we'll say this photon energy is the uncertainty in the energy measurement of the particle. So then we multiply this by this, because this is our expression for uncertainty in energy, hc over λ, and our uncertainty in time is λ over c, and we see that this equals h. Quod erat demonstrandum. Question Derive the approximate form of Heisenberg's uncertainty principle for energy and time, $\Delta E \Delta t \approx h$, using the following arguments: Since the position of a particle is uncertain by $\Delta x \approx \lambda$, where $\lambda$ is the wavelength of the photon used to examine it, there is an uncertainty in the time the photon takes to traverse $\Delta x$. Furthermore, the photon has an energy related to its wavelength, and it can transfer some or all of this energy to the object being examined. Thus the uncertainty in the energy of the object is also related to $\lambda$. Find $\Delta t$ and $\Delta E$; then multiply them to give the approximate uncertainty principle. Final Answer Please see the solution video for the derivation.
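Compactly, the derivation in the transcript reads (same notation as the question): $$\Delta t \approx \frac{\Delta x}{c} = \frac{\lambda}{c}, \qquad \Delta E \approx E_{\text{photon}} = hf = \frac{hc}{\lambda}, \qquad \Delta E\,\Delta t \approx \frac{hc}{\lambda}\cdot\frac{\lambda}{c} = h.$$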
In the answers to this question, it is said that: The de-icing system on most turbine aircraft (including the MD-82 involved in that accident) uses bleed air from the engines, that is, it extracts some air from behind the (low pressure stage of the) compressor. This air is therefore not ejected from the nozzle and not producing thrust, so the thrust is reduced. My question is: Why is bleed air taken from some stage of the compressor used? Why not e.g. exhaust gas? For de-icing, the temperature of the used medium must be above 0°C (melting ice), while the environmental temperature usually is far below. So the engine has to invest energy to generate bleed air at a reasonable temperature; the air cools down to still >0°C during de-icing, and is then vented out to the environment. There, it expands and cools down far below the environmental temperature. This expansion means that energy is wasted, and also, if the bleed air is cooled down / expanded before inserting it into the de-icing system (I don't know if this is done), energy would be lost. In addition, air/pressure is lost from the compressor stage, which makes the combustion less effective. (Again, I don't know how much air is taken, and how big the effect is.) On the other side, the exhaust gas of the engine is very hot due to the combustion and could be used without the need of extra compression. So, using some exhaust gas, one would not waste so much energy. If it's too dirty, heat exchangers could be used to heat fresh air. I can think of these reasons: Bleed air is used for many purposes in an aircraft anyway, so this is more economic than a completely separate system. De-icing is not used for a long time during flight, so again no need for a dedicated system. EDIT: I'd like to expand the question to explain what makes me curious. In the comments, the correctness of this sentence is challenged: "There, it expands and cools down far below the environmental temperature." This is a simple thermodynamic process. The air is compressed adiabatically, i.e. without adding heat to it; the temperature rise comes from the work of compression concentrated in a smaller volume. De-icing cools down the air, and when the air is released to the environment, it expands to the original pressure. As thermal energy has been removed during de-icing, the temperature drops below the environmental temperature. Here is the math behind it. The relation of pressure and temperature in this case is: $$ p_1^{1-\gamma}\cdot T_1^\gamma=p_2^{1-\gamma}\cdot T_2^\gamma\qquad \gamma \approx 1.4$$ Let's assume a pilot switches on de-icing during flight at 11 km altitude. There, environmental pressure is 0.25 bar and temperature is -50°C (223 K). It was also said here in the answers that it's possible that the bleed air has about 200°C (473 K). The formula now gives a bleed air pressure of 3.47 bar, so a pressure ratio of about 14. The air is now cooled down while the pressure is maintained by the engine. I assume de-icing will be effective for bleed air temperatures above 0°C. So if the air is released at this temperature, the temperature will fall to -144°C (128 K). Another number: if released at 100°C, the temperature will drop to -97°C (175 K). (Of course, the air will mix with the environmental air immediately.) In principle, one can play with the numbers, increase/decrease altitudes / temperatures, and discuss how adiabatic these (de)compression processes are. Anyway, this is a big air conditioner, using the thermal energy for de-icing and wasting the cooled air.
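The numbers above follow directly from the adiabatic relation; here is a minimal Python sketch reproducing them (same assumptions as in the text: γ = 1.4, ambient 0.25 bar / 223 K, 200°C bleed air):

GAMMA = 1.4
EXP = (GAMMA - 1) / GAMMA          # exponent in T2/T1 = (p2/p1)**EXP

p_amb, T_amb = 0.25, 223.0         # ambient at 11 km: 0.25 bar, -50 C
T_bleed = 473.0                    # assumed bleed-air temperature, 200 C

# Pressure needed so that adiabatic compression from ambient gives 200 C
p_bleed = p_amb * (T_bleed / T_amb) ** (1 / EXP)
print(f"bleed pressure {p_bleed:.2f} bar, ratio {p_bleed / p_amb:.0f}")  # ~3.47 bar, ~14

# Temperature after venting: adiabatic expansion back to ambient pressure
for T_release in (273.0, 373.0):   # air released at 0 C and at 100 C
    T_out = T_release * (p_amb / p_bleed) ** EXP
    print(f"released at {T_release - 273:.0f} C -> {T_out:.0f} K ({T_out - 273:.0f} C)")
# prints roughly 128 K (about -145 C) and 176 K (about -97 C), matching the text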
If one only needs hot air, something coming from the exhaust system would always be more efficient; the bleed-air scheme is not really efficient. Maybe the bleed air behind the de-icing system can still be used for other purposes, since it still holds its pressure?
What causes the melting and boiling points of noble gases to rise as the atomic number increases? What role do the valence electrons play in this? The melting and boiling points of noble gases are very low in comparison to those of other substances of comparable atomic and molecular masses. This indicates that only weak van der Waals forces, or weak London dispersion forces, are present between the atoms of the noble gases in the liquid or the solid state. The van der Waals force increases with the size of the atom, and therefore, in general, the boiling and melting points increase from $\ce{He}$ to $\ce{Rn}$. Helium boils at $-269\ \mathrm{^\circ C}$. Argon has a larger mass than helium and has larger dispersion forces. Because of the larger size, the outer electrons are less tightly held in the larger atoms, so that instantaneous dipoles are more easily induced, resulting in greater interaction between argon atoms. Therefore, its boiling point ($-186\ \mathrm{^\circ C}$) is higher than that of $\ce{He}$. Similarly, because of increased dispersion forces, the boiling and melting points of the monoatomic noble gases increase from helium to radon. For more data on the melting and boiling points of the noble gases, read this page. Other answers have mentioned that dispersion forces are the key to answering the question, but not how they increase from helium to radon (or let's take xenon, because that's not radioactive, so I feel safer breathing it). The larger the mass of a nucleus, the more protons are in there, and the more protons in a nucleus, the more electrons are around the outside. Traditionally, one thought of electrons orbiting the nucleus rather like satellites orbiting a planet, on more or less fixed orbits at certain heights. But that picture is wrong. It is much better to consider electrons as waves that completely surround the nucleus. If one were to translate these waves into particles (because of the wave-particle dualism at quantum levels), all one would get would be probabilities of finding a specific electron $e$ at a specific location $x$. For a neutral atom that is surrounded by nothing, these probabilities depend only on the wave function, and are inherently centrosymmetric or anticentrosymmetric, leading to a net charge distribution of zero. In a sample of xenon gas, however, other atoms approach the atom we are observing. Consider an atom approaching our xenon atom from 'above' (i.e. perpendicular to our viewpoint). While the atom as a whole is neutral, the cloud of electrons surrounding the nucleus is negatively charged. This new negative charge changes the potential energy the electrons in our observed atom are perceiving: while we originally had a centrosymmetric potential distribution (decreasing positive charge intensity away from the nucleus), we now have a second, negative source at 12 o'clock. Therefore, the probability distributions will shift ever so slightly, and it will be slightly more likely to detect our electron $e$ at position $y$ below the nucleus rather than at position $z$ above the nucleus. Note that this simplification relies on freezing time at a certain moment when the other atom is approaching, in a certain controlled environment that I have invented for example purposes. With the electrons now more likely to be at $y$ rather than $z$, we can say that we have created a spontaneous or induced dipole. A mild positive charge is now pointing towards the other atom and ever so slightly attracting it.
This will, if you advance the time flow by one instant, create another induced dipole in the originally approaching atom, and further in every other atom that is close around. We cannot freeze this picture, though. Every infinitesimal change in position, direction of movement or rotation will change the entire picture, meaning that our induced dipole is extremely short-lived. It is only the combination of all these induced dipoles and their slight attraction that attracts the different atoms together. Since they rely on the electron distribution changing, the more electrons we have, the stronger these induced dipoles can be and the more force can be exerted between atoms. Since the number of electrons loosely correlates with mass (and strictly correlates with nuclear charge), larger atoms are said to display stronger van der Waals forces than smaller ones. Valence electrons as such have not much to do with this, as the outer shell is closed. As the other answer mentioned, dispersion forces are the ones responsible for any interaction between these atoms. The size dependence therefore comes directly from the size dependence of dispersion forces: in a very simplistic picture, a random charge fluctuation can polarize the otherwise perfectly apolar atoms. This induced dipole moment is then responsible for the dispersion interactions. The polarizability of an atom increases (it becomes easier to polarize) as the atomic number increases; the interactions in noble gases reflect this behavior. As mentioned in other answers, the dispersion force is responsible for noble gases forming liquids. The calculation of the boiling points is outlined below, after some general comments about the dispersion force. The dispersion force (also called the London, charge-fluctuation, or induced-dipole-induced-dipole force) is universal, just like gravity, as it acts between all atoms and molecules. The dispersion forces can be long-range, >10 nm down to approx. 0.2 nm depending on circumstances, and can be attractive or repulsive. Although the dispersion force is quantum mechanical in origin, it can be understood as follows: for a non-polar atom such as argon the time-averaged dipole is zero, yet at any instant there is a finite dipole given by the instantaneous positions of the electrons relative to the nucleus. This instantaneous dipole generates an electric field that can polarise another nearby atom and so induce a dipole in it. The resulting interaction between these two dipoles gives rise to an instantaneous attractive force between the two atoms, whose time average is not zero. The dispersion energy was derived by London in 1930 using quantum mechanical perturbation theory. The result is $$U(r)=-\frac{3}{2}\frac{\alpha_0^2I}{(4\pi\epsilon _0)^2r^6}=-\frac{C_{\mathrm{disp}}}{r^6}$$ where $\alpha_0$ is the electronic polarisability, $I$ the first ionisation energy, $\epsilon_0$ the permittivity of free space and $r$ the separation of the atoms. The electronic polarisability $\alpha_0$ arises from the displacement of an atom's electrons relative to the nucleus, and it is the constant of proportionality between the induced dipole and the electric field $E$, viz., $\mu_{\mathrm{ind}} = \alpha_0 E$. The polarisability has units of $\pu{J-1 C2 m2}$, which means that in SI units $\alpha_0/(4\pi\epsilon_0)$ has units of $\pu{m3}$; this polarisability is in effect a measure of electronic volume, or put another way $\alpha_0 = 4\pi\epsilon_0r_0^3$, where experimentally it is found that $r_0$ is approximately the atomic radius.
The ionisation energy $I$ arises because, to estimate $r_0$, a simple model of an atom is used to calculate the orbital energy and hence the radius, and in doing so the energy is equated to the ionisation energy, since this can be measured. As can be seen from the formula, the energy depends on the product of the square of the polarisability (i.e. the volume of the molecule or atom) and its ionisation energy, and also on the reciprocal of the sixth power of the separation of the molecules/atoms. In a liquid of noble gases this separation may be taken to be the atomic radius, $r_0$. Thus the dependence is much more complex than just size; see the table of values below. The increase in polarisability as the atomic number increases is offset somewhat by the reduction in ionisation energy and the increase in atomic radius. If experimental values are put into the London equation, then the attractive energy can be calculated. In addition, the boiling point can be estimated by equating the London energy with the average thermal energy as $U(r_0)=3k_\mathrm{B}T/2$, where $k_\mathrm B$ is the Boltzmann constant and $T$ the temperature. The relevant parameters are given in the table below, with values in parentheses being experimental values: [1] $$\begin{array}{c|c|c|c|c|c} \text{Noble gas} & (\alpha_0/4\pi\epsilon_0)~/~\pu{10^{-30}m^3} & I~/~\pu{eV} & r_0~/~\pu{nm} & C_\mathrm{disp}~/~\pu{10^{-79} J m6} & T_\mathrm{b}~/~\pu{K} \\ \hline \ce{Ne} & 0.39 & 21.6 & 0.308 & 3.9~(3.8) & 22~(27) \\ \ce{Ar} & 1.63 & 15.8 & 0.376 & 50~(45) & 85~(87) \\ \ce{Xe} & 4.01 & 12.1 & 0.432 & 233~(225) & 173~(165) \end{array}$$ The fit to the data is very good; possibly this is fortuitous, but these are spherical atoms showing only dispersion forces, so a good correlation to experiment is expected. However, there are short-range repulsive forces that are ignored, as well as higher-order attractive forces. Nevertheless, it does demonstrate that dispersion forces can account for the trend in boiling points quite successfully. Israelachvili, J. N. Intermolecular and Surface Forces, 3rd ed.; Academic Press: Burlington, MA, 2011; p 110.
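The $T_\mathrm{b}$ column can be reproduced from the $C_\mathrm{disp}$ and $r_0$ columns alone; a minimal Python sketch of the estimate $U(r_0) = 3k_\mathrm{B}T/2$:

k_B = 1.380649e-23                     # Boltzmann constant, J/K

# (C_disp in 1e-79 J m^6, r0 in nm), taken from the table above
data = {"Ne": (3.9, 0.308), "Ar": (50.0, 0.376), "Xe": (233.0, 0.432)}

for gas, (C, r0_nm) in data.items():
    U = C * 1e-79 / (r0_nm * 1e-9) ** 6    # |U(r0)| = C_disp / r0^6
    T_b = 2 * U / (3 * k_B)                # equate to (3/2) k_B T
    print(f"{gas}: estimated T_b ~ {T_b:.0f} K")
# Ne ~22 K, Ar ~85 K, Xe ~173 K, against the experimental 27, 87, 165 K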
$ L^p $-$ L^q $ estimates for the damped wave equation and the critical exponent for the nonlinear problem with slowly decaying data
1. Department of Mathematics, Faculty of Science and Technology, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan
2. Center for Advanced Intelligence Project, RIKEN, Japan
3. Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
4. Division of Mathematics and Physics, Faculty of Engineering, Shinshu University, 4-17-1 Wakasato, Nagano City 380-8553, Japan
5. Department of Engineering for Production and Environment, Graduate School of Science and Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama, Ehime, 790-8577, Japan
The paper concerns the linear damped wave equation
$$ \partial_{t}^2 u - \Delta u + \partial_t u = 0, $$
for which $L^p$-$L^q$ estimates are derived in the range $1\le q \le p < \infty$ ($p\neq 1$). For the associated nonlinear problem with slowly decaying data in $(H^s\cap H_r^{\beta}) \times (H^{s-1} \cap L^r)$, where $r \in (1,2]$, $s\ge 0$, and $\beta = (n-1)\left|\frac{1}{2}-\frac{1}{r}\right|$, the critical exponent is $1+\frac{2r}{n}$, which reduces to $1+\frac{2}{n}$ in the classical case $r = 1$.
Mathematics Subject Classification: Primary: 35L71; Secondary: 35A01, 35B40, 35B44.
Citation: Masahiro Ikeda, Takahisa Inui, Mamoru Okamoto, Yuta Wakasugi. $L^p$-$L^q$ estimates for the damped wave equation and the critical exponent for the nonlinear problem with slowly decaying data. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1967-2008. doi: 10.3934/cpaa.2019090
I have some doubts about the proof in Example $4.8$ in Chapter $0$ of Do Carmo's Riemannian Geometry book, concerning the part of the proof that the canonical projection of a differentiable manifold $M$ onto its quotient space $M/G$, given by the relation defined by a properly discontinuous action, is a local diffeomorphism. Precisely, I didn't understand the following statement: Let $p_1 = (\pi_1^{-1} \circ \pi_2) (p_2)$. Then $p_1$ and $p_2$ are equivalent in $M$, hence there is a $g \in G$ such that $gp_2 = p_1$. It follows easily that the restriction $(\pi_1^{-1} \circ \pi_2)|_{\textbf{x}_2(W)}$ coincides with the diffeomorphism $\varphi_g|_{\textbf{x}_2(W)}$. Why does $(\pi_1^{-1} \circ \pi_2)|_{\textbf{x}_2(W)}$ coincide with the diffeomorphism $\varphi_g|_{\textbf{x}_2(W)}$? How is the fact that the action is properly discontinuous used here to ensure that the functions coincide on $\textbf{x}_2(W)$? Is $G$ the group of $\textbf{all}$ diffeomorphisms of $M$ or just a group of $\textbf{specific}$ diffeomorphisms of $M$? About my last question, I think $G$ is the group of $\textbf{all}$ diffeomorphisms of $M$, because he stated that $p_1$ and $p_2$ are equivalent. Below is Example $4.8$. $4.8$ Example. ($\textit{Discontinuous action of a group}$). We say that a group $G$ acts on a differentiable manifold $M$ if there exists a mapping $\varphi: G \times M \rightarrow M$ such that (i) For each $g \in G$, the mapping $\varphi_g: M \longrightarrow M$ given by $\varphi_g(p) = \varphi(g,p)$, $p \in M$, is a diffeomorphism and $\varphi_e = \text{identity}$. (ii) If $g_1, g_2 \in G$, $\varphi_{g_1g_2} = \varphi_{g_1} \circ \varphi_{g_2}$. Frequently, when dealing with a single action, we set $\varphi(g,p) = gp$; in this notation, condition $(ii)$ can be interpreted as a form of associativity: $(g_1g_2)p = g_1(g_2p)$. We say that the action is $\textit{properly discontinuous}$ if every $p \in M$ has a neighborhood $U \subset M$ such that $U \cap g(U) = \emptyset$ for all $g \neq e$. When $G$ acts on $M$, the action determines an equivalence relation $\sim$ on $M$, in which $p_1 \sim p_2$ if and only if $p_2 = gp_1$, for some $g \in G$. Denote the quotient space of $M$ by this equivalence relation by $M / G$. The mapping $\pi: M \longrightarrow M/G$, given by $$\pi(p) = \text{equiv. class of} \ p = Gp$$ will be called the $\textit{projection}$ of $M$ onto $M/G$. Now let $M$ be a differentiable manifold and let $\varphi: G \times M \longrightarrow M$ be a properly discontinuous action of a group $G$ on $M$. We are going to show that $M/G$ has a differentiable structure with respect to which the projection $\pi: M \longrightarrow M/G$ is a local diffeomorphism. For each $p \in M$ choose a parametrization $\textbf{x}: V \longrightarrow M$ at $p$ so that $\textbf{x}(V) \subset U$, where $U \subset M$ is a neighborhood of $p$ such that $U \cap gU = \emptyset$, $g \neq e$. Clearly $\pi|_U$ is injective, hence $\textbf{y} = \pi \circ \textbf{x}: V \longrightarrow M/G$ is injective. The family $\{ (V,\textbf{y}) \}$ clearly covers $M/G$; for such a family to be a differentiable structure, it suffices to show that given two mappings $\textbf{y}_1 = \pi \circ \textbf{x}_1: V_1 \longrightarrow M/G$ and $\textbf{y}_2 = \pi \circ \textbf{x}_2: V_2 \longrightarrow M/G$ with $\textbf{y}_1(V_1) \cap \textbf{y}_2(V_2) \neq \emptyset$, then $\textbf{y}_1^{-1} \circ \textbf{y}_2$ is differentiable. For this, let $\pi_i$ be the restriction of $\pi$ to $\textbf{x}_i(V_i)$, $i = 1,2$.
Let $q \in \textbf{y}_1(V_1) \cap \textbf{y}_2(V_2)$ and let $\textbf{r} = \textbf{x}_2^{-1} \circ \pi_2^{-1}(q)$. Let $W \subset V_2$ be a neighborhood of $\textbf{r}$ such that $(\pi_2 \circ \textbf{x}_2)(W) \subset \textbf{y}_1(V_1) \cap \textbf{y}_2(V_2)$. Then, the restriction to $W$ is given by $$\textbf{y}_1^{-1} \circ \textbf{y}_2|_W = \textbf{x}_1^{-1} \circ \pi_1^{-1} \circ \pi_2 \circ \textbf{x}_2.$$ Therefore, it is enough to show that $\pi_1^{-1} \circ \pi_2$ is differentiable at $p_2 = \pi_2^{-1}(q)$. Let $p_1 = \pi_1^{-1} \circ \pi_2 (p_2)$. Then $p_1$ and $p_2$ are equivalent in $M$, hence there is a $g \in G$ such that $gp_2 = p_1$. It follows easily that the restriction $\pi_1^{-1} \circ \pi_2|_{\textbf{x}_2(W)}$ coincides with the diffeomorphism $\varphi_g|_{\textbf{x}_2(W)}$, which proves that $\pi_1^{-1} \circ \pi_2$ is differentiable at $p_2$, as stated. Thanks in advance!
The forward rate from time $t$ to $T$ ($f_{t,T}$) can be approximated by: $$ f_{t,T}= \left[ \frac{(1+r_T)^T}{(1+r_t)^t} \right]^{\frac{1}{{T-t}}}-1 \sim \frac{(1+r_T)^T-(1+r_t)^t}{T-t} $$ Why is that the case? You may show it as follows, using $\ln x \approx x - 1$ and $e^x \approx 1 + x$ for arguments close to $1$ and $0$, respectively: \begin{align*} f_{t,T}&= \left[ \frac{(1+r_T)^T}{(1+r_t)^t} \right]^{\frac{1}{T-t}}-1\\ &=e^{\frac{1}{T-t} \left[\ln (1+r_T)^T - \ln (1+r_t)^t \right]} -1\\ &\approx e^{\frac{1}{T-t} \left[(1+r_T)^T-1 - \big((1+r_t)^t -1\big)\right]} -1\\ &=e^{\frac{1}{T-t} \left[(1+r_T)^T - (1+r_t)^t\right]} -1\\ &\approx 1+ \frac{1}{T-t} \left[(1+r_T)^T - (1+r_t)^t\right] -1\\ &=\frac{1}{T-t} \left[(1+r_T)^T - (1+r_t)^t\right]. \end{align*}
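A quick numerical sanity check of the approximation (a sketch with made-up zero rates: 2% at $t=1$, 3% at $T=2$):

t, T = 1.0, 2.0
r_t, r_T = 0.02, 0.03   # hypothetical zero rates

exact = ((1 + r_T) ** T / (1 + r_t) ** t) ** (1 / (T - t)) - 1
approx = ((1 + r_T) ** T - (1 + r_t) ** t) / (T - t)

print(f"exact forward rate: {exact:.6f}")   # ~0.040098
print(f"approximation:      {approx:.6f}")  # ~0.040900

Both land near 4%, and the gap grows with the size of the rates and horizons, which is what the first-order expansions above suggest.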
Comparison Tests for Convergence Example: $\displaystyle\int_1^\infty \frac{dx}{x^4}$ converges, and, since $\displaystyle\frac{1}{x^4+1} < \frac{1}{x^4}$, we can conclude that $\displaystyle\int_1^\infty \frac{dx}{1+x^4}$ also converges. Notice that the test doesn't tell us the value of the integral, just that it converges. The idea of comparison tests will be used often when we get to sequences and series. It's a good idea to pay close attention to the following video.
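For the record, the comparison integral itself is easy to evaluate, which is what makes the bound concrete:
\[ \int_1^\infty \frac{dx}{x^4} = \lim_{b\to\infty}\left[-\frac{1}{3x^3}\right]_1^b = \frac{1}{3}, \qquad\text{so}\qquad 0 \;\le\; \int_1^\infty \frac{dx}{1+x^4} \;\le\; \frac{1}{3}. \]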
There are many different methods for this. Most people rely on a unit root test. Rmetrics has collected the most common unit root tests into the fUnitRoots package, which primarily provides a wrapper for Bernhard Pfaff's urca package. These include:
Augmented Dickey–Fuller (ADF) test
Elliott–Rothenberg–Stock test
KPSS unit root test
Phillips–Perron ...
There are a number of different tests that are generally used to compare samples to different distributions, such as Jarque-Bera, Anderson-Darling, and Kolmogorov–Smirnov (see this related question). In your case, with just the standard deviation and mean, there isn't a whole lot to say. You need to assume a distribution (e.g. normal). You would be able ...
The main problem in your code is this line:
rowSums(coef(model) * frame[, -1])
I'm not sure exactly what it does, perhaps some matrix multiplication, but definitely not what you expect it to do. Try to replace it with manual multiplication:
spread <- frame[,1] - (coef(model)[1]*frame[,2] + coef(model)[2]*frame[,3] + coef(model)[3]*frame[,4] + coef(...
I will assume a white noise is a process $(\varepsilon_t)$ with zero mean, no autocorrelation and constant variance $\sigma^2 > 0$, while a random walk is a process $(x_t)$ defined by $$x_{t+1} = x_t + \varepsilon_{t+1}$$ where $\varepsilon$ is a white noise. 1) No, since $Var(x_{t+1}) = Var(x_t) + Var(\varepsilon_{t+1})$ is strictly increasing while ...
There are a lot of ways to understand why stationarity allows one to apply the usual time series analysis. Here is one more. Very often, the theoretical justification of what you do in time series requires being able to identify the time average with the expectation: $$\frac{1}{N}\sum_{n=1}^N X_n \underset{N\rightarrow +\infty}{\longrightarrow} \mathbb{E} X, $$ where the ...
To simplify, consider the errors rather than the returns. The variance is effectively the average of the squared errors, while absolute deviation is the average of the absolute errors. So plotting the squared errors or absolute errors over time could give an indication of whether the variance or absolute deviation is constant over time. Since variance is ...
Yet another alternative are wavelet-based tests. With comparable size, they often have higher power, especially for very-near-unit-root alternatives. An example is here (free pre-print versions of this paper are available, too). In terms of interpretation, an $MA$ model simply means that the time series is a function of the errors from previous periods. You might find it informative to consider plotting simple $AR(1)$ models alongside various $ARMA(1,1)$ to develop a more intuitive understanding. For instance, the $AR(1)$ model (chosen as it is common for financial time series) $$x_{...
To quote Wikipedia: In mathematics, a stationary process (or strict(ly) stationary process or strong(ly) stationary process) is a stochastic process whose joint probability distribution does not change when shifted in time. Consequently, parameters such as the mean and variance, if they are present, also do not change over time and do not follow any ...
I think the main difference even in this little example is the gain-loss asymmetry, which is a known stylized fact: when you look at the big bump both time series possess, your artificial one is perfectly symmetric, whereas the real one takes longer going up and then crashes in a relatively shorter time frame. This is a known phenomenon in real financial ...
We can talk about whether a strictly stationary or weakly stationary process might usefully describe that data.
My answer to both would be yes. I also have issues with other text that people have written here. A review of mathematical definitions: a stochastic process $\{X_t\}$ is called strictly stationary if its joint distribution function $F(X_{t}, ...
Simple... because you are interested in deviations from a metric, and not whether it deviates above or below. The very definition of volatility is a "measure of deviation". Squaring returns or using the absolute values just eases the calculation to arrive at a deviation measure. Otherwise volatility would have to be calculated in other ways as positive and ...
O-U is a continuous-time mean-reverting process, hence used to model stationary series. It has a closed-form analytic solution. This allows insight into stationary processes and acts like an asymptotic limiting case for calculating coefficients that matter. EDIT: You can see AR(1) below $$x_{k+1} = c + a x_k + b\varepsilon_k$$ and by substituting c=θμΔt, a=−θΔt ...
The concept of 'mean reversion' is tricky in continuous time. Most people would call 'mean reverting' a process where the drift pulls back towards a long-run mean, and I assume that this is what you also mean. Something like the drift of an OU process. However, in continuous time the 'pull' can be generated by the volatility. For example, the process $$dX_t ...
The point of confusion may be in thinking that a predictable price process is synonymous with a mean-reverting process, while using the definitions in these papers it's actually the opposite! In the context of these papers, a random walk would be 100% predictable: the unpredictable component of a random walk (i.e. the period-specific shock which has finite ...
Here is a possible explanation. Consider $X_t \sim N(0,1)$ and $Y_t \sim N(1,1)$. Then $(X_t)_0^n$ and $(Y_t)_0^n$ are realizations from stationary time series, and I would expect the null hypothesis of stationarity not to be rejected (compatibly with the size of your test). Instead, the sample $(Z_t)_1^{2n} = (X_1, \dots, X_n, Y_1, \dots, Y_n)$ is drawn from ...
Saying that you can't analyze something as-is does not make it garbage. You can't eat flour "as-is", but that doesn't mean you throw it out. In order to use "standard" analysis tools, you must first transform the series into something compatible. Some examples of such a transformation include k-th order differences or a log transformation. These ...
Autocorrelation is the correlation of a series with itself. Suppose $X = \{X_1, X_2, X_3, ...\}$ is your time series. Then the autocorrelation between $X_t$ and $X_s$ is: $$\frac{E[(X_t-\mu_t)(X_s-\mu_s)]}{\sigma_t \sigma_s}$$ This can be simplified quite a lot if the series you have is stationary (a common assumption), in which case the autocorrelation ...
The tseries package has GARCH models. Here is some simple code:
library(quantmod)
library(tseries)
getSymbols("MSFT")
ret <- diff.xts(log(MSFT$MSFT.Adjusted))[-1]
arch_model <- garch(ret, order = c(0, 3))
garch_model <- garch(ret, order = c(3, 3))
plot(arch_model)
plot(garch_model)
...
@Sergey correctly identified the problem. The explanation is that coef(model) is a vector, frame is a data.frame, and element-by-element multiplication takes place in column-major order. The shorter vector (coef(model)) is recycled along the longer vector (each column in frame). For example:
frame <- data.frame(V1 = 1:5)
frame$V2 <- 2
frame$V3 <- ...
Ergodicity is connected to mixing, meaning there is one limiting distribution and it is used for time averages too.
If you take a process in the real numbers that starts at a random value and then just stays at its initial point, it is stationary but not ergodic, because there is not a unique distribution for time averages.
You should de-trend at whatever frequency scale you are testing, i.e. 1 min means de-trend 1 min data. Merely by moving to higher-frequency data, you are eliminating much of the systematic bias present at higher scales, as 1) you have many more samples to compare (minimizing standard error), and 2) at smaller intervals, the drift component also shrinks ...
For both time series, just plot the log returns. You will see that one of them, the S&P 500, is not a random walk, since you will get values far beyond the normal distribution. Just watch this video by Benoit Mandelbrot (starting at 11min:54sec). Looking at both graphs, your eyes can fool you, making you believe that both are generated by random walks ...
1) Autocorrelation is the correlation of a time series against the lagged version of itself. 2) The first autocorrelation is the correlation of the time series against the lag(1) version of itself. Let's look at the example below:
Period_Numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Time_Series = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
First autocorrelation is ...
I think you misunderstood the definition. Being stationary does not mean not depending on time, as you can check here. (Sorry for putting a Wikipedia link here, as I suppose you may have read it.) Another way to think of it is that the law of any increment of the process is given by the same function of the time difference. More precisely, $\forall ~t_2\geq t_1$: $$\...
Consider the following AR(1) process with i.i.d. normal errors that have zero mean and finite variance $\sigma^2>0$: $$ x_t = (1-\rho)\mu + \rho x_{t-1} + \epsilon _t$$ Now suppose $ \rho = 1/2$ and $\mu = 1$. This process does not have a unit root, and it is not mean stationary. At any point in time, the process has finite variance, although as time ...
Any data transformation to assure stationarity eliminates part of the signal. In many cases the signal is not completely eliminated, so you can still perform the required analyses; but in some others, as may be your case, the signal is erased and the results seem to indicate that your variables have lost predictive power, although the predictive power may have been ...
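To make the AR(1)-versus-unit-root distinction in these answers concrete, here is a small simulation sketch (statsmodels' adfuller is one implementation of the ADF test mentioned above; all parameter values are illustrative):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n, rho, mu = 2000, 0.5, 1.0
eps = rng.normal(size=n)

# Stationary AR(1):  x_t = (1 - rho) * mu + rho * x_{t-1} + eps_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = (1 - rho) * mu + rho * x[t - 1] + eps[t]

# Random walk (unit root):  y_t = y_{t-1} + eps_t
y = np.cumsum(eps)

for name, series in [("AR(1), rho = 0.5", x), ("random walk", y)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
# Typically a tiny p-value for the AR(1) (unit root rejected) and a
# large one for the random walk (unit root not rejected).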
(AntiHermitian) Show that the eigenvalues of anti-Hermitian matrices are pure imaginary. (BasisTrans) Consider a (generic) $3\times 3$ Hermitian matrix $M$ with three orthonormal eigenvectors $\vert u\rangle$, $\vert v\rangle$, $\vert w\rangle$ with eigenvalues $\lambda_u$, $\lambda_v$, and $\lambda_w$, respectively. Suppose that you know $M$ and its basis vectors in a particular representation, so that $M$ is a particular matrix and the basis vectors are (known) columns. Build the matrix $U$ out of the columns that represent the basis vectors: \begin{equation} U=\big(\vert u\rangle\quad \vert v\rangle\quad \vert w\rangle\big) \end{equation} Show that $U$ is unitary, i.e.\ show that $U^{\dagger}=U^{-1}$. Show that $U^{\dagger} M U$ is a diagonal matrix and that the diagonal entries are the eigenvalues of $M$. (You will show that any matrix is diagonal in its own eigenbasis.)
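Here is a minimal numerical illustration of the (BasisTrans) claim (a numpy sketch; the random Hermitian matrix stands in for the generic $M$):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = (A + A.conj().T) / 2          # a generic 3x3 Hermitian matrix

eigvals, U = np.linalg.eigh(M)    # columns of U are orthonormal eigenvectors

# U is unitary: U^dagger U = I
print(np.allclose(U.conj().T @ U, np.eye(3)))        # True

# U^dagger M U is diagonal, with the eigenvalues on the diagonal
D = U.conj().T @ M @ U
print(np.allclose(D, np.diag(eigvals)))              # True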
An equation involving trigonometric functions is called a trigonometric equation. For example, an equation like \[\nonumber \tan\;A ~=~ 0.75 ~, \] which we encountered in Chapter 1, is a trigonometric equation. In Chapter 1 we were concerned only with finding a single solution (say, between \(0^\circ \) and \(90^\circ\)). In this section we will be concerned with finding the most general solution to such equations. To see what that means, take the above equation \(\tan\;A = 0.75 \). Using the \(\fbox{\(\tan^{-1}\)}\) calculator button in degree mode, we get \(A=36.87^\circ \). However, we know that the tangent function has period \(\pi \) rad \(= 180^\circ \), i.e. it repeats every \(180^\circ \). Thus, there are many other possible answers for the value of \(A \), namely \(36.87^\circ + 180^\circ \), \(36.87^\circ - 180^\circ \), \(36.87^\circ + 360^\circ \), \(36.87^\circ - 360^\circ \), \(36.87^\circ + 540^\circ \), etc. We can write this in a more compact form: \[\nonumber A ~=~ 36.87^\circ \;+\; 180^\circ k \quad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] This is the most general solution to the equation. Often the part that says "for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)'' is omitted since it is usually understood that \(k \) varies over all integers. The general solution in radians would be: \[\nonumber A ~=~ 0.6435 \;+\; \pi k \quad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] Example 6.1 Solve the equation \(\;2\,\sin\;\theta \;+\;1 ~=~ 0 \). Solution: Isolating \(\sin\;\theta \) gives \(\;\sin\;\theta ~=~ -\tfrac{1}{2} \). Using the \(\fbox{\(\sin^{-1}\)}\) calculator button in degree mode gives us \(\theta = -30^\circ \), which is in QIV. Recall that the reflection of this angle around the \(y\)-axis into QIII also has the same sine. That is, \(\sin\;210^\circ = -\tfrac{1}{2} \). Thus, since the sine function has period \(2\pi \) rad \(= 360^\circ \), and since \(-30^\circ \) does not differ from \(210^\circ \) by an integer multiple of \(360^\circ \), the general solution is: \[\nonumber \boxed{\theta ~=~ -30^\circ \;+\; 360^\circ k \quad\text{and}\quad 210^\circ \;+\; 360^\circ k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] In radians, the solution is: \[\nonumber \boxed{\theta ~=~ -\dfrac{\pi}{6} \;+\; 2\pi k \quad\text{and}\quad \dfrac{7\pi}{6} + 2\pi k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] For the rest of this section we will write our solutions in radians. Example 6.2 Solve the equation \(\;2\cos^2 \;\theta \;-\; 1 ~=~ 0 \). Solution: Isolating \(\;\cos^2 \;\theta \) gives us \[\nonumber \cos^2 \;\theta ~=~ \frac{1}{2} \quad\Rightarrow\quad \cos\;\theta ~=~ \pm\,\frac{1}{\sqrt{2}} \quad\Rightarrow\quad \theta ~=~ \frac{\pi}{4}\;,~\frac{3\pi}{4}\;,~\frac{5\pi}{4}\;,~ \frac{7\pi}{4}~, \] and since the period of cosine is \(2\pi \), we would add \(2\pi k \) to each of those angles to get the general solution. But notice that the above angles differ by multiples of \(\frac{\pi}{2} \). So since every multiple of \(2\pi \) is also a multiple of \(\frac{\pi}{2} \), we can combine those four separate answers into one: \[\nonumber \boxed{\theta ~=~ \frac{\pi}{4} \;+\; \frac{\pi}{2}\,k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] Example 6.3 Solve the equation \(\;2\,\sec\;\theta ~=~ 1 \). Solution: Isolating \(\;\sec\;\theta \) gives us \[\nonumber \sec\;\theta ~=~ \frac{1}{2} \quad\Rightarrow\quad \cos\;\theta ~=~ \frac{1}{\sec\;\theta} ~=~ 2~, \] which is impossible. 
Thus, there is \(\fbox{no solution}\). Example 6.4 Solve the equation \(\;\cos\;\theta ~=~ \tan\;\theta \). Solution: The idea here is to use identities to put everything in terms of a single trigonometric function: \[\nonumber \begin{align*} \cos\;\theta ~&=~ \tan\;\theta\\ \nonumber \cos\;\theta ~&=~ \frac{\sin\;\theta}{\cos\;\theta}\\ \nonumber \cos^2 \;\theta ~&=~ \sin\;\theta\\ \nonumber 1 \;-\; \sin^2 \;\theta ~&=~ \sin\;\theta\\ \nonumber 0 ~&=~ \sin^2 \;\theta \;+\; \sin\;\theta \;-\; 1 \end{align*}\] The last equation looks more complicated than the original equation, but notice that it is actually a quadratic equation: making the substitution \(x=\sin\;\theta \), we have \[\nonumber x^2 \;+\; x \;-\; 1 ~=~ 0 \quad\Rightarrow\quad x ~=~ \frac{-1 \;\pm\; \sqrt{1 - (4)\,(-1)}}{ 2\,(1)} ~=~ \frac{-1 \;\pm\; \sqrt{5}}{2} ~=~ -1.618\;,~0.618 \] by the quadratic formula from elementary algebra. But \(-1.618 < -1 \), so it is impossible that \(\;\sin\theta = x = -1.618 \). Thus, we must have \(\;\sin\;\theta = x = 0.618 \). Hence, there are two possible solutions: \(\theta = 0.666 \) rad in QI and its reflection \(\pi - \theta = 2.475\) rad around the \(y\)-axis in QII. Adding multiples of \(2\pi \) to these gives us the general solution: \[\nonumber \boxed{\theta ~=~ 0.666 \;+\; 2\pi k \quad\text{and}\quad 2.475 \;+\; 2\pi k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] Example 6.5 Solve the equation \(\;\sin\;\theta ~=~ \tan\;\theta \). Solution: Trying the same method as in the previous example, we get \[\nonumber \begin{align*} \sin\;\theta ~&=~ \tan\;\theta\\ \nonumber \sin\;\theta ~&=~ \frac{\sin\;\theta}{\cos\;\theta}\\ \nonumber \sin\;\theta~\cos\;\theta ~&=~ \sin\;\theta\\ \nonumber \sin\;\theta~\cos\;\theta \;-\; \sin\;\theta ~&=~ 0\\ \nonumber \sin\;\theta~(\cos\;\theta \;-\; 1) ~&=~ 0\\ \nonumber &\Rightarrow\quad \sin\;\theta ~=~ 0 \quad\text{or}\quad \cos\;\theta ~=~ 1\\ \nonumber &\Rightarrow\quad \theta ~=~ 0\;,~\pi \quad\text{or}\quad \theta ~=~ 0\\ \nonumber &\Rightarrow\quad \theta ~=~ 0\;,~\pi~, \end{align*}\] plus multiples of \(2\pi \). So since the above angles are multiples of \(\pi \), and every multiple of \(2\pi \) is a multiple of \(\pi \), we can combine the two answers into one for the general solution: \[\nonumber \boxed{\theta ~=~ \pi k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] Example 6.6 Solve the equation \(\;\cos\;3\theta ~=~ \frac{1}{2} \). Solution: The idea here is to solve for \(3\theta \) first, using the most general solution, and then divide that solution by \(3 \). So since \(\;\cos^{-1} \frac{1}{2} = \frac{\pi}{3} \), there are two possible solutions for \(3\theta\): \(3\theta = \frac{\pi}{3} \) in QI and its reflection \(3\theta = -\frac{\pi}{3} \) around the \(x\)-axis in QIV. Adding multiples of \(2\pi \) to these gives us: \[\nonumber 3\theta ~=~ \pm\,\frac{\pi}{3} \;+\; 2\pi k \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] So dividing everything by \(3 \) we get the general solution for \(\theta\): \[\nonumber \boxed{\theta ~=~ \pm\,\frac{\pi}{9} \;+\; \frac{2\pi}{3} k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \] Example 6.7 Solve the equation \(\;\sin\;2\theta ~=~ \sin\;\theta \).
Solution: Here we use the double-angle formula for sine: \[\nonumber \begin{align*} \sin\;2\theta ~&=~ \sin\;\theta\\ \nonumber 2\,\sin\theta~\cos\;\theta ~&=~ \sin\;\theta\\ \nonumber \sin\;\theta~(2\,\cos\;\theta \;-\; 1) ~&=~ 0\\ \nonumber &\Rightarrow\quad \sin\;\theta ~=~ 0 \quad\text{or}\quad \cos\;\theta ~=~ \frac{1}{2}\\ \nonumber &\Rightarrow\quad \theta ~=~ 0\;,~\pi \quad\text{or}\quad \theta ~=~ \pm\,\frac{\pi}{3}\\ \nonumber &\Rightarrow\quad \boxed{\theta ~=~ \pi k \quad\text{and}\quad \pm\,\frac{\pi}{3} \;+\; 2\pi k} \qquad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \end{align*} \] Example 6.8 Solve the equation \(\;2\,\sin\;\theta \;-\; 3\,\cos\;\theta ~=~ 1 \). Solution: We will use the technique which we discussed in Chapter 5 for finding the amplitude of a combination of sine and cosine functions. Take the coefficients \(2 \) and \(3 \) of \(\;\sin\;\theta \) and \(\;-\cos\;\theta \), respectively, in the above equation and make them the legs of a right triangle, as in Figure 6.1.1. Let \(\phi \) be the angle shown in the right triangle. The leg with length \(3 >0 \) means that the angle \(\phi \) is above the \(x\)-axis, and the leg with length \(2>0 \) means that \(\phi \) is to the right of the \(y\)-axis. Hence, \(\phi \) must be in QI. The hypotenuse has length \(\sqrt{13} \) by the Pythagorean Theorem, and hence \(\;\cos\;\phi = \frac{2}{\sqrt{13}} \) and \(\;\sin\;\phi = \frac{3}{\sqrt{13}} \). We can use this to transform the equation to solve as follows: \[\nonumber \begin{align*} 2\,\sin\;\theta \;-\; 3\,\cos\;\theta ~&=~ 1\\ \nonumber \sqrt{13}\,\left( \tfrac{2}{\sqrt{13}}\,\sin\;\theta \;-\; \tfrac{3}{\sqrt{13}}\,\cos\;\theta \right) ~&=~ 1\\ \nonumber \sqrt{13}\,( \cos\;\phi\;\sin\;\theta \;-\; \sin\;\phi\;\cos\;\theta ) ~&=~ 1\\ \nonumber \sqrt{13}\,\sin\;(\theta - \phi) ~&=~ 1\quad\text{(by the sine subtraction formula)}\\ \nonumber \sin\;(\theta - \phi) ~&=~ \tfrac{1}{\sqrt{13}}\\ \nonumber &\Rightarrow\quad \theta - \phi ~=~ 0.281 \quad\text{or}\quad \theta - \phi ~=~ \pi - 0.281 = 2.861\\ \nonumber &\Rightarrow\quad \theta ~=~ \phi \;+\; 0.281 \quad\text{or}\quad \theta ~=~ \phi \;+\; 2.861 \end{align*}\] Now, since \(\;\cos\;\phi = \frac{2}{\sqrt{13}} \) and \(\phi \) is in QI, the most general solution for \(\phi \) is \(\phi = 0.983 + 2\pi k \) for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(... \) . So since we needed to add multiples of \(2\pi \) to the solutions \(0.281 \) and \(2.861 \) anyway, the most general solution for \(\theta \) is: \[\begin{align*} \theta ~&=~ 0.983 \;+\; 0.281 \;+\; 2\pi k\quad\text{and}\quad 0.983 \;+\; 2.861 \;+\; 2\pi k\\ &\Rightarrow\quad \boxed{\theta ~=~ 1.264 \;+\; 2\pi k\quad\text{and}\quad 3.844 \;+\; 2\pi k} \quad\text{for \(k=0 \), \(\pm\,1 \), \(\pm\,2 \), \(...\)} \end{align*}\] Note: In Example 6.8 if the equation had been \(\;2\,\sin\;\theta \;+\; 3\,\cos\;\theta ~=~ 1 \) then we still would have used a right triangle with legs of lengths \(2 \) and \(3 \), but we would have used the sine addition formula instead of the subtraction formula.
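A quick numerical check of the solutions found in Example 6.8 (a sketch; only the \(k=0\) representatives and one shifted copy):

from math import sin, cos, pi

f = lambda theta: 2 * sin(theta) - 3 * cos(theta)

for theta in (1.264, 3.844):
    print(f"f({theta}) = {f(theta):.3f}")        # both ~1.000
print(f"f(1.264 + 2*pi) = {f(1.264 + 2 * pi):.3f}")  # also ~1.000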
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The integration of double-infinite Toda lattice by means of inverse spectral problem and related questions Methods Funct. Anal. Topology 15 (2009), no. 2, 101-136 The solution of the Cauchy problem for the differential-difference double-infinite Toda lattice by means of the inverse spectral problem for a semi-infinite block Jacobi matrix is given. Namely, we construct a simple linear system of three differential equations of first order whose solution gives the spectral matrix measure of the aforementioned Jacobi matrix. The solution of the Cauchy problem for the Toda lattice is given by the procedure of orthogonalization w.r.t. this spectral measure, i.e. by the solution of the inverse spectral problem for this Jacobi matrix. Methods Funct. Anal. Topology 15 (2009), no. 2, 137-151 For relations generated by a pair of operator symmetric differential expressions, a class of generalized resolvents is found. These resolvents are integro-differential operators. The expansion in eigenfunctions of these relations is obtained. Methods Funct. Anal. Topology 15 (2009), no. 2, 152-167 A general form of the Lions-Magenes theorems on solvability of an elliptic boundary-value problem in the spaces of nonregular distributions is proved. We find a general condition on the space of right-hand sides of the elliptic equation under which the operator of the problem is bounded and has a finite index on the corresponding couple of Hilbert spaces. Extensive classes of the spaces satisfying this condition are constructed. They contain the spaces used by Lions and Magenes and many other spaces. Methods Funct. Anal. Topology 15 (2009), no. 2, 168-176 In the present paper we study $\ast$-representations of semilinear relations with polynomial characteristic functions. For any finite simple non-oriented graph $\Gamma$ we construct a polynomial characteristic function such that $\Gamma$ is its graph. A full description of graphs which satisfy polynomial (degree one and two) semilinear relations is obtained. We introduce the $G$-orthoscalarity condition and prove that any semilinear relation with quadratic characteristic function and the condition of $G$-orthoscalarity is $\ast$-tame. This class of relations contains, in particular, $\ast$-representations of $U_{q}(so(3))$. Algebras of unbounded operators over the ring of measurable functions and their derivations and automorphisms Methods Funct. Anal. Topology 15 (2009), no. 2, 177-187 In the present paper derivations and $*$-automorphisms of algebras of unbounded operators over the ring of measurable functions are investigated, and it is shown that all $L^0$-linear derivations and $L^{0}$-linear $*$-automorphisms are inner. Moreover, it is proved that each $L^0$-linear automorphism of the algebra of all linear operators on a $bo$-dense submodule of a Kaplansky-Hilbert module over the ring of measurable functions is spatial. Methods Funct. Anal. Topology 15 (2009), no. 2, 188-194 The norm closure of the algebra generated by the set $\{n\mapsto {\lambda}^{n^k}:$ $\lambda\in{\mathbb {T}}$ and $k\in{\mathbb{N}}\}$ of functions on $({\mathbb {Z}}, +)$ was studied in \cite{S} (and was named the Weyl algebra). In this paper, by a fruitful result of Namioka, this algebra is generalized for a general semitopological semigroup and, among other things, it is shown that the elements of the involved algebra are distal. In particular, we examine this algebra for $({\mathbb {Z}}, +)$ and (more generally) for the discrete (additive) group of any countable ring.
Finally, our results are treated for a bicyclic semigroup. Methods Funct. Anal. Topology 15 (2009), no. 2, 195-200 The purpose of our paper is to introduce a topology on the group $G_F^{r}(M)$ of all $C^{r}$-isometries of a foliated manifold $(M,F)$, which depends on the foliation $F$ and coincides with the compact-open topology when $F$ is an $n$-dimensional foliation. If the codimension of $F$ is equal to $n$, convergence in our topology coincides with pointwise convergence, where $ n=\operatorname{dim}M.$ It is proved that the group $G_F^{r}(M)$ is a topological group with the compact-open topology, where $r\geq{0}.$ In addition, some properties of the F-compact-open topology are shown.
A rigid steel bar with mass $M$ is hit sideways (very close to its end) by a steel ball with mass $m$ and velocity $v$. What are the equations of motion after elastic impact, and how about conservation of momentum and energy? A rod of mass $M$ and length $\ell$ has mass moment of inertia $I = \frac{M}{12} \ell^2 $. The impact at a distance of $c = \frac{\ell}{2}$ from the center of mass imparts an impulse $J$, while an equal and opposite impulse $-J$ is applied to the projectile mass $m$. The projectile is going to bounce with velocity $v_B = v - \frac{J}{m}$. The center of mass of the rod is going to start moving with velocity $v_C = \frac{J}{M}$ while the rod rotation is going to be $\omega = \frac{c J}{I}$. The linear velocity of the point of impact is thus $v_A = v_C + \omega c = \frac{J}{M} + \frac{c^2 J}{I} $. The law of impact states that the final separating velocity is a fraction of the initial impacting velocity: $ v_A - v_B = \epsilon v $, where $\epsilon$ is the coefficient of restitution. Putting it all together yields: $$J = (1+\epsilon) \mu v $$$$\mu = \left( \frac{1}{m} + \frac{1}{M} + \frac{c^2}{I} \right)^{-1} $$ The term $\mu$ is called the reduced mass of the system, and it can be viewed as the effective mass of the impact. It converts the impact speed $v$ into momentum $\mu v$. Depending on the bounciness, the exchanged momentum (impulse) is between $ J = \mu v \ldots 2 \mu v $. Back-substitute $J$ into the equations above to find $v_C$, $\omega$ and $v_B$. This is a standard rotational motion problem. Use conservation of linear momentum for the translational motion. Use conservation of angular momentum about any axis, noting that some axes make the algebra easier than others. Use conservation of kinetic energy, as the collision is stated to be elastic. Let us take the system: rod + ball. Let the final velocities of the center of mass of the rod and of the ball be $ v_1 $ and $ v_2 $, respectively. As there is no net external force on the system, the net linear momentum of the system will be conserved: $ mv = Mv_1 + mv_2 $. And, as there is no net external torque about the COM of the system, the angular momentum of the system about its COM will be conserved. This will give you one more equation in $ v_1 $, $ v_2 $ and $ v $. Solve both equations simultaneously to get the results. The linear motion of the bar's center in the direction of the ball will be as if the ball had hit the bar at its center. The ball will bounce off the bar in the exact opposite direction but with reduced speed. The ball will also make the bar rotate around its center with some angular velocity. After impact, the sum of the translational energy of the ball and the bar and the rotational energy of the bar must equal the translational energy of the ball before impact (assuming no friction or deformation energy during impact). In this example, the conservation of momentum only seems to apply to the translational motion and does not include conservation of angular momentum?
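For a concrete feel, here is a sketch evaluating the impulse formulas above for an elastic impact (the masses, length and speed are made-up values; ε = 1):

# Impulse-based impact of a ball on the end of a free rigid bar
M, m, L, v = 2.0, 0.5, 1.0, 3.0   # bar mass, ball mass, bar length, impact speed
eps = 1.0                          # coefficient of restitution (elastic)

I = M * L**2 / 12                  # bar moment of inertia about its center
c = L / 2                          # impact offset from the center of mass

mu = 1 / (1 / m + 1 / M + c**2 / I)   # reduced (effective) mass
J = (1 + eps) * mu * v                # exchanged impulse

v_B = v - J / m                    # ball velocity after impact
v_C = J / M                        # bar center-of-mass velocity
omega = c * J / I                  # bar angular velocity

# Energy check: an elastic impact should conserve kinetic energy
E0 = 0.5 * m * v**2
E1 = 0.5 * m * v_B**2 + 0.5 * M * v_C**2 + 0.5 * I * omega**2
print(f"J = {J:.3f} N s, v_B = {v_B:.3f}, v_C = {v_C:.3f}, omega = {omega:.3f}")
print(f"energy before {E0:.3f} J, after {E1:.3f} J")   # equal for eps = 1

With these numbers the ball happens to stop dead (v_B = 0) while the bar translates and spins, and the kinetic energy before and after comes out equal, as the elastic assumption requires.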
I have only encountered the inverse Lagrangian problem in mathematics books that treat Lagrangian field theory using jet bundles and homological algebra, and while I am studying this approach, I still find the language somewhat difficult to understand/use. In a more familiar, though much less precise, formalism, I thought about the following. If one is given a field $\psi$ and an action $$ S[\psi]=\int d^nx\ \mathcal L(\psi,\partial\psi)$$ for this field, the Euler-Lagrange equations are given by $$ E\mathcal L(x)=\frac{\delta S[\psi]}{\delta\psi(x)}=0. $$ The functional derivative here is an analogue of the gradient in finite dimensional calculus. Naively, using finite dimensional intuition, one can argue that $$ \frac{\delta^2S[\psi]}{\delta\psi(x)\delta\psi(y)} $$ is symmetric in $x,y$. If an analogue of Poincaré's lemma holds for "functionals" as well, then for a "functional" (more like a nonlinear operator, really) $F[\psi](x)$ satisfying $$ \frac{\delta}{\delta\psi(y)}F[\psi](x)=\frac{\delta}{\delta\psi(x)}F[\psi](y), $$ there should "locally" be a functional $$ S[\psi] $$ such that $$ F[\psi](x)=\frac{\delta}{\delta\psi(x)}S[\psi]. $$ The inverse Lagrangian problem can then be attacked as such: Assume that a field $\psi$ is given with field equations $E[\psi](x)=0$, where $E[\psi](x)$ is some local function/field that depends on $\psi$ and its derivatives. If the equations of motion come from a variational principle, then $E[\psi]$ is "functionally exact", so the "functional exterior derivative" $$ \mathcal{D}E[\psi](x,y)=\frac{\delta}{\delta\psi(x)}E[\psi](y)-\frac{\delta}{\delta\psi(y)}E[\psi](x) $$ should vanish, and by Poincaré's lemma, there is at least a "locally defined" action functional for $E$. Questions: Is this approach in any way tenable? As in, is there a functional version of Poincaré's lemma? If so, it seems to me that determining whether a field equation is variational or not reduces to a mechanical calculation. If there is a functional Poincaré's lemma, is there any explicitly calculable homotopy operator for it? The usual proof of Poincaré's lemma in finite dimensions also constructs the homotopy operator explicitly, and it can be used to calculate primitive forms. Now, as a note, I'll say that this approach is probably not well defined in the rigorous sense, but physicists have a way of doing infinite-dimensional calculations via very unrigorously used distributions and such pretty effectively, even if things are ill-defined mathematically, so I am only expecting answers on that level of rigour.
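In case it helps, the question about an explicitly calculable homotopy operator has a classical formal answer, usually attributed to Vainberg and Tonti; the following is only a formal sketch at the physicist's level of rigour the question asks for, ignoring all domain and convergence issues. If $E[\psi](x)$ satisfies the symmetry condition above, define
\[ S[\psi] = \int_0^1 d\lambda \int d^n x\, \psi(x)\, E[\lambda \psi](x). \]
Differentiating under the integral sign and using the symmetry together with the chain rule $\frac{d}{d\lambda} E[\lambda\psi](y) = \int d^n x\, \psi(x)\, \frac{\delta E[\lambda\psi](y)}{\delta (\lambda\psi)(x)}$, one finds
\[ \frac{\delta S[\psi]}{\delta \psi(y)} = \int_0^1 d\lambda \left( E[\lambda\psi](y) + \lambda \frac{d}{d\lambda} E[\lambda\psi](y) \right) = \int_0^1 d\lambda\, \frac{d}{d\lambda} \Big( \lambda\, E[\lambda\psi](y) \Big) = E[\psi](y), \]
so the straight line $\lambda \mapsto \lambda\psi$ plays exactly the role of the radial contraction in the finite-dimensional proof of Poincaré's lemma.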
A friend of mine has a very nicely typeset paper on the arXiv and I would like to adapt large parts of the typesetting for my next paper. The paper uses the class file from Annalen der Physik, which is available under Manuscript submission > Article templates in their author guidelines page, or alternatively as andp2012.cls from e.g. here. I tried to adapt the class to something I like a bit better (onecolumn with better margins, and a bit more room to breathe) and I sort of managed it, but the class is poorly documented and my alterations are a bodge that's going to come apart at the slightest push. (Also, to begin with, the class issues some unavoidable funky warnings right out of the box.) Instead of that, then, I would like to take the elements I like the most and work them into a more standard class like amsart. This means, in particular, the fonts, which are nicely shaped and away from the serif impertinence of Computer Modern. (No offence, but I'm just deadly tired of it by now.) This should ideally be the lot: the fonts for the text, math, title, author, abstract, and section headings. What fonts or packages are responsible for this, and how can I get them working on amsart? A sample file compiles into the following look: The source is below; it needs andp2012.cls and picins.sty to run. \documentclass{andp2012}% no class options needed by now\usepackage[english]{babel}\usepackage{lipsum}\title{Article title}\author{J. Doe}\begin{abstract}This is an abstract.\end{abstract}\shortabstract\begin{document}\maketitle\section{Introduction}Introduction text.\section{Content}\label{section1}Some initial text, and some equations.\begin{equation}V(\vec{x}_A,\vec{x}_B)=d^2\frac{r^2-2\lambda^2}{(r^2+\lambda^2)^{5/2}},\end{equation}being $d$ a letter, $\lambda$ a gathingammy $r$ a letter in $r=|\vec{x}_A-\vec{x}_B|=\sqrt{(x_A-x_B)^2+(y_A-y_B)^2}$, with $A$, $B$ labels. Moreover $\Lambda=\lambda/a$, and $\chi = a_{d}/\lambda = m_{eff}d^2/(\hbar^2 \lambda)$, with $m_\mathrm{eff}=\hbar^2/2ta^2$, and $t$, are more maths expressions. So are $k=\sqrt{k_x^2+k_y^2}$ and $V_{latt}(\vec r)= V_0\left(\sin^2(k_x x)+ \sin^2(k_y y)\right))$, and a displayed equation is \begin{equation}\left(\hat{T}_A+\hat{T}_B+{\hat V}(\vec{x}_A,\vec{x}_B)\right)\Phi(\vec{x}_A,\vec{x}_B)=E\Phi(\vec{x}_A,\vec{x}_B).\end{equation}Other displayed equations are\begin{equation}(\vec{\xi}_{\vec{K}}\cdot\vec{\hat{T}}_D+V(\vec{r}))\psi(\vec{r})=E\psi(\vec{r}),\label{K-r}\end{equation}where $\vec{\xi }_{\vec{K}}=-2t(\cos(K_x a/2),\cos(K_y a/2))$ and $\vec{\hat{I}}\cdot\vec{\hat{T}}_D\psi(\vec{r})=\sum_{i=x,y}\left(\psi(\vec{r}+\vec{\delta}_i)+\psi(\vec{r}-\vec{\delta}_i)\right)$, where $\vec{\delta}_i=a\hat{e}_{i}$, and also\begin{equation}\psi(\vec{r})=\frac{1}{N_x N_y}\sum_{\vec{q}}\psi(\vec{q})e^{i\vec{q}\cdot\vec{r}}\end{equation}and$$E_{\vec{K},\vec{q}}=-4t\left(\cos(K_xa/2)\cos(q_xa)+\cos(K_ya/2)\cos(q_ya)\right)$$ and \begin{equation}(E-E_{\vec{K},\vec{q}})\psi(\vec{q})=\sum_{\vec{q'}}V(\vec{q}-\vec{q'})\psi(\vec{q'}). \end{equation}Then you do some blah blah blah and you finish the paper.\lipsum[1-3]\end{document}
Bob "Hey Alice let's play paper-scissor-rocks :) I will always play paper." Alice "What if you don't play paper?" Bob "If I play don't play paper you win if it's a draw [under original rules] and you draw if you lose [under original rules]. If I win you have to obey my order." Alice "What if I get a draw?" Bob "Then you will have to do my little favour :P" Alice "What favour?" Bob: The payout table is as follows It turned out that Alice had scissor and Bob had rock --- a draw --- with consequences equivalent to a lose as Bob defined. The analysis from Alice: based on random choices there are a 2/3 chance to win for rocks and scissors, but Bob claimed to play scissor so she'd better play scissor. Bob made a conclusion that the risk for Alice is invariant despite the rules: -------------------------------------------------- Consider the following payout matrix. $P_1 = \begin{bmatrix}1&0&1\\1&1&0\\-1&1&0\end{bmatrix}$, $P_2 = \begin{bmatrix}1&-1&1\\1&1&-1\\-1&1&-1\end{bmatrix}$ Where $P_1$ is the matrix that Alice thought to be but $P_2$ is the real one. The matrix $P$ is defined as follows: $[P]_{ij}$ is the amount you win if you play the j-th option (vertical) and the opp plays the i-th option (horizontal). (Some textbooks may switch between you and your opp but it's just a matter of perspective on whether you, the analyzer is taking part in the game.) By strong duality we know that the optimal strategy for both Bob and Alice should have the same expectation. Now suppose Alice has a probability vector $\vec{x}$ to play the three choices with payout matrix $P$. Then the expected outcome for Bob is given by $P\vec{x}$ for different choices from Bob. He as the payer would of course want to minimize the paid. Then we can set up the following linear program for Alice, to maximize the gain when Bob minimizes her gain among the three choices. $\max (\min ([P\vec{x}]_i)) ~~~\text{subject to}~~~\sum [\vec{x}]_i = 1, \vec{x}\geq \vec{0}$. And by adding dummy variable we have: $\max z ~~~\text{subject to}~~~P\vec{x} - z\vec{b}\geq \vec{0},~\sum [\vec{x}]_i = 1,~\vec{x}\geq \vec{0}$ Where $\vec{b} = (1,1,1)^T$. This will be the optimal strategy for Alice. Using $P_1$ we have $\vec{x}^* = (0,0.5,0.5)^T$ with expected gain $0.5$. Using $P_2$ we have the same optimal vector, but the expected gain is zero. It can be concluded that Alice had the perception that she has an advantage in the game, while actually she had not. The matrix with four -1 and five 1, turns out to be a fair game (mathematically). ----------------------------------------------- Gambling at high stakes are of course a totally different game. There are not enough games for one to apply central limit theorem so that the result converges so that they can play in the most probable way. In these games all factors including the psychological status shall be considered and here Alice obviously did not have a good enough mind to do such analysis, so she eventually lose the game. Can you think of other non-trivial matrix which is fair as well? Can you think of other non-trivial matrix which is fair as well?
A long while ago I promised to take you from the action by the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding ‘monstrous’ Grothendieck dessin d’enfant. Speaking of dessins d’enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, arXived a few days ago, Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d’enfant (more on this, I promise, later). Where were we? We’ve seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d’enfant? The short answer is: it’s all in Ravi S. Kulkarni’s paper, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math 113 (1991) 1053-1135. It is a complete mystery to me why Tatitscheff, He and McKay don’t mention Kulkarni’s paper in “Cusps, congruence groups and monstrous dessins”. Because all they do (and much more) is in Kulkarni. I’ve blogged about Kulkarni’s paper years ago: – In the Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$. – In Modular quilts and cuboid tree diagrams it went on assigning (multiple) cuboid trees to a (conjugacy class of) such finite index subgroups. – In Hyperbolic Mathieu polygons the story continued with a finite-to-one connection between special hyperbolic polygons and cuboid trees. – In Farey codes it was shown how to encode such polygons by a Farey sequence. – In Generators of modular subgroups it was shown how to get generators of the finite index subgroups from this Farey sequence. The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \] As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) and edges corresponding to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there’s just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation. The ‘monstrous dessin’ for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni’s paper (or the blogposts above) tell you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations. Sage gives a nice image of this fundamental domain via the command FareySymbol(Gamma0(n)).fundamental_domain() Here’s the image for $n=6$: The boundary points (on the half-lines through $0$ and $1$ and the $4$ half-circles) need to be identified, which is indicated by matching colours.
So the 2 half-lines are identified, as are the two blue (and green) half-circles (in opposite direction). To get the dessin from this, let’s first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet; a black vertex corresponds to an interior point where three black and three white tiles meet. Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex. Here’s the intermediate picture. The two top red points are identified, giving a white vertex, as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the ‘monstrous’ modular dessin for $n=6$ of the Tatitscheff, He and McKay paper. Let’s try a more difficult example: $n=12$. Sage gives us a fundamental domain, from which we get the intermediate picture, and spotting the correct identifications gives us the ‘monstrous’ dessin for $\Gamma_0(12)$ from the THM-paper. In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained ‘monstrous dessin’ depends on the choice of fundamental domain. You’ll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \] One of the nice results from Kulkarni’s paper is that for any $n$ there is a symmetric Farey code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey codes gives other ‘monstrous dessins’ than in the THM-paper. It remains to identify the edges in the dessin with the lattices at hyperdistance $n$ from $L_1$. Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}}$ as below. Let’s do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
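If you want to experiment with this yourself, Sage's Farey symbol machinery exposes the ingredients directly; a small sketch to run inside Sage (method names as I recall them from the Sage documentation, so double-check against your version):

```python
# Run inside Sage: inspect the Farey code behind the fundamental domain.
F = FareySymbol(Gamma0(12))
print(F.fractions())        # the cusps appearing in the Farey code above
print(F.pairings())         # the side pairings of the fundamental domain
F.fundamental_domain()      # the picture used in this post
```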
Learning Objectives

Calculate real GDP based on nominal GDP values
Calculate the real growth rate in GDP

Now we’re in a position to answer the question that we posed previously: given nominal GDP for the U.S. economy from 1960-2010, how much did real GDP actually increase? In order to see how much production has actually increased, we need to extract the effects of higher prices on nominal GDP, so that what we’re left with is real GDP, the increase in the quantity of goods and services produced. This can be easily done using a concept known as the GDP deflator. The GDP deflator is a price index measuring the average price of all goods and services included in the economy. We will explore price indices in detail and how they are computed when we learn more about inflation, but this definition will do for now. The data for the GDP deflator are given in Table 1 and shown graphically in Figure 1.

Table 1. U.S. GDP Deflator, 1960-2010 (2005 = 100)
Year    GDP Deflator
1960    19.0
1965    20.3
1970    24.8
1975    34.1
1980    48.3
1985    62.3
1990    72.7
1995    81.7
2000    89.0
2005    100.0
2010    110.0
Source: www.bea.gov, National Accounts

Figure 1 shows that the price level, as measured by the GDP deflator, has risen dramatically since 1960. Using the simple growth rate formula that we explained on the last page, we see that the price level in 2010 was almost six times higher than in 1960 (the deflator for 2010 was 110 versus a level of 19 in 1960). Clearly, much of the apparent growth in nominal GDP was due to inflation, not an actual change in the quantity of goods and services produced, in other words, not in real GDP. Recall that nominal GDP can rise for two reasons: an increase in output, and/or an increase in prices. What is needed is to extract the increase in prices from nominal GDP so as to measure only changes in output. After all, the dollars used to measure nominal GDP in 1960 are worth more than the inflated dollars of 1990, and the price index tells exactly how much more. This adjustment is easy to do if you use the Nominal-to-Real formula that we explained previously:

[latex]\text{Nominal Value of Output}=\text{Price}\times\text{Quantity of Output}[/latex]

Taking the GDP form of this equation:

[latex]\text{Nominal GDP}=\text{GDP Deflator}\times\text{Real GDP}[/latex]

Divide both sides by the GDP Deflator:

[latex]\displaystyle\text{Real GDP}=\frac{\text{Nominal GDP}}{\text{GDP Deflator}}[/latex]

For reasons that will be explained in more detail below, mathematically, a price index (like the GDP Deflator) is a two-digit decimal number like 1.00 or 0.85 or 1.25. Because some people have trouble working with decimals, when the price index is published, it has traditionally been multiplied by 100 to get integer numbers like 100, 85, or 125. What this means is that when we “deflate” nominal figures to get real figures (by dividing the nominal by the price index), we also need to remember to divide the published price index by 100 to make the math work. So the formula becomes:

[latex]\displaystyle\text{Real GDP}=\frac{\text{Nominal GDP}}{\frac{\text{GDP Deflator}}{100}}[/latex]

Computing Real GDP

Let’s practice finding real GDP by looking at the actual data on nominal GDP and the GDP deflator.

Table 2. U.S. Nominal GDP and the GDP Deflator
Year    Nominal GDP (billions of dollars)    GDP Deflator (2005 = 100)
1960    543.3        19.0
1965    743.7        20.3
1970    1,075.9      24.8
1975    1,688.9      34.1
1980    2,862.5      48.3
1985    4,346.7      62.3
1990    5,979.6      72.7
1995    7,664.0      81.7
2000    10,289.7     89.0
2005    13,095.4     100.0
2010    14,958.3     110.0
Source: www.bea.gov

Step 1.
Look at Table 2 to see that, in 1960, nominal GDP was $543.3 billion and the price index (GDP deflator) was 19.0.

Step 2. To calculate the real GDP in 1960, use the formula:

[latex]\begin{array}{l}\text{Real GDP}=\frac{\text{Nominal GDP}}{\frac{\text{Price Index}}{100}}\\\text{Real GDP}=\frac{543.3\text{ billion}}{\frac{19}{100}}=\$2,859.5\text{ billion}\end{array}[/latex]

We’ll do this in two parts to make it clear. First adjust the price index: 19 divided by [latex]100=0.19[/latex]. Then divide into nominal GDP: [latex]\frac{\$543.3\text{ billion}}{0.19}=\$2,859.5\text{ billion}[/latex].

Step 3. Use the same formula to calculate the real GDP in 1965.

[latex]\begin{array}{l}\text{Real GDP}=\frac{\text{Nominal GDP}}{\frac{\text{Price Index}}{100}}\\\text{Real GDP}=\frac{743.7\text{ billion}}{\frac{20.3}{100}}=\$3,663.5\text{ billion}\end{array}[/latex]

Step 4. Continue using this formula to calculate all of the real GDP values from 1960 through 2010. The calculations and the results are shown in Table 3.

Table 3. Converting Nominal to Real GDP
Year    Nominal GDP (billions of dollars)    GDP Deflator (2005 = 100)    Calculation    Real GDP (billions of 2005 dollars)
1960    543.3       19.0     [latex]\displaystyle\frac{543.3}{(\frac{19.0}{100})}[/latex]       2,859.5
1965    743.7       20.3     [latex]\displaystyle\frac{743.7}{(\frac{20.3}{100})}[/latex]       3,663.5
1970    1,075.9     24.8     [latex]\displaystyle\frac{1,075.9}{(\frac{24.8}{100})}[/latex]     4,338.3
1975    1,688.9     34.1     [latex]\displaystyle\frac{1,688.9}{(\frac{34.1}{100})}[/latex]     4,952.8
1980    2,862.5     48.3     [latex]\displaystyle\frac{2,862.5}{(\frac{48.3}{100})}[/latex]     5,926.5
1985    4,346.7     62.3     [latex]\displaystyle\frac{4,346.7}{(\frac{62.3}{100})}[/latex]     6,977.0
1990    5,979.6     72.7     [latex]\displaystyle\frac{5,979.6}{(\frac{72.7}{100})}[/latex]     8,225.0
1995    7,664.0     82.0     [latex]\displaystyle\frac{7,664.0}{(\frac{82.0}{100})}[/latex]     9,346.3
2000    10,289.7    89.0     [latex]\displaystyle\frac{10,289.7}{(\frac{89.0}{100})}[/latex]    11,561.5
2005    13,095.4    100.0    [latex]\displaystyle\frac{13,095.4}{(\frac{100.0}{100})}[/latex]   13,095.4
2010    14,958.3    110.0    [latex]\displaystyle\frac{14,958.3}{(\frac{110.0}{100})}[/latex]   13,598.5
Source: Bureau of Economic Analysis, www.bea.gov

There are a couple things to notice here. Whenever you compute a real statistic, one year (or period) plays a special role. It is called the base year (or base period). The base year is the year whose prices are used to compute the real statistic (as we showed on the last page). When we calculate real GDP, for example, we take the quantities of goods and services produced in each year (for example, 1960 or 1973) and multiply them by their prices in the base year (in this case, 2005), so we get a measure of GDP that uses prices that do not change from year to year. That is why real GDP is labeled “Constant Dollars” or “2005 Dollars,” which means that real GDP is constructed using prices that existed in 2005.
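The whole of Table 3 can be reproduced with a few lines of code; a minimal sketch using the data above:

```python
# Deflating nominal GDP with the published (x100) GDP deflator.
nominal = {1960: 543.3, 1965: 743.7, 1970: 1075.9, 1975: 1688.9, 1980: 2862.5,
           1985: 4346.7, 1990: 5979.6, 1995: 7664.0, 2000: 10289.7,
           2005: 13095.4, 2010: 14958.3}           # billions of dollars
deflator = {1960: 19.0, 1965: 20.3, 1970: 24.8, 1975: 34.1, 1980: 48.3,
            1985: 62.3, 1990: 72.7, 1995: 82.0, 2000: 89.0,
            2005: 100.0, 2010: 110.0}              # 2005 = 100

for year in sorted(nominal):
    real = nominal[year] / (deflator[year] / 100)  # divide the index by 100 first
    print(year, round(real, 1))                    # 1960 -> 2859.5, ..., 2010 -> 13598.5
```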
The formula used is:

[latex]\displaystyle\text{GDP deflator}=\frac{\text{Nominal GDP}}{\text{Real GDP}}\times{100}[/latex]

Rearranging the formula and using the data from 2005:

[latex]\begin{array}{l}\text{Real GDP}=\frac{\text{Nominal GDP}}{\frac{\text{Price Index}}{100}}\\\text{Real GDP}=\frac{13,095.4\text{ billion}}{\frac{100}{100}}=\$13,095.4\text{ billion}\end{array}[/latex]

Comparing real GDP and nominal GDP for 2005, you see they are the same. This is no accident. It is because 2005 has been chosen as the “base year” in this example. Since the price index in the base year always has a value of 100 (by definition), nominal and real GDP are always the same in the base year. Look at the data for 2010.

[latex]\begin{array}{l}\text{Real GDP}=\frac{\text{Nominal GDP}}{\frac{\text{Price Index}}{100}}\\\text{Real GDP}=\frac{14,958.3\text{ billion}}{\frac{110}{100}}=\$13,598.5\text{ billion}\end{array}[/latex]

Use this data to make another observation: As long as inflation is positive, meaning prices increase on average from year to year, real GDP should be less than nominal GDP in any year after the base year. The reason for this should be clear: The value of nominal GDP is “inflated” by inflation. Similarly, as long as inflation is positive, real GDP should be greater than nominal GDP in any year before the base year.

Figure 2 shows the U.S. nominal and real GDP since 1960. Because 2005 is the base year, the nominal and real values are exactly the same in that year. However, over time, the rise in nominal GDP looks much larger than the rise in real GDP (that is, the nominal GDP line rises more steeply than the real GDP line), because the rise in nominal GDP is exaggerated by the presence of inflation, especially in the 1970s. Let’s return to the question posed originally: How much did GDP increase in real terms? What was the rate of growth of real GDP from 1960 to 2010? To find the real growth rate, we apply the formula for percentage change:

[latex]\displaystyle\frac{13,598.5-2,859.5}{2,859.5}\approx{3.76}=376\%[/latex]

In other words, the U.S. economy has increased real production of goods and services by nearly a factor of four (i.e. 376%) since 1960. Of course, that understates the material improvement since it fails to capture improvements in the quality of products and the invention of new products. For short periods of time, there is a quicker way to answer this question approximately, using another math trick. Remember that nominal GDP increases for two reasons, first, because prices increase and second because real GDP increases. In other words, the percentage increase in nominal GDP is (approximately) equal to the percentage increase in prices plus the percentage increase in real GDP. Expressing this as an equation,

[latex]\%{\text{ change in nominal GDP}}=\%{\text{ change in prices}}+\%{\text{ change in real GDP}}[/latex]

Subtracting % change in prices from both sides gives:

[latex]\%{\text{ change in nominal GDP}}-\%{\text{ change in prices}}=\%{\text{ change in real GDP}}[/latex]

Therefore, the growth rate (percent change) of real GDP equals the growth rate in nominal GDP (% change in value) minus the growth rate in prices (% change in GDP Deflator).
Two Ways to Calculate Growth Rates

Let’s look at the bottom numbers from the following table:

Year    Nominal GDP    GDP Deflator    Real GDP
2005    $13,095.4      100             $13,095.4
2010    $14,958.3      110             $13,598.5

Method 1: Using the Simple Growth Rate formula

(Real GDP in 2010 - Real GDP in 2005) / Real GDP in 2005 = Growth of Real GDP

Plugging in the numbers gives ($13,598.5 - $13,095.4) / $13,095.4 ≈ 3.8%, or roughly 4%.

Method 2: Using the Math Trick

Growth of Nominal GDP - Growth of GDP Deflator = Growth of Real GDP

Plugging in the numbers and using the Simple Growth Rate formula gives

[latex]\frac{(\$14,958.3-\$13,095.4)}{\$13,095.4}-\left(\frac{110-100}{100}\right)=14.2\%-10\%=4.2\%[/latex]

Note that Method 2 is only a quick approximation to Method 1.
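Both methods are one-liners to check; a minimal sketch with the numbers from the table:

```python
# The two growth-rate calculations side by side.
nom_2005, nom_2010 = 13095.4, 14958.3
defl_2005, defl_2010 = 100.0, 110.0
real_2005, real_2010 = 13095.4, 13598.5

# Method 1: exact growth of real GDP
g_real = (real_2010 - real_2005) / real_2005        # ~0.038, i.e. roughly 4%

# Method 2: growth of nominal GDP minus growth of the deflator (approximation)
g_nom = (nom_2010 - nom_2005) / nom_2005            # ~0.142
g_defl = (defl_2010 - defl_2005) / defl_2005        # 0.10
print(round(g_real, 3), round(g_nom - g_defl, 3))   # 0.038 vs 0.042
```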
Has anyone ever actually written a system (software, or a detailed explanation on paper with simple examples) that generates computer programs? I input $Prime(x) \wedge x<10$ and it creates a program that lists the prime numbers less than 10. $Prime(x)$ is simply defined as $$1<x \wedge \not\exists A,B\in\mathbb{N}\; s.t.\; 1<A \wedge A<x \wedge x=A\times B$$ Professors say they can, but nobody gives actual complete examples.

This is a very active research topic and very promising, though full automation of program generation probably has intrinsic limitations (but are human beings any better?). But the idea is still very useful in considerably assisting the creation of programs by mechanizing many steps, and by automatically checking the correctness of the program generation. It is strongly related to a result in logic, called the Curry-Howard correspondence (or isomorphism), which shows that computer programs and mathematical proofs are very similar. So the idea is that the system will take your program specification as a theorem to be proved. In the case of your example, it would be something like (informally): "there is a set of all prime numbers smaller than 10". Then, you will attempt to prove that theorem, and existing systems will assist you in doing the proof, automating some parts, possibly the whole proof, and making sure you never make errors. From that proof one can then extract a program that actually computes the wanted list of prime numbers that had been initially specified. Several systems were developed in the past to elucidate these ideas. One of the better known was LCF by Robin Milner, who created the language ML for that purpose. One of the currently most advanced systems is Coq. There are examples fully worked out, some of them quite complex. You may find some in the following article, though it is in no way simple reading and requires advanced knowledge of logic.

The wag answer: Yes, but at the time of writing, for most nontrivial programs the specifications seem to be just as hard to write and debug as the programs would be. More seriously, babou's answer is good, but I'm also going to suggest checking out the area of dependent types. There's a rather good book using Coq (full disclaimer: written by a friend of mine), but there's also Epigram, Agda, and Idris. Isabelle/HOL is also worth checking out. These are all based on the calculus of constructions. If you want to know the theoretical basis, look up Martin-Löf type theory. There are some great introductions around.

Going off on a tangent here: program generators (i.e., systems that, given a high-level description of something in some special language, produce code) have been around forever. Any compiler is one of those, as is any of the many parser generators. Back in the day, systems called "fourth generation languages," which generated (most of) the code of a typical business application given a high-level description and a catalog of available data, were popular. One area which seems to specifically address the "primes less than 10" example you give is Constraint Programming, which tries to find solutions to problems involving certain constraints, including integer constraints like those you gave. You might want to try ECLiPSe for a specific (open source) implementation of such a system.
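To make the constraint-programming flavour concrete: in the toy sketch below, the "specification" is just a predicate and the "generated program" is bounded exhaustive search. This is of course far weaker than proof extraction or a real constraint solver, but it shows the declarative-to-executable step on exactly the example from the question:

```python
# Toy illustration only: a predicate as specification, search as "synthesis".
def prime(x):
    """Direct transcription of the Prime(x) definition above."""
    return x > 1 and all(x % a for a in range(2, x))

def solve(spec, bound):
    """Enumerate all naturals below `bound` satisfying the specification."""
    return [x for x in range(bound) if spec(x)]

print(solve(lambda x: prime(x) and x < 10, 10))  # [2, 3, 5, 7]
```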
Here’s batch 2 of my old google+ posts on ‘Inter Universal Teichmuller theory’, or rather on the number theoretic examples of Frobenioids. June 5th, 2013 Mochizuki’s categorical prime number sieve And now for the interesting part of Frobenioids1: after replacing a bunch of arithmetic schemes and maps between them by a huge category, we will reconstruct this classical picture by purely categorical means. Let’s start with the simplest case, that of the ‘baby arithmetic Frobenioid’ dismantling $\mathbf{Spec}(\mathbb{Z})$ (that is, the collection of all prime numbers) and replacing it by the category having as its objects all $(a)$ where $a$ is a strictly positive rational number, and morphisms labeled by triples $(n,r,z)$ where $n$ and $z$ are strictly positive integers and $r$ is a strictly positive rational number, connecting two objects \[ (n,r,z) : (a) \rightarrow (b) \quad \text{ if and only if } \quad a^n.z=b.r \] Composition of morphisms is well-defined and looks like $(m,s,v) \circ (n,r,u) = (m.n,r^m.s,u^m.v)$ as one quickly checks. The challenge is to recover all prime numbers back from this ‘Frobenioid’. We would like to take an object $(a)$ and consider the maps $(1,1,p)$ from it for all prime numbers $p$, but cannot do this as categorically we have to drop all labels of objects and arrows. That is, we have to recognize the map $(1,1,p)$ among all maps starting from a given object. We can identify all isomorphisms in the category and check that they are precisely the morphisms labeled $(1,r,1)$. In particular, this implies that all objects are isomorphic and that there is a natural correspondence between arrows leaving $(a)$ and arrows leaving $(b)$ by composing them with the iso $(1,b/a,1) : (b) \rightarrow (a)$. Another class of arrows we can spot categorically are the ‘irreducibles’, which are maps $f$ which are not isos but have the property that in any factorization $f=g \circ h$ either $g$ or $h$ must be an iso. One easily verifies from the composition rule that these come in two flavours: – those of Frobenius type : $(p,r,1)$ for any prime number $p$ – those of Order type : $(1,r,p)$ for any prime number $p$ We would like to color the froBs Blue and the oRders Red, but there seems to be no way to differentiate between the two classes by purely categorical means, until you spot Mochizuki’s clever little trick. Start with a Red, say $(1,r,q)$ for a prime number $q$, and compose it with the Blue $(p,1,1)$; then you get the morphism $(p,r^p,q^p)$, which you can factor as a composition of $p+1$ irreducibles \[ (p,r^p,q^p) = (1,r,q) \circ (1,r,q) \circ \cdots \circ (1,r,q) \circ (p,1,1) \] and if $p$ grows, so will the number of factors in this composition. On the other hand, if you start with a Blue and compose it with either a Red or a Blue irreducible, the obtained map cannot be factored into more irreducibles. Thus, we can identify the Order-type morphisms as those irreducibles $f$ for which there exists an irreducible $g$ such that the composition $g \circ f$ can be factored as a composition of at least $n$ irreducibles, where we can take $n$ arbitrarily large. Finally we say that two Reds out of $(a)$ are equivalent iff one is obtained from the other by composing with an isomorphism, and it is clear that the equivalence classes are exactly the arrows labeled $(1,r,p)$ for a fixed prime number $p$. So we do indeed recover all prime numbers from the category. Similarly, we can see that equivalence classes of Frobs from $(a)$ are of the form $(p,r,1)$ for a fixed prime $p$.
An amusing fact is that we can recover the prime $p$ for a Frob by purely categorical means, using the above long factorization of a composition with a Red. There seems to be no categorical way to determine the prime number associated to an equivalence class of Order-morphisms though… Or, am i missing something trivial? June 7th, 2013 Mochizuki’s Frobenioids for the Working Category Theorist Many of you, including +David Roberts +Charles Wells +John Baez (and possibly others, i didn’t look at all comments left on all reshares of the past 4 posts in this MinuteMochizuki project) hoped that there might be a more elegant category theoretic description of Frobenioids, the buzz-word apparently being ‘Grothendieck fibration’ … Hence this attempt to deconstruct Frobenioids. Two caveats though: – i am not a category theorist (the few who know me IRL are by now ROFL) – these categories are meant to include all arithmetic information of number fields, which is a messy business, so one should only expect clear cut fibrations in easy situations such as principal ideal domains (think of the integers $\mathbb{Z}$). Okay, we will try to construct the Frobenioid associated to a number field $K$ (that is, a finite dimensional extension of the rationals $\mathbb{Q}$) with ring of integers $R$ (the integral closure of $\mathbb{Z}$ in $K$). For a concrete situation, look at the quadratic case. The objects will be fractional ideals of $K$, which are just the $R$-submodules $I$ of $K$ such that there is an $r$ in $R$ such that $I.r$ is a proper ideal of $R$. Dedekind showed that any such thing can be written uniquely as a product \[ I = P_1^{a_1} … P_k^{a_k} \] where the $P_i$ are prime ideals of $R$ and the $a_i$ are integers (if they are all natural numbers, $I$ will be a proper ideal of $R$). Clearly, if one multiplies two fractional ideals $I$ and $J$, the result $I.J$ is again a fractional ideal, so they form a group, and by Dedekind’s trick this group is the free Abelian group on all prime ideals of $R$. Next, we define an equivalence relation on this set, calling two fractional ideals $I$ and $J$ equivalent if there is a $k$ in $K$ such that $I=J.k$ (or, if you prefer, if they are isomorphic as $R$-modules). We have a set with an equivalence relation and hence a groupoid where there is a unique isomorphism between any two equivalent objects. This groupoid is precisely the groupoid of isomorphisms of the Frobenioid we’re after. The number of equivalence classes is finite and these classes correspond to the elements of a finite group $Cl(R)$ called the class group of $R$, which is the quotient group of ideals modulo principal ideals (so if your $R$ is a principal ideal domain there is just one component). The ‘groups’ corresponding to each connected component of the groupoid are all isomorphic to the quotient group of the units in $K$ by the units in $R$. Next, we will add the other morphisms. By definition they are all compositions of irreducibles, which come in 2 flavours: – the order-morphisms $P$ for any prime ideal $P$ of $R$, sending $I$ to $I.P$. Typically, these maps switch between different equivalence classes (unless $P$ itself is principal). We can even explicitly compute small-norm prime ideals which will generate all elements in the class group $Cl(R)$. – the power-maps $[p]$ for any prime number $p$, which send $I$ to $I^p$. The nature of these maps really depends on the order of the component in the finite group $Cl(R)$.
Well, that’s it basically for the layer of the Frobenioid corresponding to the number field $K$. (You have to repeat all this for any subfield between $\mathbb{Q}$ and $K$.) A cute fact is that the endomorphism-monoids of objects in the layer of $K$ are all isomorphic, as abstract monoids, to the skew-monoid $\mathbb{N}_{>0}^{\times} \times Prin(R)$ of the multiplicative monoid of all strictly positive integers with the monoid of all principal ideals in $R$, with multiplication defined by \[ (n,Ra).(m,Rb)=(nm,Rab^n) \] The only extra type of morphisms we still have to include are those between the different layers of the Frobenioid, the green ones which M calls the pull-back morphisms. They are of the following form: if $R_1$ and $R_2$ are rings of integers in the fields $K_1$ contained in $K_2$, then for any ring morphism $\sigma : R_1 \rightarrow R_2$ one can extend a fractional ideal $I$ of $R_1$ to $K_2$ by considering $R_2.\sigma(I)$. These then give the morphisms $R_2.\sigma(I) \rightarrow I$ and, as we will see in a next instalment, they encode the splitting behaviour of prime ideals. a question for category people Take the simplest situation, that of the integers $\mathbb{Z}$. So, we have just a groupoid with extra morphisms generated by the order-maps $o_p$ and the power maps $f_p$. The endo-monoid of any object is then isomorphic to the abstract monoid generated by the $f_p$ and $o_p$ and satisfying the following relations \[ o_p.o_q=o_q.o_p \] \[ f_p.f_q=f_q.f_p \] \[ f_p.o_q=o_q^p.f_p \] My question now is: if for two different primes $p$ and $q$ i switch their roles in the endo-monoid of one object and propagate this via all isos to all morphisms, do i get a category equivalence? (or am i missing something?). (tbc) June 11th, 2013 my problem with Mochizuki’s Frobenioid1 Let us see how much arithmetic information can be reconstructed from an arithmetic Frobenioid. Recall that for a fixed finite Galois extension $L$ of $\mathbb{Q}$ this is a category with objects all fractional ideals in subfields of $L$, and maps generated by multiplication-maps with ideals in rings of integers, power-maps and Galois-extension maps. When all objects and morphisms are labeled it is quite easy to reconstruct the Galois field $L$ from it, as well as all maps between prime spectra of rings of integers in intermediate fields, which after all was the intended use of Frobenioids: to ‘dismantle’ these arithmetic schemes and endow them with extra structure given by the power-maps. However, in this reconstruction process we are only allowed to use the category structure, so all objects and morphisms are unlabelled (the situation top left) and we want to reconstruct from it the different layers of the Frobenioid (corresponding to the different subfields) and divide all arrows according to their type (situation bottom left). First we can look at all isomorphisms. They will divide the category in the dashed regions, some of them will be an entire layer (for example for $\mathbb{Q}$) but in general a finite number of these regions will make up the full layer of a subfield (the regions labeled by the elements of the ideal class group). Another categorical notion we can use is that of ‘irreducible morphisms’, that is, a morphism $f$ which is not an iso but has the property that in each factorisation $f = g \circ h$ either $g$ or $h$ must be an iso.
If we remember the different types of morphisms in our Frobenioid, we see that the irreducibles come in 3 flavours: – oRder-maps (Red) : multiplication by a prime ideal $P$ of the ring of integers of the subfield – froBenius or power-maps (Blue) sending a fractional ideal $I$ to $I^p$ for a prime number $p$ – Galois-maps (Green) extending ideals from a subfield $K$ to $K’$ having no intermediate field. We would like to determine the colour of these irreducibles purely categorically. The idea is that Reds have the property that they can be composed with another irreducible (in fact, of power type) such that the composition can again be decomposed into irreducibles, with no a priori bound on the number of these terms (this uses the fact that $[p] \circ Q = Q \circ Q \circ … Q \circ [p]$ and that there are infinitely many prime numbers $p$). One checks that compositions of order or Galois maps with irreducibles have factorisations with a bounded number of irreducibles. The most interesting case is the composition of a Galois map with an order map $P$; this can be decomposed alternatively as order maps in the bigger field followed by a Galois map, the required order maps being the bigger primes $Q_i$ occurring in the decomposition of the extended ideal \[ S.P = Q_1.Q_2….Q_k \] but the number $k$ of this decomposition is bounded by the dimension of the bigger field over the smaller one. Summarizing: – we can determine all the red maps, which will then also give us the different layers – we can determine the green maps as they move between different layers – to the remaining blue ones we can even associate their label $[p]$ by the observed property of composition with order maps. Taking an object in a layer, we get the set of prime ideals of the ring of integers in that field as the set of all red arrows leaving that object up to equivalence (by composing with an isomorphism), so we get the prime spectrum $\mathbf{Spec}(S)$. For a ring-extension $R \rightarrow S$ we can also recover the cover map $\mathbf{Spec}(S) \rightarrow \mathbf{Spec}(R)$. Indeed, decomposing the composition of the Galois map with the order-map $P$ alternatively will give us the finite number of prime ideals $Q_i$ of $S$ lying over $P$. That is, we get all splitting behaviour of prime ideals in intermediate field-extensions. Let $K$ be a Galois subfield of $L$; then we have a way to see how a prime ideal in $\mathbf{Spec}(\mathbb{Z})$ splits, ramifies or remains inert in $K$, and so by Chebotarev density this gives us the dimension of $K$ as well as the Galois group. And, if we could label the prime ideals by prime numbers $p$, we could even reconstruct $K$ itself, as $K$ is determined once we know all prime numbers which split completely. The problem i have is that i do not see a categorical way to label the red arrows in $Frob(\mathbb{Z})$ by prime numbers. Mochizuki says we can do this in the proof of Thm 6.4(iii) by using the fact that the $log(p)$ are linearly independent over $\mathbb{Q}$. This suggests that one might use the ‘Arakelov information’ (that is, the archimedean valuations) to do this (the bit i left out so far), but i do not see this in the case of $\mathbb{Q}$ as there is just 1 extra (real) valuation determined by the values of the nonarchimedean valuations. Probably i am missing something, so all sorts of enlightenment are welcome!
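The composition rule of the baby arithmetic Frobenioid from the June 5th post is easy to sanity-check with exact rational arithmetic; a small sketch (the particular values are chosen arbitrarily):

```python
from fractions import Fraction

# A morphism (n, r, z): (a) -> (b) exists iff a^n * z == b * r,
# so b is determined by a and the label.
def target(a, n, r, z):
    return a**n * z / r

a = Fraction(3, 2)
n1, r1, u1 = 2, Fraction(5, 7), 4            # first morphism (n, r, u)
m1, s1, v1 = 3, Fraction(2, 9), 6            # second morphism (m, s, v)

b = target(a, n1, r1, u1)
c = target(b, m1, s1, v1)

# The composite should carry the label (m*n, r^m * s, u^m * v):
n2, r2, z2 = m1 * n1, r1**m1 * s1, u1**m1 * v1
assert target(a, n2, r2, z2) == c
print("composition rule checked:", (n2, r2, z2))
```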
Azuma’s inequality is a useful result concerning martingales. A (discrete-time) martingale is a stochastic process \(X_0, X_1, \ldots \) which satisfies $$\mathbf{E}(|X_i|) < \infty$$ for all \(i\), and $$\mathbf{E}(X_{n+1} | X_1, \ldots, X_n) = X_n.$$ Roughly speaking, martingales correspond to fair betting games: if \(X_n\) is my fortune after \(n\) rounds of a game, my expected fortune after playing the \((n+1)\)-st round is equal to my fortune before playing that round. Azuma’s inequality applies to martingales in which the differences \(|X_{n+1} - X_n|\) are bounded. Specifically, suppose $$|X_i - X_{i-1}| \leq c_i$$ for some constants \(c_i\), for \(i = 1, 2, \ldots, m\). Define \(C = \sum_{i = 1}^m c_i^2\). Then Azuma’s inequality states that $$\mathbf{P}(X_m - X_0 > \lambda) \leq e^{- \lambda^2 / (2C)}.$$ That is, the martingale \(X_1, X_2, \ldots, X_m\) is tightly concentrated around its starting value. I just posted an essay which gives some applications of Azuma’s inequality to combinatorics and theoretical computer science, which is available here. Azuma’s inequality is an example of the concentration of measure phenomenon, which has rich applications in combinatorics, probability theory and Banach space theory. This book gives a very thorough survey of the phenomenon, although it approaches the subject from a very geometric/measure theoretic standpoint. A more condensed (and perhaps user-friendly) overview is available on Terence Tao’s blog.
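A quick simulation makes the bound concrete; here is a minimal sketch for the simple ±1 random walk, where every \(c_i = 1\) and hence \(C = m\):

```python
import math
import random

# Simple +/-1 random walk: a martingale with |X_i - X_{i-1}| = 1, so C = m.
# Compare the empirical tail P(X_m - X_0 > lam) with the bound exp(-lam^2 / (2C)).
m, lam, trials = 1000, 60.0, 20000
C = m
exceed = 0
for _ in range(trials):
    X = sum(random.choice((-1, 1)) for _ in range(m))   # X_m - X_0
    exceed += (X > lam)

print("empirical tail:", exceed / trials)                # roughly 0.03 here
print("Azuma bound:   ", math.exp(-lam**2 / (2 * C)))    # exp(-1.8) ~ 0.165
```

As expected, the bound holds with room to spare: Azuma's inequality is not tight for the simple walk, but it applies uniformly to every martingale with the same increment bounds.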
Now showing items 1-10 of 52

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...

Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...

System-size dependence of the charged-particle pseudorapidity density at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...

Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...

Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Now showing items 1-10 of 17

J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...

Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...

Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...

Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...

Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...

Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...

Measurement of quarkonium production at forward rapidity in pp collisions at √s = 7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
I am currently discussing generating functions in my discrete mathematics class. The professor showed us how to solve counting problems of the form $x_1 + x_2 + x_3 = n$ with $x_i$ and $n$ positive integers and some restrictions. I understand the procedure, but I cannot find why we redefine the problem this way. If someone could maybe give me a link (because I've been looking forever) or a proof of why it works (other than the intuitive "the exponents add up so it works") I would be eternally grateful :D. Here's what I (think) I understand: A generating function would be a way to "encode" information about any real number sequence, finite or infinite, $a_0, a_1, \ldots, a_k, \ldots$ into a power series $$\sum_{k\geq 0}{}{a_kx^k}$$ where $x$ is an "indeterminate" variable, whose value we don't really care about (maybe?), seeing it more as a position indicator. Therefore, we kind of take for granted that for some appropriate $x$ the function is analytic and we can manipulate it. Alright, so from there I get it: it's kind of a way to represent a sequence, and by finding its generating function we can find the $k^{th}$ term desired. But here's the example we did with no explanation of why it worked. "Find the number of solutions of $$e_1 + e_2 + e_3 = 17$$ such that $$2\leq e_1\leq5$$ $$3\leq e_2\leq 6$$ $$4\leq e_3\leq7$$ and $n$, $e_1$, $e_2$, $e_3$ $\in \mathbb{Z}_\geq0$." And the way they do it is just saying "well, all you have to do is find the coefficient of $x^{17}$ in $(x^2+x^3+x^4+x^5)(x^3+x^4+x^5+x^6)(x^4+x^5+x^6+x^7)$". But from what I've been told so far it's kind of multiplying the 3 generating functions of the sequences $(0,0,1,1,1,1,0...)(0,0,0,1,1,1,1,0,...)(0,0,0,0,1,1,1,1,0,0...)$. Is there something I am missing, or maybe is there no real "logic" behind it and we just define it this way because it works..?? Anyways, thank you everyone. I tried to make my question as clear as possible; I'm sorry if the question is still unclear, but ultimately I'm trying to understand why we define it this way other than "because it works". Thank you!!!!
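For what it's worth, the mechanism is easy to see numerically: expanding the product means choosing one term $x^{e_1}$, $x^{e_2}$, $x^{e_3}$ from each factor, and each such choice contributes exactly $1$ to the coefficient of $x^{e_1+e_2+e_3}$, so the coefficient of $x^{17}$ literally counts the admissible solutions. A small sketch confirming this against brute force:

```python
from itertools import product

def poly_mul(p, q):
    """Multiply polynomials given as dicts {exponent: coefficient}."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return out

f1 = {e: 1 for e in range(2, 6)}   # x^2 + x^3 + x^4 + x^5
f2 = {e: 1 for e in range(3, 7)}   # x^3 + ... + x^6
f3 = {e: 1 for e in range(4, 8)}   # x^4 + ... + x^7
prod_poly = poly_mul(poly_mul(f1, f2), f3)

# Brute force over all (e1, e2, e3) for comparison:
brute = sum(1 for e in product(range(2, 6), range(3, 7), range(4, 8))
            if sum(e) == 17)
print(prod_poly[17], brute)        # both print 3: the coefficient counts solutions
```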
I am currently working on my bachelor’s diploma. The research concerns mixed finite element methods for the 2D Stokes system $$ - \Delta \boldsymbol u + \nabla p = \boldsymbol f, \quad \boldsymbol x \in \Omega \subset \mathbb R^2, \\ \nabla \cdot \boldsymbol u = 0, \quad \boldsymbol x \in \Omega, $$ with homogeneous Dirichlet (no–slip) BCs $$ \boldsymbol u = \boldsymbol 0, \quad \boldsymbol x \in \partial \Omega. $$ Obtaining the weak form and choosing appropriate spaces $\mathbb U_h \times \mathbb P_h \ni (\boldsymbol u_h, \, p_h)$ for the velocity field and pressure distribution, one reduces the problem to a saddle–point linear system $$ \begin{bmatrix} \mathbf A & \mathbf B^T \\ \mathbf B & \mathbf 0 \end{bmatrix} \begin{bmatrix} \boldsymbol \xi \\ \boldsymbol \psi \end{bmatrix} = \begin{bmatrix} \boldsymbol b \\ \boldsymbol 0 \end{bmatrix}, $$ from which one then gets FE–solutions (using the natural isomorphism $\mathcal P$ between vectors and FE–interpolants) for pressure $p = \mathcal P \, \boldsymbol \psi $ and ditto for velocity components. I focus on the analysis and implementation of (geometric) multigrid solving techniques for this system. I have experience [Russian text] in solving elliptic and time–dependent parabolic and hyperbolic PDEs, yet I’ve never dealt with saddle–point problems before. So I want to clarify several aspects. It is well–known that one should carefully choose FE–spaces in order to obtain a well–posed problem. I focus on two LBB–stable finite element pairs: i. the $\Delta \, \boldsymbol P^1 \, CR — \Delta \, P^0 \, L$ finite element pair (non–conform space for the velocity components), ii. the $\Delta \, \boldsymbol P^2 \, L — \Delta \, P^1 \, L$ finite element pair (also known as Taylor–Hood). Let me clarify these notations. I follow Ciarlet’s definition of FE here: $“\Delta”$ defines geometry (a triangle in my case), $“P^i”$ defines a polynomial space for shape functions, and the last component defines DOFs (which one uses to obtain shape functions and their cross–element behavior)—$“L”$ for “Lagrange” and $“CR”$ for “Crouzeix–Raviart.” I have some visualizations for (i) here. As far as I understand, inf–sup stability of these elements should guarantee existence of a unique solution of the above linear system. However, since the momentum equation involves only the pressure gradient, the pressure is determined up to a constant; so one usually requires $p$ to have zero mean value: $\int_\Omega p \, d \boldsymbol x = 0$. The question is, should I enforce this constraint explicitly in the system? The authors of [1, p. 309] suggest the following modification (they use the FE pair (i)): $$ \begin{bmatrix} \mathbf A & \mathbf B^T & \boldsymbol 0 \\ \mathbf B & \mathbf 0 & \boldsymbol a \\ \boldsymbol 0 & \boldsymbol a^T & 0 \end{bmatrix} \begin{bmatrix} \boldsymbol \xi \\ \boldsymbol \psi \\ \mu \end{bmatrix} = \begin{bmatrix} \boldsymbol b \\ \boldsymbol 0 \\ 0 \end{bmatrix}. $$ Other publications I’ve seen so far do not mention such modifications. Moreover, since I am interested in the implementation of geometric multigrid, I have to assemble prolongation / restriction operators (sparse matrices); this is pretty straightforward for velocity components and pressure vectors (however, the non–conform case will require some special handling), yet it is not for $\mu$. UPD: I noticed this answer.
It seems pretty reasonable that, since the “discrete version” of $\nabla p$ is $\mathbf B^T \boldsymbol \psi$, the operator $\mathbf B^T$ should have a kernel spanned by the constant (all-ones) vector (I should check this by direct computation, for not-too-big matrices of course, when I am done with the assembly routine). This also implies that the final system actually has an infinite number of solutions (doesn't it?). However, this should not prevent a Krylov solver from converging, and one can enforce the desired constraint on the pressure at the post-processing step. But now I am a bit confused: does LBB-stability of a finite element pair really imply existence and uniqueness? Isn't it really about existence only? According to [2, p. 99], for example, it is reasonable to use multigrid as a preconditioner for a Krylov solver for div-grad elliptic problems. For the Stokes problem, is it reasonable to use multigrid (for example, with the Vanka smoother) as a preconditioner for MINRES? MINRES seems reasonable since the system is symmetric and sign-indefinite. I have seen articles suggesting using multigrid as a stand-alone solver. I also want to compare my future implementation with “black-box” Krylov solvers (for example, in terms of residual history), and I do not want to implement them all. I have already implemented data structures for CSC and symmetric CSlC sparse matrices (these data structures are used for the $\mathbf B$ and $\mathbf A$ blocks of the system, respectively). I have also implemented import / export in the Harwell–Boeing format for these data structures (the HB format is pretty popular and supported, for example, by Mathematica). So I am wondering which tools I can use. It would be nice if the solver routine provided the residual history (not only the solution vector). Any suggestions? References: [1] Mats G. Larson, Fredrik Bengzon, The Finite Element Method: Theory, Implementation, and Applications, 2013. [2] Maxim A. Olshanskii, Eugene E. Tyrtyshnikov, Iterative Methods for Linear Systems, 2014.
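Regarding tools: if a Python prototype is acceptable, SciPy's MINRES accepts a callback, so the residual history can be recorded without implementing the solver yourself. A minimal sketch, assuming the blocks $\mathbf A$ and $\mathbf B$ have already been assembled as SciPy sparse matrices (the function and variable names are mine):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import minres

    def stokes_matrix(A, B):
        """Assemble K = [[A, B^T], [B, 0]] from the velocity block A
        and the divergence block B."""
        return sp.bmat([[A, B.T], [B, None]], format="csr")

    def solve_with_history(K, b):
        history = []

        def callback(xk):
            # MINRES passes the current iterate; record the residual norm.
            history.append(np.linalg.norm(b - K @ xk))

        x, info = minres(K, b, callback=callback)
        return x, info, history

The kernel claim is also cheap to test once $\mathbf B$ is assembled: np.linalg.norm(B.T @ np.ones(B.shape[0])) should be numerically zero if the constant pressure mode is indeed in the kernel of $\mathbf B^T$.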
Kenneth Alexander proved a uniform law of the iterated logarithm for Vapnik–Chervonenkis classes in the article Probability Inequalities for Empirical Processes and a Law of the Iterated Logarithm (Ann. Probab. 12 (1984), no. 4, 1041–1067). It states that given a VC class of sets $\mathcal C$ and a sequence of i.i.d. random variables $(Y_n)_n$, the following holds almost surely: $$ \limsup_{n \rightarrow \infty} \sup_{C \in \mathcal C} \frac{\left|\sum_{i=1}^{n} 1_{C}(Y_i) - nP_{Y_1}(C)\right|}{\sqrt{ 2 n \log \log n}} = \sup_{C \in \mathcal C} \left(P_{Y_1}(C)(1-P_{Y_1}(C))\right)^{1/2} $$ What if one has a slightly more general setting: the variables $(Y_n)_n$ "live" on a general measure space $(\Omega, {\cal F}, \mu)$, where $\mu$ is a finite measure. Which conditions on the measure $\mu$ and the variables $(Y_n)_n$ should be imposed for the statement above to hold? Best regards
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09): Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Bunuel wrote: If \(|3x + 7| \geq 2x + 12\), then which of the following is true? A. \(x \leq \frac{-19}{5}\) B. \(x \geq \frac{-19}{5}\) C. \(x \geq 5\) D. \(x \leq \frac{-19}{5}\) or \(x \geq 5\) E. \(\frac{-19}{5} \leq x \leq 5\)

Square both sides: \(9x^2+42x+49\geq 4x^2+48x+144\), so \(5x^2-6x-95\geq 0\), i.e. \(5x^2-25x+19x-95\geq 0\), which factors as \((5x+19)(x-5)\geq 0\). So either \(x\leq -19/5\) or \(x\geq 5\): D.

Or open the modulus: I. \(3x+7\geq 2x+12\), giving \(x\geq 5\). II. \(-3x-7\geq 2x+12\), giving \(5x\leq -19\). Again D.

Or the answer choices themselves should help us: A, B, and C are subsets of D and E, so if D or E is correct, one of A, B, and C would also be correct. So our answer has to be D or E. Now, to choose between D and E, take \(x = 0\): the inequality becomes \(7 \geq 12\), which is false, so our answer should not contain 0 as a value. Eliminate E. D
I need help finding the radius of convergence for $\sum_n \frac{x^n}{n\sqrt{n}\,12^n}$. I tried both the root and ratio tests, but neither is getting me anywhere. Trying the root test, I reduced it to $\lim \limits_{n \to \infty}\left(\frac{x}{n^{1/2}n^{1/4}\cdot 12}\right)$, which is either incorrect or not helpful. I need to find the radius of convergence as well as the interval of convergence, then determine absolute and conditional convergence.
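As a quick numerical sanity check (my own sketch, assuming the series is $\sum_n \frac{x^n}{n^{3/2}\,12^n}$), the ratio of successive coefficients can be evaluated directly; it visibly settles toward $1/12$, so the ratio test at least produces a finite limit here:

    # Ratio |a_{n+1}/a_n| for a_n = 1/(n^(3/2) * 12^n); the radius of
    # convergence is the reciprocal of the limit of this ratio.
    for n in (10, 100, 1000, 10000):
        ratio = n ** 1.5 / ((n + 1) ** 1.5 * 12)
        print(n, ratio)  # approaches 1/12 = 0.0833... as n grows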
Let $d_1$ and $d_2$ be two metrics on the same set $M$. Then $d_1$ and $d_2$ are called uniformly equivalent if the identity maps $i:(M,d_1)\rightarrow(M,d_2)$ and $i^{-1}:(M,d_2)\rightarrow(M,d_1)$ are uniformly continuous. Now this textbook gives the following exercise: Given any metric space $(M,d)$, show that the metric $\rho=\frac{d}{1+d}$ is always uniformly equivalent to $d$. My question is, is the result of the exercise correct? Because two metrics are uniformly equivalent if and only if they induce the same uniformity, and if two metrics induce the same uniformity then they have the same bounded sets. Yet all sets are bounded with respect to $\frac{d}{1+d}$, whereas all sets need not be bounded with respect to $d$. Where am I going wrong?
In light of the discovery of the Higgs boson: the Higgs boson is a force particle which interacts with matter particles. My question is, what does the Higgs boson interact with to give itself mass?

Yes, it is correct to say that the Higgs boson, just like other elementary particles, gets its mass from the interaction with the Higgs boson – which means "with itself" in the case of this particle. More concretely, the mass may be derived from the Higgs potential (energy density) $$ V(h) = \frac a4 h^4 - \frac b2 h^2 + c$$ where the additive shift $c$ isn't important (in the absence of curved spacetime). The constants $a,b$ determine the shape of the potential or, if you wish, the coefficients of the "quadratic" and "quartic" self-interactions of the Higgs field. The function $V(h)$ has minima at $$ h_\pm = \pm \sqrt{\frac{b}{a}} $$ and near these minima, the potential may be approximated as $$ V(h_\pm + \Delta h ) = c' + \frac{m^2}{2} (\Delta h)^2 $$ and because of this reinterpretation, the parameter $m$ in the coefficient $m^2/2$ above (which is a simple function of $a,b$ again) may be interpreted as the Higgs mass. This quadratic-plus-quartic self-interaction plays the role of the gauge interaction $h$-$h$-$Z$ or $h$-$h$-$W$ which gives masses to the Z bosons or W bosons, or of the $h\bar{\psi}\psi$ Yukawa interactions that give masses to the fermions. There is one sense in which the "God particle giving mass to itself" is somewhat more vacuous: we had to use the quadratic term, $-bh^2/2$, itself, and quadratic terms are of course nothing else than mass terms in disguise (although the sign of $b$ was the opposite of the sign of a "direct" Higgs mass term). So we could only derive the mass term around the physical vacuum $h_\pm$ because we inserted some (tachyonic) mass term into the initial formula. But that doesn't change the fact that even this mass term of the Higgs boson may be interpreted as a self-interaction of the Higgs with itself.
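For completeness, a short check of how $m$ follows from $a$ and $b$ (my own addition, standard algebra rather than part of the quoted answer): $$ V'(h) = a h^3 - b h, \qquad V''(h) = 3 a h^2 - b, $$ so at the minima, where $h_\pm^2 = b/a$, one gets $$ V''(h_\pm) = 3b - b = 2b, $$ and matching $\frac{m^2}{2}(\Delta h)^2 = \frac{V''(h_\pm)}{2}(\Delta h)^2$ gives $m^2 = 2b$.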
I would like to model OTM swaptions. I can use some implementation of the Bachelier model (not Black-76, due to negative rates) and implied volatilities from Bloomberg. For a 10Y x 10Y swaption (10 years option maturity, 10 years swap length) that is 100 bps OTM, I get something like 100% volatility p.a. for the EUR. I would like to make this number plausible. Does it make sense to use a delta-normal method here, i.e. to apply $$ \sigma \approx \Delta \times \text{vola}(\text{rate}) \times f, $$ where $f$ should be some leverage factor? Which rate would I use? What if I look at duration instead and put $$ \sigma \approx D \times \text{vola}(\text{rate})? $$ Again, which rate? Are there comparable numbers on the web or in Bloomberg? I cannot estimate the volatility of swaptions in PORT, right?
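For a rough plausibility check, the Bachelier payer price is available in closed form, so one can see directly what a given normal volatility implies for an OTM swaption. A minimal Python sketch (my own; the numbers are purely illustrative assumptions, not Bloomberg data or conventions):

    from math import sqrt
    from statistics import NormalDist

    def bachelier_payer(F, K, sigma_n, T, annuity=1.0):
        """Payer swaption price under the normal (Bachelier) model.

        F, K    : forward swap rate and strike in absolute terms (0.01 = 100 bps)
        sigma_n : normal volatility in absolute rate units per sqrt(year)
        T       : option expiry in years
        annuity : swap annuity (level); 1.0 gives the price per unit annuity
        """
        nd = NormalDist()
        s = sigma_n * sqrt(T)
        d = (F - K) / s
        return annuity * ((F - K) * nd.cdf(d) + s * nd.pdf(d))

    # Illustrative 10Y x 10Y payer, struck 100 bps above an assumed 1% forward,
    # with an assumed 70 bps/year normal vol.
    print(bachelier_payer(F=0.01, K=0.02, sigma_n=0.0070, T=10.0))

For the delta-normal approximation, the natural choice of rate would presumably be the forward swap rate of the underlying swap (here the 10Y10Y forward rate), since that is the variable the swaption delta is taken with respect to.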
Several statistical tests may be automatically performed to test the different components of the model. These tests use individual parameters drawn from the conditional distribution, which means that you need to run the task “Conditional distribution” in order to get these results. In addition, the tests for the residuals require the residual diagnostic plots (scatter plot or distribution) to have been generated first. The tests are all performed using the individual parameters sampled from the conditional distribution (or the random effects and residuals derived from them). They are thus not subject to bias in case of shrinkage. For each individual, several samples from the conditional distribution may be used; the tests include a correction to take into account that these samples are correlated with each other. Results of the tests are available in the “Results” tab by selecting “Tests” in the left menu.

The model for the individual parameters

Consider a PK example (warfarin data set from the demos) with the following model for the individual PK parameters (ka, V, Cl). In this example, the different assumptions we make about the model are:

The 3 parameters are lognormally distributed.
ka is a function of sex only.
V is a function of sex and weight. More precisely, the log-volume log(V) is a linear function of the log-weight \({\rm lw70 }= \log({\rm wt}/70)\).
Cl is not a function of any of the covariates.
The random effects \(\eta_V\) and \(\eta_{Cl}\) are linearly correlated.
\(\eta_{ka}\) is not correlated with \(\eta_V\) and \(\eta_{Cl}\).

Let's see how each of these assumptions is tested.

Covariate model

Individual parameters vs covariates – Test whether covariates should be removed from the model

If an individual parameter is a function of a continuous covariate, the linear correlation between the transformed parameter and the covariate is not 0, and the associated \(\beta\) coefficient is not 0 either. Pearson's correlation tests and Wald tests are therefore used to test whether continuous covariates should be removed from the model. ANOVA and Wald tests are performed for categorical covariates in the same way.

Pearson's correlation test and ANOVA

For continuous covariates, Pearson's correlation test tests the following null hypothesis:

H0: the Pearson correlation coefficient between the individual parameters sampled from the conditional distribution and the covariate values is zero

For categorical covariates, the one-way ANOVA tests the following null hypothesis:

H0: the mean of the individual parameters sampled from the conditional distribution is the same for each category of the categorical covariate

A small p-value indicates that the null hypothesis can be rejected and thus that the correlation between the individual parameter values and the covariate values is significant. If this is the case, the covariate should be kept in the model. Conversely, if the p-value is large, the null hypothesis cannot be rejected, and this suggests removing the covariate from the model. High p-values are colored in yellow (p-value in [0.01-0.05]), orange (p-value in [0.05-0.10]) or red (p-value in [0.10-1]) to draw attention to parameter-covariate relationships that can be removed from the model from a statistical point of view.
In our example, the ANOVA test indicates that the mean of the individual ka values is not significantly different for males and females, which suggests removing the covariate sex from the parameter ka. Remark: the two covariates weight and sex are strongly dependent. Hence, the fact that both lw70 and sex are significant on the parameter V does not mean that both covariates should be kept in the model.

Wald test

The Wald test relies on the standard errors. Thus the task “Standard errors” must have been run to see the test results. The test can be performed using the standard errors calculated either with the “linearization method” (indicated as “linearization” in the tests) or without it (indicated as “stochastic approximation”). The Wald test tests the following null hypothesis:

H0: the beta parameter is equal to zero (in case of more than two groups for categorical covariates: all beta parameters are equal to zero)

A small p-value indicates that the null hypothesis can be rejected and thus that the estimated beta parameter is significantly different from zero. If this is the case, the covariate should be kept in the model. Conversely, if the p-value is large, the null hypothesis cannot be rejected, and this suggests removing the covariate from the model. Note that if beta is equal to zero, then the covariate has no impact on the parameter. High p-values are colored in yellow (p-value in [0.01-0.05]), orange (p-value in [0.05-0.10]) or red (p-value in [0.10-1]) to draw attention to parameter-covariate relationships that can be removed from the model from a statistical point of view. In our example, the Wald test suggests removing sex from ka and V. Remark: the Wald test and the Pearson/ANOVA tests may suggest different covariates to keep or remove; note that the null hypotheses tested are not the same.

Random effects vs covariates – Test whether covariates should be added to the model

Pearson's correlation tests and ANOVA are performed to check whether some relationships between the random effects and covariates not yet included in the model should be added to the model. For continuous covariates, Pearson's correlation test tests the following null hypothesis:

H0: the Pearson correlation coefficient between the random effects (calculated from the individual parameters sampled from the conditional distribution) and the covariate values is zero

For categorical covariates, the one-way ANOVA tests the following null hypothesis:

H0: the mean of the random effects (calculated from the individual parameters sampled from the conditional distribution) is the same for each category of the categorical covariate

A small p-value indicates that the null hypothesis can be rejected and thus that the correlation between the random effects and the covariate values is significant. If this is the case, it is probably worth considering adding the covariate to the model. Note that the decision to add a covariate to the model should be driven not only by statistical considerations but also by biological relevance. Note also that for parameter-covariate relationships already included in the model, the correlation between the random effects and the covariates is not significant (while the correlation between the parameter and the covariate can be – see above).
Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]) to draw attention to parameter-covariate relationships that can be considered for addition to the model from a statistical point of view. In our example, we already have sex on ka and V, and lw70 on V in the model. The only remaining relationship that could possibly be worth investigating is between weight (or the log-transformed weight “lw70”) and clearance.

The model for the random effects

Distribution of the random effects – Test if the random effects are normally distributed

In the individual model, the distributions of the parameters assume that the random effects follow a normal distribution. Shapiro-Wilk tests are performed to test this hypothesis. The null hypothesis is:

H0: the random effects are normally distributed

If the p-value is small, there is evidence that the random effects are not normally distributed, and this calls the choice of the individual model (parameter distribution and covariates) into question. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). In our example, there is no reason to reject the null hypothesis and no reason to question the chosen log-normal distributions for the parameters.

Joint distribution of the random effects – Test if the random effects are correlated

Pearson's correlation tests are performed to test whether the random effects (calculated from the individual parameters sampled from the conditional distribution) are linearly correlated. The null hypothesis is:

H0: the correlation between the random effects of the first and second parameter is zero

For correlations not yet included in the model, a small p-value indicates that there is a significant correlation between the random effects of two parameters and that this correlation should be estimated as part of the model (otherwise simulations from the model will assume that the random effects of the two parameters are uncorrelated, which is not what is observed for the random effects estimated from the data). Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). For correlations already included in the model, a large p-value indicates that one cannot reject the hypothesis that the correlation between the random effects is zero. If the correlation is not significantly different from zero, it may not be worth estimating it in the model. High p-values are colored in yellow (p-value in [0.01-0.05]), orange (p-value in [0.05-0.10]) or red (p-value in [0.10-1]). In our example, we have assumed in the model that \(\eta_V\) and \(\eta_{Cl}\) are correlated. The high p-value indicates that the correlation between the random effects of V and Cl is not significantly different from zero and suggests removing this correlation from the model. Remark: as correlations can only be estimated by groups (i.e., if a correlation is estimated between (ka, V) and between (V, Cl), then one must also estimate the correlation between (ka, Cl)), it may happen that it is not possible to remove a non-significant correlation without also removing a significant one.
Starting from the 2019 version, the test is a t-test that checks:

H0: the expectation over the replicates of the correlations between the random effects of the first and second parameter is zero

The p-values have the same meaning and color code as with the previous test; however, this method is more powerful. Correlations already included in the model are now highlighted in blue.

The distribution of the individual parameters

Distribution of the individual parameters not dependent on covariates – Test if transformed individual parameters are normally distributed

When an individual parameter does not depend on covariates, its distribution (normal, lognormal, logit or probit) can be transformed into the normal distribution. Then, a Shapiro-Wilk test can be used to test the normality of the transformed parameter. The null hypothesis is:

H0: the transformed individual parameter values (sampled from the conditional distribution) are normally distributed

If the p-value is small, there is evidence that the transformed individual parameter values are not normally distributed, and this calls the choice of the parameter distribution into question. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). In our example, there is no reason to reject the null hypothesis of lognormality for Cl. Remark: testing the normality of a transformed individual parameter that does not depend on covariates is equivalent to testing the normality of the associated random effect. We can check in our example that the Shapiro-Wilk tests for \(\log(Cl)\) and \(\eta_{Cl}\) are equivalent.

Distribution of the individual parameters dependent on covariates – Test the marginal distribution of each individual parameter

Individual parameters that depend on covariates are no longer identically distributed. Each transformed individual parameter is normally distributed, with its own mean that depends on the values of the individual covariates. In other words, the distribution of an individual parameter is a mixture of (transformed) normal distributions. A Kolmogorov-Smirnov test is used to test the distributional adequacy of these individual parameters. The null hypothesis is:

H0: the individual parameters are samples from the mixture of transformed normal distributions (defined by the population parameters and the covariate values)

A small p-value indicates that the null hypothesis can be rejected. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). With our example, we obtain:

The model for the observations

A combined1 error model with a normal distribution is assumed in our example.

Distribution of the residuals

Different tests are performed for the individual residuals (IWRES), the NPDE, and the population residuals (PWRES).

Test if the distribution of the residuals is symmetrical around 0

A Van der Waerden test is used to test the symmetry of the residuals. Indeed, symmetry of the residuals around 0 is an important property that deserves to be tested, in order to decide, for instance, whether some transformation of the observations should be done.
The null hypothesis tested is:

H0: the distribution of the residuals is symmetrical around 0

A small p-value indicates that the null hypothesis can be rejected. Small p-values are colored in yellow (p-value in [0.05-0.10]), orange (p-value in [0.01-0.05]) or red (p-value in [0.00-0.01]). With our example, we obtain: Starting from the 2019 version, a more powerful symmetry test (Miao, Gel and Gastwirth, 2006) is used instead of the Van der Waerden test.

Test if the residuals are normally distributed

A Shapiro-Wilk test is used to test the normality of the residuals. The null hypothesis is:

H0: the residuals are normally distributed

If the p-value is small, there is evidence that the residuals are not normally distributed. The Shapiro-Wilk test is known to be very powerful; hence, a small deviation of the empirical distribution from the normal distribution may lead to a very significant test (i.e., a very small p-value), which does not necessarily mean that the model should be rejected. Thus, no color highlight is made for this test. In our example, we obtain:
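Outside the Monolix interface, the same families of tests can be reproduced on exported tables of individual parameters with standard libraries. A minimal Python sketch, assuming a hypothetical CSV export with columns eta_Cl, lw70 and sex (the file and column names are assumptions, and this ignores Monolix's correction for correlated conditional-distribution samples):

    import pandas as pd
    from scipy import stats

    # Hypothetical export: one row per individual.
    df = pd.read_csv("individual_parameters.csv")

    # Pearson correlation test: continuous covariate vs a random effect.
    r, p_pearson = stats.pearsonr(df["lw70"], df["eta_Cl"])

    # One-way ANOVA: categorical covariate vs a random effect.
    groups = [g["eta_Cl"].to_numpy() for _, g in df.groupby("sex")]
    f_stat, p_anova = stats.f_oneway(*groups)

    # Shapiro-Wilk: normality of the random effects.
    w_stat, p_shapiro = stats.shapiro(df["eta_Cl"])

    print(p_pearson, p_anova, p_shapiro)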
So I'm looking at running a plate from the terminal of a battery (26650) and I'm trying to calculate the width of the plate needed, given a certain thickness and length as well as a specified current. I have the resistivities of various metals (in order to calculate the best option in terms of price and size of plate), but I'm not sure how to proceed or what equation I could use. If someone could link me to a webpage with information, that would be amazing! Itachi

I am going to assume you mean something like a busbar or solid metal conductor with a rectangular cross-section.

1. You know \$ \rho \$, and its units are Ω⋅m (ohm-metre).
2. If you divide that by the cross-sectional area of your conductor you will have \$ \frac {Ω \cdot m}{m^2} = \mathrm {Ω/m} \$, which is the resistance per metre of your conductor.
3. If you multiply the figure obtained in 2 by the length of the conductor you will get \$ \frac {\Omega}{m} \cdot m = \Omega\$, the resistance of the conductor.
4. From \$ V = IR \$, using R calculated in 3, you can calculate the voltage drop along the conductor for a given current, \$ I \$.
5. From \$ P = I^2R \$ you can calculate the power in watts (W) dissipated in the conductor.

Amazing?
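Putting those steps into numbers, a small Python sketch (the values are illustrative assumptions only) that solves R = ρL/(w·t) for the width w needed to keep the voltage drop below a target:

    # Illustrative values, not a recommendation.
    rho = 1.68e-8        # ohm*m, resistivity (copper)
    length = 0.10        # m, plate length
    thickness = 1.0e-3   # m, plate thickness
    current = 20.0       # A
    max_drop = 0.010     # V, allowed drop along the plate

    r_target = max_drop / current                  # step 4 in reverse: R = V/I
    width = rho * length / (r_target * thickness)  # from R = rho*L/(w*t)
    power = current ** 2 * r_target                # step 5: P = I^2 R

    print(f"width = {width * 1e3:.2f} mm, dissipation = {power:.2f} W")

With these numbers the plate would need to be about 3.4 mm wide and would dissipate 0.2 W.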
$\mathscr T_X$ will denote the set of all functions from a non-empty set $X$ into itself, with the binary operation of composition $\circ$ making it a semigroup, called the full transformation semigroup on $X$. Is there a topology on the set $\mathscr T_X$ such that $\circ:\mathscr T_X\times \mathscr T_X\longrightarrow \mathscr T_X$ is a continuous function with respect to the product topology on $\mathscr T_X\times \mathscr T_X$? (That is, is there a topology on $\mathscr T_X$ making it a topological semigroup?) Clearly, two topologies (one if $X$ is a one-element set) always work: the discrete and the indiscrete topology on $\mathscr T_X$ make the composition continuous, as any function into an indiscrete space is continuous and any function from a discrete space is. (And the product of two discrete spaces is discrete.) I will call those two topologies trivial. These topologies don't seem useful at all, so I will rewrite the question. Let $\operatorname{card}(X)>1.$ Is there a non-trivial topology on $\mathscr T_X$ making it a topological semigroup? (Or at least, is there a construction of such a topology, depending on $\operatorname{card}(X)$, which yields non-trivial topologies at least in some cases?) I cannot think of any general approach to this question and I think I may not have the tools -- I know virtually nothing about topological semigroups. I would be grateful for any help, be it in the form of a hint, a reference, or a full or partial answer to the question. Also, please don't hesitate to comment on anything even remotely related. Re Tara B's answer: I may be mistaken, but I think your example works only for finite sets. In general, I think, when we have a semigroup $S$ and a non-empty proper subset $A\subset S,$ then $\{\emptyset, A,S\}$ forms a good topology iff the following two conditions are satisfied: $(1)$ $A$ is a subsemigroup of $S;$ $(2)$ $S\setminus A$ is an ideal in $S.$ Let's say that in this situation we call $A$ saturated and $S\setminus A$ prime. (I'm not sure if this is standard nomenclature, but I can imagine it might be.) Suppose a non-empty proper subset $A\subset \mathscr T_X$ is saturated. Then there is some $\phi\in A$ with $\phi\cdot\operatorname{id}=\phi\in A$; since $S\setminus A$ is an ideal, this forces $\operatorname{id}\in A.$ Let $\psi\in S_X.$ Then $\psi\psi^{-1}=\operatorname{id}\in A,$ and so $\psi\in A.$ Therefore $S_X\subseteq A.$ But also, let $\chi\in\mathscr T_X$ with $\chi\mathrel{\mathscr J}\operatorname{id}.$ Then for some $\alpha,\beta\in\mathscr T_X,$ we have $\alpha\chi\beta=\operatorname{id}.$ Hence $\alpha\chi\in A,$ and so $\chi\in A.$ Therefore the $\mathscr J$-class $J_{\operatorname{id}}$ is contained in $A.$ But for an infinite set $X,$ we have the proper containment $S_X\subsetneq J_{\operatorname{id}},$ because there are functions from $X$ to $X$ whose rank is equal to $\operatorname{card}(X)$ but which aren't permutations. So $S_X$ cannot be saturated. I think for an infinite set $X$ there will be no such $A$ at all. I'm unable to prove this, but I think we can obtain any function in $\mathscr T_X$ by composing functions in $J_{\operatorname{id}}.$ If that's true, then if $A$ were saturated, it would be a subsemigroup containing a set generating the whole of $\mathscr T_X$, and so $A=\mathscr T_X.$ $S_X$ clearly works for finite sets $X$ though. It's a subsemigroup of $\mathscr T_X$, and its complement is an ideal because a composition of functions of which at least one doesn't have maximal rank cannot have maximal rank either.
And for finite $X,$ a function $\phi: X\longrightarrow X$ is a permutation iff it has maximal rank. I believe $S_X$ is the only saturated subsemigroup of $\mathscr T_X$ for $X$ finite. EDIT: The statement "when we have a semigroup $S$ and a non-empty proper subset $A\subset S,$ then $\{\emptyset, A,S\}$ forms a good topology iff the following two conditions are satisfied: $(1)$ $A$ is a subsemigroup of $S;$ $(2)$ $S\setminus A$ is an ideal in $S$" is false. When the semigroup isn't a monoid, it may fail. For example, let $\mathbb N=\{1,2,\ldots\}$ be the additive semigroup of natural numbers and let $A=\{1\}.$ Then $\{\emptyset,A,\mathbb N\}$ is a good topology on $\mathbb N,$ because the inverse image of $A$ under addition is empty. This is impossible in a monoid. It is also impossible in a monoid for the inverse image of $A$ to be equal to $S\times S$, because the image of $S\times S$ under the semigroup operation is equal to $S.$ I have to think about it some more.
I need some help here. I'm reading about gravitational waves, particularly gravitational waves described by the linearized non-homogeneous Einstein equations \begin{equation} \left( -\frac{\partial^2}{\partial t^2} + \nabla^2 \right)\bar h_{\mu\nu}=-16\pi T_{\mu\nu}. \end{equation} Three classic books say the solution to this equation is known as the "retarded solution". From Schutz: \begin{equation} \bar h_{\mu\nu}(t,x^{i})=4\int\frac{T_{\mu\nu}\left( t-R, y^{i} \right)}{R}d^{3}y \end{equation} From Hartle: \begin{equation} h^{\alpha\beta}(t,\vec x)=4\int d^{3}x' \frac{\left[T^{\alpha\beta}\left( t', \vec x' \right)\right]_{ret}}{\left| \vec x - \vec x' \right|} \end{equation} From Schneider: \begin{equation} h^{\alpha\beta}(t,\vec x)=-\frac{4G}{c^4}\int \frac{ T^{\alpha\beta}\left( t-\frac{\left|\vec y \right |}{c},\vec x + \vec y \right)}{\left | \vec y \right |} d^{3}y \end{equation} At this point I guess these three equations represent the same solution to the wave equation, so my questions are: Where does this retarded solution come from? None of these books says where it comes from. What is the physical situation that it describes? As far as I know, $T$ is the stress-energy tensor related to the mass that deforms spacetime, so I guess that mass is the source of the gravitational waves. But if that's the case, the gravitational waves will eventually be so far from the source that the mass, or $T$ in this case, will be zero there; or is $T$ related to another mass?
Image of Interval by Continuous Function is Interval/Proof 1 Theorem Let $I$ be a real interval, and let $f: I \to \R$ be a real function which is continuous on $I$. Then the image of $f$ is a real interval. Proof Let $J$ be the image of $f$. By definition of real interval, it suffices to show that: $\forall y_1, y_2 \in J: \forall \lambda \in \R: y_1 \le \lambda \le y_2 \implies \lambda \in J$ So suppose $y_1, y_2 \in J$, and suppose $\lambda \in \R$ is such that $y_1 \le \lambda \le y_2$. Consider these subsets of $I$: $S = \left\{{x \in I: f \left({x}\right) \le \lambda}\right\}$ $T = \left\{{x \in I: f \left({x}\right) \ge \lambda}\right\}$ As $y_1$ is the image of some point of $S$ and $y_2$ the image of some point of $T$, it follows that $S$ and $T$ are both non-empty. Also, $I = S \cup T$. Since $I$ is an interval, $S$ and $T$ cannot lie at positive distance from each other: some point of one of them is at zero distance from the other. So, suppose that $s \in S$ is at zero distance from $T$. Then there is a sequence $\left\langle{t_n}\right\rangle$ in $T$ with $t_n \to s$, and by continuity of $f$, $f \left({t_n}\right) \to f \left({s}\right)$. But $\forall n \in \N_{> 0}: f \left({t_n}\right) \ge \lambda$. Therefore by Lower and Upper Bounds for Sequences, $f \left({s}\right) \ge \lambda$. We already have that $f \left({s}\right) \le \lambda$. Therefore $f \left({s}\right) = \lambda$ and so $\lambda \in J$. A similar argument applies if a point of $T$ is at zero distance from $S$. $\blacksquare$
Calculating the Expected Return of a Single Note

Introduction

While monitoring the performance of an investment is a universal requirement in finance, peer lending creates unique challenges. We previously introduced general principles and how one can aggregate returns at the portfolio level. This article focuses on the performance calculation of a single asset, and how we update it over time.

General Principle

As mentioned previously, we use the Internal Rate of Return (IRR) to calculate financial performance. IRR takes into account the time-value of money and is an industry-wide standard. The internal rate of return is the rate of compounding of a stream of cash flows. For instance, it is the discounting rate r such that the sum of n monthly payments of amount p_i equals the initial investment A in a loan: \[ A = \sum_{i=1}^n{\frac{p_i}{(1+r)^i}} \] In peer lending each loan is paid on a monthly basis, therefore the IRR calculated from loan payments is also monthly. To present meaningful numbers, we 'annualize' it by compounding it over 12 months: \[ R = (1+r)^{12} - 1 \] Please note that annualizing returns that occurred over short periods may suggest inflated performance. For example, annualizing a one-day return of 2% gives an unrealistic 137,640% over a year. It is customary to only annualize returns once they exceed one year. However, the notes in a portfolio were issued at different times. To be able to compare loan returns with various maturities within a portfolio, we do need to put them on the same time basis and annualize each of them indiscriminately. To counter inflated returns, we may decide to discard returns that are suspiciously high as non-computable when averaging returns over an entire portfolio.

Monthly Cash-flows

The cash-flow array of a loan starts with a big negative number, the loan amount funded by the lenders, followed by a series of monthly payments back to those same lenders. If everything goes according to plan, each payment is identical and equals the installment calculated at loan issuance. We subtract the marketplace fees from those payments to ensure more accurate results. For instance, on Lending Club, service fees equal 1%, therefore a \$273.97 monthly installment equals 273.97 × (1.00 − 0.01) = \$271.23 net monthly payment distributed amongst lenders.

Pre-payment

A borrower may decide to repay the loan earlier than term, which will result in a larger payment than expected. In case of full early repayment, the last cash-flow event must equal the outstanding principal plus accrued interest. The IRR in this case is identical to that of a loan fully paid at term. If the principal had not been completely repaid, it would have continued to generate interest, a portion of which would have been paid by each installment.

Discounting Payments

A bigger 'risk' is that a loan will default, which is when a borrower stops paying before the installments reach term. If a loan has a probability g_i of missing a payment at time i, the probability that it makes a payment of installment amount a, as scheduled, is (1 − g_i): \[ p_i = \left\{ \begin{array}{ll} 0 & \text{with probability } g_i\\ a & \text{with probability } 1-g_i\end{array} \right. \] For simplicity, we ignore the cases of the loan making a payment lower or greater than the installment. Therefore: \[ p_i = a \times (1-g_i) \] In other words, we discount the installment by the probability that the loan misses the payment.

Occurrence of Default

The probability for a loan to default is not necessarily constant over time.
Based on the historical data provided by each marketplace, we can measure when defaulting loans have stopped paying, and graph what is called a 'hazard rate'. The cumulative hazard rate shows the probability that a loan will have stopped paying by any given month. Obviously the probability is zero at issuance (since no payments have had to be made yet, it is impossible to have defaulted), and it reaches the overall average default rate at term. The analysis of loans with varying features shows that hazard-rate curves have approximately the same shape. While the height of the curve depends on the default rate for a given characteristic (for instance the loan grade or the credit score of the borrower), the shape remains more or less identical. Further analysis shows that the hazard rate is mostly dependent upon only 2 factors: the marketplace (e.g. Lending Club vs Prosper) and the term of the loans. We can therefore calculate the probability g of a loan l of term t issued on marketplace m missing a payment at time i based on 2 components: its lifetime probability of default d for a given vector of loan properties l, and a hazard rate h based on the marketplace m, the term t and the time i: \[ g(l,i) = d(l) \times h(m,t,i) \]

Calculating the default rate

To identify the loans offering the best return opportunities, LendingRobot has designed a machine-learning algorithm that takes into account features such as debt-to-income ratio, loan purpose, or credit history. In order to rely on something more 'neutral' to estimate returns, and to avoid self-fulfilling prophecies (using the same algorithm to both select loans and predict their returns), we use a simpler, yet still relatively accurate, estimator provided by each marketplace: the loan grade. This way, the accuracy of our return predictions is independent from our own selection model. To estimate the probability of default for a loan of a given grade, we consider the loans of that grade that are old enough to have reached maturity and take the ratio of defaulting ones to the total number of loans. However, such a measurement is susceptible to noise, especially when going to a granular level like sub-grades. We smooth those measurements to eliminate noise by fitting a double exponential function, such that the default rate y for a loan of grade number x is: \[ y = a + b \cdot e^{-c \cdot x} + d \cdot e^{-f \cdot x} \]

Discounting future cash-flows

Once the hazard rate and the lifetime default probability have been estimated, we can discount future cash-flow events to take into account the upcoming and cumulative risks of default. With the loan amount A, the monthly installment a, the overall default rate d and the hazard rates h for each month, the monthly return r is such that: \[ A = \sum_{i=1}^n{\frac{a \; (1- d \times \sum_{j=1}^i h_j)}{(1+r)^i}} \]

Taking into account payments already made

The cumulative hazard rate shifts down as the borrower makes payments. This is because payments already made have, naturally, a probability of default of zero, and the future payments become increasingly more likely. To calculate expected returns for on-going loans, we combine the payments already made with the discounted future payments. As a consequence, the expected return of a consistently paying loan increases over time, as the loan itself becomes less risky.
If k payments have been made already, out of a total of n expected payments, we have: \[ A = \sum_{i=1}^k{\frac{a}{(1+r)^i}} + \sum_{i=k+1}^n{\frac{a \; (1- d \times \sum_{j=k+1}^i h_j)}{(1+r)^i}} \] An example of the calculation is shown in the following Google spreadsheet: https://docs.google.com/spreadsheets/d/10Fuen2LDW6CIHRC3XNs7uqU15PUSnOznW1OxeXvoLtE/edit?usp=sharing We consider a \$7,500 loan with a term of 36 months. The interest rate is 18.75%, which gives a monthly installment of \$273.97 (or \$271.23 net after deducting 1% of service fees). Historically, such a loan had a probability of default of 19.57%. The hazard rates defined on row 7 are non-cumulative. To discount a future payment, we sum the probabilities of default from the first month of future payment. At issuance, when 0 payments have been made, all 36 payments are discounted, which gives an annualized expected return of 9.64%. As more payments are made, the discount on future payments is lowered and the expected return rises. For instance, after 2 years (row 35), when 24 payments of \$271.23 have been made, the 25th payment is expected to be worth \$270.19 (it was only \$226.15 initially, due to the significant risk of default). The expected return is then 19.16%. The maximum value is reached when the loan is fully paid; in that case, the expected return reaches 19.59%.

Expected Return higher than Interest Rate

It may seem surprising at first that the expected return is higher than the interest rate, especially as the marketplace keeps 1% in servicing fees. The reason is that the expected return is compounded (raising 1 plus the monthly return to the power of 12), while the monthly interest charged to the borrower is simply the annual interest divided by 12. Taking the time value of money into account, the borrower pays more than the nominal rate. For instance, 18.75% of annual interest divided by 12 gives 1.56% per month, but when annualized (raising 1.0156 to the power of 12), the rate paid by the borrower is 20.45%. Incidentally, this is consistent with the 19.16% expected return, once the 1% servicing fees are deducted.

Incomplete Information

If we lack payment history for a given loan, or if this payment history is irrelevant because the note was purchased on the secondary market, we can discard any previous payments and start with the outstanding principal or the price paid for the note. In that case, the age of the loan is still needed, as it impacts the cumulative hazard rate.

Late status

When a borrower misses a payment several days after it was due, the loan becomes late. The loan becomes increasingly late as the days go by until, after 120 days, it is considered charged off. Naturally, a loan becoming late is a negative predictor, and should affect the expected return adversely. Marketplace statistics show the probability that a loan will default based on the number of days the note is late. We can therefore estimate a function F(l) that gives the probability of a loan stopping payment, in addition to its overall probability of default, based on the number of days l it is late in payment. We further discount the installment by this probability: \[ A = \sum_{i=1}^k{\frac{a}{(1+r)^i}} + \sum_{i=k+1}^n{\frac{a \; (1- d \times \sum_{j=k+1}^i h_j) \times ( 1 - F(l)) }{(1+r)^i}} \] In the Google spreadsheet mentioned above, specifying the number of days a loan is late in payment in cell L1 shows the corresponding probability of default in the cell below. The future payments are automatically updated when that value is changed.
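The discounting above is straightforward to prototype. A minimal Python sketch (my own illustration; the flat hazard profile and all inputs are placeholders, not LendingRobot's actual numbers), solving for the monthly rate with a scalar root-finder and then annualizing:

    import numpy as np
    from scipy.optimize import brentq

    def expected_return(A, a, d, hazards, k=0):
        """Annualized expected return after k of n payments have been made.

        A       : amount invested
        a       : net monthly payment (installment minus servicing fees)
        d       : lifetime probability of default
        hazards : non-cumulative monthly hazard rates (length n, summing to ~1)
        k       : number of payments already received
        """
        n = len(hazards)
        months = np.arange(1, n + 1)

        def npv(r):
            disc = (1.0 + r) ** -months
            # Past payments are certain; future ones are discounted by the
            # cumulative probability of default from month k+1 onwards.
            cum_h = np.cumsum(np.where(months > k, hazards, 0.0))
            probs = np.where(months <= k, 1.0, 1.0 - d * cum_h)
            return np.sum(a * probs * disc) - A

        r_monthly = brentq(npv, -0.99, 1.0)
        return (1.0 + r_monthly) ** 12 - 1.0

    # Illustrative 36-month loan with a flat hazard profile (an assumption).
    hazards = np.full(36, 1.0 / 36.0)
    print(expected_return(A=7500.0, a=271.23, d=0.1957, hazards=hazards))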
As mentioned previously, if the example loan is current and has made 24 payments already, the expected return is 19.16%. If the loan becomes late, for instance by 9 days, the expected return drops to 12.74%. Emmanuel Marot, March 3, 2016
I am a bit confused about how to find work from a free-body diagram. I am trying to work out a problem in which a box is pulled at constant speed by a rope held at a constant angle above the horizontal. I am given the mass, the coefficient of kinetic friction, and the angle. I have drawn my free-body diagram, and I think it's pretty accurate. I get the following equations, where $P$ is the pulling force: \begin{align} \Sigma F_x&=T\cos\theta-f=ma=0 \\ \Sigma F_y &= T \sin \theta + N - mg = ma = 0 \end{align} So my unknowns are $P$, $T$, and $N$. I know $f=\mu_k N$, where $\mu_k$ is the coefficient of kinetic friction. I also know the distance the box is pulled. The work I am trying to find is the work done by the man. I solved for $T$ by isolating $N$ in the second equation and plugging it into the first. Then I'm thinking that $T\cos\theta$ is the force I have to multiply by the distance to get the work, but it isn't giving me the right answer. Is it because the rope is pulled up at an angle? Do I need to account for that somehow to find the work done by the man?
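If the two balance equations are as written, the elimination described gives (a worked version of the same steps, for checking against your numbers): $$ N = mg - T\sin\theta, \qquad T\cos\theta = \mu_k N \;\Longrightarrow\; T = \frac{\mu_k m g}{\cos\theta + \mu_k \sin\theta}, $$ and the work done over a distance $d$ would then be $W = (T\cos\theta)\,d$, since only the horizontal component of the pull acts along the displacement.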
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02): In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...

First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01): This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...

First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2018-06): The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...

D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2018-03): The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}} = 5.02$ TeV ...

Dielectron production in proton-proton collisions at √s = 7 TeV (Springer, 2018-09-12): The first measurement of e^+e^− pair production at mid-rapidity (|η_e| < 0.8) in pp collisions at √s = 7 TeV with ALICE at the LHC is presented. The dielectron production is studied as a function of the invariant mass (m_ee ...

Anisotropic flow of identified particles in Pb-Pb collisions at √s_NN = 5.02 TeV (Springer, 2018-09-03): The elliptic (v_2), triangular (v_3), and quadrangular (v_4) flow coefficients of π^±, K^±, p+p̄, Λ+Λ̄, K^0_S, and the ϕ-meson are measured in Pb-Pb collisions at √s_NN = 5.02 TeV. Results obtained with the scalar product ...

Azimuthally-differential pion femtoscopy relative to the third harmonic event plane in Pb–Pb collisions at √s_NN = 2.76 TeV (Elsevier, 2018-06-22): Azimuthally-differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze out, provide very important information on the ...

Inclusive J/ψ production at forward and backward rapidity in p-Pb collisions at √s_NN = 8.16 TeV (Springer Berlin Heidelberg, 2018-07-25): Inclusive J/ψ production is studied in p-Pb interactions at a centre-of-mass energy per nucleon-nucleon collision √s_NN = 8.16 TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed, via its ...

Inclusive J/ψ production in Xe–Xe collisions at √s_NN = 5.44 TeV (Elsevier, 2018-08-31): Inclusive J/ψ production is studied in Xe–Xe interactions at a centre-of-mass energy per nucleon pair of 5.44 TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed via its decay into a muon pair, in the ...

Neutral pion and η meson production at midrapidity in Pb-Pb collisions at √s_NN = 2.76 TeV (American Physical Society, 2018-10-04): Neutral pion and η meson production in the transverse momentum range 1 < p_T < 20 GeV/c have been measured at midrapidity by the ALICE experiment at the Large Hadron Collider (LHC) in central and semicentral Pb-Pb collisions ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20): The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2015-06): We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09): Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV (Springer, 2015-09): We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...

Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √s_NN = 2.76 TeV (Springer, 2015-07-10): The transverse momentum (p_T) dependence of the nuclear modification factor R_AA and the centrality dependence of the average transverse momentum ⟨p_T⟩ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
I have a second-order ODE that I can only solve numerically using NDSolve, but I then need to use the solution in FindRoot and am running into errors. A simplified but analogous problem is the following: find the solution to $\phi''(u) = -\omega^2 \phi(u)$ on the interval $0<u<1$ with the boundary conditions $\phi(0)=\phi(1)=0$, which will only exist for certain values of $\omega$ ($\omega=n\pi$ in this case). The code I am using is

    eqnp = p''[u] + ω^2 p[u];
    psol[ω_?NumericQ] = NDSolve[{eqnp == 0, p[0] == 0, p[1] == 0}, p, u];
    FindRoot[psol[ω], {ω, 3}]

but I am receiving errors. How can I feed a solution from NDSolve into FindRoot? I know that in the specific case above everything can be done analytically, but my actual ODE must be solved numerically, so I need to figure out how to solve this simplified problem completely numerically.

EDIT: I have updated the post with the full problem. The ODE is both homogeneous and linear:

    λ = 0.00001; k = 1.0;
    f[u_] := 1/(2 λ u) (1 - Sqrt[1 - 4 λ + 4 λ u^2]);
    A = Sqrt[1/2 (1 + Sqrt[1 - 4 λ])];
    K2[u_] := u^(-3/2) f[u] (1 + 2 λ u^2 f'[u]);
    K1[u_] := K2[u] (4 ω^2)/(A^2 f[u]^2) - 4 k^2 u^(-1/2) (1 - λ (6 u^2 f'[u] + 4 u^3 f''[u]));
    eomu = 4 u^3 K2[u] ϕ''[u] + (6 u^2 K2[u] + 4 u^3 K2'[u]) ϕ'[u] + K1[u] ϕ[u];

I define the boundary conditions at $u=1$ by finding the first few coefficients in the series solution (using the normal Frobenius method). I then want to find the solution and use Dirichlet boundary conditions at $u=0$ to determine the eigenfrequencies. The first coefficient is $a1$ and the indicial exponent is $-\frac{I\omega}{2A}$ (the other solution of the indicial equation is unphysical and is discarded):

    a1 = (k^2 (1 + Sqrt[1 - 4 λ]) (1 + 8 λ) + 2 (-1 + 2 λ) ω^2 + (I Sqrt[1 + Sqrt[1 - 4 λ]] (12 λ ω))/Sqrt[2]) / (2 Sqrt[2] Sqrt[1 + Sqrt[1 - 4 λ]] (Sqrt[1 + Sqrt[1 - 4 λ]]/Sqrt[2] - I ω))

(I have separate code to find an arbitrary number of coefficients, but entering them here would be too much.) Then I define the solution to use as boundary conditions:

    ϕhnum[u_] := (1 - u)^(-I ω/(2 A)) (1 + a1 (1 - u))
    numh = 0.999999;
    numb = 0.000001;

Then I use the code suggested by Jens:

    ϕsol[ω_?NumericQ] := ϕ /. First@NDSolve[{eomu == 0, ϕ[numh] == ϕhnum[numh], ϕ'[numh] == ϕhnum'[numh]}, ϕ, {u, numb, numh}];
    FindRoot[ϕsol[ω][numb], {ω, 1.0 - 1.0 I}]

When I do this I get the following error:

FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances. >>

I know that for the first solution I expect something like $\omega = 1.95 - 1.27 I$. Hopefully this clarifies the problem, and thanks again for any help.
Methods Funct. Anal. Topology 14 (2008), no. 1, 1-9. We suggest a method to solve boundary and initial-boundary value problems for a class of nonlinear parabolic equations with the infinite-dimensional Lévy Laplacian $\Delta _L$, $$f\Bigl(U(t,x),\frac{\partial U(t,x)}{\partial t},\Delta_LU(t,x)\Bigl)=0,$$ in fundamental domains of a Hilbert space.

Methods Funct. Anal. Topology 14 (2008), no. 1, 10-19. Small transversal vibrations of the Stieltjes string, i.e., an elastic thread bearing point masses, are considered for the case of one end being fixed and the other end moving with viscous friction in the direction orthogonal to the equilibrium position of the string. The inverse problem of recovering the masses, the lengths of the subintervals and the coefficient of damping from the spectrum of vibrations of such a string and its total length is solved.

Methods Funct. Anal. Topology 14 (2008), no. 1, 20-31. The perturbations of Nevanlinna type functions which preserve the set of zeros of the function or add new points to this set are discussed.

Generalized stochastic derivatives on a space of regular generalized functions of Meixner white noise. Methods Funct. Anal. Topology 14 (2008), no. 1, 32-53. We introduce and study generalized stochastic derivatives on a Kondratiev-type space of regular generalized functions of Meixner white noise. The properties of these derivatives are quite analogous to the properties of the stochastic derivatives in the Gaussian analysis. As an example, we calculate the generalized stochastic derivative of the solution of some stochastic equation with a Wick-type nonlinearity.

The involutive automorphisms of $\tau$-compact operators affiliated with a type I von Neumann algebra. Methods Funct. Anal. Topology 14 (2008), no. 1, 54-59. Let $M$ be a type I von Neumann algebra with center $Z$ and a faithful normal semi-finite trace $\tau.$ Consider the algebra $L(M, \tau)$ of all $\tau$-measurable operators with respect to $M$ and let $S_0(M, \tau)$ be the subalgebra of $\tau$-compact operators in $L(M, \tau).$ We prove that any $Z$-linear involutive automorphism of $S_0(M, \tau)$ is inner.

About nilpotent $C_0$-semigroups of operators in Hilbert spaces and criteria for similarity to the integration operator. Methods Funct. Anal. Topology 14 (2008), no. 1, 60-66. In the paper, we describe a class of operators $A$ that have empty spectrum and satisfy the nilpotency property of the generated $C_0$-semigroup $U(t)=\exp\{-iAt\},\, t\geqslant 0$, and such that the operator $A^{-1}$ is similar to the integration operator on the corresponding space $L_2(0,a)$.

Methods Funct. Anal. Topology 14 (2008), no. 1, 67-80. We construct two types of equilibrium dynamics of infinite particle systems in a locally compact Polish space $X$ for which certain fermion point processes are invariant. The Glauber dynamics is a birth-and-death process in $X$, while in the case of the Kawasaki dynamics interacting particles randomly hop over $X$. We establish conditions on the generators of both dynamics under which the corresponding conservative Markov processes exist.

Methods Funct. Anal. Topology 14 (2008), no. 1, 81-100. The interpolation of couples of separable Hilbert spaces with a function parameter is studied. The main properties of the classical interpolation are proved. Some applications to the interpolation of isotropic Hörmander spaces over a closed manifold are given.
The integration of the double-infinite Toda lattice by means of the inverse spectral problem and related questions. Methods Funct. Anal. Topology 15 (2009), no. 2, 101-136. The solution of the Cauchy problem for the differential-difference double-infinite Toda lattice by means of the inverse spectral problem for a semi-infinite block Jacobi matrix is given. Namely, we construct a simple linear system of three differential equations of first order whose solution gives the spectral matrix measure of the aforementioned Jacobi matrix. The solution of the Cauchy problem for the Toda lattice is given by the procedure of orthogonalization w.r.t. this spectral measure, i.e., by the solution of the inverse spectral problem for this Jacobi matrix.

Methods Funct. Anal. Topology 15 (2009), no. 2, 137-151. For relations generated by a pair of operator symmetric differential expressions, a class of generalized resolvents is found. These resolvents are integro-differential operators. The expansion in eigenfunctions of these relations is obtained.

Methods Funct. Anal. Topology 15 (2009), no. 2, 152-167. A general form of the Lions-Magenes theorems on the solvability of an elliptic boundary-value problem in spaces of nonregular distributions is proved. We find a general condition on the space of right-hand sides of the elliptic equation under which the operator of the problem is bounded and has a finite index on the corresponding couple of Hilbert spaces. Extensive classes of spaces satisfying this condition are constructed. They contain the spaces used by Lions and Magenes and many other spaces.

Methods Funct. Anal. Topology 15 (2009), no. 2, 168-176. In the present paper we study $\ast$-representations of semilinear relations with polynomial characteristic functions. For any finite simple non-oriented graph $\Gamma$ we construct a polynomial characteristic function such that $\Gamma$ is its graph. A full description of graphs which satisfy polynomial (degree one and two) semilinear relations is obtained. We introduce the $G$-orthoscalarity condition and prove that any semilinear relation with a quadratic characteristic function and the condition of $G$-orthoscalarity is $\ast$-tame. This class of relations contains, in particular, $\ast$-representations of $U_{q}(so(3)).$

Algebras of unbounded operators over the ring of measurable functions and their derivations and automorphisms. Methods Funct. Anal. Topology 15 (2009), no. 2, 177-187. In the present paper, derivations and $*$-automorphisms of algebras of unbounded operators over the ring of measurable functions are investigated, and it is shown that all $L^0$-linear derivations and $L^{0}$-linear $*$-automorphisms are inner. Moreover, it is proved that each $L^0$-linear automorphism of the algebra of all linear operators on a $bo$-dense submodule of a Kaplansky-Hilbert module over the ring of measurable functions is spatial.

Methods Funct. Anal. Topology 15 (2009), no. 2, 188-194. The norm closure of the algebra generated by the set $\{n\mapsto {\lambda}^{n^k}:$ $\lambda\in{\mathbb {T}}$ and $k\in{\mathbb{N}}\}$ of functions on $({\mathbb {Z}}, +)$ was studied in \cite{S} (and was named the Weyl algebra). In this paper, by a fruitful result of Namioka, this algebra is generalized to a general semitopological semigroup and, among other things, it is shown that the elements of the algebra involved are distal. In particular, we examine this algebra for $({\mathbb {Z}}, +)$ and (more generally) for the discrete (additive) group of any countable ring.
Finally, our results are treated for the bicyclic semigroup.

Methods Funct. Anal. Topology 15 (2009), no. 2, 195-200 The purpose of our paper is to introduce a topology on the group $G_F^{r}(M)$ of all $C^{r}$-isometries of a foliated manifold $(M,F)$ which depends on the foliation $F$ and coincides with the compact-open topology when $F$ is an $n$-dimensional foliation. If the codimension of $F$ is equal to $n$, convergence in our topology coincides with pointwise convergence, where $n=\operatorname{dim}M.$ It is proved that the group $G_F^{r}(M)$ is a topological group with the compact-open topology, where $r\geq{0}.$ In addition, some properties of the $F$-compact-open topology are established.
Quasi-invariance of completely random measures

Habeebat O. Ibraheem, Department of Mathematics, Swansea University, Singleton Park, Swansea SA2 8PP, U.K.
Eugene Lytvynov, Department of Mathematics, Swansea University, Singleton Park, Swansea SA2 8PP, U.K.

Abstract. Let $X$ be a locally compact Polish space. Let $\mathbb K(X)$ denote the space of discrete Radon measures on $X$. Let $\mu$ be a completely random discrete measure on $X$, i.e., $\mu$ is (the distribution of) a completely random measure on $X$ that is concentrated on $\mathbb K(X)$. We consider the multiplicative (current) group $C_0(X\to\mathbb R_+)$ consisting of functions on $X$ that take values in $\mathbb R_+=(0,\infty)$ and are equal to 1 outside a compact set. Each element $\theta\in C_0(X\to\mathbb R_+)$ maps $\mathbb K(X)$ onto itself; more precisely, $\theta$ sends a discrete Radon measure $\sum_i s_i\delta_{x_i}$ to $\sum_i \theta(x_i)s_i\delta_{x_i}$. Thus, elements of $C_0(X\to\mathbb R_+)$ transform the weights of discrete Radon measures. We study conditions under which the measure $\mu$ is quasi-invariant under the action of the current group $C_0(X\to\mathbb R_+)$ and consider several classes of examples. We further assume that $X=\mathbb R^d$ and consider the group of local diffeomorphisms $\operatorname{Diff}_0(X)$. Elements of this group also map $\mathbb K(X)$ onto itself. More precisely, a diffeomorphism $\varphi\in \operatorname{Diff}_0(X)$ sends a discrete Radon measure $\sum_i s_i\delta_{x_i}$ to $\sum_i s_i\delta_{\varphi(x_i)}$. Thus, diffeomorphisms from $\operatorname{Diff}_0(X)$ transform the atoms of discrete Radon measures. We study quasi-invariance of $\mu$ under the action of $\operatorname{Diff}_0(X)$. We finally consider the semidirect product $\mathfrak G:=\operatorname{Diff}_0(X)\times C_0(X\to \mathbb R_+)$ and study conditions of quasi-invariance and partial quasi-invariance of $\mu$ under the action of $\mathfrak G$.

Key words: Random measure, point process, Poisson point process, completely random measure, current group, diffeomorphism group.

Source: Methods Funct. Anal. Topology, Vol. 24 (2018), no. 3, 207-239. Received 15/08/2017; Revised 20/02/2018. Copyright The Author(s) 2018 (CC BY-SA).

Citation Example: Habeebat O. Ibraheem and Eugene Lytvynov, Quasi-invariance of completely random measures, Methods Funct. Anal. Topology 24 (2018), no. 3, 207-239.

BibTex: @article {MFAT1083, AUTHOR = {Habeebat O. Ibraheem and Eugene Lytvynov}, TITLE = {Quasi-invariance of completely random measures}, JOURNAL = {Methods Funct. Anal. Topology}, FJOURNAL = {Methods of Functional Analysis and Topology}, VOLUME = {24}, YEAR = {2018}, NUMBER = {3}, PAGES = {207-239}, ISSN = {1029-3531}, URL = {http://mfat.imath.kiev.ua/article/?id=1083}, }
Let $X$ be a random vector of dimension $p$ from the multivariate normal distribution $\mathcal{N}(0, \sigma^2 I)$, where $I$ is the identity matrix. I want to find the expected value of the reciprocal of its norm, $$E\left(\lVert X \rVert^{-1}\right).$$ I have tried to search for this in the statistics literature but didn't find anything. I hope someone can help me to solve this problem.

Let $Y = \lVert X \rVert^2$. Then $Y/\sigma^2 \sim \chi^2_p$, and you wish to find $EY^{-1/2}$. This is$$ \frac{1}{\sigma}\cdot\dfrac{1}{2^{p/2}\Gamma(p/2)}\int_0^\infty y^{-1/2} y^{p/2 - 1} e^{-y/2}dy$$$$ = \frac{1}{\sigma} \dfrac{2^{(p-1)/2}\Gamma((p-1)/2)}{2^{p/2}\Gamma(p/2)} = \frac{1}{\sigma\sqrt{2}}\dfrac{\Gamma((p-1)/2)}{\Gamma(p/2)}$$
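A quick numerical check of this closed form is easy to run; here is a minimal Monte Carlo sketch in Python (the dimension, scale, sample size and seed are arbitrary choices, and the expectation is finite only for $p \ge 2$):

import numpy as np
from scipy.special import gamma

p, sigma = 5, 2.0                                   # dimension and scale, arbitrary for the check
rng = np.random.default_rng(0)
X = rng.normal(0.0, sigma, size=(10**6, p))         # each row is a draw from N(0, sigma^2 I)

mc = np.mean(1.0 / np.linalg.norm(X, axis=1))       # Monte Carlo estimate of E ||X||^{-1}
exact = gamma((p - 1) / 2) / (sigma * np.sqrt(2) * gamma(p / 2))
print(mc, exact)                                    # the two should agree to about three decimals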
I have the following conditions: $\lvert\psi(0)\rangle=\lvert+\rangle_x=\frac{1}{\sqrt{2}}\lvert+\rangle+\frac{1}{\sqrt{2}}\lvert-\rangle$. So the state at $t=T$ is $\lvert\psi(T)\rangle=\frac{1}{\sqrt{2}}e^{-i\omega_0 T/2}\lvert+\rangle+\frac{1}{\sqrt{2}}e^{+i\omega_0 T/2}\lvert-\rangle$. This was with respect to a magnetic field $\vec{B}=B_0\hat{z}$. Now the field is very rapidly changed to $\vec{B}=B_0\hat{y}$. Then, after some time $T$, a measurement of $S_x$ is made. I then need to find the probability of getting $\frac{\hslash}{2}$. However, the problem is that I don't know what state the particle should switch to after the change of the magnetic field. How is it determined? Would appreciate some clarification.
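For what it's worth, the standard reading here is the sudden approximation: an instantaneous change of the field does not change the state, it only changes the Hamiltonian that generates the subsequent evolution. A minimal numerical sketch of that reading (units with $\hbar = 1$, $H = \omega_0 S_z$ before the switch as the phases above suggest, $H = \omega_0 S_y$ after, and an arbitrary value of $\omega_0 T$):

import numpy as np
from scipy.linalg import expm

w0, T = 1.0, 0.7                                   # only the product w0*T matters
sy = 0.5 * np.array([[0, -1j], [1j, 0]])           # spin operators with hbar = 1
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)

psi_T  = expm(-1j * w0 * sz * T) @ plus_x          # evolve under H = w0*Sz for time T
psi_2T = expm(-1j * w0 * sy * T) @ psi_T           # same state, new Hamiltonian after the switch

prob = abs(plus_x.conj() @ psi_2T)**2              # probability of measuring Sx = +hbar/2
print(prob)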
I'm studying the principle of reciprocity for MRI. At some point during a calculation the book states that: $$\oint d\vec{l} \cdot \left[\int d^3r' \frac{\vec{\nabla'}\times\vec{M}(\vec{r}')}{\left| \vec{r}-\vec{r}' \right|} \right] = \int d^3r' \oint d\vec{l} \cdot \left[ \left( - \vec{\nabla'}\frac{1}{\left| \vec{r}-\vec{r}' \right|} \right) \times\vec{M}(\vec{r}') \right] $$ This is supposed to come up when you integrate by parts, neglecting a surface term because the sources are finite. Unfortunately I don't see the right way to prove this relation. Additional physics context: The space is $\mathbb{R}^3$ and the primed terms refer to the sources; there is a magnetization vector $\vec{M}$ with the associated current density $\vec{J}=\vec{\nabla}\times \vec{M}$. The flux of the magnetization field through an antenna (i.e., a simple coil) is $\Phi = \oint d\vec{l} \cdot \vec{A}$, where $\vec{A}$ is the vector potential: $$ \vec{A} \propto \int d^3r' \frac{\vec{\nabla'}\times\vec{M}(\vec{r}')}{\left| \vec{r}-\vec{r}' \right|} $$ What I thought so far: by using the vector identity $\nabla \times (f \mathbf{A}) = \nabla f \times \mathbf{A} + f \nabla \times \mathbf{A}$ I get the solution above plus an extra term: $$\int d^3r' \; \vec{\nabla'}\times\left( \frac{\vec{M}(\vec{r}')}{\left| \vec{r}-\vec{r}' \right|} \right) $$ But I don't see how this is zero.
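One way to see that the leftover term vanishes (a sketch of the missing step, not the book's own wording): integrate the curl over all of space and convert it to a surface integral using the curl analogue of the divergence theorem, $\int_V \vec{\nabla}\times\vec{F}\,d^3r = \oint_{\partial V} d\vec{S}\times\vec{F}$. Applied to $\vec{F} = \vec{M}(\vec{r}\,')/|\vec{r}-\vec{r}\,'|$ this gives
$$\int d^3r' \; \vec{\nabla'}\times\left( \frac{\vec{M}(\vec{r}')}{\left| \vec{r}-\vec{r}' \right|} \right) = \oint_{S'\to\infty} d\vec{S}\,'\times \frac{\vec{M}(\vec{r}')}{\left| \vec{r}-\vec{r}' \right|} = 0,$$
since $\vec{M}$ vanishes outside a bounded region; that is exactly the "finite sources" assumption. The identity quoted in the book then follows from $\vec{\nabla}'\times(f\vec{M}) = \vec{\nabla}'f\times\vec{M} + f\,\vec{\nabla}'\times\vec{M}$ with $f = 1/|\vec{r}-\vec{r}'|$, after interchanging the (independent) line and volume integrals.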
Ordinary differential equations based model Don’t forget the initial conditions! Delayed differential equations based model Objectives: learn how to implement a model with ordinary differential equations (ODE) and delayed differential equations (DDE). Projects: tgi_project, seir_project

tgi_project(data = tgi_data.txt , model = tgi_model.txt) Here, we consider the tumor growth inhibition (TGI) model proposed by Ribba et al. (Ribba, B., Kaloshi, G., Peyre, M., Ricard, D., Calvez, V., Tod, M., … & Ducray, F., *A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy*. Clinical Cancer Research, 18(18), 5071-5080, 2012.). This model is defined by a set of ordinary differential equations in which $P^\ast = PT + QT + QP$ is the total tumor size. This set of ODEs is valid for $t$ greater than 0, while the system stays at its initial conditions for $t \leq 0$. This model (derivatives and initial conditions) can easily be implemented with Mlxtran:

DESCRIPTION: Tumor Growth Inhibition (TGI) model proposed by Ribba et al. A tumor growth inhibition model for low-grade glioma treated with chemotherapy or radiotherapy. Clinical Cancer Research, 18(18), 5071-5080, 2012. Variables - PT: proliferative tissue - QT: nonproliferative quiescent tissue - QP: damaged quiescent cells - C: concentration of a virtual drug encompassing the 3 chemotherapeutic components of the PCV regimen Parameters - K : maximal tumor size (should be fixed a priori) - KDE : the rate constant for the decay of the PCV concentration in plasma - kPQ : the rate constant for transition from proliferation to quiescence - kQpP : the rate constant for transfer from damaged quiescent tissue to proliferative tissue - lambdaP: the rate constant of growth for the proliferative tissue - gamma : the rate of damage in proliferative and quiescent tissue - deltaQP: the rate constant for elimination of the damaged quiescent tissue - PT0 : initial proliferative tissue - QT0 : initial nonproliferative quiescent tissue [LONGITUDINAL] input = {K, KDE, kPQ, kQpP, lambdaP, gamma, deltaQP, PT0, QT0} PK: depot(target=C) EQUATION: ; Initial conditions t0 = 0 C_0 = 0 PT_0 = PT0 QT_0 = QT0 QP_0 = 0 ; Dynamical model PSTAR = PT + QT + QP ddt_C = -KDE*C ddt_PT = lambdaP*PT*(1-PSTAR/K) + kQpP*QP - kPQ*PT - gamma*KDE*PT*C ddt_QT = kPQ*PT - gamma*KDE*QT*C ddt_QP = gamma*KDE*QT*C - kQpP*QP - deltaQP*QP OUTPUT: output = PSTAR

Remark: t0, PT_0 and QT_0 are reserved keywords that define the initial conditions. Then, the graphic of individual fits clearly shows that the tumor size is constant until t=0 and starts changing according to the model afterwards. tgiNoT0_project(data = tgi_data.txt , model = tgiNoT0_model.txt) The initial time t0 is not specified in this example. Since t0 is missing, Monolix uses the first time value encountered for each individual. If, for instance, the first observation for an individual occurs at t=5, then t0=5 will be used for defining the initial conditions for this individual, which introduces a shift in the plot: As defined here, the following rule applies: when no starting time t0 is defined in the Mlxtran model for Monolix, then by default t0 is selected to be equal to the first dose or the first observation, whichever comes first. If t0 is defined, a differential equation needs to be defined as well. Conclusion: don’t forget to properly specify the initial conditions of a system of ODEs! A system of delay differential equations (DDEs) can be implemented in a block EQUATION of the section [LONGITUDINAL] of an Mlxtran script.
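Before moving on to DDEs: the TGI right-hand sides above can be reproduced outside Monolix with any standard ODE solver. Here is a minimal Python sketch using scipy; the parameter values and the initial drug amount are hypothetical placeholders chosen only for illustration, not the Ribba et al. estimates.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values, chosen only to exercise the model
K, KDE, kPQ, kQpP = 100.0, 0.3, 0.02, 0.004
lambdaP, gamma, deltaQP = 0.12, 0.7, 0.01
PT0, QT0 = 5.0, 40.0

def tgi_rhs(t, y):
    C, PT, QT, QP = y
    PSTAR = PT + QT + QP                      # total tumor size
    dC  = -KDE * C
    dPT = lambdaP*PT*(1 - PSTAR/K) + kQpP*QP - kPQ*PT - gamma*KDE*PT*C
    dQT = kPQ*PT - gamma*KDE*QT*C
    dQP = gamma*KDE*QT*C - kQpP*QP - deltaQP*QP
    return [dC, dPT, dQT, dQP]

y0 = [1.0, PT0, QT0, 0.0]                     # C_0 = 1 mimics a dose given at t0 = 0
sol = solve_ivp(tgi_rhs, (0.0, 200.0), y0, max_step=1.0)
pstar = sol.y[1] + sol.y[2] + sol.y[3]        # the model output PSTAR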
Mlxtran provides the command delay(x,T), where x is a one-dimensional component and T is the explicit delay. Therefore, DDEs with a nonconstant past of the form $$\begin{array}{ll} \dfrac{dx}{dt} = f\bigl(x(t),\,x(t-T_1),\,x(t-T_2),\,\ldots\bigr), & t \geq 0,\\[4pt] x(t) = x_0(t), & -\max_k(T_k) \leq t \leq 0, \end{array}$$ can be solved. The syntax and rules are explained here.

seir_project(data = seir_data.txt , model = seir_model.txt) The model is a system of 4 DDEs and is defined with the following model file:

DESCRIPTION: SEIR model, using delayed differential equations. "An Epidemic Model with Recruitment-Death Demographics and Discrete Delays", Genik & van den Driessche (1999). Decomposition of the total population into four epidemiological classes: S (susceptibles), E (exposed), I (infectious), and R (recovered). The parameters correspond to - birthRate: the birth rate, - deathRate: the natural death rate, - infectionRate: the contact rate of infective individuals, - recoveryRate: the rate of recovery, - excessDeathRate: the excess death rate for infective individuals There are two time delays in the model: - tauImmunity: a temporary immunity delay, - tauLatency: a latency delay [LONGITUDINAL] input = {birthRate, deathRate, infectionRate, recoveryRate, excessDeathRate, tauImmunity, tauLatency} EQUATION: ; Initial conditions t0 = 0 S_0 = 15 E_0 = 0 I_0 = 2 R_0 = 3 ; Dynamical model N = S + E + I + R ddt_S = birthRate - deathRate*S - infectionRate*S*I/N + recoveryRate*delay(I,tauImmunity)*exp(-deathRate*tauImmunity) ddt_E = infectionRate*S*I/N - deathRate*E - infectionRate*delay(S,tauLatency)*delay(I,tauLatency)*exp(-deathRate*tauLatency)/(delay(I,tauLatency)+delay(S,tauLatency)+delay(E,tauLatency)+delay(R,tauLatency)) ddt_I = -(recoveryRate+excessDeathRate+deathRate)*I + infectionRate*delay(S,tauLatency)*delay(I,tauLatency)*exp(-deathRate*tauLatency)/(delay(I,tauLatency)+delay(S,tauLatency)+delay(E,tauLatency)+delay(R,tauLatency)) ddt_R = recoveryRate*I - deathRate*R - recoveryRate*delay(I,tauImmunity)*exp(-deathRate*tauImmunity) OUTPUT: output = {S, E, I, R}

Introducing these delays allows us to obtain nice fits for all 4 outcomes, including the last one (corresponding to the output y4).
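To see what delay(x,T) is doing under the hood, note that a fixed-step integrator only needs a history buffer of past states. Here is a toy Python sketch for a single-delay equation $x'(t) = -x(t-T)$ with constant past $x(t)=1$ for $t\leq 0$; this illustrates the mechanism only and is not Monolix's actual solver:

import numpy as np

T, dt, t_end = 1.0, 0.001, 10.0
lag = int(round(T / dt))            # number of steps corresponding to the delay
n = int(round(t_end / dt))
x = np.empty(n + 1)
x[0] = 1.0

def delayed(i):
    # value of x at time i*dt - T; the constant history covers t <= 0
    return 1.0 if i - lag < 0 else x[i - lag]

for i in range(n):
    x[i + 1] = x[i] - dt * delayed(i)   # Euler step for x'(t) = -x(t - T)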
A long while ago I promised to take you from the action by the modular group $\Gamma=PSL_2(\mathbb{Z})$ on the lattices at hyperdistance $n$ from the standard orthogonal lattice $L_1$ to the corresponding ‘monstrous’ Grothendieck dessin d’enfant. Speaking of dessins d’enfant, let me point you to the latest intriguing paper by Yuri I. Manin and Matilde Marcolli, ArXived a few days ago, Quantum Statistical Mechanics of the Absolute Galois Group, on how to build a quantum system for the absolute Galois group from dessins d’enfant (more on this, I promise, later). Where were we? We’ve seen natural one-to-one correspondences between (a) points on the projective line over $\mathbb{Z}/n\mathbb{Z}$, (b) lattices at hyperdistance $n$ from $L_1$, and (c) coset classes of the congruence subgroup $\Gamma_0(n)$ in $\Gamma$. How to get from there to a dessin d’enfant? The short answer is: it’s all in Ravi S. Kulkarni’s paper, “An arithmetic-geometric method in the study of the subgroups of the modular group”, Amer. J. Math. 113 (1991) 1053-1133. It is a complete mystery to me why Tatitscheff, He and McKay don’t mention Kulkarni’s paper in “Cusps, congruence groups and monstrous dessins”. Because all they do (and much more) is in Kulkarni. I’ve blogged about Kulkarni’s paper years ago: – In the Dedekind tessellation it was all about assigning special polygons to subgroups of finite index of $\Gamma$. – In Modular quilts and cuboid tree diagrams it went on assigning (multiple) cuboid trees to a (conjugacy class of) such finite index subgroup. – In Hyperbolic Mathieu polygons the story continued with a finite-to-one connection between special hyperbolic polygons and cuboid trees. – In Farey codes it was shown how to encode such polygons by a Farey-sequence. – In Generators of modular subgroups it was shown how to get generators of the finite index subgroups from this Farey sequence. The modular group is a free product \[ \Gamma = C_2 \ast C_3 = \langle s,u~|~s^2=1=u^3 \rangle \] with lifts of $s$ and $u$ to $SL_2(\mathbb{Z})$ given by the matrices \[ S=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},~\qquad U= \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \] As a result, any permutation representation of $\Gamma$ on a set $E$ can be represented by a $2$-coloured graph (with black and white vertices) and edges corresponding to the elements of the set $E$. Each white vertex has two (or one) edges connected to it and every black vertex has three (or one). These edges are the elements of $E$ permuted by $s$ (for white vertices) and $u$ (for black ones), the order of the 3-cycle determined by going counterclockwise round the vertex. Clearly, if there’s just one edge connected to a vertex, it gives a fixed point (or 1-cycle) in the corresponding permutation. The ‘monstrous dessin’ for the congruence subgroup $\Gamma_0(n)$ is the picture one gets from the permutation $\Gamma$-action on the points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$, or equivalently, on the coset classes or on the lattices at hyperdistance $n$. Kulkarni’s paper (or the blogposts above) tells you how to get at this picture starting from a fundamental domain of $\Gamma_0(n)$ acting on the upper half-plane by Moebius transformations. Sage gives a nice image of this fundamental domain via the command FareySymbol(Gamma0(n)).fundamental_domain() Here’s the image for $n=6$: The boundary points (on the half-lines through $0$ and $1$ and on the $4$ half-circles) need to be identified, which is indicated by matching colours.
So the 2 halflines are identified as are the two blue (and green) half-circles (in opposite direction). To get the dessin from this, let’s first look at the interior points. A white vertex is a point in the interior where two black and two white tiles meet, a black vertex corresponds to an interior point where three black and three white tiles meet. Points on the boundary where tiles meet are coloured red, and after identification two of these reds give one white or black vertex. Here’s the intermediate picture The two top red points are identified giving a white vertex as do the two reds on the blue half-circles and the two reds on the green half-circles, because after identification two black and two white tiles meet there. This then gives us the ‘monstrous’ modular dessin for $n=6$ of the Tatitscheff, He and McKay paper: Let’s try a more difficult example: $n=12$. Sage gives us as fundamental domain giving us the intermediate picture and spotting the correct identifications, this gives us the ‘monstrous’ dessin for $\Gamma_0(12)$ from the THM-paper: In general there are several of these 2-coloured graphs giving the same permutation representation, so the obtained ‘monstrous dessin’ depends on the choice of fundamental domain. You’ll have noticed that the domain for $\Gamma_0(6)$ was symmetric, whereas the one Sage provides for $\Gamma_0(12)$ is not. This is caused by Sage using the Farey-code \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_1 & \frac{1}{5} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & 1} \] One of the nice results from Kulkarni’s paper is that for any $n$ there is a symmetric Farey-code, giving a perfectly symmetric fundamental domain for $\Gamma_0(n)$. For $n=12$ this symmetric code is \[ \xymatrix{ 0 \ar@{-}[r]_1 & \frac{1}{6} \ar@{-}[r]_2 & \frac{1}{4} \ar@{-}[r]_3 & \frac{1}{3} \ar@{-}[r]_4 & \frac{1}{2} \ar@{-}[r]_4 & \frac{2}{3} \ar@{-}[r]_3 & \frac{3}{4} \ar@{-}[r]_2 & \frac{5}{6} \ar@{-}[r]_1 & 1} \] It would be nice to see whether using these symmetric Farey-codes gives other ‘monstrous dessins’ than in the THM-paper. Remains to identify the edges in the dessin with the lattices at hyperdistance $n$ from $L_1$. Using the tricks from the previous post it is quite easy to check that for any $n$ the monstrous dessin for $\Gamma_0(n)$ starts off with the lattices $L_{M,\frac{g}{h}} = M,\frac{g}{h}$ as below Let’s do a sample computation showing that the action of $s$ on $L_n$ gives $L_{\frac{1}{n}}$: \[ L_n.s = \begin{bmatrix} n & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} \] and then, as last time, to determine the class of the lattice spanned by the rows of this matrix we have to compute \[ \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -n \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -n \end{bmatrix} \] which is class $L_{\frac{1}{n}}$. And similarly for the other edges.
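A quick machine check of the matrix manipulations in this post is easy; here is a small Python sketch (numpy only for the 2x2 products) verifying the orders of the lifts $S$ and $U$ and redoing the sample computation for a concrete $n$:

import numpy as np

S = np.array([[0, -1], [1, 0]])
U = np.array([[0, -1], [1, -1]])
I2 = np.eye(2, dtype=int)

# orders of the lifts: s^2 = 1 = u^3 holds in PSL_2(Z)
print(np.array_equal(S @ S, -I2))       # True: S^2 = -I
print(np.array_equal(U @ U @ U, I2))    # True: U^3 = I

# the sample computation above, for n = 6
n = 6
L = np.array([[n, 0], [0, 1]])
print(L @ S)        # [[0, -n], [1, 0]]
print(S @ L @ S)    # [[-1, 0], [0, -n]], the class of L_{1/n}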
Square 12. We get 144. Reverse 12. We get 21. Square 21. We get 441. Reverse 441. We get 144.
Note by Mohmmad Farhan, 1 year, 2 months ago
Yes. It is quite good.
Try to find if there are any such numbers.
Well, I noticed that 144 is 12 squared. And the prime factorization of 144 is 3^2 times 2^4, which is 4^2, and 3 and 4 are consecutive. AND THAT WAY WORKS
Yes, that is a good way of finding such numbers. I too will try. P.S. I am 10; this is my brother's account.
@Mohmmad Farhan – P.S. I am an Indian living in Singapore.
@Mohmmad Farhan – So your brother's name is Mohammad Farhan. How come at the age of 10 you came to know all about these?
@Ram Mohith – Well, I find school outdated; that is why I check up all of this and learn and ask from my father.
@Ram Mohith – My brother's name is Mohammad FARHAN.
@Mohmmad Farhan – Oh, sorry for the typo.
@Ram Mohith – OK, apology accepted.
@Mohmmad Farhan – Once check my profile. I have changed my quotation. It is quite interesting and yet it has quite a deep meaning in it.
You just take two consecutive numbers. Then multiply them. And then the rest works. My assumption WORKED. YAY
I am thinking to frame a question based on these observations. May I assist?
Did you get still any more numbers like these?
@Ram Mohith – I am currently doing courses. Give me a minute and I will update you.
@Mohmmad Farhan – Ok. No problem. Take your own time.
@Ram Mohith – I already updated.
@Ram Mohith – NO
@Ram Mohith – There has to be some kind of error. I am trying to obtain a general form for these numbers, or to see if there is some periodicity between the numbers.
@Ram Mohith – What does periodicity mean?
@Mohmmad Farhan – Like there is some common difference between the numbers. More clearly, they should be in some progression or series.
@Ram Mohith – oh
You cannot frame a question.
Why can't we? It only works around 20.
All single digit integers will satisfy this condition, the reason being that when they are reversed the same number is obtained.
True
All multiples of 10 are exceptions to this condition. Reason: $10^2 = 100$. On reversing 10 we get $01$. Now square it: we get $01^2 = 1 = 01 = 001 = 0001 \ldots$ and so on. Now reversing it we get $10, 100, 1000, \ldots$ and so on. So, the multiples of $10$ are exceptions. More clearly, they will be in an undetermined form.
Make sense?
@Ram Mohith, there is a problem that goes something like this: $12^2 = 144;\ 21^2 = 441$. $122^2 = 14884;\ 221^2 = 48841$. $1222^2 = 1493284;\ 2221^2 = 4932841$.
Did you know that the squares of 31 and 19 both end in 61? $31 \times 31 = 961$ and $19 \times 19 = 361$.
Square 20. We get 400. Reverse 20. We get 02. Square 02. We get 004. Reverse 400. We get 004.
Good one again.
It only works below 20. EDIT: The pattern is 1 followed by some number of 2's.
Ok. Should see about this!!!
WAIT, 10 and 11 work.
Sorry. It is around 20. 20 means squaring and reversing.
The idea of making a problem out of this is out of the question. Unless it is a proof-stating question.
Yes, your point is also correct. But my idea is not to write a proof-based question.
@Ram Mohith – You can twist the question in the direction of palindromes. Say you give examples for palindromes and ask whether it works all the time (for other numbers too). And you give three options: Yes, always; No, never; and Yes, sometimes. But do not make it too obvious.
@Mohmmad Farhan – Yes, I am also thinking about it. Can you help me in my other notes? Surely. Let's go to "interesting prime powers relationship" (the name).
@Mohmmad Farhan – I am coming one by one.
@Ram Mohith – ok
There is a question on this (but I forgot and cannot search). The real pattern is 1 followed by some number of 2's, and when you reverse it, it still makes sense.
This also works with 13: $13^2 = 169$ and $31^2 = 961$. My friend told me that on Friday but I forgot.
And if you take $14$ in base $20$ (so 24 in base 10) and square it, you get $18G_{20}$. If you square $42_{20}$ (81 in base 10) you get $G81_{20}$.
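The guesses traded back and forth above are easy to test by brute force. Here is a small Python search (the upper bound 10^4 is an arbitrary choice) for numbers whose reversed square equals the square of the reversal, with leading zeros dropped on reversal:

rev = lambda n: int(str(n)[::-1])   # reverse the decimal digits, dropping leading zeros

hits = [n for n in range(1, 10**4) if rev(n)**2 == rev(n**2)]
print(hits)
# The hits include 12, 13, 21, 31 and the "1 followed by 2's" family (122, 1222, ...),
# together with the degenerate cases discussed above: small palindromes such as
# 1, 2, 3, 11, 22, and multiples of 10 where the dropped leading zeros do the work.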
I did some research on the self-inductance of a straight wire, and many sources cited "Inductance Calculations" by F.W. Grover. So I read a bit of that book but I couldn't find the answer to my question. As far as I know, the self-inductance is given by the sum of the internal and external inductance of the wire. Since the former is constant with respect to the radius, I assume that the latter decreases when the radius increases. For the external inductance, I found here (see Figure 1) the formula $\frac{\mu l}{2\pi}\ln(D_1/D_2)$, with $D_2$ the diameter of the wire and $D_1$ the diameter of a fictitious wire. It seems indeed that when $D_2$ gets larger, the inductance decreases. However, in my opinion, with a single wire $D_1$ should tend to infinity. So, how does this formula make sense? Maybe I have this doubt because I don't really have a clear idea of what the external inductance of a single wire is. In fact, I don't see what the flux is linked to when the magnetic field is external to the wire, and how it can generate an emf across the wire. Also, other than that formula, the real question is: what is the physical phenomenon that makes the self-inductance decrease when the radius increases?
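For a rough quantitative handle on that last question: the classical low-frequency approximation for a straight round wire of length $l$ and radius $r$ (with $l \gg r$), as tabulated in the Rosa/Grover tradition, is $L = \frac{\mu_0 l}{2\pi}\left(\ln\frac{2l}{r} - \frac{3}{4}\right)$. Taking that formula as the assumed model, a short Python sketch shows the slow logarithmic decrease with radius:

import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability, H/m

def straight_wire_L(length, radius):
    # Rosa/Grover low-frequency approximation for a round wire, length >> radius;
    # the -3/4 folds in the constant internal-inductance contribution
    return mu0 * length / (2 * np.pi) * (np.log(2 * length / radius) - 0.75)

# doubling the radius removes only mu0*l*ln(2)/(2*pi), a slow logarithmic drop
for r in (0.1e-3, 0.2e-3, 0.4e-3):   # radii in metres, for 1 m of wire
    print(r, straight_wire_L(1.0, r))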
Does the series \(\sum_{n=0}^\infty {n^5\over 5^n}\) converge? It is possible, but a bit unpleasant, to approach this with the integral test or the comparison test, but there is an easier way. Consider what happens as we move from one term to the next in this series: $$\cdots+{n^5\over5^n}+{(n+1)^5\over 5^{n+1}}+\cdots$$ The denominator goes up by a factor of 5, \( 5^{n+1}=5\cdot5^n\), but the numerator goes up by much less: $$ (n+1)^5=n^5+5n^4+10n^3+10n^2+5n+1,$$ which is much less than \( 5n^5\) when \(n\) is large, because \( 5n^4\) is much less than \( n^5\). So we might guess that in the long run it begins to look as if each term is \(1/5\) of the previous term. We have seen series that behave like this: $$\sum_{n=0}^\infty {1\over 5^n} = {5\over4},$$ a geometric series. So we might try comparing the given series to some variation of this geometric series. This is possible, but a bit messy. We can in effect do the same thing, but bypass most of the unpleasant work. The key is to notice that $$ \lim_{n\to\infty} {a_{n+1}\over a_n}= \lim_{n\to\infty} {(n+1)^5\over 5^{n+1}}{5^n\over n^5}= \lim_{n\to\infty} {(n+1)^5\over n^5}{1\over 5}=1\cdot {1\over5} ={1\over 5}. $$ This is really just what we noticed above, done a bit more officially: in the long run, each term is one fifth of the previous term. Now pick some number between \(1/5\) and \(1\), say \(1/2\). Because $$\lim_{n\to\infty} {a_{n+1}\over a_n}={1\over5},$$ when \(n\) is big enough, say \(n\ge N\) for some \(N\), $$ {a_{n+1}\over a_n} < {1\over2} \quad \hbox{and}\quad a_{n+1} < {a_n\over2}. $$ So \( a_{N+1} < a_N/2\), \( a_{N+2} < a_{N+1}/2 < a_N/4\), \( a_{N+3} < a_{N+2}/2 < a_{N+1}/4 < a_N/8\), and so on. The general form is \( a_{N+k} < a_N/2^k\). So if we look at the series $$ \sum_{k=0}^\infty a_{N+k}= a_N+a_{N+1}+a_{N+2}+a_{N+3}+\cdots+a_{N+k}+\cdots, $$ its terms are less than or equal to the terms of the sequence $$ a_N+{a_N\over2}+{a_N\over4}+{a_N\over8}+\cdots+{a_N\over2^k}+\cdots= \sum_{k=0}^\infty {a_N\over 2^k} = 2a_N. $$ So by the comparison test, \(\sum_{k=0}^\infty a_{N+k}\) converges, and this means that \(\sum_{n=0}^\infty a_{n}\) converges, since we've just added the fixed number \( a_0+a_1+\cdots+a_{N-1}\). Under what circumstances could we do this? What was crucial was that the limit of \( a_{n+1}/a_n\), say \(L\), was less than 1 so that we could pick a value \(r\) so that \(L < r < 1\). The fact that \(L < r\) (\(1/5 < 1/2\) in our example) means that we can compare the series \(\sum a_n\) to \(\sum r^n\), and the fact that \(r < 1\) guarantees that \(\sum r^n\) converges. That's really all that is required to make the argument work. We also made use of the fact that the terms of the series were positive; in general we simply consider the absolute values of the terms and we end up testing for absolute convergence. Theorem 11.7.1: The Ratio Test Suppose that $$\lim_{n\to \infty} |a_{n+1}/a_n|=L.$$ If \(L < 1\) the series \(\sum a_n\) converges absolutely, if \(L>1\) the series diverges, and if \(L=1\) this test gives no information. Proof. The example above essentially proves the first part of this, if we simply replace \(1/5\) by \(L\) and \(1/2\) by \(r\). Suppose that \(L>1\), and pick \(r\) so that \(1 < r < L\). Then for \(n\ge N\), for some \(N\), $${|a_{n+1}|\over |a_n|} > r \quad \hbox{and}\quad |a_{n+1}| > r|a_n|.$$ This implies that \(|a_{N+k}|>r^k|a_N|\), but since \(r>1\) this means that \(\lim_{k\to\infty}|a_{N+k}|\ne 0\), which means also that \(\lim_{n\to\infty}a_n\ne 0\).
By the divergence test, the series diverges. \(\square\) To see that we get no information when \(L=1\), we need to exhibit two series with \(L=1\), one that converges and one that diverges. It is easy to see that \(\sum 1/n^2\) and \(\sum 1/n\) do the job. Example 11.7.2 The ratio test is particularly useful for series involving the factorial function. Consider \(\sum_{n=0}^\infty 5^n/n!\). $$ \lim_{n\to\infty} {5^{n+1}\over (n+1)!}{n!\over 5^n}= \lim_{n\to\infty} {5^{n+1}\over 5^n}{n!\over (n+1)!}= \lim_{n\to\infty} {5}{1\over (n+1)}=0. $$ Since \(0 < 1\), the series converges. A similar argument, which we will not do, justifies a similar test that is occasionally easier to apply. Theorem 11.7.3: The Root Test Suppose that \[\lim_{n\to \infty} |a_n|^{1/n}=L.\] If \(L < 1\), the series \(\sum a_n\) converges absolutely, if \(L>1\) the series diverges, and if \(L=1\) this test gives no information. The proof of the root test is actually easier than that of the ratio test, and is a good exercise. Example 11.7.4 Analyze \(\sum_{n=0}^\infty {5^n\over n^n}\). Solution The ratio test turns out to be a bit difficult on this series (try it). Using the root test: $$ \lim_{n\to\infty} \left({5^n\over n^n}\right)^{1/n}= \lim_{n\to\infty} {(5^n)^{1/n}\over (n^n)^{1/n}}= \lim_{n\to\infty} {5\over n}=0. $$ Since \(0 < 1\), the series converges. The root test is frequently useful when \(n\) appears as an exponent in the general term of the series.
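Both limits computed in this section can be watched converging numerically; here is a short, purely illustrative Python check (exact integer arithmetic is used for the first ratio so that \(5^n\) does not underflow a float):

from math import factorial

# Ratio test for a_n = n^5 / 5^n: the ratios a_{n+1}/a_n approach L = 1/5
def ratio(n):
    return ((n + 1)**5 * 5**n) / (n**5 * 5**(n + 1))

for n in (10, 100, 1000):
    print(n, ratio(n))          # tends to 0.2

# Example 11.7.2: for b_n = 5^n / n! the ratio is 5/(n+1), which tends to L = 0
b_ratio = lambda n: (5**(n + 1) * factorial(n)) / (factorial(n + 1) * 5**n)
print(b_ratio(10), b_ratio(100))   # 5/11 and 5/101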
Skills to Develop Use variables and algebraic symbols Identify expressions and equations Simplify expressions with exponents Simplify expressions using the order of operations

Use Variables and Algebraic Symbols Greg and Alex have the same birthday, but they were born in different years. This year Greg is \(20\) years old and Alex is \(23\), so Alex is \(3\) years older than Greg. When Greg was \(12\), Alex was \(15\). When Greg is \(35\), Alex will be \(38\). No matter what Greg’s age is, Alex’s age will always be \(3\) years more, right? In the language of algebra, we say that Greg’s age and Alex’s age are variable and the three is a constant. The ages change, or vary, so age is a variable. The \(3\) years between them always stays the same, so the age difference is the constant. In algebra, letters of the alphabet are used to represent variables. Suppose we call Greg’s age \(g\). Then we could use \(g + 3\) to represent Alex’s age. See Table \(\PageIndex{1}\). Greg’s age Alex’s age 12 15 20 23 35 38 g g + 3 Letters are used to represent variables. Letters often used for variables are \(x, y, a, b,\) and \(c\).

Definition: Variables and Constants A variable is a letter that represents a number or quantity whose value may change. A constant is a number whose value always stays the same.

To write algebraically, we need some symbols as well as numbers and variables. There are several types of symbols we will be using. In Whole Numbers, we introduced the symbols for the four basic arithmetic operations: addition, subtraction, multiplication, and division. We will summarize them here, along with words we use for the operations and the result. Operation Notation Say: The result is... Addition a + b a plus b the sum of a and b Subtraction a − b a minus b the difference of a and b Multiplication a • b, (a)(b), (a)b, a(b) a times b the product of a and b Division a ÷ b, a / b, \(\frac{a}{b}\), \(b \overline{)a}\) a divided by b the quotient of a and b In algebra, the cross symbol, \(×\), is not used to show multiplication because that symbol may cause confusion. Does \(3xy\) mean \(3 × y\) (three times \(y\)) or \(3 • x • y\) (three times \(x\) times \(y\))? To make it clear, use \(•\) or parentheses for multiplication. We perform these operations on two numbers. When translating from symbolic form to words, or from words to symbolic form, pay attention to the words "of" and "and" to help you find the numbers. The sum of \(5\) and \(3\) means add \(5\) plus \(3\), which we write as \(5 + 3\). The difference of \(9\) and \(2\) means subtract \(9\) minus \(2\), which we write as \(9 − 2\). The product of \(4\) and \(8\) means multiply \(4\) times \(8\), which we can write as \(4 • 8\). The quotient of \(20\) and \(5\) means divide \(20\) by \(5\), which we can write as \(20 ÷ 5\).

Translate from algebra to words: \(12 + 14\) \((30)(5)\) \(64 ÷ 8\) \(x − y\) Solution 12 + 14 12 plus 14 the sum of twelve and fourteen (30)(5) 30 times 5 the product of thirty and five 64 ÷ 8 64 divided by 8 the quotient of sixty-four and eight x − y x minus y the difference of x and y

exercise \(\PageIndex{1}\) Translate from algebra to words. \(18 + 11\) \((27)(9)\) \(84 ÷ 7\) \(p − q\) Answer a \(18\) plus \(11\); the sum of eighteen and eleven Answer b \(27\) times \(9\); the product of twenty-seven and nine Answer c \(84\) divided by \(7\); the quotient of eighty-four and seven Answer d \(p\) minus \(q\); the difference of \(p\) and \(q\)

exercise \(\PageIndex{2}\) Translate from algebra to words.
\(47 − 19\) \(72 ÷ 9\) \(m + n\) \((13)(7)\) Answer a \(47\) minus \(19\); the difference of forty-seven and nineteen Answer b \(72\) divided by \(9\); the quotient of seventy-two and nine Answer c \(m\) plus \(n\); the sum of \(m\) and \(n\) Answer d \(13\) times \(7\); the product of thirteen and seven

When two quantities have the same value, we say they are equal and connect them with an equal sign. Definition: Equality Symbol \(a = b\) is read \(a\) is equal to \(b\) The symbol \(=\) is called the equal sign. An inequality is used in algebra to compare two quantities that may have different values. The number line can help you understand inequalities. Remember that on the number line the numbers get larger as they go from left to right. So if we know that \(b\) is greater than \(a\), it means that \(b\) is to the right of \(a\) on the number line. We use the symbols "\(<\)" and "\(>\)" for inequalities. Definition: Inequality \(a < b\) is read \(a\) is less than \(b\) \(a\) is to the left of \(b\) on the number line \(a > b\) is read \(a\) is greater than \(b\) \(a\) is to the right of \(b\) on the number line The expressions \(a < b\) and \(a > b\) can be read from left-to-right or right-to-left, though in English we usually read from left-to-right. In general, \(a < b\) is equivalent to \(b > a\). For example, \(7 < 11\) is equivalent to \(11 > 7\). \(a > b\) is equivalent to \(b < a\). For example, \(17 > 4\) is equivalent to \(4 < 17\). When we write an inequality symbol with a line under it, such as \(a ≤ b\), it means \(a < b\) or \(a = b\). We read this \(a\) is less than or equal to \(b\). Also, if we put a slash through an equal sign, \(≠\), it means not equal. We summarize the symbols of equality and inequality in Table \(\PageIndex{3}\). Algebraic Notation Say a = b a is equal to b a ≠ b a is not equal to b a < b a is less than b a > b a is greater than b a ≤ b a is less than or equal to b a ≥ b a is greater than or equal to b

Definition: Symbols \(<\) and \(>\) The symbols \(<\) and \(>\) each have a smaller side and a larger side. smaller side \(<\) larger side larger side \(>\) smaller side The smaller side of the symbol faces the smaller number and the larger faces the larger number.

Translate from algebra to words: \(20 ≤ 35\) \(11 ≠ 15 − 3\) \(9 > 10 ÷ 2\) \(x + 2 < 10\) Solution 20 ≤ 35 20 is less than or equal to 35 11 ≠ 15 − 3 11 is not equal to 15 minus 3 9 > 10 ÷ 2 9 is greater than 10 divided by 2 x + 2 < 10 x plus 2 is less than 10

exercise \(\PageIndex{3}\) Translate from algebra to words. \(14 ≤ 27\) \(19 − 2 ≠ 8\) \(12 > 4 ÷ 2\) \(x − 7 < 1\) Answer a fourteen is less than or equal to twenty-seven Answer b nineteen minus two is not equal to eight Answer c twelve is greater than four divided by two Answer d \(x\) minus seven is less than one

exercise \(\PageIndex{4}\) Translate from algebra to words. \(19 ≥ 15\) \(7 = 12 − 5\) \(15 ÷ 3 < 8\) \(y + 3 > 6\) Answer a nineteen is greater than or equal to fifteen Answer b seven is equal to twelve minus five Answer c fifteen divided by three is less than eight Answer d \(y\) plus three is greater than six

The information in Figure \(\PageIndex{1}\) compares the fuel economy in miles-per-gallon (mpg) of several cars. Write the appropriate symbol (\(=\), \(<\), or \(>\)) in each expression to compare the fuel economy of the cars.
Figure \(\PageIndex{1}\): (credit: modification of work by Bernard Goldbach, Wikimedia Commons) MPG of Prius _____ MPG of Mini Cooper MPG of Versa _____ MPG of Fit MPG of Mini Cooper _____ MPG of Fit MPG of Corolla _____ MPG of Versa MPG of Corolla_____ MPG of Prius Solution MPG of Prius____MPG of Mini Cooper Find the values in the chart. 48____27 Compare. 48 > 27 MPG of Prius > MPG of Mini Cooper MPG of Versa____MPG of Fit Find the values in the chart. 26____27 Compare. 26 < 27 MPG of Versa < MPG of Fit MPG of Mini Cooper____MPG of Fit Find the values in the chart. 27____27 Compare. 27 = 27 MPG of Mini Cooper = MPG of Fit MPG of Corolla____MPG of Versa Find the values in the chart. 28____26 Compare. 28 > 26 MPG of Corolla > MPG of Versa MPG of Corolla____MPG of Prius Find the values in the chart. 28____48 Compare. 28 < 48 MPG of Corolla < MPG of Prius

exercise \(\PageIndex{5}\) Use Figure \(\PageIndex{1}\) to fill in the appropriate symbol, \(=\), \(<\), or \(>\). MPG of Prius_____MPG of Versa MPG of Mini Cooper_____ MPG of Corolla Answer a \(>\) Answer b \(>\)

exercise \(\PageIndex{6}\) Use Figure \(\PageIndex{1}\) to fill in the appropriate symbol, \(=\), \(<\), or \(>\). MPG of Fit_____ MPG of Prius MPG of Corolla _____ MPG of Fit Answer a \(<\) Answer b \(<\)

Grouping symbols in algebra are much like the commas, colons, and other punctuation marks in written language. They indicate which expressions are to be kept together and separate from other expressions. Table \(\PageIndex{4}\) lists three of the most commonly used grouping symbols in algebra. Common Grouping Symbols parentheses ( ) brackets [ ] braces { } Here are some examples of expressions that include grouping symbols. We will simplify expressions like these later in this section. \[8(14 - 8) \qquad 21 - 3[2 + 4(9 - 8)] \qquad 24 \div \{13 - 2[1(6 - 5) + 4]\} \nonumber\]

Identify Expressions and Equations What is the difference in English between a phrase and a sentence? A phrase expresses a single thought that is incomplete by itself, but a sentence makes a complete statement. “Running very fast” is a phrase, but “The football player was running very fast” is a sentence. A sentence has a subject and a verb. In algebra, we have expressions and equations. An expression is like a phrase. Here are some examples of expressions and how they relate to word phrases: Expression Words Phrase 3 + 5 3 plus 5 the sum of three and five n - 1 n minus one the difference of n and one 6 • 7 6 times 7 the product of six and seven \(\frac{x}{y}\) x divided by y the quotient of x and y Notice that the phrases do not form a complete sentence because the phrase does not have a verb. An equation is two expressions linked with an equal sign. When you read the words the symbols represent in an equation, you have a complete sentence in English. The equal sign gives the verb. Here are some examples of equations: Equation Sentence 3 + 5 = 8 The sum of three and five is equal to eight. n − 1 = 14 n minus one equals fourteen. 6 • 7 = 42 The product of six and seven is equal to forty-two. x = 53 x is equal to fifty-three. y + 9 = 2y − 3 y plus nine is equal to two y minus three. Definition: Expressions and Equations An expression is a number, a variable, or a combination of numbers and variables and operation symbols. An equation is made up of two expressions connected by an equal sign.
Determine if each is an expression or an equation: \(16 − 6 = 10\) \(4 • 2 + 1\) \(x ÷ 25\) \(y + 8 = 40\) Solution (a) 16 − 6 = 10 This is an equation—two expressions are connected with an equal sign. (b) 4 • 2 + 1 This is an expression—no equal sign. (c) x ÷ 25 This is an expression—no equal sign. (d) y + 8 = 40 This is an equation—two expressions are connected with an equal sign.

exercise \(\PageIndex{7}\) Determine if each is an expression or an equation: \(23 + 6 = 29\) \(7 • 3 − 7\) Answer a equation Answer b expression

exercise \(\PageIndex{8}\) Determine if each is an expression or an equation: \(y ÷ 14\) \(x − 6 = 21\) Answer a expression Answer b equation

Simplify Expressions with Exponents To simplify a numerical expression means to do all the math possible. For example, to simplify \(4 • 2 + 1\) we’d first multiply \(4 • 2\) to get \(8\) and then add the \(1\) to get \(9\). A good habit to develop is to work down the page, writing each step of the process below the previous step. The example just described would look like this: $$\begin{split} 4 \cdot 2 + &1 \\ 8 + &1 \\ &9 \end{split}$$ Suppose we have the expression \(2 • 2 • 2 • 2 • 2 • 2 • 2 • 2 • 2\). We could write this more compactly using exponential notation. Exponential notation is used in algebra to represent a quantity multiplied by itself several times. We write \(2 • 2 • 2\) as \(2^3\) and \(2 • 2 • 2 • 2 • 2 • 2 • 2 • 2 • 2\) as \(2^9\). In expressions such as \(2^3\), the \(2\) is called the base and the \(3\) is called the exponent. The exponent tells us how many factors of the base we have to multiply. \(2^3\) means multiply three factors of \(2\). We say \(2^3\) is in exponential notation and \(2 • 2 • 2\) is in expanded notation.

Definition: Exponential Notation For any expression \(a^n\), \(a\) is a factor multiplied by itself \(n\) times if \(n\) is a positive integer. The expression \(a^n\) is read \(a\) to the \(n^{th}\) power. For powers of \(n = 2\) and \(n = 3\), we have special names. \(a^2\) is read as "\(a\) squared" \(a^3\) is read as "\(a\) cubed" Table \(\PageIndex{7}\) lists some examples of expressions written in exponential notation. Exponential Notation In Words \(7^2\) 7 to the second power, or 7 squared \(5^3\) 5 to the third power, or 5 cubed \(9^4\) 9 to the fourth power \(12^5\) 12 to the fifth power

Write each expression in exponential form: \(16 • 16 • 16 • 16 • 16 • 16 • 16\) \(9 • 9 • 9 • 9 • 9\) \(x • x • x • x\) \(a • a • a • a • a • a • a • a\) Solution (a) The base 16 is a factor 7 times. \(16^7\) (b) The base 9 is a factor 5 times. \(9^5\) (c) The base x is a factor 4 times. \(x^4\) (d) The base a is a factor 8 times. \(a^8\)

exercise \(\PageIndex{9}\) Write each expression in exponential form: \(41 • 41 • 41 • 41 • 41\) Answer \(41^5\)

exercise \(\PageIndex{10}\) Write each expression in exponential form: \(7 • 7 • 7 • 7 • 7 • 7 • 7 • 7 • 7\) Answer \(7^9\)

exercise \(\PageIndex{11}\) Write each exponential expression in expanded form: \(4^8\) \(a^7\) Answer a \(4\cdot 4\cdot 4\cdot 4\cdot 4\cdot 4\cdot 4\cdot 4\) Answer b \(a\cdot a\cdot a\cdot a\cdot a\cdot a\cdot a\)

exercise \(\PageIndex{12}\) Write each exponential expression in expanded form: \(8^8\) \(b^6\) Answer a \(8\cdot 8\cdot 8\cdot 8\cdot 8\cdot 8\cdot 8\cdot 8\) Answer b \(b\cdot b\cdot b\cdot b\cdot b\cdot b\)

To simplify an exponential expression without using a calculator, we write it in expanded form and then multiply the factors.
exercise \(\PageIndex{13}\) Simplify: \(5^3\) \(1^7\) Answer a \(125\) Answer b \(1\)

exercise \(\PageIndex{14}\) Simplify: \(7^2\) \(0^5\) Answer a \(49\) Answer b \(0\)

Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (formerly of Santa Ana College). This content was produced by OpenStax and is licensed under a Creative Commons Attribution License 4.0 license.
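Since the exercises above are pure arithmetic, they can be double-checked mechanically; in Python, for example, exponential notation corresponds to the ** operator (a trivial sketch):

print(5**3)               # 125, i.e. 5 • 5 • 5
print(7**2, 1**7, 0**5)   # 49 1 0, matching the exercise answers
print(2**9)               # 512, the nine factors of 2 written out earlier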
Let me echo Benjamin's comment that any proactive step that you take should be done with the instructor's permission. At a practical level, I think there are ways to address issues (a) and (b). For (a), make a rubric (either in advance or a running one as you go) which lays out the criteria for awarding points. This allows you to be consistent with how you distribute points and, since the assignment of points itself is subjective in nature, a reasonable rubric enforced as consistently as possible usually results in good grading. Your instructor can give you a sense of whether your overall grading standards are too harsh or too lenient. For (b), if you have followed (a) and have the support of your instructor, I think it would be a reasonable policy not to consider changes to homework grades unless a substantial error was made (e.g. you misread a student's response and marked a correct method as being incorrect). But regardless of grading issues, we would certainly like calculus students to produce better responses than the one you used as an example. In my view, one needs to take a perspective similar to triage in medicine. Often, students have many deficiencies relative to the level of performance we would like to see from them, so prioritizing the most important issues to address is essential to good instruction. Since examples are often easier to discuss than general principles, I'll stick with your specific example. I see one glaring issue that is very concerning, and that is the use of the step $\frac{\infty}{\infty \cdot \infty} = \frac{1}{\infty}$. Not only is it an invalid thought process, it is likely to produce incorrect conclusions if the student attempts to extend it to other contexts. Would $\lim\limits_{x \rightarrow \infty} \frac{(x - 1)(x + 4)}{x^3}$ equal $\frac{\infty \cdot \infty}{\infty}$ or $\frac{\infty \cdot \infty}{\infty^3}$? What would happen if the denominator is $\sqrt[4]{x}^3$ instead of $x^3$? It's not clear that the student would use the right "power" of $\infty$ to be able to compute the limit. (If they are able to, that's a positive sign -- at least they have some sense of what is going on, even if their articulation of the reasoning is very flawed.) In comparison, if a student writes $\lim\limits_{x \rightarrow \infty} \frac{1}{x} = \frac{1}{\infty} = 0$, I'm not thrilled with the inappropriate use of $\infty$ (according to the standard definitions), but I consider it mostly harmless. There is reasonable sense that can be made to $\frac{1}{\infty} = 0$ that is much more palatable than nonsense such as $\frac{\infty}{\infty \cdot \infty} = 0$. The other issue that I would want to address is that "H.A. at 0" is not very precise and I would prefer a student write that "the line $y = 0$ is a horizontal asymptote" ("H.A." in place of "horizontal asymptote" is fine), but some instructors may prefer the less formal means of expression. After all, someone could be pedantic with my approach and say that $y = 0$ is not actually a line, so we should write "the line which is the locus of all points where $y = 0$" or some such and now we've simply confused the students entirely. In practice, I would accept an answer such as "H.A. at 0" from students, but expect myself and my TAs to be more precise and refer to the line $y = 0$ as the horizontal asymptote. Once you've identified the issues you want to address, here are some concrete ways you can obtain improvements: Be sure you are very careful in your own explanations of solutions. 
I've had TAs who wanted to mark very harshly for sloppy writing on homework ... and then I observed them engage in the same sloppiness on the board during their recitation periods. When I brought this issue up, they argued, "Well of course I have to be sloppy in class because I am under time pressure, but students doing homework have the time to write better." The flaws in this reasoning are many, but in brief (1) it is generally better to write explanations clearly on the board and cover fewer problems than to sloppily solve many problems; (2) students cannot possibly learn to write well if they do not see a consistent high standard [and certainly not if they see sloppiness is acceptable for an instructor or TA]; (3) students do not have the time or do not choose to use their time to write polished solutions to every calculus problem they are assigned -- and it's not realistic to expect that [so if good writing has not become a learned habit, don't expect it to happen]. So you start by providing a good example yourself. Then you perform triage and identify the worst offenses that you will mark off on homework. The cancellation of factors of $\infty$ in your example would certainly qualify. You should take time to address common mistakes. I recommend a brief amount of time addressing such issues in class (a list of things not to do and 1-2 sentence explanations of why for each item on the list), with an offer to explain the rationale more thoroughly in office hours. Students will respond to the incentive of losing points, and they are generally accepting if you can show that there is a reasonable rationale behind it (even if they don't fully understand or accept that rationale). Just don't appeal to authority or superiority. If you cannot give an explanation that a reasonably strong student can understand, you are probably setting an unrealistic expectation. It should be added that, as a TA, you should generally rely on your instructor's opinion to perform your triage assessment. Certainly it is inappropriate to simply decide your own priorities without any consultation with the instructor. The instructor often cannot micromanage every single prioritization decision you make, but a good conversation with your instructor can leave you with a good understanding of how to prioritize (and a good instructor will understand that at the level of small details, you two may not come to the exact same conclusions, and that is okay as long as you are in harmony overall). Ideally, by the time exams come around, students have had good instruction to see what an appropriate solution looks like for the types of problems they encounter, and the homework has identified the most egregious errors in writing up solutions. Then the markings for exams should generally follow the same standards. (In practice, I prefer to have a somewhat harsher grading standard for homework than for exams. Harsh feedback on homework is more likely to get their attention and lead to improvement before the next exam. And that improvement will hopefully lead to increased retention of conceptual understanding for use later in the course and beyond.)
Methods Funct. Anal. Topology 13 (2007), no. 4, 301-317 Let $\mathfrak{S}_\infty$ be the infinite permutation group and $\Gamma$ an arbitrary group. Then $\mathfrak{S}_\infty$ admits a natural action on $\Gamma^\infty$ by automorphisms, so one can form a semidirect product $\Gamma^\infty \rtimes \mathfrak{S}_\infty$, known as the wreath product $\Gamma\wr\mathfrak{S}_\infty$ of $\Gamma$ by $\mathfrak{S}_{\infty}$. We obtain a full description of unitary $I\!I_1$-factor-representations of $\Gamma\wr\mathfrak{S}_\infty$ in terms of finite characters of $\Gamma$. Our approach is based on extending Okounkov's classification method for admissible representations of $\mathfrak{S}_\infty\times\mathfrak{S}_\infty$. Also, we discuss certain examples of representations of type $I\!I\!I$, where the Tomita-Takesaki modular operator is expressed naturally in terms of the asymptotic operators, which are important in the theory of characters of $\mathfrak{S}_\infty$.

Methods Funct. Anal. Topology 13 (2007), no. 4, 318-328 In the present paper we are going to introduce, as bounded operators, an operator-valued integral of square modulus weakly integrable mappings the ranges of which are Hilbert spaces. Then, we shall show that each operator-valued integrable mapping of the index set of an orthonormal basis of a Hilbert space $H$ into $H$ can be written as a multiple of a sum of three orthonormal bases.

Methods Funct. Anal. Topology 13 (2007), no. 4, 329-332 Let $A$ be a bounded operator on a Hilbert space and $g$ a vector-valued function, which is holomorphic in a neighborhood of zero. The question about existence of holomorphic solutions of the Cauchy problem $\left\{ \begin{array}{ll} \displaystyle\frac{\partial u}{\partial t}= A\displaystyle\frac{\partial^{2}u}{\partial x^2}\\ u(0,x)=g(x) \\ \end{array} \right.$ is considered in the paper.

Methods Funct. Anal. Topology 13 (2007), no. 4, 333-337 We consider examples of operators that act in some Hilbert rigging from the positive Hilbert space into the negative one. For the first derivative operator we investigate a ``generalized'' selfadjointness in the sense of weighted Hilbert riggings of the spaces $L^2([0,1])$ and $L^2(\mathbb{R})$. We will show that an example of the operator $i \frac{d}{dt}$ in some rigging scales, which is selfadjoint in the usual sense but not generalized selfadjoint, cannot be constructed.

On an extended stochastic integral and the Wick calculus on the Kondratiev-type spaces connected with the generalized Meixner measure Methods Funct. Anal. Topology 13 (2007), no. 4, 338-379 We introduce an extended stochastic integral and construct elements of the Wick calculus on the Kondratiev-type spaces of regular and nonregular generalized functions, study the interconnection between the extended stochastic integration and the Wick calculus, and consider examples of stochastic equations with Wick-type nonlinearity. Our research is based on a general approach that covers the Gaussian, Poissonian, Gamma, Pascal and Meixner analyses.

Methods Funct. Anal. Topology 13 (2007), no. 4, 380-385 It is the object of this paper to introduce the $(1, 2)^*$-pre-$D_k$ axioms for $k = 0, 1, 2$.

Methods Funct. Anal. Topology 13 (2007), no. 4, 386-400 The paper is a survey of the main ideas and results on the use of the Markov moment problem method in optimal control theory. It contains a version of the presentation of the Markov moment approach to time-optimal control theory, linear and nonlinear.
Today we will link modular quilts (via their associated cuboid tree diagrams) to special hyperbolic polygons. The above drawing gives the hyperbolic polygon (the gray boundary) associated to the M(24) tree diagram (the black interior graph). In general, the correspondence goes as follows. Recall that a cuboid tree diagram is a tree such that all internal vertices are 3-valent and have a specified ordering on the incident edges (given by walking counterclockwise around the vertex), and such that all leaf-vertices are tinted blue or red; the latter are paired via an involution (indicated by giving paired red vertices the same label). Introduce a new 2-valent vertex on all edges joining two internal vertices or a blue vertex to an internal vertex. So, the picture on the right corresponds to the tree diagram on the left. Equip this extended tree with a metric such that every edge has length equal to that of an f-edge in the Dedekind tessellation. Fix an edge having a red vertex and develop this isometrically onto the f-edge connecting $i$ to $\rho$ in the tessellation. Then, the extended tree develops uniquely along the f-edges of the tessellation, in such a way that the circled black and blue vertices correspond to odd vertices, and the circled red and added uncircled vertices correspond to even vertices in the tessellation. Starting from the above tree (and choosing the upper-left edge to start the embedding), we obtain the picture on the left (we have removed the added 2-valent vertices). We will now associate a special hyperbolic polygon to this tree. At a red vertex, take the even line going through the vertex. If under the involution the red vertex is sent to itself, the even edges will be paired. Otherwise, the line is a free side and will be paired to the free side containing the red vertex corresponding under the involution. At a blue vertex, take the two odd edges making an angle of $\frac{\pi}{3}$ with the tree-edge containing the blue vertex. These odd edges will be paired. If we carry out this procedure for all blue and red vertices, we obtain a special polygon (see the picture on the right; the two vertical lines are paired). Conversely, suppose we start with a special polygon such as the one on the left below and consider all even and odd vertices on the boundary (tinted red, respectively blue) together with all odd vertices in the interior of the special polygon. These are indicated in the picture on the right above. If we connect these vertices with the geodesics in the polygon, we get a cuboid tree diagram. This correspondence, special polygons → tree diagrams, is finite to one, since we made a choice of the starting red vertex and edge. If we had taken the other edge containing a red vertex, we would end up with the following special polygon. It is no accident that these two special polygons consist of exactly 24 triangles of the Dedekind tessellation, as they correspond to the index 12 subgroup of the modular group $\Gamma$ determining the 12-dimensional permutation representation of the Mathieu group $M_{12}$. Similarly, the top drawing has 48 hyperbolic triangles and corresponds to the 24-dimensional permutation representation of $M_{24}$. Another time we will make the connection with Farey series, which will allow us to give free generators of finite index subgroups. Reference: Ravi S. Kulkarni, "An arithmetic-geometric method in the study of the subgroups of the modular group", Amer. J. Math. 113 (1991), 1053-1133
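For readers who want to experiment with these diagrams, the combinatorial data above (a tree whose internal vertices are 3-valent with a counterclockwise edge order, whose leaves are tinted blue or red, and whose red leaves are paired by an involution) is straightforward to encode. A minimal Python sketch, with representation and names entirely my own and not taken from the source:

def is_cuboid_tree(neighbors, tint, pairing):
    # neighbors: vertex -> counterclockwise-ordered list of adjacent vertices
    # tint: leaf -> 'blue' or 'red'; pairing: involution on the red leaves
    for v, nbrs in neighbors.items():
        if len(nbrs) == 1:                       # a leaf must be tinted
            if tint.get(v) not in ('blue', 'red'):
                return False
        elif len(nbrs) != 3:                     # internal vertices are 3-valent
            return False
    reds = {v for v, t in tint.items() if t == 'red'}
    # the pairing must be an involution defined exactly on the red leaves
    return set(pairing) == reds and all(pairing[pairing[v]] == v for v in reds)

# one internal vertex, one blue leaf, two red leaves swapped by the involution
print(is_cuboid_tree({0: [1, 2, 3], 1: [0], 2: [0], 3: [0]},
                     {1: 'blue', 2: 'red', 3: 'red'},
                     {2: 3, 3: 2}))   # True

(Fixed points of the involution are allowed, matching the case where a red vertex is sent to itself.)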
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra. Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. As for anything else, I need to finish that book before commenting. typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome
Ellipses and hyperbolas are usually defined using two foci, but they can also be defined using a focus and a directrix. For instance, suppose that we have a conic section with focus at the origin, directrix at $y=-1$, and eccentricity $e \ne 1$. Then \begin{eqnarray*} \sqrt{x^2 + y^2} &=& e|y+1| \cr x^2 + y^2 & = & e^2 (y^2 + 2 y + 1) \cr x^2 + (1-e^2)y^2 -2e^2 y& = & e^2 . \end{eqnarray*} Notice that the coefficient of $y^2$ in the last equation is positive if $e<1$, giving us an ellipse, and is negative when $e>1$, giving us a hyperbola. After completing the square and applying some more algebraic manipulations, we can put the equation in standard form: $$\left ( \frac{1-e^2}{e^2}\right) x^2 + \left( \frac{(1-e^2)^2}{e^2}\right )\left ( y - \frac{e^2}{1-e^2}\right )^2 = 1.$$ If $e<1$, this is a vertically aligned ("tall and skinny") ellipse with center at $(0,\frac{e^2}{1-e^2})$, with $a = \frac{e}{1-e^2}$, $b=\frac{e}{\sqrt{1-e^2}}$, $c=\frac{e^2}{1-e^2}$, and eccentricity $e=c/a$. As $e \to 1$, the center and the size of the ellipse both go to infinity. If $e>1$ the situation is analogous. Our curve is then a hyperbola with center at $(0,-\frac{e^2}{e^2-1})$, with $a = \frac{e}{e^2-1}$, $b=\frac{e}{\sqrt{e^2-1}}$, $c=\frac{e^2}{e^2-1}$, and eccentricity $e=c/a$. As $e \to 1$, the center and the size of the hyperbola both go to infinity. The following video describes, without equations, what happens to a conic section as the eccentricity is increased from 0 to 1, and then beyond 1.
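As a quick sanity check on the completing-the-square step, one can verify symbolically that the standard form above is just the original equation rescaled by $\frac{1-e^2}{e^2}$. A minimal SymPy sketch (symbol names mine):

import sympy as sp

x, y = sp.symbols('x y', real=True)
e = sp.symbols('e', positive=True)

# x^2 + (1-e^2) y^2 - 2 e^2 y = e^2, moved to one side
orig = x**2 + (1 - e**2)*y**2 - 2*e**2*y - e**2

# the claimed standard form, also moved to one side
std = ((1 - e**2)/e**2)*x**2 + ((1 - e**2)**2/e**2)*(y - e**2/(1 - e**2))**2 - 1

# std should equal orig times the overall factor (1-e^2)/e^2
print(sp.simplify(std - orig*(1 - e**2)/e**2))  # 0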
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ... Net-baryon fluctuations measured with ALICE at the CERN LHC (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ...
Q. a) If there's an obstacle at the bottom of a river, why does one observe an indentation on the surface of the river at the position of the obstacle? Attempted solution: The water near the bottom of the river is forced to flow more quickly over the obstacle, which in turn results in the pressure decreasing at this position. Since the atmospheric pressure on the surface of the river remains constant, there is a greater net downward pressure on the river at the position of the obstacle and thus an indentation is observed. Q. b) If a ball is floating on the surface of the river and is rotating with the local vorticity, why does the rotation slow down when the ball flows into the indent in the river? Attempted solution: Since vorticity is given by $$ \vec \omega = \nabla \times\vec v,$$ and we assume a 2d flow (i.e., no $z$-dependence), $$\vec\omega = {\left(\frac{\partial v_y}{\partial x} - \frac{\partial v_x}{\partial y} \right)}\hat z.$$ Initially $v_y = 0$ (that is, before the indentation), so $|\vec \omega| = \left|\frac{\partial v_x}{\partial y}\right|$, and in the indentation $|\vec \omega|= {\left(\left(\frac{\partial v_y}{\partial x}\right)^2 + \left(\frac{\partial v_x}{\partial y}\right)^2 \right)^{1/2}}.$ For the vorticity to be conserved, the rotation must slow down, since we have two contributing terms for when the ball is in the indent. Is this along the right track of thinking?
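As a numerical aside on the 2d formula $\vec\omega = (\partial v_y/\partial x - \partial v_x/\partial y)\hat z$, here is a small NumPy sketch (the toy velocity field and grid are my own choices) that recovers the vorticity $2\Omega$ of a rigid rotation by finite differences:

import numpy as np

# omega_z = dv_y/dx - dv_x/dy for a rigid rotation (v = Omega cross r),
# whose exact vorticity is 2*Omega everywhere
Omega = 1.5
x = np.linspace(-1, 1, 201)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y)          # X varies along axis 1, Y along axis 0
vx, vy = -Omega * Y, Omega * X

omega = np.gradient(vy, x, axis=1) - np.gradient(vx, y, axis=0)
print(omega.mean())               # ~ 3.0 = 2*Omega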
Take an infinite hexagonal lattice (or equivalently, an equilateral triangular lattice), with unit spacing between the closest lattice point pairs, and draw a disc of radius $r$ centered on a lattice point at $(0, 0)$. Let $N(r, \mathrm{hex})$ denote the number of hexagonal lattice points at coordinates $(a, b)$ s.t. $(a^2 + b^2) \leq r^2$, i.e. the number of lattice points on or within the aforementioned disc of radius $r$. Are there any literature references for approximations to $N(r, \mathrm{hex})$ (I haven't been able to find any through a Google search)? What is an exact counting solution for $N(r, \mathrm{hex})$? Using the exact counting solution for the $Z^2$ integer lattice (http://mathworld.wolfram.com/GausssCircleProblem.html), I suppose we can guess a lower bound for the hexagonal lattice of: Lower bound $N(r, \mathrm{hex}) = 1 + Floor[\frac{r}{2}] + 4*\sum^{Floor[\frac{r}{2}]}_{i=1} Floor[((\frac{r}{2})^2-i^2)^{\frac{1}{2}}] + 2*Floor[r]$, where we simply overlay the $Z^2$ lattice with (closest) nearest-neighbor spacing $2$ on top of an $A_2$ hexagonal lattice with (closest) nearest-neighbor spacing $1$, and add an additional $2*Floor[r]$ correctional term. [10/13/12] The OEIS sequences are extremely helpful, but after searching the literature for a while, I'm still having difficulty finding an exact (counting) solution for the number of lattice points within a circle of real number radius $r$. Any references would be very much appreciated! [10/14/12] Still no luck finding a reference in the literature. Surely someone has looked at this problem for, say, graphene and other molecular or atomic lattices, where one would like to have a precise atom count a certain physical distance away from one atom? [10/19/12] I managed to find the exact OEIS sequence I was looking for: http://oeis.org/A053416 However, I'd still like to find an exact counting solution, like the one presented above for the $Z^2$ integer lattice.
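In the absence of a closed form, a brute-force count is at least easy to write down. A Python sketch (the lattice basis $(1,0)$, $(1/2,\sqrt{3}/2)$ and the numerical tolerance are my choices), whose output can be checked against OEIS A053416:

import math

def N_hex(r):
    # brute-force count of triangular-lattice points within distance r of
    # the origin, using lattice vectors (1, 0) and (1/2, sqrt(3)/2)
    bound = int(math.ceil(2 * r)) + 1      # safe search window
    count = 0
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            xx = a + 0.5 * b
            yy = (math.sqrt(3) / 2) * b
            if xx * xx + yy * yy <= r * r + 1e-9:
                count += 1
    return count

print([N_hex(r) for r in range(1, 6)])  # [7, 19, 37, 61, 91], cf. OEIS A053416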
Measurements of $\mathrm{B}^*_\mathrm{s2}(5840)^0$ and $\mathrm{B}_\mathrm{s1}(5830)^0$ mesons are performed using a data sample of proton-proton collisions corresponding to an integrated luminosity of 19.6 fb$^{-1}$, collected with the CMS detector at the LHC at a centre-of-mass energy of 8 TeV. The analysis studies $P$-wave $\mathrm{B}^0_\mathrm{s}$ meson decays into $\mathrm{B}^{(*)+}\mathrm{K}^-$ and $\mathrm{B}^{(*)0}\mathrm{K}^0_\mathrm{S}$, where the $\mathrm{B}^+$ and $\mathrm{B}^0$ mesons are identified using the decays $\mathrm{B}^+\to\mathrm{J}/\psi\,\mathrm{K}^+$ and $\mathrm{B}^0\to\mathrm{J}/\psi\,\mathrm{K}^*(892)^0$. The masses of the $P$-wave $\mathrm{B}^0_\mathrm{s}$ meson states are measured and the natural width of the $\mathrm{B}^*_\mathrm{s2}(5840)^0$ state is determined. The first measurement of the mass difference between the charged and neutral $\mathrm{B}^*$ mesons is also presented. The $\mathrm{B}^*_\mathrm{s2}(5840)^0$ decay to $\mathrm{B}^0\mathrm{K}^0_\mathrm{S}$ is observed, together with a measurement of its branching fraction relative to the $\mathrm{B}^*_\mathrm{s2}(5840)^0\to\mathrm{B}^+\mathrm{K}^-$ decay. The ratio of the production cross sections times branching fractions $ \left(\sigma \left({\mathrm{B}}_{\mathrm{c}}^{\pm}\right)\mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm}\right)\right)/\left(\sigma \left({\mathrm{B}}^{\pm}\right)\mathrm{\mathcal{B}}\left({\mathrm{B}}^{\pm}\to \mathrm{J}/\psi {K}^{\pm}\right)\right) $ is studied in proton-proton collisions at a center-of-mass energy of 7 TeV with the CMS detector at the LHC. The kinematic region investigated requires $\mathrm{B}_c^\pm$ and $\mathrm{B}^\pm$ mesons with transverse momentum $p_T > 15$ GeV and rapidity $|y| < 1.6$. The data sample corresponds to an integrated luminosity of 5.1 fb$^{-1}$. The ratio is determined to be $ \left[0.48\pm 0.05\left(\mathrm{stat}\right)\pm 0.03\left(\mathrm{syst}\right)\pm 0.05\ \left({\tau}_{{\mathrm{B}}_{\mathrm{c}}}\right)\right]\% $. The $\mathrm{B}_c^\pm \to \mathrm{J}/\psi\,\pi^\pm\pi^\pm\pi^\mp$ decay is also observed in the same data sample. Using a model-independent method developed to measure the efficiency given the presence of resonant behaviour in the three-pion system, the ratio of the branching fractions $ \mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm }{\pi}^{\pm }{\pi}^{\mp}\right)/\mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm}\right) $ is measured to be $ 2.55\pm 0.80\left(\mathrm{stat}\right)\pm 0.33{\left(\mathrm{syst}\right)}_{-0.01}^{+0.04}\left({\tau}_{B_c}\right) $, consistent with the previous LHCb result. We present a measurement of the $t\bar{t}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV which uses events with an inclusive signature of significant missing transverse energy and jets. This is the first measurement which makes no explicit lepton identification requirements, so that sensitivity to $W \to \tau\nu$ decays is maintained. Heavy flavor jets from top quark decay are identified with a secondary vertex tagging algorithm. From 311 pb$^{-1}$ of data collected by the Collider Detector at Fermilab we measure a production cross section of $5.8 \pm 1.2\,(\mathrm{stat.})\,^{+0.9}_{-0.7}\,(\mathrm{syst.})$ pb for a top quark mass of 178 GeV/$c^2$, in agreement with previous determinations and standard model predictions.
We present a measurement of the $t\bar{t}$ production cross section using $194\ \mathrm{pb^{-1}}$ of CDF II data using events with a high transverse momentum electron or muon, three or more jets, and missing transverse energy. The measurement assumes a 100% $t\to Wb$ branching fraction. Events consistent with $t\bar{t}$ decay are found by identifying jets containing heavy flavor semileptonic decays to muons. The dominant backgrounds are evaluated directly from the data. Based on 20 candidate events and an expected background of $9.5\pm1.1$ events, we measure a production cross section of $5.3\pm3.3^{+1.3}_{-1.0}\ \mathrm{pb}$, in agreement with the standard model. The combination of searches for squarks and gluinos in final states containing jets, missing transverse momentum and zero or one electron or muon is presented. In the MSUGRA/CMSSM framework with $\tan\beta = 3$, $A_0 = 0$ and $\mu > 0$, squarks and gluinos of equal mass are excluded below 815 GeV. These are the most stringent limits to date. This paper describes the measurement of elliptic flow of charged particles in lead-lead collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV using the ATLAS detector at the Large Hadron Collider (LHC). The results are based on an integrated luminosity of approximately 7 $\mu$b$^{-1}$. Elliptic flow is measured over a wide region in pseudorapidity, $|\eta| < 2.5$, and over a broad range in transverse momentum, $0.5 < p_{\rm T} < 20$ GeV. The elliptic flow parameter $v_2$ is obtained by correlating individual tracks with the event plane measured using energy deposited in the forward calorimeters. As a function of transverse momentum, $v_2(p_{\rm T})$ reaches a maximum at $p_{\rm T}$ of about 3 GeV, then decreases and becomes weakly dependent on $p_{\rm T}$ above 7-8 GeV. Over the measured pseudorapidity region, $v_2$ is found to be approximately independent of $|\eta|$ for all collision centralities and particle transverse momenta, something not observed in lower energy collisions. The results are discussed in the context of previous measurements at lower collision energies, as well as recent results from the LHC. A search for production of supersymmetric particles in final states containing jets, missing transverse momentum, and at least one hadronically decaying tau lepton is presented. The data were recorded by the ATLAS experiment in $\sqrt{s} = 7$ TeV proton-proton collisions at the Large Hadron Collider. No excess above the Standard Model background expectation was observed in 2.05 fb$^{-1}$ of data. The results are interpreted in the context of gauge mediated supersymmetry breaking models with $M_{\rm mess} = 250$ TeV, $N_5 = 3$, $\mu > 0$, and $C_{\rm grav} = 1$. The production of supersymmetric particles is excluded at 95% C.L. up to a supersymmetry breaking scale $\Lambda = 30$ TeV, independent of $\tan\beta$, and up to $\Lambda = 43$ TeV for large $\tan\beta$. We present a measurement of the $t\bar{t}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using events containing a high transverse momentum electron or muon, three or more jets, and missing transverse energy. Events consistent with $t\bar{t}$ decay are found by identifying jets containing candidate heavy-flavor semileptonic decays to muons. The measurement uses a CDF Run II data sample corresponding to $2\ \mathrm{fb^{-1}}$ of integrated luminosity. Based on 248 candidate events with three or more jets and an expected background of $79.5\pm5.3$ events, we measure a production cross section of $9.1\pm 1.6\ \mathrm{pb}$.
Searches for the electroweak production of charginos, neutralinos and sleptons in final states characterized by the presence of two leptons (electrons and muons) and missing transverse momentum are performed using 20.3 fb$^{-1}$ of proton-proton collision data at $\sqrt{s} = 8$ TeV recorded with the ATLAS experiment at the Large Hadron Collider. No significant excess beyond Standard Model expectations is observed. Limits are set on the masses of the lightest chargino, next-to-lightest neutralino and sleptons for different lightest-neutralino mass hypotheses in simplified models. Results are also interpreted in various scenarios of the phenomenological Minimal Supersymmetric Standard Model. Drell-Yan lepton pairs are produced in the process $p\bar{p} \rightarrow \mu^+\mu^- + X$ through an intermediate $\gamma^*/Z$ boson. The forward-backward asymmetry in the polar-angle distribution of the $\mu^-$ as a function of the invariant mass of the $\mu^+\mu^-$ pair is used to obtain the effective leptonic determination $\sin^2 \theta^{\rm lept}_{\rm eff}$ of the electroweak-mixing parameter $\sin^2 \theta_W$, from which the value of $\sin^2 \theta_W$ is derived assuming the standard model. The measurement sample, recorded by the Collider Detector at Fermilab (CDF), corresponds to 9.2 fb$^{-1}$ of integrated luminosity from $p\bar{p}$ collisions at a center-of-momentum energy of 1.96 TeV, and is the full CDF Run II data set. The value of $\sin^2 \theta^{\rm lept}_{\rm eff}$ is found to be $0.2315 \pm 0.0010$, where statistical and systematic uncertainties are combined in quadrature. When interpreted within the context of the standard model using the on-shell renormalization scheme, where $\sin^2 \theta_W = 1 - M_W^2/M_Z^2$, the measurement yields $\sin^2 \theta_W = 0.2233 \pm 0.0009$, or equivalently a W-boson mass of $80.365 \pm 0.047$ GeV/$c^2$. The value of the W-boson mass is in agreement with previous determinations in electron-positron collisions and at the Tevatron collider. A search is presented for the direct pair production of a chargino and a neutralino $pp\rightarrow \tilde{\chi }_1^\pm \tilde{\chi }_2^0$, where the chargino decays to the lightest neutralino and the $W$ boson, $\tilde{\chi }_1^\pm \rightarrow \tilde{\chi }_1^0(W^{\pm }\rightarrow \ell ^{\pm }\nu )$, while the neutralino decays to the lightest neutralino and the 125 GeV Higgs boson, $\tilde{\chi }_2^0\rightarrow \tilde{\chi }_1^0(h\rightarrow bb/\gamma \gamma /\ell ^{\pm }\nu qq)$. The final states considered for the search have large missing transverse momentum, an isolated electron or muon, and one of the following: either two jets identified as originating from bottom quarks, or two photons, or a second electron or muon with the same electric charge. The analysis is based on 20.3 $\mathrm {fb}^{-1}$ of $\sqrt{s}=8{\mathrm {\ TeV}}$ proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with the Standard Model expectations, and limits are set in the context of a simplified supersymmetric model. A search for supersymmetry involving the pair production of gluinos decaying via third-generation squarks to the lightest neutralino ($\tilde{\chi}_1^0$) is reported. It uses an LHC proton-proton data set at a center-of-mass energy $\sqrt{s} = 13$ TeV with an integrated luminosity of 3.2 fb$^{-1}$ collected with the ATLAS detector in 2015.
The signal is searched for in events containing several energetic jets, of which at least three must be identified as b jets, large missing transverse momentum, and, potentially, isolated electrons or muons. Large-radius jets with a high mass are also used to identify highly boosted top quarks. No excess is found above the predicted background. For $\tilde{\chi}_1^0$ masses below approximately 700 GeV, gluino masses of less than 1.78 TeV and 1.76 TeV are excluded at the 95% C.L. in simplified models of the pair production of gluinos decaying via sbottom and stop, respectively. These results significantly extend the exclusion limits obtained with the $\sqrt{s} = 8$ TeV data set. We measure the ratio of cross sections, $\sigma(p\bar{p} \to Z + b\ \mathrm{jet})/\sigma(p\bar{p} \to Z + \mathrm{jet})$, for associated production of a Z boson with at least one jet. The ratio is also measured as a function of the jet transverse momentum, jet pseudorapidity, Z boson transverse momentum, and the azimuthal angle between the Z boson and the closest jet for events with at least one b jet. These measurements use data collected by the D0 experiment in Run II of Fermilab's Tevatron $p\bar{p}$ Collider at a center-of-mass energy of 1.96 TeV, and correspond to an integrated luminosity of 9.7 fb$^{-1}$. The results are compared to predictions from next-to-leading order calculations and various Monte Carlo event generators. The results of a search for top squark (stop) pair production in final states with one isolated lepton, jets, and missing transverse momentum are reported. The analysis is performed with proton-proton collision data at $ \sqrt{s} $ = 8 TeV collected with the ATLAS detector at the LHC in 2012 corresponding to an integrated luminosity of 20 fb$^{-1}$. The lightest supersymmetric particle (LSP) is taken to be the lightest neutralino, which only interacts weakly and is assumed to be stable. The stop decay modes considered are those to a top quark and the LSP as well as to a bottom quark and the lightest chargino, where the chargino decays to the LSP by emitting a W boson. A wide range of scenarios with different mass splittings between the stop, the lightest neutralino and the lightest chargino are considered, including cases where the W bosons or the top quarks are off-shell. Decay modes involving the heavier charginos and neutralinos are addressed using a set of phenomenological models of supersymmetry. No significant excess over the Standard Model prediction is observed. A stop with a mass between 210 and 640 GeV decaying directly to a top quark and a massless LSP is excluded at 95% confidence level, and in models where the mass of the lightest chargino is twice that of the LSP, stops are excluded at 95% confidence level up to a mass of 500 GeV for an LSP mass in the range of 100 to 150 GeV. Stringent exclusion limits are also derived for all other stop decay modes considered, and model-independent upper limits are set on the visible cross-section for processes beyond the Standard Model. We report measurements of the inclusive transverse momentum $p_T$ distribution of centrally produced $K^0_S$, $K^*(892)$, and $\phi(1020)$ mesons up to $p_T = 10$ GeV/$c$ in minimum-bias events, and $K^0_S$ and $\Lambda$ particles up to $p_T = 20$ GeV/$c$ in jets with transverse energy between 25 GeV and 160 GeV in $\bar{p}p$ collisions. The data were taken with the CDF II detector at the Fermilab Tevatron at $\sqrt{s} = 1.96$ TeV.
We find that as $p_T$ increases, the $p_T$ slopes of the three mesons ($K^0_S$, $K^*$, and $\phi$) are similar, and the ratio of $\Lambda$ to $K^0_S$ as a function of $p_T$ in minimum-bias events becomes similar to the fairly constant ratio in jets at $p_T \sim 5$ GeV/$c$. This suggests that the particles with $p_T \gtrsim 5$ GeV/$c$ in minimum-bias events are from soft jets, and that the $p_T$ slope of particles in jets is insensitive to light quark flavor (u, d, or s) and to the number of valence quarks. We also find that for $p_T \lesssim 4$ GeV relatively more $\Lambda$ baryons are produced in minimum-bias events than in jets. Measurements are presented of the t-channel single-top-quark production cross section in proton-proton collisions at $ \sqrt{s} $ = 8 TeV. The results are based on a data sample corresponding to an integrated luminosity of 19.7 fb$^{−1}$ recorded with the CMS detector at the LHC. The cross section is measured inclusively, as well as separately for top (t) and antitop $ \left(\overline{\mathrm{t}}\right) $, in final states with a muon or an electron. The measured inclusive t-channel cross section is σ$_{t-ch.}$ = 83.6 ± 2.3 (stat.) ± 7.4 (syst.) pb. The single t and $ \overline{\mathrm{t}} $ cross sections are measured to be σ$_{t-ch.}$(t) = 53.8 ± 1.5 (stat.) ± 4.4 (syst.) pb and σ$_{t-ch.}$ $ \left(\overline{t}\right) $ = 27.6 ± 1.3 (stat.) ± 3.7 (syst.) pb, respectively. The measured ratio of cross sections is R$_{t-ch.}$ = σ$_{t-ch.}$(t)/σ$_{t-ch.}$ $ \left(\overline{\mathrm{t}}\right) $ = 1.95 ± 0.10 (stat.) ± 0.19 (syst.), in agreement with the standard model prediction. The modulus of the Cabibbo-Kobayashi-Maskawa matrix element V$_{tb}$ is extracted and, in combination with a previous CMS result at $ \sqrt{s} $ = 7 TeV, a value |V$_{tb}$| = 0.998 ± 0.038 (exp.) ± 0.016 (theo.) is obtained. A search is presented for direct top squark pair production in final states with one isolated electron or muon, jets, and missing transverse momentum in proton-proton collisions at $\sqrt{s} = 7$ TeV. The measurement is based on 4.7 fb$^{-1}$ of data collected with the ATLAS detector at the LHC. Each top squark is assumed to decay to a top quark and the lightest supersymmetric particle (LSP). The data are found to be consistent with Standard Model expectations. Top squark masses between 230 GeV and 440 GeV are excluded with 95% confidence for massless LSPs, and top squark masses around 400 GeV are excluded for LSP masses up to 125 GeV.
I'm trying to solve the DE $\partial_x^2 \phi(x)=\frac{\rho(x)}{\epsilon}$ with $\rho(x)=0$ for $x<-L$; $\rho(x)=\rho_a$ for $-L<x<0$; $\rho(x)=\rho_b$ for $0<x<L$; $\rho(x)=0$ for $x>L$; and Neumann boundary conditions $\partial_x\phi(\pm\infty)=0$. I'm trying to use the following command, but I get no response from Mathematica: DSolve[{\[Phi]''[x] == 1/\[Epsilon]d \[Rho][x] + NeumannValue[0, x < -1000000] + NeumannValue[0, x > 1000000]}, \[Phi][x], x] Thank you very much for the help. Best,
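Independently of Mathematica, this piecewise problem can be checked by direct integration. A SymPy sketch (symbol names mine, and SymPy's handling of the Piecewise integral may need some massaging), which also exposes the compatibility condition the two Neumann conditions impose:

import sympy as sp

x, t = sp.symbols('x t', real=True)
L, eps, rho_a, rho_b = sp.symbols('L epsilon rho_a rho_b', positive=True)

rho = sp.Piecewise((0, t < -L), (rho_a, t < 0), (rho_b, t < L), (0, True))

# phi'(x) = (1/eps) * Integral_{-L}^{x} rho(t) dt, which already has
# phi'(-oo) = 0 because rho vanishes for t < -L
dphi = sp.integrate(rho, (t, -L, x)) / eps

# phi'(+oo) = 0 then forces a condition on the density: evaluate phi'
# anywhere to the right of x = L
print(sp.simplify(dphi.subs(x, 2*L)))   # L*(rho_a + rho_b)/epsilon

So the two-sided Neumann problem is only solvable when $\rho_b = -\rho_a$ (zero net charge); $\phi$ itself is one more integration away, determined up to an additive constant.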
Consider this problem statement [1]: Suppose \(k\) identical boxes contain \(n\) balls numbered \(1\) through \(n\). One ball is drawn from each box. What is the probability that \(m\) is the largest number drawn? This problem can be seen in either of two similar ways of interpreting repeated trials: There is an experiment in which we draw a ball from a box containing balls numbered \(1\) through \(n\), and the experiment is repeated \(k\) times. Alternatively, a ball is drawn from the same box with replacement \(k\) times. Or: there is just one experiment of drawing one ball each from \(k\) identical boxes. In the first case, the sample space of each trial contains just the numbers on the balls in the box, \(n\) elements in all. In the second case, the sample space consists of \(n^k\) elements. So, let us see how the approach to the solution may vary in the two cases. Let the numbers be \(\{1,2,3,\ldots,m-1,m,m+1,\ldots,n-1,n\}\). Repeat the experiment k times: Since we are interested in finding the probability that \(m\) is the largest number drawn, we can define the event as a union of mutually exclusive events and count the ways in which it can occur: (Getting m exactly once while all other k-1 observations are less than m) + (Getting m exactly twice while all other k-2 observations are less than m) + ... + (Getting m exactly k-1 times while 1 observation is less than m) + (Getting m exactly k times with no observation less than m). Each of these events is mutually exclusive of the others. Here, we can look into one particular event and hope to generalize it, resulting in a series. For simplicity, consider one particular sequence of repeating the experiment k times and getting a ball with number 5 (i.e. m=5) exactly 2 times, where the balls are numbered from 1 through 10 (i.e. n=10): \(1,2,3,1,4,5,3,4,5,2\ldots 2,3,4\). Here, one may reason that the probability of picking a 5-numbered ball in a single draw is \(\frac{1}{10}\). So, the entire sequence may be modelled as a sequence of tosses, where getting a 5 means success and not getting a 5 means failure. Note that failure is a composite event (getting less than 5 or more than 5). Further, each iteration of the experiment is assumed to be independent of the others. Then, the probability of getting this sequence with exactly 2 successes and k-2 failures, distributed as k-2 numbers less than 5 and 0 numbers greater than 5, is: \((\frac{1}{10})^2(\frac{4}{10})^{k-2}(\frac{5}{10})^0\). And this is just one sequence. There are \({k \choose 2}\) ways of getting the same probability (from different combinations). Then, the total probability of getting a ball numbered m=5 exactly 2 times as the highest numbered ball in k draws is: \({k\choose 2} (\frac{1}{n})^2(\frac{m-1}{n})^{k-2}\). Generalizing, the total probability of getting an 'm'-numbered ball exactly i times as the highest numbered ball in k draws is: \({k\choose i} (\frac{1}{n})^i(\frac{m-1}{n})^{k-i}\). Using this, the total probability that m is the highest number drawn is: \(\sum_{i=1}^{k}{k\choose i} (\frac{1}{n})^i(\frac{m-1}{n})^{k-i}\). Single experiment of k draws with replacement: Another approach can be based on the following reasoning, grounded in the collective trial of k draws with replacement: (Getting exactly 1 draw with number m) + (Getting k-1 draws with number m or less than m) + (Getting 0 draws with number more than m). This is really short!
Let us see what happened here: In order to have m as the highest number, it must be drawn at least once. So, (Getting exactly 1 draw with number m) ensures that. Beyond that, it may be drawn more than once, or might not be drawn again at all. This is ensured by (Getting k-1 draws with number m or less). Finally, no number greater than m must be drawn. This probability can then be computed as a multinomial probability. In how many ways can we distribute k balls among 3 partitions such that partition 1 gets exactly one ball, partition 2 gets k-1, and partition 3 gets 0 balls? \(\frac{k!}{(k-1)!\cdot1!}(P(getting\text{ }m))(P(getting\text{ }m\text{ }or\text{ }less))^{k-1} = \frac{k!}{(k-1)!\cdot1!}(\frac{1}{n})(\frac{m}{n})^{k-1}\) This method, however, is wrong! The multinomial solution overcounts. Here's an illustration: Consider k=2, n=10, m=5. Then we have had 2 draws, and two partition assignments are possible: either 'exactly 5' goes to the first partition, or to the second partition, with '5 or less' going to the other. But in the overall experiment, the combination (5,5) is counted twice with this method, and we get an excess of 1/100. Let's see if this is correct. For 2 draws, the possible favourable outcomes are: {(both 5), (first 5 and second less than 5), (first less than 5 and second 5)}. The probabilities of these events are \((\frac{1}{10})^2\), \((\frac{1}{10}\times\frac{4}{10})\), and \((\frac{4}{10}\times \frac{1}{10})\) respectively. The sum comes out to be \(\frac{9}{100}\). Using the multinomial formula, it comes out to be \(\frac{10}{100}\), which is \(\frac{1}{100}\) greater than what we computed before. Thus, this line of reasoning, though cogent, is wrong! The approach given before gives the correct solution. Alternative Solution: There is yet another way of getting at the desired probability, much like how the CDF is defined: Consider a discrete number line starting from 0 and extending to a large integer n. On this line, a value m is defined. Rather than being told the probability that the number r occurs, we are given its distribution: \(P(x\leq r) = P_r\). Then, if one is to compute the probability \(P(X=r)\), one can do so as \(P(x\leq r) - P(x \leq r-1)\). On similar lines, one may find the probability P(m is the largest number drawn in k draws) by evaluating it as P(m or less than m is drawn in k draws) - P(m-1 or less than m-1 is drawn in k draws). So, what is the probability P(m or less than m is drawn k times)? \((\frac{m}{n})^k\). So, the final probability is: \(P(m\text{ }is\text{ }the\text{ }highest\text{ }number\text{ }drawn) = (\frac{m}{n})^k - (\frac{m-1}{n})^k\) Plugging in the values from the example (k=2, n=10, m=5), we get \(\frac{9}{100}\). Compare this method with the iterative method discussed above. It is much neater! This can be simulated, and interesting observations can be made from the simple experiment. Here's code for Matlab/Octave:

%%Initialize variables:
num_iter = 100000; %Number of iterations
n = 10; %Number of balls numbered 1...n
m = 5;  %Number we want to compute probability for
k = 2;  %Number of boxes
num_favourable = 0;
for ii=1:num_iter
    outcomes = randi(n, [1,k]); %Outcomes from each box
    if(max(outcomes)==m)
        num_favourable = num_favourable+1;
    end
end
fprintf('\nProbability that m=%d number is the highest is %f\n', ...
    m, num_favourable/num_iter);

An interesting observation can be made with this code, and also from the alternative formulation above: increasing k (the number of boxes) reduces the chances of small numbers winning. That is, as k increases, it becomes more likely that every number appears somewhere among the boxes, so the higher numbers have increased chances of winning. From the formula above, increasing k reduces the probability for smaller m more than for larger m, keeping n the same. Example: for k=2, n=10 and m=5, the probability of m being largest is \(\frac{900}{10000}\). For k=10, it comes to roughly \(\frac{87}{100000}\). The chances have reduced significantly. Similarly, for m=10, the respective probabilities are \(\frac{19}{100}\) and \(\frac{65}{100}\). References: [1] Athanasios Papoulis and S. Unnikrishna Pillai, "Probability, Random Variables and Stochastic Processes", Fourth Edition, Chapter 2, Problem 2.17
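As a cross-check of the closed-form expression \((\frac{m}{n})^k - (\frac{m-1}{n})^k\) against simulation, here is a Python analogue of the Matlab script above (function names mine):

import random

def p_max_formula(m, n, k):
    # P(largest of k draws from {1,...,n} equals m)
    return (m / n)**k - ((m - 1) / n)**k

def p_max_sim(m, n, k, trials=100_000):
    hits = sum(max(random.randint(1, n) for _ in range(k)) == m
               for _ in range(trials))
    return hits / trials

print(p_max_formula(5, 10, 2), p_max_sim(5, 10, 2))  # both near 0.09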
Legendre functions Functions that are solutions of the Legendre equation \begin{equation}\label{eq1}\bigl( 1 - x^2 \bigr)\frac{\mathrm{d}^2y}{\mathrm{d}x^2} -2x \frac{\mathrm{d}y}{\mathrm{d}x} +\left(\nu(\nu+1) - \frac{\mu^2}{1-x^2}\right)y = 0,\end{equation} where $\nu$ and $\mu$ are arbitrary numbers. If $\nu = 0,1,\ldots$ and $\mu=0$, then the solutions of equation \ref{eq1}, restricted to $[-1,1]$, are called Legendre polynomials; for integers $\mu$ with $-\nu \leq \mu \leq \nu$, the solutions of \ref{eq1}, restricted to $[-1,1]$, are called Legendre associated functions. References: [AbSt] M. Abramowitz, I.A. Stegun, "Handbook of mathematical functions", Dover, reprint (1965), Chapt. 8. [Le] N.N. Lebedev, "Special functions and their applications", Dover, reprint (1972) (translated from Russian). (Adapted from the article by A.B. Ivanov, Encyclopedia of Mathematics, http://www.encyclopediaofmath.org/index.php?title=Legendre_functions&oldid=15230)
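As a quick illustration of the $\mu = 0$, $\nu = n$ case, one can check symbolically that the Legendre polynomials solve the equation. A minimal SymPy sketch:

import sympy as sp

x = sp.symbols('x')

def legendre_residual(n):
    # residual of (1 - x^2) y'' - 2x y' + n(n+1) y with y = P_n(x)
    y = sp.legendre(n, x)
    return sp.simplify((1 - x**2)*sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + n*(n + 1)*y)

print([legendre_residual(n) for n in range(5)])  # [0, 0, 0, 0, 0]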
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector.
Centrality classes are determined via the energy ...
Let $X$ be a Killing field on a Riemannian manifold $M$. Define a mapping $A_X : \mathcal X(M) \to \mathcal X(M)$ by $A_X(Z)=\nabla_Z X$, $Z \in \mathcal X(M)$. Consider the function $f : M \to \mathbb R$ given by $f(q) = \langle X,X \rangle_q$, $q \in M$. Let $p \in M$ be a critical point of $f$ (that is, $df_p=0$). Prove that for any $Z \in \mathcal X(M)$, at $p$, $\langle A_X(Z),A_X(Z) \rangle(p)=\frac 12Z_p(Z\langle X,X\rangle)+\langle R(X,Z)X,Z\rangle$. Hint: Put $S=\frac 12Z_p(Z\langle X,X\rangle)+\langle R(X,Z)X,Z\rangle(p)$. Using the Killing equation $\langle \nabla_Z X,X\rangle+\langle \nabla_X X,Z \rangle=0$, we obtain $$ S=\langle \nabla_{[X,Z]}X,Z\rangle-\langle \nabla_X X,\nabla_Z Z \rangle-\langle \nabla_X \nabla_Z X, Z \rangle. $$ Using the Killing equation again, we obtain \begin{align} S&=-\langle \nabla_Z X,\nabla_X Z\rangle + \langle \nabla_Z X, \nabla_Z X \rangle + \langle \nabla_Z X,\nabla_X Z \rangle - \langle \nabla_X X,\nabla_Z Z \rangle \\ &= \langle \nabla_Z X, \nabla_Z X \rangle - \langle \nabla_X X,\nabla_Z Z \rangle. \end{align} Because of the Killing equation at $p$, $\nabla_X X(p)=0$, and we conclude the assertion. How can we get from the first expression of $S$ to the second expression of $S$? To answer that question, it suffices to show that $$ \langle \nabla_{[X,Z]}X,Z\rangle-\langle \nabla_X \nabla_Z X, Z \rangle = -\langle \nabla_Z X,\nabla_X Z\rangle + \langle \nabla_Z X, \nabla_Z X \rangle + \langle \nabla_Z X,\nabla_X Z \rangle. $$ So how can we establish this equality? The only thing I have managed so far is to rearrange some terms algebraically (so that they are all positive): \begin{align} \langle \nabla_{[X,Z]}X,Z\rangle + \langle \nabla_Z X, \nabla_X Z \rangle &= \langle \nabla_X \nabla_Z X, Z \rangle + \langle \nabla_Z X,\nabla_X Z \rangle + \langle \nabla_Z X,\nabla_Z X \rangle \\ &= X\langle \nabla_Z X,Z \rangle + \langle \nabla_Z X,\nabla_Z X \rangle \end{align} I'm hoping that this is progress. What can I do now? (In particular, I have a hard time working with the term $\nabla_{[X,Z]}X$ involving the Lie bracket...)
Answer: the inverse property of multiplication.

Work Step by Step: The inverse property of multiplication states that for any nonzero real number $a$, $a \cdot \frac{1}{a} = 1$. Thus, $\dfrac{1}{\sqrt{17}} ( \sqrt{17})=1$. Therefore, the given statement illustrates the inverse property of multiplication.
Defining parameters

Level: \(N = 20 = 2^{2} \cdot 5\)
Weight: \(k = 7\)
Nonzero newspaces: \(3\)
Newforms: \(6\)
Sturm bound: \(168\)
Trace bound: \(1\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{7}(\Gamma_1(20))\).

                    Total   New   Old
Modular forms         82    38    44
Cusp forms            62    34    28
Eisenstein series     20     4    16

Decomposition of \(S_{7}^{\mathrm{new}}(\Gamma_1(20))\)

We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \(S_k^{\mathrm{new}}(N, \chi)\) we list the newforms together with their dimension.

Label    \(\chi\)                   Newforms    Dimension    \(\chi\) degree
20.7.b   \(\chi_{20}(11, \cdot)\)   20.7.b.a    12           1
20.7.d   \(\chi_{20}(19, \cdot)\)   20.7.d.a    1            1
                                    20.7.d.b    1
                                    20.7.d.c    2
                                    20.7.d.d    12
20.7.f   \(\chi_{20}(13, \cdot)\)   20.7.f.a    6            2
The alternating group $A_5 $ has two conjugacy classes of order 5 elements, both consisting of exactly 12 elements. Fix one of these conjugacy classes, say $C $, and construct a graph with vertices the 12 elements of $C $ and an edge between two $u,v \in C $ if and only if the group-product $u.v \in C $ still belongs to the same conjugacy class. Observe that this relation is symmetric as from $u.v = w \in C $ it follows that $v.u=u^{-1}.u.v.u = u^{-1}.w.u \in C $. The graph obtained is the icosahedron, depicted on the right with vertices written as words in two adjacent elements u and v from $C $, as indicated. Kostant writes: "Normally it is not a common practice in group theory to consider whether or not the product of two elements in a conjugacy class is again an element in that conjugacy class. However such a consideration here turns out to be quite productive." Still, similar constructions have been used in other groups as well, in particular in the study of the largest sporadic group, the monster group $\mathbb{M} $. There is one important catch. Whereas it is quite trivial to multiply two permutations and verify whether the result is among 12 given ones, for most of us mortals it is impossible to do actual calculations in the monster. So, we'd better have an alternative way to get at the icosahedral graph using only $A_5 $-data that is also available for the monster group, such as its character table. Let $G $ be any finite group and consider three of its conjugacy classes $C(i),C(j) $ and $C(k) $. For any element $w \in C(k) $ we can compute from the character table of $G $ the number of different products $u.v = w $ such that $u \in C(i) $ and $v \in C(j) $. This number is given by the formula $\frac{|G|}{|C_G(g_i)||C_G(g_j)|} \sum_{\chi} \frac{\chi(g_i) \chi(g_j) \overline{\chi(g_k)}}{\chi(1)} $ where the sum is taken over all irreducible characters $\chi $ and where $g_i \in C(i),g_j \in C(j) $ and $g_k \in C(k) $. Note also that $|C_G(g)| $ is the number of $G $-elements commuting with $g $, and that this number is the order of $G $ divided by the number of elements in the conjugacy class of $g $. The character table of $A_5 $ is given on the left: the five columns correspond to the different conjugacy classes of elements of order resp. 1, 2, 3, 5 and 5, and the rows are the character functions of the 5 irreducible representations of dimensions 1, 3, 3, 4 and 5. Let us fix the 4th conjugacy class, that is, 5a, as our class $C $. By the general formula, for a fixed $w \in C $ the number of different products $u.v=w $ with $u,v \in C $ is equal to $\frac{60}{25}\left(\frac{1}{1} + \frac{(\frac{1+\sqrt{5}}{2})^3}{3} + \frac{(\frac{1-\sqrt{5}}{2})^3}{3} - \frac{1}{4} + \frac{0}{5}\right) = \frac{60}{25}\left(1 + \frac{4}{3} - \frac{1}{4}\right) = 5 $. Because for each $x \in C $ also its inverse $x^{-1} \in C $, this can be rephrased by saying that there are exactly 5 different products $w^{-1}.u \in C $, or equivalently, that the valency of every vertex $w^{-1} \in C $ in the graph is exactly 5. That is, our graph has 12 vertices, each with exactly 5 neighbors, and with a bit of extra work one can show it to be the icosahedral graph. For the monster group, the Atlas tells us that it has exactly 194 irreducible representations (and hence also 194 conjugacy classes). Of these conjugacy classes, the involutions (that is, the elements of order 2) are of particular importance. There are exactly 2 conjugacy classes of involutions, usually denoted 2A and 2B.
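To make the class-multiplication formula concrete, here is a small numerical sketch (my addition, not from the original post) that evaluates it on the $A_5$ character table quoted above; the class ordering 1a, 2a, 3a, 5a, 5b and the class sizes 1, 15, 20, 12, 12 are standard but hard-coded here:

from math import sqrt

phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2

# Character table of A5: rows are the 5 irreducible characters
# (dimensions 1, 3, 3, 4, 5), columns the classes 1a, 2a, 3a, 5a, 5b.
chars = [
    [1,  1,  1,   1,    1],
    [3, -1,  0,  phi,  psi],
    [3, -1,  0,  psi,  phi],
    [4,  0,  1,  -1,   -1],
    [5,  1, -1,   0,    0],
]
order = 60
class_sizes = [1, 15, 20, 12, 12]

def multiplication_coefficient(i, j, k):
    """Number of products u.v = w with u in C(i), v in C(j), for fixed w in C(k)."""
    cent_i = order // class_sizes[i]   # |C_G(g_i)|
    cent_j = order // class_sizes[j]
    s = sum(chi[i] * chi[j] * chi[k] / chi[0] for chi in chars)
    return order / (cent_i * cent_j) * s

print(multiplication_coefficient(3, 3, 3))  # class 5a: prints 5.0, the valency above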
Involutions in class 2A are called "Fischer-involutions", after Bernd Fischer, because their centralizer subgroup is an extension of Fischer's baby Monster sporadic group. Likewise, involutions in class 2B are usually called "Conway-involutions" because their centralizer subgroup is an extension of the largest Conway sporadic group. Let us define the monster graph to be the graph having as its vertices the Fischer-involutions, with an edge between two of them $u,v \in 2A $ if and only if their product $u.v $ is again a Fischer-involution. Because the centralizer subgroup is $2.\mathbb{B} $, the number of vertices is equal to $97239461142009186000 = 2^4 * 3^7 * 5^3 * 7^4 * 11 * 13^2 * 29 * 41 * 59 * 71 $. From the general result recalled before, the valency is the same at every vertex; to determine it we have to use the character table of the monster and the formula above. Fortunately GAP provides the function ClassMultiplicationCoefficient to do this without making errors.

gap> table:=CharacterTable("M");
CharacterTable( "M" )
gap> ClassMultiplicationCoefficient(table,2,2,2);
27143910000

Perhaps noticeable is the fact that the prime decomposition of the valency $27143910000 = 2^4 * 3^4 * 5^4 * 23 * 31 * 47 $ is symmetric in the three smallest and three largest prime factors of the baby monster order. Robert Griess proved that one can recover the monster group $\mathbb{M} $ from the monster graph as its automorphism group! As in the case of the icosahedral graph, the number of vertices and their common valency does not determine the monster graph uniquely. To gain more insight, we would like to know more about the sizes of minimal circuits in the graph, the number of such minimal circuits going through a fixed vertex, and so on. Such an investigation quickly leads to a careful analysis of which other elements can be obtained as products $u.v $ of two Fischer involutions $u,v \in 2A $. We are in for a major surprise, first observed by John McKay: printing out the number of products of two Fischer-involutions giving an element in the i-th conjugacy class of the monster, where i runs over all 194 possible classes, we get the following string of numbers:

97239461142009186000, 27143910000, 196560, 920808, 0, 3, 1104, 4, 0, 0, 5, 0, 6, and then 0 for each of the remaining 181 classes.

That is, the elements of only 9 conjugacy classes can be written as products of two Fischer-involutions! These classes are:

1A = { 1 }, written in 97239461142009186000 different ways (after all, involutions have order two)
2A, each element of which can be written in exactly 27143910000 different ways (the valency)
2B, each element of which can be written in exactly 196560 different ways. Observe that this is the kissing number of the Leech lattice, leading to a permutation representation of $2.Co_1 $.
3A, each element of which can be written in exactly 920808 ways. Note that this number gives a permutation representation of the maximal monster subgroup $3.Fi_{24}' $.
3C, each element of which can be written in exactly 3 ways.
4A, each element of which can be written in exactly 1104 ways.
4B, each element of which can be written in exactly 4 ways.
5A, each element of which can be written in exactly 5 ways.
6A, each element of which can be written in exactly 6 ways.

Let us forget about the actual numbers for the moment and concentrate on the orders of these 9 conjugacy classes: 1, 2, 2, 3, 3, 4, 4, 5, 6. These are precisely the components of the fundamental root of the extended Dynkin diagram $\tilde{E_8} $! This is the content of John McKay's E(8)-observation: there should be a precise relation between the nodes of the extended Dynkin diagram and these 9 conjugacy classes, in such a way that the order of the class corresponds to the component of the fundamental root. More precisely, one conjectures the following correspondence (figure not reproduced here). This is similar to the classical McKay correspondence between finite subgroups of $SU(2) $ and extended Dynkin diagrams (the binary icosahedral group corresponding to extended E(8)). In that correspondence, the nodes of the Dynkin diagram correspond to irreducible representations of the group and the edges are determined by the decompositions of tensor-products with the fundamental 2-dimensional representation. Here, however, the nodes have to correspond to conjugacy classes (rather than representations) and we have to look for another procedure to arrive at the required edges! An exciting proposal has been put forward recently by John Duncan in his paper Arithmetic groups and the affine E8 Dynkin diagram. It will take us a couple of posts to get there, but for now, let's give the gist of it: monstrous moonshine gives a correspondence between conjugacy classes of the monster and certain arithmetic subgroups of $PSL_2(\mathbb{R}) $ commensurable with the modular group $\Gamma = PSL_2(\mathbb{Z}) $. The edges of the extended Dynkin E(8) diagram are then given by the configuration of the arithmetic groups corresponding to the indicated 9 conjugacy classes! (to be continued...)
Recall the following definitions from elementary geometry:

An angle is acute if it is between \(0^\circ\) and \(90^\circ\).
An angle is a right angle if it equals \(90^\circ\).
An angle is obtuse if it is between \(90^\circ\) and \(180^\circ\).
An angle is a straight angle if it equals \(180^\circ\).

Figure 1.1.1 Types of angles

In elementary geometry, angles are always considered to be positive and not larger than \(360^\circ\). For now we will only consider such angles. The following definitions will be used throughout the text:

Two acute angles are complementary if their sum equals \(90^\circ\). In other words, if \(0^\circ \le \angle A, \angle B \le 90^\circ\), then \(\angle A\) and \(\angle B\) are complementary if \(\angle A + \angle B = 90^\circ\).
Two angles between \(0^\circ\) and \(180^\circ\) are supplementary if their sum equals \(180^\circ\). In other words, if \(0^\circ \le \angle A, \angle B \le 180^\circ\), then \(\angle A\) and \(\angle B\) are supplementary if \(\angle A + \angle B = 180^\circ\).
Two angles between \(0^\circ\) and \(360^\circ\) are conjugate (or explementary) if their sum equals \(360^\circ\). In other words, if \(0^\circ \le \angle A, \angle B \le 360^\circ\), then \(\angle A\) and \(\angle B\) are conjugate if \(\angle A + \angle B = 360^\circ\).

Figure 1.1.2 Types of pairs of angles

Instead of using the angle notation \(\angle A\) to denote an angle, we will sometimes use just a capital letter by itself (e.g. \(A, B, C\)) or a lowercase variable name (e.g. \(x, y, t\)). It is also common to use letters (either uppercase or lowercase) from the Greek alphabet, shown in the table below, to represent angles:

Table 1.1 The Greek alphabet

In elementary geometry you learned that the sum of the angles in a triangle equals \(180^\circ\), and that an isosceles triangle is a triangle with two sides of equal length. Recall that in a right triangle one of the angles is a right angle. Thus, in a right triangle one of the angles is \(90^\circ\) and the other two angles are acute angles whose sum is \(90^\circ\) (i.e. the other two angles are complementary angles).

Example 1.1

For each triangle below, determine the unknown angle(s):

Note: We will sometimes refer to the angles of a triangle by their vertex points. For example, in the first triangle above we will simply refer to the angle \(\angle BAC\) as angle \(A\).

Solution: For triangle \(\triangle ABC\), \(A = 35^\circ\) and \(C = 20^\circ\), and we know that \(A + B + C = 180^\circ\), so \[\nonumber 35^\circ + B + 20^\circ = 180^\circ \Rightarrow B = 180^\circ - 35^\circ - 20^\circ \Rightarrow \fbox{\(B = 125^\circ\)}.\] For the right triangle \(\triangle DEF\), \(E = 53^\circ\) and \(F = 90^\circ\), and we know that the two acute angles \(D\) and \(E\) are complementary, so \[\nonumber D + E = 90^\circ \Rightarrow D = 90^\circ - 53^\circ \Rightarrow \fbox{\(D = 37^\circ\)}.\] For triangle \(\triangle XYZ\), the angles are in terms of an unknown number \(\alpha\), but we do know that \(X + Y + Z = 180^\circ\), which we can use to solve for \(\alpha\) and then use that to solve for \(X, Y,\) and \(Z\): \[\nonumber \alpha + 3\alpha + \alpha = 180^\circ \Rightarrow 5\alpha = 180^\circ \Rightarrow \alpha = 36^\circ \Rightarrow \fbox{\(X = 36^\circ,\, Y = 3\times 36^\circ = 108^\circ,\, Z = 36^\circ\)}\]

Example 1.2: Thales' Theorem

Thales' Theorem states that if \(A,\, B,\text{ and } C\) are (distinct) points on a circle such that the line segment \(\overline{AB}\) is a diameter of the circle, then the angle \(\angle ACB\) is a right angle (see Figure 1.1.3(a)). In other words, the triangle \(\triangle ABC\) is a right triangle.

Figure 1.1.3 Thales' Theorem: \(\angle ACB = 90^\circ\)

To prove this, let \(O\) be the center of the circle and draw the line segment \(\overline{OC}\), as in Figure 1.1.3(b). Let \(\alpha = \angle BAC\) and \(\beta = \angle ABC\).
Since \(\overline{AB}\) is a diameter of the circle, \(\overline{OA} \text{ and }\overline{OC}\) have the same length (namely, the circle’s radius). This means that \(△OAC \text{ is an isosceles triangle, and so }∠OCA = ∠OAC = α\). Likewise, \(△OBC\) is an isosceles triangle and \(∠OCB = ∠OBC = β\). So we see that \(∠ ACB = α+β\). And since the angles of \(△ ABC\) must add up to \(180^◦\) , we see that \(180^◦ = α+(α+β)+β = 2 (α+β), \text{ so }α+β = 90^◦\) . Thus, \(∠ ACB = 90^◦\) . QED By knowing the lengths of two sides of a right triangle, the length of the third side can be determined by using the Pythagorean Theorem: Theorem 1.1. Pythagorean Theorem The square of the length of the hypotenuse of a right triangle is equal to the sum of the squares of the lengths of its legs. Recall that triangles are similar if their corresponding angles are equal, and that similarity implies that corresponding sides are proportional. Thus, since \(\triangle\,ABC \) is similar to \(\triangle\,CBD \), by proportionality of corresponding sides we see that \[\nonumber \overline{AB}~\text{is to}~\overline{CB}~\text{(hypotenuses)}\text{ as } \overline{BC}~\text{is to}~\overline{BD}~\text{(vertical legs)} \quad\Rightarrow\quad \frac{c}{a} ~=~ \frac{a}{d} \quad\Rightarrow\quad cd ~=~ a^2 ~.\] Since \(\triangle\,ABC \) is similar to \(\triangle\,ACD \), comparing horizontal legs and hypotenuses gives \[\nonumber \frac{b}{c-d} ~=~ \frac{c}{b} \quad\Rightarrow\quad b^2 ~=~ c^2 ~-~ cd ~=~ c ^2 ~-~ a^2 \quad\Rightarrow\quad a^2 ~+~ b^2 ~=~ c^2 ~. \textbf{QED}\] Note: The symbols \(\perp\) and \(\sim\) denote perpendicularity and similarity, respectively. For example, in the above proof we had \(\,\overline{CD} \perp \overline{AB}\, \) and \(\,\triangle\,ABC \sim \triangle\,CBD \sim \triangle\,ACD \). Example 1.3 For each right triangle below, determine the length of the unknown side: Solution: For triangle \(\triangle\,ABC \), the Pythagorean Theorem says that \[\nonumber a^2 ~+~ 4^2 ~=~ 5^2 \quad\Rightarrow\quad a^2 ~=~ 25 ~-~ 16 ~=~ 9 \quad\Rightarrow\quad \fbox{\(a ~=~ 3\)} ~.\] For triangle \(\triangle\,DEF \), the Pythagorean Theorem says that \[\nonumber e^2 ~+~ 1^2 ~=~ 2^2 \quad\Rightarrow\quad e^2 ~=~ 4 ~-~ 1 ~=~ 3 \quad\Rightarrow\quad \fbox{$e ~=~ \sqrt{3}$} ~.\] For triangle \(\triangle\,XYZ \), the Pythagorean Theorem says that \[\nonumber 1^2 ~+~ 1^2 ~=~ z^2 \quad\Rightarrow\quad z^2 ~=~ 2 \quad\Rightarrow\quad \fbox{$z ~=~ \sqrt{2}$} ~.\] Example 1.4 A 17 ft ladder leaning against a wall has its foot 8 ft from the base of the wall. At what height is the top of the ladder touching the wall? Solution: Let \(h \) be the height at which the ladder touches the wall. We can assume that the ground makes a right angle with the wall, as in the picture on the right. Then we see that the ladder, ground, and wall form a right triangle with a hypotenuse of length 17 ft (the length of the ladder) and legs with lengths 8 ft and \(h \) ft. So by the Pythagorean Theorem, we have \[\nonumber h^2 ~+~ 8^2 ~=~ 17^2 \quad\Rightarrow\quad h^2 ~=~ 289 ~-~ 64 ~=~ 225 \quad\Rightarrow\quad \fbox{$h ~=~ 15 ~\text{ft}$} ~.\]
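Example 1.4 translates directly into a two-line computation; a minimal sketch (my addition, not part of the text):

import math

# Ladder of length 17 ft with its foot 8 ft from the wall (Example 1.4):
c, b = 17.0, 8.0
h = math.sqrt(c**2 - b**2)  # Pythagorean Theorem: h^2 + b^2 = c^2
print(h)  # 15.0 ft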
Refrigeration and Air-Conditioning

Refrigeration is a process in which work is done to move heat from one location to another. The work of heat transport is traditionally driven by mechanical work, but can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including, but not limited to: household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to refrigeration units.

Refrigeration has had a large impact on industry, lifestyle, agriculture and settlement patterns. The idea of preserving food dates back to the ancient Roman and Chinese empires. However, refrigeration technology has evolved rapidly in the last century, from ice harvesting to temperature-controlled rail cars. The introduction of refrigerated rail cars contributed to the westward expansion of the United States, allowing settlement in areas that were not on main transport channels such as rivers, harbors, or valley trails.

The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time. In 1758, Benjamin Franklin and John Hadley, a professor of chemistry, collaborated on a project at Cambridge University, England, investigating the principle of evaporation as a means to rapidly cool an object. They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to 'quicken' the evaporation; they lowered the temperature of the thermometer bulb down to \(7^\circ\mathrm{F}\) \((-14^\circ\mathrm{C})\), while the ambient temperature was \(65^\circ\mathrm{F}\) \((18^\circ\mathrm{C})\). They noted that soon after they passed the freezing point of water \((32^\circ\mathrm{F})\), a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about a quarter inch thick when they stopped the experiment upon reaching \(7^\circ\mathrm{F}\) \((-14^\circ\mathrm{C})\).

In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate to Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system in the world. In 1842, a similar attempt was made by American physician John Gorrie, who built a working prototype, but it was a commercial failure. Like many of the medical experts of this time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease.
American engineer Alexander Twining took out a British patent in 1850 for a vapor compression system that used ether. The first practical vapor compression refrigeration system was built by James Harrison, a British journalist who had emigrated to Australia. His 1856 patent was for a vapour compression system using ether, alcohol or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapour-compression refrigeration to breweries and meat packing houses, and by 1861, a dozen of his systems were in operation. The first gas absorption refrigeration system, using gaseous ammonia dissolved in water, was developed by Ferdinand Carre of France in 1859 and patented in 1860.

Air conditioning is the process of altering the properties of air to more comfortable conditions, typically with the aim of distributing the conditioned air to an occupied space to improve thermal comfort and indoor air quality. The basic concept behind air conditioning is said to have been applied in ancient Egypt, where reeds were hung in windows and were moistened with trickling water. The evaporation of water cooled the air blowing through the window. This process also made the air more humid, which can be beneficial in a dry desert climate. In Ancient Rome, water from aqueducts was circulated through the walls of certain houses to cool them. Other techniques in medieval Persia involved the use of cisterns and wind towers to cool buildings during the hot season. Modern air conditioning emerged from advances in chemistry during the 19th century, and the first large-scale electrical air conditioning was invented and used in 1902 by American inventor Willis Carrier. The introduction of residential air conditioning in the 1920s helped enable the great migration to the Sun Belt in the United States.

Inductors or choke coils are made with iron cores because large flux densities can be produced in iron cores, and so inductances of large value can be obtained. Air-cored inductors become too bulky to provide an inductance of the required value. An inductance draws instantaneous power but no average power: the power drawn by an inductance during one quarter cycle is released in another quarter cycle, so the average power drawn by an inductance is zero.

Power factor may be defined as the cosine of the phase angle between voltage and current. It may also be defined as the ratio of resistance to impedance, or the ratio of true power to apparent power. The current component which is in phase with the circuit voltage (i.e., \(I\cos\phi\)) and contributes to the active or true power of the circuit is called the active (wattful, or in-phase) component of current. The current component which is in quadrature (i.e., \(90^\circ\) out of phase) with the circuit voltage (i.e., \(I\sin\phi\)) and contributes to the reactive power of the circuit is called the reactive (or wattless) component of current.

The power which is actually consumed or utilized in an AC circuit is called the true or active power of the circuit. Power is consumed only in resistance. It is given by the product of the circuit voltage \(V\), the current \(I\) and the power factor \(\cos\phi\), i.e., \(P = VI\cos\phi\). It is expressed in watts.
A pure inductor and a pure capacitor do not consume any power: whatever power is drawn from the supply source by these components in a quarter cycle is returned to the supply source in the other quarter cycle. This power which flows back and forth (i.e., in both directions in the circuit) is called the reactive power; it is also known as wattless power. The reactive power of an AC circuit is given by the product of the voltage \(V\), the current \(I\) and the sine of the phase angle \(\phi\): reactive power \(Q = VI\sin\phi\), expressed in reactive volt-amperes. In a purely capacitive circuit, current leads the applied voltage by \(90^\circ\) or \(\pi/2\) radians. The inductive reactance of an inductor increases proportionally with the supply frequency, i.e., \(X_L \propto f\). (These power relations are illustrated in the short sketch at the end of this section.)

In 1758, Benjamin Franklin and John Hadley, a chemistry professor at Cambridge University, conducted an experiment to explore the principle of evaporation as a means to rapidly cool an object. Franklin and Hadley confirmed that evaporation of highly volatile liquids could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to speed up the evaporation. They lowered the temperature of the thermometer bulb down to \(-14^\circ\mathrm{C}\) \((7^\circ\mathrm{F})\) while the ambient temperature was \(18^\circ\mathrm{C}\) \((64^\circ\mathrm{F})\). In 1820, English scientist and inventor Michael Faraday discovered that compressing and liquefying ammonia could chill air when the liquefied ammonia was allowed to evaporate. In 1842, Florida physician John Gorrie used compressor technology to create ice, which he used to cool air for his patients in his hospital in Apalachicola, Florida.
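The power relations quoted above can be illustrated numerically; a minimal sketch (my addition, with made-up supply values):

import math

# Hypothetical RL load on a 230 V, 50 Hz supply (illustrative values only).
V, I, phi = 230.0, 5.0, math.radians(30)  # rms volts, rms amps, phase angle

P = V * I * math.cos(phi)  # true (active) power, watts
Q = V * I * math.sin(phi)  # reactive power, reactive volt-amperes
S = V * I                  # apparent power, volt-amperes

print(f"P = {P:.1f} W, Q = {Q:.1f} var, S = {S:.1f} VA, pf = {math.cos(phi):.3f}")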
I need to understand the logic of the FOL statement below. Can someone help? $$ \forall x \exists y \forall z (z \neq y \iff f(x) \neq z) $$ Does this imply that x, y and z cannot be the same, or that f(x) has no value?

The statement is "for all $x$, there exists a value of $y$ such that for all $z$, $z\neq y$ if and only if $z \neq f(x)$". This can be simplified: $$\begin{align} & & \forall x \exists y \forall z (z\neq y \iff z \neq f(x))\\ &\implies & \forall x \exists y \forall z (z=y \iff z = f(x))\\ &\implies & \forall x \exists y \forall z (y = f(x))\\ &\implies & \forall x \exists y (y = f(x))\\ \end{align}$$ If we denote the set of all values of $x$ by $X$ and the set of all values of $y$ by $Y$, then this tells us that the function $f$ maps every $x$ in $X$ to a $y$ in $Y$. That is, $f: X \to Y$.
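The simplification can also be checked by brute force over a small finite domain (my addition; the domain size and the encoding of $f$ as a dict are arbitrary choices):

from itertools import product

# Check that  forall x exists y forall z (z != y  <->  z != f(x))
# holds for every total function f, because y can always be chosen as f(x).
domain = range(4)

def holds(f):
    return all(
        any(
            all((z != y) == (z != f[x]) for z in domain)
            for y in domain
        )
        for x in domain
    )

# Every total function f : domain -> domain satisfies the sentence:
print(all(holds(dict(zip(domain, fv))) for fv in product(domain, repeat=4)))  # True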
Proofs of Fermat's Last Theorem and Beal's Conjecture

Abstract. If $\pi$ is an odd prime and $x, y, z$ are relatively prime positive integers, then $z^\pi\not=x^\pi+y^\pi$. In this note, an elegant simple proof of this theorem is given: if $\pi$ is an odd prime and $x, y, z$ are positive integers satisfying $z^\pi=x^\pi+y^\pi$, then $x, y, z$ are each divisible by $2$. (Beal's conjecture) The equation $z^\xi=x^\mu+y^\nu$ has no solution in relatively prime positive integers $x, y, z$, with $\xi, \mu, \nu$ primes at least $3$. This is also proved; that is, $x, y, z$ are all even.

References
H. Edwards, "Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory", Springer-Verlag, New York, (1977).
A. Wiles, "Modular elliptic curves and Fermat's Last Theorem", Ann. Math. 141 (1995), 443-551.
A. Wiles and R. Taylor, "Ring-theoretic properties of certain Hecke algebras", Ann. Math. 141 (1995), 553-573.

Journal of Progressive Research in Mathematics, 10(1), 1446-1447. Retrieved from http://scitecresearch.com/journals/index.php/jprm/article/view/931
For some sequences, we can predict the \(n^{th}\) term easily, and the explicit (general) formula can be checked using induction. In this section, we will explore this kind of sequence.

Example \(\PageIndex{1}\): Harmonic sequence
Find a formula for the \(n^{th}\) term of a sequence that has the following initial terms: \(\displaystyle \frac{1}{2}, \displaystyle\frac{1}{3}, \displaystyle\frac{1}{4}, \displaystyle\frac{1}{5}, \displaystyle\frac{1}{6}, \cdots\).
Answer: \(\displaystyle\frac{1}{n+1}\)

Caution: Two sequences may start with the same initial terms but diverge later on.

Example \(\PageIndex{2}\): One's
\(1 \times 1=1\)
\(11 \times 11=121\)
\(111 \times 111=12321\)
\(1111 \times 1111=1234321\)
Can you predict the pattern?

Example \(\PageIndex{3}\): Perfect Squares
Find a formula for the \(n^{th}\) term of the following sequence: \(1,4,9,16,25,\cdots\).
Answer: \(n^2\).
Here is another way to find the \(n^{th}\) term: note that the sequence of differences of \(1,4,9,16,25,\cdots\) is \(3,5,7,9,\cdots\), and the difference in this sequence is \(2\). Therefore the \(n^{th}\) term can be written as a quadratic formula \(pn^2+qn+c\), where \(p,q,c\) can be found using the following information:

n      \(t_n\)          \(d_n=t_n-t_{n-1}\)    \(d=d_n-d_{n-1}\)
n=1    \(p+q+c\)
n=2    \(4p+2q+c\)      \(3p+q\)
n=3    \(9p+3q+c\)      \(5p+q\)               \(2p\)
n=4    \(16p+4q+c\)     \(7p+q\)               \(2p\)

Hence, in our case, \(2p=2\), so \(p=1\). Now \(3p+q=3\) and \(p=1\), thus \(q=0\). Finally \(p+q+c=1\), which implies \(c=0\). Hence, \(t_n=n^2\).

Quadratic Sequences: A sequence is called a quadratic sequence if the differences of consecutive terms, \(d_n=t_n-t_{n-1}\), differ by the same amount \(d=d_n-d_{n-1}\) for all \(n \in \mathbb{N}\). In this case, the \(n^{th}\) term of the sequence is given by \(t_n= a+(n-1) d_1+(n-1)(n-2)\frac{d}{2}\), where \(a\) is the first term, \(d_1\) is the difference between the first and second terms, and \(d\) is the common second difference. This result can be shown by using induction. We will explore this in the next section. Below is another way to solve:

Example \(\PageIndex{4}\): Quadratic Sequences
Find a formula for the \(n^{th}\) term of the following sequence: \(2,6,12,20,\cdots\)

Sequence:            2   6   12   20
First differences:   4   6   8
Second differences:  2   2

Hence the \(n^{th}\) term has leading term \((2/2)n^2 = n^2\).

\(t_n\):         2   6   12   20
\(n^2\):         1   4    9   16
\(t_n - n^2\):   1   2    3    4

Notice that \(t_n - n^2\) is an arithmetic sequence with first term \(1\) and common difference \(1\). Thus \(t_n - n^2 = 1+1(n-1) = n\). Hence \(t_n=n^2+n\).

Triangular numbers
Thinking Out Loud: \(1, 3, 6, 10, 15, 21, 28, ...\) are triangular numbers: you can arrange them in a triangular array. Let \(T_n\) denote the \(n^{th}\) triangular number. What is the hundredth triangular number?

Example \(\PageIndex{5}\): Hexagonal Tiling (Centered hexagonal numbers)
Find the \(n^{th}\) term of the sequence \(1,7,19,37,\cdots\). Each figure consists of 6 triangular numbers and 1 center; therefore \(6 \frac{(n-1)n}{2} + 1 = 3(n-1)n + 1 = 3(n^2-n) + 1 = 3n^2-3n+1\).

Example \(\PageIndex{6}\): Consider the sequence of tilings using hexagons. The hexagon numbers are the sequence \(1, 6, 15, \cdots\). Predict the \(n^{th}\) term. Explain your prediction.

A. Term #:          1   2   3   4
B. # of hexagons:   1   6   15  28
C. B/A:             1   3   5   7
D. 2n-1:            1   3   5   7

So C = 2n-1. Since we know that (A)(C) = B, we can conclude that the term number (A) = n, multiplied by C = 2n-1, gives the corresponding number in the sequence (B).
Therefore: $$ t_n = n(2n-1) = 2n^2-n.$$

Tower of Hanoi
According to the legend of the Tower of Hanoi (formerly the "Tower of Brahma" in a temple in the Indian city of Benares), the temple priests are to transfer a tower consisting of 64 fragile disks of gold from one part of the temple to another, one disk at a time. The disks are arranged in order, no two of them the same size, with the largest on the bottom and the smallest on top. Because of their fragility, a larger disk may never be placed on a smaller one, and there is only one intermediate location where disks can be temporarily placed. It is said that before the priests complete their task the temple will crumble into dust and the world will vanish in a clap of thunder.

(In the listings below, disks are numbered from the largest, disk 1, to the smallest; a recursive solver is sketched at the end of this section.)

One disk:
Move 1: move disk 1 to post C

Two disks:
Move 1: move disk 2 to post B
Move 2: move disk 1 to post C
Move 3: move disk 2 to post C

Three disks:
Move 1: move disk 3 to post C
Move 2: move disk 2 to post B
Move 3: move disk 3 to post B
Move 4: move disk 1 to post C
Move 5: move disk 3 to post A
Move 6: move disk 2 to post C
Move 7: move disk 3 to post C

Number of Disks    Min. number of Moves
1                  1
2                  3
3                  7
4                  15
5                  31

Recursive Sequences
Definition: A recurrence relation for a sequence \(\{a_n\}\) is a formula that relates each term \(a_n\) to its predecessors \(a_0,a_1, \cdots, a_{n-1}\).

Thinking Out Loud: A tiling consists of covering a region using tile pieces from some given set so that the region is completely covered without overlaps. How many ways can you arrange 2 x 1 dominoes to cover a 2 x n checkerboard?

Fibonacci Sequences
Fibonacci sequence: \(1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...\)
Let \(F_n\) be the \(n^{th}\) Fibonacci number. That is, \(F_n = F_{n-1} + F_{n-2}\) for \(n > 2\), with \(F_1=1\) and \(F_2 = 1\).

Some facts about the Fibonacci sequence:
The sum of the first n even-numbered Fibonacci numbers is one less than the next Fibonacci number.
The only square Fibonacci numbers are 0, 1 and 144.
The sum of the first n odd-numbered Fibonacci numbers is the next Fibonacci number.

Example \(\PageIndex{9}\): Consider the Fibonacci sequence: \(1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...\) The squares of the Fibonacci sequence are \(1, 1, 4, 9, 25, 64, 169, ...\). Consider the sum of the squares of consecutive terms, \(F_{n-1}^2 + F_{n-2}^2\), for \(n > 2\). That is:
1+1=2
1+4=5
4+9=13
What can you say about the resulting number?
Answer: It is Fibonacci. In fact, \(F_{2n-3} = F_{n-1}^2 + F_{n-2}^2\).

Example \(\PageIndex{11}\): More Fibonacci
Let \(a_n\) be the Fibonacci sequence. That is, \(a_n=a_{n-1}+a_{n-2}\), \(n \in \mathbb{N}\), with \(a_1=1, a_0=0\), where \(\mathbb{N}\) is the set of all natural numbers. Consider the Fibonacci squares illustrated in the figure. Let \(h_n\) be the shortest (perpendicular) distance between the \(n^{th}\) parallel diagonal vectors. Comparing the areas of trapezoids, we get \(h_n=\displaystyle\frac{a_n^2-a_{n-1}^2} {\sqrt{2}(a_n-a_{n-1})} =\displaystyle\frac{1}{\sqrt{2}}\left(a_n+a_{n-1}\right).\) Hence, \(h_n\) is a Fibonacci-like sequence.

Hexagonal numbers by Incnis Mrsi (Own work) [CC0], via Wikimedia Commons
Tower of Hanoi by André Karwath aka Aka (Own work) [CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons
Fibonacci Rabbits by Ein_Hase_mit_blauem_Ei.svg: MichaelFrey & Sundance Raphael, derivative work: HB (Ein_Hase_mit_blauem_Ei.svg) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons
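A minimal recursive Tower of Hanoi solver (my addition; note it numbers disks from the smallest, disk 1, upward, the reverse of the listings above) reproduces both the move sequences and the \(2^n - 1\) counts in the table:

def hanoi(n, source="A", target="C", spare="B"):
    """Recursively print the moves; returns the number of moves, 2**n - 1."""
    if n == 0:
        return 0
    moves = hanoi(n - 1, source, spare, target)   # park the n-1 smaller disks
    print(f"move disk {n} from {source} to {target}")
    moves += 1
    moves += hanoi(n - 1, spare, target, source)  # bring them back on top
    return moves

print(hanoi(3))  # prints 7 moves, matching the table: 2**3 - 1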
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero). I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$. I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ... Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between the 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!) On Monday, I ask for an update and get told they're working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they'll review my case. @Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed. If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself. but if a title inherently states what the op is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. no, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol. I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all. How bizarre. I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph, and two hours later Train B leaves the same station, also for Chicago, traveling 60 mph, how long until Train B overtakes Train A? @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out. By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point. So yeah, at first glance I'd say the answer key is wrong.
The only way I could see it being correct is if they're including the change of time zones, which I'd find pretty annoying. But 240 miles seems waaay too short to cross two time zones. So my inclination is to say the answer key is nonsense. You can actually show this using only that the derivative of a function is zero if and only if it is constant, the fact that the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form $$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$ (Obvi... Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question mark regarding the linked answer. Where does the term $e^{(r_1-r_2)x}$ come from? It seems like it is taken out of the blue, but it yields the desired result.
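As a quick symbolic check (my addition, not part of the linked answer), SymPy confirms that $c_1 e^{r_1 x} + c_2 e^{r_2 x}$ satisfies equation (1):

import sympy as sp

x, r1, r2 = sp.symbols('x r_1 r_2')
c1, c2 = sp.symbols('c_1 c_2')
y = c1 * sp.exp(r1 * x) + c2 * sp.exp(r2 * x)

# y'' - (r1 + r2) y' + r1 r2 y should vanish identically:
expr = sp.diff(y, x, 2) - (r1 + r2) * sp.diff(y, x) + r1 * r2 * y
print(sp.simplify(expr))  # 0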
Boundedness and a priori estimates of solutions to elliptic systems with Dirichlet-Neumann boundary conditions

1. Mathematical Institute, Slovak Academy of Sciences, Štefánikova 49, 84173 Bratislava, Slovak Republic
2. Institute of Applied Mathematics and Statistics, Comenius University, Mlynská dolina, 84248 Bratislava

Let us consider the borderline in the $(p,q)$-plane between the region where all very weak solutions are bounded and the region where unbounded solutions exist. It turns out that this borderline coincides with the corresponding borderline for the system with the Neumann boundary conditions $\partial_\nu u=\partial_\nu v = 0$ on $\partial\Omega$ if $p\leq N/(N-2)$, while it coincides with the borderline for the system with the Dirichlet boundary conditions $u=v=0$ on $\partial\Omega$ if $p\geq(N+1)/(N-2)$. If $p\in (N/(N-2),(N+1)/(N-2))$ then the borderline for the Dirichlet-Neumann problem lies strictly between the borderlines for the systems with pure Neumann and pure Dirichlet boundary conditions. Our proofs are based on some new $L^p-L^q$ estimates in weighted $L^p$-spaces.

Mathematics Subject Classification: Primary: 35J55, 35J65; Secondary: 35B33, 35B45, 35B6.

Citation: Sándor Kelemen, Pavol Quittner. Boundedness and a priori estimates of solutions to elliptic systems with Dirichlet-Neumann boundary conditions. Communications on Pure & Applied Analysis, 2010, 9 (3): 731-740. doi: 10.3934/cpaa.2010.9.731
Skills to Develop

In this section, we strive to understand the ideas generated by the following important questions:

How does the limit definition of the derivative of a function \(f\) lead to an entirely new (but related) function \(f'\)?
What is the difference between writing \(f'(a)\) and \(f'(x)\)?
How is the graph of the derivative function \(f'(x)\) connected to the graph of \(f(x)\)?
What are some examples of functions \(f\) for which \(f'\) is not defined at one or more points?

Given a function \(y=f(x)\), we now know that if we are interested in the instantaneous rate of change of the function at \(x=a\), or equivalently the slope of the tangent line to \(y=f(x)\) at \(x=a\), we can compute the value \(f'(a)\). In all of our examples to date, we have arbitrarily identified a particular value of \(a\) as our point of interest: \(a=1,\; a=3\), etc. But it is not hard to imagine that we will often be interested in the derivative value for more than just one a-value, and possibly for many of them. In this section, we explore how we can move from computing simply \(f'(1)\) or \(f'(3)\) to working more generally with \(f'(a)\), and indeed \(f'(x)\). Said differently, we will work toward understanding how the so-called process of "taking the derivative" generates a new function that is derived from the original function \(y=f(x)\). The following preview activity starts us down this path.

Preview Activity \(\PageIndex{1}\)

Consider the function \(f(x)=4x-x^2\).
(a) Use the limit definition to compute the following derivative values: \(f'(0)\), \(f'(1)\), \(f'(2)\), and \(f'(3)\).
(b) Observe that the work to find \(f'(a)\) is the same, regardless of the value of \(a\). Based on your work in (a), what do you conjecture is the value of \(f'(4)\)? How about \(f'(5)\)? (Note: you should not use the limit definition of the derivative to find either value.)
(c) Conjecture a formula for \(f'(a)\) that depends only on the value \(a\). That is, in the same way that we have a formula for \(f(x)\) (recall \(f(x)=4x-x^2\)), see if you can use your work above to guess a formula for \(f'(a)\) in terms of \(a\).

The Derivative is Itself a Function

In your work in Preview Activity 1.4 with \(f(x)=4x-x^2\), you may have found several patterns. One comes from observing that \(f'(0)=4\), \(f'(1)=2\), \(f'(2)=0\), and \(f'(3)=-2\). That sequence of values leads us naturally to conjecture that \(f'(4)=-4\) and \(f'(5)=-6\). Even more than these individual numbers, if we consider the role of 0, 1, 2, and 3 in the process of computing the value of the derivative through the limit definition, we observe that the particular number has very little effect on our work. To see this more clearly, we compute \(f'(a)\), where \(a\) represents a number to be named later. Following the now standard process of using the limit definition of the derivative, \[\begin{align} f'(a) &=\lim_{h\to 0} \frac{f(a+h)-f(a)}{h} \\ &=\lim_{h\to 0} \frac{4(a+h)-(a+h)^2-(4a-a^2 )}{h} \\ &= \lim_{h\to 0} \frac{4a+4h-a^2-2ha-h^2-4a+a^2}{h} \\ &= \lim_{h\to 0} \frac{4h-2ha-h^2}{h} \\ &= \lim_{h\to 0} \frac{h(4-2a-h)}{h} \\ &= \lim_{h\to 0} (4-2a-h). \end{align} \] Here we observe that neither 4 nor \(2a\) depend on the value of \(h\), so as \[h \to 0,\; (4-2a-h) \to (4-2a).\] Thus, \(f'(a)=4-2a.\) This observation is consistent with the specific values we found above: e.g., \(f'(3)=4-2(3)=-2\).
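The limit definition also lends itself to numerical experimentation; a small sketch (my addition, not from the text) approximates \(f'(a)\) for \(f(x)=4x-x^2\) by a difference quotient with small \(h\) and recovers the values \(4, 2, 0, -2, -4\) found above:

def f(x):
    return 4 * x - x ** 2

def fprime_approx(x, h=1e-6):
    """Forward difference quotient from the limit definition."""
    return (f(x + h) - f(x)) / h

# Should be close to 4, 2, 0, -2, -4, since f'(x) = 4 - 2x:
print([round(fprime_approx(a), 4) for a in range(5)])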
And indeed, our work with \(a\) confirms that while the particular value of \(a\) at which we evaluate the derivative affects the value of the derivative, that value has almost no bearing on the process of computing the derivative. We note further that the letter being used is immaterial: whether we call it \(a\), \(x\), or anything else, the derivative at a given value is simply given by "4 minus 2 times the value." We choose to use \(x\) for consistency with the original function given by \(y=f(x)\), as well as for the purpose of graphing the derivative function, and thus we have found that for the function \(f(x)=4x-x^2\), it follows that \(f'(x)=4-2x\).

Because the value of the derivative function is so closely linked to the graphical behavior of the original function, it makes sense to look at both of these functions plotted on the same domain. In Figure 1.18, on the left we show a plot of \(f(x)=4x-x^2\) together with a selection of tangent lines at the points we've considered above. On the right, we show a plot of \(f'(x)=4-2x\) with emphasis on the heights of the derivative graph at the same selection of points. Notice the connection between colors in the left and right graph: the green tangent line on the original graph is tied to the green point on the right graph in the following way: the slope of the tangent line at a point on the lefthand graph is the same as the height at the corresponding point on the righthand graph.

Figure 1.18: The graphs of \(f(x)=4x-x^2\) (at left) and \(f'(x)=4-2x\) (at right). Slopes on the graph of \(f\) correspond to heights on the graph of \(f'\).

That is, at each respective value of \(x\), the slope of the tangent line to the original function at that x-value is the same as the height of the derivative function at that x-value. Do note, however, that the units on the vertical axes are different: in the left graph, the vertical units are simply the output units of \(f\). On the righthand graph of \(y=f'(x)\), the units on the vertical axis are units of \(f\) per unit of \(x\). Of course, this relationship between the graph of a function \(y=f(x)\) and its derivative is a dynamic one. An excellent way to explore how the graph of \(f(x)\) generates the graph of \(f'(x)\) is through a java applet. See, for instance, the applets at http://gvsu.edu/s/5C or http://gvsu.edu/s/5D, via the sites of Austin and Renault.

In Section 1.3 when we first defined the derivative, we wrote the definition in terms of a value \(a\) to find \(f'(a)\). As we have seen above, the letter \(a\) is merely a placeholder, and it often makes more sense to use \(x\) instead. For the record, here we restate the definition of the derivative.

Definition 1.4. Let \(f\) be a function and \(x\) a value in the function's domain. We define the derivative of \(f\) with respect to \(x\) at the value \(x\), denoted \(f'(x)\), by the formula \(f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}\), provided this limit exists.

We now may take two different perspectives on thinking about the derivative function: given a graph of \(y=f(x)\), how does this graph lead to the graph of the derivative function \(y=f'(x)\)? And given a formula for \(y=f(x)\), how does the limit definition of the derivative generate a formula for \(y=f'(x)\)? Both of these issues are explored in the following activities.

Exercise \(\PageIndex{1}\)
For each given graph of \(y=f(x)\), sketch an approximate graph of its derivative function, \(y=f'(x)\), on the axes immediately below.
The scale of the grid for the graph of \(f\) is \(1\times 1\); assume the horizontal scale of the grid for the graph of \(f'\) is identical to that for \(f\). If necessary, adjust and label the vertical scale on the axes for \(f'\). When you are finished with all 8 graphs, write several sentences that describe your overall process for sketching the graph of the derivative function, given the graph of the original function. What are the values of the derivative function that you tend to identify first? What do you do thereafter? How do key traits of the graph of the derivative function exemplify properties of the graph of the original function?

For a dynamic investigation that allows you to experiment with graphing \(f'\) when given the graph of \(f\), see http://gvsu.edu/s/8y (Marc Renault, Calculus Applets Using Geogebra).

Now, recall the opening example of this section: we began with the function \(y=f(x)=4x-x^2\) and used the limit definition of the derivative to show that \(f'(a)=4-2a\), or equivalently that \(f'(x)=4-2x\). We subsequently graphed the functions \(f\) and \(f'\) as shown in Figure 1.18. Following Activity 1.10, we now understand that we could have constructed a fairly accurate graph of \(f'(x)\) without knowing a formula for either \(f\) or \(f'\). At the same time, it is ideal to know a formula for the derivative function whenever it is possible to find one. In the next activity, we further explore the more algebraic approach to finding \(f'(x)\): given a formula for \(y=f(x)\), the limit definition of the derivative will be used to develop a formula for \(f'(x)\).

Activity \(\PageIndex{2}\)
For each of the listed functions, determine a formula for the derivative function. For the first two, determine the formula for the derivative by thinking about the nature of the given function and its slope at various points; do not use the limit definition. For the latter four, use the limit definition. Pay careful attention to the function names and independent variables. It is important to be comfortable with using letters other than \(f\) and \(x\). For example, given a function \(p(z)\), we call its derivative \(p'(z)\).

\(f(x)=1\)
\(g(t)=t\)
\(p(z)=z^2\)
\(q(s)=s^3\)
\(F(t)=\frac{1}{t}\)
\(G(y)=\sqrt{y}\)

Summary

In this section, we encountered the following important ideas:

The limit definition of the derivative, \(f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}\), produces a value for each \(x\) at which the derivative is defined, and this leads to a new function whose formula is \(y=f'(x)\). Hence we talk both about a given function \(f\) and its derivative \(f'\). It is especially important to note that taking the derivative is a process that starts with a given function (\(f\)) and produces a new, related function (\(f'\)).

There is essentially no difference between writing \(f'(a)\) (as we did regularly in Section 1.3) and writing \(f'(x)\). In either case, the variable is just a placeholder that is used to define the rule for the derivative function.

Given the graph of a function \(y=f(x)\), we can sketch an approximate graph of its derivative \(y=f'(x)\) by observing that heights on the derivative's graph correspond to slopes on the original function's graph.

In Activity 1.10, we encountered some functions that had sharp corners on their graphs, such as the shifted absolute value function. At such points, the derivative fails to exist, and we say that \(f\) is not differentiable there.
For now, it suffices to understand this as a consequence of the jump that must occur in the derivative function at a sharp corner on the graph of the original function. Contributors Matt Boelkins (Grand Valley State University), David Austin (Grand Valley State University), Steve Schlicker (Grand Valley State University)
Your derivation here is flawed because you are differentiating with respect to two processes and you do not take into account that the variable $W_t$ is stochastic, and hence $S_t$ is as well. So, to derive $S_t$ from $dS_t$, you have to apply Ito's Lemma; see this question for details. This is the "classic" way you see it. If you want to do it the other way round, setting $S_t = f(W_t,t) = S_0 \exp \left[ (\mu - \frac{\sigma^2}{2}) t + \sigma W_t \right]$ and applying Ito's Lemma gives you: $$ df(W_t,t) = \frac{\partial f}{ \partial t } dt + \frac{\partial f}{ \partial W_t } dW_t + \frac{1}{2}\frac{\partial^2 f}{ \partial W_t^2 } d \langle W\rangle_t$$$$ df(W_t,t) = S_t \left(\mu - \frac{\sigma^2}{2} \right) dt + S_t \sigma dW_t + \frac{1}{2} S_t \sigma^2 dt$$$$ df(W_t,t) = S_t \left[ \left(\mu - \frac{\sigma^2}{2} + \frac{\sigma^2}{2} \right) dt + \sigma dW_t \right]$$$$ df(W_t,t) = S_t ( \mu dt + \sigma dW_t ) = dS_t$$ So, essentially Ito's Lemma adds a term for the quadratic variation of a stochastic process, $d \langle W\rangle_t$, which is 0 for deterministic processes. This is where the $\frac{\sigma^2}{2}$ appears (or disappears, depending on how you see it).
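A short simulation (my sketch, with arbitrary parameters) makes this concrete: the exact solution produced by Ito's Lemma and a naive Euler discretisation of $dS_t$ agree as the step size shrinks:

import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, T, n = 100.0, 0.05, 0.2, 1.0, 1000  # illustrative values
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

# Exact solution from Ito's Lemma:
S_exact = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# Euler-Maruyama discretisation of dS = S (mu dt + sigma dW):
S_euler = np.empty(n + 1)
S_euler[0] = S0
for i in range(n):
    S_euler[i + 1] = S_euler[i] * (1 + mu * dt + sigma * dW[i])

print(abs(S_exact[-1] - S_euler[-1]) / S_exact[-1])  # small; shrinks as n grows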