On global behavior of weak solutions to the Navier-Stokes equations of compressible fluid for \(\gamma=5/3\). Boundary Value Problems, volume 2015, Article number: 176 (2015).

Abstract. In this article, we consider the global behavior of weak solutions of the Navier-Stokes equations of a compressible fluid in a bounded domain driven by bounded forces for the adiabatic constant \(\gamma=5/3\). Under the condition of a small mass depending on the given forces, we prove the existence of bounded absorbing sets of weak solutions, and thus further obtain globally bounded trajectories and global attractors for the weak solutions.

Introduction. In this article, we investigate the global behavior of finite energy weak solutions to the Navier-Stokes equations of a viscous compressible isentropic fluid in \(\Omega\times I\), with a non-slip boundary condition. We always assume that \(\Omega\subset\mathbb{R}^{3}\) is a bounded domain with Lipschitz boundary, and that I is an open time interval. The unknown functions \(\rho=\rho(t,x)\) and \(\mathbf{u}=\mathbf{u}(t,x)=(u^{1}(t,x),u^{2}(t,x),u^{3}(t,x))\) represent the density and velocity of the fluid, respectively. The external force \({\mathbf{f}}=(f^{1}(t,x),f^{2}(t,x),f^{3}(t,x))\) is given. The pressure takes the form \(P=a\rho^{\gamma}\), where a is a positive constant and γ is the adiabatic constant. \(\mu>0\) and λ are the viscosity constants, satisfying \(3\lambda+2\mu\geq0\).

Definition 1.1 ρ and u enjoy the regularity
$$ \rho\in L_{\mathrm{loc}}^{\infty}\bigl(I;L^{\gamma}(\Omega)\bigr)\cap L_{\mathrm{loc}}^{s(\gamma)}\bigl(I;L^{s(\gamma)}(\Omega)\bigr),\qquad u^{i}\in L^{2}_{\mathrm{loc}}\bigl(I;W^{1,2}_{0}(\Omega)\bigr) $$ (4)
for \(i=1,2,3\) and \(s(\gamma)=(5\gamma-3)/3\).
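The system cited as (1)-(3) is the isentropic compressible Navier-Stokes system with a no-slip boundary condition; a presumed reconstruction, consistent with the quantities defined above, reads:

```latex
% Presumed form of (1)-(3): isentropic compressible Navier-Stokes, no-slip boundary
\begin{align}
  &\partial_t \rho + \operatorname{div}(\rho\mathbf{u}) = 0, \tag{1}\\
  &\partial_t(\rho\mathbf{u}) + \operatorname{div}(\rho\mathbf{u}\otimes\mathbf{u})
    + \nabla P = \mu\Delta\mathbf{u} + (\lambda+\mu)\nabla\operatorname{div}\mathbf{u}
    + \rho\mathbf{f}, \tag{2}\\
  &\mathbf{u}|_{\partial\Omega} = \mathbf{0}. \tag{3}
\end{align}
```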
Let the energy E be defined as follows:
$$ E[\rho,{\mathbf{u}}](t)=\int_{\Omega}\biggl[\frac{1}{2}\rho(t,x)\bigl|{\mathbf{u}}(t,x)\bigr|^{2}+\frac{a}{\gamma-1}\rho^{\gamma}(t,x)\biggr]\,\mathrm{d}x, $$ (5)
then \(E\in L^{1}_{\mathrm{loc}}(I)\) satisfies the following energy inequality in \(\mathcal{D}'(I)\):
$$ \frac{\mathrm{d}}{\mathrm{d}t}E[\rho,{\mathbf{u}}](t)+\int_{\Omega}\bigl[\mu\bigl|\nabla\mathbf{u}(t)\bigr|^{2}+(\lambda+\mu)\bigl|\operatorname{div}{\mathbf{u}}(t)\bigr|^{2}\bigr]\,\mathrm{d}x \leq \int_{\Omega}\rho(t){\mathbf{f}}(t)\cdot{\mathbf{u}}(t)\,\mathrm{d}x. $$ (6)
Equation (1) is satisfied in the sense of renormalized solutions, i.e.,
$$ b(\rho)_{t}+\operatorname{div}\bigl(b(\rho)\mathbf{u}\bigr)+\bigl(b'(\rho)\rho-b(\rho)\bigr)\operatorname{div}\mathbf{u}=0 $$ (7)
holds in \(\mathcal{D}'(I\times\Omega)\) for any b satisfying
$$ b\in C^{0}\bigl([0,\infty)\bigr)\cap C^{1}\bigl((0,\infty)\bigr),\qquad \bigl|b'(t)\bigr|\leq Ct^{-\lambda_{0}},\quad t\in(0,1),\ \lambda_{0}<1, $$ (8)
and
$$ \bigl|b'(t)\bigr|\leq Ct^{\lambda_{1}},\quad t\geq1,\text{ where }C>0,\ -1<\lambda_{1}\leq\frac{s(\gamma)}{2}-1. $$ (9)
The existence of globally defined weak solutions for \(\Omega\subset\mathbb{R}^{3}\) was proved by Lions [3] under the hypothesis \(\gamma>9/5\). Then, by using the curl-div lemma to subtly derive a certain compactness, and applying Lions' idea together with a technique from [4], Feireisl et al. [1] extended Lions' existence result to the case \(\gamma>3/2\). For \(1\leq\gamma\leq3/2\), a global weak solution still exists when the initial data have a certain symmetry (e.g., spherical or axisymmetric symmetry); see [4, 5]. The theory of weak solutions has also been applied to other models of fluid mechanics; see [6–10] for examples.
In [2, 11–13], Feireisl and Petzeltová investigated the global behavior of weak solutions of the problem (1)-(3), and showed the existence of bounded absorbing sets, globally bounded trajectories and global attractors for weak solutions of compressible flows for \(\gamma>5/3\). Jiang et al. [14–16] and Wang [17] further generalized their results to the Navier-Stokes-Poisson equations and to nematic liquid crystals, respectively. However, the case \(3/2<\gamma\leq5/3\) remains open. In this article, following the proof framework of [2, 11], we investigate the global behavior of weak solutions of the problem (1)-(3) for \(\gamma=5/3\) under the assumption of a small mass depending on the given forces. Finally, we mention that, by the definition of renormalized solutions, the total mass m is conserved, i.e. Theorem 1.1 Let \(\gamma=5/3\), \(a_{0}>-\infty\), \(I=(a_{0},\infty)\subset\mathbb{R}\) be an open interval, and let the bounded measurable function \({\mathbf{f}}=(f^{1}(t,x),f^{2}(t,x),f^{3}(t,x))\) satisfy Then there exist constants \(m_{0}:=m_{0}(K)\in(0,1)\) and \(E_{\infty}:=E(K)\) with the following property: then there exists a time \(T=T(E_{0},a_{0})\) such that Here we explain why our arguments work only for \(\gamma=5/3\). In their article [2], Feireisl and Petzeltová deduced the following key estimate: where \(c(K,m)\) and \(\tilde{c}(m)\) are two positive constants. Under the condition \(\gamma>5/3\), \((4\gamma-3)/(3(\gamma+\theta-1))<\gamma\), and thus one can apply Young's inequality to the estimate above to deduce for some constant \(L>0\). The local-in-time boundedness (15) is essential for deducing the existence of a bounded absorbing set. However, if \(\gamma\leq5/3\), then \((4\gamma-3)/(3(\gamma+\theta-1))\geq\gamma\), and the above route to (15) obviously fails.
However, when \(\gamma=5/3\), (14) implies Based on Theorem 1.1, we can further obtain globally bounded trajectories of weak solutions to the problem (1)-(3) as in [12], since the family of trajectories generated by the finite energy weak solutions of (1)-(3) defined on I possesses a bounded absorbing set in the energy 'norm'. For this purpose, we define Then we have the second result, concerning the large-time behavior of the short trajectories defined in (17). Theorem 1.2 Let \(\gamma=5/3\), \(J_{1}=(0,1)\), and Assume that there exists a sequence \(t_{n}\rightarrow\infty\) satisfying then we can extract a subsequence (not relabeled) such that and for any \(p\in[1,\frac{5}{4})\), where \((\bar{\rho},\bar{\mathbf{u}})\) is a finite energy weak solution of the problem (1)-(3) defined on the whole real line \(I=\mathbb{R}\) such that \(E\in L^{\infty}(\mathbb{R})\), \(\int_{\Omega}\bar{\rho}\,\mathrm{d}x=m\), and \(\mathbf{f}\in\mathcal{F}^{+}\). The theorem above shows that the energy E of finite energy weak solutions defined on \(I=\mathbb{R}\) is uniformly bounded on \(\mathbb{R}\), and thus we can further construct a set of short trajectories to which any finite energy weak solution is asymptotically attracted, by Theorem 1.2. To this end, we define Thus, we have the third conclusion, concerning a global attractor for the short trajectories of the set \(\mathcal{A}^{s}(\mathcal{F})\), as in [12]. Theorem 1.3 Assume \(\gamma=5/3\) and \(\mathcal{F}\) satisfies (18). Then the set \(\mathcal{A}^{s}[\mathcal{F}]\) is compact in \(L^{5/3}(J_{1}\times\Omega)\times(L^{p}(J_{1}\times\Omega))^{3}\). Moreover, for any \(p\in[1,5)\), as \(t\rightarrow\infty\). The theorem above shows that the set \(\mathcal{A}^{s}(\mathcal{F})\) is a global attractor for the space of short trajectories; moreover, the set \(\mathcal{A}^{s}(\mathcal{F})\) is nonempty and compact if \(\mathcal{F}\) is nonempty. Similarly to [18], we can further build a set of global trajectories.
To this end, we define and thus we obtain the fourth result, on attractors, as in [12]. Theorem 1.4 We redefine the energy E by Assume that \(\gamma=5/3\) and \(\mathcal{F}\) satisfies (18); then \(\mathcal{A}[\mathcal{F}]\) is compact in \(L^{\alpha}(\Omega)\times(L^{\frac{5}{4}}_{\mathrm{weak}}(\Omega))^{3}\), i.e., for any \(1\leq\alpha<5/3\) and any \(\phi\in(L^{5}(\Omega))^{3}\), Remark 1.1 It should be noted that the energy \(E(t)\) defined by (27) is equal to (5) a.e. in I (see [18], Lemma 7.18) and (27) is lower semicontinuous (see [18], Proposition 7.21). Then the two conditions '\(\operatorname{ess}\limsup_{t\rightarrow a}E(t)\leq E_{0}\)' and '\(\limsup_{t\rightarrow a}E(t)\leq E_{0}\)' are equivalent, and thus the conclusions in Theorems 1.1-1.2 with \(E(t)\) defined by (27) still hold; in particular, we have \(E(t):=E[\rho,\mathbf{u}](t)\leq E_{\infty}\) for \(t>T\) in Theorem 1.1. In the next section, we use the proof framework of [2] to prove Theorem 1.1 under the small-mass condition. Once Theorem 1.1 is established, the conclusions in Theorems 1.2-1.4 follow by the standard compactness method as in [1, 12]; hence we omit the proof. Proof of Theorem 1.1 Proposition 2.1 Under the hypotheses of Theorem 1.1, let \(m\in(0,1)\) and \((\rho,\mathbf{u})\) be a renormalized solution of (1)-(3); then the energy E is of locally bounded variation on I (being redefined on a set of measure zero if necessary), and Moreover, there exists a constant \(c(K)\), depending only on K and independent of m, such that Proposition 2.2 Under the assumptions of Theorem 1.1, there exists a constant \(m_{0}\in(0,1)\) such that for any \(m\in(0,m_{0})\) there exists a constant \(L:=L(K)\) enjoying the following property: If then For completeness of this article, we provide the proof of Theorem 1.1 in detail, based on Propositions 2.1-2.2. It is easy to see that there exists \(T=T(E_{0},a_{0})>a_{0}\) satisfying \(E(T-)>E((T-1)+)-1\).
Indeed, if this fails, then for sufficiently large t the energy would be negative, which contradicts the fact that the energy is non-negative. Therefore \(E(t_{0})\leq L\) for some \(t_{0}<T\), where L is defined as in Proposition 2.2. Next we claim that which implies \(E((t_{0}+n+1)-)\leq L\), or Next we turn to proving the two propositions above. We mention that all the constants appearing in the estimates of this section are independent of m. Proof of Proposition 2.1 Let \(E_{1}(t)\) satisfy then \(E_{2}:=(E-E_{1})\in L^{1}_{\mathrm{loc}}(I)\). In view of (6), we get Hence E is the sum of an absolutely continuous function and a nonincreasing function; thus E is continuous except on a countable set of points at which (29) holds. In addition, using condition (11), we can control the right-hand side of (6) as follows: Proof of Proposition 2.2 Before providing the proof of Proposition 2.2, we establish the following four auxiliary lemmas. Lemma 2.1 holds for a constant \(c_{1}=c_{1}(K)\). Proof On the other hand, we can use the Hölder inequality and the condition \(m\in(0,1)\) to estimate Consequently, we immediately get the desired result by using the embedding theorem again. □ Lemma 2.2 we have for some constant \(c_{2}=c_{2}(K)\). Proof We integrate (30), with the choice \(t_{2}=T+1\), with respect to \(t_{1}\) to obtain In addition, Now, exploiting the Hölder inequality and Lemma 2.1, we can infer that We can use the interpolation inequality to get and thus Hence we further have Consequently, there exists a sufficiently small constant \(m_{0}\in(0,1)\), depending on K, such that (39) holds for any \(m\in(0,m_{0}]\). □ Lemma 2.3 Then a.e. in \(I\times\mathbb{R}^{3}\).
Moreover, if then Proof Lemma 2.4 Let \(p,r\in(1,\infty)\) be given numbers; then there exists a bounded linear operator \(\mathcal{B}\) such that \({\mathbf{v}}:={\mathcal{B}}\{f\}\) satisfies In addition, if \(f\in L^{r}(\Omega)\) can be written as then Proof We are now in a position to prove Proposition 2.2. Let \(0\leq\psi\leq1\), \(\psi\in{\mathcal{D}}(T,T+1)\), and let \(S_{\varepsilon}\) be the smoothing operators given by Lemma 2.3. We consider test functions where and In addition, we can use (46) to see that Noting that there exists a sequence \(\psi_{\varepsilon}\) approximating the characteristic function of the interval \([T,T+1]\), letting \(\varepsilon\rightarrow0\) in (48)-(56) we obtain Interpolating between the spaces \(L^{1}\) and \(L^{26/15}\), we have Then, exploiting Lemma 2.1, one has In addition, thanks to (5), we have This completes the proof of Proposition 2.2. References 1. Feireisl, E, Novotný, A, Petzeltová, H: On the existence of globally defined weak solutions to the Navier-Stokes equations. J. Math. Fluid Mech. 3(4), 358-392 (2001) 2. Feireisl, E, Petzeltová, H: Bounded absorbing sets for the Navier-Stokes equations of compressible fluid. Commun. Partial Differ. Equ. 26(7-8), 1133-1144 (2001) 3. Lions, PL: Mathematical Topics in Fluid Mechanics: Compressible Models. Oxford University Press, Oxford (1998) 4. Jiang, S, Zhang, P: On spherically symmetric solutions of the compressible isentropic Navier-Stokes equations. Commun. Math. Phys. 215, 559-581 (2001) 5. Jiang, S, Zhang, P: Axisymmetric solutions of the 3-D Navier-Stokes equations for compressible isentropic fluids. J. Math. Pures Appl. 82, 949-973 (2003) 6. Jiang, F, Tan, Z: Global weak solution to the flow of liquid crystals system. Math. Methods Appl. Sci. 32(17), 2243-2266 (2009) 7. Jiang, F, Jiang, S, Wang, DH: Global weak solutions to the equations of compressible flow of nematic liquid crystals in two dimensions. Arch. Ration. Mech. Anal.
214, 403-451 (2014) 8. Jiang, F, Jiang, S, Wang, DH: On multi-dimensional compressible flows of nematic liquid crystals with large initial energy in a bounded domain. J. Funct. Anal. 265, 3369-3397 (2013) 9. Jiang, F: A remark on weak solutions to the barotropic compressible quantum Navier-Stokes equations. Nonlinear Anal., Real World Appl. 12, 1733-1735 (2011) 10. Jiang, F, Jiang, S, Yin, JP: Global weak solutions to the two-dimensional Navier-Stokes equations of compressible heat-conducting flows with symmetric data and forces. Discrete Contin. Dyn. Syst., Ser. A 34(2), 567-587 (2014) 11. Feireisl, E, Petzeltová, H: Asymptotic compactness of global trajectories generated by the Navier-Stokes equations of a compressible fluid. J. Differ. Equ. 173, 390-409 (2001) 12. Feireisl, E: Propagation of oscillations, complete trajectories and attractors for compressible flows. Nonlinear Differ. Equ. Appl. 10, 83-98 (2003) 13. Feireisl, E: On compactness of solutions to the compressible isentropic Navier-Stokes equations when the density is not square integrable. Comment. Math. Univ. Carol. 42(1), 83-98 (2001) 14. Jiang, F, Tan, Z, Yan, Q: Asymptotic compactness of global trajectories generated by the Navier-Stokes-Poisson equations of a compressible fluid. NoDEA Nonlinear Differ. Equ. Appl. 16(3), 355-380 (2009) 15. Jiang, F, Tan, Z: Complete bounded trajectories and attractors for compressible barotropic self-gravitating fluid. J. Math. Anal. Appl. 351, 408-427 (2009) 16. Guo, RC, Jiang, F, Yin, JP: A note on complete bounded trajectories and attractors for compressible self-gravitating fluids. Nonlinear Anal., Theory Methods Appl. 75(4), 1933-1944 (2012) 17. Wang, W: On global behavior of weak solutions of compressible flows of nematic liquid crystals. Acta Math. Sci. 35(3), 650-672 (2015) 18. Novotný, A, Straškraba, I: Introduction to the Mathematical Theory of Compressible Flow. Oxford University Press, Oxford (2004) 19.
Bogovskii, ME: Solution of some vector analysis problems connected with operators div and grad. In: Theory of Cubature Formulas and the Application of Functional Analysis to Problems of Mathematical Physics. Trudy Sem. S. L. Soboleva, vol. 1, pp. 5-40 (1980) (in Russian) 20. Feireisl, E, Petzeltová, H: On integrability up to the boundary of the weak solutions of the Navier-Stokes equations of compressible flow. Commun. Partial Differ. Equ. 25(3-4), 755-767 (1999) Acknowledgements The authors appreciate the anonymous referees for useful comments and suggestions on our manuscript, which improved the presentation of this article. The research of Xiaoying Wang was supported by NSFC (U1430103), Beijing Higher Education Young Elite Teacher Project (YETP0724) and the Fundamental Research Funds for the Central Universities (13MS35), and Weiwei Wang was supported by NSFC (11501116). Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions The authors declare that the study was realized in collaboration with the same responsibility. All authors read and approved the final manuscript.
By stars and bars, there are $\binom{2007}{2}$ ways to select the 2005 balls without the even and odd conditions. Write each choice as $(2R+r,\,2Y+y,\,G)$ where $r,y\in\{0,1\}$. Most of the possible combinations come in matched quadruples with the same $R$ and $Y$, and in each such quadruple 3 of the 4 possibilities meet the condition. The only combinations that don't fit into a quadruple are those where $2R+2Y=2004$, in which case the $r=y=1$ combination doesn't exist. So these remaining combinations come in triples, where 2 out of the 3 combinations meet the condition. There are $1003$ such triples. So the total number of combinations that meet the even-odd condition should be $$ \frac34\left( \binom{2007}{2} - 3\cdot 1003 \right) + 2\cdot 1003 $$
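The count can be checked by brute force. A short script (assuming the condition is "an even number of red balls or an odd number of yellow balls", which is the reading consistent with the 3-of-4 and 2-of-3 counts above) compares the closed form with direct enumeration:

```python
from math import comb

# Closed form from the argument above
formula = 3 * (comb(2007, 2) - 3 * 1003) // 4 + 2 * 1003

# Direct enumeration: (red, yellow, green) >= 0 summing to 2005,
# counting selections with an even number of red or an odd number of yellow
brute = sum(
    1
    for red in range(2006)
    for yellow in range(2006 - red)
    if red % 2 == 0 or yellow % 2 == 1
)

print(formula, brute)  # both give 1509515
```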
My reading of Joy's paper, just as it is (without having carefully read the arXiv paper I cited, nor all of Joy's responses to critics that I also mentioned), is, so far: the left and right hand sides of eq(1) and eq(2), without the central interpolations, state that $A(\mathbf{a},\lambda)=\lambda$ and $B(\mathbf{b},\lambda)=-\lambda$, where $\lambda$ takes the values $\pm 1$. $A(\mathbf{a},\lambda)$ and $B(\mathbf{b},\lambda)$ are independent of $\mathbf{a}$ and $\mathbf{b}$, respectively, hence the expected value of the product is $-1$. The central interpolations introduce nine algebraic objects, $\beta_i$ and $\beta_{i'}(\lambda)$ with $\lambda=\pm 1$, each of which is a basis of, and satisfies the algebraic relations of, a quaternion algebra. For $\lambda=+1$, the $\beta_{i'}(+1)$ satisfy the same algebra as the $\beta_i$; for $\lambda=-1$, the $-\beta_{i'}(-1)$ satisfy the same quaternion algebra, with the sign change to be noted. To fix the algebraic structure further, which is absolutely necessary so we know how to handle products like $\beta_i\beta_{i'}(+1)$, Joy states that $\beta_{i'}(\lambda)=\lambda\beta_i$, so we are in fact dealing with a purely quaternion algebra, of real dimension 4. The whole of the prelude to eq(5-7) could be stated using only the $\beta_i$; for me the $\beta_{i'}(\lambda)$ just obscure things. I would like to see a mathematical justification for introducing the $\beta_{i'}(\lambda)$ instead of just using $\lambda\beta_i$. The notation of eq(5-7) is problematic because it seems to play fast and loose with the non-commutative structure of the quaternions. One cannot in general write $\frac{p}{q}$ for two quaternions $p$ and $q$, because in general $pq^{-1}$ is different from $q^{-1}p$. Since eq(5-7) obtains a different result from the one I get in my first paragraph, I'd want to see the whole thing rewritten using inverses so that the order of the multiplications is kept under control.
Unless there is a potent reason for using the $\beta_{i'}(\lambda)$ notation, I'd like to see everything written out using only the $\beta_i$. If the answer is still $-\mathbf{a}\cdot\mathbf{b}$, I'd want to check that it does not make any unwarranted reversal of the quaternions $a_i\beta_i$ and $b_i\beta_i$, even one of which would be exactly enough to get the result $-\mathbf{a}\cdot\mathbf{b}$ instead of $-1$. I currently cannot see any way to justify the jump from the left hand expression of eq(6) to the right hand expression. Perhaps someone can show me how to get from one to the other. If my discussion above is OK, this leaves questions about Joy's earlier papers. My impression is that Joy tried to make the argument of his earlier papers as succinct as possible. He may have made a mistake in doing so, in which case, if he claims the earlier papers do not make any mistake, then they have to be considered on their own merits. On the other hand, before I would consider checking that, I would want to see Joy withdraw this paper or replace it on the arXiv with something that at least attempts to address my discussion here. Finally, I look forward to comments.
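As a side note on the non-commutativity point: the claim that $p/q$ is ambiguous for quaternions is easy to check numerically. A minimal sketch with a hand-rolled Hamilton product (the two quaternions chosen are arbitrary examples):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qinv(q):
    # Inverse: conjugate divided by squared norm
    w, x, y, z = q
    return np.array([w, -x, -y, -z]) / np.dot(q, q)

p = np.array([1.0, 2.0, 0.0, 0.0])  # 1 + 2i (arbitrary example)
q = np.array([0.0, 0.0, 1.0, 3.0])  # j + 3k (arbitrary example)

right = qmul(p, qinv(q))  # p q^{-1}
left  = qmul(qinv(q), p)  # q^{-1} p
print(np.allclose(right, left))  # False: "p/q" is ambiguous
```

So any derivation that writes a quaternion quotient has to say which of the two products it means.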
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more. In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$. These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$. Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. 
So von Neumann's inequality says that $K_1=1$, and Ando's theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n<\infty$. See Paulsen's book (2002) for more. On page 69 he writes: The fact that von Neumann's inequality holds for two commuting contractions but not three or more is still the source of many surprising results and intriguing questions. Many deep results about analytic functions come from this dichotomy. For example, Agler [used] Ando's theorem to deduce an analogue of the classical Nevanlinna–Pick interpolation formula for analytic functions on the bidisk. Because of the failure of a von Neumann inequality for three or more commuting contractions, the analogous formula for the tridisk is known to be false, and the problem of finding the correct analogue of the Nevanlinna–Pick formula for polydisks in three or more variables remains open.
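The one-variable inequality is easy to probe numerically. A sketch (a random matrix scaled to a contraction, an arbitrary polynomial, and the sup norm approximated on a grid of the unit circle — all choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random contraction: random matrix scaled below operator norm 1
A = rng.standard_normal((5, 5))
T = 0.9 * A / np.linalg.norm(A, 2)

# An arbitrary polynomial p(z) = 1 + 2z + 3z^2 applied to T
coeffs = [1.0, 2.0, 3.0]
pT = sum(c * np.linalg.matrix_power(T, k) for k, c in enumerate(coeffs))

# sup |p(z)| over |z| = 1, approximated on a fine grid of the circle
z = np.exp(2j * np.pi * np.linspace(0.0, 1.0, 4001))
sup_p = np.max(np.abs(sum(c * z**k for k, c in enumerate(coeffs))))

print(np.linalg.norm(pT, 2) <= sup_p)  # True, as the inequality predicts
```

Of course a numerical check like this can only illustrate the theorem, not the subtle failure for $n\geq3$, which requires carefully constructed counterexamples such as Parrott's.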
\(f(x)=\left(a^{2}-3a+2\right)\left(\cos^{2}\frac{x}{4}-\sin^{2}\frac{x}{4}\right)+(a-1)x+\sin 1\)

The set of all values of \(a\) for which the function above does not possess a critical point is __________.

Note by Akhilesh Prasad, 3 years, 8 months ago

Sort by:

What's the answer?? I'm getting \(a\in(0,4)\setminus\{1\}\)........... I might have missed cases because I'm an expert in doing that.. ;-)

The answer given is \((1,\infty)\).

How did you solve it?? I found f'(x) and ensured that it does not vanish!! And ultimately got the wrong answer!!

@Rishabh Jain – Hey, were you able to open the file I uploaded? It was on Google Drive.

@Rishabh Jain – Finally, found it. My solution

@Akhilesh Prasad – And shouldn't cases be made for a-2>0 and a-2<0 where you multiplied the inequality by (a-2)?
@Rishabh Jain – Considered the cases you told me to; still I am not getting the desired answer. Corrected solution part 1 Corrected solution part 2

@Akhilesh Prasad – So we both are getting the same answer, which is not the correct one!! ... ??

@Rishabh Jain – On a side note, I wanted to ask: are you going to take the JEE this year?

@Akhilesh Prasad – Absolutely yes!!

@Akhilesh Prasad – Should the cases at the end be like: (1) right max value < 0, (2) left min value > 0?

@Rishabh Jain – It's too time-consuming writing out the whole answer, so I wanted to attach a scan of the handwritten answer, but I don't know how to do that; could you tell me how?

@Rishabh Cool, I have got an issue with this one too, so please see this one as well.

@Rishabh Cool. Would you spare some of your time to please post a solution to this?
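For what it's worth, the condition can be checked numerically. Since \(\cos^2\frac{x}{4}-\sin^2\frac{x}{4}=\cos\frac{x}{2}\), we get \(f'(x)=-\frac{a^2-3a+2}{2}\sin\frac{x}{2}+(a-1)\), which is periodic, so sampling one period suffices (a rough sketch; the sample values of \(a\) are arbitrary):

```python
import numpy as np

def has_critical_point(a, n=100001):
    # f'(x) = -((a^2 - 3a + 2)/2) * sin(x/2) + (a - 1), periodic with period 4*pi
    x = np.linspace(0.0, 4.0 * np.pi, n)
    fp = -((a**2 - 3*a + 2) / 2) * np.sin(x / 2) + (a - 1)
    # f' is continuous, so it vanishes somewhere iff 0 lies between min and max
    return bool(fp.min() <= 0.0 <= fp.max())

for a in [-1.0, 0.5, 2.0, 3.9, 5.0]:
    print(a, has_critical_point(a))
```

In this check, values of a inside (0,4) report no critical point while values outside do, which supports the \((0,4)\setminus\{1\}\) computation in the thread (a = 1 is excluded separately because there f' vanishes identically).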
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $\sqrt s = 13\ \rm{TeV}$, as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and}\ 8\ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $\sqrt s = 13\ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019

2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC Long Shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p.
In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018

2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018

2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided observations of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018

2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011-2016 allows testing the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest LHCb measurements of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...]
LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018

2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018

2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7.
Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018

2018-12-14 16:02 The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}$ 1 MeV $\mathrm{n}_{eq}\,\mathrm{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
If I understood correctly, you have a signal $x[n]$ and an unknown discrete-time LTI filter, and you can observe the output of the filter $y[n]$. Now you are looking for the impulse response of the filter $h[n]$. The output follows the convolution rule$$y[n]=x[n]*h[n]=\sum_{m=-\infty}^{\infty}x[m]h[n-m]$$The process of finding $h[n]$, given $x[n]$ and $y[n]$, is called deconvolution. There are several approaches to do that. For example, if the Fourier transforms of the signals exist, then by the convolution theorem the convolution becomes a product in the frequency domain: $$Y(\omega)=X(\omega)H(\omega)$$Therefore, $$H(\omega)=\frac{Y(\omega)}{X(\omega)}$$You should, however, be careful about the zeros of $X(\omega)$. If you are free to choose $x[n]$, pick it so that it excites the system across the whole frequency range you are interested in for $H(\omega)$. Alternatively, you can excite the filter repeatedly at different frequencies, which can be interpreted as sampling the frequency response. This gives you a sampled spectrum, so if you know the filter has no singularities between the sample frequencies, you can model it by an LTI filter of your desired order (depending on your acceptable approximation error). In response to your second question: after you have acquired an approximation $\hat{h}[n]$ of the impulse response, you can apply it to any desired input by convolution.
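To make the frequency-domain approach concrete, here is a minimal NumPy sketch. The filter `h_true`, the excitation length and the regularization constant `eps` are made-up illustration values; the regularized division is one standard way to guard against the zeros of $X(\omega)$ discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([1.0, 0.5, 0.25, 0.125])   # "unknown" FIR filter (made up)
x = rng.standard_normal(64)                  # broadband excitation
y = np.convolve(x, h_true)                   # observed output, length 64 + 4 - 1

# Zero-padding both signals to the full convolution length makes the
# circular convolution implied by the DFT equal to the linear one.
N = len(y)
X = np.fft.fft(x, N)
Y = np.fft.fft(y, N)

eps = 1e-8                                   # regularization near zeros of X(w)
H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)  # ~ Y/X, but numerically safe
h_est = np.real(np.fft.ifft(H))[: len(h_true)]

print(np.round(h_est, 4))
```

With a broadband excitation the estimate matches the true taps to numerical precision; a narrowband $x[n]$ would leave $H(\omega)$ essentially undetermined wherever $X(\omega)\approx 0$.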
I recently plotted Gamma and Vega against Delta for a European call option and found that the graphs look very similar. This makes sense to me mathematically since the two formulas are pretty much the same from Black Scholes, just with a few different constants $$ \Gamma = Ke^{-rT}\phi(d_2)\frac{1}{S^2\sigma\sqrt{T}} $$ $$ \nu = Ke^{-rT}\phi(d_2)\sqrt{T} $$ However, I am uncertain how these apply specifically to delta and how it all conceptually comes together. Apologies if this is an obvious question and thank you for any assistance.
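For what it's worth, the two expressions can be checked numerically against the more common textbook forms $\Gamma = \phi(d_1)/(S\sigma\sqrt{T})$ and $\nu = S\phi(d_1)\sqrt{T}$: they agree because $S\phi(d_1) = Ke^{-rT}\phi(d_2)$, and they differ from each other only by the deterministic factor $S^2\sigma T$, i.e. $\nu = \Gamma S^2 \sigma T$. A small sketch with arbitrary illustrative parameters (not taken from any particular market):

```python
from math import log, sqrt, exp, pi

def phi(x):  # standard normal density
    return exp(-x * x / 2) / sqrt(2 * pi)

# Arbitrary illustrative parameters
S, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 0.5

d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

# The forms quoted in the question
gamma = K * exp(-r * T) * phi(d2) / (S**2 * sigma * sqrt(T))
vega = K * exp(-r * T) * phi(d2) * sqrt(T)

# Equivalent textbook forms, via S*phi(d1) = K*exp(-r*T)*phi(d2)
gamma_alt = phi(d1) / (S * sigma * sqrt(T))
vega_alt = S * phi(d1) * sqrt(T)

print(gamma, vega)
```

So for a European call, Gamma and Vega are proportional at fixed $S$, $\sigma$ and $T$, which is why their plots against Delta look like rescaled copies of each other.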
Research Open Access Published: Topological sensitivity analysis of a time-dependent nonlinear problem Boundary Value Problems volume 2019, Article number: 23 (2019) Article metrics 351 Accesses Abstract We are interested in the optimization of the pipe shape allowing minimization of the dissipated energy in a time-dependent Navier–Stokes Darcy flow. The technique used is based on the topological gradient method. In the theoretical part, we present an analysis of the topological sensitivity of the dissipated energy function. Some numerical tests are presented to illustrate the developed approach. Introduction Let O be a bounded cavity of \(\mathbb{R}^{2}\) occupied by a viscous and incompressible fluid modeled by the time-dependent nonlinear Navier–Stokes equations. We assume that the cavity O has some inlets \(\varGamma_{i}^{k}\), \(k=1,\ldots,n\), and some outlets \(\varGamma_{o}^{i}\), \(i=1,\ldots,m\) (see Fig. 1). The aim of this work is to obtain the optimal form of pipes connecting the inputs and the outputs of the cavity that minimizes the dissipated energy in the fluid under a volume constraint. Let \(S_{\mathrm{ad}}= \{D\subset O \mbox{ with } \varGamma _{i}^{k} \subset \partial D \cap\partial O \mbox{ and } \varGamma_{o}^{i} \subset\partial D \cap\partial O \mbox{ with } |D|\le V_{d} \}\) be the set of admissible domains, where \(|\cdot|\) is the Lebesgue measure and \(V_{d}\) is the desired volume. For each \(D\in S_{\mathrm{ad}}\), we denote by v and p, respectively, the velocity and the pressure, solution to the Navier–Stokes equations in D. ν is the fluid kinematic viscosity, T is the flow time and \(v_{d}\) is the boundary condition given by A variety of publications have focused on the design of an optimal pipe shape domain [1,2,3], but the majority of studies concentrated on determining the optimal form of an existing boundary. The topological gradient method has lately been introduced in optimal shape problems [4,5,6].
This method allows for the introduction of new boundaries into the design. The idea of the method is to measure the effect of a small topology change in the domain on a given cost function. This effect is described through an asymptotic expansion of the function. An approach based on topological sensitivity analysis [7,8,9] is presented in this work. The optimal pipe shape domain is obtained by inserting obstacles into the initial domain. Taking into account the friction between the fluid and the obstacles, which is modeled by The studied optimization problem is to find where \(J(v)=\nu \int_{O}|\nabla v|^{2} \,dx+\kappa \int _{O}|v|^{2} \,dx\) is the dissipated energy function and v is the solution of (2). To optimize the obstacles' location, we develop in Sect. 2 a topological asymptotic expansion of the dissipated energy function with respect to the introduction of an obstacle of small size within the fluid flow domain O. Section 3 is devoted to the numerical tests. Topological asymptotic development Let \(y\in O\), \(\eta>0\) and \(\xi\subset\Bbb {R}^{2} \) a given bounded domain which contains the origin, with \(\partial\xi\in\mathcal{C}^{1}\). We denote \({\xi}_{y,\eta}=y+\eta {\xi} \in O\). When an obstacle \({\xi}_{y,\eta}\) is inserted in O, \((v_{\eta },p_{\eta})\) is the solution of We define the dissipated energy function associated to the perturbed domain where \(\kappa_{\eta}=c_{\eta} \kappa\) is the perturbed impermeability with and c is a contrast parameter which permits one to switch the impermeability value [12]. The variational formulation of (3) is: Find \(v_{\eta}\in V\) solution of where and The topological gradient method consists in finding the asymptotic expansion of the cost function J with respect to a small perturbation of the initial domain. For this reason, we are interested in calculating the difference between the perturbed cost function \(J_{\eta}(u_{\eta})\) and the unperturbed one \(J(u_{0})\).
A similar study is presented in [14] for the three-dimensional non-stationary Navier–Stokes equations, using a numerical approximation based on the sensitivity analysis of the Stokes equation. In this work we are interested in the non-stationary Navier–Stokes Darcy equations. The variation of the studied cost function is written In the following, \(|v|^{2}\) will be denoted by \(v^{2}\) for simplicity. By remarking that and Following the definition of the parameter c, By using (4) and integration by parts By choosing \(w=v_{\mathrm{adj}}\), the solution of the adjoint problem associated to (2), we obtain By comparing this last equation with (11), we obtain where We remark that it can be shown that \(\varSigma(\eta)=O(\eta^{2})\). Finally, using the Lebesgue differentiation theorem [17], we obtain which gives the following result. Theorem 2.1 The function J satisfies the asymptotic development where \(f(\eta)=|\xi_{y,\eta}|\) and DJ is the topological gradient given by Corollary 2.1 Summing over time, the topological gradient of \(J_{T}(v)\) is given by Numerical results Optimization algorithm Using (16), we remark that \(J_{\eta}(v_{\eta})< J(v)\) if \(D J(y)<0\). Then the minimum of J, which corresponds to the best location y of the obstacle, is obtained where \(D J(y)\) is most negative. Following this result, we propose the following numerical algorithm. We begin by choosing \(O_{0} = O\). Then we construct the sequence of domains \((O_{k})_{k\ge0}\) such that \(O_{k+1}=O_{k}\setminus \overline{\xi_{k}}\), where \(\xi_{k}\) is the obstacle defined by a level-set curve of \(D_{k} J_{T}\). Here, \(d_{k}\) is a given constant and \(D_{k} J_{T}(y)=D J_{T}(v^{k})\) is defined by \(v^{k}\) is the solution of the Navier–Stokes Darcy problem where \((v^{k}_{\mathrm{adj}})_{t}\) and \((v^{k}_{\mathrm{adj}})_{n}\) are, respectively, the tangential and normal components of \(v^{k}_{\mathrm{adj}}\) and \(J_{\partial O}\) is the boundary part of J.
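Schematically, the domain-update loop above can be sketched as follows. Everything numerical here is a stand-in: the grid, the 5% level-set quantile, and in particular `dummy_topological_gradient`, which replaces the Navier–Stokes Darcy solve, its adjoint and the resulting \(D_{k} J_{T}\) by a fixed placeholder field.

```python
import numpy as np

def dummy_topological_gradient(mask):
    # Placeholder for D_k J_T: in the real algorithm this requires solving the
    # Navier-Stokes Darcy problem and its adjoint on the current domain O_k.
    ny, nx = mask.shape
    yy, xx = np.mgrid[0:ny, 0:nx] / max(ny - 1, nx - 1)
    dj = np.cos(4 * np.pi * xx) * np.cos(4 * np.pi * yy)
    return np.where(mask, dj, np.inf)        # already-removed cells excluded

mask = np.ones((64, 64), dtype=bool)         # O_0 = O; True marks a fluid cell
V_d = 0.7 * mask.size                        # desired volume (illustrative)

while mask.sum() > V_d:
    dj = dummy_topological_gradient(mask)
    d_k = np.quantile(dj[mask], 0.05)        # level-set threshold: most negative 5%
    xi_k = mask & (dj <= min(d_k, 0.0))      # obstacle xi_k, only where D J_T <= 0
    if not xi_k.any():
        break                                # no descent direction left
    mask &= ~xi_k                            # O_{k+1} = O_k \ xi_k

print(mask.sum() / mask.size)
```

The loop mirrors the paper's stopping logic: obstacles are only inserted where the (placeholder) topological gradient is negative, and iteration stops once the volume constraint is reached or no negative level set remains.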
Numerical discretization The numerical resolution of the Navier–Stokes Darcy problem (20) and its adjoint problem is done in two steps. To overcome the problem of the nonlinear terms in the first equation, we use the method of characteristics [18]. It consists of approximating the convection term as where Δt is the time step, \(t^{n}=n \Delta t\) and \(X(x,t^{n+1},t^{n})\) is the position at time \(t^{n}\) of the fluid particle which is located at x at time \(t^{n+1}\). The time discretization of problem (20) can then be written where \(\lambda=\frac{1}{\Delta t}+\kappa\), \(g_{n}=\frac{1}{\Delta t} v^{k}(X(x,t^{n+1},t^{n}),t^{n}) \), \(v^{k}_{n+1}=v^{k}(\cdot,t^{n+1})\). In the same way, we can express the objective function by where \(v^{k}_{n}\) and \((v^{k}_{\mathrm{adj}})_{n}\) are, respectively, the numerical solutions of the Navier–Stokes Darcy problem and its adjoint at time \(t^{n}\). Example 1 In this test, the domain O is taken as the square with side equal to 1, containing one input \(\varGamma_{i}\) and one output \(\varGamma_{o}\). A parabolic Dirichlet profile is prescribed at \(\varGamma _{i}\) and \(\varGamma_{o}\), with maximum inflow and outflow equal to 1. On the remaining part of the boundary \(\partial O_{k} \setminus (\varGamma_{i} \cup\varGamma_{o})\) a homogeneous Dirichlet condition is imposed. \(d_{k}\) is selected in practice such that \(J_{T}(O_{k+1})- J_{T}(O_{k})\) is negative; it determines the obstacle volume. In the numerical tests, we choose \(d_{k}\) such that \(\xi_{k} \subset O_{k}\), \(D J_{T} \leq0\) and \(|\xi _{k}| \leq0.1 |O_{k}|\). We use the presented algorithm to find the optimal pipe domain connecting the inlet of the cavity and its outlet with minimum dissipated energy. We present, in Figs. 4 and 5 (respectively, Figs. 6 and 7), two intermediate geometries obtained throughout the optimization process for case (a) (respectively, case (b)).
The obtained optimal domain is presented for case (a) (respectively, case (b)) in Fig. 8 (respectively, Fig. 9). It corresponds to \(V_{d} = 0.1 \pi |\varOmega|\) (respectively, \(V_{d} =0.08 \pi |\varOmega|\)). Example 2 In this example, \(\varOmega=\, ]0, 3/2[\, \times\, ]0, 1[\) is a rectangular domain with two inlets and two outlets. The boundary condition considered is similar to that of the pipe example. As in the previous example, we consider here two cases describing various relative positions of the inlets and outlets. The domains of the considered cases, showing the inlet and outlet positions, are depicted in Figs. 10 (case (c)) and 11 (case (d)). The geometries obtained during the optimization process are presented in Figs. 12–13 for case (c) and Figs. 14–15 for case (d), respectively. The optimal geometries plotted in Figs. 16 and 17 are computed with \(V_{d} = \frac{1}{5}|\varOmega|\) for case (c) and \(V_{d} = \frac{1}{6}|\varOmega|\) for case (d), respectively. Conclusion We developed in this work an efficient topological optimization algorithm for determining the optimal shape design of an unsteady flow described by the coupled Navier–Stokes and Darcy equations. Using the asymptotic expansion of the energy function, the optimal domain is generated by inserting obstacles at each iteration until the desired volume is reached. The location of these obstacles is determined by the developed topological gradient. This problem can be generalized to the three-dimensional case and used for realistic applications such as the bypass problem in biomedical fluids. References 1. Afshar, M.H., Afshar, A., Marino, M.A., Asce, H.M.: An iterative penalty method for the optimal design of pipe networks. Int. J. Civ. Eng. 7(2), 1–16 (2009) 2. Pingen, G.: Optimal design for fluidic systems: topology and shape optimization with the lattice Boltzmann method. Ph.D. thesis, University of Colorado (2008) 3.
Mohammadi, B., Pironneau, O.: Applied Shape Optimization for Fluids. Oxford University Press, New York (2001) 4. Abdelwahed, M., Hassine, M.: Topological optimization method for a geometric control problem in Stokes flow. Appl. Numer. Math. 59(8), 1823–1838 (2009) 5. Borrvall, T., Petersson, J.: Topological optimization of fluids in Stokes flow. Int. J. Numer. Methods Fluids 44(1), 77–107 (2003) 6. Guest, J.K., Prévost, J.H.: Topology optimization of creeping fluid flows using a Darcy–Stokes finite element. Int. J. Numer. Methods Eng. 66, 461–484 (2006) 7. Guillaume, P., Sid Idris, K.: Topological sensitivity and shape optimization for the Stokes equations. SIAM J. Control Optim. 43(1), 1–31 (2004) 8. Hassine, M., Masmoudi, M.: The topological asymptotic expansion for the quasi-Stokes problem. ESAIM Control Optim. Calc. Var. 10(4), 478–504 (2004) 9. Sokolowski, J., Zochowski, A.: On the topological derivative in shape optimization. SIAM J. Control Optim. 37(4), 1251–1272 (1999) 10. Gersborg-Hansen, A., Sigmund, O., Haber, R.B.: Topology optimization of channel flow problems. Struct. Multidiscip. Optim. 30(3), 181–192 (2005) 11. Yongbo, D.: Topology optimization of steady and unsteady incompressible Navier–Stokes flows driven by body forces. Struct. Multidiscip. Optim. 47(4), 555–570 (2013) 12. Gartling, D.K., Hickox, C.E., Givler, R.C.: Simulation of coupled viscous and porous flow problems. Comput. Fluid Dyn. J. 7(1), 23–48 (1996) 13. Rădulescu, V., Repovš, D.: Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis. Taylor & Francis, Boca Raton (2015) 14. Abdelwahed, M., Hassine, M.: Topology optimization of time dependent viscous incompressible flows. Abstr. Appl. Anal. 2014, Article ID 923016 (2014) 15. Hasund, K.E.: Topology optimization for unsteady flow with applications in biomedical flows. Ph.D. thesis, Norwegian University of Science and Technology (2017) 16.
Rădulescu, V.: Nonlinear elliptic equations with variable exponent: old and new. Nonlinear Anal., Theory Methods Appl. 121, 336–369 (2015) 17. Rudin, W.: Real and Complex Analysis. Mathematics Series. McGraw-Hill, New York (1987) 18. Pironneau, O.: On the transport diffusion algorithm and its applications to Navier–Stokes equations. Numer. Math. 38, 309–332 (1982) 19. Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer, New York (1991) Acknowledgements The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this Research group No. (RG-1435-026). Availability of data and materials Not applicable. Funding Not applicable. Ethics declarations Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Existence Property Pattern Untimed existence Pattern Name and Classification Existence : Occurrence Specification Pattern Structured English Specification Scope, P eventually [holds]. (see the English grammar). Pattern Intent To describe a portion of a system's execution that contains an instance of certain events or states. Also known as Eventually. Temporal Logic Mappings LTL Globally: $\Diamond (P)$ Before R: $\neg R \; \mathcal{W}\; (P \wedge \neg R)$ After Q: $\Box (\neg Q) \vee \Diamond (Q \wedge \Diamond P)$ Between Q and R: $\Box (Q \wedge \neg R \rightarrow (\neg R \; \mathcal{W} \; (P \wedge \neg R)))$ After Q until R: $\Box (Q \wedge \neg R \rightarrow (\neg R \; \mathcal{U} \; (P \wedge \neg R)))$ CTL Globally: $AF(P)$ Before R: $A[\neg R \; \mathcal{U} \; (P \wedge \neg R)]$ After Q: $A[\neg Q \; \mathcal{W} \; (Q \wedge AF(P))]$ Between Q and R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{W} \; (P \wedge \neg R)])$ After Q until R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{U} \; (P \wedge \neg R)])$ Additional notes The Existence Property Pattern was proposed by Dwyer in [1]. The original version, which does not contain time constraints, can be found on the Untimed version page. Time-constrained version Pattern Name and Classification Time-constrained Existence: Real-time Occurrence Specification Pattern Structured English Specification Scope, P eventually [holds] [ Time(0)]. (see the English grammar). Pattern Intent This pattern aims at describing a portion of a system's execution, bounded by a time interval, that contains an instance of certain events or states.
Temporal Logic Mappings MTL Globally: $\Diamond ^{[t1,t2]}\; (P)$ Before R: $\neg R \; \mathcal{W}^{[t1,t2]} \; (P \wedge \neg R)$ After Q: $\Box (\neg Q) \vee \Diamond(Q \wedge \Diamond^{[t1,t2]} P)$ Between Q and R: $\Box ((Q \wedge \Box^{[0,t1]} (\neg R) \wedge (\Diamond^{[t1,\infty)} R)) \rightarrow (\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R)))$ After Q until R: $\Box ((Q \wedge \Box^{[0,t1]} (\neg R)) \rightarrow (\neg R \; \mathcal{U}^{[t1,t2]}\; (P \wedge \neg R)))$ TCTL Globally: $AF^{[t1,t2]}\;(P)$ Before R: $A[\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R)]$ After Q: $A[\neg Q \; \mathcal{W} \; (Q \wedge AF^{[t1,t2]}\; (P))]$ Between Q and R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R)])$ After Q until R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{U}^{[t1,t2]} \; (P \wedge \neg R)])$ Example and Known Uses The classic example of existence [1] is specifying termination, e.g., on all executions do we eventually reach a terminal state within a certain time bound. Additional notes This pattern is the extension of the Existence pattern introduced by Dwyer in [1]. This pattern can be considered the dual of the Time-constrained Absence pattern. Probabilistic version Pattern Name and Classification Probabilistic Existence: Probabilistic Occurrence Specification Pattern Structured English Specification Scope, P eventually [holds] [ Time(0)] [ Probability]. (see the English grammar). Pattern Intent This pattern aims at describing a portion of a system's execution, bounded by a time interval, that contains an instance of certain events or states with a certain probability.
Temporal Logic Mappings PLTL Globally: $[\Diamond ^{[t1,t2]}\; (P)]_{\bowtie p}$ Before R: $[\neg R \; \mathcal{W}^{[t1,t2]} \; (P \wedge \neg R)]_{\bowtie p}$ After Q: $[\Box (\neg Q) \vee \Diamond(Q \wedge \Diamond^{[t1,t2]} P)]_{\bowtie p}$ Between Q and R: $[\Box ((Q \wedge \Box^{[0,t1]} (\neg R) \wedge (\Diamond^{[t1,\infty)} R)) \rightarrow (\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R)))]_{\bowtie p}$ After Q until R: $[\Box ((Q \wedge \Box^{[0,t1]} (\neg R)) \rightarrow (\neg R \; \mathcal{U}^{[t1,t2]}\; (P \wedge \neg R)))]_{\bowtie p}$ CSL Globally: $\mathcal{P}_{\bowtie p}\Diamond^{[t1,t2]}\;(P)$ Before R: $\mathcal{P}_{\bowtie p}(\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R))$ After Q: $\mathcal{P}_{=1}[\neg Q \; \mathcal{W} \; (Q \wedge \mathcal{P}_{\bowtie p}\Diamond^{[t1,t2]}\; (P))]$ Between Q and R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{W}^{[t1,t2]}\; (P \wedge \neg R)])$ After Q until R: $AG(Q \wedge \neg R \rightarrow A[\neg R \; \mathcal{U}^{[t1,t2]} \; (P \wedge \neg R)])$ Example and Known Uses Some examples of the probabilistic existence pattern from [2]: An ambulance must arrive at the incident scene within 15 min in 95 percent of the cases. Or: At least 95 percent of issued checks are successfully cleared. Additional notes This pattern is the extension of the Existence pattern introduced by Dwyer in [1]. The probabilistic version has been proposed by Grunske in [2] and can be found on the Probabilistic version page. Bibliography 1. Matthew B. Dwyer; George S. Avrunin; James C. Corbett: Patterns in Property Specifications for Finite-State Verification. ICSE 1999, pp. 411-420. 2. Lars Grunske: Specification patterns for probabilistic quality properties. ICSE 2008, pp. 31-40.
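As a small illustration of the untimed and time-bounded existence patterns under the Globally scope, here is a finite-trace interpretation in Python. A trace is modelled as a list of `(timestamp, set_of_atoms)` observations; this is only a sketch for finite traces, not a full LTL/MTL model checker, and the trace itself is invented.

```python
def exists_untimed(trace, p):
    # Globally-scoped untimed existence: "P eventually holds" on the finite trace.
    return any(p in atoms for _, atoms in trace)

def exists_within(trace, p, t1, t2):
    # Globally-scoped time-bounded existence: "P eventually holds within [t1, t2]".
    return any(p in atoms and t1 <= t <= t2 for t, atoms in trace)

trace = [(0, {"q"}), (3, {"q"}), (7, {"p"}), (9, set())]

print(exists_untimed(trace, "p"))       # True: p eventually holds
print(exists_within(trace, "p", 5, 8))  # True: p holds at t = 7, inside [5, 8]
print(exists_within(trace, "p", 0, 4))  # False: p occurs, but only at t = 7
```

The probabilistic variants would quantify these checks over a distribution of traces rather than a single one.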
Introduction You may well have come across impedance expressed in terms of real and imaginary parts (or resistance and reactance) when using equipment such as an Antenna Analyser, or impedance in the form R + jX. This may mean something to some readers whilst it probably means nothing and is somewhat baffling to many others. In this article I am going to try to explain in SIMPLE terms what it is all about. Please do not “get turned off” because it contains some mathematics, it’s all very simple — really! This article is not meant as a mathematical treatise on the subject and covers, for the sake of simplicity, only the series circuit. It will, however, give your brain, calculator and computer some exercise! Basic AC Theory During studies for the old RAE or the newer Full Licence the concepts of resistance and reactance have been taught and the following equations will have been given: \text{Inductive reactance:}\quad X_L = 2\pi f L ~\Omega \text{Capacitive reactance:}\quad X_C = \frac{1}{2 \pi f C} ~\Omega (Note: at least X is in Ohms and we have a chance of combining it with resistance R; L and C themselves are not in Ohms). Figure 1: Impedance in a Series Circuit You will also have been taught that inductance and capacitance introduce a phase shift in the circuit between the applied voltage and the current flowing. A circuit has impedance rather than resistance when inductance and capacitance are also involved in a circuit carrying an alternating current. Again, referring to what has been taught, impedance can be represented by a triangle such as shown in Fig. 1 for a series circuit. It is not correct to write Z = R + X_L or Z = R + X_C, as this does not take into account the phase shift introduced by the reactive element. Rather, you must use the formulae given in Fig. 1.
It would, however, be very convenient if there were a method whereby R and X could be combined in some form without the use of square roots and trigonometric functions. It would allow a consistent set of units — Ohms — instead of dealing with \text{pF}, \text{μF}, \text{μH}, \text{mH} and so forth, and it would also be convenient if reactances could just be added and subtracted. This would help us in, for example, antenna calculations, where we need to find a series reactance that will make an antenna look purely resistive. The next section explains a method for attaining this, with some examples. The “j” Operator There is a mathematical tool which uses the j operator (it is often called i in mathematical books but engineers use j). This allows us to write Z=R+jX_L or Z= R-jX_C — note the minus sign for capacitive reactance, it is important. The R and the j terms cannot be further simplified, i.e. if Z = 65 + j40 this is its simplest form. The j term implies a quantity that is at 90\degree (or in quadrature) to the resistive term. Two practical examples – see Fig. 2. Using the formulae for reactance given earlier and the frequencies quoted in the examples, the series circuits can be specified as Z = 330 + j 628.3 and Z=100-j15.9 respectively (note these figures have been rounded off). This gives phase angles of approximately 62\degree (lagging) and 9\degree (leading) respectively. The minus sign indicates that the reactance is capacitive and the plus sign denotes inductive. Figure 2: Two Practical Examples If the series circuit shown in Fig. 3 is used, the combined impedance is given by: Z = R_1 + R_2 + j X_L – j X_C The non-j and the j terms can be collected together, which gives: Z = (R_1 + R_2) + j (X_L – X_C) Figure 3: Combined Circuits Thus series resistances can be added together (something that should be known), as can series reactances — but taking the sign into account. The reactances can only be added together provided that they are quoted at the same frequency.
Taking the examples from Fig. 2 and combining them in series gives: Z = 100 +330 + j(628.3 – 15.9) \\ Z = 430 + j612.4 This denotes that the combined circuit at 100\text{kHz} has a resistive part of 430\Omega and an inductive reactance of 612.4\Omega (because the j term is positive). This is equivalent to an inductance of 0.975\text{mH} (or 975\text{μH}). The resulting phase angle of 55\degree is obtained from: \tan \phi = 612.4/430 A well-known condition is achieved when the resultant j term equals zero, i.e. when X_L = X_C. From earlier: 2 \pi fL = 1/(2 \pi fC) Rearranging this, one obtains: f=\frac{1}{2\pi \sqrt{LC}} This is the well-known resonant frequency formula. You are then left with Z = the resistive term only, i.e. a series circuit at resonance is purely resistive — something one learnt for the exams? Figure 4: Antenna System Impedance A Practical Use You could well ask: what is the use of this? Is it just a mathematical exercise? No, it is not: a practical use was hinted at earlier. The following example is just one application. The impedance of an antenna system is measured at 3.7\text{MHz} using an antenna analyser and it is found that the resistive part is 38\Omega and the reactive part is -j100\Omega (i.e. Z = 38 – j100). To get maximum power into the antenna it is desirable to eliminate the reactive part so that, from terminals AB, the impedance is purely resistive. Assuming the antenna analyser gives an equivalent series circuit, a reactance must be added in series to cancel the -j100 term. This is obviously +j100, and the value of the inductance can now be calculated as 4.3\text{μH} at 3.7\text{MHz}. Conclusion It is hoped that this short article has provided an insight into the use of the operator j, but the article really only touches the surface regarding the use of this operator. Sufficient information is given for converting between physical values (i.e.
farads and henrys and sub-multiples) and equivalent reactances which are expressed in one single unit — the Ohm. The practical examples given will hopefully allow you to use the operator for other applications. To ease the maths, you can write a simple spreadsheet to do it for you.
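The suggested spreadsheet can equally well be a few lines of Python, whose built-in complex type implements the j operator directly (Python writes the j after the number). This sketch reproduces the combined-circuit example and the antenna-matching calculation; the component values are inferred from the reactances quoted at 100 kHz.

```python
from math import atan, degrees, pi

f = 100e3                          # both branches are quoted at 100 kHz
Z1 = 330 + 628.3j                  # series R-L branch: 330 ohm with X_L = 628.3 ohm
Z2 = 100 - 15.9j                   # series R-C branch: 100 ohm with X_C = 15.9 ohm

Z = Z1 + Z2                        # series impedances simply add
print(Z.real, Z.imag)              # resistive part 430 ohm, +j part 612.4 ohm

phase = degrees(atan(Z.imag / Z.real))
print(round(phase))                # ~55 degrees, lagging (the j term is positive)

L_equiv = Z.imag / (2 * pi * f)    # equivalent inductance of the +j612.4 term
print(round(L_equiv * 1e6))        # ~975 uH, i.e. 0.975 mH

# Antenna example: Z = 38 - j100 at 3.7 MHz; cancel -j100 with a series inductor.
f_ant = 3.7e6
L_match = 100 / (2 * pi * f_ant)
print(round(L_match * 1e6, 1))     # ~4.3 uH
```

Complex addition does the collecting of the non-j and j terms for you, and the sign of `Z.imag` immediately tells you whether the result is inductive or capacitive.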
I'm trying to align equations with already aligned case-by-case answers in them. What I am currently getting is: and what I need is to align all the equals signs throughout the function. The relevant excerpt from my LaTeX is

\documentclass[10pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{mathtools}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{csquotes}
\begin{document}
\begin{multline}
\langle \cos(m \omega x), \cos(n \omega x) \rangle
  = \frac{1}{T}\int_{-T/2}^{T/2}\cos({m \omega x}) \cos({n \omega x}) \, dx
  = \begin{cases}
      0           & \text{if } m \neq n \\
      \frac{1}{2} & \text{if } m = n > 0 \\
      1           & \text{if } m = n = 0
    \end{cases} \\
\langle \sin(m \omega x), \sin(n \omega x) \rangle
  = \frac{1}{T}\int_{-T/2}^{T/2}\sin({m \omega x}) \sin({n \omega x}) \, dx
  = \begin{cases}
      0           & \text{if } m \neq n \\
      \frac{1}{2} & \text{if } m = n > 0 \\
      1           & \text{if } m = n = 0
    \end{cases} \\
\langle \sin(m \omega x), \cos(n \omega x) \rangle
  = \frac{1}{T}\int_{-T/2}^{T/2}\sin({m \omega x}) \cos({n \omega x}) \, dx
  = 0
\end{multline}
\end{document}

I've tried using

\begin{aligned}[t] one function with cases \end{aligned} \\
\begin{aligned}[t] next function with cases \end{aligned} \\
\begin{aligned}[t] next function without cases \end{aligned} \\

inside the \begin{multline}...\end{multline}, but I've had no success. I assume it can be done, given that I can see it in this paper, but I am very new to LaTeX. It would be interesting to know how it is able to deal with this sort of alignment issue, given that for each equation there are different numbers of equals signs, and therefore different numbers of alignment points required. Is there a way of making the alignment symbol & apply to the surrounding equations rather than to the cases?
I am getting a little confused about the huge number of slight variations on the Sobolev Embedding Theorem. Let $\Omega\subseteq\mathbb{R}^n$ be a bounded Lipschitz domain and suppose that $f\in L_\infty(\Omega)\cap W^{\tau,2}(\Omega)$ for some $\tau\in\mathbb{R}$ with $\tau>n/2$. Do we have the inequality $$ \left\Vert f\right\Vert_{L_\infty(\Omega)}\leq C\left\Vert f\right\Vert_{W^{\tau,2}(\Omega)} $$ for some constant C? Does it hold if $\Omega$ is unbounded? Thanks.
Proposition: We assume the following. 1) The force exerted by the air on a surface is pure pressure, thus normal to the surface, without friction. The pressure increases with the magnitude of the surface-normal component of the incident air flow velocity and is zero when the surface-normal component becomes negative. 2) The surface of the capsule is axially symmetric. Label as $B$ the intersection of the symmetry axis and the surface (bottom) facing the incoming airflow. The inward normal vector $\vec n$ of any infinitesimal surface patch either intersects the axis at a point $N$ some finite distance from $B$, or $\vec n$ is parallel to the axis. The center of mass $C$ of the capsule lies between $B$ and $N$. Then the capsule achieves aerodynamic stability. Before presenting the proof of this proposition, I give a plausible toy model of the air flow pressure function. The realistic function will surely be more complicated. Interestingly, however, two and a half months after I posted this answer, I happened upon the theory of hypersonic aerodynamics, which fully endorses the following derivation as the correct computation for the pressure of hypersonic (Mach 3-5) airflow on a largely axially symmetric body with blunt surface geometry; c.f. equations (11-2) and (11-3) of chapter 11 on hypersonic aerodynamics of W. H. Mason's lecture on configuration aerodynamics. Search for "Newtonian Impact Theory" in the accompanying PPT to that chapter. Suppose an air column of infinitesimal cross-section area $dA$ collides with a facet whose normal vector forms an angle $\theta\in\big[0,\frac\pi2\big]$ with the air flow direction vector. The air bounces off the facet completely elastically. The momentum change (all in the normal direction of the facet) per unit time is then $2\rho v^2\cos\theta \, dA$, where $\rho$ is the density of the air flow and $v$ its speed. The area upon which this momentum change occurs is $\frac{dA}{\cos\theta}$.
Dividing the first quantity by the second, we get the pressure $p(\theta):=2\rho v^2\cos^2\theta$. Now the early-arriving particles bounce off the surface normally, collide completely elastically with the late-arriving particles, and bounce back towards the surface again. By symmetry, the average particle velocity near the surface vanishes in the surface-normal direction but its component tangent to the surface remains. Macroscopically, the fluid on average moves as a whole along the tangent of the surface. Alternatively, we can assume a completely inelastic collision of the air molecules with the surface, so that the momentum normal to the surface dissipates completely while the tangential component is preserved, and the air molecules after the collision move parallel to the surface. In this case it is clear that $p(\theta):=\rho v^2\cos^2\theta$, which is half of the previous value, as the surface-normal momentum transferred is half of that in the elastic case. In the case of a fractionally elastic collision, $p(\theta):=(1+\alpha)\rho v^2\cos^2\theta$, where $\alpha\in[0,1]$ is the coefficient of collision elasticity. Moreover, the part of the object surface that is in the "shadow" of the incoming airflow will remain untouched by the airflow and thus experience no pressure. Proof: 1) 2-dimension. Let us formulate the problem formally. Let $s\in[-s_0,s_0],\,s_0>0$ measure the signed distance from the intersection of the symmetry axis with the surface. Denote the unit inward normal vector at $s$ by $\hat n(s)$. Let $\theta(s)$ be the angle from $\hat n(0)$ to $\hat n(s)$, with the counterclockwise direction as the positive direction for the angle. $\theta(-s)=-\theta(s)$ by the axial symmetry. Let the angle from $\hat n(s=0)$ to the incoming airflow direction be $\theta_a$, also with the counterclockwise direction positive.
Place the curve $(x(s),y(s))$ in Cartesian coordinates such that $(x(s=0),y(s=0))=(0,0)$ and the center of mass is located at $(x=0,y=y_c)$. We have $(x(-s),y(-s))=(-x(s),y(s))$. Let $p(\beta)$ be the pressure as a function of the angle $\beta$ with respect to the incoming air flow. The torque per unit length of the curve at $s$ with respect to $(0,y_c)$ is $l(s)p(\theta_a-\theta(s))$, where $l(s)\hat z = \big((x(s),y(s))-(0,y_c)\big)\times \hat n(s)$. Without loss of generality we assume $\theta_a>0$; otherwise we can reflect the coordinates with respect to the $y$ axis and recover the same problem because of the axial symmetry. The total torque, accounting only for the surface facing the incoming airflow, is \begin{align}T&:=\int_{-s_0}^{s_0}l(s)p(\theta_a-\theta(s))\,ds \\&=\int_0^{s_0}l(s)\big(p(\theta_a-\theta(s))-p(\theta_a+\theta(s))\big)\,ds \end{align}since $l(-s)=-l(s)$ by the axial symmetry of the curve. Stability is achieved if $T>0$. We have $l(s)>0,\,\forall s>0$ since, by Assumption 2), the center of mass $C$ located at $(0,y_c)$ is between $B$ (at the origin of the coordinates $(0,0)$) and $N$. Also $p(\theta_a-\theta(s))>p(\theta_a+\theta(s))$, since $|\theta_a-\theta(s)|<\theta_a+\theta(s),\ \forall \theta_a>0,\, \theta(s)>0,\, s>0$, and $p(u)>p(v),\,\forall |u|<|v|$. Therefore $T>0$. QED
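The 2-D argument can be checked numerically for a concrete toy shape: a circular-arc bottom of radius $R$ centred on the axis (so $\theta(s)=s/R$ and $N=(0,R)$), centre of mass at $(0,y_c)$ with $0<y_c<R$, and the Newtonian pressure law $p(\beta)=\cos^2\beta$ on the lit side and zero in shadow. All numbers below are illustrative; the sketch only confirms the sign of the total torque.

```python
import numpy as np

R, y_c, s0 = 1.0, 0.4, 1.2          # arc radius, centre-of-mass height, half-length

def torque(theta_a, n=4001):
    s = np.linspace(-s0, s0, n)                  # arc-length parameter
    th = s / R                                   # theta(s) for a circular arc
    x, y = R * np.sin(th), R * (1.0 - np.cos(th))
    nx, ny = -np.sin(th), np.cos(th)             # inward unit normal
    l = x * ny - (y - y_c) * nx                  # moment arm l(s) about (0, y_c)
    beta = theta_a - th                          # local angle of attack
    p = np.where(np.cos(beta) > 0, np.cos(beta) ** 2, 0.0)   # shadow -> p = 0
    g = l * p
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s)))  # trapezoid rule

print(torque(0.3))    # positive: restoring torque for a positive angle of attack
print(torque(0.0))    # ~0 by symmetry
```

For this arc $l(s)=\sin\theta(s)\,(R-y_c)$, so the sign of the torque is exactly what the proof predicts as long as $y_c<R$, i.e. $C$ lies between $B$ and $N$.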
In this chapter you will do more work with fractions written in the decimal notation. When fractions are written in the decimal notation, calculations can be done in the same way as for whole numbers. It is important to always keep in mind that the common fraction form, the decimal form and the percentage form are just different ways to represent exactly the same numbers. Equivalent forms Fractions in decimal notation 1. What fraction of each rectangle is coloured in? Write your answers in the table. (a) (b) (c) (d) (a) Red (b) Green Yellow (c) Green Yellow (d) Yellow Green 2. Now find out what fraction in each rectangle in question 1 is not coloured in. (a) (b) (c) (d) Decimal fractions and common fractions are simply different ways of expressing the same number. We call them different notations. To write a common fraction as a decimal fraction, we must first express the common fraction with a power of ten (10, 100, 1 000 etc.) as denominator. For example: \(\frac{9}{20}=\frac{9}{20} \times \frac{5}{5} = \frac{45}{100} = 0,45\) If you have a calculator, you can also divide the numerator by the denominator to get the decimal form of a fraction, for example: \(\frac{9}{20} = 9 \div 20 = 0,45\) To write a decimal fraction as a common fraction, we must first express it as a common fraction with a power of ten as denominator and then simplify if necessary. For example: \( 0,65 = \frac{65}{100} = \frac{65 \div 5}{100 \div 5} = \frac{13}{20}\) 3. Give the decimal form of each of the following numbers. \(\frac{1}{2} \) __________ \(\frac{3}{4}\) __________ \(\frac{4}{5}\) __________ \(\frac{7}{5}\) __________ \(\frac{7}{2} \) __________ \(\frac{65}{100}\)__________ 4. Write the following as decimal fractions. (a) \(2 \times 10 + 1 \times 1 + \frac{3}{10}\) (b) \(3 \times 1 + 6 \times \frac{1}{100}\) (c) Three hundredths (d) \(7 \times \frac{1}{1000}\) 5. Write each of the following numbers as fractions in their simplest form. 0,2 0,85 0,07 12,04 40,006 6. 
Write in the decimal notation. (a) 5 + 12 tenths (b) 2 + 3 tenths + 17 hundredths (c) 13 hundredths + 15 thousandths (d) 7 hundredths + 154 hundredths Hundredths, percentages and decimals It is often difficult to compare fractions with different denominators. Fractions with the same denominator are easier to compare. For this and other reasons, fractions are often expressed as hundredths. A fraction expressed as hundredths is called a percentage. Instead of 6 hundredths we can say 6 per cent or \(\frac{6}{100}\) or 0,06. 6 per cent, \(\frac{6}{100}\) and 0,06 are just three different ways of writing the same number. The symbol % is used for per cent. Instead of writing "17 per cent", we may write 17%. 1. Write each of the following in three ways: in decimal notation, in percentage notation and in common fraction notation. Leave your answers in hundredths. (a) 80 hundredths (b) 5 hundredths (c) 60 hundredths (d) 35 hundredths 2. Complete the following table. 0,3 \(\frac{1}{4}\) 15% \(\frac{1}{8}\) 0,55 1% Ordering and comparing decimal fractions Bigger, smaller or the same? 1. Write the values of the marked points (A to D) as accurately as possible in decimal notation. Write the values beneath the letters A to D. (a) (b) (c) (d) (e) (f) (g) (h) (i) 2. Order the following numbers from biggest to smallest. Explain your thinking. 5267 1263 1300 12689 635 1267 125 126 12 3. Order the following numbers from biggest to smallest. Explain your method. 0,8 0,05 0,901 0,15 0,465 0,55 0,75 0,4 0,62 0,901 0,8 0,75 0,62 0,55 0,465 0,4 0,15 0,05 4. Write down three different numbers that are bigger than the first number and smaller than the second number. (a) 5 and 5,1 (b) 5,1 and 5,11 (c) 5,11 and 5,12 (d) 5,111 and 5,116 (e) 0 and 0,001 (f) \(\frac{1}{2}\) and 1 5. Underline the bigger of the two numbers. (a) 2,399 and 2,6 (b) 5,604 and 5,64 (c) 0,11 and 0,087 (d) \(\frac{3}{4}\) and 50% (e) \(\frac{75}{100}\) and \(\frac{50}{100}\) (f) 0,125 and 0,25 6. 
The table gives information about two world champion heavyweight boxers. If they fight against one another, who would you expect to have the advantage, and why? Height (m) 1,98 1,88 Weight (kg) 112 103,3 Reach (m) 2,03 1,91 7. Fill in <, > or = . (a) 3,09 ☐ 3,9 (b) 3,9 ☐ 3,90 (c) 2,31 ☐ 3,30 (d) 3,197 ☐ 3,2 (e) 4,876 ☐ 5,987 (f) 123,321 ☐ 123,3 8. How many numbers are there between 3,1 and 3,2? Rounding off decimal fractions Decimal fractions can be rounded in the same way as whole numbers. They can be rounded to the nearest whole number or to one, two, three etc. figures after the comma. If the last digit of the number is 5 or bigger it is rounded up to the next number. For example: 13,5 rounded to the nearest whole number is 14; 13,526 rounded to two figures after the comma is 13,53. If the last digit is 4 or less it is rounded down to the previous number. For example: 13,4 rounded to the nearest whole number is 13. Let's round off 1. Round each of the following numbers off to the nearest whole number. 29,34 3,65 14,452 3,299 39,1 564,85 1,768 2. Round each of the following numbers off to one decimal place. 19,47 421,34 489,99 24,37 6,77 3. Round each of the following numbers off to two decimal places. 8,345 6,632 5,555 34,239 21,899 4. Mr Peters buys a radio for R206,50. The shop allows him to pay it off over six months. How must he pay back the money? 5. (a) Mrs Smith buys a carton of 10 kg grapes at the market for R24,77. She must divide it between herself and two friends. How much does each woman get? (b) How much must each person pay Mrs Smith for the grapes? 6. Estimate the answers for each of the following by rounding off the numbers. (a) \(1,43 \times 1,62\) (b) \(3,89 \times 4,21\) Calculations with decimal fractions To add and subtract decimal fractions tenths may be added to tenths tenths may be subtracted from tenths hundredths may be added to hundredths hundredths may be subtracted from hundredths etc. Let's do calculations! 1. 
Four consecutive stages in a cycling race are 21,4 km; 14,7 km; 31 km and 18,6 km long. How long is the whole race? Answer: 2. Calculate. (a) \( 16,52 + 2,35 \) (b) \(16,52 + 9,38\) (c) \(16,52 + 9,78\) (d) \( 30,08 + 2,9 \) (e) \(0,042 + 0,103\) (f) \(9,99 + 0,99\) 3. Calculate. (a) \( 45,67 - 23,25 \) (b) \( 45,67 -23,80 \) (c) \(187,6 - 98,45\) (d) \( 1,009 - 0,998 \) (e) \(0,9 - 0,045\) (f) \(65,7 - 37,6\) 4. The following set of measurements (in cm) was recorded during an experiment: 56,8; 55,4; 78,9; 57,8; 34,2; 67,6; 45,5; 34,5; 64,5; 88 (a) Find the sum of the measurements and round it off to the nearest whole number. (b) First round off each measurement to the nearest whole number and then find the sum. (c) Which of your answers in 4(a) and (b) is closest to the actual sum? Explain why. 5. By how much is 0,7 greater than 0,07? 6. The difference between two numbers is 0,75. The bigger number is 18,4. What is the other number? To multiply fractions written as decimals, convert the fractions to whole numbers by multiplying by powers of 10 (e.g. \(0,3 \times 10 = 3\)), do your calculations with the whole numbers, and then convert back to decimals again. For example: \(13,1 \times 1,01\) \(13,1 {\bf\times 10} \times 1,01 {\bf\times 100} = 131 \times 101 = 13 231; 13 231 \div {\bf 10} \div {\bf 100} = 13,231\) When you do division you can first multiply the number and the divisor by the same number to make the working easier. For example: \(21,7 \div 0,7 = (21,7 {\bf\times 10}) \div (0,7 {\bf\times 10}) = 217 \div 7 = 31\) 7. Calculate each of the following. You may use fraction notation if you wish. (a) \(0,12 \times 0,3 \) (b) \( 0,12\times 0,03 \) (c) \(1,2 \times 0,3\) (d) \(350 \times 0,043 \) (e) \( 0,035\times 0,043 \) (f) \(0,13 \times 0,16\) (g) \(1,3 \times 1,6 \) (h) \(0,13 \times 1,6\) 8. \(30,5 \times 1,3 = 39,65\). Use this answer to work out each of the following. 
(a) \(3,05 \times 1,3 \) (b) \( 305 \times1,3 \) (c) \(0,305 \times 0,13\) (d) \(305 \times 13 \) (e) \( 39,65 \div 30,5 \) (f) \(39,65 \div 0,305\) (g) \( 39,65 \div 0,13 \) (h) \(3,965 \div 130\) 9. \( 3,5 \times 4,3 = 15,05\). Use this answer to work out each of the following. (a) \(3,5 \times 43 \) (b) \( 0,35 \times43 \) (c) \(3,5 \times 0,043\) (d) \(0,35 \times 0,43 \) (e) \( 15,05\div 0,35 \) (f) \(15,05 \div 0,043\) 10. Calculate each of the following. You may convert to whole numbers to make it easier. (a) \( 62,5 \div 2,5 \) (b) \(6,25 \div 2,5\) (c) \( 6,25 \div 0,25 \) (d) \(0,625 \div 2,5\) Solving problems 1. (a) Divide R44,45 between seven people so that each one receives the same amount. (b) John saves R15,25 every week. He now has R106,75 saved up. For how many weeks has he been saving? 2. (a) Calculate \(14,5 \div 6\), correct to two decimal places (b) Calculate \(7,41 \div 5\), correct to one decimal place 3. Determine the value of \(x\). (Give answers rounded to 2 decimal places.) (a) \( 7,1 \div x = 4,2 \) (b) \(x \div 0,7 = 6,2\) (c) \(12 \div x = 6,4\) (d) \( x \div 3,5 = 7 \) (e) \(2,3 \times x = 6\) (f) \(0,023 \times x = 8\) 4. (a) 1 ℓ of water weighs almost 0,995 kg. What will 50 ℓ of water weigh? What will 0,5 ℓ of water weigh? Mincemeat costs R36,65 per kilogram. What will 3,125 kg mincemeat cost? What will 0,782 kg cost?
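The whole-number scaling method for multiplying decimals described above can also be sketched in code. This is only an illustration of the method (the function name and the example values are made up for the sketch), not part of the workbook exercises.

```python
# Sketch of the scaling trick: write each decimal factor as
# (whole number) / (power of ten), multiply the whole numbers,
# then divide the product back down by both powers of ten.
def multiply_decimals(a_digits, a_places, b_digits, b_places):
    """a = a_digits / 10**a_places, b = b_digits / 10**b_places."""
    return (a_digits * b_digits) / 10 ** (a_places + b_places)

# 13,1 x 1,01  ->  131 x 101 = 13 231, then divide by 10 and by 100
print(multiply_decimals(131, 1, 101, 2))  # 13.231
```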
I have recently acquired a couple of mysterious ultra/super capacitors from my brother. Apparently he doesn't remember any of the specifications or even the brand... To further complicate matters, they have no meaningful identification information stamped or printed on them. (There is a bar code label with an alphanumeric code, but a quick Google search using it found nothing.) Looks like it's time to fire up the Scooby-Doo Mystery Bus, 'cause we're going on an adventure, folks. First, I figured I'd try to measure the capacitance. Since my LCR meter isn't specified for enormous capacitors like these, I had to get creative with my test equipment. Taking basic physics into account, we have that capacitance is the stored charge per volt across the capacitor: $$ C=\frac{q}{V} $$ where the accumulated charge in the capacitor is the integral of the current through the capacitor: $$ q=\int i(t)\,dt $$ Using a current source to charge the capacitor, we can simplify the calculations, using only delta measurements of the charge and voltage across the capacitor: $$ C=\frac{\Delta q}{\Delta V}=\frac{i\Delta t}{\Delta V} $$ With my Advantest R6144 current source I can then charge the capacitor at a set current and simply measure the voltage across the capacitor using my Tektronix DMM4050 in trendplot mode. However, this is where I start to see some rather large numbers. It's possible the capacitor really is ~2200 farads, but that seems a bit high. Admittedly, the capacitor is quite large at ~5.5" long by ~1" radius. And now some questions for the fine folks of Electrical Engineering Stack Exchange: Is this method a viable means to measure super capacitors? Or is there a more suitable method that I can apply to measure them? Also, does the capacitance of super/ultra capacitors significantly change vs. the voltage of the capacitor? E.g., are these measured results predictive/indicative of behaviour at higher charge voltages? 
I would reckon the capacitance should fluctuate some, but I doubt it's that much. Probably at worst it's a few hundred farads, but I'm no expert on the matter. Also, and somewhat more importantly, how would I find the maximum charge voltage without destroying the capacitor? Would a constant-current charge of, say, 100 µA over a few weeks, until the voltage reaches some sort of equilibrium with the self-discharge, work? Then back off a couple hundred millivolts and call that the max charge voltage. Or will it just reach a tipping point and self-destruct while spraying electrolyte all over my lab? Finally, how do you determine the polarity orientation of the capacitors? These are not marked in any way, and both terminals are identical. I cast my bet with the residual voltage stored in the capacitor. I assume the dielectric absorption/memory effect from previous charging knows the correct direction... At any rate, it's sort of fun to try to determine the characteristics of these capacitors. But it's still a touch aggravating that there are no useful markings on them, like polarity orientation, manufacturer, etc.
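For what it's worth, the constant-current estimate $C = i\,\Delta t/\Delta V$ from the question is easy to script once the trendplot data is logged. The sketch below uses made-up numbers (a hypothetical 100 mA charge producing a 10 mV rise), not measurements from these particular capacitors.

```python
# Minimal sketch of the constant-current capacitance estimate described
# in the question: C = i * dt / dV. The sample values are hypothetical.
def capacitance_from_ramp(current_a, t_start_s, t_end_s, v_start, v_end):
    """Estimate C (farads) from a constant-current charge segment."""
    dv = v_end - v_start
    if dv <= 0:
        raise ValueError("voltage must rise during a constant-current charge")
    return current_a * (t_end_s - t_start_s) / dv

# e.g. 100 mA for 220 s producing a 10 mV rise suggests ~2200 F
c = capacitance_from_ramp(0.1, 0.0, 220.0, 2.000, 2.010)
print(f"{c:.0f} F")
```

In practice you would fit the slope over a longer, clearly linear segment of the ramp, well away from the initial ESR step.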
I believe you are confusing the wing angle of attack with the pitch of the aircraft. An aircraft moving at a slow, near-stall speed, despite pointing the nose up, will still be traveling more or less horizontally; its VSI instrument will read near zero. Whereas, if you take an aircraft moving quickly and pull the nose up to the same angle, the aircraft will, obviously, climb rapidly. Why does this matter? The angle of attack is defined by the wing's motion through the relative wind. The wing's orientation relative to the ground isn't involved in the definition in any way. When the aircraft as a whole is climbing, the relative wind is coming down from above. As a result, the angle of attack is reduced compared to what it would be if the plane were not climbing. Just to show some quick numbers, suppose you took an aircraft moving at 100 kts in still air and pulled the nose up so that you are now climbing at 3,000 FPM (most aircraft will lose speed doing this, but the math is valid until the airplane slows down). Since $1\ \text{knot} \approx 100\ \text{FPM}$, you'll now have an upward velocity component of 30 knots. Your 100 kt airspeed is now directed up at an angle. A little trigonometry: $$\sin(x)=\frac{30}{100}$$$$x\approx17.46°$$ So, your angle of attack is about 17.46 degrees farther away from stalling when climbing at 3,000 FPM than it would be if your aircraft had the same pitch but was in level flight. However, few aircraft have the engine power to sustain a climb at this rate. The aircraft will bleed off speed; as it slows, the climb rate will decrease, the aircraft's velocity will become closer to horizontal, and, eventually, the aircraft will stall if the pitch is held constant.
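The trigonometry above is easy to verify; here's a quick sketch using the values from the example in this answer (30 kt vertical component out of 100 kt airspeed).

```python
import math

# Flight-path angle from the climb geometry in the example above:
# a 30 kt vertical component out of 100 kt total airspeed. This is the
# angle by which the relative wind rotates away from level flight.
def flight_path_angle_deg(vertical_kts, airspeed_kts):
    return math.degrees(math.asin(vertical_kts / airspeed_kts))

angle = flight_path_angle_deg(30, 100)
print(f"{angle:.2f} degrees")  # ~17.46
```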
Good night... I was looking into Pedersen's book, $C^{*}$-Algebras and Their Automorphism Groups, and found the definition of analytic elements $x\in A$, where $(A,\alpha)$ is a $C^{*}$-dynamical system. We say that an element $x\in A$ is analytic for $\alpha$ if the function $t\mapsto \alpha_{t}(x)$ has an extension, necessarily unique, to an analytic (entire) vector-valued function $\zeta\mapsto \alpha_{\zeta}(x)$, $\zeta\in\mathbb{C}$. If $x\in A$, then $$x_n=\sqrt{\frac{n}{\pi}}\int_{\mathbb{R}}\alpha_t(x)\exp\bigl(-nt^{2}\bigr)\,dt$$ is analytic for $\alpha$. It is not clear to me that $x_n$ is well defined, i.e., why is $x_n\in A$? And how is the integral defined: $[1]$ by the integration theory for vector-valued functions in a general Banach space, with Riemann sums, or $[2]$ by the Lebesgue integral for Banach spaces, that is, the Bochner integral? Furthermore, $\alpha_t(x_n)$ extends to $$\alpha_{\zeta}(x_n)=\sqrt{\frac{n}{\pi}}\int_{\mathbb{R}}\alpha_t(x)\exp\bigl(-n(t-\zeta)^{2}\bigr)\,dt.$$ Is the analyticity of $\zeta\mapsto\alpha_{\zeta}(x_n)$ automatic because $\alpha_t(x)\exp(-n(t-\zeta)^{2})$ is continuous, together with the Fundamental Theorem of Calculus? I hope you can answer me; I'd be really thankful. Do you recommend some bibliography? Thanks so much.
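As a numerical sanity check (only an illustration, not part of Pedersen's argument), the Gaussian kernel $\sqrt{n/\pi}\,e^{-nt^{2}}$ has total mass 1. That is what makes $x_n$ a norm-convergent average of the orbit $\alpha_t(x)$ (so $\|x_n\|\le\|x\|$ when each $\alpha_t$ is isometric), and as $n\to\infty$ the kernel concentrates at $t=0$, which is the heuristic behind $x_n\to x$.

```python
import math

# Midpoint-rule check that sqrt(n/pi) * exp(-n t^2) integrates to 1 over R.
# The tails beyond |t| = 10 are negligible for n >= 1, so we truncate there.
def gaussian_mass(n, t_max=10.0, steps=100000):
    h = 2 * t_max / steps
    total = 0.0
    for k in range(steps):
        t = -t_max + (k + 0.5) * h
        total += math.sqrt(n / math.pi) * math.exp(-n * t * t) * h
    return total

for n in (1, 4, 16):
    print(n, gaussian_mass(n))  # each value is ~1.0
```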
Siméon Denis Poisson (21 June 1781 – 25 April 1840) was a French mathematician, geometer, and physicist. He obtained many important results, but within the elite Académie des Sciences he was also the final leading opponent of the wave theory of light, and was proven wrong on that matter by Augustin-Jean Fresnel. Biography Poisson was born in Pithiviers, Loiret, the son of the soldier Siméon Poisson. In 1798, he entered the École Polytechnique in Paris as first in his year, and immediately began to attract the notice of the professors of the school, who left him free to make his own decisions as to what he would study. In 1800, less than two years after his entry, he published two memoirs, one on Étienne Bézout's method of elimination, the other on the number of integrals of a finite difference equation. The latter was examined by Sylvestre-François Lacroix and Adrien-Marie Legendre, who recommended that it should be published in the Recueil des savants étrangers, an unprecedented honour for a youth of eighteen. This success at once procured entry for Poisson into scientific circles. Joseph Louis Lagrange, whose lectures on the theory of functions he attended at the École Polytechnique, recognized his talent early on and became his friend (the Mathematics Genealogy Project lists Lagrange as his advisor, but this may be an approximation); while Pierre-Simon Laplace, in whose footsteps Poisson followed, regarded him almost as his son. The rest of his career, till his death in Sceaux near Paris, was nearly occupied by the composition and publication of his many works and in fulfilling the duties of the numerous educational positions to which he was successively appointed. 
Immediately after finishing his studies at the École Polytechnique, he was appointed répétiteur (teaching assistant) there, a position which he had occupied as an amateur while still a pupil in the school; for his schoolmates had made a custom of visiting him in his room after an unusually difficult lecture to hear him repeat and explain it. He was made deputy professor (professeur suppléant) in 1802, and in 1806 full professor, succeeding Jean Baptiste Joseph Fourier, whom Napoleon had sent to Grenoble. In 1808 he became astronomer to the Bureau des Longitudes; and when the Faculté des Sciences was instituted in 1809 he was appointed professor of rational mechanics (professeur de mécanique rationnelle). He went on to become a member of the Institute in 1812, examiner at the military school (École Militaire) at Saint-Cyr in 1815, graduation examiner at the École Polytechnique in 1816, councillor of the university in 1820, and geometer to the Bureau des Longitudes, succeeding Pierre-Simon Laplace, in 1827. In 1817, he married Nancy de Bardi, and with her he had four children. His father, whose early experiences had led him to hate aristocrats, bred him in the stern creed of the First Republic. Throughout the Revolution, the Empire, and the following restoration, Poisson was not interested in politics, concentrating on mathematics. He was appointed to the dignity of baron in 1821, but he neither took out the diploma nor used the title. In March 1818, he was elected a Fellow of the Royal Society [1] and in 1823 a foreign member of the Royal Swedish Academy of Sciences. The revolution of July 1830 threatened him with the loss of all his honours; but this disgrace to the government of Louis-Philippe was adroitly averted by François Jean Dominique Arago, who, while his "revocation" was being plotted by the council of ministers, procured him an invitation to dine at the Palais Royal, where he was openly and effusively received by the citizen king, who "remembered" him. 
After this, of course, his degradation was impossible, and seven years later he was made a peer of France, not for political reasons, but as a representative of French science. As a teacher of mathematics Poisson is said to have been extraordinarily successful, as might have been expected from his early promise as a répétiteur at the École Polytechnique. As a scientific worker, his productivity has rarely if ever been equalled. Notwithstanding his many official duties, he found time to publish more than three hundred works, several of them extensive treatises, and many of them memoirs dealing with the most abstruse branches of pure mathematics, applied mathematics, mathematical physics, and rational mechanics. (Arago attributed to him the quote, "Life is good for only two things: doing mathematics and teaching it." [2]) A list of Poisson's works, drawn up by himself, is given at the end of Arago's biography. All that is possible is a brief mention of the more important ones. It was in the application of mathematics to physics that his greatest services to science were performed. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics. Next (or in the opinion of some, first) in importance stand the memoirs on celestial mechanics, in which he proved himself a worthy successor to Pierre-Simon Laplace. The most important of these are his memoirs Sur les inégalités séculaires des moyens mouvements des planètes, Sur la variation des constantes arbitraires dans les questions de mécanique, both published in the Journal of the École Polytechnique (1809); Sur la libration de la lune, in Connaissances des temps (1821), etc.; and Sur le mouvement de la terre autour de son centre de gravité, in Mémoires de l'Académie (1827), etc. 
In the first of these memoirs, Poisson discusses the famous question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation for the disturbing forces. Poisson showed that the result could be extended to a second approximation, and thus made an important advance in planetary theory. The memoir is remarkable inasmuch as it roused Lagrange, after an interval of inactivity, to compose in his old age one of the greatest of his memoirs, entitled Sur la théorie des variations des éléments des planètes, et en particulier des variations des grands axes de leurs orbites. So highly did he think of Poisson's memoir that he made a copy of it with his own hand, which was found among his papers after his death. Poisson made important contributions to the theory of attraction. His name is one of the 72 names inscribed on the Eiffel Tower. Contributions Poisson's well-known correction of Laplace's second-order partial differential equation for the potential, $$\nabla^2 \phi = -4\pi\rho,$$ today named after him Poisson's equation or the potential theory equation, was first published in the Bulletin de la société philomatique (1813). At points where the density ρ = 0, we recover Laplace's equation: $$\nabla^2 \phi = 0.$$ In 1812 Poisson discovered that Laplace's equation is valid only outside of a solid. A rigorous proof for masses with variable density was first given by Carl Friedrich Gauss in 1839. Both equations have their equivalents in vector algebra. Poisson's equation for the divergence of the gradient of a scalar field φ in 3-dimensional space is: $$\nabla^2 \phi = \rho(x, y, z).$$ 
Consider for instance Poisson's equation for the surface electrical potential Ψ as a function of the density of electric charge ρe at a particular point: $$\nabla^2 \Psi = \frac{\partial^2 \Psi}{\partial x^2} + \frac{\partial^2 \Psi}{\partial y^2} + \frac{\partial^2 \Psi}{\partial z^2} = -\frac{\rho_e}{\varepsilon \varepsilon_0}.$$ The distribution of a charge in a fluid is unknown and we have to use the Poisson-Boltzmann equation: $$\nabla^2 \Psi = \frac{n_0 e}{\varepsilon \varepsilon_0} \left( e^{e\Psi(x,y,z)/k_B T} - e^{-e\Psi(x,y,z)/k_B T} \right),$$ which in most cases cannot be solved analytically. In polar coordinates the Poisson-Boltzmann equation is: $$\frac{1}{r^2} \frac{d}{dr} \left( r^2 \frac{d\Psi}{dr} \right) = \frac{n_0 e}{\varepsilon \varepsilon_0} \left( e^{e\Psi(r)/k_B T} - e^{-e\Psi(r)/k_B T} \right),$$ which also cannot be solved analytically. If a field φ is not scalar, the Poisson equation is still valid, as for example in 4-dimensional Minkowski space: $$\Box \phi_{ik} = \rho(x, y, z, ct).$$ If ρ(x, y, z) is a continuous function and if for r → ∞ (or if a point 'moves' to infinity) the function φ goes to 0 fast enough, a solution of Poisson's equation is the Newtonian potential of the function ρ(x, y, z): $$\phi_M = -\frac{1}{4\pi} \int \frac{\rho(x, y, z)\, dv}{r},$$ where r is the distance between a volume element dv and a point M. The integration runs over the whole space. Another "Poisson's integral" is the solution for the Green function for Laplace's equation with the Dirichlet condition over a circular disk: $$\phi(\xi, \eta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{R^2 - \rho^2}{R^2 + \rho^2 - 2R\rho\cos(\psi - \chi)} \phi(\chi)\, d\chi,$$ where $$\xi = \rho\cos\psi, \quad \eta = \rho\sin\psi,$$ and φ is the boundary condition holding on the disk's boundary. In the same manner, we define the Green function for the Laplace equation with the Dirichlet condition, ∇²φ = 0, over a sphere of radius R. 
This time the Green function is:

$$G(x,y,z;\xi,\eta,\zeta) = \frac{1}{r} - \frac{R}{r_1 \rho},$$

where $\rho = \sqrt{\xi^2 + \eta^2 + \zeta^2}$ is the distance of a point (ξ, η, ζ) from the center of the sphere, r is the distance between the points (x, y, z) and (ξ, η, ζ), and r₁ is the distance between the point (x, y, z) and the point (Rξ/ρ, Rη/ρ, Rζ/ρ), symmetrical to the point (ξ, η, ζ). Poisson's integral now has the form:

$$\phi(\xi, \eta, \zeta) = \frac{1}{4\pi} \iint_S \frac{R^2 - \rho^2}{R r^3}\, \phi\, ds.$$

Poisson's two most important memoirs on the subject are Sur l'attraction des sphéroides (Connaiss. d. temps, 1829) and Sur l'attraction d'un ellipsoide homogène (Mém. de l'acad., 1835). In concluding our selection from his physical memoirs, we may mention his memoir on the theory of waves (Mém. de l'acad., 1825). In pure mathematics, his most important works were his series of memoirs on definite integrals and his discussion of Fourier series, the latter paving the way for the classic researches of Peter Gustav Lejeune Dirichlet and Bernhard Riemann on the same subject; these are to be found in the Journal de l'École Polytechnique from 1813 to 1823, and in the Mémoires de l'Académie for 1823. He also studied Fourier integrals. We may also mention his essay on the calculus of variations (Mém. de l'acad., 1833) and his memoirs on the probability of the mean results of observations (Connaiss. d. temps, 1827, &c.). The Poisson distribution in probability theory is named after him. In his Traité de mécanique (2 vols. 8vo, 1811 and 1833), which was written in the style of Laplace and Lagrange and was long a standard work, he introduced many novelties, such as an explicit use of momenta:

$$p_i = \frac{\partial T}{\partial \dot q_i},$$

which influenced the work of Hamilton and Jacobi.
Besides his many memoirs, Poisson published a number of treatises, most of which were intended to form part of a great work on mathematical physics, which he did not live to complete. Among these may be mentioned Nouvelle théorie de l'action capillaire (4to, 1831); Théorie mathématique de la chaleur (4to, 1835); a Supplement to the same (4to, 1837); and Recherches sur la probabilité des jugements en matière criminelle et en matière civile (4to, 1837), all published at Paris. A translation of Poisson's Treatise on Mechanics was published in London in 1842. In 1815 Poisson studied integrations along paths in the complex plane. In 1831 he derived the Navier-Stokes equations independently of Claude-Louis Navier.

Flawed views on the wave theory of light

Poisson, despite his brilliance, showed surprising hubris regarding the wave theory of light. He was a member of the academic "old guard" at the Académie royale des sciences de l'Institut de France: staunch believers in the particle theory of light, alarmed at the wave theory's increasing acceptance. In 1818, the Académie set diffraction as the subject of its prize competition, certain that a particle theorist would win it. Poisson, relying on intuition rather than mathematics or scientific experiment, ridiculed the participant, civil engineer Augustin-Jean Fresnel, when he submitted a thesis explaining diffraction derived from analysis of both the Huygens–Fresnel principle and Young's double-slit experiment. [3] Poisson studied Fresnel's theory in detail and, as a dogmatic supporter of the particle theory of light, looked for a way to prove it wrong. Poisson thought that he had found a flaw when he argued that a consequence of Fresnel's theory was that there would exist an on-axis bright spot in the shadow of a circular obstacle blocking a point source of light, where there should be complete darkness according to the particle theory of light.
Surely, Poisson declared, this result was absurd; Fresnel's theory could not be true. (The Poisson spot is not easily observed in everyday situations, because most everyday sources of light are not good point sources.) However, the head of the committee, Dominique-François-Jean Arago, who incidentally later became Prime Minister of France, did not share Poisson's hubris and decided it was necessary to perform the experiment in more detail. He molded a 2 mm metallic disk to a glass plate with wax. [4] To everyone's surprise he succeeded in observing the predicted spot, which convinced most scientists of the wave nature of light. In the end Fresnel won the competition, much to Poisson's chagrin. After that, the corpuscular theory of light was vanquished, not to be heard of again until the 20th century revived it, in a very different form, as the newly developed wave-particle duality. Arago later noted that the diffraction bright spot (which later became known as both the Arago spot and the Poisson spot) had already been observed by Joseph-Nicolas Delisle [4] and Giacomo F. Maraldi [5] a century earlier.

References

^ "Library and Archive Catalogue". The Royal Society. Retrieved 4 October 2010. ^ François Arago (1786-1853) attributed to Poisson the quote: "La vie n'est bonne qu'à deux choses: à faire des mathématiques et à les professer." (Life is good for only two things: to do mathematics and to teach it.) See: J.-A. Barral, ed., Oeuvres complètes de François Arago, vol. II (Paris, France: Gide et J. Baudry, 1854), page 662. ^ Fresnel, A.J. (1868), Oeuvres complètes 1, Paris: Imprimerie impériale ^ a b Fresnel, A.J. (1868), Oeuvres complètes 1, Paris: Imprimerie impériale, p. 369 ^ Maraldi, G.F. (1723), 'Diverses expériences d'optique' in Mémoires de l'Académie Royale des Sciences, Imprimerie impériale, p. 111
In mathematical logic and model theory, one considers interpretations of syntactic expressions: terms without free variables are interpreted as elements of some structure, formulas without free variables have truth values, and formulas with free variables can be interpreted as relations. Multiple expressions may have identical interpretations. For example, $\ulcorner 1 + 1\urcorner$ and $\ulcorner 2\urcorner$ are both interpreted as $2$. Question: does anyone ever consider formal languages where terms can have multiple interpretations? Is there some standard approach or framework? I am thinking about this because I am trying to understand the $\omicron$ and $O$ notation in analysis, as in $$ \ln(x) =\omicron(x),\quad x\to +\infty. $$ Also, when calculating an indefinite integral, one often writes $$ \int 2x\,dx = x^2 + C. $$ Update. I understand that when the equality sign is used with $\omicron$/$O$ notation, it does not represent an equivalence relation. I also know that $\omicron(g(x))$ can be viewed as a set of functions. However, this interpretation does not fit my intuition well. When I write $\sin x = x + \omicron(x^2)$, $x\to 0$, I do not think about sets of functions; I think that I am replacing an anonymous, implicitly understood function with a placeholder. In other words, the designated object does not change (it is still a function or a number, not a set of functions or a set of numbers); only the notation is abbreviated and made less explicit, a bit like when I write "$1 + 2 + 3$" instead of "$((1 + 2) + 3)$".
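As a quick numerical illustration of the asymptotic statement above (my own sketch, not part of the question): $\ln(x) = \omicron(x)$ as $x \to +\infty$ just says that the ratio $\ln(x)/x$ tends to $0$.

```python
# Illustration (not from the question): ln(x) = o(x) as x -> +infinity
# means ln(x)/x -> 0; the ratio shrinks steadily as x grows.
import math

ratios = [math.log(x) / x for x in (1e2, 1e4, 1e6, 1e8)]
print(ratios)

# The sequence is strictly decreasing toward 0.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```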
I am trying to calculate the work done on an ideal gas in a piston set-up where the temperature is kept constant. I am given the volume, pressure, and temperature. I know from Boyle's law that volume is inversely proportional to pressure, that is, $$V \propto \frac{1}{p},$$ and using this I can calculate the two volumes I need for the equation for the work done: $$\Delta W = - \int^{V_2}_{V_1} p(V)\,dV.$$ What I do not understand is how to use this equation to calculate the work done; I think I am confused by the fact that I need to have $p(V)$, but I am not sure what this is. If you could help me to understand this, that would be great.
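One possible route (a sketch with illustrative values of my own choosing, not from the question): at constant temperature the ideal-gas law $pV = nRT$ gives $p(V) = nRT/V$, so the integral has the closed form $\Delta W = -nRT\ln(V_2/V_1)$, which a direct quadrature confirms.

```python
# Sketch (hypothetical values, not from the question): for an isothermal
# ideal-gas process, p(V) = n*R*T/V, so
#   W = -integral_{V1}^{V2} p(V) dV = -n*R*T*ln(V2/V1).
import math

R = 8.314                 # gas constant, J/(mol K)
n, T = 1.0, 300.0         # assumed amount of gas and temperature
V1, V2 = 1.0e-3, 2.0e-3   # m^3; the gas expands to twice its volume

# Closed form
W_exact = -n * R * T * math.log(V2 / V1)

# Midpoint-rule quadrature of -∫ p(V) dV as a cross-check
N = 100000
dV = (V2 - V1) / N
W_num = -sum(n * R * T / (V1 + (k + 0.5) * dV) * dV for k in range(N))

print(W_exact, W_num)  # about -1729 J (negative: the expanding gas does work
                       # on the surroundings)
```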
Research Open Access Published: Blow-up solutions, global existence, and exponential decay estimates for second order parabolic problems Boundary Value Problems volume 2015, Article number: 160 (2015) Abstract In this paper, we study blow-up solutions, global existence, and exponential decay estimates for a class of second order parabolic problems with Dirichlet boundary conditions. By constructing auxiliary functions and using maximum principles, we specify sufficient conditions for the existence of a blow-up solution, sufficient conditions for the global existence of the solution, an upper bound for the ‘blow-up time’, and some explicit exponential decay bounds for the solution and its derivatives. Introduction Many authors have studied blow-up solutions, global existence, and exponential decay estimates for nonlinear parabolic problems (see, for instance, [1–14]). In this paper, we investigate the following second order parabolic problems with Dirichlet boundary conditions: where \(D\subset\mathbb{R}^{N}\) (\(N\geq2\)) is a bounded convex domain with smooth boundary \(\partial D\in C^{2,\varepsilon}\), T is the maximal existence time of u, and D̅ is the closure of D. Set \(\mathbb {R}^{+}:=(0,+\infty)\). We assume, throughout the paper, that \(f(s)\) is a nonnegative \(C^{1}(\mathbb{R}^{+})\) function with \(f(0)=0\), \(g(s)\) is a positive \(C^{2}(\mathbb{R}^{+})\) function with \(g'(s)\leq0\) for any \(s\in\mathbb {R}^{+}\), \(k(s)\) is a \(C^{2}(\overline{\mathbb{R}^{+}})\) function with \(k'(s)>0\) for any \(s\in\overline{\mathbb{R}^{+}}\), and \(h(x)\) is a nonnegative \(C^{2}(\overline{D})\) function with \(h(x)\not\equiv0\) on \(\overline{D}\). Under these assumptions, it follows from the maximum principle [15] that \(u(x,t)\) is nonnegative. In [16], Payne and Philippin established conditions on the data sufficient to preclude blow-up and to ensure that the solution and its spatial gradient decay exponentially for all \(t>0\).
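To make the hypothesis class concrete (an illustrative sketch; the sample functions below are my own choices, not from the paper), one admissible data set is $f(s)=s^2$, $g(s)=e^{-s}$, $k(s)=s$: f is nonnegative with $f(0)=0$, g is positive and nonincreasing, and $k'>0$.

```python
# Illustrative check (sample functions chosen by me, not by the paper):
# f(s) = s^2, g(s) = exp(-s), k(s) = s satisfy the standing assumptions
# f >= 0, f(0) = 0, g > 0, g' <= 0, k' > 0 on a grid of s-values.
import math

f = lambda s: s**2
g = lambda s: math.exp(-s)
gp = lambda s: -math.exp(-s)   # g'
kp = lambda s: 1.0             # k'

assert f(0.0) == 0.0
grid = [0.01 * j for j in range(1, 1001)]  # sample points in (0, 10]
assert all(f(s) >= 0 for s in grid)
assert all(g(s) > 0 for s in grid)
assert all(gp(s) <= 0 for s in grid)
assert all(kp(s) > 0 for s in grid)
print("sample data satisfy the structural assumptions")
```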
In [17], Enache studied the following problem: His purpose was to establish conditions on the data sufficient to guarantee blow-up of the solution at some finite time, conditions ensuring that the solution remains bounded, and conditions yielding some explicit exponential decay bounds for the solution and its derivatives. Some authors have also discussed blow-up phenomena for parabolic problems with Dirichlet boundary conditions and obtained many interesting results (see, for instance, [18–24]). Many problems arising in heat conduction and mass diffusion can be modeled by the problem (1.1); therefore, in this paper, we study the problem (1.1). By constructing auxiliary functions and using maximum principles, we specify sufficient conditions for the existence of a blow-up solution, sufficient conditions for the global existence of the solution, an upper bound for the ‘blow-up time’, and some explicit exponential decay bounds for the solution and its derivatives. Our results extend and supplement those obtained in [16, 17]. We proceed as follows. In Section 2 we study the blow-up solution of (1.1). Section 3 is devoted to the global solution of (1.1) and the explicit exponential decay bounds for the solution. The explicit exponential decay bounds for the derivatives of the solution are given in Section 4. A few examples are presented in Section 5 to illustrate the applications of the abstract results. Blow-up solution In order to get the sufficient conditions for the existence of the blow-up solution, we define the following functions: The following theorem is the main result for the blow-up solution. Theorem 2.1 Let u be a classical solution of the problem (1.1). Suppose we have the following. (i) $$ \bigl(g(s)k'(s) \bigr)'\leq0,\qquad sf(s)g(s)\geq \frac {1}{2}(4+\alpha)F(s),\quad s\in\mathbb{R}^{+}, $$(2.1) where α is a positive constant. (ii) $$ \lim_{s\rightarrow0^{+}}s^{2}g(s)=0. 
$$(2.2) (iii) $$B(0)=\int_{D} \biggl(F(h)-\frac{1}{2}g^{2}(h)| \nabla h|^{2} \biggr)\,\mathrm{d}x\geq0. $$ Then \(u(x,t)\) must blow up at some finite time \(t^{*}< T\) and Proof It follows from the divergence theorem that Consequently, \(B(t)\) is a nondecreasing function in t and Thus, which implies Integrating (2.6) over \([0,t]\), we get which cannot hold for Hence, \(u(x,t)\) must blow up at some finite time \(t^{*}< T\). The proof is complete. □ Global solution In order to get the sufficient conditions for the existence of the global solution and the explicit exponential decay bounds for the solution, we suppose the following: where \(p(s)\) and \(q(s)\) are nondecreasing functions of s. Since the solution of problem (1.1) might blow up in a finite time \(t^{*}\), the solution exists in an interval \((0,\gamma)\) with \(\gamma< t^{*}\). Further we define Next, we give two lemmas from which the main results of this section are derived. Lemma 3.1 where c is a positive constant. Then Proof Construct an auxiliary function from which we have and In the following, we use the first Dirichlet eigenvalue \(\lambda_{1}\) of the Laplacian and the corresponding eigenfunction \(\Phi_{1}\) for a region \(\tilde{D}\supseteq D\): Further, since \(\Phi_{1}(x)\) is determined up to an arbitrary multiplicative constant, we can normalize \(\Phi_{1}(x)\) by Lemma 3.2 Then \(u(x,t)\) satisfies the following inequality: where Proof Construct the following auxiliary function: Here, (3.1) and the fact that \(g'\leq0\) imply Thus, we have Let which implies Theorem 3.1 Then we have and Proof Suppose, for contradiction, that (3.20) does not hold. Then there exists a first time \(\tilde{t}<\infty\) for which \(\frac{f(u)}{ug(u)}\) reaches the value \(\lambda_{1}\). 
Thus, we have and Hence, we have Exponential decay estimate In this section, we will use a comma to denote partial differentiation and adopt the summation convention, i.e., if an index is repeated, summation from 1 to N is understood; for example, Hence, the differentiated form of the first equation of (1.1) is In order to get the exponential decay bounds for the derivatives of the solution, we consider where \(a\geq1\) and \(0<\beta\leq1\) are some positive constants to be determined. Our main result is Theorem 4.1. Theorem 4.1 Let u be the classical solution of the problem (1.1). Suppose the following. (i) $$ 0< k'(s)\leq b\leq1,\quad s\in\mathbb{R}^{+}, $$(4.3) where b is a positive constant. (ii) $$ \lim_{s\rightarrow0^{+}}sg(s)=0. $$(4.4) (iii) $$ \frac{a}{b}:=M+\beta< \frac{\pi^{2}}{4d^{2}}g(\Gamma_{1})- \frac{f(\Gamma _{1})}{\Gamma_{1}}, $$(4.5) where d is the inradius of D and with c given in Lemma 3.2. Then \(\Psi(x,t)\) takes its maximum value at \(t=0\), i.e., with Proof The theorem will be proved in three steps. Step 1. Differentiating (4.2), we get and It follows from the first equation of (1.1) that Differentiating (4.1), we have i.e., It follows from (4.11) that Multiplying (4.12) by \(g^{2}u_{,i}\), we have It follows from (4.6) that Next, we use the Cauchy-Schwarz inequality in the following form: It follows from (4.6) that Moreover, by (4.5), it is easy to see It follows from Theorem 3.1 that which implies By means of the maximum principle, Ψ may take its maximum value in the following possible cases: (a) on the boundary \(\partial D\times(0,T)\); (b) at a point where \(\nabla u=0\); (c) at \(t=0\). Step 2. We first exclude case (a). Assume \(\Psi(x,t)\) takes its maximum value at \(\hat{Q}=(\hat{x},\hat{t})\) on ∂D. Since \(u=0\) on ∂D, we have With (1.1) and \(f(0)=0\), evaluated on \(\partial D\in C^{2,\varepsilon }\), we get Hence, we have which contradicts the maximum principle. 
Hence, Ψ cannot take its maximum value on ∂D. Step 3. In the following, we exclude case (b). Assume \(\Psi(x,t)\) takes its maximum value at a critical point \(\bar {Q}=(\bar{x},\bar{t})\). Thus we have Replacing t with t̄ in (4.29), we obtain from which we have where \(u_{M}=\max_{D}u(x,\bar{t})\). Here, (3.1) and the fact that \(g'(s)\leq0\) imply Next, making use of Cauchy’s mean value theorem and of (4.31), we get where ξ is some intermediate value between \(u(x,\bar{t})\) and \(u_{M}\). The fact that \(g'(s)\leq0\) implies With (4.34), we have Integrating (4.35) along a straight line from x̄ to the nearest point \(x_{0}\in\partial D\), we obtain from which we have which with \(u_{M}\leq u_{m}\) implies which contradicts (4.36). The proof is complete. □ Applications In what follows, as applications of the obtained results, two examples are presented. Example 5.1 Let u be a classical solution of the following problem: where \(D= \{x=(x_{1},x_{2},x_{3}) \mid |x|= (\sum^{3}_{i=1}x^{2}_{i} )^{1/2}<4 \}\) is the ball of radius 4 in \(\mathbb{R}^{3}\). The above problem can be transformed into the following problem: Now, We have and It follows from Theorem 2.1 that u blows up in a finite time \(t^{*}\) and Example 5.2 Let u be a classical solution of the following problem: where \(D= \{x=(x_{1},x_{2},x_{3}) \mid |x|= (\sum^{3}_{i=1}x^{2}_{i} )^{1/2}<\frac{\pi}{8}\}\) is the ball of radius \(\pi/8\) in \(\mathbb{R}^{3}\), \(\Phi_{1}(x)\) is the first eigenfunction of \(\tilde{D}=D\), and \(\max_{D}\Phi_{1}(x)=1\). The above problem may be turned into the following problem: Now we have Here, with Hence, we have which is the exponential decay estimate of the gradient of the solution. References 1. Levine, HA: The role of critical exponents in blow-up theorems. SIAM Rev. 32, 262-288 (1990) 2. Bandle, C, Brunner, H: Blow-up in diffusion equations: a survey. J. Comput. Appl. Math. 97, 3-22 (1998) 3. Deng, K, Levine, HA: The role of critical exponents in blow-up theorems: the sequel. J. Math. Anal. 
Appl. 243, 85-126 (2000) 4. Galaktionov, VA, Vázquez, JL: The problem of blow-up in nonlinear parabolic equations. Discrete Contin. Dyn. Syst. 8, 399-433 (2002) 5. Quittner, P, Souplet, P: Superlinear parabolic problems. In: Blow-up, Global Existence and Steady States. Birkhäuser Advanced Texts. Birkhäuser, Basel (2007) 6. Zhang, HL: Blow-up solutions and global solutions for nonlinear parabolic problems. Nonlinear Anal. TMA 69, 4567-4574 (2008) 7. Zhang, LL, Zhang, N, Li, LX: Blow-up solutions and global existence for a kind of quasilinear reaction-diffusion equations. Z. Anal. Anwend. 33, 247-258 (2014) 8. Payne, LE, Philippin, GA, Vernier-Piro, S: Blow-up, decay bounds and continuous dependence inequalities for a class of quasilinear parabolic problems. Math. Methods Appl. Sci. 29, 281-295 (2006) 9. Payne, LE, Philippin, GA: Decay bounds for solutions of second order parabolic problems and their derivatives II. Math. Inequal. Appl. 7, 534-549 (2004) 10. Payne, LE, Philippin, GA: Decay bounds for solutions of second order parabolic problems and their derivatives III. Z. Anal. Anwend. 23, 809-818 (2004) 11. Payne, LE, Philippin, GA, Vernier-Piro, S: Decay bounds for solutions of second order parabolic problems and their derivatives IV. Appl. Anal. 85, 293-302 (2006) 12. Philippin, GA, Vernier-Piro, S: Explicit exponential decay bounds in quasilinear parabolic problems. J. Inequal. Appl. 3, 1-23 (1999) 13. Philippin, GA, Vernier-Piro, S: Explicit decay bounds in some quasilinear one-dimensional parabolic problems. Math. Methods Appl. Sci. 22, 101-109 (1999) 14. Philippin, GA, Vernier-Piro, S: Decay estimates for solutions of a class of parabolic problems arising in filtration through porous media. Boll. Unione Mat. Ital. Sez. B Artic. Ric. Mat. (8) 4, 473-481 (2001) 15. Protter, MH, Weinberger, HF: Maximum Principles in Differential Equations. Prentice-Hall, Englewood Cliffs (1967) 16. 
Payne, LE, Philippin, GA: Decay bounds for solutions of second order parabolic problems and their derivatives. Math. Models Methods Appl. Sci. 5, 95-110 (1995) 17. Enache, C: Blow-up, global existence and exponential decay estimates for a class of quasilinear parabolic problems. Nonlinear Anal. TMA 69, 2864-2874 (2008) 18. Chen, SH: Global existence and blowup for quasilinear parabolic equations not in divergence form. J. Math. Anal. Appl. 401, 298-306 (2013) 19. Chen, SH, Yu, DM: Global existence and blowup solutions for quasilinear parabolic equations. J. Math. Anal. Appl. 335, 151-167 (2007) 20. Ding, JT: Blow-up solutions for a class of nonlinear parabolic equations with Dirichlet boundary conditions. Nonlinear Anal. TMA 52, 1645-1654 (2003) 21. Payne, LE, Schaefer, PW: Lower bounds for blow-up time in parabolic problems under Dirichlet conditions. J. Math. Anal. Appl. 328, 1196-1205 (2007) 22. Wang, H, He, YJ: On blow-up of solutions for a semilinear parabolic equation involving variable source and positive initial energy. Appl. Math. Lett. 26, 1008-1012 (2013) 23. Payne, LE, Song, JC: Lower bounds for blow-up time in a nonlinear parabolic problem. J. Math. Anal. Appl. 354, 394-396 (2009) 24. Xu, RZ, Cao, XY, Yu, T: Finite time blow-up and global solutions for a class of semilinear parabolic equations at high energy level. Nonlinear Anal., Real World Appl. 13, 197-202 (2012) Acknowledgements This work was supported by the National Natural Science Foundation of China (Nos. 61473180 and 61174082). Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
The prototype of L-functions is the Riemann zeta-function defined by \[ \zeta(s)=\sum_{n=1}^\infty \frac {1}{n^s} \] for $\Re s>1$. Euler was interested in this function and discovered the beautiful fact that $\zeta(2)=\frac{\pi^2}{6}$. He also found the fundamental identity \[ \zeta(s)=\prod_p \left(1-\frac{1}{p^s}\right)^{-1} \] and used it to prove that the series $\sum_p \frac{1}{p}$ diverges. Euler played with other divergent series and noticed a connection between values of $\zeta$ at $s$ and $1-s$, linked by powers of $\pi$ and Bernoulli numbers, such as the interpretations $$1+2+3+\dots =-\frac{1}{12}$$ and $$1+4+9+16+\dots =0.$$ Euler found hints of many of the remarkable features of L-functions that make them such worthy objects of study: the Euler product, special values, and the functional equation. It was left to Riemann to discover perhaps the most remarkable property of all, one that hasn't been proven yet: the Riemann Hypothesis that each non-real zero of $\zeta(s)$ has real part equal to $\frac 12$.

Riemann's memoir

In 1859, in an 8-page paper “Über die Anzahl der Primzahlen unter einer gegebenen Grösse” read to the Berlin Academy of Sciences by Gauss' former student Encke, Riemann first considered $\zeta(s)$ as a function of a complex variable $s$. He proved that $\zeta(s)$ is a meromorphic function in the complex $s$-plane whose only singularity is a simple pole at $s=1$ with residue 1. He went on to prove that $$\xi(s)=\frac{1}{2} s(s-1)\pi^{-s/2}\Gamma(s/2) \zeta(s)$$ is entire of order 1 and satisfies the functional equation $$\xi(s)=\xi(1-s).$$ He proved that $\xi(s)$ has infinitely many zeros, all of which are in the critical strip $0\le \sigma\le 1$, where $\sigma$ denotes the real part of $s$. Riemann calculated the first few of these zeros (as was discovered later when Siegel studied Riemann's notes left to the Göttingen library) and found them to lie on the critical line $\sigma=1/2$. 
He conjectured that all of the zeros are on this line. And the analytic theory of L-functions was born!

Dirichlet L-functions

The first use of the letter L to denote these functions was by Dirichlet in 1837 (see Werke I [MR:249268], pages 313-342), who used L-functions to prove that there are infinitely many primes in any (primitive) arithmetic progression. He considered series of the form $$L(s,\chi)=\sum_{n=1}^\infty \frac{\chi(n)}{n^s}$$ where $\chi$ is a (Dirichlet) character, which is the extension to the integers of a character of the group $(\mathbb Z/q\mathbb Z)^*$. For example, the arithmetic function $\chi$ defined by \[\chi(n)=\begin{cases} 1 & \textrm{ if } n\equiv 1 \bmod 3\\ -1 & \textrm{ if } n\equiv 2\bmod 3\\ 0 & \textrm{ if }n\equiv 0\bmod 3\end{cases}\] is a Dirichlet character modulo 3. The above L-function has an Euler product $$L(s,\chi)=\prod_{p }\left(1-\frac{\chi(p)}{p^s}\right)^{-1}$$ and satisfies a functional equation: $$\left(\frac{3}{\pi}\right)^{\frac{s}{2}}\Gamma\left(\frac{s+1}{2}\right) L(s,\chi)$$ is invariant under $s\to 1-s$. Also, note the special value $$L(1,\chi)= \frac{\pi}{3\sqrt{3}}.$$ Dirichlet needed to know that his L-functions did not vanish at 1, and he used special values to prove this fact. Dirichlet's original proof was for prime moduli; for composite moduli he required his class number formula, which was proven in 1839-1840 (see his Werke I, pp. 411-496).

Dedekind zeta functions

In 1877 Dedekind began generalizing some of Dirichlet's work to number fields. His first paper on the topic is Über die Anzahl der Ideal-Klassen in den verschiedenen Ordnungen eines endlichen Körpers [MR:237282]. It appears that Hecke named the Dedekind zeta-function after him. The automorphy properties of Hilbert modular forms for real quadratic fields were considered in the 1901 Göttingen University Habilitationsschrift of Otto Blumenthal. Therein he refers to unpublished work of 1893-1894 of his advisor David Hilbert. 
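As a numerical sanity check on that special value (my own illustration, not from the article): summing $\chi(n)/n$ for the mod-3 character defined above converges to $\pi/(3\sqrt 3) \approx 0.6046$.

```python
# Illustration (not from the article): the Dirichlet series for the mod-3
# character chi (1 on n ≡ 1, -1 on n ≡ 2, 0 on n ≡ 0 mod 3), evaluated at
# s = 1, approaches the special value pi / (3*sqrt(3)).
import math

def chi(n):
    return {0: 0, 1: 1, 2: -1}[n % 3]

L1 = sum(chi(n) / n for n in range(1, 10**6 + 1))
target = math.pi / (3 * math.sqrt(3))
print(L1, target)  # both about 0.60460
```

Because the partial sums of $\chi$ are bounded, the tail of the series is $O(1/N)$, so even this naive truncation is accurate to several digits.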
Hilbert modular forms were significantly developed by Hecke in his 1910 dissertation.

The L-function of $\Delta$

In 1916 Ramanujan [MR:2280861] made the startling observation that the coefficients of the $\Delta$ function are multiplicative! Ramanujan defined $\tau(n)$ by $$q\prod_{n=1}^\infty (1-q^n)^{24}=\sum_{n=1}^\infty \tau(n)q^n$$ and conjectured that $\tau(mn)=\tau(m)\tau(n)$ whenever $m$ and $n$ are relatively prime, moreover that $$\sum_{r=0}^\infty \tau(p^r)X^r = (1-\tau(p)X +p^{11}X^2)^{-1},$$ and that $|\tau(p)|< 2p^{11/2}$. The first two of these astounding conjectures were verified by Mordell in 1917 (see “On Mr. Ramanujan's Empirical Expansions of Modular Functions,” Proc. Cambridge Phil. Soc. 19 (1917)) and the last by Deligne in 1974, in work for which he won a Fields Medal. It was already known that $$\Delta(z)=\sum_{n=1}^\infty \tau(n)e(nz)$$ is a modular form of weight 12. Thus $L(s)=\sum_{n=1}^\infty \tau(n)n^{-s}$ has a functional equation and (by Mordell) an Euler product. Ramanujan's discovery ushered in a new age of arithmetic L-functions.

Hecke L-functions

The L-functions associated with finite characters of number fields (often called Dirichlet characters or finite Hecke characters) were considered by Hecke in a series of papers beginning in 1917. In the first paper of this series, Über die Zetafunktion beliebiger algebraischer Zahlkörper [Gött. Nachr. 1917, 77-89 (1917)], he refers to $\zeta_k(s)$ as the "Dirichlet-Dedekindsche Zetafunktion". Here he gives the functional equation for the Dedekind zeta-function of any number field. In the second paper, Über eine neue Anwendung der Zetafunktionen auf die Arithmetik der Zahlkörper, also in 1917, he refers to it as "der Dedekindschen Funktion $\zeta_k(s)$." This paper ends with the remark that for abelian extensions $\zeta_k(s)$ can be factored as a product of Dirichlet L-functions. 
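Returning to Ramanujan's conjectures above: the first two are easy to test numerically (my own sketch, not from the article). Expanding $q\prod_{n\ge 1}(1-q^n)^{24}$ as a truncated power series gives the first few $\tau(n)$, and one can check, e.g., $\tau(6)=\tau(2)\tau(3)$ and the Euler-factor relation $\tau(p^2)=\tau(p)^2-p^{11}$.

```python
# Illustration (not from the article): compute tau(n) from the q-expansion
# q * prod_{n>=1} (1 - q^n)^24, truncated at q^N, and check multiplicativity
# and the Euler-factor relation tau(p^2) = tau(p)^2 - p^11 for p = 2.
def tau_coeffs(N):
    c = [0] * (N + 1)
    c[1] = 1  # start from the series "q"
    for n in range(1, N + 1):
        for _ in range(24):  # multiply by (1 - q^n), 24 times
            # descending k keeps each pass a clean polynomial product
            for k in range(N, n - 1, -1):
                c[k] -= c[k - n]
    return c

tau = tau_coeffs(10)
print(tau[1:7])  # [1, -24, 252, -1472, 4830, -6048]

assert tau[6] == tau[2] * tau[3]     # multiplicativity: gcd(2, 3) = 1
assert tau[4] == tau[2]**2 - 2**11   # tau(p^2) = tau(p)^2 - p^11
```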
In the third paper in the series, he refers again to the "Dirichlet-Dedekindsche Funktion $\zeta_k(s)$". Here he proves the functional equation for L-functions of finite Hecke characters. The L-functions associated with Hecke's Größencharaktere, i.e. characters of infinite order, and their functional equations appear in a two-paper series beginning in 1918 [MR:1544392]. It is only in the second of these papers (1920) that Hecke uses the term "Größencharaktere". Hecke [MR:1513122], building on Mordell, introduced operators acting on vector spaces of modular forms. The forms which are simultaneous eigenfunctions of these operators have multiplicative Fourier coefficients, so their associated Dirichlet series have Euler products and functional equations. Hecke mainly worked with level 1 modular forms. The more subtle theory of Hecke operators for spaces of higher level and with character was developed by Atkin and Lehner, and by Li [MR:268123].

Artin L-functions

Artin L-functions were introduced in 1924 [MR:3069421]. Artin's conjecture that these L-functions are entire remains unsolved, though many instances are known.

Siegel modular forms

In 1939 Siegel [MR:0001251] introduced his theory of higher degree modular forms and their L-functions.

Rankin-Selberg convolution

In 1939 R. A. Rankin [MR:0000411] studied Ramanujan's conjecture about the size of $\tau(n)$ and was led to consider the analytic properties of $g(s)=\sum_{n=1}^\infty \tau(n)^2 n^{-s}$. He proved that $\zeta(2s-22) g(s)$ is analytic everywhere except for a simple pole at $s=12$ and satisfies a functional equation. This was the beginning of the Rankin-Selberg convolution, which we now realize was a hugely important event in the theory of L-functions. Selberg [MR:0002626] did the same calculation around the same time.

L-functions of nonholomorphic modular forms

In 1949 Hans Maass made his profound discovery that there are L-functions associated with non-holomorphic automorphic forms. 
In his Math Review of Maass' article [MR:0031519], J. Lehner writes: "In Hecke's theory of Dirichlet series with Euler products we associate, roughly speaking, a Dirichlet series with an automorphic function; the invariance of the latter under linear substitutions is used, together with the Mellin transform, to derive a functional equation for the Dirichlet series. This suffices for the discussion of the $\zeta$-function of an imaginary quadratic field, for example, but not of a real quadratic field. In order to handle the latter case, the author defines a class of functions ('automorphic wave functions') to take the place of the analytic automorphic functions of Hecke's theory." Maass' work prompted André Weil to remark, "Il a fallu Maass pour nous sortir du ghetto des fonctions holomorphes." (It took Maass to get us out of the ghetto of holomorphic functions.)

Tate's thesis

In 1950 Tate's PhD thesis, "Fourier analysis in number fields and Hecke's zeta-functions", reprinted in Algebraic Number Theory (Proc. Instructional Conf., Brighton, 1965), pp. 305-347, Thompson, Washington, D.C., 1967, introduced a way to do harmonic analysis on adelic spaces and led to great simplifications in the calculation of Euler factors and functional equations of L-functions over number fields in particular, and generally for any L-function with conductor larger than 1. This work paved the way for subsequent work associating L-functions to automorphic forms.

Hasse-Weil L-functions

In 1955 Hasse [MR:76807] introduced the zeta-function associated with a curve, today called the Hasse-Weil zeta function. For a Fermat curve $x^m+y^m=1$ he obtains an expression for his zeta-function in terms of L-functions with a Hecke character.

Langlands Program

Langlands' theory of automorphic forms ushered in an age of a new understanding of the profundity of L-functions. His small book Euler Products of 1967 contains the beginnings of a general theory.
In 1969 Ogg [MR:0246819] proved the holomorphy and functional equation for the Rankin-Selberg convolution of two inequivalent cusp forms of the same level. This is a degree 4 L-function. In 1972 Jacquet [MR:0562503] considered the Rankin-Selberg convolution of two GL(2) cusp forms and obtained the analytic continuation and functional equation of the associated L-function. In 1979 Winnie Li [MR:0550843] completed the Rankin-Selberg story for two arbitrary GL(2) cusp forms. Her techniques really require the use of Tate's thesis and the theory of automorphic forms, as it is virtually impossible to figure out the Euler factors at bad primes without these. In 1971 Andrianov [MR:0340178] explicitly constructed the L-function for a genus 2 Siegel modular form and gave its analytic continuation and functional equation. In 1972 Godement and Jacquet [MR:0342495] defined the L-function of a general automorphic form on a reductive group and obtained the analytic continuation and functional equation. It wasn't until the mid-1970s that Shimura [MR:0382176] and, shortly after but independently, Zagier [MR:0485703] proved that the L-function associated with the symmetric square of a cusp form is entire. See also the footnote in Selberg's paper [MR:1220477] that suggests that Selberg had discovered this many years earlier. In 1977 Asai [MR:0429751] obtained the holomorphy and functional equation of certain degree 4 L-functions associated with Hilbert modular forms over quadratic fields. These are known as Asai L-functions. In 1980 Langlands proved Artin's conjecture for 2-dimensional representations of tetrahedral type (and also for certain octahedral representations) [MR:0574808]. In 1981 Tunnell proved Artin's conjecture for octahedral representations [MR:0621884]. In work beginning in 1981 with [MR:0610479], Shahidi proved the holomorphy and functional equation for many automorphic L-functions.
In 1983 Jacquet, Piatetskii-Shapiro, and Shalika [MR:0701565] obtained the meromorphicity and functional equation for the L-function that is a general Rankin-Selberg convolution of automorphic L-functions. In 1995 Andrew Wiles [MR:1333035] proved the holomorphy and functional equation of the Hasse-Weil zeta-function of most elliptic curves. In 2001 this work was extended by Breuil, Conrad, Diamond, and Taylor to include all elliptic curves [MR:1839918].
The probability density function (PDF) of the normal distribution or Bell Curve Gaussian Distribution, by Guy Lakeman. The probability density function of the normal (Gaussian) distribution is f(x) = (1/(σ√(2π))) exp(−(x−μ)²/(2σ²)), where the parameter μ is the mean or expectation of the distribution (and also its median and mode), and the parameter σ is its standard deviation, with its variance then σ². A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. However, those who enjoy upskirts are called deviants and have a variable distribution :) If μ = 0 and σ = 1, this is the standard normal distribution. If the higher education numbers are increased, then the group decision-making ability of society would be raised above that of a middle teenager, as it is now. BUT governments can control children by using bad parenting techniques, pandering to the pleasure principle, so they will make higher education more and more difficult, as they are doing. 85% of the population has a qualification level equal to or below a 12th grader (17 years old); the chance of finding someone with any sense is low (~1 in 6), and the outcome of them being chosen by those who are uneducated in the policies they are to decide is even more rare!!! Experience means little if you don't have enough brain to analyse it. Democracy is only as good as the ability of the voters to FULLY understand the implications of the policies on which they vote, both the context and the various perspectives. National voting of unqualified voters on specific policy issues is the sign of corrupt manipulation.
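Returning to the normal density at the top of this note, here is a quick numerical sanity check (a sketch; the function name normalPDF is my own, not from the original):

```go
package main

import (
	"fmt"
	"math"
)

// normalPDF evaluates the Gaussian density with mean mu and
// standard deviation sigma at the point x.
func normalPDF(x, mu, sigma float64) float64 {
	z := (x - mu) / sigma
	return math.Exp(-z*z/2) / (sigma * math.Sqrt(2*math.Pi))
}

func main() {
	// At the mean of the standard normal, the density is 1/sqrt(2*pi).
	fmt.Printf("%.4f\n", normalPDF(0, 0, 1)) // prints 0.3989
}
```

The density is symmetric about μ, so normalPDF(μ+d, μ, σ) equals normalPDF(μ−d, μ, σ) for any offset d.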
Democracy: Where a group allows the decision ability of a teenager to decide on a choice of mis-representatives who are unqualified to make judgement on social policies that affect the lives of millions. The kind of children who would vote for King Kong, who can hold a girl in one hand and swat fighter jets out of the sky off the tallest building, doesn't have a brain cell or thought to call his own, but has a nice smile and offers little girls sweets. Prey & Predator Physical meaning of the equations The Lotka–Volterra model makes a number of assumptions about the environment and evolution of the predator and prey populations:
1. The prey population finds ample food at all times.
2. The food supply of the predator population depends entirely on the size of the prey population.
3. The rate of change of population is proportional to its size.
4. During the process, the environment does not change in favour of one species and genetic adaptation is inconsequential.
5. Predators have limitless appetite.
As differential equations are used, the solution is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.[23] Prey When multiplied out, the prey equation becomes dx/dt = αx − βxy. The prey are assumed to have an unlimited food supply, and to reproduce exponentially unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero then there can be no predation. With these two terms the equation above can be interpreted as: the change in the prey's numbers is given by its own growth minus the rate at which it is preyed upon. Predators The predator equation becomes dy/dt = δxy − γy. In this equation, δxy represents the growth of the predator population.
(Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey.) γy represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses the change in the predator population as growth fueled by the food supply, minus natural death. Using Systems thinking for technology in education Levin, B. B., & Schrum, L. (2013). Using systems thinking to leverage technology for school improvement: Lessons learned from award-winning secondary Schools/Districts. Journal of Research on Technology in Education, 46(1), 29-51. Population Stock and Flow The birth fraction and life expectancy are variables and are set as per page 66 of the text. The population is the stock and the births and deaths are the flows. Predator-Prey Model ("Lotka-Volterra") Dynamic simulation modelers are particularly interested in understanding and being able to distinguish between the behavior of stocks and flows that result from internal interactions and those that result from external forces acting on a system. For some time modelers have been particularly interested in internal interactions that result in stable oscillations in the absence of any external forces acting on a system. The model in this last scenario was independently developed by Alfred Lotka (1924) and Vito Volterra (1926).
Lotka was interested in understanding internal dynamics that might explain oscillations in moth and butterfly populations and the parasitoids that attack them. Volterra was interested in explaining an increase in coastal populations of predatory fish and a decrease in their prey that was observed during World War I, when human fishing pressures on the predator species declined. Both discovered that a relatively simple model is capable of producing the cyclical behaviors they observed. Since that time, several researchers have been able to reproduce the modeling dynamics in simple experimental systems consisting of only predators and prey. It is now generally recognized that the model world that Lotka and Volterra produced is too simple to explain the complexity of most predator-prey dynamics in nature. And yet, the model significantly advanced our understanding of the critical role of feedback in predator-prey interactions and in feeding relationships that result in community dynamics.
Launched at an Angle An object is projected with an initial velocity u at an angle to the horizontal direction. We assume that there is no air resistance. Also, since the body first goes up and then comes down after reaching the highest point, we will use the Cartesian convention for signs of the different physical quantities. The acceleration due to gravity g will be negative as it acts downwards. The height and horizontal distance at time t are h = v_oy*t − g*t^2/2 and l = v_ox*t.
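The projectile relations h = v_oy·t − g·t²/2 and l = v_ox·t can be checked with a short numeric sketch (function and variable names are mine; g is taken as 9.8 m/s² as an illustrative value):

```go
package main

import (
	"fmt"
	"math"
)

const g = 9.8 // magnitude of the gravitational acceleration, m/s^2

// height returns the vertical position after time t for an initial
// vertical velocity component voy (upward positive).
func height(voy, t float64) float64 { return voy*t - g*t*t/2 }

// distance returns the horizontal position after time t for an
// initial horizontal velocity component vox.
func distance(vox, t float64) float64 { return vox * t }

func main() {
	// Launch at 20 m/s at 45 degrees above the horizontal.
	u, angle := 20.0, 45*math.Pi/180
	vox, voy := u*math.Cos(angle), u*math.Sin(angle)
	fmt.Printf("h = %.2f, l = %.2f\n", height(voy, 1), distance(vox, 1))
}
```

The vertical coordinate uses the vertical component v_oy and the horizontal coordinate uses v_ox, consistent with the sign convention described above.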
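The predator-prey equations dx/dt = αx − βxy and dy/dt = δxy − γy can be integrated numerically to see the oscillations the text describes. This is a minimal sketch using a simple Euler step; the parameter values are illustrative only, not taken from Lotka's or Volterra's work:

```go
package main

import "fmt"

// step advances the Lotka-Volterra system
//   dx/dt = a*x - b*x*y   (prey)
//   dy/dt = d*x*y - c*y   (predators)
// by one Euler step of size dt.
func step(x, y, a, b, c, d, dt float64) (float64, float64) {
	dx := a*x - b*x*y
	dy := d*x*y - c*y
	return x + dx*dt, y + dy*dt
}

func main() {
	x, y := 10.0, 5.0 // initial prey and predator populations
	a, b, c, d := 1.1, 0.4, 0.4, 0.1
	for i := 0; i < 1000; i++ {
		x, y = step(x, y, a, b, c, d, 0.01)
	}
	fmt.Printf("prey %.2f, predators %.2f\n", x, y)
}
```

With no predators (y = 0) the prey grow exponentially, and with no prey (x = 0) the predators decay, exactly as the interpretation of the individual terms predicts; a smaller dt or a Runge-Kutta step gives cleaner closed orbits.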
My Lyttle Lytton entry for 2020: "Actually, you do like this," maverick CEO Eric Davies, Ph.D., insisted as he pulled my foreskin back and cunnilingussed my pee hole. I’m not exactly proud of it, but I’m glad it’s no longer in my head. It’s hot tonight, and this is going to be mostly the whiskey talking while I wait for it to get cool enough to sleep. I’ve killed the mosquitos I’ve seen, however, so until then I have only Haskell to keep me company. This morning, tef tweeted about monads, which sent the Haskell pack his way with barks of not getting it. Just now, pinboard was reminded of some guy’s rage against Esperanto, from back in the ’90s when the web was fun and mostly devoted to things like explaining how "The Downward Spiral" is a concept album or destroying Unix instead of each other’s mental health. For the Haskell pack: I did a PhD in the type of math that necessitates a lot of category theory, and I have looked at your use of category theory, and judged it to be unnecessary and pretentious and mainly focused on making you look smart while being entirely trivial. But this is not that kind of blog post, one that gets too tangled up in whether category theory is useful to get to the point. (If nothing else, Pijul proves that category theory is useful.) We’re here to discuss how Haskell as a whole is nonsense if you’re not an academic. Our claim is that Haskell is a useless language for writing software that has users. Our point is simple, and focused on IO. We propose that you can measure how user-facing a program or language is by measuring how much of its time it spends or worries about doing IO.
That is, after all, the medium through which anyone who is not a program’s author (of which there may be many) will interact with the program. The time spent doing IO can be on the command line, via a GUI, over a network, or wherever; but to be a serious contender for user-facing programs, a language has to make IO easy. C is a terrible language for most new things today. Anyone writing new software in C that they expect to be used by people other than thoroughly vetted ones needs to be able to explain why they’ve chosen C. At the same time, a lot of us are still exposed to C through the BSD or Linux kernels and syscalls, the undying popularity of K&R, random software on the internet, or other vectors. The culture around C invented the modern language textbook, K&R, and the modern user-facing program, "Hello world", both of which spend most or all of their time dealing with IO to talk to you or other users. I claim that making IO as simple as possible, which C does for all its faults, is analogous to making it as simple as possible for other people to talk to you in designed languages. [1] Esperanto shows you can fail at that goal, if you even had it, for it favors sounds native to European languages above others. Likewise, Haskell shows you can fail at the goal of making IO easy, if you even had it, for it does not. Haskell is a purely functional, lazily evaluated language, with a type system. Like tef explains, that is great, until you run into IO. Up until that point, you could rearrange computations in any order you liked, if they needed to be done at all. As soon as you need to do IO, though, you need something to happen before another thing, which makes you very unhappy if you’re Haskell. It in fact makes you so unhappy that you’ll drag the entire lost-at-sea community of category theorists into the orbit of your language just so you can have an abstraction for doing IO that fits into your model of the world.
This abstraction, monads, then comes with the added benefit of being abstract enough that all of your programmers can spend their time explaining it to each other instead of writing programs that use the abstraction to do IO, and therefore deal with any actual users. Haskell is where programmers go to not have users. Here’s something I thought of when I couldn’t sleep last night. The curvature tensor of a Kähler metric can be viewed as a Hermitian form on \(\bigwedge^{1,1} T_X^*\) by mapping \(\operatorname{End} T_X \to \bigwedge^{1,1} T_X^*\) via the metric. If we’re on a compact Kähler manifold with zero first Chern class, then for each Kähler class \(\omega\) and \((1,1)\)-classes \(u, v\), we can pick the Ricci-flat metric in \(\omega\) and the harmonic representatives of \(u, v\). If \(R\) is the curvature tensor of the metric, viewed as a Hermitian form, we can then set \[ b(u,v)(\omega) := \int_X R(u, v) \, dV_{\omega}. \] This defines a smooth bilinear form \(b\) on the tangent space of the Kähler cone of \(X\). Besides being fun times, can we say anything interesting about \(b\)? For example, what is its norm with respect to the Riemannian metric on the Kähler cone, or its trace with respect to that metric? Can we integrate it over some subset of the cone? I only have time to work on the arXiv project so often, so I’m taking a lot of time between sessions to think about what I’m doing. When I look at my system design notes, I feel like all the decisions I’m making are the obvious choices, but they also were not at all obvious when I started thinking about them. It’s a good feeling. I haven’t made a lot of progress on my projects. I did create a Scaleway VM and shoved an OAI harvester on there that’s happily downloading the arXiv’s backlog of metadata. I can also parse the XML it fetches, and have some ideas about how I’m going to store it. This lack of progress mostly comes from me being nerd-sniped into thinking about bounded queues under load.
My old work project wanted to use a FIFO queue to hold its requests. That is a bad idea, because FIFO queues perform very poorly under load, as I went a little overboard in demonstrating. Funnily enough, that very simple thing is one of my most popular Github projects ever. Counterpoint: I truly, madly, deeply want to ignore all new and existing social networks. To motivate me to actually finish some of the projects I start, I’m going to try announcing them to the world. Shame me into finishing these: A nicer arXiv frontend of daily new additions. A modern typesetting of Beauville’s Surfaces algébriques complexes. Shame. Shame. Shame. Suppose we have an HTTP service. The behaviour of the service depends on some configuration that may change at runtime; it may reload a static configuration file on SIGHUP, need to react to changes in its service discovery mechanism, or have A/B test state or features toggled on and off. In Go, a naive way of handling this is by writing our configuration state to a struct and updating it in background goroutines:

type State struct {
	frobinate bool
}

var state State

func handler(w http.ResponseWriter, r *http.Request) {
	if state.frobinate {
		fmt.Fprintln(w, "Great success")
	} else {
		fmt.Fprintln(w, "Great non-success")
	}
}

This naive approach has at least two problems: The state may change while we’re processing a request, causing us to process part of the request with one state, and another part with another. This isn’t a big deal in our example, but becomes more of a problem as the time needed to handle a request increases. There are no synchronization primitives in play, so updating the state has data races. Check out the working example in this commit to see the first problem in action.
One way to resolve these problems is to add a mutex and to pass copies of the state to the request handlers:

type State struct {
	mu        sync.Mutex
	frobinate bool
}

// Copy may be arbitrarily complicated if State contains slices, maps,
// pointers, or other structs.
func (s *State) Copy() State {
	s.mu.Lock()
	defer s.mu.Unlock()
	return State{frobinate: s.frobinate}
}

var state = &State{}

func handler(w http.ResponseWriter, r *http.Request) {
	s := state.Copy()
	if s.frobinate {
		fmt.Fprintln(w, "Great success")
	} else {
		fmt.Fprintln(w, "Great non-success")
	}
}

The background goroutine that updates the state then either does so through a dedicated method that locks/unlocks the mutex, or does the locking itself. While this works fine, it relies on global state and uses none of Go’s built-in concurrency features. We were promised a brave new world, and encouraged to "share memory by communicating". This points the way to another solution to our two problems that leverages Go’s primitives better:

type State struct {
	frobinate bool
}

// Copy may be arbitrarily complicated if State contains slices, maps,
// pointers, or other structs.
func (s State) Copy() State {
	return s
}

var stateCh = make(chan State)
var toggle = make(chan struct{})

func stateManager() {
	state := State{}
	for {
		select {
		case stateCh <- state.Copy():
		case <-toggle:
			state.frobinate = !state.frobinate
		}
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	s := <-stateCh
	if s.frobinate {
		fmt.Fprintln(w, "Great success")
	} else {
		fmt.Fprintln(w, "Great non-success")
	}
}

A complete working example is here. Note that the working example doesn’t rely on any global state to pass state to the handler functions, using closures instead. Modifying the state is also only possible within the stateManager function. The working example could also be extended to have middleware copy the state to each request processing function instead of the ad-hoc way done there.
It is still up to the developers to ensure the Copy method doesn’t pass mutable state around, but they no longer need to deal with mutexes and locks themselves. This also means that any future additions to the program that use the same pattern won’t need to worry about those locks either. There are no silver bullets in heavily concurrent systems, but in Go we can choose to not deal with some of the footguns we would need to handle in C, Java or other similar languages.
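As an aside, the same copy-on-read discipline can also be had without a dedicated manager goroutine by publishing immutable snapshots through the standard sync/atomic package. This is my own sketch of the idea, not part of the post's working example:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type State struct {
	frobinate bool
}

// current holds the most recently published State snapshot.
var current atomic.Value

// load returns the latest snapshot; callers get a value copy.
func load() State { return current.Load().(State) }

// publish replaces the snapshot; callers must treat State as immutable
// and always store a fresh value rather than mutating an old one.
func publish(s State) { current.Store(s) }

func main() {
	publish(State{frobinate: false})
	publish(State{frobinate: true})
	fmt.Println(load().frobinate) // true
}
```

The trade-off versus the channel version is that readers never block, but writers must build a complete new State for every change, and there is no single place like stateManager that serializes updates.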
WordPress.com supports the use of standard LaTeX in posts and comments, which is displayed using embedded PNG images with the LaTeX source code appearing in the alt text. So, including inline maths in your comments is easy! All you have to do is to surround your code by the tags $latex … $. For example, typing $latex \int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt\pi$ gives the rendered formula. Note that the initial space after the opening $latex is necessary. If this is omitted then the code will not be processed as LaTeX and will just appear as typed. There are some problems that can occur, but are easily avoided. As WordPress.com has some support for html in comments, any < or > signs could be misinterpreted as html tags, messing up the comment. This is avoided by either ensuring that spaces are left around these symbols or by using &lt; and &gt; in place of < and > respectively. Also, you can have newlines in your LaTeX code, which are treated as whitespace as with regular LaTeX. However, do not put a newline directly after the opening $latex, as this prevents the LaTeX code from being processed. Any LaTeX containing errors is displayed as an error image. Unfortunately, WordPress.com does not support previewing or editing comments. So, if you want to test your LaTeX code before posting as a comment, feel free to first post it as a comment here. Also, I will correct any incorrectly entered LaTeX in comments as I come across them. Happy commenting! The displaymath environment In standard LaTeX it is possible to use the displaymath environment, which causes the formula to appear centered on its own line, with whitespace above and below, and with slightly different formatting making use of the additional space. WordPress.com does not directly support displaymath but, fortunately, it is possible to emulate it with inline LaTeX. This requires doing two things. First, use the html p (paragraph) tag to insert whitespace above and below the expression with the align attribute set to “center”.
Then, the expression can be made to appear with the correct displaymath formatting by starting it with the \displaystyle LaTeX keyword. For example, typing <p align="center">$latex \displaystyle\int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt\pi$</p> gives the centered formula. Typing this afresh every time you want to display a formula is likely to lead to mistakes, so you can simply copy the expression above directly from this page and paste it into your comment, replacing my LaTeX code with yours while retaining the \displaystyle command. Multiline expressions and alignment It is often desirable to break long formulas and expressions down into multiple lines. In standard LaTeX this can be done with the align environment, which uses \\ to end a line and aligns the expressions on the & character. However, the align environment is not supported in WordPress. Instead, the array environment can be used, as this works with standard LaTeX inside inline maths formulas. There are some things that should be done to get this to display properly though. To reduce the rather excessive amount of space that appears in place of the & alignment characters, the command \setlength\arraycolsep{2pt} should be used immediately before entering the array environment. Secondly, to make sure that enough space appears between lines, put \smallskip immediately before the \\ command for a newline. Also, to get correct displaymath formatting, the \displaystyle command should be reentered after every & alignment character and at the start of each line. You can copy and paste the following text into your comment, replacing [my-latex] with your LaTeX code. <p align="center">$latex \setlength\arraycolsep{2pt}\begin{array}{rl} \displaystyle[my-latex]&\displaystyle[my-latex]\smallskip\\ \displaystyle[my-latex]&\displaystyle[my-latex] \end{array}$</p> For example, the following multiline expression was produced by typing the following text.
<p align="center">$latex \setlength\arraycolsep{2pt}\begin{array}{rl} \displaystyle\int_{-\infty}^\infty e^{-x^2}\,dx&\displaystyle=\sqrt\pi,\smallskip\\ \displaystyle\int_{-\infty}^\infty e^{-ax^2}\,dx&\displaystyle=\int_{-\infty}^\infty e^{-y^2}\frac{dy}{\sqrt a}\smallskip\\ &\displaystyle=\sqrt{\frac\pi a} \end{array}$</p> Equation numbering LaTeX documents support the use of equation numbering. This displays a right-aligned number inside parentheses after displayed equations, as follows. (1) In LaTeX this is done automatically whenever the equation environment is used to display formulas. Unfortunately, as WordPress does not support the equation environment, it does not directly support equation numbering either. One simple way to force an equation to be numbered is to insert the number directly into the LaTeX expression by putting something like \qquad\qquad{\rm(2)} before the terminating $, giving This works, but it is not an ideal solution. As the equation number has been inserted into the LaTeX expression with a space separating it from the main formula, it means that it is displayed as part of the included PNG image. The gap between the formula and the number is fixed, so that the number either floats somewhere between the right of the expression and the right of the screen or, if too much space is used, it will protrude beyond the right-hand margin and the equation will not be properly centered. A more satisfactory result can be achieved by using an html table to display and right-align the equation number, and separately display and center the LaTeX formula. This approach also works for giving a single number to a multiline formula such as (3) Typing in the code to do this from scratch would almost certainly lead to errors, so you can copy and paste the following text into your comment. Just replace [my-latex] by your LaTeX expression and *eqno* by the equation number. 
For a numbered single line formula, use <table border="0" cellspacing="0" cellpadding="0" width="100%"> <tbody><tr><td align="center" width="93%"><p> $latex \displaystyle[my-latex]$ </p></td><td align="left" width="7%"><p>(*eqno*)</p> </td></tr></tbody></table> For a numbered and aligned multiline expression use <table border="0" cellspacing="0" cellpadding="0" width="100%"> <tbody><tr><td align="center" width="93%"><p> $latex \setlength\arraycolsep{2pt}\begin{array}{rl} \displaystyle[my-latex]&\displaystyle[my-latex]\smallskip\\ \displaystyle[my-latex]&\displaystyle[my-latex] \end{array}$ </p></td><td align="left" width="7%"><p>(*eqno*)</p> </td></tr></tbody></table>
Determining the η−η′ mixing by the newly measured \(\mathit{BR}(D(D_{s})\to\eta(\eta')+\bar{l}+\nu_{l})\) 43 Downloads Abstract The mixing of η−η′ or η−η′−G is of great theoretical interest, because it concerns many aspects of the underlying dynamics and hadronic structure of pseudoscalar mesons and the glueball. Determining the mixing parameters by fitting data is by no means trivial. In order to extract the mixing parameters from the available processes where hadrons are involved, a theoretical evaluation of hadronic matrix elements is necessary. Therefore model dependence is somehow unavoidable. In fact, it is impossible to extract the mixing angle from a unique experiment, because the model parameters must be obtained by fitting other experiments. Recently \(\mathit{BR}(D\to\eta+\bar{l}+\nu_{l})\) and \(\mathit{BR}(D_{s}\to\eta(\eta')+\bar{l}+\nu_{l})\) have been measured, thus we are able to determine the η−η′ mixing solely from the semileptonic decays of D-mesons, where contamination from the final state interactions is absent. Thus we hope that the model dependence of the extraction can be somehow alleviated. Once \(\mathit{BR}(D\to\eta'+\bar{l}+\nu_{l})\) is measured, we can further determine all the mixing parameters for η−η′−G. As more data are accumulated, the determination will be more accurate. In this work, we obtain the transition matrix elements of \(D_{(s)}\to\eta^{(\prime)}\) using the light-front quark model, whose feasibility and reasonability for such processes have been tested. Keywords: Form Factor, Decay Width, Model Dependence, Pseudoscalar Meson, Semileptonic Decay
Answer 98.4 Work Step by Step We take the length of the first diagonal to be about $2(8.2)=16.4$. The apothem is 12, and this is the other diagonal. Thus, we approximate the area: $A = \frac{d_1d_2}{2} = \frac{16.4 \times 12}{2} \approx 98.4$
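The arithmetic can be checked in a line or two of plain Python (a sketch; the diagonal-product formula $A=d_1d_2/2$ assumes the two diagonals are perpendicular, and the helper name is ours):

```python
# Approximate area from the two diagonals: A = d1 * d2 / 2
# (valid for a quadrilateral whose diagonals are perpendicular).
def diagonal_area(d1: float, d2: float) -> float:
    return d1 * d2 / 2

print(diagonal_area(2 * 8.2, 12))  # ~98.4
```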
Research Open Access Published: Existence criterion for the solutions of fractional order p-Laplacian boundary value problems Boundary Value Problems volume 2015, Article number: 164 (2015) Abstract Existence criteria have been extensively studied for different classes of fractional differential equations (FDEs) through different mathematical methods. The class of fractional order boundary value problems (FOBVPs) with p-Laplacian operator is one of the most popular classes of FDEs, and it has recently been considered by many researchers with regard to existence and uniqueness. In this work our focus is on the existence and uniqueness of solutions of the FOBVP with p-Laplacian operator of the form: \(D^{\gamma}(\phi_{p}(D^{\theta}z(t)))+a(t)f(z(t)) =0\), \(3<\theta,\gamma\leq4\), \(t\in[0,1]\), \(z(0)=0=z'''(0)\), \(\eta D^{\alpha}z(t)|_{t=1}= z'(0)\), \(\xi z''(1)-z''(0)=0\), \(0<\alpha<1\), \(\phi_{p}(D^{\theta}z(t))|_{t=0}=0 =(\phi_{p}(D^{\theta}z(t)))'|_{t=0}\), \((\phi_{p}(D^{\theta} z(t)))''|_{t=1} = \frac{1}{2}(\phi_{p}(D^{\theta} z(t)))''|_{t=0}\), \((\phi_{p}(D^{\theta}z(t)))'''|_{t=0}=0\), where \(0<\xi, \eta<{1}\) and \(D^{\theta}\), \(D^{\gamma}\), \(D^{\alpha}\) are Caputo fractional derivatives of orders θ, γ, α, respectively. For this purpose, we apply Schauder's fixed point theorem, and the results are checked by illustrative examples. Introduction Fractional calculus has been widely studied from the era of Leibniz to the present and has drawn the attention of mathematicians, engineers, and physicists in many scientific disciplines based on mathematical modeling. Fractional order models have been found to be more precise than integer order models, and useful fractional order models appear in fluid flow, viscoelasticity, signal processing, and many other fields; see, for instance, [1–6].
In fractional calculus, the existence of positive solutions for FOBVPs with p-Laplacian operator has attracted extensive attention from the scientific community. This side of fractional calculus has a wide range of applications in everyday problems, and such problems have been investigated for existence of solutions by different mathematical tools. For instance, Lv [7] studied existence results for an m-point FOBVP with p-Laplacian operator with the help of a monotone iterative technique and produced interesting results, which were examined through two examples. Prasad and Krushna [8] studied FOBVPs with p-Laplacian operator with the help of the Krasnosel'skii and five-functional fixed point theorems and checked their results by examples. Yuan and Yang [9] studied the existence of a positive solution for a q-FOBVP with p-Laplacian operator using the upper and lower solution method through Schauder's fixed point theorem, and the results were examined by examples. Among further results on the existence of positive solutions for FOBVPs with p-Laplacian operator which drew our attention toward this project: Zhang et al. [10] contributed positive solutions of an FOBVP with p-Laplacian with the help of degree theory, also using the upper and lower solution method. Xu and Xu [11] investigated the p-Laplacian equation for sign-changing equations of the form using the upper and lower solution method with the help of Leray-Schauder degree theory. Wang [12] considered three solutions of a boundary value problem on a half line. In this paper we consider the FOBVP with p-Laplacian of the form where \(D^{\theta}\), \(D^{\gamma}\), and \(D^{\alpha}\) are Caputo fractional derivatives of orders θ, γ, α, with \(\theta, \gamma\in(3,4]\) and \(\alpha\in(0,1)\); \(\phi_{p}(s)=|s|^{p-2}s\), \(p>1\), \(\phi_{p}^{-1}=\phi_{q}\), \(\frac{1}{p}+\frac{1}{q}=1\). We recall some basic definitions and results.
For \(\alpha>0\), choose \(n=[\alpha]+1\) in the case that α is not an integer and \(n=\alpha\) in the case that α is an integer. We recall the following definitions of a fractional order integral and a fractional order derivative in Caputo's sense, and some basic results of fractional calculus [2, 13]. Definition 1 [13] For a function \(k:(0,\infty)\rightarrow R\) and \(\gamma>0\), the fractional integral of order γ is defined by with the condition that the integral converges. Definition 2 [13] For \(\gamma>0\), the left Caputo fractional derivative of order γ is defined by where n is such that \(n-1<\gamma<n\). Lemma 3 [2] For \(\mu, \beta>{0}\), the following relations hold: \(D^{\mu}t^{\nu}=\frac{\Gamma(\nu+1)}{\Gamma(\nu+1-\mu)}t^{\nu-\mu}\) for \(\nu>{n-1}\), and \(D^{\mu}t^{l}=0\) for \(l=0,1,2,\ldots,n-1\). Lemma 4 For \(\mathcal{H}(t)\in{C(0,1)}\), the solution of the homogeneous FDE \(D_{0^{+}}^{\omega}\mathcal{H}(t)=0 \) is Definition 5 [14] A cone P is solid in a real Banach space X if its interior is non-empty. Definition 6 [14] Let P be a solid cone in a real Banach space X, \(T:P^{0}\rightarrow P^{0}\) be an operator, and \(0<\alpha<1\). Then T is called an α-concave operator if \(T(ku)\geq{k^{\alpha}}T(u)\) for any \(0< k<1\) and \(u\in{P^{0}}\). Lemma 7 [14] Assume that P is a normal solid cone in a real Banach space X, \(0<\alpha<1\), and \(T:P^{0}\rightarrow{P^{0}}\) is an α-concave increasing operator. Then T has exactly one fixed point in \(P^{0}\). Main results Lemma 8 For \(z(t)\in C[0,1]\), the FOBVP has a solution of the form where Proof Using the boundary conditions (9) to determine \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\): the conditions \(z(0)=0=z'''(0)\) yield \(c_{1}=0=c_{4}\), and by \(\xi z''(1)-z''(0)=0\), we have \(c_{3}=\frac{\xi}{2(1-\xi)}I^{\theta-2}h(1)\). Using \(\eta D^{\alpha }z(1)=z'(0)\), we obtain \(c_{2}=\frac{\eta}{1-\frac{\eta}{\Gamma(2-\alpha)}} [I^{\theta-\alpha }h(1)+\frac{\xi}{(1-\xi)\Gamma(3-\alpha)}I^{\theta-2}h(1) ]\).
Substituting the values of \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\) in (12), we get The integral form is given as □ Lemma 9 Let \(3<{\theta}, \gamma\leq{4}\). For \(z(t)\in{C [0,1]}\), the FOBVP with p-Laplacian has a solution of the form where and \(G(t,s)\) is given by (11). The boundary conditions \(\phi_{p}(D^{\theta}z(0))=(\phi_{p}(D^{\theta}z(0)))'=(\phi_{p}(D^{\alpha}z(0)))'''=0\) lead to \(c_{1}=c_{2}=c_{4}=0\). From (18) and \(c_{1}=c_{2}=c_{4}=0\), we deduce and the boundary condition \((\phi_{p}(D^{\theta}z(1)))''=\frac{1}{2}(\phi_{p}(D^{\theta}z(0)))''\) yields \(c_{3}=\frac{1}{\Gamma(\gamma-2)} \int_{0}^{1}(1-x)^{\gamma-3} a(x)f(z(x))\, dx\). Consequently, (18) takes the form The boundary value problem (15) reduces to the following problem: which, in view of Lemma 8, yields the required result. Lemma 10 Let \(3< \gamma\leq{4}\). The Green's function \(\mathcal{H}(t,x)\) is continuous and satisfies (A) \(\mathcal{H}(t,x)\geq0\) and \(\mathcal{H}(t,x)\leq\mathcal{H}(1,x)\) for \(t,x\in(0,1]\), (B) \(\mathcal{H}(t,x) \geq t^{\gamma-1 }\mathcal{H}(1,x)\) for \(t,x \in(0,1]\). The continuity of \(\mathcal{H}(t,x)\) is ensured by its definition. For (A), considering the case \(0<{x}\leq{t}\leq1\), we have In the case \(0< t\leq x\leq1\), the inequality \(\mathcal{H}(t,x)>0\) is obvious. Now, for \(\mathcal{H}(t,x)\leq\mathcal{H}(1,x)\) with \(t,x\in(0,1]\), we have For \(x, t \in(0,1]\) such that \(0< x\leq t\leq1\), from (24) we deduce From (25), we have \(\frac{\partial}{\partial t}\mathcal {H}(t,x)\geq0\). In the case \(0< t\leq x \leq1\), (24) implies that the relation \(\frac{\partial}{\partial t}\mathcal{H}(t,x)\geq0\) is obvious, so \(\mathcal{H}(t,x)\) is an increasing function with respect to t. Hence \(\mathcal{H}(t,x)\leq\mathcal{H}(1,x)\). Now, for part (B), using (17) in the case \(0< x\leq t\leq1\), we proceed thus: The result can be proved similarly in the case \(0< t\leq x\leq1\); this completes the proof.
Assume that the following hold: (J 1): \(0<\int_{0}^{1}\mathcal{H}(1,x)a(x)\, dx<+\infty\). (J 2): There exist \(0<\delta<1\) and \(\rho>0\) such that \(f(x)\leq\delta L \phi_{p}(x)\) for \(0\leq x \leq\rho\), where \(0< L\leq(\phi_{p}(\varpi)\delta\int_{0}^{1}\mathcal{H}(1,x)a(x)\, dx)^{-1}\). (J 3): There exists \(b>0\) such that \(f(x)\leq M \phi_{p}(x)\) for \(x>b\), where \(0< M<( \phi_{p}(\varpi2^{q-1})\int_{0}^{1}\mathcal{H}(1,x) a(x)\, dx)^{-1}\). (J 4): \(f(z)\) is non-decreasing with respect to z. (J 5): There exists \(0\leq\beta<1\) such that \(f(kz)\geq(\phi_{p}(k))^{\beta}f(z)\) for any \(0< k<1\) and \(0< z<+\infty\). Existence and uniqueness of solutions Theorem 11 Under the assumptions (J 1) and (J 2), the FOBVP (4) has at least one positive solution. Proof Define the closed convex set \(K_{1}=\{z\in{C [0,1]}:0\leq z(t) \leq\rho\mbox{ on } [0,1]\}\) [15]. Define an operator \(\mathcal{T}:K_{1}\rightarrow C[0,1]\) by By Lemma 9, \(z(t)\) is a solution of the FOBVP with p-Laplacian operator (4) if and only if \(z(t)\) is a fixed point of \(\mathcal{T}\). The compactness of the operator \(\mathcal{T}\) can easily be shown. Now consider where \(\varpi=\frac{1}{\Gamma(\theta+1)}+\frac{ \eta}{1-\frac{\eta }{\Gamma(2-\alpha)}} [\frac{1}{\Gamma(\theta-\alpha+1)}+\frac{\xi }{(1-\xi)\Gamma(3-\alpha)}\frac{1}{\Gamma(\theta-1)} ] +\frac{\xi }{2(1-\xi)}\frac{1}{\Gamma(\theta-1)}\). For any \(y\in K_{1}\), using (J 2) and Lemma 10, we obtain which implies \(\mathcal{T}(K_{1})\subseteq{K_{1}}\). By Schauder's fixed point theorem, \(\mathcal{T}\) has a fixed point in \(K_{1}\). □ Theorem 12 Under the assumptions (J 1) and (J 3), the FOBVP with p-Laplacian operator (4) has at least one positive solution. Proof Let \(b>0\) be as given in (J 3). Define \(\mathcal{F}^{*}=\max_{0\leq{x}\leq{b}}f(x)\); then \(\mathcal{F}^{*}\geq f(x)\) for \(0\leq{x}\leq{b}\).
In view of (J 3), we have Choose \(b^{*}>b\) large enough that Define \(K_{2}=\{z(t)\in C [0,1]: 0\leq z(t)\leq{b^{*}}\mbox{ on } [0,1]\}\), \(\Omega_{1}=\{t\in[0,1]:0\leq{z(t)\leq{b}}\}\), \(\Omega_{2}=\{ t\in [0,1]:b<{z(t)\leq{b^{*}}}\}\). Then \(\Omega_{1}\cup{\Omega_{2}}=[0,1]\) and \(\Omega_{1}\cap{\Omega_{2}}=\emptyset\). For \(z\in{K_{2}}\), (J 3) implies \(f(z(t))\leq{M}\phi_{p}(z(t))\leq{M\phi_{p}(b^{*})}\) for \(t\in{\Omega_{2}}\), and thus \(\mathcal{T}(K_{2})\subseteq{K_{2}}\). Hence, by Schauder's fixed point theorem, \(\mathcal{T}\) has a fixed point \(z\in{K_{2}}\); therefore, the FOBVP with p-Laplacian operator (4) has at least one positive solution in \(K_{2}\). □ Theorem 13 Assume that (J 1), (J 4), (J 5) hold. Then the FOBVP with p-Laplacian operator (4) has a unique positive solution. Proof Consider the set \(\Lambda=\{z(t)\in C[0,1]:z(t)\geq0 \mbox{ on }[0,1]\}\), which is a normal solid cone in \(C[0,1]\) with \(\Lambda^{0}=\{z(t)\in C[0,1]:z(t)>0 \mbox{ on }[0,1]\}\). Let \(\mathcal{T}:\Lambda^{0}\rightarrow C[0,1]\) be defined by (27); we prove that \(\mathcal{T}\) is an α-concave increasing operator. For \(z_{1}, z_{2}\in\Lambda^{0}\) with \(z_{1}\geq z_{2}\), the assumption (J 4) implies that the operator \(\mathcal{T}\) is increasing, i.e., \(\mathcal{T}(z_{1}(t))\geq\mathcal {T}(z_{2}(t))\) for \(t\in[0,1]\). With the help of \(f(kz)\geq\phi_{p}(k^{\alpha})f(z)\), we have which implies that \(\mathcal{T}\) is an α-concave operator, and with the help of Lemma 7, the operator \(\mathcal{T}\) has a unique fixed point, which is the unique positive solution of the FOBVP with p-Laplacian operator (4) in \(\Lambda^{0}\). □ Examples Example 1 Consider the following boundary value problem: Here we have \(\theta=\gamma=3.5\), \(\alpha=0.5\), \(\xi=\eta=\frac{1}{2}\), \(a(t)=t\), \(f(z(t))=z(t)\).
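For these Example 1 parameters (taking θ = γ = 3.5, α = 0.5, ξ = η = 1/2, as we read the list above), the constant ϖ from the proof of Theorem 11 can be evaluated numerically. The following is a sketch using Python's `math.gamma`; the function name is ours:

```python
from math import gamma

# The constant from the proof of Theorem 11:
# w = 1/G(th+1) + eta/(1 - eta/G(2-a)) * [1/G(th-a+1) + xi/((1-xi)G(3-a)) * 1/G(th-1)]
#     + xi/(2(1-xi)) * 1/G(th-1),   with G = Gamma, th = theta, a = alpha
def varpi(theta: float, alpha: float, xi: float, eta: float) -> float:
    bracket = 1 / gamma(theta - alpha + 1) \
        + xi / ((1 - xi) * gamma(3 - alpha)) / gamma(theta - 1)
    return (1 / gamma(theta + 1)
            + eta / (1 - eta / gamma(2 - alpha)) * bracket
            + xi / (2 * (1 - xi)) / gamma(theta - 1))

print(varpi(theta=3.5, alpha=0.5, xi=0.5, eta=0.5))  # ~1.30
```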
By simple computation, we obtain \(0< L\leq{1.9092}\); choosing \(L=1.5\), \(\delta=1\), \(p=2\), the conditions (J 1) and (J 2) are satisfied. Hence, by Theorem 11, the FOBVP with p-Laplacian operator (35) has at least one positive solution. Example 2 For the following boundary value problem: we have \(\theta=\gamma=3.5\), \(\xi=\eta=0.1\), \(a(t)=t\), \(f(u(t))=\sqrt[3]{u(t)}\), and by simple computation we get \(M<{4.4792}\); thus, choosing \(M=4.00\), \(b=1\), and \(p=q=2\), we see that (36) satisfies (J 1) and (J 3). Hence, by Theorem 12, the FOBVP with p-Laplacian operator (36) has at least one positive solution. Example 3 For the uniqueness of the solution of an FOBVP with the p-Laplacian operator, we have We apply Theorem 13. In (37), we have \(\theta=\gamma=3.5\), \(\alpha=0.5\), \(\xi=\eta=\frac{1}{3}\), \(a(t)=t\), \(f(z(t))=z(t)\). It is clear that (37) satisfies conditions (J 1) and (J 4). Also, taking \(\beta=\frac {1}{2}\), (J 5) is satisfied. Thus, by Theorem 13, the fractional differential equation (37) has a unique solution. References 1. Hilfer, R: Applications of Fractional Calculus in Physics. World Scientific, Singapore (2000) 2. Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies. Elsevier, Amsterdam (2006) 3. Miller, KS, Ross, B: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993) 4. Oldham, KB, Spanier, J: The Fractional Calculus. Academic Press, New York (1974) 5. Podlubny, I: Fractional Differential Equations. Academic Press, New York (1999) 6. Sabatier, J, Agrawal, OP, Machado, JAT: Advances in Fractional Calculus. Springer, Berlin (2007) 7. Lv, ZW: Existence results for m-point boundary value problems of nonlinear fractional differential equations with p-Laplacian operator. Adv. Differ. Equ. 2014, 69 (2014) 8.
Prasad, KR, Krushna, BMB: Existence of multiple positive solutions for p-Laplacian fractional order boundary value problems. Int. J. Anal. Appl. 6(1), 63-81 (2014) 9. Yuan, Q, Yang, W: Positive solution for q-fractional four-point boundary value problems with p-Laplacian operator. J. Inequal. Appl. 2014, 481 (2014) 10. Zhang, JJ, Liu, WB, Ni, JB, Chen, TY: Multiple periodic solutions of p-Laplacian equation with one side Nagumo condition. J. Korean Math. Soc. 45(6), 1549-1559 (2008) 11. Xu, X, Xu, B: Sign-changing solutions of p-Laplacian equation with a sub-linear nonlinearity at infinity. Electron. J. Differ. Equ. 2013, 61 (2013) 12. Wang, B: Positive solutions for boundary value problems on a half line. Int. J. Math. Anal. 3(5), 221-229 (2009) 13. Herzallah, MAE, Baleanu, D: On fractional order hybrid differential equations. Abstr. Appl. Anal. 2014, Article ID 389386 (2014) 14. Guo, D, Lakshmikantham, V: Nonlinear Problems in Abstract Cones. Academic Press, Orlando (1988) 15. Han, Z, Lu, H, Sun, S, Yang, D: Positive solution to boundary value problem of p-Laplacian fractional differential equations with a parameter in the boundary. Electron. J. Differ. Equ. 2012, 213 (2012) Acknowledgements We are thankful to the referees and editor for their valuable comments and remarks. Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the writing of this paper. All the authors read and approved the final manuscript.
Research Open Access Published: Approximate controllability of a class of coupled degenerate systems Boundary Value Problems volume 2016, Article number: 127 (2016) Abstract This paper concerns the approximate controllability of a class of systems governed by coupled degenerate equations. The equations may be weakly degenerate or strongly degenerate on the boundary. It is shown that the systems are approximately controllable by constructing the controls via the conjugate problems. Introduction In this paper, we investigate the approximate controllability of the coupled degenerate parabolic equations where \(Q_{T}=\Omega\times(0,T)\), Ω is a bounded domain in \(\mathbb{R}^{n}\), \(T>0\), \(h\in L^{2}(Q_{T})\) is the control function, χ is the characteristic function, \(\omega_{1}\) and \(\omega_{2}\) are open subsets of Ω satisfying \(\omega_{1}\cap\omega_{2}\neq\emptyset\), \(a_{j}\in C(\overline{Q}_{T})\) is positive in \(\Omega\times(0,T)\) with \(\frac{1}{a_{j}}\frac{\partial a_{j}}{\partial t}\in{L^{\infty}(Q_{T})}\), and \(c_{j}\in L^{\infty}(Q_{T})\) for \(j=1,2\). Equations (1.1) and (1.2) can be used to describe some physical models. For instance, in [1] one finds a motivating example of a Crocco-type equation coming from the study of the velocity field of laminar flow on a flat plate. Note that (1.1) and (1.2) may be degenerate at some points on \(\partial\Omega\times(0,T)\). According to [2], we can prescribe the following boundary and initial values: where \(y_{0},u_{0}\in L^{2}(\Omega)\) and Note that \(\Sigma_{j}\) denotes the nondegenerate and weakly degenerate part of the lateral boundary, which does not include the strongly degenerate part. For example, if \(n=1\), \(\Omega=(0,1)\), and \(a_{1}(x,t)=a_{2}(x,t)=x^{\alpha}\), then: if \(\alpha=0\), the boundary \(x=0\) is nondegenerate; if \(0<\alpha <1\), the boundary \(x=0\) is weakly degenerate; if \(\alpha\ge1\), the boundary \(x=0\) is strongly degenerate.
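The trichotomy for the one-dimensional example \(a(x)=x^{\alpha}\) can be written down as a tiny helper (a trivial sketch; the function name is ours):

```python
# Classify the boundary point x = 0 for a coefficient a(x) = x**alpha,
# following the nondegenerate / weakly / strongly degenerate convention above.
def classify_boundary(alpha: float) -> str:
    if alpha == 0:
        return "nondegenerate"
    if 0 < alpha < 1:
        return "weakly degenerate"
    return "strongly degenerate"  # alpha >= 1
```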
When \(\Sigma_{j}=\emptyset\), the equations are strongly degenerate at every point of the lateral boundary. Controllability theory has been widely investigated for systems governed by nondegenerate parabolic equations over the last 40 years, and there is a great number of results (see, for instance, [3–5] and the references therein for a detailed account). However, the study of systems governed by degenerate parabolic equations began only several years ago, and there are some results (see [6–16] and the references therein). Differently from nondegenerate parabolic equations, the null controllability and the approximate controllability for systems governed by degenerate parabolic equations may be inconsistent. Indeed, if \(n=1\), \(\Omega=(0,1)\), and it is shown that the system (1.1), (1.3), (1.5) is null controllable if \(0<\alpha<2\) [6, 9, 10, 14] but not if \(\alpha\ge2\) [11], while it is approximately controllable for any \(\alpha>0\) [15, 16]. More generally, the authors of [15, 16] proved the approximate controllability of the system (1.1), (1.3), (1.5) governed by a single equation in the multi-dimensional case. Besides, [12, 13] are concerned with the null controllability of degenerate coupled equations. In particular, the authors studied the null controllability of the system (1.1)-(1.6) for the special case that and showed that the system is null controllable if \(0<\alpha<2\) in [12]. In the present paper, we prove the approximate controllability of the system (1.1)-(1.6). That is to say, for any admissible error value \(\varepsilon>0\) and desired datum \((y_{d},u_{d})\in L^{2}(\Omega)\times L^{2}(\Omega)\), there exists a control function h such that the solution \((y,u)\) to the problem (1.1)-(1.6) approximately approaches \((y_{d},u_{d})\) at time T, i.e.
Well-posedness and approximate controllability Definition 2.1 A pair of functions \((y,u)\) is called a weak solution to the problem (1.1)-(1.6) if \(y\in C([0,T];L^{2}(\Omega))\cap {\mathscr{B}}_{1}\), \(u\in C([0,T];L^{2}(\Omega))\cap{\mathscr{B}}_{2}\) satisfy for any \(\varphi\in L^{\infty}((0,T);L^{2}(\Omega)) \cap{\mathscr{B}}_{1}\), \(\psi\in L^{\infty}((0,T);L^{2}(\Omega)) \cap{\mathscr{B}}_{2}\) with \(\frac{\partial\varphi}{\partial t},\frac{\partial\psi}{\partial t}\in L^{2}(Q_{T})\) and \(\varphi(\cdot,T) |_{\Omega}=\psi(\cdot,T) |_{\Omega}=0\). Here \({\mathscr{B}}_{j}\) is the closure of \(C^{\infty}_{0}(Q_{T})\) with respect to the norm for \(j=1,2\). Similarly to the single equation case (Theorem 2.1 in [15]), one can prove the following well-posedness result. Theorem 2.1 Assume \(a_{j}\in C(\overline{Q}_{T})\) is positive in \(\Omega\times(0,T)\) with \(\frac{1}{a_{j}}\frac{\partial a_{j}}{\partial t}\in{L^{\infty}(Q_{T})}\) and \(c_{j}\in L^{\infty}(Q_{T})\) for \(j=1,2\). Then for any \(h\in L^{2}(Q_{T})\) and \(y_{0},u_{0}\in L^{2}(\Omega)\), the problem (1.1)-(1.6) admits a unique weak solution \((y,u)\). Furthermore, the solution \((y,u)\) satisfies where \(C>0\) depends only on Ω, T, \(\|c_{1}\|_{L^{\infty}(Q_{T})}\), and \(\|c_{2}\|_{L^{\infty}(Q_{T})}\). Remark 2.1 If \(u\in{\mathscr{B}}_{j}\), then \(u|_{\Sigma_{j}}=0\) in the trace sense, while in general there is no trace on \((\partial\Omega\times(0,T))\setminus\Sigma_{j}\). Define the mapping where \({\mathscr{H}}=L^{2}(\Omega)\times L^{2}(\Omega)\) with the norm Proposition 2.1 Proof From (2.1) and \(z=0\) a.e. in \(\omega_{1}\times(0,T)\), one gets \(v=0\) a.e. in \((\omega_{1}\cap\omega_{2})\times(0,T)\). For sufficiently small \(\delta>0\), denote Since (2.2) is nondegenerate in \(\Omega_{\delta}\times(0,T)\), the classical unique continuation result [17] gives \(v=0\) a.e. in \(\Omega_{\delta}\times(0,T)\). It follows from the arbitrariness of δ that \(v=0\) a.e.
in \(Q_{T}\), which also shows that z satisfies the homogeneous equation. Then the same argument as for v leads to \(z=0\) a.e. in \(Q_{T}\). □ Define the functional where \(\langle(\cdot,\cdot),(\cdot,\cdot)\rangle_{\mathscr{H}}\) is the inner product in \({\mathscr{H}}\). Proposition 2.2 \(J(\cdot)\) is strictly convex and satisfies Furthermore, \(J(\cdot)\) attains its minimum at a unique point \((\hat{z}_{0},\hat{v}_{0})\) in \({\mathscr{H}}\), and Proof Since \({\mathscr{L}}\) is a linear operator, one can easily prove that \(J(\cdot)\) is strictly convex and continuous. Now we prove (2.7) by contradiction. Otherwise, there exists a sequence \(\{(z_{0}^{(k)},v_{0}^{(k)})\}_{k=1}^{\infty}\subset{\mathscr{H}}\) satisfying Define There exists a subsequence of \(\{(\tilde{z}_{0}^{(k)},\tilde{v}_{0}^{(k)})\}_{k=1}^{\infty}\), still denoted in the same way for convenience, which converges weakly in \({\mathscr{H}}\) to a function \((\tilde{z}_{0},\tilde{v}_{0})\in{\mathscr{H}}\) with \(\|(\tilde{z}_{0},\tilde{v}_{0})\|_{\mathscr{H}}\le1\). Denote by \((\tilde{z},\tilde{v})\) and \((\tilde{z}^{(k)},\tilde{v}^{(k)})\) the weak solutions of the conjugate problem (2.1)-(2.6) with \((z_{0},v_{0})=(\tilde{z}_{0},\tilde{v}_{0})\) and \((z_{0},v_{0})=(\tilde{z}_{0}^{(k)},\tilde{v}_{0}^{(k)})\), respectively. Then it follows from Theorem 2.1 that \((\tilde{z}^{(k)},\tilde{v}^{(k)})\) converges weakly in \({\mathscr{H}}\) to \((\tilde{z},\tilde{v})\). Additionally, (2.9) yields Hence which, together with Proposition 2.1, leads to \((\tilde{z},\tilde{v})=(0,0)\) in \(Q_{T}\) and thus \((\tilde{z}_{0},\tilde{v}_{0})=(0,0)\) in Ω. Thus From (2.7), the strict convexity, and the continuity of \(J(\cdot)\), \(J(\cdot)\) must attain its minimum at a unique point in \({\mathscr{H}}\). Finally, we prove (2.8). On the one hand, if \(\|(y_{d},u_{d})\|_{\mathscr{H}}\leq\varepsilon\), it follows from the Hölder inequality that and thus \((\hat{z}_{0},\hat{v}_{0})=(0,0)\).
On the other hand, if \((\hat{z}_{0},\hat{v}_{0})=(0,0)\), then i.e. Letting \(\tau\to0^{+}\) yields \(\|(y_{d},u_{d})\|_{\mathscr{H}}\leq\varepsilon\). □ Theorem 2.2 Assume \(a_{j}\in C(\overline{Q}_{T})\) is positive in \(\Omega\times(0,T)\) with \(\frac{1}{a_{j}}\frac{\partial a_{j}}{\partial t}\in{L^{\infty}(Q_{T})}\) and \(c_{j}\in L^{\infty}(Q_{T})\) for \(j=1,2\). Then the system (1.1)-(1.7) is approximately controllable. That is to say, for any given \(y_{0},u_{0},y_{d},u_{d}\in L^{2}(\Omega)\) and \(\varepsilon>0\), there exists \(h\in L^{2}(Q_{T})\) such that the weak solution \((y,u)\) to the problem (1.1)-(1.6) satisfies (1.7). Proof We may argue under the assumption (2.10) without loss of generality. Otherwise, one can split \((y,u)\) into two solutions: one solves the fixed system with nonhomogeneous initial data, and the other solves the control system with homogeneous initial data. Let \((\hat{z}_{0},\hat{v}_{0})\) be the unique minimum point of \(J(\cdot)\), and denote by \((\hat{z},\hat{v})\) the weak solution of the conjugate problem (2.1)-(2.6) with \((z_{0},v_{0})=({\hat{z}}_{0},{\hat{v}}_{0})\). Below, we show that \(h=\hat{z}\) is a control for the system (1.1)-(1.7) under the assumption (2.10) by distinguishing two cases. The case \(\|(y_{d},u_{d})\|_{\mathscr{H}}\le\varepsilon\). In this case, Proposition 2.2 yields \((\hat{z}_{0},\hat{v}_{0})=(0,0)\) a.e. in Ω and thus \((\hat{z},\hat{v})=(0,0)\) a.e. in \(Q_{T}\) by the uniqueness result in Theorem 2.1. Therefore \(h=0\) a.e. in \(Q_{T}\) and thus \((y,u)=(0,0)\) a.e. in \(Q_{T}\), which leads to The case \(\|(y_{d},u_{d})\|_{\mathscr{H}}>\varepsilon\). In this case, Proposition 2.2 yields \((\hat{z}_{0},\hat{v}_{0})\neq(0,0)\). For any \((\theta_{0},\psi_{0})\in{\mathscr{H}}\), denote by \((\theta,\psi)\) the weak solutions of the conjugate problem (2.1)-(2.6) with \((z_{0},v_{0})=(\theta _{0},\psi_{0})\).
Since \((\hat{z}_{0},\hat{v}_{0})\) is the unique point of minimum of \(J(\cdot )\), one gets which implies (1.7) due to the arbitrariness of \((\theta_{0},\psi_{0})\in{\mathscr{H}}\). □ References 1. Martinez, P, Raymond, JP, Vancostenoble, J: Regional null controllability of a linearized Crocco-type equation. SIAM J. Control Optim. 42, 709-728 (2003) 2. Yin, J, Wang, C: Evolutionary weighted p-Laplacian with boundary degeneracy. J. Differ. Equ. 237(2), 421-445 (2007) 3. Fujii, N, Sakawa, Y: Controllability for nonlinear differential equations in Banach space. Autom. Control Theory Appl. 2(2), 44-46 (1974) 4. Fabre, C, Puel, J-P, Zuazua, E: Approximate controllability of a semilinear heat equation. Proc. R. Soc. Edinb., Sect. A, Math. 125(1), 31-61 (1995) 5. Russell, DL: Controllability and stabilizability theorems for linear partial differential equations: recent progress and open questions. SIAM Rev. 20(4), 639-739 (1978) 6. Alabau-Boussouira, F, Cannarsa, P, Fragnelli, G: Carleman estimates for degenerate parabolic operators with applications to null controllability. J. Evol. Equ. 6(2), 161-204 (2006) 7. Cannarsa, P, Fragnelli, G, Rocchetti, D: Controllability results for a class of one-dimensional degenerate parabolic problems in nondivergence form. J. Evol. Equ. 8(2), 583-616 (2008) 8. Cannarsa, P, Fragnelli, G, Vancostenoble, J: Regional controllability of semilinear degenerate parabolic equations in bounded domains. J. Math. Anal. Appl. 320(2), 804-818 (2006) 9. Cannarsa, P, Martinez, P, Vancostenoble, J: Carleman estimates for a class of degenerate parabolic operators. SIAM J. Control Optim. 47(1), 1-19 (2008) 10. Cannarsa, P, Martinez, P, Vancostenoble, J: Null controllability of degenerate heat equations. Adv. Differ. Equ. 10(2), 153-190 (2005) 11. Cannarsa, P, Martinez, P, Vancostenoble, J: Persistent regional controllability for a class of degenerate parabolic equations. Commun. Pure Appl. Anal. 3(4), 607-635 (2004) 12. 
Cannarsa, P, de Teresa, L: Controllability of 1-D coupled degenerate parabolic equations. Electron. J. Differ. Equ. 2009, 73 (2009) 13. Du, R, Wang, C: Null controllability of a class of systems governed by coupled degenerate equations. Appl. Math. Lett. 26(1), 113-119 (2013) 14. Martinez, P, Vancostenoble, J: Carleman estimates for one-dimensional degenerate heat equations. J. Evol. Equ. 6(2), 325-362 (2006) 15. Wang, C: Approximate controllability of a class of degenerate systems. Appl. Math. Comput. 203(1), 447-456 (2008) 16. Wang, C: Approximate controllability of a class of semilinear systems with boundary degeneracy. J. Evol. Equ. 10(1), 163-193 (2010) 17. Saut, JC, Scheurer, B: Unique continuation for some evolution equations. J. Differ. Equ. 66(1), 118-139 (1987) Acknowledgements The authors are grateful to the anonymous referees for useful comments and suggestions, which improved the exposition of the paper. This research is supported by the National Natural Science Foundation of China (11401049), the Scientific and Technological Research Project of Jilin Province’s Education Department (no. 2016285), the Twelfth Five-Year Plan project of Jilin Province’s Educational Science (ZD14078), and SRF, JPED [2014](B019). Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
What was the velocity of the Command Module after it had penetrated the Earth's atmosphere, at the point where the parachutes were deployed? If you're asking about the deployment of the three main parachutes of the CM ELS (Apollo Command Module Earth Landing System), this is simple enough to answer: the pilot chutes are deployed at about 10,000 feet (3.05 km) by a barometric switch, pulling the three main parachutes from their containers. The ELS was designed so that the drogue chutes slow the descent to roughly 200 km/h (124 mph) before the pilot chutes pull out the mains, eventually slowing the CM to 22 mph (35 km/h) for splashdown, or to roughly 24.5 mph (39.5 km/h) with only two main chutes properly deployed, as happened during the Apollo 15 splashdown. Earth Landing System sequence of events (Source: Project Apollo - NASSP) For the drogue deployment (thanks go to @MarkAdler in the comments!), we now have a diagram of the parachute deployment envelope. In a normal atmospheric entry (not a launch abort), the diagram for manual deployment of the drogues covers altitudes between 40,000 and 25,000 feet (12.2-7.6 km) and CM velocities between Mach 0.7 and 0.3. Translating that to US Standard Atmosphere, 1962 figures, Mach 0.7 at 40,000 feet equals roughly 206 m/s (743 km/h or 461 mph) and Mach 0.3 at 25,000 feet equals roughly 94 m/s (338 km/h or 210 mph). That averages out at 32,500 ft (9.9 km) and a velocity of 150 m/s (540 km/h or 336 mph). The normal-entry region for drogue chute deployment by barometric switch (as was the case with Apollo 11) covers altitudes between 25,000 and 20,000 feet (7.6-6.1 km) and CM velocities between Mach 0.225 and 0.475, which come out at roughly 70.65-147.25 m/s (158-329 mph or 254-530 km/h). That averages out at 22,500 feet and 109 m/s (243.5 mph or 392 km/h).
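The Mach-to-m/s conversions above can be reproduced from the standard-atmosphere temperature profile. This is a rough sketch (constants and function names are ours, and only the troposphere and lower stratosphere are handled):

```python
import math

GAMMA, R_AIR = 1.4, 287.053  # ratio of specific heats; specific gas constant, J/(kg K)

def speed_of_sound(alt_m: float) -> float:
    """Speed of sound (m/s) from the US Standard Atmosphere temperature profile."""
    if alt_m <= 11_000:                  # troposphere: linear temperature lapse
        temp_k = 288.15 - 0.0065 * alt_m
    else:                                # lower stratosphere: isothermal layer
        temp_k = 216.65
    return math.sqrt(GAMMA * R_AIR * temp_k)  # a = sqrt(gamma * R * T)

def mach_to_ms(mach: float, alt_m: float) -> float:
    return mach * speed_of_sound(alt_m)

print(mach_to_ms(0.7, 12_192))  # Mach 0.7 at 40,000 ft -> ~206.5 m/s
print(mach_to_ms(0.3, 7_620))   # Mach 0.3 at 25,000 ft -> ~92.9 m/s
```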
The command module's speed is going to be terminal velocity, the speed at which the force exerted by aerodynamic drag equals the force exerted by gravity, because it entered the atmosphere at extremely high speed and has been falling for tens of thousands of meters already. So we know it's going to be at terminal velocity, but we can't stop there. I was curious about this too, so I decided to run the numbers. The formula for terminal velocity is as follows: $$V_T=\sqrt{\frac{2mg}{\rho AC_D}}$$ The mass of the command module, $m$, was 5809 kg, from here. The acceleration of gravity on Earth, $g$, is right around $9.8~\textrm{m/s}^2$. The drag coefficient, $C_D$, seems to be about 1.3 (judging from the graph on p. 48 of this NASA paper). The projected area of the CM's heat shield, $A$, was $11.631~\textrm{m}^2$ ($125.2 ~\textrm{ft}^2$), according to page 9 of another NASA paper. Here I'll assume you're talking about the drogue parachutes, the first ones out. They were deployed at 24,000 feet (7315.2 m). The density of the air at that altitude is, according to Wolfram|Alpha, $0.57~\textrm{kg/m}^3$. So now we have all our numbers; we just need to do the calculation, which I performed, getting a terminal velocity at that altitude of (drumroll please)… $103.71~\textrm{m/s}$ (or about 230 mph). That is: about the maximum speed of a swing by a professional golf player; almost as fast as a Ferrari F50 GT1 (about 2% slower); and about a third as fast as sound in 15°C dry air at 1 atmosphere: Mach 0.33 at that altitude. Thanks, Wolfram|Alpha!
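The plug-in step can be reproduced in a few lines (a sketch; note that with exactly the values quoted above the formula comes out closer to 115 m/s, so the 103.71 m/s figure evidently reflects slightly different inputs, e.g. a somewhat larger drag coefficient or air density):

```python
import math

# Terminal velocity V_T = sqrt(2 m g / (rho A C_D)), using the values
# quoted above for the Apollo CM under drogue-deployment conditions.
def terminal_velocity(m: float, g: float, rho: float, area: float, cd: float) -> float:
    return math.sqrt(2 * m * g / (rho * area * cd))

v = terminal_velocity(m=5809, g=9.8, rho=0.57, area=11.631, cd=1.3)
print(v)  # ~114.9 m/s with these exact inputs
```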
April 21st, 2017, 07:38 PM #1

Riemann Sums and Definite Integrals

Use Example 1 as a model to evaluate the limit $\displaystyle \lim _{n\to \infty }\sum _{i=1}^n f(c_i)\,\Delta x_i$ over the region bounded by the graphs of the equations. (Round your answer to three decimal places.)

$f(x) = \sqrt{x}$, $y = 0$, $x = 0$, $x = 3$

HINT: Let $\displaystyle c_i=\frac{3i^2}{n^2}$.

Can someone lead me in the right direction on how to solve this?
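Not a full solution, but here is a numerical sketch of the hinted sum: with $c_i = 3i^2/n^2$ the partition points are $x_i = 3i^2/n^2$, so $\Delta x_i = x_i - x_{i-1} = 3(2i-1)/n^2$, and the sum should approach $\int_0^3 \sqrt{x}\,dx = 2\sqrt{3} \approx 3.464$:

```python
import math

# Riemann sum for f(x) = sqrt(x) on [0, 3] with the hinted partition
# x_i = 3 i^2 / n^2, so c_i = x_i and dx_i = x_i - x_{i-1} = 3(2i-1)/n^2.
def riemann_sum(n):
    return sum(math.sqrt(3 * i * i / n**2) * (3 * (2 * i - 1) / n**2)
               for i in range(1, n + 1))

exact = 2 * math.sqrt(3)  # integral of sqrt(x) from 0 to 3
print(riemann_sum(1000), exact)  # the sum approaches ~3.464
```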
Sorry, but my first answer was completely and utterly wrong. Since this question is one of the top search results for the query "braid group fundamental group configuration space", I think it's high time I updated with a correct explanation! :-)

I am not sure why there is a non-trivial loop. My understanding of homotopy is that if there is no "hole" in the space, then we can continuously retract our loop back to our base point. Why can we not do this in this case?

Short answer. You're thinking of $B_n(\Bbb R)$ when you should be thinking of $B_n(\Bbb C)$.

Long answer. Let $X$ be a "nice" topological space (say, a manifold). Define $F_n(X)$ to be the subspace of $X^n$ comprised of tuples with distinct coordinates. The symmetric group $S_n$ acts on it freely, and we can form the $n$-configuration space as the quotient $SF_n(X):=F_n(X)/S_n$. Then we define the braid group as $B_n(X)=\pi_1(SF_n(X))$. (Of course, $SF_n(X)$ should be path-connected...)

If you take $X=\Bbb R$, then the connected components of $F_n(X)$ are blocks for the action of $S_n$. Given any two tuples $(x_1,\cdots,x_n)$ and $(y_1,\cdots,y_n)$ with $x_1<x_2<\cdots<x_n$ and $y_1<y_2<\cdots<y_n$, these two tuples will be path-connected: first shift all coordinates of $\vec{y}$ uniformly enough to the right so that $x_n<y_1$, then shift $y_1$ back until it's $x_1$, then shift $y_2$ back until it's $x_2$, and so on. The space of all tuples $(x_1,\cdots,x_n)$ with increasing coordinates is homeomorphic to $\Bbb R^n$, which is simply connected. Similarly for any other tuples whose coordinates are "ranked" in a given order.

However, $(x_1,x_2,x_3,\cdots,x_n)$ will not be path-connected to $(x_2,x_1,x_3,\cdots,x_n)$ within $F_n(X)$. The difference between the first two coordinates would need to change from positive to negative, and hence by the IVT must be zero at some point. In general, a path in $F_n(X)$ cannot change the "rank order" of the coordinates of a tuple.
So there are no paths between distinct points of an $S_n$-orbit in $F_n(X)$. Therefore, any based loop in $SF_n(X)$, when lifted back to $F_n(X)$, must also be a loop, hence must be nullhomotopic since $F_n(X)$'s connected components are simply connected. So $SF_n(X)$ is simply connected, and the braid group $B_n(\Bbb R)=\pi_1(SF_n(\Bbb R))$ is trivial.

Now consider $X=\Bbb C$ with $n=2$. We must delete the subspace $\{(z,z):z\in\Bbb C\}$ from $\Bbb C^2$. (Keep in mind for now that $C_2$ acts on the carved-out space by transposing coordinates.) This subspace is a plane inside Euclidean $4$-space, so its complement is homeomorphic to $\Bbb R\times (\Bbb R^3- L)$ for a line $L\subset\Bbb R^3$. Better yet, consider the obvious Euclidean structure on the space and take the orthogonal complement $\{(z,-z):z\in\Bbb C\}$: there is an orthogonal projector given by $(z,w)\mapsto(z-w,w-z)/\sqrt{2}$ and then an isomorphism onto the punctured plane given by $(u,-u)\mapsto u$. Thus, we have a deformation retract from $\Bbb C^2-{\rm diag}$ onto $\Bbb C^\times$, and we know $\pi_1(\Bbb C^\times)$ is infinite cyclic. (This is the pure braid group $P_2$.)

If one further deformation retracts $\Bbb C^\times\to S^1$ and has $C_2$ act by swapping antipodal points, then our deformation retract from $\Bbb C^2-{\rm diag}$ is equivariant. Thus we have a commutative diagram: $$\begin{array}{ccc}\pi_1(\Bbb C^2-{\rm diag}) & \longrightarrow & \pi_1((\Bbb C^2-{\rm diag})/C_2) \\ \downarrow & & \downarrow \\ \pi_1(S^1) & \longrightarrow & \pi_1(S^1/C_2) \end{array} $$ The vertical maps are isomorphisms since they are induced from deformation retracts. As a result, we know that the inclusion of the pure braid group $P_2\hookrightarrow B_2$ is akin to $2\Bbb Z\hookrightarrow\Bbb Z$. I don't think this kind of argument will generalize, though. So what about $n>2$?
In configuration space (which has $2n$ real dimensions, so is hard to visualize) a single point, a "configuration," represents $n$ distinct points in a plane (which is easy to visualize). And a path in configuration space represents each of the $n$ points in the plane having a path in and out of it. Thus, imagine a continuum (indexed by $[0,1]$) of copies of $\Bbb C$ (resting flat) piled on top of each other. If one lets the altitude represent time, then the paths traced out between the points represent strings, and if one looks at this picture from the side one sees braid diagrams! Example: $\hskip 1.3in$

Since we can choose our basepoint for $\pi_1$ to be anything, without loss of generality we may assume it is $\{1,2,\cdots,n\}\subset\Bbb C$ for the purpose of visualization. Tuples in $\Bbb C^n$ with nondistinct coordinates represent two strings intersecting at the same point, which is why we must delete this subspace from $\Bbb C^n$: to prevent collisions. A path in $\Bbb C^n$ ending where it started means each colored string above would have to go back to its original point, and this defines a pure braid. If we quotient by the action of $S_n$, we essentially allow the path in configuration space to go to any of the permuted configurations, which means the strings in the braid diagram can connect different dots.

There is another way to visualize braids that is also very interesting: mapping classes of the closed unit disk with $n$ points inside deleted. I recently asked a question about generalizing this idea to generalized braid groups. Mappings can warp the unit disk like a sheet of rubber, but the rubber is attached to the boundary (the unit circle), which must remain fixed pointwise. When you delete $n$ points, that essentially means your mappings of the disk must restrict to a permutation of those $n$ points.
To visualize what such mappings look like, for $B_2$ imagine putting two fingers on the two points in the disk, then using your two fingers to warp the rubber disk by turning it one way or the other. Remember the border of the rubber sheet is stuck in place, so you'll be twisting the inside of the rubber relative to the outside rather than lamely rotating it. In general, for $n$ points, you can do the same thing by twisting the rubber around any two points with two fingers. There are two ways to twist two marked points around each other with your fingers (clockwise or counterclockwise), corresponding to which string goes over/under which in the braid diagram. The paths that the marked points take throughout the twisting process essentially trace out a braid diagram. Intuitively, we should be able to "lift" any braid diagram into a composition of such twistings of the unit disk. More detail is given in the link.
Question: Suppose $a,b \in \Bbb N$, $\gcd (a,n) = \gcd(b,n) = 1$. The question is to prove or give a counterexample: $\gcd(ab,n) = 1$.

My Work: This is what I have so far (for $\alpha, \beta, \gamma, \delta \in \Bbb Z$): \begin{align*} \gcd(a,n) = 1 \ &\Rightarrow 1 = \alpha a + \beta n\\ \gcd(b,n) = 1 \ &\Rightarrow 1 = \gamma b + \delta n \end{align*} Multiplying the top equation by $b$ and the bottom by $a$, then adding, I have $$ b + a = (\alpha + \gamma)ab + (\beta b + \delta a)n $$ Here is where I am stuck. I now know that you can write a linear combination of $ab, n$ in this form, where all coefficients are integers, but I think I may have gone down the wrong road in this proof in multiplying by $a,b$. Hints would be appreciated.
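Before committing to a proof route, a brute-force search for counterexamples can build confidence that the statement is a theorem rather than false (the search ranges are arbitrary):

```python
from math import gcd

# Exhaustive small-range check of the claim that gcd(a, n) = gcd(b, n) = 1
# forces gcd(a*b, n) = 1.  (Proof hint, as a comment only: multiplying the
# two Bezout identities together instead of scaling them separately gives
# 1 = (aa' + bn')(gb' + dn') = (a'g)(ab) + (...)*n, a combination of ab and
# n that equals 1.)
counterexamples = [
    (a, b, n)
    for n in range(2, 40)
    for a in range(1, 40)
    for b in range(1, 40)
    if gcd(a, n) == 1 and gcd(b, n) == 1 and gcd(a * b, n) != 1
]
print(counterexamples)  # → []
```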
Let $p$ be a prime number, and $a \in \mathbb{Q}$ a number such that there is no $k \in \mathbb{Q}$ satisfying $k^p=a$. Write $f= X^p -a \in \mathbb{Q}[X]$. I have to prove the following statements:

The degree of the splitting field $\Omega/\mathbb{Q}$ equals $p(p-1)$.

Prove that the Galois group is isomorphic to the following: $$\{ \left( \begin{array}{cc}a & b \\0 & 1\end{array} \right): \ a,b \in \mathbb{F}_p \ , \ a \neq 0\}$$

My own attempts: I should see the extension as a double extension, I guess. I thought I had to adjoin some root $\sqrt[p]{a}$ and a primitive root of unity $\zeta$. The first degree would be $p$, and the second one would be $p-1$ because $\sum_{k=0}^{p-1}X^k$ is the minimal polynomial of $\zeta$. This would give the degree $p(p-1)$, right?

Every element $\sigma \in G$ has to map $\zeta$ to $\zeta^k$ where $1\leq k \leq p-1$. The other root $\sqrt[p]{a}$ has to be sent to some $\sqrt[p]{a}\cdot\zeta^{m}$, where $1 \leq m \leq p$. So I took the map $$ \phi : \left( \begin{array}{cc} a & b \\ 0 & 1 \end{array} \right) \longmapsto \left\{ \begin{array}{lr} \zeta \quad \mapsto \quad \zeta^a\\ \sqrt[p]{a} \quad \mapsto \quad \sqrt[p]{a} \cdot \zeta^b \end{array} \right.$$ If we multiply two matrices, we get: $$ \left( \begin{array}{cc} a & b \\ 0 & 1 \end{array} \right) \cdot \left( \begin{array}{cc} x & y \\ 0 & 1 \end{array} \right) = \left( \begin{array}{cc} ax & ay+b \\ 0 & 1 \end{array} \right) $$ The upper right corner troubles me. I don't see why it doesn't work componentwise. Could someone explain?
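The corner is not componentwise because the first automorphism also acts on the $\zeta^y$ produced by the second: composing $\sigma_{a,b}\circ\sigma_{x,y}$ on the generators gives $\zeta\mapsto\zeta^{ax}$ and $\sqrt[p]{a}\mapsto\sigma_{a,b}(\zeta^{y}\sqrt[p]{a})=\zeta^{ay+b}\sqrt[p]{a}$, which is exactly the matrix product. A small exponent-level check (the pair encoding is my own illustration):

```python
# Represent the automorphism sigma_{a,b} (zeta -> zeta^a, r -> zeta^b * r,
# where r = a^{1/p}) by the exponent pair (a, b), and a monomial zeta^u r^w
# by (u mod p, w).  Composing two automorphisms on the generators should
# reproduce the matrix product (a, b)(x, y) = (ax, ay + b).
p = 7  # any prime; purely illustrative

def apply(s, mono):
    """Apply sigma_{a,b} to the monomial zeta^u r^w."""
    a, b = s
    u, w = mono
    return ((a * u + b * w) % p, w)

def compose(s, t):
    """Exponent pair of sigma_s o sigma_t, read off its action on zeta and r."""
    az, _ = apply(s, apply(t, (1, 0)))   # image of zeta = zeta^1 r^0
    bz, _ = apply(s, apply(t, (0, 1)))   # image of r    = zeta^0 r^1
    return (az, bz)

def mat(s, t):
    (a, b), (x, y) = s, t
    return (a * x % p, (a * y + b) % p)

pairs = [(a, b) for a in range(1, p) for b in range(p)]
ok = all(compose(s, t) == mat(s, t) for s in pairs for t in pairs)
print(ok)  # → True
```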
Animation of an evolving network according to the initial Barabási–Albert model

Evolving networks are networks that change as a function of time. They are a natural extension of network science since almost all real world networks evolve over time, either by adding or removing nodes or links over time. Often all of these processes occur simultaneously, such as in social networks where people make and lose friends over time, thereby creating and destroying edges, and some people become part of new social networks or leave their networks, changing the nodes in the network. Evolving network concepts build on established network theory and are now being introduced into studying networks in many diverse fields.

Network theory background

The study of networks traces its foundations to the development of graph theory, which was first analyzed by Leonhard Euler in 1736 when he wrote the famous Seven Bridges of Königsberg paper. Probabilistic network theory then developed with the help of eight famous papers studying random graphs written by Paul Erdős and Alfréd Rényi. The Erdős-Rényi model (ER) supposes that a graph is composed of N labeled nodes where each pair of nodes is connected by a preset probability p.

Watts-Strogatz graph

While the ER model's simplicity has helped it find many applications, it does not accurately describe many real world networks. The ER model fails to generate local clustering and triadic closures as often as they are found in real world networks.
Therefore, the Watts and Strogatz model was proposed, whereby a network is constructed as a regular ring lattice, and then nodes are rewired according to some probability β. [1] This produces a locally clustered network and dramatically reduces the average path length, creating networks which represent the small world phenomenon observed in many real world networks. [2]

Despite this achievement, both the ER and the Watts and Strogatz models fail to account for the formation of hubs as observed in many real world networks. The degree distribution in the ER model follows a Poisson distribution, while the Watts and Strogatz model produces graphs that are homogeneous in degree. Many networks are instead scale free, meaning that their degree distribution follows a power law of the form

$$P(k)\sim k^{-\gamma}$$

This exponent turns out to be approximately 3 for many real world networks; however, it is not a universal constant and depends continuously on the network's parameters. [3]

First evolving network model - scale free networks

The Barabási–Albert (BA) model was the first widely accepted model to produce scale-free networks. This was accomplished by incorporating preferential attachment and growth, where nodes are added to the network over time and are more likely to link to other nodes with high degrees. The BA model was first applied to degree distributions on the web, where both of these effects can be clearly seen. New web pages are added over time, and each new page is more likely to link to highly visible hubs like Google, which have very high degrees, than to nodes with only a few links. Formally this preferential attachment is

$$p_i = \frac{k_i}{\displaystyle\sum_j k_j}$$

Additions to BA model

The BA model was the first model to derive the network topology from the way the network was constructed, with nodes and links being added over time.
However, the model makes only the simplest assumptions necessary for a scale-free network to emerge, namely that there is linear growth and linear preferential attachment. This minimal model does not capture variations in the shape of the degree distribution, variations in the degree exponent, or the size-independent clustering coefficient. Therefore, the original model has since been modified to more fully capture the properties of evolving networks by introducing a few new properties.

Fitness

One concern with the BA model is that the degrees of nodes experience strong positive feedback, whereby the earliest nodes with high degrees continue to dominate the network indefinitely. However, this can be alleviated by introducing a fitness for each node, which modifies the probability of new links being created with that node, or even of links to that node being removed. [4] In order to preserve the preferential attachment from the BA model, this fitness is then multiplied by the preferential attachment based on degree to give the true probability that a link is created which connects to node i:

$$\Pi(k_i) = \frac{\eta_i k_i}{\displaystyle\sum_j \eta_j k_j}$$

where $\eta_i$ is the fitness, which may also depend on time. A decay of fitness with respect to time may occur and can be formalized by

$$\Pi(k_i) \propto k_i(t-t_i)^{-\nu}$$

where $\gamma$ increases with $\nu$.

Removing nodes and rewiring links

Further complications arise because nodes may be removed from the network with some probability. Additionally, existing links may be destroyed and new links between existing nodes may be created. The probability of these actions occurring may depend on time and may also be related to the node's fitness. Probabilities can be assigned to these events by studying the characteristics of the network in question in order to grow a model network with identical properties.
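A minimal simulation of the basic growth-plus-preferential-attachment process can be sketched as follows (fitness, deletion, and rewiring are omitted; all parameter values are illustrative):

```python
import random

# Basic BA growth: start from a small complete core, then attach each new
# node to m existing nodes with probability proportional to their degree.
# Sampling uniformly from a pool in which node v appears deg(v) times is
# equivalent to degree-proportional (preferential) attachment.
def barabasi_albert(n_final, m=2, seed=0):
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]  # complete core
    degree = {i: m for i in range(m + 1)}
    pool = [v for e in edges for v in e]  # node v repeated deg(v) times
    for new in range(m + 1, n_final):
        targets = set()
        while len(targets) < m:          # m distinct preferential targets
            targets.add(rng.choice(pool))
        degree[new] = 0
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
            pool.extend([new, t])
    return degree

deg = barabasi_albert(2000)
print(max(deg.values()))  # a few high-degree hubs emerge from early nodes
```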
This growth would take place with one of the following actions occurring at each time step:

With probability p: add an internal link.
With probability q: delete a link.
With probability r: delete a node.
With probability 1 − p − q − r: add a node.

Other ways of characterizing evolving networks

In addition to growing network models as described above, there may be times when other methods are more useful or convenient for characterizing certain properties of evolving networks.

Treat evolving networks as successive snapshots of a static network

The most common way to view evolving networks is by considering them as successive static networks. This could be conceptualized as the individual still images which compose a motion picture. Many simple parameters exist to describe a static network (number of nodes, edges, path length, connected components), or to describe specific nodes in the graph, such as the number of links or the clustering coefficient. These properties can then individually be studied as a time series using signal processing notions. [5] For example, we can track the number of links established to a server per minute by looking at the successive snapshots of the network and counting these links in each snapshot.

Unfortunately, the analogy of snapshots to a motion picture also reveals the main difficulty with this approach: the time steps employed are very rarely suggested by the network and are instead arbitrary. Using extremely small time steps between each snapshot preserves resolution, but may actually obscure wider trends which only become visible over longer timescales. Conversely, using larger timescales loses the temporal order of events within each snapshot. Therefore, it may be difficult to find the appropriate timescale for dividing the evolution of a network into static snapshots.
Define dynamic properties

It may be important to look at properties which cannot be directly observed by treating evolving networks as a sequence of snapshots, such as the duration of contacts between nodes. [6] Other similar properties can be defined, and then it is possible to instead track these properties through the evolution of a network and visualize them directly.

Another issue with using successive snapshots is that even slight changes in network topology can have large effects on the outcome of algorithms designed to find communities. Therefore, it is necessary to use a non-classical definition of communities which permits following the evolution of the community through a set of rules such as birth, death, merge, split, growth, and contraction. [7] [8]

Applications

Route map of the world's scheduled commercial airline traffic, 2009. This network evolves continuously as new routes are scheduled or cancelled.

Almost all real world networks are evolving networks, since they are constructed over time. By varying the respective probabilities described above, it is possible to use the expanded BA model to construct a network with nearly identical properties as many observed networks. [9] Moreover, the concept of scale free networks shows us that time evolution is a necessary part of understanding the network's properties, and that it is difficult to model an existing network as having been created instantaneously. Real evolving networks which are currently being studied include social networks, communications networks, the internet, the movie actor network, the world wide web, and transportation networks.

Further reading

"Understanding Network Science," http://www.zangani.com/blog/2007-1030-networkingscience
"Linked: The New Science of Networks", A.-L. Barabási, Perseus Publishing, Cambridge.
"Evolving Network Analysis: A Survey", ACM Computing Surveys, 2014. [1]

References

^ Watts, D.J.; Strogatz, S.H. (1998). "Collective dynamics of 'small-world' networks".
Nature 393 (6684): 440–442.
^ Travers, Jeffrey; Milgram, Stanley (1969). "An Experimental Study of the Small World Problem". Sociometry 32 (4): 425–443.
^ Albert, R.; Barabási, A.-L. (2000). "Topology of Evolving Networks: Local Events and Universality".
^ Albert, R.; Barabási, A.-L. (2002). "Statistical mechanics of complex networks". Reviews of Modern Physics 74, 47.
^ Borgnat, Pierre; Fleury, Eric; et al. "Evolving Networks".
^ Chaintreau, A.; Hui, P.; Crowcroft, J.; Diot, C.; Gass, R.; Scott, J. (2006). "Impact of human mobility on the design of opportunistic forwarding algorithms". INFOCOM.
^ Palla, G.; Barabási, A.-L.; Vicsek, T. (2007). "Quantifying social group evolution". Nature 446 (7136): 664–667.
^ Chi, Y.; Zhu, S.; Song, X.; Tatemura, J.; Tseng, B.L. (2007). "Structural and temporal analysis of the blogosphere through community factorization". KDD ’07: Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining: 163–172.
^ Farkas, I.; Derényi, I.; Jeong, H.; et al. (2002). "Networks in life: scaling properties and eigenvalue spectra".
Your equation: $$(R^2 + (\omega L - \frac{1}{\omega C})^2)^{\frac{1}{2}} = 100R $$ has the form of $$\sqrt{R^2 + X^2} = |Z| = Z $$ which defines the MAGNITUDE of Z, where X is reactance and R is resistance. So we can say 100R is the magnitude of the impedance. Notice the equation above looks a lot like the Pythagorean theorem, where resistance is in the horizontal direction and reactance is in the vertical direction; it forms a triangle with Z as the hypotenuse.

Just focusing on the LHS, the reactance X takes an $\omega$ variable, which is the angular frequency of the circuit, whereas R, L, and C are constants of the circuit and therefore do not change over the operating lifetime of the circuit. The implication $$(\omega L - \frac{1}{\omega C})^2 \gg R^2$$ is just analyzing the edge case when the frequency of the circuit is massive. You could also take the frequency really close to zero and you'll see the implication above still holds! This is just a method to analyze edge cases of a circuit.

For instance, fix R, L, C and let the frequency $\omega$ get larger and larger toward infinity. The inductance term balloons to infinity, and the capacitance term goes to zero. IF $\omega \rightarrow \infty$ THEN $$(\infty L - \frac{1}{\infty C})^2 \gg R^2$$ $$(\infty L - 0)^2 \gg R^2 $$ $$(\infty L)^2 \gg R^2 $$ $$\infty ^2 \gg R^2 $$ $$\infty \gg R^2 $$ So the implication holds true: as frequency gets large, the reactance will be so much larger than the resistance that we can essentially disregard resistance in the impedance equation.

The other case: IF $\omega \rightarrow 0$ THEN $$(0L - \frac{1}{0C})^2 \gg R^2 $$ $$(0 - \frac{1}{0})^2 \gg R^2 $$ $$(-\infty)^2 \gg R^2 $$ $$\infty \gg R^2 $$ Again the implication holds true. We can for sure say that as frequency gets small (or large), the reactance becomes so much larger than the resistance that resistance is a negligible term in the impedance equation and thus can be omitted.
Stated another way: IF $$ X^2 \gg R^2 $$ THEN $$ Z \approx \sqrt{X^2} $$ So to tie into the triangle visualization, when making the frequency massive or super small, the impedance triangle starts to look like a straight vertical line. Thus $$X = |Z|$$ $$ \omega L - \frac{1}{\omega C} = \pm100R$$
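The same edge-case reasoning can be checked numerically; the component values below are arbitrary assumptions, chosen only to show the reactance term dwarfing $R^2$ at both frequency extremes:

```python
import math

# Series RLC impedance magnitude |Z| = sqrt(R^2 + (wL - 1/(wC))^2).
# At very low and very high angular frequency w, X^2 >> R^2, so |Z| ~ |X|.
R, L, C = 100.0, 1e-3, 1e-6   # ohms, henries, farads (illustrative values)

def reactance(w):
    return w * L - 1 / (w * C)

def z_mag(w):
    return math.sqrt(R**2 + reactance(w)**2)

for w in (1e2, 1e5, 1e9):     # low, near-resonance, high
    X = reactance(w)
    print(f"w={w:.0e}  X^2/R^2={X**2 / R**2:.3g}  |Z|={z_mag(w):.3g}")
```

Near resonance the ratio drops to order 1, which is exactly where the approximation fails and R matters again.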
Here’s something I thought of when I couldn’t sleep last night. The curvature tensor of a Kähler metric can be viewed as a Hermitian form on \(\bigwedge^{1,1} T_X^*\) by mapping \(\operatorname{End} T_X \to \bigwedge^{1,1} T_X^*\) via the metric. If we’re on a compact Kähler manifold with zero first Chern class, then for each Kähler class \(\omega\) and \((1,1)\)-classes \(u, v\), we can pick the Ricci-flat metric in \(\omega\) and the harmonic representatives of \(u, v\). If \(R\) is the curvature tensor of the metric, viewed as a Hermitian form, we can then set \[ b(u,v)(\omega) := \int_X R(u, v) \, dV_{\omega}. \] This defines a smooth bilinear form \(b\) on the tangent space of the Kähler cone of \(X\). Besides being fun times, can we say anything interesting about \(b\)? For example, what is its norm with respect to the Riemannian metric on the Kähler cone, or its trace with respect to that metric? Can we integrate it over some subset of the cone?
Let $H$ be a complex, separable Hilbert space, and $T:H \rightarrow H$ a linear, bounded operator. Assume that $$\sigma(T) = \sigma(T^*) = \{ \lambda \in \mathbb{C}: a \leq |\lambda| \leq b \}$$ for $0< a < b$. Now assume that for $a < |\lambda| < b$ we have that $\lambda$ is an eigenvalue for $T^*$ but $T - \lambda I$ is bounded below. I'm studying a proof which asserts that in this case, $(T - \lambda I)^*$ has infinite-dimensional kernel. Why? I'm trying to prove that if an operator is bounded below then its adjoint fulfills this property, but I'm stuck. My first attempt is to note that $\ker ( (T - \lambda I)^*) = \overline{ \operatorname{Im}(T - \lambda I)}^\perp$ and try to compute the codimension of $\operatorname{Im}(T- \lambda I)$, where the fact that $\lambda$ is an eigenvalue may help. Can anyone help me? Thank you very much.
For the maximum: Suppose we have fixed values $x_1 \leq \frac{1}{n}$ and $x_n \geq \frac{1}{n}$. Then there is a unique point $x^*=(x_1, x_2, \dots, x_n)$ satisfying $\sum x_i=1$ with at most one index $j$ satisfying $x_1 < x_j < x_n$ (imagine starting with all the variables equal to $x_1$, then increasing them one by one to $x_n$). I claim this is where the unique maximum of your function is. Consider any other point in the domain, and suppose it has $x_1<x_i\leq x_j<x_n$ for some $i \neq j$. Let $\epsilon = \min\{x_i-x_1, x_n-x_j\}$. Replacing $x_i$ by $x_i'=x_i-\epsilon$ and $x_j$ by $x_j'=x_j+\epsilon$ maintains the $\sum x_i=1$ constraint, while decreasing the number of "interior to $(x_1, x_n)$" variables by one. Furthermore, the new point is better for our objective function: In the sum of squares objective we've replaced $x_i^2+x_j^2$ by $$x_i'^2+x_j'^2=(x_i-\epsilon)^2+(x_j+\epsilon)^2 = x_i^2+x_j^2 + 2 \epsilon^2 + 2 \epsilon(x_j-x_i) > x_i^2+x_j^2.$$ Repeatedly following this process, we'll eventually reach the point $x^*$ from our arbitrary point, increasing the objective at every step. The key idea hiding in the background here is that (as Michael Rozenberg noted) the function $x^2$ is convex. So if we want to maximize $\sum x_i^2$ given a fixed $\sum x_i$, we want to push the variables as far away from each other as possible. The $x_1$ and $x_n$ constraints place limits on this, so effectively what ends up happening is we push points out to the boundary until we can't push them out any further. The minimum you observed is the reverse of this: To minimize the sum of a convex function for fixed $\sum x_i$ we push all the inputs together as much as possible (this corresponds to Jensen's Inequality).
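The exchange step in the argument above is easy to check numerically; the particular values below are arbitrary assumptions satisfying the constraints:

```python
# Exchange argument check: pushing an interior pair (x_i, x_j) apart by the
# largest legal epsilon preserves sum(x) and strictly increases sum(x^2).
x1, xn = 0.05, 0.40                 # assumed fixed smallest/largest values
x = [x1, 0.15, 0.20, 0.20, xn]      # example point with sum(x) == 1

def push(v, i, j):
    """Move v[i] down toward x1 and v[j] up toward xn by the same epsilon."""
    eps = min(v[i] - x1, xn - v[j])
    w = list(v)
    w[i] -= eps
    w[j] += eps
    return w

before = sum(t * t for t in x)
y = push(x, 1, 3)
after = sum(t * t for t in y)
print(before, after)  # the sum of squares strictly increases
```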
November 30th, 2018, 03:38 AM #1

Need explanation

Let $\displaystyle A$ denote the largest of the numbers $\displaystyle a_1 , a_2 , \ldots , a_p$, for $\displaystyle p,n \in \mathbb{N}$. I need an explanation for $$\frac{A}{\sqrt[n]{p}}\leq \sqrt[\displaystyle n]{\frac{a^{n}_1+a^{n}_2 +\cdots+a^{n}_p } {p}}\leq A$$ and how to show it is true (how to derive it).

November 30th, 2018, 08:28 AM #2

The only restriction on the $a_k$ is that $\max{(a_k)} = A$. So if we choose the sequence $a_1=A,~a_k = 0$ for $k=2,\ldots,p$, the middle expression becomes $\dfrac{A}{\sqrt[n]{p}}$, attaining the lower bound. Now consider the sequence $a_k = A$ for all $k$. This should produce the largest value of that expression: $$\sqrt[n]{\dfrac{ p A^n}{p}} = A \sqrt[n]{\dfrac{p}{p}} = A$$ and thus we see $$\dfrac{A}{\sqrt[n]{p}} \leq \sqrt[n]{\dfrac{\sum \limits_{k=1}^p~a_k^n}{p}} \leq A$$
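Both bounds can also be checked on random nonnegative sequences (the test data below is arbitrary); the reason they hold is that $A^n \le \sum_k a_k^n \le pA^n$:

```python
import random

# Numeric check of A / p**(1/n) <= (mean of a_k^n)**(1/n) <= A
# for random nonnegative sequences.
rng = random.Random(1)
for _ in range(100):
    p = rng.randint(1, 10)
    n = rng.randint(1, 6)
    a = [rng.uniform(0, 5) for _ in range(p)]
    A = max(a)
    mean_root = (sum(t**n for t in a) / p) ** (1 / n)
    assert A / p ** (1 / n) - 1e-9 <= mean_root <= A + 1e-9
print("bounds hold")
```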
Research Open Access Published: Multiplicity of positive radial solutions of p-Laplacian problems with nonlinear gradient term Boundary Value Problems volume 2017, Article number: 36 (2017)

Abstract

In the present paper, we prove the existence of at least three radial solutions of the p-Laplacian problem with nonlinear gradient term and the corresponding one-parameter problem. Here Ω is a unit ball in \(\mathbb{R}^{N}\). Our approach relies on the Avery-Peterson fixed point theorem. In contrast with the usual hypotheses, no asymptotic behavior is assumed on the nonlinearity f with respect to \(\phi_{p}(\cdot)\).

Introduction

In the present paper, we are concerned with the multiplicity of positive radial solutions to the quasilinear elliptic p-Laplacian problem with nonlinear gradient term and the corresponding one-parameter problem, where \(\Omega\subset\mathbb{R}^{N}\) is a unit ball in \(\mathbb{R} ^{N}\), \(\Delta_{p}u=\operatorname{div}(\vert \nabla u\vert ^{p-2}\nabla u)\) is the p-Laplacian with \(p>1\), and \(f:[0,+\infty)\times[0,+\infty) \times[0,+\infty)\rightarrow[0,+\infty)\) is continuous with \(f(r,s,t)>0\) for all \((r,s,t)\in(0,1]\times(0,+\infty)\times[0,+ \infty)\).

In recent years, the elliptic p-Laplacian problems with nonlinear gradient term have been extensively studied via different methods [1–6], for example, critical point theory, Schauder’s fixed point theorem, Schaefer’s fixed point theorem, sub- and supersolutions, and so on. However, most of these results are concerned with the existence of one or two solutions, and only a few works refer to the existence of three solutions for problems (1.1) and (1.2). In 2012, Bueno et al.
[1] considered the p-Laplacian problem with dependence on the gradient, where \(\Omega\subset\mathbb{R}^{N}\) (\(N > 1\)) is a smooth bounded domain, \(\omega: \Omega\to\mathbb{R}\) is a continuous nonnegative function with isolated zeros, and the \(C^{1}\)-nonlinearity \(f: [0,\infty) \times[0,\infty) \to[0,\infty)\) satisfies some local hypotheses. By applying the Schauder fixed point theorem and sub- and supersolutions, the authors showed that problem (1.3) has a positive solution. Moreover, as an application, the authors obtained that there exists \(\lambda^{*}>0\) such that the p-growth one-parameter problem with \(1 < q < p\) has a positive solution for each \(\lambda\in(0, \lambda^{*}]\).

When the nonlinearity f does not depend on the gradient, He [7] considered the p-Laplacian problem and, using the Leggett-Williams fixed point theorem, established the existence of at least three radial solutions. For other works concerned with p-Laplacian problems, we refer the reader to [8–18, 20, 21].

Motivated by the above works, the aim of this paper is to study the multiplicity of positive radial solutions of problems (1.1) and (1.2). Under the hypothesis that f has a local behavior and need not satisfy a superlinear condition at the origin or a sublinear condition at +∞ with respect to \(\phi_{p}(s):=\vert s\vert ^{p-2}s\), \(s\in \mathbb{R}\), by using the Avery-Peterson fixed point theorem we obtain the existence of triple radial solutions of the above problems. To the best of our knowledge, problems (1.1) and (1.2) have not been studied via this fixed point theorem.

Main results

Our approach to problem (2.1) relies upon the Avery-Peterson fixed point theorem, which we recall here for the convenience of the reader. Let γ and θ be nonnegative continuous convex functionals on P, α be a nonnegative continuous concave functional on P, and ψ be a nonnegative continuous functional on P.
Then for positive real numbers a, b, c, and d, we define the convex sets and the closed set The following fixed point theorem due to Avery and Peterson is fundamental in the proofs of our main results. Lemma 2.1 [19] Let P be a cone in a real Banach space E. Let γ and θ be nonnegative continuous convex functionals on P, α be a nonnegative continuous concave functional on P, and ψ be a nonnegative continuous functional on P satisfying \(\psi(\lambda x)\leq\lambda\psi(x)\) for \(0 \leq\lambda\leq1\) such that, for some positive numbers M and d, for all \(x\in\overline{P(\gamma,d)}\). Suppose that \(A:\overline{P( \gamma,d)}\rightarrow\overline{P(\gamma,d)}\) is completely continuous and there exist positive numbers a, b, and c with \(a< b\) such that (i) \(\{x\in P(\gamma,\theta,\alpha,b,c,d):\alpha(x)>b \}\neq\emptyset\) and\(\alpha(Ax)>b\) for\(x\in P(\gamma,\theta, \alpha,b,c,d)\); (ii) \(\alpha(Ax)>b\) for\(x\in P(\gamma,\alpha,b,d)\) with\(\theta(Ax)>c\); (iii) \(0\notin R(\gamma, \psi, a, d)\) and\(\psi(Ax)< a\) for\(x\in R(\gamma, \psi, a, d)\) with\(\psi(x)=a\). Then, A has at least three fixed points \(x_{1},x_{2},x_{3}\in \overline{P( \gamma,d)}\) such that Remark 2.1 In Lemma 2.1, if \(\gamma(u)\leq d\) and \(u\in P\) imply that \(\theta(u)\leq c\) and \(u\in P\), then assumption (i) implies assumption (ii). We further take \(E=(C^{1}[0,1],\Vert \cdot \Vert )\) with the maximum norm and define the cone \(P\subset E\) by Now we define the nonlinear operator A on P as follows: Then \((Au)(r)\geq0\) for all \(r\in[0,1]\), and \((Au)'(0)=(Au)(1)=0\), which implies \(A(P)\subset P\). Moreover, by a standard argument it is easy to show that \(A:P\rightarrow P\) is completely continuous. In addition, it can be easily proved that u is a solution of problem (2.1) if \(u\in P\) is a fixed point of the nonlinear operator A. 
Define the nonnegative continuous concave functional α, the nonnegative continuous convex functionals θ, γ, and the nonnegative continuous functional ψ on the cone P by where \(\eta\in(0,1)\). Then it is easy to see that \(\alpha(u) \leq\psi(u)\) and \(\Vert u\Vert \leq\gamma(u)\) for \(u\in P\). Theorem 2.1 Assume that there exist constants a, b, d, and η with \(0< a< b\leq\eta d\) such that (H 1): \(f(r,s,t)\leq N\phi_{p}(d)\) for all\((r,s,t) \in[0,1]\times[0,d]\times[0,d]\); (H 2): \(f(r,s,t)\geq\frac{N}{(1-\eta)^{N}}\phi _{p}(\frac{b}{ \eta})\) for all\((r,s,t)\in[0,1-\eta]\times[b,d]\times[0,d]\); (H 3): \(f(r,s,t)\leq N\phi_{p}(a)\) for all\((r,s,t) \in[0,1]\times[0,a]\times[0,d]\). Then, problem(1.1) has at least three radial solutions\(u_{1}\), \(u _{2}\), \(u_{3}\) satisfying Proof Choosing \(c=d\), we divide the proof into three steps. Step 1. We show that \(A:\overline{P(\gamma,d)}\rightarrow \overline{P( \gamma,d)}\). To do this, let \(u\in\overline{P(\gamma,d)}\). Then \(-d\leq u'(r)\leq0\) for \(r\in[0,1]\), and thus \(0\leq u(r)=\int_{1} ^{r}u'(s)\,\mathrm{d}s\leq\int_{0}^{1}\vert u'(s)\vert \,\mathrm{d}s\leq d\) for \(r\in[0,1]\). Hence, from assumption (H 1) it follows that Therefore, \(A:\overline{P(\gamma,d)}\rightarrow \overline{P(\gamma,d)}\). Step 2. We check assumption (i) of Lemma 2.1. To do this, let \(u(r)\equiv b/\eta\) on \([0,1]\). Then \(\gamma(u)=0< d\), \(\theta(u)=b/ \eta\leq d=c\), \(\alpha(u)=b/\eta>b\). Hence, \(\{x\in P(\gamma,\theta,\alpha,b,c,d):\alpha(x)>b\}\neq\emptyset\). Let \(u\in P(\gamma,\theta,\alpha,b,c,d)\). Then \(\gamma(u)\leq d\), \(\theta(u)\leq c=d\), \(\alpha(u)\geq b\), and thus So from (H 2) we have Step 3. We check assumption (iii) of Lemma 2.1. Notice that \(\psi(0)=0< a\), and thus \(0\notin R(\gamma, \psi, a, d)\). Let \(u\in R(\gamma, \psi, a, d)\) with \(\psi(u)=a\). Then \(\gamma(u) \leq d\) and \(\psi(u)=a\), and hence \(-d\leq u'(r)\leq0\) and \(0\leq u(r)\leq a\) for all \(r\in[0,1]\). 
It follows from (H 3) that In summary, by Remark 2.1 A has at least three fixed points \(u_{1},u_{2},u_{3}\in\overline{P(\gamma,d)}\), which are radial solutions of problem (1.1) satisfying (2.3). This completes the proof of the theorem. □ Remark 2.2 In Theorem 2.1, assumptions (H 1) and (H 3) can be replaced by (\(\mathrm{H}_{1}'\)): \(f^{\infty}:=\varlimsup _{s+t\rightarrow+\infty}\max_{r\in [0,1]}\frac{f(r,s,t)}{ \phi_{p}(s+t)}< N/\phi_{p}(2)\) (\(\mathrm{H}_{3}'\)): \(f^{0}:=\varlimsup _{s\rightarrow0^{+}} \max_{(r,t)\in[0,1]\times[0,d]}\frac{f(r,s,t)}{\phi_{p}(s)}< N\), Theorem 2.2 Assume that there exist constants a, b, d, and η with \(0< a< b<\eta d<d \) such that To illustrate our main results, we present the following example. Example 2.1 Consider the Dirichlet problem where Ω is a unit ball in \(\mathbb{R}^{2}\), \(p={\frac{3}{2}}\), and Choose \(a=1\), \(b=2\), \(d=100\), and \(\eta=1/2\). Since \(p=3/2\) and \(N=2\), it follows that So, \(f(r,s,t)\) satisfies (i) \(f(r,s,t)\leq17< N\phi_{p}(d)\), \(\forall(r,s,t) \in[0,1]\times[0,100]\times[0,100]\); (ii) \(f(r,s,t)\geq16.25>\frac{N}{(1-\eta)^{N}}\phi _{p}(\frac{b}{ \eta})\), \(\forall(r,s,t)\in[0,\frac{1}{2}]\times[2,100]\times[0,100]\); (iii) \(f(r,s,t)\leq2=N\phi_{p}(a)\), \(\forall(r,s,t) \in[0,1]\times[0,1]\times[0,100]\). Noticing that \(f(r,0,0)\not\equiv0\) on \([0,1]\), we have that the three radial solutions \(u_{1}\), \(u_{2}\), \(u_{3}\) are positive. References 1. Bueno, H, Ercole, G, Zumpano, A, Ferreira, WM: Positive solutions for the p-Laplacian with dependence on the gradient. Nonlinearity 25, 1211-1234 (2012) 2. Bueno, H, Ercole, G: A quasilinear problem with fast growing gradient. Appl. Math. Lett. 26, 520-523 (2012) 3. Faraci, F, Motreanu, D, Puglisi, D: Positive solutions of quasi-linear elliptic equations with dependence on the gradient. Calc. Var. Partial Differ. Equ. 54, 525-538 (2015) 4. 
Filippucci, R, Pucci, P, Rigoli, M: On entire solutions of degenerate elliptic differential inequalities with nonlinear gradient terms. J. Math. Anal. Appl. 356, 689-697 (2009) 5. Iturriaga, L, Lorca, S, Sánchez, J: Existence and multiplicity results for the p-Laplacian with a p-gradient term. Nonlinear Differ. Equ. Appl. 15, 729-743 (2008) 6. Iturriaga, L, Lorca, S, Ubilla, P: A quasilinear problem without the Ambrosetti-Rabinowitz-type condition. Proc. R. Soc. Edinb., Sect. A 140, 391-398 (2010) 7. He, X: Multiple radial solutions for a class of quasilinear elliptic problems. Appl. Math. Lett. 23, 110-114 (2010) 8. Ambrosetti, A, Brezis, H, Cerami, C: Combined effects of concave and convex nonlinearities in some problems. J. Funct. Anal. 122, 519-543 (1994) 9. Ambrosetti, A, Azorero, JG, Peral, I: Multiplicity results for some nonlinear elliptic equations. J. Funct. Anal. 137, 219-242 (1996) 10. Ambrosetti, A, Garcia, J, Peral, I: Quasilinear equations with a multiple bifurcation. Differ. Integral Equ. 24, 37-50 (1997) 11. Dai, G, Ma, R: Unilateral global bifurcation phenomena and nodal solutions for p-Laplacian. J. Differ. Equ. 252, 2448-2468 (2012) 12. Dai, G, Ma, R, Lu, Y: Bifurcation from infinity and nodal solutions of quasilinear problems without the signum condition. J. Math. Anal. Appl. 397, 119-123 (2013) 13. Dai, G: Bifurcation and one-sign solutions of the p-Laplacian involving a nonlinearity with zeros. Discrete Contin. Dyn. Syst. 36, 5323-5345 (2016) 14. De Figueiredo, DG, Lions, P-L: On pairs of positive solutions for a class of semilinear elliptic problems. Indiana Univ. Math. J. 34, 591-606 (1985) 15. Garcia, J, Peral, I: Some results about the existence of a second positive solution in a quasilinear critical problem. Indiana Univ. Math. J. 43, 941-957 (1994) 16. Garcia, J, Manfredi, J, Peral, I: Sobolev versus Hölder minimizers and global multiplicity for some quasilinear elliptic equations. Commun. Contemp. Math. 2, 385-404 (2000) 17. 
Iturriaga, L, Massa, E, Sánchez, J, Ubilla, P: Positive solutions of the p-Laplacian involving a superlinear nonlinearity with zeros. J. Differ. Equ. 248, 309-327 (2010) 18. Prashanth, S, Sreenadh, K: Multiplicity results in a ball for p-Laplace equation with positive nonlinearity. Adv. Differ. Equ. 7, 877-896 (2002) 19. Avery, RI, Peterson, AC: Three positive fixed points of nonlinear operators on ordered Banach spaces. Comput. Math. Appl. 42, 313-322 (2001) 20. Ma, R: On a conjecture concerning the multiplicity of positive solutions of elliptic problems. Nonlinear Anal. 27, 775-780 (1996) 21. Marcos do Ó, J, Ubilla, P: Multiple solutions for a class of quasilinear elliptic problems. Proc. Edinb. Math. Soc. 46, 159-168 (2003) Acknowledgements The authors thank the referee for valuable suggestions, which led to improvement of the original manuscript. This work was supported by the Education Department of JiLin Province ([2016]45). Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
AuNem The limiting case \(q\!\rightarrow\!0\) can also be considered. The value of \(q\) can be indicated as a subscript after the name of the function. \(\mathrm{SuNem}_q(\mathrm{AuNem}_q(z))=z\) \(G=\mathrm{AuNem}_q\) is a solution of the Abel equation for the transfer function \(T=\mathrm{Nem}_q\) : \(G(T(z))=G(z)+1\) For real values of the argument, an explicit plot of AuNem is shown in Fig.1; \(y\!=\!\mathrm{AuNem}_q(x)~\) is drawn versus \(x\) for \(q\!=\!0\), \(q\!=\!1\) and \(q\!=\!2\). For \(q\!=\!0\), \(q\!=\!1\) and \(q\!=\!2\), complex maps of the function \(\mathrm{AuNem}_q\) are shown in figures 2, 3, 4. Asymptotic behaviour of AuNem The asymptotic behaviour of AuNem at small values of the argument (at least for positive argument) is determined by the asymptotic behaviour of the function SuNem at large negative values of the real part of the argument. Inverting the corresponding expansion for SuNem, the asymptotic can be written as follows. For some positive integer \(M\), let \(\mathrm{AuNe}_{q,M}(z)= \) \( \displaystyle -\frac{1}{2 z^2}+\frac{q}{z}\) \( \displaystyle +\frac{1}{2} \left(2 q^2+3\right) \log (z)\) \( \displaystyle +\frac{q^2}{2}+\frac{1}{4} \left(2 q^2+3\right) \log (2)\) \( \displaystyle +\sum _{n=1}^M c(n) z^n \) \(\mathrm{AuNe}_{q}(z)=\mathrm{AuNe}_{q,M}(z) +O(z^{M+1})\) The coefficients \(c\) can be found from the asymptotic analysis of the equation \(\mathrm{SuNe}_q(\mathrm{AuNe}_q(z))=z\). The coefficients can also be deduced from the Abel equation; in this case, some tens of coefficients can be evaluated with Mathematica. 
The Mathematica code is reproduced below (a line su[m] = Extract[st[m], 1], needed before the Join at m = 2, is restored):
T[z_] = z + z^3 + q z^4
P[m_, L_] := Sum[a[m, n] L^n, {n, 0, IntegerPart[m/2]}]
F[m_, z_] := 1/(-2 z)^(1/2) (1 - q/(-2 z)^(1/2) + Sum[P[n, Log[-z]]/(-2 z)^(n/2), {n, 2, m}])
G[m_, x_] := -1/(2 x^2) + q/x + q^2/2 + 1/4 (3 + 2 q^2) Log[2] + 1/2 (3 + 2 q^2) Log[x] + Sum[c[n] x^n, {n, 1, m}]
Series[ReplaceAll[F[3, h + G[3, z]], a[2, 1] -> 1/4 (3 + 2 q^2)], {z, 0, 4}]
m = 1;
sg[m] = Coefficient[Series[G[m + 3, T[z]] - G[m + 3, z] - 1, {z, 0, 3}], z^(m + 2)]
st[m] = Solve[sg[m] == 0, c[m]]
su[m] = Extract[st[m], 1]
SU[m] = su[m];
m = 2;
sf[m] = Series[ReplaceAll[G[m + 3, T[z]] - G[m + 3, z] - 1, SU[m - 1]], {z, 0, m + 2}]
sg[m] = Simplify[Coefficient[sf[m] 2^m, z^4]]
st[m] = Solve[sg[m] == 0, c[m]]
su[m] = Extract[st[m], 1]
SU[m] = Join[SU[m - 1], su[m]]
m = 3;
sf[m] = Series[ReplaceAll[G[m + 3, T[z]] - G[m + 3, z] - 1, SU[m - 1]], {z, 0, m + 2}]
sg[m] = Simplify[Coefficient[sf[m] 2^m, z^(m + 2)]]
st[m] = Solve[sg[m] == 0, c[m]]
su[m] = Extract[st[m], 1]
SU[m] = Join[SU[m - 1], su[m]]
(* and so on *)
After accumulating 40 substitutions in the variable SU, the C++ code for evaluation of the truncated asymptotic can be generated with the command below:
For[m = 1, m < 41, Print["C[", m, "]=", CForm[ReplaceAll[HornerForm[ReplaceAll[c[m], SU[m]]], {q^2 -> K, q -> Q}]], ";"]; m++;]
Extension to the whole complex plane The asymptotic expansion of the function AuNe above is valid only in a small part of the vicinity of the positive part of the real axis, while the argument is small. This representation can be extended using the inverse function of the Nemtsov function; this inverse function is denoted as ArqNem. It is important to use namely ArqNem, as the two other inverse functions, ArcNem and ArkNem, do not provide the convergence of the iterative procedure below. 
For some fixed integer \(M\!>\!0\), the Abel function \(G=\mathrm{AuNe}_q=\mathrm{SuNe}_q^{-1}\) appears as the limit: \(\displaystyle G(z)=\lim_{n\rightarrow \infty} \Big(G_M(\mathrm{ArqNem}^n(z))+n\Big)\) where \(G_{M}(z)=\) \( \displaystyle -\frac{1}{2 z^2}+\frac{q}{z} +\frac{1}{2} \left(2 q^2+3\right) \log (z) +\frac{q^2}{2}+\frac{1}{4} \left(2 q^2+3\right) \log (2)\) \( \displaystyle +\sum _{n=1}^M c(n) z^n \) The limit does not depend on \(M\), although for higher \(M\) the rate of convergence is higher. Practically, at \(M\!=\!40\), to get at least 14 decimal digits, it is sufficient to use the approximation of the limit with \(n\!=\!10\); id est, the complex double implementation evaluates function ArqNem 20 times, and once evaluates the approximation \(G\) above. Once the function \(\mathrm{AuNe}_q\) is defined, the inverse of the superfunction SuNem appears as follows: \(\mathrm{AuNem}_q(z)=\mathrm{AuNe}_q(z)-\mathrm{AuNe}_q(1)\) Such a definition automatically provides the relation \(\mathrm{AuNem}_q(1)=0\), as it should, since \(\mathrm{SuNem}(0)\!=\!1\). The precision of the implementation of function AuNem can be verified by evaluation of the agreements \(\displaystyle A_{\mathrm{inver}}(z)= -\lg\left(\frac {|\mathrm{SuNem}_q(\mathrm{AuNem}_q(z))-z|} {|\mathrm{SuNem}_q(\mathrm{AuNem}_q(z))|+|z|} \right) \) \(\displaystyle A_{\mathrm{abel}}(z)= -\lg\left(\frac {|\mathrm{AuNem}_q(\mathrm{Nem}_q(z))-(\mathrm{AuNem}_q(z)\!+\!1)|} {|\mathrm{AuNem}_q(\mathrm{Nem}_q(z))|+|\mathrm{AuNem}_q(z)\!+\!1|} \right)\) Roughly, these agreements indicate how many decimal digits one can expect to get from the numerical implementation. The numerical implementations used to plot the figures provide of the order of 14 significant figures; this is close to the maximal precision reachable with complex double variables. Branch points and the cut lines The definition of AuNem and its representation above are used to plot figure 1 and to make the complex maps in figures 2, 3, 4. 
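The claim behind this construction, namely that the truncated expansion satisfies the Abel equation up to higher-order terms, can be checked numerically. The sketch below (Python; the function names T, G3, and abel_residual are mine) uses the transfer function \(T(z)=z+z^3+qz^4\) from the Mathematica code above and only the first three terms of the asymptotic, dropping the additive constant, since it cancels in the Abel-equation residual. With the logarithmic coefficient \((2q^2+3)/2\), the residual vanishes as \(O(z^3)\) for \(z\to+0\):

```python
import math

def T(z, q):
    # Nemtsov transfer function, as in the Mathematica code above
    return z + z**3 + q * z**4

def G3(z, q):
    # Leading terms of the asymptotic of the Abel function for small z > 0;
    # the additive constant is dropped (it cancels in the residual below)
    return -1.0 / (2.0 * z**2) + q / z + 0.5 * (2.0 * q**2 + 3.0) * math.log(z)

def abel_residual(z, q):
    # G(T(z)) - G(z) - 1 should vanish as O(z^3) when the coefficients are right
    return G3(T(z, q), q) - G3(z, q) - 1.0
```

Halving z should shrink the residual roughly by a factor of 8, confirming the cubic order; a wrong logarithmic coefficient would leave an \(O(z^2)\) residual instead.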
In these figures, additional grid lines are added that correspond to \(x\!=\!x_0\) and to \(y\!=\!\pm y_0\), such that \(x_0\!+\!\mathrm i y_0\) is a branch point of the function ArqNem; it is also a branch point of the function AuNem. The definition of AuNem indicates that its branch points and cut lines coincide with those of ArqNem. The values of \(x_0\) and \(y_0\) are determined by the parameter \(q\) through the function NemBran: \(x_0\!+\!\mathrm i\, y_0 = \mathrm{NemBran}(q)\) The parametric plot \(x+\mathrm i y= \mathrm{NemBran}(q)\) is shown in figure 5 for real values of \(q\) from zero to infinity. Note that at each application of ArqNem to the argument (which decrements the value of AuNem by unity), the argument moves farther from the cut lines, except for values in the vicinity of the real axis. This means that under iterates of ArqNem the argument of AuNem gradually approaches the range of validity of the asymptotic approximation, until the approximation provides the required precision. In the figures, routines are used that provide of the order of 14 significant figures; this greatly exceeds the precision required for the camera-ready versions of the figures, but the extra digits are useful for the numerical, computational test of performance of the C++ implementations, mapping the agreements \(A_{\mathrm{inver}}(x\!+\!\mathrm i y)\) and \(A_{\mathrm{abel}}(x\!+\!\mathrm i y)\) in the \(x\), \(y\) plane. This property does not hold by itself; it is due to the special choice of the cut lines of the function ArqNem. Other inverse functions of the Nemtsov function do not provide this property and therefore cannot be used for approximation of AuNem in the whole complex plane. However, they can be used for evaluation of AuNem in the vicinity of the real axis, where the values of the functions ArqNem, ArkNem and ArcNem coincide. 
Iterates of the Nemtsov function The iterates of the Nemtsov function can be expressed through the superfunction \(F=\mathrm{SuNem}_q\) and the Abel function \(G=\mathrm{AuNem}_q\): \(\mathrm{Nem}_q^{\,n}(z)=F(n+G(z))\) In this representation, the number \(n\) of iterates need not be integer. References
First of all, in lower dimensions (2+1 and 1+1) gravity is much simpler. This is because in 3d the curvature tensor is completely determined by the Ricci tensor (and the metric at a given point), while in 2d the curvature tensor is completely determined by the scalar curvature. This means that there are no purely gravitational dynamical degrees of freedom, in particular no gravitational waves. General note: a horizon (which is the defining feature of a black hole), representing our inability to obtain information about events behind it, always implies an entropy for the corresponding solution. So in all of these black hole models there is some black hole thermodynamics. For Hawking radiation one needs to include quantum effects into consideration and also radiative degrees of freedom (if there are no gravitons or photons or any other '-ons', then nothing would radiate). Let us start with the case of 3d (that is, 2+1). The Einstein equations in 2+1 spacetime without any matter fields simply imply that spacetime is flat, that is, 'constructed' from pieces of Minkowski spacetime. It may have nontrivial topology, so 2+1 gravity is a topological theory, but no black hole solutions exist. This model (in the mathematical sense) is exactly solvable. To introduce nontrivial 2+1 solutions we can add matter or a cosmological constant (which could be considered the simplest form of matter). It turns out that spacetimes with negative cosmological constant (which would locally be composed of pieces of anti-de Sitter spacetime) do admit a black hole solution: the BTZ black hole (named after the authors of the original paper). 
This solution shares many of the characteristics of the Kerr black hole: it has mass and angular momentum, it has an event horizon, an inner horizon, and an ergosphere; it occurs as an endpoint of gravitational collapse (for that, of course, we need to include matter beyond the cosmological constant in the consideration); and it has a nonvanishing Hawking temperature and interesting thermodynamic properties (see, for instance, the paper by S. Carlip). The Hawking temperature of the BTZ black hole is $T\sim M^{1/2}$, which, in contrast to the (3+1)-dimensional case, goes to zero as $M$ decreases. Additionally, the simplicity of the model allows a quantum treatment of it, including a statistical computation of the entropy (see references in the paper by E. Witten). There are many other variations of solutions in 2+1 gravity theories (for instance, by including dilaton and EM fields, scalar fields, etc.), but all of them require a negative cosmological constant. This is because the dominant energy condition forbids the existence of black holes in 2+1 dimensions (see here). Now to 1+1 dimensions. Locally, all GR models in 1+1D are flat. So to include nontrivial spacetime geometry we need to modify gravity. This can be done by including a dilaton field. The resulting models often admit nontrivial geometries with black holes (see the paper by Brown, Henneaux, Teitelboim, the wiki page on the CGHS model, the paper by Witten on black holes in a gauged WZW model, and this review). These black hole solutions also admit nontrivial thermodynamics and Hawking radiation. In particular, the Hawking temperature is proportional to the mass, so as the black hole evaporates it becomes colder (unlike the 4D case, where $T \sim M^{-1}$). Now to higher-dimensional gravity. Gravity itself is much richer than in the lower-dimensional cases, so analogues of all 4D black holes also exist in higher dimensions, as well as some new black-hole-like solutions such as black strings and black p-branes. 
There are also multi-black-hole configurations, where several black holes are placed along a ring or a line such that the total force on each of them is zero, resulting in an equilibrium configuration. Since many uniqueness theorems for black holes only work in 3+1 dimensions, there are even solutions with nontrivial horizon topologies, such as black rings. I suggest looking at the Living Review recommended by Ben Crowell or at these lectures by N. Obers. The simplest black hole is the Schwarzschild–Tangherlini solution (the analogue of the Schwarzschild black hole), which is a vacuum solution of the Einstein field equations: Here $\mu = R_s^{d-3} = \frac{16 \pi G M}{(d-2)\Omega_{d-2}}$ is the mass parameter. This gives us the relationship between the mass and the Schwarzschild radius: $R_s \sim M^{1/(d-3)}$. The entropy is given by the Bekenstein-Hawking formula: $$S = \frac {\cal A}{4G}=\frac 14 \left(\frac{\Omega_{d-2} R_s^{d-2}}{ G} \right).$$The temperature can be found from the first law $ dS = d M / T $: $$T = \frac{d-3}{4 \pi R_s}.$$ The rotating solution (the generalization of the Kerr metric) is the Myers-Perry metric. Note that rotations in higher dimensions are more complex, so the angular momentum is represented by several parameters. Also note that many solutions with horizons elongated in one direction (such as black strings or black rings) turn out to be unstable via the Gregory-Laflamme instability, in which the smooth 'tubular' horizon develops growing perturbations of certain wavelengths. So possibly black strings and black rings tend to decay into droplet-like black holes along them (the exact mechanism is yet unknown). But of course the second law of thermodynamics would be observed, meaning that the total area of the horizons would increase.
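The consistency of these formulas is easy to verify numerically: the mass, entropy, and temperature above must satisfy the first law, dS/dM = 1/T, in every dimension. A small sketch (Python, units G = 1; the helper names are mine):

```python
import math

def sphere_area(n):
    # Surface area of the unit n-sphere: Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)
    return 2.0 * math.pi ** ((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)

def mass(R, d):
    # From mu = R_s^(d-3) = 16 pi G M / ((d-2) Omega_{d-2}), with G = 1
    return (d - 2) * sphere_area(d - 2) * R ** (d - 3) / (16.0 * math.pi)

def entropy(R, d):
    # Bekenstein-Hawking: S = A / (4G) = Omega_{d-2} R^(d-2) / 4
    return sphere_area(d - 2) * R ** (d - 2) / 4.0

def temperature(R, d):
    return (d - 3) / (4.0 * math.pi * R)

def first_law_gap(R, d, h=1e-6):
    # Relative mismatch between dS/dM (central finite difference) and 1/T
    dS_dM = (entropy(R + h, d) - entropy(R - h, d)) / (mass(R + h, d) - mass(R - h, d))
    return abs(dS_dM * temperature(R, d) - 1.0)
```

For d = 4 this reduces to the familiar relations R_s = 2M and T = 1/(8πM).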
I want to include some TeX output (mainly math) in an SVG image. The SVG is meant to be used in a presentation, and I would like to use a sans serif font for this (currently I use Biolinum, but another font would be acceptable as well). The font used in the TeX output should match the one used in the SVG. Therefore, I am using XeLaTeX and set the font to the same one I am using for the text in the SVG. The problem is with the math parts of the TeX output. Ideally, I would like the font used there to match the one of the text. After some searching, I understand that (currently) there is no sans serif font which fully supports Unicode math. However, is there some way to get "as close as possible"? What I mean by this is a setting where the glyphs from the font get used whenever possible and, if the font does not provide something, some fallback (which looks similar) is used. What I currently have is the following:

% !TEX program = xelatex
\documentclass{article}
\usepackage[no-math]{fontspec}
\renewcommand{\familydefault}{\sfdefault}
\setmainfont{Linux Libertine O}
\setsansfont{Linux Biolinum O}
\usepackage{mathspec}
\setallmainfonts(Digits,Latin,Greek){Linux Biolinum O}
\usepackage{lipsum}
\begin{document}
$\sum_{i = 0}^{n} a \gamma i \Sigma$ γɣ\textit{a} Σ
\lipsum[1]
\end{document}

However, as you can see, the font, for example, provides a capital Sigma, but it is not used for the sum. Even worse, the Sigma used for the sum has serifs and looks a bit out of place. Please note that this is meant as an example and that I am looking for a more general answer to the problem (than, e.g., replacing the sum by a capital Sigma).
The following identity seems to follow from a simple analysis of the sieve of Eratosthenes and inclusion-exclusion, where $p_i, p_j, p_k, \ldots$ denote distinct primes and $N$ is an integer $\geq 2$: $$\sum_{p_i\leq N} \lfloor N/p_i\rfloor - \sum_{p_i<p_j\leq N} \lfloor N/(p_i p_j)\rfloor + \sum_{p_i<p_j<p_k\leq N} \lfloor N/(p_i p_j p_k)\rfloor - \cdots = N - 1.$$ For example, for $N = 6$, $\lfloor 6/2 \rfloor + \lfloor6/3\rfloor + \lfloor6/5\rfloor - \lfloor6/(3\cdot 2)\rfloor = 3 + 2 + 1 - 1 = 5 = 6 - 1$. Is this identity correct? If wrong, can it be fixed? If right, it must be well known -- so apologies in advance for a silly question, but is there a published reference? Also, is there any way to remove $N$ from the LHS so it becomes an identity/inequality only on sums of reciprocals of prime products?
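The identity can be checked by brute force: it is inclusion–exclusion applied to counting the integers in [2, N], each of which is divisible by at least one prime ≤ N, and there are N − 1 of them. A small verification sketch (Python; the helper names are mine, and terms whose prime product exceeds N contribute a zero floor, so they are harmless to include):

```python
from itertools import combinations

def primes_upto(n):
    # Plain sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def inclusion_exclusion_sum(N):
    # Alternating sum of floor(N / product) over products of distinct primes <= N
    ps = primes_upto(N)
    total = 0
    for k in range(1, len(ps) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(ps, k):
            prod = 1
            for p in combo:
                prod *= p
            total += sign * (N // prod)
    return total
```

For N = 6 this reproduces the worked example, 3 + 2 + 1 − 1 = 5.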
What is meant by a theory being (1) perturbatively renormalizable, (2) perturbatively non-renormalizable, (3) non-perturbatively renormalizable, and (4) non-perturbatively non-renormalizable? In each case, what is at least one example of such a theory? Perturbatively renormalizable (or simply renormalizable) theories are those which can be consistently renormalized by tweaking the values of a finite number of parameters, to any order of perturbation theory. The key point here is that the finite number of parameters is fixed prior to choosing the order of perturbation theory. We have to be able to make sense of the theory to any order by tweaking the same finite set of parameters. Examples of renormalizable $4d$ QFTs include $\varphi^4$ (the parameters are the field-strength renormalization $Z$, the particle mass $m$ and the interaction coupling $\lambda$), Yukawa theory (both scalar and pseudoscalar), QED, and Yang-Mills for compact gauge groups. Perturbatively nonrenormalizable (or simply nonrenormalizable) theories are those which aren't perturbatively renormalizable. Examples include $\varphi^6$ in $4d$ and perturbative General Relativity. I've never heard the term "non-perturbatively renormalizable", but I suppose what is meant is finite. Finite theories are those which admit a well-defined quantum-mechanical definition with a Hilbert space (or a Gelfand triple), and physical observables as self-adjoint operators acting on it. An incredibly beautiful and nontrivial point here is that finite theories can have perturbative expansions which are actually nonrenormalizable. The best example here is General Relativity in $3d$. It was rigorously quantized by Witten, but its perturbative series is a nonrenormalizable asymptotic expansion. The ones which we can't formulate or define :) The logic is that quantum theories define effective actions, not the other way around. If we have a theory which can't be made sense of quantum-mechanically, then we don't have a theory at all. 
All these distinctions are somewhat old-fashioned in the modern understanding of QFTs as effective field theories, and whether a QFT is fundamentally renormalizable or not is a concern mostly for people who still believe in a QFT of everything. (Note that from a popularization point of view this seems to be very important, but one should keep in mind that most people using QFT/RG are not really working on this issue of a theory of everything.) (1) A perturbatively renormalizable theory is a QFT where, at each order of perturbation theory with a fixed UV cut-off $\Lambda$, one can redefine a finite number of parameters as a function of $\Lambda$ such that the limit $\Lambda\to\infty$ is now well defined. Note that this has to be done in that order: first perturbation, then $\Lambda\to \infty$. The two limits do not necessarily commute. For instance, the $\phi^4$ theory in 4D is perturbatively renormalizable but does not exist in the continuum limit ($\Lambda\to\infty$ right from the start); that is, if one insists that the theory is defined directly with an infinite cut-off, with a finite interaction constant, then only the free theory is well defined. (2) A perturbatively non-renormalizable theory is a QFT where this cannot be done. One needs to increase the number of parameters as one increases the order of the perturbation expansion. This does not mean that the theory is useless, only that one cannot get rid of the high-energy dependence with only a few parameters. This is the case for most theories. (3) A non-perturbatively renormalizable theory is a QFT where the continuum limit can be taken, with only a few parameters needed to completely parametrize it. However, if one tries to expand in the coupling constant and then take $\Lambda\to\infty$, the theory seems to be non-renormalizable. 
This idea is behind the asymptotic safety scenario of quantum gravity, where one tries to perform non-perturbative calculations to find a UV RG fixed point to control the theory. (4) A non-perturbatively non-renormalizable theory is the negative of the above. Note that the continuum limit (a theory non-perturbatively defined to exist in the limit $\Lambda\to\infty$) is of little interest for most applications of QFTs, since in statistical physics and condensed matter there is always a finite UV cut-off, and in HEP one can work with EFTs, sufficient to describe the energies reached in accelerators. See also this post: Why do we expect our theories to be independent of cutoffs?
I've been studying statistical mechanics for a while; the topic is very far from what I've learned until now, so my questions may be superficial. When trying to see what my entropy is in an isobaric-isothermal ensemble, which turns out to be applicable to reactive systems, if I write my partition function $\Delta$ without the degeneracy factor $\Omega$, the answer I get from $$S = -\sum_i p_i\ln p_i$$ is $$G = -T\ln\Delta$$ (with $T$ in energy units, $k_\mathrm{B} = 1$). However, if I take my system to favor a state, i.e., to be degenerate, I get $$G + T\ln\Omega = -T\ln\Delta$$ Anywhere I read about this ensemble, the relationship for $G$ is in the first form and not the second; can someone give hints to some reading for what I'm missing here?
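The discrepancy described above can be reproduced with a toy two-level system: if the degeneracy is folded into a single level probability instead of being summed over microstates, $S=-\sum p\ln p$ loses a $\langle\ln\Omega\rangle$ term, which is exactly the $T\ln\Omega$ mismatch. A minimal sketch (Python, $k_\mathrm{B}=1$; the level energies and degeneracies are arbitrary toy values, and the $PV$ term of the isobaric ensemble is omitted since it enters both bookkeepings identically):

```python
import math

beta = 0.7                      # inverse temperature, T = 1/beta
levels = [(0.0, 1), (1.3, 4)]   # toy (energy, degeneracy) pairs

Delta = sum(g * math.exp(-beta * E) for E, g in levels)
U = sum(g * E * math.exp(-beta * E) for E, g in levels) / Delta
T = 1.0 / beta

# (a) sum over microstates: each of the g degenerate states carries
# its own probability p = exp(-beta*E) / Delta
S_micro = -sum(g * (math.exp(-beta * E) / Delta)
               * math.log(math.exp(-beta * E) / Delta) for E, g in levels)

# (b) sum over levels: degeneracy folded into one level probability
# P = g * exp(-beta*E) / Delta
S_level = -sum((g * math.exp(-beta * E) / Delta)
               * math.log(g * math.exp(-beta * E) / Delta) for E, g in levels)

avg_lnO = sum((g * math.exp(-beta * E) / Delta) * math.log(g) for E, g in levels)

F_micro = U - T * S_micro   # equals -T ln(Delta) exactly
F_level = U - T * S_level   # off by +T * <ln Omega>, as in the question
```

The microstate bookkeeping reproduces the textbook relation exactly; the level bookkeeping is off by $T\langle\ln\Omega\rangle$.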
Name of function A name is important to deal with any object or subject, in order to distinguish it/him/her from other elements of the same set. For functions, the name is especially important because it allows one to denote a complicated expression, or an article in TORI, with a few characters. The name can be used to find the description or implementation of a function in TORI or any other database. The name can be used in the call of the function in a program code that is supposed to be translated or interpreted by some appropriate software. In TORI, most of the codes refer to the C++ language, but some are also in Mathematica. Those in Maple (see, for example, Maple and Tea) or in other languages may also appear. Policy of names in TORI Usually, the names of functions use ASCII characters, namely letters and, in exceptional cases, digits (a digit is one of the following symbols: 0,1,2,3,4,5,6,7,8,9), but the first character must be a letter. The name should not contain spaces, parentheses, nor special characters. For example, use of the sequence of characters "f(x)" as the name of a function is wrong. For a local name that has specific meaning only within some formula, paragraph or article, the function may be called just f, and the parentheses after it are used to indicate the argument, not to identify the function. For example, expressions like \(~f(x)=\int \exp(ikx) f(k) {\rm d}{k}~\) are nonsense, similar to writing \(\int \frac{f(x)}{\mathrm d x}\). Local name The local names have specific meaning within one or a few formulas, one or a few sections, and may have other meanings in other formulas. Usually, such a local name consists of a single letter of the Latin alphabet, but letters from other alphabets can be used too, under the condition that they are correctly treated by the LaTeX interpreter used in TORI. 
For example, the expression \(\displaystyle ~ {Ю}(x)= \int_{A}^{Я} \frac{鳥(\alpha)}{魚(x\!+\!\alpha)} \mathrm d \alpha~\) seems to be recognized, although there may be some problem with the italicization of non-English characters. Such extravagant names of variables and functions can be used in emergency cases, when all simple letters of the Latin and Greek alphabets already mean other objects and urgent help from the Russian and Japanese cultures is essential. However, at the translation to C++, such an expression may cause confusion, so, if possible, characters from the English alphabet are strongly recommended. If the name of a function is a single letter, it may appear in an italic font. Names consisting of more characters must be set in a Roman (upright) font. For example, \(arccos(x)\) means the product of \(a,r,c^2,o\) and the function \(s\) evaluated at the argument \(x\), but never \(\arccos(x)\). If the letters of the name of a function appear in italics, this is just an error, a misprint, that should be reported and corrected. Global names Some names go through several articles with the same meaning. In particular, this refers to the names of special functions. Several names, mainly those of the elementary functions, may use lowercase letters, as \(\sin\), \(\exp\), \(\mathrm{erf}\). This applies especially to the names of functions that are already implemented in most C++ compilers. There are a few exceptions: tet, arctet, zex; mainly for functions that are believed to have a deep fundamental meaning. All other names should begin with a capital letter. In TORI, in an article dedicated to some function and named after this function, the first letter is automatically capitalized. If possible, the global name should coincide with that used in the Mathematica software, in order to use expressions from TORI in Mathematica codes without any modification. 
(However, the parentheses after the name of a function should be changed from (~) to [~]; \(~\ln~\) should be replaced by Log, and so on.) If a function has no established name yet (and is not implemented in Mathematica), or if this name is not found in time, the name of a colleague who is somehow related to the function is used. For example, the name of the Tania function was given before it was realized that it is just WrightOmega with the argument displaced by unity. A name can also be created from an object that was not originally considered as a function, for example the Logistic sequence; this sequence is generalized and interpreted as a holomorphic function of a complex variable. The use of non-English characters in global names is not recommended at all. For example, WrightOmega is not written as [[Wright\(\Omega\)]]; perhaps some Sakana function (as soon as it is requested, implemented, described and loaded) will be called \(\mathrm{Sakana}\) and not \(魚\). In general, new global names are supposed to consist of English letters and to begin with a capital letter.

Suggestions

Any criticism of the names of functions used in TORI should be supplied with suggestions for how to better call the function in question. Suggestions of names like "Function_Used_in_Article_By_Brueck_et_All_[13]_and_denoted_there_with_letter_f" will not be considered; such advisers will be invited to write some formulas in which the function appears several times and to see for themselves how ugly such an expression is.
Detectors (in my case I'm interested in homodyne detectors) with imperfect efficiency and losses are said to be well modeled by a beamsplitter, with the state becoming a statistical mixture with vacuum. I am struggling to reproduce this "statistical mixture" result by modeling the system as a beamsplitter. Just to overview how the homodyne works without losses: a homodyne detector measures the subtracted intensity output of two detectors. Since a quantum beamsplitter returns $c^\dagger = a^\dagger + b^\dagger$ and $d^\dagger = a^\dagger - b^\dagger$, the measured operator becomes $c^\dagger c - d^\dagger d = a^\dagger b + b^\dagger a$. With a coherent state as our mode b, this reduces to $|\beta| X(\theta)$, where $X(\theta)$ is the "generalized quadrature", a quantum observable of the form: $$X(\theta) = a^\dagger e^{i\theta} + a e^{-i \theta}$$ Theta ($\theta$) is the relative phase between the two fields (or can be thought of as the complex phase associated with $\beta$). First I want to understand how losses in the quantum channel (mode $a$) affect our output observable. To do this I am going to have my input mode ($a^\dagger$) interfere with an additional mode (which will remain vacuum) through a beamsplitter. Writing the beamsplitter's transmission as $\eta$ (my losses) and "reflection" as $(1-\eta)$, we have \begin{bmatrix} \eta & \sqrt{\eta(1-\eta)} \\ \sqrt{\eta(1-\eta)} & (1-\eta)\\ \end{bmatrix} My output state (after losses) is: $|\psi \rangle = \sqrt{\eta}|1, 0\rangle + \sqrt{1-\eta} |0, 0\rangle$ Now I think the way to move forward is to partial-trace out the second vacuum mode, so that I can obtain a single-mode state to use as the new quantum input mode of the homodyne. So I'm going to get the density matrix and trace it over this second vacuum mode.
The tensored density matrix for this two-mode state is: $|\psi \rangle \langle \psi| = (\sqrt{\eta}|1, 0\rangle + \sqrt{1-\eta} |0, 0\rangle) (\sqrt{\eta}\langle 1, 0| + \sqrt{1-\eta} \langle 0, 0|)$ $ = \eta|1, 0\rangle \langle 1, 0| + \sqrt{(1-\eta) \eta} \bigl(|1, 0\rangle \langle 0, 0| + |0, 0\rangle \langle 1, 0|\bigr) + (1-\eta) |0, 0\rangle \langle 0, 0| $ You can write this out and sum over one of the two dimensions. Tracing over the second mode, it looks like I just end up back at the normal density matrix of a superposition state (I don't get a mixture): \begin{bmatrix} \eta & \sqrt{\eta(1-\eta)} \\ \sqrt{\eta(1-\eta)} & (1-\eta)\\ \end{bmatrix} This suggests to me that I can just perform a substitution and swap $a^\dagger|0\rangle$ with $\sqrt{\eta} a^\dagger|0\rangle + \sqrt{1-\eta} |0\rangle$. I'm not sure if this result is consistent with the literature. If I'm understanding correctly, any losses are "inefficiencies" which can all be multiplied together as an effective mixture with vacuum. I believe in this resource they include one-mode losses in their measurement of efficiency, which they model as a statistical mixture of the state with vacuum. Two-mode losses, on the other hand, in which both outputs of the two ends of the detector have losses, I think work out differently. Doing the beamsplitter model twice on modes c and d and extracting terms proportional to b (all others are small relative to the local oscillator), I get something looking like: $$\eta \beta X(\theta) + \sqrt{\eta(1-\eta)}\bigl((L_1^\dagger c+c^\dagger L_1)-(L_2^\dagger d-d^\dagger L_2)\bigr)$$ where $L_1$ and $L_2$ are the loss channels of c and d respectively, using the same beamsplitter model as above (and assuming both have the same losses). I'm a bit stuck on how to work out that second term. Are the two losses correlated? Can I combine terms? Does this end up working out to be a statistical mixture?
To quote: "Remarkably, all imperfections of the experiment - losses in transmission of the signal photon, quantum efficiency of the HD, trigger dark counts, mode matching of the signal photon and the local oscillator, and spatiotemporal coherence of the signal photon - had a similar effect on the reconstructed state: admixture of the vacuum $|0\rangle$ to the ideal Fock state $|1\rangle$." Finally, I'm also trying to see how this can be used to find out how this changes the probabilities of measuring certain quadrature values (given what the quantum state is). I understand the normal projection operator provides the probabilities in the Fock-state basis (which can be found here). With this statistical mixture model, it's said that this projection operator can be modified to look like: $$\sum_{n,m} B_{m+k, m}(\eta) B_{n+k, n}(\eta) \langle n| \theta, x \rangle \langle \theta, x| m \rangle |n+k\rangle \langle m+k|$$ I'm struggling to see how to derive this (particularly the sums).
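For reference, here is a quick numerical check (not part of the question) of the single-mode loss model in the literature, under the assumption that the reflected photon exits into the second beamsplitter port, i.e. the joint state is $\sqrt{\eta}|1,0\rangle+\sqrt{1-\eta}|0,1\rangle$; tracing out the second mode then gives a diagonal, mixed reduced state:

```python
import numpy as np

eta = 0.7

# Joint state after the loss beamsplitter, assuming the reflected photon
# leaves through the second port: sqrt(eta)|1,0> + sqrt(1-eta)|0,1>.
# Basis index = 2*n1 + n2, photon numbers n1, n2 in {0, 1}.
psi = np.zeros(4)
psi[2] = np.sqrt(eta)      # |1,0>
psi[1] = np.sqrt(1 - eta)  # |0,1>

rho = np.outer(psi, psi).reshape(2, 2, 2, 2)  # rho[n1, n2, m1, m2]
rho_mode1 = np.einsum('ikjk->ij', rho)        # partial trace over mode 2

print(rho_mode1)
# -> [[1-eta, 0],
#     [0,     eta]]   i.e. a statistical mixture of |0><0| and |1><1|
```

The off-diagonal terms survive only if the reflected amplitude is sent to $|0,0\rangle$ instead of $|0,1\rangle$, which is why that choice of output state matters for recovering the mixture.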
The goal is to have a shortcut for all letters of the latin and greek alphabets in bold font and upper/lower case. For the latin alphabet the current solution works fine, but for the greek one it won't work. So I actually have two questions here: 1: Is it possible to automatically loop over greek letters with pgffor, just like with latin letters? 2: Is there a reason for my current solution not to work with greek letters? As usual, the answer may be very short or very long, so thanks in advance.

\documentclass{article}
% Command forcing 1st letter of argument to be capital one
\usepackage{xparse}
\ExplSyntaxOn
\NewExpandableDocumentCommand \firstcap { m } { \tl_mixed_case:n {#1} }
\ExplSyntaxOff
% Loop over latin alphabet (working)
\usepackage{pgffor}
\foreach \x in {a,...,z}{%
  \expandafter\xdef\csname \firstcap{\x}mat\endcsname{\noexpand\ensuremath{\noexpand\mathbf{\firstcap{\x}}}}%
}
\foreach \x in {a,...,z}{%
  \expandafter\xdef\csname \firstcap{\x}vec\endcsname{\noexpand\ensuremath{\noexpand\mathbf{\x}}}%
}
% Loop over greek alphabet (non working)
%\foreach \x in {alpha,zeta}{%
%  \expandafter\xdef\csname \firstcap{\x}mat\endcsname{\noexpand\ensuremath{\noexpand\mathbf{\firstcap{\x}}}}%
%}
%\foreach \x in {\alpha,...,\zeta}{%
%  \expandafter\xdef\csname \firstcap{\x}vec\endcsname{\noexpand\ensuremath{\noexpand\mathbf{\x}}}%
%}
\begin{document}
$\Amat \Bmat \Cmat \Avec \Bvec \Cvec$
%$\Alphamat \Betamat \Alphavec \Betavec$
\end{document}
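One direction that might work for the greek loop - a sketch only, not tested against the exact preamble above - is to iterate over the macro names as plain strings (pgffor cannot expand \alpha,...,\zeta as a dotted range) and rebuild the control sequences with \csname. Note also that \mathbf does not embolden lowercase greek, so \boldsymbol (from amsmath or bm) is assumed here:

```latex
% Sketch: loop over greek letter *names*; \firstcap is the command
% from the question, \boldsymbol is assumed loaded via amsmath or bm.
\foreach \x in {alpha,beta,gamma,delta,epsilon,zeta}{%
  \expandafter\xdef\csname \firstcap{\x}vec\endcsname{%
    \noexpand\ensuremath{\noexpand\boldsymbol{%
      \expandafter\noexpand\csname \x\endcsname}}}%
}
% \Alphavec should then typeset \boldsymbol{\alpha}
```

The \expandafter\noexpand\csname trick builds the token \alpha inside \xdef without expanding it further, which is the step the commented-out attempts in the preamble are missing.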
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: \((f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau\), where... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the \(Y_{t+\tau}\)? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 To reduce the number of data points for calculating the time correlation, you can run two copies of the simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is \(\frac{dT}{dt}\) and doesn't have an axis -- I don't care what the values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
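The three numbered questions earlier can be illustrated with a small synthetic example (the data here is illustrative, not the asker's arrays): scipy.signal.correlate in 'full' mode returns len(x)+len(y)-1 points, which is roughly twice the signal length; it is a raw sliding dot product that does not subtract the mean, so signals with a DC offset can produce unintuitive signs; and the lag is read off at the argmax once the output is indexed with correlation_lags:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.roll(x, 30)            # y lags x by 30 samples (circular shift)

# Subtract the mean first: correlate() is a raw sliding dot product,
# so a DC offset alone can swamp the result or flip its sign.
xc = x - x.mean()
yc = y - y.mean()

c = correlate(yc, xc, mode='full')   # length 200 + 200 - 1 = 399
lags = correlation_lags(len(yc), len(xc), mode='full')
lag = lags[np.argmax(c)]             # positive lag: y trails x
print(len(c), lag)
```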
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following: $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \... Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual". I.e.,
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual. How can one prove that $$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what the r-e-a-l reason is. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention. @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as homework. @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main-page closeable homework clutter. @Dilaton also, have a look at the top-voted answers on both. Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (My initial feeling is no, because it's really a math question, but I figured I'd ask anyway.) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X". user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube. @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper).
Can anybody tell me what is known about the classification of abelian transitive subgroups of the symmetric groups? Let $G$ be an abelian transitive subgroup of the symmetric group $S_n$. Show that $G$ has order $n$. Thanks for your help!

The following solution only needs basic group theory. Let $G$ be a transitive abelian subgroup of $S_n$. By transitivity, for each $i\in\{1,\ldots,n\}$ there is a $\sigma\in G$ such that $\sigma(1) = i$. So $\# G\geq n$. Assume that $\#G > n$. Then there are $\sigma, \tau\in G$ with $x := \sigma(1) = \tau(1)$ and $\sigma\neq \tau$. By the second condition, there is a $y\in\{1,\ldots,n\}$ with $\sigma(y) \neq \tau(y)$. From transitivity we get a $\pi\in G$ with $\pi(x) = y$. Now $$ \pi\tau\pi\sigma(1) = \pi\tau\pi(x) = \pi\tau(y) $$ and $$ \pi\sigma\pi\tau(1) = \pi\sigma\pi(x) = \pi\sigma(y)\text{.} $$ Because $\tau(y) \neq \sigma(y)$, these two elements are distinct. So the elements $\pi\tau\in G$ and $\pi\sigma\in G$ do not commute, which contradicts the precondition that $G$ is abelian.

The question is answered by user641 in the comments. Given our hypotheses, the orbit-stabilizer theorem gives $\{1,\cdots,n\}\cong^\dagger G/H$ for a point stabilizer $H$; the action is faithful, and a transitive abelian action has trivial point stabilizers, so $H=1$; thus we have proved $\{1,\cdots,n\}\cong G/1$, so $|G|=n$. ($^\dagger $A morphism of $G$-sets is a $G$-equivariant, aka intertwining, map, i.e. a map $\phi:X\to Y$ with the property that $\phi(gx)=g\phi(x)$ for all $x\in X$ and $g\in G$.
In fact $G$-sets thus become a category.)
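As a quick sanity check of the statement (illustrative code, not part of the original answers), one can brute-force all 2-generated abelian transitive subgroups of $S_4$ and confirm that each has order 4:

```python
from itertools import permutations

n = 4

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are tuples of images
    return tuple(p[q[i]] for i in range(n))

def closure(gens):
    # Subgroup generated by gens. One-sided products suffice here
    # because we only ever call this with commuting generators,
    # so the generated group is abelian.
    group, frontier = set(gens), list(gens)
    while frontier:
        p = frontier.pop()
        for q in list(group):
            r = compose(p, q)
            if r not in group:
                group.add(r)
                frontier.append(r)
    return group

checked = 0
for p in permutations(range(n)):
    for q in permutations(range(n)):
        if compose(p, q) != compose(q, p):
            continue                            # keep commuting pairs only
        G = closure([p, q])
        if {g[0] for g in G} == set(range(n)):  # transitive?
            assert len(G) == n                  # the claim: |G| = n
            checked += 1
print(checked, "commuting generating pairs checked")
```

For $n=4$ the abelian transitive subgroups are the cyclic group of order 4 and the Klein four-group, both of order 4, so every assertion passes.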
I want to draw the roots of z^5=1-i. I am totally novice to latex and really have no idea about drawing. Thanks for helping!

Hmmh, nobody seems to want to answer this. I agree that this problem is very similar to the linked questions, but personally I would not consider it a duplicate, since you do not ask for the nth roots of unity but of a different complex number, with nontrivial radius and phase.

\documentclass[fleqn]{article}
\usepackage{tikz}
\usepackage{amsmath}
\begin{document}
You wish to plot the 5th roots of
\[ 1-\mathrm{i}~=~\sqrt{2}\,\mathrm{e}^{-\mathrm{i}\pi/4}\]
for some complex number $z=r\cdot\mathrm{e}^{\mathrm{i}\,\varphi}$.
This means that you need to solve the equation
\[ z^5~=~r^5\,\mathrm{e}^{5\cdot\mathrm{i}\cdot\varphi}~=~\sqrt{2}\,\mathrm{e}^{-\mathrm{i}\pi/4}\;,\]
which translates to
\[r~=~2^{1/10}\quad\text{and}\quad \varphi~=~\frac{\pi}{20}\left(-1+8\,n\right)~=~9^\circ\left(-1+8\,n\right)\]
with $n\in\{0,...,4\}$.
\begin{tikzpicture}[scale=3]
\pgfmathsetmacro{\ticklength}{0.06}
\draw [-latex] (-2,0) -- (2,0) node[below left]{Re$\,z$};
\draw [-latex] (0,-2) -- (0,2) node[below left]{Im$\,z$};
\draw (1,\ticklength) -- (1,-\ticklength) node[below] {1};
\draw (\ticklength,1) -- (-\ticklength,1) node[left] {i};
\draw (0,0) circle({pow(2,1/10)});
\foreach \X in {0,...,4}{%
  \node[scale=0.4,circle,fill,label={{9*(-1+8*\X)}:$n=\X$}] at ({9*(-1+8*\X)}:{pow(2,1/10)}) {};
}
\end{tikzpicture}
\end{document}
Defending the Schwarzschild singularity

Since the paper by Oppenheimer and Snyder [1] was published, the myth that the Schwarzschild singularity is merely a coordinate singularity, and so of little consequence, has become ubiquitous [2, 3, 4, 5]. Pejorative language is a part of this: the singularity at the event horizon is described as merely an apparent singularity, whilst the singularity at the centre is described as being a real singularity that cannot be eliminated by any coordinate transformation. Yet such a real singularity is impossible with all known physics, and a belief in its existence leads inevitably to further paradoxes. The fact is that once you accept the impossible, almost anything becomes possible. Time travel, wormholes, other universes, and many other ideas straight from the pages of science fiction become worthy of serious academic discussion. The first thing to remember is that any given geodesic is comprised of the same set of points in spacetime no matter which coordinate system is used. There are never two different geodesics for the same particle, so for the particular case of a particle falling into a Schwarzschild black hole, how can it be that in one set of coordinates the particle is hovering for a time approaching infinity, whilst in another it has already reached its total demise? The answer to this conundrum is that in Schwarzschild coordinates, the central singularity can only be reached after an infinitely long time has passed. This does not lead to a denial of a central singularity - just that we will never, from our viewpoint, observe its effect in our, or our descendants', lifetimes. But first let us pick apart the assertion in Oppenheimer and Snyder's paper that the Schwarzschild singularity can be eliminated by a coordinate transformation.
Consider the line element for the metric of a non-rotating spherical object in Schwarzschild coordinates:
\[D^2=\left(1-\frac{r_s}{r}\right)c^2dt^2-\left(1-\frac{r_s}{r}\right)^{-1}dr^2-r^2\left(d\theta^2+\sin^2\theta\, d\varphi^2\right) \]
in which the coefficient of \(dr^2\) approaches infinity at the event horizon. Now \(D\) in the above equation is an invariant quantity and so is unchanged in any coordinate system. If you were unaware of this fact, we can prove it quite simply. In a different (primed) coordinate system, we have by definition
\[ D'^2=g_{\mu'\nu'}dx^{\mu'} dx^{\nu'}= \frac{ \partial x^\mu}{ \partial x^{\mu '}} \frac{ \partial x^\nu}{\partial x^{\nu '}} g_{\mu\nu}\,\frac{\partial x^{\mu '}}{\partial x^\mu}dx^\mu\, \frac{\partial x^{\nu '}}{\partial x^\nu} dx^\nu =g_{\mu\nu}dx^\mu dx^\nu =D^2\]
This result seems to contradict long-established assertions by showing that solutions in Kruskal-Szekeres coordinates, or other comoving coordinates, have an infinite discontinuity at the event horizon (or see one of the points in the counterarguments for a reason for discounting this view entirely). It is natural to wonder how this could have been missed by other investigators. The reason is that comoving coordinate systems effectively smooth out the curvature of spacetime by dividing out the curvature. At the event horizon this becomes \(\infty/ \infty\), which is undefined. With the spacetime shown to be flat on either side of the event horizon, it seemed at the time a very reasonable extension to assume that it is also flat across the event horizon. Now, with clear evidence that this is not the case, we have to recognise that there is a discontinuity that is not admissible in a Riemannian manifold. In comoving coordinates, the interior and exterior solutions are thus valid in separate coordinate patches, separated by the event horizon. There is no reason to assume that the internal solution is physically correct.
By comparison, Schwarzschild coordinates are valid throughout the spacetime from infinity up to and including the event horizon, but not inside. This work was undertaken in the hope of reaffirming the validity of the internal structure of a non-rotating black hole proposed in an earlier paper. That model proposed that the field inside the event horizon of a black hole is infinite everywhere. The question to be resolved is: "Can a region that is infinite everywhere be a Riemannian manifold?" A necessary requirement for this is that all derivatives of the field remain finite. This may not be true for a field of infinite values, but physically, these infinities are the result of a limiting process. It will take infinite time for them to be fully realised. In the current epoch, they will still be extremely large values, but not yet infinite. Let \(r=r_s - \delta r\), so that \(r_s = r + \delta r\); then the metric is given by
\[D^2=\left(1-\frac{r+\delta r}{r}\right)c^2 dt^2 -\left(1-\frac{r+\delta r}{r}\right)^{-1}dr^2-r^2\left(d\theta^2+\sin^2\theta\, d\varphi^2\right)\]
\[=-\frac{\delta r}{r}c^2 dt^2+\frac{ r}{\delta r}dr^2-r^2\left(d\theta^2+\sin^2\theta\, d\varphi^2\right) \]
This is differentiable, and in the limit as \(\delta r \rightarrow 0\) we can assume that it remains differentiable, making the interior a Riemannian manifold. Taking into account the warning we have had of the doubtful validity of assuming a limit in a physical situation, we can still assert that this becomes almost true in the future. By joining this with the Schwarzschild metric, we can thus cover the whole of space. To explain this again graphically: the figure (courtesy of Wikipedia), which shows infalling Kruskal-Szekeres coordinates, shows a blue hyperbola as the surface where the Schwarzschild radial coordinate is constant (with a smaller value in each successive frame, until it ends at the singularities).
To understand this, note that each hyperbola is a line of constant \(r\) in Schwarzschild coordinates, with the event horizon being the 45° lines through the origin. The point to note is that as a particle 'sails' through the event horizon, it occupies every point along these axes. This is just a visual representation of the value of \(\infty/ \infty\), an indeterminate quantity. See also the excellent work of Weller in debunking the myth of traversing the event horizon.

1. J. R. Oppenheimer and H. Snyder, "On Continued Gravitational Contraction", Phys. Rev. 56, issue 5, pp. 455-459, Sep 1939. http://link.aps.org/doi/10.1103/PhysRev.56.455
2. C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, 1973, Macmillan.
3. R. M. Wald, General Relativity, 1984, University of Chicago Press.
4. D. Raine and E. Thomas, Black Holes: A Student Text, 2015, Imperial College Press.
5. P. Collier, A Most Incomprehensible Thing: Notes Towards a Very Gentle Introduction to the Mathematics of Relativity, 2013, Incomprehensible Books.

Agree or disagree, or have any questions or observations about this, and I would love to hear from you, so please get in touch by email or leave a comment. Your views are always most welcome.
Basically, define \closure and \interior (or shorter names if you want) and use those symbolic definitions; this way, you can program things into the macros.

\documentclass{scrartcl}
\usepackage{mathtools,amssymb}
\usepackage{xparse}
\NewDocumentCommand\closure{sm}
  {\IfBooleanTF{#1}{\overline{#2}}{\bar{#2}}}
\NewDocumentCommand\interior{sm}
  {\IfBooleanTF{#1}{?}{\mathring{#2}}}
\begin{document}
$\closure{\interior{\closure{\interior{A}}}}$ and $\closure*{B(x_0,R_1)}$
\end{document}

I leave a ? in the one situation in which I don't know what you want (some people write it like (...)^\circ, and others put an overparenthesis (\overparen maybe, or maybe a self-defined macro) and then a \mathring over the parenthesis, etc.). On a side note, depending on your document, you might want to define something like an open-ball command \oB and a closed-ball command \cB and use them like \cB(x_0,R_1), in which case it's easy to change the definition of those commands whenever you want, and you are not stuck with the raw code.
Aliases: tf.linalg.lstsq, tf.matrix_solve_ls

tf.matrix_solve_ls(
    matrix,
    rhs,
    l2_regularizer=0.0,
    fast=True,
    name=None
)

Defined in tensorflow/python/ops/linalg_ops.py.

See the guide: Math > Matrix Math Functions

Solves one or more linear least-squares problems. matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least-squares sense.

Below we will use the following notation for each pair of matrix and right-hand sides in the batch: matrix=\(A \in \Re^{m \times n}\), rhs=\(B \in \Re^{m \times k}\), output=\(X \in \Re^{n \times k}\), l2_regularizer=\(\lambda\).

If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|A Z - B\|_F^2 + \lambda \|Z\|_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|Z\|_F^2\) subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\), or \(\lambda\) is sufficiently large.

If fast is False, an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.
Args:
  matrix: Tensor of shape [..., M, N].
  rhs: Tensor of shape [..., M, K].
  l2_regularizer: 0-D double Tensor. Ignored if fast=False.
  fast: bool. Defaults to True.
  name: string, optional name of the operation.

Returns:
  output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

Raises:
  NotImplementedError: matrix_solve_ls is currently disabled for complex128 and l2_regularizer != 0 due to poor accuracy.
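As an illustration of the fast path, here is a NumPy sketch (a stand-in for one matrix of the batch, not the actual TensorFlow kernel) of the \(m \ge n\) normal-equations formula:

```python
import numpy as np

# Dimensions and regularizer for one (matrix, rhs) pair; values arbitrary
m, n, k, lam = 5, 3, 2, 0.1
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))   # plays the role of `matrix`
B = rng.standard_normal((m, k))   # plays the role of `rhs`

# Fast path for m >= n: X = (A^T A + lam I)^{-1} A^T B
X = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ B)

# X minimizes ||A Z - B||_F^2 + lam ||Z||_F^2, so its gradient
# 2 A^T (A X - B) + 2 lam X must vanish at the solution:
grad = A.T @ (A @ X - B) + lam * X
print(np.allclose(grad, 0))  # -> True
```

Verifying that the gradient of the regularized objective vanishes is a cheap way to check that the normal-equations solve did what the documentation claims.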
2019-10-09 06:01 HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN) / HiRadMat experiments and facility support teams
The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085
2019-10-09 06:01 Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN)
For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p.
- Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 Registro completo - Registros similares 2019-10-09 06:00 The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Registro completo - Registros similares 2019-10-09 06:00 The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. 
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 Registro completo - Registros similares 2019-09-21 06:01 Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 Registro completo - Registros similares 2019-09-20 08:41 Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. 
Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 Registro completo - Registros similares 2019-04-09 06:05 The new CGEM Inner Tracker and the new TIGER ASIC for the BES III Experiment / Marcello, Simonetta (INFN, Turin ; Turin U.) ; Alexeev, Maxim (INFN, Turin ; Turin U.) ; Amoroso, Antonio (INFN, Turin ; Turin U.) ; Baldini Ferroli, Rinaldo (Frascati ; Beijing, Inst. High Energy Phys.) ; Bertani, Monica (Frascati) ; Bettoni, Diego (INFN, Ferrara) ; Bianchi, Fabrizio Umberto (INFN, Turin ; Turin U.) ; Calcaterra, Alessandro (Frascati) ; Canale, N (INFN, Ferrara) ; Capodiferro, Manlio (Frascati ; INFN, Rome) et al. A new detector exploiting the technology of Gas Electron Multipliers is under construction to replace the innermost drift chamber of BESIII experiment, since its efficiency is compromised owing the high luminosity of Beijing Electron Positron Collider. The new inner tracker with a cylindrical shape will deploy several new features. [...] SISSA, 2018 - 4 p. 
- Published in : PoS EPS-HEP2017 (2017) 505 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.505 Registro completo - Registros similares 2019-04-09 06:05 CaloCube: a new homogenous calorimeter with high-granularity for precise measurements of high-energy cosmic rays in space / Bigongiari, Gabriele (INFN, Pisa)/Calocube The direct observation of high-energy cosmic rays, up to the PeV region, will depend on highly performing calorimeters, and the physics performance will be primarily determined by their acceptance and energy resolution.Thus, it is fundamental to optimize their geometrical design, granularity, and absorption depth, with respect to the total mass of the apparatus, probably the most important constraints for a space mission. Furthermore, a calorimeter based space experiment can provide not only flux measurements but also energy spectra and particle identification to overcome some of the limitations of ground-based experiments. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 481 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.481 Registro completo - Registros similares 2019-03-30 06:08 Registro completo - Registros similares 2019-03-30 06:08 Registro completo - Registros similares
User:Jan A. Sanders/An introduction to Lie algebra cohomology/Lecture 3

abstract
In this lecture we define the cohomology modules \(H^n(\mathfrak{g},\mathfrak{a})\).

Lifting the representation to the forms

definition
\(\mathfrak{g}\) is called an ideal in \(\mathfrak{l}\) if \[[\mathfrak{l},\mathfrak{g}]\subset \mathfrak{g}\ .\]

example
Let \(\mathfrak{g}=\ker d_1\ .\) Then for \(x\in\mathfrak{l}\) and \(y\in\mathfrak{g}\) one has \[d_1([x,y])=d_1(x)d_1(y)-d_1(y)d_1(x)=0\ .\] It follows that \(\mathfrak{g}=\ker d_1\) is an ideal in \(\mathfrak{l}\ .\)

definition
Let \(\mathfrak{l}\) be a Lie algebra and \(\mathfrak{g}\) an ideal. Then \(\mathfrak{l}/\mathfrak{g}\) is a Lie algebra with the bracket \[ [[x],[y]]=[[x,y]]\ ,\] where \([x]\) denotes the equivalence class of \(x\ .\) This is well defined, since varying \(x\) and \(y\) with elements in \(\mathfrak{g}\) does not change the answer: \[ [[x],[y]]=[[x+g_1,y+g_2]]=[[x,y]]+[[x,g_2]]+[[g_1,y]]+[[g_1,g_2]]=[[x,y]]\ .\]

terminology
When \(\mathfrak{a}\) is a module and a representation space of \(\mathfrak{l}\ ,\) one says that \(\mathfrak{a}\) is an \(\mathfrak{l}\)-module. If the representation is zero, \(\mathfrak{a}\) is a trivial \(\mathfrak{l}\)-module.

definition
Let \(\mathfrak{a}\) be an \(\mathfrak{l}\)-module. In order to give a general definition of a coboundary operator \(d^n, n\geq 0\ ,\) one defines first an induced representation on \(C^n(\mathfrak{g},\mathfrak{a})\) as follows. Let, for \(y\in\mathfrak{l}\ ,\) \[ (d_1^n(y)a^n)(x_1,\cdots,x_n)=d_1(y)a^n(x_1,\cdots,x_n)-\sum_{i=1}^n a^n(x_1,\cdots, [y,x_i],\cdots,x_n)\ .\] This is indeed a representation.
Let \(y,z\in\mathfrak{l}\ .\) Then
\[ d_1^n(y)d_1^n(z)a^n(x_1,\cdots,x_n)=d_1(y)d_1^n(z)a^n(x_1,\cdots,x_n)-\sum_{i=1}^n d_1^n(z) a^n(x_1,\cdots, [y,x_i],\cdots,x_n)\]
\[=d_1(y)d_1(z)a^n(x_1,\cdots,x_n)-\sum_{i=1}^n d_1(z) a^n(x_1,\cdots, [y,x_i],\cdots,x_n)-\sum_{i=1}^n d_1(y) a^n(x_1,\cdots, [z,x_i],\cdots,x_n)\]
\[+\sum_{i=1}^n\sum_{j\neq i}a^n(x_1,\cdots,[y,x_i],\cdots, [z,x_j],\cdots,x_n)+\sum_{i=1}^n a^n(x_1,\cdots, [z,[y,x_i]],\cdots,x_n)\ .\]
It follows that
\[d_1^n(y)d_1^n(z)a^n(x_1,\cdots,x_n)-d_1^n(z)d_1^n(y)a^n(x_1,\cdots,x_n)=\]
\[=(d_1(y)d_1(z)-d_1(z)d_1(y))a^n(x_1,\cdots,x_n)+ \sum_{i=1}^n a^n(x_1,\cdots, [z,[y,x_i]],\cdots,x_n)- \sum_{i=1}^n a^n(x_1,\cdots, [y,[z,x_i]],\cdots,x_n)\]
\[=d_1([y,z])a^n(x_1,\cdots,x_n)-\sum_{i=1}^n a^n(x_1,\cdots, [[y,z],x_i],\cdots,x_n)\]
\[=d_1^n([y,z])a^n(x_1,\cdots,x_n)\ ,\]
or
\[d_2^n(y,z)=[d_1^n(y),d_1^n(z)]-d_1^n([y,z])=0\ .\]

remark
Remark that \( C^n(\mathfrak{g},\mathfrak{a})\) is mapped into itself by \(d_1^n(x) \) for all \(x\in\mathfrak{l}\ .\)

Definition of the coboundary operator.
We now reformulate the definition of \(d^{i}, i=1,2,3\ ,\) using the \(d_1^n\ .\) First we introduce the contraction operator \(\iota_1^n(y): C^n(\mathfrak{g},\mathfrak{a})\rightarrow C^{n-1}(\mathfrak{g},\mathfrak{a})\) by \[( \iota_1^n(y)a_n)(x_1,\cdots,x_{n-1})=a_n(y,x_1,\cdots,x_{n-1})\ .\] Recall the following definitions of the coboundary operators.
\(d a_0(x)=d_1(x)a_0\ .\)

\(d^1 a_1(x,y)=d_1(x)a_1(y)-d_1(y)a_1(x)-a_1([x,y])=(d_1^1(x)a_1)(y)-d_1(y)\iota_1^1(x)a_1=(d_1^1(x)a_1)(y)-d \iota_1^1(x)a_1 (y)\ .\)

\(d^2 a_2(x,y,z)=d_1(x)a_2(y,z)-d_1(y)a_2(x,z)+d_1(z)a_2(x,y)-a_2([x,y],z)-a_2(y,[x,z])+a_2(x,[y,z])\)
\[=(d_1^2(x)a_2)(y,z)-d_1^1(y)\iota_1^2(x)a_2(z)+d_1(z)\iota_1^1(y)\iota_1^2(x)a_2\]
\[=(d_1^2(x)a_2)(y,z)-d^1 \iota_1^2(x)a_2 (y,z)\ .\]

definition
This strongly suggests the following recursive definition:
\(\iota_1^1(x)d=d_1(x)\)
\(\iota_1^{n+1}(x)d^n+d^{n-1}\iota_1^n(x)=d_1^n(x),\quad n>0\ .\)

lemma
Let \( y\in\mathfrak{l}\) and \(z\in\mathfrak{g}\ .\) Then \(\iota_1^n(z)d_1^n(y)-d_1^{n-1}(y)\iota_1^n(z)=-\iota_1^{n}([y,z]).\)

proof
Consider
\[ (\iota_1^n(z)d_1^{n}(y)-d_1^{n-1}(y)\iota_1^n(z))a_n(x_1,\cdots,x_{n-1})\]
\[= d_1^{n}(y)a_n(z,x_1,\cdots,x_{n-1})-d_1^{n-1}(y)\iota_1^n(z)a_n(x_1,\cdots,x_{n-1})\]
\[=d_1(y)a_n(z,x_1,\cdots,x_{n-1})-a_n([y,z],x_1,\cdots,x_{n-1})-\sum_{i=1}^{n-1}a_n(z,x_1,\cdots,[y,x_i],\cdots,x_{n-1})\]
\[-d_1(y)a_n(z,x_1,\cdots,x_{n-1})+\sum_{i=1}^{n-1}a_n(z,x_1,\cdots,[y,x_i],\cdots,x_{n-1})\]
\[=-a_n([y,z],x_1,\cdots,x_{n-1})\]
\[=-\iota_1^{n}([y,z])a_n(x_1,\cdots,x_{n-1})\quad\square\ .\]

lemma
Let \( y\in\mathfrak{l}\ .\) Then \[d_1^{n+1}(y)d^{n}=d^{n}d_1^{n}(y),\quad n\geq 0\ .\]

proof
For \( n=0\) one has \[ d_1^{1}(x)d a(y)-dd_1(x)a(y)=d_1(x)d_1(y)a-d_1([x,y])a-d_1(y)d_1(x)a=0\ .\]
For \(n>0\) one has, with \(z\in\mathfrak{g}\ ,\) that
\[ \iota_1^{n+1}(z)(d_1^{n+1}(y)d^{n}-d^{n}d_1^{n}(y))=\]
\[ =-\iota_1^{n+1}([y,z])d^{n}+d_1^{n}(y)\iota_1^{n+1}(z)d^n-d_1^{n}(z)d_1^{n}(y)+d^{n-1}\iota_1^{n}(z)d_1^{n}(y)\]
\[ =-\iota_1^{n+1}([y,z])d^{n}+d_1^{n}(y)\bigl(d_1^{n}(z)-d^{n-1}\iota_1^n(z)\bigr)-d_1^{n}(z)d_1^{n}(y)+d^{n-1}\bigl(d_1^{n-1}(y)\iota_1^n(z)-\iota_1^n([y,z])\bigr)\]
\[ =-d_1^{n}([y,z])+d_1^{n}(y)d_1^{n}(z)-d_1^{n}(z)d_1^{n}(y)+\bigl(d^{n-1}d_1^{n-1}(y)-d_1^{n}(y)d^{n-1}\bigr)\iota_1^n(z)\]
\[ =d_2^n(y,z)+\bigl(d^{n-1}d_1^{n-1}(y)-d_1^{n}(y)d^{n-1}\bigr)\iota_1^n(z)\]
\[
=(d^{n-1}d_1^{n-1}(y)-d_1^{n}(y)d^{n-1})\iota_1^n(z)\ .\]
This implies the statement of the lemma by induction. \(\square\)

theorem - coboundary operator
\( d^{\cdot} \) is a coboundary operator.

proof
One computes
\[\iota_1^{n+2}(y)d^{n+1}d^{n}=d_1^{n+1}(y)d^{n}-d^{n}\iota_1^{n+1}(y)d^{n}\]
\[=d^{n}d_1^{n}(y)-d^{n}(d_1^{n}(y)-d^{n-1}\iota_1^{n}(y))\]
\[=d^{n}d^{n-1}\iota_1^{n}(y)\ .\]
Again, since \(d^1d^0=0\ ,\) it follows by induction that \[d^{n+1}d^{n}=0\ .\] This shows that \(d^i, i\in\mathbb{N}\ ,\) is a coboundary operator.

proposition
\[ d^n \omega_n(x_1,\cdots,x_{n+1})=\sum_{i=1}^{n+1} (-1)^{i-1} d_1(x_i) \omega_n(x_1,\cdots,\hat{x}_i,\cdots,x_{n+1}) +\sum_{i<j} (-1)^i\omega_n(x_1,\cdots,\hat{x}_i,\cdots,[x_i,x_j],\cdots,x_{n+1})\]

corollary
\(d^n\) maps \(C_{\wedge}^n(\mathfrak{g},\mathfrak{a})\) to \(C_{\wedge}^{n+1}(\mathfrak{g},\mathfrak{a})\ .\)

Cohomology

Define \(Z^n(\mathfrak{g},\mathfrak{a})=\ker d^n\ ,\) the space of cocycles, and \(B^n(\mathfrak{g},\mathfrak{a})=\mathrm{im\ }d^{n-1}\ ,\) the space of coboundaries. Since \(\mathrm{im\ }d^{n-1}\subset\ker d^{n}\ ,\) one can define \[ H^n(\mathfrak{g},\mathfrak{a})=Z^n(\mathfrak{g},\mathfrak{a})/B^n(\mathfrak{g},\mathfrak{a})\ ,\] the \(n\)-cohomology module of \(\mathfrak{g}\) with values in \(\mathfrak{a}\ .\) If \( a_n\in C^n(\mathfrak{g},\mathfrak{a})\ ,\) the equivalence class in \( H^n(\mathfrak{g},\mathfrak{a})\) is denoted by \( [a_n]\ .\) Elements in the zero equivalence class, the image of \(d^{n-1}\ ,\) are called trivial.

remark
For \(n=0\ ,\) \( H^0(\mathfrak{g},\mathfrak{a})=Z^0(\mathfrak{g},\mathfrak{a})=\ker d^0\ ,\) that is, it consists of all elements in \(\mathfrak{a}\) which are \(\mathfrak{g}\)-invariant. This indicates that computing the cohomology can be a formidable problem, since it contains for instance classical invariant theory. Cohomology theory itself does not provide the answers; it just asks the right questions and removes the trivial answers.
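A quick sanity check of these definitions (an illustrative example, not part of the lecture): take \(\mathfrak{g}\) abelian and \(\mathfrak{a}\) a trivial \(\mathfrak{g}\)-module, so that \(d_1=0\).

```latex
% g abelian, a a trivial g-module, so d_1 = 0 and [x,y] = 0.
% Every 1-cochain is a cocycle:
\[
d^1 a_1(x,y) = d_1(x)a_1(y) - d_1(y)a_1(x) - a_1([x,y]) = 0 ,
\]
% and every coboundary vanishes, since d a_0(x) = d_1(x)a_0 = 0.  Hence
\[
H^1(\mathfrak{g},\mathfrak{a})
  = Z^1(\mathfrak{g},\mathfrak{a})/B^1(\mathfrak{g},\mathfrak{a})
  = \mathrm{Hom}(\mathfrak{g},\mathfrak{a}) .
\]
```

So already in the simplest case the first cohomology records all linear maps \(\mathfrak{g}\rightarrow\mathfrak{a}\), consistent with the remark that computing cohomology can be a formidable problem.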
theorem
\( H^n(\mathfrak{g},\mathfrak{a}), n>0\ ,\) is invariant under the action (by \(d_1^{n}\)) of \(\mathfrak{g}\ .\) So one could say that \( H^n(\mathfrak{g},\mathfrak{a})\) is a trivial \(\mathfrak{g}\)-module and an \(\mathfrak{l}/\mathfrak{g}\)-module.

proof
Indeed, since \(d^n a_n=0\ ,\)
\[d_1^{n}(y)[a_n]=[d_1^{n}(y)a_n]=[\iota_1^{n+1}(y)d^{n}a_n+d^{n-1}\iota_1^{n}(y)a_n]=[0].\quad\square\]

lemma
If \(d^n a_n=0\) and \(d_1^n(y)a_n=0\ ,\) then, with \(b_{n-1}^y=\iota_1^n(y)a_n\ ,\) one has \( d^{n-1}b_{n-1}^y=0\ .\)

corollary
If under these conditions \(H^{n-1}(\mathfrak{g},\mathfrak{a})=0\ ,\) there exists \(c_{n-2}^y\) such that \(\iota_1^n(y)a_n=d^{n-2} c_{n-2}^y\ .\) In the case \(n=2\) this form is known as the Hamiltonian, and \(a_n\) is the symplectic form. Since \(d_1(x)c^y=\iota^{1}(x)d c^y=\iota^{1}(x)b_1^y=b_1^y(x)=a_2(y,x)\ ,\) one sees that \(c_0^y\) is invariant under \(y\) if \(a_2\) is antisymmetric, which is the usual assumption on symplectic forms.

moment map
Assume \([a_2]\in H_\wedge(\mathfrak{g},\mathfrak{a})\) and the existence of an index set \( I\) such that \(y_\iota, \iota\in I\ ,\) is the maximal set of linearly independent elements \(y_\iota\in\mathfrak{g}\) with \(\iota_1^2(y_\iota)a_2=0\) and \(a_2(y_{\iota_1},y_{\iota_2})=0, \iota_1,\iota_2\in I\ .\) Let \(\mathfrak{h}=\langle y_\iota\rangle_{\iota\in I}\ .\) Then the map \(\mathfrak{h}\rightarrow\mathfrak{a}^I\) is called the moment(um) map. Notice that \(d_1(y_{\iota_1})c^{y_{\iota_2}}=0, \iota_1,\iota_2\in I\ ,\) by construction.

definition (conformally) symplectic
If \(a_2\) is a symplectic form, an element \(y\in\mathfrak{g}\) is called conformally symplectic if \(d_1^2(y)a_2=c_1(y)a_2\ ,\) where \( c_1 \) is a one form with values in a commutative ring. If \(c_1(y)\) is invertible, \(y\) is called a scaling conformally symplectic element. If \(c_1(y)=0\ ,\) \(y\) is called symplectic. The commutator of two conformally symplectic elements is symplectic.
scaling lemma
Suppose there exists an element \(s\in\mathfrak{g}\) such that \[ d_1^{n}(s)a_n=\lambda(a_n)a_n\ ,\] with \(\lambda\in C^1(C^n(\mathfrak{g},\mathfrak{a}),R)\) and \(R\) the ring of the module \(\mathfrak{a}\ .\) Then for \(a_n\in Z^n(\mathfrak{g},\mathfrak{a})\) one has \[\lambda(a_n)a_n=d_1^{n}(s)a_n=d^{n-1}\iota_1^n(s)a_n\ ,\] that is, if \(\lambda(a_n)\) is invertible, then \(a_n=d^{n-1}\lambda(a_n)^{-1}\iota_1^n(s)a_n\in B^n(\mathfrak{g},\mathfrak{a})\ .\) In practice, this is very useful in computing cohomology, since it allows one to restrict the attention to those \(a_n\in Z^n(\mathfrak{g},\mathfrak{a})\) which have a noninvertible \(\lambda(a_n)\ .\) Notice that the argument does not work for \(s\in\mathfrak{l}\ .\)

the homotopy formula
If \(R\) equals \(\mathbb{R}\) or \(\mathbb{C}\ ,\) there is an explicit formula, the homotopy formula, to compute the preimage, at least on the span of the eigenforms. Let \(d_1^{n}(s) a_n^\iota=\lambda_\iota a_n^\iota\) and let \(S\) be the span of all such \(a_n^\iota\in Z^n(\mathfrak{g},\mathfrak{a})\) with \(\lambda_\iota\neq 0\ .\) Then if \(a_n\in S\ ,\) one defines \(\tau^{s}a_n^\iota=\tau^{\lambda_\iota}a_n^\iota\ .\) This defines \(\tau^{s}a_n\) by linearity. Let \[ P a_n = \left.\int \tau^{s} a_n \frac{d\tau}{\tau}\right|_{\tau=1}\ .\] Then for \(a_n\in S\) one has \[ a_n=d^{n-1} \iota_1^n(s)P a_n\ .\] Here the meaning of the integral is \[ \int \frac{d\tau}{\tau}=\log(\tau)\] and, with \(\lambda\neq 0\ ,\) \[ \int \tau^\lambda\frac{d\tau}{\tau}=\frac{1}{\lambda}\tau^\lambda\ .\]

proof
Let \(a_n=\sum_\iota \alpha_\iota a_n^\iota\ .\) Then
\[ d^{n-1} \iota_1^n(s)P a_n=d^{n-1} \iota_1^n(s)\sum_\iota \alpha_\iota P a_n^\iota =d^{n-1} \iota_1^n(s)\sum_{\iota} \alpha_\iota \frac{1}{\lambda_\iota} a_n^\iota =\sum_{\iota} \alpha_\iota a_n^\iota = a_n\ .\quad\square\]

corollary
A scaling conformally symplectic element can be used to integrate the symplectic form.
pseudodifferential symbols - example of a closed 2-form
Let \(a_2(f\delta^n,g\delta^k)=\mathrm{tr}([\log(\delta),f\delta^n]g\delta^k)\ .\)
Until Property Pattern

Untimed version

Pattern Name and Classification
Until: Order Specification Pattern

Structured English Specification
Scope, P [holds] without interruption until S [holds].

Pattern Intent
Until has been proposed by Grunske (2008) in [1]. This pattern describes a scenario in which an event/state will eventually become true, after another event/state held continuously.

Temporal Logic Mappings

LTL
Globally: $P \; \mathcal{U} \; S$
Before R: $\Diamond R \rightarrow ( (P \wedge \neg R) \; \mathcal{U} \;(S \vee R))$
After Q: $\Box (Q \rightarrow (P \; \mathcal{U}\;S))$
Between Q and R: $\Box((Q \; \wedge \; \neg R \; \wedge \Diamond R) \rightarrow ((P \wedge \neg R) \; \mathcal{U} \;(S \wedge \neg R)))$
After Q until R: $\Box ((Q \; \wedge \; (\neg R)) \rightarrow ((P \; \mathcal{U} \;S) \; \mathcal{W} \;R))$

CTL
Globally: $A[P \; \mathcal{U} \;S]$
Before R: $A[A[P \; \mathcal{U} \;S] \; \mathcal{W} \;R]$
After Q: $AG(Q \rightarrow \; A[P\; \mathcal{U} \;S])$
Between Q and R: $AG((Q \; \wedge \;AG (\neg R)) \rightarrow A[A[P \; \mathcal{U} \;S] \; \mathcal{W} \; R])$
After Q until R: $AG((Q\; \wedge \;AG (\neg R)) \rightarrow A[A[P \; \mathcal{U} \;S] \; \mathcal{W} \;R])$

Example and Known Uses
Power to a system must be available until a shutdown command is issued. (scope: global; source: satellite control system)

Time-constrained version:

Pattern Name and Classification
Time-constrained Until: Real-time Order Specification Pattern

Structured English Specification
Scope, P [holds] without interruption until S [holds] [ Time(0)].

Pattern Intent
This pattern describes a scenario in which an event/state will eventually become true within a given time bound, after another event/state held continuously.
Temporal Logic Mappings

MTL
Globally: $P \; \mathcal{U}^{[t1,t2]}\;S$
Before R: $\Diamond^{[t1,\infty)} R \rightarrow ( (P \wedge \neg R) \; \mathcal{U}^{[t1,t2]}\;(S \vee R))$
After Q: $\Box (Q \rightarrow (P \; \mathcal{U}^{[t1,t2]}\;S))$
Between Q and R: $\Box((Q \; \wedge \; \neg R \; \wedge \Diamond^{[t1,\infty)} R) \rightarrow ( (P \wedge \neg R) \; \mathcal{U}^{[t1,t2]}\;(S \vee R)))$
After Q until R: $\Box ((Q \; \wedge \; \Box^{[0,t1]} (\neg R)) \rightarrow ((P \; \mathcal{U}^{[t1,t2]}\;S) \; \mathcal{W} \;R))$

TCTL
Globally: $A[P \; \mathcal{U}^{[t1,t2]} \;S]$
Before R: $A[A[P \; \mathcal{U}^{[t1,t2]} \;S] \; \mathcal{W} \;R]$
After Q: $AG(Q \rightarrow \; A[P\; \mathcal{U}^{[t1,t2]} \;S])$
Between Q and R: $AG((Q \; \wedge \;AG^{[0,t1]} (\neg R)) \rightarrow A[A[P \; \mathcal{U}^{[t1,t2]} \;S] \; \mathcal{W} \; R])$
After Q until R: $AG((Q\; \wedge \;AG^{[0,t1]} (\neg R)) \rightarrow A[A[P \; \mathcal{U}^{[t1,t2]} \;S] \; \mathcal{W} \;R])$

Example and Known Uses
After 30 ms of continuously operating, the pump engine must be stopped. Note: the engine is connected to a piston rod which sends forward a plunger in order to deliver the insulin to the body. (scope: global; source: insulin pump)

Relationships
This pattern is the extension of the untimed version Until.

Probabilistic version:

Bibliography
1. Lars Grunske: Specification patterns for probabilistic quality properties. ICSE 2008: 31-40
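On a finite execution trace, the untimed strong-Until semantics above can be checked directly (a minimal sketch; the `until` helper and the dict encoding of states are illustrative, not part of the pattern catalogue):

```python
from typing import Callable, Sequence

State = dict  # one observation of the system's atomic propositions

def until(trace: Sequence[State],
          p: Callable[[State], bool],
          s: Callable[[State], bool]) -> bool:
    """Strong Until (P U S) on a finite trace: S must eventually hold,
    and P must hold without interruption at every earlier position."""
    for state in trace:
        if s(state):
            return True   # S reached while P held so far
        if not p(state):
            return False  # P interrupted before S occurred
    return False          # S never occurred: strong Until fails

# Satellite example: power must stay available until shutdown is issued.
trace = [{"power": True, "shutdown": False},
         {"power": True, "shutdown": False},
         {"power": True, "shutdown": True}]
holds = until(trace, lambda st: st["power"], lambda st: st["shutdown"])
```

The same loop with an added time accumulator would give a crude check of the MTL variant; the strong/weak distinction matters at the end of the trace, where $\mathcal{W}$ would return True instead of False.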
Research Open Access Published: On stability with respect to boundary conditions for anisotropic parabolic equations with variable exponents Boundary Value Problems volume 2018, Article number: 27 (2018) Article metrics 577 Accesses

Abstract
Anisotropic parabolic equations with variable exponents are considered. If some of the diffusion coefficients \(\{b_{i}(x)\}\) are degenerate on the boundary while the others are always positive, we study how to impose a suitable boundary value condition. The existence of weak solutions is proved by the parabolically regularized method. The stability of weak solutions, based on the partial boundary value condition, is established by choosing a suitable test function.

Introduction and the main results
Recently, anisotropic parabolic equations with variable exponents have been studied and have shown some essential characteristics different from equation (1.1). Here, In this paper, we study the equation with the initial value condition and with a partial boundary value condition where \(\Sigma_{1}\subseteq\partial\Omega\) is a relatively open subset. A similar partial boundary value condition was imposed on the equation, and a new approach to prescribe the boundary value condition, rather than define the Fichera function, was formulated by Yin and Wang [6]. However, since equation (1.4) is anisotropic and has variable exponents, the method of [6] seems difficult to apply to equation (1.4). In what follows, we will try to depict \(\Sigma_{1}\) in another way. Moreover, instead of depicting the explicit formula of \(\Sigma _{1}\), we will try to find other conditions to substitute for the boundary value condition. Instead of condition (1.3), we assume that \(x\in\Omega\), \(b_{i}(x)>0\), and Here, \(\{i_{1}, i_{2}, \ldots, i_{k}\}\cup\{j_{1}, j_{2}, \ldots, j_{l}\}=\{ 1, 2, \ldots, N\}\), \(k+l=N\). For the sake of simplicity, we denote that and assume that \(p_{0}>1\). Let us introduce the basic definition and the main results.
First of all, for any small constant \(\eta>0\), we define

Definition 1.1
If a function \(v(x,t)\) satisfies and for \(\varphi\in L^{2}(0,T; W^{1,p^{0}}(\Omega)), \varphi |_{x\in\partial\Omega}=0\), Here and in what follows, \(p'=\frac{p}{p-1}\) as usual.

Theorem 1.2

Theorem 1.3
Let \(p_{0}>1\), \(b_{i}(x)\) satisfy conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\), \(a(s)\) be a Lipschitz function and for every \(1\leq r\leq l\), \(\int_{\Omega}b_{j_{r}}^{-\frac {1}{p_{j_{r}}-1}}(x)\,dx<\infty\). If \(v(x,t)\) and \(u(x,t)\) are two solutions of equation (1.4), then

Theorem 1.4
If \(p_{0}>1\), \(b_{i}(x)\) satisfies conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\), \(a(s)\) is a Lipschitz function. Let \(v(x,t)\) and \(u(x,t)\) be two solutions of equation (1.4). If and for every \(1\leq r\leq l\), then the stability (1.15) is true. Here, In fact, letting φ be a nonnegative \(C^{1}\) function satisfying the partial boundary \(\Sigma_{1}\) can be depicted by φ as By this token, the exact partial boundary \(\Sigma_{1}\), such that the partial boundary value condition (1.6) matches up with the nonlinear degenerate parabolic equation, should satisfy that and we can depict it as for any φ satisfying (1.20). However, if we really choose \(\Sigma_{1}\) as (1.22), it lacks the technical support to obtain the stability of the weak solutions for the time being. Anyway, by adopting some ideas and techniques in [4, 5], in some special cases, we can prove the stability of the weak solutions independent of the boundary value condition.

Theorem 1.5
If \(p_{0}>1\), \(b_{i}(x)\) satisfies conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\), \(a(s)\) is a Lipschitz function. Let \(v(x,t)\) and \(u(x,t)\) be two solutions of equation (1.4) only with the initial values \(v_{0}(x)\) and \(u_{0}(x)\), respectively, but without any boundary value condition.
If condition (1.17) is true, and for every \(1\leq r\leq k\), then the stability (1.15) is true. One can see that no boundary value condition is required in Theorem 1.5. From my own perspective, condition (1.23) is an alternative to the partial boundary value condition (1.6). By the way, for the following reaction–diffusion equation with

The proof of existence
By a similar method as in [4], we can prove the following.

Lemma 2.1
and the trace of v on the boundary ∂Ω can be defined in the traditional way. We omit the details of the proof here. By this lemma, we know that if \(b_{i}(x)\) satisfies (1.8), (1.9) and if for every \(1\leq r\leq l\), \(\int _{\Omega}b_{j_{r}}^{-\frac{1}{p_{j_{r}}-1}}(x)\,dx<\infty\), then (2.1) is satisfied. Thus, we can define the trace of v on the boundary ∂Ω. Consider the regularized equation with the initial-boundary condition Here, \(v_{0\varepsilon}(x)\in C_{0}^{\infty}(\Omega)\) and is strongly convergent to \(v_{0}(x)\) in \(W_{0}^{1,p^{0}}(\Omega)\).

Proof of Theorem 1.2
Multiplying (2.2) by \(v_{\varepsilon}\) and integrating over \(Q_{T}\) yields then and and \(v_{\varepsilon}\rightarrow v\) a.e. in \(Q_{T}\), Here, \(r<\frac{Np_{0}}{N-p^{0}}\). Now, similar to [1], we can show that and by Wu [9], by a process of the limit, we are able to prove that for any function \(\varphi\in L^{2}(0,T; W^{1,p^{0}}(\Omega))\), \(\varphi |_{x\in\partial\Omega}=0\). Thus, \(v(x,t)\) satisfies (1.10) and (1.11). Moreover, according to Lemma 2.1, the partial boundary value condition (1.6) is satisfied in the sense of trace. Now, we can prove the initial value (1.5) in a similar way to that in [10]. In detail, for small given \(r>0\), denote \(D_{r}=\{x\in\Omega: \operatorname{dist}(x,\partial\Omega)\leq r\}\). For large enough \(m, n\), denoting that \(v_{m}(x,t)=v_{\varepsilon=\frac{1}{m}}(x,t)\), we declare that where \(c_{r}(t)\) is independent of \(m,n\), and \(\lim_{t\rightarrow 0}c_{r}(t)=0\).
In fact, by (2.2), for any \(t\in[0, T)\), we have For small \(\eta>0\), let Obviously, \(l_{\eta}(s)\in C(\mathbb{R})\) and Clearly, if we denote \(A_{\eta}(s)=\int_{0}^{s}L_{\eta}(s)\,ds\), and Suppose that \(\xi(x)\in C_{0}^{1}(D_{r})\) such that and choose \(\varphi=\xi L_{\eta}(v_{m}-v_{n})\) in (2.10), then Clearly, and Noticing that \(\xi\in C_{0}^{1}(\Omega)\), \(a(s)\) is a Lipschitz function, using Hölder’s inequality of the variable exponent Sobolev space, by (2.14), we easily deduce that At the same time, Now, for any given small r, if \(m,n\) are large enough, by (2.9), we have

The stability of the initial-boundary value problem

Theorem 3.1
If \(p_{0}>1\), \(b_{i}(x)\) satisfies conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\), \(a(s)\) is a Lipschitz function and for every \(1\leq r\leq l\), \(\int_{\Omega}b_{j_{r}}^{-\frac {1}{p_{j_{r}}-1}}(x)\,dx<\infty\), \(g^{i}(x)\) satisfies If \(v(x,t)\) and \(u(x,t)\) are two solutions of equation (1.4) with the same homogeneous boundary value and with different initial values \(u_{0}(x)\) and \(v_{0}(x)\), then

Proof
Since \(\int_{\Omega}b_{j_{r}}^{-\frac {1}{p_{j_{r}}-1}}(x)\,dx<\infty\), by Lemma 2.1, we can choose \(\varphi =\chi_{[\tau,s]}L_{\eta}(v - u)\) in (1.11), where \(\chi_{[\tau ,s]}\) is the characteristic function of \([\tau, s]\subset(0, T)\). Then At first, we have By Lemma 3.1 from [11], we have Moreover, since \(g^{i}(x)\) satisfies condition (3.1) by (3.4), \(a(s)\) is a Lipschitz function, we have Now, let \(\eta\rightarrow0\) in (3.2). Then By Gronwall’s inequality, letting \(\tau\rightarrow0\), we have Theorem 3.1 is proved.

Proof of Theorem 1.3
From the above proof of Theorem 3.1, we only need to prove that without condition (3.1). Let us give an explanation.
Noticing that if \(\{\Omega: \vert v-u \vert =0\}\) is a subset of Ω with a positive measure, then At the same time, if \(\{\Omega: \vert v-u \vert =0\}\) is a subset of Ω with zero measure, since \(b_{i}(x)\) satisfies (1.8)–(1.9), and for every \(1\leq r\leq l\), \(\int_{\Omega}b_{j_{r}}^{-\frac{1}{p_{j_{r}}-1}}(x)\,dx<\infty\), we have Then Thus, Theorem 1.3 is true. □

The stability based on the partial boundary value condition

Theorem 4.1
If \(p_{0}>1\), \(b_{i}(x)\) satisfies conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\) satisfies (3.5), \(a(s)\) is a Lipschitz function. Let \(v(x,t)\) and \(u(x,t)\) be two solutions of equation (1.4). If the initial values \(u_{0}(x)\) and \(v_{0}(x)\) are different, while the partial boundary values satisfy

Proof
Let \(\Omega_{\eta}=\{x\in\Omega:\sum_{r=1}^{l}b_{j_{r}}(x)>\eta\}\), and Then, if \(x\in\Omega\setminus\Omega_{\eta}\), \(\phi_{\eta x_{i}}=\frac{1}{\eta}(\sum_{r=1}^{l} b_{j_{r}}(x))_{x_{i}}\), while \(x\in \Omega_{\eta}\), \(\phi_{x_{i}}=0\). Let \(\varphi=\chi_{[\tau,s]}\phi_{\eta}L_{\eta}(v-u)\) be the test function in (1.11). Then At first, and Secondly, by Hölder’s inequality of the variable exponent Sobolev space, we have Here, \(p^{1}_{i_{r}}=p^{+}_{i_{r}}\) or \(p^{-}_{i_{r}}\) according to or one can refer to Lemma 2.1 of [4]. \(q_{j_{r}}(x)=\frac {p_{j_{r}}(x)}{p_{j_{r}}(x)-1}\), \(q^{1}_{i_{r}}\) has a similar sense. Let \(\Sigma_{2}=\partial\Omega\setminus\Sigma_{1}\), and Then Since by the definition of the trace, we have Moreover, since we have Thirdly, for the last term of the left-hand side of (4.2), we have By condition (1.17), Then Fourthly, since \(g^{i}(x)\) satisfies condition (3.5), we have At last, we have By Gronwall’s inequality, we have Let \(\tau\rightarrow0\). Then □

Proof of Theorem 1.4
If \(\vert g^{i}(x) \vert \leq c\) and \(a(s)\) is a Lipschitz function, for every r satisfying (1.17), similar to the proof of Theorem 1.3 in Sect.
3, combining with Theorem 4.1, we know Theorem 1.4 is true. □

The stability without boundary value condition

Theorem 5.1
Let \(v(x,t)\) and \(u(x,t)\) be two solutions of equation (1.4) with different initial values \(v_{0}(x)\) and \(u_{0}(x)\), respectively. If \(p_{0}>1\), \(b_{i}(x)\) satisfies conditions (1.8), (1.9), \(g^{i}(x)\in C^{1}(\overline{\Omega})\) satisfies (3.5), \(a(s)\) is a Lipschitz function, conditions (1.17) and (1.23) are true, then the stability (1.15) is true.

Proof
Since

Proof of Theorem 1.5
If \(\vert g^{i}(x) \vert \leq c\) and \(a(s)\) is a Lipschitz function, condition (1.23) is true. Similar to the proof of Theorem 1.3 in Sect. 3, combining with Theorem 5.1, we know Theorem 1.5 is true. □

References
1. Antontsev, S., Shmarev, S.: Existence and uniqueness for doubly nonlinear parabolic equations with nonstandard growth conditions. Differ. Equ. Appl. 4(1), 67–94 (2012)
2. Tersenov Alkis, S.: The one dimensional parabolic \(p(x)\)-Laplace equation. Nonlinear Differ. Equ. Appl. 23, 27 (2016). https://doi.org/10.1007/s00030-016-0377-y
3. Tersenov Alkis, S., Tersenov Aris, S.: Existence of Lipschitz continuous solutions to the Cauchy–Dirichlet problem for anisotropic parabolic equations. J. Funct. Anal. 272, 3965–3986 (2017)
4. Zhan, H.: The stability of the anisotropic parabolic equation with the variable exponent. Bound. Value Probl. 2017, 134 (2017). https://doi.org/10.1186/s13661-017-0868-8
5. Zhan, H.: The well-posedness of an anisotropic parabolic equation based on the partial boundary value condition. Bound. Value Probl. 2017, 166 (2017). https://doi.org/10.1186/s13661-017-0899-1
6. Yin, J., Wang, C.: Evolutionary weighted p-Laplacian with boundary degeneracy. J. Differ. Equ. 237, 421–445 (2007)
7. Zhan, H.: On a hyperbolic-parabolic mixed type equation. Discrete Contin. Dyn. Syst., Ser. S 10(3), 605–624 (2017)
8. Zhan, H.: The solutions of a hyperbolic-parabolic mixed type equation on half-space domain. J. Differ. Equ.
259, 1449–1481 (2015) 9. Wu, Z., Zhao, J., Yin, J., Li, H.: Nonlinear Diffusion Equations. World Scientific, Singapore (2001) 10. Zhan, H.: The solution of convection-diffusion equation. Chin. Ann. Math. 34(2), 235–256 (2013) (in Chinese) 11. Antontsev, S.V., Shmarev, S.: Parabolic equations with double variable nonlinearities. Math. Comput. Simul. 81, 2018–2032 (2011) 12. Alaoui, M.K., Messaoudi, S.A., Khenous, H.B.: A blow-up result for nonlinear generalized heat equation. Comput. Math. Appl. 68(12), 1723–1732 (2014) 13. Al-Smail, J.H., Messaoudi, S.A., Talahmeh, A.A.: Well-posedness and numerical study for solutions of a parabolic equation with variable-exponent nonlinearities. Int. J. Differ. Equ. 2018, Article ID 9754567 (2018) 14. Messaoudi, S.A., Talahmeh, A.A., Al-Smail, J.H.: Nonlinear damped wave equation: existence and blow-up. Comput. Math. Appl. 74, 3024–3041 (2017) Acknowledgements The author would like to thank the SpringerOpen Accounts Team for kindly agreeing to give me a discount of the paper charge if my paper can be accepted. Availability of data and materials Not applicable. Funding The paper is supported by the Natural Science Foundation of Fujian Province (no. 2015J01592) and by the Science Foundation of Xiamen University of Technology, China. Ethics declarations Ethics approval and consent to participate Not applicable. Competing interests The author declares that he has no competing interests. Additional information Abbreviations Not applicable Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
I would like to draw a frame around one equation to point it out. I used \fbox{...} but it didn't work out. Here is a minimal example where I tried it. Any package suggestions?

\documentclass[
  german,
  paper=a4,
]{scrbook} % KOMA
\usepackage[latin1]{inputenc}
\usepackage[ngerman]{babel}
\usepackage[babel,german=guillemets]{csquotes}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\begin{document}
\begin{align}
Nu_\text{m} &= \frac{\alpha_m \, L}{\lambda} \text{ \quad .} \\
Nu_\text{m} &= \frac{\alpha_m \, L}{\lambda} \text{ \quad .}
\end{align}
\end{document}
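Not part of the original post — one possible direction (a sketch, not the thread's accepted answer): \fbox cannot wrap display math with alignment, but amsmath's \boxed frames a single equation's content, and the empheq package can frame an entire numbered line inside align:

```latex
% Sketch, assuming the empheq package is available (it loads amsmath itself).
\usepackage{empheq}  % in the preamble

% Option 1: \boxed (amsmath) frames the math content only:
\begin{equation}
  \boxed{Nu_\text{m} = \frac{\alpha_m \, L}{\lambda}}
\end{equation}

% Option 2: empheq frames the whole line, equation number outside the box:
\begin{empheq}[box=\fbox]{align}
  Nu_\text{m} &= \frac{\alpha_m \, L}{\lambda}
\end{empheq}
```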
Hi everyone--I'm a bit stuck trying to follow a calculation in an article (Nucl. Phys. B360 (1991) p. 145-179) regarding the lab-frame relative velocity of two colliding particles as a function of the kinetic energy per unit mass. (I'll include references to equations in the article, but the article is not required for this particular question.) Suppose we have two particles of mass [tex]m[/tex] colliding with one another with energies [tex]E_1, E_2[/tex] and 3-momenta [tex]\mathbf{p_1}, \mathbf{p_2}[/tex]. Define [tex]p_1 = |\mathbf{p_1}|, p_2 = |\mathbf{p_2}|[/tex] Define (eq. 3.3) [tex]s = 2m^2 + 2E_1E_2 - 2p_1p_2\cos\theta[/tex] which, I believe, is the same as the invariant [tex](p_1^\mu+p_2^\mu)^2[/tex], the square of the center-of-mass energy. Now define the kinetic energy per unit mass in the lab frame, (3.20) [tex] \epsilon = \frac{(E_{1,\mathrm{lab}}-m)+(E_{2,\mathrm{lab}}-m)}{2m}[/tex] First question: Why can we write [tex]\epsilon=\frac{s-4m^2}{4m^2}[/tex]? Second question: Why is it true that the lab velocity is given by [tex]v_\mathrm{lab} = \frac{2\epsilon^{1/2}(1+\epsilon)^{1/2}}{1+2\epsilon}[/tex]? Thanks very much for any assistance! Best, Flip
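Not part of the original post — a sketch of how both identities can be checked, assuming "lab frame" means the frame in which particle 2 is at rest, so \(E_{2,\mathrm{lab}}=m\) and \(p_2=0\):

```latex
% Lab frame: particle 2 at rest, so E_2 = m, p_2 = 0, and eq. (3.3) gives
s = 2m^2 + 2E_{1,\mathrm{lab}}\,m ,
\qquad
\epsilon = \frac{E_{1,\mathrm{lab}} - m}{2m}
\;\Longrightarrow\;
s - 4m^2 = 2m\,(E_{1,\mathrm{lab}} - m) = 4m^2 \epsilon .

% Velocity: E_{1,lab} = m(1 + 2\epsilon) means \gamma = 1 + 2\epsilon, hence
v_\mathrm{lab} = \frac{\sqrt{\gamma^2 - 1}}{\gamma}
= \frac{\sqrt{(1+2\epsilon)^2 - 1}}{1 + 2\epsilon}
= \frac{2\,\epsilon^{1/2}(1+\epsilon)^{1/2}}{1 + 2\epsilon} ,
% using (1+2\epsilon)^2 - 1 = 4\epsilon(1+\epsilon).
```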
In this chapter you will do more work with fractions written in the decimal notation. When fractions are written in the decimal notation, calculations can be done in the same way as for whole numbers. It is important to always keep in mind that the common fraction form, the decimal form and the percentage form are just different ways to represent exactly the same numbers. Equivalent forms Fractions in decimal notation 1. What fraction of each rectangle is coloured in? Write your answers in the table. (a) (b) (c) (d) (a) Red (b) Green Yellow (c) Green Yellow (d) Yellow Green 2. Now find out what fraction in each rectangle in question 1 is not coloured in. (a) (b) (c) (d) Decimal fractions and common fractions are simply different ways of expressing the same number. We call them different notations. To write a common fraction as a decimal fraction, we must first express the common fraction with a power of ten (10, 100, 1 000 etc.) as denominator. For example: \(\frac{9}{20}=\frac{9}{20} \times \frac{5}{5} = \frac{45}{100} = 0,45\) If you have a calculator, you can also divide the numerator by the denominator to get the decimal form of a fraction, for example: \(\frac{9}{20} = 9 \div 20 = 0,45\) To write a decimal fraction as a common fraction, we must first express it as a common fraction with a power of ten as denominator and then simplify if necessary. For example: \( 0,65 = \frac{65}{100} = \frac{65 \div 5}{100 \div 5} = \frac{13}{20}\) 3. Give the decimal form of each of the following numbers. \(\frac{1}{2} \) __________ \(\frac{3}{4}\) __________ \(\frac{4}{5}\) __________ \(\frac{7}{5}\) __________ \(\frac{7}{2} \) __________ \(\frac{65}{100}\)__________ 4. Write the following as decimal fractions. (a) \(2 \times 10 + 1 \times 1 + \frac{3}{10}\) (b) \(3 \times 1 + 6 \times \frac{1}{100}\) (c) Three hundredths (d) \(7 \times \frac{1}{1000}\) 5. Write each of the following numbers as fractions in their simplest form. 0,2 0,85 0,07 12,04 40,006 6.
Write in the decimal notation. (a) 5 + 12 tenths (b) 2 + 3 tenths + 17 hundredths (c) 13 hundredths + 15 thousandths (d) 7 hundredths + 154 hundredths Hundredths, percentages and decimals It is often difficult to compare fractions with different denominators. Fractions with the same denominator are easier to compare. For this and other reasons, fractions are often expressed as hundredths. A fraction expressed as hundredths is called a percentage. Instead of 6 hundredths we can say 6 per cent or \(\frac{6}{100}\) or 0,06. 6 per cent, \(\frac{6}{100}\) and 0,06 are just three different ways of writing the same number. The symbol % is used for per cent. Instead of writing "17 per cent", we may write 17%. 1. Write each of the following in three ways: in decimal notation, in percentage notation and in common fraction notation. Leave your answers in hundredths. (a) 80 hundredths (b) 5 hundredths (c) 60 hundredths (d) 35 hundredths 2. Complete the following table. 0,3 \(\frac{1}{4}\) 15% \(\frac{1}{8}\) 0,55 1% Ordering and comparing decimal fractions Bigger, smaller or the same? 1. Write the values of the marked points (A to D) in as accurately as possible in decimal notation. Write the values beneath the letters A to D. (a) (b) (c) (d) (e) (f) (g) (h) (i) 2. Order the following numbers from biggest to smallest. Explain your thinking. 5267 1263 1300 12689 635 1267 125 126 12 3. Order the following numbers from biggest to smallest. Explain your method. 0,8 0,05 0,901 0,15 0,465 0,55 0,75 0,4 0,62 0,901 0,8 0,75 0,62 0,55 0,465 0,4 0,15 0,05 4. Write down three different numbers that are bigger than the first number and smaller than the second number. (a) 5 and 5,1 (b) 5,1 and 5,11 (c) 5,11 and 5,12 (d) 5,111 and 5,116 (e) 0 and 0,001 (f) \(\frac{1}{2}\) and 1 5. Underline the bigger of the two numbers. (a) 2,399 and 2,6 (b) 5,604 and 5,64 (c) 0,11 and 0,087 (d) \(\frac{3}{4}\) and 50% (e) \(\frac{75}{100}\) and \(\frac{50}{100}\) (f) 0,125 and 0,25 6. 
The table gives information about two world champion heavyweight boxers. If they fight against one another, who would you expect to have the advantage, and why? Height (m) 1,98 1,88 Weight (kg) 112 103,3 Reach (m) 2,03 1,91 7. Fill in <, > or = . (a) 3,09 ☐ 3,9 (b) 3,9 ☐ 3,90 (c) 2,31 ☐ 3,30 (d) 3,197 ☐ 3,2 (e) 4,876 ☐ 5,987 (f) 123,321 ☐ 123,3 8. How many numbers are there between 3,1 and 3,2? Rounding off decimal fractions Decimal fractions can be rounded in the same way as whole numbers. They can be rounded to the nearest whole number or to one, two, three etc. figures after the comma. If the last digit of the number is 5 or bigger it is rounded up to the next number. For example: 13,5 rounded to the nearest whole number is 14; 13,526 rounded to two figures after the comma is 13,53. If the last digit is 4 or less it is rounded down to the previous number. For example: 13,4 rounded to the nearest whole number is 13. Let's round off 1. Round each of the following numbers off to the nearest whole number. 29,34 3,65 14,452 3,299 39,1 564,85 1,768 2. Round each of the following numbers off to one decimal place. 19,47 421,34 489,99 24,37 6,77 3. Round each of the following numbers off to two decimal places. 8,345 6,632 5,555 34,239 21,899 4. Mr Peters buys a radio for R206,50. The shop allows him to pay it off over six months. How must he pay back the money? 5. (a) Mrs Smith buys a carton of 10 kg grapes at the market for R24,77. She must divide it between herself and two friends. How much does each woman get? (b) How much must each person pay Mrs Smith for the grapes? 6. Estimate the answers for each of the following by rounding off the numbers. (a) \(1,43 \times 1,62\) (b) \(3,89 \times 4,21\) Calculations with decimal fractions To add and subtract decimal fractions tenths may be added to tenths tenths may be subtracted from tenths hundredths may be added to hundredths hundredths may be subtracted from hundredths etc. Let's do calculations! 1.
Four consecutive stages in a cycling race are 21,4 km; 14,7 km; 31 km and 18,6 km long. How long is the whole race? Answer: 2. Calculate. (a) \( 16,52 + 2,35 \) (b) \(16,52 + 9,38\) (c) \(16,52 + 9,78\) (d) \( 30,08 + 2,9 \) (e) \(0,042 + 0,103\) (f) \(9,99 + 0,99\) 3. Calculate. (a) \( 45,67 - 23,25 \) (b) \( 45,67 -23,80 \) (c) \(187,6 - 98,45\) (d) \( 1,009 - 0,998 \) (e) \(0,9 - 0,045\) (f) \(65,7 - 37,6\) 4. The following set of measurements (in cm) was recorded during an experiment: 56,8; 55,4; 78,9; 57,8; 34,2; 67,6; 45,5; 34,5; 64,5; 88 (a) Find the sum of the measurements and round it off to the nearest whole number. (b) First round off each measurement to the nearest whole number and then find the sum. (c) Which of your answers in 4(a) and (b) is closest to the actual sum? Explain why. 5. By how much is 0,7 greater than 0,07? 6. The difference between two numbers is 0,75. The bigger number is 18,4. What is the other number? To multiply fractions written as decimals, convert the fractions to whole numbers by multiplying by powers of 10 (e.g. \(0,3 \times 10 = 3\)), do your calculations with the whole numbers, and then convert back to decimals again. For example: \(13,1 \times 1,01\) \(13,1 {\bf\times 10} \times 1,01 {\bf\times 100} = 131 \times 101 = 13 231; 13 231 \div {\bf 10 \div 100} = 13,231\) When you do division you can first multiply the number and the divisor by the same number to make the working easier. For example: \(21,7 \div 0,7 = (21,7 {\bf\times 10}) \div (0,7 {\bf\times 10}) = 217 \div 7 = 31\) 7. Calculate each of the following. You may use fraction notation if you wish. (a) \(0,12 \times 0,3 \) (b) \( 0,12\times 0,03 \) (c) \(1,2 \times 0,3\) (d) \(350 \times 0,043 \) (e) \( 0,035\times 0,043 \) (f) \(0,13 \times 0,16\) (g) \(1,3 \times 1,6 \) (h) \(0,13 \times 1,6\) 8. \(30,5 \times 1,3 = 39,65\). Use this answer to work out each of the following. 
(a) \(3,05 \times 1,3 \) (b) \( 305 \times1,3 \) (c) \(0,305 \times 0,13\) (d) \(305 \times 13 \) (e) \( 39,65 \div 30,5 \) (f) \(39,65 \div 0,305\) (g) \( 39,65 \div 0,13 \) (h) \(3,965 \div 130\) 9. \( 3,5 \times 4,3 = 15,05\). Use this answer to work out each of the following. (a) \(3,5 \times 43 \) (b) \( 0,35 \times43 \) (c) \(3,5 \times 0,043\) (d) \(0,35 \times 0,43 \) (e) \( 15,05\div 0,35 \) (f) \(15,05 \div 0,043\) 10. Calculate each of the following. You may convert to whole numbers to make it easier. (a) \( 62,5 \div 2,5 \) (b) \(6,25 \div 2,5\) (c) \( 6,25 \div 0,25 \) (d) \(0,625 \div 2,5\) Solving problems 1. (a) Divide R44,45 between seven people so that each one receives the same amount. (b) John saves R15,25 every week. He now has R106,75 saved up. For how many weeks has he been saving? 2. (a) Calculate \(14,5 \div 6\), correct to two decimal places (b) Calculate \(7,41 \div 5\), correct to one decimal place 3. Determine the value of \(x\). (Give answers rounded to 2 decimal places.) (a) \( 7,1 \div x = 4,2 \) (b) \(x \div 0,7 = 6,2\) (c) \(12 \div x = 6,4\) (d) \( x \div 3,5 = 7 \) (e) \(2,3 \times x = 6\) (f) \(0,023 \times x = 8\) 4. (a) 1 ℓ of water weighs almost 0,995 kg. What will 50 ℓ of water weigh? What will 0,5 ℓ of water weigh? Mincemeat costs R36,65 per kilogram. What will 3,125 kg mincemeat cost? What will 0,782 kg cost?
Let $I \subseteq \mathbb{R}$ be an interval and $g: I \to \mathbb{C}$ continuous. Define $f: \mathbb{C} \backslash \overline{Im(g)} \to \mathbb{C}$ by $f(z) := \int_I \frac{1}{g(x) - z} dx$ (with $\overline{Im(g)}$ being the closure of the image of $g$). I now want to show that $f$ is analytic, and rewrite it into a power series that is (locally) defined for each $z_0 \in \mathbb{C} \backslash \overline{Im(g)}$. Now I must admit that I don't really know how to get started. I haven't dealt much with analytic functions before. I know that a complex function is by definition analytic iff it can be written as a power series (therefore, by completing the second part of the task, the first one would follow, although I don't really know how I could write the function as one), and iff it is differentiable once (hence differentiable infinitely often). Therefore, it would also be sufficient for the first part to show that $f$ is differentiable, I think? How do I show that though? I would need to differentiate by $z$, whereas $f$ is defined as an integral with respect to $x$. I'm rather confused by this function.
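Not part of the original question — a sketch of the expansion one would aim for: for $z$ near $z_0$ with $|z - z_0| < \mathrm{dist}(z_0, \overline{Im(g)})$, expand the integrand as a geometric series and (after justifying the interchange of sum and integral, e.g. by uniform convergence when $I$ is bounded) integrate term by term:

```latex
% Geometric series in (z - z_0), valid because
% |z - z_0| \le r < dist(z_0, cl Im(g)) \le |g(x) - z_0| for all x in I:
\frac{1}{g(x)-z}
= \frac{1}{(g(x)-z_0) - (z-z_0)}
= \frac{1}{g(x)-z_0}\cdot\frac{1}{1-\frac{z-z_0}{g(x)-z_0}}
= \sum_{n=0}^{\infty} \frac{(z-z_0)^n}{(g(x)-z_0)^{n+1}} .

% Integrating term by term gives the local power series of f:
f(z) = \sum_{n=0}^{\infty} c_n (z-z_0)^n ,
\qquad
c_n = \int_I \frac{\mathrm{d}x}{(g(x)-z_0)^{n+1}} ,
% with the geometric majorant (r / dist)^n guaranteeing convergence.
```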
Performing Topology Optimization with the Density Method Engineers are given significant freedom in their pursuit of lightweight structural components in airplanes and space applications, so it makes sense to use methods that can exploit this freedom, making topology optimization a popular choice in the early design phase. This method often requires regularization and special interpolation functions to get meaningful designs, which can be a nuisance to both new and experienced simulation users. To simplify the solution of topology optimization problems, the COMSOL® software contains a density topology feature. About the Density Method for Topology Optimization As the name suggests, topology optimization is a method that has the ability to come up with new and better topologies for an engineering structure given an objective function and set of constraints. The method comes up with these new topologies by introducing a set of design variables that describe the presence, or absence, of material within the design space. These variables are defined either within every element of the mesh or on every node point of the mesh. Changing these design variables thus becomes analogous to changing the topology. This means that holes in the structure can appear, disappear, and merge as well as that boundaries can take on arbitrary shapes. In addition, the control parameters are somewhat automatically defined and tied to the discretization. As of COMSOL Multiphysics® software version 5.4, the add-on Optimization Module includes a density topology feature to improve the usability of topology optimization. The feature is designed to be used as a density method (Ref. 3), meaning that the control parameters change a material parameter through an interpolation function. Interpolation functions for solid and fluid mechanics are built into the feature and used in example models throughout the Application Library in COMSOL Multiphysics. 
A bracket geometry is topology optimized, leaving only 50% of the material, which contributes the most to the stiffness. The printed bracket geometry. The density method involves the definition of a control variable field, \theta_c, which is bounded between 0 and 1. In solid mechanics, \theta_c=1 corresponds to the material from which the structure is to be built, while \theta_c=0 corresponds to a very soft material. By default, the void Young’s modulus is 0.1% of the solid Young’s modulus. In fluid mechanics, convention dictates that \theta_c=1 corresponds to fluid, while \theta_c=0 is a (slightly) permeable material with an inverse permeability factor, \alpha; i.e., a damping term is added to the Navier-Stokes equation: The damping term is 0 in fluid domains, while a large value is used in solid domains. These different values give a good approximation of the no-slip boundary condition on the interface between the domains. An Introduction to the Density Model Feature The Density Model feature supports regularization via a Helmholtz equation (Ref. 1). This introduces a minimum length scale using the filter radius, R_\mathrm{min}: \theta_f = R_\mathrm{min}^2\mathbf{\nabla}^2\theta_f + \theta_c Here, \theta_c is the raw control variable, which is modified by the optimizer, and \theta_f is the filtered variable. The mesh edge size is the default value for the filter radius. While this works well in terms of regularizing the optimization problem, it’s important to set a fixed length (larger than the mesh edge size) to get mesh-independent results. Top: The equation for the Helmholtz filter can be solved analytically for a 1D Heaviside function. Bottom: This plot is taken from the MBB beam optimization model. It shows the raw control variables to the left and the filtered version to the right. The Helmholtz filter gives rise to significant grayscale, which does not have a clear physical interpretation. The grayscale can be reduced by applying a smooth step function in what is referred to as projection in topology optimization.
Projection reduces grayscale, but it also makes it more difficult for the optimizer to converge. The density topology feature supports projection based on the hyperbolic tangent function, and the amount of projection can be controlled with the projection steepness, \beta: \theta = \frac{\tanh(\beta(\theta_f-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}{\tanh(\beta(1-\theta_{\beta}))+\tanh(\beta\theta_{\beta})} Here, \theta_{\beta} is the projection point. Plot showing the filtered field to the left and the projected field to the right. Projection makes it possible to avoid grayscale, but grayscale can still appear if the optimization problem favors it. If the same interpolation function is used for the mass and the stiffness, grayscale is optimal in volume-constrained minimum compliance problems. It is thus common to use interpolation functions that cause intermediate values to be associated with little stiffness relative to their cost (compared to the fully solid value). You can think of this as a penalization of intermediate values for the material volume factor, and the Density Model interface (shown below) supports two such interpolation schemes for solid mechanics: solid isotropic material with penalization (SIMP) and rational approximation of material properties method (RAMP) interpolation. Darcy interpolation is provided for fluid mechanics. The interpolated variable is called the penalized material volume factor, \theta_p, and is used for interpolating the material parameters, e.g., for SIMP interpolation, the p_\textsc{simp} exponent can be increased to reduce the stiffness of intermediate values, so that grayscale becomes less favorable: \begin{align} \theta_p &= \theta_\mathrm{min}+(1-\theta_\mathrm{min})\theta^{p_\textsc{simp}}\\ E_p &= E\theta_p \end{align} Here, E is the Young’s modulus of the solid material and E_p is the penalized Young’s modulus to be used throughout all optimized domains. The Density Model feature is available under Topology Optimization in Component > Definitions.
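The filter–projection–penalization chain described above can be sketched numerically. This is not COMSOL code: the kernel-based 1D filter (the Green's function of the Helmholtz filter equation) and all parameter values are illustrative stand-ins, assuming only NumPy.

```python
# Sketch of the density-method variable chain in 1D:
# theta_c (raw) -> theta_f (filtered) -> theta (projected) -> theta_p * E.
import numpy as np

def helmholtz_filter_1d(theta_c, r_min, dx):
    """Stand-in for the Helmholtz PDE filter: convolve with the normalized
    exponential kernel exp(-|x|/R), the 1D Green's function of
    theta_f - R^2 * theta_f'' = theta_c."""
    x = np.arange(-5 * r_min, 5 * r_min + dx, dx)
    kernel = np.exp(-np.abs(x) / r_min)
    kernel /= kernel.sum()
    return np.convolve(theta_c, kernel, mode="same")

def tanh_projection(theta_f, beta=8.0, theta_beta=0.5):
    """Hyperbolic-tangent projection; maps 0 -> 0 and 1 -> 1 while pushing
    intermediate (grayscale) values toward the endpoints."""
    num = np.tanh(beta * (theta_f - theta_beta)) + np.tanh(beta * theta_beta)
    den = np.tanh(beta * (1.0 - theta_beta)) + np.tanh(beta * theta_beta)
    return num / den

def simp(theta, E=200e9, theta_min=1e-3, p_simp=3.0):
    """SIMP interpolation: penalized material volume factor times E."""
    theta_p = theta_min + (1.0 - theta_min) * theta**p_simp
    return theta_p * E  # penalized Young's modulus E_p

# A raw 0/1 design with a jump in the middle of the unit interval:
dx, r_min = 0.01, 0.05
theta_c = (np.arange(0.0, 1.0, dx) > 0.5).astype(float)
theta_f = helmholtz_filter_1d(theta_c, r_min, dx)  # smoothed, with grayscale
theta = tanh_projection(theta_f)                   # grayscale pushed to 0/1
E_p = simp(theta)                                  # material interpolation
```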
The mesh edge length is taken as the default filter radius and it works well, but it has to be replaced with a fixed value in order to produce mesh-independent results. The penalized Young’s modulus can be defined as a domain variable, or (as in the case of the bracket model) it can be defined directly in the materials. Topology optimization with the density method involves varying the Young’s modulus spatially. In this case, it is achieved by going to the material properties and multiplying the solid Young’s modulus with the penalized material volume factor, dtopo1.theta_p. In summary, the density topology feature adds four variables. The filtered material volume factor is defined implicitly using a dependent variable. Symbol Description Equation \theta_c Control material volume factor 0\leq\theta_c\leq1 \theta_f Filtered material volume factor \theta_f = R_\mathrm{min}^2\mathbf{\nabla}^2\theta_f + \theta_c \theta Material volume factor \theta = \frac{\tanh(\beta(\theta_f-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}{\tanh(\beta(1-\theta_{\beta}))+\tanh(\beta\theta_{\beta})} \theta_p Penalized material volume factor \theta_p = \theta_\mathrm{min}+(1-\theta_\mathrm{min})\theta^{p_\textsc{simp}} or \theta_p = \frac{q_\mathrm{Darcy}(1-\theta)}{q_\mathrm{Darcy}+\theta} When the filtering is disabled, the filtered variable becomes undefined and the projection instead uses the control material volume factor directly. If the projection is disabled, the material volume factor still exists, but it becomes identical to the projection input. Applying Continuation to Avoid Local Minima When the topology is not too complicated, the default values of the density topology feature work well. This is the case for the MBB beam optimization and topology optimized hook models. If the optimal design is more complicated (such as for the bracket example shown at the top of this post), there might be many local minima. 
To avoid these minima, you can use continuation in the SIMP exponent and the projection slope. This can be achieved by modifying the initial value expression in the density topology feature and adding a Parametric Sweep feature, as shown below. As a result, the solver ramps over the specified parameters, using the optimum from the previous case as the initial value for the next optimization step. That is, it starts with a small SIMP exponent and projection slope and then continues to higher values. It is possible to apply continuation by combining a parametric sweep with a study reference. See the Bracket — Topology Optimization tutorial model for details. Objectives and Constraints in Topology Optimization If the geometry is optimized for a single load case (as shown below to the left), the resulting design will be optimal with respect to that load case. This can seem obvious, but often designers make assumptions about symmetries and the design topology. Unless these assumptions are formalized as constraints, they will not be respected. Therefore, the design shown to the right below uses eight load cases (two load groups times four constraint groups). Left: The bracket geometry is optimized for a single load case, resulting in an asymmetric design with two loosely connected halves. Right: The bracket geometry with eight load cases. Designers often have several objectives that need to be weighted. To make an informed decision about these objectives, a designer can trace the Pareto optimal front using several optimizations with different weights. The Pareto optimal front for the bracket geometry can be traced by varying the weight in a parametric sweep. Animation of the topology optimized bracket. (Download the glTF™ file from the Application Gallery in GLB-file format to rotate the geometry yourself.) 
Exporting and Importing Topology Optimization Results It is possible to analyze the result of a topology optimized design with respect to stress concentration and buckling without remeshing. However, if you want to be completely sure that the void phase does not play a role, you can eliminate it by exporting and importing the resulting design, as shown below. The details of this procedure are discussed in a previous blog post. The contour (left) for the topology optimized MBB beam design is exported and imported as an interpolation curve (right). Next Steps To learn more about the built-in tools and features for solving optimization problems, check out the Optimization Module product page by clicking the button below. Further Resources Try using the density feature for topology optimization with these example models: Read more about topology optimization on the COMSOL Blog: References B.S. Lazarov and O. Sigmund, “Filters in topology optimization based on Helmholtz‐type differential equations,” International Journal for Numerical Methods in Engineering, vol. 86, no. 6, pp. 765–781, 2011. F. Wang, B.S. Lazarov, and O. Sigmund, “On projection methods, convergence and robust formulations in topology optimization,” Structural and Multidisciplinary Optimization, vol. 43, pp. 767–784, 2011. M.P. Bendsøe, “Optimal shape design as a material distribution problem,” Structural Optimization, vol. 1, pp. 193–202, 1989. glTF and the glTF logo are trademarks of the Khronos Group Inc.
In this chapter you will do more work with fractions written in the decimal notation. When fractions are written in the decimal notation, calculations can be done in the same way as for whole numbers. It is important to always keep in mind that the common fraction form, the decimal form and the percentage form are just different ways to represent exactly the same numbers. Equivalent forms Fractions in decimal notation 1. What fraction of each rectangle is coloured in? Write your answers in the table. (a) (b) (c) (d) (a) Red (b) Green Yellow (c) Green Yellow (d) Yellow Green 2. Now find out what fraction in each rectangle in question 1 is not coloured in. (a) (b) (c) (d) Decimal fractions and common fractions are simply different ways of expressing the same number. We call them different notations. To write a common fraction as a decimal fraction, we must first express the common fraction with a power of ten (10, 100, 1 000 etc.) as denominator. For example: \(\frac{9}{20}=\frac{9}{20} \times \frac{5}{5} = \frac{45}{100} = 045\) If you have a calculator, you can also divide the numerator by the denominator to get the decimal form of a fraction, for example: \(\frac{9}{20} = 9 \div 20 = 0,45\) To write a decimal fraction as a common fraction, we must first express it as a common fraction with a power of ten as denominator and then simplify if necessary. For example: \( 0,65 = \frac{65}{100} = \frac{65 \div 5}{100 \div 5} = \frac{13}{20}\) 3. Give the decimal form of each of the following numbers. \(\frac{1}{2} \) __________ \(\frac{3}{4}\) __________ \(\frac{4}{5}\) __________ \(\frac{7}{5}\) __________ \(\frac{7}{2} \) __________ \(\frac{65}{100}\)__________ 4. Write the following as decimal fractions. (a) \(2 \times 10 + 1 \times 1 + \frac{3}{10}\) (b) \(3 \times 1 + 6 \times \frac{1}{100}\) (c) Three hundredths (d) \(7 \times \frac{1}{1000}\) 5. Write each of the following numbers as fractions in their simplest form. 0,2 0,85 0,07 12,04 40,006 6. 
Write in the decimal notation. (a) 5 + 12 tenths (b) 2 + 3 tenths + 17 hundredths (c) 13 hundredths + 15 thousandths (d) 7 hundredths + 154 hundredths Hundredths, percentages and decimals It is often difficult to compare fractions with different denominators. Fractions with the same denominator are easier to compare. For this and other reasons, fractions are often expressed as hundredths. A fraction expressed as hundredths is called a percentage. Instead of 6 hundredths we can say 6 per cent or \(\frac{6}{100}\) or 0,06. 6 per cent, \(\frac{6}{100}\) and 0,06 are just three different ways of writing the same number. The symbol % is used for per cent. Instead of writing "17 per cent", we may write 17%. 1. Write each of the following in three ways: in decimal notation, in percentage notation and in common fraction notation. Leave your answers in hundredths. (a) 80 hundredths (b) 5 hundredths (c) 60 hundredths (d) 35 hundredths 2. Complete the following table. 0,3 \(\frac{1}{4}\) 15% \(\frac{1}{8}\) 0,55 1% Ordering and comparing decimal fractions Bigger, smaller or the same? 1. Write the values of the marked points (A to D) in as accurately as possible in decimal notation. Write the values beneath the letters A to D. (a) (b) (c) (d) (e) (f) (g) (h) (i) 2. Order the following numbers from biggest to smallest. Explain your thinking. 5267 1263 1300 12689 635 1267 125 126 12 3. Order the following numbers from biggest to smallest. Explain your method. 0,8 0,05 0,901 0,15 0,465 0,55 0,75 0,4 0,62 0,901 0,8 0,75 0,62 0,55 0,465 0,4 0,15 0,05 4. Write down three different numbers that are bigger than the first number and smaller than the second number. (a) 5 and 5,1 (b) 5,1 and 5,11 (c) 5,11 and 5,12 (d) 5,111 and 5,116 (e) 0 and 0,001 (f) \(\frac{1}{2}\) and 1 5. Underline the bigger of the two numbers. (a) 2,399 and 2,6 (b) 5,604 and 5,64 (c) 0,11 and 0,087 (d) \(\frac{3}{4}\) and 50% (e) \(\frac{75}{100}\) and \(\frac{50}{100}\) (f) 0,125 and 0,25 6. 
The table gives information about two world champion heavyweight boxers. If they fight against one another, who would you expect to have the advantage, and why? Height (m) 1,98 1,88 Weight (kg) 112 103,3 Reach (m) 2,03 1,91 7. Fill in <, > or = . (a) 3,09 ☐ 3.9 (b) 3,9 ☐ 3,90 (c) 2,31 ☐ 3,30 (d) 3,197 ☐ 3,2 (e) 4,876 ☐ 5,987 (f) 123,321 ☐ 123,3 8. How many numbers are there between 3,1 and 3,2? Rounding off decimal fractions Decimal fractions can be rounded in the same way as whole numbers. They can be rounded to the nearest whole number or to one, two, three etc. figures after the comma. If the last digit of the number is 5 or bigger it is rounded up to the next number. For example: 13,5 rounded to the nearest whole number is 14; 13,526 rounded to two figures after the comma is 13,53. If the last digit is 4 or less it is rounded down to the previous number. For example: 13,4 rounded to the nearest whole number is 13. Let's round off 1. Round each of the following numbers off to the nearest whole number. 29,34 3,65 14,452 3,299 39,1 564,85 1,768 2. Round each of the following numbers off to one decimal place. 19,47 421,34 489,99 24,37 6,77 3. Round each of the following numbers off to two decimal places. 8,345 6,632 5,555 34,239 21,899 4. Mr Peters buys a radio for R206,50. The shop allows him to pay it off over six months. How must he pay back the money? 5. (a) Mrs Smith buys a carton of 10 kg grapes at the market for R24,77. She must divide it between herself and two friends. How much does each woman get? (b) How much must each person pay Mrs Smith for the grapes? 6. Estimate the answers for each of the following by rounding off the numbers. (a) \(1,43 \times 1,62\) (b) \(3,89 \times 4,21\) Calculations with decimal fractions To add and subtract decimal fractions tenths may be added to tenths tenths may be subtracted from tenths hundredths may be added to hundredths hundredths may be subtracted from hundredths etc. Let's do calculations! 1. 
Four consecutive stages in a cycling race are 21,4 km; 14,7 km; 31 km and 18,6 km long. How long is the whole race? Answer: 2. Calculate. (a) \( 16,52 + 2,35 \) (b) \(16,52 + 9,38\) (c) \(16,52 + 9,78\) (d) \( 30,08 + 2,9 \) (e) \(0,042 + 0,103\) (f) \(9,99 + 0,99\) 3. Calculate. (a) \( 45,67 - 23,25 \) (b) \( 45,67 -23,80 \) (c) \(187,6 - 98,45\) (d) \( 1,009 - 0,998 \) (e) \(0,9 - 0,045\) (f) \(65,7 - 37,6\) 4. The following set of measurements (in cm) was recorded during an experiment: 56,8; 55,4; 78,9; 57,8; 34,2; 67,6; 45,5; 34,5; 64,5; 88 (a) Find the sum of the measurements and round it off to the nearest whole number. (b) First round off each measurement to the nearest whole number and then find the sum. (c) Which of your answers in 4(a) and (b) is closest to the actual sum? Explain why. 5. By how much is 0,7 greater than 0,07? 6. The difference between two numbers is 0,75. The bigger number is 18,4. What is the other number? To multiply fractions written as decimals, convert the fractions to whole numbers by multiplying by powers of 10 (e.g. \(0,3 \times 10 = 3\)), do your calculations with the whole numbers, and then convert back to decimals again. For example: \(13,1 \times 1,01\) \(13,1 {\bf\times 10} \times 1,01 {\bf\times 100} = 131 \times 101 = 13 231; 13 231 \div {\bf 10 \div 100} = 13,231\) When you do division you can first multiply the number and the divisor by the same number to make the working easier. For example: \(21,7 \div 0,7 = (21,7 {\bf\times 10}) \div (0,7 {\bf\times 10}) = 217 \div 7 = 31\) 7. Calculate each of the following. You may use fraction notation if you wish. (a) \(0,12 \times 0,3 \) (b) \( 0,12\times 0,03 \) (c) \(1,2 \times 0,3\) (d) \(350 \times 0,043 \) (e) \( 0,035\times 0,043 \) (f) \(0,13 \times 0,16\) (g) \(1,3 \times 1,6 \) (h) \(0,13 \times 1,6\) 8. \(30,5 \times 1,3 = 39,65\). Use this answer to work out each of the following. 
(a) \(3,05 \times 1,3 \) (b) \( 305 \times 1,3 \) (c) \(0,305 \times 0,13\) (d) \(305 \times 13 \) (e) \( 39,65 \div 30,5 \) (f) \(39,65 \div 0,305\) (g) \( 39,65 \div 0,13 \) (h) \(3,965 \div 130\)
9. \( 3,5 \times 4,3 = 15,05\). Use this answer to work out each of the following.
(a) \(3,5 \times 43 \) (b) \( 0,35 \times 43 \) (c) \(3,5 \times 0,043\) (d) \(0,35 \times 0,43 \) (e) \( 15,05 \div 0,35 \) (f) \(15,05 \div 0,043\)
10. Calculate each of the following. You may convert to whole numbers to make it easier.
(a) \( 62,5 \div 2,5 \) (b) \(6,25 \div 2,5\) (c) \( 6,25 \div 0,25 \) (d) \(0,625 \div 2,5\)

Solving problems
1. (a) Divide R44,45 between seven people so that each one receives the same amount. (b) John saves R15,25 every week. He now has R106,75 saved up. For how many weeks has he been saving?
2. (a) Calculate \(14,5 \div 6\), correct to two decimal places. (b) Calculate \(7,41 \div 5\), correct to one decimal place.
3. Determine the value of \(x\). (Give answers rounded to 2 decimal places.) (a) \( 7,1 \div x = 4,2 \) (b) \(x \div 0,7 = 6,2\) (c) \(12 \div x = 6,4\) (d) \( x \div 3,5 = 7 \) (e) \(2,3 \times x = 6\) (f) \(0,023 \times x = 8\)
4. (a) 1 ℓ of water weighs about 0,995 kg. What will 50 ℓ of water weigh? What will 0,5 ℓ of water weigh? (b) Mincemeat costs R36,65 per kilogram. What will 3,125 kg mincemeat cost? What will 0,782 kg cost?
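The powers-of-ten technique described above can be sketched in a few lines of Python (Python writes decimals with a point rather than a comma; the helper name is just for illustration):

```python
# The scaling trick from the text: turn the decimal fractions into whole
# numbers with powers of 10, compute with whole numbers, then scale back.

def multiply_via_whole_numbers(a: str, b: str) -> float:
    """Illustrative helper: multiply two decimal fractions given as strings."""
    digits_a = len(a.split(".")[1]) if "." in a else 0
    digits_b = len(b.split(".")[1]) if "." in b else 0
    whole = int(a.replace(".", "")) * int(b.replace(".", ""))
    return whole / 10 ** (digits_a + digits_b)

print(multiply_via_whole_numbers("13.1", "1.01"))  # 13.231

# Division: multiply dividend and divisor by the same power of 10 first.
print(round((21.7 * 10) / (0.7 * 10)))  # 217 / 7 = 31
```

This mirrors the worked example 13,1 × 1,01 = 13 231 ÷ 1000 = 13,231.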
If $\text{Ker}(A) \ne \{0\}$, $A$ certainly doesn't have the property, as you can take $B$ whose columns are all copies of a nonzero member of $\text{Ker}(A)$ and have $AB$ all $0$. So assume $\text{Ker}(A) = \{0\}$. Now the columns of $AB$ are $p$ arbitrary nonzero members of $\text{Ran}(A)$. The question is whether there are $p$ members of $\text{Ran}(A)$ which together have at least one $0$ in every position. Let $V = \text{Ran}(A)$, which is a linear subspace of ${\mathbb R}^n$ of dimension $r = \text{rank}(A)$. Let ${\cal F}$ be the collection of sets $S \subseteq \{1,\ldots,n\}$ such that $\dim \{v \in V: v_i = 0 \; \forall i \in S\} > 0$. The condition then is that there do not exist $p$ members of $\cal F$ whose union is $\{1,\ldots,n\}$. Note that $S \in {\cal F}$ is equivalent to $\text{Ker}(A_S) \ne \{0\}$, where $A_S$ is the $|S| \times m$ submatrix of $A$ consisting of the rows in $S$. In particular, ${\cal F}$ contains all sets of cardinality $<r$. So certainly $p(r-1) < n$ is a necessary condition. But ${\cal F}$ may also contain sets of greater cardinality, so this is not sufficient. For a given $A$, even if you list all maximal members of $\cal F$ (and there might be a lot of them), you have a "set-covering problem" to determine if there are $p$ such sets that cover $\{1,\ldots, n\}$, so this might be a nontrivial computational problem. I suspect it is NP-complete.
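The first observation can be checked concretely. The matrix below is an illustrative example (not from the question) whose third column is the sum of the first two, so its kernel is nontrivial:

```python
# Illustrative 3x3 example: the third column of A is the sum of the first
# two, so v = (1, 1, -1) lies in Ker(A).
A = [[1, 2, 3],
     [0, 1, 1],
     [1, 0, 1]]
v = [1, 1, -1]

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

assert matvec(A, v) == [0, 0, 0]  # v really is a kernel vector

# B has p columns, each a copy of the kernel vector v, so AB is the zero
# matrix: every position of AB holds a 0, as claimed.
p = 4
B = [[v[i]] * p for i in range(3)]
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(p)]
      for i in range(3)]
print(all(entry == 0 for row in AB for entry in row))  # True
```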
When we perform a Legendre transform on the connected generating functional $W[J]$ we get the quantum action (or 1PI action) $$ \Gamma[\phi] = W[J(\phi)] - \int\mathrm{d}^4x\,\phi J,\quad\phi(J)=\frac{\delta W}{\delta J}. $$ Then it can be shown that $$ \Gamma[\phi] = S[\phi] \mp\frac{1}{2}\log\det\left(\frac{\delta^2S}{\delta\Phi(x)\delta\Phi(y)}\right)_{\Phi=\phi} +\ldots, $$ where $S[\phi]$ is the classical action and the dots represent higher corrections. It is said that the lowest quantum correction (i.e., the term involving $\log\det$) is the result of a resummation of one-loop diagrams. Why is the $\log\det$ term identified with a one-loop correction? I took a look at the proof, but it seems to me that there is no connection to the one-loop diagrams at all. Why is $\frac{\delta^2S}{\delta\Phi(x)\delta\Phi(y)}$ identified with the propagator? Is it the free propagator or the exact propagator (including interactions)? Is it possible to get the same one-loop correction to the action using the Wilson effective action instead?
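For context, here is a hedged sketch (Euclidean signature, schematic normalization, signs depending on convention) of where the determinant comes from in the saddle-point expansion:

```latex
% Expand \Phi = \phi_c + \eta around the stationary point and do the
% Gaussian integral over the fluctuations \eta:
Z[J] = \int \mathcal{D}\Phi \, e^{-S[\Phi] + \int J\Phi}
     \simeq e^{-S[\phi_c] + \int J\phi_c}
       \left[ \det\!\left( \frac{\delta^2 S}{\delta\Phi(x)\,\delta\Phi(y)}
       \right)_{\Phi=\phi_c} \right]^{-1/2}
% so W = \ln Z picks up -\tfrac{1}{2}\ln\det S''[\phi_c], and the Legendre
% transform carries this into the \tfrac{1}{2}\log\det term of \Gamma[\phi].
% Splitting S'' = S_0'' + V and using \ln\det = \mathrm{Tr}\,\ln:
\ln\det S'' = \mathrm{Tr}\,\ln S_0''
  + \mathrm{Tr}\,\bigl(S_0''^{-1} V\bigr)
  - \tfrac{1}{2}\,\mathrm{Tr}\,\bigl(S_0''^{-1} V\bigr)^{2} + \ldots
```

Each trace is a single closed loop of free propagators $(S_0'')^{-1}$ with $n$ insertions of $V$, which is why the $\log\det$ term is identified with the resummed one-loop diagrams; it is the free propagator (evaluated at the background field) that appears here, not the exact one.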
tl;dr: At only a handful of milligauss, the Earth's field out there is so weak that it probably wouldn't be very useful. It's doubtful that the critical GPS satellites would depend on it. What's the Earth's field like out where the GPS satellites are? First of all, where are they? The approximate period of a satellite with semimajor axis $a$ is given by (from here) $$T = 2 \pi \sqrt{a^3 / GM_E},$$ where Earth's standard gravitational parameter is about 3.986E+14 m^3/s^2. Flip that around and you get $$a = \left(T^2 \frac{GM}{4 \pi^2} \right)^{1/3}.$$ Put in a period of a half-sidereal day (about 43082 seconds) and you get a distance from the center of the Earth of about 26,560 kilometers give or take, or about 4.2 Earth radii. A dipole field drops as $1/r^3$. For example, $$\mathbf{B} = B_0 \frac{3(\mathbf{\hat{p}} \cdot \mathbf{\hat{r}}) \mathbf{\hat{r}} - \mathbf{\hat{p}}}{r^3}, $$ where $\mathbf{\hat{p}}$ is the dipole vector of the field and $\mathbf{\hat{r}}$ is the vector from the dipole to the field point. Here $B_0$ is about 3.12E-5 tesla, or about 0.312 gauss. If we ignore that Earth's field is tipped by about 11.5 degrees, and put in two points at one and 4.2 Earth radii, we get about 0.31 gauss and 0.0042 gauss; the field at GPS altitude is only about 1.3% as strong as the field near the Earth's equator. At the poles the field is double that, but the ratio is the same. This is an incredibly weak field; there's not much you can do with a handful of milligauss, and GPS is so critical that they'd never depend on something so weak. That doesn't mean that they don't have some backup systems available in an emergency; they might. But I don't think they would depend on Earth's field for torque.
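The orbit and field-strength numbers above can be reproduced with a short Python sketch (dipole approximation, untilted, equatorial value):

```python
import math

# Reproducing the numbers above (dipole approximation, no tilt).
GM_E = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2
R_E = 6.371e6     # mean Earth radius, m
B0 = 3.12e-5      # equatorial surface dipole field, tesla (~0.312 gauss)
T = 43082.0       # half a sidereal day, seconds

# Semimajor axis from the rearranged Kepler relation.
a = (T**2 * GM_E / (4 * math.pi**2)) ** (1 / 3)
print(f"{a / 1000:.0f} km")   # about 26,560 km
print(round(a / R_E, 1))      # 4.2 Earth radii

# Dipole magnitude falls off as 1/r^3 (equatorial value).
B_gps = B0 / (a / R_E) ** 3
print(round(B_gps * 1e4, 4), "gauss")  # 0.0043
```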
As far as I can see, this question has a simple solution. Let's limit ourselves to Hamiltonian cycles, and forget about the constraint that the coloring is legal. Theorem: Let $c$ be the number of colors in an edge-coloring of $K_n$ such that no two Hamiltonian cycles have the same set of colors. Then $c\geq (1+o(1))\left(\frac{n}{e}\right)^2$. Proof: There are $(n-1)!/2$ Hamiltonian cycles, and $\sum_{i=1}^{n} \binom{c}{i}$ subsets of the color set of size at most $n$. As each Hamiltonian cycle corresponds to a different such subset, we get the inequality$$(n-1)!/2 \leq \sum_{i=1}^{n} \binom{c}{i}.$$ Recalling that $(n-1)!/2=\left((1+o(1))\frac{n}{e}\right)^n$ and that (for $c>2n$, which we can assume here) $\sum_{i=1}^{n} \binom{c}{i}=\left((1+o(1))\frac{c \cdot e}{n}\right)^n$, we have$$\left((1+o(1))\frac{n}{e}\right)^n\leq\left((1+o(1))\frac{c \cdot e}{n}\right)^n ,$$which yields the theorem.
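The counting inequality is easy to sanity-check numerically; the sketch below (illustrative helper name) finds the smallest $c$ for which the subset count reaches the number of Hamiltonian cycles:

```python
from math import comb, e, factorial

def min_colors(n: int) -> int:
    """Illustrative helper: smallest c with sum_{i=1}^{n} C(c, i) >= (n-1)!/2,
    i.e. enough color subsets of size <= n for all Hamiltonian cycles."""
    target = factorial(n - 1) // 2
    c = 1
    while sum(comb(c, i) for i in range(1, n + 1)) < target:
        c += 1
    return c

# The counting bound in action: min_colors(n) versus (n/e)^2.
for n in (6, 8, 10):
    print(n, min_colors(n), round((n / e) ** 2, 1))
```

For small $n$ the exact threshold sits above $(n/e)^2$, consistent with the theorem's asymptotic lower bound.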
How to Couple Radiating and Receiving Antennas in Your Simulations In Part 3 of our series on multiscale modeling in high-frequency electromagnetics, let’s turn our attention to the receiving antenna. We’ve already covered theory and definitions in Part 1 and radiating antennas in Part 2. Today, we will couple a radiating antenna at one location with a receiving antenna 1000 λ away. For verification, we will calculate the received power via line-of-sight transmission and compare it with the Friis transmission equation that we covered in Part 1. Simulating the Background Field In the simulation of our receiving antenna, we will use the Scattered Field formulation. This formulation is extremely useful when you have an object in the presence of a known field, such as in radar cross section (RCS) simulations. Since there are a number of scattered field simulations in the Application Gallery, and the technique has been discussed in a previous blog post, we will assume a familiarity with it and encourage you to review those resources if the Scattered Field formulation is new to you. The Scattered Field formulation is useful for computing a radar cross section. When comparing the implementation we will use here with the scattering examples in the Application Gallery, there are two differences that need to be referenced explicitly. The first is that, unlike the scattering examples, we will use a receiving antenna with a Lumped Port. With the Lumped Port excitation set to Off, it will receive power from the background field. This is automatically calculated in a predefined variable, and since the power is going into the lumped port, the value will be negative. The second difference, which we will spend more time discussing, is that the receiving antenna will be in a separate component from the emitting antenna, and we will have to reference the results of one component in the other to link them.
Multiple Components in the Same Model What does it mean when we have two or more components in a model? The defining feature of a component is that it has its own geometry and spatial dimension. If you would like to have a 2D axisymmetric geometry and a 3D geometry in the same simulation, then they would each require their own component. If you would like to do two 3D simulations in the same model, you only need one component, although in some situations it can be beneficial to separate them anyway. Let’s say, for example, that you have two devices with relatively complicated geometries. If they are in the same component, then anytime you make a geometric change to one, they both need to be rebuilt (and remeshed). In separate components this would not be the case. Another common use of multiple components is submodeling, where the macroscopic structure is analyzed first and then a more detailed analysis is performed on a smaller region of the model. When we split into components, however, we then need to link the results between the simulations. In our case, we have two antennas at a distance of 1000 λ. Separating them into distinct components is not strictly required, but we are going to do it anyway to keep things general. We will add in ray tracing later in this series, and some users may find this multiple-component method useful with an arbitrarily complex ray tracing geometry. While we go through the details, it’s important that we have a clear image of the big picture. The main idea that we are pursuing in this post is that we first simulate an emitting antenna and calculate the radiated fields in a specific direction. Specifically, this is the direction of the receiving antenna. We then account for the distance between the antennas and use the calculated fields as the background field in a Scattered Field formulation for the receiving antenna.
The emitting antenna is centered at the origin in component 1 and the receiving antenna is centered at the origin in component 2. Everything we will discuss here is simply the technical details of determining the emitted fields from the first simulation and using them as a background field in a second simulation. Note: The overwhelming majority of COMSOL Multiphysics® software models have only one component and should have only one component. Ensure that you have a sufficient need for multiple components in your model before implementing them, as there is a very real possibility of causing yourself extra work without benefit. Connecting Components with Coupling Operators There are a number of coupling operators, also known as component couplings, available in COMSOL Multiphysics. Generally speaking, these operators map the results from one spatial location to another. Said another way, you can call for results in one location (the destination), but have the results evaluated at a separate location (the source). While this may seem trivial at first glance, it is an incredibly powerful and general technique. Let’s look at a few specific examples: We can evaluate the maximum or minimum value of a variable in a 3D domain, but call that result globally. This is a 3D to 0D mapping and allows us to create a temperature controller. Note that this can also be used with boundaries or edges, as well as averages or spatial integrations. We can extrude 2D simulation results to a 3D domain. This allows you to exploit translation symmetry in one physics (2D) and use the results in a more complex 3D model. We can project 3D data onto a 2D boundary (or 2D to 1D, etc.). A simple example of this is creating shadow puppets on a wall, but it can also be useful for analyzing averages over a cross section. As mentioned above, we want to simulate the emitting antenna (just like we did in Part 2 of the series) and calculate the radiated fields at a distance of 1000 λ.
We then use a component coupling to map the fields to being centered about the origin in component 2. Mapping the Radiated Fields If we look at the far-field evaluation discussed in Part 2, we know that the x-component of the far field at a specific location is

$$E_x(\vec{r}) = E_{\mathrm{far},x}(\theta,\phi)\,\frac{e^{-jkr}}{r}.$$

The only complication is determining where to calculate the scattering amplitude. This is because component couplings need the source and destination to be locations that exist in the geometry. We don’t want to define a sphere in component 1 at the actual location of the receiving antenna, since that defeats the entire purpose of splitting the two antennas into two components. What we will do instead is create a variable for the magnitude of r, and then evaluate the scattering amplitude at a point in the geometry that shares the same angular coordinates, \((\theta,\phi)\), as the point we are actually interested in. In the image below, we show the point where we would like to evaluate the scattering amplitude. Image showing where the scattering amplitude should be calculated and how the coordinates of that point can be determined. Defining the Point and Coupling Operator We add a point to the geometry using the rescaling of the Cartesian coordinates shown in the above figure. Only x is shown in the figure, but the same scaling is also applied to y and z. For the COMSOL Multiphysics implementation, shown below, we have assumed that the receiving antenna is centered at a location of (1000 λ, 0, 0), and the two parameters used are ant_dist = \(|\vec{r}_1|\) and sim_r = \(|\vec{r}|\). The required point for the correct scattering amplitude evaluation. Note that we create a selection group from this point. This is so that it can be referenced without ambiguity. We then use this selection for an integration operator. Since we are integrating only over a single point, we simply return the value of the integrand at that point, similar to using a Dirac delta function.
The integration operator is defined using the selection group for the evaluation point. Running the Background Field Simulation in COMSOL Multiphysics® The above discussion was all about how to evaluate the scattering amplitude at the correct location. The only remaining step is to use this in a background field simulation of the half-wavelength dipole discussed in Part 1. When we add in the known distance between the antennas, we get the following: The variable definition for r. Note that this is defined in component 2. The background field settings. In the settings, we see that the expression used for the background field in x is comp1.intop1(emw.Efarx)*exp(-j*k*r)/(r/1[m]), which matches the equation cited above. Also note that r is defined in component 2, while intop1() is defined in component 1. Since we are calling this from within component 2, we need to include the correct scope for the coupling operator, comp1.intop1(). The remainder of the receiving antenna simulation is functionally equivalent to other Scattered Field simulations in the Application Gallery, so we will not delve into the specifics here. It is interesting to note that running either the emission or background field simulations by themselves is quite straightforward. All of the complication in this procedure is in correctly calculating the fields from component 1 and using them in component 2. All of this heavy lifting has paid off in that we can now fully simulate the received power in an antenna-to-antenna simulation, and the agreement between the simulated power and the Friis transmission equation is excellent. We can also obtain much more information from our simulation than we can purely from the Friis equation, since we have full knowledge of the electromagnetic fields at every point in space. It is worth mentioning one final point before we conclude. We have only evaluated the far field at an individual point, so there is no angular dependence in the field at the receiving antenna. 
Because we are interested in antennas that are generally far apart, this is a valid approximation, although we will discuss a more general implementation in Part 4. Concluding Thoughts on Coupling Radiating and Receiving Antennas We have now reached a major benchmark in this blog series. After discussing terminology in Part 1 and emission in Part 2, we can now link a radiating antenna to a receiving antenna and verify our results against a known reference. The method we have implemented here can also be more useful than the Friis equation, as we have fully solved for the electromagnetic fields and any polarization mismatch is automatically accounted for. There is one remaining issue, however, that we have not discussed. The method used here is only applicable to line-of-sight transmission through a homogeneous medium. If we had an inhomogeneous medium between the antennas or multipath transmission, that would not be appropriately accounted for either by this technique or the Friis equation. To solve that issue, we will need to use ray tracing to link the emitting and receiving antennas. In Part 4 of this blog series, we will show you how we can link a radiating source to a ray optics simulation. Further Reading Browse previous posts in the Multiscale Modeling in High-Frequency Electromagnetics blog series
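For readers who want the line-of-sight reference value outside the simulation, the Friis comparison can be sketched in a few lines of Python (the gains and numbers here are illustrative, not taken from the model):

```python
import math

def friis_received_power(p_t, g_t, g_r, wavelength, distance):
    """Friis transmission equation for matched, line-of-sight antennas in
    free space: P_r = P_t * G_t * G_r * (lambda / (4*pi*d))^2."""
    return p_t * g_t * g_r * (wavelength / (4 * math.pi * distance)) ** 2

# Illustrative numbers: two half-wave dipoles (linear gain about 1.64),
# 1 W transmitted, separation of 1000 wavelengths.
lam = 1.0
p_r = friis_received_power(1.0, 1.64, 1.64, lam, 1000 * lam)
print(p_r)  # about 1.7e-8 W
```

A full-wave model adds what this formula cannot: the actual field distribution and any polarization mismatch.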
The value of the integral \(\int_0^{\pi/2}\dfrac{\mathrm{d}x}{2+\cos x}\) can be expressed in the form \(\dfrac{\pi^A\sqrt{B}}{C}\), where \(A\), \(B\), \(C\) are positive integers and \(B\) is not divisible by the square of any prime. Find the value of \(A+B+C\).
I am driving an OLED display via two AAA batteries plus a charge pump to step up the voltage to 9V. I was wondering if there is likely to be any significant difference in efficiency between (a) using the two batteries in parallel and using a charge pump to step up from 1.5V to 9V, or (b) connecting the batteries in series and stepping up from 3V to 9V. You can calculate the efficiency by using \$\eta = \dfrac{p_{out}}{p_{in}}\$ and get the power \$p_{in}\$ by measuring the current from the batteries times the battery voltage, and the power \$p_{out}\$ by multiplying the current into the LED(s) by the voltage across them (at the output of the converter). This assumes the converter has a filter capacitor. Use a scope and a shunt resistor of about 0.1 Ohms if not. Although trickier, you can estimate the current consumption by averaging: pulse height times duty cycle. $$I_{avg} = I_{peak} \times \frac{t_{on}}{t_{on}+t_{off}}$$ From my experience doing a lot of this sort of thing, using a higher voltage into a boost converter will improve efficiency. As a rule of thumb, the closer the input and output voltages of an SMPS, the more efficient it will be. With a Dickson charge pump the same rule applies, as you've got diode (or MOSFET) voltage drops to overcome. And you'll need more stages to get to a higher voltage with a 1.5V input. With 3V you have fewer stages and therefore fewer voltage drops wasting your energy. As a generic answer, you need to pick some values and do the math. So we have 3V or 1.5V, to 9V. Let's assume a 100mA load, though your OLED probably uses less. And let's assume 85% efficiency, a good average switching regulator efficiency. So first we need to know the output power usage. $$P = V \times I$$ $$9V \times 0.1A = 0.9W$$ Next, we can calculate the input power usage. $$P_{in} \times 0.85 = 0.9W$$ $$P_{in} = \frac{0.9W}{0.85}$$ $$P_{in} \approx 1.06W$$ So at 85% efficiency, you need 1.06W at the input. So we can figure out the current pull at the input. Reverse the power formula.
$$I = \frac{P}{V}$$ $$\frac{1.06W}{3V} \approx 0.353A $$ while $$\frac{1.06W}{1.5V} \approx 0.71A$$ With the typical AAA having 1500mAh, you have 3V 1500mAh, or 1.5V 3000mAh. You would have roughly 4¼ hours with either. BUT your charge pump might not be 85% efficient with both setups. And battery drain may not be the same in parallel. You need to figure out the efficiency by measuring the current draw at the battery in both setups, as well as the current draw of the regulator + load. Any multimeter with a current/ammeter setting will help. This is not including varying loads, as they are more difficult to figure out.
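As a sketch of the arithmetic above (assuming the same 85% efficiency for both configurations, which, as noted, should really be measured):

```python
def battery_runtime_hours(v_out, i_out, v_in, capacity_mah, efficiency=0.85):
    """Estimated runtime for a boost converter load: efficiency = P_out / P_in
    (assumed constant here; in practice it varies with input voltage)."""
    p_out = v_out * i_out          # load power, W
    p_in = p_out / efficiency      # power drawn from the batteries, W
    i_in = p_in / v_in             # average battery current, A
    return (capacity_mah / 1000) / i_in

# Series: 3 V at 1500 mAh.  Parallel: 1.5 V at 3000 mAh.
print(battery_runtime_hours(9, 0.1, 3.0, 1500))  # ~4.25 h
print(battery_runtime_hours(9, 0.1, 1.5, 3000))  # ~4.25 h
```

With a constant-efficiency assumption the two configurations come out identical; the real difference lies in how the converter's efficiency changes between 1.5V and 3V inputs.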
I am trying to reference a set of equations that are placed within alignat and subequations environments. I would like to reference them as Eq. 1 together, rather than Eqs. 1a and 1b separately. The MWE below does exactly this, yet leaves a small indentation in the paragraph just below the equations. Is there a way to circumvent this or use a different environment?

\documentclass{article}
\usepackage{mathtools}
\usepackage{lipsum}
\begin{document}
\lipsum[1]
\begin{subequations}
\begin{alignat}{1}
\dot{\mathbf{u}}_{i} &= \frac{\mathbf{u}_{i+1} - \mathbf{u}_{i-1}}{2\Delta t} + \mathcal{O}(\Delta t^2) \\
\ddot{\mathbf{u}}_{i} &= \frac{\mathbf{u}_{i+1} - 2\mathbf{u}_{i} + \mathbf{u}_{i-1}}{\Delta t^2} + \mathcal{O}(\Delta t^2)
\end{alignat}
\label{eq}
\end{subequations}
\lipsum[2]
I'm referencing the equation here: \ref{eq}
\end{document}
I am having trouble with the following problem:

Find the value(s) of [tex]\omega[/tex] for which [tex]y = \cos(\omega t)[/tex] satisfies [tex]\frac{d^2 y}{dt^2} + 9y = 0[/tex]

I am trying to use LaTeX but it doesn't seem to be working when I do "preview post", so I will rewrite what I am saying in plain text in case it doesn't render: find the value(s) of omega for which y = cos(omega*t) satisfies (d^2 y)/(dt^2) + 9y = 0.

I am not entirely sure what I am supposed to do here. My ideas have been:
1.) switch 9y over to the right side
2.) take the integral of both sides
3.) take the integral of both sides again to solve for y(t)
This approach, however, left me lost in the dark and I feel it is incorrect. I also tried substituting y = cos(omega*t) in for y, but I can't solve the resulting equation. Could someone give me some ideas about what I should do?
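The substitution idea is the right one: with y = cos(ωt), y'' = -ω²cos(ωt), so y'' + 9y = (9 - ω²)cos(ωt), which vanishes for all t only when ω² = 9. A quick numerical sanity check of this (assuming ω = 3, central differences):

```python
import math

# Numerical sanity check that y(t) = cos(omega * t) with omega = 3 solves
# y'' + 9y = 0, using a central-difference estimate of y''.
omega = 3.0
h = 1e-4

def y(t):
    return math.cos(omega * t)

def residual(t):
    y_dd = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # approximates y''(t)
    return y_dd + 9 * y(t)

print(max(abs(residual(t)) for t in (0.1, 0.5, 1.0, 2.0)))  # tiny, i.e. zero up to discretization error
```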
Spherical aberration compensation plates specify the total amount of spherical aberration imparted on a collimated beam of light covering its entire clear aperture. However, it is often necessary to know the amount of spherical aberration generated by the corrector plate Ultrafine Spherical Graphite Powder Air Classifier Mill. Working Principle LNJ Fluidised Jet Mill is a device as using multiple nozzles to form sonic speed air flow to perform superfine pulverizing. "What is a classifier and how is it different from a handshape?" Handshapes are one of the five fundamental building blocks or parameters of a sign Handshape, movement, location, orientation, and nonmanual markers. Air Classifier Mills, Graphite Crushing Machine, Graphite Powder Shredder manufacturer / supplier in China, offering Air Classifier Mills for Spherical Graphite Pilot Plant, 2016 New Brand CE Certificated Epoxy Resin Crusher, Dry Red Pepper Vortex Fine Grinding Mill with Ce Certificate and so on. Spherical Roller Bearings Two rows of rollers give these bearings load capacities over five times higher than comparably sized tapered roller bearings. By using this website, you agree to our Terms and Section 4 7 Triple Integrals in Spherical Coordinates. In the previous section we looked at doing integrals in terms of cylindrical coordinates and we now need to take a quick look at doing integrals in terms of spherical coordinates. First, we need to recall just how spherical coordinates are defined. The following sketch shows the Spherical Washers. Spherical washers, also called self aligning washers, have convex shaped radiuses, and are used to compensate when a bolt or screw does not have an exact perpendicular position with a surface. The two piece unit has male and female components that provide a swivel action to allow positioning. 
To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in Oct 15, 2015·The semisupervised spherical separation. As a consequence, two unlabeled points of result misclassified. Vice versa, in Fig. 2 the classifier, represented again by the continuous line circle, is constructed by taking into account the minimization of the number of unlabeled points located in the margin zone (the area between the two dotted spheres), Spherical is a boutique digital marketing agency for lifestyle brands in hospitality and travel. We tell compelling stories for our clients through a creative, insights driven approach to all things digitalfrom web design and development to content strategy and production to social media marketing and community management. Choose from our selection of spherical ball joints, including ball joint rod ends, swivel joints, and more. In stock and ready to ship. Air Classifiers. Sturtevant air classifiers balance the physical principles of centrifugal force, drag force and gravity to generate a high precision method of classifying particles according to size or density. All three Sturtevant air classifiers offer durable construction, as well as time and energy saving advantages. shuttle crawler, Timken® spherical plain bearings have been a staple in many industrial applications for nearly 75 years. Timken offers a full spherical plain portfolio and supports it with industry leading technical support, allowing customers to help maximize the performance of their equipment. Spherical aberration is an optical problem. it occurs when all incoming light rays focus at different points. This is after they pass through a spherical surface within the lens. 
Light rays that pass through a lens near its horizontal axis refract less compared to rays that pass closer to the edge. Spherical Trigonometry Rob Johnson West Hills Institute of Mathematics 1 Introduction The sides of a spherical triangle are arcs of great circles. A great circle is the intersection of a sphere with a central plane, a plane through the center of that sphere. The angles of a spherical triangle are measured Recent Examples on the Web. This ultra modern tower blends ancient concepts (like spherical pearls) with 21st century technology, once again combining the old and the new. Adrienne Faurote, Marie Claire, "The Instagram Guide to Shanghai, China," 5 Apr. 2019 Dead center, in the lower third of the oval, anchoring the whole scene, is a robust, spherical, tawny object. Recent Examples on the Web. This ultra modern tower blends ancient concepts (like spherical pearls) with 21st century technology, once again combining the old and the new. Adrienne Faurote, Marie Claire, "The Instagram Guide to Shanghai, China," 5 Apr. 2019 Dead center, in the lower third of the oval, anchoring the whole scene, is a robust, spherical The spherical coordinates calculator is a tool that converts between rectangular and spherical coordinate systems. It describes the position of a point in a three dimensional space, similarly as our cylindrical coordinates calculator.For a two dimensional space, instead of using this Cartesian to spherical Oct 29, 2018·This approach is computationally more efficient than using spherical harmonics to perform convolutions. We demonstrate the method on a classification problem of weak lensing mass maps from two cosmological models and compare the performance of the CNN with that of two baseline classifiers. Section 4 7 Triple Integrals in Spherical Coordinates. In the previous section we looked at doing integrals in terms of cylindrical coordinates and we now need to take a quick look at doing integrals in terms of spherical coordinates. 
Research Open Access Published: Stabilization of a laminated beam with interfacial slip by boundary controls Boundary Value Problems volume 2015, Article number: 169 (2015) Article metrics 925 Accesses 6 Citations Abstract We consider two identical beams on top of each other with an adhesive in between. A considerable natural slip occurs in the structure and will not be ignored as was done in previous investigations. In this work we take this slip into account and prove that we can stabilize the system in an exponential manner using boundary controls. The model consists of three coupled equations. The first two are related to the well-known Timoshenko system, and the third one describes the dynamics of the slip. Our result improves the few existing similar works in the literature. Introduction Of concern is a structure of two identical beams of uniform thickness stuck together by an adhesive. They are placed on top of each other. The structure is subject to a longitudinal displacement in addition to transversal and rotational displacements. These vibrations are undesirable, and it is our goal to stabilize the system quickly. Such structures have gained much popularity and are known under the name of ‘laminated’ beams. They are of considerable importance in engineering. The beams are attached together in such a way that a ‘slip’ is permitted while they remain continuously in contact with each other. In certain situations the slip is purposely allowed with the objective of obtaining some damping. This damping should be able to restore the system to its equilibrium state. Fastening the layers very tightly can abnormally affect the performance of the structure. where \(x\in(0,1)\) and \(t>0\), with initial data and the cantilever boundary conditions. 
Here w, ψ, ρ, G, \(I_{\rho}\), D, γ, β denote the transverse displacement, rotation angle, density, shear stiffness, mass moment of inertia, flexural rigidity, adhesive stiffness and adhesive damping parameter, respectively, and s is proportional to the amount of slip along the interface. It is rather the third equation that is in the spotlight: when \(s\equiv0\) we recover the standard Timoshenko system. As is well known by now, the Timoshenko system has been studied by many authors, and many results may be found in the literature. It has been stabilized by means of different controls such as internal and/or boundary frictional and/or viscoelastic damping, dynamic boundary conditions, pointwise damping, distributed damping, heat damping, etc. The large number of relevant citations cannot fit in this short paper. We refer the reader, however, to [3–12] for boundary controls similar to the ones used here, and also to the interesting papers [13–15]. In [16] an exponential decay result was proved for this problem with one end fixed (\(w(0,t)=\psi(0,t)=s(0,t)=0\), \(t>0\)) and \(\psi (1,t)-w_{x}(1,t)=u_{1}(t)\), \(s_{x}(1,t)=0\), \((3s_{x}-\psi _{x})(1,t)=u_{2}(t)\), \(t>0\) at the other end. They adopted the boundary control and assumed that \(r_{1}:=\frac{G}{\rho}\neq\frac{D}{I_{\rho }}=:r_{2}\) and \(k_{i}\neq r_{i}\), \(i=1,2\). Moreover, they showed that the damping present in the third equation alone is not able to stabilize the structure in an exponential manner. Under different boundary controls, namely the authors in [17] proved an exponential stabilization of the system provided that the ‘dominant’ part of the closed-loop system is itself exponentially stable. Since then, it seems that the subject has remained dormant. We would like to bring this matter back to life and treat, in a more adequate fashion, structures for which the amount of slip is considerable. We discuss here the same model and boundary conditions as in [16]. 
We prove an exponential stabilization result without assuming the conditions \(r_{1}:=\frac{G}{\rho}\neq\frac{D}{I_{\rho}}=:r_{2}\) and \(k_{i}\neq r_{i}\), \(i=1,2\). These conditions are dropped, and instead we will assume that \(\rho G< I_{\rho}\). Our approach is different from the one used in [16]. Namely, we will consider the system for \(x\in(0,1)\), \(t>0\), with the boundary conditions for \(t\geq0\). The well-posedness follows easily from a slight modification of the arguments in [16, 17] (see also references in [1]). We have weak solutions in \(( V_{\ast}^{1}\times L^{2} ) ^{3}\) and strong solutions in \(( V_{\ast}^{2}\times H^{1} ) ^{3}\), where We shall focus here on the asymptotic behavior of solutions and in particular on the exponential stabilization of the system. In the next section we prove that the energy is decreasing, define the different functionals we will utilize later and prepare some useful lemmas. Section 2 is devoted to the statement and proof of our main result. The last section is a short one containing our conclusion. Some useful preliminary results where \(\Vert \cdot \Vert \) denotes the norm in \(L^{2}(0,1)\). Proposition 1 The energy \(E(t)\) is decreasing and in fact we have Proof This follows directly by multiplying the first equation in (1) by \(w_{t}\) (the second by \(( 3s-\psi ) _{t}\) and the third by \(s_{t}\)) and integrating over \((0,1)\). Integration by parts and the boundary conditions will also be used. Indeed, we obtain or and by our boundary conditions The observation leads to Working with the third equation of (1) we arrive at or The second equation in (1) allows us to write, for \(t\geq0\), or Although from this proposition we see that the energy is uniformly bounded and decreasing, it is not clear how to prove exponential decay from this functional at this stage. We need to establish a new functional \(F(t)\) which is suitable enough to derive an exponential decay. 
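Once such a functional \(F\), equivalent to \(E\) and satisfying \(F'(t)\leq-\kappa F(t)\), is found, the exponential decay of \(E\) follows by a Gronwall-type computation. This step is standard and not specific to this paper; it is sketched here for orientation:

```latex
% If c_1 E(t) \le F(t) \le c_2 E(t) for some constants c_1, c_2 > 0 (equivalence)
% and F'(t) \le -\kappa F(t), then
\frac{\mathrm{d}}{\mathrm{d}t}\bigl(e^{\kappa t}F(t)\bigr)
  = e^{\kappa t}\bigl(F'(t)+\kappa F(t)\bigr) \le 0
  \quad\Longrightarrow\quad F(t) \le F(0)\,e^{-\kappa t},
% and hence, by the equivalence of F and E,
E(t) \le \frac{1}{c_1}F(t) \le \frac{c_2}{c_1}\,E(0)\,e^{-\kappa t},
% which is the estimate of Theorem 1 with K = c_2/c_1 and \kappa_0 = \kappa.
```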
The strategy (which is by now standard) consists in starting with the energy \(E(t)\) and modifying it by adding new adequate terms (functionals) which may be estimated below and above by similar terms already existing in the expression of the energy (leading to the equivalence of both functionals) and whose derivatives provide us with the missing terms in the energy after differentiation. That is, the goal is to obtain an inequality of the form \(F^{\prime}(t)\leq -\kappa F(t)\) for some positive constant κ. We claim that the functional with where for \(\delta_{i}>0\), \(i=1,\ldots,5\), to be determined, is an appropriate one. It is easy to see that \(F(t)\) and \(E(t)\) are equivalent. Simple use of the Cauchy-Schwarz inequality and the Poincaré inequality will do. Lemma 1 Proof As simple differentiation of \(L_{1}(t)\) and use of the first and third equations in system (1) yield Observe that Therefore □ Lemma 2 Proof Using the second equation of (1), we find The estimate implies that □ Lemma 3 Proof In view of the first and third equations in (1) and the definition of \(L_{3}(t)\), we see that Therefore, from our boundary conditions Next, the estimations and imply that □ Lemma 4 Proof Clearly and therefore or simply □ Lemma 5 Proof It is easy to see, from the first equation in (1) and the boundary conditions, that The last two terms in (9) may be estimated as follows: so and Taking into account the previous relations, we find □ Main result Using the previous lemmas we obtain the following result. Theorem 1 For the energy E defined above, there exist two positive constants K and \(\kappa _{0}\) such that provided that \(\rho< I_{\rho}/G\). Proof or, using we find We shall forget for a moment about the first three terms on the right-hand side of (10) and focus on the rest of the coefficients. 
We need Next, ignoring \(\varepsilon_{0}\) as we will take it small enough later, we obtain Observe that there is only a smallness condition on \(\delta_{5}\), so we postpone its selection. There remains In turn we see that there is only a smallness condition on \(\delta_{2}\), therefore we need These last relations (14) hold if \(\rho< I_{\rho}/G\) by taking, for instance, \(\delta_{4}= ( \rho+\frac{I_{\rho}}{G} ) \frac {\delta _{3}}{2I_{\rho}}\). Now we go backward and select \(\delta_{2}\) small enough (in terms of \(\delta_{3}\)) so that the relations in (13) are satisfied. Next we pick \(\delta_{5}\) and then \(\delta_{1}\) so that the remaining conditions (12) are fulfilled. At this stage we may choose \(\varepsilon_{0}\) (small enough to satisfy (11)). Finally, we select \(\delta_{3}\) so small that the first three coefficients in (10) are negative. The inequality \(F^{\prime}(t)\leq-\kappa F(t)\), \(t\geq0\), implies the exponential decay of \(F(t)\). This property is shared by \(E(t)\) through the equivalence. □ Conclusion Our main goal here was the handling of the interfacial slip and the stabilization of the system. This has been established in a different way from the one used in the literature and under much weaker conditions. Indeed, the conditions \(r_{1}:=\frac{G}{\rho}\neq\frac{D}{I_{\rho}}=:r_{2}\) and \(k_{i}\neq r_{i}\), \(i=1,2\), are dropped and replaced by \(\rho G< I_{\rho}\). References 1. Hansen, SW, Spies, R: Structural damping in a laminated beam due to interfacial slip. J. Sound Vib. 204, 183-202 (1997) 2. Beards, CF, Imam, IMA: The damping of plate vibration by interfacial slip between layers. Int. J. Mach. Tool Des. Res. 18, 131-137 (1978) 3. Feng, D, Shi, D, Zhang, W: Boundary feedback stabilization of Timoshenko beam with boundary dissipation. Sci. China Ser. A 41(5), 483-490 (1998) 4. Kim, JU, Renardy, Y: Boundary control of the Timoshenko beam. SIAM J. Control Optim. 25, 1417-1429 (1987) 5. 
Morgul, O: Boundary control of a Timoshenko beam attached to a rigid body: planar motion. Int. J. Control 54, 761-763 (1991) 6. Xu, GQ: Feedback exponential stabilization of a Timoshenko beam with both ends free. Int. J. Control 72(4), 286-297 (2005) 7. Xu, GQ, Feng, DX: The Riesz basis property of a Timoshenko beam with boundary feedback and application. IMA J. Appl. Math. 67, 357-370 (2002) 8. Xu, GQ, Feng, DX, Yung, SP: Riesz basis property of the generalized eigenvector system of a Timoshenko beam. IMA J. Math. Control Inf. 21(1), 65-83 (2004) 9. Xu, GQ, Yung, SP: Exponential decay rate for a Timoshenko beam with boundary damping. J. Optim. Theory Appl. 123(3), 669-693 (2004) 10. Yan, Q, Feng, D: Boundary stabilization of nonuniform Timoshenko beam with a tipload. Chin. Ann. Math. 22(4), 485-494 (2001) 11. Yan, QX, Hou, SH, Feng, DX: Asymptotic behavior of Timoshenko beam with dissipative boundary feedback. J. Math. Anal. Appl. 269, 556-577 (2002) 12. Zhang, CG: Boundary feedback stabilization of the undamped Timoshenko beam with both ends free. J. Math. Anal. Appl. 326, 488-499 (2007) 13. Wang, JM, Guo, BZ: Analyticity and dynamic behavior of a damped three-layer sandwich beam. J. Optim. Theory Appl. 137(3), 675-689 (2008) 14. Wang, JM, Liu, J, Ren, B, Chen, J: Sliding mode control to stabilization of cascaded heat PDE-ODE systems subject to boundary control matched disturbance. Automatica 52, 23-34 (2015) 15. Wang, JM, Krstic, M: Stability of an interconnected system of Euler-Bernoulli beam and heat equation with boundary coupling. ESAIM Control Optim. Calc. Var. 21(4), 1029-1052 (2015) 16. Wang, JM, Xu, GQ, Yung, SP: Exponential stabilization of laminated beams with structural damping and boundary feedback controls. SIAM J. Control Optim. 44(5), 1575-1597 (2005) 17. Cao, XG, Liu, DY, Xu, GQ: Easy test for stability of laminated beams with structural damping and boundary feedback controls. J. Dyn. Control Syst. 
13(3), 313-336 (2007) Acknowledgements The author would like to acknowledge the support provided by King Abdulaziz City for Science and Technology (KACST) through the Science and Technology Unit at King Fahd University of Petroleum and Minerals (KFUPM) for funding this work through project No. AC-32-49. Additional information Competing interests The author declares that they have no competing interests. About this article Received Accepted Published DOI MSC 34B05 34D05 34H05 Keywords exponential stabilization vibration reduction Timoshenko system slip dynamic boundary control multiplier technique
Research Open Access Published: The Dirichlet problem for the time-fractional advection-diffusion equation in a line segment Boundary Value Problems volume 2016, Article number: 89 (2016) Article metrics 1244 Accesses 6 Citations Abstract The one-dimensional time-fractional advection-diffusion equation with the Caputo time derivative is considered in a line segment. The fundamental solution to the Dirichlet problem and the solution of the problem with a constant boundary condition are obtained using the integral transform technique. The numerical results are illustrated graphically. Introduction The constitutive equation for the matter flux (see, for example, [1]) where a denotes the diffusivity coefficient, v is the velocity vector, in combination with the balance equation for mass results in the standard advection-diffusion equation (under the assumption \(\mathbf{v}= \mbox{const}\)): Equation (2) can be interpreted in terms of diffusion or heat conduction with an additional velocity field, as well as in terms of transport processes in porous media, Brownian motion or groundwater hydrology [1–6]. In the case of one spatial coordinate x, the advection-diffusion equation (2) takes the form In the last few decades, an increasing interest has been observed in the study of equations with derivatives of fractional order, which have many applications in physics, geophysics, geology, rheology, engineering and bioengineering [7–14]. The time-nonlocal generalizations of the constitutive equation (1) were analyzed in [15] (compare this analysis with that of the generalized Fourier or Fick law carried out in [14, 16–19]). In the case of the ‘long-tail’ power kernel, the generalized constitutive equation for the mass flux has the following form [15]: Here \(\Gamma(\alpha)\) is the gamma function. 
In combination with the balance equation for mass, the constitutive equation (4) leads to the time-fractional advection-diffusion equation A comprehensive survey of research on the fractional advection-diffusion equation, as well as of the numerical methods used to solve it, can be found in [23]. In the literature there are only a few papers in which analytical solutions of the fractional advection-diffusion equation were considered [23–26]. In the present paper, we investigate the Dirichlet problem for equation (6) in a line segment \(0< x< L\). Two types of boundary conditions are considered: the Dirac delta boundary condition for the fundamental solution and the constant boundary condition for the sought-for function. The fundamental solution to the Dirichlet problem We study the time-fractional advection-diffusion equation in a line segment \(0< x< L\), As usual, \(a>0\), \(v>0\), \(0< t< \infty\). The advection-diffusion equation (8) is considered under the zero initial condition and the Dirichlet boundary conditions at the ends of the segment: where \(\delta(t)\) is the Dirac delta function. The constant multiplier \(g_{0}\) has been introduced to obtain the nondimensional quantity displayed in the figures. The new sought-for function reduces the considered initial-boundary-value problem to the following one: To solve the Dirichlet problem under examination, we use the finite sin-Fourier transform with respect to the spatial coordinate x. Such a transform is a convenient reformulation of the sin-Fourier series in the domain \(0\leq x \leq L\): with The finite sin-Fourier transform of the second order derivative of a function is calculated according to the relation Next, we use the Laplace transform with respect to the time t. 
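For reference, the Laplace-transform rule for the Caputo derivative used in the next step is the standard formula (see [20–22]):

```latex
% Laplace transform of the Caputo derivative of order \alpha, with n-1 < \alpha \le n:
\mathcal{L}\bigl\{{}^{C}\!D_t^{\alpha} f\bigr\}(s)
  = s^{\alpha}\,\mathcal{L}\{f\}(s)
  - \sum_{k=0}^{n-1} s^{\alpha-1-k}\, f^{(k)}\bigl(0^{+}\bigr);
% in particular, for 0 < \alpha \le 1 this reduces to
% s^{\alpha}\mathcal{L}\{f\}(s) - s^{\alpha-1} f(0^{+}).
```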
Recall that the Laplace transform of the Caputo fractional derivative requires knowledge of the initial values of the function and of its integer-order derivatives of order \(k=1,2, \dots, n-1\) [20–22]: where s is the transform variable. Inversion of the integral transforms results in the solution, where \(E_{\alpha, \alpha}\) is the Mittag-Leffler function in two parameters α and β: Returning to the quantity \(c(x,t)\) according to (12), we finally get the fundamental solution to the Dirichlet problem: Using the nondimensional quantities we obtain Constant boundary value of a function As above, the new function u is introduced according to (12), and the Laplace transform with respect to time t and the finite sin-Fourier transform with respect to the spatial coordinate x give the solution in the transform domain: Taking into account that we obtain or, after inversion of the integral transforms, In this case \(E_{\alpha}(z)\) is the Mittag-Leffler function in one parameter α: Taking into account the following series [27]: and returning to the quantity \(c(x,t)\) according to (12), we get and in the nondimensional form where and the other nondimensional quantities are the same as in (29). Conclusions We have considered the time-fractional advection-diffusion equation with the Caputo fractional derivative in a domain \(0< x< L\). The Laplace transform with respect to time t and the finite sin-Fourier transform with respect to the spatial coordinate x have been used. The fundamental solution to the Dirichlet problem and the solution to the problem with a constant boundary condition for the sought-for function have been obtained. The results of numerical calculations are displayed in the figures for different values of the nondimensional spatial variable x̄, the drift parameter v̄, the time parameter κ, and the order of the time-fractional derivative α. 
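For moderate arguments, the Mittag-Leffler functions appearing in the solutions can be evaluated directly from their defining power series. The sketch below is plain Python and is not the algorithm of [28]; it is only adequate for small to moderate \(|z|\):

```python
import math

def mittag_leffler(alpha, z, beta=1.0, terms=50):
    """Truncated power series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).

    Adequate only for moderate |z|; for large arguments use the robust
    algorithms of Gorenflo, Loutchko and Luchko [28].
    """
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
#   E_{1,1}(z) = e^z   and   E_{2,1}(-z^2) = cos(z)
print(abs(mittag_leffler(1.0, 1.0) - math.e))          # tiny
print(abs(mittag_leffler(2.0, -1.0) - math.cos(1.0)))  # tiny
```

The two-parameter form with `beta=alpha` gives the \(E_{\alpha,\alpha}\) appearing in the fundamental solution above.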
To evaluate the Mittag-Leffler functions \(E_{\alpha, \alpha}(-x)\) and \(E_{\alpha}(-x)\) we have used the algorithms suggested in [28] (the interested reader is also referred to the Matlab programs that implement these algorithms [29]). It should be emphasized that the first term in curly brackets in the solution (39) satisfies the boundary condition (31) and (32), whereas the second one equals zero at the ends of a line segment \(0< x< L\) due to (19). References 1. Kaviany, M: Principles of Heat Transfer in Porous Media, 2nd edn. Springer, New York (1995) 2. Feller, W: An Introduction to Probability Theory and Its Applications, 2nd edn. Wiley, New York (1971) 3. Scheidegger, AE: The Physics of Flow Through Porous Media, 3rd edn. University of Toronto Press, Toronto (1974) 4. Van Kampen, NG: Stochastic Processes in Physics and Chemistry, 3rd edn. Elsevier, Amsterdam (2007) 5. Risken, H: The Fokker-Planck Equation. Springer, Berlin (1989) 6. Nield, DA, Bejan, A: Convection in Porous Media, 3rd edn. Springer, New York (2006) 7. Metzler, R, Klafter, J: The random walk’s guide to anomalous diffusion: a fractional dynamics approach. Phys. Rep. 339, 1-77 (2000) 8. Rossikhin, Y, Shitikova, MV: Applications of fractional calculus to dynamic problems of linear and nonlinear hereditary mechanics of solids. Appl. Mech. Rev. 50, 15-67 (1997) 9. West, BJ, Bologna, M, Grigolini, P: Physics of Fractal Operators. Springer, New York (2003) 10. Magin, RL: Fractional Calculus in Bioengineering. Begell House Publishers, Inc., Redding (2006) 11. Gafiychuk, V, Datsko, B: Mathematical modeling of different types of instabilities in time fractional reaction-diffusion systems. Comput. Math. Appl. 59, 1101-1107 (2010) 12. Mainardi, F: Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models. Imperial College Press, London (2010) 13. Uchaikin, VV: Fractional Derivatives for Physicists and Engineers, Background and Theory. Springer, Berlin (2013) 14. 
Povstenko, Y: Fractional Thermoelasticity. Springer, New York (2015) 15. Povstenko, Y: Theory of diffusive stresses based on the fractional advection-diffusion equation. In: Abi Zeid Daou, R, Moreau, X (eds.) Fractional Calculus: Applications, pp. 227-241. Nova Science Publishers, New York (2015) 16. Povstenko, Y: Fractional heat conduction equation and associated thermal stress. J. Therm. Stresses 28, 83-102 (2005) 17. Povstenko, Y: Thermoelasticity which uses fractional heat conduction equation. J. Math. Sci. 162, 296-305 (2009) 18. Povstenko, Y: Theory of thermoelasticity based on the space-time-fractional heat conduction equation. Phys. Scr. T 136, 014017 (2009) 19. Povstenko, Y: Non-axisymmetric solutions to time-fractional diffusion-wave equation in an infinite cylinder. Fract. Calc. Appl. Anal. 14, 418-435 (2011) 20. Gorenflo, R, Mainardi, F: Fractional calculus: integral and differential equations of fractional order. In: Carpinteri, A, Mainardi, F (eds.) Fractals and Fractional Calculus in Continuum Mechanics, pp. 223-276. Springer, Wien (1997) 21. Podlubny, I: Fractional Differential Equations. Academic Press, New York (1999) 22. Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006) 23. Povstenko, Y: Fundamental solutions to time-fractional advection diffusion equation in a case of two space variables. Math. Probl. Eng. 2014, 705364 (2014) 24. Liu, F, Anh, V, Turner, I, Zhuang, P: Time-fractional advection-dispersion equation. J. Appl. Math. Comput. 13, 233-245 (2003) 25. Huang, F, Liu, F: The time fractional diffusion equation and the advection-dispersion equation. ANZIAM J. 46, 317-330 (2005) 26. Huang, F, Liu, F: The fundamental solution of the space-time fractional advection-dispersion equation. J. Appl. Math. Comput. 18, 339-350 (2005) 27. Prudnikov, AP, Brychkov, YA, Marichev, OI: Integrals and Series. Vol. 1: Elementary Functions. Gordon & Breach, Amsterdam (1986) 28. 
Gorenflo, R, Loutchko, J, Luchko, Y: Computation of the Mittag-Leffler function and its derivatives. Fract. Calc. Appl. Anal. 5, 491-518 (2002) 29. Matlab File Exchange 2005, Matlab-Code that calculates the Mittag-Leffler function with desired accuracy. Available for download at www.mathworks.com/matlabcentral/fileexchange/8738-Mittag-Leffler-function Additional information Competing interests The authors declare that they have no competing interests. Authors’ contributions Each of the authors contributed to each part of this work equally and read and approved the final version of the manuscript. About this article Received Accepted Published DOI MSC 26A33 45K05 Keywords fractional calculus Caputo fractional derivative non-Fickian diffusion fractional advection-diffusion equation Mittag-Leffler function
Linear $x$ and $y$ Motion Control: From the mathematical model one can see that the motion along the $x$ and $y$ axes depends on $U_{1}$. In fact $U_{1}$ is the total thrust vector, oriented to obtain the desired linear motion. If we consider $U_{x}$ and $U_{y}$ the orientations of $U_{1}$ responsible for the motion along the x and y axes respectively, we can then extract from formula (18) the roll and pitch angles necessary to compute the controls $U_{x}$ and $U_{y}$ ensuring the Lyapunov function to be negative semi-definite (see Fig. 2). The paper is very clear except in the linear motion control. They didn't explicitly state the equations for extracting the angles. The confusing part is when they say we can then extract from formula (18) the roll and pitch angles necessary to compute the controls $U_{x}$ and $U_{y}$, where formula (18) is $$ U_{x} = \frac{m}{U_{1}} (\cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi) \\ U_{y} = \frac{m}{U_{1}} (\cos\phi \sin\theta \sin\psi - \cos\phi \cos\psi) \\ $$ It seems to me that the roll and pitch angles depend on $U_{x}$ and $U_{y}$; therefore we compute the roll and pitch angles based on $U_{x}$ and $U_{y}$ to control the linear motion.
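For what it's worth, the inversion is usually done as below. This is a sketch under the common sign convention $U_x=\cos\phi\sin\theta\cos\psi+\sin\phi\sin\psi$, $U_y=\cos\phi\sin\theta\sin\psi-\sin\phi\cos\psi$ (i.e. treating the $\cos\phi\cos\psi$ in the quoted formula as a typo for $\sin\phi\cos\psi$); the function name is mine, not the paper's:

```python
import math

def desired_angles(Ux, Uy, psi):
    """Invert the Ux/Uy relations for the desired roll (phi) and pitch (theta).

    Assumes the arcsine arguments stay in [-1, 1], i.e. near-hover operation.
    """
    # Ux*sin(psi) - Uy*cos(psi) collapses to sin(phi) under the convention above.
    phi = math.asin(Ux * math.sin(psi) - Uy * math.cos(psi))
    # Ux*cos(psi) + Uy*sin(psi) collapses to cos(phi)*sin(theta).
    theta = math.asin((Ux * math.cos(psi) + Uy * math.sin(psi)) / math.cos(phi))
    return phi, theta
```

A quick round-trip check (pick $\phi,\theta,\psi$, form $U_x,U_y$ from the forward formulas, then invert) confirms the algebra.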
I'm stuck at this exercise: Let $G$ be a group with $|G|=pqr$, where $p,q,r$ are different primes, $q<r$, $r \not\equiv 1$ (mod $q$), $qr<p$. Also suppose that $p \not\equiv 1$ (mod $r$), $p \not\equiv 1$ (mod $q$). Let $C$ (the commutator subgroup of $G$) and $K$ be subgroups of $G$, with $C \leq K$, $K \trianglelefteq G$ and $|K|=q$. $K$ is the unique Sylow $q$-subgroup of $G$ (so $K \trianglelefteq G$). Let $G/K$ be an abelian group. Prove that $C=\{e\}$. I tried using Lagrange's theorem, knowing that $C\leq K$, and then $|C|\in\{1, q\}$. But I don't know how to eliminate the option $|C|=q$. This is a little part of a longer exercise. The definition of $C$ is $C=\langle[a,b]=aba^{-1}b^{-1} \mid a,b\in G \rangle$; $G/C$ is abelian too. Thank you.
The following is from page 31 of Stein and Shakarchi's Real Analysis. My question is about an aspect of the proof of the following theorem. Theorem 4.1Suppose $f$ is a non-negative measurable function on $\mathbb R^d$. Then there exists an increasing sequence of non-negative simple functions $\{\varphi_k\}_{k=1}^\infty$ that converges pointwise to $f$, namely, $$ \varphi_k(x) \le \varphi_{k+1}(x)\quad\text{and}\quad\lim_{k\to\infty}\varphi_k(x)=f(x),\ \text{for all $x$.} $$ Proof.We begin first with a truncation. For $k\ge 1$, let $Q_k$ denote the cube centered at the origin and of side length $k$. Then we define $$ F_k(x) = \begin{cases} f(x) & \text{if $x\in Q_k$ and $f(x)\le k$,} \\ k & \text{if $x\in Q_k$ and $f(x)> k$,}\\ 0 & \text{otherwise.} \end{cases} $$ Then $F_k(x)\to f(x)$ as $k$ tends to infinity for all $x$. Now, we partition the range of $F_k$, namely $[0,k]$ as follows. For fixed $k,j\ge 1$, we define $$ E_{\ell,j}=\left\{x\in Q_k:\frac{\ell}{j}<F_k(x)\le\frac{\ell+1}{j}\right\},\quad\text{for}\ 0\le\ell<kj. $$ Then we may form $$ F_{k,j}(x) = \sum_{\ell=0}^{kj-1}\frac{\ell}{j}{\large{\chi_{E_{\ell,j}}}}(x) $$ [where $\large{\chi_{E_{\ell,j}}}$ is the indicator function of $E_{\ell,j}$]. Each $F_{k,j}$ is a simple function that satisfies $0\le F_k(x)-F_{k,j}(x)\le 1/j$ for all $x$. If we now choose $j=k$, and let $\varphi_k = F_{k,k}$, then we see that $0\le F_k(x)-\varphi_k(x)\le 1/k$ for all $x$, $\color{red}{\underline{\color{black}{\text{and $\{\varphi_k\}$ satisfies all the desired properties.}}}}$ I do not see why $\varphi_k(x)\le\varphi_{k+1}(x)$ for all $x$. Can someone explain that?
Let $X_1, X_2, \cdots$ be i.i.d. random variables with $E(X_1) = 0, E(X_1^2) = \sigma^2 >0, E(|X_1|^3) = \rho < \infty$. Let $Y_n = \frac{1}{n} \sum_{i=1}^n X_i$ and let us denote by $F_n$ (resp. $\Phi$) the cumulative distribution function of $\frac{Y_n \sqrt{n}}{\sigma}$ (resp. of the standard normal distribution). Then the Berry–Esseen theorem states that there exists a positive constant $C$ such that for all $x$ and $n$,$$|F_n(x)-\Phi(x)| \leq \frac{C \rho}{\sigma^3 \sqrt{n}}.$$ Are there known conditions on the distribution of $X_1$ that allow one to derive a similar statement for probability density functions instead of cumulative distribution functions?
There are many possible choices regarding the overall scaling coefficients as well as the scaling coefficient converting time and frequency. It is possible to summarize these conventions succinctly using two numbers $a$ and $b$. I use the same notation as used in the Mathematica Fourier Transform function. We define the Fourier Transform: $$\mathcal{FT}_{a,b}[f(t)](\omega) = \sqrt{\frac {|b|}{(2\pi)^{1-a}}}\int_{-\infty}^{+\infty} e^{+i b \omega t} f(t) dt$$ And the inverse Fourier Transform $$\mathcal{FT}_{a,b}^{-1}[\tilde{f}(\omega)](t) = \sqrt{\frac{|b|}{(2\pi)^{1+a}}}\int_{-\infty}^{+\infty} e^{-i b \omega t} \tilde{f}(\omega) d\omega$$ Let$$\tilde{f}_{a,b}(\omega) = \mathcal{FT}_{a,b}[f(t)](\omega)$$$$\check{f}_{a,b}(t) = \mathcal{FT}_{a,b}^{-1}[\tilde{f}_{a,b}(\omega)](t)$$ It can be shown via the Fourier inversion theorem that, for the classes of functions we care about in physics, $\check{f}_{a,b}(t) = f(t)$ for any $a$ and $b$. That is, for these definitions of the Fourier Transform and inverse Fourier Transform, the two operations are inverses of each other. It turns out that in the engineering and scientific literature there are many conventions that people choose, depending mostly on what they are used to. The first convention in the OP is $(a,b) = (1,-1)$, which is commonly used in physics, about as commonly as $(a,b) = (1,+1)$, which is the second convention you have shown. In addition you will also see conventions where $(a,b) = (0,\pm1)$, in which the factor of $2\pi$ is split evenly between the transform and inverse transform, showing up with a square root. Furthermore, usually in math or signal processing you will come across the $(a,b) = (0,\pm 2\pi)$ convention, in which there is NO prefactor of $2\pi$ on either the transform or the inverse transform, but now instead of angular frequency, $\omega$ represents a cyclic frequency and a $2\pi$ appears in all of the exponentials. 
All of these different conventions have advantages and disadvantages which may make one choice of convention more attractive than another depending on the application. The main point is that in any problem, whichever convention is chosen should be kept the same throughout the whole problem. To get back to the OP's main question now: in the language set up in this answer, the OP is basically asking whether it matters if $b=+1$ or $b=-1$. The short answer is that it does not matter. Either way works and converts the original signal as a function of time into a function of frequency. The difference has to do with how we interpret positive and negative frequencies. Consider$$f^1(t) = e^{+i\omega_0 t}$$$$f^2(t) = e^{-i \omega_0 t}$$ The phasor for the first function rotates counterclockwise in phase space whereas the second rotates clockwise in phase space. If we choose the $b=-1$ convention then $\tilde{f}^1_{1,-1}(\omega)$ will have a nonzero contribution at $+\omega_0$ whereas $\tilde{f}^2_{1,-1}(\omega)$ will have a nonzero contribution at $-\omega_0$. We might say $f^1$ is a positive frequency signal while $f^2$ is negative. However, if we choose $b=+1$ then everything reverses. $\tilde{f}^1_{1,+1}(\omega)$ will have a nonzero contribution at $-\omega_0$ while $\tilde{f}^2_{1,+1}(\omega)$ will have a contribution at $+\omega_0$. Now $f^1$ is a negative frequency signal and $f^2$ is a positive frequency signal! Thus we see that both $b=+1$ and $b=-1$ give answers that we can interpret as frequencies, with the only difference between the two being what we call positive and negative frequencies. As a note, I personally prefer $(a,b)=(1,+1)$ because it makes the formula for the Fourier transform (which I use more often than the inverse transform) as simple as possible: no prefactor and no minus sign in the exponent. 
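This sign flip is easy to check numerically. The sketch below approximates the transform by a Riemann sum on a Gaussian-windowed phasor; the overall $\sqrt{|b|/(2\pi)^{1-a}}$ prefactor is dropped since only the peak locations matter:

```python
import cmath
import math

def ft(f, omega, b, T=40.0, n=8000):
    # Midpoint Riemann sum for the integral of e^{+i b omega t} f(t) over [-T, T]
    # (overall normalization prefactor omitted on purpose).
    dt = 2.0 * T / n
    return sum(cmath.exp(1j * b * omega * (-T + (k + 0.5) * dt))
               * f(-T + (k + 0.5) * dt) for k in range(n)) * dt

w0 = 3.0
# Gaussian-windowed counterclockwise phasor, f^1(t) = e^{+i w0 t} * window(t)
f1 = lambda t: cmath.exp(1j * w0 * t) * math.exp(-t * t / 50.0)

# With b = -1 the spectral weight sits at +w0; with b = +1 it moves to -w0.
```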
edit: As you have pointed out sometimes these signs can have a substantial effect on some physical quantity such as reversing the sign (inverting the phase) of the complex impedance of a capacitor. Unfortunately this is something we just have to deal with and try to be consistent with our own conventions and those used by the references we consult. Of course you will find both conventions give the same answer for a real measurable quantity such as $V(t)$ across the resistor.
Statistics: Data Distribution

Mean, Median, Mode, Range

Mean, median, mode, and range are values that describe data. Mean and median are measures of central tendency and provide a number to represent the central position of data in a set. Mode is also a measure of central tendency; it can be a numerical or nonnumerical value and represents the data item that occurs most often. The range describes data by identifying the difference between the highest and lowest data values. For a given data set, it's important to know which measure of central tendency best describes the data. The presence of outliers may skew the mean higher or lower, making the median or mode the better measure. This data set represents a student's test scores: 59, 88, 92, 94, 95, 96, 100. To measure the central tendency of this data set, the median of 94 is the best measure. Because of the outlier, 59, the mean is skewed lower, and there is no mode. mean: ≈ 89 median: 94 mode: none

Quartiles and Box and Whisker Plot

Quartiles divide numerical data into four groups. Values are identified as: minimum, first quartile, second quartile (median), third quartile, and maximum. A box and whisker plot is used to graphically represent quartile data. From a series of basketball games, here are the scores for a team: 87 (minimum), 92, 103 (median of lower data), 104, 108, 108, 109, 112 (median of upper data), 116, 118 (maximum). The box and whisker graph shown here displays the data with a box around the interquartile range $Q_{3}-Q_{1}$, with end points shown on the minimum and maximum values.

Standard Deviation

The standard deviation measures how far data values typically lie from the mean. The symbol for the standard deviation is the Greek letter sigma.
The formula for the standard deviation is the square root of the variance: $\sigma =\sqrt{\text{variance}}$. This data set lists the heights in inches of 5 students: 63, 64, 58, 48, 65. The mean is 59.6. Here is the calculation of the standard deviation, equal to the square root of the variance. The variance is the average of the squared differences from the mean. $\begin{align} \sigma &=\sqrt{(3.4^{2} +4.4^{2} + (-1.6)^{2} + (-11.6)^{2} +5.4^{2})\div5}\\ \sigma &=\sqrt{(11.56 + 19.36 + 2.56 + 134.56 +29.16)\div5}\\ \sigma &=\sqrt{197.2\div5}\\ \sigma &=\sqrt{39.44}\\ \sigma &\approx 6.28 \end{align}$

Scatter Plots, Line of Best Fit, and Correlation

Scatter plots are used to determine if there is a correlation between two sets of data. If data points are shaped around the line of best fit, a correlation is indicated. The line of best fit may pass through some of the points, none of the points, or all of the points graphed on a scatter plot. When the data points are shaped around the line of best fit and slanted upward, the relationship is a high positive correlation. If the shape is slanted downward around the line of best fit, the relationship is a high negative correlation. If the data is not shaped around the line, the correlation can be low positive, low negative, or no correlation at all. This data is shaped around the line of best fit and slants upwards, so there is a high positive correlation between the two sets of data.

Linear, Quadratic, and Exponential Graphs

Algebra equations have recognizable graphs. Learning to match equations to graphs can help you save time when completing algebra problems. The graph of a linear equation is a straight line: $y=mx +b$. The graph of a quadratic equation is in the shape of a parabola: $y= ax^{2} + bx +c$. The graph of an exponential function $y=b^{x}$ is asymptotic to the x-axis: the graph gets close to the x-axis but does not touch it.
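The standard-deviation example above (heights 63, 64, 58, 48, 65) can be reproduced with Python's standard `statistics` module. This is a sketch; `pvariance`/`pstdev` are used because the worked calculation divides by n (the population convention), not n−1:

```python
import statistics

heights = [63, 64, 58, 48, 65]  # heights in inches

mu = statistics.mean(heights)        # 59.6
var = statistics.pvariance(heights)  # population variance: 39.44
sigma = statistics.pstdev(heights)   # square root of the variance

print(mu, var, round(sigma, 2))  # 59.6 39.44 6.28
```

For a sample (rather than the whole population), `statistics.variance` and `statistics.stdev` divide by n−1 instead.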
Defining parameters

Level: \( N \) = \( 7 \)
Weight: \( k \) = \( 6 \)
Nonzero newspaces: \( 2 \)
Newforms: \( 3 \)
Sturm bound: \(24\)
Trace bound: \(1\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{6}(\Gamma_1(7))\).

Modular forms: Total 13, New 11, Old 2
Cusp forms: Total 7, New 7, Old 0
Eisenstein series: Total 6, New 4, Old 2

Decomposition of \(S_{6}^{\mathrm{new}}(\Gamma_1(7))\)

We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label 7.6.a, character \(\chi_{7}(1, \cdot)\) of degree 1: newforms 7.6.a.a (dimension 1) and 7.6.a.b (dimension 2)
Label 7.6.c, character \(\chi_{7}(2, \cdot)\) of degree 2: newform 7.6.c.a (dimension 4)
Overview

Euler angles are a way of describing a rotation in 3D space using three separate rotations, each around an axis.

Syntax

Euler angles are usually written with one of two symbol sets: \( \alpha, \beta, \gamma \) (Alpha, Beta, Gamma), usually used with proper Euler angles (see below); and \( \Phi, \Theta, \Psi \) (Phi, Theta, Psi), usually used with Tait-Bryan angles (see below).

Proper Euler Angles vs. Tait-Bryan Angles

Euler angles can be split into two categories based on the axes used:

Proper Euler Angles: These use the same axis for both the first and third rotations, e.g. x-y-x, x-z-x, y-x-y, y-z-y, z-x-z, z-y-z.
Tait-Bryan Angles: These use all three axes (no axis is used twice), e.g. x-y-z, x-z-y, y-x-z, y-z-x, z-x-y, z-y-x.

Proper Euler angles are also called classic Euler angles. The classic roll, pitch, yaw (RPY) terminology is a Tait-Bryan angle set. Tait-Bryan angles are also called Cardan angles, nautical angles, or heading, elevation and bank. The axes of the initial frame (also called the reference frame) are denoted with x, y, z. The rotated frame is denoted with X, Y, Z. The line of nodes is the line (or vector) made by the intersection of the xy and XY planes.

Calculating Euler Angles Between Two 3D Vectors

First we calculate the cross product of the two vectors, which gives us the axis of rotation. If you are using unit vectors, you can ignore the lengths (this would just divide by 1).

Conversion To Rotation Matrices

Euler angles can be converted to rotation matrices: each of the three rotations has its own elementary rotation matrix, and we can combine these to form a single rotation matrix.

Conversion From Rotation Matrices

The angles can be recovered from a rotation matrix using \( \mathrm{atan2}(y, x) \), which returns the principal angle.
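As a sketch of the conversion-to-rotation-matrices step, here is a pure-Python composition of the three elementary rotations in the z-y-x (yaw-pitch-roll) Tait-Bryan order. The helper names are mine, and the composition order shown is one assumed convention (intrinsic z-y'-x''); other axis orders compose analogously:

```python
import math

def Rx(t):  # elementary rotation about the x-axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(t):  # elementary rotation about the y-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(t):  # elementary rotation about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tait_bryan_zyx(yaw, pitch, roll):
    # combine the three elementary matrices into a single rotation matrix
    return matmul(Rz(yaw), matmul(Ry(pitch), Rx(roll)))

R = tait_bryan_zyx(math.pi / 2, 0, 0)  # a 90-degree yaw sends x to y
v = [1, 0, 0]
print([round(sum(R[i][j] * v[j] for j in range(3)), 6) for i in range(3)])  # [0.0, 1.0, 0.0]
```

Note that matrix multiplication does not commute, so the order of the three factors is part of the convention.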
Video Transcript

Exponential Growth and Decay

The largest longhorn breeder’s competition is coming up and Old MacDonald is tired of coming in second place. His wife, Mrs. MacDonald, suggests that Old MacDonald create a formula using exponential growth and decay to feed his favorite longhorn, BEVO, so that his horns grow faster.

Exponential Growth

Old MacDonald's wife reminds him that the standard form of a growth equation is: x_t = x_0 * (1 + r)^(t/n), and to be very careful with his calculations. 't' is the time in days, so ‘x sub t’ is the final size after ‘t’ days. ‘x naught’ is the original size, (1 + r) is the growth factor, where '1' represents 100%, and ‘n’ is the time it takes for the horns to grow by one full growth factor. Old MacDonald comes up with a special vitamin mix that promises a horn growth of 25%, which can be written as 0.25, every 10 days. So the formula is: x_t = x_0 * (1 + 0.25)^(t/10). The initial length of BEVO’s horns, 'x naught', is 4 feet. In order to know how fast BEVO’s horns will grow, Old MacDonald plugs in the number of days for ‘t’. So, after 10 days, the equation for how long BEVO’s horns will be is 4 * (1.25)^(10/10) feet, or simply, 5 feet. Every day for 50 days, Old MacDonald plans to feed BEVO meals with specially-prepared vitamins mixed in. Let’s take a look at how much BEVO’s horns will grow: After 10 days, BEVO’s horns will be 5 feet long. After 20 days, they will be 6.25 feet long, and after 50 days, BEVO’s horns will be about 12.21 feet long and BEVO will be assured to win the competition!

Exponential Decay

BUT Old MacDonald makes a SLIGHT calculation error and writes 2.5 instead of 0.25. After just 20 days, BEVO’s horns are already at 49 feet! Since Old MacDonald isn’t allowed to enter a longhorn with horns longer than 12.5 feet into the competition, he has to reverse the process. But he only has 30 days left in which to do so.
Old MacDonald’s wife helps him again and tells him that the equation for decay is similar to the equation for growth, BUT instead of adding 'r' to 1 to get the growth factor, 'r' is subtracted from 1 to get the decay factor. Old MacDonald wants to have BEVO's horns close to 12.5 feet, so he concocts another vitamin mix that has a decay rate of 37%, which means that every 10 days, BEVO's horns will shrink by 37%. BEVO's horns currently measure 49 feet from tip to tip. Let’s take a look at the length of the horns after 10 days and see if this new pill will fix Old MacDonald’s problem. After 10 days, BEVO’s horns should be 30.87 feet long; after 20 days, about 19.45 feet; and 12.25 feet after 30 days. It’s the day of the longhorn breeder’s competition! Old MacDonald is so excited! WHAT’S THIS!?!? I guess you could say Old MacDonald is “stumped” as to how this happened…

Exponential Growth and Decay Exercise

Would you like to apply what you have learned? With the exercises for the video Exponential Growth and Decay you can review and practice it.

Define exponential growth and decay.

Hints

Just look at the base of the power: $1+r>1$, $1-r<1$. If you raise a number greater than $1$ to a positive power you get a value greater than $1$ as well. Look at the example, consider $x_t=10\times 1.5^{t}$. Here we have $x_1=15$, $x_2=22.5$, ...

Solution

Here you see the formula for an exponential process. We distinguish between exponential growth and exponential decay. Let's have a look at the base of the power, $(1\pm r)$. What does this mean? For $(1+r)$ we have a base greater than $1$ and thus a power which is greater than $1$ as well. So $x_t$ is bigger than $x_0$. We can then conclude that we have exponential growth. $(1+r)$ is called the growth factor. By the same argument we can determine exponential decay for $(1-r)$. Here $(1-r)$ is called the decay factor. What's the meaning of the other terms? The other terms are independent of exponential growth or decay.
$\frac tn$ counts how many growth (or decay) periods have elapsed after $t$ days. $x_t$ is the size after $t$ days. $x_0$ is the original size, the starting size.

Establish the formula for Bevo's shrinking horns.

Hints

What's the meaning of the terms in the formula? $\frac tn$ counts how many decay periods have elapsed after $t$ days. $x_t$ is the size after $t$ days. $x_0$ is the original size, the starting size. Keep in mind that you have to subtract the decay rate from $1$. Distinguish between the original size, that's the size at the beginning, and the size at $t$, which must be smaller than the original size.

Solution

We already know the formula for exponential decay, $x_t=x_0\times (1-r)^{\left(\frac t{10}\right)}$, so all we need to do is put in our known values. The original size of the horns is 49 feet, so $x_0=49$. The size at $t$ is unknown. To get the decay factor we have to subtract the rate $r=0.37$ from $1$ to get $1-0.37=0.63$. Putting all these values into the exponential decay equation, we get: $x_t=49\times (1-0.37)^{\left(\frac t{10}\right)}=49\times 0.63^{\left(\frac t{10}\right)}$.

Determine the growth of Bevo's horns.

Hints

Just put the given value for $t$ in the growth formula. To round a decimal number, like $3.1415$ for example, proceed as follows: Look at the third decimal position; e.g. $1$ for $3.1415$. If this is less than $5$ you have to round down. Otherwise you have to round up. For $t=100$, we have $x_{100}=4\times 1.25^{\large \left(\frac {100}{10}\right)}=37.2529...\approx 37.25$.

Solution

We can see all solutions in the table to the right. How do we get these solutions? We put each $t$-value in the formula $x_t=4\times 1.25^{\large \left(\frac t{10}\right)}$: $t=10$ gives us $x_{10}=4\times 1.25^{\large \left(\frac {10}{10}\right)}=5$. $t=20$ gives us $x_{20}=4\times 1.25^{\large \left(\frac {20}{10}\right)}=6.25$. $t=30$ gives us $x_{30}=4\times 1.25^{\large \left(\frac {30}{10}\right)}=7.8125\approx 7.81$.
$t=40$ gives us $x_{40}=4\times 1.25^{\large \left(\frac {40}{10}\right)}=9.765625\approx 9.77$. $t=50$ gives us $x_{50}=4\times 1.25^{\large \left(\frac {50}{10}\right)}=12.2070...\approx 12.21$.

Decide on the corresponding exponential function.

Hints

Put the corresponding $x$-value into the respective function equation. You can find the starting value for each function; i.e. $f(0)$, $g(0)$, and $h(0)$. Check each point $(x,y)$. It must satisfy either $y=f(x)$, $y=g(x)$, or $y=h(x)$.

Solution

To check if any point belongs to a given function, put the $x$ value of the point into the function. Let's start with $x=0$: $f(0)=3\times 1.2^0=3$ $\rightarrow$ $(0,3)$ $g(0)=4\times 0.9^0=4$ $\rightarrow$ $(0,4)$ $h(0)=5\times 0.5^0=5$ $\rightarrow$ $(0,5)$ $f(4)=3\times 1.2^4\approx 6.2$ $\rightarrow$ $(4,6.2)$ $g(4)=4\times 0.9^4\approx 2.6$ $\rightarrow$ $(4,2.6)$ $h(4)=5\times 0.5^4\approx 0.3$ $\rightarrow$ $(4,0.3)$

Calculate the size of Bevo's horns.

Hints

Put the given value for $t$ in the formula $x_t=4\times (1+2.5)^{\large \left(\frac t{10}\right)}$. Have a look at the following example for rounding $2.71828$: Look at the third decimal position; e.g. $8$. If this is less than $5$ you have to round down. Otherwise you have to round up. For $t=60$: $x_{60}=4\times 3.5^{\large \left(\frac {60}{10}\right)}=7353.0625...\approx 7353.06$.

Solution

To get the values for $x_t$, put each $t$ into the formula $x_t=4\times 3.5^{\left(\frac t{10}\right)}$: $t=10$ gives us $x_{10}=4\times 3.5^{\large \left(\frac {10}{10}\right)}=14$. $t=20$ gives us $x_{20}=4\times 3.5^{\large \left(\frac {20}{10}\right)}=49$. $t=30$ gives us $x_{30}=4\times 3.5^{\large \left(\frac {30}{10}\right)}=171.5$. $t=40$ gives us $x_{40}=4\times 3.5^{\large \left(\frac {40}{10}\right)}=600.25$. $t=50$ gives us $x_{50}=4\times 3.5^{\large \left(\frac {50}{10}\right)}=2100.875\approx 2100.88$.

Decide if exponential growth or decay is given.
Hints

The factor for exponential decay (or growth) is less than (or greater than) $1$. If we have a growth of $30\%$, we get the following growth factor: $1+30\%=1+0.3=1.3$.

Solution

For all the examples above we have to determine the growth, or decay, factor as well as the original size $x_0$.

Water lilies, an exponential growth process: $1+r=1+0.2=1.2$, $x_0=2$, $x_t=2\times 1.2^t$, where $t$ stands for the time in months.
Trees, an exponential decay process: $1-r=1-0.15=0.85$, $x_0=2$, $x_t=2\times 0.85^t$, where $t$ stands for the time in years.
Radiation, an exponential decay process: $1-r=1-0.2=0.8$, $x_0=2$, $x_t=2\times 0.8^t$, where $t$ stands for the time in $10$-year periods.
Cellular, an exponential growth process: $1+r=1+1=2$, $x_0=2$, $x_t=2\times 2^t$, where $t$ stands for the time in periods.
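The growth and decay arithmetic used throughout the story and exercises can be collected into one small Python helper (a sketch; the function name `size` and its argument names are mine):

```python
def size(x0, r, t, n, grow=True):
    """x_t = x0 * (1 +/- r)**(t/n): one growth or decay factor every n days."""
    factor = (1 + r) if grow else (1 - r)
    return x0 * factor ** (t / n)

# intended growth: 25% every 10 days, starting from 4 feet
print(round(size(4, 0.25, 50, 10), 2))                # 12.21
# the 2.5 typo: runaway growth after only 20 days
print(round(size(4, 2.5, 20, 10), 2))                 # 49.0
# the fix: 37% decay every 10 days, starting from 49 feet
print(round(size(49, 0.37, 30, 10, grow=False), 2))   # 12.25
```

The three printed values match the 12.21-foot, 49-foot, and 12.25-foot figures from the transcript and exercises.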
Explain the inclusion-exclusion principle with an example.

In combinatorics (combinatorial mathematics), the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as $$|A\cup B|=|A|+|B|-|A\cap B|,$$ where A and B are two finite sets and |S| indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets, and the count is corrected by subtracting the size of the intersection. The principle is more clearly seen in the case of three sets, which for the sets A, B and C is given by $$|A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|.$$ This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so it must be added back in to get the correct total.
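The two- and three-set formulas are easy to verify on a concrete example with Python's built-in set type (the particular sets below are arbitrary choices of mine):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 6, 7}

# two sets: |A u B| = |A| + |B| - |A n B|
assert len(A | B) == len(A) + len(B) - len(A & B)

# three sets
lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
print(lhs, rhs)  # 7 7
```

Here the union is {1, 2, 3, 4, 5, 6, 7}, and the right-hand side correctly compensates for the doubly and triply counted elements.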
I have the following question to solve: Tungsten, $\ce{W}$, and chlorine, $\ce{Cl}$, form a series of compounds with the following compositions: \begin{array}{rr} \text{Mass % W} & \text{Mass % Cl}\\ \hline 72.17 & 27.83\\ 56.45 & 43.55\\ 50.91 & 49.09\\ 46.36 & 53.64\\ \end{array} If a molecule of each compound contains only one tungsten atom, what are the formulas for the four compounds? My answer is as follows: For one gram of tungsten, chlorine has mass $$ \begin{align} \frac{27.83}{72.17} &= 0.3856~\mathrm{g}\\[3pt] \frac{43.55}{56.45} &= 0.7715~\mathrm{g}\\[3pt] \frac{49.09}{50.91} &= 0.9643~\mathrm{g}\\[3pt] \frac{53.64}{46.36} &= 1.157~\mathrm{g}.\\ \end{align} $$ Since $\frac{0.7715}{0.3856}\approx2$, $\frac{0.9643}{0.3856}\approx\frac{5}{2}$, and $\frac{1.157}{0.3856}\approx3$, the numbers of atoms of chlorine for a given mass of tungsten are respectively in the ratio $2:4:5:6$. So if a molecule of each compound contains only one tungsten atom, the formulas are $\ce{WCl_{2}}$, $\ce{WCl_{4}}$, $\ce{WCl_{5}}$, and $\ce{WCl_{6}}$. Is this correct?
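As an independent cross-check of the mass-ratio reasoning, one can convert each mass percentage directly into moles of Cl per mole of W. This Python sketch assumes the standard atomic masses W ≈ 183.84 and Cl ≈ 35.45, which are not given in the question:

```python
M_W, M_Cl = 183.84, 35.45  # assumed standard atomic masses (g/mol)

percents_Cl = [27.83, 43.55, 49.09, 53.64]
for p in percents_Cl:
    # grams of Cl per gram of W, converted to moles of Cl per mole of W
    mass_ratio = p / (100 - p)
    n = mass_ratio * M_W / M_Cl
    print(round(n))  # 2, 4, 5, 6
```

Each quotient comes out within about 0.1% of a whole number, supporting the formulas WCl2, WCl4, WCl5, and WCl6.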
Look at the following series: 1 + 2x + 3x^2 + 4x^3 + 5x^4 + ..... You can say by using any method that the series is divergent. It indeed diverges but we use this as a series expansion for 1/(1-x)^2. I think it is wrong to expand functions like that by using the Maclaurin series expansion method. According to me, calculating the sum of an alternating series is also incorrect, and this misconception is also due to the expansion of a function by the Maclaurin series expansion method. Let's put 2 instead of x in the above function (i.e. 1/(1-x)^2). We get 1 as the value of the function, but the series obtained by expanding the function by the Maclaurin method and then inserting 2 instead of x is divergent, as is evident from the expansion of the function. I came up with this idea when I studied a research paper published by Leonhard Euler on series of that type. The main point is that a method does not work for all situations; it fails somewhere. So if you agree then say yes, you are right, and if there is a mistake then please correct me. The sum is not divergent for $|x|<1$: in this case the exponential decay of the $x^n$ factor is fast enough to mitigate the linear growth of the $n$ factor*. We can't use the sum to represent $1/(1-x)^2$ on the entire domain of this function; we can only use the sum for the sub-domain $(-1,1)$. However, there do exist infinitely differentiable functions whose Taylor series diverge except at the point of expansion. Similarly, there exist infinitely differentiable functions whose Taylor series converge but to the wrong function, again except at the point of expansion. The latter has a classic example, given by Taylor expanding the function $$f(x)=\begin{cases} e^{-1/x^2} & x \neq 0 \\ 0 & x=0 \end{cases}$$ at $x=0$. * Clarification: if the limit given by the ratio test is $r<1$, then there exists $N$ such that for $n \geq N$, $|a_n| \leq \left ( \frac{1+r}{2} \right )^n$.
That is, the summands are eventually dominated by the summands of a convergent geometric series with a slightly larger base than the limit. The reverse happens if $r>1$.

"Let's put 2 instead of x in the above function (i.e. 1/(1-x)^2). We get 1/4."

That's not right: $\frac{1}{(1-2)^2}=1$. Perhaps you have in mind the series $1-2+3-4+\cdots$. See the Wikipedia article for full details. Briefly, Euler reasoned that $1-2x+3x^2-4x^3+\cdots=\frac{1}{(1+x)^2}$ and substituted $x=1$. Today, we would say that this is not a valid way to find the ordinary sum of the series, which diverges at $x=1$. On the other hand, since the series converges for $|x|<1$, it demonstrates that the Abel sum is $\frac14$. If you're studying Euler's research papers on divergent series, keep in mind that they were written two and a half centuries ago. To avoid confusion, you should read them alongside a modern commentary using modern definitions. In the case of this series, check the Euler Archive for E352 and read the synopses, which explain the connection to Abel summation.

"The main point is that a method does not work for all situations; it fails somewhere."

It is true that regularized sums of divergent series are trickier to work with than sums of convergent series. Not all of the theorems you're used to will apply in more general settings. This is inconvenient, but hardly fatal to the theory.

As already pointed out by @Ian, the series $\sum_{n=0}^\infty (n+1)x^n$ does converge for $|x|<1$ and diverges elsewhere. I am not sure if this is where you were going with all of this, but it seems instructive to discuss another topic that might be lurking here, and that is the topic of an Asymptotic Series. Asymptotic series are series that can actually diverge, but whose partial sums can prove very useful and powerful as approximations. $$\text{Ei}(x)\sim e^{-x}\sum_{n=0}^{\infty}\frac{(-1)^n\,n!}{x^{n+1}}$$ Clearly this series diverges for all $x$!
The ratio test yields $\left|\frac{a_{n+1}}{a_n}\right|=\frac{n+1}{x}\to \infty$. But a truncated series provides a better and better approximation for the Exponential Integral as $x$ gets larger and larger. We see that if we use just the first term of the series, then the approximation of the Exponential Integral is $$\text{Ei}(x)\approx \frac{e^{-x}}{x}$$ which has a relative error of order $O\left(\frac{1}{x}\right)$.
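A quick numeric experiment (a Python sketch; the helper name is mine) illustrates both behaviours discussed in this thread: geometric convergence of $\sum (n+1)x^n$ to $1/(1-x)^2$ inside $|x|<1$, and runaway partial sums outside it:

```python
def partial_sum(x, N):
    """Partial sum of 1 + 2x + 3x^2 + ... = sum of (n+1) * x^n."""
    return sum((n + 1) * x**n for n in range(N))

x = 0.5
print(partial_sum(x, 60), 1 / (1 - x)**2)  # both are ~4.0

# outside the radius of convergence the partial sums just keep growing
print(partial_sum(2.0, 10) < partial_sum(2.0, 20))  # True
```

At $x=0.5$ sixty terms already agree with $1/(1-x)^2=4$ to machine precision, because the tail is dominated by a geometric series.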
Evaluate the integral: $\int_\alpha^\beta (x-\alpha)(x-\beta)\, dx$

SOLUTION

$\int_\alpha^\beta (x^2 - (\alpha + \beta)x + \alpha\beta )\, dx$

$= \left(\frac{x^3}{3} - \frac{(\alpha + \beta)x^2}{2} + \alpha\beta x\right)\bigg|_{\alpha}^{\beta}$

$= \frac{1}{3}(\beta^3 - \alpha^3)-\frac{\alpha+\beta}{2}(\beta^2 - \alpha^2) + \alpha\beta(\beta-\alpha)$

$= \frac{1}{3}(\beta - \alpha)(\beta^2+\alpha\beta+\alpha^2)-\frac{\alpha+\beta}{2}(\beta - \alpha)(\beta + \alpha) + \alpha\beta(\beta-\alpha)$

$= (\beta-\alpha)\left(\frac{1}{3}\beta^2 + \frac{1}{3}\alpha\beta + \frac{1}{3}\alpha^2 - \frac{1}{2}\alpha^2 - \alpha\beta - \frac{1}{2}\beta^2 + \alpha\beta\right)$

$= (\beta-\alpha)\left(-\frac{1}{6}\beta^2 +\frac{1}{3}\alpha\beta - \frac{1}{6}\alpha^2\right)$

$= -(\beta-\alpha)\cdot\frac{(\beta-\alpha)^2}{6}$

$$\boxed{\int_\alpha^\beta (x-\alpha)(x-\beta)\, dx = -\frac{(\beta-\alpha)^3}{6}}$$

Note by JohnDonnie Celestre, 4 years, 9 months ago
Comment: Couldn't we substitute $x - \frac{\alpha+\beta}{2} = u$? That would have significantly reduced the computational time. Please think it over.

Reply: yeah, thanks.
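The boxed result is easy to sanity-check numerically: the integrand is a quadratic, so Simpson's rule evaluates the integral exactly. The helper below is my own sketch:

```python
def integral(a, b):
    """Integral of (x - a)(x - b) from a to b via Simpson's rule.

    Simpson's rule is exact for polynomials of degree <= 3, so for this
    quadratic integrand a single interval already gives the exact value.
    """
    f = lambda x: (x - a) * (x - b)
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

for a, b in [(1.0, 3.0), (-2.0, 5.0)]:
    print(integral(a, b), -(b - a)**3 / 6)  # the two columns agree
```

For $(\alpha,\beta)=(1,3)$ both expressions give $-8/6 \approx -1.333$, matching the closed form.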
The Central Board of Secondary Education is responsible for conducting the examinations of schools affiliated to the Central Board. Students usually find maths one of the most difficult papers. The major reason is a lack of confidence and too little practice. Practicing maths on a daily basis will help students develop to their full potential and also help them solve questions with high accuracy. We at BYJU’S provide students with 4-mark maths important questions for Class 11. Students can practice these 4-mark questions to be aware of the questions that can be framed in their final examination.

Question 1- In triangle ABC, prove that \(\frac{\cos A}{a} + \frac{\cos B}{b} + \frac{\cos C}{c} = \frac{a^{2}+ b^{2}+ c^{2}}{2abc}\)

Question 2- How many words, with or without meaning, each of 2 vowels and 3 consonants, can be formed from the letters of the word DAUGHTER?

Question 3- Find the coordinates of the orthocentre of the triangle whose vertices are (-1,3), (2,-1) and (0,0).

Question 4- Find the domain and range of the function \(f(x) = \frac{1}{\sqrt{x – \left | x \right |}}\)

Question 5- Find the variance for the following data-

Question 6- Using the principle of mathematical induction, prove that \(\frac{1}{3.5} + \frac{1}{5.7} + \frac{1}{7.9} + …….. + \frac{1}{(2n+1)(2n+3)} = \frac{n}{3(2n+3)}, \forall n \in N\)

Question 7- Express in the form a+ib: \(\left ( \frac{(3 + i\sqrt{5})(3 – i\sqrt{5})}{(\sqrt{3} + i\sqrt{2})(\sqrt{3} – i\sqrt{2})} \right )\)

Question 8- Prove that: \(\cot (15/2)^{\circ} = \sqrt{2} + \sqrt{3} + \sqrt{4} + \sqrt{6}\)

Question 9- Four cards are drawn at random from a pack of 52 playing cards. Find the probability of getting one card from each suit.

Question 10- If a is the A.M.
and b and c are two G.M.s between any two positive numbers, then prove that \(b^{3} + c^{3} = 2abc\)

Question 11- In a survey of 600 students in a school, 150 students were found to be taking tea and 225 taking coffee, and 100 were taking both tea and coffee. Find how many students were taking neither tea nor coffee.

Question 12- Represent the complex number \(1 + \sqrt{3}i\)

Question 13- One evening a man planned a party for his friends on his 25th marriage anniversary. When all of his friends had arrived, he introduced them to each other and everybody shook hands with everybody else. Find the total number of people in the room, if the total number of handshakes is 66.

Question 14- A committee of two members is selected from two men and two women. What is the probability that the committee will have one man?

Question 15- A teacher teaches his students with such a spirit that they must know what he knows. After teaching with the same spirit, he decided to check the ability of the students through a test. He has given a question: if a, b are roots of \(x^{2} – 3x + p = 0\)

Question 16- Find the equation of the set of points ‘P’ such that its distances from the points (3,4,-5) and (-2,1,4) are equal.

Question 17- A horse is tied to a pole by a rope. The horse moves along a circular path keeping the rope tight and describes 176 m when it has traced out \(72^{\circ}\)

Question 18- Evaluate \(\tan \left ( \frac{\pi}{8} \right )\)

Question 19- If \(\left ( 1 + 1/i – i\right )^{m} = 1\)

Question 20- Evaluate- (i) \(\lim \limits_{x \to 0} \left [ \frac{(x+1)^{5}-1}{x} \right ]\) (ii) \(\lim \limits_{x \to \pi} \left [ \frac{\sin (\pi + x)}{\pi( \pi -x)} \right ]\)
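Two of the questions above have one-line computational checks, sketched here in Python (inclusion-exclusion for Question 11, and the handshake count for Question 13):

```python
# Question 11: inclusion-exclusion on the survey counts
total, tea, coffee, both = 600, 150, 225, 100
neither = total - (tea + coffee - both)
print(neither)  # 325

# Question 13: n people shaking hands pairwise gives n(n-1)/2 = 66 handshakes
n = next(n for n in range(2, 100) if n * (n - 1) // 2 == 66)
print(n)  # 12
```

These are verification sketches only; in an exam, Question 13 would be solved by factoring n² − n − 132 = 0 rather than by search.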
Hi, I’m looking for a symbolic manipulations library to do different sorts of analysis on a model. I’ve tried python SymPy but couldn’t get it to work; following a recent tweet by @cscherrer, I was wondering whether I can use Julia’s SymPy/SymEngine for this. In short, the expression I’m interested in is an exponential distribution of the form: P(x,y|W)=\frac{1}{Z}\exp\left(-xWy\right) such that x,y\in\{0,1\}^N are boolean vectors of size N, W is an NxN matrix and Z is a normalising constant (or partition function): Z=\sum_{x,y}\exp\left(-xWy\right) I’m interested in various quantities of this expression, such as \frac{dP}{dw_{ij}}, KL\left( P(x,y|W) ; P(x,y|W')\right), and others, as well as approximations for the w_{ij}\ll1 regime (which helps with the exponent). Is this reasonable/doable in Julia? Disclaimer - I have absolutely no Julia experience; I’ve been wanting to try it for a long time, and I’m hoping this will be my excuse.
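A brute-force numeric evaluation of Z is cheap for small N and makes a useful sanity check for whatever symbolic route is chosen. The sketch below is Python rather than Julia, but the double loop ports directly; the function name `Z` and the zero-matrix test case are my own choices:

```python
import itertools
import math

def Z(W):
    """Brute-force partition function over boolean x, y (feasible only for small N)."""
    N = len(W)
    total = 0.0
    for x in itertools.product((0, 1), repeat=N):
        for y in itertools.product((0, 1), repeat=N):
            e = sum(x[i] * W[i][j] * y[j] for i in range(N) for j in range(N))
            total += math.exp(-e)
    return total

N = 2
W0 = [[0.0] * N for _ in range(N)]
print(Z(W0))  # 16.0, i.e. (2**N)**2 terms each contributing exp(0) = 1
```

The cost is 4^N energy evaluations, so this is only a cross-check for small N, not a substitute for the symbolic or small-w approximations asked about.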
In this chapter you will do more work with fractions written in the decimal notation. When fractions are written in the decimal notation, calculations can be done in the same way as for whole numbers. It is important to always keep in mind that the common fraction form, the decimal form and the percentage form are just different ways to represent exactly the same numbers. Equivalent forms Fractions in decimal notation 1. What fraction of each rectangle is coloured in? Write your answers in the table. (a) (b) (c) (d) (a) Red (b) Green Yellow (c) Green Yellow (d) Yellow Green 2. Now find out what fraction in each rectangle in question 1 is not coloured in. (a) (b) (c) (d) Decimal fractions and common fractions are simply different ways of expressing the same number. We call them different notations. To write a common fraction as a decimal fraction, we must first express the common fraction with a power of ten (10, 100, 1 000 etc.) as denominator. For example: \(\frac{9}{20}=\frac{9}{20} \times \frac{5}{5} = \frac{45}{100} = 0,45\) If you have a calculator, you can also divide the numerator by the denominator to get the decimal form of a fraction, for example: \(\frac{9}{20} = 9 \div 20 = 0,45\) To write a decimal fraction as a common fraction, we must first express it as a common fraction with a power of ten as denominator and then simplify if necessary. For example: \( 0,65 = \frac{65}{100} = \frac{65 \div 5}{100 \div 5} = \frac{13}{20}\) 3. Give the decimal form of each of the following numbers. \(\frac{1}{2} \) __________ \(\frac{3}{4}\) __________ \(\frac{4}{5}\) __________ \(\frac{7}{5}\) __________ \(\frac{7}{2} \) __________ \(\frac{65}{100}\)__________ 4. Write the following as decimal fractions. (a) \(2 \times 10 + 1 \times 1 + \frac{3}{10}\) (b) \(3 \times 1 + 6 \times \frac{1}{100}\) (c) Three hundredths (d) \(7 \times \frac{1}{1000}\) 5. Write each of the following numbers as fractions in their simplest form. 0,2 0,85 0,07 12,04 40,006 6.
Write in the decimal notation. (a) 5 + 12 tenths (b) 2 + 3 tenths + 17 hundredths (c) 13 hundredths + 15 thousandths (d) 7 hundredths + 154 hundredths Hundredths, percentages and decimals It is often difficult to compare fractions with different denominators. Fractions with the same denominator are easier to compare. For this and other reasons, fractions are often expressed as hundredths. A fraction expressed as hundredths is called a percentage. Instead of 6 hundredths we can say 6 per cent or \(\frac{6}{100}\) or 0,06. 6 per cent, \(\frac{6}{100}\) and 0,06 are just three different ways of writing the same number. The symbol % is used for per cent. Instead of writing "17 per cent", we may write 17%. 1. Write each of the following in three ways: in decimal notation, in percentage notation and in common fraction notation. Leave your answers in hundredths. (a) 80 hundredths (b) 5 hundredths (c) 60 hundredths (d) 35 hundredths 2. Complete the following table. 0,3 \(\frac{1}{4}\) 15% \(\frac{1}{8}\) 0,55 1% Ordering and comparing decimal fractions Bigger, smaller or the same? 1. Write the values of the marked points (A to D) as accurately as possible in decimal notation. Write the values beneath the letters A to D. (a) (b) (c) (d) (e) (f) (g) (h) (i) 2. Order the following numbers from biggest to smallest. Explain your thinking. 5267 1263 1300 12689 635 1267 125 126 12 3. Order the following numbers from biggest to smallest. Explain your method. 0,8 0,05 0,901 0,15 0,465 0,55 0,75 0,4 0,62 4. Write down three different numbers that are bigger than the first number and smaller than the second number. (a) 5 and 5,1 (b) 5,1 and 5,11 (c) 5,11 and 5,12 (d) 5,111 and 5,116 (e) 0 and 0,001 (f) \(\frac{1}{2}\) and 1 5. Underline the bigger of the two numbers. (a) 2,399 and 2,6 (b) 5,604 and 5,64 (c) 0,11 and 0,087 (d) \(\frac{3}{4}\) and 50% (e) \(\frac{75}{100}\) and \(\frac{50}{100}\) (f) 0,125 and 0,25 6.
The table gives information about two world champion heavyweight boxers. If they fight against one another, who would you expect to have the advantage, and why?

Height (m): 1,98 and 1,88
Weight (kg): 112 and 103,3
Reach (m): 2,03 and 1,91

7. Fill in <, > or =. (a) 3,09 ☐ 3,9 (b) 3,9 ☐ 3,90 (c) 2,31 ☐ 3,30 (d) 3,197 ☐ 3,2 (e) 4,876 ☐ 5,987 (f) 123,321 ☐ 123,3

8. How many numbers are there between 3,1 and 3,2?

Rounding off decimal fractions

Decimal fractions can be rounded in the same way as whole numbers. They can be rounded to the nearest whole number or to one, two, three etc. figures after the comma. If the first digit to be dropped is 5 or bigger, the number is rounded up. For example: 13,5 rounded to the nearest whole number is 14; 13,526 rounded to two figures after the comma is 13,53. If the first digit to be dropped is 4 or less, the number is rounded down. For example: 13,4 rounded to the nearest whole number is 13.

Let's round off

1. Round each of the following numbers off to the nearest whole number. 29,34 3,65 14,452 3,299 39,1 564,85 1,768

2. Round each of the following numbers off to one decimal place. 19,47 421,34 489,99 24,37 6,77

3. Round each of the following numbers off to two decimal places. 8,345 6,632 5,555 34,239 21,899

4. Mr Peters buys a radio for R206,50. The shop allows him to pay it off over six months. How much must he pay each month?

5. (a) Mrs Smith buys a carton of 10 kg grapes at the market for R24,77. She must divide it between herself and two friends. How much does each woman get? (b) How much must each person pay Mrs Smith for the grapes?

6. Estimate the answers for each of the following by rounding off the numbers. (a) \(1,43 \times 1,62\) (b) \(3,89 \times 4,21\)

Calculations with decimal fractions

To add and subtract decimal fractions: tenths may be added to tenths; tenths may be subtracted from tenths; hundredths may be added to hundredths; hundredths may be subtracted from hundredths; etc.

Let's do calculations!

1.
Four consecutive stages in a cycling race are 21,4 km; 14,7 km; 31 km and 18,6 km long. How long is the whole race? Answer: 2. Calculate. (a) \( 16,52 + 2,35 \) (b) \(16,52 + 9,38\) (c) \(16,52 + 9,78\) (d) \( 30,08 + 2,9 \) (e) \(0,042 + 0,103\) (f) \(9,99 + 0,99\) 3. Calculate. (a) \( 45,67 - 23,25 \) (b) \( 45,67 -23,80 \) (c) \(187,6 - 98,45\) (d) \( 1,009 - 0,998 \) (e) \(0,9 - 0,045\) (f) \(65,7 - 37,6\) 4. The following set of measurements (in cm) was recorded during an experiment: 56,8; 55,4; 78,9; 57,8; 34,2; 67,6; 45,5; 34,5; 64,5; 88 (a) Find the sum of the measurements and round it off to the nearest whole number. (b) First round off each measurement to the nearest whole number and then find the sum. (c) Which of your answers in 4(a) and (b) is closest to the actual sum? Explain why. 5. By how much is 0,7 greater than 0,07? 6. The difference between two numbers is 0,75. The bigger number is 18,4. What is the other number? To multiply fractions written as decimals, convert the fractions to whole numbers by multiplying by powers of 10 (e.g. \(0,3 \times 10 = 3\)), do your calculations with the whole numbers, and then convert back to decimals again. For example: \(13,1 \times 1,01\) \(13,1 {\bf\times 10} \times 1,01 {\bf\times 100} = 131 \times 101 = 13 231; 13 231 \div {\bf 10 \div 100} = 13,231\) When you do division you can first multiply the number and the divisor by the same number to make the working easier. For example: \(21,7 \div 0,7 = (21,7 {\bf\times 10}) \div (0,7 {\bf\times 10}) = 217 \div 7 = 31\) 7. Calculate each of the following. You may use fraction notation if you wish. (a) \(0,12 \times 0,3 \) (b) \( 0,12\times 0,03 \) (c) \(1,2 \times 0,3\) (d) \(350 \times 0,043 \) (e) \( 0,035\times 0,043 \) (f) \(0,13 \times 0,16\) (g) \(1,3 \times 1,6 \) (h) \(0,13 \times 1,6\) 8. \(30,5 \times 1,3 = 39,65\). Use this answer to work out each of the following. 
(a) \(3,05 \times 1,3 \) (b) \( 305 \times 1,3 \) (c) \(0,305 \times 0,13\) (d) \(305 \times 13 \) (e) \( 39,65 \div 30,5 \) (f) \(39,65 \div 0,305\) (g) \( 39,65 \div 0,13 \) (h) \(3,965 \div 130\)

9. \( 3,5 \times 4,3 = 15,05\). Use this answer to work out each of the following. (a) \(3,5 \times 43 \) (b) \( 0,35 \times 43 \) (c) \(3,5 \times 0,043\) (d) \(0,35 \times 0,43 \) (e) \( 15,05 \div 0,35 \) (f) \(15,05 \div 0,043\)

10. Calculate each of the following. You may convert to whole numbers to make it easier. (a) \( 62,5 \div 2,5 \) (b) \(6,25 \div 2,5\) (c) \( 6,25 \div 0,25 \) (d) \(0,625 \div 2,5\)

Solving problems

1. (a) Divide R44,45 among seven people so that each one receives the same amount. (b) John saves R15,25 every week. He now has R106,75 saved up. For how many weeks has he been saving?

2. (a) Calculate \(14,5 \div 6\), correct to two decimal places. (b) Calculate \(7,41 \div 5\), correct to one decimal place.

3. Determine the value of \(x\). (Give answers rounded to 2 decimal places.) (a) \( 7,1 \div x = 4,2 \) (b) \(x \div 0,7 = 6,2\) (c) \(12 \div x = 6,4\) (d) \( x \div 3,5 = 7 \) (e) \(2,3 \times x = 6\) (f) \(0,023 \times x = 8\)

4. (a) 1 ℓ of water weighs about 0,995 kg. What will 50 ℓ of water weigh? What will 0,5 ℓ of water weigh? (b) Mincemeat costs R36,65 per kilogram. What will 3,125 kg mincemeat cost? What will 0,782 kg cost?
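The fraction-to-decimal conversions and the powers-of-ten trick explained in the boxes above can be checked with a short script. This is a sketch using Python's standard `fractions` module; note that Python prints a decimal point where this book writes a decimal comma:

```python
from fractions import Fraction

# Common fraction -> decimal: scale the denominator to a power of ten.
f = Fraction(9, 20)            # 9/20
scaled = Fraction(45, 100)     # 9/20 * 5/5 = 45/100
assert f == scaled
print(float(f))                # 0.45  (written 0,45 in comma notation)

# Decimal -> common fraction in simplest form: 0,65 = 65/100 = 13/20
print(Fraction(65, 100))       # 13/20

# The powers-of-ten trick for multiplying decimals:
# 13,1 x 1,01 -> (131 x 101) / (10 x 100)
whole = 131 * 101              # 13231
print(whole / (10 * 100))      # 13.231

# Division: multiply both numbers by 10 first: 21,7 / 0,7 = 217 / 7
print(217 / 7)                 # 31.0
```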
I'm solving a problem in classical field theory and I have some difficulties. I'm trying to study small oscillations of a heavy string with fixed endpoints. First of all I wrote down this action: $$S=\int dt\, ds \left[\frac{\rho}{2}(\dot{x}^2+\dot{y}^2)-\rho g y(s,t)+\frac{\lambda(s,t)}{2}\left(\left(\frac{\partial x}{\partial s}\right)^2+\left(\frac{\partial y}{\partial s}\right)^2-1\right)\right]$$ So I have 3 equations from the Euler-Lagrange equations: $$\rho\ddot{x}+\frac{d}{ds}\left(\lambda(s,t)\frac{\partial x}{\partial s }\right)=0$$ $$\rho\ddot{y}+\frac{d}{ds}\left(\lambda(s,t)\frac{\partial y}{\partial s }\right)+\rho g=0$$ $$\left(\frac{\partial x}{\partial s }\right)^2+\left(\frac{\partial y}{\partial s }\right)^2=1$$ After that I found the stationary solution ($\frac{\partial x}{\partial t}=\frac{\partial y}{\partial t}=\frac{\partial \lambda}{\partial t}=0$; I just put $\ddot{x}=\ddot{y}=0$): $$y_0(x)=-\frac{C_1}{\rho g}\cosh\left(\frac{\rho g x}{C_1}+C_2\right)$$ where $C_1,C_2$ are integration constants (they depend on the positions of the ends of the string) and $\cosh$ is the hyperbolic cosine. To study small oscillations I tried to use perturbation theory, so I put $$y(s,t)=y_0(s)+\bar{y}(s,t)$$ $$x(s,t)=x_0(s)+\bar{x}(s,t)$$ $$\lambda(s,t)=\lambda_0(s)+\bar{\lambda}(s,t)$$ But after that I get difficult differential equations which I can't solve. Does anyone know a simpler approach to this problem, or how to solve it this way?
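As a quick sanity check of the stationary solution: eliminating $s$ from the static equations (using $\lambda\, \partial x/\partial s = C_1$) gives the catenary ODE $y'' = -(\rho g/C_1)\sqrt{1+y'^2}$, which the $\cosh$ profile should satisfy identically. A minimal numerical verification, with arbitrary illustrative constants rather than values from the problem:

```python
import math

# Arbitrary illustrative constants; C1 < 0 gives a downward-sagging chain
rho, g, C1, C2 = 1.3, 9.81, -4.0, 0.2

def y0(x):
    # Stationary (catenary) solution y0(x) = -(C1/(rho*g)) * cosh(rho*g*x/C1 + C2)
    return -(C1 / (rho * g)) * math.cosh(rho * g * x / C1 + C2)

# Check the catenary ODE y'' + (rho*g/C1) * sqrt(1 + y'^2) = 0
# by central finite differences at a few sample points.
h = 1e-4
for x in [-0.5, 0.0, 0.4, 0.8]:
    y1 = (y0(x + h) - y0(x - h)) / (2 * h)            # y'
    y2 = (y0(x + h) - 2 * y0(x) + y0(x - h)) / h**2   # y''
    residual = y2 + (rho * g / C1) * math.sqrt(1 + y1**2)
    assert abs(residual) < 1e-3                        # zero up to FD error
```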
TL;DR: it won't work with any spacecraft farther away than about one third of the semi-major axis of the Earth's orbit, even for fully-armed, battlestation-sized spacecraft.

Radar (more exactly, time-of-flight ranging) is limited by two things:

speed of light: The nearest star is 4 light years away, so anything we send in that direction could return, at the earliest, in 8 years. This is not really the problem at hand, though, because of

free-space attenuation: for any wave front, the power density decreases by the same factor by which the sphere's surface increases with radius, i.e. you get $\frac1{4\pi d^2}$ of the original power density at distance $d$.

The result is the so-called Radar Equation: $$P_r = {{P_t G_t G_r \sigma \lambda^2}\over{{(4\pi)}^3 R^4}}$$ with $$\begin{align}P_r && \text{received signal power}\\P_t && \text{transmitted signal power}\\\lambda && \text{wavelength}\\G_t,\,G_r && \text{directional gain of the transmit, receive antennas}\\\sigma&&\text{radar cross section, "effective reflection area"}\\R && \text{the distance between you and the radar target}\end{align}$$ Let's plug in a few numbers. First of all, let's assume your radar spacecraft has enough power and sends 1 MW. It's also got an excellent receiver and lots of signal processing, so that it can detect a reflected signal even far below the thermal noise level at 20 °C. Let's say it can work with −180 dBm of power – that's $10^{-21}$ W. Pretty much nothing.
(In fact, we're getting close to action quantization here.) Then we come to the following reasoning for our maximum distance $R$: $$\begin{align}10^{-21} \text{ W}&= \frac{10^{6} \text{ W} G_t G_r \lambda^2 \sigma}{{(4\pi)}^3 R^4}\\10^{-27} &= \frac{{ G_t G_r\lambda^2 \sigma}}{{(4\pi)}^3 R^4}\end{align}$$ Let's furthermore assume your spacecraft has something slightly smaller than the Arecibo Observatory (72 dBi) as antenna – something with a gain of 60 dBi – and let's also assume you use it for both transmitting and receiving, $G_t=G_r=G$: $$\begin{align}10^{-27} &= \frac{{ G_t G_r \lambda^2\sigma}}{{(4\pi)}^3 R^4}\\&= \frac{{ G^2 \sigma \lambda^2}}{{(4\pi)}^3 R^4}\\&= \frac{{ 10^{12} \sigma \lambda^2}}{{(4\pi)}^3 R^4}\\10^{-39}&= \frac{{ \sigma \lambda^2}}{{(4\pi)}^3 R^4}\\\end{align}$$ The question remains: what's a good estimate for the radar cross section of your target? So we need to pick a target. I arbitrarily chose the Imperial Death Star, which is nearly spherical, so we can analytically determine its RCS from its radius $r$, assuming it has a nice, flat, metal surface freshly polished for the visit of the Emperor (the first Death Star had $r=70\text{ km}$): $$\begin{align}\sigma &= \pi r^2\\&=\pi {(7\cdot 10^4)}^2\text m^2\\&\approx 3\cdot 50 \cdot 10^5 \text m^2\\&= 1.5\cdot 10^7 \text m^2\text{ .}\end{align}$$ Back to our maximum distance: $$\begin{align}10^{-39}&= \frac{{ 1.5\cdot 10^7 \text m^2 \lambda^2}}{{(4\pi)}^3 R^4}\\6.67\cdot10^{-47}&= \frac{{m^2 \lambda^2}}{{(4\pi)}^3 R^4}\\\end{align}$$ Let's assume we're using 1 GHz as frequency, so we have a wavelength of $$\begin{align}\lambda &= \frac cf\\&=\frac{3\cdot 10^8 \frac{\text m}{\text s}}{10^9\frac1{\text s}}\\&=3\cdot 10^{-1}\text{ m .}\end{align}$$ Why not a lower frequency, you ask? Simply because the size of an antenna with 60 dBi gain scales linearly with the wavelength.
We need to get that antenna to space, so we can't have it being arbitrarily large (and, as you've noticed, I'm overly concerned with realism). It follows that $$\begin{align}6.67\cdot10^{-47}&= \frac{{m^2 \lambda^2}}{{(4\pi)}^3 R^4}\\&= \frac{{9\cdot 10^{-2} m^4}}{{(4\pi)}^3 R^4}\\0.74\cdot10^{-46}\text{ m}^{-4}&= \frac{1}{{(4\pi)}^3 R^4}\\{(4\pi)}^3 \cdot 0.74\cdot10^{-46}\text{ m}^{-4}&= \frac{1}{ R^4}\\R^4 &= \frac{1}{{(4\pi)}^3 \cdot 0.74\cdot10^{-46}}\text{ m}^{4}\\&\approx \frac{1}{2000 \cdot 0.74\cdot10^{-46}}\text{ m}^{4}\\&\approx \frac{1}{2 \cdot 0.74\cdot10^{-43}}\text{ m}^{4}\\&\approx \frac{1}{1.5\cdot 10^{-43}}\text{ m}^{4}\\&= \frac{1}{1.5}10^{43}\text{ m}^{4}\\&= \frac{2}{3}10^{43}\text{ m}^{4}\\R&=\sqrt[4]{\frac{2}{3}10^{43}}\text{ m}\\&=\sqrt[4]{\frac{2}{3}10^{3}}\cdot\sqrt[4]{10^{40}}\text{ m}\\&=\sqrt[4]{\frac{2}{3}10^{3}}\cdot 10^{10}\text{ m}\\&\approx 5\cdot 10^{10}\text{ m}\\&\approx 0.334 \text{ AU .}\end{align}$$ Since the formula shows that the radius of the target contributes to the maximum range only with its square root, to get a maximum distance of 5 AU we'd need to increase the radius by a factor of $\left(\frac5{0.334}\right)^2\approx 15^2=225$, i.e. that body would need to have a diameter of at least 31,500 km – about one fourth of the diameter of Jupiter!
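The whole chain above can be collapsed into a few lines of Python. This sketch plugs the same inputs straight into the radar equation; carrying the arithmetic at full precision gives a somewhat shorter range than the rounded hand derivation, but the same order of magnitude and the same conclusion:

```python
import math

def dbm_to_watts(dbm):
    """Convert a power level in dBm to watts (0 dBm = 1 mW)."""
    return 10 ** (dbm / 10) / 1000

def max_radar_range(p_t, g, wavelength, sigma, p_min):
    """Solve the radar equation P_r = P_t G^2 lambda^2 sigma / ((4 pi)^3 R^4)
    for the largest R at which the echo still reaches P_min (monostatic
    case, G_t = G_r = G)."""
    return (p_t * g**2 * wavelength**2 * sigma
            / ((4 * math.pi) ** 3 * p_min)) ** 0.25

# Inputs from the text: 1 MW transmitter, 60 dBi antenna (G = 10^6),
# 1 GHz (lambda = 0.3 m), the Death Star cross section, -180 dBm sensitivity.
p_min = dbm_to_watts(-180)                      # 1e-21 W
R = max_radar_range(1e6, 1e6, 0.3, 1.5e7, p_min)
print(f"{R:.2e} m = {R / 1.496e11:.3f} AU")     # about 2.9e10 m, i.e. a fraction of an AU

# R scales as the fourth root of sigma, so 16x the cross section
# (4x the target radius) only doubles the detection range:
assert abs(max_radar_range(1e6, 1e6, 0.3, 16 * 1.5e7, p_min) / R - 2) < 1e-9
```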
I know that if $W$ and $W'$ are two independent Brownian motions, then $dW_t\,dW_t' = 0$. How can I prove/demonstrate this theorem? Additionally, how can we prove that if $W$ and $W'$ are dependent, then $dW_t\,dW_t' = \rho\,dt$?

The first part is quite intuitive: independence implies that the covariance is zero, and since the correlation is just the covariance divided by the product of the standard deviations, it is zero too. $$\text{Cov}(W_t,W_t^\prime)=\mathbb E [W_tW_t^\prime]-\mathbb E [W_t]\mathbb E[W_t^\prime]$$ By the law of iterated expectations, $$\mathbb E [W_tW_t^\prime]=\mathbb E[\mathbb E[W_tW_t^\prime|W_t^\prime]]=\mathbb E[W_t^\prime\underbrace{\mathbb E[W_t|W_t^\prime]}_{=\,\mathbb E[W_t]\text{ since }W_t\perp W_t^\prime}]=\mathbb E[W_t^\prime\,\mathbb E[W_t]]=\mathbb E[W_t]\mathbb E[W_t^\prime]$$ $$\Rightarrow \text{Cov}(W_t,W_t^\prime)=\mathbb E [W_tW_t^\prime]-\mathbb E [W_t]\mathbb E[W_t^\prime]=\mathbb E [W_t]\mathbb E[W_t^\prime]-\mathbb E [W_t]\mathbb E[W_t^\prime]=0$$ $$\text{Corr}(W_t,W_t^\prime)=\frac{\text{Cov}(W_t,W_t^\prime)}{\sqrt{\text{Var}[W_t]}\sqrt{\text{Var}[W_t^\prime]}}=\frac{0}{t}=0$$ and the quadratic covariation vanishes as well: $$\Rightarrow d\langle W_t,W_t^\prime\rangle=0\quad\text{by independence.}$$ When $W$ and $W^\prime$ are dependent, one can rewrite one Brownian motion as a linear combination of the other and a third Brownian motion under the same filtration that is orthogonal to it, i.e. $$dW_t^\prime=\rho\, dW_t+\sqrt{1-\rho^2}\,dW_t^\perp, \quad W_t\perp W_t^\perp,$$ then compute $d\langle W_t,W_t^\prime\rangle$.
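The decomposition in the answer lends itself to a Monte Carlo check: build $W'$ from $W$ and an independent Brownian motion via $dW' = \rho\,dW + \sqrt{1-\rho^2}\,dW^\perp$, then sum the products of increments over $[0,t]$; the sum should approximate $\rho t$. A sketch:

```python
import math
import random

# Monte Carlo check of d<W, W'> = rho dt
random.seed(0)
rho, t, n = 0.7, 1.0, 200_000
dt = t / n
sq = math.sqrt(dt)                 # std of each Brownian increment

cross = 0.0
for _ in range(n):
    dw = random.gauss(0.0, sq)                          # increment of W
    dw_perp = random.gauss(0.0, sq)                     # independent increment
    dw_prime = rho * dw + math.sqrt(1 - rho**2) * dw_perp
    cross += dw * dw_prime                              # approximates <W, W'>_t

print(cross)          # close to rho * t = 0.7
assert abs(cross - rho * t) < 0.05
```

Setting `rho = 0.0` reproduces the independent case, where the sum concentrates around zero.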
The cumulative distribution function describes the probability that a random variable falls at or below a given value. It can be obtained by integrating the probability density function, and the probability of any interval can be obtained by integrating the density over that interval. A cumulative distribution function is always non-decreasing.

Word Problems

Problem 1: The life expectancy of a certain bacterium has the density function $p(x)$ = $\frac{1}{x^3}$ if $x\ \geq\ 1$, $p(x)$ = $0$ if $x\ <\ 1$. Find the probability of the bacterium living from $2$ to $10$ days.

Solution: To get the probability, the density function has to be integrated from $2$ to $10$: $\int_{2}^{10}p(x)\ dx$ = $\int_{2}^{10}$ $\frac{1}{x^3}$ $dx$ = -$\frac{1}{2}$ $[x^{-2}]_{2}^{10}$ = $0.12$

Problem 2: If the probability density function of a random variable $X$ is given to be $f(x)$ = $\frac{1}{2x^2}$ for $0\ <\ x\ <\ 2$, find its cumulative distribution function.

Solution: The cumulative distribution function is found by integrating the probability density function: $F(x)$ = $\int f(x)\ dx$ = -$\frac{1}{2}$ $x^{-1}$ = $\frac{-1}{2x}$ Hence the cumulative distribution function is $\frac{-1}{2x}$, up to a constant of integration, for $0\ <\ x\ <\ 2$.

Problem 3: The lifetime of a chemical has the density function $p(x)$ = $\frac{1}{3x^5}$ if $x\ \geq\ 1$, $p(x)$ = $0$ if $x\ <\ 1$. Find the probability that the chemical has a life from $0$ to $5$ days.

Solution: The probability is obtained by integrating the density over the given interval: $P$ = $\int_{0}^{5}p(x)dx$ = $\int_{0}^{1}0\ dx + \int_{1}^{5}$ $\frac{1}{3x^{5}}$ $dx$ = $[\frac{-1}{12}$ $x^{-4}]_{1}^{5}$ = $0.0832$
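The definite integrals in Problems 1 and 3 can be double-checked with exact rational arithmetic, since both antiderivatives are simple power functions. A sketch using Python's `fractions` module:

```python
from fractions import Fraction

# Problem 1: P(2 <= X <= 10) = int_2^10 x^-3 dx = [-x^-2 / 2] from 2 to 10
p1 = -Fraction(1, 2) * (Fraction(1, 10**2) - Fraction(1, 2**2))
print(float(p1))      # 0.12

# Problem 3: P(0 <= X <= 5) = int_1^5 1/(3 x^5) dx = [-x^-4 / 12] from 1 to 5
# (the density is zero below x = 1, so that part contributes nothing)
p3 = -Fraction(1, 12) * (Fraction(1, 5**4) - 1)
print(float(p3))      # 0.0832
```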