Introduction: In an excellent paper published less than two years ago, Timothy Lillicrap, a theoretical neuroscientist at DeepMind, found a simple yet reasonable solution to the weight transport problem. Essentially, Timothy and his co-authors showed that it’s possible to do backpropagation with random feedback weights and still obtain very competitive results on various benchmarks [2]. This is significant because it marks an important step towards biologically plausible deep learning.

The weight transport problem: While backpropagation is a very effective approach for training deep neural networks, at present it’s not at all clear whether the brain might actually use this method for learning. In fact, backprop has three biologically implausible requirements [1]: (i) feedback weights must be the same as feedforward weights; (ii) forward and backward passes require different computations; (iii) error gradients must be stored separately from activations. A biologically plausible solution to the second and third problems is to use an error propagation network with the same topology as the feedforward network but used only for backpropagation of error signals. However, there is no known biological mechanism for this error network to know the weights of the feedforward network. This makes the first requirement, weight symmetry, a serious obstacle. It is also known as the weight transport problem [3].

Random synaptic feedback: The solution proposed by Lillicrap et al. is based on two observations. First, any fixed random matrix $B$ may serve as a substitute for $W^\top$ in backpropagation, provided that on average we have: \begin{equation} e^\top W B e > 0 \end{equation} where $e$ is the error in the network’s output. Geometrically, this is equivalent to requiring that the feedback signal $Be$ and the true gradient signal $W^\top e$ are within $90^\circ$ of each other. Second, over time we get better alignment between $W^\top$ and $B$ due to the modified update rules, which means that this condition becomes easier to satisfy with more iterations.

A simple example: Let’s consider a simple three-layer linear neural network, with hidden layer $h = W_0 x$ and output $y = W h$, that is intended to approximate a linear mapping $y^* = T x$. With error $e = y^* - y$, the loss is given by: \begin{equation} \mathcal{L} = \frac{1}{2} e^\top e \end{equation} From this we may derive the following backpropagation update equations: \begin{equation} \Delta W \propto -\frac{\partial \mathcal{L}}{\partial W} = -\frac{\partial \mathcal{L}}{\partial e} \frac{\partial e}{\partial y} \frac{\partial y}{\partial W} = -e \cdot (-1) \cdot h = e h^\top \end{equation} \begin{equation} \Delta W_0 \propto -\frac{\partial \mathcal{L}}{\partial W_0} = -\frac{\partial \mathcal{L}}{\partial e} \frac{\partial e}{\partial y} \frac{\partial y}{\partial h} \frac{\partial h}{\partial W_0} = -e \cdot (-1) \cdot W \cdot x = W^\top e x^\top \end{equation} Now the random synaptic feedback innovation is essentially to replace the second update with: \begin{equation} \Delta W_0 \propto B e x^\top \end{equation} where $B$ is a fixed random matrix. As a result, we no longer need explicit knowledge of the original weights $W$ in our update equations. I actually implemented this method for a three-layer sigmoid (i.e. nonlinear) neural network and obtained 89.5% accuracy on the MNIST dataset after 10 iterations, a result that is competitive with backpropagation.

Discussion: In spite of its remarkable simplicity, Timothy Lillicrap’s solution to the weight transport problem is very effective, and I think it deserves further investigation.
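To make that experiment concrete, here is a minimal sketch of the update rule in Python/NumPy. This is my own toy reconstruction on synthetic data, not the MNIST code: the dimensions, learning rate, and iteration count are made up, and only the use of the fixed random matrix B in place of W-transpose follows the method described above.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data standing in for MNIST (sizes are arbitrary)
n_in, n_hid, n_out, n_samples = 20, 30, 5, 256
X = rng.standard_normal((n_samples, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, n_samples)]   # one-hot targets

W0 = 0.1 * rng.standard_normal((n_in, n_hid))   # input -> hidden
W  = 0.1 * rng.standard_normal((n_hid, n_out))  # hidden -> output
B  = rng.standard_normal((n_out, n_hid))        # fixed random feedback matrix

lr = 0.5
for _ in range(200):
    h = sigmoid(X @ W0)            # forward pass
    y = sigmoid(h @ W)
    e = T - y                      # output error
    delta_out = e * y * (1 - y)
    # Backprop would use delta_out @ W.T here; random synaptic feedback
    # sends the error through the fixed matrix B instead.
    delta_hid = (delta_out @ B) * h * (1 - h)
    W  += lr * (h.T @ delta_out) / n_samples
    W0 += lr * (X.T @ delta_hid) / n_samples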
In the near future I plan to implement random synaptic feedback for much larger sigmoid and ReLU networks, as well as recurrent neural networks, in order to build upon the work of [1]. Considering all the approaches to biologically plausible deep learning attempted so far, I believe this work represents a very important step forward.

References:
[1] Liao, Q., Leibo, J. Z., and Poggio, T. 2016. How important is weight symmetry in backpropagation? AAAI.
[2] Lillicrap, T. P., Cownden, D., Tweed, D. B., and Akerman, C. J. 2016. Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications.
[3] Grossberg, S. 1987. Competitive learning: From interactive activation to adaptive resonance. Cognitive Science 11(1):23–63.
Variational inequalities provide a general mathematical framework for many problems arising in optimization. For example, constrained optimization problems like LP and NLP are special cases of VI, and systems of equations and complementarity problems can be cast as VI. Thus VI problems have many applications, including those in transportation networks, signal processing, regression analysis, and game theory. In this section we present a mathematical formulation of VI, give an example of how VI can be modeled with GAMS EMP, and introduce the EMP annotations specific to VI. Note that to fully and properly understand some of this section, the introduction to MCP provides useful background. For a given continuous function \(F: \mathbb{R}^n \rightarrow \mathbb{R}^n\) and a fixed closed convex set \(K \subset \mathbb{R}^n\) the variational inequality problem \(VI(F,K)\) is to find a point \(x^* \in K\) such that: \begin{equation} \langle F(x^*), (x-x^*) \rangle \geq 0, \quad \forall x \in K, \tag {7} \end{equation} where \(\langle \cdot, \cdot \rangle \) denotes the usual inner product. Observe that the VI generalizes many problem classes: If \(F(x)=0\) and \(K \equiv \mathbb{R}^n\), the VI is a system of nonlinear equations. If \(F(x) = \bigtriangledown f(x)\), the VI is a convex optimization problem. If \(F(x) = \bigtriangledown f(x)\) and \(K = \{x \, | \, Ax = b, Hx \leq h \}\), the VI is an NLP. If \(F(x) = \bigtriangledown f(x) = p \) and \(K = \{x \, | \, Ax = b, Hx \leq h \}\), the VI is an LP. If the feasible region is a box \(B = \{x \in \mathbb{R}^n | \, l_i \leq x_i \leq u_i, \, \textrm{for} \, i = 1, \dots , n \}\), with \( l_i \leq u_i\), \(l_i \in \mathbb{R} \cup \{- \infty \} \) and \(u_i \in \mathbb{R} \cup \{\infty\} \), the VI is an MCP, where \( x^* \in B\) is a solution of the respective MCP if for each \(i = 1, \dots , n\) one of the following conditions holds: \begin{equation} \begin{array}{lll} F_i(x^*) = 0 & \textrm{and} & l_i \leq x^{*}_i \leq u_i, \\ F_i(x^*) > 0 & \textrm{and} & x^{*}_i = l_i, \\ F_i(x^*) < 0 & \textrm{and} & x^{*}_i = u_i. \\ \end{array} \end{equation} Note that the set \(K\) is frequently defined in the following way: \begin{equation} K = \{x \, | \, x \geq 0, \, h(x) \geq 0\}. \tag {8} \end{equation} Further, note that the \(VI(F,K)\) represents a wider range of problems than classical optimization whenever \(F(x)\) is not the gradient \(\bigtriangledown f(x)\) of any objective function \(f\) (or equivalently, whenever the Jacobian of \(F\) is not symmetric). For example, problems that can be cast as VI include (generalized) Nash games and Nash equilibrium problems, systems of equations, complementarity problems, and fixed-point problems. Consider the following simple three dimensional linear example (adapted from Yashtini & Malek (2007) [261]). Let \begin{equation} F(x) = \begin{bmatrix} 22x_1 - 2x_2 + 6x_3 - 4\\ 2x_1 + 2x_2 \\ 6x_1 + 3x_3 \end{bmatrix}, \, K=\{ x \in \mathbb{R}^3 \, | \, x_1 - x_2 \geq 1,\, -3x_1 - x_3 \geq -4, \, 2x_1 + 2x_2 + x_3 = 0, \, l \leq x \leq u \}, \tag {9} \end{equation} where \(l=(-6,-6,-6)^T\), \(u=(6,6,6)^T\). N.B.: \(F\) is not the gradient of any function \(\mathbb{R}^3 \rightarrow \mathbb{R}\), since its Jacobian is not symmetric. This \(VI(F,K)\) has a unique solution: \(x=(2/3, -1/3, -2/3)\). The problem can be implemented in GAMS with EMP as follows:

Set i /1*3/;
Variable x(i);
x.lo(i) = -6;
x.up(i) = 6;
Equations F(i), h1, h2, h3;
F(i)..
  (22*x('1') - 2*x('2') + 6*x('3') - 4)$sameas(i,'1')
  + (2*x('1') + 2*x('2'))$sameas(i,'2')
  + (6*x('1') + 3*x('3'))$sameas(i,'3') =n= 0;
h1.. x('1') - x('2') =g= 1;
h2.. -3*x('1') - x('3') =g= -4;
h3.. 2*x('1') + 2*x('2') + x('3') =e= 0;
Model linVI / F, h1, h2, h3 /;
File annotations /'%emp.info%'/;
put annotations;
putclose 'vi F x h1 h2 h3';
solve linVI using EMP;

Observe that the function \(F\) and the constraints \(h\) are formulated using standard GAMS syntax. \(F\) is implemented as an equation of type =n=, which does not imply or enforce any relationship between the left-hand side and the right-hand side. Instead, this relationship is implied by the position of the matching variables (given in the EMP info file) relative to their bounds. The annotations in the EMP info file define the structure of the VI: which functions are matched to which variables, and which constraints serve to define the set \(K\). The EMP keyword vi indicates that the model is a VI, that the VI function F is matched to the variable x, and that the constraints h1, h2 and h3 define the set \(K\). Alternatively, the EMP annotations could be written as follows:

putclose 'vi F x';

Here the equations after the equation-variable pair are omitted. This is acceptable, since by default any equations that are part of the model but are not matched with a variable are automatically used to define the set \(K\). Since VI problems have no objective, the short form of the solve statement is used. The solver JAMS will reformulate the VI as an MCP and pass this on to an MCP subsolver. The EMP Summary produced by JAMS will contain the following line:

--- EMP Summary
...
VI Functions = 3
...

This output reflects the fact that there were three VI functions in the model above, one for each member of the set i. Note that there are several VI models in the GAMS EMP Library. For example, the models [SIMPLEVI] and [VI_MCP] demonstrate how some models can be specified using either MCP or VI syntax. A simple nonlinear VI is given in model [SIMPLEVI2]. As the transportation model is so well known, model [TRANSVI] demonstrates how it can be cast as a VI. The general syntax of the EMP annotations used to specify variational inequalities is as follows:

VI [{var|*}] { [-] equ var} [{[-] equ}]

The EMP keyword VI indicates that this is a variational inequality specification. The core of the VI specification is the (list of) equation-variable pair(s); the other parts are optional. A pair matches the equation equ with the variable var. This indicates that equ defines part of the VI function \(F\), and that these rows of \(F\) are perpendicular to columns taken from var. Multiple equation-variable pairs are allowed. The optional variables before the pairs are called preceding variables. These are variables that appear in (and are often defined by) the constraints of the model, but they are not matched explicitly via the VI function \(F\). Instead, they are automatically matched with the zero function. See model [ZEROFUNC] for an example and a more detailed discussion. The optional equations after the equation-variable pairs are called trailing equations. They define the set \(K\) and may be omitted from the VI specification. By default, any equations that are included in the model but are not matched with a variable are automatically used to define the set \(K\). Even though both preceding variables and trailing equations may be omitted from the VI specification, we recommend listing them explicitly, since this clarifies intentions and eliminates ambiguity.
The "-" sign in the syntax above is used to flip (i.e. to reorient or negate) the marked equation, e.g. so that x**1.5 =L= y becomes y =G= x**1.5. Flipped equations in EMP behave in the same way as flipped equations in MCP. Note More than one VI specification may appear in a model. Often, it makes no difference whether multiple equ-var pairs are part of the same or separate VI specifications, but this is not the case in general. For an example, see model [SIMPLEVI4].
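As a quick numerical sanity check (mine, not part of the GAMS documentation): at the claimed solution, \(F(x^*)\) must equal a nonnegative combination of the gradients of the active inequality constraints plus an arbitrary multiple of the equality constraint gradient. A short Python sketch, with the multipliers worked out by hand:

import numpy as np

x = np.array([2/3, -1/3, -2/3])                 # claimed unique solution
F = np.array([22*x[0] - 2*x[1] + 6*x[2] - 4,    # F as in equation (9)
              2*x[0] + 2*x[1],
              6*x[0] + 3*x[2]])

# h1 (x1 - x2 >= 1) is active at x*; h2 is inactive, so its multiplier is 0;
# h3 is the equality constraint, so its multiplier mu is sign-free.
grad_h1 = np.array([1.0, -1.0, 0.0])
grad_h3 = np.array([2.0, 2.0, 1.0])
lam, mu = 10/3, 2.0                             # multipliers found by hand

print(np.allclose(F, lam*grad_h1 + mu*grad_h3), lam >= 0)   # True True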
Advances in Differential Equations, Volume 15, Number 7/8 (2010), 601-648. An antimaximum principle for a degenerate parabolic problem. Abstract: We obtain an antimaximum principle for the following quasilinear parabolic problem: \begin{equation*} \tag*{\rm (P)} \left\{ \begin{alignedat}{2} \frac{\partial u}{\partial t} - \Delta_p u & = \lambda\, |u|^{p-2} u + f(x,t), & & \quad (x,t)\in \Omega\times (0,T); \\ u(x,t)& = 0, & & \quad (x,t)\in \partial\Omega\times (0,T); \\ u(x,0)& = u_0(x), & & \quad x\in \Omega, \end{alignedat} \right. \end{equation*} which involves the $p$-Laplace operator $\Delta_p u\equiv \mathrm{div}(|\nabla u|^{p-2}\nabla u)$ (with Dirichlet boundary conditions, $1 < p < \infty$) and a spectral parameter $\lambda\in \mathbb{R}$ taking values near the first eigenvalue $\lambda_1$ of $-\Delta_p$. We show that {\it any\/} weak solution $u\colon \Omega\times [0,T)\to \mathbb{R}$ of problem (P) (suitably defined in a standard way) eventually becomes positive for all $x\in \Omega$ and all times $t\geq T_{+}$, provided, for instance, $f(x,t)\geq \underline{f}(x) >0$ for some function $\underline{f}\in L^\infty(\Omega)$, $u_0\in W_0^{1,p}(\Omega)\cap L^{\infty}(\Omega)$, and $\lambda_1 < \lambda < \lambda_1 + \delta$. Here, the ``key'' constants $\delta\equiv \delta(\underline{f}, u_0) >0$ and $T_{+}\equiv T_{+}(f,u_0)\in (0,T)$ depend on $f$ (or on $\underline{f}$ only) and $u_0$. In particular, a solution $u$ eventually becomes positive even if the initial data $u_0$ are ``arbitrarily'' negative, as long as they are smooth enough. Article information: Source: Adv. Differential Equations, Volume 15, Number 7/8 (2010), 601-648. Dates: First available in Project Euclid: 18 December 2012. Permanent link to this document: https://projecteuclid.org/euclid.ade/1355854621 Mathematical Reviews number (MathSciNet): MR2650583 Zentralblatt MATH identifier: 1195.35237 Subjects: Primary: 35P30 (nonlinear eigenvalue problems, nonlinear spectral theory); 35J20 (variational methods for second-order elliptic equations); 47J10 (nonlinear spectral theory, nonlinear eigenvalue problems) [see also 49R05]; 47J30 (variational methods) [see also 58Exx]. Citation: Padial, Juan Francisco; Takáč, Peter; Tello, Lourdes. An antimaximum principle for a degenerate parabolic problem. Adv. Differential Equations 15 (2010), no. 7/8, 601–648. https://projecteuclid.org/euclid.ade/1355854621
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections.

Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...

Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...

Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ...

The invariant formula for the exterior derivative: why would someone come up with something like that? I mean it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$. Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ equals the signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$: an infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$. For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$.
Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.

Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...

@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.)

@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle homs $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a level-$0$ exterior derivative in a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$, just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.

Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?

Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!

So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up.

If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?

Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.

I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad.
E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
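(An aside, not from the chat: the divisor claim is easy to machine-check. A sketch in Python using SymPy's permutation groups, with one witness subgroup per divisor; the generators are my own choices, mirroring the ones discussed above.)

from sympy.combinatorics import Permutation, PermutationGroup

# One witness subgroup of S4 (acting on {0,1,2,3}) for each divisor d of 24
witnesses = {
    1:  [Permutation([0, 1, 2, 3])],                       # identity only
    2:  [Permutation([[0, 1]], size=4)],                   # a 2-cycle
    3:  [Permutation([[0, 1, 2]], size=4)],                # a 3-cycle
    4:  [Permutation([[0, 1, 2, 3]])],                     # a 4-cycle
    6:  [Permutation([[0, 1]], size=4),
         Permutation([[0, 1, 2]], size=4)],                # a copy of S3
    8:  [Permutation([[0, 1, 2, 3]]),
         Permutation([[0, 2]], size=4)],                   # dihedral 2-Sylow
    12: [Permutation([[0, 1, 2]], size=4),
         Permutation([[0, 1], [2, 3]])],                   # A4
    24: [Permutation([[0, 1, 2, 3]]),
         Permutation([[0, 1]], size=4)],                   # S4 itself
}
for d, gens in witnesses.items():
    assert PermutationGroup(gens).order() == d
print("every divisor of 24 occurs as a subgroup order of S4")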
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations which have no further tendency to change with time.[1] Usually, this state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as dynamic equilibrium.[2][3]

The concept of chemical equilibrium was developed after Berthollet (1803) found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions are equal. In the following chemical equation with arrows pointing both ways to indicate equilibrium, A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products: \begin{equation*} \alpha A + \beta B \rightleftharpoons \sigma S + \tau T \end{equation*} The equilibrium position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely, the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.

Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action: \begin{equation*} \text{forward rate} = k_+ \{A\}^\alpha \{B\}^\beta, \qquad \text{backward rate} = k_- \{S\}^\sigma \{T\}^\tau, \end{equation*} where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal: \begin{equation*} k_+ \{A\}^\alpha \{B\}^\beta = k_- \{S\}^\sigma \{T\}^\tau, \end{equation*} the ratio of the rate constants is also a constant, now known as an equilibrium constant: \begin{equation*} K = \frac{k_+}{k_-} = \frac{\{S\}^\sigma \{T\}^\tau}{\{A\}^\alpha \{B\}^\beta}. \end{equation*} By convention the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general, because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or the reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs. Despite the failure of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions, thereby increasing the speed at which equilibrium is reached.[2][4]

Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions, a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid, leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
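To see the "rates equal, concentrations constant" picture concretely, here is a toy kinetic sketch in Python (my own illustration, assuming a single first-order step A ⇌ S with made-up rate constants):

# Toy illustration of dynamic equilibrium for A <=> S:
# d[A]/dt = -k_f*[A] + k_b*[S]
k_f, k_b = 2.0, 0.5      # made-up forward and backward rate constants
A, S = 1.0, 0.0          # initial concentrations, mol/L
dt = 1e-3
for _ in range(20000):   # crude Euler integration out to t = 20 s
    net = k_f*A - k_b*S  # net forward rate; zero at equilibrium
    A, S = A - net*dt, S + net*dt
print(S/A, k_f/k_b)      # both ~ 4.0: at equilibrium [S]/[A] = K = k_f/k_b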
Le Chatelier's principle (1884) gives an idea of the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same). If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction: \begin{equation*} K = \frac{\{CH_3CO_2^-\}\{H_3O^+\}}{\{CH_3CO_2H\}}. \end{equation*} If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant. A quantitative version is given by the reaction quotient.

J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to the reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes, signalling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture.[1] This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation \begin{equation*} \Delta G^\ominus = -RT \ln K_{eq}, \end{equation*} where R is the universal gas constant and T the temperature.

When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc, \begin{equation*} K_c = \frac{[S]^\sigma [T]^\tau}{[A]^\alpha [B]^\beta}, \end{equation*} is independent of the analytical (total) concentration of the reactants. As the reaction mixture approaches equilibrium, the reaction quotient \begin{equation*} Q_r = \frac{\prod (a_j)^{\nu_j}}{\prod(a_i)^{\nu_i}} \end{equation*} tends toward the equilibrium constant. If \(Q_r < K_{eq}\) and \(\left(\frac {dG}{d\xi}\right)_{T,p} <0\), the reaction will shift to the right (i.e. in the forward direction, and thus more products will form). If \(Q_r > K_{eq}\) and \(\left(\frac {dG}{d\xi}\right)_{T,p} >0\), the reaction will shift to the left (i.e. in the reverse direction, and thus fewer products will form). Note that activities and equilibrium constants are dimensionless numbers. The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc, and an activity coefficient quotient, Γ. For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one.
The equilibrium constant expression is therefore usually written in terms of concentration quotients. A particular case is the self-ionization of water itself: \begin{equation*} H_2O \rightleftharpoons H^+ + OH^- \end{equation*} Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as \begin{equation*} K_w = [H^+][OH^-]. \end{equation*} It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature. The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion. Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:[10] \begin{equation*} 2CO \rightleftharpoons CO_2 + C, \end{equation*} for which the equation (without solid carbon) is written as: \begin{equation*} K_c = \frac{[CO_2]}{[CO]^2}. \end{equation*}

K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants. Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants. β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; lg β1 = pK2 and lg β2 = pK2 + pK1.[11] For multiple equilibrium systems, also see: theory of response reactions.

The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation \begin{equation*} \frac{d\ln K}{dT} = \frac{\Delta H^\ominus}{RT^2}. \end{equation*} Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is \begin{equation*} \frac{d\ln K}{d(1/T)} = -\frac{\Delta H^\ominus}{R}. \end{equation*} At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.

The effect of electric field on equilibrium has been studied by Manfred Eigen among others. In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association/dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.

When the only equilibrium is the formation of a 1:1 adduct, the composition of the mixture can be calculated in any number of ways. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid. There are three approaches to the general calculation of the composition of a mixture at equilibrium. In general, the calculations are rather complicated. For instance, in the case of a dibasic acid, H2A dissolved in water, the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A: \begin{equation*} T_A = [A] + [HA] + [H_2A], \qquad T_H = [H] + [HA] + 2[H_2A] - [OH], \end{equation*} with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified, there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]^2 and [OH] = Kw[H]^−1, so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants. General expressions applicable to all systems with two reagents, A and B, would be \begin{equation*} T_A = [A] + \sum_i p_i \beta_i [A]^{p_i}[B]^{q_i}, \qquad T_B = [B] + \sum_i q_i \beta_i [A]^{p_i}[B]^{q_i}. \end{equation*} It is easy to see how this can be extended to three or more reagents. The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.

The diagram alongside shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq):[14] it shows the species concentrations for a 5×10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium. The diagram illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are the aluminium hydroxides Al(OH)^2+, Al(OH)_2^+ and Al_13(OH)_32^7+, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Chatelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)4−, is formed. Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.

At equilibrium, G is at a minimum: \begin{equation*} dG = \sum_{j=1}^m \mu_j \, dN_j = 0. \end{equation*} For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints: \begin{equation*} \sum_{j=1}^m a_{ij} N_j = b_i^0, \end{equation*} where \(a_{ij}\) is the number of atoms of element i in molecule j and \(b_i^0\) is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations.

This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers, also known as undetermined multipliers (though other methods may be used). Define: \begin{equation*} \mathcal{G} = G + \sum_{i=1}^k \lambda_i \left( b_i^0 - \sum_{j=1}^m a_{ij} N_j \right), \end{equation*} where the \(\lambda_i\) are the Lagrange multipliers, one for each element. This allows each of the \(N_j\) to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by \begin{equation*} \frac{\partial \mathcal{G}}{\partial N_j} = 0, \quad \text{i.e.} \quad \mu_j = \sum_{i=1}^k \lambda_i a_{ij}. \end{equation*} (For proof see Lagrange multipliers.) This is a set of (m+k) equations in (m+k) unknowns (the \(N_j\) and the \(\lambda_i\)) and may, therefore, be solved for the equilibrium concentrations \(N_j\) as long as the chemical potentials are known as functions of the concentrations at the given temperature and pressure. (See Thermodynamic databases for pure substances.)
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations.[12] In Unicode, a suitable symbol for the equilibrium arrow is registered as ⇌ (U+21CC).[15] It can be typed in Microsoft Windows as Alt + + 2, 1, C, C on the numeric keypad, and in most Linux distributions with Ctrl + ⇧ Shift + u, 2, 1, C, C, ↵ Enter.
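Circling back to the ICE-table remark above, here is a minimal worked example in Python (mine; the Ka value is the commonly quoted one for acetic acid) that computes the pH of a weak monoprotic acid directly from the equilibrium-constant expression:

import math

# HA <=> H+ + A-, with Ka = x**2 / (C - x), where x = [H+] at equilibrium;
# rearranging gives the quadratic x**2 + Ka*x - Ka*C = 0.
Ka = 1.8e-5          # acid dissociation constant (acetic acid)
C = 0.10             # analytical concentration of HA, mol/L

x = (-Ka + math.sqrt(Ka**2 + 4*Ka*C)) / 2    # positive root
print(f"[H+] = {x:.3e} M, pH = {-math.log10(x):.2f}")   # pH about 2.88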
The Romanian Economic Journal (R.E.J. / Jurnalul Economic), ISSN (print) 1454-4296, ISSN (online) 2286-2056. Please note that any image you paste into "Full manuscript text" will be lost. We accept only three ways of inserting equations in papers, as described below:

1. You can insert equations in your paper as LaTeX formulas, surrounded by \( and \) like this \(\sum_{i=1}^{n} x_{i}^{2}\), or surrounded by $$ (two dollar signs) like this $$\int_{0}^{\pi} \sin x \, dx = 2$$

2. Or you can save your equations on your desktop as simple images, attach them in the Figures field and insert them in the Full manuscript text.

3. If you are proficient in computing, you can insert MathML code directly in the paper's text.

How can you get LaTeX formulas? If your text editor doesn't allow you to copy your equations as LaTeX formulas, you can build your equations as LaTeX expressions using free online services.
I want a collection of points $\{ x_1, \dots, x_m\}$ that samples the unit cube $[0,1]^n$ with $n \gg 1$ in high dimensions, so that averaging over these points approximates the integral over that space: $$\frac{1}{m} \sum f(x_i) \approx \int_{[0,1]^n} f(x) \, dx $$ If I broke each segment $[0,1]$ into $10$ points, I would have to calculate $10^n$ values of my function $f$ - way too many. Are there lattices I can use which become dense in $[0,1]^n$ as the mesh gets smaller, and whose points do not grow too quickly? If I knew more about lattices, I could make this more precise.
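(A note from me, not part of the question: one standard remedy is a low-discrepancy sequence rather than a product grid, since the number of points is then decoupled from the dimension, and for reasonably smooth $f$ the equal-weight average converges faster than plain Monte Carlo. A sketch with SciPy's quasi-Monte Carlo module, scipy.stats.qmc, available in SciPy >= 1.7; the test integrand is my choice.)

import numpy as np
from scipy.stats import qmc

n = 20                                  # dimension
f = lambda x: np.sum(x**2, axis=1)      # test integrand; exact integral is n/3

sampler = qmc.Sobol(d=n, scramble=True, seed=0)
pts = sampler.random_base2(m=12)        # 2**12 = 4096 points in [0,1]^n
print(f(pts).mean(), n / 3)             # QMC estimate vs exact value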
It’s 8/15/17, which means it’s time to celebrate! The three numbers making up the date today form a Pythagorean triple, a triple of positive integers $(a,b,c)$ with $a^2+b^2=c^2$. Indeed, $8^2+15^2=64+225=289=17^2$. Alternatively, by the Pythagorean theorem, a Pythagorean triple is any triple of positive integers which make up the sides of a right triangle. It’s exciting when all three sides are integers, since many common right triangles have lengths involving square roots: $(1,1,\sqrt{2})$, $(1,2,\sqrt{5})$, and $(1,\sqrt{3},2)$, to name a few. And these sides aren’t even rational, which the poor Pythagorean scholar Hippasus discovered by proving that $\sqrt{2}$ is irrational; according to some historical accounts, he was subsequently drowned by his colleagues for it. So the ancient Pythagoreans in fact only believed in right triangles having rational side lengths.

Of course, given one Pythagorean triple, like $(3,4,5)$, we can construct infinitely many by scaling the sides: $(6,8,10)$, $(9,12,15)$, etc. (In fact, 12/9/15 was the previous Pythagorean triple day, and 12/16/20 will be the next.) So to classify all Pythagorean triples, it suffices to find the reduced triples, those with no common factors greater than $1$. So the last reduced Pythagorean triple day was back in 2013 on 12/5/13, and the next one won’t be until 7/24/25!

Constructing Pythagorean triples

It’s well known that there are infinitely many reduced Pythagorean triples as well. One beautiful, famous proof of this derives a parameterization of all triples via a geometric construction: Consider the unit circle and the point $P=(-1,0)$ on the circle. Let $Q=(0,r/s)$ be a rational point on the $y$-axis inside the circle. Then the line $PQ$ intersects the circle at a second point $R$, and it turns out $R$ has rational coordinates as well. Some simple algebra with similar triangles (try it, if you haven’t seen it before!) gives us $$R=\left(\frac{r^2-s^2}{r^2+s^2},\frac{2rs}{r^2+s^2}\right).$$ But since $R$ lies on the unit circle, if $(x,y)$ are the coordinates of $R$ then $x^2+y^2=1$, and substituting and clearing denominators we have $$(r^2-s^2)^2+(2rs)^2=(r^2+s^2)^2$$ (which can also be checked with direct algebraic computation). It follows that $(r^2-s^2,2rs,r^2+s^2)$ is a Pythagorean triple for any integers $r$ and $s$, giving us infinitely many Pythagorean triples. And in fact, for $r$ and $s$ relatively prime of different parity, these triples are reduced.

Spherical considerations

Given that this is a global day to celebrate (assuming you use the standard world calendar), and the Earth is a sphere, I have to wonder whether Pythagorean triples actually exist if drawn on a perfect-sphere approximation of the Earth. First, we’d have to define what we even mean by that - are we measuring in meters? In feet? And what is a right triangle on the surface of a sphere anyway? The most common definition of a triangle on a sphere is one formed by great circles. A great circle is any circle of maximal diameter around a sphere, in other words, whose plane contains the center of the sphere. So, the equator, the prime meridian, and all longitude lines are great circles on the Earth, but latitude lines are not. A line segment on a sphere is a segment of a great circle, and a (spherical) triangle is a shape formed by three points connected by three (spherical) line segments. The angle between two great circles that intersect is the angle between their planes. (Thanks to Wikipedia for the excellent image below.)
Since our world now has size rather than being a flat plane, let’s set the radius of the Earth to be $1$ unit for simplicity. So we’re working with triangles $ABC$ with a right angle at $C$, and asking when they have rational lengths: Are there any Pythagorean triples on the unit sphere? Are there infinitely many? These questions suddenly aren’t very easy. If we scale our sphere down by $\pi/2$ we can get at least one, by taking the equator, the prime meridian, and the $90^\circ$ longitudinal line. This forms a right triangle with all three lengths equal (and all angles right!) and so we can simply scale the Earth to make it any rational lengths we want. But this still doesn’t answer anything about the unit sphere. There is some hope, however. In this paper by Hartshorne and Van Luijk, the authors show that there are infinitely many Pythagorean triples in the hyperbolic plane, using the Poincare Disk model and some nice hyperbolic trig formulas combined with some Eulerian number theory tricks. So Pythagorean triples are not the sole property of flat Euclidean space. In addition, there is a “spherical Pythagorean theorem”, which in our notation, since the radius of our sphere is $1$, says that $$\cos(c)=\cos(a)\cos(b)$$ where $a,b,c$ are the side lengths of the triangle and $c$ is opposite the right angle. And yet, this offers little insight into whether this equation has rational solutions. Trig functions are generally much harder to deal with than polynomials, especially when it comes to solving equations over the rationals. So, if you have any insights on this problem (or references to papers that have worked on it – I couldn’t find any in an initial search, but I am not very familiar with it), please let me know in the comments! And until then, happy reduced-Pythagorean-triple-in-flat-Euclidean-space day!
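P.S. A quick sketch (mine, in Python) that enumerates reduced triples straight from the parameterization above; with $r$ and $s$ coprime and of opposite parity, it finds both today's $(8,15,17)$ and the next reduced-triple day $(7,24,25)$:

from math import gcd

def primitive_triples(r_max):
    # (r^2 - s^2, 2rs, r^2 + s^2) with r > s >= 1, gcd(r, s) = 1, r - s odd
    for r in range(2, r_max + 1):
        for s in range(1, r):
            if gcd(r, s) == 1 and (r - s) % 2 == 1:
                yield (r*r - s*s, 2*r*s, r*r + s*s)

for triple in primitive_triples(4):
    print(sorted(triple))   # [3,4,5], [5,12,13], [8,15,17], [7,24,25]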
The wavelength dependence of the definition of free space path loss (FSPL) is an artifact of the way the receiver's antenna gain is defined in the same link budget calculation. It's referenced to an ideal isotropic antenna with a receive area of roughly one square wavelength, which for high frequency gets very small. If you do them together (transmit gain, path loss, receive gain) you'll see that the higher frequency wins because the gain of the transmit antenna goes up, and the combination of the FSPL and the receive antenna together remains the same. So the reason higher frequencies are used is because the total link budget is better. You can't just take the FSPL and ignore the other terms in the budget; it won't make sense because of the way things are defined. You can see real-world examples of link budgets in this answer (short one) and in this answer (longer one) and in this question. You may find this one interesting as well. From here: Link Budget. From this answer, which is from this answer: $$ P_{RX} = P_{TX} + G_{TX} - L_{FS} + G_{RX} $$ $P_{RX}$: received power by spacecraft $P_{TX}$: transmitted power by wristwatch $G_{TX}$: Gain of wristwatch's transmitting antenna (compared to isotropic) $L_{FS}$: Free space loss, what we usually call $1/r^2$ $G_{RX}$: Gain of spacecraft's receiving antenna (compared to isotropic) $$G \sim 20 \times \log_{10}\left( \frac{\pi d}{\lambda} \right)$$ $$L_{FS} = 20 \times \log_{10}\left( 4 \pi \frac{R}{\lambda} \right).$$ $$ P_{RX} - P_{TX} = G_{TX} - L_{FS} + G_{RX} $$ Change from dB to linear scale: $$ \frac{P_{RX}}{P_{TX}} = \frac{G_{TX}G_{RX}}{L_{FS}} = \frac{\pi^4 d_{RX}^2 d_{TX}^2}{\lambda^4}\frac{\lambda^2}{16 \pi^2 R^2} = \frac{\pi^2 d_{RX}^2 d_{TX}^2}{\lambda^2}\frac{1}{16 R^2} = \frac{\pi^2 d_{RX}^2 d_{TX}^2}{16 \lambda^2 R^2}$$ So the fraction of the transmitted power that is received depends on $\lambda^{-2}$; it improves as the frequency goes up and the wavelength goes down.
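To see the cancellation numerically, here is a small sketch in Python (mine; it assumes the ideal dish-gain approximation and FSPL formulas quoted above, and the function name and scenario numbers are made up):

import numpy as np

def received_power_dbm(p_tx_dbm, d_tx, d_rx, R, lam):
    # P_RX = P_TX + G_TX - L_FS + G_RX, all terms in dB; lengths in meters
    g_tx = 20 * np.log10(np.pi * d_tx / lam)   # transmit dish gain, dBi
    g_rx = 20 * np.log10(np.pi * d_rx / lam)   # receive dish gain, dBi
    l_fs = 20 * np.log10(4 * np.pi * R / lam)  # free-space path loss, dB
    return p_tx_dbm + g_tx - l_fs + g_rx

# 30 dBm transmitter, 1 m dishes at both ends, 1000 km range
for lam in (0.1, 0.01):                        # 3 GHz vs 30 GHz
    print(lam, received_power_dbm(30, 1.0, 1.0, 1e6, lam))
# The 10x shorter wavelength arrives ~20 dB stronger: the lambda**-2 effect.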
Let us start with an example relying on a simple time-warp, based on the Hann window (code below). It consists of modifying the time index $t\in [-1,1]$, here with the function $t\mapsto 2 (t+1)^\alpha/2^\alpha-1$. There are many other ways. Asymmetrical windows, or non-symmetric windows, are a topic of interest, albeit somewhat marginally used. I have not seen a lot of literature on this topic. It however has been discussed here at SE.DSP in: They can be useful when the data is itself non-symmetric or skewed, when the processing requires symmetry imbalance, or when you have no other choice. Discrete finite-support dyadic wavelets fall into the third category. Real-time analysis, or causal imbalance, is common for the second case. Exponentially weighted windows are an example, see What is the name of this digital low pass filter? or What is the name of these simple filter algorithms?. In the first case, you can find examples in analytical chemistry (chromatography), with trailing peaks or band broadening, in audio processing or spectral analysis. Options to create asymmetric window functions are described as follows:

0 - Zero: the exponential window, used in the "exponentially-weighted moving average filter" (EWMA, see Is there a technical term for this simple method of smoothing out a signal?).

1 - First, for "real-time" applications that can be buffered, or offline needs, you can first perform an extension trick. It is common to preserve a little data before the time sample when you actually need the data. This happens with triggered frame acquisition: you start acquiring "useful data" after a threshold is crossed, but you have a custom buffer for the data before. Hence, you can extend the data frame buffer "to the left", with real data, or by symmetry, and then you can use longer (and more classical) windows (symmetric or not), so that they take off when your signal is not interesting, and "start" where you want to analyze.

2 - Second: for many models, you can combine a "left-sided window" with a certain up-rate and a "right-sided window" with another down-rate. This can seem ad hoc, but is in use in chemistry, for the shape of skewed peaks (two halves of a Gaussian, for instance). The junction should be consistent. In What is the meaning of half window functions?, the window is non-continuous. In many applications, one often tries to have regular functions (continuous, differentiable). For discrete windows, one often chooses that $w_l(0-)\approx w_r(0+)$, which is easily done on isotonic (increasing/decreasing) windows with $[0,1]$ normalization. Higher orders of regularization could be enforced as well.

3 - Third: you can build a closed-form skewed function. A lot of skewed probability distribution functions (log-normal, beta, Weibull) can be used as windows.

4 - Fourth: you can distort or warp the time axis of a symmetric window, to yield an asymmetric one.

Some references:

Matlab code:

% Laurent Duval
% 2019/08/06
% SeDsp59829
close all; clear all
nSample = 65;
timeUniform = linspace(-1,1,nSample);
powerExp = 0.5;
timeWarpPower = 2*((timeUniform+1).^powerExp)/2^powerExp-1;
hannWindow = @(time) 1/2*(1+cos(pi*time));
hannWindowUniform = hannWindow(timeUniform);
hannWarpPower = hannWindow(timeWarpPower);
figure(1); clf; hold on
plot(timeUniform, hannWindowUniform, 'ob')
plot(timeUniform, hannWarpPower, 'gx')
grid on
The rank-$k$ numerical ranges, denoted below by $\Lambda_k$, were introduced c. 2006 by Choi, Kribs, and Życzkowski as a tool to handle compression problems in quantum information theory. Since then their theory and applications have been advanced with remarkable enthusiasm. The sequence of papers [1], [2], [3], [4], for example, led to a striking extension of the classical Toeplitz–Hausdorff theorem (convexity of $W(M)$): all the $\Lambda_k(M)$ are convex (though some may be empty), and they are intersections of conveniently computable half-planes in $\mathbb{C}$. Among the many more recent papers concerning the $\Lambda_k(M)$, let us mention [5], [6] and [7]. Given a matrix $M$ of dimension $d$ and $k\geq1$, Choi, Kribs, and Życzkowski (see [8]) defined the rank-$k$ numerical range of $M$ as \[\Lambda_k(M)=\{\lambda\in\mathbb{C}:\exists P\in P_k\mbox{ such that }PMP=\lambda P\},\] where $P_k$ denotes the set of rank-$k$ orthogonal projections in $M_d$. It is not hard to verify that $\Lambda_k(M)$ can also be described as the set of complex $\lambda$ such that there is some $k$-dimensional subspace $S$ of $\mathbb{C}^d$ such that $(Mu,u)=\lambda$ for all unit vectors $u$ in $S$. In particular, we see that \[W(M)=\Lambda_1(M)\supseteq\Lambda_2(M)\supseteq\Lambda_3(M)\supseteq\dots\quad.\] Note that this numerical range is different from the $k$-numerical range: for a Hermitian matrix $A$, we get \[ \Lambda_k(A) = [\lambda_k, \lambda_{d-k+1}], \] where the $\lambda_k$ are the eigenvalues of $A$ in increasing order. On the other hand, the $k$-numerical range is given by \[ W_k(A) = \left[\frac{1}{k}\sum_{i=1}^k\lambda_i, \frac{1}{k}\sum_{i=0}^{k-1} \lambda_{d-i} \right]. \] Hence, we get \[ \Lambda_k(A) \subset W_k(A). \] A comparison between the $k$-numerical range and the higher-rank numerical range in the case $k=2$. Note that $\Lambda_2 \subset W_2$. The matrix used in this example is $A = \mathrm{diag}(1, 2, 4, 8)$. Numerical range (blue) and real numerical shadow of the matrix $U_5 = \mathrm{diag}(\{e^{2\pi i k/5}\}_{k=1}^5)$. The red polygon inside is $\Lambda_2(U_5)$. Numerical range (blue) and real numerical shadow of the matrix $U_7 = \mathrm{diag}(\{e^{2\pi i k/7}\}_{k=1}^7)$. The red polygon inside is $\Lambda_2(U_7)$ and the green polygon is $\Lambda_3(U_7)$.
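To make the Hermitian comparison concrete, here is a small sketch in Python/NumPy (mine; it simply evaluates the two interval formulas above for the example matrix):

import numpy as np

def hermitian_ranges(A, k):
    # Interval formulas above, with eigenvalues sorted in increasing order
    lam = np.sort(np.linalg.eigvalsh(A))
    d = len(lam)
    Lambda_k = (lam[k - 1], lam[d - k])          # [lambda_k, lambda_{d-k+1}]
    W_k = (lam[:k].mean(), lam[-k:].mean())      # averaged extreme eigenvalues
    return Lambda_k, W_k

A = np.diag([1.0, 2.0, 4.0, 8.0])
print(hermitian_ranges(A, 2))   # ((2.0, 4.0), (1.5, 6.0)): Lambda_2 inside W_2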
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: The action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. he based much of his success on principles like this I cant believe ive forgotten it it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one I was working from "Problems in Analytic Number Theory" Second Edition, by M. Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton university or not yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned @TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college. long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even A proof of this uses (basically) Fourier analysis Even though it looks rather innocuous albeit surprising result in pure number theory @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. 
actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, where $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ has no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, \Im z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know Cusp forms vanish at the cusps (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps So it sort of makes sense Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the power set of \(X\), whose elements are subsets of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system. This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in other posets, too, like join \(\vee\) and meet \(\wedge\). We could march much further in this direction. I won't, but try it yourself!

Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.

I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an observation, or measurement. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \). This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are three such functions! And they're related in a beautiful way!

The most fundamental is this:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its inverse image under \(f\) to be $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \). It is also called the preimage, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of Seven Sketches.

The inverse image gives a monotone function $$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$

Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).

Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer, because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\).
Here it is:

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its image under \(f\) to be $$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \). The image is often written as \(f(S)\), but I'm using the notation of Seven Sketches, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".

The image gives a monotone function $$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then $$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \} \subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$ But here's the cool part:

Theorem. \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

Proof. We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have $$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)

This is great! But there's also another way to go forwards from \(P(X)\) to \(P(Y)\), which is a right adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

Definition. Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \).

Puzzle 23. Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).

What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have $$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$ This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the existential quantifier \(\exists x\), and "for all \(x\)" is called the universal quantifier \(\forall x\). So we are seeing that existential and universal quantifiers arise as left and right adjoints! This was discovered by Bill Lawvere in his revolutionary 1969 paper Adjointness in Foundations. By now this observation is part of a big story that "explains" logic using category theory.

Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.

Puzzle 24. What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

Puzzle 25. What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
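Since all three maps run between finite power sets, both adjunctions can be checked exhaustively by brute force. A small Python sketch (my own illustration; the sets X, Y and the map f are made up, loosely mimicking the room/thermometer example):

    from itertools import chain, combinations

    X = {1, 2, 3, 4}          # hypothetical "room states"
    Y = {'cold', 'warm'}      # hypothetical "thermometer readings"
    f = {1: 'cold', 2: 'cold', 3: 'warm', 4: 'warm'}

    def image(S):             # f_!(S): direct image, defined with "for some"
        return {f[x] for x in S}

    def preimage(T):          # f^*(T): inverse image
        return {x for x in X if f[x] in T}

    def coimage(S):           # f_*(S): defined with "for all"
        return {y for y in Y if all(x in S for x in X if f[x] == y)}

    def subsets(A):
        return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

    for S in map(set, subsets(X)):
        for T in map(set, subsets(Y)):
            assert (image(S) <= T) == (S <= preimage(T))      # f_! left adjoint to f^*
            assert (preimage(T) <= S) == (T <= coimage(S))    # f^* left adjoint to f_*
    print("both adjunctions hold")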
I think your postcondition is wrong. Since index i is initialized at 0, you're actually summing from 0 to n, not from 1 to n. The postcondition, let's call it $P$, should be: $$ j = \sum_{k=0}^{n} a[k] $$ Or: j = sum(a[0] ... a[n])

Usually you would write $B$ as $i \neq n+1$ instead of $i \le n$, because it makes the proof easier. In that case, you would choose $Q$ as (assuming array indexing starts at 0): $$ j = \sum_{k=0}^{i-1} a[k] $$ Or in your notation: j = sum(a[0] ... a[i-1])

Then $ Q \land \lnot B \implies P$ is true because you can substitute $i$ by $n+1$. Of course, you would have to prove that $Q$ is a valid invariant, but I'll leave that to you.

In your case, $\lnot B$ is $i>n$ instead of $i = n + 1$, so you can't use substitution. You can deal with the inequality by adding information to the invariant and then proving $i = n + 1$ using this new information. Our new invariant $Q'$ could be: $$Q \land i \le n+1$$

This new condition holds before the loop starts because $i$ starts at 0 and $n+1$ must be at least $1$. To prove that $Q'$ is a valid invariant, we have to show that it also holds after every iteration. Each time the loop body executes, $i$ changes to $i+1$, so we must prove that $i+1 \le n+1$, which is true because the loop condition states that $i \le n$ was true when the iteration started.

Now we know that $Q \land i \le n + 1 \land i > n$ holds after the loop ends. And since $i \le n + 1 \land i > n \implies i = n + 1$ and $Q \land i = n + 1 \implies P$, the program is correct.
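To make this tangible, here is a small executable sketch (my own, assuming 0-based arrays as in the answer) that asserts the strengthened invariant $Q'$ before the loop and after every iteration:

    def sum_with_invariant(a):
        n = len(a) - 1                  # we sum a[0] .. a[n]
        i, j = 0, 0
        # Q': j == sum(a[0..i-1]) and i <= n + 1
        assert j == sum(a[:i]) and i <= n + 1
        while i <= n:                   # B: i <= n
            j += a[i]
            i += 1
            assert j == sum(a[:i]) and i <= n + 1   # Q' preserved
        # not B together with Q' forces i == n + 1, hence the postcondition P
        assert i == n + 1 and j == sum(a[:n + 1])
        return j

    print(sum_with_invariant([3, 1, 4, 1, 5]))   # 14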
Demand/Dynamic User Assignment (latest revision as of 13:02, 12 February 2018)

Introduction

For a given set of vehicles with origin-destination relations (trips), the simulation must determine routes through the network (lists of edges) that are used to reach the destination from the origin edge. The simplest method to find these routes is to compute shortest or fastest routes through the network using a routing algorithm such as Dijkstra or A*. These algorithms require assumptions regarding the travel time for each network edge, which is commonly not known before running the simulation, because travel times depend on the number of vehicles in the network. The problem of determining suitable routes that take into account travel times in a traffic-loaded network is called user assignment. SUMO provides different tools to solve this problem; they are described below.

Iterative Assignment (Dynamic User Equilibrium)

The tool <SUMO_HOME>/tools/assign/duaIterate.py can be used to compute the (approximate) dynamic user equilibrium.

python duaIterate.py -n <network-file> -t <trip-file> -l <nr-of-iterations>

duaIterate.py supports many of the same options as SUMO. Any options not listed when calling duaIterate.py --help can be passed to SUMO by adding sumo--long-option-name arg after the regular options (e.g. sumo--step-length 0.5).
This script tries to calculate a user equilibrium; that is, it tries to find a route for each vehicle (each trip from the trip file above) such that each vehicle cannot reduce its travel cost (usually the travel time) by using a different route. It does so iteratively (hence the name) by calling DUAROUTER to route the vehicles in a network with the last known edge costs (starting with empty-network travel times), then calling SUMO to simulate the "real" travel times that result from the calculated routes. The resulting edge costs are used in the next routing step. The number of iterations may be set to a fixed number or determined dynamically depending on the used options. In order to ensure convergence, there are different methods employed to calculate the route choice probability from the route cost (so the vehicle does not always choose the "cheapest" route). In general, new routes will be added by the router to the route set of each vehicle in each iteration (at least if none of the present routes is the "cheapest") and may be chosen according to the route choice mechanisms described below.

Between successive calls of DUAROUTER, the .rou.alt.xml format is used to record not only the current best route but also previously computed alternative routes. These routes are collected within a route distribution and used when deciding the actual route to drive in the next simulation step. This isn't always the one with the currently lowest cost but is rather sampled from the distribution of alternative routes by a configurable algorithm described below.

Route-Choice algorithm

The two methods which are implemented are called Gawron and Logit in the following. The input for each of the methods is a weight or cost function on the edges of the net, coming from the simulation or from default costs (in the first step, or for edges which have not been traveled yet), and a set of routes where each route has an old cost and an old probability (from the last iteration) and needs a new cost and a new probability.

Gawron (default)

The Gawron algorithm computes probabilities for choosing from a set of alternative routes for each driver. The following values are considered to compute these probabilities: the travel time along the used route in the previous simulation step, the sum of edge travel times for a set of alternative routes, and the previous probability of choosing a route.

Logit

The Logit mechanism applies a fixed formula to each route to calculate the new probability. It ignores old costs and old probabilities and takes the route cost directly as the sum of the edge costs from the last simulation. The probabilities are calculated from an exponential function with parameter $\theta$, scaled by the sum over all route values:

$p_r' = \frac{\exp(\theta c_r')}{\sum_{s\in R}\exp(\theta c_s')}$

Termination

The option --max-convergence-deviation may be used to detect convergence and abort iterations automatically. Otherwise, a fixed number of iterations is used. Once the script finishes, any of the resulting .rou.xml files may be used for simulation, but the last one(s) should be the best.

Usage Examples

Loading vehicle types from an additional file

By default, vehicle types are taken from the input trip file and are then propagated through DUAROUTER iterations (always as part of the written route file). In order to use vehicle type definitions from an additional-file, further options must be set:
duaIterate.py -n ... -t ... -l ... --additional-file <FILE_WITH_VTYPES> duarouter--additional-file <FILE_WITH_VTYPES> duarouter--vtype-output dummy.xml

Options preceded by the string duarouter-- are passed directly to DUAROUTER, and the option vtype-output dummy.xml must be used to prevent duplicate definition of vehicle types in the generated output files.

oneShot-assignment

An alternative to the iterative user assignment above is incremental assignment. This happens automatically when using <trip> input directly in SUMO instead of <vehicle>s with pre-defined routes. In this case each vehicle performs a fastest-path computation at the time of departure, which prevents all vehicles from driving blindly into the same jam and works quite well empirically (for larger scenarios). The routes for this incremental assignment are computed using the Automatic Routing / Routing Device mechanism. Since this device allows for various configuration options, the script Tools/Assign#one-shot.py may be used to automatically try different parameter settings.

The MAROUTER application computes a classic macroscopic assignment. It employs mathematical functions (resistive functions) that approximate travel time increases when flow increases. This allows an iterative assignment to be computed without the need for time-consuming microscopic simulation.
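To make the option-forwarding pattern concrete, here is a hypothetical invocation with placeholder file names (the names are my own, not from the original documentation):

    python duaIterate.py -n net.net.xml -t trips.trips.xml -l 50 \
        --additional-file vtypes.add.xml \
        duarouter--additional-file vtypes.add.xml \
        duarouter--vtype-output dummy.xml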
Answer

The solution set is $$\{60^\circ,180^\circ,300^\circ\}$$

Work Step by Step

$$\cos\theta+1=2\sin^2\theta$$ over the interval $[0^\circ,360^\circ)$

Here we have both cosine and sine functions. It is necessary then to change either cosine or sine to the other so that we can solve the equation. Recall the identity $\sin^2\theta=1-\cos^2\theta$ and substitute it into the equation: $$\cos\theta+1=2(1-\cos^2\theta)$$ $$\cos\theta+1=2-2\cos^2\theta$$ $$2\cos^2\theta+\cos\theta-1=0$$ $$(2\cos^2\theta+2\cos\theta)+(-\cos\theta-1)=0$$ $$2\cos\theta(\cos\theta+1)-(\cos\theta+1)=0$$ $$(\cos\theta+1)(2\cos\theta-1)=0$$ $$\cos\theta=-1\hspace{1cm}\text{or}\hspace{1cm}\cos\theta=\frac{1}{2}$$

1) $\cos\theta=-1$: there is one value of $\theta$ for which $\cos\theta=-1$ over the interval $[0^\circ,360^\circ)$, namely $\{180^\circ\}$.

2) $\cos\theta=\frac{1}{2}$: there are two values of $\theta$ for which $\cos\theta=\frac{1}{2}$ over the interval $[0^\circ,360^\circ)$, namely $\{60^\circ,300^\circ\}$.

Combining the two cases, $$\theta=\{60^\circ,180^\circ,300^\circ\}$$ This is the solution set of the problem.
Hint Applet (Trigonometric Substitution) Sample Problem

Flash Applets embedded in the hint portion of WeBWorK questions: a sample problem with trigSubWW.swf embedded.

A standard WeBWorK PG file with an embedded applet has six sections: a tagging and description section, that describes the problem for future users and authors; an initialization section, that loads required macros for the problem; a problem set-up section that sets variables specific to the problem; an Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet); a text section, that gives the text that is shown to the student; and an answer, hint and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, gives hints after a given number of tries, and gives a solution that may be shown to the student after the problem set is complete.

The sample file attached to this page shows this; below, the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above.

Other applet sample problems: GraphLimit Flash Applet Sample Problem; GraphLimit Flash Applet Sample Problem 2; Derivative Graph Matching Flash Applet Sample Problem; Hint Applet (Trigonometric Substitution) Sample Problem

PG problem file / Explanation

    ##DESCRIPTION
    ##KEYWORDS('integrals', 'trigonometric','substitution')
    ## DBsubject('Calculus')
    ## DBchapter('Techniques of Integration')
    ## DBsection('Trigonometric Substitution')
    ## Date('8/20/11')
    ## Author('Barbara Margolius')
    ## Institution('Cleveland State University')
    ## TitleText1('')
    ## EditionText1('2010')
    ## AuthorText1('')
    ## Section1('')
    ## Problem1('20')
    ##ENDDESCRIPTION
    ########################################
    # This work is supported in part by the
    # National Science Foundation
    # under the grant DUE-0941388.
    ########################################

This is the tagging and description section of the problem file. The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code. All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
    DOCUMENT();
    loadMacros(
      "PGstandard.pl",
      "AppletObjects.pl",
      "MathObjects.pl",
      "parserFormulaUpToConstant.pl",
    );

This is the initialization section of the problem. AppletObjects.pl provides the routines for embedding the Flash applet, and parserFormulaUpToConstant.pl allows the antiderivative answer below to be marked correct up to an arbitrary additive constant.

    # Set up problem
    TEXT(beginproblem());
    $showPartialCorrectAnswers = 1;
    $a = random(2,9,1);
    $a2 = $a*$a;
    $a3 = $a2*$a;
    $a4 = $a2*$a2;
    $a4_3 = 3*$a4;
    $a2_5 = 5*$a2;
    $funct = FormulaUpToConstant("-sqrt{$a2-x^2}/{x}-asin({x}/{$a})");

This is the problem set-up section of the problem. It seeds the random parameter $a and defines the answer $funct, a formula that is correct up to an additive constant.

    ###################################
    # Create link to applet
    ###################################
    $appletName = "trigSubWW";
    $applet = FlashApplet(
      codebase => findAppletCodebase("$appletName.swf"),
      appletName => $appletName,
      appletId => $appletName,
      setStateAlias => 'setXML',
      getStateAlias => 'getXML',
      setConfigAlias => 'setConfig',
      maxInitializationAttempts => 10,
      height => '550',
      width => '595',
      bgcolor => '#e8e8e8',
      debugMode => 0,
    );

    ###################################
    # Configure applet
    ###################################
    $applet->configuration(qq{<xml><trigString>sin</trigString></xml>});
    $applet->initialState(qq{<xml><trigString>sin</trigString></xml>});

    TEXT(MODES(TeX=>"", HTML=><<'END_TEXT'));
    <script>
    if (navigator.appVersion.indexOf("MSIE") > 0) {
      document.write("<div width='3in' align='center' style='background:yellow'>
      You seem to be using Internet Explorer.<br/>
      It is recommended that another browser be used to view this page.</div>");
    }
    </script>
    END_TEXT

This is the Applet link section of the problem. The portions of the code that begin a line with # are comments. You must include the section that follows the comment # Configure applet. The lines between TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); and END_TEXT display a warning to students using Internet Explorer, recommending that they view the page in another browser.

    BEGIN_TEXT
    Evaluate the indefinite integral.
    $BR
    \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx \]
    $BR
    \{ans_rule( 60) \}
    END_TEXT

    ##################################
    Context()->texStrings;

This is the text section of the problem: the text between BEGIN_TEXT and END_TEXT is what is shown to the student.

    ###################################
    #
    # Answers
    #
    ## answer evaluators
    ANS( $funct->cmp() );

    TEXT($PAR, $BBOLD, $BITALIC, "Hi $studentLogin, If you don't get this
    in 5 tries I'll give you a hint with an applet to help you out.",
    $EITALIC, $EBOLD, $PAR);
    $showHint=5;
    Context()->normalStrings;
    TEXT(hint( $PAR, MODES(TeX=>'object code',
      HTML=>$applet->insertAll(
        debug =>0,
        reinitialize_button => 0,
        includeAnswerBox=>0,
    )) ));

This is the answer, hint and solution section of the problem. ANS( $funct->cmp() ); marks the student's answer, and the hint containing the embedded applet is released after $showHint = 5 incorrect attempts. The solution that follows may be shown to the student after the problem set is complete.

    ##################################
    Context()->texStrings;
    SOLUTION(EV3(<<'END_SOLUTION'));
    $BBOLD Solution: $EBOLD
    $PAR
    To evaluate this integral use a trigonometric substitution. For this
    problem use the sine substitution. \[x = {$a}\sin(\theta)\]
    $BR$BR
    Before proceeding note that \(\sin\theta=\frac{x}{$a}\), and
    \(\cos\theta=\frac{\sqrt{$a2-x^2}}{$a}\). To see this, label a right
    triangle so that the sine is \(x/$a\). We will have the opposite side
    with length \(x\), and the hypotenuse with length \($a\), so the
    adjacent side has length \(\sqrt{$a2-x^2}\).
    $BR$BR
    With the substitution \[x = {$a}\sin\theta\] \[dx = {$a}\cos\theta \; d\theta\]
    $BR$BR
    Therefore:
    \[\int\frac{\sqrt{$a2 - x^2}}{x^2}dx=
      \int \frac{{$a}\cos\theta\sqrt{$a2 - {$a2}\sin^2\theta}}
      {{$a2}\sin^2\theta} \; d\theta\]
    \[=\int \frac{\cos^2\theta}{\sin^2\theta} \; d\theta\]
    \[=\int \cot^2\theta \; d\theta\]
    \[=\int \csc^2\theta-1 \; d\theta\]
    \[=-\cot\theta-\theta+C\]
    $BR$BR
    Substituting back in terms of \(x\) yields:
    \[-\cot\theta-\theta+C
      =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C \]
    so
    \[ \int\frac{\sqrt{$a2 - x^2}}{x^2}dx
      =-\frac{\sqrt{$a2-x^2}}{x}-\sin^{-1}\left(\frac{x}{$a}\right)+C\]
    END_SOLUTION
    Context()->normalStrings;

    ##################################

    ENDDOCUMENT();

This is the end of the problem. Every PG problem file must close with ENDDOCUMENT();.
How to Solve for the Brachistochrone Curve Between Points

The shortest route between two points isn't necessarily a straight line. If by shortest route we mean the route that takes the least amount of time to travel from point A to point B, and the two points are at different elevations, then due to gravity, the shortest route is the brachistochrone curve. In this blog post, we demonstrate how to use built-in mathematical expressions and the Optimization Module in COMSOL Multiphysics to solve for the brachistochrone curve.

Finding the Brachistochrone Curve

Imagine letting a marble roll down a curved ramp, such as those seen in roller skate parks, and measuring the time for the marble to roll from point A to point B. Our goal is to redesign the shape of the ramp between the two points, such that it takes the shortest possible time for the marble to travel between them. For simplicity, we consider the ideal situation where friction is neglected and the marble is infinitely small (a point mass). The brachistochrone curve is an idealized curve that provides the fastest descent possible. There is actually an analytical solution to this case or, with some derivation work, we can use the PDE functionality of COMSOL Multiphysics to solve the problem. However, since I am a "firm believer of the principle of least action", a fancy way of saying "a lazy physicist", we will use the Optimization Module instead.

Setting Up the COMSOL Multiphysics Model

Assuming point A of our problem is located at the origin (x, y) = (0, 0), the analytical solution for the brachistochrone is a parametric curve of this form:

(1)
\begin{align*}
x(t) &= r\,(t-\sin(t))\\
y(t) &= r\,(-1+\cos(t))
\end{align*}

where the parameter r is a constant and the parameter t is the running parameter for the parametric curve and varies linearly from $t_A$ to $t_B$ along the curve. Typically, when we solve this problem, we are given the location of point B and solve for r and t. Here, we will start with the analytic solution for the brachistochrone and a known set of r and t that give us the location of point B. We will show how to approximate this analytic solution using the optimization functionality within COMSOL Multiphysics and the Optimization Module.

We will start with a blank model. In fact, we will not add any "component" in the Model Builder; no geometry, physics, or mesh will be needed. This is an interesting example of "a model without a model". First, we define a set of global parameters. We set the constant parameter r to 1 and the value of $t_B$ for point B to $1.2\pi$. (Keep in mind that A is the origin, so implicitly $t_A=0$.) The location of point B (xB, yB) can then be calculated using Eq. (1), as shown in the screenshot below.

Next, we use an interpolation function to approximate the brachistochrone curve. We define the x-positions of a few interpolation points as x1 through x4 and give the y-positions (y1 through y4) an initial guess, such that the interpolation points start out on a straight line between point A and point B. This interpolation function can be set up as shown in the following screenshot. You can click the Create Plot button to generate a plot of the interpolation function. For clarity, remove the two extrapolation plots. Next, add a line graph of the analytical solution to the same plot group to compare with the numerical solution later. To create a plot of the parametric curve, first create a dummy analytic function under the Global Definitions node, setting the upper limit of the argument to $t_B$ (under the Plot Parameters section).
The sole purpose of this analytic function is to provide a list of t values between 0 and $t_B$ for us to draw the parametric curve. Click Create Plot and drag the resulting Line Graph 1 out of 1D Plot Group 2 and into 1D Plot Group 1 (its name will change to Line Graph 2 automatically). For the y-axis data, enter the expression corresponding to Eq. (1): r*(-1+cos(root.x)). Similarly, for the x-axis data, enter r*(root.x-sin(root.x)). Note that the COMSOL Multiphysics variable root.x here corresponds to the t parameter in Eq. (1). Click the Plot button. We now have a graph of the analytic solution (green curve) and the initial guess (blue curve and black dots) as shown below.

Calculating Travel Time

To calculate the travel time for the marble to roll from point A to point B, we use the assumption that the motion is frictionless, so that all loss in potential energy turns into kinetic energy, which is proportional to the square of the speed. Thus, if the curve is represented by the formula y = f(x), then the instantaneous speed is proportional to the square root of the loss in height: $v \propto \sqrt{0-f(x)}$ (recall that we assume the original height at point A is 0). For an infinitesimal movement of dx in the x-direction, the path length that the marble travels along the curve is $ds = dx \sqrt{1+f'(x)^2}$. The time it takes to travel this length is simply the length divided by the speed, $d\tau = ds/v$. Therefore, we arrive at an expression for the total travel time for any given curve y = f(x):

\[ T \propto \int_0^{x_B} \sqrt{\frac{1+f'(x)^2}{0-f(x)}}\, dx \]

(This is the expression implemented with the integrate() operator below.) All we have to do now is let the COMSOL software find the curve that minimizes this expression for the travel time. You may have noticed that we use the "proportional to" symbol ($\propto$) in the expression for the travel time, since we have neglected to include the mass of the marble and the gravitational acceleration constant in the formula. Since these numbers combine to merely scale the travel time by a constant factor, they do not affect what the curve looks like for the minimization problem. In other words, the brachistochrone curve is independent of the weight of the marble.

Since we use the interpolation function int1 to approximate the curve f(x), we can define a global variable T for the travel time using the formula given above: integrate(sqrt((1+(d(int1(x),x))^2)/max(0-int1(x),eps)),x,0,xB). The extra expression max(... ,eps) in the denominator prevents divide-by-zero situations, as shown below.

Optimization Solver

Now we are ready to ask the software to minimize the travel time for us. Create an empty study and then right-click to add an Optimization node under it. We can use the Coordinate Search, Nelder-Mead, BOBYQA, or COBYLA optimization solver for this "model-free" optimization problem; COBYLA turns out to be the fastest. In the settings window, under the section Objective Function, enter T for the expression, which by default is minimized. Then, under the section Control Variables and Parameters, use the Add button (a "plus" sign icon) to add the parameters y1 through y4. The initial value fields are automatically filled with the corresponding values in the global parameters table. Under the Output While Solving section, check the Plot check box. These settings are shown in the screenshot below. Click Compute to watch as the optimization solver moves the interpolation curve up and down until the minimal travel time is reached.
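As a rough cross-check outside COMSOL, the same minimization can be sketched in a few lines. The following is my own numerical analogue (not the blog's model): a piecewise-linear curve through four interior points, midpoint quadrature for T, and scipy's COBYLA.

    import numpy as np
    from scipy.optimize import minimize

    # Analytic brachistochrone setup from Eq. (1): r = 1, tB = 1.2*pi.
    r, tB = 1.0, 1.2 * np.pi
    xB = r * (tB - np.sin(tB))
    yB = r * (-1.0 + np.cos(tB))
    xs = np.linspace(0.0, xB, 6)               # knots: A, four interior points, B

    def travel_time(y_interior):
        # Piecewise-linear curve through (0,0), the interior knots, and (xB,yB).
        ys = np.concatenate(([0.0], y_interior, [yB]))
        edges = np.linspace(0.0, xB, 2001)
        mids = 0.5 * (edges[:-1] + edges[1:])  # midpoints avoid v = 0 at x = 0
        dx = edges[1] - edges[0]
        y = np.interp(mids, xs, ys)
        dydx = np.diff(np.interp(edges, xs, ys)) / dx
        v = np.sqrt(np.maximum(-y, 1e-12))     # speed ~ sqrt(height drop)
        return np.sum(np.sqrt(1.0 + dydx**2) / v) * dx

    y0 = yB * xs[1:-1] / xB                    # straight-line initial guess
    res = minimize(travel_time, y0, method="COBYLA", options={"maxiter": 2000})
    print(res.x)                               # optimized knot heights
    print(travel_time(y0), travel_time(res.x)) # initial vs optimized time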
Even with the very crude linear interpolation curve using only four interpolation points, the results in the graph below show remarkable agreement with the analytical solution. Higher-order interpolation and more interpolation points will further improve the solution.

Conclusion

A few years ago, my mentor William Vetterling from ZINK Imaging chatted with a few of us at lunch about how we can use COMSOL Multiphysics to solve anything, since it provides the mathematical tools to handle all kinds of equations encountered in science and technology. This idea eventually evolved into his keynote presentation on the "Library of Babel" at the COMSOL Conference 2012 Boston. We have used an example of a "model without a model" to solve the brachistochrone curve problem by taking advantage of the software's versatile built-in mathematical expressions and optimization functionality. I hope this demonstration will stimulate more creative usage of COMSOL Multiphysics to tackle more technical challenges.
We have seen that some functions can be represented as series, which may give valuable information about the function. So far, we have seen only those examples that result from manipulation of our one fundamental example, the geometric series. We would like to start with a given function and produce a series to represent it, if possible.

Suppose that $\ds f(x)=\sum_{n=0}^\infty a_nx^n$ on some interval of convergence. Then we know that we can compute derivatives of $f$ by taking derivatives of the terms of the series. Let's look at the first few in general: $$\eqalign{ f'(x)&=\sum_{n=1}^\infty n a_n x^{n-1}=a_1 + 2a_2x+3a_3x^2+4a_4x^3+\cdots\cr f''(x)&=\sum_{n=2}^\infty n(n-1) a_n x^{n-2}=2a_2+3\cdot2a_3x +4\cdot3a_4x^2+\cdots\cr f'''(x)&=\sum_{n=3}^\infty n(n-1)(n-2) a_n x^{n-3}=3\cdot2a_3 +4\cdot3\cdot2a_4x+\cdots\cr }$$ By examining these it's not hard to discern the general pattern. The $k$th derivative must be $$\eqalign{ f^{(k)}(x)&=\sum_{n=k}^\infty n(n-1)(n-2)\cdots(n-k+1)a_nx^{n-k}\cr &=k(k-1)(k-2)\cdots(2)(1)a_k+(k+1)(k)\cdots(2)a_{k+1}x+{}\cr &\qquad {}+(k+2)(k+1)\cdots(3)a_{k+2}x^2+\cdots\cr }$$ We can shrink this quite a bit by using factorial notation: $$ f^{(k)}(x)=\sum_{n=k}^\infty {n!\over (n-k)!}a_nx^{n-k}= k!a_k+(k+1)!a_{k+1}x+{(k+2)!\over 2!}a_{k+2}x^2+\cdots $$ Now substitute $x=0$: $$f^{(k)}(0)=k!a_k+\sum_{n=k+1}^\infty {n!\over (n-k)!}a_n0^{n-k}=k!a_k,$$ and solve for $\ds a_k$: $$a_k={f^{(k)}(0)\over k!}.$$ Note the special case, obtained from the series for $f$ itself, that gives $\ds f(0)=a_0$.

So if a function $f$ can be represented by a series, we know just what series it is. Given a function $f$, the series $$\sum_{n=0}^\infty {f^{(n)}(0)\over n!}x^n$$ is called the Maclaurin series for $f$.

Example 11.10.1 Find the Maclaurin series for $f(x)=1/(1-x)$. We need to compute the derivatives of $f$ (and hope to spot a pattern). $$\eqalign{ f(x)&=(1-x)^{-1}\cr f'(x)&=(1-x)^{-2}\cr f''(x)&=2(1-x)^{-3}\cr f'''(x)&=6(1-x)^{-4}\cr f^{(4)}(x)&=4!(1-x)^{-5}\cr &\vdots\cr f^{(n)}(x)&=n!(1-x)^{-n-1}\cr }$$ So $${f^{(n)}(0)\over n!}={n!(1-0)^{-n-1}\over n!}=1$$ and the Maclaurin series is $$\sum_{n=0}^\infty 1\cdot x^n=\sum_{n=0}^\infty x^n,$$ the geometric series.

A warning is in order here. Given a function $f$ we may be able to compute the Maclaurin series, but that does not mean we have found a series representation for $f$. We still need to know where the series converges, and whether, where it converges, it converges to $f(x)$. While for most commonly encountered functions the Maclaurin series does indeed converge to $f$ on some interval, this is not true of all functions, so care is required.

As a practical matter, if we are interested in using a series to approximate a function, we will need some finite number of terms of the series. Even for functions with messy derivatives we can compute these using computer software like Sage. If we want to know the whole series, that is, a typical term in the series, we need a function whose derivatives fall into a pattern that we can discern. A few of the most important functions are fortunately very easy.

Example 11.10.2 Find the Maclaurin series for $\sin x$. The derivatives are quite easy: $f'(x)=\cos x$, $f''(x)=-\sin x$, $f'''(x)=-\cos x$, $\ds f^{(4)}(x)=\sin x$, and then the pattern repeats. We want to know the derivatives at zero: 1, 0, $-1$, 0, 1, 0, $-1$, 0,…, and so the Maclaurin series is $$ x-{x^3\over 3!}+{x^5\over 5!}-\cdots= \sum_{n=0}^\infty (-1)^n{x^{2n+1}\over (2n+1)!}.
$$ We should always determine the radius of convergence: $$ \lim_{n\to\infty} {|x|^{2n+3}\over (2n+3)!}{(2n+1)!\over |x|^{2n+1}} =\lim_{n\to\infty} {|x|^2\over (2n+3)(2n+2)}=0, $$ so the series converges for every $x$. Since it turns out that this series does indeed converge to $\sin x$ everywhere, we have a series representation for $\sin x$ for every $x$.

Sometimes the formula for the $n$th derivative of a function $f$ is difficult to discover, but a combination of a known Maclaurin series and some algebraic manipulation leads easily to the Maclaurin series for $f$.

Example 11.10.3 Find the Maclaurin series for $x\sin(-x)$. To get from $\sin x$ to $x\sin(-x)$ we substitute $-x$ for $x$ and then multiply by $x$. We can do the same thing to the series for $\sin x$: $$ x\sum_{n=0}^\infty (-1)^n{(-x)^{2n+1}\over (2n+1)!} =x\sum_{n=0}^\infty (-1)^{n}(-1)^{2n+1}{x^{2n+1}\over (2n+1)!} =\sum_{n=0}^\infty (-1)^{n+1}{x^{2n+2}\over (2n+1)!}. $$

As we have seen, a general power series can be centered at a point other than zero, and the method that produces the Maclaurin series can also produce such series.

Example 11.10.4 Find a series centered at $-2$ for $1/(1-x)$. If the series is $\ds\sum_{n=0}^\infty a_n(x+2)^n$ then looking at the $k$th derivative: $$k!(1-x)^{-k-1}=\sum_{n=k}^\infty {n!\over (n-k)!}a_n(x+2)^{n-k}$$ and substituting $x=-2$ we get $\ds k!3^{-k-1}=k!a_k$ and $\ds a_k=3^{-k-1}=1/3^{k+1}$, so the series is $$\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}.$$ We've already seen this, in section 11.8.

Such a series is called the Taylor series for the function, and the general term has the form $${f^{(n)}(a)\over n!}(x-a)^n.$$ A Maclaurin series is simply a Taylor series with $a=0$.

Exercises 11.10

For each function, find the Maclaurin series or Taylor series centered at $a$, and the radius of convergence.

Ex 11.10.1 $\cos x$ (answer)
Ex 11.10.2 $\ds e^x$ (answer)
Ex 11.10.3 $1/x$, $a=5$ (answer)
Ex 11.10.4 $\ln x$, $a=1$ (answer)
Ex 11.10.5 $\ln x$, $a=2$ (answer)
Ex 11.10.6 $\ds 1/x^2$, $a=1$ (answer)
Ex 11.10.7 $\ds 1/\sqrt{1-x}$ (answer)
Ex 11.10.8 Find the first four terms of the Maclaurin series for $\tan x$ (up to and including the $\ds x^3$ term). (answer)
Ex 11.10.9 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for $\ds x\cos (x^2)$. (answer)
Ex 11.10.10 Use a combination of Maclaurin series and algebraic manipulation to find a series centered at zero for $\ds xe^{-x}$. (answer)
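The remark above about computing messy derivatives with software applies to whole series too; as a quick cross-check of Examples 11.10.2 and 11.10.3, here is a sketch using Python's sympy (a stand-in for the Sage mentioned in the text):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.series(sp.sin(x), x, 0, 8))     # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
    print(sp.series(x*sp.sin(-x), x, 0, 8))  # -x**2 + x**4/6 - x**6/120 + O(x**8)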
To elaborate on gallais' clarifications, a type theory with impredicative Prop and dependent types can be seen as some subsystem of the calculus of constructions, typically close to Church's type theory. The relationship between Church's type theory and the CoC is not that simple, but it has been explored, notably in Geuvers' excellent article. For most purposes, though, the systems can be seen as equivalent.

Then indeed, you can get by with very little; in particular, if you're not interested in classical logic, the only thing you really need is an axiom of infinity: it's not provable in CoC that any types have more than 1 element! But with just an axiom expressing that some type is infinite, say a natural numbers type with the induction principle and the axiom $0\neq 1$, you can get pretty far: most of undergraduate mathematics can be formalized in this system (sort of; it's tough to do some things without the excluded middle).

Without impredicative Prop, you need a bit more work. As noted in the comments, an extensional system (a system with functional extensionality in the equality relation) can get by with just $\Sigma$- and $\Pi$-types, $\mathrm{Bool}$, the empty and unit types $\bot$ and $\top$, and W-types. In the intensional setting that's not possible: you need many more inductives. Note that to build useful W-types, you need to be able to build types by elimination over $\mathrm{Bool}$, like so: $$ \mathrm{if}\ b\ \mathrm{then}\ \top\ \mathrm{else}\ \bot $$ To do meta-mathematics you'll probably need at least one universe (say, to build a model of Heyting Arithmetic).

All this seems like a lot, and it's tempting to look for a simpler system which doesn't have the crazy impredicativity of CoC but is still relatively easy to write down in a few rules. One recent attempt to do so is the $\Pi\Sigma$ system described by Altenkirch et al. It's not entirely satisfying, since the positivity checking required for consistency isn't a part of the system "as is". The meta-theory still needs to be fleshed out as well.

A useful overview is the article Is ZF a hack? by Freek Wiedijk, which actually compares the hard numbers on all these systems (number of rules and axioms).
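To illustrate the Bool-elimination point concretely, here is a minimal sketch in Lean 4 syntax (my own illustration, not from the answer): a family of types computed by elimination over Bool, which is the ingredient needed to make W-types useful in the intensional setting.

    -- Large elimination over Bool: a type depending on a boolean value.
    def F : Bool → Type
      | true  => Unit    -- plays the role of ⊤
      | false => Empty   -- plays the role of ⊥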
I would be very grateful if anyone could help with my questions below. Here's my LaTeX code:

\documentclass[12pt]{report}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{tikz}
\usetikzlibrary{arrows}
\usepackage{verbatim}
\begin{document}
\tikzstyle{int}=[draw, line width = 1mm, minimum size=8em]
\begin{figure}
\centering
\begin{tikzpicture}[node distance=4.5cm,auto,>=latex']
\node [int] (a) {};
\node (b) [left of=a, node distance=5cm, coordinate] {a};
\node [int] (c) [] {$S$};
\node [coordinate] (end) [right of=c, node distance=5cm]{};
\path[->] (b) edge node {$\gamma$} (a);
\draw[->] (c) edge node {$\psi$} (end);
\end{tikzpicture}
\caption{This is a single compartment model}
\end{figure}
\end{document}

I'm very much a newbie to TikZ and I've had a brief look at the 405-page document by Till Tantau. I have to say, I don't even know where to start. The above code was a template which I've managed to edit to produce the image seen. I don't quite understand what every single line of code is doing and I'm sure there are bits of code that aren't needed here. I would like to introduce a loop with an arrow head below the box, pointing back at the box (with a label "z" on the arrow). I was also wondering if anyone could help with making the arrows bolder and perhaps bigger. Alternatively, I would be grateful if anyone could direct me to exactly which pages of the TikZ documentation answer these questions. Thanks
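For what it's worth, a hedged sketch of the kind of loop being asked about, assuming the loop styles from TikZ's topaths library (which TikZ loads by default); this line would go inside the tikzpicture:

\path[->, line width=0.5mm] (c) edge [loop below] node {$z$} (c);

Thicker edges come from line width, and larger arrow tips can be had by loading \usetikzlibrary{arrows.meta} and using, e.g., >={Latex[length=4mm]} in the tikzpicture options instead of >=latex'.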
Let's say I have a sample of 2 (presumably different) diatomics A and B. Through spectroscopy I found the data below:

For molecule A: $$\begin{align} B&\approx2.17690\ \mathrm{cm^{−1}}\\ D&\approx4.79000\times10^{-5}\ \mathrm{cm^{−1}}\\ I&=1.28590\times10^{-46}\ \mathrm{kg\ m^2}\\ \tilde ν_e&=928.15\ \mathrm{cm^{−1}} \end{align}$$

For molecule B: $$\begin{align} B&\approx1.51695\ \mathrm{cm^{−1}}\\ D&\approx7.1049\times10^{-6}\ \mathrm{cm^{−1}}\\ I&=1.84533\times10^{-46}\ \mathrm{kg\ m^2}\\ \tilde ν_e&=1401.86\ \mathrm{cm^{−1}} \end{align}$$

Through mass spec, I found 2 strong spectral lines at $30.0077086\ \mathrm{g\ mol^{-1}}$ and $41.9766928\ \mathrm{g\ mol^{-1}}$. How do I deduce which is which, and further, how do I determine the bond length? [FYI, I'm thinking the former mass is the molecule NO and the latter NaF, just by guessing.] I'm thinking that the heavier molecule will have the larger inertia value, and from the inertia equation $I=\mu R^2$ I'll derive the bond length $R$. But how do I find $\mu$ from the actual mass of the molecule given by the mass spec data?
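Note that the reduced mass cannot be recovered from the molecular mass alone; you need the atomic masses of the two constituent atoms, which is where the mass-spec identification comes in. A minimal sketch (assuming molecule A really is 14N16O, with standard atomic masses) of the bond-length calculation:

# Reduced mass of 14N16O from atomic masses, then R = sqrt(I/mu).
N_A = 6.02214076e23           # Avogadro's number, mol^-1
m_N = 14.003074e-3 / N_A      # kg, mass of one 14N atom
m_O = 15.994915e-3 / N_A      # kg, mass of one 16O atom
mu  = m_N * m_O / (m_N + m_O) # reduced mass, kg
I   = 1.28590e-46             # kg m^2, moment of inertia of molecule A
R   = (I / mu) ** 0.5         # bond length, m
print(mu, R)                  # mu ~ 1.24e-26 kg, R ~ 1.0e-10 m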
Assuming the RH and $s \in \mathbb{C}, \rho_n =\frac12 \pm i\gamma_n$, the following (altered) Hadamard product: $$\displaystyle \prod_{n=1}^\infty \left(1- \frac{s}{\frac12+ (-1)^n i \gamma_n} \right) \left(1- \frac{s}{\frac12+ (-1)^{n+1} i \gamma_n} \right) = \frac{\xi_{rie}(s)}{\xi_{rie}(0)}$$ runs through the alternating non-trivial zeros $\rho_n$, with $\xi_{rie}(s)= \frac12 s(s-1) \pi^{-\frac{s}{2}} \Gamma\left(\frac{s}{2}\right) \zeta(s)$. Contrary to the factors of the original Hadamard product, these alternating factors do converge. This question suggests that many similarities exist between infinite (Hadamard/Weierstrass) products using $\gamma_n=n$ and $\gamma_n=\Im(\rho_n)$, and this question shows that a closed form for alternating factors using $\gamma_n=n$ does exist. I would therefore like to conjecture that a closed form also exists for the alternating formula above. Let's call the closed forms of the two factors $A_-$ and $A_+$; it is easy to see that: $$\displaystyle A_-A_+=\frac{\xi_{rie}(s)}{\xi_{rie}(0)}=s(s-1) \pi^{-\frac{s}{2}} \Gamma\left(\frac{s}{2}\right) \zeta(s)$$ is the entire function to be split into two factors. Splitting this function is easy to do for the Gamma part: $G_-G_+=s(s-1) \pi^{-\frac{s}{2}} \Gamma\left(\frac{s}{2}\right)$, which for instance (there are more ways) could be factored into: $$\displaystyle G_-=s\Gamma\left(\frac{s}{4}\right) \pi^{-\frac{s}{4}}2^{\frac{s}{4}-\frac32} \text{ and } G_+=(s-1)\Gamma \left(\frac{s}{4}+\frac12\right) \pi^{-(\frac{s}{4}+\frac12)}2^{\frac{s}{4}+\frac12}$$ But what to do with $\zeta(s)$? The poles of $G_-$ ($-4,-8,\dots$) and $G_+$ ($-2,-6,\dots$) might provide some hints, since they need to be annihilated by the zeros of the 'to be found' $\zeta(s)$-factors. It is also clear that a $\zeta(s)$-factor must now induce alternating non-trivial zeros only, i.e.: $\frac12+14.134...i,\frac12-21.022...i,\frac12+25.010...i, \dots$ (and its complement). Dividing the infinite product factors $A_-$ and $A_+$ (using $n=699$) by $G_-$ and $G_+$ respectively, one gets the following graphs of what the (absolute) $\zeta(s)$-factors might look like:

Question (apologies for the long intro): The graphs of the two potential factors for $\zeta(s)$ above could both be seen as analytically continued functions across $\mathbb{C}\setminus\{1\}$ that have been derived "bottom up" from their alternating zeros. This would imply that in the domain $\Re(s)>1$ the two factors must multiply into: $$\zeta(s)=\sum_{n=1}^{\infty} \frac{1}{n^s} = \prod _{p \in \mathbb{P}}(1-p^{-s})^{-1}$$ Hence my question: are there any ways to split the known analytically "discontinued" expressions for $\zeta(s)$ in the domain $\Re(s)>1$ into two factors that each can be analytically continued again?

P.S.: (1) I have for instance tried splitting the Euler product into its $(p \bmod 4 = 1)$ and $(p \bmod 4 = 3)$ factors, but did not see any way to analytically continue these. (2) I also hoped to find some 'natural' connection with the alternating zeta function $\eta(s)=\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s}$, which is valid over the domain $\Re(s)>0$, but so far I have been unsuccessful in factoring it further.
Let $D$ be a bounded simply connected region (open subset homeomorphic to the disc) in the plane, containing the origin. Suppose that for every line $L$ through the origin the intersection $L\cap\partial D$ consists of two points $z_1$ and $z_2$ such that $|z_1-z_2|=\mathrm{diam}\, D$. Does it follow that $D$ is a disc?

Following the suggestion of Benoît Kloeckner, moderator, I have deleted my later answer, then edited and appended this (originally partial) answer to make it complete.

Pick an arbitrary direction. Then draw the line $L$ through the origin, perpendicular to the chosen direction. Since $L$ intersects the boundary of $D$ at two points, say $z_1$ and $z_2$, such that the segment $\overline{z_1z_2}$ is a diameter of $D$, the lines perpendicular to this segment and passing through $z_1$ and $z_2$, respectively, bound a parallel strip containing $D$; otherwise the diameter of $D$ would be greater than the distance from $z_1$ to $z_2$. This proves that the width of $D$ in every direction is equal to the diameter of $D$.

Remark: This also proves that the closure $\bar{D}$ of $D$ is convex, being the intersection of a family of strips. In fact, $\bar{D}$ is strictly convex, as every set of constant width must be.

Another remark: The same proof works in every dimension; just replace the arbitrary direction by an arbitrary hyperplane with respect to which we look at the width of $D$.

Thus far, this does not quite answer the question. Among all examples of convex bodies of constant width I know, only the ball has the property that all diameters have one common point. It remains to prove that no other such body exists. It has been established that $\bar{D}$ is strictly convex, that is, every support line of $D$ contains exactly one boundary point of $D$. Also, each support line of $D$ has its "opposite" support line, the two forming a strip of width $d$ between them containing $D$. Now, suppose $\bar{D}$ is not smooth. Specifically, let $x_0$ be a boundary point of $D$ at which there are two intersecting support lines. Then the corresponding opposite support lines touch $\bar{D}$ at points $x_1$ and $x_2$, respectively, such that each of the segments $\overline{x_0x_1}$ and $\overline{x_0x_2}$ is a diameter of $D$. But since all diameters of $D$ meet at a single point, namely at the origin, $x_0$ must be the origin, contrary to the assumption that the origin lies in the interior of $\bar{D}$. This, in view of Alexandre's comment at the end of his question, implies that $D$ is a circular disk.

By the way, it is not necessary to assume that the origin lies in the interior of $\bar{D}$, since this follows from the other assumptions. Namely, if the origin were a boundary point of $D$, then a line passing through it and penetrating the interior of $D$ would intersect the boundary of $D$ at another point, the two points forming a diameter. Then the line $L$ through the origin and perpendicular to the penetrating line would be a support line of $D$. But there should be another boundary point on $L$ at the diameter-distance from the origin, a clear contradiction: two perpendicular diameters meeting at their end points. Thus $D$ is a circle centered at the origin.

Final remark: The statement for the plane implies the same in higher dimensions, by taking all 2-dimensional cross-sections of the body through the origin: each of them is smooth and each satisfies the same assumptions on the diameters. Since all such cross-sections are congruent circles, the body is a ball.
I show that $\partial D$ is a Jordan curve in the plane; maybe this will be of some help. To do this, define a map $\phi : S^1 \to \Bbb R^2$ such that $\phi(v) = (\Bbb R_+ v) \cap \partial D$. Your conditions imply that this map is well-defined, that is, it is single-valued. It also immediately follows that $\phi$ is injective. Moreover, $\phi(S^1) = \partial D$. In fact, the inclusion $\partial D \subset \phi(S^1)$ is obvious by the assumption that any line through 0 intersects $\partial D$ in two points, opposite to each other with respect to 0. The inclusion $\phi(S^1) \subset \partial D$ is true because $(\Bbb R_+ v) \cap D$ must be an interval containing 0 (otherwise you get more than one intersection with $\partial D$), and $\phi(v)$ can be thought of as the supremum of this interval. Next we show that $\phi$ is continuous, by contradiction. Suppose that there is a sequence $v_n \in S^1$ which converges to $a\in S^1$ such that $\phi(v_n)$ does not converge to $\phi(a)$. This means that there is an open disk $U$ in $\Bbb R^2$ centered at $\phi(a)$ such that $\phi(v_n) \notin U$ for all $n$ (up to passing to a suitable subsequence). In this way you get a limit point of $\phi(v_n)$ that is different from $\phi(a)$, belongs to $\partial D$, but is on the half-line $\Bbb R_+ a$, and this contradicts your assumptions. It follows that $\partial D$ is a Jordan curve. Note that we didn't use the hypothesis that $D$ is homeomorphic to a disk (or that it is simply connected). This follows automatically from the Schoenflies theorem, since we have proved that $\partial D$ is a Jordan curve. Or, if you prefer, you can explicitly define a radial homeomorphism of the plane that sends the unit disk to $D$, by means of a rescaling of the embedding $\phi$.

Inspired by Wlodek Kuperberg's answer, I think I have a simple proof that your domain must be a circle. As noticed by Wlodek, given any line $L$ through the origin, at both points of intersection between $L$ and $\partial D$ the line orthogonal to $L$ is a supporting line for $D$. Moreover $D$ is convex. This means that the boundary curve of $D$ must be an integral curve of the vector field orthogonal to the directions issued from the origin, i.e. $\partial/\partial \theta$ in polar coordinates $(r,\theta)$. This shows that it must be a circle.

Edit: this proof may look like it needs the boundary to be smooth, but really it doesn't: one only needs the fundamental theorem of calculus for the function that maps a direction from the origin to the distance between the corresponding boundary point and the origin. Convexity of the boundary is more than enough, since it ensures that the above function is Lipschitz.
Given $\sin^2(\theta) + \cos^2(\theta) = 1$, which of the following is true?

True or False: $\sin^2(\theta) - \cos^2(\theta) + 1 = 2\sin^2(\theta)$. (Hint: Use the identity $\sin^2(\theta) + \cos^2(\theta) = 1$.)

Which of these is equivalent to $\cos^2(\theta)\sec^2(\theta) - \cos^2(\theta)$, over values of $\theta$ for which the given expression is defined?

Which of these is equivalent to $x$?

If $\sin^2(\theta) = \frac{9}{25}$, what is $\cos^2(\theta)$?
For the following exercises, determine the point(s), if any, at which each function is discontinuous. Classify any discontinuity as jump, removable, infinite, or other.

131) \(f(x)=\frac{1}{\sqrt{x}}\)

Answer: The function is defined for all x in the interval \((0,∞)\). In other words, this function is continuous on its domain.

132) \(f(x)=\frac{2}{x^2+1}\)

133) \(f(x)=\frac{x}{x^2−x}\)

Answer: Removable discontinuity at \(x=0\); infinite discontinuity at \(x=1\)

134) \(g(t)=t^{−1}+1\)

135) \(f(x)=\frac{5}{e^x−2}\)

Answer: Infinite discontinuity at \(x=\ln 2\)

136) \(f(x)=\frac{|x−2|}{x−2}\)

137) \(H(x)=\tan 2x\)

Answer: Infinite discontinuities at \(x=\frac{(2k+1)π}{4}\), for \(k=0,±1,±2,±3,…\)

138) \(f(t)=\frac{t+3}{t^2+5t+6}\)

For the following exercises, decide if the function is continuous at the given point. If it is discontinuous, what type of discontinuity is it?

139) \(\frac{2x^2−5x+3}{x−1}\) at \(x=1\)

Answer: No. It is a removable discontinuity.

140) \(h(θ)=\frac{\sin θ−\cos θ}{\tan θ}\) at \(θ=π\)

141) \(g(u)=\begin{cases}\frac{6u^2+u−2}{2u−1} & if \; u≠\frac{1}{2}\\ \frac{7}{2} & if \; u=\frac{1}{2}\end{cases}\), at \(u=\frac{1}{2}\)

Answer: Yes. It is continuous.

142) \(f(y)=\frac{\sin(πy)}{\tan(πy)}\), at \(y=1\)

143) \(f(x)=\begin{cases}x^2−e^x & if \; x<0\\x−1 & if \; x≥0\end{cases}\), at \(x=0\)

Answer: Yes. It is continuous.

144) \(f(x)=\begin{cases}x\sin(x) & if \; x≤π\\ x\tan(x) & if \; x>π\end{cases}\), at \(x=π\)

In the following exercises, find the value(s) of k that makes each function continuous over the given interval.

145) \(f(x)=\begin{cases}3x+2 & x<k\\2x−3 & k≤x≤8\end{cases}\)

Answer: \(k=−5\)

146) \(f(θ)=\begin{cases}\sin θ & 0≤θ<\frac{π}{2}\\ \cos(θ+k) & \frac{π}{2}≤θ≤π\end{cases}\)

147) \(f(x)=\begin{cases}\frac{x^2+3x+2}{x+2} & x≠−2\\ k & x=−2\end{cases}\)

Answer: \(k=−1\)

148) \(f(x)=\begin{cases}e^{kx} & 0≤x<4\\x+3 & 4≤x≤8\end{cases}\)

149) \(f(x)=\begin{cases}\sqrt{kx} & 0≤x≤3\\x+1 & 3<x≤10\end{cases}\)

Answer: \(k=\frac{16}{3}\)

In the following exercises, use the Intermediate Value Theorem (IVT).

150) Let \(h(x)=\begin{cases}3x^2−4 & x≤2\\5+4x & x>2\end{cases}\) Over the interval \([0,4]\), there is no value of x such that \(h(x)=10\), although \(h(0)<10\) and \(h(4)>10\). Explain why this does not contradict the IVT.

151) A particle moving along a line has at each time t a position function s(t), which is continuous. Assume \(s(2)=5\) and \(s(5)=2\). Another particle moves such that its position is given by \(h(t)=s(t)−t\). Explain why there must be a value c for \(2<c<5\) such that \(h(c)=0\).

Answer: Since both \(s\) and \(y=t\) are continuous everywhere, then \(h(t)=s(t)−t\) is continuous everywhere and, in particular, it is continuous over the closed interval [\(2,5\)]. Also, \(h(2)=3>0\) and \(h(5)=−3<0\). Therefore, by the IVT, there is a value \(c\) such that \(h(c)=0\).

152) [T] Use the statement "The cosine of t is equal to t cubed."

a. Write a mathematical equation of the statement.

b. Prove that the equation in part a. has at least one real solution.

c. Use a calculator to find an interval of length 0.01 that contains a solution.

153) Apply the IVT to determine whether \(2^x=x^3\) has a solution in one of the intervals [\(1.25,1.375\)] or [\(1.375,1.5\)]. Briefly explain your response for each interval.
Answer: The function \(f(x)=2^x−x^3\) is continuous over the interval [\(1.25,1.375\)] and has opposite signs at the endpoints, so by the IVT it has a root there; over [\(1.375,1.5\)] both endpoint values are negative, so the IVT gives no information.

154) Consider the graph of the function \(y=f(x)\) shown in the following graph.

a. Find all values for which the function is discontinuous.

b. For each value in part a., state why the formal definition of continuity does not apply.

c. Classify each discontinuity as either jump, removable, or infinite.

155) Let \(f(x)=\begin{cases}3x & x>1\\ x^3 & x<1\end{cases}\).

a. Sketch the graph of \(f\).

b. Is it possible to find a value k such that \(f(1)=k\), which makes \(f(x)\) continuous for all real numbers? Briefly explain.

Answer: a. (see graph) b. It is not possible to redefine \(f(1)\) since the discontinuity is a jump discontinuity.

156) Let \(f(x)=\frac{x^4−1}{x^2−1}\) for \(x≠−1,1\).

a. Sketch the graph of \(f\).

b. Is it possible to find values \(k_1\) and \(k_2\) such that \(f(−1)=k_1\) and \(f(1)=k_2\), and that makes \(f(x)\) continuous for all real numbers? Briefly explain.

157) Sketch the graph of the function \(y=f(x)\) with properties i. through vii.

i. The domain of f is (\(−∞,+∞\)).
ii. f has an infinite discontinuity at \(x=−6\).
iii. \(f(−6)=3\)
iv. \(\displaystyle \lim_{x→−3^−}f(x)=\displaystyle \lim_{x→−3^+}f(x)=2\)
v. \(f(−3)=3\)
vi. f is left continuous but not right continuous at \(x=3\).
vii. \(\displaystyle \lim_{x→−∞}f(x)=−∞\) and \(\displaystyle \lim_{x→+∞}f(x)=+∞\)

Answer: Answers may vary; see the following example:

158) Sketch the graph of the function \(y=f(x)\) with properties i. through iv.

i. The domain of f is [\(0,5\)].
ii. \(\displaystyle \lim_{x→1^+}f(x)\) and \(\displaystyle \lim_{x→1^−}f(x)\) exist and are equal.
iii. \(f(x)\) is left continuous but not continuous at \(x=2\), and right continuous but not continuous at \(x=3\).
iv. \(f(x)\) has a removable discontinuity at \(x=1\), a jump discontinuity at \(x=2\), and the following limits hold: \(\displaystyle \lim_{x→3^−}f(x)=−∞\) and \(\displaystyle \lim_{x→3^+}f(x)=2\).

In the following exercises, suppose \(y=f(x)\) is defined for all x. For each description, sketch a graph with the indicated property.

159) Discontinuous at \(x=1\) with \(\displaystyle \lim_{x→−1}f(x)=−1\) and \(\displaystyle \lim_{x→2}f(x)=4\)

Answer: Answers may vary; see the following example:

160) Discontinuous at \(x=2\) but continuous elsewhere with \(\displaystyle \lim_{x→0}f(x)=\frac{1}{2}\)

Determine whether each of the given statements is true. Justify your response with an explanation or counterexample.

161) \(f(t)=\frac{2}{e^t−e^{−t}}\) is continuous everywhere.

Answer: False. It is continuous over (\(−∞,0\)) ∪ (\(0,∞\)).

162) If the left- and right-hand limits of \(f(x)\) as \(x→a\) exist and are equal, then f cannot be discontinuous at \(x=a\).

163) If a function is not continuous at a point, then it is not defined at that point.

Answer: False. Consider \(f(x)=\begin{cases}x & if \; x≠0\\ 4 & if \; x=0\end{cases}\).

164) According to the IVT, \(\cos x−\sin x−x=2\) has a solution over the interval [\(−1,1\)].

165) If \(f(x)\) is continuous such that \(f(a)\) and \(f(b)\) have opposite signs, then \(f(x)=0\) has exactly one solution in [\(a,b\)].

Answer: False. Consider \(f(x)=\cos(x)\) on [\(−π,2π\)].

166) The function \(f(x)=\frac{x^2−4x+3}{x^2−1}\) is continuous over the interval [\(0,3\)].

167) If \(f(x)\) is continuous everywhere and \(f(a),f(b)>0\), then there is no root of \(f(x)\) in the interval [\(a,b\)].

Answer: False. The IVT does not work in reverse! Consider \((x−1)^2\) over the interval [\(−2,2\)].
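For exercise 153, a quick numerical sign check (a sketch of my own, not part of the original exercises) confirms the answer above:

# Check endpoint signs of f(x) = 2^x - x^3 on each interval; the IVT
# applies only where the signs differ.
def f(x):
    return 2**x - x**3

for a, b in [(1.25, 1.375), (1.375, 1.5)]:
    print((a, b), f(a), f(b), "sign change" if f(a) * f(b) < 0 else "no sign change")
# (1.25, 1.375): f(a) > 0, f(b) < 0 -> a root exists here
# (1.375, 1.5): both values negative -> the IVT gives no information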
[T] The following problems consider the scalar form of Coulomb's law, which describes the electrostatic force between two point charges, such as electrons. It is given by the equation \(F(r)=k_e\frac{|q_1q_2|}{r^2}\), where \(k_e\) is Coulomb's constant, \(q_i\) are the magnitudes of the charges of the two particles, and r is the distance between the two particles.

168) To simplify the calculation of a model with many interacting particles, after some threshold value \(r=R\), we approximate F as zero.

a. Explain the physical reasoning behind this assumption.

b. What is the force equation?

c. Evaluate the force F using both Coulomb's law and our approximation, assuming two protons with a charge magnitude of \(1.6022×10^{−19}\) coulombs (C) and the Coulomb constant \(k_e=8.988×10^9 \, N⋅m^2/C^2\) are 1 m apart. Also, assume \(R<1 \, m\). How much inaccuracy does our approximation generate? Is our approximation reasonable?

d. Is there any finite value of R for which this system remains continuous at R?

169) Instead of making the force 0 at R, we instead let the force be \(10^{−20}\) for \(r≥R\). Assume two protons, which have a magnitude of charge \(1.6022×10^{−19} \, C\), and the Coulomb constant \(k_e=8.988×10^9 \, N⋅m^2/C^2\). Is there a value R that can make this system continuous? If so, find it.

Answer: \(R=0.0001519 \, m\)

Recall the discussion on spacecraft from the chapter opener. The following problems consider a rocket launch from Earth's surface. The force of gravity on the rocket is given by \(F(d)=−mk/d^2\), where m is the mass of the rocket, d is the distance of the rocket from the center of Earth, and k is a constant.

170) [T] Determine the value and units of k given that the mass of the rocket on Earth is 3 million kg. (Hint: The distance from the center of Earth to its surface is 6378 km.)

171) [T] After a certain distance D has passed, the gravitational effect of Earth becomes quite negligible, so we can approximate the force function by \(F(d)=\begin{cases}−\frac{mk}{d^2} & if \; d<D\\ 10,000 & if \; d≥D\end{cases}\). Find the necessary condition D such that the force function remains continuous.

Answer: \(D=63.78 \, km\)

172) As the rocket travels away from Earth's surface, there is a distance D where the rocket sheds some of its mass, since it no longer needs the excess fuel storage. We can write this function as \(F(d)=\begin{cases} −\frac{m_1k}{d^2} & if \; d<D \\ −\frac{m_2k}{d^2} & if \; d≥D\end{cases}\). Is there a D value such that this function is continuous, assuming \(m_1≠m_2\)?

Prove that the following functions are continuous everywhere.

173) \(f(θ)=\sin θ\)

Answer: For all values of \(a\), \(f(a)\) is defined, \(\lim_{θ→a}f(θ)\) exists, and \(\lim_{θ→a}f(θ)=f(a)\). Therefore, \(f(θ)\) is continuous everywhere.

174) \(g(x)=|x|\)

175) Where is \(f(x)=\begin{cases} 0 & \text{if x is irrational}\\ 1 & \text{if x is rational}\end{cases}\) continuous?

Answer: Nowhere
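As a quick check of the answer to exercise 169 (a sketch assuming the cutoff force is in newtons): continuity requires \(k_e q^2/R^2 = 10^{−20}\), which can be solved for R directly:

# Solve k_e * q^2 / R^2 = 1e-20 for R.
k_e = 8.988e9          # Coulomb constant, N m^2 / C^2
q   = 1.6022e-19       # proton charge magnitude, C
F   = 1e-20            # cutoff force, N (assumed unit)
R   = (k_e * q**2 / F) ** 0.5
print(R)               # ~1.519e-4 m, matching the stated answer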
Narayanan, E. K. and Thangavelu, S. (2004) An optimal theorem for the spherical maximal operator on the Heisenberg group. In: Israel Journal of Mathematics, 144 (2). pp. 211-219.

Abstract

Let $H_n = \mathbb{C}^n \times \mathbb{R}$ be the Heisenberg group of dimension $2n+1$. Let $\sigma_r$ be the normalized surface measure on the sphere of radius $r$ in $\mathbb{C}^n$. For a function $f$ on $H_n$, let $M_\sigma f = \sup_{r>0} |f \ast \sigma_r|$. It had been shown in [A. Nevo and S. Thangavelu, Adv. Math. 127 (1997), no. 2, 307–334; MR1448717 (98f:22005)] that $M_\sigma$ is bounded on $L^p(H_n)$ for all $p > \frac{2n-1}{2n-2}$. In this paper the authors modify the arguments of that paper and combine them with the square function method used in [E. M. Stein and S. Wainger, Bull. Amer. Math. Soc. 84 (1978), no. 6, 1239–1295; MR0508453 (80k:42023)] to prove the maximal theorem on $\mathbb{R}^n$; they show that $M_\sigma$ is bounded on $L^p(H_n)$ if and only if $p > \frac{2n}{2n-1}$. As a consequence of their result the authors obtain that for $n \geq 2$, the family $(\sigma_r)_r$ is pointwise ergodic in $L^p$ for all $p > \frac{2n}{2n-1}$. The maximal theorem on $H_n$ has also been obtained in a more general setting by Fourier integral methods in [D. Müller and A. Seeger, Israel J. Math. 141 (2004), 315–340; MR2063040 (2005e:22005)].

Item Type: Journal Article
Additional Information: Copyright of this article belongs to The Hebrew University Magnes Press, Jerusalem.
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
URI: http://eprints.iisc.ac.in/id/eprint/12442
This question already has an answer here: In many places one finds accounts of how to evaluate $$ \int_0^\infty \frac{\sin x} x\,dx = \underbrace{\lim_{a\to\infty}\int_0^a}_{\text{Why view it this way?}} \frac{\sin x} x\, dx. $$ And it gets asserted that the reason why the integral must first be evaluated over a bounded interval, with the bound afterward allowed to go to $\infty$, is that $$ \int_0^\infty \left|\frac{\sin x} x\right| \, dx\ \underbrace{{}\ =\infty\ {}}_{\text{That's why.}} \tag 1 $$ and therefore the ways of thinking about integrals introduced by Henri Lebesgue are not applicable on the unbounded interval. If it is asked how we know that $(1)$ is true, my first thought is that one ought to compare it somehow with the harmonic series, because it ultimately declines to $0$ at the same rate. It seems as if this question is worth having among our stock of questions and answers, but a couple of days ago I found the answer posted as a mere comment. Hence this present posting. I'll notify the person who posted the comment. Doubtless some variety of ways to prove this result exists, so others should post their own versions.
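For the record, the harmonic-series comparison gestured at above can be made precise in one line: on each interval $[k\pi,(k+1)\pi]$ we have $\left|\frac{\sin x}{x}\right| \ge \frac{|\sin x|}{(k+1)\pi}$, and $\int_{k\pi}^{(k+1)\pi}|\sin x|\,dx = 2$, so
$$ \int_0^{n\pi} \left|\frac{\sin x}{x}\right|\,dx \;\ge\; \sum_{k=0}^{n-1} \frac{2}{(k+1)\pi} \;=\; \frac{2}{\pi}\sum_{k=1}^{n}\frac{1}{k} \;\longrightarrow\; \infty. $$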
Say that an inner model $M$ of $V$ is generically saturated if for every forcing notion $\Bbb P\in M$, either there is an $M$-generic for $\Bbb P$ in $V$, or forcing with $\Bbb P$ over $V$ collapses cardinals. What is the consistency strength of "$L$ is generically saturated"? If the answer is that $0^\#$ exists, is this sort of a general answer for relative constructibility (i.e. $L[A]$ is generically saturated if and only if $A^\#$ exists)? If the answer is negative, what can we conclude from this principle? Note, Mohammad Golshani remarks (also this) that, assuming $0^\#$ exists, for every $\kappa$ the forcing $\operatorname{Add}(\kappa,1)^L$ collapses $\kappa$ to $\omega$ (in particular, if $\kappa$ is countable in $V$, it is just a Cohen forcing). So in the presence of $0^\#$ at least we know that a lot of the forcings in $L$ do collapse cardinals, even if they do not admit generics in $V$ (e.g., $\operatorname{Col}(\omega,\omega_1^V)$ cannot admit a generic, although it does collapse cardinals). (The idea here is to marry Foreman's maximality principle, which states that every forcing adds a real or collapses cardinals, with inner model hypothesis-like ideas.)
Suppose $\kappa< 2^{\aleph_0}$ and $\langle P_i : i < \kappa\rangle$ is a sequence of perfect subsets of $2^{\omega}$. Can we find $Q_i \subseteq P_i$ for $i < \kappa$ such that each $Q_i$ is perfect and for every $x_i \in Q_i$ (for $i < \kappa$), the set $\{x_i: i < \kappa\}$ is Turing independent? If Martin's axiom holds, then the answer is yes. In particular, under the continuum hypothesis, the answer is yes. In any case, the answer is yes when $\kappa$ is countable. More generally, the answer is yes if $\text{MA}_\kappa$ holds. (See update below for improvements.) Let's first consider the case where $\kappa$ is countable. Suppose that we have countably many perfect sets $\langle P_n\mid n\in\omega\rangle$, each of which is the set of branches through a perfect tree $T_n$. I propose to refine these trees using the construction method of this answer, in order to construct perfect subtrees $S_n\subseteq T_n$ whose sets of branches $Q_n$ will be the desired perfect refinements with pairwise Turing-incomparable branches. Specifically, we build the subtrees $S_n$ in a sequence of stages. At each stage, we are committed to only finitely much information altogether about which nodes are in $S_n$, and at each stage, we end-extend the current approximation to $S_n$. At a given stage, we consider whether a given program $e$ might compute a branch through $S_m$ using an oracle that is a branch through $S_n$. Call this requirement $R_{e,n,m}$. We can meet this requirement by extending the branches of $S_n$ sufficiently so that program $e$ determines a branch higher than the current branches we have promised about $S_m$, in such a way that we can extend our promise of $S_m$ to the next stage so as to avoid it. In this way, we fulfill requirement $R_{e,n,m}$. I am not saying that this process is computable, since perhaps no extension of the current promise to $S_n$ will enable $e$ to halt sufficiently; but the point is that in this case, we needn't worry about this program, since it will not be giving us a branch through $S_m$. So I am computing the subtrees using the jumps of the oracles. We can also fold in stages of the construction to ensure that the trees $S_n$ are all branching, so that each $Q_n$ will be a perfect set. In this way, in $\omega$ many stages, we construct the perfect refinements $Q_n\subseteq P_n$ so that no real in $Q_n$ computes any real in $Q_m$ for $n\neq m$, as desired. We can even arrange the construction so that no branch through any $S_n$ computes any other distinct branch through any $S_m$, including $n=m$. Now, consider the case of general $\kappa$, under the assumption that $\text{MA}_\kappa$ holds. In this case, we can consider the forcing to add the subtrees $S_n$ with finite conditions, each specifying a finite piece of $S_n$ inside $T_n$, with finite support, ordered by end-extension. This forcing is isomorphic to adding $\kappa$ many Cohen reals, and is therefore c.c.c. Thus, by Martin's axiom, there is a way of choosing the subtrees so as to meet all the requirements $R_{e,\alpha,\beta}$, for Turing programs $e$ and distinct $\alpha,\beta<\kappa$. Each requirement corresponds to a dense set in the forcing, and there are only $\kappa$ many requirements. Update. I've now realized several improvements. Let us call your principle the perfect set refinement property ($\text{PSR}_\kappa$). Theorem. The perfect set refinement property $\text{PSR}_\kappa$ follows from $\text{MA}_\kappa(\text{Cohen})$. Proof.
The principle $\text{MA}_\kappa(\text{Cohen})$ is the very weak version of Martin's axiom, which applies only to the forcing $\text{Add}(\omega,1)$ that adds a single Cohen real. Fix any family of perfect sets $P_\alpha$ for $\alpha<\kappa$. Consider the forcing to add a single Cohen real. Such forcing adds a size-continuum family of pairwise mutually generic Cohen reals $c_\alpha$ for $\alpha<\mathfrak{c}$. Use these reals to pick out perfect subtrees $S_\alpha\subset T_\alpha$. By the argument above, paths through these trees will be Turing incomparable. And so we will have realized our desired family in the extension by adding a Cohen real. The actual properties needed in this argument are only the $\kappa$ many dense sets corresponding to the pairwise independence. So we don't need an actual Cohen real, but only $\text{MA}_\kappa(\text{Cohen})$. $\Box$ Theorem. $\text{PSR}_\kappa$ holds after forcing to add $\theta$ many Cohen reals, for any $\theta\geq\kappa^+$. Proof. Suppose $G$ is $V$-generic for the forcing to add $\theta$ many Cohen reals, where $\theta\geq\kappa^+$. If $\langle P_\alpha\mid\alpha<\kappa\rangle$ is a family of perfect sets, then this sequence is added by the restriction of $G$ to a set of size $\kappa$. On one of the remaining coordinates, we have added a Cohen real generically over that part of $G$, and so we have created the desired perfect refinement already in $V[G]$. $\Box$ Corollary. One can force the full perfect set refinement property, $\text{PSR}_\kappa$ for all $\kappa<\mathfrak{c}$, simply by adding sufficiently many Cohen reals. Meanwhile, one might ask why the OP has insisted that $\kappa<\mathfrak{c}$. The reason is that allowing $\kappa$ to be the continuum itself (or larger) makes the principle false. Theorem. $\text{PSR}_\kappa$ is false for $\kappa\geq\mathfrak{c}$. Proof. For each real $x$, there is a perfect set $P_x$ of reals $y$ such that $x\leq_T y$. Consider any perfect refinement $Q_x\subseteq P_x$. Let $y\in Q_x$ and consider any $z\in Q_y$. It follows that $y\leq_T z$, and so these are not Turing independent. So there is no independent perfect refinement of this continuum-sized family. $\Box$
Yes, another one of those "I have found a great password-generation strategy" posts waiting to be shot down! It seems that although diceware aims to be a secure and user-friendly password generation method, there is a consensus that its output is not typically user-friendly. Over the years I have read many posts asking how to generate "nicer" passwords from it. For example 1 2 3 and in particular 4. To get "nicer" passwords, whatever that means, the suggestions are typically: create your own word list (a lot of effort, and you need a longer password for the same entropy since manual word lists are typically shorter), or use a selection method but be aware you lose entropy (e.g. pick $1$ out of $16$ passwords at the cost of $4$ bits of entropy). But it seems to me that there is an easy way to get a more memorable yet shorter password, especially if you're bilingual. What you need is a set of $N$ disjoint diceware word lists, each with $6^5$ words. When drawing a word, look up the dice combination in the $N$ lists and pick the word you like most. This is exactly as secure as drawing from a single list. Indeed, assuming there is no overlap between the lists, this is equivalent to processing the $6^5$ sets of $N$ words to create your own "preferred" word list and using standard diceware on that list. 1 - Am I correct that generating a password in this fashion does not decrease entropy? In practice, it is hard to find non-overlapping lists (unless you speak several languages, and even then there might be duplicates). The impact of overlaps is trivial to quantify when $N = 2$, but: 2 - What is the impact of overlaps between lists when $N > 2$? Even better, pick your $2$ preferred words out of $N$ and flip a coin to choose between them. You just increased the entropy ($+1$ bit/word) and still get a nicer password than single-list diceware. Pushing this idea further: there are around $500$K words in the English language, from which one can create $\dfrac{500\cdot 10^3}{6^5} \approx 64$ non-overlapping word lists. For each dice combination, pick your favorite $K$ out of $64$ and pick a final word at random. That's $\log_2(K)$ extra bits of entropy per word, so with $K=4$ you have $14.9$ bits/word instead of the standard $12.9$ bits/word. A nicer and shorter password for the same entropy, or more entropy for the same number of words. You control the trade-off between entropy and nicer words by adjusting $K$. With $K=8$ you gain $3$ bits/word, whereas with $K=1$ you get your preferred words but standard diceware entropy. 3 - Why is nobody doing this? It is easy enough to automate. Unless something above is wrong?
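A quick sketch of the entropy accounting described above (my own illustration; note the $\log_2(K)$ term counts only if the final choice among the $K$ candidates is made uniformly at random, e.g. by coin flips, rather than by preference):

import math

WORDS_PER_LIST = 6 ** 5   # 7776 words on a standard diceware list
for K in (1, 2, 4, 8):
    # the dice roll contributes log2(7776) bits; a uniform pick among
    # K candidate words contributes a further log2(K) bits
    bits = math.log2(WORDS_PER_LIST) + math.log2(K)
    print(K, round(bits, 1))   # 12.9, 13.9, 14.9, 15.9 bits/word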
I've been trying to prove it for a while, but can't seem to get anywhere. $$\frac{1}{\sin^2\theta} + \frac{1}{\cos^2\theta} = (\tan \theta + \cot \theta)^2$$ Could someone please provide a valid proof? I am not allowed to work on both sides of the equation. Work so far: RS: $$ \begin{align} & \frac{\sin^2\theta}{\cos^2\theta} + \frac{\cos^2\theta}{\sin^2\theta} + 2 \\[10pt] & = \frac{\sin^4\theta}{(\cos^2\theta)(\sin^2\theta)} + \frac{\cos^4\theta}{(\sin^2\theta) (\cos^2\theta)} + \frac{(\sin^4\theta)(\cos^2\theta)}{(\sin^2\theta)(\cos^2\theta)} + \frac{(\sin^2\theta)(\cos^4\theta)}{(\sin^2\theta)(\cos^2\theta)} \\[10pt] & = \frac{\sin^4\theta + \cos^4\theta + (\sin^4\theta)(\cos^2\theta) + (\sin^2\theta)(\cos^4\theta)}{(\cos^2\theta)(\sin^2\theta)} \end{align} $$ I am completely lost after this.
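For reference, here is one way to finish while touching only the right side; the key simplification is $\tan\theta+\cot\theta=\frac{1}{\sin\theta\cos\theta}$. (Incidentally, note that the last two fractions in the expansion above sum to $1$, not $2$: the numerators each need a factor of $2$.)
$$ (\tan\theta+\cot\theta)^2 = \left(\frac{\sin\theta}{\cos\theta}+\frac{\cos\theta}{\sin\theta}\right)^2 = \left(\frac{\sin^2\theta+\cos^2\theta}{\sin\theta\cos\theta}\right)^2 = \frac{1}{\sin^2\theta\cos^2\theta} = \frac{\sin^2\theta+\cos^2\theta}{\sin^2\theta\cos^2\theta} = \frac{1}{\cos^2\theta}+\frac{1}{\sin^2\theta}. $$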
Monotone solutions to a class of elliptic and diffusion equations

1. Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China

We consider the elliptic problem $-\Delta u= F'(u)$ in $\mathbf R^n$, $\partial_{x_n}u>0$, and the diffusion equation $u_t-\Delta u= F'(u)$ in $\mathbf R^n\times\{t>0\}$, $\partial_{x_n}u>0$, $u|_{t=0}=u_0$, where $\Delta$ is the standard Laplacian operator in $\mathbf R^n$, and $u_0$ is a given smooth function in $\mathbf R^n$ with some monotonicity condition. We show that under a natural condition on the nonlinear term $F'$, there exists a global solution to the diffusion problem above, and as time goes to infinity, the solution converges in $C_{loc}^2(\mathbf R^n)$ to a solution of the corresponding elliptic problem. In particular, we show that for any two solutions $u_1(x')< u_2(x')$ of the elliptic equation in $\mathbf R^{n-1}$: $-\Delta u=F'(u)$ in $\mathbf R^{n-1}$, either for every $c\in (u_1(0),u_2(0))$ there exists an $(n-1)$-dimensional solution $u_c$ with $u_c(0)=c$, or there exists an $x_n$-monotone solution $u(x',x_n)$ to the elliptic equation in $\mathbf R^n$: $-\Delta u=F'(u)$ in $\mathbf R^n$, $\partial_{x_n}u>0$ in $\mathbf R^n$, such that $\lim_{x_n\to-\infty}u(x',x_n)=v_1(x')\leq u_1(x')$ and $\lim_{x_n\to+\infty}u(x',x_n)=v_2(x')\leq u_2(x')$. A typical example is when $F'(u)=u-|u|^{p-1}u$ with $p>1$. Some of our results are similar to results for minimizers obtained by Jerison and Monneau [13] by variational arguments. The novelty of our paper is that we only assume the condition for $F$ used by Keller and Osserman for boundary blow-up solutions.

Mathematics Subject Classification: 35Jx.

Citation: Li Ma, Chong Li, Lin Zhao. Monotone solutions to a class of elliptic and diffusion equations. Communications on Pure & Applied Analysis, 2007, 6 (1): 237-246. doi: 10.3934/cpaa.2007.6.237
We have begun looking at trace-class and Hilbert–Schmidt ideals in lectures today, in particular looking at pages 206-212 of Reed and Simon, Book 1. My lecturer asserted the following two equivalences in class today and I don't understand the details. If we let $\{ b_n \}$ be an orthonormal basis, it was asserted that the Hilbert–Schmidt norm of $\left| A \right|$ is equal to the Hilbert–Schmidt norm of $A$. That is, $$\left( \sum_{n=1}^{\infty} \| \left| A \right| b_n \|^2 \right)^{\frac{1}{2}} = \left( \sum_{n=1}^{\infty} \| A b_n \|^2 \right)^{\frac{1}{2}}.$$ And the second assertion was that the square of the Hilbert–Schmidt norm of $\left| A \right|^{\frac{1}{2}}$ is the trace norm of $A$.
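For the first assertion, the usual argument (a sketch, using only that $|A|=(A^*A)^{1/2}$ is self-adjoint) is that the two norms agree term by term: for any vector $b$,
$$ \|Ab\|^2 = \langle Ab, Ab\rangle = \langle A^*Ab, b\rangle = \langle |A|^2 b, b\rangle = \langle |A|b, |A|b\rangle = \| |A| b\|^2, $$
and summing over the orthonormal basis gives the equality of the two sums. The same computation applied to $|A|^{1/2}$ gives $\sum_n \| |A|^{1/2} b_n \|^2 = \sum_n \langle |A| b_n, b_n\rangle = \operatorname{tr} |A|$, which is the trace norm of $A$.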
Answer: $\cos 225^{\circ}= -\dfrac{\sqrt 2}{2}$

Work Step by Step

The reference angle of an angle $0^{\circ} \leq \theta \lt 360^{\circ}$, based on its position, can be computed using the following steps:

a) Quadrant I: $\theta$
b) Quadrant II: $180^{\circ}-\theta$
c) Quadrant III: $\theta - 180^{\circ}$
d) Quadrant IV: $360^{\circ}-\theta$

We can see that the angle $225^{\circ}$ lies in Quadrant III. Therefore, the reference angle of $225^{\circ}$ is $225^{\circ}-180^{\circ}=45^{\circ}$. Since $\cos 45^{\circ}=\dfrac{\sqrt 2}{2}$ and $\theta$ lies in Quadrant III, where cosine is negative, $\cos 225^{\circ}= -\dfrac{\sqrt 2}{2}$.
Practice Paper 2 Question 12

A point traces a unit circle if its coordinates satisfy \((x,y) = (\cos t, \sin t)\) as time \(t\) varies from \(0\) to \(2\pi.\) Give an equation for a point that traces a spiral centred at \((0,0)\) and that crosses the positive \(x\)-axis at \(x=1,2,3,\ldots\) at times \(t = 2\pi,4\pi,6\pi,\ldots\) and find its speed \(v(t)\) at time \(t.\)

Related topics

Warm-up Questions

Give \(y=\tan x\) as a parametric equation \((x,y)\) where \(x=\arcsin t\) and \(y\) contains no trigonometric functions.

Sketch the parametric curve \((x,y)=(4\cos t,2\sin t).\) Hint: Try forming \(\cos^2t+\sin^2t.\)

A boat is heading North East, travelling North at \(12\) kilometres per hour and East at \(35\) kilometres per hour. What is the overall speed of the boat?

Hints

Hint 1: A point on a spiral gets (continuously) further away from its centre with time. How would you transform the circle equation to achieve that?

Hint 2: The above means that the magnitude of the position vector must increase with time for the spiral (whereas it's fixed for the circle).

Hint 3: The \(x\) and \(t\) values in the question are proportional (with ratio \(2\pi\)). What does that tell you about the equation for the \(x\) component?

Hint 4: The above means that time must multiply \(\cos t.\) What about the \(y\) component?

Hint 5: How about resolving the speed vector into components?

Hint 6: This means extracting the \(x\) and \(y\) components of the speed vector. How does one do that knowing the \(x\) and \(y\) components of the position vector?

Hint 7: The question asks for the speed \(v(t),\) i.e. not a vector.

Solution

The question does not specify any conditions for \(y(t),\) while for \(x(t)\) it only specifies the crossing points with the \(x\)-axis. There are multiple solutions satisfying these conditions: any spiral is acceptable as long as the crossing points with the \(x\)-axis are satisfied. Since the given crossing points are linearly spaced, we can modify the circle equation by multiplying both \(\cos t\) and \(\sin t\) by a linear function; such a spiral (called the Archimedean spiral) is: \[ \textstyle (x,y) = \left(\frac{1}{2\pi}\,t\cos t, \frac{1}{2\pi}\,t\sin t\right). \] To find its total speed, extract the \(x\) and \(y\) components of the speed vector by differentiating the position components; then \(v(t)\) is the vector's magnitude: \[ \begin{align} \dot x &= \frac{1}{2\pi}(\cos t - t \sin t) \\[2pt] \dot y &= \frac{1}{2\pi}(\sin t + t \cos t) \end{align} \] and \(v(t) = \sqrt{\dot x^2 + \dot y^2},\) which after some easy algebra becomes \(v(t) = \frac{1}{2\pi}\sqrt{1+t^2}.\)

If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
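The differentiation and simplification can also be checked mechanically; a minimal sketch using the sympy library:

import sympy as sp

t = sp.symbols('t', nonnegative=True)
x = t * sp.cos(t) / (2 * sp.pi)
y = t * sp.sin(t) / (2 * sp.pi)
v = sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2)
print(sp.simplify(v))   # expected: sqrt(t**2 + 1)/(2*pi)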
So we all know that the continued fraction containing all $1$s... $$ x = 1 + \frac{1}{1 + \frac{1}{1 + \ldots}} $$ yields the golden ratio $x = \phi$, which can easily be proven by rewriting it as $x = 1 + 1/x$, solving the resulting quadratic equation, and assuming that a continued fraction that only contains additions will give a positive number. Now, a friend asked me what would happen if we replaced all additions with subtractions: $$ x = 1 - \frac{1}{1 - \frac{1}{1 - \ldots}} $$ I thought "oh cool, I know how to solve this...": \begin{align} x &= 1 - \frac{1}{x} \\ x^2 - x + 1 &= 0 \end{align} And voila, I get... $$ x \in \{e^{i\pi/3}, e^{-i\pi/3} \} $$ Ummm... why does a continued fraction containing only $1$s, subtraction and division result in one of two complex (as opposed to real) numbers? (I have a feeling this is something like the $\sum_i (-1)^i$ thing: the infinite continued fraction isn't well-defined unless we can express it as the limit of a converging sequence, because the truncated fractions $1 - \frac{1}{1-1}$ etc. aren't well-defined, but I thought I'd ask for a well-founded answer. Even if this is the case, do the two complex numbers have any "meaning"?)
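One concrete way to see the trouble: the truncations are iterates of the Möbius map $f(x)=1-\frac1x$, and a direct computation shows $f$ has order three,
$$ f(f(x)) = 1-\frac{1}{1-\frac1x} = \frac{-1}{x-1}, \qquad f(f(f(x))) = 1-\frac{1}{-1/(x-1)} = x, $$
so starting from $1$ the truncations cycle through $1$, $0$, and then $1-\frac10$, which is undefined; they never converge, much like the partial sums of $\sum_i(-1)^i$.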
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Practice Paper 2 Question 13

You place right-angled triangles \(ABC\) with \(\angle A=\frac{360}{k}\) degrees, where \(k\ge5\) is an integer, and \(\angle B=90\) degrees, as follows: the first triangle with \(A\) at \((0,0)\) and \(B\) at \((0,1)\), then repeatedly place triangles with \(A\) at \((0,0)\) such that \(AB\) of the new triangle coincides with \(AC\) of the previous triangle. What is the size of the area enclosed by the first \(k\) triangles?

Related topics

Warm-up Questions

Donald starts off with an empty mailbox. If Donald receives \(2\) emails on the first day, and the number of emails he receives per day triples compared to the day before, how many days would it take for him to have over \(10000\) emails?

Alice wants to know the height of a lamp post. As the lamp post is very tall, she cannot measure this directly. Instead, she measures the length of the shadow, which is \(10\) metres long. Knowing that the \(6\) foot tall bus stop sign nearby casts a \(4\) foot shadow, how tall is the lamp post? What is the angle between the tip of the shadow and the top of the lamp post? How does this angle compare with the angle at the bus stop?

Hints

Hint 1: What relationship do you notice between the 2nd and 1st triangles?

Hint 2: More specifically, between the sides, but more importantly between the areas of these triangles?

Hint 3: What about any two successive triangles?

Hint 4: What is the area of the first triangle?

Hint 5: The total area is the sum of the areas of all triangles. Notice anything familiar about that sum, given the above relationship?

Hint 6: More specifically, given that the ratio between the areas of any two successive triangles is constant, what kind of sum is the sum of all their areas?

Solution

Let \(\theta=\frac{360}{k}\) degrees. The first triangle has area \(\frac{\tan\theta}{2}.\) After placing the next triangle, we notice that it is similar to the previous triangle: the ratio between sides is \(\frac{1}{\cos \theta}\) and the ratio between areas is \(1/\cos^2 \theta.\) This applies to all pairs of successive triangles, i.e. a geometric progression, hence the total area is the geometric series: \[ \begin{align} A&=\frac{\tan \theta}{2}\sum_{i=0}^{k-1} \frac{1}{\cos^{2i} \theta} \\&=\frac{\tan \theta}{2}\cdot\frac{1/\cos^{2k} \theta -1}{1/\cos^2 \theta -1}, \end{align} \] which can be simplified to \[ \begin{align} A&=\frac{\tan \theta \cos^2 \theta}{2}\cdot\frac{1/\cos^{2k} \theta -1}{\sin^2 \theta} \\&=\frac{1}{2 \tan \theta} \cdot \left( \frac{1}{\cos^{2k} \theta} -1 \right). \end{align} \]

If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
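A quick numerical sanity check of the closed form (a sketch of my own, here with \(k=5\)):

import math

k = 5
theta = 2 * math.pi / k          # the angle 360/k degrees, in radians
ratio = 1 / math.cos(theta) ** 2 # area ratio of successive triangles
direct = sum(math.tan(theta) / 2 * ratio ** i for i in range(k))
closed = (1 / math.cos(theta) ** (2 * k) - 1) / (2 * math.tan(theta))
print(direct, closed)            # the two values agree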
Usually the stochastic integral is defined for processes indexed over $[0,\infty)$. I wonder about the standard way to define the integral for processes indexed over $[0,T]$. That is, for a continuous local martingale $M = (M_t)_{t \in [0,T]}$ and a (sufficiently integrable) progressive process $H = (H_t)_{t \in [0,T]}$ I want to define $$ \int_0^t H_s\, dM_s, \quad t \in [0,T]. $$ I guess one could do one of two things:

1. Do the proof of the existence of the stochastic integral all over again with processes defined on $[0,T]$.

2. Reduce it to the general case. That is, define $$ \tilde{M}_t = \begin{cases} M_t, & t \in [0,T] \\ M_T, & t \in [T, \infty) \end{cases} $$ then define $\tilde{H}$ in the same way, and then define $$ \int_0^t H_s\, dM_s = \int_0^t \tilde{H}_s\, d \tilde{M}_s, \quad t \in [0,T]. $$

Do both approaches work?
comprehensive function based validation routines

README

The above badges represent the current development branch. As a rule, I don't push to GitHub unless tests, coverage and usability are acceptable. This may not be true for short periods of time (on holiday, needing code for some other downstream project, etc.). If you need stable code, use a tagged version. Read 'Further Documentation' and 'Installation'.

Please note that developer support for PHP 5.4 & 5.5 was withdrawn at version 2.0.0 of this library. If you need support for PHP 5.4 or 5.5, please use a version >=1,<2

Provides extensive and complex validation of nested structures. The primary use case is validating incoming Json data, where, unlike XML, there is no defined validation pattern in common usage. (XML has XSD.) Validating incoming data is important because you can't trust the data provider. Early warning in your program makes it robust. In large and/or corporate systems, your upstream data providers can change without you knowing. Being able to validate that the data conforms to an expected pattern prevents your application from failing unnecessarily. A robust system first filters, then validates, then optionally filters again (usually referred to as mapping) any incoming data. This library provides a Zend Validator compatible extension that allows you to create complex validations of nested data.

Validation just wouldn't be the same if you didn't know why something failed. All Chippyash Validators use the Chippyash/Validation/Messenger class to store validation errors, and record reasons why a validation passed. Sometimes, knowing why it succeeded is just as important.

All Chippyash Validators support invoking on the validation class, in which case you need to supply the messenger:

use Chippyash\Validation\Messenger;
use Chippyash\Validation\Common\Double as Validator;

$messenger = new Messenger();
$validator = new Validator();
$result = $validator($someValue, $messenger);
$msg = $messenger->implode();

if (!$result) {
    echo $msg;
} else {
    //parse the messages and switch dependent on why it succeeded
}

Alternatively, you can call the isValid() method, in which case you do not need to supply the Messenger:

use Chippyash\Validation\Common\Double as Validator;

$validator = new Validator();
$result = $validator->isValid($someValue);
if (!$result) {
    $errMsg = implode(' : ', $validator->getMessages());
}

Chippyash\Validation\Common\DigitString: Does the value contain only numeric characters?

Chippyash\Validation\Common\Double: Is the supplied string equivalent to a double (float) value?

Chippyash\Validation\Common\Email: Is the supplied string a simple email address?

Chippyash\Validation\Common\Enum: Is the supplied string one of a known set of strings?

use Chippyash\Validation\Common\Enum;

$validator = new Enum(['foo','bar']);
$ret = $validator->isValid('bop'); //returns false

Chippyash\Validation\Common\IsArray: Is the supplied value an array?

Chippyash\Validation\Common\ArrayKeyExists: Is the value an array that has the required key?

Chippyash\Validation\Common\ArrayKeyNotExists: Is the value an array that does not have the required key?

Chippyash\Validation\Common\IsTraversable: Is the supplied value traversable?
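For the simple validators above, usage follows the same invocation style; a hedged sketch using IsArray (assuming, as its description suggests, that it takes no constructor arguments):

use Chippyash\Validation\Common\IsArray;

$validator = new IsArray();
$ret = $validator->isValid(['idx' => 'foo']); //returns true
$ret = $validator->isValid('not an array');   //returns false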
Chippyash\Validation\Common\Netmask: Does the supplied IP address belong to the constructed net mask (CIDR)?

use Chippyash\Validation\Common\Netmask;

$validator = new Netmask('0.0.0.0/1');
return $validator->isValid('127.0.0.1'); //returns true
return $validator->isValid('128.0.0.1'); //returns false

You can construct a Netmask validator with a single CIDR address mask or an array of them. If you call the Netmask isValid (or invoke it) with a null IP, it will try to get the IP from $_SERVER['REMOTE_ADDR'] or $_SERVER['HTTP_X_FORWARDED_FOR'], thus making it ideal for its primary use case, that of protecting your web app against requests from unauthorised IP addresses. For more uses of the Netmask validator, see the test cases.

Chippyash\Validation\Common\UKPostCode: Simple extension of Zend PostCode to check for UK post codes. It should be straightforward to create your own country-specific validator.

Chippyash\Validation\Common\UKTelNum: Again, a simple extension of the Zend TelNum validator.

Chippyash\Validation\Common\ZFValidator: A simple class allowing you to extend it to create any validator using the Zend validators.

Here is where we start to depart from the Zend validators.

Chippyash\Validation\Common\ArrayPart: Is the value an array, does the required key exist, and does it validate according to the passed-in function parameter?

use Chippyash\Validation\Common\ArrayPart;
use Chippyash\Validation\Common\Enum;

$validator = new ArrayPart('idx', new Enum(['foo','bar']));
$ret = $validator->isValid(['idx' => 'bop']); //false
$ret = $validator->isValid(['foo' => 'bop']); //false
$ret = $validator->isValid(['idx' => 'bar']); //true

Chippyash\Validation\Common\Lambda: The Lambda validator expects a function on construction that will accept a value and return true or false:

use Chippyash\Validation\Common\Lambda;

$validator = new Lambda(function($value) {
    return $value === 'foo';
});
$ret = $validator->isValid('bar'); //false

You can pass in an optional second StringType parameter with the failure message:

use Chippyash\Validation\Common\Lambda;
use Chippyash\Type\String\StringType;

$validator = new Lambda(function($value) {
    return $value === 'foo';
}, new StringType('Oops, not a Foo'));

if (!$validator->isValid('bar')) { //false
    $errMsg = implode(' : ', $validator->getMessages());
}

You can specify a Messenger parameter as the second parameter to your function declaration if you want to handle adding error messages manually:

use Chippyash\Validation\Messenger;
use Chippyash\Validation\Common\Lambda;
use Chippyash\Type\String\StringType;

$validator = new Lambda(function($value, Messenger $messenger) {
    if ($value != 'foo') {
        $messenger->add(new StringType('error message'));
        return false;
    }
    return true;
});

Chippyash\Validation\Common\ISO8601DateString: Does the supplied string conform to an ISO8601 datestring pattern? Docs TBC. This validator is so complex that it probably deserves its own library. So be warned, it may be removed from this one!

Pattern validators allow you to validate complex data structures. These data structures will normally be a traversable (array, object with public parameters, object implementing a traversable interface, etc.) They are central to the usefulness of this library.
For example, let's say we have some incoming Json:

$json = '
{
    "a": "2015-12-01",
    "b": false,
    "c": [
        {"d": "fred", "e": "NN10 6HB"},
        {"d": "jim", "e": "EC1V 7DA"},
        {"d": "maggie", "e": "LE4 4HB"},
        {"d": "sue", "e": "SW17 9JR"}
    ],
    "f": ["a@b.com", "c@d.co.uk"]
}
';

The first thing we'll do is convert this into something PHP can understand, i.e.

$value = json_decode($json); //or use the Zend\Json class for solid support

The HasTypeMap validator allows us to validate both the keys and the values of our incoming data, and thus forms the heart of any complex validation requirement.

use Chippyash\Validation\Pattern\HasTypeMap;
use Chippyash\Validation\Common\ISO8601DateString;
use Chippyash\Validation\Common\IsArray;
use Chippyash\Validation\Common\Email;
use Chippyash\Type\Number\IntType;

$validator = new HasTypeMap([
    'a' => new ISO8601DateString(),
    'b' => 'boolean',
    'c' => new IsArray(),
    'f' => new IsArray()
]);
$ret = $validator->isValid($value);

Note, again, that the best we can do for the 'c' and 'f' elements is determine whether they are arrays. See the 'Repeater' below for how to solve this problem.

The values supplied in the TypeMap can be one of the following:

Any type name returned by PHP gettype(), i.e. "integer", "double", "string", "boolean", "resource", "NULL", "unknown"
The name of a class, e.g. '\Chippyash\Type\String\StringType'
A function conforming to the signature 'function($value, Messenger $messenger)' and returning true or false
An object implementing the ValidationPatternInterface

The Repeater pattern allows us to validate a non-associative array of values. Its constructor is:

__construct(ValidatorPatternInterface $validator, IntType $min = null, IntType $max = null)

If $min === null, then it will default to 1. If $max === null, then it will default to -1, i.e. no max. We can now rewrite our validator to validate the entire input data:

use Chippyash\Validation\Pattern\HasTypeMap;
use Chippyash\Validation\Pattern\Repeater;
use Chippyash\Validation\Common\ISO8601DateString;
use Chippyash\Validation\Common\IsArray;
use Chippyash\Validation\Common\Email;
use Chippyash\Validation\Common\UKPostCode;
use Chippyash\Type\Number\IntType;

$validator = new HasTypeMap([
    'a' => new ISO8601DateString(),
    'b' => 'boolean',
    'c' => new Repeater(
        new HasTypeMap([
            'd' => 'string',
            'e' => new UKPostCode()
        ]),
        null,
        new IntType(4)
    ),
    'f' => new Repeater(new Email())
]);
$ret = $validator->isValid($value);

This says that the 'c' element must contain 1-4 items conforming to the given TypeMap. You can see this in action in the examples/has-type-map.php script.

These validators allow you to carry out boolean logic. LAnd, LOr, LNot and LXor do as expected. Each requires ValidatorPatternInterface constructor parameters.
Here is a superficial example: use Chippyash\Validation\Logical; use Chippyash\Validation\Common\Lambda; $true = new Lambda(function($value){return true;}); $false = new Lambda(function($value){return false;}); $and = new Logical\LAnd($true, $false); $or = new Logical\LOr($true, $false); $not = new Logical\LNot($true); $xor = new Logical\LXor($true, $false); And of course, you can combine them: $validator = new Logical\LNot( new Logical\LAnd($true, new Logical\LXor($false, $true))); $ret = $validator->isValid('foo'); //the above is equivalent to $ret = !( true && (false xor true)) The real power of this is that it allows you to create alternate validation: $nullOrDate = new LOr( new Lambda(function($value) { return is_null($value); }), new Lambda(function($value) { try {new \DateTime($value); return true;} catch (\Exception $e) { return false;} }) ); All the above assumes you are running a single validation on the data and that all of the items specified by the validator pattern exist in the incoming data. What happens when you have optional items? This is where the ValidationProcessor comes in. ValidationProcessor allows you to run a number of validation passes over the data. Typically, you'd run a validation for all required data items first, and then run one or more subsequent validations checking for optional items. To use, construct the processor with your first (usually required item) validator, then simply add additional ones to it. $validator = new ValidationProcessor($requiredValidator); $validator->add($optionalValidator); Run your validation and gather any error messages if required: if (!$validator->validate($value)) { var_dump($validator->getMessenger()->implode()); } The processor will run each validation in turn and return the combined result. See examples/validation-processor.php for more illustration. Please note that what you are seeing of this documentation displayed on Github is always the latest dev-master. The features it describes may not be in a released version yet. Please check the documentation of the version you Compose in, or download. Test Contract in the docs directory. Check out ZF4 Packages for more packages. Want to add a feature? Fork it, write the test, amend it, do a pull request. Found a bug you can't figure out? Fork it, write the test, do a pull request. NB. Make sure you rebase to HEAD before your pull request. Or - raise an issue ticket. Install via Composer: "chippyash/validation": ">=2,<3" Or to use the latest, possibly unstable version: "chippyash/validation": "dev-master" Clone this repo, and then run Composer in local repo root to pull in dependencies git clone git@github.com:chippyash/Validation.git Validation cd Validation composer install To run the tests: cd Validation vendor/bin/phpunit -c test/phpunit.xml test/ This software library is released under the BSD 3 Clause license This software library is Copyright (c) 2015-2018, Ashley Kitson, UK This software library contains code items that are derived from other works: None of the contained code items breaks the overriding license, or vice versa, as far as I can tell. So as long as you stick to the license terms then you are safe. If at all unsure, please seek appropriate advice. If the original copyright owners of the derived code items object to this inclusion, please contact the author. I didn't do this by myself. I'm deeply indebted to those that trod the path before me. The following have done work that this library uses: Zend Validator: This library requires the Zend Validator Library.
Zend Validator provides a comprehensive set of use case specific validators. Whilst this library provides some specific examples of how to use them, it mainly builds on top of them. Nevertheless the Zend Validator library is a robust tool, and this dev wouldn't do without it. Zend I18n: Additional validations are available from the Zend I18n lib.
V1.0.0 Initial Release
V1.1.0 Update dependencies
V1.1.1 Move code coverage to codeclimate
V1.1.2 Add link to packages
V1.1.3 Verify PHP 7 compatibility
V1.1.4 Remove @internal flag on Lambda validator
V1.1.5 Allow wider range of zend dependencies
V1.2.0 Add additional common validators
V1.2.1 Update dependencies
V1.2.2 Update build scripts
V1.2.3 Update composer - forced by packagist composer.json format change
V2.0.0 BC Break. Withdraw old php version support
V2.1.0 Change of license from GPL V3 to BSD 3 Clause
Question: Outline how Einstein's and Planck's views of Science differed in relation to Science research being influenced by society and politics. I can't remember anything about this and I'm having trouble finding the information needed. Can someone please help me understand what it is that... A few months ago, I stumped my Mathematics teacher with a question when we were learning about displacement of a particle, given a formula. For example, ##x=t^{2}-t-1##, where x is in meters and t is in seconds. Anyway, she made it very clear how to solve displacement when given time t... Differentiation by first principles is as follows: $$y'=\lim_{h\rightarrow 0}\dfrac {f\left( x+h\right) -f\left( x\right) }{h}$$ So, assuming that ##y= e^{x},## can we prove, using first principles, that: $$\dfrac{dy}{dx}\left( e^{x}\right) =e^x$$ Or are there other methods that are... So, today we were studying the introduction of probability. For me it is fairly simple (for now). My question is something we discussed during class today. When betting on a horse before a horse race - say, a race of 5 horses - the odds ARE NOT 1/5 because the odds are not equal (e.g. one... Hi, I'm trying to learn LaTeX and one of the things I'm trying to figure out is what is the difference between \frac and \dfrac? I mean, look: \frac{a}{b} is: $$\frac{a}{b}$$ \dfrac{a}{b} is: $$\dfrac{a}{b}$$ Other than the thickened line, is there really any difference? Thank you... [I don't know if this is in the right topic or not so I hope I'm all good] My question is related to the exponential growth and decay formula Q=Ae^(kt). Simply, why is the value e used as the base for the exponent? Does it have to be e? If so, can anybody tell me why? Thanks ~| FilupSmith |~
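A short note on that last question (a standard fact, added here rather than quoted from the original thread): the base e is a convenience, not a necessity, because any base can be rewritten in terms of e, and e is the unique base whose exponential is its own derivative: $$\frac{d}{dt}b^{t}=b^{t}\ln b \quad\Rightarrow\quad \frac{d}{dt}e^{t}=e^{t}, \qquad Q=Ab^{t}=Ae^{kt}\ \text{with}\ k=\ln b.$$ Writing Q=Ae^(kt) makes the defining property of exponential growth explicit: dQ/dt = kQ, i.e. the growth rate is proportional to the current quantity, with constant exactly k.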
144) Consider two athletes running at variable speeds \(\displaystyle v_1(t)\) and \(\displaystyle v_2(t).\) The runners start and finish a race at exactly the same time. Explain why the two runners must be going the same speed at some point. 145) Two mountain climbers start their climb at base camp, taking two different routes, one steeper than the other, and arrive at the peak at exactly the same time. Is it necessarily true that, at some point, both climbers increased in altitude at the same rate? Answer: Yes. It is implied by the Mean Value Theorem for Integrals. 146) To get on a certain toll road a driver has to take a card that lists the mile entrance point. The card also has a timestamp. When going to pay the toll at the exit, the driver is surprised to receive a speeding ticket along with the toll. Explain how this can happen. 147) Set \(\displaystyle F(x)=∫^x_1(1−t)dt.\) Find \(\displaystyle F′(2)\) and the average value of \(\displaystyle F'\) over \(\displaystyle [1,2].\) Answer: \(\displaystyle F′(2)=−1;\) average value of \(\displaystyle F'\) over \(\displaystyle [1,2]\) is \(\displaystyle −1/2.\) In the following exercises, use the Fundamental Theorem of Calculus, Part 1, to find each derivative. 148) \(\displaystyle \frac{d}{dx}∫^x_1e^{−t^2}dt\) 149) \(\displaystyle \frac{d}{dx}∫^x_1e^{cost}dt\) Answer: \(\displaystyle e^{\cos x}\) 150) \(\displaystyle \frac{d}{dx}∫^x_3\sqrt{9−y^2}dy\) 151) \(\displaystyle \frac{d}{dx}∫^x_4\frac{ds}{\sqrt{16−s^2}}\) Answer: \(\displaystyle \frac{1}{\sqrt{16−x^2}}\) 152) \(\displaystyle \frac{d}{dx}∫^{2x}_x t\,dt\) 153) \(\displaystyle \frac{d}{dx}∫^{\sqrt{x}}_0tdt\) Answer: \(\displaystyle \sqrt{x}\frac{d}{dx}\sqrt{x}=\frac{1}{2}\) 154) \(\displaystyle \frac{d}{dx}∫^{sinx}_0\sqrt{1−t^2}dt\) 155) \(\displaystyle \frac{d}{dx}∫^1_{cosx}\sqrt{1−t^2}dt\) Answer: \(\displaystyle −\sqrt{1−cos^2x}\frac{d}{dx}cosx=|sinx|sinx\) 156) \(\displaystyle \frac{d}{dx}∫^{\sqrt{x}}_1\frac{t^2}{1+t^4}dt\) 157) \(\displaystyle \frac{d}{dx}∫^{x^2}_1\frac{\sqrt{t}}{1+t}dt\) Answer: \(\displaystyle 2x\frac{|x|}{1+x^2}\) 158) \(\displaystyle \frac{d}{dx}∫^{lnx}_0e^tdt\) 159) \(\displaystyle \frac{d}{dx}∫^{e^x}_1ln(u^2)du\) Answer: \(\displaystyle ln(e^{2x})\frac{d}{dx}e^x=2xe^x\) 160) The graph of \(\displaystyle y=∫^x_0f(t)dt\), where f is a piecewise constant function, is shown here. a. Over which intervals is f positive? Over which intervals is it negative? Over which intervals, if any, is it equal to zero? b. What are the maximum and minimum values of f? c. What is the average value of f? 161) The graph of \(\displaystyle y=∫^x_0f(t)dt\), where f is a piecewise constant function, is shown here. a. Over which intervals is f positive? Over which intervals is it negative? Over which intervals, if any, is it equal to zero? b. What are the maximum and minimum values of f? c. What is the average value of f? Answer: a. f is positive over \(\displaystyle [1,2]\) and \(\displaystyle [5,6]\), negative over \(\displaystyle [0,1]\) and \(\displaystyle [3,4]\), and zero over \(\displaystyle [2,3]\) and \(\displaystyle [4,5]\). b. The maximum value is 2 and the minimum is −3. c. The average value is 0. 162) The graph of \(\displaystyle y=∫^x_0ℓ(t)dt\), where \(\displaystyle ℓ\) is a piecewise linear function, is shown here. a. Over which intervals is ℓ positive? Over which intervals is it negative? Over which, if any, is it zero? b. Over which intervals is ℓ increasing? Over which is it decreasing? Over which, if any, is it constant? c. What is the average value of ℓ?
163) The graph of \(\displaystyle y=∫^x_0ℓ(t)dt\), where \(\displaystyle ℓ\) is a piecewise linear function, is shown here. a. Over which intervals is ℓ positive? Over which intervals is it negative? Over which, if any, is it zero? b. Over which intervals is ℓ increasing? Over which is it decreasing? Over which intervals, if any, is it constant? c. What is the average value of ℓ? Answer: a. ℓ is positive over \(\displaystyle [0,1]\) and \(\displaystyle [3,6]\), and negative over \(\displaystyle [1,3]\). b. It is increasing over \(\displaystyle [0,1]\) and \(\displaystyle [3,5]\), and it is constant over \(\displaystyle [1,3]\) and \(\displaystyle [5,6]\). c. Its average value is \(\displaystyle \frac{1}{3}\). In the following exercises, use a calculator to estimate the area under the curve by computing \(\displaystyle T_{10}\), the average of the left- and right-endpoint Riemann sums using \(\displaystyle N=10\) rectangles. Then, using the Fundamental Theorem of Calculus, Part 2, determine the exact area. 164) [T] \(\displaystyle y=x^2\) over \(\displaystyle [0,4]\) 165) [T] \(\displaystyle y=x^3+6x^2+x−5\) over \(\displaystyle [−4,2]\) Answer: \(\displaystyle T_{10}=49.08,∫^2_{−4}(x^3+6x^2+x−5)dx=48\) 166) [T] \(\displaystyle y=\sqrt{x^3}\) over \(\displaystyle [0,6]\) 167) [T] \(\displaystyle y=\sqrt{x}+x^2\) over \(\displaystyle [1,9]\) Answer: \(\displaystyle T_{10}=260.836,∫^9_1(\sqrt{x}+x^2)dx=260\) 168) [T] \(\displaystyle ∫(cosx−sinx)dx\) over \(\displaystyle [0,π]\) 169) [T] \(\displaystyle ∫\frac{4}{x^2}dx\) over \(\displaystyle [1,4]\) Answer: \(\displaystyle T_{10}=3.058,∫^4_1\frac{4}{x^2}dx=3\) In the following exercises, evaluate each definite integral using the Fundamental Theorem of Calculus, Part 2. 170) \(\displaystyle ∫^2_{−1}(x^2−3x)dx\) 171) \(\displaystyle ∫^3_{−2}(x^2+3x−5)dx\) Answer: \(\displaystyle F(x)=\frac{x^3}{3}+\frac{3x^2}{2}−5x,F(3)−F(−2)=−\frac{35}{6}\) 172) \(\displaystyle ∫^3_{−2}(t+2)(t−3)dt\) 173) \(\displaystyle ∫^3_2(t^2−9)(4−t^2)dt\) Answer: \(\displaystyle F(x)=−\frac{t^5}{5}+\frac{13t^3}{3}−36t,F(3)−F(2)=\frac{62}{15}\) 174) \(\displaystyle ∫^2_1x^9dx\) 175) \(\displaystyle ∫^1_0x^{99}dx\) Answer: \(\displaystyle F(x)=\frac{x^{100}}{100},F(1)−F(0)=\frac{1}{100}\) 176) \(\displaystyle ∫^8_4(4t^{5/2}−3t^{3/2})dt\) 177) \(\displaystyle ∫^4_{1/4}(x^2−\frac{1}{x^2})dx\) Answer: \(\displaystyle F(x)=\frac{x^3}{3}+\frac{1}{x},F(4)−F(\frac{1}{4})=\frac{1125}{64}\) 178) \(\displaystyle ∫^2_1\frac{2}{x^3}dx\) 179) \(\displaystyle ∫^4_1\frac{1}{2\sqrt{x}}dx\) Answer: \(\displaystyle F(x)=\sqrt{x},F(4)−F(1)=1\) 180) \(\displaystyle ∫^4_1\frac{2−\sqrt{t}}{t^2}dt\) 181) \(\displaystyle ∫^{16}_1\frac{dt}{t^{1/4}}\) Answer: \(\displaystyle F(x)=\frac{4}{3}t^{3/4},F(16)−F(1)=\frac{28}{3}\) 182) \(\displaystyle ∫^{2π}_0cosθdθ\) 183) \(\displaystyle ∫^{π/2}_0sinθdθ\) Answer: \(\displaystyle F(x)=−cosx,F(\frac{π}{2})−F(0)=1\) 184) \(\displaystyle ∫^{π/4}_0sec^2θdθ\) 185) \(\displaystyle ∫^{π/4}_0secθtanθdθ\) Answer: \(\displaystyle F(x)=secx,F(\frac{π}{4})−F(0)=\sqrt{2}−1\) 186) \(\displaystyle ∫^{π/4}_{π/3}cscθcotθdθ\) 187) \(\displaystyle ∫^{π/2}_{π/4}csc^2θdθ\) Answer: \(\displaystyle F(x)=−cot(x),F(\frac{π}{2})−F(\frac{π}{4})=1\) 188) \(\displaystyle ∫^2_1(\frac{1}{t^2}−\frac{1}{t^3})dt\) 189) \(\displaystyle ∫^{−1}_{−2}(\frac{1}{t^2}−\frac{1}{t^3})dt\) Answer: \(\displaystyle F(x)=−\frac{1}{x}+\frac{1}{2x^2},F(−1)−F(−2)=\frac{7}{8}\) In the following exercises, use the evaluation theorem to express the integral as a function \(\displaystyle F(x).\) 190)
\(\displaystyle ∫^x_at^2dt\) 191) \(\displaystyle ∫^x_1e^tdt\) Answer: \(\displaystyle F(x)=e^x−e\) 192) \(\displaystyle ∫^x_0costdt\) 193) \(\displaystyle ∫^x_{−x}sintdt\) Answer: \(\displaystyle F(x)=0\) In the following exercises, identify the roots of the integrand to remove absolute values, then evaluate using the Fundamental Theorem of Calculus, Part 2. 194) \(\displaystyle ∫^3_{−2}|x|dx\) 195) \(\displaystyle ∫^4_{−2}∣t^2−2t−3∣dt\) Answer: \(\displaystyle ∫^{−1}_{−2}(t^2−2t−3)dt−∫^3_{−1}(t^2−2t−3)dt+∫^4_3(t^2−2t−3)dt=\frac{46}{3}\) 196) \(\displaystyle ∫^π_0|cost|dt\) 197) \(\displaystyle ∫^{π/2}_{−π/2}|sint|dt\) Answer: \(\displaystyle −∫^0_{−π/2}sintdt+∫^{π/2}_0sintdt=2\) 198) Suppose that the number of hours of daylight on a given day in Seattle is modeled by the function \(\displaystyle −3.75cos(\frac{πt}{6})+12.25\), with t given in months and \(\displaystyle t=0\) corresponding to the winter solstice. a. What is the average number of daylight hours in a year? b. At which times \(\displaystyle t_1\) and \(\displaystyle t_2\), where \(\displaystyle 0≤t_1<t_2<12,\) do the number of daylight hours equal the average number? c. Write an integral that expresses the total number of daylight hours in Seattle between \(\displaystyle t_1\) and \(\displaystyle t_2\) d. Compute the mean hours of daylight in Seattle between \(\displaystyle t_1\) and \(\displaystyle t_2\), where \(\displaystyle 0≤t_1<t_2<12\), and then between \(\displaystyle t_2\) and \(\displaystyle t_1\), and show that the average of the two is equal to the average day length. 199) Suppose the rate of gasoline consumption in the United States can be modeled by a sinusoidal function of the form \(\displaystyle (11.21−cos(\frac{πt}{6}))×10^9\) gal/mo. a. What is the average monthly consumption, and for which values of t is the rate at time t equal to the average rate? b. What is the number of gallons of gasoline consumed in the United States in a year? c. Write an integral that expresses the average monthly U.S. gas consumption during the part of the year between the beginning of April \(\displaystyle (t=3)\) and the end of September \(\displaystyle (t=9).\) Answer: a. The average is \(\displaystyle 11.21×10^9\) since \(\displaystyle cos(\frac{πt}{6})\) has period 12 and integral 0 over any period. Consumption is equal to the average when \(\displaystyle cos(\frac{πt}{6})=0\), when \(\displaystyle t=3\), and when \(\displaystyle t=9\). b. Total consumption is the average rate times duration: \(\displaystyle 11.21×12×10^9=1.35×10^{11}\) c. \(\displaystyle 10^9(11.21−\frac{1}{6}∫^9_3cos(\frac{πt}{6})dt)=10^9(11.21+\frac{2}{π})≈11.85×10^9\) 200) Explain why, if f is continuous over \(\displaystyle [a,b],\) there is at least one point \(\displaystyle c∈[a,b]\) such that \(\displaystyle f(c)=\frac{1}{b−a}∫^b_af(t)dt.\) 201) Explain why, if f is continuous over \(\displaystyle [a,b]\) and is not equal to a constant, there is at least one point \(\displaystyle M∈[a,b]\) such that \(\displaystyle f(M)=\frac{1}{b−a}∫^b_af(t)dt\) and at least one point \(\displaystyle m∈[a,b]\) such that \(\displaystyle f(m)<\frac{1}{b−a}∫^b_af(t)dt\). Answer: If f is not constant, then its average is strictly smaller than the maximum and larger than the minimum, which are attained over \(\displaystyle [a,b]\) by the extreme value theorem. 202) Kepler’s first law states that the planets move in elliptical orbits with the Sun at one focus.
The closest point of a planetary orbit to the Sun is called the perihelion (for Earth, it currently occurs around January 3) and the farthest point is called the aphelion (for Earth, it currently occurs around July 4). Kepler’s second law states that planets sweep out equal areas of their elliptical orbits in equal times. Thus, the two arcs indicated in the following figure are swept out in equal times. At what time of year is Earth moving fastest in its orbit? When is it moving slowest? 203) A point on an ellipse with major axis length 2a and minor axis length 2b has the coordinates \(\displaystyle (acosθ,bsinθ),0≤θ≤2π.\) a. Show that the distance from this point to the focus at \(\displaystyle (−c,0)\) is \(\displaystyle d(θ)=a+ccosθ\), where \(\displaystyle c=\sqrt{a^2−b^2}\). b. Use these coordinates to show that the average distance \(\displaystyle \bar{d}\) from a point on the ellipse to the focus at \(\displaystyle (−c,0),\) with respect to angle θ, is a. Solution: \(\displaystyle a. d^2(θ)=(acosθ+c)^2+b^2sin^2θ=a^2+c^2cos^2θ+2accosθ=(a+ccosθ)^2;\) \(\displaystyle b. \bar{d}=\frac{1}{2π}∫^{2π}_0(a+ccosθ)dθ=a\) 204) As implied earlier, according to Kepler’s laws, Earth’s orbit is an ellipse with the Sun at one focus. The perihelion for Earth’s orbit around the Sun is 147,098,290 km and the aphelion is 152,098,232 km. a. By placing the major axis along the x-axis, find the average distance from Earth to the Sun. b. The classic definition of an astronomical unit (AU) is the distance from Earth to the Sun, and its value was computed as the average of the perihelion and aphelion distances. Is this definition justified? 205) The force of gravitational attraction between the Sun and a planet is \(\displaystyle F(θ)=\frac{GmM}{r^2(θ)}\), where m is the mass of the planet, M is the mass of the Sun, G is a universal constant, and \(\displaystyle r(θ)\) is the distance between the Sun and the planet when the planet is at an angle θ with the major axis of its orbit. Assuming that M, m, and the ellipse parameters a and b (half-lengths of the major and minor axes) are given, set up—but do not evaluate—an integral that expresses in terms of \(\displaystyle G,m,M,a,b\) the average gravitational force between the Sun and the planet. Answer: Mean gravitational force \(\displaystyle = \frac{GmM}{2π}∫^{2π}_0\frac{dθ}{(a+\sqrt{a^2−b^2}cosθ)^2}\). 206) The displacement from rest of a mass attached to a spring satisfies the simple harmonic motion equation \(\displaystyle x(t)=Acos(ωt−ϕ),\) where \(\displaystyle ϕ\) is a phase constant, ω is the angular frequency, and A is the amplitude. Find the average velocity, the average speed (magnitude of velocity), the average displacement, and the average distance from rest (magnitude of displacement) of the mass.
Does anyone know how to evaluate the following limit? $$\lim_{x\to\frac{\pi}{2}}\left(\frac{1}{\frac{\pi}{2}-x}-\tan {x}\right)$$ The answer is 0, but I want to see a step-by-step solution if possible. Here are the steps $$ \lim_{x\to \frac{\pi}{2}} \left[\frac{1}{\frac{\pi}{2}-x}-\tan x\right]= \lim_{x\to \frac{\pi}{2}} \left[\frac{2}{\pi-2x}-\frac{\sin x}{\cos x}\right] $$ $$ = \lim_{x\to \frac{\pi}{2}} \left[\frac{2\cos x-(\pi-2x)\sin x}{(\pi-2x)\cos x}\right]= \lim_{x\to \frac{\pi}{2}} \left[\frac{\frac{d}{dx}[2\cos x-(\pi-2x)\sin x]}{\frac{d}{dx}[(\pi-2x)\cos x]}\right] $$ $$ = \lim_{x\to \frac{\pi}{2}} \left[\frac{-2\sin x-(\pi-2x)\cos x+2\sin x}{-(\pi-2x)\sin x-2\cos x}\right] = \lim_{x\to \frac{\pi}{2}} \left[\frac{\frac{d}{dx}[(\pi-2x)\cos x]}{\frac{d}{dx}[(\pi-2x)\sin x+2\cos x]}\right] $$ $$ = \lim_{x\to \frac{\pi}{2}} \left[\frac{-(\pi-2x)\sin x-2\cos x}{(\pi-2x)\cos x-2\sin x-2\sin x}\right] = \lim_{x\to \frac{\pi}{2}} \left[\frac{(\pi-2x)\sin x+2\cos x}{4\sin x -(\pi-2x)\cos x}\right] $$ $$ = \frac{0\cdot 1+2\cdot 0}{4\cdot 1 -0\cdot 0}= \frac{0}{4}=0 $$ Hints: 1) Put everything over a common denominator. 2) Since $f(\pi/2)$ is undefined (and hence your function $f$ is not continuous at $\pi/2$), you can't just plug in $\pi/2,$ so use L'Hospital's rule (i.e. differentiate the numerator and the denominator with respect to $x$ until you no longer obtain a result in the form $\frac{0}{0}$). Take $y=\pi/2-x$ and write the limit as $$\lim_{y\rightarrow0}\,\left(\frac{1}{y}-\frac{\cos\,y}{\sin \,y}\right),$$ then use a Maclaurin series expansion.
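To complete that last hint (our own filling-in, using the standard series $\sin y = y - y^3/6 + \cdots$ and $\cos y = 1 - y^2/2 + \cdots$): $$\frac{1}{y}-\frac{\cos y}{\sin y}=\frac{\sin y-y\cos y}{y\sin y}=\frac{\left(y-\frac{y^3}{6}+\cdots\right)-\left(y-\frac{y^3}{2}+\cdots\right)}{y^2+\cdots}=\frac{\frac{y^3}{3}+\cdots}{y^2+\cdots}\longrightarrow 0 \quad (y\to 0),$$ which confirms the answer without repeated differentiation.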
SolidsWW Flash Applet Sample Problem 3 Flash Applets embedded in WeBWorK questions solidsWW Example Sample Problem 3 with solidsWW.swf embedded A standard WeBWorK PG file with an embedded applet has six sections: A tagging and description section, that describes the problem for future users and authors, An initialization section, that loads required macros for the problem, A problem set-up section that sets variables specific to the problem, An Applet link section that inserts the applet and configures it, (this section is not present in WeBWorK problems without an embedded applet) A text section, that gives the text that is shown to the student, and An answer and solution section, that specifies how the answer(s) to the problem is(are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete. The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below: There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1 solidsWW Flash Applet Sample Problem 2 And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem USub Applet Sample Problem trigwidget Applet Sample Problem solidsWW Flash Applet Sample Problem 1 GraphLimit Flash Applet Sample Problem 2 Other useful links: Flash Applets Tutorial Things to consider in developing WeBWorK problems with embedded Flash applets PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ########################################## This is the tagging and description section. The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code. All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')). DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", ); This is the initialization section, which loads the macros required by the problem. TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = 2*random(2,6,1); $b = 2*$a; $xy = 'x'; $func1 = "x"; $func2 = "2*$a-x"; $xmax = Compute("2*$a"); $shapeType = 'poly'; $sides = random(3,8,1); $correctAnswer = Compute("2*$a^3*$sides*tan(pi/$sides)"); This is the problem set-up section. The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set the problem-specific values in the code above. ######################################### # How to use the solidWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value.
# ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do? # The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle) # the radius of the solid which # varies with the height ######################################### This is the Applet link section. Those portions of the code that begin a line with # are comments. ################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' ); You must include the section that follows, which configures the applet. ################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='$sides' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0066cc</theColor> <profile> <piece func='$func1' cut='$a'/> <piece func='$func2' cut='$xmax'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='$sides' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0x0066cc</theColor> <profile> <piece func='$func1' cut='$a'/> <piece func='$func2' cut='$xmax'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, ))); The lines above configure the applet. The configuration of the applet is done in XML. The arguments of the configuration are set to the values held in the problem set-up variables. Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission. TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT The text between the BEGIN_TEXT and END_TEXT markers is shown to the student. BEGIN_TEXT $BR $BR Find the volume of the figure shown. The cross-section of the figure is a regular $sides-sided polygon. The area of the polygon can be computed as a function of the length of a line segment from the center of the $sides-sided polygon to the midpoint of one of its sides and is given by \($sides x^2\tan\left(\frac{\pi}{$sides}\right)\) where \(x\) is the length of the bisector of one of the sides (shown in black on the cross-section graph). A formula similar to the cylindrical shells formula will then provide the volume of the figure.
Simply replace \(\pi\) in the formula \[V=2\pi\int x f(x) dx\] with \($sides \tan\left(\frac{\pi}{$sides}\right)\) to find the volume of the solid shown where for this solid \[f(x)=\begin{cases}x&x\le $a\\ $b-x&$a<x\le $b\end{cases}\] for \(x=0\) to \($b\). \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings; This is the text section, which gives the text that is shown to the student. #################################### # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT(); This is the answer and solution section, in which the student's answer is marked against $correctAnswer.
In this section, we use some basic integration formulas studied previously to solve some key applied problems. It is important to note that these formulas are presented in terms of indefinite integrals. Although definite and indefinite integrals are closely related, there are some key differences to keep in mind. A definite integral is either a number (when the limits of integration are constants) or a single function (when one or both of the limits of integration are variables). An indefinite integral represents a family of functions, all of which differ by a constant. As you become more familiar with integration, you will get a feel for when to use definite integrals and when to use indefinite integrals. You will naturally select the correct approach for a given problem without thinking too much about it. However, until these concepts are cemented in your mind, think carefully about whether you need a definite integral or an indefinite integral and make sure you are using the proper notation based on your choice. Basic Integration Formulas Recall the integration formulas given in [link] and the rule on properties of definite integrals. Let’s look at a few examples of how to apply these rules. Example \(\PageIndex{1}\): Integrating a Function Using the Power Rule Use the power rule to integrate the function \( ∫^4_1\sqrt{t}(1+t)dt\). Solution The first step is to rewrite the function and simplify it so we can apply the power rule: \[ ∫^4_1\sqrt{t}(1+t)dt=∫^4_1t^{1/2}(1+t)dt=∫^4_1(t^{1/2}+t^{3/2})dt.\] Now apply the power rule: \[ ∫^4_1(t^{1/2}+t^{3/2})dt=(\frac{2}{3}t^{3/2}+\frac{2}{5}t^{5/2})∣^4_1\] \[ =[\frac{2}{3}(4)^{3/2}+\frac{2}{5}(4)^{5/2}]−[\frac{2}{3}(1)^{3/2}+\frac{2}{5}(1)^{5/2}]=\frac{256}{15}.\] Exercise \(\PageIndex{1}\) Find the definite integral of \( f(x)=x^2−3x\) over the interval \([1,3].\) Hint Follow the process from Example to solve the problem. Answer \[ −\frac{10}{3}\] The Net Change Theorem The net change theorem considers the integral of a rate of change. It says that when a quantity changes, the new value equals the initial value plus the integral of the rate of change of that quantity. The formula can be expressed in two ways. The second is more familiar; it is simply the definite integral. Net Change Theorem The new value of a changing quantity equals the initial value plus the integral of the rate of change: \[F(b)=F(a)+∫^b_aF'(x)dx\] or \[∫^b_aF'(x)dx=F(b)−F(a).\] Subtracting \(F(a)\) from both sides of the first equation yields the second equation. Since they are equivalent formulas, which one we use depends on the application. The significance of the net change theorem lies in the results. Net change can be applied to area, distance, and volume, to name only a few applications. Net change accounts for negative quantities automatically without having to write more than one integral. To illustrate, let’s apply the net change theorem to a velocity function in which the result is displacement. We looked at a simple example of this in The Definite Integral. Suppose a car is moving due north (the positive direction) at 40 mph between 2 p.m. and 4 p.m., then the car moves south at 30 mph between 4 p.m. and 5 p.m. We can graph this motion as shown in Figure. Figure \(\PageIndex{1}\): The graph shows speed versus time for the given motion of a car. Just as we did before, we can use definite integrals to calculate the net displacement as well as the total distance traveled. 
The net displacement is given by \[ ∫^5_2v(t)dt=∫^4_240dt+∫^5_4−30dt=80−30=50.\] Thus, at 5 p.m. the car is 50 mi north of its starting position. The total distance traveled is given by \[ ∫^5_2|v(t)|dt=∫^4_240dt+∫^5_430dt=80+30=110.\] Therefore, between 2 p.m. and 5 p.m., the car traveled a total of 110 mi. To summarize, net displacement may include both positive and negative values. In other words, the velocity function accounts for both forward distance and backward distance. To find net displacement, integrate the velocity function over the interval. Total distance traveled, on the other hand, is always positive. To find the total distance traveled by an object, regardless of direction, we need to integrate the absolute value of the velocity function. Example \(\PageIndex{2}\): Finding Net Displacement Given a velocity function \(v(t)=3t−5\) (in meters per second) for a particle in motion from time \(t=0\) to time \(t=3,\) find the net displacement of the particle. Solution Applying the net change theorem, we have \[ ∫^3_0(3t−5)dt=\frac{3t^2}{2}−5t∣^3_0=[\frac{3(3)^2}{2}−5(3)]−0=\frac{27}{2}−15=\frac{27}{2}−\frac{30}{2}=−\frac{3}{2}.\] The net displacement is \( −\frac{3}{2}\) m (Figure). Figure \(\PageIndex{2}\): The graph shows velocity versus time for a particle moving with a linear velocity function. Example \(\PageIndex{3}\): Finding the Total Distance Traveled Use Example to find the total distance traveled by a particle according to the velocity function \(v(t)=3t−5\) m/sec over a time interval \([0,3].\) Solution The total distance traveled includes both the positive and the negative values. Therefore, we must integrate the absolute value of the velocity function to find the total distance traveled. To continue with the example, use two integrals to find the total distance. First, find the t-intercept of the function, since that is where the division of the interval occurs. Set the equation equal to zero and solve for t. Thus, \(3t−5=0\) \(3t=5\) \( t=\frac{5}{3}.\) The two subintervals are \( [0,\frac{5}{3}]\) and \( [\frac{5}{3},3]\). To find the total distance traveled, integrate the absolute value of the function. Since the function is negative over the interval \([0,\frac{5}{3}]\), we have \(|v(t)|=−v(t)\) over that interval. Over \([ \frac{5}{3},3]\), the function is positive, so \(|v(t)|=v(t)\). Thus, we have \( ∫^3_0|v(t)|dt=∫^{5/3}_0−v(t)dt+∫^3_{5/3}v(t)dt\) \( =∫^{5/3}_0(5−3t)dt+∫^3_{5/3}(3t−5)dt\) \( =(5t−\frac{3t^2}{2})∣^{5/3}_0+(\frac{3t^2}{2}−5t)∣^3_{5/3}\) \( =[5(\frac{5}{3})−\frac{3(5/3)^2}{2}]−0+[\frac{27}{2}−15]−[\frac{3(5/3)^2}{2}−\frac{25}{3}]\) \( =\frac{25}{3}−\frac{25}{6}+\frac{27}{2}−15−\frac{25}{6}+\frac{25}{3}=\frac{41}{6}\). So, the total distance traveled is \( \frac{41}{6}\) m. Exercise \(\PageIndex{2}\) Find the net displacement and total distance traveled in meters given the velocity function \(f(t)=\frac{1}{2}e^t−2\) over the interval \([0,2]\). Hint Follow the procedures from Example and Example. Note that \(f(t)≤0\) for \(t≤ln4\) and \(f(t)≥0\) for \(t≥ln4\). Answer Net displacement: \( \frac{e^2−9}{2}≈−0.8055m;\) total distance traveled: \( 4ln4−7.5+\frac{e^2}{2}≈1.740 m\) Applying the Net Change Theorem The net change theorem can be applied to the flow and consumption of fluids, as shown in Example. Example \(\PageIndex{4}\): How Many Gallons of Gasoline Are Consumed? If the motor on a motorboat is started at \(t=0\) and the boat consumes gasoline at the rate of \(5−t^3\) gal/hr, how much gasoline is used in the first 2 hours?
Solution Express the problem as a definite integral, integrate, and evaluate using the Fundamental Theorem of Calculus. The limits of integration are the endpoints of the interval [0,2]. We have \[ ∫^2_0(5−t^3)dt=(5t−\frac{t^4}{4})∣^2_0=[5(2)−\frac{(2)^4}{4}]−0=10−\frac{16}{4}=6.\] Thus, the motorboat uses 6 gal of gas in 2 hours. Example \(\PageIndex{5}\): Chapter Opener: Iceboats As we saw at the beginning of the chapter, top iceboat racers can attain speeds of up to five times the wind speed. Andrew is an intermediate iceboater, though, so he attains speeds equal to only twice the wind speed. Figure \(\PageIndex{3}\): (credit: modification of work by Carter Brown, Flickr) Suppose Andrew takes his iceboat out one morning when a light 5-mph breeze has been blowing all morning. As Andrew gets his iceboat set up, though, the wind begins to pick up. During his first half hour of iceboating, the wind speed increases according to the function \(v(t)=20t+5.\) For the second half hour of Andrew’s outing, the wind remains steady at 15 mph. In other words, the wind speed is given by \[ v(t)=\begin{cases}20t+5& for 0≤t≤\frac{1}{2}\\15 & for \frac{1}{2}≤t≤1\end{cases}.\] Recalling that Andrew’s iceboat travels at twice the wind speed, and assuming he moves in a straight line away from his starting point, how far is Andrew from his starting point after 1 hour? Solution To figure out how far Andrew has traveled, we need to integrate his velocity, which is twice the wind speed. Then Distance =\( ∫^1_02v(t)dt.\) Substituting the expressions we were given for \(v(t)\), we get \( ∫^1_02v(t)dt=∫^{1/2}_02v(t)dt+∫^1_{1/2}2v(t)dt\) \( =∫^{1/2}_02(20t+5)dt+∫^1_{1/2}2(15)dt\) \( =∫^{1/2}_0(40t+10)dt+∫^1_{1/2}30dt\) \( =[20t^2+10t]|^{1/2}_0+[30t]|^1_{1/2}\) \( =(\frac{20}{4}+5)−0+(30−15)\) \(=25.\) Andrew is 25 mi from his starting point after 1 hour. Exercise \(\PageIndex{3}\) Suppose that, instead of remaining steady during the second half hour of Andrew’s outing, the wind starts to die down according to the function \(v(t)=−10t+15.\) In other words, the wind speed is given by \( v(t)=\begin{cases}20t+5 & for 0≤t≤\frac{1}{2}\\−10t+15& for\frac{1}{2}≤t≤1\end{cases}\). Under these conditions, how far from his starting point is Andrew after 1 hour? Hint Don’t forget that Andrew’s iceboat moves twice as fast as the wind. Answer \(17.5 mi\) Integrating Even and Odd Functions We saw in Functions and Graphs that an even function is a function in which \(f(−x)=f(x)\) for all x in the domain—that is, the graph of the curve is unchanged when x is replaced with −x. The graphs of even functions are symmetric about the y-axis. An odd function is one in which \(f(−x)=−f(x)\) for all x in the domain, and the graph of the function is symmetric about the origin. Integrals of even functions, when the limits of integration are from −a to a, involve two equal areas, because they are symmetric about the y-axis. Integrals of odd functions, when the limits of integration are similarly \([−a,a],\) evaluate to zero because the areas above and below the x-axis are equal. Rule: Integrals of Even and Odd Functions For continuous even functions such that \(f(−x)=f(x),\) \[∫^a_{−a}f(x)dx=2∫^a_0f(x)dx.\] For continuous odd functions such that \(f(−x)=−f(x),\) \[∫^a_{−a}f(x)dx=0.\] Example \(\PageIndex{6}\): Integrating an Even Function Integrate the even function \( ∫^2_{−2}(3x^8−2)dx\) and verify that the integration formula for even functions holds. Solution The symmetry appears in the graphs in Figure.
Graph (a) shows the region below the curve and above the x-axis. We have to zoom in to this graph by a huge amount to see the region. Graph (b) shows the region above the curve and below the x-axis. The signed area of this region is negative. Both views illustrate the symmetry about the y-axis of an even function. We have \( ∫^2_{−2}(3x^8−2)dx=(\frac{x^9}{3}−2x)∣^2_{−2}\) \( =[\frac{(2)^9}{3}−2(2)]−[\frac{(−2)^9}{3}−2(−2)]\) \( =(\frac{512}{3}−4)−(−\frac{512}{3}+4)\) \( =\frac{1000}{3}\). To verify the integration formula for even functions, we can calculate the integral from 0 to 2 and double it, then check to make sure we get the same answer. \[ ∫^2_0(3x^8−2)dx=(\frac{x^9}{3}−2x)∣^2_0=\frac{512}{3}−4=\frac{500}{3}\] Since \( 2⋅\frac{500}{3}=\frac{1000}{3},\) we have verified the formula for even functions in this particular example. Figure \(\PageIndex{4}\): Graph (a) shows the positive area between the curve and the x-axis, whereas graph (b) shows the negative area between the curve and the x-axis. Both views show the symmetry about the y-axis. Example \(\PageIndex{7}\): Integrating an Odd Function Evaluate the definite integral of the odd function \(−5sinx\) over the interval \([−π,π].\) Solution The graph is shown in Figure. We can see the symmetry about the origin by the positive area above the x-axis over \([−π,0]\), and the negative area below the x-axis over \([0,π].\) We have \[ ∫^π_{−π}−5sinxdx=−5(−cosx)|^π_{−π}=5cosx|^π_{−π}=[5cosπ]−[5cos(−π)]=−5−(−5)=0.\] Figure \(\PageIndex{5}\):The graph shows areas between a curve and the x-axis for an odd function. Exercise \(\PageIndex{4}\) Integrate the function \( ∫^2_{−2}x^4dx.\) Hint Integrate an even function. Answer \(\dfrac{64}{5}\) Key Concepts The net change theorem states that when a quantity changes, the final value equals the initial value plus the integral of the rate of change. Net change can be a positive number, a negative number, or zero. The area under an even function over a symmetric interval can be calculated by doubling the area over the positive x-axis. For an odd function, the integral over a symmetric interval equals zero, because half the area is negative. Key Equations Net Change Theorem \( F(b)=F(a)+∫^b_aF'(x)dx\) or \(∫^b_aF'(x)dx=F(b)−F(a)\) Glossary net change theorem if we know the rate of change of a quantity, the net change theorem says the future quantity is equal to the initial quantity plus the integral of the rate of change of the quantity Contributors Gilbert Strang (MIT) and Edwin “Jed” Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license. Download for free at http://cnx.org.
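The net change computations above are easy to sanity-check numerically. The following short Python sketch (our addition, not part of the OpenStax text; it assumes SciPy is installed) integrates v(t) = 3t − 5 and |v(t)| over [0,3] and reproduces the net displacement and total distance from Examples 2 and 3:

# Numerical check: net displacement vs. total distance traveled
from scipy.integrate import quad

v = lambda t: 3 * t - 5                        # velocity (m/sec) from Examples 2 and 3

net, _ = quad(v, 0, 3)                         # integral of v: net displacement
total, _ = quad(lambda t: abs(v(t)), 0, 3)     # integral of |v|: total distance

print(net)    # -1.5       (= -3/2 m)
print(total)  #  6.8333... (= 41/6 m)

Integrating the velocity gives the signed net displacement, while integrating its absolute value gives the total distance, matching the worked answers above.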
Evaluate the integral: $$\mathcal{J} = \int_0^\pi \frac{{\rm d}x}{1-2a \cos x + a^2} \quad , \quad \left| a \right| <1$$
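The original page linked its solution elsewhere; here is a standard evaluation (our own sketch) using the Poisson-kernel expansion, valid for $|a|<1$: $$\frac{1-a^2}{1-2a\cos x+a^2}=1+2\sum_{n=1}^{\infty}a^n\cos nx.$$ Integrating term by term over $[0,\pi]$ kills every cosine term, so $$\int_0^\pi \frac{1-a^2}{1-2a\cos x+a^2}\,{\rm d}x=\pi \quad\Longrightarrow\quad \mathcal{J}=\frac{\pi}{1-a^2}.$$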
I am trying to show that the double integral over $\mathbb{R}^2$ of the joint PDF of Gaussian Distribution is $1$. I am looking at: $$\frac{1}{2\pi} \cdot \frac{1}{\sqrt{a^2b^2-c^2}} \iint_{\mathbb{R^2}} \exp\left\{\frac{-(a^2(x-m)^2 - 2c(x-m)(y-n)+b^2(y-n)^2)}{2(a^2b^2 - c^2)}\right\} dx dy$$ where $m, n \in\mathbb{R}$, $a>0, b>0$, and $c \in \mathbb{R}$ s.t. $|c| < ab$. How I'm attempting to solve this is by trying to get an upper bound of this integral by removing $c$ in the exponent of the integral, and replacing it by $ab$, and then completing the square so that the term in the exponent gives: $$\exp\left\{\frac{-(a(x-m) - b(y-n))^2}{2(a^2b^2-c^2)}\right\}$$ and trying to use the fact that the integral of $$\exp\left\{\frac{-(x^2+y^2)}{2}\right\}$$ over $\mathbb{R}^2$ gives $2\pi$. But I don't really know exactly how to connect these two statements. If anyone could help it would be appreciated. I am only looking for ways to do this with integration tricks.
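One way to finish with only integration tricks (a sketch of ours, not from the original post): write $u=x-m$, $v=y-n$, $D=a^2b^2-c^2>0$, and complete the square in $u$: $$a^2u^2-2cuv+b^2v^2=\left(au-\frac{c}{a}v\right)^2+\frac{D}{a^2}v^2.$$ Integrating first over $u$ (a shifted Gaussian) and then over $v$: $$\int_{\mathbb{R}}\exp\left\{\frac{-(au-cv/a)^2}{2D}\right\}du=\frac{\sqrt{2\pi D}}{a}, \qquad \int_{\mathbb{R}}\exp\left\{\frac{-v^2}{2a^2}\right\}dv=a\sqrt{2\pi},$$ so the double integral equals $2\pi\sqrt{D}$, which exactly cancels the prefactor $\frac{1}{2\pi\sqrt{a^2b^2-c^2}}$ and gives $1$.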
Groups of Order 8 Theorem Let $G$ be a group of order $8$. Then $G$ is isomorphic to one of the following: $\Z_8$ $\Z_4 \oplus \Z_2$ $\Z_2 \oplus \Z_2 \oplus \Z_2$ $D_4$ $\Dic 2$ where: $\Z_n$ is the cyclic group of order $n$ $D_4$ is the dihedral group of order $8$ $\Dic 2$ is the dicyclic group of order $8$. Proof If $G$ is abelian, then by the classification of finite abelian groups it is isomorphic to $\Z_8$, $\Z_4 \oplus \Z_2$ or $\Z_2 \oplus \Z_2 \oplus \Z_2$. $\Box$ Let $G$ be non-abelian. Then $G$ is not cyclic, since every cyclic group is abelian. So there is no order $8$ element. Moreover, if every non-identity element had order $2$, then $G$ would be abelian; so $G$ has an element of order $4$. Let it be denoted by $a$. Let $A$ denote the subgroup generated by $a$. Let $b \in G \setminus A$. Then $\set {a, b}$ is a generating set for $G$, since $A$ has index $2$ in $G$. Now we consider how $a$ and $b$ interact with each other. Consider the element $x = b a b^{-1}$. By Subgroup of Index 2 is Normal, $b A b^{-1} = A$. So $x \in A$. By Order of Conjugate Element equals Order of Element, the only possible choices are $x = a$ or $x = a^3$. If $x = a$, then $a$ and $b$ commute, and $G$ would be abelian. So $b a b^{-1} = a^3$. It suffices to consider the order of $b$: If $\order b = 2$, then $G \cong D_4$. If $\order b = 4$, then $G \cong \Dic 2$. $\blacksquare$
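For reference (these are the standard presentations, added here for clarity rather than quoted from the source page), the two non-abelian possibilities can be written as $$D_4 \cong \langle a, b \mid a^4 = b^2 = e,\ b a b^{-1} = a^3 \rangle, \qquad \Dic 2 \cong \langle a, b \mid a^4 = e,\ b^2 = a^2,\ b a b^{-1} = a^3 \rangle,$$ which makes explicit how the order of $b$ distinguishes the two cases: $\order b = 2$ in $D_4$, while in $\Dic 2$ we have $b^2 = a^2$, so $\order b = 4$.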
Practice Paper 2 Question 19 A population \(x(t)\) grows in time according to \(\frac{dx}{dt}=(x-1)(2x-1).\) Knowing that \(x(0)=0,\) after how much time does it reach \(50\%\) of its ultimate value as time passes? Related topics Warm-up Questions Integrate \(\int (6x-4)^2 dx.\) Solve \(\frac{dy}{dx}=\cos x\) to find \(y(x),\) given that \(y(0)=1.\) Evaluate \(\lim\limits_{x \to -\infty} \big(\frac{3x^2+7x+2}{x^2+1}\big).\) Hints Hint 1 How could you manipulate this differential equation into quantities you can integrate? Hint 2 How can you rewrite the product of the resulting two fractions as a sum/difference? Hint 3 What does "ultimate value" mean in terms of time? Hint 4 ... how about letting time go to infinity? Hint 5 Which limit to infinity do you need to evaluate to find the ultimate value of the population? Solution Separating the variables (\(x\) on one side, \(t\) on the other), we can rewrite the differential equation as \(dt = \frac{dx}{(x-1)(2x-1)}.\) Use partial fractions to split the product: \(dt=\big(\frac{1}{x-1} - \frac{2}{2x-1}\big)dx.\) We can now integrate (directly or otherwise): \[\begin{align} \int{dt} &= \int{\frac{dx}{x-1}} - 2\int{\frac{dx}{2x-1}} \\ t &= \ln|x-1| - \ln|2x-1| + C \\ &= \ln \left| \frac{x-1}{2x-1} \right| + C \end{align}\] Using the initial condition \(x(0)=0,\) we find that \(C=0.\) Therefore \(t=\ln \big|\frac{x-1}{2x-1}\big|,\) which we can rearrange to get \(x(t)=\frac{1}{2} - \frac{1}{4e^t - 2}.\) Evaluating \(\lim\limits_{t\to\infty} x(t)\) gives us the ultimate population. As \(t\) approaches \(\infty,\) \(\frac{1}{4e^t - 2}\) approaches \(0,\) so the ultimate population is \(\frac{1}{2},\) and \(50\%\) of this is \(\frac{1}{4}.\) Substituting, we find that \(t_{50\%}=\ln\frac{3}{2}.\) If you have queries or suggestions about the content on this page or the CSAT Practice Platform then you can write to us at oi.[email protected]. Please do not write to this address regarding general admissions or course queries.
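For readers who want a numerical cross-check (our own sketch, assuming NumPy and SciPy are available; it is not part of the practice paper), integrating the ODE forward in time confirms both the ultimate value 1/2 and \(t_{50\%}=\ln\frac{3}{2}\approx 0.405\):

# Numerically integrate dx/dt = (x-1)(2x-1) with x(0) = 0 and locate t_50%
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: (x - 1) * (2 * x - 1), (0.0, 10.0), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 10.0, 100001)
x = sol.sol(t)[0]                  # x(t) increases monotonically toward 1/2
ultimate = x[-1]                   # ~0.5
t50 = t[np.searchsorted(x, ultimate / 2)]
print(ultimate, t50, np.log(1.5))  # ~0.5, ~0.4055, 0.4054651...

Because x(t) is monotone on this interval, searching the dense solution for the first time x reaches a quarter recovers ln(3/2) to the grid resolution.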
The book. A draft of my book: Geometric Algebra for Electrical Engineers, is now available. I’ve supplied limited distribution copies of some of the early drafts and have had some good review comments of the chapter I (introduction to geometric algebra), and chapter II (multivector calculus) material, but none on the electromagnetism content. In defense of the reviewers, the initial version of the electromagnetism chapter, while it had a lot of raw content, was pretty exploratory and very rough. It’s been cleaned up significantly and is hopefully now more reader friendly. Why I wrote this book. I have been working on a part time M.Eng degree for a number of years. I wasn’t happy with the UofT ECE electromagnetics offerings in recent years, which have been inconsistently offered or unsatisfactory. For example: the microwave circuits course which sounded interesting, and had an interesting textbook, was mind-numbing, almost entirely about Smith charts. I had to go elsewhere to obtain the M.Eng degree requirements. That elsewhere was a project course. I proposed an electromagnetism project with the following goals: Perform a literature review of applications of geometric algebra to the study of electromagnetism. Identify the subset of the literature that had direct relevance to electrical engineering. Create a complete, and as compact as possible, introduction to the prerequisites required for a graduate or advanced undergraduate electrical engineering student to be able to apply geometric algebra to problems in electromagnetism. With those prerequisites in place, work through the fundamentals of electromagnetism in a geometric algebra context. In retrospect, doing this project was a mistake. I could have done this work outside of an academic context without paying so much (in both time and money). Somewhere along the way I lost track of the fact that I enrolled on the M.Eng to learn (it provided a way to take grad physics courses on a part time schedule), and got sidetracked by degree requirements. Basically I fell victim to an “I may as well graduate” sentiment that would have been better to ignore. All that coupled with the fact that I did not actually get any feedback from my “supervisor”, who did not even read my work (at least so far after one year), made this project-course very frustrating. On the bright side, I really like what I produced, even if I had to do so in isolation. Why geometric algebra? Geometric algebra generalizes vectors, providing algebraic representations of not just directed line segments, but also points, plane segments, volumes, and higher degree geometric objects (hypervolumes). The geometric algebra representation of planes, volumes and hypervolumes requires a vector dot product, a vector multiplication operation, and a generalized addition operation. The dot product provides the length of a vector and a test for whether or not any two vectors are perpendicular. The vector multiplication operation is used to construct directed plane segments (bivectors), and directed volumes (trivectors), which are built from the respective products of two or three mutually perpendicular vectors. The addition operation allows for sums of scalars, vectors, or any products of vectors. Such a sum is called a multivector. The power to add scalars, vectors, and products of vectors can be exploited to simplify much of electromagnetism.
In particular, Maxwell’s equations for isotropic media can be merged into a single multivector equation \begin{equation}\label{eqn:quaternion2maxwellWithGA:20} \lr{ \spacegrad + \inv{c} \PD{t}{}} \lr{ \BE + I c \BB } = \eta\lr{ c \rho - \BJ }, \end{equation} where \( \spacegrad \) is the gradient, \( I = \Be_1 \Be_2 \Be_3 \) is the ordered product of the three R^3 basis vectors, \( c = 1/\sqrt{\mu\epsilon}\) is the group velocity of the medium, \( \eta = \sqrt{\mu/\epsilon} \), \( \BE, \BB \) are the electric and magnetic fields, and \( \rho \) and \( \BJ \) are the charge and current densities. This can be written as a single equation \begin{equation}\label{eqn:ece2500report:40} \lr{ \spacegrad + \inv{c} \PD{t}{}} F = J, \end{equation} where \( F = \BE + I c \BB \) is the combined (multivector) electromagnetic field, and \( J = \eta\lr{ c \rho - \BJ } \) is the multivector current. Encountering Maxwell’s equation in its geometric algebra form leaves the student with more questions than answers. Yes, it is a compact representation, but so are the tensor and differential forms (or even the quaternionic) representations of Maxwell’s equations. The student needs to know how to work with the representation if it is to be useful. It should also be clear how to use the existing conventional mathematical tools of applied electromagnetism, or how to generalize those appropriately. Individually, there are answers available to many of the questions that are generated attempting to apply the theory, but they are scattered and in many cases not easily accessible. Much of the geometric algebra literature for electrodynamics is presented with a relativistic bias, or assumes high levels of mathematical or physics sophistication. The aim of this work was an attempt to make the study of electromagnetism using geometric algebra more accessible, especially to other dumb engineering undergraduates like myself. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. The end product of this project was a fairly small self contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on R^2 and R^3 (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), applications to electromagnetism (82 pages), and some appendices. Many of the fundamental results of electromagnetism are derived directly from the multivector Maxwell’s equation, in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra literature. As a conceptual bridge, the book includes many examples of how to extract familiar conventional results from simpler multivector representations. Also included in the book are some sample calculations exploiting unique capabilities that geometric algebra provides. In particular, vectors in a plane may be manipulated much like complex numbers, which has a number of advantages over working with coordinates explicitly. Followup. In many ways this work only scratches the surface. Many more worked examples, problems, figures and computer algebra listings should be added. In-depth applications of derived geometric algebra relationships to problems customarily tackled with separate electric and magnetic field equations should also be incorporated.
There are also theoretical holes, topics covered in any conventional introductory electromagnetism text, that are missing. Examples include the Fresnel relationships for transmission and reflection at an interface, in-depth treatment of waveguides, dipole radiation and motion of charged particles, bound charges, and metamaterials to name a few. Many of these topics can probably be handled in a coordinate free fashion using geometric algebra. Despite all the work still required to bridge the gap between formalism and application and to make applied electromagnetism using geometric algebra truly accessible, it is my belief that this book takes some good first steps down this path. The choice that I made to completely avoid the space-time algebra (STA) formulation of geometric algebra is somewhat unfortunate. It is exceedingly elegant, especially in a relativistic context. Despite that, I think that this was still a good choice from a pedagogical point of view, as most of the prerequisites for an STA based study will have been taken care of as a side effect, making that study much more accessible.
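As a concrete illustration of extracting conventional results from the multivector Maxwell’s equation quoted earlier (this grade-by-grade split is standard in the geometric algebra literature, shown here for illustration rather than quoted from the book), separating \( \lr{ \spacegrad + \inv{c} \PD{t}{}} F = J \) into its scalar, vector, bivector and trivector parts recovers the four familiar equations: \begin{equation} \spacegrad \cdot \BE = \frac{\rho}{\epsilon}, \qquad \spacegrad \times \BB = \mu \BJ + \mu\epsilon \PD{t}{\BE}, \qquad \spacegrad \times \BE = -\PD{t}{\BB}, \qquad \spacegrad \cdot \BB = 0. \end{equation}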
P(n): For all $n$, the number of straight line segments determined by $n$ points in the plane, no three of which lie on the same straight line, is: $\large \frac{n^2 - n}{2}$. Base case: for $n = 2$, two points determine exactly one segment, and $\frac{2^2 - 2}{2} = 1$. Inductive hypothesis: given $n = k$ points, assume $P(k)$ is true: $P(k) = \dfrac{k^2 - k}{2}$. Proving $P(k+1)$ requires showing that for $n = k+1$ points, using the inductive hypothesis, the number of line segments determined by $k + 1$ points is equal to $$P(k+1) = \dfrac{(k+1)^2 - (k+1)}{2}.$$ That is, $P(k+1)$ is the sum of $P(k)$, the number of line segments determined by $k$ points, plus the number of additional line segments resulting from the additional point: the $(k+1)$th point. Since there are $k$ original points, the number of line segments that can connect with the $(k+1)$th point is precisely $k$, one line segment connecting each of the $k$ original points with the $(k+1)$th point. That is, our sum is: $$\begin{align}P(k) + k &= \dfrac{(k^2 - k)}{2} + k = \dfrac{(k^2 - k)}{2} + \dfrac {2k}{2} \\ \\& = \dfrac{ k^2 + 2k - k}{2} \\ \\ & = \frac{k^2 + 2k +1 - k - 1}{2} \\ \\& = \frac{(k+1)^2 - (k + 1)}{2} \\\end{align}$$ Hence, from the truth of the base case, and the fact that $P(k+1)$ follows from assuming $P(k)$, we have thus proved by induction on $n$ that $P(n) = \dfrac{n^2 - n}{2}$
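As a cross-check (a standard counting argument, independent of the induction above): each segment is determined by an unordered pair of distinct points, so the count is $$\binom{n}{2} = \frac{n(n-1)}{2} = \frac{n^2 - n}{2},$$ agreeing with the formula just proved.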
11:00 AM to 12:30 PM CSB 480 Cindy Xiong, Northwestern University Your visual system can crunch vast arrays of numbers at a glance, providing the rest of your brain with critical values, statistics, and patterns needed to make decisions about your data. But that process can be derailed by biases at both the perceptual and cognitive levels. I demonstrate 3 instances of these biases that obstruct effective data communication. First, in the most frequently used graphs – lines and bars – reproductions of average values are systematically underestimated for lines, and overestimated for bars. Second, when people see correlational data, they often mistakenly also see a causal relationship. I’ll show how this error can be even worse for some types of graphs. Third, we’ve all experienced being overwhelmed by a confusing visualization. This may happen because the designer – an expert in the topic – thinks that you’d see what they see. I’ll describe a replication of this real-world phenomenon in the lab, showing that, when communicating patterns in data to others, it is tough for people to see a visualization from a naive perspective. I discuss why these biases happen in our brains, and prescribe ways to design visualizations to mitigate these biases. 1:00 PM to 2:00 PM CS conference room (CSB 453) Elad Hazan, Princeton University Linear dynamical systems are a continuous subclass of reinforcement learning models that are widely used in robotics, finance, engineering, and meteorology. Classical control, since the works of Kalman, has focused on dynamics with Gaussian i.i.d. noise, quadratic loss functions and, in terms of provably efficient algorithms, known systems and observed state. In this talk we'll discuss how to apply new machine learning methods to control which relax all of the above: efficient control with adversarial noise, general loss functions, unknown systems, and partial observation. Based on joint work with Naman Agarwal, Nataly Brukhim, Brian Bullins, Karan Singh, Sham Kakade, Max Simchovitz, and Cyril Zhang. 2,3,…,k: From approximating the number of edges to approximating the number of k-cliques (with a sublinear number of queries) 12:00 PM to 1:00 PM CS conference room (CSB 453) Dana Ron, Tel Aviv University This talk will present an algorithm for approximating the number of k-cliques in a graph when given query access to the graph. This problem was previously studied for the cases of k=2 (edges) and k=3 (triangles). We give an algorithm that works for any k >= 3, and is actually conceptually simpler than the k=3 algorithm. We consider the standard query model for general graphs via (1) degree queries, (2) neighbor queries and (3) pair queries. Let n denote the number of vertices in the graph, m the number of edges, and C_k the number of k-cliques. We design an algorithm that outputs a (1+\epsilon)-approximation (with high probability) for C_k, whose expected query complexity and running time are O(\frac{n}{C_k^{1/k}}+\frac{m^{k/2}}{C_k}) \cdot poly(\log n, 1/\epsilon, k). Hence, the complexity of the algorithm is sublinear in the size of the graph for C_k = \omega(m^{k/2-1}). Furthermore, we prove a lower bound showing that the query complexity of our algorithm is essentially optimal (in terms of the dependence on n, m and C_k, up to the dependence on \log n, 1/\epsilon and k). If time permits, I will also talk briefly about follow-up work on approximate counting of $k$-cliques in bounded-arboricity graphs. The talk is based on works with Talya Eden and C. Seshadhri.
11:30 AM to 12:30 PM CS conference room (CSB 453) Tulika Mitra, National University of Singapore
Internet of Things (IoT), a network of billions of computing devices embedded within physical objects, is revolutionizing our lives. The IoT devices at the edge are primarily responsible only for collecting and communicating the data to the cloud, where the computationally intensive data analytics takes place. However, data privacy and connectivity issues, in conjunction with the fast real-time response requirement of certain IoT applications, call for smart edge devices that should be able to support privacy-preserving, time-sensitive computation for machine intelligence on-site. We will present the computation challenges in edge computing and introduce hardware-software co-designed approaches to overcome these challenges. We will discuss the design of tiny accelerators that are completely software programmable and can speed up computation to realize the edge computing vision at an ultra-low power budget. We will also demonstrate the promise of collaborative computation that engages heterogeneous processing elements in a synergistic fashion to achieve real-time edge computing.

12:00 PM to 1:00 PM CSB 480 (Computer Science Department) Swastik Kopparty, Rutgers University

12:00 PM to 1:00 PM CS conference room (CSB 453) Piotr Sankowski, University of Warsaw

12:00 PM to 1:00 PM CSB 453 Tushar Krishna, Georgia Tech
Ever since modern computers were invented, the dream of creating artificial intelligence (AI) has captivated humanity. We are fortunate to live in an era when, thanks to deep learning (DL), computer programs have paralleled, and in many cases even surpassed, human-level accuracy in tasks like visual perception and speech synthesis. However, we are still far away from realizing general-purpose AI. The problem lies in the fact that the development of supervised-learning-based DL solutions today is mostly open loop. A typical DL model is created by hand-tuning the deep neural network (DNN) topology by a team of experts over multiple iterations, followed by training over petabytes of labeled data. Once trained, the DNN provides high accuracy for the task at hand; if the task changes, however, the DNN model needs to be re-designed and re-trained before it can be deployed. A general-purpose AI system, in contrast, needs to have the ability to constantly interact with the environment and learn by adding and removing connections within the DNN autonomously, just like our brain does. This is known as synaptic plasticity. In this talk we will present our research efforts towards enabling general-purpose AI, leveraging plasticity in both the algorithm and hardware. First, we will present GeneSys (MICRO 2018), a HW-SW prototype of a closed-loop learning system for continuously evolving the structure and weights of a DNN for the task at hand using genetic algorithms, providing 100-10000x higher performance and energy-efficiency over state-of-the-art embedded and desktop CPU and GPU systems. Next, we will present a DNN accelerator substrate called MAERI (ASPLOS 2018), built using light-weight, non-blocking, reconfigurable interconnects, that supports efficient mapping of regular and irregular DNNs with arbitrary dataflows, providing ~100% utilization of all compute units, resulting in 3X speedup and energy-efficiency over our prior work Eyeriss (ISSCC 2016).
Finally, time permitting, we will describe our research in enabling rapid design-space exploration and prototyping of hardware accelerators using our dataflow DSL + cost-model called MAESTRO (MICRO 2019).

11:40 AM to 12:40 PM CS Department 451 Shafi Goldwasser, UC Berkeley
Cryptography and Computational Learning have shared a curious history: a scientific success for one has often provided an example of an impossible task for the other. Today, the goals of the two fields are aligned. Cryptographic models and tools can and should play a role in ensuring the safe use of machine learning. We will discuss this development with its challenges and opportunities. Host: Jeannette Wing

11:40 AM to 12:40 PM CS Department 451 Srini Devadas, MIT
As the world becomes more connected, privacy is becoming harder to maintain. From social media services to Internet service providers to state-sponsored mass-surveillance programs, many outlets collect sensitive information about users and the communication between them, often without the users ever knowing about it. In response, many Internet users have turned to end-to-end encryption, such as Signal and TLS, to protect the content of their communication. Unfortunately, these tools do little to hide the metadata of the communication, such as when and with whom a user is communicating. In scenarios where the metadata are sensitive, encryption alone is not sufficient to ensure users' privacy. Most prior systems that provide metadata-private communication fall into one of two categories: systems that (1) provide formal privacy guarantees against global adversaries but do not scale to large numbers of users, or (2) scale easily to a large user base but do not provide strong guarantees against global adversaries. I will present two systems that aim to bridge the gap between the two categories to enable private communication with strong guarantees for many millions of users: Quark, a horizontally scalable anonymous broadcast system that defends against a global adversary who monitors the entire network and controls a significant fraction of the servers, and Crossroads, which provides metadata-private communication between two honest users against the same adversary using a novel cryptographic primitive. This talk is based on Albert Kwon's recently completed MIT PhD dissertation. Host: Simha Sethumadhavan

11:40 AM to 12:40 PM CSB 451 Barbara Grosz, Harvard University
Computing technologies have become pervasive in daily life, sometimes bringing unintended but harmful consequences. For students to learn to think not only about what technology they could create, but also whether they should create that technology, and to recognize the ethical considerations that should constrain their design, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard's Embedded EthiCS initiative, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will begin with a short description of my experiences teaching the course "Intelligent Systems: Design and Ethical Challenges" that inspired the design of Embedded EthiCS.
It will then describe the goals behind the design, the way the program works, lessons learned, and challenges to sustainable implementations of such a program across different types of academic institutions. Host: Augustin Chaintreau

11:40 AM to 12:40 PM CS Department 451 Bill Freeman, MIT
Now showing items 1-10 of 24

Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...

Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
I'm doing a course on elliptic curves, and I'm stuck on a line in a proof which is supposedly using the uniqueness in Hensel's lemma. Starting with an elliptic curve $$E:Y^2Z+a_1XYZ+a_3YZ^2=X^3+a_2X^2Z+a_4XZ^2+a_6Z^3$$ we work in the affine piece $Y \not=0$. So we let $t=\frac{-X}{Y}$, $w=\frac{-Z}{Y}$, getting $$w=t^3+a_1tw+a_2t^2w+a_3w^2+a_4tw^2+a_6w^3=:f(t,w).$$ We've found a power series $w(t)$ such that $w(t)=f(t,w(t))$. Now we're trying to prove

Lemma 7.2. Let $R$ be an integral domain complete w.r.t. an ideal $I$, let $a_1,...,a_6 \in R$ and let $K=\mbox{Frac}(R)$. Then $$\hat{E}(I)=\{(t,w(t))\in E(K)\;|\; t \in I\}$$ is a subgroup of $E(K)$.

In trying to prove closure, we take two points $P_1,P_2\in \hat{E}(I)$ and try to show that the third point on the line $P_1P_2$, $P_3=-P_1-P_2=(t_3,w_3)$, is in $\hat{E}(I)$. The lecturer claims that once we show $t_3, w_3 \in I$ then we are done, by the uniqueness of the power series in Hensel's lemma. I don't see why this is true. Our statement of Hensel's lemma is:

Let $R$ be an integral domain complete with respect to $I$, and let $F \in R[X]$, $s \geq 1$. Suppose we are given $a \in R$ such that $\begin{array}{lll}F(a) &\equiv &0 \hspace{5mm} \mbox{(mod }I^s) \\ F'(a) &\in &R^\times.\end{array}$ Then there exists a unique $b \in R$ such that $\begin{array}{lll}F(b) &= &0 \\ b &\equiv &a \hspace{5mm} \mbox{(mod }I^s).\end{array}$

If we had $w_3 \equiv 0 \;(\mbox{mod }t_3^3)$, then I think by the uniqueness in Hensel's lemma we should have $w_3=w(t_3)$, as required. But I don't see how this is true or how it has anything to do with $w_3$ being in $I$.
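As an aside (not from the lecture notes): the power series $w(t)$ can be computed in practice by the naive fixed-point iteration $w \mapsto f(t,w)$ starting from $w=0$, which converges $t$-adically; this is exactly the contraction that the Hensel-type argument packages. A sympy sketch, where the truncation helper and symbol names are my own:

```python
import sympy as sp

t, w = sp.symbols('t w')
a1, a2, a3, a4, a6 = sp.symbols('a1 a2 a3 a4 a6')

# f(t, w) from the affine piece of the curve.
f = t**3 + a1*t*w + a2*t**2*w + a3*w**2 + a4*t*w**2 + a6*w**3

def truncate(expr, n):
    """Keep only the terms of degree <= n in t."""
    poly = sp.Poly(sp.expand(expr), t)
    return sum(c * t**m for (m,), c in poly.terms() if m <= n)

N = 7
wt = sp.Integer(0)
for _ in range(N):                 # each pass fixes at least one more power of t
    wt = truncate(f.subs(w, wt), N)

print(sp.expand(wt))
# t**3 + a1*t**4 + (a1**2 + a2)*t**5 + ...  so w(t) = t^3 * (1 + ...)
```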
The first observation of top quark production in proton-nucleus collisions is reported using proton-lead data collected by the CMS experiment at the CERN LHC at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV. The measurement is performed using events with exactly one isolated electron or muon and at least four jets. The data sample corresponds to an integrated luminosity of 174 nb$^{-1}$. The significance of the $\mathrm{t}\overline{\mathrm{t}}$ signal against the background-only hypothesis is above five standard deviations. The measured cross section is $\sigma_{\mathrm{t}\overline{\mathrm{t}}} =$ 45 $\pm$ 8 nb, consistent with predictions from perturbative quantum chromodynamics.

Measurements of two- and multi-particle angular correlations in pp collisions at $\sqrt{s} =$ 5, 7, and 13 TeV are presented as a function of charged-particle multiplicity. The data, corresponding to integrated luminosities of 1.0 pb$^{-1}$ (5 TeV), 6.2 pb$^{-1}$ (7 TeV), and 0.7 pb$^{-1}$ (13 TeV), were collected using the CMS detector at the LHC. The second-order ($v_2$) and third-order ($v_3$) azimuthal anisotropy harmonics of unidentified charged particles, as well as $v_2$ of $\mathrm{K^0_S}$ and $\Lambda/\overline{\Lambda}$ particles, are extracted from long-range two-particle correlations as functions of particle multiplicity and transverse momentum. For high-multiplicity pp events, a mass ordering is observed for the $v_2$ values of charged hadrons (mostly pions), $\mathrm{K^0_S}$, and $\Lambda/\overline{\Lambda}$, with lighter particle species exhibiting a stronger azimuthal anisotropy signal below $p_\mathrm{T} \approx$ 2 GeV/$c$. For 13 TeV data, the $v_2$ signals are also extracted from four- and six-particle correlations for the first time in pp collisions, with comparable magnitude to those from two-particle correlations. These observations are similar to those seen in pPb and PbPb collisions, and support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp collisions.

Measurements are presented of the associated production of a W boson and a charm-quark jet (W + c) in pp collisions at a center-of-mass energy of 7 TeV. The analysis is conducted with a data sample corresponding to a total integrated luminosity of 5 inverse femtobarns, collected by the CMS detector at the LHC. W boson candidates are identified by their decay into a charged lepton (muon or electron) and a neutrino. The W + c measurements are performed for charm-quark jets in the kinematic region $p_T^{jet} \gt$ 25 GeV, $|\eta^{jet}| \lt$ 2.5, for two different thresholds for the transverse momentum of the lepton from the W-boson decay, and in the pseudorapidity range $|\eta^{\ell}| \lt$ 2.1. Hadronic and inclusive semileptonic decays of charm hadrons are used to measure the following total cross sections: $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 107.7 +/- 3.3 (stat.) +/- 6.9 (syst.) pb ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 84.1 +/- 2.0 (stat.) +/- 4.9 (syst.) pb ($p_T^{\ell} \gt$ 35 GeV), and the cross section ratios $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.954 +/- 0.025 (stat.) +/- 0.004 (syst.) ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.938 +/- 0.019 (stat.) +/- 0.006 (syst.) ($p_T^{\ell} \gt$ 35 GeV). Cross sections and cross section ratios are also measured differentially with respect to the absolute value of the pseudorapidity of the lepton from the W-boson decay.
These are the first measurements from the LHC directly sensitive to the strange quark and antiquark content of the proton. Results are compared with theoretical predictions and are consistent with the predictions based on global fits of parton distribution functions.

A search for narrow resonances in the dijet mass spectrum is performed using data corresponding to an integrated luminosity of 2.9 inverse pb collected by the CMS experiment at the LHC. Upper limits at the 95% confidence level (CL) are presented on the product of the resonance cross section, branching fraction into dijets, and acceptance, separately for decays into quark-quark, quark-gluon, or gluon-gluon pairs. The data exclude new particles predicted in the following models at the 95% CL: string resonances, with mass less than 2.50 TeV, excited quarks, with mass less than 1.58 TeV, and axigluons, colorons, and E_6 diquarks, in specific mass intervals. This extends previously published limits on these models.

The production of jets associated to bottom quarks is measured for the first time in PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Jet spectra are reported in the transverse momentum (pt) range of 80-250 GeV, and within pseudorapidity abs(eta) < 2. The nuclear modification factor (R[AA]) calculated from these spectra shows a strong suppression in the b-jet yield in PbPb collisions relative to the yield observed in pp collisions at the same energy. The suppression persists to the largest values of pt studied, and is centrality dependent. The R[AA] is about 0.4 in the most central events, similar to previous observations for inclusive jets. This implies that jet quenching does not have a strong dependence on parton mass and flavor in the jet pt range studied.

A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{-1}$, with 4.9 fb$^{-1}$ at 7 TeV and 19.7 fb$^{-1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$, m$_{h}^{mod+}$, m$_{h}^{mod-}$, light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given.

Measurements of the differential production cross sections in transverse momentum and rapidity for B0 mesons produced in pp collisions at sqrt(s) = 7 TeV are presented. The dataset used was collected by the CMS experiment at the LHC and corresponds to an integrated luminosity of 40 inverse picobarns. The production cross section is measured from B0 meson decays reconstructed in the exclusive final state J/Psi K-short, with the subsequent decays J/Psi to mu^+ mu^- and K-short to pi^+ pi^-. The total cross section for pt(B0) > 5 GeV and |y(B0)| < 2.2 is measured to be 33.2 +/- 2.5 +/- 3.5 microbarns, where the first uncertainty is statistical and the second is systematic.
The Upsilon production cross section in proton-proton collisions at sqrt(s) = 7 TeV is measured using a data sample collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 3.1 +/- 0.3 inverse picobarns. Integrated over the rapidity range |y| < 2, we find the product of the Upsilon(1S) production cross section and branching fraction to dimuons to be sigma(pp to Upsilon(1S) X) B(Upsilon(1S) to mu+ mu-) = 7.37 +/- 0.13 (+0.61/-0.42) +/- 0.81 nb, where the first uncertainty is statistical, the second is systematic, and the third is associated with the estimation of the integrated luminosity of the data sample. This cross section is obtained assuming unpolarized Upsilon(1S) production. If the Upsilon(1S) production polarization is fully transverse or fully longitudinal the cross section changes by about 20%. We also report the measurement of the Upsilon(1S), Upsilon(2S), and Upsilon(3S) differential cross sections as a function of transverse momentum and rapidity.

A search for Z bosons in the mu^+ mu^- decay channel has been performed in PbPb collisions at a nucleon-nucleon centre-of-mass energy of 2.76 TeV with the CMS detector at the LHC, in a 7.2 inverse microbarn data sample. The number of opposite-sign muon pairs observed in the 60-120 GeV/c^2 invariant mass range is 39, corresponding to a yield per unit of rapidity (y) and per minimum bias event of (33.8 +/- 5.5 (stat) +/- 4.4 (syst)) x 10^{-8} in the |y| < 2.0 range. Rapidity, transverse momentum, and centrality dependencies are also measured. The results agree with next-to-leading order QCD calculations, scaled by the number of incoherent nucleon-nucleon collisions.

A measurement of the J/psi and psi(2S) production cross sections in pp collisions at sqrt(s) = 7 TeV with the CMS experiment at the LHC is presented. The data sample corresponds to an integrated luminosity of 37 inverse picobarns. Using a fit to the invariant mass and decay length distributions, production cross sections have been measured separately for prompt and non-prompt charmonium states, as a function of the meson transverse momentum in several rapidity ranges. In addition, cross sections restricted to the acceptance of the CMS detector are given, which are not affected by the polarization of the charmonium states. The ratio of the differential production cross sections of the two states, where systematic uncertainties largely cancel, is also determined. The branching fraction of the inclusive B to psi(2S) X decay is extracted from the ratio of the non-prompt cross sections to be: BR(B to psi(2S) X) = (3.08 +/- 0.12 (stat.+syst.) +/- 0.13 (theor.) +/- 0.42 (BR[PDG])) x 10^-3.

Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta| < 1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities.

The prompt D0 meson azimuthal anisotropy coefficients, v[2] and v[3], are measured at midrapidity (abs(y) < 1.0) in PbPb collisions at a center-of-mass energy sqrt(s[NN]) = 5.02 TeV per nucleon pair with data collected by the CMS experiment.
The measurement is performed in the transverse momentum (pT) range of 1 to 40 GeV/c, for central and midcentral collisions. The v[2] coefficient is found to be positive throughout the pT range studied. The first measurement of the prompt D0 meson v[3] coefficient is performed, and values up to 0.07 are observed for pT around 4 GeV/c. Compared to measurements of charged particles, a similar pT dependence, but smaller magnitude for pT < 6 GeV/c, is found for prompt D0 meson v[2] and v[3] coefficients. The results are consistent with the presence of collective motion of charm quarks at low pT and a path length dependence of charm quark energy loss at high pT, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma. The transverse momentum (pt) spectrum of prompt D0 mesons and their antiparticles has been measured via the hadronic decay channels D0 to K- pi+ and D0-bar to K+ pi- in pp and PbPb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair with the CMS detector at the LHC. The measurement is performed in the D0 meson pt range of 2-100 GeV and in the rapidity range of abs(y)<1. The pp (PbPb) dataset used for this analysis corresponds to an integrated luminosity of 27.4 inverse picobarns (530 inverse microbarns). The measured D0 meson pt spectrum in pp collisions is well described by perturbative QCD calculations. The nuclear modification factor, comparing D0 meson yields in PbPb and pp collisions, was extracted for both minimum-bias and the 10% most central PbPb interactions. For central events, the D0 meson yield in the PbPb collisions is suppressed by a factor of 5-6 compared to the pp reference in the pt range of 6-10 GeV. For D0 mesons in the high-pt range of 60-100 GeV, a significantly smaller suppression is observed. The results are also compared to theoretical calculations. A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results. A measurement of the exclusive two-photon production of muon pairs in proton-proton collisions at sqrt(s)= 7 TeV, pp to p mu^+ mu^- p, is reported using data corresponding to an integrated luminosity of 40 inverse picobarns. 
For muon pairs with invariant mass greater than 11.5 GeV, transverse momentum pT(mu) > 4 GeV and pseudorapidity |eta(mu)| < 2.1, a fit to the dimuon pt(mu^+ mu^-) distribution results in a measured cross section of sigma(pp to p mu^+ mu^- p) = 3.38 [+0.58 -0.55] (stat.) +/- 0.16 (syst.) +/- 0.14 (lumi.) pb, consistent with the theoretical prediction evaluated with the event generator Lpair. The ratio to the predicted cross section is 0.83 [+0.14 -0.13] (stat.) +/- 0.04 (syst.) +/- 0.03 (lumi.). The characteristic distributions of the muon pairs produced via photon-photon fusion, such as the muon acoplanarity and the muon pair invariant mass and transverse momentum, agree with those from the theory.

Measurements of the differential cross sections for the production of exactly four jets in proton-proton collisions are presented as a function of the transverse momentum pt and pseudorapidity eta, together with the correlations in azimuthal angle and the pt balance among the jets. The data sample was collected in 2010 at a center-of-mass energy of 7 TeV with the CMS detector at the LHC, with an integrated luminosity of 36 inverse picobarns. The cross section for a final state with a pair of hard jets with pt > 50 GeV and another pair with pt > 20 GeV within abs(eta) < 4.7 is measured to be sigma = 330 +/- 5 (stat.) +/- 45 (syst.) nb. It is found that fixed-order matrix element calculations including parton showers describe the measured differential cross sections in some regions of phase space only, and that adding contributions from double parton scattering brings the Monte Carlo predictions closer to the data.
Sound is, simply put, weak but rapid fluctuations in air pressure: The air becomes a tiny bit denser, a tiny bit thinner, denser, thinner, and so forth. These fluctuations start at sound sources, for example loudspeakers, and spread out like waves. At the sound wave's peak, the air is at its densest, while at the wave's trough, the air is at its thinnest. When these sound waves hit our ears, our auditory system translates them into something that we can perceive consciously, and thus we hear that the sound is there.

Still, it is difficult to describe, compare, and process these subjective experiences. For example, would you and I agree that this sound is stronger than that sound? And if so, how much stronger is it? To make sound into something that we can measure, describe, compare, and handle, many different acoustic quantities have been introduced. We use these to make sound into something that we can discuss in a more concrete and objective manner. These quantities affect us all, not least because noise regulations use them to describe how much sound e.g. airports, roads, and concerts are allowed to make. In this article series, we will therefore go through the most important acoustic quantities. In this first part, we begin by discussing what decibels are. This is a fundamental quantity used everywhere sound is described quantitatively.

How do we perceive loudness?

Basically, we measure sound pressure as the deviation from the approximately constant atmospheric pressure, as you can see in the figure above. (As sound pressure is thus also a kind of pressure, we measure it in the pressure unit, the Pascal.) As sound waves become stronger, that is, as the pressure difference between wave peak (denser air, positive sound pressure) and wave trough (thinner air, negative sound pressure) becomes larger, we do of course also perceive that the sound becomes louder. But the manner in which we perceive it is not quite obvious. Let us have a closer look.

Let's say that we take a sound and double its sound pressure. We perceive this as a noticeable increase in loudness. Then we take the original sound and triple its sound pressure. If we listen to these three sounds in order, most people will perceive the increase from double to triple sound pressure as weaker than the increase from original to double sound pressure. This is despite the fact that the increase, as measured in Pascals, is the same in both cases! Instead, let's say that we take the doubled sound pressure and double it again, so that we get quadruple the sound pressure of the original. If we compare these three sounds, we perceive the increase from double to quadruple sound pressure as roughly as strong as the increase from the original to the double sound pressure. In other words, if we increase the sound pressure by some factor (for example 2, as in this example where we double it), most people will perceive the increase as roughly equally strong, no matter how strong the sound was originally! You can try this out yourself:

[Interactive audio examples from the original post: the original sound, and the same sound at double, triple, and quadruple sound pressure.]

Sound pressure level and decibels

Due to the way our hearing works, it makes sense to use a logarithmic unit to describe loudness. (We will soon see why!) Sound pressure level is therefore defined as \( L_p = 20 \times \log \left( \frac{p_\text{rms}}{p_\text{ref}} \right) . \)
Here, \(L_p\) is the sound pressure level in decibels (usually abbreviated to dB), \(p_\text{rms}\) is a representative sound pressure for the sound of interest, and \(p_\text{ref}\) is a reference pressure of 0.00002 Pascal (that is, 20 micropascals). When we specify loudness in decibels, it just means that we are expressing the physical sound pressure in this logarithmic form.

Why do we use decibels, though?

So, why do we actually use a logarithmic form here? From the above discussion, we know that if we for example double the sound pressure from 0.5 Pascal to 1 Pascal, and subsequently double that to 2 Pascal, we perceive both increases as roughly equally strong. But what are the sound pressure levels in decibels here? If we put these numbers into the above formula, we find that the three sound pressure levels are 88 dB, 94 dB, and 100 dB. In other words, the sound pressure level increases by 6 dB every time we double the sound pressure! (Similarly, we can find that a tenfold increase in sound pressure increases the sound pressure level by 20 dB.) This is one reason to use decibels: if you add a certain number of decibels to a sound pressure level, you perceive the increase as roughly the same, regardless of what the original sound pressure level was.

Another reason to use decibels is that human hearing spans a huge range of sound pressures. The weakest sound young people with 'normal hearing' can hear is about 0.00002 Pascal. The sound from a pneumatic drill, where you have to use hearing protection in order to avoid hearing damage, can be about 60 Pascal. Instead of juggling such unwieldy numbers, it's easier to say that these sounds are at 0 dB and at 130 dB.

If you are curious about the kind of levels that different common sources of sound create, the figure to the right gives an idea. You can also find nice tables and infographics on other websites. Note, however, that different sources may disagree a bit on the values for a noise source. These values vary, and are not possible to pin down exactly. For example, the loudness of conversation varies between cultures, and the loudness of road vehicles depends on vehicle type, vehicle speed, and the road surface.

(By the way, we don't just use decibels for sound pressure. The decibel is actually a general unit of measure expressing a ratio of physical properties through a logarithmic formula as above. When you use decibels for other quantities in e.g. acoustics, electronics, and signal processing, you swap the sound pressure \(p_\text{rms}\) and reference pressure \(p_\text{ref}\) for another measurement quantity and another reference quantity.)

Representative sound pressure: RMS

As we just saw, we use a representative sound pressure \(p_\text{rms}\) to calculate the sound pressure level in decibels. As the above figure shows, the sound pressure varies quickly with time, while the representative sound pressure \(p_\text{rms}\) is a single number. So, how do we calculate this number? This representative sound pressure has another name as well: root-mean-square pressure, usually abbreviated to RMS pressure. This name basically tells you how to calculate it: if we represent the pressure at time \(t\) as \(p(t)\), we first take the square of the pressure, \(p^2(t)\). Then, we take the average of this over an appropriate time period \(T\), and finally we take the square root of that average.
More concretely, we can express this mathematically as \(p_\text{rms} = \sqrt{\frac{1}{T} \int_{t_0}^{t_0+T} p^2(t) \, \mathrm{d}t } \, , \) where we choose to average from the time \(t_0\) to the time \(t_0+T\). However, calculating a representative sound pressure in this way only makes sense for steady sounds, such as dense traffic or ventilation noise, where \(p_\text{rms}\) is quite independent of the time interval we choose for our averaging. For short, sharp sounds such as a clap or an explosion, however, \(p_\text{rms}\) would decrease with the length of the time interval, since a longer time interval means a higher proportion of silence. Later in this article series, we'll come back to how we can find good representative sound pressure levels for such sounds as well.

Multiple sounds at the same time

Finally, we'll look at how the sound pressure level increases when we have multiple simultaneous sounds. If we have two simultaneous sounds, e.g. two cars idling side-by-side, it's tempting, but wrong, to suppose that the sound pressure level becomes \(20\times\log\left(\frac{p_\text{rms,1}+p_\text{rms,2}}{p_\text{ref}}\right)\,,\) where \(p_\mathrm{rms,1}\) and \(p_\mathrm{rms,2}\) are the RMS sound pressures from car 1 and 2, respectively. This is wrong because the sound waves from the two cars are not exactly the same. In the sound wave from the first car, the wave peaks and wave troughs won't come at the same time as those in the sound wave from the second car. Therefore, we do not get purely constructive interference. Rather, the two sound waves interfere partly constructively and partly destructively, so that the total RMS sound pressure becomes \(p_\mathrm{rms}=\sqrt{p_\mathrm{rms,1}^2+p_\mathrm{rms,2}^2}\). This is more than the RMS sound pressure from a single car, but less than the sum of both cars' RMS sound pressures. (More generally, the RMS sound pressure from n different sources is \(p_\mathrm{rms}=\sqrt{\sum_{i=1}^{n}p_{\mathrm{rms},i}^2}\).)

Thus, the total sound pressure level from these two cars becomes \(L_p=20\times\log\left(\frac{\sqrt{p_\mathrm{rms,1}^2+p_\mathrm{rms,2}^2}}{p_\mathrm{ref}}\right) = 10\times\log\left(\frac{p_\mathrm{rms,1}^2+p_\mathrm{rms,2}^2}{p_\mathrm{ref}^2}\right)\,.\) (Here we have two equivalent formulas for the sound pressure level. The first follows from what we have seen earlier, while the second is a bit more convenient to enter on your calculator. The two formulas are equal because \(20 \times \log (x) = 10 \times \log (x^2)\).) If the two cars are equally loud (that is, if \(p_\mathrm{rms,1}=p_\mathrm{rms,2}\)), we can calculate that together they make 3 dB more sound than one car on its own. This also holds for traffic. For example, if the traffic density of a stretch of road is doubled from 2000 cars per hour to 4000 cars per hour, the total sound pressure level increases by 3 dB. This is a noticeable increase, but far from as strong as you might imagine.

Next time

At this point we have only just started to examine acoustic quantities. The quantities that are used to regulate noise in various countries are more advanced, and require more explanation. Next time, we will look at frequency weighting. This lets us take into account that human ears don't hear every sound equally well, even if the sounds' RMS sound pressures are equal.

This post is a translation of a Norwegian-language blog post that I originally wrote for acousticsresearchcentre.no.
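None of this arithmetic appears as code in the original post, but it is easy to check numerically. A short Python sketch (variable names are mine) reproducing the numbers above: the roughly 6 dB gained per doubling of pressure, the 0 dB and 130 dB endpoints of the hearing range, and the 3 dB from two equally loud incoherent sources:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (20 micropascals)

def spl(p_rms):
    """Sound pressure level in dB for an RMS pressure in pascals."""
    return 20 * math.log10(p_rms / P_REF)

def combine_incoherent(levels_db):
    """Total level of incoherent sources: sum squared pressures, not levels."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Doubling the pressure adds 20*log10(2), i.e. about 6 dB, each time:
for p in (0.5, 1.0, 2.0):
    print(f"{p:4.1f} Pa -> {spl(p):5.1f} dB")   # 88.0, 94.0, 100.0 dB

print(round(spl(20e-6)))   # hearing threshold: 0 dB
print(round(spl(60.0)))    # pneumatic drill: ~130 dB

# Two equally loud cars give 3 dB more than one car alone:
print(combine_incoherent([60.0, 60.0]))   # ~63.01 dB
```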
I would like to thank Tron Vedul Tronstad for proofreading and fact-checking the Norwegian original.
1) Show that $Re(z)^2-Re(z^2)\geq 0$. 2) Write down the section formula and linear representation of a line. 3) Write down the equation of the line on the argand plane equivalent to y=x. 4) (*) Write down the equation of a hyperbola, like xy=1, on the argand diagram.

Now the above diagram, from left to right, gives three parallel straight lines, namely $l_1,l_2,l_3$. The small circle is a unit circle and $C_i:|z|=r_i\in R$ is the circle with $l_i$ as tangent. L is the line passing through the origin perpendicular to $l_i$, while the $u_i$ are the points of intersection between $L$ and $l_i$. Given that: 1) $l_2$ is represented by $|z-u_3|=|z-u_1|$; 2) $Re(u_2)=r_1$; 3) the inclination of $l_i$ is $\frac{\pi}{3}$.

Problem: Basic problem
a) Show that $u_2=\frac{1}{2}(u_1+u_3)$
b) Show that $u_i=l_i(cis\theta)$ where $\theta \in R$.
c) By (a), (b) or otherwise, write down the equation/values of $u_i,l_i,r_i,L$ in numerical form.

Advanced problem:
d) Two tangents to $C_1$ passing through $u_3$ touch $C_1$ at two points, a and b respectively. Show that the line passing through a and b is parallel to $l_1$.
e) i) Let the intersection point between $Re(z)=r_1$ and $l_1$ be $v_1$. Find the equation of the circle $C_v$ if $0,u_1,v_1,r_1$ all lie on $C_v$. ii) Is $Re(z_3)=r_2$? Prove your assertion.

Extreme problem:
f) A function $f(x)=z$ is defined by: Step I: w is a point on $l_1$ outside of $C_1$ such that $|u_1-w|=x$. Step II: $l_w$ is the tangent from $w$ to $C_1$ that touches $C_1$ at $w'$. Step III: z is the point on $l_w$, not lying between w and w', with $|z-w'|=1$.
i) Write down the domain and codomain of f. ii) Show that f is injective. iii) Sketch f on the argand diagram.
g) Another function $g(x)=z$ is similar to f, but this time in step III, z is the point $z=\frac{1}{2}(w+w')$.
i) Show that g is also injective. ii) Find the area enclosed by $l_w,C_1,L$. iii) Sketch g. iv) What's the difference in nature between the shapes of f and g? v) Sketch $h(x)=g(x)-f(x)$.
h) If $u_2$ is now replaced by $u_3$ in determining new f and g, say a(x) and b(x), are a and b linear transforms of f and g? Explain.

Pretty hard this time. Can these questions be easily solved in the Cartesian plane? [A technique of rotating the axes is required]
Periodic and Quasi-periodic Solutions for the Complex Swift-Hohenberg Equation
Wenyan Cui, Lufang Mi, Honglian You
Keywords: Swift-Hohenberg equation; periodic solution; quasi-periodic solution; normal form
Abstract: In this paper, we consider the complex Swift-Hohenberg (CSH) equation $$\frac{\partial u}{\partial t}=\lambda u-(\alpha+\mathrm{i}\beta)\left(1+\frac{\partial^2}{\partial x^2}\right)^2u-(\sigma+\mathrm{i}\rho)|u|^2u$$ subject to periodic boundary conditions. Using an infinite dimensional KAM theorem, we prove that there exist a continuous branch of periodic solutions and a Cantorian branch of quasi-periodic solutions for the above equation.
Question: Let $(X_n)_{n\geq 1}$ be an i.i.d. sequence of standard normals. Show that with probability one $\liminf \frac{M_n}{\sqrt{2\log n}}\geq 1$, where $M_n=\max_{1}^n X_i$.

My attempt: Given $\varepsilon >0$, it suffices to show that $P(\frac{M_n}{\sqrt{2\log n}}<1-\varepsilon \quad \text{i.o.})= 0$. To this end put $A_n=(\frac{M_n}{\sqrt{2\log n}}<1-\varepsilon)$ and we attempt to use the Borel-Cantelli Lemma. So $$ \sum_1 ^\infty P(A_n)=\sum_{1}^\infty P(X_1<c_n)^n=\sum_{1}^\infty (1-\bar{\Phi} (c_n))^n $$ where $c_n=(1-\varepsilon)\sqrt{2\log n}$ and $\bar{\Phi}=1-\Phi$ is the survival function. At this point, since the normal distribution has no nice closed form for its cdf, I have to use some argument involving the asymptotics of this sum. To this end, I know that $$ \bar{\Phi}(x)\sim\frac{\phi (x)}{x} $$ as $x\to \infty$, where $\phi$ is the density of a standard normal. More precisely, $$ \left(\frac{1}{x}-\frac{1}{x^3}\right)\leq \frac{\bar{\Phi}(x)}{\phi(x)}\leq \frac{1}{x}.\tag{1} $$ We can write $$ \sum_{1}^\infty P(X_1<c_n)^n\leq \sum_{1}^\infty(1-(c_n^{-1}-c_n^{-3})\phi(c_n))^n $$ using $(1)$, but I am not sure how to argue that this is finite.
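(Not part of the question, but a quick simulation makes the statement concrete; the script below is my own. The ratio $M_n/\sqrt{2\log n}$ creeps upward toward 1 as $n$ grows, though the convergence is logarithmically slow, so finite-$n$ values still sit a little below 1.)

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10**3, 10**5, 10**7):
    sample_max = rng.standard_normal(n).max()   # M_n for one realization
    print(n, sample_max / np.sqrt(2 * np.log(n)))
```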
May someone please verify if this proof is correct? Let E be the union of open neighborhoods; since the union of a collection of open sets is open, it follows that E is an open set. Assume E is an open set; therefore, $\forall$ p $\in$ E $\exists$ $r>0$ : $N_r(p)$ $\subset$ E. So for $p_1$, $\exists$ $N_{r_1}(p_1)$ $\subset$ E, and the same thing holds for every p. This means that every element in E has a neighborhood around it; therefore, E is contained within the union of the neighborhoods. Furthermore, may someone please tell me of possible ways to improve this proof?

You asked if it is possible to improve your proof that an open set is the union of open sets.

Notation. Let $(M,\rho)$ be a metric space. Fix $x_0\in M$. Fix $r>0$. "$B(x_0,r)$" is notation for "$\{x\in M:\rho(x,x_0)<r\}$."

Terminology. Let $(M,\rho)$ be a metric space. Fix $S\subseteq M$. Say $S$ is open if for each $x\in S$, there exists $r>0$ such that $B(x,r)\subseteq S$.

Proposition. Let $(M,\rho)$ be a metric space. Fix $S\subseteq M$. The following are equivalent. (i) $S$ is open. (ii) $S=\bigcup_{\alpha\in A}U_\alpha$, where $\{U_\alpha\}_{\alpha\in A}$ is a family of open sets.

Proof. ($\Rightarrow$) Assume $S$ is open. Then $S=\bigcup_{x\in S}B(x,r_x)$, where $r_x>0$ and $B(x,r_x)\subseteq S$ for every $x\in S$. Because $B(x,r_x)$ is open for every $x\in S$, we are done.

($\Leftarrow$) Assume $S=\bigcup_{\alpha\in A}U_\alpha$, where $\{U_\alpha\}_{\alpha\in A}$ is a family of open sets. To prove $S$ is open, fix $x_0\in S$. Then $x_0\in U_{\alpha_0}$ for some $\alpha_0\in A$. Because $U_{\alpha_0}$ is open, there exists $r_0>0$ such that $B(x_0,r_0)\subseteq U_{\alpha_0}$. Because $S=\bigcup_{\alpha\in A}U_\alpha$, it follows that $B(x_0,r_0)\subseteq S$. As a result, $S$ is open.
Suppose \(T=\{u_{1}, \ldots, u_{n} \}\) and \(R=\{w_{1}, \ldots, w_{n} \}\) are two orthonormal bases for \(\Re^{n}\). Then: \begin{eqnarray*} w_{1} &=& (w_{1}\cdot u_{1}) u_{1} + \cdots + (w_{1}\cdot u_{n})u_{n}\\ & \vdots & \\ w_{n} &=& (w_{n}\cdot u_{1}) u_{1} + \cdots + (w_{n}\cdot u_{n})u_{n}\\ \Rightarrow w_{i} &=& \sum_{j} u_{j}(u_{j}\cdot w_{i}) \\ \end{eqnarray*} Thus the matrix for the change of basis from \(T\) to \(R\) is given by \[P = (P^{j}_{i}) = (u_{j}\cdot w_{i}).\]

We would like to calculate the product \(PP^{T}\). For that, we first develop a dirty trick for products of dot products: $$(u\cdot v)(w\cdot z)=(u^{T} v) (w^{T} z) = u^{T} (v w^{T}) z\, . $$ The object \(v w^{T}\) is the square matrix made from the outer product of \(v\) and \(w\)! Now we are ready to compute the components of the matrix product \(PP^{T}\): \begin{eqnarray*} \sum_{i}(u_{j}\cdot w_{i})(w_{i}\cdot u_{k})&=& \sum_{i}(u_{j}^{T} w_{i}) (w_{i}^{T} u_{k})\\ &=& u_{j}^{T} \left[\sum_{i} (w_{i} w_{i}^{T}) \right] u_{k} \\ &\stackrel{(*)}=& u_{j}^{T} I_{n} u_{k} \\ &=& u_{j}^{T} u_{k} = \delta_{jk}. \end{eqnarray*} The equality \((*)\) is explained below. Assuming \((*)\) holds, we have shown that \(PP^{T}=I_{n}\), which implies that \[P^{T}=P^{-1}.\]

The equality in the line \((*)\) says that \(\sum_{i} w_{i} w_{i}^{T}=I_{n}\). To see this, we examine \(\left(\sum_{i} w_{i} w_{i}^{T}\right)v\) for an arbitrary vector \(v\). We can find constants \(c^{j}\) such that \(v=\sum_{j} c^{j}w_{j}\), so that: \begin{eqnarray*} \left(\sum_{i} w_{i} w_{i}^{T}\right)v &=& \left(\sum_{i} w_{i} w_{i}^{T}\right)\left(\sum_{j} c^{j}w_{j}\right) \\ &=& \sum_{j} c^{j} \sum_{i} w_{i} w_{i}^{T} w_{j} \\ &=& \sum_{j} c^{j} \sum_{i} w_{i} \delta_{ij} \\ &=& \sum_{j} c^{j} w_{j} \textit{ since all terms with \(i\neq j\) vanish}\\ &=&v. \end{eqnarray*} Thus, as a linear transformation, \(\sum_{i} w_{i} w_{i}^{T}=I_{n}\) fixes every vector, and thus must be the identity \(I_{n}\).

Definition: Orthogonality
A matrix \(P\) is orthogonal if \(P^{-1}=P^{T}\).

Then to summarize,

Theorem: Orthonormality
A change of basis matrix \(P\) relating two orthonormal bases is an orthogonal matrix. \(\textit{i.e.}\), \(P^{-1}=P^T.\)

Example 123
Consider \(\Re^{3}\) with the orthonormal basis \[ S=\left\{ u_{1}=\begin{pmatrix}\frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{6}}\\ \frac{-1}{\sqrt{6}}\end{pmatrix}, u_{2}=\begin{pmatrix}0\\ \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{pmatrix}, u_{3}=\begin{pmatrix}\frac{1}{\sqrt{3}}\\ \frac{-1}{\sqrt{3}}\\ \frac{1}{\sqrt{3}}\end{pmatrix} \right\}. \] Let \(E\) be the standard basis \(\{e_{1},e_{2},e_{3} \}\). Since we are changing from the standard basis to a new basis, the columns of the change of basis matrix are exactly the new basis vectors, written in the standard basis. Then the change of basis matrix from \(E\) to \(S\) is given by: \begin{eqnarray*} P=(P^{j}_{i})=(e_{j}\cdot u_{i})&=& \begin{pmatrix} e_{1}\cdot u_{1} & e_{1}\cdot u_{2} & e_{1}\cdot u_{3} \\ e_{2}\cdot u_{1} & e_{2}\cdot u_{2} & e_{2}\cdot u_{3} \\ e_{3}\cdot u_{1} & e_{3}\cdot u_{2} & e_{3}\cdot u_{3} \\ \end{pmatrix} \\ = \begin{pmatrix} u_{1} & u_{2} & u_{3} \end{pmatrix} &=& \begin{pmatrix} \frac{2}{\sqrt{6}} & 0 & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{2}}&\frac{-1}{\sqrt{3}}\\ \frac{-1}{\sqrt{6}}& \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\\ \end{pmatrix}. \end{eqnarray*}
From our theorem, we observe that: \begin{eqnarray*} P^{-1}=P^{T} &=& \begin{pmatrix}u_{1}^{T}\\u_{2}^{T}\\u_{3}^{T}\end{pmatrix} \\ &=& \begin{pmatrix} \frac{2}{\sqrt{6}}& \frac{1}{\sqrt{6}}& \frac{-1}{\sqrt{6}}\\ 0 & \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}}& \frac{-1}{\sqrt{3}}&\frac{1}{\sqrt{3}}\\ \end{pmatrix}. \end{eqnarray*}

We can check that \(P^{T}P=I\) by a lengthy computation, or more simply, notice that \begin{eqnarray*} P^{T}P &=& \begin{pmatrix}u_{1}^{T}\\u_{2}^{T}\\u_{3}^{T}\end{pmatrix} \begin{pmatrix}u_{1} & u_{2}& u_{3}\end{pmatrix} \\ &=& \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}. \end{eqnarray*} Above we are using the orthonormality of the \(u_{i}\) and the fact that matrix multiplication amounts to taking dot products between rows and columns. It is also very important to realize that the columns of an \(\textit{orthogonal}\) matrix are made from an \(\textit{orthonormal}\) set of vectors.

Remark (Orthonormal Change of Basis and Diagonal Matrices): Suppose \(D\) is a diagonal matrix and we are able to use an orthogonal matrix \(P\) to change to a new basis. Then the matrix \(M\) of \(D\) in the new basis is: \[ M = PDP^{-1} = PDP^{T}. \] Now we calculate the transpose of \(M\): \begin{eqnarray*} M^{T} &=& (PDP^{T})^{T}\\ &=& (P^{T})^{T}D^{T}P^{T} \\ &=& PDP^{T}\\ &=& M \end{eqnarray*} The matrix \(M=PDP^{T}\) is symmetric!
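Not part of the text, but the example invites a quick numerical check; here is a small NumPy sketch verifying \(P^{T}P = I\) and \(P^{-1} = P^{T}\) for the matrix above:

```python
import numpy as np

s6, s2, s3 = np.sqrt(6), np.sqrt(2), np.sqrt(3)

# Columns are the orthonormal basis vectors u1, u2, u3 from the example.
P = np.array([[ 2/s6, 0.0,   1/s3],
              [ 1/s6, 1/s2, -1/s3],
              [-1/s6, 1/s2,  1/s3]])

print(np.allclose(P.T @ P, np.eye(3)))        # True: P^T P = I
print(np.allclose(np.linalg.inv(P), P.T))     # True: P^{-1} = P^T
```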
This question is about 2-d surfaces embedded in $\mathbb{R}^3$. It's easy to find information on how the metric tensor changes when $$x_{\mu}\rightarrow x_{\mu}+\varepsilon\xi(x).$$ So, what about the variation of the second fundamental form, the Gauss and the mean curvature? How do they change? I found...

Does it make sense to define the Taylor expansion of the square of the distance function? If so, how can one compute its coefficients? I simply thought that the square of the distance function is a scalar function, so I think that one can write $$d^2(x,x_0)=d^2(x'+(x-x'),x_0)=d^2(x',x_0) +...$$

Hi, I read somewhere that the geodesic distance between an arbitrary point ##x## and the base point ##x_0## in normal coordinates is just the Euclidean distance. Why?! That's the part I don't understand. I know that one can write g_{\mu \nu} = \delta_{\mu \nu} - \frac{1}{6} (R_{\mu \rho \nu \sigma}...

I'm studying General Relativity and Differential Geometry. In my textbook, the author has written ##x^2=d(x,.)## where d(x,y) is the distance between two points ##x,y\in M##. I couldn't understand what d(x,.) means. Moreover, I am not sure if this is generally true to write ##x^2=g_{\mu\nu} x^\mu...
Sue Liu of Madras College, St Andrews sent an excellent solution to this question. Suppose that $a$ is the radius of the axle, $b$ is the radius of each ball-bearing, and $c$ is the radius of the hub (see the figure). The solution is based on the following figure. The angle subtended by each ball-bearing at the centre of the axle is $2\pi/n$, so that in the triangle $OPQ$ we have $$OQ = a+b, \quad PQ = b, \quad \angle POQ = \pi/n$$ and hence $$(a+b)\sin (\pi/n) = b. \tag{1}$$ We also know that $$c = OP' = OP + PP' = a+2b. \tag{2}$$ The rest of the solution is applying simple algebra to (1) and (2), and to simplify this we will (temporarily) write $s$ for $\sin(\pi/n)$. First, (1) gives $$as = b(1-s).$$ Next, (2) gives $$c = a+2b = a + {2as\over 1-s} = a\left({1+s\over 1-s}\right),$$ so that, finally, we have $${a \over b} = \left({1 - \sin (\pi/n)\over\sin (\pi /n)}\right), \quad {b \over c} = \left({\sin (\pi/n)\over 1+\sin (\pi /n)}\right), \quad {c\over a} = \left({1+\sin(\pi/n)\over 1-\sin (\pi /n)}\right).$$

When there are 3 ball-bearings, $n = 3$ and $\sin (\pi /3) = \sqrt 3 /2$. Hence $${a\over b}= {2\sqrt 3 \over 3} - 1, \quad {b \over c} = \sqrt 3(2 -\sqrt 3), \quad {c \over a} = 7 + 4\sqrt 3.$$ When there are 4 ball-bearings, $n = 4$ and $\sin (\pi /4) = 1/\sqrt 2$. Hence $${a \over b}= \sqrt 2 - 1, \quad {b \over c} = \sqrt 2 - 1, \quad {c \over a} = 3 + 2\sqrt 2.$$

If $n=6$ then $b = a$ and $c = 3a$. This is a very special case, for suppose, in the general case, that the internal radius $c$ of the hub is an integer multiple of the radius $b$ of each ball-bearing. Then $c/b = N$, say, where $N$ is an integer, and this gives $$N\sin {\pi \over n} = 1+\sin{\pi\over n},$$ or $$\sin {\pi\over n} = {1\over N-1}.$$ Now it is known (although this is NOT an elementary result) that if $x$ is a rational multiple of $\pi$ and $\sin x$ is rational, then $\sin x$ is $0$, $1/2$ or $1$. Since $\sin(\pi/n)$ can be neither $0$ nor $1$ here, we must have $${1\over N-1} = {1\over 2},$$ so that $N=3$. This means that $\sin(\pi/n) = 1/2$, so that $n=6$.

Finally, as this same conclusion, namely $n=6$, can be drawn whenever one of the ratios $a/b$, $b/c$, $c/a$ is rational and, as the case $n=6$ is not practical (many more ball-bearings are needed), it follows that in all cases of, for example, bicycle wheels, these ratios are irrational. It therefore follows that it is impossible to manufacture a perfectly fitting set of ball-bearings.
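As a quick numerical check of these ratios (my own script, not part of the published solution), note how $n=6$ is the only case shown with rational values:

```python
import math

def ratios(n):
    """Return (a/b, b/c, c/a) for n ball-bearings, with s = sin(pi/n)."""
    s = math.sin(math.pi / n)
    return (1 - s) / s, s / (1 + s), (1 + s) / (1 - s)

for n in (3, 4, 6):
    a_b, b_c, c_a = ratios(n)
    print(f"n={n}: a/b={a_b:.6f}, b/c={b_c:.6f}, c/a={c_a:.6f}")
# n=6 gives a/b = 1, b/c = 1/3, c/a = 3, matching b = a and c = 3a.
```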
Developed by Deva O'Neil - Published July 20, 2017

Subject Area: Mechanics
Level: First Year
Available Implementation: Glowscript
Learning Objectives:
* Students use a while loop to implement a summation (**Exercises 1 and 2**)
* Students distinguish between variables that are updated (accumulators) and variables that are recalculated in each iteration of a loop (**Exercises 1 and 2**)
* Students calculate the center of mass of a system numerically, and check the output for plausibility. (**Exercise 2**)
Time to Complete: 50 min

These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).

* Under what circumstances would the program behave differently?
* Explain the purpose behind putting the variable "massSum" on both sides of the equal sign.

*Important hint* In the next exercise, you'll be prompted to add lines to an existing program. Pay attention to whether the purpose of a line is to "update" a variable or to (re)calculate it. I use the word "update" in situations where the old value of a variable is used to calculate a new value. For example, this line would update a variable called x:

```python
x = x + v*t
```

* Which "x" is the old value?

In contrast, this would be a recalculation of x, but I would not call it an update:

```python
x = v*t
```

Note that it does not use the old value.

* In your while loop, what variable is "updated"?

# Exercise 2

Open the template for Exercise 1, creating a new program so that you don't overwrite your previous program. The goal of this program is to calculate the center of mass position of the system when the user creates multiple balls. The center of mass position will be marked with a white X on the screen. Every time a new ball is added, the X will move.

![](images/CoM/CoM.png "")

The vector that represents the position of the system's center of mass will be called rcm (meaning $\vec{r}_{cm}$), calculated as follows: $\vec{r}_{cm}=\frac{\Sigma ~ m_i \vec{r}_i}{\Sigma ~m_i}$

* Besides the sum of the masses, what else are you going to need to calculate (and update every loop) to find $\vec{r}_{cm}$? Use the summation symbol to express your answer.

Fill in the blanks provided in the template.

```
rcmNumerator = _______________________
```

*Use symbols, NOT numerical values.*

```
rcm = rcmNumerator/__________
```

Inside the while loop, after the existing commands, do the following (one possible shape of this loop is sketched after the exercise):
1. Update rcmNumerator
2. Recalculate rcm
3. Print "New center of mass = " with the vector rcm following the text. Make sure that the correct units for rcm are printed out also.

__Analysis__

Test your program by placing one new ball. Record the position of this new ball: _____________
* Calculate (without using your computer program) where the center of mass should be:
* Does it match your program's output?

Place more balls and verify visually that the X moves around in a way that makes sense. If your program works for one additional ball but not two, check your calculation of the numerator in the while loop. Does it "update" the variable based on its old value?

* Explain how the behavior of the program will change if the masses of the white balls are changed to be the same as the starter ball. Test your prediction.
* Explain, in your own words, the physical significance of the center of mass of a system.
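For reference, here is one possible shape of the finished accumulation loop as a minimal plain-Python sketch; it is not the official Glowscript template, and the ball list, masses, and positions are stand-ins chosen just to make it runnable:

```python
# Each entry is (mass in kg, (x, y) position in m); stand-in values.
balls = [(2.0, (0.0, 0.0)), (1.0, (3.0, 0.0)), (1.0, (0.0, 3.0))]

massSum = 0.0
rcmNumerator = (0.0, 0.0)

for m, (x, y) in balls:                        # stands in for the while loop
    massSum = massSum + m                      # update: old value on the right
    rcmNumerator = (rcmNumerator[0] + m * x,   # update: accumulates sum of m_i * r_i
                    rcmNumerator[1] + m * y)
    rcm = (rcmNumerator[0] / massSum,          # recalculation: no old rcm used
           rcmNumerator[1] / massSum)
    print("New center of mass =", rcm, "m")
```

The split between the two kinds of lines is the point of the exercise: massSum and rcmNumerator are accumulators, while rcm is recomputed from scratch on every pass.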
Credits and Licensing
Deva O'Neil, "Center of Mass for Point Particles," Published in the PICUP Collection, July 2017. The instructor materials are ©2017 Deva O'Neil. The exercises are released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license
By Dr Adam Falkowski (Résonaances; Orsay, France)

The title of this post is purposely over-optimistic in order to increase the traffic. A more accurate statement is that a recent analysis of the X-ray spectrum of galactic clusters claims the presence of a monochromatic 3.5 keV photon line which can be interpreted as the signal of a 7 keV sterile neutrino dark matter candidate decaying into a photon and an ordinary neutrino:

Detection of An Unidentified Emission Line in the Stacked X-ray Spectrum of Galaxy Clusters, by Esra Bulbul and 5 co-authors (NASA/Harvard-Smithsonian)

It's a long way before this claim may become a well-established signal. Nevertheless, in my opinion, it's not the least believable hint of dark matter coming from astrophysics in recent years.

First, let me explain why anyone would dirty their hands studying X-ray spectra. In the most popular scenario the dark matter particle is a WIMP: a particle in the GeV-TeV mass ballpark that has weak-strength interactions with the ordinary matter. This scenario may predict signals in gamma rays, high-energy anti-protons, electrons etc., and these are being searched for high and low by several Earth-based and satellite experiments. But in principle the mass of the dark matter particle could be anywhere between $10^{-30}$ and $10^{50}$ GeV, and there are many other models of dark matter on the market. One serious alternative to WIMPs is a keV-mass sterile neutrino.

In general, neutrinos are dark matter candidates: they are stable, electrically neutral, and are produced in the early universe. However, we know that the 3 neutrinos from the Standard Model constitute only a small fraction of dark matter, as otherwise they would affect the large-scale structure of the universe in a way that is inconsistent with observations. The story is different if the 3 "active" neutrinos have partners from beyond the Standard Model that do not interact with W- and Z-bosons, the so-called "sterile" neutrinos. In fact, the simplest UV-complete models that generate masses for the active neutrinos require introducing at least 2 sterile neutrinos, so there are good reasons to believe that these guys exist. A sterile neutrino is a good dark matter candidate if its mass is larger than 1 keV (because of the constraints from the large-scale structure) and if its lifetime is longer than the age of the universe.

How can we see if this is the right model? Dark matter that has no interactions with the visible matter seems hopeless. Fortunately, sterile neutrino dark matter is expected to decay and produce a smoking-gun signal in the form of a monochromatic photon line. This is because, in order to be produced in the early universe, the sterile neutrino should mix slightly with the active ones. In that case, oscillations of the active neutrinos into sterile ones in the primordial plasma can populate the number density of sterile neutrinos, and by this mechanism it is possible to explain the observed relic density of dark matter. But the same mixing will make the sterile neutrino decay, as shown in the diagrams here. If the sterile neutrino is light enough and/or the mixing is small enough then its lifetime can be much longer than the age of the universe, and then it remains a viable dark matter candidate.
The tree-level decay into 3 ordinary neutrinos is undetectable, but the 2-body loop decay into a photon and a neutrino results in production of photons with the energy \[ \large{E=\frac{m_{\rm DM}}{2}.} \] Such a monochromatic photon line can potentially be observed. In fact, in the simplest models, sterile neutrino dark matter heavier than \(\approx 50\keV\) would produce too large a photon flux and is excluded. Thus the favored mass range for dark matter is between \(1\) and \(50\keV\). The photon line is then predicted to fall into the X-ray domain, which can be studied using X-ray satellites like XMM-Newton, Chandra, or Suzaku. Until last week these searches were only providing lower limits on the lifetime of sterile neutrino dark matter. This paper claims they may have hit the jackpot. The paper uses the XMM-Newton data to analyze the stacked X-ray spectra of many galaxy clusters where dark matter is lurking. After subtracting the background, this is what they see: Although the natural reaction here is a loud "are you kidding me", the claim is that the excess near \(3.56\keV\) (red data points) over the background model is very significant, at 4-5 astrophysical sigma. It is difficult to assign this excess to any known emission lines from usual atomic transitions. If interpreted as the signal of sterile neutrino dark matter, the measured energy and flux correspond to the red star in the plot, with the mass \(7.1\keV\) (consistent with \(E = m_{\rm DM}/2 \approx 3.55\keV\)) and a mixing angle of order \(5\times 10^{-5}\). This is allowed by other constraints and, by twiddling with the lepton asymmetry in the neutrino sector, consistent with the observed dark matter relic density. Clearly, a lot could go wrong with this analysis. For one thing, the suspected dark matter line doesn't stand alone in the spectrum. The background mentioned above consists not only of continuous X-ray emission but also of monochromatic lines from known atomic transitions. Indeed, the \(2\)-\(10\keV\) range where the search was performed is packed with emission lines: the authors fit 28 separate lines to the observed spectrum before finding the unexpected residue at \(3.56\keV\). The results depend on whether these other emission lines are modeled properly. Moreover, the known argon XVII dielectronic recombination line happens to be nearby at \(3.62\keV\). The significance of the signal decreases when the flux from that line is allowed to be larger than predicted by models. So this analysis needs to be confirmed by other groups and by more data before we really get excited. Decay diagrams borrowed from this review. For more up-to-date limits on sterile neutrino DM see this paper, or this plot. Update: another independent analysis of XMM-Newton data observes the anomalous 3.5 keV line in Andromeda and the Perseus cluster. The text was reposted from Adam's blog with his permission...
SPPU First Year Engineering (Semester 1) Engineering Mathematics-1 December 2013
Total marks: -- Total time: --
INSTRUCTIONS: (1) Assume appropriate data and state your reasons. (2) Marks are given to the right of every question. (3) Draw neat diagrams wherever necessary.
Answer any one question from Q1 and Q2
1 (a) Examine the following system of equations for consistency and solve it, if consistent: \( 4x-2y+6z=8, \quad x+y-3z=-1, \quad 15x-3y+9z=21 \) 4 M
1 (b) Examine the following vectors for linear dependence and find the relation between them, if dependent: \( (2,-1,3,2),\ (1,3,4,2) \) and \( (3,-5,2,2) \) 4 M
1 (c) If \( 2\cos\phi = x + \dfrac{1}{x} \) and \( 2\cos\psi = y + \dfrac{1}{y} \), prove that \( x^p y^q + \dfrac{1}{x^p y^q} = 2\cos(p\phi + q\psi) \) 4 M
2 (a) Use de Moivre's theorem to solve the equation \( x^7 + x^4 + i(x^3 + 1) = 0 \) 4 M
2 (b) If \( (1+ai)(1+bi) = p + iq \), then prove that: i) \( p\tan\left[\tan^{-1}a + \tan^{-1}b\right] = q \) ii) \( (1+a^2)(1+b^2) = p^2 + q^2 \) 4 M
2 (c) Reduce the following matrix A to its normal form and hence find its rank, where \[ A=\begin{bmatrix} 2 &-3 &4 &4 \\ 1 &1 &1 &2 \\ 3 &-2 &3 &6 \end{bmatrix} \] 4 M
Answer any one question from Q3 and Q4
3 (a) Test the convergence of the series (any one): \[ i)\ \ \sum^{\infty}_{n=1} \dfrac{2^n + 1}{3^n + 1} \qquad ii)\ \ \dfrac{1\cdot 2}{3^2\cdot 4^2} + \dfrac{3\cdot 4}{5^2\cdot 6^2} + \dfrac{5\cdot 6}{7^2\cdot 8^2} + \cdots \] 4 M
3 (b) Expand \( 40 + 53(x-2) + 19(x-2)^2 + 2(x-2)^3 \) in ascending powers of \( x \) 4 M
3 (c) If \( y = x^n \log x \), prove that \[ y_{n+1} = \dfrac{n!}{x} \] 4 M
4 (a) Solve any one: i) Evaluate \( \lim_{x\to 0} (\cot x)^{\sin x} \) ii) Find the values of a and b such that \[ \lim_{x\to 0}\dfrac{a\cos x - a + b x^2}{x^4} = \dfrac{1}{12} \] 4 M
4 (b) Prove that \[ e^x \tan x = x + x^2 + \dfrac{5x^3}{6} + \dfrac{x^4}{2} + \cdots \] 4 M
4 (c) If \( y = \dfrac{x}{(x+1)^4} \), find \( y_n \) 4 M
Answer any one question from Q5 and Q6
Solve any two of the following:
5 (a) Verify \( \dfrac{\partial^2 u}{\partial x\,\partial y} = \dfrac{\partial^2 u}{\partial y\,\partial x} \) for \( u = \tan^{-1}\left[\dfrac{y}{x}\right] \) 7 M
5 (b) If \( x = u\tan v \) and \( y = u\sec v \), prove that \[ \left(\dfrac{\partial u}{\partial x}\right)_y \left(\dfrac{\partial v}{\partial x}\right)_y = \left(\dfrac{\partial u}{\partial y}\right)_x \left(\dfrac{\partial v}{\partial y}\right)_x \] 7 M
5 (c) If \[ u = \dfrac{x^3 + y^3}{y\sqrt{x}} + \dfrac{1}{x^7}\sin^{-1}\left(\dfrac{x^2 + y^2}{2xy}\right), \] then find the value of \[ x^2\dfrac{\partial^2 u}{\partial x^2} + 2xy\dfrac{\partial^2 u}{\partial x\,\partial y} + y^2\dfrac{\partial^2 u}{\partial y^2} \] at the point (1,1) 7 M
Solve any two of the following:
6 (a) If \( u = (x^2 - y^2)\,f(xy) \), then show that \( u_{xx} + u_{yy} = (x^4 - y^4)\,f''(xy) \) 7 M
6 (b) Verify Euler's theorem for homogeneous functions for \( F(x,y,z) = 3x^2yz + 5xy^2z + 4z^4 \) 7 M
6 (c) If \( x = u+v+w \), \( y = uv+uw+vw \), \( z = uvw \) and F is a function of x, y, z, then prove that \[ x\dfrac{\partial F}{\partial x} + 2y\dfrac{\partial F}{\partial y} + 3z\dfrac{\partial F}{\partial z} = u\dfrac{\partial F}{\partial u} + v\dfrac{\partial F}{\partial v} + w\dfrac{\partial F}{\partial w} \] 7 M
Answer any one question from Q7 and Q8
7 (a) If \( x = v^2 + w^2 \), \( y = w^2 + u^2 \), \( z = u^2 + v^2 \), find \[ \dfrac{\partial(u,v,w)}{\partial(x,y,z)} \] 4 M
7 (b) Examine for functional dependence: \( u = x+y+z \), \( v = x^2+y^2+z^2 \), \( w = x^3+y^3+z^3-3xyz \). 4 M
7 (c) Find the extreme values of \( f(x,y) = x^3 + y^3 - 3axy \), \( a > 0 \) 5 M
8 (a) If \( u^2 + xv^2 = x + y \) and \( v^2 + yu^2 = x - y \), find \[ \dfrac{\partial v}{\partial y} \] 4 M
8 (b) The resistance R of a circuit was calculated using the formula \( I = E/R \). If there is an error of 0.1 Amp in reading I and 0.5 Volts in E, find the corresponding percentage error in R when I = 15 Amp and E = 100 Volts. 4 M
8 (c) Divide 24 into three parts such that the continued product of the first, the square of the second and the cube of the third is maximum. Use Lagrange's method. 5 M
The observation of even a few exceptional storms can provide quantitative evidence for climate change. Doing so, however, requires observing and learning as much as possible about how storms work, not merely counting storms. With some understanding of how atmospheric systems and storms operate, we also have observational information from physics, chemistry, and planetary science that we can apply to the question. We should use all the information available, and Bayes rule can help us do this objectively. Here, P(A|B) is the posterior probability that the climate has changed (A), given the observation of the exceptional storm (B). P(A) is the prior probability of climate change. P(B|A) is the likelihood that storm B occurs given a changed (e.g. warmer) climate. k1 and k2 are constants of proportionality. Let (!A) represent no change of climate. Then we can write two equations for Bayes rule. P(A|B) = k1 P(A) P(B|A) P(!A|B) = k2 P(!A) P(B|!A) The prior odds for a change in climate are P(A):P(!A); let's assume the prior odds for and against climate change are even, 1:1. Now, if we use all the available information we have about atmospheric physics and chemistry, and we observe storm B in detail, we can make an informed estimate of the ratio of likelihoods P(B|A):P(B|!A). Let's assume that B is an exceptional storm that is ten times more likely to occur when the atmosphere is warmer. If a storm of type B is actually observed, the odds should then be updated in favor of climate change, to 10:1. What this means is that the observation of exceptional (extreme) events should inform our opinion on climate change. This approach is most successful, however, when we have many kinds of information on how the atmosphere and climate work. Measurements taken on a day of extreme weather will of course be outliers, but they could also provide important information about how, in a warming world, storms may be fewer but stronger. We should not assume outliers always represent 'noise' that needs to be averaged away. It does not seem unreasonable to ask whether we are seeing effects of climate change in the weather. Thursday morning I read the following on the US National Weather Service forecast discussion page: CLIMATE...THERE IS A SMALL CHANCE THAT SEATTLE WILL GET TO 90 DEGREES ON SUNDAY WHICH WOULD TIE THE RECORD FOR THE DAY. SINCE RECORDS STARTED IN SEATTLE AT THE FEDERAL BUILDING DOWNTOWN IN 1891 THERE HAVE BEEN ONLY SIX DAYS IN THE FIRST WEEK OF JUNE WITH A HIGH TEMPERATURE OF 90 DEGREES OR MORE. THE LAST TIME IT HAPPENED WAS JUNE 4 2009 WITH A HIGH OF 91 DEGREES. FELTON Whether or not the temperature exceeds 90 degrees next Sunday, I wouldn't dismiss out of hand the question of what mechanisms might be operating. We should try to estimate how much what happens supports (or does not support) hypotheses based upon physical processes.
For example, use Bayes rule reasoning to estimate the change in posterior odds for a mechanism A that increases the likelihoods P(B|A) and P(!B|!A) by 15% and decreases the likelihoods P(!B|A) and P(B|!A) by 15%: $$\delta = 0.15$$ Then the likelihood ratio is given by the following, where the record is exceeded in a years and not exceeded in b years: $$k \times\left[ \frac{P(B \mid A)}{P(B \mid !A)} \right] ^{a}\times \left[ \frac{P(!B \mid A)}{P(!B \mid !A)} \right] ^{b} = k \times\left[ \frac{1 + \delta}{1 - \delta} \right] ^{a-b}$$ Let's also look at the support if the record is also exceeded in 2016 and 2017. Change in posterior odds in favor of A (with delta = 0.15):
(a) Record not exceeded in 2015: posterior odds decrease from the prior odds by a factor of 1.35. By this method it is possible that additional observations eventually discredit the hypothesis.
(b) Record exceeded in 2015: posterior odds increase by 35%.
(c) Record exceeded in 2015 & 2016: posterior odds increase by 83%.
(d) Record exceeded in 2015, 2016 & 2017: odds increase by 148%.
Finally, the advantages of using this approach, rather than a frequentist approach, can be more easily understood by considering how it could be applied in practice: for example, how a Penn Cove shellfish business might use these calculated changes of climate-change probability to self-insure their farm. The owner of a shellfish farm may understand that climate change poses a risk to her business, and has hedged against the cost of the odd bad year by putting an extra 100 dollars into an account each month. She has found this has worked well in the past, with the account growing large enough to cover costs in bad years without ballooning too large. How might she use the information that Seattle is breaking temperature records (and that the probability of A may be changing) to adjust this amount? If the temperature record is exceeded in 2015, she may decide to increase the amount to 135 dollars per month; if the record is exceeded again in 2016, to 183 dollars; and if it is exceeded again in 2017, to 248 dollars. The advantage is that the Bayes method helps her decide to act sooner than a frequentist approach would. This way she may be able to prepare for future costs.
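To make the update arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original post; the deposit figures are the ones quoted above):

delta = 0.15
factor = (1 + delta) / (1 - delta)   # ~1.35: odds multiplier per record year

odds = 1.0        # prior odds for climate change, 1:1
deposit = 100.0   # baseline monthly self-insurance deposit, in dollars
for year, exceeded in [(2015, True), (2016, True), (2017, True)]:
    odds = odds * factor if exceeded else odds / factor
    print(year, round(odds, 2), round(deposit * odds))
# prints odds 1.35, 1.83, 2.48 and deposits 135, 183, 248,
# matching the figures in the example above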
In continuation of my previous post on factoring, I continue to explore these methods. From Pollard's $latex \rho$ method, we now consider Pollard's p-1 algorithm. Before we consider the algorithm proper, let's consider some base concepts. Firstly, I restate Euler's Theorem (of course, Euler was prolific, and so there are altogether too many things called Euler's Theorem; but so it goes): If $latex \mathrm {gcd} (a,n) = 1$, i.e. if a and n are relatively prime, then $latex a^{\phi (n)} \equiv 1 \mod n$, where $latex \phi (n)$ is the Euler Totient Function or Euler's Phi Function (perhaps as opposed to someone else's phi function?). As a corollary, we get Fermat's Little Theorem, which says that if $latex \mathrm {gcd} (a,p) = 1$, with p a prime, then $latex a^{p-1} \equiv 1 \mod p$. The second base concept is called smoothness. A number is called B-smooth if none of its prime factors is greater than B. A very natural question might be: why do we always use the letter B? I have no idea. Another good question might be: what's an example? Well, the number $latex 24 = 2^3 \cdot 3$ is 3-smooth (and 4-smooth, 5-smooth, etc.). We call a number B-power smooth if all prime powers $latex p_i ^{n_i}$ dividing the number are less than or equal to B. So 24 is 8-power smooth, but not 7-power smooth. Note also that in this case, the number is a factor of $latex \mathrm{lcm} (1, 2, 3, \ldots, B)$. Pollard's (p-1) algorithm is called the "p-1" algorithm because it is a specialty factorization algorithm. Suppose that we are trying to factorize a number N and we choose a positive integer B, and that there is a prime divisor p of N such that p-1 is B-power smooth (we choose B beforehand; it can't be too big or the algorithm becomes computationally intractable). Now for a positive integer a coprime to p, we get $latex a^{p-1} \equiv 1 \mod p$. Since p-1 is B-power smooth, we know that $latex (p-1) | m = \mathrm{lcm} (1, 2, \ldots, B)$. Thus $latex a^m \equiv 1 \mod p$, or rather $latex p | (a^m - 1)$. Thus p divides $latex \mathrm{gcd} (a^m - 1, N)$, which is therefore greater than 1. And so one hopes that this factor is nontrivial and proper. This is the key idea behind the entire algorithm. Pollard's (p-1) Algorithm 1. Choose a smoothness bound B (often something like $latex 10^6$). 2. Compute $latex m = \mathrm{lcm} (1, 2, \ldots, B)$. 3. Set a = 2. 4. Compute $latex x = a^m - 1 \mod N$ and $latex g = \mathrm{gcd} (x, N)$. 5. If we've found a nontrivial factor, then that's grand. If not, and if $latex a < 20$ (say), then replace a by a+1 and go back to step 4. So how fast is the algorithm? Well, computing the lcm will likely take something on the order of $latex O(B \; \log_2 B)$ (using the Sieve of Eratosthenes, for example). The modular exponentiation will take $latex O( \; (\log_2 N)^2)$ time. Calculating the gcd takes only $latex O( \; (\log_2 N)^3)$, so the overall algorithm takes $latex O(B \cdot \log_2 B \cdot (\log_2 N)^2 + (\log_2 N)^3)$ or so. In other words, it's only efficient if B is small; but when B is small, fewer primes p will have a B-power-smooth p-1, so fewer factors can be found. I've read that Dixon's Theorem guarantees a probability of about $latex \dfrac{1}{27}$ that a value of B of size $latex N^{1/6}$ will yield a factorization. Unfortunately, such a B grows untenably large. In practice, this algorithm is an excellent way to eliminate the possibility of small factors (or to successfully divide out small factors).
Using $latex B = 10^6$ will identify smaller factors and allow different techniques, such as the elliptic curve factorization algorithm, to try to find larger factors. I'll expand on more factoring methods later.
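For the curious, here is a minimal Python sketch of the algorithm described above (my own illustration; the bound B, the cap of 20 on a, and the test number are arbitrary choices). Instead of forming the huge exponent $latex m = \mathrm{lcm}(1, 2, \ldots, B)$ explicitly, it raises a to each of 2, 3, ..., B in turn, which computes $latex a^{B!} \bmod N$; since $latex \mathrm{lcm}(1, \ldots, B)$ divides $latex B!$, the same divisibility argument applies.

from math import gcd

def pollard_p_minus_1(N, B=10000, max_a=20):
    # Finds a prime factor p of N when p-1 is B-power smooth.
    for a in range(2, max_a + 1):
        g = gcd(a, N)
        if g > 1:
            return g              # lucky: a already shares a factor with N
        x = a
        for q in range(2, B + 1):
            x = pow(x, q, N)      # after the loop, x = a^(B!) mod N
        g = gcd(x - 1, N)
        if 1 < g < N:
            return g              # nontrivial proper factor: that's grand
        if g == N:
            return None           # every factor was captured; retry with smaller B
    return None

print(pollard_p_minus_1(1313, B=4))   # 13, since 13 - 1 = 12 divides 4! = 24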
Let $C$ be an $[n,k]$ linear code over $\mathbb{F}_q$, and suppose that $\rho$ is its covering radius. I want to show that $\rho \geq \frac{n-k}{1+ \log_q{(n)}}$. Could you give me a hint how to show this? The idea is to use a sphere packing argument, only for covering. Let $V_q(\rho)$ be the volume of a Hamming ball of radius $\rho$. The Hamming balls around all codewords cover the entire space, so $V_q(\rho) q^k \geq q^n$, or $V_q(\rho) \geq q^{n-k}$. To deduce the bound, use an approximation for $V_q(\rho)$.
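One way to finish from the hint (my own completion; not necessarily the intended estimate): every word within distance $\rho$ of a codeword differs from it in at most $\rho$ coordinates, so it can be specified by a list of $\rho$ (position, symbol) pairs, padding short lists by repeating a pair that resets a coordinate to its existing value. This gives $V_q(\rho) \le (nq)^{\rho}$ for $\rho \ge 1$ (and $\rho = 0$ forces $k = n$, where the bound is trivial). Hence $$q^{n-k} \le V_q(\rho) \le (nq)^{\rho} = q^{\rho(1 + \log_q n)},$$ and taking $\log_q$ of both sides yields $n - k \le \rho(1 + \log_q n)$, which rearranges to the desired bound.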
This is my first post, so I'll do something very simple: ADVERTISE the methodology that our applied mathematicians use in data analysis. Unlike statisticians or computer scientists, we usually start from a 'dynamical systems' point of view. To give you a taste of what I mean, a method called dynamic mode decomposition (DMD) will be offered as an example. Appetizers: why you may care? If you are a data scientist, you must be very familiar with the Singular Value Decomposition (SVD), which helps you get a better feeling for what your high-dimensional data looks like by performing Principal Component Analysis (PCA). Now I claim: DMD is a similar methodology which is also lightweight, easy to learn and straightforward to apply. Why not put it into your data science toolbox? (At least use it for a quick check!) If you are dealing with time series data, you don't want to simply fit a curve (e.g. polynomials) to the data and extrapolate, do you? Since you know: to better forecast the future, you need to extract information (e.g. trend, seasonality, special effects) from the historical data. A time series model, like an ARIMA model, is usually used for this reason, since it can explain the mechanism of your data (i.e. how the data are generated). DMD is similar: it can give you more physical insight compared with curve fitting. One good thing about DMD is that you don't need to manually de-trend your time series data or explicitly specify a seasonal parameter as we typically do in time series modeling, which is a great plus. Entree: what is Dynamic Mode Decomposition? Dynamic mode decomposition is a dimensionality reduction algorithm which was originally introduced in the fluid mechanics community. Similar to SVD, which gives you the intrinsic coordinate system and the corresponding projections, DMD offers you specific spatial modes that evolve with different temporal behaviors. The basic idea is to find the best matrix \(A\) (in the least squares sense) such that \(x_{k+1}=Ax_k\), i.e. to find a linear approximation that sends you from your current state to the next. Robert Taylor has already written up a nice tutorial for DMD, so I'll just direct you to his blog here. But since he only tests this method on his proposed toy dataset, to serve as a supplement I provide an example using real-world data: the motion capture data (walking) obtained from here. Also, to learn more details about DMD, I recommend the paper here. Salad: walking motion analysis with DMD (Note: all the Matlab code below can be found in my GitHub. 🙂) Dataset The data we are using is the first trial of CMU Mocap subject #07. We use the c3d file format, which basically means it contains the 3D coordinate data of each time frame. Sensors are placed on each part of the human body. For simplicity, we only consider a subset of these sensors. The number of sensors I pick is 18, and each sensor provides (x,y,z) information. The total number of time frames is 316 (the gap between each time frame is 1/120 seconds). So basically we have a matrix of size (54,316), and each column represents a time frame. To give you an intuition of what the sensor data looks like, here I present nine of them to you: Some seemingly linear trend, some periodicity, ... not that bad, huh? Now, we are good to go.
DMD If you come back from Robert Taylor's blog on DMD, there should be no surprise that the following Matlab script is an implementation of DMD:

function [mu, Phi] = dmd(X, Y, truncate)
if truncate == 0
    mu = []; Phi = [];
else
    r = truncate;
    [U,S,V] = svd(X,'econ');                        % reduced SVD of the snapshots
    Ur = U(:,1:r); Sr = S(1:r,1:r); Vr = V(:,1:r);  % rank-r truncation
    Atilde = Ur'*Y*Vr/Sr;                           % projected linear operator
    [W,D] = eig(Atilde);
    mu = diag(D);                                   % eigenvalues (temporal modes)
    Phi = Y*Vr/Sr*W;                                % DMD (spatial) modes
end
end

Now, we apply the DMD algorithm to our data:

r = 10;   % you can pick this truncation threshold yourself
[mu,Phi] = dmd(data(:,1:end-1),data(:,2:end),r);

The output \(\mu\) represents the temporal modes and the output \(\Phi\) represents the spatial modes. Now we determine the initial condition b and 'merge' it into the temporal modes:

b = Phi\data(:,1);
for iter = 1:T
    Psi(:,iter) = b.*mu.^(iter-1);
end

Then we can reconstruct our original signals by doing:

X = Phi*Psi;

Here, \(X\) is a reconstructed version of the human 'walking' motion. (Again, this is a quick walk-through; if any of the explanation above doesn't make sense to you, you should check Robert Taylor's blog.) Results Our results show that the DMD algorithm can fit the data very well: the solid lines are ground truth and the black dotted lines are reconstructions (if you are not satisfied, you can increase the truncation parameter r to achieve a better fit), meaning the motion itself can be pretty much explained by the linear system \(x_{k+1}=Ax_k\): To get a better feeling for what we have reconstructed, here is an animation: Now you see that the walking motion itself can be well captured by such a linear system. (I should be more careful here, because if you generate more time frames, you may observe exponential decay of the skeleton. This is not surprising, though, because the model itself is essentially linear: it can only exhibit exponential growth, exponential decay and oscillations.) Interpret the Results Then what? You can also reconstruct the walking motion with SVD! What's so special about DMD then? Let's look into the spatial and temporal modes we get from the model. Here's a visualization of the distribution of frequencies of the temporal modes (complex plane): Note: the frequencies are not the values of \(\mu\) themselves, but \(\log(\mu)/dt\).

w = log(mu)/dt;
plot(w,'r*','Markersize',12); grid on; axis equal;

This is because the frequency concept comes from continuous-time dynamical systems, and our process \(x_{k+1}=Ax_k\) is a discrete-time version of the dynamical system \(\frac{dx}{dt}=\tilde{A} x\). The latter has solutions of the form \(x(t)=e^{\tilde{A}t}x(0)\), where the exponential term captures the frequencies if you consider it in the complex domain. Now the question is, what do these temporal modes of different frequencies represent? To illustrate, I group different temporal modes using circles of different colors. And for convenience, I also sort the modes with respect to the amplitudes of their frequencies.

[~, order] = sort(abs(w));
mu = mu(order); w = w(order); Phi = Phi(:,order);

Since the horizontal axis is the real axis and the vertical axis is the imaginary axis, the further a mode deviates from the horizontal axis, the larger its frequency. Therefore, in this plot, the coupled modes in the grey circles oscillate relatively fast, whereas the modes inside the blue circle have almost no oscillations.
And notice that the modes in the blue circle are very close to the origin, meaning they are very persistent: they barely grow, decay or oscillate over time, so they may keep some 'invariant' properties of the walking motion and are very important. Another signature of this plot is that almost all the modes are purely oscillating, except the ones in the green circles. The two modes in the green circles are to the left of the vertical axis, meaning that they exponentially decay as time proceeds. This makes us conjecture that they may not be that important, since we don't expect any features to vanish in a steady walk. Our results further confirm our conjectures: Conclusion 1: The modes in the blue circle correspond to the skeleton of the body moving forward, which is intuitively the most important signature. Here's a reconstruction with only these two modes:

X = Phi(:,1:2)*Psi(1:2,:);

To justify the importance of the modes in the blue circle, we further animate the reconstructions using only the modes in the green circles or the yellow circles:

X1 = Phi(:,3:4)*Psi(3:4,:);
X2 = Phi(:,5:6)*Psi(5:6,:);

We can see these animations don't make sense at all: the first one starts from a mess and gradually shrinks to a point, which is in agreement with the exponential decay; the second one is doing some spider-like movement, but we don't see a human at all. So, in order to interpret the other modes, we need to consider their combined effects with the modes in the blue circle. For convenience, let's call them skeleton modes. Conclusion 2: the modes in the green circles don't count. This is well expected, because the exponential decay already tells us these two modes will vanish as time proceeds: they only contribute a bit in the beginning of the motion. (The guess is they may come from intricacies when a person 'starts' to walk.) All this can be justified by observing the reconstruction with the skeleton modes and the modes in the green circles:

X = Phi(:,1:2)*Psi(1:2,:)+Phi(:,3:4)*Psi(3:4,:);

Conclusion 3: the modes in the yellow circles correspond to the intrinsic walking frequency. These two modes are the most interesting ones. When you add them to the skeleton modes, you get a reasonable walking motion, which is awesome! Let's call them intrinsic walking modes. Note that these two modes are purely oscillating, and from the spider-like movement above we can see they only capture the (nearly) periodic movements of the four limbs, which really makes sense. We can infer the walking speed from these intrinsic walking modes, and this is what SVD CANNOT DO! Let's take a look at the reconstruction:

X = Phi(:,1:2)*Psi(1:2,:)+Phi(:,5:6)*Psi(5:6,:);

Conclusion 4: the other modes model other high-resolution details. Since the intrinsic walking modes are already found, the other modes are of less interest. They just serve to add extra details, though it doesn't hurt to take a look (in fact, they are interesting!). For example, the modes in the grey circles give us something like a 'zombie jump'. With some hindsight, this is indeed correct, huh? Here's an animation of skeleton modes + zombie modes:

X = Phi(:,1:2)*Psi(1:2,:)+Phi(:,7:8)*Psi(7:8,:);

Dessert: what else? Despite the success, I don't want to lie to the readers: you cannot easily obtain such nice interpretations on more complex human motions. This is because DMD has limited power, as mentioned earlier. One takeaway is: these models are extremely useful in modeling (almost) periodic signals (e.g. walking, running, swimming, punching, etc.).
For motions of other kinds it will typically fail (think about jumping), let alone mixtures of these motions. One way out is to consider multi-resolution Dynamic Mode Decomposition (mrDMD). (Again, I encourage you to read the paper or Robert Taylor's blog on mrDMD.) However, there will be more parameters to tune when you use it, along with some tradeoffs.
$$\frac{ \cos 6x + 6 \cos 4x + 15 \cos 2x + 10 }{ \cos 5x + 5 \cos 3x + 10 \cos x }$$ My approach so far: I tried to represent the denominator as a factor of the numerator by manipulating the numerator's terms, $\cos 6x = \cos (5x+x)$, $\cos 4x = \cos (3x+x)$, and so on, but then $\sin x$ terms come up, which makes it more complex to solve. The options for the answer are: A) $\cos 2x$. B) $2 \cos x$. C) $\cos^2 x$. D) $1 + \cos x$
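One grouping that avoids the $\sin x$ terms entirely (a sketch of a possible route, using the sum-to-product identity $\cos A + \cos B = 2\cos\frac{A+B}{2}\cos\frac{A-B}{2}$ and $1 + \cos 2x = 2\cos^2 x$):

$$\cos 6x + 6\cos 4x + 15\cos 2x + 10 = (\cos 6x + \cos 4x) + 5(\cos 4x + \cos 2x) + 10(\cos 2x + 1)$$
$$= 2\cos x\cos 5x + 10\cos x\cos 3x + 20\cos^2 x = 2\cos x\,(\cos 5x + 5\cos 3x + 10\cos x),$$

which is $2\cos x$ times the denominator, pointing to option B.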
Let $(X,d)$ be a metric space with infinitely many elements. Suppose $X$ is disconnected. Then is it necessarily true that $X$ is not compact? I came up with this question when I was thinking about the Lebesgue number lemma. If $X$ is disconnected, then there exist disjoint open sets $U$ and $V$ in $X$ such that $X=U\cup V$. For every $x\in U$, there is some open ball $B(x,\epsilon_x)$ such that $x\in B(x,\epsilon_x)\subset U$, and similarly for $V$. Then $$\{B(x,\epsilon_x)\}_{x\in X}$$ is an open cover of $X$. Intuitively, as $x\in U$ gets close to the "boundary" of $U$, the ball $B(x,\epsilon_x)$ would become smaller and smaller while still lying in $U$, and one can imagine $\epsilon_x\to0$ as $d(x,V)\to0$. Then I don't think the open cover $\{B(x,\epsilon_x)\}_{x\in X}$ would admit a Lebesgue number, so that $X$ may not be compact. On the other hand, if $X$ is connected, then any open cover $\{U_\lambda\}_{\lambda\in\Lambda}$ must "overlap", so that if a ball $B(x,\epsilon_x)$ in some $U_\lambda$ becomes small enough, it might fall into another $U_{\lambda'}$. So my questions are: Can anyone prove the above conjecture? I'm also interested to know some examples of disconnected metric spaces other than those with the discrete metric. Can someone provide such examples?
Study Radiofrequency Tissue Ablation Using Simulation Radiofrequency tissue ablation is a medical procedure that uses targeted heat for a variety of medical purposes, including killing cancerous cells, shrinking collagen, and alleviating pain. The process involves applying mid- to high-frequency alternating current directly to the tissue, raising the temperature in a focused region near the applicator. We can simulate this process with COMSOL Multiphysics and the AC/DC and Heat Transfer modules. In today’s blog post, we will go over some key concepts for modeling this procedure. What Is Radiofrequency Tissue Ablation? Whenever an alternating electric current (or a direct current, for that matter) is applied to living tissue, there will be heat generation and temperature rise due to Joule heating. The ability to target this heat to specific localized tissue areas is a key advantage of the radiofrequency tissue ablation technique. In one of many medical applications, a cancerous tumor is a localized target. Using heat, the temperature of the area is raised to kill the cancer cells. Alternating current is used (rather than direct) to avoid stimulating nerve cells and causing pain. When alternating current is used, and the frequency is high enough, the nerve cells are not directly stimulated. To understand how we can model this process, let’s examine the figures below, which show some of the key concepts of this technique. A tumor within healthy tissue. Capillaries perfuse blood through the tissue and tumor. When an undesirable tissue mass is identified, such as a tumor, a doctor can use either a monopolar or bipolar applicator to inject current into and around the tumor. The current comes from a generator and varies sinusoidally in time. Frequencies of 300 to 500 kHz are common, although the procedure can use much lower frequencies. There are a wide variety of electrode configurations ranging from flat plates and single needles to a cluster of needles, depending on the desired shape of the heated domain and how the doctor will access the tissue. One common class of applicator is deployed through the circulatory system by using a long, flexible catheter and then extending a set of needles from the distal end into the tissue to be heated. A monopolar applicator is made up of a needle and patch applicator, whereas a bipolar applicator consists of two needle electrodes. More than two applicators and other applicator configurations are also possible. By convention, one electrode is called the ground, or reference, electrode. The voltage applied at the other electrode is with respect to this ground. A monopolar radiofrequency applicator and a patch electrode on the skin’s surface. A bipolar applicator primarily heats the region between the electrodes. An engineer designing one of these devices has a complicated problem to solve. The shape of the heated tissue depends on the shape and number of electrodes; which part is insulated and which is not; and ultimately, the thermal energy absorption distribution of the nearby tissue over time. The sharp, pointed ends of the needle electrodes complicate the design process, since they lead to high current densities and thus uneven temperature rise along the needle. For the cancerous tumor application, the goal is to kill the undesirable tissue mass and leave the surrounding healthy tissue unharmed. For shrinking collagen, the goal is still to heat tissue, but to avoid any possibility of damaging cells. 
COMSOL Multiphysics simulation streamlines and shortens this process. To properly model this procedure, we must build a model of the electric current flow through the tissue as well as the heat generation and temperature rise. Let's explore these steps. Analyzing Joule Heating and Current Flow We begin by examining the typical material properties of both the applicator and living tissue and discuss how these materials behave at an operating frequency of 500 kHz. The table below shows the representative electrical conductivity, \(\sigma\); relative permittivity, \(\epsilon_r\); skin depth, \(\delta\); and complex-valued conductivity, \((\sigma+j\omega \epsilon_0 \epsilon_r)\), at 500 kHz. Although there is variation in the electrical conductivity and relative permittivity of different tissues, for the purposes of this discussion we will approximate the human body as having the properties of a weak saline solution. The actual properties of tissue do not vary by much more than one order of magnitude from this value, while the conductivity of the electrode and insulator are over five orders of magnitude larger or smaller.

Material | Electrical Conductivity (S/m) | Relative Permittivity | Skin Depth at 500 kHz (m) | Complex Conductivity at 500 kHz (S/m)
Metal Electrode | 10^6 | 1 | ~10^-4 | 10^6 + j 4x10^-6
Polymer Insulator | 10^-12 | 2 | ~10^10 | 10^-12 + j 9x10^-5
"Average" Human Tissue | 0.5 | 65 | 1 | 0.5 + j 0.0003

We compute the skin depth to decide if we need to compute the magnetic fields and any heating due to induced currents. At 500 kHz, the electrical skin depth of the human body is on the order of one meter, while the heated regions have a typical size on the order of a centimeter. Hence, we can make the approximation that heating due to induced currents in the tissue is negligible and need not be calculated. Note that this approximation will not be valid if some small pieces of metal exist within the tissue, such as a stent within a nearby blood vessel. We can also see from the magnitude of the complex conductivity in the above table that the electrodes are essentially perfect conductors when compared to tissue. Similarly, the polymer insulators can be well approximated as perfect insulators when compared to human tissue. This information lets us choose the form of our governing equation. Under the assumption that magnetic fields and induction currents are negligible and operating at a constant frequency, we can solve the frequency-domain form of the electric currents equation. Further assuming that the human body itself does not generate any significant currents, the governing equation is: \[ \nabla \cdot \left( (\sigma+j\omega \epsilon_0 \epsilon_r) \nabla V \right) = 0 \] which solves for the voltage field, V, throughout the modeling domain. The electric field is computed from the gradient of the voltage: \(\mathbf{E} = -\nabla V\). The total current is \(\mathbf{J} = (\sigma+j\omega \epsilon_0 \epsilon_r) \mathbf{E}\) and the cycle-averaged Joule heating is \(Q = \frac{1}{2} \Re (\mathbf{J}^* \cdot \mathbf{E})\). Since the conductors are essentially perfectly conducting compared to the tissue, we can omit these domains from our electrical model. That is, we can assume that all surfaces of the metal electrodes are equipotential. This is reasonable if the equivalent free-space wavelength (\(\lambda = c_0/f = 600\,\mathrm{m}\)) is much larger than the model size. When using the AC/DC Module, we can use the Terminal boundary condition to fix the voltage on all surfaces of the electrode. The Terminal boundary condition can specify the applied voltage, total current, or total power fed into the boundaries.
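Before moving on, a quick sanity check on the sort of numbers shown in the material table (an illustrative back-of-the-envelope script in Python, not part of any COMSOL model): it evaluates the skin depth \(\delta = \sqrt{2/(\mu_0 \sigma \omega)}\) and the ratio of displacement to conduction currents for the tissue row.

import math

mu0 = 4e-7 * math.pi          # vacuum permeability (H/m)
eps0 = 8.854e-12              # vacuum permittivity (F/m)
omega = 2 * math.pi * 500e3   # angular frequency at 500 kHz

sigma, eps_r = 0.5, 65        # representative "average" human tissue
delta = math.sqrt(2 / (mu0 * sigma * omega))
print(delta)                          # ~1 m: induced-current heating is negligible
print(omega * eps0 * eps_r / sigma)   # ~4e-3: displacement currents are small too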
It is reasonable to ask why the conductor is omitted, for there is indeed some finite heat loss within the electrode itself. The heating within the electrode, however, is many orders of magnitude lower than in the surrounding tissue. Although the currents in the conductor can be quite high, the electric field (the variation of the voltage along the electrode) is quite small, hence the heating is negligible. Similarly, since the insulators are essentially perfect, these domains can also be eliminated from the electrical model. In the insulators, the electric fields may be quite high, but the current is essentially zero, which again means negligible heating. The Electric Insulation boundary condition, \(\mathbf{n} \cdot \mathbf{J} = 0\), can be applied on the boundaries of the insulators and implies that no current (neither conduction nor displacement currents) passes through these boundaries. There is one caveat to this: if the electrodes are completely enclosed within the insulators, then there will be significant displacement currents in the insulators and these domains should be included in the model. On the exposed surface of the skin, the Electric Insulation boundary condition is also appropriate. However, if there is an external electrode patch applied to the skin's surface, then current can pass through the skin to the electrode. The conductivity of skin is lower than that of the underlying tissue, and this should be modeled. However, we may not want to model the skin explicitly as a separate domain. In such cases, the Distributed Impedance boundary condition applies the condition \(\mathbf{n} \cdot \mathbf{J} = Z_s^{-1}(V-V_0)\), where \(V_0\) is the external electrode voltage and \(Z_s\) is the equivalent computed impedance of the skin. A schematic of such a model is shown below, with representative material properties and boundary conditions. Now that the electrical model is addressed, let's move on to the thermal model. A schematic of an electrical model of radiofrequency tissue ablation. Representative material properties are shown on the left. The modeling domain and governing equations are shown to the right. Computing Temperature Rise in Human Tissue The objective of the thermal model is quite straightforward: to compute the rise in tissue temperature over time due to the electrical heating and predict the size of the ablated region. The governing equation for the temperature, T, is the Pennes Bioheat equation: \[ \rho C_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + Q + \rho_b C_{p,b} \omega_b (T_b - T) + Q_{met} \] where \(\rho\) and \(C_p\) are the density and specific heat of the tissue, k is the tissue thermal conductivity, Q is the resistive heating computed above, and \(\rho_b\) and \(C_{p,b}\) are the density and specific heat of the blood perfusing through the tissue at a rate of \(\omega_b\). \(T_b\) is the arterial blood temperature and \(Q_{met}\) is the metabolic heat rate of the tissue itself. This equation is implemented within the Heat Transfer Module. If the last two terms are omitted, then the above equation reduces to the standard transient heat transfer equation. It is also necessary to specify boundary conditions on the exterior of the modeling domain. The most conservative condition would be the Thermal Insulation boundary condition, which implies that the body is perfectly insulated. This would lead to the fastest rise in temperature over time. A more physically realistic boundary condition would be the Convective Heat Flux condition: \[ -\mathbf{n} \cdot \mathbf{q} = h (T_{ext} - T) \] with a heat transfer coefficient of \(h = 5\)-\(10\ \mathrm{W/m^2K}\) and an external temperature of \(T_{ext}=20\)-\(25\ ^{\circ}\mathrm{C}\). This reasonably approximates the free convective cooling from uncovered skin to ambient conditions.
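To build intuition for what the bioheat equation predicts, here is a toy one-dimensional explicit finite-difference sketch in Python (purely illustrative, not a COMSOL model; all parameter values are made-up but physiologically plausible). The perfusion term pulls the tissue back toward the arterial blood temperature while the localized resistive heating term drives it up:

import numpy as np

# Toy 1D Pennes bioheat solver (explicit Euler); all values illustrative.
L, n = 0.05, 101                          # 5 cm of tissue, grid points
dx = L / (n - 1)
k = 0.5                                   # thermal conductivity (W/m/K)
rho, cp = 1050.0, 3600.0                  # tissue density (kg/m^3), specific heat (J/kg/K)
rho_b, cp_b, w_b = 1000.0, 4180.0, 0.004  # blood properties, perfusion rate (1/s)
T_b, Q_met = 37.0, 420.0                  # arterial temperature (C), metabolic heat (W/m^3)

T = np.full(n, 37.0)                      # start at body temperature
Q = np.zeros(n); Q[45:56] = 5e5           # localized resistive heating (W/m^3)

dt = 0.4 * dx**2 * rho * cp / k           # stable explicit time step
for step in range(int(60 / dt)):          # simulate one minute of heating
    lap = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    perfusion = rho_b * cp_b * w_b * (T_b - T[1:-1])
    T[1:-1] += dt * (k*lap + perfusion + Q_met + Q[1:-1]) / (rho * cp)
    T[0] = T[-1] = 37.0                   # far tissue held at body temperature

print(T.max())   # peak temperature: several degrees above 37 C after 60 s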
Along with the change in temperature, we also want to compute the tissue damage. The Heat Transfer Module offers two different methods for evaluating this: Time-at-temperature threshold analysis: If the tissue is heated above a specified damage temperature for a specified time (e.g., over 50°C for over 50 seconds) or if a peak temperature of necrosis is ever instantaneously exceeded (e.g., 100°C), then the tissue is considered irreversibly damaged. A tissue damage fraction is also computed based upon the damage temperature and time (e.g., over 50°C for 25 seconds would lead to 50% damage). Energy absorption analysis: Given a frequency factor and activation energy that are properties of the tissue being studied, the Arrhenius equation is used to compute the fraction of damaged tissue. Along with these predefined damage integrals, it is also possible to implement a user-defined equation for damage analysis via the equation-based modeling capabilities of COMSOL Multiphysics. Representative radiofrequency ablation results from a 2D axisymmetric model. Two insulated applicators are inserted into a tumor within the body to heat and kill the diseased tissue. The plotted results include the voltage field (top left), resistive heating (bottom left), and the temperature and size of the completely damaged tissue at two different times (right). Solving the Coupled Problem to Understand Radiofrequency Tissue Ablation We have now developed a model that is a combination of a frequency-domain electromagnetics problem and a transient thermal problem. COMSOL Multiphysics solves this coupled problem using a so-called frequency-transient study type. The frequency-domain problem is a linear stationary equation, since it is reasonable to assume that the electrical properties are linear with respect to electric field strength over one period of oscillation. Thus, COMSOL Multiphysics first solves for the voltage field using a stationary solver and then computes the resistive heating. This resistive heating term is then passed over to the transient thermal problem, which is solved with a time-dependent solver. This solver computes the change in temperature over time. The frequency-transient study type automatically accounts for material properties that change with temperature and the tissue damage fraction. If the temperature rises or tissue damage causes the material properties to change sufficiently to alter the magnitude of the resistive heating, then the electrical problem is automatically recomputed with updated material properties. This can also be described as a segregated approach to solving a multiphysics problem. In such thermal ablation processes, it is also common to vary the magnitude of the applied electrical heating to pulse the load on and off at known times. In such situations, the Explicit Events interface can be used, as described in our earlier blog post on modeling periodic heat loads. If you instead want to model the heat load changing as a function of the solution itself, then the Implicit Events interface can be used to implement feedback, as described in our earlier blog post on implementing a thermostatic controller. Explore More Resources for Radiofrequency Tissue Ablation Modeling If you are interested in studying radiofrequency tissue ablation, there are several other resources worth exploring. 
If your electrodes have sharp edges and you are concerned about localized heating near these edges, consider adding fillets to your model, since a sharp edge leads to a locally inaccurate result for the heating. Also keep in mind that, despite any locally inaccurate heating, the total global heating will nevertheless be quite accurate with a sharp edge. Thus, the filleting of sharp edges is not always necessary, since the temperature field a short distance away from the edge can still be quite accurate. If there are any relatively thin layers of materials that have relatively higher or lower electrical conductivity compared to their surroundings, consider using the Electric Shielding or Contact Impedance boundary conditions for the electrical problem. There are similar boundary conditions available for thin layers in thermal models as well. If you are interested in modeling at much higher frequencies, such as in the microwave regime, then you need to consider an electromagnetic wave propagating through the tissue. In such cases, look to the RF Module and the Conical Dielectric Probe for Skin Cancer Diagnosis example in the Application Gallery. At even higher frequencies, in the optical regime, a range of modeling approaches are possible, as described in our blog post on modeling laser-material interactions. The heat source for your problem need not even be electrical. High-intensity focused ultrasound is another ablation technique and can be modeled, as described in the Focused Ultrasound Induced Heating in Tissue Phantom tutorial in the Application Gallery. In closing, we have shown that COMSOL Multiphysics, in conjunction with the AC/DC Module and Heat Transfer Module, gives you the capability and flexibility to model radiofrequency tissue ablation procedures. If you are interested in using COMSOL Multiphysics for this type of modeling, or have any other questions, please contact us.
First, let's take a look at the convolution $\displaystyle C _ { i } = \sum _ { j \oplus k = i } A _ { j } * B _ { k }$, where $\oplus$ represents any boolean operation. We are able to evaluate $C$ in $O(n \log n)$ time using an algorithm called the Fast Walsh Transform, where $n$ is the length of the arrays (the indices $i$, $j$, $k$ range over all binary strings of a fixed width). However, the Wikipedia page says that the Walsh Transform is used to accelerate multiplication by an $n \times n$ matrix called the Walsh matrix, which I also found reasonable. My question is: what's the connection between the two evaluations? I know that a convolution is a linear transformation and can be represented as a vector multiplied by a matrix. But where's the matrix in the first convolution? I am confused. Does it mean that multiplication by certain special $n \times n$ matrices can be accelerated to $O(n \log n)$?
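One concrete way to see the connection, sketched in Python for the XOR case (my own illustration): the transform matrix here is exactly the Walsh-Hadamard matrix $H$ with entries $(-1)^{\langle j,k \rangle}$. The butterfly below multiplies a length-$n$ vector by $H$ in $O(n \log n)$ operations instead of the naive $O(n^2)$, and XOR convolution becomes pointwise multiplication in the transformed domain (the inverse transform is $H/n$, since $H^2 = nI$):

def fwht(a):
    # Fast Walsh-Hadamard transform: multiplies the vector by the
    # n x n Walsh matrix in O(n log n) instead of O(n^2).
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def xor_convolution(A, B):
    # C_i = sum over (j xor k = i) of A_j * B_k, via the transform.
    n = len(A)                                # n must be a power of two
    fc = [x * y for x, y in zip(fwht(A), fwht(B))]
    return [x // n for x in fwht(fc)]         # apply H again, divide by n

# Brute-force check on a small example:
A, B = [1, 2, 3, 4], [5, 6, 7, 8]
C = [0] * 4
for j in range(4):
    for k in range(4):
        C[j ^ k] += A[j] * B[k]
print(C == xor_convolution(A, B))             # True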
What is the easiest way to calculate $$ \int_{0}^{1} \frac{ \log (1+x)}{x}\, dx$$ ? Need a hint. $$I=\int_{0}^{1}\frac{\log(1+x)}{x}\,dx = \int_{0}^{1}\sum_{n\geq 1}\frac{(-1)^{n+1}x^{n-1}}{n}\,dx=\sum_{n\geq 1}\frac{(-1)^{n+1}}{n^2}=\frac{1}{2}\zeta(2)=\color{red}{\frac{\pi^2}{12}}. $$ We have an indefinite integral $$ \int\frac{\ln(1+x)}{x } dx=-\operatorname{Li}_2(-x). $$ Therefore $$ \int_ 0^1 \frac{\ln(1+x)}{x } dx=-\operatorname{Li}_2(-1) = \frac 1 2 \zeta(2)= \frac{\pi^2}{12}. $$ Of course this is overkill for this integral, but this is the method of choice if the upper limit is $1/2$ or $\phi$.
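A quick numerical sanity check (an illustrative snippet, independent of the series manipulation above; the midpoint rule sidesteps the removable singularity at $x = 0$):

import math

N = 10**6   # midpoint rule with N subintervals on (0, 1)
s = sum(math.log1p((i + 0.5) / N) / ((i + 0.5) / N) for i in range(N)) / N
print(s, math.pi**2 / 12)   # both ~0.8224670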
If I have a Dirichlet distribution with parameter $\alpha \in \mathbb{R}^k = [\alpha_1, \alpha_2, \cdots, \alpha_k]$, and I set the component $\alpha_k$ to $\epsilon$, then as I decrease $\epsilon$, will this pdf approximate the pdf of a Dirichlet distribution with parameters $[\alpha_1, \alpha_2, \cdots, \alpha_{k-1}]$? That is, ignoring the last dimension of the samples from $\text{Dir}([\alpha_1, \alpha_2, \cdots, \alpha_{k-1}, \epsilon])$, will they follow approximately the same distribution as samples from $\text{Dir}([\alpha_1, \alpha_2, \cdots, \alpha_{k-1}])$? My progress so far: as a sanity check I started with simulations using Numpy. Sampling from $\text{Dir}([\alpha_1, \alpha_2, \cdots, \alpha_{k-1}, 10^{-10}])$ and $\text{Dir}([\alpha_1, \alpha_2, \cdots, \alpha_{k-1}])$ indeed gives very similar results. Then I tried substituting the new $[\alpha_1, \alpha_2, \cdots, \alpha_{k-1}, \epsilon]$ vector into the expression for the pdf of the Dirichlet distribution. The resulting expression looks very similar to the pdf of $\text{Dir}([\alpha_1, \alpha_2, \cdots, \alpha_{k-1}])$, but it is multiplied by $\frac{1}{x_k\cdot\Gamma(\epsilon)}$. As $\epsilon$ decreases, the whole expression tends to zero. I am not really sure how to proceed from here.
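For reference, a minimal version of the numerical sanity check described above (my own sketch; the $\alpha$ values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
alpha = [2.0, 3.0, 4.0]

a = rng.dirichlet(alpha + [1e-10], size=100_000)[:, :-1]  # drop last coordinate
b = rng.dirichlet(alpha, size=100_000)

# If the conjecture holds, the componentwise moments should agree closely:
print(a.mean(axis=0), b.mean(axis=0))
print(a.var(axis=0), b.var(axis=0))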
LHC is colliding lead ions whose lab energy is \(82\times 6.369\TeV\), a new record! On Thursday, November 25th, 1915, exactly 100 years ago, Einstein presented the final form of his equations (defining the general theory of relativity) to the Royal Prussian Academy of Sciences in the afternoon. The building sits at the famous Unter den Linden 8 avenue (the big street that leads to the Brandenburg Gate from the East), see Google Maps; it was designed according to the military tastes of Emperor William II and inaugurated just a year earlier, in 1914. These days, the Academy's onetime building is used as one of the three homes of the (Berlin) State (formerly Imperial) Library or "Staatsbibliothek" ("Stabi", as Germans call it). Einstein had prepared the final form of the equations for that talk and had to work hard, relative to the standards of this "lazy dog": "One thing is for sure, that I've never been so plagued in my life," wrote Einstein at the time. "Smoking like a chimney, working like a steed, eating without thought, sleeping irregularly." So much whining about some work that may be basically reduced to writing \(S=\int R\). ;-) His wife Elsa remembered that he was absent-minded in the last two weeks or so and sometimes played the piano mindlessly or stared blankly into space as if he were Witten. Einstein was exhausted and stinking of cigarettes during the talk (strangely, he only allowed himself to be photographed with tobacco pipes, which "contributed to his calm and objective judgment", he stressed; Albert remembered that, to beat his doctor, his grandfather smoked cigarette butts from the street), but he gave us his general relativity. The content of the papers was more or less ready but they only appeared in 1916. The Institute for Advanced Study has organized an event, GR at 100. The IAS director and my onetime co-author (and an independent co-father of our matrix string theory) Robbert Dijkgraaf previously gave this October 2015 talk, which was the only one whose video I could find two weeks ago. But now, the IAS YouTube channel offers you 10 videos from the gathering. Search for "GR @ 100" on that page. A talk by Andy Strominger about his very recent findings is there, too. The Dijkgraaf video above mentions the Baby Einstein, Newton, the Solvay conferences, some classical thought experiments, principles, and effects we associate with the general theory of relativity, as well as some of the newest results such as the holographic principle at the end. Robbert's presentation is poetic, so some wise quotes by Einstein are included. If we ignore the "partially" independent efforts by David Hilbert etc., general relativity was the result of a pure scientific enterprise by a solitary genius. When he discovered special relativity (and other things) during his miraculous year of 1905, Einstein wasn't satisfied. Newton's laws of gravity were incompatible with special relativity (for example because Newton's gravity was thought to act immediately, i.e. faster than light; but more mathematically because such an instantaneous action simply violates the Lorentz symmetry) and after a decade of struggles, he saw the light at the end of the tunnel (his own words) and completed the new picture of gravity in terms of a curved spacetime. Needless to say, from the beginning he was trying to describe gravity e.g. in terms of a scalar (modern jargon: Klein-Gordon) field that was meant to "be" the gravitational potential. Similar theories never quite work.
After all, we know that the gravitational potential is pretty much a component of a symmetric tensor, and replacing it with a scalar is similar to attempts to reduce the stress-energy tensor to a scalar. Sometime around 1911 or 1912, at the German University in Prague, Einstein finally appreciated the importance of the equivalence principle. The physical phenomena in a freely falling frame are locally indistinguishable from the physical phenomena in the absence of all gravitational fields, because all objects accelerate with the same acceleration in a given gravitational field (equivalently, it's because the inertial mass and the gravitational mass are equal, or proportional to each other with a universal conversion factor). That's great because there has to exist a non-linear transformation of coordinates that locally makes all the effects of the gravitational field disappear. A three-minute cartoon on Einstein and GR. Einstein realized that this principle was very constraining (capable of instantly killing an overwhelming fraction of his candidate theories) and it also directed him towards theories that are invariant under arbitrary, general coordinate transformations (or "diffeomorphisms"). The laws of physics in special relativity are only invariant under the Lorentz or Poincaré transformations, the "special" changes of the observer, which is why the original relativity was renamed "special relativity". Similarly, one needs the more general, nonlinear coordinate transformations and the equivalence of all observers, including the accelerating ones, which is why the "new" relativity became known as the "general theory of relativity". Once the equivalence principle became a core of Einstein's thinking, the path towards the final equations was wide open. Einstein had to learn Riemannian geometry (which had been available in the mathematics books since the 19th century) and go through almost all the possible technical mistakes you can think of. For example, before he found the final form of the equations, he had equations where the Einstein tensor was replaced by the "simpler" Ricci tensor. Einstein was no Einstein, either. Well, he was one but not in the normal sense. ;-) The Ricci-not-Einstein equations had to be wrong because the stress-energy tensor is conserved (its divergence equals zero), but if one imposed the same conservation on the Ricci tensor (which was assumed to be equal to it), it would imply that the Ricci scalar had to be constant (probably zero), which would also mean that the stress-energy tensor would have to be traceless (which it usually isn't), etc. OK, it's some technical detail, you may say. You instantly get the Einstein tensor in the equations if you derive them as the variation of the Einstein-Hilbert action. His confirmation of the previously observed precession of Mercury's perihelion and the successful prediction of the bending of light during the 1919 solar eclipse were the early experimental tests that distinguished general relativity from Newton's theory, and the theory was quickly accepted. Just to be sure, Einstein didn't need experiments to be sure that Newton's theory of gravity had to lose in the experimental match: it was incompatible with the empirically verified principle of relativity (as implemented in special relativity). Karl Schwarzschild found the first black hole solution of the theory already in 1916. But the world needed to wait for more than a decade to appreciate general relativity's implications for cosmology.
The Universe can't be static, we finally learned. Instead, it's expanding ("shrinking" was a priori allowed as well, but wrong) and the rate of expansion has to change with time. The big bang theory more or less follows from general relativity. In the 1960s, people found new experiments that verified the gravitational red shift and other predictions. In the same decade, people began to actually trust the existence of black holes (John Wheeler coined the new, better name for what had been known as "frozen stars" in that decade). In the 1970s, Bekenstein and Hawking started the research into the thermodynamic and quantum properties of black holes, and string theory was identified as a consistent quantum theory of gravity. The full realistic versions of string theory only began to be studied in the 1980s, but that's already a little bit different history. These days, some of the motivations and "modes of thinking" that Einstein exploited to converge to his right equations seem totally obsolete. We know that the cosmological constant term that Einstein originally wanted to "erase" as an "ugly modification" is very natural and should be expected to be present: \[ R_{\mu \nu} - {1 \over 2}R \, g_{\mu \nu} + \Lambda g_{\mu \nu}= {8 \pi G \over c^4} T_{\mu \nu} \] It is indeed nonzero, as observed in the late 1990s, which is why the expansion of the Universe keeps on accelerating. In fact, we are stunned that its numerical magnitude is so tiny. While the anthropic explanation is plausible, it may be wrong, in which case we need a new perspective that is "in between" the modern view, which implies that the cosmological constant term should be large, and Einstein's old view that it's ugly and should be erased altogether. Equally importantly, we understand Einstein's equations just as the equations of an "effective field theory". There may be corrections such as terms of the form \(\ell^2 R_{\mu\lambda}R_\nu{}^\lambda\) on the left hand side (the higher-derivative terms) and the only reason why they may be ignored in practice is that the coefficient \(\ell^2\) is dimensionful and small (the units of squared length) and its effect on physical predictions becomes increasingly negligible at increasingly long length scales, i.e. at \(L\gg \ell\). The terms in Einstein's equations are the leading terms. The cosmological constant is "even more leading" than the Einstein tensor, but because the cosmological constant (the coefficient of the term) is so tiny, it only matters at cosmological scales. We know that the correct theory doesn't have to be constructed out of the fundamental metric tensor field at all. The effective field theory may look like Einstein's equations even if the exact theory that is valid at shorter (and all) length scales is a qualitatively different beast. Also, we have kind of returned to the historical status of general relativity, which is a "theory of gravity compatible with [special] relativity". While general relativity may be viewed as the "more general theory" whose limit in the cases when gravity is negligible is special relativity, we may also agree with many particle physicists and view general relativity as a particular theory of spin-two tensor fields that are added on top of a special relativistic Minkowski spacetime, with the right interactions that guarantee the diffeomorphism (local) symmetry and therefore the equivalence principle (and the decoupling of the unphysical excitations of the spin-two field).
General relativity or something with the same local symmetry is the only effective theory that locally preserves the Lorentz invariance and contains spin-two excitations. Spin-two excitations (gravitational waves and, in the quantum theory, gravitons) are needed if the stress-energy tensor is supposed to interact with itself at a distance. The equations of general relativity are considered "beautiful" although these days, we may describe their advantages and uniqueness in terms of less emotional and more "provable" words. The beauty really comes from the uniqueness of interactions of spin-two fields that are compatible with special relativity. Thanks for those great contributions, Einstein.

General relativity has led to lots of solutions and mysteries, wormholes and time machines. Most of those are probably forbidden by the "full" list of laws of physics although they are "possible" as mere geometries. In particular, it seems likely that time machines can never arise and only non-traversable wormholes are allowed, thanks to the energy conditions (and these non-traversable wormholes may also be described via quantum entanglement in quantum gravity). Some theorems in GR have been proven – e.g. the singularity theorems started by Hawking and Penrose that imply that the birth of a black hole is inevitable under some conditions. Also, Penrose promoted his Cosmic Censorship Conjecture. I believe it's right to say that this conjecture has been largely debunked, especially in higher-dimensional spacetimes where full-fledged counterexamples seem to exist. But even in the limited realm of \(D=4\) dynamics, the precise wording has to be weakened a lot (generic situations have to be required etc.) for it to be defensible, and even this weakened statement seems unjustified. Penrose's conjecture de facto claims that naked singularities can never arise because classical GR would be desperate – it wouldn't know what to predict about the matter coming from the naked singularity because the answers depend on the extreme physical effects near the singularity, i.e. depend on quantum gravity. But there's really nothing wrong with the dependence of something on quantum gravity! The full dynamics is ultimately described by a theory of quantum gravity and if the classical limit is insufficient to figure out what happens, even approximately, then it's insufficient. But that's no contradiction in any sense.

P.S.: Another thing related to general relativity. Richard Muller keeps on flooding Quora with totally wrong answers to all questions about general relativity, just like previously. This time, he claimed that tachyons – viewed as particles with spacelike trajectories – can't escape the black hole interior if photons can't. But of course, tachyons can, just as the authors of the other answers say. The causality restriction is the "only" barrier that prevents one from leaving the interior. Muller is totally ignorant about the difference between physical and coordinate singularities and deduces his totally wrong answers from his usage of coordinates that are singular at the event horizon. His totally wrong answer was upvoted 134 times, probably by future U.S. presidential candidates.
We may model the problem as a roulette problem, where we have $N$ balls inside a box, one of which is special. The simplest model is the geometric model --- we put the non-special ball back and retry the whole process. The number of draws follows a geometric distribution with expectation $\frac{1}{p}$. But this is far too slow, and it has a large variance when $p$ is small. The second simplest one is the Russian roulette model --- we do not put the non-special ball that we drew back into the box. In that case, for each $k = 1,\ldots,N$, $P(X=k) = \frac{1}{N}$, so $E(X) = \sum \frac{k}{N} = \frac{N+1}{2}$ and $V(X) = \frac{N^2-1}{12}$. The problem with these methods is that they have a very long tail and a large variance ($O(n^2)$; for the geometric model it is $\frac{1-p}{p^2}\approx \frac{1}{p^2}$, which is of the same order when $n = \frac{1}{p}$ is large).

So instead of simply removing the balls we have taken out, we take a further measure: we add another special ball to the pool every time we fail to draw the special ball. It is clear that this is much faster, because the success probability grows approximately linearly from the beginning, which lets us capture the special ball quickly rather than only compensating the odds at the later stages. Consider the case where we replace a non-special ball by a special ball every time; then the chance is given by $$P(X_n = k) = \frac{k\, n!}{n^{k+1}(n-k)!}$$ (the factor $k$ comes from the $k$ special balls present at the $k$-th draw). It is pretty amazing that the expectation of this random variable is asymptotically proportional to $\sqrt{n}$: if you do a linear regression of $E(X_n)^2$ against $n$ you will get an R-squared value of over 0.999998! It also has a linear variance, which guarantees that the number of draws falls in a very narrow range. For example, this is the graph for $n = 81$ (figure omitted).

For the other variant, where the special ball is added to the pool without removing a non-special one, the corresponding distribution is $$P(Y_n = k) = \frac{k\,(n-1)^k (n-2)!}{(n+k-1)!}$$ One may observe that the two variants behave almost the same, especially when $k$ is small; therefore the following questions come naturally: 1) Is there any closed formula for $E(X)$? As it involves factorial and exponential summation this does not look obvious. 2) Is $\lim _{n\to \infty} \frac{E(X_n)^2}{n} = \lim _{n\to \infty} \frac{E(Y_n)^2}{n}$? If yes, we can ask a stronger question: does $\lim _{n\to \infty} \frac{E(X_n)^2-E(Y_n)^2}{\sqrt{n}}$ converge? Empirically both $\frac{E(X_n)^2}{n}$ and $\frac{E(Y_n)^2}{n}$ hover around 1.57 (numerically close to $\pi/2 \approx 1.5708$) and $\frac{E(X_n)^2-E(Y_n)^2}{\sqrt{n}}$ 'looks' like it converges to 1.66. Of course, in a practical sense the first one ($X_n$) is better because its range is guaranteed to be finite and it has a smaller variance, but it is the nearly-exact asymptotic behavior that attracted me to compute such bits.
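As a quick empirical sanity check, here is a small Monte Carlo simulation of the replacement model (my own sketch; the function name and trial count are arbitrary). It also compares the sample mean against $\sqrt{\pi n/2}$, the growth rate one expects from birthday-problem-type calculations, which matches the observed $E(X_n)^2/n \approx 1.57$:

```python
import math
import random

def sample_X(n: int) -> int:
    """One run of the replacement model: draw until a special ball appears;
    after every miss, a non-special ball is swapped for a special one."""
    specials = 1
    for k in range(1, n + 1):
        if random.random() < specials / n:
            return k
        specials += 1
    return n  # at k = n every ball is special, so we never actually get here

n, trials = 81, 200_000
mean = sum(sample_X(n) for _ in range(trials)) / trials
print(f"empirical E(X_{n}) ~ {mean:.3f}")
print(f"sqrt(pi*n/2)       = {math.sqrt(math.pi * n / 2):.3f}")
print(f"mean^2 / n         ~ {mean ** 2 / n:.3f}")  # hovers near pi/2 ~ 1.5708
```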
Answer

The remaining angles and sides are $$\angle A=56^\circ\hspace{.75cm}BC\approx307.382ft\hspace{.75cm}AB\approx361.146ft$$

Work Step by Step

$$\angle B=20^\circ50'\approx20.833^\circ\hspace{.75cm}\angle C=103^\circ10'\approx103.167^\circ\hspace{.75cm}AC=132ft$$ Here $AC$ is the side $b$, so $b=132ft$.

1) Analysis:

- Angles $\angle B$ and $\angle C$ are known. We can always calculate $\angle A$, since the sum of the 3 angles in a triangle equals $180^\circ$.

- Side $b$ and its opposite angle $\angle B$ are known, and $\angle A$ and $\angle C$ are also known, so the law of sines can be applied to find the sides $a$ and $c$.

2) Calculate the unknown angle $\angle A$

We know that the sum of the 3 angles in a triangle is $180^\circ$. $$\angle A+\angle B+\angle C=180^\circ$$ $$\angle A+20^\circ50'+103^\circ10'=180^\circ$$ $$\angle A+124^\circ=180^\circ$$ $$\angle A=180^\circ-124^\circ=56^\circ$$

3) Calculate the unknown sides $a$ and $c$

a) For $a$: We know the opposite angle of $a$: $\angle A=56^\circ$, so $\sin A=\sin56^\circ\approx0.829$. We also know side $b=132ft$ and its opposite angle $\angle B=20.833^\circ$, $\sin B\approx0.356$. Therefore, using the law of sines: $$\frac{a}{\sin A}=\frac{b}{\sin B}$$ $$a=\frac{b\sin A}{\sin B}$$ $$a=\frac{132ft\times0.829}{0.356}$$ $$a\approx307.382ft$$ So, $BC\approx307.382ft$.

b) For $c$: We know the opposite angle of $c$: $\angle C=103.167^\circ$, so $\sin C=\sin103.167^\circ\approx0.974$. We also know side $b=132ft$ and its opposite angle $\angle B=20.833^\circ$, $\sin B\approx0.356$. Therefore, using the law of sines: $$\frac{c}{\sin C}=\frac{b}{\sin B}$$ $$c=\frac{b\sin C}{\sin B}$$ $$c=\frac{132ft\times0.974}{0.356}$$ $$c\approx361.146ft$$ So, $AB\approx361.146ft$.

4) Conclusion: The remaining angles and sides are $$\angle A=56^\circ\hspace{.75cm}BC\approx307.382ft\hspace{.75cm}AB\approx361.146ft$$
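For readers who want to reproduce the numbers, here is a minimal check (my own sketch, not part of the textbook solution). It uses exact sines, so the results differ slightly from the worked values above, which round the sines to three decimals:

```python
import math

B = 20 + 50 / 60              # angle B = 20 deg 50'
C = 103 + 10 / 60             # angle C = 103 deg 10'
A = 180 - B - C               # angle A = 56 deg
b = 132.0                     # side AC in feet

a = b * math.sin(math.radians(A)) / math.sin(math.radians(B))   # law of sines
c = b * math.sin(math.radians(C)) / math.sin(math.radians(B))
print(f"A = {A:.0f} deg, BC = a = {a:.3f} ft, AB = c = {c:.3f} ft")
# A = 56 deg, a ~ 307.7 ft, c ~ 361.5 ft
```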
Exercise \(\PageIndex{1}\): Set Operations

Let \(A = \{1, 5, 31, 56, 101\}\), \(B = \{22, 56, 5, 103, 87\}\), \(C = \{41, 13, 7, 101, 48\}\), and \(D = \{1, 3, 5, 7, \ldots\}\). Give the sets resulting from:

1. \(A \cap B\)
2. \(C \cup A\)
3. \(C \cap D\)
4. \((A \cup B) \cup (C \cup D)\)

Answer: 1. \(A \cap B =\{ 5, 56 \}\) 2. \(C \cup A=\{1, 5, 7, 13, 31, 41, 48, 56, 101\}\) 3. \(C \cap D = \{ 7, 13, 41, 101\}\) 4. \((A \cup B) \cup (C \cup D) = D \cup \{22, 48, 56\}\), i.e., all odd positive integers together with \(22\), \(48\), and \(56\)

Exercise \(\PageIndex{2}\): True or False

1. \(7 \in \{6, 7, 8, 9\}\)
2. \(5 \notin \{6, 7, 8, 9\}\)
3. \(\{2\} \nsubseteq \{1, 2\}\)
4. \(\emptyset \nsubseteq \{\alpha, \beta, x\}\)
5. \(\emptyset = \{\emptyset\}\)

Answer: \( T, T, F, F, F\)

Exercise \(\PageIndex{3}\): Subsets

List all the subsets of:

1. \(\{1, 2, 3\}\)
2. \(\{\phi, \lambda, \Delta, \mu\}\)
3. \(\{\emptyset\}\)

Answer: 1. \(\{\{1, 2, 3\}, \{1, 2\}, \{1, 3\}, \{ 2, 3\}, \{1\}, \{2\}, \{ 3\}, \emptyset \}\) 3. \(\{\{\emptyset\},\emptyset \}\)

Exercise \(\PageIndex{4}\): Venn Diagram

A survey of 100 university students found the following data on their food preferences: 54 preferred Italian cuisine; 29 preferred Asian-style cooking; 16 preferred both Italian and Asian-style foods; 19 preferred both Asian-style and Indian dishes; 10 preferred both Italian and Indian cuisines; 5 liked them all; 11 did not like any of the options. How many students preferred:

1. Only Indian food?
2. Only Italian food?
3. Only one food?

Exercise \(\PageIndex{5}\): Symbols

Assume that the universal set is the set of all integers. Let \(A=\{-7,-5,-3,-1,1,3,5,7\}\), \(B =\{ x \in {\bf Z}\mid x^2 <9 \}\), \(C= \{2,3,4,5,6\}\), and \(D=\{x \in {\bf Z}\mid x \leq 9 \}\). In each of the following, fill in the blank with the most appropriate symbol from \(\in, \notin, \subset, =,\neq,\subseteq\), so that the resulting statement is true.

1. A-----D
2. 3-----B
3. 9-----D
4. {2}-----\(C^c\)
5. \(\emptyset\)-----D
6. A-----C
7. B-----C
8. C-----D
9. 0-----\(A \cap D\)
10. 0-----\(A \cup D\)

Exercise \(\PageIndex{6}\): Prove or disprove

Given subsets \(A,B,C\) of a universal set \(U\), prove the statements that are true and give counterexamples to disprove those that are false.

1. \( A-(B \cap C)=(A-B) \cup(A-C).\)
2. If \( A \cap B= A \cap C\) then \(B= C\).
3. If \( A \cup B= A \cup C\) then \(B= C\).
4. \( A-(B - C)=(A-B)-C.\)
5. If \(A \times B \subseteq C \times D\) then \(A\subseteq C\) and \( B \subseteq D.\)
6. If \(A\subseteq C\) and \( B \subseteq D\) then \(A \times B \subseteq C \times D.\)

Exercise \(\PageIndex{7}\): Set operations

Let \(A = \{ r,e,a,s,o,n,i,g\}\), \(B = \{m,a,t,h,e,t,i,c,l\}\), and \(C\) = the set of vowels. Calculate:

1. \(A \cup B \cup C.\)
2. \(A \cap B.\)
3. \({C}^c\).

Exercise \(\PageIndex{8}\): Prove or disprove

Given subsets \(A,B,C\) of a universal set \(U\), prove the statements that are true and give counterexamples to disprove those that are false.

1. \(P(A \cup B) = P(A) \cup P(B).\)
2. \(P(A \cap B) = P(A) \cap P(B).\)
3. \(P(A^c)=(P(A))^c\)
4. \(P(A - B) = P(A) - P(B).\)

Exercise \(\PageIndex{9}\): Equal Sets

Consider the following sets: \(A=\{x \in {\bf Z}\mid x= 2m, m \in {\bf Z}\}\) and \(B=\{x \in {\bf Z}\mid x= 2(n-1), n \in {\bf Z}\}\). Are \(A\) and \(B\) equal? Justify your answer.

Exercise \(\PageIndex{10}\): Product of Sets

Let \(A=\{1,3,5\}\) and \(B =\{ a,b \}\). Find \( A \times B\) and \(B \times A\). Are \(A \times B\) and \(B \times A\) equal? Justify your answer.
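A quick way to check Exercise \(\PageIndex{1}\) is with Python's built-in sets (a sketch of mine; since \(D\) is infinite, it is truncated to the odd numbers below 200, which is enough for these finite intersections):

```python
A = {1, 5, 31, 56, 101}
B = {22, 56, 5, 103, 87}
C = {41, 13, 7, 101, 48}
D = set(range(1, 200, 2))        # truncation of the odd numbers {1, 3, 5, ...}

print(sorted(A & B))             # [5, 56]
print(sorted(C | A))             # [1, 5, 7, 13, 31, 41, 48, 56, 101]
print(sorted(C & D))             # [7, 13, 41, 101]

everything = (A | B) | (C | D)   # part 4: the union of all four sets
print(sorted(everything - D))    # the non-odd leftovers: [22, 48, 56]
```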
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Cpx}{\mathbf{C}}\newcommand{\Proj}{\mathbf{P}}$If you're seeking intuition about how a line bundle can be non-trivial, the simplest example is the Möbius strip viewed as the total space of a real line bundle over a circle. In detail, view $\Reals^{2} \to \Reals$ as the trivial real line bundle via projection to the first factor. Divide $\Reals^{2}$ by the glide reflection $\gamma(x, v) = (x + 1, -v)$, and view the result, by projection to the first factor, as a line bundle over the circle $\Reals/\mathbf{Z}$. A fundamental domain for the action is the strip $[0, 1] \times \Reals$, and $(0, v) \sim (1, -v)$; that is, the edges of the strip are glued with a half-twist. It's easy (intermediate value theorem) to show this line bundle has no continuous, non-vanishing section, so the bundle is non-trivial. Your intuition about sections of Hermitian line bundles being local complex-valued functions is correct. The simplest Hermitian line bundles (over complex curves) are harder to visualize than real line bundles because their total spaces are complex surfaces, hence real four-manifolds. Take $M = \Cpx\Proj^{1}$, the complex projective line (real two-sphere). Omitting many details: In the trivial complex line bundle, the set of unit vectors is (diffeomorphic to) $S^{2} \times S^{1}$. In the tangent bundle $TM = TS^{2}$, the set of unit vectors is (diffeomorphic to) the special orthogonal group $SO(3)$: A point $p$ of $S^{2}$ is a unit vector in $\Reals^{3}$, a unit tangent vector $v$ at $p$ is an element of $\Reals^{3}$ orthogonal to $p$, and the ordered triple $(p, v, p \times v)$ may be viewed as an element of $SO(3)$. This association is clearly bijective (because reading the columns from an element of $SO(3)$ recovers a point of the sphere and a unit tangent vector, and these uniquely determine the third column) and smooth (write either map in coordinates coming from the entries of $3 \times 3$ real matrices). A possibly less familiar example is the complement $L$ of a point $q$ in the complex projective plane $\Cpx\Proj^{2}$, which is the total space of a non-trivial Hermitian line bundle in which the set of unit vectors is (diffeomorphic to) $S^{3}$. (Pick a complex projective line $M$ not containing $q$. For each point $x$ of $L$, let $\ell$ be the unique line through $q$ and $x$, and let $p = M \cap \ell$. The mapping $x \mapsto p$ is a line bundle projection. The bundle of unit vectors is (essentially) a sphere in $\Cpx\Proj^{2}$ centered at $q$.) The preceding examples are distinct because their sets of unit vectors are mutually non-diffeomorphic. There are infinitely many Hermitian line bundles over the projective line, classified by an integer, the Chern number. It turns out the preceding examples have Chern number $0$, $2$, and $1$ respectively.
In natural numbers the unary successor operator $S$ is the most natural function, which maps each number to the next one. Furthermore, we may consider the binary operation $+$ as an iteration of $S$. Also $\times$ is an iteration of $+$ and $\exp$ is an iteration of $\times$, i.e. $\forall m,n\geq 1$ $m+n:=\underbrace{S(S(S(\cdots S}_{n\ \text{times}}(m))))$ $m\times n:=\underbrace{m+m+m+\cdots+m}_{n\ \text{times}}$ $m^n:=\underbrace{m\times m\times m\times \cdots\times m}_{n\ \text{times}}$ We build the rich arithmetic of natural numbers via the above three natural operators. Also they show many complicated mutual relations with each other. But why do we stop here in the arithmetic of natural numbers and not go forward, continuing to iterate operators again and again? I.e. $m*n:=\underbrace{m^{m^{m^{.^{.^{.^{m}}}}}}}_{n\ \text{times}}$ $m\circledast n:=\underbrace{m*m*m*\cdots*m}_{n\ \text{times}}$ $m\circledcirc n:=\underbrace{m\circledast m\circledast m\circledast \cdots\circledast m}_{n\ \text{times}}$ $\vdots$ The point is that maybe there are rich interactions between these new natural operators and the ordinary arithmetic operators of natural numbers. These interactions may unfold some deep aspects of long-standing open questions of number theory and hopefully lead us toward solutions. Question: Why do we stop at the exponentiation stage in the arithmetic of natural numbers? Is there any mathematical or philosophical problem with defining such generalized operators and working with them alongside successor, sum, multiplication and exponentiation? Are these "unnatural" in any sense? If yes, what does this "unnatural" essence mean? Has this extended set of operators on natural numbers appeared in any text before? If yes, please give references.
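The iterated operators are easy to play with computationally. Below is a small sketch of mine, with the usual convention that the towers associate from the right (as in Knuth's up-arrow notation): level 0 is the successor, levels 1-3 are $+$, $\times$, $\exp$, level 4 is the $*$ (tetration) above, level 5 is $\circledast$, and so on:

```python
def hyper(level: int, m: int, n: int) -> int:
    """Hyperoperation hierarchy: level 0 = successor, 1 = addition,
    2 = multiplication, 3 = exponentiation, 4 = tetration, 5 = pentation, ..."""
    if level == 0:
        return n + 1
    if level == 1:
        return m + n
    if level == 2:
        return m * n
    if level == 3:
        return m ** n
    result = m                      # iterate the previous level, right to left
    for _ in range(n - 1):
        result = hyper(level - 1, m, result)
    return result

print(hyper(4, 2, 3))   # 2 * 3  = 2^(2^2)          = 16
print(hyper(4, 3, 2))   # 3 * 2  = 3^3              = 27
print(hyper(5, 2, 3))   # 2 (*) 3 = 2 * (2 * 2) = 2^(2^(2^2)) = 65536
```

The values explode almost immediately (already \(3 \circledast 3\) is astronomically large), which is one practical reason these operators are rarely used beyond exponentiation.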
I know that different authors use different notation to represent programming language semantics. As a matter of fact, Guy Steele addresses this problem in an interesting video. I'd like to know if anyone knows whether the leading turnstile operator has a well-recognized meaning. For example, I don't understand the leading $\vdash$ operator at the beginning of the denominator (the conclusion) of the following rule: $$\frac{x:T_1 \vdash t_2:T_2}{\vdash \lambda x:T_1 . t_2 ~:~ T_1 \to T_2}$$ Can someone help me understand? Thanks.
I think I know where the problem is, I just don't know why it's wrong: We start with $$\lim_{x\to \infty }\left(\frac{x}{x\ln (x)}\right).$$ Normally we would just cancel out the $x$'s in the fraction to get $$\lim_{x\to \infty }\left(\frac{1}{\ln (x)}\right).$$ But, I read somewhere that $$\lim_{x\to a}\left({f(x)g(x)}\right) = \lim_{x\to a }\left({f(x)}\right)\times\lim_{x\to a }\left({g(x)}\right).$$ Couldn't we then use that to express $\lim_{x\to \infty }\left(\frac{x}{x\ln (x)}\right)$ as $$\lim_{x\to \infty }\left({\frac{x}{\ln (x)}}\right)\times\lim_{x\to \infty }\left({\frac{1}{x}}\right),$$ and by using L'Hospital's Rule, $$ \begin{align} \lim_{x\to \infty }\left({\frac{x}{\ln (x)}}\right)\times\lim_{x\to \infty }\left({\frac{1}{x}}\right) &= \lim_{x\to \infty }\left({x}\right)\times\lim_{x\to \infty }\left({\frac{1}{x}}\right)\\ &= \lim_{x\to \infty }\left({\frac{x}{x}}\right)\\ &= \lim_{x\to \infty }\left({1}\right)=1. \end{align} $$ I think the problem occurs when I split the limit into two, but I just can't see why that's a problem. I would greatly appreciate it if someone could tell me where the problem is along with why it is a problem.
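A numeric sanity check makes the issue visible (my own addition). Both ways of writing the expression agree and shrink toward \(0\), so the value \(1\) cannot be right; the rule for splitting a limit into a product requires both factors to have finite limits, and here one factor tends to \(\infty\) while the other tends to \(0\):

```python
import math

for x in [1e2, 1e4, 1e8, 1e16]:
    direct = x / (x * math.log(x))          # the original expression, i.e. 1/ln(x)
    split = (x / math.log(x)) * (1 / x)     # product of the two "split" factors
    print(f"x = {x:.0e}:  direct = {direct:.3e},  split = {split:.3e}")
# both columns go to 0 as x grows
```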
We have $N$ oscillators, each described by the Hamiltonian: $$H = \frac{p^2}{2m} + \frac{Kq^4}{4} $$ I have to compute the average total energy $\langle E\rangle$ of the $N$ oscillators. To do so, I first have to compute the one-particle partition function, which means solving the following integral: $$Z_1 (V,T) = \iint_{\mathbb{R}^2} e ^{-\beta \,H_1(p,q)}\,dp\,dq.$$ So in this case: $$ Z_1 = \int_{-\infty}^{\infty} e^{-\beta\frac{p^2}{2m}}\,dp \int_{-\infty}^{\infty} e^{-\beta \frac{Kq^4}{4}}\,dq. $$ I know the first factor is a Gaussian integral, using: $$\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}.$$ For the first integral I got: $$\int_{-\infty}^{\infty} e^{-\beta\frac{p^2}{2m}}\,dp = \sqrt{\frac{2m\pi}{\beta}}$$ I am having difficulties solving the second one: $$\int_{-\infty}^{\infty} e^{-\beta \frac{Kq^4}{4}}\,dq.$$ I have tried the change of variables $q^4 = a^2$, but this does not simplify the calculation. What method should I use? Once you get $Z_1$: $$Z_N = (Z_1)^N.$$ Then you just have to apply: $$\langle E \rangle = -\frac{ \partial \log(Z_N)}{\partial \beta}.$$ ANSWER $$\langle E \rangle = \frac{3N}{4\beta}.$$
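For what it's worth, here is a numerical cross-check (a sketch of mine, with arbitrary values for \(\beta\), \(K\), and \(m\)). The substitution \(u = \beta K q^4/4\) turns the quartic integral into a Gamma-function integral, \(\int_{-\infty}^{\infty} e^{-\beta K q^4/4}\,dq = \frac{1}{2}\,\Gamma(\tfrac{1}{4})\left(\frac{\beta K}{4}\right)^{-1/4}\), so \(Z_1 \propto \beta^{-1/2}\,\beta^{-1/4} = \beta^{-3/4}\) and the stated answer follows:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

beta, K, m = 1.3, 2.0, 1.0        # arbitrary test values

Zp = np.sqrt(2 * np.pi * m / beta)                        # momentum (Gaussian) factor
Zq_exact = 0.5 * gamma(0.25) * (beta * K / 4) ** -0.25    # Gamma-function closed form
Zq_numeric, _ = quad(lambda q: np.exp(-beta * K * q**4 / 4), -np.inf, np.inf)
print(Zq_exact, Zq_numeric)       # the two agree

# Z1 = Zp * Zq ~ beta**(-3/4), hence <E> = -N d(log Z1)/d(beta) = 3N/(4*beta).
```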
Practice Paper 2 Question 20

Show that \(2^{50} < 3^{33}.\)

Warm-up Questions

Compare \(2^{39}\) and \(3^{26}.\) Hint: Consider \(8\) and \(9\). Compare \(2^{100}\) and \(5^{50}.\) Expand \((3 – 2x)^5.\)

Hints

Hint 1: How about using binomial expansion? Hint 2: You need an appropriate choice of numbers in the binomial expansion; splitting \(3\) as \(3=(2+1)\) may not be very helpful. How about splitting a higher power of \(3?\) Hint 3: Perhaps writing \(3^2 = (2^3 + 1)\) is more helpful? Hint 4: How can you make \((3^2)^n\) appear, where \(n\) is an integer? Hint 5: If you wanted to write \(3^{33}\) as \(m(3^2)^n,\) how would you manipulate \(3^{33}\) such that \(n,m\) are integers? Hint 6: If \(3^{33}=3\cdot(3^2)^{16},\) how would you now implement a binomial expansion? Hint 7: Pay close attention to the first few terms in the expansion. Hint 8: Could you now build an inequality from the binomial expansion that contains only powers of \(2?\) Hint 9: The inequality should include a term that relates to the question's goal.

Solution

We would like to find an intermediary quantity smaller than \(3^{33}\) that we can express in terms of powers of \(2\), and thus formulate an intermediary inequality. We can try to write \(3^{33}\) as a binomial expansion, in order to introduce the powers of 2. We have \[ \begin{align} 3^{33} &= 3 \times 3^{32} \\ &= 3 \times (3^2)^{16} \\ &= 3 \times (1 + 8)^{16} \\ &= 3 \times (1 + 2^3)^{16} \\ &= 3 \times (2^{48} + 16 \cdot 2^{45} + \cdots + 1) \\ &> 2 \times (2^{48} +2^{49} + \cdots + 1) \\ &= 2^{49} +2^{50} + \cdots \\ &> 2^{50}. \end{align} \] (In the middle steps we used \(16\cdot 2^{45} = 2^{49}\) and \(3 > 2\).) Note: Would a binomial expansion of the form \((1+2)^n\) be as efficient?
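Since the numbers involved are small, the inequality (and the warm-ups) can also be verified directly with exact integer arithmetic; a one-off check of mine:

```python
print(2**50, 3**33)      # 1125899906842624 < 5559060566555523
print(2**50 < 3**33)     # True
print(2**39 < 3**26)     # True:  2^39 = 8^13 < 9^13 = 3^26
print(2**100 < 5**50)    # True:  2^100 = 4^50 < 5^50
```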
I'd appreciate help in understanding how changing the significance level affects the results of the t-test. I have conducted an experiment where a group of 15 participants took a test, played a game, and took the original test again. The data set follows:

Round 1 (Before Game) Scores: 6, 4, 7, 8, 12, 6, 7, 5, 11, 4, 7, 1, 6, 10, 4

Round 2 (After Game) Scores: 2, 3, 7, 11, 11, 9, 7, 12, 5, 15, 11, 11, 7, 4, 7

mean test score before game play: 6.53

mean test score after game play: 8.13

Accordingly, I formulated a null hypothesis that game play does not affect test scores and an alternative hypothesis that game play increases scores (see below). Using the data and R, I calculated the t-statistic, critical value, and p-value. $H_0: \mu_0 = 6.53$ and $H_1: \mu_1 > 6.53$ $\alpha = 0.05, \mu_0 = 6.53, \overline x = 8.13, \sigma = 3.70, n = 15$ $$ t = \frac{8.13 - 6.53}{\frac{3.70}{\sqrt{15}}} = 1.67 $$ Critical value = 1.76 and p-value = 0.94. T-value < Critical Value $\to$ $1.67 < 1.76 \therefore$ accept $H_0$. $p\text{-value} > \alpha$ $\to 0.94 > 0.05 \therefore$ accept $H_0$. But when I re-calculate with an $\alpha$ of 0.1, the critical value changes to 1.35, while the p-value stays the same at 0.94. At this point, accepting/rejecting diverges based on which value comparison is made. Did I make a mistake in the calculation or am I misunderstanding some other factor(s)? Thanks.
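For comparison, the same test can be run in a few lines (a sketch; this uses Python's scipy rather than R, and the `popmean` value 6.53 mirrors the calculation above). Note that the one-sided p-value it reports for \(t \approx 1.67\) is about 0.06, i.e. \(1 - 0.94\), which is worth comparing against the numbers in the question:

```python
from scipy import stats

before = [6, 4, 7, 8, 12, 6, 7, 5, 11, 4, 7, 1, 6, 10, 4]
after  = [2, 3, 7, 11, 11, 9, 7, 12, 5, 15, 11, 11, 7, 4, 7]

# One-sample, one-sided test of the after-game scores against the before mean,
# mirroring the calculation in the question:
t, p = stats.ttest_1samp(after, popmean=6.53, alternative='greater')
print(t, p)          # t ~ 1.68, one-sided p ~ 0.06

# A paired test is arguably the more natural design for before/after data:
t2, p2 = stats.ttest_rel(after, before, alternative='greater')
print(t2, p2)
```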
I am trying to understand better the quantization of the harmonic oscillator. Here are three ways of thinking about the harmonic oscillator. Eigenfunctions of the differential operator: $H = -\frac{d^2}{dx^2} + x^2$ Eigenfunctions of the oscillator $H = a^\dagger a + \frac{1}{2}$ Special orbits of the $U(1)$ action on the complex plane, level sets of the moment map $H = p^2 + x^2$. Are there any places that explain all three of these on equal footing? Items 1 and 2 have a Wick formula $$ \langle a b c d\rangle = \langle a b \rangle \langle c d\rangle + \langle a c \rangle \langle b d\rangle + \langle a d \rangle \langle bc \rangle$$ Is there an analogue in the symplectic geometry case (item 3)? I want to understand better why this is a duality $$ {\tt rotation,}\;e^{it}\in U(1)\leftrightarrow {\tt gaussians,}\;e^{-x^2} \leftrightarrow {\tt eigenstates, }\;|n\rangle$$ Something to that effect is mentioned in these notes. Does any rotation action get quantized this way? This question involves rotation actions in a different way than this other MO question: Characterizing the harmonic oscillator creation and annihilation operators in a rotationally invariant way. EDIT: Here is another MO post where the Bargmann transform arises in quantization of the harmonic oscillator: Representation of double cover of $U(n)$ on eigenspaces of harmonic oscillator
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...

Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...

System-size dependence of the charged-particle pseudorapidity density at $\sqrt{s_{\rm NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...

Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...

Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Sample Test 4 Question 3

Let \(a>1\) be an integer. Give a non-integral expression in terms of \(a\) for \(F(a)=\displaystyle\int_1^a (-1)^{\lfloor x \rfloor} \lfloor x \rfloor^{-1} dx,\) where \(\lfloor x\rfloor\) is the greatest integer less than or equal to \(x.\)

Hints

Hint 1: Sketch the graph \(y = \frac{1}{\lfloor{x}\rfloor}\) for \(x > 1.\) Hint 2: How does multiplying it by \((-1)^{\lfloor{x}\rfloor}\) change the graph? Hint 3: An integral of a function is just the area under its graph. Hint 4: Could you express the total area as a sum?

Solution

Plotting the function \(y = (-1)^{\lfloor{x}\rfloor}\frac{1}{\lfloor{x}\rfloor}\) yields rectangles of width \(1\) and heights \(-1, \frac{1}{2}, -\frac{1}{3}, \frac{1}{4}, -\frac{1}{5}, \ldots\) Since an integral of a function is just the (signed) area under its graph, \(F(a)=\sum_{i=1}^{a-1}\frac{(-1)^i}{i}.\)
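The closed form is easy to confirm numerically (my own sketch; the `points` argument just tells the integrator where the integrand jumps):

```python
from math import floor
from scipy.integrate import quad

def integrand(x: float) -> float:
    k = floor(x)
    return (-1) ** k / k

for a in [2, 3, 6, 10]:
    numeric, _ = quad(integrand, 1, a, points=list(range(2, a)) or None)
    partial = sum((-1) ** i / i for i in range(1, a))
    print(a, round(numeric, 10), round(partial, 10))   # the two columns agree
```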
1. The domain of vector field \(\vecs{F}=\vecs{F}(x,y)\) is a set of points \((x,y)\) in a plane, and the range of \(\vecs F\) is a set of what in the plane?

Solution: Vectors

For the following exercises, determine whether the statement is true or false.

2. Vector field \(\vecs{F}=⟨3x^2,1⟩\) is a gradient field for both \(ϕ_1(x,y)=x^3+y\) and \(ϕ_2(x,y)=y+x^3+100.\)

3. Vector field \(\vecs{F}=\dfrac{⟨y,x⟩}{\sqrt{x^2+y^2}}\) is constant in direction and magnitude on a unit circle.

Solution: False

4. Vector field \(\vecs{F}=\dfrac{⟨y,x⟩}{\sqrt{x^2+y^2}}\) is neither a radial field nor a rotation.

For the following exercises, describe each vector field by drawing some of its vectors. (The solutions to the odd-numbered items are graphs, omitted here.)

5. [T] \(\vecs{F}(x,y)=x\,\hat{\mathbf i}+y\,\hat{\mathbf j}\)

6. [T] \(\vecs{F}(x,y)=−y\,\hat{\mathbf i}+x\,\hat{\mathbf j}\)

7. [T] \(\vecs{F}(x,y)=x\,\hat{\mathbf i}−y\,\hat{\mathbf j}\)

8. [T] \(\vecs{F}(x,y)=\,\hat{\mathbf i}+\,\hat{\mathbf j}\)

9. [T] \(\vecs{F}(x,y)=2x\,\hat{\mathbf i}+3y\,\hat{\mathbf j}\)

10. [T] \(\vecs{F}(x,y)=3\,\hat{\mathbf i}+x\,\hat{\mathbf j}\)

11. [T] \(\vecs{F}(x,y)=y\,\hat{\mathbf i}+\sin x\,\hat{\mathbf j}\)

12. [T] \(\vecs F(x,y,z)=x\,\hat{\mathbf i}+y\,\hat{\mathbf j}+z\,\hat{\mathbf k}\)

13. [T] \(\vecs F(x,y,z)=2x\,\hat{\mathbf i}−2y\,\hat{\mathbf j}−2z\,\hat{\mathbf k}\)

14. [T] \(\vecs F(x,y,z)=yz\,\hat{\mathbf i}−xz\,\hat{\mathbf j}\)

For the following exercises, find the gradient vector field of each function \(f\).

15. \(f(x,y)=x\sin y+\cos y\)

Solution: \(\vecs{F}(x,y)=\sin(y)\,\hat{\mathbf i}+(x\cos y−\sin y)\,\hat{\mathbf j}\)

16. \(f(x,y,z)=ze^{−xy}\)

17. \(f(x,y,z)=x^2y+xy+y^2z\)

Solution: \(\vecs F(x,y,z)=(2xy+y)\,\hat{\mathbf i}+(x^2+x+2yz)\,\hat{\mathbf j}+y^2\,\hat{\mathbf k}\)

18. \(f(x,y)=x^2\sin(5y)\)

19. \(f(x,y)=\ln(1+x^2+2y^2)\)

Solution: \(\vecs{F}(x,y)=\left(\dfrac{2x}{1+x^2+2y^2}\right)\,\hat{\mathbf i}+\left(\dfrac{4y}{1+x^2+2y^2}\right)\,\hat{\mathbf j}\)

20. \(f(x,y,z)=x\cos\left(\dfrac{y}{z}\right)\)

21. What is vector field \(\vecs{F}(x,y)\) with a value at \((x,y)\) that is of unit length and points toward \((1,0)\)?

Solution: \(\vecs{F}(x,y)=\dfrac{(1−x)\,\hat{\mathbf i}−y\,\hat{\mathbf j}}{\sqrt{(1−x)^2+y^2}}\)

For the following exercises, write formulas for the vector fields with the given properties.

22. All vectors are parallel to the \(x\)-axis and all vectors on a vertical line have the same magnitude.

23. All vectors point toward the origin and have constant length.

Solution: \(\vecs{F}(x,y)=−\dfrac{x\,\hat{\mathbf i}+y\,\hat{\mathbf j}}{\sqrt{x^2+y^2}}\)

24. All vectors are of unit length and are perpendicular to the position vector at that point.

25. Give a formula \(\vecs{F}(x,y)=M(x,y)\,\hat{\mathbf i}+N(x,y)\,\hat{\mathbf j}\) for the vector field in a plane that has the properties that \(\vecs{F}=\vecs 0\) at \((0,0)\) and that at any other point \((a,b)\), \(\vecs F\) is tangent to circle \(x^2+y^2=a^2+b^2\) and points in the clockwise direction with magnitude \(\|\vecs F\|=\sqrt{a^2+b^2}\).

Solution: \(\vecs{F}(x,y)=y\,\hat{\mathbf i}−x\,\hat{\mathbf j}\)

26. Is vector field \(\vecs{F}(x,y)=(P(x,y),Q(x,y))=(\sin x+y)\,\hat{\mathbf i}+(\cos y+x)\,\hat{\mathbf j}\) a gradient field?

27. Find a formula for vector field \(\vecs{F}(x,y)=M(x,y)\,\hat{\mathbf i}+N(x,y)\,\hat{\mathbf j}\) given the fact that for all points \((x,y)\), \(\vecs F\) points toward the origin and \(\|\vecs F\|=\dfrac{10}{x^2+y^2}\).
Solution: \(\vecs F(x,y)=−\dfrac{10\,(x\,\hat{\mathbf i}+y\,\hat{\mathbf j})}{(x^2+y^2)^{3/2}}\)

For the following exercises, assume that an electric field in the \(xy\)-plane caused by an infinite line of charge along the \(z\)-axis is a gradient field with potential function \(V(x,y)=c\ln\left(\dfrac{r_0}{\sqrt{x^2+y^2}}\right)\), where \(c>0\) is a constant and \(r_0\) is a reference distance at which the potential is assumed to be zero.

28. Find the components of the electric field in the \(x\)- and \(y\)-directions, where \(\vecs E(x,y)=−\vecs ∇V(x,y).\)

29. Show that the electric field at a point in the \(xy\)-plane is directed outward from the origin and has magnitude \(\|\vecs E\|=\dfrac{c}{r}\), where \(r=\sqrt{x^2+y^2}\).

Solution: \(\vecs E=\dfrac{c}{|r|^2}\vecs r=\dfrac{c}{|r|}\dfrac{\vecs r}{|r|}\), which points radially outward and has magnitude \(\dfrac{c}{r}\).

A flow line (or streamline) of a vector field \(\vecs F\) is a curve \(\vecs r(t)\) such that \(d\vecs{r}/dt=\vecs F(\vecs r(t))\). If \(\vecs F\) represents the velocity field of a moving particle, then the flow lines are paths taken by the particle. Therefore, flow lines are tangent to the vector field. For the following exercises, show that the given curve \(\vecs c(t)\) is a flow line of the given velocity vector field \(\vecs F(x,y,z)\).

30. \(\vecs c(t)=⟨ e^{2t},\ln|t|,\dfrac{1}{t} ⟩,\,t≠0;\quad \vecs F(x,y,z)=⟨2x,z,−z^2⟩\)

31. \(\vecs c(t)=⟨ \sin t,\cos t,e^t⟩;\quad \vecs F(x,y,z) =⟨y,−x,z⟩\)

Solution: \(\vecs c′(t)=⟨ \cos t,−\sin t,e^{t}⟩=\vecs F(\vecs c(t))\)

For the following exercises, let \(\vecs{F}=x\,\hat{\mathbf i}+y\,\hat{\mathbf j}\), \(\vecs G=−y\,\hat{\mathbf i}+x\,\hat{\mathbf j}\), and \(\vecs H=x\,\hat{\mathbf i}−y\,\hat{\mathbf j}\). Match \(\vecs F\), \(\vecs G\), and \(\vecs H\) with their graphs (the graphs are omitted here).

32. [Graph]

33. [Graph]

Solution: \(\vecs H\)

34. [Graph]

For the following exercises, let \(\vecs{F}=x\,\hat{\mathbf i}+y\,\hat{\mathbf j}\), \(\vecs G=−y\,\hat{\mathbf i}+x\,\hat{\mathbf j}\), and \(\vecs H=−x\,\hat{\mathbf i}+y\,\hat{\mathbf j}\). Match the vector fields with their graphs in (I)−(IV): a. \(\vecs F+\vecs G\); b. \(\vecs F+\vecs H\); c. \(\vecs G+\vecs H\); d. \(−\vecs F+\vecs G\).

35. [Graph]

Solution: d. \(−\vecs F+\vecs G\)

36. [Graph]

37. [Graph]

Solution: a. \(\vecs F+\vecs G\)

38. [Graph]
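A symbolic spot-check of two of the exercises above (my own sketch using sympy; the exercise numbers refer to the list above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Exercise 17: gradient of f(x, y, z) = x^2*y + x*y + y^2*z
f = x**2*y + x*y + y**2*z
print([sp.diff(f, v) for v in (x, y, z)])
# [2*x*y + y, x**2 + x + 2*y*z, y**2] -- matches the stated solution

# Exercise 26: F = (sin x + y, cos y + x) is a gradient field on the plane
# iff dP/dy == dQ/dx:
P, Q = sp.sin(x) + y, sp.cos(y) + x
print(sp.Eq(sp.diff(P, y), sp.diff(Q, x)))   # True: both partials equal 1
```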
Practice Paper 4 Question 4

The parabola \(y=x^2+\frac{1}{4}\) is eventually intersected at two points by the line \(y=tx,\) where \(t\) is the time increasing linearly from \(0\) (hence the line rotates from a horizontal towards a vertical position). Determine the speed at which the segment linking the two intersection points grows.

Hints

Hint 1: Where do the two curves intersect? Hint 2: Find, in terms of \(t,\) the coordinates of these points. Hint 3: ... and the length of the segment joining the two points. Hint 4: How could you find the speed at which it grows from this length?

Solution

We first find where the points of intersection are, in terms of \(t.\) We equate the two curves to get \(x^2-tx+\frac{1}{4} = 0.\) Solving for \(x\) gives us their \(x\)-coordinates: \(x_{A,B} = \frac{t\pm\sqrt{t^2-1}}{2}.\) The two points \(A\) and \(B\) lie on the line \(y=tx,\) so we substitute in the above \(x\)-coordinates to get their \(y\)-coordinates. Thus the two points are: \(A = \big(\frac{t-\sqrt{t^2-1}}{2},\frac{t^2-t\sqrt{t^2-1}}{2}\big)\) and \(B = \big(\frac{t+\sqrt{t^2-1}}{2},\frac{t^2+t\sqrt{t^2-1}}{2}\big).\) The length of the segment joining \(A\) and \(B\) is their Euclidean distance, given by \(D_{A,B}(t) = \sqrt{(x_B - x_A)^2 + (y_B - y_A)^2}.\) Substituting and simplifying yields \(D_{A,B}(t) = \sqrt{t^4-1}.\) To get the rate at which this distance increases, we differentiate with respect to \(t\) to get the result \(D'_{A,B}(t) = \frac{2t^3}{\sqrt{t^4-1}}.\)
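The algebra is easy to double-check symbolically (a sketch of mine; sympy reproduces both the distance and its derivative):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
xA = (t - sp.sqrt(t**2 - 1)) / 2    # x-coordinates of the intersection points
xB = (t + sp.sqrt(t**2 - 1)) / 2

D = sp.sqrt((xB - xA)**2 + (t*xB - t*xA)**2)        # both points lie on y = t*x
print(sp.simplify(D))                               # sqrt(t**4 - 1)
print(sp.simplify(sp.diff(sp.sqrt(t**4 - 1), t)))   # 2*t**3/sqrt(t**4 - 1)
```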
A learning algorithm is the backbone of machine learning that distinguishes it from traditional computer programming by allowing data-driven model building. In the past years, we have developed learning algorithms using a number of tools and for diverse application domains, as outlined below.

Learning with Kernels

Kernel methods offer a mathematically elegant arsenal to help tackle several problems that arise in machine learning, ranging from probabilistic inference to deep learning. Recently, a subfield of kernel methods known as Hilbert space embedding of distributions has gained increasing popularity [ ], thanks to foundational work done in our department during the last 10+ years. For a probability distribution $\mathbb{P}$ over a measurable space $\mathcal{X}$, the kernel mean embedding of $\mathbb{P}$ is defined as the mapping $\mu: \mathbb{P} \mapsto \int k(x,\cdot) \,\mathrm{d}\mathbb{P}(x)$ where $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ is a positive definite kernel function. Its applications include, but are not limited to, comparing real-world distributions on the basis of samples, differentially private learning, and determining the goodness-of-fit of a model. Our department has an ongoing history of contributions to the state-of-the-art in this area. In [ ], we develop privacy-preserving algorithms based on the kernel mean embedding that allow one to release a database while guaranteeing the privacy of each record in the database. In applications such as probabilistic programming, transforming a base random variable $X$ with a function $f$ forms a basic building block for manipulating a probabilistic model. It then becomes necessary to characterize the distribution of $f(X)$. In [ ], we show that for any continuous function $f$, consistent estimators of the mean embedding of a random variable $X$ lead to consistent estimators of the mean embedding of $f(X)$. For Matérn kernels and sufficiently smooth functions, we also provide rates of convergence. In [ ], we address the problem of measuring the relative goodness of fit of two models using kernel mean embeddings. Given two candidate models and a set of target observations, the goal is to produce a set of interpretable examples (so-called informative features) which indicate the regions in the data domain where one model fits better than the other. The task is formulated as a statistical test whose runtime complexity is linear in the sample size.

Figure: The Rosenbrock function, a non-convex function which serves as a test-bed for optimization algorithms (image credit: Wikipedia)

Optimization for Machine Learning

Optimization lies at the heart of most machine learning algorithms. Characteristics of modern large-scale machine learning problems include: high-dimensional, noisy, and uncertain data; huge volumes of batch or streaming data; and intractable models, low accuracy, and reliance on distributed computation or stochastic approximations. Optimizing under these settings with approaches such as coordinate descent and the Frank-Wolfe algorithm has shown promising results in recent years. The high-level goal of research in optimization in our department is to understand the convergence properties of coordinate descent as well as Frank-Wolfe optimization algorithms under different sampling schemes and constraints. It is well known that greedy coordinate descent (CD) converges faster in practice than the randomized version; however, the properties of greedy CD were less well understood.
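Before turning to the optimization results, here is a tiny illustration of the kernel mean embedding idea from the kernels subsection (a sketch of mine, not code from the cited papers): the squared RKHS distance between two empirical embeddings is the maximum mean discrepancy (MMD), computable from kernel evaluations alone. All names and constants below are arbitrary:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of ||mu_P - mu_Q||^2, the squared RKHS distance
    between the empirical kernel mean embeddings of the two samples."""
    return (rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean()
            - 2 * rbf(X, Y, gamma).mean())

rng = np.random.default_rng(0)
P1 = rng.normal(0.0, 1.0, size=(500, 1))
P2 = rng.normal(0.0, 1.0, size=(500, 1))   # same distribution as P1
Q = rng.normal(0.5, 1.0, size=(500, 1))    # shifted distribution

print(mmd2(P1, P2))   # close to 0: same underlying distribution
print(mmd2(P1, Q))    # clearly larger: the distributions differ
```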
In [ ], we provide a theoretical understanding of greedy coordinate descent for smooth functions. We also propose an approximate greedy CD approach which is computationally cheap and always provably better than the randomized version. Similarly, in [ ] we propose an adaptive recursive sampling scheme based on the min-max optimal solution of the variance reduction problem to achieve faster convergence for CD. The proposed approach can also be applied to stochastic gradient descent. Matching pursuit (MP), Frank-Wolfe (FW), and coordinate descent share a similar optimization-problem structure. A connection between MP and coordinate descent is explored in [ ]. We also prove the rate for accelerated matching pursuit, which was not known previously. An MP algorithm for optimization over conic hulls is proposed in [ ]. In [ ], an easy-to-implement conditional gradient method is proposed for a composite minimization problem, which converges at the rate of $O(1/\sqrt{k})$. In a different line of work, we propose a Frank-Wolfe based approach to boost variational inference [ ], which enables us to analyze the convergence of the proposed framework under suitable assumptions.

Extreme Classification

Extreme multi-label classification refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. It has been shown that machine learning problems arising in tasks such as recommendation, ranking, and web-advertising can be reduced to the framework of extreme classification. It had long been conjectured that a binary-relevance-based one-vs-rest scheme is not statistically and computationally tenable for such scenarios. Surprisingly, we have been able to show in our recent work [ ] that a Hamming-loss-minimizing one-vs-rest paradigm is key to getting good prediction performance, as well as to efficient training (by enabling parallel training). DiSMEC [ ], when published in 2016, surpassed the contemporary state-of-the-art methods by up to 10 percentage points on various datasets consisting of up to a million labels. Since then, it has been a top-performing benchmark method in this domain for over two years. The concurrent training coupled with model pruning paradigms in DiSMEC have motivated algorithms by Microsoft Research which have been used in the Bing search engine for dynamic search advertising and related searches.

Neural Networks

Research interest in deep neural networks, especially in the generative adversarial network (GAN) approach [ ], has increased substantially in recent years. In [ ], we propose a simple module to improve a GAN by preprocessing samples with a network that initially makes the task of the discriminator harder (akin to data smoothing), thus simplifying the generator's task. This leads to a tempered learning process for both generator and discriminator. In a number of experiments, the proposed method can improve quality, stability and/or convergence speed across a range of different GAN architectures (DCGAN, LSGAN, WGAN-GP). In [ ], we propose AdaGAN, a boosting-style meta-algorithm which can be combined with various modern generative models (including GANs and VAEs) to improve their quality. We provide an optimal closed-form solution for performing greedy updates to approximate an unknown distribution with a sequentially built mixture in any given f-divergence. The paper establishes a fruitful connection between learning theory and neural network research and has already attracted a large amount of follow-up work.
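To make the greedy-versus-randomized coordinate descent comparison from the optimization subsection concrete, here is a toy sketch of mine (a synthetic quadratic objective; this is not the algorithm of any of the cited papers). The greedy Gauss-Southwell rule picks the coordinate with the largest gradient magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40))
Q = A.T @ A + 0.1 * np.eye(40)    # positive definite Hessian
b = rng.normal(size=40)           # objective f(x) = 0.5 x^T Q x - b^T x

def coordinate_descent(greedy: bool, steps: int = 1500) -> float:
    x = np.zeros(40)
    for _ in range(steps):
        g = Q @ x - b             # full gradient (cheap here; a toy setting)
        i = int(np.argmax(np.abs(g))) if greedy else int(rng.integers(40))
        x[i] -= g[i] / Q[i, i]    # exact minimization along coordinate i
    return 0.5 * x @ Q @ x - b @ x

print("greedy (Gauss-Southwell):", coordinate_descent(True))
print("uniform random:          ", coordinate_descent(False))
# the greedy rule typically reaches a lower objective in the same number of steps
```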
The work [ ] develops a deep neural network that can learn to write programs from a corpus of program induction problems. The approach leads to an order-of-magnitude speedup over strong baselines, including an approach based on a recurrent neural network (RNN). Ideas from causality are beginning to influence our work on machine learning, and the notion of independent causal mechanisms has been adopted in several areas including semi-supervised learning, domain adaptation, and transfer learning. From a deep learning perspective, we investigated whether a set of independent mechanisms can be recovered using deep neural networks [ ]. We proposed an algorithm that enables a set of experts (i.e., deep neural networks) to recover independent (inverse) mechanisms from a data set that has undergone unlabelled transformations. Using a competitive training procedure, the experts specialize to different mechanisms. Not only can the mechanisms be learned successfully, but the system also generalizes to transformed data in other domains.
Peter F. and John A. simultaneously sent me a nice video on the 3Blue1Brown YouTube channel that wonderfully popularizes various mathematical curiosities and gets well-deserved hundreds of thousands of views for that: The most unexpected answer to a counting puzzle (a 5-minute video)

Imagine two boxes on a line, a big one and a small one. The big one is \(M\) times heavier than the smaller one. The small box starts at zero speed, the big one approaches from the right, collides with the small one. And the small box elastically oscillates between the big box and a static wall. You count all the "clacks". When the boxes are equally heavy and \(M=1\), you will get \(3\) clacks. When \(M=10^{10}\), you will get... \(314,\!159\) clacks. It's almost like digits of \(\pi\). And indeed. The rule continues. Why do you get digits of \(\pi\) in a counting problem? And why are they multiplied by powers of ten?

OK, let us start with the last question. Why do the base-ten multiples seem to play a special role here? Many people are asking this question over there. Well, they don't play any special role. The universal rule is that if the mass ratio is \(M\), then the number of clacks will be an integer close to \[ N \approx \pi \sqrt M. \] That's it. Note that nothing involving the number ten appears in the formula above or the previous sentence. But if \(M=100^p\), then \(\sqrt{M}=10^p\). So if the mass ratio is an even power of ten, the number of clacks will scale like a power of ten, too. But if you preferred to see the digits of \(\pi\) in the hexadecimal form, you could do it equally well. You would choose the mass ratio to be \(M=256^{p}\) (which you could write as \(M=100^p\) if you took the hexadecimal notation seriously) and the number of clacks would be \[ N \approx \pi \cdot 16^p \] where the power of sixteen would simply shift the hexadecimal digits (again, \(16^p\) could be written as \(10^p\) hexadecimally). So indeed, there is absolutely nothing special about the base-10 digits here. The only fact is that if you care about digits of \(\pi\) in the base-10 system, you must choose a mass ratio that is also simple in the base-10 system, and the calculation will guarantee that the number of clacks will preserve the base-10 values simply because \[ \sqrt{100^p} = 10^p. \] It's that simple.

OK, why does \(\pi\) appear there? The channel will probably record a video where the proof is presented; it could be a nice one. The derivation was given by Gregory Galperin in a 2003 paper (he already figured it out in 1995): Google Scholar entry. The paper has just 13 citations but it's amusing enough so that the YouTube popularization of the paper collects 350,000 views within a day! ;-) Galperin's proof is funny – it relies on some witty tricks from this book by Arnold. You may map the elastically colliding boxes to a billiard. And in a certain transformed space, there will be a fixed calculable number of "clacks", basically \(\sqrt M = 10^p\), per unit angle. And because the angles of the initial and final state differ by \(\pi\), you will get \(\pi\cdot \sqrt M\) "clacks".

It's not really hard to see why the number of clacks scales like \(\sqrt{M}\), either. And the explanation below is mine. But I believe that any person with physics intuition can produce it, too. For a large mass ratio, the small box in between the wall and the big box behaves as the molecule of some gas.
Because the total kinetic energy is preserved, the original kinetic energy \(MV^2/2\) of the big box is converted to \(v^2/2\) of the small box (whose mass is \(m=1\)) in the middle of the process, when the big box is approximately at rest. You can see that \[ v_{\rm max} = V_{\rm max}\sqrt{M}. \] So if the initial speed of the big box is "one", the maximum speed of the small box will be \(\sqrt{M}\). The time needed to oscillate between the wall and the big box will therefore scale like \(1/\sqrt{M}\), and the maximum number of clacks per unit time will go like \(\sqrt{M}\) again. Well, we were cheating because we neglected the fact that for a higher \(M\), the big box gets closer to the wall before it's repelled, and when it's closer, the rate of clacks is higher. It's true, but this extra power of \(M\) is canceled by the fact that almost all the clacks only appear during a shorter "core" of the episode. I guess the dear reader as well as the blogger would be bored to prove all these powers. But it's not hard to see that there is some power law.

Billiards play a role in the proof. So here's an amazing shot by Mr Blomdahl – thanks to Olda K. who loves billiards. I have serious doubts whether skills play much role beyond the brute force and luck – when collisions amplifying the uncertainties are involved. But maybe I just miscalculate the odds. It's too bad that only problems whose "basic rules" are really trivial to understand may be turned into these yummy popular videos where things nicely clack and millions of people watch and listen (although I guess that only thousands will be patient enough to try to understand the proof at a technical level). There are so many things in higher mathematics and theoretical physics that are even more exciting – but one needs a more refined background to understand the beauty.

P.S.: Here is a helpful comment explaining the origin of the \(\pi\) factor from the commenter Bidoni Bidona – it's more concise than Galperin's paper, indeed: Describe the system in terms of the velocities \(V\) and \(v\) only. As it turns out, accounting for conservation of momentum and energy when the two masses collide, and perfect reflection when the lightest one bounces off the wall, the vector in the \((v,V)\) plane describes an ellipse. If you scale the axes so that you're now in the \((v,\sqrt{M} V)\) plane, where \(M\) is the mass ratio \(m_{\rm big}/m_{\rm small}\), you're now describing a circle. The process ends when the vector ceases to be in the \(v\gt V\) half-plane. Putting it all together, you find that the number of collisions is precisely the largest integer smaller than or equal to \(\pi/{\rm arctan}(1/\sqrt{M})\). So you're not exactly proportional to the floor of \(\pi\sqrt{M}\), but \({\rm arctan}(1/\sqrt{M})\) is reasonably close to \(1/\sqrt{M}\) even when \(\sqrt{M}\) is just above \(1\). As a matter of fact, there exists a viable conjecture that the floor of \(\pi/{\rm arctan}(1/\sqrt{M})\) is equal to the floor of \(\sqrt{M}\pi\) for all values of \(M=100^p\); it works at least for the first billions of values of \(p\)... (Anyway, the offset, if any, would be just \(1\); so this might only be a problem if at some point, the \(p\) starting digits of \(\pi\) end with \(p/2\) nines.)

Let me return to the question: Why does \(\pi\) appear in a counting problem?
The answer is that the number of clacks is so large that it is effectively continuous (that's why I could discuss the small box as "gas" in the thermodynamic limit) and in this continuous limit, the counting of the number of clacks \(N\) may be converted to an integral – and many integrals are proportional to \(\pi\), you know. No extra conspiracy is needed to understand the presence of \(\pi\).
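The whole story is also easy to verify by brute force, because only the velocities matter. Below is a minimal event-driven simulation of my own (not Galperin's construction): it applies the 1D elastic-collision formulas between the blocks and a sign flip at the wall, and compares the count with the \(\lfloor \pi/\arctan(1/\sqrt{M})\rfloor\) closed form from the comment quoted above:

```python
import math

def count_clacks(M: float) -> int:
    """Big block (mass M, velocity -1) drives a small block (mass 1, at rest)
    toward a wall on the left; count all collisions (block-block and wall)."""
    v1, v2 = 0.0, -1.0              # small block, big block
    clacks = 0
    while True:
        if v1 > v2:                 # blocks approach: elastic block-block collision
            v1, v2 = (((1 - M) * v1 + 2 * M * v2) / (1 + M),
                      ((M - 1) * v2 + 2 * v1) / (1 + M))
        elif v1 < 0:                # small block heads into the wall: reflect
            v1 = -v1
        else:                       # both move right, big one at least as fast: done
            return clacks
        clacks += 1

for p in range(4):
    M = 100 ** p
    print(M, count_clacks(M), math.floor(math.pi / math.atan(M ** -0.5)))
# prints 3, 31, 314, 3141 clacks for M = 1, 100, 10^4, 10^6
```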
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Practice Paper 4 Question 5

A thin square piece of paper of side \(1\) is folded recursively. The first fold is along a diagonal of length \(\sqrt{2},\) thus obtaining an isosceles triangle. All successive folds are along a segment of length \(l_i,\) with \(i=1,2,\ldots,\) that results in a new triangle that is similar (geometrically) to the previous one. Find the sum of all \(l_i\) as this process continues indefinitely.

Hints

Hint 1: After the first fold, where would the fold be so that the resulting triangle is geometrically similar to the previous one? A diagram may be useful. Hint 2: Find the relationship between the value of \(l_i\) and the adjacent side of the corresponding triangle. Hint 3: How does that help you find the relationship between the value of \(l_i\) and \(l_{i+1}?\) Hint 4: Do you notice anything familiar about the sum of all \(l_i\) given the above relationship?

Solution

The first fold is of length \(\sqrt{2}\). This is a special case though, as it turns the square into an isosceles right triangle. All future triangle-to-triangle folds should be along the altitude of the larger triangle. The fold halves the right triangle, so the leg adjacent to the fold in the larger triangle becomes the hypotenuse of the smaller one. This gives us the ratio of \(l_{i+1}\) to \(l_i\) as \(\frac{1}{\sqrt 2},\) and that is also the common ratio of our geometric series. Hence after \(n\) folds the total length is: \[ \begin{align} L(n) &= \sum_{i=1}^{n}{\frac{1}{\sqrt 2^i}} \\ \lim_{n \rightarrow \infty} L(n) &= \sqrt{2} + 1\\ \textrm{or equivalently: }&=\frac{\sqrt{2}}{2-\sqrt{2}}=\frac{1}{\sqrt{2}-1}. \end{align} \]
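As a quick sanity check of mine, the partial sums of this geometric series indeed approach \(\sqrt{2}+1\):

```python
import math

r = 1 / math.sqrt(2)                          # common ratio of the fold lengths
partial = sum(r ** i for i in range(1, 80))   # partial sum of the series
print(partial, math.sqrt(2) + 1)              # both ~ 2.414213562
```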