This is a follow-up question on Price of a prepayment-based claim. Consider a zero-coupon bond of maturity $T$ with price $P_0$ for which the borrower can reimburse the principal $N$ at any time $\tau$ between $0$ excluded and $T$ included. Assuming a constant risk-free interest rate $r$, the price of this claim is given by the risk-neutral expectation of its payoff: $$P_0 = \mathbb{E}^{\mathbb{Q}}\left[N\left(\mathbb{I}_{\{\tau \leq T\}}e^{-r\tau}+\mathbb{I}_{\{\tau>T\}}e^{-rT}\right)\right]$$ To model the stopping time $\tau$, we introduce a homogeneous Poisson process $N(t)$ (not to be confused with the principal $N$) parameterised by $\lambda > 0$ such that for $t \in \mathbb{R}_+^*$ and $n \in \mathbb{N}$: $$ \mathbb{P}(N(t) = n) = \frac{(\lambda t)^n}{n!}e^{-\lambda t}$$ Let $(\mathcal{F}_t)_{t \geq 0}$ be the natural filtration associated with the process $N(t)$. The stopping time $\tau$ with respect to the filtration $(\mathcal{F}_t)_{t \geq 0}$ is then defined as: $$ \tau = \min \{t>0 : N(t)>0 \}$$ We derive the stopping time distribution: $$ \begin{align} & \mathbb{P}(\tau > t) = \mathbb{P}(N(t) = 0) = e^{-\lambda t} \\[8pt] & \mathbb{P}(\tau \leq t) = 1 - e^{-\lambda t} \end{align} $$ Hence, as expected: $$ \tau \sim \mathcal{E}(\lambda) $$

Now, my question concerns the effect of a change of measure $-$ from the real-world probability $\mathbb{P}$ to the risk-neutral measure $\mathbb{Q}$ $-$ on the parameter $\lambda$. For example, under the Black-Scholes model, the canonical change of measure modifies the drift of the asset $S_t$ from $\mu$ to $r$, but it does not have any impact on the diffusion coefficient $\sigma$. I am familiar with the theory but not very familiar with the practicalities of the change of measure technique, so I do not really know how to tackle this issue here. Could someone please explain whether:

1. There would be any impact on $\lambda$ when passing from $\mathbb{P}$ to $\mathbb{Q}$?
2. Is my pricing problem sufficiently specified above, or should there be additional information to answer question 1?

Note: if you think there is an impact, please do not post the derivation but a hint on how to proceed.

A first try

The fundamental property of the risk-neutral measure is that (Brigo & Mercurio, 2007): The price of any asset divided by a reference positive non-dividend-paying asset (called numeraire) is a martingale (no drift) under the measure associated with that numeraire. Now, in my setting I am not quite sure what would be considered the "asset": my guess is that it would be the Poisson process $N(t)$. Hence, letting $s<t$ we would have: $$ \begin{align} \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}N(t)|\mathcal{F}_s\right] & = \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}N(s)|\mathcal{F}_s\right] + \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}(N(t)-N(s))|\mathcal{F}_s\right] \\[10pt] & = e^{-rt}N(s) + \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}(N(t)-N(s))\right] \\[10pt] & = e^{-rt}N(s) + e^{-rt}\lambda(t-s) \\[10pt] & = e^{-rt}\left(N(s) + \lambda (t-s)\right) \end{align} $$ where the second step follows from the independent increments property of the Poisson process. Given that we want: $$ \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}N(t)|\mathcal{F}_s\right] = e^{-rs}N(s) $$ I have the impression that the implied $-$ i.e. under the risk-neutral measure $-$ lambda parameter $\tilde{\lambda}$ should be: $$ \tilde{\lambda} \equiv \tilde{\lambda}(r,s,t) = N(s)\frac{e^{r(t-s)}-1}{t-s} $$ A few questions come to mind: Is my reasoning correct here? Does it make sense to define the parameter in terms of the process? My guess is no, given that $\tilde{\lambda}(r,0,t) = 0$ because $N(0)=0$... Does it pose a problem to go from a constant $\lambda$ under the real-world measure $\mathbb{P}$ to a functional $\tilde{\lambda}(r,s,t)$ under the risk-neutral measure $\mathbb{Q}$?
A second try

Using the same logic, I now leverage the property that the compensated Poisson process is a martingale: $$ \begin{align} & N_c(t) \equiv N(t) - \lambda t \\[10pt] & \mathbb{E}^{\mathbb{Q}}\left[N_c(t)|\mathcal{F}_s\right] = N_c(s) \end{align} $$ However, when I discount I (obviously) get: $$ \mathbb{E}^{\mathbb{Q}}\left[e^{-rt}N_c(t)|\mathcal{F}_s\right] = e^{-rt}N_c(s) $$ and hence I am stuck again.
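Whatever the answer on the measure change, the pricing formula itself is easy to check numerically. Below is a minimal Monte Carlo sketch (all parameter values are illustrative assumptions, not part of the question) comparing the simulated price against the closed form obtained by integrating the exponential density of $\tau$: $P_0 = N\left[\frac{\lambda}{\lambda+r}\left(1-e^{-(\lambda+r)T}\right) + e^{-(\lambda+r)T}\right]$.

```python
# Monte Carlo sketch of the prepayable zero-coupon bond price
# P0 = E[N (1{tau<=T} e^{-r tau} + 1{tau>T} e^{-rT})], with tau ~ Exp(lambda).
# Parameter values below are illustrative assumptions.
import math
import random

def price_mc(principal=100.0, r=0.03, T=10.0, lam=0.2, n_paths=200_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        tau = rng.expovariate(lam)            # first jump time of the Poisson process
        total += principal * math.exp(-r * min(tau, T))   # pay at tau if tau<=T, else at T
    return total / n_paths

def price_closed_form(principal=100.0, r=0.03, T=10.0, lam=0.2):
    # E[e^{-r tau} 1{tau<=T}] = lam/(lam+r) (1 - e^{-(lam+r)T});  P(tau>T) e^{-rT} = e^{-(lam+r)T}
    return principal * (lam / (lam + r) * (1.0 - math.exp(-(lam + r) * T))
                        + math.exp(-(lam + r) * T))
```

Running both with the same (hypothetical) parameters gives agreement up to Monte Carlo error, which at least confirms that the payoff specification is internally consistent.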
Given a voting rule $f : \{-1,1\}^n \to \{-1,1\}$ it’s natural to try to measure the “influence” or “power” of the $i$th voter. One can define this to be the “probability that the $i$th vote affects the outcome”.

Definition 12. We say that coordinate $i \in [n]$ is pivotal for $f : \{-1,1\}^n \to \{-1,1\}$ on input $x$ if $f(x) \neq f(x^{\oplus i})$. Here we have used the notation $x^{\oplus i}$ for the string $(x_1, \dots, x_{i-1}, -x_i, x_{i+1}, \dots, x_n)$.

Definition 13. The influence of coordinate $i$ on $f : \{-1,1\}^n \to \{-1,1\}$ is defined to be the probability that $i$ is pivotal for a random input: \[ \mathbf{Inf}_i[f] = \mathop{\bf Pr}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})]. \]

Influences can be equivalently defined in terms of the “geometry” of the Hamming cube:

Fact 14. For $f : \{-1,1\}^n \to \{-1,1\}$, the influence $\mathbf{Inf}_i[f]$ equals the fraction of dimension-$i$ edges in the Hamming cube which are boundary edges. Here $(x,y)$ is a dimension-$i$ edge if $x$ and $y$ agree in all coordinates but the $i$th; it is a boundary edge if $f(x) \neq f(y)$.

Examples 15. For the $i$th dictator function $\chi_i$ we have that coordinate $i$ is pivotal for every input $x$; hence $\mathbf{Inf}_i[\chi_i] = 1$. On the other hand, if $j \neq i$ then coordinate $j$ is never pivotal; hence $\mathbf{Inf}_j[\chi_i] = 0$ for $j \neq i$. Note that the same two statements are true about the negated-dictator functions. For the constant functions $\pm 1$, all influences are $0$. For the $\mathrm{OR}_n$ function, coordinate $1$ is pivotal for exactly two inputs, $(-1, 1, 1, \dots, 1)$ and $(1, 1, 1, \dots, 1)$; hence $\mathbf{Inf}_1[\mathrm{OR}_n] = 2^{1-n}$. Similarly, $\mathbf{Inf}_i[\mathrm{OR}_n] = \mathbf{Inf}_i[\mathrm{AND}_n] = 2^{1-n}$ for all $i \in [n]$. The $\mathrm{Maj}_3$ function is depicted in Figure 2; the points where it’s $+1$ are coloured grey and the points where it’s $-1$ are coloured white.
Its boundary edges are highlighted in black; there are $2$ of them in each of the $3$ dimensions. Since there are $4$ total edges in each dimension, we conclude $\mathbf{Inf}_i[\mathrm{Maj}_3] = 2/4 = 1/2$ for all $i \in [3]$. For majority in higher dimensions, $\mathbf{Inf}_i[\mathrm{Maj}_n]$ equals the probability that among $n-1$ random bits, exactly half of them are $1$. This is roughly $\frac{\sqrt{2/\pi}}{\sqrt{n}}$ for large $n$ — we will see this in the exercises and in a future chapter.

Influences can also be defined more “analytically” by introducing the derivative operators.

Definition 16. The $i$th (discrete) derivative operator $\mathrm{D}_i$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\mathrm{D}_i f : \{-1,1\}^n \to {\mathbb R}$ defined by \[ \mathrm{D}_i f (x) = \frac{f(x^{(i\mapsto 1)}) - f(x^{(i \mapsto -1)})}{2}. \] Here we have used the notation $x^{(i \mapsto b)} = (x_1, \dots, x_{i-1}, b, x_{i+1}, \dots, x_n)$. Notice that $\mathrm{D}_if(x)$ does not actually depend on $x_i$. The operator $\mathrm{D}_i$ is a linear operator: i.e., $\mathrm{D}_i(f+g) = \mathrm{D}_i f + \mathrm{D}_i g$. If $f : \{-1,1\}^n \to \{-1,1\}$ is boolean-valued then \begin{equation} \mathrm{D}_if(x) = \begin{cases} 0 & \text{if coordinate $i$ is not pivotal for $x$,} \\ \pm 1 & \text{if coordinate $i$ is pivotal for $x$.} \end{cases} \label{eqn:derivative-of-boolean} \end{equation} Thus $\mathrm{D}_if(x)^2$ is the $0$-$1$ indicator for whether $i$ is pivotal for $x$ and we conclude that $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_if({\boldsymbol{x}})^2]$. We take this formula as a definition for the influences of real-valued boolean functions.

Definition 17. We generalize Definition 13 to functions $f : \{-1,1\}^n \to {\mathbb R}$ by defining the influence of coordinate $i$ on $f$ to be \[ \mathbf{Inf}_i[f] = \mathop{\bf E}_{{\boldsymbol{x}} \sim \{-1,1\}^n}[\mathrm{D}_if({\boldsymbol{x}})^2] = \|\mathrm{D}_i f\|_2^2.
\] The discrete derivative operators are quite analogous to the usual partial derivatives. For example, $f : \{-1,1\}^n \to {\mathbb R}$ is monotone if and only if $\mathrm{D}_i f(x) \geq 0$ for all $i$ and $x$. Further, $\mathrm{D}_i$ acts like formal differentiation on Fourier expansions:

Proposition 18. Let $f : \{-1,1\}^n \to {\mathbb R}$ have the multilinear expansion $f(x) = \sum_{S \subseteq [n]} \widehat{f}(S)\,x^S$. Then \begin{equation} \label{eqn:deriv-formula} \mathrm{D}_i f(x) = \sum_{\substack{S \subseteq [n] \\ S \ni i} } \widehat{f}(S)\,x^{S \setminus \{i\}}. \end{equation} Since $\mathrm{D}_i$ is a linear operator, the proof follows immediately from the observation that \[ \mathrm{D}_i x^S = \begin{cases} x^{S \setminus \{i\}} & \text{if $S \ni i$,} \\ 0 & \text{if $S \not \ni i$.} \end{cases} \] By applying Parseval’s Theorem to the Fourier expansion \eqref{eqn:deriv-formula}, we obtain a Fourier formula for influences:

Theorem 19. For $f : \{-1,1\}^n \to {\mathbb R}$ and $i \in [n]$, \[ \mathbf{Inf}_i[f] = \sum_{S \ni i} \widehat{f}(S)^2. \]

In other words, the influence of coordinate $i$ on $f$ equals the sum of $f$’s Fourier weights on sets containing $i$. This is another good example of being able to “read off” an interesting combinatorial property of a boolean function from its Fourier expansion. In the special case that $f : \{-1,1\}^n \to \{-1,1\}$ is monotone there is a much simpler way to read off its influences: they are the degree-$1$ Fourier coefficients. In what follows, we write $\widehat{f}(i)$ in place of $\widehat{f}(\{i\})$.

Proposition 20. If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone, then $\mathbf{Inf}_i[f] = \widehat{f}(i)$.

Proof: By monotonicity, the $\pm 1$ in \eqref{eqn:derivative-of-boolean} is always $1$; i.e., $\mathrm{D}_if(x)$ is the $0$-$1$ indicator that $i$ is pivotal for $x$. Hence $\mathbf{Inf}_i[f] = \mathop{\bf E}[\mathrm{D}_i f] = \widehat{\mathrm{D}_if}(\emptyset) = \widehat{f}(i)$, where the third equality used Proposition 18.
$\Box$

This formula allows a neat proof that for any $2$-candidate voting rule that is monotone and transitive-symmetric, all of the voters have small influence:

Proposition 21. If $f : \{-1,1\}^n \to \{-1,1\}$ is monotone and transitive-symmetric, then $\mathbf{Inf}_i[f] \leq 1/\sqrt{n}$ for all $i \in [n]$.

Proof: Transitive-symmetry of $f$ implies that $\widehat{f}(i) = \widehat{f}(i')$ for all $i, i' \in [n]$ (using Exercise 1.29(a)); thus by monotonicity, $\mathbf{Inf}_i[f] = \widehat{f}(i) = \widehat{f}(1)$ for all $i \in [n]$. But by Parseval, $1 = \sum_S \widehat{f}(S)^2 \geq \sum_{i=1}^n \widehat{f}(i)^2 = n \widehat{f}(1)^2$; hence $\widehat{f}(1) \leq 1/\sqrt{n}$. $\Box$

This bound is slightly improved in the exercises. The derivative operators are very convenient for functions defined on $\{-1,1\}^n$ but they are less natural if we think of the Hamming cube as $\{\mathsf{True}, \mathsf{False}\}^n$; for the more general domains we’ll look at in later chapters they don’t even make sense. We end this section by introducing some useful definitions that will generalize better later on.

Definition 22. The $i$th expectation operator $\mathrm{E}_i$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by \[ \mathrm{E}_i f (x) = \mathop{\bf E}_{{\boldsymbol{x}}_i}[f(x_1, \dots, x_{i-1}, {\boldsymbol{x}}_i, x_{i+1}, \dots, x_n)]. \] Whereas $\mathrm{D}_i f$ isolates the part of $f$ depending on the $i$th coordinate, $\mathrm{E}_i f$ isolates the part not depending on the $i$th coordinate. In the exercises you are asked to verify the following: $\displaystyle \mathrm{E}_i f (x) = \frac{f(x^{(i \mapsto 1)}) + f(x^{(i \mapsto -1)})}{2}$, $\displaystyle \mathrm{E}_i f (x) = \sum_{S \not \ni i} \widehat{f}(S)\,x^{S}$, $\displaystyle f(x) = x_i \mathrm{D}_i f(x) + \mathrm{E}_i f(x)$. Note that in the decomposition $f = x_i \mathrm{D}_i f + \mathrm{E}_i f$, neither $\mathrm{D}_i f$ nor $\mathrm{E}_i f$ depends on $x_i$. This decomposition is very useful for proving facts about boolean functions by induction on $n$.
Finally, we will also define an operator very similar to $\mathrm{D}_i$ called the $i$th Laplacian:

Definition 24. The $i$th directional Laplacian operator $\mathrm{L}_i$ is defined by \[ \mathrm{L}_i f = f - \mathrm{E}_i f. \] Notational warning: many authors use the negated definition, $\mathrm{E}_i f - f$. In the exercises you are asked to verify the following: $\displaystyle \mathrm{L}_i f (x) = \frac{f(x)- f(x^{\oplus i})}{2}$, $\displaystyle \mathrm{L}_i f (x) = x_i \mathrm{D}_i f(x) = \sum_{S \ni i} \widehat{f}(S)\,x^{S}$, $\displaystyle \langle f, \mathrm{L}_i f \rangle = \langle \mathrm{L}_i f, \mathrm{L}_i f \rangle = \mathbf{Inf}_i[f]$.
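The combinatorial definition of influence is easy to check by brute force for small $n$. A minimal sketch (following Examples 15, with the convention $-1 = \mathsf{True}$ for $\mathrm{OR}$ so that coordinate $1$ is pivotal exactly on the two inputs listed there):

```python
# Brute-force Inf_i[f] = Pr_x[f(x) != f(x^{+i})] over all x in {-1,1}^n.
from itertools import product

def influence(f, n, i):
    pivotal = 0
    for x in product((-1, 1), repeat=n):
        y = list(x)
        y[i] = -y[i]                      # flip the i-th coordinate
        pivotal += f(x) != f(tuple(y))
    return pivotal / 2**n

def dictator(x):                          # chi_1: Inf_1 = 1, Inf_j = 0 otherwise
    return x[0]

def or3(x):                               # OR_3 with -1 = True: Inf_i = 2^{1-3} = 1/4
    return -1 if -1 in x else 1

def maj3(x):                              # Maj_3: Inf_i = 1/2, matching Figure 2
    return 1 if sum(x) > 0 else -1
```

Evaluating `influence` on these examples reproduces the values computed in Examples 15 and from the boundary edges of the Hamming cube.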
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2. 
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Perfect positioning of skeletal structures $\neq$ organ immobility, e.g. ventral displacement or geographic miss from moving organs, anatomic variations during treatment.

Limitations of existing solutions: Frame-based immobilization systems are highly uncomfortable [Lutz, W. et al., 1988; Takakura, T. et al., 2010]. Involuntary patient movements undermine inspection-based setups [Cervino, L.I. et al., 2010; Li, G. et al., 2015]. An existing real-time correcting proposal employs stepper motors for head motion alignment [Li, X. et al., 2015].

Solution: Soft Position Correcting Systems. Eliminate rigid frames and metallic rings used in immobilization that make the patient uncomfortable ✔ Eliminate attenuation of X-ray beams ✔ Remove the need to constantly model nonlinear dynamics of the patient's torso region and soft robotic actuators ✔ Ensure the control system is robust to (non-)parametric uncertainties ✔

Testbed

Vision-based Pose Estimation Steps: Find edges of 2D planar regions in the scene structure [Torr and Zisserman, 2000] $\rightarrow$ bound resulting plane indices with their 2D convex hull. Extract face and face-height neighbors into a predefined 2D prismatic model [Rusu, R. et al., 2008]. Cluster extracted points based on a Euclidean clustering scheme defined in the paper; the face will be the largest cluster.

Segmentation Results

Control Design Objectives: Stabilize system states. Optimal closed-loop tracking given a desired trajectory. Robustify the system to (non-)parametric uncertainties $\rightarrow$ changing head shapes, sizes and other anatomic/tumor variations.

Model Reference Adaptive System (MRAS) Design: Model head and bladder dynamics as $\dot{\textbf{y}} = \textbf{A}\textbf{y} + \textbf{B} {\Lambda} \left(\textbf{u} - f(\textbf{x}) \right) + \textbf{w}(k)$, where $f(\textbf{x})$ is a nonlinear function to be adapted for in our controller parameters and $\textbf{x}$ is the tuple containing past controls and current outputs. Approximate $f(\textbf{x})$ by a neural network with continuous memory states $\hat{f}\left(\textbf{u}(k-d), \textbf{y}(k),\textbf{w}(k)\right)$ $\rightarrow$ $\hat{f}(\cdot)$ is realized with a Long Short-Term Memory cell [Hochreiter and Schmidhuber, 1991, 1997]. Purpose: remember good adaptation gains. Set up the control law in terms of parameter estimates from the neural network weights and Lipschitz basis functions, $\Phi(\textbf{y}) = \{\textbf{y}(k-d), \cdots, \textbf{y}(k-d-4), \textbf{u}(k-d), \cdots, \textbf{u}(k-d-5)\}$, i.e. the network looks back in time by 5 time steps at every instant, and then makes a prediction. Derive the adaptive adjustment mechanism from Lyapunov analysis [Parks, P., 1966].

Assumptions: a dynamic RNN with $N$ neurons, $\varphi(\textbf{x})$, exists that will map from a compact input space $\textbf{u} \subset \mathbb{U}$ to an output space $\textbf{y} \subset \mathbb{Y}$ on the Lebesgue integrable functions with closed interval $[0, T]$ or open-ended interval $[0,\infty)$; the nonlinear function $f(\textbf{x})$ is exactly $\Theta^T \Phi(\textbf{x})$ with coefficients $\Theta \in R^{N\times m}$ and a Lipschitz-continuous vector of basis functions $\Phi(\textbf{x}) \in R^N$; inside a ball $\textbf{Y}_R$ of
known, finite radius $R$, the ideal neural network (NN) approximation $f(\textbf{x}): R^n \rightarrow R^m$, is realized to a sufficient degree of accuracy, $\varepsilon_f > 0$; outside $\textbf{Y}_R$, the NN approximation error can be upper-bounded by a known unbounded, scalar function $\varepsilon_{max}(\textbf{x})$; $\|\varepsilon(\textbf{x})\| \le \varepsilon_{max}(\textbf{x}), \quad \forall \quad \textbf{x} \in \textbf{Y}_R$; there exists an exponentially stable reference model $\dot{\textbf{y}}_m = \textbf{A}_m\textbf{y}_m + \textbf{B}_m \textbf{r}$. $\hat{\textbf{K}}_y \, \text{ and } \hat{\textbf{K}}_r$ are adaptive gains to be suitably designed. There are model matching conditions such that $\hat{\textbf{K}}_y = \textbf{K}_y , \, \text{ and } \hat{\textbf{K}}_r = \textbf{K}_r$ (ideally). Mathematically, we can imagine the approximator as $\hat{f}(\textbf{x}) = \hat{{\Theta}}^T {\Phi}(\textbf{x}) + \varepsilon_f(\textbf{x})$, where $\hat{\Theta}^T$ denotes the vectorized weights of the neural network, $\Phi(\textbf{x})$ denotes the vector of lagged inputs and outputs, and $\varepsilon_f(\textbf{x})$ is the approximation error.

Recurrent Neural Network Model

Closed Loop Dynamics: The state closed loop dynamics is given by $\dot{\textbf{y}} = \textbf{A}\textbf{y} + \textbf{B} \Lambda (\hat{\textbf{K}}_{y}^{T} \textbf{y} + \hat{\textbf{K}}_r^T \textbf{r} - \varepsilon_f)$, where $\textbf{A}$ and $\Lambda$ are unknown matrices; the sign of $\Lambda$ is known. $\hat{\textbf{K}}_{y}^{T} \text{ and } \hat{\textbf{K}}_r^T$ are adaptation gains to be determined. The generalized error state vector $\textbf{e}(k) = \textbf{y}(k) -\textbf{y}_m(k)$ has dynamics $\dot{\textbf{e}}(k)= \textbf{A}_m\textbf{e}(k) + \textbf{B} \Lambda[\tilde{\textbf{K}}_r^T \textbf{r} +\tilde{\textbf{K}}_y^T \textbf{y} - \varepsilon_f]$, where $\textbf{A}_m$ is Hurwitz and known, and $\textbf{B}$ is known.
$\textbf{y}_m$ is assumed to be a linear model-following model of the form $\dot{\textbf{y}}_m = \textbf{A}_m \textbf{y}_m + \textbf{B} \, \textbf{r}$.

Lyapunov Redesign. $\textbf{Theorem:}$ Given a correct choice of adaptive gains $\hat{\textbf{K}}_y$ and $\hat{\textbf{K}}_r$, the error state vector $\textbf{e}(k)$, with closed loop time derivative $\dot{\textbf{e}}$, is uniformly ultimately bounded, and the state $\textbf{y}$ will converge to a neighborhood of $\textbf{r}$. Given the Lyapunov function $\textbf{V}(\textbf{e}, \tilde{\textbf{K}}_y, \tilde{\textbf{K}}_r) = \textbf{e}^T\textbf{P}\textbf{e}+ \textbf{tr}(\tilde{\textbf{K}}_y^T \Gamma_y ^{-1} \tilde{\textbf{K}}_y |\Lambda|)+ \textbf{tr}(\tilde{\textbf{K}}_r^T \Gamma_r^{-1} \tilde{\textbf{K}}_r |\Lambda|)$, where $\tilde{\textbf{K}}_y = \hat{\textbf{K}}_y - {\textbf{K}}_y$ and $\tilde{\textbf{K}}_r = \hat{\textbf{K}}_r - {\textbf{K}}_r$, and $\textbf{P}$ is the unique symmetric, positive definite (SPD) matrix solution of the algebraic Lyapunov equation $\textbf{P}\textbf{A}_m + \textbf{A}_m^T\textbf{P} = -\textbf{Q}$, we can verify that the adaptation laws are $\dot{\hat{\textbf{K}}}_y = -\Gamma_y \textbf{y} \textbf{e}^T \textbf{P} \, \textbf{B} \text{sgn}(\Lambda) \text{ and } \dot{\hat{\textbf{K}}}_r = -\Gamma_r \textbf{r} \textbf{e}^T \textbf{P} \, \textbf{B} \, \text{sgn}(\Lambda)$, where $\Gamma_y$ and $\Gamma_r$ are fixed SPD matrices of adaptation rates, and $\textbf{tr}(\textbf{A})$ denotes the trace of matrix $\textbf{A}$. We show in the paper that the time-derivative of the Lyapunov function is negative definite outside of the compact set $\chi =\left\{\textbf{e}: \|\textbf{e}\| \le \frac{2\|\textbf{PB}\|\lambda_{high}(\Lambda)\varepsilon_{max}(\textbf{y})}{\lambda_{low}(\textbf{Q})}\right\}.$ Thus, the error $\textbf{e}$ is uniformly ultimately bounded; i.e., $\textbf{e}(t)$ converging to a small neighborhood of the origin as $t \rightarrow \infty$ is sufficient for us.
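The adaptation laws above can be illustrated on a toy scalar plant. The sketch below is purely illustrative (all numerical values are assumptions, not the paper's parameters, and the NN approximation term is omitted): a first-order system with unknown $a$ and $\Lambda$ tracks a stable reference model using the Lyapunov-derived updates $\dot{\hat{K}}_y = -\Gamma_y y e P B\,\mathrm{sgn}(\Lambda)$ and $\dot{\hat{K}}_r = -\Gamma_r r e P B\,\mathrm{sgn}(\Lambda)$:

```python
# Toy scalar MRAC simulation (Euler integration); all values are illustrative.
def simulate(T=20.0, dt=1e-3):
    a, b, lam = 1.0, 1.0, 1.0        # "unknown" unstable plant: y' = a*y + b*lam*u
    am, bm, r = -2.0, 2.0, 1.0       # stable reference model: ym' = am*ym + bm*r
    P = 1.0                          # solves 2*am*P = -Q for Q = 4 (scalar case)
    gy, gr = 10.0, 10.0              # adaptation rates Gamma_y, Gamma_r
    y = ym = ky = kr = 0.0
    for _ in range(int(T / dt)):
        e = y - ym                   # tracking error
        u = ky * y + kr * r          # control law from current gain estimates
        y += dt * (a * y + b * lam * u)
        ym += dt * (am * ym + bm * r)
        ky += dt * (-gy * y * e * P * b)   # sgn(lam) = +1 here
        kr += dt * (-gr * r * e * P * b)
    return e, ky, kr
```

With these numbers the tracking error shrinks toward zero even though the plant is open-loop unstable, which is exactly the uniform ultimate boundedness the theorem promises.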
Results

Future/Ongoing Work: Add a differentiable optimization layer before the neural network output layer to account for valve saturation in the end-to-end training algorithm. Test on varied head shapes and sizes, and in volunteer trials. Decouple control laws along individual axes of actuation.

Stages of Integration: A commercial pillow tested on a mannequin testbed to study controllability along a 1-DoF motion [arXiv:1506.04787, arXiv:1610.01481]. We then developed and integrated a customized adjustable frame with air pillows to dynamically compensate motions [arXiv:1703.03821]. Next phase: a wearable head helmet consisting of compliant soft actuating robots with morphological computational properties to actuate along peripheral axes of motion.
Let $X_1, \ldots, X_{n_X}$ and $Y_1, \ldots, Y_{n_Y}$ be $n_X$ and $n_Y$ iid observations from two independent Bernoulli populations with probabilities of success $p_X$ and $p_Y$. Define the statistics $T_X = \sum_{i=1}^{n_X}X_i$ and $T_Y = \sum_{i=1}^{n_Y}Y_i$. I am testing the hypothesis: $$ H_0 : p_X = p_Y \ \ \ \text{and} \ \ \ H_1 : p_X \neq p_Y $$ Consider the test statistic: $$ T = \dfrac{\hat{p}_X-\hat{p}_Y}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_X}+\frac{1}{n_Y}\right)}} $$ where $\hat{p}_X = \frac{T_X}{n_X}$, $\hat{p}_Y = \frac{T_Y}{n_Y}$, and $\hat{p} = \frac{T_X+T_Y}{n_X+n_Y}$. I would like to derive the asymptotic distribution of $T$ under $H_0$ as both $n_X$ and $n_Y$ go to infinity. My work is as follows: By the asymptotic properties of the MLE: $$ \sqrt{n_X}(\hat{p}_X-p_X) \to_{D}N(0,p_X(1-p_X)) $$ and $$ \sqrt{n_Y}(\hat{p}_Y-p_Y) \to_{D}N(0,p_Y(1-p_Y)) $$ I would like to find the distribution of $\hat{p}_X-\hat{p}_Y$, but am not sure how. In general, two "in distribution" results are not additive. That is, it is generally NOT the case that $X_n+Y_n \to_{D} X+Y$. Does anyone have any ideas? I know in general that under the null: $$ T = \dfrac{\hat{p}_X-\hat{p}_Y}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_X}+\frac{1}{n_Y}\right)}} \to_D N(0,1) $$
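As a sanity check on the claimed $N(0,1)$ limit, one can simulate $T$ under $H_0$ and compare its sample moments to the standard normal. A minimal sketch (the sample sizes, $p$, and replication count are arbitrary choices for illustration):

```python
# Simulate the pooled two-proportion statistic T under H0: p_X = p_Y = p,
# and compare its empirical mean/variance to the N(0,1) limit.
import numpy as np

def simulate_T(n_x=500, n_y=400, p=0.3, reps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    tx = rng.binomial(n_x, p, size=reps)     # T_X for each replication
    ty = rng.binomial(n_y, p, size=reps)     # T_Y for each replication
    px, py = tx / n_x, ty / n_y
    pooled = (tx + ty) / (n_x + n_y)         # pooled estimate \hat p under H0
    se = np.sqrt(pooled * (1 - pooled) * (1 / n_x + 1 / n_y))
    return (px - py) / se
```

With moderate sample sizes the empirical mean is close to $0$ and the empirical variance close to $1$, consistent with the stated limit.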
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y. $$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
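Puzzles 145 and 146 can be checked concretely for finite sets, where a function is an isomorphism in \(\mathbf{Set}\) exactly when it has a two-sided inverse, i.e. is a bijection. A small sketch (the particular sets and maps are made-up examples), representing a function between finite sets as a Python dict:

```python
# Finite functions as dicts: an isomorphism in Set is a function with a
# two-sided inverse, which for finite sets is exactly a bijection.
def compose(g, f):
    return {x: g[f[x]] for x in f}        # g after f

def identity(xs):
    return {x: x for x in xs}

def inverse(f):
    inv = {y: x for x, y in f.items()}    # swap keys and values
    if len(inv) != len(f):
        raise ValueError("not injective, hence not an isomorphism")
    return inv

f = {'a': 1, 'b': 2, 'c': 3}              # a bijection from {a,b,c} to {1,2,3}
g = inverse(f)                            # its two-sided inverse
```

Composing `g` after `f` gives the identity on `{a,b,c}`, and `f` after `g` gives the identity on `{1,2,3}` — the two equations in the definition of an inverse.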
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
The Open Neighbourhood Definition of Continuous Maps on Topological Spaces Recall from the Continuous Maps on Topological Spaces page that if $X$ and $Y$ are topological spaces then a function $f : X \to Y$ is said to be Continuous at a point $a \in X$ if there exist local bases $\mathcal B_a$ of $a$ and $\mathcal B_{f(a)}$ of $f(a)$ such that for every $B \in \mathcal B_{f(a)}$ there exists a $B' \in \mathcal B_a$ such that $f(B') \subseteq B$. Furthermore, we said that $f$ is continuous on $X$ if $f$ is continuous at every point $a \in X$. We will now look at an alternative but similar definition of a map $f : X \to Y$ being continuous at a point $a \in X$. Theorem 1: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$. Then $f$ is continuous at $a \in X$ if and only if for every open neighbourhood $V$ of $f(a)$ there exists an open neighbourhood $U$ of $a$ such that $f(U) \subseteq V$. Proof: $\Rightarrow$ Let $f : X \to Y$ and suppose that $f$ is continuous at $a \in X$. Then there exist local bases $\mathcal B_a$ of $a$ and $\mathcal B_{f(a)}$ of $f(a)$ such that for all $B \in \mathcal B_{f(a)}$ there exists a $B' \in \mathcal B_a$ with $f(B') \subseteq B$. Now let $V$ be any open neighbourhood of $f(a)$. Since $\mathcal B_{f(a)}$ is a local basis of $f(a)$, by definition there exists $B \in \mathcal B_{f(a)}$ such that $B \subseteq V$ $(**)$. But $B$ itself is an open neighbourhood of $f(a)$, and so since $f$ is continuous at $a$ and $B \in \mathcal B_{f(a)}$, there exists a $B' \in \mathcal B_a$ such that $f(B') \subseteq B$ $(***)$. But $B' \in \mathcal B_a$ is an open neighbourhood of $a$ $(*)$, so set $U = B'$. Therefore from $(*)$, $(**)$, and $(***)$ we have that $f(U) = f(B') \subseteq B \subseteq V$. Hence, for every open neighbourhood $V$ of $f(a)$ there exists an open neighbourhood $U$ of $a$ such that $f(U) \subseteq V$. $\Leftarrow$ Suppose now that for every open neighbourhood $V$ of $f(a)$ there exists an open neighbourhood $U$ of $a$ such that $f(U) \subseteq V$.
To show that $f$ is continuous at $a$ we must show that there exist local bases $\mathcal B_{f(a)}$ of $f(a)$ and $\mathcal B_a$ of $a$ such that for all $B \in \mathcal B_{f(a)}$ there exists a $B' \in \mathcal B_a$ such that $f(B') \subseteq B$. Consider the following (rather trivial) local bases of $a$ and $f(a)$: let $\mathcal B_a$ be the collection of all open neighbourhoods of $a$ and let $\mathcal B_{f(a)}$ be the collection of all open neighbourhoods of $f(a)$. Then for every $V = B \in \mathcal B_{f(a)}$ (open neighbourhood of $f(a)$) we are given that there exists an open neighbourhood $U = B' \in \mathcal B_a$ such that $f(B') = f(U) \subseteq V = B$. Therefore $f$ is continuous at $a$. $\blacksquare$ From Theorem 1 above, we see that $f : X \to Y$ is continuous on all of $X$ if and only if for all $a \in X$ we have that for all open neighbourhoods $V$ of $f(a)$ there exists an open neighbourhood $U$ of $a$ such that $f(U) \subseteq V$.
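Theorem 1's criterion is concrete enough to check mechanically on finite spaces. Below is a minimal sketch (the example spaces and the function name are invented for illustration) that tests, for each open $V$ containing $f(a)$, whether some open $U$ containing $a$ satisfies $f(U) \subseteq V$:

```python
def is_continuous_at(f, a, opens_X, opens_Y):
    """Theorem 1 criterion: for every open V containing f(a) there must be
    an open U containing a with f(U) contained in V."""
    for V in opens_Y:
        if f[a] not in V:
            continue
        if not any(a in U and all(f[x] in V for x in U) for U in opens_X):
            return False
    return True

# Example data: X = {1, 2} with the Sierpinski-style topology {∅, {1}, {1,2}},
# Y = {1, 2} with the discrete topology.
opens_X = [frozenset(), frozenset({1}), frozenset({1, 2})]
opens_Y = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
f = {1: 1, 2: 2}  # the identity map on the underlying sets

print(is_continuous_at(f, 1, opens_X, opens_Y))  # True: U = {1} works for every V
print(is_continuous_at(f, 2, opens_X, opens_Y))  # False: no open U containing 2 maps into {2}
```

The second call fails precisely because the only open set containing 2 is $\{1, 2\}$, whose image is not contained in the open neighbourhood $\{2\}$ of $f(2)$.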
When I analyzed a data set with two categories, I used a dummy variable $z=1$ for category 1, and 0 otherwise, and added the extra term $\beta z$ to the regression model. Suppose the least squares estimate for $\beta$ is $b$. I tried to calculate some prediction intervals for the two categories. For category 1, since $z=1$, I needed to count the variance of $b$, but for category 2, since $z=0$, I did not count this variance. It then appeared that the variance for category 1 includes an additional variance because I happened to code it as $z=1$. I am new to econometrics. Could anyone please help with this puzzle? Am I missing anything here? That is natural and you are not missing anything. Let $y=\alpha + \beta z + u$. Your prediction of $y$ given $z=1$ is $\hat{y} = a + b$, where $a$ and $b$ are the OLS estimates. The prediction error ($y - \hat{y}$) for $z=1$ is, thus, $(\alpha - a) + (\beta-b) + u$, which involves $b$. On the other hand, for $z=0$, the prediction of $y$ is $a$ and the prediction error is $(\alpha-a) + u$, which has nothing to do with $b$. As you said, the former depends on $b$, while the latter does not. (Perhaps it would help to remember that the value of $z$ is given for the prediction, and thus the prediction intervals depend on the value of $z$.) (Calculation of the prediction intervals:) You can calculate the prediction intervals using the formula $(a+b) \pm se((\alpha-a)+(\beta-b)+u) \cdot \textit{critical value}$ for $z=1$, and the formula $a \pm se((\alpha-a)+u) \cdot \textit{critical value}$ for $z=0$, which is fine. Note that these two prediction intervals can also be written as $(a+b) \pm se(a+b+u) \cdot cv$ and $a\pm se(a+u) \cdot cv$, respectively, because $\alpha$ and $\beta$ are constant (nonrandom). How to estimate the standard errors can be found in Wooldridge's textbook (the "Prediction and Residual Analysis" section; 6.4 in 5ed).
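The formulas above can be checked numerically. The sketch below (simulated data; all numbers illustrative) computes the standard error of the prediction error at $z=1$ and $z=0$ from the usual OLS formula $s^2\,(1 + x_0'(X'X)^{-1}x_0)$. Note that although the $z=1$ error involves $b$, $\mathrm{Var}(a+b)$ equals the variance of the category-1 sample mean, so with equal group sizes the two interval widths coincide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two categories, dummy z, model y = alpha + beta*z + u
n0, n1 = 40, 40
z = np.r_[np.zeros(n0), np.ones(n1)]
y = 1.0 + 0.5 * z + rng.normal(scale=0.3, size=n0 + n1)

X = np.column_stack([np.ones_like(z), z])   # design matrix [1, z]
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y                   # OLS estimates (a, b)
resid = y - X @ b_hat
s2 = resid @ resid / (len(y) - 2)           # error variance estimate

def pred_se(x0):
    """se of the prediction error y - y_hat at covariate row x0:
    sqrt(s2 * (1 + x0' (X'X)^-1 x0))."""
    return np.sqrt(s2 * (1.0 + x0 @ XtX_inv @ x0))

se1 = pred_se(np.array([1.0, 1.0]))  # z = 1: involves Var(a + b)
se0 = pred_se(np.array([1.0, 0.0]))  # z = 0: involves Var(a) only
print(se1, se0)  # equal group sizes -> equal widths (up to float error)
```

With unequal group sizes the widths would differ through $1/n_1$ versus $1/n_0$, not through the arbitrary choice of which category is coded $z=1$.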
The Gait that closes the Gap The most commonly used gaits of human locomotion are walking and running. In walking, the important characteristics are that one or both legs always have ground contact, and that the center of mass is lifted during the single leg stance phase. In running, true flight phases exist and the center of mass reaches a minimum during the stance phase. When changing from one gait to the other, an abrupt switch is observed; there may be no smooth transition over many steps. The bipedal spring-mass model can reproduce both gaits, but significant differences in velocity, respectively system energy, were observed. However, these observations had a limited view because the simulation results were restricted to self-stable gait solutions. A more general investigation of model predictions should focus on periodic gait solutions and could propose additional strategies for stabilizing the identified gait patterns. In order to investigate periodic gait patterns for analyzing walking and running, a common platform for gait simulations is required. As mentioned, the bipedal spring-mass model is able to show both gaits, and using the methodology of Poincaré maps, periodic solutions can be identified. Applying Poincaré maps requires the definition of start and stop conditions that exist in both gaits alike. A useful start and stop event was established at the Locomotion Lab Jena, namely the instant of Vertical Leg Orientation (VLO). The details of VLO are explained in another article. Does there still exist a gap between walking and running when focusing on periodic gait solutions? Classification of the gaits. R: running, W2, W3: walking, GR: grounded running. Previously, it was known that walking operates at low velocities only. When increasing the velocity, respectively the energy, the body will lift off at some point. For this reason, the following investigation focuses on gaits at low velocities.
Analyses on running revealed that it is easy to find running at low speeds. It depends mainly on the setting of the leg's angle of attack $\alpha_0$ whether running is possible or not. With our novel map for gait solutions, the system variables at the instant of VLO are shown. Here, we focus on symmetric gait patterns, which means that the first half of the stance phase is symmetrically identical to the second half. The system state is then reduced to a single independent variable, i.e. the height of the center of mass $y_0$ at the event of VLO. In the figure, two system parameters, i.e. the leg's angle of attack $\alpha_0$ and the system energy (thick lines), were varied. The simulation results for typical walking (green area, W2) reveal a height of the center of mass at midstance $y_0$ that is always above the height at touch down $y_{TD}$. The difference between them can be very small. In contrast, the center of mass is always lowered in running (blue area, R), with $y_0 < y_{TD}$. Another difference regarding the angle of attack $\alpha_0$ is found in simulations with the same system energy. In conclusion, the gap between walking and running clearly exists. Is there a chance to reduce the gap between the typical gaits of walking and running? The model simulations contain one condition not explained so far: it is implied that the center of mass moves downward at the event of the leg's touch down. It seems curious that a leg with a fixed touch-down angle would approach the ground from a lower level. From now on we reject this assumption and allow the center of mass to move upward while the leg is touching the ground. What happens with the leg? In fact, the simulated leg is not defined when it does not have ground contact. We assume, of course, that the leg will lengthen or rotate before hitting the ground.
With this modified assumption, that the center of mass may be lifted during touch down, we can identify a novel gait (orange area) which connects walking and running. In this gait, we observe that the center of mass is clearly lowered during the single leg stance phase, as usual in running. However, there exists a distinct double support phase, which is typical for walking. This gait is called Grounded Running. Motion of the center of mass and the ground reaction forces with distinct double support phases of the legs. With the spring-mass model, a novel gait is identified that could be classified as a running gait where $y_0 < y_{TD}$, although there is always ground contact, i.e. no flight phase exists. Based on simulations this is a new gait, but it has already been observed in the locomotion of birds, where it is called Grounded Running. The simulations have shown that grounded running exists at low velocities only. In contrast, the same simulation study revealed that walking can be faster than previously assumed. Featured Paper J. Rummel, Y. Blum, A. Seyfarth. From walking to running. Autonome Mobile Systeme 2009, R. Dillmann, J. Beyerer, C. Stiller, J.M. Zöllner, T. Gindele (Eds.) Springer: 89-96, 2009. DOI: 10.1007/978-3-642-10284-4_12 PDF A general system state for gait model simulations In walking and running, the motion of the centre of mass can be understood as a kind of oscillation; however, this oscillation is sometimes hardly comparable with a sine function. For investigations of gaits, a special kind of mapping, so-called Poincaré maps, is used. In a Poincaré map, the system state at the beginning of an oscillation is compared with the state at its end, which removes time from the dataset. If the system state variables are equal, a periodic motion is revealed. Poincaré originally used the period duration to fix the start and end events (in order to analyse the motion of planets).
When simulating gaits with legged models, the period duration is not known prior to the start of the simulation. In gait simulations, scientists therefore use physical system states to define the start and end events of a simulation. Two conditions are very popular in the biomechanics field, i.e. the touch down of a leg and the apex of the centre of mass. The touch down event is often used for simulating Passive Dynamic Walkers with rigid legs. In models with compliant legs, the apex is mostly used to describe the start and end of a gait cycle. When the touch down event is used, the conditions of the counter leg need to be fixed a priori. Is the counter leg on the ground? Does it lift off at the same moment, or does it swing without ground contact? Similar a priori definitions are required when using the apex. If the leg is lifted at the instant of apex, the gait of running is selected. In the walking gait, the active leg must have ground contact during apex. By definition, the apex is the highest point of the centre of mass during locomotion. However, the stop condition of a simulation is triggered when one maximum is reached, which could be just one of possibly several peaks. In some simulations with a walking model, more than one maximum was indeed identified. Hence, the apex or the maximum is not necessarily a unique event in walking, and the apex return map is incomplete or maybe incorrect. The instant of Vertical Leg Orientation (VLO) with a simulated walking model. At the Locomotion Lab in Jena, another system state event for Poincaré maps was established, which allows both gaits, walking and running, to be investigated with the same simulation and mapping. This event is called Vertical Leg Orientation (VLO). The definition is that the active leg has ground contact and is oriented vertically, i.e. the hip joint is vertically above the foot point. While the active leg is on the ground, the counter leg is lifted. This system state exists in both gaits equally.
Well-defined motion events for Poincaré maps ensure a reduction of independent system variables in order to simplify the analysis. The system variables of the spring-mass model are the positions $x$ and $y$ of the centre of mass, which is located at the hip, and the velocities $v_x$ and $v_y$ of the centre of mass. At the instant of Vertical Leg Orientation, the system state can be reduced to two independent variables, i.e. the height $y$ and the angle of the velocity $\theta = \arctan(v_y/v_x)$. Parameter map where symmetric walking patterns were found. The novel method for gait analysis using VLO was applied to walking simulations first. There is a clear distinction between symmetric and asymmetric gait patterns. In symmetric walking, the velocity angle $\theta$ is always zero, which means the first half of the stance phase is symmetrically identical to the second half. In asymmetric walking patterns, $\theta \neq 0$. Asymmetric walking patterns are not worse than symmetric patterns, as self-stable solutions exist there as well. A surprising finding is that in symmetric walking solutions, the centre of mass height $y$ at the event of VLO is always above the height at touch down $y_{TD}$. Why is the novel event for Poincaré maps called VLO instead of "mid-stance"? The term mid-stance is already used in the scientific literature but addresses various conditions. Mid-stance is, for instance, the time-based centre of the stance phase, a period of time during the stance phase, or the event when the ground reaction force is perpendicular to the ground. In order to clearly define the event of vertical orientation and differentiate it from a less explicit term, the name Vertical Leg Orientation was established. Simulation results on walking and running combined in a single map using VLO will be presented in another article. Featured Paper J. Rummel, Y. Blum, H.M. Maus, C. Rode, A. Seyfarth. Stable and robust walking with compliant legs.
IEEE International Conference on Robotics and Automation, May 3-8, Anchorage, Alaska: 5250-5255, 2010. DOI: 10.1109/ROBOT.2010.5509500 PDF
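To make the VLO-based state reduction concrete, here is a minimal sketch (function names and all parameter values are mine, not taken from the featured papers) of the planar spring-mass model's state at a VLO event: with the foot directly below the hip, the state $(x, y, v_x, v_y)$ reduces to the height $y$ and the velocity angle $\theta = \arctan(v_y/v_x)$, and the leg spring force is directed purely vertically.

```python
import math

def vlo_state(x, y, vx, vy):
    """Reduce the full spring-mass state at a VLO event (foot vertically
    below the hip) to the two independent variables (y, theta)."""
    theta = math.atan2(vy, vx)  # velocity angle relative to horizontal
    return y, theta

def leg_spring_force(hip, foot, k, l0):
    """Spring-mass leg: force magnitude k*(l0 - l) directed along the leg
    axis from foot to hip (positive = pushing the hip away)."""
    dx, dy = hip[0] - foot[0], hip[1] - foot[1]
    l = math.hypot(dx, dy)
    f = k * (l0 - l)
    return f * dx / l, f * dy / l

# Illustrative numbers: hip at VLO above the foot, compressed leg of rest length 1 m
y0, theta0 = vlo_state(x=0.0, y=0.97, vx=1.2, vy=0.0)
fx, fy = leg_spring_force(hip=(0.0, 0.97), foot=(0.0, 0.0), k=20000.0, l0=1.0)
print(theta0, (fx, fy))  # symmetric pattern: theta = 0; force purely vertical
```

In a symmetric gait solution the velocity angle at VLO is zero, which is exactly the condition used above to reduce the Poincaré section to the single variable $y_0$.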
A $T_3$ Space That Is NOT a $T_4$ Space Recall from the T4 (Normal Hausdorff) Topological Spaces page that a topological space $X$ is said to be normal if for all disjoint closed sets $E$ and $F$ there exist open sets $U$ and $V$ with $E \subseteq U$, $F \subseteq V$, and $U \cap V = \emptyset$. We then said that a topological space $X$ is a $T_4$ space if it is both a normal space and a $T_1$ space. We saw that every $T_4$ space is a $T_3$ space. We will now see that there exist $T_3$ spaces that are not $T_4$ spaces. Recall from The Lower and Upper Limit Topologies on the Real Numbers page that the set $\mathbb{R}$ with the topology $\tau$ generated by sets of the form $[a, b)$ where $a, b \in \mathbb{R}$, $a < b$ is called the lower limit topology (or the Sorgenfrey line). We claim that $\mathbb{R}$ with this topology is $T_3$. To show this, let $x \in \mathbb{R}$ and consider any closed set $F$ in $\mathbb{R}$ with the lower limit topology such that $x \not\in F$. Then $\mathbb{R} \setminus F$ is an open set containing $x$. So there exists a $b \in \mathbb{R}$ such that $x \in [x, b) \subseteq \mathbb{R} \setminus F$, where, of course, $[x, b)$ is an open set. Let $U = [x, b)$. Then $U^c = (-\infty, x) \cup [b, \infty)$ is also an open set in $\mathbb{R}$ with the lower limit topology and it contains $F$. Let $V = (-\infty, x) \cup [b, \infty)$. Then, for every point $x \in \mathbb{R}$ and for all closed sets $F$ in $\mathbb{R}$ not containing $x$ there exist open sets $U$ and $V$ that separate $\{ x \}$ and $F$, so $\mathbb{R}$ with the lower limit topology is a $T_3$ space. Now we know that the product of two $T_3$ spaces is $T_3$, so $\mathbb{R} \times \mathbb{R} = \mathbb{R}^2$, where each copy of $\mathbb{R}$ has the lower limit topology, is also a $T_3$ space. We claim that this product is not a $T_4$ space though. Consider the following set in $\mathbb{R}^2$ with this topology: the antidiagonal $L = \{ (x, -x) : x \in \mathbb{R} \}$. This set is closed in $\mathbb{R}^2$ with respect to this topology.
Moreover, if we consider this set as a subspace of $\mathbb{R}^2$ with this topology, then every singleton set is open, which implies that this line carries the discrete topology, i.e., every subset of points on this line is both open and closed in the subspace. In particular, the following sets are closed in the subspace of this line: $E^* = \{ (x, -x) : x \in \mathbb{Q} \}$ and $F^* = \{ (x, -x) : x \not\in \mathbb{Q} \}$. Since the line itself is closed in $\mathbb{R}^2$ and $E^*$ and $F^*$ are closed in this subspace, $E = E^*$ and $F = F^*$ are disjoint closed sets in $\mathbb{R}^2$. However, it can be shown (by a Baire category or cardinality argument) that there exist no open sets $U$ and $V$ separating $E$ and $F$: any open sets containing them must overlap. So $\mathbb{R}^2$ is not normal with this topology, and hence $\mathbb{R}^2$ with this topology is a $T_3$ space that is not a $T_4$ space.
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y. $$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
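The claim that the only isomorphisms in \(\mathbf{3}\) are the identities can be checked by brute force. Here is a small sketch (the path representation is invented for illustration) that enumerates all six morphisms as paths and searches for two-sided inverses:

```python
# Morphisms of the free category 3 as triples (source, target, edge path)
morphisms = [
    ("v1", "v1", ()), ("v2", "v2", ()), ("v3", "v3", ()),   # identities
    ("v1", "v2", ("f1",)), ("v2", "v3", ("f2",)),
    ("v1", "v3", ("f1", "f2")),                             # f2 . f1
]

def compose(g, f):
    """g after f: defined only when target(f) == source(g); paths concatenate."""
    if f[1] != g[0]:
        return None
    return (f[0], g[1], f[2] + g[2])

def is_identity(m):
    return m[0] == m[1] and m[2] == ()

def inverse_exists(f):
    for g in morphisms:
        gf, fg = compose(g, f), compose(f, g)
        if gf is not None and fg is not None and is_identity(gf) and is_identity(fg):
            return True
    return False

isos = [m for m in morphisms if inverse_exists(m)]
print(isos)  # only the three identity morphisms survive
```

Since no edge points "backwards", no non-identity path can be undone, exactly as argued above.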
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
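A concrete way to internalize these puzzles is to check naturality squares computationally. The sketch below (all data invented for illustration) takes the free category on a single edge \(f : A \to B\), two functors \(F, G\) into \(\mathbf{Set}\) given by finite sets and functions, and components \(\alpha_A, \alpha_B\), then verifies that each component is a bijection and that the naturality square \(\alpha_B \circ F(f) = G(f) \circ \alpha_A\) commutes:

```python
# Free category on one edge f : A -> B; a functor into Set is two finite
# sets plus a function for f. All example data is invented.
F = {"A": {1, 2, 3}, "B": {10, 20}, "f": {1: 10, 2: 10, 3: 20}}
G = {"A": {"x", "y", "z"}, "B": {"p", "q"}, "f": {"x": "p", "y": "p", "z": "q"}}

# Components of a candidate natural transformation alpha : F => G
alpha = {"A": {1: "x", 2: "y", 3: "z"}, "B": {10: "p", 20: "q"}}

def is_bijection(m, dom, cod):
    return set(m) == dom and set(m.values()) == cod and len(set(m.values())) == len(m)

def naturality_square_commutes(F, G, alpha):
    """Check alpha_B(F(f)(a)) == G(f)(alpha_A(a)) for every a in F(A)."""
    return all(alpha["B"][F["f"][a]] == G["f"][alpha["A"][a]] for a in F["A"])

print(is_bijection(alpha["A"], F["A"], G["A"]))  # True
print(is_bijection(alpha["B"], F["B"], G["B"]))  # True
print(naturality_square_commutes(F, G, alpha))   # True: alpha is a natural isomorphism
```

Read as databases, the two tables hold differently named rows, but the bijective components plus the commuting square mean every query against one database translates exactly into a query against the other.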
Given a topological space ($X$, $\mathcal{T}$) and the following definitions $\partial A$ := { $x$ | every neighbourhood of $x$ contains points from $A$ and from $X \setminus A$ } $\overline{A}$ := $A \cup \partial A$ $A^{\circ}$ := $A \setminus \partial A$. Now I want to prove that$$ \overline{A \cap B} \subseteq \overline{A} \cap \overline{B}$$ Proof: With set theory$$ \overline{A} \cap \overline{B} = (A \cup \partial A) \cap (B \cup \partial B) = (A \cap B) \cup (\partial A \cap B) \cup (A \cap \partial B) \cup (\partial A \cap \partial B)$$and $\overline{A \cap B} = (A \cap B) \cup \partial (A \cap B)$. So let $x \in \overline{A \cap B}$. Now I distinguish two cases. (i) $x \in (A \cap B)$, then it is obvious that $x \in \overline{A} \cap \overline{B}$ by the equations above. (ii) $x \in \partial (A \cap B)$; here I have no idea how to proceed, because I do not know how to decompose the expression $\partial (A \cap B)$. Do you have any suggestions for me?
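One way to proceed with case (ii), directly from the neighbourhood definition of the boundary and avoiding any decomposition of $\partial(A \cap B)$ (a hint, not the full write-up):

```latex
% Hint for case (ii): show directly that x lies in the closure of A and of B.
Let $x \in \partial(A \cap B)$. Every neighbourhood $N$ of $x$ contains a point
of $A \cap B$, hence (since $A \cap B \subseteq A$) a point of $A$.
If $x \in A$, then $x \in \overline{A}$ immediately. If $x \notin A$, then every
neighbourhood of $x$ contains both $x \in X \setminus A$ and a point of $A$,
so $x \in \partial A \subseteq \overline{A}$.
The same argument with $A \cap B \subseteq B$ gives $x \in \overline{B}$,
hence $x \in \overline{A} \cap \overline{B}$.
```

The key observation is that monotonicity of the closure ($S \subseteq T \Rightarrow \overline{S} \subseteq \overline{T}$) is all that case (ii) really needs.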
OK, question edited in response to comment from @user158565 for those not in possession of the Jackman book. I tried to abstract the relevant information below. I'm struggling with Simon Jackman's example 2.12 on p82 of his 2009 book Bayesian Analysis for the Social Sciences, specifically with what happens to the value of $n$ (from Proposition 2.4 on p81) in the calculation of $\theta^*$ and $\sigma^{2*}$. For example, in the calculation of $\theta^*$ below there are two terms in the left parentheses, $\frac{.491}{.000484}$ and $\frac{.548}{.000486}$, the second of which corresponds to $\bar{y}\frac{n}{\sigma^2}$ in Proposition 2.4 on p81. Since $\bar{y}=.548$ and $\sigma^2=.000486$, this means $n=1$. However, Jackman gives $n=\alpha+\beta$ where $\alpha=279$ and $\beta=230$. If anyone could help me understand what's going on here I'd appreciate it. Some more details: The example is about voting intentions for one of two candidates with $\theta$, the probability of voting for one, given by $\theta \sim Beta(\alpha,\beta)$. The exercise is to approximate this Beta distribution with a Normal distribution $\theta \sim N(\tilde{\theta},\sigma^2)$ with $$\sigma^2=V(\theta; \alpha, \beta)=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$$ known and $$\tilde{\theta}=E(\theta; \alpha,\beta)=\frac{\alpha}{\alpha+\beta}$$ to be estimated, given a prior $\theta\sim N(.491,.000484)$ and data $\alpha=279$ and $\beta=230$ giving $\tilde{\theta}=.548$ and $\sigma^2=.000486$. The text now reads "the poll result $\tilde{\theta}$ is observed data, generated by a sampling process governed by $\theta$ and the poll's sample size (n.b. $n=\alpha+\beta$). Specifically, using the normal approximation, $\tilde{\theta} \sim N(\theta,\sigma^2)$ or $.548 \sim N(\theta, .000486)$. We are now in a position to use the result in Proposition 2.4".
Proposition 2.4 gives: $$\mu \vert\mathbf{y} \sim N \left(\frac{\mu_0\sigma_0^{-2}+\bar{y}\frac{n}{\sigma^2}}{\sigma_0^{-2}+\frac{n}{\sigma^2}},\left(\sigma_0^{-2}+\frac{n}{\sigma^2}\right)^{-1}\right)$$ where the subscript 0 indicates prior information. The example continues giving the posterior estimate for $\theta$: $$\theta^*=\left(\frac{.491}{.000484}+\frac{.548}{.000486}\right)\left(\frac{1}{.000484}+\frac{1}{.000486}\right)^{-1}=.519$$
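The precision-weighted update can be reproduced numerically. The sketch below (variable names mine) plugs the prior $(.491, .000484)$ and the approximate likelihood $(.548, .000486)$ into Proposition 2.4 with $n = 1$; one way to read this is that $\sigma^2$ is already the variance of the poll mean $\tilde{\theta}$, so the poll's sample size enters through $\sigma^2$ and the update then sees a single observation of $\tilde{\theta}$:

```python
# Normal-normal update (Proposition 2.4) with a single observation of the
# poll estimate theta_tilde, whose variance already encodes the sample size.
mu0, var0 = 0.491, 0.000484      # prior N(mu0, var0)
ybar, var = 0.548, 0.000486      # observed theta_tilde ~ N(theta, var), n = 1

prec0, prec = 1.0 / var0, 1.0 / var
theta_star = (mu0 * prec0 + ybar * prec) / (prec0 + prec)
var_star = 1.0 / (prec0 + prec)

print(round(theta_star, 3))  # 0.519, matching the theta* computed above
print(var_star)              # posterior variance sigma^2*
```

The posterior precision is the sum of the two precisions, so the posterior variance is smaller than either input variance, as expected.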
Ambient Isotopic Embeddings on Topological Spaces Recall from the Isotopic Embeddings on Topological Spaces page that if $X$ and $Y$ are topological spaces and $f, g : X \to Y$ are embeddings, then $f$ is said to be isotopic to $g$ if there exists a continuous function $H : X \times I \to Y$ such that $H_t : X \to Y$ is an embedding for every $t \in I$, $H_0 = f$, and $H_1 = g$. We also proved that isotopy is an equivalence relation on the set of embeddings from $X$ to $Y$. We will now describe another type of isotopy. Definition: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be embeddings. Then $f$ is Ambient Isotopic to $g$ in $Y$ if there exists a continuous function $H : Y \times I \to Y$ such that: 1) $H_t : Y \to Y$ is a homeomorphism for every $t \in I$. 2) $H_0 = \mathrm{id}_Y$. 3) $H_1 \circ f = g$. If such a function $H$ exists, then $H$ is said to be an Ambient Isotopy from $f$ to $g$. Definition: Let $A$ and $B$ be subspaces of a topological space $Y$. Then $A$ and $B$ are said to be Ambient Isotopic within $Y$ if there exists a continuous function $H : Y \times I \to Y$ such that: 1) $H_t : Y \to Y$ is a homeomorphism for every $t \in I$. 2) $H_0 = \mathrm{id}_Y$. 3) $H_1(A) = B$. For example, consider the spaces $X = \{ a \}$ and $Y = [0, 1]$. Let $f : \{ a \} \to [0, 1]$ be defined by $f(a) = \frac{1}{3}$ and let $g : \{ a \} \to [0, 1]$ be defined by $g(a) = \frac{2}{3}$. We will show that $f$ is ambient isotopic to $g$ within $[0, 1]$. Define a function $H : [0, 1] \times [0, 1] \to [0, 1]$ by:(1) Then $H$ is continuous by the gluing lemma. Furthermore, $H_0(y) = H(y, 0) = y$, i.e. $H_0 = \mathrm{id}_Y$, and $H_1(f(a)) = H(f(a), 1) = 1 - f(a) = g(a)$, so $H_1 \circ f = g$. Theorem 1: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be embeddings. If $f$ is ambient isotopic to $g$ in $Y$ then $f$ is isotopic to $g$.
Proof: Since $f$ is ambient isotopic to $g$ in $Y$ there exists a continuous function $H : Y \times I \to Y$ such that $H_t$ is a homeomorphism for every $t \in I$, $H_0 = \mathrm{id}_Y$, and $H_1 \circ f = g$. Consider the function $K : X \times I \to Y$ defined by: $$K(x, t) = H(f(x), t)$$ Then $K$ is a continuous function since $f$ and $H$ are continuous functions. Furthermore, $K_t = H_t \circ f$ is an embedding for every $t \in I$, being the composition of an embedding with a homeomorphism. Lastly, $K_0(x) = H(f(x), 0) = H_0(f(x)) = f(x)$, i.e., $K_0 = f$, and $K_1(x) = H(f(x), 1) = H_1 (f(x)) = g(x)$, so $K_1 = g$. Therefore $f$ is isotopic to $g$. $\blacksquare$
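The example above can be checked numerically. The particular piecewise-linear formula below is an assumption on my part — any continuous family of homeomorphisms of $[0,1]$ with $H_0 = \mathrm{id}$ and $H_1(1/3) = 2/3$ would serve as the ambient isotopy:

```python
# A piecewise-linear candidate for the ambient isotopy H : [0,1] x [0,1] -> [0,1]
# carrying f(a) = 1/3 to g(a) = 2/3: at time t, [0, 1/3] is stretched linearly
# onto [0, (1+t)/3] and [1/3, 1] is compressed linearly onto [(1+t)/3, 1].
def H(y, t):
    knot = (1 + t) / 3                                # image of 1/3 at time t
    if y <= 1/3:
        return 3 * knot * y                           # linear on [0, 1/3]
    return knot + (y - 1/3) * (1 - knot) / (2/3)      # linear on [1/3, 1]

assert abs(H(0.5, 0) - 0.5) < 1e-12        # H_0 is the identity (spot check)
assert abs(H(1/3, 1) - 2/3) < 1e-12        # H_1(f(a)) = 2/3 = g(a)

# each H_t is strictly increasing with H_t(0) = 0, H_t(1) = 1,
# hence a homeomorphism of [0, 1]
ys = [i / 100 for i in range(101)]
for t in [0, 0.25, 0.5, 0.75, 1]:
    vals = [H(y, t) for y in ys]
    assert all(a < b for a, b in zip(vals, vals[1:]))
    assert abs(vals[0]) < 1e-12 and abs(vals[-1] - 1) < 1e-12
```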
Consider $$\frac{a}{a}\pmod a,\ \ \ a\in\mathbb Z\setminus\{-1,0,1\}$$ There are two cases: $1)$ $\frac{a}{a}$ is notation for the real number $1$. Then the expression is equivalent to $1$ modulo $a$. E.g., see this, where the expression on the LHS would not even exist if what were meant by $99$ in the denominator was a modular inverse instead of regular integer division. $2)$ $\frac{a}{a}$ is notation for $ax$, where $x$ is in the class of solutions to $ax\equiv 1\pmod{a}$. Since no such $x$ exists, $\frac{a}{a}$ does not exist either in this case. So $\frac{a}{a}\bmod a$ can be thought of as either equivalent to $1$ modulo $a$ or as not existing at all. Now, $aa^{-1}$ is possibly more commonly used than $\frac{a}{a}$ in modular arithmetic, and $\frac{a}{a}$ is more commonly used than $aa^{-1}$ for division in the integers, but surely not always. Such notation was probably created because of its similarity to division in the integers, but I think a less ambiguous notation should have been created instead. Also see this. There's a comment there that points out my problem. Consider: $$\frac{ac}{bc}\pmod m,$$ where $$\gcd(b,m)=1,\ \ \ \gcd(c,m)>1,\ \ \ m\in\mathbb Z\setminus\{-1,0,1\},\ \ \ c\in\mathbb Z\setminus\{0\},\ \ \ b\in\mathbb Z\setminus\{0\},\ \ \ a\in\mathbb Z$$ This can either be equivalent to $\frac{a}{b}\pmod{m}$ if the fractional notation denotes integer division, or it could not exist at all if the fractional notation denotes modular inverses.
Given that: $$\lim_{x\to0} \frac{f(x)}{x}=0$$ How can I prove that: $$\lim_{x\to0} f(x)=0$$ ? Would L'Hospital's Rule be applicable? Hint: $f(x)=\frac{f(x)}{x}\cdot x$ and the product rule for limits. If by contradiction $f(x)\to a\neq0,\;\infty$, your limit $\lim f(x)/x$ would be $\infty$. How can you obtain a contradiction if $\lim f(x)$ doesn't exist? By definition, if $\lim_{x \to 0} f(x)/x = 0$ then for all $\epsilon>0$ there exist small enough $x$ such that $$\left|\frac{f(x)}{x}\right|<\epsilon$$ therefore, since $x\ne0$, $$|f(x)|<\epsilon\cdot |x|$$ and when $x\in (0, 1)\cup(-1, 0)$ we get $$|f(x)|<\epsilon$$ which implies that $\lim_{x \to 0} f(x) = 0$. Because $\lim_{x\to0} \frac{f(x)}{x}=0$, $f(x)$ tends to $0$ faster than $x$, so for $x$ sufficiently close to $0$ we have $|f(x)|\le |x|$. Therefore: $$0\le|f(x)|\le |x| $$ Using the squeeze theorem, and because $\lim_{x\to0} |x|=0$, we get $\lim_{x\to0} f(x)=0$. By the definition of a limit, $|f(x)/x| < \epsilon$ for every $\epsilon$ if $|x|$ is small enough. Take $\epsilon = 1$, so $|f(x)/x| < 1$ if $|x| < \epsilon_1$, i.e., $|f(x)| < |x|$ if $|x| < \epsilon_1$. Take the definition of the limit again: $|f(x)| < \epsilon$ if you take $|x| < \min(\epsilon, \epsilon_1)$. Obviously you don't need that the limit of $f(x)/x$ is $0$. If the limit is $c$, then $|f(x)/x| < |c|+1$ for small $x$, so $|f(x)| < |x|\,(|c| + 1)$ for small $x$, and $|f(x)| < \epsilon$ if $|x| < \epsilon / (|c| + 1)$. This is not a fancy proof. It is a grinding one from the epsilon-delta definition of limit suitable for an introductory calculus class. The core of the idea is that if $f(x)$ is bounded away from $0$ as $x$ goes to zero, then $f(x)/x$ is also bounded away from zero, since for small $x$, dividing by $x$ makes things further from zero. And things bounded away from zero don't have a limit of zero.
Suppose the limit as $x$ goes to zero of $f(x)$ is $a$, and $a > 0$. Let $\epsilon = a/2$. Then there is a $\delta$ such that for all $0 < x < \delta$, $a/2 < f(x) < 3a/2$. Let $\delta_0$ be the minimum of $1/2$ and $\delta$. (statement 1): Then for $0 < x < \delta_0$, $f(x)/x \geq 2f(x) \geq 2 \cdot a/2 = a$. If the limit of $f(x)/x$ as $x$ goes to zero is $0$, then let $\epsilon = a/2$. (statement 2): Then there exists a $\delta_1 > 0$ such that for all $0 < x < \delta_1$, $-a/2 < f(x)/x < a/2$. Let $\delta_2$ be the least of $\delta_0$ and $\delta_1$. For $x < \delta_2 \leq \delta_1$, $f(x)/x < a/2$ by (2). For $x < \delta_2 \leq \delta_0$, $f(x)/x \geq a$ by (1). Thus $a/2 > a$, or $1/2 > 1$, a contradiction. A similar argument holds if $a < 0$. Spotting the places where I implicitly assumed $a$ was positive may be interesting.
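The bounds driving these arguments can be illustrated numerically. The particular choice $f(x) = x^2 \sin(1/x)$ is my own example of a function with $f(x)/x \to 0$:

```python
import math

# Illustration of the squeeze bound: for f(x) = x^2 sin(1/x) we have
# |f(x)/x| <= |x|, so |f(x)| <= |x|^2, and both bounds shrink to 0 with x.
def f(x):
    return x**2 * math.sin(1 / x)

for x in [1e-2, -1e-3, 1e-4, -1e-5]:
    assert abs(f(x) / x) <= abs(x)        # the ratio is squeezed to 0 ...
    assert abs(f(x)) <= abs(x) * abs(x)   # ... hence so is f itself
```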
$$\lim_{x\to 0}\frac{\sin{6x}}{\sin{2x}}$$ I have no idea at all how to proceed. I am guessing there is some trig rule about manipulating these terms in some way, but I cannot find it in my notes. I tried to turn $\tan$ into $\dfrac\sin\cos$: $$\frac{\sin6x}{\cos6x} \times \frac{1}{\sin2x}$$ But this doesn't get me anywhere as far as I can tell.
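Before proving anything, the value can be guessed numerically. Writing $\frac{\sin 6x}{\sin 2x} = 3 \cdot \frac{\sin 6x}{6x} \cdot \frac{2x}{\sin 2x}$ and using $\lim_{u \to 0} \frac{\sin u}{u} = 1$ suggests the limit is $3$, and a quick check agrees:

```python
import math

# Evaluate sin(6x)/sin(2x) at shrinking x; the values approach 3,
# consistent with the factorization 3 * (sin(6x)/(6x)) * (2x/sin(2x)).
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, math.sin(6 * x) / math.sin(2 * x))
```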
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\mathbf{\sqrt{s_{{\rm NN}}} = 5.02}$ TeV (Elsevier, 2015-01) We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...
Elliptic flow of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Springer, 2015-06-29) The elliptic flow coefficient ($v_{2}$) of identified particles in Pb--Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV was measured with the ALICE detector at the LHC. The results were obtained with the Scalar Product ... Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at $\sqrt{s} = 2.76$ TeV (American Physical Society, 2015-01-07) The pT-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at midrapidity in proton-proton collisions at $\sqrt{s}=2.76$ TeV in the transverse momentum range ... Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2015-02-04) Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ...
I need to find the expression of the third-order gradient (with respect to the input vector $\mathbf{x} \in \mathbb{R}^{n}$), $\nabla^{(3)}_{\mathbf{x}}f(\mathbf{x})$, of the following expression: $$f(\mathbf{x}) = \langle\mathbf{v},g(A^{T}\mathbf{x})\rangle$$ where $f : \mathbb{R}^{n} \rightarrow \mathbb{R}$, $\mathbf{x} \in \mathbb{R}^{n}$, $\mathbf{v} \in \mathbb{R}^{m}$, $A^{T} \in \mathbb{R}^{m \times n}$, and $g : \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}$. It is assumed that $g$ is as smooth as necessary to calculate gradients. Also, $g$ is an elementwise operator (for example the sigmoid transfer function). I am not experienced in calculating higher-order gradients, and working out the steps needed to derive the answer is very confusing. I have tried to expand the inner product and the matrix multiplication to eliminate all the vector/matrix products. This results in an expression for which I can more easily calculate the partial derivatives needed to compute the gradient (i.e., treat all the variables that are not the variable of differentiation as constants). However this method seems very cumbersome, and matrix calculus (https://en.wikipedia.org/wiki/Matrix_calculus) seems to be a more efficient way of solving the problem. So I am wondering if anyone knows how to compute the given third-order gradient using matrix calculus and could possibly elaborate on the steps used? Hopefully this will help me understand matrix calculus better.
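Since $g$ acts elementwise, repeated application of the chain rule suggests the candidate closed form $[\nabla^{(3)} f]_{ijl} = \sum_k v_k\, g'''(y_k)\, A_{ik} A_{jk} A_{lk}$ with $y = A^T \mathbf{x}$ (this derivation is mine, not from the original post). A sketch that checks it against finite differences of the analytic Hessian, using the sigmoid for $g$:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, m))   # so A^T is m x n, as in the question
v = rng.standard_normal(m)
x = rng.standard_normal(n)

def hessian(x):
    s = sigmoid(A.T @ x)
    g2 = s * (1 - s) * (1 - 2 * s)      # elementwise g'' of the sigmoid
    return (A * (v * g2)) @ A.T         # H_ij = sum_k v_k g''(y_k) A_ik A_jk

# candidate third-order tensor T[i,j,l] = sum_k v_k g'''(y_k) A_ik A_jk A_lk
s = sigmoid(A.T @ x)
g3 = s * (1 - s) * (1 - 6 * s + 6 * s**2)  # elementwise g''' of the sigmoid
T = np.einsum('k,ik,jk,lk->ijl', v * g3, A, A, A)

# central differences of the analytic Hessian should reproduce each slice of T
h = 1e-5
for l in range(n):
    e = np.zeros(n); e[l] = h
    fd = (hessian(x + e) - hessian(x - e)) / (2 * h)
    assert np.max(np.abs(fd - T[:, :, l])) < 1e-6
```

The same pattern (one extra `g`-derivative and one extra copy of `A` per differentiation) is what the index expansion in the question produces, just written compactly with `einsum`.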
Let's find all solutions using formal logic. Given a logic statement of the form "if $A$ then $B$", we can express it symbolically as $A \implies B$ or its equivalent, $\bar A \lor B$. Numbering the given statements 1, 2, 3 respectively, we have $A_1$ = the number is a multiple of 5, $B_1$ = the number lies between 1 and 19, and so on. Let $E$ be (the truth value of) the conjunction of the 3 statements. Then:$$\begin{align}E = & (\bar A_1 \lor B_1) (\bar A_2 \lor B_2) (\bar A_3 \lor B_3) \\= & \bar A_1 \bar A_2 \bar A_3 \lor \bar A_1 \bar A_2 B_3 \lor \bar A_1 B_2 \bar A_3 \lor \bar A_1 B_2 B_3 \\ & \lor B_1 \bar A_2 \bar A_3 \lor B_1 \bar A_2 B_3 \lor B_1 B_2 \bar A_3 \lor B_1 B_2 B_3\end{align}$$ Now, since every multiple of 10 is also a multiple of 5, the conjunction $\bar A_1 \bar A_3$ must be false because it asserts that the number is a multiple of 10 but not a multiple of 5. Also, $B_i B_j$ is false if $i \neq j$ because the intervals being disjoint means that the number cannot simultaneously be a member of two intervals. We can therefore simplify to get $E = \bar A_1 \bar A_2 B_3 \lor B_1 \bar A_2 \bar A_3$. The first term requires a multiple of 8 from the interval 30 to 39 that is not a multiple of 5, so 32 is a solution. The second term requires a multiple of both 8 and 10 from the interval 1 to 19. There aren't any. Hence 32 is the only solution.
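The simplification can be double-checked by brute force. The three statements below are reconstructed from the solution text (an assumption on my part): (1) if the number is a multiple of 5 it lies in 1–19, (2) if it is not a multiple of 8 it lies in 20–29, (3) if it is not a multiple of 10 it lies in 30–39:

```python
# Brute-force search over the three reconstructed implications.
def implies(a, b):
    return (not a) or b

solutions = [
    n for n in range(1, 200)
    if implies(n % 5 == 0, 1 <= n <= 19)
    and implies(n % 8 != 0, 20 <= n <= 29)
    and implies(n % 10 != 0, 30 <= n <= 39)
]
print(solutions)  # [32]
```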
I have three random variables $X$, $Y$, $Z$. $X$ is independent from the two others. On the other hand, $Y$ and $Z$ may be dependent, and of different distributions, for example consecutive order statistics for a given sampling. How can I estimate $\mathbb P (X \in [Y, Z))$ ? I know that this probability is $\mathbb P (\{X - Y \geq 0\} \cap \{Z-X > 0\})$, but these are not independent. Where do I go from here? EDIT Following @Dilip Sarwate's and @whuber's comments below. For a given $x$, \begin{align} \mathbb P(X\in [Y, Z) | X=x) &= \mathbb E[1\{X\in [Y, Z) \} | X=x] \\ &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}1\{x\in [y, z) \}f_{Y, Z}(y, z)\,dz\,dy \\ &= \int_{-\infty}^{x}\int_{x}^{\infty}f_{Y, Z}(y, z)\,dz\,dy \end{align} Then returning to the original problem: \begin{align} \mathbb P (X \in [Y, Z)) = \int_{-\infty}^\infty\int_{-\infty}^{x}\int_{x}^{\infty}f_{Y, Z}(y, z)f_X(x)\,dz\,dy\,dx \end{align} Similarly, we would then have \begin{align} \mathbb E (X 1\{X \in [Y, Z)\}) = \int_{-\infty}^\infty\int_{-\infty}^{x}\int_{x}^{\infty}x\,f_{Y, Z}(y, z)f_X(x)\,dz\,dy\,dx \end{align} Are the two previous equations correct?
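When the joint density is not available in closed form, the probability can also be estimated by Monte Carlo. As an illustration (my choice of distributions, not from the question), take $X \sim N(0,1)$ independent of a 3-sample from $N(0,1)$ whose first and second order statistics play the roles of $Y$ and $Z$; by exchangeability of the 4 iid draws, the exact answer here is $1/4$:

```python
import numpy as np

# Monte Carlo estimate of P(X in [Y, Z)) with (Y, Z) consecutive order
# statistics of an independent 3-sample; the exact value is 1/4 because X is
# equally likely to fall in any of the 4 gaps cut out by the 3 order statistics.
rng = np.random.default_rng(1)
N = 200_000
X = rng.standard_normal(N)
S = np.sort(rng.standard_normal((N, 3)), axis=1)
Y, Z = S[:, 0], S[:, 1]                  # consecutive order statistics
est = np.mean((X >= Y) & (X < Z))
print(est)  # close to 0.25
```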
In two dimensions, you have a powerful tool, the Riemann mapping. If you have a compact set in the plane whose complement is connected, knowing explicitly the map of the complement onto the exterior of the unit disk gives you the log capacity. Now take a book on complex variables with many examples of conformal maps, and you can calculate capacities of many planar sets. I recommend the book by L. Volkovyskii, A collection of problems in complex analysis, there is an English Dover edition, which is a real encyclopedia of explicit conformal maps. (If you can read some Russian, there is another book: M. A. Evgrafov, Problems in the theory of analytic functions). All examples given in the paper that you cite can be obtained with this method. For the disc, ellipse and the segment the conformal map is the Joukowski function; for the square and triangle use a modified Schwarz-Christoffel formula, and it leads to the Gamma function. (All this is covered in the book of Volkovyskii cited above.) You can easily generalize the last two examples to regular $n$-gons with any $n$. The answer is in terms of the Gamma function; it is written in Polya, Szego, Isoperimetric Inequalities in Mathematical Physics. It is more difficult in dimension 3 and higher (Newtonian potential). A good collection of explicitly computed capacities is contained in the book N. S. Landkof, Foundations of modern potential theory (there is an English translation). EDIT. Here are some details for the regular $n$-gon. We want to map the exterior region onto the exterior of the unit disk so that $\infty\to\infty$ by a function $f$. To do this, break the exterior region into $n$ congruent triangles with one vertex at infinity, and the other two vertices the adjacent vertices of the polygon. The interior angles of a triangle are $2\pi/n$ at $\infty$ and $\pi/2+\pi/n$ at the finite vertices. Now map one of these triangles onto the sector $\{ z: |\arg z|<\pi/n, |z|>1\}$, so that $\infty\to\infty$ and the other two vertices go to $e^{\pm i\pi/n}$.
By the symmetry principle this function will have an analytic continuation to the whole exterior region and map it onto the exterior of the unit disk, so it is our $f$. The required map of the triangle onto a sector is performed with the Schwarz-Christoffel formula, and the integral involved reduces to the Gamma function. Then $f(z)\sim az,\;z\to\infty$, and the capacity is $1/|a|$.
If $G$ is a finite $p$-group with a nontrivial normal subgroup $H$, then the intersection of $H$ and the center of $G$ is not trivial. Perhaps a slightly less computationally-intensive argument is to simply note that since $H$ is normal, it must be a union of conjugacy classes. Each conjugacy class of $G$ has $p^i$ elements for some $i$; since $H$ contains at least one conjugacy class with $p^0 = 1$ elements (the class of the identity), and $|H|\equiv 0 \pmod{p}$, it must contain other classes with just one element, which must be classes of central elements of $G$. Since $H$ is normal, consider $G$ acting on $H$ by conjugation. The class equation yields $$ \left| H \right| = \left| H^G \right| + \sum_i [G:\mathrm{Stab}_{h_i}] \, , $$ where $ H^G = \{ h \in H \ \mid \ ghg^{-1} = h , \ \forall g \in G \} $ is the set of fixed points, $ \mathrm{Stab}_{h_i} = \{ g \in G \ \mid \ gh_ig^{-1} = h_{i} \}\leqslant G $ is the stabilizer of $h_i \in H$, and the sum runs over representatives of the nontrivial orbits. Observe that in this case $ H \cap Z(G) = H^G $. Now $p$ divides $\left| H \right|$ and divides $[G:\mathrm{Stab}_{h_i}]$ for every nontrivial orbit, so it divides $\left| H^G \right|$. In particular $H^G$ contains more than just the identity, so there is a nontrivial element of $H$ that is also in the center of $G$. Let $a_{1}, \dots, a_{k}$ be representatives of the conjugacy classes of $G$, ordered such that $a_{1}, \dots, a_{m} \in H$ and $a_{m+1}, \dots , a_{k} \notin H$. Since $H$ is a union of conjugacy classes, each class $C(a_{i})$ satisfies either $C(a_{i}) \subseteq H$ or $C(a_{i}) \cap H = \varnothing$. Arrange $\{a_{1}, \dots, a_{m}\}$ so that the first $r$ represent conjugacy classes of size 1 (i.e., elements in $H \cap Z$) and the latter $m − r$ represent classes of size larger than 1. Then we can write the class equation for $H$ as: $$|H| = \sum\limits_{i=1}^{m} |C(a_{i})| = |H \cap Z| + \sum\limits_{i=r+1}^{m} |C(a_{i})| = |H \cap Z| + \sum\limits_{i=r+1}^{m}\frac{|G|}{|N(a_{i})|}$$ As each term $\frac{|G|}{|N(a_{i})|} > 1$ divides $|G| = p^{n}$, every term in the sum is divisible by $p$; since $p$ also divides $|H|$, it follows that $|H \cap Z|$ is divisible by $p$, hence nontrivial.
We can try induction on $n$ by taking quotients like $G/Z$, where $Z$ is the center. More technically, it goes like this. We take the quotient group $G/Z$, where $Z$ is the center; as $G$ is a $p$-group, $Z$ is nontrivial. Hence $o(G/Z)=o(G)/o(Z)$ is less than $o(G)$, which is ideal for an application of induction. Now look at the set $S=\{Zx \mid x\in H\}$. My claim is that this set is a subgroup, and in fact a normal subgroup, of $G/Z$. Indeed $(Zx_1)(Zx_2)=Zx_1x_2$, so the closure and inverse properties are immediate. Again, for any $x \in G$ and $h \in H$, $(Zx)(Zh)(Zx)^{-1}=Zxhx^{-1}$, and since $H$ is normal, this belongs to $S$. Hence $S$ is a normal subgroup of $G/Z$. (If $S$ is trivial, then $H \subseteq Z$, so $H \cap Z = H$ is nontrivial and we are done.) Otherwise, by the nontriviality of $Z$ we can apply induction to claim that the intersection of $S$ with the center of $G/Z$ is nontrivial. That is, there is an $a \in H$ with $a \notin Z$ such that $Zax=Zxa$ for all $x \in G$. Hence there is an $a \in H$ such that $axa^{-1}x^{-1} \in Z$ for all $x \in G$. Now, as $a^{-1} \in H$, clearly $xa^{-1}x^{-1} \in H$, and thus $axa^{-1}x^{-1} \in H$. But if $H$ and $Z$ had trivial intersection, then $ax=xa$ for all $x \in G$, which makes $a \in Z$, a contradiction.
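The theorem can be sanity-checked computationally on the smallest interesting case. The sketch below (my own illustration) builds the dihedral group $D_4$ of order $8 = 2^3$ as pairs $(a, b)$ representing $r^a s^b$ with the relation $sr = r^{-1}s$, and verifies that every nontrivial normal subgroup meets the center nontrivially:

```python
from itertools import combinations, product

# D4: elements r^a s^b with a mod 4, b mod 2, and s r = r^-1 s, so
# (r^a s^b)(r^c s^d) = r^{a + (-1)^b c} s^{b+d}.
def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a + (-1) ** b * c) % 4, (b + d) % 2)

G = list(product(range(4), range(2)))
e = (0, 0)
inv = {g: h for g in G for h in G if mul(g, h) == e}
center = {z for z in G if all(mul(z, g) == mul(g, z) for g in G)}

def is_normal_subgroup(H):
    closed = all(mul(x, y) in H for x in H for y in H)
    normal = all(mul(mul(g, h), inv[g]) in H for g in G for h in H)
    return e in H and closed and normal

# every nontrivial normal subgroup meets Z(G) in a non-identity element
for r in range(2, 9):
    for H in map(set, combinations(G, r)):
        if is_normal_subgroup(H):
            assert len(H & center) > 1, H
print("all nontrivial normal subgroups of D4 meet Z(D4) nontrivially")
```

For $D_4$ the center is $\{e, r^2\}$, and the check confirms that even the order-2 normal subgroup $\{e, r^2\}$ and all order-4 subgroups contain $r^2$.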
The muon (from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with negative electric charge and a spin of 1/2. Together with the electron, the tauon, and the three neutrinos, it is classified as a lepton. It is the unstable subatomic particle with the second-longest mean lifetime (about 2.2 μs), behind the neutron (~15 minutes). Like all elementary particles, the muon has a corresponding antiparticle of opposite charge but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by $\mu^-$ and antimuons by $\mu^+$. Muons were sometimes referred to as mu mesons in the past, even though they are not classified as mesons by modern particle physicists (see History). Muons have a mass of 105.7 MeV/c², which is about 200 times the mass of the electron. Even so, muons are the lightest particles of ordinary matter, after the electrons. Since the muon's interactions are very similar to those of the electron, a muon can be thought of in most ways as simply a much heavier version of the electron. Due to their greater mass, muons are not as sharply accelerated when they encounter electromagnetic fields, and do not emit as much bremsstrahlung radiation. For this reason, muons of a given energy are far more highly penetrating of matter than electrons, since the slowing of these particles in matter to capture velocities is primarily due to energy loss from this mechanism. Muons generated by cosmic rays in the atmosphere are capable of penetrating to the ground and into deep mines. As with the case of the other charged leptons, the muon has an associated muon neutrino. Muon neutrinos are denoted by $\nu_\mu$. History Muons were discovered by Carl D. Anderson in 1936 while he studied cosmic radiation. He had noticed particles that curved in a manner distinct from that of electrons and other known particles when passed through a magnetic field.
In particular, these new particles were negatively charged but curved to a smaller degree than electrons, yet more sharply than protons, for particles of the same velocity. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that these particles were of intermediate mass (lying somewhere between that of an electron and that of a proton). The discovery of the muon seemed so incongruous and surprising at the time that Nobel laureate I. I. Rabi famously quipped, "Who ordered that?" For this reason, Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". Shortly thereafter, additional particles of intermediate mass were discovered, and the more general term meson was adopted to refer to any such particle. Faced with the need to differentiate between different types of mesons, the mesotron was in 1947 renamed the mu meson (with the Greek letter μ (mu) used to approximate the sound of the Latin letter m). However, it was soon found that the mu meson significantly differed from other mesons; for example, its decay products included a neutrino and an antineutrino, rather than just one or the other, as was observed in other mesons. Other mesons were eventually understood to be hadrons, that is, particles made of quarks, and thus subject to the residual strong force. In the quark model, a meson is composed of exactly two quarks (a quark and antiquark), unlike baryons, which are composed of three quarks. Mu mesons, however, were found to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu mesons were not mesons at all (in the new sense and use of the term meson), and so the term mu meson was abandoned, and replaced with the modern term muon.
Muon sources Since the production of muons requires an available center of momentum frame energy of 105.7 MeV, neither ordinary radioactive decay events nor nuclear fission and fusion events (such as those occurring in nuclear reactors and nuclear weapons) are energetic enough to produce muons. Only nuclear fission produces single-nuclear-event energies in this range, but due to conservation constraints, muons are not produced. On Earth, all naturally occurring muons are apparently created by cosmic rays, which consist mostly of protons, many arriving from deep space at very high energy. When a cosmic ray proton impacts atomic nuclei of air atoms in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (the pion's preferred decay product) and neutrinos. The muons from these high energy cosmic rays, generally continuing essentially in the same direction as the original proton, do so at very high velocities. Although their lifetime without relativistic effects would allow a half-survival distance of only about 0.66 km at most, the time dilation effect of special relativity allows cosmic ray secondary muons to survive the flight to the Earth's surface. Indeed, since muons are unusually penetrative of ordinary matter, like neutrinos, they are also detectable deep underground and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional. The same nuclear reaction described above (i.e., hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g − 2 experiment.
In naturally-produced muons, the very high-energy protons that begin the process are thought to originate from acceleration by electromagnetic fields over long distances between stars or galaxies, in a manner somewhat analogous to the mechanism of proton acceleration used in laboratory particle accelerators. Muon decay [Figure: the most common decay of the muon] Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction to an electron, two neutrinos, and possibly other particles with a net charge of zero. Nearly all of the time, they decay into an electron, an electron-antineutrino, and a muon-neutrino. Antimuons decay to a positron, an electron-neutrino, and a muon-antineutrino: $$\mu^-\to e^- + \bar\nu_e + \nu_\mu,~~~\mu^+\to e^+ + \nu_e +\bar\nu_\mu.$$ The mean lifetime of the (positive) muon is 2.197019 ± 0.000021 μs. The equality of the muon and anti-muon lifetimes has been established to better than one part in $10^4$. The tree-level muon decay width is $$\Gamma=\frac{G_F^2m_\mu^5}{192\pi^3}I\left(\frac{m_e^2}{m_\mu^2}\right),$$ where $$I(x)=1-8x-12x^2\ln x+8x^3-x^4.$$ A photon or electron-positron pair is also present in the decay products about 1.4% of the time. The decay distributions of the electron in muon decays have been parametrized using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics; thus muon decays represent an excellent laboratory to test the space-time structure of the weak interaction. No deviation from the Standard Model predictions has yet been found. Certain neutrino-less decay modes are kinematically allowed but forbidden in the Standard Model. Examples, forbidden by lepton flavour conservation, are $$\mu^-\to e^- + \gamma \quad \text{and} \quad \mu^-\to e^- + e^+ + e^-.$$
Observation of such decay modes would constitute clear evidence for physics beyond the Standard Model (BSM). Upper limits for the branching fractions of such decay modes are in the range $10^{-11}$ to $10^{-12}$. Muonic atoms The muon was the first elementary particle discovered that does not appear in ordinary atoms. Negative muons can, however, form muonic atoms by replacing an electron in ordinary atoms. Muonic atoms are much smaller than typical atoms because the larger mass of the muon gives it a smaller ground-state wavefunction than the electron. A positive muon, when stopped in ordinary matter, can also bind an electron and form an exotic atom known as a muonium (Mu) atom, in which the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen, this short-lived "atom" behaves chemically, to a first approximation, like hydrogen, deuterium and tritium. Anomalous magnetic dipole moment The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. The measurement and prediction of this value is very important in the precision tests of QED (quantum electrodynamics). The E821 experiment at Brookhaven National Laboratory (BNL) studied the precession of muons and anti-muons in a constant external magnetic field as they circulated in a confining storage ring. The E821 experiment reported the following average value (from the July 2007 review by the Particle Data Group): $$a = \frac{g-2}{2} = 0.00116592080(54)(33)$$ where the first errors are statistical and the second systematic. The difference between the g-factors of the muon and the electron is due to their difference in mass.
Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED (Phys. Lett. B649, 173 (2007)). References S.H. Neddermeyer and C.D. Anderson, "Note on the Nature of Cosmic-Ray Particles", Phys. Rev. 51, 884–886 (1937). J.C. Street and E.C. Stevenson, "New Evidence for the Existence of a Particle of Mass Intermediate Between the Proton and Electron", Phys. Rev. 52, 1003–1004 (1937). Serway & Faughn, College Physics, Fourth Edition (Fort Worth, TX: Saunders, 1995), page 841. Emanuel Derman, My Life As A Quant (Hoboken, NJ: Wiley, 2004), pp. 58–62. Marc Knecht, "The Anomalous Magnetic Moments of the Electron and the Muon", Poincaré Seminar (Paris, Oct. 12, 2002), published in: Duplantier, Bertrand; Rivasseau, Vincent (Eds.), Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) [ISBN 3-7643-0579-7].
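The tree-level decay-width formula quoted above can be evaluated directly, giving a lifetime within about half a percent of the measured value (the remaining gap is mostly radiative corrections). The constant values below are standard PDG-style inputs in natural units, not taken from the article itself:

```python
import math

# Tree-level muon lifetime: tau = hbar / Gamma, with
# Gamma = G_F^2 m_mu^5 / (192 pi^3) * I(m_e^2 / m_mu^2).
G_F = 1.1663787e-5          # Fermi constant, GeV^-2 (natural units)
m_mu = 0.1056584            # muon mass, GeV
m_e = 0.000511              # electron mass, GeV
hbar = 6.582120e-25         # GeV * s

x = (m_e / m_mu) ** 2
I = 1 - 8 * x - 12 * x**2 * math.log(x) + 8 * x**3 - x**4  # phase-space factor
Gamma = G_F**2 * m_mu**5 / (192 * math.pi**3) * I           # width in GeV
tau = hbar / Gamma

print(tau)  # about 2.19e-6 s, vs. the measured 2.197e-6 s
```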
Gluing Disjoint Topological Spaces Definition: Let $X$ and $Y$ be disjoint topological spaces. Let $A \subseteq X$ be nonempty and let $f : A \to Y$ be a map where $f(A) = B$ is the range of $f$. Define $\sim$ to be the equivalence relation on $X \cup Y$ generated by $a \sim f(a)$ for all $a \in A$, together with $x \sim x$ for all $x \in X \cup Y$; i.e., with the equivalence relation $\sim$ we identify points with their images under $f$, while every other point is identified only with itself. The equivalence classes of $\sim$ are thus the sets $\{ y \} \cup f^{-1}(\{ y \})$ for $y \in B$, together with the singleton sets $\{ x \}$ for $x \in (X \setminus A) \cup (Y \setminus B)$. The result is called the Gluing of $X$ and $Y$ along $f$, written $(X \oplus Y) / \sim$ or $X \cup_f Y$. In simpler terms, consider two disjoint topological spaces $X$ and $Y$ and consider a subset $A$ of $X$. Let $f : A \to Y$ be any function where $f(A) = B$ is the range of $f$ over $A$. We then glue each point of $A$ to its image under $f$, and leave every other point alone. For a very simple example, consider the disjoint spaces $[0, 1]$ and $[2, 3]$ with the subspace topologies from $\mathbb{R}$. Let $A = \{ 0, 1 \}$ and define the function $f : A \to [2, 3]$ simply by $f(0) = 2$ and $f(1) = 2$. Then we glue the points $0, 1 \in [0, 1]$ to $2 \in [2, 3]$ to get $(X \oplus Y) / \sim = X \cup_f Y$:
Unbounded Linear Functionals Recall that if $(X, \| \cdot \|_X)$ is a normed linear space then a linear functional $f$ on $X$ is said to be bounded if there exists an $M \in \mathbb{R}$, $M \geq 0$ such that for every $x \in X$: $$|f(x)| \leq M \| x \|_X \quad (1)$$ And we define the norm of $f$ as the least such $M$: $\| f \| = \inf \{ M : |f(x)| \leq M \| x \|_X, \: \forall x \in X \}$. Definition: Let $(X, \| \cdot \|_X)$ be a normed linear space and let $f$ be a linear functional on $X$. Then $f$ is Unbounded if it is not bounded. If $f$ is an unbounded linear functional on $X$ then for every positive real number $M \in \mathbb{R}$, $M > 0$ there exists an $x_M \in X$ such that: $$|f(x_M)| > M \| x_M \|_X \quad (2)$$ In particular, for each $n \in \mathbb{N}$ there exists an $x_n \in X$ such that: $$|f(x_n)| > n \| x_n \|_X \quad (3)$$ We will use this property in the following proposition. Proposition 1: Let $(X, \| \cdot \|_X)$ be a normed linear space and let $f$ be a linear functional on $X$. If $f$ is unbounded then $\ker f$ is a dense subspace of $X$. Proof: We already know that $\ker f$ is always a subspace of $X$ so all that we need to show is that $\ker f$ is dense in $X$. We do this by showing that for each $x \in X$ there is a sequence $(z_n)$ in $\ker f$ that converges to $x$. Since $f$ is unbounded, for each $n \in \mathbb{N}$ there exists an $x_n \in X$ such that $|f(x_n)| > n \| x_n \|_X$. In particular, observe that if $x_n = 0$ then this implies that $0 = |f(x_n)| > n \| x_n \|_X = n \cdot 0 = 0$, a contradiction. So $x_n \neq 0$ for every $n \in \mathbb{N}$. For each $n \in \mathbb{N}$ let $\displaystyle{y_n = \frac{x_n}{\| x_n \|_X}}$. Then $\| y_n \|_X = 1$ for each $n \in \mathbb{N}$ and also: $$|f(y_n)| = \frac{|f(x_n)|}{\| x_n \|_X} > n$$ That is, for each $n \in \mathbb{N}$ we have that: $$|f(y_n)| > n \quad (*)$$ Now for $x \in X$ let $(z_n)$ be the sequence defined for each $n \in \mathbb{N}$ by: $$z_n = x - \frac{f(x)}{f(y_n)} y_n$$ Note that each $z_n$ is well defined since $f(y_n) \neq 0$ by $(*)$.
Furthermore we see that each $z_n \in \ker f$ since: $$f(z_n) = f(x) - \frac{f(x)}{f(y_n)} f(y_n) = f(x) - f(x) = 0$$ Also, $(z_n)$ converges to $x$ since: $$\| z_n - x \|_X = \frac{|f(x)|}{|f(y_n)|} \| y_n \|_X = \frac{|f(x)|}{|f(y_n)|} \overset{(**)}{<} \frac{|f(x)|}{n} \to 0$$ Where the inequality at $(**)$ comes from the fact that $|f(y_n)| > n$ for each $n \in \mathbb{N}$. Thus, for each $x \in X$ there is a sequence $(z_n)$ in $\ker f$ that converges to $x$. So $\ker f$ is dense in $X$. $\blacksquare$
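To make the proposition concrete, here is a standard example (my addition, not part of the original page) of an unbounded linear functional:

```latex
% On the space $c_{00}$ of finitely supported sequences with the sup norm,
% define the linear functional
\[
    f(x) = \sum_{k=1}^{\infty} k \, x_k ,
    \qquad x = (x_1, x_2, \dots) \in c_{00} .
\]
% For the unit vectors $e_n$ (a 1 in position $n$, zeros elsewhere),
\[
    \| e_n \|_{\infty} = 1
    \quad \text{while} \quad
    |f(e_n)| = n ,
\]
% so no constant $M$ can satisfy $|f(x)| \leq M \| x \|_{\infty}$ for all $x$.
% By Proposition 1, $\ker f$ is therefore dense in $c_{00}$.
```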
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in English :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
I'm studying rotations expressed by a matrix $R$: $|\psi\rangle \to \hat{U} (R) |\psi\rangle$. When we assume infinitesimal rotations, we can write $R = E + \omega$ where $E$ is an identity matrix and $\omega$ is a real matrix. Then, according to "Lectures on Quantum Mechanics" by S. Weinberg, $\hat{U} (E + \omega)$ must take the form $$\hat{U} (E + \omega) = E + \frac{i}{2\hbar} \sum _{ij} \omega _{ij} J_{ij} + O(\omega ^2).$$ But why? My teacher said this is a Taylor expansion, but I've never heard of a Taylor expansion of matrix functions.
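To illustrate the "Taylor expansion of a matrix function" idea numerically (my own sketch, not from Weinberg's text): for a rotation generated by a small antisymmetric matrix $\omega$, the matrix exponential $e^\omega$ agrees with $E + \omega$ up to terms of order $\omega^2$.

```python
import numpy as np

def expm_series(A, terms=20):
    """Matrix exponential via its Taylor series sum_k A^k / k!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# Small antisymmetric generator of a rotation about the z-axis.
theta = 1e-4
omega = theta * np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 0.0]])

R = expm_series(omega)               # the exact rotation matrix exp(omega)
R_first_order = np.eye(3) + omega    # first-order (infinitesimal) approximation

# The two agree up to O(omega^2), i.e. up to about theta**2 / 2 here.
print(np.max(np.abs(R - R_first_order)))
```

The residual scales as $\theta^2$, which is exactly the $O(\omega^2)$ remainder in Weinberg's expansion; the same series argument applies to the operator $\hat{U}(R)$.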
Bernoulli, Volume 21, Number 2 (2015), 1089-1133. CLT for linear spectral statistics of normalized sample covariance matrices with the dimension much larger than the sample size Abstract Let $\mathbf{A} =\frac{1}{\sqrt{np}}(\mathbf{X} ^{T}\mathbf{X} -p\mathbf{I} _{n})$ where $\mathbf{X} $ is a $p\times n$ matrix, consisting of independent and identically distributed (i.i.d.) real random variables $X_{ij}$ with mean zero and variance one. When $p/n\to\infty$, under fourth moment conditions a central limit theorem (CLT) for linear spectral statistics (LSS) of $\mathbf{A} $ defined by the eigenvalues is established. We also explore its applications in testing whether a population covariance matrix is an identity matrix. Article information Source Bernoulli, Volume 21, Number 2 (2015), 1089-1133. Dates First available in Project Euclid: 21 April 2015 Permanent link to this document https://projecteuclid.org/euclid.bj/1429624972 Digital Object Identifier doi:10.3150/14-BEJ599 Mathematical Reviews number (MathSciNet) MR3338658 Zentralblatt MATH identifier 06445969 Citation Chen, Binbin; Pan, Guangming. CLT for linear spectral statistics of normalized sample covariance matrices with the dimension much larger than the sample size. Bernoulli 21 (2015), no. 2, 1089--1133. doi:10.3150/14-BEJ599. https://projecteuclid.org/euclid.bj/1429624972
Does anybody have the Bachelier model call option pricing formula for $r > 0$? All the references I've read assume $r = 0$. I don't speak French, so I can't read Bachelier's original paper. We assume that, under the risk-neutral measure, the stock process $\{S_t, t \ge 0\}$ satisfies an SDE of the form \begin{align*} dS_t = r S_t dt + \sigma dW_t, \end{align*} where $r$ is the constant interest rate, $\sigma$ is the constant volatility, and $\{W_t, t \ge 0\}$ is standard Brownian motion. For $0 \le t \le T$, \begin{align*} S_T = S_t e^{r(T-t)} + \sigma\int_t^T e^{r(T-s)}dW_s. \end{align*} That is, \begin{align*} S_T \mid S_t &\sim N\left(S_t e^{r(T-t)},\, \frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right) \right)\\ &\sim S_t e^{r(T-t)} + \sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\xi, \end{align*} where $\xi$ is a standard normal random variable. Then \begin{align*} C_t &= e^{-r(T-t)}E\left(\left(S_T-K\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}E\left(\left(S_t e^{r(T-t)} + \sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\xi-K\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}E\left(\left(\xi -\frac{K-S_t e^{r(T-t)}}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right)^+ \mid \mathcal{F}_t \right)\\ &=e^{-r(T-t)}\left(S_t e^{r(T-t)}-K\right)\Phi\left(\frac{S_t e^{r(T-t)}-K}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right) \\ &\qquad + e^{-r(T-t)}\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}\,\phi\left(\frac{S_t e^{r(T-t)}-K}{\sqrt{\frac{\sigma^2}{2r}\left(e^{2r(T-t)}-1 \right)}}\right), \end{align*} where $\Phi$ is the cumulative distribution function of a standard normal random variable, and $\phi$ is the corresponding density function.
Comments Let $K^*=e^{-r(T-t)}K,$ and $$v^2(t, T) = \frac{\sigma^2}{2r}\left(1-e^{-2r(T-t)}\right).$$ Then, we can re-express the price as \begin{align*} C_t &= \left(S_t-K^*\right)\Phi\left(\frac{S_t-K^*}{v(t, T)}\right) +v(t, T)\,\phi\left(\frac{S_t-K^*}{v(t, T)}\right). \end{align*} See also Section 3.3 of the book Martingale Methods in Financial Modelling; however, note that there are a few typos in this book. One other possibility is to assume that \begin{align*} S_t = e^{rt}(S_0 + \sigma W_t). \end{align*} Then the corresponding option price can be similarly obtained. See also the book mentioned above. It's pretty simple to derive with basic knowledge of stochastic calculus. But since you are looking for the easy answer, here it is: $$C_t=e^{-r(T-t)}\sigma\sqrt{T-t} \left(D \Phi(D)+\phi(D)\right)$$ where $D=\frac{F_{t,T}-K}{\sigma \sqrt{T-t}}$, and $\Phi(\cdot)$ and $\phi(\cdot)$ are respectively the normal cdf and pdf. $F_{t,T}=S_te^{r(T-t)}$ is the forward price. You might want to differentiate between the growth rate $\mu$ and the discount rate $r$. Gordon's solution is the most logical thing to do. NSZ's solution amounts to assuming a process $$dF = \sigma dW$$ with a discount rate $r$ and $F(t,T) = S(t) e^{r(T-t)}$. We apply Ito's Lemma to $f(t,F) = F e^{r(t-T)}$ to obtain, in terms of $S$: $$dS = r S dt + \sigma e^{r(t-T)} dW\,.$$
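As an illustrative sketch (the function names and parameter values below are my own, not from the answers above), Gordon's closed form is easy to code; a useful sanity check is that as $r \to 0$ it must reduce to the classic Bachelier price $\sigma\sqrt{T}\,\big(D\,\Phi(D)+\phi(D)\big)$ with $D=(S-K)/(\sigma\sqrt{T})$:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x: float) -> float:
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(S: float, K: float, sigma: float, T: float, r: float) -> float:
    """Call price at t = 0 under dS = r*S*dt + sigma*dW (Gordon's formula)."""
    if r == 0.0:
        v = sigma * math.sqrt(T)                     # r -> 0 limit of v(0, T)
    else:
        v = math.sqrt(sigma**2 / (2.0 * r) * (1.0 - math.exp(-2.0 * r * T)))
    K_star = math.exp(-r * T) * K                    # discounted strike
    d = (S - K_star) / v
    return (S - K_star) * norm_cdf(d) + v * norm_pdf(d)

# Hypothetical parameters for the check.
S, K, sigma, T = 100.0, 95.0, 10.0, 1.0
price_r0 = bachelier_call(S, K, sigma, T, 0.0)
price_small_r = bachelier_call(S, K, sigma, T, 1e-9)
print(price_r0, price_small_r)  # the two should agree to high accuracy
```

The price also exceeds intrinsic value $S - K$, as any call price with positive volatility must.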
How do I recover the cost function from the profit function? Suppose I have $$ \pi(p, w_1, w_2) = p^3 \cdot w_1 \cdot w_2 .$$ How do I get the cost function? Using Hotelling's lemma I get the supply function: $$ y_s(p) = \frac{\partial \pi(p, w_1, w_2)}{\partial p} = 3 p^2 \cdot w_1 \cdot w_2$$ and the input demands: $$z_1(p, w)= - \frac{\partial \pi(p, w_1, w_2)}{\partial w_1} = - p^3 w_2 $$ and $$z_2(p, w)= - \frac{\partial \pi(p, w_1, w_2)}{\partial w_2} = - p^3 w_1, $$ but what is the next step? Note that the functional forms are hypothetical and highly probable to be invalid.
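A quick numerical sketch of Hotelling's lemma applied to this (admittedly hypothetical) profit function — the helper names and evaluation point are my own illustration:

```python
# Hotelling's lemma: y(p, w) = d(pi)/dp and z_i(p, w) = -d(pi)/dw_i.
# Checked here by central finite differences on pi = p^3 * w1 * w2.

def profit(p: float, w1: float, w2: float) -> float:
    return p**3 * w1 * w2

def central_diff(f, x: float, h: float = 1e-6) -> float:
    return (f(x + h) - f(x - h)) / (2.0 * h)

p, w1, w2 = 2.0, 1.5, 0.5   # arbitrary evaluation point

supply = central_diff(lambda q: profit(q, w1, w2), p)   # should be 3 p^2 w1 w2
z1 = -central_diff(lambda v: profit(p, v, w2), w1)      # should be -p^3 w2
z2 = -central_diff(lambda v: profit(p, w1, v), w2)      # should be -p^3 w1

print(supply, z1, z2)
```

The negative input "demands" that come out are another sign that this hypothetical $\pi$ cannot arise from a well-behaved technology, exactly as the question's closing note anticipates.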
In Weinberg's Lectures on Quantum Mechanics, he mentions: Unfortunately, we cannot simply use first-order perturbation theory, with $T_{nuc}$ taken as the perturbation and the state vectors $\Phi_{a,X}$ taken as unperturbed energy eigenstates. This is because we are looking for discrete eigenvalues of the full Hamiltonian, for which the eigenvectors $\Psi$ would be normalizable, in the sense that $(\Psi,\Psi)$ is finite, while $(\Phi_{a,X},\Phi_{a,X})$ is infinite. We cannot expand in powers of a perturbation that converts a state vector with continuum normalization into one that is normalizable as a discrete state. I do not get this. Why does perturbation theory fail here? About the notation (following http://en.wikipedia.org/wiki/Born%E2%80%93Oppenheimer_approximation): $T_{nuc}$ is the kinetic energy of the nuclei, i.e. $T_n$ in the wiki. $\Psi$ in Weinberg is the same as $\Psi$ in the wiki. $\Phi_{a,X}$ is $\chi_k (\mathbf{r}; \mathbf{R})$ in the wiki, where $k\leftrightarrow a$ is the label of energy eigenstates.
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory; I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
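Here is a small computational sketch (my own illustration, not part of the lecture) checking the adjunctions \( \vee \dashv \Delta \) and \( \Delta \dashv \wedge \) on a concrete poset: subsets of \(\{1,2,3\}\) ordered by inclusion, where join is union and meet is intersection.

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

P = powerset([1, 2, 3])       # poset: subsets of {1,2,3}, ordered by inclusion
leq = lambda a, b: a <= b     # the order relation (subset)
join = lambda a, b: a | b     # binary join = union
meet = lambda a, b: a & b     # binary meet = intersection

# vee is left adjoint to Delta(b) = (b, b):
# a v a' <= b  iff  a <= b and a' <= b.
adjunction_holds = all(
    leq(join(a, a2), b) == (leq(a, b) and leq(a2, b))
    for a in P for a2 in P for b in P
)

# wedge is right adjoint to Delta:
# a <= b and a <= b'  iff  a <= b ^ b'.
meet_holds = all(
    (leq(a, b) and leq(a, b2)) == leq(a, meet(b, b2))
    for a in P for b in P for b2 in P
)

print(adjunction_holds, meet_holds)
```

Puzzle 49's duality is visible here too: union, the join under \(\subseteq\), is exactly the meet under the opposite order \(\supseteq\).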
From the link you provided: Specification: Rotor Diameter: 90mm Fan blades: 11 blades Weight: about 350g Working Voltage: 6s (22.2V) lipo battery Motor: Brushless Motor 3553 1450kv No Load Current: 4.1 A Load Current: 83A No Load Speed: 32190 rpm Load Speed: 16095 rpm Thrust: 3300g G/A: 45.16 Assuming that 16 * 6/4 = 24 18650 cells would be able to deliver full electrical power for the fan, the issue would indeed be the local angle of attack of the fan blades. Thrust at full power is listed at 3.3 kg = 32 N. This would be at standstill/hovering conditions at sea level, since measuring at that level provides the highest thrust level for advertisements. Diameter is 0.09m. Net thrust: $$ T = \dot{m} \cdot (V_{out} - V_{in}) = \dot{m} \cdot \Delta V \tag{1}$$ $$ \dot{m} = \rho \cdot A \cdot V \tag{2} $$ Combining (1) and (2) for the hover, with $V_{in} = 0$: $$ V = \sqrt{\frac{T}{\rho \cdot A}} = \sqrt{\frac{32}{1.225 \cdot \pi/4\cdot0.09^2}} \approx 64~\text{m/s}$$ Impulse thrust considerations usually draw a contracting propeller wake for unducted propellers. Ducted fans work a bit differently and we can take the average velocity behind the fan for further order-of-magnitude estimates. The figure below is from this research paper, and shows the considerations for ducted fan flow; it contains some methods for more detailed computations. Rotational speed under load is 16095 rpm = 1,685 rad/s, tip speed = 0.045 * 1,685 = 75.8 m/s. A velocity triangle at the blade tip has an angle of $\tan^{-1}(64/75.8) = 40$ deg. The blade needs to be inclined further than that, usually about 6 deg, so the tip blade angle of the standard fan would be about 46 deg. Purchasing the actual fan for verifying the above would be a good thing! For the second fan, this same method can be followed: the mass stream will remain the same if the hull is closed, so in order for the 2nd fan to deliver the same thrust, $\Delta V = 64$, hence $V_{out} = 64 + 64 = 128$ m/s.
Tip angle velocity triangle = $\tan^{-1}(128/75.8) = 59.4$ deg, fan blade angle = 59.4 + 6 ≈ 65 deg, etc. Note that the above is valid for the hover. As soon as the "rocket" picks up speed, the local angle of attack of the fan blades will reduce and therefore thrust will reduce. So one would have to optimise time of thrust (time of amps delivered) with weight, momentary speed, and expected end speed, then average the blade angles out for the speed function. Note that opening the hull in between the fans allows for extra air to be drawn in, increasing the mass flow. The paper cited above has results for a setup like that as well - if the increased mass flow results in a lower entry velocity, this might be worth considering.
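The arithmetic above can be sketched as follows — a rough check of the answer's own numbers, using its stated assumptions of sea-level density ρ = 1.225 kg/m³ and a ~6° blade incidence margin:

```python
import math

rho = 1.225          # sea-level air density, kg/m^3
T = 32.0             # static thrust, N (3.3 kgf)
D = 0.09             # rotor diameter, m
A = math.pi / 4 * D**2

# Hover momentum theory: T = rho * A * V * (V - 0)  =>  V = sqrt(T / (rho * A))
V = math.sqrt(T / (rho * A))                     # ~64 m/s

rpm = 16095
tip_speed = (rpm * 2 * math.pi / 60) * (D / 2)   # ~75.8 m/s

# Velocity-triangle flow angle at the tip, plus ~6 deg incidence margin.
angle1 = math.degrees(math.atan(V / tip_speed))          # ~40 deg
blade1 = angle1 + 6

# Second fan in series: same mass flow, so the exit speed doubles.
V_out2 = 2 * V                                           # ~128 m/s
angle2 = math.degrees(math.atan(V_out2 / tip_speed))     # ~59.4 deg
blade2 = angle2 + 6

print(round(V, 1), round(angle1, 1), round(angle2, 1))
```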
OpenCV 3.4.0 Open Source Computer Vision You can add two images with the OpenCV function cv.add(), or simply with the numpy operation res = img1 + img2. Both images should be of the same depth and type, or the second image can just be a scalar value. For example, consider the sample below: the difference will be more visible when you add two images. The OpenCV function will provide a better result, so it is always better to stick to OpenCV functions. This is also image addition, but different weights are given to the images so that it gives a feeling of blending or transparency. Images are added as per the equation below: \[g(x) = (1 - \alpha)f_{0}(x) + \alpha f_{1}(x)\] By varying \(\alpha\) from \(0 \rightarrow 1\), you can perform a cool transition between one image and another. Here I took two images to blend together. The first image is given a weight of 0.7 and the second image 0.3. cv.addWeighted() applies the following equation on the image: \[dst = \alpha \cdot img1 + \beta \cdot img2 + \gamma\] Here \(\gamma\) is taken as zero. Check the result below: This includes bitwise AND, OR, NOT and XOR operations. They will be highly useful while extracting any part of the image (as we will see in coming chapters), defining and working with non-rectangular ROI etc. Below we will see an example of how to change a particular region of an image. I want to put the OpenCV logo above an image. If I add two images, it will change color. If I blend it, I get a transparent effect. But I want it to be opaque. If it was a rectangular region, I could use ROI as we did in the last chapter. But the OpenCV logo is not a rectangular shape. So you can do it with bitwise operations as below: See the result below. The left image shows the mask we created. The right image shows the final result. For more understanding, display all the intermediate images in the above code, especially img1_bg and img2_fg.
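The difference between cv.add() and the numpy + operator comes down to saturating versus modular uint8 arithmetic. Here is a numpy-only sketch (cv.add itself saturates internally; the saturate_add below is a stand-in for illustration, not the OpenCV implementation):

```python
import numpy as np

a = np.array([250, 100], dtype=np.uint8)
b = np.array([10, 100], dtype=np.uint8)

# numpy addition wraps around modulo 256: 250 + 10 -> 4
wrapped = a + b

# cv.add-style saturating addition, clipped to the uint8 range: 250 + 10 -> 255
def saturate_add(x, y):
    return np.clip(x.astype(np.int16) + y.astype(np.int16), 0, 255).astype(np.uint8)

saturated = saturate_add(a, b)

print(wrapped.tolist(), saturated.tolist())  # [4, 200] [255, 200]
```

The wraparound is why plain numpy addition can produce dark speckles in bright regions, and why the tutorial recommends the OpenCV function.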
Answer The probability of getting a head in all six tosses is $\frac{1}{64}$. Work Step by Step We know that the probability of getting a head in a single toss of the coin is as given below: Number of favorable outcomes $ n\left( e \right)=1$ (head) Number of total outcomes $ n\left( s \right)=2$ (head and tail) $ P\left( \text{head} \right)=\frac{1}{2}$ And the probability of getting a head in all six tosses is as follows: $\begin{align} & P\left( \text{six heads} \right)=\frac{1}{2}\times \frac{1}{2}\times \frac{1}{2}\times \frac{1}{2}\times \frac{1}{2}\times \frac{1}{2} \\ & =\frac{1}{64} \end{align}$ Hence, the probability that a head will come up in all six tosses is $\frac{1}{64}$.
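The computation can be verified with exact rational arithmetic (a small sketch of my own):

```python
from fractions import Fraction

p_head = Fraction(1, 2)      # one favorable outcome out of two
p_six_heads = p_head ** 6    # independent tosses multiply

print(p_six_heads)  # 1/64
```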
Definition. Let $X$ be a set equipped with a $\sigma$-algebra $\mathcal M$. A measure on $\mathcal M$ (or on $(X,\mathcal M)$, or simply on $X$ if $\mathcal M$ is understood) is a function $\mu:\mathcal M\to[0,\infty]$ such that 1). $\mu(\varnothing)=0$. 2). (Countable additivity) if $\{E_j\}_{j=1}^\infty$ is a sequence of disjoint sets in $\mathcal M$, then $\mu\left(\bigcup_{j=1}^\infty E_j\right)=\sum_{j=1}^\infty\mu(E_j)$. 2)'. if $E_1,\dots,E_n$ are disjoint sets in $\mathcal M$, then $\mu\left(\bigcup_{j=1}^n E_j\right)=\sum_{j=1}^n\mu(E_j)$, because one can take $E_j=\varnothing$ for $j>n$. A function $\mu$ that satisfies 1) and 2)' but not necessarily 2) is called a finitely additive measure. If $X$ is a set and $\mathcal M\subset\mathcal P(X)$ is a $\sigma$-algebra, $(X,\mathcal M)$ is called a measurable space and the sets in $\mathcal M$ are called measurable sets. If $\mu$ is a measure on $(X,\mathcal M)$, then $(X,\mathcal M,\mu)$ is called a measure space. Let $(X,\mathcal M,\mu)$ be a measure space. If $\mu(X)<\infty$ (which implies that $\mu(E)<\infty$ for all $E\in\mathcal M$ since $\mu(X)=\mu(E)+\mu(E^c)$), $\mu$ is called finite. If $X=\bigcup_{j=1}^\infty E_j$ where $E_j\in\mathcal M$ and $\mu(E_j)<\infty$ for all $j$, $\mu$ is called $\sigma$-finite. More generally, if $E=\bigcup_{j=1}^\infty E_j$ where $E_j\in\mathcal M$ and $\mu(E_j)<\infty$ for all $j$, the set $E$ is said to be $\sigma$-finite for $\mu$. If for each $E\in\mathcal M$ with $\mu(E)=\infty$ there exists $F\in\mathcal M$ with $F\subset E$ and $0<\mu(F)<\infty$, $\mu$ is called semifinite. Remark. Every $\sigma$-finite measure is semifinite, but not conversely. Examples. 1) Let $X$ be any nonempty set, $\mathcal M=\mathcal P(X)$, and $f$ any function from $X$ to $[0,\infty]$. Then $f$ determines a measure $\mu$ on $\mathcal M$ by the formula $\mu(E)=\sum_{x\in E}f(x)$. $\mu$ is semifinite iff $f(x)<\infty$ for every $x\in X$, and $\mu$ is $\sigma$-finite iff $\mu$ is semifinite and $\{x:f(x)>0\}$ is countable. a. If $f(x)=1$ for all $x$, $\mu$ is called counting measure; b. If, for some $x_0\in X$, $f$ is defined by $f(x_0)=1$ and $f(x)=0$ for $x\neq x_0$, $\mu$ is called the point mass or Dirac measure at $x_0$. 2) Let $X$ be an uncountable set, and let $\mathcal M$ be the $\sigma$-algebra of countable or co-countable sets. The function $\mu$ on $\mathcal M$ defined by $\mu(E)=0$ if $E$ is countable and $\mu(E)=1$ if $E$ is co-countable is easily seen to be a measure. 3) Let $X$ be an infinite set and $\mathcal M=\mathcal P(X)$. Define $\mu(E)=0$ if $E$ is finite, $\mu(E)=\infty$ if $E$ is infinite. Then $\mu$ is a finitely additive measure but not a measure. Theorem. Let $(X,\mathcal M,\mu)$ be a measure space. a). (Monotonicity) If $E,F\in\mathcal M$ and $E\subset F$, then $\mu(E)\le\mu(F)$. b). (Subadditivity) If $\{E_j\}_{j=1}^\infty\subset\mathcal M$, then $\mu\left(\bigcup_{j=1}^\infty E_j\right)\le\sum_{j=1}^\infty\mu(E_j)$. c). (Continuity from below) If $\{E_j\}_{j=1}^\infty\subset\mathcal M$ and $E_1\subset E_2\subset\cdots$, then $\mu\left(\bigcup_{j=1}^\infty E_j\right)=\lim_{j\to\infty}\mu(E_j)$. d). (Continuity from above) If $\{E_j\}_{j=1}^\infty\subset\mathcal M$, $E_1\supset E_2\supset\cdots$, and $\mu(E_1)<\infty$, then $\mu\left(\bigcap_{j=1}^\infty E_j\right)=\lim_{j\to\infty}\mu(E_j)$. Proof. a) If $E\subset F$, then $\mu(F)=\mu(E)+\mu(F\setminus E)\ge\mu(E)$. b) Let $F_1=E_1$ and $F_k=E_k\setminus\left(\bigcup_{j=1}^{k-1}E_j\right)$ for $k>1$. Then the $F_k$'s are disjoint and $\bigcup_{j=1}^n F_j=\bigcup_{j=1}^n E_j$ for all $n$. Therefore, by a), $\mu\left(\bigcup_{j=1}^\infty E_j\right)=\sum_{j=1}^\infty\mu(F_j)\le\sum_{j=1}^\infty\mu(E_j)$. c) Setting $E_0=\varnothing$, we have $\mu\left(\bigcup_{j=1}^\infty E_j\right)=\sum_{j=1}^\infty\mu(E_j\setminus E_{j-1})=\lim_{n\to\infty}\sum_{j=1}^n\mu(E_j\setminus E_{j-1})=\lim_{n\to\infty}\mu(E_n)$. d) Let $F_j=E_1\setminus E_j$; then $F_1\subset F_2\subset\cdots$, $\mu(E_1)=\mu(F_j)+\mu(E_j)$, and $\bigcup_{j=1}^\infty F_j=E_1\setminus\bigcap_{j=1}^\infty E_j$. By c) then, $\mu(E_1)=\mu\left(\bigcap_{j=1}^\infty E_j\right)+\lim_{j\to\infty}\mu(F_j)=\mu\left(\bigcap_{j=1}^\infty E_j\right)+\mu(E_1)-\lim_{j\to\infty}\mu(E_j)$. Since $\mu(E_1)<\infty$, we may subtract it from both sides to yield the desired result. Remark.
The condition $\mu(E_1)<\infty$ in d) could be replaced by $\mu(E_j)<\infty$ for some $j$, as the first $E_j$'s can be discarded from the sequence without affecting the intersection. However, some finiteness assumption is necessary, as it can happen that $\mu(E_j)=\infty$ for all $j$ but $\mu\left(\bigcap_{j=1}^\infty E_j\right)<\infty$. Definition. If $(X,\mathcal M,\mu)$ is a measure space, a set $E\in\mathcal M$ such that $\mu(E)=0$ is called a null set. By subadditivity, any countable union of null sets is a null set, a fact which we shall use frequently. If a statement about points $x\in X$ is true except for $x$ in some null set, we say that it is true almost everywhere, or for almost every $x$. (If more precision is needed, we shall speak of a $\mu$-null set, or $\mu$-almost everywhere). If $\mu(E)=0$ and $F\subset E$, then $\mu(F)=0$ by monotonicity provided that $F\in\mathcal M$, but in general it need not be true that $F\in\mathcal M$. A measure whose domain includes all subsets of null sets is called complete. Completeness can sometimes obviate annoying technical points, and it can always be achieved by enlarging the domain of $\mu$, as follows. Theorem. Suppose that $(X,\mathcal M,\mu)$ is a measure space. Let $\mathcal N=\{N\in\mathcal M:\mu(N)=0\}$ and $\overline{\mathcal M}=\{E\cup F:E\in\mathcal M\text{ and }F\subset N\text{ for some }N\in\mathcal N\}$. Then $\overline{\mathcal M}$ is a $\sigma$-algebra, and there is a unique extension $\overline\mu$ of $\mu$ to a complete measure on $\overline{\mathcal M}$. Proof. Since $\mathcal M$ and $\mathcal N$ are closed under countable unions, so is $\overline{\mathcal M}$. If $E\cup F\in\overline{\mathcal M}$ where $E\in\mathcal M$ and $F\subset N\in\mathcal N$, we can assume that $E\cap N=\varnothing$ (otherwise, replace $F$ and $N$ by $F\setminus E$ and $N\setminus E$). Then $E\cup F=(E\cup N)\cap(N^c\cup F)$, so $(E\cup F)^c=(E\cup N)^c\cup(N\setminus F)$. But $(E\cup N)^c\in\mathcal M$ and $N\setminus F\subset N$, so that $(E\cup F)^c\in\overline{\mathcal M}$. Thus $\overline{\mathcal M}$ is a $\sigma$-algebra. If $E\cup F$ is as above, we set $\overline\mu(E\cup F)=\mu(E)$. This is well defined, since if $E_1\cup F_1=E_2\cup F_2$ where $F_j\subset N_j\in\mathcal N$, then $E_1\subset E_2\cup N_2$ and so $\mu(E_1)\le\mu(E_2)+\mu(N_2)=\mu(E_2)$, and likewise $\mu(E_2)\le\mu(E_1)$. It is easily verified that $\overline\mu$ is a complete measure on $\overline{\mathcal M}$, and that $\overline\mu$ is the only measure on $\overline{\mathcal M}$ that extends $\mu$. Remark. The measure $\overline\mu$ is called the completion of $\mu$, and $\overline{\mathcal M}$ is called the completion of $\mathcal M$ with respect to $\mu$. “Whenever you meet a ghost, don't run away, because the ghost will capture the substance of your fear and materialize itself out of your own substance… it will take over all your own vitality… So then, whenever confronted with a ghost, walk straight into it and it will disappear.” ~ Alan Watts References: [1] Gerald B. Folland, Real Analysis: Modern Techniques and Their Applications, 2ed, pages 24-27.
[2] Purpose Fairy’s 21-Day Happiness Challenge, http://www.jrmstart.com/wordpress/wp-content/uploads/2014/10/Free+eBook+-+PurposeFairys+21-Day+Happiness+Challenge.pdf.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Isomorphisms Between Normed Linear Spaces Definition: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces. A bounded linear operator $T \in \mathcal B(X, Y)$ is said to be an Isomorphism between $X$ and $Y$ if $T$ is a bijection and there exist constants $m > 0$ and $M > 0$ such that $m \| x \|_X \leq \| T(x) \|_Y \leq M \| x \|_X$ for every $x \in X$. If such an isomorphism exists, then the spaces $X$ and $Y$ are said to be Isomorphic. Proposition 1: Let $(X, \| \cdot \|_X)$, $(Y, \| \cdot \|_Y)$ and $(Z, \| \cdot \|_Z)$ be normed linear spaces. Then: a) $X$ is isomorphic to $X$ (Reflexive Property). b) If $X$ is isomorphic to $Y$ then $Y$ is isomorphic to $X$ (Symmetry Property). c) If $X$ is isomorphic to $Y$ and $Y$ is isomorphic to $Z$ then $X$ is isomorphic to $Z$ (Transitivity Property). The proposition above tells us that "isomorphism" between normed linear spaces is an equivalence relation. Proof of a) Let $I$ be the identity operator. Then $I$ is clearly bijective. If $m = 1$ and $M = 1$ we have that: \begin{align} \quad m\| x \|_X = 1 \| x \|_X \leq \| I(x) \|_X \leq 1 \| x \|_X = M \| x \|_X \end{align} So $X$ is isomorphic to $X$. Proof of b) Suppose that $X$ is isomorphic to $Y$. Let $T$ be an isomorphism from $X$ to $Y$. Then $T$ is bijective and there exist $m > 0$, $M > 0$ such that for every $x \in X$: \begin{align} \quad m \| x \|_X \leq \| T(x) \|_Y \leq M \| x \|_X \end{align} Since $T$ is bijective, $T^{-1}$ is bijective. For each $y \in Y$ there is an $x \in X$ such that $T(x) = y$. So the inequality above translates for every $y \in Y$ to: \begin{align} \quad m \| T^{-1}(y) \|_X \leq \| y \|_Y \leq M \| T^{-1}(y) \|_X \end{align} Hence, for every $y \in Y$ we have that: \begin{align} \quad \frac{1}{M} \| y \|_Y \leq \| T^{-1}(y) \|_X \leq \frac{1}{m} \| y \|_Y \end{align} So $T^{-1}$ is an isomorphism from $Y$ to $X$. Thus $Y$ is isomorphic to $X$.
Proof of c) Suppose that $X$ is isomorphic to $Y$ and $Y$ is isomorphic to $Z$. Then there exist a bijective $S \in \mathcal B(X, Y)$ and a bijective $T \in \mathcal B(Y, Z)$, as well as constants $m > 0$, $M > 0$, $c > 0$, and $C > 0$ such that for all $x \in X$ and for all $y \in Y$: \begin{align} \quad m \| x \|_X \leq \| S(x) \|_Y \leq M \| x \|_X \end{align} \begin{align} \quad c \| y \|_Y \leq \| T(y) \|_Z \leq C \| y \|_Y \end{align} Since $S$ and $T$ are bijective, the function $T \circ S : X \to Z$ is bijective. For every $z \in Z$ there is a $y \in Y$ such that $z = T(y)$. Similarly, for every $y \in Y$ there is an $x \in X$ such that $S(x) = y$. So for every $z \in Z$ there is an $x \in X$ such that $z = T(S(x))$. So the first and second inequalities combine to give: \begin{align} \quad cm \| x \|_X \leq c \| S(x) \|_Y = c \| y \|_Y \leq \| z \|_Z \leq C \| y \|_Y = C \| S(x) \|_Y \leq CM \| x \|_X\\ \end{align} Since $z = T(S(x))$, we have that: \begin{align} \quad cm \| x \|_X \leq \| T(S(x)) \|_Z \leq CM \| x \|_X \end{align} So $T \circ S$ is an isomorphism from $X$ to $Z$, and thus, $X$ is isomorphic to $Z$. $\blacksquare$
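As a concrete numerical illustration of the constants in the proof of c), one can take diagonal operators on $\mathbb{R}^2$ with the Euclidean norm and check the composed bound $cm\|x\|_X \leq \|T(S(x))\|_Z \leq CM\|x\|_X$ on random vectors. The operators and constants below are made up for the sketch; this is a sanity check, not part of the proof:

```python
import numpy as np

# S = diag(2, 3): with the Euclidean norm, m = 2 and M = 3 work,
# since 2*||x|| <= ||S x|| <= 3*||x|| for every x in R^2.
S = np.diag([2.0, 3.0])
m, M = 2.0, 3.0

# T = diag(1, 5): likewise c = 1 and C = 5.
T = np.diag([1.0, 5.0])
c, C = 1.0, 5.0

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=2)
    ts_x = T @ (S @ x)  # the composition T∘S from the proof
    # the bound from the proof: cm*||x|| <= ||T(S(x))|| <= CM*||x||
    assert c * m * np.linalg.norm(x) <= np.linalg.norm(ts_x) + 1e-12
    assert np.linalg.norm(ts_x) <= C * M * np.linalg.norm(x) + 1e-12
print("composition bounds cm =", c * m, "and CM =", C * M, "verified")
```

Note that $cm = 2$ and $CM = 15$ here, and both bounds are attained (at the coordinate axes), matching the fact that $T \circ S = \operatorname{diag}(2, 15)$.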
Polynomial energy decay of a wave–Schrödinger transmission system Boundary Value Problems volume 2018, Article number: 60 (2018) Abstract We study in this paper a wave–Schrödinger transmission system for its stability. By carefully analyzing Green’s functions for the infinitesimal generator of the semigroup associated with the system under consideration, we obtain a useful resolvent estimate on this generator which can be applied to derive the decay property. Our study is inspired by L. Lu & J.-M. Wang [Appl. Math. Lett., 54:7–14, 2016], whose energy decay result is improved upon in our paper. Our method, different from the one used in the previous reference, can be adapted to study stability problems for other 1-D transmission systems. Introduction Thanks to its wide applicability, the Schrödinger equation where \(\varDelta=\sum_{j=1}^{n}\frac{\partial^{2}}{\partial x^{2}_{j}}\) is the Laplacian on \(\mathbb{R}^{n}\), has been receiving extensive attention from the mathematical control community; see [2–8] and the references cited therein. Specifically, the systems described by the Schrödinger equation have been studied extensively for their stability in the past three decades. Among the vast references in this direction, Lagnese [9] proved a stability result via “connecting” it to the stability property of the plate equation \(\partial_{t}^{2}u+\varDelta^{2}u+\text{l.o.t}=0\) (while the study of the stability and stabilization of the plate equation has a relatively long history). Machtyngier and Zuazua [4] studied the boundary and internal stabilization problem via the multiplier method (the main idea originated from stability studies for wave equations). In [7, 10], some collocated boundary stabilization problems were investigated. Zuazua [2] provided a nice survey of the recent studies on the control properties of the Schrödinger equation.
This paper is devoted to the study of the stabilization of the Schrödinger equation via a damped wave equation through a common end point. More precisely, we are concerned in this paper with the system where \(\mathrm{i}=\sqrt{-1}\) is the imaginary unit, and \(k\in\mathbb {R}\setminus\{0\}\) and \(b\in(0,\infty)\) are fixed arbitrarily. System (1.1) was recently studied by Lu and Wang [1] with the intention of better understanding the transmission of the dissipation effect from a damped wave equation to a damping-free Schrödinger equation, where the energy can be exchanged through (1.1)\(_{1}\). The natural phase space for system (1.1) is Let us define an unbounded linear operator A in H by We can prove as in [1] that A is the infinitesimal generator of a strongly continuous semigroup \(\{e^{tA}\}\) on H. Therefore, (1.1) admits for every triple \((u^{0},v^{0},v^{1})\in H\) a unique solution \((u,v)\in\mathbb{S}^{0}\); if further \((u^{0},v^{0},v^{1})\in\mathcal{D}(A)\), then \((u,v)\in\mathbb{S}^{1}\). Here \(\mathbb{S}^{0}\) and \(\mathbb{S}^{1}\) are defined by We associate with system (1.1) the following energy functional: Theorem A (see [1]) A is the infinitesimal generator of a strongly continuous semigroup \(\{e^{tA} \}_{t\in[0,\infty)}\) of contractions on H. In particular, we have: For every triple \((u^{0},v^{0},v^{1})\in H\), the boundary value problem (1.1) admits a unique solution \((u,v)\in\mathbb{S}^{0}\) such that \(u(\cdot,0)=u^{0}\), \(v(\cdot,0)=v^{0}\) and \(\partial_{t}v(\cdot,0)=v^{1}\); if, in addition, \((u^{0},v^{0},v^{1})\in\mathcal{D}(A)\) (see (1.3)), then \((u,v)\in\mathbb{S}^{1}\). The spectrum \(\sigma(A)\) of A consists merely of eigenvalues of A, and is distributed as follows:$$ \left .
\textstyle\begin{array}{l} \lambda_{1j}= - \frac{b}{2} \pm\frac{\sqrt{b^{2}-4j^{2}\pi^{2}}}{2} +\mathcal{O}(j^{-1}), \\ \lambda_{2j}= - \vert j-\frac{1}{2} \vert ^{2} \pi^{2}\mathrm{i}+\mathcal{O}(j^{-1} ),\quad\operatorname{\Re}\mathfrak{e} \lambda_{2j}< 0 \end{array}\displaystyle \right \},\quad\textit{as }j\nearrow\infty. $$(1.7) \(E(t)\searrow0\) as\(t\nearrow\infty\). Note especially that Lu and Wang [1] proved that \(E(t)\) decreases to 0 as \(t\rightarrow+\infty\). But due to the fact that \(\lim_{j\rightarrow\infty}\operatorname{\Re} \mathfrak{e} \lambda_{2j} =0\), \(E(t)\) cannot decay uniformly (see the last section of the paper for a brief proof of this statement). Recently, the non-uniform decay properties have been investigated extensively in the literature for PDEs; see [11, 12]. Our main result gives a more accurate decay rate for the energy \(E(t)\). Theorem 1.1 Let E, defined as in (1.6), be the energy associated with system (1.1). There exists \(M\in(0,\infty)\) such that, for every solution \((u,v)\in\mathbb{S}^{1}\) with \(u(\cdot,0)=u^{0}\), \(v(\cdot,0)=v^{0}\), and \(\partial_{t}v(\cdot,0)=v^{1}\), By [13, Theorem 2.4], this theorem follows immediately from the following theorem. Theorem 1.2 Throughout this paper, C is a generic constant which can assume a different value at each occurrence. The rest of the paper is organized as follows. With the aid of the idea of Green’s functions, we provide in Sect. 2 an explicit formula for the resolvent \(R(\mathrm{i}\gamma;A)\). The main results of this paper are proved in Sect. 3. Some concluding remarks are included in Sect. 4. Green’s functions and the resolvent \(R(\mathrm{i}\gamma;A)\) We would like to calculate in this section the resolvent \(R(\mathrm{i}\gamma;A)\) with \(\gamma\in\mathbb{R}\) by using the idea of Green’s functions. Let \((\phi,\psi,\eta)\in H\).
Consider the equation \((\lambda\mathrm{id}_{H}-A)(f,g,h)= (\phi,\psi,\eta)\) with \(\mathrm{id}_{H}\) denoting the identity operator on H, or equivalently, the boundary value problem (BVP) The Green’s functions for BVP (2.1) should assume the form where \(\mathfrak{h}\) is the Heaviside function, namely and the coefficients \(\sigma_{jk}\), \(\varsigma_{jk}\) (\(j=1,2,3\), \(k=1,2\)), \(\breve{\sigma}_{11}\), \(\breve{\sigma}_{12}\), \(\breve{\varsigma}_{jk}\) (\(j=2,3\), \(k=1,2\)) are to be determined later (see (2.4), (2.5), (2.6), and (2.7)). The Green’s functions should also satisfy This, together with the notion of Green’s functions, implies and By Cramer’s rule, we can deduce from (2.4) that We deduce \(\sigma_{11}\) from (2.5) by Cramer’s rule, where Δ is given by Similarly, we can deduce from (2.5) that \(\sigma_{12}\), \(\varsigma_{11}\), \(\varsigma_{12}\) can be expressed as follows: and We can deduce from (2.6) that \(\sigma_{21}\), \(\sigma_{22}\), \(\varsigma_{21}\), \(\varsigma_{22}\) can be expressed as follows: We can deduce from (2.7) that \(\sigma_{31}\), \(\sigma_{32}\), \(\varsigma_{31}\), \(\varsigma_{32}\) can be expressed as follows: Let us recall that Δ in the above formulas is given explicitly by (2.13). Proof of the main results We seek to obtain in this section the lower bound for \(\vert \varDelta \vert \) (see (2.13) for the definition of Δ). As mentioned in Sect. 2, we need merely consider the situation \(\lambda\in\mathrm{i}\mathbb{R}\). For the sake of clarity, we distinguish λ into two cases.
Case 1 (\(\lambda\in\mathbb{C}\setminus\{0\}\) and \(\lambda= \vert \lambda \vert \mathrm{i}\)) In this case, \(\sqrt{\lambda(\lambda+b)}=\mathfrak{p}( \vert \lambda \vert )+ \mathrm{i}\mathfrak{q}( \vert \lambda \vert )\), where Obviously, we have Mainly using the triangle inequality, we can deduce from (2.13) that But where the “⩾” in the second line follows if and only if \(\vert \lambda \vert \geqslant\frac{b}{\sqrt{48}}\), and \(\mathfrak{p}(\cdot)\) is given by (3.1). And similarly, we have and Therefore, Case 2 (\(\lambda\in\mathbb{C}\setminus\{0\}\) and \(\lambda=- \vert \lambda \vert \mathrm{i}\)) In this case, \(\sqrt{\lambda(\lambda+b)} =\mathfrak{p}( \vert \lambda \vert )- \mathrm{i}\mathfrak{q}( \vert \lambda \vert )\), where \(\mathfrak{p}\) and \(\mathfrak{q}\) are given by (3.1). Substitute this into (2.13) to obtain Here α and β are given explicitly as and satisfy where the “=” in the first line follows from a series of elementary calculations and rearrangements, the “⩾” in the third line follows from (3.2) and Having the above analysis results at our disposal, we are now in a position to prove the main results. Proof of Theorem 1.2 It is equivalent to proving where \((f,g,h)\) and \((\phi,\psi,\eta)\) are related by (2.2). The derivative \(\widehat{ \psi}'\) of ψ̂ reads Since \(g\in H^{1}(0,1)\) satisfies \(g(1)=0\) in the trace sense, it suffices to estimate \(\Vert g' \Vert _{L^{2}(0,1)}\) instead of \(\Vert g \Vert _{H^{1}(0,1)}\). Therefore, we only need to analyze \(\| \widehat{ \psi}'\|_{L^{2}(0,1)}\). By a density argument, we can prove By Young’s inequality (see [14, Theorem 2.24, p. 33]), we have where the “⩽” in the second line follows from in which we used (3.2) when we establish the last “<”. 
Mainly using Hölder’s inequality, we have By some routine calculations, we have whenever \(\lambda\in\mathrm{i}\mathbb{R}\) satisfies \(\vert \lambda \vert \geqslant\max ( \frac{16e^{2b}k^{4}}{b^{2}},\frac{1}{k^{4}},b, \frac{288(b+2)^{2}}{k^{4}b^{2}},\frac{6(2+b)}{b} )\). which, together with (2.2) 3, implies We can also prove where the constant \(C>0\) is independent of \((\phi,\psi,\eta)\) and λ. Now it remains to analyze the term \(\int_{0}^{1}F^{1}(x,\xi)\phi(\xi)\,d\xi\). But To provide in detail a way to analyze \(\int_{0}^{1}F^{1}(x,\xi)\phi(\xi)\,d\xi\), we continue as follows: Employing the same idea, we analyze the rest of (3.23) term-by-term, and then collect all the information together to obtain This, together with (3.22), implies where the constant \(C>0\) is independent of \((\phi,\psi,\eta)\) and λ. Concluding comments and an open question By analyzing carefully Green’s functions for boundary value problems associated with ordinary differential equations (i.e., (2.1)), we prove that the infinitesimal generator of the semigroup associated with system (1.1) satisfies the resolvent estimate (1.9), thereby proving that the energy of system (1.1) decays polynomially. Having a very simple underlying idea, our method is based on Green’s functions and relies on heavy calculations. Our method can be modified to treat other transmission systems of 1-D partial differential equations where one of the equations is damped in the whole interval. However, according to the deductions based on our idea, it seems very hard to find the optimal decay rate of the energy of system (1.1). Therefore, one of our next concerns is to understand better the following question. Open question Could the decay rate \((t+1)^{-1}\) given in estimate (1.8) be improved? As indicated before, the above question seems difficult to solve with merely the method used in this paper. 
To close this section, we prove by a contradiction argument that the energy E (defined in (1.6)) can NOT decay exponentially. Assume to the contrary that \(E(t)\) decays exponentially, or equivalently, there exists a pair \((M_{0},\gamma_{0})\in(0,\infty)^{2}\) such that, for every \(w\in H\), Write, for every \(\lambda\in\mathbb{C}\) with \(\gamma_{0}<\operatorname{\Re} \mathfrak{e} \lambda<0\), Therefore, λ belongs to \(\rho(A)\), the resolvent set of A, and moreover, \(R_{\lambda}=R(\lambda;A)\), the resolvent of A. Thus, we have just proved that λ belongs to \(\rho(A)\) whenever \(\lambda\in\mathbb{C}\) satisfies \(\gamma_{0}<\operatorname{\Re}\mathfrak{e} \lambda<0\). This contradicts (1.7)\(_{2}\). The proof is complete. Notes 1. Throughout this paper, \(\langle{\cdot}\rangle=\sqrt{1+|{\cdot}|^{2}}\). References 1. Lu, L., Wang, J.-M.: Transmission problem of Schrödinger and wave equation with viscous damping. Appl. Math. Lett. 54, 7–14 (2016) 2. Zuazua, E.: Remarks on the controllability of the Schrödinger equation. In: Quantum Control: Mathematical and Numerical Challenges. CRM Proc. Lecture Notes, vol. 33, pp. 193–211. Am. Math. Soc., Providence (2003) 3. Machtyngier, E.: Exact controllability for the Schrödinger equation. SIAM J. Control Optim. 32, 24–34 (1994) 4. Machtyngier, E., Zuazua, E.: Stabilization of the Schrödinger equation. Port. Math. 51, 243–256 (1994) 5. Phung, K.-D.: Observability and control of Schrödinger equation. SIAM J. Control Optim. 40, 211–230 (2001) 6. Lebeau, G.: Contrôle de l’équation de Schrödinger. J. Math. Pures Appl. 71, 267–291 (1992) 7. Guo, B.Z., Shao, Z.C.: Regularity of a Schrödinger equation with Dirichlet control and collocated observation. Syst. Control Lett. 54, 1135–1142 (2005) 8. Dautray, R., Lions, J.L.: Analyse Mathématique et Calcul Numérique pour les Sciences et les Techniques, vol. 1. Masson, Paris (1984) 9. Lagnese, J.: Boundary Stabilization of Thin Plates. SIAM Studies in Appl. Math., vol. 10.
SIAM, Philadelphia (1989) 10. Krstic, M., Guo, B.Z., Smyshlyaev, A.: Boundary controllers and observers for the linearized Schrödinger equation. SIAM J. Control Optim. 49, 1479–1497 (2011) 11. Liu, Z., Rao, B.: Characterization of polynomial decay rate for the solution of linear evolution equation. Z. Angew. Math. Phys. 56, 630–644 (2005) 12. Burq, N.: Décroissance de l’énergie locale de l’équation des ondes pour le problème extérieur et absence de résonance au voisinage du réel. Acta Math. 180(1), 1–29 (1998) 13. Borichev, A., Tomilov, Y.: Optimal polynomial decay of functions and operator semigroups. Math. Ann. 347(2), 455–478 (2010) 14. Adams, R.A., Fournier, J.J.F.: Sobolev Spaces, 2nd edn. Pure and Applied Mathematics (Amsterdam), vol. 140. Elsevier/Academic Press, Amsterdam (2003) Acknowledgements The author is supported by NSFC (#11701050 and #11571244), by JG Program (#2017JG13) of Chengdu Normal University, and by SCJYT Program (#18ZB0098) of Sichuan Province, China. MSC 35Q41 35B30 35B35 35C20 35L10 35P05 Keywords Wave–Schrödinger transmission system Polynomial energy decay Resolvent estimate Green’s functions
I think this one is also a nice answer: Let's prove that $\limsup n a_n \log n = 0$. For this, we note that, by the basic estimates on the partial harmonic sums, $$ \limsup n a_n \log n = \limsup n a_n \sum_{k=k_0}^n \frac{1}{k} $$ for all $k_0 \ge 1$. As $\sum a_n < + \infty$ and, by hypothesis, $n a_n$ is decreasing, $n a_n$ must decrease to zero. But then $a_n/k \le a_k/n$ for $k\le n$. Using this in the $\limsup$ above, we see that $$ \limsup n a_n \log n \le \limsup \sum_{k=k_0}^n a_k = \sum_{k=k_0}^{\infty} a_k $$ As $k_0$ was arbitrary, and as the sum converges (so its tails tend to zero), we conclude the desired result.
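A quick numerical sanity check of the conclusion (not part of the argument; the test sequence is my own choice): take $a_n = 1/(n\log^2 n)$, for which $\sum a_n < \infty$ and $n a_n = 1/\log^2 n$ is decreasing; then $n a_n \log n = 1/\log n$, which indeed tends to $0$:

```python
import math

# Hypothetical test sequence: a_n = 1 / (n * log(n)^2) for n >= 2.
# The series converges (integral test) and n*a_n = 1/log(n)^2 decreases.
def a(n: int) -> float:
    return 1.0 / (n * math.log(n) ** 2)

for n in (10, 10**3, 10**6, 10**9):
    # n * a_n * log(n) simplifies to 1/log(n), which shrinks toward 0
    print(n, n * a(n) * math.log(n))
```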
Considering the problem There were n couples attending a ceremony. Out of the 2n people, exactly m of them ate pizza, chosen randomly over all subsets of 2n people. Let X be the number of couples that didn’t eat pizza. Find an explicit expression for the mean of X. There seems to be an easy and correct solution: take an indicator RV for each couple (whose expectation is easily calculated), and sum them over all n couples. However I took a different approach, using the law of iterated expectation, as follows: Let X be the number of couples that didn't eat pizza, and Y be the number of the men that had pizza, noting that $Y$ is a binomial random variable with $p=\frac m{2n}$. We thus use the law of iterated expectations to calculate the mean of X: \begin{align*} \mathbf E[X] &= \mathbf E[\:\mathbf E[X|Y]\:] \\ \mathbf E[Y] &= m/2 \\\mathbf E[Y^2] &= \mathbf {var}[Y]+\mathbf E[Y]^2 = m/2(1-m/2n)+m^2/4\end{align*} To find $\mathbf E[X|Y]$, we need to determine the expected number of women that didn't eat pizza while their husbands are among the presumed group of men that didn't have pizza. We assume $y$ men had pizza. There are $n$ women, each with probability $\frac{n-y}{n}$ of being the wife of a man that didn't eat pizza. Also, independently, such a woman didn't have pizza with probability $1 -\frac{m-y}{n}$. Thus the probability of a couple not having pizza equals: $$ p=(1-\frac{y}{n})\cdot (1-\frac{m-y}{n}) $$ Thus, we have a binomial random variable with the calculated p, hence the mean is: $$ \mathbf E[X|Y] = np = \frac{(n-Y)(n-m+Y)}{n} $$ \begin{align*} \mathbf E[X] &= \mathbf E[\:\mathbf E[X|Y]\:] \\&=\mathbf E\left[\:\frac{(n-Y)(n-m+Y)}{n}\:\right] \\&= \frac 1n\left(n^2-mn+m\mathbf E[Y]-\mathbf E[Y^2]\right) \\&=n-m+m^2/(4n)+m^2/(4n^2)-m/(2n) \end{align*} which does not lead to the correct answer, assuming there is no mistake in the calculation parts. :) Are the steps taken correct, apart from the calculation parts?
Would you please tell me where the fallacy is, if you see any? PS. I believe that my assumption of X|Y being a binomial with the calculated p is not correct, since it allows, with positive probability, none of the women to be wives of the presumed group of men! So can any alternative assumptions be made to yield the correct expectation?
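For reference, the indicator approach gives the closed form $\mathbf{E}[X] = n\binom{2n-2}{m}/\binom{2n}{m}$, since a fixed couple is untouched exactly when all $m$ pizza eaters come from the other $2n-2$ people. A brute-force enumeration over all size-$m$ subsets (a quick sketch with small made-up parameters) confirms this formula, and hence that the iterated-expectation route above must contain an error:

```python
from itertools import combinations
from math import comb

def exact_mean(n: int, m: int) -> float:
    # P(a fixed couple is untouched) = C(2n-2, m) / C(2n, m); sum over n couples.
    return n * comb(2 * n - 2, m) / comb(2 * n, m)

def brute_mean(n: int, m: int) -> float:
    # Enumerate every subset of m pizza eaters among people 0..2n-1,
    # where couple i is the pair (2i, 2i+1).
    total = count = 0
    for eaters in combinations(range(2 * n), m):
        s = set(eaters)
        total += sum(1 for i in range(n) if 2 * i not in s and 2 * i + 1 not in s)
        count += 1
    return total / count

for n, m in [(3, 2), (4, 3), (5, 4)]:
    assert abs(exact_mean(n, m) - brute_mean(n, m)) < 1e-12
print(exact_mean(3, 2))  # 1.2
```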
There are two different types of questions involved here. The formula you used works for determining the number of proper factors of a positive integer with three distinct prime factors. If $$n = p_1^{p}p_2^{q}p_3^{r}$$ then each factor of $n$ has the form $$m = p_1^{j}p_2^{k}p_3^{l}$$ where $j \in \{0, 1, 2, \ldots, p\}$, $k \in \{0, 1, 2, \ldots, q\}$, and $l \in \{0, 1, 2, \ldots, r\}$. Hence, $j$ can be chosen in $p + 1$ ways, $k$ can be chosen in $q + 1$ ways, and $l$ can be chosen in $r + 1$ ways. Thus, $n$ has $$(p + 1)(q + 1)(r + 1)$$factors, one of which is $n$ itself. Hence, $n$ has $$(p + 1)(q + 1)(r + 1) - 1$$proper factors. The first question is asking for the number of proper factors of $7875$. To determine this, we first factor $7875$ into primes.\begin{align*}7875 & = 3 \cdot 2625\\ & = 3 \cdot 3 \cdot 875\\ & = 3 \cdot 3 \cdot 5 \cdot 175\\ & = 3 \cdot 3 \cdot 5 \cdot 5 \cdot 35\\ & = 3 \cdot 3 \cdot 5 \cdot 5 \cdot 5 \cdot 7\\ & = 3^2 \cdot 5^3 \cdot 7\end{align*}Each factor of $7875$ has the form $3^a5^b7^c$, where $a \in \{0, 1, 2\}$, $b \in \{0, 1, 2, 3\}$, and $c \in \{0, 1\}$. Hence, there are $3 \cdot 4 \cdot 2 = 24$ factors of $7875$ and $3 \cdot 4 \cdot 2 - 1 = 23$ proper factors of $7875$. Since $30 = 2 \cdot 3 \cdot 5$, it has $2 \cdot 2 \cdot 2 - 1 = 8 - 1 = 7$ proper factors. The second question is asking for the number of ordered triples $(a, b, c)$ of positive integers such that $abc = 30$. This is not the same thing. In this question, each prime must appear exactly once among the three factors. If we write\begin{align*}a & = 2^{x_1}3^{y_1}5^{z_1}\\b & = 2^{x_2}3^{y_2}5^{z_2}\\c & = 2^{x_3}3^{y_3}5^{z_3}\end{align*}then\begin{align}x_1 + x_2 + x_3 & = 1 \tag{1}\\y_1 + y_2 + y_3 & = 1 \tag{2}\\z_1 + z_2 + z_3 & = 1 \tag{3}\end{align}since each prime appears once in the factorization of $30$. Equations 1, 2, and 3 are equations in the nonnegative integers, each of which has three solutions, depending on which of the three variables is equal to $1$.
Hence, there are $3 \cdot 3 \cdot 3 = 27$ such ordered triples.
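Both counts can be checked by brute force (a quick sketch; note that $3 \cdot 4 \cdot 2 = 24$ total factors gives $23$ proper factors):

```python
def num_factors(n: int) -> int:
    # Count all divisors of n by trial division.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# 7875 = 3^2 * 5^3 * 7 has (2+1)(3+1)(1+1) = 24 factors, hence 23 proper factors.
assert num_factors(7875) == 24
assert num_factors(7875) - 1 == 23

# Ordered triples (a, b, c) of positive integers with a*b*c = 30:
# pick a | 30, then b | (30/a), and c is forced.
triples = [(a, b, 30 // (a * b))
           for a in range(1, 31) if 30 % a == 0
           for b in range(1, 31) if (30 // a) % b == 0]
assert len(triples) == 27
print(len(triples))  # 27
```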
ED contracts are quoted as: $100-LIBOR_{3M}$, where the three-month LIBOR rate is annualized. For instance, an annualized rate of 3.00% would yield a quote of 97. A one basis point change would now yield a quote of 96.99 or 97.01, resulting in a loss or gain of \$25. By construction, the DV01 is \$25, which effectively results in the fact that the ... Having traded these options for a number of years I have some insight. It’s my belief that those that make a living specifically out of these options do have tree-style models that take into account early exercise. On the other hand, those that have occasional use of these options (such as interest rate derivatives dealers who might use them to hedge otc ... The quoting convention must be explained somewhere in your book. For Eurodollar futures, this convention is 100 - yield; 92 means the yield is 8% per annum, so for one quarter you need to divide this discount by 4 to get the price (100% - 8% × (3 months/12 months) = 100% - 2% = 98%). Number of hikes = (Yield of Dec19 future - current 3M Libor) / 25 bps (1 rate hike). Libor 3M = 1.84%. Price of Dec19 future (ticker EDZ9) = 97.18, i.e. 2.82%. Number of hikes = (2.82 - 1.84)/0.25 ≈ 4. Please note these are very simplifying assumptions, as the 3 months Libor is just a proxy on the Fed Funds Target rate. For a US investor to hedge the bonds the investor would (1) Buy EURUSD in the Spot market, (2) Buy the German bonds with the EUR proceeds, (3) Short EURUSD in the forward market to provide a guaranteed repatriation rate when the bonds mature (thus avoiding FX risk). Currently the two year forward exchange premium/discount for the EURUSD is 532 forward ... Futures trades are settled on a daily basis, meaning that at the end of day your account will be adjusted by your PnL. So of course your payment on T1 is not discounted. However a forward is settled only once, at expiration, hence you discount over the whole duration. In general futures contracts are leverage instruments.
They never require the investment of principal. They do however require margin: you need to fund your account at a futures exchange so that they have insurance against any losses you incur; as an example this might be 2 days' standard volatility. On 1 ED contract for 5bps a day that's probably 10bps margin ... The future is at 92, so the interest rate is 8% per year (!the good old days!) or 2% a quarter. Two percent interest on one million is 20,000. So one future covers the interest on 980,000 initial amount and allows you to repay 1,000,000 at maturity 3 months later. You initially borrow 4,820,000 so you need 4,820,000/980,000 futures (for a three month loan).... Consider a fixed-for-floating swap with reset dates $T_0, \ldots, T_{n-1}$ and payment dates $T_1, \ldots, T_n$, where $0<T_0 < \cdots < T_n$. We assume that the swap exchanges the floating rate payments $L(T_{i-1}; T_{i-1}, T_i)\Delta T_i$ and the fixed rate payments $K\Delta T_i$, for $i=1, \ldots, n$, where $\Delta T_i = T_i -T_{i-1}$. The ... Two things: 1) The eurodollar implied futures rates need to be convexity-adjusted before they can be used as forward rates (futures rate = forward rate + convexity bias). 2) Discounting should be done using the OIS discount curve, not the LIBOR curve. More specifically (and ignoring market conventions such as day count), let's say you're pricing a 1-year ... Jacob nailed it, but I'll add something else that might have been confusing you. You can never buy eurodollar futures part way through the 3-month period. They always have 3 months of life to them, starting just after the expiration of the futures contract. So they will always be paying/receiving 3 months' worth of interest. And therefore Jacob's math ... To answer the first question directly, the swap in question is a 1-year swap of a fixed rate vs 3-month Libor. The swap starts in mid-June (the date of the ED futures expiration) and goes until the next June.
There are 4 quarterly payments. To understand things better, look carefully at Table 8.4 and see how the three columns on the right are computed from ... I just checked Google Finance and the EUR/USD = 1.1190.... for argument's sake let's say it goes up by 0.10 to 1.2190; the percentage change = 1.2190/1.1190-1 = +8.94%. In terms of USD/EUR the beginning quote would be 1/1.1190 = 0.8937 but would be 1/1.2190 = 0.8203 after the EUR/USD went up by 0.10. Therefore the change in terms of USD/EUR = 0.8203/0.8937-1 = -... There are quite a few reasons: Fed funds futures rates and Eurodollar futures rates do not reflect market expectations alone. Technically speaking, a risk-free interest rate is the sum of 1) rate expectations, 2) term premium, and 3) convexity bias. Term premium is typically positive, since investors demand a higher yield for taking on more duration risk (i.e.... hypothetically if we assume that $R_{fra}=R_{fut}-\frac{1}{2} \cdot \sigma^2\cdot T^2$ holds (convexity adjustment) and you are able to observe $R_{fra}$, $R_{fut}$ and $T$, then you can extract the implied volatility of the reference interest rate. If your view on volatility is different then you can make a bet: long convexity position (if you expect volatility ... What you have calculated, correctly as far as I can tell, is a December-starting 1-year compounded Libor 3m forward rate. That's a weird-sounding thing, but it is essentially equivalent to a December-starting 1-year forward swap rate vs Libor 3m. (I've just priced exactly this against a live USD Libor 3m yield curve and I get 97.3 bp.) However, this should ... PAI is the interest paid on the VM. Assuming perfect collateralization (i.e. collateral always reset to the derivative NPV) it is shown (see Piterbarg "funding beyond discounting") that funding is entirely done through the collateral and therefore the derivative should be valued by each party with discounting at the collateral rate rather than at its own ...
Take a 5Y bond, say buying \$10 million dollar notional, and calculate the PV01 using your favourite method for calculating bond risks, e.g. some duration formula. Let's say this PV01 is \$4,500. Now look at the ED strip. Each 3-month contract has a PV01 of \$25 by definition of the instrument. If you purchase 1 each of every contract for 5y then you will have ... There are many ways to solve the Vasicek system; personally, I use a Markov short-rate approach. Without going into the details of proofs: note that the eurodollar future is calculated under the risk-neutral Q measure of the Libor rate at each settlement $t_{fix}$ (on three-month intervals each); the Libor rate $l(t_{fix}) = \frac{1}{tenor} e^{A_{diff} - B_{diff} * r(t_{fix})}... This book describes something that looks like DV01 hedging of the bond with eurodollar contracts. But the reality is that the underlying rates of the 2 kinds of instruments are different. The bond depends on the Treasury yield curve whereas the eurodollars depend on the Libor curve. These 2 curves share some common risk factors; however, there exists a basis ... In my experience, Chinese whispers between IR traders and bank/institution strategy/researchers and then journalists is rife. Hikes/cuts are predicted by traders based on Fed Funds futures or meeting-period FFOIS rates. The same goes for GBP or EUR where the OIS rates dictate the probability of hikes/cuts. Note that your quote didn't directly say the expected ... At least 2 problems here I think. 1) The CME vols are of the implied rate, not the price. Therefore express underlying price and strike in yield terms by taking 100-price and 100-strike. 2) The units of the option price need to be the same as the underlying. For example, an option whose strike is 2.50 has price 0.03, not 3. Try those adjustments. Your description of the contract is incomplete; is this the December call on the September 19 future or something else?
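The DV01-matching in the first answer above is simple division. A sketch with the answer's illustrative \$4,500 bond PV01:

```python
# DV01 hedge sketch: match a 5Y bond's PV01 with a 5-year eurodollar strip.
bond_pv01 = 4_500.0          # dollars per bp, from your bond-risk calculation
contract_pv01 = 25.0         # dollars per bp, fixed by the ED contract spec
contracts_in_5y_strip = 20   # 4 quarterly contracts per year x 5 years

# Buying one of each contract in the strip gives $500 of PV01...
strip_pv01_per_set = contract_pv01 * contracts_in_5y_strip
# ...so matching the bond requires this many of each contract:
sets_needed = bond_pv01 / strip_pv01_per_set   # 9 of each
```

As the answer notes, this only matches total rate sensitivity; the Treasury/Libor basis risk remains unhedged.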
In all cases it's in basis points, as explained in the contract description, and the calculator seems ok. Based on your 3.75 picture it seems to be the option expiring in Sep 19 on the Sep 19 contract. And its price is 3.75 basis points. The futures contract pays off every day during its life, with the last payment at T. There are no payments after that. When Hull is talking about a payment at T+.25 he is referring to the payoff of an investment that is separate from the futures contract. In my opinion, there is no better reference than The Treasury Bond Basis, which I still read cover-to-cover at least once a year. Since the publication of the 2005 edition, the biggest development is the introduction of the WN (ultra-long bond) contract in 2009. The TN (ultra 10-year) contract was introduced in 2016 as well. But the same set of tools and ... 1) Convert the futures prices into forward rates by using forward rate = 100 - futures price. You now have a chain of forward rates, starting with the rate from Sep 16 to Dec 16. 2) You need a rate from today to Sep 16. Use 2-month spot Libor. 3) To calculate a zero-coupon rate from today to any given date, chain together the relevant forward rates, e.g. ... I think you are right that ultimately all dollar movements are reflected in reserve accounts at the Federal Reserve. May I make a couple of additional points: Eurodollar borrowing is really not a close substitute for Fed Funds. First of all, not much volume goes through the term eurodollar markets, so it doesn't have the capacity to replace Fed Funds ...
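The three-step recipe above (futures to forwards, spot stub, chained zero rate) can be sketched as follows; the futures prices and the 2-month spot rate below are made up, and day counts are simplified to round month fractions:

```python
# Bootstrap a zero rate from a spot stub plus futures-implied quarterly
# forwards, using simple interest per period for illustration.
spot_rate_2m = 0.0080                            # hypothetical 2-month spot Libor
futures_prices = [99.20, 99.05, 98.90, 98.75]    # hypothetical quarterly contracts

# Step 1: futures price -> forward rate (ignoring the convexity adjustment).
forwards = [(100.0 - p) / 100.0 for p in futures_prices]

# Step 2: growth factor from today to the first futures expiry (2 months).
growth = 1 + spot_rate_2m * (2 / 12)

# Step 3: chain the quarterly forwards onto the stub.
for f in forwards:
    growth *= 1 + f * 0.25

years = 2 / 12 + 0.25 * len(futures_prices)
zero_rate = (growth - 1) / years    # simple annualized zero rate to the last date
```

A production curve build would add day-count conventions, the convexity adjustment, and OIS discounting as discussed in the other answers.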
Answer The resultant force is $108.2\text{ pounds}$ and the angle is $12.6{}^\circ$. Work Step by Step Magnitude of the first force $=70\text{ pounds}$. The direction of the first force $=\text{S}56{}^\circ \text{E}$. Magnitude of the second force $=50$ pounds. The direction of the second force $=\text{N}72{}^\circ \text{E}$. The vector component along the x-axis (horizontal) is $\text{i}$ and the vector component along the y-axis (vertical) is $\text{j}$. Let ${{F}_{1}}$ and ${{F}_{2}}$ denote the first and second force acting on the object, respectively. Converting the bearings to standard-position angles ($\text{S}56{}^\circ \text{E} \to 326{}^\circ$ and $\text{N}72{}^\circ \text{E} \to 18{}^\circ$), the components become $\begin{align} & {{F}_{1}}=70\cos 326{}^\circ \,\text{i}+70\sin 326{}^\circ \,\text{j}=58\text{i}-39.1\text{j} \\ & {{F}_{2}}=50\cos 18{}^\circ \,\text{i}+50\sin 18{}^\circ \,\text{j}=47.6\text{i}+15.5\text{j} \end{align}$ Adding components, $F=\left( 58+47.6 \right)\text{i}+\left( -39.1+15.5 \right)\text{j}=105.6\text{i}-23.6\text{j}$. So the magnitude of the force vector is $\left\| F \right\|=\sqrt{{{\left( 105.6 \right)}^{2}}+{{\left( 23.6 \right)}^{2}}}=108.2\text{ pounds}$. Also, the angle satisfies $\cos \theta =\frac{105.6}{\left\| F \right\|}$, so $\theta ={{\cos }^{-1}}\frac{105.6}{108.2}=12.6{}^\circ$. Since the $\text{j}$-component is negative, the direction angle measured counterclockwise from the positive x-axis is $360{}^\circ -12.6{}^\circ =347.4{}^\circ$. Therefore, $\theta =12.6{}^\circ$ below the positive x-axis.
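The same computation can be checked numerically, using the standard-position angles from the solution:

```python
import math

# Resultant of two forces given by compass bearings, as in the worked solution:
# S56E -> 326 deg standard position, N72E -> 18 deg standard position.
f1_mag, f1_ang = 70.0, math.radians(326.0)
f2_mag, f2_ang = 50.0, math.radians(18.0)

fx = f1_mag * math.cos(f1_ang) + f2_mag * math.cos(f2_ang)
fy = f1_mag * math.sin(f1_ang) + f2_mag * math.sin(f2_ang)

magnitude = math.hypot(fx, fy)                     # ~108.2 pounds
angle_below_x = math.degrees(math.atan2(-fy, fx))  # ~12.6 deg below the x-axis
```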
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $X$ and $T$ be schemes and assume we have two coherent sheaves $\mathcal{F}$ and $\mathcal{G}$ on $X\times T$ which are flat over $T$; that is, these are families of sheaves parametrized by $T$. Assume we have a nontrivial morphism of sheaves $\phi: \mathcal{F}\rightarrow \mathcal{G}$ and that there is some $t_0\in T$ such that $\phi_{t_0}: \mathcal{F}_{t_0}\rightarrow \mathcal{G}_{t_0}$ is surjective. Can we find an open subset $U\subseteq T$ containing $t_0$, such that for all $t\in U$ the map $\phi_t$ is surjective? What can we say in general about the following set: $Z:=\{t\in T \mid \phi_t: \mathcal{F}_t\rightarrow \mathcal{G}_t \text{ is surjective}\}$? Is it always an open subset of $T$? Or do we need to put some restrictions on $\mathcal{F}$ and $\mathcal{G}$ for this to be true? I read in Geometric Invariant Theory and Decorated Principal Bundles by Schmitt: "The condition that a morphism between vector bundles be surjective is an open condition in a suitable parameter space." But nothing more is said about what this "parameter space" should be.
For inelastic scattering, the cross section can be modeled like this: $$ \left( \frac{\text{d}^2\sigma}{\text{d}\Omega\text{d}E} \right) = \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Mott}\, \left[ W_2(Q^2,\nu) + 2W_1(Q^2,\nu) \tan^2 \frac{\theta}{2} \right] $$ where the first term corresponds to the electric part and the second one (angle dependent!) corresponds to the magnetic one. For spin-$0$ particles, we only have the first term (This will be important later). We can rewrite this in terms of dimensionless structure functions, $F_{1,2}$, $$ \left( \frac{\text{d}^2\sigma}{\text{d}\Omega\text{d}E} \right) = \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Mott}\, \left[ \frac{1}{\nu}F_2(x,Q^2) + \frac{2}{M}F_1(x,Q^2) \tan^2 \frac{\theta}{2} \right] \tag{1} $$ where $$ F_1(x,Q^2) = M\, W_1(Q^2,\nu)\quad\text{and}\quad F_2(x,Q^2) = \nu\, W_2(Q^2,\nu) $$ and $x=\frac{Q^2}{2M\nu}$ is the Bjorken scaling. These dimensionless structure functions do not depend strongly on the momentum transfer $Q^2$, hence one can deduce that the particles in question (i.e. the quarks) are pointlike$^1$. So let's compare Eq. (1) with another cross section, the Rosenbluth cross section: $$ \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Rosenbluth} = \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Mott}\, \left[ \frac{1}{1+\tau}\big[G_E^2(Q^2) + \tau\, G_M^2(Q^2)\big] + 2\tau\,G_M^2(Q^2) \tan^2 \frac{\theta}{2} \right] $$ where $\tau=\frac{Q^2}{4m^2}$. (Notice the small $m$ since we need to distinguish the elastic ($m,x=1$) and inelastic ($M,0<x\leq 1$) case!) 
For pointlike particles, we have $G_E(Q^2)=1$ and $G_M(Q^2)=1$ and thus the Rosenbluth cross section becomes $$ \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Rosenbluth}^\text{(pointlike)} = \left( \frac{\text{d}\sigma}{\text{d}\Omega} \right)_\text{Mott}\, \left[ 1 + 2\tau\,\tan^2 \frac{\theta}{2} \right] \tag{2} $$ Since we know that we describe pointlike particles in Eq. (1), we can compare Eq. (1) to Eq. (2). We shall do this by considering the ratio of the magnetic part to the electric part: $$ \begin{align}\frac{\text{magnetic part}}{\text{electric part}}\text{ in Eq. (1)} &= \frac{2\nu\,F_1(x,Q^2)\tan^2\frac{\theta}{2}}{M\, F_2(x,Q^2)} \tag{3a}\\\frac{\text{magnetic part}}{\text{electric part}}\text{ in Eq. (2)} &= 2\tau\, \tan^2\frac{\theta}{2} = \frac{Q^2}{2m^2}\tan^2\frac{\theta}{2} \tag{3b} \end{align}$$ Since Eq. (3) corresponds to the elastic scattering case, we have the relation $Q^2=2m\nu$ (since $x=1$ here). We can use this to write $$ \begin{align}\frac{\text{magnetic part}}{\text{electric part}}\text{ in Eq. (2)} &= \frac{2\nu^2}{Q^2}\tan^2\frac{\theta}{2} \tag{3b again} \end{align}$$ Let us now finally set Eqs. (3a) and (3b) equal: $$ \frac{2\nu\,F_1(x,Q^2)\tan^2\frac{\theta}{2}}{M\, F_2(x,Q^2)} = \frac{2\nu^2}{Q^2}\tan^2\frac{\theta}{2} \tag{4}$$ As you can verify, using the definition of the Bjorken scaling $x=\frac{Q^2}{2M\nu}$, we can modify Eq. (4) to look like this: $$ \frac{1}{2x}F_2(x,Q^2) = F_1(x,Q^2) $$ Actually, we have to include something: remember that I mentioned that for spin-0 particles, $F_1(x,Q^2)=0$? If we include this, we get: $$ \frac{1}{2x}F_2(x,Q^2) = \begin{cases} 0 & \text{for spin-0}\\ F_1(x,Q^2) & \text{for spin-1/2}\end{cases} $$ And here we have it: since the proton's structure functions obey the lower equation, we can conclude that its constituents are in fact spin-1/2 particles! $^1$Density distribution and structure function are related by a Fourier transformation. 
If the structure function is almost constant w.r.t. $Q^2$, then the density distribution must be almost a delta function.
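The final identity (the Callan-Gross relation) is easy to sanity-check: solving Eq. (4) for $F_1/F_2$ gives $M\nu/Q^2$, which equals $1/2x$ by the definition of $x$. A quick numerical check with arbitrary, hypothetical kinematics:

```python
# Numerical sanity check of the algebra leading from Eq. (4) to F_2 = 2x F_1.
# Arbitrary kinematics (GeV units): proton mass M, energy transfer nu, Q^2.
M, nu, Q2 = 0.938, 10.0, 4.0

x = Q2 / (2 * M * nu)   # Bjorken scaling variable

# Eq. (4) with the common tan^2(theta/2) factors cancelled gives:
f1_over_f2_from_eq4 = M * nu / Q2
# ...which should equal 1/(2x), i.e. F_2 = 2x F_1 (Callan-Gross):
f1_over_f2_callan_gross = 1 / (2 * x)
```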
Mantis Shrimp Claws will do quite nicely. Carbon is extremely abundant on Earth's surface and presumably any other Earth-like planet that the author may be working on. Given the many forms that carbon can take, from the ultra-soft graphite to the ultra-hard diamond, it should be able to satisfy your needs. Characteristics of good armor The element(s) that form this armor are important but the construction/organization of the elements is far more important. If the armor is too rigid, it will shatter. If it's not hard enough, then the bullet will pass through. (An example of a behavior we don't want is spider silk. True, it's stronger than steel at that scale but it's also super stretchy. Stopping a bullet on the other side of the target isn't very useful.) The goal will be to spread the bullet's kinetic energy over a long enough period that the armor plate can handle it. Mantis shrimp have ridiculously resilient armor. They have to, since they hit harder than most anything else in the animal kingdom. Note the many layers of fibers at slight angles to each other. In this configuration, a penetration that breaks through between two parallel fibers (the weakest configuration) runs into the next layer below, which is oriented more towards the longitudinal direction (which is far, far stronger). At every layer, the bullet is forced to expend lots of energy breaking the bonds of the fibers along their strongest axis. Fast Replacement Armor Requirement 4 is the most interesting. Growing plates like a turtle shell would certainly be effective from an armor perspective; however, these don't grow quickly. Human skin doesn't offer any kind of armor capabilities but it does grow very quickly. Skin wounds can heal in a month or less (depending on various factors). Clearly we need something that will grow fast and ideally is always growing. If the armor is damaged, we don't want to have to keep carrying it around for longer than we have to.
Lots of animals have disposable "armor" to one degree or another. Humans have their skin. Porcupines have their quills (which are replaced). Snakes, lizards, crabs and lobsters have their skins. Let's assume this creature is a carnivore so that it can afford the higher metabolic costs of replacing all its armor in a month or so. Perhaps as a way to reduce this metabolic load, the creature swallows the old armor scales, which are then broken down into basic components for the armor-building cells to use. Alternatively, the outer layers can just flake off after exposure to oxygen for some period. This gives the armor a natural decay rate and prevents the armor from getting too thick. Natural variation in the armor breakdown proteins could lead to some creatures with thicker or thinner armor than others. Hey cool! We just invented a way to get heavy and light versions of the same creature suited for different battlefield duties without having to breed different versions. Win! Yeah, but how good is it? Mantis shrimp 'fists' are known to withstand 4 gigapascals. This is about 40k bar or 1/90th the pressure in Earth's core. Dang. I'm going to assume a NATO 5.56x45mm round. It's super common and well understood, with a muzzle velocity of 990 m/s. Kinetic energy is: $$KE=\frac{1}{2}mv^2$$ The work done over the stopping distance is $$\Delta E=F_{\parallel}\, d$$ therefore $$F=\frac{\Delta E}{d}$$ Assume $E_0=0\text{ J}$, therefore $F=\frac{mv^2}{2d}$. $$P=\frac{F}{A}$$$$A=\pi r^2$$ therefore $P=\frac{mv^2}{2d\pi r^2}$, where $P$ is the pressure in pascals, $m$ is the mass of the bullet in kilograms, $v$ is the muzzle velocity in meters per second, $d$ is the distance the bullet travels in meters, and $r$ is the radius of the bullet. With all that, we get that the pressure exerted on the armor is $$\frac{.004\cdot990^2}{2\cdot d\cdot\pi\cdot0.00285^2}\approx \frac{77}{d}\text{ Megapascals (MPa)}$$ The farther away the gun, the less pressure exerted.
With simple algebra, we can find that you would have to fire from just under 2 cm away to reach a breaking pressure of 4 GPa. This is only an approximation, since these calculations don't include angle of impact, thickness of armor, ablation effects, the liquid characteristics of metals at high speeds and small time frames, tungsten at high speeds, possible pyrophoric effects, and so on. Turn it up to 11 So far we've been talking about common mantis shrimp armor. Cool. Let's turn it up to 11 by replacing whatever carbon/calcium materials are in their fists with carbon nanotubes. Given that the theoretical maximum for carbon nanotubes is approximately 100 GPa (about 25x our baseline), replacing a substantial portion of the default fist matrix should yield impressive strength gains. I'm no materials engineer so I can't prove it. I only play one on the internet.
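The "just under 2 cm" figure follows directly from the pressure formula and the 4 GPa threshold. A sketch with the answer's own numbers:

```python
import math

# Pressure exerted by a 5.56x45mm round as a function of travel distance d,
# using the simplified P = m v^2 / (2 d pi r^2) model from the answer above.
m = 0.004     # bullet mass, kg
v = 990.0     # muzzle velocity, m/s
r = 0.00285   # bullet radius, m

def pressure(d):
    """Pressure in pascals at travel distance d (meters)."""
    return m * v**2 / (2 * d * math.pi * r**2)

# Distance at which the pressure falls to the 4 GPa breaking threshold:
d_break = m * v**2 / (2 * 4e9 * math.pi * r**2)   # ~0.019 m, just under 2 cm
```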
MercuryDPM Beta This page provides a short overview of the mathematics and models used in the code. We first introduce the basic components of a DPM simulation (global parameters and variables, properties of particles, walls, boundaries, force laws and interactions), and provide the notation used henceforth. Then we explain the basic steps taken in each simulation (setting initial conditions, looping through time, writing data). The document is structured similarly to the code, so the user can easily relate the relevant classes to their purpose. Throughout this text, we include references to the relevant classes and functions, so the user can view the implementation. In its most basic implementation, a DPM simulation simulates the movement of a set of particles \(P_i,\ \ i=1,\dots,N\), each of which is subject to body forces \(\vec f_i^\mathrm{body}\) and interacts with other particles via interaction forces \(\vec f_{ij}\) and torques \(\vec t_{ij},\ \ i=1,\dots,N_\mathrm{interaction}\). While the body forces are assumed to act in the particle's center of mass \(\vec r_i\), the interaction forces act at the contact point \(\vec c_{ij}\) between the particles. Thus, each particle is subjected to the total force and torque \[\vec f_i = \vec f_i^\mathrm{body}+\sum_{j=1}^N \vec f_{ij},\quad \vec t_i = \sum_{j=1}^N \left[\vec t_{ij} + (\vec{c}_{ij}-\vec{r}_i)\times \vec f_{ij}\right].\] To simulate the movement of each particle, the particles' initial properties are defined, then Newton's laws are applied, resulting in the equations of motion, \[m_i \ddot{\vec{r}}_i = \vec f_i,\quad I_i \dot{\vec{\omega}}_i = \vec t_i,\] where \(m_i\), \(I_i\), \(\omega_i\) are the mass, moment of inertia and angular velocity of each particle (we currently do not use the particles' orientation, as all particles are spherical). The default body force applied to a particle is due to gravity, \(\vec f_i^\mathrm{body}=m_i \vec{g}\), with \(\vec g\) the gravitational acceleration.
The interaction forces can be quite varied, but usually consist of a repulsive normal force due to the physical deformation when two particles are in contact, and possibly an adhesive normal force, \[\vec f_{ij} = (f_{ij}^{n,rep}-f_{ij}^{n,ad})\vec n_{ij} + \vec f_{ij}^{t},\] with \(\vec n_{ij}\) the normal direction at the contact point, which for spherical particles is given by \(\vec n_{ij}=\frac{\vec r_j-\vec r_i}{|\vec r_j-\vec r_i|}\). Further, a set of walls \(W_j,\ \ j=1,\dots,N_\mathrm{wall}\) can be implemented, which can apply an additional force \(\vec f_{ij}\) and torque \(\vec t_{ij}\) to each particle \(P_i\). A typical, flat wall is defined by \[\mathrm{W}_j = \{\vec{r}:\ \vec{n}_j \cdot (\vec{r} - \vec{r}_j) \leq 0 \},\] with \( \vec{n}_j\) the unit vector pointing into the wall and \(\vec{r}_j\) a point on the wall. However, many other wall types are possible, such as intersections of planar walls (e.g. polyhedral walls), axisymmetric walls (e.g., cylinders or funnels), or even complex shapes (coils and helical walls). More complex simulations include further boundary conditions \(B_j,\ \ j=1,\dots,N_\mathrm{boundary}\), such as periodic walls and regions where particles get destroyed or added. Time integration is done using velocity Verlet; see http://en.wikipedia.org/wiki/Verlet_integration#Velocity_Verlet for details. The implementation can be found in DPMBase::solve. In the following sections, we give a short overview of the parameters that can be specified in MercuryDPM, and introduce the variable names which are used throughout this documentation. We also include links to the implementation of each object/parameter. For the more complex types of objects (particles, species, interactions, walls, and boundaries), where multiple implementations exist, we give an overview of the common parameters and discuss the most common types. Each simulation has the following global parameters: These variables are all implemented in the class DPMBase.
This class is a base class of Mercury3D, which is the typical class used in Driver code, such as the tutorials [add link]. The sets of particles, species, walls ... mentioned above are implemented as handlers, which contain a vector of pointers (=links) to objects. E.g., WallHandler contains a set of pointers to walls, which are possibly of different type (flat, cylindrical, polyhedral, ...). The same is true for the ParticleHandler, SpeciesHandler, InteractionHandler, and BoundaryHandler. Next, we introduce the basic types of objects these handlers can contain. There are several wall types: InfiniteWall, IntersectionOfWalls, AxisymmetricIntersectionOfWalls, Screw, and Coil. All these wall types are derived from a common class, BaseWall, which has the following properties: There are several boundaries, which can be categorized into three types: There are several species, which are built up from four building blocks (see Species for more details): The most commonly used normal force species, LinearViscoelasticNormalSpecies contains AdhesiveForceSpecies: Describes the short-range, adhesive normal contact force \(f^{n,ad}\), which is added to the normal contact force. Several adhesive forces are implemented: FrictionForceSpecies: Describes the tangential frictional contact force. Several friction forces are implemented: Note that in the Coulomb friction model, the frictional yield force only depends on the contact force, e.g. \(|f^{sl}|\leq\mu^{sl}|f^{n,c}|\), which is why the two normal forces are split. There are several Interaction types, one for each Species. While the Species contain the parameters of the contact model, the Interaction contains the variables stored for an individual interaction between a particle (or particle-wall) pair. All interaction types have the following common properties: For collisions between particle \( \mathrm{P}_i \) and wall \( \mathrm{w}_j \), some variables are defined differently:
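The velocity-Verlet time integration mentioned above (implemented in DPMBase::solve) follows the standard update. A minimal one-particle sketch under gravity only, for illustration; this is not MercuryDPM's actual code, which additionally accumulates interaction forces and torques each step:

```python
# Minimal velocity-Verlet integrator for one particle under gravity alone.
g = -9.81          # gravitational acceleration, m/s^2
dt = 1e-3          # time step, s
x, v = 0.0, 0.0    # initial position and velocity

for _ in range(1000):                 # integrate to t = 1 s
    a = g                             # a = f_i / m_i; constant here
    x += v * dt + 0.5 * a * dt**2     # position update
    v_half = v + 0.5 * a * dt         # half-step velocity
    a_new = g                         # recompute forces at the new position
    v = v_half + 0.5 * a_new * dt     # complete the velocity update

# For a constant force the scheme is exact: x = 0.5 * g * t^2 = -4.905 m
```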
Table of Contents Propagation of Error in Evaluating Functions Examples 1 Recall from the Propagation of Error in Evaluating Functions page that if $y = f(x)$ is a differentiable function for $x \in [a, b]$ and $x_A, x_T \in [a, b]$ then we can approximate the error of $f(x_A)$ to $f(x_T)$ with either of the formulas (1) $\mathrm{Error}(f(x_A)) \approx f'(x_T)(x_T - x_A) \approx f'(x_A)(x_T - x_A)$. More generally, we can approximate the error of $f(x_A)$ to $f(x_T)$ by $\mathrm{Error} (f(x_A)) \approx f'(\xi) (x_T - x_A)$ where $\xi$ is between $x_T$ and $x_A$. Furthermore, we can approximate the relative error of $f(x_A)$ to $f(x_T)$ with the following formula: (2) $\mathrm{Rel}(f(x_A)) \approx \frac{f'(\xi)(x_T - x_A)}{f(x_T)}$. We will now look at some examples of applying the formulas above. Example 1 Approximate the error
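Independently of the worked example (which is cut off here), the error formula itself is easy to check numerically. A sketch with a hypothetical function and values chosen for illustration:

```python
import math

# Error propagation sketch: f(x) = x^2, with true value x_T = sqrt(2)
# and approximate value x_A = 1.414 (hypothetical example).
f = lambda t: t * t
fprime = lambda t: 2 * t

x_T = math.sqrt(2.0)
x_A = 1.414

true_error = f(x_T) - f(x_A)                  # what the formula estimates
estimated_error = fprime(x_A) * (x_T - x_A)   # Error(f(x_A)) ~ f'(xi)(x_T - x_A)
relative_error = estimated_error / f(x_T)     # Rel(f(x_A))
```

The estimate agrees with the true error to within the size of the second-order term, as the theory predicts.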
It is because binaries of lower mass have a chirp frequency and amplitude that evolve much more slowly than for a binary of higher mass with the same orbital period. This is because the rate of orbital energy loss due to GWs is much higher for a higher-mass binary. There is no big difference between the GW signal produced by merging black holes and neutron stars of similar mass until just before the merger, when the neutron stars can tidally deform. This point was reached at a frequency beyond LIGO's sensitivity, and in fact the LIGO GW observations were not capable on their own of distinguishing between NS/BH binary possibilities. The difference between this LIGO GW signal and the previous BH binary detections is just due to the total mass of the systems involved, not their nature. There are a few things going on here. The amplitude of the signal from a merging binary is $$h \sim 10^{-22} \left(\frac{M}{2.8M_{\odot}}\right)^{5/3}\left(\frac{0.01{\rm s}}{P}\right)^{2/3}\left(\frac{100 {\rm Mpc}}{d}\right),$$ where $M$ is the total mass of the system in solar masses, $P$ is the instantaneous orbital period in seconds and $d$ is the distance in 100s of Mpc. $h \sim 10^{-22}$ is a reasonable number for the sensitivity of LIGO to gravitational wave strain where it is most sensitive (at frequencies of 30-1000 Hz). The merging black hole sources previously seen by LIGO were much more massive than the merging neutron star binary, by about a factor of 10-20. On the other hand, they were more than a factor of 10 more distant. Thus at a similar frequency (i.e. at the same orbital period) the neutron star merger produced a slightly lower amplitude than the black hole mergers. Note though that the amplitude gets bigger as the period gets smaller (and the frequency gets bigger) and the binary inspirals.
The time evolution of the frequency depends on the chirp mass, which is given by $$M_C = \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}$$ and the rate of change of frequency is $$\frac{df}{dt} = \frac{96 \pi^{8/3}}{5} f^{11/3} \left(\frac{GM_C}{c^3}\right)^{5/3}.$$ So at a given frequency, the rate of change of frequency and the rate of change of GW amplitude just depend on the chirp mass; the timescale to merger from a given frequency can be approximated as $\tau \sim f/\dot{f} \propto M_C^{-5/3}$. For the merging neutron star binary, $M_C = 1.19M_{\odot}$. For the black hole binaries found so far, $9 < M_C/M_{\odot} <30$, so the frequency and amplitude evolution of these is far more rapid. For the black hole binaries this means that as they become visible in LIGO's sensitive frequency range ($> 20$ Hz), they are orbiting with a period of 0.1 s, but their frequency is increasing at 50-200 times the rate at which it is increasing in a neutron star binary at the same orbital period. Hence the comparative timescales to merger of 1 s vs 100 s.
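The chirp-mass formulas above are straightforward to evaluate. A sketch in SI units (the specific frequency of 100 Hz is chosen for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def chirp_mass(m1, m2):
    """Chirp mass M_C, in the same units as the input masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def dfdt(f, mc_solar):
    """Rate of change of GW frequency (Hz/s) at frequency f (Hz)
    for a chirp mass given in solar masses."""
    k = G * mc_solar * M_SUN / c ** 3
    return (96.0 * math.pi ** (8.0 / 3.0) / 5.0) * f ** (11.0 / 3.0) * k ** (5.0 / 3.0)

mc_ns = chirp_mass(1.4, 1.4)    # ~1.22 solar masses for an equal-mass 1.4+1.4 binary

# At the same GW frequency, a heavy BH binary chirps far faster than the NS binary:
ratio = dfdt(100.0, 30.0) / dfdt(100.0, mc_ns)
```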
Below, I've focused on PA when lots of other theories would do. If replacing PA with a different theory leads to a more answerable question, feel free to do so. The standard system of a nonstandard model $M$ of PA is the set of sets of natural numbers coded by elements of $M$: $$SS(M)=\{X\subseteq\omega: \exists a\in M\forall x\in\mathbb{N}(x\in X\iff p_x\vert a)\}.$$ (Here "$p_i$" denotes the $i$th prime.) An easy overspill argument shows that $SS(M)$ is always a Scott set,$^1$ and Scott proved that every countable Scott set is the standard system of some nonstandard model.$^2$ We can define an analogous notion$^3$ of standard system with $\mathbb{N}$ replaced with more general initial segments: if $M\subsetneq_{end} N$ are models of PA, then we let $SS_M(N)$ be the set of elements of $M$ coded by elements of $N$: $$SS_M(N)=\{X\subseteq M: \exists a\in N\forall x\in M(x\in X\iff p_x\vert a)\}.$$ (Here "$p_i$" denotes the $i$th prime in the sense of $N$, or equivalently in this case of $M$.) The same overspill argument shows that the second-order structure $(M, SS_M(N))$ is a model of WKL. My question is whether the analogue of Scott's theorem holds, at least for countable $M$: Question.Suppose $M$ is a countable model of PA and $\mathcal{X}$ is a countable family of subsets of $M$ such that $(M, \mathcal{X})\models$ WKL. Is there an $N\supsetneq_{end} M$ such that $SS_M(N)=\mathcal{X}$? The problem here is that in the usual case, we don't need to worry about $\subseteq$ versus $\subseteq_{end}$, whereas that poses a real problem here. $^1$A set of sets of natural numbers closed under join and Turing reducibility such that for every infinite binary tree in the set, an infinite path through that tree is also in the set. Equivalently, the second-order part of an $\omega$-model of WKL. $^2$The generalization of Scott's theorem to uncountable Scott sets is wildly open. 
Knight and Nadel generalized Scott's theorem to Scott sets of cardinality $\aleph_1$, thus solving the problem under the assumption of CH, but their argument breaks down immediately for models of cardinality $\aleph_2$ or greater. Meanwhile, Gitman has recently shown that the question has an affirmative answer for a class of Scott sets characterized in terms of forcing, assuming the set-theoretic hypothesis PFA (which contradicts CH); however, my understanding is that her arguments really only apply to that particular class of Scott sets and that the hypothesis of PFA is currently necessary. $^3$I've had trouble finding much literature on this, especially compared to the usual notion of a standard system, but Kossak and Schmerl's book does contain some information about them (especially in Chapter $7$).
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Formatting numbers in Lua Posted on September 9, 2019 I often use Lua to generate solutions for homework assignments. Ideally, I want the solution to look exactly how it would look if it were written by hand. But this can be trickier than it appears at first glance. In this post, I’ll explain the issue and how I solve it. To illustrate the formatting issue, let me consider an example of writing the solution of how to find roots of a quadratic equation. Let’s start with a simple example. First let’s define \defineenumeration[question]\defineenumeration[solution] Then consider the following example. \startquestion Find the roots of $x^2 + 5x + 6 = 0$.\stopquestion\startsolution Let's start by computing the discriminant. \startformula Δ = b^2 - 4ac = 1 \stopformula Since $Δ > 0$, the roots are given by \startformula r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = -2 \text{ and } -3. \stopformula\stopsolution Now suppose I want to generate a homework assignment with four or five such questions. In order to ensure that I don’t make any mistakes, I generate the questions and the answers using Lua. For simplicity, let’s assume that both roots are real. Then, I can use string.formatters to easily generate the assignment and the solution. \startluacodelocal formatters = string.formatterslocal question = formatters[ [[\startquestion Find the roots of $%s x^2 + %s x + %s = 0$.\stopquestion]] ]local solution = formatters[ [[\startsolution Let's start by computing the discriminant. \startformula Δ = b^2 - 4ac = %s. \stopformula Since $Δ > 0$, the roots are given by \startformula r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = %s \text{ and } %s.
  \stopformula
\stopsolution]] ]
local sqrt = math.sqrt
assignment = assignment or { }
assignment.roots = function(a, b, c)
  context(question(a == 1 and "" or a, b, c))
  D = b^2 - 4*a*c
  r1 = (-b + sqrt(D))/(2*a)
  r2 = (-b - sqrt(D))/(2*a)
  context(solution(D, r1, r2))
end
\stopluacode

Then, in the homework assignment, I can generate the above question and its solution using:

\ctxlua{assignment.roots(1, 5, 6)}

The above generated solution works well when all numbers are integer valued. However, the generated solution is not ideal when some of the calculations result in floats. For example:

\ctxlua{assignment.roots(1, 4, 2)}

will generate:

\startquestion
  Find the roots of $x^2 + 4x + 2 = 0$.
\stopquestion
\startsolution
  Let's start by computing the discriminant.
  \startformula
    Δ = b^2 - 4ac = 8.0.
  \stopformula
  Since $Δ > 0$, the roots are given by
  \startformula
    r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = -0.5857864376269 \text{ and } -3.4142135623731.
  \stopformula
\stopsolution

Technically, the solution is correct. But one never types a float with a precision of 12 decimal places. Of course, I could change the %s in the template to %.3f as follows:

local solution = formatters[ [[
\startsolution
  Let's start by computing the discriminant.
  \startformula
    Δ = b^2 - 4ac = %.3f.
  \stopformula
  Since $Δ > 0$, the roots are given by
  \startformula
    r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = %.3f \text{ and } %.3f.
  \stopformula
\stopsolution]] ]

For the second problem, this will generate:

\startquestion
  Find the roots of $x^2 + 4x + 2 = 0$.
\stopquestion
\startsolution
  Let's start by computing the discriminant.
  \startformula
    Δ = b^2 - 4ac = 8.000.
  \stopformula
  Since $Δ > 0$, the roots are given by
  \startformula
    r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = -0.586 \text{ and } -3.414.
  \stopformula
\stopsolution

which is partially acceptable (when typesetting the solution by hand, one would use Δ = 8 rather than Δ = 8.000), but for the first problem we now get:

\startquestion
  Find the roots of $x^2 + 5x + 6 = 0$.
\stopquestion
\startsolution
  Let's start by computing the discriminant.
  \startformula
    Δ = b^2 - 4ac = 1.000.
  \stopformula
  Since $Δ > 0$, the roots are given by
  \startformula
    r_{1,2} = \dfrac{ -b \pm \sqrt{Δ} }{ 2a } = -2.000 \text{ and } -3.000.
  \stopformula
\stopsolution

which is not ideal. So, to typeset such examples, I need to format numbers as follows: if the number is an integer, use %d; if the number is a float, use %.3f. However, once you think about it, the above spec is not complete. In addition, what we want is: if %.3f gives "0.000", use (the TeX equivalent of) %.3e. Who knew simply formatting numbers could be so complicated! Anyways, here is a simple function that does this formatting:

local mathtype, floor = math.type, math.floor
local format, strlen, match = string.format, string.len, string.match
formatnumber = function(a)
  if mathtype(a) == "integer" or floor(a) == a then
    return format("%d", a)
  else
    local str = format("%s", a)
    local fmt = format("%.3f", a)
    local exp = format("%.3e", a)
    if strlen(str) < strlen(fmt) then
      return str
    elseif fmt == "0.000" or fmt == "-0.000" then
      local x = match(exp, "(.*)e")
      local y = match(exp, "e(.*)")
      return format("%s \\times 10^{%d}", x, y)
    else
      return fmt
    end
  end
end

We first check if the number is an integer (using math.type(a) == "integer") or if the number is a float of the type 8.0 (using math.floor(a) == a), and if so, format the number using %d. We next check if casting the number to a string leads to a shorter string than formatting it using %.3f. If so, we format it using %s. This ensures that a number like 8.2 is typeset as 8.2 rather than 8.200. Finally, we check if formatting the number using %.3f gives "0.000" (or "-0.000", for small negative numbers), and if so, we typeset the TeX equivalent of %.3e. Phew!
To use this code, I use %s in my templates, and then call the template as

context(solution(formatnumber(D), formatnumber(r1), formatnumber(r2)))

This gives me correct formatting in all the edge cases.
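For comparison outside ConTeXt, the same three-way rule translates almost directly into Python. Here is a rough sketch (the name format_number and the prec parameter are mine, not from the post; Python's repr plays the role of Lua's %s):

```python
def format_number(a, prec=3):
    """Format a number the way a hand-written solution would:
    integers plainly, short floats as-is, long floats rounded,
    and scientific notation when rounding collapses to zero."""
    if float(a) == int(a):                 # integer, or a float like 8.0
        return "%d" % a
    s = repr(float(a))                     # the "cast to string" candidate
    f = "%.*f" % (prec, a)                 # the rounded candidate
    if len(s) < len(f):                    # e.g. 8.2 rather than 8.200
        return s
    if float(f) == 0.0:                    # rounding lost the number entirely
        mantissa, exponent = ("%.*e" % (prec, a)).split("e")
        return r"%s \times 10^{%d}" % (mantissa, int(exponent))
    return f
```

Usage mirrors the Lua version: format_number(8.0) gives "8", format_number(8.2) gives "8.2", and a tiny value like 0.0001234 comes out as the TeX string 1.234 \times 10^{-4}.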
$\def\RR{\mathbb{R}}$I finally managed to do the routine computation Ted Shifrin describes, and I'm writing up the details for the record. It turns out that the Gauss map is a bit of a red herring. Rather, let $X$ and $Y$ be two surfaces in $\RR^3$ and let $f: \RR \to X$ and $g: \RR \to Y$ be two curves which have the following interesting property: For all $t$, the tangent planes $T_{f(t)}X$ and $T_{g(t)}Y$ are parallel. Let $\theta(t)$ be the angle between $f'(t)$ and $g'(t)$. Then $d \theta$ is the difference between the geodesic curvature $1$-forms. (This is a slightly vague statement that will be made better below.) In particular, $Y$ could be $S^2$, and $g$ could be the composition of $f$ with the Gauss map $X \to S^2$, but this case isn't special in any way. First, I need to explain how to write the geodesic curvature as a $1$-form. This is surely well known to experts, but most of the books I've been looking at define it as a number, and I had to convert to $1$-forms before I understood what was going on. Let $C \subset X \subset \RR^3$ be a curve in an oriented surface in $3$-space. For $x \in X$, let $n_x$ be the unit normal vector. We define a $1$-form $\kappa_{C/X}$ on $C$ as follows: Let $\phi: \RR \to C$ be a parametrization of $C$ (or a segment of $C$). Let $\phi'(t) = |\phi'(t)| u(t)$, so $u(t)$ is a unit vector. Write $u'(t)$ for the derivative of $u(t)$ with respect to $t$. We define $\kappa_{C/X}$ so that $\phi^{\ast} \kappa = \det(u(t)\ u'(t)\ n_{\phi(t)})\, dt$. Clearly, if $\phi$ is the arc length parametrization, then $\kappa$ is $(\mbox{geodesic curvature})\, d(\mbox{arc length})$. But I claim that this definition works for any parametrization. The reason is as follows: Suppose we change to another parametrization, with parameter $s$. Then $u$ and $n$ don't change. We have $du/dt = (du/ds) (ds/dt)$. But $dt = (ds/dt)^{-1} ds$. Since $\det$ is linear in each column, the factors of $ds/dt$ cancel.
I also need Lemma Let $a$, $b$ and $c$ be three vectors in $\RR^3$ such that $a$ is a unit vector and $b$ is normal to $a$. Then $(a \times b) \cdot (a \times c) = b \cdot c$. Proof We may assume that $a = (0,0,1)$. Then $b = (b_1, b_2, 0)$ and $c=(c_1, c_2, c_3)$. The claimed result is $(-b_2, b_1, 0) \cdot (-c_2, c_1, 0) = (b_1, b_2, 0) \cdot (c_1,c_2,c_3)$, which is obvious. $\square$ Now, let $X$, $f$, $Y$, $g$ and $\theta$ be as above. We write $n(t)$ for the common unit normal to $T_{f(t)} X$ and $T_{g(t)} Y$. Write $f'(t) = |f'(t)| u(t)$ and $g'(t) = |g'(t)| v(t)$, so $u$ and $v$ are unit vectors. From our previous computations about geodesic curvature, we want to show$$\frac{d \theta(t)}{dt} = \det(v(t), v'(t), n(t)) - \det(u(t), u'(t), n(t))$$where prime is differentiation with respect to $t$. (Note that it was important to work out the formula for geodesic curvature without assuming we have an arc length parametrization, since it is unlikely that both $f(t)$ and $g(t)$ are unit speed.) Now, since $u$ and $v$ are unit vectors, $\theta = \cos^{-1}(u \cdot v)$. We have$$\frac{d \theta}{dt} = - \ \frac{d (u \cdot v)/dt}{\sin \theta} = - \ \frac{u' \cdot v + u \cdot v'}{\sin \theta}$$so we must show that$$u' \cdot v + u \cdot v' = - \det(v, v', (\sin \theta) n) + \det(u, u', (\sin \theta) n). \quad (\ast)$$ But $u$ and $v$ are unit vectors and $n$ is normal to both of them, so $(\sin \theta) n = u \times v$ and we can write the right hand side of $(\ast)$ as$$\det(v, v', v \times u) + \det(u, u', u \times v) = (v \times v') \cdot (v \times u) + (u \times u') \cdot (u \times v).$$We must show that$$u \cdot v' + u' \cdot v = (v \times v') \cdot (v \times u) + (u \times u') \cdot (u \times v).$$ The result now follows from the Lemma, using $(a,b,c) = (v, v', u)$ in the first case and $(u, u', v)$ in the second. (Since $u$ and $v$ are unit length, we have $u \perp u'$ and $v \perp v'$, as the lemma requires.) $\square$
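Since the whole computation hinges on the Lemma, here is a quick numerical spot-check (a NumPy sketch of my own, not part of the original argument): build a random unit vector $a$, project a random $b$ to be orthogonal to it, take an arbitrary $c$, and compare the two sides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unit vector a, a vector b orthogonal to a, and an arbitrary c.
a = rng.standard_normal(3)
a /= np.linalg.norm(a)
b = rng.standard_normal(3)
b -= (b @ a) * a            # project out the component along a
c = rng.standard_normal(3)

# The Lemma claims (a x b) . (a x c) = b . c when |a| = 1 and b ⟂ a.
lhs = np.cross(a, b) @ np.cross(a, c)
rhs = b @ c
assert abs(lhs - rhs) < 1e-12
```

This is of course just the identity $(a \times b)\cdot(a \times c) = (a\cdot a)(b\cdot c) - (a\cdot b)(a\cdot c)$ with $a\cdot a = 1$ and $a\cdot b = 0$.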
The seminar takes place on Mondays, 4:10-5:30pm, in Stevenson 1320. (Note that we are no longer meeting in Stevenson 1308.) April 15, 2019 Title: Property (T) and $\epsilon$-orthogonal subgroups Speaker: Krishnendu Khan (Vanderbilt University) Abstract: In this talk we’ll discuss the notion of angle between subgroups of a group G, introduced by M. Kassabov, and how small angle (small orthogonality) helps to lift relative property (T) of G with respect to generating subgroups to the entire group G (a result of M. Ershov and A. Jaikin-Zapirain). (This is Lecture 10 in a semester-long series on SL(3,Z) and Property (T).) April 8, 2019 Seminar rescheduled for next week April 1, 2019 Title: Property (T) without bounded generation Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Abstract: We will present the work of Mimura proving (T) for the elementary groups EL_n(R), which uses ideas of Shalom’s argument but does it without the bounded generation aspects. (This is Lecture 9 in a semester-long series on SL(3,Z) and Property (T).) March 25, 2019 Title: Bounded generation by subgroups and Property (T) Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Abstract: I will continue our ongoing discussion and discuss Shalom’s proof of (T) for SL(3,Z). We will prove a more general result: any group G that contains three subgroups H, K_1 and K_2 satisfying some conditions (with K_1 and K_2 possessing relative property (T) with respect to G) has Property (T). (This is Lecture 8 in a semester-long series on SL(3,Z) and Property (T).) March 18, 2019 Title: Property (T), affine actions and (reduced) cohomology, Part III Speaker: Jesse Peterson (Vanderbilt University) Abstract: We will present several equivalent conditions for property (T) in terms of affine actions and (reduced) cohomology. This follows the work of Delorme, Guichardet, Shalom, and Ozawa. (This is Lecture 7 in a semester-long series on SL(3,Z) and Property (T).)
March 11, 2019 Title: Property (T), affine actions and (reduced) cohomology, Part II Speaker: Jesse Peterson (Vanderbilt University) Abstract: We will present several equivalent conditions for property (T) in terms of affine actions and (reduced) cohomology. This follows the work of Delorme, Guichardet, Shalom, and Ozawa. (This is Lecture 6 in a semester-long series on SL(3,Z) and Property (T).) March 4, 2019 No seminar (spring break) February 25, 2019 — talk will begin at 4:30pm instead of the usual 4:10pm Title: Property (T), affine actions and (reduced) cohomology, Part I Speaker: Jesse Peterson (Vanderbilt University) Abstract: We will present several equivalent conditions for property (T) in terms of affine actions and (reduced) cohomology. This follows the work of Delorme, Guichardet, Shalom, and Ozawa. (This is Lecture 5 in a semester-long series on SL(3,Z) and Property (T).) February 18, 2019 No seminar (time conflict with a grad student’s oral qualifying exam) February 11, 2019 Title: Lattices in SL(n,R) and Kazhdan’s Property (T) Speaker: Jun Yang (Vanderbilt University) Abstract: First, we will prove that Property (T) is inherited by lattices. Then we will show SL(n,Z) is a lattice in SL(n,R). (This is Lecture 4 in a semester-long series on SL(3,Z) and Property (T).) February 4, 2019 Title: The Howe–Moore Property Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Abstract: We will state the Howe–Moore property and prove it for SL(n,R). (This is Lecture 3 in a semester-long series on SL(3,Z) and Property (T).) January 28, 2019 Title: Relative Property (T) for semi-direct products Speaker: Brent Nelson (Vanderbilt University) Abstract: We will define relative property (T) for semi-direct products and show that $(\mathbb{Z}^2, SL_2(\mathbb{Z}) \ltimes \mathbb{Z}^2)$ and $(\mathbb{R}^2, SL_2(\mathbb{Z}) \ltimes \mathbb{R}^2)$ satisfy this definition. (This is Lecture 2 in a semester-long series on $SL_3(\mathbb{Z})$ and Property (T).) 
Sources: Cornulier and Tessera, A characterization of relative Kazhdan property T for semidirect products with abelian groups, Ergodic Theory and Dynamical Systems, 2010. Shalom, Bounded generation and Kazhdan’s property (T), Publications Mathématiques de l’IHÉS, 1999. Chifan and Ioana, On relative property (T) and Haagerup’s property, Transactions of the AMS, 2011. Peterson, Notes on operator algebras, online lecture notes, 2015. January 21, 2019 No seminar (Martin Luther King Jr. Day) January 14, 2019 Title: Equivalent definitions of Property (T) Speaker: Lauren C. Ruth (Vanderbilt University) Abstract: We will give some definitions of Property (T) and show that they are equivalent. (This is Lecture 1 in a semester-long series on SL(3,Z) and Property (T).) Source: Bekka, de la Harpe, Valette, Kazhdan’s Property (T), online book, 2007. January 7, 2019 Title: Organizational meeting: Semester on SL(3,Z) and Property (T) Abstract: This semester, we will be running a special series featuring different proofs that SL(3,Z) has Property (T). At this organizational meeting, we will list the known proofs and provide references, and participants can choose which proofs they would like to present. (Winter Break) December 3, 2018 Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Title: Group Stability and Property (T) Abstract: We present the very recent paper of Becker and Lubotzky with the above title. The various notions of group approximations will be considered along with several examples. The main theorem linking group stability and property (T) will be discussed. Source: Becker and Lubotzky, Group stability and Property (T), arXiv, 2018. November 26, 2018 Speaker: Brent Nelson (Vanderbilt University) Title: Crossed products, dual weights, and Takesaki-duality Abstract: Given an action of a locally compact abelian group G on a von Neumann algebra M, one can form the crossed product von Neumann algebra. 
This von Neumann algebra contains both M and L(G), and the group action is encoded via commutation relations. If M is equipped with a faithful normal state, then the crossed product is naturally equipped with a dual weight. Moreover, the crossed product also admits an action of the dual group of G. Takesaki-duality states that if one repeats the crossed product construction with this dual group action, then one obtains a tensor product of the original von Neumann algebra M and B(L^2(G)). I will give an overview of these concepts and results. Source: Takesaki, Theory of Operator Algebras II, Chapters 7 and 10. November 19, 2018 No seminar (Thanksgiving break) November 12, 2018 Speaker: Jesse Peterson (Vanderbilt University) Title: The Furstenberg boundary and C*-simplicity Abstract: We will continue the discussion from October 1. In particular, we will discuss Kalantar and Kennedy’s result that a group is C*-simple if and only if the action on its Furstenberg boundary is free, and we will discuss Breuillard, Kalantar, Kennedy, and Ozawa’s result that a group C*-algebra has unique trace if and only if the action on its Furstenberg boundary is faithful. November 5, 2018 Speaker: Krishnendu Khan (Vanderbilt University) Title: Bi-exact groups and Lacunary hyperbolic groups Abstract: In this talk I’ll talk about examples of bi-exact groups arising from geometric group theory, shown by Ozawa, and a subclass of lacunary hyperbolic groups constructed by Olshanskii, Osin, and Sapir. October 29, 2018 No seminar (time conflict with special Subfactor Seminar) October 22, 2018 Speaker: Lauren C. Ruth (Vanderbilt University) Title: Equivalent definitions of invariant random subgroups Abstract: An invariant random subgroup (IRS) of a locally compact second-countable group G is a conjugation-invariant probability measure on the space of subgroups of G.
We will show how every IRS arises from a measure-preserving action on a probability space via the stabilizer map, comparing the proofs in Creutz–Peterson (2016) and Abért–Bergeron–Biringer–Gelander–Nikolov–Raimbault–Samet (2017), after mentioning the proof for discrete groups in Abért–Glasner–Virág (2014). Sources: Abért, Glasner, and Virág, Kesten’s theorem for invariant random subgroups, Duke Mathematical Journal, 2014. Creutz and Peterson, Stabilizers of ergodic actions of lattices and commensurators, Transactions of the AMS, 2016. Abért, Bergeron, Biringer, Gelander, Nikolov, Raimbault, and Samet, On the growth of L²-invariants for sequences of lattices in Lie groups, Annals of Mathematics, 2017. October 15, 2018 Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Title: On Amenability and Tubularity Abstract: We shall discuss the notion of tubularity due to Jung, and discuss how this provides an interesting geometric characterization of injectivity for von Neumann algebras. Source: Jung, Amenability, tubularity, and embeddings into $\mathcal{R}^\omega$, arXiv, 2006. October 8, 2018 No seminar. October 1, 2018 Speaker: Jesse Peterson (Vanderbilt University) Title: The Furstenberg boundary and C*-simplicity Abstract: We will continue the discussion of the last two weeks by Lauren Ruth. In particular, we will discuss Kalantar and Kennedy’s result that a group is C*-simple if and only if the action on its Furstenberg boundary is free, and we will discuss Breuillard, Kalantar, Kennedy, and Ozawa’s result that a group C*-algebra has unique trace if and only if the action on its Furstenberg boundary is faithful. September 24, 2018 Speaker: Lauren C. Ruth (Vanderbilt University) Title: Results on boundary actions, cont’d Abstract: Last time, we defined boundary actions and saw how the Gleason–Yamabe theorem (crucial in Montgomery–Zippin’s solution to Hilbert’s 5th Problem) gives rise to the group splitting in Furman’s result.
This time, we will take a closer look at the maximal G-boundary, also called the Furstenberg boundary. After showing existence and uniqueness, we will prove that G acts faithfully on its Furstenberg boundary if and only if its amenable radical is trivial, following notes of Ozawa. Time permitting, we will go through Haagerup’s proof of Breuillard–Kalantar–Kennedy–Ozawa’s result that the amenable radical of G is trivial if and only if G has the unique trace property. Sources: Haagerup, A new look at C*-simplicity and the unique trace property of a group, arXiv, 2016. Ozawa, Lecture on the Furstenberg boundary and C*-simplicity, online lecture notes. September 17, 2018 Speaker: Lauren C. Ruth (Vanderbilt University) Title: A result of Furman on boundary actions Abstract: After giving the definition and examples of boundary actions, we will sketch Furman’s proof that if a boundary action of a locally compact group H satisfies certain hypotheses (related to the “no small subgroups” property), then we may conclude that either H is a countably infinite discrete group, or H is a connected semi-simple real Lie group with trivial center. Sources: Furman, On minimal, strongly proximal actions of locally compact groups, Israel Journal of Mathematics, 2003. Tao, Hilbert’s Fifth Problem and Related Topics, online book. September 10, 2018 Speaker: Srivatsav Kunnawalkam Elayavalli (Vanderbilt University) Title: 1-Bounded Entropy, Part I: Free entropy for pedestrians Abstract: In this talk I aim to discuss the notions of free entropy and free entropy dimension invented by Dan Voiculescu. Time permitting, we will also talk about free group factors and other wild animals. We will not go in depth into the proofs, but refer to the attached notes for a careful exposition. Sri’s lecture notes:
The decay of potassium-40 to argon-40 is either a $\beta^+$ decay in which what is emitted is not an electron but a positron$$ {}^{40}{\rm K} \to {}^{40}{\rm Ar} + e^+ + \nu_e $$or, more frequently (if we have whole atoms), the electron capture that you mentioned, in which no charged leptons are emitted at the end!$$ {}^{40}{\rm K} + e^- \to {}^{40}{\rm Ar} + \nu_e $$About 11% of the potassium-40 decays proceed in this way. The remaining 89% of the decays of potassium-40 go to calcium-40 (the beta-plus decay is a small fraction of a percent). Note that the two reactions above differ by moving the positron from the right hand side to the left hand side, which turns it into its antiparticle, the electron. Potassium-40 has 19 protons and 21 neutrons. Argon-40 has 18 protons and 22 neutrons. So if we focus on the "minimum part" of the nuclei, the reactions above may be reduced either to $$ p \to n + e^+ + \nu_e$$or $$ p+e^- \to n + \nu_e $$which are the standard reactions switching protons and neutrons. In particular, the second reaction displayed right above this line is the more "microscopic" description of the electron capture you're primarily interested in. These reactions preserve the electric charge, baryon number, and lepton number. They also have to preserve energy. A free proton couldn't decay to the neutron and the other two particles because it's lighter. Even a proton and a low-velocity electron wouldn't have enough mass/energy to produce the neutron (plus the neutrino) as in the second reaction. But when the protons and neutrons are parts of whole nuclei, the energies of the initial and final nuclei are affected by the nuclear interactions. In particular, the argon-40 nucleus (and especially atom) is highly bound, which means lighter, and the reactions where argon-40 appears as a product are therefore "more possible".
To summarize, the electron capture (= the falling in of the electron) simply means that the proton has a nonzero probability to meet one of the electrons – probably in the inner shells – and merge into a new particle, a neutron, plus a neutrino. This process can't occur in the vacuum due to energy conservation, but in the context of the nucleus, the interactions with other neutrons and protons make the final state with the new neutron favorable. On the contrary, alpha decays are rarer. Among the 24 isotopes of potassium, only potassium-36 may alpha-decay. Carbon-14 doesn't alpha-decay, either. Among the isotopes of carbon, only carbon-9 alpha-decays. Both of these alpha-decays must be preceded by a beta-decay. Usually only sufficiently heavy nuclei (with too small an excess of neutrons) alpha-decay.
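The bookkeeping claimed above — electric charge, baryon number and lepton number all balancing in both microscopic reactions — can be verified mechanically. A small Python sketch; the quantum-number table uses the standard assignments, not values quoted in the answer:

```python
# Each particle: (electric charge, baryon number, lepton number)
particles = {
    "p":    (+1, 1,  0),
    "n":    ( 0, 1,  0),
    "e-":   (-1, 0, +1),
    "e+":   (+1, 0, -1),
    "nu_e": ( 0, 0, +1),
}

def conserved(initial, final):
    """True if charge, baryon number and lepton number all balance."""
    def totals(side):
        return tuple(sum(q) for q in zip(*(particles[name] for name in side)))
    return totals(initial) == totals(final)

# beta-plus decay: p -> n + e+ + nu_e
assert conserved(["p"], ["n", "e+", "nu_e"])
# electron capture: p + e- -> n + nu_e
assert conserved(["p", "e-"], ["n", "nu_e"])
```

The same check rejects a forbidden variant such as p -> n + e- + nu_e, which violates both charge and lepton number.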
A Superellipsoid or super-egg. You can control the shape using these values: r t A B C The default values give Piet Hein's superegg. Other interesting cases to try are: Superellipsoids are three-dimensional versions of superellipses, which in turn are a cross between a square and a circle. A circle has equation $x^2+y^2=1$; if we generalise that we get the superellipses $|x|^p+|y|^p=1$. You can play with the 2D version: parametric version (faster), implicit version (slower). The equation for the superellipsoid is $\left(\left|{\frac {x}{A}}\right|^{r}+\left|{\frac {y}{B}}\right|^{r}\right)^{{t/r}}+\left|{\frac {z}{C}}\right|^{{t}}=1$. To generate the mesh I started with a mesh on a cube, projected it onto a sphere, and found the spherical coordinates, $\rho=\sqrt{x^2+y^2+z^2}$, $\theta=\mathrm{atan2}(y,x)$, $\phi=\mathrm{asin}(z/\rho)$. Using the polar representation of the superellipsoid $$ \begin{align} X(\theta,\phi)=&A\,c(\phi,2/t)\ c(\theta,2/r)\\ Y(\theta,\phi)=&B\,c(\phi,2/t)\ s(\theta,2/r)\\ Z(\theta,\phi)=&C\,s(\phi,2/t), \end{align} $$ where $c(v,t) = \operatorname{sgn}(\cos v )|\cos v |^{t}$ and $s(v,t) = \operatorname{sgn}(\sin v )|\sin v |^{t}$. This simplifies to $$ \begin{align} X(x,y,z) = &A\,p(m/\rho,2/t)\ p(x/m,2/r)\\ Y(x,y,z) = &B\,p(m/\rho,2/t)\ p(y/m,2/r)\\ Z(x,y,z) = &C\,p(z/\rho,2/t)\\ \end{align} $$ with $p(v,t)=\operatorname{sgn}(v )|v|^{t}$ and $m=\sqrt{x^2+y^2}$. See this answer at stackoverflow. Yibin Jiang, Mon Nov 19 2018: The definitions of theta and phi can be varied according to t and r. Theta is not always defined by atan2(y,x), and neither is phi by asin(z/ρ).
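The simplified cube-to-superellipsoid mapping can be sketched in a few lines of Python (the function names are mine; the input point is assumed to lie on the unit sphere, so ρ = 1, and away from the poles, so m ≠ 0). The assertion checks that the mapped point satisfies the implicit superellipsoid equation:

```python
import math

def p(v, t):
    """Signed power: sgn(v) * |v|**t."""
    return math.copysign(abs(v) ** t, v) if v != 0 else 0.0

def superellipsoid_point(x, y, z, A, B, C, r, t):
    """Map a point (x, y, z) on the unit sphere to the superellipsoid."""
    m = math.hypot(x, y)
    X = A * p(m, 2 / t) * p(x / m, 2 / r)
    Y = B * p(m, 2 / t) * p(y / m, 2 / r)
    Z = C * p(z, 2 / t)
    return X, Y, Z

# Check the implicit equation (|X/A|^r + |Y/B|^r)^(t/r) + |Z/C|^t = 1.
A, B, C, r, t = 1.0, 1.2, 1.5, 2.5, 3.0
x = y = z = 1 / math.sqrt(3)              # a point on the unit sphere
X, Y, Z = superellipsoid_point(x, y, z, A, B, C, r, t)
lhs = (abs(X / A)**r + abs(Y / B)**r)**(t / r) + abs(Z / C)**t
assert abs(lhs - 1.0) < 1e-12
```

The check works because, with ρ = 1, the left side collapses algebraically to m² + z² = x² + y² + z² = 1.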
Inside my main.tex I use a command called \loadnote, which refers to an external file notepie.tex that contains the footnotes (with reference to this discussion).

\usepackage{catchfilebetweentags}
\newcommand{\loadnote}[1]{%
  \ExecuteMetaData[notepie.tex]{notes#1}%
}

I repeatedly get the following warning:

Latexmk: Summary of warnings from last run of (pdf)latex:
Latex found 3 multiply defined reference(s)

and while searching I discovered a convenient package, \usepackage{showlabels}, that allows viewing the position of footnotes and of references to equations. I would like to understand whether the same can be made to work for the footnotes written with the \loadnote{} command. For example:

1. the notepie.tex file:

\documentclass[8pt]{article}
\begin{document}
%<*notes004>
\footnote{This is my footnote.}
%</notes004>
\end{document}

2. the main.tex file:

\documentclass{article}
\usepackage{catchfilebetweentags}
\newcommand{\loadnote}[1]{%
  \ExecuteMetaData[notepie.tex]{notes#1}%
}
\begin{document}
If I need to refer to a note\loadnote{004}, while my equation:
$\tau = \mu\,\dot{\theta}{t}$\label{eq080}
\end{document}

Since \loadnote is not a \label, I would like to know if it is possible to implement something so that the marginal reference to the note is highlighted. Thanks very much :)
Applying The Fixed Point Method for Solving Systems of Two Nonlinear Equations Be sure to review the following pages regarding The Fixed Point method for solving systems of two nonlinear equations: We will now look at an example of applying this method. Example 1 Consider the following system of nonlinear equations $\left\{\begin{matrix} y = \frac{1}{3} \ln (x + 2)\\ y = e^{3x} - 2 \end{matrix}\right.$. There exists a solution $(\alpha, \beta)$ such that $\alpha, \beta > 0$, and there exists a solution $(\tilde{\alpha}, \tilde{\beta})$ such that $\tilde{\alpha}, \tilde{\beta} < 0$. This system can be rewritten as $\left\{\begin{matrix} x = \phi_1(x, y) = e^{3y} - 2\\ y = \psi_1 (x, y) = e^{3x} - 2 \end{matrix}\right.$ and as $\left\{\begin{matrix} x = \phi_2(x, y) = \frac{1}{3} \ln (y + 2)\\ y = \psi_2 (x, y) = \frac{1}{3} \ln (x + 2) \end{matrix}\right.$. Determine which choice of $\phi$ and $\psi$ is suitable for the convergence of The Fixed Point method for each of the solutions $(\alpha, \beta)$ and $(\tilde{\alpha}, \tilde{\beta})$, and use the initial approximation $(x_0, y_0) = (-1.5, -1.5)$ to compute successive approximations $(x_n, y_n)$ of $(\tilde{\alpha}, \tilde{\beta})$ until $\| (x_n, y_n) - (x_{n-1}, y_{n-1}) \|_1 < \epsilon = 10^{-3}$. Let's first consider $\left\{\begin{matrix} x = \phi_1(x, y) = e^{3y} - 2\\ y = \psi_1 (x, y) = e^{3x} - 2 \end{matrix}\right.$. We have that $\phi_{1, x} = 0$ and $\psi_{1, x} = 3e^{3x}$. We therefore have that $\mid \phi_{1, x} \mid + \mid \psi_{1, x} \mid = 0 + 3e^{3x} < 1$ for $x < -\frac{1}{3} \ln 3 \approx -0.366$, which in particular holds near the negative solution. We also have that $\phi_{1, y} = 3e^{3y}$ and $\psi_{1, y} = 0$, so $\mid \phi_{1, y} \mid + \mid \psi_{1, y} \mid = 3e^{3y} < 1$ for $y < -\frac{1}{3} \ln 3$.
Therefore this choice of rewriting the prescribed system of equations is suitable for the negative solution $(\tilde{\alpha}, \tilde{\beta})$. Now let's consider $\left\{\begin{matrix} x = \phi_2(x, y) = \frac{1}{3} \ln (y + 2)\\ y = \psi_2 (x, y) = \frac{1}{3} \ln (x + 2) \end{matrix}\right.$. We have that $\phi_{2, x} = 0$ and $\psi_{2, x} = \frac{1}{3(x + 2)}$. Thus $\mid \phi_{2, x} \mid + \mid \psi_{2, x} \mid = \frac{1}{3(x + 2)} < 1$ for $x > -\frac{5}{3}$, and in particular for all $x > 0$. Furthermore, we have that $\phi_{2, y} = \frac{1}{3(y + 2)}$ and $\psi_{2, y} = 0$, so $\mid \phi_{2, y} \mid + \mid \psi_{2, y} \mid = \frac{1}{3(y+2)} < 1$ for $y > 0$. Therefore this choice of rewriting the prescribed system of equations is suitable for the positive solution $(\alpha, \beta)$. So for the root $(\tilde{\alpha}, \tilde{\beta})$ we will use the choice of $x = \phi_1(x, y)$ and $y = \psi_1(x, y)$. Using the initial approximation $(x_0, y_0) = (-1.5, -1.5)$, the iterates $x_{n+1} = e^{3y_n} - 2$ and $y_{n+1} = e^{3x_n} - 2$ (which here satisfy $x_n = y_n$ by symmetry) are:

$x_1 = y_1 = e^{-4.5} - 2 \approx -1.988891$

$x_2 = y_2 = e^{3x_1} - 2 \approx -1.997437$

$x_3 = y_3 = e^{3x_2} - 2 \approx -1.997502$

At this point $\| (x_3, y_3) - (x_2, y_2) \|_1 \approx 1.3 \times 10^{-4} < 10^{-3}$, so we stop. Therefore $(x_3, y_3) = (-1.997502, -1.997502)$ is an approximation to the root $(\tilde{\alpha}, \tilde{\beta})$ with accuracy $\epsilon = 10^{-3}$.
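The iteration is easy to reproduce in code; here is a short Python sketch (the function name fixed_point and its signature are mine, not from the page):

```python
import math

def fixed_point(phi, psi, x0, y0, eps=1e-3, max_iter=100):
    """Fixed point iteration for x = phi(x, y), y = psi(x, y),
    stopped when the 1-norm of the step drops below eps."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new, y_new = phi(x, y), psi(x, y)
        if abs(x_new - x) + abs(y_new - y) < eps:
            return x_new, y_new
        x, y = x_new, y_new
    raise RuntimeError("no convergence within max_iter iterations")

# The phi_1 / psi_1 rewriting, suitable for the negative root.
phi1 = lambda x, y: math.exp(3 * y) - 2
psi1 = lambda x, y: math.exp(3 * x) - 2

x, y = fixed_point(phi1, psi1, -1.5, -1.5)
print(round(x, 6), round(y, 6))   # both approximately -1.997502
```

Starting instead from the phi_2 / psi_2 rewriting with a positive initial guess would converge to the positive root, matching the suitability analysis above.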
Paraboloid of revolution In mathematics, a paraboloid is a quadric surface of a special kind. There are two kinds of paraboloids: elliptic and hyperbolic. The elliptic paraboloid is shaped like an oval cup and can have a maximum or a minimum point. In a suitable coordinate system with three axes x, y, and z, it can be represented by the equation [1] \frac{z}{c} = \frac{x^2}{a^2} + \frac{y^2}{b^2}, where a and b are constants that dictate the level of curvature in the x-z and y-z planes respectively. This is an elliptic paraboloid which opens upward for c>0 and downward for c<0. Hyperbolic paraboloid The hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle. In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation [2] \frac{z}{c} = \frac{y^2}{b^2} - \frac{x^2}{a^2}. For c>0, this is a hyperbolic paraboloid that opens down along the x-axis and up along the y-axis (i.e., the parabola in the plane x=0 opens upward and the parabola in the plane y=0 opens downward). Properties Parallel rays coming into a circular paraboloidal mirror are reflected to the focal point, F, or vice versa. With a = b an elliptic paraboloid is a paraboloid of revolution: a surface obtained by revolving a parabola around its axis. It is the shape of the parabolic reflectors used in mirrors, antenna dishes, and the like; and is also the shape of the surface of a rotating liquid, a principle used in liquid mirror telescopes and in making solid telescope mirrors (see Rotating furnace). This shape is also called a circular paraboloid. There is a point called the focus (or focal point) on the axis of a circular paraboloid such that, if the paraboloid is a mirror, light from a point source at the focus is reflected into a parallel beam, parallel to the axis of the paraboloid.
This also works the other way around: a parallel beam of light incident on the paraboloid parallel to its axis is concentrated at the focal point. This applies also for other waves, hence parabolic antennas. The hyperbolic paraboloid is a doubly ruled surface: it contains two families of mutually skew lines. The lines in each family are parallel to a common plane, but not to each other. Curvature The elliptic paraboloid, parametrized simply as \vec \sigma(u,v) = \left(u, v, {u^2 \over a^2} + {v^2 \over b^2}\right) has Gaussian curvature K(u,v) = {4 \over a^2 b^2 \left(1 + {4 u^2 \over a^4} + {4 v^2 \over b^4}\right)^2} and mean curvature H(u,v) = {a^2 + b^2 + {4 u^2 \over a^2} + {4 v^2 \over b^2} \over a^2 b^2 \left(1 + {4 u^2 \over a^4} + {4 v^2 \over b^4}\right)^{3/2}} which are both always positive, have their maximum at the origin, become smaller as a point on the surface moves further away from the origin, and tend asymptotically to zero as the said point moves infinitely away from the origin. The hyperbolic paraboloid, when parametrized as \vec \sigma (u,v) = \left(u, v, {u^2 \over a^2} - {v^2 \over b^2}\right) has Gaussian curvature K(u,v) = {-4 \over a^2 b^2 \left(1 + {4 u^2 \over a^4} + {4 v^2 \over b^4}\right)^2} and mean curvature H(u,v) = {-a^2 + b^2 - {4 u^2 \over a^2} + {4 v^2 \over b^2} \over a^2 b^2 \left(1 + {4 u^2 \over a^4} + {4 v^2 \over b^4}\right)^{3/2}}. Multiplication table If the hyperbolic paraboloid z = {x^2 \over a^2} - {y^2 \over b^2} is rotated by an angle of π/4 in the + z direction (according to the right hand rule), the result is the surface z = {1\over 2} (x^2 + y^2) \left({1\over a^2} - {1\over b^2}\right) + x y \left({1\over a^2}+{1\over b^2}\right) and if \ a=b then this simplifies to z = {2\over a^2} x y . Finally, letting a=\sqrt{2} , we see that the hyperbolic paraboloid z = {x^2 - y^2 \over 2}.
is congruent to the surface \ z = x y which can be thought of as the geometric representation (a three-dimensional nomograph, as it were) of a multiplication table. The two paraboloidal \mathbb{R}^2 \to \mathbb{R} functions z_1 (x,y) = {x^2 - y^2 \over 2} and \ z_2 (x,y) = x y are harmonic conjugates, and together form the analytic function f(z) = {1\over 2} z^2 = f(x + i y) = z_1 (x,y) + i z_2 (x,y) which is the analytic continuation of the \mathbb{R} \to \mathbb{R} parabolic function \ f(x) = {1\over 2} x^2. Dimensions of a paraboloidal dish The dimensions of a symmetrical paraboloidal dish are related by the equation: \scriptstyle 4FD = R^2, where \scriptstyle F is the focal length, \scriptstyle D is the depth of the dish (measured along the axis of symmetry from the vertex to the plane of the rim), and \scriptstyle R is the radius of the rim. Of course, they must all be in the same units. If two of these three quantities are known, this equation can be used to calculate the third. A more complex calculation is needed to find the diameter of the dish measured along its surface. This is sometimes called the "linear diameter", and equals the diameter of a flat, circular sheet of material, usually metal, which is the right size to be cut and bent to make the dish. Two intermediate results are useful in the calculation: \scriptstyle P=2F (or the equivalent: \scriptstyle P=\frac{R^2}{2D}) and \scriptstyle Q=\sqrt {P^2+R^2}, where \scriptstyle F, \scriptstyle D, and \scriptstyle R are defined as above. The diameter of the dish, measured along the surface, is then given by: \scriptstyle \frac {RQ} {P} + P \ln \left ( \frac {R+Q} {P} \right ), where \scriptstyle \ln(x) means the natural logarithm of \scriptstyle x , i.e. its logarithm to base "e". The volume of the dish, the amount of liquid it could hold if the rim were horizontal and the vertex at the bottom (e.g.
the capacity of a paraboloidal wok), is given by \scriptstyle \frac {1} {2} \pi R^2 D , where the symbols are defined as above. This can be compared with the formulae for the volumes of a cylinder \scriptstyle (\pi R^2 D), a hemisphere \scriptstyle (\frac {2}{3} \pi R^2 D, where \scriptstyle D=R), and a cone \scriptstyle ( \frac {1} {3} \pi R^2 D ). Of course, \scriptstyle \pi R^2 is the aperture area of the dish, the area enclosed by the rim, which is proportional to the amount of sunlight a reflector dish can intercept. The surface area of a parabolic dish can be found using the area formula for a surface of revolution which gives \scriptstyle A=\frac{\pi R}{6 D^2}\left((R^2+4D^2)^{3/2}-R^3\right). Applications Pringles. An example of a hyperbolic paraboloid. Paraboloidal mirrors are frequently used to bring parallel light to a point focus, e.g. in astronomical telescopes, or to collimate light that has originated from a source at the focus into a parallel beam, e.g. in a searchlight. The top surface of a fluid in an open-topped rotating container will form a paraboloid. This property can be used to make a liquid mirror telescope with a rotating pool of a reflective liquid, such as mercury, for the primary mirror. The same technique is used to make solid paraboloids, in rotating furnaces. The widely-sold fried snack food Pringles potato crisps resemble a truncated hyperbolic paraboloid. [3] The distinctive shape of these crisps allows them to be stacked in sturdy tubular containers, fulfilling a design goal that they break less easily than other types of crisp. [4] Examples in Architecture See also References This article was sourced from Creative Commons Attribution-ShareAlike License; additional terms may apply. 
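The dish relationships in the "Dimensions of a paraboloidal dish" section (4FD = R^2, the linear diameter, and the volume) can be sanity-checked numerically. A minimal Python sketch; the function names and the sample dish dimensions are my own illustrations, not values from the article:

```python
import math

def dish_linear_diameter(F, D, R):
    """Surface ("linear") diameter of the dish, via the intermediate
    quantities P = 2F and Q = sqrt(P^2 + R^2) from the text."""
    P = 2.0 * F
    Q = math.sqrt(P * P + R * R)
    return R * Q / P + P * math.log((R + Q) / P)

def dish_volume(R, D):
    """Volume enclosed below the rim: (1/2) * pi * R^2 * D."""
    return 0.5 * math.pi * R * R * D

# Illustrative dish: rim radius R = 1 m, depth D = 0.25 m.
R, D = 1.0, 0.25
F = R * R / (4.0 * D)          # focal length from 4FD = R^2
L = dish_linear_diameter(F, D, R)
V = dish_volume(R, D)
```

As a sanity check, the linear diameter of any curved dish must exceed the flat diameter 2R, and the volume sits between the cone and cylinder bounds quoted in the text.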
The Closure of a Set in a Topological Space Recall from the Accumulation Points of a Set in a Topological Space that if $(X, \tau)$ is a topological space and $A \subseteq X$ then a point $x \in X$ is called an accumulation point of $A$ if every open neighbourhood $U$ ($U \in \tau$) contains elements of $A$ different from $x$. We denoted the set of all accumulation points of $A$ by $A'$. We will now look at another important term known as the closure of $A$. Definition: Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then the closure of $A$, denoted by $\bar{A}$ or $\mathrm{cl} (A)$, is the smallest closed set such that $A \subseteq \bar{A}$. By $\bar{A}$ being the "smallest" closed set containing $A$ we mean that if $V$ is any closed set containing $A$ then $A \subseteq \bar{A} \subseteq V$. Equivalently, the closure of $A$ can be defined to be the intersection of all closed sets which contain $A$ as a subset. Proposition 1: Let $(X, \tau)$ be a topological space. a) The closure of the whole set $X$ is $X$, that is, $\overline{X} = X$. b) The closure of the empty set is the empty set, that is, $\overline{\emptyset} = \emptyset$. Proposition 2 (Idempotency of the Closure of a Set): Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then the closure of the closure of $A$ is equal to the closure of $A$, that is, $\bar{\bar{A}} = \bar{A}$. Proof: By definition, $\bar{\bar{A}}$ is the smallest closed set containing $\bar{A}$. But $\bar{A}$ is closed, and so $\bar{\bar{A}} = \bar{A}$. $\blacksquare$ Proposition 3: Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then $a \in \bar{A}$ if and only if $U \cap A \neq \emptyset$ for every open neighbourhood $U$ of $a$. Proof: $\Rightarrow$ Let $a \in \bar{A}$. Suppose that there exists an open neighbourhood $U$ of $a$ such that $U \cap A = \emptyset$. Then $A \subseteq X \setminus U$. Since $U$ is open, $X \setminus U$ is closed.
Since $\bar{A}$ is the smallest closed set containing $A$, we see that $\bar{A} \subseteq X \setminus U$. But this is a contradiction: $a \in \bar{A}$ but $a \not \in X \setminus U$. Therefore the assumption that such an open neighbourhood $U$ of $a$ exists is false. $\Leftarrow$ Suppose that $U \cap A \neq \emptyset$ for every open neighbourhood $U$ of $a$. If $a \not \in \bar{A}$ then $a \in X \setminus \bar{A}$. But $X \setminus \bar{A}$ is open, and so there exists an open neighbourhood $U$ of $a$ such that $U \subseteq X \setminus \bar{A}$. But then $U \cap A = \emptyset$, a contradiction. $\blacksquare$ Let's look at some examples of the closure of a set. Example 1 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals on $\mathbb{R}$ and let $A = [0, 1)$. The closure of $A$ is $\bar{A} = [0, 1]$. To verify this, we note that $[0, 1]$ is indeed closed because its complement $(-\infty, 0) \cup (1, \infty)$ is open. Furthermore, every closed set containing $A$ must contain the accumulation point $1$, so no smaller closed set contains $A$, and indeed $\bar{A} = [0, 1]$. Example 2 For another example, consider the set $X = \{ a, b, c, d \}$ and the topology $\tau = \{ \emptyset, \{ b \}, \{ a, b \}, \{ b, c \}, \{a, b, c \}, X \}$ and the set $A = \{ b, d \}$. What is the closure of $A$, i.e., what is $\bar{A}$? Let's first list out all of the closed sets of $X$. Recall that a set $B$ is said to be closed if $B^c$ is open. So the set of closed sets of $X$ is the set of complements of elements from $\tau$. The closed sets for this topology are therefore: $X$, $\{ a, c, d \}$, $\{ c, d \}$, $\{ a, d \}$, $\{ d \}$, and $\emptyset$. We now need to find the smallest closed set of $X$ containing $A = \{ b, d \}$. In this example, the smallest closed set is the whole set $X$, since $X$ is the only closed set containing $b$. Therefore $\bar{A} = X$. Now instead consider the set $B = \{ a \} \subseteq X$. What is $\bar{B}$? Well, the smallest closed set containing $B$ is $\{ a, d \}$ so $\bar{B} = \{ a, d \}$.
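Example 2 can be verified mechanically by brute force, intersecting all closed supersets. A minimal Python sketch (the set names follow the example; the code itself is an illustration, not part of the original notes):

```python
X = frozenset("abcd")
# The topology from Example 2 (open sets), written as strings of elements.
tau = [frozenset(s) for s in ["", "b", "ab", "bc", "abc", "abcd"]]
# Closed sets are complements of open sets.
closed = [X - U for U in tau]

def closure(A):
    """Intersection of all closed sets containing A."""
    result = X
    for C in closed:
        if A <= C:           # C is a closed superset of A
            result = result & C
    return result

# closure({b, d}) should be all of X; closure({a}) should be {a, d}.
```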
If a function $f(x)$ that is differentiable over $(-\infty,\infty)$ is monotonically decreasing and $\displaystyle\lim_{x\rightarrow\infty}f(x)\neq-\infty,$ then as $x$ approaches infinity, $f(x)$ is
I wanted to better understand dfa. I wanted to build upon a previous question: Creating a DFA that only accepts number of a's that are multiples of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts number of a's that are multiples of 3 but does NOT have the sub...

Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th...

I was wondering if it is easier to factor in a non-ufd than it is to factor in a ufd. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: It should be easier to factor when there are more possibilities ( multiple factorizations in a non-ufd...

Consider a non-UFD that only has 2 units ( $-1,1$ ) and the min difference between 2 elements is $1$. Also there are only a finite amount of elements for any given fixed norm. ( Maybe that follows from the other 2 conditions ? ) I wonder about counting the irreducible elements bounded by a lower...

How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both. I want to construct a nfa from this, but I'm struggling with the regex part
Is there a good algorithm for finding a permutation which, applied to all the strings in a set, maximises the weight of those with a common fixed length prefix. Given a set $S=\{(s,w)\}$ of pairs of fixed length bit strings and weights and an integer k find a permutation p of {1..k} and bit string v of length k to maximise $$ \Sigma \{ w_i | (s_i,w_i)\in S\land s_i[p(1)]=v[1] \land s_i[p(2)]=v[2]\land...s_i[p(k)]=v[k] \} $$ I can think of an obvious greedy algorithm where we pick the index i and value v which maximises the weights of the strings with $s[i]=v$, then delete position i from all strings and repeat until we've chosen k bits. Is it possible to do better? The problem I'm trying to solve is a compression one where the bits in the data can be reordered and each compressed item can be either the original or the suffix from the original which is concatenated with a common prefix when decompressed. I need to find a reordering and prefix which maximises the number of items which can use the second form. If p is the permutation and v the prefix:

Compression for s:
    sp = permute(p, s)
    if sp[1..k] = v then       # does the permuted s have the prefix v?
        return sp[k+1..n]      # compressed case, uses only n-k bits
    else
        return sp              # uncompressed case, uses all n bits

Decompression for s:
    if len(s) = k then
        su = concat(v, s)      # compressed case, add the prefix
    else
        su = s                 # uncompressed case
    return inverse_permutation(p, su)
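The greedy heuristic described in the question might look like the following in Python. All names are my own, and this implements only the heuristic, not an optimal solver:

```python
def greedy_prefix(strings_weights, k):
    """strings_weights: list of (bit-string, weight) pairs, equal lengths.
    Greedily pick k original positions and bit values, each step choosing
    the (position, bit) that keeps the most surviving weight.
    Returns (positions, prefix_bits)."""
    survivors = list(strings_weights)
    n = len(survivors[0][0])
    free = list(range(n))          # original positions not yet fixed
    positions, prefix = [], []
    for _ in range(k):
        best = None
        for i in free:
            for v in "01":
                w = sum(wt for s, wt in survivors if s[i] == v)
                if best is None or w > best[0]:
                    best = (w, i, v)
        _, i, v = best
        positions.append(i)
        prefix.append(v)
        free.remove(i)
        survivors = [(s, wt) for s, wt in survivors if s[i] == v]
    return positions, "".join(prefix)
```

For example, on `[("000", 5), ("001", 4), ("111", 1)]` with k = 2 the heuristic fixes positions 0 and 1 to "00", keeping weight 9.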
Latest revision as of 21:12, 7 January 2016

2010 Mathematics Subject Classification: Primary: 08A [MSN][ZBL]

groupoid

A universal algebra with one binary operation: a set $M$ endowed with an everywhere defined binary operation $m : M \times M \rightarrow M$ on it. No conditions are imposed. In particular, a magma need not be commutative or associative: it is the broadest class of such algebras: groups, semi-groups, quasi-groups – all these are magmas of a special type.
A mapping $f : N \rightarrow M$ of one magma into another is a morphism of magmas if $f(m_N(a,b)) = m_M(f(a),f(b))$ for all $a,b \in N$, i.e., if it respects the binary operations. An important concept in the theory of magmas is that of isotopy of operations. On a set $G$ let there be defined two binary operations, denoted by $(\cdot)$ and $(\circ)$; they are isotopic if there exist three one-to-one mappings $\alpha$, $\beta$ and $\gamma$ of $G$ onto itself such that $a\cdot b=\gamma^{-1}(\alpha a\circ\beta b)$ for all $a,b\in G$ (cf. Isotopy (in algebra)). A magma that is isotopic to a quasi-group is itself a quasi-group; a magma with a unit element that is isotopic to a group is also isomorphic to this group. For this reason, in group theory the concept of isotopy is not used: for groups isotopy and isomorphism coincide. A magma with cancellation is a magma in which either of the equations $ab=ac$, $ba=ca$ implies $b=c$, where $a$, $b$ and $c$ are elements of the magma. Any magma with cancellation is imbeddable into a quasi-group. A homomorphic image of a quasi-group is a magma with division, that is, a magma in which the equations $ax=b$ and $ya=b$ are solvable (but do not necessarily have unique solutions). Of particular importance is the free magma on an alphabet (set) $X$. A set with one partial binary operation (i.e. one not defined for all pairs of elements) is said to be a partial magma. Any partial submagma of a free partial magma is free. How to Cite This Entry: Magma. Encyclopedia of Mathematics.
URL: http://www.encyclopediaofmath.org/index.php?title=Magma&oldid=12223 This article was adapted from an original article by M. Hazewinkel (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
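The morphism condition $f(m_N(a,b)) = m_M(f(a),f(b))$ can be checked mechanically on small finite magmas. A toy Python sketch; the two operations and the map $f$ below are invented examples, not taken from the article:

```python
def is_magma_morphism(f, m_N, m_M, N_elems):
    """Check f(m_N(a, b)) == m_M(f(a), f(b)) for all a, b in N.
    f and the operations are given as dictionaries (finite tables)."""
    return all(f[m_N[(a, b)]] == m_M[(f[a], f[b])]
               for a in N_elems for b in N_elems)

# Toy magmas: N = {0, 1} under "max", M = {0, 1} under "min".
# The swap map f(x) = 1 - x turns max into min, so it is a morphism;
# the identity map is not.
N_elems = [0, 1]
m_N = {(a, b): max(a, b) for a in N_elems for b in N_elems}
m_M = {(a, b): min(a, b) for a in N_elems for b in N_elems}
f = {0: 1, 1: 0}
```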
I read online that rotational work is defined as $$W =\int \tau \mathrm{d} \vartheta$$ which is not exactly the same as $$W = \int \vec{F} \cdot \mathrm{d}\vec{r}$$ So I was wondering if it is possible to give a little more general formula for rotational work that resembles the one with the dot product. If I consider a force not perpendicular to the axis of rotation, can I use the infinitesimal displacement vector $\mathrm{d} \vec{\vartheta}$ and write this? $$\mathrm{d}\vec{r} = \mathrm{d}\vec{\vartheta} \times \vec{r}$$ Doing so would allow me to write $$W = \int \vec{F} \cdot \mathrm{d}\vec{r} = \int \vec{F} \cdot (\mathrm{d}\vec{\vartheta} \times \vec{r}) = \int (\vec{r} \times \vec{F}) \cdot \mathrm{d}\vec{\vartheta} = \int \vec{\tau}\cdot \mathrm{d}\vec{\vartheta}$$ and keep a perfect symmetry with the other definition of work. Where is my mistake? It is easier to derive 3D power, and then assert that work is the time integral of power. Consider a moving rigid body, with a force $\boldsymbol{F}$ applied and a torque $\boldsymbol{\tau}_C$ applied at the center of mass. The body is instantaneously rotating about some point such that the velocity of the center of mass is $\boldsymbol{v}_C = \boldsymbol{\omega} \times \boldsymbol{r}_C$. Additionally the equipollent torque at the rotation center is $\boldsymbol{\tau} = \boldsymbol{\tau}_C + \boldsymbol{r}_C \times \boldsymbol{F}$. Let us look at power at the center of mass and at the center of rotation. The two should be identical because power should be coordinate invariant.
$$\require{cancel} \begin{aligned} P & = \boldsymbol{F} \cdot \boldsymbol{v}_C + \boldsymbol{\tau}_C \cdot \boldsymbol{\omega} & &\mbox{at center of mass} \\ \hline P & =\boldsymbol{F} \cdot \cancelto{0}{\boldsymbol{v}} + \boldsymbol{\tau} \cdot \boldsymbol{\omega}=(\boldsymbol{\tau}_C + \boldsymbol{r}_C \times \boldsymbol{F}) \cdot \boldsymbol{\omega} & &\mbox{at rotation center} \\ & =(\boldsymbol{r}_C \times \boldsymbol{\omega}) \cdot \boldsymbol{F} + \boldsymbol{\tau}_C \cdot \boldsymbol{\omega} = \boldsymbol{F} \cdot \boldsymbol{v}_C + \boldsymbol{\tau}_C \cdot \boldsymbol{\omega} & &\checkmark \end{aligned} $$ So the rule for power (and hence work) is: At any arbitrary location A, dot product the force with the linear velocity and the torque with rotational velocity to calculate power. Work is the time integral of power. $$ P = \boldsymbol{F} \cdot \boldsymbol{v}_A + \boldsymbol{\tau}_A \cdot \boldsymbol{\omega} $$ $$ W = \int P {\rm d}t $$ Pick the center of rotation and $\boldsymbol{v}_A \rightarrow 0 $, or the line of action of the force and $\boldsymbol{\tau}_A \rightarrow 0$ and the above simplifies to just one inner product.
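The coordinate-invariance claim above is easy to check numerically. A pure-Python sketch; the input numbers are arbitrary test data, nothing physical about them:

```python
def cross(a, b):
    """3D cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def add(a, b):
    return (a[0]+b[0], a[1]+b[1], a[2]+b[2])

# Arbitrary test data.
F  = (1.0, -2.0, 0.5)    # applied force
tC = (0.3, 0.1, -0.7)    # torque at the center of mass
w  = (0.2, 0.4, -0.1)    # angular velocity
rC = (1.5, 0.0, 2.0)     # center of mass relative to the rotation center

vC = cross(w, rC)                    # v_C = w x r_C
P_com = dot(F, vC) + dot(tC, w)      # power at the center of mass
tau = add(tC, cross(rC, F))          # equipollent torque at rotation center
P_rot = dot(tau, w)                  # power at the rotation center (v = 0)
```

The two powers agree because $(\boldsymbol{r}_C \times \boldsymbol{F}) \cdot \boldsymbol{\omega} = \boldsymbol{F} \cdot (\boldsymbol{\omega} \times \boldsymbol{r}_C)$, the scalar triple product identity used in the derivation above.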
The completely symmetric form of Coulomb's constant, such that $E=H$, is $K_c=c/4\pi$. If you equate the forces of gravity and electricity, you can write, e.g., $M=þQ$, where $þ$ is a constant. Then $G=c/4\pi þ^2$. Stoney's mass is then $eþ$ and Planck's mass is $eþ/\sqrt{\alpha}$. The rationalisation of equations only starts when you do, as Heaviside and Lorentz do, start with Maxwell's equations. People are starting to do this with gravity; see, e.g., gravitomagnetism on Wikipedia. SI treats rationalisation of quantities in three different ways according to whether it is gravity (unrationalised, no units), electricity (rationalised, no extra units), or light (unrationalised with units). It should be noted that gravitational magnetic-theory is somewhat behind the corresponding electric version, because the anticipated size of the field is so small that only now is it possible to try to detect the field. One should imagine that $G$ is a 'falle-constant'. That is, Newton's equation is not usable as a definition of mass in the way that Coulomb's equation might define charge, so the non-variable constants are lumped together in the manner that $K_c$ defines Coulomb's constant. It's only when one has a sufficient theory behind it that one tries to modify the value of $G$. 'Falle-constant' here simply means that the units and value of the constant are 'as they fall'.
If $m=p^k$ is a prime power then I know: $$\exists x\in \mathbb{Z}:x^n\equiv a\bmod p^k\iff a^{\frac{p-1}{\gcd(n,p-1)}}\equiv 1\bmod p^{j}$$ $$\text{ where: }j=\min\left(v_p(n)+1+[p\mid n][p=2],k\right)$$ Thus if I have the prime factorization of $m$ then by the Chinese remainder theorem I can just verify the above congruence holds for every prime power $p^{v_p(m)}$. However what if I don't know the prime factorization of $m$? Is there still a simple way to determine if $x^n\equiv a\bmod m$ is solvable? What about just the special case when $n=2$ i.e. $x^2\equiv a\bmod m$ with $m$ composite?
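For small moduli the question can at least be answered by exhaustive search, which is useful as a reference when testing candidate criteria. A brute-force Python sketch (deliberately naive, and not the "simple way" the question asks for):

```python
def is_power_residue(a, n, m):
    """Return True iff x**n ≡ a (mod m) has a solution, by trying every
    residue class. Only practical for small m."""
    a %= m
    return any(pow(x, n, m) == a for x in range(m))

# For instance, x^2 ≡ 3 (mod 7) has no solution (the squares mod 7 are
# 0, 1, 2, 4), while x^2 ≡ 2 (mod 17) does, since 6^2 = 36 ≡ 2 (mod 17).
```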
Actually, co-skewness is represented by a rank 3 tensor, rather than a matrix. I'm going to reproduce the formulation from Bhandari and Das, Options on portfolios with higher-order moments, but I'll add and omit some details. The co-skewness tensor is$$S_{ijk} = E \left[ r_i \times r_j \times r_k \right] = \frac{1}{T} \sum_{t=1}^T r_i(t) \times r_j(t) \times r_k(t)$$where $r$ are asset returns over $T$ time periods. Then, given portfolio weights $w$, mean asset returns $\mu$, covariance matrix $\Sigma$, and portfolio variance $\sigma_p^2(w) = w\prime \Sigma w$, we calculate moments:$$\begin{eqnarray*}m_1 & = & w\prime \mu \\m_2 & = & \sigma_p^2 + m_1^2 \\m_3 & = & \sum_{i=1}^N \sum_{j=1}^N \sum_{k=1}^N w_i w_j w_k S_{ijk}\end{eqnarray*}$$ The portfolio skewness is then$$S_p = \frac{1}{\sigma_p^3} \left[ m_3 - 3m_2 m_1 + 2m_1^3 \right]$$ In the case of a 6-asset portfolio, the co-skewness tensor will contain 216 components; however, due to symmetry, it only contains 56 independent components. Therefore, it can be helpful to reformulate the portfolio skewness equation for computational efficiency. To do this, we can start with the definition of skewness for portfolio returns,$$S_p = \frac{1}{\sigma_p^3} E \left[ \left( \sum_{i=1}^N w_i r_i \right)^3 \right] \quad ,$$ and then apply the multinomial theorem to obtain the portfolio skewness in terms of only the independent components. Update Especially for longer time series, the return moments should be centered on the means, i.e., $r_i = R_i - \bar{R}_i$. In the case of daily returns, $R_i(t) = \frac{P(t) - P(t-1)}{P(t-1)}$, where $P(t)$ is the closing price at time $t$. Be sure the prices for returns are comparable from period to period. For example, stock prices may need adjustments to account for dividend payments. See this Q & A on return measurement for more discussion. note: I edited the equation for the co-skewness tensor above.
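The identity $m_3 = \sum_{ijk} w_i w_j w_k S_{ijk} = E[(\sum_i w_i r_i)^3]$ can be verified directly. A small Python sketch with made-up return data (the numbers are purely illustrative):

```python
def coskewness_tensor(returns):
    """S[i][j][k] = (1/T) * sum_t r_i(t) r_j(t) r_k(t)."""
    T, N = len(returns), len(returns[0])
    return [[[sum(r[i] * r[j] * r[k] for r in returns) / T
              for k in range(N)] for j in range(N)] for i in range(N)]

def portfolio_skewness(returns, w):
    """S_p via the raw moments m1, m2, m3 of the portfolio return series,
    matching the formula S_p = (m3 - 3 m2 m1 + 2 m1^3) / sigma_p^3."""
    T = len(returns)
    p = [sum(wi * ri for wi, ri in zip(w, r)) for r in returns]
    m1 = sum(p) / T
    m2 = sum(x * x for x in p) / T
    m3 = sum(x ** 3 for x in p) / T
    var = m2 - m1 * m1
    return (m3 - 3 * m2 * m1 + 2 * m1 ** 3) / var ** 1.5

# Tiny illustrative data: 3 assets, 5 periods (numbers are made up).
rets = [[0.01, 0.02, -0.01],
        [-0.02, 0.00, 0.03],
        [0.03, -0.01, 0.00],
        [0.00, 0.01, 0.01],
        [-0.01, 0.02, -0.02]]
w = [0.5, 0.3, 0.2]
S = coskewness_tensor(rets)
m3_tensor = sum(w[i] * w[j] * w[k] * S[i][j][k]
                for i in range(3) for j in range(3) for k in range(3))
```

Here `m3_tensor` should equal the third raw moment of the portfolio return series, which is exactly the reformulation argument in the text.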
Let $\vec x$ be a random unit vector (that is, a random vector on the unit sphere). Let $x_i$ be the $i$'th coordinate (if it is easier, you can assume we are in 3-dimensional space). What is the distribution of $|x_i|$? More specifically, what is the expected value $\langle|x_i|\rangle$? This problem has a very nice geometrical interpretation if we can assume that the distribution $f(\vec{x})$ is constant over the surface of the $n-1$-unit-sphere in $n$-dimensional space. Then $f(\vec{x} \wedge (\vert x_i \vert=a))$ relates to the surface of two slices (at negative and positive coordinate) of the n-1 sphere in n-dimensional space, which is a (n-2)-dimensional sphere with radius $r = \sqrt{1-a^2}$. The area of this slice is related to the area (or more like a length, since it is a curve) $A_{n-2}$ of the n-2 sphere multiplied with the distance $ds$ which is perpendicular to $\vec{r}$. The fraction of this slice must be related to the total, thus we use the ratio with the area of a $A_{n-1}$ sphere. 
We have the area of the slice: $$A_{slice} = A_{n-2}(r) ds$$ And the relative area of the slice is: $$\frac{A_{slice}}{A_{total}} = \frac{A_{n-2}(r)}{A_{n-1}(1)} ds$$ with $$A_{n-2}(r) = \frac{2\pi^{\frac{n-1}{2}}}{\Gamma\left(\frac{n-1}{2}\right)}r^{n-2}$$ $$A_{n-1}(1) = \frac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}$$ $$r = \sqrt{1- x_i ^2}$$ $$\begin{align} ds &= \sqrt{(dr)^2+(dx_i)^2}\\ &= \sqrt{\left(dx_i \frac{-x_i}{\sqrt{1-x_i^2}}\right)^2+(dx_i)^2}\\ &= \sqrt{(dx_i)^2 \frac{x_i^2}{1-x_i^2}+(dx_i)^2}\\ &= \sqrt{(dx_i)^2 \frac{1}{1-x_i^2}}\\ &= dx_i \frac{1}{\sqrt{1-x_i^2}} \end{align}$$ Thus: $$f(\vert x_i \vert) = \frac{2 \Gamma\left(\frac{n}{2}\right)/\Gamma\left(\frac{n-1}{2}\right)}{\pi^{1/2}}\left(1-x_i^2\right)^{\frac{n-3}{2}}$$ or, using the Beta function, $$f(\vert x_i \vert) = \frac{2\left(1-x_i^2\right)^{\frac{n-3}{2}}}{B\left(\frac{1}{2},\frac{n-1}{2}\right)}$$ which becomes for n=3: $f(\vert x_i \vert) = 1$. Which makes sense, intuitively. At the pole the slice has smaller radius but is thicker, and at the equator the slice has larger radius but is thinner. This results in equal probability, whatever $x_i$: at the pole, at the equator, or in between. Note, some help with the intuition behind the integration step: the n-1 sphere in n-dimensional space is a hypersurface and its intersection with $\vert x_i\vert=a$ is a hypercurve. By integrating the hypercurve along the direction $\vec{s}$, you get the hypersurface.

The answer is $1/2$. This paper has the probability density $f_n(x_i)$ of $x_i$ for the vector inside an n-dimensional hypersphere. You're interested in the vector from origin to the random point on a surface of a hypersphere. To get the surface of a unit hypersphere you simply take a derivative along the radius then fix the radius at 1.
Therefore, you can use $2f_{n-2}(x_i)$ from the paper as a density for your problem and take an integral from 0 to 1: $$\int_0^1 2f_{n-2}x dx=\int_0^1 f_{n-2} dx^2$$ The equations for $f_n$ from the paper: $$\frac{\Gamma(n/2+1)}{\Gamma((n+1)/2)}\frac{(\sqrt{1-x^2})^{n-1}}{\sqrt\pi}$$ Hence, the density of the $|x_i|$ is given by: $$\tilde f_n(|x|)=2\frac{\Gamma((n-2)/2+1)}{\Gamma((n-1)/2)}\frac{(\sqrt{1-x^2})^{n-3}}{\sqrt\pi}$$ For the three dimensional case you get: $$\tilde f_3(|x|)=\frac{2\Gamma(3/2)}{\sqrt\pi}=1$$ You get the mean absolute value by integrating: $$\int_0^1\tilde f_3(|x|)xdx=\int_0^1xdx=1/2$$ Two points worth noting. First, it's easy to calculate the integral for any $n$ by the substitution $z=x^2$: $$\frac{2 \Gamma(\frac n 2)}{(n-1)\Gamma(\frac{n-1}{2})\sqrt\pi}$$ You can see now that the average absolute value of the coordinate converges to zero as dimensionality increases, as a plot of the average against the number of dimensions $n$ shows. Second, the sequence of functions on these densities converges to the normal distribution: $$\frac{f_n(v/\sqrt{n+2})}{\sqrt{n+2}}\to_{n\to\infty}\mathcal{N}(0,1)$$
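The closed-form results ($\tilde f_3(|x|) = 1$, mean $1/2$ for $n=3$, and shrinkage as $n$ grows) are easy to confirm by Monte Carlo, sampling a standard Gaussian vector and normalising it to get a uniform point on the sphere. A Python sketch (the function name and trial counts are my own choices):

```python
import random

def mean_abs_coordinate(n, trials=100_000, seed=0):
    """Monte Carlo estimate of E|x_1| for a uniform point on the unit
    (n-1)-sphere, sampled by normalising an n-dim standard Gaussian."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = sum(c * c for c in v) ** 0.5
        total += abs(v[0]) / norm
    return total / trials
```

For n = 3 the estimate should land very close to 1/2, and for large n it should be much smaller, matching the decay noted above.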
There isn't a simple redefinition that would work in all cases, and this is probably why nobody does this. Of course, you COULD define such a unit for a given system by defining it as a function of temperature, but it would be applicable only to the specific system. As you noted, the physics behind the temperature dependence of a material's heat capacity lies in the quantum states which become available for excitation at different energies. Consider an insulator which stores heat primarily in the lattice vibrations. As you increase the temperature, more vibrational modes are accessible. This is often described using the Debye model which gives the heat capacity as such: $C_V = k(\frac{T}{\theta})^3$ where $C_V$ is the heat capacity, $k$ is a constant, $T$ is the temperature in Kelvin, and $\theta$ is a constant pertinent to a specific solid.[1] However, a metal can also store a significant amount of heat in the motions of electrons since conduction electrons are not bound to specific sites. Heat capacity for the electron 'gas' is often described like this: $C_V' = k' \frac{T}{\theta'}$ where $C_V' \neq C_V$, $k' \neq k$ and $\theta' \neq \theta$.[2] Depending on $\theta$ and $\theta'$ for a given material, at low temperatures (often cryogenic) the electron contribution can be the dominant mode of storing heat in the solid. Thus, if you chose a unit which was sensible for an insulator, it wouldn't apply to a conductor (and vice versa). Even within a specific material, the heat capacity on either side of a phase transition is often different since the phase transition has changed the quantum states available in the material. Therefore, your new unit would become discontinuous at phase transitions and would need a different mathematical expression for each segment of the heat capacity function, not to mention a different parameterization for each material. Hence, no real gain. [1] See Kittel, Kroemer, Thermal Physics Second Edition; W. H. 
Freeman and Company, New York, 1980, p. 106, eq. 47b [2] See Kittel, Kroemer, Thermal Physics Second Edition; W. H. Freeman and Company, New York, 1980, p. 193, eq. 37. Note that folks don't actually use the symbol $\theta'$ in this case, but choosing so makes the explanation above clearer.
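The crossover between the cubic lattice term and the linear electronic term can be illustrated numerically. In the Python sketch below, every constant ($k$, $\theta$, $k'$, $\theta'$) is a made-up placeholder value chosen only so that the crossover lands at a cryogenic temperature, not data for any real material:

```python
def lattice_cv(T, k=1.0, theta=300.0):
    """Debye-like lattice term: C_V = k * (T / theta)**3."""
    return k * (T / theta) ** 3

def electron_cv(T, kp=1.0, thetap=60_000.0):
    """Electronic term: C_V' = k' * T / theta'."""
    return kp * T / thetap

def crossover_temperature(k=1.0, theta=300.0, kp=1.0, thetap=60_000.0):
    """Positive T solving k (T/theta)^3 = k' T/theta',
    i.e. T = sqrt(k' * theta^3 / (k * theta'))."""
    return (kp * theta ** 3 / (k * thetap)) ** 0.5
```

Below the crossover the linear electronic term dominates; above it the cubic lattice term takes over, which is the behaviour described in the answer.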
The RC Circuit concept Up until now we've been dealing with circuits that don't change with time. We've assumed that once you connect the circuit up things have certain voltages and currents and that's just how it is. In the circuits we've seen up until now that's basically true, however in real circuits (and especially circuits with capacitors) there is some "start up" time, called the "transient", during which voltages and currents build up to their steady-state values. This is mostly the case because capacitors take some amount of time to develop their charge, at a rate set by the current. The higher the current the faster the charge builds up, until it hits the value we've been calculating previously. In this topic we'll start to look at circuits with resistors and capacitors; specifically we'll look at what happens immediately after putting the circuit together (remember, that's called the transient). That means we'll look at what the voltage across a resistor is after 1 second, 2 seconds etc. as well as other properties of the circuit. When the switch on the circuit below is closed the current I starts out at 10mA (just like it would if the capacitor were a short circuit) but then drops exponentially to 0mA (the current through a capacitor after transients is always 0 in DC circuits). Conversely the voltage across the resistor begins at 10V (like it would if the capacitor was a short circuit) and exponentially drops to 0V (from Ohm's law we know that since the current is 0 the voltage drop across the resistor must be 0). On the other hand when the switch on the circuit above is closed the voltage across the capacitor begins at 0V and exponentially rises towards 10V (the DC value we calculated in previous topics). fact A capacitor-resistor circuit is characterised by what we call a "time constant" whose symbol is \(\tau\). The time constant of a series RC circuit is \(\tau = RC\).
fact The voltage across a capacitor in an RC circuit is given by: \(\Large V_c = V(1-e^{\frac{-t}{\tau}})\) Where \(V\) is the voltage across the resistor-capacitor branch and \(\tau\) is the time constant from above. fact The voltage and current of the resistor in an RC circuit will fall by 63.2% of its initial value within the first time constant, by 86.5% in two time constants, 95% in three time constants, 98.2% in four time constants and 99.3% in five time constants. The voltage across a capacitor in an RC circuit will rise to 63.2% of its final value within the first time constant, 86.5% in two time constants, 95% in three time constants, 98.2% in four time constants and 99.3% in five time constants. If the RC circuit involves a complex arrangement of capacitors and resistors try to reduce them to a single capacitor and a single resistor to determine the time constant. You may notice that according to the above percentages the voltages and currents NEVER reach their steady state values that we calculated in previous topics. If we want to refer to these steady state values we will say that a circuit's switch was closed a "long time" ago. practice problems
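The percentages in the fact above follow directly from \(V_c = V(1-e^{-t/\tau})\) evaluated at whole multiples of \(\tau\). A quick Python check (the R and C values are assumed for illustration; only the supply voltage 10V comes from the example circuit):

```python
import math

V = 10.0        # supply voltage from the example circuit (volts)
R = 1_000.0     # assumed resistance (ohms), illustrative
C = 1e-6        # assumed capacitance (farads), illustrative
tau = R * C     # time constant of the series RC circuit

def v_capacitor(t):
    """Charging capacitor voltage: V * (1 - e^(-t/tau))."""
    return V * (1.0 - math.exp(-t / tau))

# Fraction of the final value reached after 1..5 time constants.
fractions = [round(v_capacitor(n * tau) / V, 3) for n in range(1, 6)]
# fractions -> [0.632, 0.865, 0.95, 0.982, 0.993]
```

Note the fractions depend only on the number of time constants elapsed, not on the particular R, C, or V chosen.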
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Does delta neutral portfolio mean you add up deltas of all positions and the sum should be zero? Is this true? Also, in a FX portfolio consisting of FX calls, puts and forwards, if the forward delta is given for each, how do you make the portfolio delta neutral? Do you just add up the forward deltas for all and, depending on the sum, buy or sell a forward to bring the sum of deltas to zero? Is this the right approach? You are right in saying that to check whether your position is $\Delta$-neutral, you have to check the $\Delta$s of its constituents. That's a general statement that applies to positions that you are not rebalancing too fast, see e.g. this recent question. In general, each Greek measures a particular risk/exposure of your position to a market condition that may change. For example, $\Delta$ ($\Theta$, $\rho$) tells you how much your position will change if the underlying level (time, interest rate) changes, all other things being equal. The latter condition makes it possible to express Greeks as partial derivatives of the portfolio's value with respect to relevant variables, e.g. $\Delta = \frac{\partial V}{\partial S}$. Since partial derivatives are linear, if you hold $m$ at-the-money calls and $n$ stocks $$ \frac\partial{\partial S}(mC + nS) = m\Delta_C + n \approx m/2 + n $$ where $\Delta_C \approx 1/2$ is the $\Delta$ of one at-the-money call. In that case, to be $\Delta$-neutral, you would hold $n = -m/2$ stocks against your at-the-money call position. As such, to compute the exposure of your entire position, it is enough to compute separately the exposures of each leg in your position. For this reason, the approach you've described in the OP is correct. Just note that $\Delta$ changes as well, and hence to stay $\Delta$-neutral you will need to do additional trades while you proceed to hold the option position. This concept is called continuous hedging. The changes of $\Delta$ are described by higher-order Greeks such as Gamma, Charm, DdelV etc.
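The add-up-the-deltas bookkeeping for an FX book might be sketched as follows. The position names, notionals and delta values are entirely hypothetical, chosen only to show the mechanics:

```python
# Hypothetical FX book: per-position deltas in units of the underlying
# (notional * option delta). Positive = long the underlying currency.
positions = {
    "long_call":     5_000_000 * 0.45,   # 5m notional, call delta ~0.45
    "short_put":     3_000_000 * 0.30,   # short put contributes positive delta
    "long_forward":  2_000_000 * 1.00,   # a forward has delta ~1
}

net_delta = sum(positions.values())
# To neutralise, trade a forward with the opposite delta
# (sell forward if net_delta is positive, buy if negative):
hedge_forward_notional = -net_delta
```

This matches the answer above: exposures add leg by leg, and the hedge only holds instantaneously, since the option deltas drift with spot and time.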
The phrase "first-order logic" has two meanings: It is a chapter of mathematical logic in which we study certain kinds of formal systems and everything related to them. It is a special kind of first-order theory, namely the one generated by an empty signature and an empty set of axioms. Your question refers to the second meaning, but to understand this, we need to build things up: There is a certain formal language called the language of first-order logic. Speaking informally, it is the stuff you can build from variables, equality, $\land$, $\lor$, $\lnot$, $\Rightarrow$, $\forall$ and $\exists$. This stuff is known as first-order formulas. There is a certain formal system called first-order logic which tells us what it means that we prove a first-order formula. The system is given as a set of inference rules. A first-order theory $\mathcal{T}$ is given by: a signature $\Sigma_{\mathcal{T}}$ which consists of a set of constants, function symbols, and relation symbols. Think of these as extensions of the basic language of first-order logic. We call it the language of $\mathcal{T}$. a deductively closed set of first-order formulas written in the language extended by the signature. A set $S$ of formulas is said to be deductively closed if any application of inference rules of first-order logic to formulas in $S$ gives formulas which are again in $S$. In other words, $S$ contains all of its logical consequences. A common way of creating such a set $S$ is: start with some chosen set of formulas $A$, and add to it all of its logical consequences, and the consequences of those consequences, and so on. This is called the deductive closure of $A$. We often call the formulas in $A$ axioms. A theory may or may not be complete.
It is not important to know what "complete" means here, but it is important to know that the following can happen: we can have two sets of formulas $A$ and $B$, such that $A \subseteq B$, the deductive closure of $A$ is a complete theory, and the deductive closure of $B$ is not a complete theory. We are now ready to answer your question. Let $T$ be the theory whose signature is empty and whose set of formulas is the deductive closure of the empty set. Let $P$ be the theory whose signature is that of Peano arithmetic (constant $0$, unary operation $S$, binary operations $+$ and $\times$) and whose formulas are the deductive closure of the Peano axioms. It is a fact that $T$ is contained in $P$ (in fact $T$ is contained in every theory), $T$ is complete, and $P$ is not complete. The theory $T$ is popularly called "first-order logic", but this really is a misnomer. Some people are a bit more precise and call it "the pure theory of first-order logic". In summary, your question revealed the following: You did not know that "first-order logic" may refer to the theory with empty signature generated by the empty set of axioms. A complete theory may become incomplete when we extend it. You used the wrong definition of completeness. The correct definition is: a theory is complete if every sentence or its negation is a theorem of the theory. NB: a sentence is a closed formula (one that does not contain any free variables). Lastly, let me address your question about validity: a formula is provable if there is a proof of it; a formula is valid if it is true in every model. A basic meta-theorem about first-order logic is that every provable formula is valid. The reverse holds as well and is known as Gödel's completeness theorem. However, it often happens that in some particular situation one purposely makes a mismatch between validity and provability for a good reason.
For instance, if we limit attention to just finite models, then it may easily happen that there are valid statements which have no proof. Why would one do that? In computer science it could be for algorithmic reasons, or because one is interested in a particular class of models only. You say "the only way to know that a sentence is valid is to prove it". This may be the case at some informal level (I think God would disagree with you), but notice that any such proof of validity happens outside the theory, at the meta-level. Indeed, since establishing validity requires one to talk about all models, this is certainly not something we would expect to perform inside the theory.
I am currently trying to gain a fuller understanding of the meaning of various spin states and their relation to the direction of measurement by a Stern-Gerlach device. I came across two spin-${1 \over 2}$ states, the first of which given by: $$\left|\psi_1\right\rangle={1 \over 2} \left|+z\right\rangle+{i\sqrt3 \over 2}\left|-z\right\rangle $$ I know that if a spin-${1 \over 2}$ particle is prepared spin up along an axis specified by: $$\hat{\boldsymbol{n}}=\sin{\theta}\cos{\phi}\hat{\boldsymbol{i}}+\sin{\theta}\sin{\phi}\hat{\boldsymbol{j}}+\cos{\theta}\hat{\boldsymbol{k}}$$ Then its spin state is given by: $$\left|+n\right\rangle=\cos{{\theta \over 2}}\left|+z\right\rangle+e^{i\phi}\sin{{\theta \over 2}}\left|-z\right\rangle$$ By inspection, then, I determined that for $\left|\psi_1\right\rangle$ possible values for $\theta$ and $\phi$ are $\theta = {2\pi \over 3}$ and $\phi={\pi \over 2}$. Knowing these angles I then determined the direction in which $\left|\psi_1\right\rangle$ is spin up. This made sense. The second state I encountered, however, is given by: $$\left|\psi_2\right\rangle={-i \over 2}\left|+z\right\rangle+{\sqrt3 \over 2}\left|-z\right\rangle$$ According to the expression I gave above for $\left|+n\right\rangle$, however, in order for the amplitude for $\left|+z\right\rangle$ to be complex, the argument passed to the cosine function must be complex. In turn, this means $\theta$ must be complex. This doesn't make sense to me, because my understanding is that $\hat{\boldsymbol{n}}$ is a vector in ordinary 3-dimensional space. This led me to believe I am determining $\theta$ incorrectly. If I am making a mistake in determining the values for $\theta$ and $\phi$, I am curious to know what it is. If my method is correct, then why am I coming up with a complex angle for the spin direction of $\left|\psi_2\right\rangle$?
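One way to see what is going on is to strip the global phase first: multiplying a state by a unit complex number does not change the physical spin direction. The sketch below (a hypothetical helper, not from any textbook) recovers $(\theta, \phi)$ after fixing the phase so that the $\left|+z\right\rangle$ amplitude is real and non-negative:

```python
import numpy as np

def bloch_angles(a, b):
    """Return (theta, phi) for the state a|+z> + b|-z>, after removing the global phase."""
    # Multiply by a unit phase so that a becomes real and non-negative.
    phase = np.exp(-1j * np.angle(a)) if a != 0 else 1.0
    a, b = a * phase, b * phase
    theta = 2 * np.arccos(np.clip(abs(a), 0.0, 1.0))
    phi = np.angle(b) % (2 * np.pi)
    return theta, phi

psi1 = (0.5, 1j * np.sqrt(3) / 2)   # |psi_1>
psi2 = (-0.5j, np.sqrt(3) / 2)      # |psi_2>

print(bloch_angles(*psi1))   # theta = 2*pi/3, phi = pi/2
print(bloch_angles(*psi2))   # the same angles
```

Both states give $\theta = 2\pi/3$, $\phi = \pi/2$, because $\left|\psi_2\right\rangle = -i\left|\psi_1\right\rangle$: the complex amplitude on $\left|+z\right\rangle$ is only a global phase, so no complex angle is needed.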
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Context: There was recently a question on Math.SE: Inferior to Other Younger and Brighter Kids which starts as follows: I'm a high school student (Junior/Grade 11) and I'm currently studying Spivak's Calculus on Manifolds. I've finished Single/Multi Variable Calculus (the basic ones without analysis/rigor), the first half of Rudin for Real Analysis and Linear Algebra (Lang) through self-study. And then the OP writes how inferior he feels compared to other younger and brighter kids. Perhaps there are some deeper issues involved, but I don't want to dwell on it, nor discuss the situation of the OP publicly. The issue: The issue I would like to write about is that more and more often I notice kids studying advanced books. Surely gifted students will access more advanced material earlier than their peers and I would applaud the kids reading almost anything they want. Yet, I have the feeling that sometimes this is just too early, and reading advanced books without a certain level of mathematical maturity may cause more harm than good. My experience isn't enough to draw any conclusions, but this is not just a feeling; I've witnessed negative effects of such behavior multiple times. Here are some examples: Over-reliance on the already-learned material. There was an introductory problem given which could be solved using some elementary method. The gifted student thought "this is boring" and solved it faster using an advanced method. Then there was another problem given, but the advanced method couldn't be generalized to this new problem (while the elementary way could) and the gifted student was unable to find a solution until the end of the class. Missing fundamentals. A gifted student learned $X$ by himself, but the material depended on the knowledge of $Y$. Yet, the student lacked proper understanding of $Y$ and so built some bad intuitions, which had to be unlearned before progressing further.
Closed-mindedness (compared to other gifted students). This one is hard to describe; some signs are: the student thinks there's always a book that explains the area in question; the student specializes in solving toy problems and (difficult) homework, but is unable to handle even simple open or research problems; the student is unable to form his own theories or interpretations. There are also others, like burnout, but the causes will be more complicated, let's not diverge too much. Even these three trouble me very deeply (especially the last) and it's like some math-contest rat race rather than curious study of the beautiful area that mathematics is. To me, rediscovering the wheel (if not too often, human lifespan is still limited) is a necessary part of study, so that when the time comes, the kids will be able to make their own original discoveries. Getting all the best concepts laid out in a book, digested, updated and improved over the years doesn't seem like the best idea. Still, perhaps this view is invalid and unfounded? Question: Is there any data or research on the effects of access to advanced material early in a kid's education? Given that gifted students are rare, right now even some anecdotal evidence would be great. What are your experiences in this matter? More concrete examples: @Joseph Van Name asked for examples, so here they are. Over-reliance on the already-learned material: In a geometry class there was a guy who tried to solve every single problem using an analytical approach: coordinates, trigonometry, algebra, etc. I admit that he was very good at this, but he didn't learn some techniques we practiced on simpler problems and then wasn't able to solve the more difficult ones which wouldn't yield to his standard method. During a programming class there was a group who liked suffix trees too much, and some students didn't learn some simpler techniques (here I mostly mean variations on the Knuth-Morris-Pratt algorithm) which were necessary later.
There was another guy who would try to solve all the contest inequalities using Muirhead's inequality. He had a huge problem each time the inequality in question was not similar to any mean (e.g. all the numbers being equal not causing equality). Missing fundamentals: A person who had learned more advanced probability before university disregarded the first few classes and then assumed that any distribution has a density function. There were later some problems with conditional expected values and their $\sigma$-algebras, but I don't remember it well enough. Many times students who learn asymptotic complexity by themselves don't understand some concepts properly and don't pay enough attention to details. Some common misconceptions are $n \notin O(n^2)$, $n^2 + n \in \omega(n^2)$, and that $f_i \in O(g)$ implies $\sum_{i=1}^{n} f_i \in O(n\cdot g)$. Frequently students who learned programming without any supervision or guidance from more experienced people have bad style (and programming contests often reinforce that bad style). It is an issue, because it's hard to unlearn and the drawbacks are not severe enough for the student to want to change it by himself. The effect is that the student penalizes himself for a much longer time (one of the reasons why summer internships or involvement in open-source projects are a good idea).
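As a worked illustration of the first complexity misconception listed above (a standard fact offered here for clarity, not something from the original thread), the definition of big-$O$ immediately gives $n \in O(n^2)$:

```latex
% f is in O(g) iff there exist c > 0 and n_0 such that f(n) <= c*g(n) for all n >= n_0.
% Taking c = 1 and n_0 = 1 exhibits the required witnesses for f(n) = n, g(n) = n^2.
\[
  f \in O(g) \iff \exists\, c > 0,\ n_0 :\ \forall n \ge n_0,\ f(n) \le c\, g(n),
  \qquad\text{and}\qquad
  \forall n \ge 1:\ n \le 1 \cdot n^2 \ \implies\ n \in O(n^2).
\]
```

Big-$O$ is an upper bound, not a tight bound, which is exactly what the misconception misses.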
ABCD is a rhombus in which the altitude from D bisects AB, so AE = EB. Find angles A and B (in degrees).

Note by Neha Adepu, 5 years, 12 months ago

I don't think this is from IMO (International Mathematical Olympiad).

This is from an exam conducted here in India by a private company.

Oh, I see. Indian Mathematical Olympiad?

@Zi Song Yeoh – No way, I doubt they ask such easy questions. As Vikram said, it must be a contest by some private organisation. If private organisations use names such as "IMO" and mislead the students, then it's a very bad tactic to promote themselves.

@Pranav Arora – It's called the International Mathematics Olympiad, conducted by SOF.

@Zi Song Yeoh – Nope. It's a basic-level contest conducted by a company.

Hey guys, I think she is talking of the SOF IMO and not the great IMO you are all thinking of!

You're right! It is a problem from the workbook.

Solution: Consider the sides of the rhombus to be of length $x$, i.e., $AD = x$. So $AE = \frac{x}{2}$. Let $\angle A = \theta$. Then $$\cos \theta = \frac{AE}{AD} = \frac{1}{2} \implies \theta = 60^\circ,$$ so $\angle A = 60^\circ$ and $\angle B = 180^\circ - 60^\circ = 120^\circ$ (consecutive angles of a rhombus are supplementary).

This question is not from IMO!

I think it is 180 degrees.

Alternative solution: Consider $DA = x$, $AE = x/2$, $EB = x/2$. Then $$DE = \sqrt{x^2 - (x/2)^2} = \frac{\sqrt{3}}{2}x, \qquad DB = \sqrt{DE^2 + EB^2} = x \quad \text{(Pythagoras' theorem)}.$$ Since $DA = x$, $DB = x$ and $AB = x$, triangle $ADB$ is equilateral. Thus $\angle A = 60^\circ$ and $\angle B = 120^\circ$.

THANK YOU!!!

Angle A = 60, angle B = 120.

If this problem is from IMO, please tell me what year and what question.

That's the Indian Maths Olympiad. This problem is not from the main exam; it is from the workbook.
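A quick numeric check of the trigonometric solution above (a throwaway sketch; the side length $x$ is arbitrary):

```python
import math

x = 1.0                     # side length of the rhombus; any positive value works
AE = x / 2                  # the altitude from D bisects AB, so AE = x/2
A = math.degrees(math.acos(AE / x))   # cos A = AE / AD = 1/2
B = 180.0 - A               # consecutive angles of a rhombus are supplementary
print(round(A), round(B))   # 60 120
```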
All road vehicles powered by internal combustion engines have a transmission as part of the powertrain. The simplest type of transmission is the manual transmission. It’s called “manual” because the driver has both roles of decision making (when to perform a gearshift) and actuation (the actual gearshift process). The traction characteristics of an internal combustion engine make it impossible to propel a vehicle without a transmission. The torque and speed output of the internal combustion engine are either too low or too high to match the dynamic requirements of a vehicle. Thus, the role of a transmission is to:

- adapt the torque output of the engine as a function of the road load
- make possible the backwards movement of the vehicle, for the same direction of rotation of the engine
- allow engine disconnection from the rest of the powertrain

What is the difference between a transmission and a gearbox? Usually a transmission consists of a gearbox plus a differential. The gearbox contains all the gear assemblies, shafts, synchronizers, rails, etc. The gearbox can be regarded as the transmission without the differential. For front-wheel drive (FWD) vehicles, the powertrain (engine + gearbox + differential) is completely contained on the front axle. Thus, for this type of vehicle, when we refer to the transmission, we consider that it contains both the gearbox and the differential. For rear-wheel drive (RWD) vehicles, the powertrain is split between the front and rear axles. The front axle usually contains the engine and gearbox, while the rear axle contains the differential. Thus, for this type of vehicle, transmission and gearbox have the same meaning. The transmission is mounted after the coupling device (clutch, torque converter); it takes the clutch torque and speed as input and converts and distributes them to the wheels through the half shafts.
Types and main components of a manual transmission. Every manual transmission consists of input and output shafts, several permanent-mesh gears and an actuation mechanism. Depending on the number of ratio stages used to make up the gears, transmissions are classified as:

- single-stage transmissions
- two-stage transmissions
- multi-stage transmissions

In a single-stage transmission, a gear ratio is formed with only one pair of gears. Also, there are only two shafts in the transmission: an input shaft and an output shaft. This type of transmission is primarily used in front-wheel drive vehicles. A particular feature of this type of transmission is that there is no direct drive gear (ratio = 1.00). This is because all gear ratios are formed by a pair of permanent-mesh gears. There is an equivalent gear for the direct drive gear, with a gear ratio close to 1.00 (e.g. 0.98 or 1.02). Two-stage transmissions are used for the standard powertrain configuration (engine on the front axle with rear-wheel drive). Most of these transmissions have an input shaft, a counter shaft and an output shaft. There are also configurations with only two shafts (input and output). In the case of a two-stage transmission, the input shaft and the output shaft have a coaxial arrangement (their axis is common), while on single-stage transmissions the axes of the input and output shafts are different, with an offset between them. Both single-stage and two-stage transmissions have the input shaft connected to the clutch. All the forward gear assemblies have synchronizers for engagement. The purpose of the synchronizer is to align the input shaft speed to the output shaft speed when a gearshift is performed. Two-stage transmissions have a constant gear which mechanically links the input shaft to the counter shaft. Thus, every gear ratio is made up of two permanently-meshed gear assemblies: the constant gear plus the gear assembly for the specific gear.
Because of this arrangement, two-stage transmissions have slightly lower overall efficiency. The direct drive gear (4th gear in the image above) is the gear which connects the input shaft directly to the output shaft, without going through a gear mesh. Thus the gear ratio for a direct drive gear is 1.00 (no conversion of speed or torque). In every transmission, except for the reverse gear, all the forward gears are permanently meshed. For the example above, all the gears on the counter shaft are fixed (they rotate together), and all the gears on the output shaft are free (they rotate independently of the output shaft). The synchronizers are fixed on the output shaft. When engaging a gear, the synchronizer makes the connection between the input/counter shaft and the output shaft. The reverse gear contains an extra gear in order to change the direction of rotation of the output shaft. The reverse gear doesn’t have a synchronizer since it is engaged only after the vehicle has come to a complete stop. All the gearshifts in a manual transmission are performed with torque interruption. Before a gearshift, the clutch is opened and no more engine torque is transmitted to the input shaft. After the gearshift is complete, the clutch is closed back in order to allow the flow of the engine power (torque and speed). In the case of a manual transmission, the gearshift can be:

- upshift: the gear number is incremented (e.g. from 1st gear to 2nd gear)
- downshift: the gear number is decremented (e.g. from 3rd gear to 2nd gear)

Modern manual transmissions have 5, 6 or even 7 forward gears and 1 reverse gear. Each gear is characterized by a gear ratio. Multi-stage transmissions use more than two permanently-meshed gear assemblies for the formation of a gear ratio. They are primarily used in commercial vehicle applications. How are the engine speed, torque and power modified by the transmission? The core element of a manual transmission is the meshed gear assembly.
It consists of two toothed wheels (gears) meshed together. The gear that is connected to the input/counter shaft is the input gear; the gear connected to the synchronizer is the output gear. Every gear has a fixed gear ratio. The gear ratio ($i$) is the ratio between the number of teeth of the output gear ($z_{out}$) and the number of teeth of the input gear ($z_{in}$):

\[i = \frac{z_{out}}{z_{in}}\]

For a given speed of the input gear ($n_{in}$ = 4500 rpm) and a gear ratio ($i$ = 1.5), the speed of the output gear ($n_{out}$) will be:

\[n_{out} = \frac{n_{in}}{i} = \frac{4500}{1.5} = 3000 \text{ rpm}\]

For a given torque of the input gear ($T_{in}$ = 200 Nm) and a gear ratio ($i$ = 1.5), the torque of the output gear ($T_{out}$) will be:

\[T_{out} = T_{in} \cdot i = 200 \cdot 1.5 = 300 \text{ Nm}\]

We can see that, for a gear ratio higher than 1.00, the output speed is reduced while the output torque is amplified. What happens to the power, does it change? To find the answer to this question we need to calculate the power at the input gear and the power at the output gear, with the equation:

\[P \text{ [W]} = T \text{ [Nm]} \cdot \frac{\pi}{30} \cdot n \text{ [rpm]}\]

For our input data above, we’ll get:\[ \begin{equation*} \begin{split} P_{in} &= T_{in} \cdot \frac{\pi}{30} \cdot n_{in} &= 200 \cdot \frac{\pi}{30} \cdot 4500 &= 94248 \text{ W}\\ P_{out} &= T_{out} \cdot \frac{\pi}{30} \cdot n_{out} &= 300 \cdot \frac{\pi}{30} \cdot 3000 &= 94248 \text{ W} \end{split} \end{equation*} \]

As we can see, a gear ratio doesn’t transform the power, only the torque and speed: the power is kept constant. In reality there is a small power drop at the output gear, due to gear mesh efficiency. For one gear mesh assembly, the efficiency ($\eta$) is around 0.98 – 0.99. In this case the output power will be \(P_{out} = \eta \cdot P_{in}\).

Example of a real-world manual transmission: TREMEC TR-6070. Source: http://www.tremec.com The TREMEC TR-6070 seven speed manual transmission was designed specifically for premier North American sports cars and integrates an awe-inspiring shift technology.
The TR-6070 is based on the well respected TR-6060 six speed transmission. A triple overdrive gear was added to improve fuel economy and lower emissions. Incorporated in the TR-6070 is a Gear Absolute Position (GAP) sensor. The technology provides a signal from the transmission to the engine controller, inferring the real-time position of the shift selector. With this information, the engine RPM can be controlled to match the next gear selection, which enhances driveability. Design features of the TR-6070 synchronizers include a combination of double-cone and triple-cone rings, utilizing a hybrid solution on all forward gears. The hybrid rings are a combination of carbon and sintered bronze cones providing higher capacity and shift performance. Linear bearings lower the friction of the shift rail movements, making the shifter feel naturally lighter and more direct.

TR-6070 Features at a Glance:

- Rear wheel drive, seven-speed manual overdrive transmission
- Triple overdrive for improved fuel economy and lower emissions
- Gear ratio spread of up to 6.33
- Triple- and double-cone synchronizers
- Advanced and asymmetric clutch teeth in second and third speed gears
- Two-piece gear design for high torque capacity
- Low mass, hollow shaft design available
- Sensors include: temperature, speed, gear position

TREMEC TR-6070 Transmission Specifications:

Type: Rear wheel drive, seven-speed manual overdrive transmission
Maximum gross vehicle weight (reference) [kg / lb]: 2400 / 5291
Case: Die-cast aluminum alloy
Center distance [mm]: 85
Overall length [mm]: 782
Clutch housing: Integrated
Synchronizer type: Double and triple cone; hybrid friction material
Lubricant type: Dexron III ATF
Lubricant capacity (approximate) [L / pt]: 3.5 / 7.4
Transmission weight [kg / lb]: 65.2 / 143.75
Power take off: No

Available Gear Ratios (alternative ratios available upon request; may result in different maximum input torque):

Gear                         A       B       C
1                            2.97    2.66    2.29
2                            2.07    1.78    1.61
3                            1.43    1.30    1.21
4                            1.00    1.00    1.00
5                            0.71    0.74    0.82
6                            0.57    0.50    0.68
7                            0.48    0.42    0.45
R                            2.85    2.53    2.70
Input torque [Nm / lb-ft]    625/460 740/545 860/635

Manual transmissions have relatively simple mechanics, do not require maintenance, are robust and have very good overall efficiency. Understanding how a manual transmission works is critical in order to advance to more complex topics such as automatic or double-clutch transmissions. For any questions or observations regarding this tutorial please use the comment form below. Don’t forget to Like, Share and Subscribe!
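The speed/torque/power relations worked through earlier in this tutorial can be checked numerically. This is a small illustrative sketch using the example figures from the text (4500 rpm, 200 Nm, $i$ = 1.5), not code from the tutorial itself:

```python
import math

def through_gear(n_in_rpm, t_in_nm, ratio, efficiency=1.0):
    """Apply a gear ratio: speed is divided by i, torque is multiplied by i (times mesh efficiency)."""
    return n_in_rpm / ratio, t_in_nm * ratio * efficiency

def power_w(t_nm, n_rpm):
    # P [W] = T [Nm] * pi/30 * n [rpm]
    return t_nm * math.pi / 30.0 * n_rpm

n_out, t_out = through_gear(4500.0, 200.0, 1.5)
print(n_out, t_out)   # 3000.0 rpm, 300.0 Nm

# Power is conserved across an ideal gear mesh (~94248 W on both sides).
print(round(power_w(200.0, 4500.0)), round(power_w(t_out, n_out)))
```

With a realistic mesh efficiency of 0.98, the same helper shows the small output-power drop mentioned above.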
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
The energy of the magnetic field is the work required to establish a general steady-state distribution of currents and fields. This work is, in infinitesimal form, $$\label{0}\tag{0}\delta W = \int (\delta \mathbf A \cdot \mathbf J) \ d^3 x$$ where $\mathbf J$ is the current density. If we are interested in the work done on the free (macroscopic) currents, we have (a): $$\begin{align}\delta W & = \int (\delta \mathbf A \cdot \mathbf J_f) \ d^3 x\\&=\int \delta \mathbf A \cdot (\nabla \times \mathbf H) \ d^3x\\&=\int [\mathbf H \cdot (\nabla \times \delta \mathbf A) +\nabla \cdot(\mathbf H \times \delta \mathbf A)] \ d^3x\end{align}$$ where $\mathbf A$ is the vector potential and $\mathbf H$ is the magnetic field (b). Assuming a localized field distribution, the second term in the integral vanishes, and using $\mathbf B = \nabla \times \mathbf A$ we get $$\delta W = \int \mathbf H \cdot \delta \mathbf B \ d^3x$$ If we assume that the material is linear, i.e. that $\mathbf B = \mu \mathbf H$, we have $$\mathbf H \cdot \delta \mathbf B = \frac 1 2 \delta( \mathbf H \cdot \mathbf B)$$ Therefore we finally get the following expression for the energy of the magnetic field in the presence of linear materials: $$U = \frac 1 2 \int \mathbf H \cdot \mathbf B \ d^3 x = \frac 1 {2 \mu} \int B^2 \ d^3 x$$ where the magnetic energy density is written in the form $$\tag{1}\label{1}u = \frac 1 2 \mathbf H \cdot \mathbf B = \frac {B^2} {2 \mu} $$ To derive \ref{1}, we made use of the macroscopic form of Maxwell's equations. In particular, we are using for the fourth equation the form (neglecting the displacement current): $$\nabla \times \mathbf H = \mathbf J_f$$ where $\mathbf J_f$ is the free (macroscopic) current. This means that \ref{1} represents the work done on the free currents when establishing the steady-state distribution of currents and fields.
It would also be possible to use the microscopic form of Maxwell's equations, in particular $$\nabla \times \mathbf B = \mu_0 \mathbf J$$ where $\mathbf J$ is the total current, i.e. the sum of free and bound currents: $$\mathbf J = \mathbf J_f + \mathbf J_b$$ In this case, there is no $\mathbf H$ vector and the magnetic energy density is (c) $$\tag{2}\label{2}u' = \frac{B^2}{ 2 \mu_0}$$ This represents the work done on every current when establishing the magnetic field, including the bound currents, i.e. $$\begin{split}u & = u_f\\u'& = u_f + u_b\\\end{split}$$ This means that the energy required to establish the bound currents can be calculated for a linear material as $$\tag{3}\label{3}u_b = u'-u = \frac{B^2}2 \left( \frac 1 {\mu_0} - \frac 1 {\mu} \right) = \frac{B^2} 2 \left( \frac{\mu-\mu_0}{\mu \mu_0} \right)$$ Since (for a linear material) we have $$\mathbf M = \left( \frac{\mu-\mu_0}{\mu \mu_0} \right) \mathbf B$$ where $\mathbf M$ is the magnetization, we can write \ref{3} as $$\tag{4}\label{4}u_b = \frac 1 2 \mathbf M \cdot \mathbf B$$ And indeed this expression is valid for every material, not just for linear ones (d). As for the expression $$\int \mathbf H \cdot \mathbf B \ d^3 x = 0$$ in Jackson's book the full text says: A magnetostatic field is due entirely to a localized distribution of permanent magnetization. Show that $$ \int \mathbf H \cdot \mathbf B\ d^3 x = 0 $$ provided the integral is taken over all space. So this is a statement that (in the presence of linear materials) the work done on free currents when establishing a magnetostatic field is 0. I suspect this to be more general, i.e. valid for any kind of relation between $\mathbf H$ and $\mathbf B$, but at the moment I cannot prove it. (a) J.D. Jackson, Classical Electrodynamics (1962) 6.2 (b) I adopt the nomenclature in which $\mathbf H$ is the magnetic field and $\mathbf B$ the magnetic-flux density (or magnetic induction), which is the one used by Jackson. (c) D.J.
Griffiths, Introduction to Electrodynamics, 3rd ed. (1999), 7.2.4 and 8.1.2. It is especially interesting to read the footnote at page 348. (d) B.D. Popovic, Evaluation of magnetic energy density in magnetised matter, Proc. IEE, Vol. 113, No. 7, July 1966.
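Equations (3) and (4) can be cross-checked symbolically. The snippet below is my own verification (using sympy, which is an assumption about available tooling): it confirms that, for a linear material, the two expressions for the bound-current energy density coincide identically:

```python
import sympy as sp

B, mu, mu0 = sp.symbols('B mu mu_0', positive=True)

# Eq. (3): u_b = u' - u = (B^2/2) * (mu - mu_0) / (mu * mu_0)
u_b_difference = B**2 / 2 * (mu - mu0) / (mu * mu0)

# Eq. (4): u_b = (1/2) M.B, with the linear-material magnetization
# M = ((mu - mu_0) / (mu * mu_0)) * B
M = (mu - mu0) / (mu * mu0) * B
u_b_magnetization = sp.Rational(1, 2) * M * B

# The difference simplifies to zero, so (3) and (4) agree term by term.
print(sp.simplify(u_b_difference - u_b_magnetization))
```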
So I have a wave function in spherical polars: $$\psi=\left(r\sin(\theta)e^{i\phi}\right)^l\exp\left({\frac{-r}{(l+1)a}}\right)\;,$$ and I have to show $$E[r^k]=\frac{\int_0^\infty r^{2(l+1)+k}\exp\left({\frac{-2r}{(l+1)a}}\right)dr}{\int_0^\infty r^{2(l+1)}\exp\left({\frac{-2r}{(l+1)a}}\right)dr}\;.$$ I know the formula for the expectation but can't seem to get the required form, any help will be appreciated! [This question was closed as off-topic (homework-like) by Emilio Pisanty, sammy gerbil, Kyle Kanos, JMac and Qmechanic♦ on Dec 22 '17.] The inner product of two wavefunctions is defined as: $$(\phi,\psi)=\int d^3\textbf x\,\phi^*(\textbf x)\psi(\textbf x)$$ Given an operator $\hat O$ and a wavefunction $\psi$, the expectation value of the operator is defined as: $$\mathbb{E}(\hat O)=\frac{(\psi,\hat O \psi)}{(\psi,\psi)}$$ In your case $\hat O=r^k$ and the wavefunction is given in spherical coordinates: $$\psi(r,\theta,\varphi)=\sin^l(\theta)e^{i\varphi l} r^l\exp\left({\frac{-r}{(l+1)a}}\right)$$ Now you have to write down the numerator and denominator as integrals and convert the triple integral to spherical polars. You will notice that the angular parts of the two integrals are the same in the numerator and the denominator, and therefore cancel.
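As a sanity check (my own, with sympy and sample values of $l$ and $k$), the ratio of the two radial integrals reproduces the closed form $E[r^k] = \left(\frac{(l+1)a}{2}\right)^k \frac{(2l+2+k)!}{(2l+2)!}$ that follows from the Gamma-function identity $\int_0^\infty r^n e^{-r/b}\,dr = n!\,b^{n+1}$:

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)
l, k = 1, 2   # sample quantum number and moment order (any non-negative integers work)

# Numerator and denominator of E[r^k] as given in the question.
num = sp.integrate(r**(2*(l+1) + k) * sp.exp(-2*r / ((l+1)*a)), (r, 0, sp.oo))
den = sp.integrate(r**(2*(l+1)) * sp.exp(-2*r / ((l+1)*a)), (r, 0, sp.oo))
expectation = sp.simplify(num / den)

# Closed form via the Gamma-function identity above.
closed_form = (sp.Integer(l + 1) * a / 2)**k \
    * sp.factorial(2*l + 2 + k) / sp.factorial(2*l + 2)

print(expectation)                             # 30*a**2 for l = 1, k = 2
print(sp.simplify(expectation - closed_form))  # 0
```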
It actually is possible (*) to find a purely algebraic way to analyze such problems mathematically, without using your hands or equivalent devices. The method is actually quite closely related to ggcg's answer, though it will look quite different. And it goes beyond that by actually dispensing with the cross product entirely, and replacing it with what we call the "wedge" or "exterior" product. Rather than taking two vectors and getting one other vector out of it, as in $\vec{a} \times \vec{b}$, when you take the wedge product you get a plane out: the wedge product $\vec{a} \wedge \vec{b}$ represents the plane spanned by $\vec{a}$ and $\vec{b}$, which is orthogonal to the vector $\vec{a} \times \vec{b}$. Moreover, there is also a "sense" of this plane, which substitutes for the right-hand rule, and is related to the order in which you take the product: $\vec{b} \wedge \vec{a}$ represents the same exact plane, except with the opposite "sense", just as $\vec{b} \times \vec{a}$ represents the same vector in the opposite direction. You can rewrite expressions for things like the Lorentz force law and rotations (and everything else in physics that uses a cross product) with the wedge product, and it all just works out beautifully. But in this case, rather than using the right-hand rule, you just need to decide on an order for your basis vectors: do you order them as $(\vec{x}, \vec{y}, \vec{z})$, or as $(\vec{x}, \vec{z}, \vec{y})$, for example? Making a choice for that ordering allows you to completely specify everything you need, expand in the basis, and compute the wedge product correctly without ever referring to handedness. (**) This approach is just one small part of something called "geometric algebra". One nice feature of this approach is that it actually generalizes to other dimensions. In two dimensions, you get a nicer understanding of complex algebra. In four dimensions, you can use the same exact techniques to do special relativity more easily. 
It's actually just a coincidence that the cross product even works in three dimensions; if you want something that will work in three dimensions and any other dimension, you actually have to go to the wedge product. Geometric algebra is actually a really great pedagogical approach, and we would all be better off if everyone would just use it, and ditch cross products forever. Unfortunately, that's not going to happen while you're a student. So while I encourage you to learn geometric algebra, and I would bet that you'd really get better at physics if you do, remember that you'll still probably be taught and asked to use cross products. (*) I just need to dispense some practical advice here. It sounds like you're a relatively new (and probably talented) student of physics. Physicists (including teachers) are all very familiar with the various problems with the right-hand rule. And it is unfortunate. I wish that it weren't so, but as a purely practical matter, if you stick with physics you'll need to keep interacting with the cross product for at least another couple years. So my advice is to stick with it, and become really good at using it correctly. It's not that hard, and maybe it'll exercise your brain in ways that come in handy. (**) It's still true that you might eventually need to relate your particular choice of directions to someone else's idea of the "correct" orientation. Basically, you would need to make your choice of ordering consistent with their choice, which would probably be based on the right-hand rule. However, unless a question specifically asks for that orientation, you could very well use a left-handed orientation and still get correct answers (e.g., force going in or out of the page, etc.) using the wedge product.
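A minimal sketch of what computing a wedge product looks like in components. The convention below (returning the coefficients of the basis 2-blades $x\wedge y$, $y\wedge z$, $z\wedge x$) is one common choice, not the only one:

```python
# Wedge product of two 3D vectors, returned as the three bivector components
# (a_x b_y - a_y b_x, a_y b_z - a_z b_y, a_z b_x - a_x b_z).
def wedge(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ax*by - ay*bx,   # x^y component (the xy-plane)
            ay*bz - az*by,   # y^z component (the yz-plane)
            az*bx - ax*bz)   # z^x component (the zx-plane)

a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
print(wedge(a, b))   # (1.0, 0.0, 0.0): the xy-plane with positive sense
print(wedge(b, a))   # (-1.0, 0.0, 0.0): same plane, opposite sense (antisymmetry)
```

Note that no handedness enters anywhere: the sign comes purely from the chosen ordering of the basis 2-blades and the order of the factors.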
Output: $$P(q) = \frac{\text{scale}}{V_{cs}} \int_{0}^{\pi}\int_{0}^{2\pi} f^2(q,\theta_Q,\phi_Q) \sin(\theta_Q)\, d\theta_Q\, d\phi_Q + \text{background}$$ where $$f(q,\theta_Q,\phi_Q) = ( \rho_{c}-\rho_{sh} ) \prod_{j=1}^3 \left[ 2 \cdot \frac{L}{2}\, \mathrm{sinc}\!\left(Q_j \frac{L}{2}\right) \right] + ( \rho_{sh}-\rho_{solv} ) \prod_{j=1}^3 \left[ 2 \left(\frac{L}{2}+d\right) \mathrm{sinc}\!\left(Q_j \left(\frac{L}{2}+d\right)\right) \right]$$ with $\mathrm{sinc}(x) = \sin(x)/x$ and $$Q_1 = Q \sin(\theta_Q) \cos(\phi_Q), \quad Q_2 = Q \sin(\theta_Q) \sin(\phi_Q), \quad Q_3 = Q \cos(\theta_Q)$$ Parameters: scale = scaling factor, volume fraction of particles; scale $\phi \sim N V_{cs} / V_{irr}$, with $N / V_{irr}$ being the number density of particles in the irradiated volume; background = constant background; $L$ = length of the cuboid; $d$ = shell thickness; $\rho_{c}$ = scattering length density of the core; $\rho_{sh}$ = scattering length density of the shell; $\rho_{solv}$ = scattering length density of the solvent; $V_{cs}$ = volume of the core-shell cuboid. Intensity is very similar to the faster core-shell sphere model, especially when using polydispersity. The model can easily be generalized to anisotropic lengths. Created By p3scmr Uploaded Aug. 2, 2019, 6:27 p.m. Category Parallelepiped Verified: this model has not been verified by a member of the SasView team. In Library: this model is not currently included in the SasView library; you must download the files and install it yourself. Files core_shell_cuboid.c core_shell_cuboid.py
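A rough Python sketch of the orientation average above (not the actual core_shell_cuboid.py from the listing). It omits the scale/$V_{cs}$ prefactor and the background term, uses illustrative parameter values, and defines its own sinc because numpy.sinc is the normalized $\sin(\pi x)/(\pi x)$:

```python
import numpy as np

def sinc(x):
    # unnormalized sinc: sin(x)/x (numpy.sinc is sin(pi x)/(pi x))
    return np.sinc(x / np.pi)

def f(q, theta, phi, L, d, rho_c, rho_sh, rho_solv):
    # scattering amplitude of the core-shell cuboid for one orientation
    Q = q * np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
    core  = np.prod(2 * (L / 2) * sinc(Q * L / 2))
    whole = np.prod(2 * (L / 2 + d) * sinc(Q * (L / 2 + d)))
    return (rho_c - rho_sh) * core + (rho_sh - rho_solv) * whole

def P(q, L=100.0, d=10.0, rho_c=1.0, rho_sh=2.0, rho_solv=0.0, n=60):
    # midpoint-rule orientation average of f^2 over the unit sphere
    thetas = (np.arange(n) + 0.5) * np.pi / n
    phis = (np.arange(2 * n) + 0.5) * 2 * np.pi / (2 * n)
    acc = sum(f(q, t, p, L, d, rho_c, rho_sh, rho_solv)**2 * np.sin(t)
              for t in thetas for p in phis)
    return acc * (np.pi / n) * (2 * np.pi / (2 * n))
```

At $q \to 0$ every sinc factor tends to 1, so the amplitude reduces to $(\rho_c-\rho_{sh})L^3 + (\rho_{sh}-\rho_{solv})(L+2d)^3$ for every orientation, which gives a simple limit to check the average against.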
A system can be defined by a model, either a real (scale) model or a virtual (mathematical) model. A dynamic system, for example, is described by differential equations. The solution of the differential equation shows how the variables of the system depend on time. Let's take as example the translational mass with spring-damper. The equation of motion of the body mass is (for a complete explanation of how to derive the equation, read the article Mechanical systems modeling using Newton's and D'Alembert equations):\[{m \cdot \frac{d^2x}{dt^2} + c \cdot \frac{dx}{dt} + k \cdot x = F} \tag{1}\] where: m [kg] – mass k [N/m] – spring constant (stiffness) c [N⋅s/m] – damping coefficient F [N] – external force acting on the body x [m] – displacement of the body Free response of a system The free response of a system is the solution of the differential equation describing the system when the input is zero. In our case the input into the system is the force F [N]. Therefore, in order to verify the free response of the system, we have to solve the differential equation:\[{m \cdot \frac{d^2x}{dt^2} + c \cdot \frac{dx}{dt} + k \cdot x = 0} \tag{2}\] The easiest way to do this is to integrate the differential equation in Xcos. For a complete explanation of how to handle it, read the article How to solve (integrate) a differential equation in Xcos. Equation (2) is for the system in equilibrium. This means that, if we solve it in Xcos, the output of the system (displacement x [m] of the body) will be zero. To be able to see the free response, we need an initial condition. For our example we are going to set the initial conditions:\[ \begin{split} x(0) &= 0.4\\ v(0) &= 0 \end{split} \] which translates into: at time t = 0 (when the simulation starts), the position of the mass is 0.4 m to the right and the speed is 0 m/s. The free response of the mass spring-damper system will be the variation in time of the displacement x [m].
To integrate equation (2), we have to write it in the following form:\[\frac{d^2x}{dt^2} = \frac{1}{m} \cdot \left ( - c \cdot \frac{dx}{dt} - k \cdot x \right ) \tag{3}\] Equation (3) is modelled and integrated in Xcos (see block diagram below). The parameters of the model are defined directly in the blocks of the Xcos diagram. The simulation is run for 50 s and the Clock block samples data at 0.01 s. Notice that the initial condition of the first integrator block (speed) is set at 0 m/s and that of the second integrator block (position/displacement) at 0.4 m. This means that the initial value of the speed will be 0 m/s and the initial position will be 0.4 m. Running the Xcos block diagram will output the following plot: At time t = 0, the position x [m] of the mass is 0.4 m. After being released, it begins to oscillate around the equilibrium value, 0 m, with smaller and smaller amplitudes. Due to the damping coefficient, after a while it will stabilise at 0 m (the equilibrium position). In general, a free-response system is described by a homogeneous differential equation of the form:\[a_n \cdot \frac{d^nx}{dt^n} + \dots + a_1 \cdot \frac{dx}{dt} + a_0 \cdot x = 0 \tag{4}\] The solution x(t) of equation (4) depends only on the n initial conditions. Forced response of a system The forced response of a system is the solution of the differential equation describing the system, taking into account the impact of the input. In our case the input is the force F [N]. Therefore, in order to verify the forced response of the system, we have to solve the differential equation:\[{m \cdot \frac{d^2x}{dt^2} + c \cdot \frac{dx}{dt} + k \cdot x = F(t)} \tag{5}\] To visualise the forced response of a system, all the initial conditions must be zero:\[ \begin{split} x(0) &= 0\\ v(0) &= 0 \end{split} \] which translates into: at time t = 0 (when the simulation starts), the position of the mass is 0 m and the speed is 0 m/s. At time t = 10 s, the input force will become 0.5 N and it will pull the mass to the right. The forced response of the mass spring-damper system will be the variation in time of the displacement x [m]. To integrate equation (5), we have to write it in the following form:\[\frac{d^2x}{dt^2} = \frac{1}{m} \cdot \left (F - c \cdot \frac{dx}{dt} - k \cdot x \right ) \tag{6}\] We can reuse the Xcos model from the free response, the only differences being that there is an input step force added and the initial condition of the position is 0 m. The input step force takes the value 0.5 N when the simulation time reaches 10 s. Running the Xcos block diagram will output the following plot: In this case, with an input force of 0.5 N, the equilibrium position of the system is around 0.25 m. We can notice how the position oscillates around the equilibrium point, settling down in time due to the damping coefficient. Total response of a system Combining the free and forced responses of a system gives the total response of the system. The total response of a system is the solution of the differential equation with an input and initial conditions different from zero. Therefore, in order to verify the total response of the system, we have to solve the differential equation (same as the forced response):\[{m \cdot \frac{d^2x}{dt^2} + c \cdot \frac{dx}{dt} + k \cdot x = F(t)} \tag{7}\] with the initial conditions (same as the free response):\[ \begin{split} x(0) &= 0.4\\ v(0) &= 0 \end{split} \] which translates into: at time t = 0 (when the simulation starts), the position of the mass is 0.4 m and the speed is 0 m/s. At time t = 10 s, the input force will become 0.5 N and it will pull the mass to the right. The total response of the mass spring-damper system will be the variation in time of the displacement x [m]. The Xcos block diagram is the same as in the forced response case, the only difference being the initial condition of the position (0.4 m). The input step force takes the value 0.5 N when the simulation time reaches 10 s.
Running the Xcos block diagram will output the following plot: Notice how the mass is released from the initial position (0.4 m) and tries to settle around the equilibrium position 0 m. When the simulation time reaches 10 s, the input force becomes 0.5 N and pulls the mass towards a new equilibrium point, 0.25 m. The free and forced responses of a system are useful for understanding the dynamic behaviour of a system (plant). Further, the parameters and time responses of the plant will be useful when designing a control system for a particular application.
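The same total response can be cross-checked outside Xcos. The sketch below integrates the equation of motion with scipy, using assumed parameters m = 1 kg, c = 0.5 N·s/m and k = 2 N/m, chosen so that F/k = 0.25 m matches the equilibrium described above (the article does not list its numeric values):

```python
# Total response: initial condition x(0) = 0.4 m, v(0) = 0 m/s, and a
# step force of 0.5 N applied at t = 10 s. Parameters m, c, k are assumed.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.5, 2.0   # assumed values, not from the article

def F(t):
    return 0.5 if t >= 10.0 else 0.0   # step input force [N]

def rhs(t, y):
    x, v = y
    return [v, (F(t) - c * v - k * x) / m]   # Eq. rearranged for d2x/dt2

sol = solve_ivp(rhs, (0.0, 50.0), [0.4, 0.0], max_step=0.01)
print(sol.y[0, -1])   # after the transients die out, near F/k = 0.25 m
```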
Here we will cover the rules which we use for differentiating most types of function. The derivative of a function of the form $y=a$ (where $a$ is a constant) is zero: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=0\] Note: This is intuitive as a constant function is a horizontal line which has a slope of zero. To differentiate any function of the form: \[y=ax^n\] where $a$ and $n$ are constants, we take the power $n$, bring it in front of the function, and then reduce the power by $1$: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=n\times ax^{n-1}\] Differentiate the function $y=x^4$. \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=4\times x^{(4-1)}\\ &=4x^3 \end{align} Differentiate the function $y=10x$. \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=10\times1\times x^{1-1}\\ &=10x^0\\ &=10 \end{align} Differentiate the function $y=2x^{~\frac{1}{4}~}$. \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=2\times\left(\frac{1}{4}\right)\times x^{\left(\frac{1}{4}-1\right)}\\ &=\frac{1}{2}\times x^{\left(\frac{1}{4}-1\right)}\\ &=\frac{1}{2}\times x^{-\frac{3}{4}~}\\ &=\dfrac{1}{2 \sqrt[4]{x^3}~} \end{align} See Powers and Roots to remind yourself why $\frac{1}{2}x^{-\frac{3}{4}~}=\dfrac{1}{2 \sqrt[4]{x^3}~}$. Differentiate the function $f(x)=3x^{-3}$. \begin{align} f'(x)&=(-3)\times 3x^{-3-1}\\ &=-9x^{-4}\\ &=-\dfrac{9}{x^4}\\ \end{align} To differentiate a sum (or difference) of terms, differentiate each term separately and add (or subtract) the derivatives. Differentiate the function $y=10x^3+2x$. First differentiate the first term in the sum, $10x^3$, using the power rule: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=30x^2\] Now differentiate the second term, $2x$, using the power rule again: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=2\] Adding these derivatives together gives us the derivative of $y=10x^3+2x$: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=30x^2+2\] Differentiate the function $g(x)=x^4-3x^{-3}$. Let $m(x)=x^4$ and $n(x)=3x^{-3}$.
We have already found the derivatives of these two functions. They are $m'(x)=4x^3$ and $n'(x)=-9x^{-4}$ respectively. The derivative of the function $g(x)=x^4-3x^{-3}$ is therefore the difference of these two derivatives: \begin{align} g'(x)&=4x^3-\left(-\dfrac{9}{x^4}\right)\\ &=4x^3+\dfrac{9}{x^4} \end{align} Differentiate the function $y=4x^5+x^2-10x^{-2}-3$. The derivative of $4x^5$ is $20x^4$, the derivative of $x^2$ is $2x$, the derivative of $10x^{-2}$ is $-20x^{-3}$, and the derivative of the constant $3$ is $0$. The derivative of $y=4x^5+x^2-10x^{-2}-3$ is therefore: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=20x^4+2x-\left(-20x^{-3}\right)-0\\ &=20x^4+2x+20x^{-3} \end{align} This rule is used to differentiate a function of another function, $y=f(g(x))$. To differentiate $y=f(g(x))$, let $u=g(x)$ so that we have $y$ as a function of $u$, $y=f(u)$. Then the chain rule says: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=\dfrac{\mathrm{d} y}{\mathrm{d} u}\times \dfrac{\mathrm{d} u}{\mathrm{d} x}\] Once you have worked this out, you replace $u$ by $g(x)$ and your answer is now in terms of $x$. Differentiate the function $y=(3x+x^4)^2$. The first step is to set $u=g(x)$, where $g(x)=3x+x^4$, and differentiate $u$ with respect to $x$: \[\dfrac{\mathrm{d} u}{\mathrm{d} x}=3+4x^3\] The next step is to differentiate $y$ with respect to $u$. Rewriting $y$ in terms of $u$ gives: \[y=u^2\] and so \[\dfrac{\mathrm{d} y}{\mathrm{d} u}=2u\] Next, using the chain rule, we have: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=\dfrac{\mathrm{d} y}{\mathrm{d} u}\times \dfrac{\mathrm{d} u}{\mathrm{d} x}\\ &=2u\times (3+4x^3) \end{align} The final step is to substitute $u=3x+x^4$: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=2u\times (3+4x^3)\\ &=2(3x+x^4)(3+4x^3) \end{align} After multiplying out the brackets and cancelling where possible, this simplifies to: \[2x(4x^6+15x^3+9)\] Recall that the exponential function is $f(x)= e^x$.
The derivative of this function is the same as the function itself: \[\dfrac{\mathrm{d} f}{\mathrm{d} x}= e^x\] If the power to which $e$ is raised is a function of $x$, $g(x)$ say, we have $f(x)= e^{g(x)}$ and: \[\dfrac{\mathrm{d} f}{\mathrm{d} x}=g'(x) e^{g(x)}\] where $g'(x)$ denotes the derivative of the function $g(x)$. Differentiate the function $f(h)= e^{h^2}$. Here the power to which $e$ is raised is a function so we have: \begin{align} \dfrac{\mathrm{d} f}{\mathrm{d} h}&=2h\times e^{h^2}\\ &=2h e^{h^2} \end{align} We use the product rule to differentiate a function $y=uv$ which is a product of two functions of $x$, $u$ and $v$. The product rule says the derivative of $y$ is: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=u\dfrac{\mathrm{d} v}{\mathrm{d} x}+v\dfrac{\mathrm{d} u}{\mathrm{d} x}\] In words this says that "the derivative of a product of two functions is the derivative of the first, times the second, plus the first times the derivative of the second." Differentiate the function $y=(1+2x^2)(3x+x^4)$. Here $u=1+2x^2$ and $v=3x+x^4$. The derivative of $u$ with respect to $x$ is: \[\dfrac{\mathrm{d} u}{\mathrm{d} x}=4x\] and the derivative of $v$ with respect to $x$ is: \[\dfrac{\mathrm{d} v}{\mathrm{d} x}=3+4x^3\] Using the product rule, we have: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=u\dfrac{\mathrm{d} v}{\mathrm{d} x}+v\dfrac{\mathrm{d} u}{\mathrm{d} x}\\ &=(1+2x^2)(3+4x^3)+(3x+x^4)4x \end{align} After multiplying out the brackets and cancelling where possible, this simplifies to: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=12x^5+4x^3+18x^2+3\] Differentiate the function $y=(x^2+3x+6)(x^3-5x^2+2x-1)$. Here $u=x^2+3x+6$ and $v=x^3-5x^2+2x-1$.
The derivative of $u$ with respect to $x$ is: \[\dfrac{\mathrm{d} u}{\mathrm{d} x}=2x+3\] and the derivative of $v$ with respect to $x$ is: \[\dfrac{\mathrm{d} v}{\mathrm{d} x}=3x^2-10x+2\] Using the product rule, we have: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=u\dfrac{\mathrm{d} v}{\mathrm{d} x}+v\dfrac{\mathrm{d} u}{\mathrm{d} x}\\ &=(x^2+3x+6)(3x^2-10x+2)+(x^3-5x^2+2x-1)(2x+3) \end{align} After multiplying out the brackets and cancelling where possible, this simplifies to: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=5x^4-8x^3-21x^2-50x+9\] We use the quotient rule to differentiate a function $y=\dfrac{u}{v}$ which is a quotient of two functions of $x$, $u$ and $v$. The quotient rule says the derivative of $y$ is: \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=\dfrac{v\dfrac{\mathrm{d} u}{\mathrm{d} x}-u\dfrac{\mathrm{d} v}{\mathrm{d} x}}{v^2}\] In words, this says that the derivative of a quotient is "the derivative of the numerator times the denominator, minus the numerator times the derivative of the denominator, all divided by the denominator squared". Differentiate the function $y=\dfrac{x^2+3}{x^3-2x+1}$. Here $u=x^2+3$ and $v=x^3-2x+1$. The derivative of $u$ with respect to $x$ is: \[\dfrac{\mathrm{d} u}{\mathrm{d} x}=2x\] and the derivative of $v$ with respect to $x$ is: \[\dfrac{\mathrm{d} v}{\mathrm{d} x}=3x^2-2\] Using the quotient rule, we have: \begin{align} \dfrac{\mathrm{d} y}{\mathrm{d} x}&=\dfrac{v\dfrac{\mathrm{d} u}{\mathrm{d} x}-u\dfrac{\mathrm{d} v}{\mathrm{d} x}~}{v^2}\\ &=\dfrac{(x^3-2x+1)2x-(x^2+3)(3x^2-2)}{(x^3-2x+1)^2} \end{align} After multiplying out the brackets in the numerator and cancelling where possible, this simplifies to \[\dfrac{\mathrm{d} y}{\mathrm{d} x}=\dfrac{-x^4-11x^2+2x+6}{(x^3-2x+1)^2}\]
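Worked answers like these are easy to verify with a computer algebra system. For example, the quotient-rule result can be checked with sympy (an outside aid, not part of the original notes):

```python
# Verify that d/dx of (x^2+3)/(x^3-2x+1) matches the hand-computed result.
import sympy as sp

x = sp.symbols('x')
y = (x**2 + 3) / (x**3 - 2*x + 1)
dy = sp.diff(y, x)
target = (-x**4 - 11*x**2 + 2*x + 6) / (x**3 - 2*x + 1)**2
print(sp.simplify(dy - target))   # 0 when the two expressions agree
```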
Asymptotics of Robin Eigenvalues via Dirichlet-to-Neumann Analysis by Robin Lang (University of Stuttgart, Germany) We study the asymptotic behaviour of the eigenvalues of the Laplace operator equipped with a complex Robin boundary condition depending on the Robin parameter (or coupling constant) \( \alpha \in \mathbb{C} \). Of principal interest is the case when \(\alpha\) diverges in the complex plane. To get intuition we first consider several model cases such as intervals, star-shaped compact quantum graphs, and \(n\)-dimensional balls. Since the Robin parameter is complex, the variational techniques used extensively to study the self-adjoint case of real \(\alpha\) fail here. However, by characterising the eigenvalues of the Robin problem in terms of a meromorphic family of Dirichlet-to-Neumann operators depending on the spectral parameter \(\lambda\), we can use properties of the eigenvalues of the latter for divergent \(\alpha\) to prove some abstract results for Robin eigenvalues on Lipschitz domains and star-shaped quantum graphs. The talk is based on joint work with Sabine Bögli (Ludwig Maximilian University of Munich) and James Kennedy (University of Lisbon).
Second Order Systems concept Second order systems will be the main focus of our analyses in Control theory. They're complex enough to exhibit several interesting traits of real control systems (which are often higher order) but are simple enough that we can create and use quite a few math tools to make using them so much simpler. Even when dealing with higher order systems we'll often pretend that they're a second order system and keep in mind that we'll be just a little bit wrong. In this topic we'll cover what a second order system is, what it looks like, and a few of its properties. In the next topic we'll cover analysing second order systems like we did with first order systems before. fact A second order system has the transfer function: $$ H(s) = \frac{\omega_n^2}{s^2 + 2 \zeta \omega_n s + \omega_n^2} $$ Where \(\omega_n\) is called the natural frequency and \(\zeta\) is the damping ratio (both defined below) So a second order system can be completely described with just two numbers, the natural frequency \(\omega_n\) and the damping ratio \(\zeta\). The information in this topic is probably something you've seen before (or at least most of it) when learning differential equations. Here the natural frequency and damping ratio mean the same as they did in first year physics classes on oscillations. Simply put, the natural frequency is the frequency the system would operate at without any friction and the like, while the damping ratio tells you something about how much "friction" is present. Having a large damping ratio tends to create a sluggish system but prevents a lot of oscillating back and forth (called "ringing"). fact The damping ratio \(\zeta\) is divided into three categories: \(\zeta = 1\): critically damped \(\zeta \gt 1\): overdamped \(0 \lt \zeta \lt 1\): underdamped Whether the system is critically damped, overdamped or underdamped determines the general shape of the system's output when given a step input.
As such, classifying systems by their damping ratio proves very useful and makes communicating about a system much quicker and simpler. When underdamped, the system's closed loop poles are complex conjugates and the transient response is oscillatory, but will die down over time. If \(\zeta = 0\) (the undamped case) the oscillations will not die down but in fact continue forever like a sine wave. When overdamped the system exhibits no oscillations whatsoever; the output climbs towards its steady state value but never reaches it. Overdamped systems are slower than critically damped systems but have the benefit of never overshooting their mark (this might be useful for something like a cruise control system where the user doesn't want to unintentionally speed). A critically damped system also has no oscillations but it reaches its final value faster than any overdamped system. However if you attempt to design a critically damped system you may find that component tolerances leave you with an underdamped system which does oscillate; this may or may not be satisfactory for your system. fact If a second order system is given in the general form: $$ H(s) = \frac{K}{Js^2 + Bs + K} $$ Then \(\frac{K}{J} = \omega_n^2\) and \(\zeta = \frac{B}{B_c}\) where \(B_c = 2 \sqrt{JK}\) is called the "critical damping coefficient" practice problems
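The three damping regimes can be sketched numerically; the block below computes step responses with scipy.signal, assuming an illustrative natural frequency of 1 rad/s (not a value from the notes):

```python
# Step responses of a second order system for under-, critically and
# overdamped cases. wn = 1 rad/s is an assumed illustrative value.
import numpy as np
from scipy import signal

wn = 1.0
t = np.linspace(0, 20, 2000)
responses = {}
for zeta in (0.2, 1.0, 2.0):   # underdamped, critically damped, overdamped
    sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    _, y = signal.step(sys, T=t)
    responses[zeta] = y

print(max(responses[0.2]))   # exceeds 1: the underdamped response overshoots
print(max(responses[2.0]))   # stays at or below 1: no overshoot when overdamped
```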
I'm a newbie in physics. Sorry if the following questions are dumb. I began reading "Mechanics" by Landau and Lifshitz recently and hit a few roadblocks right away. Proving that a free particle moves with a constant velocity in an inertial frame of reference ($\S$3. Galileo's relativity principle). The proof begins with explaining that the Lagrangian must only depend on the speed of the particle ($v^2={\bf v}^2$): $$L=L(v^2).$$ Hence Lagrange's equations will be $$\frac{d}{dt}\left(\frac{\partial L}{\partial {\bf v}}\right)=0,$$ so $$\frac{\partial L}{\partial {\bf v}}=\text{constant}.$$ And this is where the authors say Since $\partial L/\partial \bf v$ is a function of the velocity only, it follows that $${\bf v}=\text{constant}.$$ Why so? I can put $L=\|{\bf v}\|=\sqrt{v^2_x+v^2_y+v^2_z}$. Then $$\frac{\partial L}{\partial {\bf v}}=\frac{1}{\sqrt{v^2_x+v^2_y+v^2_z}}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix},$$ which will remain the constant vector $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$ as the particle moves with an arbitrary non-constant positive $v_x$ and $v_y=v_z=0$. Where am I wrong here? If I am, how does one prove the quoted statement? Proving that $L=\frac{m v^2}{2}$ ($\S$4. The Lagrangian for a free particle). The authors consider an inertial frame of reference $K$ moving with a velocity ${\bf\epsilon}$ relative to another frame of reference $K'$, so ${\bf v'=v+\epsilon}$. Here is what troubles me: Since the equations of motion must have the same form in every frame, the Lagrangian $L(v^2)$ must be converted by this transformation into a function $L'$ which differs from $L(v^2)$, if at all, only by the total time derivative of a function of coordinates and time (see the end of $\S$2). First of all, what does "same form" mean? I think the equations should be the same, but if I'm right, why wouldn't the authors write so? Second, it was shown in $\S$2 that adding a total derivative will not change the equations.
There was nothing about total time derivatives of functions of coordinates and time being the only additions that do not change the equations (or their form, whatever that means). Where am I wrong now? If I'm not, how does one prove the quoted statement, and why haven't the authors done it? P. S. Could you recommend any textbooks on analytical mechanics? I'm not very excited about this one. It seems too hard for me.
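The counterexample in the question can be checked symbolically: for $L=\|\mathbf v\|$, the momentum $\partial L/\partial \mathbf v$ works out to the unit vector $\mathbf v/\|\mathbf v\|$, which indeed stays constant along the proposed motion even though the speed does not (sympy here is an outside aid, not part of the question):

```python
# For L = |v|, compute dL/dv and evaluate it along the motion
# v = (v_x, 0, 0) with v_x > 0: the result is a fixed unit vector.
import sympy as sp

vx = sp.symbols('v_x', positive=True)
vy, vz = sp.symbols('v_y v_z', real=True)
L = sp.sqrt(vx**2 + vy**2 + vz**2)
grad = [sp.diff(L, v) for v in (vx, vy, vz)]
on_line = [sp.simplify(g.subs({vy: 0, vz: 0})) for g in grad]
print(on_line)   # a constant vector, independent of the value of v_x
```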
Possibly related, though of a different flavour. Background In most of the precalculus texts with which I am familiar, readers/students are given a crash course in set theory, handed the definition of a relation, then told that a function is a special kind of relation that associates only one element of the codomain to each element of the domain. For example, a relevant screenshot from Sullivan's Precalculus: This is, of course, entirely correct. A function is defined by its domain, its codomain, and how it associates elements of the domain with elements of the codomain. The problem I have with this standard approach is that students are often asked to answer questions of the type Let $f(x) := \sqrt{x+3}$. What are the domain and range of $f$? It seems that students are meant to implicitly understand that $f$ is a real-valued function of a real variable. However, as neither is actually specified, there are two reasonable answers: The domain is $\mathbb{R}$. In this case, the codomain is $\mathbb{C}$, at which point the question of the range becomes quite difficult, as I imagine we don't want to talk about branches of complex functions in a precalculus class. Unfortunately, most of my college freshman precalculus students have been exposed (if only briefly) to complex numbers in their high school algebra classes, so they often want to argue that $\sqrt{-4} = -2i$, and therefore there is no issue with negative real numbers. The domain is $[-3,\infty)$, in which case I would argue that the question is ill-posed, and might better be written Let $f(x) := \sqrt{x+3}$. What is the largest set of real numbers on which this formula defines a function with codomain $\mathbb{R}$? What is the range of this function? This seems reasonable, but there is something about it that just "tastes" off to me. I can't really put my finger on the discomfort, but I feel like this approach causes some confusion when we describe restricting domains in order to define inverse functions.
I am also concerned that this approach elides the importance of specifying the domain before defining a function---a function without a specified domain doesn't even make sense, so what is the notation $f(x)$ meant to represent? My Solution Instead of taking the traditional route, I am considering the introduction of a slightly modified definition of a function: A function $f : X \to Y$ is a relation that associates to each element of $X$ at most one element of $Y$. The set $X$ is called the natural domain, and the set $Y$ is called the natural range (or codomain). If $x\in X$ and there is some $y\in Y$ such that $f$ associates $y$ to $x$, then we write $x\mapsto y$ and say that $y$ is the image of $f$ at $x$. The set of all $x\in X$ such that $x\mapsto y$ for some $y\in Y$ is called the domain of $f$, and the set of all $y$ such that $x \mapsto y$ for some $x\in X$ is called the range of $f$. Here the question of the domain and range of $f(x) := \sqrt{x+3}$ becomes straightforward: Define a function $f : \mathbb{R} \to \mathbb{R}$ by $f(x) := \sqrt{x+3}$. What are the domain and range of $f$? Additionally, we continue to emphasize the fact that the collection of possible inputs and outputs is specified in advance, thus dealing with the potential observation that imaginary numbers exist---we've already ruled them out as outputs by specifying the codomain. I don't think that this approach is entirely unreasonable, and it even has some moderate precedent. For example, when dealing with unbounded operators on Banach spaces, we understand that the operators naturally live on the Banach space, but must be restricted to the domain where they actually make sense. Questions In writing $f : X \to Y$, we are saying that $X$ is the largest set of values that we want to consider as inputs, rather than the actual set of inputs. The emphasis is on first determining the universe of possible inputs, then deciding on which inputs are actually valid (as opposed to the reverse).
Does this seem like a reasonable approach? This is nonstandard, and actively conflicts with the usual notation. However, most of my precalculus students are unlikely to take higher math (indeed, for many of them precalculus is a terminal math class), and I think that those going on to calculus or even proofs-based math classes should be just fine. As such, am I causing any harm if I introduce the concepts in this way? Am I missing anything important? Is there any obvious drawback to this approach?
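A loose programming analogy, with hypothetical names, may make the proposed distinction concrete: a Python type hint plays the role of the natural domain and codomain, while the set of inputs that do not raise an error plays the role of the domain.

```python
# f's type hint says "real in, real out" (natural domain/codomain R -> R),
# but the inputs for which f actually produces a value form only [-3, inf).
import math

def f(x: float) -> float:
    """f(x) = sqrt(x + 3): natural domain R, domain [-3, inf)."""
    return math.sqrt(x + 3)

print(f(1.0))        # 2.0, since 1.0 lies in the domain
try:
    f(-4.0)          # -4.0 is in the natural domain but not in the domain
except ValueError as e:
    print("undefined:", e)
```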
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07). The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03). The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-09). The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb-Pb collisions at 2.76 TeV (Elsevier, 2016-12). We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05). Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
J/$\psi$ production and nuclear effects in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Springer, 2014-02). Inclusive J/$\psi$ production has been studied with the ALICE detector in p-Pb collisions at the nucleon-nucleon centre-of-mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV (Elsevier, 2013-04-10). The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity ($|y| < 0.8$) in the transverse momentum range $1 < p_{\rm T} < 8$ GeV/c with the ALICE experiment at the CERN LHC in ...

Suppression of $\psi$(2S) production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Springer, 2014-12). The ALICE Collaboration has studied the inclusive production of the charmonium state $\psi$(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre-of-mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean $p_{\rm T}$ fluctuations in pp and Pb-Pb collisions at the LHC (Springer, 2014-10). Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are studied as a function of the ...

Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21). We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08). The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility (IOP, 2017). The ALICE Collaboration and the ALICE O$^2$ project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08). In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15). We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017). At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark-gluon plasma (QGP). Such an exotic state of strongly interacting ...