I have done some manipulation and got that $$\frac{1}{1+e^z} = \sum_{n=0}^\infty \frac{n!}{n!+z^n}$$ by the fact that:
$$\frac{1}{1+e^z}= \frac{1}{1+\sum_{n=0}^\infty\frac{z^n}{n!}}=\frac{1}{2}+\frac{1}{1+z}+\frac{1}{1+\frac{z^2}{2}}+\ldots = \frac{1}{2}+\frac{1}{1+z}+\frac{2!}{2!+z^2}+\frac{3!}{3!+z^3}+\ldots$$ $$= \sum_{n=0}^\infty\frac{n!}{n!+z^n}$$
Assuming I did the above right, I am having trouble finding the radius of convergence of the Taylor series given above. I tried the ratio test but got stuck.
Edit: I just realized I did this completely wrong, thinking that I was taking $$\sum\frac{1}{1+\sum_{n=0}^\infty\frac{z^n}{n!}}$$.
Could anyone help me? My question wants me to compute the first four terms of the Taylor series for $\frac{1}{1+e^z}$ and find the radius of convergence. Perhaps they do not want me to actually find the explicit form?
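For what it's worth, the first few terms can be checked with a computer algebra system; a minimal sympy sketch (the expansion order 6 is just enough to expose four nonzero terms):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (1 + sp.exp(z))

# Maclaurin expansion up to (but not including) z**6; removeO() drops the
# O(z**6) tail.  Only odd powers appear beyond the constant 1/2, because
# f(z) - 1/2 = -tanh(z/2)/2 is an odd function.
s = sp.series(f, z, 0, 6).removeO()

for k in range(6):
    print(k, s.coeff(z, k))  # 1/2, -1/4, 0, 1/48, 0, -1/480
```

For the radius of convergence, note that $1+e^z$ vanishes at $z = i\pi(2k+1)$; the singularities nearest the origin are $z = \pm i\pi$, so the radius is $\pi$.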
I just started Jordan forms and am not sure what the general method is. I watched MathdoctorRob on YouTube, but can't really make out a method for finding the JCF.
Would really appreciate someone checking why my working doesn't work, and how I should have done it.
$A = \begin{pmatrix} 10 & 1 \\ -9 & 4 \end{pmatrix}$
I found the characteristic polynomial (and also the minimal polynomial) to be $m_A(\lambda) = p_A(\lambda) = (\lambda - 7)^2$, which indicates that the Jordan block of the matrix is $J = \begin{pmatrix} 7 & 1 \\ 0 & 7\end{pmatrix}$.
If I understand correctly, for the matrix $P$ with columns $v_1, v_2$, the block is telling me that
$Av_1 = 7v_1$ and $Av_2 = v_1 + 7v_2$. Now I know I can take the second equality to solve for $v_2$ and we're done, but I want to see why this doesn't work: since $ker(A-7I) \subseteq ker((A-7I)^2)$, I want a vector $v_2$ such that $(A-7I)^2v_2 = 0$ and $(A-7I)v_2 \neq 0$. Bases for $ker(A-7I)$ and $ker((A-7I)^2)$ respectively are $\left\{\begin{pmatrix} -1 \\ 3 \end{pmatrix}\right\}$ and $\left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right\}$ (since $(A-7I)^2 = 0$ anyway). Then if I pick $v_2$ to be $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$, which isn't in the kernel of $A-7I$, I have $P = \begin{pmatrix} -1 & 1 \\ 3 & 0 \end{pmatrix}$, but when I compute $PJP^{-1}$, it doesn't equal $A$. Can someone guide me?
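The relation $Av_2 = v_1 + 7v_2$ forces $v_1 = (A-7I)v_2$, so once $v_2$ is chosen, $v_1$ is not an arbitrary kernel vector. A quick sympy check with the same choice $v_2 = (1, 0)^T$ as in the post:

```python
import sympy as sp

A = sp.Matrix([[10, 1], [-9, 4]])
J = sp.Matrix([[7, 1], [0, 7]])
N = A - 7 * sp.eye(2)      # nilpotent part, N**2 = 0

# Pick any v2 with N*v2 != 0; then v1 is forced to be N*v2
# (here (3, -9), a multiple of the kernel vector (-1, 3)):
v2 = sp.Matrix([1, 0])
v1 = N * v2

P = sp.Matrix.hstack(v1, v2)
print(P * J * P.inv() == A)  # True
```

With $v_1 = (-1, 3)^T$ instead, the relation $Av_2 = v_1 + 7v_2$ fails, which is exactly why the posted $P$ does not conjugate $J$ to $A$.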
October 10th, 2014, 05:43 AM
# 1
Senior Member
Joined: Aug 2014
From: Mars
Posts: 101
Thanks: 9
LCD and prime numbers (with binomials)
So I was given this problem:
Find the LCD:
$\displaystyle \frac{3}{2x-6}, \frac{4}{x^2-9}, \frac{18}{6x+18}$
The breakdown (do I call these the prime numbers?):
$\displaystyle 2(x-3), (x-3)(x+3), 6(x+3)$
My answer:
$\displaystyle 6(x+3) \cdot 2(x-3)$
Their answer:
$\displaystyle 6(x^2-9)$????? what?
It seems they disregarded the 2, but why? Help me understand please.
Last edited by skipjack; October 11th, 2014 at 04:04 PM.
October 10th, 2014, 06:33 AM
# 2
Global Moderator
Joined: Oct 2008
From: London, Ontario, Canada - The Forest City
Posts: 7,968
Thanks: 1152
Math Focus: Elementary mathematics and beyond
$\displaystyle 2(x-3)\cdot3(x+3)=6(x^2-9)$ (The least common multiple of 2 and 6 is 6).
They are not necessarily prime numbers.
October 10th, 2014, 08:21 PM
# 3
Senior Member
On that same note, if I had $\displaystyle 4(x-3), 6(x+3)$, would I somehow choose $\displaystyle 12(x^2-9)$? I don't quite understand this.
October 10th, 2014, 08:43 PM
# 4
Global Moderator
That's correct. You're looking for the least common denominator - the least common multiple
of 4 and 6 is 12. So if you had
$\displaystyle \frac{3}{4(x-3)}+\frac{3}{6(x+3)}$
you'd write it as
$\displaystyle \frac{3}{4(x-3)}\cdot\frac{3(x+3)}{3(x+3)}+\frac{3}{6(x+3)} \cdot\frac{2(x-3)}{2(x-3)}$
$\displaystyle =\frac{9(x+3)+6(x-3)}{12(x-3)(x+3)}$
$\displaystyle =\frac{15x+9}{12(x-3)(x+3)}$
$\displaystyle =\frac{3(5x+3)}{12(x-3)(x+3)}$
$\displaystyle =\frac{5x+3}{4(x-3)(x+3)}$
October 10th, 2014, 09:59 PM
# 5
Senior Member
Joined: Apr 2014
From: Europa
Posts: 584
Thanks: 177
$\displaystyle \color{green}{2x-6=2(x-3)
\\\;\\
x^2-9=x^2-3^2=(x-3)(x+3)
\\\;\\
6x+18=6(x+3)=2\cdot3(x+3)
\\\;\\
LCD
\\\;\\
2\cdot3(x+3)(x-3)=6(x^2-9)}$
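A computer algebra system agrees with the book's answer; here is a sympy sketch (the loop simply folds `lcm` over the three denominators):

```python
import sympy as sp

x = sp.symbols('x')
denoms = [2*x - 6, x**2 - 9, 6*x + 18]

# Fold lcm over the three denominators; sympy accounts for the integer
# content, so the numeric part is lcm(2, 1, 6) = 6, not 2*6 = 12.
L = denoms[0]
for d in denoms[1:]:
    L = sp.lcm(L, d)

print(sp.factor(L))   # 6*(x - 3)*(x + 3), i.e. 6*(x**2 - 9)
```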
In general, one extracts a manifold invariant from a TQFT by interpreting the closed manifold as a bordism from the empty set to the empty set. The TQFT sends this bordism to a homomorphism of the ground field, which is a number. Such invariants are always multiplicative under disjoint union; this is a consequence of the TQFT being a monoidal functor: $$\mathcal{Z}(M_1 \sqcup M_2) = \mathcal{Z}(M_1) \otimes \mathcal{Z}(M_2) = \mathcal{Z}(M_1) \cdot \mathcal{Z}(M_2)$$
Some TQFTs, like the Crane-Yetter invariant (but not, say, the Turaev-Viro model), give manifold invariants that are multiplicative under connected sum $\#$.
One way to see this is to notice that they can be defined (for connected manifolds) with Kirby calculus: Given a manifold, choose a handle decomposition and consider its link diagram. The diagram is then labelled with morphisms from a ribbon fusion category and the whole diagram is evaluated as a morphism from the monoidal identity to itself, again a number. Now the evaluation of the disjoint union of two link diagrams must then give the product of the evaluations of the respective diagrams, since a ribbon fusion category is monoidal. But the disjoint union of two link diagrams of manifolds $M_1$ and $M_2$ is the link diagram of the connected sum $M_1 \# M_2$!
This leads me to believe that the multiplicativity secretly comes from the monoidality of some functor again. Is there a category of bordisms where the monoidal product of morphisms (= bordisms) is the connected sum, and not the disjoint union? Are such TQFTs actually monoidal functors from this bordism category to $\mathrm{Vect}$?
A related, noncategorical question was asked here: Monoid structure of oriented manifolds with connect sum
Summary of Techniques: Solving Second Order Differential Equations
We will now summarize the techniques we have discussed for solving second order differential equations.
(1) Real and Distinct Roots of The Characteristic Equation: If we have a second order linear homogeneous differential equation with constant coefficients $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0$, and if the roots of the characteristic equation $ar^2 + br + c = 0$ are real and distinct, then the general solution for this differential equation is given by:
\begin{align} \quad y = Ce^{r_1t} + De^{r_2t} \end{align}
(2) Complex Roots of The Characteristic Equation: If we have a second order linear homogeneous differential equation with constant coefficients $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0$, and if the roots of the characteristic equation $ar^2 + br + c = 0$ are complex (conjugates of each other), then $r_1 = \lambda + \mu i$ and $r_2 = \lambda - \mu i$ for some $\lambda, \mu \in \mathbb{R}$ and the general solution for this differential equation is given by:
\begin{align} \quad y = Ce^{\lambda t} \cos (\mu t) + De^{\lambda t} \sin (\mu t) \end{align}
(3) Repeated Roots of The Characteristic Equation: If we have a second order linear homogeneous differential equation with constant coefficients $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0$, and if the roots of the characteristic equation $ar^2 + br + c = 0$ are real and not distinct (that is, $r_1 = r_2$), then the general solution for this differential equation is given by:
\begin{align} \quad y = Ce^{r_1t} + Dte^{r_1t} \end{align}
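The three constant-coefficient cases above can be verified with sympy's `dsolve`; a minimal sketch (the example equations are my own, one per case):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

odes = [
    y(t).diff(t, 2) - 3*y(t).diff(t) + 2*y(t),     # r^2 - 3r + 2: roots 1, 2
    y(t).diff(t, 2) + y(t),                        # r^2 + 1: roots ±i
    y(t).diff(t, 2) - 14*y(t).diff(t) + 49*y(t),   # (r - 7)^2: repeated root 7
]

for ode in odes:
    sol = sp.dsolve(ode, y(t))
    print(sol, sp.checkodesol(ode, sol))
```

`checkodesol` substitutes the general solution back into the equation, so it confirms the three closed forms listed above.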
(4) Reduction of Order on Second Order Linear Homogeneous Differential Equations: If we have a second order linear homogeneous differential equation $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t)y = 0$ and we know that $y = y_1(t)$ is a nonzero solution, then we can assume that $y = v(t) y_1(t)$ is a solution and plug it into our differential equation to obtain the first order differential equation below for the function $v'(t)$, to which we can apply techniques for solving first order differential equations to obtain $v'(t)$, integrate to get $v(t)$, and then obtain a second solution $y_2(t) = v(t) y_1(t)$.
\begin{align} \quad (2y_1'(t) + p(t)y_1(t))v'(t) + y_1(t)v''(t) = 0 \end{align}
(5) Euler Differential Equations: A second order linear homogeneous differential equation in the form $t^2 \frac{d^2 y}{dt^2} + \alpha t \frac{dy}{dt} + \beta y = 0$ for some $\alpha, \beta \in \mathbb{R}$ is called an Euler differential equation. If we let $x = \ln t$, then we can transform this differential equation into the following differential equation with constant coefficients and solve for $y$ in terms of $x$ by using one of the techniques for solving linear homogeneous differential equations with constant coefficients. We can then obtain a solution $y$ in terms of $t$ by substituting back $x = \ln t$.
\begin{align} \quad \frac{d^2 y}{dx^2} + (\alpha - 1) \frac{dy}{dx} + \beta y = 0 \end{align}
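As a concrete check of this substitution, take the (hypothetical) Euler equation with $\alpha = 2$, $\beta = -6$: the transformed characteristic equation $r^2 + (\alpha - 1)r + \beta = r^2 + r - 6 = 0$ has roots $2$ and $-3$, so $y = t^2$ and $y = t^{-3}$ should both solve the original equation. A sympy sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def euler_lhs(y):
    """Left-hand side of t^2 y'' + 2 t y' - 6 y = 0 (alpha = 2, beta = -6)."""
    return t**2 * sp.diff(y, t, 2) + 2*t*sp.diff(y, t) - 6*y

# Both candidate solutions should give a residual of 0:
for y in (t**2, t**(-3)):
    print(y, sp.simplify(euler_lhs(y)))
```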
(6) The Method of Undetermined Coefficients: If we have a second order linear nonhomogeneous differential equation with constant coefficients, $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = g(t)$, and the function $g(t)$ is particularly nice - that is, $g(t)$ contains only exponential functions, sines, cosines, or polynomials - then we can assume a form for a particular solution $Y(t)$, plug it into our differential equation, and solve for the coefficients. The general solution to this differential equation is then $y(t) = y_h(t) + Y(t)$ where $y_h(t)$ is the general solution to the corresponding second order linear homogeneous differential equation.
(7) The Method of Variation of Parameters: If we have a second order linear nonhomogeneous differential equation with constant coefficients and $g(t)$ is not suitable for the method of undetermined coefficients, then instead we can assume that the particular solution is $Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t)$, where $y_1(t)$ and $y_2(t)$ form a fundamental set of solutions to the corresponding second order linear homogeneous differential equation, and where $u_1$ and $u_2$ can be determined by solving the system of equations below for $u_1'(t)$ and $u_2'(t)$ and integrating the results. Once again, $y(t) = y_h(t) + Y(t)$.
\begin{align} \quad \left\{\begin{matrix} u_1'(t)y_1(t) + u_2'(t)y_2(t) = 0\\ u_1'(t)y_1'(t) + u_2'(t)y_2'(t) = g(t) \end{matrix}\right. \end{align}
We will also comment on the existence of solutions for second order linear differential equations and general solution sets to second order differential equations.
Theorem (Existence/Uniqueness of Solutions to Second Order Linear Differential Equations): Let $p$, $q$, and $g$ be continuous functions on an open interval $I$ such that $t_0 \in I$. Then the second order linear differential equation $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = g(t)$ with the initial conditions $y(t_0) = y_0$ and $y'(t_0) = y'_0$ has a unique solution $y = \phi (t)$ throughout $I$.
Theorem (The Principle of Superposition): If $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ is a second order linear differential equation and $y = y_1(t)$ and $y = y_2(t)$ are both solutions to this differential equation, then for constants $C$ and $D$, $y = Cy_1(t) + Dy_2(t)$ is also a solution.
Theorem (Abel's Identity): Let $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ be a second order linear homogeneous differential equation where $p$ and $q$ are continuous on an open interval $I$ such that $t_0 \in I$. Then the Wronskian of $y_1$ and $y_2$ at some $t$ is given by $W(y_1, y_2) = C e^{- \int p(t) \: dt}$ where $C$ is some constant dependent on $y_1$ and $y_2$.
Theorem (Wronskian Determinants): Let $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ be a second order linear homogeneous differential equation where $p$ and $q$ are continuous functions on an open interval $I$ such that $t_0 \in I$ and with the initial conditions $y(t_0) = y_0$ and $y'(t_0) = y'_0$. If $y = y_1(t)$ and $y = y_2(t)$ are solutions to this differential equation, then there exist constants $C$ and $D$ for which $y = Cy_1(t) + Dy_2(t)$ is a solution to the initial value problem if and only if the Wronskian at $t_0$ is nonzero, that is $W(y_1, y_2) \biggr \rvert_{t_0} = y_1(t_0)y_2'(t_0) - y_1'(t_0)y_2(t_0) \neq 0$.
Theorem (Fundamental Solutions): Let $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t)y = 0$ be a second order linear homogeneous differential equation where $p$ and $q$ are continuous on an open interval $I$ such that $t_0 \in I$, and let $y = y_1(t)$ and $y = y_2(t)$ be two solutions to this differential equation. The set of all linear combinations of these two solutions, $y = Cy_1(t) + Dy_2(t)$ where $C$ and $D$ are constants, contains all solutions to this differential equation if and only if there exists a point $t_0$ for which the Wronskian of $y_1$ and $y_2$ at $t_0$ is nonzero, that is $W(y_1, y_2) \biggr \rvert_{t_0} \neq 0$.
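As a small illustration of Abel's identity and the fundamental-set criterion, consider $y'' + y = 0$, where $p(t) = 0$: Abel's identity predicts a constant Wronskian, and for $y_1 = \cos t$, $y_2 = \sin t$ it is identically $1 \neq 0$. A sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

# y'' + y = 0 has p(t) = 0, so Abel's identity gives W = C*exp(0) = constant.
W = sp.wronskian([sp.cos(t), sp.sin(t)], t)
print(sp.simplify(W))  # 1: nonzero, so cos and sin form a fundamental set
```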
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory issues, such as vacuum polarization or renormalization.
A particle is characterized by a definite momentum plus possibly other quantum numbers. Thus, one-particle states are by definition states with a definite eigenvalue of the momentum operator; they can have further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ denotes the set of quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.)
Thus in general:
$$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$
where the dependence of the coefficients $C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle $$ Thus $Q$ acts as a representation on the one-particle states. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy.
This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
Hausdorff Topological Spaces Examples 3
Recall from the Hausdorff Topological Spaces page that a topological space $(X, \tau)$ is said to be a Hausdorff space if for every distinct $x, y \in X$ there exists open neighbourhoods $U, V \in \tau$ such that $x \in U$, $y \in V$ and $U \cap V = \emptyset$.
We also looked at two notable examples of Hausdorff spaces - the first being the set of real numbers $\mathbb{R}$ with the usual topology of open intervals on $\mathbb{R}$, and the second being the discrete topology on any nonempty set $X$.
We will now look at some more problems regarding Hausdorff topological spaces.
Example 1 Prove that every metric space is a Hausdorff space.
Let $(X, d)$ be a metric space. The open sets in $X$ are the sets that are unions of collections of open balls with respect to the metric $d$ defined on $X$.
Let $x, y \in X$ where $x \neq y$. Then $d(x, y) > 0$, and so if we let $r = \frac{d(x, y)}{2}$ then the ball centered at $x$ with radius $r$ and the ball centered at $y$ with radius $r$ are disjoint, that is, $B(x, r) \cap B(y, r) = \emptyset$ (a point $z$ in both balls would give $d(x, y) \leq d(x, z) + d(z, y) < 2r = d(x, y)$, a contradiction).
Furthermore if we set $U = B(x, r)$ and $V = B(y, r)$ we have that $U$ and $V$ are open sets of $X$ with respect to the metric $d$. Therefore any metric space is a Hausdorff space.
Example 2 Prove that if $(X, \tau)$ is a Hausdorff space then for every $x \in X$, the singleton set $\{ x \}$ is closed.
Suppose that $(X, \tau)$ is a Hausdorff space and let $x \in X$. We will show that $X \setminus \{ x \}$ is open. Let $y \in X \setminus \{ x \}$. Since $(X, \tau)$ is Hausdorff, there exist open neighbourhoods $U$ of $x$ and $V$ of $y$ such that $U \cap V = \emptyset$. But since $x \in U$, we must have that $x \not \in V$, so $y \in V \subseteq X \setminus \{ x \}$.
So for every $y \in X \setminus \{ x \}$ there exists a $V \in \tau$ such that $y \in V \subseteq X \setminus \{ x \}$ so $\mathrm{int} (X \setminus \{ x \}) = X \setminus \{ x \}$, i.e., $X \setminus \{ x \}$ is open, and so, $\{ x \}$ is closed.
Example 3 Use Example 2 to show that the set $X = \{ a, b, c, d \}$ with the topology $\tau = \{ \emptyset, \{ a \}, \{a, b \}, \{a, c \}, \{a, b, c \}, X \}$ is not a Hausdorff space.
The contrapositive of the result from Example 2 says that if some singleton set $\{ x \}$ is not closed, then $(X, \tau)$ is not a Hausdorff space. Here $\{ b \}$ is not closed, since $X \setminus \{ b \} = \{a, c, d \} \not \in \tau$, so $(X, \tau)$ is not Hausdorff. We can also verify this directly.
For the element $b \in X$, the only open neighbourhoods of $b$ are $\{ a, b \}$, $\{a, b, c \}$, and $X$, while the open neighbourhoods of $a$ are $\{a \}$, $\{ a, b \}$, $\{a, c \}$, $\{a, b, c \}$, and $X$.
Every open neighbourhood of $b$ contains $a$, so any open neighbourhoods $U$ of $a$ and $V$ of $b$ intersect. Hence there do not exist open neighbourhoods $U$ of $a$ and $V$ of $b$ such that $U \cap V = \emptyset$, and $(X, \tau)$ is not a Hausdorff space.
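For a finite space like the one in Example 3, the Hausdorff condition can be checked exhaustively; a brute-force sketch in Python (function name is mine):

```python
from itertools import product

# X and tau from Example 3.
X = frozenset('abcd')
tau = [frozenset(), frozenset('a'), frozenset('ab'), frozenset('ac'),
       frozenset('abc'), X]

def separated(x, y):
    """True if x and y have disjoint open neighbourhoods U, V in tau."""
    return any(x in U and y in V and not (U & V)
               for U, V in product(tau, repeat=2))

print(separated('a', 'b'))  # False: every neighbourhood of b contains a
```

In fact no pair of distinct points can be separated here, since every nonempty open set contains $a$.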
I am still confused about Hypothesis testing. How does one set up the null hypothesis $H_0$ and the alternative hypothesis $H_a$? I have read a post here that doesn't give it much credit, as far as I can tell. (One must use both $H_0$ and $H_a$ for the AP Statistics exam.)
I have read that $H_a$ is to be what one does not want to be false (to happen). Also, one source says that $H_0$ should always have the = sign, even when doing a one-sided (-tailed) test. But another uses the $\geq$ and $\leq$ in $H_0$ when doing a one-sided test. Another doesn't even mention $H_a$ and just goes straight to the P value.
Which is correct?
Question: Is A bigger than B?
$$H_0\colon \left(\mu_a-\mu_b\right) = 0\quad \text{or}\quad H_0\colon \left(\mu_a-\mu_b\right) \geq 0$$
$$H_a\colon \left(\mu_a-\mu_b\right) \lt 0\quad \text{or}\quad H_a\colon \left(\mu_a-\mu_b\right) \lt 0$$
Or is this setup better?
$$H_0\colon \left(\mu_b - \mu_a\right) = 0\quad \text{or}\quad H_0\colon \left(\mu_b-\mu_a\right) \leq 0 $$
$$H_a\colon \left(\mu_b-\mu_a\right) \gt 0\quad \text{or}\quad H_a\colon \left(\mu_b-\mu_a\right) \gt 0$$
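For what it's worth, statistical software typically asks you to specify only $H_a$; for instance, scipy's two-sample $t$-test takes an `alternative` keyword, and $H_0$ is understood as its complement (with the equality case as the boundary). A sketch with made-up data in which $A$ really is bigger than $B$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical samples: A is shifted up by one standard deviation.
a = rng.normal(loc=5.0, scale=1.0, size=100)
b = rng.normal(loc=4.0, scale=1.0, size=100)

# Ha: mu_a - mu_b > 0; H0 is its complement (boundary case mu_a - mu_b = 0).
t_stat, p_value = stats.ttest_ind(a, b, alternative='greater')
print(t_stat, p_value)
```

A small p-value leads to rejecting $H_0$ in favour of "A is bigger than B"; note the conclusion is the same whether $H_0$ is written with $=$ or with $\leq$.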
Consider $n=144$ observations from an $AR(1)$ model $$y_t=\phi y_{t-1}+\epsilon_t$$ where $\epsilon_t$ is white noise with mean zero and variance $\sigma^2$, with $y_1=-1.7$, $y_{144}=-2.1$, $\sum_{t=2}^n y_t y_{t-1}=-128.6$ and $\sum_{t=1}^n y_t^2=246.4$.
Suppose $\sigma^2$ is known; find $Cov(e_n(1),e_n(2))$, where $e_n(j)$ is the forecast error $j$ steps ahead.
This question is related with another that I posted Estimation of $\phi$ in $AR(1)$ process , but I preferred to separate them.
First I found the forecasts $y_t(1)$ and $y_t(2)$ with $t=144$.
$$y_t(1)=E[y_{t+1}|y_1,\dots,y_t]=E[\phi y_t|y_1,\dots,y_t]=\phi y_t$$ $$y_t(2)=E[y_{t+2}|y_1,\dots,y_t]=E[\phi y_{t+1}+\epsilon_{t+2}|y_1,\dots,y_t]=\phi^2 y_t$$
Then the forecast errors are $$e_n(1)=y_{t+1}-y_t(1)=\epsilon_{t+1}$$ $$e_n(2)=y_{t+2}-y_t(2)=\phi y_{t+1}+\epsilon_{t+2}-\phi^2 y_t$$
Finally the covariance is (using $y_{t+1}=\phi y_t+\epsilon_{t+1}$, so that $\phi y_{t+1}-\phi^2 y_t = \phi\epsilon_{t+1}$) $$Cov(e_n(1),e_n(2))=Cov(\epsilon_{t+1},\phi y_{t+1}+\epsilon_{t+2}-\phi^2 y_t)$$ $$=Cov(\epsilon_{t+1},\phi \epsilon_{t+1}+\epsilon_{t+2})=\phi Var(\epsilon_{t+1})=\phi \sigma^2$$
Is it right?
NOTE: Sorry if I post multiple questions about this topic, but I'm learning alone and I don't have a solutions manual.
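The algebra can be sanity-checked by simulation: since $e_n(1)=\epsilon_{t+1}$ and $e_n(2)=\phi\epsilon_{t+1}+\epsilon_{t+2}$, the covariance should come out to $\phi\sigma^2$. A Monte Carlo sketch with arbitrary illustrative values $\phi=0.6$, $\sigma^2=4$:

```python
import numpy as np

rng = np.random.default_rng(42)
phi, sigma2, n_sim = 0.6, 4.0, 500_000  # arbitrary illustrative values

# e_n(1) = eps_{t+1},  e_n(2) = phi*eps_{t+1} + eps_{t+2}
eps1 = rng.normal(0.0, np.sqrt(sigma2), n_sim)
eps2 = rng.normal(0.0, np.sqrt(sigma2), n_sim)
e1 = eps1
e2 = phi * eps1 + eps2

print(np.cov(e1, e2)[0, 1], phi * sigma2)  # both close to 2.4
```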
I don't sharply disagree with Dr Neumaier's answer; it is indeed the case that entanglement may only be discussed for Hilbert spaces that are tensor products.
However, if the two parts of the well are sufficiently distant, this is nearly the case of your situation, too. When one looks at it in this approximate way, the answer is that the electrons – assuming that you only occupied one spin state, for example both electrons are spin up – are
not entangled.
Why?
The Hilbert space with two widely separated wells that can store electrons is approximately the tensor product $$ {\mathcal H} = {\mathcal H}_\text{left well} \otimes {\mathcal H}_\text{right well}$$
The two individual product Hilbert spaces are not quite completely well-defined: one doesn't want to discuss quantum field theory on a "region of space" due to the problems with the boundary conditions (the "big" Hilbert space doesn't constrain the fields near the boundaries around the wells at all while the smaller Hilbert spaces have to impose some boundary conditions, so the factorization above can't be exact).
However, as long as these boundary conditions are not a problem (for example because it's guaranteed that everything is almost totally confined near the well and nothing gets close enough to these boundaries), the Hilbert space does factorize in this way, and so does the state you wrote: $$|\psi\rangle = |\text{1 electron}\rangle_\text{left well} \otimes |\text{1 electron}\rangle_\text{right well} $$ Note that the one-well, one-electron problem only has one ground state: there is no degeneracy here, not even an approximate one.
The system is simply composed of two independent systems – two wells in two different regions – that are not correlated or entangled at all. In quantum field theory, the tensor product state above could be written as $a^\dagger_\text{left well} a^\dagger_\text{right well}|0\rangle$ where the two creation operators don't carry any labels and they are composed of field operators near the two wells, respectively. A non-entangled state is defined as one that can be written as a tensor product and that's exactly what we can do here (in the two-region approximation).
We don't violate the Pauli exclusion principle here in any way because in this approximate two-region description of the system, the binary quantum number "rough position" (which is either "near left well" or "near right well") plays the same role as the spin or other quantum numbers. The two electrons have different eigenvalues of "rough position" which is why they can be in exactly the same state when it comes to energy, spin, and all other quantum numbers.
This extra quantum number is also the reason why you have two nearby energy low-lying states of the two-well problem. There's a two-dimensional Hilbert space for a single electron spanned by energy eigenstates with energies $E_1,E_2$: the corresponding eigenvectors are "even" or "odd" functions of the position (the wave functions either have the same sign in both wells or the opposite sign). In the approximation in which the space between the wells is impenetrable and the boundary conditions for the regions don't pose a problem, we have $E_1=E_2$ and the two-dimensional Hilbert space may also be generated from another basis containing the ground state of the left well and the ground state of the right well. In this approximation, we're just filling two states that only differ by the "rough position" by the maximum number of two electrons.
The inequality $E_1\neq E_2$ in your exact treatment only arises because there's a nonzero probability amplitude for an electron to tunnel from one well to the other one. If it couldn't tunnel, we would have the exact "doubling" of the Hilbert space for a single electron. For the same reason, one can't measure the energy "in one well only" with the accuracy needed to distinguish $E_1$ and $E_2$.
If your measurement apparatus is confined to the vicinity of one well, the error in your energy measurement can't be smaller than $E_1-E_2$ so you won't be able to say "which of the two nearby states" the electron is in. The same holds for the vicinity of the other well which is why the measurement in one well can't influence anything detectable near the other well.
The impossibility to distinguish $E_1$ and $E_2$ by a measurement near a single well is easy to prove; if you measure the electron near the left well, with whatever low-lying energy near $E_1$ or $E_2$, you are proving that this electron is in an eigenstate of the "rough position". But the operator of "rough position" doesn't commute with the total energy; the eigenstate $|\text{left well ground state}\rangle$ is a linear superposition of the $|E_1\rangle$ and $|E_2\rangle$ eigenstates (it's the right linear superposition that vanishes near the other well), something like$$ |\text{left well ground state}\rangle = \frac{1}{\sqrt{2}} \left( |E_1\rangle - |E_2\rangle \right ) $$If you've measured the "rough position", you are totally uncertain about the eigenvalue of the "exact energy" because these two operators don't commute with one another; a textbook case of the uncertainty principle. If the two wells are equally deep etc., by seeing an electron near the left well, you have 50% odds that its energy was $E_1$ and 50% odds that it was $E_2$ and nothing can be changed about these odds because they follow from the displayed equation above.
In terms of operators, we may say that in the basis "left well ground state" and "right well ground state", the operator of "exact energy" looks like $$ H = \frac{E_1+E_2}{2}\cdot{\bf 1} + \frac{E_1-E_2}{2} \cdot \sigma_1 $$ where the second term is proportional to an off-diagonal matrix similar to the first two Pauli matrices. It isn't diagonal in this basis, so if we know that we found an electron near the left well, its "exact energy" (whether it is $E_1$ or $E_2$) is maximally uncertain. And vice versa: if we find an electron in the state $E_1$, and we are sure it is not $E_2$, then this electron must be in a wave function that is nonzero near both wells, so we don't learn anything about the "rough position" (left or right), which remains maximally uncertain.
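The $2\times 2$ algebra here is easy to check numerically; with hypothetical values $E_1 = 1.0$, $E_2 = 1.2$, diagonalizing $H$ in the left/right basis recovers the two energies, and the left-well state has probability $1/2$ for each:

```python
import numpy as np

E1, E2 = 1.0, 1.2   # hypothetical nearby energy levels
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

# H in the {left well, right well} basis, as in the formula above.
H = (E1 + E2) / 2 * np.eye(2) + (E1 - E2) / 2 * sigma_x

evals, evecs = np.linalg.eigh(H)
left = np.array([1.0, 0.0])          # "electron near the left well"
probs = np.abs(evecs.T @ left) ** 2  # Born-rule odds for each exact energy

print(evals, probs)  # energies [1.0, 1.2], probabilities [0.5, 0.5]
```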
If we make a measurement of an electron near the left well, the right conclusion that the antisymmetry or Pauli's principle allows us to predict is that the other electron is in the right well. It's that simple. But learning that it's in one particular well is incompatible with learning whether or not it is in the $E_1$ or $E_2$ eigenstate because the operators corresponding to these questions don't commute with one another.
If several electrons are in vastly different regions of space, the Pauli exclusion principle becomes inconsequential, of course: the electrons are effectively distinguishable by their location. So the dimension of the Hilbert space for the two separated wells
is the simple product of the dimensions of the Hilbert spaces for the individual wells; there's no additional "antisymmetrization" we should do here because we're discussing "off-diagonal blocks" of a matrix and the antisymmetric part of the state is hiding in the convention how we label the two electrons.
But to be able to look at the situation in this factorized way, I had to organize the Hilbert space as a tensor product of pieces that correspond to individual regions. If we organize the Hilbert space according to "individual electrons that may a priori be anywhere", we can't really talk about the entanglement at all because the total Hilbert space of many electrons isn't a tensor product of the individual electrons' spaces: it's the antisymmetrization of it.
The simplest, strict definitions of entanglement don't apply to such antisymmetrized tensor spaces. There's still a natural convention that if we have antisymmetrized (or symmetrized) tensor product Hilbert spaces, we still consider the antisymmetrization (or symmetrization) of a tensor product state to be a non-entangled state. This includes your state. Such a definition will tend to produce similar verdicts as the procedure based on the quantum field (composed of various regions) that I described above.
At any rate, you won't find any helpful way to argue that (and why) these two electrons are entangled: we're not learning any new information (such as the spin) about "the electron in the left well" at all so this "no information" can't be entangled with any information from the right well (which is also empty). The question whether there's entanglement here is either ill-defined or they are not entangled. And even if you found a (contrived) definition that would allow you to say that the simple state is entangled, such an "entanglement" will have no physical consequences. Two highly separated regions (or wells) are independent. In particular, the laws of quantum field theory are exactly local so a measurement or decision done near one well won't immediately influence a spatially separated other well.
To summarize and address your questions:
Finding an electron in the left well ground state means that it has 50% odds to be in the $E_1$ state and 50% to be in the nearby $E_2$ state of the double well problem; we can't simultaneously distinguish left-right as well as $E_1$ vs $E_2$ because the corresponding operators refuse to commute with one another. (I say "refuse", not "fail", because it's a holy right – and the dominant situation – for two operators not to commute. They have no duty to commute in quantum mechanics so a nonzero commutator isn't a failure, isn't bad in any way.) If we find an electron near the left well, what the antisymmetry allows to tell us is that the second electron is near the right well, and vice versa. But measurements linked to one of the two regions can't tell us about the exact energy of one electron (and therefore it tells us nothing about the energy of the other one, either)
In the description of "individual electrons", one can't talk about entanglement because the full Hilbert space is an antisymmetrization (reduced version) of the tensor product, not the full tensor product. In the approximate description of quantum field theory on two regions, the big Hilbert space tensor factorizes and the two-electron state (occupying the two low-lying states) isn't entangled. If the initial state is not entangled and the evolution of the quantum system respects locality (and quantum field theory does), no entanglement may be created by actions done near one well or the other well. Entanglement is always a consequence of the two subsystems' being in contact in the past.
Yes, as I said, you're exactly right: if we know that an electron is near the left well, the odds for its being in the $E_1$ two-well state or the nearby $E_2$ two-well state are exactly 50% for both cases. The left-vs-right and $E_1$-vs-$E_2$ can't be measured simultaneously much like $J_z$ and $J_x$ components of the spin cannot; in fact, these two examples are totally mathematically isomorphic.
A blog version of this answer of mine is here: http://motls.blogspot.com/2012/03/energy-measurements-in-two-fermion.html#more

This post imported from StackExchange Physics at 2014-03-22 17:25 (UCT), posted by SE-user Luboš Motl
Source Variability
Source variability within an observation is assessed by three methods: (1) the Kolmogorov-Smirnov (K-S) test, (2) the Kuiper's test, and (3) computation of the Gregory-Loredo variability probability, all based on the source region counts. Intra-observation source variability within any contributing observations to a master source entry is assessed according to the highest level of variability seen within any single contributing observation. Inter-observation source variability between any contributing observations to a master source entry is assessed by application of a \(\chi^{2}\) hypothesis test applied to the source region photon fluxes observed in the contributing observations.
Gregory-Loredo Variability Probability var_prob
The probability that the source region count rate lightcurve is better described by multiple, uniformly sampled time bins with potentially different rates in each bin, as opposed to being described by a single, uniform rate time bin. This probability is based upon the odds ratios (for describing the lightcurve with two or more bins of potentially different rates) calculated from a Gregory-Loredo analysis of the arrival times of the events within the source region. Corrections to the event rate are applied accounting for good time intervals and for the source region dithering across regions of variable exposure (e.g., chip edges) during the observation. Probability values are calculated for each science energy band.
Kolmogorov-Smirnov (K-S) Test Probability ks_prob
The probability that the arrival times of the events within the source region are inconsistent with a constant source count rate throughout the observation. High values of this quantity imply that the source is not consistent with a constant rate, and that the source is likely variable. The probability is computed by means of a hypothesis rejection test from a one-sample K-S test applied to the unbinned event data, with corrections applied for good time intervals and for the source region dithering across regions of variable exposure (e.g., chip edges) during the observation. Probability values are calculated for each science energy band. Note that this variability diagnostic does not treat the source and background separately.
Kuiper's Test Probability kp_prob
The probability that the arrival times of the events within the source region are inconsistent with a constant source count rate throughout the observation. High values of this quantity imply that the source is not consistent with a constant rate, and that the source is likely variable. The probability is computed by means of a hypothesis rejection test from a one-sample Kuiper's test applied to the unbinned event data, with corrections applied for good time intervals and for the source region dithering across regions of variable exposure (e.g., chip edges) during the observation. Probability values are calculated for each science energy band. Note that this variability diagnostic does not treat the source and background separately.
Variability Index var_index
An index in the range [0,10] that combines (a) the Gregory-Loredo variability probability with (b) the fractions of the multi-resolution light curve output by the Gregory-Loredo analysis that are within 3σ and 5σ of the average count rate, to evaluate whether the source region flux is uniform throughout the observation. See the Gregory-Loredo Probability How and Why topic for a definition of this index value, which is calculated for each science energy band.
Count Rate Variability var_mean, var_sigma, var_min, var_max

Mean Count Rate

The mean count rate (var_mean) is the time-averaged source region count rate derived from the multi-resolution light curve output by the Gregory-Loredo analysis. This value is calculated for each science energy band.

Count Rate Standard Deviation

The count rate standard deviation (var_sigma) is the time-averaged 1σ statistical variability of the source region count rate derived from the multi-resolution light curve output by the Gregory-Loredo analysis. This value is calculated for each science energy band.

Minimum Count Rate

The minimum count rate (var_min) is the minimum value of the source region count rate derived from the multi-resolution light curve output by the Gregory-Loredo analysis. This value is calculated for each science energy band.

Maximum Count Rate

The maximum count rate (var_max) is the maximum value of the source region count rate derived from the multi-resolution light curve output by the Gregory-Loredo analysis. This value is calculated for each science energy band.

Dither Warning Flag dither_warning_flag
The dither warning flag consists of a Boolean whose value is TRUE if the highest statistically significant peak in the power spectrum of the source region count rate, for the science energy band with the highest variability index, occurs either at the dither frequency of the observation or at a beat frequency of the dither frequency. Otherwise, the dither warning flag is FALSE. This value is calculated for each science energy band.
Gregory-Loredo Light Curve File see Data Products page
Each light curve file records the multi-resolution light curve output by the Gregory-Loredo analysis of the arrival times of the source events within the source region, per observation and science energy band. A background light curve with identical time-binning to the source light curve is derived from an analysis of the events within the background region. Note that the source lightcurve is not strictly a rate derived from binned counts. Instead, it is a probabilistic model of the lightcurve, derived from a probability weighted average of the lightcurve models calculated by the Gregory-Loredo algorithm at different uniform binnings.
Master Source and Stacked Observation Detection Properties

Intra-Observation Gregory-Loredo, Kolmogorov-Smirnov, and Kuiper's Variability Probability var_intra_prob, ks_intra_prob, kp_intra_prob

The Gregory-Loredo, Kolmogorov-Smirnov (K-S) test, and Kuiper's test intra-observation variability probabilities represent the highest values of the variability probabilities (var_prob, ks_prob, kp_prob) calculated for each of the contributing observations (i.e., the highest level of variability among the observations contributing to the master source entry).

Intra-Observation Variability Index var_intra_index

The intra-observation variability index (var_intra_index) represents the highest value of the variability indices (var_index) calculated for each of the contributing observations.

Inter-Observation Variability Probability var_inter_prob

The inter-observation variability probability (var_inter_prob) records the probability that the source region photon flux varied between the contributing observations, based on the \(\chi^{2}\) distribution of the photon fluxes and the errors (standard deviations) of the individual observations. In other words, (1 - var_inter_prob) is the probability that the measured fluxes are consistent with a non-varying source.
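The \(\chi^{2}\) statistic behind this test can be sketched as follows (my illustration; converting the statistic to var_inter_prob requires the \(\chi^{2}\) CDF, e.g. from a stats library, which is omitted here):

```python
def inter_obs_chi2(fluxes, sigmas):
    """Chi-square statistic for flux constancy across observations:
    squared deviations of each observation's photon flux from the
    error-weighted mean flux, with N - 1 degrees of freedom for N
    contributing observations."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum((f - mean) ** 2 / s ** 2 for f, s in zip(fluxes, sigmas))
    return chi2, len(fluxes) - 1
```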
The reason for this careful definition is that the probabilities for intra-observation and inter-observation variability are, by necessity, of a different nature. Whereas one can say with reasonable certainty whether a source was variable during an observation covering a contiguous time interval, when comparing measured fluxes from different observations one knows nothing about the source's behavior during the intervening interval(s). Consequently, when the inter-observation variability probability is high (e.g., >0.7), one can confidently state that the source is variable on longer time scales, but when the probability is low, all one can say is that the observations are consistent with a constant flux.
Inter-Observation Variability Index var_inter_index

The inter-observation variability index (var_inter_index) is an integer value in the range [0,10] that is derived from the inter-observation variability probability to evaluate whether the source region photon flux is constant between the observations. The degree of confidence in variability expressed by this index is similar to that of the intra-observation variability index.

Inter-Observation Count Rate Variability var_inter_sigma

The inter-observation flux variability (var_inter_sigma) is the largest deviation of any single contributing observation's mean source region photon flux density from the error-weighted mean over all contributing observations, measured in units of that observation's standard deviation:

\[\mathrm{var\_inter\_sigma} = \max_{x} \frac{\left|\left\langle F_{ew}\right\rangle - \left\langle f_{i=x}\right\rangle\right|}{\sigma_{i=x}}\]

Here, \(\left\langle F_{ew}\right\rangle\) represents the inter-observation error weighted mean source region photon flux density; \(\left\langle f_{i=x}\right\rangle\) is the intra-observation mean source region photon flux density for the single observation \(x\); and \(\sigma_{i=x}\) is the standard deviation corresponding to observation \(x\) [where \(x \in \left\{ k \in \mathbb{Z}^{+} \mid k \leq N \right\}\) for \(N\) contributing observations]. Of all contributing observations, observation \(x\) yields the highest value of this ratio, which is the value recorded by var_inter_sigma.
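As a direct translation of that maximization (my sketch; the function name is not from the catalog):

```python
def var_inter_sigma(fluxes, sigmas):
    """Largest deviation of a single observation's mean photon flux
    from the error-weighted mean over all observations, in units of
    that observation's standard deviation."""
    weights = [1.0 / s ** 2 for s in sigmas]
    f_ew = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    return max(abs(f_ew - f) / s for f, s in zip(fluxes, sigmas))
```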
J. D. Hamkins, “book review of Notes on Set Theory, Moschovakis,” J.~Symbolic Logic, vol. 62, iss. 4, pp. 1493-1494, 1997.

@article{Hamkins1998:BookReviewMoschovakis,
  jstor_articletype = {book-review},
  title = {book review of {Notes on Set Theory, Moschovakis}},
  author = {Hamkins, Joel David},
  journal = {J.~Symbolic Logic},
  volume = {62},
  number = {4},
  jstor_formatteddate = {Dec., 1997},
  pages = {pp.~1493-1494},
  ISSN = {00224812},
  keywords = {book-review},
  language = {English},
  year = {1997},
  publisher = {Association for Symbolic Logic},
  copyright = {Copyright © 1997 Association for Symbolic Logic},
  doi = {10.2307/2275660},
  url = {http://wp.me/p5M0LV-S},
}
Yiannis N. Moschovakis. Notes on Set Theory. This is a sophisticated undergraduate set theory text, packed with elegant proofs, historical explanations, and enlightening exercises, all presented at just the right level for a first course in set theory. Moschovakis focuses strongly on the Zermelo axioms, and shows clearly that much if not all of classical mathematics needs nothing more. Indeed, he says, “all the objects studied in classical algebra, analysis, functional analysis, topology, probability, differential equations, etc. can be found in [the least Zermelo universe] $\cal Z$” (p. 179). The analysis of this universe $\cal Z$ and the other set-theoretic universes like it at the book’s conclusion has the metamathematical flavor of the forcing arguments one might find in a more advanced text, and ultimately spurs one deeper into set theory.
The Notes begin, pre-axiomatically, with functions and equinumerosity, proving, for example, the uncountability of $\mathbb{R}$ and the Schröder-Bernstein Theorem. In a dramatic fashion, Moschovakis then slides smoothly into the General Comprehension Principle, citing its strong intuitive appeal, and then BOOM! the Russell paradox appears. With it, the need for an axiomatic approach is made plain. Introducing the basic Zermelo axioms of Extensionality, Empty-set, Pairing, Separation, Power set, Union, and a version of Infinity (but not yet the axioms of Choice, Foundation, or Replacement), he proceeds to found the familiar set theory on them.
Following a philosophy of faithful representation, Moschovakis holds, for example, that while functions may not actually be sets of ordered pairs, mathematics can be developed as if they were. A lively historical approach, including periodic quotations from Cantor, brings out one’s natural curiosity, and leads to the Cardinal Assignment Problem, the problem of finding a sensible meaning for the cardinality $|A|$ of any set $A$. Among the excellent exercises are several concerning Dedekind-finite sets.
After an axiomatic treatment of the natural numbers, with special attention paid to the Recursion Theorem (three different forms) and the cardinal arithmetic of the continuum (but no definition yet of $|A|$), Moschovakis emphasizes fixed point theorems, proving stronger and better recursion theorems. Wellorderings are treated in chapter seven, with transfinite arithmetic and recursion, but, lacking the Replacement axiom, without ordinals. After this the axiom of Choice arrives with its equivalents and consequences, but without a solution to the cardinal assignment problem. Chapter ten, on Baire space, is an excellent introduction to descriptive set theory. The axiom of Replacement finally appears in chapter eleven and is used to analyze the least Zermelo set-theoretic universe. Replacement leads naturally in the very last chapter to the familiar von Neumann ordinals, defined as the image of a wellorder under a von Neumann surjection (like a Mostowski collapse), and with them come the $\aleph_\alpha$, $\beth_\alpha$ and $V_\alpha$ hierarchies. Two well-written appendices, one, a careful construction of $\mathbb{R}$, the other, a brief flight into the meta-mathematical territory of models of set theory and the anti-foundation axiom, conclude the book.
The text is engaging, lively, and sophisticated; yet, I would like to point out some minor matters and make one serious criticism. The minor errors which mar the text include a mis-statement of the Generalized Continuum Hypothesis, making it trivially true, and an incorrect definition of continuity in 6.22, making some of the subsequent theorems false. Since there are also some editing failures and typographical errors, an errata sheet would be worthwhile. Moreover, the index could be improved; I could find, for example, no reference for $N^*$ and the entry for Cantor Set refers to only one of the two independent definitions. It is also curious that when proving the uncountability of $\mathbb{R}$, Moschovakis does not give the proof that many would find to be the easiest for undergraduates to grasp: direct diagonalization against decimal expansions. Rather, he diagonalizes to deduce the uncountability of $2^{\mathbb{N}}$ and then launches into a construction of the Cantor set, obtained by omitting middle thirds; then, appealing to the completeness property, he injects $2^{\mathbb{N}}$ into it and finishes the argument.
My one serious objection to the text is that while Moschovakis shows impressively that much mathematics can be done with the relatively weak Zermelo axioms, his decision to postpone the Replacement axiom until the end of the book has the consequence that students are deprived of ordinals exactly when ordinals would help them the most: when using well-orders, cardinal arithmetic, and transfinite recursion. Without ordinals, transfinite recursion is encumbered with the notation, such as $\mathop{\rm seg}_{\langle U,\leq_U\rangle}(x)$, which arises when one must carry an arbitrary well-order $\langle U,\leq_U\rangle$ through every proof. And he is forced to be satisfied with weak solutions to the cardinal assignment problem, in which $|A|=_{\rm def}A$ is, tacitly, the best option. Additionally, the late arrival of Replacement also makes students unduly suspicious of it.
In summary, Moschovakis’ view that all of classical mathematics takes place in $\cal Z$ should be tempered by his observation (p. 239) that neither HF nor indeed even $\omega$ exist in $\cal Z$. In this sense, $\cal Z$ is a small town. And so while he says “one can live without knowing the ordinals, but not as well” (p. 189), I wish that they had come much earlier in the book. Otherwise, the book is a gem, densely packed with fantastic problems and clear, elegant proofs. |
First, I think your definition of $H_{n}$ does not agree with Newman's definition. Newman says the following: "Let $H_{n} \subset G_{n}$ be the set of functions of $G_{n}$ with non-negative valence at all parabolic points of $Q_{n}$ other than $\tau = i\infty$." Here $G_{n}$ is the set of modular functions that are expressible in terms of the Dedekind eta-function (which I will refer to as eta-quotients). So Newman is asking if the set of level $n$ eta-quotients of weight $0$ span the space of modular functions holomorphic everywhere except at $i\infty$.
If $n$ is squarefree, any weight zero modular function that is non-vanishing on the upper half plane is a constant multiple of an eta quotient (see Winfried Kohnen's paper). However, this is not true when $n$ is not squarefree. The reason is that any weight zero eta-quotient has rational Fourier coefficients, and hence corresponds to an element of $\mathbb{Q}(X_{0}(n))$. However, when $n$ is not squarefree, the cusps of $X_{0}(n)$ need not be rational. Given any two cusps $p_{1}$ and $p_{2}$, the divisor class $p_{1} - p_{2}$ is torsion in $J_{0}(n)$ (by a Theorem of Drinfeld and Manin) and as a consequence, there is a modular function all of whose zeroes are at $p_{1}$ and all of whose poles are at $p_{2}$. In general, this modular function will not have rational Fourier coefficients, and this modular function would be included in $H_{n}$ (via your definition), but not via Newman's definition.
I've written a paper with John Webb (on arXiv here) where we study some questions related to Newman's conjecture and do some more computations. We show that if $n$ is composite, the span of $H_{n}$ has finite codimension in the space of all modular functions holomorphic everywhere except infinity. Also, Newman's conjecture is true for all composite $n \leq 300$ with the possible exceptions of $n = 121$ and $n = 209$.
However, it seems likely that $n = 121$ may be a genuine exception to his conjecture. The form $f(z) = \frac{\eta(121z)^{22}}{\eta(11z)^{2}}$ has weight $10$ and has a zero of order $110$ at $i \infty$ and is nonzero everywhere else. As a consequence, if $g(z)$ is a modular function holomorphic everywhere except at infinity, then $f(z)^{r} g(z)$ is a holomorphic modular form of weight $10r$ provided $110r$ is $\geq$ the order of pole of $g(z)$ at infinity. For $2 \leq r \leq 8$, the subspace of $M_{10r}(\Gamma_{0}(121))$ generated by eta quotients has codimension $90$ - this suggests that $H_{121}$ may have codimension $90$ in the space of all modular functions with poles only at $i\infty$. |
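The weight and order-of-vanishing claims for this form can be checked mechanically from the standard $\eta$-quotient bookkeeping (my sketch; the function name is mine):

```python
from fractions import Fraction

def eta_quotient_data(exponents):
    """For f(z) = prod_d eta(d*z)^{r_d}, given as {d: r_d}:
    the weight is (1/2) * sum_d r_d, and the order of vanishing at
    i*infinity is (1/24) * sum_d d * r_d, read off from the
    q-expansion eta(d*z) = q^{d/24} * prod_n (1 - q^{d*n})."""
    weight = Fraction(sum(exponents.values()), 2)
    order_inf = Fraction(sum(d * r for d, r in exponents.items()), 24)
    return weight, order_inf

# the form in the text: eta(121 z)^22 / eta(11 z)^2 on Gamma_0(121)
weight, order_inf = eta_quotient_data({121: 22, 11: -2})
print(weight, order_inf)  # 10 110
```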
The Dimension of a Direct Sum of Subspaces
Recall from The Dimension of a Sum of Subspaces page that if $V$ is a finite-dimensional vector space and $U_1$ and $U_2$ are subspaces of $V$, then we have that:

(1)
\begin{align} \mathrm{dim} (U_1 + U_2) = \mathrm{dim} (U_1) + \mathrm{dim} (U_2) - \mathrm{dim} (U_1 \cap U_2) \end{align}
Now suppose that $U_1$, $U_2$, …, $U_m$ are all subspaces to the finite-dimensional vector space $V$ and such that $V = U_1 \oplus U_2 \oplus ... \oplus U_m$. The following theorem gives us a formula for the dimension of $V$ in terms of the subspaces $U_1$, $U_2$, …, $U_m$.
Theorem 1: Let $V$ be a finite-dimensional vector space such that $U_1$, $U_2$, …, $U_m$ are subspaces of $V$ and $V = U_1 \oplus U_2 \oplus ... \oplus U_m$. Then $\mathrm{dim} (V) = \mathrm{dim} (U_1) + \mathrm{dim} (U_2) + ... + \mathrm{dim} (U_m)$. Proof: Since $V$ is finite-dimensional, any subspace of $V$ is also finite-dimensional. Let $B_1$, $B_2$, …, $B_m$ be bases of $U_1$, $U_2$, …, $U_m$ respectively, and let $B = B_1 \cup B_2 \cup ... \cup B_m$. We note that every vector $v \in V$ can be written as $v = u_1 + u_2 + ... + u_m$ where $u_1 \in U_1$, $u_2 \in U_2$, …, $u_m \in U_m$. Therefore each $u_i$ for $i = 1, 2, ..., m$ is a linear combination of the vectors in $B_i$. Thus, $B$ is a spanning set of $V$. To show that $B$ is linearly independent, suppose that a linear combination of the vectors in $B$ equals zero, and group the terms coming from each basis $B_1$, $B_2$, …, $B_m$; each group is then a vector in the corresponding $U_i$. Since $V = U_1 \oplus U_2 \oplus ... \oplus U_m$, the representation of the zero vector is unique, so each group equals zero, and since $B_1$, $B_2$, …, $B_m$ are bases of these subspaces, the coefficients in each group all equal zero. Therefore $B$ is linearly independent. Since $B$ is linearly independent and spans $V$ we have that $B$ is a basis of $V$. Note that $B$ has $\sum_{i=1}^{m} \mathrm{dim} (U_i)$ elements, and so $\mathrm{dim} (V) = \sum_{i=1}^{m} \mathrm{dim} (U_i)$. $\blacksquare$
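A quick numerical illustration of Theorem 1 (my example), using the fact that the dimension of a span equals the rank of the matrix whose rows are the spanning vectors:

```python
import numpy as np

# U1 = span{(1,1,0), (0,1,1)} and U2 = span{(1,0,1)} in R^3;
# (1,0,1) is not in U1, so the sum U1 + U2 is direct and equals R^3.
B1 = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
B2 = np.array([[1.0, 0.0, 1.0]])

dim_U1 = np.linalg.matrix_rank(B1)
dim_U2 = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(np.vstack([B1, B2]))
print(dim_sum == dim_U1 + dim_U2)  # True: the union of the bases is a basis
```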
Definition: Solar Clock

Noun. Solar Clock: A clock that depends on visual sensations from the Sun. (6-11)

Logical Antecedents

Noun. Clock: $\mathbf{\Theta} \equiv \sf{\text{a set of sensations used to tell time.}}$ (6-1)

Nouns. Rotating Seeds: $\sf{\text{Objectified black and white sensations.}}$ (3-22)

Adjective. Whiteness: $\delta_{w} \equiv \begin{cases} +1 &\sf{\text{if a sensation is white }} \\ -1 &\sf{\text{if a sensation is black }} \end{cases}$ (2-2)

Nouns. Reference Sensations: $\sf{\text{Standards to judge and recognize all perceptions.}}$ (1-1)
Related WikiMechanics articles.
page revision: 10, last edited: 08 Aug 2017 15:29 |
Method 1: Integration by parts
$$ \int \sqrt{a^2-x^2\,} \,\mathrm{d}x
= \frac{x}{2}\sqrt{a^2-x^2\,} + \frac{a}{2} \int \frac{a}{\sqrt{a^2-x^2\,}\,}\mathrm{d}x \tag{1} $$
Pick $v = x$ (so that $v' = 1$) and $u = \sqrt{a^2-x^2}$, then solve with respect to $\int \sqrt{a^2-x^2\,} \,\mathrm{d}x$. The next step is to recall the derivative of the inverse sine function.$$\frac{\mathrm{d}}{\mathrm{d}x} \arcsin x = \frac{1}{\sqrt{1-x^2\,}\,} \tag{2}$$
Hence by using the chain rule, you can show that
$$\frac{\mathrm{d}}{\mathrm{d}x} \arcsin \left( \frac{x}{a} \right) = \frac{a}{\sqrt{a^2-x^2\,}\,} \tag{3}$$
Integrating $(3)$ w.r.t. $x$ and then inserting the result into $(1)$ completes the calculation. I will leave it to you to fill in the details. Just ask if any part is particularly confusing.
Method 2: Geometric considerations
Here is a proof without words [the original figure shows the region under the arc $y=\sqrt{a^2-x^2}$ split into a right triangle and a circular sector, colored to match the two terms below]. I was able to discover this through the help of Barry Cipra in the comments above.
$$ \int_0^x \sqrt{a^2-t^2\,} \,\mathrm{d}t
= \color{blue}{\frac{x}{2}\sqrt{a^2-x^2\,}} + \color{green}{\frac{a^2}{2} \arcsin \left(\frac{x}{a}\right)} $$
Be warned, spoilers ahead.
The total area of a circle is $\pi a^2$. The area of the sector can be obtained by multiplying the circle's area by the ratio of the angle and $2 \pi$ (because the area of the sector is proportional to the angle, and $2 \pi$ is the angle for the whole circle, in radians): $$ A = \pi a^2 \cdot \frac{\theta}{2 \pi} = \frac{a^2}{2} \theta $$ The rest follows since $\sin \theta = x/a$, where $a$ of course is the hypotenuse since it is the radius of the circle $x^2 + y^2 = a^2$. The area of the triangle should be straightforward to figure out.
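To double-check the identity numerically (a sketch I'm adding, not part of the original argument), compare the closed form against a midpoint Riemann sum of the integrand:

```python
import math

def closed_form(x, a):
    # x/2 * sqrt(a^2 - x^2) + a^2/2 * arcsin(x/a)
    return x / 2 * math.sqrt(a * a - x * x) + a * a / 2 * math.asin(x / a)

def midpoint_integral(x, a, n=100_000):
    # midpoint Riemann sum of sqrt(a^2 - t^2) over [0, x]
    h = x / n
    return h * sum(math.sqrt(a * a - (h * (i + 0.5)) ** 2) for i in range(n))

a, x = 2.0, 1.3
print(abs(closed_form(x, a) - midpoint_integral(x, a)))  # prints a tiny number
```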
Consider the following two ways of getting the zeroth space in the $K$-theory spectrum $BU \times \mathbb{Z}$:
1) Take the groupoid of finite dimensional complex inner product spaces with isometries as morphisms and apply Quillen $S^{-1}S$-construction to it. This amounts to forming a new category, whose objects are pairs $(V_+, V_-)$. A morphism $(V_+, V_-) \to (W_+,W_-)$ is an equivalence class of triples $(A, f_+, f_-)$, where $A$ is another finite dimensional inner product space and $f_{\pm} \colon V_{\pm} \oplus A \to W_{\pm}$ is an isometric isomorphism (i.e. a morphism in the former category). The equivalence relation identifies isomorphic objects $A$ and $B$ and the corresponding maps $f_{\pm}^A$ and $f_{\pm}^B$. All this can be found in a paper by Daniel Grayson and is also sketched in section 7 of this paper. Take the nerve of this new category to get the first model.
2) According to Segal we could also take the category of finite length chain complexes of finite dimensional vector spaces together with chain homotopy equivalences as morphisms and take the nerve of that to get $BU \times \mathbb{Z}$. (This is part of the $\Gamma$-space construction of $BU_{\otimes}$.)
How are the two models related?
I tried to construct a functor from the category in 2) to the one in 1) that has a chance of being an equivalence, but have failed so far. Taking the homology of the chain complexes in 2) yields a functor that ends up in 1), but it only "sees" the morphisms where $A = 0$, since chain homotopy equivalences always induce isomorphisms on homology. Nevertheless: is this the right thing to consider?
So You Think You Can Statistics: Overlapping Confidence Intervals, Statistical Significance, and Intuition
Attention Conservation Notice: I begin a new series on the use of common sense in statistical reasoning, and where it can go wrong. If you care enough to read this, you probably already know it. And if you don’t already know it, you probably don’t care to read this. Also, I’m cribbing fairly heavily from the Wikipedia article on the t-test, so I’ve almost certainly introduced some errors into the formulas, and you might as well go there first. Also also: others have already published a paper and written a Masters thesis about this.
Suppose you have two independent samples, \(X_{1}, \ldots, X_{n}\) and \(Y_{1}, \ldots, Y_{m}\). For example, these might be the outcomes in a control group (\(X\)) and a treatment group (\(Y\)), or a placebo group and a treatment group, etc. An obvious summary statistic for either sample, especially if you’re interested in mean differences, is the sample mean of each group, \(\bar{X}_{n}\) and \(\bar{Y}_{m}\). It is then natural to compare the two and ask: can we infer a difference in the averages of the populations from the difference in the sample averages?
If a researcher is clever enough to use confidence intervals rather than P-values, they may begin by constructing confidence intervals for \(\mu_{X}\) and \(\mu_{Y}\), the (hypothetical) population means of the two samples. For reasonably sized samples that are reasonably unimodal and symmetric, a reasonable confidence interval is based on the \(T\)-statistic. Everyone learns in their first statistics course that the \(1 - \alpha\) confidence intervals for the population means under the model assumptions of the \(T\)-statistic are
\[I_{\mu_{X}} = \left[\bar{X}_{n} - t_{n - 1, \alpha/2} \frac{s_{X}}{\sqrt{n}}, \bar{X}_{n} + t_{n-1, \alpha/2} \frac{s_{X}}{\sqrt{n}}\right]\]
and \[I_{\mu_{Y}} = \left[\bar{Y} _{m}- t_{m - 1, \alpha/2} \frac{s_{Y}}{\sqrt{m}}, \bar{Y}_{m} + t_{m - 1, \alpha/2} \frac{s_{Y}}{\sqrt{m}}\right],\] respectively, where \(s_{X}\) and \(s_{Y}\) are the sample standard deviations. These are the usual ‘sample mean plus or minus a multiple of the standard error’ confidence intervals. It then seems natural to see if these two intervals overlap to determine whether the population means are different. This sort of heuristic, for example, is described here and here. Yet despite the naturalness of the procedure, it also happens to be incorrect[1].
To see this, consider the confidence interval for the difference in the means, which in analogy to the two confidence intervals above I will denote \(I_{\mu_{X} - \mu_{Y}}\). If we construct the confidence interval by inverting[2] Welch’s t-test, then our \(1 - \alpha\) confidence interval will be
\[I_{\mu_{X} - \mu_{Y}} = \left[ (\bar{X}_{n} - \bar{Y}_{m}) - t_{\nu, \alpha/2} \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}}, \\ (\bar{X}_{n} - \bar{Y}_{m}) + t_{\nu, \alpha/2} \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}} \right]\]
where the degrees of freedom \(\nu\) of the \(T\)-distribution is approximated by
\[\nu \approx \frac{\left(\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}\right)^{2}}{\frac{s_{X}^{4}}{n^{2}(n - 1)} + \frac{s_{Y}^{4}}{m^{2} (m - 1)}}.\]
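The Welch–Satterthwaite approximation above translates directly into code (a small helper I'm adding for concreteness):

```python
def welch_dof(s_x, n, s_y, m):
    """Welch-Satterthwaite approximation to the degrees of freedom
    nu for the two-sample confidence interval."""
    vx = s_x ** 2 / n   # estimated variance of the X sample mean
    vy = s_y ** 2 / m   # estimated variance of the Y sample mean
    return (vx + vy) ** 2 / (vx ** 2 / (n - 1) + vy ** 2 / (m - 1))
```

In the balanced, equal-variance case this recovers the pooled-test degrees of freedom \(n + m - 2\).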
This is a[3] reasonable confidence interval, where it would be a very good confidence interval if you’re willing to assume that the two populations are exactly normal but have unknown and possibly different standard deviations. This is again a ‘sample mean plus or minus a multiple of a standard error’-style confidence interval. How does it relate to the ‘overlapping confidence intervals’ heuristic?
Well, if we’re
only interested in using our confidence intervals to perform a hypothesis test for whether we can reject (using a test of size \(\alpha\)) that the population means are not equal, then our heuristic says that the event \(I_{\mu_{X}} \cap I_{\mu_{Y}} = \varnothing\) (i.e. the individual confidence intervals do not overlap) should be equivalent to \(0 \not \in I_{\mu_{X} - \mu_{Y}}\) (i.e. the confidence interval for the difference does not contain \(0\)).
Well, when does \(I_{\mu_{X}} \cap I_{\mu_{Y}} = \varnothing\)? Without loss of generality, assume that \(\bar{X}_{n} > \bar{Y}_{m}\). In that case, the confidence intervals do not overlap precisely when the lower endpoint of \(I_{\mu_{X}}\) is greater than the upper endpoint of \(I_{\mu_{Y}}\). That is,
\[\bar{X}_{n} - t_{n-1, \alpha/2} \frac{s_{X}}{\sqrt{n}} > \bar{Y}_{m} + t_{m - 1, \alpha/2} \frac{s_{Y}}{\sqrt{m}},\]
and rearranging,
\[\bar{X}_{n} - \bar{Y}_{m} > t_{n-1, \alpha/2} \frac{s_{X}}{\sqrt{n}} + t_{m - 1, \alpha/2} \frac{s_{Y}}{\sqrt{m}}. \hspace{1 cm} \mathbf{(*)}\]
And when isn’t 0 in \(I_{\mu_{X} - \mu_{Y}}\)? Precisely when the lower endpoint of \(I_{\mu_{X} - \mu_{Y}}\) is greater than 0, so
\[\bar{X}_{n} - \bar{Y}_{m} - t_{\nu, \alpha/2} \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}} > 0\]
Again, rearranging
\[\bar{X}_{n} - \bar{Y}_{m} > t_{\nu, \alpha/2} \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}}. \hspace{1 cm} \mathbf{(**)}\]
So for the heuristic to ‘work,’ we would want \(\mathbf{(*)}\) to imply \(\mathbf{(**)}\). We can see a few reasons why this implication need not hold: the \(t\)-quantiles do not match and therefore cannot be factored out, and even if they did, \(\frac{s_{X}}{\sqrt{n}} + \frac{s_{Y}}{\sqrt{m}} \neq \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}}\). We do have that \(\frac{s_{X}}{\sqrt{n}} + \frac{s_{Y}}{\sqrt{m}} \geq \sqrt{\frac{s_{X}^{2}}{n} + \frac{s_{Y}^{2}}{m}}\) by the triangle inequality. So if we could assume that all of the \(t\)-quantiles were equivalent, we could use the heuristic. But we can’t. Things get even more complicated if we use a confidence interval for the difference in the population means based on Student’s \(t\)-test rather than Welch’s.
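To make the failure concrete, here is a small numerical illustration with made-up summary statistics, using the same large-sample standard normal quantile for both thresholds. Even with matching quantiles, the observed difference can clear the two-sample rejection threshold while the two one-sample intervals still overlap:

```python
import math

z = 1.959964  # approx. 97.5% standard normal quantile

# hypothetical summary statistics (not from any real data set)
xbar, s_x, n = 1.00, 1.0, 100
ybar, s_y, m = 0.65, 1.0, 100

se_sum = s_x / math.sqrt(n) + s_y / math.sqrt(m)   # 0.20
se_diff = math.sqrt(s_x ** 2 / n + s_y ** 2 / m)   # ~0.1414

overlap_gap = z * se_sum   # ~0.392: intervals overlap unless diff exceeds this
reject_gap = z * se_diff   # ~0.277: the two-sample test rejects past this
diff = xbar - ybar         # 0.35 lands strictly between the two thresholds
print(reject_gap < diff < overlap_gap)  # True
```

So the two-sample test rejects while the one-sample intervals overlap, exactly the disagreement described above.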
As far as I can tell, the triangle inequality argument is the best justification for the non-overlapping confidence intervals heuristic. For example, that is the argument made here and here. But this is based on confidence intervals from a ‘\(Z\)-test,’ where the quantiles come from a standard normal distribution. Such confidence intervals can be justified asymptotically, since we know that a sample mean standardized by a sample standard deviation will converge (in distribution) to a standard normal by a combination of the Central Limit Theorem and Slutsky’s theorem[4]. Thus, this intuitive approach can give a nearly right answer for large sample sizes in terms of whether we can reject based on overlap. However, you can still have the case where the one sample confidence intervals do overlap and yet the two sample test says to reject. See more here.
My introduction to the overlapping confidence interval heuristic originally arose in the context of this journal article on contrasting network metrics (mean shortest path length and mean local clustering coefficient) between a control group and an Alzheimer’s group. The key figure is here, and shows a statistically significant separation between the two groups in the Mean Shortest Path Length (\(L_{p}\) in their notation, right most figure) at certain values of a thresholded connectivity network. Though, now looking back at the figure caption, I realize that their error bars are not confidence intervals, but rather standard errors[5]. So, we can think of these as 84% confidence intervals for a large enough sample. They will be about half as long as a 95% confidence interval. But even doubling them, we can see a few places where the confidence intervals do not overlap and yet the two sample \(t\)-test result is not significant.
Left as an exercise for the reader: A coworker asked me, “If the individual confidence intervals don’t tell you whether the difference is (statistically) significant or not, then why do we make all these plots with the two standard errors?” For example, these sorts of plots. Develop an answer that (a) isn’t insulting to non-statisticians and (b) maintains hope for the future of the use of statistics by non-statisticians.
By ‘incorrect,’ here I mean that we can find situations where the heuristic will give non-significance when the analogous two sample test will give significance, and
vice versa. To quote Cosma Shalizi, writing in a different context, “The conclusions people reach with such methods may be right and may be wrong, but you basically can’t tell which from their reports, because their methods are unreliable.” ↩
I plan to write a post on this soon, since a quick Google search doesn’t turn up any simple explanations of the procedure for inverting a hypothesis test to get a confidence interval (or
vice versa). Until then, see Berger and Casella’s Statistical Inference for more. ↩ ...because you can come up with any confidence interval you want for a parameter. But it should have certain nice properties, like attaining the nominal confidence level while also being as short as possible. For example, you could take the entire real line as your confidence interval and capture the parameter value 100% of the time. But that’s a mighty long confidence interval. ↩
We don’t get the convergence ‘for free’ from the Central Limit Theorem alone because we are standardizing with the sample, rather than the population, standard deviation. ↩
There is a scene in the second PhD Comics movie where an audience member asks a presenter, “Are your error bars standard deviations or standard errors?” The presenter doesn’t know the answer, and the audience is aghast. After working through this exercise, this joke is both more and less funny. ↩ |
Let's say that I have a bunch of independent samples, $X_1, X_2, \dots, X_n$ and that they all follow Exponential($\theta_i$) distributions. (So they all have pdf $f(x_i)=\theta_i\exp(-\theta_ix_i)$.) I don't know if all the $\theta_i$s are equal or not, so I will assume the worst and say they are not for generalization purposes. How do I find the maximum likelihood estimate of this?
Here's my work so far:
$L = L(\theta_1, \theta_2, \dots, \theta_n | x_1, x_2, \dots, x_n)=\prod \theta_i \exp(-\sum \theta_ix_i)$
$\ln(L)=\sum\ln(\theta_i) - \sum\theta_ix_i$
$d/d\theta_i=\sum\frac{1}{\theta_i} - \sum x_i = 0$
$\sum \theta_i = \sum \frac{1}{x_i}$
Here's where I'm stuck - how can I say anything about a single $\theta_i$? Is $\hat{\theta}_i=\frac{1}{x_i}$?? |
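A quick numeric sanity check of the guess at the end (a sketch, not a derivation): for a single observation $x$, the per-sample likelihood $L(\theta)=\theta e^{-\theta x}$ is indeed maximized at $\theta = 1/x$.

```python
import math

x = 2.0  # a single (made-up) observation
thetas = [i / 1000 for i in range(1, 5001)]           # grid on (0, 5]
likelihoods = [t * math.exp(-t * x) for t in thetas]  # L(theta) = theta * exp(-theta*x)
theta_hat = thetas[likelihoods.index(max(likelihoods))]
# theta_hat comes out as 0.5 = 1/x on this grid
```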
Since 1998 or earlier, there have been no doubts that the AdS/CFT correspondence provides us with a full non-perturbative definition of string theory on the AdS-like background, including all of (type IIB) stringy objects and interactions and subtleties that we have ever heard of. An obvious reason why the CFT can't be equivalent "just to supergravity" is that the pure supergravity is inconsistent as a quantum theory while the CFT is self-evidently consistent.
The basic relationship between the parameters on both sides of the duality is $$g_{\rm string} = g_{\rm YM}^2, \quad \frac{R^4}{\ell_{\rm string}^4} = g_{\rm YM}^2 N\equiv \lambda $$So at a fixed $N$, the weak coupling of the Yang-Mills side coincides with the weak string coupling in the type IIB string theory bulk.
When $N$ is allowed to scale to infinity as well, the 't Hooft coupling $\lambda\equiv g_{\rm YM}^2 N$ is what decides whether the loop diagrams are actually suppressed.
You see that when $\lambda$ is smaller (or much smaller) than one, then the Yang-Mills expansion is weakly coupled and the perturbative gauge-theory diagrams are guaranteed to approximate physics well (or very well). On the contrary, when $\lambda$ is greater (or much greater) than one, the AdS radius $R$ is greater (or much greater) than the string length which means that one may approximate the physics by string theory on a "mildly curved" background.
In this limit, when the curvature radius is (much) longer than the string length, it is always possible to approximate low-energy physics of string theory by supergravity. In string theory, the SUGRA approximation means to neglect the $\alpha'$ stringy corrections. In the gauge-theoretical language, it means to focus on the planar limit for large $\lambda$ and neglect $1/N$ nonplanar corrections.
However, it's been demonstrated that all the "beyond supergravity" states you expect to see in the type IIB background appear on both sides of the AdS/CFT correspondence, including arbitrarily excited strings – this is particularly clear in the BMN/pp-wave limit (see also 1,000+ followups) – as well as various wrapped D-branes and, what is critical for the usefulness of the whole AdS/CFT framework, evaporating quantum black holes. This post imported from StackExchange Physics at 2014-03-11 10:29 (UCT), posted by SE-user Luboš Motl |
A Robust Solver for Distributed Optimal Control for Stokes Flow. Dipl.-Ing. Markus Kollmann. March 20, 2012, 3:30 p.m., S2 059.
In this talk we consider the following optimal control problem:
Minimize $\quad J(u,f) = \frac{1}{2}||u-u_d||_{L^2(\Omega)}^2 + \frac{\alpha}{2}||f||_{L^2(\Omega)}^2 \quad$ (1)
subject to
$-\Delta u + \nabla p = f$ in $\Omega$,
$\operatorname{div}\, u = 0$ in $\Omega$,
$u = 0$ on $\Gamma$,
$+$ inequality constraints.
Here $\Omega$ is an open and bounded domain in $\mathbb{R}^d$ ($d \in \{1,2,3\}$), $\Gamma$ denotes the boundary and $\alpha > 0$ is a cost parameter. We consider two types of inequality constraints:
- inequality constraints on the control $f$,
- inequality constraints on the state $u$.
In both cases, the first order system of necessary and sufficient optimality conditions of (1) is nonlinear. A semi-smooth Newton iteration is applied in order to linearize the system. In every Newton step a linear saddle point system has to be solved (after discretization). For these linear systems solvers are discussed. Numerical examples are given which illustrate the theoretical results. |
Let's consider a rigid body B and an arbitrary reference point P ${\it fixed}$ with respect to the body (P can be a particle of the body or it can be a "mathematical point" outside the body that moves rigidly with B, it doesn't matter). As B moves, the point P has a velocity ${\bf v}_P$ that, of course, can change over time. The most general motion of B is the composition of a rigid translation with velocity ${\bf v}_P$ and a rotation with angular velocity $\vec \omega$ around an axis that passes through the point P (This result is known as Chasles' theorem). Accordingly, the velocity of any point A of the body is \begin{equation}{\bf v}_A = {\bf v}_P + {\vec \omega}\wedge \left( {\bf r}_A - {\bf r}_P\right),\;\;\;\;\;\;\;\;\;\;\;\; (1)\end{equation} where ${\bf r}_A$, ${\bf r}_P$ are the position vectors of the points A and P with respect to a given reference frame $S.$ This is a kinematic result; it doesn't matter if the reference frame is non-inertial. When we compute the kinetic energy with respect to $S$, $$T=\frac{1}{2}\sum_{i\in B}m_i {\bf v}_i^2,$$ using Eq. (1), $T$ turns out to be the sum of three terms: one involving only the rigid translation (${\bf v}_P$), another involving only the rotation ($\vec\omega$), and a third one that mixes translation and rotation. In the special case where the center of mass is taken as the point P, the last term vanishes and, then, the kinetic energy is the sum of the translational kinetic energy (the body moving as a whole with the center of mass velocity) and the rotational kinetic energy (the body rotating around its center of mass):$$T= \frac{1}{2}M{\bf v}_{CM}^2 + \frac{1}{2}\sum_{i \in B} m_i \left(\vec \omega \wedge ({\bf r}_i-{\bf r}_{CM})\right)^2.\;\;\;\;(2)$$In the case of a rotation around a fixed direction (like in your problem), the last term is rewritten as $$T_{rot} = \frac{1}{2}I^{cm}\omega^2,$$where $I^{cm}$ is the body moment of inertia with respect to the rotation axis that passes through the center of mass of the body.
If one point of the body is fixed with respect to $S,$ all the kinetic energy has a rotational character (see Eq. (1), with ${\bf v}_P={\bf 0}$), $$T = T_{rot} = \frac{1}{2}I^{P}\omega^2,\;\;\;\;(3)$$where now $I^{P}$ is the body moment of inertia with respect to an axis that pass through the point P (again we are considering rotation around a fixed direction).
Let's go to your problem. With respect to the rod, we can take the pivot as the reference point $P$ (this is convenient, as the pivot is fixed with respect to a laboratory frame). So, the kinetic energy of the rod is (see Eq. (3)), $$T_{rod}= \frac{1}{2}I_{rod}^P \dot{\theta}^2,$$ where, for a uniform rod $I^P_{rod} = \frac{1}{3}M_{rod} L_{rod}^2,$ and $\theta$ is the angle between the rod and the vertical direction (You can recalculate the rod kinetic energy taking its center of mass as the reference point $P$!).
With respect to the square block, it is convenient to take its center of mass as the reference $P$ point. In this case, the velocity of the center of mass is $|{\bf v}^{cm}_{block}| = L_{rod} |\dot\theta|$, while its angular velocity will depend on whether this block is allowed to rotate independently of the rod, or not. In the first case, if we call $\phi$ the angle that determines the block orientation, the total block kinetic energy is (see Eq. (2)), $$T_{block} = \frac{1}{2}M_{block} L_{rod}^2 \dot\theta^2 + \frac{1}{2}I^{cm}_{block}\dot\phi^2.$$ If the square block is rigidly tied to the rod (that is, it cannot rotate independently), its angular velocity will be the same as the one for the rod ($\dot \theta$). So, in the last equation you should replace $\dot\phi$ by $\dot\theta$.
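Both decompositions can be checked numerically. The sketch below (with my own toy numbers) discretizes a uniform rod pivoted at one end and verifies that the particle sum for $T$ agrees with Eq. (3) taken about the pivot and with Eq. (2) taken about the center of mass.

```python
import numpy as np

M, L, omega = 2.0, 1.5, 3.0        # rod mass, length, angular speed (toy values)
N = 100_000                        # number of point masses in the discretization

r = (np.arange(N) + 0.5) * L / N   # particle distances from the pivot (midpoints)
m = np.full(N, M / N)              # equal masses

# Direct particle sum: T = 1/2 sum_i m_i v_i^2 with v_i = omega * r_i
T_sum = 0.5 * np.sum(m * (omega * r) ** 2)

# Eq. (3) about the pivot: T = 1/2 I_P omega^2 with I_P = (1/3) M L^2
T_pivot = 0.5 * (M * L**2 / 3) * omega**2

# Eq. (2) about the center of mass: T = 1/2 M v_cm^2 + 1/2 I_cm omega^2,
# with v_cm = omega * L/2 and I_cm = (1/12) M L^2
T_cm = 0.5 * M * (omega * L / 2) ** 2 + 0.5 * (M * L**2 / 12) * omega**2
```

All three values coincide (up to the discretization error of the particle sum), illustrating that the choice of reference point P changes the split between translational and rotational terms but not the total kinetic energy.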
Don't forget to include the potential energy in the Lagrangian function. |
Difference between revisions of "Demand/Dynamic User Assignment"
Revision as of 05:54, 26 October 2017

Introduction
For a given set of vehicles with origin-destination relations (trips), the simulation must determine routes through the network (lists of edges) that are used to reach the destination from the origin edge. The simplest method to find these routes is to compute shortest or fastest routes through the network using a routing algorithm such as Dijkstra or A*. These algorithms require assumptions regarding the travel time for each network edge, which is commonly not known before running the simulation, due to the fact that travel times depend on the number of vehicles in the network.
The problem of determining suitable routes that take into account travel times in a traffic-loaded network is called user assignment. SUMO provides different tools to solve this problem; they are described below.

Iterative Assignment (Dynamic User Equilibrium)
The tool <SUMO_HOME>/tools/assign/duaIterate.py can be used to compute the (approximate) dynamic user equilibrium.

python duaIterate.py -n <network-file> -t <trip-file> -l <nr-of-iterations>

duaIterate.py supports many of the same options as SUMO. Any options not listed when calling duaIterate.py --help can be passed to SUMO by adding sumo--long-option-name arg after the regular options (i.e. sumo--step-length 0.5).

This script tries to calculate a user equilibrium, that is, it tries to find a route for each vehicle (each trip from the trip-file above) such that each vehicle cannot reduce its travel cost (usually the travel time) by using a different route. It does so iteratively (hence the name) by:
- calling DUAROUTER to route the vehicles in a network with the last known edge costs (starting with empty-network travel times)
- calling SUMO to simulate the "real" travel times that result from the calculated routes. The resulting edge costs are used in the next routing step.
The number of iterations may be set to a fixed number or determined dynamically depending on the used options. In order to ensure convergence, there are different methods employed to calculate the route choice probability from the route cost (so the vehicle does not always choose the "cheapest" route). In general, new routes will be added by the router to the route set of each vehicle in each iteration (at least if none of the present routes is the "cheapest") and may be chosen according to the route choice mechanisms described below.
Between successive calls of DUAROUTER, the .rou.alt.xml format is used to record not only the current best route but also previously computed alternative routes. These routes are collected within a route distribution and used when deciding the actual route to drive in the next simulation step. This isn't always the one with the currently lowest cost but is rather sampled from the distribution of alternative routes by a configurable algorithm described below.
The option --max-convergence-deviation may be used to detect convergence and abort iterations automatically. Otherwise, a fixed number of iterations is used. Once the script finishes, any of the resulting .rou.xml files may be used for simulation, but the last one(s) should be the best.
The two methods which are implemented are called Gawron and Logit (reference needed!!!) in the following. The input for each of the methods is a weight or cost function $w$ on the edges of the net, coming from the simulation or default costs (in the first step or for edges which have not been traveled yet), and a set of routes $R$ where each route $r$ has an old cost $c_r$ and an old probability $p_r$ (from the last iteration) and needs a new cost $c_r'$ and a new probability $p_r'$.
Logit
The Logit mechanism applies a fixed formula to each route to calculate the new probability. It ignores old costs and old probabilities and takes the route cost directly as the sum of the edge costs from the last simulation.
The probabilities are calculated from an exponential function with parameter $\theta$ scaled by the sum over all route values:

$p_r' = \frac{\exp(\theta c_r')}{\sum_{s\in R}\exp(\theta c_s')}$
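As a sketch (not SUMO's actual implementation), the Logit route-choice step is just a softmax over route costs. The sign convention and default value of the parameter below are my own illustrative assumptions; with $\theta < 0$, cheaper routes get higher probability.

```python
import math

def logit_probabilities(route_costs, theta=-1.0):
    """p_r' = exp(theta * c_r') / sum_s exp(theta * c_s').

    theta < 0 makes cheaper (lower-cost) routes more likely; the exact
    sign convention and value used here are illustrative assumptions.
    """
    weights = [math.exp(theta * c) for c in route_costs]
    total = sum(weights)
    return [w / total for w in weights]

probs = logit_probabilities([10.0, 12.0])
# probs[0] > probs[1]: the cheaper route is chosen more often
```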
Gawron

oneShot-assignment
An alternative to the iterative user assignment above is incremental assignment. This happens automatically when using <trip> input directly in SUMO instead of <vehicle>s with pre-defined routes. In this case each vehicle will compute a fastest-path computation at the time of departure which prevents all vehicles from driving blindly into the same jam and works pretty well empirically (for larger scenarios).
The routes for this incremental assignment are computed using the Automatic Routing / Routing Device mechanism. Since this device allows for various configuration options, the script Tools/Assign#one-shot.py may be used to automatically try different parameter settings.
The MAROUTER application computes a classic macroscopic assignment. It employs mathematical functions (resistive functions) that approximate travel time increases when increasing flow. This allows computing an iterative assignment without the need for time-consuming microscopic simulation. |
N&O seminar: Kim-Manuel Klein (Kiel University), 24-09-2019 from 11:00 to 12:00, Room L016 at CWI, Science Park 123 in Amsterdam. Contact: Daniel Dadush.
Everyone is welcome to attend the N&O seminar by Kim-Manuel Klein with the title 'About the Complexity of 2-Stage Stochastic IPs'.
Abstract:
We consider so-called 2-stage stochastic integer programs (IPs) and their generalized form of multi-stage stochastic IPs. A 2-stage stochastic IP is an integer program of the form $\max \{ c^T x \mid \mathcal{A} x = b,\ l \leq x \leq u,\ x \in \mathbb{Z}^{s + nt} \}$, where the constraint matrix $\mathcal{A} \in \mathbb{Z}^{rn \times (s + nt)}$ consists roughly of $n$ repetitions of a block matrix $A \in \mathbb{Z}^{r \times s}$ on the vertical line and $n$ repetitions of a matrix $B \in \mathbb{Z}^{r \times t}$ on the diagonal. Hence it is roughly the transpose of the constraint matrix of an $n$-fold IP. In this talk we present new algorithmic results on how to solve this type of IP. The algorithm is based on the Graver augmentation framework, where our main contribution is to give an explicit doubly exponential bound on the size of the augmenting steps. The previous bound for the size of the augmenting steps relied on non-constructive finiteness arguments from commutative algebra, and therefore only an implicit bound was known that depends on the parameters $r$, $s$, $t$ and $\Delta$, where $\Delta$ is the largest entry of the constraint matrix. The new improved bound is obtained by a novel theorem which argues about the intersection of paths in a vector space. |
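To visualize the block structure described in the abstract, here is a small NumPy sketch (my own illustration, not from the talk) that assembles the 2-stage constraint matrix: $n$ copies of $A$ stacked in the first block column and $n$ copies of $B$ on the block diagonal.

```python
import numpy as np

def two_stage_constraint_matrix(A, B, n):
    """Assemble the (n*r) x (s + n*t) matrix with A repeated vertically
    in the first block column and B repeated along the diagonal."""
    r, s = A.shape
    rB, t = B.shape
    assert r == rB, "A and B must have the same number of rows"
    M = np.zeros((n * r, s + n * t), dtype=A.dtype)
    for i in range(n):
        M[i * r:(i + 1) * r, :s] = A                           # block column of A's
        M[i * r:(i + 1) * r, s + i * t:s + (i + 1) * t] = B    # diagonal B's
    return M

A = np.array([[1, 2]])   # r=1, s=2
B = np.array([[3]])      # r=1, t=1
M = two_stage_constraint_matrix(A, B, 3)
# M == [[1, 2, 3, 0, 0],
#       [1, 2, 0, 3, 0],
#       [1, 2, 0, 0, 3]]
```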
Research talks; Number Theory
For a non-principal Dirichlet character $\chi$ modulo $q$, the classical Pólya-Vinogradov inequality asserts that
$M(\chi) := \max_x \big| \sum_{n \leq x} \chi(n) \big| = O(\sqrt{q} \log q)$. This was improved to $\sqrt{q} \log \log q$ by Montgomery and Vaughan, assuming the Generalized Riemann Hypothesis (GRH). For quadratic characters, this is known to be optimal, owing to an unconditional omega result due to Paley. In this talk, we shall present recent results on higher order character sums. In the first part, we discuss even order characters, in which case we obtain optimal omega results for $M(\chi)$, extending and refining Paley's construction. The second part, joint with Alexander Mangerel, will be devoted to the more interesting case of odd order characters, where we build on previous works of Granville and Soundararajan and of Goldmakher to provide further improvements of the Pólya-Vinogradov and Montgomery-Vaughan bounds in this case. In particular, assuming GRH, we are able to determine the order of magnitude of the maximum of $M(\chi)$, when $\chi$ has odd order $g \geq 3$ and conductor $q$, up to a power of $\log_4 q$ (where $\log_4$ is the fourth iterated logarithm).
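For concreteness, a short Python check (my own illustration) of the Pólya-Vinogradov bound for a quadratic character: $\chi$ is the Legendre symbol mod a prime $q$, computed via Euler's criterion.

```python
import math

def legendre_chi(q):
    """Quadratic character mod an odd prime q, via Euler's criterion:
    chi(n) = n^((q-1)/2) mod q, mapped to {-1, 0, +1}."""
    def chi(n):
        n %= q
        if n == 0:
            return 0
        return 1 if pow(n, (q - 1) // 2, q) == 1 else -1
    return chi

def char_sum_max(q):
    """M(chi) = max over 1 <= x <= q of |sum_{n <= x} chi(n)|."""
    chi = legendre_chi(q)
    s, best = 0, 0
    for n in range(1, q + 1):
        s += chi(n)
        best = max(best, abs(s))
    return best

# e.g. for q = 11: M(chi) = 3, comfortably below sqrt(11)*log(11) ~ 7.95
```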
11L40 ; 11N37 ; 11N13 ; 11M06 |
Let $M=\{\frac{1}{n} : n\in\mathbb{N}\}$.
I am trying to find a simple injective and surjective function (a bijection) $[0,1]\to [0,1]\setminus M$.

Let us define $Y = [0,1]\setminus M$.
I can't understand how to handle questions like this. I understand that the set $Y$ has all the elements of $[0,1]$ except elements of the form $1,\frac{1}{2},\frac{1}{3},\dots,\frac{1}{n},\dots$, but, for example, if I want to send $1$ somewhere, what will the value $f(1)$ be if all other elements are already taken and the function should be injective (because $Y\subseteq [0,1]$)? |
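For orientation, one standard "shift along a sequence" construction, sketched below (the auxiliary sequence $a_k = \frac{2}{2k+1}$ is my own choice for illustration; any injective sequence in $Y$ works):

```latex
% a_k = 2/(2k+1) lies in (0,1) and is never of the form 1/m,
% so every a_k belongs to Y. Define
\[
f(x) =
\begin{cases}
a_{2n-1} & \text{if } x = \tfrac{1}{n} \in M,\\[2pt]
a_{2k}   & \text{if } x = a_k \text{ for some } k,\\[2pt]
x        & \text{otherwise.}
\end{cases}
\]
% M goes to the odd-indexed a's, the a's themselves go to the
% even-indexed a's, and everything else is fixed. This f maps
% [0,1] bijectively onto Y, and in particular f(1) = a_1 = 2/3.
```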
This post was first published on 05/14/16, and has since been migrated to Blogger.
Brown University requires S.c.B students to take a capstone course that "studies a current topic in depth to produce a culminating artifact such as a paper or software project".
For my capstone project, I developed a more efficient sampling approach for use in learning POMDP models with Deep Learning. It's still a work in progress, but the results are pretty promising.
Abstract:
Deep neural networks can be applied to model-based learning problems over continuous state-action spaces $S \times A$. By training a prediction network $\hat{f}_\tau : S \times A \to S$ on saved trajectory data, we can approximate the true transition function $f$ of the underlying Markov decision processes. $\hat{f}_\tau$ can then be used within optimal control and planning algorithms to ``predict the future''.
Robustness of $\hat{f}_\tau$ is crucial. If the robot (such as an autonomous vehicle) spends most of its exploration time in a small region of $S \times A$, then $\hat{f}_\tau$ may not be accurate in regions that the robot does not encounter often (such as collision trajectories). However, gathering enough training data to fully characterize $f$ over $S \times A$ is very time-consuming, and tends to result in many redundant samples.
In this work, I propose exploring $S \times A$ using an ``adversarial policy'' $\pi_\rho : S \to A$ that guides the robot into states and actions that maximize model loss. Policy parameters $\rho$ and model parameters $\tau$ are optimized in an alternating minimax game via stochastic gradient descent. Robot simulation experiments demonstrate that adversarial exploration policies improve model robustness with respect to the time the robot spends sampling the environment.
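The following toy sketch (entirely my own, far simpler than the POMDP setting in the write-up) illustrates the core idea: sampling where the current model's error is largest fits the model faster than uniform random sampling.

```python
import random

def f_true(a):
    # Made-up 1-D "dynamics": next state is 3 * action
    return 3.0 * a

def train(pick_action, steps=10, lr=0.5, seed=0):
    """Fit a model next_state ~ w * a by SGD;
    pick_action decides where in the action space [0, 1] to sample."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        a = pick_action(w, rng)
        w -= lr * (w * a - f_true(a)) * a  # gradient step on squared error
    return w

def uniform(w, rng):
    return rng.random()  # explore actions uniformly at random

def adversarial(w, rng):
    # Greedy "adversary": model error |w*a - 3a| = |w - 3| * |a| is
    # maximized at the boundary a = 1, so always sample there.
    return 1.0

err_uniform = abs(train(uniform) - 3.0)
err_adversarial = abs(train(adversarial) - 3.0)
# err_adversarial < err_uniform: high-error sampling converges faster
```

In the real problem the adversary is itself a learned policy updated by gradient ascent on the model loss rather than this closed-form greedy pick, but the incentive structure is the same.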
Links:
- PDF of the writeup
- Project research notebook in blog format
- Source code on Github
- E2C vanilla implementation
- BoxBot simulator |
How to show that all normally open covers form a base for the fine uniformity $\mu_F$? If $\mathcal{B}$ is the collection of all normally open covers, we first need to show that $\mathcal{B}$ is a subcollection of $\mu_F$. Then we have to show that every $\mathcal{U}$ in $\mu_F$ is refined by some cover from $\mathcal{B}$. How to go about the proof?
Following Willard, General Topology, 36.15 (which is what you seem to be doing as well):
Some definitions to recap: A sequence $(\mathcal{U}_n)_{n \ge 1}$ of covers of $X$ is said to be a normal sequence when, for all $n\ge 1$: $\mathcal{U}_{n+1}\prec^\ast \mathcal{U}_n$.
A cover $\mathcal{U}$ is said to be a normal cover, if there is a normal sequence $(\mathcal{U}_n)_{n \ge 1}$ as above with $\mathcal{U}_1 = \mathcal{U}$, so it can be star-refined as often we like.
Note that by the definition of covering uniformities, all covers in a covering uniformity are normal covers. Such covers need not be open and they often aren't.
A family of covers is called a normal family, if every member of the family is star-refined by some member of the family. The set of members of a normal sequence is a normal family. Any normal family of covers generates a unique smallest uniformity containing that family, and then this family is called a "subbase" for the generated uniformity.
An open cover $\mathcal{U}$ of $X$ is said to be "normally open" when there is a normal sequence $(\mathcal{U}_n)_{n \ge 1}$ of open covers with $\mathcal{U}_1 =\mathcal{U}$. Such a cover is clearly normal but in a special way, as it can be star-refined by open covers (instead of just covers).
We start in a uniformisable space $(X,\mu)$ with induced topology $\mathcal{T}_\mu$. Let $\mu_F$ be the corresponding "fine" uniformity, which is the largest (by inclusion) covering uniformity that induces $\mathcal{T}_\mu$. We constructed it here.
So let $\mathscr{B}$ be the collection of all normally open (in said topology) covers of $X$.
Now take some (fixed for now) normally open $\mathcal{U}$ from $\mathscr{B}$, and construct the promised normal sequence of open covers $(\mathcal{U}_n)_{n \ge 1}$ with $\mathcal{U}_1 = \mathcal{U}$. Then in this answer I showed that $\mu \cup \{\mathcal{U}_n \mid n \in \mathbb{N}\}$ is a normal family that induces a uniformity $\mu'$ such that $\mathcal{T}_{\mu'} = \mathcal{T}_\mu$. As $\mathcal{U} \in \mu'$ and $\mu'$ is a uniformity inducing $\mathcal{T}_\mu$, we also know $\mu' \subseteq \mu_F$ by maximality and so $\mathcal{U} \in \mu_F$.
We have shown (as $\mathcal{U} \in \mathscr{B}$ was arbitrary), that indeed $\mathscr{B} \subseteq \mu_F$.
Now we use the following
Fact: (e.g. Willard, General Topology; 36.7) For any covering uniformity $\mu$, the open uniform covers (i.e. open covers that happen to be members of $\mu$) form a base for $\mu$.
Proof sketch of Fact: let $\mathcal{U} \in \mu$ and let $\mathcal{V} \in \mu$ be such that $\mathcal{V} \prec^\ast \mathcal{U}$. Then note that $\mathcal{O}=\{\operatorname{st}(x,\mathcal{V}): x \in X\}$ is an open cover of $X$ (in the induced topology) (note that $\mathcal{V}\prec \mathcal{O}$, so that $\mathcal{O} \in \mu$, as required) that refines $\mathcal{U}$.
This Fact implies
Fact 2: every open cover in a uniformity $\mu$ is normally open.
Proof of Fact 2: Let $\mathcal{U} \in \mu$ be an open cover. Define $\mathcal{U}_1 = \mathcal{U}$. Having defined $\mathcal{U}_n$ for some $n$, such that $\mathcal{U}_n$ is an open cover from $\mu$, let $\mathcal{V}$ be any cover in $\mu$ such that $\mathcal{V}\prec^\ast \mathcal{U}_n$, which can be done as $\mathcal{U}_n \in \mu$; then by the above Fact there is an open cover $\mathcal{O}$ in $\mu$ such that $\mathcal{O} \prec \mathcal{V}$. Standard facts about refinements tell us that:
$$\mathcal{O} \prec \mathcal{V} \prec^\ast \mathcal{U}_n \implies \mathcal{O} \prec^\ast \mathcal{U}_n$$
allowing us to continue the recursion by defining $\mathcal{U}_{n+1} = \mathcal{O}$, keeping everything open and in $\mu$. This recursively defined sequence shows that $\mathcal{U}$ is indeed normally open.
Now take any $\mathcal{U} \in \mu_F$. Then by the above fact, there is an open cover $\mathcal{O} \in \mu$ refining it. By Fact 2, it is normally open so a member of $\mathscr{B}$. This shows $\mathscr{B}$ is a base for $\mu_F$. |
I need some help in plotting the following inequality using Mathematica:
$$\frac{1+(x/100+0.1)\times y/100}{1.15+1.15\times(x/100)\times(y/100)}\geq 1$$
assuming $x,y\in \mathbb{Q}$, the set of rational numbers and $5\leq x\leq 90,\ 50\leq y \leq 650$.
Copyable code for the above inequality:
(1 + (x/100 + 0.1)(y/100))/(1.15 + 1.15 (x/100)(y/100)) >= 1
I've tried the following in Maple (because it's what we had available) but it only simplifies the term and does not plot it, and enclosing the inequality with plot() in Maple only generates errors.
assume(x, rational);assume(y, rational);assume(x>=5 and x<=90);assume(y>=50 and y<=650);(1+1*(x/100+0.1)*(y/100))/(1.15+1.15*(x/100)*(y/100))>=1;
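Since I can't run Mathematica here, a quick Python sanity check of where the inequality holds (treating $x,y$ as reals; restricting to rationals doesn't change the picture, since the region is cut out by a continuous inequality). Algebraically the condition reduces to $\frac{y}{100}\left(0.1 - 0.15\,\frac{x}{100}\right) \ge 0.15$, so it can only hold for $x < \frac{200}{3}$ and sufficiently large $y$.

```python
def holds(x, y):
    """True where the inequality from the question is satisfied."""
    lhs = (1 + (x / 100 + 0.1) * (y / 100)) / (1.15 + 1.15 * (x / 100) * (y / 100))
    return lhs >= 1

# Coarse scan of the rectangle 5 <= x <= 90, 50 <= y <= 650
region = [(x, y) for x in range(5, 91, 5) for y in range(50, 651, 25) if holds(x, y)]
# e.g. holds(5, 200) is True, while holds(90, 650) is False
```

In Mathematica itself, RegionPlot[ineq, {x, 5, 90}, {y, 50, 650}] is the usual tool for plotting such a region.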
Optional theoretical problem:
$x$ could theoretically (not in practice) be up to $100$, but $(x/100+0.1)$ on the left side would still always max out at $1$. Should the equation be changed to $x_1$ and $x_2$ with respective limits? But those limits would only apply for $x_1$ and $90< x_2 \leq 100$. |
Answer
Proper fractions have a numerator that is smaller than the denominator (e.g. 1/2), while improper fractions have a numerator that is greater than or equal to the denominator (e.g. 3/2).
Work Step by Step
Proper fractions have a numerator that is smaller than the denominator, while improper fractions have a numerator that is greater than or equal to the denominator. An example of a proper fraction is one-half: $\displaystyle \frac{1 \leftarrow numerator}{2\leftarrow denominator}$ An example of an improper fraction is three-halves: $\displaystyle \frac{3 \leftarrow numerator}{2\leftarrow denominator}$ See images for a visual representation. |
exponential distribution
Every day one sees politicians on TV assuring us that nuclear deterrence works because no nuclear weapon has been exploded in anger since 1945. They clearly have no understanding of statistics.
With a few plausible assumptions, we can easily calculate that the time until the next bomb explodes could be as little as 20 years.
Be scared, very scared.
The first assumption is that bombs go off at random intervals. Since we have had only one so far (counting Hiroshima and Nagasaki as a single event), this can’t be verified. But given the large number of small influences that control when a bomb explodes (whether in war or by accident), it is the natural assumption to make. The assumption is given some credence by the observation that the intervals between wars are random [download pdf].
If the intervals between bombs are random, that implies that the distribution of the lengths of the intervals is exponential in shape. The nature of this distribution has already been explained in an earlier post about the random lengths of time for which a patient stays in an intensive care unit. If you haven't come across an exponential distribution before, please look at that post before moving on.
All that we know is that 70 years have elapsed since the last bomb, so the interval until the next one must be greater than 70 years. The probability that a random interval is longer than 70 years can be found from the cumulative form of the exponential distribution.
If we denote the true mean interval between bombs as $\mu$ then the probability that an interval is longer than 70 years is
\[ \text{Prob}\left( \text{interval > 70}\right)=\exp{\left(\frac{-70}{\mu}\right)} \]
We can get a lower 95% confidence limit (call it $\mu_\mathrm{lo}$) for the mean interval between bombs by the argument used in Lecture on Biostatistics, section 7.8 (page 108). If we imagine that $\mu_\mathrm{lo}$ were the true mean, we want it to be such that there is a 2.5% chance that we observe an interval that is greater than 70 years. That is, we want to solve
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.025\]
That’s easily solved by taking natural logs of both sides, giving
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.025\right)}}= 19.0\text{ years}\]
A similar argument leads to an upper confidence limit, $\mu_\mathrm{hi}$, for the mean interval between bombs, by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{hi}}\right)} = 0.975\]
so \[ \mu_\mathrm{hi} = \frac{-70}{\ln{\left(0.975\right)}}= 2765\text{ years}\]
If the worst case were true, and the mean interval between bombs were 19 years, then the distribution of the time to the next bomb would have an exponential probability density function, $f(t)$,
\[ f(t) = \frac{1}{19} \exp{\left(\frac{-t}{19}\right)} \]
There would be a 50% chance that the waiting time until the next bomb would be less than the median of this distribution, $19 \ln 2 = 13.2$ years.
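The arithmetic above is easy to reproduce; a few lines of Python (illustrative, not from the original post) give the same numbers:

```python
import math

def mean_interval_for_tail_prob(observed_years, p):
    """Solve exp(-observed/mu) = p for mu."""
    return -observed_years / math.log(p)

mu_lo = mean_interval_for_tail_prob(70, 0.025)  # lower 95% confidence limit
mu_hi = mean_interval_for_tail_prob(70, 0.975)  # upper 95% confidence limit
median_wait = mu_lo * math.log(2)               # median of Exp(mu_lo)

print(f"mu_lo  = {mu_lo:.1f} years")        # about 19.0
print(f"mu_hi  = {mu_hi:.1f} years")        # about 2765
print(f"median = {median_wait:.1f} years")  # about 13.2
```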
In summary, the observation that there has been no explosion for 70 years implies that the mean time until the next explosion lies (with 95% confidence) between 19 years and 2765 years. If it were 19 years, there would be a 50% chance that the waiting time to the next bomb could be less than 13.2 years. Thus there is no reason at all to think that nuclear deterrence works well enough to protect the world from incineration.
Another approach
My statistical colleague, the ace probabilist Alan Hawkes, suggested a slightly different approach to the problem, via likelihood. The likelihood of a particular value of the interval between bombs is defined as the probability of making the observation(s), given a particular value of $\mu$. In this case, there is one observation, that the interval between bombs is more than 70 years. The likelihood, $L\left(\mu\right)$, of any specified value of $\mu$ is thus
\[L\left(\mu\right)=\text{Prob}\left( \text{interval > 70 | }\mu\right) = \exp{\left(\frac{-70}{\mu}\right)} \]
Plotting this function (graph on right) shows that it increases continuously with $\mu$, so the maximum likelihood estimate of $\mu$ is infinity. An infinite wait until the next bomb is perfect deterrence.
But again we need confidence limits for this. Since the upper limit is infinite, the appropriate thing to calculate is a one-sided lower 95% confidence limit. This is found by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.05\]
which gives
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.05\right)}}= 23.4\text{ years}\]
Summary
The first approach gives 95% confidence limits for the average time until we get incinerated as 19 years to 2765 years. The second approach gives the lower limit as 23.4 years. There is no important difference between the two methods of calculation. This shows that the bland assurance of politicians that “nuclear deterrence works” is not justified.
It is not the purpose of this post to predict when the next bomb will explode, but rather to point out that the available information tells us very little about that question. This seems important to me because it contradicts directly the frequent assurances that deterrence works.
The only consolation is that, since I’m now 79, it’s unlikely that I’ll live long enough to see the conflagration.
Anyone younger than me would be advised to get off their backsides and do something about it, before you are destroyed by innumerate politicians.
Postscript
While talking about politicians and war it seems relevant to reproduce Peter Kennard’s powerful image of the Iraq war.
and with that, to quote the comment made by Tony Blair’s aide, Lance Price
It’s a bit like my feeling about priests doing the twelve stations of the cross. Politicians and priests masturbating at the expense of kids getting slaughtered (at a safe distance, of course). |
Multigrid Methods for Elliptic Optimal Control Problems with Neumann Boundary Control Dr. Stefan Takacs June 9, 2009, 3:30 p.m. MZ 005B
In this talk we will discuss multigrid methods for solving the discretized optimality system for elliptic optimal control problems. We will concentrate on the following model problem with Neumann boundary control:
$ J(y, u) = \frac{1}{2} \left\Vert y - y_D \right\Vert_{L^2(\Omega)}^2 + \frac{\gamma}{2} \left\Vert u \right\Vert_{L^2 (\partial \Omega)}^2 \to \mathrm{min} $
$ -\Delta y + y = 0 ~ \mathrm{in} ~ \Omega, \qquad \frac{\partial y}{\partial n} = u ~ \mathrm{on} ~ \partial \Omega $
The proposed approach is based on the formulation of the Karush-Kuhn-Tucker system in terms of the state y, the control u and the adjoint state p. For the model problem, the approximation property is shown similarly to the case of distributed control, see [3] and [1]. We will propose on the one hand an Uzawa-type smoother and on the other hand a smoother that is based on the normal equation of the Karush-Kuhn-Tucker system. For both methods rigorous analysis is available. We will compare both methods in numerical experiments.
Of course, the results can be generalized to other problems; in particular, the observation can be restricted to the boundary or to some part of the domain Ω.
References
[1] S. C. Brenner. Multigrid methods for parameter dependent problems. RAIRO, Modélisation Math. Anal. Numér., 30:265–297, 1996.
[2] J. Schöberl and W. Zulehner. On Schwarz-type smoothers for saddle point problems. Numer. Math., 95:377–399, 2003.
[3] R. Simon and W. Zulehner. On Schwarz-type smoothers for saddle point problems with applications to PDE-constrained optimization problems. Numer. Math., 111:445–468, 2009.
Besides the usual deterministic DFS/BFS approaches, one could also consider a randomized algorithm. I will shortly describe a randomized algorithm for deciding if two vertices $s$ and $t$ are connected. It can also be used to decide if the whole graph is connected. The main benefit is that this method requires $O(\log |V|)$ bits of space, whereas a BFS/DFS requires $\Omega(|V|)$ space.
The cover time of an undirected graph $G=(V,E)$ is the maximum over all vertices $v \in V(G)$ of the expected time for a random walk starting from $v$ to visit all of the nodes in the graph. Using some theory of Markov chains, it is not too hard to prove that the cover time of $G$ is bounded from above by $4|V|\cdot|E|$.
The algorithm for deciding if $s$ and $t$ are connected is simple:
Input: two vertices s,t
1. Start a random walk from s.
2. If t is reached within 4|V|^3 steps, return true. Otherwise, return false.
Clearly, if there is no path between $s$ and $t$ the algorithm returns the correct answer. If there is a path, the algorithm errs if it is not found within $4|V|^3$ steps. The cover time of $G$ is bounded from above by $4|V||E| < 2|V|^3$. Using Markov's inequality, the probability that a random walk takes more than $4|V|^3$ steps to reach $t$ from $s$ is at most $1/2$. In other words, the algorithm returns the correct answer with probability $1/2$, and only errs by saying $s$ and $t$ are not connected when they in fact are. |
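The algorithm above can be sketched directly in Python (the adjacency-list representation and names are mine, not from a particular source):

```python
import random

def randomly_connected(adj, s, t, rng=random):
    """Randomized s-t connectivity test via a random walk.

    adj: dict mapping each vertex to a list of its neighbours.
    Returns True if t is reached within 4|V|^3 steps; may wrongly
    return False with probability at most 1/2 when s and t are connected.
    """
    n = len(adj)
    v = s
    for _ in range(4 * n ** 3):
        if v == t:
            return True
        if not adj[v]:  # isolated vertex: the walk is stuck
            return False
        v = rng.choice(adj[v])
    return v == t

# Two components: {0, 1, 2} is a path, {3} is isolated.
graph = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(randomly_connected(graph, 0, 2))  # almost certainly True
print(randomly_connected(graph, 0, 3))  # always False
```

Since the error is one-sided, repeating the walk $k$ times and answering "connected" if any run succeeds drives the failure probability below $2^{-k}$.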
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
I know how to solve and prove that
$$\int_{0}^{\infty} \frac{\sin(x)}{x^\alpha} \, dx$$ converge for $ 0 < \alpha < 2 $ with regular tests and integration by parts.
But with the Dirichlet test, I just see that I have one function which is monotonically decreasing to $0$ for any given $\alpha$, and the integral of $\sin(x)$ is bounded on $[0,\infty)$, so the integral should converge for any $\alpha$.
Clearly I don't understand the Dirichlet test, but I've read its definition many times and still can't see where I went wrong.
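For what it's worth, here is where I believe the gap is (my own sketch, not a quoted solution): the Dirichlet test only controls the tail of the integral, so the behaviour near $0$ has to be handled separately.

```latex
% Split the integral at 1:
\int_0^\infty \frac{\sin x}{x^\alpha}\,dx
  = \int_0^1 \frac{\sin x}{x^\alpha}\,dx
  + \int_1^\infty \frac{\sin x}{x^\alpha}\,dx.
% On [1,\infty): 1/x^\alpha decreases monotonically to 0 and
% |\int_1^A \sin x\,dx| \le 2 for all A, so the Dirichlet test gives
% convergence for every \alpha > 0 -- but it says nothing about [0,1].
% On (0,1]: \sin x \sim x, so the integrand behaves like x^{1-\alpha},
% which is integrable near 0 iff 1-\alpha > -1, i.e. \alpha < 2.
% Hence the whole integral converges exactly for 0 < \alpha < 2.
```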
Linear regression is a statistical modeling technique used to describe a continuous response variable as a function of one or more predictor variables. It can help you understand and predict the behavior of complex systems or analyze experimental, financial, and biological data.
Linear regression techniques are used to create a linear model. The model describes the relationship between a dependent variable \(y\) (also called the response) as a function of one or more independent variables \(X_i\) (called the predictors). The general equation for a linear regression model is:
\[y = \beta_0 + \sum \ \beta_i X_i + \epsilon_i\]
where \(\beta\) represents linear parameter estimates to be computed and \(\epsilon\) represents the error terms.
There are several types of linear regression models:
- Simple: model with only one predictor
- Multiple: model with multiple predictors
- Multivariate: model for multiple response variables

Once you have a fitted model, you can:
- Generate predictions
- Compare linear model fits
- Plot residuals
- Evaluate goodness-of-fit
- Detect outliers
To create a linear model that fits curves and surfaces to your data, see Curve Fitting Toolbox. |
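The general model above can be fitted by ordinary least squares in a few lines; here is a minimal sketch in Python with NumPy (an illustration with made-up data, not part of any toolbox documentation):

```python
import numpy as np

# Synthetic data from y = 1 + 2*x plus a little noise (made-up example).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, x.size)

# Design matrix [1, x] for the simple model y = b0 + b1*x + eps.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"intercept b0 = {beta[0]:.2f}, slope b1 = {beta[1]:.2f}")
```

Adding more columns to the design matrix turns this into the multiple-regression case of the same equation.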
I have a doubt about the equivalence between Fourier Transform and Laplace Transform.
I was told that if I have a function such that:
$f(t)=0$ if $t<0$, and $f\in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$,
I can define
$F[f(t)]=\int_{0}^{\infty}f(t) e^{-j\omega t}dt$
$L[f(t)]=\int_{0}^{\infty}f(t) e^{-s t}dt$
I can look at the Fourier transform as the Laplace transform evaluated at $s=j\omega$ IF AND ONLY IF the abscissa of convergence is strictly less than zero (i.e. if the region of convergence includes the imaginary axis).
If the abscissa of convergence is $\gamma=0$, then (I was told) I can have poles on the
real axis, and I have to define the Fourier transform with indentations in a proper manner.
In Papoulis's book it is written: "if $\gamma=0$, the Laplace transform has at least one of its singular points on the imaginary axis."
So, I think that the situation should be like this:
Then, if I extend the frequency into the complex plane, I can consider that, regarding the Fourier transform, the axes are rotated with respect to the axes of the s-plane:
So I should have:
Finally,
I think that these last two steps could explain the word "real" applied to the poles at the beginning of the question...
Please tell me if the reasoning is wrong and where. Many thanks.
Hi, what would be a good test to find convergence or divergence, please?
$$\sum_{k=1}^{\infty} \frac{e^k \cos^2 k}{\pi^k}$$
My attempt: I got that it converges. Thanks.
Since $$0\le \cos^2(\hbox{anything})\le1$$ we have $$0\le\frac{e^k\cos^2k}{\pi^k}\le\frac{e^k}{\pi^k}=\Bigl(\frac{e}{\pi}\Bigr)^k\ .$$ And $$\sum_{k=1}^\infty \Bigl(\frac{e}{\pi}\Bigr)^k$$ converges since it is a GP with ratio $e/\pi<1$, so your series converges by the comparison test.
Hint
Consider $$I=\sum\limits_{k=1}^{\infty}\dfrac{e^{k}\cos^{2}(k)}{\pi^{k}}$$ $$J=\sum\limits_{k=1}^{\infty}\dfrac{e^{k}\sin^{2}(k)}{\pi^{k}}$$ So $$I+J=\sum\limits_{k=1}^{\infty}\dfrac{e^{k}}{\pi^{k}}=\frac{e}{\pi-e }$$ which is a geometric series.
I am sure that you can take it from here.
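A quick numerical check of the geometric bound (Python, purely illustrative):

```python
import math

bound = math.e / (math.pi - math.e)  # value of the full geometric series I + J
ratio = math.e / math.pi             # common ratio, < 1

# Partial sums of the original series sum_k (e/pi)^k * cos^2(k).
partials = []
s = 0.0
for k in range(1, 200):
    s += ratio ** k * math.cos(k) ** 2
    partials.append(s)

print(f"S_199 = {partials[-1]:.4f} < e/(pi - e) = {bound:.4f}")
```

The partial sums increase monotonically and stay below $e/(\pi-e)$, as the comparison with the geometric series predicts.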
It converges absolutely due to the estimate $$\cos^{2}(n)\leq 1$$
In implementations of the Metropolis-Hastings algorithm, how is the target distribution $\pi(\mathbf{x}) = P(\mathbf{x}|\mathbf{e})$ computed or estimated while computing the acceptance probability $\alpha(\mathbf{x'}|\mathbf{x}) = \min \left(1,\frac{\pi(\mathbf{x'})q(\mathbf{x}|\mathbf{x'})}{\pi(\mathbf{x})q(\mathbf{x'}|\mathbf{x})}\right)$?
For special cases of Metropolis-Hastings such as Gibbs sampling, I understand that detailed balance of $q$ with $\pi$ simplifies the acceptance probability to $1$.
What I'm not understanding is how to compute the target distribution when using something like a symmetric random-walk proposal.
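The key point, as I understand it, is that $\pi$ never needs to be computed: it appears only in the ratio $\pi(\mathbf{x'})/\pi(\mathbf{x})$, so any unnormalized density suffices, since the normalizing constant cancels. A sketch with a symmetric Gaussian random-walk proposal (my own illustration; the target and step size are made up):

```python
import math
import random

def unnormalized_target(x):
    """Unnormalized density: exp(-x^2/2) is N(0,1) up to a constant."""
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step=1.0, x0=0.0, rng=random):
    """Random-walk Metropolis. The proposal is symmetric, so q cancels
    and the acceptance ratio needs only the unnormalized target."""
    x, samples = x0, []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        # alpha = min(1, pi(x')/pi(x)); normalizing constants cancel.
        if rng.random() < unnormalized_target(x_new) / unnormalized_target(x):
            x = x_new
        samples.append(x)
    return samples

random.seed(1)
draws = metropolis(20000)[2000:]  # drop burn-in
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(f"sample mean = {mean:.2f}, sample variance = {var:.2f}")
```

The chain targets N(0,1) even though `unnormalized_target` omits the $1/\sqrt{2\pi}$ factor entirely.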
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... |
I am trying to solve this integral but cannot find a starting point. I was thinking a u-substitution might help but do not know what to choose as $u$. I tried Mathematica but it does not provide any solution. Can anyone help? $$\int 2\sin\theta \,\sin^{-1}\!\left[\frac{\cos \theta+\sin \gamma}{\sin \theta \tan \beta}\right] d\theta$$
Hint:
With $t:=\cos\theta$, renaming the constants and dropping the $2$,
$$I:=-\int \arcsin\frac{t+c}{b\sqrt{1-t^2}}dt.$$
By parts,
$$I:=-t\arcsin\frac{t+c}{b\sqrt{1-t^2}}+\int t\frac{t(t+c)+1-t^2}{b(1-t^2)^{3/2}\sqrt{1-\dfrac{(t+c)^2}{b^2(1-t^2)}}}dt \\=\cdots+\int t\frac{ct+1}{(1-t^2)\sqrt{b^2(1-t^2)-(t+c)^2}}dt.$$
This rationalizes the integrand, and Wolfram Alpha is able to integrate it. Strange.
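The differentiation step in the integration by parts can be sanity-checked numerically (Python; the constants $b$, $c$ are arbitrary test values I picked, with $b$ large enough that the arcsine argument stays in $(-1,1)$):

```python
import math

b, c = 3.0, 0.2  # arbitrary test constants

def f(t):
    """Integrand after substitution: arcsin((t + c) / (b*sqrt(1 - t^2)))."""
    return math.asin((t + c) / (b * math.sqrt(1 - t * t)))

def f_prime(t):
    """Closed form of f'(t) obtained in the integration by parts."""
    return (1 + c * t) / ((1 - t * t) * math.sqrt(b * b * (1 - t * t) - (t + c) ** 2))

t, h = 0.1, 1e-5
numeric = (f(t + h) - f(t - h)) / (2 * h)  # central finite difference
print(abs(numeric - f_prime(t)))           # tiny: the formulas agree
```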
Update:
Actually, Alpha does the integration by parts: |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($\nu_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
6 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering A, 334, 173-178, 2002

The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and the bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004

The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows one to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004

In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/3L</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the quantity <math>\mu=0.13</math>, which is the friction coefficient of CNTs (the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and <math>L=30{\rm \mu m}</math>, the fiber length. A critical review of this formula is given here.
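The formula is easy to evaluate; a short Python sketch (using the article's symbols; treating k as a free parameter and the example angles are mine):

```python
import math

def yarn_strength_ratio(alpha_deg, k):
    """sigma_yarn / sigma_fiber = cos^2(alpha) * (1 - k / sin(alpha)),
    where alpha is the helix angle and k = sqrt(d*Q/mu) / (3*L)."""
    a = math.radians(alpha_deg)
    return math.cos(a) ** 2 * (1 - k / math.sin(a))

# With no slippage penalty (k = 0) only the cos^2 obliquity loss remains.
for alpha in (10, 20, 30):
    print(alpha, yarn_strength_ratio(alpha, k=0.0))
```

The ratio falls with both the helix angle (through the cos² factor) and the slippage constant k, which is why long fibers (large L, hence small k) give stronger yarns.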
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. the lack of embedded amorphous carbon and of imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is bigger than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so we could grow infinitely long CNTs.
This article can be found in our archive.
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to its inherent shape, as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically.
Of course for us SE community the most interesting part in this paper is the question: can we grow CNTs that are long enough so we can spin them in a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear off, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science,
318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (low-density, porous, solid material) of SWNT and MWNT that has been formed by carbon vapor deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel with high winding rates (20 metres per minute). Indeed higher winding rates appear to be desirable, but the authors have not been able to achieve higher values as the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that typically the fibers split in two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample, which have no weak point, and some, which have one or more, provided the length of the fibers is in the order of the frequency of occurrence of weak points. This can be seen by the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood to encounter a weak point on a 20mm long fiber is 20 times higher than encountering one on a 1mm long fiber.
As a conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnect them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber of better than Kevlar is still open. |
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growth and processing, the procedures that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Contents

1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
7 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering,
A334, 173-178, 2002

The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strength of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and the bending stiffness. Breaking tension is reached for the MWNTs at 150 GPa and between 3.5% and 5% strain. During the measurements a 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science,
304, 276-278, 2004

The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as the carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That is still not enough for the SE, but the process appears to be interesting as it allows the yarn to be drawn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as of hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science,
306, 1358-1361, 2004

In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force to the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here.
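To get a feeling for the correction factor, here is a small numerical sketch of my own (the migration length Q is not given in the text, so the value below is an illustrative assumption):

```python
import math

def yarn_strength_ratio(alpha_deg, d, Q, mu, L):
    """Ratio sigma_yarn/sigma_fiber from the twist formula above.

    alpha_deg: helix angle in degrees; d: fiber diameter; Q: migration
    length (assumed value); mu: inter-fiber friction coefficient;
    L: fiber length. All lengths in metres.
    """
    alpha = math.radians(alpha_deg)
    k = math.sqrt(d * Q / mu) / (3 * L)
    return math.cos(alpha) ** 2 * (1 - k / math.sin(alpha))

# Values from the text; Q = 1 micron is a made-up illustrative choice.
ratio = yarn_strength_ratio(20, d=1e-9, Q=1e-6, mu=0.13, L=30e-6)
```

With these inputs the loss is dominated by the cos² obliquity factor, not by the fiber-slippage term k/sin α.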
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. the lack of embedded amorphous carbon and of imperfections in the carbon bonds of the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth, so we could grow infinitely long CNTs.
This article can be found in our archive.
Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning

In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made by time-lapse transmission electron microscopy (TEM) and by x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, which by its inherent shape tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the forming cap leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle is not enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and thereby drives the growth process mechanically.
Of course, for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough that we can spin them into a yarn that would hold the 100 GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear out, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science,
318, 1892, 2007.

The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNT and MWNT that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20 mm length and have been extracted from the aerogel at high winding rates (20 metres per minute). Even higher winding rates appear to be desirable, but the authors have not been able to achieve them, as the limit of the extraction speed from the aerogel was reached and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers at a few GPa and high-performance fibers at around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, i.e. the density of the material divided by the density of water. SG was around 1 for most samples discussed in the paper. The two performance classes are interpreted by the authors as the typical outcome of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample that have no weak point and some that have one or more, provided the length of the fibers is of the order of the typical spacing between weak points. This can be seen in the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm long fiber is 20 times higher than on a 1mm long fiber.
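The weakest-link argument can be made concrete with a toy model of my own (not from the paper): if flaws occur randomly along a fiber with a fixed density, the expected flaw count grows linearly with length and the chance of a flaw-free (high-performance) fiber decays exponentially:

```python
import math

def flaw_free_probability(length_mm, flaws_per_mm):
    """P(a fiber of the given length contains no flaw), Poisson flaw model."""
    return math.exp(-flaws_per_mm * length_mm)

rate = 0.2                      # assumed flaw density in flaws/mm (illustrative)
p_1mm = flaw_free_probability(1, rate)
p_20mm = flaw_free_probability(20, rate)
ratio_expected_flaws = (20 * rate) / (1 * rate)   # 20x more expected weak points
```

So even a modest flaw density leaves a sizeable flaw-free fraction at 1 mm but essentially none at 20 mm, which is the behaviour seen in the paper's Figure 3A.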
In conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength of better than 3GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: use just cm-long fibers and interconnect them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open. |
Recall that
$${n\choose k}={n!\over k!\,(n-k)!}={n(n-1)(n-2)\cdots(n-k+1)\over k!}.$$
The expression on the right makes sense even if \(n\) is not a non-negative integer, so long as \(k\) is a non-negative integer, and we therefore define
$${r\choose k}={r(r-1)(r-2)\cdots(r-k+1)\over k!}$$
when \(r\) is a real number. For example,
$${1/2\choose 4}={(1/2)(-1/2)(-3/2)(-5/2)\over 4!}={-5\over128} \quad\hbox{and}\quad {-2\choose 3}={(-2)(-3)(-4)\over 3!}=-4. $$
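These values are easy to check numerically; the helper `gbinom` below is my own illustration, not from the text:

```python
from fractions import Fraction

def gbinom(r, k):
    """Generalized binomial coefficient r(r-1)...(r-k+1)/k! for integer k >= 0."""
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(r) - i
        result /= i + 1
    return result

half_choose_4 = gbinom(Fraction(1, 2), 4)   # expect -5/128
neg2_choose_3 = gbinom(-2, 3)               # expect -4
```

Using `Fraction` keeps the arithmetic exact, so the results match the hand computation with no rounding.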
These generalized binomial coefficients share some important properties of the usual binomial coefficients, most notably that
$$\eqalignno{ {r\choose k}&={r-1\choose k-1}+{r-1\choose k}.& (3.1.1)\cr }$$
Then remarkably:
Theorem \(\PageIndex{1}\): Newton's Binomial Theorem
For any real number \(r\) that is not a non-negative integer, $$(x+1)^r=\sum_{i=0}^\infty {r\choose i}x^i$$ when \(-1< x< 1\).
Proof
It is not hard to see that the series is the Maclaurin series for \((x+1)^r\), and that the series converges when \(-1< x< 1\). It is rather more difficult to prove that the series is equal to \((x+1)^r\); the proof may be found in many introductory real analysis books.
\(\square\)
Example \(\PageIndex{1}\)
Expand the function \((1-x)^{-n}\) when \(n\) is a positive integer.
Solution
We first consider \((x+1)^{-n}\); we can simplify the binomial coefficients:
$$\eqalign{ {(-n)(-n-1)(-n-2)\cdots(-n-i+1)\over i!} &=(-1)^i{(n)(n+1)\cdots(n+i-1)\over i!}\cr &=(-1)^i{(n+i-1)!\over i!\,(n-1)!}\cr &=(-1)^i{n+i-1\choose i}=(-1)^i{n+i-1\choose n-1}.\cr }$$
Thus
$$(x+1)^{-n}=\sum_{i=0}^\infty (-1)^i{n+i-1\choose n-1}x^i =\sum_{i=0}^\infty {n+i-1\choose n-1}(-x)^i.$$
Now replacing \(x\) by \(-x\) gives $$(1-x)^{-n}=\sum_{i=0}^\infty {n+i-1\choose n-1}x^i.$$ So \((1-x)^{-n}\) is the generating function for \({n+i-1\choose n-1}\), the number of submultisets of \(\{\infty\cdot1,\infty\cdot2,\ldots,\infty\cdot n\}\) of size \(i\).
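As a sanity check of the simplification above (my own sketch), the generalized binomial coefficient \({-n\choose i}\) can be compared directly with the signed ordinary binomial coefficient:

```python
from fractions import Fraction
from math import comb

def gbinom(r, k):
    """Generalized binomial coefficient r(r-1)...(r-k+1)/k!."""
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(r) - i
        result /= i + 1
    return result

n = 4
checks = [gbinom(-n, i) == (-1) ** i * comb(n + i - 1, n - 1) for i in range(10)]
```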
In many cases it is possible to directly construct the generating function whose coefficients solve a counting problem.
Example \(\PageIndex{2}\)
Find the number of solutions to \(\displaystyle x_1+x_2+x_3+x_4=17\), where \(0\le x_1\le2\), \(0\le x_2\le5\), \(0\le x_3\le5\), \(2\le x_4\le6\).
Solution
We can of course solve this problem using the inclusion-exclusion formula, but we use generating functions. Consider the function
$$(1+x+x^2)(1+x+x^2+x^3+x^4+x^5)(1+x+x^2+x^3+x^4+x^5)(x^2+x^3+x^4+x^5+x^6).$$
We can multiply this out by choosing one term from each factor in all possible ways. If we then collect like terms, the coefficient of \(x^k\) will be the number of ways to choose one term from each factor so that the exponents of the terms add up to \(k\). This is precisely the number of solutions to \(\displaystyle x_1+x_2+x_3+x_4=k\), where \(0\le x_1\le2\), \(0\le x_2\le5\), \(0\le x_3\le5\), \(2\le x_4\le6\). Thus, the answer to the problem is the coefficient of \(x^{17}\). With the help of a computer algebra system we get
$$\eqalign{ (1+x+x^2)(1&+x+x^2+x^3+x^4+x^5)^2(x^2+x^3+x^4+x^5+x^6)\cr =\;&x^{18} + 4x^{17} + 10x^{16} + 19x^{15} + 31x^{14} + 45x^{13} + 58x^{12} + 67x^{11} + 70x^{10}\cr &+67x^9 + 58x^8 + 45x^7 + 31x^6 + 19x^5 + 10x^4 + 4x^3 + x^2,\cr }$$
so the answer is 4.
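The same coefficient can be checked in plain Python in place of a computer algebra system (the `poly_mul` helper is my own sketch):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = exponent)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f1 = [1, 1, 1]                 # 1 + x + x^2
f2 = [1, 1, 1, 1, 1, 1]        # 1 + x + ... + x^5
f4 = [0, 0, 1, 1, 1, 1, 1]     # x^2 + ... + x^6
product = poly_mul(poly_mul(poly_mul(f1, f2), f2), f4)
answer = product[17]           # number of solutions summing to 17
```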
Example \(\PageIndex{3}\)
Find the generating function for the number of solutions to \(\displaystyle x_1+x_2+x_3+x_4=k\), where \(0\le x_1\le\infty\), \(0\le x_2\le5\), \(0\le x_3\le5\), \(2\le x_4\le6\).
Solution
This is just like the previous example except that \(x_1\) is not bounded above. The generating function is thus
$$\eqalign{ f(x)&=(1+x+x^2+\cdots)(1+x+x^2+x^3+x^4+x^5)^2(x^2+x^3+x^4+x^5+x^6)\cr &=(1-x)^{-1}(1+x+x^2+x^3+x^4+x^5)^2(x^2+x^3+x^4+x^5+x^6)\cr &={(1+x+x^2+x^3+x^4+x^5)^2(x^2+x^3+x^4+x^5+x^6)\over 1-x}. }$$
Note that \((1-x)^{-1}=(1+x+x^2+\cdots)\) is the familiar geometric series from calculus; alternately, we could use Example \(\PageIndex{1}\) with \(n=1\). Unlike the function in the previous example, this function has an infinite expansion:
$$\eqalign{ f(x)&= x^2+4x^3 + 10x^4 + 20x^5 +35x^6 + 55x^7+ 78x^8 \cr &+ 102x^9 + 125x^{10}+ 145x^{11} + 160x^{12} + 170x^{13}+176x^{14} \cr &+ 179x^{15} +180x^{16} + 180x^{17} + 180x^{18} + 180x^{19} + 180x^{20} +\cdots. }$$
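The coefficients of this infinite expansion can be checked with a short script of my own (it truncates the geometric series at a finite degree, which is harmless for coefficients up to that degree):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = exponent)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

N = 25                                  # truncation degree
geom = [1] * (N + 1)                    # 1 + x + x^2 + ... (truncated)
f2 = [1, 1, 1, 1, 1, 1]                 # 1 + x + ... + x^5
f4 = [0, 0, 1, 1, 1, 1, 1]              # x^2 + ... + x^6
coeffs = poly_mul(poly_mul(poly_mul(geom, f2), f2), f4)[: N + 1]
```

The truncation is safe because the other factors have only non-negative exponents, so dropped high-degree terms of the geometric series cannot contribute to coefficients below the cutoff.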
Example \(\PageIndex{4}\)
Find a generating function for the number of submultisets of \(\{\infty\cdot a,\infty\cdot b,\infty\cdot c\}\) in which there are an odd number of \(a\)s, an even number of \(b\)s, and any number of \(c\)s.
Solution
As we have seen, this is the same as the number of solutions to \(x_1+x_2+x_3=n\) in which \(x_1\) is odd, \(x_2\) is even, and \(x_3\) is unrestricted. The generating function is therefore $$\eqalign{ (x+x^3+x^5&+\cdots)(1+x^2+x^4+\cdots)(1+x+x^2+x^3+\cdots)\cr &=x(1+(x^2)+(x^2)^2+(x^2)^3+\cdots)(1+(x^2)+(x^2)^2+(x^2)^3+\cdots){1\over 1-x}\cr &={x\over (1-x^2)^2(1-x)}.\cr }$$ |
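As a check of this generating function (my own sketch, not part of the text), we can count the solutions directly and compare with the series coefficients of \(x/((1-x^2)^2(1-x))\), here obtained by truncated polynomial arithmetic:

```python
def poly_mul(p, q, N):
    """Multiply coefficient lists, keeping only terms up to degree N."""
    out = [0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                out[i + j] += a * b
    return out

N = 20
odd = [1 if i % 2 == 1 else 0 for i in range(N + 1)]   # x + x^3 + ...
even = [1 if i % 2 == 0 else 0 for i in range(N + 1)]  # 1 + x^2 + ...
free = [1] * (N + 1)                                   # 1 + x + ...
coeffs = poly_mul(poly_mul(odd, even, N), free, N)

def direct_count(n):
    """Count solutions x1 + x2 + x3 = n with x1 odd, x2 even, x3 >= 0."""
    return sum(1 for x1 in range(1, n + 1, 2)
                 for x2 in range(0, n - x1 + 1, 2))
```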
Self-Adjoint Linear Operators over Complex Vector Spaces
Recall from the Self-Adjoint Linear Operators page that if $V$ is a finite-dimensional nonzero inner product space and if $T \in \mathcal L (V)$ then $T$ is said to be self-adjoint if $T = T^*$.
In the following proposition we will see that if $V$ is a complex inner product space and $<T(v), v> = 0$ for all $v \in V$, then $T$ is identically the zero operator.
Proposition 1: If $V$ is a complex inner product space and $T \in \mathcal L (V)$ is such that $<T(v), v> = 0$ for all vectors $v \in V$, then $T = 0$. Proof: Let $V$ be a complex inner product space. Then for all $u, w \in V$ we can write $<T(u), w>$ as:

$$<T(u), w> = \frac{<T(u+w), u+w> - <T(u-w), u-w>}{4} + \frac{<T(u+iw), u+iw> - <T(u-iw), u-iw>}{4} i$$

Let $v_1 = u + w$, $v_2 = u - w$, $v_3 = u + iw$ and $v_4 = u - iw$. Then the equation above can be rewritten as:

$$<T(u), w> = \frac{<T(v_1), v_1> - <T(v_2), v_2>}{4} + \frac{<T(v_3), v_3> - <T(v_4), v_4>}{4} i$$

Now suppose that $<T(v), v> = 0$ for all vectors $v \in V$. Then the righthand side of the equation above reduces to zero, so $<T(u), w> = 0$ for all $u, w \in V$, which implies that $T = 0$ (take $w = T(u)$). $\blacksquare$
With this proposition, we will see in the next corollary that if $V$ is a complex inner product space then $T$ is self-adjoint if and only if $<T(v), v>$ is real for all vectors $v \in V$.
Corollary 1: If $V$ is a complex inner product space and $T \in \mathcal L (V)$ then $T$ is self-adjoint if and only if $<T(v), v> \in \mathbb{R}$ for all vectors $v \in V$. Proof: $\Leftarrow$ Let $V$ be a complex inner product space and suppose that $<T(v), v> \in \mathbb{R}$ for all vectors $v \in V$. Then $<T(v), v> = \overline{<T(v), v>}$ and so:

$$0 = <T(v), v> - \overline{<T(v), v>} = <T(v), v> - <v, T(v)> = <T(v), v> - <T^*(v), v> = <(T - T^*)(v), v>$$

Therefore $<(T - T^*)(v), v> = 0$ for all $v \in V$. By Proposition 1, this implies that $T - T^* = 0$ and so $T = T^*$, that is, $T$ is self-adjoint. $\Rightarrow$ Suppose that $T$ is self-adjoint. Then $T = T^*$ and:

$$<T(v), v> = <v, T^*(v)> = <v, T(v)> = \overline{<T(v), v>}$$

Therefore $<T(v), v> = \overline{<T(v), v>}$, which implies that $<T(v), v> \in \mathbb{R}$ for all $v \in V$. $\blacksquare$ |
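A finite-dimensional numerical illustration of the corollary (my own sketch, not from the text): for a Hermitian matrix $A$, the quadratic form $<Av, v>$ is real for every vector $v$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T               # Hermitian: A equals its conjugate transpose

v = rng.normal(size=n) + 1j * rng.normal(size=n)
quad = np.vdot(v, A @ v)         # the quadratic form; vdot conjugates its first argument
# For a self-adjoint operator this is real (up to floating point noise).
```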
The Wikipedia article on the Gamma distribution lists two different parameterisation methods. One of them, frequently used in Bayesian econometrics, has $\alpha>0$ and $\beta>0$, where $\alpha$ is the shape parameter and $\beta$ is the rate parameter:
$$X\sim \mathrm{Gamma}(\alpha,\beta).$$
In a Bayesian econometrics textbook written by Gary Koop, the precision parameter $\frac{1}{\sigma^2}=h$ follows a Gamma distribution, which is a prior distribution:
$$h\sim \mathrm{Gamma}(\underline{s}^{-2},\underline{\nu}),$$
where $\underline{s}^{-2}$ is the mean and $\underline{\nu}$ is the degrees of freedom, according to his Appendix. Also, $s^2$ is the squared standard error, with definition
$$s^2=\frac{\sum(y_i-\hat{\beta}x_i)^2}{\nu}.$$
Thus for me these two definitions of the Gamma distribution are completely different, since the means and variances will be different. If we follow the Wikipedia definition, the mean will be $\alpha/\beta$, not $\underline{s}^{-2}$.
I am highly confused here; would anyone help me to straighten out my thoughts? |
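For concreteness, here is the conversion that I believe reconciles the two parameterisations (an assumption to be checked against Koop's appendix: his $\mathrm{Gamma}(\mu, \nu)$ with mean $\mu$ and degrees of freedom $\nu$ would correspond to shape $\alpha = \nu/2$ and rate $\beta = \nu/(2\mu)$ in the Wikipedia notation):

```python
def koop_to_shape_rate(mu, nu):
    """Convert a (mean, degrees-of-freedom) Gamma parameterisation to
    (shape, rate), under the assumed mapping alpha = nu/2, beta = nu/(2*mu)."""
    alpha = nu / 2.0
    beta = nu / (2.0 * mu)
    return alpha, beta

alpha, beta = koop_to_shape_rate(mu=2.5, nu=10.0)
mean = alpha / beta           # recovers mu under this mapping
variance = alpha / beta**2    # equals 2*mu**2/nu under this mapping
```

Under this mapping the Wikipedia mean $\alpha/\beta$ does reproduce the stated mean, which is why I suspect the two definitions are the same distribution written differently, but I would like confirmation.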
nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB)) The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique: it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...] CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150 p.
Report from the LHC Fixed Target working group of the CERN Physics Beyond Colliders forum / Barschel, Colin (CERN) ; Bernhard, Johannes (CERN) ; Bersani, Andrea (INFN e Universita Genova (IT)) ; Boscolo Meneguolo, Caterina (Universita e INFN, Padova (IT)) ; Bruce, Roderik (CERN) ; Calviani, Marco (CERN) ; Carassiti, Vittore (Universita e INFN, Ferrara (IT)) ; Cerutti, Francesco (CERN) ; Chiggiato, Paolo (CERN) ; Ciullo, Giuseppe (Universita e INFN, Ferrara (IT)) et al. Several fixed-target experiments at the LHC are being proposed and actively studied. Splitting of beam halo from the core by means of a bent crystal combined with a second bent crystal after the target has been suggested in order to study magnetic and electric dipole moments of short-lived particles. [...] CERN-PBC-REPORT-2019-001.- Geneva : CERN, 2019
Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al. The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of the CERN's accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...] arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p.
PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al. Goal of the technology WG set by PBC: Exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated. CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31 p.
AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al. The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton driven plasma wakefields. During Run 2 (2021 - 2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...] CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11 p.
Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al. The AWAKE experiment had a very successful Run 1 (2016-8), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-4) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...] CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11 p.
Summary Report of Physics Beyond Colliders at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR)) Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...] arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p. |
A filter F on a set S is a set of subsets of S with the following properties:

S is in F.
The empty set is not in F.
If A and B are in F, then so is their intersection.
If A is in F and A ⊆ B ⊆ S, then B is in F.
A simple example of a filter is the set of all subsets of S that include a particular subset C of S. Such a filter is called the "principal filter" generated by C. The Fréchet filter on an infinite set S is the set of all subsets of S that have finite complement.
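The filter axioms are easy to verify on a small finite example (my own sketch): build the principal filter generated by C on a finite set S and check the four properties.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

S = frozenset({1, 2, 3, 4})
C = frozenset({1, 2})
F = {A for A in powerset(S) if C <= A}  # principal filter generated by C

ok_top = S in F                                                 # S is in F
ok_empty = frozenset() not in F                                 # empty set is not in F
ok_meet = all(A & B in F for A in F for B in F)                 # closed under intersection
ok_up = all(B in F for A in F for B in powerset(S) if A <= B)   # upward closed
```

Here F consists of the four supersets of {1, 2}; note that the empty-set axiom would fail if C were empty, which is why the principal filter is usually generated by a nonempty C.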
Filters are useful in topology: they play the role of sequences in metric spaces. The set of all neighbourhoods of a point x in a topological space is a filter, called the neighbourhood filter of x. A filter which is a superset of the neighbourhood filter of x is said to converge to x. Note that in a non-Hausdorff space a filter can converge to more than one point.
Of particular importance are maximal filters, which are called ultrafilters. A standard application of Zorn's lemma shows that every filter is a subset of some ultrafilter.
For any filter F on a set S, the set function defined by
<math>
m(A)=\left\{
\begin{matrix}
\,1 & \mbox{if }A\in F \\
\,0 & \mbox{if }S\setminus A\in F \\
\,\mbox{undefined} & \mbox{otherwise}
\end{matrix}
\right.
</math>
is finitely additive -- a "measure" if that term is construed rather loosely. Therefore the statement

<math>\left\{\,x\in S: \varphi(x)\,\right\}\in F</math>

can be considered somewhat analogous to the statement that φ holds "almost everywhere". That interpretation of membership in a filter is used (for motivation, although it is not needed for actual proofs) in the theory of ultraproducts in model theory, a branch of mathematical logic.
Polynomials Applied to Linear Operators
Suppose that $T$ is a linear operator from the vector space $V$ to $V$. Then the operator $T \circ T = T^2$ is an operator from $V$ to $V$. In fact, we can compose $T$ with itself multiple times, that is, for $m$ a positive integer we can define the operator $T^m$ as follows:

$$T^m = \underbrace{T \circ T \circ \cdots \circ T}_{m \: \text{factors}}$$
If $m = 0$, then we can define $T^0$ to be the identity operator on $V$.
Furthermore, for a positive integer $m$ we can define $T^{-m} = (T^{-1})^m$ provided that $T$ is an invertible linear operator, that is:

$$T^{-m} = \underbrace{T^{-1} \circ T^{-1} \circ \cdots \circ T^{-1}}_{m \: \text{factors}}$$
So now we have a formal definition of a power of a linear operator $T$. Suppose that $p(x) \in \wp (\mathbb{F})$ such that $p(x) = a_0 + a_1x + a_2x^2 + ... + a_mx^m$. Then we can define the operator of $p$ applied to $T$ as:

$$p(T) = a_0 I + a_1 T + a_2 T^2 + ... + a_m T^m$$

where $I$ denotes the identity operator on $V$.
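Using matrices as concrete operators, the definition above is easy to experiment with. A short sketch (my own illustration, not from the original page), which also shows the known fact that the characteristic polynomial of $T$ annihilates $T$ (Cayley-Hamilton):

```python
import numpy as np

def operator_power(T, m):
    """T^m for a square matrix T; T^0 is the identity operator."""
    result = np.eye(T.shape[0])
    for _ in range(m):
        result = result @ T
    return result

def apply_polynomial(coeffs, T):
    """p(T) = a_0 I + a_1 T + ... + a_m T^m for coeffs [a_0, ..., a_m]."""
    return sum(a * operator_power(T, k) for k, a in enumerate(coeffs))

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3) is the characteristic polynomial of T
p_of_T = apply_polynomial([6.0, -5.0, 1.0], T)
print(np.allclose(p_of_T, np.zeros((2, 2))))  # True
```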
Proposition 1: If $T \in \mathcal L(V)$, then the transformation $S : \wp (\mathbb{F}) \to \mathcal L (V)$ defined by $S(p(x)) = p(T)$ is linear.

Proof: Let $T \in \mathcal L (V)$. To show that $S$ is linear, we must show that $S$ has both the additivity property and the homogeneity property. First let $p(x), q(x) \in \wp (\mathbb{F})$. For the additivity property, we have that:

$$S(p(x) + q(x)) = (p + q)(T) = p(T) + q(T) = S(p(x)) + S(q(x))$$

Now let $p(x) \in \wp (\mathbb{F})$ and let $a \in \mathbb{F}$. Then for the homogeneity property we have that:

$$S(ap(x)) = (ap)(T) = a \, p(T) = a S(p(x))$$

Therefore $S$ is a linear transformation. $\blacksquare$ |
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case.
If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate.
When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does not have to be semi-simple. Recall e.g. that $U(1)$ is by definition not a simple Lie group. Its Killing form is identically zero. Nevertheless, we have the following YM-type theories:
QED with $G=U(1)$.
the Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
Also the gauge group $G$ does in principle not have to be compact. This post imported from StackExchange Physics at 2015-01-19 14:11 (UTC), posted by SE-user Qmechanic |
Inner Product Spaces
We are now going to look at a type of vector space that is associated with a function known as an inner product which we define below.
Definition: Let $V$ be a vector space over the field $\mathbb{F}$ ($\mathbb{R}$ or $\mathbb{C}$). An Inner Product on $V$ is a function which takes each pair of vectors $u, v \in V$ and assigns a number $<u, v> \in \mathbb{F}$ with the following properties:
1) $<u, u> ≥ 0$ for all vectors $u \in V$ (Positivity Property).
2) $<u, u> = 0$ if and only if $u = 0$ (Definiteness Property).
3) $<u + v, w> = <u, w> + <v, w>$ for all vectors $u, v, w \in V$ (Additivity in The First Slot Property).
4) $<au, v> = a<u, v>$ for any $a \in \mathbb{F}$ and for all vectors $u, v \in V$ (Homogeneity in The First Slot Property).
5) $<u, v> = \overline{<v, u>}$ for all vectors $u, v \in V$ (Conjugate Symmetry Property).
Note that if $V$ is a vector space over $\mathbb{R}$, then the complex conjugate of a real number is equal to itself, and thus property 5 becomes $<u, v> = <v, u>$ for all vectors $u, v \in V$.
One type of inner product space that we have already seen is the typical dot product. Consider the vector space $\mathbb{R}^n$. For any vectors $u, v \in \mathbb{R}^n$ we have that $u = (u_1, u_2, …, u_n)$ and $v = (v_1, v_2, …, v_n)$. Then the dot product between $u$ and $v$ is denoted $u \cdot v = <u, v>$ and is defined as:

$$<u, v> = u_1v_1 + u_2v_2 + … + u_nv_n$$
Let's verify that the dot product is indeed an inner product by verifying all five of the properties listed above. We first show that the positivity property holds; note that:

$$<u, u> = u_1^2 + u_2^2 + … + u_n^2$$
So $<u, u>$ is a sum of squared real numbers and is hence nonnegative, that is $<u, u> ≥ 0$. Now let's show that the definiteness property holds. Suppose that $<u, u> = 0$. Then we have that:

$$0 = u_1^2 + u_2^2 + … + u_n^2$$
Since $u_j^2 ≥ 0$ for $j = 1, 2, …, n$ and a sum of nonnegative terms can only be zero if every term is zero, we must have $u_j^2 = 0$, and hence $u_j = 0$, for each $j = 1, 2, …, n$. Thus $u = (u_1, u_2, …, u_n) = (0, 0, …, 0) = 0$. Conversely, suppose that $u = 0$. Then $u = (0, 0, …, 0)$ and $<u, u> = 0^2 + 0^2 + … + 0^2 = 0$, so the definiteness property holds.
Now let's show that the dot product has additivity in the first slot. Let $u, v, w \in \mathbb{R}^n$. Then:

$$<u + v, w> = \sum_{j=1}^{n} (u_j + v_j)w_j = \sum_{j=1}^{n} u_jw_j + \sum_{j=1}^{n} v_jw_j = <u, w> + <v, w>$$
Now let's show that the homogeneity property holds. Let $a \in \mathbb{R}$. Then:

$$<au, v> = \sum_{j=1}^{n} (au_j)v_j = a\sum_{j=1}^{n} u_jv_j = a<u, v>$$
We note that our vector space $\mathbb{R}^n$ is over the field of real numbers, and so clearly $<u, v> = \overline{<v, u>} = <v, u>$ since the dot product produces real outputs and the conjugate of a real number is itself. Therefore we have verified that the dot product is indeed an inner product.
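The five verifications above can also be spot-checked numerically. A small sketch (illustrative, not part of the original page) on random vectors in $\mathbb{R}^4$:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 4))  # three random vectors in R^4
a = 2.5

ip = lambda x, y: float(np.dot(x, y))  # the dot product as our inner product

print(ip(u, u) >= 0)                                  # positivity
print(np.isclose(ip(u + v, w), ip(u, w) + ip(v, w)))  # additivity, first slot
print(np.isclose(ip(a * u, v), a * ip(u, v)))         # homogeneity, first slot
print(np.isclose(ip(u, v), ip(v, u)))                 # symmetry (real case)
```

All four lines print `True`; of course a numeric check on sampled vectors only illustrates the algebraic proof, it does not replace it.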
Of course, there are many other types of inner products that can be formed on more abstract vector spaces. Such a vector space with an inner product is known as an inner product space which we define below.
Definition: A vector space $V$ over the field $\mathbb{R}$ or $\mathbb{C}$ together with an inner product is called an Inner Product Space.
It is nice to have a relatively short list of properties to verify that a function on a vector space is an inner product. Notice though that we only specified additivity and homogeneity in the first slot. In fact, as the following proposition shows, an inner product also has additivity and conjugate homogeneity in the second slot (which can be easily derived from the five properties of inner products already listed).
Proposition 1: If $V$ is an inner product space over the field $\mathbb{F}$ ($\mathbb{R}$ or $\mathbb{C}$) then: a) $<u, v + w> = <u, v> + <u, w>$ for all vectors $u, v, w \in V$ (Additivity in The Second Slot). b) $<u, av> = \overline{a} <u, v>$ for $a \in \mathbb{F}$ and for all vectors $u, v \in V$ (Conjugate Homogeneity in The Second Slot).
The proof of Proposition 1 is relatively straightforward and uses only the five properties listed in the inner product space.
Proof of a):

$$<u, v + w> = \overline{<v + w, u>} = \overline{<v, u> + <w, u>} = \overline{<v, u>} + \overline{<w, u>} = <u, v> + <u, w>$$

Proof of b):

$$<u, av> = \overline{<av, u>} = \overline{a<v, u>} = \overline{a} \: \overline{<v, u>} = \overline{a}<u, v>$$

Thus our proof is complete. $\blacksquare$
The following definition of orthogonality generalizes the criterion under which we defined two vectors in $\mathbb{R}^n$ to be perpendicular (or orthogonal), namely $u \cdot v = 0$.
Definition: If $V$ is an inner product space then a vector $u \in V$ is said to be Orthogonal to $v \in V$ if $<u, v> = 0$.
Note that from the definition of an inner product space we have that $<u, u> = 0$ if and only if $u = 0$. Thus $<0, 0> = 0$, and so $0$ is the only vector that is orthogonal to itself. Furthermore, it is not hard to see that any vector is orthogonal to $0$.
Example 1

Consider the vector space $V$ of $2 \times 2$ matrices with real entries on the main diagonal and zeroes everywhere else. Determine whether or not the function $f(A, B) = \mid \mathrm{tr}(A + B) \mid$ for $A, B \in V$ defines an inner product on $V$.
Let $A$ be defined as $A = \begin{bmatrix} a & 0\\ 0 & -a \end{bmatrix}$ for $a \in \mathbb{R}$ and $a \neq 0$. Then we have that:

$$f(A, A) = \mid \mathrm{tr}(A + A) \mid = \mid 2a + (-2a) \mid = 0$$
However $A$ is not the $2 \times 2$ zero matrix, so the definiteness property fails and $f$ does not define an inner product on $V$. |
The Area of a Parallelogram in 3-Space
Given two vectors $\vec{u} = (u_1, u_2, u_3)$ and $\vec{v} = (v_1, v_2, v_3)$, if we place $\vec{u}$ and $\vec{v}$ so that their initial points coincide, then a parallelogram is formed as illustrated:
Calculating the area of this parallelogram in 3-space can be done with the formula $A= \| \vec{u} \| \| \vec{v} \| \sin \theta$. We will now begin to prove this.
Theorem 1: If $\vec{u}, \vec{v} \in \mathbb{R}^3$, then the area of the parallelogram formed by $\vec{u}$ and $\vec{v}$ can be computed as $\mathrm{Area} = \| \vec{u} \| \| \vec{v} \| \sin \theta$.

Proof: First construct some vectors $\vec{u}$ and $\vec{v}$ in 3-space such that their initial points coincide, and let $\theta$ be the angle between these two vectors. Geometrically, we know that the area of a parallelogram is $A = bh$ where $b$ is the base of the parallelogram and $h$ is the height. Making appropriate substitutions, we see that the base of the parallelogram is the length of $\vec{v}$, that is, its norm $\| \vec{v} \|$. Furthermore, we can calculate the height of this parallelogram using right-triangle properties from the following illustration: we know that $\sin \theta = \frac{\mathrm{opposite}}{\mathrm{hypotenuse}}$, and thus it follows that we need to solve for the opposite side of this constructed triangle (our height). It thus follows that $\sin \theta = \frac{h}{\| \vec{u} \|}$, or more appropriately, $h = \| \vec{u} \| \sin \theta$. Since we now know the base and height of the parallelogram, we can substitute these back into the formula for the area of a parallelogram to get $A = \| \vec{u} \| \| \vec{v} \| \sin \theta$. $\blacksquare$

The Relationship of the Area of a Parallelogram to the Cross Product
As we will soon see, the area of a parallelogram formed from two vectors $\vec{u}, \vec{v} \in \mathbb{R}^3$ can be seen as a geometric representation of the cross product $\vec{u} \times \vec{v}$. First, recall Lagrange's Identity:

$$\| \vec{u} \times \vec{v} \|^2 = \| \vec{u} \|^2 \| \vec{v} \|^2 - (\vec{u} \cdot \vec{v})^2$$
We can instantly make a substitution into Lagrange's formula as we have a convenient substitution for the dot product, that is $\vec{u} \cdot \vec{v} = \| \vec{u} \| \| \vec{v} \| \cos \theta$. Making this substitution, together with the identity $1 - \cos^2 \theta = \sin^2 \theta$, we get that:

$$\| \vec{u} \times \vec{v} \|^2 = \| \vec{u} \|^2 \| \vec{v} \|^2 - \| \vec{u} \|^2 \| \vec{v} \|^2 \cos^2 \theta = \| \vec{u} \|^2 \| \vec{v} \|^2 (1 - \cos^2 \theta) = \| \vec{u} \|^2 \| \vec{v} \|^2 \sin^2 \theta$$
The last step is to take the square root of both sides of this equation. Since the length/norm of a vector is always nonnegative and $\sin \theta ≥ 0$ for $0 ≤ \theta ≤ \pi$, all parts under the square root are nonnegative, therefore:

$$\| \vec{u} \times \vec{v} \| = \| \vec{u} \| \| \vec{v} \| \sin \theta$$
Note that this is the same formula as the area of a parallelogram in 3-space, and thus it follows that $A = \| \vec{u} \times \vec{v} \| = \| \vec{u} \| \| \vec{v} \| \sin \theta$.
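A quick numerical check of this relationship (an illustrative sketch, not from the original page) using NumPy's cross product:

```python
import numpy as np

def parallelogram_area(u, v):
    """Area of the parallelogram spanned by u and v in R^3: ||u x v||."""
    return float(np.linalg.norm(np.cross(u, v)))

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])

# ||u x v|| agrees with ||u|| ||v|| sin(theta)
theta = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
print(parallelogram_area(u, v))  # 2.0 (base 1, height 2)
print(np.isclose(parallelogram_area(u, v),
                 np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta)))  # True
```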
The Area of a Triangle in 3-Space
We note that the area of a triangle defined by two vectors $\vec{u}, \vec{v} \in \mathbb{R}^3$ will be half of the area defined by the resulting parallelogram of those vectors. Thus we can give the area of a triangle with the following formula:

$$\mathrm{Area} = \frac{1}{2} \| \vec{u} \times \vec{v} \| = \frac{1}{2} \| \vec{u} \| \| \vec{v} \| \sin \theta$$
Corollary 1: If $\vec{u}, \vec{v} \in \mathbb{R}^3$, then the area of the triangle formed by $\vec{u}$ and $\vec{v}$ is $\mathrm{Area} = \frac{1}{2} \| \vec{u} \| \| \vec{v} \| \sin \theta$. |
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subject to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ...
and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles
The documentary then showed a bird's eye view of the farmlands
(which pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid. They are the index arrays and they denote the range of indices of the tensor array
In some tiles, there's a swirl of dirt mound; they represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible laying across the grid, holding up a triangle pool of water
Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snaps in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area
I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta}{}_{\gamma,\delta,\epsilon}$$
and then allow the index $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array
while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the values the $\alpha,\beta$ indices run over. For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However, even if the indices take only certain values, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I had liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict into the real world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be measured as if "I would campaign for my caging".
I am interested in computing a normalizing constant (of a Gaussian density in dimension $3$). Such normalizing constants often do not have a closed form. In dimension $2$, this normalizing constant can be computed in closed form.
The integral I would like to compute is :
$$ \int_{\mathbb{R}^{3}} e^{-(r_{1}^{2} + r_{2}^{2} + r_{3}^{2})/2\sigma^{2}} \sinh\Big( \frac{\vert r_{1}-r_{2} \vert}{2} \Big)\sinh\Big( \frac{\vert r_{1}-r_{3} \vert}{2}\Big)\sinh\Big( \frac{\vert r_{2}-r_{3} \vert}{2} \Big)dr_{1}dr_{2}dr_{3} $$
Using the spherical coordinates $(r_{1},r_{2},r_{3}) = (r\sin\varphi\cos\theta,r\sin\varphi\sin\theta,r\cos\varphi)$, I find myself trying to compute the following integral :
$$ \int_{0}^{+\infty}\int_{0}^{2\pi}\int_{0}^{\pi} e^{-r^{2}/2\sigma^{2}} \sinh\Big( \frac{r\sin\varphi\vert \cos\theta - \sin\theta\vert}{2}\Big)\sinh\Big( \frac{r\vert\sin\varphi\cos\theta-\cos\varphi\vert}{2}\Big) \times \sinh\Big(\frac{r\vert\sin\varphi\sin\theta-\cos\varphi\vert}{2}\Big)r^{2}\sin\varphi \, drd\theta d\varphi $$
I'm not an expert but, looking at this integral, I doubt there exists a closed-form expression. |
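Even without a closed form, the integral is finite (the Gaussian decay dominates the exponential growth of the sinh product, whose exponent grows only linearly in the $r_i$) and can be estimated numerically. A Monte Carlo sketch (my own illustration, assuming the exponent is the usual Gaussian $-(r_1^2+r_2^2+r_3^2)/2\sigma^2$ and taking $\sigma = 1$):

```python
import numpy as np

def mc_normalizing_constant(sigma=1.0, n=400_000, seed=0):
    """Monte Carlo estimate of the integral: importance-sample the Gaussian
    factor by drawing r ~ N(0, sigma^2 I_3), average the sinh product over
    the samples, and multiply by the Gaussian mass (2 pi sigma^2)^{3/2}."""
    rng = np.random.default_rng(seed)
    r = rng.normal(scale=sigma, size=(n, 3))
    g = (np.sinh(np.abs(r[:, 0] - r[:, 1]) / 2)
         * np.sinh(np.abs(r[:, 0] - r[:, 2]) / 2)
         * np.sinh(np.abs(r[:, 1] - r[:, 2]) / 2))
    return (2 * np.pi * sigma**2) ** 1.5 * g.mean()

est = mc_normalizing_constant()
print(est)  # a finite positive number; the integrand is nonnegative
```

A deterministic quadrature over the spherical-coordinate form would work too, but the Monte Carlo version is the shortest way to sanity-check any candidate closed form.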
Abstract / Remarks
We use linear estimators to determine the magnitude and direction of the cosmic radio dipole from the NRAO VLA Sky Survey (NVSS) and the Westerbork Northern Sky Survey (WENSS). We show that special attention has to be given to the issues of bias due to shot noise, incomplete sky coverage and masking of the Milky Way. We compare several different estimators and show that conflicting claims in the literature can be attributed to the use of different estimators. We find that the NVSS and WENSS estimates of the cosmic radio dipole are consistent with each other and with the direction of the cosmic microwave background (CMB) dipole. We find from the NVSS a dipole amplitude of $(1.6 \pm 0.6) \times 10^{-2}$ in direction $(\mathrm{RA}, \mathrm{dec})=(154^\circ \pm 21^\circ, -2^\circ \pm 21^\circ)$. This amplitude exceeds the one expected from the CMB by a factor of about 3 and is inconsistent with the assumption of a pure kinetic origin of the radio dipole at 99.5% CL.
Year of publication
2013
Journal title
Astronomy & Astrophysics
Volume
555
Page(s)
A117
ISSN
0004-6361
eISSN
1432-0746
Page URI
https://pub.uni-bielefeld.de/record/2554921
Cite
Rubart M, Schwarz D. Cosmic radio dipole from NVSS and WENSS.
Astronomy & Astrophysics. 2013;555:A117.
|
Literature on Carbon Nanotube Research
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
6 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
7 High-Performance Carbon Nanotube Fiber

Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering, A334, 173-178, 2002

The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strength of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device, on which CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young modulus and bending stiffness. Breaking tension is reached for the MWNT at 150 GPa and between 3.5% and 5% of strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.

Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science, 304, 276-278, 2004

The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like “elastic smoke,” because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows to draw the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science, 306, 1358-1361, 2004

In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/(3L)</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here.
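The formula is easy to evaluate numerically. A sketch (my own illustration, not from the paper; the migration length Q below is an assumed placeholder value, since the paper leaves it sample-dependent):

```python
import numpy as np

def strength_ratio(alpha_deg, d=1e-9, Q=15e-6, mu=0.13, L=30e-6):
    """sigma_yarn / sigma_fiber = cos^2(alpha) * (1 - k / sin(alpha)),
    with k = sqrt(d*Q/mu) / (3*L).  Q (fiber migration length, metres)
    is an assumed illustrative value; d, mu, L are the paper's numbers."""
    alpha = np.radians(alpha_deg)
    k = np.sqrt(d * Q / mu) / (3 * L)
    return np.cos(alpha) ** 2 * (1 - k / np.sin(alpha))

# the ratio decreases as the helix angle grows: twist costs strength
for a in (5, 15, 30, 45):
    print(a, round(float(strength_ratio(a)), 3))
```

With these parameters k is small, so the cos² factor dominates and the predicted yarn retains most of the fiber strength at small helix angles.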
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is bigger than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can choose the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given with 1 micron per minute. Of course for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs.
This article can be found in our archive.
Sustained Growth of Ultralong Carbon Nanotube Arrays for Fiber Spinning
Q. Li et al. have published a paper on a subject that is very close to our hearts: growing long CNTs. The fibers, which we hope have a couple of hundred GPa of tensile strength, can hopefully be spun into the yarns that will make our SE ribbon, and the longer they are, the better. In the paper the method of chemical vapour deposition (CVD) onto a catalyst-covered silicon substrate is described, which appears to be the leading method in the publications after 2004. This way a CNT "forest" is grown on top of the catalyst particles. The goal of the authors was to grow CNTs that are as long as possible. They found that the growth was terminated in earlier attempts by the iron catalyst particles interdiffusing with the substrate. This can apparently be avoided by putting an aluminium oxide layer of 10nm thickness between the catalyst and the substrate. With this method the CNTs grow to an impressive 4.7mm! Also, in a range from 0.5 to 1.5mm fiber length the forests grown with this method can be spun into yarns.
The growth rate with this method was initially <math>60{\rm \mu m\ min.^{-1}}</math> and could be sustained for 90 minutes. The growth was prolonged by the introduction of water vapour into the mixture, which achieved the 4.7mm after 2h of growth. By introducing periods of restricted carbon supply, the authors produced CNT forests with growth marks. This allowed them to determine that the forest grew from the base. This is in line with the in situ observations by S. Hofmann et al. (2007).
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made in time-lapse transmission electron microscopy (TEM) and in x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle, due to its inherent shape, as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle, which is prevented by the catalyst sitting on the silicon substrate, the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle is not enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and thus drives the growth process mechanically.
Of course for the SE community the most interesting question raised by this paper is: can we grow CNTs long enough to spin them into a yarn that would hold the 100GPa/g/ccm? In this regard the key issue is the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear out, the growth could be sustained as long as the catalyst/substrate interface remains accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science, 318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers from an aerogel (a low-density, porous, solid material) of SWNTs and MWNTs formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length that have been extracted from the aerogel at high winding rates (20 metres per minute). Higher winding rates appear desirable, but the authors could not achieve them: the limit of extraction speed from the aerogel was reached, and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that the fibers typically split into two performance classes: low-performance fibers at a few GPa and high-performance fibers at around 6.5 GPa. It should be noted that all tensile strengths are given in the paper in GPa/SG, where SG is the specific gravity, i.e. the density of the material divided by the density of water. SG was around 1 for most samples discussed in the paper. The two performance classes are interpreted by the authors as the typical outcome of producing high-strength fibers: since fibers break at their weakest point, a sample will contain some fibers with no weak point and some with one or more, provided the fiber length is of the order of the spacing between weak points. This is confirmed by the fact that among the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm fiber is 20 times higher than on a 1mm fiber.
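The weakest-link statistics behind the two performance classes are easy to sketch numerically. Assuming weak points occur independently at a constant rate per millimetre (the rate used below is illustrative, not a figure from the paper), the flaw-free probability falls off exponentially with length:

```python
import math

def flaw_free_probability(length_mm, flaws_per_mm):
    """P(no weak point in a fiber of the given length), assuming weak points
    occur independently at a constant rate per mm (a Poisson weakest-link model)."""
    return math.exp(-flaws_per_mm * length_mm)

p_1mm = flaw_free_probability(1, 0.1)    # ~0.905
p_20mm = flaw_free_probability(20, 0.1)  # ~0.135, which equals p_1mm ** 20
```

With these illustrative numbers a 1mm fiber is flaw-free about 90% of the time, while a 20mm fiber rarely is, mirroring the disappearance of the high-performance class at 20mm.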
In conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength better than 3GPa using the proposed method is enormous. This brings us back to the ribbon design proposed on the Wiki: use just cm-long fibers and interconnect them with load-bearing structures (perhaps also CNT threads). We have then shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case.
If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate.
When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does not have to be semi-simple. Recall e.g. that $U(1)$ is by definition not a simple Lie group; its Killing form is identically zero. Nevertheless, we have the following YM-type theories:
QED with $G=U(1)$.
the Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
Also, the gauge group $G$ does not, in principle, have to be compact. This post imported from StackExchange Physics at 2015-01-19 14:11 (UTC), posted by SE-user Qmechanic
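To illustrate the point about semi-simplicity numerically (my own sketch, not part of the original answer), one can compute the Killing form $K(X,Y)=\mathrm{tr}(\mathrm{ad}_X\,\mathrm{ad}_Y)$ directly from structure constants: for $su(2)$ it comes out as $-2\delta_{ab}$, so $-K$ is positive definite (a compact group), while for $u(1)$ it vanishes identically, which is why $\kappa$ must be chosen by hand there:

```python
import numpy as np

def killing_form(f):
    """Killing form K_ab = tr(ad_a ad_b) from structure constants
    f[a][b][c] = f^c_{ab} (so [e_a, e_b] = f^c_{ab} e_c)."""
    dim = f.shape[0]
    ad = np.array([f[a].T for a in range(dim)])  # (ad_a)^c_b = f^c_{ab}
    return np.array([[np.trace(ad[a] @ ad[b]) for b in range(dim)]
                     for a in range(dim)])

# su(2): structure constants are the Levi-Civita symbol epsilon_abc
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

K_su2 = killing_form(eps)                 # -2 * identity: negative definite
K_u1 = killing_form(np.zeros((1, 1, 1)))  # identically zero: degenerate
```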
Table of Contents
The Interior of a Set under Homeomorphisms on Topological Spaces
Recall from the Homeomorphisms on Topological Spaces page that if $X$ and $Y$ are topological spaces then a bijective map $f : X \to Y$ is said to be a homeomorphism if it is continuous and open.
Furthermore, if such a homeomorphism exists then we say that $X$ and $Y$ are homeomorphic and write $X \simeq Y$.
We will begin by looking at a nice topological property which says that if $f$ is a homeomorphism from $X$ to $Y$ and if $A$ is a subset of $X$ then the image of the interior of $A$ is equal to the interior of the image of $A$.
Theorem 1: Let $X$ and $Y$ be topological spaces, let $f : X \to Y$ be a homeomorphism, and let $A \subseteq X$. Then $f(\mathrm{int}(A)) = \mathrm{int}(f(A))$. Proof: Let $x \in f(\mathrm{int}(A))$. Then $f^{-1}(x) \in \mathrm{int}(A)$, so $f^{-1}(x)$ is an interior point of $A$, and so there exists an open neighbourhood $U$ in $X$ of $f^{-1}(x)$ such that $f^{-1}(x) \in U \subseteq A$. Hence we have that $x \in f(U) \subseteq f(A)$. Since $f$ is a homeomorphism and $U$ is open in $X$ we have that $f(U)$ is open in $Y$, so $f(U)$ is an open neighbourhood of $x$ contained in $f(A)$. Hence $x$ is an interior point of $f(A)$, so $x \in \mathrm{int}(f(A))$, and therefore $f(\mathrm{int}(A)) \subseteq \mathrm{int}(f(A))$. Now let $x \in \mathrm{int}(f(A))$. Then $x$ is an interior point of $f(A)$ and so there exists an open neighbourhood $V$ in $Y$ of $x$ such that $x \in V \subseteq f(A)$. Therefore $f^{-1}(x) \in f^{-1}(V) \subseteq A$. Since $f$ is a homeomorphism and $V$ is open in $Y$ we have that $f^{-1}(V)$ is open in $X$. Therefore $f^{-1}(V)$ is an open neighbourhood of $f^{-1}(x)$ contained in $A$, and therefore $f^{-1}(x) \in \mathrm{int}(A)$, so $x \in f(\mathrm{int}(A))$. Therefore $\mathrm{int}(f(A)) \subseteq f(\mathrm{int}(A))$. We hence conclude that $f(\mathrm{int}(A)) = \mathrm{int}(f(A))$. $\blacksquare$
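On small finite topological spaces, Theorem 1 can be checked mechanically. The sketch below uses the Sierpiński space and the obvious bijection between two copies of it (my own illustration, not part of the page):

```python
from itertools import chain

def interior(topology, A):
    """Interior of A: the union of all open sets contained in A."""
    return frozenset(chain.from_iterable(U for U in topology if U <= A))

# Sierpinski space X = {0, 1} with opens {}, {0}, {0, 1};
# Y = {'a', 'b'} with the corresponding topology; f(0)='a', f(1)='b'
tau_X = [frozenset(), frozenset({0}), frozenset({0, 1})]
tau_Y = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]
f = {0: 'a', 1: 'b'}
image = lambda S: frozenset(f[x] for x in S)

A = frozenset({1})
lhs = image(interior(tau_X, A))  # f(int(A))
rhs = interior(tau_Y, image(A))  # int(f(A))
# both are empty, since {1} contains no nonempty open set
```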
Problem Statement
How big do my hash buckets need to be? What kind of a question is that anyway?
Let's stand back a bit...
As every computer scientist ought to know, a hash table is a wonderfully efficient, O(1), way to look something up in a table. The idea is simple. For each thing you want to look up, you calculate a hash, which is simply a way to scrunch together the bits of its key such that they all get mixed together and any hash value is as likely as any other. The range of the hash should equal the number of entries in the table, so you can use it to index into the table. That is where your object should be. Simple.
The complication is that it's possible for two keys to hash to the same value. Classically, there are two ways to deal with this:
1. Just go to the next entry, and if that is full, the next one, and so on. This works, but gives disastrous performance if the table gets anywhere close to full. There are techniques for improving this, or at least making it less disastrous, for example quadratic rehash. But it is never good.
2. Instead of keeping the actual values in the table, keep a pointer. Then when collisions occur, just chain together the values in a linked list. This works well and doesn't have the performance problems of the first method, but it is seriously cache unfriendly since the chances are that the chained overflow entries won't be in cache, and there is no easy way to prefetch them.
Our system uses a hash table to associate incoming network packets with other packets from the same TCP or UDP session (a flow), using the IP addresses and TCP/UDP ports as a key. This is intense real-time code, intended to handle up to 15 million packets per second. This kind of performance requires attention to the tiniest details. For example, absolutely everything has to be in level one cache before it is accessed.
To achieve this, we use a different scheme to handle hash collisions. Each entry in the table is a fixed-size bucket. If two or more flows collide to the same bucket, they are placed in successive slots in the bucket. It's easy to prefetch a whole bucket, so this is cache friendly. As long as the buckets are big enough, this always gives excellent performance.
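A toy sketch of the bucket scheme (in Python for brevity; the real system is of course tightly optimized native code, and the bucket size here is an arbitrary choice):

```python
# Fixed-size-bucket hash table: colliding keys occupy successive slots of
# one bucket, and an insert is dropped if the bucket is already full.
BUCKET_SIZE = 4

class BucketedHashTable:
    def __init__(self, num_buckets):
        self.num_buckets = num_buckets
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        bucket = self.buckets[hash(key) % self.num_buckets]
        if len(bucket) >= BUCKET_SIZE:
            return False  # bucket full: in the packet system, the flow is dropped
        bucket.append((key, value))
        return True

    def lookup(self, key):
        bucket = self.buckets[hash(key) % self.num_buckets]
        for k, v in bucket:  # scan the small, prefetch-friendly bucket
            if k == key:
                return v
        return None
```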
Which brings us back to the first question - how big do the buckets need to be? If there is no free slot in a bucket when a new flow arrives for it, the packets of the flow are simply dropped. So the probability of this happening has to be extremely low, so low that the risk is no higher than for all the other bad things that can happen to a packet in a network.
We could always make the buckets so enormously huge that this just never happens. But there is a cost both in memory and in performance - since a bucket has to be prefetched even if most of it is empty. So we really need to know the optimum size.
So the question can be stated: given a hash table of size k, and a number of concurrent flows n occupying it, what is the probability that slot b in a bucket will be occupied? In the text that follows, f is the load factor of the table, i.e. n/k.

Results
The answer to the question is given by the following formula, where f = n/k:
\[P(n,k,b) < e^{-f}\cdot \frac{f^{b}}{b!}\frac{b}{b-f}\]
This formula involves several approximations, and is a close upper bound on the actual probability. To know whether a bucket of size b will overflow, we need to know whether the (b+1)th slot will be full, i.e. the first slot beyond the bucket size.
The following table gives the probability of bucket overflow, for various bucket sizes and various load factors. (Some values are not present for small bucket sizes because the above approximation is not applicable when the load factor is greater than the bucket size).
Bucket Size   Load 0.1    0.5         1           1.5         2
 1            0.0050269   0.1516327
 2            0.0001587   0.0168481   0.1226265   0.5020429
 3            0.0000039   0.0018954   0.0229925   0.0941330   0.2706706
 4            0.0000001   0.0001805   0.0040875   0.0225919   0.0721788
 5            0.0000000   0.0000146   0.0006387   0.0050428   0.0200497
 6            0.0000000   0.0000010   0.0000876   0.0010086   0.0051556
 7            0.0000000   0.0000001   0.0000106   0.0001805   0.0012030
 8            0.0000000   0.0000000   0.0000012   0.0000291   0.0002546
 9            0.0000000   0.0000000   0.0000001   0.0000043   0.0000491
10            0.0000000   0.0000000   0.0000000   0.0000006   0.0000087
11            0.0000000   0.0000000   0.0000000   0.0000001   0.0000014
12            0.0000000   0.0000000   0.0000000   0.0000000   0.0000002

The probability of at least one bucket being of this size depends on the table size as well. So for example, in a filled table of 10,000 entries and bucket size of 7, there is a probability of about 0.1 that a bucket will overflow (i.e. need to be size 8 or greater), which is probably acceptable. But if the table size is increased to 1,000,000, that probability increases to practically 1. To achieve the same 0.1 probability as for the smaller table, the bucket size must be increased to 9.
To be precise, the probability that at least one bucket in a table of size k, having n entries and a bucket size of b, will overflow, is given by:
\[ p_{overflow}(n,k,b)=1-(1-p(n,k,b))^{k} \]

The following table shows the probability of at least one bucket becoming full, for the given table and bucket sizes, at a load factor of 1 (i.e. n=k).
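A quick sketch of this compound probability, using an illustrative per-bucket value:

```python
def p_overflow(k, p_bucket):
    """Probability that at least one of k buckets overflows, treating the
    per-bucket overflow probability p_bucket as independent across buckets."""
    return 1.0 - (1.0 - p_bucket) ** k

# The same per-bucket probability becomes a near-certainty in a larger table:
small = p_overflow(10_000, 1e-5)     # ~0.095
large = p_overflow(1_000_000, 1e-5)  # ~0.99995
```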
Bucket Size   Table Size 1000   10000      100000     1000000    10000000
 2            1.000000          1.000000   1.000000   1.000000   1.000000
 3            1.000000          1.000000   1.000000   1.000000   1.000000
 4            0.983360          1.000000   1.000000   1.000000   1.000000
 5            0.472119          0.998320   1.000000   1.000000   1.000000
 6            0.083867          0.583530   0.999843   1.000000   1.000000
 7            0.010588          0.100977   0.655090   0.999976   1.000000
 8            0.001158          0.011519   0.109400   0.686076   0.999991
 9            0.000114          0.001140   0.011340   0.107787   0.680341
10            0.000010          0.000102   0.001023   0.010188   0.097333
11            0.000001          0.000008   0.000084   0.000844   0.008413
12            0.000000          0.000001   0.000006   0.000064   0.000644

Derivation: Some Basics
There's a well known analysis concerning the number of empty buckets you can expect to find in a hash table. Considering a single bucket, we ask: what is the probability that not a single entry will have hit it? This is the same as asking whether every entry has hit somewhere else. For a single entry, this probability is (k-1)/k, so for every entry to have hit somewhere else, the probability is:
\[p(n,k,0)=\left ( \frac{k-1}{k} \right )^{n} \]
We'll use the notation p(n,k,b) to mean the probability that a bucket in a table of size k with n entries has exactly b slots used - in this case, b is zero. With a bit of approximation, this can be simplified to:
\[p(n,k,0)= e^{-f} \]
In particular, for a table where n = k, i.e. f = 1, this is 1/e, or about 0.368. So in a table of size 1,000,000, with that many entries, about 368,000 will be empty. The rest will have at least one entry. How many will have more than one?
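The 0.368 figure can be confirmed directly from the exact expression, without the exponential approximation:

```python
import math

# Expected fraction of empty buckets: ((k-1)/k)**n, which tends to e**(-f).
k = n = 1_000_000
exact = ((k - 1) / k) ** n
approx = math.exp(-1)  # f = n/k = 1
# exact agrees with 1/e = 0.3678794... to about seven decimal places
```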
We can consider the probability of a single key finding itself alone in a single bucket. This is the probability that it will go to the bucket, 1/k, times the probability that nobody else will, which is ((k-1)/k)^(n-1). Now we have to consider that for every single entry, so we multiply it by n. This gives:
\[p(n,k,1)=\left ( \frac{k-1}{k} \right )^{n-1}\cdot \frac{1}{k} \cdot n \]

which can be simplified, with a little approximation, to:

\[p(n,k,1)= f e^{-f} \]

For the particular case of n = k, this gives exactly the same result as for an empty bucket: 0.368.

The Equation We're Looking For
All we need to do now is to generalise the above result to cover b entries. We use the same logic: considering any b specific keys, what is the probability that they will end up together, and with no other keys, in a given bucket? The probability that no other keys will be there is ((k-1)/k)^(n-b), while the probability that these b keys will all end up in this particular bucket is k^(-b). Then we need to multiply this by the number of ways of choosing b keys out of n, which is given by nCb. Putting this all together gives:

\[p(n,k,b)=\left ( \frac{k-1}{k} \right )^{n-b}\cdot k^{-b}\cdot \binom{n}{b} \]
This can be simplified, again with a little approximation, to:
\[p(n,k,b) = e^{-f}\cdot \frac{f^{b}}{b!}\]
This tells us the probability that a bucket will have exactly b entries. What we need, though, is the probability that it will have b or more entries. Calling this function P, we have:

\[P(n,k,b)=\sum_{i=b}^{\infty}p(n,k,i)=\sum_{i=b}^{\infty}e^{-f}\cdot \frac{f^{i}}{i!}\]
There is no closed form for this sum, but there is a good upper bound:

\[P(n,k,b) < e^{-f}\cdot \frac{f^{b}}{b!}\frac{b}{b-f}\]
The Math: Obtaining the Simplified Formulae
First, recall that by the definition of e:
\[ e^{x}=\lim_{k\rightarrow \infty }\left ( 1+\frac{1}{k} \right )^{xk} \]
Hence, substituting -k for k, and considering that k is close enough to infinity:
\[ \left ( \frac{k-1}{k} \right )^{n}=\left ( 1-\frac{1}{k} \right )^{k\cdot \frac{n}{k}}=\left ( 1+\frac{1}{k} \right )^{-k\cdot \frac{n}{k}}\simeq e^{-f} \]
Now consider the equation for b = 1:
\[ p(n,k,1)=\left ( \frac{k-1}{k} \right )^{n-1}\cdot \frac{1}{k}\cdot n \]
We rearrange things to get the same approximation for e^(-f), and observe that since k is large, (k-1)/k is approximately 1. And so:
\[\left ( \frac{k-1}{k} \right )^{n-1}\cdot \frac{1}{k}\cdot n\simeq \left ( \frac{k-1}{k} \right )^{n}\cdot \left ( \frac{k-1}{k} \right )^{-1}\cdot \frac{n}{k} \simeq f e^{-f} \]
For the general case, b > 1:

\[p(n,k,b)=\left ( \frac{k-1}{k} \right )^{n-b}\cdot k^{-b}\cdot \binom{n}{b} \]
we consider first the combinatorial term on the right. Since n ≫ b, we can treat each factor in the numerator as n:
\[\binom{n}{b}=\frac{n!}{(n-b)!b!}=\frac{n(n-1)\ldots (n-b+1)}{b!}\simeq \frac{n^{b}}{b!}\]
Incorporating this approximation and noting as before that (k-1)/k can be treated as 1, and that n/k is the same as f:
\[\left ( \frac{k-1}{k} \right )^{n-b}\cdot k^{-b}\cdot \binom{n}{b}\simeq \left ( \frac{k-1}{k} \right )^{n}\cdot \left ( \frac{k-1}{k} \right )^{-b}\cdot k^{-b}\cdot \frac{n^{b}}{b!} \simeq e^{-f}\cdot \frac{f^{b}}{b!}\]
Considering now the required value, the probability that b or more slots are used, this is given by:
\[P(n,k,b)=\sum_{i=b}^{\infty}p(n,k,b)=\sum_{i=b}^{\infty}e^{-f}\cdot \frac{f^{i}}{i!}=e^{-f}\cdot\sum_{i=b}^{\infty}\frac{f^{i}}{i!}\]
There is no closed form for such a sum. But we can obtain an upper bound by writing the i-th term of the sum as t_i, and noting that:

\[t_{i}=t_{i-1}\cdot \frac{f}{i}<t_{i-1}\cdot \frac{f}{b}\:\:\:\:\text{(since }\textit{b}<\textit{i}\text{)}\]

Hence by induction:

\[t_{i}<\frac{f^b}{b!}\cdot\frac{f^{i-b}}{b^{i-b}}\]

And hence:

\[P(n,k,b)<e^{-f}\frac{f^b}{b!}\sum_{i=0}^{\infty}\frac{f^{i}}{b^{i}}\]

And so by the usual formula for an infinite geometric progression:

\[P(n,k,b)<e^{-f}\cdot\frac{f^b}{b!}\cdot\frac{1}{1-\frac{f}{b}}=e^{-f}\cdot\frac{f^b}{b!}\cdot\frac{b}{b-f}\]
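The approximations used in this derivation are easy to check numerically; for instance, the replacement of the binomial coefficient by n^b/b! when n ≫ b:

```python
import math

# For n >> b, C(n, b) = n(n-1)...(n-b+1)/b! is well approximated by n**b / b!.
n, b = 1_000_000, 8
exact = math.comb(n, b)
approx = n**b / math.factorial(b)
rel_error = (approx - exact) / exact  # roughly b*(b-1)/(2n): tiny here
```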
Modes
The term mode has varying meanings according to context, but the most common usage refers to the permitted modes in amateur licensing.
Waves have three characteristics that can be changed: amplitude, frequency and phase. A mode is a way of changing (modulating) electromagnetic waves so that transmission of information is possible. Modulating signals can be either analogue (for example sound) or digital (for example simple binary on-off).

Analogue Modulation methods
There are two main analogue modes, or methods of modulation: Amplitude Modulation (AM), in which the phasor amplitude changes, and Angle Modulation, in which the phasor angle changes.
Double Sideband (DSB), Single Sideband (SSB) and Vestigial Sideband (VSB) are all forms of AM. Frequency Modulation (FM) and Phase Modulation (PM) are both forms of Angle Modulation.
Amplitude Modulation (AM)
The transceiver produces a carrier wave at the frequency of transmission. Voice is superimposed on the carrier wave, and alters its shape by changing the amplitude, or height, of the wave. Hence the frequency and wavelength of the carrier do not change with this form of modulation.
See Amplitude Modulation on Wikipedia for more information.
Double-Sideband Modulation (DSB)
Double Sideband is what's usually meant when people talk about AM. In DSB transmissions, the message signal is transmitted in two sidebands, one being the mirror image of the other. The carrier may be either transmitted at full power (DSB-FC), at reduced power (DSB-RC), or completely eliminated (DSB-SC).
Conceptually, the power level at the carrier frequency equates to the DC bias in the input signal. Mathematically, it looks something like this:
<math>x(t) = (A + m(t))\cos( 2 \pi f_c t + \phi )</math>
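The two sidebands can be made visible numerically: modulating a carrier with a single tone and taking an FFT shows energy at the carrier frequency and at f_c ± f_m (the frequencies below are arbitrary round numbers chosen for the demo):

```python
import numpy as np

# DSB-FC in discrete time: one second of samples gives 1 Hz FFT bins.
fs, N = 8000, 8000
t = np.arange(N) / fs
f_c, f_m, A = 1000, 100, 1.0
m = 0.5 * np.cos(2 * np.pi * f_m * t)          # single-tone message
x = (A + m) * np.cos(2 * np.pi * f_c * t)      # x(t) = (A + m(t)) cos(2 pi f_c t)

spectrum = np.abs(np.fft.rfft(x)) / N
peaks = set(np.flatnonzero(spectrum > 0.05))   # bins with significant energy
# -> {900, 1000, 1100}: lower sideband, carrier, upper sideband
```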
Single-Sideband Modulation (SSB)
Single sideband is what you get if you take a DSB-SC signal and pass it through a sharp high-pass or low-pass filter to reject the unwanted sideband. It may be generated with such a filter, or by using a Hartley modulator, which cancels out the unwanted sideband through the use of a Hilbert transform.
Frequency Modulation (FM)
The transceiver produces a carrier wave, in the same way as for Amplitude Modulation. In this case however, voice is added to the carrier so that its frequency changes. This in turn affects the wavelength of the carrier, but the amplitude remains constant.
See Frequency Modulation on Wikipedia for more information.
<math>x(t) = \cos( 2 \pi ( f_c + \Delta f\, m(t) ) t + \phi )</math>
Phase Modulation (PM)
This mode is seldom used in amateur radio. It's very similar to FM, but rather than the frequency changing, it's the phase of the signal that changes according to the modulating signal.
<math>x(t) = \cos( 2 \pi ( f_c t + \Delta \phi m(t) ) )</math>
The well-known PSK31 digital mode is a form of phase modulation.
In analogue radio, phase modulation differs only very slightly from frequency modulation, as mathematically frequency is the rate of change of phase. In FM, the output frequency deviation is directly proportional to the input signal amplitude only. A PM signal looks much like an FM signal except that the output frequency deviation is proportional to both the input signal amplitude and the input frequency. An FM receiver will therefore receive a PM signal, but high frequencies in the audio will sound pre-emphasised, by 6dB/octave.
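This relationship, frequency as the rate of change of phase, can be demonstrated numerically: phase-modulating with a message m(t) produces the same signal as frequency-modulating with the derivative of m(t) (a sketch with arbitrary demo parameters):

```python
import numpy as np

fs = 10_000
t = np.arange(fs) / fs
f_c, dphi = 100.0, 2.0
m = np.sin(2 * np.pi * 5 * t)  # message, m(0) = 0

# Phase modulation: phase deviation proportional to m(t)
pm = np.cos(2 * np.pi * f_c * t + dphi * m)

# "FM" whose frequency deviation follows dm/dt: integrating the derivative
# recovers the same phase, so the two waveforms coincide
dm_dt = np.gradient(m, t)
phase = 2 * np.pi * f_c * t + dphi * np.cumsum(dm_dt) / fs
fm_of_derivative = np.cos(phase)
```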
Lesser known modes

Quadrature amplitude modulation (QAM). In this mode, two carrier waves, 90° out of phase with each other, are produced. QAM is a variant of AM, in which each of these two carriers is modulated by a separate audio signal. QAM or a variant thereof has been used in many AM stereo broadcast radio systems, as well as for the colour subcarrier on analogue fast-scan TV transmissions.

Digital modulation
Technically, whenever a signal is turned on and off to enable transmission of information, it can be considered to be a digital mode. Under this definition, CW is certainly a digital mode. This section refers to methods of transmitting and receiving (rather than modulating) that are digital, or that require digital processing in part of the transmission or receiving process.
Amplitude Shift keying (ASK)
Also known as Off/On Keying (OOK).
Amplitude-shift keying (ASK) is a form of modulation in which digital data is sent as variations in the amplitude of a carrier wave. In this mode there are two states, carrier on and carrier off, hence the name off/on keying.
ASK is sensitive to atmospheric noise, distortions and propagation conditions. Because light can be controlled to have two states, on and off, ASK is also commonly used to transmit digital data over optical fiber.
See Wikipedia Amplitude Shift Keying for more information.
Continuous Wave (CW)
A continuous wave is an electromagnetic wave of constant amplitude and frequency, a pure carrier, and information is carried by turning the wave on and off, and measuring the interval. Morse code is often transmitted using CW.
QRSS - Slow Morse
The term QRSS comes from the Morse Code Q-Code QRS which means either "Shall I send more slowly?" or "Please send more slowly"
In practice QRSS has a dot time-length of 3 seconds or more, and occupies a very narrow bandwidth - as low as 1Hz.
QRSS signals are monitored by "grabbers" such as those listed on this site from I2NDT.
This clipboard has probably the most up to date information about qrss beacons and grabbers.
Kits can be obtained here.
See Wikipedia Continuous Wave for more information.
Frequency Shift Keying (FSK)
The frequency of the carrier is varied according to a digital signal.
See Wikipedia Frequency Shift Keying for more information.
MFSK - Multiple Frequency Shift Keying
In MFSK, data is sent using many different tones. MFSK is used by several digital modes including MFSK16, Throb, Olivia, ALE and Domino. The advantages of MFSK compared to other FSK modes are:

good noise rejection
low propagation distortion
fewer effects from multi-pathing
low error rates
Some limitations of MFSK are:

high stability transceivers are required for effective transmission and reception (exceptions: Olivia and Domino, which are very drift tolerant)
some interference effects from ionospheric multipathing
some interference from constant carrier signals
External links to MFSK sites:

ALE Automatic Link Establishment. Information and downloads.
DominoEX has plenty of info and download links.
Olivia includes information, download links and frequencies used.
ROS digital Download the latest version and contribute to the blog.
Throb screenshot and download link.

AMTOR Amateur Teleprinting Over Radio
Also known as SITOR in its commercial form.
AMTOR comes in two types, Type A and Type B.
Type A: information is repeated if requested by the receiving station. This is known as ARQ (Automatic ReQuest).
Type B: AMTOR B uses FEC (Forward Error Correction) to ensure data is transmitted with as little loss as possible. This is accomplished by sending each character twice, with three seconds between each send.
Wikipedia article [1]
CLOVER
CLOVER is a PSK mode which provides a full-duplex link. There are two CLOVER variants, CLOVER I and CLOVER II. Perhaps the most interesting characteristic of CLOVER is that it adapts to conditions by constantly monitoring the received signal and changing the modulation scheme in response.
G-TOR
G-TOR (Golay-TOR) takes its name from the Golay error-correcting code invented by M. Golay, which was used to transmit the early Jupiter and Saturn space mission pictures back to Earth. It is an FSK mode with a faster transfer rate than Pactor. To minimize the effects of atmospheric noise, a data interleaving process is employed. This has the added advantage that garbled data can be decoded. G-TOR is a proprietary mode developed by Kantronics and is rarely used by radio amateurs. Some features:

a 16-bit CRC (Cyclic Redundancy Check): the transmitting station sends a checksum with the data; the receiving station compares the checksum with the received data, and can request either new data, a resend of data, or a change of baud rate
baud rates of 100, 200 or 300 to suit varying conditions

PACTOR
PACTOR is a hybrid of Packet and AMTOR
Designed by Peter, DL6MAA and Ulrich, DF4KV as an alternative to both AMTOR and packet. It has three incarnations: PACTOR I, PACTOR II and PACTOR III. These are effective under weak-signal and high-noise conditions. PACTOR is not commonly used by amateurs.
Radio Teletype (RTTY)
RTTY or "Radio Teletype" is an FSK mode that has been in use longer than any other digital mode except for morse code.
In its original form, RTTY was a very simple technique which used a five-bit Baudot code to represent all the letters of the alphabet, the numbers, some punctuation and some control characters. Transmissions were at approximately 60 wpm (words per minute). More recent implementations operate at higher bitrates using the same ASCII code used for standard computer data.
Because there is no error correction provided in RTTY, noise and interference can have a serious impact on transmissions. RTTY is still popular with many radio amateurs.
RTTY frequencies
As a general rule of thumb, RTTY is usually found between 80kHz and 100kHz up from the lower edge of each band, except on 160M and 80M.

160M - 1800 to 1820 (RTTY is rare on this band)
80M - 3580 to 3650
40M - 7080 to 7100 (differs from region to region)
30M - 10110 to 10150
20M - 14080 to 14099
17M - 18095 to 18109
15M - 21080 to 21100
12M - 24915 to 24929
10M - 28080 to 28100
6M - 50300 AFSK and 50600 FSK
A listing of RTTY frequencies used for weather, ham radio bulletins and for other purposes can be found here
Packet

Data is transferred from transceiver to receiver in packets, or groups of data bits. Typically, this involves connecting the transceiver audio to a terminal node controller, which handles demodulated modem signals and provides some level of automated error correction. APRS (automatic packet reporting system, automatic position reporting system) is one example of a packet radio system; it is commonly used to provide weather beacons and GPS tracking for unmanned craft such as weather balloons.
This mode was popular in the late 1970s and early 1980s, but has decreased in use since then.
Phase Shift Keying (PSK)
The phase of the carrier is modulated by a digital signal. In its simplest terms, this could mean for example that the phase of the carrier is turned through 180° with each change in the digital signal. In practical terms, PSK allows long distance communication even when noise levels are high.
PSK Reporter reports who has been seen using PSK.
Common PSK frequencies
(subject to propagation characteristics.)
BAND     FREQ MHz
160M     1.838
80M      3.580
40M      7.035 (DX)
40M      7.070 (US)
30M      10.140 (DX)
30M      10.142 (US)
20M      14.070
17M      18.100
15M      21.070
15M      21.080
12M      24.920
10M      28.120
6M       50.290
2M       144.150
1.25M    222.070
70cm     432.200
33cm     909.000

Digital modes in practice
The licensing regime defines digital modes as those modulation techniques that require digital data processing. In Australia, refer to the ACMA LCD (Licence Conditions Determination) for exact details. You will need to scroll down the page to find the link.
To get on air in digital modes, an SSB transceiver is normally used, coupled to a computer via a so-called interface.
As a minimum the interface requires 4 signals from the transceiver:
Audio in - where you would connect the microphone for SSB.
Audio out - for the loudspeaker.
PTT.
Ground.
On the computer side you need corresponding signals from the computer's sound-card or integrated sound system:
Audio out - the audio generated by a digital modes program on transmit.
Audio in - either the "microphone-in" or the "line-in" connector of the sound-card.
PTT - often the RTS or DTR signal of a serial port is used for this. PTT can also be generated within the "interface" by a VOX-like circuit.
Ground.
The computer has to run a digital modes program - see the digimodes software page.
What do digital modes sound like? Click here to find out. G4UCJ has a useful site with screenshots of digital modes at G4UCJ's Radio Website.

VOIP (Voice Over IP) Modes
VOIP is not considered by some hams to be a "true" ham mode. These modes rely to some extent on the internet to transfer voice from one station to another. For example, IRLP users transmit by radio into a "node"; their voice is transferred via the internet to another node, from where it is transmitted by another radio to the receiving station. The link between radios is therefore via the internet.
One advantage of these modes is that hams in restricted communities can have contacts with hams in far distant lands with basic equipment.
Some VOIP modes, for example CQ100 do not require a radio at all as the "rig" is software created and driven.
All VOIP modes require some ham radio certification.

See also

Repeater listings
APRS
D-Star
Emission Classification
Packet
Slow-Scan Television (SSTV)
Fast-Scan Television (ATV)
Optical communications
WSPR
WSJT
Software
QRP
Skills to Develop
Use multiplication notation
Model multiplication of whole numbers
Multiply whole numbers
Translate word phrases to math notation
Multiply whole numbers in applications

Use Multiplication Notation
Suppose you were asked to count all these pennies shown in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\)
Would you count the pennies individually? Or would you count the number of pennies in each row and add that number \(3\) times?
\[8 + 8 + 8 \nonumber\]
Multiplication is a way to represent repeated addition. So instead of adding \(8\) three times, we could write a multiplication expression.
\[3 \times 8 \nonumber \]
We call each number being multiplied a factor and the result the product. We read \(3 × 8\) as three times eight, and the result as the product of three and eight.
There are several symbols that represent multiplication. These include the symbol × as well as the dot, • , and parentheses ( ).
Operation Symbols for Multiplication
To describe multiplication, we can use symbols and words.
Operation        Notation   Expression   Read as             Result
Multiplication   ×          3 × 8        three times eight   the product of 3 and 8
                 •          3 • 8
                 ()         3(8)
Example \(\PageIndex{1}\): translate
Translate from math notation to words:
\(7 × 6\)
\(12 · 14\)
\(6(13)\)

Solution

We read this as seven times six and the result is the product of seven and six.
We read this as twelve times fourteen and the result is the product of twelve and fourteen.
We read this as six times thirteen and the result is the product of six and thirteen.
Exercise \(\PageIndex{1}\)
Translate from math notation to words:
\(8 × 7\) \(18 • 11\) Answer a
eight times seven ; the product of eight and seven
Answer b
eighteen times eleven ; the product of eighteen and eleven
Exercise \(\PageIndex{2}\)
Translate from math notation to words:
\((13)(7)\) \(5(16)\) Answer a
thirteen times seven ; the product of thirteen and seven
Answer b
five times sixteen; the product of five and sixteen
Model Multiplication of Whole Numbers
There are many ways to model multiplication. Unlike in the previous sections where we used base-\(10\) blocks, here we will use counters to help us understand the meaning of multiplication. A counter is any object that can be used for counting. We will use round blue counters.
Example \(\PageIndex{2}\): model
Model: \(3 × 8\).
Solution
To model the product \(3 × 8\), we’ll start with a row of \(8\) counters.
The other factor is \(3\), so we’ll make \(3\) rows of \(8\) counters.
Now we can count the result. There are \(24\) counters in all.
\[3 \times 8 = 24 \nonumber \]
If you look at the counters sideways, you’ll see that we could have also made \(8\) rows of \(3\) counters. The product would have been the same. We’ll get back to this idea later.
Exercise \(\PageIndex{3}\)
Model each multiplication: \(4 × 6\).
Answer
Exercise \(\PageIndex{4}\)
Model each multiplication: \(5 × 7\).
Answer Multiply Whole Numbers
In order to multiply without using models, you need to know all the one digit multiplication facts. Make sure you know them fluently before proceeding in this section. Table \(\PageIndex{2}\) shows the multiplication facts. Each box shows the product of the number down the left column and the number across the top row. If you are unsure about a product, model it. It is important that you memorize any number facts you do not already know so you will be ready to multiply larger numbers.
| × | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 2 | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 |
| 3 | 0 | 3 | 6 | 9 | 12 | 15 | 18 | 21 | 24 | 27 |
| 4 | 0 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 | 36 |
| 5 | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 |
| 6 | 0 | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | 54 |
| 7 | 0 | 7 | 14 | 21 | 28 | 35 | 42 | 49 | 56 | 63 |
| 8 | 0 | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 | 72 |
| 9 | 0 | 9 | 18 | 27 | 36 | 45 | 54 | 63 | 72 | 81 |
What happens when you multiply a number by zero? You can see that the product of any number and zero is zero. This is called the Multiplication Property of Zero.
Definition: Multiplication Property of Zero
The product of any number and \(0\) is \(0\).
\[a \cdot 0 = 0\]
\[0 \cdot a = 0\]
Example \(\PageIndex{3}\): multiply
Multiply:
\(0 • 11\) \((42)0\) Solution
The product of any number and zero is zero. 0 • 11 = 0
Multiplying by zero results in zero. (42)0 = 0
Exercise \(\PageIndex{5}\)
Find each product:
\(0 • 19\) \((39)0\) Answer a
\(0\)
Answer b
\(0\)
Exercise \(\PageIndex{6}\)
Find each product:
\(0 • 24\) \((57)0\) Answer a
\(0\)
Answer b
\(0\)
What happens when you multiply a number by one? Multiplying a number by one does not change its value. We call this fact the Identity Property of Multiplication, and \(1\) is called the multiplicative identity.
Definition: Identity Property of Multiplication
The product of any number and \(1\) is the number.
\[1 \cdot a = a\]
\[a \cdot 1 = a\]
Example \(\PageIndex{4}\): multiply
Multiply:
\((11)1\) \(1 • 42\) Solution
The product of any number and one is the number. (11)1 = 11
Multiplying by one does not change the value. 1 • 42 = 42
Exercise \(\PageIndex{7}\)
Find each product:
\((19)1\) \(1 • 39\) Answer a
\(19\)
Answer b
\(39\)
Exercise \(\PageIndex{8}\)
Find each product:
\((24)(1)\) \(1 × 57\) Answer a
\(24\)
Answer b
\(57\)
Earlier in this chapter, we learned that the Commutative Property of Addition states that changing the order of addition does not change the sum. We saw that \(8 + 9 = 17\) is the same as \(9 + 8 = 17\).
Is this also true for multiplication? Let’s look at a few pairs of factors.
$$\begin{split} 4 \cdot 7 & = 28 \qquad 7 \cdot 4 = 28 \\ 9 \cdot 7 & = 63 \qquad 7 \cdot 9 = 63 \\ 8 \cdot 9 & = 72 \qquad 9 \cdot 8 = 72 \end{split}$$
When the order of the factors is reversed, the product does not change. This is called the Commutative Property of Multiplication.
Definition: Commutative Property of Multiplication
Changing the order of the factors does not change their product.
\[a \cdot b = b \cdot a\]
Example \(\PageIndex{5}\): multiply
Multiply:
\(8 • 7\) \(7 • 8\) Solution
Multiply. 8 • 7 = 56
Multiply. 7 • 8 = 56
Changing the order of the factors does not change the product.
Exercise \(\PageIndex{9}\)
Multiply:
\(9 • 6\) \(6 • 9\) Answer a
\(54\)
Answer b
\(54\)
Exercise \(\PageIndex{10}\)
Multiply:
\(8 • 6\) \(6 • 8\) Answer a
\(48\)
Answer b
\(48\)
To multiply numbers with more than one digit, it is usually easier to write the numbers vertically in columns just as we did for addition and subtraction.
We start by multiplying \(3\) by \(7\).
\[3 \times 7 = 21 \nonumber \]
We write the \(1\) in the ones place of the product. We carry the \(2\) tens by writing \(2\) above the tens place.
Then we multiply the \(3\) by the \(2\), and add the \(2\) above the tens place to the product. So \(3 × 2 = 6\), and \(6 + 2 = 8\). Write the \(8\) in the tens place of the product.
The product is \(81\).
When we multiply two numbers with a different number of digits, it’s usually easier to write the smaller number on the bottom. You could write it the other way, too, but this way is easier to work with.
Example \(\PageIndex{6}\): multiply
Multiply: \(15 • 4\).
Solution
Write the numbers so the digits 5 and 4 line up vertically.
Multiply 4 by the digit in the ones place of 15. 4 • 5 = 20. Write 0 in the ones place of the product and carry the 2 tens.
Multiply 4 by the digit in the tens place of 15. 4 • 1 = 4. Add the 2 tens we carried. 4 + 2 = 6. Write the 6 in the tens place of the product.
The product is \(60\).
Exercise \(\PageIndex{11}\)
Multiply: \(64 • 8\).
Answer
\(512\)
Exercise \(\PageIndex{12}\)
Multiply: \(57 • 6\).
Answer
\(342\)
Example \(\PageIndex{7}\): multiply
Multiply: \(286 • 5\).
Solution
Write the numbers so the digits 5 and 6 line up vertically.
Multiply 5 by the digit in the ones place of 286. 5 • 6 = 30. Write the 0 in the ones place of the product and carry the 3 to the tens place.
Multiply 5 by the digit in the tens place of 286. 5 • 8 = 40. Add the 3 tens we carried to get 40 + 3 = 43. Write the 3 in the tens place of the product and carry the 4 to the hundreds place.
Multiply 5 by the digit in the hundreds place of 286. 5 • 2 = 10. Add the 4 hundreds we carried to get 10 + 4 = 14. Write the 4 in the hundreds place of the product and the 1 in the thousands place.
The product is \(1,430\).
Exercise \(\PageIndex{13}\)
Multiply: \(347 • 5\).
Answer
\(1,735\)
Exercise \(\PageIndex{14}\)
Multiply: \(462 • 7\).
Answer
\(3,234\)
When we multiply by a number with two or more digits, we multiply by each of the digits separately, working from right to left. Each separate product of the digits is called a partial product. When we write partial products, we must make sure to line up the place values.
HOW TO: MULTIPLY TWO WHOLE NUMBERS TO FIND THE PRODUCT
Step 1. Write the numbers so each place value lines up vertically.
Step 2. Multiply the digits in each place value.
Work from right to left, starting with the ones place in the bottom number. Multiply the bottom number by the ones digit in the top number, then by the tens digit, and so on. If a product in a place value is more than 9, carry to the next place value. Write the partial products, lining up the digits in the place values with the numbers above. Repeat for the tens place in the bottom number, the hundreds place, and so on. Insert a zero as a placeholder with each additional partial product.
Step 3. Add the partial products.
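The three steps above can be sketched in code. This is an illustrative sketch, not part of the text; the function name is my choice:

```python
def multiply(top, bottom):
    # Multiply two whole numbers by adding partial products:
    # one partial product per digit of the bottom factor,
    # working from right to left and shifting by place value.
    total, place = 0, 1
    for digit in reversed(str(bottom)):
        total += int(digit) * top * place   # one partial product
        place *= 10                         # move to the next place value
    return total

assert multiply(62, 87) == 5394   # e.g. 62 × 87 = 5,394
```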
Example \(\PageIndex{8}\): multiply
Multiply: \(62(87)\).
Solution
Write the numbers so each place lines up vertically. Start by multiplying 7 by 62.
Multiply 7 by the digit in the ones place of 62. 7 • 2 = 14. Write the 4 in the ones place of the product and carry the 1 to the tens place.
Multiply 7 by the digit in the tens place of 62. 7 • 6 = 42. Add the 1 ten we carried. 42 + 1 = 43. Write the 3 in the tens place of the product and the 4 in the hundreds place. The first partial product is 434.
Now, write a 0 under the 4 in the ones place of the next partial product as a placeholder, since we now multiply the digit in the tens place of 87 by 62.
Multiply 8 by the digit in the ones place of 62. 8 • 2 = 16. Write the 6 in the next place of the product, which is the tens place. Carry the 1 to the hundreds place.
Multiply 8 by 6, the digit in the tens place of 62, then add the 1 we carried to get 49. Write the 9 in the hundreds place of the product and the 4 in the thousands place. The second partial product is 4960.
Add the partial products.
The product is \(5,394\).
Exercise \(\PageIndex{15}\)
Multiply: \(43(78)\).
Answer
\(3,354\)
Exercise \(\PageIndex{16}\)
Multiply: \(64(59)\).
Answer
\(3,776\)
Example \(\PageIndex{9}\): multiply
Multiply:
\(47 • 10\) \(47 • 100\) Solution
(a) 47 • 10 (b) 47 • 100
When we multiplied \(47\) times \(10\), the product was \(470\). Notice that \(10\) has one zero, and we put one zero after \(47\) to get the product. When we multiplied \(47\) times \(100\), the product was \(4,700\). Notice that \(100\) has two zeros and we put two zeros after \(47\) to get the product.
Do you see the pattern? If we multiplied \(47\) times \(10,000\), which has four zeros, we would put four zeros after \(47\) to get the product \(470,000\).
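The pattern can be stated as a tiny rule and checked directly (an illustrative sketch; the function name is mine):

```python
def times_power_of_ten(n, zeros):
    # Writing the factor's zeros after n gives the same result
    # as ordinary multiplication by 10 ** zeros.
    return int(str(n) + "0" * zeros)

assert times_power_of_ten(47, 1) == 47 * 10     # 470
assert times_power_of_ten(47, 2) == 47 * 100    # 4,700
assert times_power_of_ten(47, 4) == 47 * 10000  # 470,000
```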
Exercise \(\PageIndex{17}\)
Multiply:
\(54 • 10\) \(54 • 100\) Answer a
\(540\)
Answer b
\(5,400\)
Exercise \(\PageIndex{18}\)
Multiply:
\(75 • 10\) \(75 • 100\) Answer
\(750\)
Answer
\(7,500\)
Exercise \(\PageIndex{19}\)
Multiply: \(265(483)\).
Answer
\(127,995\)
Exercise \(\PageIndex{20}\)
Multiply: \(823(794)\).
Answer
\(653,462\)
Multiply: \(896(201)\).
Solution
There should be \(3\) partial products. The second partial product will be the result of multiplying \(896\) by \(0\).
Notice that the second partial product of all zeros doesn’t really affect the result. We can place a zero as a placeholder in the tens place and then proceed directly to multiplying by the \(2\) in the hundreds place, as shown.
Multiply \(896\) by the \(1\) in the ones place. For the \(0\) in the tens place, insert only one zero as a placeholder. Then multiply \(896\) by the \(2\) in the hundreds place, putting the \(2\) from \(2 • 6 = 12\) in the hundreds place and carrying the \(1\).
Exercise \(\PageIndex{21}\)
Multiply: \((718)509\).
Answer
\(365,462\)
Exercise \(\PageIndex{22}\)
Multiply: \((627)804\).
Answer
\(504,108\)
When there are three or more factors, we multiply the first two and then multiply their product by the next factor. For example:
To multiply \(8 • 3 • 2\), first multiply \(8 • 3\) to get \(24 • 2\), then multiply \(24 • 2\) to get \(48\).
Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (formerly of Santa Ana College). This content was produced by OpenStax and is licensed under a Creative Commons Attribution 4.0 license. |
This blog post is a result of a discussion (read borderline heated argument) that I had with a friend regarding
whether the machine learning terms loss function and objective function mean the same thing. We will find out whether they do by the end of this post. I am choosing to write this post as a dialogue between Pinfy and Scooby, to let you decide if it was a discussion or an argument :'). Also, I am using a Colab notebook to write this post because it is going to get mathematical.
Pinfy : My problem is of the form: for a given function $f(\cdot)$, solve $\max_{\theta} f(r_{\theta})$. A NN is used to model $\theta \in \mathbb{R}^{M}$, and $r \in \mathbb{R}^{N}$. This is clearly an optimization problem with $f(\cdot)$ as the objective. What is even meant by loss here?
Scooby: Okay. Here, $-f$ is like a loss function, right?
Pinfy : No, $f$ is the objective function. As per my understanding, an objective function gives you an absolute measure of goodness or badness of the model parameters. In the problem that I mention above, $f$ measures the goodness or badness of $\theta$. An example of an objective function is a likelihood function.
Loss is a relative quantity. You measure loss with respect to a certain fixed quantity. What is that quantity in $f$?
Scooby: I don't understand. Aren't the terms loss function and objective function interchangeable?
Pinfy : It is certainly debatable. I don't think they are the same, at least the way I understand the terms. For discriminative modelling, the two terms are the same. Not for generative modelling.
Scooby: From what I know, negative of the likelihood function is considered as the loss function for maximum likelihood generative modelling. Loss function is just the negative of the objective if the optimization problem is finding the max. And, an objective function is called
loss or cost function when the optimization problem is a minimization problem.
Pinfy : Negative or positive is just a change to accommodate whether the optimization problem is cast as a minimization problem or a maximization problem. Because most of the implementations use gradient descent to optimize the models, the negative of the objective is used. So, I am not sure if it is right to say 'Loss function is just the negative of the objective if the optimization problem is finding the max.'
Scooby: Why are you not sure. I don't understand your doubt?
Pinfy : I think it is incorrect to say 'Negative of objective function is the loss function' as a general statement. Loss is a term that is used when there is an explicit sense of difference or distance. Doing ascent or descent over the same objective does not make one thing an objective function and another a loss function.
Scooby: Hmm. So, you are saying $-f$ would be called a loss function only if it can be interpreted as minimizing some distance? Otherwise, one cannot call $-f$ a loss function?
Pinfy : Yes, that is exactly what I am saying.
Scooby: Okay, I get it now. You have a problem with the notation. Let us try to understand it with an example of a generative modelling problem. Take the example of fitting a Gaussian to certain data. Suppose we are given some $x_i, i = 1,2,...N$, and we want to determine the mean, $\mu$, of the Gaussian that fits the data. Let us keep the standard deviation of the Gaussian fixed, say $\sigma$. Let us pose the problem as a maximum likelihood problem.
Pinfy : Alright, let us write the problem mathematically:
IID input data: $x_i \in \mathcal{X}, i = 1,2,...,N$
Gaussian to be fit to data: $p(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp(-\frac{(x-\theta)^2}{2\sigma^2})$
Here, $\theta = \mu$ is the mean of the Gaussian (and hence, the parameter of our mean fitting model) that we want to fit to the data $x_i$.
The likelihood function, $L(\theta) = p(x|\theta) = p(x_1|\theta).p(x_2|\theta)...p(x_N|\theta) = \prod_{i = 1}^{N} p(x_i|\theta)$.
The optimization problem to find $\theta$ then becomes:
$\theta = \arg \max_{\theta} L(\theta)$
$\quad = \arg \max_{\theta} \log(L(\theta)) $
$\quad = \arg \max_{\theta} l(\theta) $
Now, $l(\theta) = \log(\frac{1}{(2\pi\sigma^{2})^{N/2}} \prod_{i = 1}^{N}\exp(-\frac{(x_i-\theta)^2}{2\sigma^2})) $
$l(\theta) = -\frac{N}{2}\log(2\pi\sigma^{2}) - \sum_{i = 1}^{N}\frac{(x_i-\theta)^2}{2\sigma^2} $
$\theta = \arg \max_{\theta} -\sum_{i = 1}^{N} \frac{(x_i-\theta)^2}{2\sigma^2}$
$\quad = \arg \max_{\theta} -\sum_{i = 1}^{N} (x_i-\theta)^2 \quad \quad \quad (1)$
Scooby: Do you see it now? The last line you wrote is like minimizing the sum of squared ($L^{2}$) distances between the data points $x_i$ and the mean $\mu$. So, you are minimizing the $L^{2}$ loss.
Pinfy : No, I still don't see it. I agree $\theta$ can be obtained by minimizing the sum of squared distances. It would be called a loss if I were predicting the mean and had a reference mean, something like: $\mu_{\theta} - {\mu}$.
Scooby: I don't think you understand it right. Loss function is always calculated over a set of inputs. What is your definition of a loss function?
Pinfy : A loss function is of the form: $\sum_{i} (y_i - y_i^t)$, where $i$ iterates over the data, $y_i$ is the estimated output for the input $x_i$, and $y_i^t$ is the ground-truth output. I have a problem with calling Eqn. (1) a loss function. The term loss should not be used to specify the goodness or badness of parameters.
Scooby: Why? Why not? What really matters is the underlying mathematics. If the mathematics is consistent, then that is all that matters. What you obtained in Eqn. (1) is the $L^2$ loss for the problem. It doesn't have to look like $\mu^{*} - \mu$.
Pinfy : Okay.
$\arg \max_{\theta} -\sum_{i = 1}^{N} (x_i-\theta)^2$
$\quad = \arg \min_{\theta} \sum_{i = 1}^{N} (x_i-\theta)^2$
$\quad = \arg \min_{\theta} \sum_{i = 1}^{N} x_i^{2} + \theta^2 - 2x_i\theta$
$\quad = \arg \min_{\theta} \sum_{i = 1}^{N} \theta^2 - 2x_i\theta $
$\quad = \arg \min_{\theta} N\theta^2 - 2\theta\sum_{i = 1}^{N} x_i$
$\quad = \arg \min_{\theta} N ( \theta^2 - 2\theta x_{mean}) \quad$ (where $x_{mean} = \frac{1}{N}\sum_{i = 1}^{N} x_i$)
$\quad = \arg \min_{\theta} N( \theta^2 - 2\theta x_{mean} + x_{mean}^{2} - x_{mean}^{2})$
$\quad = \arg \min_{\theta} N( \theta - x_{mean})^{2}$
Scooby: Now, do you see it?
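Since the post is written in a notebook anyway, here is a quick numerical check of the conclusion (an illustrative sketch with simulated data; the names and grid choices are mine): the brute-force minimiser of the squared loss coincides with the sample mean, the maximum-likelihood estimate of a Gaussian mean.

```python
import random

random.seed(0)
mu_true, sigma = 3.0, 1.0
xs = [random.gauss(mu_true, sigma) for _ in range(1000)]

def sq_loss(theta):
    # negative log-likelihood of the fixed-sigma Gaussian,
    # up to additive and multiplicative constants
    return sum((x - theta) ** 2 for x in xs)

# brute-force the maximum-likelihood estimate of the mean
grid = [2.0 + 0.005 * k for k in range(401)]   # theta in [2, 4]
theta_hat = min(grid, key=sq_loss)

x_mean = sum(xs) / len(xs)                     # sample mean
# theta_hat agrees with x_mean up to the grid resolution of 0.005
```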
I am sure you have judged by now whether the conversation was an argument or not :'). Irrespective of what it was, let us list the takeaways from the conversation above:
For general machine learning (ML) problems, the loss function and the objective function mean almost the same thing.
It is important to be clear about the definitions of the quantities that you deal with on a daily basis as an ML researcher. Nobody really defines what a loss function actually means. I believe the ML community would greatly benefit if an IEEE-like standards body could unify the ML terminology.
Even when things are given to you on a platter, be critical about them. In the discussion above, Scooby knew that the loss function was equivalent to the negative of the objective function for general ML problems, but did not have concrete reasoning for the same.
The following question is part (1/4) of a 2.30h written exam for the course "Probability and Statistics" in a school of engineering. So, although tricky and difficult (because the Professor is really demanding from his students), it should be solvable in a logical amount of time and with a logical amount of calculations.
Let $X_1, \ldots, X_n$ be a random sample (i.i.d. r.v.) from the exponential distribution $\exp(\lambda)$, where $\lambda$ is unknown. Let $M_n=\max\{X_1, \ldots, X_n\}$ with probability distribution function $$G(x)=(1-e^{-\lambda x})^{n}, \qquad x>0$$ and zero elsewhere.
Q1. Find the probability density function of $M_n$.
Q2. If $M_n$ is the only information that you have for $X_1,X_2,\ldots,X_n$, find the maximum likelihood estimator (MLE) $\hat{\lambda}_n$ of $\lambda$.
Q3. Using $(1+x)^n>1+nx$ (or any other way) prove that $\hat{\lambda}_n$ is consistent, i.e. that $P(| \hat{\lambda}_n-\lambda|>\epsilon)\longrightarrow0$, for $n\rightarrow \infty$
For Q1, I took the derivative of the cdf of $M_n$ which I found to be equal to $$g(x)=n\lambda e^{-\lambda x}(1-e^{-\lambda x})^{n-1}$$ (doublechecked with Wolfram|Alpha).
For Q2, I thought that the function I should maximize (with respect to $\lambda$) is $g(x)$, because that is my single observation from the sample of size $n$. If I understand the exercise correctly, someone takes a sample of $n$ observations $X_1,X_2,\ldots X_n$ and tells me only their maximum $M_n$. Now, from this single piece of information I have to calculate an MLE for $\lambda$. So, I will maximize the pdf of $M_n$, which is now my likelihood function, no? Where is my mistake?
However, if I take $$L(x;\lambda)=g(x)$$ and $$l(x;\lambda)=\ln\left(L(x;\lambda)\right)=\ln\left(g(x)\right)=\ln(n)+\ln(\lambda)-\lambda x+(n-1)\ln(1-e^{-\lambda x})$$ Then, as usual, I calculated the derivative of $l(x;\lambda)$ and set it equal to $0$ $$\frac{d}{d\lambda}l(x;\lambda)=\frac{1}{\lambda}-x+(n-1)\frac{xe^{-\lambda x}}{1-e^{-\lambda x}}=0$$ which reduces to $$e^t=\frac{1-nt}{1-t}$$ where $t=\lambda x$. But I cannot solve this equation (which is transcendental, as someone told me).
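Although the equation is transcendental, it can be solved numerically for a given $n$. A sketch (using the equivalent form $g(t) = e^t(1-t) + nt - 1 = 0$, obtained by multiplying through by $1-t$; the function names are my choices):

```python
import math

def g(t, n):
    # e^t = (1 - n t)/(1 - t)  rewritten as  g(t) = e^t (1 - t) + n t - 1 = 0,
    # where t = lambda * x is evaluated at the observed maximum x = M_n
    return math.exp(t) * (1 - t) + n * t - 1

def solve_t(n):
    # g(1) = n - 1 > 0 and g(t) -> -infinity as t grows,
    # so bisect on [1, n + 5] for the nonzero root
    lo, hi = 1.0, n + 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid, n) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lambda_hat(n, m_n):
    # MLE of lambda given only the sample maximum m_n
    return solve_t(n) / m_n
```

For example, with $n = 5$ the root $t^*$ lies between $2$ and $3$, and $\hat{\lambda}_n = t^*/M_n$.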
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case.
If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate.
When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does not have to be semi-simple. Recall e.g. that $U(1)$ by definition is not a simple Lie group. Its Killing form is identically zero. Nevertheless, we have the following YM-type theories:
QED with $G=U(1)$.
the Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
Also, the gauge group $G$ does not, in principle, have to be compact. This post imported from StackExchange Physics at 2015-01-19 14:11 (UTC), posted by SE-user Qmechanic
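To make the Killing-form statements concrete, they can be checked directly from structure constants (an illustrative sketch in pure Python; the function names are mine). For $su(2)$ with $f_{abc} = \epsilon_{abc}$, the Killing form $K_{ab} = \sum_{c,d} f_{acd} f_{bdc}$ comes out proportional to $-\delta_{ab}$ (non-degenerate and negative definite, as expected for a compact semi-simple algebra), while the abelian $u(1)$ has vanishing structure constants and hence an identically zero Killing form:

```python
def eps(a, b, c):
    # Levi-Civita symbol on indices 0, 1, 2
    return ((a - b) * (b - c) * (c - a)) // 2

def killing_form(f, dim):
    # K_ab = tr(ad_a ad_b) = sum_{c,d} f_{acd} f_{bdc}
    return [[sum(f(a, c, d) * f(b, d, c)
                 for c in range(dim) for d in range(dim))
             for b in range(dim)] for a in range(dim)]

K_su2 = killing_form(eps, 3)               # [[-2, 0, 0], [0, -2, 0], [0, 0, -2]]
K_u1 = killing_form(lambda a, b, c: 0, 1)  # [[0]] -- identically zero
```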
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
J/$\psi$ suppression at forward rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2017-03)
The inclusive J/ψ production has been studied in Pb–Pb and pp collisions at the centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}} = 5.02$ TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed, ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probe the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Pseudorapidity dependence of the anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-11)
We present measurements of the elliptic ($\mathrm{v}_2$), triangular ($\mathrm{v}_3$) and quadrangular ($\mathrm{v}_4$) anisotropic azimuthal flow over a wide range of pseudorapidities ($-3.5< \eta < 5$). The measurements ...
Correlated event-by-event fluctuations of flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2016-10)
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus–nucleus collisions, obtained for the first time using a new analysis method based on ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s} = 7$ TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ...
Transverse momentum dependence of D-meson production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-03)
The production of prompt charmed mesons D$^0$, D$^+$ and D$^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb–Pb collisions at the centre-of-mass energy per nucleon pair, $\sqrt{s_{\rm NN}}$ of ... |
Injective and Surjective Linear Maps Examples 1
Recall from the Injective and Surjective Linear Maps page that a linear map $T : V \to W$ is said to be
injective if $T(u) = T(v)$ implies that $u = v$, or equivalently, if $\mathrm{null} (T) = \{ 0 \}$.
Furthermore, the linear map $T : V \to W$ is said to be surjective if for every $w \in W$ there exists a $v \in V$ such that $T(v) = w$, or equivalently, if $\mathrm{range} (T) = W$.
We will now look at some examples regarding injective/surjective linear maps.
Example 1 Let $T \in \mathcal L ( \wp (\mathbb{R}), \mathbb{R})$ be defined by $T(p(x)) = \int_0^1 2p'(x) \: dx$. Prove whether or not $T$ is injective, surjective, or both.
We will first determine whether $T$ is injective. Suppose that $p(x) \in \wp (\mathbb{R})$ and $T(p(x)) = 0$. Then we have that:

$$T(p(x)) = \int_0^1 2p'(x) \: dx = 2[p(1) - p(0)] = 0$$
Note that if $p(x) = C$ where $C \in \mathbb{R}$, then $p'(x) = 0$ and hence $2 \int_0^1 p'(x) \: dx = 0$. Hence $\mathrm{null} (T) \neq \{ 0 \}$ and so $T$ is not injective.
We will now determine whether $T$ is surjective. Suppose that $C \in \mathbb{R}$. We want to determine whether or not there exists a $p(x) \in \wp (\mathbb{R})$ such that:

$$T(p(x)) = \int_0^1 2p'(x) \: dx = C$$
Take the polynomial $p(x) = \frac{C}{2}x$. Then $p'(x) = \frac{C}{2}$ and hence:

$$T(p(x)) = \int_0^1 2 \cdot \frac{C}{2} \: dx = \int_0^1 C \: dx = C$$
Therefore $T$ is surjective.
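Both conclusions can be mirrored in a tiny numerical sketch (the representation and names are mine). By the fundamental theorem of calculus, $T(p) = \int_0^1 2p'(x)\,dx = 2(p(1) - p(0))$, which for $p(x) = \sum_k c_k x^k$ equals $2\sum_{k \geq 1} c_k$:

```python
def T(coeffs):
    # p(x) = sum_k c_k x^k, so T(p) = 2 (p(1) - p(0)) = 2 * sum of c_k for k >= 1
    return 2 * sum(coeffs[1:])

# Not injective: every constant polynomial lies in the null space.
assert T([5]) == 0 and T([-3]) == 0

# Surjective: for any C, the polynomial p(x) = (C/2) x maps to C.
C = 3.5
assert T([0, C / 2]) == C
```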
Example 2 Suppose that $S_1, S_2, ..., S_n$ are injective linear maps for which the composition $S_1 \circ S_2 \circ ... \circ S_n$ makes sense. Prove that $S_1 \circ S_2 \circ ... \circ S_n$ is injective.
Let $u$ and $v$ be vectors in the domain of $S_n$, and suppose that:

$$(S_1 \circ S_2 \circ ... \circ S_n)(u) = (S_1 \circ S_2 \circ ... \circ S_n)(v)$$
Since $S_1$ is injective, the equation above implies that $(S_2 \circ ... \circ S_n)(u) = (S_2 \circ ... \circ S_n)(v)$. Applying the injectivity of $S_2, ..., S_{n-1}$ in turn gives $S_n(u) = S_n(v)$, and since $S_n$ is injective this implies that $u = v$. Therefore $S_1 \circ S_2 \circ ... \circ S_n$ is injective.
Example 3 Let $T$ be a linear map from $V$ to $W$, and suppose that $T$ is injective and that $\{ v_1, v_2, ..., v_n \}$ is a linearly independent set of vectors in $V$. Show that $\{ T(v_1), ..., T(v_n) \}$ is a linearly independent set of vectors in $W$.
Consider the following equation (noting that $T(0) = 0$):

$$a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n) = 0 \quad \Rightarrow \quad T(a_1v_1 + a_2v_2 + ... + a_nv_n) = T(0)$$
Now since $T$ is injective, this implies that $a_1v_1 + a_2v_2 + ... + a_nv_n = 0$. However, $\{ v_1, v_2, ..., v_n \}$ is a linearly independent set in $V$ which implies that $a_1 = a_2 = ... = a_n = 0$. Therefore $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is a linearly independent set in $W$.
Example 4 Let $T$ be a linear map from $V$ to $W$ and suppose that $T$ is surjective and that the set of vectors $\{ v_1, v_2, ..., v_n \}$ spans $V$. Show that $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $W$.
Let $w \in W$. Since $T$ is surjective, then there exists a vector $v \in V$ such that $T(v) = w$, and since $\{ v_1, v_2, ..., v_n \}$ spans $V$, then we have that $v$ can be written as a linear combination of this set of vectors, and so for some $a_1, a_2, ..., a_n \in \mathbb{F}$ we have that $v = a_1v_1 + a_2v_2 + ... + a_nv_n$ and so:

$$w = T(v) = T(a_1v_1 + a_2v_2 + ... + a_nv_n) = a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n)$$
Therefore any $w \in W$ can be written as a linear combination of $\{ T(v_1), T(v_2), ..., T(v_n) \}$ and so $\{ T(v_1), T(v_2), ..., T(v_n) \}$ spans $W$. |
Rotational Operators
Definition: For any vector $\vec{x} \in \mathbb{R}^n$, a rotational transformation operator $T: \mathbb{R}^n \to \mathbb{R}^n$ rotates every vector $\vec{x}$ by a fixed angle $\theta$.
We will first take a look at rotational transformations in $\mathbb{R}^2$ and then in $\mathbb{R}^3$.
Rotational Transformations in 2-Space
Let $\vec{x} \in \mathbb{R}^2$. We will need to find equations relating $\vec{x} = (x, y)$ to its image $\vec{w} = (w_1, w_2)$ under a rotational transformation $T$. Let $\phi$ be the angle between the positive $x$-axis and $\vec{x}$, and let $\theta$ be the angle between $\vec{x}$ and $\vec{w}$. The length of both vectors is $\| \vec{x} \|$. The following diagram illustrates what we have defined:
Note that we can calculate the components of our vector $\vec{x} = (x, y)$ with the following polar equations $x = \| \vec{x} \| \cos \phi$ and $y = \| \vec{x} \| \sin \phi$ - both of which were derived by basic trigonometry. Furthermore, we can also calculate the components of $\vec{w} = (w_1, w_2)$ from the following polar equations $w_1 = \| \vec{x} \| \cos (\theta + \phi)$ and $w_2 = \| \vec{x} \| \sin (\theta + \phi)$. If we use the following trigonometric identities:

$$\cos (\theta + \phi) = \cos \theta \cos \phi - \sin \theta \sin \phi \quad \quad \sin (\theta + \phi) = \sin \theta \cos \phi + \cos \theta \sin \phi$$
We can write $w_1$ and $w_2$ as follows:

$$w_1 = \| \vec{x} \| \cos \theta \cos \phi - \| \vec{x} \| \sin \theta \sin \phi \quad \quad w_2 = \| \vec{x} \| \sin \theta \cos \phi + \| \vec{x} \| \cos \theta \sin \phi$$
Lastly we will make our substitutions that $x = \| \vec{x} \| \cos \phi$ and $y = \| \vec{x} \| \sin \phi$ to get:

$$w_1 = x \cos \theta - y \sin \theta \quad \quad w_2 = x \sin \theta + y \cos \theta$$
It thus follows that if $\vec{w} = A\vec{x}$, then our standard matrix $A = \begin{bmatrix}\cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\end{bmatrix}$, and in matrix form our transformation is defined as:

$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix}\cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
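As a quick numerical check of the standard $2 \times 2$ rotation matrix (an illustrative sketch; the function name is mine):

```python
import math

def rotate2d(x, y, theta):
    # w = A x with A = [[cos t, -sin t], [sin t, cos t]]
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# rotating (1, 0) by 90 degrees should land on (0, 1)
w1, w2 = rotate2d(1.0, 0.0, math.pi / 2)
```

A rotation also preserves length, which is easy to verify on any vector.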
Rotational Transformations in 3-Space
Let $\vec{x} \in \mathbb{R}^3$. We define the rotational transformation of $\vec{x}$ to rotate around a ray known as the axis of rotation by some fixed angle $\theta$. As $\vec{x}$ sweeps around the axis of rotation to its image vector $\vec{w}$, a portion of a cone is also swept out as illustrated:
Let $\vec{u} = (a, b, c)$ be a unit vector for any axis of rotation in $\mathbb{R}^3$. The standard matrix $A$ for the transformation of any vector through an angle $\theta$ around $\vec{u}$ is:

$$A = \begin{bmatrix} a^2(1 - \cos \theta) + \cos \theta & ab(1 - \cos \theta) - c \sin \theta & ac(1 - \cos \theta) + b \sin \theta \\ ab(1 - \cos \theta) + c \sin \theta & b^2(1 - \cos \theta) + \cos \theta & bc(1 - \cos \theta) - a \sin \theta \\ ac(1 - \cos \theta) - b \sin \theta & bc(1 - \cos \theta) + a \sin \theta & c^2(1 - \cos \theta) + \cos \theta \end{bmatrix}$$
The following table shows the equations defining the image for a rotational transformation and their associated standard matrices. In each case, the axis of rotation for these transformations is either the $x$, $y$ or $z$ axis and thus $\vec{u} = (1, 0, 0)$ (rotation around the $x$-axis), $\vec{u} = (0, 1, 0)$ (rotation around the $y$-axis), or $\vec{u} = (0, 0, 1)$ (rotation around the $z$-axis).
| Operator | Equations Defining the Image | Standard Matrix |
|---|---|---|
| Counterclockwise rotation about the positive $x$-axis | $w_1 = x + 0y + 0z \\ w_2 = 0x + y\cos \theta - z\sin \theta \\ w_3 = 0x + y\sin \theta + z\cos \theta$ | $\begin{bmatrix}1 & 0 & 0\\ 0 & \cos \theta & -\sin \theta\\ 0 & \sin \theta & \cos \theta \end{bmatrix}$ |
| Counterclockwise rotation about the positive $y$-axis | $w_1 = x\cos \theta + 0y + z\sin \theta \\ w_2 = 0x + y + 0z \\ w_3 = -x\sin \theta + 0y + z\cos \theta$ | $\begin{bmatrix}\cos \theta & 0 & \sin \theta\\ 0 & 1 & 0\\ -\sin \theta & 0 & \cos \theta \end{bmatrix}$ |
| Counterclockwise rotation about the positive $z$-axis | $w_1 = x\cos \theta - y\sin \theta + 0z \\ w_2 = x\sin \theta + y \cos \theta + 0z \\ w_3 = 0x + 0y + z$ | $\begin{bmatrix}\cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1 \end{bmatrix}$ |
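The axis-angle matrix and the special cases in the table can be cross-checked numerically. A sketch assuming the standard Rodrigues form of the rotation matrix for a unit axis $(a, b, c)$ (function name mine); for $\vec{u} = (0, 0, 1)$ it reduces to the $z$-axis matrix in the table:

```python
import math

def rotation_matrix(u, theta):
    # Rotation about the unit axis u = (a, b, c) by angle theta (Rodrigues form)
    a, b, c = u
    C, S = math.cos(theta), math.sin(theta)
    t = 1 - C
    return [[a*a*t + C,   a*b*t - c*S, a*c*t + b*S],
            [a*b*t + c*S, b*b*t + C,   b*c*t - a*S],
            [a*c*t - b*S, b*c*t + a*S, c*c*t + C]]

# For u = (0, 0, 1) this reduces to the z-axis matrix from the table.
A = rotation_matrix((0.0, 0.0, 1.0), 0.7)
```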
A proton-core that establishes a frame-of-reference and the origin for a three-dimensional space containing a photon.
Recall that the quark model of a proton is a bundle of twelve quarks
$\sf{p^{+}} \leftrightarrow \mathrm{4}\bar{\sf{d}} + \mathrm{4}\sf{b} + \mathrm{4}\bar{ \sf{t} }$
Let us combine this proton with an electromagnetic field such as one might find in an atom
$\mathscr{F} \leftrightarrow \mathrm{4}\sf{d} + 2\sf{m} + 2\sf{a} + \sf{e} + \sf{g} + 2\bar{\sf{m}} + 2\bar{\sf{a}} + \bar{\sf{e}} + \bar{\sf{g}}$
$\gamma _{\tiny{\bigcirc}} \leftrightarrow \mathrm{4}\sf{d} + 2\sf{m} + 2\sf{a} + \sf{e} + \sf{g} + \mathrm{4}\bar{\sf{d}} + 2\bar{\sf{m}} + 2\bar{\sf{a}} + \bar{\sf{e}} + \bar{\sf{g}}$
together with a baryonic proton-core noted by ⓟ⁺

ⓟ⁺ $\leftrightarrow \mathrm{4}\sf{b} + \mathrm{4}\bar{ \sf{t} }$

$\sf{p^{+}} + \mathscr{F} \longrightarrow$ ⓟ⁺ $+ \ \gamma _{\tiny{\bigcirc}}$

$\overline{\kappa} \, \large{(}$ⓟ⁺$\large{)} \; = (0, 0, 0)$ and $\lambda \, \large{(}$ⓟ⁺$\large{)} \; = 0$
It does not contain any up or down seeds, so it has no angular momentum
$𝘑 \, \large{(}$ⓟ⁺$\large{)} \; \equiv \frac{ \, \left| \, N^{\mathsf{U}} - N^{\mathsf{D}} \, \right| \, }{8} = 0$

$R \, \large{(}$ⓟ⁺$\large{)} \; \equiv \frac{hc}{2 \pi } \frac{ \sqrt{𝘑 }}{E} = 0$
So the proton-core is a point shaped, charged particle. We use it as a frame-of-reference F that is perfectly inertial and non-rotating. This frame is then employed to describe other particles in a three-dimensional space centered on the proton-core
$\sf{F} =$ ⓟ⁺
The photon $\gamma _{\tiny{\bigcirc}}$ is somewhere in this space. But since it has a balanced number of leptonic seeds
$N^{\, \mathsf{M}} \left( \gamma _{\tiny{\bigcirc}} \right) = N^{\, \mathsf{A}} \left( \gamma _{\tiny{\bigcirc}} \right)$ and $N^{\, \mathsf{E}}\left( \gamma _{\tiny{\bigcirc}} \right) = N^{\, \mathsf{G}} \left( \gamma _{\tiny{\bigcirc}} \right)$
it does not have a definite spatial orientation or position. We cannot say exactly where it is unless it is absorbed by some more precisely located particle. The photon $\gamma _{\tiny{\bigcirc}}$ is a gamma-ray.
Dividing the numerator and denominator of the integrand by $\gamma + \kappa$ gives$$\int_{0}^{x}\frac{a(e^{\gamma u}-1)\,du}{e^{\gamma u}-1+a\gamma}$$where $a=\frac{2}{\gamma + \kappa}.$
Breaking this into $2$ integrals,$$=\frac a\gamma\int_{0}^x\frac{\gamma e^{\gamma u}du}{e^{\gamma u}-1+a\gamma}-a\int_0^x\frac{du}{e^{\gamma u}-1+a\gamma}$$
For the first, substitute $e^{\gamma u}-1=t$ and you're left with$$\frac{a}{\gamma}\int \frac{dt}{t+a\gamma}$$Do the same substitution for the second one, and you have$$\frac{1}{\gamma}\int \frac{dt}{(t+1)(t+a\gamma)}$$The first one is now standard, and the second can be done by partial fractions.
Edit:
For the second integral, partial fractions isn't mandatory.
The second integral can be written as$$\frac{1}{\gamma(a\gamma-1)}\int \frac{(t+a\gamma)-(t+1)dt}{(t+1)(t+a\gamma)}$$$$=\frac{1}{\gamma(a\gamma-1)}\int \frac{dt}{t+1}-\frac{1}{\gamma(a\gamma-1)}\int \frac{dt}{t+a\gamma}$$Both of which are standard. |
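The algebraic identity behind that split can be spot-checked numerically. A small Python sketch (my own addition; `a` and `g` stand for $a$ and $\gamma$, and the test points are arbitrary values with $a\gamma \neq 1$):

```python
# Spot-check the identity behind the partial-fraction step:
# 1/((t+1)(t+a*g)) == (1/(a*g-1)) * (1/(t+1) - 1/(t+a*g)), valid when a*g != 1.
def lhs(t, a, g):
    return 1.0 / ((t + 1.0) * (t + a * g))

def rhs(t, a, g):
    return (1.0 / (a * g - 1.0)) * (1.0 / (t + 1.0) - 1.0 / (t + a * g))

test_points = [(0.5, 0.7, 3.0), (2.0, 1.2, 0.4), (10.0, 0.3, 5.0)]
max_err = max(abs(lhs(t, a, g) - rhs(t, a, g)) for t, a, g in test_points)
```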
Is there a way to prove that $\sqrt[m]{a} + \sqrt[n]{b}$ ($\sqrt[m]{a}$ and $\sqrt[n]{b}$ are irrational); $a, b, m, n \in \mathbb{N}$; $m, n \neq 2$; is irrational without using the theorem mentioned in Sum of irrational numbers, a basic algebra problem?
If one of $m$ or $n$ is $2$, then a polynomial with integer coefficients can be easily constructed, and rational root theorem (http://en.wikipedia.org/wiki/Rational_root_theorem) can be used to prove that it's irrational. For example, if $x = \sqrt{2} + \sqrt[3]{3}$:
$$ \begin{align} (x - \sqrt{2})^3 = x^3 - 3x^2\sqrt{2} + 6x - 2\sqrt{2} & = 3 \\ \implies x^3 + 6x - 3 &= \sqrt{2}(3x^2 + 2) \\ \implies x^6 + 12x^4 - 6x^3 + 36x^2 - 36x + 9 & = 2(9x^4 + 12x^2 + 4) \\ \implies x^6 - 6x^4 - 6x^3 + 12x^2 - 36x + 1 & = 0 \end{align} $$
By evaluating the polynomial at the candidate rational roots given by the rational root theorem, it can be verified that $x$ is irrational. However, if neither of $m$ or $n$ is $2$, then constructing a polynomial with integer coefficients seems very tedious, if not impossible. Let's say $x = \sqrt[3]{2} + \sqrt[4]{3}$. Is there any way to prove that this is irrational without using the above-mentioned theorem?
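Squaring and collecting terms is error-prone, so it is worth checking the resulting sextic numerically. In this sketch (my own addition) the polynomial $x^6 - 6x^4 - 6x^3 + 12x^2 - 36x + 1$, obtained by carrying out the squaring for $\sqrt{2} + \sqrt[3]{3}$, vanishes at that value to machine precision:

```python
# Numeric check: x = sqrt(2) + cbrt(3) satisfies
# x^6 - 6x^4 - 6x^3 + 12x^2 - 36x + 1 = 0 (note the constant term is 1).
x = 2 ** 0.5 + 3 ** (1 / 3)
residual = x**6 - 6 * x**4 - 6 * x**3 + 12 * x**2 - 36 * x + 1
```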
As John Rennie says, the hydrogen atom in its ground state is a case where the electron is in the s shell, which means it has no angular momentum. We can think of the electron, if it were measured, as being at a point above the proton; the electron as a wave would then spread into a spherical shape around the proton. One might imagine a sort of Zeno machine that keeps the electron wave function reduced so it remains at a point, or a set of points that hop around. If the Zeno effect is a set of measurements occurring at time intervals much shorter than the spreading time of the electron wave function, then this hopping of the point can be minimized.
Atomic physics is a bit complicated. The wave function has a radial and angular part to it. The angular part defines the shells, s, p, d, f etc. These are also a series in multipole moments, s = spherical, p = dipolar, d = quadrupolar and so forth. So we need to consider the quadrupolar terms. This occurs with atoms in the transition metals, with scandium being the first. This would have a single electron in the outer d shell.
How would one then consider gravitational radiation produced by the quadrupole motion of a particle? This is a sketch of how to look at weak field gravitational radiation. A full treatment is a bit longer. We start with the metric $g_{\mu\nu}~=~\eta_{\mu\nu}~+~h_{\mu\nu}$ where $\eta_{\mu\nu}$ is the flat Minkowski background metric and $h_{\mu\nu}$ is the perturbation on that. We further look at the traceless components of this metric perturbation, which we label as $\bar h_{\mu\nu}$. These traceless metric components have two non-zero terms $h_{ii}~=~A_+(t,r)$ and $h_{ij}~=~A_\times(t,r),~i~\ne~j$ for the component indices $i,~j$ running over two spatial dimensions. These traceless metric components obey the inhomogeneous wave equation$$\square\bar h_{\mu\nu}~=~-\frac{16\pi G}{c^4}T_{\mu\nu}$$Now set $G/c^4~=~1$ for simplicity. These metric coefficients are then written according to a Green's function with$$\bar h_{\mu\nu}~=~-16\pi\int G_{\mu\nu}^{\alpha\beta}(t,{\bf r};t',{\bf r}')\,T_{\alpha\beta}(t',{\bf r}')\,d^3r'\,dt'$$We now expand the Green's function according to spherical harmonics and, keeping the quadrupolar terms, the traceless metric components are then approximately$$\bar h_{\mu\nu}~\simeq~-4\int d^3r' \frac{Q_{\mu\nu}(t-|{\bf r}-{\bf r}'|,{\bf r}')}{|{\bf r}-{\bf r}'|}.$$
That is the classical theory. We want to quantize this. We then have a wave function(al) of the form $\Psi[h]$. To make this simple we then consider this wave function(al) as expanded according to a radial and angular part. The $\frac{1}{|{\bf r}-{\bf r}'|}$ part of the metric means we will have a radial part similar to the Laguerre polynomials in atomic physics, and we then consider the quadrupole term as giving the spherical harmonic $Y_\ell^m(\theta,\phi)$ for $\ell~=~2$. We may then proceed with an atomic physics calculation, on which below I will only give a few pointers.
The stress-energy term $T^{00}~=~\rho$, the energy density, is then expressed as $T^{00}~=~\hbar\omega/\mathrm{volume}$. The coupling term for gravitation is $G/c^4$ and so there is the factor $G\hbar/c^4$ associated with the perturbation of the d shell due to gravitation. An atomic transition due to the emission of a graviton would be the emission of a spin-$2$ particle and the transition from $\ell~=~2$ to $\ell~=~0$, so the entire quadrupole term is carried off by the graviton. This would have a coupling term $\sim~G\hbar/c^4$, which is $8.7\times 10^{-79}~\mathrm{m \cdot s}$. This is very small.
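The quoted value of $G\hbar/c^4$ is easy to reproduce; here is a quick Python check with CODATA-level constants (my own addition):

```python
# Reproduce the quoted coupling G*hbar/c^4.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m s^-1

coupling = G * hbar / c**4   # units work out to m * s; ~8.7e-79
```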
Since this is computed for a quadrupole moment or the d shell, one would either have to work with excited hydrogen atoms that remain in that state long enough to perform measurements, or one has to work with a transition metal such as scandium. In the first case it would be tough to measure the perturbation of the d shell by gravitation within the transition time for the atom to relax to the s shell. If one works with a transition metal, that problem is replaced by the fact that the underlying electronic configuration has a lot of complexity that needs to be computed to very high order in perturbation theory, with methods such as Hartree-Fock. Either way this is a tough call, but not absolutely impossible. I think working with higher Rydberg states of the hydrogen atom would most likely bear fruit.
The physics that is most likely relevant will then be the perturbation of the d shell by gravitational physics. This will be very small. There could of course be a transition that produces a soft graviton as presented by Weinberg, but the coupling is very small and the probability of such a transition in any reasonable period of time extremely small. |
Injective and Surjective Linear Maps Examples 2
Recall from the Injective and Surjective Linear Maps page that a linear map $T : V \to W$ is said to be
injective if either of the following equivalent conditions holds:
$T(u) = T(v)$ implies that $u = v$.
$\mathrm{null} (T) = \{ 0 \}$.
Furthermore, the linear map $T : V \to W$ is said to be surjective if:
For every $w \in W$ there exists a $v \in V$ such that $T(v) = w$.
$\mathrm{range} (T) = W$.
We will now look at some more examples regarding injective/surjective linear maps.
Example 1. Recall that if $V$ and $W$ are vector spaces then the set of all linear maps from $V$ to $W$, $\mathcal L(V, W)$, with addition defined by $(S + T)(v) = S(v) + T(v)$ and scalar multiplication defined by $(aT)(v) = aT(v)$ for all $S, T \in \mathcal L (V, W)$ and for all $a \in \mathbb{F}$, is a vector space. Suppose that $2 ≤ \mathrm{dim} (V) ≤ \mathrm{dim} (W)$. Prove that then the subset $\{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ is NOT a subspace of $\mathcal L (V, W)$.
To show that $U = \{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ is not a subspace of $\mathcal L (V, W)$, we must either show that $0 \not \in U$ or show that $U$ is not closed under addition or scalar multiplication.
Note that the zero linear map $0 : V \to W$ defined by $0(v) = 0$ which maps every vector $v \in V$ to $0 \in W$ is not injective because $0(v) = 0(w)$ does not imply that $v = w$. Therefore the zero map is not injective and:(1)
So we have thus far failed to prove that $\{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ is not a subspace of $\mathcal L(V, W)$.
We will now check to see if $\{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ is closed under addition. Let $S, T \in \{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$.
Let $\{ v_1, v_2, ..., v_n \}$ be a basis of $V$ and let $\{ w_1, w_2, ..., w_m \}$ be a basis of $W$. We note that $2 ≤ n ≤ m$. Define the linear map $S$ by:(2)
Note that $S$ is not injective. To show this, suppose that $u, v \in V$. Then $u = a_1v_1 + a_2v_2 + ... + a_nv_n$ and $v = b_1v_1 + b_2v_2 + ... + b_nv_n$ for some set of scalars $a_1, a_2, ..., a_n, b_1, b_2, ..., b_n \in \mathbb{F}$, and so:(4)
Note that the equation above does not imply that $u = v$. For example, if $u = a_1v_1 + a_2v_2 + ... + a_nv_n$ and $v = b_1v_1 + a_2v_2 + ... + a_nv_n$ where $a_1 \neq b_1$ then $S(u) = S(v)$ but clearly $u \neq v$.
We also note that $T$ is not injective. To show this, suppose that $u, v \in V$. Then once again, $u = a_1v_1 + a_2v_2 + ... + a_nv_n$ and $v = b_1v_1 + b_2v_2 + ... + b_nv_n$ for some set of scalars $a_1, a_2, ..., a_n, b_1, b_2, ..., b_n \in \mathbb{F}$, and so:(5)
Note that the equation above does not imply that $u = v$. For example, if $u = a_1v_1 + a_2v_2 + ... + a_nv_n$ and $v = a_1v_1 + b_2v_2 + a_3v_3 + ... + a_nv_n$ where $a_2 \neq b_2$ then $T(u) = T(v)$ but clearly $u \neq v$.
Now consider the linear map $(S + T)(v) = S(v) + T(v)$ for all $v \in V$. For $u$ and $v$ defined as the linear combinations of the basis vectors previously, we have that:(6)
if we subtract both sides of the equations above, we have that:(7)
Since $\{ w_1, w_2, ..., w_m \}$ is a basis - it is also a linearly independent set. Note that $n ≤ m$ (from earlier), and so the set of vectors $\{ w_1, w_2, ..., w_n \}$ as a subset of the prescribed basis of $W$ is also linearly independent. This implies that:(8)
Therefore $u = v$, so $S + T$ is injective. Therefore, $(S + T) \not \in \{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ so $\{ T \in \mathcal L (V, W) : T \: \mathrm{is \: not \: injective} \}$ is not closed under addition and hence is not a subspace of $\mathcal L (V, W)$. |
Your comments above lead me to believe that you are asking about the distribution of $X$ marginalized over the classes. This marginal distribution is Gaussian in the first dimension but not in the second.
Since the first and second dimensions are independent they can be treated separately. The means and variances conditional on class in the first dimension are the same for both classes, so the mean and variance marginalized over class are the same as those conditional on class $(\mu_1 = 1,\,\sigma_{1}^2 = 2)$. In the second dimension the marginal distribution is a mixture of Gaussians, which is not itself Gaussian:
$p(x_2) = \frac{1}{2}\text{N}(1,1) + \frac{1}{2}\text{N}(3,1),$
in which $\text{N}(\mu, \sigma^2)$ is the probability density of the Gaussian (aka normal) distribution with mean $\mu$ and variance $\sigma^2$.
The mean of the second dimension variable is
$\text{E}(X_2) = \Pr(C_1)\text{E}(X_2|C_1) + \Pr(C_2) \text{E}(X_2|C_2)$
$\text{E}(X_2) = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 3 = 2.$
And now the variance. In general,
$\text{Var}(Y) = \text{E}(Y^2) - \left[\text{E}(Y)\right]^2.$
We'll find it useful to rearrange this as
$\text{E}(Y^2) = \text{Var}(Y) + \left[\text{E}(Y)\right]^2.$
So
$\text{E}(X_2^2) = \Pr(C_1)\text{E}(X_2^2|C_1) + \Pr(C_2) \text{E}(X_2^2|C_2)$
$\text{E}(X_2^2) = \frac{1}{2}\left(\text{Var}(X_2|C_1) + \left[\text{E}(X_2|C_1)\right]^2 + \text{Var}(X_2|C_2) + \left[\text{E}(X_2|C_2)\right]^2\right)$
$\text{E}(X_2^2) = \frac{1}{2}(1 + 1^2 + 1 + 3^2) = 6.$
Now we can get
$\text{Var}(X_2) = \text{E}(X_2^2) - \left[\text{E}(X_2)\right]^2 = 6 - 2^2 = 2.$
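The same computation takes only a few lines of Python; this is my own sketch of the law-of-total-expectation steps above, for the 50/50 mixture of $\text{N}(1,1)$ and $\text{N}(3,1)$:

```python
# Mean and variance of the 50/50 mixture of N(1,1) and N(3,1),
# via the law of total expectation.
weights = [0.5, 0.5]
means = [1.0, 3.0]
variances = [1.0, 1.0]

mean = sum(w * m for w, m in zip(weights, means))                                # E(X2)
second_moment = sum(w * (v + m ** 2) for w, m, v in zip(weights, means, variances))  # E(X2^2)
variance = second_moment - mean ** 2                                             # Var(X2)
```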
Here's what the distribution of $X_2$ looks like, along with a Gaussian distribution with the same mean and variance. |
Beta regression (i.e. GLM with beta distribution and usually the logit link function) is often recommended to deal with response aka dependent variable taking values between 0 and 1, such as fractions, ratios, or probabilities: Regression for an outcome (ratio or fraction) between 0 and 1.
However, it is always claimed that beta regression cannot be used as soon as the response variable equals 0 or 1 at least once. If it does, one needs to either use zero/one-inflated beta model, or make some transformation of the response, etc.: Beta regression of proportion data including 1 and 0.
My question is: which property of beta distribution prevents beta regression from dealing with exact 0s and 1s, and why?
I am guessing it is that $0$ and $1$ are not in the support of the beta distribution. But for all shape parameters $\alpha>1$ and $\beta>1$, both zero and one are in the support of the beta distribution; it's only for smaller shape parameters that the density goes to infinity at one or both endpoints. And perhaps the sample data are such that the $\alpha$ and $\beta$ providing the best fit would both turn out to be above $1$. Does it mean that in some cases one could in fact use beta regression even with zeros/ones?
Of course even when 0 and 1 are in the support of the beta distribution, the probability of observing exactly 0 or 1 is zero. But so is the probability of observing any other given value, or indeed any given countable set of values, so this cannot be an issue, can it? (Cf. this comment by @Glen_b).
In the context of beta regression, beta distribution is parameterized differently, but with $\phi=\alpha+\beta>2$ it should still be well-defined on $[0,1]$ for all $\mu$. |
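One concrete way to see the difficulty (my own illustration, not part of the question): even when $\alpha, \beta > 1$, the beta density is exactly zero at the endpoints, so an observation of exactly $0$ or $1$ contributes $\log 0 = -\infty$ to the log-likelihood for every such parameter value.

```python
from math import gamma

def beta_pdf(x, a, b):
    """Hand-rolled Beta(a, b) density on [0, 1] (for illustration only)."""
    return x ** (a - 1) * (1 - x) ** (b - 1) * gamma(a + b) / (gamma(a) * gamma(b))

# With alpha = beta = 2 the density is finite everywhere but exactly zero at
# the endpoints -- a single observation of exactly 0 or 1 therefore sends the
# log-likelihood to -infinity, regardless of the (a, b > 1) parameter values.
p_end = beta_pdf(0.0, 2.0, 2.0)   # density at the endpoint
p_mid = beta_pdf(0.5, 2.0, 2.0)   # density at the midpoint
```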
Injective and Surjective Linear Maps Examples 4
Recall from the Injective and Surjective Linear Maps page that a linear map $T : V \to W$ is said to be
injective if either of the following equivalent conditions holds:
$T(u) = T(v)$ implies that $u = v$.
$\mathrm{null} (T) = \{ 0 \}$.
Furthermore, the linear map $T : V \to W$ is said to be surjective if:
For every $w \in W$ there exists a $v \in V$ such that $T(v) = w$.
$\mathrm{range} (T) = W$.
We will now look at some more examples regarding injective/surjective linear maps.
Example 1 Let $T \in \mathcal L (V, W)$ and let $W$ be a finite-dimensional vector space. Prove that $T$ is injective if and only if there exists a linear map $S \in \mathcal L (W, V)$ such that $ST = I$ where $I$ is the identity map on $V$.
$\Rightarrow$ Suppose that $T$ is an injective linear map. We note that if $\mathrm{dim} (V) > \mathrm{dim} (W)$ then no linear map from $V$ to $W$ is injective. Since $T$ is injective, we must have that $\mathrm{dim} (V) ≤ \mathrm{dim} (W)$. We're also given that $W$ is a finite-dimensional vector space, which implies that $V$ is also a finite-dimensional vector space.
Let $\{ v_1, v_2, ..., v_n \}$ be a basis of $V$. We saw in a previous example that if $\{ v_1, v_2, ..., v_n \}$ is linearly independent and if $T$ is injective then $\{ T(v_1), T(v_2), ..., T(v_n) \}$ is linearly independent in $W$. This set has $n$ vectors in it, and hence can be extended to a basis of $W$, say $\{ T(v_1), T(v_2), ..., T(v_n), w_{n+1}, ..., w_m \}$.
We now define a linear map $S \in \mathcal L(W, V)$ as:(1)
So for any vector $v = a_1v_1 + a_2v_2 + ... + a_nv_n$ we have that:(2)
Therefore $(ST)(v) = I(v) = v$ for every vector $v \in V$.
$\Leftarrow$ Suppose that there exists a linear map $S \in \mathcal L(W, V)$ such that $ST = I$ where $I$ is the identity map on $V$. Let $u, v \in V$ such that $u \neq v$. Then we have that:(3)
We also have that:(4)
Therefore $(ST)(u) \neq (ST)(v)$, which implies that $T(u) \neq T(v)$. Therefore $T$ is injective.
Example 2 Let $T \in \mathcal L (V, W)$ and let $V$ be a finite-dimensional vector space. Prove that $T$ is surjective if and only if there exists a linear map $S \in \mathcal L (W, V)$ such that $TS = I$ where $I$ is the identity map on $W$.
$\Rightarrow$ Suppose that $T$ is surjective. Then we must have that $\mathrm{dim} (W) ≤ \mathrm{dim} (V)$. We're already given that $V$ is finite-dimensional and so $W$ must also be finite-dimensional. Let $\{ w_1, w_2, ..., w_m \}$ be a basis of $W$. Since $T$ is surjective, then for each $j = 1, 2, ..., m$ there exists a vector $v_j \in V$ such that $w_j = T(v_j)$.
We can define a linear map $S \in \mathcal L (W, V)$ by:(5)
Then we have that:(6)
So for any vector $w = a_1w_1 + a_2w_2 + ... + a_mw_m$ we have that:(7)
Therefore $TS = I$ where $I$ is the identity map on $W$.
$\Leftarrow$ Suppose that there exists a linear map $S \in \mathcal L (W, V)$ such that $TS = I$ where $I$ is the identity map on $W$. Then for any vector $w \in W$ we have that:(8)
Therefore, for every vector $w \in W$ there exists a vector $S(w) \in V$ such that $T(S(w)) = w$, so $T$ is surjective. |
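Example 1 can be made concrete with matrices. In this sketch (my own, using the Moore-Penrose pseudoinverse rather than the basis-extension construction in the text), an injective $T : \mathbb{R}^2 \to \mathbb{R}^3$ is a $3 \times 2$ matrix of full column rank, and a left inverse $S$ with $ST = I$ exists:

```python
import numpy as np

# An injective T : R^2 -> R^3 is a 3x2 matrix with linearly independent columns.
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0]])

# The Moore-Penrose pseudoinverse S = (T^T T)^{-1} T^T is one valid left inverse.
S = np.linalg.pinv(T)
ST = S @ T   # should equal the 2x2 identity, i.e. ST = I on V
```

Note $TS \neq I$ on $\mathbb{R}^3$ here, matching Example 2: a right inverse requires surjectivity instead.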
The question is in the title.
Meta-analysis with binomial distribution
I'm trying to do a meta-analysis on the occurrence of an event across different studies. For each study $i$ ($i=1,\dots,N$), I have the number of participants $n_i$ and the number of events $k_i$. I don't have individual data. The number of events thus follows a binomial distribution: $k_i\sim Bin(\theta_i,n_i)$, where I assume a random effect: $\theta_i\sim N(\theta,\tau)$.
Regression meta-analysis
Additionally, I'm doing a regression on $\theta$ using the variable $x_i$: $$\theta_i\sim N(\alpha+\beta x_i,\tau).$$
Multi-level regression meta-analysis
Some studies are split into groups. As they have the same baseline characteristics, I thought of using a multi-level model (first formulation): $$\theta_{i,j}\sim N(\alpha+\beta x_{i,j},\tau+\delta_j).$$ We could alternatively write it like this (although it is a bit different, second formulation): $$\theta_{i}\sim N(\alpha+\beta x_i,\tau).$$ $$\theta_{i,j}\sim N(\theta_i+\gamma_i x_{ij},\delta_j)$$
Which function to use in R
I thought of doing it with the rma.mv() function from the metafor package, but the issue is that you cannot specify the distribution of the data. As far as I understood, it assumes normally distributed data. I thought of applying a logit transformation to the probability of success $p_i:=k_i/n_i$, but that's not possible with zero counts ($k_i=0$ for some studies $i$). Any other function that could solve that? Moreover, I'm not sure whether such models follow my first or second formulation.
Table of Contents
The Canonical Injections of Weak Direct Products of Groups
Recall from The Weak Direct Product of an Arbitrary Collection of Groups page that if $\{ G_i : i \in I \}$ is an arbitrary collection of groups then the weak direct product of the groups $\{ G_i : i \in I \}$ is the set:(1)
with the operation of pointwise product, defined for all $f, g \in \prod_{i \in I}^{\mathrm{weak}} G_i$ by $(fg)(i) = f(i)g(i)$ for all $i \in I$. We proved that $\prod_{i \in I}^{\mathrm{weak}} G_i$ is a normal subgroup of $\prod_{i \in I} G_i$.
We will now define some important functions related to the weak direct product of groups.
Definition: Let $\{ G_i : i \in I \}$ be an arbitrary collection of groups. The Canonical Injection of $G_j$ into $\prod_{i \in I}^{\mathrm{weak}} G_i$ is the map $\iota_j : G_j \to \prod_{i \in I}^{\mathrm{weak}} G_i$ defined for all $g \in G_j$ by letting $\iota_j(g)$ be the function defined for all $i \in I$ by $[\iota_j(g)](i) = \begin{cases} g & \mathrm{if} \: i = j \\ e_{G_i} & \mathrm{if} \: i \neq j \end{cases}$. In other words, each $\iota_j(g)$ is the function on $I$ that maps every $i \in I$ to $e_{G_i}$, with the exception that $\iota_j(g)$ maps $j$ to $g$.
The following proposition tells us that each canonical injection $\iota_j$ is a monomorphism from $G_j$ to $\prod_{i \in I}^{\mathrm{weak}} G_i$.
Proposition 1: Let $\{ G_i : i \in I \}$ be an arbitrary collection of groups. Then for each $j \in I$, the canonical injection $\iota_j : G_j \to \prod_{i \in I}^{\mathrm{weak}} G_i$ is a monomorphism. Proof: Let $j \in I$ and let $g, g' \in G_j$. Then: Therefore: So indeed, for all $g, g' \in G_j$ we have that $\iota_j(gg') = \iota_j(g) \iota_j(g')$, so $\iota_j$ is a homomorphism. Now let $g, g' \in G_j$ and suppose that $\iota_j(g) = \iota_j(g')$. Then $[\iota_j(g)](j) = [\iota_j(g')](j)$, or equivalently, $g = g'$. So $\iota_j$ is injective. Thus $\iota_j : G_j \to \prod_{i \in I}^{\mathrm{weak}} G_i$ is a monomorphism. $\blacksquare$
Proposition 2: Let $\{ G_i : i \in I \}$ be an arbitrary collection of groups. Then for each $j \in I$, $\iota_j(G_j)$ is a normal subgroup of $\prod_{i \in I} G_i$. Proof: By Proposition 1, $\iota_j$ is a homomorphism of $G_j$ to $\prod_{i \in I}^{\mathrm{weak}} G_i$ and so $\iota_j(G_j)$ is a subgroup of $\prod_{i \in I}^{\mathrm{weak}} G_i$. Furthermore, $\prod_{i \in I}^{\mathrm{weak}} G_i$ is a subgroup of $\prod_{i \in I} G_i$. So $\iota_j(G_j)$ is a subgroup of $\prod_{i \in I} G_i$. Let $G = \prod_{i \in I} G_i$ and let $H = \iota_j(G_j)$. We aim to show that for all $g \in G$ that $gHg^{-1} \subseteq H$. Let $g \in G$ and let $h \in H$. Since $h \in H$ there exists an $a \in G_j$ such that $\iota_j(a) = h$. So $h(j) = a$ and $h(i) = e_{G_i}$ for all $i \in I \setminus \{ j \}$. So if $i = j$ we have that: And if $i \in I \setminus \{ j \}$ we have that: Then $ghg^{-1} = \iota_j(g(j)ag^{-1}(j)) \in \iota_j(G_j)$. Thus $gHg^{-1} \subseteq H$, which shows that $H = \iota_j(G_j)$ is a normal subgroup of $\prod_{i \in I} G_i$. $\blacksquare$
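The homomorphism property of Proposition 1 can be exercised on a small computational model. This toy sketch (my own; it models elements of the weak direct product of $\mathbb{Z}_2, \mathbb{Z}_3, \mathbb{Z}_4, \dots$ as dicts, with absent keys standing for the identity, rather than as functions on $I$):

```python
# Toy model of the weak direct product of Z_2, Z_3, Z_4, ...: elements are
# dicts {index: nonidentity component}; absent keys mean the identity 0, so
# every element has "all but finitely many" identity entries.
MODULI = {i: i + 2 for i in range(10)}   # group at index i is Z_{i+2}

def product(f, g):
    """Pointwise product: componentwise addition mod the modulus at each index."""
    out = {}
    for i in set(f) | set(g):
        v = (f.get(i, 0) + g.get(i, 0)) % MODULI[i]
        if v:
            out[i] = v
    return out

def iota(j, g):
    """Canonical injection iota_j: identity at every index except j."""
    g %= MODULI[j]
    return {j: g} if g else {}

# Homomorphism property of Proposition 1, checked in Z_4 (index j = 2):
lhs = iota(2, 3 + 2)                     # iota_j(g g')
rhs = product(iota(2, 3), iota(2, 2))    # iota_j(g) iota_j(g')
```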
Research talks;Partial Differential Equations;Mathematical Physics
In this talk we present recent results on the Hall-MHD system. We consider the incompressible MHD-Hall equations in $\mathbb{R}^3$.
$\partial_t u + u \cdot \nabla u + \nabla p = \left( \nabla \times B \right)\times B + \nu \Delta u,$ $\nabla \cdot u = 0, \quad \nabla \cdot B = 0,$ $\partial_t B - \nabla \times \left(u \times B\right) + \nabla \times \left(\left(\nabla \times B\right)\times B\right) = \mu \Delta B,$ $u\left(x,0\right) = u_0\left(x\right); \quad B\left(x,0\right) = B_0\left(x\right).$ Here $u = \left(u_1, u_2, u_3\right) = u\left(x,t\right)$ is the velocity of the charged fluid, $B = \left(B_1, B_2, B_3\right)$ the magnetic field induced by the motion of the charged fluid, and $p = p\left(x,t\right)$ the pressure of the fluid. The positive constants $\nu$ and $\mu$ are the viscosity and the resistivity coefficients. Compared with the usual viscous incompressible MHD system, the above system contains the extra term $\nabla \times \left(\left(\nabla \times B\right)\times B\right)$, the so-called Hall term. This term is important when the magnetic shear is large, which is where magnetic reconnection happens. On the other hand, in the case of laminar flows where the shear is weak, one ignores the Hall term, and the system reduces to the usual MHD. Compared to the case of the usual MHD, the history of the fully rigorous mathematical study of the Cauchy problem for the Hall-MHD system is very short. The global existence of weak solutions in the periodic domain is done in [1] by a Galerkin approximation. The global existence in the whole space $\mathbb{R}^3$ as well as the local well-posedness of smooth solutions is proved in [2], where the global existence of smooth solutions for small initial data is also established. A refined form of the blow-up criteria and small-data global existence is obtained in [3]. Temporal decay estimates of the global small solutions are deduced in [4]. In the case of zero resistivity we present the finite time blow-up result for the solutions obtained in [5].
We note that this is quite rare case, as far as the authors know, where the blow-up result for the incompressible flows is proved.
35Q35 ; 76W05
Research talks
The productivity of the $\kappa$-chain condition, where $\kappa$ is a regular, uncountable cardinal, has been the focus of a great deal of set-theoretic research. In the 1970’s, consistent examples of $\kappa$-cc posets whose squares are not $\kappa$-cc were constructed by Laver, Galvin, Roitman and Fleissner. Later, ZFC examples were constructed by Todorcevic, Shelah, and others. The most difficult case, that in which $\kappa = \aleph_2$, was resolved by Shelah in 1997.
In the first part of this talk, we shall present analogous results regarding the infinite productivity of chain conditions stronger than $\kappa$-cc. In particular, for any successor cardinal $\kappa$, we produce a ZFC example of a poset with precaliber $\kappa$ whose $\omega^{th}$ power is not $\kappa$-cc. To do so, we introduce and study the principle $U(\kappa, \mu, \theta, \chi)$ asserting the existence of a coloring $c:\left[\kappa\right]^{2}\rightarrow \theta$ satisfying a strong unboundedness condition. In the second part of this talk, we shall introduce and study a new cardinal invariant $\chi\left(\kappa\right)$ for a regular uncountable cardinal $\kappa$. For inaccessible $\kappa$, $\chi\left(\kappa\right)$ may be seen as a measure of how far away $\kappa$ is from being weakly compact. We shall prove that if $\chi\left(\kappa\right) > 1$, then $\chi\left(\kappa\right) = \max(\mathrm{Cspec}(\kappa))$, where: (1) $\mathrm{Cspec}(\kappa)$ := {$\chi(\vec{C}) \mid \vec{C}$ is a sequence over $\kappa$} $\setminus \omega$, and (2) $\chi\left(\vec{C}\right)$ is the least cardinal $\chi \leq \kappa$ such that there exist $\Delta \in \left[\kappa\right]^{\kappa}$ and $b : \kappa \rightarrow \left[\kappa\right]^{\chi}$ with $\Delta \cap \alpha \subseteq \cup_{\beta \in b(\alpha)} C_{\beta}$ for every $\alpha < \kappa$. We shall also prove that if $\chi(\kappa) = 1$, then $\kappa$ is greatly Mahlo, prove the consistency (modulo the existence of a supercompact) of $\chi(\aleph_{\omega+1}) = \aleph_0$, and carry out a systematic study of the effect of square principles on the $C$-sequence spectrum. In the last part of this talk, we shall unveil an unexpected connection between the two principles discussed in the previous parts, proving that, for infinite regular cardinals $\theta < \kappa$, $\theta \in \mathrm{Cspec}(\kappa)$ if there is a closed witness to $U(\kappa, \kappa, \theta, \theta)$. This is joint work with Chris Lambie-Hanson.
03E35 ; 03E05 ; 03E75 ; 06E10
A general conic is defined by five independent parameters and can pass through five arbitrary points.
Restricting to a parabola sets a constraint on the coefficients (the discriminant of the second degree terms must be zero), which "consumes" one degree of freedom.
But four remain, and you have an infinity of parabolas by the three given points and a fourth free one.
A more difficult question is when the shape of the parabola is fixed, i.e. you can only translate it and rotate it. Then it has only three degrees of freedom and the number of solutions must be finite. In the case of the vertices of an equilateral triangle, there can be at least six of them, by symmetry, as the figure shows.
In the general case, let the parabola have the equation $x=ay^2$, where $a$ is fixed. Then, incorporating the rigid transform (a rotation by $\theta$ and a translation by $(t_x, t_y)$), we need to solve the system
$$\begin{cases}x_0\cos\theta-y_0\sin\theta+t_x=a(x_0\sin\theta+y_0\cos\theta+t_y)^2\\x_1\cos\theta-y_1\sin\theta+t_x=a(x_1\sin\theta+y_1\cos\theta+t_y)^2\\x_2\cos\theta-y_2\sin\theta+t_x=a(x_2\sin\theta+y_2\cos\theta+t_y)^2\\\end{cases}$$
for $\theta, t_x$ and $t_y$.
By subtraction, we can eliminate $t_x$ and we get two equations linear in $t_y$, writing $x_{01} = x_1 - x_0$ and $x'_{01} = x_1 + x_0$ (and similarly for the other differences and sums): $$\begin{cases}x_{01}\cos\theta-y_{01}\sin\theta=a(x_{01}\sin\theta+y_{01}\cos\theta)(x'_{01}\sin\theta+y'_{01}\cos\theta+2t_y)\\x_{02}\cos\theta-y_{02}\sin\theta=a(x_{02}\sin\theta+y_{02}\cos\theta)(x'_{02}\sin\theta+y'_{02}\cos\theta+2t_y)\\\end{cases}$$
Then eliminating $t_y$, we obtain a cubic polynomial equation in $\cos\theta$ and $\sin\theta$. We can rationalize it with the transform
$$\cos\theta=\frac{t^2-1}{t^2+1},\sin\theta=\frac{2t}{t^2+1}.$$
This turns the trigonometric equation into a sextic one, having up to six real solutions.
The detailed discussion of the number of real roots seems to be quite an endeavor. As the minimum radius of curvature of the parabola $x = ay^2$ is $\frac{1}{2a}$ (attained at the vertex), when the circumscribed circle of the triangle is smaller than this, there is no solution.
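The rational substitution used to rationalize the trigonometric equation can be verified numerically; this quick sketch (my own addition) checks that the parametrized point always lies on the unit circle, which is what makes the rationalization legitimate:

```python
# Check that cos(theta) = (t^2-1)/(t^2+1), sin(theta) = 2t/(t^2+1)
# satisfies cos^2 + sin^2 = 1 for every t.
def cos_sin(t):
    d = t * t + 1.0
    return (t * t - 1.0) / d, 2.0 * t / d

worst = max(abs(c * c + s * s - 1.0)
            for c, s in map(cos_sin, [-5.0, -1.0, 0.0, 0.3, 2.0, 10.0]))
```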
$A$ and $B$ being independent is a common, but actually quite strong, assumption. It implies a special property of the joint probability density function:
$$f_{AB}(a,b)=f_{A}(a)f_{B}(b)$$
If $X$ is a random variable of its own, the joint probability of $X$, $A$ and $B$ is :
$$f_{XAB}(x,a,b)$$
But the conditional probability of $X$ given $A$ and $B$ has the following property:$$f_{X|AB}(x|a,b) = \frac{f_{XAB}(x,a,b)}{f_{AB}(a,b)}$$
And therefore:$$f_{X|AB}(x|a,b)f_{AB}(a,b) = f_{XAB}(x,a,b)$$
And if $A$ and $B$ are independent:$$f_{X|AB}(x|a,b)f_{A}(a)f_{B}(b) = f_{XAB}(x,a,b)$$
And from that it's possible to derive some of the identities you may need:$$E(X|(A,B))= \int_{\Omega_X} x f_{X|AB}(x|a,b) dx = \int_{\Omega_X} x \frac{f_{XAB}(x,a,b)}{f_{AB}(a,b)} dx $$
Now, in the general case I believe $E[X|(A,B)] \neq E[E[X|A]|B]$, because when computing $E[X|A]$ the result is a function $g(a)$, in the sense that it may algebraically depend on the assumed value of $A$, but the result is no longer a function of the random variable $B$ (nor of an algebraic value $b$); thus:$$E[E[X|A]|B] = E[g(a)|B] = g(a) = \int_{\Omega_{X}} x \, f_{X|A}(x|a)\, dx$$
The expectation given both $A$ and $B$ is a function $h$ of both algebraic values $a$ and $b$:$$E[X|(A,B)] = \int_{\Omega_{X}} x\, f_{X|AB}(x|a,b)\, dx = h(a,b)$$
If however, $X$ was assumed independent of both $A$ and $B$, then $E[X] = E[X|(A,B)] = E[E[X|A]|B]$ because the values of $A$ and $B$ wouldn't matter.
If $X$ is not a random variable with its own distribution, but a function of $A$ and $B$, then the principle above holds:$$X= w(A,B) \Rightarrow $$$$E[X|(A,B)] = w(a,b)$$$$E[X|A] = \int_{\Omega_{B}}w(a,b)\, f_{B|A}(b|a)\, db = g(a)$$
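A small Monte Carlo sketch makes the inequality concrete; this is an assumed toy example ($A,B$ independent fair bits, $X=AB$), in which $E[X\mid A,B]$ genuinely depends on both values while the iterated conditional collapses to a constant:

```python
import random

random.seed(0)
N = 200_000
# A, B independent fair bits; X = A*B depends on both.
pairs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(N)]

# E[X | A=1, B=1] = 1 exactly (every such sample has X = 1)
x_given_ab = sum(a * b for a, b in pairs if a == 1 and b == 1) \
           / sum(1 for a, b in pairs if a == 1 and b == 1)

# inner expectation g(a) = E[X | A=a]; here g(1) ~ E[B] = 1/2 and g(0) = 0
g = {}
for val in (0, 1):
    sel = [a * b for a, b in pairs if a == val]
    g[val] = sum(sel) / len(sel)

# outer expectation E[g(A) | B=1] collapses to ~ E[A]/2 = 1/4
g_given_b = sum(g[a] for a, b in pairs if b == 1) / sum(1 for a, b in pairs if b == 1)

print(x_given_ab, g_given_b)   # ~1.0 vs ~0.25: the two conditionals differ
```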
Suppose your series is $f(x) = \sum_{n=0}^\infty a_n x^n$ with the signs of $a_n$ alternating. If $|a_n| r^n$ is decreasing to $0$, this is an alternating series for $0 < x < r$.
The alternating series bound for the remainder after the $x^n$ term is then $|a_{n+1}| x^{n+1}$. The Lagrange form for the remainder is $\dfrac{f^{(n+1)}(c)}{(n+1)!} x^{n+1}$, where $0 < c < x$, and to get a bound we want to maximize $|f^{(n+1)}(c)|$ on this interval.
Now $$\dfrac{f^{(n+1)}(c)}{(n+1)!} = \sum_{j=n+1}^\infty {j \choose n+1} a_j c^{j-n-1}$$
The bound is the same as the alternating series bound if the maximum occurs at $c=0$. Now the derivative of this is
$$ \dfrac{f^{(n+2)}(c)}{(n+1)!} = \sum_{j=n+2}^\infty (j-n-1) {j \choose n+1} a_j c^{j-n-2} = (n+2)\sum_{j=n+2}^\infty {j \choose n+2} a_j c^{j-n-2} $$
If it weren't for that ${j \choose n+2}$ factor, this would still be an alternating series, and $f^{(n+2)}(c)$ would have the same sign as $a_{n+2}$, which is opposite to the sign of $a_{n+1}$ and $f^{(n+1)}(c)$, implying that the maximum is at $c=0$. But that factor can mess things up.
Consider e.g. a series that starts $1 - x + x^2 - x^3 + x^4$, with the remaining terms very small (but still alternating for $0 < x < 1$). You want to estimate the error in the linear approximation $1 - x$. Then $$\dfrac{f''(c)}{2} \approx 1 - 3 c + 6 c^2$$ If $ 1/2 < x < 1$, the maximum of this is not at $c=0$ but rather at $c=x$. The Lagrange bound is then approximately $(1 - 3 x + 6 x^2) x^2$, which is different from the alternating series bound of $x^2$.
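The comparison can be checked numerically; a sketch of this example, treating the later terms as exactly zero so that $f(x)=1-x+x^2-x^3+x^4$ (an assumed concrete instance):

```python
# Compare the alternating-series bound with the Lagrange bound for
# f(x) = 1 - x + x^2 - x^3 + x^4, approximated by 1 - x.
def f(x):
    return 1 - x + x**2 - x**3 + x**4

def half_f2(c):          # f''(c)/2 = 1 - 3c + 6c^2
    return 1 - 3 * c + 6 * c**2

x = 0.8
actual_err = abs(f(x) - (1 - x))
alt_bound = x**2                                   # |a_2| x^2 = 0.64
# maximize f''(c)/2 on [0, x] by a simple grid search
lagrange_bound = max(half_f2(i * x / 1000) for i in range(1001)) * x**2

print(actual_err, alt_bound, lagrange_bound)
# both bounds hold, but the Lagrange bound is larger for x > 1/2
assert actual_err <= alt_bound <= lagrange_bound
```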
Although I've not specifically attempted a \$15\:\text{A}\$ boosted LM317 before, this is along the lines of what I'd try out first. This is roughly taken from the Figure 23 you mentioned:
simulate this circuit – Schematic created using CircuitLab
In this case, I went for the D44/D45 series devices. (The PNP version has simply HORRIBLE Early Effect, but it's not a big deal here.)
The values of \$R_6\$, \$R_8\$, and \$R_9\$ are set to drop somewhere from \$150-200\:\text{mV}\$ at full load. They will need to be rated for at least \$1\:\text{W}\$, but I would not feel comfortable with less than \$2\:\text{W}\$ resistors there. If you adjust those values, please keep in mind the dissipation question. You are talking about a lot of current.
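As a back-of-envelope check of those values (assuming three bypass transistors sharing the \$15\:\text{A}\$ roughly equally; the numbers below are illustrative, not taken from the actual schematic):

```python
# Emitter-resistor sizing sketch: ~5 A per device, ~175 mV drop at full load.
i_per_device = 15.0 / 3            # A, assumed equal sharing across 3 devices
v_drop = 0.175                     # V, middle of the 150-200 mV range
r_emitter = v_drop / i_per_device  # ohms
p_emitter = i_per_device**2 * r_emitter   # W dissipated per resistor

print(round(r_emitter * 1000, 1), "mOhm,", round(p_emitter, 2), "W")
# ~35 mOhm and ~0.88 W per resistor, consistent with the >= 1 W rating advice
```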
To reduce the oscillation, you really want some ESR in \$C_2\$ to add a nice 'zero'. If you see oscillation in the output, try adding a small series resistor to \$C_2\$. \$15-39\:\text{m}\Omega\$ (as shown with \$R_{10}\$) should put a crimp in the oscillation. You might just make provisions for it and jumper it, without using a resistor, if your output seems fine with the output capacitor you selected. But here is one of those cases where output capacitor ESR is actually a good thing.
Your schematic shows an AC input. That's not good. I hope your schematic was just mistaken, there.
Since the minimum specification for the LM317 is \$3\:\text{V}\$ from input terminal to output terminal, the externally added circuit will always have more than enough headroom to operate so long as you supply that difference.
Keep in mind this is a linear power supply. With \$\approx 3.3\:\text{V}\$ output and \$\approx 3\:\text{V}\$ overhead, you will have little better than 50% efficiency. At full load, you will have \$\ge 45\:\text{W}\$ wasted dissipation, not counting the load's dissipation. And more than that, likely, because this ignores whatever you have supplying the unregulated input DC voltage -- where it is likely you have still more dissipation in diode rectifiers from AC, etc.
While perhaps \$3\:\text{W}\$ dissipation might occur in the emitter resistors, that still leaves pretty much all the rest to the bypass BJTs. Getting rid of \$15\:\text{W}\$ each will be the challenge. Note that if you want to allow a maximum junction temperature of say, \$100\:^\circ\text{C}\$, and the worst case ambient temperature you care about supporting is \$45\:^\circ\text{C}\$, then this means you need \$\frac{100^\circ\:\text{C}-45^\circ\:\text{C}}{15\:\text{W}}\approx 3.7\:\frac{^\circ\text{C}}{\text{W}}\$. For the parts I mentioned, junction to case is already \$1.8\:\frac{^\circ\text{C}}{\text{W}}\$. That leaves you only \$1.9\:\frac{^\circ\text{C}}{\text{W}}\$ for whatever you use as a heatsink plus the bonding interface between the BJTs and that heatsink. That's not a lot to work with.
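The thermal arithmetic in the preceding paragraph can be laid out explicitly (same assumed numbers as in the text):

```python
# Thermal budget per bypass BJT at 15 W dissipation.
t_j_max = 100.0      # degC, allowed junction temperature
t_amb_max = 45.0     # degC, worst-case ambient
p_device = 15.0      # W per transistor

theta_total = (t_j_max - t_amb_max) / p_device   # total degC/W allowed
theta_jc = 1.8                                   # degC/W junction-to-case (datasheet)
theta_remaining = theta_total - theta_jc         # heatsink + mounting interface

print(round(theta_total, 2), round(theta_remaining, 2))   # 3.67 1.87
```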
You might consider putting more of the dissipation into the emitter resistors, I suppose. More degeneration won't hurt you. I chose to set them at about a minimum resistance for the circuit, so increasing their values will be fine. (Don't decrease them much, though.) You need to work out this balancing act on your own. |
Isogeometric Analysis Discontinuous Galerkin discretizations for elliptic problems with discontinuous coefficients Dr. Ioannis Toulopoulos March 18, 2014, 3:30 p.m. S2 059
In this talk, Isogeometric Analysis (IGA) methods utilizing discontinuous approximation spaces for the solution of
an elliptic problem with discontinuous coefficients will be presented.
The problem is set in a complex domain $\Omega \subset \mathbb{R}^d, d=2,3$, which is subdivided
into a union of sub-domains, $\bar{\Omega}=\cup_{i=1}^N \bar{\Omega_i}$, with interior interfaces $\Gamma=\cup_{i=1}^N \partial \Omega_i \smallsetminus \partial \Omega$. The diffusion coefficients may have jump discontinuities only along the interior interfaces, $F\in \Gamma$. The solution of the problem is approximated in every sub-domain applying IGA methodology, without matching grid conditions along the $\partial \Omega_i$, as well as without imposing continuity requirements for the approximation spaces on $\partial \Omega_i$. The numerical scheme is completed by applying Discontinuous Galerkin (DG) techniques. Numerical fluxes with interior penalty jump terms are used on the interfaces of the sub-domains.
In the first part of the talk, error estimates in the classical $\|.\|_{DG}$-norm (consisting of the broken gradient
plus a jump term) will be shown under the usual regularity assumption, $u\in W^{s\geq 2,2}(\Omega)$.
In the second part, we consider the model problem with low regularity solution $u\in W^{2,p\in(1,2)}(\Omega)$ and derive error estimates in the $\|.\|_{DG}$-norm. These estimates are optimal with respect to the discretization size.
The error analysis makes use of several auxiliary results from finite element methods, e.g. trace inequalities and interpolation error estimates. These results will again be expressed and discussed in the IGA framework.
I have some problems understanding asymptotic definitions of the growth of functions. Namely, I saw in my algorithms book that $O(g(n))=\{f(n):$ there exist positive constants $c$ and $n_0$ such that $0\leq f(n)\leq cg(n)$ for all $n\geq n_0\}$. But in https://artofproblemsolving.com/community/c7h31517 I saw a more rigorous definition:
Given a positive $ g$ defined in a punctured neighborhood of $ x_0$, denote by $ O_{x_0}(g)$ the class of all functions $ f$ such that the ratio $ f/g$ is bounded in some punctured neighbourhood of $ x_0$.
Here, I think class means a set.
So I think it would be nice to learn similar rigorous definitions for other asymptotic notations
$\Theta(g(n))=\{f(n):$ there exist positive constants $c_1,c_2,$ and $n_0$ such that $0\leq c_1g(n)\leq f(n)\leq c_2g(n)$ for all $n\geq n_0 \}$,
$\Omega(g(n))=\{f(n):$ there exist positive constants $c$ and $n_0$ such that $0\leq cg(n)\leq f(n)$ for all $n\geq n_0\}$,
$o(g(n))=\{f(n):$ for any positive constant $c>0$, there exists a constant $n_0>0$ such that $0\leq f(n)<cg(n)$ for all $n\geq n_0\}$
$\omega(g(n))=\{f(n):$ for any positive constant $c>0$, there exists a constant $n_0>0$ such that $0\leq cg(n)<f(n)$ for all $n\geq n_0\}$
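The bounded-ratio formulation from the quoted definition is easy to probe empirically; a sketch with assumed sample functions:

```python
# f in O(g) iff f/g is eventually bounded; check the ratio numerically.
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2

ratios = [f(n) / g(n) for n in range(1, 10_001)]   # ratio = 3 + 10/n
print(max(ratios[99:]))   # 3.1 for n >= 100: bounded, so f = O(g)
```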
Also, in my book those sets are used in a weird way like $e^x=1+x+\Theta(x^2)$.
So could anyone define rigorously the asymptotic notations $\Theta$, $\Omega$, $o$ and $\omega$ and give some guidance on how to use them in a formal way rather than using notations like $e^x=1+x+\Theta(x^2)$? Also, does it give some extra information if we use the notation $O_n(g(n))$ rather than $O(g(n))$?
Here we first give a step-by-step solution of the basic question 1, and then turn to other questions.
Integral representation of $S(p,q,x)$
Writing
$$k^{-q}=\frac{1}{\Gamma (q)}\int_0^{\infty } e^{-k \;t} t^{q-1} \, dt\tag{s1}$$
and observing the formula for the generating function of the generalized harmonic number
$$\sum _{k=1}^{\infty } z^k H_k^{(p)}=\frac{\text{Li}_p(z)}{1-z}\tag{s2}$$
we can write (4) as
$$S(p,q,x) =\frac{1}{\Gamma (q)}\int_0^{\infty } \frac{t^{q-1} \text{Li}_p\left(e^{-t} x\right)}{1-e^{-t} x} \, dt\tag{s3} $$
Hence $S(p,q,x)$ has the form of a Mellin transformation defined as
$$M(f(t),t,q) = \int_{0}^\infty t^{q-1} f(t) \, dt$$
$$S(p,q,x) =\frac{1}{\Gamma (q)} M(f(p,x,t),t,q)\tag{s3a} $$
with the kernel
$$f(p,x,t) =\frac{ \text{Li}_p\left(e^{-t} x\right)}{1-e^{-t} x} \tag{s3b}$$
For $x = 1$ and $p=1$ this simplifies to
$$S(1,q) =\frac{1}{\Gamma (q)} M(f(t),t,q)\tag{s4} $$
The kernel simplifies to
$$f(t) = \frac{-\log(1-e^{-t})}{1-e^{-t}}\tag{s4a} $$
Notice that for $t\gg 1$ this kernel approaches the kernel for the $\zeta$-function:
$$f(t \to \infty) =f_{\zeta}(t) = \frac{1}{e^{t}-1}\tag{s5} $$
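Before extracting singularities, the representation (s4) can be sanity-checked numerically at a point where everything converges; a pure-Python sketch comparing the direct sum, the Mellin integral, and Euler's classical value $\sum_k H_k/k^3=\pi^4/72$ (the quadrature parameters are ad hoc):

```python
import math

q = 3  # evaluate S(1,q) at a point of convergence

# (a) direct sum S(1,3) = sum_k H_k / k^3
H, S_direct = 0.0, 0.0
for k in range(1, 100_000):
    H += 1.0 / k
    S_direct += H / k ** q

# (b) Mellin form (s4): (1/Gamma(q)) * int_0^inf t^(q-1) (-log(1-e^-t))/(1-e^-t) dt
def kernel(t):
    u = 1.0 - math.exp(-t)
    return t ** (q - 1) * (-math.log(u)) / u

# composite Simpson on [eps, T]; the integrand vanishes like -t*log(t) at 0
eps, T, n = 1e-9, 60.0, 200_000
h = (T - eps) / n
acc = kernel(eps) + kernel(T)
for i in range(1, n):
    acc += kernel(eps + i * h) * (4 if i % 2 else 2)
S_integral = acc * h / 3 / math.gamma(q)

# Euler's classical value: sum_k H_k/k^3 = pi^4/72
print(S_direct, S_integral, math.pi ** 4 / 72)   # all ~ 1.35290
```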
The method will now be illustrated by examining the simplified expression (s4).
In order to find singularities in $q$, we split the integral in (s4) into two parts $F=\int_0^1 f \, dt$ and $G=\int_1^\infty f \, dt$ and notice that the integral $G$ is always convergent, so that $G$ is holomorphic in $q$.
The singularities must therefore come from $F$, in particular from the vicinity of $t=0$ of the integration. Hence they can be found by expanding the integrand of $F$ into a series about $t=0$.
Singularities of $S(1,q)$
To lowest order in $t$ the integrand (s4a) is given by
$$t^{q-1} \left(-\frac{\log (t)}{2}-\frac{\log (t)}{t}+\frac{1}{2}\right)$$
Integrating over $t$ from $0$ to $1$ and taking into account the $\Gamma$ function gives
$$F_0 = \frac{1}{2 \Gamma (q)}\left( \frac{1}{q^2}+\frac{1}{q}+\frac{2}{(q-1)^2}\right) = \frac{1}{2 \Gamma(q) q^2}+\frac{1}{2 \Gamma(q) q}+\frac{1}{\Gamma(q) (q-1)^2} \\=\frac{1}{2 \Gamma(q+1) q}+\frac{1}{2 \Gamma(q+1)}+\frac{1}{\Gamma(q) (q-1)^2} $$
From this we can easily identify the following basic singularities:
a double pole at $q=1$ with residue $r=1$, and a simple pole at $q=0$ with residue $r=\frac{1}{2}$.
This is in contrast to the zeta function which has a simple pole at $q=1$ with residue $r=1$ and no singularity at $q=0$.
The next order gives for the integrand
$$t^{q-1} \left(t \left(\frac{5}{24}-\frac{\log (t)}{12}\right)-\frac{\log (t)}{2}-\frac{\log (t)}{t}+\frac{1}{2}\right)$$
which after integrating and taking into account the $\Gamma$ function gives
$$F_1 = F_0 + \frac{5}{24 (q+1) \Gamma (q)}+\frac{1}{12 (q+1)^2 \Gamma (q)}$$
A new pole appears here at $q=-1$. It comes from the last term and the observation that $(q+1)^2 \Gamma (q)=\frac{(q+1)^2 \Gamma (q+1)}{q}=\frac{(q+1) \Gamma (q+2)}{q}$ which goes $\to -(q+1)$ for $q\to -1$.
The last but one term is regular at $q=-1$. In summary we have found a new simple pole at $q=-1$ with a residue $r=-\frac{1}{12}$.
Continuing this procedure leads to the following structure of the poles besides the basic ones:
$S(1,q)$ has simple poles at negative odd integers $q=-(2k-1)$. Their residues turn out to be
$$r(k) =- \frac{B_{2 k}}{2 k}$$
where $B_{n}$ is the n-th Bernoulli number.
Here are the first few pole locations and residues
$$\left(\begin{array}{cc} -1 & -\frac{1}{12} \\ -3 & \frac{1}{120} \\ -5 & -\frac{1}{252} \\ -7 & \frac{1}{240} \\ -9 & -\frac{1}{132} \\ -11 & \frac{691}{32760} \\ -13 & -\frac{1}{12} \\\end{array}\right)$$
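The table can be reproduced from the residue formula $r(k)=-\frac{B_{2k}}{2k}$; a sketch computing exact Bernoulli numbers via the standard recurrence:

```python
from fractions import Fraction
from math import comb

def bernoulli_upto(n):
    """B_0..B_n (with B_1 = -1/2) from the recurrence sum_{j<m+1} C(m+1,j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli_upto(14)
# residues at the poles q = -(2k-1), k = 1..7
residues = [-B[2 * k] / (2 * k) for k in range(1, 8)]
print(residues)   # -1/12, 1/120, -1/252, 1/240, -1/132, 691/32760, -1/12
```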
For comparison: $\zeta(q)$ has just one simple pole $\frac{1}{q-1}$ in the whole complex $q$-plane.
$S(p,q)$ for $p\gt1$
For a partial answer to question 3 the same method can be applied for $p\gt 1$.
The results for $p=1$ through $p=4$ are written, for each $p$, as a list of poles, their possible multiplicity, and their residues. The list starts with the pole at $q=1$ and proceeds in the direction of the negative real $q$ axis. The last entry is the general expression from that point on, where for each $p$ we let $k=1,2,3,...$
$p=1\; \left( (1^2 , 1), (0, \frac{1}{2} ), ( -(2 k-1) , -\frac{B_{2 k}}{2 k}) \right)$
$p=2\; \left((1, \zeta(2)),(0,-1), (-1,\frac{1}{2}),(-2k, -B_{2 k})\right)$
$p=3\; \left((1, \zeta(3)),(0,0), (-1,-\frac{1}{2}),(-2,\frac{1}{2}),(-(2k+1), -(k+\frac{1}{2})B_{2 k})\right)$
$p=4\; \left((1, \zeta(4)),(0,0), (-1,0),(-2,-\frac{1}{3}),(-3,\frac{1}{2}),(-(2k+2), -\frac{1}{3}(k+1)(2k+1)B_{2 k})\right)$
Observations
The only double pole appears for the case $p=1$ at $q=1$, all other poles are simple ones
With increasing $p$, an increasing gap appears between the pole at $q=1$ and the next pole on the negative real $q$-axis. This corresponds to the fact that the generalized harmonic number $H_k^{(p)}$ approaches $1$ for large $p$, which in turn means that $S(p,q)$ approaches $\zeta(q)$, which has only one pole at $q=1$. The residue of the pole of $S(p,q)$ at $q=1$ is $\zeta(p)$, which for $p\to\infty$ goes to $1$, as it is with $\zeta(q)$.
Singularities of $S(1,q)$ using asymptotic expansion for $H_{k}$
A much simpler way to find the pole structure of $S(1,q)$ consists in using the asymptotic expansion
$$H_k = \log(k) +\gamma + \frac{1}{2k} - \sum_{m\ge 1} \frac{B_{2m}}{2m k^{2m}}$$
Inserting this in the definition of (1) and interchanging the summation gives
$$S(1,q) = \sum_{k\ge 1}\frac{\log(k)}{k^q} +\gamma \sum_{k\ge 1}\frac{1}{k^q}+ \frac{1}{2}\sum_{k\ge 1}\frac{1}{k^{q+1}} - \sum_{m\ge 1} \frac{B_{2m}}{2m} \sum_{k\ge 1}\frac{1}{ k^{2m+q}}\\= -\zeta'(q) +\gamma \zeta(q) + \frac{1}{2}\zeta(q+1) - \sum_{m\ge 1} \frac{B_{2m}}{2m} \zeta(2m+q)$$
All we need to know is that $\zeta(q)$ has a simple pole at $q=1$ with residue $1$.
Since $\zeta(q)\sim\frac{1}{q-1}$ near $q=1$, its derivative satisfies $\zeta'(q)\sim-\frac{1}{(q-1)^2}$; hence the first term $\sum_{k}\log(k)/k^q=-\zeta'(q)$ contributes a double pole at $q=1$ with residue $1$.
The second term has a simple pole at $q=1$ with residue $\gamma$, which did not appear previously and which I therefore consider to be "spurious".
Third term: pole at $(1+q)=1$, i.e. $q=0$ with residue $\frac{1}{2}$.
Fourth term: pole at $2m+q=1$, i.e. $q=1-2m$ ($=-1, -3, -5, ...$) and residues $-\frac{B_{2m}}{2m} $.
Summing up: except for the term with $\gamma$ we find the previously obtained pole structure. |
I think that the problem stems from the action of the operator $\hat p$. Please correct me if I am mistaken.
The action of the operator $\hat p$ in the quantum space is defined as $<x|\hat p|a>=-i \hbar \partial_x <x|a>$ if the state $|a>$ does not depend on $x$. In fact, if the state $|a>$ depended on $x$, for instance $|a>=f(x)|b>$ for some scalar function $f(x)$, then the equation $<x|\hat p|a>=<x|\hat p f(x)|b>=-i \hbar \partial_x <x|f(x)|b>= -i \hbar \partial_x (f(x) <x|b>)$ would be ill-defined, as it could also be evaluated in a different way: $<x|\hat p|a>=<x|\hat p f(x)|b>=f(x) <x|\hat p |b>=f(x)(-i \hbar) \partial_x <x|b>$. The second evaluation comes from the fact that, in standard quantum mechanics, it is postulated that operators act on ket vectors and not on scalars (with the exception of the time-reversal operator, which is not of any use here).
The commutator relation $\left[\hat x, \hat p\right]=i\hbar$ is obtained from the action of the operator $\hat p$ as defined above. Thus, it comes straightforwardly that such a commutation relation cannot be generally used in a scalar product ($<x|...|ket>$) if the ket state on the right depends on $x$.
Having said that, when you perform the trace of the commutator $\left[\hat x, \hat p\right]$, you are doing
$Tr\Big[\left[\hat x, \hat p\right]\Big]=\int dx <x|(\hat x\hat p-\hat p\hat x)|x>=\int dx <x|(x\hat p-\hat p x)|x>$, where in the last step above I have just extracted the eigenvalues from the eigenstates $|x>$. In the above equation you have a scalar product where the ket on the right depends on $x$. Thus, you'll have to be careful in the evaluation and you cannot use the $xp$-commutation relations straight away. With a little care, everyone can see from the above equation that, indeed, the trace gives zero: $\int dx <x|(x\hat p-\hat p x)|x>=\int dx \,x<x|(\hat p-\hat p )|x>=0$, as it should. Whereas, if you had used the $xp$-commutation relations from the outset, you would have wrongly found $Tr\Big[\left[\hat x, \hat p\right]\Big]=Tr\Big[i\hbar\Big]=i\hbar$.
Edited after Joe's Comment In the last equation I forgot the dimensionality of the space. It must be modified as$Tr\Big[\left[\hat x, \hat p\right]\Big]=Tr\Big[i\hbar\Big]=i\hbar\,D$ where $D$ are the dimensions of the quantum space you are taking the trace in. Thanks Joe. |
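A related sanity check: in any finite dimension the trace of a commutator vanishes identically, which is why $[\hat x, \hat p]=i\hbar$ can hold only on an infinite-dimensional space, where the subtleties above arise; a small sketch with random matrices:

```python
import random

random.seed(0)
n = 5
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
P = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

XP, PX = matmul(X, P), matmul(P, X)
# Tr(XP) = Tr(PX) for any finite matrices, so Tr[X, P] = 0 (up to roundoff),
# whereas Tr(i*hbar*I) = i*hbar*n would be nonzero.
trace_comm = sum(XP[i][i] - PX[i][i] for i in range(n))
print(abs(trace_comm))   # ~ 0 (floating-point roundoff only)
```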
In set theory, we have the phenomenon of the
universal definition. This is a property $\phi(x)$, first-order expressible in the language of set theory, that necessarily holds of exactly one set, but which can in principle define any particular desired set that you like, if one should simply interpret the definition in the right set-theoretic universe. So $\phi(x)$ could be defining the set of real numbers $x=\mathbb{R}$ or the integers $x=\mathbb{Z}$ or the number $x=e^\pi$ or a certain group or a certain topological space or whatever set you would want it to be. For any mathematical object $a$, there is a set-theoretic universe in which $a$ is the unique object $x$ for which $\phi(x)$.
The universal definition can be viewed as a set-theoretic analogue of the universal algorithm, a topic on which I have written several recent posts:
Let’s warm up with the following easy instance.
Theorem. Any particular real number $r$ can become definable in a forcing extension of the universe.
Proof. By Easton’s theorem, we can control the generalized continuum hypothesis precisely on the regular cardinals, and if we start (by forcing if necessary) in a model of GCH, then there is a forcing extension where $2^{\aleph_n}=\aleph_{n+1}$ just in case the $n^{th}$ binary digit of $r$ is $1$. In the resulting forcing extension $V[G]$, therefore, the real $r$ is definable as: the real whose binary digits conform with the GCH pattern on the cardinals $\aleph_n$. QED
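The decoding direction of this argument is elementary; a toy sketch (purely illustrative, of course, not set theory) reading off a real from a 0/1 "GCH pattern", where the $n^{th}$ entry is $1$ just in case $2^{\aleph_n}=\aleph_{n+1}$:

```python
from fractions import Fraction

def decode(pattern):
    # interpret the 0/1 pattern as binary digits after the point
    return sum(Fraction(bit, 2 ** (i + 1)) for i, bit in enumerate(pattern))

print(decode([1, 0, 1]))   # 5/8
```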
Since this definition can be settled in a rank-initial segment of the universe, namely, $V_{\omega+\omega}$, the complexity of the definition is $\Delta_2$. See my post on Local properties in set theory to see how I think about locally verifiable and locally decidable properties in set theory.
If we push the argument just a little, we can go beyond the reals.
Theorem. There is a formula $\psi(x)$, of complexity $\Sigma_2$, such that for any particular object $a$, there is a forcing extension of the universe in which $\psi$ defines $a$.
Proof. Fix any set $a$. By the axiom of choice, we may code $a$ with a set of ordinals $A\subset\kappa$ for some cardinal $\kappa$. (One well-orders the transitive closure of $\{a\}$ and thereby finds a bijection $\langle\mathop{tc}(\{a\}),\in\rangle\cong\langle\kappa,E\rangle$ for some $E\subset\kappa\times\kappa$, and then codes $E$ to a set $A$ by an ordinal pairing function. The set $A$ tells you $E$, which tells you $\mathop{tc}(\{a\})$ by the Mostowski collapse, and from this you find $a$.) By Easton’s theorem, there is a forcing extension $V[G]$ in which the GCH holds at all $\aleph_{\lambda+1}$ for a limit ordinal $\lambda<\kappa$, but fails at $\aleph_{\kappa+1}$, and such that $\alpha\in A$ just in case $2^{\aleph_{\alpha+2}}=\aleph_{\alpha+3}$ for $\alpha<\kappa$. That is, we manipulate the GCH pattern to exactly code both $\kappa$ and the elements of $A\subset\kappa$. Let $\phi(x)$ assert that $x$ is the set that is decoded by this process: look for the first stage where the GCH fails at $\aleph_{\lambda+2}$, and then extract the set $A$ of ordinals, and then check if $x$ is the set coded by $A$. The assertion $\phi(x)$ did not depend on $a$, and since it can be verified in any sufficiently large $V_\theta$, the assertion $\phi(x)$ has complexity $\Sigma_2$. QED
Let’s try to make a better universal definition. As I mentioned at the outset, I have been motivated to find a set-theoretic analogue of the universal algorithm, and in that computable context, we had a universal algorithm that could not only produce any desired finite set, when run in the right universe, but which furthermore had a robust interaction between models of arithmetic and their top-extensions: any set could be extended to any other set for which the algorithm enumerated it in a taller universe. Here, I’d like to achieve the same robustness of interaction with the universal definition, as one moves from one model of set theory to a taller model. We say that one model of set theory $N$ is a top-extension of another $M$, if all the new sets of $N$ have rank totally above the ranks occurring in $M$. Thus, $M$ is a rank-initial segment of $N$. If there is a least new ordinal $\beta$ in $N\setminus M$, then this is equivalent to saying that $M=V_\beta^N$.
Theorem. There is a formula $\phi(x)$, such that In any model of ZFC, there is a unique set $a$ satisfying $\phi(a)$. For any countable model $M\models\text{ZFC}$ and any $a\in M$, there is a top-extension $N$ of $M$ such that $N\models \phi(a)$.
Thus, $\phi(x)$ is the universal definition: it always defines some set, and that set can be any desired set, even when moving from a model $M$ to a top-extension $N$.
Proof. The previous manner of coding will not achieve property 2, since the GCH pattern coding started immediately, and so it would be preserved to any top extension. What we need to do is to place the coding much higher in the universe, so that in the top extension $N$, it will occur in the part of $N$ that is totally above $M$.
But consider the following process. In any model of set theory, let $\phi(x)$ assert that $x$ is the empty set unless the GCH holds at all sufficiently large cardinals, and indeed $\phi(x)$ is false unless there is a cardinal $\delta$ and ordinal $\gamma<\delta^+$ such that the GCH holds at all cardinals above $\aleph_{\delta+\gamma}$. In this case, let $\delta$ be the smallest such cardinal for which that is true, and let $\gamma$ be the smallest ordinal working with this $\delta$. So both $\delta$ and $\gamma$ are definable. Now, let $A\subset\gamma$ be the set of ordinals $\alpha$ for which the GCH holds at $\aleph_{\delta+\alpha+1}$, and let $\phi(x)$ assert that $x$ is the set coded by the set $A$.
It is clear that $\phi(x)$ defines a unique set, in any model of ZFC, and so (1) holds. For (2), suppose that $M$ is a countable model of ZFC and $a\in M$. It is a fact that every countable model of ZFC has a top-extension, by the definable ultrapower method. Let $N_0$ be a top extension of $M$. Let $N=N_0[G]$ be a forcing extension of $N_0$ in which the set $a$ is coded into the GCH pattern very high up, at cardinals totally above $M$, and such that the GCH holds above this coding, in such a way that the process described in the previous paragraph would define exactly the set $a$. So $\phi(a)$ holds in $N$, which is a top-extension of $M$ as no new sets of small rank are added by the forcing. So statement (2) also holds.
QED
The complexity of the definition is $\Pi_3$, mainly because in order to know where to look for the coding, one needs to know the ordinals $\delta$ and $\gamma$, and so one needs to know that the GCH always holds above that level. This is a $\Pi_3$ property, since it cannot be verified locally only inside some $V_\theta$.
A stronger analogue with the universal algorithm — and this is a question that motivated my thinking about this topic — would be something like the following:
Question. Is there is a $\Sigma_2$ formula $\varphi(x)$, that is, a locally verifiable property, with the following properties? In any model of ZFC, the class $\{x\mid\varphi(x)\}$ is a set. It is consistent with ZFC that $\{x\mid\varphi(x)\}$ is empty. In any countable model $M\models\text{ZFC}$ in which $\{x\mid\varphi(x)\}=a$ and any set $b\in M$ with $a\subset b$, then there is a top-extension $N$ of $M$ in which $\{x\mid\varphi(x)\}=b$.
An affirmative answer would be a very strong analogue with the universal algorithm and Woodin’s theorem about which I wrote previously. The idea is that the $\Sigma_2$ properties $\varphi(x)$ in set theory are analogous to the computably enumerable properties in computability theory. Namely, to verify that an object has a certain computably enumerable property, we run a particular computable process and then sit back, waiting for the process to halt, until a stage of computation arrives at which the property is verified. Similarly, in set theory, to verify that a set has a particular $\Sigma_2$ property, we sit back watching the construction of the cumulative set-theoretic universe, until a stage $V_\beta$ arrives that provides verification of the property. This is why in statement (3) we insist that $a\subset b$, since the $\Sigma_2$ properties are always upward absolute to top-extensions; once an object is placed into $\{x\mid\varphi(x)\}$, then it will never be removed as one makes the universe taller.
So the hope was that we would be able to find such a universal $\Sigma_2$ definition, which would serve as a set-theoretic analogue of the universal algorithm used in Woodin’s theorem.
If one drops the first requirement, and allows $\{x\mid \varphi(x)\}$ to sometimes be a proper class, then one can achieve a positive answer as follows.
Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties. If the GCH holds, then $\{x\mid\varphi(x)\}$ is empty. For any countable model $M\models\text{ZFC}$ where $a=\{x\mid \varphi(x)\}$ and any $b\in M$ with $a\subset b$, there is a top extension $N$ of $M$ in which $N\models\{x\mid\varphi(x)\}=b$.
Proof. Let $\varphi(x)$ assert that the set $x$ is coded into the GCH pattern. We may assume that the coding mechanism of a set is marked off by certain kinds of failures of the GCH at odd-indexed alephs, with the pattern at intervening even-indexed regular cardinals forming the coding pattern. This is $\Sigma_2$, since any large enough $V_\theta$ will reveal whether a given set $x$ is coded in this way. And because of the manner of coding, if the GCH holds, then no set is coded. Also, if the GCH holds eventually, then only a set-sized collection is coded. Finally, any countable model $M$ where only a set is coded can be top-extended to another model $N$ in which any desired superset of that set is coded. QED
Update. Originally, I had proposed an argument for a negative answer to the question, and I was actually a bit disappointed by that, since I had hoped for a positive answer. However, it now seems to me that the argument I had written is wrong, and I am grateful to Ali Enayat for his remarks on this in the comments. I have now deleted the incorrect argument.
Meanwhile, here is a positive answer to the question in the case of models of $V\neq\newcommand\HOD{\text{HOD}}\HOD$.
Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties: In any model of $\newcommand\ZFC{\text{ZFC}}\ZFC+V\neq\HOD$, the class $\{x\mid\varphi(x)\}$ is a set. It is relatively consistent with $\ZFC$ that $\{x\mid\varphi(x)\}$ is empty; indeed, in any model of $\ZFC+\newcommand\GCH{\text{GCH}}\GCH$, the class $\{x\mid\varphi(x)\}$ is empty. If $M\models\ZFC$ thinks that $a=\{x\mid\varphi(x)\}$ is a set and $b\in M$ is a larger set $a\subset b$, then there is a top-extension $N$ of $M$ in which $\{x\mid \varphi(x)\}=b$.
Proof. Let $\varphi(x)$ hold, if there is some ordinal $\alpha$ such that every element of $V_\alpha$ is coded into the GCH pattern below some cardinal $\delta_\alpha$, with $\delta_\alpha$ as small as possible with that property, and $x$ is the next set coded into the GCH pattern above $\delta_\alpha$. This is a $\Sigma_2$ property, since it can be verified in any sufficiently large $V_\theta$.
In any model of $\ZFC+V\neq\HOD$, there must be some sets that are not coded into the $\GCH$ pattern, for if every set were coded that way, then there would be a definable well-ordering of the universe and we would have $V=\HOD$. So in any model of $V\neq\HOD$, there is a bound on the ordinals $\alpha$ for which $\delta_\alpha$ exists, and therefore $\{x\mid\varphi(x)\}$ is a set. So statement (1) holds.
Statement (2) holds, because we may arrange it so that the GCH itself implies that no set is coded at all, and so $\varphi(x)$ would always fail.
For statement (3), suppose that $M\models\ZFC+\{x\mid\varphi(x)\}=a\subseteq b$ and $M$ is countable. In $M$, there must be some minimal rank $\alpha$ for which there is a set of rank $\alpha$ that is not coded into the GCH pattern. Let $N$ be an elementary top-extension of $M$, so $N$ agrees that $\alpha$ is that minimal rank. Now, by forcing over $N$, we can arrange to code all the sets of rank $\alpha$ into the GCH pattern above the height of the original model $M$, and we can furthermore arrange so as to code any given element of $b$ just above that coding. And so on, we can iterate it so as to arrange the coding above the height of $M$ so that exactly the elements of $b$ now satisfy $\varphi(x)$, but no more. In this way, we will ensure that $N\models\{x\mid\varphi(x)\}=b$, as desired.
QED
I find the situation unusual, in that often results from the models-of-arithmetic context generalize to set theory with models of $V=\HOD$, because the global well-order means that models of $V=\HOD$ have definable Skolem functions, which is true in every model of arithmetic and which sometimes figures implicitly in constructions. But here, we have the result of Woodin’s theorem generalizing from models of arithmetic to models of $V\neq\HOD$. Perhaps this suggests that we should expect a fully positive solution for models of set theory.
Further update. Woodin and I have now established the fully general result of the universal finite set, which subsumes much of the preliminary early analysis that I had earlier made in this post. Please see my post, The universal finite set. |
Consider the following matrix: $ \left[ \begin{array}{ccc} -0.05 & 0.45 & 0 \\ 0.05 & -0.45 & 0 \end{array} \right] $
Row reducing the above matrix in Matlab using the
rref() function produces what I would expect (just adding top row to bottom row and scaling top row):$ \left[ \begin{array}{ccc}1.0 & -9.0 & 0 \\0 & 0 & 0 \end{array} \right] $
But if I remove the last column of just zeros, and row reduce that matrix, I get a 2x2 identity matrix: $ \left[ \begin{array}{ccc} -0.05 & 0.45 \\ 0.05 & -0.45 \end{array} \right] \sim \left[ \begin{array}{ccc} 1 & 0 \\ 0 & 1 \end{array} \right] $
I can't see how removing the last column changes anything; adding the top row to the bottom row will still produce the result above, just without the last column of $0$'s. But I'm quite sure Matlab is right and I'm not, so what am I missing here?
Edit: I have managed to reproduce the above and I believe it's all due to rounding errors. If you input $ M = \left[ \begin{array}{ccc} 0.95 & 0.45 \\ 0.05 & .55 \end{array} \right] $ and then do $ A = M - eye(2) $. rref(A) will now give the $2 \times 2$ identity matrix. If I enter the result of $M-eye(2)$ directly, that is $B= \left[ \begin{array}{ccc} -0.05 & 0.45 \\ 0.05 & -0.45 \end{array} \right] $, then rref(B) returns the expected $ \left[ \begin{array}{ccc} 1 & -9 \\ 0 & 0 \end{array} \right] $ .
Here's a screenshot as an example: |
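The rounding-error hypothesis is easy to confirm: in IEEE-754 double precision (which MATLAB uses), $0.95 - 1$ is not exactly $-0.05$. The same effect is reproducible in any language, e.g. this quick Python check:

```python
# 0.95 - 1 is not exactly -0.05 in double precision:
a11 = 0.95 - 1.0
print(repr(a11))           # -0.050000000000000044
print(a11 + 0.05)          # tiny nonzero residue instead of 0

# The entry 0.05 in the second row is stored as a *different* nearby
# double, so the two rows of M - eye(2) are no longer exact multiples
# of each other, and elimination sees a numerically full-rank matrix.
print(a11 == -0.05)        # False
```

Whether rref then keeps or discards the second row depends on its internal rank tolerance, which explains the different results for `A` and `B`.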
Spectroscopy operation
In spectroscopy, an image of the projector lens crossover is magnified and projected onto the camera. When properly adjusted, the observed CCD image consists of sharp vertical lines that represent electrons of particular energy. Displacement of lines in the horizontal (or dispersive) direction indicates a difference in energy between electrons.
The projector crossover is present under typical transmission electron microscope (TEM) or scanning (S)TEM operating conditions (in some dedicated STEMs a virtual crossover is formed). Movement of the crossover up and down the column depends slightly on the exact operating mode, but you can easily compensate for this movement by making small adjustments of the pre‐prism quadrupole Focus X. Thus you can tune the spectrometer for any operating condition of the microscope.
The spectra you acquire will correspond to the feature in the center of the microscope viewing screen; this is the area selected by the spectrometer entrance aperture when you lift the screen up. This area can be a precipitate image in TEM (e.g., a Bragg beam from a diffraction pattern) depending on the operational mode. Three basic operation modes are possible: TEM Imaging, TEM Diffraction, and STEM that will be discussed below.
Note: Due to mechanical variability of the TEM, viewing screen and spectrometer mount, the center of the TEM screen (or viewing camera) is not at the exact position of the spectrometer entrance aperture. This is normal and will not affect the operation of the system. For STEM electron energy loss spectroscopy (EELS) acquisition, the alignment of the ADF STEM detector and the entrance aperture is important. Detectors from Gatan should be concentric with the entrance aperture and are adjusted at installation. For detectors from 3rd-party or TEM suppliers, please contact the appropriate organization for assistance.
TEM imaging mode
With the TEM in the Imaging mode, an image is formed on the viewing screen, while the projector crossover contains a small diffraction pattern. Because the spectrometer projects an energy dispersed diffraction pattern onto the detector, this mode is also known as diffraction‐coupled. The camera length \(L\) of the pattern in the projector crossover is given by
\(L = \frac{h}{M}\)
and the diameter of the projector crossover, \(d_{p}\), is
\(d_{p} = \frac{2 \beta h}{M}\)
where
\(h\) = distance from the projector crossover to the viewing screen
\(M\) = image magnification at the viewing screen
\(\beta\) = half acceptance angle as defined by the objective aperture
In practical terms, with \(h\) = 50 cm and at \(M\) = 10,000x, \(L\) = 50 μm. A diffraction pattern that encompasses angles to 50 mrad is therefore only 5 μm in diameter at the projector back focal plane and it becomes even smaller at larger microscope magnifications. This means that you can attain good energy resolution in the TEM imaging mode while it accepts practically all the scattered electrons from a particular specimen area. If you require a smaller range of scattering angles, you can select this when you use the objective aperture.
Operating in the TEM imaging mode is convenient because you can easily modify the spectrum intensity via the illumination setting and/or size of the condenser aperture. The selected specimen area is directly visible on the viewing screen and you change it by translating the specimen, shifting the image electronically, changing the microscope magnification and/or the spectrometer entrance aperture. The collection efficiency can be very high in this mode. These characteristics make the TEM imaging mode ideal for the initial examination of any specimen.
Note: Only operate in TEM imaging mode for spectroscopy during the initial examination, as it may give misleading analysis results.
The TEM imaging mode at first sight appears to allow the selection of a small specimen area for microanalysis by the spectrometer while a larger area of the sample is illuminated. This might seem a useful way to get a spectrum from a small particle or interface when it is impossible to get a small enough probe. A user with a LaB6 instrument may believe they can do analysis similar to an owner of a FEG instrument. However, they would be incorrect. The image seen on the TEM screen is formed by the region of the spectrum that has the most electrons (on a thin sample, the zero-loss electrons). Unfortunately, electrons that have lost energy are focused by the TEM objective at a different location due to its chromatic aberration (\(C_{c} \)). This means the image at the observed energy loss in the spectrum will come from a different area of the specimen. Thus this method will always give a false result, as shown below. Electrons that have been scattered by large angles and have also suffered an energy loss will be displaced laterally in the image by a distance \(d\) given by
\(d = q\cdot \Delta f\)
where
\(q\) = scattering angle
\(\Delta f\) = defocus error of the objective lens of the microscope
The defocus error depends on the energy difference between electrons for which the objective lens was focused, and the energy loss electrons of interest. It is equal to
\(\Delta f = C_{c} \cdot \frac{\Delta E}{E_{o}}\)
where
\(C_{c}\) = chromatic aberration coefficient of the objective lens
\(E_{o}\) = primary energy
Focus is normally done while you look at the image formed by the zero-loss electrons. For a 100 kV primary energy electron which lost 1000 eV while being scattered over 5 mrad, the displacement (referred back to the sample) amounts to 100 nm (\(C_{c}\) = 2 mm). Thus, to obtain information from areas smaller than 100 nm, you should use a small probe, as in the STEM mode.
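The two formulas above combine into a one-line estimate. The following sketch (not part of the manual; the function name and SI unit conventions are mine) reproduces the 100 nm figure quoted in the text:

```python
def chromatic_displacement(q_rad, Cc_m, dE_eV, E0_eV):
    """Lateral image displacement d = q * delta_f, where the defocus
    error is delta_f = Cc * dE / E0 (lengths in metres)."""
    delta_f = Cc_m * dE_eV / E0_eV   # defocus error of the objective lens
    return q_rad * delta_f

# Worked example from the text: 100 kV primary energy, 1000 eV loss,
# 5 mrad scattering angle and Cc = 2 mm give d = 100 nm.
d = chromatic_displacement(q_rad=5e-3, Cc_m=2e-3, dE_eV=1000.0, E0_eV=100e3)
print(d * 1e9, "nm")
```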
In addition, quantitative analysis is made nearly impossible in the TEM imaging mode for the same reasons. Electrons of different energies are spread over areas of different size in the image. If the illumination area is too small, or the specimen is not homogeneous, the intensity ratio between low and high energy edges can change dramatically. It is therefore almost always better to collect spectra for quantitative analysis in the diffraction mode unless the specimen is homogeneous over tens of µm and the illumination area size on the viewing screen is much larger than the spectrometer entrance aperture. In this case, the electrons that miss the spectrometer entrance aperture due to chromatic aberration will be replaced by a similar number of electrons that enter the aperture in error also due to chromatic aberration. Operation under these conditions is sometimes useful on contamination‐prone specimens, or when one needs to spread the illumination over a large area to minimize radiation damage.
TEM diffraction mode
With the TEM in Diffraction mode, a diffraction pattern is formed on the viewing screen, while the projector crossover contains a small image of the illuminated specimen area. Because the spectrometer projects this energy dispersed image onto the detector, this mode is also known as image‐coupled. If a small area of the sample is illuminated, its size at the projector crossover, \(d_{p}\), is given by

\(d_{p} = Md_{s}= \frac{h}{L}d_{s}\)

in which \(M = h/L\) is the magnification of the image at the projector crossover,
where
\(d_{s}\) = diameter of the illuminated specimen area
\(M\) = magnification of the image at the projector crossover
\(h\) = distance from the projector crossover to the spectrometer entrance aperture
\(L\) = camera length at the spectrometer entrance aperture
and you therefore determine the diameter, \(D_{p}\), in eV, at the beam trap of the projector crossover by

\(D_{p} = d_{p}K = \frac{h}{L}d_{s}K\)
where
\(K\) = spectrometer energy dispersion (~0.3 eV/μm for the GIF Quantum® system at 200 kV)
For example, a 3 μm diameter illuminated specimen area will give rise to a 3 μm projector crossover size for \(L\) = 70 cm and \(h\) = 70 cm. This permits better than 1 eV energy resolution at 200 kV. The TEM Diffraction mode is therefore useful to acquire spectra from large areas, although you must carefully monitor the size of the illuminated area when you use a small camera length, if the energy resolution is not to worsen considerably.
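The 1 eV claim follows directly from the formula for \(D_{p}\). A quick sanity check (a sketch, not part of the manual; the function name and unit conventions are mine):

```python
def crossover_width_eV(d_s_m, h_m, L_m, K_eV_per_um):
    """Energy width D_p = (h / L) * d_s * K of the projector crossover."""
    d_p_um = (h_m / L_m) * d_s_m * 1e6   # crossover diameter in micrometres
    return d_p_um * K_eV_per_um

# Example from the text: 3 um illuminated area, h = L = 70 cm, K ~ 0.3 eV/um
print(crossover_width_eV(3e-6, 0.70, 0.70, 0.3), "eV")   # roughly 0.9 eV
```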
In the TEM diffraction mode, the spectrometer angular acceptance range is limited by the entrance aperture and can be varied when you change the camera length or select a different aperture size. The diffraction pattern visible on the viewing screen makes it possible to monitor the specimen phase, thickness and diffraction condition. When you do not use a selected area diffraction aperture, the spectrum is collected from the whole illuminated specimen area, and the spatial resolution is therefore determined purely by the probe‐forming performance of the microscope. In this case, unlike in the imaging mode, chromatic effects do not appreciably change the collection efficiency of the spectrometer in the Diffraction mode. This makes quantitative chemical analysis of small sample areas much more reliable.
Diffraction mode is ideal to acquire the high quality spectra necessary to look for low concentration elements, or when you want to perform extended fine structure (EXELFS) analysis. It is also very useful to closely monitor the diffraction condition and/or collection angles, as for instance in channeling experiments.
STEM mode
In a correctly set up STEM mode, a diffraction pattern is displayed on the viewing screen and the projector crossover contains an image of the probe. Electron‐optically, this mode is therefore similar to the TEM Diffraction mode.
In the STEM mode, it is easy to image a large area of the sample even though the illumination is focused into a small probe (via collection of a STEM image), and to position the probe accurately on the feature of interest (when you use the scan coils). This makes the STEM mode ideal to acquire EELS data from precise areas of the sample (e.g., precipitates, interfaces, cell membranes).
When the STEM probe scans over large areas of the specimen, the probe motion will transfer to the spectrum, thus giving a motion of the energy loss peaks. This effect is not apparent at high magnifications but becomes increasingly important as the field of view increases. The use of descan coils is recommended to remove the motion of the probe in the energy loss spectrum. |
Let $X$ denote a monoid. Then we can make $Y = \mathcal{P}(X)$ into a monoid, too. Define $$AB = \{ab \mid a \in A, b \in B\}$$ for all $A,B \in Y.$ We see immediately that $1$ (shorthand for $\{1\}$) is our new identity:
$$A1 = A, \;\; 1B = B.$$
In fact, $Y$ becomes an ordered monoid by defining that $A \leq B$ is notation for $A \subseteq B$. It follows that:
$A \leq B \rightarrow AC \leq BC$ $A \leq B \rightarrow CA \leq CB$
Furthermore, $Y$ is a complete atomistic Boolean algebra, and we have compatibility of composition with joins:
$A \left(\bigvee_{i \in I} B_i\right) = \bigvee_{i \in I} AB_i$
$\left(\bigvee_{i \in I} A_i\right) B = \bigvee_{i \in I} A_i B$
Now most authors would probably stop there. And perhaps that is the right thing to do. But, for the sake of experimentation, let's go a step further. Define another monoid structure on $Y$ by writing
$$A*B = (A^cB^c)^c.$$
Question. Does this new operation "play nice with" the earlier-defined operation in some sense? Indeed, is the $*$ operation in any way useful? Discussion. We see that $*$ is associative, and that $1^c$ is its identity
$$A * 1^c = A, \;\; 1^c * B = B.$$
We also have the following.
$A \leq B \rightarrow A*C \leq B*C$
$A \leq B \rightarrow C*A \leq C*B$
$A * \left(\bigwedge_{i \in I} B_i\right) = \bigwedge_{i \in I} (A*B_i)$
$\left(\bigwedge_{i \in I} A_i\right) * B = \bigwedge_{i \in I} (A_i * B).$
Remark. We can do something similar with the binary relations on a set. Given binary relations $\alpha$ and $\beta$ on a set $S$, define
$$\alpha\beta = \{(x,y) \in S^2 \mid \exists s \in S : (x,s) \in \alpha \wedge (s,y) \in \beta\}$$
$$\alpha * \beta = \{(x,y) \in S^2 \mid \forall s \in S : (x,s) \in \alpha \vee (s,y) \in \beta\}.$$
This makes $\mathcal{P}(S^2)$ into an ordered monoid in two distinct ways. In the first way, the diagonal relation is the identity. In the second, its complement is the identity. All the expected interactions with order-theoretic concepts hold. Actually, we can go further. In particular, the class of all sets can be made into a category whose morphisms are binary relations. However, this can be done in two different ways, corresponding to two different laws of composition. Once again, all the expected interactions with order-theoretic concepts hold. |
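As a quick empirical check of the claims in the Discussion (associativity of $*$ and the identity $1^c$), here is a throwaway Python script over the toy monoid $\mathbb{Z}_4$ under addition mod 4; all names are mine:

```python
from itertools import chain, combinations

# Toy monoid: (Z_4, + mod 4); X = {0,1,2,3} with identity element 0.
X = frozenset(range(4))

def mul(A, B):
    """Lifted product AB = {ab | a in A, b in B}."""
    return frozenset((a + b) % 4 for a in A for b in B)

def comp(A):
    """Complement within X."""
    return X - A

def star(A, B):
    """A*B = (A^c B^c)^c."""
    return comp(mul(comp(A), comp(B)))

# All 16 subsets of Z_4.
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(range(4), r) for r in range(5))]

e = frozenset({0})        # identity for the lifted product
e_star = comp(e)          # {1,2,3}: identity for *

for A in subsets:
    assert mul(A, e) == A == mul(e, A)
    assert star(A, e_star) == A == star(e_star, A)
    for B in subsets:
        for C in subsets:
            assert star(star(A, B), C) == star(A, star(B, C))

print("monoid laws verified for both products on P(Z_4)")
```

Of course this only confirms the general argument: $*$ inherits associativity from the lifted product via De Morgan-style complementation, $(A*B)*C = ((A^cB^c)C^c)^c = (A^c(B^cC^c))^c = A*(B*C)$.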
Let $X$ be an $n$-by-$p$ matrix and consider the closed convex polyhedron
$$\mathcal P_X := \{y \in \mathbb R^n | \|X^Ty\|_\infty \le 1\}.$$ Notice that $\mathcal P_X$ is symmetric about the origin.
Problem 1: Given a point $a \in \mathbb R^n$ with $a \not \in \mathcal P_X$, how to compute the euclidean projection of $a$ on $\mathcal P_X$, i.e to solve the convex optimization problem
$$\text{minimize }\frac{1}{2}\|y - a\|^2\text{ subject to }y \in \mathcal P_X.$$
Let $\mathrm{proj}_{\mathcal P_X}(a)$ denote the unique solution. There is probably a zoo of iterative algorithms (e.g., from the signal-processing literature) for approximately solving such problems, but I'd prefer a solution with an analytical taste. Ultimately, I'd like to
Problem 2: Find for any index $j \in \{1,2,\ldots,p\}$, a good upper-bound for the quantity $$|X^T_j\mathrm{proj}_{\mathcal P_X}(a)|.$$For example, the bound $|X^T_j\mathrm{proj}_{\mathcal P_X}(a)| \le 1$ is immediate, but useless... Ideally, given $j$, I'd like to predict if $|X^T_j\mathrm{proj}_{\mathcal P_X}(a)| < 1$. Some basic observations: A general strategy is to locate $\mathrm{proj}_{\mathcal P_X}(a)$ within a simple and small set $K$, and then bound $$|X^T_j\mathrm{proj}_{\mathcal P_X}(a)| \le \sup_{y \in K}|X^T_jy|.$$
"Simple" means the above supremum is easy to compute, and "small" means this supremum approximates $|X^T_j\mathrm{proj}_{\mathcal P_X}(a)|$ well (i.e the smaller the supremum, the better). For example, if one could find a small sphere $K := \{y | \|y - c\|_2 \le r\}$ containing $\mathrm{proj}_{\mathcal P_X}(a)$, then it would follow that $$|X^T_j\mathrm{proj}_{\mathcal P_X}(a)| \le r\|X^T_j\|_2 + |X^T_jc|.$$
It's not easy to find such spheres giving tight bounds. However, using the variational characterization of projection onto closed convex sets, it's easy to show that as $y$ runs through $\mathcal P_X$, all the spheres with center $\frac{1}{2}(y + a)$ and radius $\frac{1}{2}\|y-a\|_2$ contain $\mathrm{proj}_{\mathcal P_X}(a)$, and in fact, this is the only point common to all these spheres and $\mathcal P_X$. Note that these spheres give an arbitrarily bad approximation if the point $a$ is far from $\mathcal P_X$, as this point must lie on the surface of each such sphere...
Yet another random observation: Suppose $X$ has non-zero rows, and let $D_X := \mathrm{diag}(\|X_1\|_\infty,\ldots,\|X_n\|_\infty)$, and $\mathbb B_1$ be the unit ball w.r.t the $\ell_1$-norm. Then
$$Z_{D_X} := D_X^{-1}\mathbb B_1 \subseteq \mathcal P_X.$$
Note that like $\mathcal P_X$, $D_X^{-1}\mathbb B_1$ is also symmetric about the origin. Also, note that it is straight-forward to project onto $Z_{D_X}$, as this problem is essentially equivalent to projecting onto a simplex, for which there are linear-time exact algorithms, etc.
A strategy could then be, starting with the template $Z_{D_X}$, find an invertible diagonal matrix $D$ such that $Z_D := D^{-1}\mathbb B_1 \subseteq \mathcal P_X$ and $\mathrm{proj}_{\mathcal P_X}$ is sufficiently close to the boundary of $Z_D$. Then use this closest boundary point as an approximation for $\mathrm{proj}_{\mathcal P_X}$.
Some geometric experiments
Refer to the figure above. The following are less important problems which are interesting in their own right.
Question: What are necessary and sufficient conditions on $D$ which ensure that $Z_D \subseteq \mathcal P_X$ ? What can be said about the triangle $APQ$ ?
For example, it is immediate that
$$\max\left(\frac{1}{2}AP, PQ\right) \le QA \le AP + PQ,$$
since $QA \ge PQ$, by the variational characterization whispered above, and $AP \le PQ + QA$, by the triangle inequality. As a particular consequence, the point $Q$ can be made an arbitrarily bad approximation for the projection point $P$ by taking the projected point $A$ sufficiently far from the polytope $\mathcal P_X$.
Question: Which minimal conditions on the matrix $X$ ensure that the distance between the boundaries of $\mathcal P_X$ and $Z_{D_X}$ is small? Note that this would give us control over $PQ$. |
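Lacking a closed form for Problem 1, one hedged numerical fallback (a sketch, not the analytical solution asked for; all names are mine) is Dykstra's alternating-projection scheme: $\mathcal P_X$ is the intersection of the slabs $\{y \mid |X_j^Ty| \le 1\}$ over the columns $X_j$, and each slab admits an exact closed-form projection.

```python
def proj_slab(y, a, lo=-1.0, hi=1.0):
    """Exact Euclidean projection onto the slab {y : lo <= <a, y> <= hi}."""
    t = sum(ai * yi for ai, yi in zip(a, y))
    tc = max(lo, min(hi, t))            # clip <a, y> into [lo, hi]
    if tc == t:
        return list(y)                  # already feasible
    scale = (t - tc) / sum(ai * ai for ai in a)
    return [yi - scale * ai for ai, yi in zip(a, y)]

def project_polytope(a_point, cols, n_sweeps=500):
    """Dykstra's alternating projections onto the intersection of the
    slabs {y : |<x_j, y>| <= 1} (x_j = columns of X); unlike plain cyclic
    projection, Dykstra's correction terms make the iterates converge to
    the Euclidean projection of a_point onto P_X."""
    y = list(a_point)
    incr = [[0.0] * len(a_point) for _ in cols]   # Dykstra corrections
    for _ in range(n_sweeps):
        for j, col in enumerate(cols):
            z = [yi + pi for yi, pi in zip(y, incr[j])]
            y = proj_slab(z, col)
            incr[j] = [zi - yi for zi, yi in zip(z, y)]
    return y

# Sanity check: for X = I_2 the polytope is the square [-1, 1]^2,
# so the projection of a = (2.0, 0.5) must be (1.0, 0.5).
print(project_polytope([2.0, 0.5], [[1.0, 0.0], [0.0, 1.0]]))
```

This gives a numerical baseline against which any analytical bound for Problem 2 can be tested on small instances.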
You can directly imply a probability distribution from a volatility skew. Note that, for any terminal probability distribution $p(S)$ at tenor $T$, we have the model-free formula for the call price $C(K)$ as a function of strike $K$: \begin{equation}C=e^{-rT} \int_0^\infty (S-K)^+ p(S) dS\end{equation} Therefore we can write \begin{equation}e^{rT} \...
I don't know why it was removed, but the R package "orderbook" was available: http://journal.r-project.org/archive/2011-1/RJournal_2011-1_Kane~et~al.pdf and http://cran.r-project.org/web/packages/orderbook/index.html. In the IBrokers package, the function "reqMktDepth" is used for streaming order book data: http://cran.r-project.org/web/packages/IBrokers/...
And then music... Victor Niederhoffer, in a 2001 interview: The market plays music all the time. The problem is you never know how the music of the market is going to end. But a good framework is that it will end on the tonic. Consonance to dissonance back to consonance. And whenever there's tremendous dissonance, strident moves in one direction, a good ...
Although quite simple connected scatterplots can give interesting new insights on how time series perform together:http://steveharoz.com/research/connected_scatterplot/As an example: Gold vs. S&P 500 from 1970 till today:The green point marks 1970, the red point is today. Every point is a year, moving vertically upwards means rise in the S&P ...
So one such visualization package is demonstrated in http://www.tradeworx.com/movie/booklet_demo/temp/booklet_demo2.mov. AFAICT it looks like a tk script.Trading Technologies (TT) sells another visualization tool. But TBH writing your own tool takes a few hours and allows you to focus on what information you are interested in finding.
To me, coloring by data value is a great way to bring applications alive.If traditional ways are not enough, probably taking 3D in use would be a way:And of course 2D heatmap is a very handy for sure.I'm developing data visualization software components with 3D technologies, so definitely all feedback and ideas are welcome :-)
I don't trust either.That a stock didn't trade carries information about its liquidity and about the magnitude of innovations in its fundamental value. If it is feasible within your model, try to incorporate the framework of Rosett (1959, “A Statistical Model of Friction in Economics”, Econometrica). For a recent application of the friction model to ...
Try to give David Spiegelhalter's work and research a read/listen. He is a statistician and a Professor of the Public Understanding of Risk at Cambridge, England. Rather than new ways of calculating risk, he looks at ways of communicating risk to a general public that doesn't have any knowledge of stats. I linked an interesting video-...
Firstly, it may depend vastly on your choice of platforms (e.g. R, Python, or Java). Some of the most common ones: Python (out of the box: Orange; self-customized: Scikit-learn and PyBrain), Java (out of the box: RapidMiner and KNIME; self-customized: Weka), R (machine learning in R). Secondly, it vastly depends on your purpose while choosing whether to ...
Accordingly to this comparison (look for post written by Martin) Rapidminer is more powerful in terms of implemented mining algorithms and scales better for large datasets.Being originally a WEKA user my impression is that Rapidminer is also easier to use than WEKA.
I spent some time (a month or so) using RapidMiner at the start of the year; then I added the R plugin, thinking R was just a library of stats functions. Then I learned more R, discovered it also comes with loads of machine learning functions, and realized R is a superset of everything RapidMiner was giving me.Playing with RapidMiner drag and drop was fun, ...
One option to do it is a heatmap. Not sure which software are you using, but in matlab it is extremely simple to do and powerful to tweak.Below an example. Let's assume there are 30 periods $t$ to $t+30$ and 21 ratings.Then you could run:rating = {'Aaa'; 'Aa1';'Aa2';'Aa3';'A1';'A2';'A3';'Baa1';'Baa2';'Baa3';'Ba1';'Ba2';'Ba3';'B1';'B2';'B3';'Caa1';'...
The real contenders for a desktop based tool are RapidMiner and R. If you like Windows or Mac, you will like RapidMiner. If you like command line or Linux, you will like R.I would say RapidMiner has a flatter learning curve. The previous lecturer in the course I teach used R and the students (MBAs) complained about the learning curve. They did not in my ...
Which is/are the most extensible? RapidMiner and R. Besides, RapidMiner offers extensions for seamlessly integrating R and Weka, hence can combine the power and extensibility of all three platforms within RapidMiner. And you can download RapidMiner and its extensions for R and Weka for free. Which is the most efficient in terms of a minimal learning curve...
I've been using HighStock, which produces very slick interactive charts with a relatively small amount of JavaScript. Among other things, it can produce candlestick/OHLC charts with volume bars, take a look at the examples page.
Following on from alpha's answer, you might be able to use some of the ideas and tools described on this blog to link R, and maybe also MATLAB/Octave, to the Metatrader platform to use the charting capabilities of Metatrader. Linked from this blog is this page where there is a dll tool available, with downloadable open code, to call R directly from ...
if you are dealing with FX data only; i have found MetaTrader to be the best.my automated trading system is built in Java; and I output data files to MT4 folder; that get picked up automatically by a custom indicator that I have built; which simply reads the data file and plot it on the currency pair that I am viewing.MT4 charts are extremely fast; ...
By stock chart application you mean you are making a charting tool for traders? Typically there is a choice to plot trade, bid or ask, and almost all the time they will want to look at trade prices. If a stock hasn't been trading then the flatline (or gap) on the chart communicates that.
I would recommend performing visualization intensive tasks and UIs on a separate front-end, given R and Matlab are not optimized to efficiently render charts and other visualizations.If you are able to run WPF/Silverlight apps on your machine I can highly recommend SciChart (http://www.scichart.com/). It fulfills all your stated requirements. The library ...
I came across B/View which is a Java application that visualizes the order book for a single stock on a single day. It encompasses some of the basic features I would expect in such a tool. It appears to be more a demonstration than a general purpose tool.
Great question, I love to visualize data! A visualization is really the most efficient way to display a large amount of information to be processed by the human brain IMO. Depending on what exactly you are trying to plot and visualize, I would suggest trying the javascript API for WebGL called Three.js. Examples of Three.js are here: http://threejs.org/...
This stuff is not exactly my area of expertise, but since you're offering the bounty, I'll start things out and we'll see if the community can get us further along.I believe the essence of your question is actually to find the implied distribution of returns given the B-S volatilities. Once you have an implied distribution, comparing it to a normal ...
Take a look at http://www.modulusfe.com/stockchartsl/They have a nice demo (requires Microsoft Silverlight plugin). You can zoom and scroll, add lines and technical indicators, save image etc.Also see this SO question.
This is an example of minimum price variation (also known as the minimum price increment or the minimum price fluctuation).All public quotes for US equities are displayed to the nearest penny. (Hidden quotes may be entered at sub-penny increments.) US stock indices follow this convention and thus quote to the nearest penny.The oil listing is odd indeed. ...
There is a new Order Book visualization tool, called BookMap:http://www.youtube.com/watch?v=1c6HegAn-CAIt allows to trade and simulate trading in real-time or replay mode.The replay mode is free to use.BookMap is the only tool, that visualizes the history (evolution) of the order book. (the first version will be soon in production)
Let me give you the perfect solution. Use Python. The charting, graphing and analysis can be done using the PyLab environment. You can integrate the code into R using the package called rPython. You can integrate it with C and many other languages. Python also comes with many more features. So instead of looking for a particular library, use Python.
BookMap seems cool, indeed. Jigsaw trading has something good, similar, less expensive: http://www.jigsawtrading.com/order-flow-software/ The owner is a trader. This tool is used by profitable traders: http://www.nobsdaytrading.com/free-info/for-inexperienced-traders/ DB Vaello from OrderFlow Analytics offers another great tool: http://www.orderflowanalytics....
In Delta of binary option, I do not see how to prove that the limit of $\partial C_t/\partial S_t$ is equal to $+\infty$ as $t \rightarrow T$. Can someone help ?
The value of a bond binary call in the Black-Scholes model is given by
\begin{equation} B_t = e^{-r (T - t)} \mathcal{N} \left( d_- \right), \end{equation}
where
\begin{equation} d_- = \frac{1}{\sigma \sqrt{T - t}} \left( \ln \left( \frac{S_t}{K} \right) + \left( r - \frac{1}{2} \sigma^2 \right) (T - t) \right). \nonumber \end{equation}
The delta is
\begin{equation} \frac{\partial B_t}{\partial S_t} = e^{-r (T - t)} \mathcal{N}' \left( d_- \right) \frac{1}{S_t \sigma \sqrt{T - t}}. \end{equation}
We now want to take the limit as $t \rightarrow T$. First note that
\begin{equation} \lim_{t \rightarrow T} d_- = \begin{cases} -\infty & \text{if } S_t < K\\ 0 & \text{if } S_t = K\\ +\infty & \text{if } S_t > K \end{cases}. \end{equation}
Thus
\begin{equation} \lim_{t \rightarrow T} \mathcal{N}' \left( d_- \right) = \begin{cases} 0 & \text{if } S_t \neq K\\ 1 / \sqrt{2 \pi} & \text{if } S_t = K \end{cases} \end{equation}
and
\begin{equation} \lim_{t \rightarrow T} \frac{\partial B_t}{\partial S_t} = \begin{cases} 0 & \text{if } S_t \neq K\\ +\infty & \text{if } S_t = K \end{cases}. \end{equation}
In the last step we used that the exponential in $\mathcal{N}' \left( d_- \right)$ approaches zero faster than the $1 / \sqrt{T - t}$ approaches plus infinity in the limit when $S_t \neq K$.
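A quick numerical illustration of this limit, using the closed-form delta derived above (a throwaway sketch; the parameter values are arbitrary):

```python
from math import exp, log, pi, sqrt

def binary_call_delta(S, K, r, sigma, tau):
    """Delta of the cash-or-nothing call B = e^{-r tau} N(d_-) in
    Black-Scholes, i.e. e^{-r tau} N'(d_-) / (S sigma sqrt(tau))."""
    d_minus = (log(S / K) + (r - 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    n_prime = exp(-0.5 * d_minus ** 2) / sqrt(2.0 * pi)   # N'(d_-)
    return exp(-r * tau) * n_prime / (S * sigma * sqrt(tau))

# At the strike the delta grows like 1/sqrt(tau) as expiry approaches ...
for tau in (1e-1, 1e-2, 1e-3, 1e-4):
    print(tau, binary_call_delta(100.0, 100.0, 0.01, 0.2, tau))

# ... while away from the strike it collapses to zero:
print(binary_call_delta(90.0, 100.0, 0.01, 0.2, 1e-4))
```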
Alternatively to LocalVolatility's already very nice answer, here's an approach to see that this result does not only hold under the Black-Scholes dynamics.
The $t$-value of a binary call expiring at $T$ can be written as $$ C_t = \Bbb{E}_t^\Bbb{Q} \left[ e^{-r(T-t)} {\bf{1}}\{S_T \geq K \} \right] $$
Its "delta" is defined as $$\Delta = \frac{\partial C_t}{\partial S_t}$$ Under some light conditions (discussed in e.g. Monte Carlo Methods in Financial Engineering, Glasserman, 2004), you can permute the expectation and differential operators to write: \begin{align}\Delta_t &= \frac{\partial}{\partial S_t} \Bbb{E}_t^\Bbb{Q} \left[ e^{-r(T-t)} {\bf{1}}\{S_T \geq K \} \right] \\&= \Bbb{E}_t^\Bbb{Q} \left[ e^{-r(T-t)} \frac{\partial}{\partial S_t}{\bf{1}}\{S_T \geq K \} \right] \\&= \Bbb{E}_t^\Bbb{Q} \left[ e^{-r(T-t)} \delta(S_T-K) \frac{\partial S_T}{\partial S_t} \right] \tag{1}\end{align} where we've used the chain rule ($S_T$ functionally depends on $S_t$) and the fact that the derivative of the Heaviside function ${\bf{1}}(x \geq a)$ is a Dirac impulse at $a$, i.e. $\delta(x-a)$.
It should then be clear that: $$ \lim_{t \to T} \Delta_t = \delta(S_t-K) $$ hence the result.
You're short a digital call struck at 100. Your payoff : -\$1 above 100, \$0 below.
1 second before expiry, spot is 99.9999. If it stays there you owe nothing; if it goes a touch higher you owe \$1 to the option's buyer.
You need to replicate this payoff via delta hedging : how much of the underlying do you need to hold to generate a \$1 gain, and offset your \$1 loss, when the spot moves from 99.9999 to 100.0?
Answer : a lot of it. |
I am trying to rewrite a Schrödinger equation using dimensionless quantities but here, the potential is perturbed by $\lambda x^4$:
\begin{equation}V(x) = \frac{m\omega^2}{2}x^2 + \lambda x^4\end{equation}
$\lambda$ and $\omega$ are parameters of the system.
The Schrödinger equation then reads: \begin{equation}-\frac{\hbar^2}{2m}\psi''(x) + \frac{m\omega^2}{2}x^2 \psi(x) + \lambda x^4 \psi(x) = E \psi(x)\end{equation}
If you try to do the same with the unperturbed version, what one would do is, to find a length scale by combining $\hbar$, $\omega$ and $m$ such that their units cancel to length $\big($ this would be $a = \sqrt{\frac{\hbar}{m\omega}}\big)$.
How can I do the same here?
$[\hbar] = \frac{kg\, m^2}{s}, [m] = kg , [\omega] = \frac{1}{s}, [\lambda] = \frac{kg}{s^2m^2}$ $\big($Units for $\lambda$ are fixed by requiring $\lambda x^4$ to have units of energy, $\frac{kg\, m^2}{s^2}$$\big)$.
Now trying to get a length scale: \begin{equation}M^{\alpha}L^{2\alpha}T^{-\alpha} \,\,\, M^{\beta}\,\,\, T^{-\gamma} \,\,\, M^{\delta}T^{-2\delta}L^{-2\delta} = L\end{equation}
This leads to: \begin{equation}\alpha + \beta + \delta = 0\end{equation} \begin{equation}2\alpha - 2\delta = 1\end{equation} \begin{equation}-\alpha - \gamma - 2\delta = 0\end{equation}
These are 3 equations with 4 unknowns, meaning one of the $\alpha,\beta,\gamma,\delta$ is freely choosable. Does it make sense that there are different length scales, and what difference does my choice make?
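For what it's worth, here is the sketch for one natural choice: setting $\delta = 0$ (i.e. not using $\lambda$ in the length scale at all) recovers the usual oscillator length,

\begin{equation}\delta = 0 \;\Rightarrow\; \alpha = \tfrac{1}{2},\;\; \beta = -\tfrac{1}{2},\;\; \gamma = -\tfrac{1}{2} \;\Rightarrow\; \hbar^{1/2}\, m^{-1/2}\, \omega^{-1/2} = \sqrt{\frac{\hbar}{m\omega}} = a.\end{equation}

Any other choice of $\delta$ multiplies this $a$ by powers of a dimensionless combination of $\hbar$, $m$, $\omega$ and $\lambda$, so the possible length scales differ only by dimensionless factors; the remaining freedom merely decides in which term of the rescaled equation the dimensionless coupling ends up.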
In Exercises \((2.3E.1)\) to \((2.3E.6)\), find a particular solution by the method used in Example \((2.3.2)\). Then find the general solution and, where indicated, solve the initial value problem and graph the solution.
Exercise \(\PageIndex{1}\)
\(y''+5y'-6y=22+18x-18x^2\)
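A sketch of the intended computation (mine, not the book's printed answer): substituting \(y_p=A+Bx+Cx^2\) into \(y''+5y'-6y=22+18x-18x^2\) and matching powers of \(x\) gives

\begin{eqnarray*} -6C=-18,\quad 10C-6B=18,\quad 2C+5B-6A=22, \end{eqnarray*}

so \(C=3\), \(B=2\), \(A=-1\) and \(y_p=-1+2x+3x^2\). Since the characteristic polynomial \(r^2+5r-6=(r-1)(r+6)\) has roots \(1\) and \(-6\), the general solution is \(y=-1+2x+3x^2+c_1e^{x}+c_2e^{-6x}\).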
Exercise \(\PageIndex{2}\)
\(y''-4y'+5y=1+5x\)
Exercise \(\PageIndex{3}\)
\(y''+8y'+7y=-8-x+24x^2+7x^3\)
Exercise \(\PageIndex{4}\)
\(y''-4y'+4y=2+8x-4x^2\)
Exercise \(\PageIndex{5}\)
\(y''+2y'+10y=4+26x+6x^2+10x^3, \quad y(0)=2, \quad y'(0)=9\)
Exercise \(\PageIndex{6}\)
\(y''+6y'+10y=22+20x, \quad y(0)=2,\; y'(0)=-2\)
Exercise \(\PageIndex{7}\)
Show that the method used in Example \((2.3.2)\) won't yield a particular solution of
\begin{equation}\label{eq:2.3E.1}
y''+y'=1+2x+x^2; \end{equation}
that is, \eqref{eq:2.3E.1} doesn't have a particular solution of the form \(y_p=A+Bx+Cx^2\), where \(A\), \(B\), and \(C\) are constants.
In Exercises \((2.3E.8)\) to \((2.3E.13)\), find a particular solution by the method used in Example \((2.3.3)\).
Exercise \(\PageIndex{8}\)
\(x^2y''+7xy'+8y=\displaystyle{6\over x}\)
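A sketch of the intended computation (mine, not the book's printed answer): trying \(y_p=A/x\) as in Example \((2.3.3)\) gives \(y_p'=-A/x^2\) and \(y_p''=2A/x^3\), so

\begin{eqnarray*} x^2y_p''+7xy_p'+8y_p=(2-7+8)\frac{A}{x}=\frac{3A}{x}=\frac{6}{x}, \end{eqnarray*}

hence \(A=2\) and \(y_p=\dfrac{2}{x}\).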
Exercise \(\PageIndex{9}\)
\(x^2y''-7xy'+7y=13x^{1/2}\)
Exercise \(\PageIndex{10}\)
\(x^2y''-xy'+y=2x^3\)
Exercise \(\PageIndex{11}\)
\(x^2y''+5xy'+4y=\displaystyle{1\over x^3}\)
Exercise \(\PageIndex{12}\)
\(x^2y''+xy'+y=10x^{1/3}\)
Exercise \(\PageIndex{13}\)
\(x^2y''-3xy'+13y=2x^4\)
Exercise \(\PageIndex{14}\)
Show that the method suggested for finding a particular solution in Exercises \((2.3E.8)\) to \((2.3E.13)\) won't yield a particular solution of
\begin{equation}\label{eq:2.3E.2}
x^2y''+3xy'-3y={1\over x^3}; \end{equation}
that is, \eqref{eq:2.3E.2} doesn't have a particular solution of the form \(y_p=A/x^3\).
Exercise \(\PageIndex{15}\)
Prove: If \(a\), \(b\), \(c\), \(\alpha\), and \(M\) are constants and \(M\ne0\) then
\begin{eqnarray*}
ax^2y''+bxy'+cy=M x^\alpha \end{eqnarray*}
has a particular solution \(y_p=Ax^\alpha\) (\(A=\) constant) if and only if \(a\alpha(\alpha-1)+b\alpha+c\ne0\).
If \(a\), \(b\), \(c\), and \(\alpha\) are constants, then
\begin{eqnarray*}
a(e^{\alpha x})''+b(e^{\alpha x})'+ce^{\alpha x}=(a\alpha^2+b\alpha+c)e^{\alpha x}. \end{eqnarray*}
Use this in Exercises \((2.3E.16)\) to \((2.3E.21)\) to find a particular solution. Then find the general solution and, where indicated, solve the initial value problem and graph the solution.
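For instance, for Exercise \((2.3E.16)\) the identity above reduces the problem to a single linear equation for the coefficient $A$. A sympy sketch of that check (my own, not part of the text):

```python
from sympy import symbols, exp, diff, solve

x, A = symbols('x A')
yp = A*exp(3*x)
# y'' + 5y' - 6y = 6 e^{3x}  with the ansatz  y_p = A e^{3x}
residual = diff(yp, x, 2) + 5*diff(yp, x) - 6*yp - 6*exp(3*x)
# (alpha^2 + 5*alpha - 6) A = 18 A  must equal 6
print(solve(residual, A))  # [1/3]
```

This works precisely because $\alpha=3$ is not a root of the characteristic polynomial, as Exercise \((2.3E.23)\) asks you to prove in general.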
Exercise \(\PageIndex{16}\)
\(y''+5y'-6y=6e^{3x}\)
Exercise \(\PageIndex{17}\)
\(y''-4y'+5y=e^{2x}\)
Exercise \(\PageIndex{18}\)
\(y''+8y'+7y=10e^{-2x}, \quad y(0)=-2,\; y'(0)=10\)
Exercise \(\PageIndex{19}\)
\(y''-4y'+4y=e^{x}, \quad y(0)=2,\quad y'(0)=0\)
Exercise \(\PageIndex{20}\)
\(y''+2y'+10y=e^{x/2}\)
Exercise \(\PageIndex{21}\)
\(y''+6y'+10y=e^{-3x}\)
Exercise \(\PageIndex{22}\)
Show that the method suggested for finding a particular solution in Exercises \((2.3E.16)\) to \((2.3E.21)\) won't yield a particular solution of
\begin{equation}\label{eq:2.3E.3}
y''-7y'+12y=5e^{4x}; \end{equation}
that is, \eqref{eq:2.3E.3} doesn't have a particular solution of the form \(y_p=Ae^{4x}\).
Exercise \(\PageIndex{23}\)
Prove: If \(\alpha\) and \(M\) are constants and \(M\ne0\) then the constant coefficient equation
\begin{eqnarray*}
ay''+by'+cy=M e^{\alpha x} \end{eqnarray*}
has a particular solution \(y_p=Ae^{\alpha x}\) (\(A=\) constant) if and only if \(e^{\alpha x}\) isn't a solution of the complementary equation.
If \(\omega\) is a constant, differentiating a linear combination of \(\cos\omega x\) and \(\sin\omega x\) with respect to \(x\) yields another linear combination of \(\cos\omega x\) and \(\sin\omega x\). In Exercises \((2.3E.24)\) to \((2.3E.29)\) use this to find a particular solution of the equation. Then find the general solution and, where indicated, solve the initial value problem and graph the solution.
Exercise \(\PageIndex{24}\)
\(y''-8y'+16y=23\cos x-7\sin x\)
Exercise \(\PageIndex{25}\)
\(y''+y'=-8\cos2x+6\sin2x\)
Exercise \(\PageIndex{26}\)
\(y''-2y'+3y=-6\cos3x+6\sin3x\)
Exercise \(\PageIndex{27}\)
\(y''+6y'+13y=18\cos x+6\sin x \)
Exercise \(\PageIndex{28}\)
\(y''+7y'+12y=-2\cos2x+36\sin2x, \quad y(0)=-3,\quad y'(0)=3\)
Exercise \(\PageIndex{29}\)
\(y''-6y'+9y=18\cos3x+18\sin3x, \quad y(0)=2,\quad y'(0)=2\)
Exercise \(\PageIndex{30}\)
Find the general solution of
\begin{eqnarray*}
y''+\omega_0^2y =M\cos\omega x+N\sin\omega x, \end{eqnarray*}
where \(M\) and \(N\) are constants and \(\omega\) and \(\omega_0\) are distinct positive numbers.
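A sketch of the computation (assuming \(\omega\ne\omega_0\), so the trial function is not a solution of the complementary equation): substituting \(y_p=A\cos\omega x+B\sin\omega x\) gives \(y_p''+\omega_0^2y_p=(\omega_0^2-\omega^2)(A\cos\omega x+B\sin\omega x)\), so

```latex
y_p = \frac{M\cos\omega x + N\sin\omega x}{\omega_0^2-\omega^2},
\qquad
y = c_1\cos\omega_0 x + c_2\sin\omega_0 x
  + \frac{M\cos\omega x + N\sin\omega x}{\omega_0^2-\omega^2}.
```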
Exercise \(\PageIndex{31}\)
Show that the method suggested for finding a particular solution in Exercises \((2.3E.24)\) to \((2.3E.29)\) won't yield a particular solution of
\begin{equation}\label{eq:2.3E.4}
y''+y=\cos x+\sin x; \end{equation}
that is, \eqref{eq:2.3E.4} does not have a particular solution of the form \(y_p=A\cos x+B\sin x\).
Exercise \(\PageIndex{32}\)
Prove: If \(M\), \(N\) are constants (not both zero) and \(\omega>0\), the constant coefficient equation
\begin{equation}\label{eq:2.3E.5}
ay''+by'+cy=M\cos\omega x+N\sin\omega x \end{equation}
has a particular solution that's a linear combination of \(\cos\omega x\) and \(\sin\omega x\) if and only if the left side of \eqref{eq:2.3E.5} is not of the form \(a(y''+\omega^2y)\), so that \(\cos\omega x\) and \(\sin\omega x\) are solutions of the complementary equation.
In Exercises \((2.3E.33)\) to \((2.3E.38)\), refer to the cited exercises and use the principle of superposition to find a particular solution. Then find the general solution.
Exercise \(\PageIndex{39}\)
Prove: If \(y_{p_1}\) is a particular solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_1(x) \end{eqnarray*}
on \((a,b)\) and \(y_{p_2}\) is a particular solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_2(x) \end{eqnarray*}
on \((a,b)\), then \(y_p=y_{p_1}+y_{p_2}\) is a solution of
\begin{eqnarray*}
P_0(x)y''+P_1(x)y'+P_2(x)y=F_1(x)+F_2(x) \end{eqnarray*}
on \((a,b)\).
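The proof is a direct application of linearity; writing \(L[y]=P_0(x)y''+P_1(x)y'+P_2(x)y\), a sketch is

```latex
L[y_{p_1}+y_{p_2}] = L[y_{p_1}] + L[y_{p_2}] = F_1(x) + F_2(x)
\quad\text{on } (a,b),
```

since differentiation and multiplication by the coefficient functions are linear operations.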
Exercise \(\PageIndex{40}\)
Suppose \(p\), \(q\), and \(f\) are continuous on \((a,b)\). Let \(y_1\), \(y_2\), and \(y_p\) be twice differentiable on \((a,b)\), such that \(y=c_1y_1+c_2y_2+y_p\) is a solution of
\begin{eqnarray*}
y''+p(x)y'+q(x)y=f \end{eqnarray*}
on \((a,b)\) for every choice of the constants \(c_1,c_2\). Show that \(y_1\) and \(y_2\) are solutions of the complementary equation on \((a,b)\).
Basic Theorems Regarding Free Groups on a Set
On the The Free Group on a Set X, F(X) page we defined the free group $F(X)$ generated by the set $X$. We will now look at some basic results regarding free groups.
Proposition 1: The free group generated by $1$ element, $F_1$, is isomorphic to $(\mathbb{Z}, +)$. Proof: Let $X = \{ x \}$, so that $F_1 = F(X)$. Then every word on $X$ is of the form $x^n$, where $x^0$ is defined to be the identity word. Let $\varphi : \mathbb{Z} \to F_1$ be defined for all $n \in \mathbb{Z}$ by $\varphi(n) = x^n$. Clearly $\varphi$ is a bijection. It is also a homomorphism since for all $m, n \in \mathbb{Z}$ we have that:
\begin{align} \quad \varphi(m + n) = x^{m+n} = x^mx^n = \varphi(m)\varphi(n) \end{align}
where we reduce $x^{m}x^{n}$ if $m$ and $n$ have different signs. $\blacksquare$
Proposition 2: Let $X$ be a nonempty set. Then $F(X)$ is abelian if and only if $|X| = 1$. Proof: $\Rightarrow$ Suppose that $|X| \neq 1$. We will show that $F(X)$ is nonabelian. Since $|X| \geq 2$, let $x_1, x_2 \in X$ with $x_1 \neq x_2$. Then $x_1x_2x_1^{-1}x_2^{-1}$ is a reduced word on $X$ that is not the identity word (which we denote by $1$), i.e., $x_1x_2x_1^{-1}x_2^{-1} \neq 1$. Multiplying both sides on the right by $x_2$ gives $x_1x_2x_1^{-1} \neq x_2$, and multiplying again on the right by $x_1$ gives $x_1x_2 \neq x_2x_1$. So there exist $x_1, x_2 \in X$ with $x_1x_2 \neq x_2x_1$, and hence $F(X)$ is nonabelian. $\Leftarrow$ Suppose that $|X| = 1$. Then $F(X)$ is isomorphic to $F_1$. By Proposition 1, $F_1$ is isomorphic to $(\mathbb{Z}, +)$, which is an abelian group. So $F(X) = F_1$ is abelian. $\blacksquare$
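The cancellation argument in the proof can be mirrored in code. Below is a sketch (the representation of words as lists of (generator, exponent) pairs is my own convention) of free reduction by a single left-to-right stack pass; the commutator $x_1x_2x_1^{-1}x_2^{-1}$ is already reduced and nonempty, so it cannot equal the identity word:

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs in a word; word is a list of
    (generator, exponent) pairs with exponent +1 or -1."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()              # g^e cancels a preceding g^{-e}
        else:
            out.append((g, e))
    return out

commutator = [('x1', 1), ('x2', 1), ('x1', -1), ('x2', -1)]
print(free_reduce(commutator))               # unchanged: a nonempty reduced word
print(free_reduce([('x1', 1), ('x1', -1)]))  # [] -- the identity word
```

The stack pass fully reduces any word, since each cancellation can only expose a new adjacent inverse pair at the top of the stack.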
Proposition 3 (Nielsen–Schreier): Let $X$ be a set. If $H$ is a subgroup of $F(X)$ then there exists a set $Y$ such that $H$ is isomorphic to $F(Y)$. That is, every subgroup of a free group is a free group. |