One quick way to think about it is that
you're transposing the entire problem. Imagine placing the primal coefficients in a matrix, with the objective at the bottom. This gives $P = \begin{bmatrix} 1 & -6 & 2 \\ 5 & 7 & -4 \\ 8 & 3 & \end{bmatrix}$. Transposing yields $P^T = D = \begin{bmatrix} 1 & 5 & 8 \\ -6 & 7 & 3 \\ 2 & -4 & \end{bmatrix}.$ Thus the dual (coefficients and variables only) looks like $$
\begin{align}
\text{max or min } &2w_1 - 4w_2 \\
\text{subject to } &w_1 + 5w_2 \text{ ? } 8\\
&-6w_1 + 7w_2 \text{ ? } 3.
\end{align}
$$Now we have to figure out whether we are maximizing or minimizing, what should go where the ?'s are, and whether the dual variables are nonnegative, nonpositive, or unrestricted. The table tells us all of this.
Since the primal problem is maximizing, the dual is minimizing. Everything else can be read by pairing rows with columns in the transposition: Row $x$ in $P$ goes with Column $x$ in $D$, and Column $y$ in $P$ goes with Row $y$ in $D$. Then the table says...
Row 1 in $P$ is a $\geq$ constraint in a maximization problem $\Rightarrow$ The variable associated with Column 1 in $D$, $w_1$, has $w_1 \leq 0$. Row 2 in $P$ is a $=$ constraint $\Rightarrow$ $w_2$ is unrestricted. Column 1 in $P$ is for $x_1$, and $x_1 \leq 0$ $\Rightarrow$ The constraint associated with Row 1 in $D$ is a $\leq$ constraint. Column 2 in $P$, with $x_2 \geq 0$ $\Rightarrow$ The second constraint in the dual is a $\geq$ constraint.
Thus we get the complete form of the dual$$
\begin{align}
\min &2w_1 - 4w_2 \\
\text{subject to } &w_1 + 5w_2 \leq 8\\
&-6w_1 + 7w_2 \geq 3 \\
&w_1 \leq 0 \\
&w_2 \text{ unrestricted.}
\end{align}
$$
For remembering how to do this, I prefer something called the "SOB" method instead of memorizing the table. "SOB" here stands for "sensible," "odd," or "bizarre." The SOB table is the following:
| | Variables | Constraints (Maximizing) | Constraints (Minimizing) |
|---|---|---|---|
| Sensible | ≥ 0 | ≤ | ≥ |
| Odd | Unrestricted | = | = |
| Bizarre | ≤ 0 | ≥ | ≤ |
Hopefully it makes sense why these are "sensible," "odd," and "bizarre," at least relatively speaking.
The idea then is that
sensible maps to sensible, odd maps to odd, and bizarre maps to bizarre when you're switching from the primal to the dual. Let's take the example problem. It's "bizarre" to have a $\geq$ constraint in a maximization problem, so the variable in the dual associated with the first constraint, $w_1$, must have the "bizarre" nonpositivity restriction $w_1 \leq 0$. It's "odd" to have an equality constraint, and so the variable in the dual associated with the second constraint, $w_2$, must have the "odd" unrestricted property. Then, it's "bizarre" to have a variable $x_1$ with a nonpositivity restriction, so the constraint in the dual associated with $x_1$ must have the "bizarre" $\leq$ constraint for a minimization problem. Finally, it's "sensible" to have a variable $x_2$ with a nonnegativity restriction, so the constraint in the dual associated with $x_2$ must have the "sensible" $\geq$ constraint for a minimization problem.
I've found that after a few practice examples my students generally internalize the SOB method well enough that they can construct duals without needing to memorize anything. |
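If it helps to see the mapping mechanically, here is a small Python sketch of the SOB lookup (the table encoding and function names are my own, not standard):

```python
# SOB lookup: classification -> (variable sign restriction,
#                                constraint type in a max problem,
#                                constraint type in a min problem)
SOB = {
    "sensible": (">= 0", "<=", ">="),
    "odd":      ("unrestricted", "=", "="),
    "bizarre":  ("<= 0", ">=", "<="),
}

def classify_constraint(relation, objective):
    """Classify a primal constraint ('<=', '=', or '>=') as S/O/B,
    given whether the primal objective is 'max' or 'min'."""
    col = 1 if objective == "max" else 2
    return next(name for name, row in SOB.items() if row[col] == relation)

def dual_variable_sign(relation, objective):
    """Sign restriction of the dual variable paired with a primal
    constraint: sensible maps to sensible, odd to odd, bizarre to bizarre."""
    return SOB[classify_constraint(relation, objective)][0]

# The example problem: a max problem with a '>=' constraint (bizarre)
# and an '=' constraint (odd)
print(dual_variable_sign(">=", "max"))  # '<= 0'
print(dual_variable_sign("=",  "max"))  # 'unrestricted'
```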
[SOLVED] Baker, Campbell, Hausdorff and all that
I'm posting this here because, although it is a mathematics problem, it is related to perturbation theory and is the kind of problem physicists might be more skilled at answering.
Does anyone know an elegant proof of
[tex]e^{A+B} = \int_0^1 d\alpha_1 \,\delta(1-\alpha_1) e^{\alpha_1 A} + \int_0^1 d\alpha_1 d\alpha_2 \,\delta(1-\alpha_1 - \alpha_2) e^{\alpha_1 A} B e^{\alpha_2 A} + \frac{1}{2!}\int_0^1 d\alpha_1 d\alpha_2 d\alpha_3 \,\delta(1-\alpha_1 - \alpha_2 - \alpha_3) e^{\alpha_1 A} B e^{\alpha_2 A} B e^{\alpha_3 A} + \dots[/tex]
where of course [tex]A[/tex] and [tex]B[/tex] are matrices? I can prove it starting from the easy to prove identity
[tex]\frac{d}{ds} e^{A + s B} = \left( \int_0^1\!dt\, e^{t(A + s B)} B e^{-t(A + s B)} \right) e^{A + s B}[/tex]
but the proof gets a bit messy. I was hoping maybe someone recognizes the formula or knows a good reference. |
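Not a proof, but as a numerical sanity check one can verify the first two terms of the expansion for small [tex]B[/tex] (a sketch with arbitrary matrix sizes and tolerances; the first-order term [tex]\int_0^1 e^{\alpha A} B e^{(1-\alpha)A}\,d\alpha[/tex] is approximated by a midpoint rule):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = 1e-4 * rng.standard_normal((3, 3))  # small, so O(B^2) terms are negligible

# First two terms: e^A + \int_0^1 e^{a A} B e^{(1-a) A} da  (midpoint rule)
n = 400
ts = (np.arange(n) + 0.5) / n
first_order = sum(expm(t * A) @ B @ expm((1 - t) * A) for t in ts) / n
approx = expm(A) + first_order

err = np.linalg.norm(expm(A + B) - approx)
err0 = np.linalg.norm(expm(A + B) - expm(A))  # zeroth-order term alone
print(err, err0)  # err should be O(|B|^2), far smaller than err0
```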
Category:Dark Energy
The observed accelerated expansion of the Universe requires either a modification of General Relativity or, within that framework, a smooth energy component with negative pressure, called dark energy. This component is usually described by the equation of state $p=w\rho$. As follows from the Friedmann equation,\[\frac{\ddot a}{a}=-\frac{4\pi G}{3}(\rho+3p),\]cosmological acceleration ($\ddot a>0$) requires $\rho+3p=\rho(1+3w)<0$, i.e. $w<-1/3$ for positive $\rho$. The allowed range of values of $w$ can be split into three intervals. The first interval, $-1<w<-1/3$, includes scalar fields named quintessence. A substance with the equation of state $p=-\rho$ ($w=-1$) was named the cosmological constant, because then $\rho=\mathrm{const}$: the energy density does not depend on time and is spatially homogeneous. Finally, scalar fields with $w<-1$ were called phantom fields. Presently there is no evidence for dynamical evolution of the dark energy: all available data agree with the simplest possibility, the cosmological constant. However, the situation may change in the future with improved observational accuracy, which is why one should also consider forms of dark energy alternative to the cosmological constant.

Pages in category "Dark Energy"
The following 10 pages are in this category, out of 10 total. |
I will just expand a little on what doraemonpaul posted, as I do think that post is useful for most real-world reasons to ask this type of question.
Write
$$f(x) = \sum_{j=0}^{\infty} f_j x^j$$
(and similarly $g(x) = \sum_{j=0}^{\infty} g_j x^j$).
Then we know that
$$f'(x) = \sum_{j=0}^{\infty} (j+1) f_{j+1} x^j$$
$$(f'(x))^2 = \sum_{j=0}^{\infty} (j+1) f_{j+1} x^j \sum_{k=0}^{\infty} (k+1) f_{k+1} x^k$$
$$= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} (j+1) (k+1) f_{j+1} f_{k+1} x^{j+k}$$
$$= \sum_{n=0}^{\infty} x^n \sum_{j=0}^n (j+1) (n-j+1) f_{j+1} f_{n-j+1}$$
and
$$f''(x) = \sum_{j=0}^{\infty} (j+2) (j+1) f_{j+2} x^j$$
$$f(x) f''(x) = \sum_{j=0}^{\infty} f_j x^j \sum_{k=0}^{\infty} (k+2) (k+1) f_{k+2} x^k$$
$$= \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} (k+2) (k+1) f_j f_{k+2} x^{j+k}$$
$$= \sum_{n=0}^{\infty} x^n \sum_{j=0}^n (n-j+2) (n-j+1) f_j f_{n-j+2}$$
(and, of course, the same forms hold for $g$). Now, plugging these into the equation from doraemonpaul (after rescaling $\alpha = 2 \beta$):
$$f(x) f''(x) + (f'(x))^2 - \beta f(x) = g(x) g''(x) + (g'(x))^2$$
and equating like powers of $x^n$ yields the following equation for every $n \in \mathbb{N}$:
$$-\beta f_n + \sum_{j=0}^n [(n-j+2) (n-j+1) f_j f_{n-j+2} + (j+1) (n-j+1) f_{j+1} f_{n-j+1}] = \sum_{j=0}^n [(n-j+2) (n-j+1) g_j g_{n-j+2} + (j+1) (n-j+1) g_{j+1} g_{n-j+1}]$$
So, as you can see, you can calculate each Taylor coefficient of $g$ in terms of the coefficients of $f$ by starting with the equation for $n=0$ and working up (with $g_0$ and $g_1$ free to choose). So the recurrence characterizes the solution pretty completely. This is sufficient for most real-world requirements here. Do you need something else? |
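As a concrete illustration, here is a short Python sketch (names mine) that generates the $g_j$ from the $f_j$ by exactly this order-by-order process. Solving the order-$n$ equation for $g_{n+2}$ divides by its coefficient $(n+2)(n+1)g_0$, so the sketch assumes $g_0 \neq 0$:

```python
def solve_g(f, beta, g0, g1, N):
    """Given Taylor coefficients f[0..N+2], return g[0..N+2] with
    f f'' + (f')^2 - beta f = g g'' + (g')^2 matched through order x^N.
    Requires g0 != 0 (it multiplies the unknown g_{n+2})."""
    g = [g0, g1]
    for n in range(N + 1):
        lhs = -beta * f[n] + sum(
            (n - j + 2) * (n - j + 1) * f[j] * f[n - j + 2]
            + (j + 1) * (n - j + 1) * f[j + 1] * f[n - j + 1]
            for j in range(n + 1)
        )
        # On the right-hand side, the j = 0 term (n+2)(n+1) g_0 g_{n+2}
        # holds the unknown; collect the already-known terms and solve.
        known = sum(
            (n - j + 2) * (n - j + 1) * g[j] * g[n - j + 2]
            for j in range(1, n + 1)
        ) + sum(
            (j + 1) * (n - j + 1) * g[j + 1] * g[n - j + 1]
            for j in range(n + 1)
        )
        g.append((lhs - known) / ((n + 2) * (n + 1) * g0))
    return g

# Sanity check: with beta = 0 and matching g0, g1, f itself must come back.
from math import factorial
f = [1 / factorial(j) for j in range(7)]  # coefficients of e^x
g = solve_g(f, 0.0, 1.0, 1.0, 4)
```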
Geometric Mean
Here we will learn the Geometric Mean Formula with examples. The Geometric Mean (GM) is the n-th root of the product of the n quantities of a series: it is obtained by multiplying the values of the items together and extracting the root corresponding to the number of items. Thus the GM of two items is the square root of their product, and the GM of three items is the cube root of their product.
The GM is never larger than the AM. If there are negative numbers or zeros in the series, the GM cannot be used. Logarithms can be used to compute the GM, reducing large products and saving time.
The geometric mean (GM) of a series of ‘n’ positive numbers is given by:
1. In case of discrete series without frequency,
\[GM=\sqrt[n]{{{x}_{1}}.{{x}_{2}}…..{{x}_{n}}}\] It is also given by \[GM=anti\log (\frac{\sum{\log x}}{n})\]
2. In case of discrete series with frequency,
\[GM=\sqrt[n]{{{x}_{1}}^{{{f}_{1}}}.{{x}_{2}}^{{{f}_{2}}}….{{x}_{n}}^{{{f}_{n}}}}\] Where, \[n={{f}_{1}}+{{f}_{2}}+….+{{f}_{n}}\] It is also given by, \[GM=anti\log \{\frac{\sum{f\log x}}{n}\}\]
3. In case of continuous series,
\[GM=\sqrt[n]{{{m}_{1}}^{{{f}_{1}}}.{{m}_{2}}^{{{f}_{2}}}….{{m}_{n}}^{{{f}_{n}}}}\] Where \[n={{f}_{1}}+{{f}_{2}}+….+{{f}_{n}}\] and $m_1, m_2, \ldots, m_n$ are the mid-points of the class intervals. It is also given by \[GM=anti\log \{\frac{\sum{f\log m}}{n}\}\]

Weighted Geometric Mean
Like the weighted arithmetic mean, we can also calculate the weighted geometric mean:
\[{{G}_{W}}=anti\log \{\frac{\sum{W\log x}}{\sum{W}}\}\]
where $G_W$ is the weighted geometric mean, $\sum W\log x$ is the sum of the products of the logarithms of the values $x$ and their corresponding weights, and $\sum W$ is the sum of the weights.
Example 01: Find the Geometric Mean of the data 2, 4, 8.
Solution: Here $x_1 = 2$, $x_2 = 4$, $x_3 = 8$. \[GM=\sqrt[3]{{{x}_{1}}\times {{x}_{2}}\times {{x}_{3}}}=\sqrt[3]{2\times 4\times 8}=\sqrt[3]{64}=4\]
Example 02: Find the GM of the following data.

| Marks (x) | 130 | 135 | 140 | 145 | 150 |
|---|---|---|---|---|---|
| No. of Students (f) | 3 | 4 | 6 | 6 | 3 |

Solution:

| Marks (x) | No. of Students (f) | log x | f log x |
|---|---|---|---|
| 130 | 3 | 2.113 | 6.339 |
| 135 | 4 | 2.130 | 8.520 |
| 140 | 6 | 2.146 | 12.876 |
| 145 | 6 | 2.161 | 12.966 |
| 150 | 3 | 2.176 | 6.528 |
| Total | ∑f = n = 22 | | ∑f log x = 47.23 |
\[GM=anti\log \{\frac{\sum{f\log x}}{n}\}\]
\[=anti\log \{\frac{47.23}{22}\}=140.212\]
Example 03: Find the GM for the given data.

| Yield of wheat in MT | 0-10 | 10-20 | 20-30 | 30-40 | 40-50 | 50-60 |
|---|---|---|---|---|---|---|
| No. of farms (f) | 3 | 16 | 26 | 31 | 16 | 8 |

Solution:

| Class Interval | Mid-value (m) | No. of farms (f) | log m | f log m |
|---|---|---|---|---|
| 0-10 | 5 | 3 | 0.699 | 2.097 |
| 10-20 | 15 | 16 | 1.176 | 18.816 |
| 20-30 | 25 | 26 | 1.398 | 36.348 |
| 30-40 | 35 | 31 | 1.544 | 47.864 |
| 40-50 | 45 | 16 | 1.653 | 26.448 |
| 50-60 | 55 | 8 | 1.740 | 13.920 |
| Total | | ∑f = n = 100 | | ∑f log m = 145.493 |
\[GM=anti\log \{\frac{\sum{f\log m}}{n}\}\]
\[=anti\log \{\frac{145.493}{100}\}=28.505\] |
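The log-based formulas above are easy to check in code; a quick Python sketch (function name mine) reproducing Examples 01 and 03:

```python
import math

def geometric_mean(values, freqs=None):
    """GM = antilog( sum(f * log10(x)) / sum(f) ): the log form of the
    n-th-root-of-the-product definition (requires all values > 0)."""
    if freqs is None:
        freqs = [1] * len(values)
    n = sum(freqs)
    total = sum(f * math.log10(x) for x, f in zip(values, freqs))
    return 10 ** (total / n)

print(geometric_mean([2, 4, 8]))       # 4.0, as in Example 01

mids  = [5, 15, 25, 35, 45, 55]        # class mid-points from Example 03
farms = [3, 16, 26, 31, 16, 8]
print(geometric_mean(mids, farms))     # ~28.5 (the table's 28.505 rounds log m to 3 places)
```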
Let $U$ be a continuous utility function on $\mathbb R^2_+\setminus\{0\}$. Consider the following three conditions:

1. Local non-satiation: for any $x \in X$ and $\epsilon > 0$, there exists $y \in X$ such that $d(x,y) < \epsilon$ and $U(x) < U(y)$.
2. Local non-satiation*: for any $x \in X$ and $\epsilon > 0$, there exists $y \in X$ such that $d(x,y) < \epsilon$ and $U(x) \neq U(y)$.
3. The indifference sets of $U$ are curves.

As a standard result, (1) implies (3) and (3) implies (1).
Obviously, (1) implies (2) so (3) also implies (2).
Can (2) also imply (1) and (3)? |
Let $\phi: R \to R'$ be a ring isomorphism and $I$ an ideal of $R$. Define $\phi(I)=\{\phi(i): i \in I\}$.
Show that $\frac RI \cong \frac {R'}{\phi(I)}$.
To use the first isomorphism theorem, I was trying to show that the kernel of $\pi \circ \phi$ was $I$, where $\pi: R'\to \frac {R'}{\phi(I)}$. It seems to me this follows from the definition of $I$, but my professor said I needed to use the injectivity of $\phi$ for one of the steps in one of the inclusions. I marked the step with an asterisk:
$I \supseteq \ker(\pi \circ \phi)$: $$ i \in \ker(\pi \circ \phi)\implies \overline{\phi(i)}=\overline 0 \implies \phi(i)\in \phi(I)\overset{\ast}{\implies} i \in I$$
So these are my questions:
Why doesn't this just follow from the definition of $\phi(I)$? If we do need the injectivity here, there must be an example where $\phi$ is not injective and there's an element $\phi(i)$ in $\phi(I)$ where $i \notin I$, but I can't think of it. Can you give me an example of this? |
The two questions are as follow and the image attached shows all my steps towards attempting to solve them:
a) $1+ \log y = \log (y+3)$: I am missing something, since my steps do not make sense. If I collect the $\log y$ terms, they cancel each other out when I bring them to the other side to isolate $y$.
b) $\log_2 (x - 3) + \log_2 (x + 5) − \log_2 (x + 15) =0$: I managed to get two solutions, $x= -5$ and $x = 4$, but when I input those values into the original equation, my answer does not equal $0$. I reject $-5$ as an erroneous root because it results in negative values inside the logs. So with $x=4$ left, I get: \begin{align*} \log_2(1) + \log_2(9) - \log_2(19) & = \frac{\log_2(9)}{\log_2(19)} && \text{used product and quotient rules}\\ & = 0.7462285999 \end{align*} so the LS does not equal the RS.
I appreciate any help or tips you may offer, thank you. |
I have the following problems on my Statistics course (using Casella and Berger's book) problem set:
1) Let $Y_{i} = X_{i}'\theta + U_{i}$ where $\theta \in \mathbb{R}^k$ and $U_{i}$ are iid $N(0, \sigma^2)$ random variables and $X_{i}$ is a fixed vector for each $i$. Find the minimal sufficient statistic when $\sigma^2$ is known.
2) Find the minimal sufficient statistic when $\sigma^2$ is unknown.
I was able to write this model in vector form, write down the joint density of $Y$, and show that the OLS estimator from regressing $Y$ on $X$ is a sufficient statistic for $\theta$. However, I'm having trouble showing that it's the minimal one, and also with the case when $\sigma^2$ is unknown. Any ideas how to proceed? |
Physics > Physics and Society
Title: The role of voting intention in public opinion polarization
(Submitted on 16 Sep 2019)
Abstract: We introduce and study a simple model for the dynamics of voting intention in a population of agents that have to choose between two candidates. The level of indecision of a given agent is modeled by its propensity to vote for one of the two alternatives, represented by a variable $p \in [0,1]$. When an agent $i$ interacts with another agent $j$ with propensity $p_j$, then $i$ either increases its propensity $p_i$ by $h$ with probability $P_{ij}=\omega p_i+(1-\omega)p_j$, or decreases $p_i$ by $h$ with probability $1-P_{ij}$, where $h$ is a fixed step. We analyze the system by a rate equation approach and contrast the results with Monte Carlo simulations. We found that the dynamics of propensities depends on the weight $\omega$ that an agent assigns to its own propensity. When all the weight is assigned to the interacting partner ($\omega=0$), agents' propensities are quickly driven to one of the extreme values $p=0$ or $p=1$, until an extremist absorbing consensus is achieved. However, for $\omega>0$ the system first reaches a quasi-stationary state of symmetric polarization where the distribution of propensities has the shape of an inverted Gaussian with a minimum at the center $p=1/2$ and two maxima at the extreme values $p=0,1$, until the symmetry is broken and the system is driven to an extremist consensus. A linear stability analysis shows that the lifetime of the polarized state, estimated by the mean consensus time $\tau$, diverges as $\tau \sim (1-\omega)^{-2} \ln N$ when $\omega$ approaches $1$, where $N$ is the system size. Finally, a continuous approximation allows to derive a transport equation whose convection term is compatible with a drift of particles from the center towards the extremes.

Submission history: From: Federico Vazquez, [v1] Mon, 16 Sep 2019 09:53:28 GMT (196kb) |
Abbreviation:
ComBCK
A commutative BCK-algebra is a structure $\mathbf{A}=\langle A,\cdot ,0\rangle$ of type $\langle 2,0\rangle$ such that
(1): $((x\cdot y)\cdot (x\cdot z))\cdot (z\cdot y) = 0$
(2): $x\cdot 0 = x$
(3): $0\cdot x = 0$
(4): $x\cdot y=y\cdot x= 0 \Longrightarrow x=y$
(5): $x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Remark: Note that the commutativity does not refer to the operation $\cdot$, but rather to the term operation $x\wedge y=x\cdot (x\cdot y)$, which turns out to be a meet with respect to the following partial order:
$x\le y \iff x\cdot y=0$, with $0$ as least element.
Equivalently, a commutative BCK-algebra is a BCK-algebra $\mathbf{A}=\langle A,\cdot ,0\rangle$ such that
$x\cdot (x\cdot y) = y\cdot (y\cdot x)$
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative BCK-algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
$h(x\cdot y)=h(x)\cdot h(y) \mbox{ and } h(0)=0$
Example 1:
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &2\\ f(4)= &5\\ f(5)= &11\\ f(6)= &28\\ f(7)= &72\\ f(8)= &192\\ \end{array}$ |
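As a concrete instance, truncated subtraction on a finite chain gives a commutative BCK-algebra; the brute-force check below (a sketch, with my own variable names) verifies axioms (1) and (5) and that the term operation $x\wedge y=x\cdot(x\cdot y)$ really is the meet (here, $\min$):

```python
from itertools import product

n = 5
elems = range(n)

def dot(x, y):
    """Truncated subtraction on the chain {0, ..., n-1}."""
    return max(x - y, 0)

# Axiom (1): ((x.y).(x.z)).(z.y) = 0
ax1 = all(dot(dot(dot(x, y), dot(x, z)), dot(z, y)) == 0
          for x, y, z in product(elems, repeat=3))

# Axiom (5), commutativity: x.(x.y) = y.(y.x)
ax5 = all(dot(x, dot(x, y)) == dot(y, dot(y, x))
          for x, y in product(elems, repeat=2))

# x ∧ y = x.(x.y) is the meet for the order x <= y iff x.y = 0,
# which on this chain is just min(x, y)
meet_is_min = all(dot(x, dot(x, y)) == min(x, y)
                  for x, y in product(elems, repeat=2))

print(ax1, ax5, meet_is_min)  # True True True
```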
All quantum gate operators must be unitary, meaning the conjugate transpose of the operator is its inverse. In your case:
$UU^{\dagger} = \begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 1 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{bmatrix}\begin{bmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 0 & 0\end{bmatrix} =\begin{bmatrix}1 & 0 & 0 & 0\\0 & 2 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{bmatrix}$
So it is most certainly not unitary because $UU^{\dagger} \neq \mathbb{I}_4$ (same as your second attempt).
There's a long way to construct functions like this, and a short way. The long way is to write out all inputs and outputs:
$U|000\rangle = |000\rangle$
$U|001\rangle = |001\rangle$
$U|010\rangle = |011\rangle$
$U|011\rangle = |010\rangle$
$U|100\rangle = |101\rangle$
$U|101\rangle = |100\rangle$
$U|110\rangle = |110\rangle$
$U|111\rangle = |111\rangle$
You can then pretty easily construct the operator from this.
An easier way is to use projection operators & matrix addition to implement "if-then" semantics with matrices:
$U = |00\rangle\langle00| \otimes \mathbb{I}_2 + |01\rangle\langle01|\otimes X_2 + |10\rangle\langle10| \otimes X_2 + |11\rangle\langle11|\otimes \mathbb{I}_2$
The way to read this is "if the input is $|00\rangle$ or $|11\rangle$, do not flip the third bit; if the input is $|01\rangle$ or $|10\rangle$, flip the third bit." $|\phi\rangle\langle \phi|$ is called the outer product, and is defined, for example, as follows:
$|0\rangle\langle0| = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
which is called a
projection operator. Use projection operators to only apply an operation on specific states - here, $|0\rangle$.
A huge benefit of this projectors-and-addition approach is that you never have to actually write out the full matrix, which becomes enormous as the number of qbits increases - a 3-qbit operator is already an 8x8 matrix with 64 elements! This is your first step into using symbolic rather than matrix reasoning. For example, we can use the rules of linear algebra to calculate the action of our $U$ on some input:
$U|101\rangle = (|00\rangle\langle00| \otimes \mathbb{I}_2 + |01\rangle\langle01|\otimes X_2 + |10\rangle\langle10| \otimes X_2 + |11\rangle\langle11|\otimes \mathbb{I}_2)|101\rangle$
Now, matrix multiplication distributes over addition. This means we have:
$|00\rangle\langle00| \otimes \mathbb{I}_2 |101\rangle + |01\rangle\langle01|\otimes X_2 |101\rangle + |10\rangle\langle10| \otimes X_2 |101\rangle + |11\rangle\langle11|\otimes \mathbb{I}_2 |101\rangle$
Let's apply further transformation rules. Note $|101\rangle = |10\rangle \otimes |1\rangle$, and $(U\otimes V)(|x\rangle \otimes |y\rangle) = U|x\rangle \otimes V|y\rangle$, where in our case (for example) $U = |00\rangle\langle00|$, $V = \mathbb{I}_2$, $|x\rangle=|10\rangle$ and $|y\rangle=|1\rangle$:
$|00\rangle\langle00|10\rangle \otimes \mathbb{I}_2 |1\rangle + |01\rangle\langle01|10\rangle \otimes X_2 |1\rangle + |10\rangle\langle10|10\rangle \otimes X_2 |1\rangle + |11\rangle\langle11|10\rangle \otimes \mathbb{I}_2 |1\rangle$
Now, note we have the following four terms:
$\langle00|10\rangle, \langle01|10\rangle, \langle10|10\rangle, \langle11|10\rangle$
These are called
inner products (or dot products), and here all of them are zero except for $\langle10|10\rangle$ - the inner product of $|10\rangle$ with itself:
$\langle10|10\rangle = \begin{bmatrix} 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 1 \\ 0\end{bmatrix} = 1$
Since the other terms are all zero, they all cancel out:
$\require{cancel} \cancel{|00\rangle\cdot 0 \otimes \mathbb{I}_2 |1\rangle} + \cancel{|01\rangle\cdot 0 \otimes X_2 |1\rangle} + |10\rangle\cdot 1 \otimes X_2 |1\rangle + \cancel{|11\rangle\cdot 0 \otimes \mathbb{I}_2 |1\rangle}$
So we are left with:
$|10\rangle \otimes X_2 |1\rangle$
Where of course $X_2|1\rangle = |0\rangle$, so:
$|10\rangle \otimes |0\rangle = |100\rangle$
And we calculated $U|101\rangle = |100\rangle$ as expected, without once having to write out a huge inconvenient matrix! |
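The symbolic calculation above is also easy to cross-check numerically; a small NumPy sketch (helper names mine) builds $U$ from the projectors and confirms both unitarity and $U|101\rangle = |100\rangle$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X (bit flip)

def ket(bits):
    """Computational-basis column vector for a bit string like '101'."""
    v = np.zeros(2 ** len(bits))
    v[int(bits, 2)] = 1.0
    return v

def proj(bits):
    """Projection operator |bits><bits|."""
    v = ket(bits)
    return np.outer(v, v)

# U = |00><00| ⊗ I + |01><01| ⊗ X + |10><10| ⊗ X + |11><11| ⊗ I
U = (np.kron(proj("00"), I2) + np.kron(proj("01"), X)
     + np.kron(proj("10"), X) + np.kron(proj("11"), I2))

print(np.allclose(U @ U.conj().T, np.eye(8)))   # True: U is unitary
print(np.allclose(U @ ket("101"), ket("100")))  # True: U|101> = |100>
```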
Now showing items 1-9 of 9
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\varphi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and $\sqrt{s_{\rm NN}}=8.16$ TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra-relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Abbreviation:
CRPoMon
A commutative residuated partially ordered monoid is a residuated partially ordered monoid $\mathbf{A}=\langle A, \cdot, 1, \to, \le\rangle$ such that $\cdot$ is commutative: $xy=yx$
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative residuated partially ordered monoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(1)=1$, $h(x \to y)=h(x) \to h(y)$, and $x\le y\Longrightarrow h(x)\le h(y)$.
Brezis Pseudomonotonicity is Strictly Weaker than Ky–Fan Hemicontinuity

Abstract
In 1968, H. Brezis introduced a notion of operator pseudomonotonicity which provides a unified approach to monotone and nonmonotone variational inequalities. A closely related notion is that of Ky–Fan hemicontinuity, a continuity property which arises if the famous Ky–Fan minimax inequality is applied to the variational inequality framework. It is clear from the corresponding definitions that Ky–Fan hemicontinuity implies Brezis pseudomonotonicity, but quite surprisingly, a recent publication by Sadeqi and Paydar (J Optim Theory Appl 165(2):344–358, 2015) claims the equivalence of the two properties. The purpose of the present note is to show that this equivalence is false; this is achieved by providing a concrete example of a nonlinear operator which is Brezis pseudomonotone but
not Ky–Fan hemicontinuous.

Keywords: Brezis pseudomonotonicity; Ky–Fan hemicontinuity; Counterexample; Variational inequality; Equilibrium problem

Mathematics Subject Classification: 46B, 46T, 47H, 47J

1 Introduction
Variational inequalities (VIs) are a prominent tool in applied mathematics. They have found numerous applications, including constrained optimization problems, Nash equilibrium problems, and several types of contact problems in mechanics. More details can be found, for instance, in [1, 2, 3, 4] and the references therein.
The study of variational inequalities can be divided into multiple facets: most commonly, one is interested in sufficient conditions for the existence of solutions, the design of suitable algorithms for their computation, or other properties of the solution set such as closedness or convexity. Throughout the last decades, various concepts have been developed in order to ascertain these properties, including the monotonicity of the variational operator (and multiple relaxed versions thereof) as well as several types of continuity (hemicontinuity, continuity on finite-dimensional subspaces, etc.). In addition, one often requires suitable properties of the feasible set such as closedness or (weak) compactness.
One of the most general properties which can be used to tackle variational inequalities is that of (Brezis) pseudomonotonicity, a property which was introduced in [5]. (This should not be confused with pseudomonotonicity in the sense of Karamardian.) The attractive feature of Brezis pseudomonotonicity is that it provides a unified approach to monotone and nonmonotone problems: indeed, it is best viewed as a hybrid combining elements of both monotonicity and continuity; see Definition 2.2.
Since its conception in 1968, the notion of pseudomonotonicity has occurred prominently in the works of Browder [6]; Brezis, Nirenberg, and Stampacchia [7]; Zeidler [8]; and Barbu and Precupanu [9]. The standard application of pseudomonotonicity was the construction of existence results for VIs, a topic which occurs in all these references and was also revisited in [10]. In addition, pseudomonotonicity has turned out to be quite useful when analyzing convergence of iterative algorithms for constrained minimization and variational or quasi-variational inequalities; see [11, 12] for more details.
A different but related approach to the existence of solutions is given by the classical minimax inequality of Ky Fan [13]. An application of this result to the VI framework gives rise to a continuity property which is sometimes called
Ky–Fan hemicontinuity. This property implies Brezis pseudomonotonicity (a fact which follows directly from the corresponding definitions, see below), but the latter appears to be more refined and convenient when dealing with infinite-dimensional VIs. However, quite surprisingly, a 2015 publication by Sadeqi and Paydar [14] claims the equivalence of the two properties. The purpose of the present paper is to discuss this equivalence and provide a counterexample which shows that the two properties are in fact distinct. In addition, we also outline an error in the reference which may have led to the false result.
This paper is organized as follows. In Sect. 2, we give a brief summary of the properties in question, their consequences, and relations to other standard properties for VIs. Section 3 contains the main counterexample.
2 Hemicontinuity, Pseudomonotonicity, and Their Role in the Study of Variational Inequalities
Throughout this paper, \(X\) is a real Banach space with norm \(\Vert \cdot \Vert _X\) and continuous dual \(X^*\). The duality pairing between \(X^*\) and \(X\) is denoted by \(\langle \cdot ,\cdot \rangle \). We write \(\rightarrow \), \(\rightharpoonup \), and \(\rightharpoonup ^*\) for strong, weak, and weak-\(^*\) convergence. Given an operator \(F:X\rightarrow X^*\) and a nonempty set \(A\subseteq X\), the variational inequality (VI) takes on the following form:
\[\text{find } \bar{x}\in A \text{ such that } \langle F(\bar{x}),y-\bar{x} \rangle \ge 0 \text{ for all } y\in A. \qquad (1)\]
This can be seen as an equilibrium problem in the following sense: define \(\varPsi :A^2\rightarrow \mathbb {R}\), \({\varPsi (x,y):=\langle F(x),x-y \rangle }\). Then the VI is obviously equivalent to the existence of \(\bar{x}\in A\) such that \(\varPsi (\bar{x},y)\le 0\) for all \(y\in A\). One of the standard existence results for such problems is the minimax inequality of Ky Fan [13], which requires the weak sequential lower semicontinuity of \(\varPsi \) with respect to \(x\). This gives rise to the following definition.

Definition 2.1
(Ky–Fan hemicontinuity) We say that \(F:X\rightarrow X^*\) is
Ky–Fan hemicontinuous if, for every \(y\in X\), the function \(x\mapsto \langle F(x),x-y \rangle \) is weakly sequentially lower semicontinuous.
The above is one of the two main properties which we will discuss in this paper. The second one was introduced by Brezis [5] and is given as follows.
Definition 2.2 (Brezis pseudomonotonicity) We say that \(F:X\rightarrow X^*\) is (Brezis) pseudomonotone if, whenever \(x^k\rightharpoonup x\) in \(X\) and
\[\limsup _{k\rightarrow \infty }\langle F(x^k),x^k-x \rangle \le 0, \qquad (2)\]
then
\[\langle F(x),x-y \rangle \le \liminf _{k\rightarrow \infty }\langle F(x^k),x^k-y \rangle \quad \text{for all } y\in X. \qquad (3)\]
It is clear from the above definitions that Brezis pseudomonotonicity is weaker than Ky–Fan hemicontinuity: the latter requires that (3) holds for all weakly convergent sequences \(\{x^k\}\subseteq X\) with limit \(x\in X\), whereas Brezis pseudomonotonicity only asserts this estimate for sequences which additionally satisfy the \(\limsup \)-condition in (2).
The set of pseudomonotone operators is large and encompasses many practically relevant examples. Various sufficient conditions for pseudomonotonicity can be found in [8, 12]; in particular,
F is pseudomonotone provided it is either (i) monotone and continuous, (ii) completely continuous, or (iii) the sum of two operators which are themselves pseudomonotone. Using results from differential calculus in Banach spaces, one can also give sufficient conditions for pseudomonotonicity in the special case where F is the Fréchet derivative of a real-valued functional; see [12].
The following is the basic existence result for VIs with pseudomonotone operators. Note that we call \(F\) bounded if it maps bounded sets in \(X\) to bounded sets in \(X^*\).

Proposition 2.1
(Pseudomonotone VIs [11, Corollary 4.2]) Let \({A\subseteq X}\) be a nonempty, convex, weakly compact set, and \(F:X\rightarrow X^*\) a bounded pseudomonotone operator. Then the variational inequality (1) admits a solution \(\hat{x}\in A\).
If \(F:X\rightarrow X^*\) is pseudomonotone, then the solution set of the VI (1) is always weakly sequentially closed. More generally, if \(\{x^k\}\) is a sequence of suitable “approximate” solutions of the VI, then every weak limit point of \(\{x^k\}\) belongs to its solution set; see [11].
If X is finite-dimensional (without loss of generality, a Hilbert space), then an operator \(F:X\rightarrow X\) is bounded and pseudomonotone if and only if it is continuous. Thus, in this case, the study of pseudomonotone VIs subsumes the well-known theory of finite-dimensional VIs; see, for instance, [1].

Remark 2.1 (Sequences versus nets) Some authors define the aforementioned concepts on a general Hausdorff topological vector space (instead of a Banach space endowed with its weak topology). In that case, the properties ought to be formulated in terms of nets or filters instead of ordinary sequences; see, for instance, [7].
3 A Nonlinear Operator Which is Brezis Pseudomonotone But Not Ky–Fan Hemicontinuous
As we shall see below, Brezis pseudomonotonicity and Ky–Fan hemicontinuity are not equivalent. The particular example shown here involves the well-known function spaces \(L^p(\varOmega )\), \(W^{k,p}(\varOmega )\), and \(W_0^{k,p}(\varOmega )\), with \(\varOmega \) a bounded finite-dimensional domain, \(k\in \mathbb {N}\), and \(p\in [1,+\infty ]\); see, for instance, [15].
Example 3.1 Let \(X:=W_0^{1,3}(0,1)\) and let F be the p-Laplacian (with \(p=3\)), which is monotone and continuous, hence pseudomonotone. Now, for each \(k\in \mathbb {N}\), let \(u_k:[0,1]\rightarrow \mathbb {R}\) be the piecewise linear function with value \(1/k\) at \(t=(3 i+1)/(6 k)\) for \(i=0,\ldots ,k-1\), and value zero at \(t=i/(2 k)\) for \(i=0,\ldots ,k\), and on \([1/2, 1]\). Clearly, \(u_k\rightarrow 0\) in \(L^3(0,1)\). Moreover, the weak derivative of \(u_k\) (in the Sobolev sense) is almost everywhere piecewise constant, with slope values that do not depend on k; hence \(u_k\rightharpoonup 0\) in X, but not strongly. Finally, we have \(\langle F(u),u-v \rangle =0\) for the weak limit \(u=0\), but an elementary calculation shows that \(\langle F(u_k),u_k-v \rangle \) converges to a negative limit for a suitable choice of v, and thus, F is not Ky–Fan hemicontinuous.
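The elided computations in the example can be checked numerically. The sketch below assumes (my inference from the \(L^3\) setting) that F is the p-Laplacian with \(p=3\), i.e. \(\langle F(u),v \rangle = \int_0^1 |u'|u'v'\,dt\). It builds the piecewise linear \(u_k\) and confirms that \(\|u_k\|_{L^3}^3 \to 0\) while \(\int_0^1 |u_k'|^3\,dt\) stays constant in k (equal to 45 for these node values), so the gradients do not vanish along the sequence.

```python
import numpy as np

def u_k_nodes(k):
    """Breakpoints and values of the piecewise linear u_k on [0, 1]."""
    ts, vs = [0.0], [0.0]
    for i in range(k):
        ts += [(3 * i + 1) / (6 * k), (i + 1) / (2 * k)]
        vs += [1.0 / k, 0.0]
    ts.append(1.0)  # u_k vanishes on [1/2, 1]
    vs.append(0.0)
    return np.array(ts), np.array(vs)

def seg_cube(v0, v1, h):
    """Exact integral of the cube of a nonnegative linear segment of length h."""
    if h == 0.0 or v0 == v1:
        return v0 ** 3 * h
    return (v1 ** 4 - v0 ** 4) / (4 * (v1 - v0) / h)

def norms(k):
    """Return (||u_k||_{L^3}^3, integral of |u_k'|^3), both computed exactly."""
    ts, vs = u_k_nodes(k)
    dt, dv = np.diff(ts), np.diff(vs)
    l3 = sum(seg_cube(v0, v1, h) for v0, v1, h in zip(vs[:-1], vs[1:], dt))
    slopes = np.divide(dv, dt, out=np.zeros_like(dv), where=dt > 0)
    grad3 = float(np.sum(np.abs(slopes) ** 3 * dt))
    return l3, grad3

for k in (1, 4, 16):
    print(k, norms(k))  # L^3 norm -> 0, gradient integral stays at 45
```

The slopes are 6 (rising) and -3 (falling) regardless of k, which is why the gradient integral cannot decay even though the function values do.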
An interesting question that remains is where the argumentation from [14] is incorrect. The following is a particular error which is contained in that paper.
Remark 3.1 Consider, for instance, an infinite-dimensional Hilbert space E and \(\{u_k\}\) the sequence of unit vectors. Then \(u_k\rightharpoonup u:=0\), and this sequence exhibits the gap in the argument of [14].

4 Conclusions
The example in this paper shows that Brezis pseudomonotonicity and Ky–Fan hemicontinuity are distinct properties. In particular, the former is strictly weaker than the latter and therefore remains one of the most general properties which can be used to tackle variational inequalities.
Acknowledgements
The author would like to thank Daniel Wachsmuth for the basic idea of the counterexample and Ildar Sadeqi for the discussion leading to the creation of this paper. This research was supported by the German Research Foundation (DFG) within the priority program “Non-smooth and Complementarity-based Distributed Parameter Systems: Simulation and Hierarchical Optimization” (SPP 1962) under Grant No. KA 1296/24-1.
References

2. Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics in Applied Mathematics, vol. 31. SIAM, Philadelphia (2000). https://doi.org/10.1137/1.9780898719451 (reprint of the 1980 original)
11. Kanzow, C., Steck, D.: Quasi-variational inequalities in Banach spaces: theory and augmented Lagrangian methods. ArXiv e-prints, arXiv:1810.00406 (2018)
12. Steck, D.: Lagrange multiplier methods for constrained optimization and variational problems in Banach spaces. Ph.D. thesis (submitted), University of Würzburg (2018)
13. Fan, K.: A minimax inequality and applications. In: Inequalities, III (Proc. Third Sympos., Univ. California, Los Angeles, 1969; dedicated to the memory of Theodore S. Motzkin), pp. 103–113 (1972)
Abbreviation: Gph

A graph is a structure $\mathbf{G}=\langle G, E\rangle$ such that

$G$ is a set,

$E$ is a binary relation on $G$: $E\subseteq G\times G$, and

$E$ is symmetric: $xEy\Longrightarrow yEx$.
Let $\mathbf{G}$ and $\mathbf{H}$ be graphs. A morphism from $\mathbf{G}$ to $\mathbf{H}$ is a function $h:G\rightarrow H$ that is a homomorphism: $xE^{\mathbf G}y\Longrightarrow h(x)\,E^{\mathbf H}\,h(y)$.
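The two defining conditions (symmetry of $E$, and preservation of edges by a morphism) translate directly into executable checks. A minimal sketch with hypothetical helper names:

```python
def is_graph(G, E):
    """E must be a symmetric binary relation on the set G."""
    return (all(x in G and y in G for (x, y) in E)
            and all((y, x) in E for (x, y) in E))

def is_morphism(h, E_G, E_H):
    """h preserves edges: x E y implies h(x) E h(y)."""
    return all((h[x], h[y]) in E_H for (x, y) in E_G)

# A 3-cycle maps onto a single vertex with a loop.
G, E_G = {0, 1, 2}, {(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)}
H, E_H = {"*"}, {("*", "*")}
h = {0: "*", 1: "*", 2: "*"}
print(is_graph(G, E_G), is_graph(H, E_H), is_morphism(h, E_G, E_H))
# prints: True True True
```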
Suppose that $L$ is a regular language and $0<\alpha<1$ with $\alpha \in \mathbb{Q}$. Define $L_\alpha$ as
$$L_\alpha = \{\omega \in \Sigma^* \mid \exists \omega_1 \in \Sigma^* .\omega\omega_1 \in L,\frac{|\omega|}{|\omega\omega_1|}=\alpha\}$$
How do I prove that $L_\alpha$ is regular?
Here is my attempt. Assume $D$ is a DFA that recognizes $L$, with state set $Q=\{q_0,\ldots,q_n\}$. If $\delta(q_0,\omega\omega_1)\in F$ and $\delta(q_0,\omega)=q_i$, we can try to define $D_1$ such that $\delta_1(q_0,\omega)=r_{i-1}$ (for $r_i \in F_1$), but the problem is that I can't define $\delta_1(q_0,a)$ for $a\in \Sigma$.
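Before attempting the proof, it can help to explore the definition by brute force. The sketch below (the example language and the length bound are my own choices) tests membership in $L_\alpha$ directly from the definition; it proves nothing about regularity, but shows what any automaton construction must capture.

```python
from itertools import product

SIGMA = "ab"

def in_L(w):
    # example regular language: an even number of a's
    return w.count("a") % 2 == 0

def in_L_alpha(w, alpha, max_extra=12):
    """Does some w1 exist with w.w1 in L and |w| / |w.w1| = alpha?"""
    for m in range(max_extra + 1):
        total = len(w) + m
        if total > 0 and len(w) == alpha * total:
            if any(in_L(w + "".join(w1)) for w1 in product(SIGMA, repeat=m)):
                return True
    return False

# alpha = 1/2 forces |w1| = |w|; with this L every short prefix qualifies
halves = [w for n in (1, 2)
          for w in ("".join(t) for t in product(SIGMA, repeat=n))
          if in_L_alpha(w, 0.5)]
print(halves)
```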
If $x>0$, find the set of all values of $x$ such that the series $$\sum_{n=1}^{\infty} x^{\ln{n}}$$ is convergent.

My attempt: I used the ratio test to find the set of all values of $x$ for which the series converges:

$$\lim_{n\to\infty}\frac{x^{\ln{(n+1)}}}{x^{\ln{n}}} =\lim_{n\to\infty}x^{\ln\frac{n+1}{n}}$$

This quantity must be less than one for a convergent series, but I am not able to judge it. Can you please help me find the set of values of $x$ for which the series converges?
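One observation that sidesteps the inconclusive ratio (the limit of the ratio is $x^0=1$) is the identity $x^{\ln n} = e^{\ln x \ln n} = n^{\ln x}$, which turns the series into a p-series; it then converges exactly when $\ln x < -1$, i.e. $0 < x < 1/e$. A quick numeric sanity check of the identity and of the two regimes:

```python
import math

# x^(ln n) = e^(ln x * ln n) = n^(ln x): the series is a p-series in disguise
for x, n in [(0.2, 7), (0.9, 13)]:
    assert math.isclose(x ** math.log(n), n ** math.log(x))

def partial_sum(x, N):
    return sum(x ** math.log(n) for n in range(1, N + 1))

x_conv, x_div = 0.25, 0.5      # 0.25 < 1/e ~ 0.368 < 0.5
tail_conv = partial_sum(x_conv, 10**5) - partial_sum(x_conv, 10**4)
tail_div = partial_sum(x_div, 10**5) - partial_sum(x_div, 10**4)
print(tail_conv, tail_div)     # the first tail is small, the second large
```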
How would you go about solving integral of a floor? The particular problem I have is:
$$\int \,\left\lfloor\frac{1}{x}\right\rfloor\, dx$$
The function:
$$\left\lfloor\frac{1}{x}\right\rfloor$$
is equal to $n$ on the interval $\left(\frac{1}{n+1},\frac{1}{n}\right)$, so if we try to determine the integral from $t>0$ to $1$, we can let $n=\left\lfloor\frac{1}{t}\right\rfloor$: the function has constant value $1$ on the range $(\frac{1}{2},1)$, constant value $2$ on the range $(\frac{1}{3},\frac{1}{2})$, etc. So since $t<\frac{1}{n}$, we get terms for each interval $(\frac{1}{k+1},\frac{1}{k})$ when $k<n.$ The length of the $k$th interval is $\frac{1}{k(k+1)}$ and the value of the function is $k$ on this interval, so the integral on this interval is $\frac{1}{k+1}$. So the integral from $\frac{1}{n}$ to $1$ is $1/2 + 1/3 + 1/4 + \cdots + 1/n$. Then the integral from $t$ to $\frac{1}{n}$ is the length of the interval times $n$, which is $n(\frac{1}{n} - t) = 1-nt$. So the total is:
$$\int_t^1 \,\left\lfloor\frac{1}{x}\right\rfloor\, dx = 1 - t{\left\lfloor\frac{1}{t}\right\rfloor} + \sum_{i=2}^{\left\lfloor\frac{1}{t}\right\rfloor}\frac{1}{i}$$
The indefinite integral, then, is obtained by negating this expression, absorbing the constant into $C$, and replacing $t$ with $x$:
$$x\left\lfloor\frac{1}{x}\right\rfloor - \sum_{i=2}^{\left\lfloor\frac{1}{x}\right\rfloor}\frac{1}{i} + C$$
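The closed form above can be cross-checked against a direct interval-by-interval evaluation. A small sketch using exact rational arithmetic (function names are mine):

```python
from fractions import Fraction
from math import floor

def closed_form(t):
    """1 - t*floor(1/t) + sum_{i=2}^{floor(1/t)} 1/i, as derived above."""
    n = floor(1 / t)
    return 1 - t * n + sum(Fraction(1, i) for i in range(2, n + 1))

def piecewise(t):
    """Integrate floor(1/x) over [t, 1] interval by interval."""
    n = floor(1 / t)
    full = sum(Fraction(k, 1) * (Fraction(1, k) - Fraction(1, k + 1))
               for k in range(1, n))            # value k on (1/(k+1), 1/k)
    partial = n * (Fraction(1, n) - t)          # value n on [t, 1/n)
    return full + partial

t = Fraction(3, 10)
print(closed_form(t), piecewise(t))  # 14/15 14/15
```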
I should say right off the bat that the floor function and vertical asymptotes do not mix very well. Don't expect the answer to be reducible beyond what I give. Anyway, moving on...

A floor function integral is best separated into two portions: an integral assuming that the floor function is constant everywhere, and a special sum, known as the jump series, that represents the portions of the graph that are jumps.

The equation written for this is a piecewise function using Iverson brackets, which return $1$ when the equation inside is true and $0$ when it is false:
$$\int f(x)\, dx - [x > 0]\sum_{i=0}^{JC(x)} \left(\lim_{a \to JI(i)^+} F(a) - \lim_{a \to JI(i)^-} F(a)\right) - [x < 0]\sum_{i=0}^{JC(x)} \left(\lim_{a \to JI(i)^-} F(a) - \lim_{a \to JI(i)^+} F(a)\right)$$
Remember that $F(x)$ is our "faulty" integral. $\ JC(x)$ is the number of jumps between $0$ and $x$; it gives negative numbers on the left side of $0$. $\ JI(i)$ is the $x$ coordinate of the $i$'th jump; once again, negative to the left of $0$. $\ JC$ is traditionally the floor of the integral of the absolute value of the derivative of the function within some floor term. This is supposed to work. For this it yields:
$$ \ JC(x) = floor(INF*[x > 0] - INF*[x < 0] - 1/x)$$
This is ridiculously messy and fails by divergence yet there is no concept of $\ JC$ failure so let's keep going...
$\ JI$ is traditionally the inverse of JC without floor. We can try to find the inverse...
$\ JC(y) = [y > 0]*(1/(y-INF)) + [y < 0]*(1/(y+INF))$
This is arguably messier.
At this point every alarm bell in the cosmos is going off. I'm not going any farther. Unfortunately this is beyond divergent sums; this is just messy. If you want to handle the asymptotes, take an integral over a range that doesn't contain a divergent sum (yes, the integral over $0$ diverges) and simply split the integral. Anything greater than $1$ is $0$ for the function you gave anyway, so it's hardly a worthwhile example beyond showing the insane messiness that can rear its hideous head.
I am reading Sec. 1.12 of the Cosmology book by Weinberg.
In this section he explains the very simple model of quintessence which attempts to provide a dynamical explanation of the smallness of the cosmological constant today.
He considers an example of a time-dependent but space-independent scalar field in a space with Robertson-Walker metric
$$ \mathcal{L} = \int d^4 x \sqrt{-det(g)} \left[\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi + V(\phi)\right], \tag{1} $$
where, in the simplest case, the following potential seems to make the job (naively)
$$ V(\phi) = \frac{M^{\alpha+4}}{\phi^\alpha} \tag{2} $$
for some positive constant $\alpha>0$ and mass scale $M$.
I am used to considering field theories where the potentials are analytic functions of the fields, especially at the point $\phi=0$, e.g. $\phi^4$-theory and standard effective field theories. In particular, all the effective field theories used by Beyond Standard Model phenomenologists involve positive powers of fields and derivatives, schematically
$$ V(\phi) = \sum_{n,k}\partial^n \phi^k $$
Up to now I thought that fields in denominators were allowed as long as we expand around some constant VEV $v$, for example $\phi \rightarrow v + \delta\phi$, in such a way that any potential of the form $V(\phi) = \phi^{-\alpha}$ becomes an expansion in analytic interaction terms involving the perturbation $\delta \phi$. Once we do this expansion, we quantize the theory around the minimum $\phi=v$.
Here instead the minimum of the potential is at $\phi=+\infty$ and I have never seen quantization around $\phi=+\infty$.
I have two very related questions.
Question: What is the physical meaning of the potential (2), which blows up for very small values of the field $\phi$? Can we interpret this as a theory of particles? If yes, how do you quantize this theory in the RW metric with zero curvature? (That is, how does one define one-particle states, and so on?)
Initial problem-solving strategy formulation and a midway strategy review are the key steps.

Some problems look difficult, but if you have a clear strategy for dealing with this type of problem, and are ready to change track by reevaluating the strategy midway, chances are high that you will reach an efficient solution in a few quick steps.

In this session we will showcase a trigonometric expression minimization problem that looks difficult. We will highlight how problem analysis, problem-solving strategy formulation, and a midway review of the strategy to change track and use a more basic general technique result in a quick solution. Additionally, we will apply the Many ways technique to solve the problem in two more ways.
Before going ahead, you may want to refer to our concept tutorials on Trigonometry.
Chosen Problem.
The minimum value of $\sin^2 \theta + \cos^2 \theta +\sec^2 \theta +\text{cosec}^2 \theta+\tan^2 \theta +\cot^2 \theta$ is,
a: 3, b: 5, c: 1, d: 7.

Solution 1 - Problem analysis and strategy formulation
The expression involved is a long one with many terms in trigonometric functions.
We know that the most important initial objective in minimizing any trigonometric expression is:

To simplify the expression to at most a three-term, two-function expression.

The third term may be a numeric term. This is because such an expression has more than one method of solution with respect to minimization or maximization of the expression. The methods may be applied individually or together, forming a hybrid method.

Analyzing the given expression, we decide as a strategy to simplify the expression in terms of $\tan \theta$ and $\cot \theta$. Being an inverse function pair with product 1, such an expression can easily be handled using the AM-GM inequality technique.
Solution 1 - Problem solving execution stage 1
The given expression is,
$\sin^2 \theta + \cos^2 \theta +\sec^2 \theta +\text{cosec}^2 \theta+\tan^2 \theta +\cot^2 \theta$
$=1+ (1+\tan^2 \theta) + (1+\cot^2 \theta)+\tan^2 \theta +\cot^2 \theta$
$=3+2(\tan^2 \theta +\cot^2 \theta)$.
Though we could proceed with our earlier approach and use the standard AM-GM inequality technique at this point, we reviewed the problem state and decided that the more basic algebraic approach of minimization for quadratic expressions should provide the solution in a more elegant way.

Solution 1 - Intermediate stage strategy change

By the algebraic minima technique, a quadratic expression is converted to the following form,

$a+(b-c)^2$, where the minimum value $a$ will occur for $b=c$.
Otherwise, $(b-c)^2$ will always add a positive value to $a$.
This is a purely algebraic minima determination technique and more basic in nature. We can apply this technique here because the two square terms being inverse functions, their product, which will be part of the middle term of the subtractive square, will be a numeric term.
So we have the given expression transformed as,
$3 + 2(\tan^2 \theta +\cot^2\theta)$
$=7+2(\tan \theta - \cot\theta)^2$
This will have the minimum value of 7 when $\tan \theta=\cot \theta$.
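A quick numerical scan corroborates this minimum (a sanity check, not part of the derivation):

```python
import math

def f(t):
    s, c = math.sin(t), math.cos(t)
    # sin^2 + cos^2 + sec^2 + cosec^2 + tan^2 + cot^2
    return s*s + c*c + 1/c**2 + 1/s**2 + (s/c)**2 + (c/s)**2

# scan theta over (0, pi/2); the minimum sits at theta = pi/4
m = min(f(math.pi / 2 * i / 100000) for i in range(1, 100000))
print(m, f(math.pi / 4))  # both approximately 7
```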
Answer: d: 7.

Key concepts and techniques used: problem analysis; problem solving strategy formulation; intermediate stage strategy review; algebraic minima technique; inverse trigonometric function pairs; basic algebraic concepts; basic trigonometry concepts; maxima and minima for trigonometric expressions.

Solution 2 - Using the AM-GM inequality
In this solution path, we will proceed the same way as before to simplify the given expression as,
$\sin^2 \theta + \cos^2 \theta +\sec^2 \theta +\text{cosec}^2 \theta+\tan^2 \theta +\cot^2 \theta$
$=3 +2(\tan^2 \theta + \cot^2 \theta)$.
But at this point we decide to apply the AM-GM inequality technique to find the minimum value of $(\tan^2 \theta + \cot^2 \theta)$.
$\text{AM}=\displaystyle\frac{\tan^2 \theta + \cot^2 \theta}{2}$.
$\text{GM}=\sqrt{\tan^2 \theta \times{\cot^2 \theta}}=1$.
By AM GM inequality concept,
$\text{AM} \geq \text{GM}$,
Or, $\displaystyle\frac{\tan^2 \theta + \cot^2 \theta}{2} \geq 1$,
Or, $\tan^2 \theta + \cot^2 \theta \geq 2$.
So the minimum value of $(\tan^2\theta +\cot^2\theta)=2$.
Thus minimum value of our given expression is,
$=3+2\times{2}=7$.
Solution 3 - Problem analysis and solving
In this third solution path, without any specific problem oriented strategy we decide to simplify as we can, intending it to simplify in terms of $\sin \theta$ and $\cos \theta$.
Proceeding as before we arrive at the same middle stage of simplification quickly,
$\sin^2 \theta + \cos^2 \theta +\sec^2 \theta +\text{cosec}^2 \theta+\tan^2 \theta +\cot^2 \theta$
$=3 +2(\tan^2 \theta + \cot^2 \theta)$
$=3+2\left(\displaystyle\frac{\sin^2 \theta}{\cos^2 \theta} + \displaystyle\frac{\cos^2 \theta}{\sin^2 \theta}\right)$
$=3 +2\left(\displaystyle\frac{\sin^4 \theta + \cos^4 \theta}{\sin^2 {\theta}\cos^2 \theta}\right)$
$=-1+2\left(\displaystyle\frac{\left(\sin^2 \theta +\cos^2 \theta\right)^2}{\sin^2 {\theta}\cos^2 \theta}\right)$
$=-1+\displaystyle\frac{2}{\sin^2 {\theta}\cos^2 \theta}$.
Now we will use a rich maximization concept for the denominator of the second term.
As maximum value of $\sin^n {\theta}\cos^n \theta$ is $\left(\frac{1}{2}\right)^n$, the maximum value of $\sin^2 {\theta}\cos^2 \theta$ is,
$\left(\displaystyle\frac{1}{2}\right)^2=\displaystyle\frac{1}{4}$.
Thus the minimum value of the given expression is,
$-1+2\times{4}=7$.
This is rather a long solution.
Important

The alternative solutions showcased above exemplify the following.

We should always start solving a problem with a strategy suitable for and specific to the problem type and the problem solving objectives. The first objective is of course to solve the problem, but more importantly, to solve it along the most efficient, shortest path. Even if we start with a specific suitable strategy, then midway through the solution, if the opportunity arises, we must be ready and alert to switch to a new, better path, adjusting the technique on the way. Any random approach will invariably take you along a confusing and longer path to the solution. Specifically for this type of problem, we need to adopt a strategy of simplifying the large expression into at most a three-term, two-function expression, with the two functions preferably an inverse function pair such as $\tan$-$\cot$.
While solving the problem in three ways, we have applied the problem solving skill improvement Many ways technique, which tested our skill in solving a problem in many ways and also gave us the opportunity to compare the solutions with each other.

Resources on Trigonometry and related topics
You may refer to our useful resources on Trigonometry and other related topics especially algebra.
Tutorials on Trigonometry; General guidelines for success in SSC CGL; Efficient problem solving in Trigonometry; How to solve a difficult SSC CGL level problem in a few quick steps, Trigonometry 6.

A note on usability: The efficient math problem solving sessions on school maths are equally usable by SSC CGL aspirants, as firstly, the "Prove the identity" problems can easily be converted to MCQ type questions, and secondly, the same set of problem solving reasoning and techniques is used for efficient Trigonometry problem solving.
The intrinsic parameters of a camera allow us to map a point in the 3D scene, as measured relative to the camera (in the camera coordinate system), to the 2D location of the pixel it activates on the camera image plane. You can go the other way too, i.e. use the intrinsic parameters to get the 3D coordinates of a point in the scene corresponding to a pixel location on the 2D image plane.
The intrinsic parameters of a camera encompass the focal length, pixel dimensions, the principal point (where the optical axis intersects with the image plane) and finally, the subject of this post, the skew between the axes of the image plane. These parameters are collected into a matrix called the calibration matrix which when applied to a point in the scene, gives the pixel on the image plane it projects onto.
Usually, the skew coefficient is simply represented as a constant like s, and the details of how it came about are omitted. (To understand why this is so, you might want to skip ahead to the section Should you care about axis skew?) This post aims to discuss in detail the contribution of the axis skew to the calibration matrix of a camera. To keep the post focused, I assume knowledge of how the other parameters are accounted for in the matrix. The discussion takes the form of a formal solution to a problem in the book “Computer Vision, A Modern Approach”, but I hope that it is self-contained enough to be useful without the book too.
Exercise 1.6
Show that when the camera coordinate system is skewed and the angle $\theta$ between the two image axes is not equal to 90 degrees, then Eq. (1.12) $$x = \alpha\hat{x} + x_0,$$ $$y = \beta\hat{y} + y_0$$
transforms into Eq. (1.13) $$x = \alpha\hat{x} - \alpha cot(\theta)\hat{y}+ x_0,$$ $$y = \dfrac{\beta}{sin(\theta)}\hat{y} + y_0$$
where

$x$ and $y$ are the pixel coordinates of the projection of a point in the scene onto the retina, $\hat{x}$ and $\hat{y}$ are the coordinates of the projection of a point in the scene onto the normalized image plane, $\alpha$ and $\beta$ are the pixel magnification factors along the $x$ and $y$ axes respectively, and $x_0$ and $y_0$ are the offsets of the image center from the origin of the camera coordinate system 1.

Figure: physical and normalized image coordinate systems, from Computer Vision A Modern Approach.

Solution Outline
Let the ideally aligned axes of the normalized image plane with zero skew represent the axes of the coordinate frame $Norm$ and the skewed axes represent the axes of the coordinate frame $Skew$.
Note that $Norm$ and $Skew$ share the bottom left corner by construction.
Figure 1(a): ideally aligned pixel grid ($Norm$). Figure 1(b): skewed pixel grid ($Skew$).
Exercise 1.6 then can be rephrased in the following manner.
Given the pixel coordinates of a point in frame $Norm$, prove that its pixel coordinates in frame $Skew$ is given by Eq. (1.13).
I always find that rephrasing a problem in multiple ways helps clarify the question further. So, yet another way of rephrasing the problem is as follows.
Given the pixel in the normalized image grid that is activated by a perspective ray from a point in the scene, prove that the pixel on the skewed pixel grid that is activated by the same ray is given by Eq. (1.13).
The crux of the solution is that the problem has two parts.
Firstly, a point on the image plane is represented in the camera coordinate system not in the normalized image plane coordinate system. Up until now, we didn’t have to note the difference because both were identical (cartesian). After the skew transformation, the camera coordinate system is different from the normalized image plane coordinate system. So we must learn how to represent a point in the skewed camera coordinate system in terms of its known coordinates in the normalized image plane system.
Secondly, we have not yet defined the relationship between the skewed pixel grid and the normalized image plane grid. How exactly did I generate the picture of the skewed image grid, you might ask? Good question. It turns out that there are multiple ways to transform the skewed pixel grid to a normalized grid with axes perpendicular to each other. This took quite some time to sink into my head, and it would have saved my head and the wall quite a bit of bother if the authors had mentioned upfront which transformation they were using. I say this because almost everyone else (Matlab 2, other books 3 4) uses a different transform and hence ends up with an equation that is different from Eq. (1.13), and thus a different calibration matrix!
I talk about the transformation from the skewed grid to the normalized grid because the skewed grid is the real image plane/retina, whereas the normalized image plane is something we have conjured up to make the math easier. However, as the intrinsic parameters $\alpha$ and $\beta$ are defined with respect to the normalized image plane in the book, we will use the inverse of this transform in our proof. So from now on, we'll talk about transforming the normalized grid to the skewed grid.
Finally, using the skewed frame coordinates we obtain in the first part together with the pixel dimensions we obtain based on the transformation we chose in the second part, we can find the pixel coordinates on the skewed grid, thus deriving Eq. (1.13).
Solution 1. Finding image coordinates of a point in the skewed frame.
I’m going to do this two ways. The first explanation is more elaborate but requires only basic knowledge of linear algebra and geometry. The second one is more straightforward but requires at least intermediate knowledge of linear algebra.
Using geometry and basic linear algebra
The coordinates of a point in a given coordinate frame is simply a linear combination of the frame axes. This means that the point can be represented as a weighted sum of $n$ axis vectors of the frame. In this case, we only have 2 axes as the point lies on the 2D image plane.
Using the parallelogram law of vector addition, we can geometrically represent the point in both the frames as the diagonal of the parallelogram formed by drawing vectors, parallel to the axes, to the point as shown in the following figures.
Figure 2(a): P as a linear combination of the cartesian axes vectors. Figure 2(b): P as a linear combination of the skewed axes vectors.
Just as the length of edges of the parallelogram in frame $Norm$, $\hat{x}$ and $\hat{y}$, are the coordinates of point P in frame $Norm$, the length of the edges of the parallelogram in frame $Skew$, $x_{skew}$ and $y_{skew}$, are the coordinates of the point P in frame $Skew$. Another way to think about this is that the length of the edges are the weights in the weighted sum (linear combination) of the axes.
Superimposing both these coordinate frames so that their origins coincide, we can use geometry and trigonometry to find the length of the edges of the parallelogram in frame $Skew$ (in blue and red) in terms of the coordinates of the point in frame $Norm$ (in cyan and orange).
Figure 3: $Skew$ coordinates in terms of $Norm$ coordinates.
As you can see
$$x_{skew} = \hat{x} - \hat{y}cot(\theta)$$ $$y_{skew} = \dfrac{\hat{y}}{sin(\theta)}$$
Changing the basis
This problem is equivalent to a change of basis problem 5, with the old basis being the standard basis for $R^2$
$$I = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$$
and the new skewed basis being
$$O = \begin{pmatrix}1 & cos(\theta)\\ 0 & sin(\theta)\end{pmatrix}.$$
If the coordinates of a point in the old basis are $(\hat{x}, \hat{y})$, then the coordinates of that point in the new basis will be
$$\begin{pmatrix}x_{skew} \\ y_{skew}\end{pmatrix} = O^{-1} I \begin{pmatrix}\hat{x} \\ \hat{y}\end{pmatrix}.$$
$$\implies \begin{pmatrix}x_{skew} \\ y_{skew}\end{pmatrix} = \begin{pmatrix}1 & -cot(\theta)\\ 0 & \frac{1}{sin(\theta)}\end{pmatrix} \begin{pmatrix}\hat{x} \\ \hat{y}\end{pmatrix}.$$
$$\implies \begin{pmatrix}x_{skew} \\ y_{skew}\end{pmatrix} = \begin{pmatrix}\hat{x} - \hat{y}cot(\theta) \\ \frac{\hat{y}}{sin(\theta)}\end{pmatrix}.$$
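The change-of-basis result is easy to verify numerically; the sketch below uses an illustrative angle and point:

```python
import numpy as np

theta = np.deg2rad(60.0)
O = np.array([[1.0, np.cos(theta)],
              [0.0, np.sin(theta)]])        # skewed basis vectors as columns

p_norm = np.array([2.0, 3.0])               # (x_hat, y_hat) in the old basis
p_skew = np.linalg.solve(O, p_norm)         # O^{-1} @ p_norm

# x_hat - y_hat*cot(theta) and y_hat / sin(theta)
expected = np.array([2.0 - 3.0 / np.tan(theta), 3.0 / np.sin(theta)])
print(p_skew, np.allclose(p_skew, expected))
```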
Just like that! *snaps fingers*.

2. Choosing the appropriate transformation
A hint as to what transform could be in play is in the use of the word “skew” to represent the state of the pixel grid whose axes are not quite perpendicular to each other. Skew is a synonym for shear and we know that the shear transform can be represented as
$$T_{shear} = \begin{pmatrix}1 & cot(\theta)\\ 0 & 1\end{pmatrix}.$$
This is what a sheared grid (in blue) corresponding to the normalized image grid (in black) looks like.
Figure 4: sheared pixel grid.
Notice that the blue and black horizontal lines overlap and the height of the pixel (slanted line) in the sheared grid is longer than the height of the pixel (vertical line) in the normalized grid. In fact, $$pixelHeight_{sheared} = \dfrac{pixelHeight_{normalized}}{sin(\theta)}$$ which proves that when $\theta < 90$ $$pixelHeight_{sheared} > pixelHeight_{normalized}$$
Figure 5: normalized and sheared pixel.
Now $$\beta = \dfrac{f}{pixelHeight}.$$ Hence as $f$ (the focal length) and the width of the pixels remain unaffected by the shear transformation, we get $$\beta_{sheared} = \beta_{normalized}sin(\theta) = \beta sin(\theta)$$ $$\alpha_{sheared} = \alpha_{normalized} = \alpha.$$
What if I don’t want the pixel dimensions and hence $\beta$ to change? The following matrix produces a “skew” without changing the dimensions of the pixel. I got this matrix by drawing out the normalized pixel and skewed pixel with the same width and height, then using trigonometry to derive the relationship between their x and y coordinates. $$T_{skew} = \begin{pmatrix}1 & cos(\theta)\\ 0 & sin(\theta)\end{pmatrix}$$
Figure 6: skewed pixel grid. Figure 7: normalized and skewed pixel.
Hence by construction $$\beta_{skewed} = \beta_{normalized} = \beta$$ $$\alpha_{skewed} = \alpha_{normalized} = \alpha.$$
Which is the correct transformation though? For this proof the appropriate transformation is $T_{skew}$, the one that preserves pixel dimensions. However, the answer in general is either/both. The normalized image plane is whatever you make it to be as long as its axes are perpendicular to each other and its left bottom aligns with the actual pixel grid. If you want to use the shear transform you would have to adjust the Eq. (1.13) (as we will discuss later).
3. Finding pixel coordinates of a point in the skewed frame.
Now, as

perspective projection proportionality holds even when the $X$ and $Y$ axes are not perpendicular to each other, the pixel dimensions remain unchanged by the chosen skew transformation (as proved in the previous section), and the focal length ($f$) and distance to the object ($Z$) remain unaffected by the skew,

Eq. (1.12) also holds for skewed coordinates. Hence
$$x = \alpha x_{skew} + x_0,$$ $$y = \beta y_{skew} + y_0.$$
Replacing $x_{skew}$ and $y_{skew}$ with their values in terms of the normalized image plane coordinates, we get
$$x = \alpha(\hat{x} - \hat{y}cot(\theta)) + x_0,$$ $$y = \beta\dfrac{\hat{y}}{sin(\theta)} + y_0$$
$$\implies$$
$$x = \alpha\hat{x} - \alpha cot(\theta)\hat{y}+ x_0,$$ $$y = \dfrac{\beta}{sin(\theta)}\hat{y} + y_0.$$
where $x$ and $y$ are pixel coordinates of the image point in the skewed coordinate frame. This is the same as Eq. (1.13).
Also, when $\theta = 90$ degrees, $cot(\theta) = 0$, $sin(\theta) = 1$ and the equation reduces to
$$x = \alpha\hat{x} + x_0,$$ $$y = \beta\hat{y} + y_0$$
This proves that Eq. (1.12) transforms into Eq. (1.13) when the angle $\theta$ between the image axes is not 90 degrees.
Pixel coordinates with the shear transform
When we use a shear transform to convert the normalized pixel grid to the skewed one, Eq. (1.13) becomes
$$x = \alpha\hat{x} - \alpha cot(\theta)\hat{y}+ x_0,$$ $$y = \dfrac{\beta_{sheared}}{sin(\theta)}\hat{y} + y_0.$$
Substituting the value of $\beta_{sheared}$ we derived in an earlier section we get
$$y = \dfrac{\beta sin(\theta)}{sin(\theta)}\hat{y} + y_0.$$ $$\implies$$ $$y = \beta\hat{y} + y_0.$$
So the pixel coordinates of a point in the skewed coordinate frame are given by $$x = \alpha\hat{x} - \alpha cot(\theta)\hat{y}+ x_0,$$ $$y = \beta\hat{y} + y_0$$
where $-\alpha cot(\theta)$ is also known as the skew coefficient $s$ in many places. This results in a calibration matrix that has the familiar form that we see in Matlab and other books too.
$$K = \begin{pmatrix}\alpha & s & x_0\\ 0 & \beta & y_0\\0 & 0 & 1\end{pmatrix}.$$
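As a sanity check (all values illustrative), applying this $K$ with $s = -\alpha cot(\theta)$ to normalized coordinates $(\hat{x}, \hat{y}, 1)$ reproduces $x = \alpha\hat{x} - \alpha cot(\theta)\hat{y} + x_0$ and the shear-transform result $y = \beta\hat{y} + y_0$:

```python
import numpy as np

alpha, beta = 800.0, 750.0         # illustrative magnification factors
x0, y0 = 320.0, 240.0              # illustrative principal point
theta = np.deg2rad(89.0)           # nearly perpendicular axes
s = -alpha / np.tan(theta)         # skew coefficient s = -alpha*cot(theta)

K = np.array([[alpha, s,    x0],
              [0.0,   beta, y0],
              [0.0,   0.0,  1.0]])

xh, yh = 0.1, -0.2                 # normalized image-plane coordinates
x, y, w = K @ np.array([xh, yh, 1.0])

# matches Eq. (1.13)'s x and the shear-transform y = beta*y_hat + y0
assert np.isclose(x, alpha * xh - alpha / np.tan(theta) * yh + x0)
assert np.isclose(y, beta * yh + y0)
print(x, y)
```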
Note that Matlab states the skew parameter as equivalent to $+\alpha cot(\theta)$. This sign change is because the origin is in the top left corner and the $y$ axis increases as we go down 6.

Should you care about axis skew?
Whew, that was a lot of work! You might be relieved to know that you don’t have to consider the axis skew for most modern cameras because the axes of modern CCD cameras are usually at $90^\circ$ with respect to each other.
Here’s an excerpt from the section “
Camera Intrinsics” on page 46 of the book Computer Vision Algorithms and Applications by Richard Szeliski.
Note that we ignore here the possibility of skew between the two axes on the image plane, since solid-state manufacturing techniques render this negligible.
And here are excerpts from the sections “Finite projective camera” and “When is s $\ne$ 0”, on pages 143 and 151 respectively, of the book Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman.
The skew parameter will be zero for most normal cameras. However, in certain unusual instances it can take non-zero values.
A true CCD camera has only four internal camera parameters, since generally s = 0. If s $\ne$ 0 then this can be interpreted as a skewing of the pixel elements in the CCD array so that the x- and y-axes are not perpendicular. This is admittedly very unlikely to happen.
In realistic circumstances a non-zero skew might arise as a result of taking an image of an image, for example if a photograph is re-photographed, or a negative is enlarged. Consider enlarging an image taken by a pinhole camera (such as an ordinary film camera) where the axis of the magnifying lens is not perpendicular to the film plane or the enlarged image plane.
In fact, OpenCV does away with the skew parameter altogether and its calibration matrix looks like
$$K = \begin{pmatrix}\alpha & 0 & x_0\\ 0 & \beta & y_0\\0 & 0 & 1\end{pmatrix}.$$
To sum up, you would need to account for axis skew when calibrating unusual cameras or cameras taking photographs of photographs; otherwise you can happily ignore the skew parameter.
Acknowledgments
Thanks to automoto for your invaluable review.
References

1. David A. Forsyth and Jean Ponce (2011), Computer Vision: A Modern Approach, Pearson.
2. Matlab, "Intrinsic Parameters", What Is Camera Calibration? https://www.mathworks.com/help/vision/ug/camera-calibration.html#bu0ni74
3. Richard Hartley and Andrew Zisserman (2000), Page 143, Multiple View Geometry in Computer Vision, Cambridge University Press.
4. Richard Szeliski (2011), Page 47, Computer Vision: Algorithms and Applications, Springer.
5. Wikipedia, "Change of basis". https://en.wikipedia.org/wiki/Change_of_basis
6. Jean-Yves Bouguet, "Important Convention", Description of the calibration parameters, Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
7. OpenCV, calibrateCamera(). https://docs.opencv.org/4.0.1/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d
Let $Y_1, Y_2, \ldots$ be independent $C(0,1)$ (standard Cauchy) random variables. Determine the limit distribution of
$Z_n = \dfrac{1}{n} \cdot \max\{Y_1, Y_2, \ldots, Y_n\}$ as $n \rightarrow \infty$.
Here is my approach:
$F_{Z_n}(x) = \mathbb{P}(Z_n \leq x) = \mathbb{P}\big(\dfrac{1}{n} \max\{Y_1, Y_2, \ldots, Y_n\} \leq x\big) = \mathbb{P}(Y_1 \leq xn,\, Y_2 \leq xn,\, \ldots,\, Y_n \leq xn) = \big(F_{Y}(xn)\big)^n$ (where $Y \sim C(0,1)$).
I have used that they are independent and identically distributed.
The next step is to find $F_Y(xn)$, which can be found by integrating the density of $C(0,1)$, that is, $F_Y(xn) = \dfrac{1}{\pi} \int_{-\infty}^{xn} \dfrac{dt}{1+t^2} = \dfrac{1}{\pi} \big( \arctan(xn) + \dfrac{\pi}{2} \big)$.
Here is where I am stuck; however, I know that $\dfrac{\pi}{2} = \arctan(u) + \arctan(1/u)$ (for $u > 0$) and that $\arctan(z) \approx z - \dfrac{z^3}{3} + \dfrac{z^5}{5} - \dots$
I tried to use these somehow to get some "nice" expression for $\dfrac{1}{\pi} \big( \arctan(xn) + \dfrac{\pi}{2} \big)$, but failed.
With "nice" i refer to the fact that $lim_{n \rightarrow \infty} \Big(F_{Y}(x\cdot n)\Big)^n $ could be recognized "easily"
Can someone give me a helping hand?
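A quick numeric sanity check (an illustration, not a proof) suggests where this is heading: for $x>0$, the identity above gives $1 - F_Y(nx) = \frac{1}{\pi}\arctan\frac{1}{nx} \approx \frac{1}{\pi n x}$, so one would guess $(F_Y(nx))^n \to e^{-1/(\pi x)}$, a Fréchet-type limit:

```python
import math

# Numeric check (illustration, not a proof): for x > 0 the guess is
# (F_Y(n*x))**n -> exp(-1/(pi*x)) as n -> infinity.
def cauchy_cdf(t):
    # CDF of the standard Cauchy distribution C(0, 1)
    return 0.5 + math.atan(t) / math.pi

x = 1.0
target = math.exp(-1.0 / (math.pi * x))
for n in (10**2, 10**4, 10**6):
    print(n, cauchy_cdf(n * x) ** n)
print("conjectured limit:", target)
```

The printed values visibly converge toward the conjectured limit as $n$ grows.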
Most tutorials about digital currencies (like Bitcoin) say that a hash function is a cryptographic function that takes an input and always returns a binary string of length $256$, such that a small change in the input results in a totally different hash.
Obviously, in the simplest interpretation, the hash function must be injective. However, since the length of input messages is unbounded while there are only $2^{256}$ binary strings of length $256$, it is obvious that an injective function cannot exist.
So, what is the correct interpretation? I have tried to formulate a possible interpretation, and I did google it to see if there's an actual mathematical definition of the hash function as in BitCoin tutorials, but I didn't have any luck.
Anyway, here's my interpretation:
Let $\mathcal{M}$ denote the set of messages. We can think of this set as binary strings of unlimited length, i.e. $$\mathcal{M} = \Large\{ \large(a_1,a_2,a_3,\cdots,a_n,\cdots) \large\mid a_i=0,1 (\forall i\in \mathbb{N})\Large\}$$ We can define a distance on this set as follows:
$$d(m_1,m_2) = \sum_{i=1}^{\infty} \frac{|a_i - b_i|}{2^i}$$ where $m_1 = (a_i)_{i \in \mathbb{N}}$ and $m_2 = (b_i)_{i \in \mathbb{N}}$.
Then the statement becomes something like this:
$$\exists \delta >0, \forall m_1\in \mathcal{M}: \,\,\mathbb{P}(\{{f(m_2)\, |\, d(m_1,m_2) > \delta \implies f(m_1) \neq f(m_2)\}}) > 1 - \frac{c(m_1)}{2^{256}}$$
Where $\mathbb{P}$ is the uniform distribution on the set of binary strings of length $256$ and $c$ is a constant such that $c(m_1) \ll 2^{256}$. For example, $c \leq 2^{10}$.
Are there any alternative interpretations? A formal definition perhaps?
The relevant cryptographic property of hash functions is that there is no feasible way of finding an input that gives a desired output other than pure brute force.
The fact that even a minor change in the input ought to give a complete change in the output is a consequence of this (rather than being a fundamental property of its own). Otherwise, if you got an output that was close to what you wanted, you could make small changes to the input, expect small changes to the output, and thus not have to brute force as much.
I have heard this property described by the term "irreversible".
Also, there is nothing special about 256-bit outputs. It's just that the currently most widely used cryptographic hash function (SHA-256) works that way.
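The "small change, totally different hash" behavior can be observed empirically with Python's standard `hashlib` (an illustration of the informal property, not of the formalization attempted above; the input strings are arbitrary):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Count the number of differing bits between two equal-length digests
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"message one").digest()
d2 = hashlib.sha256(b"message onf").digest()  # last byte of the input changed

print(len(d1) * 8)        # 256: the fixed output length in bits
print(bit_diff(d1, d2))   # typically around half of the 256 bits differ
```

For a well-behaved 256-bit hash, the expected Hamming distance between digests of distinct inputs is about 128 bits, which is exactly what makes "nearby" inputs useless for guessing a desired output.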
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau, whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the data points needed to calculate the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
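For what it's worth, the indexing convention can be demonstrated with a toy pure-Python version of "full" cross-correlation (the same convention `numpy.correlate` and `scipy.signal.correlate` use): two length-N series give 2N-1 output samples (hence the "400"-ish length above: 2·200−1 = 399), output sample i corresponds to lag i − (N−1), and the argmax of the result recovers the delay. The signals below are synthetic, not the strain/temperature data from the discussion:

```python
import random

random.seed(0)
N, true_lag = 200, 5
x = [random.gauss(0, 1) for _ in range(N)]
y = [0.0] * true_lag + x[:-true_lag]          # y is x delayed by 5 samples

def xcorr_full(a, b):
    # "Full" cross-correlation: one sample per lag from -(N-1) to N-1
    n = len(a)
    out = []
    for lag in range(-(n - 1), n):
        s = sum(a[i + lag] * b[i]
                for i in range(n) if 0 <= i + lag < n)
        out.append(s)
    return out

corr = xcorr_full(y, x)                       # peaks where y best matches shifted x
best = max(range(len(corr)), key=corr.__getitem__)
lag = best - (N - 1)                          # convert argmax index to a lag
print(len(corr), lag)                         # 399 samples; recovered lag is 5
```

So the answer to "how do I find the lead/lag" is: take the argmax of the correlation and subtract N−1 to convert the array index into a signed lag in samples (then multiply by the 1e-9 s sample spacing to get time).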
Related: Why don't we just ban homework altogether? / Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ And then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that $$\operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\cal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
Prove: If $|x+y|<|x|+|y|$, then $x<0$ or $y<0$
This looks as though it's true from the start. Take $x=-4, y=4$.
$|-4+4|<|-4|+|4|$
$0<8$ is true.
The question is asking for a proof by contradiction or by contrapositive, which means I am going to negate some part of the conclusion in order to find a contradiction with the hypothesis.
This is of the form $P \implies Q$.
So for a proof by contradiction I need:
$P \land \lnot Q$ (assume the hypothesis and the negation of the conclusion)
Or, for the contrapositive: $\lnot Q \implies \lnot P$.
Will the following proof work? Also, is my proof formal enough? What can be done to improve its form?
PF. (by contradiction)
Suppose $|x+y|<|x|+|y|$ and, for contradiction, that $x \geq 0 \land y \geq 0$.
Since $x \geq 0$ and $y \geq 0$, we have $|x+y| = x+y = |x|+|y|$.
This contradicts $|x+y|<|x|+|y|$, so the assumption is false, and therefore $x<0$ or $y<0$. $\blacksquare$
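Not a substitute for the proof, but a quick numeric check of the key fact it relies on (for nonnegative $x$ and $y$, the triangle inequality is an equality):

```python
# For nonnegative x and y, |x+y| = |x| + |y| always holds, which is why the
# strict inequality |x+y| < |x| + |y| forces x < 0 or y < 0.
for x in range(0, 100):
    for y in range(0, 100):
        assert abs(x + y) == abs(x) + abs(y)

# A witness for the original statement: strict inequality with a negative x
assert abs(-4 + 4) < abs(-4) + abs(4)
print("checks passed")
```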
DEFINITION - EXPONENTS
Exponents and Roots: Tips and hints
Exponents are a "shortcut" method of showing a number that was multiplied by itself several times. For instance, the number \(a\) multiplied by itself \(n\) times can be written as \(a^n\), where \(a\) represents the base, the number that is multiplied by itself \(n\) times, and \(n\) represents the exponent. The exponent indicates how many times to multiply the base, \(a\), by itself.

TIPS - EXPONENTS

1. Exponents one and zero:
\(a^0=1\) Any nonzero number to the power of 0 is 1.
For example: \(5^0=1\) and \((-3)^0=1\).
• Note: the case of \(0^0\) is not tested on the GMAT.
\(a^1=a\) Any number to the power 1 is itself.

2. Powers of zero:
If the exponent is positive, the power of zero is zero: \(0^n = 0\), where \(n > 0\).
If the exponent is negative, the power of zero (\(0^n\), where \(n < 0\)) is undefined, because division by zero is implied.

3. Powers of one:
\(1^n=1\) The integer powers of one are one.

4. Negative powers:
\(a^{-n}=\frac{1}{a^n}\)

Important: you cannot raise 0 to a negative power because you get division by 0, which is NOT allowed. For example, \(0^{-1} = \frac{1}{0}\) is undefined.

5. Powers of minus one:
If n is an even integer, then \((-1)^n=1\).
If n is an odd integer, then \((-1)^n =-1\).

6. Operations involving the same exponents:
Keep the exponent, multiply or divide the bases
\(a^n*b^n=(ab)^n\)
\(\frac{a^n}{b^n}=(\frac{a}{b})^n\)
\((a^m)^n=a^{mn}\)
\(a^{m^n}=a^{(m^n)}\) and not \((a^m)^n\) (if exponentiation is indicated by stacked symbols, the rule is to work from the top down).

7. Operations involving the same bases:
Keep the base, add or subtract the exponent (add for multiplication, subtract for division)
\(a^n*a^m=a^{n+m}\)
\(\frac{a^n}{a^m}=a^{n-m}\)

8. Fraction as power:
\(a^{\frac{1}{n}}=\sqrt[n]{a}\)
\(a^{\frac{m}{n}}=\sqrt[n]{a^m}\)

DEFINITION - ROOTS
Roots (or radicals) are the "opposite" operation of applying exponents. For instance, \(x^2=16\) and the square root of 16 is 4.

TIPS - ROOTS
General rules:
1. \(\sqrt{x}\sqrt{y}=\sqrt{xy}\) and \(\frac{\sqrt{x}}{\sqrt{y}}=\sqrt{\frac{x}{y}}\).
2. \((\sqrt{x})^n=\sqrt{x^n}\)
3. \(x^{\frac{1}{n}}=\sqrt[n]{x}\)
4. \(x^{\frac{n}{m}}=\sqrt[m]{x^n}\)
5. \({\sqrt{a}}+{\sqrt{b}}\neq{\sqrt{a+b}}\)
6. \(\sqrt{x^2}=|x|\), when \(x\leq{0}\), then \(\sqrt{x^2}=-x\) and when \(x\geq{0}\), then \(\sqrt{x^2}=x\).
7. When the GMAT provides the square root sign for an even root, such as \(\sqrt{x}\) or \(\sqrt[4]{x}\), then the only accepted answer is the positive root.
That is, \(\sqrt{25}=5\), NOT \(\pm 5\). In contrast, the equation \(x^2=25\) has TWO solutions, +5 and -5. Even roots have only a positive value on the GMAT.
8. Odd roots will have the same sign as the base of the root. For example, \(\sqrt[3]{125} =5\) and \(\sqrt[3]{-64} =-4\).

Please share your Exponents and Roots tips below and get kudos points. Thank you.
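As a quick self-check, several of the rules above can be verified numerically in Python (exact arithmetic via `Fraction` avoids floating-point surprises; the chosen bases and exponents are arbitrary):

```python
import math
from fractions import Fraction

a, b = Fraction(2), Fraction(3)
m, n = 5, 3

assert a**n * b**n == (a * b)**n            # tip 6: same exponent, multiply bases
assert a**n / b**n == (a / b)**n
assert (a**m)**n == a**(m * n)              # power of a power
assert a**n * a**m == a**(n + m)            # tip 7: same base, add exponents
assert a**n / a**m == a**(n - m)
assert 2**3**2 == 2**(3**2) == 512          # stacked exponents evaluate top-down
assert math.sqrt(16) * math.sqrt(9) == math.sqrt(16 * 9)   # roots tip 1
assert math.sqrt((-7)**2) == abs(-7)        # roots tip 6: sqrt(x^2) = |x|
print("all identities check out")
```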
I made up a cardinal property as follows.
Let $\kappa$ be $\lambda$-Monotonous iff: $$\forall S\in\mathcal{P}_{=\lambda}(\mathrm{V})\,\forall T\in\mathcal{P}_{=\kappa}(\mathrm{V})\,(S\subseteq T\rightarrow (S,\in)\prec (T,\in))$$ The intuition behind this is that sets $S$ of cardinality $\lambda$ are too similar to their supersets $T$ of cardinality $\kappa$ to tell apart. So far, I have determined that:
- No Monotonous cardinal $\kappa$ is $\lambda$-Monotonous for any $\lambda\geq\kappa$ (this one is fairly obvious)
- No infinite cardinal $\kappa$ is $\lambda$-Monotonous for any finite $\lambda$
- There are no cardinals which are $\aleph_0$-Monotonous; combined with the last two statements, this means $\aleph_0$ is not Monotonous at all
- Every worldly cardinal is not $\lambda$-Monotonous unless $\lambda$ is worldly
- Every successor of a worldly cardinal is not $\lambda$-Monotonous unless $\lambda$ is a successor of a worldly cardinal
- $\beth_{\omega}$ is not $\lambda$-Monotonous for any $\lambda$ a $\beth$ number (and thus GCH implies $\beth_\omega$ is not Monotonous at all)
Can you find anything else about these cardinals?
BTW, try using the Tarski–Vaught test maybe?
I would like to show Pillai's lower bound. Here we use $ \varphi^*(n) $ to denote $ \Phi(n) $ because of the famous log-star function ($ \log^* $) in complexity.
To get started, we need a special variant of Euler's totient function, $\hat\varphi$, which is $\varphi$ without considering 2, namely $\hat\varphi(x):=\begin{cases}\varphi(x), & x\text{ odd}\\ 2\cdot\varphi(x), & x\text{ even}\end{cases}$.
When $k$ becomes sufficiently large (in fact $k = \varphi^*(x)$ suffices), it is easy to see that $\varphi^k$ reaches $1$ and $\hat\varphi^k$ reaches some power of $2$. Let us denote it as $2^{\sigma(x)} \le 2^k = 2^{\varphi^*(x)}$. The inequality is obvious. Then we make a claim.
Claim. For every odd number $x$, there holds $x\le 3^{\sigma(x)}$. The equality holds iff $x$ is a power of $3$.
Proof of the claim. Obviously $\sigma(2)=\sigma(3)=1$ and $\sigma(3^k)=k$. It is also easy to check that $\sigma(pq)=\sigma(p)+\sigma(q)$, so we just need to deal with the prime numbers. By induction, consider a prime $p>3$. We know that $p-1\le 3^{\sigma(p-1)}=:t$, and the equality never holds (since $p-1$ is even, it is not a power of $3$), so $p-1<t$. Since $\sigma(p-1)=\sigma(p)$ for prime $p$, and noticing that $p\neq 3^{\sigma(p)}=t$, from $p-1<t$ and $p\neq t$ we get $p<t$.
Eventually the final step.
When $x$ is odd, $\varphi^*(x)\ge\sigma(x)+1\ge\log_3 x+1$.
When $x$ is even, we write $x=2^d\cdot c$ with $d\ge1$ and $c$ odd; then $\varphi^*(x)\ge \sigma(x)=d+\sigma(c)\ge d+\log_3 c\ge 1+\log_3\frac{x}{2}$. It is easy to check that the equality holds iff $x=2\cdot3^k$ for some $k$.
Here the proof ends.
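The final bound can be spot-checked numerically. The sketch below uses a plain trial-division totient and takes $\varphi^*(x)$ to be the number of iterations of $\varphi$ needed to reach $1$, as in the argument above (a verification sketch, not part of the original proof):

```python
import math

def phi(n):
    # Euler's totient via trial-division factorization
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def phi_star(n):
    # Number of iterations of the totient needed to reach 1
    k = 0
    while n > 1:
        n = phi(n)
        k += 1
    return k

# Lower bound: phi*(x) >= log_3(x) + 1 for odd x,
# and phi*(x) >= log_3(x/2) + 1 for even x
for x in range(2, 2000):
    bound = math.log(x, 3) + 1 if x % 2 else math.log(x / 2, 3) + 1
    assert phi_star(x) >= bound - 1e-9, x
print("bound holds for all x tested")
```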
The proof comes from a proof sketch by Deng Mingyang (moorhsum), an IMO 2019 gold medalist from China, posted on Zhihu (a Chinese version of Quora). I prove here the claim which the sketch does not contain. I am curious why one would construct such a $\hat\varphi$ and make such a claim.
Global Modeling of a Non-Maxwellian Discharge in COMSOL®
Global modeling of plasmas is a powerful approach to study large chemistry sets. In these models, the reactions are represented by rate coefficients. In particular, the rate coefficients of electron impact collisions depend on the electron energy distribution function (EEDF), which is often non-Maxwellian and can be computed from an approximation of the Boltzmann equation (BE). Here, we explain how to create a global model fully coupled with the BE in the two-term approximation using the COMSOL Multiphysics® software.
Setting Up a Global Model of a Non-Maxwellian Discharge
The equations in a global model are greatly simplified because the spatial information of the different quantities in the plasma reactor is treated as volume-averaged. Without the spatial derivatives, the numerical solution of the equation set becomes considerably simpler and the computational time is reduced. Consequently, this type of model is useful when investigating a broad region of parameters for plasmas with complex chemistries.
For a closed reactor without net mass creation at the surfaces, a mixture of k=1,\dotsc,Q species and j=1,\dotsc,N reactions is described by the mass fraction balance equations for Q-1 species
where V is the reactor volume, \rho is the mass density, w_k is the mass fraction of species k, A_l is the area of surface l, h_l is a correction factor of surface l, R_s is the surface rate expression of surface l, R_k is the volume rate expression for species k, and M_k is the molar mass.
The sum in the last term is over surfaces where species are lost or created. One of the species mass fractions is found from mass conservation.
The electron number density is obtained from the electron neutrality condition
where n_k is the number density and Z_k is the charge number. The electron energy density, n_{\varepsilon}, can be computed from
+\sum_l \sum_{ions} e h_l \frac{A_l}{V}R_{surf,k,l} N_A \left( \varepsilon_e + \varepsilon_i \right)
where n_{\varepsilon}=n_e \overline \varepsilon, \overline \varepsilon is the mean electron energy, P_{abs} is the power absorbed by the plasma, and e is the elementary charge. The last term on the right-hand side accounts for the kinetic energy transported to the surface by electrons and ions. The summation is over all positive ions and boundaries with surface reactions, \varepsilon_e is the mean kinetic energy lost per electron lost, \varepsilon_i is the mean kinetic energy lost per ion lost, and N_A is Avogadro’s number.
In the equations above, the source terms R_k and R_{\varepsilon} are computed using rate coefficients that represent the effect of collisions. In particular, for electron impact collisions, the rate coefficients depend on the EEDF, which is often a non-Maxwellian distribution and depends on the discharge conditions. In practice, the EEDF can be obtained by solving an approximation of the electron BE using fundamental collision cross-section data. Once the EEDF is known, the rate coefficients are computed by a suitable averaging of the electron impact cross sections over the EEDF.
Describing the Boltzmann Equation and Two-Term Approximation
The BE that describes the evolution of an ensemble of electrons in a six-dimensional phase space is
where f is the EEDF, \textbf{v} is the velocity coordinates, m is the electron mass, \textbf{E} is the electric field, \nabla_v is the velocity gradient operators, and C is the rate of change in f due to collisions.
Normally, a rather simplified BE is solved instead. It is assumed that the electric field and the collision probabilities are spatially uniform. The BE is then written in terms of spherical coordinates in the velocity space and f is expanded in spherical harmonics. The series is truncated after the second term, and the so-called two-term approximation of f is
where f_0 is the isotropic part of f, f_1 is an anisotropic perturbation, v is the magnitude of the velocity, \theta is the angle between the velocity and the field direction, and z is the position along this direction.
The problem is further simplified by solving only steady-state cases where the electric field and EEDF are either stationary or oscillate at a high frequency. The last piece of simplification consists of separating the energy dependence of the EEDF from its time and space dependence using
where F_{0,1} is an energy distribution function constant in time and space that verifies the following normalization
where \gamma=\sqrt{2e/m} and \varepsilon = \left( v / \gamma \right)^2.
Using the above-mentioned approximations and after some manipulations, the equation for F_0 can be written in the form of a 1D convection-diffusion-reaction equation
(For more details, see Ref. 1.)
This equation can be used to compute an EEDF, providing a set of electron collision cross sections and a reduced electric field, E/N (ratio of the electric field strength to the gas number density). Depending on the operating conditions, it might be necessary to include the effect of superelastic collisions and electron-electron collisions. Quite often, the input quantity of interest is the mean electron energy. In this case, a Lagrange multiplier is introduced to solve for the reduced electric field such that the equation below is satisfied.
Once the EEDF is computed, the rate coefficients needed for a plasma global model are computed from
where \sigma_k is the cross section of reaction k.
The figure below plots a computed EEDF obtained for argon at \overline{\varepsilon} = 5 eV and a corresponding Maxwellian. Note how the computed EEDF strongly deviates from a Maxwellian and how sharply it falls above the first excitation level of argon at 11.5 eV. In the same figure, the cross section for the excitation of the lumped level (corresponding to the first 4-s levels of argon) is plotted. With the information in this figure, the rate coefficient for the excitation of the 4-s levels can be computed.
Also important to note from this figure is that the computed EEDF and the cross section vary by several orders of magnitude in the overlapping region. As a consequence, a small variation in \overline{\varepsilon} (or E/N) causes a large change in the rate coefficients. This example is for argon, but the same behavior is found in many other gases, and it is one of the reasons why plasmas have very nonlinear behavior.
In a practical application, the BE in the two-term approximation can be solved to provide rate coefficients to a global model. In such cases, the EEDF is computed every time the input conditions for the BE have changed.
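As a rough illustration of this averaging (with a Maxwellian F0 and a made-up step cross section, i.e. not the computed EEDF or the real argon data from the figure), the rate coefficient integral can be evaluated numerically:

```python
import math

# Illustrative sketch only: a Maxwellian F0 at 5 eV mean energy, normalized so
# that  integral sqrt(eps) * F0(eps) d eps = 1,  and a hypothetical step
# excitation cross section with an 11.5 eV threshold. The averaging is
#   k = gamma * integral( eps * sigma(eps) * F0(eps) d eps ),  gamma = sqrt(2e/m)
E_CHARGE, E_MASS = 1.602176634e-19, 9.1093837015e-31
GAMMA = math.sqrt(2.0 * E_CHARGE / E_MASS)

T = 2.0 / 3.0 * 5.0                       # Maxwellian "temperature" in eV

def f0(eps):
    # Maxwellian energy distribution function, units eV^(-3/2)
    return 2.0 / math.sqrt(math.pi) * T ** -1.5 * math.exp(-eps / T)

def sigma(eps):
    # Hypothetical step cross section in m^2 (threshold at 11.5 eV)
    return 2e-21 if eps >= 11.5 else 0.0

de = 0.005                                # energy grid spacing, eV (grid up to 100 eV)
k = GAMMA * sum(eps * sigma(eps) * f0(eps) * de
                for eps in (i * de for i in range(1, 20001)))
print(f"k ~ {k:.2e} m^3/s")
```

Because only the exponential tail of F0 reaches past the threshold, a small change in the mean energy shifts k by orders of magnitude, which is the nonlinearity described above.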
Coupling the Global Model with the BE in the Two-Term Approximation
In this section, we show how to make a global model fully coupled with the BE in the two-term approximation using COMSOL Multiphysics. A three-step procedure is advised:
1. Create a global model where an analytic EEDF is used
2. Use the EEDF Initialization study to solve only for the EEDF
3. Solve the fully coupled problem

Creating an EEDF Initialization Study
After having the global model working for an analytic EEDF (step 1), you can decide to investigate further to see if the EEDF used is suitable for your needs. You can do this by using an EEDF Initialization study to solve the BE in the two-term approximation. This study solves the BE for the electron impact cross sections provided and a choice of the reduced electric field or the mean electron energy. This procedure is exemplified in the screenshots below. First, select the Boltzmann equation, two-term approximation (linear) option (or the Boltzmann equation, two-term approximation (quadratic) option) in the Electron Energy Distribution Function Settings section.
Then, set the Reduced electric field so that it's used in the solution of the EEDF.
At this stage, you can compare the computed EEDF and the rate coefficients with the ones you used in the global model in step 1 and assess if the model needs further improvements. If you decide to solve the fully coupled problem, add another study and use the solution of the EEDF Initialization study as the initial condition, as shown below. Using the solution from the EEDF Initialization study is a requirement.
Coupling the Global Model Equations and BE
The coupling between the global model equations and the BE can happen in two different ways, depending on whether you use the Local field approximation or the Local energy approximation to define the mean electron energy in the Plasma Properties section. When using the Local field approximation, the excitation of the system is given by a reduced field. This electric field can be constant (a parameterization can be made over E/N) or can come from the solution of an equation (e.g., a circuit equation). When using the Local energy approximation, the global model equation for the electron mean energy is solved and the power absorbed by the plasma needs to be set by the user. In this case, the E/N is found so that the equation below is satisfied.

Example: A Plasma Sustained by a Direct Current Voltage Source
As a practical example, we chose to model an argon plasma created within a 4-mm gap by a direct current (DC) voltage source of 1 kV in series with a 100-kΩ resistance at 100 mTorr. This model is inspired by Ref. 2. We emphasize that the model has no spatial description and that geometrical parameters and volume-averaged quantities are used to describe the plasma in the gap. The voltage applied to the plasma, V_p, comes from the circuit equation
where V_{dc} is the applied voltage and R is the circuit resistance.
The plasma current, I_p, is computed from
where A is the plasma cross-sectional area and \mu N is the reduced electron mobility.
Solving for E/N, we obtain
where d is the gap distance between electrodes.
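The circuit coupling above has a closed-form sketch: only V_dc, R, and d come from the model description, while the area, electron density, and mobility values below are placeholders chosen for illustration:

```python
# Sketch of the circuit coupling (illustrative numbers):
#   V_p = V_dc - R * I_p,   I_p = e * n_e * mu * (V_p / d) * A
# is linear in V_p, so it can be solved in closed form.
e = 1.602176634e-19
V_dc, R = 1000.0, 100e3          # 1 kV DC source, 100 kOhm series resistor
d = 4e-3                         # 4 mm gap
A = 1e-4                         # plasma cross-sectional area, m^2 (assumed)
n_e = 1e16                       # electron number density, m^-3 (assumed)
mu = 30.0                        # electron mobility, m^2/(V s) (assumed)

V_p = V_dc / (1.0 + R * e * n_e * mu * A / d)
E = V_p / d                      # electric field in the gap, V/m
print(f"V_p = {V_p:.1f} V, E = {E:.3g} V/m")
```

As the electron density grows during breakdown, the denominator grows, which reproduces the voltage drop across the gap described in the results section below.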
If we choose to use the Local field approximation, the equation for E/N above can be used directly in the EEDF Inputs section, as shown in the screenshot below.
If we choose to use the Local energy approximation, the power absorbed by the plasma can be defined as
in the Mean Electron Energy section, as in the screenshot below.
In this model, both approaches give very similar results, since the same electron energy loss/gain from collision events is accounted for in the BE and the mean electron energy equation, and because no energy losses to the wall are included in the mean electron energy equation.
In the figure below, the temporal evolution of the charged species and the reduced electric field are shown. Initially, there is no plasma in the gap and the electric field (black line, right axis) maintains a constant value. When breakdown starts to occur, there is a rapid increase of the charged carriers and the current flowing into the circuit, resulting in a voltage drop across the gap. After this transient regime, a steady state is reached where the plasma is sustained with a reduced electric field of only 4 Td.
The temporal evolution of the EEDF is presented below. Initially, the EEDF has a large population above 15 eV, as it is necessary to facilitate the plasma breakdown. After the plasma formation, and due to the decrease of the electric field, the electron population cools down and the EEDF develops a tail with a steeper slope. As time progresses, and with the increase of the argon excited-state density, the influence of the superelastic reactions in the EEDF becomes noticeable, with the appearance of a bump at the high-energy end. Note that the time variation is presented on a log scale in this animation.
Next Steps
To try the example featured in this blog post, click the button below. Doing so will take you to the Application Gallery, where, with a valid software license, you can download the MPH-file for the model in addition to the step-by-step documentation.
You can also read more about modeling plasma physics in the following blog posts:
Introduction to Plasma Modeling with Non-Maxwellian EEDFs
The Boltzmann Equation, Two-Term Approximation Interface
Electron Energy Distribution Function

References

1. G.J.M. Hagelaar and L.C. Pitchford, "Solving the Boltzmann equation to obtain electron transport coefficients and rate coefficients for fluid models," Plasma Sources Science and Technology, vol. 14, pp. 722–733, 2005.
2. S. Pancheshnyi, B. Eismann, G. Hagelaar, and L. Pitchford, "ZDPlasKin: A New Tool for Plasmachemical Simulations," The Eleventh International Symposium on High Pressure, Low Temperature Plasma Chemistry, 2008. |
Congruence Modulo 3 of Power of 2 Theorem
Let $n \in \Z_{\ge 0}$ be a non-negative integer.
Then:
$2^n \equiv \paren {-1}^n \pmod 3$
That is:
$\exists q \in \Z: 2^n = 3 q + \paren {-1}^n$

Proof
The proof proceeds by induction.
For all $n \in \Z_{\ge 0}$, let $\map P n$ be the proposition:
$2^n \equiv \paren {-1}^n \pmod 3$

$\map P 0$ is the case:

$2^0 = 1 = \paren {-1}^0$

so that:

$2^0 \equiv \paren {-1}^0 \pmod 3$

Thus $\map P 0$ is seen to hold.

Basis for the Induction
$\map P 1$ is the case:
$2^1 = 2 = 3 + \paren {-1} = 3 + \paren {-1}^1$

so that:

$2^1 \equiv \paren {-1}^1 \pmod 3$
Thus $\map P 1$ is seen to hold.
This is the basis for the induction.

Induction Hypothesis
Now it needs to be shown that, if $\map P k$ is true, where $k \ge 1$, then it logically follows that $\map P {k + 1}$ is true.
So this is the induction hypothesis:

$2^k \equiv \paren {-1}^k \pmod 3$

from which it is to be shown that:

$2^{k + 1} \equiv \paren {-1}^{k + 1} \pmod 3$

Induction Step
This is the induction step:
$2^{k + 1} = 2 \times 2^k = 2 \times \paren {3 q + \paren {-1}^k} = 3 \paren {2 q} + 2 \paren {-1}^k$

If $k$ is odd, this means:
$2^{k + 1} \equiv -2 \equiv 1 \equiv \paren {-1}^{k + 1} \pmod 3$

If $k$ is even, this means:
$2^{k + 1} \equiv 2 \equiv -1 \equiv \paren {-1}^{k + 1} \pmod 3$

So $\map P k \implies \map P {k + 1}$ and the result follows by the Principle of Mathematical Induction.

Therefore:

$\forall n \in \Z_{\ge 0}: 2^n \equiv \paren {-1}^n \pmod 3$
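As a quick numerical sanity check of the theorem (not part of the formal proof), the congruence can be verified for the first few values of $n$:

```python
# Numerical check of 2^n ≡ (-1)^n (mod 3) for the first hundred n.
for n in range(100):
    assert (2 ** n - (-1) ** n) % 3 == 0
```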
$\blacksquare$ |
Abbreviation: MA

A modal algebra is a structure $\mathbf{A}=\langle A,\vee,0,\wedge,1,\neg,\diamond\rangle$ such that

$\langle A,\vee,0,\wedge,1,\neg\rangle$ is a Boolean algebra
$\diamond$ is join-preserving: $\diamond(x\vee y)=\diamond x\vee \diamond y$
$\diamond$ is normal: $\diamond 0=0$
Remark: Modal algebras provide algebraic models for modal logic. The operator $\diamond$ is the possibility operator, and the necessity operator $\Box$ is defined as $\Box x=\neg\diamond\neg x$.
Let $\mathbf{A}$ and $\mathbf{B}$ be modal algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\to B$ that is a Boolean homomorphism and preserves $\diamond$:
$h(\diamond x)=\diamond h(x)$
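A standard source of modal algebras is the powerset algebra of a Kripke frame $(W, R)$, with $\diamond S = \{w : w\,R\,v \text{ for some } v \in S\}$. The small frame in the sketch below is an arbitrary illustration (not taken from this page); the assertions check normality and join-preservation.

```python
from itertools import combinations

# Hedged sketch: the powerset modal algebra of a small Kripke frame (W, R),
# where diamond(S) = {w : w R v for some v in S}.  The frame is arbitrary.
W = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}

def diamond(S):
    return frozenset(w for w in W if any((w, v) in R for v in S))

subsets = [frozenset(c) for r in range(len(W) + 1)
           for c in combinations(sorted(W), r)]

# normality: diamond of the bottom element (empty set) is bottom
assert diamond(frozenset()) == frozenset()

# join-preservation: diamond(x v y) = diamond(x) v diamond(y)
for x in subsets:
    for y in subsets:
        assert diamond(x | y) == diamond(x) | diamond(y)
```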
Example 1:
Classtype: variety
Equational theory: decidable
Quasiequational theory: decidable
First-order theory: undecidable
Locally finite: no
Residual size: unbounded
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: yes, $n=2$
Congruence regular: yes
Congruence uniform: yes
Congruence extension property: yes
Definable principal congruences: no
Equationally def. pr. cong.: no
Discriminator variety: no
Amalgamation property: yes
Strong amalgamation property: yes
Epimorphisms are surjective: yes
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$ |
Natural number: the numbers generally used in day-to-day life for counting are termed natural numbers. They are also referred to as "counting" numbers, $\mathbb{N} = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, \dots$

Even number: a number divisible by 2 without remainder, denoted $2\mathbb{N} = 0, 2, 4, 6, 8, \dots$

Odd number: a number that leaves a remainder when divided by 2, denoted $2\mathbb{N}+1 = 1, 3, 5, 7, 9, \dots$

Prime number: a number greater than 1 divisible without remainder only by 1 and itself, denoted $\mathbb{P} = 2, 3, 5, 7, \dots$

Integer: signed whole numbers, $\mathbb{I} = (-I, 0, +I)$, i.e. $I < 0$, $I = 0$ or $I > 0$

Fraction: $\frac{a}{b}$

Complex number: a number made up of a real and an imaginary part, $Z = a + ib = \sqrt{a^2+b^2}\,\angle \tan^{-1}\frac{b}{a}$

Imaginary number: $i = \sqrt{-1}$, e.g. $i9$
Mathematical Operations on arithmetic numbers
Mathematical Operation | Symbol | Example
Addition | $A + B = C$ | $2 + 3 = 5$
Subtraction | $A - B = C$ | $2 - 3 = -1$
Multiplication | $A \times B = C$ | $2 \times 3 = 6$
Division | $\frac{A}{B} = C$ | $\frac{2}{3} \approx 0.667$
Exponentiation | $A^n = C$ | $2^3 = 2 \times 2 \times 2 = 8$
Root | $\sqrt{A} = C$ | $\sqrt{9} = 3$
Logarithm | $\log A = C$ | $\log 100 = 2$
Natural Logarithm | $\ln A = C$ | $\ln 9 \approx 2.2$

Example [ edit ]: $ax$, $y^2$

Arithmetic Expression [ edit ]
The order of performing mathematical operations on an expression is as follows:

Parentheses: {}, [], ()
Powers and roots
Multiplication and division: ×, /
Addition and subtraction: +, -
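Python's expression evaluator follows the same precedence rules, which gives a quick way to illustrate them:

```python
# Python uses the same precedence: parentheses, then powers,
# then multiplication/division, then addition/subtraction.
assert (2 - 3) ** 2 + 3 == 4     # parentheses evaluated before the power
assert 2 - 3 ** 2 + 3 == -4      # the power binds tighter than +/-
assert 2 + 3 * 4 == 14           # multiplication before addition
```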
Example
$(x - y)^2 + y = z$

$x + y^2 = 6$

Cartesian Coordinate [ edit ] Polar Coordinate [ edit ]
Real Number Coordination [ edit ]
A point $A$ can be represented as $(X, Y)$ in the XY (Cartesian) coordinate system and as $(R, \theta)$ in the R-θ (polar) coordinate system.

Conversion between the two representations:

$R\angle\theta = \sqrt{X^2+Y^2}\,\angle \tan^{-1}\frac{Y}{X}$

$R = \sqrt{X^2+Y^2}$, $\theta = \tan^{-1}\frac{Y}{X}$

$X(\theta) = R\cos\theta$, $Y(\theta) = R\sin\theta$

Complex Number Coordination [ edit ]
A complex number $Z$ and its conjugate $Z^*$ can be written as $Z: (X, jY)$ or $(Z, \theta)$, and $Z^*: (X, -jY)$ or $(Z, -\theta)$, where

$X(\theta) = Z\cos\theta$, $jY(\theta) = jZ\sin\theta$

$Z\angle\theta = \sqrt{X^2+Y^2}\,\angle \tan^{-1}\frac{Y}{X}$, with $Z = \sqrt{X^2+Y^2}$ and $\theta = \tan^{-1}\frac{Y}{X}$

$Z(\theta) = X(\theta) + jY(\theta) = Z(\cos\theta + j\sin\theta)$

$Z^*(\theta) = X(\theta) - jY(\theta) = Z(\cos\theta - j\sin\theta)$

$Z\cos\theta = \frac{Z(\theta)+Z^*(\theta)}{2}$, $Z\sin\theta = \frac{Z(\theta)-Z^*(\theta)}{2j}$
Arithematic Function [ edit ] Definition [ edit ]
A function is an arithmetical expression that relates two variables. A function is denoted as
f ( x ) = y {\displaystyle f(x)=y}
meaning that for any value of $x$ there is a corresponding value $y = f(x)$,
Where
x: the independent variable
y: the dependent variable
f(x): the function of x

Graph of function [ edit ]
f ( x ) = x {\displaystyle f(x)=x}
x: -2, -1, 0, 1, 2
f(x): -2, -1, 0, 1, 2

A straight line passing through the origin (0,0) with slope equal to 1.
f ( x ) = 2 x {\displaystyle f(x)=2x}
x: -2, -1, 0, 1, 2
f(x): -4, -2, 0, 2, 4

A straight line passing through the origin (0,0) with slope equal to 2.
f ( x ) = 2 x + 3 {\displaystyle f(x)=2x+3}
x: -2, -1, 0, 1, 2
f(x): -1, 1, 3, 5, 7

A straight line with slope equal to 2, x-intercept (-3/2, 0) and y-intercept (0, 3).

Types of Functions [ edit ] Mathematical operations on function [ edit ]
An arithmetic equation is an expression of a function of a variable set equal to zero:
f ( x ) = 0 {\displaystyle f(x)=0}
Arithmetic equations can be solved to find the value of the variable that satisfies the equation. The process of finding this value is called root finding. All values of the variable that make the function equal to zero are called roots of the equation.
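Root finding can be done numerically as well. The sketch below (an illustration, not from the original notes) applies bisection to $f(x) = 2x + 5 - 9$, i.e. the equation $2x + 5 = 9$ rearranged so that its root makes $f$ zero:

```python
# Illustrative root finding by bisection: repeatedly halve a bracketing
# interval [lo, hi] on which f changes sign.
def bisect(f, lo, hi, tol=1e-12):
    assert f(lo) * f(hi) <= 0, "the root must be bracketed by [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: 2 * x + 5 - 9, 0.0, 10.0)   # root of 2x + 5 = 9
```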
Equation: $2x + 5 = 9$

Root: $x = \frac{9-5}{2} = \frac{4}{2} = 2$

$x = 2$ is the root of the equation, since substituting this value of $x$ into the equation gives $2(2) + 5 = 9$.

Types of equations [ edit ] |
Preprints (rote Reihe) of the Department of Mathematics (Fachbereich Mathematik). Year of publication: 1996 (22 entries).
301
We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E) x Aut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We give effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and that this quotient is in addition a geometric quotient on the set of stable homomorphisms.
293
Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
274
This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
275
283
A regularization Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems (1996)
The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature.
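For readers unfamiliar with the method, a basic Levenberg-Marquardt iteration for a finite-dimensional least-squares problem $F(x) \approx y$ solves $(J^T J + \lambda I)\,\delta x = J^T (y - F(x))$ at each step. The sketch below is illustrative only: it uses a fixed damping parameter and a toy exponential-fit problem, not the adaptively chosen parameter and infinite-dimensional setting analysed in the paper.

```python
import numpy as np

# Hedged sketch of a basic Levenberg-Marquardt iteration: at each step solve
# (J^T J + lam*I) dx = J^T (y - F(x)).  The fixed lam is illustrative; the
# paper chooses it from an inexact Newton strategy.
def levenberg_marquardt(F, J, x, y, lam=1e-3, iters=100):
    for _ in range(iters):
        r = y - F(x)                     # current residual
        Jx = J(x)                        # Jacobian at the current iterate
        dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), Jx.T @ r)
        x = x + dx
    return x

# toy problem: recover the decay rate in y_i = exp(-x * t_i)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
F = lambda x: np.exp(-x[0] * t)
J = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)
x_true = np.array([1.5])
x_hat = levenberg_marquardt(F, J, np.array([0.3]), F(x_true))
```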
277
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
280
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
270
276
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \Pr(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots, G^{(i)}, \dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
282
Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically distributed spherically symmetrically in \(\mathbb{R}^n\). Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
279
It is shown that Tikhonov regularization for an ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation. |
The classic middle-thirds Cantor set can be generalized to a middle-$\alpha$-th Cantor set, for $0<\alpha<1$. It can be formed by removing the open middle $\alpha$-th from the unit interval, and continuing to remove the middle $\alpha$-th from each of the remaining intervals ad infinitum.
It can be shown by induction that at the $k$-th level of this construction process, there are $2^k$ disjoint sub-intervals. Further, each sub-interval at the $k$-th level has length $\left(\frac{1-\alpha}{2}\right)^k$.
With this knowledge, we can
intuitively find the Hausdorff dimension of this generalized Cantor set to be $$0 < \frac{1}{1-\lg(1-\alpha)} < 1.$$
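This formula is easy to sanity-check numerically. The sketch below confirms that the middle-thirds case $\alpha = 1/3$ recovers the familiar value $\log 2/\log 3 \approx 0.63$:

```python
from math import log, log2

# The dimension formula quoted above, with lg the base-2 logarithm:
#   dim = 1 / (1 - lg(1 - alpha)),
# which is equivalent to the standard expression log 2 / log(2 / (1 - alpha)).
def cantor_dim(alpha):
    return 1.0 / (1.0 - log2(1.0 - alpha))

d = cantor_dim(1.0 / 3.0)               # middle-thirds Cantor set
assert abs(d - log(2) / log(3)) < 1e-12
```

Note that the dimension decreases toward 0 as $\alpha \to 1$ (more removed) and increases toward 1 as $\alpha \to 0$, matching the bounds stated above.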
But this was easy because we are always removing a
fixed percentage of the remaining intervals at each level. That is, at the $k$-th level we always remove the middle $\alpha$-th from an interval of length $\left(\frac{1-\alpha}{2}\right)^k$, i.e. we remove $\alpha\cdot\left(\frac{1-\alpha}{2}\right)^k$.
A variation of this is the SVC set, where we remove
smaller percentages of the remaining intervals at each level, to get a sort of fat Cantor set.
My question is what happens if we decide to remove
larger percentages of the remaining intervals at each level, to get a sort of skinny Cantor set? I find no results on Google about these kinds of Cantor sets. Specifically, I'd like to find their dimension, but since I have no formal education in topology, I cannot work this out by hand. Any accessible information on them would be pretty cool.
As an example, let's say we begin with the interval $[0,1]$, and let's say we take out the open middle third so that our next level is $\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3},1\right]$. Now, instead of removing the middle third from $\left[0,\frac{1}{3}\right]$, let's say we remove the middle-$\left(\frac{1}{3}+\varepsilon\cdot\frac{1}{3}\right)$. That is, we removed an $\varepsilon>0$ percent larger interval than what we would have normally.
The motivation for this question comes from investigating the logistic map $f_r(x) = rx(1-x)$ for when $r > 4$. According to Richard Holmgren in his book
A First Course in Discrete Dynamical Systems, the points inside $[0,1]$ whose $f_r$-iterations remain in $[0,1]$ forever form a Cantor set.
Upon further investigation it looks like this set is of the
skinny variety described above, but I can't find the factor that we increase our removals by! |
Abbreviation:
ApGrp
An Abelian $p$-group is a $p$-group $\mathbf{A}=\langle A, +, -, 0\rangle$ such that
$+$ is commutative: $x+y=y+x$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be Abelian $p$-groups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x + y)=h(x) + h(y)$
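As a concrete illustration (not from this page), $\mathbb{Z}/8$ under addition mod 8 is an Abelian 2-group: addition commutes and every element has order a power of 2. A quick computational check:

```python
# Illustrative example: Z/8 under addition mod 8 as an Abelian 2-group.
n = 8
elems = range(n)

def order(x):
    # additive order of x in Z/n: smallest k with k*x ≡ 0 (mod n)
    k, acc = 1, x % n
    while acc != 0:
        acc = (acc + x) % n
        k += 1
    return k

# commutativity of the group operation
assert all((x + y) % n == (y + x) % n for x in elems for y in elems)
# every element's order is a power of 2 (k & (k-1) == 0 tests this)
assert all(order(x) & (order(x) - 1) == 0 for x in elems)
```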
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Boolean groups]] [[P-groups]] |
Please read this introduction first before looking through the solutions. Here’s a quick index to all the problems in this section.
Composing two general euclidean transformations, we get a transformation of the form below. $$\begin{pmatrix} a_{11}b_{11} - a_{12}b_{12}k & a_{12}b_{11}k + a_{11}b_{12} & a_{11}b_{13} + a_{12}b_{23} + a_{13}b_{33} \\ -(a_{11}b_{12}k + a_{12}b_{11}) & a_{11}b_{11} - a_{12}b_{12}k & a_{11}b_{23}k - a_{12}b_{13}k + a_{23}b_{33} \\ 0 & 0 & a_{33}b_{33} \end{pmatrix}$$
This is clearly a similarity transformation.
Next, let’s consider the fraction relevant to euclidean transformations $$\frac{(a_{11}b_{11} - a_{12}b_{12}k)^2 + (a_{11}b_{12}k + a_{12}b_{11})^2}{a^2_{33}b^2_{33}}$$
Expanding this we get $$\frac{(a_{11}^2 + a_{12}^2)(b_{11}^2 + b_{12}^2)}{a_{33}^2b_{33}^2}$$ $$= \frac{(a_{11}^2 + a_{12}^2)}{a_{33}^2}\frac{(b_{11}^2 + b_{12}^2)}{b_{33}^2}$$
As the value of each of these terms is 1, the value of the whole fraction will also be 1. Thus the euclidean transformations are closed under the operation of composition.
The inverse of a general euclidean transformation has the form $$\frac{1}{a_{33}k(a_{11}^2 + a_{12}^2)}\begin{pmatrix} a_{11}a_{33}k & -a_{12}a_{33} & a_{12}a_{23} - a_{11}a_{13}k \\ a_{12}a_{33}k & a_{11}a_{33} & -(a_{12}a_{13}k + a_{11}a_{23}) \\ 0 & 0 & k(a_{11}^2 + a_{12}^2) \end{pmatrix}$$
Let’s consider the fraction relevant to euclidean transformations $$\frac{k^2a_{33}^2(a_{11}^2 + a_{12}^2)}{k^2(a_{11}^2 + a_{12}^2)^2}$$ $$= \frac{a_{33}^2}{(a_{11}^2 + a_{12}^2)}$$
The value of this is clearly 1. Hence the inverse is also a euclidean transformation.
Finally, it is obvious, from the rules of matrix multiplication, that the composition of euclidean transformations is associative and that the identity is included among the euclidean transformations.
Hence the set of all euclidean transformations is a group under the operation of composition.
Type I
$a_{11} \ne 0$, $a_{12} \ne 0$ and $a_{11}^2 + a_{12}^2 = a_{33}^2$.
$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{12} & -a_{11} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}$
Type II
$a_{23} \ne 0$
$\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & -a_{11} & a_{23} \\ 0 & 0 & -a_{11} \end{pmatrix}$
Type III
$\begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{11} & 0 \\ 0 & 0 & -a_{11} \end{pmatrix}$
Type IV
As a euclidean transformation is also a similarity and there is no similarity transformation of type IV, there can be no euclidean transformation of type IV.
Type V
$a_{13} \ne 0, a_{23} \ne 0$
$\begin{pmatrix} a_{11} & 0 & a_{13} \\ 0 & a_{11} & a_{23} \\ 0 & 0 & a_{11} \end{pmatrix}$
Type VI
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
The entire field of trigonometry attests to the fact that the angle made by two lines is a function of the lengths of the sides of a right triangle that contains the angle. Hence, as euclidean transformations preserve distances, they also preserve angles, which further implies that euclidean transformations are similarities.
This means the matrix of transformation for a euclidean transformation by Theorem 1, Sec. 5.11 takes the form $$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ -ka_{12} & ka_{11} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix}$$
where $a_{ij}$ real, $k = \pm1$.
By the result of Exercise 9, Sec. 5.11, we know that the length of a general segment is transformed by a factor of $\sqrt{a_{11}^2 + a_{12}^2}$ under a similarity transform with $a_{33} = 1$. Combining this with the fact that this factor is 1 in a euclidean transformation, we have
$(\frac{a_{11}}{a_{33}})^2 + (\frac{a_{12}}{a_{33}})^2 = 1$ $\implies a_{11}^2 + a_{12}^2 = a_{33}^2$
All these taken together form Theorem 1.
4. Show that every euclidean transformation induces an involution on the ideal line which is the identity, or has $I:(1,i,0)$ and $J:(1,-i,0)$ as its fixed points, or has $I$ and $J$ as a pair of mates. Determine the geometric nature of the euclidean transformations with each of these properties.
I might be wrong here but I think this question is incorrect for two reasons.
1. Not every euclidean transformation induces an involution on the ideal line.
2. The identity is not an involution; it is a transformation of period 1.
I think the question should instead be as follows.
Show that every euclidean transformation that induces an involution on the ideal line has $I:(1,i,0)$ and $J:(1,-i,0)$ as its fixed points, or has $I$ and $J$ as a pair of mates. Determine the geometric nature of the euclidean transformations with each of these properties.
We know that a euclidean transformation is affine and hence leaves the ideal line invariant. Furthermore, as the last coordinate of every point on the ideal line is $0$, the projectivity induced on the ideal line is given by
$$\begin{pmatrix} a_{11} & a_{12} \\ -ka_{12} & ka_{11} \end{pmatrix}$$
with $(1, 0, 0)$ and $(0, 1, 0)$ as the base points.
From Theorem 5, Sec. 4.8, we know that this projectivity can be an involution only if $a_{11} + ka_{11} = 0$. For this to be true, either $a_{11} = 0$ or $k = -1$.
If $k = -1$, the transformation is an indirect similarity and from Exercise 4, Sec. 5.11 we know that an indirect similarity interchanges the circular points at infinity. This will be a rotation about the origin followed by a translation and a reflection.
If $a_{11} = 0$ and $k = 1$, the transformation is a direct similarity and again from Exercise 4, Sec. 5.11 we know that a direct similarity leaves the circular points at infinity invariant. This will be a rotation of $90^\circ$ about the origin followed by a translation.
5. Show that the composition of two reflections in parallel lines is a translation in the direction perpendicular to the two lines.
Without loss of generality, let the two lines be $y = 0$ and $y = d$. Then the composition of the matrices of reflections in these lines will be
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 2d \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2d \\ 0 & 0 & 1 \end{pmatrix}$$
This is clearly a translation in the direction parallel to the $y$ axis which is perpendicular to the direction of the lines chosen as the axis of reflection.
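A quick numerical check of the computation above, using plain 3×3 matrix multiplication and the illustrative choice $d = 3$:

```python
# Verify that the two reflection matrices above compose to the stated
# translation, for the illustrative value d = 3.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

d = 3
refl_1 = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]       # reflection in the first line
refl_2 = [[1, 0, 0], [0, -1, 2 * d], [0, 0, 1]]   # reflection in the parallel line

assert matmul(refl_1, refl_2) == [[1, 0, 0], [0, 1, -2 * d], [0, 0, 1]]
```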
6. Show that the composition of two reflections in lines which intersect in a finite point is a rotation about the point of intersection of the lines.
Without loss of generality, we can take the point of intersection of the lines to be the origin. This makes both transformations Householder reflections[1], implying that $a_{13} = a_{23} = b_{13} = b_{23} = 0$ and $k = -1$ for both transformations. Taking $a_{11} = cos(\theta), a_{12} = sin(\theta), b_{11} = cos(\omega), b_{12} = sin(\omega)$, the composition of the two reflections will be
$$\begin{pmatrix} cos(\theta) & sin(\theta) & 0 \\ sin(\theta) & -cos(\theta) & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} cos(\omega) & sin(\omega) & 0 \\ sin(\omega) & -cos(\omega) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ $$= \begin{pmatrix} sin(\theta)sin(\omega) + cos(\theta)cos(\omega) & cos(\theta)sin(\omega) - sin(\theta)cos(\omega) & 0 \\ sin(\theta)cos(\omega) - cos(\theta)sin(\omega) & sin(\theta)sin(\omega) + cos(\theta)cos(\omega) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ $$ = \begin{pmatrix} cos(\omega - \theta) & sin(\omega - \theta) & 0 \\ -sin(\omega - \theta) & cos(\omega - \theta) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
which is a rotation of $\omega - \theta$ around the origin as asserted.
Reflection about a point is equivalent to a rotation of $180^\circ$ about the point[2]. Using the form of the matrix for a rotation of $180^\circ$ about an arbitrary point that we derived in Exercise 10, Sec. 5.11, we can write the composition of two reflections in distinct points as follows. $$\begin{pmatrix} -1 & 0 & a_{13} \\ 0 & -1 & a_{23} \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} -1 & 0 & b_{13} \\ 0 & -1 & b_{23} \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & a_{13} - b_{13} \\ 0 & 1 & a_{23} - b_{23} \\ 0 & 0 & 1 \end{pmatrix}$$
which is clearly a translation in the direction determined by the centers of reflection/rotation.
As euclidean transformations are a subgroup of similarity transformations and as the type of similarities of period 2 are also euclidean, those are the only type of euclidean transformations of period 2.
As per Exercise 10, Sec. 5.11, these are rotations of an angle of $180^\circ$ about an arbitrary point and reflection about an arbitrary axis.
A euclidean transformation is the composition of rotations, translations and reflections. As translations and reflections can't have period 3, a euclidean transformation of period 3 must be a rotation. Rotations of period 3 are rotations of $n \cdot 120^\circ$, where $n$ is not a multiple of 3.
For two general euclidean transformations with matrices $A$ and $B$ to commute, the following must hold:
$k = 1$
$a_{12}b_{23} + a_{11}b_{13} + a_{13} = b_{12}a_{23} + b_{11}a_{13} + b_{13}$
$a_{11}b_{23} - a_{12}b_{13} + a_{23} = b_{11}a_{23} - b_{12}a_{13} + b_{23}$
For examples of commuting euclidean transformations, please see Commuting Isometries.
References
Wikipedia. Householder Transformation. https://en.wikipedia.org/wiki/Householder_transformation.
Wikipedia. Point Reflections, Examples. https://en.wikipedia.org/wiki/Point_reflection#Examples.
Posets help us talk about resources: the partial ordering \(x \le y\) says when you can use one resource to get another. Monoidal posets let us combine or 'add' two resources \(x\) and \(y\) to get a resource \(x \otimes y\). But closed monoidal posets go further and let us 'subtract' resources! And the main reason for subtracting resources is to answer questions like
If you have \(x\), what must you combine it with to get \(y\)?
When dealing with money the answer to this question is called \(y - x\), but now we will call it \(x \multimap y\). Remember, we say a monoidal poset is closed if for any pair of elements \(x\) and \(y\) there's an element \(x \multimap y\) that obeys the law
$$ x \otimes a \le y \text{ if and only if } a \le x \multimap y .$$ This says roughly "if \(x\) combined with \(a\) is no more than \(y\), then \(a\) is no more than what you need to combine with \(x\) to get \(y\)". Which sounds complicated, but makes sense on reflection.
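The money interpretation can be made concrete: take the poset to be the integers with the usual $\le$, let $x \otimes a = x + a$, and define $x \multimap y = y - x$. A small sketch (my own encoding; `tensor` and `hom` are my names) checks the law on random values:

```python
import random

# Money example: the poset is (Z, <=), x (x) a = x + a, and x -o y = y - x.
def tensor(x, a):
    return x + a

def hom(x, y):  # x -o y
    return y - x

random.seed(0)
for _ in range(1000):
    x, y, a = (random.randint(-100, 100) for _ in range(3))
    # the defining law:  x (x) a <= y  if and only if  a <= x -o y
    assert (tensor(x, a) <= y) == (a <= hom(x, y))
```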
One reason for using funny symbols like \(\otimes\) and \(\multimap\) is that they don't have strongly pre-established meanings. Sometimes they mean addition and subtraction, but sometimes they will mean multiplication and division.
For example, we can take any set and make it into a poset by saying \(x \le y\) if and only if \( x = y\): this is called a discrete poset. If our set is a monoid we can make it into a monoidal poset where \( x \otimes y \) is defined to be \(x y\). And if our set is a group we can make it into a closed monoidal poset where \(x \multimap y\) is \(y\) divided by \(x\), or more precisely \(x^{-1} y\), since we have
$$ x a = y \text{ if and only if } a = x^{-1} y .$$ I said last time that if \(\mathcal{V}\) is a closed monoidal poset we can make it into a \(\mathcal{V}\)-enriched category where:
the objects are just elements of \(\mathcal{V}\)
for any \(x,y \in \mathcal{V}\) we have \(\mathcal{V}(x,y) = x \multimap y\).
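The group example above can likewise be checked mechanically. Here is a sketch (my own encoding) using the additive group $\mathbb{Z}/12$ as a discrete poset, so $\le$ is just $=$, the product is addition mod 12, and $x \multimap y = x^{-1}y$ becomes $(y - x) \bmod 12$:

```python
# Discrete-poset sketch of the group example, using the additive group Z/12,
# so "xy" is (x + y) % 12 and x^{-1} y is (y - x) % 12.
n = 12

def tensor(x, y):
    return (x + y) % n

def hom(x, y):  # x -o y  =  x^{-1} y
    return (y - x) % n

for x in range(n):
    for y in range(n):
        for a in range(n):
            # in a discrete poset <= is ==, so the closedness law reads:
            # x a = y  if and only if  a = x^{-1} y
            assert (tensor(x, a) == y) == (a == hom(x, y))
```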
This is the real reason for the word 'closed': \(\mathcal{V}\) becomes a category enriched over itself, so it's completely self-contained, like a hermit who lives in a cave and never talks to anyone but himself.
Let's show that we get a \(\mathcal{V}\)-enriched category this way.
Theorem. If \(\mathcal{V}\) is a closed monoidal poset, then \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category as above. Proof. If you look back at the definition of enriched category you'll see we need to check two things:
a) For any object \(x\) of \(\mathcal{V}\) we need to show
$$ I\leq\mathcal{V}(x,x) .$$ b) For any objects \(x,y,z\) of \(\mathcal{V}\) we need to show
$$ \mathcal{V}(x,y)\otimes\mathcal{V}(y,z)\leq\mathcal{V}(x,z). $$ I bet these are follow-your-nose arguments that we can do without really thinking. For a) we need to show
$$ I \leq x \multimap x $$ but by the definition of 'closed' this is true if and only if
$$ x \otimes I \le x $$ and this is true, since in fact \(x \otimes I = x\). For b) we need to show
$$ (x \multimap y) \otimes (y \multimap z) \leq x \multimap z $$ but by the definition of closed this is true if and only if
$$ x \otimes (x \multimap y) \otimes (y \multimap z) \le z.$$ Oh-oh, this looks complicated! But don't worry, we just need this lemma:
Lemma. In a closed monoidal poset we have \(a \otimes (a \multimap b) \le b\).
Then we just use this lemma twice:
$$ x \otimes (x \multimap y) \otimes (y \multimap z) \le y \otimes (y \multimap z) \le z .$$ Voilà!
How do we prove the lemma? That's easy: the definition of closed monoidal poset says
$$ a \otimes (a \multimap b) \le b \text{ if and only if } a \multimap b \le a \multimap b $$ but the right-hand statement is true, so the left-hand one is too! \(\qquad \blacksquare \)
Okay, let me leave you with some puzzles:
Puzzle 193. We know that for any set \(X\) the power set \(P(X)\) becomes a monoidal poset with \(S \le T\) meaning \(S \subseteq T\), with product \(S \otimes T = S \cap T\), and with the identity element \(I = X\). Is \(P(X)\) closed? If so, what is \(S \multimap T\)? Puzzle 194. From Lecture 11 we know that for any set \(X\) the set of partitions of \(X\), \(\mathcal{E}(X)\), becomes a poset with \(P \le Q\) meaning that \(P\) is finer than \(Q\). It's a monoidal poset with product given by the meet \(P \wedge Q\). Is this monoidal poset closed? How about if we use the join \(P \vee Q\)? Puzzle 195. Show that in any closed monoidal poset we have
$$ I \multimap x = x $$ for every element \(x\).
In terms of resources this says that \(I\) acts like 'nothing', since it says
If you have \(I\), what do you need to combine it with to get \(x\)? \(\; \; x\)!
Of course \(I \otimes x = x = x \otimes I \) also says that \(I\) acts like 'nothing'.
Puzzle 196. Show that in any closed monoidal poset we have
$$ x \multimap y = \bigvee \lbrace a : \; x \otimes a \le y \rbrace . $$
We consider the inverse problem of finding the volatility $\sigma \in L^{\rho}(0, T)$ such that \(U_{BS}(X,K,r,t,\int_{0}^{t}\sigma^{2}(\tau)\,d\tau) = u(t)\), $0 \le t \le T$, where $U_{BS}$ is the Black–Scholes formula and $u(t)$ is the observable fair price of a European call option. The problem is ill-posed. Using the residual method, we regularize the problem. An explicit error estimate is given.
Keywords: Calibration · Volatility · Ill-posed · Regularization
Mathematics Subject Classification (2010): 35R30 · 65J20 · 91B24
Acknowledgments
The authors are grateful to three anonymous referees for their precious suggestions leading to the improved version of our paper.
Let $(X_{n},d_{n})_{n \in \mathbb{N}}$ be a sequence of complete geodesic metric spaces such that:
$X_{n}$ is a regular$^1$ CW-complex of constant local dimension$^3$ $n$, it is of finite type$^4$, boundaryless$^2$, unbounded, uniform$^5$, and it is the $n$-skeleton of $X_{n+1}$, which is $n$-connected. Moreover, the distances $d_{n}$, $d_{n+1}$ generate the same topology on $X_{n}$ and $\forall x,y \in X_{n} \ d_{n+1}(x,y) \le d_{n}(x,y)$. Finally, $(X_{n},d_{n})$ is quasi-isometric to $(X_{n+1},d_{n+1})$, through the inclusion map $X_{n} \subset X_{n+1}$, and a distance $d$ on $ \bigcup{X_{n}}$ is defined (for $x, y \in X_{n_0}$) by $d(x,y) := \lim_{n (\ge n_0) \to \infty} d_{n}(x,y)$. Rigidity assumption: if $S$ is a connected subspace of $X_{n}$ such that $S$ contains the geodesic paths between all its points, for $d_{n}$ and $d_{n+1}$, then $d_{n} = d_{n+1}$ on $S$. Remark: Some of these conditions could be unnecessary for a proof, and others could be greatly generalized. Motivation: See here for applications to geometric group theory and noncommutative geometry.
$^1$Regular (for a CW complex): the attaching maps are homeomorphisms (see this post).
$^2$Boundaryless (for a regular CW complex): the boundary of each closed cell is contained in the union of the boundaries of the other closed cells. $^3$Constant local dimension: the topological dimension of every neighborhood of every point is constant. $^4$Finite type: finitely many $r$-cells ending in a fixed $(r-1)$-cell. $^5$Uniform: for all $r$-cells $c_{1}$ and $c_{2}$, there is a neighborhood $n_{1}$ of $c_{1}$ and a neighborhood $n_{2}$ of $c_{2}$, such that $n_{1}$ is homeomorphic to $n_{2}$.
Mapping/Examples/$x^2 + y^2 = 1$

Let $R_1 = \set {\tuple {x, y} \in \R \times \R: x^2 + y^2 = 1}$.
Then $R_1$ is not a mapping.
Proof
$R_1$ fails to be a mapping for the following reasons:
$(1): \quad$ For $x < -1$ or $x > 1$, there exists no $y \in \R$ such that $x^2 + y^2 = 1$.
Thus $R_1$ fails to be left-total.
$(2): \quad$ For $-1 < x < 1$, there exist exactly two $y \in \R$ such that $x^2 + y^2 = 1$, for example:
$$\paren {\dfrac 1 2}^2 + \paren {\dfrac {\sqrt 3} 2}^2 = 1$$
$$\paren {\dfrac 1 2}^2 + \paren {-\dfrac {\sqrt 3} 2}^2 = 1$$
So both $\tuple {\dfrac 1 2, \dfrac {\sqrt 3} 2}$ and $\tuple {\dfrac 1 2, -\dfrac {\sqrt 3} 2}$ are elements of $R_1$.
Thus $R_1$ fails to be many-to-one.
$\blacksquare$ |
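As an aside, both failures are easy to confirm numerically (plain Python): the two listed points share $x = \frac 1 2$, and for $x$ outside $[-1, 1]$ the equation $y^2 = 1 - x^2$ has no real solution.

```python
import math

# Two distinct y-values for the same x = 1/2 both lie on the circle:
x = 0.5
for y in (math.sqrt(3)/2, -math.sqrt(3)/2):
    assert math.isclose(x**2 + y**2, 1.0)

# No real y exists for x = 1.5, since y^2 = 1 - x^2 would be negative:
assert 1 - 1.5**2 < 0
```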
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise it feeds $b,r$ back into the division box...
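What's being described is Euclid's algorithm for the gcd; a minimal sketch of that two-box machine (the feedback step passes $b$ and $r$ back into the division box):

```python
def gcd(a, b):
    """Euclid's algorithm, wired as the two boxes described above:
    a division box producing (q, r), then a test on r."""
    while True:
        q, r = divmod(a, b)  # a = b*q + r with 0 <= r < b
        if r == 0:
            return b
        a, b = b, r          # feed (b, r) back into the division box

print(gcd(48, 18))  # -> 6
```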
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
I proved the following sieve result and - since the proof is quite long and I need to use it in a work - I am looking for a reference to it (or at least something from which it could be proved quickly). Thank you in advance for any suggestion.
Lemma. For each prime number $p$, let $\Omega_p \subsetneq \{0,1,\ldots,p-1\}$ be a set of residues modulo $p$. Denote by $\Omega$ the whole family of the $\Omega_p$'s. Suppose that $|\Omega_p| \leq c$ for all $p$, and that$$\sum_{p \leq x} |\Omega_p|\cdot \frac{\log p}{p} = k \log x + O(1) ,$$for all $x>1$, where $c,k>0$ are given constants. Then, for any fixed $\delta_1, \delta_2 > 0$, we have$$|\{n \leq x : (n \bmod p) \notin \Omega_p, \forall p \in {]y,z]}\}| \ll_{\Omega,\delta_1,\delta_2} x \cdot \left(\frac{\log y}{\log x}\right)^k ,$$for all $x > 1$, $2 \leq y \leq (\log x)^{\delta_1}$, and $z \geq x^{\delta_2}$.
Note that, assuming wlog $\delta_2 \leq 1/2$, by the large sieve inequality, the result follows immediately from the lower bound $$\sum_{m \leq x^{\delta_2}} h_y(m) \gg_{\Omega,\delta_1,\delta_2} \left(\frac{\log x}{\log y}\right)^k , $$ where $h_y$ is the multiplicative arithmetic function supported on the squarefree integers with prime factors $> y$ and such that $h_y(p) = |\Omega_p| / (p - |\Omega_p|)$, for all $p > y$.
(This question is clearly connected with this previous one, however I preferred to post it separately since I already edited the latter several times.) |
Given the comments below, I have reframed my question with more background information and simulations/examples. My question also concerns the theoretical validity of the method. Please correct me if any statement is not accurate.
Background: 1) For our clients (policy makers), it is often interesting to check whether there is a linear trend in time-series data, especially after a new policy is applied. 2) One characteristic of such data is that it often covers a very short time period (around 10 years). 3) Another characteristic is that the observations are not a census but estimates from samples (biology and field-ecology studies, where collecting census data is not possible). This introduces observation error into the data, which can sometimes be very large (and look like outliers).
Motivation: I have been puzzled for some time about how to handle the three issues above. I would like to try as much as I can using ARIMA. To address 1), I want to provide a statistical test for a linear trend.
Method: Theoretically, if the data are stationary after an ARIMA(p,d,q) process and contain a drift, the data can be written as $y_{t}=u\times t + y_{t}^{'}$, where $y_{t}^{'}$ follows an ARIMA process of order (p,d,q). For example, with order (1,1,0), $y_{t}^{'}-y_{t-1}^{'}=\phi(y_{t-1}^{'}-y_{t-2}^{'})+\epsilon_{t}$. I am thinking of decomposing the observed data into a deterministic trend $u\times t$, a stochastic trend $y_{t}-u\times t -\epsilon_{t}=y_{t}^{'}-\epsilon_{t}$, and an error $\epsilon_{t}$. Do you think these two components can properly be called by these terms? The parameters of the two trends can be estimated using the Arima function.
Here is the code on simulated data to demonstrate how I work through it:
set.seed(123)
n <- 100
e <- rnorm(n, 0, 1.345)
y1 <- 3.4
AR <- -0.77
u <- 0.05
## 1. simulate ARIMA component
ts.sim1 <- arima.sim(n=n, model=list(ar=AR, order=c(1,1,0)), start.innov=y1/(AR), n.start=1, innov=c(0, rnorm(n-1, 0, 0.345)))
ts.sim1 <- ts(ts.sim1[2:(n+1)])
ts.sim1
plot(ts.sim1)
## 2. add linear trend
ts.sim2 <- ts.sim1 + u*(1:n)
plot(ts.sim2)
This is an extra bit of code I used to test whether the parameters I input give stationary data for ts.sim1 after the (1,1,0) process.
dat <- replicate(1000, arima.sim(n=n, model=list(ar=AR, order=c(1,1,0)), start.innov=y1/(AR), n.start=1, innov=c(0, rnorm(n-1, 0, 0.345))))
res <- apply(dat, 2, function(x) {fitt <- Arima(x, order=c(1,1,0), include.drift=F, method="ML"); residuals(fitt)})
p <- apply(res, 2, function(x) adf.test(x)$p.value)
sum(p > .05)/1000*100
Next:
## 3. make some plots
adf.test(ts.sim2, alternative = "stationary")
Acf(ts.sim2, main='')
Pacf(ts.sim2, main='')
## 4. auto-select best model in terms of AIC, and check residual pattern
fit <- auto.arima(ts.sim2, seasonal=FALSE, trace=TRUE, allowdrift=TRUE)
arima.string1(fit)
tsdisplay(residuals(fit), lag.max=15, main='Best Model Residuals')
AIC(fit)
## 5. Apply a drift version of the best (p,d,q) model, even if the best model does not contain drift, and check residuals
fit1 <- Arima(ts.sim2, order=c(1,1,0), include.drift=T, method="ML")
summary(fit1)
AIC(fit); AIC(fit1)
tsdisplay(residuals(fit1), lag.max=15, main='Best Model Residuals')
Note that the auto.arima selection does not necessarily give the true model; in this case, it suggests that the best model is (2,1,0) with drift rather than (1,1,0).
The ARIMA (1,1,0) with drift model output the following:
Series: ts.sim2
ARIMA(1,1,0) with drift

Coefficients:
          ar1   drift
      -0.9126  0.0298
s.e.   0.0533  0.0192

sigma^2 estimated as 0.1355:  log likelihood=-41.41
AIC=88.83   AICc=89.08   BIC=96.61

Training set error measures:
                       ME      RMSE       MAE       MPE     MAPE      MASE       ACF1
Training set -0.009091779 0.3625225 0.2671725 -3.364276 12.46496 0.5205074 -0.0499297
I can then test for the statistical significance of the linear slope
## 6. test for linear slope
drift_index <- 2
n <- length(ts.sim2)
pvalue <- 2*pt(-abs(fit1$coef[drift_index]/(sqrt(diag(fit1$var.coef))[drift_index]/sqrt(n))), df=n-1)
Finally, plot the fitted deterministic and stochastic trend:
drift_index <- 2
par(mfrow=c(3,1))
t_s <- 1:n
plot(t_s, ts.sim2, type="o", lwd=2, col="red", pch=15, xlab="Year", ylab="", cex.lab=1.5, cex.axis=1.2)
# 1. deterministic trend
ttime <- 1:length(ts.sim2)
y1 <- ttime*fit1$coef[drift_index]
se_re <- sqrt(fit1$sigma2)
m1 <- mean(y1)
offset <- (range(ts.sim2)[2]-range(ts.sim2)[1])/2 - m1
y1_low <- y1-1.96*se_re+offset
y1_high <- y1+1.96*se_re+offset
plot(t_s, y1+offset, type="n", ylim=range(y1_low, y1_high), xlab="Year", ylab="Drift", cex.lab=1.5, cex.axis=1.2, main="Deterministic trend", cex.main=1.2)
polygon(c(t_s, rev(t_s)), c(y1_high, rev(y1_low)), col=rgb(0,0,0.6,0.2), border=FALSE)
lines(t_s, y1+offset)
# 2. stochastic trend: fitted value of the ARIMA model part with 95% prediction intervals
y2 <- ts.sim2-y1
fitted1 <- y2-residuals(fit1)
se_re <- sqrt(fit1$sigma2)
y2_low <- y2-1.96*se_re
y2_high <- y2+1.96*se_re
plot(t_s, y2, type="n", ylim=range(y2_low, y2_high), xlab="Year", ylab="Fitted ARIMA", cex.lab=1.5, cex.axis=1.2, main="Stochastic trend", cex.main=1.2)
polygon(c(t_s, rev(t_s)), c(y2_high, rev(y2_low)), col=rgb(0,0,0.6,0.2), border=FALSE)
lines(t_s, y2)
I tried to plot the prediction intervals (conditional on each component) as gray bands in the figure, but I am not sure the calculation in the code is correct.
Additionally, I tried a simulation to see whether this decomposition process is unbiased (code below). It seems that I need a very long observed time series (n=1000) to obtain unbiased estimates of the parameters.
##----simulate N times
N <- 2000
model_type <- rep(NA, N)
res <- data.frame(ar=rep(NA, N), drift=rep(NA, N))
for (k in 1:N) {
  n <- 1000
  e <- rnorm(n, 0, 1.345)
  y1 <- 3.4
  AR <- -0.77
  u <- 0.05
  ts.sim1 <- arima.sim(n=n, model=list(ar=AR, order=c(1,1,0)), start.innov=y1/(AR), n.start=1, innov=c(0, rnorm(n-1, 0, 0.345)))
  ts.sim1 <- ts(ts.sim1[2:(n+1)])
  ts.sim2 <- ts.sim1 + u*(1:n)
  fit <- auto.arima(ts.sim2, seasonal=FALSE, trace=F, allowdrift=TRUE)
  model_type[k] <- arima.string1(fit)
  fit1 <- Arima(ts.sim2, order=c(1,1,0), include.drift=T, method="ML")
  res$ar[k] <- coef(fit1)[1]
  res$drift[k] <- coef(fit1)[2]
}
mean(res$ar)
mean(res$drift)
table(model_type)
My question is:
1) Are the terminologies deterministic vs. stochastic trend correctly used here?
2) Theoretically, is this a valid process to detect a linear trend while also allowing auto-correlated observations/errors? Is there any other method to handle this under ARIMA?
3) In my last simulation, I noticed that an unbiased decomposition only works when the time series is very long, whereas my data are short (around 10 years). I guess this is a general limitation of the ARIMA method.
Angle Addition Formulas from Euler's Formula
Introduction
This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT), but only indirectly. The main intent is to get someone who is uncomfortable with complex numbers a little more used to them and relate them back to already known Trigonometric relationships done in Real values. It is essentially a followup to my first blog article "The Exponential Nature of the Complex Unit Circle".
Polar Coordinates
The more common way of specifying the location of a point on a plane is using Cartesian coordinates. They are expressed as a pair of real values like $(x,y)$ where the $x$ indicates (by convention) the distance along the horizontal axis of a pair of orthogonal axes. The $y$ value indicates the distance along the vertical axis. The point can be located by either moving along the horizontal axis by $x$ units, and then vertically by $y$ units. Or the other way around. In either case you end up at the same point. In Cartesian coordinates the representation of the point is unique. That means if $(a,b)$ and $(x,y)$ refer to the same point, $x=a$ and $y=b$.
Polar coordinates are different. They specify a point by giving the distance to the origin and the angle between the line segment joining the origin to the point and the horizontal axis. An angle of zero is towards the right, and an angle of 180 degrees, or $\pi$ radians, is to the left. 90 degrees, or $\pi/2$ radians, is upward. This is also by convention. They are also specified as a coordinate pair, often called $(r,\theta)$.
The first thing to notice is that if the distance to the origin, also called the radius, is zero, then the angle value is meaningless. The second thing to notice is that the representation of a point is not unique. You can negate $r$ and add $\pi$ to the angle and get back to the same point. Or you can add or subtract any multiple of $2\pi$ to the angle and still be referring to the same point.
Polar to Cartesian Conversion and Back
From here on in, all angles are assumed to be in radians. If these equations aren't familiar to you, do a search on "Polar to Cartesian" and you will find plenty of reference material. Or you can plot a point in the first quadrant, drop a line segment to the x axis and one to the origin and use your basic Trigonometric definitions. $$ x = r \cdot \cos( \theta ) \tag {1} $$ $$ y = r \cdot \sin( \theta ) \tag {2} $$ The inverse equations can be solved for like this: $$ \begin{aligned} x^2 + y^2 &= r^2 \cdot \cos^2( \theta ) + r^2 \cdot \sin^2( \theta ) \\ &= r^2 \cdot ( \cos^2( \theta ) + \sin^2( \theta ) ) \\ &= r^2 \end{aligned} \tag {3} $$ $$ r = \sqrt{ x^2 + y^2 } \tag {4} $$ By convention, the principal root (the positive one) is chosen. Finding the angle goes like this: $$ \frac{y}{x} = \frac{r \cdot \sin( \theta )}{r \cdot \cos( \theta )} = \tan( \theta ) \tag {5} $$ So you would think: $$ \theta = \tan^{-1} \left( \frac{y}{x} \right) \tag {6} $$ But not so fast. There is a problem dealing with the different quadrants: $(-x,-y)$ will give the same answer as $(x,y)$ though they are clearly $\pi$ radians apart, also known as opposite each other. You can solve this with additional tests, which means "if" statements when programming, or you can use a special function called "atan2". Most programming languages and spreadsheets support such a function though its name might be slightly different. $$ \theta = \text{atan2} \left( y , x \right) \tag {7} $$ This function takes care of the quadrant locations and returns the correct angle.
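A short illustration of why atan2 is needed, using Python's standard `math` module:

```python
import math

x, y = 3.0, 4.0
r = math.hypot(x, y)        # equation (4)
theta = math.atan2(y, x)    # equation (7)

# Round-trip back to Cartesian coordinates, equations (1) and (2):
assert math.isclose(r * math.cos(theta), x)
assert math.isclose(r * math.sin(theta), y)

# Plain arctan cannot tell (x, y) from (-x, -y), but atan2 can:
assert math.isclose(math.atan(y / x), math.atan(-y / -x))
assert not math.isclose(math.atan2(y, x), math.atan2(-y, -x))
```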
Imaginary Numbers and the Complex Plane
An imaginary number is a real scalar multiplied by the $\sqrt{-1}$. It's impossible to take the square root of a negative number, you protest. A negative times a negative is positive, and a positive times a positive is also a positive. So how can I multiply something times something and get a negative number? Answer: It takes an imaginary number to do that. As in, "imagine a number that can", not that they don't exist. Well, as much as any Mathematical concept exists. Any such number can have the $\sqrt{-1}$ factored out and then it becomes an ordinary real number. The $\sqrt{-1}$ is usually designated as $i$ by mathematicians and $j$ by electrical engineers. It is not an American/European thing as I said in my first article. This, and all my articles use $i$.
A complex number is the sum of a real number and an imaginary number. So, if $x$ and $y$ are both real, then $x + i y$ is a complex number. It could also be expressed as $x + y i $ since multiplication is commutative. Also by convention, the real part is written before the imaginary part.
The complex plane is a representation of the full set of possible complex numbers. The horizontal axis is used for the real part and the vertical axis is used for the imaginary part. Thus, the point corresponding to the complex number $ x+ i y $ is $(x,y)$ on the plane. Very often the variable $z$ is used to denote a complex number.
Now, let's convert that point to polar coordinates: $$ z = x + i y = r \cos( \theta ) + i r \sin( \theta ) = r \left[ \cos( \theta ) + i \sin( \theta ) \right] \tag {8} $$ That wasn't so hard.
Euler's Formula
Now, it's time to bring in Euler's formula, which is: $$ e^{i\theta} = \cos( \theta ) + i \cdot \sin( \theta ) \tag {9} $$ Look at that, the right hand side is just like the polar form of the complex number given in (8). Substituting it in: $$ z = x + i y = r \cdot e^{i\theta} \tag {10} $$ There is the crux of it. Euler's formula is essentially the conversion of polar coordinates to Cartesian coordinates of a complex number. What is remarkable is that the conversion is actually a genuine exponentiation. A fact that will be exploited for the purposes of this article.
The proper mathematical name for finding the angle of a complex number is "arg", as in: $$ \theta = \arg( x + iy ) \tag {11} $$ This is what you should use when doing math. It is also supported in some computing platforms. You may also find it called "angle". When $r>0$, Euler's formula shows it can also be expressed as: $$ \begin{aligned} z &= x + i y = r \cdot e^{i\theta} \\ \ln( z ) &= \ln( x + i y ) = \ln(r) + i \theta \\ \theta &= \frac{ \ln( x + i y ) - \ln( r ) }{i} = -i \ln \left( \frac{x+iy}{r} \right) = -i \ln \left( \frac{x+iy}{ \sqrt{x^2+y^2} } \right) \end{aligned} \tag {12} $$ This is the complex version of the natural log, which can get confusing so the "arg" name is the preferred one.
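In Python's `cmath` the "arg" function happens to be called `phase`. A small sketch checking it against the logarithm form in (12):

```python
import cmath
import math

z = complex(3, 4)
r = abs(z)
theta = cmath.phase(z)  # the "arg" function of equation (11)

# Equation (12): theta = -i * ln(z / r); the result is real up to rounding
via_log = -1j * cmath.log(z / r)
assert math.isclose(via_log.real, theta)
assert abs(via_log.imag) < 1e-12

# Euler's formula (10) reassembles z from its polar pieces:
assert cmath.isclose(r * cmath.exp(1j * theta), z)
```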
Finding the magnitude of a complex value, which is the length of the radius, is expressed mathematically as: $$ \| z \| = \sqrt{ x^2 + y^2 } = | r | \tag {13} $$ The double absolute value bars stand for the magnitude of a complex number, while the single absolute value bars are for real values. That's in math. In programming the magnitude is often implemented as "Abs" with no distinction being made.
Angle Addition Formulas for Sine and Cosine
Let's simplify things by setting $r=1$. Now suppose that $\alpha$ and $\beta$ are two real valued angle measurements. Then: $$ \begin{aligned} e^{i(\alpha + \beta)} &= e^{i\alpha} \cdot e^{i\beta} \\ &= \left[ \cos( \alpha ) + i \sin( \alpha ) \right] \cdot \left[ \cos( \beta ) + i \sin( \beta ) \right] \end{aligned} \tag {14} $$ Let's take a moment and see what happens when we multiply two complex numbers. $$ \begin{aligned} ( a + i b ) ( c + i d ) &= a ( c + i d ) + i b ( c + i d ) \\ &= a c + i a d + i b c + i^2 b d \\ &= ( a c - b d ) + i ( a d + b c ) \\ \end{aligned} \tag {15} $$ Notice that the $i^2$ becomes $-1$. Applying the same steps to (14) results in: $$ \begin{aligned} e^{i(\alpha + \beta)} &= \left[ \cos( \alpha ) + i \sin( \alpha ) \right] \cdot \left[ \cos( \beta ) + i \sin( \beta ) \right] \\ &= \left[ \cos( \alpha ) \cos( \beta ) - \sin( \alpha ) \sin( \beta ) \right ] \\ &+ i \left[ \cos( \alpha ) \sin( \beta ) + \sin( \alpha ) \cos( \beta ) \right ] \end{aligned} \tag {16} $$ But, this is also true from Euler's formula: $$ \begin{aligned} e^{i(\alpha + \beta)} &= \cos( \alpha + \beta ) + i \cdot \sin( \alpha + \beta ) \end{aligned} \tag {17} $$ The right hand side are the Cartesian coordinates. Remember the uniqueness of Cartesian representation? This means that: $$ \cos( \alpha + \beta ) = \cos( \alpha ) \cos( \beta ) - \sin( \alpha ) \sin( \beta ) \tag {18} $$ And: $$ \sin( \alpha + \beta ) = \cos( \alpha ) \sin( \beta ) + \sin( \alpha ) \cos( \beta ) \tag {19} $$ Those are your angle addition formulas for Sine and Cosine. Pretty neat, huh?
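Since (18) and (19) are easy to get wrong by a sign, here is a brute-force numerical check over random angles:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-10.0, 10.0)
    # Equation (18):
    assert math.isclose(math.cos(a + b),
                        math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b),
                        abs_tol=1e-9)
    # Equation (19):
    assert math.isclose(math.sin(a + b),
                        math.cos(a) * math.sin(b) + math.sin(a) * math.cos(b),
                        abs_tol=1e-9)
```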
Circular Dependencies
It is important to note that this is not a derivation of these formulas, merely a confirmation. The derivation chain goes like this.
1) Cosine angle addition formula from distances on the unit circle
2) Sine angle addition formula from the cosine formula
3) Derivatives of Sine and Cosine from the angle addition formulas and limits
4) Taylor series for Sine and Cosine from the Derivatives
5) Euler's formula from Taylor series
6) Cosine and Sine angle addition formulas from Euler's formula
There is no "7) Goto 3".
Multiplying Two Complex Numbers Polar Style
Suppose we have $z_1$, $z_2$, and $z_3$ where the last one is the product of the first two. $$ z_3 = z_1 \cdot z_2 \tag {20} $$ You may have heard the rule: "When you multiply two complex numbers the angles are added and the magnitudes are multiplied." First, let's prove it: $$ \begin{aligned} r_3 e^{i \theta_3 } &= r_1 e^{i \theta_1 } \cdot r_2 e^{i \theta_2 } \\ &= r_1 r_2 \cdot e^{i (\theta_1 + \theta_2) } \end{aligned} \tag {21} $$ Now, let's see if it really works. (Have you any doubts?)
Let $z_1 = 3 + i 4$ and $z_2 = 5 + i 12$. $$ z_1 \cdot z_2 = ( 15 - 48 ) + i ( 36 + 20 ) = -33 + i 56 = z_3 \tag {22} $$ $$ \begin{aligned} \| z_1 \| &= \sqrt{ 3^2 + 4^2 } = \sqrt{ 9 + 16 } = \sqrt{ 25 } = 5 \\ \| z_2 \| &= \sqrt{ 5^2 + 12^2 } = \sqrt{ 25 + 144 } = \sqrt{ 169 } = 13 \\ \| z_3 \| &= \sqrt{ (-33)^2 + 56^2 } = \sqrt{ 1089 + 3136 } = \sqrt{ 4225 } = 65 \\ \end{aligned} \tag {23} $$ Yeah, I picked Pythagorean triples for $z_1$ and $z_2$ to make the numbers come out nice. But look, the result is also a Pythagorean triple. The mind boggles. Back to the verification. $ 5 \cdot 13 = 65 $. Check.
Now let's look at the angles; they aren't so clean. Here is a little Gambas code to do the calculations:
Dim z1, z2, z3 As Complex

z1 = 3 + 4i
z2 = 5 + 12i
z3 = z1 * z2

Print "|z1| = "; Abs(z1)
Print "|z2| = "; Abs(z2)
Print "|z3| = "; Abs(z3)
Print
Print "arg(z1) = "; z1.Arg(), Deg(z1.Arg())
Print "arg(z2) = "; z2.Arg(), Deg(z2.Arg())
Print "arg(z3) = "; z3.Arg(), Deg(z3.Arg())
Print
Print "z3 = "; z3
Print
Print "atan2( 4, 3) = "; ATan2(4, 3)
Print "atan2(12, 5) = "; ATan2(12, 5)
Print "atan2(56,-33) = "; ATan2(56, -33)

And these are the results:

|z1| = 5
|z2| = 13
|z3| = 65

arg(z1) = 0.92729521800161    53.130102354156
arg(z2) = 1.17600520709514    67.3801350519596
arg(z3) = 2.10330042509675    120.510237406116

z3 = -33+56i

atan2( 4, 3) = 0.92729521800161
atan2(12, 5) = 1.17600520709514
atan2(56,-33) = 2.10330042509675

It doesn't matter whether you work in radians or degrees, the angles still add up.
Conclusion
This article should have made you more comfortable with complex numbers and their representation in both Cartesian and Polar form. These concepts are essential to understanding many DSP concepts and the DFT (Discrete Fourier Transform) in particular. The phrase "One complex number can be rotated by multiplying it by another" should make sense. If it doesn't, reread the article and play around with numbers until it does.
A Dedication
This article is dedicated to a certain Miss MB, who obviously never heeded "Girls aren't supposed to like math." You go, girl!
About Gambas
Gambas is a Linux-based (as of this writing) BASIC development platform, loosely based on (but a huge improvement over) Microsoft's VB5 and VB6. I encourage all programmers, novice to expert, to check it out.
Technical details:
The version in my distro (3.1.1) is way out of date. The latest is 3.12.2. PPA: gambas-team/gambas3
References
[1] Dawg, Cedron, The Exponential Nature of the Complex Unit Circle
Previous post by Cedron Dawg:
Off Topic: Refraction in a Varying Medium
Next post by Cedron Dawg:
A Two Bin Solution
Hi Cedron. In your Polar Coordinates section you wrote: "You can negate r and add π to the angle and get back to the same point." Is it possible to have a negative-valued radius in polar coordinates?
I think it is considered in bad form to leave a value like that. Similar to how $\frac{1}{\sqrt{2}}$ should be converted to $\frac{\sqrt{2}}{2}$ and $\frac{1}{1+i}$ should be converted to $\frac{1-i}{2}$.
But for sure (I resisted saying "Absolutely"), otherwise you couldn't say things like: "What does the graph of $ r = \sin( \theta ) $ look like?"
The essence of the article is (15) and (16) as a way to remember the angle addition formulas in a clutch. You can also think:$$ real = real \cdot real - imag \cdot imag $$ $$ imag = real \cdot imag + imag \cdot real $$
You can also note that $ r = \text{sgn}( x ) \cdot \sqrt{ x^2 + y^2 } $ solves the $ \tan^{-1} $ quandary (Added: You still have to deal with $x=0$ separately).
Ced
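Regarding the signed-r remark above, here is a small Python sketch of that convention (the helper names are mine, not from the article). The angle comes from a plain arctangent, the sign of $x$ goes into $r$, and the $x=0$ column is handled separately, as noted:

```python
import math

def to_polar_signed(x, y):
    """Convert (x, y) to (r, theta) with theta = atan(y/x) and
    r = sgn(x) * sqrt(x^2 + y^2); the x = 0 column is a special case."""
    if x == 0:
        return y, math.pi / 2              # point lies on the imaginary axis
    r = math.copysign(math.hypot(x, y), x)  # sgn(x) * sqrt(x^2 + y^2)
    theta = math.atan(y / x)
    return r, theta

def from_polar(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

# Round-trip a point in the left half plane, where atan(y/x) with a
# positive radius would land in the wrong quadrant:
r, theta = to_polar_signed(-33, 56)
x, y = from_polar(r, theta)
assert abs(x + 33) < 1e-9 and abs(y - 56) < 1e-9
assert r < 0  # a negative "r", as discussed above
```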
Hi Cedron. Your March 17 reply makes me think I didn't ask my question in a clear enough way. My question, in different words, is: "If I have a complex number represented in polar form, is it possible for the radius of that polar-form number to be negative?"
Sure, $(-1, \pi/2)$ is just as valid as $(1, 3\pi/2)$ for the same point.
In order to achieve uniqueness in representation, the typical restriction is $r > 0$ and $0 \le \theta < 2 \pi$, and if $r=0$, make $\theta=0$. In which case the latter one is the "correct" one.
Does that clarify it?
Hi Cedron.
In your last March 17 reply you seem to be saying that a complex number, represented in polar form, can have a negative-valued magnitude. But it seems to me that your Eq. (13) specifies that all complex number magnitudes be positive-only in value.
If a complex number's magnitude represents a "length" on the complex plane, I'm trying to figure out what on earth does a "negative length" mean.
Hi Rick,
For polar coordinates, think of the "r" value as lying on an "r-axis", which is rotated by "$\theta$" radians (or degrees, if you are using that). So you can think of the $\theta$-axis as being wrapped around the unit circle. There is no problem having a negative value for "r" in this situation, and its meaning is clear. Under (4), I specifically state that "r" is chosen as positive by convention. If you choose the negative root, then the appropriate $\theta$ has to be chosen as well.
Magnitudes, on the other hand, are a distance measurement and are therefore non-negative. So, when you calculate the magnitude and arg of a complex number, they can be interpreted as (mapped to) polar coordinates, but they are not the same thing. "r" is not always a magnitude though. Changing the name of the variable to make the point clearer: $$ z = k e^{i \theta} = -k e^{i (\theta + \pi)} $$ Indeed, any multiple of $i2\pi$ could be added in the exponent on either side and the equation will still be true. All refer to the same point on the complex plane. There is no restriction that says the scalar has to be positive. In this case, I don't know whether k is positive, negative, or zero. That's why the absolute value bars are there around "r" in (14), and for this example: $$ \|z\| = |k| $$ Due to the lack of uniqueness, when the Cartesian form ($x+iy$) is converted to the polar form ($re^{i\theta}$) by the equations given in the article, it is "a" conversion, not "the" conversion, and "r" is chosen to be positive by convention, and "$\theta$" as the angle to a positive "r".
Equation 12 doesn't disregard the real part of the logarithm; shouldn't it be $\operatorname{Im} \ln (x+iy)$ ?
Good catch, thanks. I fixed it slightly differently and kept it all under (12) so as to not disturb the numbering.
Four Ways to Compute an Inverse FFT Using the Forward FFT Algorithm
If you need to compute inverse fast Fourier transforms (inverse FFTs) but you only have forward FFT software (or forward FFT FPGA cores) available to you, below are four ways to solve your problem.
Preliminaries
To define what we're thinking about here, an N-point forward FFT and an N-point inverse FFT are described by:
$$ Forward \ FFT \rightarrow X(m) = \sum_{n=0}^{N-1} x(n)e^{-j2\pi nm/N} \tag{1} $$
$$ Inverse \ FFT \rightarrow x(n) = {1 \over N} \sum_{m=0}^{N-1} X(m)e^{j2\pi mn/N} $$
$$ \qquad \qquad \qquad = {1 \over N} \sum_{m=0}^{N-1} [X_{real}(m) + jX_{imag}(m)]e^{j2\pi mn/N} \tag{2}$$
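As a sanity check of definitions (1) and (2), here is a direct (slow, non-fast) Python transcription verifying that the inverse transform undoes the forward transform; the helper names are mine:

```python
import cmath

def forward_dft(x):
    """Eq. (1): X(m) = sum_n x(n) e^{-j 2 pi n m / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * m / N) for n in range(N))
            for m in range(N)]

def inverse_dft(X):
    """Eq. (2): x(n) = (1/N) sum_m X(m) e^{+j 2 pi m n / N}."""
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

# Round trip: inverse(forward(x)) should recover x to machine precision.
x = [1 + 2j, -0.5 + 0j, 3 - 1j, 0 + 4j]
roundtrip = inverse_dft(forward_dft(x))
assert all(abs(a - b) < 1e-9 for a, b in zip(x, roundtrip))
```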
Inverse FFT Method #1
The first method of computing inverse FFTs using the forward FFT was proposed as a "novel" technique in 1988 [1]. That method is shown in Figure 1.
Figure 1: Method #1 for computing the inverse FFT using forward FFT software.
Inverse FFT Method #2
The second method of computing inverse FFTs using the forward FFT, similar to Method #1, is shown in Figure 2(a). This Method #2 has an advantage over Method #1 when the input $ X(m) $ spectral samples are conjugate symmetric. In that case, shown in Figure 2(b), only one data flipping operation is needed because the output of the forward FFT will be real-only.
Figure 2: Method #2 processing flow: (a) standard Method #2; (b) Method #2 when X(m) samples are conjugate symmetric.
The next two inverse FFT methods are of interest because they avoid the data reversals necessary in Method #1 and Method #2.
Inverse FFT Method #3
The third method of computing inverse FFTs using the forward FFT, by way of data swapping, is shown in Figure 3.
Figure 3: Method #3 for computing the inverse FFT using forward FFT software.
Inverse FFT Method #4
The fourth method of computing inverse FFTs using the forward FFT, by way of complex conjugation, is shown in Figure 4.
Figure 4: Method #4 for computing the inverse FFT using forward FFT software.
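The figures themselves aren't reproduced in this text version, so as a hedged sketch of what I understand the two reversal-free methods to be: Method #3 swaps the real and imaginary parts of the data before and after a forward FFT (plus a 1/N scaling), and Method #4 conjugates the data before and after a forward FFT (plus a 1/N scaling). Both are checked here against a direct inverse DFT:

```python
import cmath

def forward_dft(X):
    N = len(X)
    return [sum(X[n] * cmath.exp(-2j * cmath.pi * n * m / N) for n in range(N))
            for m in range(N)]

def inverse_dft(X):
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

def swap(z):
    # Exchange real and imaginary parts: a + jb -> b + ja.
    return complex(z.imag, z.real)

def ifft_method3(X):
    # Method #3 (as I read it): swap, forward FFT, swap, scale by 1/N.
    N = len(X)
    return [swap(v) / N for v in forward_dft([swap(x) for x in X])]

def ifft_method4(X):
    # Method #4 (as I read it): conjugate, forward FFT, conjugate, scale by 1/N.
    N = len(X)
    return [v.conjugate() / N for v in forward_dft([x.conjugate() for x in X])]

X = [3 - 2j, 1 + 1j, -4 + 0.5j, 2 + 2j]
ref = inverse_dft(X)
for method in (ifft_method3, ifft_method4):
    assert all(abs(a - b) < 1e-9 for a, b in zip(ref, method(X)))
```

The conjugation trick works because conjugating the input of a DFT and then conjugating the output flips the sign of the exponent, turning a forward transform into an unscaled inverse one; the swap trick is the same identity in disguise, since swapping real and imaginary parts equals multiplying the conjugate by j.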
References
[1] Duhamel, P., et al., "On Computing the Inverse DFT," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 36, No. 2, Feb. 1988.
Previous post by Rick Lyons:
Correcting an Important Goertzel Filter Misconception
Next post by Rick Lyons:
The Most Interesting FIR Filter Equation in the World: Why FIR Filters Can Be Linear Phase
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
gameoflifemaniac wrote:Related: how many two-state black-white reversal rules are there (rules like Day & Night)?
Apple Bottom wrote:Rules that are their own black/white reversal? According to the wiki, 512.
Didn't there used to be a list of said self-complementary rules on the wiki?
For general discussion about Conway's Game of Life.
muzik wrote:Didn't there used to be a list of said self-complementary rules on the wiki?
Yes. (It's in my userspace because I didn't consider the list sufficiently interesting to warrant a Main namespace article.)
Is there a list (anywhere) that lists the rules where 2x2 blocks simulate other CA using the margolus neighbourhood?
muzik wrote:Is there a list (anywhere) that lists the rules where 2x2 blocks simulate other CA using the margolus neighbourhood?
Not to my (very limited) knowledge, but David Eppstein notes such simulations on his glider page, e.g. here.
This is the only other rule I know of with this property:
Code: Select all
x = 2, y = 2, rule = B1246/S012348
2o$2o!
Can it be proved that every single such rule can be simulated with a totalistic or non-totalistic rule, given that the 2x2 sections are shrunk down to 1x1 and only every second generation is considered?
Code: Select all
x = 4, y = 4, rule = B1246/S012348
4o$4o$2b2o$2b2o!
muzik wrote:This is the only other rule I know of with this property
Here are a few examples from Eppstein's page (this list is probably not exhaustive):
B3/S15
B3/S25
B347/S045
B35678/S5678
B3568/S2567
B36/S1258
B368/S25
B38/S125
B38/S15
B38/S25
B38/S5
If you want to learn more about these, I'd encourage you to ask yourself a few questions and try to answer them yourself.
How many different Margolus neighborhood CAs are there? Try to enumerate them, and see if you can compute the total number.
What constraints do the B/S conditions of rules that simulate these CAs satisfy? How many such rules are there?
When you have two identical numbers, i.e. "there are x different Margolus neighborhood CAs, and there are also x different rules satisfying the constraints I have worked out", can you show that each such CA is simulated by one of these rules, and vice versa?
Can you come up with a way to "convert" a rule to its equivalent Margolus neighborhood CA, or to "convert" a Margolus neighborhood CA to its equivalent rule?
Are there any ways to play and display RLEs and patterns on the wiki just like on the forums with the code boxes?
Given a pattern P and a number k, is the problem of synthesizing P in k gliders decidable?
A stronger version: Is it possible to enumerate all K-glider collisions? (K is the largest number such that there is something synthesizable by K+1 gliders but not synthesizable by K gliders)
Still drifting.
muzik wrote:Are there any ways to play and display RLEs and patterns on the wiki just like on the forums with the code boxes?
Check out LifeViewer Build 233 (Tiki bar link). This is a work in progress, but it seems very promising so far.
If you have ideas for what should be possible, or shouldn't be possible, starting from a wiki RLE snippet, please go ahead and add it to that discussion!
Bullet51 wrote:Given a pattern P and a number k, is the problem of synthesizing P in k gliders decidable?
I guess the problem may not be provably undecidable for small k, but nobody is going to be able to prove that it's decidable either, even for k=4. We can't enumerate 4-glider collisions reliably, and until we can there's not much point in looking at k>4 cases.
(Someone please correct me if I'm wrong. I usually am, about things like decidability questions.)
Here's an example of why 4-glider enumeration is a nearly neverending process. Take a 3-glider synthesis of a switch engine -- not the recently-discovered "clean" synthesis, but one of simsim314's messy ones where the switch engine doesn't self-destruct. Now figure out where you can safely stop colliding one more glider into that switch engine. As long as you can produce a big messy explosion that interacts with the entire length of the switch engine's debris, you might get something different by waiting longer to send in that glider.
Bullet51 wrote:A stronger version: Is it possible to enumerate all K-glider collisions? (K is the largest number such that there is something synthesizable by K+1 gliders but not synthesizable by K gliders)
See the "Argh, kickbacks" section of this message:
dvgrn wrote:From the point of view of 4G collisions, that's actually not good enough. A fourth glider could hit a piece of junk, or the last dying spark from let's say the 2-glider-mess reaction, and prolong it for long enough that a very long-delayed kickback glider might do something unique.
Even in the 4-glider case, there's no way to prove that one of those long-delayed kickbacks might not hit the Prolonged 2-Glider Mess and improbably turn it into Pattern P.*
-- Yes, we know that that's too improbable to happen in almost any conceivable case... but we also know how to make arbitrarily high-population "diehard seeds" that are statistically indistinguishable from Prolonged 2-Glider Mess ash. It doesn't seem to me a valid proof will ever be able to find its way around that problem.
-------------------------------
* Well, okay, Pattern P plus N output gliders. But you could shoot those down, making a synthesis of P with 4+N gliders. That doesn't seem to make very much difference to the basic problem.
dvgrn wrote:Here's an example of why 4-glider enumeration is a nearly neverending process. Take a 3-glider synthesis of a switch engine -- not the recently-discovered "clean" synthesis, but one of simsim314's messy ones where the switch engine doesn't self-destruct. Now figure out where you can safely stop colliding one more glider into that switch engine. As long as you can produce a big messy explosion that interacts with the entire length of the switch engine's debris, you might get something different by waiting longer to send in that glider.
But this only proves that a depth-first search fails. A breadth-first search (although slower) will not end up fixated on the SE+G explosions.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
toroidalet wrote:But this only proves that a depth-first search fails. A breadth-first search (although slower) will not end up fixated on the SE+G explosions.
If you can prove that none of the SE+G explosions produces Pattern P, then maybe breadth-first vs. depth-first could make a difference somehow... though I'm not quite sure how: if it's a complete enumeration, then you'll have to get to all the same cases eventually, either way.
Maybe you can prove that about SE+G, for some target P anyway, if there are unwanted output gliders. Unfortunately you'll also have to be able to prove something similar for all possible Two Interacting Two-Glider-Messes, and for any of the gazillion three-glider methuselahs with a fourth glider added at any point... and so forth and so on.
For any long-lived 3G crash that _doesn't_ produce output gliders or spaceships, how do you prove that it can't be made to settle into your target Pattern P, without trying every possible glider you can hit the active reaction with before it settles? That should give some sense of the size of the problem, and that's just for k=4.
For most P, we know immediately that it's so unlikely that there's no point in bothering to check... but a positive answer to the question would require a provably correct algorithm, not a common-sense statistical argument that's likely to have occasional incredibly rare exceptions.
Here's a related opinion about proofs involving 4-glider and k-glider collisions, from someone with much more respectable mathematical credentials than mine...!
calcyman wrote:Corollary: producing a complete list of constructions with k gliders is, at least morally, impossible. Maybe 3-glider collisions can be fully classified (I'm imagining tens of thousands of individual cases together with a few hundred infinite families), but I doubt anyone could ever manage an exhaustive classification of all possible 4-glider collisions. In particular, I doubt anyone could ever prove the falsity of the following claim: "There is a 4-glider synthesis of the Caterpillar."
Is there any way to join 2 streams of ants?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
How are oscillator orders determined? Like this? It seems to have a pattern but I can't figure it out.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc wrote:How are oscillator orders determined? Like this? It seems to have a pattern but I can't figure it out.
The comparison method is detailed on the site. This comparison method is used to determine the standard form of objects in Small Object Format.
toroidalet wrote:Is there any way to join 2 streams of ants?
I don't understand what you mean. Can you show what the input and output of the joining process should look like?
How many non-totalistic CA are there?
Also, would I be correct in saying that there are 2^512 non-isotopic CA?
muzik wrote:... would I be correct in saying that there are 2^512 non-isotopic CA?
No. There are 2^512 2-state CA with Moore neighbourhoods, but note that this is a superset of the set of all isotropic 2-state CA with Moore neighbourhoods. To get the number of non-isotropic CA you'd have to subtract one from the other.
succ
muzik wrote:How many non-totalistic CA are there?
Come on, that's high school-level mathematics. It would likely have taken you less time to figure out the answer yourself than to write this post.
muzik wrote:Also, would I be correct in saying that there are 2^512 non-isotopic CA?
No, because there's no agreed-on definition of "non-isotopic". ("Non-isotropic", on the other hand...)

blah wrote:No. There are 2^512 2-state CA with Moore neighbourhoods, but note that this is a superset of the set of all isotropic 2-state CA with Moore neighbourhoods. To get the number of non-isotropic CA you'd have to subtract one from the other.
More precisely still, that's 2-state CAs defined on ℤ² using the range-1 Moore neighborhood.

Apple Bottom wrote:No, because there's no agreed-on definition of "non-isotopic". ("Non-isotropic", on the other hand...)
The next time you see autocorrect, give it a massive slap across the face. Preferably with a cactus.
So since there's 51 different birth conditions, that would be 2^51. Am I making any mistakes here?

muzik wrote:So since there's 51 different birth conditions, that would be 2^51. Am I making any mistakes here?
Yes: you're not counting survival conditions.

Apple Bottom wrote:Yes: you're not counting survival conditions.
Right, so that would make it 2^102.

muzik wrote:How many non-totalistic CA are there? Also, would I be correct in saying that there are 2^512 non-isotopic CA?
2^64. Since every transition can have 8 different orientations (rotation and reflection), there are 2^64 non-totalistic rules, because 512 divided by 8 is 64.
https://www.youtube.com/watch?v=q6EoRBvdVPQ
One big dirty Oro. Yeeeeeeeeee...
gameoflifemaniac wrote:2^64. Since every transition can have 8 different orientations (rotation and reflection), there are 2^64 non-totalistic rules, because 512 divided by 8 is 64.
Yes, but now you're back to calculating isotropic rules, aren't you? The question was (supposed to be) about non-isotropic rules. Also, some transitions are twofold or fourfold rotationally symmetric, or mirror symmetric, so you can't really just divide like that, you end up with much too small a number. 2^102 is pretty clearly right for isotropic rules, isn't it? That accounts for all the isotropic rule strings that you could possibly generate.
-- This is the problem with the term "non-totalistic". People have been using it to mean "isotropic non-totalistic", but that's not really what it means. If you pick a MAP rule at random, it's definitely not going to be totalistic, probability near zero -- but it's also very very unlikely to be isotropic. Those near-2^512 rules are the full set of (mostly kinda boring) non-totalistic rules.
When these new rule types first started showing up on LifeViewer, I tried to be careful to use "totalistic", "isotropic non-totalistic", "anisotropic non-totalistic" as the three categories, in hopes that people would pick those terms up. But I have to admit those last two are pretty horrible terms.
gameoflifemaniac wrote:2^64. Since every transition can have 8 different orientations (rotation and reflection), there are 2^64 non-totalistic rules, because 512 divided by 8 is 64.
That same reasoning would lead you to believe that there are 3^(3^9/8) such 3-state rules, which is nonsensical since that number isn't an integer.
What do you do with ill crystallographers? Take them to the
! mono-clinic
There should be 2^102 isotropic rules in total, 2^101 essentially distinct because every rule has either an ON/OFF dual rule or an ON-OFF-symmetric/strobing dual rule. There should be 2^512 - 2^102 non-isotropic rules, but the number of essentially distinct ones is harder to calculate due to rotations/reflections.
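The 51-conditions-per-state count behind the 2^102 figure can be verified by brute force: enumerate all 2^8 outer-cell patterns of the range-1 Moore neighbourhood and count their equivalence classes under the eight symmetries of the square. A Python sketch (the count applies once for birth and once for survival, giving 102 independent conditions):

```python
from itertools import product

# The 8 outer cells of the range-1 Moore neighbourhood, as (dx, dy) offsets.
CELLS = [(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0), (-1, 1), (0, 1), (1, 1)]

def symmetries(p):
    """Images of offset p under the 8 symmetries of the square."""
    x, y = p
    return [(x, y), (-y, x), (-x, -y), (y, -x), (-x, y), (x, -y), (y, x), (-y, -x)]

# Count orbits of the 2^8 outer-cell configurations under those symmetries.
# Each orbit is one isotropic birth (or survival) condition.
seen = set()
orbits = 0
for config in product((0, 1), repeat=8):
    if config in seen:
        continue
    orbits += 1
    values = dict(zip(CELLS, config))
    for k in range(8):
        seen.add(tuple(values[symmetries(c)[k]] for c in CELLS))

print(orbits)      # 51 distinct conditions per centre state
print(2 * orbits)  # 102 total B/S conditions, hence 2^102 isotropic rules
```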
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
I need to design a circuit that implements the following transfer function:
\$G(s)= G\frac{s+z}{s+p}\$
where, G = gain, p = pole and z = zero.
At s = 0 and as s approaches infinity, G(s) = 1.
The transfer function at the geometric midpoint, sqrt(530*2200) ≈ 1080, inserted for s becomes
G(1080) ≈ 0.49
Thus a unity gain low pass then high pass parallel & series T filter is needed or an active equivalent.
So the transfer function resembles a passive loudness switch that cuts the audio midrange by 20log(0.49) ≈ -6.2 dB. My Bogen stereo tube amp during the '60s and '70s had the -15 dB preferred solution over the bass-boom-box version used since then.
One reason for this type of response, cutting the loudness in the midrange only rather than boosting, may be a phone call: we cut the midrange only, then restore to flat for full loudness. It seems only Bogen got it right. Long after Baxandall invented it, stereo designers used the loudness switch to boost crappy speakers' bass response below 100 Hz rather than cut the midrange to match our hearing response according to the well-known Fletcher-Munson curves, developed before 1933. So this transfer function is a simple approximation of these curves for listening at slightly lower volumes (about -6 dB).
The next clue is the passive Baxandall dual-T fixed mid-range filter. Peter J. Baxandall invented this filter just before I was born in 1952. The variable bass/treble version is still in use today in old tuners.
Can you imagine an RCR//CRC filter to do this with the R ratios being 2200/530?
If so , you may be as smart as Bax. If not learn how to find his answer. This is for your education, learning how to learn on your own without spoon-feeding.
That’s MY reason for not giving you the solution.
The answers seem to be confusing.
$$G(s) = 4.15\frac{s+530}{s+2200}$$
This transfer function at DC goes to \$G(s) \xrightarrow{s\to 0}\ \sim1\$, while for higher frequencies it goes to \$G(s) \xrightarrow{s\to +\infty} 4.15\$.
There is no way to make this filter in a passive way if the gain remains >1. I suggest looking up filters using opamps.
I suggest the following configuration:
At low frequencies the capacitor acts like an open circuit, yielding a gain of 1. At high frequencies, the capacitor will short-circuit, producing a gain of \$1+\frac{R_1}{R_2}\$.
The transfer function is given as (using the EET theorem on the capacitor):
$$\begin{align} H(s) &= \frac{1 + \frac{Z_n}{Z}}{1 + \frac{Z_d}{Z}}\\ & = \frac{1 + (R_1+R_2)C_2s}{1 + R_2C_2s}\\ &= \left(1 + \frac{R_1}{R_2}\right)\frac{s+\frac{1}{(R_1+R_2)C_2}}{s+\frac{1}{R_2C_2}} \end{align}$$
We already determined that \$1+\frac{R_1}{R_2} = 4.15\$, and then you can choose either \$\frac{1}{(R_1+R_2)C_2}=530\$
or \$\frac{1}{R_2C_2} = 2200\$ depending on which one you want to be exact.
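To illustrate that last step, here is a hypothetical component-value pass (C2 = 100 nF is an arbitrary starting choice, not from the answer). With the pole placed exactly, the zero lands at pole/gain = 2200/4.15 ≈ 530.1 rad/s, essentially on target:

```python
# Hypothetical component values for the suggested op-amp circuit.
# Targets: 1 + R1/R2 = 4.15, pole 1/(R2*C2) = 2200 rad/s set exactly,
# zero 1/((R1+R2)*C2) falling where it may.
gain, pole = 4.15, 2200.0
C2 = 100e-9                      # arbitrary seed value, 100 nF
R2 = 1 / (pole * C2)             # enforce 1/(R2*C2) = 2200
R1 = (gain - 1) * R2             # enforce 1 + R1/R2 = 4.15
zero = 1 / ((R1 + R2) * C2)      # resulting zero location, = pole/gain

print(f"R2 = {R2:.0f} ohm, R1 = {R1:.0f} ohm, zero = {zero:.1f} rad/s")
```

Scaling C2 up or down simply scales both resistors the other way, so the impedance level can be chosen freely without moving the pole or zero.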
To make the filter perfectly, you can for example use this schematic:
The gain of this amplifier is
$$ A(s) = \frac{C_1s + G_1}{C_2s + G_2} $$
which almost gives you a one on one mapping. |
Given a probability space $(\Omega, \Sigma, \pi)$, a measurable space $(X, \chi)$ and a measurable function $f : \Omega \rightarrow X$.
We can define a regular conditional probability $v(x, A)$, such that for all $A \in \Sigma$ and all $C \in \chi$,
$\int_C v(x, A) \pi(f^{-1}(dx)) = \pi(A \cap f^{-1}(C))$.
Let $g$ be a function such that $g(x, A) = A_x = \{y \vert y \in A \wedge f(y) = x \}$ (Throwing away all the elements which don't map to $x$).
We define a measurable space for each $x$, $(\Omega_x, \Sigma_x)$, such that $\Omega_x = g(x, \Omega)$, $\Sigma_x = \{g(x, A) \vert A \in \Sigma\}$. There exists a probability measure $\pi_x$ on $(\Omega_x, \Sigma_x)$ such that $v(x, A) = \pi_x(g(x, A))$.
This existence would require the following to be true about $v$, $\forall A_1, A_2 \in \Sigma$,
$g(x, A_1) = g(x, A_2) \implies v(x, A_1) = v(x, A_2)$.
Are there any restrictions, other than the existence of a regular conditional probability distribution, needed for me to construct spaces $(\Omega_x, \Sigma_x, \pi_x)$ for each $x \in X$ and guarantee the above condition?
This formulation seems to be trivially true for product spaces i.e. $\Omega = X \times Y$.
Can anyone point me to a paper/book exploring something similar?
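In the finite discrete case, where $v(x, A) = \pi(A \cap f^{-1}\{x\}) / \pi(f^{-1}\{x\})$, the defining identity can be checked exhaustively. A toy Python sketch (all names and the example space are illustrative):

```python
from fractions import Fraction
from itertools import combinations

# Toy finite space: Omega with measure pi, and a map f: Omega -> X.
Omega = ['a', 'b', 'c', 'd']
pi = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 8), 'd': Fraction(1, 8)}
f = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
X = {0, 1}

def measure(S):
    return sum((pi[w] for w in S), Fraction(0))

def fiber(x):
    return {w for w in Omega if f[w] == x}            # f^{-1}({x})

def v(x, A):
    return measure(A & fiber(x)) / measure(fiber(x))  # conditional prob. on the fiber

def subsets(S):
    S = list(S)
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# Defining identity: sum_{x in C} v(x, A) * pi(f^{-1}({x})) = pi(A ∩ f^{-1}(C)),
# checked for every A ⊆ Omega and every C ⊆ X, with exact arithmetic.
for A in subsets(Omega):
    for C in subsets(X):
        lhs = sum((v(x, A) * measure(fiber(x)) for x in C), Fraction(0))
        rhs = measure(A & {w for w in Omega if f[w] in C})
        assert lhs == rhs
```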
I have been asked this question by school kids, colleagues and family (usually less formally):
When ascending a flight of stairs, you exchange mechanical work to attain potential energy ($W_{ascend} = E_{pot} = m \cdot g \cdot h$).
However, when descending, you have to exert an equivalent force to stop yourself from accelerating and hitting the ground (with $v_{splat} = \sqrt{2 \cdot g \cdot h}$). If you arrive downstairs with $v_{vertical} \ll v_{splat}$, you counteracted basically all of your potential energy, i.e. $\int F(h) \cdot dh = W_{descend} \approx E_{pot} = m \cdot g \cdot h$.
So is the fact that ascending stairs is commonly perceived as significantly more exhausting than descending the same stairs purely a biomechanical thing, e.g. having joints instead of muscles absorb/counteract kinetic energy? Or is there a physical component I am missing?
edit1:
I feel I need to clarify some points in reaction to the first answers.
A)
The only reason I introduced velocity into the question was to show that you actually have to expend energy going downstairs to prevent ending up as a wet spot on the floor at the bottom of the steps.
The speed with which you ascend or descend doesn't make a difference when talking about the energy, which is why I formulated the question primarily using energy and mechanical work. Imagine that while ascending you pause for a tiny moment after each step ($v = 0$). Regardless of whether you ascended very slowly or very quickly, you would have invested the same amount of work and gained the same amount of potential energy ($\delta W = m \cdot g \cdot \delta h_{step} = \delta E_{pot}$).
The same holds true while descending. After each step, you would have gained kinetic energy equivalent to $E_{kin} = m \cdot g \cdot \delta h_{step}$, but again, imagine you take a tiny pause after each step. For each step, you will have to exert a force with your legs such that you come to a complete stop (at least in y direction). However fast or slow you do it, you mathematically will end up expending $W_{step} = \int F(h) \cdot dh = m \cdot g \cdot \delta h_{step}$.
If you expended any less "brake" work, some of your kinetic energy in y direction would remain for each step, and adding that up over a number of steps would result in an arbitrarily high terminal velocity at the bottom of the stairs. Since we usually survive descending stairs, my argument is that you will have to expend approximately the same amount of energy going down as going up, in order to reach the bottom of arbitrarily long flights of stairs safely (i.e. with $v_y \approx 0$).
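To put rough numbers on the argument (assumed values: a 70 kg person and a single 3 m flight; neither figure is from the question):

```python
import math

# Assumed round numbers: m = 70 kg, g = 9.81 m/s^2, h = 3 m.
m, g, h = 70.0, 9.81, 3.0
E_pot = m * g * h                # work done ascending, in joules
v_splat = math.sqrt(2 * g * h)  # impact speed if you didn't brake at all, m/s

# Roughly 2060 J and 7.7 m/s; the braking work on the way down must also
# total about E_pot if you arrive at the bottom with negligible vertical speed.
print(f"E_pot = {E_pot:.0f} J, v_splat = {v_splat:.2f} m/s")
```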
B) I am fairly sure that friction does not play a significant role in this thought experiment.
Air friction as well as friction between your shoes and the stairs should be pretty much the same while ascending and descending. In both cases, it would be basically the same amount of additional energy expenditure, still yielding identical total energy amounts for ascending and descending. Anna v is of course right in pointing out that you need the friction between your shoes and the stairs to be able to exert any force at all without slipping (such as on ice), but in the case of static friction without slippage, no significant amount of energy should be dissipated: said friction exerts force mainly in the x direction, while the deceleration of your body has a mostly y component, since the x component is roughly constant while moving on the stairs (the frictional force and the movement are roughly orthogonal, so no energy is lost to friction work).
edit2: Reactions to some more comments and replies, added some emphasis to provide structure to the wall of text.
C) No,
I am not arguing that descending is subjectively less exhausting, I am asking why it is less exhausting when the mechanics seem to indicate it shouldn't be.
D)
There is no "free" or "automatic" normal force emanating from the stairs that stops you from accelerating.
The normal force provided by the mechanical stability of the stairs stops the stairs from giving in when you step on them, but you have to provide an equal and opposite force (i.e. from your legs) to decelerate your center of gravity, otherwise you will feel the constraining force of the steps in a very inconvenient manner. Try not using your leg muscles when descending stairs if you are not convinced (please use short stairs for your own safety).
E) Also, as several people pointed out,
we as humans have no way of using or reconverting our stored potential energy to decelerate ourselves. We do not have a built-in dynamo or similar device that allows us to do anything with it - while descending the stairs we actually have to "get rid of it" in order to not accelerate uncontrollably. I am well aware that energy is never truly lost, but also the "energy diversion instead of expenditure" process some commenters suggested is flawed (most answers use some variation of the argument I'm discussing in C, or "you just need to relax/let go to go downhill", which is true, but you still have to decelerate, which leads to my original argument that decelerating mathematically costs exactly as much energy as ascending).
F) Some of the better points so far were first brought up by dmckee and Yakk:
1. Your muscles have to continually expend chemical energy to sustain a force, even if the force is not acting in the sense of $W = F \cdot s$. Holding up a heavy object is one example of that. This point merits more discussion; I will post about that later today.
2. You might use different muscle groups in your legs while ascending and descending, making ascending more exhausting for the body (while not really being harder energetically). This is right up the alley of what I meant by biomechanical effects in my original post.
edit 3: In order to address E as well as F1, let's try to convert the process to explicit kinematics and equations of motion. I will argue that the force you need to exert is the same during ascent and descent, both over the y direction (amount of work) and over time (since your muscles expend energy per time to be able to exert a force).
When ascending (or descending) stairs, you bounce a little to not trip over the steps. Your center of gravity moves along the x axis of the image with two components: your roughly linear ascent/descent (slope depends on the steepness of the stairs, here 1 for simplicity) and a component that models the bounce in your step (also, the alternating of legs). The image assumes $$h(x) = x + A \cdot \cos(2 \pi \cdot x) + c$$ Here, $c$ is the height of your CoG over the stairs (depends on body height and weight distribution, but is ultimately without consequence) and $A$ is the amplitude of the bounce in your step.
Differentiating (and taking the horizontal velocity to be constant at 1, so that $x$ doubles as time) gives the velocity and acceleration in the y direction $$ v(x) = 1 - 2 \pi \cdot A \sin(2 \pi \cdot x)\\ a(x) = -(2 \pi)^2 \cdot A \cos(2 \pi \cdot x) $$ The total force your legs have to exert has two parts: counteracting gravity, and making you move according to $a(x)$, so $$F(x) = m \cdot g + m \cdot a(x)$$ The next image shows $F(x)$ for $A = 0.25$ and $m = 80\,\mathrm{kg}$. I interpret the image as showing the following:
In order to gain height, you forcefully push with your lower leg, a) counteracting gravity and b) gaining momentum in y direction. This corresponds to the maxima in the force plotted roughly in the center of each step. Your momentum carries you to the next step. Gravity slows your ascent, such that on arriving on the next step your velocity in y direction is roughly zero (not plotted $v(x)$). During this period of time right after completely straightening the pushing lower leg, your leg exerts less force (remaining force depending on the bounciness of your stride, $A$) and you land with your upper foot, getting ready for the next step. This corresponds to the minima in F(x).
The exact shape of h(x) and hence F(x) can be debated, but they should look qualitatively similar to what I outlined. My main points are:
1. Walking down the stairs, you read the images right-to-left instead of left-to-right. Your $h(x)$ will be the same and hence $F(x)$ will be the same. So $W_{desc} = \int F(x) \, dx = W_{asc}$: the spent amounts of energy should be equal. In this case, the minima in $F(x)$ correspond to letting yourself fall to the next step (as many answers pointed out), but crucially, the maxima correspond to exerting a large force on landing with your lower leg in order to a) hold your weight up against gravity and b) decelerate your fall to near zero vertical velocity.
2. If you move with roughly constant x velocity, $F(x)$ is proportional to $F(t)$. This is important for the argument that your muscles consume energy based on the time they are required to exert a force, $W_{muscle} \approx \int F(t) \, dt$. Reading the image right-to-left, $F(t)$ is read right-to-left, but keeps its shape. Since the time required for each segment of the ascent is equal to the equivalent "falling" descent portion (time symmetry of classical mechanics), the integral $W_{muscle}$ remains constant as well. This result carries over to non-linear muscle energy consumption functions that depend on higher orders of $F(t)$ to model strength limits, muscle exhaustion over time, and so on.
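To make the symmetry argument concrete, here is a small numerical check (a sketch, not part of the original post; it uses the $A = 0.25$ and $m = 80\,\mathrm{kg}$ values assumed above) that the work integral of $F(x)$ over one step is the same whether the profile is read left-to-right or right-to-left, and equals $m \cdot g \cdot \delta h$:

```python
import numpy as np

def trapz(y, dx):
    # trapezoidal rule on a uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

m, g, A = 80.0, 9.81, 0.25          # mass [kg], gravity [m/s^2], bounce amplitude
x = np.linspace(0.0, 1.0, 100001)   # one step of unit height
dx = x[1] - x[0]

# F(x) = m*g + m*a(x), with a(x) = -(2*pi)^2 * A * cos(2*pi*x)
F = m * g - m * (2 * np.pi) ** 2 * A * np.cos(2 * np.pi * x)

W_asc = trapz(F, dx)          # ascent: profile read left to right
W_desc = trapz(F[::-1], dx)   # descent: same profile read right to left
print(W_asc, W_desc)          # both equal m*g*1: the bounce term integrates to zero
```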
Given any solution to PCA, a sign-flipped version of it is an equally valid solution. A numerical solver breaks this symmetry by finding one of these equally valid solutions. Implementation details and initial conditions determine which solution the solver will produce.
One way to think about PCA is that it maximizes the sum of the variance of the data projected onto the weight vectors, subject to the constraint that the weight vectors are orthonormal. Say the data set contains $n$ points in a $d$ dimensional space with mean $\mu$. We seek a set of orthonormal vectors $\{v_1, ..., v_p\}$ that solves the following optimization problem:
$$\max_{v_1, \ldots, v_p} \quad\sum_{i=1}^p \frac{1}{n} \sum_{j=1}^n \left [ (x_j - \mu)^T v_i \right ]^2\quad \quad s.t. \quad\begin{array}{ll} \|v_i\| = 1 & \forall i \quad \\ v_i^T v_j = 0 & \forall i \ne j \\\end{array}$$
Say we have a set of weight vectors that solves this problem. You can see that flipping their signs gives the exact same value for the objective function, and the constraints remain satisfied. Hence, the sign-flipped solution is equally valid. There are various other ways of thinking about PCA and writing it as an optimization problem. Some of these ways sound conceptually different, but they all yield the same set of solutions, and the same reasoning holds for all of them.
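This invariance is easy to verify numerically. The following sketch (with illustrative toy data and variable names, not any particular library's internals) computes the top principal direction as an eigenvector of the covariance matrix and checks that $v$ and $-v$ give the same projected variance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])  # toy anisotropic data
Xc = X - X.mean(axis=0)                                   # center the data

# Principal directions = eigenvectors of the covariance matrix
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
v = eigvecs[:, -1]                      # top component

var_v = np.mean((Xc @ v) ** 2)      # variance of projections onto v
var_neg = np.mean((Xc @ -v) ** 2)   # ... and onto -v
print(var_v, var_neg)               # identical: the objective cannot tell them apart
```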
The reason a particular function implementing PCA returns any one of these solutions over the others comes down to implementation details and initial conditions. Say we're trying to solve the above problem using a standard optimization solver. It starts from some (possibly random) initial set of parameters, then iteratively updates them to increase the value of the objective function, while respecting the constraints. Imagine the objective function as a hilly landscape, where each location corresponds to a particular choice of parameters and the height at each location is the value of the objective function for those parameters. The constraints define the regions the solver is allowed to visit. Each solution to the problem is the highest allowed location on some surrounding hill. There are multiple hills, and the solutions all have the same height (i.e. are equally good). The solver starts from some initial location in this landscape and generally tries to move uphill, eventually stopping when it can't make any further uphill progress. So, the solution it finally attains is determined by the hill it starts on and how it chooses to step around the landscape.
Of course, one wouldn't typically solve PCA this way because there are more specialized, computationally efficient ways to do it. For example, one popular method is to obtain the weights as eigenvectors of the covariance matrix. But the eigenvalue solver is itself an iterative algorithm, and is subject to the same kinds of issues.
Quantum mechanics
Revision as of 16:26, 22 November 2016
Quantum mechanics is the branch of physics that describes the behavior of systems on very small length and energy scales, such as those found in atomic and subatomic interactions. The fundamental principle of quantum mechanics is that there is an uncertainty in the location of a subatomic particle until attention is focused on it by observing its location. This insight is essential for understanding certain concepts that classical physics cannot explain, such as the discrete nature of small-scale interactions, wave-particle duality, the uncertainty principle, and quantum entanglement. Quantum mechanics forms the basis for our understanding of many phenomena, including chemical reactions and radioactive decay, and is used by all computers and electronic devices today. In addition, quantum mechanics explains why the Second Law of Thermodynamics is always true. The Book of Genesis explains that the world was an abyss of chaos at the moment of creation. Quantum mechanics is anticipated in several additional respects by Biblical scientific foreknowledge: the order created by God rests on a foundation of uncertainty.
The name "Quantum Mechanics" comes from the idea that energy is transmitted in discrete quanta, and not continuous. Another historical name for "quantum mechanics" was "wave mechanics."
History
Until the early 1900s, scientists believed that electrons and protons were small discrete lumps. Thus, electrons would orbit the nucleus of an atom just as planets orbit the sun. The problem with this idea was that, according to classical electromagnetism, the orbiting electron would emit energy as it orbited. This would cause it to lose rotational kinetic energy and orbit closer and closer to the proton, until it collapses into the proton! Since atoms are stable, this model could not be correct.
The idea of "quanta", or discrete units, of energy was proposed by Max Planck in 1900 to explain the energy spectrum of black-body radiation. He proposed that the energy of what we now call a photon is proportional to its frequency. In 1905, Albert Einstein also suggested that light is composed of discrete packets (quanta) in order to explain the photoelectric effect.
In 1913, Niels Bohr applied this to the electron problem by proposing that angular momentum is also quantized - electrons can only orbit at certain locations, so they cannot spiral into the nucleus. While this model explained how atoms do not collapse, not even Bohr himself had any idea why. As Sir James Jeans remarked, the only justification for Bohr's theory was "the very weighty one of success".
[1]
It was Prince Louis de Broglie who explained Bohr's theory in 1924 by describing the electron as a wave with wavelength λ=h/p. Therefore, it would be logical that it could only orbit in orbits whose circumference is equal to an integer number of wavelengths. Thus, angular momentum is quantized as Bohr predicted, and atoms do not self-destruct.
[1]
Eventually, the mathematical formalism that became known as quantum mechanics was developed in the 1920s and 1930s by John von Neumann, Hermann Weyl, and others, after Erwin Schrodinger's discovery of wave mechanics and Werner Heisenberg's discovery of matrix mechanics.
The work of Tomonaga, Schwinger and Feynman in quantum electrodynamics led to the modern framework of quantum mechanics, currently applicable in quantum electrodynamics and quantum chromodynamics.
Principles
Every system can be described by a wave function, which is generally a function of the position coordinates and time. All possible predictions of the physical properties of the system can be obtained from the wave function. The wave function can be obtained by solving the Schrodinger equation (or the Klein-Gordon equation for relativistic quantum mechanics) for the system. An observable is a property of the system which can be measured. In some systems, many observables can take only certain specific values. If we measure such an observable, generally the wave function does not predict exactly which value we will obtain. Instead, the wave function gives us the probability that a certain value will be obtained. After a measurement is made, the wave function is permanently changed in such a way that any successive measurement will certainly return the same value. This is called the collapse of the wave function.
Collapse of the wave function
In quantum mechanics, it is meaningless to make absolute statements such as "the particle is here". This is a consequence of the Heisenberg Uncertainty Principle, which (simply put) states that particles move in an apparently random manner, so giving a definite position to a particle is meaningless. Instead, scientists use the particle's "position function", or "wave function", which gives the probability of a particle being at any point. Where the function increases, the probability of finding the particle in that location increases. In the diagram, where the particle is free to move in one direction, we see that there is a region (close to the y-axis) where the particle is more likely to be found. However, we also notice that the wave function does not reach zero as it moves towards infinity in both directions. This means that there is a high likelihood of finding the particle around the center, but there is still a possibility that, if measured, the particle will be found a long way away.
When the particle is actually observed to be in a specific location, its wave function is said to have "collapsed". This means that if it is again observed immediately the probability that it will be found near the original location is almost 1. However, if it is not immediately observed, the wave function reverts to its original shape as expected. The collapsed wave function has a much narrower and sharper peak than the original wave function.
Collapsing of the wave function is by no means magic. It can be intuitively understood as this: you find a particle at a particular spot; if you look again immediately, it's still in the same spot.
The uncertainty principle
As a result of the wave nature of a particle, certain quantities cannot be known to arbitrary precision simultaneously. This happens when the operators for the two quantities do not commute. An example is position and momentum. Whenever the position is measured more accurately (beyond a certain limit), the momentum becomes less certain, and vice versa. Hence, there is an inherent uncertainty that prevents precisely measuring both the position and the momentum simultaneously. This is known as the Heisenberg Uncertainty Principle:
$$\sigma_x \sigma_p \geq \frac{\hbar}{2}$$
[2]
where:
$\sigma_x$ is the standard deviation (uncertainty) of position, $\sigma_p$ is the standard deviation of momentum, and $\hbar$ is the reduced Planck constant
Other examples include energy and time, as well as different components of angular momentum.
Probability
The probability a particle is found in the region $a \le x \le b$ can be described as:
$$P_{ab} = \int_a^b |\Psi(x,t)|^2 \, dx$$
provided that the wavefunction is normalised:
$$\int_{-\infty}^{\infty} |\Psi(x,t)|^2 \, dx = 1$$
In general, the expected value of a measurement of a quantity $Q$ can be described as:
$$\langle Q \rangle = \int_{-\infty}^{\infty} \Psi^{*} \hat{Q} \, \Psi \, dx$$
so that the expected momentum is:
$$\langle p \rangle = \int^\infty_{-\infty} \Psi^{*} \hat{p} \, \Psi \, dx = \frac{\hbar}{i} \int^\infty_{-\infty} \Psi^{*} \frac{\partial \Psi}{\partial x} \, dx$$
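As an illustrative numerical check (not part of the original article; natural units and a Gaussian wavepacket are assumed for the example), one can verify the normalization condition and compute $\langle p \rangle$ for a wavepacket with mean momentum $\hbar k_0$:

```python
import numpy as np

hbar = 1.0                 # natural units, assumed for this example
sigma, k0 = 1.0, 2.0       # wavepacket width and mean wavenumber
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

# Gaussian wavepacket psi(x) carrying mean momentum hbar*k0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2)) \
      * np.exp(1j * k0 * x)

def trapz(y, dx):
    # trapezoidal rule on a uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

norm = trapz(np.abs(psi) ** 2, dx).real                     # should be 1
dpsi = np.gradient(psi, dx)                                 # d(psi)/dx
p_exp = trapz(np.conj(psi) * (hbar / 1j) * dpsi, dx).real   # <p> = hbar*k0
print(norm, p_exp)
```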
Interpretations
Several interpretations have been advanced to explain how wavefunctions "collapse" to yield the observable world we see.
The "hidden variable" interpretation [3] says that there is actually a deterministic way to predict where the wavefunction will collapse; we simply have not discovered it. John von Neumann attempted to prove that there is no such way; however, John Stewart Bell pointed out an error in his proof.
The many-worlds interpretation says that each particle does show up at every possible location on its wavefunction; it simply does so in alternate universes. Thus, myriads of alternate universes are invisibly branching off of our universe every moment.
The currently prevailing interpretation, the Copenhagen interpretation, states that the wavefunction does not collapse until the particle is observed at a certain location; until it is observed, it exists in a quantum indeterminate state of simultaneously being everywhere in the universe. However, Schrodinger, with his famous thought experiment, raised the obvious question: who, or what, constitutes an observer? What distinguishes an observer from the system being observed? This distinction is highly complex, requiring the use of quantum decoherence theory, parts of which are not entirely agreed upon. In particular, quantum decoherence theory posits the possibility of "weak measurements", which can indirectly provide "weak" information about a particle without collapsing it. [4]
Applications
An important aspect of Quantum Mechanics is the predictions it makes about the radioactive decay of isotopes. Radioactive decay processes, controlled by the wave equations, are random events. A radioactive atom has a certain probability of decaying per unit time. As a result, the decay results in an exponential decrease in the amount of isotope remaining in a given sample as a function of time. The characteristic time required for 1/2 of the original amount of isotope to decay is known as the "half-life" and can vary from quadrillionths of a second to quintillions of years.
Quantum Mechanics has important applications in chemistry. The field of Theoretical Chemistry consists of using quantum mechanics to calculate the atomic and molecular orbitals occupied by electrons. Quantum Mechanics also explains the different kinds of spectroscopy used every day to identify the composition of materials.
See also
Concepts in quantum mechanics
Important contributors to quantum mechanics
For an excellent discussion of quantum mechanics, see:
The problem: pick a uniform random sample of $k$ elements from a large dataset of some unknown size $n$. The hidden assumption here is that $n$ is large enough that the whole dataset does not fit into main memory, whereas the desired sample does.
Let's first review how this problem is tackled in a sequential setting; then we'll proceed with a distributed map-reduce solution.

Reservoir sampling

One of the most common sequential approaches to this problem is the so-called reservoir sampling. The algorithm works as follows: the data is coming through a stream and the solution keeps a vector of $k$ elements (the reservoir) initialized with the first $k$ elements in the stream and incrementally updated as follows: when the $i$-th element arrives (with $i \gt k$), pick a random integer $r$ in the interval $[1,..,i]$, and if $r$ happens to be in the interval $[1,..,k]$, replace the $r$-th element in the solution with the current element.

A simple implementation in Python is the following. The input items are the lines coming from the standard input:

```python
# reservoir_sampling.py
import sys, random

k = int(sys.argv[1])
S, c = [], 0
for x in sys.stdin:
    if c < k:
        S.append(x)
    else:
        # c elements seen so far, so the current one is the (c+1)-th:
        # pick r uniformly among c+1 values and keep x with probability k/(c+1)
        r = random.randint(0, c)
        if r < k:
            S[r] = x
    c += 1
sys.stdout.write(''.join(S))
```

You can test it from the console as follows to pick 3 distinct random numbers between 1 and 100:

```shell
for i in {1..100}; do echo $i; done | python ./reservoir_sampling.py 3
```
Why does it work? The math behind it (*)
(Feel free to skip this section if math and probability are not your friends.)
Let's convince ourselves that every element belongs to the final solution with the same probability.
Let $x_i$ be the $i$-th element and $S_i$ be the solution obtained after examining the first $i$ elements. We will show that $\Pr[x_j \in S_i] = k/i$ for all $j\le i$ with $k\le i\le n$. This will imply that the probability that any element is in the final solution $S_n$ is exactly $k/n$.
The proof is by induction on $i$: the base case $i=k$ is clearly true since the first $k$ elements are in the solution with probability exactly 1. Now let's say we're looking at the $i$-th element for some $i>k$. We know that this element will enter the solution $S_i$ with probability exactly $k/i$. On the other hand, for any of the elements $j\lt i$, we know that it will be in $S_i$ only if it was in $S_{i-1}$ and is not kicked out by the $i$-th element. By induction hypothesis, $\Pr[x_j \in S_{i-1}]= k/(i-1)$, whereas the probability that $x_j$ is not kicked out by the current element is $(1-1/i) = (i-1)/i$. We can conclude that $\Pr[x_j \in S_{i}] = \frac{k}{i-1}\cdot\frac{i-1}{i} = \frac{k}{i}$.
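To see the $k/n$ guarantee in action, here is a small empirical check (an illustrative sketch, independent of the script above) that runs reservoir sampling many times and measures how often each element lands in the sample:

```python
import random
from collections import Counter

def reservoir(stream, k, rng):
    # standard reservoir sampling: element i (0-based) enters the
    # sample with probability k/(i+1)
    S = []
    for i, x in enumerate(stream):
        if i < k:
            S.append(x)
        else:
            r = rng.randint(0, i)  # i+1 equally likely values
            if r < k:
                S[r] = x
    return S

rng = random.Random(0)
n, k, trials = 20, 5, 20000
counts = Counter()
for _ in range(trials):
    counts.update(reservoir(range(n), k, rng))

freqs = [counts[j] / trials for j in range(n)]
print(min(freqs), max(freqs))  # every element should be near k/n = 0.25
```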
MapReduce solution

How do we move from a sequential solution to a distributed solution? To make the problem more concrete, let's say we have a number of files where each line is one of the input elements (the number of lines over all files sums up to $n$) and we'd like to select exactly $k$ of those lines.

The naive solution

The simplest solution is to reduce the distributed problem to a sequential problem by using a single reducer and have every mapper map every line to that reducer. Then the reducer can apply the reservoir sampling algorithm to the data. The problem with this approach though is that the amount of data sent by the mappers to the reducer is the whole dataset.

A better approach

The core insight behind reservoir sampling is that picking a random sample of size $k$ is equivalent to generating a random permutation (ordering) of the elements and picking the top $k$ elements. Indeed, a random sample can be generated as follows: associate a random float id with each element and pick the elements with the $k$ largest ids. Since the ids induce a random ordering of the elements (assuming the ids are distinct), it is clear that the elements associated with the $k$ largest ids form a random subset.
We will start implementing this new algorithm in a streaming sequential setting. The goal here is to incrementally keep track of the $k$ elements with largest ids seen so far. A useful data structure that can be used to this goal is the binary min-heap. We can use it as follows: we initialize the heap with the first $k$ elements, each associated with a random id. Then, when a new element comes, we associate a random id with it: if its id is larger than the smallest id in the heap (the heap's root), we replace the heap's root with this new element.
A simple implementation in Python is the following:

```python
# rand_subset_seq.py
import sys, random
from heapq import heappush, heapreplace

k = int(sys.argv[1])
H = []
for x in sys.stdin:
    r = random.random()  # this is the id
    if len(H) < k:
        heappush(H, (r, x))
    elif r > H[0][0]:  # H[0] is the root of the heap, H[0][0] its id
        heapreplace(H, (r, x))
sys.stdout.write(''.join([x for (r, x) in H]))
```

Again, the following test picks 3 distinct random numbers between 1 and 100:

```shell
for i in {1..100}; do echo $i; done | python ./rand_subset_seq.py 3
```

By looking at the problem under this new light, we can now provide an improved map-reduce implementation. The idea is to compute the ordering distributedly, with each mapper associating a random id with each element and keeping track of the top $k$ elements. The top $k$ elements of each mapper are then sent to a single reducer which will complete the job by extracting the top $k$ elements among all. Notice how in this case the amount of data sent out by the map phase is reduced to the top $k$ elements of each mapper as opposed to the whole dataset.
An important trick that we can use is the fact that the Hadoop framework will automatically present the values to the reducer in order of keys, from lowest to highest. Therefore, by using the negation of the id as key, the first $k$ elements read by the reducer will be the top $k$ elements we are looking for.
We now provide the mapper and reducer code in Python, to be used with Hadoop streaming.

The following is the code for the mapper:

```python
#!/usr/bin/python
# rand_subset_m.py
import sys, random
from heapq import heappush, heapreplace

k = int(sys.argv[1])
H = []
for x in sys.stdin:
    r = random.random()
    if len(H) < k:
        heappush(H, (r, x))
    elif r > H[0][0]:
        heapreplace(H, (r, x))
for (r, x) in H:
    # by negating the id, the reducer receives the elements from highest to lowest
    sys.stdout.write('%f\t%s' % (-r, x))
```

The reducer simply returns the first $k$ elements received:

```python
#!/usr/bin/python
# rand_subset_r.py
import sys

k = int(sys.argv[1])
c = 0
for line in sys.stdin:
    (r, x) = line.split('\t', 1)
    sys.stdout.write(x)
    c += 1
    if c == k:
        break
```

We can test the code by simulating the map-reduce framework. First, add the execution flag to the mapper and reducer files (e.g., chmod +x ./rand_subset_m.py and chmod +x ./rand_subset_r.py). Then we pipe the data to the mapper, sort the mapper output, and pipe it to the reducer:

```shell
k=3; for i in {1..100}; do echo $i; done | ./rand_subset_m.py $k | sort -k1,1n | ./rand_subset_r.py $k
```
Running the Hadoop jobWe can finally run our Python MapReduce job with Hadoop. If you don't have Hadoop installed, you can easily set it up on your machine following these steps. We leverage Hadoop Streaming to pass the data between our Map and Reduce phases via standard input and output. Run the following command, replacing [myinput] and [myoutput] with your desired locations. Here, we assume that the environment variable HADOOP_INSTALL refers to the Hadoop installation directory.
The first flag sets a single reducer, whereas the second and third are used to make Hadoop sort the keys numerically (as opposed to using string comparison).
```shell
k=10 # set k to what you need
hadoop jar ${HADOOP_INSTALL}/contrib/streaming/hadoop-*streaming*.jar \
  -D mapred.reduce.tasks=1 \
  -D mapred.output.key.comparator.class=org.apache.hadoop.mapred.lib.KeyFieldBasedComparator \
  -D mapred.text.key.comparator.options=-n \
  -file ./rand_subset_m.py -mapper "./rand_subset_m.py $k" \
  -file ./rand_subset_r.py -reducer "./rand_subset_r.py $k" \
  -input [myinput] -output [myoutput]
```
Further notes

The algorithm-savvy reader has probably noticed that while reservoir sampling takes linear time to complete (as every step takes constant time), the same cannot be said of the approach that uses the heap. Each heap operation takes $O(\log k)$ time, so a trivial bound for the overall running time would be $O(n \log k)$. However, this bound can be improved, as the heap replace operation is only executed when the $i$-th element is larger than the root of the heap. This happens only if the $i$-th element is one of the $k$ largest elements among the first $i$ elements, which happens with probability $k/i$. Therefore the expected number of heap replacements is $\sum_{i=k+1}^n k/i \approx k \log(n/k)$. The overall time complexity is then $O(n + k\log(n/k)\log k)$, which is substantially linear in $n$ unless $k$ is comparable to $n$.

What if the sample doesn't fit into memory?

So far we worked under the assumption that the desired sample would fit into memory. While this is usually the case, there are scenarios in which the assumption may not hold. After all, in the big data world, 1% of a huge dataset may still be too much to keep in memory!
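The expected-replacements claim is easy to check empirically. This sketch (illustrative, with assumed values of $n$ and $k$) counts how many times the heap root actually gets replaced and compares the average with $k \ln(n/k)$:

```python
import math
import random
from heapq import heappush, heapreplace

def count_replacements(n, k, rng):
    # run the heap-based sampler on n random ids, counting replace operations
    H, reps = [], 0
    for _ in range(n):
        r = rng.random()
        if len(H) < k:
            heappush(H, r)
        elif r > H[0]:
            heapreplace(H, r)
            reps += 1
    return reps

rng = random.Random(1)
n, k, trials = 100000, 100, 20
avg = sum(count_replacements(n, k, rng) for _ in range(trials)) / trials
print(avg, k * math.log(n / k))  # the two numbers should be close
```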
A simple solution to generate large samples is to modify the mapper to simply output every item along with a random id as key. The MapReduce framework will sort the items by id (substantially, generating a random permutation of the elements). The (single) reducer can be left as is to just pick the first $k$ elements. The drawback with this approach is again that the whole dataset needs to be sent to a single reducer. Moreover, even if the reducer does not store the $k$ items in memory, it has to go through them, which can be time-consuming if $k$ is very large (say $k=n/2$).
We now discuss a different approach that uses multiple reducers. The key idea is the following: suppose we have $\ell$ buckets and generate a random ordering of the elements first by putting each element in a random bucket and then by generating a random ordering in each bucket. The elements in the first bucket are considered smaller (with respect to the ordering) than the elements in the second bucket and so on. Then, if we want to pick a sample of size $k$, we can collect all of the elements in the first $j$ buckets if they overall contain a number of elements $t$ less than $k$, and then pick the remaining $k-t$ elements from the next bucket. Here $\ell$ is a parameter such that $n/\ell$ elements fit into memory. Note the key aspect that buckets can be processed distributedly.
The implementation is as follows: mappers associate with each element an id $(j,r)$ where $j$ is a random index in $\{1,2,\ldots,\ell\}$ to be used as key, and $r$ is a random float for secondary sorting. In addition, mappers keep track of the number of elements with key less than $j$ (for $1\le j\le \ell$) and transmit this information to the reducers. The reducer associated with some key (bucket) $j$ acts as follows: if the number of elements with key less than or equal to $j$ is at most $k$, then output all elements in bucket $j$; otherwise, if the number of elements with key strictly less than $j$ is $t\lt k$, then run reservoir sampling to pick $k-t$ random elements from the bucket; in the remaining case, that is when the number of elements with key strictly less than $j$ is at least $k$, don't output anything.
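Before looking at the actual mapper and reducer, the three-case logic can be validated with a small in-memory simulation (an illustrative sketch with made-up function names, not the Hadoop code itself):

```python
import random
from collections import defaultdict

def bucketed_sample(items, k, l, rng):
    # map phase: throw each item into a random bucket
    buckets = defaultdict(list)
    for x in items:
        buckets[rng.randrange(l)].append(x)
    # reduce phase: process buckets in key order, applying the three cases
    out, t = [], 0  # t = number of elements in buckets already processed
    for j in range(l):
        b = buckets[j]
        if t + len(b) <= k:
            out.extend(b)                     # whole bucket fits in the sample
        elif t < k:
            out.extend(rng.sample(b, k - t))  # sample the remainder from this bucket
        # else: we already have k elements, output nothing
        t += len(b)
    return out

rng = random.Random(42)
sample = bucketed_sample(list(range(1000)), 50, 10, rng)
print(len(sample), len(set(sample)))  # exactly 50 distinct elements
```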
After outputting the elements, the mapper sends the relevant counts to each reducer, using -1 as secondary key so that this info is presented to the reducer first.
The reducer first reads the counts for each bucket and decides what to do accordingly.
#!/usr/bin/python# rand_large_subset_m.pyimport sys, random l = int(sys.argv[1])S = [0 for j in range(l)]for x in sys.stdin: (j,r) = (random.randint(0,l-1), random.random()) S[j] += 1 print '%d\t%f\t%s' % (j, r, x),for j in range(l): # compute partial sums prev = 0 if j == 0 else S[j-1] S[j] += prev # number of elements with key less than j print '%d\t-1\t%d\t%d' % (j, prev, S[j]) # secondary key is -1 so reducer gets this first
#!/usr/bin/python
# rand_large_subset_r.py
import sys, random

k = int(sys.argv[1])
line = sys.stdin.readline()
while line:
    # Aggregate the mappers' count records for this bucket
    less_count, upto_count = 0, 0
    (j, r, x) = line.split('\t', 2)
    while float(r) == -1:
        l, u = x.split('\t', 1)
        less_count, upto_count = less_count + int(l), upto_count + int(u)
        (j, r, x) = sys.stdin.readline().split('\t', 2)
    n = upto_count - less_count  # elements in bucket j
    # Proceed with one of the three cases
    if upto_count <= k:  # in this case output the whole bucket
        print x,
        for i in range(n - 1):
            (j, r, x) = sys.stdin.readline().split('\t', 2)
            print x,
    elif less_count >= k:  # in this case do not output anything
        for i in range(n - 1):
            line = sys.stdin.readline()
    else:  # run reservoir sampling picking (k - less_count) elements
        t = k - less_count  # local reservoir size, so k stays intact for later buckets
        S = [x]
        for i in range(1, n):
            (j, r, x) = sys.stdin.readline().split('\t', 2)
            if i < t:
                S.append(x)
            else:
                # inclusive upper bound: element i is kept with probability t/(i+1)
                r = random.randint(0, i)
                if r < t:
                    S[r] = x
        print ''.join(S),
    line = sys.stdin.readline()

The following bash statement tests the code with $\ell=10$ and $k=50$ (note the sort flag to simulate secondary sorting):

l=10; k=50; for i in {1..100}; do echo $i; done | ./rand_large_subset_m.py $l | sort -k1,2n | ./rand_large_subset_r.py $k
Running the Hadoop job

Again, we're assuming you have Hadoop ready to crunch data (if not, follow these steps). To run our Python MapReduce job with Hadoop, run the following command, replacing [myinput] and [myoutput] with your desired locations.
k=100000 # set k to what you need
l=50     # set the number of "buckets"
r=16     # set the number of "reducers" (depends on your cluster)
hadoop jar ${HADOOP_INSTALL}/contrib/streaming/hadoop-*streaming*.jar \
  -D mapred.reduce.tasks=$r \
  -D mapred.output.key.comparator.class=org.apache.hadoop.mapred.lib.KeyFieldBasedComparator \
  -D stream.num.map.output.key.fields=2 \
  -D mapred.text.key.partitioner.options=-k1,1 \
  -D mapred.text.key.comparator.options="-k1n -k2n" \
  -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
  -file ./rand_large_subset_m.py -mapper "./rand_large_subset_m.py $l" \
  -file ./rand_large_subset_r.py -reducer "./rand_large_subset_r.py $k" \
  -input [myinput] -output [myoutput]

Note how we enabled secondary key sorting as explained in the Hadoop streaming quickguide. Each map output record is composed of the bucket $j$, the random id $r$, and the rest. We use stream.num.map.output.key.fields to set the key to be the pair $(j, r)$. We use mapred.text.key.partitioner.options along with the -partitioner argument to partition only over $j$. Finally, we use mapred.text.key.comparator.options along with mapred.output.key.comparator.class to sort by $j$ and then by $r$, both in numerical order.
Problem with understanding the proof of Sauer Lemma
I will replicate the proof here which is from the book "Learning from Data"
Sauer Lemma:
$B(N,k) \leq \sum_{i=0}^{k-1}{N\choose i}$
Proof:
The statement is true whenever $k = 1$ or $N = 1$, by inspection. The proof is by induction on $N$. Assume the statement is true for all $N \leq N_0$ and for all $k$. We need to prove the statement for $N = N_0 + 1$ and for all $k$. Since the statement is already true when $k = 1$ (for all values of $N$) by the initial condition, we only need to worry about $k \geq 2$. By a lemma proven in the book, $B(N_0 + 1, k) \leq B(N_0, k) + B(N_0, k-1)$, and applying the induction hypothesis to each term on the RHS, we get the result.
**My Concern** From what I see, this proof only shows that the statement for $B(N, k)$ implies the statement for $B(N+1, k)$. I can't see how it shows that the statement for $B(N, k)$ implies the one for $B(N, k+1)$. This problem arises because the $k$ in $B(N_0 + 1, k)$ and $B(N_0, k)$ are the same, so I think I need to prove the other induction too. Why is the author able to prove it this way?
Re: Problem with understanding the proof of Sauer Lemma
OK, I think I will just post it below; I can't find an edit button. I mean, for two-variable induction, shouldn't we prove that $B(N,k)$ implies both $B(N+1,k)$ and $B(N,k+1)$?
Re: Problem with understanding the proof of Sauer Lemma
You can take the induction hypothesis to be that the inequality is satisfied for all $k$ at every $N \leq N_0$; the induction step then establishes the inequality for all $k$ at $N_0 + 1$ too.
Hope this helps.
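As a numeric sanity check of that two-variable induction (base cases $B(N,1)=1$ and $B(1,k)=2$ for $k\ge 2$, as given in "Learning from Data"), one can tabulate the recurrence bound for all $(N,k)$ pairs at once:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def B_upper(N, k):
    """Upper bound on B(N,k) via B(N,k) <= B(N-1,k) + B(N-1,k-1)."""
    if k == 1:
        return 1
    if N == 1:
        return 2
    return B_upper(N - 1, k) + B_upper(N - 1, k - 1)

def binom_sum(N, k):
    return sum(comb(N, i) for i in range(k))

# The hypothesis covers *all* k at each N, so we can check every pair.
ok = all(B_upper(N, k) <= binom_sum(N, k)
         for N in range(1, 15) for k in range(1, 10))
```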
I am trying to understand singular value decomposition. I get the general definition and how to solve for the singular values and form the SVD of a given matrix; however, I came across the following problem and realized that I did not fully understand how SVD works:
Let $0\ne u\in \mathbb{R}^{m}$. Determine an SVD for the matrix $uu^{*}$.
I understand that $uu^{*}$ has rank 1 and thus only has one nonzero singular value, i.e.
$$\Sigma = \operatorname{diag}(\sigma_1, 0, \ldots, 0) \in\mathbb{R}^{m\times m}$$
and I realize since $uu^{*}\in\mathbb{R}^{m\times m}$ then for $uu^{*}=U\Sigma V^{*}$ that $U,\Sigma,V\in\mathbb{R}^{m\times m}$. Additionally, I realize that the columns of $U$ and $V$ are orthonormal.
I guess my question is how do you determine U and V from $uu^{*}$? |
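A quick numerical check of what $U$, $\Sigma$, $V$ must look like (the particular $u$ below is our own choice): since $uu^*$ is symmetric positive semidefinite of rank 1, its only nonzero singular value is $\|u\|^2$, the first column of $U$ (and of $V$) is $\pm u/\|u\|$, and the remaining columns are any orthonormal basis of the orthogonal complement of $u$.

```python
import numpy as np

u = np.array([3.0, 0.0, 4.0])   # any nonzero u works; here ||u|| = 5
A = np.outer(u, u)              # the rank-1 matrix u u^*

U, s, Vt = np.linalg.svd(A)

sigma1 = s[0]                   # should equal ||u||^2 = 25
first_left = U[:, 0]            # should be +- u/||u||
```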
Abbreviation:
GEAlg
A generalized effect algebra is a separation algebra that is positive: $x\cdot y=e$ implies $x=e=y$.

A generalized effect algebra is of the form $\langle A,+,0\rangle$ where $+:A^2\to A\cup\{*\}$ is a partial operation such that

$+$ is commutative: $x+y\ne *$ implies $x+y=y+x$

$+$ is associative: $x+y\ne *$ implies $(x+y)+z=x+(y+z)$

$0$ is an identity: $x+0=x$

$+$ is cancellative: $x+y=x+z$ implies $y=z$, and

$+$ is positive: $x+y=0$ implies $x=0$.
Let $\mathbf{A}$ and $\mathbf{B}$ be generalized effect algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(e)=e$ and if $x + y\ne *$ then $h(x + y)=h(x) + h(y)$.
The number of nonisomorphic members of size $n$:
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &2\\ f(4)= &5\\ f(5)= &12\\ f(6)= &35\\ f(7)= &119\\ f(8)= &496\\ f(9)= &2699\\ f(10)= &21888\\ f(11)= &292496\\ \end{array}$ |
Abbreviation:
Grp
A group is a structure $\mathbf{G}=\langle G,\cdot,^{-1},e\rangle $, where $\cdot $ is an infix binary operation, called the group product, $^{-1}$ is a postfix unary operation, called the group inverse, and $e$ is a constant (nullary operation), called the identity element, such that
$\cdot $ is associative: $(xy)z=x(yz)$
$e$ is a left-identity for $\cdot$: $ex=x$
$^{-1}$ gives a left-inverse: $x^{-1}x=e$.
Remark: It follows that $e$ is a right-identity and that $^{-1}$ gives a right inverse: $xe=x$, $xx^{-1}=e$.
Let $\mathbf{G}$ and $\mathbf{H}$ be groups. A morphism from $\mathbf{G}$ to $\mathbf{H}$ is a function $h:G\rightarrow H$ that is a homomorphism:
$h(xy)=h(x)h(y)$, $h(x^{-1})=h(x)^{-1}$, $h(e)=e$
Example 1: $\langle S_{X},\circ ,^{-1},id_{X}\rangle $, the collection of permutations of a set $X$, with composition, inverse, and identity map.
Example 2: The general linear group $\langle GL_{n}(V),\cdot ,^{-1},I_{n}\rangle $, the collection of invertible $n\times n$ matrices over a vector space $V$, with matrix multiplication, inverse, and identity matrix.
Classtype: variety
Equational theory: decidable in polynomial time
Quasiequational theory: undecidable
First-order theory: undecidable
Congruence distributive: no ($\mathbb{Z}_{2}\times \mathbb{Z}_{2}$)
Congruence modular: yes
Congruence n-permutable: yes, n=2; $p(x,y,z)=xy^{-1}z$ is a Mal'cev term
Congruence regular: yes
Congruence uniform: yes
Congruence types: 1=permutational
Congruence extension property: no; consider a non-simple subgroup of a simple group
Definable principal congruences:
Equationally def. pr. cong.: no
Amalgamation property: yes
Strong amalgamation property: yes
Epimorphisms are surjective: yes
Locally finite: no
Residual size: unbounded
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &1\\ f(6)= &2\\ f(7)= &1\\ f(8)= &5\\ f(9)= &2\\ f(10)= &2\\ f(11)= &1\\ f(12)= &5\\ f(13)= &1\\ f(14)= &2\\ f(15)= &1\\ f(16)= &14\\ f(17)= &1\\ f(18)= &5\\ \end{array}$
Information about small groups up to size 2000: http://www.tu-bs.de/~hubesche/small.html |
Please read this introduction first before looking through the solutions. Here’s a quick index to all the problems in this section.
1. Prove Theorem 4 by showing that the determination of the required collineation is equivalent to solving a system of 12 homogeneous linear equations in 13 variables when the rank of the matrix of the coefficients is 12.
Let the matrix of the collineation be $$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Let’s represent the 4 given points by $P:(p_1, p_2, p_3)$, $Q:(q_1, q_2, q_3)$, $R:(r_1, r_2, r_3)$, $S:(s_1, s_2, s_3)$ and their images by $P’:(p’_1, p’_2, p’_3)$, $Q’:(q’_1, q’_2, q’_3)$, $R’:(r’_1, r’_2, r’_3)$, $S’:(s’_1, s’_2, s’_3)$.
As these are in homogeneous coordinates, the coordinates obtained by multiplying the matrix of the collineation by the given point can be any scalar multiple of the image point.
Hence, taking $k_1, k_2, k_3, k_4$ to be the unknown scaling factors corresponding to each image point, the following relationships hold between the points, their images and the elements of the matrix of the collineation.
$$k_1p’_1 = a_{11}p_1 + a_{12}p_2 + a_{13}p_3$$ $$k_1p’_2 = a_{21}p_1 + a_{22}p_2 + a_{23}p_3$$ $$k_1p’_3 = a_{31}p_1 + a_{32}p_2 + a_{33}p_3$$
$$k_2q’_1 = a_{11}q_1 + a_{12}q_2 + a_{13}q_3$$ $$k_2q’_2 = a_{21}q_1 + a_{22}q_2 + a_{23}q_3$$ $$k_2q’_3 = a_{31}q_1 + a_{32}q_2 + a_{33}q_3$$
$$k_3r’_1 = a_{11}r_1 + a_{12}r_2 + a_{13}r_3$$ $$k_3r’_2 = a_{21}r_1 + a_{22}r_2 + a_{23}r_3$$ $$k_3r’_3 = a_{31}r_1 + a_{32}r_2 + a_{33}r_3$$
$$k_4s’_1 = a_{11}s_1 + a_{12}s_2 + a_{13}s_3$$ $$k_4s’_2 = a_{21}s_1 + a_{22}s_2 + a_{23}s_3$$ $$k_4s’_3 = a_{31}s_1 + a_{32}s_2 + a_{33}s_3$$
This is a system of 12 homogeneous linear equations in 13 variables. The matrix for this system will be $$\begin{pmatrix} p_1 & p_2 & p_3 & 0 & 0 & 0 & 0 & 0 & 0 & -p’_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & p_1 & p_2 & p_3 & 0 & 0 & 0 & -p’_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & p_1 & p_2 & p_3 & -p’_3 & 0 & 0 & 0 \\ q_1 & q_2 & q_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q’_1 & 0 & 0 \\ 0 & 0 & 0 & q_1 & q_2 & q_3 & 0 & 0 & 0 & 0 & -q’_2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & q_1 & q_2 & q_3 & 0 & -q’_3 & 0 & 0 \\ r_1 & r_2 & r_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -r’_1 & 0 \\ 0 & 0 & 0 & r_1 & r_2 & r_3 & 0 & 0 & 0 & 0 & 0 & -r’_2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & r_1 & r_2 & r_3 & 0 & 0 & -r’_3 & 0 \\ s_1 & s_2 & s_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -s’_1 \\ 0 & 0 & 0 & s_1 & s_2 & s_3 & 0 & 0 & 0 & 0 & 0 & 0 & -s’_2 \\ 0 & 0 & 0 & 0 & 0 & 0 & s_1 & s_2 & s_3 & 0 & 0 & 0 & -s’_3 \end{pmatrix}\begin{pmatrix} a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32} \\ a_{33} \\ k_{1} \\ k_{2} \\ k_{3} \\ k_{4} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$
When the rank of the giant matrix above is 12 (due to no three points being collinear), the dimension of the column space will be 12, and as the number of columns is 13, the dimension of its nullspace will be 13 - 12 = 1 by the rank–nullity theorem. As the solution we’re looking for lies in the nullspace, and as the nullspace has dimension exactly one, it will exist and be unique (within an arbitrary scaling factor). This is exactly what Theorem 4 asserts.
(a) $(0, 0, 1) \rightarrow (1, -1, 2)$
$(1, -1, 2) \rightarrow (3, 0, 5)$ $(1, 1, 1) \rightarrow (0, 3, 1)$ $(1, 0, 0) \rightarrow (1, -1, 0)$
(b) $(1, 0, 0) \rightarrow (0, 1, 1)$
$(1, -1, 0) \rightarrow (1, -1, 0)$ $(2, 0, 1) \rightarrow (1, 0, 2)$ $(1, -1, 1) \rightarrow (0, 1, 0)$
(c) $(1, 0, 0) \rightarrow (1, 1, 1)$
$(0, 1, 0) \rightarrow (1, 0, 1)$ $(-1, 1, 1) \rightarrow (0, 0, 1)$ $(1, 1, 1) \rightarrow (2, 2, 1)$
(d) $(0, 0, 1) \rightarrow (0, 1, 3)$
$(1, 1, 0) \rightarrow (3, 0, 1)$ $(0, 3, 1) \rightarrow (3, 1, 0)$ $(1, 1, 1) \rightarrow (-3, 1, 2)$
Finding the nullspace of a matrix is super easy using maxima/scipy. Using the technique we derived in the solution to #1, the answers are
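For instance, here is a minimal numpy sketch of that computation (the helper name is ours), using the data of part (a); a null-space basis of the $12\times 13$ system is read off from its SVD, and the $3\times 3$ block of the null vector is the matrix of the collineation up to scale:

```python
import numpy as np

def collineation_from_points(points, images):
    """Set up the 12x13 homogeneous system from #1 and return a null-space
    basis (rows of V^T for the zero singular values) and its dimension."""
    rows = []
    for idx, (p, q) in enumerate(zip(points, images)):
        for r in range(3):                 # one equation per image coordinate
            row = [0.0] * 13
            row[3 * r:3 * r + 3] = p       # coefficients of a_{r1}, a_{r2}, a_{r3}
            row[9 + idx] = -q[r]           # coefficient of the scale factor k_idx
            rows.append(row)
    A = np.array(rows)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-9))
    return Vt[rank:], 13 - rank

points = [(0, 0, 1), (1, -1, 2), (1, 1, 1), (1, 0, 0)]
images = [(1, -1, 2), (3, 0, 5), (0, 3, 1), (1, -1, 0)]
basis, dim = collineation_from_points(points, images)
T = basis[0][:9].reshape(3, 3)   # unique up to scale when dim == 1
```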
(a) $\begin{pmatrix} -3 & 7 & -4 \\ 3 & 11 & 4 \\ 0 & 14 & -8 \end{pmatrix}$
(b) $\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 2 \\ 1 & 1 & 0 \end{pmatrix}$
(c) The transformation for this mapping is not unique as $(1, 0, 0)$, $(-1, 1, 1)$ and $(1, 1, 1)$ are collinear.
The solution space is spanned by the two matrices $$A_1 = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$$ $$A_2 = \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & -1 & 1 \end{pmatrix}$$
Any nonsingular linear combination of these two matrices can serve as the matrix of transformation for this mapping. For example $$A_1 + A_2 = \begin{pmatrix} 1 & -1 & 2 \\ 1 & 0 & 1 \\ 1 & -1 & 1 \end{pmatrix}$$ will result in the mapping provided.
(d) The transformation for this mapping is not unique as $(0, 0, 1)$, $(1, 1, 0)$ and $(1, 1, 1)$ are collinear. The solution space is spanned by the two matrices $$A_1 = \begin{pmatrix} 0 & -27 & 0 \\ 12 & -12 & 9 \\ 0 & -9 & 27 \end{pmatrix}$$ $$A_2 = \begin{pmatrix} 3 & -3 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$ Any nonsingular linear combination of these two matrices can serve as the matrix of transformation for this mapping.
3. Show that there is more than one transformation $T$ which will accomplish the following mappings, find the most general form of $T$, and explain why $T$ is not unique:
(a) $(2, 0, 1) \rightarrow (1, 1, 0)$
$(2, 2, 3) \rightarrow (0, 0, 1)$ $(1, 1, 1) \rightarrow (1, 0, 0)$ $(0, 1, 1) \rightarrow (2, 2, 1)$
(b) $(1, 0, 0) \rightarrow (1, 0, 0)$
$(1, 1, 0) \rightarrow (1, 0, -1)$ $(3, 0, -1) \rightarrow (1, -1, 1)$ $(-1, 2, 1) \rightarrow (0, 1, -2)$
(a) As the three points $(2, 0, 1), (2, 2, 3), (0, 1, 1)$ and their corresponding images are collinear, multiple transformations will be able to achieve this mapping.

Using the technique of finding the nullspace of a $12 \times 13$ matrix as in #1, we find that the solution space is spanned by the following two matrices.
$$A_1 = \begin{pmatrix} 2 & -2 & 0 \\ 2 & -2 & 0 \\ 1 & 1 & -2 \end{pmatrix}$$
$$A_2 = \begin{pmatrix} 0 & -6 & 4 \\ 2 & -2 & 0 \\ 1 & 1 & -2 \end{pmatrix}$$
Expressing the general linear combination as $pA_1 + qA_2$, we get $$\begin{pmatrix} 2p & -2p - 6q & 4q \\ 2p + 2q & -2p - 2q & 0 \\ p + q & p + q & -2p -2q \end{pmatrix}$$
To get the answer in the book, substitute $p + q = b$, $p = a$ and $q = b - a$. $$\begin{pmatrix} 2a & 4a - 6b & 4b - 4a \\ 2b & -2b & 0 \\ b & b & -2b \end{pmatrix}$$
(b) As the three points $(1, 1, 0), (3, 0, -1), (1, -1, 1)$ and their corresponding images are collinear, multiple transformations will be able to achieve this mapping.

Using the technique of finding the nullspace of a $12 \times 13$ matrix as in #1, we find that the solution space is spanned by the following two matrices.
$$A_1 = \begin{pmatrix} -1 & 1 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$A_2 = \begin{pmatrix} 0 & -1 & 2 \\ 0 & 0 & -2 \\ 0 & 1 & 2 \end{pmatrix}$$
Expressing the general linear combination as $pA_1 + qA_2$, we get $$\begin{pmatrix} -p & p - q & -3p + 2q \\ 0 & 0 & -2q \\ 0 & q & 2q \end{pmatrix}$$
To get the answer in the book, substitute $p = -2a$, $q = b$.
4. Verify that there is no collineation which will accomplish either of the following mappings, and explain why:
(a) $(1, 1, -1) \rightarrow (2, 1, 1)$
$(3, 0, -1) \rightarrow (0, 0, 1)$ $(1, -1, 0) \rightarrow (1, 1, 0)$ $(1, -2, 1) \rightarrow (1, 1, 1)$
(b) $(3, 2, 0) \rightarrow (1, -1, 0)$
$(2, 2, 1) \rightarrow (1, 1, 2)$ $(1, 1, 1) \rightarrow (1, 0, 1)$ $(4, 2, -1) \rightarrow (1, 1, 0)$
These mappings can’t be achieved by collineations because a collineation preserves collinearity, but the images $(2, 1, 1), (0, 0, 1), (1, 1, 1)$ of the three collinear points $(1, 1, -1), (3, 0, -1), (1, -2, 1)$ in (a) and the images $(1, -1, 0), (1, 1, 2), (1, 1, 0)$ of the three collinear points $(3, 2, 0), (2, 2, 1), (4, 2, -1)$ in (b) are not collinear.
To verify this assertion, let’s use the technique from #1. The matrices of transformation will be
(a) $\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix}$
(b) $\begin{pmatrix} 2 & -3 & 2 \\ 0 & 0 & 0 \\ 2 & -3 & 2 \end{pmatrix}$
Both these are singular matrices and hence not collineations.
5. (a) Let $T$ be the collineation which leaves the points $(1, 0, 0)$, $(0, 1, 0)$, $(0, 0, 1)$ invariant and maps the point $P:(y_1, y_2, y_3)$ onto the point $(1, 1, 1)$. Determine the possible types for $T$, and find the locus of $P$ corresponding to each type. (b) Work part (a) if $T$ effects the mapping of $(1, 0, 0) \rightarrow (0, 1, 1)$, $(0, 1, 0) \rightarrow (1, 0, 1)$, $(0, 0, 1) \rightarrow (1, 1, 0)$, $(y_1, y_2, y_3) \rightarrow (1, 1, 1)$
(a) As three non-collinear points are invariant, this can not be a collineation of types II, IV or V. Using the technique in #1, the matrix for this transformation looks like $$\begin{pmatrix} y_2y_3 & 0 & 0 \\ 0 & y_1y_3 & 0 \\ 0 & 0 & y_1y_2 \end{pmatrix}$$ It is clear that if $y_1, y_2, y_3$ are pairwise distinct, this is the canonical matrix for a collineation of type I. If exactly two of the coordinates are equal, say $y_i = y_j \ne y_k$, then the matrix becomes the canonical matrix of a collineation of type III. Finally, if $(y_1, y_2, y_3)$ is the point $(1, 1, 1)$, we have 4 invariant points with no three of them being collinear, so by Theorem 1 the transformation that achieves this mapping is a collineation of type VI or the identity transformation.
The locus of $P$ in a collineation of type VI is the point $P$ itself. Under a collineation of type III the locus is one of the lines $y_i = y_j$. Under a collineation of type I the locus of $P$ is any point on $\Pi_2$.
(b) Using the technique in #1, the matrix for this transformation looks like $$\begin{pmatrix} 0 & y_1y_3 & y_1y_2 \\ y_2y_3 & 0 & y_1y_2 \\ y_2y_3 & y_1y_3 & 0 \end{pmatrix}$$
The characteristic polynomial of this matrix is $$2y_1^2y_2^2y_3^2 + ky_1y_2y_3(y_1 + y_2 + y_3) - k^3 = 0$$
As this polynomial does not have a $k^2$ term, it can not be the expansion of a cubic polynomial of the form $(k - a)^3$ and hence can’t have a single repeated root. This means it can’t be a collineation of types IV, V or VI.
For it to have one repeated and one simple root, it must be of the form $$(k - a)(k - b)^2$$
Expanding this we get $$k^3 - k^2(2b + a) + (2ab + b^2)k - ab^2$$
For this to match the characteristic polynomial, the coefficient of the $k^2$ term must be 0. So we get $$2b = -a$$
Hence the cubic equation can be rewritten as $$k^3 - 3b^2k + 2b^3$$
Matching the coefficients of this cubic with the characteristic polynomial (after multiplying the latter by $-1$ so that both cubics are monic), we get

$$ 3b^2 = y_1y_2y_3(y_1 + y_2 + y_3) \implies b^2 = \frac{1}{3}y_1y_2y_3(y_1 + y_2 + y_3)$$ $$ 2b^3 = -2y_1^2y_2^2y_3^2 \implies b^3 = -y_1^2y_2^2y_3^2 $$

Raising the first equation to the power of 3 and squaring the second, both sides give $b^6$, so $$\frac{1}{27}y_1^3y_2^3y_3^3(y_1 + y_2 + y_3)^3 = y_1^4y_2^4y_3^4$$ $$\implies (y_1 + y_2 + y_3)^3 = 27y_1y_2y_3$$
Hence for the transformation to have a repeated root and a simple root, the locus of $P$ must be $(y_1 + y_2 + y_3)^3 = 27y_1y_2y_3$.
For this to be a collineation of type III, the lines between the points and their images must intersect at the invariant center; in this case it is $(1, 1, 1)$. Hence this will be a collineation of type III if $P$ is $(1, 1, 1)$. In all other cases, any point on the curve $(y_1 + y_2 + y_3)^3 = 27y_1y_2y_3$ will lead to a collineation of type II.
If $P$ is any point not on the curve $(y_1 + y_2 + y_3)^3 = 27y_1y_2y_3$ then the collineation will be of the most general type, type I.
References

1. Wikipedia, Rank–nullity theorem. https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem
I think this is my first research-level question, so I'm going to ask it here first before going to Math Overflow.
In most tutorial papers like Burges, the Vapnik-Chervonenkis dimension is introduced as a way of bounding the expected error of a learning machine by the sum of its training error and a slightly more complicated expression which is $O(\sqrt{log(n)})$:
$Error \leq \text{training error} + \sqrt{h(\log(2N/h)+1)-\log(\eta/4)\over N}$
The VC dimension $h$ is the largest number of points (in some feature space) that the classifier can shatter, i.e., divide with zero error under every possible assignment of the two classes to the points. An example is the perceptron with two inputs: it can realize any labeling of three points in general position, but fails on some labelings of four points (the XOR pattern), so its VC dimension is 3. This gives a highly conservative upper bound which enables one to select a learning machine with the smallest VC dimension $h$ that gives low training error.
Regularization, on the other hand, is a method of penalizing some feature of the solution generated by the learning machine. An example is $L_2$ regularization (Tikhonov):
$Cost(\mathbf{w}) = ||\mathbf{y} - \mathbf{w}^H\mathbf{X}||^2 + \lambda||\mathbf{w}||^2$
The first term is the mean-squared error, where $\mathbf{y}$ is a vector of desired outputs, and $\mathbf{X}$ is a matrix of corresponding observed features of the data for each $y$. The weight vector $\mathbf{w}$ here represents a linear function mapping the features to an estimate of the desired output (normally the nonlinear case is handled by kernel methods). The important thing here is the penalty $\lambda$, which indicates how much you pay for a 'big' weight vector (in the $L_2$ sense).
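As a concrete illustration of the penalty term (a sketch under the usual real-valued convention $\|\mathbf{y}-\mathbf{X}\mathbf{w}\|^2$; the function name and data are made up), the ridge minimizer has the closed form $(X^TX+\lambda I)^{-1}X^Ty$, and increasing $\lambda$ shrinks $\|\mathbf{w}\|$:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form minimizer of ||y - X w||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=20)

w_ols = ridge(X, y, 0.0)    # lam = 0 recovers ordinary least squares
w_reg = ridge(X, y, 10.0)   # a larger lam pays more for a 'big' weight vector
```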
I have done a couple of searches and not seen anything characterizing the connection between these two methods, other than one informal ranking by conservativeness.
Does anyone know what regularization does to your VC dimension? Intuitively I feel that something like $L_1$ regularization would immediately start cutting it down, but I can't visualize what would happen. If there are papers I wouldn't mind referring to those. |
In many texts, the non-relativistic (Newtonian) kinetic energy formula $$\text{KE}_\text{Newton} =\frac{1}{2}mv^2$$ is referred to as a first order approximation of the relativistic kinetic energy $$\text{KE}_\text{relativistic} = \gamma mc^2 - mc^2$$ The same is also said of the classical momentum formula in relation to its relativistic counterpart.
However, comparing the Newtonian approximations to their respective relativistic formulas, the Newtonian KE formula appears to be a second order approximation while the momentum formula appears to be of first order.
Let's begin with momentum. The relativistic formula for momentum is $$ p=\gamma mv=\frac{mv}{\sqrt{1-\left(\frac{v}{c}\right)^2}} \, . $$ For non-relativistic velocities ($v \ll c$), we use the Taylor series $$ \frac{x}{\sqrt{1-x^2}} \approx x\left(1 + \frac{x^2}{2}\right) \, , $$ giving $$p/c \approx mv/c \left[ 1 + \frac{1}{2}\left( \frac{v}{c} \right)^2 \right] \approx m (v/c)$$ which is first order in $v/c$. In other words, $p\approx mv$ which is the usual Newtonian expression.
On the other hand, the relativistic kinetic energy is \begin{align} \text{KE}_\text{relativistic} = \gamma mc^2 - mc^2 = \frac{mc^2}{\sqrt{1-\left( \frac{v}{c}\right)^2}} - mc^2 \end{align} which for $v \ll c$ is $$ \text{KE}_\text{relativistic} \approx mc^2 \left[ 1 + \frac{1}{2}\left( \frac{v}{c} \right)^2\right] - mc^2 = mc^2 \frac{1}{2} \left( \frac{v}{c} \right)^2 = \frac{1}{2} m v^2$$ which is obviously second order in $v$.
If we compare plots of the Newtonian forms for kinetic energy and linear momentum against their respective relativistic formulas, there appears to be a closer agreement for the approximation of kinetic energy than can be seen for linear momentum.
And hence my question: why is the Newtonian formula for kinetic energy referred to as a first order approximation when it appears to be of a second order? |
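The claimed orders can also be checked numerically: if the error of $p \approx mv$ is third order in $v$ and the error of $\tfrac12 mv^2$ is fourth order, then halving $v$ should shrink those errors by factors of about $2^3=8$ and $2^4=16$ respectively (units with $m=c=1$ are our own choice):

```python
from math import sqrt

m, c = 1.0, 1.0

def p_err(v):
    """Error of the Newtonian momentum: gamma*m*v - m*v ~ m*v^3/(2c^2)."""
    gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
    return gamma * m * v - m * v

def ke_err(v):
    """Error of the Newtonian KE: (gamma-1)*m*c^2 - m*v^2/2 ~ 3*m*v^4/(8c^2)."""
    gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2 - 0.5 * m * v ** 2

ratio_p = p_err(0.02) / p_err(0.01)     # close to 8  -> error is O(v^3)
ratio_ke = ke_err(0.02) / ke_err(0.01)  # close to 16 -> error is O(v^4)
```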
I am trying to find the distribution of a random variable that is calculated according to $Y:=\sum_{i=1}^n X_i^2$ where $X_i $ is distributed as $ \mathcal{N}(0,\sigma^2_i)$. Does there exist a particular way of calculating this?
Thank you so much!
$X_i \sim \mathcal{N}(0,\sigma^2_i) \Rightarrow \frac{X_i}{\sigma_i}\sim \mathcal{N}(0,1) $
$\therefore$ $\frac{X_i^2}{\sigma_i^2} \sim \chi^2(1)=\Gamma(1/2,2)$
$X_i^2 \sim \sigma_i^2\Gamma(1/2,2)=\Gamma(1/2,2\sigma_i^2)$
If your $\sigma_i$s are all equal to some common $\sigma$, then

$\sum_{i=1}^n X_i^2 \sim \Gamma(n/2, 2\sigma^2)$, since a sum of $n$ independent $\Gamma(1/2, 2\sigma^2)$ variables with a common scale parameter is again gamma distributed,

i.e. $\sum_{i=1}^n X_i^2$ has a gamma distribution with $k=n/2,\theta=2\sigma^2$.

If your $\sigma_i$s are not all equal, the summands no longer share a scale parameter and the sum has no such simple closed form; ref this.
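A quick simulation consistent with the answer above (sample size and seed are arbitrary): for equal $\sigma_i=\sigma$, the first two moments of $Y$ should match those of $\Gamma(k=n/2,\ \theta=2\sigma^2)$, namely $k\theta$ and $k\theta^2$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 6, 2.0
N = 200_000

X = rng.normal(0.0, sigma, size=(N, n))
Y = (X ** 2).sum(axis=1)

# Gamma(k = n/2, theta = 2*sigma^2) has mean k*theta and variance k*theta^2
k, theta = n / 2, 2 * sigma ** 2
mean_err = abs(Y.mean() - k * theta)     # k*theta = 24 here
var_err = abs(Y.var() - k * theta ** 2)  # k*theta^2 = 192 here
```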
I am trying to solve a real-world problem that I was able to reduce to the problem described below. I would like to know the following things:
Is there literature about this problem? Is the corresponding decision problem NP-complete? Do you know an efficient algorithm to solve the problem? If not, what would be a good technique to solve it approximately?
The problem is as follows. We are given an $n \times m$ matrix $M$ with entries in $[0, 1]$ and a function $a \colon \{1, \ldots, m\} \to \mathbb{N}$. For each column $j$, we want to pick $a(j)$ entries in that column, such that in each row at most one element is picked. We want to do this in such a way that the sum of the picked values is maximized.
More formally, find a function $T\colon \{1, \ldots n\} \to \{1, \ldots m\} \cup \{\bot\}$ which maps each row to a column (or $\bot$), with the following two properties:
$\forall j \in \{1, \ldots m\} \colon \lvert T^{-1}(j)\rvert = a(j)$ The following expression is maximized: $$ \sum_{\substack{i=1 \\ T(i) \neq \bot}}^n M_{i, T(i)} $$
Of course, you could also define it the other way around, with each column mapping to a set of rows. Finally, the corresponding decision problem is in NP, as the function $T$ acts as a certificate. I tried a reduction to knapsack, as the two seem quite similar, but I was unsuccessful. |
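One observation that may help: splitting column $j$ into $a(j)$ identical copies (plus dummy "pick nothing" slots) turns this into a standard assignment problem, which is solvable in polynomial time, e.g., by the Hungarian algorithm. A brute-force sketch of that reduction for a tiny instance (names and data are made up):

```python
from itertools import permutations

def best_pick(M, a):
    """Exhaustive solver for tiny instances: duplicate column j into a(j)
    'slots', pad with None ('pick nothing') slots, and try every assignment
    of slots to rows.  This is exactly the classic assignment problem, so a
    Hungarian-algorithm solver would handle large instances efficiently."""
    n = len(M)
    slots = [j for j, cnt in enumerate(a) for _ in range(cnt)]
    slots += [None] * (n - len(slots))

    def score(perm):
        return sum(M[i][j] for i, j in enumerate(perm) if j is not None)

    best_perm = max(set(permutations(slots)), key=score)
    return score(best_perm), best_perm

M = [[0.9, 0.1],
     [0.8, 0.7],
     [0.2, 0.6],
     [0.1, 0.5]]
value, assignment = best_pick(M, [2, 1])  # two picks in column 0, one in column 1
```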
The $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$

1. Background

This is a formal introduction to the genetic code $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$. This mathematical model is defined based on the physicochemical properties of DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) file named IntroductionToZ5GeneticCodeVectorSpace.cdf, available in GitHub. This is a graphic user interface with an interactive didactic introduction to the mathematical biology background that is explained here. To interact with a CDF, users will require the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).

2. Biological mathematical model
If the Watson-Crick base pairings are symbolically expressed by means of the sum “+” operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that the bases with the same number of hydrogen bonds in the DNA molecule and different chemical types were algebraically inverse in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below) that satisfy the last constraints can be defined on the ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set SW.
The set of extended base triplets is defined as $\mathfrak{B}^3 = \{XYZ \mid X, Y, Z \in\mathfrak{B}\}$, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$, and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the extended triplet set can be defined as the direct third power of the group:
$(\mathfrak{B}^3,+) = (\mathfrak{B},+)×(\mathfrak{B},+)×(\mathfrak{B},+)$
where $X, Y, Z \in\mathfrak{B}$, and the operation “+” is as shown in the table below [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of positive integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element: $\alpha \bullet XYZ = \overbrace{XYZ+XYZ+\cdots+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet XYZ =$ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z_5}$-vector space over the field $(\mathbb{Z_5}, +, .)$ of the integer numbers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.
+ | D A C G U
--|----------
D | D A C G U
A | A C G U D
C | C G U D A
G | G U D A C
U | U D A C G
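For this particular base order the table is just addition modulo 5 under the correspondence D→0, A→1, C→2, G→3, U→4, and extended triplets add componentwise; a small sketch (function names are ours):

```python
# Correspondence for the base order {D, A, C, G, U}: D->0, A->1, C->2, G->3, U->4
ORDER = "DACGU"
TO_Z5 = {b: i for i, b in enumerate(ORDER)}

def add_bases(x, y):
    """Base sum via Z5 addition; reproduces the sum table above."""
    return ORDER[(TO_Z5[x] + TO_Z5[y]) % 5]

def add_triplets(s, t):
    """Componentwise sum of extended triplets (the vector-space '+')."""
    return "".join(add_bases(a, b) for a, b in zip(s, t))

paired = add_bases("G", "C")        # complementary bases sum to D
result = add_triplets("ACG", "GGU")
```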
This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from
SW.

3. The canonical base of the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$

Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD implies $c_1=0, c_2=0$ and $c_3=0$ for any $c_1, c_2, c_3 \in\mathbb{Z_5}$. Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z_5}$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set ($e_1, e_2, e_3$) is a canonical base for the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$. It is said that the elements $x, y, z \in\mathbb{Z_5}$ are the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base ($e_1, e_2, e_3$) [3].

References

1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119–52.
2. Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60–76.
It's obvious many times why one prefers an unbiased estimator. But, are there any circumstances under which we might actually prefer a biased estimator over an unbiased one?
Yes. Often it is the case that we are interested in minimizing the mean squared error, which can be decomposed into variance + bias squared. This is an extremely fundamental idea in machine learning, and statistics in general. Frequently we see that a small increase in bias can come with a large enough reduction in variance that the overall MSE decreases.
A standard example is ridge regression. We have $\hat \beta_R = (X^T X + \lambda I)^{-1}X^T Y$ which is biased; but if $X$ is ill conditioned then $Var(\hat \beta) \propto (X^T X)^{-1}$ may be monstrous whereas $Var(\hat \beta_R)$ can be much more modest.
Another example is the kNN classifier. Think about $k = 1$: we assign a new point to its nearest neighbor. If we have a ton of data and only a few variables we can probably recover the true decision boundary and our classifier is unbiased; but for any realistic case, it is likely that $k = 1$ will be far too flexible (i.e. have too much variance) and so the small bias is not worth it (i.e. the MSE is larger than more biased but less variable classifiers).
Finally, here's a picture. Suppose that these are the sampling distributions of two estimators and we are trying to estimate 0. The flatter one is unbiased, but also much more variable. Overall I think I'd prefer to use the biased one, because even though on average we won't be correct, for any single instance of that estimator we'll be closer.
Update
I mention the numerical issues that happen when $X$ is ill conditioned and how ridge regression helps. Here's an example.
I'm making a matrix $X$ which is $4 \times 3$ and the third column is nearly all 0, meaning that it is almost not full rank, which means that $X^T X$ is really close to being singular.
x <- cbind(0:3, 2:5, runif(4, -.001, .001))  ## almost reduced rank
> x
     [,1] [,2]        [,3]
[1,]    0    2 0.000624715
[2,]    1    3 0.000248889
[3,]    2    4 0.000226021
[4,]    3    5 0.000795289

(xtx <- t(x) %*% x)  ## the inverse of this is proportional to Var(beta.hat)
          [,1]        [,2]        [,3]
[1,] 14.0000000 26.00000000 3.08680e-03
[2,] 26.0000000 54.00000000 6.87663e-03
[3,]  0.0030868  0.00687663 1.13579e-06

eigen(xtx)$values  ## all eigenvalues > 0 so it is PD, but not by much
[1] 6.68024e+01 1.19756e+00 2.26161e-07

solve(xtx)  ## huge values
           [,1]        [,2]        [,3]
[1,]   0.776238   -0.458945     669.057
[2,]  -0.458945    0.352219    -885.211
[3,] 669.057303 -885.210847 4421628.936

solve(xtx + .5 * diag(3))  ## very reasonable values
             [,1]         [,2]         [,3]
[1,]  0.477024087 -0.227571147  0.000184889
[2,] -0.227571147  0.126914719 -0.000340557
[3,]  0.000184889 -0.000340557  1.999998999
Update 2
As promised, here's a more thorough example.
First, remember the point of all of this: we want a good estimator. There are many ways to define 'good'. Suppose that we've got $X_1, ..., X_n \sim \ iid \ \mathcal N(\mu, \sigma^2)$ and we want to estimate $\mu$.
Let's say that we decide that a 'good' estimator is one that is unbiased. This isn't optimal because, while it is true that the estimator $T_1(X_1, ..., X_n) = X_1$ is unbiased for $\mu$, we have $n$ data points so it seems silly to ignore almost all of them. To make that idea more formal, we think that we ought to be able to get an estimator that varies less from $\mu$ for a given sample than $T_1$. This means that we want an estimator with a smaller variance.
So maybe now we say that we still want only unbiased estimators, but among all unbiased estimators we'll choose the one with the smallest variance. This leads us to the concept of the uniformly minimum variance unbiased estimator (UMVUE), an object of much study in classical statistics. IF we only want unbiased estimators, then choosing the one with the smallest variance is a good idea. In our example, consider $T_1$ vs. $T_2(X_1, ..., X_n) = \frac{X_1 + X_2}{2}$ and $T_n(X_1, ..., X_n) = \frac{X_1 + ... + X_n}{n}$. Again, all three are unbiased but they have different variances: $Var(T_1) = \sigma^2$, $Var(T_2) = \frac{\sigma^2}{2}$, and $Var(T_n) = \frac{\sigma^2}{n}$. For $n > 2$ $T_n$ has the smallest variance of these, and it's unbiased, so this is our chosen estimator.
But often unbiasedness is a strange thing to be so fixated on (see @Cagdas Ozgenc's comment, for example). I think this is partly because we generally don't care so much about having a good estimate in the average case, but rather we want a good estimate in our particular case. We can quantify this concept with the mean squared error (MSE) which is like the average squared distance between our estimator and the thing we're estimating. If $T$ is an estimator of $\theta$, then $MSE(T) = E((T - \theta)^2)$. As I've mentioned earlier, it turns out that $MSE(T) = Var(T) + Bias(T)^2$, where bias is defined to be $Bias(T) = E(T) - \theta$. Thus we may decide that rather than UMVUEs we want an estimator that minimizes MSE.
Suppose that $T$ is unbiased. Then $MSE(T) = Var(T) + Bias(T)^2 = Var(T)$, so if we are only considering unbiased estimators then minimizing MSE is the same as choosing the UMVUE. But, as I showed above, there are cases where we can get an even smaller MSE by considering non-zero biases.
In summary, we want to minimize $Var(T) + Bias(T)^2$. We could require $Bias(T) = 0$ and then pick the best $T$ among those that do that, or we could allow both to vary. Allowing both to vary will likely give us a better MSE, since it includes the unbiased cases. This idea is the variance-bias trade-off that I mentioned earlier in the answer.
Now here are some pictures of this trade-off. We're trying to estimate $\theta$ and we've got five models, $T_1$ through $T_5$. $T_1$ is unbiased and the bias gets more and more severe until $T_5$. $T_1$ has the largest variance and the variance gets smaller and smaller until $T_5$. We can visualize the MSE as the square of the distance of the distribution's center from $\theta$ plus the square of the distance to the first inflection point (that's a way to see the SD for normal densities, which these are). We can see that for $T_1$ (the black curve) the variance is so large that being unbiased doesn't help: there's still a massive MSE. Conversely, for $T_5$ the variance is way smaller but now the bias is big enough that the estimator is suffering. But somewhere in the middle there is a happy medium, and that's $T_3$. It has reduced the variability by a lot (compared with $T_1$) but has only incurred a small amount of bias, and thus it has the smallest MSE.
You asked for examples of estimators that have this shape: one example is ridge regression, where you can think of each estimator as $T_\lambda(X, Y) = (X^T X + \lambda I)^{-1} X^T Y$. You could (perhaps using cross-validation) make a plot of MSE as a function of $\lambda$ and then choose the best $T_\lambda$.
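To make that plot-the-MSE idea concrete, here is a small numerical sketch in Python. The design matrix, the true $\beta$, and the noise variance are all made up for illustration; it evaluates the closed-form bias and variance of the ridge estimator $T_\lambda$ under $Y = X\beta + \epsilon$ and shows that some $\lambda > 0$ can beat $\lambda = 0$ when $X$ is ill conditioned.

```python
import numpy as np

# Nearly rank-deficient design, in the spirit of the R example above.
X = np.array([[0, 2, 1e-4],
              [1, 3, 2e-4],
              [2, 4, 1e-4],
              [3, 5, 3e-4]], dtype=float)
beta = np.array([1.0, -1.0, 1.0])   # assumed true coefficients
sigma2 = 1.0                        # assumed noise variance

def ridge_mse(lam):
    """Theoretical MSE of the ridge estimator (X'X + lam I)^{-1} X'Y."""
    A = np.linalg.inv(X.T @ X + lam * np.eye(3))
    W = A @ X.T @ X                 # E[beta_hat] = W beta
    bias = (W - np.eye(3)) @ beta
    var = sigma2 * np.trace(A @ X.T @ X @ A)
    return var + bias @ bias

print(ridge_mse(0.0), ridge_mse(0.5))  # the biased estimator wins here
```

Because $X^T X$ is nearly singular, the variance term at $\lambda = 0$ is enormous, while at $\lambda = 0.5$ a small squared bias buys a huge variance reduction.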
Two reasons come to mind, aside from the MSE explanation above (the commonly accepted answer to the question):
1. Managing risk
2. Efficient testing

Risk is, roughly, the sense of how much something can explode when certain conditions aren't met. Take superefficient estimators: $T(X) = \bar{X}_n$ if $\bar{X}_n$ lies beyond an $\epsilon$-ball of 0, and 0 otherwise. You can show that this statistic is more efficient than the UMVUE, since it has the same asymptotic variance as the UMVUE when $\theta \ne 0$ and is superefficient otherwise. This is a stupid statistic, and Hodges threw it out there as a strawman. It turns out that if you take a sequence $\theta_n$ on the boundary of the ball, the estimator becomes inconsistent: it never knows what's going on, and the risk explodes.
In the minimax world, we try to minimize risk. It can give us biased estimators, but we don't care, they still work because there are fewer ways to break the system. Suppose, for instance, I were interested in inference on a $\Gamma(\alpha, \beta_n)$ distribution, and once in a while the distribution threw curve balls. A trimmed mean estimate $$T_\theta(X) = \sum X_i \mathcal{I} (\|X_i\| < \theta) / \sum \mathcal{I} (\|X_i\| < \theta)$$ systematically throws out the high leverage points.
Efficient testing means you don't estimate the thing you're interested in, but an approximation thereof, because this provides a more powerful test. The best example I can think of here is logistic regression. People always confuse logistic regression with relative risk regression. For instance an odds ratio of 1.6 for cancer comparing smokers to non-smokers does NOT mean that "smokers had a 1.6 greater risk of cancer". BZZT wrong. That's a risk ratio. They technically had a 1.6 fold odds of the outcome (reminder: odds = probability / (1-probability)). However, for rare events, the odds ratio approximates the risk ratio. There is relative risk regression, but it has a lot of issues with converging and is not as powerful as logistic regression. So we report the OR as a biased estimate of the RR (for rare events), and calculate more efficient CIs and p-values. |
Abbreviation:
SchrCat
A Schroeder category is an enriched category $\mathbf{C}=\langle C,\circ,\text{dom},\text{cod}\rangle$ in which every hom-set is a Boolean algebra.
Let $\mathbf{C}$ and $\mathbf{D}$ be Schroeder categories. A morphism from $\mathbf{C}$ to $\mathbf{D}$ is a function $h:C\rightarrow D$ that is a functor: $h(x\circ y)=h(x)\circ h(y)$, $h(\text{dom}(x))=\text{dom}(h(x))$ and $h(\text{cod}(x))=\text{cod}(h(x))$.
Remark: These categories are also called groupoids.
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$ |
> **Conjecture.** Any functor \\( F: \mathbf{Set} \to \mathbf{N}\\) must send every morphism in \\(\textbf{Set}\\) to the identity morphism.
I argue this conjecture is *false*.
This might be true for functions in traditional ZF set theory.
However, category theorists define morphisms on \\(\mathbf{Set}\\) a little differently. In particular, an *initial object* is considered.
(EDIT: After a little reread, I am wrong about this - everyone is using the same definition after all!)
**Proof.**
To start, I am going to write \\(\mathbf{1}\_\ast\\) as \\(id\_\ast\\). This is because if \\(\circ\\) for \\(\mathbf{N}\\) is like \\(+\\) then \\(\mathbf{1}_\ast\\) feels more like 0...
Let \\(\tilde{2}\\) be some morphism in \\(\mathbf{N}\\) where \\(\tilde{2} \neq id_\ast\\).
For any set \\(Y\\), consider the unique morphism \\(\phi : \varnothing \to Y \\). This unique morphism exists because \\(\varnothing\\) is the *initial object* of \\(\mathbf{Set}\\) (see [nLab](https://ncatlab.org/nlab/show/initial+object#examples)).
Now define the functor \\(F : \mathbf{Set} \to \mathbf{N}\\) by setting \\(F(f) = id_\ast\\) for every \\(f : X \to Y \\) with \\(X \neq \emptyset\\), and \\(F(\phi) = \tilde{2}\\) for each morphism \\(\phi\\) out of \\(\varnothing\\).
\\(F\\) obeys the functor laws and doesn't send everything to the identity morphism.
\\(\Box\\) |
Abbreviation:
FL$_c$
An FL$_c$-algebra is an FL-algebra $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \backslash, /, 0\rangle$ such that $\cdot$ is contractive: $x\le x\cdot x$.
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
Classtype: (value, see description)
Equational theory: undecidable 1)
Quasiequational theory: undecidable
First-order theory: undecidable
Locally finite: no
Residual size: infinite
Congruence distributive: yes
Congruence modular: yes
Congruence $n$-permutable: yes
Congruence regular:
Congruence uniform:
Congruence extension property:
Definable principal congruences:
Equationally def. pr. cong.:
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
1) K. Chvalovsky and R. Horcík, Full Lambek calculus with contraction is undecidable, Journal of Symbolic Logic, 81(2), 524–540.
Abbreviation:
OMonZ
An ordered monoid with zero is of the form $\mathbf{A}=\langle A,\cdot,1,0,\le\rangle$ such that $\mathbf{A}=\langle A,\cdot,1,\le\rangle$ is an ordered monoid and $0$ is a zero: $x\cdot 0 = 0$ and $0\cdot x = 0$.
Let $\mathbf{A}$ and $\mathbf{B}$ be ordered monoids with zero. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(1)=1$, $h(0)=0$, and $x\le y\Longrightarrow h(x)\le h(y)$.
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$f(n)=$ number of members of size $n$.
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &3\\ f(4)= &15\\ f(5)= &84\\ f(6)= &575\\ f(7)= &4687\\ f(8)= &45223\\ f(9)= &\\ \end{array}$
Ordered monoids reduced type
Ordered semigroups with zero reduced type |
Average of an observable is
\(\langle A \rangle = \sum_i \rho_i A_i\,,\)
where \(\rho\) is the “probability matrix”. In QM, this is
\(\langle A \rangle = \mathrm{Tr}(\rho A)\,.\)
If a system of QM comes to equilibrium, that means
\(\rho = \frac{ e^{-\beta H} }{\mathrm{Tr} e^{-\beta H} }\);
\(\rho\) is diagonal in energy eigen space.
As we already know from the math of classical statistical mechanics, the free energy is \(A = -k_B T \ln Z\).
Suppose we have two systems, one with \(N_1\) and \(V_1\) the other with \(N_2\) and \(V_2\). Now we mix them. Our physics intuition would tell us that the free energy of this new system should be \(A = A_1 + A_2\). However, from the free energy equation, we get
This is different from the result we expected.
That is, free energy becomes neither intensive nor extensive in our derivation.
The fairly simple way to make it extensive is to divide \(V\) by \(N\). Then a new term appears in our free energy, namely \(N\ln N\). Recall that by Stirling's approximation, \(\ln N! = N\ln N - N\). So in a large system we can create a free energy definition which makes it extensive.
Note
We can't just pull results from statistical mechanics and apply them to a small system composed of several particles. In stat mech we use a lot of approximations, like Stirling's approximation, which is only valid when the particle number is huge.
which is to say
This definition “solves” the Gibbs mixing paradox. The physics of this modification requires QM.
Statistical mechanics starts from a given energy spectrum. With energy spectrum solved, we can do statistics.
For an interacting system, we need to solve for the energy spectrum first and then calculate the partition function. Usually we have coupled equations for interacting systems, for example in a coupled harmonic oscillator (HO) system with \(N\) oscillators. (Figure: the two-HO example.)
Our equations of motion are
\(m\ddot x_i = -k\,(2 x_i - x_{i-1} - x_{i+1})\,,\)
with \(i\) going from 2 to \(N-1\).
A transformation \(x_q = \sum_m x_m e^{i m q}\) will decouple these equations,
and we get normal modes with \(\omega_q \sim \sin|q/2|\).
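The sine-shaped dispersion can be checked numerically. The sketch below (assuming unit masses, unit spring constants, and fixed ends, none of which are specified in the notes) diagonalizes the chain's dynamical matrix and compares the frequencies with the analytic normal modes for that boundary condition:

```python
import numpy as np

# Chain of N unit masses coupled by unit springs with fixed ends:
# the dynamical matrix is tridiagonal with 2 on the diagonal, -1 off it.
N = 10
D = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(D)))

# Analytic normal modes for fixed ends: omega_j = 2 sin(j pi / (2 (N+1))),
# j = 1..N -- exactly the sin(q/2)-type dispersion mentioned above.
j = np.arange(1, N + 1)
omega_analytic = 2 * np.sin(j * np.pi / (2 * (N + 1)))

print(np.max(np.abs(omega_numeric - omega_analytic)))  # machine-precision agreement
```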
In some sense, Debye theory is a many-Einstein theory.
In the Einstein model, every particle is identical and has the same frequency. However, this theory predicts heat capacity dominated by a Boltzmann factor \(e^{-\hbar\omega/k_B T}\), so the heat capacity curve goes flat at very low temperature, where experiments show that this is wrong.
So the Debye theory is composed of two steps.
Calculate the energy spectrum of the \(N\) coupled-particle system by finding a decoupling transformation;
Evaluate the heat capacity integral.
Once we find the energy spectrum, we will know the dispersion relation, which is different from Einstein’s model.
What Debye did was to use a linear approximation, \(\omega = c k\), for the dispersion relation.
Through a calculation, we can show that \(g(\omega)\propto\omega^2\) for a 3D lattice.
So the average energy is
\(U = 9 N k_B T \left(\frac{T}{\Theta_D}\right)^3 \int_0^{x(\omega_D)} \frac{x^3}{e^x - 1}\, dx\,.\)
The heat capacity is
\(C_V = 9 N k_B \left(\frac{T}{\Theta_D}\right)^3 \int_0^{x(\omega_D)} \frac{x^4 e^x}{(e^x - 1)^2}\, dx\,,\)
where \(x(\omega_D) = \Theta_D/ T\) and \(\Theta_D = \hbar \omega_D/k_B\).
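The heat-capacity integral can be evaluated numerically. The sketch below assumes the standard Debye form \(C_V/(3Nk_B) = 3\,(T/\Theta_D)^3 \int_0^{\Theta_D/T} x^4 e^x/(e^x-1)^2\,dx\) and recovers both limits: the Dulong–Petit value at high temperature and the \(T^3\) law at low temperature.

```python
import math

def debye_cv(t_over_theta, steps=2000):
    """Heat capacity in units of 3 N k_B from the Debye integral
    C_V / (3 N k_B) = 3 (T/Theta)^3 * integral_0^{Theta/T} x^4 e^x / (e^x - 1)^2 dx."""
    xmax = 1.0 / t_over_theta
    h = xmax / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h            # midpoint rule avoids the 0/0 at x = 0
        total += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * h
    return 3.0 * t_over_theta**3 * total

print(debye_cv(10.0))   # high T: approaches the Dulong-Petit value 1
print(debye_cv(0.05))   # low T: ~ (4 pi^4 / 5) (T/Theta)^3, the T^3 law
```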
Note
What's amazing about Debye theory is that the low-temperature behavior is independent of the cutoff frequency. At low temperature, \(x(\omega_D)\) becomes infinite and the integral runs from 0 to infinity, so we do not need to know the cutoff temperature to find the low-temperature result, and it agrees well with experiments.
Important
We started from an Einstein-like theory and reached a model whose heat capacity does not flatten out exponentially at low temperature. What happened when we integrated over the density of states in the Debye model? Work this out in detail.
Hint
This has something to do with the density of states and the dispersion relation.
This is because our density of states \(g(\omega)\propto \omega^2\) at low temperature tells us that we have more states as \(\omega\) increases. That is to say, the system needs more energy to raise its temperature; equivalently, the heat capacity curve becomes steeper.
Important
Why is the number of modes important in the Debye model? The number of degrees of freedom in a system is finite. If we don't cut off the frequency, we would have infinitely many degrees of freedom, because we have approximated the dispersion relation by a straight line \(\omega = c k\). That would certainly lead to infinite heat capacity and infinite total energy.
The Debye model is simple yet classic. Generally we cannot find the right transformation that decouples the particles. For example, we have the Ising model, \(H = -\sum_{ij} J^{ij} s_i s_j\),
with \(J^{ij} = J (\delta_{i,j-1} + \delta_{i,j+1})\) as a simple model.
Hint
The reason that we can decouple the simple coupled HO system is that the coupling constants are all the same and each HO is identical. In that case the system is just a homogeneous chain, so the normal modes are sine or cosine waves depending on the boundary condition. If the system is inhomogeneous, there is no way we can use simple plane waves on the chain as normal modes.
Search for chargino and neutralino production at \(\sqrt{s} = 189\) GeV at LEP

Abstract.
A search for charginos and neutralinos, predicted by supersymmetric theories, is performed using a data sample of 182.1 pb\(^{-1}\) taken at a centre-of-mass energy of 189 GeV with the OPAL detector at LEP. No evidence for chargino or neutralino production is found. Upper limits on chargino and neutralino pair production (\(\tilde{\chi}^+_1 \tilde{\chi}^-_1\), \(\tilde{\chi}^0 _1 \tilde{\chi}^0 _2\)) cross-sections are obtained as a function of the chargino mass (\(m_{\tilde{\chi}^\pm_1}\)), the lightest neutralino mass (\(m_{\tilde{\chi}^0 _1}\)) and the second lightest neutralino mass (\(m_{\tilde{\chi}^0 _2}\)). Within the Constrained Minimal Supersymmetric Standard Model framework, and for \(m_{\tilde{\chi}^\pm_1} - m_{\tilde{\chi}^0 _1} \geq 5\) GeV, the 95% confidence level lower limits on \(m_{\tilde{\chi}^\pm_1}\) are 93.6 GeV for \(\tan \beta = 1.5\) and 94.1 GeV for \(\tan \beta = 35\). These limits are obtained assuming a universal scalar mass \(m_0 \geq\) 500 GeV. The corresponding limits for all \(m_0\) are 78.0 and 71.7 GeV. The 95% confidence level lower limits on the lightest neutralino mass, valid for any value of \(\tan \beta\) are 32.8 GeV for \(m_0 \geq 500\) GeV and 31.6 GeV for all \(m_0\).
Keywords: Confidence Level; Pair Production; Minimal Supersymmetric Standard Model; Model Framework; Scalar Mass
Category:Dark Energy
The observed accelerated expansion of the Universe requires either modification of General Relativity or existence within the framework of the latter of a smooth energy component with negative pressure, called the
dark energy. This component is usually described with the help of the equation of state $p=w\rho$. As follows from the Friedmann equation,\[\frac{\ddot a}{a}=-\frac{4\pi G}{3}(\rho+3p),\]cosmological acceleration requires $w<-1/3$. The allowed range of values of $w$ can be split into three intervals. The first interval, $-1<w<-1/3$, includes scalar fields named quintessence. The substance with the equation of state $p=-\rho$ ($w=-1$) is named the cosmological constant, because in this case $\rho=\mathrm{const}$: the energy density does not depend on time and is spatially homogeneous. Finally, scalar fields with $w<-1$ are called phantom fields. Presently there is no evidence for dynamical evolution of the dark energy. All available data agree with the simplest possibility, the cosmological constant. However, the situation can change in the future with improved accuracy of observations. That is why one should consider other cases of dark energy, alternative to the cosmological constant.

Pages in category "Dark Energy"
The following 10 pages are in this category, out of 10 total. |
I was looking at my teacher's notes and came about the following recurrence equation :
$$T(n) = \begin{cases} 1 &\quad\text{if } n\leq 1\\ 4T\left(\frac{n}{2}\right) + n^3 &\quad\text{if } n\gt1 \\ \end{cases}$$
In order to solve it I proceeded as follows: $$ T(n) = n^3 + 4T\left(\frac{n}{2}\right) = n^3 + 4\left( 4T\left(\frac{n}{4}\right) + \left(\frac{n}{2}\right)^3 \right) = \\ n^3 + 4\left(\frac{n}{2}\right)^3 + 16T\left(\frac{n}{4}\right) = \cdots $$ I'll spare some LaTeX and state that we can see from unwrapping the equation that at the generic level $i$ the incurred price is: $$ T_i = 4^i\left(\frac{n}{2^i}\right)^3 $$ In order to compute the overall price we can sum the prices incurred at all levels, obtaining: $$ T(n) = \sum_{i=0}^{\log_2 n -1}{\left(4^i\left(\frac{n}{2^i}\right)^3\right)} + 4^{\log_2 n}T(1) = \\ n^3 \sum_{i=0}^{\log_2 n-1}{\left(2^{2i}\frac{1}{2^{3i}}\right)} + n^2 = \\ n^3 \sum_{i=0}^{\log_2 n-1}{\left(\frac{1}{2^{i}}\right)} + n^2 $$ My problem begins here. In order to solve it I'd say that the series in question can be summed as $$ {\displaystyle \sum _{k=m}^{n}x^{k}={\frac {x^{m}-x^{n+1}}{1-x}}\quad {\text{with }}x\neq 1,} $$ which applied to my scenario would yield $$ n^3 \left(\frac{1 - \left(\frac{1}{2}\right)^{\log_2 n}}{1 - \frac{1}{2}}\right) + n^2 = \\ 2n^3 - 2n^2 + n^2 = 2n^3 - n^2 = \Theta(n^3) $$ But my professor says: $$ n^3 \sum_{i=0}^{\log_2 n-1}{\left(\frac{1}{2^{i}}\right)} + n^2 \leq n^3 \sum_{i=0}^{\infty}{\left(\frac{1}{2^{i}}\right)} + n^2 = \\ n^3 \frac{1}{1 - \frac{1}{2}} + n^2 = 2n^3 + n^2 $$ And thus $T(n) = O(n^3)$ instead of $\Theta(n^3)$, since we proved only an upper bound.
My question is thus, why can't I solve the summation as I did instead of extending it to infinity? |
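As a sanity check on the exact closed form $2n^3 - n^2$ derived above (this is just a numerical companion, not part of the original question), one can evaluate the recurrence directly for powers of two:

```python
def T(n):
    """Evaluate the recurrence T(n) = 4 T(n/2) + n^3 with T(1) = 1."""
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n ** 3

# Compare with the closed form 2n^3 - n^2 from the unrolled sum.
for k in range(11):
    n = 2 ** k
    assert T(n) == 2 * n ** 3 - n ** 2
print("closed form matches for n = 1, 2, 4, ..., 1024")
```

The match at every power of two supports the $\Theta(n^3)$ bound, since the sum was evaluated exactly rather than bounded from above.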
The procedure of dimensional regularization for UV-divergent integrals is generally described as first evaluating the integral in dimensions low enough for it to converge, then "analytically continuing" this result in the number of dimensions $d$. I don't understand how this could possibly work conceptually, because a d-dimensional integral $I_d$ is only defined when $d$ is an integer greater than or equal to 1, so the domain of $I_d$ is discrete, and there's no way to analytically continue a function defined on a discrete set.
For example, in Srednicki's QFT book, the key equation from which all the dim reg results come is (pg. 101) "... the area $\Omega_d$ of the unit sphere in $d$ dimensions ... is $\Omega_d = \frac{2 \pi ^{d/2}}{\Gamma \left( \frac{d}{2} \right) };$ (14.23)". But this is highly misleading at best. The area of the unit sphere in $d$ dimensions is $\frac{2 \pi^{d/2}}{\left( \frac{d}{2} - 1 \right) !}$ if $d$ is even and $\geq 2$, it is $\frac{2^d \pi^\frac{d-1}{2} \left( \frac{d-1}{2} \right)! }{(d-1)!}$ if $d$ is odd and $\geq 1$, and it is nothing at all if $d$ is not a positive integer. These formulas agree with Srednicki's when $d$
is a positive integer, but they avoid giving the misleading impression that there is a natural value to assign to $\Omega_d$ when it isn't.
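For what it's worth, the agreement between the gamma-function expression and the explicit even/odd formulas at positive integers can be verified numerically (this is only the consistency check described above, not a defense of the continuation):

```python
import math

def omega_gamma(d):
    """Srednicki's expression: Omega_d = 2 pi^{d/2} / Gamma(d/2)."""
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def omega_integer(d):
    """The explicit even/odd formulas for the area of the unit sphere in d dims."""
    if d % 2 == 0:
        return 2 * math.pi ** (d // 2) / math.factorial(d // 2 - 1)
    k = (d - 1) // 2
    return 2 ** d * math.pi ** k * math.factorial(k) / math.factorial(d - 1)

for d in range(1, 12):
    assert math.isclose(omega_gamma(d), omega_integer(d), rel_tol=1e-12)

print(omega_gamma(2), omega_gamma(3))  # 2*pi (circle), 4*pi (ordinary sphere)
```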
Beyond purely mathematical objections, there's a practical ambiguity in this framework - how do you interpolate the factorial function to the complex plane? Srednicki chooses to do so via the Euler gamma function without any explanation. But there are other possible interpolations which seem equally natural - for example, the Hadamard gamma function or Luschny's factorial function. (See http://www.luschny.de/math/factorial/hadamard/HadamardsGammaFunction.html for more examples.) Why not use those?
In fact, these two alternative functions are both analytic everywhere, so you can't use them to extract the integral's pole structure, which you need in order to cancel the UV infinities. To me, this suggests that the final results of dim reg might be highly dependent on your choice of interpolation scheme, therefore requiring a justification for using the Euler gamma function. Could we prove to a dim reg skeptic that all results for physical observables are independent of the interpolation scheme? (Note that this is a stronger requirement than showing they are independent of the fictitious mass parameter $\tilde{\mu}$.)
(I know that the Bohr-Mollerup theorem shows that the Euler gamma function uniquely has certain "nice" properties, but I don't see why those properties are helpful for doing dim reg.)
I'm not looking for a hyper-technical treatment of dim reg, just a conceptual picture of what it even
means to analytically continue a function from the discrete set of positive integers. Edit: It appears that the details of exactly which field-theory results do and do not depend on the choice of regularization scheme are not well-understood; see this paper for one discussion. |
I've just learned about the density operator, and it seems like a fantastic way to represent the branching nature of measurement as simple algebraic manipulation. Unfortunately, I can't quite figure out how to do that.
Consider a simple example: the state $|+\rangle$, which we will measure in the classical basis (so with measurement operator $I_2$). The density operator of this state is as follows:
$\rho = |+\rangle\langle+| = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} ⊗ \begin{bmatrix} \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} \frac 1 2 & \frac 1 2 \\ \frac 1 2 & \frac 1 2 \end{bmatrix}$
Since measuring $|+\rangle$ in the classical basis collapses it to $|0\rangle$ or $|1\rangle$ with equal probability, I'm imagining there's some way of applying the measurement operator $I_2$ to $\rho$ such that we end up with the same density operator as when we
don't know whether the state is $|0\rangle$ or $|1\rangle$:
$\rho = \frac 1 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} ⊗ \begin{bmatrix} 1, 0 \end{bmatrix} + \frac 1 2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} ⊗ \begin{bmatrix} 0, 1 \end{bmatrix} = \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \frac 1 2 \end{bmatrix}$
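For concreteness, the two density matrices written above can be reproduced with numpy. Here the post-measurement mixture is built by summing $P_k\,\rho\,P_k$ over the computational-basis projectors, which (as far as I can tell) is exactly the kind of formula the question is looking for, though that is an assumption on my part rather than something stated in the text:

```python
import numpy as np

plus = np.array([[1], [1]]) / np.sqrt(2)
rho = plus @ plus.conj().T          # |+><+|, all entries 1/2

P0 = np.array([[1, 0], [0, 0]])     # |0><0|
P1 = np.array([[0, 0], [0, 1]])     # |1><1|

# Summing P_k rho P_k over the basis projectors kills the off-diagonal
# terms and leaves the diagonal mixture described above.
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print(rho)        # [[0.5, 0.5], [0.5, 0.5]]
print(rho_after)  # [[0.5, 0. ], [0. , 0.5]]
```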
From there, we can continue applying unitary transformations to the density operator so as to model a measurement occurring mid-computation. What is the formula for applying a measurement operator to the density operator? Looking in the Mike & Ike textbook section on the density operator, I only see the density operator measurement formula for a
specific measurement outcome. I'd like to know the density operator measurement formula which captures all possible results of the measurement.
As a followup question, I'm also curious as to the formula when measuring some subset of multiple qubits. |
5. Series 30. Year Post deadline: - Upload deadline: -
(3 points)1. space snowman
Consider a snowman consisting of 3 homogeneous spheres of density $ρ$ with centres on a line, floating in free space. The smallest sphere (the head) has radius $r$ and each consecutive sphere has twice the radius of the previous one. Our snowman is the only thing in the universe and it does not rotate in any way. Find the force holding the head to the rest of the snowman.
Bonus: Generalise the problem for $N\ge3$ spheres. Will the force converge to a finite value for $N→∞$ or will it go to infinity?
Karel came up with a problem for Fyziklani and realized he wouldn't want to be checking the results.
(3 points)2. spheres in viscous fluids
When solving problems involving drag in air or in general a fluid, we use Newton's resistance equation
$$F=\frac{1}{2}C\rho Sv^2\,,$$
where $C$ is the drag coefficient of the object in the direction of motion, $\rho$ is the density of the fluid, $S$ is the cross-section area and $v$ is the velocity of the object. This is usually quite accurate for turbulent flow. We are interested in a sphere, for which $C=0.50$. In the case of laminar flow, we usually use Stokes' law
$$F = 6 \pi \eta r v\,,$$
where $\eta$ is the dynamic viscosity of the fluid and $r$ is the radius of the sphere. Is there a velocity for which the two resistance forces are equal for the same sphere? How will this velocity depend on the radius of the sphere?
Karel heard at a conference that people struggle with equations.
(6 points)3. accurate central collisions
Consider 3 equal non-rotating discs moving in a straight line in the order 1, 2, 3 without friction or any other resistance forces on a horizontal surface. Discs 1 and 2 are moving to the right and disc 3 is moving against them to the left. We know that the velocity of disc 1 is larger than that of disc 2. How do the final velocities (after all collisions) depend on the order in which the collisions take place? What are these velocities? (Do not forget that all answers must be properly justified).
Bonus: Discs have different masses.

(8 points)4. on a string
Two masses of negligible dimensions and mass $m=100g$ are connected by a massless string with rest length $l_{0}=1\;\mathrm{m}$ and spring constant $k=50\;\mathrm{kg}\cdot \mathrm{s}^{-2}$. One of the masses is held fixed and the other rotates around it with frequency $f=2\;\mathrm{Hz}$. The first mass can rotate freely around its axis. At one point the fixed mass is released. Find the minimal separation of the two masses during the resulting motion. Do not consider the effects of gravity and assume the validity of Hooke's law.
(8 points)5. balloon
Consider a balloon with mass $m$ (blown up) and volume $V$ filled with helium. An infinite string of length density $τ=10gm^{-1}$ is tied to the balloon. Assuming the atmosphere is isothermal, in which the pressure depends on height $z$ as
$$p=p_0e^{-z/z_0}\,,$$
($z_{0}$ is a parameter of the atmosphere), what is the maximum height the balloon will reach?
(8 points)P. glasses
Describe the imaging system of a microscope (consisting of two convex lenses) and that of a Keplerian telescope. Explain the difference in function and construction of a microscope and a telescope and sketch the rays passing through the systems. How can we usefully define magnification for these optical systems? Derive the equations for magnification.
Kuba finally understood, how it all works!
(12 points)E. fishing line
Measure the shear modulus $G$ (modulus in torsion) of a fishing line. Unfortunately, we are unable to mail the fishing line samples abroad, we therefore ask that you obtain one by yourself and include pictures of the line (and the reel it came from) you use in your solution. |
Let's suppose $t$ is in the interval $(-\pi, \pi]$ and that $n$ is a natural number. What is $(\cos t + i\sin t )^{\frac 1n}$? Using Euler's formula would give us the following:
$(\cos t + i\sin t )^{\frac 1n}=$
$(e^{it})^{\frac 1n}=$
$e^{it\times \frac 1n} = $
$e^{\frac{it}{n}} = $
$\cos \frac tn + i\sin \frac tn$.
However, this would be problematic if we're taking an odd root of $-1$. When we're just dealing with the real numbers, any odd root of $-1$ is $-1$. However, if $n$ is odd, then based on the formula above, since the argument of $-1$ is $\pi$, $(-1)^{\frac 1n} = \cos \frac{\pi}{n}+ i\sin \frac{\pi}{n}$. So $(-1)^{\frac 13}$ would be equal to $ \frac 12 + i \frac{\sqrt 3}{2}$, even though it would just be $-1$ if we were only dealing with the real numbers. It doesn't make sense that the same operation would yield a different result just because we've extended the number system we're working with. So, I was wondering if this, in fact, is the correct formula for determining the principal root of a unit complex number.
$(\cos t + i\sin t )^{\frac 1n}= \cos \frac tn + i\sin \frac tn$ if $-\pi<t<\pi$ or $n$ is even
$=-1$ if $t=\pi$ and $n$ is odd
If this isn't the correct formula, then what is the correct formula? |
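As a quick empirical check of the discrepancy described above, Python's complex power uses the principal branch and indeed returns $\frac 12 + i\frac{\sqrt 3}{2}$ rather than $-1$:

```python
import cmath

r = (-1 + 0j) ** (1 / 3)                 # principal cube root of -1
print(r)                                 # approx (0.5 + 0.8660254j)

expected = cmath.rect(1, cmath.pi / 3)   # cos(pi/3) + i sin(pi/3)
print(abs(r - expected))                 # ~0 up to floating point error
print(r ** 3)                            # approximately -1: it really is a cube root
```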
In R, if I write
lm(a ~ b + c + b*c)
would this still be a linear regression?
How do I do other kinds of regression in R? I would appreciate any recommendations for textbooks or tutorials.
Linear refers to the relationship between the parameters that you are estimating (e.g., $\beta$) and the outcome (e.g., $y_i$). Hence, $y=e^x\beta+\epsilon$ is linear, but $y=e^\beta x + \epsilon$ is not. A linear model means that your estimate of your parameter vector can be written $\hat{\beta} = \sum_i{w_iy_i}$, where the $\{w_i\}$ are weights determined by your estimation procedure. Linear models can be solved algebraically in closed form, while many non-linear models need to be solved by numerical maximization using a computer.
This post at minitab.com provides a very clear explanation:
Response = constant + parameter * predictor + ... + parameter * predictor
I would be careful in asking this as an "R linear regression" question versus a "linear regression" question. Formulas in R have rules that you may or may not be aware of. For example:
Assuming you're asking if the following equation is linear:
a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * (b*c))
The answer is yes, if you assemble a new independent variable such as:
newv = b * c
Substituting the above newv equation into the original equation probably looks like what you're expecting for a linear equation:
a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * newv)
As far as references go, Google "r regression", or whatever you think might work for you.
You can write out the linear regression as a (linear) matrix equation.
$ \left[ \matrix{a_1 \\a_2 \\a_3 \\a_4 \\a_5 \\ ... \\ a_n} \right] = \left[ \matrix{b_1 & c_1 & b_1*c_1 \\ b_2 & c_2 & b_2*c_2 \\b_3 & c_3 & b_3*c_3 \\b_4 & c_4 & b_4*c_4 \\b_5 & c_5 & b_5*c_5 \\ &...& \\ b_n & c_n & b_n*c_n } \right] \times \left[\matrix{\alpha_b \\ \alpha_c \\ \alpha_{b*c}} \right] + \left[ \matrix{\epsilon_1 \\\epsilon_2 \\\epsilon_3 \\\epsilon_4 \\\epsilon_5 \\ ... \\ \epsilon_n} \right] $
or if you collapse this:
$\mathbf{a} = \alpha_b \mathbf{b} + \alpha_c \mathbf{c} + \alpha_{b*c} \mathbf{b*c} + \mathbf{\epsilon} $
This linear regression is equivalent to finding the
linear combination of vectors $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{b*c}$ that is closest to the vector $\mathbf{a}$. (This has also a geometrical interpretation as finding the projection of $\mathbf{a}$ on the span of the vectors $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{b*c}$. For a problem with two column vectors with three measurements this can still be drawn as a figure for instance as shown here: http://www.math.brown.edu/~banchoff/gc/linalg/linalg.html )
Understanding this concept is also important in non-linear regression. For instance, it is much easier to solve $y=a e^{ct} + b e^{dt}$ than $y=u(e^{c(t-v)}+e^{d(t-v)})$ because the first parameterization allows us to solve for the $a$ and $b$ coefficients with the techniques for linear regression.
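To make the "linear in the parameters" point concrete, here is a minimal numpy sketch (the data and coefficient values are made up; the variable names `b`, `c` match the question) that builds the design matrix with the interaction column and solves it in closed form by least squares:

```python
import numpy as np

# made-up data; the true relationship is a = 1 + 2b + 3c + 4*(b*c)
b = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([1.0, 0.0, 2.0, 1.0, 3.0, 2.0])
a = 1 + 2 * b + 3 * c + 4 * b * c

# design matrix: intercept, b, c, and the interaction column b*c
# (the "newv" substitution described in the earlier answer)
X = np.column_stack([np.ones_like(b), b, c, b * c])

# because the model is linear in its coefficients, ordinary least
# squares solves it in closed form
coef, *_ = np.linalg.lstsq(X, a, rcond=None)
print(coef)  # recovers approximately [1, 2, 3, 4]
```

This is exactly what `lm(a ~ b + c + b:c)` fits; the interaction term changes the design matrix, not the linearity of the estimation problem.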
6. Series, 30. Year
(3 points)1. heavy guns
Two machine guns that are able to shoot bullets of mass $m=25\;\mathrm{g}$ with speed $v_{1}=500\;\mathrm{m}\cdot \mathrm{s}^{-1}$ at 10 rounds per second are attached to the front of a car. The car accelerates on a flat surface to a speed $v_{2}=80\;\mathrm{km}\cdot h^{-1}$ and then starts firing. How many shots will be fired before the car stops? The car is in neutral whilst shooting; the air and tyre resistance can be ignored. The heat losses in the machine guns are also negligible.
Mirek was thinking of GTA 2.
(3 points)2. accidental drop
From what height would we need to „drop“ an object on a neutron star to make it land with a speed of $0.1c$ (one tenth of the speed of light)? Our neutron star has 1.5 times the mass of our Sun and a diameter $d=10\;\mathrm{km}$. Ignore both the atmosphere of the star and its rotation. You can also ignore the correction for special relativity. However, do compare the results for a homogeneous gravitational field (with the same strength as on the star's surface) and for a radial gravitational field.
Bonus: Do not ignore the special relativity correction.
Karel was thinking about neutron stars (yet again)
(6 points)3. relativistic Zeno's paradox
Superman and Flash decided to race each other. The race takes place in deep space as there is no straight beach long enough on Earth. As Flash is slower, he starts with a length lead $l$ ahead of Superman. At one moment, Flash starts with a constant speed $v_{F}$ comparable with the speed of light. At the moment Superman sees that Flash started, he starts running at a constant speed $v_{S}>v_{F}$. How long will it take Superman to catch up with Flash (from Superman's point of view)? How long will it take from Flash's point of view? Was the starting method fair? Can you devise a fairer method (keeping the length lead $l$)?
(7 points)4. shoot your rat
Mirek wants to shoot a rat he sees at the dorm. To that end, he made a simple air gun, which can be modeled as a tube with constant cross-section $S=15\;\mathrm{mm^2}$ and length $l=30\;\mathrm{cm}$, closed on one side and open on the other. Mirek plans to place a bullet of mass $m=2\;\mathrm{g}$ into the tube so that the bullet seals the tube exactly and is fixed at a distance $d=3\;\mathrm{cm}$ from the closed end. He then pumps up the closed section to a pressure $p_{0}$ and releases the bullet. He wants the speed of the bullet to be at least $v=90\;\mathrm{m}\cdot \mathrm{s}^{-1}$ as it exits the tube. What pressure does he need to achieve if the gas is ideal? Discuss the realism of the situation. Assume the bullet is released by a quasi-static adiabatic process with $\kappa = 7/5$, as the gas is diatomic. Assume an external atmospheric pressure $p_{a}=10^5\;\mathrm{Pa}$. Neglect losses due to friction, air resistance and gas compression ahead of the bullet.
Karel wanted to find out if the solvers could pass the Masters programme admissions at MFF
(8 points)5. hit him over the knuckles
Consider a homogeneous rod of constant cross-section and length $l$ attached to a freely rotating joint at one end. At the beginning, the rod points straight up and is in a homogeneous gravitational field with acceleration $g$. Due to a slight whiff of wind, the rod starts turning and „falling“ down, but is still held by the joint. Find the acceleration of the end of the rod as a function of time.
(9 points)P. evaporating asteroid
A very large piece of ice (let us say with a diameter of 1 km) is placed on a circular orbit near a Sun-like star. It is placed so close that the equilibrium temperature of a black body at this distance would be approximately 30 °C. What will happen to such an asteroid and its orbit? The asteroid is not tidally locked.
Karel likes astrophysics, so he came up with something again.
(12 points)E. composition as if by Cimrman
Get a wine glass, ideally a thin one with a ground edge. First measure the internal diameter of the glass as a function of height from the bottom. Then use it to create sound by moving a wet finger along its edge (this requires patience). Measure how the frequencies of the tones created in this way depend on the height of water in the glass (measure at least 5 different heights and 2 frequencies at each height).
Hint: If the walls of the glass are thin, we can assume the internal dimensions are the same as the external ones and measure the diameter as a function of height from an appropriate photograph with a scale. For measuring sound we recommend the free software Audacity (Analyze → Plot spectrum).
Karel likes playing with glasses at formal dinners.
(10 points)S. nonlinear
Describe in your own words how and when nonlinear regression can be used (it is sufficient to describe the following: the nonlinear regression model, estimation of unknown regression coefficients, expression of the uncertainty of estimates of regression coefficients and fitted values, statistical tests of hypotheses about regression coefficients, identifiability of regression coefficients, and the choice of regression function). It is not necessary to provide derivations and proofs; a brief overview is sufficient. See the values $(x_i, y_i)$ in the attached file regrese1.csv. We want to fit the theoretical functional dependence, which in this case is a sinusoid, i.e. the function
$$f(x)=a + b\cdot \sin(cx + d)$$
Plot a graph of the observed values and the fitted function (such a graph has to meet the usual requirements) and provide a brief interpretation. It is not necessary to do regression diagnostics.
Hint: Do not forget to correctly solve the identifiability problem of this model by suitable restrictive conditions on the possible values of the parameter $c$.
See the values $(x_i, y_i)$ in the attached file regrese2.csv. We want to fit the theoretical functional dependence, which, in this case, is an exponential function, i.e. the function
$$f(x)=a + \mathrm{e}^{bx + c}$$
Provide values of estimates of all regression coefficients including corresponding standard errors.
Hint: Try to check (by means of graphical methods) whether the assumption of homoskedasticity holds and, if necessary, use White's estimate (the sandwich estimate) of the covariance matrix to compute standard errors correctly.
See the values $(x_i, y_i)$ in the attached file regrese3.csv. We want to fit the theoretical functional dependence, which in this case is a hyperbolic function, i.e. the function
$$f(x)=a + \frac{1}{bx + c}$$
Plot a graph of the observed values (in the form of error bars) and the fitted function and provide a brief interpretation. Perform regression diagnostics.
Bonus: See the values $(x_i, y_i)$ in the attached file regrese4.csv. We want to fit the theoretical functional dependence, which in this case is too complicated to be expressed in analytical form. Try to fit regression splines (with suitably chosen knots and a suitable degree). Plot a graph of the observed values and the fitted function.
It is recommended to use the statistical software R for all computations. A sample R script (with comments in the code explaining the syntax of the R programming language) may be helpful (in Czech only).
Michal thinks that the last round has to be as difficult as possible.
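Since the regrese1.csv data aren't attached here, a minimal sketch of the sinusoid fit from the first task on synthetic data (the assignment recommends R; this SciPy version with made-up parameter values just illustrates the same nonlinear least-squares idea):

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b, c, d):
    # the model f(x) = a + b*sin(c*x + d) from the assignment
    return a + b * np.sin(c * x + d)

# synthetic stand-in for regrese1.csv with known true parameters
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = f(x, 1.0, 2.0, 1.5, 0.3) + rng.normal(0.0, 0.05, x.size)

# identifiability: (b, c, d) are only determined up to sign flips and
# phase shifts, so restrict attention to c > 0 and start near a
# plausible frequency estimate
p0 = [0.8, 1.8, 1.45, 0.2]
popt, pcov = curve_fit(f, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))  # standard errors of the estimates
print(popt, perr)
```

With real data the starting frequency would come from inspecting a plot (or a periodogram), which is the practical face of the identifiability hint.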
For anyone who happens upon this question and doesn't understand, like I did, I would like to elaborate on exactly what
I specifically was missing in my understanding.
When I posted this question, I didn't know what people meant when they said $\sum_{n=1}^\infty n = -1/12$. Now that I'm older and a bit more experienced, I can say that this statement, taken literally, is mathematically wrong. Equating these two quantities is simply a false, pseudo-mathematical inanity. No one told me this when I heard about it, so I always perceived it as mathematical fact.
So what do people
actually mean when they write that? Well, we can define the Riemann Zeta Function:$$\zeta(s) = \sum_{n=1}^\infty\frac1{n^s}$$for a complex number $s$. We know that this sum converges for $\Re(s)>1$ and diverges for $\Re(s)\le 1$. However, quite frequently we talk about values of the zeta function for which $\Re(s)<1$. This is because we're not really talking about the zeta function, but rather an (in fact, the unique) analytic continuation of the zeta function.
To be specific, if a function $f$ is analytic in a domain $U$, and $V\supset U$, and $F$ is some function analytic on $V$ such that $F(z)=f(z)$ for $z\in U$, then $F$ is an
analytic continuation of $f$ to a larger domain. It turns out that this function $F$ is unique. Since the zeta function is analytic in the half-plane $\Re(s) > 1$, it has an analytic continuation to $\mathbb{C}\backslash\{1\}$, which is uniquely determined by the behaviour of the function $\zeta(s)$ in the domain of convergence of the infinite sum.
One may be curious as to why this analytic continuation cannot be "infinite everywhere" (which would seem to be the only logical choice for such a function given that the sum
doesn't converge anywhere outside $\Re(s)>1$). If this were true, then the reciprocal of that analytic continuation would be identically zero on a dense subset of the complex plane, which, by the principle of analytic continuation once again, must imply that the function is identically zero everywhere. But this doesn't make sense since $1/\zeta(s)$ is clearly defined for $\Re(s)>1$.
Hence, there is some
meaning to the values $\zeta(s)$ for $\Re(s)<1$, but this does not mean that $\zeta(s) = \sum_{n=1}^\infty\frac1{n^s}$ for all $s$. The sum diverges; the zeta function is extended.
However..., if we abuse our notation a bit, and let $\sum_{n=1}^\infty\frac1{n^s}$ denote the
analytic continuation, rather than the actual sum, for $s$ outside the domain of convergence, then we can plug in $s = -1$ to get$$\zeta(-1) "=" 1 + 2 + 3 + 4 + \cdots$$and one can prove, properly (i.e. using real math, not pseudo-math), that $\zeta(-1)$ (that is, the analytic continuation of $\zeta$ evaluated at $s=-1$) is indeed equal to $-1/12$.
As for the sum $\sum_{n=1}^\infty\frac1{n}$, this corresponds to the pole of the Riemann zeta function at $s=1$. The zeta function, on the entire complex plane, is a meromorphic function (i.e. it is holomorphic/analytic everywhere except on a countable set with no limit points). Functions like these are "allowed" to have singularities on a countable number of disjoint points. They can be thought of as parts where the function "looks like" $1/(s-z)^k$ for some $k$. (There are also "essential singularities" which go to infinity faster than this for any $k$, but the pole of the zeta function is not essential.)
Essentially, what this means is that $\sum_{n=1}^\infty n = -1/12$ doesn't contradict some notion of convergence/divergence which implies $\sum_{n=1}^\infty\frac1n = \infty$. Both diverge; we just have a "special way" of evaluating the former sum.
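One way to see numerically how $-1/12$ is attached to $1+2+3+\cdots$ without any pseudo-math is smoothed (Abel-type) summation: $\sum_{n\ge 1} n\,e^{-n\varepsilon} = \frac{1}{\varepsilon^2} - \frac{1}{12} + O(\varepsilon^2)$, so the constant left over after discarding the divergent $1/\varepsilon^2$ piece is exactly $\zeta(-1)$. A quick check (the cutoff and the value of $\varepsilon$ are arbitrary choices of mine):

```python
import math

def smoothed_sum(eps, terms=20000):
    """Regularized version of 1 + 2 + 3 + ...: the sum of n * exp(-n*eps)."""
    return sum(n * math.exp(-n * eps) for n in range(1, terms + 1))

eps = 0.01
constant_term = smoothed_sum(eps) - 1.0 / eps**2
print(constant_term)  # approximately -1/12 = -0.0833...
```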
It is well known that, given a sphere, the maximum number of identical spheres that we can pack around it is exactly 12, corresponding to a face centered cubic or hexagonal close packed lattice.
My question is: given a sphere of radius $R$, how many spheres of radius $r<R$ can we closely pack around it?
With disks, the problem is rather easy to solve. Indeed, with reference to the picture at the bottom, we can see that we must have
$$\theta = \frac{2 \pi} n = 2 \arctan \left( \frac r {\sqrt{R^2+2 R r}} \right)$$
from which
$$n = \left \lfloor \frac \pi {\arctan \left( \frac r {\sqrt{R^2+2 R r}} \right)}\right \rfloor$$
The last expression gives the correct result for $R=r$, namely $n=6$ (hexagonal lattice). Moreover, when $R \gg r$, we get
$$n \simeq \left \lfloor \frac {\pi R} {r}\right \rfloor$$
which is completely reasonable.
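The 2D formula translates directly into code; a small helper (the function name and the epsilon guard are mine) for experimenting with the disk count:

```python
import math

def disks_around(R, r):
    """Number of disks of radius r tangent to a central disk of radius R:
    n = floor(pi / arctan(r / sqrt(R^2 + 2*R*r)))."""
    half_angle = math.atan(r / math.sqrt(R * R + 2.0 * R * r))
    # a tiny epsilon guards exact-fit cases (e.g. R = r) against
    # floating-point rounding just below an integer
    return math.floor(math.pi / half_angle + 1e-9)

print(disks_around(1.0, 1.0))    # 6: the hexagonal arrangement
print(disks_around(100.0, 1.0))  # 317, near the large-R estimate pi*R/r ~ 314
```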
How can I tackle the same problem in the 3D case (spheres)?
It is clear that for $R \gg r$ we must get
$$n \simeq \left \lfloor \frac {4 \pi R^2} {\pi r^2}\right \rfloor$$
and also that we must have $n(R=r)=12$.
Any hint/suggestion is appreciated.
Let's write your integral as $\DeclareMathOperator{re}{Re}$
$$\int_{-\infty}^{\infty} f(t) e^{kg(t)}\,dt,$$
where
$$f(t) = \frac{1}{1+t^2} \qquad \text{and} \qquad g(t) = \frac{i}{5}t^5 + it.$$
The lay of the land.
The critical points of the exponent function $g$ occur at $t = e^{i\pi (2k+1)/4}$, $k=0,1,2,3$.
Here's a plot showing these critical points in yellow with the paths of constant altitude (of $\re g$) passing through them shown in white. The real axis is shown in black. The background is colored according to the value of $\re g(t)$, with higher points colored lighter and lower points darker.
Note that the function $\re g$ has ten "hills" and "valleys" radiating away from the origin. Since
$$g(t) \sim \frac{i}{5}t^5$$
as $t \to \infty$ we deduce that these hills and valleys lie approximately on the rays
$$t = s e^{i\pi(2k+1)/10}, \quad s > 0,\,k = 0,1,\ldots,9,$$
with even $k$ corresponding to valleys and odd $k$ corresponding to hills.
In order for the contour to pass through either of the two saddle points in the lower half-plane we would need to deform at least one tail of the current contour (the real axis) over one of these hills. This doesn't seem feasible, so we'll instead focus on the two saddle points in the upper half-plane.
The new contour.
With a little work it's possible to show that we can deform the contour to the one shown in black in the following image. We'll call this new contour $\gamma$.
Here lines of constant altitude on the surface $\re g(t)$ are again shown in white. The point $t=i$ is shown in yellow.
The new contour $\gamma$ consists of two curves. The first originates at $t = e^{i\pi 9/10} \infty$ then passes through the saddle point at $t = e^{i\pi 3/4}$ at an angle of $\pi/8$ before terminating at $t = i \infty$. The second originates at $t = i \infty$, passes through the saddle point at $t = e^{i\pi/4}$ at an angle of $-\pi/8$, then terminates at $e^{i\pi/10} \infty$.
Note that to deform the contour from the real axis to $\gamma$ we must enclose the pole of $f$ located at $t=i$. Ultimately we have
$$\begin{align}\int_{-\infty}^{\infty} f(t) e^{kg(t)}\,dt &= \int_\gamma f(t) e^{kg(t)}\,dt + 2\pi i\operatorname{Res}\left(f(t) e^{kg(t)},t=i\right) \\&= \int_\gamma f(t) e^{kg(t)}\,dt + \pi e^{-6k/5}. \tag{1}\end{align}$$
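For the record, the residue term in $(1)$ comes from the simple pole of $f$ at $t=i$, where $\operatorname{Res}_{t=i} \frac{1}{1+t^2} = \frac{1}{2i}$:

$$g(i) = \frac{i}{5}\,i^5 + i\cdot i = \frac{i\cdot i}{5} - 1 = -\frac{1}{5} - 1 = -\frac{6}{5}, \qquad 2\pi i\operatorname{Res}\left(f(t) e^{kg(t)},t=i\right) = 2\pi i\,\frac{e^{kg(i)}}{2i} = \pi e^{-6k/5}.$$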
We will show in the next section that the term $\pi e^{-6k/5}$ is negligible compared to the integral $\int_\gamma$.
Estimating the new integral.
Now we will estimate the integral
$$I(k) = \int_\gamma f(t) e^{kg(t)}\,dt.$$
We've chosen the contour $\gamma$ in such a way that the largest values of $\re g(t)$ for $t \in \gamma$ occur at the saddle points $t = e^{i\pi 3/4}, e^{i\pi/4}$. Further,
$$\re g\left(e^{i\pi 3/4}\right) = \re g\left(e^{i\pi/4}\right) = -\frac{4}{5\sqrt{2}}.$$
Consequently we'll need to take both saddle points into account. For the first we have
$$g\left(e^{i\pi 3/4} + se^{i\pi/8}\right) = \frac{4}{5} e^{-i\pi 3/4} - 2s^2 + O(s^3)$$
and for the second we have
$$g\left(e^{i\pi/4} + se^{-i\pi/8}\right) = \frac{4}{5} e^{i\pi 3/4} - 2s^2 + O(s^3)$$
as $s \to 0$, so applying the Laplace method yields, to leading order,
$$\begin{align}I(k) &\approx e^{i\pi/8} f\left(e^{i\pi 3/4}\right) \int_{-\infty}^{\infty} \exp\left[k\left(\frac{4}{5} e^{-i\pi 3/4} - 2s^2\right)\right]\,ds \\&\qquad + e^{-i\pi/8} f\left(e^{i\pi/4}\right) \int_{-\infty}^{\infty} \exp\left[k\left(\frac{4}{5} e^{i\pi 3/4} - 2s^2\right)\right]\,ds \\&= \sqrt{\frac{\pi}{k}} \exp\left(-\frac{4}{5\sqrt2}k\right) \cos\left(\frac{4}{5\sqrt2}k - \frac{3\pi}{8}\right)\end{align}$$
as $k \to \infty$. Combining this with equation $(1)$ we conclude that, to leading order,
$$\int_{-\infty}^{\infty} f(t) e^{kg(t)}\,dt \approx \sqrt{\frac{\pi}{k}} \exp\left(-\frac{4}{5\sqrt2}k\right) \cos\left(\frac{4}{5\sqrt2}k - \frac{3\pi}{8}\right)$$
as $k \to \infty$, as desired.
My answer will be shamelessly Newtonian and Physics 101 in formulation. To start off the assumptions, I'm going to assume the air has no mass. To what extent is this valid? Materials like rock and concrete have about 1000x the density of air, so we're looking at about that same volume ratio before the air mass becomes significant compared to the wall, and as you'll see further into the calculations, this won't quite be the case until the object really is close to the size of the Earth.
The gravity at the surface of the balloon will be the following.
$$ g = \frac{G M }{R^2} $$
Here I have used the M variable to refer to the total mass of the wall. Now, this isn't the field that acts on the wall due to my prior arguments. Here's the part I was most unsure about: I divide this by two. Why? Well, the outside of the wall has $g$ act on it, but the inside of the wall has no gravitational field act on it at all (since I neglect the effect of the air). Average that out to get $1/2$. How does that translate into pressure? Introduce $\mu = \rho t$ where $t$ is the thickness of the wall, and you have the surface mass thickness in $kg/m^2$. This is what we need. Multiply that by the gravity and you have the same equation used on earth to find fluid head.
$$ P = \frac{1}{2} g \mu = \frac{G M}{2 R^2} \rho t$$
The point is that we already have a value of $P=14\;\mathrm{psi}$ (about 1 atm) that we wish to satisfy. For assumptions about the density $\rho$, my favorite approach is to assume it's made out of asteroid material with $\rho=1.3\;\mathrm{g/cm^3}$. Next, I'll introduce another easy equation, which is to multiply the mass thickness by the area to get the mass.
$$M = 4 \pi \mu R^2 = 4 \pi t \rho R^2$$
These equations, with known density, by themselves can predict the shell thickness in what I call the "large limit". This assumed that the thickness is small relative to the total radius. So for any large space balloon made out of asteroid-density material the thickness is dictated by:
$$ t = \sqrt{ \frac{P}{2 G \pi \rho^2}} = 12.0 km$$
Since we know the thickness we may specify the radius or the mass. I thought it most appropriate to just say we have some given mass to work with. I took the mass of the asteroid 87 Sylvia, which is $1.5 \times 10^{19} kg$. Getting the rest is easy.
$$ R = \sqrt{ \frac{M}{4 \pi t \rho}} = 277.0 km$$
Yes, this is very big. However, the diameter is still about half that of Ceres. And 87 Sylvia is about the 18th largest by mass. Note that in the discussed configuration, the wall would occupy about 6.7% of the total volume.
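The two large-limit numbers are easy to reproduce; a quick check (my constants: $G$, $P = 1\;\mathrm{atm}$ for the 14 psi target, $\rho = 1300\;\mathrm{kg/m^3}$, and the quoted mass of 87 Sylvia):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
P = 101325.0    # ~1 atm in Pa (the "14 psi" requirement)
rho = 1300.0    # asteroid-like density, kg/m^3
M = 1.5e19      # mass of 87 Sylvia, kg

# large-limit wall thickness from P = 2*pi*G*rho^2*t^2
t = math.sqrt(P / (2.0 * math.pi * G * rho**2))

# radius of the balloon from M = 4*pi*t*rho*R^2
R = math.sqrt(M / (4.0 * math.pi * t * rho))

print(t / 1e3, R / 1e3)  # about 12 km and 277 km
```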
Now I'm going to segue into a different part of the answer where I ask "what if the balloon is fairly small?" We will start by defining $R$ to be the
inner radius of the shell, which is the boundary of the air-filled region. To quickly get an answer, assume that $R\approx 0$, this forms the "small limit". You basically have a spherical asteroid and a negligible amount of air in the center. Integrate to find the fluid head, which will be set equal to 1 atmosphere.
$$ P = \int_0^t g(r) \rho \,dr = \int_0^t \frac{4}{3} \pi G \rho^2 r \,dr = \frac{2}{3} G \pi \rho^2 t^2 $$
Now we get a definable limit for the smallest object we can make out of asteroid material that gets 1 atmosphere of pressure in its center.
$$ t = \sqrt{ \frac{3 P }{ 2 G \pi \rho^2 }} = 20.7 km $$
Obviously, this is larger than the previous large-limit thickness, which is just due to geometrical factors. Now, how do we transition between the small-limit and large-limit values? We set up a more complex geometry, where the inner radius of the shell is $R$ and the outer radius of the shell is $R+t$. I struggled with this part of the problem a good deal, but I now have high confidence in this answer. To set it up, it's appropriate to say that the field within the rock is equal to the field you would have if the entire thing was solid ($\frac{4}{3} \pi G \rho r$),
minus the field you would get if the air were rock. This is using the superposition principle to subtract the rock in the middle which was "cut out".
$$ g(r) = g_{solid}(r) - g_{center}(r) = \frac{4}{3} \pi G \rho r - \frac{ G \left( \frac{4}{3} \pi R^3 \rho \right) }{ r^2 }, \qquad g(R+t) = \frac{4}{3} G \rho \pi t \frac{ 3 R^2 + 3 R t + t^2 }{ \left( R + t \right)^2 } $$
$$ P = \int_R^{R+t} g(r) \rho dr = \frac{2}{3} \pi \rho^2 G t^2 \frac{ 3 R+t }{ R+t} $$
This equation is easy to solve in terms of R, but not so easy to do in terms of t. It can also relatively easily be put in terms of P, M, and t.
I had graphs, but they were done when the equations were wrong, so I just wanted to get the math error corrected for now. Maybe I'll add more later.
Inverse Problems & Imaging, August 2015, Volume 9, Issue 3 (ISSN: 1930-8337, eISSN: 1930-8345)
Abstract:
We present a new qualitative imaging method capable of selecting defects in complex and unknown background from differential measurements of farfield operators: i.e. far measurements of scattered waves in the cases with and without defects. Indeed, the main difficulty is that the background physical properties are unknown. Our approach is based on a new exact characterization of a scatterer domain in terms of the far field operator range and the link with solutions to so-called interior transmission problems. We present the theoretical foundations of the method and some validating numerical experiments in a two dimensional setting.
Abstract:
In this paper we solve the inverse problem of recovering a single spatially distributed conductance parameter in a cable equation model (one-dimensional diffusion) defined on a metric tree graph that represents a dendritic tree of a neuron. Dendrites of nerve cells have membranes with spatially distributed densities of ionic channels and hence non-uniform conductances. We employ the boundary control method that gives a unique reconstruction and an algorithmic approach.
Abstract:
We present a design scheme that generates tight and semi-tight frames in discrete-time periodic signal space originated from four-channel perfect reconstruction periodic filter banks. Filter banks are derived from interpolating and quasi-interpolating polynomial and discrete splines. Each filter bank comprises one linear phase low-pass filter (in most cases interpolating) and one high-pass filter, whose magnitude response mirrors that of the low-pass filter, as well as two band-pass filters. We introduce local discrete vanishing moments (LDVM). When the frame is tight, analysis framelets coincide with their synthesis counterparts. However, for semi-tight frames, we swap LDVM between synthesis and analysis framelets. The design scheme is generic and enables us to design framelets with any number of LDVM. The computational complexity of the framelet transforms, which consists of calculating the forward and the inverse FFTs, does not depend on the number of LDVM but does depend on the size of the impulse response filters. The designed frames are used for image restoration tasks on images degraded by blurring, random noise and missing pixels. The images were restored by the application of the Split Bregman Iterations method, and the frames' performances are evaluated. A potential application of this methodology is the design of a snapshot hyperspectral imager based on a regular digital camera. All these imaging applications are described.
Abstract:
We consider inverse boundary value problems for the Schrödinger equations in two dimensions. Within less regular classes of potentials, we establish a conditional stability estimate of logarithmic order. Moreover we prove the uniqueness within $L^p$-class of potentials with $p>2$.
Abstract:
This paper concerns the transmission eigenvalue problem for an inhomogeneous medium of compact support containing small penetrable homogeneous inclusions. Assuming that the inhomogeneous background medium is known and smooth, we investigate how these small volume inclusions affect the real transmission eigenvalues. Note that for practical applications the real transmission eigenvalues are important since they can be measured from the scattering data. In particular, in addition to proving the convergence rate for the eigenvalues corresponding to the perturbed medium as the inclusions' volume goes to zero, we also provide the explicit first correction term in the asymptotic expansion for simple eigenvalues. The correction term involves the eigenvalues and eigenvectors of the unperturbed known background as well as information about the location, size and refractive index of the small inhomogeneities. Thus, our asymptotic formula has the potential to be used to recover information about small inclusions from a knowledge of real transmission eigenvalues.
Abstract:
Artificial boundary conditions have long been an active research topic in numerical approximation of scattering waves: The truncation of the computational domain and the assignment of the conditions along the fictitious boundary must be done so that no spurious reflections occur. In inverse boundary value problems, a similar problem appears when the estimation of the unknowns is restricted to a domain that represents the whole domain of the solutions of a partial differential equation with unknown coefficient. This problem is significantly more challenging than general scattering problems, because the coefficients representing the unknown material parameter of interest are not known in the truncated portion and assigning suitable condition on the fictitious boundary is part of the problem also. The problem is addressed by defining a Dirichlet-to-Neumann map, or Steklov-Poincaré map, on the boundary of the domain truncation. In this paper we describe the procedure, provide a theoretical justification and illustrate with computed examples the limitations of imposing fixed boundary condition. Extensions of the proposed approach will be presented in a sequel article.
Abstract:
In [3], the authors discussed the electrical impedance tomography (EIT) problem, in which the computational domain with an unknown conductivity distribution comprises only a portion of the whole conducting body, and a boundary condition along the artificial boundary needs to be set so as to minimally disturbs the estimate in the domain of interest. It was shown that a partial Dirichlet-to-Neumann operator, or Steklov-Poincaré map, provides theoretically a perfect boundary condition. However, since the boundary condition depends on the conductivity in the truncated portion of the conductive body, it is itself an unknown that needs to be estimated along with the conductivity of interest. In this article, we develop further the computational methodology, replacing the unknown integral kernel with a low dimensional approximation. The viability of the approach is demonstrated with finite element simulations as well as with real phantom data.
Abstract:
This paper concerns the approximation of a Cauchy problem for the elliptic equation. The inverse problem is transformed into a PDE-constrained optimal control problem and these two problems are equivalent under some assumptions. Different from the existing literature which is also based on the optimal control theory, we consider the state equation in the sense of very weak solution defined by the transposition technique. In this way, it does not need to impose any regularity requirement on the given data. Moreover, this method can yield theoretical analysis simply and numerical computation conveniently. To deal with the ill-posedness of the control problem, Tikhonov regularization term is introduced. The regularized problem is well-posed and its solution converges to the non-regularized counterpart as the regularization parameter approaches zero. We establish the finite element approximation to the regularized control problem and the convergence of the discrete problem is also investigated. Then we discuss the first order optimality condition of the control problem further and obtain an efficient numerical scheme for the Cauchy problem via the adjoint state equation. The paper is ended with numerical experiments.
Abstract:
In the paper, we present an algorithm framework for the more general problem of minimizing the sum $f(x)+\psi(x)$, where $f$ is smooth and $\psi$ is convex, but possibly nonsmooth. At each step, the search direction of the algorithm is obtained by solving an optimization problem involving a quadratic term with diagonal Hessian and Barzilai-Borwein steplength plus $\psi(x)$. The nonmonotone strategy is combined with the Barzilai-Borwein steplength to accelerate the convergence process. The method with the nonmonotone line search techniques is shown to be globally convergent. In particular, if $f$ is convex, we show that the method shares a sublinear global rate of convergence. Moreover, if $f$ is strongly convex, we prove that the method converges R-linearly. Numerical experiments with compressive sensing problems show that our approach is competitive with several known methods for some standard $l_2-l_1$ problems.
Abstract:
We present new continuous variants of the Geman--McClure model and the Hebert--Leahy model for image restoration, where the energy is given by the nonconvex function $x \mapsto x^2/(1+x^2)$ or $x \mapsto \log(1+x^2)$, respectively. In addition to studying these models' $\Gamma$-convergence, we consider their point-wise behaviour when the scale of convolution tends to zero. In both cases the limit is the Mumford-Shah functional.
Abstract:
Image inpainting or disocclusion, which refers to the process of restoring a damaged image with missing information, has many applications in different fields. Different techniques can be applied to solve this problem. In particular, many variational models have appeared in the literature. These models give rise to partial differential equations for which Dirichlet boundary conditions are usually used. The basic idea of the algorithms that have been proposed in the literature is to fill in damaged regions with available information from their surroundings. The aim of this work is to treat the case where this information is not available in a part of the boundary of the damaged region. We formulate the image inpainting problem as a nonlinear Cauchy problem. Then, we give a Nash-game formulation of this Cauchy problem and we present different numerical experiments using the finite-element method for solving the image inpainting problem.
Abstract:
We study weighted $l^2$ fidelity in variational models for Poisson-noise-related image restoration problems. A Gaussian approximation to the Poisson noise statistics is adopted to deduce the weighted $l^2$ fidelity. Different from the traditional weighted $l^2$ approximation, we propose a reweighted $l^2$ fidelity with sparse regularization by wavelet frame. Based on the split Bregman algorithm introduced in [21], the proposed numerical scheme is composed of three easy subproblems that involve quadratic minimization, soft shrinkage and matrix-vector multiplications. Unlike the usual least squares approximation of Poisson noise, we dynamically update the underlying noise variance from the previous estimate. The solution of the proposed algorithm is shown to be the same as the one obtained by minimizing the Kullback-Leibler divergence fidelity with the same regularization. This reweighted $l^2$ formulation can be easily extended to the mixed Poisson-Gaussian noise case. Finally, the efficiency and quality of the proposed algorithm compared to other Poisson noise removal methods are demonstrated through denoising and deblurring examples. Moreover, mixed Poisson-Gaussian noise tests are performed on both simulated and real digital images for further illustration of the performance of the proposed method.
Abstract:
We discuss Bayesian inverse problems in Hilbert spaces. The focus is on a fast concentration of the posterior probability around the unknown true solution as expressed in the concept of posterior contraction rates. This concentration is dominated by a parameter which controls the variance of the prior distribution. Previous results determine posterior contraction rates based on known solution smoothness. Here we show that an oracle-type parameter choice is possible. This is done by relating the posterior contraction rate to the root mean squared estimation error. In addition we show that the tail probability, which usually is bounded by using the Chebyshev inequality, has exponential decay, at least for a priori parameter choices. These results implement the exponential concentration of Gaussian measures in Hilbert spaces.
Abstract:
We have developed a method for hyperspectral image data unmixing that requires neither pure pixels nor any prior knowledge about the data. Based on the well-established Alternating Direction Method of Multipliers, the problem is formulated as a biconvex constrained optimization with the constraints enforced by Bregman splitting. The resulting algorithm estimates the spectral and spatial structure in the image through a numerically stable iterative approach that removes the need for separate endmember and spatial abundance estimation steps. The method is illustrated on data collected by the SpecTIR imaging sensor.
matplotlib.tri
Unstructured triangular grid functions.
class matplotlib.tri.Triangulation(x, y, triangles=None, mask=None)
An unstructured triangular grid consisting of npoints points and ntri triangles. The triangles can either be specified by the user or automatically generated using a Delaunay triangulation.
Parameters:
Notes
For a Triangulation to be valid it must not have duplicate points, triangles formed from colinear points, or overlapping triangles.
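As a usage sketch (the point coordinates and triangle indices below are illustrative, not from the documentation):

```python
import matplotlib.tri as mtri

# Four corner points of the unit square (illustrative data).
x = [0.0, 1.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 1.0]

# Triangles generated automatically via Delaunay triangulation...
tri_auto = mtri.Triangulation(x, y)

# ...or specified explicitly as an (ntri, 3) array of point indices.
tri_explicit = mtri.Triangulation(x, y, triangles=[[0, 1, 2], [0, 2, 3]])
print(tri_explicit.triangles.shape)  # (2, 3)
```

Both constructions satisfy the validity rules above: no duplicate points, no triangles from collinear points, no overlapping triangles.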
Attributes:
calculate_plane_coefficients(z)
Calculate plane equation coefficients for all unmasked triangles, from the point (x, y) coordinates and the specified z-array of shape (npoints). The returned array has shape (ntri, 3) and allows the z-value at the (x, y) position in triangle tri to be calculated using
z = array[tri, 0] * x + array[tri, 1] * y + array[tri, 2].
edges
Return integer array of shape (nedges, 2) containing all edges of non-masked triangles.
Each row defines an edge by its start point index and end point index. Each edge appears only once, i.e. for an edge between points i and j, there will only be either (i, j) or (j, i).
get_cpp_triangulation()
Return the underlying C++ Triangulation object, creating it if necessary.
get_from_args_and_kwargs(*args, **kwargs)
Return a Triangulation object from the args and kwargs, and the remaining args and kwargs with the consumed values removed.
There are two alternatives: either the first argument is a Triangulation object, in which case it is returned, or the args and kwargs are sufficient to create a new Triangulation to return. In the latter case, see Triangulation.__init__ for the possible args and kwargs.
get_masked_triangles()
Return an array of triangles that are not masked.
get_trifinder()
Return the default matplotlib.tri.TriFinder of this triangulation, creating it if necessary. This allows the same TriFinder object to be easily shared.
neighbors
Return integer array of shape (ntri, 3) containing neighbor triangles.
For each triangle, the indices of the three triangles that share the same edges, or -1 if there is no such neighboring triangle. neighbors[i,j] is the triangle that is the neighbor to the edge from point index triangles[i,j] to point index triangles[i,(j+1)%3].
set_mask(mask)
Set or clear the mask array. This is either None, or a boolean array of shape (ntri,).
class matplotlib.tri.TriFinder(triangulation)
Abstract base class for classes used to find the triangles of a Triangulation in which (x,y) points lie.
Rather than instantiate an object of a class derived from TriFinder, it is usually better to use the function matplotlib.tri.Triangulation.get_trifinder().
Derived classes implement __call__(x,y) where x,y are array_like point coordinates of the same shape.
class matplotlib.tri.TrapezoidMapTriFinder(triangulation)
Bases:
matplotlib.tri.trifinder.TriFinder
TriFinder class implemented using the trapezoid map algorithm from the book "Computational Geometry, Algorithms and Applications", second edition, by M. de Berg, M. van Kreveld, M. Overmars and O. Schwarzkopf.
The triangulation must be valid, i.e. it must not have duplicate points, triangles formed from colinear points, or overlapping triangles. The algorithm has some tolerance to triangles formed from colinear points, but this should not be relied upon.
class matplotlib.tri.TriInterpolator(triangulation, z, trifinder=None)
Abstract base class for classes used to perform interpolation on triangular grids.
Derived classes implement the following methods:
__call__(x, y), where x, y are array_like point coordinates of the same shape, and that returns a masked array of the same shape containing the interpolated z-values.
gradient(x, y), where x, y are array_like point coordinates of the same shape, and that returns a list of 2 masked arrays of the same shape containing the 2 derivatives of the interpolator (derivatives of interpolated z values with respect to x and y).
class matplotlib.tri.LinearTriInterpolator(triangulation, z, trifinder=None)
Bases:
matplotlib.tri.triinterpolate.TriInterpolator
A LinearTriInterpolator performs linear interpolation on a triangular grid.
Each triangle is represented by a plane so that an interpolated value at point (x,y) lies on the plane of the triangle containing (x,y). Interpolated values are therefore continuous across the triangulation, but their first derivatives are discontinuous at edges between triangles.
Parameters:
Methods
__call__(x, y): returns interpolated values at the x, y points.
gradient(x, y): returns interpolated derivatives at the x, y points.
gradient(x, y)
Returns a list of 2 masked arrays containing interpolated derivatives at the specified x, y points.
Parameters: Returns:
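A minimal interpolation sketch (node values chosen so that z = x + y, a plane that linear interpolation reproduces exactly; the data is illustrative):

```python
import matplotlib.tri as mtri

x = [0.0, 1.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 1.0]
z = [0.0, 1.0, 2.0, 1.0]  # z = x + y at each node (illustrative data)

tri = mtri.Triangulation(x, y, triangles=[[0, 1, 2], [0, 2, 3]])
interp = mtri.LinearTriInterpolator(tri, z)

value = interp(0.25, 0.25)            # masked scalar, ~0.5 here
gx, gy = interp.gradient(0.25, 0.25)  # ~1.0 and ~1.0, the plane's slopes
```

Points falling outside the triangulation come back masked rather than raising an error.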
class matplotlib.tri.CubicTriInterpolator(triangulation, z, kind='min_E', trifinder=None, dz=None)
Bases:
matplotlib.tri.triinterpolate.TriInterpolator
A CubicTriInterpolator performs cubic interpolation on triangular grids.
In one-dimension - on a segment - a cubic interpolating function is defined by the values of the function and its derivative at both ends. This is almost the same in 2-d inside a triangle, except that the values of the function and its 2 derivatives have to be defined at each triangle node.
The CubicTriInterpolator takes the value of the function at each node - provided by the user - and internally computes the value of the derivatives, resulting in a smooth interpolation. (As a special feature, the user can also impose the value of the derivatives at each node, but this is not supposed to be the common usage.)
Parameters:
Notes
This note is a bit technical and details how a CubicTriInterpolator computes a cubic interpolation.
The interpolation is based on a Clough-Tocher subdivision scheme of the triangulation mesh (to make it clearer, each triangle of the grid will be divided into 3 child triangles, and on each child triangle the interpolated function is a cubic polynomial of the 2 coordinates). This technique originates from FEM (Finite Element Method) analysis; the element used is a reduced Hsieh-Clough-Tocher (HCT) element. Its shape functions are described in [R0be0c58fd53f-1]. The assembled function is guaranteed to be C1-smooth, i.e. it is continuous and its first derivatives are also continuous (this is easy to show inside the triangles but is also true when crossing the edges).
In the default case (kind='min_E'), the interpolant minimizes a curvature energy on the functional space generated by the HCT element shape functions - with imposed values but arbitrary derivatives at each node. The minimized functional is the integral of the so-called total curvature (implementation based on an algorithm from [R0be0c58fd53f-2] - PCG sparse solver): \[E(z) = \frac{1}{2} \int_{\Omega} \left( \left( \frac{\partial^2{z}}{\partial{x}^2} \right)^2 + \left( \frac{\partial^2{z}}{\partial{y}^2} \right)^2 + 2\left( \frac{\partial^2{z}}{\partial{y}\partial{x}} \right)^2 \right) dx\,dy\]
If the case kind='geom' is chosen by the user, a simple geometric approximation is used (weighted average of the triangle normal vectors), which could improve speed on very large grids.
References
[R0be0c58fd53f-1] Michel Bernadou, Kamal Hassan, "Basis functions for general Hsieh-Clough-Tocher triangles, complete or reduced.", International Journal for Numerical Methods in Engineering, 17(5):784-789, 1981.
[R0be0c58fd53f-2] C.T. Kelley, "Iterative Methods for Optimization".
Methods
__call__(x, y): returns interpolated values at the x, y points.
gradient(x, y): returns interpolated derivatives at the x, y points.
gradient(x, y)
Returns a list of 2 masked arrays containing interpolated derivatives at the specified x, y points.
Parameters: Returns:
class matplotlib.tri.TriRefiner(triangulation)
Abstract base class for classes implementing mesh refinement.
A TriRefiner encapsulates a Triangulation object and provides tools for mesh refinement and interpolation.
Derived classes must implement:
refine_triangulation(return_tri_index=False, **kwargs), where the optional keyword arguments kwargs are defined in each TriRefiner concrete implementation, and which returns:
- a refined triangulation,
- optionally (depending on return_tri_index), for each point of the refined triangulation, the index of the initial triangulation triangle to which it belongs.
refine_field(z, triinterpolator=None, **kwargs), where:
- z is an array of field values (to refine) defined at the base triangulation nodes,
- triinterpolator is a TriInterpolator (optional),
- the other optional keyword arguments kwargs are defined in each TriRefiner concrete implementation,
and which returns (as a tuple) a refined triangular mesh and the interpolated values of the field at the refined triangulation nodes.
class matplotlib.tri.UniformTriRefiner(triangulation)
Bases:
matplotlib.tri.trirefine.TriRefiner
Uniform mesh refinement by recursive subdivisions.
Parameters:
refine_field(z, triinterpolator=None, subdiv=3)
Refines a field defined on the encapsulated triangulation.
Returns refi_tri (refined triangulation) and refi_z (interpolated values of the field at the nodes of the refined triangulation).
Parameters: Returns:
refine_triangulation(return_tri_index=False, subdiv=3)
Computes a uniformly refined triangulation refi_triangulation of the encapsulated triangulation.
This function refines the encapsulated triangulation by splitting each father triangle into 4 child sub-triangles built on the edges' midside nodes, recursively (level of recursion subdiv). In the end, each triangle is hence divided into 4**subdiv child triangles. The default value for subdiv is 3, resulting in 64 refined subtriangles for each triangle of the initial triangulation.
Parameters: Returns:
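A quick sketch of the 4**subdiv bookkeeping (the mesh data is illustrative):

```python
import matplotlib.tri as mtri

# A single triangle; subdiv=3 should split it into 4**3 = 64 children.
tri = mtri.Triangulation([0.0, 1.0, 0.5], [0.0, 0.0, 1.0])
refiner = mtri.UniformTriRefiner(tri)
refi_tri = refiner.refine_triangulation(subdiv=3)
print(len(refi_tri.triangles))  # 64
```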
class matplotlib.tri.TriAnalyzer(triangulation)
Define basic tools for triangular mesh analysis and improvement.
A TriAnalyzer encapsulates a Triangulation object and provides basic tools for mesh analysis and mesh improvement.
Parameters: Attributes:
circle_ratios(rescale=True)
Returns a measure of the flatness of the triangulation's triangles.
The ratio of the incircle radius over the circumcircle radius is a widely used indicator of a triangle's flatness. It is always <= 0.5 and == 0.5 only for equilateral triangles. Circle ratios below 0.01 denote very flat triangles.
To avoid unduly low values due to a difference of scale between the two axes, the triangular mesh can first be rescaled to fit inside a unit square with scale_factors (only if rescale is True, which is its default value).
Parameters: Returns:
get_flat_tri_mask(min_circle_ratio=0.01, rescale=True)
Eliminates excessively flat border triangles from the triangulation.
Returns a mask new_mask which allows the encapsulated triangulation to be cleaned of its border-located flat triangles (according to their circle_ratios()). This mask is meant to be subsequently applied to the triangulation using matplotlib.tri.Triangulation.set_mask().
new_mask is an extension of the initial triangulation mask in the sense that an initially masked triangle will remain masked.
The new_mask array is computed recursively; at each step flat triangles are removed only if they share a side with the current mesh border. Thus no new holes in the triangulated domain will be created.
Parameters: Returns:
Notes
The rationale behind this function is that a Delaunay triangulation - of an unstructured set of points - sometimes contains almost flat triangles at its border, leading to artifacts in plots (especially for high-resolution contouring). Masked with the computed new_mask, the encapsulated triangulation would contain no more unmasked border triangles with a circle ratio below min_circle_ratio, thus improving the mesh quality for subsequent plots or interpolation.
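A sketch of the intended workflow (the mesh below is contrived so that its Delaunay triangulation contains one flat border sliver):

```python
import matplotlib.tri as mtri

x = [0.0, 1.0, 2.0, 1.0, 1.0]
y = [0.0, 0.001, 0.0, 1.0, 0.5]  # (1, 0.001) is nearly collinear with the base
tri = mtri.Triangulation(x, y)

analyzer = mtri.TriAnalyzer(tri)
ratios = analyzer.circle_ratios()    # one flatness measure per triangle
mask = analyzer.get_flat_tri_mask(min_circle_ratio=0.05)
tri.set_mask(mask)                   # hide the flat border sliver from plots
```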
scale_factors
Factors to rescale the triangulation into a unit square.
Returns k, a tuple of 2 scale factors.
I'm going through old exams for my Calc III course and came across a problem that I did not know how to do. The problem is:
Find the interval of convergence of the series
$\sum_{n=0}^{\infty}\frac{(x-2)^{n^2}}{2n+1}$
and determine whether the series converges absolutely or conditionally at the endpoints.
I can't think of any tests that would work: the Ratio Test doesn't yield anything solvable, and the Root Test also seems useless here. The Comparison Test might work, but I'm having trouble finding a series to compare it to.
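In case it helps, here is a quick numeric probe I wrote (not a proof, just to see the behaviour): inside |x - 2| < 1 the terms (x - 2)^(n²) die off extremely fast, while at x = 3 the terms reduce to 1/(2n+1) and the partial sums keep creeping upward like a harmonic tail.

```python
def partial_sum(x, N):
    """Partial sum of sum_{n=0}^{N} (x - 2)**(n**2) / (2*n + 1)."""
    return sum((x - 2) ** (n * n) / (2 * n + 1) for n in range(N + 1))

# |x - 2| = 0.5: the tail beyond n = 30 is below 1e-15, the sum has settled.
print(abs(partial_sum(2.5, 30) - partial_sum(2.5, 60)) < 1e-15)  # True

# x = 3: the series is sum 1/(2n+1); partial sums keep growing (~ln(10)/2 here).
print(partial_sum(3, 4000) - partial_sum(3, 400))
```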
Thank you in advance for any help you can give. |
Question:
An electron and positron are moving in opposite directions and are in the spin singlet state. Two Stern-Gerlach machines are oriented in some arbitrary directions: one along unit vector $\hat{s}_1$ (which measures the electron's spin component) and one along unit vector $\hat{s}_2$ (which measures the positron's spin component).
1) Find the electron and positron eigenspinors in terms of spherical coordinates $\theta_1$ and $\phi_1$ (corresponding to $\hat{s}_1$) and $\theta_2$ and $\phi_2$ (corresponding to $\hat{s}_2$).
2) Calculate the probabilities of obtaining every possible spin outcome (both particles up; electron up, positron down, etc), and simplify by using directional cosines to replace the angles.
Attempt: Both the electron and positron are spin one-half particles. Let us define
$$\hat{s}_1=\hat{i}\sin\theta_1\cos\phi_1+\hat{j}\sin\theta_1\sin\phi_1+\hat{k}\cos\theta_1$$ $$\hat{s}_2=\hat{i}\sin\theta_2\cos\phi_2+\hat{j}\sin\theta_2\sin\phi_2+\hat{k}\cos\theta_2$$
If we look at the electron, we can then construct the spin matrix $S_1$, which represents the spin angular momentum along the $\hat{s}_1$ direction:
$$S_1=S\cdot \hat{s}_1=S_x\sin\theta_1\cos\phi_1+S_y\sin\theta_1\sin\phi_1+S_z\cos\theta_1 $$ $$\rightarrow S_1=\frac{\hbar}{2}\begin{pmatrix} \cos\theta_1 & e^{-i\phi_1}\sin\theta_1\\ e^{i\phi_1}\sin\theta_1 & -\cos\theta_1 \end{pmatrix}$$
Here, I have utilized the spin matrices $S_x$, $S_y$, and $S_z$ built from the Pauli matrices. From here, one finds the eigenvalues to be
$$\lambda=\pm \frac{\hbar}{2}$$
Plugging in, we obtain the normalized eigenspinors $\chi_+^1$ and $\chi_-^1$, corresponding to the spin up and spin down directions, respectively:
$$\chi_+^1=\begin{pmatrix} \cos(\theta_1/2) \\ e^{i\phi_1}\sin(\theta_1/2) \end{pmatrix}$$ $$\chi_-^1=\begin{pmatrix} e^{-i\phi_1}\sin(\theta_1/2) \\ -\cos(\theta_1/2) \end{pmatrix}$$
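(As a numeric sanity check of these spinors — my own throwaway script, working in units of $\hbar/2$ — both are indeed eigenvectors of $S_1$:)

```python
import cmath, math

def spin_matrix(theta, phi):
    """(2/hbar) * S . s_hat for the direction (theta, phi)."""
    return [[math.cos(theta), cmath.exp(-1j * phi) * math.sin(theta)],
            [cmath.exp(1j * phi) * math.sin(theta), -math.cos(theta)]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

theta, phi = 0.7, 1.3
S1 = spin_matrix(theta, phi)
chi_plus = [math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)]
chi_minus = [cmath.exp(-1j * phi) * math.sin(theta / 2), -math.cos(theta / 2)]

# Eigenvalues +1 and -1 (i.e. +hbar/2 and -hbar/2 in these units):
assert all(abs(w - v) < 1e-12 for w, v in zip(matvec(S1, chi_plus), chi_plus))
assert all(abs(w + v) < 1e-12 for w, v in zip(matvec(S1, chi_minus), chi_minus))
print("both spinors check out")
```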
We can now find the generic spinor $\chi^1$:
$$\chi^1=\left (\frac{a+b}{\sqrt{2}}\right)\chi_+^{(1)}+\left (\frac{a-b}{\sqrt{2}}\right)\chi_-^{(1)}$$
However, would I basically obtain the same expressions for the positron, only with different angles: i.e.,
$$\chi_+^2=\begin{pmatrix} \cos(\theta_2/2) \\ e^{i\phi_2}\sin(\theta_2/2) \end{pmatrix}$$ $$\chi_-^2=\begin{pmatrix} e^{-i\phi_2}\sin(\theta_2/2) \\ -\cos(\theta_2/2) \end{pmatrix}$$
I know that both the electron and positron are spin $1/2$, but does that mean their eigenspinors would look nearly identical?
As well, I have a question concerning part 2. I know that, if we are measuring, say, $S_y$, then the probability of obtaining spin up would be $|(\chi_+^{(y)})^\dagger \chi|^2$. However, here we have a two-particle system, so does that mean the probability of obtaining, say, an electron spin up and a positron spin down would be $|(\chi_+^{(1)})^\dagger \chi^1|^2\cdot |(\chi_-^{(2)})^\dagger \chi^2|^2$, i.e., you multiply the separate probabilities? Also, what does it mean to express the probability in "directional cosines"?
Thank you in advance. |
Bulletin of the American Physical Society 16th APS Topical Conference on Shock Compression of Condensed Matter Volume 54, Number 8 Sunday–Friday, June 28–July 3 2009; Nashville, Tennessee
Session B5: ID-1: Shock Response of Aluminum. Chair: Eric Herbold, Georgia Institute of Technology
Room:
Magnolia Ballroom
Monday, June 29, 2009
9:00AM - 9:15AM
B5.00001: Elastic Wave Amplitude and Attenuation in Shocked Pure AL
J.M. Winey, P.B. Trivedi, B.M. LaLone, Y.M. Gupta, R.F. Smith, J.H. Eggert, G.W. Collins
Shock-induced elastic-plastic deformation in pure aluminum was examined at 4 GPa peak stress by measuring wave profiles in thin (40$-$180 $\mu$m) samples under plate impact loading. Large elastic wave amplitudes ($\sim$1 GPa) and rapid elastic wave attenuation with propagation distance were observed, indicating a time-dependent elastic-plastic response. These results are in contrast to the $\sim$0.1 GPa elastic wave amplitudes observed in past work (\textit{J. Appl. Phys.} \textbf{98}, 033524 (2005)) using thick ($>$1 mm) samples. The combination of large elastic wave attenuation in thin samples and differences in sample thicknesses between the present and past work suggests a consistent picture of shock wave propagation in pure aluminum: manifestations of time-dependent elastic-plastic response are confined to material very near the impact surface. The present results cannot be fully reconciled with recent shockless compression results (\textit{Phys. Rev. Lett.} \textbf{98}, 065701 (2007)). Work supported by DOE.
Monday, June 29, 2009
9:15AM - 9:30AM
B5.00002: Constitutive Model Constants for Al7075-T651 and Al7075-T6
Nachhatter Brar, Vasant Joshi, Bryan Harris
Aluminum 7075-T651 and 7075-T6 are characterized at quasi-static and high strain rates to determine Johnson-Cook (J-C) strength and fracture model constants. Constitutive model constants are required as input to computer codes to simulate projectile (fragment) impact or similar impact events on structural components made of these materials. J-C strength model constants (A, B, n, C, and m) for the two alloys are determined from tension stress-strain data at room and high temperature to 250$^{\circ}$C. J-C strength model constants for Al7075-T651 are: A=527 MPa, B=676 MPa, n=0.71, C=0.017, and m=1.61 and for Al7075-T6: A = 546 MPa, B = 674 MPa, n = 0.72, C = 0.059, and m = 1.56. J-C fracture model constants are determined from quasi-static and high strain rate/high temperature tests on notched and smooth tension specimens. J-C fracture model constants for the two alloys are: Al7075-T651; D$_{1}$ = 0.110, D$_{2}$ = 0.573, D$_{3}$ = -3.4446, D$_{4}$ = 0.016, and D$_{5}$ = 1.099 and Al7075-T6; D$_{1}$ = 0.451, D$_{2}$ = -0.952, D$_{3}$ = -.068, D$_{4}$ = 0.036, and D$_{5}$ = 0.697.
Monday, June 29, 2009
9:30AM - 9:45AM
B5.00003: High Strain-Rate Response of High-Purity Aluminum at Temperatures Approaching Melt
Stephen Grunschel, Rodney Clifton, Tong Jiao
High-temperature, pressure-shear plate impact experiments were conducted to investigate the rate-controlling mechanisms of the plastic response of high-purity aluminum at high strain rates (10$^{6}$ s$^{-1})$ and at temperatures approaching melt. Similar experiments were conducted by Frutschy and Clifton (\textit{JMPS }\textbf{46, }1998, 1723-1743) on OFHC copper. In the current study, temperatures that are larger fractions of the melting temperature were accessible because of the lower melting point of aluminum. Since the melting temperature of aluminum is pressure dependent, and a typical pressure-shear plate impact experiment subjects the sample to large pressures (2 GPa -- 7 GPa), a pressure-release type experiment was used to reduce the pressure in order to measure the shearing resistance at temperatures up to 95{\%} of the current melting temperature. The measured shearing resistance was remarkably large ($\sim $50 MPa at a shear strain of 2.5) for temperatures this near melt. Numerical simulations conducted using a version of the Nemat-Nasser/Isaacs constitutive equation (\textit{Acta Materialia} \textbf{45}(3), 1997, 907-919), modified to model the mechanism of geometric softening, appear to capture adequately the hardening/softening behavior observed experimentally.
Monday, June 29, 2009
9:45AM - 10:00AM
B5.00004: The Effect of Heat Treatment on the Shock Response of the Aluminium Alloy 6061
Ming Chu, Ian Jones, Jeremy Millett, Neil Bourne, Rusty Gray
The mechanical response of aluminium alloys such as 6061 is manipulated through heat treatment to create a fine distribution of intermetallic particles. Post-shock recovered microstructures of similar alloys have shown that in the solution treated (T0) state, with all alloy additions dissolved in the aluminium, deformation occurs via the formation of dislocation cells, in a similar manner to other face centred cubic metals such as copper or nickel. Further, a significant post-shock hardening has also been observed, in agreement with the observed increase in dislocation density. In contrast, in the fully aged (T6) material, deformation results in a random distribution of dislocations, with no enhanced hardening. From these previous observations, it is expected that the variation of shock-induced shear strength, both with shock amplitude and pulse duration, will be significantly different between the two heat treated states, and thus it is these features that this investigation addresses.
Monday, June 29, 2009
10:00AM - 10:15AM
B5.00005: The Shock Response of the Magnesium--Aluminium Alloy, AZ61
Jeremy Millett, Neil Bourne, Stewart Stirk, Rusty Gray
The response of the magnesium alloy AZ61 to shock loading has been investigated in terms of its Hugoniot (Equation of State) and the variation of shear strength with impact stress. Comparison of the Hugoniot with that of the similar magnesium alloy AZ31 shows very little difference, and hence gives us confidence in our results. Measurement of the lateral stress shows a decrease behind the shock front which suggests a degree of time dependent hardening. Similar results have been observed in fcc metals, corresponding to observed increases in dislocation density in recovered samples. British Crown Copyright MOD/2009.
Do Reed-Muller codes $\mathrm{RM}(r,m)$ exist for $r = 0$ and $m = 0$? Namely, does $\mathrm{RM}(0,0)$ exist? Some books state that $m$ should be a natural number, whereas others allow whole numbers including $0$ — for example, the Shu Lin and Daniel Costello book Error Control Coding.
The codewords in a Reed-Muller code of degree $d$ and length $2^n$ are binary vectors of the form $$\big(f(0,0,\ldots, 0, 0), f(0,0, \ldots, 0, 1), f(0,0, \ldots, 1, 0), f(0,0, \ldots, 1, 1), \cdots, f(1,1,\ldots, 1,1) \big)$$ where $f$ is the corresponding binary polynomial of degree at most $d$ in $n$ variables.
Suppose first that $n > 0$. The dimension of the code is $$k = \sum_{i=0}^d \binom{n}{i}\tag{1}$$ If $d=0$, then $k = \binom{n}{0}=1$. There are only two "polynomials" of degree $0$ in $n$ variables, namely the constants $0$ and $1$, and consequently the codewords of length $2^n$ are $000\cdots 0$ and $111\cdots 1$.
If $n=0$ also, the codewords (if any) are of length $2^0 = 1$. Now, you might want to claim from $(1)$ that $k = \binom{0}{0} = \frac{0!}{0!(0-0)!} = 1$ and so there are $2^k = 2$ codewords (which, of course, are $0$ and $1$). Alternatively, the $k$ in Equation $(1)$ is the total number of subsets of cardinality $d$ or less of a set of $n$ elements. There is exactly one subset (of cardinality $0$) of the set of $0$ elements, namely the empty set $\emptyset$ itself. Thus, the Reed-Muller code of order $0$ and length $2^0$ is the $(1,1)$ code consisting of the two codewords $0$ and $1$. As a linear code, this is the identity map.
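A small illustration of this (my own sketch; the function names are mine) evaluating the dimension formula $(1)$ and listing the order-$0$ codewords, including the degenerate $n = 0$ case:

```python
from math import comb

def rm_dim(d, n):
    """Dimension k = sum_{i=0}^{d} C(n, i) of the Reed-Muller code RM(d, n)."""
    return sum(comb(n, i) for i in range(d + 1))

def rm0_codewords(n):
    """Order-0 Reed-Muller code of length 2**n: just the two constant words."""
    return [tuple([b] * 2 ** n) for b in (0, 1)]

print(rm_dim(0, 0))      # 1, so 2**1 = 2 codewords even when n = 0
print(rm0_codewords(0))  # [(0,), (1,)]
print(rm0_codewords(2))  # [(0, 0, 0, 0), (1, 1, 1, 1)]
```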
I found an example in the following book; my answer is a modified version of Secs. 8.4 and 8.6 of the book, condensed to make it concise and clear.
Gerber, Hans U. "Life insurance." Life Insurance Mathematics. Springer Berlin Heidelberg, 1990.
$B_1,\cdots, B_m$ are arbitrary events. $N$ is the random variable counting how many of them occur, ranging over $\{0, 1, \ldots , m\}$. For arbitrary real coefficients $c_0, c_1,\cdots, c_m$, the Schuette–Nesbitt formula is the following operator identity between the shift operator $E:c_n\mapsto c_{n+1}$ and the difference operator $\Delta:c_n\mapsto c_{n+1}-c_{n}$; by definition they are related via $E=id+\Delta$. The SN formula is$$\sum_{n=0}^{m}c_n\cdot Pr(N=n)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k$$where $S_k=\sum_{j_1<\cdots <j_k}Pr(B_{j_1}\cap\cdots \cap B_{j_k})$ is the $k$-th symmetric sum among these $m$ events and $S_0=1$. Note that $[\Delta^{k}c_0]$ means the difference operator applied $k$ times, evaluated at index $0$. For example, $[\Delta^{2}c_0]=\Delta^{1}(c_1-c_0)=\Delta^{1}(c_1)-\Delta^{1}(c_0)=(c_2-c_1)-(c_1-c_0)=c_2-2c_1+c_0$. Both operators are linear and hence have representations in terms of matrices; therefore they can be extended to polynomial rings and modules (since these two objects have "bases", loosely speaking):$$E=\left(\begin{array}{cccc}0 & 0 & 0 & \cdots\\1 & 0 & 0 & \cdots\\0 & 1 & 0 & \cdots\\0 & 0 & 1 & \cdots\end{array}\right)\qquad\Delta=\left(\begin{array}{cccc}-1 & 0 & 0 & \cdots\\1 & -1 & 0 & \cdots\\0 & 1 & -1 & \cdots\\0 & 0 & 1 & \cdots\end{array}\right)$$
The proof makes use of the indicator trick and the expansion of the operator polynomial $\prod_{j=1}^{m}(1+I_{B_j}\Delta)$, together with the facts that $I_A\cdot I_B=I_{A\cap B}$ and that $\Delta$ commutes with indicators; I will refer you to Gerber's book.
If we choose $c_0=0$ and $c_1=c_2=\cdots=c_m=1$, then the SN formula becomes the inclusion-exclusion principle:$$\sum_{n=1}^{m} Pr(N=n)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k=c_0 S_0+(c_1-c_0)S_1+(c_2-2c_1+c_0)S_2+\cdots =S_1-S_2+S_3-\cdots+(-1)^{m+1}S_m\\=[Pr(B_1)+\cdots+Pr(B_m)]-[Pr(B_1\cap B_2)+\cdots+Pr(B_{m-1}\cap B_{m})]+\cdots+(-1)^{m+1}\cdot Pr(B_1\cap\cdots \cap B_m)$$
Waring's Theorem gives the probability that exactly $r$ out of the $m$ events $B_1,\cdots, B_m$ occur. Thus it can be derived by specifying $c_r=1$ and all other $c$'s $=0$. The SN formula becomes$$ Pr(N=r)=\sum_{k=0}^{m}[\Delta^{k}c_0]S_k=\sum_{k=r}^{m}[\Delta^{k}c_0]S_k$$because any term $[\Delta^{k}c_0]=0$ when $k<r$; a change of variable $t=k-r$ will yield Waring's formula.
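Here is a brute-force check of the identity on a small finite probability space (my own sketch, not from Gerber's book; $\Delta^k c_0$ is expanded via the binomial formula $\sum_i (-1)^{k-i}\binom{k}{i} c_i$):

```python
from itertools import combinations
from math import comb

def sn_check(events, probs, c):
    """Compare both sides of the Schuette-Nesbitt formula.
    events: list of sets of outcomes; probs: dict outcome -> probability;
    c: coefficients c_0..c_m. Returns (lhs, rhs)."""
    m = len(events)
    # Left side: sum_n c_n P(N = n), with N(w) = number of events containing w.
    lhs = sum(c[sum(w in B for B in events)] * p for w, p in probs.items())
    # Right side: sum_k (Delta^k c)_0 * S_k.
    rhs = 0.0
    for k in range(m + 1):
        dk = sum((-1) ** (k - i) * comb(k, i) * c[i] for i in range(k + 1))
        s_k = 1.0 if k == 0 else sum(
            sum(p for w, p in probs.items() if all(w in events[j] for j in J))
            for J in combinations(range(m), k))
        rhs += dk * s_k
    return lhs, rhs

# Three events on the uniform space {0,...,7}, arbitrary coefficients.
probs = {w: 1 / 8 for w in range(8)}
events = [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 2, 4, 6}]
lhs, rhs = sn_check(events, probs, c=[2.0, -1.0, 5.0, 0.5])
print(abs(lhs - rhs) < 1e-12)  # True
```

Setting c = [0, 1, 1, 1] reproduces the inclusion-exclusion special case: both sides then equal P(B_1 ∪ B_2 ∪ B_3).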
There is an envelope assignment example in Gerber's book you can look into, but my suggestion is to understand it in terms of operator algebra instead of probability.
Answer
The number of different ways to choose and rank the three best movies is 6840.
Work Step by Step
We need to choose 3 movies from a list of 20 movies. The order in which the movies are chosen matters because the movies are selected according to preference. Therefore, we need to find the number of permutations of 20 things taken 3 at a time. Use the formula ${}_{n}{{P}_{r}}=\frac{n!}{\left( n-r \right)!}$ where $ n=20,r=3$: $\begin{align} & {}_{20}{{P}_{3}}=\frac{20!}{\left( 20-3 \right)!} \\ & =\frac{20!}{17!} \end{align}$ Simplify to get $\begin{align} & _{20}{{P}_{3}}=\frac{20\times 19\times 18\times 17!}{17!} \\ & =20\times 19\times 18 \\ & =6840 \end{align}$
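The same count can be confirmed with Python's standard library (illustrative snippet):

```python
import math

# P(20, 3) = 20!/(20 - 3)! = 20 * 19 * 18
print(math.perm(20, 3))  # 6840
print(20 * 19 * 18)      # 6840
```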
Sukalpa Biswas
India,
2019-08-06 09:15
(edited by Sukalpa Biswas on 2019-08-07 09:07)
Posting: # 20475
This is regarding the statistical approach for an NTI (narrow therapeutic index) drug bioequivalence study.
As per the regulatory requirements:
Observations during study:
All of the above criteria were met, except that the upper limit of the 90% confidence interval for the test-to-reference ratio of the within-subject variability (≤ 2.5) was not met for all PK variables (Cmax, AUCt and AUCinf).
Exercises, Observations and Analysis:
Kindly respond.
Edit: Please follow the Forum’s Policy. Category changed; see also this post #1. [Helmut]
Helmut
Vienna, Austria,
2019-08-07 11:09
@ Sukalpa Biswas
Posting: # 20482
Hi Sukalpa,
»
3. The within-subject standard deviation of test and reference products will be compared, and the upper limit of the 90% confidence interval for the test-to-reference ratio of the within-subject variability should be ≤ 2.5.
» […] 90% confidence interval for the test-to-reference ratio of the within-subject variability ≤ 2.5 were not meet the criteria for all PK variables (Cmax, AUCt and inf).
Failed to demonstrate BE due to the higher within-subject variability of the test product. Full stop.
» Exercises, Observations and Analysis:
What do you mean by „Exercises”? Since the study failed, are you asking for a recipe to cherry-pick?
»
1. We have taken subjects who have completed at least 2R or 2T in Reference Scaled Average Bio equivalence calculation (existing study).
That’s my interpretation as well. Although only the calculation of
s is given in Step 1 of the guidance by analogy the same procedure should be applicable for WR s. WT
»
2. We have done the exercise with subjects who completed all four treatments and did the statistical calculation – still failing on the same criteria marginally.
Leaving cherry-picking aside: By doing so, you drop available information. One should always use all. The more data you have, the more accurate/precise an estimate will be. Have a look at the formula to calculate the 100(1–α) CI of $\sigma_{WT}/\sigma_{WR}$:$$\left(\frac{s_{WT} / s_{WR}}{\sqrt{F_{\alpha /2,\nu_1,\nu_2}}},\frac{s_{WT} / s_{WR}}{\sqrt{F_{1-\alpha /2,\nu_1,\nu_2}}} \right)$$We have two different degrees of freedom ($\nu_1$, $\nu_2$), the first associated with $s_{WT}$ and the second with $s_{WR}$.
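For illustration, the interval can be computed with SciPy (a minimal sketch; the function name and example numbers are mine, and $F_{\alpha/2}$ is taken as the upper $\alpha/2$ quantile, so the larger critical value produces the lower limit):

```python
from scipy.stats import f

def sw_ratio_ci(s_wt, s_wr, nu1, nu2, alpha=0.10):
    """100(1 - alpha)% CI for sigma_WT / sigma_WR."""
    ratio = s_wt / s_wr
    lower = ratio / f.ppf(1 - alpha / 2, nu1, nu2) ** 0.5  # F_{alpha/2}: upper quantile
    upper = ratio / f.ppf(alpha / 2, nu1, nu2) ** 0.5      # F_{1-alpha/2}: lower quantile
    return lower, upper

# toy numbers only: s_WT = 0.30, s_WR = 0.25, 20 degrees of freedom each
lo, hi = sw_ratio_ci(0.30, 0.25, 20, 20)
assert lo < 0.30 / 0.25 < hi
```

With these toy numbers the upper limit stays below 2.5; with a genuinely more variable test product it will not.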
»
3. It was observed that if the “SWT” value is close to the “SWR” value or lower, then the 90% CI for the test-to-reference ratio of the within-subject variability will meet the ≤ 2.5 criterion.
Of course.
»
1. Which Reference Scaled Average Bioequivalence approach is acceptable in regulatory?
Yes.
»
Approach 2: Subjects who completed all four periods will be considered for SWR & SWT calculation.
No.
»
or both.
Which one will you pick at the end if one passes and the other one fails? The passing one, right? The FDA will love that. Be aware that the FDA recalculates every study.
BTW, how would you describe that in the SAP?
»
2. which are the factors adding variability to SWT?
That’s product-related. The idea behind the FDA’s reference-scaling for NTIDs is not only to narrow the limits but also to prevent products with a higher variability than the reference’s from entering the market.
»
3. Whether same formulation can be taken for the repeat bio-study with some clinical restrictions? If yes then what are the clinical factor to be considered?
As I wrote above, the failure to show BE was product-related. If you introduce clinical restrictions* in order to reduce within-subject variability – due to randomization – both products will be affected in the same way and $s_{WT}/s_{WR}$ will be essentially the same as in the failed study.
Reformulate.
PS: I changed the category of your post yesterday and you changed it back today. Wrong. Don’t test my patience – your problems are definitely study-specific (see the Policy for a description of categories).
—
Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Sukalpa Biswas
☆
India,
2019-08-09 06:11
@ Helmut
Posting: # 20488
Views: 441
» Failed to demonstrate BE due to the higher within-subject variability of the test product. Full stop.
Accepted.
» » Exercises, Observations and Analysis:
» What do you mean by „Exercises”?
Since the study failed, we wanted to dig out the probable reasons for the failure. In that process certain statistical exercises have been done officially.
» Although only the calculation of $s_{WR}$ is given in Step 1 of the guidance, by analogy the same procedure should be applicable for $s_{WT}$.
Accepted. Thanks.
» […] you drop available information. One should always use all. […]
Suggestion well accepted. Thanks.
» »
1. Which Reference Scaled Average Bioequivalence approach is acceptable in regulatory?
» Yes.
» »
Approach 2: Subjects who completed all four periods will be considered for SWR & SWT calculation.
» No.
» »
or both.
» Which one will you pick at the end if one passes and the other one fails? The passing one, right? The FDA will love that. Be aware that the FDA recalculates every study.
Thanks for your suggestion.
» »
2. which are the factors adding variability to SWT?
» That’s product-related. The idea behind the FDA’s reference-scaling for NTIDs is not only to narrow the limits but also to prevent products with a higher variability than the reference’s from entering the market.
Agreed
» As I wrote above, the failure to show BE was product-related. If you introduce clinical restrictions* in order to reduce within-subject variability – due to randomization – both products will be affected in the same way and $s_{WT}/s_{WR}$ will be essentially the same as in the failed study.
» Reformulate.
OK. I would like to mention one thing: the failed study was the fed one; the fasting study passed quite comfortably (both ABE and SABE). Is there any possibility that the test formulation is more variable in the fed condition?
» PS: I changed the category of your post […].
Sorry. This is the first time I am posting in this forum. I am a bit confused regarding the rules and regulations of this forum.
»
Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
Helmut
★★★
Vienna, Austria,
2019-08-09 12:36
@ Sukalpa Biswas
Posting: # 20489
Views: 405
Hi Sukalpa,
» » Reformulate.
»
» OK. I would like to mention one thing: the failed study was the fed one; the fasting study passed quite comfortably (both ABE and SABE). Is there any possibility that the test formulation is more variable in the fed condition?
That’s quite possible. An extreme example of the past: The first PPIs were monolithic gastric-resistant formulations. Crazy variability, both fasting and fed. Current formulations are capsules with gastric-resistant pellets. Variability still high but way better than the monolithic forms. Of course, when the capsules were introduced, BE studies were performed. All PK metrics passed but by inspecting the profiles you could clearly see the lower variability of the capsules. OK, these are easy drugs (now many are already OTCs). Imagine that they were NTIDs and the formulation change went the other way ’round (capsule → monolithic). No way ever to pass the $s_{WT}/s_{WR}$ criterion.
In your case this means again to reformulate. Don’t ask me how (I’m not a formulation chemist). Maybe dissolution testing in the various stinking FeSSIF “biorelevant” media helps.
» This is the first time I am posting something in this forum. Bit confused regarding the rules and regulation of this forum.
Some hints in the Policy.
—
Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes |
Institute of Reproducing Kernels
Kawauchi-cho, 5-1648-16,
Kiryu 376-0041, Japan
February 2, 2018
The Institute of Reproducing Kernels is dealing with the theory of division by zero calculus and declares that the division by zero was discovered as $0/0 =1/0 = z/0 = 0$ in a natural sense on 2014.2.2. The result shows a new basic idea on the universe and space since Aristotle $($BC384 - BC322$)$ and Euclid $($BC 3 Century - $)$, and the division by zero is since Brahmagupta $($598 - 668?$)$. In particular, Brahmagupta defined $0/0 = 0$ in Brāhmasphuṭasiddhānta $($628$)$; however, our world history stated that his definition $0/0 = 0$ is wrong over 1300 years, but we showed that his definition is suitable. For the details, see the references and the site: http://okmr.yamatoblog.net/ We wrote a global book manuscript [21] with 154 pages and stated in the preface and last section of the manuscript as follows:
Preface
The division by zero has a long and mysterious story over the world $($see, for example, H. G. Romig [15] and Google site with the division by zero$)$ with its physical viewpoints since the document of zero in India on AD 628. In particular, note that Brahmagupta $($598 - 668?$)$ established the four arithmetic operations by introducing 0 and at the same time he defined $0/0 = 0$ in Brāhmasphuṭasiddhānta. Our world history, however, stated that his definition $0/0 = 0$ is wrong over 1300 years, but we will see that his definition is right and suitable.
The division by zero $1/0 = 0/0 = z/0$ itself will be quite clear and trivial with several natural extensions of the fractions against the mysteriously long history, as we can see from the concepts of the Moore-Penrose generalized inverses or the Tikhonov regularization method to the fundamental equation $az = b$, whose solution leads to the definition $z = b/a$.
However, the result $($definition$)$ will show that for the elementary mapping
$$
W =\frac{1}{z} \tag{0.1}
$$the image of $z = 0$ is $W = 0$ $($should be defined from the form$)$. This fact seems to be a curious one in connection with our well-established popular image for the point at infinity on the Riemann sphere ([1]). As the representation of the point at infinity of the Riemann sphere by the zero $z = 0$, we will see some delicate relations between 0 and $\infty$ which show a strong discontinuity at the point of infinity on the Riemann sphere. We did not consider any value of the elementary function $W = 1/z$ at the origin $z = 0$, because we did not consider the division by zero $1/0$ in a good way. Many and many people consider its value by the limiting like $+\infty$ and $-\infty$ or the point at infinity as $\infty$. However, their basic idea comes from continuity with the common sense or based on the basic idea of Aristotle. – For the related Greece philosophy, see [23, 24, 25]. However, as the division by zero we will consider its value of the function $W = 1/z$ as zero at $z = 0$. We will see that this new definition is valid widely in mathematics and mathematical sciences, see $($[9, 10]$)$ for example. Therefore, the division by zero will give great impacts to calculus, Euclidean geometry, analytic geometry, differential equations, complex analysis in the undergraduate level and to our basic ideas for the space and universe.
We have to arrange globally our modern mathematics in our undergraduate level. Our common sense on the division by zero will be wrong, with our basic idea on the space and the universe since Aristotle and Euclid. We would like to show clearly these facts in this book. The content is in the undergraduate level.
Conclusion
Apparently, the common sense on the division by zero with a long and mysterious history is wrong and our basic idea on the space around the point at infinity is also wrong since Euclid. On the gradient or on derivatives we have a great missing since $\tan(\pi/2) = 0$. Our mathematics is also wrong in elementary mathematics on the division by zero.
This book is an elementary mathematics on our division by zero as the first publication of books for the topics. The contents have wide connections to various fields beyond mathematics. The author expects the readers write some philosophy, papers and essays on the division by zero from this simple source book.
The division by zero theory may be developed and expanded greatly as in the author’s conjecture whose break theory was recently given surprisingly and deeply by Professor Qi’an Guan [3] since 30 years proposed in [17] $($the original is in [16]$)$.
We have to arrange globally our modern mathematics with our division by zero in our undergraduate level.
We have to change our basic ideas for our space and world.
We have to change globally our textbooks and scientific books on the division by zero.
References
[1] L. V. Ahlfors, Complex Analysis, McGraw-Hill Book Company, 1966.
[2] L. P. Castro and S. Saitoh, Fractional functions and their representations, Complex Anal. Oper. Theory 7 $($2013$)$, no. 4, 1049-1063.
[3] Q. Guan, A proof of Saitoh’s conjecture for conjugate Hardy H2 kernels, arXiv:1712.04207.
[4] M. Kuroda, H. Michiwaki, S. Saitoh, and M. Yamane, New meanings of the division by zero and interpretations on 100/0 = 0 and on 0/0 = 0, Int. J. Appl. Math. 27 $($2014$)$, no 2, pp. 191-198, DOI: 10.12732/ijam.v27i2.9.
[5] T. Matsuura and S. Saitoh, Matrices and division by zero $z/0=0$, Advances in Linear Algebra & Matrix Theory, 6$($2016$)$, 51-58 Published Online June 2016 in SciRes. http://www.scirp.org/journal/alamt http://dx.doi.org/10.4236/alamt.2016.62007.
[6] T. Matsuura and S. Saitoh, Division by zero calculus and singular integrals. $($Submitted for publication$)$
[7] T. Matsuura, H. Michiwaki and S. Saitoh, log0 = log∞ = 0 and applications. Differential and Difference Equations with Applications. Springer Proceedings in Mathematics & Statistics.
[8] H. Michiwaki, S. Saitoh and M.Yamada, Reality of the division by zero z/0 = 0. IJAPM International J. of Applied Physics and Math. 6$($2015$)$, 1–8. http://www.ijapm.org/show-63-504-1.html
[9] H. Michiwaki, H. Okumura and S. Saitoh, Division by Zero $z/0 = 0$ in Euclidean Spaces, International Journal of Mathematics and Computation, 28$($2017$)$; Issue 1, 1-16.
[10] H. Okumura, S. Saitoh and T. Matsuura, Relations of 0 and ∞, Journal of Technology and Social Science $($JTSS$)$, 1$($2017$)$, 70-77.
[11] H. Okumura and S. Saitoh, The Descartes circles theorem and division by zero calculus. https://arxiv.org/abs/1711.04961 $($2017.11.14$)$.
[12] H. Okumura, Wasan geometry with the division by 0. https://arxiv.org/abs/1711.06947 International Journal of Geometry.
[13] H. Okumura and S. Saitoh, Applications of the division by zero calculus to Wasan geometry. $($Submitted for publication$)$.
[14] S. Pinelas and S. Saitoh, Division by zero calculus and differential equations. Differential and Difference Equations with Applications. Springer Proceedings in Mathematics & Statistics.
[15] H. G. Romig, Discussions: Early History of Division by Zero, American Mathematical Monthly, Vol. 31, No. 8. $($Oct., 1924$)$, pp. 387-389.
[16] S. Saitoh, The Bergman norm and the Szegö norm, Trans. Amer. Math. Soc. 249 $($1979$)$, no. 2, 261–279.
[17] S. Saitoh, Theory of reproducing kernels and its applications. Pitman Research Notes in Mathematics Series, 189. Longman Scientific & Technical, Harlow; copublished in the United States with John Wiley & Sons, Inc., New York, 1988. x+157 pp. ISBN: 0-582-03564-3
[18] S. Saitoh, Generalized inversions of Hadamard and tensor products for matrices, Advances in Linear Algebra & Matrix Theory. 4 $($2014$)$, no. 2, 87–95. http://www.scirp.org/journal/ALAMT/
[19] S. Saitoh, A reproducing kernel theory with some general applications, Qian,T./Rodino,L.$($eds.$)$: Mathematical Analysis, Probability and Applications - Plenary Lectures: Isaac 2015, Macau, China, Springer Proceedings in Mathematics and Statistics, 177(2016), 151-182. $($Springer$)$.
[20] S. Saitoh, Mysterious Properties of the Point at Infinity, arXiv:1712.09467 [math.GM] $($2017.12.17$)$.
[21] S. Saitoh, Division by zero calculus $($154 pages: draft$)$: $($http://okmr.yamatoblog.net/ $)$
[22] S.-E. Takahasi, M. Tsukada and Y. Kobayashi, Classification of continuous fractional binary operations on the real and complex fields, Tokyo Journal of Mathematics, 38$($2015$)$, no. 2, 369-380.
[23] https://philosophy.kent.edu/OPA2/sites/default/files/012001.pdf
[24] http://publish.uwo.ca/~jbell/The%20Continuous.pdf
[25] http://www.mathpages.com/home/kmath526/kmath526.htm
[26] Announcement 179 $($2014.8.30$)$: Division by zero is clear as z/0=0 and it is fundamental in mathematics.
[27] Announcement 185 $($2014.10.22$)$: The importance of the division by zero $z/0 = 0$.
[28] Announcement 237 $($2015.6.18$)$: A reality of the division by zero $z/0 = 0$ by geometrical optics.
[29] Announcement 246 $($2015.9.17$)$: An interpretation of the division by zero $1/0 = 0$ by the gradients of lines.
[30] Announcement 247 $($2015.9.22$)$: The gradient of y-axis is zero and $\tan(\pi/2) = 0$ by the division by zero $1/0 = 0$.
[31] Announcement 250 $($2015.10.20$)$: What are numbers? - the Yamada field containing the division by zero z/0 = 0.
[32] Announcement 252 $($2015.11.1$)$: Circles and curvature - an interpretation by Mr. Hiroshi Michiwaki of the division by zero $r/0 = 0$.
[33] Announcement 281 $($2016.2.1$)$: The importance of the division by zero $z/0 = 0$.
[34] Announcement 282 $($2016.2.2$)$: The Division by Zero z/0 = 0 on the Second Birthday.
[35] Announcement 293 $($2016.3.27$)$: Parallel lines on the Euclidean plane from the viewpoint of division by zero $1/0=0$.
[36] Announcement 300 $($2016.05.22$)$: New challenges on the division by zero $z/0=0$.
[37] Announcement 326 $($2016.10.17$)$: The division by zero z/0=0 - its impact to human beings through education and research.
[38] Announcement 352$($2017.2.2$)$: On the third birthday of the division by zero $z/0=0$.
[39] Announcement 354 $($2017.2.8$)$: What are $n = 2,1,0$ regular polygons inscribed in a disc? – relations of 0 and infinity.
[40] Announcement 362 $($2017.5.5$)$: Discovery of the division by zero as $0/0 = 1/0 = z/0 = 0$.
[41] Announcement 380 $($2017.8.21$)$: What is the zero?
[42] Announcement 388$($2017.10.29$)$: Information and ideas on zero and division by zero $($a project$)$.
[43] Announcement 409 $($2018.1.29.$)$: Various Publication Projects on the Division by Zero.
[44] Announcement 410 $($2018.1 30.$)$: What is mathematics? – beyond logic; for great challengers on the division by zero.
|
Basically 2 strings, $a>b$, which go into the first box, which performs division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
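The box diagram described above is just the Euclidean algorithm; a minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: a = b*q + r, then recurse on (b, r) until r = 0."""
    while b:
        a, b = b, a % b  # r = a % b; the next round divides b by r
    return a

assert gcd(48, 18) == 6
```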
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
I must simplify $\log_4 (9) + \log_2 (3)$. I have tried but I can't get the correct answer $2 \log_2 (3)$. How do I proceed?
The good way is to use calculus's answer.
Another way (I am lazy and I hate bases !) is to convert everything to natural logarithms. So, $$ \log_2 (3)+\log_4 (9) =\frac{\log (3)}{\log (2)}+\frac{\log (9)}{\log (4)}=\frac{\log (3)}{\log (2)}+\frac{\log (3^2)}{\log (2^2)}=\frac{\log (3)}{\log (2)}+\frac{2\log (3)}{2\log (2)}=2\frac{\log (3)}{\log (2)}$$ $$ \log_2 (3)+\log_4 (9) =2\log_2 (3)$$
It is known that $\log_{a^2}(b^2)=\log_a(b)=x$. You can write it in exponential form: $(a^2)^x=b^2$ and $a^x=b$. Both equations have the same solution.
In your case it is $\log_4(9)+\log_2(3)=\log_2(3)+\log_2(3)=2\log_2(3)$
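A quick numerical check of the identity (a sketch using the standard library):

```python
import math

lhs = math.log(9, 4) + math.log(3, 2)   # log_4 9 + log_2 3
rhs = 2 * math.log(3, 2)                # 2 log_2 3

assert math.isclose(lhs, rhs)
```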
We can use $\log_b x+\log_b y=\log_b xy$ to get $\log_v 23+\log_v 49=\log_v 1127$. However, using $\log_b x^y=y\log_b x$, we see that the "correct answer" is equal to $\log_v 23^2=\log_v 529\neq \log_v 1127$ |
This question already has an answer here:
Let $a_{n+1} = \sin{a_n}$ and $a_0 = 1$. Does the series $\displaystyle \sum_{i=0}^\infty a_i$ converge?
Here is my solution.
Let's prove by induction that $a_n > \frac{1}{n}$.
We see that $\sin{(1)} > \frac{1}{2}$.
Suppose that $a_n > \frac{1}{n}$. Then:
$$\underbrace{\sin{\sin{(...\sin{(1)}...)}} )}_\text{$n+1$ sines}\ ?\ \frac{1}{n+1}$$
$$\underbrace{\sin{\sin{(...\sin{(1)}...)}} )}_\text{$n$ sines}\ ?\ \mbox{arcsin}(\frac{1}{n+1})$$
If $\mbox{arcsin}(\frac{1}{n+1}) < \frac{1}{n}$ (because the left part is bigger than it by the induction hypothesis) then we are done. And it really is, because $\sin{\frac{1}{n}} > \frac{1}{n+1} \Leftarrow \frac{1}{n} - \frac{1}{3!\,n^3} > \frac{1}{n+1}$ (here I used the Taylor expansion for sine).
As $a_n > \frac{1}{n}$ our series is divergent by comparison test.
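The bound can be probed numerically (a sketch; note that $a_n > 1/n$ actually fails at $n=1$, since $a_1 = \sin 1 \approx 0.84 < 1$, but it holds from $n=2$ on, which is all the comparison test needs):

```python
import math

a = 1.0  # a_0
for n in range(1, 200):
    a = math.sin(a)  # a is now a_n
    if n >= 2:
        assert a > 1 / n  # so sum a_n dominates a tail of the harmonic series
```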
Is my reasoning correct? |
I am a newbie on topological space and self-studying general topology when I read this pdf. I find some discrepancy on my proof and the author's proof.
Here is my problem:
If $\{A_i | i \in J\}$, is a collection of connected subspaces of a space X with $\bigcap A_i \neq \emptyset $, then $\bigcup A_i$ is connected.
My proof is:
Assume $M = \bigcup A_i$ is disconnected. Then there exist open sets $U$ and $V$ in $X$ such that $M \subset U \cup V$ and $M \cap U \cap V = \emptyset$.
Since each $A_i$ is connected and $A_i \cap U \cap V \subset M \cap U \cap V = \emptyset$, no $A_i$ can meet both $U$ and $V$ (otherwise $U$ and $V$ would restrict to a separation of $A_i$); so $A_i \subset U$ or $A_i \subset V$. WLOG $A_i \subset U$, and suppose some $A_j \subset V$. Since $\bigcap A_i \neq \emptyset$, we have $A_i \cap A_j \neq \emptyset$, so a common point lies in $M \cap U \cap V$, contradicting $M \cap U \cap V = \emptyset$.
By $M \subset U \cup V$ and $M \cap U \cap V = \emptyset$, there are two cases:
If $M \subset U$ and $M \cap V = \emptyset$, then $U$ and $V$ are not a separation, which is a contradiction.
If $M \cap U \neq \emptyset$ and $M \cap V \neq \emptyset$, then there exist $A_i \subset U$ and $A_j \subset V$, which was shown above to be impossible.
Is my proof right? Any help I will appreciate. ^-^ |
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
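A floating-point sanity check of the computation in the second branch (a sketch):

```python
import math

s = math.sqrt(2)
x = s ** s  # sqrt(2)^sqrt(2); irrational (indeed transcendental) by Gelfond–Schneider
assert math.isclose(x ** s, 2.0)
```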
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
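The picture's identity checks out numerically for small $n$ (a sketch):

```python
for n in range(1, 100):
    cubes = sum(k**3 for k in range(1, n + 1))
    triangular = n * (n + 1) // 2  # 1 + 2 + ... + n
    assert cubes == triangular**2
```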
Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides and squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of allprimes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously notin $G$. Otherwise, noneof its prime factors are in $G$. Conclusion: $G$ is notthe set of allprimes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
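The construction of $K$ is easy to play with; note that $K$ itself need not be prime — its prime factors are merely absent from the list (a sketch):

```python
primes = [2, 3, 5, 7, 11, 13]

K = 1
for p in primes:
    K *= p
K += 1  # K = 30031 = 59 * 509: not prime, but neither factor is in `primes`

assert all(K % p == 1 for p in primes)  # no listed prime divides K
assert K == 59 * 509
```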
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
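Both bounds are easy to confirm on a grid (a numerical sketch; equality holds at the endpoints $0$ and $\pi/2$):

```python
import math

for i in range(1001):
    x = i * (math.pi / 2) / 1000
    assert (2 / math.pi) * x <= math.sin(x) + 1e-12  # chord below the concave sin
    assert math.sin(x) <= x + 1e-12                  # tangent above it
```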
The first player in Hex has a winning strategy.
There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof:Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But because $p$ and $q$ are $5$th degree polynomials, they must have zeros. Therefore, $r(x)=p(x)q(x)$ has a zero. A contradiction.
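The bookkeeping behind $r_i=\sum_{j+k=i}p_jq_k$ is just polynomial multiplication; here is the convolution for two fair dice, whose sum is the familiar non-uniform triangular distribution (a sketch of the setup, not the impossibility proof itself):

```python
from fractions import Fraction
from itertools import product

p = [Fraction(1, 6)] * 6      # fair die, faces 0..5
r = [Fraction(0)] * 11        # distribution of the sum, 0..10
for j, k in product(range(6), repeat=2):
    r[j + k] += p[j] * p[k]

assert sum(r) == 1
assert min(r) == Fraction(1, 36) and max(r) == Fraction(1, 6)  # not uniform
```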
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine, you remove two tiles, from two opposite corners of the original square. Prove that is is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles of the same color. Thus, it is no longer possible to cover the remaining area.
(Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $(x,y,z)$ with $x^2 + y^2 = z^2.$ Dividing by $z^2$ yields $(x/z)^2+(y/z)^2 = 1,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $P = (1,1).$ It intersects the circle in the rational point $A = (4/5,3/5),$ yielding the triple $(3,4,5).$ Next reflect the point $A$ into the other quadrants by taking all possible signs of each component, i.e. $(\pm4/5,\pm3/5),$ yielding the inscribed rectangle below. As before, the line through $A_B = (-4/5,-3/5)$ and $P$ intersects the circle in $B = (12/13, 5/13),$ yielding the triple $(12,5,13).$ Similarly the points $A_C,\, A_D$ yield the triples $(20,21,29)$ and $(8,15,17).$ We can iterate this process with the new points $B,C,D,$ doing the same as we did for $A,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree.
(figure: the ternary tree of primitive Pythagorean triples)
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
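The descent/ascent map can be verified mechanically (a sketch; `descend` implements the displayed formula, and ascent is the same map applied to a sign-changed triple):

```python
def descend(x, y, z):
    # (x, y, z) -> (x, y, z) - 2(x + y - z)(1, 1, 1)
    t = 2 * (x + y - z)
    return (x - t, y - t, z - t)

assert descend(12, 5, 13) == (4, -3, 5)    # reflects to (3, 4, 5)
assert descend(-3, 4, 5) == (5, 12, 13)    # the three ascents above
assert descend(-3, -4, 5) == (21, 20, 29)
assert descend(3, -4, 5) == (15, 8, 17)
```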
I like the proof that there are infinitely many Pythagorean triples.
Theorem:There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof:$$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$
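Expanding the left side gives $a^4+2a^2b^2+b^4$, matching the right; a loop confirms it (a sketch):

```python
def triple(a, b):
    # (2ab)^2 + (a^2 - b^2)^2 = (a^2 + b^2)^2
    return 2 * a * b, a * a - b * b, a * a + b * b

for a in range(2, 30):
    for b in range(1, a):
        x, y, z = triple(a, b)
        assert x * x + y * y == z * z
```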
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips onto a semi-sphere sitting on top of the disk. The projection of each strip has area at most 1/100th of the area of the semi-sphere (a spherical zone's area depends only on its height, here at most 1), so the 99 projections cover at most 99/100 of the semi-sphere and some point of the disk is left uncovered.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first.
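The largest-odd-divisor pigeonhole can be run directly. This sketch (the helper name `find_divisible_pair` is mine) groups numbers by their largest odd divisor; any two numbers sharing one differ by a factor that is a power of 2, so the smaller divides the larger:

```python
def find_divisible_pair(nums):
    """Return a pair (m, n) from nums with m dividing n, found by
    locating two numbers with the same largest odd divisor."""
    by_odd = {}
    for n in nums:
        d = n
        while d % 2 == 0:          # strip factors of 2 to get the odd part
            d //= 2
        if d in by_odd:
            m = by_odd[d]
            return (min(m, n), max(m, n))   # smaller divides larger
        by_odd[d] = n
    return None

# Any 51 numbers from 1..100 must contain such a pair, e.g. 50..100:
print(find_divisible_pair(range(50, 101)))  # -> (50, 100)
```

With fewer than 51 numbers the function may return `None`, consistent with the fact that the bound in the statement is sharp (take 51..100).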
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of the sine and cosine functions using Euler's formula:
$e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$
$e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$
Equating the two expressions,
$\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$
Thus
$\cos(-\theta) = \cos\theta$
$\sin(-\theta) = -\sin\theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his early school years. He solved it faster than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ The same argument shows that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
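The pairing argument is a one-liner to verify; a minimal sketch (the name `gauss_sum` is mine):

```python
def gauss_sum(n):
    """Pair k with n + 1 - k: the n pairs each sum to n + 1, and the
    pairing counts every term twice, hence the division by 2."""
    return n * (n + 1) // 2

print(gauss_sum(100))  # -> 5050
```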
If $H$ is a subgroup of $(\mathbb{R},+)$ such that $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem follows from noting that, modulo a prime $p$, multiplication by $a\neq 0$ permutes the nonzero residues, so
$$1\times2\times3\times\cdots\times (p-1) \equiv (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right) \equiv a^{p-1}\,(p-1)! \pmod p;$$
cancelling $(p-1)!$ gives $a^{p-1}\equiv 1 \pmod p$.
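The permutation-of-residues step can be checked numerically. A sketch (the helper name `flt_check` is mine), comparing the two products modulo a prime:

```python
from math import prod

def flt_check(p, a):
    """Modulo a prime p, multiplication by a (nonzero mod p) permutes the
    residues 1..p-1, so the two products below are congruent mod p."""
    lhs = prod(range(1, p)) % p
    rhs = prod((i * a) % p for i in range(1, p)) % p
    # rhs equals a^(p-1) * (p-1)! mod p; cancelling (p-1)! gives a^(p-1) = 1
    return lhs == rhs and pow(a, p - 1, p) == 1

print(all(flt_check(13, a) for a in range(1, 13)))  # -> True
```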
Proposition (No universal set): There does not exist a set that contains all sets (including itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then by the axiom schema of specification one can construct the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe that do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: This assumes one is working in ZF (as almost everywhere :P).
(This proof really impressed me the first time I saw it, and it is also very simple.)
Most proofs concerning the Cantor set are simple but amazing.
The total length (Lebesgue measure) of the set is zero.
It is uncountable.
Every number in the set can be represented in ternary using just the digits 0 and 2; no number whose ternary expansion requires a 1 appears in the set.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge, a 3D analogue of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.
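The ternary characterization gives a concrete membership test. A sketch (the function name `in_cantor` is mine) that extracts base-3 digits exactly with `Fraction`, allowing only 0 and 2, with the endpoint caveat that $0.1_3 = 0.0\overline{2}_3$ is in the set:

```python
from fractions import Fraction

def in_cantor(x, depth=60):
    """Decide (up to `depth` ternary digits) whether x in [0, 1] lies in the
    Cantor set: repeatedly extract the leading base-3 digit, rejecting a 1
    except in the exact endpoint case 0.1 = 0.0222... (ternary)."""
    for _ in range(depth):
        triple = 3 * x
        digit = int(triple)        # leading ternary digit of x
        if digit == 1:
            return triple == 1     # exactly 0.1000... = 0.0222... is allowed
        x = triple - digit
    return True

print(in_cantor(Fraction(1, 4)), in_cantor(Fraction(1, 2)))  # -> True False
```

Note that $1/4 = 0.\overline{02}_3$ is in the set even though it is not an endpoint of any removed interval.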
The derivation of the derivative from first principles is amazing, easy, useful, and simply outstanding in all respects. I reproduce it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ where coordinates are $(x + Δx, y + Δy)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan θ = \frac{Δy}{Δx} = \frac{(y + Δy ) − y}{Δx}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $Δy$ and $Δx$ decrease and approach zero; though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish.
What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $\tan θ$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$
The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as:
$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \left[\frac{f(x+Δx)-f(x)}{Δx}\right]$,
which is the required formula.
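The formula is easy to exercise numerically. A sketch (the name `derivative` is mine) that evaluates the difference quotient for a small but nonzero $Δx$:

```python
def derivative(f, x, dx=1e-6):
    """Approximate dy/dx by the difference quotient (f(x+dx) - f(x)) / dx,
    i.e. the slope of the secant line PQ for a small nonzero dx."""
    return (f(x + dx) - f(x)) / dx

# Slope of y = x**2 at x = 3 approaches 2x = 6 as dx -> 0:
print(round(derivative(lambda x: x * x, 3.0), 4))  # -> 6.0
```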
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $. Since also $n^{1/n} \ge 1$, the squeeze theorem gives $n^{1/n} \to 1$.
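The sandwich $1 \le n^{1/n} < 1+3n^{-1/2}$ can be spot-checked numerically; a sketch (the helper name `bounds_hold` is mine):

```python
def bounds_hold(n):
    """Check the bounds 1 <= n**(1/n) < 1 + 3/sqrt(n) from the proof."""
    r = n ** (1 / n)
    return 1 <= r < 1 + 3 / n ** 0.5

print(all(bounds_hold(n) for n in range(1, 10001)))  # -> True
```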
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
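The color-alternation fact the argument rests on is easy to verify exhaustively. A sketch (naming and board indexing are mine) checking that every legal knight move on an 8x8 board flips the square color:

```python
def square_color(r, c):
    """Chessboard color of square (row, col): 0 and 1 alternate."""
    return (r + c) % 2

# Every knight move shifts by (1, 2) in some orientation; since 1 + 2 is
# odd, (r + c) changes parity, i.e. the color always flips:
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
flips = all(square_color(r, c) != square_color(r + dr, c + dc)
            for r in range(8) for c in range(8)
            for dr, dc in moves
            if 0 <= r + dr < 8 and 0 <= c + dc < 8)
print(flips)  # -> True
```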
The eigenvalues of a skew-Hermitian matrix are purely imaginary.
The eigenvalue equation is $A\vec x = \lambda\vec x$, and taking the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2,$$ and since $\|\vec x\|^2 > 0$, we can cancel it from both sides, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian ($A^{T*} = -A$). Using the definition of Hermitian or unitary matrices instead yields the corresponding statements about their eigenvalues.
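The claim can be illustrated concretely for a $2\times 2$ skew-Hermitian matrix, whose eigenvalues have a closed form via the quadratic formula. A sketch (the function name and parametrization are mine):

```python
import cmath

def skew_herm_eigs(a, b, c, d):
    """Eigenvalues, via the quadratic formula, of the 2x2 skew-Hermitian
    matrix  [[i*a, b + i*c], [-b + i*c, i*d]]  (real a, b, c, d), whose
    conjugate transpose equals its negative."""
    tr = 1j * (a + d)              # trace is purely imaginary
    det = b * b + c * c - a * d    # determinant is real
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

eigs = skew_herm_eigs(1.0, 2.0, -0.5, 3.0)
print(all(abs(l.real) < 1e-12 for l in eigs))  # -> True
```

Here $\mathrm{tr}^2 - 4\det = -[(a-d)^2 + 4(b^2+c^2)] \le 0$, so the discriminant's square root is purely imaginary, as is the trace, making both eigenvalues purely imaginary.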
I like the proof that not every real number can be written in the form $a e + b \pi$ with integers $a$ and $b$: the set of such numbers is countable, while $\mathbb{R}$ is uncountable. I know it's almost trivial in one way, but in another it is kind of deep.
I thought that for momentum integrals in Minkowski space, the Wick rotation to Euclidean space $k_0 \to ik_0$ allows one to write (let's say $f$ comes with an $i\epsilon$ prescription):
$$\int_{\mathbb{M}^4} d^4k \ f(k^2) = i \int_{\mathbb{E}^4} d^4k \ f(-k^2) = 2\pi^2i\int_0^\infty dk \ k^3 f(-k^2).$$
Sorry if the notation is weird, I wasn't sure how to denote the difference between Minkowski space and Euclidean space.
But I've come across a problem (calculating a loop integral) where doing this doesn't give the right answer, namely for
$$f(k^2)= \frac{1}{\left( k^2 - \Delta +i \epsilon \right)^2}.$$
Doing a Wick rotation results in:
$$I = \int_0^\infty \frac{k^3 dk}{\left( k^2 + \Delta \right)^2}$$
But the correct answer should be:
$$I = \int_0^\infty \frac{k^2 dk}{\left( k^2 + \Delta \right)^{3/2}}$$
According to Aitchison, Hey - 'Gauge Theories in Particle Physics', eq. (10.42). I uploaded it here.
Of course, both of these integrals are formally divergent, but suppose there's an energy cutoff. What went wrong here? What conditions must be met for a Wick rotation to work?
One suspicion I have is that maybe these two turn out to be equivalent, up to a choice of the cutoff energy. Especially since there is still a $\int_0^1 dx$ integration to be done to obtain the actual observable quantity, and $\Delta$ is a quadratic function of $x$:
$$\Delta = m_1^2 (1-x) + m_2^2 x -p^2 x(1-x) \equiv Ax^2 +Bx + C.$$
Could this be the reason? Or is the Wick rotation a mistake for some mathematical reason?
EDIT: I checked and the difference between the two integrals exists and equals $\log 2 - 1/2 \approx 0.2$, independent of $\Delta$. So they're not exactly equal even in the limit, but does it matter...? |
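The observation in the edit can be reproduced numerically. Assuming a sharp momentum cutoff $L$ (my notation), both integrals have elementary closed forms, and their difference indeed tends to $\log 2 - 1/2$ independently of $\Delta$. This is a sketch of that check, not a resolution of the question:

```python
import math

def I_wick(L, D):
    """Closed form of the Wick-rotated integral  int_0^L k^3/(k^2 + D)^2 dk."""
    return 0.5 * (math.log((L * L + D) / D) + D / (L * L + D) - 1.0)

def I_book(L, D):
    """Closed form of the textbook integral  int_0^L k^2/(k^2 + D)^(3/2) dk."""
    return math.asinh(L / math.sqrt(D)) - L / math.sqrt(L * L + D)

# With the same sharp cutoff L, the difference tends to log(2) - 1/2,
# independent of D (= Delta), matching the edit above:
L = 1e8
print(round(I_book(L, 2.7) - I_wick(L, 2.7), 6))  # -> 0.193147
```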
Luzin's theorem says:
(1) Suppose $X$ is a $\sigma$-compact metric space, $\mu$ is a complete Radon measure on $X$, and $f\in\mathcal L_0(X,\mu, E)$. Then for every measurable set $A$ with finite measure and every $\epsilon>0$ there is a compact $K\subset X$ such that $\mu(A\setminus K)<\epsilon$ and $f|_K\in C(K,E)$.
Note: above the notation $f\in\mathcal L_0(X,\mu, E)$ just means that $f:X\to E$ is $\mu$-measurable, where $E$ is a Banach space and $X$ is $\sigma$-finite. Also here $\sigma$-compact means that $X$ is locally compact and that there is a sequence of compact sets that cover $X$.
Now the proof starts with this phrase that I can't follow:
(2) Because $X$ is $\sigma$-compact then there is some compact $K\subset X$ such that $\mu(A\setminus K)<\epsilon/2$.
I can understand the statement itself, but not how it follows from $X$ being $\sigma$-compact. To me it follows instead from $\mu$ being regular: since $\mu(A)<\infty$, there is some compact $K\subset A$ such that $\mu(A)-\mu(K)=\mu(A\setminus K)<\epsilon/2$. I can't see how to obtain this property just from $X$ being $\sigma$-compact.
Can someone clarify the statement in (2)? Thank you.
P.S.: this comes from the page 76 of the book
Analysis III of Amann and Escher |