This is a matter of understanding what you're dealing with.
You're asked to differentiate $x^2+y^2=1$. But an equation isn't a differentiable function, so, strictly speaking, the equation can't be differentiated.
Now comes the 'translating the problem' part.
The equation $x^2+y^2=1$ 'defines a function', more precisely, there exists a function $g\colon U\to V$ such that $x^2+(g(x))^2=1$, for some sets $U$ and $V$. (A lot can be said about $g, U$ and $V$). Let's assume for the time being that $g$ is differentiable. Now what the problem is actually asking you to do is to differentiate both sides of $x^2+(g(x))^2=1$, yielding $2x+2g(x)g'(x)=0$.
All this is simply the Implicit Function Theorem. The details can be checked on the link.
In two dimensions the theorem goes as follows:
Let $D\subseteq \Bbb R^2$ be an open set and let $f\colon D \to \Bbb R$ be a class $C^1$ function. Given $a\in \Bbb R$, suppose there exists $(x_0, y_0)\in D$ such that $f(x_0, y_0)=a$ and $f_y(x_0, y_0)\neq 0$. Then there are open intervals $U$ and $V$ with the property that there exists a class $C^1$ function $g\colon U\to V$ such that $\forall x\in U\left(f(x,g(x))=a\right)$. Furthermore, defining $h\colon U\to \Bbb R, x\mapsto f(x,g(x))$, the chain rule yields $\forall x\in U(h'(x)=f_x(x,g(x))+f_y(x,g(x))g'(x)=0)$.
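As a quick sanity check, SymPy's `idiff` carries out exactly this kind of implicit differentiation (a sketch; the function name is SymPy's, not part of the theorem):

```python
from sympy import symbols, idiff

x, y = symbols("x y")

# Differentiate x^2 + y^2 = 1 implicitly, treating y as g(x):
# from 2x + 2*y*y' = 0 we expect y' = -x/y.
dydx = idiff(x**2 + y**2 - 1, y, x)
print(dydx)  # -x/y
```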
Is it true that $$\gcd\left(5^{2^n} + 1, 13^{2^n} + 1\right) = 2$$ for all $n \in \mathbf{Z}_{\geq 0}$?
I'm continually stumped by this, and numerical verification becomes quite expensive very quickly (already around $n = 20$).
I have previously blogged about the origins of the problem and included a proof of an easier version, but this continues to stump me.
Letting $F_n = 5^{2^n} + 1$ and $T_n = 13^{2^n} + 1$, it's clear both satisfy the recurrence $$f(n) \left(f(n) - 2\right) = f(n + 1) - 2,$$ but so far I've been unable to exploit this. However, it may be useful in reducing the amount of computation when attacking the problem numerically. Concretely, starting from a known output of the Euclidean algorithm $$a_n F_n + b_n T_n = 2,$$ the recurrence may be useful in determining $a_{n + 1}, b_{n + 1}$ in less expensive ways. (Of course we need to define these in such a way that they are unique, e.g. $0 \leq b_n < F_n$.)
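For small $n$ the claim is cheap to check directly. Below is a sketch (my own code) that uses the recurrence above to build $F_n$ and $T_n$ instead of recomputing the powers:

```python
from math import gcd

def check(n_max):
    # F_0 = 5^(2^0) + 1 = 6, T_0 = 13^(2^0) + 1 = 14
    f, t = 6, 14
    gcds = []
    for _ in range(n_max):
        gcds.append(gcd(f, t))
        # f(n+1) - 2 = f(n) * (f(n) - 2), i.e. squaring the base
        f = f * (f - 2) + 2
        t = t * (t - 2) + 2
    return gcds

print(check(6))  # [2, 2, 2, 2, 2, 2]
```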
[UPDATE]: I have verified @Zander's answer that both $F_{2206}$ and $T_{2206}$ are divisible by $p = 3 \cdot 2^{2208} + 1$ with the short Python snippet below. This clearly shows the gcd is not always $2$. I was also able to show that $p$ is prime (though this is not necessary) using Proth's theorem with $a = 11$ as a witness to primality. Finally, I remain unclear on how such an $n$ (and with it a $p$) could be found.
n = 2206
modulus = 3 * (2 ** (n + 2)) + 1
f_residue = 5**(2**0) + 1
t_residue = 13**(2**0) + 1
for _ in range(n):
    f_updated = f_residue * (f_residue - 2) + 2
    f_residue = f_updated % modulus
    t_updated = t_residue * (t_residue - 2) + 2
    t_residue = t_updated % modulus
print('(5^(2^2206) + 1) MODULO (3 * 2^2208 + 1):')
print(f_residue)
print('(13^(2^2206) + 1) MODULO (3 * 2^2208 + 1):')
print(t_residue)
For a continuous-time signal $x(t)$ that is bandlimited (in the baseband) to $[-W,W]$, the standard proof of the Nyquist sampling theorem proceeds in the frequency domain by examining the Fourier transform $X_s(f)$ of $x(t)$ sampled at rate $2W=1/T$, and then showing that one can reconstruct $x(t)$ by applying a low-pass filter with $\operatorname{sinc}(t/T)$ as impulse response to the samples modulated by a Dirac comb. The standard proof uses the Fourier transform $X(f)$ of the original signal $x(t)$, which is computed over an infinite time domain, thus implicitly using an infinite number of samples.
I am wondering if there is an alternative proof of the sampling theorem that, given the same conditions on the signal $x(t)$ as in the standard proof (band-limited, continuous, etc) uses
a fixed number of samples $n$ (as opposed to the infinite number in the standard version), and then proceeds to show that, if sampled at rate $2W$, the reconstruction $\hat{x}(t)$ gets "closer" to the original signal $x(t)$ as one increases $n$, so that $$\lim_{n\rightarrow\infty}\|x(t)-\hat{x}(t)\|=0$$
for some metric $\|\cdot\|$ like $L_1$ or $L_2$.
I don't have the mathematical sophistication to prove this myself, but I feel that this might be in the literature somewhere... Perhaps it is an extension of the standard proof somehow.
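The convergence asked about can at least be observed numerically. Below is a sketch (my own construction, not from the question): the bandlimited test signal is $x(t)=\operatorname{sinc}(t-0.5)$ with $T=1$, reconstructed from the $2n+1$ samples $x(k)$, $|k|\le n$; the pointwise truncation error shrinks as $n$ grows.

```python
import numpy as np

def reconstruct(t, n):
    # truncated Shannon interpolation from the samples x(k), |k| <= n
    k = np.arange(-n, n + 1)
    samples = np.sinc(k - 0.5)        # x(t) = sinc(t - 0.5), T = 1
    return np.sum(samples * np.sinc(t - k))

t0 = 0.25
exact = np.sinc(t0 - 0.5)
errors = {n: abs(reconstruct(t0, n) - exact) for n in (10, 100, 1000)}
print(errors)  # the error decreases as n grows
```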
Let $(E,\|\cdot\|)$ be a normed space and let $E'$ be the topological dual space of $E$, equipped with the norm $\|f\|:=\sup\{|f(x)|\colon\|x\|\leq 1\}$.
Show that if $F\subseteq E$ is a linear subspace equipped with the restricted norm and $g\in F'$, then there exists $f\in E'$ such that $f_{|F}=g$ and $\|f\|=\|g\|$.
I want to solve this exercise. I think I can use the Hahn-Banach theorem.
Hahn-Banach states:
Let $E$ be a $\mathbb{K}$-vector space, $p: E\to [0,\infty)$ a seminorm, $F\subseteq E$ a linear subspace and $g: F\to\mathbb{K}$ linear (hence $g\in F'$), with $|g|\leq p$ on $F$. Then there exists a linear function $\hat{g}:E\to\mathbb{K}$ with $\hat{g}_{|F}=g$ and $|\hat{g}(x)|\leq p(x)$ for all $x\in E$.
To apply this theorem, what is missing is that $|g|\leq p$. So I have to show that $\|g(x)\|_F\leq \color{red}{\|x\|}$ (where $\|\cdot\|_F$ denotes the restricted norm on $F$). Is that correct?
If I succeed, we get the desired $f\in E'$ with $f_{|F}=g$ and $\|f\|\leq \color{red}{\|g\|}$ for all $x\in E$.
Then I need to show that $\|f\|=\|g\|$; what would remain is $\|f\|\geq\|g\|$.
Am I right? I marked in $\color{red}{\text{red}}$ where I think I made a mistake.
I appreciate any kind of help. Thanks in advance.
Problem:
For positive integers $n,k$, let $$S(n,k)=\sum_{i=1}^{n}i^k$$ and for positive integers $m,b$, with $b>1$, let $D(m,b)$ be the sum of the base-$b$ digits of $m$.
Q$1$: Show that if $k\in\{1,2,3\}$ and $a$ is a positive integer such that $a{\,|\,}S(a,k)$, then $D(S(b,k),b)=b$, where $b=a+1$; this is not satisfied for all $k > 3$.
Q$2$: Show that $D((p')^{t}-D((p')^{2k+1}-S(p',2k),p'),p')=p'$, where $p$ is prime, $p+1=p'$, $p>2k+1$, $(p')^{t} \ge D((p')^{2k+1}-S(p',2k),p')>(p')^{t-1}$, and $k,t \in \mathbb{N}$. Note:
Half of Question $1$ is more or less proved in this link: proof for $k\in\{1,2,3\}$.
I have already mentioned the observation behind Question $2$, in a different manner, in this link: reference for Q$2$.
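A quick brute-force check of Q$1$ for small cases (my own sketch; `S` and `D` follow the definitions above):

```python
def S(n, k):
    # S(n, k) = 1^k + 2^k + ... + n^k
    return sum(i**k for i in range(1, n + 1))

def D(m, b):
    # sum of the base-b digits of m
    s = 0
    while m:
        s += m % b
        m //= b
    return s

for k in (1, 2, 3):
    for a in range(1, 60):
        if S(a, k) % a == 0:      # hypothesis: a | S(a, k)
            b = a + 1
            assert D(S(b, k), b) == b
print("Q1 holds for all tested cases")
```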
I have the following PDE: $$ \frac{\partial }{\partial x}\left(G_x \left(\frac{\partial \phi (x,y)}{\partial x}-y\right)\right)+\frac{\partial }{\partial y}\left(G_y \left(\frac{\partial \phi (x,y)}{\partial y}+x\right)\right)=0 $$ with BCs $\frac{\partial \phi (x,y)}{\partial x}-y=0$ at $x=\pm a$ and $\frac{\partial \phi (x,y)}{\partial y}+x=0$ at $y=\pm b$, $G_x$ and $G_y$ are constants.
After a lot of work and help from this community I managed to get an analytical solution. Now I want to confirm this solution using
NDSolve so I typed the following code:
(*Main equation*)
eqn[x_, y_] = D[Gx (D[\[Phi][x, y], x] - y), x] + D[Gy (D[\[Phi][x, y], y] + x), y];
(*BCs*)
bcx[x_, y_] = D[\[Phi][x, y], x] - y;
bcy[x_, y_] = D[\[Phi][x, y], y] + x;
bcs = {bcx[-a, y] == 0, bcx[a, y] == 0, bcy[x, -b] == 0, bcy[x, b] == 0};
(*Values for numerical solution*)
Gy = 41018756.0;
Gx = 72463203.0;
a = 0.0025;
b = 0.0025;
NDSolve[{eqn[x, y] == 0, bcs}, \[Phi], {x, -0.0025, 0.0025}, {y, -0.0025, 0.0025}]
but I get the message
NDSolve::fembdnl: The dependent variable in -y+(\[Phi]^(1,0))[-0.0025,y]==0 in the boundary condition DirichletCondition[-y+(\[Phi]^(1,0))[-0.0025,y]==0,x==-0.0025] needs to be linear.
I thought that all equations can have a numerical solution even when they are not guaranteed an analytical one, so I'm sure I'm making a mistake, but I can't see where.
Side note: here is the analytical solution $$ \phi (x,y)=x y-\sum_{n=0}^{\infty}\frac{32 \sqrt{G_y} (-1)^n \sin \left(\frac{1}{2} \pi (2 n+1) x\right) \text{sech}\left(\frac{\pi b \sqrt{G_x} (2 n+1)}{2 \sqrt{G_y}}\right) \sinh \left(\frac{\pi \sqrt{G_x} (2 n+1) y}{2 \sqrt{G_y}}\right)}{\pi ^3 \sqrt{G_x} (2 n+1)^3}. $$
Extracted from
'At the frontier of Particle Physics, handbook of QCD, volume 2',
'...
in the physical Bjorken $x$-space formulation, an equivalent definition of PDFs can be given in terms of matrix elements of bi-local operators on the lightcone. The distribution of quark 'a' in a parent 'X' (either hadron or another parton) is defined as $$f^a_X (\zeta, \mu) = \frac{1}{2} \int \frac{\text{d}y^-}{2\pi} e^{-i \zeta p^+ y^-} \langle X | \bar \psi_a(0, y^-, \mathbf 0) \gamma^+ U \psi_a(0) | X \rangle ,'$$ where $$U = \mathcal P \exp \left( -ig \int_0^{y^-} \text{d}z^- A_a^+(0,z^-, \mathbf 0) t_a \right)$$ is the Wilson line.
My questions are:
1) Where does this definition come from? I'd particularly like to understand in detail the content of the rhs (i.e. the arguments of the spinors, why an integral over $y^-$, etc.)
2) The review also mentions that in the physical gauge $A^+=0$, $U$ becomes the identity operator in which case $f^a_X$ is manifestly the matrix element of the number operator for finding quark 'a' in X with plus momentum fraction $p_a^+ = \zeta p_X^+, p_a^T=0$. Why is $A^+=0$ the physical gauge?
When performing $N$ independent measurements following a Gaussian distribution, it is sometimes said that one should report the mean and the standard deviation of the mean as the value and its uncertainty. Now, the instrument has a nonzero precision, and the standard deviation of the mean can be made arbitrarily close to 0 provided that the number of measurements can be made arbitrarily large. I had been taught to report the precision of the instrument instead of the standard deviation of the mean when the latter becomes smaller than the former, on the basis that "one cannot report the period of a pendulum with an error smaller than the precision of the chronometer used". I feel this rule is arbitrary and I can't find information about it. I also don't see reports where the uncertainty is smaller than the precision of the apparatus. My question is,
what do scientists report as the uncertainty when the standard deviation of the mean is smaller than the precision of the instrument? I see no problem reporting a number arbitrarily close to 0 as long as it's clear that I'm reporting the standard deviation of the mean. I'd also have no problem reporting another statistic, e.g. the standard deviation, to indicate how spread out the data is, but then I'm not reporting the uncertainty of the mean value anymore.
Ideally, you should report both errors. Example: your measurement device has a systematic error of 0.2 and your mean value is 1.5 with a standard uncertainty of 0.15; then you quote along the lines of:
$$x = 1.5 \pm 0.15\,(\text{statistical}) \pm 0.2\,(\text{systematic})$$
This allows people reading your work the freedom to use advanced statistical methods to treat the two errors differently. This is especially important if people are trying to combine your measurements with somebody else's, obtained with a different apparatus and, perhaps, a different systematic error. @Tajimura's formula is acceptable for crude reporting, but it makes it hard or impossible to sensibly combine your results with others. When I say crude, this is a bit mean actually: if you have to report only one series of results and the measurement error of your apparatus is not correlated with the random errors entering the statistical error above, then his formula is perfect. But one day your work might become relevant to other physicists, and they will hate you if you did not separate systematic and statistical errors!
Back in my university we used to report so-called "full uncertainty" which is a combination of standard deviation and device precision:
$\Delta x = \sqrt{\sigma ^2 + \delta ^2}$
where $\sigma$ is the standard deviation and $\delta$ is the measurement device precision.
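Using the numbers from the example above (statistical error $0.15$, systematic $0.2$), the quadrature combination is easy to compute (a quick sketch):

```python
import math

def full_uncertainty(sigma, delta):
    # quadrature combination of statistical and systematic errors
    return math.sqrt(sigma**2 + delta**2)

print(full_uncertainty(0.15, 0.2))  # ~0.25
```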
In many areas of application, one needs to solve a nonlinear system of equations $$ F(x) = 0. $$ Sometimes, the formulation $$ \|F(x)\|^2 \to\min $$ is used. Clearly, every solution $\hat{x}$ of $F(x)=0$ is also a solution of the second problem; the converse is also true (provided a solution of $F(x)=0$ exists).
The question is whether one can tell a priori which formulation is better suited for a given problem. Have people worked on this before?
One example
Consider the function $$ F(x, y) = \begin{pmatrix} x^3 - 3x y^2 - 1\\ 3 x^2 y - y^3 \end{pmatrix}. $$ It has the three roots $x_1=(1,0)$ (green in the figure below), $x_2=(-0.5,\sqrt{3}/2)$ (blue), $x_3=(-0.5,-\sqrt{3}/2)$ (red). When applying Newton's method to $F$, the starting point will determine to which of the three solutions we converge.
The darker the color, the more Newton iterations were required. The typical Newton fractals appear.
When finding critical points $\nabla (\|F(x)\|^2) = 0$, again with Newton's method, the picture is a little different.
Note that the point $(0,0)$ is a critical point of $\|F(x)\|^2$, but no solution of $F(x)=0$.
This highlights one possible problem with the $\min$-formulation.
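The two formulations can be contrasted numerically for this $F$ (my own sketch): Newton's method on $F$ converges to a root, while $(0,0)$ is a stationary point of $\|F\|^2$ even though $F(0,0)\neq 0$.

```python
import numpy as np

def F(p):
    x, y = p
    return np.array([x**3 - 3*x*y**2 - 1, 3*x**2*y - y**3])

def J(p):
    # Jacobian of F
    x, y = p
    return np.array([[3*x**2 - 3*y**2, -6*x*y],
                     [6*x*y, 3*x**2 - 3*y**2]])

# Newton's method on F(x) = 0 from a generic starting point
p = np.array([1.0, 0.1])
for _ in range(20):
    p = p - np.linalg.solve(J(p), F(p))
print(p)  # converges to the root (1, 0)

# The gradient of ||F||^2 is 2 J^T F; it vanishes at (0,0) although F(0,0) != 0
origin = np.array([0.0, 0.0])
grad = 2 * J(origin).T @ F(origin)
print(grad, F(origin))  # gradient is zero, yet F(0,0) = (-1, 0)
```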
Romanenko Ye. Yu.
Ukr. Mat. Zh. - 2007. - 59, № 2. - pp. 217–230
We propose an approach to the analysis of turbulent oscillations described by nonlinear boundary-value problems for partial differential equations. This approach is based on passing to a dynamical system of shifts along solutions and uses the notion of ideal turbulence (a mathematical phenomenon in which an attractor of an infinite-dimensional dynamical system is contained not in the phase space of the system but in a wider functional space and there are fractal or random functions among the attractor “points”). A scenario for ideal turbulence in systems with regular dynamics on an attractor is described; in this case, the space-time chaotization of a system (in particular, intermixing, self-stochasticity, and the cascade process of formation of structures) is due to the very complicated internal organization of attractor “points” (elements of a certain wider functional space). Such a scenario is realized in some idealized models of distributed systems of electrodynamics, acoustics, and radiophysics.
Ukr. Mat. Zh. - 2005. - 57, № 11. - pp. 1534–1547
Let $\{ I, f^{\mathbb{Z}^{+}} \}$ be a dynamical system induced by the continuous map $f$ of a closed bounded interval $I$ into itself. In order to describe the dynamics of neighborhoods of points unstable under $f$, we suggest a notion of the $\varepsilon\omega$-set $\omega_{f, \varepsilon}(x)$ of a point $x$ as the $\omega$-limit set of the $\varepsilon$-neighborhood of $x$. We investigate the association between the $\varepsilon\omega$-set and the domain of influence of a point. We also show that the domain of influence of an unstable point is always a cycle of intervals. The results obtained can be directly applied in the theory of continuous-time difference equations and similar equations.
Ukr. Mat. Zh. - 2000. - 52, № 12. - pp. 1615-1629
We investigate the asymptotic behavior of solutions of the simplest nonlinear $q$-difference equations, of the form $x(qt+1) = f(x(t))$, $q > 1$, $t \in \mathbb{R}^+$. The study is based on a comparison of these equations with the difference equations $x(t+1) = f(x(t))$, $t \in \mathbb{R}^+$. It is shown that, for "not very large" $q > 1$, the solutions of the $q$-difference equation inherit the asymptotic properties of the solutions of the corresponding difference equation; in particular, we obtain an upper bound for the values of the parameter $q$ for which smooth bounded solutions that possess the property $\max_{t \in [0,T]} |x'(t)| \to \infty$ as $T \to \infty$ and tend to discontinuous upper-semicontinuous functions in the Hausdorff metric for graphs are typical of the $q$-difference equation.
Ukr. Mat. Zh. - 1993. - 45, № 10. - pp. 1398–1410
The article presents three scenarios of the evolution of spatial-temporal chaos and specifies the corresponding types of chaotic solutions to a certain nonlinear boundary-value problem for PDE. Analytic assertions are illustrated by numerical analysis and computer graphics.
Representation of the local general solution of a certain class of differential-functional equations
Ukr. Mat. Zh. - 1990. - 42, № 2. - pp. 206–210
Ukr. Mat. Zh. - 1989. - 41, № 11. - pp. 1526–1532
Ukr. Mat. Zh. - 1987. - 39, № 1. - pp. 123-129
Representation of the solutions of quasilinear functional differential equations of neutral type in case of resonance
Ukr. Mat. Zh. - 1977. - 29, № 2. - pp. 280–283
Ukr. Mat. Zh. - 1974. - 26, № 6. - pp. 749–761
Use clifford algebra. You have two vectors $a$ and $b$ so that you want to rotate $a$ to $b$. Compute the bivector that the vectors reside in, $B = (a \wedge b)$. Normalize this bivector: $\hat B = B/\sqrt{|B^2|}$. Note that $\hat B^2 = -1$. Compute the rotation angle $\theta$ by $a \wedge b = |a||b| \hat B \sin \theta$.
There is a rotor $R = \exp(-\hat B \theta/2) = \cos \frac{\theta}{2} - \hat B \sin \frac{\theta}{2}$, and the full rotation can be computed by
$$\underline R(c) = R c R^{-1}$$
for any vector $c$. Compute the components by plugging in basis vectors.
Edit: I will work an example that, hopefully, convinces you of the usefulness of this method.
Let $a = e_1$ and $b = e_3 + e_4$ be two vectors. In clifford algebra, we have a wedge product $(\wedge)$ that is anticommutative (like the cross product) but whose result is
not a vector. The result is instead called a bivector, and this object is suitable for describing planes in an $N$-dimensional space.
The wedge product of $a$ and $b$ is $B = a \wedge b$ and is written out in components as
$$B = a \wedge b = e_1 \wedge (e_3 + e_4) = e_1 \wedge e_3 + e_1 \wedge e_4$$
That's all there is to computing the wedge product. I will write this for brevity as $B = e_1 e_3 + e_1 e_4$ however. This is legal using the
geometric product, defined as
$$ab = a \cdot b+ a \wedge b$$
The geometric product is useful because it contains all the information about whether a vector is parallel to another or perpendicular (or how much it's parallel and perpendicular) both in the same product. On a practical level, computations with the geometric product in a basis look like this. Let $i, j \in \{1, 2, 3, 4\}$, and we have
$$e_i e_j = \begin{cases} 1, & i = j \\ -e_j e_i, & i \neq j\end{cases}$$
Along with associativity, distributivity over addition, all the usual convenient stuff.
It is through the geometric product that we can compute the magnitude of $B$:
$$B^2 = e_1 (e_3 + e_4) e_1 (e_3 + e_4) = -e_1 e_1 (e_3 + e_4) (e_3 + e_4) = -2$$
That this squares to a negative number is actually quite important. Taking an exponential will result in the usual trig functions coming out of power series, and that's critically important. I dare say it is
why we use trig functions in Euclidean space. Using bivectors, you get stuff that would ordinarily need an imaginary unit, but in an entirely real space!
So our normalized bivector $\hat B = e_1 (e_3 + e_4)/\sqrt{2}$, and that's fine. Here, I picked two vectors that are already orthogonal, so the angle between them must be $\pi/2$. If they weren't already orthogonal, then you could do $a \cdot b = |a| |b| \cos\theta$ as per usual.
All we need to do now to calculate the rotation is to take the exponential of the bivector.
$$R = \exp(-\hat B \pi/4) = \cos \frac{\pi}{4} - \hat B \sin \frac{\pi}{4} = \frac{1}{\sqrt{2}} - \frac{e_1 (e_3 + e_4)}{2}$$
The half-angle use of $\pi/4$ instead of $\pi/2$ is important for reasons I have no time to get into, but if you're familiar with quaternions, it should be no surprise.
The final rotation comes out to
$$\underline R(c) = R c R^{-1} = \left( \frac{1}{\sqrt{2}} - \frac{e_1 e_3 + e_1 e_4}{2} \right) c \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$
for any vector $c$. For the sake of demonstration, I will choose $c = e_1$ to show how the computation works. Again, these products are geometric, using the rules I outlined above. We start by just plugging that in:
$$\underline R({\color{green}{e_1}}) =\left( \frac{1}{\sqrt{2}} - \frac{e_1 e_3 + e_1 e_4}{2} \right) {\color{green}{e_1}} \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$
Through associativity, move $\color{green}{e_1}$ into the brackets on the left.
$$\underline R(\color{green}{e_1}) =\left( \frac{e_1}{\sqrt{2}} - \frac{e_1 e_3 \color{green}{e_1} + e_1 e_4 \color{green}{e_1}}{2} \right) \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$
$e_1 e_3 e_1 = -e_1 e_1 e_3 = -e_3$ by associativity and anticommutativity of orthogonal vectors. The same logic applies to $e_1 e_4 e_1$ to get
$$\underline R(e_1) =\left( \frac{e_1}{\sqrt{2}} + \frac{e_3 + e_4 }{2} \right) \left( \frac{1}{\sqrt{2}} + \frac{e_1 e_3 + e_1 e_4}{2} \right)$$
Now we just have to distribute and multiply.
$$\underline R(e_1) =\frac{e_1}{2} + \frac{e_3 + e_4 }{2\sqrt{2}} + \frac{{\color{red} {e_1 e_1}} e_3 + {\color{red} {e_1 e_1}} e_4}{2 \sqrt{2}} + \frac{e_3 e_1 e_3 + {\color{blue} {e_3 e_1 e_4 + e_4 e_1 e_3}} + e_4 e_1 e_4}{4}$$
Again, ${\color{red} {e_1 e_1}} = 1$, so that simplifies the third term. Note that ${\color{blue} {e_3 e_1 e_4 = - e_4 e_1 e_3}}$ (this takes 3 swaps, so it overall picks up a minus sign), so those terms cancel, and we get
$$\underline R(e_1) =\frac{e_1}{2} + \frac{e_3 + e_4 }{\sqrt{2}} + \frac{- 2e_1}{4} = \frac{e_3 + e_4}{\sqrt{2}}$$
As desired.
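The worked example can be checked mechanically. Below is a minimal sketch of the geometric product on basis blades (my own representation, not from any library: a multivector is a dict mapping sorted index tuples to coefficients, and `blade_mul`/`gp` are hypothetical helper names):

```python
def blade_mul(a, b):
    # Multiply basis blades a, b (tuples of indices): sort the concatenation
    # counting swaps (e_i e_j = -e_j e_i for i != j), then cancel adjacent
    # equal pairs (e_i e_i = 1).
    s, sign = list(a) + list(b), 1
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            if s[i] > s[i + 1]:
                s[i], s[i + 1] = s[i + 1], s[i]
                sign, changed = -sign, True
    out, i = [], 0
    while i < len(s):
        if i + 1 < len(s) and s[i] == s[i + 1]:
            i += 2                 # e_i e_i = 1
        else:
            out.append(s[i])
            i += 1
    return sign, tuple(out)

def gp(x, y):
    # geometric product of multivectors (dict: blade tuple -> coefficient)
    r = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            sgn, blade = blade_mul(ba, bb)
            r[blade] = r.get(blade, 0.0) + sgn * ca * cb
    return {k: v for k, v in r.items() if abs(v) > 1e-12}

s = 2 ** -0.5
R    = {(): s, (1, 3): -0.5, (1, 4): -0.5}  # R = 1/sqrt(2) - (e1 e3 + e1 e4)/2
Rinv = {(): s, (1, 3):  0.5, (1, 4):  0.5}
c    = {(1,): 1.0}                           # c = e1
print(gp(gp(R, c), Rinv))  # {(3,): 0.707..., (4,): 0.707...}, i.e. (e3+e4)/sqrt(2)
```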
Clifford algebra may be unfamiliar, but it's a very powerful language for doing geometric computations. There's already a great module for doing computations in python (using sympy), where it's referred to as
geometric algebra for a good reason. GA, as it's called, lends itself to an object-oriented approach to geometry. All you need is the ability to program the products of the algebra, and you're off and running.
Edit edit: The form of the final rotation can be simplified somewhat.
$$\underline R(c) = c \cos \theta - \hat B (\hat B\wedge c) (1-\cos \theta) + (c \cdot \hat B) \sin \theta$$
where $c \cdot \hat B = (c \hat B - \hat B c)/2$ is the vector part of $c\hat B$, and $\hat B \wedge c = (\hat B c + c \hat B)/2$ is the trivector part of $\hat B c$. This is the clifford algebra analogue to the Rodrigues formula, but using these particular products, it is valid in all dimensions.
Prove: If $f,g:S^{n-1} \to X$ are homotopic maps, then $X\sqcup_fD^n$ and $X\sqcup_gD^n$ are homotopy equivalent.
I think it can be proved by showing they are both deformation retracts of $X\sqcup_H(D^n\times I)$ where $H$ is the homotopy between $f$ and $g$.
However, I have difficulty proving that the deformation retracts are continuous maps. In fact, I have difficulty representing a map on quotient spaces like $X\sqcup_fD^n$. I think a map from $X\sqcup_fD^n$ to $W$ can be represented by two maps $m_1: X\to W$ and $m_2: D^n\to W$, where for $x\in S^{n-1}$, $m_1\circ f(x)=m_2\circ i(x)$.
Then I construct the deformation retract this way: $m_1: X\to X$ with $m_1(x)=H(s,0)$ whenever $x=H(s,t)$ for some $s\in S^{n-1}$, and $m_1(x)=x$ otherwise.
$m_2: D^n\times I\to D^n\times \{0\}$ with $m_2(d,t)=(d,0)$.
It is easy to verify that $m_1$ and $m_2$ define a map from $X\sqcup_H(D^n\times I)$ to $X\sqcup_fD^n$. As long as this is a continuous map, we obviously obtain a deformation retract. But it seems such a map is not continuous?
I have this code:
S = (Sqrt[2]/2)*{{1 + Conjugate[δ], 0}, {0, 1 - Conjugate[δ]}}
(** Suppose a+b=1 and δ=((a-b)/(a+b))\[Conjugate] **)
k = (1/Sqrt[2])*{{S[[1, 1]] + S[[2, 2]]}, {S[[1, 1]] - S[[2, 2]]}, {2 S[[1, 2]]}} // Simplify
Subscript[T, 0] = Dot[k, ConjugateTranspose[k]]
Subscript[T, 0] // MatrixForm
Subscript[T, 0] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \delta \delta ^* & 0 \\ 0 & 0 & 0 \\ \end{array} \right)$$
As you see, at the end the product of $\delta$ and $\delta^*$ is not printed as $|\delta|^2$ but as $\delta\delta^*$.
Someone told me in one of my questions that this is because:
It seems that you did not instruct Mma that δ∗ is a conjugated value of δ. Using simply a conjugate symbol is not enough. You should use Conjugate[δ] instead and then apply ComplexExpand
so far I have tried several ways like
Using the UpSetDelayed operator at the beginning of the code as:
δ\[Conjugate] ^:= Conjugate[δ]
or using:
ComplexExpand[Subscript[T,0], δ, TargetFunctions -> {Abs, Conjugate}]
But I couldn't change anything!
Following the first answer posted to the question, I wrote:
FullSimplify[Subscript[T, 0]] // TraditionalForm
$$\left(\begin{array}{ccc} 1 & \delta & 0 \\ \delta ^* & \left| \delta \right| ^2 & 0 \\ 0 & 0 & 0 \\\end{array}\right)$$
But when I continue the code and apply the same trick to another matrix, the trick doesn't work!
R[ψ_] := {{1, 0, 0}, {0, Cos[2 ψ], Sin[2 ψ]}, {0, -Sin[2 ψ], Cos[2 ψ]}}
T[ψ_] := Dot[R[ψ], Subscript[T, 0], Transpose[R[ψ]]]
FullSimplify[T[ψ]] // TraditionalForm
$$\left( \begin{array}{ccc} 1 & \delta (\cos (2 \psi )) & -\delta (\sin (2 \psi )) \\ \delta ^* (\cos (2 \psi )) & \delta \delta ^* \left(\cos ^2 (2 \psi )\right) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) \\ -\delta ^* (\sin (2 \psi )) & -\frac{1}{2} \delta \delta ^* (\sin (4 \psi )) & \delta \delta ^* \left(\sin ^2 (2 \psi )\right) \\ \end{array} \right)$$
You need to know that the natural map$$Ext^{1}(\Omega_{X},\mathcal{O}_{X})\rightarrow H^{0}(X,\mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X}))$$is surjective. For instance, this is true when the deformations of $X$ are unobstructed, that is, when $H^{2}(X,T_X)=0$.
The vector space of locally trivial first order infinitesimal deformations of $X$ is $H^{1}(X,T_{X})$, where $T_{X} = \mathcal{H}om(\Omega_{X},\mathcal{O}_{X})$, while the vector space of first order infinitesimal deformations of $X$ is $Ext^{1}(\Omega_{X},\mathcal{O}_{X})$. In general, these two spaces are linked by the following exact sequence$$0\to H^{1}(X,T_{X})\rightarrow Ext^{1}(\Omega_{X},\mathcal{O}_{X})\rightarrow H^{0}(X,\mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X}))\rightarrow H^{2}(X,T_{X})$$The sheaf $\mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X})$ is supported on the singular locus of $X$.
Now, let $S$ be an étale neighborhood of $x_1$. Since your singularity admits non-trivial first order infinitesimal deformations we have $Ext^{1}(\Omega_S,\mathcal{O}_S)\neq 0$. Since $\mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X})$ is supported on $x_1$ and $x_2$ we get that $H^{0}(X, \mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X}))\neq 0$.
Now, assume that the map $$Ext^{1}(\Omega_{X},\mathcal{O}_{X})\rightarrow H^{0}(X,\mathcal{E}xt^{1}(\Omega_{X},\mathcal{O}_{X}))$$in the above exact sequence is surjective. This is the case when the deformations of $X$ are unobstructed that is $H^{2}(X,T_X)=0$. Clearly, this yields that $Ext^{1}(\Omega_{X},\mathcal{O}_{X})\neq 0$ and you are done.
As an example you can consider the weighted projective plane $\mathbb{P}(1,2,3)$. The singularity of type $\frac{1}{3}(1,2)$ admits non-trivial first order infinitesimal deformations. To see this, just observe that it is isomorphic to $\mathbb{A}^{2}/\mu_{3}$ where the action is given by $$\begin{array}{ccc}\mu_{3}\times\mathbb{A}^{2} & \longrightarrow & \mathbb{A}^{2}\\(\epsilon,x_{1},x_{2}) & \longmapsto & (\epsilon x_{1},\epsilon^{2}x_{2})\end{array}$$The invariant polynomials with respect to this action are clearly $x_{1}^{3},x_{2}^{3},x_{1}x_{2}$. Therefore, étale locally, in a neighborhood of the singularity the surface is isomorphic to an étale neighborhood of the vertex of the cone$$S = \{f(x,y,z) = z^{3}-xy = 0\}\subset\mathbb{A}^{3}$$Now, we have $Ext^{1}(\Omega_{S},\mathcal{O}_{S})\cong K[x,y,z]/(f,\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}) = K[x,y,z]/(z^{3}-xy,-y,-x,3z^{2})\cong K[z]/(z^{2})$.
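The last computation can be double-checked with a Gröbner basis (a sketch, working over $\mathbb{Q}$ rather than an arbitrary field $K$): the Jacobian ideal of $f=z^3-xy$ reduces to $(x, y, z^2)$, so the quotient has basis $\{1, z\}$, matching $K[z]/(z^2)$.

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")
f = z**3 - x*y
# Jacobian ideal: (f, df/dx, df/dy, df/dz)
G = groebner([f, -y, -x, 3*z**2], x, y, z, order="lex")
print(G.exprs)  # [x, y, z**2]  ->  quotient ring K[z]/(z^2), dimension 2
```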
We have the following exact sequence $$0\to \mathcal{O}_{\mathbb{P}(1,2,3)}\rightarrow\mathcal{O}_{\mathbb{P}(1,2,3)}(1)\oplus\mathcal{O}_{\mathbb{P}(1,2,3)}(2)\oplus\mathcal{O}_{\mathbb{P}(1,2,3)}(3)\rightarrow T_{\mathbb{P}(1,2,3)}\to 0$$Taking cohomology we get $h^{0}(T_{\mathbb{P}(1,2,3)}) = 5$ and $h^{1}(T_{\mathbb{P}(1,2,3)}) = h^{2}(T_{\mathbb{P}(1,2,3)}) = 0$. We conclude that $H^{i}(\mathbb{P}(1,2,3),T_{\mathbb{P}(1,2,3)}) = 0$ for $i\geq 1$.
Then, $Ext^{1}(\Omega_{\mathbb{P}(1,2,3)},\mathcal{O}_{\mathbb{P}(1,2,3)})\cong H^{0}(\mathbb{P}(1,2,3),\mathcal{E}xt^{1}(\Omega_{\mathbb{P}(1,2,3)},\mathcal{O}_{\mathbb{P}(1,2,3)})) \neq 0$, that is $\mathbb{P}(1,2,3)$ is not rigid.
For an example of infinitesimally rigid surfaces whose singularities admit non-trivial deformations, you may consider Beauville surfaces. Roughly speaking, these are quotients $(C_1\times C_2)/G$ where $C_1, C_2$ are smooth curves of genus at least two, and $G$ is a finite group. Some of these surfaces have non-rigid singularities; on the other hand, Beauville surfaces are infinitesimally rigid. For this you may take a look at:
Ingrid Bauer, Shelly Garion, Alina Vdovina, Beauville Surfaces and Groups, Springer, 2015.
4:28 AM
@MartinSleziak Here I am! Thank you for opening this chat room and all your comments on my post, Martin. They are really good feedback to this project.
@MartinSleziak Yeah, using a chat room to exchange ideas and feedback makes a lot of sense compared to leaving comments in my post. BTW, has anyone found a
\oint\frac{1}{1-z^2}dz expression in old posts? Send it to me and I will investigate why this issue occurs.
@MartinSleziak It is OK, don't feel anything bad. As long as there is a place that comes to people's mind if they want to report some issue on Approach0, I am willing to come to that place and discuss. I am really interested in pushing Approach0 forward.
4:57 AM
Hi @WeiZhong thanks for joining the room. I will write a bit more here when I have more time. For now two minor things.
I just want to make sure that you know that the answer on meta is community wiki. Which means that various users are invited to edit it, you can see from revision history who added what to the question.
You can see in revision history that this bullet point was added by Workaholic: "I searched for \oint $\oint$, but I only got results related to \int $\int$. I tried \oint \frac{dz}{1-z^2} $\oint \frac{dz}{1-z^2}$, which is an integral that appears quite often, but it did not yield any correct results."
So if you want to make sure that this user is notified about your comments, you can simply add @Workaholic. Any of the editors can be pinged.
And I noticed also this about one of the quizzes (I did not check whether some of the other quizzes have a similar problem.)
I suppose that the quizzes are supposed to be chosen in such a way that Approach0 indeed helps to find the question. I.e., each quiz was created with some specific question in mind, which should be among the search results. Is that correct?
I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$.
However when I try the query from this quiz, I get completely different results.
I vaguely recall that I tried some quizzes, including this one, and they worked. (By which I mean that the answer to the question from the quiz could be found among the search results.) So is this perhaps due to some changes that were made since then? Or is that simply because when I tried the quiz last time, fewer questions were indexed? (And now that question is still somewhere among the results, but further down.)
I was wondering whether to add the word "bug" to my last message, but it is probably not a bug. It is simply that the search results are not exactly as I would expect.
My impression from the search results is that not only x, y, z are replaced by various variables, but also 5,6,7 are replaced by various numbers.
5:40 AM
I think that this implicitly contains the question of whether, when searching for $x^5+y^6=z^7$, questions containing $x^2+y^2=z^2$ or $a^3+b^3=c^3$ should also be matches.
For the sake of completeness I will copy here the part of the quiz list which is relevant to the quiz I mentioned above:
"Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ",
Hmm, I should have posted this as a single multiline message. But now I see that it is already too late to delete the above messages. Sorry for the duplication:
{ /* 4 */
"Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ",
"hints": [
"This should be easy, the only thing I need to do is do some calculation...",
"I can use my computer to enumerate...",
"... (10 minutes after) ...",
"OK, I give up. Why borther list them <b>all</b>?",
"Is that possible to <a href=\"#\">search it</a> on Internet?"
],
"search": "all positive integers, $i^5 + j^6 = k^7$"
},
8 hours later…
1:19 PM
@MartinSleziak OK, I get it. So next time I will definitely reply to whoever actually made the revision.
@MartinSleziak Yes, remember the first time we talked in a chat room? In that version of Approach0, when only a very limited number of posts had been indexed, you could actually get relevant posts on $i^5+j^6=k^7$. However, once I enlarged the index (now almost the entire MSE), that quiz (in fact, some quizzes I selected earlier, like [this one]()) no longer finds relevant posts.
I have noticed that the quiz does not work, but I have been really lazy and have not investigated it. Instead of changing that quiz, I agree we should investigate why that relevant result has gone. As far as I can guess, there can be two reasons:
1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes crawler fails to fetch random posts and have to skip them)
2) there is a bug in Approach0 that I am not aware of
In order to investigate this problem, I am trying to find the original post that you and I have seen (as you vaguely remember) which is relevant to the $i^5+j^6=k^7$ quiz. If you find that post, please send me the URL.
@MartinSleziak It can be a bug, but I need to know if my index does contain a relevant post, so first let us find the post we think is relevant. Then I will have a look at whether or not it is in my index; perhaps the crawler just missed it. If it is currently in the index, then I should spend some time finding out the reason.
@MartinSleziak As for your last question, I need to explain it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you $x^2+y^2=z^2$ or $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical.
After filtering out these structurally relevant expressions, Approach0 will evaluate their symbolic relevance with regard to the query expression. Suppose $x^5+y^6=z^7$ gives you $x^2+y^2=z^2$, $a^3+b^3=c^3$ and also $x^5+y^6=z^7$; the expression $x^5+y^6=z^7$ will be ranked higher than $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because $x^5+y^6=z^7$ has a higher symbolic score (in fact, since it has a symbol set identical to the query, it has the highest possible symbolic score).
I am sorry, I should have used "and" instead of "or". Let me repeat the message before the previous one below:
As for your last question, I need to explain it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you both $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical.
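The two-stage ranking just described can be sketched in a few lines. This is only a toy illustration of the idea (not Approach0's actual code): expressions are kept as operator trees, candidates that are structurally identical to the query pass the first stage, and the survivors are ordered by how many symbols they share with the query.

```python
def structure(expr):
    # Operator-tree shape: replace every leaf (variable or number)
    # with a wildcard, so x^5+y^6=z^7 and a^3+b^3=c^3 look identical.
    op, args = expr
    return (op, tuple(structure(a) if isinstance(a, tuple) else '*'
                      for a in args))

def symbols(expr):
    # The set of leaf symbols occurring in an expression.
    op, args = expr
    out = set()
    for a in args:
        out |= symbols(a) if isinstance(a, tuple) else {a}
    return out

def rank(query, candidates):
    # Stage 1: keep only structurally identical candidates.
    hits = [c for c in candidates if structure(c) == structure(query)]
    # Stage 2: order them by symbol overlap with the query.
    return sorted(hits, key=lambda c: -len(symbols(c) & symbols(query)))
```

With this encoding, a candidate identical to the query outranks a structurally identical one with disjoint symbols, and structurally different expressions are filtered out entirely.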
Now the next thing for me to do is to investigate some "missing results" suggested by you.
1. Try to find a `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it may have been indexed)
2:23 PM
Unfortunately, I failed to find any relevant old post in either case 1 or case 2 after a few tries (using MSE default search). So the only thing I can do now is an "integrated test" (see the new code I have just pushed to GitHub: github.com/approach0/search-engine/commit/…)
An "integrated test" means I make a minimal index with a few specified math expressions, search a specified query, and see if the results are as expected. For example, the test case tests/cases/math-rank/oint.txt specifies the query $\oint \frac{dz}{1-z^2}$, and the entire index has just two expressions: $\oint \frac{dz}{1-z^2}$ and $\oint \frac{dx}{1-x^2}$; the expected search result is that both expressions are HITs (i.e. they should appear in the search results).
10 hours ago, by Martin Sleziak
I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$.
2:39 PM
For anyone interested, I post the screenshot of integrated test results here: imgur.com/a/xYBD5
3:04 PM
For example like this: chat.stackexchange.com/transcript/message/32711761#32711761 You get the link by clicking on the little arrow next to the message and then clicking on "permalink".
I am mentioning this because (hypothetically) if Workaholic only sees your comment a few days later and then comes here to see the message you refer to, they might have problems finding it if there are plenty of newer messages.
However, this room does not have that much traffic, so very likely this is not going to be a problem in this specific case.
Another possible way to link to a specific set of messages is to go to the transcript and then choose a specific day, like this: chat.stackexchange.com/transcript/46148/2016/10/1
Or to bookmark a conversation. This can be done from the room menu on the right. This question on meta.SE even has some pictures.
This is also briefly mentioned in chat help: chat.stackexchange.com/faq#permalink
3:25 PM
@MartinSleziak Good to learn this. I just posted another comment with a permalink in that meta post for Workaholic to refer to.
I just checked the index on server, yes, that post is indeed indexed. (for my own reference, docID = 249331)
2 hours later…
5:13 PM
Update: I have fixed that quiz problem. See: approach0.xyz/search/…
That is not strictly a bug; it is because I put a restriction on the number of documents to be searched in one posting list (not to get too technical). I have pushed my new code to GitHub (see commit github.com/approach0/search-engine/commit/…); this change gets rid of that restriction and now that relevant post is shown as the 2nd search result.
2 hours later…
6:57 PM
I am curious if the following model was studied or has some obvious lower bounds:
We want to compute a polynomial $P(x_1,x_2, \dots , x_n)$. Suppose we have a graph $G$ on $n$ nodes that we are going to call the restriction graph. Now we will call a monomial $x_{i_1} \times x_{i_2} \times \dots \times x_{i_d}$ valid if no two of its variables form an edge in the restriction graph $G$:
$$x_{i_1} \times x_{i_2} \times \dots \times x_{i_d} \text{ is valid } \iff \forall a,b: \{x_{i_{a}},x_{i_b}\} \not\in E(G)$$
A polynomial is valid if all of its monomials are valid. We will call a circuit $G$-restricted if all of its gates compute valid polynomials. We will only be interested in $G$-restricted circuits for valid polynomials.
This model is a strict generalization of multilinear circuits: we can take $G$ to be a collection of loops. My questions are:
Was this model studied? Are there any other classes of circuits that can be represented this way? Are there any obvious lower bounds for $G$-restricted circuits for explicit polynomials (and explicit $G$)? |
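For concreteness, the validity condition on a single monomial can be sketched in a few lines. This is my own toy encoding (not from the question): a loop is stored as a singleton edge, and the two positions $a, b$ range over distinct occurrences in the monomial, so a loop at $i$ forbids the repeated factor $x_i^2$.

```python
def is_valid_monomial(variables, edges):
    # variables: list of variable indices with multiplicity, e.g. [1, 1, 4]
    # edges: set of frozensets; a loop at i is stored as frozenset({i})
    # Valid iff no two distinct positions of the monomial carry
    # variables joined by an edge of the restriction graph G.
    return all(frozenset((a, b)) not in edges
               for i, a in enumerate(variables)
               for b in variables[i + 1:])
```

With $G$ a collection of loops on every node, this recovers exactly the multilinearity restriction mentioned above.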
The following analytic continuation for $\zeta(s)$ towards $\Re(s)>-1$ was derived here:
$$\displaystyle \zeta(s) = \frac{1}{2\,(s-1)} \left(\sum _{n=1}^{\infty } {\frac {s-1-2\,n}{{n}^{s}}} + \sum _{n=0}^{\infty } {\frac {s+1+2\,n}{\left( n+1 \right) ^{s}}} \right)$$
I plugged this formula into this question about the zeros of $\zeta(s) \pm \zeta(1-s)$. After rearranging the terms a new function emerges:
$$\displaystyle Z(s) = \frac{1}{2\,(s-1)} \sum _{n=1}^{\infty } {\frac {s-1-2\,n}{{n}^{s}}} \pm \frac{1}{2\,(-s)} \sum _{n=0}^{\infty } {\frac {1-s+1+2\,n}{\left( n+1 \right) ^{1-s}}}$$
with $\zeta(s) \pm \zeta(1-s) = Z(s) \pm Z(1-s)$.
What surprised me is that $Z(s)$ seems to diverge for all values of $s$,
except when $\Re(s)=\frac12$, in which case either only its real part $(\pm = +)$ or only its imaginary part $(\pm = -)$ converges. These are then equal to $\Re(\zeta(\frac12 + s \, i))$ and $\Im(\zeta(\frac12 + s \, i))$ respectively, and correctly induce the non-trivial zeros as well as the zeros at $2^s \pi^{s-1} \sin(\frac{\pi s}{2}) \phantom. \Gamma(1-s) = \pm 1$. Question:
Can it be proven that either the real or the imaginary part of $Z(s)$ only converges for $\Re(s)=\frac12$?
Addition:
I think I can show that $\Re(Z(s))$ diverges for $\Re(s)<-1$ and $\Re(s)>2$, but I can't get much closer to $\frac12$. The solution could lie in the fact that only when $\Re(s)=\frac12$ is the series $Z(s)$ related to the real and imaginary parts of $\zeta(s)$; in all other cases these links simply don't exist.
Note that the finite series $Z(s,v)$ can be expressed in terms of zetas and Hurwitz zetas:
$Z(s,v):=\frac{\zeta \left( s \right) -\zeta_{H} \left(s,v+1 \right)}{2} +{ \frac {\zeta \left( s-1 \right) -\zeta_{H} \left(s-1,v+1 \right) }{1- s}}\pm \left( \frac{\zeta \left( 1-s \right) -\zeta_{H} \left(1-s,v+2 \right)}{2} -\frac {\zeta \left( -s \right) +\zeta_{H} \left(-s,v+2 \right) }{s} \right)$
For $\pm=+$ and $s=\frac12$ this becomes:
$Z(\frac12,v):= \zeta \left( \frac12 \right) -\frac{\zeta_{H} \left(\frac12,v+1 \right)+\zeta_{H} \left(\frac12,v+2 \right)}{2} - \frac{\zeta_{H} \left(-\frac12,v+1 \right) -\zeta_{H} \left(-\frac12,v+2 \right)}{\frac12}$
Since $Z(\frac12)=\zeta(\frac12)$, it follows that the other terms must converge to zero when $v \rightarrow \infty$. Under the assumption that $v$ is positive, it follows that (checked with Wolfram software, and it works out numerically):
$\displaystyle \lim_{v \to +\infty} \frac {-2\sqrt {v+1}}{4 \,v+3} \, \zeta_{H} \left(\frac12,v+1 \right)=1$ |
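The stated limit can also be checked numerically. The sketch below uses a crude Euler–Maclaurin approximation of the Hurwitz zeta written just for this purpose (my own illustration, not a library routine): a partial sum of $N$ terms plus the first tail correction terms.

```python
import math

def hurwitz_zeta(s, a, N=1000):
    # Crude Euler-Maclaurin approximation of zeta_H(s, a):
    # partial sum, then tail terms x^{1-s}/(s-1) + x^{-s}/2 + s x^{-s-1}/12.
    x = a + N
    return (sum((a + k) ** (-s) for k in range(N))
            + x ** (1 - s) / (s - 1) + x ** (-s) / 2
            + s * x ** (-s - 1) / 12)

def product(v):
    # The quantity whose limit is claimed to be 1.
    return -2 * math.sqrt(v + 1) / (4 * v + 3) * hurwitz_zeta(0.5, v + 1)
```

Since $\zeta_H(\frac12, v+1) \sim -2\sqrt{v+1}$ for large $v$, the product behaves like $4(v+1)/(4v+3) \to 1$, which the numerics confirm.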
I'm trying to apply the finite-volume method (FVM), with which I'm not so familiar, to a simple 1D PDE. I already asked this question on the math Stack Exchange, but was told that it could be a better fit here.
The equation I want to solve is, to simplify, $$\frac{\partial U}{\partial t} + A\left(x\right) \frac{\partial}{\partial x}\left(\frac{F(U)}{A(x)}\right) + \frac{\partial G(U)}{\partial x} = 0.$$
where $U$ is my unknown, $F$ and $G$ some flux functions computed from $U$ and $A$ a function of $x$, that is strictly positive and very smooth. Most of the time, I can actually assume $A$=constant, and therefore the equation becomes $$\frac{\partial U}{\partial t} + \frac{\partial(G(U)+F(U))}{\partial x} = 0,$$ which is straightforward to solve using FVM.
However, I'm not sure how to handle the case where $A$ is not constant. My idea was, when applying the integration step of the finite volume method, to write $$\int_{\text{cell}} A\left(x\right) \frac{\partial}{\partial x}\left(\frac{F(U)}{A(x)}\right) \mathrm{d}\Omega \approx \bar{A} \int_{\text{cell}} \frac{\partial}{\partial x}\left(\frac{F(U)}{A(x)}\right) \mathrm{d}\Omega$$ where $\bar{A}$ is the mean value of $A$ over the considered cell. Once fully discretized, this term would therefore end up looking like
$$\bar{A}_{i} \left( \frac{1}{\Delta x} \left( \frac{F_{i+1/2}}{A_{i+1/2}} - \frac{F_{i-1/2}}{A_{i-1/2}} \right) \right).$$
Does it make sense, or is it completely wrong? Is there a better way to proceed?
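For what it's worth, the proposed update is easy to write down on a uniform grid. The sketch below is my own rough illustration of it: the interface values $U_{i\pm1/2}$ and $A_{i\pm1/2}$ are taken as plain arithmetic averages, standing in for whatever numerical flux or Riemann solver would actually be used, and stability/upwinding questions are ignored.

```python
import numpy as np

def step(U, A, F, G, dx, dt):
    # One explicit Euler step of the proposed discretization
    # on a uniform 1D grid with cell-centred values U, A.
    Uf = 0.5 * (U[:-1] + U[1:])   # U at interfaces i+1/2 (simple average)
    Af = 0.5 * (A[:-1] + A[1:])   # A at interfaces i+1/2
    FoA = F(Uf) / Af              # F/A evaluated at the interfaces
    Gf = G(Uf)
    Abar = A[1:-1]                # cell mean of A (taken as centre value)
    Unew = U.copy()
    Unew[1:-1] -= dt / dx * (Abar * (FoA[1:] - FoA[:-1])
                             + (Gf[1:] - Gf[:-1]))
    return Unew
```

One sanity check: with $A$ constant and a uniform state, both flux differences vanish, so the scheme preserves constants, as it should.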
I tried changing the coordinate, but I'm not familiar enough with that, and of course I'd like to keep the $\dfrac{\partial G(U)}{\partial x}$ term as it is.
Thanks a lot for any input and advice on how to deal with this kind of problem.
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken into account
The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear up my confusion. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question on meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations.I think it's a safe asssumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer |
I managed to reduce a certain computational problem to the Gauss-Seidel solution of the following linear system: $$Ax=Ly,$$ where $A, L\in\mathbb{R}^{n\times n}$ are weighted Laplacian matrices (symmetric, positive semi-definite; negative off-diagonal entries, with rows (columns) summing in absolute value to the positive diagonal entries; the matrix eigenvalue $0$ corresponds to the $1_n$ eigenvector of the nullspace), and $x,y\in\mathbb{R}^{n\times 2}$ are two-column vectors, with $x$ unknown. The solution has the form $$x_i^{[k+1]} = \left.\left(b_i - \sum_{j=1}^{i-1}a_{ij}x_j^{[k+1]} - \sum_{j=i+1}^{n}a_{ij}x_j^{[k]}\right)\middle/a_{ii}\right.,$$ where $b_i$ is the $i^{th}$ entry of $Ly$. Note that, with Gauss-Seidel, the update of $x_i$ takes effect immediately, i.e., the calculation for the following $x_{i+1}$ is based on the new value of $x_i$ that has just been computed.
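For reference, a minimal numpy sketch of the sweep just described (here `b` plays the role of $Ly$); the key point is that each $x_i$ is overwritten in place, so later rows in the same sweep already see the updated value:

```python
import numpy as np

def gauss_seidel_sweep(A, b, x, sweeps=1):
    # In-place Gauss-Seidel: the new x_i takes effect immediately.
    x = np.array(x, dtype=float)
    for _ in range(sweeps):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal part of row i
            x[i] = (b[i] - s) / A[i, i]
    return x
```

The same code works unchanged when `b` and `x` have two columns, as in the question, since the row operations act column-wise.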
Now, suppose an iteration consists of a single update of all $x_i$ in some arbitrary order. In other words, each $x_i$ is considered only once (and is updated only once) in an iteration. My question is: could it be guaranteed that after a single iteration with initial $x^{[0]}=y$, the solution $x^{[1]}$ has all unique coordinates, i.e., there are no two rows of $x^{[1]}$ that are equal?
You could assume that the initial $x^{[0]}=y$ has non-unique coordinates. If the uniqueness cannot be resolved this way, I would appreciate a suggestion on the coordinate traversal order to increase the chance of achieving uniqueness (i.e., no two coordinates taking the same value).
After mathematically ensuring the performance of the jet, the next step should be to control the stall speed in order to avoid surprises in the take-off and landing phases.
A usual criterion for estimating smooth and trouble-free handling of the model on normal airfields is the well-established method of measuring the wing loading. This is applicable when two similar models are compared and should not be applied with a strongly different airfoil, a very different aspect ratio or a divergent proportion of wing and fuselage. Besides, the airfoil depth has an impact on stall speed. In order to consider all of these parameters at least to some extent, we provide you with the following formula.
Horizontal velocity of the jet:
\({v=\sqrt{2 \cdot m_{ges} \cdot g \over \rho \cdot c_a \cdot A}}\)
Here \(g\) equals the gravitational acceleration of \(9.81 m/s^2\) and \(\rho\) equals the air density of \(1.2 kg/m^3\). \(m_{ges}\) is the overall weight of the model in kg, \(A\) is the wing surface in \(m^2\) and \(c_a\) is the maximum lift coefficient, depending on the Reynolds number and the profile used, which is still flyable in the landing phase. The following table gives reference values for the maximum lift coefficient \(c_a\). (\(c_a\) should be derived from the L/D polar curve of the profile used and the corresponding Re number, if available.)
model type, airfoil: lift coefficient \(c_a\)
Airliner, thick, highly cambered airfoil: 1.2 – 1.3
moderate speed jet, airfoil e.g. HQ 2.0/10: 1.1 – 1.2
sporty jet, airfoil e.g. RG-15: 0.95 – 1.1
speed jet, airfoil e.g. RG-14, HQ 1.0/8: 0.8 – 0.95
These values, of course, also depend on the Re number and the airfoil depth. If the model has a small wing depth the smaller \(c_a\) value should be used.
e.g. Model Pampa, \(m_{ges}=2.4 kg\), \(c_a=0.95\), \(A=21.5 dm^2=0.215 m^2\)
\({v=\sqrt{2 \cdot 2.4 kg \cdot 9.81 m/s^2 \over 1.2 kg/m^3 \cdot 0.95 \cdot 0.215 m^2}}\)
\(v \approx 13.9 m/s\)
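The formula is easy to script. Recomputing the example with the wing area of 0.215 m² quoted for the Pampa:

```python
import math

def stall_speed(m_ges, c_a, A, g=9.81, rho=1.2):
    # v = sqrt(2 * m_ges * g / (rho * c_a * A)), as in the formula above
    return math.sqrt(2 * m_ges * g / (rho * c_a * A))

v = stall_speed(m_ges=2.4, c_a=0.95, A=0.215)  # Pampa example
```

which gives roughly 13.9 m/s.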
The stall speed resulting from these calculations is already quite realistic but still slightly too high as the lift effect of the fuselage has not been considered yet. At extremes (e.g. with the Starfighter, SU-27...) that approach would not work as the fuselage generates about 40% of the overall lift.
In that case it is necessary, but also more time-consuming, to calculate the lift of the fuselage or to make use of empirical values. Typical minimum velocities for airliners are about 9 – 11 m/s. Moderate speed models are a bit heavier, with values of about 12 – 13 m/s, and small speed jets operate in a range of 14 – 16 m/s. These estimated values are for informational purposes only and enable the estimation of a maximum weight for the corresponding jet. However, in some cases the value for the wing loading will be extraordinarily high while the model still flies smoothly.
Neither of you is wrong. Your final expression is just the same as $H(\omega)\cdot e^{-j\omega(N-1)/2}$, which is (an approximation of) the desired complex frequency response with an additional linear phase term that depends on the chosen filter length $N$. This means that you need to subtract that linear phase term from the desired phase response, because it will be added in the design process.
Let the desired complex frequency response be given by
$$D(\omega)=|D(\omega)|\cdot e^{j\phi(\omega)}\tag{1}$$
Now you have to compute a new complex response by subtracting the linear phase term $-\omega (N-1)/2$ from the phase of $(1)$:
$$\tilde{D}(\omega)=|D(\omega)|\cdot e^{j[\phi(\omega)+\omega(N-1)/2]}\tag{2}$$
You have to use the real and imaginary parts of $\tilde{D}(\omega)$ to design the filter.
What is meant by the option 'hilbert' is that you tell the design routine that the desired response (the imaginary part of $\tilde{D}(\omega)$) should actually be multiplied by $j$ and is an odd function of frequency, because otherwise it would be interpreted as a purely real and even function. This will result in an asymmetric impulse response. In principle this can be done, but I don't know if it can actually be done for arbitrary prescribed responses with the current versions of Matlab's filter design routines.
Note that such an approach is actually not really necessary for the least squares design of FIR filters, because solving a complex least squares problem (for non-linear phase designs) is just as simple as solving a real least squares problem (for linear phase designs). As far as I know it's just not implemented in Matlab's Signal Processing Toolbox. I've written a few Matlab/Octave functions that solve the complex least squares approximation problem for FIR filter design: lslevin.m for complex least squares design of real-valued filters, and cfirls.m for filters with complex coefficients.
For equi-ripple designs, the complex (non-linear phase) approximation problem is indeed much harder than the real (linear phase) problem. For the latter we have the Remez exchange algorithm, which was adapted to the linear phase FIR filter design problem by Parks and McClellan and which is implemented in Matlab (firpm.m). There are algorithms for solving the complex equi-ripple design problem, but they are much less efficient than the Parks-McClellan algorithm. One of them is implemented in Matlab (cfirpm.m).
Note that for the least squares approximation, solving two real-valued approximation problems for the real and imaginary parts gives the same solution as solving one complex least squares approximation problem. This is easily seen as follows. The complex approximation error is given by
$$E(\omega)=D(\omega)-H(\omega)\tag{3}$$
where $D(\omega)$ and $H(\omega)$ are the desired and actual complex frequency responses. The complex least squares solution minimizes the integral over $|E(\omega)|^2$, whereas the individual real-valued least squares solutions minimize the integrals over $E_R^2(\omega)$ and $E_I^2(\omega)$, where $E_R(\omega)$ and $E_I(\omega)$ are the real and imaginary parts of $E(\omega)$, respectively. Since
$$|E(\omega)|^2=E_R^2(\omega)+E_I^2(\omega)\tag{4}$$
minimizing the integrals over $E_R^2(\omega)$ and $E_I^2(\omega)$ is equivalent to minimizing the integral over $|E(\omega)|^2$.
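To make this concrete, here is a hypothetical numpy sketch (my own illustration, not the Matlab routines mentioned above): the complex least-squares FIR design is solved as a single real least-squares problem by stacking real and imaginary parts, which by $(4)$ is equivalent to minimizing the complex error directly.

```python
import numpy as np

# Sketch: least-squares FIR design for a complex desired response D(w):
# a bandpass with a reduced group delay of 10 samples (< (N-1)/2 = 15).
N = 31
w = np.linspace(0, np.pi, 256)
delay = 10.0
D = np.where((w > 0.3 * np.pi) & (w < 0.7 * np.pi),
             np.exp(-1j * delay * w), 0)
F = np.exp(-1j * np.outer(w, np.arange(N)))   # F[k, n] = e^{-j w_k n}
# Stack real and imaginary parts -> one real least-squares problem,
# equivalent to the complex one since |E|^2 = E_R^2 + E_I^2.
A = np.vstack([F.real, F.imag])
b = np.concatenate([D.real, D.imag])
h, *_ = np.linalg.lstsq(A, b, rcond=None)     # real impulse response
H = F @ h                                     # actual frequency response
```

The resulting filter is real-valued with a non-symmetric impulse response, and its magnitude stays close to 1 in the passband despite the reduced delay.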
For the complex Chebyshev approximation problem the situation is different. The quantity that is minimized is the maximum of the magnitude of the complex error over the bands of interest $\max_{\omega}|E(\omega)|$, but since
$$\max_{\omega}|E(\omega)|\neq \max_{\omega}|E_R(\omega)|+\max_{\omega}|E_I(\omega)|\tag{5}$$
independent Chebyshev approximation of the real and imaginary parts is not equivalent to complex Chebyshev approximation. If for a given complex approximation problem the maximum error of the optimal solution equals $\delta$, then in the worst case the errors of both real-valued approximations will also be equal to $\delta$. This means that in the worst case, the maximum complex error of the solution obtained from independent optimization of the real and imaginary parts becomes
$$\max_{\omega}|E(\omega)|=\max_{\omega}\sqrt{E_R^2(\omega)+E_I^2(\omega)}\le\sqrt{\max_{\omega}E_R^2(\omega)+\max_{\omega}E_I^2(\omega)}\le\sqrt{2}\delta\tag{6}$$
This is equivalent to a maximum of $3$ dB increase in error compared to the optimal solution.
I'll show an example design to illustrate a few points that I made. I designed a bandpass filter with a desired phase response that is linear but with a smaller group delay than an FIR filter with perfectly linear phase. The filter length is $N=61$, so the delay of a linear phase FIR filter would be $(N-1)/2=30$ samples. I chose the desired delay to be $20$ samples instead. I used a complex least squares and a complex Chebyshev criterion to approximate the given complex frequency response. For each criterion I designed the filter in two different ways. First, directly in the complex domain, and second by using the real and imaginary parts of the desired frequency response to design two linear phase filters, which, after combining them, should approximate the original complex frequency response. The figure below shows the design results. As expected, for the least squares design, both solutions are equivalent (up to numerical accuracy) and only one of them is shown in the plots (in blue). For the Chebyshev criterion both solutions are slightly different, as predicted. The solution from splitting the response into real and imaginary parts (in red) has a larger error by about $1.2$dB (as can be seen at the first sidelobe in the stopband). In the passband all three solutions are very similar. Note that all three filters approximate the desired group delay value of $20$ samples relatively well over a large portion of the passband. |
Question:
Let $X(t)$ be a birth-death process with $\lambda_n = \lambda > 0$ and $\mu_n = \mu > 0,$ where $\lambda > \mu$ and $X(0) = 0$. Show that the total time $T_i$ spent in state $i$ is $\exp(\lambda−\mu)$-distributed.
Solution from the professor:
Writing $q_i$ for the probability of ever visiting $0$ having started at $i$ we have
$q_0 = 1$ and
$$q_i = \frac{\mu}{\lambda+\mu} q_{i-1} + \frac{\lambda}{\lambda+\mu} q_{i+1}$$
for $i ≥ 1$
The zeros of the characteristic polynomial for this difference equation are
$$p(x) = \frac{\lambda}{\lambda+\mu} x^2 - x + \frac{\mu}{\lambda+\mu} = 0$$
$$x = \frac{\mu}{\lambda}$$ or $$x=1$$
so that $q_i = A (\frac{\mu}{\lambda})^i + B1^i$ for some constants $A, B \in \mathbb{R}$. As we must have $q_i → 0$ as $i \rightarrow \infty$ we have $B = 0$ after which $q_0 = 1$ gives $A = 1$, so that $q_i = (\frac{\mu}{\lambda})^i$ for $i ≥ 0$.
To find $T_0$ we note that this time is the sum of the $\exp(\lambda)$-distributed time it takes to leave $0$ plus another independent $\exp(\lambda)$-distributed time added for each revisit of $0$, where the number $N$ of such revisits has PMF
$$P(N = n) = (\frac{\mu}{\lambda})^n(1−\frac{\mu}{\lambda})$$
for $n ≥ 0$
As the CHF of an $\exp(\lambda)$-distributed random variable $X$ is
$$E(e^{j\omega X}) = \frac{\lambda}{\lambda-j\omega}$$
it follows that (making use of the basic fact that the CHF of a sum of independent random variables is the product of the CHF’s of the individual random variables)
$$E[e^{j\omega T_0}] = \frac{\lambda}{\lambda-j\omega} \sum_{n=0}^\infty (\frac{\lambda}{\lambda - j\omega})^n (\frac{\mu}{\lambda})^n(1−\frac{\mu}{\lambda}) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$
To find $T_i$ we note (by considering whether the first state after having left $i$ is $i−1$ or $i+1$) that the probability of ever returning to $i$ having started there is
I'm having a hard time understanding this part and would appreciate any help:
$$\frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu} q_{i} = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\frac{\mu}{\lambda} = \frac{2\mu}{\lambda+\mu}$$
As the time spent at each visit of $i$ is $\exp(\lambda+\mu)$-distributed, it follows as above that
$$E[e^{j\omega T_i}] = \frac{\lambda+\mu}{\lambda+\mu-j\omega} \sum_{n=0}^\infty (\frac{\lambda+\mu}{\lambda +\mu- j\omega})^n (\frac{2\mu}{\lambda+\mu})^n(1−\frac{2\mu}{\lambda + \mu}) = \frac{\lambda-\mu}{\lambda-\mu-j\omega}$$ |
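The claimed $\exp(\lambda-\mu)$ distribution of $T_0$ is easy to check by simulation. Below is a rough Monte Carlo sketch (my own illustration): since $\lambda > \mu$ the chain is transient, so a high escape threshold is a safe stand-in for "never returns" (the return probability from state 100 is about $(\mu/\lambda)^{100}$, which is negligible).

```python
import random

def time_in_zero(lam, mu, escape=100, rng=random):
    # Total time the chain spends in state 0, simulated until it
    # drifts past `escape`.
    state, t0 = 0, 0.0
    while state < escape:
        if state == 0:
            t0 += rng.expovariate(lam)  # holding time in 0 has rate lambda
            state = 1
        else:
            # holding times away from 0 don't affect t0;
            # only the jump direction matters
            state += 1 if rng.random() < lam / (lam + mu) else -1
    return t0

random.seed(1)
runs = 4000
mean = sum(time_in_zero(2.0, 1.0) for _ in range(runs)) / runs
# theory: T_0 ~ Exp(lam - mu), so the mean should be close to 1/(2-1) = 1
```

The sample mean should match $1/(\lambda-\mu)$ up to Monte Carlo error, consistent with the CHF computation above.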
It's been several years since this was asked, but then again, there are only so many mentions of domain theory on the forum, so let me try to set some things straight, for the sake of my own understanding at least. To skip my disclaimer blah-blah, just scroll down to the bold-face letters.
Possibly confused by Kaveh's justified clarification questions, you write "fully-defined means maximal, yes", but in the original post you had already written "[...] $f$ is [...] "total" (i.e. the elements of its domain are all finite, and it takes "fully-defined" values to "fully-defined" values) [...]", which suggested to me that you treat "total" and "fully defined" as synonyms---as indeed I do: you can't get more partial than the undefined $\bot$, and no more defined than some total element. But: I don't agree with the condition that the inputs of $f$ be "finite" (either domain-theoretically or in their number).
In fact, by (the usual) definition, a continuous function $f : A \to B$, for arbitrary data types $A$, $B$, is total whenever for all $x \in A$, if $x$ is total then also $f(x)$ is total. This presupposes of course that we agree on a notion of totality for base types, and the obvious choice is to ban everything that involves $\bot$ (assuming that we are working with flat domains, there's just one such element, but there are possibly more in the non-flat case).
Maximality and totality in domain theory are two quite different things. At
flat base types they coincide, but at higher types they generally don't. There are examples in [Stoltenberg-Hansen et al 1994, section 8.3], but since you already mention Haskell and laziness, let me just recall that in the non-flat domain $\mathbb{N}_\ast$ of lazy natural numbers you have the maximal but not total element $\{\bot, S\bot, SS\bot,\ldots \}$. Just to be complete, an example of a total but not maximal element at type, e.g., $\mathbb{B}_\bot \to \mathbb{B}_\bot$ would be the function determined by $\bot \mapsto \bot$ and $x \mapsto \mathtt{tt}$ for $x \neq \bot$, which is total but strictly below the function having $x \mapsto \mathtt{tt}$ for all $x$.
Concerning Neel's answer, though I appreciate its pragmatic intuition and I'm going to take his "this will lead you astray" warning seriously, I still feel that I should stress the obvious good theoretical reasons to want to talk about undefinedness as a bona-fide value: it's the way to thematize computational partiality while conveniently using mathematically total mappings, and so end up with an accurate and rich enough theory. Besides, on the pragmatic level, we do have functions that are either provably or
by definition undefined (no pun) at certain inputs; think here of how one uses
Maybe in Haskell, or the very idea of exception handling in general for that matter, as devices that treat undefinedness as a value.
So I may be lacking basic intuition that Neel has, and therefore be missing his point altogether, but to me a function $f$ as an element of a domain is the
known or expected program behavior in its ideal entirety (things are a bit different if we work with more tangible representations of domains, like information systems, where we may assume to only know parts of the behavior of programs); in particular, I do not talk about some implementation of $f$ that might run faster or slower than others so that I might never be sure of its termination. On this basis, my answer below differs from Neel's quite radically (and I'd appreciate feedback or corrections on this, from Neel or anyone).
So, finally,
to answer the question. We assume that $f$ is total and that $g$ is above $f$. I understand that you ask what the maximal $g^{\max}$ above $f$ is.
First a quick remark: we can prove that $g$ is also total (which is sometimes called "extension lemma"), so it would arguably make more sense, from the point of view of effectivity, to ask about "minimal" totality instead of "maximal" totality: if $h$ is some element (possibly partial), what is a minimal way to extend it to an $f$ which will be total? (that we can extend it to a total at all is the content of the fundamental
density theorem, see [Normann 2008]). And then: if $f$ is already total, what is a minimal equivalent total $g$ (that is, a total $g$ which agrees with $f$ on its total inputs)? Of relevance here is the characterization by [Longo & Moggi 1984] of two totals being equivalent exactly when they have a total intersection.
But back to the "maximal" totality. One more thing we can prove is that two totals are equivalent if and only if they are bounded. From this it follows that, given a total $f$, a first answer to your question is$$g^{\max} = \bigsqcup \{g \mid g \mbox{ is total and equivalent to }f\} .$$This thing
does exist, since domains are bounded-complete.
But I say "a first answer", because I can't think clearly now of an actual
technique, as you specifically ask, of providing this supremum. In other words, I'm not sure of how constructive all the relevant steps are in this thread of thought. Perhaps someone can comment on this aspect (or I might, if I find the time to think about it with a clear head). |
Suppose that a wooden cube, whose edge is $3$ inch, is painted red, then cut into $27$ pieces of $1$ inch edge. Find total surface area of unpainted?
First of all, I have tried to draw the cube using MS Paint, below is given picture:
The surface area of cube is $6\cdot a^2$, where $a$ is the length of cube, when cube is painted, I'm trying to imagine it visually, how much surface would be unpainted?
If it's painted, doesn't that mean that all faces are painted?
EDIT:
A $3\times3\times3$ cube gets painted red. Then it gets split into $27$, $1\times1\times1$ cubes.
Find the number of painted faces:
On the $3$ inch cube, there are $9$ $1$-inch faces on each face, and there are $6$ such faces, so the total red 1-inch faces is: $$ 9\cdot 6=54 $$ Find the total number of faces of all the cubes:
There are $6$ faces per cube and $27$ cubes, so: $$ 27\cdot 6=162. $$ Of these, we know $54$ are painted: $$ 162-54=108 $$ There is a total of $108$ unpainted $1\times1$ squares each having an area of 1, so there are $108$ unpainted square inches.
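As a sanity check, the same count can be brute-forced over the 27 unit cubes (a quick illustrative script, not part of the original solution):

```python
# Brute-force count of painted 1-inch faces over the 27 unit cubes.
n = 3
painted = 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            # a unit-cube face is painted iff it lies on the big cube's surface
            painted += (i == 0) + (i == n - 1) \
                     + (j == 0) + (j == n - 1) \
                     + (k == 0) + (k == n - 1)

total_faces = n**3 * 6
unpainted = total_faces - painted   # each small face is 1 square inch
```

This reproduces the counting argument: 54 painted faces and 108 unpainted square inches.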
EDIT:
I'd like to also post my approach for the case where the cube is divided into
two identical parts: if we have a cube with edge length $a$ divided into $2$ parts, then we get two pieces, each of volume $a^3/2$.
So the edge length would be the cube root of $a^3/2$, i.e. $a$ divided by the cube root of $2$.
Now we have length $3$, which means that the new length would be $3$ divided by the cube root of $2$; on each face with length $3$, we would have: $$ \frac{9}{3/\sqrt[3]{2}}=3\cdot \sqrt[3]{2} $$ in total $4$ such cubes, so $2\cdot4$ such faces; the total number of faces would be $2\cdot6=12$, so the unpainted ones would be $12-8=4$. Is this correct?
Properties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ n$_\text{eq}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p.
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, a decrease of the Signal/Noise ratio, and an increase of the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Let $A$ be a $2\times 2$ complex matrix such that $A^2=0$. Prove that either $A=0$ or $A$ is similar over $\mathbb{C}$ to $$\left(\begin{array}{cc} 0 & 0 \\1 & 0 \end{array}\right) $$
If $A^2=0$, then $\det(A^2) = [\det(A)]^2 = 0$, implying $\det(A) = 0$. Since $A$ is a $2 \times 2$ matrix, what can you conclude about $A$?
If $A$ is non-zero, there is $X$ such that $AX$ is non-zero. Now $\{X,AX\}$ is linearly independent (if $aX+bAX=0$, applying $A$ gives $aAX=0$, so $a=0$ and then $b=0$), and hence forms a basis of $\mathbb{C}^2$. Then the matrix representation of $A$ with respect to this basis is the given matrix. [QED]
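That construction is easy to verify numerically; here is a quick NumPy sketch (the particular $A$ and $X$ are hypothetical choices, not from the question):

```python
import numpy as np

A = np.array([[3.0, -9.0],
              [1.0, -3.0]])       # a nonzero matrix with A^2 = 0
assert np.allclose(A @ A, 0)

X = np.array([1.0, 0.0])          # any X with AX != 0 works
AX = A @ X
P = np.column_stack([X, AX])      # basis {X, AX} as columns

N = np.linalg.inv(P) @ A @ P      # A represented in the basis {X, AX}
# N is [[0, 0], [1, 0]]: A sends X to AX, and AX to 0
```

The change of basis recovers exactly the target matrix.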
If $Av=\lambda v$ with $v\ne0$ then $$0=0v=A^2v=A\lambda v=\lambda^2v$$ so $\lambda=0$. That is, the only eigenvalue $A$ has is zero. Either there are two linearly independent eigenvectors $v$ and $w$, in which case $Ax=0$ for all $x$, and $A=0$; or, there's only one eigenvector $v$, in which case you can show there's a vector $w$ with $Aw=v$. Then if $P$ is a matrix whose columns are $v$ and $w$ you should get $AP=PD$ where $D$ is (the transpose of) your second possibility.
If $A$ is similar to a projection, i.e., $P = S A S^{-1}$ and $P^2 = P$, then $S A^2 S^{-1} = P^2 = P = S A S^{-1}$, so $A^2 = A$, i.e., $A$ is a projection. But, for $A \ne 0$, this is a contradiction with $A^2 = 0$. Hence, your title is wrong.
As for the body of your question: if you know what Jordan normal form is, the answer is straightforward. |
Originally asked this on math.stackexchange, but I figure it's also appropriate here. I'm reading through some finite difference code for a diffusion equation and came across something odd for the boundary conditions, and I was wondering about the validity of the method. I'm familiar with the finite difference method, but I've never seen this certain method before.
The equation I am solving is
$$ \frac{\partial f}{\partial t}=C(x)\frac{\partial^2 f}{\partial x^2}$$
with boundary condition
$$ \frac{\partial f}{\partial x}=0\ \text{on}\ x=0,1 $$
across time, $t$, and position, $x$, with position-dependent diffusion constant, $C(x)$. It's being solved in an explicit way using the forward difference for time and the second-order central difference for space
$$ \frac{f^\text{new}_i-f^\text{old}_i}{\Delta t}=C_i\frac{f^\text{old}_{i-1}-2f^\text{old}_{i}+f^\text{old}_{i+1}}{(\Delta x)^2}$$
along discretised space $\{x_1,\dots,x_n\}$. Rearranging the above equation gives
\begin{align} f^\text{new}_i=&f^\text{old}_i + \frac{C_i\Delta t}{(\Delta x)^2}\Big(f^\text{old}_{i-1}-2f^\text{old}_{i}+f^\text{old}_{i+1}\Big)\\ =&f^\text{old}_{i-1}(S_i)+f^\text{old}_{i}(1-2S_i)+f^\text{old}_{i+1}(S_i) \end{align}
where $S_i=\frac{C_i\Delta t}{(\Delta x)^2}$.
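In code, the interior update is one vectorised line; the sketch below is illustrative (the grid, $C(x)$, time step, and the naive zero-gradient mirror boundary are my own choices, not the code being discussed):

```python
import numpy as np

n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
C = 1.0 + 0.5 * x                   # position-dependent diffusion "constant"
dt = 0.2 * dx**2 / C.max()          # keep S_i <= 0.2 < 1/2 for stability
S = C * dt / dx**2

f = np.exp(-((x - 0.5) / 0.1) ** 2)     # some initial profile

for _ in range(100):
    f_new = f.copy()
    # interior nodes: f_new_i = S_i f_{i-1} + (1 - 2 S_i) f_i + S_i f_{i+1}
    f_new[1:-1] = S[1:-1]*f[:-2] + (1 - 2*S[1:-1])*f[1:-1] + S[1:-1]*f[2:]
    # naive zero-gradient (mirror-node) boundaries
    f_new[0] = f[0] + 2*S[0] * (f[1] - f[0])
    f_new[-1] = f[-1] + 2*S[-1] * (f[-2] - f[-1])
    f = f_new
```

With $S_i \le 0.2$ every coefficient is non-negative, so the update obeys a discrete maximum principle.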
This is simply represented in the code. For the boundary at $x_1$ (and at $x_n$, but I'll only talk about the first as they are similar), there are some strange things going on. Usually I would do a Taylor series expansion of function $f$ at points $x_2$ and $x_3$
$$ f_2=f_1+\frac{\partial f}{\partial x}\bigg\rvert_i \Delta x + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i (\Delta x)^2 + \mathcal{O}((\Delta x)^3)$$ $$ f_3=f_1+\frac{\partial f}{\partial x}\bigg\rvert_i (2\Delta x) + \frac{1}{2} \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i (2\Delta x)^2 + \mathcal{O}((\Delta x)^3)$$
which, due to boundary condition $\frac{\partial f}{\partial x}=0$ can be rearranged to
$$ \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i\approx\frac{2}{(\Delta x)^2}\Big(f_2 - f_1\Big)\tag{$A$}$$ $$ \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i\approx\frac{1}{2(\Delta x)^2}\Big(f_3 - f_1\Big)\tag{$B$}.$$
Usually, I'd average $A$ and $B$ to determine how the boundary conditions are applied in the eventual matrix solution
$$ \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i\approx\frac{1}{(\Delta x)^2}\Big(-\frac{5}{4}f_1 + f_2 + \frac{1}{4}f_3 \Big).$$
However, the code that I'm reading does something different and takes the boundary as a weighted sum of $A/5+4B/5$, as shown below
$$\frac{A+4B}{5} \Rightarrow \frac{\partial^2 f}{\partial x^2}\bigg\rvert_i\approx\frac{1}{(\Delta x)^2}\Big(-\frac{4}{5}f_1 + \frac{2}{5}f_2 + \frac{2}{5}f_3 \Big).$$
I don't see this as being inherently wrong, since the LHS is still $\frac{\partial^2 f}{\partial x^2}\big\rvert_i$, but I'm not sure. Is this a valid step to make? Is it OK to "weight" results from different nodes, and if so, in what situations would weighting or not weighting be useful?
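One way to convince yourself: any convex combination of $A$ and $B$ is a consistent, second-order estimate of $f''$ at a boundary where $f'=0$; only the constant in the $O((\Delta x)^2)$ error changes. A quick check on $f=\cos x$ (so $f'(0)=0$ and $f''(0)=-1$):

```python
import math

f = math.cos              # test function with f'(0) = 0, f''(0) = -1
h = 1e-3
f1, f2, f3 = f(0.0), f(h), f(2 * h)

A = 2.0 * (f2 - f1) / h**2        # estimate (A), using f2
B = (f3 - f1) / (2.0 * h**2)      # estimate (B), using f3

avg      = 0.5 * A + 0.5 * B      # the averaged scheme
weighted = 0.2 * A + 0.8 * B      # the code's A/5 + 4B/5

# all of A, B, avg, weighted approach f''(0) = -1 as h -> 0
```

Both combinations land within $O(h^2)$ of the exact value, so the choice of weights is a matter of error constant (and, in practice, stability), not of consistency.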
Physics models in STAR-CCM+ (Part III)
When the Knudsen number
$$ Kn = \frac{M}{Re}\sqrt{\frac{\gamma \pi}{2}} $$
is small (Kn < 0.01), the continuum mechanics formulation of fluid dynamics is valid, and so are the Navier-Stokes equations, to which Part III of the Physics models in STAR-CCM+ series is devoted.
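The check itself is a one-liner; for instance (the flow numbers below are illustrative choices, not from the post):

```python
import math

def knudsen(M, Re, gamma=1.4):
    """Kn = (M / Re) * sqrt(gamma * pi / 2)."""
    return (M / Re) * math.sqrt(gamma * math.pi / 2.0)

# e.g. a transonic external flow at a large Reynolds number
kn = knudsen(M=0.8, Re=1.0e7)
continuum_valid = kn < 0.01       # True: Kn is around 1e-7
```

For virtually all engineering-scale flows Kn is many orders of magnitude below 0.01, which is why the continuum assumption is taken for granted.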
Fundamental equations
All of fluid dynamics is based on three physical principles:
1. Mass is conserved (i.e., mass can be neither created nor destroyed).
2. Newton's second law: F = ma.
3. Energy is conserved; it can only change from one form to another.
These physical principles are developed into the fundamental governing equations of fluid dynamics: the continuity, momentum, and energy equations. These equations are also known as transport equations or conservation equations.
Continuity equation
Applying the first physical principle to a finite control volume fixed in space: at a point on the control surface, the flow velocity is V and the vector elemental surface area is dS; also dV is an elemental volume inside the control volume.
$$ \frac{\partial}{\partial t}\iiint\limits_\mathcal{V} \rho\, d\mathcal{V} + \iint\limits_{S} \rho \vect{V} \cdot \vect{dS} = 0 $$
or, alternatively, in the form of a partial differential equation,
$$ \frac{\partial\rho}{\partial t} + \nabla\cdot\left( \rho \vect{V} \right) = 0 $$
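As a concrete check, a constant-density (incompressible) stagnation-point field $V = (x, -y)$ satisfies this equation identically; a small SymPy verification (the field is an illustrative choice, not from the post):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
rho = sp.Symbol('rho', positive=True)   # constant density

u, v = x, -y                            # 2-D stagnation-point flow
continuity = sp.diff(rho, t) + sp.diff(rho * u, x) + sp.diff(rho * v, y)
# continuity = rho - rho = 0, so mass is conserved by this field
```

Any divergence-free velocity field paired with a constant density passes the same test.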
Momentum equation
Newton’s second law can be rewritten in a more general form as
$$ \vect{F} = \frac{d}{dt} (m\vect{V}) $$
which reduces to F = ma for a body of constant mass. In the equation, mV is the momentum of a body of mass m.
On the left hand side, the force exerted on the fluid as it flows through the control volume comes from two sources:
- Body forces: gravity, electromagnetic forces, or any other forces which "act at a distance" on the fluid inside V.
- Surface forces: pressure and shear stress acting on the control surface S.
On the right hand side, the time rate of change of momentum of the fluid as it sweeps through the fixed control volume is the sum of two terms:
- Net flow of momentum out of the control volume across surface S.
- Time rate of change of momentum due to unsteady fluctuations of flow properties inside V.
Hence, Newton’s second law applied to a fluid flow is
$$ -\iint\limits_S p\, \vect{dS} + \iiint\limits_\mathcal{V} \rho \vect{f}\, d\mathcal{V} + \vect{F}_\text{viscous} = \frac{\partial}{\partial t}\iiint\limits_\mathcal{V}\rho\vect{V}\, d\mathcal{V} + \iint\limits_S (\rho\vect{V}\cdot\vect{dS})\vect{V} $$
which can be rewritten as partial differential equations that relate flow-field properties at any point in the flow,
$$ \frac{\partial(\rho u)}{\partial t} + \nabla\cdot (\rho u \vect{V}) = -\frac{\partial p}{\partial x} + \rho f_x + (\mathcal{F}_x)_\text{viscous} \\
\frac{\partial(\rho v)}{\partial t} + \nabla\cdot (\rho v \vect{V}) = -\frac{\partial p}{\partial y} + \rho f_y + (\mathcal{F}_y)_\text{viscous} \\ \frac{\partial(\rho w)}{\partial t} + \nabla\cdot (\rho w \vect{V}) = -\frac{\partial p}{\partial z} + \rho f_z + (\mathcal{F}_z)_\text{viscous} $$
where the subscripts x, y and z on f and F denote the x, y and z components of the body and viscous forces, respectively.
Adopting an infinitesimally small moving fluid element of fixed mass as the model of the flow, and applying Newton's second law in the form F = ma, we arrive at
$$ \rho \frac{Du}{Dt} = -\frac{\partial p}{\partial x} + \frac{\partial\tau_{xx}}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} \\
\rho \frac{Dv}{Dt} = -\frac{\partial p}{\partial y} + \frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\tau_{yy}}{\partial y} + \frac{\partial\tau_{zy}}{\partial z} \\ \rho \frac{Dw}{Dt} = -\frac{\partial p}{\partial z} + \frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\tau_{zz}}{\partial z} $$
These are the momentum equations in the x, y and z directions, respectively. They are scalar equations and are called the Navier-Stokes equations, named after Claude-Louis Navier and George Gabriel Stokes.

Energy equation
For a study of incompressible flow, the continuity and momentum equations are sufficient tools to do the job. However, for a compressible flow, we need an additional fundamental equation to complete the system. This fundamental relation is the energy equation.
The physical principle on which the energy equation lies is the first law of thermodynamics,
Energy can be neither created nor destroyed; it can only change in form.
which can be stated as
$$ \delta q + \delta w = de$$
where δq and δw represent an incremental amount of heat and work, respectively, which are forms of energy that, when added to the system, change the amount of internal energy de in the system. This physical principle, applied to a control volume, can be directly translated into
$$ \begin{multline}
\iiint\limits_\mathcal{V} \dot{q}\rho\,d\mathcal{V} + \dot{Q}_\text{viscous} - \iint\limits_S p \vect{V}\cdot\vect{dS} + \iiint\limits_\mathcal{V} \rho (\vect{f}\cdot\vect{V})\,d\mathcal{V} + \dot{W}_\text{viscous}\\ = \frac{\partial}{\partial t} \iiint\limits_\mathcal{V}\rho \left( e + \frac{V^2}{2} \right)\,d\mathcal{V} + \iint\limits_S \rho \left( e + \frac{V^2}{2} \right)\vect{V}\cdot\vect{dS} \end{multline}$$
and in the form of partial differential equations as
$$ \begin{multline}
\frac{\partial}{\partial t} \left[ \rho \left( e + \frac{V^2}{2} \right) \right] + \nabla\cdot \left[ \rho \left( e + \frac{V^2}{2} \right) \vect{V} \right] \\ = \rho\dot{q} - \nabla\cdot (p\vect{V}) + \rho(\vect{f}\cdot\vect{V}) + \dot{Q}'_\text{viscous} + \dot{W}'_\text{viscous} \end{multline} $$
where $\dot{Q}'_\text{viscous}$ and $\dot{W}'_\text{viscous}$ represent the proper forms of the viscous terms.
Summary
These equations are fundamental to all of aerodynamics, making a total of five equations in five accompanying flow variables, namely pressure, the three components of velocity, and temperature. A thermodynamic equation of state can also be added to the set to relate density to the pressure, temperature, and composition.
Exact analytical solutions of the complete Navier-Stokes equations exist for only a few very specialized cases.
Fortunately, many problems of engineering interest are adequately described by simplified forms —by deleting superfluous terms— of the full conservation equations, which can often be solved easily.
References
[1] User Guide STAR-CCM+ Version 8.06. 2013.
[2] Anderson, J. 2007. Fundamentals of Aerodynamics. 4th ed. Boston: McGraw-Hill Higher Education.
[3] Ganić, E., Hicks, T. and Predko, M. 2003. McGraw-Hill's Engineering Companion. New York: McGraw-Hill.
This question already has an answer here:
If magnetic field is conservative, then why not the magnetic force?
My professor thinks it is non-conservative, but he couldn't explain to me why.
This is because of the definition of a conservative force (and links therein):
If a force acting on an object is a function of position only, it is said to be a conservative force, and it can be represented by a potential energy function which for a one-dimensional case satisfies the derivative condition
Let's look at the magnetic field: can it be described by a scalar potential?
There is no general scalar potential for the magnetic field B, but it can be expressed as the curl of a vector function
${\vec{B} = \vec{\nabla} \times \vec{A}}$
So it does not fall within the definition of conservative forces.
A force field $F$, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions:
The curl of $F$ is zero: $$\nabla \times \vec{F} = 0.$$
There is zero net work ($W$) done by the force when moving a particle through a trajectory that starts and ends in the same place: $$W \equiv \oint_C \vec{F} \cdot \mathrm{d}\vec r = 0.$$
The force can be written as the negative gradient of a potential, $\Phi$: $$\vec{F} = -\nabla \Phi.$$
[Proof of equivalence omitted.]
The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force.
Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative,
[3] while others do not.[4] The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative.
So it is not so clear, as with conservation of energy and momentum :).
This is a strange one. The magnetic field is NOT conservative in the presence of currents or time-varying electric fields.
A conservative field should have a closed line integral (or curl) of zero. Maxwell's fourth equation (Ampere's law) can be written $$ \nabla \times {\bf B} = \mu_0 {\bf J} + \mu_0 \epsilon_0 \frac{\partial {\bf E}}{\partial t}\, , $$ so you can see this will equal zero only in certain cases.
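To see this concretely, take the field inside a uniform current distribution, which (up to constants) looks like $\vec B = (-y, x, 0)$: its curl is nonzero, so its closed line integrals cannot all vanish. A small SymPy check (the field is an illustrative choice, not from the answer):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(Fx, Fy, Fz):
    """Componentwise curl of a vector field (Fx, Fy, Fz)."""
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

Bx, By, Bz = -y, x, sp.Integer(0)   # azimuthal field inside a uniform current
curlB = curl(Bx, By, Bz)            # (0, 0, 2): proportional to the current J
```

A nonzero curl is exactly the failure of condition 1 above.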
Magnetic force is also only conservative in special cases. The force due to an electromagnetic field is written $$ {\bf F} = q{\bf E} + q{\bf v} \times {\bf B}$$
For this to be conservative then $\nabla \times {\bf F} = 0$ and $$ \nabla \times {\bf F} = q\nabla \times {\bf E} + q\nabla \times ({\bf v} \times {\bf B}) .$$ But from Faraday's law we know that $$ \nabla \times {\bf E} = - \frac{\partial {\bf B}}{\partial t}\, , $$ so, $$ \nabla \times {\bf F} = - q\frac{\partial {\bf B}}{\partial t} + q{\bf v}(\nabla \cdot {\bf B}) - q{\bf B}(\nabla \cdot {\bf v}) + ({\bf B}\cdot \nabla){\bf v} - ({\bf v}\cdot \nabla){\bf B}\, .$$ From the solenoidal law $\nabla \cdot {\bf B}=0$ always, and $\nabla \cdot {\bf v} = \partial/\partial t(\nabla \cdot {\bf r})=0$. Furthermore, $({\bf B}\cdot \nabla){\bf v} = ({\bf B}\cdot \frac{\partial}{\partial t} \nabla){\bf r} = 0$, so $$ \nabla \times {\bf F} = - q\left[\frac{\partial {\bf B}}{\partial t} + \frac{\partial {\bf B}}{\partial x} \frac{\partial x}{\partial t} + \frac{\partial {\bf B}}{\partial y} \frac{\partial y}{\partial t} + \frac{\partial {\bf B}}{\partial z} \frac{\partial z}{\partial t}\right] $$ $$\nabla \times {\bf F} = - q\frac{d {\bf B}}{d t}$$ and the force is only conservative in the case of stationary magnetic (and hence electric) fields.
Edit: Note that work is done by time-varying B-fields because of the inevitable accompanying E-field. So that may be a potential point of ambiguity. |
A certain filter I'm writing uses two different kernels. The Fejer kernel (which is common) and the Jackson kernel:
$$ \Delta_T(x) = T \,\left( \frac{\sin \pi T x}{\pi T x}\right)^2 \quad\text{and}\quad J_T(x) = \frac{3T}{4} \left( \frac{\sin \pi T x/2}{\pi T x/2}\right)^4 $$
So I had to remind myself how these two kernels behaved and what their properties were:
For a fairly broad class of functions, convolving with the Fejér kernel is the same as integrating against a triangle:
$$ \int_{-T}^T \left( 1 - \frac{|\xi|}{T} \right) \hat{f}(\xi) \,e^{2\pi i \, x \xi} \,d\xi = (f * \Delta_T)(x) $$
Notably, the discrete Fejér kernel (over $S^1$) is a limit of the continuous Fejér kernel (over $\mathbb{R}$):
$$ F_N(x) = \frac{D_0(x) + \dots + D_{N-1}(x)}{N} = \frac{1}{N} \left(\frac{\sin \left(Nx/2\right)}{\sin (x/2)}\right)^2 = \Delta_{1/T}(x)$$
Here it implements Cesàro summation of the Fourier series:

$$ \big(f*F_N\big)(x) = \frac{ S_0(f) (x) + \dots + S_{N-1}(f)(x)}{N} = \frac{1}{N} \sum_{n=0}^{N-1} \sum_{|k| \leq n} \widehat{f}(k)\, e^{ikx}$$
The similarities continue that the $S^1$ Fejér kernel is the periodization of the $\mathbb{R}$ Fejér kernel:
$$ \sum_{n \in \mathbb{Z}} \Delta_{\frac{1}{N}}(x + n) = F_N(x) $$
These are taken from Fourier Analysis by Elias Stein. My question is what similar properties should occur for the Jackson kernel?
Googling the Jackson kernel doesn't return much. E.g. this note on trigonometric polynomials and modulus of continuity. |
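One property that does carry over directly, and is easy to check numerically, is that the Jackson kernel, like the Fejér kernel, has unit mass. A quick sketch (NumPy's `np.sinc(u) = sin(pi*u)/(pi*u)` matches the normalisation in the formulas above; truncating the integration range leaves a small tail error, largest for Fejér because of its $1/x^2$ tails):

```python
import numpy as np

T = 2.0
x = np.linspace(-400.0, 400.0, 2_000_001)
dx = x[1] - x[0]

fejer   = T * np.sinc(T * x) ** 2                 # Delta_T(x)
jackson = 0.75 * T * np.sinc(T * x / 2.0) ** 4    # J_T(x)

mass_fejer   = fejer.sum() * dx                   # ~ 1
mass_jackson = jackson.sum() * dx                 # ~ 1
```

Unit mass is what makes both of them approximate identities, which is the starting point for any Fejér-style summability statement for Jackson.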
Suppose we have an array of numbers $x_j$ and their corresponding weights $w_j$, where $\sum_j w_j \gt 1$. Now we need to find $x_m$ such that
$$\sum_{j=1}^{m-1} w_j \lt 1/2 \quad \text{and} \quad \sum_{j=m+1}^{n} w_j \ge 1/2$$
Moreover, $x_j < x_m$ for $j < m$ and $x_k \ge x_m$ for $k > m$; i.e., a solution should look like this --
$$\underbrace{x_1, x_2, \ldots, x_{m-1}}_{\lt \, x_m}, x_m, \underbrace{x_{m+1}, \ldots, x_{n-1}, x_n}_{\ge \, x_m} \\ \underbrace{w_1, w_2, \ldots, w_{m-1}}_{\lt \, 1/2}, w_m, \underbrace{w_{m+1}, \ldots, w_{n-1}, w_n}_{\ge \, 1/2}$$
Moreover, it was also mentioned that I may use Dynamic Programming that could be bounded by $O(n\lg n)$.
EDIT:
$\{x_j, w_j\}: \quad x_j \text{ is the value and } w_j \text{ is the weight.}$
Example Input: $\{10, 0.4\}, \, \{5, 0.1\}, \, \{6, 0.9\}, \, \{2, 0.3\}, \, \{3, 0.1\}$
Example Output: $\{2, 0.3\}, \, \{3, 0.1\}, \, \underbrace{\{5, 0.1\}}_{x_m}, \, \{6, 0.9\}, \, \{10, 0.4\}$
How I tried
Step 1: First sort the list according to $w_j$. -- $O(n \lg n)$
Step 2: Start from the first element from the left, add the weights $w_j$ until $\sum_j w_j \ge \, 1/2$. The current $x_j$ is the $x_m$. -- $O(n)$
Step 3: Stop, now we have two lists. One is on the left $L=\{x_1, x_2, \ldots, x_{m-1}\}$ and the other is on the right $R = \{x_m, x_{m+1}, \ldots, x_n\}$.
Step 4: Go through the list $L$; if there is any value $x_k > x_m$, move $x_k$ into $R$ at an appropriate position. Do this until all elements in $L$ are smaller than $x_m$. -- $O(n^2)$
Step 5: if $L \ne \emptyset$, $x_m$ is the answer, otherwise $x_1$ is the answer.
The overall complexity will be $O(n \lg n) + O(n) + O(n^2) \approx O(n^2)$. I got confused about the DP stuff at the end of the question, so I was wondering if there is really any way to do it in $O(n \lg n)$ (or better), how do I build the optimal substructure in the case of DP? |
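For what it's worth, the usual way to get $O(n \lg n)$ here needs no DP: sort by the values $x_j$ (not the weights), then do a single sweep with a running prefix sum of weights. A sketch, taking the $1/2$ thresholds literally as in the problem statement (this is my reading of the conditions, not a known reference solution):

```python
def weighted_median(pairs):
    """Return x_m: weight strictly left of it < 1/2, strictly right >= 1/2."""
    pairs = sorted(pairs)                  # sort by value x_j: O(n log n)
    total = sum(w for _, w in pairs)
    best, left = None, 0.0
    for x, w in pairs:                     # single O(n) sweep
        right = total - left - w           # weight strictly to the right
        if left < 0.5 and right >= 0.5:
            best = x                       # keep the last index that qualifies
        left += w
    return best

data = [(10, 0.4), (5, 0.1), (6, 0.9), (2, 0.3), (3, 0.1)]
```

On the example input this returns $5$, matching the example output; the sort dominates, so the whole thing is $O(n \lg n)$.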
I've written down the following proof:
Given positive integers $x,y,z,k$ we have: $$[1]\;\;\;\;x^2+y^2+z^2=k^2, \rightarrow x^2+y^2=(k-z)(k+z)$$ We now let $k=z+1$, thus: $$x^2+y^2=2z+1$$ Let $z$ be even i.e. $z=2z'$, then:$$[2]\;\;\;\;x^2+y^2=4z'+1$$ We know that there exist an infinite amount of primes of the form $4m+1$, therefore there are infinitely many $z'$s such that $4z'+1$ is prime.
By Fermat's theorem on sums of two squares we know that if $p \equiv 1 \pmod4$ then $p=u^2+v^2$ for some positive integers $u$ and $v$. Since every prime of the form $4z'+1$ satisfies $4z'+1 \equiv 1 \pmod4$, there are infinitely many solutions to $[2]$, and therefore infinitely many solutions to $[1]$.
I know this proof is probably "overkill" for the question, and that I should prove the two statements I used (infinitely many primes of that form, and Fermat's theorem), but the proof is nonetheless correct, right? Also, how can I prove that there are infinitely many solutions to $[1]$ with $\gcd(x,y,z)=1$ ? |
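As a quick numerical illustration of the construction above (my own brute-force script, only checking small $z$; it records the gcd so primitive solutions are visible too):

```python
from math import gcd, isqrt

def two_square(n):
    """Return (x, y) with x^2 + y^2 = n and x, y >= 1, or None."""
    for x in range(1, isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y >= 1 and y * y == y2:
            return x, y
    return None

sols = []
for z in range(1, 200):
    rep = two_square(2 * z + 1)   # k = z + 1  =>  x^2 + y^2 = 2z + 1
    if rep:
        x, y = rep
        assert x * x + y * y + z * z == (z + 1) ** 2
        sols.append((x, y, z, z + 1, gcd(gcd(x, y), z)))
```

For instance $z=2$ gives $1^2+2^2+2^2=3^2$ with $\gcd(1,2,2)=1$.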
I am currently working on an online homework problem. I really have no idea how to approach it. If someone could help me solve this question I would greatly appreciate it!
From Rogawski ET 2e section 10.7, exercise 31.
Find the Taylor series for $f(x) = \dfrac{1}{1 - 3x}$ centered at $c=1$.
$$\frac{1}{1-3x} = \sum_{n=0}^{\infty} [\textrm{_________}]$$ |
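For reference, here is one standard derivation (my own working, using the geometric series $\frac{1}{1+u}=\sum_{n\ge0}(-u)^n$ for $|u|<1$):

```latex
\frac{1}{1-3x} = \frac{1}{1-3\bigl(1+(x-1)\bigr)} = \frac{-1}{2+3(x-1)}
= -\frac{1}{2}\cdot\frac{1}{1+\frac{3}{2}(x-1)}
= \sum_{n=0}^{\infty} \frac{(-1)^{n+1}\,3^{n}}{2^{n+1}}\,(x-1)^{n},
\qquad |x-1| < \tfrac{2}{3}.
```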
I’ve been trying to homebrew an Eldritch Invocation that grants the Warlock blindsight, but I’m not sure how far this blindsight should extend. I initially thought to compare it to the pre-existing invocation Devil’s Sight, which grants 120 feet of darkvision through magical and non-magical darkness, but blindsight is (obviously) far superior to darkvision, so I abandoned that comparison. Any suggestions for how far the blindsight should extend would be greatly appreciated.
I was hoping to make this blindsight Eldritch Invocation have no Warlock level or Pact prerequisites, but I would be willing to add a level prerequisite if it’s absolutely necessary.
If $ \Phi$ is a diffeomorphism on $ \Omega \subset \mathbb{R}^n$ and $ f$ is a real valued integrable function on $ \mathbb{R}^n$ . Then we know that $ \int\limits_{\Phi (\Omega)}f(x)dx=\int\limits_{\Omega}f(\Phi(x))|det J_{\Phi}|dx$ .
How do we do this when derivatives are involved in the integrand. For example,
Let us take $ \Phi: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ defined by $ \Phi(x,y)=(x+sy,\,y)$ (a shear, so $|\det J_\Phi| = 1$), where $ s \in \mathbb{R}$, and
$ f(x,y)=g_x(x,y)+h_y(x,y)$ where $ h$ and $ g$ are smooth functions. Then how does one apply the change of variables to $ \int\limits_{\Phi (\Omega)}\frac{\partial}{ \partial x}g(x,y)+\frac{\partial}{ \partial y}h(x,y) \,dx\, dy$? Please suggest a reference for the proof. |
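Here is how the derivatives transform under this particular map (my own chain-rule computation, assuming the shear $\Phi(x,y)=(x+sy,\,y)$ with $|\det J_\Phi|=1$; a sketch, not a general proof):

```latex
\int_{\Phi(\Omega)} (g_x + h_y)\,dx\,dy
  = \int_{\Omega} \bigl(g_x\circ\Phi + h_y\circ\Phi\bigr)\,
    \underbrace{\lvert\det J_\Phi\rvert}_{=1}\,dx\,dy,
\quad\text{and, with } G = g\circ\Phi,\; H = h\circ\Phi:\quad
\partial_x G = g_x\circ\Phi,\;\;
\partial_x H = h_x\circ\Phi,\;\;
\partial_y H = s\,(h_x\circ\Phi) + h_y\circ\Phi,
```

so the integrand over $\Omega$ equals $\partial_x G + \partial_y H - s\,\partial_x H$: the divergence structure is preserved up to the extra $-s\,\partial_x H$ term.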
I have a two table like this :
Table Name : TAGS
id   brand-id   tag1   tag2   tag3
---  --------   ----   ----   ----
1    10         A      B      C
2    11         D      E      F
Table Name : MY_TAGS
tags   brand-id
----   --------
A      10
B      10
C      10
D      11
E      11
F      12
I need to write a trigger for the following case:

While inserting a new row into the TAGS table, it should insert all the tags in the TAGS row. After the row is inserted into the TAGS table, it should check all three tags (tag1, tag2, tag3) in the MY_TAGS table for that brand; if any particular tag doesn't exist there, only then should it insert that tag into the MY_TAGS table.
I was trying something like this, but I am confused about what I need to write in the BEGIN block:

CREATE TRIGGER mytrigger
  AFTER INSERT ON tags
  REFERENCING NEW AS newRow OLD AS oldRow
  FOR EACH ROW
  WHEN (newRow.id > 1)
BEGIN
  -- What I need to write here --
END mytrigger;
I am new to PL/SQL triggers, so any help would be appreciated! Thanks
I have the following scenario, which I would like to have it in AWS.
The model contains three layers; each layer has its nodes and connections as basic network elements. In the functional layer, different Network Functions are supported at different locations as VNFs. The traffic is forwarded from eNB (1) towards eNB (2) and requires the VNFs v1, v2, and v3. Therefore, the bold black lines represent the functional connections used in the applied Service Chain, and the other black lines are the unused functional connections. The dashed lines show the actual path of the traffic in the IP layer, which passes the IP nodes R1, R2, R3 and R4, then goes back to R3, and ends in eNB (2).
I have implemented the IP layer in AWS using CSR 1000v, with each router in a different VPC as shown in the figure above. The connection between them is realized by GRE tunnels. I have also created 3 VMs that host the Network Functions, as well as the eNB VMs, which are the endpoints that send and receive the traffic. They are located as shown in the figure above.

My problem is: after sending traffic from one VM (eNB) to the other, how can I make the traffic go only through the GRE tunnels while respecting the scenario described above? |
Given two sets of (two-dimensional) points, say, $A=\{a_1,a_2,\ldots,a_{n_a}\}$ and (you guessed it) $B=\{b_1,b_2,\ldots,b_{n_b}\}$, and $d^2_{i,j}=\mid a_i-b_j\mid^{\ 2}$ the "matrix" (not necessarily square) of distances between them, I want to map $m:A\to B$ (i.e., with each $A$-point $a\in A$ associate a corresponding $B$-point $m(a)\in B$) such that the following...
The maximal point-to-point $d_{i,m(i)}$ distance should be minimal. And once you've accomplished that first $m:i\to m(i)$ correspondence, each set gets one point smaller, $A\to A\backslash\{a_i\}$ and $B\to B\backslash\{b_{m(i)}\}$. And now, the second point-to-point correspondence made by $m$ should similarly minimize the maximal distance on these one-point-smaller sets. And then iteratively. So,
my question: what's an efficient algorithm for this? ($n_a,n_b\sim10^4$, way too large for brute force)
And it's not necessary that $n_a=n_b$. If $n_a\lt n_b$, discard any of the excess $n_b-n_a$ $B$-points you like such that the set of all those maximal distances is minimized. That is, just do the iteration, and when you're done, just discard the "unused" $B$-points. On the other hand, if $n_b\lt n_a$ just discard the $A$-points resulting from the
first $n_a-n_b$ iteration steps (i.e., again minimize the set of maximal distances). At least, I think that's the right "discard" scheme, but please correct me if I'm wrong.
The "point" (sorry:) of all this is, as per the subject, to "morph" one bitmapped image into another, whereby each $A$-point denotes a source image pixel, and each $B$-point denotes a target image pixel (and if $n_b\gt n_a$, I'll just inconspicuously/randomly "pop" a few extra $B$-pixels into place during each frame of the morph). Since this kind of morphing is all over the place, I'd have thought I could easily google algorithms/code/etc. But I strangely couldn't get google to cough it up. So if you're familiar with this kind of stuff and can just point me in the right direction, that would be great, too. Thanks. |
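Not an answer to the bottleneck-optimal question above, but for scale testing, here is a naive greedy sketch (my own simplification: it repeatedly matches the globally closest remaining pair, which approximates but does not guarantee the iterated min-max objective described above):

```python
import numpy as np

def greedy_match(A, B):
    """A: (na, 2), B: (nb, 2) point arrays; returns list of (i, j) pairs.

    Greedy: repeatedly pick the closest unmatched (a_i, b_j).
    O(na*nb*log(na*nb)) time and O(na*nb) memory -- fine for a few
    thousand points, too big for ~10^4 without spatial indexing
    (e.g. a k-d tree).
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=None)          # flat indices, nearest first
    used_a, used_b, pairs = set(), set(), []
    for flat in order:
        i, j = divmod(int(flat), d2.shape[1])
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            pairs.append((i, j))
            if len(pairs) == min(len(A), len(B)):
                break
    return pairs
```

Excess points in the larger set are simply never matched, which implements the "discard the unused points" scheme described above.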
It is no easy task to move material stock that spans over 12 meters in length while weighing a few tons into a sawing system. For this process, it is common to use forklifts or gantry cranes to move the material onto a loading table or transfer mechanism. The danger any plant manager faces is the potential that an inexperienced crane or lift-truck operator may accidentally drop heavy materials from a height, damaging the material load table. This uncontrolled interaction may also put the rest of the material handling system at risk of severe abuse. A poka-yoke system is needed to guarantee high Overall Operations Effectiveness (OOE).
Figure 1: An example of severe material load table damage experienced from a heavy material drop.

Why Material Load Tables are Over-Engineered
Material load tables and transfer mechanisms are often over-engineered with excessive material in an attempt to avoid severe damage from dropped loads. Although the strength of the load table may be improved, this may increase the cost—a cost that gets passed onto the customer. Another approach to avoid increased costs when designing material handling systems is to use advanced engineering practices.
Modern CAD technology, such as SolidWorks, provides powerful tools to calculate and analyze mechanical structures. These programs make it easier to add and test materials that fortify structures in CAD software. The advantage of working in the virtual world is that you can test a variety of materials without adding more cost to the prototyping process.
Within these CAD programs, features such as Finite Element Analysis (FEA), can simulate the effects of shock load before the prototype product is even produced. This allows for early modifications in the design process at little-to-no cost, while ensuring the best strength-to-weight ratios at the lowest cost for the customer.
Theory and Analytical Example
By using the simplifying assumptions of strain energy and the principle of conservation of energy, one can presume that the potential energy before the impact, the kinetic energy right at the impact, and the stored elastic energy in a structure (spring) are equal. In the following formula, \(\delta_{max}\) is the maximum displacement of the spring; c is the spring rate; m is the falling mass; g is the acceleration of gravity; and h is the starting height:
$$mg (h+\delta_{max}) = \left(\frac{c*\delta^2_{max}}{2}\right)$$
The formula for introducing displacement under a static loading condition \(\delta_{st}\) looks like:
$$\delta_{max} = \delta_{st}\Bigl[1 + \sqrt {(1 + \frac{2h}{\delta_{st}})}\Bigr]$$
The same formula can be written in a different form when applying Hooke’s Law — \(F_{max}\) is the maximum contact force and \(F_{st}\) is the static force:
$$F_{max} = F_{st}\Bigl[1 + \sqrt {(1 + \frac{2h}{\delta_{st}})}\Bigr]$$
This confirms that a quasi-static calculation with an amplification factor (impulse factor) is possible, and conducting dynamic simulations as a first approach is not necessary. This formula can also be applied to a static displacement calculation of a beam with the use of the Euler-Bernoulli beam theory.
Example:
Let us assume a 100 kg mass hits a support beam, and the contact force and resulting stress need to be known: E (Young's modulus) is 210 GPa; h is 25 mm; l (the beam length) is 1 m; and the beam cross section is a 50 mm x 25 mm rectangle.

Figure 2: A visual representation of calculating the contact force and resulting stress of a mass dropped on a support beam.
Moment of inertia for a rectangular profile is:
$$I = \frac{bh^3}{12} = 65*10^{-9}m^4$$
The static displacement is:
$$\delta_{st} = \frac{mgl^3}{48EI} \approx 1.5\times10^{-3}\,m$$
Bending stress due to the static load, where M is the maximum bending moment and W is the moment of resistance (section modulus):
$$\sigma_{st} = \frac{M}{W} = \frac{F\frac{L}{4}}{\frac{bh^2}{6}} \approx 47\ MPa$$
The impact factor is:
$$n = 1 + \sqrt {1 + \frac{2h}{\delta_{st}}} \approx 6.9$$
This demonstrates that the impact force from a drop of just 25 mm will be roughly seven times higher than the static load of 981 N. The bending stress rises by the same factor.
The formula for the impact factor shows that an increase in stiffness reduces the static displacement. This increases both the impact factor and impact force. The result shows the paradigm that is generally true from machine tools: the stiffer the better; however, this is not necessarily true when dealing with shock loads.
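The worked example above can be reproduced in a few lines (my own recomputation; it assumes g = 9.81 m/s² and the simply supported, center-loaded beam formulas used above, so the rounded figures may differ slightly from the text):

```python
import math

m, g, h_drop, L, E = 100.0, 9.81, 0.025, 1.0, 210e9
b, h = 0.05, 0.025                      # cross section: 50 mm x 25 mm

I = b * h**3 / 12                       # ~65e-9 m^4
delta_st = m * g * L**3 / (48 * E * I)  # static displacement, ~1.5e-3 m
n = 1 + math.sqrt(1 + 2 * h_drop / delta_st)  # impact factor, ~6.9
F_max = n * m * g                       # peak contact force, ~6.7 kN
```

This makes it easy to see the paradox numerically: doubling the stiffness halves `delta_st` but raises `n` (and hence `F_max`).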
Verification of Durability
The Transient Nonlinear Dynamic Analysis in FEA software verifies the product at an early stage and conducts damage analysis of it. This provides insight on the contact and reaction forces, and the stress as a function of time. The impact takes place in a matter of milliseconds, making adequate discretization of space and time crucial. The boundary condition represents the interface to the floor. The initial condition is the impact velocity \(v = \sqrt {2gh}\), with g being the gravity constant and h the starting height from which a steel billet is dropped.

Figures 3 & 4: FEA software analysis of heavy load drop contact, reaction forces, and stress as a function of time.

Improved Ways to Design Material Load Tables
With today’s technology, there are improved ways to design material load tables so they are not overloaded with costs. Generally, designers have two choices:
1. Over-engineer the structure to withstand any heavy shock load, increasing the cost of the load table for the customer.
2. Through the use of FEA software, design the structure to allow elastic flexing while avoiding the effects of plastic deformation.
We have all dropped our smartphones. To reduce the risk of cracking or shattering the screen, we place the phone in a shock-absorbing case. This simple approach reduces the stiffness and the impact forces, and with them the stresses on the phone when dropped. The same is true for dropping material onto a material load table.
|
Suppose $X$ is a linear space and $X$ has a Hamel basis with uncountable number of elements. Does there exist a norm on $X$ such that $X$ is a Banach space with respect to this norm?
I can answer this question when the cardinality $\lambda$ of Hamel basis of $X$ is not less than $\mathfrak{c}$.
For any infinite dimensional Banach space its cardinality and cardinality of its Hamel basis are equal. See theorem 3.5 from The cardinality of Hamel bases of Banach spaces by Lorenz Halbeisen , Norbert Hungerbühler.
Consider the Banach space $\ell_\infty(\lambda)$. Clearly $\operatorname{Card}(\ell_\infty(\lambda))=\lambda\times \operatorname{Card}(\mathbb{C})=\lambda$, so the cardinalities of Hamel bases of $\ell_\infty(\lambda)$ and $X$ coincide. Therefore, there is some linear isomorphism between $X$ and $\ell_\infty(\lambda)$. This isomorphism induces a complete norm on $X$.
The cardinality $\lambda$ of a Hamel basis of an infinite-dimensional Banach space always satisfies $\lambda^\omega =\lambda$. Choose your favourite uncountable cardinal without this property. |
Here is the Gamma PDF:
$f(x) = \frac{1}{\Gamma(a) b} (\frac{x}{b})^{a-1} e^{-x/b} \;\; x\geq 0; a , b>0$
The mean is ab and the variance is ab². When a=1 it reduces to the exponential distribution. In fact, when a is an integer, it is the distribution of the sum of a independent exponentially distributed random variables, each with mean b. It is shaped like the exponential distribution with a spike at 0 for a<1, but has a mode at (a-1)b for a>1 (see the Wikipedia article).
MacKay suggests representing the positive real variable x in terms of its logarithm z=ln x (ITILA, p. 314). This gives us a better idea about the order of magnitude of a typical x in terms of a and b. The distribution in terms of z is:
$f(z) = \frac{1}{\Gamma(a)} (\frac{x}{b})^a e^{-x/b} \;\; z \in \Re; x=e^z; a, b>0$
We can get an idea about the shape of f(z) by looking at its first two derivatives with respect to z:
$f'(z) = f(z) (a-\frac{x}{b})$
$f''(z) = f(z) (a^2 - (2a+1)\frac{x}{b} + (\frac{x}{b})^2)$
The graph above shows f(z) and its two derivatives for a=1/10 and b=10. The first derivative tells us that f(z) has a single mode at x=ab. Note that x=ab is the mean of f(x) but only the mode (not the mean) of f(z). The curve raises slowly on the left of the mode and falls sharply on the right. The second derivative has two roots that give us the values with the minimum and the maximum slope:
$x = ab + \frac{b}{2} \pm \frac{b}{2} \sqrt{1+4a}$.
Now we are going to look at the limit where a<<1, typically used as a vague prior. The height of the mode at x=ab is $a^a e^{-a}/\Gamma(a)$. Γ(a) is well approximated by 1/a for small a, and $a^a$ and $e^{-a}$ both go to 1, so f(z) ≈ a at the mode.
Next, let's look at the right side (x>ab) where f(z) seems to fall sharply. According to the roots of the second derivative given above, the minimum slope occurs at around x=b (if we ignore the terms with a<<1). The value of f(z) when x=b is 1/(e Γ(a)). Γ(a) is well approximated by 1/a for small a, so this value is approximately a/e. The slope at x=b is approximately -a/e and if we fit a line at that point the line would cross 0 at x=eb. Thus for small a, the probability can be considered negligible for x>eb.
Next, let's look at the left side (x < ab) where f(z) appears more flat. The maximum slope occurs around x=a²b (if we approximate $\sqrt{1+4a}$ with $1+2a-2a^2$). The slope at x=a²b is approximately a², which gives a flat shape for x<ab when a<<1.
In summary, when used with a<<1, f(z) rises slowly for x<ab (with approximate slope a²) and falls sharply for x>ab (with approximate slope -a/e). You are unlikely to see x values larger than eb from such a distribution, but you may see values much smaller than the mean ab. Thus a vague Gamma prior is practically putting an upper bound on your positive value. The figure below shows how the f(z) distribution starts looking like a step function as the shape parameter approaches 0 (b=1/a and the peak heights have been matched for comparison).
I should also note that in the limit where a→0 and ab=1, we get an improper prior where f(z) becomes flat and the Gamma distribution becomes indifferent to the order of magnitude of the random variable. However it flattens a lot faster on the left than on the right. |
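The small-a approximations above are easy to spot-check numerically (my own check, using the f(z) formula above: the mode height at x=ab should be ≈ a, and the value at x=b should be ≈ a/e):

```python
import math

def f_z(x, a, b):
    """Density of z = ln x for Gamma(a, b): (x/b)^a * exp(-x/b) / Gamma(a)."""
    return (x / b) ** a * math.exp(-x / b) / math.gamma(a)

a, b = 0.01, 10.0
mode_height = f_z(a * b, a, b)   # ~ a
right_value = f_z(b, a, b)       # ~ a / e
```

For a = 0.01 both approximations are already within a few percent.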
I am implementing auxiliary function based Independent Vector Analysis (AuxIVA) from Nobutaka Ono's original paper Stable and fast update rules for independent vector analysis based on auxiliary function technique. AuxIVA is also presented in other freely available papers, for example this one. It is an extension of plain Independent Vector Analysis where an auxiliary function of the original cost function is minimized.
There are four equations that make up the update rules of AuxIVA:
$$ r_k = \sqrt{\sum_{\omega=1}^{N_\omega} {\lvert w_k^h(\omega)x(\omega)\rvert}^2} $$ $$ V_k(\omega) = {1\over{N_t}}\sum_{t=1}^{N_t}{{G'(r_k) \over r_k} x(\omega) x^h(\omega) } $$ $$ w_k(\omega) = (W(\omega)V_k(\omega))^{-1}e_k$$ $$ w_k(\omega) = {w_k(\omega) \over {\sqrt {w_k^h(\omega)V_k(\omega)w_k(\omega)}}} $$
where
$ N_s $ is the number of sources.
$ k = 1,2,...,N_s$ is the source index.
$ e_k = [0,0,...0,1,0...]^T $, is a column vector with its only non-zero entry being a 1 at the k-th index.
$ G'(r_k)$ is the first derivative of the AuxIVA contrast function and since I'm using $G(r) = r$, $ G'(r_k) = 1$.
$ N_\omega $ is the number of frequency bins and $ N_t $ is the number of time bins in the STFT of the mixture.
The input I am using is structured like this:
$X$ is a tensor $N_s * N_t * N_\omega $ which is the short-time Fourier transform (STFT) of $N_s$ mixtures. $W$ is the mixing tensor that will be estimated by repeatedly applying the 4 update rules. Its dimensions are $N_s*N_s*N_\omega$
In Matlab notation, $w^h_k(\omega) = W[k,:,\omega] $ and $x(\omega)=X[:,:,\omega]$.
When I evaluate $r_k$ using the first equation, I get a vector of size $1 \times N_t$.
To evaluate $V_k(\omega)$ using the second equation, I first evaluate $ {G'(r_k) \over r_k}$, which is just $1/r_k$ because of the assumption $G(r_k) = r_k$. I then do point-wise multiplication between $ {G'(r_k) \over r_k} $ and $x(\omega)$, and post-multiply this product by $x^h(\omega)$. Because $x(\omega)$ has size $N_s \times N_t$ and $ {G'(r_k) \over r_k} $ has size $1 \times N_t$, the point-wise multiplication between them yields a weighted copy of $x(\omega)$. When this weighted copy is post-multiplied by $x^h(\omega)$, a weighted covariance matrix of size $N_s \times N_s$ is generated.
In the third equation I post-multiply $W(\omega)$, which has size $N_s \times N_s$, by $V_k(\omega)$ from the previous equation, which also has size $N_s \times N_s$. I then invert this matrix to get $(W(\omega)V_k(\omega))^{-1}$ and post-multiply it by $e_k$ to get $w_k(\omega)$, a column vector of size $N_s \times 1$.
Finally, in the last equation, I evaluate ${\sqrt {w_k^h(\omega)V_k(\omega)w_k(\omega)}}$, which is a scalar, and divide the vector $w_k(\omega)$ by this scalar to get a scaled version of $w_k(\omega)$.
I'd like to know if my interpretation of the four equations is correct. I do not have an in-depth understanding of the derivation of these equations so I'm relying on little information present in these papers to interpret and implement the algorithm. |
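For comparison, here is how I would sketch one sweep of the four updates above in NumPy (my own reading of the equations, not verified against any reference implementation; `eps` is my added guard against division by zero):

```python
import numpy as np

def auxiva_pass(X, W, eps=1e-12):
    """One sweep of the four AuxIVA update rules, for G(r) = r.

    X : (Ns, Nt, Nw) complex STFT of the mixtures
    W : (Ns, Ns, Nw) demixing tensor, with W[k, :, f] = w_k^H(f)
    """
    Ns, Nt, Nw = X.shape
    for k in range(Ns):
        # eq. 1: r_k[t] = sqrt(sum_f |w_k^H(f) x_t(f)|^2), shape (Nt,)
        y = np.einsum('sf,stf->tf', W[k], X)
        r = np.sqrt(np.sum(np.abs(y) ** 2, axis=1)) + eps
        for f in range(Nw):
            Xf = X[:, :, f]                       # (Ns, Nt)
            # eq. 2: V_k(f) = (1/Nt) sum_t x x^H / r_t   (G'(r)/r = 1/r)
            V = (Xf / r) @ Xf.conj().T / Nt
            # eq. 3: w_k(f) = (W(f) V_k(f))^{-1} e_k, an Ns x 1 column
            wk = np.linalg.solve(W[:, :, f] @ V, np.eye(Ns)[:, k])
            # eq. 4: normalize so that w_k^H V_k w_k = 1
            wk /= np.sqrt(np.real(wk.conj() @ V @ wk)) + eps
            W[k, :, f] = wk.conj()                # store the row w_k^H
    return W
```

Note W is updated in place, row by row, which matches the sequential nature of the update rules.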
Q. A parallel plate capacitor is made of two square plates of side 'a', separated by a distance d (d ≪ a). The gap is filled by a dielectric slab of constant K whose thickness varies linearly from 0 at one edge to d at the opposite edge (this is what the solution below assumes). The capacitance of this capacitor is:
Solution:
Consider a vertical strip of width $dx$ at distance $x$ from the edge; the dielectric fills a height $y$ of the gap there, with

$$\frac{y}{x} = \frac{d}{a} \;\Rightarrow\; y = \frac{d}{a}x, \qquad dy = \frac{d}{a}\,dx$$

The strip is two capacitors in series (dielectric of thickness $y$, air of thickness $d-y$):

$$\frac{1}{dc} = \frac{y}{K\epsilon_0\, a\, dx} + \frac{d-y}{\epsilon_0\, a\, dx} = \frac{1}{\epsilon_0\, a\, dx}\left(\frac{y}{K} + d - y\right)$$

$$c = \int dc = \int \frac{\epsilon_0\, a\, dx}{\frac{y}{K} + d - y} = \epsilon_0 a\cdot \frac{a}{d}\int_0^d \frac{dy}{d + y\left(\frac{1}{K}-1\right)}$$

$$= \frac{\epsilon_0 a^2}{\left(\frac{1}{K}-1\right)d}\left[\ln\left(d + y\left(\tfrac{1}{K}-1\right)\right)\right]_0^d = \frac{K\epsilon_0 a^2}{(1-K)d}\,\ln\!\left(\frac{d + d\left(\frac{1}{K}-1\right)}{d}\right) = \frac{K\epsilon_0 a^2}{(1-K)d}\,\ln\frac{1}{K} = \frac{K\epsilon_0 a^2 \ln K}{(K-1)d}$$
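A quick numerical cross-check of the closed form (my own script; the values of a, d, K are arbitrary illustrative choices): sum the series-capacitor strips directly and compare with $K\epsilon_0 a^2 \ln K/((K-1)d)$.

```python
import math

eps0, a, d, K, N = 8.854e-12, 0.1, 1e-3, 4.0, 100000

total = 0.0
for i in range(N):
    x = (i + 0.5) * a / N          # strip center (midpoint rule)
    y = d * x / a                  # local dielectric thickness
    dx = a / N
    total += eps0 * a * dx / (y / K + (d - y))

closed = K * eps0 * a**2 * math.log(K) / ((K - 1) * d)
```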
|
Q. A string of length 1 m and mass 5 g is fixed at both ends. The tension in the string is 8.0 N. The string is set into vibration using an external vibrator of frequency 100 Hz. The separation between successive nodes on the string is close to:
Solution:
Velocity of the wave on the string: $V = \sqrt{\frac{T}{\mu}} = \sqrt{\frac{8}{5\times 10^{-3}}} = 40\,m/s$

Now, the wavelength of the wave: $ \lambda = \frac{v}{n} = \frac{40}{100}\, m $

Separation between successive nodes: $ \frac{\lambda}{2} = \frac{20}{100}\, m $ = 20 cm
|
Construction solutions for Neumann problem with Hénon term in $ \mathbb{R}^2 $
School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
$ \begin{eqnarray*} \left\{ \begin{array}{ll} -\Delta u +u = \lambda |x-q_1|^{2\alpha_1}\cdots |x-q_n|^{2\alpha_n} u^{p-1}e^{u^p},\ \ u>0,\ \ \ & {\rm in}\ \Omega;\\ \frac{\partial u}{\partial\nu} = 0\ \ \ & {\rm on}\ \partial\Omega, \end{array} \right. \end{eqnarray*} $
where $ \Omega $ is a bounded domain in $ \mathbb{R}^2 $, $ q_1,\ldots,q_n\in \Omega $, $ \alpha_1,\cdots,\alpha_n\in(0,\infty)\backslash\mathbb{N} $, $ \lambda>0 $, $ 0< p <2 $, and $ \nu $ denotes the outer unit normal to $ \partial\Omega $; solutions with $ k $ interior bubbles and $ l $ boundary bubbles are constructed for any $ k\geq 1 $ and $ l\geq 1 $.

Keywords: Neumann problem, Hénon term, exponential nonlinearity, interior and boundary bubbling solutions, Lyapunov-Schmidt reduction procedure.

Mathematics Subject Classification: Primary: 35J25; Secondary: 35J08, 35J60.

Citation: Shengbing Deng. Construction solutions for Neumann problem with Hénon term in $ \mathbb{R}^2 $. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4): 2233-2253. doi: 10.3934/dcds.2019094
|
I have some trouble finding the Laplace transform of $f(t)= e^{a|t|}$ $(a\le0)$ from its Fourier transform: $\hat{f}(s)=\int_{-\infty }^{+\infty}f(t)e^{-i2\pi st}~dt= \frac{-1}{a+2\pi is}+\frac{1}{-a+2\pi is} = \frac{-1}{a+i\omega}+\frac{1}{-a+i\omega}$.
I do know that the Fourier transform is the Laplace transform evaluated on the imaginary axis, i.e. $\hat f(s) = \mathcal{L}(f)(p)\big|_{p=i2\pi s} = H(i2\pi s) = H(i\omega)$,

so $H(p)$ should be equal to: $ \frac{-1}{a+p}+\frac{1}{-a+p} $,
which is quite different from the result I got from Wolfram:
Any pointers to my mistake?

Thank you. |
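A numerical sanity check of the question above (my own stdlib-only script; note the bilateral integral only converges for $|\operatorname{Re}\,p| < -a$, which is one hint that the one-sided transform Wolfram computes is a different object):

```python
from math import exp

a, p = -1.0, 0.3   # bilateral transform converges for |Re p| < -a

def integral(f, lo, hi, n=100000):
    """Composite midpoint rule."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# split exp(a*|t|) * exp(-p*t) at t = 0; tails truncated at |t| = 60
right = integral(lambda t: exp(a * t) * exp(-p * t), 0.0, 60.0)
left = integral(lambda u: exp(a * u) * exp(p * u), 0.0, 60.0)  # t = -u
H = -1 / (a + p) + 1 / (-a + p)   # formula derived above, ~2.1978
```

The numeric sum `left + right` matches `H`, so the bilateral-transform formula itself is consistent with the Fourier computation.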
Part 1 of (9)
##\phi: R \to S## is a ring epimorphism. Define ##I:= \phi^{-1}(J)##. It is well known that the inverse image of an ideal is an ideal, thus ##I## is an ideal.
Define ##\psi: R/I \to S/J: [r] \mapsto [\phi(r)]##. This is well defined: if ##[r] = [r']## then ##r - r' \in I##, so ##\phi(r) - \phi(r') \in J## and ##[\phi(r)] = [\phi(r')]##. Clearly, this is also a ring morphism. For injectivity, assume ##[\phi(r)] = 0##; then ##\phi(r) \in J##, so ##r \in \phi^{-1}(J) = I##, thus ##[r] = 0##. The kernel is trivial and the map is injective. Surjectivity follows immediately from surjectivity of ##\phi##. It follows that ##\psi## is an isomorphism, and thus ##R/I \cong S/J##.
Last edited: |
I am implementing a simple Kalman Filter that estimates the heading direction of a robot. The robot is equipped with a compass and a gyroscope.
Say at time $t-dt$, the compass reports a reading $\theta_{t-dt}$, and the gyroscope reports a reading $\omega_{t-dt}$. Then I assume from time $t-dt$ to $t$, the rotation rate can be regarded as a constant. Thus, my current heading direction is $$\theta_{t}=\theta_{t-dt}+\omega_{t-dt}\cdot dt$$ As can be seen, the $\theta$ can be easily time-updated.
But what about my $\omega$? The robot is not under my control, so its rotation rate at the next moment is
unpredictable. How should I do the time update in this case?
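A common way to close this gap (an assumption on my part, not something stated in the question) is to model $\omega$ as a random walk: the time update leaves $\omega$ unchanged, and the process-noise covariance $Q$ expresses that the true rotation rate may drift unpredictably. A minimal numpy sketch of the prediction step:

```python
import numpy as np

dt = 0.1
# State x = [theta, omega]; theta integrates omega, omega is a random walk.
F = np.array([[1.0, dt],
              [0.0, 1.0]])      # time-update (prediction) matrix
Q = np.diag([1e-6, 1e-3])       # process noise; the omega entry encodes
                                # "the rotation rate may change unpredictably"

def predict(x, P):
    x_new = F @ x               # theta <- theta + omega*dt, omega <- omega
    P_new = F @ P @ F.T + Q     # uncertainty grows, especially in omega
    return x_new, P_new

x = np.array([0.5, 0.2])        # current heading and rate estimates
P = np.eye(2) * 0.01
x, P = predict(x, P)
```

The larger the $\omega$ entry of $Q$, the faster the filter discounts stale gyro information in favor of fresh measurements.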
Consider a nonlinear ODE in dimension $10$: $\dot x = f(t,x,\lambda)$, where $\lambda$ is a vector of $p$ parameters.
Consider a boundary value problem of the form :
$\dot x(t) = f(t,x(t),\lambda)$ for $t\in [0,T]$ such that $g(x(0), x(T),\lambda)\le 0$ and $h(x(0),x(T),\lambda)=0$ for some vector-valued functions $g$ and $h$.
Does there exist a numerical scheme to solve this kind of problem?
First, I think I can add the parameters $\lambda$ to the state $x$, that is, consider the augmented state $\tilde x=(x,\lambda)$ with $\dot \lambda = 0$, so that I am dealing with an ordinary BVP rather than a parametric one.
Second, to handle the inequalities, I am thinking of also adding some constants $c$ such that $g(\tilde x(0),\tilde x(T)) + c =0$ and $\dot c=0$.
Then I can apply a Newton type algorithm to find a zero of my shooting function $S(\tilde x,c)$.
I can implement it in Python, Fortran, MATLAB, C++, ...
How serious does this idea sound? Do you have another method, maybe a relaxation of my problem leading to an optimization problem?
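As a proof of concept, the augmentation idea can be prototyped with SciPy in a few lines. This is only a sketch on a toy one-parameter eigenvalue problem (not the 10-dimensional system), using the standard `solve_ivp` and `root` calls; an inequality constraint could be handled with a squared slack, e.g. replacing $g \le 0$ by $g + s^2 = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

# Toy parametric BVP: x'' = -lam * x on [0, T], with
# x(0) = 0, x'(0) = 1 (normalization), and terminal condition x(T) = 0.
# Augment the state with lam and set lam' = 0, as suggested above.
T = 1.0

def rhs(t, y):
    x, v, lam = y
    return [v, -lam * x, 0.0]   # the parameter rides along as a constant state

def shooting_residual(unknowns):
    lam = unknowns[0]
    sol = solve_ivp(rhs, (0.0, T), [0.0, 1.0, lam], rtol=1e-10, atol=1e-12)
    return [sol.y[0, -1]]       # enforce the terminal condition x(T) = 0

res = root(shooting_residual, x0=[8.0])   # Newton-type solve of S = 0
lam_star = res.x[0]                        # analytic answer: pi**2
```

The same pattern scales to the 10-dimensional system: stack $(x,\lambda,c)$ into one state, shoot on the unknown initial values, and return the residuals of $h$ and the slacked $g$.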
I'm working on arbitrary products of sets in topology.
I want to prove that all projections are open. So I take a projection $\Pi_\beta : \Pi X_\alpha \longrightarrow X_\beta$. I'm considering $\Pi X_\alpha$ with the product topology. Then I take a basic open set, say $$A= \Pi_{\alpha_1}^{-1}(A_{\alpha_1})\cap\ldots \cap \Pi_{\alpha_N}^{-1}(A_{\alpha_N}),$$ where each $A_{\alpha_i}$ is an open set in $X_{\alpha_i}$.
Now, I want to show that $\Pi_\beta (A)$ is open.
Suppose that $A \neq \varnothing$. Suppose also that $\beta \neq \alpha_i$ for all $i$.
Now, here's my problem. Intuitively, I can say that $\Pi_\beta (A)=X_\beta$, because $\Pi_\beta (A) = \{f(\beta) \colon f \in \Pi X_\alpha \text{ and } f(\alpha_i) \in A_{\alpha_i} \text{ for all } i \leq N \}$, and since $\beta \neq \alpha_i$ for all $i$, the defining conditions place no restriction on the value at $\beta$.
But I can't formalize this. How can I make this argument more formal?
Pooling equilibrium with no expansion
Let us investigate if the strategy profile {NoExp H, NoExp L} can be supported as a pooling PBE of this signaling game. First, the entrant's beliefs are \( \mu(H|NExp) = p \) after observing no expansion (in equilibrium) and \( \mu(H|Exp) = \mu \in [0,1] \) after observing expansion (off-the-equilibrium). Given these beliefs, after observing no expansion (in equilibrium) the entrant enters if and only if
$$ p\left( \pi_{ent,H}^{D,NE} - F \right) + \left( 1 - p \right)\left( \pi_{ent,L}^{D,NE} - F \right) \geqslant 0, $$
where the right-hand side represents the entrant's profits from staying in the perfectly competitive market (with zero profits). Solving for \( p \), we obtain that the entrant enters if \( p \geqslant \frac{F - \pi_{ent,L}^{D,NE}}{\pi_{ent,H}^{D,NE} - \pi_{ent,L}^{D,NE}} \equiv p^{NE} \). Note that this cutoff is positive and smaller than one, \( 1 > p^{NE} > 0 \), since entry costs, \( F \), satisfy \( \pi_{ent,H}^{D,NE} > F > \pi_{ent,L}^{D,NE} \) by definition. Hence, after observing no expansion (in equilibrium) the entrant enters the market if \( p \geqslant p^{NE} \) and stays out otherwise. Similarly, after observing expansion (off-the-equilibrium), the entrant enters if and only if
$$ \mu \left( \pi_{ent,H}^{D,E} - F \right) + \left( 1 - \mu \right)\left( \pi_{ent,L}^{D,E} - F \right) \geqslant 0. $$
Solving for \( \mu \), we find that the entrant enters if \( \mu \geqslant \frac{F - \pi_{ent,L}^{D,E}}{\pi_{ent,H}^{D,E} - \pi_{ent,L}^{D,E}} \equiv p^{E} \). Note that this cutoff is also positive and smaller than one, \( 1 > p^{E} > 0 \), since \( \pi_{ent,H}^{D,E} > F > \pi_{ent,L}^{D,E} \) is satisfied by definition. Indeed, \( \pi_{ent,H}^{D,NE} = \pi_{ent,H}^{D,E} > F > \pi_{ent,L}^{D,NE} > \pi_{ent,L}^{D,E} \), given that the entrant's profits are not affected by the (unconstrained) high-cost incumbent's decision to expand, \( \pi_{ent,H}^{D,NE} = \pi_{ent,H}^{D,E} \), and the entrant's profits are higher when the low-cost incumbent does not expand than when she does, \( \pi_{ent,L}^{D,NE} > \pi_{ent,L}^{D,E} \). Hence, after observing expansion (off-the-equilibrium) the entrant enters if \( \mu \geqslant p^{E} \) and stays out otherwise. Given the entrant's strategies, let us now analyze the incumbent:
If \( p < p^{NE} \) and \( \mu \geqslant p^{E} \), then the entrant does not enter after observing no expansion (in equilibrium) but enters otherwise. Hence, the low-cost incumbent prefers not to expand (as prescribed) if and only if \( \pi_{inc,L}^{M,NE}\left( \overline q \right) > \pi_{inc,L}^{D,E} - K_L \), or \( K_L > \pi_{inc,L}^{D,E} - \pi_{inc,L}^{M,NE}\left( \overline q \right) \), where \( BCC_L - PLE_L^{E} \equiv \pi_{inc,L}^{D,E} - \pi_{inc,L}^{M,NE}\left( \overline q \right) \). Similarly, the high-cost incumbent does not expand if \( \pi_{inc,H}^{M,NE} > \pi_{inc,H}^{D,E} - K_H \), or \( K_H > \pi_{inc,H}^{D,E} - \pi_{inc,H}^{M,NE} \), where the right-hand side is negative, so the condition holds for any expansion cost \( K_H > 0 \). Thus, the strategy profile in which both types of incumbent do not expand their facility can be supported as a pooling PBE of the signaling game if \( K_L > BCC_L - PLE_L^{E} \), as described in Proposition 1, Part 3a.
If \( p < p^{NE} \) and \( \mu < p^{E} \), then the entrant does not enter after observing any action from the incumbent. Therefore, the low-cost monopolist prefers not to expand (as prescribed) if and only if \( \pi_{inc,L}^{M,NE}\left( \overline q \right) > \pi_{inc,L}^{M,E} - K_L \), or \( K_L > \pi_{inc,L}^{M,E} - \pi_{inc,L}^{M,NE}\left( \overline q \right) \equiv BCC_L \). Similarly, the high-cost incumbent prefers not to expand since \( \pi_{inc,H}^{M,NE} > \pi_{inc,H}^{M,E} - K_H \), or \( K_H > \pi_{inc,H}^{M,E} - \pi_{inc,H}^{M,NE} \), where the right-hand side is negative, which is satisfied for any \( K_H > 0 \). Thus, this strategy profile can be sustained as a pooling PBE of the signaling game if expansion costs satisfy \( K_L > BCC_L \), as described in Proposition 1, Part 3b.
If \( p \geqslant p^{NE} \) and \( \mu < p^{E} \), then the entrant enters after observing no expansion (in equilibrium) but does not enter otherwise. Hence, the low-cost incumbent does not expand (attracting entry) if and only if \( \pi_{inc,L}^{D,NE}\left( \overline q \right) > \pi_{inc,L}^{M,E} - K_L \), or \( K_L > \pi_{inc,L}^{M,E} - \pi_{inc,L}^{D,NE}\left( \overline q \right) \equiv BCC_L + PLE_L^{NE} \). Similarly, the high-cost incumbent does not expand if and only if \( \pi_{inc,H}^{D,NE} > \pi_{inc,H}^{M,E} - K_H \), or \( K_H > \pi_{inc,H}^{M,E} - \pi_{inc,H}^{D,NE} = \pi_{inc,H}^{M,E} - \pi_{inc,H}^{D,E} = PLE_H^{E} \), since \( \pi_{inc,H}^{D,E} = \pi_{inc,H}^{D,NE} \) given that the high-cost incumbent is unaffected by the capacity constraint. Thus, this strategy profile can be supported as a pooling PBE of the signaling game under expansion costs \( K_L > BCC_L + PLE_L^{NE} \) and \( K_H > PLE_H^{E} \), as described in Proposition 2a and Proposition 3.
If \( p \geqslant p^{NE} \) and \( \mu \geqslant p^{E} \), then the entrant enters after observing any action from the incumbent. Therefore, the low-cost incumbent does not expand (as prescribed) if and only if \( \pi_{inc,L}^{D,NE}\left( \overline q \right) > \pi_{inc,L}^{D,E} - K_L \), or \( K_L > \pi_{inc,L}^{D,E} - \pi_{inc,L}^{D,NE}\left( \overline q \right) \equiv BCC_L + PLE_L^{NE} - PLE_L^{E} \). Similarly, the high-cost incumbent does not expand since \( \pi_{inc,H}^{D,NE} > \pi_{inc,H}^{D,E} - K_H \), or \( K_H > \pi_{inc,H}^{D,E} - \pi_{inc,H}^{D,NE} = 0 \), which holds for any \( K_H > 0 \). Thus, this strategy profile can be supported as a pooling PBE for expansion costs \( K_L > BCC_L + PLE_L^{NE} - PLE_L^{E} \) and \( K_H > 0 \), as described in Proposition 2b and Proposition 3.
Separating equilibrium
Let us now consider the separating strategy profile where only the low-cost incumbent expands, i.e., {NoExp H, Exp L}. First, the entrant's updated beliefs become \( \mu(H|NExp) = 1 \) and \( \mu(H|Exp) = 0 \). Given these beliefs, the entrant enters after observing no expansion since \( \pi_{ent,H}^{D,NE} - F > 0 \), or \( \pi_{ent,H}^{D,NE} > F \), which holds by our initial assumptions. On the other hand, after observing expansion the entrant stays out since \( \pi_{ent,L}^{D,E} - F < 0 \), or \( \pi_{ent,L}^{D,E} < F \), which also holds by definition. Therefore, given the entrant's responses, the high-cost incumbent does not expand (as prescribed) if and only if \( \pi_{inc,H}^{D,NE} > \pi_{inc,H}^{M,E} - K_H \), or \( K_H > \pi_{inc,H}^{M,E} - \pi_{inc,H}^{D,NE} \). Since \( \pi_{inc,H}^{M,E} - \pi_{inc,H}^{D,NE} = \pi_{inc,H}^{M,E} - \pi_{inc,H}^{D,E} \equiv PLE_H^{E} \), given that \( \pi_{inc,H}^{D,NE} = \pi_{inc,H}^{D,E} \), we can then conclude that the high-cost incumbent does not expand if \( K_H > PLE_H^{E} \). By contrast, the low-cost incumbent expands (as prescribed) if and only if \( \pi_{inc,L}^{D,NE}\left( \overline q \right) < \pi_{inc,L}^{M,E} - K_L \), or \( K_L < \pi_{inc,L}^{M,E} - \pi_{inc,L}^{D,NE}\left( \overline q \right) \equiv BCC_L + PLE_L^{NE} \). Thus, this strategy profile can be sustained as a separating PBE for expansion costs \( K_H > PLE_H^{E} \) and \( K_L < BCC_L + PLE_L^{NE} \), as described in Proposition 1 (Part 1) and Proposition 3.
For completeness, let us check that the opposite separating strategy profile {Exp H, NoExp L} cannot be supported as a PBE of the signaling game. In this case, the entrant's updated beliefs become \( \mu(H|NExp) = 0 \) and \( \mu(H|Exp) = 1 \). Given these beliefs, the entrant enters after observing expansion since \( \pi_{ent,H}^{D,E} - F > 0 \), or \( \pi_{ent,H}^{D,E} > F \), which holds by definition. However, the entrant does not enter after observing no expansion given that \( \pi_{ent,L}^{D,NE} - F < 0 \), or \( \pi_{ent,L}^{D,NE} < F \), which is satisfied by definition. Given the entrant's responses, the low-cost incumbent does not expand (as prescribed) if and only if \( \pi_{inc,L}^{M,NE}\left( \overline q \right) > \pi_{inc,L}^{D,E} - K_L \), or \( K_L > \pi_{inc,L}^{D,E} - \pi_{inc,L}^{M,NE}\left( \overline q \right) \). On the other hand, the high-cost incumbent expands (as prescribed) if \( \pi_{inc,H}^{M,NE} < \pi_{inc,H}^{D,E} - K_H \), or \( K_H < \pi_{inc,H}^{D,E} - \pi_{inc,H}^{M,NE} = \pi_{inc,H}^{D,E} - \pi_{inc,H}^{M,E} < 0 \) (since \( \pi_{inc,H}^{M,E} = \pi_{inc,H}^{M,NE} \)), which cannot hold for any \( K_H > 0 \). Thus, this strategy profile cannot be supported as a separating PBE of the signaling game.
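To make the two entry cutoffs \( p^{NE} \) and \( p^{E} \) concrete, here is a small numerical sketch. The payoff numbers are invented for illustration and are not from the model; they only need to respect \( \pi_{ent,H} > F > \pi_{ent,L} \) and \( \pi_{ent,L}^{D,E} < \pi_{ent,L}^{D,NE} \):

```python
# Hypothetical entrant duopoly profits (invented for illustration only)
pi_H_NE, pi_L_NE = 10.0, 4.0   # facing high-/low-cost incumbent, no expansion
pi_H_E,  pi_L_E  = 10.0, 2.0   # facing high-/low-cost incumbent, expansion
F = 6.0                        # entry cost, with pi_H > F > pi_L as assumed

p_NE = (F - pi_L_NE) / (pi_H_NE - pi_L_NE)  # cutoff after observing no expansion
p_E  = (F - pi_L_E)  / (pi_H_E  - pi_L_E)   # cutoff after observing expansion

# At p = p_NE the entrant's expected profit from entering is exactly zero:
expected = p_NE * (pi_H_NE - F) + (1 - p_NE) * (pi_L_NE - F)
```

With these numbers \( p^{NE} = 1/3 \) and \( p^{E} = 1/2 \): since expansion lowers the entrant's profit against the low-cost type, a higher prior on the high-cost type is needed before entry becomes attractive.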
I have a stirred-tank reactor with two inlets and one outlet. Several components enter the reactor at inlet 0 and particles at inlet 1. All components from inlet 0 adsorb onto the particles from inlet 1.
I can write a mass balance like:
\begin{equation} C \cdot V_{liquid} + Q \cdot V_{particles} = 0 \end{equation}
C is the concentration in the liquid, Q is the concentration on the particles, and the V's are volumes (or flow rates).
The ODE may look like (k,l,m are parameters):
\begin{equation} \frac{dQ}{dt} = k \, (1 - Q[0])^l \cdot m \end{equation}
Accordingly I can write
\begin{equation} \frac{dC}{dt} = - \frac{V_{particles}}{V_{liquid}} \frac{dQ}{dt} \end{equation}
So I have two unknowns for every component: C[2] and Q[2], which are the concentrations in the reactor (or at the outlet of the reactor). The ODE model adds two equations for every component, therefore the system is not underdetermined.
If I write all equations together, it is an implicit DAE model. However, if I say that the components are mixed instantaneously at time = 0 and adsorption happens afterwards, I can calculate all the initial conditions necessary for an ODE system. It seems to work, but I am not sure whether this trick is a pitfall or not.
Is this assumption correct? Or am I missing something?
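For what it's worth, here is a minimal sketch of the "instantaneous mixing" approach for a single component (all parameter values are placeholders I made up): integrate the explicit ODE system from the post-mixing initial state and check that the mass balance stays satisfied along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (placeholders, not from a real system)
k, l, m = 0.5, 2.0, 1.0
V_liq, V_par = 1.0, 0.2

def rhs(t, y):
    C, Q = y
    dQ = k * (1.0 - Q) ** l * m        # adsorption kinetics
    dC = -(V_par / V_liq) * dQ         # liquid loses what the particles gain
    return [dC, dQ]

# "Mixed instantaneously at t = 0": C starts at the feed value, Q at zero.
y0 = [1.0, 0.0]
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-12)

# The total C*V_liq + Q*V_par should stay constant along the trajectory.
total = sol.y[0] * V_liq + sol.y[1] * V_par
```

Since dC/dt is defined directly from dQ/dt, the balance is conserved by construction, which is why the explicit-ODE reformulation behaves like the original implicit DAE here.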
No.
To my knowledge, the only really serious calculations regarding this scenario are in an article by Alexander Bolonkin and Joseph Friedlander, cited in the current highest-voted answer as a feasibility study of the possibility of destroying the Sun by detonating a nuclear weapon in the Sun's atmosphere, inducing a self-supporting nuclear detonation wave that would subsequently propagate throughout the entire Sun, causing a catastrophic explosion. I think it's an excellent guide with which to show that this idea is
not at all possible, contrary to the claims made. Given that, I'm going to critique its analysis, and therefore the scenario given.
The setup
Let's assume that someone has created a spaceship, placed a nuclear weapon on board, and sent it on a trajectory towards the Sun. They've timed it to detonate in the solar atmosphere; moreover, they've designed shielding that protect it from high temperatures and solar activity like flares and coronal mass ejections. Essentially, we can assume that the payload is delivered successfully and the detonation begins as desired.
If a nuclear weapon was detonated in
any environment, creating a self-sustaining blast wave, the wave would be supported by whatever fusion reactions are favored by the surrounding medium. In other words, the weapon itself doesn't dictate the type of nuclear reactions supporting the blast wave, and the most efficient ones will be chosen. This is something that was studied during the Manhattan project. The scientists were concerned that the first detonation of a nuclear weapon would initiate a self-supporting blast wave that would travel through the atmosphere and oceans, killing all life on the planet.
It's a scary possibility, and naturally, it was modeled in a lot of detail. A number of papers were published on it over the years, including
Ignition of the atmosphere with nuclear bombs. In air, the reactions the physicists were most concerned about involved the fusion of two nitrogen atoms - certainly a possibility, as nitrogen is the most abundant component of the atmosphere. Even though the groups considered the most favorable conditions for sustaining such a blast wave, they found a runaway detonation impossible for reasonably powerful nuclear weapons. I'm sure they thoroughly checked their calculations.
The Sun is largely composed of hydrogen, ionized because of the high temperatures. It generates energy primarily via a form of the proton-proton chain reaction (p-p chain); much higher temperatures would be needed to use reactions found in more massive stars. In particular, a variant called the
p-p I branch is dominant at most temperatures in the solar core. It's reasonable to expect that the same sort of reactions would occur immediately following the detonation of the weapon, provided the required temperatures (10-15 million Kelvin) could be achieved.
Why would a nuclear weapon help?
Setting aside the anomalously hot corona, the Sun's visible surface, the photosphere, has a temperature of about 5800 K. The temperature increases further into the Sun but, with the exception of the core, conditions aren't extreme enough for nuclear fusion to proceed. Bolonkin claims that even in the core, at about 15 million Kelvin, temperatures are low enough that the p-p chain proceeds slowly. He invokes something called the
Coulomb barrier to support his point, claiming that a nuclear weapon could surmount it.
The Coulomb barrier is an extremely well-studied phenomenon, because it's extremely important when fusion is on the verge of happening. Nuclei have a net positive charge, as they're composed of protons. Therefore, any two nuclei will repel each other if brought close together, via the electrostatic force - described by Coulomb's law, which you've probably talked about in an introductory physics course. This repulsion gets stronger the closer together the nuclei get, meaning that it's very, very hard to overcome the force. This is the Coulomb barrier.
The Coulomb barrier is a problem - so big a problem, in fact, that stars shouldn't be able to avoid it. Stellar fusion would be impossible except at extremely high temperatures - over 10 billion Kelvin! Fortunately, there's a way around it: quantum tunneling. Quantum tunneling arises because a particle's position and momentum can never be known exactly, and there is always a probability that a particle will be in a given location. The wavefunction of the particle - a description of how likely it is to be in a certain state - shows that two protons have a probability of being arbitrarily close together, which would normally be forbidden by classical physics.
Bolonkin ignores quantum tunneling, arguing that the merit of a nuclear weapon is that it could temporarily raise temperatures in a small region of the Sun. The higher the temperature, the more likely a particle is to move at higher speeds. Therefore, more protons would be likely to fuse. I've seen the same logic used elsewhere to justify using a nuclear weapon in this scenario. However, the temperatures around a nuclear weapon will only rise to several tens of millions of Kelvin - extremely hot by most standards, but far too cool to help more particles overcome the Coulomb barrier.
The stability conditions
Bolonkin claims that in order for a detonation wave to continue propagating, it must travel faster than the ion speed of sound. He eventually derives what he claims is the criterion for a successful, self-sustaining blast wave:1
$$n\tau>\frac{\gamma zk_BT}{(\gamma^2-1)E\langle\sigma v\rangle}\tag{1}$$
where:
- $n$ is the number density of particles.
- $\tau$ is something equivalent to the confinement time.
- $\gamma$ is the adiabatic index.
- $k_B$ is the Boltzmann constant.
- $T$ is the temperature of the environment.
- $E$ is the energy of the reaction.
- $\langle\sigma v\rangle$ is the mean reaction rate - an average of the product of the collisional cross-section of a proton and the relative velocity of protons.
- $z$ is the charge of the nucleus divided by the fundamental charge.
Bolonkin claims that his condition is superior to the Lawson criterion, which is commonly used in designs of nuclear fusion reactors to determine whether fusion can take place. It's usually derived from a perspective of energy loss: can the reaction, in the given environment, produce more energy than it loses? The Lawson criterion is$$n\tau>\frac{12k_BT}{E\langle\sigma v\rangle}\tag{2}$$which is very similar. The authors seem to imply that Lawson's derivation is inapplicable in a star because, as they claim, there are no energy losses; in a nuclear reactor, on the other hand, energy can be lost to the walls and surrounding environment. Therefore, they conclude, their version is correct. Well, then, let's see how much more favorable their condition is. Bolonkin says that $\gamma$ should be between 1.2 and 1.4, and that $z$ should be set to 1. In the cases where $\gamma=1.2$ and $\gamma=1.4$, we find that$$n\tau>\frac{2.73k_BT}{E\langle\sigma v\rangle},\quad n\tau>\frac{1.46k_BT}{E\langle\sigma v\rangle}$$That's not a huge improvement - lower than Lawson's by a factor of 4 to 8, roughly. We shouldn't get too excited here. It's debatable whether
either criterion holds at all, in fact, as Bolonkin failed to consider energy losses in the photosphere, where the detonation would originate. The upper layers of the Sun's atmosphere are optically thin, meaning that light can travel through them with relative ease. It therefore seems reasonable to worry that energy would be lost rather easily. Slightly more complex formulations of the Lawson criterion account for other sources of energy loss; Bolonkin's clearly does not.
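The factor-of-4-to-8 comparison above is a one-liner to verify (a sketch; the function name is mine):

```python
def bolonkin_coeff(gamma, z=1.0):
    # coefficient multiplying k_B*T / (E * <sigma v>) in equation (1)
    return gamma * z / (gamma ** 2 - 1.0)

lawson_coeff = 12.0  # coefficient in the Lawson criterion, equation (2)

# Ratios of the two coefficients for Bolonkin's quoted range of gamma;
# they land roughly between 4 and 8, as stated in the text.
ratios = {g: lawson_coeff / bolonkin_coeff(g) for g in (1.2, 1.4)}
```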
One form of energy loss that comes to mind is thermal bremsstrahlung. Bremsstrahlung is radiation emitted when one charged particle is accelerated or decelerated by another. Given that after the detonation, we have hot ($\sim10^7$ Kelvin) plasma in an environment that may be optically thin to these x-rays, bremsstrahlung could be an efficient form of energy loss.2
I should note, of course, that the Lawson criterion is usually applied to nuclear reactors, not stars. Therefore, it seems strange that Bolonkin would want to compare his results to Lawson's at all.
The thermostat effect
The Sun is composed mostly of plasma - largely, as I said above, of hydrogen nuclei - protons! The gas obeys the ideal gas law, hopefully another concept you've come across before. The ideal gas law is an
equation of state, meaning that it relates several thermodynamic variables together. Although the law is usually formulated as $PV=nRT$, a sometimes-preferred form in astrophysics is$$P=nk_BT\tag{3}$$where $P$ is pressure, $n$ is number density, and $T$ is temperature. The ideal gas law should hold well in the outer layers, and should be a decent approximation in the core. The big criterion is that the thermal energy be much larger than the energy of interactions between protons, which holds in general. The standard solar model confirms this; the ideal gas law's predictions largely agree.
There are some pretty nice consequences of the ideal gas law. Let's say that temperature in a pocket of the Sun rises, thanks to the rate of nuclear reactions increasing. This should in turn speed up the reaction rate; I said before that higher temperatures are more beneficial to fusion. Well, according to the ideal gas law, if the temperature rises, then either the pressure increases or the density decreases.
It turns out that we should expect $P$ to increase and $n$ to decrease simultaneously. A star supporting itself by nuclear fusion is in hydrostatic equilibrium. The gas pressure trying to expand the star opposes the force of gravity trying to collapse the star. However, if the temperature rises, the pressure will increase. Suddenly, the star is out of equilibrium, and the net force on any layer will be upwards, away from the center. This lowers the density, which in turn lowers the reaction rate and the temperature, bringing the star to equilibrium again. This is sometimes informally referred to as the solar thermostat. This prevents runaway nuclear reactions, for the most part.
The quantity $\langle\sigma v\rangle$ is often approximated as a power law in temperature. That is, $\langle\sigma v\rangle\propto T^\eta$, where $\eta$ is a constant. For the p-p chain, the temperature dependence is small relative to other reactions (like the CNO cycle). In particular, we can say that $\eta=4$.3
If we plug this into either version of the criterion, we find that$$n\tau>CT^{-3}$$where $C$ is a constant depending on which criterion you've chosen. Therefore, at lower temperatures, $n\tau$ must be greater, making it harder and harder for fusion to occur as the temperature drops. Again, this assumes that both criteria are valid; even if they are, the risk of a runaway detonation is non-existent.
Astronomical events
It turns out we can look to the skies to think about naturally-occurring events that are similar to the scenario you describe. First, there are examples of solar activity, including solar flares and coronal mass ejections. The energy released in these events can range from $\sim10^{20}$ Joules to $\sim10^{25}$ Joules. However, the Tsar Bomba (the most powerful nuclear weapon ever detonated) released only $\sim10^{17}$ Joules. Given that solar flares regularly release thousands of times as much energy in the photosphere - the target region of detonation - without any catastrophic problems, I think we can consider the risk of a runaway detonation triggered by a nuclear weapon to be even lower.
Moving on, consider helium flashes. These are believed to occur in low-mass red giants (less than 2 solar masses). As hydrogen fusion ceases in the core of a star (while continuing further out), the core falls out of hydrostatic equilibrium, and the star begins to contract. This raises temperatures until matter in the core becomes degenerate. Degenerate matter does
not obey the ideal gas law,4 and so cannot fight back against rising temperatures. Eventually, runaway fusion begins via the triple-alpha process, at temperatures around 100 million Kelvin. However, even under such conditions, the matter soon becomes non-degenerate. Thermal pressure returns, the ideal gas law applies again, and the star is in hydrostatic equilibrium once more. Helium flashes are much more powerful than solar flares, coming in at around $\sim10^{41}$ Joules. You can read more about the instabilities involved in these detailed slides.
The thermostat mechanism is
not applicable in objects composed solely of degenerate matter, like white dwarfs. This often has dire consequences; if matter is transferred onto the surface of a white dwarf and it heats up, runaway fusion can occur, usually involving carbon and oxygen. The result is a nova, which leaves much of the star intact, or a Type Ia supernova, which may destroy the white dwarf or turn it into a neutron star or black hole. Type Ia supernovae usually release $\sim10^{44}$ Joules of energy - although this is a byproduct of a successful detonation, not the cause of it.
Numerical simulations have been done of the propagation of detonation waves through white dwarfs. One result is that detonations have the potential to turn into deflagration waves, which are less catastrophic. This has been studied a lot in pure fluid dynamics, but it's interesting to know that instabilities can quench possible detonations in white dwarfs - I'll try to pull up an article on some examples. It makes me wonder whether, even if I'm wrong about everything above, this hypothetical detonation could falter into a deflagration, thereby saving the Sun from destruction.
However, even in extremely catastrophic situations, a non-degenerate star like the Sun can stabilize itself against runaway fusion reactions. A red giant could survive a helium flash, which at first seems extremely devastating. There's no way that a puny nuclear weapon could overcome the mighty thermostat effect. In short, if you're trying to blow up the Sun, I'd recommend turning your efforts elsewhere. Bolonkin and Friedlander are, simply put, wrong.
Footnotes
1. His notation is non-standard and unclear, and includes unnecessary terms for unit conversions. I've standardized it here for clarity, and fixed a typo or two he made.
2. The power radiated by thermal bremsstrahlung is proportional to $T^{1/2}$.
3. We call the case where $\eta=4$ weakly temperature-dependent because some fusion reactions in slightly more massive stars involve $\eta=17$ or $\eta=20$!
4. White dwarfs and matter supported by electron degeneracy obey one of two main equations of state. For the ideal gas law, $P\propto\rho T$, where $\rho$ is density. White dwarfs obey $P\propto\rho^{5/3}$ (non-relativistic) or $P\propto\rho^{4/3}$ (relativistic), depending on the regime. In both cases, there is no temperature dependence.
Revista Matemática Iberoamericana
Volume 15, Issue 1, 1999, pp. 93–116 DOI: 10.4171/RMI/251
Published online: 1999-04-30
The angular distribution of mass by Bergman functions
Donald E. Marshall (1) and Wayne Smith (2)
(1) University of Washington, Seattle, USA
(2) University of Hawai‘i at Mānoa, Honolulu, USA
Let $\mathbb D = \{z : |z| < 1\}$ be the unit disk in the complex plane and denote by $d\mathcal A$ two-dimensional Lebesgue measure on $\mathbb D$. For $\epsilon > 0$ we define $\Sigma_\epsilon = \{z : |\arg z| < \epsilon\}$. We prove that for every $\epsilon > 0$ there exists a $\delta > 0$ such that if $f$ is analytic, univalent and area-integrable on $\mathbb D$, and $f(0) = 0$, then $$\int _{f^{-1}(\Sigma_\epsilon)} | f | \, d\mathcal A > \delta \int_\mathbb D | f | \, d\mathcal A.$$ This problem arose in connection with a characterization by Hamilton, Reich and Strebel of extremal dilatations for quasiconformal homeomorphisms of $\mathbb D$.
Marshall, Donald E. and Smith, Wayne: The angular distribution of mass by Bergman functions. Rev. Mat. Iberoam. 15 (1999), 93-116. doi: 10.4171/RMI/251
A brief description of the 18 electron rule
A valence shell of a transition metal contains the following: 1 $s$ orbital, 3 $p$ orbitals and 5 $d$ orbitals, i.e. 9 orbitals that can collectively accommodate 18 electrons (as either bonding or nonbonding electron pairs). This means that the combination of these nine atomic orbitals with ligand orbitals creates nine molecular orbitals that are either metal-ligand bonding or non-bonding, and when a metal complex has 18 valence electrons, it is said to have achieved the same electron configuration as the noble gas of that period.
In some respects, it is similar to the octet rule for main group elements, something you might be more familiar with, and thus it may be useful to bear that in mind. So in a sense, there's not much more to it than "electron bookkeeping".
As already mentioned in the comments, the 18-electron rule is more useful in the context of organometallics.
Two methods are commonly employed for electron counting:
Neutral atom method: the metal is taken to be in the zero oxidation state for counting purposes
Oxidation state method: we first determine the oxidation state of the metal by considering the number of anionic ligands present and the overall charge of the complex
I think this website does a good job of explaining this: http://www.ilpi.com/organomet/electroncount.html (plus, they have some practice exercises towards the end)
Let's just focus on Neutral Atom Method (quote from the link above)
The major premise of this method is that we remove all of the ligands from the metal, but rather than take them to a closed shell state, we do whatever is necessary to make them neutral. Let's consider ammonia once again. When we remove it from the metal, it is a neutral molecule with one lone pair of electrons. Therefore, as with the ionic model, ammonia is a neutral two electron donor.
But we diverge from the ionic model when we consider a ligand such as methyl. When we remove it from the metal and make the methyl fragment neutral, we have a neutral methyl radical. Both the metal and the methyl radical must donate one electron each to form our metal-ligand bond. Therefore, the methyl group is a one electron donor, not a two electron donor as it is under the ionic formalism. Where did the other electron "go"? It remains on the metal and is counted there. In the covalent method, metals retain their full complement of d electrons because we never change the oxidation state from zero; i.e. Fe will always count for 8 electrons regardless of the oxidation state and Ti will always count for four.
Ligand Electron Contribution (for neutral atom method)
a. Neutral Terminal (eg. $\ce{CO}, \ce{PR_3}, \ce{NR_3}$) : 2 electrons
b. Anionic Terminal (eg. $\ce{X^-}, \ce{R_2P^-}, \ce{RO^-}$) : 1 electron
c. Hapto Ligands (eg. $ \eta^2-\ce{C_2R_4}, \eta^1-\text{allyl}$): Same as hapticity
d. Bridging neutral (eg. $\mu_2-\ce{CO}$) : 2 electrons
e. Bridging anionic, no lone pairs (eg. $\mu_2-\ce{CH_3}$): 1 electron
f. Bridging anionic, with 1 lone pair (eg. $\mu_2-\ce{Cl}, \mu_2-\ce{OR}$): 3 electrons; with 2 lone pairs (eg. $\mu_2-\ce{Cl}$): 5 electrons
g. Bridging alkyne: 4 electrons
h. Linear NO: 3 electrons
i. Bent NO (lone pair on nitrogen): 1 electron
j. Carbene (M=C): 2 electrons
k. Carbyne (M≡C): 3 electrons
Determining # Metal-Metal bonds
Step 1: Determine the total valence electrons (TVE) in the entire molecule (that is, the number of valence electrons of the metal plus the number of electrons from each ligand and the charge) -- I'll call this T (T for total, I'm making this up).
Step 2: Subtract this number from $n × 18$, where $n$ is the number of metals in the complex, i.e. $(n × 18) – T$ -- call this R (R for result, nothing fancy).
Step 3: (a) R divided by 2 gives the total number of M–M bonds in the complex. (b) T divided by n gives the number of electrons per metal.
If the number of electrons per metal is 18, it indicates that there is no M–M bond; if it is 17 electrons, it indicates that there is 1 M–M bond; if it is 16 electrons, it indicates that there are 2 M–M bonds, and so on.
At this point, let's apply this method to a few examples
a) Tungsten Hexacarbonyl (picture)
Let's use the neutral atom method: W has 6 electrons, the carbonyls donate 12 electrons, and we get a total of 18. Of course there can be no metal–metal bonds here.
(b) Tetracobalt dodecacarbonyl (picture), here let's figure out the no. of metal–metal bonds. T is 60 (4 × 9 from the cobalts plus 12 × 2 from the carbonyls), R is 12, the total # of M–M bonds is 6, and the # of electrons per metal is 15, so 3 M–M bonds per metal.
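Since the rule is mostly electron bookkeeping, the T/R procedure is easy to script. Here is a minimal sketch (the function name and argument layout are my own invention; the electron counts are supplied by the caller), checked against the two examples:

```python
# A sketch of the T/R bookkeeping for counting M-M bonds.
def mm_bond_count(metal_electrons, ligand_electrons, charge=0, n_metals=1):
    """Return (T, R, total M-M bonds, electrons per metal)."""
    T = sum(metal_electrons) + sum(ligand_electrons) - charge
    R = n_metals * 18 - T
    total_mm_bonds = R // 2          # (a) R/2 = total number of M-M bonds
    per_metal = T // n_metals        # (b) T/n = electrons per metal
    return T, R, total_mm_bonds, per_metal

# Tungsten hexacarbonyl W(CO)6: W contributes 6, each CO donates 2.
print(mm_bond_count([6], [2]*6))                 # (18, 0, 0, 18)

# Tetracobalt dodecacarbonyl Co4(CO)12: each Co contributes 9.
print(mm_bond_count([9]*4, [2]*12, n_metals=4))  # (60, 12, 6, 15)
```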
A few examples where the "18 electron Rule" works
I.
Octahedral Complexes with strong $\pi$ - acceptor ligands
eg. $\ce{[Cr(CO)_6]}$
Here, $t_{2g}$ is strongly bonding and is filled and, $e_g$ is strongly antibonding, and empty. Complexes of this kind tend to obey the 18-electron rule irrespective of their coordination number. Exceptions exist for $d^8$, $d^{10}$ systems (see below)
II. Tetrahedral Complexes
e.g. $\ce{[Ni(PPh_3)_4]}$ ($\ce{Ni^0}$, $d^{10}$, an 18-electron complex)
Tetrahedral complexes cannot exceed 18 electrons because there are no low-lying MOs that can be filled to obtain tetrahedral complexes with more than 18 electrons. In addition, a transition metal complex with the maximum of 10 d-electrons will receive 8 electrons from the ligands and end up with a total of 18 electrons.
Violations of 18 electron rule
I. Bulky ligands (eg. $\ce{Ti(\text{neopentyl})_4}$ has 8 electrons): Bulky ligands prevent a full complement of ligands from assembling around the metal to satisfy the 18 electron rule.
Additionally, for early transition metals, (e.g in $d^0$ systems), it is often not possible to fit the number of ligands necessary to reach 18 electrons around the metal. (eg. tungsten hexamethyl, see below)
II. Square Planar $d^8$ complexes (16 electrons) and Linear $d^{10}$ complexes (14 electrons)
For square planar complexes, $d^8$ metals with 4 ligands give 16-electron complexes. This is commonly seen with metals and ligands high in the spectrochemical series.
For instance, $\ce{Rh^+}, \ce{Ir^+}, \ce{Pd^2+}, \ce{Pt^2+}$ are square planar. Similarly, $\ce{Ni^2+}$ can be square planar, with strong $\pi$-acceptor ligands.
Similarly, $d^{10}$ metals with 2 ligands give 14-electron complexes. Commonly seen for $\ce{Ag^+}, \ce{Au^+}, \ce{Hg^{2+}}$.
III. Octahedral Complexes which disobey the 18 electron rule, but still have fewer than 18 electrons (12 to 18)
This is seen with second and third row transition metal complexes high in the spectrochemical series of metal ions, with $\sigma$-donor or $\pi$-donor ligands (low to medium in the spectrochemical series). $t_{2g}$ is non-bonding or weakly anti-bonding (because the ligands are either $\sigma$-donor or $\pi$-donor), and $t_{2g}$ usually contains 0 to 6 electrons. On the other hand, the $e_g$ orbitals are strongly antibonding, and thus are empty.
IV. Octahedral Complexes which exceed 18 electrons (12 to 22)
This is observed in first row transition metal complexes that are low in the spectrochemical series of metal ions, with $\sigma$-donor or $\pi$-donor ligands. Here, the $t_{2g}$ is non-bonding or weakly anti-bonding, but the $e_g$ orbitals are only weakly antibonding, and thus can contain electrons. Thus, 18 electrons may be exceeded.
References:
The following weblinks proved useful to me while I was writing this post, (especially handy for things like MO diagrams)
http://www.chem.tamu.edu/rgroup/marcetta/chem462/lectures/Lecture%203%20%20excerpts%20from%20Coord.%20Chem.%20lecture.pdf
http://classes.uleth.ca/201103/chem4000b/18%20electron%20rule%20overheads.pdf
http://web.iitd.ac.in/~sdeep/Elias_Inorg_lec_5.pdf
http://www.ilpi.com/organomet/electroncount.html
http://www.yorku.ca/stynes/Tolman.pdf
and obviously, the wikipedia page is a helpful guide https://en.wikipedia.org/wiki/18-electron_rule |
This question already has an answer here:
Show using the Poisson distribution that
$$\lim_{n \to +\infty} e^{-n} \sum_{k=1}^{n}\frac{n^k}{k!} = \frac {1}{2}$$
By the definition of the Poisson distribution, if in a given interval the expected number of occurrences of some event is $\lambda$, the probability that exactly $k$ such events happen is $$ \frac {\lambda^k e^{-\lambda}}{k!}. $$ Let $\lambda = n$. Then the probability that the Poisson variable $X_n$ with parameter $\lambda$ takes a value between $0$ and $n$ is $$ \mathbb P(X_n \le n) = e^{-n} \sum_{k=0}^n \frac{n^k}{k!}. $$ If $Y_i \sim \mathrm{Poi}(1)$ and the random variables $Y_i$ are independent, then $\sum\limits_{i=1}^n Y_i \sim \mathrm{Poi}(n) \sim X_n$, hence the probability we are looking for is actually $$ \mathbb P\left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P( Y_1 + \dots + Y_n \le n) = \mathbb P(X_n \le n). $$ By the central limit theorem, the variable $\frac {Y_1 + \dots + Y_n - n}{\sqrt n}$ converges in distribution towards the Gaussian distribution $\mathscr N(0, 1)$. The point is, since the Gaussian has mean $0$ and I want to know when it is less than or equal to $0$, the variance doesn't matter; the result is $\frac 12$. Therefore, $$ \lim_{n \to \infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \lim_{n \to \infty} \mathbb P(X_n \le n) = \lim_{n \to \infty} \mathbb P \left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P(\mathscr N(0, 1) \le 0) = \frac 12. $$
Hope that helps, |
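If you want to watch the convergence numerically without $e^{-n}$ underflowing for large $n$, each term of the partial sum can be evaluated in log space; a quick sketch:

```python
# Numerical check of the limit: P(X_n <= n) for X_n ~ Poi(n),
# with each term n^k/k! * e^{-n} computed in log space to avoid under/overflow.
import math

def poisson_cdf_at_n(n):
    # e^{-n} * sum_{k=0}^{n} n^k / k!
    return sum(math.exp(k*math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, poisson_cdf_at_n(n))   # approaches 1/2 like O(1/sqrt(n))
```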
After reading an identity involving an integral related to special functions, I considered a different integral by experimenting with the Wolfram Alpha online calculator.
Example. For the input int_0^1 (-log u)^s/u^s du the online calculator said that it is equal to $$(1-s)^{-s-1}\Gamma(s+1)$$ for $-1<\Re s<1$.
Thus we take here $s=\sigma+it$ the complex variable, and the logarithmic function, as you see, is the natural logarithm. I've refreshed my knowledge of complex analysis integration, so I know the most relevant theorems: the Cauchy formula and the Residue Theorem. I also know representations of the Gamma function, and facts about the theory of that function.
Question. How can one prove the previous identity? That is, can you tell us how to prove rigorously that $$\int_0^1\frac{(-\log u)^s}{u^s}du=\frac{\Gamma(s+1)}{(1-s)^{s+1}}$$ holds for $-1<\sigma<1$? Then I can be encouraged to ask more questions about complex analysis. Thanks in advance.
I don't know if my question is obvious from the definition of the Gamma function; you can take this approach if such is the case, but I would like to see, if the calculations are feasible, an answer from complex integration theory.
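Not a proof, but a quick numerical sanity check for real $s \in [0,1)$: the substitution $t=-\log u$ turns the integral into $\int_0^\infty t^s e^{-(1-s)t}\,dt$, which can be compared against $\Gamma(s+1)/(1-s)^{s+1}$ with a simple composite Simpson rule (the truncation point and panel count below are ad hoc choices of mine):

```python
# Compare the substituted integral to Gamma(s+1)/(1-s)^(s+1) for real s >= 0.
import math

def lhs(s, T=60.0, N=100_000):
    # composite Simpson on [0, T] for the integrand t^s e^{-(1-s)t}
    h = T / N
    g = lambda t: t**s * math.exp(-(1.0 - s)*t)
    acc = g(0.0) + g(T)
    for i in range(1, N):
        acc += (4.0 if i % 2 else 2.0) * g(i*h)
    return acc * h / 3.0

def rhs(s):
    return math.gamma(s + 1.0) / (1.0 - s)**(s + 1.0)

for s in (0.0, 0.3, 0.6):
    print(s, lhs(s), rhs(s))   # the two columns agree to ~1e-5
```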
Let $A_k$ denote $D^2$ with $k\geq 0$ disjoint open disks removed. For $k=0$, the answer is positive by Brouwer's fixed point theorem. For $1\leq k\neq 2$, it's not difficult to see that the answer is negative: by arranging the disks in a symmetrical way, I can apply certain rotations to obtain self maps with no fixed point, e.g. if $k=4$, I can set one disk at the center, and position the other three in an equilateral triangle around the center; then a 120 degree rotation is a map from $A_4$ to itself with no fixed point. This sort of arrangement can always be done for $k\neq 2$. Note that 120 degree rotation of $A_4$ can be extended to a continuous map of $D^2$ to itself.
The tricky case is $k=2$, which is where I'm stuck. My thinking here is to consider any possible extension
$$\bar f:D^2\rightarrow D^2\ |\ \bar f|_{A_2}=f.$$
Since $\bar f$ is continuous, it must map each of the two missing disks to either itself or to the other. In the case where $\bar f$ maps one disk to the other, then no fixed points lie in either of the disks (since they are disjoint), hence, by Brouwer, there must be a fixed point of $\bar f$ lying in $A_2$, which will be a fixed point of $f$. In the case where $\bar f$ maps the two disks to themselves, then by applying Brouwer to the restriction of $\bar f$ to each disk, we have that $\bar f$ has at least two fixed points in $D^2$.
If I had holomorphicity to work with, then this would be enough, since a holomorphic map from $D^2$ to itself with two fixed points must be the identity. Am I on the right track, or is there some arrangement of the missing disks and a transformation that I'm just not seeing? |
Initial remark. Suppose someone gives you a continuous function of two variables and asks you to calculate the Taylor series at $(0,0)$. If we happen to notice that the first derivatives vanish at the origin, we will call it a critical point. The idea of the tangent cone is that we are doing exactly this: taking the Taylor series of the simplest continuous function, a polynomial. And if the first derivatives vanish, we call $(0,0)$ a singular point.
We have a curve $X=V(f)\subset \mathbb A^2$ passing through the origin. Even if the origin is a nonsingular point of the curve, you do have a tangent cone at the origin: it is also called the tangent line! (and this is precisely encoded in the corresponding first terms of the "Taylor series" of $f$).
However, the tangent cone is constructed starting from the
leading form of $f\in k[x,y]$, which is the homogeneous form $\tilde f$ of smallest degree appearing in the decomposition of $f$; it is again in two variables. For instance, the leading form of $f=2x+y-8y^2x$ is $\tilde f=2x+y$, the term of smallest degree. The tangent cone is the zero set $V(\tilde f)$. So the origin is nonsingular if and only if $f$ has leading form of degree one. In that case, $V(\tilde f)$ is exactly the tangent line.
In general, the leading form will be a product of linear factors, each appearing with some exponent. So the scheme $V(\tilde f)$ is a union of lines, where some of them are possibly nonreduced.
If $f$ starts, say, with degree $2$, then for instance $(0,0)$ will be a node in case the (degree $2$) leading form is a product of two distinct linear forms, i.e. something of the kind $$(ax+by)\cdot (cx+dy).$$ If, instead, the leading form has the shape $(ax+by)^2$, something else happens (example: the cusp $f=y^2-x^3$, whose tangent cone is a double line).
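The leading form is easy to extract computationally if a polynomial is stored as a map from exponent pairs to coefficients; a small illustration (this representation is my own ad hoc choice, not standard notation):

```python
# Leading form of a polynomial in k[x,y], stored as {(i, j): coeff}
# for monomials x^i y^j: keep the nonzero terms of smallest total degree.
def leading_form(poly):
    d = min(i + j for (i, j), c in poly.items() if c != 0)
    return {m: c for m, c in poly.items() if sum(m) == d and c != 0}

# f = 2x + y - 8y^2 x  ->  leading form 2x + y (the tangent line)
f = {(1, 0): 2, (0, 1): 1, (1, 2): -8}
print(leading_form(f))            # {(1, 0): 2, (0, 1): 1}

# the cusp f = y^2 - x^3  ->  leading form y^2, a double line
cusp = {(0, 2): 1, (3, 0): -1}
print(leading_form(cusp))         # {(0, 2): 1}
```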
Reference. All this is
beautifully explained in Mumford's The red book of varieties and schemes. |
The numerical answer is $17/27$.
Divide our set of $4$ people into groups of two.
One grouping is $\{A, B\}, \{C,D\}$. There are $2$ other groupings, $\{A, C\}, \{B,D\}$ and $\{A, D\}, \{B,C\}$.
The probability that $A$ and $B$ write each other's names is $\dfrac{1}{9}$. The same applies to $C$ and $D$. Let us compute the probability that both these things happen. It is $\dfrac{1}{81}$.
So the probability that $A$ and $B$ write each other's name,
or that $C$ and $D$ do (or both), is $$\frac{1}{9}+\frac{1}{9}-\frac{1}{81}.$$
We subtract the $1/81$ to avoid "double-counting" the situations where $A$ and $B$ pick each other, and $C$ and $D$ also do. Or else we can think of it as following from the formula $$P(X\cup Y)=P(X)+P(Y)-P(X\cap Y).$$
The same calculation applies to the other two pairings. So we multiply $\dfrac{1}{9}+\dfrac{1}{9}-\dfrac{1}{81}$ by $3$. The result is $2/3-1/27$, which is $17/27$. |
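The answer can also be confirmed by brute force over all $3^4 = 81$ equally likely outcomes; a short sketch:

```python
# Each of 4 people independently writes one of the other 3 names
# (3^4 = 81 equally likely outcomes); count outcomes with a mutual pair.
from fractions import Fraction
from itertools import product

people = range(4)
hits = 0
for picks in product(*[[q for q in people if q != p] for p in people]):
    if any(picks[i] == j and picks[j] == i
           for i in people for j in people if i < j):
        hits += 1

print(hits, Fraction(hits, 3**4))   # 51 17/27
```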
Let $I \subseteq \mathbb{R}$ be an interval and $g: I \to \mathbb{C}$ continuous. Define $f: \mathbb{C} \backslash \overline{Im(g)} \to \mathbb{C}$ by
$f(z) := \int_I \frac{1}{g(x) - z} dx$
(with $\overline{Im(g)}$ being the closure of $Im(g)$.)
I now want to show that $f$ is analytic, and rewrite it into a power series that's (locally) defined for each $z_0 \in \mathbb{C} \backslash \overline{Im(g)}$.
Now I must admit that I don't really know how to get started. I haven't dealt much with analytic functions before. I know that a complex function is by definition analytic iff it can be written as a power series (therefore, by completing the second part of the task, the first one would follow, although I don't really know how I could write the function as one), and iff it is differentiable once (hence differentiable infinitely often).
Therefore, it would also be sufficient for the first part to show that $f$ is differentiable, I think? How do I show that though? I would need to differentiate by $z$, whereas $f$ is defined as an integral with respect to $x$. I'm rather confused by this function. |
I am studying the reverse mode of automatic differentiation.
The reverse mode of automatic differentiation allows the efficient computation of the derivative of a single dependent variable $y$ with respect to as many independent variables $x_i$ as you want. One assigns to each intermediate variable $v$ an adjoint variable $\bar{v}$, which is the derivative of the chosen dependent variable with respect to that subexpression: $$\bar{v} = \frac{\partial y}{\partial v}$$
so the assignment $$y = v_1 \sin(v_2)$$ corresponds to the adjoint assignments
\begin{eqnarray} \bar{v_1} &=& \bar{y} \sin(v_2) \\ \bar{v_2} &=& \bar{y} v_1 \cos(v_2) \end{eqnarray}
where $\bar{y} = 1$ .
I am interested in the situation where you solve a linear system in the program:
$$Ax = b$$
where $y$ might be another function of $x$: $$y(x)$$
According to this tutorial of the
CoDiPack software, the corresponding adjoint statements are
\begin{eqnarray} \bar{A} &=& - \lambda x^T \\ \bar{b} &=& \lambda \\ \end{eqnarray} where $\lambda$ is the solution of the adjoint equation $$A^T \lambda = \bar{x}$$
I found the same algorithm in several other documents, for example here in section 7, Iteration and equation solving.
It is not clear to me how to arrive at these statements. I think the derivation must be similar as in the case where one wants to optimize $w(x)$ where $x$ is subject to the constraint $$Ax = b$$ See for example this document. |
I'm trying to solve a toy model of 1D maxwell equations for a time-varying medium. In this case the permittivity varies as $(1+t)^2$.
My PDE-systems looks like this
$$\frac{\partial}{\partial t}(U(x,t)(1+t)^2)=\frac{\partial}{\partial x}V(x,t)\\ \frac{\partial}{\partial t}V(x,t)=\frac{\partial}{\partial x}U(x,t)$$
with initial conditions
$$U(x,0)=\sin(x)\\V(x,0)=-\cos(x)/2$$
The exact solution on the spatial interval $[0,\pi]$ is
$$U(t,x)=\frac{1}{\sqrt{(1+t)^3}}\cos(\frac{\sqrt{3}\ln(1+t)}{2})\sin(x)$$ $$V(t,x)=\frac{1}{2\sqrt{1+t}}\left(-\cos(\frac{\sqrt{3}\ln(1+t)}{2})+\sqrt{3}\sin(\frac{\sqrt{3}\ln(1+t)}{2})\right)\cos(x)$$
Can Mathematica's DSolve find this solution? I've tried, but I failed. What am I doing wrong? Here's my attempt:
ClearAll[U, V]
sys = {
  D[U[x, t], t] == (D[V[x, t], x] - 2*(1 + t)*U[x, t])/(1 + t)^2,
  D[V[x, t], t] == D[U[x, t], x],
  U[x, 0] == Sin[x],
  V[x, 0] == -Cos[x]/2}
DSolve[sys, {U[x, t], V[x, t]}, {x, t}]
EDIT: the first equation was wrong. Still, I can't find a solution using DSolve.
When we perform a Legendre transform on the connected generate functional $W[J]$ we get the quantum action (or 1PI action)
$$ \Gamma[\phi] = W[J(\phi)] - \int\mathrm{d}^4x\,\phi J,\quad\phi(J)=\frac{\delta W}{\delta J}. $$
Then it can be shown that
$$ \Gamma[\phi] = S[\phi] \mp\frac{1}{2}\log\det\left(\frac{\delta^2S}{\delta\Phi(x)\delta\Phi(y)}\right)_{\Phi=\phi} +\ldots, $$
where $S[\phi]$ is the classical action and the dots represent higher corrections. It is said that the lowest quantum correction (i.e., the term involving $\log\det$) is the result of a resummation of one-loop diagrams.
Why is the $\log\det$ term identified with a one loop correction? I took a look at the proof, but it seems to me that there is no connection to the one loop diagrams at all.
Why is $\frac{\delta^2S}{\delta\Phi(x)\delta\Phi(y)}$ identified with the propagator? Is it the free propagator or the exact propagator (including interactions)?
Is it possible to get the same one loop correction to the action using the Wilson effective action instead? |
Let's denote by $\otimes,\oplus,\ominus$ (I was lazy trying to get circled version of division operator) the floating-point analogs of exact multiplication ($\times$), addition ($+$), and subtraction ($-$), respectively. We'll assume (IEEE-754) that for all of them $$[x\oplus y]=(x+ y)(1+\delta_\oplus),\quad |\delta_\oplus|\le\epsilon_\mathrm{mach},$$where $\epsilon_\mathrm{mach}$ is the machine epsilon giving an upper bound on the relative error due to rounding off.We will also use the following lemma (assuming all $|\delta_i|\le\epsilon_\mathrm{mach}$, and $m$ is not too large) that can be easily proven:$$\prod\limits_{i=1}^{m}(1+\delta_i)=1+\theta(m),\quad |\theta(m)|\le\frac{m\epsilon_\mathrm{mach}}{1-m\epsilon_\mathrm{mach}}$$
Let's define the true function $f$ that operates on real numbers $x,y,z$ as
$$f(x,y,z)=(x\times z)-(y\times z)$$
and two versions of the function implementation in IEEE-compliant floating-point arithmetic as $\tilde{f_1}$ and $\tilde{f_2}$ that operate on floating-point representations $\tilde{x}=x(1+\delta_x),\tilde{y},\tilde{z}$, as follows:
$$\tilde{f_1}(\tilde{x},\tilde{y},\tilde{z})=(\tilde{x}\otimes\tilde{z})\ominus(\tilde{y}\otimes\tilde{z}),$$
$$\tilde{f_2}(\tilde{x},\tilde{y},\tilde{z})=(\tilde{x}\ominus\tilde{y})\otimes\tilde{z}.$$
Error analysis for $\tilde{f_1}$:
$$\begin{aligned}\tilde{f_1}&=\Big(\underbrace{\big(x(1+\delta_x)\times z(1+\delta_z)\big)\big(1+\delta_{\otimes_{xz}}\big)}_{(\tilde{x}\otimes\tilde{z})}-\underbrace{\big(y(1+\delta_y)\times z(1+\delta_z)\big)\big(1+\delta_{\otimes_{yz}}\big)}_{(\tilde{y}\otimes\tilde{z})}\Big)\Big(1+\delta_{\ominus}\Big)\\&=xz(1+\delta_x)(1+\delta_z)(1+\delta_{\otimes_{xz}})(1+\delta_\ominus)-yz(1+\delta_y)(1+\delta_z)(1+\delta_{\otimes_{yz}})(1+\delta_\ominus)\\&=xz(1+\theta_{xz,1})-yz(1+\theta_{yz,1}).\end{aligned}$$Here, $|\theta_{xz,1}|,|\theta_{yz,1}|\le\frac{4\epsilon_\mathrm{mach}}{1-4\epsilon_\mathrm{mach}}$.
Similarly, for $\tilde{f_2}$$$\begin{aligned}\tilde{f_2}&=\Bigg(\Big(\big( x(1+\delta_x)-y(1+\delta_y) \big)\big(1+\delta_{\ominus_{xy}}\big)\Big)\times \Big(z(1+\delta_z)\Big)\Bigg)\Bigg(1+\delta_{\otimes}\Bigg)\\&=xz(1+\delta_x)(1+\delta_z)(1+\delta_{\ominus_{xy}})(1+\delta_\otimes)-yz(1+\delta_y)(1+\delta_z)(1+\delta_{\ominus_{xy}})(1+\delta_\otimes)\\&=xz(1+\theta_{x,2})-yz(1+\theta_{y,2}).\end{aligned}$$Here, $|\theta_{x,2}|,|\theta_{y,2}|\le\frac{4\epsilon_\mathrm{mach}}{1-4\epsilon_\mathrm{mach}}$.
So, for both $\tilde{f_1}$ and $\tilde{f_2}$ we got expressions of the same type, thus I do not see why one implementation would be preferred to another from a numerical point of view (except the fact that $\tilde{f_2}$ performs only 2 floating-point operations, compared to 3 for $\tilde{f_1}$).
Computing the relative error will show, that the problem comes from the fact that $x$ and $y$ can be very close (cancellation).
$$\begin{aligned}\frac{|\tilde{f_1}-f|}{|f|}&=\frac{|xz+xz\theta_{xz,1}-yz-yz\theta_{yz,1}-(xz-yz)|}{|xz-yz|}=\frac{|x\theta_{xz,1}-y\theta_{yz,1}|}{|x-y|}\\&\le\frac{|x|+|y|}{|x-y|}\frac{4\epsilon_\mathrm{mach}}{1-4\epsilon_\mathrm{mach}},\end{aligned}$$$$\begin{aligned}\frac{|\tilde{f_2}-f|}{|f|}&=\frac{|xz+xz\theta_{x,2}-yz-yz\theta_{y,2}-(xz-yz)|}{|xz-yz|}=\frac{|x\theta_{x,2}-y\theta_{y,2}|}{|x-y|}\\&\le\frac{|x|+|y|}{|x-y|}\frac{4\epsilon_\mathrm{mach}}{1-4\epsilon_\mathrm{mach}}.\end{aligned}$$
Slight differences between the $\theta$'s might make one of the two numerical implementations marginally better or worse depending on $x,y,z$. However, I doubt it can be of any significance. The result totally makes sense, because no matter what, if you have to compute $(x-y)$ when $x$ and $y$ are close enough in value (for the precision you work with) using floating-point arithmetic, no scaling will significantly help you: you are already in trouble.
NB: All discussion above assumes no overflow or underflow, i.e. $x,y,z,f(x,y,z)\in\mathbb F_0$, $\mathbb F_0$ being the set of all normal floating-point numbers. |
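To see the shared bound in action, one can evaluate both orderings in double precision against exact rational arithmetic (the sample inputs below are arbitrary, and I take $\epsilon_\mathrm{mach}=2^{-53}$ for IEEE doubles):

```python
# Both orderings of the computation obey the same cancellation-dominated
# relative-error bound  (|x|+|y|)/|x-y| * 4*eps/(1-4*eps).
# Exact reference arithmetic is done with Fraction.
from fractions import Fraction

# true (decimal) inputs; x and y are deliberately close (cancellation)
x_dec, y_dec, z_dec = "1.0000000001", "1.0", "3.7"
xt, yt, zt = float(x_dec), float(y_dec), float(z_dec)  # rounded inputs

f1 = xt*zt - yt*zt     # (x (*) z) (-) (y (*) z)
f2 = (xt - yt)*zt      # (x (-) y) (*) z

x, y, z = Fraction(x_dec), Fraction(y_dec), Fraction(z_dec)
exact = (x - y)*z      # f(x, y, z) in exact arithmetic

eps = 2.0**-53         # |delta| <= eps for round-to-nearest doubles
bound = float((abs(x) + abs(y))/abs(x - y)) * 4*eps/(1 - 4*eps)
rel1 = float(abs(Fraction(f1) - exact)/abs(exact))
rel2 = float(abs(Fraction(f2) - exact)/abs(exact))
print(rel1, rel2, bound)   # both relative errors sit below the same bound
assert rel1 <= bound and rel2 <= bound
```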
Background
For a system consisting of two molecules (monomers or fragments are also used) X and Y, the binding energy is
$$\Delta E_{\text{bind}} = E^{\ce{XY}}(\ce{XY}) - [E^{\ce{X}}(\ce{X}) + E^{\ce{Y}}(\ce{Y})]\label{eq:sherrill-1} \tag{Sherrill 1}$$
where the letters in the parentheses refer to the atoms present in the calculation and the letters in the superscript refer to the (atomic orbital, AO) basis present in the calculation. The first term is the energy calculated for the combined X + Y complex (the dimer) with basis functions, and the next two terms are energy calculations for each isolated monomer with only their respective basis functions. The remainder of this discussion will make more sense if the complex geometry is used for each monomer, rather than the isolated fragment geometry.
The counterpoise-corrected (CP-corrected) binding energy [1] to correct for basis set superposition error (BSSE) [2] is defined as
$$\Delta E_{\text{bind}}^{\text{CP}} = E^{\ce{XY}}(\ce{XY}) - [E^{\ce{XY}}(\ce{X}) + E^{\ce{XY}}(\ce{Y})]\label{eq:sherrill-3} \tag{Sherrill 3}$$
where the monomer calculations are now performed in the dimer/complex basis. Let's explicitly state how this works for the $E^{\ce{XY}}(\ce{X})$ term. The first molecule X contributes nuclei with charges, basis functions (AOs) centered on those nuclei, and electrons that count toward the final occupied molecular orbital (MO) index into the MO coefficient array. There is no reason why additional AOs that are not centered on atoms can't be added to a calculation. Depending on their spatial location, if they're close enough to have significant overlap, they may combine with atom-centered AOs, increasing the variational flexibility of the calculation and lowering the overall energy. Put another way, place the AOs that would correspond to molecule Y at their correct positions, but don't put the nuclei there, and don't consider the number of electrons they would contribute to the total number of occupied orbitals. This means that for the full electronic Hamiltonian
$$\hat{H}_{\text{elec}} = \hat{T}_{e} + \hat{V}_{eN} + \hat{V}_{ee}$$
calculating the electron-nuclear attraction $\hat{V}_{eN}$ term is now different. Considered explicitly in matrix form in the AO basis,
$$\begin{align*}V_{\mu\nu} &= \int \mathop{d\mathbf{r}_{i}} \chi_{\mu}(\mathbf{r}_{i}) \left( \sum_{A}^{N_{\text{atoms}}} \frac{Z_{A}}{|\mathbf{r}_{i} - \mathbf{R}_{A}|} \right) \chi_{\nu}(\mathbf{r}_{i}) \\&=\sum_{A}^{N_{\text{atoms}}} Z_{A} \left< \chi_{\mu} \middle| \frac{1}{r_{A}} \middle| \chi_{\nu} \right>\end{align*}$$
there are now fewer terms in the summation, since the nuclear charges from molecule Y are zero (the atoms just aren't there), but the number of $\mu\nu$ are the same as for the XY complex. This and the $\hat{T}_{e}, \hat{V}_{ee}$ terms aren't really mathematically or functionally different then; this is more to show where the additional basis functions enter, or to show where nuclei appear in the equations [3].
These atoms that don't have nuclei or electrons, only basis functions, are called
ghost atoms. Sometimes you also see the term ghost functions, ghost basis, or ghost {something} calculation. Adding the basis of monomer Y to make the full "dimer basis" means taking the monomer X and including basis functions at the nuclear positions for Y.

Geometry optimization
Now to calculate the molecular gradient, that is, the derivative of the energy with respect to the $3N$ nuclear coordinates. This is the central quantity in any geometry optimization. For the sake of simplicity, consider a steepest descent-type update of the nuclear coordinates$$R_{A,x}^{(n+1)} = R_{A,x}^{(n)} - \alpha \frac{\partial E_{\text{total}}^{(n)}}{\partial R_{A,x}}\label{eq:steepest-descent} \tag{Steepest Descent}$$where $n$ is the optimization iteration number, $\alpha$ is some small step size with units [length]$^2$/[energy], and the last term is the derivative of the total (not just electronic) energy with respect to a change in atom $A$'s $x$-coordinate. Even Newton-Raphson-type updates with approximate Hessians (2nd derivative of the energy with respect to nuclear coordinates, rather than the 1st) need the gradient, so we must formulate it.

Formulation of the energy
We're in a bit of trouble, because we want to replace $E_{\text{total}}$ in the gradient with $E_{\text{total}}^{\text{CP}}$, but all we have is $\Delta E_{\text{bind}}^{\text{CP}}$. The concept of CP correction can still be applied to a total energy, but the BSSE must be removed from each monomer. The BSSE correction itself for each monomer is$$\begin{split}E_{\text{BSSE}}(\ce{X}) &= E^{\ce{XY}}(\ce{X}) - E^{\ce{X}}(\ce{X}), \\E_{\text{BSSE}}(\ce{Y}) &= E^{\ce{XY}}(\ce{Y}) - E^{\ce{Y}}(\ce{Y}),\end{split}\label{eq:2}$$which, when subtracted from $\eqref{eq:sherrill-1}$, gives $\eqref{eq:sherrill-3}$. More correctly, considering that the geometry for each step is at the final cluster geometry and not the isolated geometry, the above is [4]$$\begin{split}E_{\text{BSSE}}(\ce{X}) &= E_{\ce{XY}}^{\ce{XY}}(\ce{X}) - E_{\ce{XY}}^{\ce{X}}(\ce{X}), \\E_{\text{BSSE}}(\ce{Y}) &= E_{\ce{XY}}^{\ce{XY}}(\ce{Y}) - E_{\ce{XY}}^{\ce{Y}}(\ce{Y}).\end{split}\label{eq:sherrill-10} \tag{Sherrill 10}$$
The CP-corrected total energy, i.e. the full dimer energy with the BSSE removed from each monomer, is then$$\begin{split}E_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}} &= E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - E_{\text{BSSE}}(\ce{X}) - E_{\text{BSSE}}(\ce{Y}), \\&= E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - \left[ E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X}) - E_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X}) \right] - \left[ E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y}) - E_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y}) \right].\end{split}\label{eq:sherrill-15} \tag{Sherrill 15}$$Note that I have modified which geometry is used for each monomer in $\eqref{eq:sherrill-15}$. All monomers are calculated at the supermolecule geometry. This is convenient for two reasons: 1. We are only interested in removing the BSSE, not the effect of monomer deformation, and 2. an isolated monomer geometry without deformation doesn't make sense in the context of a geometry optimization. I also added the tilde to signify that the supermolecular/dimer geometry used may not be the final or minimum-energy geometry, as would be the case during a geometry optimization. We simply extract all structures consistently from a given geometry iteration. Perhaps $\ce{XY}(n)$ would be better notation.

Formulation of the gradient
As Pedro correctly states, the differentiation operator is a linear operator. Because there are no products in $\eqref{eq:sherrill-15}$, the total gradient needed for $\eqref{eq:steepest-descent}$ will be a sum of gradients [5]:$$\frac{\partial E_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}}}{\partial R_{A,x}} = \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY})}{\partial R_{A,x}} - \left[ \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X})}{\partial R_{A,x}} - \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X})}{\partial R_{A,x}} \right] - \left[ \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y})}{\partial R_{A,x}} - \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y})}{\partial R_{A,x}} \right],$$so each step of a CP-corrected geometry optimization will require 5 gradient calculations rather than 1. Note that the nuclear gradient should be included for each term as well, which is a trivial calculation.
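The composition itself is trivial linear bookkeeping, which a short sketch makes explicit (all numbers below are made-up placeholders, not results of any real calculation):

```python
# Composing the CP-corrected total energy and gradient from the five
# component calculations: dimer, each monomer in the dimer basis, and
# each monomer in its own basis.
def cp_total(dimer, x_in_dimer_basis, x_in_own_basis,
             y_in_dimer_basis, y_in_own_basis):
    return (dimer
            - (x_in_dimer_basis - x_in_own_basis)
            - (y_in_dimer_basis - y_in_own_basis))

# Placeholder energies: monomers in the dimer basis are lower (BSSE).
e_cp = cp_total(-152.10, -76.04, -76.03, -76.045, -76.04)
print(e_cp)   # ~ -152.085

# Differentiation is linear, so the same composition applies
# component-wise to the 5 gradient vectors (placeholder components).
g_cp = [cp_total(gd, gxd, gx, gyd, gy)
        for gd, gxd, gx, gyd, gy in zip(
            [0.010, -0.020], [0.002, -0.001], [0.001, -0.002],
            [0.003, 0.004], [0.002, 0.001])]
print(g_cp)   # ~ [0.008, -0.024]
```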
Extension to other molecular properties
Although not commonly done, counterpoise correction can be applied to any molecular property, not just energies or gradients. Simply replace $E$ or $\partial E/\partial R$ with the property of interest. For example, the CP-corrected polarizability $\alpha$ of two fragments is$$\alpha_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}} = \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - \left[ \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X}) - \alpha_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X}) \right] - \left[ \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y}) - \alpha_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y}) \right]$$where I believe it now makes even less sense to have each individual fragment calculation not be at the cluster geometry. In papers that calculate CP-corrected properties, no mention is usually made of which geometry the individual calculations are performed at for this reason.
References
1. Boys, S. Francis; Bernardi, F. The calculation of small molecular interactions by the differences of separate total energies. Some procedures with reduced errors. Mol. Phys. 1970, 19, 553-566.
2. Sherrill, C. David. Counterpoise Correction and Basis Set Superposition Error. 2010, 1-6.
3. One implementation note: Most common quantum chemistry packages should allow for the usage of ghost atoms in energy and gradient calculations. However, as Sherrill states, they do not properly allow for composing the full gradient expression to perform CP-corrected geometry optimizations. Gaussian can, and Psi4 may. For programs that can calculate gradients with ghost atoms, Cuby can be used to drive the calculation of CP-corrected geometries and frequencies.
4. There is a typo in the Sherrill paper; the subscripts for all 4 energy terms should be $AB$, which here are $\ce{XY}$.
5. Simon, S.; Bertran, J.; Sodupe, M. Effect of Counterpoise Correction on the Geometries and Vibrational Frequencies of Hydrogen Bonded Systems. J. Phys. Chem. A 2001, 105, 4359-4364.
Division does not have a remainder. The remainder is an artifact (do not use this word on a six-year-old) of an
incomplete division.
When we do long division, a remainder can take place at any stage of the division process. In fact, a remainder keeps it going from one step to the next.
When we divide 529 by 3, what do we do first? We note that 3 "goes into" 5 one time, and from this, there is a remainder of 2.
If we didn't care about accuracy down to the unit, we could just stop right there. We could pad our partial quotient with enough zeros to make $100$, and then append the remaining digits $29$ to the remainder. And thus $529\div 3 = 100, R\ 229$. Surely enough, checking our result, if we multiply $3\times 100$ we get $300$, and if we add $229$ to it, we get $529$.
Of course we know that $529\div 3$
isn't $100$. This is just an approximation which is good to within $100$. The actual number is in fact $100$-something. It's not less than $100$, and it's not $200$ or more.
Now usually we do not do this. We don't stop division at the hundreds or tens to take a funny remainder. We usually stop division at the units, and take the remainder there.
This is because many objects that we work with in the real world cannot be divided beyond the unit. If we want to distribute 13 toys among 4 children, everyone gets three toys, and there is a toy left over. We don't want to divide that toy into four, because it will be destroyed. Thus, we use a form of inexact mathematical division which gives us a model for this real-world constraint: integer division with a remainder.
But non-unit remainders can be useful too. Suppose we are distributing a large number of toys at the wholesale level, and there are 24 toys in a box. Customers must get a whole box; we do not divide boxes. Boxes are only opened in the retail store to sell individual toys.
Now suppose I have $1272$ toys in my warehouse and $5$ stores approach me, all wanting to buy $300$ toys. I don't have $1500$ toys, so I decide to give the customers an equal number of toys based on what I have. Now $1272 \div 5$ gives us $254.4$, or $254\ R\ 2$. Great, each customer can have $254$ toys, with two toys left over I can keep. But wait, the toys are in boxes of $24$, so that can't work! I cannot ship $254$ toys, because that is 10 whole boxes, plus 14 toys from an open box. The calculation has to be done with boxes, not with toys. In fact I have 53 boxes of toys, and I can give the customers 10 boxes each, and have 3 boxes left over. So the remainder is $3\times 24 = 72$ toys. In other words, in a real and useful sense, the remainder of dividing $1272 \div 5$ can be $240$ with remainder of $72$, when
we require the result to be a multiple of 24.
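The warehouse calculation above can be sketched in a few lines of Python (the function name and variables here are just illustrative, using the numbers from the example):

```python
def divide_in_multiples(total, parts, unit):
    """Divide `total` into `parts` equal shares, each a multiple of `unit`;
    return (share, remainder)."""
    units_total = total // unit          # whole boxes available (53)
    share_units = units_total // parts   # boxes per customer (10)
    share = share_units * unit           # toys per customer (240)
    remainder = total - parts * share    # toys left in the warehouse (72)
    return share, remainder

# 1272 toys, 5 customers, boxes of 24: each gets 240 toys, 72 remain.
print(divide_in_multiples(1272, 5, 24))  # (240, 72)
```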
So, on to the question: could there be a remainder in multiplication? The answer is no.
A remainder is a special way of expressing the error in a calculation, peculiar to inexact division. We do not have to use a remainder to express the error in an inexact division: we can simply express division error as the difference between the approximate quotient and the exact one. For instance $17\div 8$ doesn't have to be $2\ R\ 1$; it could just be $2$ with an error of $-1/8$, since the exact quotient is $2\frac{1}{8}$ and $2 - 2\frac{1}{8} = -\frac{1}{8}$. The result is $\frac{1}{8}$ less than the exact result, and so that is the error.
Multiplications cannot have a remainder for two reasons. Firstly, multiplications are usually carried to completion and therefore exact. There is no reason why a multiplication of two integers would be left incomplete, in the way that we can stop a division short and take stock of what is left. Secondly, multiplications can be inexact, when the inputs are fractions (possibly themselves inexact) and we truncate the result to a given number of significant digits. However, when multiplication is inexact, we do not express the error as a remainder.
The concept of "remaining" is peculiar to division. When we divide a number, and make the result slightly smaller so that the division "works out evenly" we have something "left over". This does not apply in multiplication.
Note also that (ordinary) multiplication is commutative, and its two operands play equivalent roles. In $5\times 4$ both values are factors, and it is the same as $4\times 5$. One of the factors is called the multiplicand and the other the multiplier, but they can readily switch roles. But in division $5\div 4$ is not $4\div 5$: dividend and divisor cannot switch roles, and the remainder is closely related to the divisor. So right off the bat we have a conceptual problem with a remainder in multiplication: which of the two factors should be related to the remainder, as the divisor is in division? If we round off the result of a multiplication to some multiple, and choose to express the error as a remainder related to one of the factors, should we choose the multiplicand, or the multiplier? And why?
I wish I could go into details, but I will be straightforward.
a) Is there any way a process involving a photon can be strong?
I suppose you mean "strong" in a phenomenological description of the processes involved. Loosely speaking, the two interactions can look similar if you restrict yourself to discussing conservation of certain quantities, e.g. total isospin, parity, etc.; however, it is the fact that these coincidences can be considered somehow "accidental" that makes them different in the general case. The process you describe is an example of this.
b) Are there types of electromagnetic processes for which G-parity is conserved?
Of course, behind this there are two important facts. The first is:
Non-conservation of a property doesn't mean violation under all circumstances.
while the second is:
G-parity relates a rotation in Isospin space ($R_T$) and charge conjugation ($C$).
Electromagnetic interaction is already $C$-invariant so what you could ask, for G-parity to be conserved in the end, is if the process you're studying is invariant under the Isospin rotation involved. I happened to find your question while looking for something related to a problem in Fayyazuddin's "A modern introduction to particle physics". There you can check that:$$|\pi^{\pm}\rangle \xrightarrow{R_T} |\pi^{\mp}\rangle$$$$|\pi^{0}\rangle \xrightarrow{R_T} |\pi^{0}\rangle$$
since $\eta$ (interaction eigenstate) is a singlet you should have similarly $|\eta\rangle \xrightarrow{R_T} |\eta\rangle$. The photon part may seem trivial but remember that in general a photon may be considered to be a superposition of a $I=0$ and a $I=1$ contribution (do not confuse with EW Isospin). If however you could justify $|\gamma\rangle \xrightarrow{R_T} |\gamma\rangle$ the exercise would be over (this is part of my homework) and G-parity conservation would be a reasonable tool to use. |
I keep reading/hearing that the result of mean-variance optimization is the max-Sharpe-ratio portfolio. That seems to make sense if you fix either the target return or the target risk, but in general it doesn't seem right. For example, with target functions $J_1$ and $J_2$:
$$J_1 = \mu' w - \lambda\, w'\Sigma w$$
$$J_2 = \frac{\mu' w}{\sqrt{w'\Sigma w}}$$
The optimal solutions of $J_1$ and $J_2$ should be very different, because $J_1$ depends on $\lambda$ while $J_2$ does not, not to mention that the derivatives with respect to $w$ are very different.
What am I missing here?
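One way to see what is going on (a numerical sketch, not a full answer): the unconstrained maximizer of $J_1$ is $w^*=\frac{1}{2\lambda}\Sigma^{-1}\mu$, so $\lambda$ only rescales $w^*$, and the Sharpe ratio $J_2$ is invariant under positive scaling of $w$. The returns and covariance below are made up for illustration:

```python
import numpy as np

mu = np.array([0.05, 0.07, 0.03])            # made-up expected returns
Sigma = np.array([[0.04, 0.01, 0.00],        # made-up covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

def sharpe(w):
    return (mu @ w) / np.sqrt(w @ Sigma @ w)

base = np.linalg.solve(Sigma, mu)            # Sigma^{-1} mu
# Different lambdas give differently scaled weights but the same Sharpe ratio.
ratios = [sharpe(base / (2 * lam)) for lam in (0.5, 2.0, 10.0)]
assert np.allclose(ratios, ratios[0])
```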
I am trying to express $\sin(18°)$ in algebraic form using only complex numbers. I know that when I factor $z^5-1.$ I get an expansion that looks like: $(z-1)(z^4+z^3+z^2+z^1+1)$ The exercise then says substitute for $z+\frac{1}{z}$ in the 'long factor' and then somehow derive $\sin(18°)$. I just have no idea how to I am supposed to do this. But knowing how, could teach me something about complex numbers.
I guess you realize that $18^\circ$ is $1/20$ of a circle, so that $a=\sin 18^\circ+i\cos 18^\circ=\cos 72^\circ+i\sin 72^\circ$ is a $5$th root of unity. So $a$ must satisfy $z^{5}=1$.
The polynomial $z^{5}-1$ factors and a bit of thought gets you to the point where you are, that $a$ must satisfy $z^4+z^3+z^2+z+1=0$. So if you can find its roots, then one of the real parts will be $\sin 18^\circ.$
Divide the above equation by $z^2.$
$$z^2+z+1+\frac{1}{z}+\frac{1}{z^2}=0.$$
Let $w=z+\frac{1}{z}$ and note that the 2nd and 4th terms sum to $w$. Also note that $w^2= z^2+2+\frac{1}{z^2}$, so substituting $z^2+\frac{1}{z^2}=w^2-2$ turns the above into
$$w^2+w-1 = 0.$$
Solve this to get
$$w=\frac{-1\pm\sqrt{5}}{2}.$$
For each of these two solutions solve the quadratic equations
$$z+\frac{1}{z} = \frac{-1\pm\sqrt{5}}{2}.$$
Figure out which one is in the 1st quadrant and take its real part.
Start with the fact that $$\zeta=\cos(2\pi/5)+i\sin(2\pi/5)=\sin 18^{\circ}+i\cos 18^{\circ}$$ is a root of $z^5-1=0$ and obviously $\zeta\neq 1$ so that it is a root of $$z^4+z^3+z^2+z+1=0$$ Dividing this by $z^2$ and setting $y=z+z^{-1}$ we have $$y^2+y-1=0$$ On the other hand note that $$\zeta +\zeta^{-1}=2\sin 18^{\circ}$$ and hence the desired value of $\sin 18^{\circ}$ is $y/2$. From quadratic formula $y=(\sqrt{5}-1)/2 $ (the other root is negative) so that $$\sin 18^{\circ}=\frac{\sqrt{5}-1}{4}$$ |
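A quick numerical sanity check of the closed form (nothing deep, just floating point):

```python
import math

# The closed form derived above, sin 18° = (sqrt(5) - 1) / 4.
sin18 = (math.sqrt(5) - 1) / 4
assert abs(math.sin(math.radians(18)) - sin18) < 1e-12

# Its double is the positive root of w^2 + w - 1 = 0 chosen above.
w = 2 * sin18
assert abs(w * w + w - 1) < 1e-12
```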
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides and squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
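The identity behind this (the Brahmagupta–Fibonacci identity) can be checked directly with complex arithmetic; the values of $a,b,c,d$ below are an arbitrary sample:

```python
a, b, c, d = 2, 3, 1, 5                      # so n = 13 and m = 26
z = complex(a, b) * complex(c, d)            # (a+ib)(c+id) = x + iy
x, y = round(z.real), round(z.imag)          # here (-13, 13)
assert x * x + y * y == (a * a + b * b) * (c * c + d * d)  # 338 = 13 * 26
```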
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
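The key step can be illustrated numerically (the primes listed are just a sample, standing in for the supposed complete list):

```python
import math

primes = [2, 3, 5, 7, 11, 13]
K = math.prod(primes) + 1      # 30031, which is composite (59 * 509)
# K need not be prime, but none of the listed primes divide it:
# dividing K by any of them leaves remainder 1.
assert all(K % p == 1 for p in primes)
```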
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
The first player in Hex has a winning strategy.
There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But because $p$ and $q$ are $5$th degree polynomials, they must have zeros. Therefore, $r(x)=p(x)q(x)$ has a zero. A contradiction.
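The final step, that $1+x+\cdots+x^{10}$ has no real zeros, can be confirmed numerically (a sketch using numpy):

```python
import numpy as np

# Coefficients of x^10 + x^9 + ... + x + 1, highest degree first.
roots = np.roots(np.ones(11))
# The roots are the 11th roots of unity other than 1: all strictly complex.
assert len(roots) == 10
assert all(abs(r.imag) > 1e-6 for r in roots)
```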
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles of the same color. Thus, it is no longer possible to cover the remaining area.
(Well, it may be
too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the
rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree.
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
I like the proof that there are infinitely many Pythagorean triples.
Theorem: There are infinitely many integers $x, y, z$ such that $$x^2+y^2=z^2$$ Proof: $$(2ab)^2 + (a^2-b^2)^2 = (a^2+b^2)^2$$
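The identity can be verified exactly with integer arithmetic on a grid of sample values (a trivial check, but it makes the proof tangible):

```python
# Check (2ab)^2 + (a^2 - b^2)^2 == (a^2 + b^2)^2 for a sample of (a, b).
for a in range(2, 20):
    for b in range(1, a):
        assert (2 * a * b) ** 2 + (a * a - b * b) ** 2 == (a * a + b * b) ** 2
```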
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. By Archimedes' theorem the area of a spherical zone depends only on its width, so the projection of each strip has area at most 1/100th of the area of the hemisphere. Hence 99 strips cannot cover the whole hemisphere, and so they cannot cover the disk.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors; there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest odd divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first.
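The pigeonhole claim is easy to brute-force for a small case (here $2n=16$, chosen only to keep the search fast):

```python
from itertools import combinations

# Every 9-element subset of {1, ..., 16} contains a pair where one
# number divides the other; the top half {9, ..., 16} shows that a
# subset of size 8 (exactly half) need not.
def has_multiple_pair(s):
    s = sorted(s)
    return any(b % a == 0 for i, a in enumerate(s) for b in s[i + 1:])

assert all(has_multiple_pair(c) for c in combinations(range(1, 17), 9))
assert not has_multiple_pair(range(9, 17))
```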
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of the sine and cosine functions using Euler's formula:
$e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$
$e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$
$\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$
Thus
$\cos(-\theta) = \cos\theta$
$\sin(-\theta) = -\sin\theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. He tackled it quicker than his peers or his teacher could, $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \space times}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
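The closed form is trivially machine-checkable (a one-liner mirroring the pairing argument):

```python
# Gauss's formula for n = 100: the sum 1 + 2 + ... + 100 equals n(n+1)/2.
n = 100
assert sum(range(1, n + 1)) == n * (n + 1) // 2 == 5050
```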
If $H$ is a subgroup of $(\mathbb{R},+)$ such that $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem follows from noting that modulo a prime $p$ we have, for $a\neq 0$:
$$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$
since multiplication by $a$ permutes the nonzero residues mod $p$. The right side equals $a^{p-1}(p-1)!$, and cancelling the invertible factor $(p-1)!$ gives $a^{p-1}\equiv 1 \pmod p$.
Proposition (No universal set): There does not exist a set which contains all sets (including itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then by the axiom schema of specification one can construct the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: Assuming that one is working in ZF (as almost everywhere :P)
(In particular, this proof impressed me very much the first time I saw it, and it is also very simple.)
Most proofs concerning the Cantor Set are simple but amazing.
It contains no intervals of positive length; its total length (Lebesgue measure) is zero.
It is uncountable.
Every number in the set can be represented in ternary using just the digits 0 and 2; no number that requires a 1 in its ternary representation appears in the set.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge, which is a 3-d extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.
The derivation of first principle of differentiation is so amazing, easy, useful and simply outstanding in all aspects. I put it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ where coordinates are $(x + Δx, y + Δy)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan\theta = \frac{Δy}{Δx} = \frac{(y + Δy) − y}{Δx}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $Δy$ and $Δx$ decrease and approach zero; though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish.
What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $tan θ$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$
The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as:
$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \left[\frac{f(x+Δx)-f(x)}{Δx}\right]$,
which is the required formula.
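A numerical sketch of this limit, with the made-up example $f(x)=x^2$, where the exact slope at $x=3$ is $6$:

```python
# The difference quotient of f(x) = x^2 at x = 3 tends to f'(3) = 6
# as dx shrinks; algebraically each quotient equals 6 + dx.
def diff_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x
quotients = [diff_quotient(f, 3.0, dx) for dx in (1.0, 0.1, 1e-6)]
assert abs(quotients[-1] - 6.0) < 1e-4
```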
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $.
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
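The parity argument can be written out in a few lines (the coordinates are just one way to index the board):

```python
# A knight move always flips the colour of the square, colour((i, j)) =
# (i + j) % 2, so after 63 moves (64 squares, each visited exactly once)
# the tour must end on the opposite colour from the start, while opposite
# corners share the same colour.
colour = lambda sq: (sq[0] + sq[1]) % 2

start, opposite_corner = (0, 0), (7, 7)
moves = 63
end_colour = (colour(start) + moves) % 2   # colour after 63 flips
assert colour(start) == colour(opposite_corner)
assert end_colour != colour(opposite_corner)  # so the tour cannot end there
```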
The eigenvalues of a skew-Hermitian matrix are purely imaginary.
The eigenvalue equation is $A\vec x = \lambda\vec x$, and taking the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide it from both sides, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian. Using the definition for Hermitian or unitary matrices instead yields corresponding statements about the eigenvalues of those matrices.
I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep. |
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
Cantor's Diagonalization Argument, proof that there are infinite sets that can't be put one to one with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, with a list of infinite sequences, a sequence formed from taking the diagonal numbers will not be in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides are squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of allprimes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously notin $G$. Otherwise, noneof its prime factors are in $G$. Conclusion: $G$ is notthe set of allprimes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
The first player in Hex has a winning strategy.
There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof:Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But because $p$ and $q$ are $5$th degree polynomials, they must have zeros. Therefore, $r(x)=p(x)q(x)$ has a zero. A contradiction.
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine, you remove two tiles, from two opposite corners of the original square. Prove that is is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles with the
samecolor. Thus, it can no longer be possible to cover the remaining area.
(Well, it may be
too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the
rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17),\,$ We can iterate this process with the new points $\,B,C,D\,$ doing the same we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree
$\qquad\qquad$
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (-3,4,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
I like the proof that there are infinitely many Pythagorean triples.
Theorem:There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof:$$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips on a semi-sphere on top of the disk. The projection of each strip would have area at most 1/100th of the area of the semi-sphere.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first.
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of the sine and cosine functions using Euler's formula:
$e^{-i\theta} = \cos(-\theta) + i\sin(-\theta)$
$e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos\theta + i\sin\theta} = \frac {\cos\theta - i\sin\theta} {\cos^2\theta + \sin^2\theta} = \cos\theta - i\sin\theta$
Equating the two expressions for $e^{-i\theta}$:
$\cos(-\theta) + i\sin(-\theta) = \cos\theta + i(-\sin\theta)$
Thus
$\cos(-\theta) = \cos\theta$
$\sin(-\theta) = -\sin\theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. He tackled it quicker than his peers or his teacher could, $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \space times}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
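Gauss's closed form is easy to sanity-check in a few lines (my addition; the function name `triangular` is just illustrative):

```python
def triangular(n):
    # Gauss's closed form for 1 + 2 + ... + n
    return n * (n + 1) // 2

# matches the direct sum, including his famous schoolroom case
assert triangular(100) == sum(range(1, 101)) == 5050
```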
If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem follows from noting that, modulo a prime $p$, we have for $a\neq 0$:
$$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$
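This identity, and the cancellation that yields Fermat's little theorem, can be checked numerically for a sample prime (my sketch; the values of `p` and `a` are arbitrary choices):

```python
from math import prod

p, a = 13, 5
# left side: (p-1)! mod p; right side: the same factors each multiplied by a
lhs = prod(range(1, p)) % p
rhs = prod(k * a % p for k in range(1, p)) % p
assert lhs == rhs          # multiplying by a merely permutes the residues mod p

# cancelling (p-1)!, which is invertible mod p, leaves a^(p-1) = 1 (mod p)
assert pow(a, p - 1, p) == 1
```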
Proposition (No universal set): There does not exist a set which contains all sets (even itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then by the axiom schema of specification one can construct the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: Assuming that one is working in ZF (as almost everywhere :P)
(This proof really impressed me the first time I saw it, and it is also very simple.)
Most proofs concerning the Cantor Set are simple but amazing.
Its total length (Lebesgue measure) is zero.
It is uncountable.
Every number in the set can be represented in ternary using just 0 and 2. No number with a 1 in it (in ternary) appears in the set.
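A small membership test based on this ternary criterion (my sketch; it reads off base-3 digits naively, so endpoints with two ternary expansions, such as $1/3 = 0.1 = 0.0222\ldots$, would need separate handling):

```python
from fractions import Fraction

def in_cantor(x, depth=30):
    # read off ternary digits of x in [0,1]; a digit 1 means the point
    # fell in a removed middle third at that stage
    x = Fraction(x)
    for _ in range(depth):
        d = int(3 * x)      # next ternary digit
        if d == 1:
            return False
        x = 3 * x - d
    return True

assert in_cantor(Fraction(1, 4))      # 0.020202... in ternary: in the set
assert in_cantor(Fraction(2, 3))      # 0.2 in ternary: in the set
assert not in_cantor(Fraction(1, 2))  # 0.111... in ternary: removed
```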
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge which is a 3d extension of the Cantor set
simultaneously exhibits an infinite surface area and encloses zero volume.
The derivation of differentiation from first principles is amazing: easy, useful, and simply outstanding in all respects. I put it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f (x)$ whose coordinates are $(x, y)$ and another point $Q$ where coordinates are $(x + Δx, y + Δy)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan θ = \frac{Δy}{Δx} = \frac{(y + Δy) - y}{Δx}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $Δy$ and $Δx$ decrease and approach zero; though their ratio $\frac{Δy}{Δx}$ will not necessarily vanish.
What happens to the line $PQ$ as $Δy→0$, $Δx→0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $\tan θ$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim_{Δx→0} \frac{Δy}{Δx} = \lim_{Δx→0} \frac{(y+Δy)-y}{Δx}$
The limit of the ratio $Δy/Δx$ as $Δx$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f (x)$ and $y + Δy = f (x + Δx)$, we can write the definition of the derivative as:
$\frac{dy}{dx}=\frac{d{f(x)}}{dx} = \lim_{Δx→0} \left[\frac{f(x+Δx)-f(x)}{Δx}\right]$,
which is the required formula.
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $.
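The resulting sandwich $1 < n^{1/n} < 1 + 3n^{-1/2}$ is easy to spot-check numerically (my addition):

```python
import math

# the Bernoulli argument above gives 1 < n**(1/n) < 1 + 3/sqrt(n) for n >= 2
for n in range(2, 5001):
    assert 1 < n ** (1.0 / n) < 1 + 3 / math.sqrt(n)
```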
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
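The colour-flip argument can be stated in a few lines of code (my sketch; squares are indexed by (row, column) from a corner):

```python
# Every knight move changes row+column by an odd amount,
# so each move flips the colour of the square the knight stands on.
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
assert all((dx + dy) % 2 == 1 for dx, dy in moves)

def colour(square):
    row, col = square
    return (row + col) % 2

start, opposite = (0, 0), (7, 7)   # two opposite corners of an 8x8 board
assert colour(start) == colour(opposite)             # same colour
# touching all 64 squares takes 63 moves, i.e. 63 colour flips:
assert (colour(start) + 63) % 2 != colour(opposite)  # ends on the wrong colour
```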
The eigenvalues of a skew-Hermitian matrix are purely imaginary.
The eigenvalue equation is $A\vec x = \lambda\vec x$, and taking the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can cancel it from both sides, leaving $\lambda = -\lambda^*$, so $\lambda$ is purely imaginary. The second-to-last step uses the definition of skew-Hermitian. Using the definition of Hermitian or unitary matrices instead yields corresponding statements about their eigenvalues.
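For a concrete check (my illustration, using an arbitrary 2x2 skew-Hermitian matrix and the quadratic formula for its characteristic polynomial):

```python
import cmath

# A = [[i*a, b], [-conj(b), i*c]] with real a, c is skew-Hermitian: A* = -A
a, c = 2.0, -3.0
b = 1.0 + 4.0j
A = [[1j * a, b], [-b.conjugate(), 1j * c]]

# eigenvalues of a 2x2 matrix from its trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eig1, eig2 = (tr + disc) / 2, (tr - disc) / 2

for lam in (eig1, eig2):
    assert abs(lam.real) < 1e-9   # purely imaginary, as claimed
```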
I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep. |
Before we begin, HAVE YOU WATCHED THE VIDEO “ANYONE CAN QUANTUM”??? Paul Rudd, Keanu Reeves, Stephen Hawking, Quantum Chess, Quantum Physics for Babies, and even tardigrades: this video has it all!
Made by our colleagues from the Institute for Quantum Information and Matter at Caltech, this clip has masterfully shared with almost two million people all around the world the same message that I have been trying to spread with these blog posts: anyone can understand quantum mechanics! In their video, Keanu Reeves tells us that “Paul Rudd changed the world by showing the world that anyone can grapple with the concepts of quantum mechanics. It sparked an era of invention and ingenuity the likes of which humanity had never seen.” There is a lot of truth in that: when everyone believes they can understand nature at its most fundamental level, we can accomplish amazing things.
Of course, it’s one thing to claim that people can understand quantum mechanics, but it’s something else entirely to help people actually do it. That’s the job I started with my previous posts and that I am going to continue today. Let’s go!
In the previous installment of this series, we learned the basic postulates of quantum mechanics. In other words, we learned what quantum mechanics is. In this final part of the series, we are going to shift gears and study what quantum mechanics implies about our universe. What is possible in a quantum world that can’t be done in a classical one? What is new, what is different? These are important questions and today we’ll learn some of the answers!
My job, for instance, could be loosely described as studying the implications of quantum mechanics for communication, cryptography and thermodynamics. In particular, in this lesson we’ll learn about two important aspects of quantum mechanics that are not present in classical theories: Heisenberg’s uncertainty principle and entanglement. Uhu!
Lesson 1: The uncertainty principle
In our previous lesson, we used a quantum coin as an example of a quantum system, whose state could be \(|Heads\rangle\), \(|Tails\rangle\) or any superposition of these two states. Today, we are going to be more general and instead we are going to think of a system with two possible configurations which we call \(|0\rangle\) and \(|1\rangle\). Using this notation is great because it’s shorter to write (which is always appreciated) and because it is more general: we don’t really need to be talking about a quantum coin, it could be any system with two degrees of freedom. The word we use for such an object is a qubit, in analogy with a classical bit, which is any system that can be in states 0 or 1.
Remember that in quantum mechanics we can have superpositions, so we are also going to define two other important states of a qubit
\[|+\rangle=\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle \]
\[|-\rangle=\frac{1}{\sqrt{2}}|0\rangle - \frac{1}{\sqrt{2}}|1\rangle \]
Notice that the states \(|+\rangle \) and \(|-\rangle \) are both an equal superposition of the states \(|0\rangle \) and \(|1\rangle \). Notice also that the states \(|0\rangle \) and \(|1\rangle \) are equal superpositions of \(|+\rangle \) and \(|-\rangle \) since we can write
\[|0\rangle=\frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle \]
\[|1\rangle=\frac{1}{\sqrt{2}}|+\rangle - \frac{1}{\sqrt{2}}|-\rangle \]
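These basis-change identities are easy to verify numerically, representing \(|0\rangle\) and \(|1\rangle\) as the unit vectors (1, 0) and (0, 1) (my sketch, not part of the original post):

```python
import math

s = 1 / math.sqrt(2)
ket0, ket1 = (1.0, 0.0), (0.0, 1.0)

# |+> = (|0> + |1>)/sqrt(2),  |-> = (|0> - |1>)/sqrt(2)
plus = (s * (ket0[0] + ket1[0]), s * (ket0[1] + ket1[1]))
minus = (s * (ket0[0] - ket1[0]), s * (ket0[1] - ket1[1]))

# and conversely |0> = (|+> + |->)/sqrt(2),  |1> = (|+> - |->)/sqrt(2)
recon0 = (s * (plus[0] + minus[0]), s * (plus[1] + minus[1]))
recon1 = (s * (plus[0] - minus[0]), s * (plus[1] - minus[1]))

assert all(abs(x - y) < 1e-12 for x, y in zip(recon0, ket0))
assert all(abs(x - y) < 1e-12 for x, y in zip(recon1, ket1))
```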
Do you remember how to define a measurement in quantum mechanics from the previous lesson? The only thing we have to do is to ask systems what states they are in. In this lesson, we are going to focus on two special measurements of a qubit. We’ll call the question “Are you in state \(|0\rangle\) or \(|1\rangle\)?” a Z measurement. Similarly, we’ll call the question “Are you in state \(|+\rangle\) or \(|-\rangle\)?” an X measurement. Why do we call them that? Well, there’s a relatively complicated historical reason behind it, but this is the terminology that scientists use and I want you to be familiar with these terms.
Now let’s suppose that we have a qubit in the state \(|0\rangle\) and we want to measure it. If we make a Z measurement, we know for sure that the outcome will be “I’m in state \(|0\rangle\)”. However, because of the laws of quantum mechanics that we learnt last time, if we make an X measurement, half of the time we’ll obtain the outcome “I’m in state \(|+\rangle\)” and the other half we’ll get the outcome “I’m in state \(|-\rangle\)”. In other words, for this state, we don’t have any uncertainty about the outcome of a Z measurement, but we have maximum uncertainty about the outcome of an X measurement. See where I’m going?
What happens if instead we start with a qubit in the state \(|+\rangle\)? You guessed it, the situation is reversed! In this case, we don’t have any uncertainty about the outcome of an X measurement, but we have maximum uncertainty about the outcome of a Z measurement! It turns out that no matter what state we start with, there will always be some uncertainty in at least one of these two measurements. That is Heisenberg’s uncertainty principle.
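The Born rule makes this concrete: the probability of an outcome is the squared overlap of the state with the answer state. A small check of the Z/X asymmetry described above (my illustration, not from the original post):

```python
import math

s = 1 / math.sqrt(2)
ket0, ket1 = (1.0, 0.0), (0.0, 1.0)
plus, minus = (s, s), (s, -s)

def prob(outcome, state):
    # Born rule |<outcome|state>|^2 for real two-component vectors
    return (outcome[0] * state[0] + outcome[1] * state[1]) ** 2

assert prob(ket0, ket0) == 1.0                  # Z measurement on |0>: certain
assert abs(prob(plus, ket0) - 0.5) < 1e-12      # X measurement on |0>: 50/50
assert abs(prob(minus, ket0) - 0.5) < 1e-12
assert abs(prob(plus, plus) - 1.0) < 1e-12      # X measurement on |+>: certain
assert abs(prob(ket0, plus) - 0.5) < 1e-12      # Z measurement on |+>: 50/50
```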
More precisely, the uncertainty principle states that for virtually any two measurements we can make on any system – let’s call them measurement A and measurement B – it holds that
Uncertainty(A)+Uncertainty(B)>0.
In other words, no matter what state the system is in, there exist pairs of measurements whose outcomes cannot both be predicted perfectly. This never happens classically! In a classical world, if we know the state of a system perfectly, in principle we can predict everything about its future behaviour, including the results of any two measurements. But in a quantum world, there is a fundamental limitation to our ability to predict the outcomes of measurements: most of the time, there will always be some uncertainty about which outcomes we’ll see. The only exception to this rule occurs when both A and B are said to commute, but most pairs of measurements don’t have that property.
Many of you are probably thinking, “Wait, didn’t the uncertainty principle have something to do with the position and momentum of a particle?”
Well, the uncertainty principle applies to measurements of position and momentum as well: we can never predict the outcome of both measurements perfectly. In other words, in our universe, for any measurement X of the position of a particle and any measurement P of its momentum, it holds that
Uncertainty(X)+Uncertainty(P)>0.
The uncertainty principle tells us something very deep about our ability to obtain information from physical systems. In many ways, it sets a fundamental limit to our capability to make predictions and to perform precise measurements. This has HUGE implications. To name a few, the uncertainty principle is the reason why quantum states cannot be cloned, why empty space is not really empty, and why quantum cryptography is possible. That’s the beauty of our quantum world!
Before our next lesson, you can take a break and admire this picture that my wife took of the Gardens by the Bay in Singapore, the city where we now live.
Lesson 2: Entanglement
So far in our discussion of quantum mechanics we have focused on single systems: a single quantum coin, a single quantum die, a single qubit. But what happens if we combine systems together? In particular, what happens if we have two qubits instead of one?
The first thing we have to understand is how to represent the states of two qubits. Turns out that all we have to do is to “stick them together”. If one qubit is in state \(|0\rangle\) and the other is in state \(|1\rangle\), then we represent the joint state of both qubits as \(|0\rangle |1\rangle\). Easy! Mathematicians call this operation “taking the tensor product”; I prefer to use the term “sticking them together”: it gets the point across.
Other examples of possible states of two qubits are
\[|1\rangle |0\rangle\]
\[|+\rangle |1\rangle\]
\[|-\rangle |+\rangle\]
\[(\frac{3}{5}|0\rangle +\frac{4}{5}|1\rangle) |0\rangle\]
You get the idea. Notice that in each of these examples, it is straightforward to identify what state each of the two individual qubits is in. For instance, for the state \(|-\rangle |+\rangle\), it is clear that the first qubit is in state \(|-\rangle\) and the second qubit is in state \(|+\rangle\).
Now comes the interesting part. Remember that in quantum mechanics we can have superpositions of different states. Hopefully many of you are already realizing that much of the magic of the quantum world comes solely from superposition: it is one of the defining properties that makes quantum mechanics such a beautiful and rich theory. For example, in quantum mechanics, a system of two qubits can be in the state
\[\frac{1}{\sqrt{2}}(|0\rangle |1\rangle + |1\rangle |0\rangle) \]
Does this state look special to you? If not, then let me ask you a couple of questions: what state is the first qubit in? What state is the second qubit in? Think about it for a while.
So, what’s the answer? That’s right: they don’t have a definite state! In fact, if we perform any measurement on either of the two qubits, we will always get a completely random outcome. Thus, this peculiar state has the intriguing property that even though we know the state of both qubits perfectly, we are completely ignorant of the state of each individual qubit. Mind-blowing, isn’t it?
Any state that cannot be written in the form \(|state1\rangle|state2\rangle\) is called entangled, where \(|state1\rangle\) is some state of the first qubit and \(|state2\rangle\) is some state of the second qubit. You can check for yourself that indeed the state
\[\frac{1}{\sqrt{2}}(|0\rangle |1\rangle + |1\rangle |0\rangle) \]
which from now on we’ll call \(|\Psi\rangle\), cannot be written in this form and is therefore an entangled state.
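One standard way to run this check (my addition, not from the post): arrange the four amplitudes of \(c_{00}|0\rangle|0\rangle + c_{01}|0\rangle|1\rangle + c_{10}|1\rangle|0\rangle + c_{11}|1\rangle|1\rangle\) in a 2x2 matrix; the state factorises as \(|state1\rangle|state2\rangle\) exactly when that matrix has determinant zero.

```python
import math

s = 1 / math.sqrt(2)

def is_entangled(c):
    # c[i][j] is the amplitude of |i>|j>; determinant zero <=> product state
    return abs(c[0][0] * c[1][1] - c[0][1] * c[1][0]) > 1e-12

psi = [[0.0, s], [s, 0.0]]                  # (|0>|1> + |1>|0>)/sqrt(2)
product_state = [[0.6, 0.0], [0.8, 0.0]]    # (3/5 |0> + 4/5 |1>) |0>

assert is_entangled(psi)
assert not is_entangled(product_state)
```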
The Centre for Quantum Technologies (CQT), where I now work as a research fellow, organized a mini-competition last year to coin a new way of referring to entanglement to replace the popular “spooky action at a distance”, which I dislike (more on that in a few minutes). The winning entry was “Mutual existence”, which was chosen by writer George Musser and CQT professor Christian Kurtsiefer. You can read more about it and other entries here. In Musser’s words “I like 'mutual existence' because it captures the principle that entangled particles behave as a single unified system, with global properties that do not reside on either particle, or even derive from them.” Now you know what he means! The joint state of two entangled systems is perfectly defined, but in such a way that their individual states are not. Beautiful!
Now what happens if we measure one of the qubits in an entangled state? Well, we know we’ll get some outcome, but as you might have guessed, because the state of each individual qubit is not well defined, no matter what measurement we make, we’ll always obtain a random answer. If we measured the first qubit of state \(|\Psi\rangle\) by asking “are you in state \(|0\rangle\) or in state \(|1\rangle\)?” we’ll obtain each possible answer with 50% probability. But notice something amazing: because quantum mechanics always gives consistent answers, if we then measure the second qubit we know what outcome we’ll obtain! If the outcome of the measurement of qubit 1 was “I’m in state \(|0\rangle\)” then for sure we’ll obtain outcome “I’m in state \(|1\rangle\)” when we measure qubit 2, since \(|\Psi\rangle\) was an equal superposition of \(|0\rangle|1\rangle\) and \(|1\rangle|0\rangle\). Moreover, this is true no matter how far apart the qubits are from each other.
Many people were frightened by this realization: the state of qubit 2 is initially not well-defined, but as soon as we measure qubit 1, we immediately know the state of qubit 2. This is what led Einstein to call this effect “spooky action at a distance”. But as you’ll see, it’s not spooky and it’s not action at a distance.
Following the argument of the great John Stewart Bell, imagine there is a person that always wears socks of different colours. In Bell’s case, this was his friend, Reinhold Bertlmann. On a given day, it was impossible to predict what sock he would wear on each foot. However, if you got a glimpse at one of his socks then you immediately knew that the other sock must be of a different colour. Sounds familiar?
So you see, there is nothing quantum about objects being correlated in this way: even if their states are uncertain, their shared properties may allow us to make inferences about one of them from knowledge of the other. Here’s what’s quantum about entangled states: this powerful correlation remains no matter what measurements we make!
Once again, in quantum mechanics, we have superpositions, so we can ask a richer class of questions. In the case of the entangled state \(|\Psi\rangle\), we could ask the first qubit “Are you in state \(|+\rangle\) or in state \(|-\rangle\)?” You can check for yourselves (or trust me on this) that we can equivalently write \(|\Psi\rangle\) as
\[|\Psi\rangle=\frac{1}{\sqrt{2}}(|+\rangle |+\rangle - |-\rangle |-\rangle)\]
so now we know that the outcome of the same measurement on qubit 2 will always be the same as for qubit 1. This correlation between several different measurements is not possible to achieve classically: entangled states have much stronger correlations. That’s the reason that my personal entry for the mini competition was this:
Quantum correlations are stronger than classical ones and they lead to a myriad of applications, like randomness generation, quantum cryptography and quantum teleportation. Perhaps most importantly, as shown by Bell in the 1960s, the properties of entangled states have taught us that we cannot understand the world as being one in which the outcomes of all events have been pre-established and where signals cannot travel faster than light: at least one of these two principles does not hold in our universe.
I hope you have enjoyed this trip across the quantum world. My honest hope is to have given you an understanding of the basic concepts of quantum mechanics and, most importantly, to have ignited a desire to learn more about this most beautiful of theories. |
Let $X = [0,1]^{[0,1]}$ be equipped with the product topology. The overarching task here is to show that the set $K = \{f\in X : \exists Y-\text{countable }(\forall x\in[0,1]\backslash Y) f(x) = 0\}$ is sequentially compact. My general strategy is as follows:
Let $\{f_n\}_n$ be a sequence in $K$. Define $$A := \{x\in[0,1] : (\exists n\in\mathbb{N})(f_n(x) \neq 0)\}$$ $A$ is at most countable, so let $\{x_n\}_n$ be an enumeration of $A$. $\{f_n(x)\}_n$ is bounded for each $x\in A$, so (for each $x\in A$) there exists a subsequence $\{f_{n_i}\}_i$ such that $\{f_{n_i}(x)\}_i$ is convergent. From here I want to construct a convergent subsequence of $\{f_n\}_n$ using $\{x_n\}_n$ by doing something like the following:
Let $\{f_{n_i}\}^1_i$ be a subsequence of $\{f_n\}_n$ such that $\{f_{n_i}(x_1)\}_i$ is convergent.
Let $\{f_{n_i}\}^{k+1}_i$ be a subsequence of $\{f_{n_i}\}^k_i$ such that $\{f_{n_i}(x_{k+1})\}^{k+1}_i$ is convergent.
Define $\{f_{n_i}\}_i = \bigcap^\infty_{k=1}\{f_{n_i}\}^k_i$.
Hopefully the idea is clear even if my notation is messy. The obvious worry is that $\{f_{n_i}\}_i$ may be empty. So could the subsequence, as defined, be empty? If so, is there any way to carry out the strategy and get a nonempty subsequence that is pointwise convergent? |
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box.
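The box construction being described is just the Euclidean algorithm; a direct sketch (my illustration of the chat's description):

```python
def divide(a, b):
    # the "division box": returns q, r with a = b*q + r and 0 <= r < b
    return a // b, a % b

def gcd(a, b):
    # feed the pair in; if r = 0 we are done and return b,
    # otherwise loop with the divisor and remainder
    while b != 0:
        q, r = divide(a, b)
        a, b = b, r
    return a

assert gcd(252, 105) == 21
```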
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and writing $\operatorname{det}(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of row?
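A small experiment (my sketch, not a proof) that at least checks row-independence numerically before attempting the argument: expand along each row and compare.

```python
def det_along_row(A, i):
    # cofactor expansion of det(A) along row i
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row i and column j
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** (i + j) * A[i][j] * det_along_row(minor, 0)
    return total

A = [[2, 0, 1], [1, 3, 5], [4, 1, 2]]
values = [det_along_row(A, i) for i in range(3)]
assert values[0] == values[1] == values[2] == -9   # all rows agree
```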
Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
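In the plane this is concrete: composing the reflections across lines at angles $t_1$ and $t_2$ gives the rotation by $2(t_2-t_1)$. A numerical check (my illustration; the angles are arbitrary choices):

```python
import math

def reflection(t):
    # matrix reflecting across the line through the origin at angle t
    return [[math.cos(2 * t), math.sin(2 * t)],
            [math.sin(2 * t), -math.cos(2 * t)]]

def rotation(t):
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 1.1
R = matmul(reflection(t2), reflection(t1))
expected = rotation(2 * (t2 - t1))
assert all(abs(R[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```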
Why is the evolute of an involute of a curve $\Gamma$ equal to $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? It looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be a natural thing to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
The infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $(\nabla_X s)(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point.
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
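Whether the claimed closed form holds depends on how $\Gamma(2, \frac{2}{\lambda})$ is parameterized. A quick Monte Carlo is an easy sanity check; the sketch below (helper name made up) reads it in the shape–scale convention, where $\mathbb{E}[X_i]=4/\lambda$ and $\operatorname{Var}(X_i)=8/\lambda^2$, so the standard moment identity gives $\mathbb{E}[\bar X^2]/2 = 4/(n\lambda^2)+8/\lambda^2$:

```python
import random

def mc_mean_sq_half(lam, n, trials=100_000, seed=1):
    """Monte Carlo estimate of E[ ((X_1+...+X_n)/n)^2 / 2 ] with
    X_i ~ Gamma(shape=2, scale=2/lam) (shape-scale convention)."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        xbar = sum(random.gammavariate(2, 2 / lam) for _ in range(n)) / n
        total += xbar * xbar / 2
    return total / trials

lam, n = 2.0, 5
# Shape-scale moments: E[X] = 4/lam, Var(X) = 8/lam^2, so
# E[xbar^2]/2 = (Var(xbar) + E[xbar]^2)/2 = 4/(n*lam^2) + 8/lam^2.
analytic = 4 / (n * lam**2) + 8 / lam**2
estimate = mc_mean_sq_half(lam, n)
```

Comparing `estimate` against whichever closed form you believe in settles the parameterization question quickly.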
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
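For what it's worth, the brute-force reasoning is easy to machine-check. Here is a small self-contained sketch (pure Python, permutations in one-line image notation, all names invented here) that computes the closure of a generating set inside $S_4$ and confirms a subgroup of every order dividing 24:

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations of {0,1,2,3} in one-line notation
    return tuple(p[q[i]] for i in range(4))

def closure(gens):
    # Generate the subgroup of S_4 spanned by `gens` by repeated multiplication.
    G = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in G for b in G} - G
        if not new:
            return G
        G |= new

t   = (1, 0, 2, 3)   # transposition (1 2)
c3  = (1, 2, 0, 3)   # 3-cycle (1 2 3)
c3b = (0, 2, 3, 1)   # 3-cycle (2 3 4)
c4  = (1, 2, 3, 0)   # 4-cycle (1 2 3 4)
s   = (2, 1, 0, 3)   # transposition (1 3)

orders = {
    1:  len(closure([])),
    2:  len(closure([t])),
    3:  len(closure([c3])),
    4:  len(closure([c4])),
    6:  len(closure([t, c3])),     # <(1 2), (1 2 3)> = S_3
    8:  len(closure([c4, s])),     # a dihedral Sylow 2-subgroup
    12: len(closure([c3, c3b])),   # two 3-cycles generate A_4
    24: len(closure([t, c4])),     # adjacent transposition + 4-cycle = S_4
}
```

So each divisor of 24 is hit, with the order-8 subgroup realized as the dihedral group of the square, matching the cube/rotation picture below.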
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
After much research and work, I wrote a little explanation (not that the other answers weren't good, they just weren't well written for someone who didn't know about thermodynamics and other concepts …):
The below results were determined experimentally, but this explanation gives some insight into why they are the way they are. Whilst many factors affect the probability of a reaction occurring, and thus the rate, the effect of concentration can be quite easily determined. Consider the following reaction:
$$\ce{a X + b Y -> c Z}$$
The probability of it occurring can be broken down into the probability of the particles reacting in a space, and the probability of them being in a space, thus:
$$P(reaction)=P(\ce{a X} \in \Delta V)\times P(\ce{b Y} \in \Delta V) \times P(\ce{X\bond{->}Y} \in \Delta V)$$
Wherein the probability $P$ of an atom being in certain volume $\Delta V$ is multiplied with the final probability of a collision (here displayed as $\ce{X\bond{->}Y}$) happening in said volume. The probability of a molecule being in a set space can be determined from its concentration:
$$C=\frac nV$$
$$P(\ce{1X} \in \Delta V)= C \times 6.02 \times 10^{23} \times 10^3 \frac{L}{m^3} \times \Delta V$$
This equation assumes that $\Delta V$ is in cubic meters and concentration is in $\mathrm{mol/L}$, however this may not be the case. The main concept is that the probability of one particle being present in some area is proportional to the concentration and some scaling factor:
$$P(\ce{X} \in \Delta V)=K_S×C(\ce{X})$$
Based on basic probability, it is known that the probability of $a$ particles being in the volume equals the probability of one particle being there raised to the power $a$. Thus, assuming the scaling constant is $K_S$, and that the probability of a reaction occurring at the given temperature in the given space if particles are present is equal to $K_R$, the probability of a reaction occurring based on concentration is as shown below:
$$P(reaction)=K_R K_S^{a+b} [\ce{X}]^a [\ce{Y}]^b$$
If a reversible reaction is considered, the overall direction can be determined by finding the ratio between the rates of the reactions in each direction. Thus the ratio of the forward reaction to the backward reaction for the below reaction can be identified. (Note the constants on the top and bottom are different.)
$$\ce{a W + b X <=> c Y + d Z}$$
$$K_\mathrm{tot}= \frac{K_{R\ce{A}} K_S^{a+b} [\ce{W}]^a [\ce{X}]^b}{K_{R\ce{B}} K_S^{c+d} [\ce{Y}]^c [\ce{Z}]^d }$$
Dynamic equilibrium is the state for a reversible reaction, in which the rate of both the forward and backward direction is equal and thus the overall change is zero. In this case, the ratio would equal one.
$$1 = \frac{K_{R\ce{A}} K_S^{a+b} [\ce{W}]^a [\ce{X}]^b}{K_{R\ce{B}} K_S^{c+d} [\ce{Y}]^c [\ce{Z}]^d }$$
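This balance can be checked numerically: integrating mass-action kinetics for a toy reversible reaction $\ce{A + B <=> C}$ (forward constant $k_f$, backward $k_r$; illustrative values, not from any table) drives the concentration quotient to the constant ratio of the rate constants:

```python
def equilibrate(kf, kr, A, B, C, dt=1e-3, steps=200_000):
    # Explicit Euler integration of d[C]/dt = kf*[A][B] - kr*[C]
    for _ in range(steps):
        net = (kf * A * B - kr * C) * dt
        A -= net
        B -= net
        C += net
    return A, B, C

kf, kr = 2.0, 0.5
A, B, C = equilibrate(kf, kr, A=1.0, B=1.5, C=0.0)
Q = C / (A * B)   # concentration quotient for this direction
K = kf / kr       # predicted value at dynamic equilibrium
```

At equilibrium the forward and backward rates cancel, so $Q$ settles at $k_f/k_r$ regardless of the starting concentrations.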
It can be easier however, to simplify repetitive calculations to just the concentrations, and create a new constant from all the constants shown:
$$\frac{K_{R\ce{B}} K_S^{c+d}}{K_{R\ce{A}} K_S^{a+b} }=\frac{[\ce{W}]^a [\ce{X}]^b}{[\ce{Y}]^c [\ce{Z}]^d }$$
This is the equilibrium constant, although it does change with temperature as the average kinetic energy of particles changes. The calculations made on the concentrations come up with a value called the concentration quotient ($Q$) which is equal to the equilibrium constant when a dynamic equilibrium is reached. |
I am solving my problem with LinearProgramming. In certain cases, the coefficients that are fed into the function come from evaluating trigonometric functions, $\sin(\frac{\pi}{N})$ or $\sin^2(\frac{\pi}{N})$ for different $N$. Their values are problematic for the solver; for example, the Interior Point Method gives the following warnings:
Min::meprec: Internal precision limit \$MaxExtraPrecision = 500.` reached while evaluating 1/2-Sin[π/8]^2+1/2 (-1+2 Sin[π/8]^2).
The solution is produced at the end, but only as the effect of machine-precision arithmetic. I need to obtain exact solutions. There is the same problem with the Simplex method. I have constraints which are equalities, and it is necessary to solve the problem exactly.
Are there any numerical tricks or Mathematica options that would handle these irrational numbers?
Thanks. |
In the local-density approximation (LDA), the many-electron problem is approximated by a set of single-particle equations which are solved with the self-consistent-field method, and the total energy is minimized. The total energy is taken to be the sum of a kinetic energy, $T$, the classical Hartree term for the electron density, $E_{\rm coul}$, the electron-nucleus energy, $E_{\rm enuc}$, and the exchange-correlation energy, $E_{\rm xc}$, which takes into account approximately the fact that an electron does not interact with itself, and that electron-correlation effects occur.
One solves the Kohn-Sham orbital equations
$$[-{1\over2}\nabla^2 + v_{\rm eff}(\vec r)] \psi_i(\vec r) = \varepsilon_i \psi_i(\vec r) ~ ,$$ (Eq. 25)
with
$$v_{\rm eff}(\vec r) = v(\vec r) + \int {\rm d}\vec r^\prime {{\rho(\vec r^\prime)}\over {|\vec r - \vec r^\prime |}} + v_{\rm xc} (\vec r) ~ .$$ (Eq. 26)
The charge density $\rho$ is given by
$$\rho (\vec r) = 2 \sum_i f_i \mid \psi_i (\vec r)\mid^2 ~.$$ (Eq. 27)
where the 2 accounts for the double occupancy of each spatial orbital because of spin degeneracy, and the $f_i$ account for partial occupancy. The potential $v(\vec{r})$ is the external potential; in the atomic case, this is $-Z_{\rm nuc}/r$, where $Z_{\rm nuc}$ is the atomic number. The exchange-correlation potential $v_{xc}(\vec{r})$ is a function only of the charge density, i.e., $v_{xc}(\vec{r}) = v_{xc}[\rho(\vec{r})]$. We use the functional of Vosko, Wilk, and Nusair (1980) [4], as described above.
The various parts of the total energy are given by:
$$ T = - 2 \sum_i f_i \int {\rm d}\vec{r} \, \psi_i^\ast (\vec{r}) \, {\textstyle{1\over2}} \, \nabla^2 \psi_i (\vec{r}) ~,$$ (Eq. 28)
$$ E_{\rm enuc} = \int {\rm d}{\vec{r}} \rho(\vec{r}) \, v(\vec{r}) ~ ,$$ (Eq. 29)
$$ E_{\rm coul} = \, {\textstyle{1\over2}} \, \int {\rm d}\vec{r} {\rm d}\vec{r}^\prime ~ {{\rho(\vec{r}) \rho(\vec{r}^\prime) }\over{\mid \vec{r} - \vec{r}^\prime\mid }} ~ ,$$ (Eq. 30) and
$$E_{\rm xc} = \int d \vec r \rho(\vec r) \varepsilon_{\rm xc}(\rho) ~ ,$$ (Eq. 31)
The LSD approximation
For atoms, it is sufficient to pick an arbitrary spin-polarization direction, and to consider the local-spin-density, $$\rho (\vec r, \sigma) ~ = ~ | \psi (\vec r, \sigma)|^2~.$$
In general, the local-spin density approximation requires consideration of the spin-density matrix, $$| \psi (\vec r, \sigma)^* ~ \psi (\vec r, \sigma^\prime)|~,$$
where $\sigma$ and $\sigma^\prime$ represent spin up or spin down. This leads to consideration of a potential of the form $$v_{xc}(\vec{r}, \sigma, \sigma^\prime) ~.$$

The RLDA approximation
The relativistic local-density approximation [13] (RLDA) may be obtained from the (non-relativistic) local-density approximation (LDA) by substituting the relativistic kinetic-energy operator for its non-relativistic counterpart, and using relativistic corrections to the local-density functional. We use the relativistic corrections proposed by MacDonald and Vosko [7].
Here, we give the radial equations which are solved by our programs:
$${{{\rm d}F}\over{{\rm d}r}} - {\kappa\over r} F= -c^{-1} (\epsilon-v(r) ) G , $$ (Eq. 32)
$${{{\rm d}G}\over{{\rm d}r}} + {\kappa\over r} G= c^{-1} (\epsilon-v(r)+2 c^2) F ,$$ (Eq. 33)
where $\varepsilon$ is the eigenvalue in Hartrees, and $c$ is the speed of light; $\varepsilon = 0$ describes a free electron with zero kinetic energy. The functions $G(r)$ and $F(r)$ are related to the Dirac spinor by
$$\psi = \pmatrix {G(r) r^{-1} {\cal Y}_{\kappa m} (\hat r) \cr i F(r) r^{-1} {\cal Y}_{-\kappa m} (\hat r)} ~ ,$$ (Eq. 34)
where ${\cal Y}_{\kappa m}(\hat r)$ is a Pauli spinor [12].
Dirac's $\kappa$ quantum number, along with the azimuthal quantum number $m$, determines the angular dependence of the state. For the central-field problem, the levels with various $m$ are degenerate and hence not solved for separately. The following table relates the values of $\kappa$ used in this project to the more common spectroscopic notation.
κ    state    κ    state
-1   s 1/2    1    p 1/2
-2   p 3/2    2    d 3/2
-3   d 5/2    3    f 5/2
-4   f 7/2
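The table encodes the standard relations $j = |\kappa| - \tfrac12$, with $\ell = \kappa$ for $\kappa > 0$ and $\ell = -\kappa - 1$ for $\kappa < 0$; a small sketch (an illustrative helper, not part of the programs described here) reproduces it:

```python
def spectroscopic(kappa):
    """Return the spectroscopic label for a Dirac kappa value."""
    if kappa == 0:
        raise ValueError("kappa = 0 does not occur")
    l = kappa if kappa > 0 else -kappa - 1     # orbital angular momentum
    two_j = 2 * abs(kappa) - 1                 # j = |kappa| - 1/2
    return "spdfg"[l] + f" {two_j}/2"

labels = {k: spectroscopic(k) for k in (-1, 1, -2, 2, -3, 3, -4)}
```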
The charge density is obtained from $$\rho (\vec r) = 2 \sum_i f_i \sum_{\mu} | \psi_{\mu}(\vec r)|^2 ~.$$
The ScRLDA approximation
The inclusion of relativistic effects doubles the number of degrees of freedom in atomic calculations. However, sometimes it is desirable to include some of the effects of relativity without increasing the number of degrees of freedom. Specifically, it is possible to neglect the spin-orbit splitting while including other relativistic effects, such as the mass-velocity term, the Darwin shift, and (approximately) the contribution of the minor component to the charge density.
Koelling and Harmon[14] have proposed a method to achieve this end, which we call the scalar-relativistic local-density approximation (ScRLDA). (Sc is used to avoid confusion with spin-polarization which is abbreviated S.) This is a simplified version of the RLDA. The equations to solve are:
$${{{\rm d}^2 G}\over{{\rm d}r^2}} - {{\ell(\ell+1)}\over{r^2}}~ G = 2 M (V-\epsilon) G + {{1}\over{M}} {{{\rm d} M}\over{{\rm d}r}} \left( {{{\rm d}G}\over{{\rm d}r}} + {{\langle \kappa \rangle}\over{r}}G \right) ~ ,$$ (Eq. 35)
where ${\langle \kappa \rangle} = -1$ is the degeneracy-weighted average value of Dirac's $\kappa$ for the two spin-orbit-split levels, and $\varepsilon$ is the eigenvalue in Hartrees, with the same meaning as in the RLDA.
The parameter $M$ is given by
$$M = 1 + {{\alpha^2}\over{2}} (\epsilon-V) ~ ,$$ (Eq. 36)
where $\alpha$ is the fine-structure constant. The charge density is related to $G$ by the usual non-relativistic formula,
$$r^2 \rho(r) = G(r)^2$$ (Eq. 37)
without an explicit contribution from the minor component $F(r)$. |
Existence of solutions for some "noncoercive" parabolic equations
1.
Dipartimento di Matematica "G. Castelnuovo", Università degli Studi di Roma "La Sapienza", P.le A. Moro, 2 - 00185 Roma, Italy
$\frac{\partial u}{\partial t}- \operatorname{div}(|Du|^{p-2}Du) + B(x, t)\cdot |Du|^{\gamma-1}Du = f$ in $\Omega_T$,
$u(x, t)=0$ on $\partial\Omega\times (0, T),$
$u(x, 0) = u_0(x)$ in $\Omega,$
under suitable hypotheses on the data.
Keywords: Cauchy problems, noncoercive, existence and regularity of solutions, parabolic equations. Mathematics Subject Classification: 35B45, 35K55, 35K6. Citation: Maria Michaela Porzio. Existence of solutions for some "noncoercive" parabolic equations. Discrete & Continuous Dynamical Systems - A, 1999, 5 (3) : 553-568. doi: 10.3934/dcds.1999.5.553
|
My wish is to describe the time complexity of several clustering approaches. For example, suppose we have $n$ data points in $m$ dimensional space.
Suppose further that the pairwise dissimilarity matrix $\Delta$ of $n\times n$ dimensions is already computed and that we have already spent $O(m\cdot n^2)$ steps. What is then the time complexity just of
hierarchical clustering (HC) using Ward's linkage
HC using complete linkage
HC using average linkage
HC using single linkage
$k$-medoid approach
$k$-means approach
Is there any benefit if the dissimilarity matrix $\Delta$ is not already computed? As I understand it, it is necessary for HC and the $k$-medoid approach, but not for $k$-means?
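To illustrate the point that HC runs entirely off $\Delta$ (whereas $k$-means keeps recomputing means in the original coordinates), here is a naive $O(n^3)$ single-linkage sketch written only against a precomputed dissimilarity matrix; the function name and the toy matrix are made up:

```python
def single_linkage(D, k):
    """Agglomerative clustering with single linkage, using only the
    dissimilarity matrix D; merges until k clusters remain. O(n^3)."""
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Two well-separated pairs of points, described only by their dissimilarities.
D = [[0, 1, 9, 10],
     [1, 0, 8, 9],
     [9, 8, 0, 1],
     [10, 9, 1, 0]]
parts = single_linkage(D, 2)
```

Note that the raw $m$-dimensional coordinates never appear; that is exactly the sense in which precomputing $\Delta$ only helps the $\Delta$-driven methods.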
Thank you for your help! |
I am studying topology, on my own, using a text I found online. I am currently reviewing the “Metrics” section that reminds me of the real analysis course I took over 10 years ago.
The text asks me to “show” the following:
Suppose M is a metric space. Show that an open ball in M is an open subset, and a closed ball in M is a closed subset.
I have what I think is a counterexample to the second part. First, let me state the definitions as they are written in the book I am using:
For any $x \in M$ and $r>0$, the (open) ball of radius $r$ around $x$ is the set
$$ B_r(x)=\{y \in M: d(x,y)<r \}, $$
and the closed ball of radius $r$ around $x$ is
$$ \overline B_r(x)=\{y \in M: d(x,y) \leq r \}. $$
A subset $A \subseteq M$ is said to be an open subset of $M$ if it contains an open ball around each of its points.

A subset $A \subseteq M$ is said to be a closed subset of $M$ if $M \setminus A$ is open.
I believe the following is a counterexample to this:
Let $$M = [1,10].$$ Now $ \overline B_1(5)=\{y \in M: d(5,y) \leq 1 \} $ is a closed ball. More simply put, $ \overline B_1(5)=[4,6] $. Let's call the closed ball $A$.
$$ A=\overline B_1(5)=[4,6]$$ Clearly, $A \subseteq M $, and $ M-A = [1,4) \cup (6,10] $. However, $M-A$ is not open, because $\{1\}$ and $\{10\}$ cannot have open balls around them without going beyond $M$.
Is there an error in the text, or an error in my thinking? |
I have the following propositional formula:
$$\lnot A \land (\lnot A \lor \lnot B) \land (\lnot A \lor C) \land (\lnot B \lor C)$$
I can see and "explain" that the middle two terms in brackets are redundant, so the formula simplifies to:
$$\lnot A \land (\lnot B \lor C)$$
I can't, however, figure out how to get there formally using the associativity laws of $\land$ and $\lor$, de Morgan's laws, etc.
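A brute-force truth table (a quick sanity check, not the formal derivation) confirms the two formulas really are equivalent:

```python
from itertools import product

def original(A, B, C):
    return (not A) and ((not A) or (not B)) and ((not A) or C) and ((not B) or C)

def simplified(A, B, C):
    return (not A) and ((not B) or C)

# Compare the two formulas on all 8 assignments of (A, B, C).
equivalent = all(original(*v) == simplified(*v)
                 for v in product([False, True], repeat=3))
```

For the formal route, absorption, $\varphi \land (\varphi \lor \psi) \equiv \varphi$ with $\varphi = \lnot A$, disposes of the second and third conjuncts in one step each.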
Can anyone please give some help with those formal steps? Thanks in advance. |
I have to solve the following least squares problem: \begin{equation} \| \left[ \begin{smallmatrix} \mathbf{L} \\ \mathbf{I} \end{smallmatrix} \right]\mathbf{x} - \mathbf{b} \|_2^2 \end{equation} where $\mathbf{L} \in \mathbb{R}^{n\times n}$ is $O(n)$ sparse lower triangular matrix, $\mathbf{I} \in \mathbb{R}^{n \times n}$ is the identity and $\mathbf{b} = \left[ \begin{smallmatrix} \mathbf{b}_1 \\ \mathbf{b}_2 \end{smallmatrix} \right] \in \mathbb{R}^{2n}$.
Hence, solving the individual system $\mathbf{Lx} = \mathbf{b}_1$ has $O(n)$ complexity via the forward substitution algorithm, but the least squares fit is expensive.
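One standard route (just expanding the stacked objective, no assumptions beyond the statement above) is to pass to the normal equations, which keep the sparsity of $\mathbf{L}$:

```latex
\left\| \begin{bmatrix}\mathbf{L}\\ \mathbf{I}\end{bmatrix}\mathbf{x}
      - \begin{bmatrix}\mathbf{b}_1\\ \mathbf{b}_2\end{bmatrix} \right\|_2^2
 = \|\mathbf{L}\mathbf{x}-\mathbf{b}_1\|_2^2 + \|\mathbf{x}-\mathbf{b}_2\|_2^2
 \;\Longrightarrow\;
 (\mathbf{L}^{\mathsf T}\mathbf{L} + \mathbf{I})\,\mathbf{x}
   = \mathbf{L}^{\mathsf T}\mathbf{b}_1 + \mathbf{b}_2 .
```

Since $\mathbf{L}^{\mathsf T}\mathbf{L} + \mathbf{I}$ is symmetric positive definite and inherits sparsity from $\mathbf{L}$, a sparse Cholesky factorization applies, as do conjugate gradients, which only need products with $\mathbf{L}$ and $\mathbf{L}^{\mathsf T}$ (each $O(n)$ here).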
I am open to any suggestion, including fast approximate stochastic solvers, etc. Of course, it would be perfect if one is aware of a direct method that exploits this kind of structure. |
If you look at the optimization problem that SVM solves:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$
s.t. $y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0,$ for all $ i=1,\dots n$
the support vectors are those $x_i$ with nonzero Lagrange multipliers; these include all points with $\xi_i \gt 0$ as well as points lying exactly on the margin. In other words, they are the data points that are either misclassified, or on or close to the boundary.
Now let's compare the solution to this problem when you have a full set of features, to the case where you throw some features away. Throwing a feature away is functionally equivalent to keeping the feature, but adding a constraint $w_j=0$ for the feature $j$ that we want to discard.
When you compare these two optimization problems, and work through the math, it turns out there is no hard relationship between the number of features and the number of support vectors. It could go either way.
It's useful to think about a simple case. Imagine a 2-dim case where your negative and positive examples are clustered around (-1,-1) and (1,1), respectively, and are separable with a diagonal separating hyperplane with 3 support vectors. Now imagine dropping the y-axis feature, so your data is now projected on the x-axis. If the data are still separable, say at x=0, you'd probably be left with only 2 support vectors, one on each side, so adding the y-feature would increase the number of support vectors. However, if the data are no longer separable, you'd get at least one support vector for each point that's on the wrong side of x=0, in which case adding the y-feature would reduce the number of support vectors.
So, if this intuition is correct, if you're working in very high-dimensional feature spaces, or using a kernel that maps to a high dimensional feature space, then your data is more likely to be separable, so adding a feature will tend to just add another support vector. Whereas if your data is not currently separable, and you add a feature that significantly improves separability, then you're more likely to see a decrease in the number of support vectors. |
That's not what the formula gives you. As the caption says, the capacity of the augmenting path in the residual network in (b) is $4$. Therefore we send 4 units of flow along the augmenting path from $s$ to $t$, namely, the path $s \to v_2 \to v_3 \to t$. In particular, $f(s,v_2)=8$, $f'(s,v_2)=4$, and $f'(v_2,s)=0$, so the updated flow is $8+4-0=12$.
Those answers assume that all edge capacities are integers. Assuming they are, this works. Suppose the min-cut in the original graph has total capacity $x$; then it will have total capacity $x(|E|+1)+k$ in the transformed graph, where $k$ counts the number of edges crossing that cut. Note that if you consider any cut in the original graph with larger ...
Let us consider $K_{2,2}$, the complete bipartite graph with two vertices on either side. A valid max flow sends $1/2$ units of flow across each edge of the bipartite graph. This gives a negative answer to your first question. On the other hand, the integral flow theorem guarantees that there exists an integral max flow, and such a max flow can be found ...
Edmonds-Karp is a specialisation/elaboration of Ford-Fulkerson, so any bound for the latter also applies to the former. In other words, EK is $O(|E|\min(f_{max}, |V||E|))$ time (and writing it this way does add information, since $f_{max}$ can be much smaller than $|V||E|$ -- and this is the only time when you might otherwise consider using some other ...
It is explained in part (b) of the caption of Figure 26.4. The residual network $G_f$ with augmenting path $p$ shaded; its residual capacity is $c_f(p)=c_f(v_2,v_3)=4$. Since the capacity of path $p$ is 4 (not 5), we find a flow $f'$ in the residual network $G_f$ that is defined by $f'(s,v_2)=f'(v_2,v_3)=f'(v_3,t)=4$. So for the network flow $f\uparrow f'...
Your problem is NP-hard. There is a reduction from Independent Set to its decision version. Consider an instance $G=(V,E)$ of Independent Set; you construct a network with vertices $\{s,t\}\cup V\cup V'$ where each vertex in $V'$ corresponds to a pair of vertices in $V$. For example, if $V=\{1,2,3\}$, then $V'=\{v_{12},v_{23},v_{13}\}$. Then we construct ...
To test correctness: the set of edges belonging to a min-cut is not unique in general, so a dataset like you ask would not really be helpful, apart from checking that the value of the cut is the right one. However, it is easy to check correctness yourself by using the output of your algorithm if you compute both the max-flow and the min-cut; just check that ...
Consider a graph of two nodes $s$ and $t$ and one edge $(s,t)$ with a flow $f$, $f(s,t)=1$. Let $S=\{s\}$ and $T=\{t\}$. Then the flow across the cut $(S, T)$ is, apparently, 1. Or, $$f(S, T) = \sum_{u\in S} \sum_{v\in T} f(u,v) - \sum_{u\in S} \sum_{v\in T} f(v,u)= f(u,v)=1.$$ $\sum_{u\in S} \sum_{v\in T} f(u,v)$ is the flow from $S$ to $T$. $\sum_{u\in S}...
No, your gut feeling is not correct. Consider the following flow network with source $s$ and sink $t$, where the capacity of every edge is 1. The max-flow from source to sink is 0. The s-t cut $(\{s,A\}, \{B,t\})$ is a minimum cut since the only connecting edge $(B, A)$ goes from sink side to source side.$$ s \longrightarrow A \longleftarrow B \...
A full edge, e.g. $a \rightarrow c$, has a residual capacity of $0$ in the residual network. So you can't make an augmenting path over that directed edge. However, the reversed edge $c \rightarrow a$ has a residual capacity of $5$ (since $c_{c \rightarrow a} = 0$ and $f_{c \rightarrow a} = -5$). Therefore you can create an augmenting path using the reversed ...
This is sometimes called the minimum edge-cost flow problem or fixed-cost flow problem. As you suspected, it is indeed NP-hard, even when the network is bipartite. It is listed as problem ND32 in the list of NP-hard problems by Garey and Johnson: M.R. Garey, D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New ...
This problem is NP-hard if 0 weight is allowed. We can reduce Not-All-Equal 3SAT to the decision version of this problem. Given an instance of Not-All-Equal 3SAT with $n$ variables and $m$ clauses, for each variable $x_i$, we create two vertices $v_i$ and $v_i'$ with an edge between them. In addition, for each clause, for example, $x_1\...
Yes, you can. If it has, say, no outgoing edge, there can be no flow routed over this node. Otherwise the flow conservation constraint for this node ($v$), $$\sum_{(u,v) \in E} f_{uv} - \underbrace{\sum_{(v,w) \in E} f_{vw}}_{= 0} = 0,$$ is violated if you have incoming flow.
I would do it kind of the other way around from what you suggested. First compute a flow that saturates $(u,v)$. This can be done with the Ford--Fulkerson algorithm. Look only for augmenting paths which contain $(u,v)$ and augment the flow until the edge is saturated. In the second step you augment further, but avoid the edge $(u,v)$ when searching for augmenting ...
Timetabling is known to be NP-complete. Your more complex variant is too. Don't expect "nice" or "efficient" solutions. Either settle for an approximate solution (good luck in deriving one) or some sort of randomized heuristic. I'd try some variant of genetic algorithms (look around for its application to timetabling, they use special mutation and ...
Apply the transformation $w \mapsto (m+1)w + 1$ to all weights (where $m$ is the number of edges in the graph), and find the minimum weight cut in the new graph. This will give you the minimum weight cut with the minimum number of edges. By computing the value modulo $m+1$, you can determine the number of edges.
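A tiny brute-force check of this transformation (toy directed graph, all names invented): after mapping $w \mapsto (m+1)w + 1$, integer division of the optimal cut value by $m+1$ recovers the original cut weight, and the remainder counts its edges.

```python
from itertools import combinations

edges = [(0, 1, 4), (1, 3, 2), (1, 2, 2), (2, 3, 2)]   # s = 0, t = 3
m = len(edges)

def cut_weight(S, transformed=False):
    # Sum of capacities of edges leaving the source side S.
    return sum((m + 1) * w + 1 if transformed else w
               for u, v, w in edges if u in S and v not in S)

def st_cuts():
    # All s-t cuts of this 4-node graph: S contains s=0, never t=3.
    inner = [1, 2]
    for r in range(len(inner) + 1):
        for extra in combinations(inner, r):
            yield {0} | set(extra)

best = min(st_cuts(), key=lambda S: cut_weight(S, transformed=True))
W, k = divmod(cut_weight(best, transformed=True), m + 1)
# Cuts {0}, {0,1}, {0,1,2} all have original weight 4, but {0} crosses
# only one edge, so the transformed objective prefers it.
```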
Remove $u$ and $v$ (as well as all edges connected to them), and for any removed edge $(u,x)$, add an edge from $s$ to $x$ with the same capacity; for any removed edge $(y,v)$, add an edge from $y$ to $t$ with the same capacity. Now find a min cut in this new graph. The partition of nodes in this cut suggests a min cut among those including $e$ in the ...
Just to respond to the above comment by the OP "why does linear programming for $K_{2,2}$ fail". Perhaps your confusion is because we need to distinguish between "solving an LP" and "solving an LP using a particular algorithm". The LP formulation of maxflow (with real variables) has optimal solution (1,0,1,0) (for the edges of the bipartite graph $K_{2,2}...
Flow is an abstraction of how much "stuff" you want to move through the network. Exactly what the stuff is depends on what you're modelling with the network - water pipes, transport networks, computer networks, etc. The problem (or something close to it) can also be used to model other problems, in which case flow could be all kinds of things.
There is a significant typo on that slide. "$c_f (u, v) = f (v, u)$ if $f (v, u)$ not in $E$" should have been "$c_f (u, v) = f (v, u)$ if $(u, v)$ not in $E$" or, what is equivalent, "$c_f (u, v) = f (v, u)$ if $(v, u)$ is in $E$". Why do we care about an edge that is not in $E$? In fact, we only care about the edge $(u,v)$ if either $(u,v)$ is in $E$ or $(v,u)$ ...
First a quick note: the flow entering a vertex is equal to the flow leaving. I'll just refer to it as the amount of flow through a vertex. Second, note that we can say $a \leq b \wedge b \leq a$, thus we can add the constraint $a = b$ for two vertices. Then we can re-use the NP-hard result from the paper that you quoted: "A negative disjunctive ...
I would say that 3. is a special case of 2. instead, but it is a point of view. On 3., you can create a new node "source" which has edges to every supply node ($b(v) > 0$). These edges have cost 0 and capacity $b(v)$. Then you create a new node "sink" which has edges from every demand node. These edges have cost 0 and capacity $-b(v)$. All this replaces the b(v)...
Here is your definition of reversed edges in the case of the flow network given in your comment. Between 2 vertices there is the normal forward edge $(u,v)$, and another edge $(v,u)$ that goes backward (this is the reversed edge), regardless of their capacities. If your definition is used, flow networks may have reversed edges indeed. For example, the flow network (...
The problem is NP-complete, because in the special case that all cars have the same capacity, it is just the bin-packing problem. If car A has a higher capacity than car B, and you get an optimal solution (smallest number of cars) containing B but not A, then you can swap cars A and B, you won't need more cars, and because A has more empty capacity, you ...
Let $G=(V,E)$ be your input graph. Now consider a maximum flow $f$ on $G$. Let $f$ be a flow in $G$ such that the residual network $G_R$ has no s-t path; then $f$ is a maximum flow. Let's define $G'=(V,E')$ to be your graph with $E'=E \cup \{(u_i,s)\}$ for $i=1,...,N$. Since $f$ is a maximum flow on $G$, then $G_R$ has no s-t paths. Now you can ...
Consider a parabola with focus $F$ and vertex $V$; define $a := |\overline{VF}|$. Let $\overline{PQ}$ be a focal chord of the parabola, with $M$ its midpoint. Let $F^\prime$, $P^\prime$, $Q^\prime$, $M^\prime$ be the projections of the corresponding points onto the directrix. (Note that $|\overline{VF^\prime}| = a$.)
It is "known" that the tangents (not shown) at the endpoints of a focal chord are perpendicular, and that they meet at the point on the directrix halfway between their own projections. In our scenario, that point must be $M^\prime$, so that the circle with diameter $\overline{PQ}$ is tangent to the directrix at $M^\prime$. Let $r := |\overline{MM^\prime}|$ be the radius of that circle.
It is also "known" that the tangents at $P$ and $Q$ bisect respective angles $\angle FPP^\prime$ and $\angle FQQ^\prime$. This implies that $F$ is the common reflection of $P^\prime$ over $\overline{PM^\prime}$ and of $Q^\prime$ over $\overline{QM^\prime}$, so that $\overline{FM^\prime}\perp\overline{PQ}$. Define $m := |\overline{FM^\prime}|$. A little angle chasing shows that $\angle FM^\prime F^\prime \cong \angle FMM^\prime$, so that the similar right triangles yield$$\frac{2a}{m} = \frac{m}{r} \qquad\to\qquad m^2 = 2 a r \tag{1}$$
This comes in handy for calculating the power of $V$ with respect to the circle, writing $n := |\overline{VM}|$ for the distance from $V$ to the center $M$:
$$\begin{align}\text{power of $V$ wrt $\bigcirc{M}$} &:= n^2 - r^2 \\&\,= n^2 - (r - a)^2 - 2 a r+ a^2 \\&\,= |\overline{M^\prime F^\prime}|^2 - 2 a r + a^2 \\&\,= m^2 - (2a)^2 - 2 a r + a^2 \\&\,= (m^2 - 2 a r) - 3 a^2 \\&\,= - 3 a^2 \tag{2}\end{align}$$
We observe that this value is independent of our choice of $P$ and $Q$, and is therefore a constant of this configuration. Consequently, for any two focal-chord-diameter circles, vertex $V$ has the same power with respect to each; this places $V$ on the circles' radical axis, which for intersecting circles is the line containing their common chord. $\square$ |
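A quick numeric sanity check of that constant power (my own sketch, using the standard parabola $y^2 = 4ax$ with vertex at the origin, focus $(a,0)$, and focal-chord endpoints $(at^2, 2at)$ whose parameters satisfy $t_1 t_2 = -1$):

```python
import numpy as np

a = 1.0
for t1 in (2.0, 3.5, -0.7):
    t2 = -1.0 / t1                    # focal-chord condition t1 * t2 = -1
    P = np.array([a * t1**2, 2 * a * t1])
    Q = np.array([a * t2**2, 2 * a * t2])
    M = (P + Q) / 2                   # center of the focal-chord circle
    r = np.linalg.norm(P - Q) / 2     # its radius
    power = M @ M - r**2              # power of the vertex (the origin)
    print(round(power, 10))           # -3.0 for every chord, i.e. -3 a^2
```

The same value $-3a^2$ appears for every chord, as claimed.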
To determine the efficient frontier in a mean-variance framework, one needs estimates of the expected return $r_i$, the variance $\sigma_i^2$ and the covariance $\sigma_{ij}^2$ for each pair of stocks $i$, $j$. For $n$ stocks, you have to estimate a total of $\frac{n(n-1)}{2}$ correlation coefficients. Index models are used to reduce this huge number of required estimates.
Single Index Models
It is assumed that the return of a stock can be written as$$r_i = a_i + \beta_i r_m + e_i,$$where $r_m$ denotes the market return, $e_i$ a mean-zero error term and $\beta_i$ the stock's beta. The key assumptions are:$$\operatorname{E}[e_i(r_m-\bar{r}_m)]=0$$$$\operatorname{E}[e_ie_j]=0$$This implies that the only systematic reason stocks vary together is a common comovement with the market. One can show that the covariance can be expressed as$$\sigma_{ij}^2 = \beta_i \beta_j \sigma_m^2,$$where $\sigma_m^2$ denotes the variance of the market return. In summary, if you assume the single-index model, you just have to estimate a total of $3n+1$ parameters for $n$ stocks.
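As a small illustration of that reduction — a sketch with made-up numbers, not taken from the reference — the whole covariance matrix is assembled from just the betas, the residual variances and the market variance:

```python
import numpy as np

# Hypothetical single-index-model estimates for n = 4 stocks.
beta = np.array([0.8, 1.0, 1.2, 0.9])            # stock betas
resid_var = np.array([0.02, 0.03, 0.025, 0.015])  # Var(e_i)
market_var = 0.04                                 # sigma_m^2

# Off-diagonal covariances: sigma_ij = beta_i * beta_j * sigma_m^2;
# variances additionally carry the residual (idiosyncratic) term.
cov = np.outer(beta, beta) * market_var
cov[np.diag_indices_from(cov)] += resid_var

print(round(cov[0, 1], 6))  # 0.8 * 1.0 * 0.04 = 0.032
```

No pairwise correlation estimates enter at all, which is the point of the model.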
CAPM
The CAPM is an economic theory in equilibrium, with further assumptions about an investor's utility-preference function, costless diversification, ...
Combining the economic theory of Markowitz portfolio diversification, Von Neumann–Morgenstern expected utilities etc. leads to the CAPM (where $r^f_t$ denotes the risk-less rate of interest):
$$r_{i,t}-r^f_t = \alpha_i + \beta_i(r^m_t-r^f_t)+ \epsilon_{i,t}$$
with the following (strong) assumption:
$$\alpha_i = 0$$
You may look at this excellent answer with more details.
Differences between the Single Index Model and the CAPM
In fact, the single index model is just a statistical technique, because you can replace $r_m$ with any other variable you think best explains a stock's return. The CAPM, however, is an economic model in equilibrium, where the market-portfolio return $r_m$ is a clearly determined portfolio (of all risky assets, investments, also human capital...). See also this answer:
The $\beta_i$ for a stock in the single-index model is
not the same $\beta_i$ as in the CAPM.
Reference:
Elton/Gruber/Brown/Götzmann (2014),
Modern Portfolio Theory and Investment Analysis, ed. 9, John Wiley & Sons. |
I recently asked a question pertaining to the application of Jacobi's method to a semilinear elliptic PDE (Poisson's equation)
$$ \nabla^2u = -\rho~e^{-u} $$
A more efficient method, the biconjugate gradient stabilized (BiCGSTAB) method, was recommended. I have tested this method out and it is indeed much faster. But I am unsure of what the matrix representation of a semilinear system would look like. For an ordinary linear PDE like
$$ \nabla^2u=-\rho $$
it looks like $$ \frac{1}{h^2}\left( \begin{array}{ccc} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \\ \end{array} \right)\left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5\end{array} \right) = \left( \begin{array}{c} \rho_1+g \\ \rho_2 \\ \rho_3 \\ \rho_4 \\ \rho_5+g\end{array} \right) $$ where $g$ is the Dirichlet boundary condition.
My question: What would the corresponding matrix representation of the set of simultaneous equations for the semilinear case look like? I'm guessing something like
$$ \frac{1}{h^2}\left( \begin{array}{ccc} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & -1 & 2 & -1 & \\ & & -1 & 2 & -1 \\ & & & -1 & 2 \\ \end{array} \right)\left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ u_4 \\ u_5\end{array} \right) = \left( \begin{array}{c} \rho_1e^{-u_1}+g \\ \rho_2e^{-u_2} \\ \rho_3e^{-u_3} \\ \rho_4e^{-u_4} \\ \rho_5e^{-u_5}+g\end{array} \right) $$
But this doesn't leave me with all $u$ values in a single vector.
Would it make sense to do something like:
1) Solve the linear case $$ \nabla^2u = -\rho $$
2) Use the resultant $u$ to construct a new linear case $$ \nabla^2u_i = -\rho~C $$ where $$ C = e^{-u_{old}} $$
3) Repeat step 2 until self-consistency is reached.
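That fixed-point (Picard) idea can be sketched in one dimension — assuming homogeneous Dirichlet data and a made-up constant $\rho$ — with each pass solving a linear Poisson system whose right-hand side is frozen at the previous iterate:

```python
import numpy as np

# 1D model problem on (0, 1) with homogeneous Dirichlet data (assumed),
# n interior points and a made-up constant source rho.
n = 50
h = 1.0 / (n + 1)
rho = np.full(n, 10.0)

# Standard (1/h^2) * tridiag(-1, 2, -1) Laplacian matrix.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

u = np.zeros(n)
for _ in range(100):
    # Freeze the nonlinearity at the previous iterate and solve the
    # *linear* system A u_new = rho * exp(-u_old).
    u_new = np.linalg.solve(A, rho * np.exp(-u))
    done = np.max(np.abs(u_new - u)) < 1e-12
    u = u_new
    if done:
        break
```

For large grids one would keep `A` sparse and swap `np.linalg.solve` for an iterative solver such as `scipy.sparse.linalg.bicgstab`; the outer fixed-point loop is unchanged. Newton's method on the full nonlinear residual is the usual faster alternative when the fixed-point iteration converges slowly.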
Probability Seminar Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank.
This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
There is an operation for which I have long wanted to find a better solution.
Let:
`a` be a matrix of dimensions $m\times n$
`v` be an integer vector of length $n$ with elements drawn from $[1, m]$
For every element $x$ at position $p$ in `v` I wish to select the element at row $x$, column $p$ in `a`.
Example:
SeedRandom[0]
a = Array[Range[7] 10^# &, 3, 0]
v = RandomInteger[{1, 3}, 7]
$\left( \begin{array}{ccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 10 & 20 & 30 & 40 & 50 & 60 & 70 \\ 100 & 200 & 300 & 400 & 500 & 600 & 700 \\ \end{array} \right)$
{3, 3, 2, 1, 1, 3, 1}
Desired output:
{100, 200, 30, 4, 5, 600, 7}
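For orientation only (an analogue, not a Mathematica answer): the operation is exactly integer fancy-indexing, pairing each column index with the row picked by `v`. In NumPy terms it reads:

```python
import numpy as np

a = np.array([[1,   2,   3,   4,   5,   6,   7],
              [10,  20,  30,  40,  50,  60,  70],
              [100, 200, 300, 400, 500, 600, 700]])
v = np.array([3, 3, 2, 1, 1, 3, 1])

# Pair row v[p] - 1 (1-based -> 0-based) with column p.
result = a[v - 1, np.arange(a.shape[1])]
print(result.tolist())  # [100, 200, 30, 4, 5, 600, 7]
```

This matches the desired output above.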
Details:
Although a compiled function is likely to be the fastest approach for packed arrays I want something more general, allowing arrays of mixed types, and ideally optimized for arrays in which each row is a packed array (list) of a different type, e.g.
$\left( \begin{array}{ccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 \\ \text{a} & \text{b} & \text{c} & \text{d} & \text{e} & \text{f} & \text{g} \\ \end{array} \right)$
I am still interested in seeing the fastest possible compiled function as it may serve as the basis for a general solution as well.
I seek a solution that works well for any shape of array
a, from $n\gg m$ to square to $m\gg n$, though if compromise is necessary I would optimize for $n > m$. |
If I understand how to do the following problem, it will help with the real, complicated problem I am actually working on. The problem I wish to understand is how to derive the equations of motion for a magnetic moment $\vec{m}$ in a changing magnetic field $\vec{B}$. I know I can use the Lagrangian to quite easily obtain the equations of motion, i.e. the evolution of $\ddot{\theta}$ and $\ddot{\phi}$ of the magnetic moment. (The reason I do not want to use the Lagrangian is that the actual problem I am doing has a piecewise torque function, so when integrating the torque to find a potential energy the equation becomes super nasty.) Instead I wish to obtain the equations of motion using $\vec{L}=\mathbf{I}\vec{\omega}$, where $\mathbf{I}$ is the moment of inertia tensor. Representing the magnetic moment as a thin rod, the moment of inertia tensor becomes,
$$\mathbf{I}=\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$
where $\lambda_1 = \lambda_2$.
The part that I am getting hung up on is the Euler's equation $\dot{\vec{L}}+\omega \times\vec{L}=\vec{\Gamma}=\vec{m}\times\vec{B}$:
$$\lambda_1\dot{\omega_1}-(\lambda_2 - \lambda_3)\omega_2\omega_3=\Gamma_1,$$ $$\lambda_2\dot{\omega_2}-(\lambda_3 - \lambda_1)\omega_3\omega_1=\Gamma_2,$$ $$\lambda_3\dot{\omega_3}=\Gamma_3,$$
where the $\omega$ terms are in the body frame.
So my question is how to implement the torque when the direction is constantly changing i.e. the magnetic field is a function of time and can be pointing in any direction. If I could get help on this one little spot (since the textbooks seem to have nice torques that are perpendicular to an axis) that would be appreciated. |
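One way to sidestep the body-frame bookkeeping for this symmetric case (my own sketch, not the textbook route) is to integrate in the lab frame: for a thin rod with $\lambda_3=0$ and the moment along the rod, the state $(\hat m, \vec L)$ evolves as $\dot{\hat m} = \vec\omega\times\hat m$ with $\vec\omega = \vec L/\lambda$ and $\dot{\vec L} = \vec m\times\vec B(t)$, and $\vec L$ remains perpendicular to the rod if it starts so. A minimal RK4 sketch with made-up parameters and a made-up rotating field:

```python
import numpy as np

lam, m_mag = 1.0, 1.0                  # made-up inertia and moment magnitude

def B(t):                              # made-up time-varying lab-frame field
    return np.array([np.cos(t), np.sin(t), 0.5])

def deriv(t, y):
    # y = (mhat, L): unit moment direction and angular momentum (lab frame).
    mhat, L = y[:3], y[3:]
    omega = L / lam                    # valid since L is perpendicular to mhat
    return np.concatenate([np.cross(omega, mhat),
                           m_mag * np.cross(mhat, B(t))])

def rk4_step(t, y, dt):
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start at rest, moment along z.
y = np.concatenate([np.array([0.0, 0.0, 1.0]), np.zeros(3)])
t, dt = 0.0, 1e-3
for _ in range(2000):
    y = rk4_step(t, y, dt)
    t += dt
# |mhat| should stay numerically equal to 1 throughout.
```

Because the torque $\vec m\times\vec B(t)$ is evaluated fresh at every stage, a piecewise or arbitrarily oriented $\vec B(t)$ costs nothing extra — no potential energy ever has to be integrated.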
Q. The energy associated with electric field is $(U_E)$ and with magnetic field is $(U_B)$ for an electromagnetic wave in free space. Then :
Solution:
Average energy density of the magnetic field: $u_{B} = \frac{B_{0}^{2}}{2 \mu_{0}}$, where $B_{0}$ is the maximum value of the magnetic field. Average energy density of the electric field: $u_{E} = \frac{\varepsilon_{0} E^{2}_{0}}{2}$. Now, $E_{0} = cB_{0}$ and $c^{2} = \frac{1}{\mu_{0} \varepsilon_{0}}$, so $$u_{E} = \frac{\varepsilon_{0}}{2} \, c^{2} B^{2}_{0} = \frac{\varepsilon_{0}}{2} \cdot \frac{1}{\mu_{0} \varepsilon_{0}} \, B^{2}_{0} = \frac{B^{2}_{0}}{2 \mu_{0}} = u_{B}.$$ Since the energy densities of the electric and magnetic fields are the same, the energies associated with equal volumes are equal: $U_{E} = U_{B}$.
Recently the question If $\frac{d}{dx}$ is an operator, on what does it operate? was asked on mathoverflow. It seems that some users there objected to the question, apparently interpreting it as an elementary inquiry about what kind of thing is a differential operator, and on this interpretation, I would agree that the question would not be right for mathoverflow. And so the question was closed down (and then reopened, and then closed again….
sigh). (Update 12/6/12: it was opened again, and so I've now posted my answer over there.)
Meanwhile, I find the question to be more interesting than that, and I believe that the OP intends the question in the way I am interpreting it, namely, as a logic question, a question about the nature of mathematical reference, about the connection between our mathematical symbols and the abstract mathematical objects to which we take them to refer. And specifically, about the curious form of variable binding that expressions involving $dx$ seem to involve. So let me write here the answer that I had intended to post on mathoverflow:
————————-
To my way of thinking, this is a serious question, and I am not really satisfied by the other answers and comments, which seem to answer a different question than the one that I find interesting here.
The problem is this. We want to regard $\frac{d}{dx}$ as an operator in the abstract senses mentioned by several of the other comments and answers. In the most elementary situation, it operates on a functions of a single real variable, returning another such function, the derivative. And the same for $\frac{d}{dt}$.
The problem is that, described this way, the operators $\frac{d}{dx}$ and $\frac{d}{dt}$ seem to be the
same operator, namely, the operator that takes a function to its derivative, but nevertheless we cannot seem freely to substitute these symbols for one another in formal expressions. For example, if an instructor were to write $\frac{d}{dt}x^3=3x^2$, a student might object, “don’t you mean $\frac{d}{dx}$?” and the instructor would likely reply, “Oh, yes, excuse me, I meant $\frac{d}{dx}x^3=3x^2$. The other expression would have a different meaning.”
But if they are the same operator, why don’t the two expressions have the same meaning? Why can’t we freely substitute different names for this operator and get the same result? What is going on with the logic of reference here?
The situation is that the operator $\frac{d}{dx}$ seems to make sense only when applied to functions whose independent variable is described by the symbol “x”. But this collides with the idea that what the function is at bottom has nothing to do with the way we represent it, with the particular symbols that we might use to express which function is meant. That is, the function is the abstract object (whether interpreted in set theory or category theory or whatever foundational theory), and is not connected in any intimate way with the symbol “$x$”. Surely the functions $x\mapsto x^3$ and $t\mapsto t^3$, with the same domain and codomain, are simply different ways of describing exactly the same function. So why can’t we seem to substitute them for one another in the formal expressions?
The answer is that the syntactic use of $\frac{d}{dx}$ in a formal expression involves a kind of binding of the variable $x$.
Consider the issue of
collision of bound variables in first order logic: if $\varphi(x)$ is the assertion that $x$ is not maximal with respect to $\lt$, expressed by $\exists y\ x\lt y$, then $\varphi(y)$, the assertion that $y$ is not maximal, is not correctly described as the assertion $\exists y\ y\lt y$, which is what would be obtained by simply replacing the occurrence of $x$ in $\varphi(x)$ with the symbol $y$. For the intended meaning, we cannot simply syntactically replace the occurrence of $x$ with the symbol $y$, if that occurrence of $x$ falls under the scope of a quantifier.
Similarly, although the functions $x\mapsto x^3$ and $t\mapsto t^3$ are equal as functions of a real variable, we cannot simply syntactically substitute the expression $x^3$ for $t^3$ in $\frac{d}{dt}t^3$ to get $\frac{d}{dt}x^3$. One might even take the latter as a kind of ill-formed expression, without further explanation of how $x^3$ is to be taken as a function of $t$.
So the expression $\frac{d}{dx}$ causes a binding of the variable $x$, much like a quantifier might, and this prevents free substitution in just the way that collision does. But the case here is not quite the same as the way $x$ is a bound variable in $\int_0^1 x^3\ dx$, since $x$ remains free in $\frac{d}{dx}x^3$, but we would say that $\int_0^1 x^3\ dx$ has the same meaning as $\int_0^1 y^3\ dy$.
Of course, the issue evaporates if one uses a notation, such as the $\lambda$-calculus, which insists that one be completely explicit about which syntactic variables are to be regarded as the independent variables of a functional term, as in $\lambda x.x^3$, which means the function of the variable $x$ with value $x^3$. And this is how I take several of the other answers to the question, namely, that the use of the operator $\frac{d}{dx}$ indicates that one has previously indicated which of the arguments of the given function is to be regarded as $x$, and it is with respect to this argument that one is differentiating. In practice, this is almost always clear without much remark. For example, our use of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ seems to manage very well in complex situations, sometimes with dozens of variables running around, without adopting the onerous formalism of the $\lambda$-calculus, even if that formalism is what these solutions are essentially really about.
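The binding is exactly what a computer algebra system forces one to make explicit: the differentiation variable is a required argument, and the instructor's two expressions really do mean different things. A SymPy illustration (the symbols are mine):

```python
import sympy as sp

x, t = sp.symbols('x t')

# d/dx x^3: x is the bound (differentiation) variable.
print(sp.diff(x**3, x))   # 3*x**2

# d/dt x^3: as a function of t, x^3 is a constant.
print(sp.diff(x**3, t))   # 0
```

Writing `sp.diff(expr, var)` is essentially the $\lambda$-calculus discipline in disguise: the second argument names which variable is abstracted before differentiating.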
Meanwhile, it is easy to make examples where one must be very specific about which variables are the independent variable and which are not, as Todd mentions in his comment to David’s answer. For example, cases like
$$\frac{d}{dx}\int_0^x(t^2+x^3)dt\qquad
\frac{d}{dt}\int_t^x(t^2+x^3)dt$$
are surely clarified for students by a discussion of the usage of variables in formal expressions and more specifically the issue of bound and free variables. |
Shafarevich in the book "Basic Algebraic Geometry I" gives the following definition of a quasi-projective variety:
A quasi-projective variety is an open subset (respect to the induced Zariski topology) of a closed projective set.
Then he says that
a closed affine set is a quasi-projective variety, but I disagree with this statement. Here I'll try to show why:
We know that $\mathbb P^n_k=U_0\cup\ldots\cup U_n$, where $U_i$ is open and $U_i\cong\mathbb A_k^n$. If $X$ is a closed affine set and $\overline X$ is its projective closure (note that $\overline X$ is a closed subset of $\mathbb P^n_k$ and it contains a "copy" of $X$), one can show that $X$ is homeomorphic to $\overline X\cap U_i$ which is open in $\overline X$. So technically $X$ is homeomorphic to a quasi-projective variety, but according to the previous definition it is
not a quasi-projective variety because $X\not\subset\mathbb P^n_k$. Where is the mistake in my argumentation?
Clearly Shafarevich is working in the classical framework of Algebraic Geometry, without mentioning the concept of scheme or sheaf. For this reason I'd like an answer concerning only "classical arguments". I know that this is not the most elegant way to introduce varieties, but I want to understand the abstract concepts through successive generalizations.
From Fermat's little theorem we know that every odd prime $p$ divides $2^a-1$ with $a=p-1$.
Is it possible to prove that there are infinitely many primes not dividing $2^a+2^b-1$?
(With $2^a, 2^b$ being incongruent modulo $p$.)

1. Obviously, if $2$ is not a quadratic residue modulo $p$ then we have the solution $a=1$, $b=\frac{p-1}{2}$.
2. If $2$ is a quadratic residue and the order of $2 \bmod p$ is $r=\frac{p-1}{2}$, then the set $\{2^1,2^2,\ldots,2^{\frac{p-1}{2}}\}$ is a complete quadratic residue system $\bmod p$. So, in this case, $p\mid 2^a+2^b-1$ is equivalent to $p\mid x^2+y^2-1$ with $x^2,y^2$ being incongruent $\bmod p$, which is always true for every $p\geq 11$.
3. It is not true that if $p \mid 2^a+2^b-1$ and $q\mid 2^{a'}+2^{b'}-1$ then $p\cdot q\mid 2^c+2^d-1$.

There is the counterexample: $5\mid 2^1+2^2-1$ and $17\mid 2^1+2^4-1$, but $5\cdot 17=85 \nmid 2^a+2^b-1$.
We can see a few examples of numbers which have the property in question: $3, 7, 31, 73, 89, \ldots$
(In fact, every Mersenne prime does not divide $2^a+2^b-1$.)
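A brute-force check of which small primes have the property (my own sketch; it collects one full period of powers of $2$ modulo $p$ and looks for an incongruent pair summing to $1$):

```python
def has_no_representation(p):
    """True if no incongruent 2^a, 2^b (mod p) satisfy 2^a + 2^b = 1 (mod p)."""
    powers = set()
    x = 2 % p
    while x not in powers:          # collect one full period of powers of 2
        powers.add(x)
        x = (2 * x) % p
    return not any(u != v and (u + v) % p == 1
                   for u in powers for v in powers)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

good = [p for p in range(3, 100, 2) if is_prime(p) and has_no_representation(p)]
print(good)  # [3, 7, 31, 73, 89]
```

For primes below $100$ this reproduces exactly the list above, including the Mersenne primes $3$, $7$, $31$.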
Thanks in advance! |
I need some viable sources for entropy to seed a CSPRNG. So far, I have:
- JS Events
- Web Crypto API
- performance.now()
- Timing for xmlHTTPRequests etc.
Are there any other viable/secure entropy sources that I can use/access in JavaScript?
There is an information source on the alphabet $A = \{a, b, c\}$ represented by the state transition diagram below:
a) The random variable representing the $i$-th output from this information source is denoted $X_i$. It is known that the source is now in state $S_1$. In this state, let $H(X_i|s_1)$ denote the entropy when observing the next symbol $X_i$. Find the value of $H(X_i|s_1)$ and the entropy of this information source; calculate $H(X_i|X_{i-1})$ and $H(X_i)$ respectively. Assume $i$ is quite large.
How can I find $H(X_i|s_1)$? I know that $$H(X_i|s_1) = -\sum_{i,s_1} p\left(x_i, s_1\right)\cdot\log_b\!\left(p\left(x_i|s_1\right)\right) = -\sum_{i,j} p\left(x_i, s_1\right)\cdot\log_b\!\left(\frac{p\left(x_i, s_1\right)}{p\left(s_1\right)}\right)$$ but I don't know $p(s_1)$.
$$A=\begin{pmatrix}0.25 & 0.75 & 0\\ 0.5 & 0 & 0.5 \\ 0 & 0.7 & 0.3 \end{pmatrix}.$$
From the matrix I know that $p(s_1|s_1)=0.25$, etc.
But what is the probability of $ s_1$ ? And how can I calculate $ H (X_i|X_{i-1})$ ?
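As a hedged sketch (assuming the rows of $A$ give the transition probabilities out of each state, that each transition from a state emits a distinct symbol, and that for large $i$ the chain is in its stationary distribution), the missing $p(s_1)$ is a left eigenvector of $A$ for eigenvalue $1$:

```python
import numpy as np

A = np.array([[0.25, 0.75, 0.0],
              [0.5,  0.0,  0.5],
              [0.0,  0.7,  0.3]])   # row i: transition probabilities from state i

# Stationary distribution: pi A = pi, sum(pi) = 1
w, v = np.linalg.eig(A.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()                  # Perron vector, normalized to sum 1

def H(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

H_given_s1 = H(A[0])                              # H(X_i | s_1)
H_rate = sum(pi[i] * H(A[i]) for i in range(3))   # H(X_i | X_{i-1})
print(pi, H_given_s1, H_rate)
```

This gives $p(s_1)=0.28$, $p(s_2)=0.42$, $p(s_3)=0.30$, so $H(X_i|s_1)\approx 0.811$ bits and $H(X_i|X_{i-1})\approx 0.912$ bits. Computing $H(X_i)$ itself would additionally need the symbol labels from the (missing) diagram.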
Let’s say I have an image where all pixel values are either 0 or 1. What I’d like to do is to be able to generate a new image with the same dimensions where each pixel represents how “ordered” the area around the corresponding pixel in the original image is. In particular, I’m looking for “spatial” order: whether or not there is some regularity or pattern in that local area. This could then be used to segment an image into regions of relative order and regions of relative disorder.
For example:
and
are both highly ordered. On the other hand,
probably has varying levels of order within the image but is overall disordered. Finally, an image like
has areas of order (bottom left and to some extent top right) and disorder (rest of the image).
I’ve considered taking some general measure of entropy (like Shannon’s image entropy) and applying it with a moving window across the image, but my understanding is that most measures of entropy do not capture much about the spatial aspects of the image. I’ve also come across the concept of “lacunarity” which looks promising (it’s been used to segment e.g., anthropogenic structures from natural landscapes on the basis of homogeneity) but I’m having a hard time understanding how it works and thus if it’s truly appropriate. Could either of these concepts be made to work for what I’m asking, or is there something else I haven’t considered?
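For concreteness, a moving-window Shannon entropy of the kind the question considers can be sketched as follows (a hypothetical `local_entropy` helper). It also illustrates the stated worry: a checkerboard window is perfectly ordered spatially, yet scores near the maximum 1 bit, because only the pixel histogram is consulted.

```python
import numpy as np

def local_entropy(img, radius=2):
    """Shannon entropy of the 0/1 histogram in a (2r+1)x(2r+1)
    window around each pixel (window clipped at the borders)."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            p = win.mean()               # fraction of 1-pixels in the window
            if 0 < p < 1:
                out[i, j] = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return out

flat = np.zeros((9, 9), dtype=int)              # perfectly ordered
checker = np.indices((9, 9)).sum(axis=0) % 2    # also ordered, but 50/50 histogram
print(local_entropy(flat)[4, 4], local_entropy(checker)[4, 4])
```

The flat patch scores 0 bits while the checkerboard scores ≈1 bit, confirming that plain histogram entropy cannot distinguish spatial order from noise; a spatially aware measure (lacunarity, or entropy of co-occurrence statistics) is needed for that.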
Assuming I have a secret key of sufficient length and entropy (I get to decide the length and have a good random source).
I would like to generate 256-bit keys by hashing the root key with the name of each key, e.g.:
key1 = sha256(rootKey + "key1")
key2 = sha256(rootKey + "key2")
...
keyN = sha256(rootKey + "keyN")
Is the sha256 hash a good choice?
If yes, what length should the root secret be? I’m thinking 256 bits is pretty good, but it wouldn’t cost much to make it bigger…
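A minimal sketch of the proposed scheme in Python (the helper name `derive` is mine; the `rootKey + "keyN"` concatenation is kept exactly as in the question):

```python
import hashlib
import secrets

root_key = secrets.token_bytes(32)   # a 256-bit root secret, as proposed

def derive(root: bytes, name: str) -> bytes:
    """Derive a 256-bit subkey as sha256(rootKey || name)."""
    return hashlib.sha256(root + name.encode()).digest()

key1 = derive(root_key, "key1")
key2 = derive(root_key, "key2")
assert key1 != key2 and len(key1) == 32   # 32 bytes = 256 bits
```

For what it's worth, the standard constructions for exactly this job are HKDF (RFC 5869) or `hmac.new(root_key, name, hashlib.sha256)`; plain `sha256(key || name)` is generally considered workable when the labels are fixed and distinct, but HMAC sidesteps length-extension subtleties entirely.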
This is not necessarily a research question, since I do not know, if someone is working on this or not, but I hope to gain some insight by asking it here:
The idea behind this question is to attach an entropy to a natural number in a “natural” way, such that “multiplication increases entropy”. (Of course one can attach very different entropies to natural numbers such that multiplication reduces entropy, but I will try to give an argument why this choice is natural.) Let $n$ be a composite number $\ge 2$ and $\phi$ the Euler totient function. Suppose a factorization algorithm $A$ outputs a number $X$, $1 \le X \le n-1$, such that $1 < \gcd(X,n) < n$, each with equal probability $\frac{1}{n-1-\phi(n)}$. Then we can view $X$ as a random variable and attach to it the entropy $H(X_n):=H(n):= \log_2(n-1-\phi(n))$.
The motivation behind this definition comes from an analogy to physics: I think that “one-way-functions” correspond to the “arrow of time” in physics. Since “the arrow of time” increases entropy, so should “one-way-functions”, if they exist. Since integer factorization is known to be a candidate for owf, my idea was to attach an entropy which would increase when multiplying two numbers.
It is proved here ( https://math.stackexchange.com/questions/3275096/does-entropy-increase-when-multiplying-two-numbers ) that:
$ H(mn) > H(n) + H(m)$ for all composite numbers $ n \ge 2, m \ge 2$
The entropy of $ n=pq$ , $ p<q<2p$ will be $ H(pq) = \log_2(p+q-1) > \log_2(2p-1) \approx \log_2(2p) = 1+\log_2(p)$ .
At the beginning of the factorization, the entropy of $ X_n$ , $ n=pq$ will be $ \log_2(p+q-1)$ since it is “unclear” which $ X=x$ will be printed. The algorithm must output $ X=x$ as described above. But knowing the value of $ X$ , this will reduce the entropy of $ X$ to $ 0$ . So the algorithm must reduce the entropy from $ \log_2(p+q-1)$ to $ 0$ . From physics it is known, that reducing entropy can be done with work. Hence the algorithm “must do some work” to reduce the entropy.
My question is, which functions do reduce entropy? (So I am thinking about how many function calls will the algorithm at least make to reduce the entropy by the amount described above?)
Thanks for your help!
Does multiplication increase entropy?
The Shannon entropy of a number $k$ in binary digits is defined as $$H = -\log\left(\frac{a}{l}\right)\cdot\frac{a}{l} - \log\left(1-\frac{a}{l}\right)\cdot \left(1-\frac{a}{l}\right)$$ where $l = \left\lfloor\frac{\log(k)}{\log(2)}\right\rfloor$ is the number of binary digits of $k$ and $a$ is the number of $1$-s in the binary expansion of $k$. So we view the number $k$ as a “random variable”.
Suppose that $n,m$ are uniformly randomly chosen in the interval $1 \le n, m \le N$.
Hypothesis 1):
$H_{m \cdot n}$ is “significantly” larger than $H_n$.
Hypothesis 2):
$H_{m + n}$ is not “significantly” larger than $H_n$.
Here is some empirical statistical test indicating that multiplication increases entropy, but addition does not:
def entropyOfCounter(c):
    S = 0
    for k in c.keys():
        S += c[k]
    prob = []
    for k in c.keys():
        prob.append(c[k]/S)
    H = -sum([p*log(p,2) for p in prob]).N()
    return H

def HH(l):
    return entropyOfCounter(Counter(l))

N = 10^4
HN = []
HmXn = []
HmPn = []
for k in range(N):
    n = randint(1,17^50)
    m = randint(1,17^50)
    Hn = HH(Integer(n).digits(2))
    Hm = HH(Integer(m).digits(2))
    HmXn.append(HH(Integer(n*m).digits(2)))
    HmPn.append(HH(Integer(n+m).digits(2)))
    HN.append(Hn)
X = mean(HN)
Y = mean(HmPn)
Z = mean(HmXn)
n = len(HN)
m = n
SX2 = variance(HN)
SY2 = variance(HmPn)
SZ2 = variance(HmXn)
SXY2 = ((n-1)*SX2 + (m-1)*SY2)/(n+m-2)
SXZ2 = ((n-1)*SX2 + (m-1)*SZ2)/(n+m-2)
TXY = sqrt((m*n)/(n+m)).N()*(X-Y)/sqrt(SXY2).N()
TXZ = sqrt((m*n)/(n+m)).N()*(X-Z)/sqrt(SXZ2).N()
print TXY, TXZ, n+m-2

Output: -1.43265218355297 -32.5323306851490 19998
The second case (multiplication) increases entropy significantly. The first case (addition) does not.
Is there a way to give a heuristic explanation of why this is so in general (if it is), or is this empirical observation wrong in general for $1 \le n, m \le N$?
Related: https://physics.stackexchange.com/questions/487780/increase-in-entropy-and-integer-factorization-how-much-work-does-one-have-to-do
Suppose I am given a target density $\pi$ on $\mathbf R^d$ and a family of transition kernels $\{ q^0 (x \to \cdot) \}_{x \in \mathbf R^d}$. I now want to find a new family of transition kernels $\{ q^1 (x \to \cdot) \}_{x \in \mathbf R^d}$ satisfying the detailed balance condition
\begin{align} \pi (x) q^1 ( x \to y ) = \pi (y) q^1 ( y \to x) \end{align}
To this end, I introduce the following functional
\begin{align} \mathbb{F} [ q^1 ] &= \int_{x \in \mathbf R^d} \pi (x) \cdot KL \left( q^1 (x \to \cdot ) \,||\, q^0 (x \to \cdot ) \right) dx \\ &= \int_{x \in \mathbf R^d} \int_{y \in \mathbf R^d} \pi (x) \cdot q^1 (x \to y) \log \frac{ q^1 (x \to y) }{ q^0 (x \to y) } \, dx \, dy \\ &= \int_{x \in \mathbf R^d} \int_{y \in \mathbf R^d} \pi (x) \cdot \left\{ q^1 (x \to y) \log \frac{ q^1 (x \to y) }{ q^0 (x \to y) } - q^1 (x \to y) + q^0 (x \to y) \right\} \, dx \, dy. \end{align}
I thus wish to minimise $ \mathbb{F}$ over all transition kernels $ \{ q^1 (x \to \cdot) \}_{x \in \mathbf R^d}$ satisfying the desired detailed balance condition. Implicitly, there are also the constraints that the $ q^1$ are nonnegative and normalised.
I would like to solve this minimisation problem, or at least derive e.g. an Euler-Lagrange equation for it.
Currently, I can show that the first variation of $ \mathbb{F}$ is given by
\begin{align} \left( \frac{d}{dt} \vert_{t = 0} \right) \mathbb{F} [q^1 + t h] = \int_{x \in \mathbf R^d} \int_{y \in \mathbf R^d} \pi (x) \cdot \log \frac{ q^1 (x \to y) }{ q^0 (x \to y) } \cdot h (x, y) \, dx dy. \end{align}
Moreover, my constraints stipulate that any admissible variation $ h$ must satisfy the following two conditions:
I have not been able to translate these conditions into an Euler-Lagrange equation. I acknowledge that since the functional involves no derivatives of $ q^1$ , the calculus of variations approach may be ill-suited. If readers are able to recommend alternative approaches, this would also be appreciated. Anything which would allow for a more concrete characterisation of the optimal $ q^1$ would be ideal.
I’ve seen two versions of the cross entropy cost function, and conflicting information about it. \begin{equation}J(\theta) = -\frac{1}{N} \sum_{n=1}^N\sum_{i=1}^C y_{ni}\log \hat{y}_{ni} (\theta)\end{equation} $$ C(\theta) = -\frac{1}{N}\sum_{n=1}^N \sum_{i=1}^{C}[ y_{ni}\log (\hat{y}_{ni})+ (1-y_{ni}) \log(1-\hat{y}_{ni})] $$
Some are saying that the second one is equivalent to the first for the case where there are only two classes, which makes sense. But couldn’t the second one also be used for more than two classes? For example, say we have three-class classification, with the ground truth $\vec{y} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ and $\vec{\hat{y}} =\begin{bmatrix} 0.7 \\ 0.1 \\ 0.2 \end{bmatrix}$. Then with the first equation, cross entropy would simply be $-\log(0.7)$. But couldn’t we also use the second equation and calculate cross entropy as $-\log(0.7) - \log(0.9) - \log(0.8)?$ When would we use the first equation and when would we use the second one, and why?
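The two candidate values from the example can be checked directly (a small sketch):

```python
import math

y    = [1, 0, 0]        # ground truth
yhat = [0.7, 0.1, 0.2]  # predicted probabilities

# First (categorical) form: only the true class contributes.
ce_categorical = -sum(t * math.log(p) for t, p in zip(y, yhat))

# Second (per-class binary) form: every class contributes.
ce_binary = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                 for t, p in zip(y, yhat))

print(ce_categorical)  # equals -log(0.7)
print(ce_binary)       # equals -log(0.7) - log(0.9) - log(0.8)
```

The usual distinction: the first form assumes mutually exclusive classes whose predicted probabilities sum to 1 (softmax output), while the second treats each class as an independent Bernoulli variable (per-class sigmoid outputs), as in multi-label classification.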
Let $ x \in \{0,1\}^n$ be uniformly at random. What is an estimate for the entropy of moments, $ H(\sum_i x_i, \sum_i i\cdot x_i, \sum_i i^2\cdot x_i)$ ?
Let’s say I need to generate a 32-character secret comprised of ASCII characters from the set ‘0’..’9′. Here’s one way of doing it:
VALID_CHARS = '0123456789'

generate_secret_string() {
    random = get_crypto_random_bytes(32)
    secret = ''
    for (i = 0; i < 32; i++) {
        secret += VALID_CHARS[random[i] % 10]
    }
    return secret
}
My concern is that my character selection is biased. Because 10 doesn’t divide evenly into 256, the first 6 VALID_CHARS are slightly more likely to occur.
The secret space is 10^32, but my generated secrets have less entropy than that. How can I calculate precisely how much entropy I actually have?
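To quantify the bias: since $256 = 25\cdot 10 + 6$, digits 0–5 each arise from 26 byte values and digits 6–9 from 25. The exact Shannon entropy per character then follows directly (a sketch):

```python
import math

# probability of each digit when a uniform byte is reduced mod 10
probs = [(26 if d < 6 else 25) / 256 for d in range(10)]
assert abs(sum(probs) - 1) < 1e-12

H_char = -sum(p * math.log2(p) for p in probs)   # bits per character
H_secret = 32 * H_char                           # bits per 32-char secret

print(H_char, H_secret, 32 * math.log2(10))      # compare with the unbiased ideal
```

This gives ≈3.3217 bits per character versus the ideal $\log_2 10 \approx 3.3219$, i.e. ≈106.29 bits per secret versus 106.30. Note that Shannon entropy is one answer; for guessing attacks the min-entropy, $32\cdot(-\log_2(26/256)) \approx 105.6$ bits, is arguably the more relevant figure.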
Recall that a unit lower triangular matrix $L\in\mathbb{R}^{n\times n}$ is a lower triangular matrix with diagonal elements $e_i^{T}L e_i = \lambda_{ii} = 1$. An elementary unit lower triangular column form matrix, $L_i$, is an elementary unit lower triangular matrix in which all of the nonzero subdiagonal elements are contained in a single column. For example, for $n = 4$
$$L_1 = \begin{pmatrix} 1 & 0 & 0 & 0\\ \lambda_{21} & 1 & 0 & 0\\ \lambda_{31} & 0 & 1 & 0\\ \lambda_{41} & 0 & 0 & 1\\ \end{pmatrix} \ \ \ L_2 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & \lambda_{32} & 1 & 0\\ 0 & \lambda_{42} & 0 & 1\\ \end{pmatrix} \ \ \ L_3 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & \lambda_{43} & 1\\ \end{pmatrix}$$
Our first task was to show that any unit lower triangular column form matrix, $L_i\in\mathbb{R}^{n\times n}$, can be written as the identity matrix plus an outer product of two vectors, i.e., $L_i = I + v_i w_i^{T}$ where $v_i\in\mathbb{R}^{n}$ and $w_i\in \mathbb{R}^n$.
solution - Since only the $i$-th column of $L_i$ differs from the identity matrix the outer product $v_i w_i^{T}$ must have the same structure. This implies that $w_i = e_i$ and it follows that $v_i$ is added to the $i$-th column of $I$ to define $L_i e_i$. Since only elements below the main diagonal element are different from $I$, it follows that $v_i$ has a "lower" structure to its potentially nonzero elements. This is often indicated in the notation by using $l_i$ instead of the generic $v_i$. The conditions on the vector are $$l_i^{T}e_j = \begin{cases}0 \ & 1\leq j \leq i\\ \lambda_{ji} \ & i+1\leq j \leq n \end{cases}$$
and the expression is $L_i = I + l_i e_i^{T}$
Now the question I have is the following:
i.) Suppose $L_i\in\mathbb{R}^{n\times n}$ and $L_j\in\mathbb{R}^{n\times n}$ are elementary unit lower triangular column form matrices with $1\leq i < j \leq n-1$. Consider the matrix product $B = L_i L_j$. Determine an efficient algorithm to compute the product and its computational and storage complexity.
ii.) Suppose $L_i\in\mathbb{R}^{n\times n}$ and $L_j\in\mathbb{R}^{n\times n}$ are elementary unit lower triangular column form matrices with $1\leq j \leq i \leq n-1$. Consider the matrix product $B = L_i L_j$. Determine an efficient algorithm to compute the product and its computational and storage complexity.
The only difference between (i) and (ii) is the inequalities, as you can see. I have been told that (i) requires no computation but I don't understand why. I am quite confused about these types of problems. Any suggestions are greatly appreciated.
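The key identity is $(I + l_i e_i^T)(I + l_j e_j^T) = I + l_i e_i^T + l_j e_j^T + (e_i^T l_j)\, l_i e_j^T$. For $i<j$ the scalar $e_i^T l_j$ is zero (all nonzeros of $l_j$ sit below row $j$), so the product is formed by merely writing both multiplier columns into the identity: no arithmetic. For $j \le i$ the cross term survives, costing about $n-i$ multiply-adds on column $j$. A numeric check (0-based indices, NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def elementary(i):
    """L_i = I + l_i e_i^T (0-based i), multipliers only below the diagonal."""
    l = np.zeros(n)
    l[i+1:] = rng.standard_normal(n - i - 1)
    L = np.eye(n)
    L[:, i] += l
    return L, l

L1, l1 = elementary(0)
L2, l2 = elementary(1)
L3, l3 = elementary(2)

# Case i < j: e_i^T l_j = 0, so L_i L_j = I + l_i e_i^T + l_j e_j^T.
# The product is assembled by just storing both multiplier columns.
B = np.eye(n)
B[:, 0] += l1
B[:, 1] += l2
assert np.allclose(B, L1 @ L2)

# Case j <= i: the cross term l_i (e_i^T l_j) e_j^T survives, so column j
# of the product is l_j + lambda_{ij} * l_i.
C = np.eye(n)
C[:, 2] += l3
C[:, 0] += l1 + l1[2] * l3
assert np.allclose(C, L3 @ L1)
```

Storage in both cases is just the two multiplier vectors plus, in case (ii), the updated column.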
Neutral currents in alternative U3(W)-gauge models of weak and electromagnetic interactions

Permanent link: https://www.ias.ac.in/article/fulltext/pram/012/04/0419-0425

Two alternative U3(W)-gauge models are presented. Both agree with the recent Abbott-Barnett fits to the neutrino-nucleon neutral-current data, and with the SLAC measurement of the asymmetry parameter for longitudinally polarised electron-deuteron inelastic scattering. Results for $\sigma \left( {\nu _\mu e} \right), \sigma \left( {\bar \nu _\mu e} \right)$ are also found in agreement with the latest measurements. The models differ in the parameter $Q_W(Z, N)$ characterising parity-violation in heavy atoms for which, however, the experimental situation is still unclear.
Last year I made a post about the universal program, a Turing machine program $p$ that can in principle compute any desired function, if it is only run inside a suitable model of set theory or arithmetic. Specifically, there is a program $p$, such that for any function $f:\newcommand\N{\mathbb{N}}\N\to\N$, there is a model $M\models\text{PA}$ — or of $\text{ZFC}$, whatever theory you like — inside of which program $p$ on input $n$ gives output $f(n)$.
This theorem is related to a very interesting theorem of W. Hugh Woodin’s, which says that there is a program $e$ such that $\newcommand\PA{\text{PA}}\PA$ proves $e$ accepts only finitely many inputs, but such that for any finite set $A\subset\N$, there is a model of $\PA$ inside of which program $e$ accepts exactly the elements of $A$. Actually, Woodin’s theorem is a bit stronger than this in a way that I shall explain.
Victoria Gitman gave a very nice talk today on both of these theorems at the special session on Computability theory: Pushing the Boundaries at the AMS sectional meeting here in New York, which happens to be meeting right here in my east midtown neighborhood, a few blocks from my home.
What I realized this morning, while walking over to Vika’s talk, is that there is a very simple proof of the version of Woodin’s theorem stated above. The idea is closely related to an idea of Vadim Kosoy mentioned in my post last year. In hindsight, I see now that this idea is also essentially present in Woodin’s proof of his theorem, and indeed, I find it probable that Woodin had actually begun with this idea and then modified it in order to get the stronger version of his result that I shall discuss below.
But in the meantime, let me present the simple argument, since I find it to be very clear and the result still very surprising.
Theorem.

1. There is a Turing machine program $e$, such that $\PA$ proves that $e$ accepts only finitely many inputs.
2. For any particular finite set $A\subset\N$, there is a model $M\models\PA$ such that inside $M$, the program $e$ accepts all and only the elements of $A$.
3. Indeed, for any set $A\subset\N$, including infinite sets, there is a model $M\models\PA$ such that inside $M$, program $e$ accepts $n$ if and only if $n\in A$.

Proof. The program $e$ simply performs the following task: on any input $n$, search for a proof from $\PA$ of a statement of the form “program $e$ does not accept exactly the elements of $\{n_1,n_2,\ldots,n_k\}$.” Accept nothing until such a proof is found. For the first such proof that is found, accept $n$ if and only if $n$ is one of those $n_i$’s.
In short, the program $e$ searches for a proof that $e$ doesn’t accept exactly a certain finite set, and when such a proof is found, it accepts exactly the elements of this set anyway.
Clearly, $\PA$ proves that program $e$ accepts only a finite set, since either no such proof is ever found, in which case $e$ accepts nothing (and the empty set is finite), or else such a proof is found, in which case $e$ accepts only that particular finite set. So $\PA$ proves that $e$ accepts only finitely many inputs.
But meanwhile, assuming $\PA$ is consistent, then you cannot refute the assertion that program $e$ accepts exactly the elements of some particular finite set $A$, since if you could prove that from $\PA$, then program $e$ actually would accept exactly that set (for the shortest such proof), in which case this would also be provable, contradicting the consistency of $\PA$.
Since you cannot refute any particular finite set as the accepting set for $e$, it follows that it is consistent with $\PA$ that $e$ accepts any particular finite set $A$ that you like. So there is a model of $\PA$ in which $e$ accepts exactly the elements of $A$. This establishes statement (2).
Statement (3) now follows by a simple compactness argument. Namely, for any $A\subset\N$, let $T$ be the theory of $\PA$ together with the assertions that program $e$ accepts $n$, for any particular $n\in A$, and the assertions that program $e$ does not accept $n$, for $n\notin A$. Any finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. Any model of this theory realizes statement (3).
QED
One uses the Kleene recursion theorem to show the existence of the program $e$, which makes reference to $e$ in the description of what it does. Although this may look circular, it is a standard technique to use the recursion theorem to eliminate the circularity.
This theorem immediately implies the classical result of Mostowski and Kripke that there is an independent family of $\Pi^0_1$ assertions, since the assertions $n\notin W_e$ are exactly such a family.
The theorem also implies a strengthening of the universal program theorem that I proved last year. Indeed, the two theorems can be realized with the same program!
Theorem. There is a Turing machine program $e$ with the following properties:

1. $\PA$ proves that $e$ computes a finite function;
2. For any particular finite partial function $f$ on $\N$, there is a model $M\models\PA$ inside of which program $e$ computes exactly $f$.
3. For any partial function $f:\N\to\N$, finite or infinite, there is a model $M\models\PA$ inside of which program $e$ on input $n$ computes exactly $f(n)$, meaning that $e$ halts on $n$ if and only if $f(n)\downarrow$ and in this case $\varphi_e(n)=f(n)$.

Proof. The proof of statements (1) and (2) is just as in the earlier theorem. It is clear that $e$ computes a finite function, since either it computes the empty function, if no proof is found, or else it computes the finite function mentioned in the proof. And you cannot refute any particular finite function for $e$, since if you could, it would have exactly that behavior anyway, contradicting $\text{Con}(\PA)$. So statement (2) holds. But meanwhile, we can get statement (3) by a simple compactness argument. Namely, fix $f$ and let $T$ be the theory asserting $\PA$ plus all the assertions that $\varphi_e(n)\uparrow$, if $n$ is not in the domain of $f$, and that $\varphi_e(n)=k$, if $f(n)=k$. Every finite subtheory of this theory is consistent, by statement (2), and so the whole theory is consistent. But any model of this theory exactly fulfills statement (3). QED
Woodin’s proof is more difficult than the arguments I have presented, but I realize now that this extra difficulty is because he is proving an extremely interesting and stronger form of the theorem, as follows.
Theorem. (Woodin) There is a Turing machine program $e$ such that $\PA$ proves $e$ accepts at most a finite set, and for any finite set $A\subset\N$ there is a model $M\models\PA$ inside of which $e$ accepts exactly $A$. And furthermore, in any such $M$ and any finite $B\supset A$, there is an end-extension $M\subset_{end} N\models\PA$, such that in $N$, the program $e$ accepts exactly the elements of $B$.
This is a much more subtle claim, as well as philosophically interesting for the reasons that he dwells on.
The program I described above definitely does not achieve this stronger property, since my program $e$, once it finds the proof that $e$ does not accept exactly $A$, will accept exactly $A$, and this will continue to be true in all further end-extensions of the model, since that proof will continue to be the first one that is found.
Let us start with the log-linear model. Let \(x\) be the data we are trying to model, \(\phi(x)\) a vector of features, and \(w\) a vector of model parameters. (Here I use \(x\) to denote the whole training set, more typically denoted as \(\{(x_1,y_1), (x_2,y_2), \ldots, (x_m,y_m)\}\), to reduce notational clutter). The generative log-linear model defines the probability of a particular dataset we observed, \(x_o\), as: \[ \begin{eqnarray*} p(x_o) &=& (1/Z) \exp w^\top \phi(x_o) \\ Z &=& \sum_{x} \exp w^\top \phi(x) \end{eqnarray*} \] where \(Z\), the normalization constant, or as physicists call it the partition function, is computed as a sum over all possible observations \(x\). To maximize the log likelihood of the observed data \(x_o\) we compute its gradient with respect to coefficients \(w_k\) that correspond to features \(\phi_k(x)\): \[ \begin{eqnarray*} \log p(x_o) &=& w^\top \phi(x_o) - \log Z \\ \frac{\partial}{\partial w_k} w^\top \phi(x_o) &=& \phi_k(x_o) \\ \frac{\partial}{\partial w_k}\log Z &=& (1/Z) \sum_{x} (\exp w^\top \phi(x)) \, \phi_k(x) \\ &=& \sum_{x} p(x) \phi_k(x) = \langle\phi_k\rangle \\ \frac{\partial}{\partial w_k}\log p(x_o) &=& \phi_k(x_o) - \langle\phi_k\rangle \end{eqnarray*} \] which shows that at the maximum, where the derivative is zero, the model expectation of the k'th feature \(\langle\phi_k\rangle\) should be equal to its value in the observed data \(\phi_k(x_o)\). That is very nice, but it does not tell us how to set \(w_k\) to achieve that maximum, for that we still need to run an optimizer. To compute the gradient for the optimizer, or even the likelihood of a particular \(w\) we need to compute a sum over all possible \(x\), which is completely impractical. The typical solution is to sample a random \(x\) from the distribution \(p(x)\) and use \(\phi_k(x)\) as an unbiased estimate to replace \(\langle\phi_k\rangle\). But sampling from a distribution with an unknown \(Z\) is no easy matter either. 
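On a toy space small enough to enumerate, the identity $\frac{\partial}{\partial w_k}\log p(x_o) = \phi_k(x_o) - \langle\phi_k\rangle$ can be verified against a numerical derivative (a sketch; the feature map here is made up):

```python
import math

# Toy model: x ranges over {0, 1, 2, 3}; two hypothetical features.
X = [0, 1, 2, 3]
def phi(x):
    return [x, x * x]

def log_p(w, x):
    # log p(x) = w . phi(x) - log Z, with Z summed over the whole (tiny) space
    z = sum(math.exp(sum(wk * fk for wk, fk in zip(w, phi(y)))) for y in X)
    return sum(wk * fk for wk, fk in zip(w, phi(x))) - math.log(z)

def grad(w, x):
    # analytic gradient: phi_k(x_o) minus the model expectation of phi_k
    ps = [math.exp(log_p(w, y)) for y in X]
    exp_phi = [sum(p * phi(y)[k] for p, y in zip(ps, X)) for k in range(2)]
    return [phi(x)[k] - exp_phi[k] for k in range(2)]

w, x_o, eps = [0.1, -0.2], 2, 1e-6
num = [(log_p([w[0]+eps, w[1]], x_o) - log_p([w[0]-eps, w[1]], x_o)) / (2*eps),
       (log_p([w[0], w[1]+eps], x_o) - log_p([w[0], w[1]-eps], x_o)) / (2*eps)]
print(grad(w, x_o), num)   # the two should agree
```

With $|X|=4$ the sum over all $x$ is trivial; the whole difficulty discussed above is that real observation spaces are far too large to enumerate like this.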
Now you know why generative log-linear models are not very popular.
Let us now turn to multinomial models like HMM. How do they define a distribution over all possible observations, yet avoid computing very large sums? They (very carefully) get rid of the big \(Z\). In particular, they come up with a generative story where the observed data is built piece by piece, and the choice for each piece depends only on a subset of the previously selected pieces. For example, an HMM is a simple machine which picks a current state based on its previous state (or last few states), and its current output based on the current state. To compute the probability of a particular state-output sequence we just multiply the conditional probabilities for each of the choices.
It is difficult to find a notation to express multinomial models that is simple, general, and that makes their relation to log-linear models obvious. Here is an attempt: let \(d\) be a decision, \(c\) be a condition, \(p(d|c)\) the probability of making decision \(d\) under condition \(c\), and \(n_o(c,d)\) the number of times we made decision \(d\) under condition \(c\) while constructing the observed data \(x_o\). For example in an HMM, \(d\) would be a member of states and outputs, \(c\) would be a member of states. We can then express the probability of the observed data \(x_o\) as: \[ \begin{eqnarray*} p(x_o) &=& \prod_{c,d} p(d|c)^{n_o(c,d)} \\ \log p(x_o) &=& \sum_{c,d} n_o(c,d) \log p(d|c) \end{eqnarray*} \] If we define \(\phi\) and \(w\) vectors indexed by \((c,d)\) pairs as: \[ \begin{eqnarray*} \phi_{c,d}(x_o) &=& n_o(c,d)\\ w_{c,d} &=& \log p(d|c) \end{eqnarray*} \] we can write the log likelihood as: \[ \log p(x_o) = w^\top \phi(x_o) \] which looks exactly like the log-linear model except for the missing \(Z\) term! Of course that doesn't mean we can pick any old \(w\) vector we want. In particular the probability of our decisions under each condition should sum to 1: \[ \forall c \sum_d p(d|c) = \sum_d \exp w_{c,d} = 1 \] To find the \(w\) that satisfies these constraints and maximizes log-likelihood, we use Lagrange multipliers: \[ \begin{eqnarray*} L(w,\lambda) &=& w^\top \phi(x_o) + \sum_c \lambda_c (1 - \sum_d \exp w_{c,d}) \\ \frac{\partial}{\partial w_{c,d}} L &=& \phi_{c,d}(x_o) - \lambda_c \exp w_{c,d} \\ &=& n_o(c,d) - \lambda_c p(d|c) = 0 \\ p(d|c) &=& n_o(c,d) / \lambda_c \end{eqnarray*} \] which shows that the maximum likelihood estimates for conditional probabilities \(p(d|c)\) are proportional to the observed counts \(n_o(c,d)\). To satisfy the sum to 1 constraint, we obtain \(p(d|c) = n_o(c,d)/n_o(c)\).
So, multinomial models are specific cases of log-linear models after all. However they avoid the big \(Z\) calculation by carefully defining the features and the corresponding parameters so as to ensure \(Z=1\). They do this by decomposing the observation into many small parts and making sure the decisions that generate each part are individually normalized. If each decision involves one of 10 choices and we need to make 20 decisions to construct the whole observation the set of all possible observations would have \(10^{20}\) elements. Normalizing 20 decisions with 10 elements each is easier than normalizing a set with \(10^{20}\) elements.
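The closed-form solution $p(d|c) = n_o(c,d)/n_o(c)$ is just count-and-normalize; a sketch with hypothetical HMM-ish (condition, decision) events:

```python
from collections import Counter

# hypothetical (condition, decision) events observed while generating x_o
events = [("S1", "S2"), ("S1", "a"), ("S2", "S1"), ("S2", "b"),
          ("S1", "S2"), ("S1", "a"), ("S2", "S2"), ("S2", "b")]

n = Counter(events)                   # n_o(c, d)
n_c = Counter(c for c, _ in events)   # n_o(c)

p = {(c, d): n[(c, d)] / n_c[c] for (c, d) in n}

# decisions under each condition sum to 1, i.e. Z = 1 by construction
for c in n_c:
    assert abs(sum(p[(c2, d)] for (c2, d) in p if c2 == c) - 1) < 1e-12
print(p)
```

Each local normalization is over the handful of decisions available under one condition, which is exactly how the giant global sum over all observations is avoided.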
Questions for the interested reader:

1. Show that normalization of each part ensures the normalization of the full observation.
2. The log-linear solution above used an explicit Z, whereas the multinomial solution used constrained optimization. Can you use constrained optimization for log-linear models or an explicit Z for multinomial models, and does that make a difference?
3. LSP mentions disadvantages of locally normalized conditional models compared to globally normalized conditional models. Do the same arguments apply to globally normalized generative log-linear models vs locally normalized generative multinomial models?
4. Unsupervised learning can be thought of as hiding part of x from view and learning weights that maximize the likelihood given the visible portion. Is inference with multinomial models still easy in the unsupervised case? How about log-linear models?
5. In the world of neural networks, log-linear models are analogous to "energy based", "undirected" or "symmetric" models like Hopfield nets and Boltzmann machines whereas multinomial models are analogous to "causal", "directed" models like feed-forward neural nets or sigmoid belief nets. Show that analogous arguments apply to these models.
6. In the world of probabilistic graphical models, log-linear models are instances of "undirected" or "Markov" networks, whereas multinomial models are instances of "directed, acyclic" or "Bayesian" networks. Show that analogous arguments apply to these networks.
Good day everyone. I was reading some more advanced lectures on complex analysis and ran into a lot of questions concerning determinations of the complex logarithm. So far I don't even understand the concept itself, but I'll provide several practical questions concerning the topic.
First of all, a determination of the complex logarithm on an open set $\Omega \subset \mathbb{C}$ is a continuous function $f(w)$ such that:
$$\forall w \in \Omega \text{ }\exp(f(w))=w$$
So then it starts. Maybe someone could explain which assumptions the following statement would contradict.
There is no continuous determination for complex logarithm in $\mathbb{C}$ \ ${0}$.
Second part is more practical.
We say that a determination of the complex logarithm is principal if it is given on the complement in $\mathbb{C}$ of the semi-axis of negative or zero reals, $\Omega_{\pi}=\mathbb{C} \setminus \{z\in \mathbb{C} : \Im(z)=0,\ \Re(z) \leq 0\}$, such as $$ f(z) = \log(|z|)+ i\begin{cases} \arcsin{(y/|z|)} & x \geq 0, \\ \pi - \arcsin{(y/|z|)} & x \leq 0,\, y \geq 0, \\ -\pi - \arcsin{(y/|z|)} & x \leq 0,\, y \leq 0. \end{cases}$$
We can see that the argument belongs to $(-\pi,\pi]$, but I understand neither why a set without the negative reals defines such an argument, nor how this happens. After this there is an example saying that if we take $\Omega_0=\mathbb{C} \setminus \{z \in \mathbb{C} : \Im(z)=0,\ \Re(z) \geq 0\}$ then the argument will be $(0,2\pi]$, but I also didn't get how this happens. What will happen if we take out the non-positive imaginary semi-axis? What kind of argument will we have then?
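Numerically the two determinations differ only in where the argument is allowed to jump; a sketch (the helper names are mine):

```python
import cmath
import math

def log_principal(z):
    """Principal determination: argument in (-pi, pi], cut along the nonpositive reals."""
    return complex(math.log(abs(z)), math.atan2(z.imag, z.real))

def log_omega0(z):
    """Determination on C minus the nonnegative reals: argument shifted into (0, 2*pi)."""
    theta = math.atan2(z.imag, z.real)
    if theta <= 0:
        theta += 2 * math.pi
    return complex(math.log(abs(z)), theta)

# both satisfy exp(f(z)) = z, they just pick different branches of arg
for z in [1 + 1j, -2 + 0.5j, -1 - 1j, 3 - 4j]:
    assert cmath.isclose(cmath.exp(log_principal(z)), z)
    assert cmath.isclose(cmath.exp(log_omega0(z)), z)
```

Heuristically: a continuous argument cannot cross the deleted ray (that is exactly where it would have to jump by $2\pi$), so deleting the nonpositive reals confines it to $(-\pi,\pi)$, deleting the nonnegative reals confines it to $(0,2\pi)$, and deleting the nonpositive imaginary semi-axis would give $(-\pi/2, 3\pi/2)$.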
This topic isn't explained in the wiki, so maybe a thorough answer will help others who encounter the same problem.