Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
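For what it's worth, the usual definition of a free action requires the fixed-point equation $ze^{2 \pi i \cdot an} = z$ (the point returns to itself), not $= 1$; dividing both sides by $z$ removes $\theta$ entirely and leaves $an \in \Bbb{Z}$. A small numeric illustration in Python, with an assumed rational $a = 3/7$:

```python
import cmath
from fractions import Fraction

a = Fraction(3, 7)   # a rational rotation number
n = a.denominator    # n = 7, so a*n is an integer

z = cmath.exp(2j * cmath.pi * 0.123)   # an arbitrary point on S^1
moved = z * cmath.exp(2j * cmath.pi * float(a) * n)

# n fixes every z (up to floating-point error), so the action is not free
print(abs(moved - z) < 1e-9)  # True
```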
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc, but this is the biggest source of a sense of purpose for the non-scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty, prior to the idiots robbing me and taking that with them. It was a fantastic book to self-learn from, one of the best I've had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: the number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks like a rather innocuous, albeit surprising, result in pure number theory
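The theorem (Jacobi's four-square theorem) is easy to check numerically; here is a quick brute-force sketch in Python, counting representations with signs and order, which is the convention the divisor formula uses:

```python
from itertools import product

def r4(n):
    # number of (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n,
    # counting signs and order
    m = int(n**0.5) + 1
    return sum(1 for a, b, c, d in product(range(-m, m + 1), repeat=4)
               if a*a + b*b + c*c + d*d == n)

def divisor_formula(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    if n % 2 == 1:
        return 8 * sum(divs)                        # n odd
    return 24 * sum(d for d in divs if d % 2 == 1)  # n even

for n in range(1, 13):
    assert r4(n) == divisor_formula(n)
print("verified for n = 1..12")  # prints "verified for n = 1..12"
```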
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by $\begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}$ and $\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$, then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$; $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and at $-1$ has no constant coefficient is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like: take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
The simplest two body interaction term for fermions is
$$H = \sum_{ijkl} U_{ijkl} a_i^\dagger a_j^\dagger a_k a_l$$
and I'm trying to determine the symmetries on $U$. Unfortunately I keep getting weird sign errors. The first symmetry comes from Hermiticity. To have $H$ be Hermitian, we need
$$H = H^\dagger = \sum_{ijkl} U_{ijkl}^* a_l^\dagger a_k^\dagger a_j a_i$$
Then relabel the indices $i\leftrightarrow l$, $j\leftrightarrow k$:
$$H = \sum_{ijkl} U_{lkji}^* a_i^\dagger a_j^\dagger a_k a_l$$
This should indicate that $U_{lkji}^* = U_{ijkl}$. Along similar lines,
$$H = \frac{H + H}{2} = \frac{\sum_{ijkl} U_{ijkl} a_i^\dagger a_j^\dagger a_k a_l + \sum_{ijkl} U_{ijkl} a_i^\dagger a_j^\dagger a_k a_l}{2}$$
Relabel the indices in the second one as $k\leftrightarrow l$:
$$H = \frac{\sum_{ijkl} U_{ijkl} a_i^\dagger a_j^\dagger a_k a_l + \sum_{ijkl} U_{ijlk} a_i^\dagger a_j^\dagger a_l a_k}{2}$$
then apply the anticommutation relation:
$$H = \frac{\sum_{ijkl} U_{ijkl} a_i^\dagger a_j^\dagger a_k a_l - \sum_{ijkl} U_{ijlk} a_i^\dagger a_j^\dagger a_k a_l}{2} =\frac{\sum_{ijkl} (U_{ijkl}-U_{ijlk}) a_i^\dagger a_j^\dagger a_k a_l}{2}$$
This suggests that $U_{ijkl}$ is antisymmetric under exchange of the last two indices. Similarly, it should be antisymmetric under the first two indices. Unfortunately, a number of online sources seem to suggest that it should be symmetric, for instance http://sirius.chem.vt.edu/wiki/doku.php?id=crawdad:programming:project3#step_3two-electron_integrals -- here $\langle{\mu\sigma|\lambda\rho\rangle} = U_{\mu\sigma\lambda\rho}$, unless I'm somehow very sorely mistaken. Another reason is that $U_{ijkl}$ will often get contracted with the density matrix $D_{kl}$, which is symmetric, and the contraction would vanish if $U$ were antisymmetric in those indices.
Is it correct that the necessary symmetries of $U$ are the Hermitian symmetry given above, and antisymmetry in the (12) or (34) pairs? Or is it symmetric? Or neither? Thank you.
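The contraction argument at the end can be checked directly with NumPy: antisymmetrizing a random tensor in its last two indices and contracting against a symmetric matrix gives zero. This is a generic linear-algebra sanity check, not specific to any physical $U$:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 4, 4, 4))
U_anti = 0.5 * (U - U.transpose(0, 1, 3, 2))  # antisymmetrize in (k, l)

D = rng.normal(size=(4, 4))
D = D + D.T                                   # symmetric "density matrix"

# contracting an antisymmetric index pair against a symmetric matrix vanishes
res = np.einsum('ijkl,kl->ij', U_anti, D)
print(np.allclose(res, 0))  # True
```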
Let $N:=A-B$. By assumption, the matrix $N$ is nilpotent. This means that there exists a positive integer $n$ such that $N^n$ is the zero matrix $O$.
Let $\lambda$ be an eigenvalue of $B$ and let $\mathbf{v}$ be an eigenvector corresponding to $\lambda$. That is, we have $B\mathbf{v}=\lambda \mathbf{v}$ and $\mathbf{v}\neq \mathbf{0}$. We prove that $\lambda$ is also an eigenvalue of $A$.
Note that since $A$ and $B$ commute with each other, the matrices $N$ and $B-\lambda I$ commute with each other as well. Then we compute\begin{align*}(A-\lambda I)^n&=(N+B-\lambda I)^n\\&=\sum_{i=0}^n \binom{n}{i}N^i(B-\lambda I)^{n-i},\end{align*}where the second equality follows by the binomial expansion. (Note that the binomial expansion holds for commuting matrices.) Then we have\begin{align*}(A-\lambda I)^n \mathbf{v}&=\sum_{i=0}^{n-1} \binom{n}{i}N^i(B-\lambda I)^{n-i}\mathbf{v}+N^n\mathbf{v}=\mathbf{0}\end{align*}since $(B-\lambda I)\mathbf{v}=\mathbf{0}$ and $N^n=O$.
This implies that there exists an integer $k$, $0\leq k \leq n-1$ such that\[\mathbf{u}:=(A-\lambda I)^k\mathbf{v}\neq \mathbf{0} \text{ and } (A-\lambda I)^{k+1}\mathbf{v}=\mathbf{0}.\]
It yields that $(A-\lambda I)\mathbf{u}=\mathbf{0}$ and $\mathbf{u}\neq \mathbf{0}$, or equivalently $A\mathbf{u}=\lambda \mathbf{u}$.Hence $\lambda$ is an eigenvalue of $A$.
This proves that each eigenvalue of $B$ is an eigenvalue of $A$. Note that if $A-B$ is nilpotent, then $B-A$ is also nilpotent. Thus, switching the roles of $A$ and $B$, we also see that each eigenvalue of $A$ is an eigenvalue of $B$. Therefore, the eigenvalues of $A$ and $B$ are the same.
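A quick numerical illustration with a hypothetical $3\times 3$ example: $N$ is nilpotent and commutes with $B$ because it lives inside the eigenspace of a repeated eigenvalue of $B$ (with distinct eigenvalues, the only commuting nilpotent would be $O$):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # generic invertible change of basis
Si = np.linalg.inv(S)

D = np.diag([2.0, 2.0, 5.0])          # B's eigenvalues, with 2 repeated
E = np.zeros((3, 3)); E[0, 1] = 1.0   # E @ D == D @ E and E @ E == 0

B = S @ D @ Si
N = S @ E @ Si                        # nilpotent and commutes with B
A = B + N

eig = lambda M: np.sort_complex(np.linalg.eigvals(M))
print(np.allclose(eig(A), eig(B), atol=1e-5))  # True: same eigenvalues
```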
I don't think Hilmar's answer is very good as it interprets the DFT within a specific application context. That confuses issues.
The DFT is a transform that works on a set of N samples. The samples are presumed to be evenly spaced in their domain over a finite interval; that interval of samples is called a frame. The domain may be time, it may be distance, or it could even be another dimension. The bin values tell you how close your samples come to corresponding to that bin. The important thing to remember is that each bin corresponds to a sinusoid whose frequency is the bin index measured in units of cycles per frame. What happens outside the frame is out of scope for the DFT. Saying "The DFT assumes the interval is infinitely repeating" is misleading without any constructive benefit. What is true is that an inverse DFT can be extended, and it will form a repeating pattern.
The concept of "energy" is only valid in a subset of applications. "Leakage" is not about energy at all; it is about representation of points in multidimensional coordinate systems. But that discussion is a bit mathematical.
What you seem to be struggling with has to do with definitions tied to particular applications, and with the significance of the number of sample points for your frequency values and the possible resolutions. Those concepts are straightforward with an understanding of the DFT devoid of application details.
Suppose you have a given signal over a fixed interval. If that signal has three cycles in the frame, the parameters of the fundamental will be found in bin 3 (zero based indexing), and its harmonics will be in bin 6, 9, 12, etc. The number of sample points you use will determine the number of bins there are. The more bins there are the higher up your halfway point (called Nyquist) is. If the frequency of your signal (or one of its harmonics) exceeds the Nyquist value, then it will "wrap around" and look like a different frequency. This is called an alias.
For instance, suppose you had N=16. Then the Nyquist bin is 8. A signal with 10 cycles per frame will land in bin 10, but its mirror image value lands in bin 6, so if you are looking at the DFT you would say, "Hey, there is a 6 cycles per frame signal in there", when in fact it is 10. Increasing your sample count would fix that. For N=32, bin 10 would still have a value, but now the mirror image is in bin 22. That's why you have to sample at least twice the rate of the highest frequency you want to find, to have it land in the lower half.
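This aliasing is easy to demonstrate with NumPy, using the N=16, 10-cycles-per-frame example above:

```python
import numpy as np

N = 16
n = np.arange(N)
x = np.cos(2 * np.pi * 10 * n / N)   # 10 cycles per frame, above Nyquist (bin 8)

X = np.abs(np.fft.fft(x))
peak = int(np.argmax(X[:N // 2 + 1]))  # search DC..Nyquist only
print(peak)  # 6 -- the 10-cycle signal aliases to bin 6
```

On these 16 samples the 10-cycle cosine is numerically identical to a 6-cycle cosine, which is exactly what "wrap around" means.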
BTW, every DFT is exact, aka lossless.
The sampling rate is what ties the units of the DFT to the units of your application. N has units of samples per frame. $f_s$ is the conventional symbol for samples per application unit, usually seconds, so the unit is samples per second, designated with Hz. This is a little misleading, as the sampling is better thought of as a rate instead of a frequency. Anyway, using T for your interval length, measured in seconds, you have:
$$ f_s \left(\frac{samples}{second}\right) = \frac{ N \left(\frac{samples}{frame}\right) }{ T \left(\frac{seconds}{frame}\right) } $$
Of course, you can switch that around, say:
$$ N = f_s \cdot T $$
To interpret the bin values, $k$ is usually used for the bin index, so the frequency associated with a bin is $k$ cycles per frame. Hence:
$$ \frac{ k \left(\frac{cycles}{frame}\right) }{ T \left(\frac{seconds}{frame}\right) } = k/T \left(\frac{cycles}{second}\right) $$
Therefore the "bin spacing", i.e. the difference in frequency between two adjacent bins, is $1/T$, but the units are still cycles per second.
Now, from above, $ T = N / f_s $, so the bin spacing can also be called $ f_s / N $.
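Putting those unit relationships together, a quick arithmetic sketch with assumed example numbers:

```python
fs = 1000.0           # sampling rate, samples/second (assumed example value)
N = 256               # samples per frame
T = N / fs            # frame length: 0.256 seconds per frame

bin_spacing = fs / N  # cycles/second between adjacent bins; equals 1/T
print(bin_spacing)    # 3.90625

k = 10                        # bin index
freq_k = k * bin_spacing      # frequency associated with bin k, in Hz
print(freq_k)                 # 39.0625
```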
And just a bit more insight...
Your Nyquist frequency is always going to be 2 samples per cycle for any N.
$$ \frac{ 2\pi \left(\frac{radians}{cycle}\right) }{ 2 \left(\frac{samples}{cycle}\right) } = \pi \left(\frac{radians}{sample}\right) $$
It is also helpful to picture the DFT bins arranged around the unit circle in the complex plane. The DC bin sits on 1, and the Nyquist bin on -1. The Nyquist is the same distance from DC whether you go clockwise or counter-clockwise. The bins are indexed from 0 to N/2 across the top for even N, and 0 to (N-1)/2 for odd N. There is only a Nyquist bin for even N. The bottom half is better indexed with negative values ($-k$), but usually comes back as the upper part of the results ($N-k$). The definition doesn't care and they are aliases.
The mirror image, aka conjugate symmetry, is only true for real valued signals. Ultimately it works the way it does because:
$$ \cos( \theta ) = \frac{ e^{i\theta} + e^{-i\theta} }{2} $$
But that's a longer story.
Does the frictional force increase as the normal force increases, or does the coefficient of friction get smaller in value?
The coefficient of friction should, in the majority of cases, remain constant no matter what your normal force is. When you apply a greater normal force, the frictional force increases, and your coefficient of friction stays the same. Here's another way to think about it: because the force of friction is equal to the normal force times the coefficient of friction, we expect (in theory) an increase in friction when the normal force is increased.
One more thing, the coefficient of friction is a property of the materials being "rubbed", and this property usually does not depend on the normal force.
Friction $F$ is non-linear by nature and linear models only apply in certain ranges.
Firstly, friction comes from the contact of the two surfaces. They are rough, so they have peaks and spikes that meet the other surface. At contact these peaks "glue" to the surface by adhesion (chemical bonding). In order to make the surfaces slide across each other, these bonds must be broken - in other words: the strength of the material that these peaks are made of must be overcome, so that they will deform and eventually break.
In the widely used Coulomb friction law $F=\mu n$, the coefficient of friction $\mu$ is a constant and $n$ is the normal force. The law was originally defined in a general per-area form: $$\tau=\mu q$$ where $\tau$ is friction per area and $q$ is normal pressure (normal force $n$ per area). This law only applies at low normal pressures - that is, when the normal pressure $q$ is way lower than the shear strength $k$ of the weakest material. In this case, when pressing the surfaces harder together, the asperity tops will become more flat. This increases the real contact area proportionally to the pressure, which is what this law shows.
The shear strength $k$ tells something about when a material will start to deform from such sideways motion. Such deformation will namely change the surface and thereby the roughness and contact area.
At higher normal pressures, one may use a constant-friction model: $$\tau=mk$$ The coefficient $m$ will be in the range from $0$ to $1$. This illustrates how we at high pressures have flattened the asperities fully. Further pressure therefore can't flatten them more, and the contact area is constant at higher pressure. Therefore friction remains constant for higher pressure, as the law states, and now only depends on the materials strength $k$.
The combination of these two models is called Orowan's friction model, where the first one applies for $\mu q \ll k$ and the second for $\mu q \gg k$. In the range around $\mu q = k$, this model is not usable. This is namely the region where the deformation zones around the asperities, which are being flattened, start to meet and overlap. The deformation zones thus prevent each other from deforming further, and therefore the contact area stops being proportional to the increasing pressure. When the asperities are fully flattened, the constant-friction law takes over. See the graph below:
The in-between transition region is much more complicated than either of the two laws, which are merely applied in their respective regions in order to have simpler expressions to work with. Other models try to model the whole friction range better. The only one I know of is Wanheim and Bay's friction model, which takes the area dependence into account: $$\tau=f\alpha k$$ where $f$ is called the friction factor and $\alpha$ is the ratio between the apparent and real contact area.
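A minimal sketch of the two regimes combined Orowan-style, assuming the simplest possible form $\tau = \min(\mu q,\, mk)$, which deliberately glosses over the complicated transition region described above:

```python
def friction_stress(q, mu=0.3, k=150.0, m=1.0):
    """Friction (shear) stress per area, Orowan-style combination.

    q  : normal pressure
    mu : Coulomb coefficient of friction (low-pressure regime)
    k  : shear strength of the weaker material (example value)
    m  : friction factor between 0 and 1 (high-pressure regime)
    """
    return min(mu * q, m * k)

# Low pressure: Coulomb regime, stress grows linearly with q
print(friction_stress(100.0))    # 30.0  (= 0.3 * 100)
# High pressure: constant-friction regime, stress saturates at m*k
print(friction_stress(10000.0))  # 150.0 (= 1.0 * 150)
```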
In the idealistic model of friction - the coefficient does not change.
But in the realistic, practical sense, the idealistic model fails 9 times out of 10. The sliding of two surfaces causes heat and a rise in temperature, and this can lead to second-order effects. Continued sliding can polish the two surfaces, reducing roughness, and this can reduce the coefficient. In other materials the sliding can create stickiness and lead to a slip-stick frictional force that's unpredictable by any model, except perhaps in a few cases by stochastic models.
Friction is in general nonlinear, and the equation you cite is an ideal approximation of how things might happen in the carefully controlled physics lab.
I have an image $U_{m \times n}:\Omega \to \mathbb R^2$; the output $P$ can be defined as $$P=\mu J_{m \times n} - U$$ where $\mu = \max \{ u_{ij} : 1 \leq i \leq m, 1 \leq j \leq n\}$ and $J_{m \times n}$ is the $m \times n$ matrix whose $i, j$th component is $1$: that is, the all-ones matrix. (This notation isn't quite standard, but it's as close to standard as I know. $J$ is often the all-ones matrix.)
However, it takes many sentences to express the equation above. Is there a shorter, more standard way to represent it? As far as I have found, $J_{m \times n}$ can be expressed with an indicator function, such as
$$P=\max (U)\times 1_{\Omega}-U$$
where $1_{\Omega}$ is the indicator function
Is it equivalent to the original meaning? If not, please give me a standard and common expression in image processing. Thanks
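In code the whole thing reduces to one vectorized expression; e.g. with NumPy (a sketch, assuming $U$ holds the intensity values):

```python
import numpy as np

U = np.array([[0.2, 0.5],
              [0.9, 0.1]])

# mu * J - U, written without the all-ones matrix:
# broadcasting subtracts every entry from the scalar max
P = U.max() - U
print(P)
```

For 8-bit images the same "negative" operation is usually written with the fixed maximum, `255 - U`, rather than the image's own max.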
By the height of an algebraic number $\alpha$, I mean the absolute, logarithmic (additive) Weil height $h(\alpha)$; e.g. $h(2^{1/n}) = (\log 2)/n.$
If $K$ is a number field, let $\delta(K)$ denote the smallest positive height of an element of $K$ (recall roots of unity are precisely the points of height zero). Here is a paper where some very general bounds for $\delta(K)$ are obtained: http://arxiv.org/pdf/1203.4976v1.pdf
I'm interested in something a little different. If $L/K$ is an extension of number fields, let $\delta(L/K)$ denote the smallest positive height of an element in $L\setminus K$ (the smallest height of a "new" element). Loosely speaking, if finding $\delta(K)$ is about finding short vectors in a lattice, then finding $\delta(L/K)$ is about finding short vectors in a lattice with a sublattice removed.
Algorithms to determine the set of all elements in a number field with height less than a given bound are discussed here:
and also here
Clearly one could use such an algorithm to find $\delta(K)$ or $\delta(L/K)$, but might there be an easier way? Or, could there be a faster algorithm to find a new element with small height, but maybe not provably the smallest? Obviously the height of any non-torsion element of $L^\times \setminus K^\times$ is an upper bound for $\delta(L/K)$, so I guess my question is:
Is there an algorithm faster than finding the set of all points of bounded height, which might give a "good" upper bound for $\delta(L/K)$? This is a soft question, since I'm not saying what I mean by "good." The goal would be to get computational evidence to make conjectures about the behavior of $\delta(K_{i+1}/K_i)$ in certain towers of number fields $K_0 \subsetneq K_1 \subsetneq K_2 \subsetneq \cdots.$
In a paper by Yuji Tachikawa, I found a q-deformed "2d Yang-Mills partition function for a cylinder". Here it is (adapted):
$$ Z(q, x_L, x_R) = \mu(q, x_L)^{-1/2} \langle x_L | \bigg[ \sum_{R \in \mathrm{Irr}(G)} | R \rangle e^{- aC_2(R) } \langle R | \bigg] |x_R \rangle \mu(q, x_R)^{-1/2}$$
Here's some stuff to help you interpret:
$G$ is a compact Lie group, and the irreducible representations should be indexed by the root lattice. Conjugacy classes are indexed by elements of the maximal torus $\vec{x} \in \mathbb{T}^n \subset G$. $C_2(R)$ is the quadratic Casimir of the representation. In my notation, borrowed from quantum mechanics, $\langle R|x \rangle = \chi_R(x)$ and $\langle x|R \rangle=\overline{\chi_R(x)}$. $\displaystyle \mu(q, x) = \exp \left[ \sum_{n=1}^\infty \frac{-2q^n}{1-q^n}\chi_{\mathrm{adj}}(x^n) \right]$. The partition function depends on the area $a$ of the cylinder.
In fact, let's turn this into a statement about the Laplacian: The $q$-dependence is hidden:
$$ e^{- a \Delta} = \sum_{R \in \mathrm{Irr}(G)} | R \rangle e^{- aC_2(R) } \langle R | $$
Let's set the area to $0$. From the last line, we should get the identity matrix. However,
$$ \sum_{R \in \mathrm{Irr}(G)} \langle x_L | R \rangle \overline{ \langle x_R | R \rangle } = \mu(q, x_L) \delta(x_L = x_R)$$
This really looks like orthogonality of characters for compact groups, except the right side should be the identity.
What are these characters $\langle x | R \rangle$ ?
Originally, I wanted to ask about an analogue for finite $G$, but I don't even have a point of reference.
We first analyze the capacitated facility location problem with splittable demands (CFLS) and prove its approximation ratio.
Lemma 1.1
(Adding an AP)
If the current solution \({\mathcal {S}}\) satisfies \(c_s({\mathcal {S}}) - c({\mathcal {S}}^{*}) \ge \frac{nc({\mathcal {S}})}{p(n)}\), then there exists an \(n \in {\mathcal {N}}\) that can be added to \({\mathcal {S}}\) to improve the current solution.

Proof
Noticing that \(c_s({\mathcal {S}}) > c_s({\mathcal {S}}^{*})\), we obtain \({\mathcal {S}}^{*}-{\mathcal {S}} \ne \emptyset\). Denote by \(n_1,\cdots ,n_l\) the facilities in \({\mathcal {S}}^{*}-{\mathcal {S}}\). We now need to prove that some \(n_i\) among them satisfies Lemma 1.1.
Denote \(\sigma\) and \(\sigma ^{*}\) as the assignments for \({\mathcal {S}}\) and \({\mathcal {S}}^{*}\), respectively. We now analyze the difference between them. \(G(\sigma , \sigma ^{*})\) is a difference graph for \(\sigma\) and \(\sigma ^{*}\). We can obtain a set of cycles \({\mathscr {C}}\) and paths \({\mathscr {F}}\) after decomposing the difference graph \(G\). Clearly each cycle in \({\mathscr {C}}\) has non-positive cost. Also we have \(cost({\mathscr {C}})+cost({\mathscr {F}})=c_s({\mathcal {S}})-c_s({\mathcal {S}}^{*})\). Since \(c_s({\mathcal {S}}) - c({\mathcal {S}}^{*}) \ge \frac{nc({\mathcal {S}})}{p(n)}\), we have
$${\sum _{i=1}^{l}}cost({\mathscr {F}}_i)\ge c_s({\mathcal {S}})-c_s({\mathcal {S}}^{*}).$$
(5)
For each \(n_i\), we can have an assignment \({\mathcal {S}}+n_i\). The service cost will be
$$c_s({\mathcal {S}}+n_i) = c_s({\mathcal {S}})-cost({\mathscr {F}}_{i}).$$
(6)
From Eq. (5), we can obtain
$${\sum _{i=1}^{l}}(cost({\mathscr {F}}_i)-f_{n_i}) \ge c_s({\mathcal {S}})-c_s({\mathcal {S}}^{*})-c_f({\mathcal {S}}^{*})$$
(7)
after subtracting \(c_f({\mathcal {S}}^{*})\).
The averaging result of Eq. (7) will be
$$cost({\mathscr {F}}_i)-f_{n_i} \ge \big (c _s({\mathcal {S}})-c_s({\mathcal {S}}^{*})-c_f({\mathcal {S}}^{*})\big )/l.$$
(8)
Combining these bounds, we have
$$\begin{aligned} c({\mathcal {S}})-c({\mathcal {S}}+n_i)& {}= c_s({\mathcal {S}})-c_s({\mathcal {S}}+n_i)-f_{n_i}\nonumber \\& {} = cost({\mathscr {F}}_i)-f_{n_i}\nonumber \\ & {} \ge \displaystyle \frac{c_s({\mathcal {S}})-c_s({\mathcal {S}}^{*}) -c_f({\mathcal {S}}^{*})}{l}\nonumber \\ & {} \ge \displaystyle \frac{c_s({\mathcal {S}})-c_s({\mathcal {S}}^{*})}{n}\nonumber \\ & {} \ge \displaystyle \frac{c({\mathcal {S}})}{p(n)}. \end{aligned}$$
(9)
Considering that \(c_s({\mathcal {S}})-c({\mathcal {S}}^{*})\ge \frac{nc({\mathcal {S}})}{p(n)}\), the proof of Lemma 1.1 is completed. \(\square\)
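The add move behind Lemma 1.1 can be sketched in code for the simpler uncapacitated setting. This is a toy illustration, not the paper's capacitated/splittable algorithm (which needs a min-cost assignment subroutine), and all instance data below is made up:

```python
def total_cost(open_facs, fac_cost, serve_cost):
    """Facility opening cost plus cheapest-open-facility service cost per client."""
    return (sum(fac_cost[i] for i in open_facs)
            + sum(min(row[i] for i in open_facs) for row in serve_cost))

def try_add(open_facs, fac_cost, serve_cost):
    """One local-search 'add' move: open one more facility if it improves the cost."""
    best, best_c = set(open_facs), total_cost(open_facs, fac_cost, serve_cost)
    for i in range(len(fac_cost)):
        if i in open_facs:
            continue
        cand = open_facs | {i}
        c = total_cost(cand, fac_cost, serve_cost)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c

fac_cost = [1.0, 3.0]          # opening costs
serve_cost = [[5.0, 0.0],      # client 0 -> cost to facility 0, 1
              [5.0, 0.0]]      # client 1 -> cost to facility 0, 1
print(try_add({0}, fac_cost, serve_cost))  # ({0, 1}, 4.0): adding facility 1 helps
```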
Lemma 1.2
(Dropping/swapping APs) \({\mathcal {S}}\) is a subset of \({\mathcal {N}}\). An upper bound holds if the cost of the solution \({\mathcal {S}}\) cannot be improved by at least \(c({\mathcal {S}})/p(n)\) through dropping or swapping:
$$c_f({\mathcal {S}})\bigg (1-\frac{n^2}{p(n)}\bigg ) < 5c({\mathcal {S}}^{*})+2c_s({\mathcal {S}})+\frac{c({\mathcal {S}})}{n}.$$
(10)
Proof
\({\mathcal {S}}^{*}\) is the optimal solution while \({\mathcal {S}}\) is a solution that satisfies Lemma 1.2. The following candidate operations for \({\mathcal {S}}\) are analyzed to prove the hypothesis of Lemma 1.2.
When an AP \(i\) satisfies \(f_i\le c({\mathcal {S}})/n\), we call it cheap, and expensive otherwise. If an AP \(i\) in \({\mathcal {S}} -{\mathcal {S}}^{*}\) satisfies \(D_i^{\prime \prime }\le M/2\), we call it light. Facilities that are not light are called heavy. The closest facility to be chosen is called primary, and secondary otherwise. Now \({\mathcal {S}}\) can be partitioned into several classes. \({\mathcal {S}}_C\) refers to facilities in \({\mathcal {S}}-{\mathcal {S}}^{*}\) that are cheap. \({\mathcal {S}}_{EL}\) and \({\mathcal {S}}_{EH}\) refer to the expensive light and expensive heavy facilities in \({\mathcal {S}}-{\mathcal {S}}^{*}-{\mathcal {S}}_C\), respectively. \({\mathcal {S}}_{ELP}\) and \({\mathcal {S}}_{ELS}\) denote those expensive light APs which are primary or secondary, respectively.
Then we have the following candidate operations for APs in different classes:
AP \(i \in {\mathcal {S}}\cap {\mathcal {S}}^{*}\) or \(i \in {\mathcal {S}}_C\): do nothing.
AP \(i \in {\mathcal {S}}_{EH}\) or \(i \in {\mathcal {S}}_{ELP}\): use \(i^*\) to replace \(i\).
AP \(i \in {\mathcal {S}}_{ELS}\): drop \(i\) and rearrange the load of \(i\).
A
good operation means that the total cost can be cut down by \(c({\mathcal {S}})/p(n)\) or more with this operation. \(\beta\) is a refined allocation strategy with the cost of \(q_{{\hat{\beta }}}\), which satisfies \(q_{{\hat{\beta }}} \le c_s({\mathcal {S}})+c_s({\mathcal {S}}^*)\).
If we get
bad
operations for all facilities in \({\mathcal {S}}_{EH}\)
, then
$$\begin{aligned}&c_f({\mathcal {S}}_{EH})\bigg (1-\frac{n^2}{p(n)}\bigg )\nonumber \\&\quad \le 4c_f({\mathcal {S}}^* - {\mathcal {S}})+ \sum \limits _{i\in {\mathcal {S}}_{EH}}q_{{\hat{\beta }}}(i). \end{aligned}$$
(11)
If we get
bad
operations for all facilities in \({\mathcal {S}}_{ELP}\)
, then
$$\begin{aligned}&c_f({\mathcal {S}}_{ELP})\bigg (1-\frac{n^2}{p(n)}\bigg )\nonumber \\&\quad \le c_f({\mathcal {S}}^* - {\mathcal {S}})+ \sum \limits _{i\in {\mathcal {S}}_{ELP}}q_{{\hat{\beta }}}(i). \end{aligned}$$
(12)
If we get
bad
operations for all facilities in \({\mathcal {S}}_{ELS}\)
, then
$$c_f({\mathcal {S}}_{ELS})\bigg (1-\frac{n^2}{p(n)}\bigg )\le \sum \limits _{i\in {\mathcal {S}}_{ELS}}2q_{{\hat{\beta }}}(i).$$
(13)
We have the inequality \(c_f({\mathcal {S}}_C) \le c({\mathcal {S}})/n\)
and \(q_{{\hat{\beta }}} \le c_s({\mathcal {S}})+c_s({\mathcal {S}}^*)\)
. Notice that \(\sum \limits _{i\in {\mathcal {S}}_{EH}}q_{{\hat{\beta }}}(i)+ \sum \limits _{i\in {\mathcal {S}}_{ELP}}q_{{\hat{\beta }}} (i)+\sum \limits _{i\in {\mathcal {S}}_{ELS}}q_{{\hat{\beta }}}(i) =\sum \limits _{i\in {\mathcal {S}}-{\mathcal {S}}^*}q_{{\hat{\beta }}}(i).\)
Combining all the conclusions above, we obtain
$$\begin{aligned}&c_f({\mathcal {S}})\bigg (1-\frac{n^2}{p(n)}\bigg )\nonumber \\&\quad =(c_f({\mathcal {S}}\cap {\mathcal {S}}^*)+c_f({\mathcal {S}}_C) +c_f({\mathcal {S}}_{EH})\nonumber \\&\qquad +c_f({\mathcal {S}}_{ELP})+c_f({\mathcal {S}}_{ELS}))\times \bigg (1-\frac{n^2}{p(n)}\bigg )\nonumber \\&\quad \le c_f({\mathcal {S}} \cap {\mathcal {S}}^*) +\frac{c({\mathcal {S}})}{n} + 5c_f({\mathcal {S}}^* -{\mathcal {S}})\nonumber \\&\qquad +\sum \limits _{i\in {\mathcal {S}}-{\mathcal {S}}^*}2q_{{\hat{\beta }}}(i)\nonumber \\&\quad \le \frac{c({\mathcal {S}})}{n}+2q_{{\hat{\beta }}}+5c_f({\mathcal {S}}^*)\nonumber \\&\quad \le \frac{c({\mathcal {S}})}{n}+2c_s({\mathcal {S}})+2c_s({\mathcal {S}}^*) +5c_f({\mathcal {S}}^*)\nonumber \\&\quad \le \frac{c({\mathcal {S}})}{n}+2c_s({\mathcal {S}})+5c({\mathcal {S}}^*). \end{aligned}$$
(14)
This completes the Proof of Lemma 1.2
. \(\square\)
Theorem 1 can be proved with Lemmas 1.1 and 1.2.
Theorem 1
(CFLS)
For any constant \(\epsilon >0\), the LS-based algorithm yields an \((8+\epsilon )\)-approximate solution in polynomial time. Proof
According to Lemma 1.1
, we have
$$c_s({\mathcal {S}})<c({\mathcal {S}}^{*})+\frac{nc({\mathcal {S}})}{p(n)}.$$
(15)
According to Lemma 1.2
, we have
$$\begin{aligned} c_f({\mathcal {S}})\left( 1-\frac{n^2}{p(n)}\right)\le & {} 5c({\mathcal {S}}^{*})+2c_s({\mathcal {S}})+\frac{c({\mathcal {S}})}{n}\nonumber \\< & {} 7c({\mathcal {S}}^{*})+nc({\mathcal {S}})\left( \frac{2}{p(n)}+\frac{1}{n^2}\right) . \end{aligned}$$
(16)
Adding the upper bound on the service cost, we obtain
$$c({\mathcal {S}})\left( 1-\frac{n^2}{p(n)}\right) < 8c({\mathcal {S}}^{*})+nc({\mathcal {S}})\left( \frac{3}{p(n)}+\frac{1}{n^2}\right) .$$
(17)
By rearranging, we obtain
$$c({\mathcal {S}})\left( 1-\frac{n^2}{p(n)}-\frac{3n}{p(n)}-\frac{1}{n}\right) < 8c({\mathcal {S}}^{*}).$$
(18)
This completes the Proof of Theorem 1
. \(\square\)
In general, when the assignment for the unsplittable case is obtained from the assignment for the splittable case, the capacities of all APs in the unsplittable case increase by a factor of at most two. If this capacity expansion leads to an increase of facility cost, the facility cost of an AP in the unsplittable scenario is at most twice that in the splittable case. This conclusion has been confirmed in [24]. In the system model of this paper, however, the installation cost of an AP is much larger than the price of the general-purpose processor, so we assume that the facility cost of an AP does not change with the number of SMs it serves; this means the facility cost of the unsplittable case in this paper equals that of the splittable case. Corollary 1 below summarizes our results for CFLU.
Corollary 1
(CFLU)
If the expansion of facility capacity does not lead to an increase of the facility cost, the LS-based algorithm has an \((8+\varepsilon )\)-approximation ratio (\(\varepsilon >0\)) and yields a solution in polynomial time. Proof
For the special case in this paper, the cost of installing an AP is much bigger than the price of the general purpose processor in it, so we have
$$c_f^{u}({\mathcal {S}})=c_f^{s}({\mathcal {S}}).$$
(19)
It is known that
$$c_s^{u}({\mathcal {S}}) \le c_s^{s}({\mathcal {S}}).$$
(20)
With simple mathematical operations, we have
$$c_f^{u}({\mathcal {S}})+c_s^{u}({\mathcal {S}}) \le c_f^{s}({\mathcal {S}})+c_s^{s}({\mathcal {S}})<(8+\epsilon )c(S^{*}).$$
(21)
This completes the proof of Corollary 1. \(\square\)
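The add/drop/swap local search analyzed in the lemmas above can be sketched in a few lines. The following is an illustrative sketch only, not the paper's implementation: the toy uncapacitated cost model, the instance data, and the choice \(p(n)=8n^{3}\) are assumptions made for the example.

```python
def service_cost(S, clients, dist):
    # each client is served by its nearest open facility
    return sum(min(dist[c][f] for f in S) for c in clients)

def total_cost(S, clients, dist, f_cost):
    # c(S) = c_f(S) + c_s(S): opening cost plus service cost
    return sum(f_cost[f] for f in S) + service_cost(S, clients, dist)

def local_search(facilities, clients, dist, f_cost, p=lambda n: 8 * n**3):
    """Accept an add/drop/swap move only if it is 'good', i.e. it cuts
    the total cost by at least c(S)/p(n)."""
    n = len(facilities)
    S = set(facilities)                       # start with every AP open
    improved = True
    while improved:
        improved = False
        c_S = total_cost(S, clients, dist, f_cost)
        moves = (
            [S | {u} for u in facilities if u not in S]            # add
            + [S - {u} for u in S if len(S) > 1]                   # drop
            + [(S - {u}) | {v} for u in S
               for v in facilities if v not in S]                  # swap
        )
        for T in moves:
            if total_cost(T, clients, dist, f_cost) <= c_S - c_S / p(n):
                S, improved = T, True
                break
    return S

# toy instance: dist[client][facility]
facilities = [0, 1, 2]
clients = [0, 1, 2, 3]
dist = [[1, 9, 9], [1, 9, 9], [9, 1, 9], [9, 9, 1]]
f_cost = {0: 2, 1: 2, 2: 10}
S = local_search(facilities, clients, dist, f_cost)
print(sorted(S), total_cost(S, clients, dist, f_cost))  # [0, 1] 16
```

On this instance the expensive facility 2 is dropped even though its client then pays more, because the saved opening cost outweighs the extra service cost; the loop terminates once no move clears the \(c(S)/p(n)\) threshold.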
Given Sam's number is $x$ and Peter's number is $y$, most people seem to be working from the following premises:
- $x, y \in \mathbb{N}$, where $\mathbb{N} = \{1, 2, ...\}$ is the natural numbers
- $2002 = x + y \lor 2002 = xy$
If that's correct then you
must agree with Mike Earnest's answer or be logically impaired. A stricter reading of the problem provides these premises:
- $x \in \mathbb{N} = \{1, 2, ...\}$
- $y \in \mathbb{R}$, where $\mathbb{R}$ is the set of all real numbers
- $2002 = x + y \lor 2002 = xy$
We can intuitively (and logically) exclude more interesting numbers (complex, imaginary, etc.) for the value of $y$. I've omitted that here.
Sam: I don't know your number.
Of course Sam would have no idea. $\forall n \in \mathbb{N}$ $\exists r_a, r_m \in \mathbb{R}$ such that $(r_a \neq r_m)$ $\land$ $(2002 = n + r_a)$ $\land$ $(2002 = nr_m)$. Thanks, Sam, you told us nothing.
Peter: I don't know your number either.
This is telling. This means that $\exists$ $n_a, n_m \in \mathbb{N}$ such that $(n_a \neq n_m)$ $\land$ $(2002 = n_a + y)$ $\land$ $(2002 = n_my)$ which implies:
$$y \in \mathbb{Z}, \text{ where } \mathbb{Z} = \{..., -2, -1, 0, 1, 2, ...\}$$
Because $\nexists$ $r \in \mathbb{R}, n \in \mathbb{N}$ such that $2002 = n + r$ $\land$ $r \notin \mathbb{Z}$. Which means if $y \notin \mathbb{Z}$, then Peter could find $x = {2002}/y$. But Peter doesn't know $x$, so $y \in \mathbb{Z}$.
Furthermore, we can deduce:
$$y \in \mathbb{N}$$
$\nexists$ $y \in \mathbb{Z}$ such that $y < 1$ $\land$ $2002 = xy$. If $y < 1$, then Peter could find $x = 2002 - y$.
And finally:
$$ y \in F_y = \{1, 2, 7, 11, 13, 14, 22, 26, 77, 91, 143, 154, 182, 286, 1001\} \subset F$$
Where $F$ is the set of factors of $2002 = 2 \cdot 7 \cdot 11 \cdot 13$. (Note: $F = F_y \cup \{2002\}$.) We know this because $\exists n_a, n_m \in \mathbb{N}$ such that $(n_a \neq n_m)$ $\land$ $(2002 = n_a + y)$ $\land$ $(2002 = n_my)$; otherwise, Peter could eliminate one of the formulas and calculate $x$. The only numbers that satisfy this criterion are in $F_y$.
Sam: Now I know your number.
The big takeaway from this is $x \neq 1001$; $x = 1001$ is the case where Sam still doesn't know $y$. This is because $2002 = 1001 + 1001$ and $2002 = 1001 * 2$. Hence, $y$ could either be $1001$ or $2$, and Sam would not know which number.
Peter: Now I know yours too.
That one bit of info (other inferences aside) must have given Peter enough knowledge to solve the problem, so $x = 1001$ must have been a live possibility prior to Sam's statement. There are only two values of $y$ for which this is the case:
$$2002 = 1001 + 1001,\;y = 1001$$$$2002 = 1001 * 2,\;y = 2$$
Which means the other formula will give us the potential values for $x$:
$$2002 = 2 * 1001,\;x = 2$$$$2002 = 2000 + 2,\;x = 2000$$
So there are two solutions: either Sam picked $2$ and Peter picked $1001$, or Sam picked $2000$ and Peter picked $2$.
Unlike the other version where $y \in \mathbb{N}$, in this case we do not know if $x \in F$. If $x \in F$, then there would be only one solution: Sam picked $2$ and Peter picked $1001$.
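The case analysis above can be checked by brute force. This is a sketch that assumes the deduced facts: $y$ is a proper divisor of $2002$, and Peter must choose between the two formulas $x = 2002 - y$ and $x = 2002/y$.

```python
N = 2002
# y candidates after Peter's "I don't know": proper divisors of 2002
y_candidates = [d for d in range(1, N) if N % d == 0]

solutions = []
for y in y_candidates:
    x_add, x_mul = N - y, N // y   # the two formulas Peter weighs
    # Sam's "now I know" rules out x = 1001; Peter's "now I know yours
    # too" requires that x = 1001 was a live option for his y.
    if 1001 in (x_add, x_mul):
        # the other formula then pins down Sam's actual number
        x = x_mul if x_add == 1001 else x_add
        solutions.append((x, y))

print(sorted(solutions))  # [(2, 1001), (2000, 2)]
```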
Physical Review Letters, ISSN 0031-9007, 05/2017, Volume 118, Issue 21
Journal Article
2. Measurement of the tau lepton polarization and $R(D^{*})$ in the decay $\bar{B} \to D^{*} \tau^{-} \bar{\nu}_{\tau}$
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 05/2017, Volume 118, Issue 21
We report the first measurement of the tau lepton polarization $P_{\tau}(D^{*})$ in the decay $\bar{B} \to D^{*} \tau^{-} \bar{\nu}_{\tau}$ as well as a new measurement...
PHYSICS, MULTIDISCIPLINARY
PHYSICS, MULTIDISCIPLINARY
Journal Article
3. Angular analysis of the $e^{+}e^{-} \to D^{(*)\pm}D^{*\mp}$ process near the open charm threshold using initial-state radiation
Physical Review D, ISSN 2470-0010, 01/2018, Volume 97, Issue 1
Journal Article
4. Measurement of the τ Lepton Polarization and $R(D^{*})$ in the Decay $\bar{B} \to D^{*}\tau^{-}\bar{\nu}_{\tau}$
Physical review letters, 05/2017, Volume 118, Issue 21, p. 211801
We report the first measurement of the τ lepton polarization $P_{\tau}(D^{*})$ in the decay $\bar{B} \to D^{*}\tau^{-}\bar{\nu}_{\tau}$ as well as a new measurement of the...
Journal Article
5. Systematic studies of the centrality and $\sqrt{s_{NN}}$ dependence of the $dE_{T}/d\eta$ and $dN_{ch}/d\eta$ in heavy ion collisions at midrapidity
Physical Review C - Nuclear Physics, ISSN 0556-2813, 2005, Volume 71, Issue 3
Journal Article
6. Measurement of the $\tau$ lepton polarization and $R(D^{*})$ in the decay $\bar{B} \to D^{*} \tau^{-} \bar{\nu}_{\tau}$
12/2016
Phys. Rev. Lett. 118, 211801 (2017) We report the first measurement of the $\tau$ lepton polarization $P_\tau(D^*)$ in the decay $\bar{B} \rightarrow D^*...
Physics - High Energy Physics - Experiment
Physics - High Energy Physics - Experiment
Journal Article
7. Measurement of the branching ratio of $\bar{B}^{0} \to D^{*+}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}^{0} \to D^{*+}\ell^{-}\bar{\nu}_{\ell}$ decays with a semileptonic tagging method
Physical Review D, ISSN 2470-0010, 10/2016, Volume 94, Issue 7
Journal Article
Physical Review Letters, ISSN 0031-9007, 05/2017, Volume 118, Issue 21
Journal Article
9. Measurement of the $\tau$ lepton polarization and $R(D^{*})$ in the decay $\bar{B} \to D^{*}\tau^{-}\bar{\nu}_{\tau}$ with one-prong hadronic $\tau$ decays at Belle
ISSN 1550-7998, 2018
With the full data sample of $772×10^6 B\bar{B}$ pairs recorded by the Belle detector at the KEKB electron-positron collider, the decay $\bar{B} \to...
Journal Article
10. Angular analysis of the $\mathrm{e^+ e^- \to D^{(*)\pm} D^{*\mp}}$ process near the open charm threshold using initial-state radiation
ISSN 0556-2821, 2018
We report a new measurement of the exclusive $e^+ e^- \to D^{(*) \pm} D^{* \mp}$ cross sections as a function of the center-of-mass energy from the $D^{(*)...
Journal Article
11. Absence of suppression in particle production at large transverse momentum in $\sqrt{s_{NN}} = 200$ GeV d+Au collisions
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 08/2003, Volume 91, Issue 7
Journal Article
12. Centrality dependence of $\pi^{0}$ and $\eta$ production at large transverse momentum in $\sqrt{s_{NN}} = 200$ GeV d+Au collisions
Physical Review Letters, ISSN 0031-9007, 04/2007, Volume 98, Issue 17
Journal Article
13. Measurement of the CKM angle $\varphi_1$ in $B^0\to\bar{D}{}^{(*)0}h^0$, $\bar{D}{}^0\to K_S^0\pi^+\pi^-$ decays with time-dependent binned Dalitz plot analysis
07/2016
Phys. Rev. D 94, 052004 (2016) We report a measurement of the CP violation parameter $\varphi_1$ obtained in a time-dependent analysis of...
Journal Article
NUCLEAR PHYSICS A, ISSN 0375-9474, 02/2019, Volume 982, pp. 839 - 842
The PHENIX experiment has excellent data for small systems including p+Au, d+Au, He-3+Au at 200 GeV as well as the d+Au beam energy scan down to 19.6 GeV. We...
PHYSICS, NUCLEAR
PHYSICS, NUCLEAR
Journal Article
Physical Review C - Nuclear Physics, ISSN 0556-2813, 08/2013, Volume 88, Issue 2
Journal Article
16. Measurement of the branching ratio of $\bar{B}^{0} \to D^{+}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}^{0} \to D^{+}\ell^{-}\bar{\nu}_{\ell}$ decays with a semileptonic tagging method
PHYSICAL REVIEW D, ISSN 2470-0010, 10/2016, Volume 94, Issue 7
Journal Article
17. First Evidence for cos2β>0 and Resolution of the Cabibbo-Kobayashi-Maskawa Quark-Mixing Unitarity Triangle Ambiguity
Physical Review Letters, ISSN 0031-9007, 12/2018, Volume 121, Issue 26, p. 261801
Journal Article |
ISSN:
1556-1801
eISSN:
1556-181X
Networks & Heterogeneous Media
June 2007 , Volume 2 , Issue 2
Abstract:
A model for traffic flow in street networks or material flows in supply networks is presented, that takes into account the conservation of cars or materials and other significant features of traffic flows such as jam formation, spillovers, and load-dependent transportation times. Furthermore, conflicts or coordination problems of intersecting or merging flows are considered as well. Making assumptions regarding the permeability of the intersection as a function of the conflicting flows and the queue lengths, we find self-organized oscillations in the flows similar to the operation of traffic lights.
Abstract:
We present a model which explains several experimental observations relating contact angle hysteresis with surface roughness. The model is based on the balance between released capillary energy and dissipation associated with motion of the contact line: it describes the stick–slip behavior of drops on a rough surface using ideas similar to those employed in dry friction, elasto–plasticity and fracture mechanics. The main results of our analysis are formulas giving the interval of stable contact angles as a function of the surface roughness. These formulas show that the difference between advancing and receding angles is much larger for a drop in complete contact with the substrate (Wenzel drop) than for one whose cavities are filled with air (Cassie-Baxter drop). This fact is used as the key tool to interpret the experimental evidence.
Abstract:
This article deals with the modeling of junctions in a road network from a macroscopic point of view. After reviewing the Aw & Rascle second order model, a compatible junction model is proposed. The properties of this model and particularly the stability are analyzed. It turns out that this model presents physically acceptable solutions, is able to represent the capacity drop phenomenon and can be used to simulate the traffic evolution on a network.
Abstract:
We consider a perturbed initial/boundary-value problem for the heat equation in a thick multi-structure $\Omega_{\varepsilon}$ which is the union of a domain $\Omega_0$ and a large number $N$ of $\varepsilon-$periodically situated thin rings with variable thickness of order $\varepsilon = \mathcal{O}(N^{-1}).$ The following boundary condition $\partial_{\nu}u_{\varepsilon} + \varepsilon^{\alpha} k_0 u_{\varepsilon}= \varepsilon^{\beta} g_{\varepsilon}$ is given on the lateral boundaries of the thin rings; here the parameters $\alpha$ and $\beta$ are greater than or equal $1.$ The asymptotic analysis of this problem for different values of the parameters $\alpha$ and $\beta$ is made as $\varepsilon\to0.$ The leading terms of the asymptotic expansion for the solution are constructed, the corresponding estimates in the Sobolev space $L^2(0,T; H^1(\Omega_{\varepsilon}))$ are obtained and the convergence theorem is proved with minimal conditions for the right-hand sides.
Abstract:
The paper examines a class of energies $W$ of nematic elastomers that exhibit ideally soft behavior. These are generalizations of the neo-classical energy function proposed by Bladon, Terentjev & Warner [7]. The effective energy (quasiconvexification) of $W$ is calculated for a large subclass of considered energies. Within the subclass, the rank 1 convex, quasiconvex, and polyconvex envelopes coincide and reduce to the largest function below $W$ that satisfies the Baker–Ericksen inequalities. Compressible cases are included. The effective energy displays three regimes: one fluid-like, one partially fluid-like and one hard, as established by DeSimone & Dolzmann [20] for the energy function of Bladon, Terentjev & Warner. Ideally soft deformation modes are shown to arise.
Abstract:
We consider a class of optimal control problems defined on a stratified domain. Namely, we assume that the state space $\mathbb{R}^N$ admits a stratification as a disjoint union of finitely many embedded submanifolds $\mathcal{M}_i$. The dynamics of the system and the cost function are Lipschitz continuous restricted to each submanifold. We provide conditions which guarantee the existence of an optimal solution, and study sufficient conditions for optimality. These are obtained by proving a uniqueness result for solutions to a corresponding Hamilton-Jacobi equation with discontinuous coefficients, describing the value function. Our results are motivated by various applications, such as minimum time problems with discontinuous dynamics, and optimization problems constrained to a bounded domain, in the presence of an additional overflow cost at the boundary.
Abstract:
Cell motion and interaction with the extracellular matrix is studied deriving a kinetic model and considering its diffusive limit. The model takes into account the chemotactic and haptotactic effects, and obtains friction as a result of the interactions between cells and between cells and the fibrous environment. The evolution depends on the fibre distribution, as cells preferentially move along the fibre direction and tend to cleave and remodel the extracellular matrix when their direction of motion is not aligned with the fibre direction. Simulations are performed to describe the behavior of an ensemble of cells under the action of a chemotactic field and in the presence of heterogeneous and anisotropic fibre networks.
Abstract:
We study degenerate quasilinear parabolic systems in two different domains, which are connected by a nonlinear transmission condition at their interface. For a large class of models, including those modeling pollution aggression on stones and chemotactic movements of bacteria, we prove global existence, uniqueness and stability of the solutions.
States of Matter: Gases and Liquids

Intermolecular Forces and Thermal Energy

London (dispersion) force: observed between non-polar atoms or non-polar molecules, e.g. Xe$\cdots$Xe, CH$_4\cdots$CH$_4$, CCl$_4\cdots$CCl$_4$; $F\propto\frac{1}{r^{6}}$.

Dipole-dipole attraction: attraction between polar compounds. For stationary molecules (solid state), $F\propto\frac{1}{r^{3}}$; for rotating molecules, $F\propto\frac{1}{r^{6}}$.

Induced dipole-dipole attraction: between polar and non-polar compounds, e.g. the solubility of inert gases in water.

Volume: 1 dm$^3$ = (10 cm)$^3$ = 1000 cm$^3$ = 1000 mL = 1 L

Pressure: $P=\frac{F}{a}=\frac{mg}{a}=\frac{d\times V\times g}{a}=\frac{a\times h\times d\times g}{a}=hdg$

In C.G.S. units, 1 atm = $hdg$ = $1.01325 \times 10^{6}$ dyne/cm$^2$; in S.I. units, 1 atm = 1.01325 bar.

Temperature: $\frac{F-32}{9}=\frac{C}{5}$
1. S.I unit of temperature is Kelvin (K) or absolute degree.
K = °C + 273
2. Relation between F and °C is \tt \frac{{^{o}}C}{5} =\frac{^{o}F-32}{9}
3. \tt Pressure \left(P\right) = \frac{Force\left(F\right)}{Area\left(A\right)} = \frac{Mass\left(m\right) \times Acceleration\left(a\right)}{Area\left(A\right)}
4. Absolute pressure = Gauge pressure + Atmospheric pressure
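For illustration, two of the formulas above in code; the mercury-column numbers below are the usual standard-atmosphere values (76 cm of mercury, density 13.6 g/cm³), used here as an assumed example.

```python
def c_to_f(c):
    # (F - 32) / 9 = C / 5  =>  F = 9*C/5 + 32
    return 9 * c / 5 + 32

def barometric_pressure(h_cm, d_g_per_cc, g=980.0):
    # P = h * d * g in C.G.S. units, giving dyne/cm^2
    return h_cm * d_g_per_cc * g

print(c_to_f(100))                    # 212.0 (boiling point of water)
P = barometric_pressure(76.0, 13.6)   # mercury column at 1 atm
print(P)                              # about 1.013e6 dyne/cm^2
```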
A connected graph H with $|H|\ge \sigma(G)$ is said to be G-good if $R(G,H)=(\chi(G)-1)(|H|-1)+\sigma(G)$. For an integer $\ell \ge 3$, let $P_\ell$ be a path of order $\ell$, and $H^{(\ell)}$ a graph obtained from H by joining the end vertices of $P_\ell$ to distinct vertices u, v of H. It is widely known that for any graphs G and H, if $\ell$ is sufficiently large, then $H^{(\ell)}$ is G-good. In this note, we show that there exists a constant $c=c(\Delta)$ such that for any graphs G and H with $\Delta(G)\le \Delta$ and $\Delta(H)\le \Delta$, if $\ell \ge c\cdot (|G|+|H|)$, then $H^{(\ell)}$ is G-good; and if $n\ge 2\alpha(G)+\Delta^2(G)+4$, then $P_n$ is G-good.
Graphs and Combinatorics – Springer Journals
Published: Jun 6, 2018
Suppose $z_{t} = x_{t}y_{t}$ where $x_{t}$ and $y_{t}$ are 0 mean, independent stationary stochastic process. What is the autocovariance function of $z_{t}$? Show that the spectral density can be written as $f_{z}(\omega) = \int_{-0.5}^{0.5}f_{x}(\omega-v)f_{y}(v)dv$.
Attempt:
If the series are independent, wouldn't the autocovariance function be $E[z_{t}z_{t+h}]$? This would then be $E[x_{t}y_{t}x_{t+h}y_{t+h}]$. I am unsure how to proceed from here, and I am unsure how to prove that the spectral density can be written as the above integral. I know the autocovariance function $\gamma_{z}(h)$ can be written in terms of an integral, but I did not know there was an integral representation for $f_{z}(\omega)$.
So how does one represent the spectral density of $z$? From my textbook, this is $f_{z}(\omega) = \sum_{h=-\infty}^{\infty}\gamma_{z}(h)e^{-2\pi i \omega h}$
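Not a derivation, but a numerical sanity check of the factorization that independence suggests, $\gamma_z(h)=\gamma_x(h)\gamma_y(h)$. The AR(1) inputs and sample sizes below are arbitrary choices for the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N, phi = 200_000, 0.8

def ar1(n, phi, rng):
    # zero-mean stationary AR(1): x[t] = phi*x[t-1] + e[t]
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = e[0] / np.sqrt(1 - phi**2)   # start in stationarity
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def acov(x, h):
    # sample autocovariance at lag h
    x = x - x.mean()
    return np.mean(x[: len(x) - h] * x[h:])

x, y = ar1(N, phi, rng), ar1(N, phi, rng)
z = x * y
for h in (0, 1, 2):
    print(h, acov(z, h), acov(x, h) * acov(y, h))  # the two columns agree
```

The two printed columns match to within sampling error, consistent with $E[x_t x_{t+h} y_t y_{t+h}] = E[x_t x_{t+h}]\,E[y_t y_{t+h}]$ for independent processes.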
This is a very deep question.
A proof in terms of numbered formulae and various $\Rightarrow$, resp. $\Leftrightarrow$-signs could be checked by an automated proof checker. On the other hand, a "figure" is just a bitmap, or a pixel heap, and I doubt that an automated proof checker would ever be able to make out what this figure is telling us.
In other words: figures are viewed and interpreted by humans. Sometimes these humans agree to accept such a figure as a proof of some statement, but sometimes they err in doing so. When a figure mainly serves to explain a certain concept, say, the derivative of a function $f:\>{\mathbb R}^n\to{\mathbb R}^m$ at some point $p\in{\rm dom}(f)$, then there is not much harm possible, but as soon as there are "cases" involved, say in a geometric proof of $\sin(x+y)=\ldots\ $ for arbitrary angles, the question arises whether the power of $1$ (one) figure is sufficient to prove the general statement. To put it differently: a general statement might involve very different morphologies, only one of which is captured in a single figure.
Concerning your example, it is certainly not sufficient to draw a point $x$ and a circle of radius $\epsilon$ around $x$. But inserting another point $y$ into this circle and drawing a very small circle around $y$ would make the idea of the intended proof clear. Nevertheless, in a course & homework situation it is expected that the idea so obvious in the figure is "verbalized" in a coherent argument.
In this work we consider the most general analysis
of $\tau\rightarrow (K\pi)^{-} \nu_{\tau}$ decays within an effective field theory description of heavy new physics (NP) including SM operators up to dimension six with massless neutrinos. All hadron form factors are built exploiting chiral symmetry, dispersion relations and (lattice) data. Within this framework we:
i) confirm that it is impossible to accommodate the BaBar anomaly in the CP asymmetry measurement (we find an upper bound for the NP contribution slightly larger than in Phys. Rev. Lett. 120 (2018) no. 14, 141803, but still smaller than the experimental uncertainty by approximately four orders of magnitude);
ii) show for the first time that the anomalous bump measured by the Belle experiment in the $K_S\pi^-$ invariant mass distribution at low energies also cannot be explained by heavy NP;
iii) bound for the first time the heavy NP effective couplings using $\tau^-\to(K\pi)^-\nu_\tau$ decays and show that these bounds are competitive with those found in hyperon semileptonic decays (but clearly not with those obtained for non-standard scalar interactions in kaon (semi)leptonic decays).
Finally, to keep good control of potential new physics effects, we study the SM contribution carefully; namely, we compare the SM predictions with possible deviations caused by NP in three different observables: a pair of Dalitz plot distributions, the forward-backward asymmetry, and the di-meson invariant mass distribution.
I am investigating the stability and convergence of the sequence of approximations for a coupled thermoelasticity problem produced by a one-step recurrent time-integration scheme.
I've managed to show that the one-step time-integration scheme (OSTIS) is stable and convergent when the gradient of the thermal field is positive (the heat energy of the body at every step is not less than the energy at the previous time step of the OSTIS).
But now I wonder whether this result is sufficient or whether the condition is unnatural. Generally speaking, can someone give an example of a thermoelastic process that occurs in nature (or in machinery) where the gradient of the temperature of a body is always positive?
P.S. As I understand it, this occurs when the system/body is constantly heated by an external source, so that more heat comes in than goes out at every time step.
UPDATE:
I need to know when this is true: \begin{align} & \tfrac{1}{2}\Delta {{t}^{-1}}s({{z}^{j+1}}-{{z}^{j}},{{u}^{j+1}}+{{u}^{j}})\ge 0 \\ \end{align} Here I use the following notation: \begin{align} & \,\theta =\theta (x,t) \\ \end{align} is the temperature field over the body at every time; \begin{align} & \frac{\partial }{\partial t}{{\left. [{{\theta }^{\Delta t}}(t)] \right|}_{{{t}_{j}}}}={{\left. {{[{{\theta }^{\Delta t}}(t)]}^{\prime }} \right|}_{{{t}_{j}}}}={{z}^{j}} \\ \end{align} and
\begin{align} & s(\theta ,\xi )=\int_{\Omega }{\theta \xi dx} \\ \end{align}
So basically, my question concerns term
\begin{align} \Delta {{t}^{-1}}[{{z}^{j+1}}-{{z}^{j}}]\ge 0 \end{align}
Are there physical processes where this occurs?
P.S. The meaning of the superscript indices can be understood from here:
\begin{align} & {{\theta }^{\Delta t}}(t)={{\theta }^{j+1/2}}+\Delta t[{{\omega }_{j}}(t)-\tfrac{1}{2}]{{z}^{j+1/2}}+\tfrac{1}{2}\Delta {{t}^{2}}{{\omega }_{j}}(t)[{{\omega }_{j}}(t)-1]{{{\dot{z}}}^{j+1/2}}, \\ & {\theta }'(t)={{z}^{j+1/2}}+\Delta t[{{\omega }_{j}}(t)-\tfrac{1}{2}]{{{\dot{z}}}^{j+1/2}} \\ & {\theta }''(t)={{{\dot{z}}}^{j+1/2}}, \\ & \forall t\in [{{t}_{j}},{{t}_{j+1}}]. \\ \end{align} |
Trigonometry is not only about ratios in right triangles. It is about ratios in general, and we can plot graphs of these ratios. Trigonometric graphs are periodic, which means the graph repeats after a certain interval. These functions can be used to model regularly recurring phenomena, for example the rotation of the earth, tides, temperature, etc.
Graphs of Trigonometric Functions
The general form of the sine function is:
$\LARGE y = A \; \sin(Bx+C) + D$
$\large Amplitude \; = \; A$
$\large Period \; = \; \frac{2\pi}{|B|}$
$\large Phase \; Shift \; = \; -\frac{C}{B}$
$\large Vertical \; Shift \; = \; D$
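The four features can be read straight off the parameters; here is a small sketch (the function name and example values are just for illustration).

```python
import math

def sine_params(A, B, C, D):
    """Graph features of y = A*sin(B*x + C) + D.

    Phase shift is -C/B: solving B*x + C = 0 gives the x at which
    the basic sine cycle starts.
    """
    return {
        "amplitude": abs(A),
        "period": 2 * math.pi / abs(B),
        "phase_shift": -C / B,
        "vertical_shift": D,
    }

p = sine_params(A=3, B=2, C=-math.pi / 2, D=1)
# period pi, phase shift pi/4 (shifted right), midline y = 1
print(p)
```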
To understand either of these, you first have to understand the basic premise behind Graph Signal Processing (GSP) which is to map a signal to a graph and then work with it on the "Graph space". This is possible due to certain similarities of classic DSP concepts and Algebraic Graph Theory.
So, it is easier to start from the second question because, before we start applying GSP, we first need a graph.
How to label irregular graphs with high dimensional data?
The short answer is that this is still an open problem and currently, there are "signals" that are naturally mapping on graphs and others where the mapping is either arbitrary or in some way constructed.
Signals that
naturally map on graphs are usually expressed as weights of the graph's edges through a Weight Matrix that is similar to an Adjacency Matrix.
Typical examples are usually items and some form of similarity between them. For example, suppose that you have a set of $N$ time series $X$ and you evaluate their cross correlation. This will result in a Weight Matrix (let's call it $W$) whose $i^{th}, j^{th}$ element ($W_{i,j}$) is the cross correlation between time series $X_{:,i}$ and $X_{:,j}$. (So, $X$ is an $m \times n$ matrix of $n$ time series signals each being $m$ samples long.)
What does this graph look like? It looks like a clique. In other words, because we have examined all-to-all cross correlations, all nodes are considered connected with each other. But the strength of each connection is expressed by a weight. So, yes, they are all connected, but some are much closer than others.
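A minimal sketch of this construction. The synthetic data and the use of plain Pearson correlation as the similarity measure are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 500, 6                      # n = 6 time series, m = 500 samples each
X = rng.standard_normal((m, n))
X[:, 1] += 0.8 * X[:, 0]           # make two of the series correlated

# weight matrix of the (complete) similarity graph:
# W[i, j] = |correlation between series i and series j|, no self-loops
W = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(W, 0.0)

print(W.shape)           # (6, 6): one node per time series
print(W[0, 1] > W[0, 2]) # the engineered pair is more strongly connected
```

Every off-diagonal entry is nonzero (the clique), but the engineered pair carries a much larger weight than the unrelated pairs.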
For signals that
do not naturally map on graphs, you first have to solve the corresponding graph labeling problem. This is generally done in two ways, either by coming up with a function that maps a signal to some graph or arbitrarily.
In the arbitrary case, you select some graph whose order (the number of nodes) is equal to the number of samples in your signal. That graph's nodes can be arbitrarily connected, it could for example be an entirely random graph where there is equal chance for any two nodes to be connected.
This is what the author of the paper that you link is actually doing. They take an arbitrary graph (a road network) and they map on to it an exponential decay signal. How? Arbitrarily. Does it make sense? No, but it illustrates the point they are trying to make about showing the effect of the operators.
(See page 4:
"Note that the definitions of the graph Fourier transform and its inverse [...] depend on the choice of graph Laplacian eigenvectors, which is not necessarily unique. Throughout this paper, we do not specify how to choose these eigenvectors, but assume they are fixed. The ideal choice of the eigenvectors in order to optimize the theoretical analysis conducted here and elsewhere remains an interesting open question; however, in most applications with extremely large graphs, the explicit computation of a full eigendecomposition is not practical anyhow, and methods that only utilize the graph Laplacian through sparse matrix-vector multiplication are preferred.")
The other way you can do the mapping is intuitively, or by model fitting (in the sense of optimisation).
So, an intuitive way to map a signal to a graph is to put the samples of some $x[n]$ time series on the nodes of a graph that are simply connected as a "line" (so, something looking like $x[0] \rightarrow x[1] \rightarrow x[2] \rightarrow x[3] \ldots \rightarrow x[n]$ ).
And a constructed way is to use optimisation in order to construct a graph whose connectivity represents SOME aspect of your original signal $x[n]$.
Which brings us to the first question:
How do I understand translation on the graph?
The short answer is that translation on a graph is equivalent to a re-ordering of the edges that effects a new connectivity pattern on the nodes of the graph. In this way, the nodes appear to have "moved" or translated to a different "position".
So now the question is how do you define "position" and to an extent this question is a bit related to the first one because "position" and how you represent the signal are related.
But, here is a very simple example, just to demonstrate a trivial translation.
Say we have this signal: $x = \left\{ 0,1,2,3,2,1,0,1,2,3,2,1,0 \right\}$ and we map it to the "line" graph $G(V,E)$ we saw earlier that looks like $x[0] \rightarrow x[1] \rightarrow x[2] \rightarrow x[3] \ldots$. In other words, we assign $x[0]$ to $v_0$, $x[1]$ to $v_1$, and so on, and we assume that the nodes are connected "sequentially" (and cyclically).
The adjacency matrix (or the weight matrix) of this graph is:
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\end{bmatrix}$$
Notice here that I have connected $x[|V|-1]$ back to $x[0]$ with that last row.
So, how do you "move" nodes around?
In classic DSP, if you wanted to shift things in time, you did something like $y[n] = x[n+2],\; n \in \left\{0, \ldots, |x|-1 \right\}$, where the shift is cyclic. After this, our shifted sequence $y$ looks like: $y = \left\{ 2,3,2,1,0,1,2,3,2,1,0,0,1 \right\}$
Right, so how could we achieve the same thing, solely utilising $A$, to effect the same shifting on our graph signal? Well, that's easy here: instead of having our initial time series $x$ connected as $x[n] \rightarrow x[n+1]$, it is as if we now connect $x[0] \rightarrow x[2]$, $x[1] \rightarrow x[3]$, $x[2] \rightarrow x[4]$, and so on. So, it is basically the same $A$ as above, only that now the $1$s appear two places to the right of their current position.
Did you notice how we expressed something that happens in the time domain as something that happens in the graph domain? The key idea here is that we translated the signal by changing the way the nodes are connected. Translation is basically an operator on the connectivity of the graph.
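As a sanity check of this idea, here is a small pure-Python sketch (my own illustration, not from the original answer) showing that applying the cyclic "line" graph adjacency matrix twice to the signal reproduces the classic DSP shift $y[n] = x[n+2]$:

```python
# The signal from the example above, mapped onto a cyclic "line" graph.
x = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0]
n = len(x)

# A[i][j] = 1 iff j == (i+1) mod n: node i points to node i+1, cyclically,
# exactly like the adjacency matrix written out above.
A = [[1 if j == (i + 1) % n else 0 for j in range(n)] for i in range(n)]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# One application of A shifts the signal by one place; two applications by two.
y = matvec(A, matvec(A, x))
print(y)  # [2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0, 0, 1]
```

The output matches the cyclically shifted sequence $y$ computed in the time domain above, which is the whole point: the shift operator lives entirely in the graph's connectivity matrix.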
BUT!
Notice here that we said earlier that we
assume a "line" graph as the underlying graph for our signal. We made an arbitrary decision. We could have mapped our signal on the road network of some city (as the authors of the paper that you link have done). Then, how do you define translation on that thing?!
This is where the Graph Laplacian and Algebraic Graph Theory come into play.
To cut a long story short, the eigendecomposition of the Graph Laplacian plays the role that the Discrete Fourier Transform plays for signals in the time domain.
It has been the topic of a lot of research in pure mathematics, and its eigenvectors and eigenvalues reveal a lot of information about the graph's connectivity structure (for example, whether it contains cycles, what lengths those cycles have, whether the graph is connected, etc.).
So, basically, what the authors are working on in the paper that you linked is a translation operator and a "DFT"-equivalent operator on the graph Laplacian, so that you can "translate" nodes around an arbitrary connected graph (not only one looking like a line; it could have any shape) and decompose and recompose the graph connectivity matrix into elementary components no matter how complex the graph is.
You can see now how the representation of the graph and "translation" are connected. The Laplacian of the graph depends on the values of its adjacency (or weight) matrix (i.e. its
structure). You map your signal $x[n]$ on the node set of the graph $V$ and you assume (or construct) the edge set $E$. Therefore, any notions of "translation" or "frequency" now depend on the structure of the adjacency matrix.
Therefore, don't try to understand why Fig.7 in the paper that you link looks the way it looks. First of all, the mapping of the signal on the road network is arbitrary and second, the "translation" depends
both on the mapping and the connectivity matrix of the road network. Conceptually, this particular example does not have an immediate connection with reality. But at the same time, conceptually it shows you what translation means over a graph signal and a graph that can have arbitrary connectivity.
Perhaps it is easier to think about GSP in terms of linear algebra because at the end of the day, this is what it is all based on.
If we forget about graphs, adjacencies, nodes, edges, mappings, etc for a minute and focus on the Laplacian: The whole point of GSP is to come up with a new representation for $x[n]$ in the form of a matrix. A new "decomposition" if you like, similar to the way the DFT matrix decomposes a signal or similar to the way Wavelets decompose a signal.
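To make the "Laplacian as DFT" analogy concrete, here is a small check (my own sketch, not from the original answer, using the undirected cyclic version of the line graph above): the classic DFT basis vectors are exactly eigenvectors of the cycle graph's Laplacian $L = D - A$, which is why the Laplacian eigendecomposition generalises the DFT to arbitrary graphs.

```python
import cmath
import math

n = 13
# Undirected cycle: node i is connected to its two neighbours i-1 and i+1.
A = [[1 if j in ((i - 1) % n, (i + 1) % n) else 0 for j in range(n)]
     for i in range(n)]
# Every node has degree 2, so the Laplacian is L = D - A = 2I - A.
L = [[(2 if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]

err = 0.0
for k in range(n):
    u = [cmath.exp(2j * math.pi * k * m / n) for m in range(n)]  # k-th DFT vector
    lam = 2 - 2 * math.cos(2 * math.pi * k / n)                  # k-th eigenvalue
    Lu = [sum(L[i][j] * u[j] for j in range(n)) for i in range(n)]
    err = max(err, max(abs(Lu[i] - lam * u[i]) for i in range(n)))

print(err)  # numerically zero: each DFT vector is an eigenvector of L
```

For a graph with arbitrary connectivity the eigenvectors are no longer complex exponentials, but projecting a graph signal onto them is still the "graph Fourier transform" in the sense the paper uses.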
In fact, wavelets are part of the family of "constructed" graphs I was talking about earlier. They are basically a matrix. This matrix could also be expressed as a graph. When it is expressed as a graph, it opens the door to "new" ways of working with signals or "new" ways of working with graphs. For more information on this line of thinking please see this paper or this paper and this paper (for methods of discovering graph representations).
Hope this helps. |
Groups usually come with homomorphisms, defined as mappings preserving multiplication:
$$f(a\cdot b) = f(a)\cdot f(b)$$
From this definition, notions of subgroup (monomorphism), quotient group (epimorphism, normal subgroup) and the famous isomorphism theorem follow naturally. The category of groups with homomorphisms as arrows has products and sums, equalizers and coequalizers all well-known and with nice properties.
Consider, instead, affine morphisms, which can be defined by the following equivalent conditions (writing \(f^{-1}(x)\) for the group inverse \((f(x))^{-1}\)):

(1) \(f(a \cdot b^{-1} \cdot c) = f(a) \cdot f^{-1}(b) \cdot f(c)\)

(2) \(f(a \cdot b) = f(a) \cdot f^{-1}(e) \cdot f(b)\)

(3) \(\exists t.\; f(a \cdot b) = f(a) \cdot t \cdot f(b)\)
The motivation for this definition is slightly roundabout.
The difference between homomorphism and affine morphism is similar to the difference between a vector subspace and an affine subspace of a vector space. A vector subspace always goes through the origin (for a homomorphism \(f\), \(f(e) = e\)), whereas an affine subspace is translated from the origin (\(f(e) \neq e\) is possible for an affine morphism).
Take points \(f(a)\) and \(f(b)\) in the image of an affine morphism, translate them back to the corresponding "vector subspace" to obtain \(f(a) \cdot f^{-1}(e)\) and \(f(b) \cdot f^{-1}(e)\). If translated points are multiplied and the result is translated back to the affine image, the resulting point should be the same as \(f(a \cdot b)\):
$$
f(a \cdot b) = (f(a) \cdot f^{-1}(e)) \cdot (f(b) \cdot f^{-1}(e)) \cdot f(e) = f(a) \cdot f^{-1}(e) \cdot f(b)
$$
which gives the definition (2).
(1) => (2) immediately follows by substituting \(e\) for \(b\).
(2) => (3) by substituting \(f^{-1}(e)\) for \(t\).
(3) => (2) by substituting \(e\) for \(a\) and \(b\).
(2) => (1)
\(f(a \cdot b^{-1} \cdot c)\)
\(=\) { (2) for \(a \cdot (b^{-1} \cdot c)\) }
\(f(a) \cdot f^{-1}(e) \cdot f(b^{-1} \cdot c)\)
\(=\) { (2) for \(b^{-1} \cdot c\) }
\(f(a) \cdot f^{-1}(e) \cdot f(b^{-1}) \cdot f^{-1}(e) \cdot f(c)\)
\(=\) { \(e = f(b) \cdot f^{-1}(b)\), working toward creating a sub-expression that can be collapsed by (2) }
\(f(a) \cdot f^{-1}(e) \cdot f(b^{-1}) \cdot f^{-1}(e) \cdot f(b) \cdot f^{-1}(b) \cdot f(c)\)
\(=\) { collapsing \(f(b^{-1}) \cdot f^{-1}(e) \cdot f(b)\) by (2) }
\(f(a) \cdot f^{-1}(e) \cdot f(b^{-1} \cdot b) \cdot f^{-1}(b) \cdot f(c)\)
\(=\) { \(b^{-1} \cdot b = e\) }
\(f(a) \cdot f^{-1}(e) \cdot f(e) \cdot f^{-1}(b) \cdot f(c)\)
\(=\) { \(f^{-1}(e) \cdot f(e) = e\) }
\(f(a) \cdot f^{-1}(b) \cdot f(c)\)
It is easy to check that each homomorphism is an affine morphism (specifically, homomorphisms are exactly affine morphisms with \(f(e) = e\)).
Composition of affine morphisms is affine and hence groups with affine morphisms form a category \(\mathbb{Aff}\).
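A quick computational sanity check (my own example, not from the post): in the additive group of integers, conditions (1) and (2) read additively as \(f(a - b + c) = f(a) - f(b) + f(c)\) and \(f(a + b) = f(a) - f(0) + f(b)\), and the map \(f(x) = 3x + 5\) satisfies both while failing to be a homomorphism, since \(f(0) = 5 \neq 0\):

```python
# f is affine but not a homomorphism on (Z, +): f(0) = 5 != 0.
def f(x):
    return 3 * x + 5

samples = range(-5, 6)

# Condition (1), additively: f(a - b + c) == f(a) - f(b) + f(c).
cond1 = all(f(a - b + c) == f(a) - f(b) + f(c)
            for a in samples for b in samples for c in samples)

# Condition (2), additively: f(a + b) == f(a) - f(0) + f(b).
cond2 = all(f(a + b) == f(a) - f(0) + f(b)
            for a in samples for b in samples)

print(cond1, cond2, f(0))  # True True 5
```

The same check with \(f(x) = x^2\) fails, as expected: being affine is genuinely a constraint, just a weaker one than being a homomorphism.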
A subset \(H\) of a group \(G\) is called an affine subgroup of \(G\) if one of the following equivalent conditions holds:

(1) \(\exists h \in G: \forall p, q \in H.\; p \cdot h^{-1} \cdot q \in H \wedge h \cdot p^{-1} \cdot h \in H\)

(2) \(\forall p, q, h \in H.\; p \cdot h^{-1} \cdot q \in H \wedge h \cdot p^{-1} \cdot h \in H\)
Finally for to-day, consider an affine morphism \(f:G_0\rightarrow G_1\). For \(t\in G_0\) define a kernel:

$$\mathrm{ker}_t\, f = \{g\in G_0 \mid f(g) = f(t)\}$$

It's easy to check that a kernel is an affine subgroup (take \(t\) as \(h\)). Note that in \(\mathbb{Aff}\) a whole family of subobjects corresponds to a morphism, whereas there is *the* kernel in \(\mathbb{Grp}\).
To be continued: affine quotients, products, sums, free affine groups. |
Paragraphs are separated by a blank line.
2nd paragraph.
*Italic*, **bold**, and `monospace`. Itemized lists
look like:

  * this one
  * that one
  * the other one
Note that — not considering the asterisk — the actual text
content starts at 4-columns in.
> Block quotes are
> written like so.
>
> They can span multiple paragraphs,
> if you like.
Use 3 dashes for an em-dash. Use 2 dashes for ranges (ex., “it’s all
in chapters 12–14”). Three dots … will be converted to an ellipsis. Unicode is supported. ☺
Here’s a numbered list:

 1. first item
 2. second item
 3. third item
Note again how the actual text starts at 4 columns in (4 characters
from the left side). Here’s a code sample:
    # Let me re-iterate ...
    for i in 1 .. 10 { do-something(i) }
As you probably guessed, indented 4 spaces. By the way, instead of
indenting the block, you can use delimited blocks, if you like:
(which makes copying & pasting easier). You can optionally mark the
delimited block for Pandoc to syntax highlight it:
Now a nested list:
 1. First, get these ingredients:

      * carrots
      * celery
      * lentils

 2. Boil some water.

 3. Dump everything in the pot and follow
    this algorithm:

        find wooden spoon
        uncover pot
        stir
        cover pot
        balance wooden spoon precariously on pot handle
        wait 10 minutes
        goto first step (or shut off burner when done)

    Do not bump wooden spoon or it will fall.
Notice again how text always lines up on 4-space indents (including
that last line which continues item 3 above).
[^1]: Footnote text goes here.
Tables can look like this:
  size  material      color
------  ------------  ------------
     9  leather       brown
    10  hemp canvas   natural
    11  glass         transparent

Table: Shoes, their sizes, and what they’re made of
(The above is the caption for the table.) Pandoc also supports
multi-line tables:
--------  -----------------------
keyword   text
--------  -----------------------
red       Sunsets, apples, and
          other red or reddish
          things.

green     Leaves, grass, frogs
          and other things it’s
          not easy being.
--------  -----------------------
A horizontal rule follows.

* * * * *

Here’s a definition list:
apples
  : Good for making applesauce.

oranges
  : Citrus!

tomatoes
  : There’s no “e” in tomatoe.
Again, text is indented 4 spaces. (Put a blank line between each
term/definition pair to spread things out more.)
Here’s a “line block”:
| Line one
| Line too
| Line tree
and images can be specified like so:
Inline math equations go in like so: $\omega = d\phi / dt$. Display
math should get its own line and be put in in double-dollarsigns:
$$I = \int \rho R^{2} dV$$
And note that you can backslash-escape any punctuation characters
which you wish to be displayed literally, ex.: `foo`, *bar*, etc. |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need the condition '$\overline F$ is algebraic over $F$'? That is, if we remove '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or is it obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\,ds$ for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: For $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \left|\frac{a_{n+1}}{a_n}\right|<1$, then the sequence converges to zero.
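As a numerical illustration of one standard route (my own sketch, not part of the question): induction gives $|g_n(t)| \le M\,t^{n-1}/(n-1)!$ with $M = \sup |g|$, so $|n!\,g_n(t)| \le M\,n\,t^{n-1} \to 0$ for $t \le 1/2$. For the constant function $g = 1$ each $g_n$ is exactly the polynomial $t^{n-1}/(n-1)!$, which is easy to iterate in code:

```python
from math import factorial

# Represent g_n as a polynomial coefficient list [c0, c1, ...] meaning
# c0 + c1*t + c2*t^2 + ...; here g_1 = g = 1 (my choice of test case).
def integrate(poly):
    # Antiderivative with zero constant term: integral of t^k is t^(k+1)/(k+1).
    return [0.0] + [c / (k + 1) for k, c in enumerate(poly)]

def evaluate(poly, t):
    return sum(c * t**k for k, c in enumerate(poly))

g = [1.0]
values = []
for n in range(1, 31):
    values.append(factorial(n) * evaluate(g, 0.5))  # n! * g_n(1/2)
    g = integrate(g)

# For g = 1 this is n! * t^(n-1)/(n-1)! = n * (1/2)^(n-1), which tends to 0.
print(values[0], values[4], values[29])
```

The printed values decay geometrically, matching the bound $n\,(1/2)^{n-1} \to 0$.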
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients.

Now, instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients.

I have problems formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle.
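What is described here resembles the Rayleigh-Ritz procedure: for a symmetric form, stationarity of $x^\top H x$ subject to $x^\top x = 1$ gives $Hx = \lambda x$, so the roots of the secular determinant $\det(H - \lambda I) = 0$ are exactly the stationary values of the functional. A tiny sketch (my own illustration, with a made-up $2\times 2$ matrix $H$):

```python
import math

# H = [[a, b], [b, c]] is a symmetric 2x2 matrix, made up for illustration.
a, b, c = 2.0, 1.0, 3.0

# Roots of det(H - lam*I) = lam^2 - (a+c)*lam + (a*c - b^2) = 0.
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
lam_lo = (a + c - disc) / 2
lam_hi = (a + c + disc) / 2

def rayleigh(x):
    # Value of the functional x^T H x / x^T x at a trial vector x.
    num = a * x[0] ** 2 + 2 * b * x[0] * x[1] + c * x[1] ** 2
    return num / (x[0] ** 2 + x[1] ** 2)

# Eigenvector for lam_lo: first row of (H - lam*I)x = 0 gives x = (b, lam - a).
v = (b, lam_lo - a)
assert abs(rayleigh(v) - lam_lo) < 1e-9  # stationary value = root of determinant

# Any other trial vector gives a value between the two roots,
# so the smallest root is the best value the linear ansatz can reach.
for x in [(1.0, 0.0), (1.0, 1.0), (0.3, -0.7)]:
    assert lam_lo - 1e-9 <= rayleigh(x) <= lam_hi + 1e-9
```

This is the "deeper principle": the determinant condition and the stationary values of the functional are two descriptions of the same eigenvalue problem.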
If $x$ is a prime number and there exists a number $y$ which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome, with digitsum($z$) = digitsum($x$).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
One can do slightly better than the strategy proposed by Mike Earnest and obtain a $\frac{143}{300}\approx 48\%$ chance of success, which is $\frac{1}{300}$ more likely to succeed than the linked strategy. The strategy is as follows:
Let $f(N)=\max(1,N-1)$. Van Helsing should look to see whether there are any two coffins $c_1$ and $c_2$ such that the lockets $N_1$ and $N_2$ inside each respectively have $f(N_1)=c_2$ and $f(N_2)=c_1$. If so, he should switch $N_1$ and $N_2$. Otherwise, if there is any coffin $c$ with a locket $N$ inside such that $f(N)\neq c$, but such that there is some locket $N'$ with $f(N')=c$, he should switch $N'$ and $N$. He should do nothing if neither of those conditions is met.
After hearing the number $N$ which Dracula announces, Jonathan will open the coffin labelled $f(N)$.
Examples: If Van Helsing looked through the coffins numbered $1$ to $5$ and found lockets $1,\,3,\,5,\,4,\,2$ in that order, he would switch lockets $5$ and $4$ to get $1,\,3,\,4,\,5,\,2$, since now locket $4$ is in coffin $f(4)=3$ and locket $5$ is in coffin $f(5)=4$. In fact, every locket except $2$ is in the appropriate coffin, so Jonathan will find the proper locket with probability $4/5$. If the lockets were $1,\,2,\,3,\,4,\,5$, then Van Helsing would switch lockets $2$ and $3$ (or various other possibilities), since it is not possible to switch a pair of lockets into their rightful places, but it is possible to switch locket $3$ into the coffin $2=f(3)$. This leaves the arrangement $1,\,3,\,2,\,4,\,5$, where only two lockets are in the appropriate place, so Jonathan succeeds with probability $2/5$, exactly when $1$ or $3$ is called. The last possibility is that Van Helsing does nothing, which would happen in an arrangement like $2,\,3,\,4,\,5,\,1$, where each coffin contains an appropriate locket. Note that locket $1$ is still out of place, but switching it with locket $2$ would move locket $2$ out of place, so it makes no improvement. Jonathan again succeeds with probability $4/5$ here.
This succeeds with probability $\frac{143}{300}$. We will show this and that it is optimal below the horizontal line.
In particular, let us work through this game backwards. We can see that Jonathan only knows one number $N$, so his strategy is characterized by the function $f(N)$ which tells him which coffin to open given $N$.
Van Helsing's goal is therefore to maximize the number of $N$ such that the locket numbered as $N$ is in fact in the coffin labelled $f(N)$. His optimal strategy can easily be seen to be to act as follows, where he looks at only the first case below whose conditions are satisfied:
If there exist two lockets $N_1$ and $N_2$ such that $N_1$ is in coffin $f(N_2)$ and $N_2$ is in coffin $f(N_1)$, he should switch lockets $N_1$ and $N_2$, improving the probability of success by $2/5$.
Otherwise, if there exists a coffin $c$ containing a locket $N$ such that $f(N)\neq c$, but there is some $N'$ with $f(N')=c$, he should switch lockets $N$ and $N'$, improving the probability of success by $1/5$.
Otherwise, every coffin $c$ in the image of $f$ contains a locket $N$ with $f(N)=c$. In this case, the probability cannot be improved, so Van Helsing should do nothing.
Let us say that the first case happens with probability $P_1$, the second with probability $P_2$ and the third with probability $P_3$. Note that the probability of success if Van Helsing did nothing is $\frac{1}5$, since then the locket in the chosen coffin would be distributed uniformly randomly. Adding this baseline to the expected improvement in the probability due to Van Helsing gives the probability of success as$$\frac{1}5 + \frac{1}5P_2 + \frac{2}5P_1.$$Note that, as $P_2$ is not natural to calculate, we may use the relation $P_1+P_2+P_3=1$ and various simplifications to rewrite the probability of success as$$\frac{2-P_3+P_1}{5}$$
To calculate $P_1$, let us define the function $S(c)=|f^{-1}[\{c\}]|$. That is, $S(c)$ is the number of lockets $N$ which Jonathan will search coffin $c$ for. This is the only aspect of $f$ that matters.
Let us first calculate the probability of a given pair of coffins $c_1$ and $c_2$ containing lockets $N_1$ and $N_2$ such that swapping the lockets puts both in their proper place. That is, we want to know the probability that $f(N_1)=c_2$ and $f(N_2)=c_1$. This probability may be seen to be $S(c_1)S(c_2)/20$.
As we are interested in the probability of this being the case for any coffins, it is useful to consider the probability that both the pairs of coffins $(c_1,c_2)$ and $(c_3,c_4)$ could be switched to the same advantage. This probability will be $S(c_1)S(c_2)S(c_3)S(c_4)/120$, since we are determining that four locations must contain elements of disjoint groups. Usefully, since a set of four coffins $\{c_1,c_2,c_3,c_4\}$ may be partitioned into two groups of two in $3$ ways, the sum of the probabilities of there being two good switches among these four coffins is $S(c_1)S(c_2)S(c_3)S(c_4)/40$.
Obviously, having only $5$ coffins, one cannot have three possible pairs of good switches, as pairs of switches may not overlap. Thus, using the inclusion-exclusion principle, we calculate $P_1$ as follows$$P_1=\frac{\sum\limits_{\{c_1,c_2\}}S(c_1)S(c_2)}{20}-\frac{\sum\limits_{\{c_1,c_2,c_3,c_4\}}S(c_1)S(c_2)S(c_3)S(c_4)}{40}$$where the sums run over the subsets of the coffins in the image of $f$ of the desired size.
We may calculate $P_3$ in a similar manner. In particular, if one enumerates the coffins $c_i$ in the image of $f$ as $c_1,c_2,\ldots,c_n$, then the probability of each coffin $c_i$ containing an $N_i$ with $f(N_i)=c_i$ will be $\frac{S(c_1)S(c_2)\ldots S(c_n)}{5\cdot 4 \cdot \ldots \cdot (5-n+1) }$.
If $f$ is a bijection, then $S(c)=1$ for every $c$, so we calculate that $P_1=\frac{{5\choose 2}}{20}-\frac{{5\choose 4}}{40}=\frac{15}{40}$ and $P_3=\frac{1}{5!}$ so that the overall probability of success is $\frac{71}{150}$. This choice corresponds exactly to Mike Earnest's strategy, where essentially $f(n)=n$ was used.
However, we get a better value if $f$ has an image of size $4$: then Jonathan will only ever look in four coffins, where $S$ takes the values $1,\,1,\,1,\,2$. We then get $P_3=\frac{2}{5!}$ and $P_1=\frac{9}{20}-\frac{2}{40}=\frac{2}5$, yielding a success rate of $\frac{143}{300}$.
Lacking a clever argument to rule out $f$ with smaller images, note that when $f$ has an image of $3$ coffins and $S$ takes the values of $1,\,1,\,3$ or $1,\,2,\,2$, then $P_1$ is the same as in the case of $f$ having an image of $4$, but $P_3$ is larger, decreasing the rate of success. If the image of $S$ is no more than two coffins, it's easy to see that $P_1$ will be at most $\frac{3}{10}$, which necessarily gives a lower probability of success than the other strategies. Thus, we settle on $f$ having an image of size $4$ to get the best success rate. |
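The claimed rate can also be confirmed by brute force. The sketch below (my own implementation of the strategy described above; the variable names are mine) enumerates all $120$ arrangements and, for each, all $5$ numbers Dracula might announce:

```python
from itertools import permutations
from fractions import Fraction

def f(N):
    return max(1, N - 1)  # Jonathan opens coffin f(N) on hearing N

total_good = 0
for perm in permutations(range(1, 6)):
    coffins = list(perm)              # coffins[c-1] = locket inside coffin c

    def locket(c):
        return coffins[c - 1]

    # Case 1: two coffins whose lockets can both be swapped into place.
    pair = next(((c1, c2)
                 for c1 in range(1, 6) for c2 in range(c1 + 1, 6)
                 if f(locket(c1)) == c2 and f(locket(c2)) == c1), None)
    if pair is not None:
        c1, c2 = pair
        coffins[c1 - 1], coffins[c2 - 1] = coffins[c2 - 1], coffins[c1 - 1]
    else:
        # Case 2: coffin c holds a misplaced locket while some other
        # locket N' = locket(c2) has f(N') = c; move N' into c.
        move = next(((c, c2)
                     for c in range(1, 6) if f(locket(c)) != c
                     for c2 in range(1, 6) if f(locket(c2)) == c), None)
        if move is not None:
            c, c2 = move
            coffins[c - 1], coffins[c2 - 1] = coffins[c2 - 1], coffins[c - 1]

    # Count the announced numbers N for which Jonathan finds locket N.
    total_good += sum(1 for N in range(1, 6) if locket(f(N)) == N)

print(Fraction(total_good, 5 * 120))  # 143/300
```

The count over all $600$ equally likely (arrangement, $N$) pairs comes out to $286$, i.e. exactly $\frac{143}{300}$, matching the analysis above.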
The pgfplots package is a powerful tool, based on tikz, dedicated to creating scientific graphs.

Pgfplots is a visualization tool that makes the inclusion of plots in your documents simpler. The basic idea is that you provide the input data/formula and pgfplots does the rest.
\begin{tikzpicture}
\begin{axis}
\addplot[color=red]{exp(x)};
\end{axis}
\end{tikzpicture}
%Here ends the first plot
\hskip 5pt
%Here begins the 3d plot
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
]
{exp(-x^2-y^2)*x};
\end{axis}
\end{tikzpicture}
Since pgfplots is based on tikz, the plot must be inside a tikzpicture environment. Then the declaration \begin{axis} ... \end{axis} will set the right scaling for the plot; check the reference guide for other axis environments.

To add an actual plot, the command \addplot[color=red]{exp(x)}; is used. Inside the square brackets some options can be passed; in this case we set the colour of the plot to red. The square brackets are mandatory: if no options are passed, leave a blank space between them. Inside the curly brackets you put the function to plot. It is important to remember that this command must end with a semicolon (;).
To put a second plot next to the first one, declare a new tikzpicture environment. Do not insert a new line, but a small blank gap; in this case \hskip 5pt will insert a 5pt-wide blank space.

The rest of the syntax is the same, except for \addplot3[surf,]{exp(-x^2-y^2)*x};. This adds a 3d plot, and the option surf inside the square brackets declares that it is a surface plot. The function to plot must be placed inside the curly brackets. Again, don't forget to put a semicolon (;) at the end of the command.
Note: it's recommended as a good practice to indent the code (see the second plot in the example above) and to add a comma (,) at the end of each option passed to \addplot. This way the code is more readable and it is easier to add further options if needed.
Including pgfplots in your document is very easy: add the next line to your preamble and that's it:

\usepackage{pgfplots}
Some additional tweaking for this package can be made in the preamble. To change the size of each plot and also guarantee backwards compatibility (recommended), add the next line:

\pgfplotsset{width=10cm,compat=1.9}

This sets the size of each pgfplots figure to 10 centimeters; you may use different units (pt, mm, in). The compat parameter allows the code to work on package version 1.9 or later.
Since LaTeX was not initially conceived with plotting capabilities in mind, when there are several pgfplots figures in your document, or they are very complex, it takes a considerable amount of time to render them. To improve the compiling time you can configure the package to export the figures to separate PDF files and then import them into the document; add the code shown below to the preamble:
\usepgfplotslibrary{external}
\tikzexternalize
See this help article for further details on how to set up tikz-externalization in your Overleaf project.
Pgfplots's 2D plotting functionalities are vast, and you can personalise your plots to look exactly the way you want. Nevertheless, the default options usually give very good results, so all you have to do is feed in the data and LaTeX will do the rest.

Plotting mathematical expressions is really easy:
\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
]
%Below the red parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=red,
]
{x^2 - 2*x - 1};
\addlegendentry{$x^2 - 2x - 1$}
%Here the blue parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=blue,
]
{x^2 + 2*x + 1};
\addlegendentry{$x^2 + 2x + 1$}
\end{axis}
\end{tikzpicture}
Let's analyse the new commands line by line:

- axis lines = left. Draws the axis lines only on the left and bottom sides of the plot.
- xlabel = $x$ and ylabel = {$f(x)$}. Set the labels shown next to each axis.
- \addplot. Adds a plot to the axis; there is one per function here.
- domain=-10:10. The interval of $x$ values over which the function is evaluated.
- samples=100. The number of sample points used to draw the curve.
- \addlegendentry{$x^2 - 2x - 1$}. Adds an entry to the legend for the preceding plot.
To add another graph to the plot, just write a new \addplot entry.
Scientific research often yields data that has to be analysed. The next example shows how to plot data with
pgfplots:
\begin{tikzpicture}
\begin{axis}[
    title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility},
    xlabel={Temperature [\textcelsius]},
    ylabel={Solubility [g per 100 g water]},
    xmin=0, xmax=100,
    ymin=0, ymax=120,
    xtick={0,20,40,60,80,100},
    ytick={0,20,40,60,80,100,120},
    legend pos=north west,
    ymajorgrids=true,
    grid style=dashed,
]
\addplot[
    color=blue,
    mark=square,
]
coordinates {
    (0,23.1)(10,27.5)(20,32)(30,37.8)(40,44.6)(60,61.8)(80,83.8)(100,114)
};
\legend{CuSO$_4\cdot$5H$_2$O}
\end{axis}
\end{tikzpicture}

There are some new commands and parameters here:
- title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility}. Sets the title, shown above the plot.
- xmin=0, xmax=100, ymin=0, ymax=120. Set the range of each axis explicitly.
- xtick={0,20,40,60,80,100}, ytick={0,20,40,60,80,100,120}. Set where the tick marks appear on each axis.
- legend pos=north west. Places the legend in the upper-left corner.
- ymajorgrids=true. Enables horizontal grid lines at the major ticks of the y axis; use xmajorgrids to enable grid lines on the x axis.
- grid style=dashed. Draws the grid lines dashed.
- mark=square. Marks each data point with a square.
- coordinates {(0,23.1)(10,27.5)(20,32)...}. Passes the data points to be plotted inline.
If the data is in a file, which is the case most of the time, then instead of the \addplot ... coordinates combination you should use \addplot table {file_with_the_data.dat}; the rest of the options remain valid in this environment.
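For instance, here is a minimal sketch of plotting from a file (the file name solubility.dat and the column names T and S are made up for illustration):

```latex
% Suppose solubility.dat contains two whitespace-separated columns
% with a header row:
%
%   T   S
%   0   23.1
%   20  32
%   40  44.6
%
\begin{tikzpicture}
\begin{axis}[
    xlabel={Temperature [\textcelsius]},
    ylabel={Solubility [g per 100 g water]},
]
\addplot table[x=T, y=S] {solubility.dat};
\end{axis}
\end{tikzpicture}
```

The x=T, y=S options select which named columns of the file are used for each axis; without them, pgfplots takes the first two columns.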
Scatter plots are used to represent information by using some kind of marks; these are common, for example, when computing statistical regressions. Let's start with some data. The sample below shows the structure of the data file we are going to plot (see the end of this section for a link to the LaTeX source and the data file):
GPA ma ve co un
3.45 643 589 3.76 3.52
2.78 558 512 2.87 2.91
2.52 583 503 2.54 2.4
3.67 685 602 3.83 3.47
3.24 592 538 3.29 3.47
2.1 562 486 2.64 2.37
The next example is a scatter plot of the first two columns in this table:
\begin{tikzpicture}
\begin{axis}[
    enlargelimits=false,
]
\addplot+[
    only marks,
    scatter,
    mark=halfcircle*,
    mark size=2.9pt,
]
table[meta=ma] {scattered_example.dat};
\end{axis}
\end{tikzpicture}
The parameters passed to the axis and \addplot environments can also be used in a data plot, except for scatter. Below is a description of the code:

- enlargelimits=false. The axis is not enlarged beyond the range of the data.
- only marks. Draws only the marks, without connecting lines.
- scatter. Declares a scatter plot, so the marks can vary according to the meta parameter explained below.
- mark=halfcircle*. Uses half-filled circles as marks.
- mark size=2.9pt. Sets the size of each mark.
- table[meta=ma]{scattered_example.dat};. Reads the data from the file; the column ma is used as the meta data that determines the colouring of each mark.
Bar graphs (also known as bar charts and bar plots) are used to display gathered data, mainly statistical data about a population of some sort. Bar plots in
pgfplots are highly customisable, but here we are going to show an example that 'just works':
\begin{tikzpicture}
\begin{axis}[
    x tick label style={
        /pgf/number format/1000 sep=},
    ylabel=Year,
    enlargelimits=0.05,
    legend style={at={(0.5,-0.1)},
        anchor=north,legend columns=-1},
    ybar interval=0.7,
]
\addplot coordinates {(2012,408184) (2011,408348) (2010,414870) (2009,412156)};
\addplot coordinates {(2012,388950) (2011,393007) (2010,398449) (2009,395972)};
\legend{Men,Women}
\end{axis}
\end{tikzpicture}
The figure starts with the already explained declaration of the
tikzpicture and axis environments, but the axis declaration has a number of new parameters:
x tick label style={/pgf/number format/1000 sep=}
There can be several \addplot commands within this environment (the
ybar parameter described below is mandatory for this to work).
enlargelimits=0.05.
legend style={at={(0.5,-0.1)}, anchor=north,legend columns=-1}
ybar interval=0.7,
The
coordinates in this kind of plot determine the base point of the bar and its height.
The labels on the y-axis will show up to 4 digits; if the numbers you are working with are greater than 9999,
pgfplots will use the same notation as in the example. pgfplots also has the 3d plotting capabilities that you would expect of plotting software.
There's a simple example about this at the introduction, let's work on something slightly more complex:
\begin{tikzpicture}
\begin{axis}[
    title=Example using the mesh parameter,
    hide axis,
    colormap/cool,
]
\addplot3[
    mesh,
    samples=50,
    domain=-8:8,
]
{sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\addlegendentry{$\frac{sin(r)}{r}$}
\end{axis}
\end{tikzpicture}
Most of the commands here have already been explained, but there are 3 new things:
hide axis
colormap/cool
mesh
Note:
When working with trigonometric functions pgfplots uses degrees as the default unit; if the angle is in radians (as in this example) you have to use the deg function to convert to degrees.
In
pgfplots it is possible to plot contour plots, but the data has to be precalculated by an external program. Let's see:
\begin{tikzpicture}
\begin{axis}[
    title={Contour plot, view from top},
    view={0}{90}
]
\addplot3[
    contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}
]
{sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\end{axis}
\end{tikzpicture}
This is a plot of some contour lines for the same equation used in the previous section. The value of the
title parameter is inside curly brackets because it contains a comma, so we use the grouping brackets to avoid any confusion with the other parameters passed to the
\begin{axis} declaration. There are two new commands:
view={0}{90}
contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}
levels is a list of values of elevation levels where the contour lines are to be computed.
To plot a set of data into a 3d surface all we need is the coordinates of each point. These coordinates could be an unordered set or, in this case, a matrix:
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
]
coordinates {
(0,0,0) (0,1,0) (0,2,0)

(1,0,0) (1,1,0.6) (1,2,0.7)

(2,0,0) (2,1,0.7) (2,2,1.8)
};
\end{axis}
\end{tikzpicture}
The points passed to the
coordinates parameter are treated as contained in a 3 x 3 matrix, with a blank line acting as the separator between matrix rows.
All the options for 3d plots in this article apply to data surfaces.
The syntax for parametric plots is slightly different. Let's see:
\begin{tikzpicture}
\begin{axis}[
    view={60}{30},
]
\addplot3[
    domain=0:5*pi,
    samples=60,
    samples y=0,
]
({sin(deg(x))}, {cos(deg(x))}, {x});
\end{axis}
\end{tikzpicture}
There are only two new things in this example: first, the
samples y=0 to prevent
pgfplots from joining the extreme points of the spiral; and second, the way the function to plot is passed to the
addplot3 environment. Each coordinate function is grouped inside curly brackets, and the three of them are delimited with parentheses.
Command/Option/Environment - Description - Possible values:
axis - normal plots with linear scaling.
semilogxaxis - logarithmic scaling of x and normal scaling for y.
semilogyaxis - logarithmic scaling for y and normal scaling for x.
loglogaxis - logarithmic scaling for the x and y axes.
axis lines - changes the way the axes are drawn; the default is box. Possible values: box, left, middle, center, right, none.
legend pos - position of the legend box. Possible values: south west, south east, north west, north east, outer north east.
mark - type of mark used in data plotting; when a single character is used, the mark's appearance is very similar to that character. Possible values: *, x, +, |, o, asterisk, star, 10-pointed star, oplus, oplus*, otimes, otimes*, square, square*, triangle, triangle*, diamond, halfdiamond*, halfsquare*, right*, left*, Mercedes star, Mercedes star flipped, halfcircle, halfcircle*, pentagon, pentagon*, cubes (cubes only works in 3d plots).
colormap - colour scheme to be used in a plot; can be personalised, but there are some predefined colormaps: hot, hot2, jet, blackwhite, bluered, cool, greenyellow, redyellow, violet.
For more information see: |
15.1. Generative Adversarial Networks¶
Throughout most of this book, we've talked about how to make predictions. In some form or another, we used deep neural networks to learn mappings from data points to labels. This kind of learning is called discriminative learning, as in, we'd like to be able to discriminate between photos of cats and photos of dogs. Classifiers and regressors are both examples of discriminative learning. And neural networks trained by backpropagation have upended everything we thought we knew about discriminative learning on large complicated datasets. Classification accuracies on high-res images have gone from useless to human-level (with some caveats) in just 5-6 years. We'll spare you another spiel about all the other discriminative tasks where deep neural networks do astoundingly well.
But there’s more to machine learning than just solving discriminative tasks. For example, given a large dataset, without any labels, we might want to learn a model that concisely captures the characteristics of this data. Given such a model, we could sample synthetic data points that resemble the distribution of the training data. For example, given a large corpus of photographs of faces, we might want to be able to generate a new photorealistic image that looks like it might plausibly have come from the same dataset. This kind of learning is called generative modeling.
Until recently, we had no method that could synthesize novel photorealistic images. But the success of deep neural networks for discriminative learning opened up new possibilities. One big trend over the last three years has been the application of discriminative deep nets to overcome challenges in problems that we don’t generally think of as supervised learning problems. The recurrent neural network language models are one example of using a discriminative network (trained to predict the next character) that once trained can act as a generative model.
In 2014, a breakthrough paper introduced Generative adversarial networks (GANs) [Goodfellow.Pouget-Abadie.Mirza.ea.2014], a clever new way to leverage the power of discriminative models to get good generative models. At their heart, GANs rely on the idea that a data generator is good if we cannot tell fake data apart from real data. In statistics, this is called a two-sample test - a test to answer the question whether datasets \(X=\{x_1,\ldots,x_n\}\) and \(X'=\{x'_1,\ldots,x'_n\}\) were drawn from the same distribution. The main difference between most statistics papers and GANs is that the latter use this idea in a constructive way. In other words, rather than just training a model to say “hey, these two datasets don’t look like they came from the same distribution”, they use the two-sample test to provide training signals to a generative model. This allows us to improve the data generator until it generates something that resembles the real data. At the very least, it needs to fool the classifier. Even if our classifier is a state of the art deep neural network.
The GAN architecture is illustrated in Fig. 15.1.1. As you can see, there are two pieces in GAN architecture - first off, we need a device (say, a deep network but it really could be anything, such as a game rendering engine) that might potentially be able to generate data that looks just like the real thing. If we are dealing with images, this needs to generate images. If we’re dealing with speech, it needs to generate audio sequences, and so on. We call this the generator network. The second component is the discriminator network. It attempts to distinguish fake and real data from each other. Both networks are in competition with each other. The generator network attempts to fool the discriminator network. At that point, the discriminator network adapts to the new fake data. This information, in turn is used to improve the generator network, and so on.
The discriminator is a binary classifier to distinguish if the input \(\mathbf x\) is real (from real data) or fake (from the generator). Typically, the discriminator outputs a scalar prediction \(o\in\mathbb R\) for input \(\mathbf x\), such as using a dense layer with hidden size 1, and then applies the sigmoid function to obtain the predicted probability \(D(\mathbf x) = 1/(1+e^{-o})\). Assume the label \(y\) for the true data is \(1\) and \(0\) for the fake data. We train the discriminator to minimize the cross entropy loss,
i.e.,

\[ \min_D \{ - y \log D(\mathbf x) - (1-y)\log(1-D(\mathbf x)) \}. \]
For the generator, it first draws some parameter \(\mathbf z\in\mathbb R^d\) from a source of randomness,
e.g., a normal distribution \(\mathbf z \sim \mathcal{N} (0,1)\). We often call \(\mathbf z\) the latent variable. It then applies a function to generate \(\mathbf x'=G(\mathbf z)\). The goal of the generator is to fool the discriminator into classifying \(\mathbf x'=G(\mathbf z)\) as true data, i.e., we want \(D( G(\mathbf z)) \approx 1\). In other words, for a given discriminator \(D\), we update the parameters of the generator \(G\) to maximize the cross entropy loss when \(y=0\), i.e.,

\[ \max_G \{ - (1-y) \log(1 - D(G(\mathbf z))) \} = \max_G \{ - \log(1 - D(G(\mathbf z))) \}. \]
If the generator does a perfect job, then \(D(\mathbf x')\approx 1\), so the above loss is near 0, which results in gradients that are too small to make good progress for the generator. So commonly we minimize the following loss:

\[ \min_G \{ - y \log(D(G(\mathbf z))) \} = \min_G \{ - \log(D(G(\mathbf z))) \}, \]
which is just feeding \(\mathbf x'=G(\mathbf z)\) into the discriminator but giving it the label \(y=1\).
To sum up, \(D\) and \(G\) are playing a "minimax" game with the comprehensive objective function:

\[ \min_D \max_G \{ -E_{x \sim \text{Data}} \log D(\mathbf x) - E_{z \sim \text{Noise}} \log(1 - D(G(\mathbf z))) \}. \]
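As an aside, the two starting losses can be spot-checked with a tiny plain-NumPy sketch (illustrative only, not the book's MXNet code): an untrained discriminator that outputs \(o=0\) assigns \(D(\mathbf x)=0.5\) to everything, so both cross entropy losses start at \(\log 2 \approx 0.693\), the value the toy training run later in this section settles at.

```python
import numpy as np

def sigmoid(o):
    # D(x) = 1 / (1 + e^{-o})
    return 1.0 / (1.0 + np.exp(-o))

def cross_entropy(prob, label):
    # -y*log(p) - (1-y)*log(1-p)
    return -label * np.log(prob) - (1 - label) * np.log(1 - prob)

# An untrained discriminator emitting o = 0 assigns D(x) = 0.5 to
# every input, so both losses start at log(2) ~ 0.693.
d = sigmoid(0.0)
loss_real = cross_entropy(d, 1.0)  # discriminator on real data, y = 1
loss_fake = cross_entropy(d, 0.0)  # discriminator on fake data, y = 0
print(loss_real, loss_fake)        # both ~0.6931
```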
Many of GANs' applications are in the context of images. For demonstration purposes, we're going to content ourselves with fitting a much simpler distribution first. We will illustrate what happens if we use GANs to build the world's most inefficient estimator of parameters for a Gaussian. Let's get started.
%matplotlib inline
import d2l
from mxnet import np, npx, gluon, autograd, init
from mxnet.gluon import nn
npx.set_np()
15.1.1. Generate some “real” data¶
Since this is going to be the world’s lamest example, we simply generate data drawn from a Gaussian.
X = np.random.normal(size=(1000, 2))
A = np.array([[1, 2], [-0.1, 0.5]])
b = np.array([1, 2])
data = X.dot(A) + b
Let’s see what we got. This should be a Gaussian shifted in some rather arbitrary way with mean \(b\) and covariance matrix \(A^TA\).
d2l.set_figsize((3.5, 2.5))
d2l.plt.scatter(data[:100, 0].asnumpy(), data[:100, 1].asnumpy());
print("The covariance matrix is\n%s" % np.dot(A.T, A))
The covariance matrix is
[[1.01 1.95]
 [1.95 4.25]]
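As a quick sanity check (in plain NumPy here rather than MXNet's NumPy interface), for \(x \sim \mathcal N(0, I)\) the covariance of \(xA + b\) is indeed \(A^TA\), and the empirical covariance of a large sample agrees with it:

```python
import numpy as np

# For x ~ N(0, I), the covariance of xA + b is A^T A.
A = np.array([[1.0, 2.0], [-0.1, 0.5]])
b = np.array([1.0, 2.0])
print(A.T @ A)  # matches the printed matrix above

rng = np.random.default_rng(0)
data = rng.standard_normal((200000, 2)) @ A + b
emp_cov = np.cov(data, rowvar=False)
print(np.round(emp_cov, 2))  # close to A^T A for a large sample
```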
batch_size = 8
data_iter = d2l.load_array((data,), batch_size)
15.1.2. Generator¶
Our generator network will be the simplest network possible - a single layer linear model. This is because we'll be driving that linear network with a Gaussian data generator. Hence, it literally only needs to learn the parameters to fake things perfectly.
net_G = nn.Sequential()
net_G.add(nn.Dense(2))
15.1.3. Discriminator¶
For the discriminator we will be a bit more discriminating: we will use an MLP with 3 layers to make things a bit more interesting.
net_D = nn.Sequential()
net_D.add(nn.Dense(5, activation='tanh'),
          nn.Dense(3, activation='tanh'),
          nn.Dense(1))
15.1.4. Training¶
First we define a function to update the discriminator.
# Save to the d2l package.
def update_D(X, Z, net_D, net_G, loss, trainer_D):
    """Update discriminator."""
    batch_size = X.shape[0]
    ones = np.ones((batch_size,), ctx=X.context)
    zeros = np.zeros((batch_size,), ctx=X.context)
    with autograd.record():
        real_Y = net_D(X)
        fake_X = net_G(Z)
        # Don't need to compute gradient for net_G, detach it from
        # computing gradients.
        fake_Y = net_D(fake_X.detach())
        loss_D = (loss(real_Y, ones) + loss(fake_Y, zeros)) / 2
    loss_D.backward()
    trainer_D.step(batch_size)
    return float(loss_D.sum())
The generator is updated similarly. Here we reuse the cross entropy loss but change the label of the fake data from \(0\) to \(1\).
# Save to the d2l package.
def update_G(Z, net_D, net_G, loss, trainer_G):
    """Update generator."""
    batch_size = Z.shape[0]
    ones = np.ones((batch_size,), ctx=Z.context)
    with autograd.record():
        # We could reuse fake_X from update_D to save computation.
        fake_X = net_G(Z)
        # Recomputing fake_Y is needed since net_D is changed.
        fake_Y = net_D(fake_X)
        loss_G = loss(fake_Y, ones)
    loss_G.backward()
    trainer_G.step(batch_size)
    return float(loss_G.sum())
Both the discriminator and the generator perform binary logistic regression with the cross entropy loss. We use Adam to smooth the training process. In each iteration, we first update the discriminator and then the generator. We visualize both losses and generated examples.
def train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G, latent_dim, data):
    loss = gluon.loss.SigmoidBCELoss()
    net_D.initialize(init=init.Normal(0.02), force_reinit=True)
    net_G.initialize(init=init.Normal(0.02), force_reinit=True)
    trainer_D = gluon.Trainer(net_D.collect_params(), 'adam',
                              {'learning_rate': lr_D})
    trainer_G = gluon.Trainer(net_G.collect_params(), 'adam',
                              {'learning_rate': lr_G})
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['generator', 'discriminator'])
    animator.fig.subplots_adjust(hspace=0.3)
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer = d2l.Timer()
        metric = d2l.Accumulator(3)  # loss_D, loss_G, num_examples
        for X in data_iter:
            batch_size = X.shape[0]
            Z = np.random.normal(0, 1, size=(batch_size, latent_dim))
            metric.add(update_D(X, Z, net_D, net_G, loss, trainer_D),
                       update_G(Z, net_D, net_G, loss, trainer_G),
                       batch_size)
        # Visualize generated examples
        Z = np.random.normal(0, 1, size=(100, latent_dim))
        fake_X = net_G(Z).asnumpy()
        animator.axes[1].cla()
        animator.axes[1].scatter(data[:, 0], data[:, 1])
        animator.axes[1].scatter(fake_X[:, 0], fake_X[:, 1])
        animator.axes[1].legend(['real', 'generated'])
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print('loss_D %.3f, loss_G %.3f, %d examples/sec' % (
        loss_D, loss_G, metric[2] / timer.stop()))
Now we specify the hyper-parameters to fit the Gaussian distribution.
lr_D, lr_G, latent_dim, num_epochs = 0.05, 0.005, 2, 20
train(net_D, net_G, data_iter, num_epochs, lr_D, lr_G, latent_dim,
      data[:100].asnumpy())
loss_D 0.693, loss_G 0.693, 640 examples/sec
15.1.5. Summary¶ Generative adversarial networks (GANs) are composed of two deep networks, the generator and the discriminator. The generator generates images as close to the true images as possible to fool the discriminator, by maximizing the cross entropy loss, i.e., \(\max \log(D(\mathbf{x'}))\). The discriminator tries to distinguish the generated images from the true images, by minimizing the cross entropy loss, i.e., \(\min - y \log D(\mathbf{x}) - (1-y)\log(1-D(\mathbf{x}))\). 15.1.6. Exercises¶ Does an equilibrium exist where the generator wins, i.e. the discriminator ends up unable to distinguish the two distributions on finite samples? |
Good evening everyone,
I'd like to discuss with you the following exercise :
$\sum\limits_{n=1}^{\infty} (-1)^{n} \frac{n^{2} +3n - \sin(n)}{n^{4}-\arctan(n^{2})}$
I can prove that $\lim\limits_{n \to \infty} a_{n} = 0$ , where $a_{n} = \frac{n^{2} +3n - \sin(n)}{n^{4}-\arctan(n^{2})}$
But I still can't prove its convergence; I'd have used the Leibniz alternating series test (due to $(-1)^{n}$), but I was unable to show $a_{n+1} \leq a_{n}$.
Maybe I could study absolute convergence and then by the comparison test find that it converges?
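A quick numeric check (not part of the original post) supports the comparison idea: $n^2 a_n \to 1$, so $|a_n|$ is eventually below, say, $2/n^2$, and absolute convergence follows by comparison with the convergent series $\sum 1/n^2$:

```python
import math

# Numeric spot check: n^2 * a_n -> 1, suggesting comparison of |a_n|
# with C/n^2 for the absolute convergence argument.
def a(n):
    return (n**2 + 3*n - math.sin(n)) / (n**4 - math.atan(n**2))

for n in [10, 100, 1000, 10000]:
    print(n, n**2 * a(n))  # tends to 1
```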
Any help would be appreciated,
Thanks anyway. |
In this post, a
ring is understood to be what one usually calls a ring, not assuming that it has a unit. Some people call such objects rng.
Question: Let $R$ be a finitely generated (non-unital and associative) ring, such that $R=R^2$, i.e. the multiplication map $R \otimes R \to R$ is surjective (every element is a sum of products of other elements). Is it possible that every element of $R$ is contained in a proper two-sided ideal of $R$? Or, must it be the case that $R$ is singly generated as a two-sided ideal in itself?
Note, if $Z \subset R$, then the ideal generated by $Z$ is the span of $Z \cup RZ \cup ZR \cup RZR$, which in the case of idempotent rings is equal to the span of $RZR$.
More generally, one can ask:
Question: For a fixed natural number $k$, can it happen that every set of $k$ elements of $R$ generates a proper ideal of $R$?
So far, I do not know of any example where the ring $R$ is not generated by a single element as a two-sided ideal in itself. I first thought that it must be easy to find counterexamples, but I learned from Narutaka Ozawa that the free non-unital ring on a finite number of idempotents is singly generated as a two-sided ideal in itself. He also showed that no finite ring can give an interesting example. The commutative case is also well-known; Kaplansky showed that every finitely generated commutative idempotent ring must have a unit.
Update: Some partial results about this question and a relation to the Wiegold problem in group theory can be found in http://arxiv.org/abs/1112.1802 |
Level-set Based Segmentation in 2D
Before introducing the actual model used for segmenting 2D images, a few things related to explicit and implicit curve representations are discussed. Before going any further, I would like to point out that implicit curve representation techniques are not confined to the 2D case, as one might deduce from the header. In fact, level-sets are indeed used in medical imaging to segment 3D data, such as MRI.
Two Segments
Explicit curve representation
The aim is to segment an input image into two different segments. The area bounded by the segment is 'similar' (homogeneous) as per the used similarity metric (e.g. similar color in the case of RGB/HSV images), while the boundary separating the segments is called the interface. Therefore, using explicit curve representation, the segmentation model can be described as an energy minimization model as follows:
\[ E\Big( \Gamma (s),\, \alpha_1,\, \alpha_2 \Big) = \nu \underbrace{\int_{\Gamma} 1 \, ds}_{\text{boundary length}} - \int_{\Omega_1} log \, p(I|\alpha_1) \, d\vec{x} - \int_{\Omega_2} log \, p(I|\alpha_2) \, d\vec{x} \]
where the curve is parameterized by \( s \) as \( \Gamma (s) = \Big( x(s), y(s) \Big) \). The first term is the length of the boundary separating the segments. The second and the third terms are the 'likelihoods' indicating that a particular image pixel \( I(x,y) \) belongs to a corresponding segment, while the parameters \( \alpha_1 \) and \( \alpha_2 \) are the likelihood functions' parameters. Likelihood is defined as follows:
\[ p \Big ( I | \alpha_{ i } \Big) = p \Bigg ( \{ I(x,y) : (x,y) \in \Omega_i \} | \alpha_i \Bigg ) \]
where \( \alpha_i \) is a parameter vector describing the likelihood function. E.g. in the case of a Gaussian distribution \( \alpha_i = ( \mu_i, \, \sigma_i^2 ) \) (mean and variance).
Implicit curve representation
In implicit curve representation the isocontour defining the interface is one dimension lower than the dimensionality of the actual level-set function. Therefore, in \( \mathbb{R} ^n \) the isocontour has dimension \( n-1 \). Thus, it is natural to ask what are the benefits of such implicit curve representation?
Topological changes. Topological changes are handled 'implicitly' in the implicit curve representation. In explicit representation topological changes (e.g. a curve breaking into two) can cause problems, especially in higher dimensions.
Discretization. Grid size in implicit representation stays the same (Eulerian formulation), whereas in the explicit curve representation case 'regridding' might be needed.
Inside/outside regions. In implicit representation it is extremely easy to see whether a point belongs to the outside or inside region (as is shown below).
Using an implicit curve representation (e.g. a level-set function), the boundary and the segments can be defined as follows:
\[\begin{align} \Gamma &:= \Big\{ (x,y),\, \Phi(x,y) = 0 \Big\} \\ inside(\Gamma) &:= \Omega_1 = \Big\{ (x,y),\, \Phi(x,y) \ge 0 \Big\} \\ outside(\Gamma) &:= \Omega_2 = \Big\{ (x,y),\, \Phi(x,y) < 0 \Big\} \end{align}\]
where \( \Gamma \) is the interface, \( \Omega_1 \) is the first segment and \( \Omega_2 \) is the second segment. In other words, the boundary is defined by the zero isocurve (in the image the intersection of the surface and the plane), while those positions (x,y) where the level-set function has a zero or positive value belong to the first segment (the part of the surface above the plane), and those positions (x,y) where the level-set function has a negative value belong to the second segment (the part of the surface below the plane). Using the implicit curve representation, the energy in equation (1) is defined as:
\[ E( \Phi,\, \alpha_1,\, \alpha_2 ) = \int_{\Omega} \Big( \nu | \nabla H( \Phi ) | - H( \Phi ) log \, p (I|\alpha_1) - \big( 1 - H( \Phi ) \big) log \, p(I|\alpha_2) \Big) d\vec{x} \]
where \( \nabla \) is the gradient operator \( \Big[ \dfrac{\partial}{ \partial x} \, \dfrac{\partial}{\partial y} \Big] \), \( H(\Phi) \) is the Heaviside function and \( \delta (\Phi) \) is the one dimensional Dirac measure as follows:
\[ H(\Phi) = \left\{ \begin{align} 1, \,&\text{if } \Phi \ge 0 \\ 0, \, &\text{if } \Phi < 0 \end{align} \right. \]
\[ H{\prime} (\Phi) := \delta (\Phi) = \dfrac{d}{d \Phi} H(\Phi) \]
Keeping \( \alpha_1 \) and \( \alpha_2 \) fixed the corresponding Euler-Lagrange equation can be obtained and therefore the energy can be minimized by gradient descent as follows:
\[ \dfrac{\partial \Phi}{\partial t} = H{\prime}(\Phi) \Bigg( \nu \, DIV \left( \dfrac{\nabla \Phi}{|\nabla \Phi|} \right) + log\, p(I|\alpha_1) - log\, p (I|\alpha_2) \Bigg) \]
where DIV is the divergence operator. The first term minimizes the local curvature (e.g. boundary length) while the second and the third terms are the 'likelihoods' of pixels belonging to segments 1 and 2. Those pixels where \( \Phi(x,y) \geq 0 \) belong to segment 1 as indicated by the Heaviside function. This leads to a two-stage algorithm:
Stage 1: approximate/resolve likelihood functions
Stage 2: solve the level-set function
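As a rough illustration of this two-stage loop, here is a toy 1-D NumPy sketch. Everything in it (the signal, the step size, the Gaussian-mean likelihood model) is invented for illustration, and the curvature/length term of the model is omitted:

```python
import numpy as np

# Toy two-stage loop on a 1-D "image" with two intensity levels.
# The length/curvature term is omitted; likelihoods are Gaussians
# with fixed variance, so only the means are fitted in stage 1.
rng = np.random.default_rng(1)
I = np.where(np.arange(100) < 40, 0.2, 0.8) + 0.01 * rng.standard_normal(100)
phi = np.linspace(1.0, -1.0, 100)   # initial level-set: left half inside

for _ in range(5):
    inside = phi >= 0                # H(phi) picks segment 1
    # Stage 1: approximate the likelihood parameters (means here)
    mu1, mu2 = I[inside].mean(), I[~inside].mean()
    # Stage 2: gradient step on phi from the log-likelihood ratio;
    # for fixed-variance Gaussians, log p(I|a1) - log p(I|a2) is
    # proportional to (I - mu2)^2 - (I - mu1)^2
    phi = phi + 0.5 * ((I - mu2) ** 2 - (I - mu1) ** 2)

print(np.count_nonzero(phi >= 0))    # segment 1 ~ the first 40 pixels
```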
Several Segments
So far we have seen how to segment an input image into two segments. There are, at least, two different possibilities for segmenting an input image into several meaningful segments: (1) successively keep on segmenting the formed segments until the energy of the boundary outweighs the gain from splitting a segment into two, or (2) directly search for meaningful segments until the whole image has been segmented. The latter approach is used in my article Hypothesis-Forming-Validation-Loops.
Example(s)
In the following there is an example of segmentation based on the disparity map, using the algorithm explained in the paper Hypothesis-Forming-Validation-Loops. Test images (i.e left- and right stereo images) have been provided by prof. Mårten Björkman from KTH. |
Ok, so I'm looking at Ballentine's
Quantum Mechanics right now, 7th reprint (2010).
On page 363, he starts with
12.7 Adiabatic Approximation and quickly moves on to explain Berry's phase on page 365.
In equation $(12.90)$, he gives a formula for the time evolution of a certain, up to now seemingly "unimportant", phase, namely \begin{equation}\tag{1} \dot{\gamma}_n(t)=\iota\langle n(R(t))|\dot{n}(R(t))\rangle, \end{equation} where $|n(R(t))\rangle$ is the $n$-th Eigenstate of the time-dependent Hamiltonian $\hat{H}(R(t))$ for some curve $R(t)$ in the parameter space.
Next, he states that we may rewrite this equation as \begin{equation}\tag{2} \dot{\gamma}_n(t)=\iota\langle n|\nabla_R\dot{n}\rangle \cdot \dot{R}(t). \end{equation} Comparing this to Berry's original equation $(4)$, which is \begin{equation}\tag{3} \dot{\gamma}_n(t)=\iota\langle n|\nabla_Rn\rangle \cdot \dot{R}(t), \end{equation} you might already see where my problem arises: The dot over the $n$ or lack thereof. My intuition tells me that Berry is right and that it's just an error in Ballentine's book. Which would kind of make sense, since Berry's paper is peer-reviewed and I would interpret $\nabla_R{n}$ as $$|\dot{n}(R(t))\rangle=|\frac{\partial n}{\partial R^i}\frac{\partial R^i}{\partial t}\rangle=|\frac{\partial n}{\partial R^i}\rangle\dot{R^i}\equiv|\nabla_Rn\rangle\cdot\dot{R}.$$ But Ballentine's error is consistent: On the next page, we can see it 3 more times, and it is quite hard to believe that such an error occurs this impertinently.
Could you please tell me whose side the error is on and whether my interpretation of $\nabla$ is right? |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
Published March 2003, February 2011.
The Arclets problem set in September 2002 produced some very interesting and inspiring work from Madras College. This short article gives a flavour of the way that Sheila, Shona, Alison Colvin, Sarah, Kathryn and Gordan tackled the problem.
Each of the following shapes is made from arcs of a circle of radius r.
What is the perimeter of a shape with $3$, $4$, $5$ and $n$ "nodes"?
What happens when $n$ is very large?
Explain the relationship between your answers and the perimeter of the original circle
Here are arclets with $3$, $4$ and $5$ nodes:
The angles at the centre of the inner circle are $60^{\circ}$
So the angles at the centre of the outer circles are $120^{\circ}$ and $240^{\circ}$ ($120^{\circ}+240^{\circ}= 360^{\circ}$).
We can therefore divide each circle into a $1/3$ ($120^{\circ}$ out of $360^{\circ}$) part and a $2/3$ ($240^{\circ}$ out of $360^{\circ}$) part.
The perimeter of the arclet is made up of 3 "inward" arcs of $1/3$ of the circumference and 3 "outward" arcs of $2/3$ of the circumference.
The inward arcs are $1/4$ of the circumference ($90^{\circ}$); there are 4 inward arcs.
The outward arcs are $1/2$ of the circumference (4 lots of $45^{\circ}$); there are 4 outward arcs.
Because the inner circle is surrounded by five outer circles there are 5 angles - all of $72^{\circ}$ at the centre.
Using the properties of isosceles triangles the outward arcs are $2/5$ of the circumference and the inward arcs are $1/5$ of the circumference.
An image of part of the work Sarah, Kathryn and Gordon did to find the perimeter of the 5-node arclet is shown below
Scanned diagrams showing the work of Sheila, Shona and Alison to find the perimeter of a 6-node arclet:
We know that these are the equations for the perimeter of 3, 4, 5 and 6 node arclets:
Number of nodes: 3, 4, 5, 6.
Perimeter (in every case): $ \quad 3\times 2 \pi r = 6 \pi r \quad $
If we substitute $N$ for the node number we get (in every case): $N \times \frac{1}{N} \times 2 \pi r + N \times \frac{2}{N} \times 2 \pi r = 6 \pi r$
This is based on the fact that the angles at the centres of the circles will be $1/N$ of a full turn.
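The "in every case" claim can be spot-checked numerically, using the pattern observed above that an $N$-node arclet has $N$ inward arcs of $1/N$ of a circumference and $N$ outward arcs of $2/N$:

```python
from math import pi

# Spot check: with r = 1, the perimeter of an N-node arclet is always
# 3 circumferences, i.e. 6*pi, for every N.
def arclet_perimeter(n, r=1.0):
    circumference = 2 * pi * r
    return n * (1 / n) * circumference + n * (2 / n) * circumference

for n in [3, 4, 5, 6, 1000]:
    print(n, arclet_perimeter(n) / (2 * pi))  # always 3.0
```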
If $N$ is very large the node shape begins to look like a circle:
In other words, no matter how many nodes, the perimeter will always be 3 circumferences. |
The language $Even = \{w\in\Sigma^{\ast}\mid \text{length of }w\text{ is even}\}$
is regular. user5507's answer demonstrates this with an NFA, and it's a basic exercise in most texts.
Then given that $L$ is regular, if we know that regular languages are closed under intersection for the purposes of the question, then the language $L'=L\cap Even$ is also regular.
If we're not allowed to use these closure properties, we can recapitulate the construction that gives the closure property (I'll just sketch it). Given a DFA $M_{L}$ for $L$ and a DFA $M_{Even}$ for $Even$, we can construct a DFA for $L' = L\cap Even$ whose state space is the product of the state spaces of $M_{L}$ and $M_{Even}$, with the following transition rule: if $\delta_{L}(\sigma, q_{i}) = q_{j}$ and $\delta_{Even}(\sigma, p_{m})=p_{n}$ where $\sigma \in \Sigma$ and the $p$s and $q$s are states of the appropriate machines, then the product machine has a transition $\delta_{prod}(\sigma,(q_{i}, p_{m})) = (q_{j}, p_{n})$. Our accepting states are those states $(q_{i},p_{j})$ where $q_{i}$ and $p_{j}$ are accepting states in their respective machines.
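To make the product construction concrete, here is a small sketch (the toy machines and all names are mine, not from the original answer): $M_{Even}$ over $\{a,b\}$, and as an example $M_L$ a DFA accepting strings that end in $a$.

```python
# Product construction sketch. DFAs are given by transition dicts
# mapping (symbol, state) -> state; states are small integers.
def make_product(delta_L, delta_E, start, accept_L, accept_E):
    def delta(sym, state):
        q, p = state
        return (delta_L[(sym, q)], delta_E[(sym, p)])
    accept = {(q, p) for q in accept_L for p in accept_E}
    return delta, start, accept

# M_L: accepts strings ending in 'a' (state 1 = last symbol was 'a')
delta_L = {('a', 0): 1, ('b', 0): 0, ('a', 1): 1, ('b', 1): 0}
# M_Even: accepts even-length strings (state = length mod 2)
delta_E = {('a', 0): 1, ('b', 0): 1, ('a', 1): 0, ('b', 1): 0}
delta, start, accept = make_product(delta_L, delta_E, (0, 0), {1}, {0})

def accepts(word):
    state = start
    for sym in word:
        state = delta(sym, state)
    return state in accept

print(accepts('ba'))  # True: ends in 'a' and has even length
print(accepts('a'))   # False: odd length
print(accepts('ab'))  # False: does not end in 'a'
```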
That's the long and fiddly way around.
The second question depends on $L$: if $L$ is not regular, then the intersection of $L$ and $Even$ is not necessarily regular - the $w\in L$ in the definition is key.
EDIT: actually, reading vonbrand's answer, I misunderstood the second part. He is quite correct - the second language is the intersection of $L$ and $X=\{w\in\Sigma^{\ast}\mid \exists i \in \mathbb{N} \text{ such that } |w| = 2^{i}\}$ - not $Even$. So while what I said about $L \cap Even$ with $L$ not regular still holds, here $X$ isn't regular to begin with, so we get the same situation with the roles swapped: $L$ is regular and $X$ isn't.
I need to numerically evaluate 2-D integrals of the form: $$ \mathcal{I}(\theta) = \int_{0}^{1} \int_0^1 \varphi_\theta(x,y) dx dy $$
where $\varphi_\theta$ is a family of smooth functions indexed by $\theta$; and I need to evaluate $\mathcal{I}(\theta)$ for a large number ($> 10^6$) of $\theta_i \in \Theta$, which are given.
Because of how the problem is set up, I can evaluate $\varphi_\theta$ on a fine, equispaced 2-D grid (each evaluation involves a few multiplications), but I cannot evaluate $\varphi_\theta$ at other points.
Since the grid is fine, I could simply compute $\mathcal{I}(\theta_i)$ via the trapezoidal or Simpson's rule. However, this is extremely computationally expensive and largely unnecessary: for many (but not all) values of $\theta$, $\varphi_\theta(x,y) \approx 0$ over a large part of the integration domain, and/or slowly varying.
The obvious thing here is to use some sort of adaptive integration method compatible with a fixed equispaced grid. The basic idea I have is to start with a very coarse sub-grid, evaluate the integral and an estimate of the error in each sub-cell, and based on that decide which cells to "zoom in" and recompute with a refined grid. Iterate this a couple of times. There are lots of ways to do this naively, but I am interested in state-of-the-art solutions.
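To make the idea concrete, here is a minimal sketch in Python/NumPy of the coarse-to-fine refinement loop (all function names and the tolerance strategy are my own illustrative choices — the naive version, not a state-of-the-art method; it assumes the integrand has already been tabulated on the full fine grid):

```python
import numpy as np

def cell_trapz(F, h, i0, i1, j0, j1, step):
    """Trapezoidal estimate over the cell [i0:i1] x [j0:j1] of the
    fine grid F (spacing h), sampling every `step`-th point."""
    sub = F[i0:i1 + 1:step, j0:j1 + 1:step]
    wx = np.ones(sub.shape[0]); wx[0] = wx[-1] = 0.5
    wy = np.ones(sub.shape[1]); wy[0] = wy[-1] = 0.5
    d = h * step
    return d * d * float((wx[:, None] * wy[None, :] * sub).sum())

def adaptive_integral(F, h, tol=1e-8, min_step=1):
    """Adaptively integrate grid values F by refining cells whose
    coarse and half-step trapezoidal estimates disagree."""
    n = F.shape[0] - 1
    stack = [(0, n, 0, n, n // 4 or 1)]   # start on a coarse sub-grid
    total = 0.0
    while stack:
        i0, i1, j0, j1, step = stack.pop()
        coarse = cell_trapz(F, h, i0, i1, j0, j1, step)
        if step <= min_step:
            total += coarse
            continue
        fine = cell_trapz(F, h, i0, i1, j0, j1, step // 2)
        if abs(fine - coarse) < tol * max(1.0, abs(fine)):
            total += fine                 # converged: accept refined value
        else:                             # zoom in: split into 4 sub-cells
            im, jm = (i0 + i1) // 2, (j0 + j1) // 2
            for cell in ((i0, im, j0, jm), (im, i1, j0, jm),
                         (i0, im, jm, j1), (im, i1, jm, j1)):
                stack.append(cell + (step // 2,))
    return total

# Example: integrate exp(-50*((x-.5)^2+(y-.5)^2)) over [0,1]^2.
n = 256
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
F = np.exp(-50 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
print(adaptive_integral(F, 1.0 / n))   # close to pi/50 ≈ 0.0628
```

Cells where the integrand is near zero or slowly varying are accepted at the coarse level, so only a fraction of the fine grid is ever touched.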
My questions are:

- What would be the best (fastest and most precise) quadrature approach to compute the (sub)integrals?
- What would be the best method to pick the cells to refine?
- If I use a measure of relative or absolute error, what's the best way to compute it?
- Any alternative ideas?
For the record, I am working in MATLAB, but I plan to code this part in C via MEX files, since I doubt that the adaptive bit can be efficiently vectorized and I want it to be as fast as possible.
On the nLab page for power objects, the object $\in_c$ is defined as the domain of a monomorphism $\in_c\hookrightarrow c\times\Omega^c$, and it is mentioned at the end of the article that in any topos we have that the power object of an arbitrary object is (isomorphic to) its exponential with the subobject classifier.
It is also mentioned that in the case $c\cong{\bf 1}$ the power object becomes a subobject classifier. This is easy to see considering the diagram
in the case $c\cong{\bf 1}$, since ${\bf1}\times\Omega^{\bf1}\cong\Omega$ and ${\bf 1}\times d\cong d$, which means that $\in_{\bf 1}\cong{\bf 1}$ with the mono in question being 'true' $\top:{\bf 1}\to \Omega$, $\chi_m:d\to\Omega$ the characteristic function of $m$, and the top of the square being $!:r\to{\bf 1}$.
I'm trying to understand the object $\in_c$ when $c\ncong{\bf 1}$, both in ${\bf Sets}$ and more generally in any topos. I've tried 'fusing' the universal properties of an exponential and a subobject classifier to produce $\in_c$, but it is unclear how the domain of $\top$ changes to yield something besides ${\bf 1}$ -- it seems like the monomorphism $\in_c\hookrightarrow c\times\Omega^c$ is perhaps a currying of some sort?
For a more precise version of the question:
Let $\mathcal{C}$ be a closed category with finite limits and a subobject classifier. How can we define $\in_c$ for an arbitrary $c\in{\bf Ob}_\mathcal{C}$ using just the above structure, and what set is $\in_c$ in the case $\mathcal{C}={\bf Sets}$? |
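For what it's worth, here is the standard construction sketched in the notation of the question (the pullback presentation is standard; the diagram layout is mine): $\in_c$ is the subobject of $c\times\Omega^c$ classified by the evaluation map of the exponential.

```latex
% \in_c is obtained by pulling back true along the evaluation map:
\[
\begin{array}{ccc}
\in_c & \longrightarrow & {\bf 1} \\
\downarrow & & \downarrow \top \\
c \times \Omega^c & \xrightarrow{\;\mathrm{ev}\;} & \Omega
\end{array}
\]
% In Sets this yields exactly the membership relation
\[
\in_c \;=\; \{\, (x, S) \in c \times \mathcal{P}(c) \;:\; x \in S \,\},
\]
% with the mono the inclusion. For c \cong {\bf 1} the square reduces to
% the defining pullback of the subobject classifier, recovering the case
% discussed above.
```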
In other words: which physics experiment requires to know Pi with the highest precision?
closed as primarily opinion-based by ACuriousMind♦, Kyle Kanos, Chris Mueller, John Rennie, JamalS Apr 17 '15 at 9:00
Pi is very far from being the only number we need in physics. Typical theoretical predictions depend on many other measured and calculated (or both) numbers besides pi.
Nevertheless, it is true that one needs to substitute the right value of pi to get the right predictions. Therefore, the right answer to your question is the most accurately experimentally verified theoretical prediction we have in physics as of today, namely the anomalous magnetic dipole moment of the electron.
In some natural units, the magnetic moment of the electron is expressed as a g-factor which is somewhat higher than two. Experimentally, $$\frac g2 = 1.00115965218111 \pm 0.00000000000074$$ Theoretically, $g/2$ may be written as $$\frac g2 = 1+\frac{\alpha}{2\pi} + \dots$$ where the $\alpha/2\pi$ first subleading term was obtained by Schwinger in 1948 and many other, smaller terms are known today. The theoretical prediction agrees with the experimental measurement within the tiny error margin; the theoretical uncertainty contains the effect of new species of virtual particles with the masses and couplings that have not yet been ruled out. This requires, among many and many other things, to substitute the correct value of $\pi$ in Schwinger's leading correction $\alpha/2\pi$. You need to know 9-10 decimal points of $\pi$ to make this correction right within the experimental error.
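As a quick numerical check (using the approximate value $\alpha \approx 1/137.035999$; the higher-order QED terms are omitted, so this only reproduces the first few digits):

```python
from math import pi

alpha = 1 / 137.035999      # fine-structure constant (approximate)
g_half = 1 + alpha / (2 * pi)   # Schwinger's leading correction only

print(g_half)   # ≈ 1.0011614, vs. measured 1.00115965218111
# The leading term alone already agrees to about 2 parts in 10^6;
# the remaining difference comes from the higher-order terms.
```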
So in practice, $\pi\approx 3.141592654$ would be OK everywhere in the part of physics that is testable. However, theoretical physicists of course often need to make calculations more accurately if not analytically, to figure out what's really happening with the formulae.
39 digits of pi is enough to calculate the circumference of the visible universe to a margin of error equal to the width of a proton (unverified computation, according to http://www.guardian.co.uk/science/blog/2011/mar/14/pi-day - CordwainerBird).
16 digits, for converting frequencies from Hz to angular frequency. Frequencies can now be measured with a precision approaching 1 part in 10^16, so dealing with those numbers would require knowing Pi to 16 digits or so.
Is this really "which physics experiment REQUIRES knowing Pi with the highest precision"? A purist (or theorist) might argue that this example doesn't count, and I'd agree that there's nothing fundamental here - just conversion from one convention to another. But from a practical point of view, it's something people do in physics, and it requires knowing Pi to 16 digits.
In principle you will never reach the accuracy of the numerical value of $\pi$ in an experiment; it is much more important as an analytical tool. The part of the question concerning the most accurate experiment was answered by Luboš.
So, just to give a little thing about the importance of $\pi$ as the area of a unit circle and also the ratio of a circle's circumference to diameter,
(from the corresponding wiki-article)
One very famous examples are Fourier transforms where you ask a periodic function to be represented by $\sin$ and $\cos$-terms, $$\mathcal{F}(f)(\xi)=\int_{-\infty}^{\infty}f(x)e^{2\pi\mathrm{i} \xi x}dx\ \mathrm{with}\ e^{\mathrm{i}\varphi}=\cos(\varphi) + \mathrm{i}\sin(\varphi)\ .$$ This is linked to $\pi$ because of the periodicity of the circle, $$r_1(\varphi) = r_2(\varphi + 2\pi)\ \mathrm{with}\ r_i\in \mathrm{circle}\ .$$
There is much more to add, as the underlying group of rotations is $U(1)$, which is in turn the gauge group of (quantum) electrodynamics - explaining why you will almost certainly find the most prominent examples of results linked to the value of $\pi$ in this field.
Revista Matemática Iberoamericana Rev. Mat. Iberoamericana Volume 21, Number 2 (2005), 557-576. Extreme cases of weak type interpolation Abstract
We consider quasilinear operators $T$ of {\it joint weak type} $(a,b;p,q)$ (in the sense of [Bennett, Sharpley: Interpolation of operators, Academic Press, 1988]) and study their properties on spaces $L_{\varphi,E}$ with the norm $\|\varphi(t)f^*(t)\|_{\tilde E}$, where $\tilde E$ is an arbitrary rearrangement-invariant space with respect to the measure $dt/t$. A space $L_{\varphi,E}$ is said to be "close" to one of the endpoints of interpolation if the corresponding Boyd index of this space is equal to $1/a$ or to $1/p$. For all possible kinds of such "closeness", we give sharp estimates for the function $\psi(t)$ so as to obtain that every such $T$ maps $L_{\varphi,E}$ boundedly into $L_{\psi,E}$.
Article information Source Rev. Mat. Iberoamericana, Volume 21, Number 2 (2005), 557-576. Dates First available in Project Euclid: 11 August 2005 Permanent link to this document https://projecteuclid.org/euclid.rmi/1123766806 Mathematical Reviews number (MathSciNet) MR2174916 Zentralblatt MATH identifier 1092.46016 Subjects Primary: 46B70: Interpolation between normed linear spaces [See also 46M35] 46E30: Spaces of measurable functions (Lp-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.) Citation
Pustylnik, Evgeniy. Extreme cases of weak type interpolation. Rev. Mat. Iberoamericana 21 (2005), no. 2, 557--576. https://projecteuclid.org/euclid.rmi/1123766806 |
berylium? really? okay then...toroidalet wrote:I Undertale hate it when people Emoji movie insert keywords so people will see their berylium page.
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
When xq is in the middle of a different object's apgcode. "That's no ship!"
Airy Clave White It Nay
When you post something and someone else posts something unrelated and it goes to the next page.
Also when people say that things that haven't happened to them trigger them.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
drc wrote: "The speed is actually" posts

Huh. I've never seen a c/posts spaceship before.
Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace!
drc wrote: "The speed is actually" posts
Gamedziner wrote: What's wrong with them?

It could be solved with a simple PM rather than an entire post.
An exception is if it's contained within a significantly large post.
I hate it when people post rule tables for non-totalistic rules. (Yes, I know some people are on mobile, but they can just generate them themselves. [citation needed])
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
OK this is a very niche one that I hadn't remembered until a few hours ago.
You know in some arcades they give you this string of cardboard tickets you can redeem for stuff, usually meant for kids.
The tickets fold beautifully perfectly packed if you order them one right, one left - zigzagging.
When people fold them randomly in any direction giving a clearly low density packing with loads of strain, I just think
omg why on Earth would you do that?! Surely they'd have realised by now? It's not that crazy a thing to realise. Surely there is a clear preference for having them well packed; nobody would prefer an unwieldy mess?!
Also when I'm typing anything and I finish writing it and it just goes to the next line or just goes to the next page. Especially when the punctuation mark at the end brings the last word down one line. This also applies to writing in a notebook: I finish writing something but the very last thing goes to a new page.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature.

... you were referencing me before i changed it, weren't you? because I had fit both of those.
ON A DIFFERENT NOTE.
When i want to rotate a hexagonal file but golly refuses because for some reason it calculates hexagonal patterns on a square grid and that really bugs me because if you want to show that something has six sides you don't show it with four and it makes more sense to have the grid be changed to hexagonal but I understand Von Neumann because no shape exists (that I know of) that has 4 corners and no edges but COME ON WHY?! WHY DO YOU REPRESENT HEXAGONS WITH SQUARES?!
In all seriousness this bothers me and must be fixed or I will SINGLEHANDEDLY eat a universe.
EDIT: possibly this one.
EDIT 2:
IT HAS BEGUN.
HAS
BEGUN.
Last edited by 83bismuth38 on September 19th, 2017, 8:25 pm, edited 1 time in total.
83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those.

Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature.
83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those.
A for awesome wrote: Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal.

oh okay yeah of course sure
but really though, i wouldn't have cared.
When someone gives a presentation to a bunch of people and you know that they're getting the facts wrong. Especially if this is during the Q&A section.
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
When you watch a boring video in class but you understand it perfectly, and then at the end your classmates don't get it so the teacher plays the boring video again.
Airy Clave White It Nay
when scientists decide to send a random guy into a black hole hovering directly above Earth for no reason at all.
hit; that random guy was me.
When I see a "one-step" organic reaction that occurs in an exercise book for senior high school and simply takes place under "certain circumstances" like the one marked "?" here, but fail to figure out how it works even though I have prepared for our provincial chemistry olympiad.

EDIT: In fact it's not that hard. Just do a Darzens reaction, then hydrolysis, and decarboxylate.
Current status: outside the continent of cellular automata. Specifically, not on the plain of life.
An awesome gun firing cool spaceships:
Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry2bo$2o$o$obo$b2o!
When there's a rule with a decently common puffer but it can't interact with itself
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett
When that oscillator is just... not sparky enough.

When you're sooooooo close to a thing you consider amazing but miss...
Airy Clave White It Nay
People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despite repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.).
People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive").
Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.
Posts where the quoted text is substantially longer than added text. Especially "me too" posts.
People whose signatures are longer than the actual text of their posts.
People whose signatures include graphics or pattern files, especially ones that are just human-readable text.
Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas.
mniemiec wrote: People posting tons of "new" discoveries that have been known for decades [...]

That's G U S T A V O right there
Also, when you walk into a wall slowly and carefully but you hit your teeth on the wall and it hurts so bad.
Airy Clave White It Nay |
Let $n = pq$. By assumption, $3$ divides $\varphi(n) = (p-1)(q-1)$. Without loss of generality, I assume that $3$ divides $(p-1)$ or, equivalently, that $p \equiv 1 \pmod {3}$.
Fact Let $p$ be a prime such that $p \equiv 1 \pmod 3$. Let also $c$ be a cubic residue modulo $p$. If $y$ is a cubic root of $c$, then so are $y\cdot \omega \bmod p$ and $y \cdot \omega^2 \bmod p$, where $\omega$ is a non-trivial cube root of unity modulo $p$ (i.e., $\omega$ satisfies the equation $\omega^2 + \omega + 1 \equiv 0 \pmod {p}$).
In your case, given $c \in \mathbb{Z}_n^*$, you know that $y$ and $z$ are two (distinct) cubic roots of $c$ modulo $n$. Namely, $y^3 \equiv z^3 \equiv c \pmod n$. In turn, this implies $y^3 - z^3 \equiv 0 \pmod n$ and thus $(y-z)(y^2+yz+z^2) \equiv 0 \pmod n$. Since $n = pq$, it follows that each of $p$ and $q$ divides $(y-z)$ or $(y^2+yz+z^2)$.
Subcase 1 Assume $q \equiv 2 \pmod 3$ —in this case, cubic roots modulo $q$ are unique. This implies that $y \equiv z \pmod q$. But you cannot have then $y \equiv z \pmod p$ because otherwise you would have $y = z \pmod {n}$ (and $y$ and $z$ are supposed to be distinct). Therefore, since $y \equiv z \pmod q$ yields $(y-z) \equiv 0 \pmod q$ and $y \not\equiv z \pmod p$ yields $(y-z) \not\equiv 0 \pmod p$, you get $\gcd(y-z,n) = q$.
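A toy illustration of Subcase 1 (the small primes and the particular roots are made-up examples, not from the question): with $p \equiv 1$ and $q \equiv 2 \pmod 3$, two distinct cube roots of the same $c$ agree modulo $q$ but differ modulo $p$, so the gcd reveals $q$.

```python
from math import gcd

p, q = 7, 11          # toy primes: p ≡ 1 (mod 3), q ≡ 2 (mod 3)
n = p * q
omega = 2             # non-trivial cube root of unity mod 7: 2^3 = 8 ≡ 1
assert (omega**2 + omega + 1) % p == 0

y = 38                # an arbitrary element of Z_n^*
c = pow(y, 3, n)

# Build a second cube root z of c: z ≡ omega*y (mod p), z ≡ y (mod q).
# (CRT by brute force, since n is tiny.)
z = next(x for x in range(n)
         if x % p == (omega * y) % p and x % q == y % q)
assert pow(z, 3, n) == c and z != y

print(gcd(y - z, n))  # reveals the factor q = 11
```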
Subcase 2 Assume now $q \equiv 1 \pmod 3$. In this case, there is no guarantee that $\gcd(y-z, n)$ will reveal a factor of $n$. Indeed, it may be the case that, even if $y \neq z \pmod n$, $y^2 + yz + z^2 \equiv 0 \pmod p$ and $y^2 + yz + z^2 \equiv 0 \pmod q$. But you can always give it a try... |
I would like to know if there is a rule to prove this. For example, if I use the distributive law I will get only $(A \lor A) \land (A \lor \neg B)$.
I find pictures are great for anything simple enough to use them, which this is.
Remember:
AND means the area taken up by both things. So the middle one is what is taken up outside B, but also inside A. Their junction is not counted because it is inside A but not outside B.
OR means it is covered by either one or both. Both of them cover the part of A that is outside B, and the junction is covered by A (first picture) so it is counted too. All in all, you just have A again.
Sorry if this is too simplistic, not sure what level you are at.
There are many ways to see this. One is a truth table. Another is to use the distributive rule: $$ A \lor (A \land \lnot B) = (A \land \top) \lor (A \land \lnot B) = A \land (\top \lor \lnot B) = A \land \top = A. $$
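The truth-table route can be checked mechanically; a minimal sketch:

```python
from itertools import product

# Exhaustively verify that A or (A and not B) agrees with A
# for all four truth assignments.
for a, b in product([False, True], repeat=2):
    lhs = a or (a and not b)
    assert lhs == a, (a, b)
print("A or (A and not B) is equivalent to A")
```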
I would use my least favourite inference rule: Disjunction Elimination. Basically, it says that if $R$ follows from $P$, and $R$ follows from $Q$, then $R$ must be true if $P \vee Q$: $$(P \to R), (Q \to R), (P \lor Q) \vdash R$$
So let's assume $A \lor (A \land \neg B)$. Set $P = A$, $Q = A \land \neg B$, $R = A$ and apply the rule:
If $P$ ($= A$) holds, we are done. If $Q = A \land \neg B$ holds, then $A$ follows (by conjunction elimination, $S \land T \vdash S$). By disjunction elimination, $A \lor (A \land \neg B) \to A$.
The converse is trivial: assume $A$; then by one of the variants of disjunction introduction ($S \vdash S \lor T$ for any $T$), $A \to A \lor (\cdots)$.
Here is a diagram of this proof:
Note that, when we know that $C$ implies $D$, we have $C \lor D = D$. This is analogous to taking the union of a set (corresponding to $D$) and one of its subsets ($C$): we get the largest set ($D$) back.
In your case, $C = A \land \lnot B$ and $D = A$, and the implication trivially holds.
A more intuitive look:

A is always true when A is true. A & -B is only true when A is true.

Intuitively, applying OR to these two would produce a result C which is always true when A is true. As such, C is always true when A is true.

(Stop reading here if this explanation works for you.)

This is how I think about this problem. However, this explanation is not complete, since all we've shown is that A -> C and not A <-> C. So, let's also show that C -> A.

A is always false when A is false. A & -B is always false when A is false.

Intuitively, applying OR to these two would produce a result C which is always false when A is false. As such, C is always false when A is false; -A -> -C, which is the same thing as C -> A.

So A -> C and C -> A, so A <-> C.
Sometimes, people are confused by the letters. People like food, because it's easy to think about.
Pretend I ask you to flip a coin to choose between one OR the other of the following two options:
An Apple, OR... An Apple, and definitely no Banana.
[The first is equal to "A", the second "A and not B". But don't think of the letters. Think about the apple, and the whether you also get a banana.]
That first one really means "An apple fersure, and maybe you'll get a banana."
So leaving something out is the same as saying "maybe".
Looking at them as a pair, whichever you get, there's definitely going to be an Apple involved. Yay. And if your coinflip picks the right one, you might get a Banana.
But isn't that the same as saying "maybe you'll get a Banana"? Just, with half the likelihood?
So all you can definitely logically say is, you'll get an Apple. You can't say anything about whether you'll get a Banana.
Similar to the answer of Yuval Filmus. Using boolean algebra, in engineering notation, and factoring (or factorising) out A:
$A+A\cdot\bar B=A\cdot(1+\bar B)=A\cdot1=A$
It seems as though no one mentioned it yet so I will go ahead.
The law to deal with these kinds of problems is the absorption law. It states that p v (p ^ q) = p and also that p ^ (p v q) = p. If you try to use the distributive law on this, it will keep you going in circles forever:

(A v A) ^ (A v ~B) = A ^ (A v ~B) = (A ^ A) v (A ^ ~B) = A v (A ^ ~B) = (A v A) ^ (A v ~B) = ...

I used ASCII stand-ins for the connective symbols, but the point here is that when you are going in circles / when there is an and-or mismatch, usually you should look to the absorption law.
B is irrelevant to the outcome as you will notice if putting this in a truth table.
Another intuitive way to look at this:
If A is a set, then we can say any given object is either (in A) or (not in A).
Now look at S = A or (A and not B):
If an object is in A, then "A or anything" contains all elements in A, so the object will also be in S.
If an object isn't in A, then "A and anything" excludes all elements not in A, so the object is neither in A nor in (A and not B), so it isn't in S.
So the outcome is that any object in A is in S, and any object not in A isn't in S. So intuitively, the objects in S must be exactly those in A, and no other objects.
When two sets have identical elements, they are defined to be the same set. So A = S.
A simple method you can always use if you're stuck is case analysis.
Assume $A$ is true. In that case the left side is true, because true OR anything is true.
Assume $A$ is false. False AND anything is false. False OR false is false.
Since $A$ can have no more possible values, you've proven the proposition.
Let's consider the four cases:

1) A = 1, B = 0: A or (A and !B) => 1 or (1 and 1) => 1 or 1 => 1
2) A = 0, B = 1: A or (A and !B) => 0 or (0 and 0) => 0 or 0 => 0
3) A = 1, B = 1: A or (A and !B) => 1 or (1 and 0) => 1 or 0 => 1
4) A = 0, B = 0: A or (A and !B) => 0 or (0 and 1) => 0 or 0 => 0

In all four cases the result depends only on A, not on B, so the result is A.
protected by David Richerby Oct 9 '17 at 14:48
I'm doing some research into the RSA cryptosystem but I just need some clarity on how it worked when it was published in the 70s. Now I know that it works with public keys but did it also work with private keys back then or did it use a shared public key first and then private keys were introduced later?
RSA never was intended as a symmetric/secret key cryptosystem, or extensively used as such. Public/Private key pairs have been used for RSA from day one.
A very close cousin of RSA, also using Public/Private key pairs, was known (but not published) at the GCHQ significantly before RSA was published. See Clifford Cocks's declassified
A Note on 'Non-secret Encryption' (1973). Public-key cryptography was theorized at the GCHQ even before, see James Ellis's declassified The Possibility of Secure Non-Secret Encryption (1969), and his account in the declassified The history of Non-Secret Encryption (1987). See this question for more references on these early works.
As kindly reminded by poncho: the Pohlig-Hellman exponentiation cipher is a symmetric analog of textbook RSA. It uses as public parameter a large public prime $p$ with $(p-1)/2$ also prime, and two random odd secret exponents $e$ and $d$ with the relation $e\cdot d\equiv1\pmod{(p-1)}$; encryption of $m$ with $1<m<p$ is $c\gets m^e\bmod p$, and decryption is $m\gets c^d\bmod p$. By incorporating the computation of $d$ from the encryption key $e$ into the decryption, and using cycle walking to coerce down the message space to bitstrings and remove a few fixed points, it becomes a full-blown block cipher by the modern definition of that. Security is related to the discrete logarithm problem in $\mathbb Z_p^*$. An algorithm for solving that is the main subject of the article, and is what Pohlig-Hellman now designates. The encryption algorithm has little practical interest, because it is very slow for a symmetric-only algorithm. It never caught on in practice, and I believe it was never intended to.
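A toy sketch of the core exponentiation cipher (parameters far too small for security; `pow(e, -1, p - 1)` needs Python 3.8+; the cycle-walking block-cipher packaging described above is omitted):

```python
# p is a "safe" prime: (p - 1) / 2 = 11 is also prime.
p = 23
e = 9                     # secret odd exponent with gcd(e, p - 1) = 1
d = pow(e, -1, p - 1)     # decryption exponent: e*d ≡ 1 (mod p - 1)

def encrypt(m):           # for 1 < m < p
    return pow(m, e, p)

def decrypt(c):
    return pow(c, d, p)

m = 5
c = encrypt(m)
print(m, "->", c, "->", decrypt(c))   # round-trips back to m
```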
I found no earlier reference than: Stephen C. Pohlig and Martin E. Hellman, An Improved Algorithm for Computing Logarithms over $GF(p)$ and Its Cryptographic Significance, published in IEEE Transactions on Information Theory, Volume 24, Issue 1, January 1978.
RSA was clearly known to the authors when they submitted this correspondence. They make explicit reference to:
1. Ronald L. Rivest, Adi Shamir, and Leonard Adleman, On Digital Signatures and Public-Key Cryptosystems, Technical Memo MIT/LCS/TM-82, dated April 1977 (received by the Defense Documentation Center on May 3, 1977; publication date unknown).
2. Ronald L. Rivest, Adi Shamir, and Leonard Adleman, A Method for Obtaining Digital Signatures and Public-Key Cryptosystems, published in Communications of the ACM, Volume 21, Issue 2, February 1978 (received April 4, 1977; revised September 1, 1977).
Note: I discovered (1.) only because it is referenced by Pohlig and Hellman; it has a number of rough edges fixed in (2.), including a byzantine and unnecessary complication in the handling of messages not coprime with the public modulus, that are telling of the novelty.
I refer to poncho's account on chronology. |
Content – Energy sources
Capture of energy from ocean surface waves to generate electricity or mechanical work.
There is an energy transfer from the wind passing over the ocean surface to the sea.
The determining factors for wave height are: the wind speed, the duration of time the wind has been blowing, the distance over which the wind excites the waves (the fetch), and the depth and topography of the seafloor.
The wave power is determined by wave speed, wavelength and the density of the water.
Normally, and for practical use, the wave motion may be considered strongest at the surface.
Wave power or wave energy flux
Formula
If the water depth is larger than half the wavelength, the wave energy flux is:
P = \dfrac{\rho g^2}{64\pi}H^2_{m0}T_e \approx \left(0.5\,\dfrac{\text{kW}}{\text{m}^3\,\text{s}}\right)H^2_{m0}T_e
P = wave energy flux per unit of wave-crest length (kW/m)
H_m0 = significant wave height (m)
T_e = wave energy period (s)
ρ = water density
g = acceleration due to gravity
Wave energy and wave-energy flux
The mean wave energy density per unit horizontal area on the water surface is:
E = \dfrac{1}{16}\rho gH^2_{m0}
E is the sum of the kinetic and potential energy density per unit horizontal area. The potential energy density equals the kinetic energy density, each contributing half of the wave energy density E.
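As a quick numeric check of the deep-water flux formula, the values below are illustrative (a moderate 3 m swell with an 8 s energy period, and typical seawater density):

```python
import math

def wave_energy_flux(H_m0, T_e, rho=1025.0, g=9.81):
    """Deep-water wave energy flux per metre of wave crest, in W/m."""
    return rho * g ** 2 * H_m0 ** 2 * T_e / (64 * math.pi)

# Moderate swell: 3 m significant wave height, 8 s energy period.
P_flux = wave_energy_flux(3.0, 8.0)   # roughly 35 kW per metre of crest
```

The prefactor ρg²/(64π) works out to about 0.49 kW/(m³·s), which is where the "≈ 0.5" approximation in the formula comes from.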
Devices
Wave power devices are generally categorized by the method used to capture the energy of the waves, by location, and by the power take-off system. Locations are shoreline, nearshore and offshore. Types of power take-off include: hydraulic ram, elastomeric hose pump, pump-to-shore, hydroelectric turbine, air turbine, and linear electrical generator.
Point absorber buoy: This device floats on the surface of the water, held in place by cables connected to the seabed. Buoys use the rise and fall of swells to drive hydraulic pumps and generate electricity.
Surface attenuator: These devices act similarly to point absorber buoys, with multiple floating segments connected to one another, oriented perpendicular to incoming waves. A flexing motion created by swells drives hydraulic pumps to generate electricity.
Oscillating water column: Oscillating water column devices can be located on shore or in deeper waters offshore. With an air chamber integrated into the device, swells compress air in the chamber, forcing it through an air turbine to create electricity.
Overtopping device: Overtopping devices are long structures that use wave velocity to fill a reservoir to a greater water level than the surrounding ocean. The potential energy of the reservoir height is then captured with low-head turbines. Devices can be either on shore or floating offshore.
Oscillating wave surge converter: These devices typically have one end fixed to a structure or the seabed while the other end is free to move. Energy is collected from the relative motion of the body compared to the fixed point. Oscillating wave surge converters often come in the form of floats, flaps, or membranes. Some of these designs incorporate parabolic reflectors as a means of increasing the wave energy at the point of capture. These capture systems use the rise and fall motion of waves to capture energy.
Once the wave energy is captured at a wave source, power must be carried to the point of use or to a connection to the electrical grid by transmission power cables. |
Given a set of pairs of words $P = \{(\alpha_1, \beta_1), \dots, (\alpha_n, \beta_n)\} \subseteq \Sigma^*\times\Sigma^*$, the
Post Correspondence Problem (PCP) is to decide whether or not there are indices $i_1, \dots, i_k \in \{1\dots n\}$ such that $\alpha_{i_1}\cdot \dots \cdot \alpha_{i_k} = \beta_{i_1}\cdot \dots \cdot \beta_{i_k}$.
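To make the problem statement concrete, here is a bounded-depth brute-force search (my own illustration, with 0-based indices). Undecidability means no depth bound suffices in general, but small solvable instances can still be found:

```python
from itertools import product

def pcp_solution(pairs, max_len):
    """Search for an index sequence of length <= max_len solving the
    PCP instance `pairs`; returns None if none exists up to that depth."""
    for k in range(1, max_len + 1):
        for idx in product(range(len(pairs)), repeat=k):
            top = "".join(pairs[i][0] for i in idx)
            bottom = "".join(pairs[i][1] for i in idx)
            if top == bottom:
                return list(idx)
    return None

# A classic solvable instance: this one has a length-4 solution.
pairs = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
solution = pcp_solution(pairs, 4)
assert solution is not None
```

An instance like $\{(ab, ba)\}$, by contrast, has no solution at any depth, since the two concatenations can never begin with the same letter.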
It is well known that PCP is not computable in any Turing-equivalent machine model. Usually, this fact is proven by reducing the Halting Problem (HP) to PCP, i.e. describing how to create a PCP instance $\Pi$ for an arbitrary Turing machine $M$ and an input $x$ such that $\Pi$ has a solution if and only if $M$ terminates on $x$.
Arguably, the undecidability of PCP is more useful than that of HP because it is removed from the notion of computation itself and can therefore be understood without having to read up on Turing machines (or equivalent models) first. Also, it is a more natural choice for many reduction proofs for problems that are not computation-related, e.g. in formal language theory.
It seems only fair to ask:
Is there a proof of the undecidability of PCP that does not employ a reduction from HP (or similar problems)?
Note that I want to exclude chains of reductions that ultimately lead back to HP. Having such an independent proof might open up new ways for students and laymen to understand the underlying issues. Failing that, are there reasons for this lack? Do we need the kind of self-applicability we employ when proving HP not to be computable?
PS: I was unsure whether this question should go here or rather onto math.SE. As the proper place might depend on the level of answers, I went with the specialist community.
Intersection point $A$ has coordinates $(1, -2)$.The first line equation can be written as $y = -\frac34x - \frac54$, the second: $y = \frac43x-\frac{10}{3}$, so $k_1 = -\frac34, k_2 = \frac43$.
$AB$ would lie on the first line if $y_B - y_A = k_1(x_B - x_A)$. $AC$ would lie on the second line if $y_C - y_A = k_2(x_C - x_A)$.
This gives us $y_B = -\frac34x_B - \frac54$, $y_C = \frac43x_C - \frac{10}{3}$. $AB = AC$ means that $\sqrt{(x_B - x_A)^2 + (y_B - y_A)^2} = \sqrt{(x_C - x_A)^2 + (y_C - y_A)^2}$, so $\sqrt{(x_B - 1)^2 + (y_B + 2)^2} = \sqrt{(x_C - 1)^2 + (y_C + 2)^2}$
Let $y = k_3x + b_3$ be the equation of the line containing $BC$. If it passes through $D = (1,2)$, then it contains segments $BD$ and $CD$, so $k_3 = \frac{y_B - y_D}{x_B - x_D} = \frac{y_D - y_C}{x_D - x_C}$ (as $D$ lies between $B$ and $C$), and this gives the last equation: $\frac{y_B - 2}{x_B - 1} = \frac{2 - y_C}{1 - x_C}$.
Finally, you have to solve the following system:
\begin{cases} y_B = -\frac34x_B - \frac54 \\ y_C = \frac43x_C - \frac{10}{3} \\ \sqrt{(x_B - 1)^2 + (y_B + 2)^2} = \sqrt{(x_C - 1)^2 + (y_C + 2)^2} \\ \frac{y_B - 2}{x_B - 1} = \frac{2 - y_C}{1 - x_C} \end{cases}
Its solution gives you coordinates of $B$ and $C$ from which you can obtain the line equation. |
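For a concrete check, one solution of this system can be verified directly. The specific coordinates below are my own computation (found by parametrizing the two lines), so they are worth re-deriving:

```python
# Candidate solution: B = (-87/25, 34/25), C = (109/25, 62/25).
xB, yB = -87 / 25, 34 / 25
xC, yC = 109 / 25, 62 / 25

# B and C lie on their respective lines:
assert abs(yB - (-3 / 4 * xB - 5 / 4)) < 1e-12
assert abs(yC - (4 / 3 * xC - 10 / 3)) < 1e-12

# AB = AC with A = (1, -2):
AB = ((xB - 1) ** 2 + (yB + 2) ** 2) ** 0.5
AC = ((xC - 1) ** 2 + (yC + 2) ** 2) ** 0.5
assert abs(AB - AC) < 1e-12

# B, D = (1, 2), C are collinear:
assert abs((yB - 2) * (1 - xC) - (2 - yC) * (xB - 1)) < 1e-12
```

With these points, the line through $B$ and $C$ has slope $1/7$ and passes through $D$, i.e. $x - 7y + 13 = 0$.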
My intuition is the following ...
Conditioning on $C$ means that we are considering only the cases when $C$ is given. Now, suppose that I live in a world where $C$ is always given.
My people know nothing about, and cannot imagine, a world without $C$. For some reason, our mathematicians denote the probability of $X$ by $\hat{P}(X)$. They have also already discovered the rule
$$\hat P(A|B) = \frac{\hat P(A\cap B)}{\hat P(B)}\text{.}$$
Now, you as an Earthling know a world where $C$ is not part of the assumptions of everyday life. So, when you come to our planet, you immediately notice that every probability $\hat P(X)$ of ours actually corresponds to your $P(X|C)$.
You are immediately able to rewrite the RHS, using the rule above:
$$\frac{P(A\cap B\mid C)}{P(B \mid C)}\text{.}$$
But ... What is the LHS? Well, what is the probability of $A$ when $B$ is given when $C$ is (also) given? Precisely $$P(A\mid B\cap C)\text{,}$$hence the formula. |
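A quick Monte Carlo sanity check of the resulting identity; the events below are arbitrary illustrations of my own, not from the answer:

```python
import random

# Two dice: A = "first die even", B = "sum >= 7", C = "second die > 2".
random.seed(0)
n_C = n_BC = n_ABC = 0
for _ in range(200_000):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    A, B, C = d1 % 2 == 0, d1 + d2 >= 7, d2 > 2
    if C:
        n_C += 1
        if B:
            n_BC += 1
            if A:
                n_ABC += 1

lhs = n_ABC / n_BC                      # P(A | B and C)
rhs = (n_ABC / n_C) / (n_BC / n_C)      # P(A and B | C) / P(B | C)
assert abs(lhs - rhs) < 1e-9
```

Note that on empirical frequencies the two sides agree up to rounding: both reduce to the count of $A\cap B\cap C$ over the count of $B\cap C$, which is really the content of the formula.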
I'm trying to work through Altland and Simons' example of interacting fermions in one dimension. It's in chapter 2, page 70 (you can find it here).
They define fermionic operators $$ a_{sk}^\dagger $$ where $s=L/R$. $a_{Lk}^\dagger$ is an operator that creates an electron going to the left with momentum $(-k_F+k)$, and $a_{Rk}^\dagger$ is an operator that creates an electron going to the right with momentum $(k_F+k)$. So basically, $a_{Lk}^\dagger=a_{-k_F+k}^\dagger$, $a_{Rk}^\dagger=a_{k_F+k}^\dagger$. These operators are restricted to exist only for small $k$.
Then, they define density operators
$$ \rho_{sq}=\sum_k a^\dagger_{sk+q}a_{sk} $$
They go on to show that the commutation relations for the density operators are
$$ [\rho_{sq},\rho_{s'q'}]=\delta_{s,s'}\sum_k (a^\dagger_{sk+q}a_{sk-q'}-a^\dagger_{sk+q+q'}a_{sk}) $$
Now, here's the part I don't understand. They say they want to replace the right side of the equation with its ground state expectation value. They define the ground state of the theory by $|\Omega\rangle$. Then they claim that
$$ \langle\Omega|a^\dagger_{sk}a_{sk'}|\Omega\rangle = \delta_{kk'} $$
Why should this be true? I understand that in the noninteracting theory, $a^\dagger_{sk}a_{sk'}|\Omega\rangle$ is orthogonal to $|\Omega\rangle$ unless $k=k'$. But in the interacting theory, the ground state could be a superposition of states such that $\langle\Omega|a^\dagger_{sk}a_{sk'}|\Omega\rangle\neq0$ for $k \neq k'$.
They ultimately use this to prove
$$ \langle\Omega|[\rho_{sq},\rho_{s'q'}]|\Omega\rangle = \delta_{s,s'}\delta_{q,-q'}\sum_k\langle\Omega|(a^\dagger_{sk+q}a_{sk+q}-a^\dagger_{sk}a_{sk})|\Omega\rangle $$ and I don't see any other way to prove this.
What am I missing? |
I need to solve the following linear program:
$$\displaystyle{\min_{\bar{X},t}} \hspace{0.1in}t$$
such that: $$A\bar{X}=\tilde{x} + td$$
where $A$ is $N\times N$ and known, $t$ is scalar, $\tilde{x}$ is a known $N\times 1$ vector, $d$ is a known $N\times 1$ vector and $\bar{X}$ lies in the set $\mathcal{X}$ where: $$\mathcal{X}=\{X \mid \underline{X} <X < \overline{X}, X \in \mathbb{R}^N\}$$ and $\underline{X}$ and $\overline{X}$ are known $N\times 1$ vectors.
To solve this I am essentially taking the $td$ term to the left-hand side, appending $-d$ as a column to the given $A$, and adding $t$ as an additional unknown on the LHS.
The problem is that when implementing this in Gurobi, I get the error that the problem is infeasible or unbounded. Disabling Gurobi's DualReductions presolve setting shows that the model is infeasible.
I was thinking about writing the dual problem and figuring out if the issue becomes more apparent there, but I am unable to do so. Can anyone help me figure out how to write the dual problem?
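For concreteness, here is my own attempt at the dual via the Lagrangian, treating the bounds as closed, $\underline{X}\le \bar{X}\le\overline{X}$, as an LP requires; $y$ is a free multiplier for the equality constraint and $\lambda,\mu\ge0$ are for the lower and upper bounds, so this sketch is worth double-checking:

$$\begin{aligned}
\max_{y,\lambda,\mu}\;& \tilde{x}^\top y + \underline{X}^\top\lambda - \overline{X}^\top\mu\\
\text{s.t. }\;& A^\top y = \mu - \lambda,\\
& d^\top y = -1,\\
& \lambda \ge 0,\ \mu \ge 0.
\end{aligned}$$

If this is right, the first constraint can always be satisfied by splitting $A^\top y$ into $\mu-\lambda$, so the dual is feasible whenever $d\neq0$; the primal then cannot be unbounded, and checking where the dual becomes unbounded might localize the primal infeasibility.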
Example:
X_L file corresponds to $\underline{X}$ and X_U file corresponds to $\overline{X}$. Q file corresponds to $A$ in the notation above. $x$ corresponds to $\tilde{x}$.
When I use lpSolve in R for this problem, I get weird solutions that are not within the bounds (I use -Inf and Inf as bounds for $t$).
When I use gurobi, it gives me an unbounded or infeasible error. If I set DualReductions=0 (link), it says that the model is infeasible. However, if I solve this system with the objective set to 0, I get a solution. Again the bounds on $t$ are set to -Inf and Inf.
Therefore I am very confused as to what is happening with this problem. Is this problem infeasible? If so, then how come setting objective equal to 0 produces solutions? If not, then why won't gurobi or lpSolve work? |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
A simulation I'm doing requires me to calculate the partial trace of a large density matrix. I am trying to calculate it using tools from numpy, but my code seems to be having some problems. For background, let me explain the arrays I am interested in a little more, and the way I'm defining the partial trace. Then, I will give the code I have and the errors I am getting.
First, the partial trace. If I have a tensor product of vector spaces
$$ V = \prod_i V_i$$
and a linear operator $T: V \to V$, then given a basis I can store all the information about $T$ as a multidimensional array $T_{out_1,..,out_n,in_1,..,in_n}$ where $T_{out_1,..,out_n,in_1,..,in_n}$ is the $v_{out_1} \otimes v_{out_2}\otimes \dots \otimes v_{out_n}$ component of $T(v_{in_1} \otimes \dots \otimes v_{in_n} )$. (Compare this to a matrix in a basis where $M_{ij}$ is the $i^{th}$ component of $M(v_j)$.)
The partial trace over some of these indices is a new operator given as a multidimensional array defined by $\widetilde{T}_{kept_{out},kept_{in}} = \sum_{traced} T_{keep_{out},traced, keep_{in},traced}$, where $kept_{in}$, $kept_{out}$, and $traced$ are all multi indices referring to a subset of basis states. The sum is taken over all combinations of indices in the traced set.
My code for computing this in numpy is:
import numpy as np

def trace_index(array, i):
    """Given an array and an index i (1-based), trace out that index."""
    n = len(array.shape)
    tot = list(range(n))
    tot[n//2 + i - 1] = i - 1
    return np.einsum(array, tot)

def partial_trace(array, indices):
    """Given an array and a list of indices to trace out, trace out the indices."""
    in_sorted = sorted(indices, reverse=True)
    cur_trace = array
    for i in in_sorted:
        cur_trace = trace_index(cur_trace, i)
    return cur_trace
I trace them in the descending order eg. 5,4,... because then I can apply trace_index to the indices one at a time. If I trace index 5 and then index 4, index 4 is still index 4 after tracing index 5. If I do it the other way, after I traced index 4, there is no index 5.
This code seems to work well for small cases, but for larger ones I get:
ValueError: invalid subscript '}' in einstein sum subscripts string, subscripts must be letters
So, my question is this: Is there a better way to do what I am trying to do than what I am doing? |
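One way to avoid einsum's subscript-alphabet limit, assuming the same axis convention as above (n "out" axes followed by n "in" axes, 1-based subsystem indices), is to let np.trace contract one axis pair at a time:

```python
import numpy as np

def partial_trace(arr, indices):
    """Trace out the given subsystem indices (1-based, as above)."""
    for i in sorted(indices, reverse=True):
        n = arr.ndim // 2
        # np.trace sums the diagonal of one (out, in) axis pair and
        # removes both axes; no subscript string, so no letter limit.
        arr = np.trace(arr, axis1=i - 1, axis2=n + i - 1)
    return arr

# Quick check on T[o1, o2, i1, i2] = A[o1, i1] * B[o2, i2]:
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
T = np.einsum('ij,kl->ikjl', A, B)
assert np.allclose(partial_trace(T, [2]), A * np.trace(B))
```

Tracing in descending index order keeps the remaining (out, in) pairings aligned after each contraction, just as in the original code.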
Bifurcation
John Guckenheimer (2007), Scholarpedia, 2(6):1517. doi:10.4249/scholarpedia.1517 revision #91057 [link to/cite this article]
A bifurcation of a dynamical system is a qualitative change in its dynamics produced by varying parameters.
Definition
Consider an autonomous system of ordinary differential equations (ODEs) \[\tag{1} \dot{x}=f(x,\lambda),\ \ \ x \in {\mathbb R}^n, \ \ \ \lambda \in {\mathbb R}^p \]
where \(f\) is smooth. A bifurcation occurs at parameter \(\lambda = \lambda_0\) if there are parameter values \(\lambda_1\) arbitrarily close to \(\lambda_0\) with dynamics topologically inequivalent to those at \(\lambda_0\ .\) For example, the number or stability of equilibria or periodic orbits of \(f\) may change with perturbations of \(\lambda\) from \(\lambda_0\ .\) One goal of bifurcation theory is to produce parameter space maps or bifurcation diagrams that divide the \(\lambda\) parameter space into regions of topologically equivalent systems. Bifurcations occur at points that do not lie in the interior of one of these regions.
Bifurcation theory
Bifurcation theory provides a strategy for investigating the bifurcations that occur within a family. It does so by identifying ubiquitous patterns of bifurcations. Each bifurcation type or singularity is given a name; for example, Andronov-Hopf bifurcation. No distinction has been made in the literature between "bifurcation" and "bifurcation type," both being called "bifurcations."
Associated with each bifurcation type are:
defining equations that locate bifurcations of that type in a family \(\dot{x} = f(x,\lambda)\)
normal forms that give model systems exemplifying the bifurcation type
Inequalities called non-degeneracy conditions are part of the specification of a bifurcation type. The bifurcation types and their normal forms serve as templates that facilitate construction of parameter space maps. Bifurcation theory analyzes the bifurcations within the normal forms and investigates the similarity of the dynamics within systems having a given bifurcation type. The "gold standard" for similarity of systems used by the theory is topological equivalence. In some cases, bifurcation theory proves structural stability of a family. One of the principal objectives of bifurcation theory is to prove the structural stability of normal forms. Note, however, that there are bifurcation types for which structurally stable normal forms do not exist. An important aspect of the definition of structural stability in the context of bifurcation theory is the specification of which perturbations of a family are allowed. For example, bifurcation types of systems possessing specified symmetries have been studied extensively (Equivariant Bifurcation Theory).
Classification of bifurcations
One can view bifurcations as a failure of structural stability within a family. A starting point for classifying bifurcation types is the Kupka-Smale theorem that lists three generic properties of vector fields:
hyperbolic equilibrium points
hyperbolic periodic orbits
transversal intersections of stable and unstable manifolds of equilibrium points and periodic orbits.
Different ways that these Kupka-Smale conditions fail lead to different bifurcation types. Bifurcation theory constructs a layered graph of bifurcation types in which successive layers consist of types whose defining equations specify more failure modes. These layers can be organized by the codimension of the bifurcation types, defined as the minimal number of parameters of families in which that bifurcation type occurs. Equivalently, the codimension is the number of equality conditions that characterize a bifurcation.
Codimension one bifurcations comprise the top level of bifurcation types. Single failures of the Kupka-Smale properties yield the following types of codimension one bifurcations:
Equilibria
Periodic Orbits
Global Bifurcations
This is not a comprehensive list of codimension one bifurcations. Additional types can be found in systems with quasiperiodic oscillations or chaotic dynamics. Moreover, there are subcases in the list above that deal with such issues as whether an Andronov-Hopf bifurcation is sub-critical or super-critical, and the implications of eigenvalue magnitudes for homoclinic bifurcation.
The classification of bifurcation types becomes more complex as their codimension increases. There are five types of "local" codimension two bifurcations of equilibria:
Bautin Bifurcation
Bogdanov-Takens Bifurcation
Cusp Bifurcation
Fold-Hopf Bifurcation
Hopf-Hopf Bifurcation
One of the principal uses of bifurcation theory is to analyze the bifurcations that occur in specific families of dynamical systems. Investigations commonly identify the types of bifurcations in parameter space maps either by comparison of simulation results with normal forms or by solving defining equations for those bifurcation types in the systems under investigation and computing coefficients of the normal forms. Several software packages (AUTO, CONTENT, MATCONT, XPPAUT, PyDSTool) give implementations of algorithms that perform the latter type of analysis. The numerical core of these packages consists of:
regular implementations of defining equations for the bifurcation types
equation solvers such as Newton's method
numerical continuation methods for differential equations
computation of normal forms
initial and boundary value solvers for differential equations.
The continuation methods compute curves of solutions to regular systems of \( N \) equations in \( N+1 \) variables. The bifurcation analysis of a system implemented to varying degrees in the packages listed above is based upon the following strategy:
1. An initial equilibrium or periodic orbit is located.
2. Numerical continuation is used to follow this special orbit as a single active parameter varies.
3. Defining equations for codimension one bifurcations detect and locate bifurcations that occur on this branch of solutions.
4. Starting at one of the located codimension one bifurcations, two parameters are designated to be active and the continuation methods are used to compute a curve of codimension one bifurcations.
5. Defining equations for codimension two bifurcations detect and locate bifurcations that occur on this branch of solutions.
6. Starting at one of the located codimension two bifurcations, three parameters are designated to be active and the continuation methods are used to compute a curve of codimension two bifurcations.
This process can be continued as long as one has regular defining equations for bifurcations of increasing codimension, but these hardly exist beyond codimension three. Moreover, the dynamic behaviour near bifurcations with codimension higher than three is usually so poorly understood that the computation of such points is hardly worthwhile. In many cases, bifurcation analysis identifies additional curves of codimension k bifurcations that meet at a codimension k+1 bifurcation. Continuation methods can be started at one of these codimension k bifurcations to find curves of this type of bifurcation with k+1 active parameters. Switching to the continuation of a periodic orbit at an Andronov-Hopf bifurcation or to the continuation of a saddle homoclinic bifurcation curve from the Bogdanov-Takens bifurcation are examples of such starting techniques based on normal form computations.
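The continuation strategy above can be illustrated with a toy example of my own (not from any of the packages named): follow the equilibrium branch of \(\dot x = \lambda - x^2\) as the single active parameter \(\lambda\) decreases toward the fold (saddle-node) at \(\lambda = 0\), where \(f_x = -2x\) vanishes.

```python
# Natural-parameter continuation of equilibria of x' = f(x, lam) = lam - x**2.
def f(x, lam):
    return lam - x ** 2

def f_x(x, lam):
    return -2 * x

def newton(x, lam, tol=1e-12):
    # Correct the predictor back onto the equilibrium branch.
    for _ in range(50):
        step = f(x, lam) / f_x(x, lam)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

branch = []
x = 2.0                        # equilibrium at lam = 4 on the stable branch
for k in range(78):
    lam = 4.0 - 0.05 * k       # single active parameter, stepped down
    x = newton(x, lam)         # previous point serves as predictor
    branch.append((lam, x, f_x(x, lam)))
# f_x stays negative (stable branch) but tends to 0 as the fold nears,
# which is how a defining equation for the saddle-node would detect it.
```

Near the fold itself, natural-parameter continuation breaks down because \(f_x \to 0\); that failure is exactly why the packages above use pseudo-arclength continuation and regular defining systems instead.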
Bifurcation Theory of Chaotic and Quasiperiodic Systems
Bifurcation theory has intensively investigated varied topics that bear on chaotic and quasiperiodic dynamics. Much of this theory has been developed in the context of discrete time dynamical systems defined by iteration of mappings. The bifurcation theory described above has analogous results for this setting. In some areas, bifurcation theory of discrete systems goes farther than that for continuous time systems. In particular, an extensive, deep theory describing the properties of iterations of one dimensional mappings was developed over the last quarter of the twentieth century. This theory characterizes universal sequences of bifurcations and the existence of chaotic attractors. Some of this theory carries over to the setting of invertible mappings in higher dimensions and to continuous time dynamical systems via Poincaré maps. There are also results that are specific to continuous time systems, especially those that apply to homoclinic orbits of equilibrium points. Early results in this area include the theory of the Lorenz Attractor and Silnikov's analysis of systems with a homoclinic orbit of a saddle-focus in three dimensional systems. Methods originating in KAM (Kolmogorov-Arnold-Moser) theory describe how quasiperiodic invariant sets arise naturally in families of vector fields. Sophisticated numerical methods have been developed based upon this theory to compute invariant tori with (quasi)periodic motion in families of vector fields.
References
W. De Melo and S. Van Strien (1993) One Dimensional Dynamics, Springer.
J. Guckenheimer and P. Holmes (1983) Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer.
Yu.A. Kuznetsov (2004) Elements of Applied Bifurcation Theory, Springer, 3rd edition.
Internal references
Yuri A. Kuznetsov (2006) Andronov-Hopf bifurcation. Scholarpedia, 1(10):1858.
John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.
John Guckenheimer and Yuri A. Kuznetsov (2007) Bautin bifurcation. Scholarpedia, 2(5):1853.
John Guckenheimer and Yuri A. Kuznetsov (2007) Bogdanov-Takens bifurcation. Scholarpedia, 2(1):1854.
Yuri A. Kuznetsov (2007) Conjugate maps. Scholarpedia, 2(12):5420.
John Guckenheimer and Yuri A. Kuznetsov (2007) Cusp bifurcation. Scholarpedia, 2(4):1852.
James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.
Jeff Moehlis and Edgar Knobloch (2007) Equivariant bifurcation theory. Scholarpedia, 2(9):2511.
John Guckenheimer and Yuri A. Kuznetsov (2007) Fold-Hopf bifurcation. Scholarpedia, 2(10):1855.
Lawrence F. Shampine and Skip Thompson (2007) Initial value problems. Scholarpedia, 2(3):2861.
Willy Govaerts, Yuri A. Kuznetsov, Bart Sautois (2006) MATCONT. Scholarpedia, 1(9):1375.
James Murdock (2006) Normal forms. Scholarpedia, 1(10):1902.
Kendall E. Atkinson (2007) Numerical analysis. Scholarpedia, 2(8):3163.
Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
Anatoly M. Samoilenko (2007) Quasiperiodic oscillations. Scholarpedia, 2(5):1783.
Yuri A. Kuznetsov (2006) Saddle-node bifurcation. Scholarpedia, 1(10):1859.
Leonid Pavlovich Shilnikov and Andrey Shilnikov (2007) Shilnikov bifurcation. Scholarpedia, 2(8):1891.
Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
James Murdock (2006) Unfoldings. Scholarpedia, 1(12):1904.
Bard Ermentrout (2007) XPPAUT. Scholarpedia, 2(1):1399.
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ... |
TIPS FOR SOLVING QUESTIONS RELATED TO PERCENTAGE: Percentage: A fraction whose denominator is 100 is called a percentage. The numerator of the fraction is called the rate percent.
1. To express x% as a fraction:
We have $x\% = \frac{x}{100}$. Thus, $30\% = \frac{30}{100} = \frac{3}{10}$.
2. To express fraction as percentage, we have
\begin{aligned} \frac{a}{b} = \left(\frac{a}{b}\times100\right)\% \end{aligned}
3. If A is R% more than B, then B is less than A by
\begin{aligned} \left[ \frac{R}{(100+R)}\times 100 \right]\% \end{aligned}
4. If A is R% less than B, then B is more than A by
\begin{aligned} \left[ \frac{R}{(100-R)}\times 100 \right]\% \end{aligned}
5. If the price of a commodity increases by R%, then the reduction in consumption so as not to increase the expenditure is:
\begin{aligned} \left[ \frac{R}{(100+R)}\times 100 \right]\% \end{aligned}
6. If the price of a commodity decreases by R%, then the increase in consumption so as not to decrease the expenditure is:
\begin{aligned} \left[ \frac{R}{(100-R)}\times 100 \right]\% \end{aligned}
7. Let the population of a town be P now and suppose it increases at the rate of R% per annum, then
\begin{aligned} 1. & \text{Population after n years = }P\left(1+\frac{R}{100}\right)^n \\ 2.& \text{Population before n years =} \frac{P}{\left(1+\frac{R}{100}\right)^n} \\ \end{aligned}
8. Let the present value of a machine be P. Suppose it depreciates at the rate of R% per annum.
1. Value of the machine after n years = \begin{aligned} P\left(1-\frac{R}{100}\right)^n \end{aligned} 2. Value of the machine n years ago = \begin{aligned} \frac{P}{\left(1-\frac{R}{100}\right)^n} \\ \end{aligned}
9. For two successive changes of x% and y%, net change =
\begin{aligned} \left(x +y + \frac{xy}{100}\right)\%\\ \end{aligned} |
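A few of the rules above translated into code, as a quick sanity check (the function names are my own):

```python
def net_change(x, y):
    """Rule 9: net % change after successive changes of x% and y%."""
    return x + y + x * y / 100

def population_after(P, R, n):
    """Rule 7.1: population after n years at R% growth per annum."""
    return P * (1 + R / 100) ** n

def b_less_than_a_by(R):
    """Rule 3: if A is R% more than B, B is less than A by this %."""
    return R / (100 + R) * 100

assert net_change(10, 20) == 32.0                 # +10% then +20% = +32%
assert round(population_after(1000, 10, 2)) == 1210
```

For instance, rule 3 with R = 25 gives 25/125 × 100 = 20: if A is 25% more than B, then B is 20% less than A.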
I work in a relatively illiquid and old-fashioned market (options on power), where trades are arranged via phone & broker, so the issue of low underlying liquidity is definitely there. To remedy this, all options are dealt with delta hedge, where the price level of the delta hedge is pre-agreed, so market moves while a trade is being arranged do not matter as much (unless of course they are very substantial).
In your case, I would refer to end-of-day quotes, where in the case of exchange-traded options, you have closing prices for options and futures. In this case, the exchange will probably poll several dealers in order to give a realistic market picture. In OTC markets, brokers will show end of day option rates, and explicitly reference them to a closing price of the underlying.
As for judging the mode of behaviour (sticky strike vs. sticky delta) intraday, I would be cautious. Imho, if you base your hedging decisions on this, you may overengineer, potentially not doing yourself a favour.
I have mostly been working on the assumption that a rangebound market with very modest moves will be sticky strike, whereas during more volatile periods it will behave in a sticky-delta way. Not having tested this explicitly, I would say you could try to look for a criterion along the lines of:
$S\sigma/\sqrt{252}\gg\mathit{daily move}$ (sticky strike) resp. $S\sigma/\sqrt{252}\ll\mathit{daily move}$ (sticky delta)
What you could do to make this into a more sound methodology is to run volatility analysis on end-of-day data and relate to daily moves. |
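As a sketch of how that criterion might be coded up (my own translation, with an arbitrary separation factor; not a tested methodology):

```python
import math

def vol_regime(S, sigma, daily_move, factor=3.0):
    """Compare the realized daily move with the typical move implied by
    annualized vol, S*sigma/sqrt(252); classify the regime accordingly."""
    typical = S * sigma / math.sqrt(252)
    if abs(daily_move) < typical / factor:
        return "sticky strike"
    if abs(daily_move) > typical * factor:
        return "sticky delta"
    return "indeterminate"
```

For example, with S = 100 and sigma = 16%, the typical daily move is about 1, so a 0.2 move would be labelled sticky strike and a 5-point move sticky delta.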
I'm having difficulty numerically solving the inviscid Burgers equation. Godunov's scheme is used in most of the literature I've found. Now my question is whether using a Crank-Nicolson scheme is wrong or not.
$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}=0$
with this BC:
at $t> 0 ,x=0$ : $u=1$
using a finite volume method:
$\frac{\partial }{\partial t}\int udv + \int u\frac{\partial u}{\partial x}dv=0 $
$\frac{\partial u}{\partial t}\Delta v +(uuA) _{e}-(uuA) _{w}=0$
$\frac{\partial u}{\partial t} + F_{e}u_{e}-F_{w}u_{w}=0 $
$F=\frac{u}{\Delta x}$
and for discretization in time I used Crank-Nicolson
$u_{p}^{n+1}-u_{p}^{n}=\frac{\Delta t}{2}(-F_{e}u_{e}+F_{w}u_{w})^{n+1}+\frac{\Delta t}{2}(-F_{e}u_{e}+F_{w}u_{w})^{n}$
the final form of the equation (with an upwind scheme) is:
$(1+\frac{\Delta t}{2}F_{e})u_{P}^{n+1}=(-\frac{\Delta t}{2}F_{w}) u_{W}^{n+1} +u_{P}^{n}+\frac{\Delta t}{2}(F_{w}u_{w}-F_{e}u_{e})^{n} $
and for the first block :
$(1+\frac{\Delta t}{2}F_{e})u_{P}^{n+1}= u_{P}^{n}+\frac{\Delta t}{2}(-F_{e}u_{e})^{n}+\Delta t F_{w}u_{w}$
and after that I tried to solve a set of linear algebraic equations. The result at every time step seems to converge, but as time passes the magnitude of the velocity keeps increasing, which indicates a mistake.
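For comparison, a conservative first-order upwind discretization of the same problem (which coincides with Godunov's scheme when all values of u are non-negative) stays stable and propagates the expected shock at speed 1/2. The grid values below are my own illustrative choices:

```python
# Explicit conservative upwind scheme for u_t + (u^2/2)_x = 0 with
# u(0, t) = 1 and u(x, 0) = 0; the shock should travel at speed 1/2.
nx, dx, dt = 200, 0.01, 0.004          # CFL = dt/dx * max|u| = 0.4
u = [0.0] * nx
u[0] = 1.0                              # inflow boundary condition

def flux(v):
    return 0.5 * v * v

for _ in range(250):                    # advance to t = 1.0
    new = u[:]
    for i in range(1, nx):
        # all wave speeds are >= 0 here, so the Godunov flux is
        # simply the left ("upwind") flux
        new[i] = u[i] - dt / dx * (flux(u[i]) - flux(u[i - 1]))
    u = new
# At t = 1 the shock sits near x = 0.5: u is ~1 behind it, ~0 ahead.
```

A key difference from the scheme in the question is the conservative flux form u²/2; discretizing the non-conservative product u·u_x directly can make the shock move at the wrong speed or blow up, which may be part of what is going wrong.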
Recently I returned from a particle physics experiment at the Paul Scherrer Institut, a nuclear research lab in Switzerland. I was one of ten students from the University of Heidelberg and ETH, Zürich who had two weeks of (nearly) free rein to carry out an experiment on the PSI's proton beam line. To put into perspective how crazy that is, ordinarily the going rate for such a privilege is €10,000s per day!
Our goal was to measure a mysterious number called the "Panofsky ratio". The ratio is named after Wolfgang Panofsky, the first to attempt to measure it, and corresponds to the relative likelihood of two events involving particles called protons and pions occurring. It is important because historically its value strongly contradicted the expectation of theoretical physicists. The two processes occur by means of two different forces — one by the weak interaction and the other by QED — and so the ratio was expected to be somewhere near the ratio of strengths of these interactions, give or take a few corrections, which happens to be around 30 (\(\gamma\)s represent photons):
\begin{equation}
\text{Panofsky Ratio } P = \frac{R(p\pi\rightarrow n^* \rightarrow \gamma \gamma)}{R(p \pi \rightarrow n \gamma)} \label{eq:pr} \end{equation}
However, when Panofsky measured the ratio, he got a value of around 0.8. Notwithstanding the fact that this is actually wrong, it's far enough from 30 to prove there is more going on than was expected. Panofsky's measurement is off by about half from the true value of 1.546±0.009 [1]; initially we scoffed that such an eminent scientist could get a measurement so wrong. Well, having all but completed the analysis, how my view has changed! This is not an easy measurement to get right.
Particle Physics: The Basic Principle
In line with almost all particle physics experiments, the measurement of the ratio was essentially a counting experiment. You fire particles at a target, be that a stationary target as here, or an opposite-travelling beam as with the LHC, and try to detect the off-coming particles so you can reconstruct what processes happened.
With reporting on the news about the discovery of the Higgs boson, anyone could be forgiven for thinking, well, if you fire the right projectile at the right target, and watch and wait, eventually you’ll find what you’re looking for.
Of course, nothing is ever quite so simple. Take the LHC. On the right is what an LHC Higgs boson event really looks like. In reality, reconstructed events like this are primarily for show – the actual analysis of these things is all done by specialized software (and hardware?) – indeed, it was one of my responsibilities to co-write this software for our group.
Actually, the real challenge, not just in ours, but in all particle physics experiments, comes from the fact that
interesting events hardly ever occur – it’s rather like searching for a silver needle amidst a stack of silvery grey ones. It’s a stack of needles because, as a necessary artefact of the triggering mechanism described below, most of the events really look very similar; only careful offline analysis can sift out the silver.
The other main difficulty in the Panofsky ratio measurement is that all of the detectable particles are
electrically neutral, not charged, particles. Neutral particles are harder to detect, track, and measure accurately than are charged particles. To measure charged particles, you just stick them in a magnetic field and measure the curvature of their paths; they can literally be observed with the naked eye (as seen in cloud chambers). Neutral particles, however, just pass right through, invisible.
To measure neutral particles, all you can do is hope they will get stuck in your detector and dump all their energy there, that way you can read their energy off; although with more advanced tracking systems, you can actually reconstruct their trajectories and velocities to identify them more precisely. Our set-up was much more modest, but we knew exactly which particles to expect at exactly which energies, so particle identification was already done in the energy measurement.
Data Acquisition
In order to record the data from the experiment, some kind of data acquisition system is needed. For larger experiments, there can be literally millions of read-out channels, all of which must be accessed and their data read off and stored for each interesting event. Because of the extremely high rate of background events (thousands per second in this case), it is neither technologically possible nor desirable to record all of this data.
For example, we had access to seven 8-bit ADCs which ran at 200MHz to read off data – that’s a data throughput rate in the region of 1½ GB/s! Clearly even
we had to whittle this down somehow, but for larger experiments this aspect of the experiment is critical. The usual trick is to employ a triggering mechanism to wake the electronics up when something interesting (might have) happened.
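The quoted throughput is simple arithmetic, which can be sketched as follows (numbers taken from the text above):

```python
# Seven 8-bit ADCs sampling at 200 MHz: raw data rate before any triggering.
n_adc, bytes_per_sample, rate_hz = 7, 1, 200e6
rate_gb_per_s = n_adc * bytes_per_sample * rate_hz / 1e9
# ≈ 1.4 GB/s, i.e. "in the region of 1.5 GB/s" as quoted above
```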
To actually implement a triggering mechanism, we made use of
scintillators, the long black paddles covered in tape. Scintillators work by giving out a signal when a particle passes through; if two or more are properly aligned in time, this allows the use of simple electronics to detect coincident signals, which can tell you immediately something about how interesting the event is, and therefore whether you want to store and analyze it.

How are particles’ energies measured?
The actual energy measurement, the most important part of the whole endeavour, comes from the
calorimeters. We used two different calorimeter types, one of BGO (bismuth germanate) and the other NaI (sodium iodide). [2] Calorimeters measure energy by attempting to stop the incoming particle, sending signals about the electric charge inside to the electronics and data acquisition system. The signals are taken over a specific time interval, which creates an event trace showing how the voltage changed over a few nanoseconds. The area under the signal trace is related to how much energy the incident particle had, hopefully linearly. [3]
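As a sketch of that last step (the function name and the linear calibration constant are illustrative, not taken from the actual analysis code):

```python
def pulse_energy(samples_volts, dt_ns, calibration):
    """Estimate deposited energy from a digitised voltage trace:
    trapezoidal area under the pulse, times a linear calibration constant
    (valid only in the calorimeter's linear response range)."""
    area = sum((a + b) / 2.0 * dt_ns
               for a, b in zip(samples_volts, samples_volts[1:]))
    return calibration * area
```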
I’ll upload an example event trace if I can get my hands on one.
The Panofsky Ratio
Ultimately, we were seeking a spectrum looking something along the lines of the spectrum on the left. The spectrum is a histogram of the area under the signal traces from the calorimeters. From the calorimeters, we obtained energy histograms such as on the right:
Obviously, there is an absolute ton of background which needs to be dealt with!

Removing Background and Recovering the Ideal Plot
Ideally, since we are looking for interactions of pions with protons, we should have used a liquid hydrogen target – essentially a pure sea of protons. Alas, for unspecified safety reasons, we had to pass on the rocket fuel and use something more practical; plastic was the tool of choice.
Plastic, though, being essentially solidified hydrocarbon, contains a lot of carbon which
also interacts with our incoming pions. This is what makes the raw spectrum look so terrible. So in order to see how just the hydrogen acted with the pions, we had to know the effect of carbon by taking runs with a pure carbon target, and statistically subtracting the result from the plot.
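Schematically, the subtraction looks like the following (the scale factor accounting for relative exposure and carbon content is a detail the post doesn't specify):

```python
def subtract_carbon(plastic_counts, carbon_counts, scale):
    """Bin-by-bin statistical subtraction of the scaled carbon-target
    spectrum from the plastic-target spectrum."""
    return [p - scale * c for p, c in zip(plastic_counts, carbon_counts)]
```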
Even after a perfect statistical subtraction, of course, the resulting graph should not show such a sharp peak as in the left/top plot, but rather a Gaussian spread around the peak, as is the case with practically any experimental measurement.
And the Results?
Update: After much graft, the analysis genii of the group deduced a ratio of ~1. Not half bad, in my opinion! Progress is yet to be made on the precise value, and statistical and systematic uncertainties.
And we really don’t know! Apparently this statistical subtraction stuff is hard to do well, and when you don’t have as awesome statistics as you’d like (we didn’t start taking data 24/7 till we had only three days left!), you can’t really afford to get it wrong. The fact is it’s so hard, we still haven’t managed to do it convincingly enough to draw any conclusions. I’ll update when we’ve made some progress.

Possible Improvements
There are a few aspects to our specific experiment which could be improved if it were repeated. Unfortunately, pure liquid hydrogen will forever remain out of bounds, but this was made worse in our case because the absorber, the series of white blocks directly in front of the beam outlet designed to taper the beam momentum (you can kind of see it in the experimental set-up) was also made of plastic. This poisoned our measurement in a way difficult to quantify.
In addition, we struggled initially with incredibly low rates (roughly 0.3Hz) and it wasn’t until the final few days that we changed our set-up in desperation, introducing the never-before-used big blue calorimeter, and had 24/7 data-taking (and very sleepy people). This is why you really should
make sure everything’s set up and functioning before the end of the first week! You’ll thank yourself.

Conclusion
No group in over four years managed to measure the Panofsky ratio with any success. But even with the odds against us from the beginning, we had a decent stab at this experiment, and the outcome was far better than expected!
Whilst our stated aim was to obtain the measurement, in reality the aim was to experience being a part of a spontaneous physics collaboration, learning hands-on what life as an experimental particle physicist is like. It was a fantastic opportunity and I’m personally glad I took it. If you as a student are offered a similar opportunity, don’t let it pass you by.
1. See a recent remeasurement.
2. The reason for two kinds is they respond differently, and it was hoped the NaI could detect the neutrons better than the BGO calorimeters.
3. This is the response with an ideal calorimeter. Ours actually weren’t linear across a wide range of energies, only within a certain range large enough to be acceptable.
In thinking about the superposition principle I ran into the following question: Say we take two known waves:
\begin{align}f(x,t) &= \sin(x-B\cdot t)\\ g(x,t) &= A\cdot x^2 + C\cdot t^2\end{align}
Then, by the superposition principle, their sum should be a solution. But directly applying the wave equation $u_{tt} = v^2 u_{xx}$ to $u = f+g$ gives a propagation velocity $v = \sqrt{\frac{2C-B^2 \sin(x-Bt)}{2A-\sin(x-Bt)}}$.
But this cannot exist for all x and t so the sum cannot be a wave for any $A,B,C$.
Does superposition actually fail, or have I made some other mistake?
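One way to see the obstruction numerically: the ratio $(f+g)_{tt}/(f+g)_{xx}$ would have to equal a single constant $v^2$ for the sum to solve one wave equation, but it varies with $(x,t)$. (The values of $A, B, C$ below are arbitrary, chosen purely for illustration.)

```python
import math

A, B, C = 1.0, 2.0, 3.0  # arbitrary illustrative constants

def second_derivative_ratio(x, t):
    """(f+g)_tt / (f+g)_xx for f = sin(x - B t), g = A x^2 + C t^2."""
    s = math.sin(x - B * t)
    return (2 * C - B * B * s) / (2 * A - s)

# The ratio depends on (x, t), so no single propagation speed v exists:
r0 = second_derivative_ratio(0.0, 0.0)   # = 2C / 2A = 3.0 here
r1 = second_derivative_ratio(1.0, 0.0)
```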
In a 2006 paper Zhang and Zhu propose a model for VIX and VIX Futures based on Heston.
I am struggling to understand how they get equations 6 and 8 (where they define the parameters).
Consider the following Heston dynamics written under the real world measure $\Bbb{P}$ \begin{gather} \frac{dS_t}{S_t} = \mu_t dt + \sqrt{v_t} dW_S^{\Bbb{P}}(t),\ S(0) = S_0 \\ dv_t = \kappa(\theta-v_t)dt + \xi \sqrt{v_t} dW_v^{\Bbb{P}}(t),\ v(0) = v_0 \\ d\langle W_S^\Bbb{P}, W_v^\Bbb{P} \rangle_t = \rho dt \end{gather} In order to be able to use that model to price financial instruments, arbitrage-free pricing theory (APT) tells us that we need to move to an equivalent measure $\Bbb{Q}$ under which discounted asset prices are martingales (or more generally: the value of any self-financing portfolio, when expressed in the risk-free money market account numéraire, should emerge as a $\Bbb{Q}$ martingale).
Because the Heston model is incomplete, there exists infinitely many such measures. Mathematically, these will differ by the drift attributed to the instantaneous variance process i.e.$$ dv_t = \kappa(\theta-v_t)dt - \lambda(t,S_t,v_t) dt + \xi \sqrt{v_t} dW_v^\Bbb{Q}(t) $$where the term $\lambda(t,S_t,v_t)$ is often referred to as the
market price of volatility risk.
In his original 1993 paper, Heston makes a particular assumption regarding the market price of volatility risk, which he takes to be proportional to $v_t$, relying on some economic arguments: $$ \lambda(t,S_t,v_t) = \lambda v_t$$ In that particular case, the dynamics under (Heston's) $\Bbb{Q}$ may be rewritten \begin{gather} \frac{dS_t}{S_t} = (r_t - q_t) dt + \sqrt{v_t} dW_S^{\Bbb{Q}}(t),\ S(0) = S_0 > 0 \\ dv_t = \kappa^*(\theta^*-v_t)dt + \xi \sqrt{v_t} dW_v^{\Bbb{Q}}(t),\ v(0) = v_0 \\ d\langle W_S^\Bbb{Q}, W_v^\Bbb{Q} \rangle_t = \rho dt \end{gather} with \begin{align} \kappa^* = \kappa + \lambda \\ \theta^* = \theta \frac{\kappa}{\kappa + \lambda} \tag{1} \end{align} where $r_t$ (resp. $q_t$) denotes the risk-free rate (resp. equity dividend yield).
For a pure diffusion model, the fair variance strike $\hat{\sigma}^2_T(0)$ of a fresh-start variance swap of maturity $T$ calculated at $t=0$ is defined as $$ \hat{\sigma}_T(0)^2 = \frac{1}{T} \Bbb{E}_0^\Bbb{Q} \left[ \int_0^T d\langle \ln S \rangle_t \right] $$ In the particular case of the Heston model we can further write \begin{align} \hat{\sigma}_T(0)^2 &= \frac{1}{T} \Bbb{E}_0^\Bbb{Q} \left[ \int_0^T v_t dt \right] \tag{2} \\ &= \theta^* + (v_0 - \theta^*) \frac{1-e^{-\kappa^* T}}{\kappa^* T} \end{align} where the second equality follows from $\Bbb{E}_0^\Bbb{Q}[v_t] = \theta^* + (v_0 - \theta^*) e^{-\kappa^* t}$ (take expectations in the SDE for $v_t$ and solve the resulting linear ODE), then integrating over $[0,T]$.
Because the VIX squared is by definition the fair variance strike of an (idealised) variance swap of maturity $T=\tau_0$ equal to 30 days, we then have, under Heston, \begin{align} VIX^2(0) &= \theta^* + (v_0 - \theta^*) \frac{ 1-e^{-\kappa^* \tau_0}}{\kappa^* \tau_0} \\ &= \underbrace{\theta^* \left( 1 - \frac{1-e^{-\kappa^* \tau_0}}{\kappa^* \tau_0} \right)}_{A} + v_0 \underbrace{\frac{1-e^{-\kappa^* \tau_0}}{\kappa^* \tau_0}}_{B} \tag{3} \end{align} which is exactly the equation you mention, with the risk-neutral parameters of the Heston dynamics $(\kappa^*, \theta^*)$ related to the parameters under the real world measure $(\kappa, \theta)$ through $(1)$
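Numerically, equation (3) is straightforward to evaluate; here is a sketch (parameter values in the checks are purely illustrative, and $\tau_0$ is taken as 30/365 in annualised terms):

```python
import math

def heston_vix_squared(v0, kappa, theta, lam, tau=30.0 / 365.0):
    """VIX^2 under Heston per eq. (3), with risk-neutral parameters from
    eq. (1): kappa* = kappa + lambda, theta* = theta * kappa / (kappa + lambda)."""
    kappa_s = kappa + lam
    theta_s = theta * kappa / kappa_s
    B = (1.0 - math.exp(-kappa_s * tau)) / (kappa_s * tau)
    A = theta_s * (1.0 - B)
    return A + v0 * B
```

A useful sanity check: with $\lambda = 0$ and $v_0 = \theta$ the formula collapses to $VIX^2 = \theta$, and for $v_0 \ne \theta^*$ the result is a convex combination of $v_0$ and $\theta^*$ since $B \in (0,1)$.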
The Annals of Statistics, Volume 2, Number 4 (1974), 751-762.

Probability Inequalities and Errors in Classification

Abstract
Let $X$ and $Y$ be two $p \times 1$ random vectors distributed according to a normal distribution with respective mean vectors $\mu$ and $a\mu$ and covariance matrix $\begin{pmatrix}I_p & \rho I_p \\ \rho I_p & I_p\end{pmatrix}.$ Let $S$ be a random $p \times p$ matrix distributed as the Wishart distribution $W_p(I_p, r)$, independently of $X$ and $Y$. For fixed $a, \rho$, and $c$, some sufficient conditions are obtained for which $P\lbrack X'Y < c\rbrack$ and $P\lbrack X'S^{-1}Y < c\rbrack$ increase with $\mu'\mu$. These results are used to show a monotonicity property of the probabilities of correct classification of a class of rules for classifying an observation into one of two normal distributions. For the classification problem, some estimates of the probability of correct classification of the minimum distance rule are studied.
Article information
Source: Ann. Statist., Volume 2, Number 4 (1974), 751-762.
First available in Project Euclid: 12 April 2007
Permanent link: https://projecteuclid.org/euclid.aos/1176342762
Digital Object Identifier: doi:10.1214/aos/1176342762
Mathematical Reviews number (MathSciNet): MR365914
Zentralblatt MATH identifier: 0285.62032
Subjects: Primary: 62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20]. Secondary: 60E05: Distributions: general theory
Citation
Gupta, Somesh Das. Probability Inequalities and Errors in Classification. Ann. Statist. 2 (1974), no. 4, 751--762. doi:10.1214/aos/1176342762. https://projecteuclid.org/euclid.aos/1176342762
Given an injective linear map $T$ between Banach spaces $X$ and $Y$, let
\begin{equation} d(T) = \sup \left \{ \frac{||x||_X}{||Tx||_Y}: x \in X \mbox{ is nonzero } \right\} \cdot ||T||_{\mathrm{op}} \end{equation}
Let \begin{equation} c(X,Y) = \inf \Bigl\{ d(T): T \mbox{ is an injective linear map from } X \mbox{ to }Y \Bigr \}. \end{equation}
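For intuition only: when $X = Y$ is Euclidean $n$-space and $T$ is diagonal, $\sup_x \|x\|/\|Tx\| = 1/\min_i |d_i|$ and $\|T\|_{\mathrm{op}} = \max_i |d_i|$, so $d(T)$ reduces to the condition number. (This toy special case is not the $\ell^1_n \to \mathcal{H}$ setting of the question below; it only illustrates the definition.)

```python
def d_of_diagonal(diag):
    """d(T) for T = diag(d_1, ..., d_n) on Euclidean n-space:
    (sup ||x|| / ||Tx||) * ||T||_op = max|d_i| / min|d_i|."""
    mags = [abs(v) for v in diag]
    return max(mags) / min(mags)
```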
Let $\ell^p_n$ be $\mathbb{R}^n$ with the $p$-norm and let $\mathcal{H}$ be an infinite dimensional Hilbert space. I would like to know if $c(\ell^1_n,\mathcal{H}) = O((\log n)^k)$ for some $k$. Alternatively, is it the case that $c(\ell^p_n,\mathcal{H}) = O_p((\log n)^{k(p)})$ for every $p > 1$?
There is a lot of literature about the Banach-Mazur distance but I have been unable to find information about linear embeddings (rather than isomorphisms). On the other hand, there is a well known result of Bourgain which asserts that a metric space with $n$ points can be embedded in $\mathcal{H}$ with distortion $O(\log n)$, but I don't know if this can be done in a uniform linear way.
Background
I have seen a few variants of this Sum-and-Product puzzle in the past. The premise of these puzzles is as follows
Sam hears the sum of two numbers, Polly the product. The numbers are known to be between m and M.
S: "You don't know the numbers"
P: "That was true, but now I do"
S: "Now I do too"
What are the numbers?
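For concreteness, the three statements can be checked by brute force for small bounds. The following is my own sketch (names are mine, not from the papers); note that, as discussed below, solutions for bounded $M$ can be "phantom" solutions that disappear as $M$ grows.

```python
from collections import defaultdict

def freudenthal_solutions(m, M):
    """Brute-force Freudenthal(m, M): pairs (a, b), m <= a <= b <= M,
    surviving the three statements of the dialogue."""
    pairs = [(a, b) for a in range(m, M + 1) for b in range(a, M + 1)]
    by_sum, by_prod = defaultdict(list), defaultdict(list)
    for a, b in pairs:
        by_sum[a + b].append((a, b))
        by_prod[a * b].append((a, b))
    # Sam: "You don't know the numbers" -- every split of his sum is ambiguous.
    s1 = {s for s, ps in by_sum.items()
          if all(len(by_prod[a * b]) > 1 for a, b in ps)}
    # Polly: "That was true, but now I do" -- her product was ambiguous, and
    # exactly one factorisation has a sum consistent with Sam's claim.
    def polly_knows(prod):
        cands = [p for p in by_prod[prod] if sum(p) in s1]
        return len(by_prod[prod]) > 1 and len(cands) == 1
    # Sam: "Now I do too" -- exactly one split of his sum survives.
    sols = []
    for s in s1:
        surv = [(a, b) for a, b in by_sum[s] if polly_knows(a * b)]
        if len(surv) == 1:
            sols.extend(surv)
    return sorted(sols)
```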
A set of papers from 2006 refer to this as the Freudenthal Problem
Freudenthal(m,M). I have been specifically interested in classifying solutions when $m=3$, and $M$ is unbounded.
Assuming a modified form of Goldbach's conjecture, the authors prove that whether the numbers are known to be distinct or not does not change the solutions when $m=2$ and $m=3$, so I have removed their superscript distinguishing the case.
The authors also give a rather naive algorithm that enumerates solutions ordered by sum. They generate solutions for
Freudenthal(3,*) up to a sum of 50,000, and find that there are 804 stable solutions, and 288 phantom solutions that rely on the presence of an upper bound.
My own findings
I wrote a program that very efficiently generates solutions, also ordered by sum, and improved the highest sum by an order of magnitude overnight.
I then defined the "Freudenthal Partition Function" $F(s)$. The domain of this function is any sum $s$ which allows the first statement by Sam. The value $F(s)$ is the number of options remaining in Sam's mind after Polly's statement.
There is a lot to unpack in this function, so I will go into detail.
Domain of $F$
With $m=3$, the products which allow Polly to immediately deduce the numbers are any of the following:
- odd semiprimes
- 4 times an odd prime (numbers must be 4 and $p$)
- 2 times the square of an odd prime (numbers must be $p$ and $2p$)
For Sam to make his first claim, the sum cannot be a sum of two odd primes, a prime plus 4, or $3p$ for a prime $p$. What remains is the set $S$, which is the domain of $F$.
Assume the Goldbach conjecture, so that all even numbers are eliminated from $S$ outright. Then $s\in S$ iff $s$ is odd, $s-4$ is composite, and $s$ is not of the form $3p$ for a prime $p$. This set can be efficiently generated.
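That characterisation translates directly into code; a minimal sketch (assuming Goldbach as above, so even sums are dropped outright, and using a crude trial-division primality test):

```python
def is_prime(n):
    """Trial-division primality test, adequate for a sketch."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def freudenthal_sums(limit):
    """Sums s < limit allowing Sam's first statement (m = 3, Goldbach assumed):
    s odd, s - 4 composite, and s not of the form 3p for a prime p."""
    return [s for s in range(7, limit, 2)
            if not is_prime(s - 4)
            and not (s % 3 == 0 and is_prime(s // 3))]
```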
Computing $F(s)$
Start with a list of all even integers. Iterate over all pairs $\{(s,a)\mid s \in S, a \in \mathbb{Z}, 3 \le a \le s/2\}$ and place a tally next to $a(s-a)$. For computability's sake, proceed lexicographically by $s$ then $a$.
After this procedure passes $s_0$, all tally counts on integers up to $3s_0-9$ are stable. Define the set $P$ to be all the integers with exactly one tally.
$p \in P$ has the following properties:
$p$ is not uniquely factorable into two divisors in the set $\{z\mid z \in \mathbb{Z}, z\ge 3\}$ (guaranteed since they were reached from $S$).
$p$ has a unique divisor pair $\{d,p/d\}$ such that $d + p/d \in S$.
Equivalently, if Polly was told the number $p$, then the first two lines of the puzzle are satisfiable.
To compute $F(s)$ given $P$, iterate over $\{a \mid 3\le a\le s/2\}$ and count the number of $a$ for which $a(s-a) \in P$.
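Putting the steps above together, the tally procedure and $F$ can be sketched as follows (self-contained and only practical for small $s$; the stability margin implements the "$3s_0-9$" rule, since a sum $s'$ contributes a tally to a product $p$ only when $3(s'-3) \le p$):

```python
from collections import Counter

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def freudenthal_sums(limit):
    # s odd, s - 4 composite, s != 3p (Goldbach assumed, so evens are excluded)
    return [s for s in range(7, limit, 2)
            if not is_prime(s - 4) and not (s % 3 == 0 and is_prime(s // 3))]

def freudenthal_F(s_max):
    """F(s) for s in S up to s_max.  Products reachable from s <= s_max are at
    most s_max^2/4, and sums up to ~s_max^2/12 must be processed before the
    tallies on those products are stable."""
    bound = s_max * s_max // 4
    tally = Counter()
    for s in freudenthal_sums(bound // 3 + 6):
        for a in range(3, s // 2 + 1):
            p = a * (s - a)
            if p <= bound:
                tally[p] += 1
    P = {p for p, t in tally.items() if t == 1}
    return {s: sum(1 for a in range(3, s // 2 + 1) if a * (s - a) in P)
            for s in freudenthal_sums(s_max + 1)}
```

Running this reproduces, for example, the value $F(53)=2$ mentioned later in the post.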
Using $F(s)$
If $F(s)=1$, then Sam, upon hearing Polly's statement, can deduce the numbers. This corresponds to a solution to the puzzle. Other claims I will make about the behavior of $F(s)$ are just conjectures.
If $F(s)=0$, then there seems to always exist some upper bound $M$ such that an integer which had two tallies loses one, and $P$ gets a new element which causes the puzzle to be satisfiable. These correspond exactly to what the authors of the 2006 paper called "phantom solutions". Indeed, for $s<50,000$, there are 804 instances where $F(s)=1$, and 288 instances where $F(s)=0$.
A graph of $F(s)$ is found here for $s$ up to around 260,000. The function has three very distinct branches, corresponding to congruence classes mod 3. The bottom branch, which produces all known solutions, has $s\equiv 2 \pmod{3}$.
The solution pairs themselves
A list of the 3141 smallest pairs ordered by sum can be found here. I list the even number, then the odd number. The same pairs ordered by even number can be found here, and is more illuminating.
A notable oddity is $(4, 137233)$, the only currently-known pair to include the value 4. This was missed entirely by the original papers.
The pairs seem to all be of the form $(4^np,q)$, where $n>0$, $p$ is either 1 or prime, and $q$ is either prime or the product of a small number of primes. All prime factors of $p$ and $q$ are congruent to $1 \pmod{3}$.
Non-solutions and what we can learn from them
For every $s$, there are $F(s)$ pairs that can be considered "candidate solutions." If $F(s)>1$, they are not solutions, but in the lowest branch they share many properties of solutions.
For example, $F(53) = 2$, and the contributing pairs are $(4,49)$ and $(16,37)$. This could have allowed one to see that 4 is viable as one of the two numbers, even without finding the first example of $(4, 137233)$.
Looking at all candidate solutions for $s\equiv 2 \pmod{3}$, we find a very small set of exceptions to the $(4^np,q)$ trend.
The exceptional pairs with sum less than 500,000 are
$(32,27)$, $(128,9)$, $(512,3)$, $(2048,3)$, $(32768,3)$, $(32768,27)$
All are $(2^k,3^l)$ for $k$ odd and $l$ small. They all have $F(s)>1$ for their sum, so they are not solutions, but they lie in the bottom branch of the partition function and allow the puzzle to proceed through its first two statements.
I checked (inefficiently) all pairs $(2^k,3^l)$ for odd $k\le 31$ and $l\le 10$, and found 5 other pairs which allow the first two statements to be satisfied, but in every case $F(s)>1$. It remains open whether there exists a true solution with this form.
Edit: I have a program running to find pairs $(2^k,3^l)$ for which the first two statements can be satisfied, and for each pair, search for a witness to $F(s)>1$. For $k\le 219$ and $l\le 25$, there are 13 pairs with no witness of the form $(4^n,q)$. The much more computationally intensive task of confirming absence of a witness of the form $(4^n p,q)$ is likely intractable if the pair is a solution. Witnesses have been found for the smallest 10 pairs; the first pair with no known witness yet is $(2^{187}, 3)$.
Results for $m\ge 4$
I used the same program to examine other values of the lower bound, with what I believe were the appropriate changes and assumptions. The header comment can help others verify the correctness. I did not find any solutions, in agreement with the conjecture put forth in the papers. Graphs of the corresponding Freudenthal Partition Functions diverge from 0, leaving no branch which could be believed to provide solutions.
So, my question to mathoverflow is: Has there been anyone else thinking about this problem, and do they have results that supplement/contradict/dwarf those I have gathered?
Asymptotic behavior of $F(s)$
I did some asymptotic analysis of $F(s)$. For a given $s$, I determined all the products with at most 3 odd factor sums such that one of those sums is $s$, and what numbers on the order of $s$ must be prime for $F(s)$ to increase by 1. This gives remarkably good fits for the three congruence classes. To reduce noise, I show here a moving average of $F(s)$, averaging the nearest hundred values in the same branch.
Notably, those $s \equiv 2 \pmod{3}$ have no growth at order $s/\log(s)^3$, always requiring an additional number to be prime and resulting in asymptotic growth at order $s/\log(s)^4$. This keeps $F(s)$ very near zero for those values we have seen, but means that $F(s)$ still grows without bound for these $s$.
This has potential to imply that there are only finitely many solutions, although such a claim is still far off. I don't know how best to determine $\liminf\limits_{s \to \infty} F(s)$.
Arithmetic progressions $s_a$ with $F(s_a)$ exceptionally low
If one rephrases the argument which gave the asymptotic formula in terms of the probability that $F(s)=1$, it predicts that there are only finitely many solutions, and that over 95% have been seen by the point $s=100,000$. We know at least the latter to be false, so there are systematic failures of the argument.
The leading order contribution to the lowest branch of $F(s)$ counts the instances where three things hold:
- $s=4^kp+q$ for primes $p$ and $q$ with $k\ge 2$
- $s^\prime=4^k+pq \not\in S$, that is, $4^k+pq-4$ is prime.
- $s^{\prime\prime}=4^kq+p \not\in S$, that is, $4^kq+p-4$ is prime.
Note that $s^{\prime}-s=(p-1)(q-4^k)$ and $s^{\prime\prime}-s=(4^k-1)(q-p)$. The factor $4^k-1$ is independent of $p$ and $q$. When $s-4$ shares a factor with $4^k-1$, that common factor also divides $s^{\prime\prime}-4$, so $s^{\prime\prime}-4$ is not prime for any $p,q$.
Since $s\equiv 2\pmod{3}$ is necessary, $s-4$ cannot be a multiple of 3. If we define $k_m(s)$ to be the smallest $k\ge 2$ such that $(4^k-1,s-4)=1$, then we can refine the first bullet point above:
$s=4^kp+q$ for primes $p$ and $q$ with $k\ge k_m(s)$
We then refine our prediction about the asymptotic behavior of the low branch of $F(s)$:
$F(s) \sim s\log(s)^{-4}4^{-k_m(s)}$
We may define $g(k)$ to be the smallest value not divisible by 3 and sharing a factor with $4^n-1$ for all $2\le n\le k$, so that $k_m(s)\ge k+1$ whenever $s-4$ is a multiple of $g(k)$; then $g(2)=5$, $g(3)=35$, $g(5)=385$, etc. Each defines an arithmetic progression with exceptionally low values of $F(s)$. Specifically,
$s_{a,k}=\left\{\begin{array}{ll} (6a+1)g(k)+4 & g(k)\equiv 1\pmod{6}\\ (6a+5)g(k)+4 & g(k)\equiv 5\pmod{6} \end{array} \right.$
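Both $k_m$ and $g$ are easy to compute by brute force for small $k$ (a sketch; $g$ grows quickly, so this naive search is only feasible for the first few values):

```python
from math import gcd

def k_m(s):
    """Smallest k >= 2 with gcd(4^k - 1, s - 4) = 1."""
    k = 2
    while gcd(4 ** k - 1, s - 4) != 1:
        k += 1
    return k

def g(k):
    """Smallest m not divisible by 3 sharing a factor with 4^n - 1
    for every 2 <= n <= k."""
    targets = [4 ** n - 1 for n in range(2, k + 1)]
    m = 2
    while m % 3 == 0 or any(gcd(m, t) == 1 for t in targets):
        m += 1
    return m
```

This reproduces the values quoted above: $g(2)=5$, $g(3)=35$, $g(5)=385$ (and $g(4)=g(3)$, as expected for composite $k$).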
To determine whether $F(s)$ takes the value 1 often, we must know the growth pattern of $g(k)$.
$F(s_{0,k})\sim g(k)\log(g(k))^{-4}4^{-k}$
This appears to grow unbounded, though erratically. It returns to values near 1 at $k=30$ and $k=60$ before jumping upward. I conjecture that $\liminf\limits_{s \to \infty} F(s) = \liminf\limits_{k \to \infty} F(s_{0,k})$.
I have checked the value of $F(s_{a,30})$ for $a \le 500$ and found 4 probable zeros and 7 probable solutions, the largest of which is $(4^{31}*459703, s_{34,30}-4^{31}*459703)$. This might be the largest solution. (Update: I did finish the check of the less-probable cases, completing a proof that this pair is a solution. The proof is found here).
To determine $g(k)$, we must make a statement about the smallest prime factor of $(4^k-1)/3$.
If $k$ is composite, then the smallest prime factor of $(4^k-1)/3$ is the smallest prime factor of $(4^d-1)/3$ for some divisor $d$ of $k$. This factor is present in $g(q)$ for all $q\ge d$, so if $k$ is composite we must have $g(k)=g(k-1)$.
If $k$ and $2k+1$ are both prime, then given the recurrence $r_0=0$, $r_{n+1}=4r_n+3 \pmod{2k+1}$, we have $r_k=0$, and so $4^k-1$ is divisible by $2k+1$.
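That divisibility is immediate from Fermat's little theorem: the recurrence satisfies $r_n = 4^n - 1 \bmod (2k+1)$, and when $2k+1$ is prime, $2^{2k} = 2^{(2k+1)-1} \equiv 1 \pmod{2k+1}$; primality of $k$ itself is not needed for this step, only for the smallest-factor discussion. A quick check:

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Whenever 2k+1 is prime, 2k+1 divides 4^k - 1 = 2^(2k) - 1 (Fermat).
for k in range(2, 500):
    if is_prime(2 * k + 1):
        assert pow(4, k, 2 * k + 1) == 1
```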
If $k$ is prime and there is not a value of the form $ck+1$ for which the recurrence relation above holds, then the smallest prime factor of $(4^k-1)/3$ might be $(2^k+1)/3$. This happens for $k=7,13,17,19,31,61,...$. At the moment I don't have a full understanding of why primality of both $k$ and $6k+1$ is able to predict a factor of $6k+1$ in $4^k-1$ when $k=37$, but not when $k=17$.
The jumps upward in $g(k)$ when it must now include a factor of $(2^k+1)/3$ have increasing magnitude and seem to indicate unbounded growth of the expression $g(k)\log(g(k))^{-4}4^{-k}$, although proving this is beyond my ability.
Unbounded growth of this expression would imply with high certainty that there are finitely many solutions to the Freudenthal problem
Freudenthal(3,*). Is there an approach to proving this?
For many planar graphs of bounded degree (binary tree, lattice, cycle) the (1/4)-mixing time and the cover time are equal, up to log-factors. Is this always the case?
The answer turns out to be **no**. A counterexample, kindly suggested to me by Balázs Gerencsér, is a $\sqrt{N}$ by $\sqrt{N}$ grid with a path of length $\sqrt{N}$ glued to it. One can show that:

- the mixing time of a simple random walk on this graph is $O(N\log N)$ (by combining Cheeger's inequality and the Spielman-Teng [1] bound on the spectral gap);
- the cover time of a simple random walk on this graph is $\Omega(N^{3/2})$ (by combining Matthews' method with an expression of the hitting time using effective resistance).
[1] Spielman, Daniel A., and Shang-Hua Teng. "Spectral partitioning works: Planar graphs and finite element meshes." Linear Algebra and its Applications 421.2-3 (2007): 284-305.
On the Nature of Correlation between Neutrino-SM CP Phase and Unitarity Violating New Physics Parameters

Abstract
To perform a leptonic unitarity test, understanding of the system of three-flavor active neutrinos with non-unitary mixing is required, in particular its evolution in matter and the general features of parameter correlations. In this paper, we discuss the nature of the correlation between the $\nu$SM CP phase $\delta$ and the $\alpha$ parameters, where the $\alpha$'s quantify the effect of non-unitarity. A question arose on whether it is real and physical when the same authors uncovered, in a previous paper, the $\delta - \alpha$ parameter correlation of the form $[ e^{- i \delta } \bar{\alpha}_{\mu e}, e^{ - i \delta} \bar{\alpha}_{\tau e}, \bar{\alpha}_{\tau \mu} ]$ using the PDG convention of the flavor mixing matrix $U_{\text{\tiny MNS}}$. This analysis utilizes a perturbative framework which is valid at around the atmospheric MSW enhancement. In fact, the phase correlation depends on the convention of $U_{\text{\tiny MNS}}$, and the existence of the SOL convention ($e^{ \pm i \delta}$ attached to $s_{12}$) in which the correlation is absent triggered a doubt that it may not be physical. We resolve the controversy of whether the phase correlation is physical by examining the correlation in a completely different kinematical phase space, at around the solar-scale enhancement. It reveals a dynamical $\delta-$(blob of $\alpha$ parameters) correlation in the SOL convention which prevails in the other conventions of $U_{\text{\tiny MNS}}$. It indicates that the phase correlation seen in this and the previous papers is physical and cannot be an artifact of the $U_{\text{\tiny MNS}}$ convention.
Author affiliations: Northwestern U.; Virginia Tech.
Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
OSTI Identifier: 1561545
Report Number(s): arXiv:1908.04855; NUHEP-TH/19-09; FERMILAB-PUB-19-397-T; oai:inspirehep.net:1749570
DOE Contract Number: AC02-07CH11359
Resource Type: Journal Article
Country of Publication: United States
Language: English
Subject: 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Citation
Martinez-Soler, Ivan, and Minakata, Hisakazu. "On the Nature of Correlation between Neutrino-SM CP Phase and Unitarity Violating New Physics Parameters." United States: N. p., 2019. Web. https://www.osti.gov/servlets/purl/1561545.
"In presence of heteroscedasticity, OLS estimators are unbiased but inefficient"
Showing the *inefficiency* part is relatively easy. Some authors have explained this with the help of the new variance of the least-squares estimator. However, I was asked to show the same thing using the variance of the weighted least squares estimator ($\hat{\beta^*}$). I proceeded in the following way:
Consider the model, $Y_t=\alpha+\beta X_t+u_t$ where $u_t$'s are heteroscedastic.
Suppose ${\sigma _u}^2$ be the constant variance of $u_t$'s under the homoscedastic assumption.
Let $Var(u_t)={\sigma _t}^2$ be the variance of the disturbances under the heteroscedastic assumption.
In Particular, suppose ${\sigma _t}^2=k_t {\sigma _u}^2$, $k_t$'s being some non stochastic constant weights.
Now, consider the above model in deviation form
$y_t=\beta x_t +u_t$, where $y_t=Y_t-E(Y)$, $x_t=X_t-E(X)$
$\Rightarrow \frac{y_t}{k_t}=\beta \frac{x_t}{k_t}+\frac{u_t}{k_t}$
$\Rightarrow \frac{y_t}{k_t}=\beta \frac{x_t}{k_t}+v_t$
where $v_t$ has constant variance ${\sigma _u}^2$
Now, the weighted least square estimator is $\hat{\beta^*}=\frac{\sum \frac{y_t}{k_t}\frac{x_t}{k_t}}{\sum \frac{{x_t}^2}{{k_t}^2}}$
which would eventually give us $Var(\hat{\beta^*})=\frac{{\sigma _u}^2}{\sum \frac{{x_t}^2}{{k_t}^2}}$
Now, under the heteroscedastic assumption, we have $Var(\hat{\beta})=\frac{\sum {x_t}^2{\sigma_t}^2}{(\sum {x_t}^2)^2}$
Now, $$\frac{Var(\hat{\beta^*})}{Var(\hat{\beta})}=\frac{{\sigma _u}^2}{\sum \frac{{x_t}^2}{{k_t}^2}}\times \frac{(\sum {x_t}^2)^2}{\sum {x_t}^2{\sigma_t}^2}$$
$$\Rightarrow \frac{Var(\hat{\beta^*})}{Var(\hat{\beta})}=\frac{{\sigma _u}^2}{\sum \frac{{x_t}^2}{{k_t}^2}}\times \frac{(\sum {x_t}^2)^2}{\sum {x_t}^2{k_t\sigma_u}^2}$$ $$\Rightarrow \frac{Var(\hat{\beta^*})}{Var(\hat{\beta})}=\frac{(\sum {x_t}^2)^2} {\sum \frac{{x_t}^2}{{k_t}^2} \sum {x_t}^2{k_t}}$$
However, before I could proceed further, my instructor asked me to check it again. He was insisting that I should get something like $\frac{\left(\sum a_t b_t\right)^2}{\sum {a_t}^2 \sum {b_t}^2}$, so that I can use the Cauchy–Schwarz inequality, resulting in $Var(\hat{\beta^*})<Var(\hat{\beta})$. This would eventually prove the *inefficiency* part.
Since I am not getting that form, I think I have made mistakes. I would be happy if somebody could point them out.
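As a sanity check on whichever algebra is right, a small Monte-Carlo sketch can compare the two estimators directly. This is my own illustration with made-up weights $k_t$; note it uses the textbook GLS weighting (dividing the equation by $\sqrt{k_t}$, so that $v_t = u_t/\sqrt{k_t}$ has constant variance $\sigma_u^2$), which may differ from the transformation written above:

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 50, 2000
x = rng.uniform(1, 10, T)
x = x - x.mean()                  # deviation form
k = x**2 + 1                      # assumed heteroscedasticity weights k_t
sigma_u, beta = 1.0, 2.0

ols, wls = [], []
for _ in range(reps):
    u = rng.normal(0, sigma_u * np.sqrt(k))   # Var(u_t) = k_t * sigma_u^2
    y = beta * x + u
    ols.append((x @ y) / (x @ x))             # OLS slope
    w = 1 / k                                  # GLS/WLS weights 1/k_t
    wls.append(((w * x) @ y) / ((w * x) @ x))  # WLS slope

print(np.var(ols), np.var(wls))  # WLS sampling variance should be smaller
```

Both estimators average to $\beta$ (unbiasedness), but the OLS sampling variance is visibly larger, which is the inefficiency claim.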
The $abc$-conjecture states that if $a,b,c$ are positive, relatively prime integers satisfying $a+b=c$, then the product of the primes dividing $abc$ (the radical of $abc$) is $\gg_\varepsilon c^{1-\varepsilon}$ for every $\varepsilon>0$.
We know several examples of $abc$-triples that refute the stronger assertion that the radical of $abc$ is $\gg c$. One example I have seen involves taking $a=1$, $b=2^n-1$, and $c=2^n$, so that the radical of $abc$ is twice the radical of $2^n-1$. By taking $n$ highly composite - say
the least common multiple of the first $k$ integers - one forces $2^n-1$ to be divisible by lots of squares of primes (those not exceeding $k$, in this case), which implies that the radical of $2^n-1$ is $\ll 2^n/n$. That is, the radical of $abc$ is $\ll c/\log c$.
I'd like to cite this example in a paper I'm writing. Can someone tell me where to find it in the literature? I'd love the original citation, but even an accessible source that explicitly works out the upper bound $\ll c/\log c$ for the radical would suffice.
(Note that I don't need to be pointed to other examples of good $abc$-triples. I've put a couple of phrases above in boldface to emphasize the specific example I'm hoping to cite.) |
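(For readers who want to see the construction numerically: here is a throwaway script of my own, not taken from any paper, computing the radical for $n=\operatorname{lcm}(1,\dots,6)=60$.)

```python
from functools import reduce
from math import gcd

def radical(n):
    """Product of the distinct primes dividing n (trial division)."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        rad *= n
    return rad

k = 6
n = reduce(lambda a, b: a * b // gcd(a, b), range(1, k + 1))  # lcm(1..6) = 60
a, b = 1, 2**n - 1
c = 2**n
assert gcd(a * b, c) == 1 and a + b == c
r = 2 * radical(b)   # rad(abc) = 2 * rad(2^n - 1), since rad(a)=1, rad(c)=2
print(n, r, c)       # rad(abc) is noticeably smaller than c
```

(This runs quickly because every prime factor of $2^{60}-1$ is small, which is exactly the point of choosing $n$ highly composite.)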
I'm trying to fit together my understanding of quantum mechanics and quantum field theory, given my limited maths education.
In quantum mechanics we have a time displacement operator and a space displacement operator, which are respectively:
[itex]
\hat{T}(t) = e^{-i\hat{H}t}
[/itex]
[itex]
\hat{D}(\underline{x}) = e^{-i\hat{\underline{p}}\cdot\underline{x}}
[/itex]
In quantum field theory there is a spacetime displacement operator:
[itex]
\hat{U}(a_{\mu}) = e^{-i\hat{P}^{\mu}a_{\mu}}
[/itex]
So as I understand this can be written out as:
[itex]
\hat{U}(a_{\mu}) = e^{-i\hat{P}^{0}a_{0}-i\hat{P}^{j}a_{j}}
[/itex]
Or, depending on the metric convention:
[itex]
\hat{U}(a^{\mu}) = e^{-i\hat{P}^{0}a^{0}+i\hat{P}^{j}a^{j}}
[/itex] , or:
[itex]
\hat{U}(a^{\mu}) = e^{+i\hat{P}^{0}a^{0}-i\hat{P}^{j}a^{j}}
[/itex]
Now, while the first expression is in agreement with what I know from quantum mechanics, the latter two have sign differences. What's also surprising is that there are sign ambiguities depending on the convention of the metric - as I understand it, depending on the convention, either time is displaced forwards and space backwards, or time backwards and space forwards.
I'm guessing that for some reason we should only take the first expression, but I don't understand why, and what is the significance of the latter two? Why are t and x in the quantum mechanics formulae for D and T necessarily covariant? Is it that when we multiply vectors like in these formulae - one quantity must be covariant, and the other contravariant? |
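For reference, here is the index-lowering bookkeeping that (I believe) produces the two sign patterns - my own working, not part of the original post:

```latex
% Mostly-minus signature \eta = diag(+,-,-,-):
%   a_\mu = \eta_{\mu\nu} a^\nu  \Rightarrow  a_0 = a^0,\; a_j = -a^j
\begin{align}
  \hat{P}^{\mu} a_{\mu}
    &= \hat{P}^{0} a_{0} + \hat{P}^{j} a_{j}
     = \hat{P}^{0} a^{0} - \hat{P}^{j} a^{j},
  & \hat{U} &= e^{-i\hat{P}^{0} a^{0} + i\hat{P}^{j} a^{j}} \\
% Mostly-plus signature \eta = diag(-,+,+,+):
%   a_0 = -a^0,\; a_j = a^j
  \hat{P}^{\mu} a_{\mu}
    &= -\hat{P}^{0} a^{0} + \hat{P}^{j} a^{j},
  & \hat{U} &= e^{+i\hat{P}^{0} a^{0} - i\hat{P}^{j} a^{j}}
\end{align}
```

In both conventions the contraction $\hat{P}^{\mu} a_{\mu}$ is the same scalar; only its expression in terms of upper-index components changes.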
Solver interfaces are implemented as "plug-ins". A new plugin [2] allows us to run solvers on NEOS [3]. This gives us access to a number of powerful solvers without the need to install anything locally.
Linear models
To illustrate how we can express optimization problems in ROI, let's try a very simple transportation model [4]: \[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align}\min\>&\sum_{i,j} c_{i,j} x_{i,j}\\&\sum_j x_{i,j} \le a_i\>\>\forall i\\&\sum_i x_{i,j} \ge b_j\>\>\forall j\\&x_{i,j}\ge 0\end{align}}\]
The GAMS representation is:
Note that all vectors and matrices have strings as indices. The ordering does not matter (like in a relational database table). Now let's do the same for ROI.
I just used vectors with integer indices: i.e. the position determines the meaning of a number (in other words: the ordering is important). We could have used vectors/matrices with row and column names, although I am not sure if this would make things much more readable.
The results look like:
When reading the GAMS code, you can tell this is about a transportation model. The R code is much more obscure; it is close to write-only code. I don't understand how you can implement large and/or complex models this way.
The R code builds up a "large" matrix \(A\). For this tiny example it is small, but in general it is very large: in practice way too large to handle for even medium-sized models. A model with 10,000 equations and 10,000 variables (not very large by today's standards) leads to a 10,000 x 10,000 matrix with 100 million entries! This type of matrix-based interface essentially brings us back to the days of matrix generators [7] (although those paid attention to sparsity). This technology is largely obsolete.
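To make the matrix-generator point concrete outside of R, here is a sketch of the same dense setup in Python/scipy. The data is the classic two-plant, three-market transportation instance (an assumption on my part - the data used in [4] is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

# Classic transportation data (assumed): 2 plants, 3 markets
supply = np.array([350, 600])              # a_i
demand = np.array([325, 300, 275])         # b_j
cost = np.array([[0.225, 0.153, 0.162],
                 [0.225, 0.162, 0.126]])   # c_{i,j}

m, n = cost.shape
c = cost.ravel()                           # variables x_{i,j}, row-major

# Build the dense constraint matrix explicitly -- exactly the
# "matrix generator" style criticized above.
A_supply = np.kron(np.eye(m), np.ones(n))    # sum_j x_{i,j} <= a_i
A_demand = -np.kron(np.ones(m), np.eye(n))   # sum_i x_{i,j} >= b_j
A_ub = np.vstack([A_supply, A_demand])
b_ub = np.concatenate([supply, -demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(res.fun, res.x.reshape(m, n))
```

Even for this toy instance, the structural information (which row is a supply constraint, which a demand constraint) lives only in how `A_ub` was assembled - the write-only style this post complains about.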
NEOS
For some solvers you need to provide an e-mail address:
In this case, add email="...".
Behind the scenes
All structure is lost: we are just modeling:\[\begin{align} \min \>& c^Tx \\ & A_1 x \le b_1 \\& A_2 x \ge b_2 \end{align}\]
Quadratic models
This plugin also allows quadratic models (but not general nonlinear models). In [2] a trivial non-convex QP model is presented: \[\begin{align}\max\>&F(P_F - 150) + C(P_C - 100) \\ & 2 F + C \le 500\\ & 2F + 3C \le 800 \\ & F \le 490 - P_F \\ & C \le 640 - 2P_C\\ & F, C, P_F, P_C\ge 0\end{align}\] ROI does not understand mathematics, so we have to pass this on in terms of matrices:
Unfortunately, this is not really close to the mathematical model and makes the problem setup difficult to read and understand.
In [2] a table is presented with some of the supported solvers:
Linear models: OMPR
For linear models, we can use the OMPR package [5,6] on top of ROI. This will give us more readable models. Here we implement the above transportation model. The model can look like:
This code at least has some resemblance to the mathematical model. The solution looks like:
We can clean up the solution a bit, so we get a more meaningful solution report:
But... in many models we do not want to use all combinations x[i,j]: we only use a subset. An example is a network of nodes and arcs. The arcs can be represented by x[i,j], indicating the flow from node i to node j. Of course, if we have a sparse network, only a few of all (i,j) combinations are allowed. For a one-dimensional variable x[i], we can do:
This looks great. We see x[2] is not included in the variables. Now let's do the same thing for a two-dimensional variable x[i,j]:
> n <- 3
> ok <- matrix(c(0,1,1, 0,0,1, 0,0,0),nrow=3,ncol=3,byrow=T)
> ok
[,1] [,2] [,3]
[1,] 0 1 1
[2,] 0 0 1
[3,] 0 0 0
> (sol <- MIPModel() %>% + add_variable(x[i,j], i=1:n, j=1:n, ok[i,j]==1) + )
Error in validate_quantifier_candidates(candidates, zero_vars_msg) :
only_integer_candidates are not all TRUE
Something goes wrong here. The error message is not very informative: I have no clue what it means. I also have no idea how to fix this.
References
1. Many Solvers, One Interface, ROI, the R Optimization Infrastructure Package, https://www.r-project.org/conferences/useR-2010/slides/Theussl+Hornik+Meyer.pdf
2. Ronald Hochreiter and Florian Schwendinger, ROI Plug-in NEOS, https://cran.r-project.org/web/packages/ROI.plugin.neos/vignettes/ROI.plugin.neos_Introduction.pdf
3. NEOS Server: State-of-the-Art Solvers for Numerical Optimization, https://neos-server.org/neos/
4. Matlab vs GAMS: linear programming, http://yetanothermathprogrammingconsultant.blogspot.com/2016/09/matlab-vs-gams-linear-programming.html
5. Mixed Integer Linear Programming in R, https://dirkschumacher.github.io/ompr/
6. MIP model in R using the OMPR package, http://yetanothermathprogrammingconsultant.blogspot.com/2017/04/mip-model-in-r-using-ompr-package.html
7. Robert Fourer, Modeling Languages versus Matrix Generators for Linear Programming, ACM Transactions on Mathematical Software, vol. 9, pp. 143-183, 1983.
TL;DR Your Maxwell–Boltzmann diagram up there is not sufficient to describe the variation of rate with $E_\mathrm{a}$. Simply evaluating the shaded area alone does not reproduce the exponential part of the rate constant correctly, and therefore the shaded area should not be taken as a quantitative measure of the rate (only a qualitative one).
There is a subtle issue with the way you've presented your drawing. However, we'll come to that slightly later. First, let's establish that the "proportion of molecules with sufficient energy to react" is given by
$$P(\varepsilon) = \exp \left(-\frac{\varepsilon}{kT}\right) \tag{1}$$
Therefore, for a reaction $\ce{X <=> Y}$ with uncatalysed forward activation energy $E_\mathrm{f}$ and uncatalysed backward activation energy $E_\mathrm{b}$, the rate constants are given by
$$k_\mathrm{f,uncat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f}}{kT}\right) \tag{2} $$
$$k_\mathrm{b,uncat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b}}{kT}\right) \tag{3} $$
The equilibrium constant of this reaction is given by
$$K_\mathrm{uncat} = \frac{k_\mathrm{f,uncat}}{k_\mathrm{b,uncat}} = \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{4}$$
As you have noted, the change in activation energy due to the catalyst is the same. I would be a bit careful with using "$\mathrm{d}E$" as the notation for this, since $\mathrm{d}$ implies an infinitesimal change, and if the change is infinitesimal, your catalyst isn't much of a catalyst. So, I'm going to use $\Delta E$. We then have
$$k_\mathrm{f,cat} = A_\mathrm{f} \exp \left(-\frac{E_\mathrm{f} - \Delta E}{kT}\right) \tag{5} $$
$$k_\mathrm{b,cat} = A_\mathrm{b} \exp \left(-\frac{E_\mathrm{b} - \Delta E}{kT}\right) \tag{6} $$
and the new equilibrium constant is
$$\begin{align}K_\mathrm{cat} = \frac{k_\mathrm{f,cat}}{k_\mathrm{b,cat}} &= \frac{A_\mathrm{f}\exp[-(E_\mathrm{f} - \Delta E)/kT]}{A_\mathrm{b}\exp[-(E_\mathrm{b} - \Delta E)/kT]} \tag{7} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \frac{\exp(\Delta E/kT)}{\exp(\Delta E/kT)} \tag{8} \\[0.2cm]&= \frac{A_\mathrm{f}\exp(-E_\mathrm{f}/kT)}{A_\mathrm{b}\exp(-E_\mathrm{b}/kT)} \tag{9}\end{align}$$
Equations $(9)$ and $(4)$ are the same, so there is no change in the equilibrium constant.
The question then arises as to how eq. $(1)$ is obtained. The simplest way is to invoke a Boltzmann distribution, which almost by definition gives the desired form. However, since you have a Maxwell–Boltzmann curve, I guess I should talk about it a bit more. The fraction of molecules with energy $E_\mathrm{a}$ or greater is simply the shaded area under the curve, i.e. one can obtain it by integrating the curve over the desired range.
$$P(\varepsilon) = \int_{E_\mathrm{a}}^\infty f(\varepsilon)\,\mathrm{d}\varepsilon \tag{10}$$
where the Maxwell–Boltzmann distribution of energies is given by (see Wikipedia)
$$f(\varepsilon) = \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \tag{11}$$
At first glance, we would expect this to be directly proportional to the exponential part of the rate constant, i.e. $\exp(-E_\mathrm{a}/kT)$. Alas, it is not that simple. If you try to work out the integral
$$\int_{E_\mathrm{a}}^{\infty} \frac{2}{\sqrt{\pi}}\left(\frac{1}{kT}\right)^{3/2} \sqrt{\varepsilon} \exp\left(-\frac{\varepsilon}{kT}\right) \,\mathrm{d}\varepsilon \tag{12}$$
you don't get anything close to the form of $\exp(-E_\mathrm{a}/kT)$. Instead, you get some "error function" rubbish, and some nasty square roots and exponentials. (You can use WolframAlpha to verify this.)
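To see this concretely, here is a quick numerical check (my own script, standard library only). Integral $(12)$ evaluates to $\operatorname{erfc}(\sqrt{a}) + (2/\sqrt{\pi})\sqrt{a}\,e^{-a}$ with $a = E_\mathrm{a}/kT$, and its ratio to $e^{-a}$ is not constant:

```python
from math import erfc, exp, pi, sqrt

def frac_above(a):
    """Fraction of molecules with energy >= a (in units of kT),
    closed form of integral (12)."""
    return erfc(sqrt(a)) + (2 / sqrt(pi)) * sqrt(a) * exp(-a)

def frac_above_numeric(a, upper=60.0, steps=200000):
    """Crude midpoint-rule integration of (12), as a sanity check."""
    h = (upper - a) / steps
    s = 0.0
    for k in range(steps):
        e = a + (k + 0.5) * h
        s += (2 / sqrt(pi)) * sqrt(e) * exp(-e) * h
    return s

for a in (5.0, 10.0):
    print(a, frac_above(a), frac_above(a) / exp(-a))
# The ratio to exp(-a) is NOT constant, so the shaded area alone
# cannot reproduce the Arrhenius exponential.
```

The ratio grows with $a$, which is exactly why the extra $v_\mathrm{rel}$ and $\sigma$ factors are needed before the clean exponential emerges.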
Why is this so? Well, it turns out that there are other terms that also depend on $\varepsilon$ and therefore need to go inside that integral (they aren't constants and can't be taken out).
The simplest example is that faster molecules tend to collide more often, so even though the right-hand tail of the diagram seems to contribute very little to the "proportion of molecules with sufficient energy", it actually contributes more significantly to the overall
rate because these molecules collide more often. In collision theory this is described using the "relative velocity" of the particles $v_\mathrm{rel}$.
There is also another complication: in the Maxwell–Boltzmann distribution, the direction of the particles is not accounted for. (For more insight please refer to Levine, *Physical Chemistry*, 6th ed., p 467.) Therefore, there has to be yet another term that takes into account the direction of movement of the particles. The idea is that a head-on collision between two molecules is more likely to overcome the activation barrier than is a $90^\circ$ collision. The term that compensates for this is the "collision cross-section" $\sigma$.
If you go through the maths (and I don't really intend to type it out here, it's rather long, but I will give some references) then you will find that at the end you will recover the form $\exp(-\varepsilon/kT)$. Once you have arrived at this, it's very straightforward to see that the increases in rate of both the forward and backward reaction cancel each other out.
Now, as for the promised references, Pilling and Seakins's *Reaction Kinetics*, pp 61-2, has a short outline of the proof. Atkins's *Physical Chemistry*, 10th ed., has a slightly longer proof on pp 883-4.
Let $S_n$ be the symmetric group. Let $s_i$ denote the adjacent transposition $(i \ i+1)$. For any permutation $w\in S_n$, an expression $w=s_{i_1}s_{i_2}\cdots s_{i_p}$ of minimal possible length is called a reduced decomposition of $w$, where $p=\ell(w)$, the length of $w$ (number of inversions). For any function $f=f(x_1,\ldots,x_n)$ and $w\in S_n$, define $wf:=f(x_{w^{-1}(1)},\ldots,x_{w^{-1}(n)})$ and define the divided difference operator $\partial_i$ via $\partial_i f:=\frac{f-s_if}{x_i-x_{i+1}}$.
The
Schubert polynomials $\mathfrak{G}_w$ are defined as follows: $\mathfrak{G}_w:=\partial_{w^{-1}w_0}x^\delta$, where $w_0$ is the longest element of $S_n$, given by $w_0(i)=n+1-i$ and $x^\delta:=x_1^{n-1}x_2^{n-2}\cdots x_{n-1}$. There are correspondingly double Schubert polynomials $\mathfrak{G}(x,y)$, which are defined through $\mathfrak{G}_w(x,y)=\partial_{w^{-1}w_0}\prod_{i+j\leq n}(x_i-y_j)$.
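As a sanity check on these definitions, here is a small sympy sketch of my own (for $S_3$ only) applying divided differences to $x^\delta = x_1^2 x_2$ and recovering the well-known Schubert polynomials:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")

def divided_difference(f, i):
    """∂_i f = (f - s_i f) / (x_i - x_{i+1}), for i = 1, 2 (n = 3)."""
    xs = [x1, x2, x3]
    swapped = f.subs({xs[i - 1]: xs[i], xs[i]: xs[i - 1]}, simultaneous=True)
    return sp.cancel((f - swapped) / (xs[i - 1] - xs[i]))

d = divided_difference
xdelta = x1**2 * x2          # x^delta for n = 3; G_{321} = x1^2 x2 itself

assert sp.expand(d(xdelta, 1)) == x1 * x2              # G_{231}
assert sp.expand(d(xdelta, 2)) == x1**2                # G_{312}
assert sp.expand(d(d(xdelta, 1), 2)) == x1             # G_{213}
assert sp.expand(d(d(xdelta, 2), 1)) == x1 + x2        # G_{132}
assert sp.expand(d(d(d(xdelta, 1), 2), 1)) == 1        # G_{id}
print("all S_3 Schubert polynomials recovered")
```

(The operator order used here is the one that makes the $S_3$ values come out right; I am not asserting a convention beyond that.)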
On the other hand,
quantum Schubert polynomials $\mathfrak{G}^q_w$ are defined by Fomin, Gelfand, and Postnikov in this paper (see Theorem 1.1), and by Kirillov in this paper as well. Roughly speaking, one expands the usual Schubert polynomials in terms of elementary polynomials and then replaces each elementary polynomial with a deformed quantum elementary polynomial. See (1.3) and (1.4) of the above link for further details. There are correspondingly double quantum Schubert polynomials $\mathfrak{G}^q_w(x,y)$.
Here is slightly more background: In their paper on the nil-Coxeter algebra and Schubert polynomials, Fomin and Stanley present a beautiful construction of Schubert polynomials using operators $A_i(x)$ defined on the so-called nil-Coxeter algebra. One first defines the nil-Coxeter algebra via generators $u_i$ subject to the relations
$$u_i^2=0$$ $$u_iu_j=u_ju_i, \ \ \ |i-j|\geq 2$$ $$u_iu_{i+1}u_i=u_{i+1}u_iu_{i+1}$$
and then the operators $A_i(x):=(I+xu_{n-1})(I+xu_{n-2})\cdots (I+xu_i)$, from which one can show that the coefficients $G_w(x_1,\ldots,x_n)$ in the expansion $A_1(x_1)A_2(x_2)\cdots A_{n-1}(x_{n-1})=\sum_w G_w(x_1,\ldots,x_n)\,w$ correspond naturally to the usual Schubert polynomials above when one projects onto $w$.
FIRST QUESTION: Is there any hope of generalizing the operators $A_i(x)$ to something akin to $A_i^q(x)$ or similar, in hopes of writing quantum Schubert polynomials in the form $G_w^q=A^q_1(x_1)\cdots A^q_{n-1}(x_{n-1})$? Note that in their definition, quantum Schubert polynomials actually have $n-1$ $q_i$ terms.

SECOND QUESTION: Purely in terms of reduced decompositions (and not anything related to quantum cohomology, Schubert varieties or Gromov-Witten invariants), what exactly do quantum Schubert polynomials count? It's well known that Schubert polynomials and their infinite relatives can be used, for example, to count the number of reduced words of a permutation $w\in S_n$. The infinite Schubert polynomial $G_w(x)=A_1(x_1)A_1(x_2)\cdots$ projected onto $w$ corresponds exactly to the Stanley symmetric function:
$$G_w(x)=\sum_{(s_1,\cdots,s_p)\in R(w)}\sum_{\overset{\overset{b_1\cdots, b_p}{1\leq b_1\leq\cdots \leq b_p}}{s_i<s_{i+1}\Rightarrow b_i<b_{i+1}}}x_{b_1}\cdots x_{b_p}$$
where $R(w)$ is the set of all reduced decompositions of $w$. Then we have the famous result $|R(w)|=[x_1\cdots x_p]G_w(x)=\sum_{\lambda\vdash p}\alpha_{w\lambda}[x_1\cdots x_p]s_\lambda=\sum_{\lambda\vdash p}\alpha_{w\lambda}f^\lambda$, where $\lambda$ runs over partitions of $p$, $f^\lambda$ is the number of standard Young tableaux of shape $\lambda$, and $s_\lambda$ are Schur polynomials. The $\alpha_{w\lambda}$ coefficients have a combinatorial interpretation due to Fomin and Greene: they count the number of semistandard Young tableaux of shape $\lambda$ whose row-reading word is a reduced decomposition of $w$. It seems to me that in the quantum version one gets quantum Schur polynomials instead, with quantum $\alpha_{w\lambda}$, but this is as far as I can see. What would the $\alpha^q_{\lambda w}$ count in the quantum case?
Here is another interpretation. In their paper on the Yang Baxter equation and Schubert Polynomials, Fomin and Kirillov show that Schubert polynomials can be constructed by looking at braid relations on wiring diagrams (see for example Fig 10, pg 134). In this way, how do quantum Schubert polynomials count such generalized configurations? More importantly, what do the weights $q_i$ contribute to the counts?
Here is an example: for $w=321$, the double Schubert polynomial can be written as
$$\mathfrak{G}_w(x,y)=(x_1-y_2)(x_1-y_1)(x_2-y_1)$$
which has the interpretation that we assign weights $x_i,y_i$ to the threads (pseudolines) of any reduced decomposition of $w$, thus explicitly showing which lines cross at a given time. The quantum version is $$\mathfrak{G}^q_w(x,y)=(x_1-y_2)(x_1-y_1)(x_2-y_1)+q_1(x_1-y_2).$$
What exactly is this $q_1$ counting? |
If $P$ dollars are invested for one year at a rate of interest $i$, it will accumulate to amount $A$ at the end of the year by the following equation:
$$A=P(1+i)$$
How long does it take, in years $t$, to double your principal $P$ with an interest rate of $i$ per annum?
$$2P=P(1+i)^t$$
$$2=(1+i)^t$$
$$\log_e 2=t\log_e(1+i)$$
$$t=\frac{\log_e 2}{\log_e(1+i)}\approx \frac{0.7}{i}$$
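A quick numerical check of this "rule of 70" approximation:

```python
from math import log

def doubling_time(i):
    """Exact years to double at annual rate i, from 2 = (1+i)^t."""
    return log(2) / log(1 + i)

for i in (0.01, 0.05, 0.10):
    print(i, doubling_time(i), 0.7 / i)  # exact vs rule-of-70 estimate
```

For small rates the two agree closely, since $\log_e(1+i)\approx i$ and $\log_e 2 \approx 0.693 \approx 0.7$.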
During the Kaggle Data Science Bowl 2017, the leaderboard was based on only $198$ samples. The opportunity for overfitting was quickly understood, but initially only the naive option was mentioned, testing 1 submission per sample taking 66 days (still doable within the competition duration, but less than ideal).
But then Oleg Trott got a perfect score in just 14 submissions! I was really curious how he managed to do this. Together with Cas, I found out one way it can be done. Perhaps Oleg will be posting his solution soon, but as I write this, it's not yet public.
EDIT: Kaggle is not happy with these overfitted perfect scores, which is reasonable. It gives a wrong impression to the press or to newcomers. Therefore I ask you not to post your perfect solution to the leaderboard. You can still post test submissions to get extra training data, giving you a slight advantage over others who will get this data later. Test submissions will have terrible logloss, so they won't influence the leaderboard.
The score is calculated as logloss, with probabilities capped at $1 \cdot 10^{-15}$. Therefore, the maximum error is $-\ln(10^{-15})$. With a resolution of $0.00001$ on the public leaderboard, that leaves $-\ln(10^{-15})/198/0.00001 = 17443$ discernible values or $14.090$ bits of information. The solution has $198$ bits of information, so theoretically, it seems possible to get it in 15 attempts.
Here's how to do it. You want to be able to tell from the sum of the sample log losses which position had an error. Therefore, you want the errors in "binary representation" to be like $0000$, $0001$, $0010$, $0100$, $1000$, but with a "bit" being the minimum resolution. That way you can tell from the sum which position was wrong.
EDIT: I initially used only half the probability range for positives, to prevent collisions between scores from positives and negatives. By "collision" I mean the score being decomposable in multiple ways. As it turns out, collisions are pretty rare, so we use the whole range and sample 15 bits each time (including a 0 prediction).
We can check 15 positions per submission. To have the other positions not interfere, we predict $p=0.5$ for all of them, which will give a logloss contribution of $-(198-15)/198 \cdot \ln(0.5)$ whether they're positives or negatives. This will simply add a constant to the score.
EDIT: I initially used the exponential function, but after seeing that Oleg uses a sigmoid, I found that there are fewer collisions that way. You should use that approach, by simply replacing "exp" with "logistic.cdf" (scipy). Comments about why this works are welcome!
Let's look at a simplified example, where we probe 4 positions and predict $p=0.5$ for the rest. For the first position we predict $p=1$, for a logloss of $0$ if correct. For the second position, we want the minimum discernible error if it's a positive. The minimum discernible error is $0.00001$, so we predict $p = \exp(-1 \cdot 2^0 \cdot 0.00001 \cdot 198) = 0.99802$. For the third one, we want double ($2^1$) that error ($0.00002$), so we predict $\exp(-1 \cdot 2^1 \cdot 0.00001 \cdot 198) = 0.99605$. For the fourth one, we want 4 ($2^2$) times that error ($0.00004$), so $p=0.99211$.
We submit this solution and get an error of (for example) $0.70714$. We first subtract the constant term due to 194 times $p=0.5$, which is $-\ln(0.5) / 198 \cdot 194 = 0.67914$, which leaves us $0.02800$. Position 1 can contribute either $0.00000$ or $-\ln(10^{-15})/198 = 0.17444$, position 2 can contribute either $0.00001$ or $0.03144$, position 3 either $0.00002$ or $0.02795$, and position 4 either $0.00004$ or $0.02446$.
In this case it's fairly obvious which sums to $0.02800$, but you can generally just try all possibilities, there are just 32768 for 15 bits. The solution is $0.00000 + 0.00001 + 0.02795 + 0.00004$. So we know position 1, 2 and 4 are positives, while position 3 is a negative.
Just repeat this for 15 positions at a time until you have everything. If there were ever multiple solutions for how to obtain the score, then you can do another submission to compare them.
See my implementation on Kaggle. (Jan 18) |
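Here is a condensed, self-contained sketch of the scheme (my own reimplementation, not Oleg's code or the Kaggle kernel; for clarity it skips the leaderboard's 5-decimal rounding, so decoding is exact and collisions don't arise):

```python
from itertools import product
from math import exp, log

N = 198          # samples on the leaderboard
RES = 0.00001    # leaderboard resolution
BITS = 15        # positions probed per submission

def probe_predictions():
    # position k gets p_k = exp(-2^k * RES * N): a correct label-1
    # prediction contributes exactly 2^k * RES to the total logloss
    return [exp(-(2 ** k) * RES * N) for k in range(BITS)]

def leaderboard_score(preds, labels):
    # remaining N - BITS positions are predicted 0.5 (constant offset)
    loss = sum(-log(p) if y else -log(1 - p) for p, y in zip(preds, labels))
    loss += (N - BITS) * -log(0.5)
    return loss / N

def decode(score, preds):
    # brute-force all 2^BITS label patterns for the probed positions
    best, best_err = None, float("inf")
    for labels in product((0, 1), repeat=BITS):
        err = abs(leaderboard_score(preds, labels) - score)
        if err < best_err:
            best, best_err = labels, err
    return best

true_labels = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0)
preds = probe_predictions()
score = leaderboard_score(preds, true_labels)
assert decode(score, preds) == true_labels
print("recovered", decode(score, preds))
```

With the real leaderboard's rounding, the decoded candidate is the one whose predicted score is nearest the reported one, and the rare ambiguous cases are resolved with an extra submission, as described above.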
Let $E$ be a holomorphic bundle over an algebraic surface $X$, and let $H$ be a Hermitian metric on $E$. Recall the Hermitian–Yang–Mills equation is $\wedge F_H=\lambda\cdot 1$.
Let $H_t$ be Hermitian metrics on $E$ parametrized by $t$. Donaldson in [1] considers the following flow equation: \begin{equation} H_t^{-1}\frac{\partial H_t}{\partial t}=-2i(\wedge F_{H_t}-\lambda\cdot 1),\;\;H_t|_{t=0}=H_0, \label{flow} \end{equation} for some initial metric $H_0$.
In [1], page 15, there is a note: if $E$ is indecomposable and has a solution $K$ to the Hermitian–Yang–Mills equation, then for any initial condition $H_0$, the corresponding solution $H_t$ of the flow equation converges in $\mathcal{C}^{\infty}$ to $K$ as $t\to\infty$. In addition, consider the distance function $\sigma$ between two metrics $H_t,K$, $\sigma(H_t,K):=Tr(H_t^{-1}K)+Tr(K^{-1}H_t)-2\;\mathrm{rank}\;E$; then we have a bound \begin{equation} \|(\frac{\partial}{\partial t}+\Delta)\sigma(K,H_t)\|_{L^1}\leq -const.\|\sigma(K,H_t)\|_{L^1} \end{equation} and $\sigma$ decays exponentially.
My question is how to verify these two claims:
(A) If a solution $K$ exists, then the flow converges to the solution in $\mathcal{C}^{\infty}$.

(B) This convergence decays exponentially.
[1] S.Donaldson, Anti Self-Dual Yang-Mills Connections over Complex Algebraic Surfaces and Stable Vector Bundles |
In [1] an attempt was made to model the following situation:
There are \(n=25\) bins (indexed by \(i\)), each holding a given number of parts \(q_i\). There are \(m=5\) pallets, indexed by \(j\). We can load up to 5 bins onto a pallet. (Note: this implies each pallet will get exactly 5 bins.) We want to minimize the standard deviation of the number of parts loaded onto a pallet.
The question of how to model the standard deviation thing comes up now and then, so let’s see how we can model this simple example.
First we notice that we need to introduce some kind of binary assignment variable to indicate on which pallet a bin is loaded:
\[x_{i,j}=\begin{cases}1 & \text{if bin $i$ is assigned to pallet $j$}\\0&\text{otherwise}\end{cases}\]
\[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{align}\min\>&\sqrt{\frac{\sum_j (p_j-\mu)^2}{m}}\\&p_j=\sum_i q_i x_{i,j}\\&\mu=\frac{\sum_j p_j}{m}\\&\sum_j x_{i,j}=1\>\>\forall i\\&\sum_i x_{i,j}\le 5\>\>\forall j\\&x_{i,j}\in\{0,1\}\end{align}}\]
The variables \(p_j\) and \(\mu\) are (unrestricted) continuous variables (to be precise: \(p_j\) will be integer automatically).
The objective is complicated and would require an MINLP solver. We can simplify the objective as follows:
\[\min\>\sum_j (p_j-\mu)^2 \]
Now we have a MIQP model that can be solved with a number of good solvers.
It is possible to look at the spread in a different way (there is nothing sacred about the standard deviation) and come up with a linear objective:
\[\begin{align}\min\>&p_{\text{max}}-p_{\text{min}}\\&p_{\text{max}}\ge p_j\>\>\forall j\\&p_{\text{min}}\le p_j\>\>\forall j\end{align}\]
This is the range of the \(p_j\)’s. In practice we might even try to use a simpler approach:
\[\begin{align}\min\>&p_{\text{max}}\\&p_{\text{max}}\ge p_j\>\>\forall j\end{align}\]
This objective will also reduce the spread although in an indirect way. To be complete I also want to mention that I have seen cases where a quadratic objective of the form:
\[\min\>\sum_j p_j^2\]
was used. Indirectly, this objective will also minimize the spread.
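To see these surrogate objectives in action on a toy instance, here is a brute-force sketch (pure Python, made-up data of my own: 6 bins onto 2 pallets of 3 bins each):

```python
from itertools import combinations
from statistics import pstdev

q = [13, 27, 21, 16, 16, 25]     # parts per bin (made-up data)
bins = range(len(q))

best = None
for group in combinations(bins, 3):          # pallet 1 gets 3 bins
    other = [i for i in bins if i not in group]
    p = [sum(q[i] for i in group), sum(q[i] for i in other)]
    # rank candidates by range first, standard deviation second
    cand = (max(p) - min(p), pstdev(p), group)
    if best is None or cand < best:
        best = cand

rng, sd, group = best
print(rng, sd, sorted(group))  # range 0: pallet {13,21,25} vs {27,16,16}
```

On this instance the min-range assignment is also the min-standard-deviation one (both pallets carry 59 parts); in general the linear objective is only a surrogate, as the post discusses.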
The original post [1] suggests using the following numbers of parts:
Obviously drawing from the normal distribution will give us fractional values. In practice I would expect the number of parts to be a whole number. Of course, if we consider another attribute such as weight, we could see fractional values. In the model per se, we don't assume integer-valued \(q_i\)'s, so let's stick with these numbers. We will see below that this is actually quite a difficult data set (even though we only have 125 discrete variables, which makes the model rather small); other data will make the model much easier to solve.
To reduce some symmetry in the model we can add:
\[p_j \ge p_{j-1}\]
Not only will this help with symmetry, it also makes the variable \(p\) easier to interpret in the solution. You can see this in the results below.
The two quadratic models were difficult to solve to optimality. I stopped after 1000 seconds and both were still working. Interestingly just minimizing \(\sum p_j^2\) seems to get a somewhat better solution within the 1000 second time limit: it reduces the range from 0.1 to 0.052 and the standard deviation from 0.04 to 0.02.
The linear models solve to global optimality quickly and get better results.
By using a linear approximation we can reduce the range to 0.034 and 0.038 (and the standard deviation to 0.014 and 0.016). The model that minimizes \(p_{max}-p_{min}\) seems to be the best performing: both the quality of the solution is very good and the solution time is the fastest (34 seconds).
This is an example of a model where MIQP solvers are not as fast as we want: they really fall behind their linear counterparts.
Optimality
Can we find the best standard deviation possible? We can use the following algorithm to give this a try:
Solve the linear model with objective \(\min p_{max}-p_{min}\) to optimality. Solve the quadratic model with objective \(\min \sum_j (p_j-\mu)^2\) using MIPSTART.
This data set, although small, is still a big problem for the state-of-the-art MIQP solvers around. Step 2 took hours on a small laptop (somewhat faster on a faster machine after throwing extra threads at it). This step did not improve the solution found in step 1 but rather proved it has the best possible standard deviation. Still very poor performance for a very small model!
Dataset
The dataset we used turned out to be very difficult. If we use a different dataset for \(q_i\): integer values from a uniform distribution, we get much better performance.
Using the following data:
---- 35 PARAMETER q
bin1 13.000, bin2 27.000, bin3 21.000, bin4 16.000, bin5 16.000
the model with quadratic objective \(\min \sum_j (p_j-\mu)^2\) solves this in just 0.2 seconds to optimality (see the solution below). This model has exactly the same size as before, just different data for \(q_i\). This is one more proof that problem size (measured by number of variables and equations) is not necessarily a good indication of how difficult a problem is to solve.
References
1. LPSolve API, Minimize a function of constraints, https://stackoverflow.com/questions/46203974/lpsolve-api-minimize-a-function-of-constraints
Let $G$ be a reductive algebraic group and $\varrho$ a representation of $G$ in $GL(n)$. Is it true that $\varrho$ is completely reducible? Moreover, how are the representations of the Lie algebra $\mathfrak{g}$ of $G$ related to those of $G$? Finally, the centre of the identity component of $G$ consists of semisimple transformations; is this true also for $\mathfrak{g}$?
You need to be over a field of zero characteristic and your representation needs to be rational, i.e. matrix entries need to be algebraic functions on $G$. Then it is completely reducible, see any book on algebraic groups, e.g., Jantzen or Humphreys.
You can always differentiate, so a differential of a map $G\rightarrow GL(V)$ is a representation of ${\mathfrak g}$. In the opposite direction, a certain care is required. To integrate a vector field, you need exponential function, which is not, in general, algebraic. However, for a semisimple group in characteristic zero, you have enough nilpotent elements $X\in{\mathfrak g}$, so that the polynomials $e^{\rho (X)}$ define a representation of the group.
Finally, the answer is no. Take ${\mathfrak g}$ to be one-dimensional Lie algebra acting on $K^2$ by the nilpotent nonzero transformation.
Edit: I was too fast with my answer, but I am going to keep it here to possibly prevent others from the misunderstanding that I had. The problem is that you ask about Lie groups in the title of your question, and about algebraic groups in the body of your question! The answers differ: for an algebraic group, reductive = "trivial unipotent radical" (and this, in char=0, gives the complete reducibility), while for a Lie group, it is "Lie group whose Lie algebra is reductive" (so the additive group is perfectly fine). In my view, this mismatch is terrifying, but not much can be done, and there probably will be misconceptions about it forever.
Is it true that $\rho$ is completely reducible?
Certainly not. The counterexamples given to you in your previous question easily adapt for groups. For example, the additive group of the ground field has a 2d representation $x\mapsto\begin{pmatrix}1&x\\\ 0&1\end{pmatrix}$. |
The package
CircuiTikz provides a set of macros for naturally typesetting electrical and electronic networks. This article explains basic usage of this package.
CircuiTikz includes several nodes that can be used with standard TikZ syntax.
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{circuitikz} \begin{document} \begin{center} \begin{circuitikz} \draw (0,0) to[ variable cute inductor ] (2,0); \end{circuitikz} \end{center} \end{document}
To use the package it must be imported with
\usepackage{circuitikz} in the preamble. Then the environment circuitikz is used to typeset the diagram with tikz syntax. In the example a node called variable cute inductor is used.
As mentioned before, to draw electrical network diagrams you should use tikz syntax, the examples even work if the environment
tikzpicture is used instead of circuitikz; below a more complex example is presented.
\begin{center} \begin{circuitikz}[american voltages] \draw (0,0) to [short, *-] (6,0) to [V, l_=$\mathrm{j}{\omega}_m \underline{\psi}^s_R$] (6,2) to [R, l_=$R_R$] (6,4) to [short, i_=$\underline{i}^s_R$] (5,4) (0,0) to [open, v^>=$\underline{u}^s_s$] (0,4) to [short, *- ,i=$\underline{i}^s_s$] (1,4) to [R, l=$R_s$] (3,4) to [L, l=$L_{\sigma}$] (5,4) to [short, i_=$\underline{i}^s_M$] (5,3) to [L, l_=$L_M$] (5,0); \end{circuitikz} \end{center}
The nodes
short,
V,
R and
L are presented here, but there are a lot more. Some of them are presented in the next section.
Below most of the elements provided by CircuiTikz are listed: monopoles, bipoles, diodes, dynamical bipoles.
For more information see: |
An inner product is a positive definite bilinear form on a vector space: for a vector space $X$ with underlying scalar field $\mathbb{F}$, a map $\langle \cdot, \cdot \rangle: X \times X \to \mathbb{F}$ that satisfies
An inner product space $(X, (+, \cdot_\mathbb{F}), \langle \cdot, \cdot \rangle)$ is a vector space with an inner product. The inner product specifies the geometry of the vector space. An inner product space has the norm $\|x\| = \sqrt{\langle x, x \rangle}$, and may have a finite or infinite number of dimensions.
A Hilbert space is a complete inner product space. Hilbert spaces typically arise as infinite-dimensional function spaces, e.g. the $l^2$ space of square-summable infinite sequences, the $L^2$ spaces of square-integrable functions, the Sobolev spaces $H^s$ of functions whose weak derivatives up to order $s$ are square-integrable, and the Hardy spaces $H^2(U)$ and Bergman spaces $L^2_a(G)$ of holomorphic functions.
A Euclidean space $(\mathbb{R}^n, (\cdot,\cdot))$ is a finite-dimensional real vector space $\mathbb{R}^n$, $n \in \mathbb{N}$, with the inner product $(x,y) = \sum_{i=1}^n x_i y_i$.
Two elements of an inner product space are orthogonal if their inner product is zero: $\langle x, y \rangle = 0$.
Orthogonal complement.
Orthogonal projection.
Orthogonal set; Orthonormal set; Maximal/complete orthogonal set; Orthonormal basis.
Gram-Schmidt orthogonalization process.
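The Gram-Schmidt process is easy to sketch numerically; here is a minimal implementation (my own illustration) for the Euclidean inner product:

```python
import math

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt w.r.t. the Euclidean inner product.
    Returns an orthonormal list; (near-)dependent inputs are dropped."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:                      # subtract projections onto earlier e's
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))
        if norm > tol:
            basis.append([wi / norm for wi in w])
    return basis

# the third vector is the sum of the first two, so only two survive
basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]])
assert len(basis) == 2
assert abs(sum(x * y for x, y in zip(basis[0], basis[1]))) < 1e-9
```

Dropping vectors whose residual norm falls below the tolerance is what makes the sketch tolerate linearly dependent inputs.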
For a vector space, an approximation of a point on a (closed) subspace is a point on the subspace that is closest to the point.
Theorem (Approximation in Banach and Hilbert spaces): For a Banach space, an approximation may not exist. For a Hilbert space, the approximation of any point on any subspace exists and is unique, which is the orthogonal projection of that point.
Theorem (Riesz representation):
Theorem (Lax-Milgram):
For a Hilbert space $(H, \langle \cdot , \cdot \rangle)$ of scalar functions on $X$, a scalar function $K(x,y)$ on $X \times X$ is a reproducing kernel of the Hilbert space if, letting $K_y(x) \equiv K(x,y)$: $K_y \in H$ for every $y \in X$, and $\langle f, K_y \rangle = f(y)$ for every $f \in H$ and $y \in X$ (the reproducing property).
Reproducing kernels are symmetric and positive definite.
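As a numerical illustration (my own, using a Gaussian RBF kernel, a standard example of a symmetric positive definite kernel):

```python
import math
import random

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel on the real line."""
    return math.exp(-gamma * (x - y) ** 2)

pts = [0.0, 0.7, 1.3, 2.9]
K = [[rbf(a, b) for b in pts] for a in pts]

# symmetry: K(x, y) = K(y, x)
assert all(K[i][j] == K[j][i] for i in range(4) for j in range(4))

# positive (semi)definiteness: c^T K c >= 0 for many random coefficient vectors
random.seed(0)
for _ in range(200):
    c = [random.uniform(-1.0, 1.0) for _ in pts]
    quad = sum(c[i] * K[i][j] * c[j] for i in range(4) for j in range(4))
    assert quad >= -1e-12
```

Sampling random coefficient vectors only spot-checks the quadratic form, of course; a full verification would examine the eigenvalues of the Gram matrix.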
Reproducing kernel Hilbert space (RKHS) is a Hilbert space with a reproducing kernel[@Aronszajn1950].
Theorem (equivalent definition of RKHS): A Hilbert space of functions on a set $X$ is a reproducing kernel Hilbert space iff $f(y) \le c(y) \|f\|, \forall y \in X$, where $c(y) \equiv \|K_y\|$.
A reproducing kernel Hilbert space uniquely defines a reproducing kernel which is symmetric and positive definite.
Theorem: For a symmetric positive definite kernel $K$ on a set $X$, there is a unique Hilbert space of functions on $X$ for which $K$ is a reproducing kernel.
Therefore, reproducing kernel Hilbert spaces of functions on a domain are in one-to-one correspondence with positive definite kernels on the domain. We can denote any reproducing kernel Hilbert space as $H_K$, where $K$ is the unique reproducing kernel of the Hilbert space and $H_K$ is the unique Hilbert space generated by the symmetric positive definite kernel $K$.
If the space $H_K$ is sufficiently rich, then the reproducing kernel $K$ is positive definite.
Let $X$ have a measure $\mu$, which defines a Hilbert space $(L^2_\mu, \langle \cdot , \cdot \rangle_\mu)$. Define the integral operator $L_K$ on $H_K$ as $(L_K f)(x) = \langle K_x, f \rangle_\mu$. If $X$ is compact, then $L_K$ is compact and self-adjoint w.r.t. $L^2_\mu$, so its eigenfunctions $\{e_i\}_{i \in \mathbb{N}}$ form an orthonormal basis of $L^2_\mu$, its eigenvalues $\{\lambda_i\}_{i \in \mathbb{N}}$ have finite multiplicities and converge to zero, and $K(x,y) = \sum_{i \in \mathbb{N}} \lambda_i e_i(x) e_i(y)$. If $f(x) = \sum_{i \in \mathbb{N}} a_i e_i(x)$, then $L_K f = \sum_{i \in \mathbb{N}} \lambda_i a_i e_i(x)$. It can be shown that the eigenfunctions are in $H_K$, so $\langle e_i, e_j \rangle = \delta_{ij} / \lambda_i$, and $f(x) = \sum_{i \in \mathbb{N}} a_i e_i(x) \in H_K$ iff $\sum_{i \in \mathbb{N}} a_i^2 / \lambda_i < \infty$. Let $L_K^{1/2}$ be the only positive definite self-adjoint operator that satisfies $L_K^{1/2} \circ L_K^{1/2} = L_K$; then $L_K^{1/2}$ is an isomorphism from $L^2_\mu$ to $H_K$.
1. Homework Statement
For a lightly damped harmonic oscillator and driving frequencies close to the natural frequency [itex]\omega \approx \omega_{0}[/itex], show that the power absorbed is approximately proportional to
[tex]
\frac{\gamma^{2}/4}{\left(\omega_{0}-\omega\right)^{2}+\gamma^{2}/4}
[/tex]
where [itex]\gamma[/itex] is the damping constant. This is the so called Lorentzian function.
2. Homework Equations
[tex]
\text{Average power absorbed} = P_{avg} = \frac{F_{0}^{2} \omega_{0}}{2k Q} \frac{1}{\left(\frac{\omega_{0}}{\omega}-\frac{\omega}{\omega_{0}}\right)^{2}+\frac{1}{Q^{2}}} \\
\omega_{0} = \sqrt{\frac{k}{m}}\\
m = \frac{b}{\gamma}\\
\text{where $b$ is the damping constant and $m$ is the mass}\\
\Delta \omega = \frac{\gamma}{2}
[/tex]
3. The Attempt at a Solution
The course of action that I took goes like:
1. Find [itex]k[/itex] and [itex]Q[/itex] in terms of [itex]\omega_{0}[/itex] and [itex]\gamma[/itex].
2. Chug through and do some algebra (and it is here that its very possible that a mistake was made, but I'll put my result not all the steps).
3. Expand a function about [itex]\omega_{0}[/itex] and make approximations so that [itex]\Delta \omega[/itex] is small.
4. See the above equation fall out. This is the stage that I'm stuck at.
[tex]
k = b \frac{\omega_{0}^{2}}{\gamma}\\
Q = \frac{\omega_{0}}{\gamma}\\
2 \Delta \omega = \gamma
\\
P_{avg} = \text{plug in and do lots of algebra...}\\
P_{avg} = \frac{\frac{\omega^{2}\gamma^{2}}{(\omega+\omega_{0})^{2}}}{(\omega_{0}-\omega)^{2}+\frac{\omega^{2}\gamma^{2}}{(\omega+\omega_{0})^{2}}}
[/tex]
Then taylor expanding [itex]f(\omega) = \frac{\omega^{2}}{(\omega+\omega_{0})^{2}}[/itex] about [itex]\omega_{0}[/itex]....
Am I on the right track here? I'd like that Taylor expansion to equal [itex]\frac{1}{4}[/itex], because then the equation would match the one described in the question, but I'm trying it by hand and with Mathematica and I'm not seeing them match.
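A quick numerical check (my own, not from the problem set) suggests the derived expression does collapse to the Lorentzian near resonance, with the Taylor factor [itex]\omega^2/(\omega+\omega_0)^2[/itex] equal to exactly [itex]1/4[/itex] at [itex]\omega = \omega_0[/itex]:

```python
import math

omega0, gamma = 10.0, 0.1        # hypothetical lightly damped values: gamma << omega0

def derived(w):
    """The expression obtained after the algebra above."""
    num = (w ** 2 * gamma ** 2) / (w + omega0) ** 2
    return num / ((omega0 - w) ** 2 + num)

def lorentzian(w):
    """The target Lorentzian from the problem statement."""
    return (gamma ** 2 / 4) / ((omega0 - w) ** 2 + gamma ** 2 / 4)

# the Taylor factor omega^2/(omega+omega0)^2 is exactly 1/4 at resonance
assert abs(omega0 ** 2 / (omega0 + omega0) ** 2 - 0.25) < 1e-15
# near resonance the two expressions agree closely
for w in (omega0 - 0.02, omega0, omega0 + 0.02):
    assert abs(derived(w) - lorentzian(w)) < 5e-3
```

So replacing the slowly varying factor by its value at resonance is exactly the approximation being asked for.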
If the L-function $L(s)$ satisfies the functional equation \[\Lambda(s) := N^{s/2}\prod_{j=1}^J \Gamma_{\mathbb R}(s+\mu_j) \prod_{k=1}^K \Gamma_{\mathbb C}(s+\nu_k)\cdot L(s) = \varepsilon \overline{\Lambda}(1-s),\]then $\Lambda(s)$ is called the
completed L-function.
The completed L-function is the product of the L-function and its gamma factors.
Last edited by David Farmer on 2019-05-14.
Recall that an object $a$ in a symmetric monoidal category $(\mathcal{C}, \otimes, e)$ is dualizable if there exists an object $b$ and morphisms $\varepsilon\colon b \otimes a \to e$ and $\eta\colon e \to a \otimes b$ such that the compositions $$a \xrightarrow{\eta \otimes a} a \otimes b \otimes a \xrightarrow{a\otimes \varepsilon} a$$ and $$b \xrightarrow{b \otimes \eta} b \otimes a \otimes b \xrightarrow{\varepsilon \otimes b} b$$ are the identities.
Let $\mathcal{O}$ be a ring in a topos $\mathcal{T}$. An object in the category of $\mathcal{O}$-modules is dualizable if and only if it is locally isomorphic to a direct summand of a finite free module. A strictly perfect complex is a bounded cochain complex of direct summands of finite free modules. An object in the category of cochain complexes of $\mathcal{O}$-modules is dualizable if and only if it is locally isomorphic to a strictly perfect complex. For instance, if $\mathcal{T}$ is the category of sets, then the dualizable objects in the categories mentioned above are the finite projective modules and the bounded complexes of finite projective modules, respectively.
A complex in the derived category $D(\mathcal{O})$ of $\mathcal{O}$-modules is called perfect if it is locally quasi-isomorphic to a strictly perfect complex. Any perfect complex in $D(\mathcal{O})$ is dualizable. My question is: Is the converse statement true? That is, is any dualizable object in $D(\mathcal{O})$ perfect?
I know that this is true in the special case when $\mathcal{T}$ is the category of sets. One argument for this is that $\mathcal{O}$ is compact in $D(\mathcal{O})$, something which does not hold in a general topos, from which it easily follows that any dualizable complex is compact. Hence the statement follows from the fact that compact objects are perfect, again using that $\mathcal{T}$ is the category of sets.
Advances in Operator Theory Adv. Oper. Theory Volume 3, Number 4 (2018), 781-793. On behavior of Fourier coefficients and uniform convergence of Fourier series in the Haar system Abstract
Suppose that $\hat{b}_m\downarrow 0,\ \{\hat{b}_m\}_{m=1}^\infty\notin l^2,$ and $b_n=2^{-\frac{m}{2}}\hat{b}_m$ for all $ n\in(2^m,2^{m+1}].$ In this paper, it is proved that any measurable and almost everywhere finite function $f(x)$ on $[0,1]$ can be corrected on a set of arbitrarily small measure to a bounded measurable function $\widetilde{f}(x)$, so that the nonzero Fourier-Haar coefficients of the corrected function form a subsequence of $\{b_n\}$, and its Fourier-Haar series converges uniformly on $[0,1]$.
Article information Source Adv. Oper. Theory, Volume 3, Number 4 (2018), 781-793. Dates Received: 21 January 2018 Accepted: 12 May 2018 First available in Project Euclid: 8 June 2018 Permanent link to this document https://projecteuclid.org/euclid.aot/1528444822 Digital Object Identifier doi:10.15352/aot.1801-1300 Mathematical Reviews number (MathSciNet) MR3856172 Citation
Grigoryan, M. G.; Kobelyan, A. Kh. On behavior of Fourier coefficients and uniform convergence of Fourier series in the Haar system. Adv. Oper. Theory 3 (2018), no. 4, 781--793. doi:10.15352/aot.1801-1300. https://projecteuclid.org/euclid.aot/1528444822 |
This is the final part of this hands-on tutorial. I will assume from now on that you have read Part I, Part II, and Part III of this series.
As promised, this post will deal with:
Some tweaks to the protocol presented in the previous posts. A complexity analysis of the protocol. A small optimization. A few words about modern ZK proving protocols.

Lior's Tweaks
A colleague at Starkware, Lior Goldberg, pointed out a caveat in the zero-knowledge aspect, and suggested a nice simplification of the protocol. Here they are:
A ZK Fix
Now if the random query $i$ happens to be $1$, then the prover is required to reveal the second and third elements in the witness. If they happen to be 15 and 21, then the verifier knows immediately that 5 and 6 (from the problem instance) belong to the same side in the solution. This violates the zero-knowledge property that we wanted.
This happened because we chose uniformly at random from a very small range, and $r$ happened to be the maximal number in that range.
There are two ways to solve this. One is by chosing some arbitrary number and doing all computations modulo that number. A simpler way would be chosing $r$ from a huge domain, such as $0..2^{100}$, which makes the probability of getting a revealing $r$ negligible.
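A minimal sketch of the fix (illustrative names and data, not the posts' actual code): draw the blinding value $r$ from a huge range, so the revealed consecutive witness values leak only the absolute difference $q_i$:

```python
import random

q = [4, 11, 8, 1]              # hypothetical Partition instance: 4 + 8 = 11 + 1
s = [1, -1, 1, -1]             # a satisfying +/-1 side assignment

r = random.randint(0, 2**100)  # huge domain: a "revealing" r is now negligible
witness = [r]                  # shifted partial sums of the dot product of q and s
for qi, si in zip(q, s):
    witness.append(witness[-1] + si * qi)

# a valid assignment brings the partial sums back to r: first == last
assert witness[0] == witness[-1]
# a query at i reveals witness[i] and witness[i+1]; only |difference| = q[i] leaks
assert all(abs(witness[i + 1] - witness[i]) == q[i] for i in range(len(q)))
```

With $r$ of this size, the revealed values themselves look like random 100-bit numbers, so the verifier learns nothing beyond the queried difference.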
Simplify By Having A Cyclic List
Our witness originally had $n + 1$ elements such that the first was a random number, and the rest were partial sums of the problem and the assignment dot product (plus the initial random number).
This meant we had two types of queries, one to check that two consecutive elements in the list differ in absolute value by the corresponding element in the problem list. Another type of query just checked that the first and last element are equal.
As Lior pointed out, it is much more elegant to omit the last element from the witness entirely, and if $i = n$ - check that the first and last elements in the witness differ, in absolute value, by the last element in the problem instance. Essentially, this is like thinking of the witness as cyclic. The nice thing about this is that now we only have one type of query - a query about the difference between two consecutive elements modulo $n$ in the witness.

Proof Size / Communication Complexity
We'd like to analyze the size of the proof that our code generates. This is often referred to as communication complexity, because the Fiat-Shamir heuristic (described in Part III) transforms the messages of an interactive protocol into a proof, making these two terms interchangeable in this context.
So, for each query, the proof stores:
The value of $i$. The values of the $i$-th element in the witness and the $((i+1) \bmod n)$-th element. Authentication paths for both elements.
The authentication paths here are the heavy part. Each of them is a $\log(n)$-element long list of 256-bit values.
As was discussed in the last post, to get a decent soundness, the number of queries has to be roughly $100n$.
Putting these two together, the proof size will be dominated by the $~200 \cdot n \cdot \log(n)$ hashes that form the authentication paths.
So a proof that one knows an assignment to a Partition Problem instance with 1000 numbers, will require roughly $2,000,000$ hashes, which translate to 64 Megabytes of data.
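The arithmetic above can be checked in a few lines (assuming 32-byte hashes):

```python
import math

n = 1000                               # numbers in the Partition Problem instance
queries = 100 * n                      # queries needed for decent soundness
path_len = math.ceil(math.log2(n))     # Merkle authentication path length: 10
hashes = queries * 2 * path_len        # two authentication paths per query

assert hashes == 2_000_000             # the ~200 * n * log(n) figure
assert hashes * 32 == 64_000_000       # ~64 MB of authentication-path data
```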
Small Merkle Optimization
Since merkle authentication paths, somewhat surprisingly, make up the vast majority of the proof, maybe we can reduce their number by a little.
Note that all queries (except for one) ask about consecutive leaves in the tree.
Consecutive leaves have their LCA (least common ancestor) at height $\frac {\log n} {2}$ on average. Up to the LCA, their authentication paths may differ, but from the LCA up to the root, their authentication paths are identical, so we're wasting space writing both in the proof.
Omitting the path from the LCA to the root from one of them will bring the proof size down to $150 \cdot n \cdot \log (n)$, which is a nice 25% improvement.
Implementing this optimization, as well as Lior's tweaks, is left - as they say in textbooks - as an exercise for the reader.
Modern Protocols
Modern ZK proving protocols, such as ZK-SNARKS, ZK-STARKS, Bulletproof, Ligero, Aurora, and others, are often compared along these four axes:
What type of statements can be proved using the protocol. How much space the proof takes up. How long it takes to create a proof. How long it takes to verify a proof.
Often the topic of trusted setup is discussed, but we won't get into that here.
Let's see how our toy-protocol fares:
Which statements can be proved?
In the toy-protocol, only knowledge of a solution to a Partition Problem instance could be proved. In contrast with most protocols, where one can use the protocol to prove knowledge of an input that satisfies some arbitrary arithmetic circuit, or even that a specific program ran for $T$ steps, and provided a specified output (this is what ZK-STARKS do).
Well, you may say, if you can prove one NP-complete problem (and the Partition Problem is one) - you can prove them all, due to polynomial time reductions. And theoretically speaking you would be right. However, in the practical world of ZK-proofs, all these manipulations have costs of their own, and conversions often incur a blow-up of the problem, since "polynomial reduction" is a theoretical term that can translate to non-practical cost. For this reason, modern protocols make an effort to take as input more expressive forms (such as arithmetic circuits and statements about computer programs).
Space
As the analysis showed, our proof takes up $O(n \log (n))$ space, whereas in most modern protocols, the proof size is somewhere between constant and polylogarithmic in $n$ (e.g. $O(\log ^2 (n))$).
This huge gap is what makes the proposed protocol nothing more than a toy example, that - while demonstrating certain approaches and tricks - is useless for any real application.
You can trace this gap to the fact that we need a linear number of queries, each costing a logarithmic number of hashes (the Merkle authentication paths).
The approach I took was inspired by tricks from the ZK-STARK protocol, which is slightly more expensive than others in terms of proof size, but is expressive, requires relatively short prover time, and very short verifier time. In STARK, indeed the lion's share of the proof is comprised of Merkle authentication paths, but great care is taken so that the number of queries will be minuscule.
Prover Running Time
In our protocol it is roughly $O(n \log (n))$, which is not far from modern protocols.
Verifier Running Time
In our protocol it is linear in the proof size, so $O(n \log n)$ which is not so good. Recall that, at least in the context of blockchains, a proof is written once but verified many times (by miners for example). Modern protocols thus strive to make the verifier workload as small as they possibly can without impeding soundness.
This concludes what I hoped to cover in this tutorial. It was fun to write and code. Let's do it again sometime. :) |
Here is another proof, that actually proves a bit more:
Let $G,H$ be two non-abelian groups of order $6$. Then $G \cong H$.
First off, we seek to show each group has an element of order $3$, and an element of order $2$. Since both are non-abelian, we don't have
any elements in either of order $6$, for such an element would then generate the entire group, which would then be cyclic, and thus abelian.
So we only have non-identity elements of orders $2$ and/or $3$. Could all the non-identity elements be order $2$? No, because if $x$ and $y$ were two distinct such elements (in either group), then $xy$ would be a third element, by supposition
also of order $2$ (Note $xy \neq e$ since $x \neq y$, and $x,y$ are of order $2$). Since:
$e = (xy)^2 = x^2y^2$, we see $x$ and $y$ commute, and thus $\{e,x,y,xy\}$ is a subgroup of $G$ or $H$ of order $4$. But $4\not\mid 6$, so this is impossible.
On the other hand, it is clear that non-identity elements of order $3$ occur in pairs, so it is impossible to have $5$ such. So $G$ (or $H$) has at least one element (and thus at least $2$) of order $3$, and at least one element of order $2$. Let us call these elements $a,b$ for $G$, and say $a',b'$ for $H$ (in what follows, "put the primes on" in your mind to make the corresponding statements for $H$).
Straight off the bat we know $4$ distinct elements: $e,a,a^2,b$. By closure we see that $ab$ is in the group, and cannot equal any of the four prior elements:
$ab = e \implies b = a^{-1} = a^2\\ab = a \implies b = e\\ab = a^2 \implies b = a\\ab = b \implies a = e$
Similarly, we know that $a^2b$ is also in the group, and a similar process of elimination shows it is distinct from the $5$ given so far.
Now $ba$ must also be in the group, and we have just two possibilities: $ba = ab$, or $ba = a^2b$.
If $ba = ab$, then $(ab)^n = a^nb^n$, and thus $ab$ has order $6$. Since $G$ (and $H$) are non-abelian, this cannot be the case.
Thus $G = \{e,a,a^2,b,ab,a^2b\} = \langle a,b: a^3 = b^2 = e, ba = a^2b\rangle$ and similarly:
$H = \langle a',b': a'^3 = b'^2 = e', b'a' = a'^2b'\rangle$ and $\phi:G \to H$ given by:
$e \mapsto e'\\a \mapsto a'\\a^2 \mapsto a'^2\\b \mapsto b'\\ab \mapsto a'b'\\a^2b \mapsto a'^2b'$
is the desired isomorphism (it's clearly bijective, and a homomorphism).
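As a computational sanity check (my own sketch, not part of the proof), one can realize $a$ and $b$ as permutations in $S_3$ and verify the presentation derived above:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(x) = p(q(x)) for permutations given as tuples on {0, 1, 2}."""
    return tuple(p[q[x]] for x in range(3))

e = (0, 1, 2)
a = (1, 2, 0)    # a 3-cycle, so a has order 3
b = (1, 0, 2)    # a transposition, so b has order 2

assert compose(a, compose(a, a)) == e          # a^3 = e
assert compose(b, b) == e                      # b^2 = e
a2 = compose(a, a)
assert compose(b, a) == compose(a2, b)         # ba = a^2 b, the non-abelian relation

# the six words e, a, a^2, b, ab, a^2b exhaust the group
words = {e, a, a2, b, compose(a, b), compose(a2, b)}
assert words == set(permutations(range(3)))
```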
So your particular problem comes down to identifying
one element of order $3$ in $D_3$ and one element of order $3$ in $S_3$, and similarly one element of order $2$ in each group, and showing they (the element of order $3$, and the element of order $2$) do not commute. The isomorphism you exhibit is (essentially) the same one as in my post above (and if you look closely enough, actually shows we have thereby $4$ possible automorphisms of any non-abelian group of order $6$ (because we might have $G = H$)). |
I have just started to read about DMRG and MPS.
It is said that in the case of a simple 1D chain with spin states $|\uparrow\rangle$, $|\downarrow\rangle$, any state in the complete Hilbert space of such a system can be written as:
$|\Psi\rangle=\sum\limits_{i_1,...,i_n} C^{i_1,...,i_n}|i_1,...,i_n\rangle$
Where each index runs over the local basis at each site. This presents a complexity of the order $2^N$; by writing it in the form of matrix product states we would be making a mean-field-like simplification and reduce the complexity to $2N$.
$C^{i_1,...,i_n}=C^{i_1}C^{i_2}...C^{i_n}$
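An illustrative sketch (my own, not from the question) of why the factorized ansatz is cheap: a product state stores only $2N$ local amplitudes yet determines all $2^N$ coefficients $C^{i_1,...,i_N}$:

```python
from itertools import product

N = 4
site = [(0.6, 0.8)] * N        # hypothetical normalized local amplitudes per site

def coefficient(config):
    """C^{i1...iN} = C^{i1} C^{i2} ... C^{iN} for a spin configuration."""
    amp = 1.0
    for n, i in enumerate(config):
        amp *= site[n][i]
    return amp

# the full state vector has 2^N entries, all generated from the 2N stored numbers
psi = [coefficient(cfg) for cfg in product((0, 1), repeat=N)]
assert len(psi) == 2 ** N                           # exponentially many coefficients
assert abs(sum(x * x for x in psi) - 1.0) < 1e-12   # still normalized
assert sum(len(s) for s in site) == 2 * N           # but only 2N parameters
```

An MPS generalizes this by replacing each scalar $C^{i}$ with a matrix, so entanglement between sites can be reintroduced gradually via the bond dimension.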
EDIT: A General case:
Suppose I have Heisenberg chain kind of a 1D Lattice problem with N sites, where each site lives on a Hilbert space of dimensionality D. Given the fact that the dimensionality of Hilbert space of the entire system scales exponentially as $D^N$ because of the entanglement, it becomes a very complex problem to solve.
Questions:
How does writing the system in terms of MPS reduce the complexity of the problem and bring it down to $ND$ ? What is the link between using MPS ansatz and mean field approximation? |
(This is a work in progress. If you have alternative solution put as comment)
(A) no solution; (B) Exactly one solution; (C) exactly two solutions; (D) infinitely many solutions;
2. Let Then is equal to
(A) ; (B) ; (C) ; (D) ;
3. The number of solutions of the equation tan x + sec x = 2 cos x , where
(A) 0; (B) 1; (C) 2; (D) 3;
(4) Using the digits 2, 3, 9 how many six digit numbers can be formed which are divisible by 6?
(A) 41; (B) 80; (C) 81; (D) 161;
(A) ; (B) { } ; (C) ; (D) ;
(A) and ;
(B) and ; (C) and ; (D) and ;
(A) 4; (B) 5; (C) 6; (D) 7;
(A) 63; (B) 70; (C) 126; (D) 144;
(A) ; (B) (A) ; (C) ; (D) is a positive integer
(A) ; (B) ; (C) ; (D) ;
(22) Consider a cyclic trapezium whose circumcenter lies on one of the sides. If the ratio of the two parallel sides is 1:4, what is the ratio of the sum of the two oblique sides to the longer parallel side?
(A) ; (B) 3:2 ; (C) ; (D) ;
(A) f decreases upto some point and increases after that
(B) f increases upto some point and decreases after that (C) f increases initially, then decreases and then again increases (D) f decreases initially, then increases and then again decreases
(A) 64; (B) 100; (C) 200; (D) 560;
(A) exists and is equal to 3; (B) exists and is equal to e; (C) exists and is always equal to f(3) ; (D) need not always exist.
(A) 1; (B) ; (C) 2; (D) ;
(A) 0; (B) ; (C) ; (D) 6;
(i) ;
(ii) Of then
Then,
(A) both (i) and (ii) are always true;
(B) (i) is always true, but (ii) may not always be true. (C) (ii) is always true, but (i) may not always be true. (D) neither (i) nor (ii) is always true.
(29) Let f be a function such that f''(x) exists, and f''(x) > 0 for all . For any point , let A(c) denote the area of the region bounded by y = f(x), the tangent to the graph of f at x = c, and the lines x = a and x = b. Then,
(A) A(c) attains its minimum at for any such f;
(B) A(c) attains its maximum at for any such f; (C) A(c) attains its minimum at both for any such f; (D) the points c where A(c) attains its minimum depend on f.
(A) no solution; (B) Exactly one solution; (C) exactly two solutions; (D) infinitely many solutions;
Discussion:
If we add all of these inequalities we get
This implies
Sum of squares can never be less than 0. They can be equal to zero if and only if each of the squares is zero. Hence the only solution is
ANSWER: (B)
(A) ; (B) ; (C) ; (D) ;
Discussion:
Hence
Now
Replacing log 3 by we get
ANSWER: (A).
(A) 0; (B) 1; (C) 2; (D) 3;
Discussion: Assume $f(x) = \tan x + \sec x - 2 \cos x$
Taking derivative we have
Now in the interval f'(x) >0
Hence the function is always increasing in the given interval (except at points of discontinuity).
The function is discontinuous at So we separately consider two intervals and
Now f(0) = -1 and
Since the function is continuous in the interval and it is negative at x = 0 and positive at , by the intermediate value theorem the function 'cuts' the x axis at least once.
Since the derivative is positive, hence it would ‘cross the x axis’ exactly once in the interval
Similarly we can show that it has exactly one solution in the interval .
ANSWER: (C)
(A) 41; (B) 80; (C) 81; (D) 161;
Discussion:
Since the number is divisible by 6, it has to be even and divisible by 3.
Hence sum of it’s digits must be divisible by 3. Therefore number of 2’s present can be either 0, 3 or 6
(why? Suppose there are two 2's present. Then the sum of the digits will be (a sum of 3's and 9's) + 4. But 3's and 9's are divisible by 3 and 4 is not. Hence the sum of the digits is not divisible by 3, therefore the number is not divisible by 3. In a similar manner we can discuss the remaining cases).
Case 1: 6 two’s are use: 1 case (222222)
Case 2: 3 two’s are used: remaining three digits are a mix of 3’s and 9’s. Now the Last digit has to be 2 (the number must be even). So first 5 digits are filled up with two 2’s and a mix of 3’s and 9’s.
We choose 2 out of 5 spots (for the two's) in $\binom{5}{2} = 10$ ways, and for each of the remaining three spots we have 2 choices (3 or 9), hence $2^3 = 8$.
Hence total number of ways = $10 \times 8 = 80$.
Case 3: 0 two’s are used: not possible as the number is not even.
Total number of numbers = 80 +1 = 81
ANSWER: (C)
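The count is small enough to confirm by brute force (an independent check of the case analysis above):

```python
from itertools import product

# Enumerate all six-digit strings over {2, 3, 9} and count those divisible by 6.
count = sum(
    1
    for digits in product("239", repeat=6)
    if int("".join(digits)) % 6 == 0
)
assert count == 81
```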
(A) ; (B) ; (C) ; (D) ;
(A) 4x - 3y = 5; (B) 3x - 4y = 2; (C) x - y = 1; (D) 2x - 3y = 1;
Since BD bisects hence by angle bisector theorem (assuming y is negative).
Squaring both sides we get
Value of y by solving this equation is -5/3.
Hence equation of the reflected line is
This implies 6(y - 1) = 8(x - 2), or 4x - 3y = 5
ANSWER: (A)
(A) 1; (B) 2; (C) 3; (D) 4;
Discussion:
Let x = I + f (where I = [x] and f = x – [x] )
Hence |2I + 2f – I| = 4
or |I + 2f| = 4, so I + 2f = 4 or -4.
Now 2f = 4 - I or 2f = -4 - I.
Hence 2f must be an integer. Therefore f = 1/2 or 0.
Thus I = 4, -4, 3 or -5
Hence 4 solutions.
ANSWER: (D)
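A brute-force check (assuming, as the derivation suggests, that the original equation was |2x - [x]| = 4): since f = x - [x] must be 0 or 1/2, every solution is a half-integer, so it suffices to scan half-integers in a range:

```python
from fractions import Fraction
import math

# Scan half-integers x = k/2 and keep the solutions of |2x - floor(x)| = 4.
sols = []
for k in range(-40, 41):
    x = Fraction(k, 2)
    if abs(2 * x - math.floor(x)) == 4:
        sols.append(x)

assert sols == [Fraction(-9, 2), Fraction(-4), Fraction(7, 2), Fraction(4)]
```

The four values match I = 4, -4 (with f = 0) and I = 3, -5 (with f = 1/2).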
(A) ; (B) ; (C) ; (D) ;
Discussion:
Note that ratio of the areas of two pentagons is = = =
But since
Hence answer is
ANSWER: (B)
(A) (B) (C) (D)
Discussion:
Suppose A is the origin.
Let us complete the parallelogram with and . By parallelogram law of vector addition the other vertex . So the length of Also the length of
So according to the problem length of two diagonals of the parallelogram are equal.
Hence it is a rectangle. Therefore is a right angled triangle implying circumcenter is in the middle of hypotenuse. It is
ANSWER: (C)
(A) 308; (B) 364; (C) 616; (D)
Discussion:
First we give one chocolate to each student. That leaves us with 12 more chocolates. Now we must give one chocolate more to each of two students. So we choose the two students in ways.
Finally remaining 10 chocolates have to be distributed among those two students (as if we give a chocolate to any other student then he will have two or more chocolates which we do not want) . This is same as the number of non negative integer solutions of the equation which is .
Hence the total number of ways is
ANSWER: (A)
(A) ; (B) ; (C) ; (A) ;
Discussion:
Teacher: This diagram is your clue. Side of the triangle is of length ‘s’. FEDC is the square. A is the center. FE is tangent to the circle at B. BA extended intersects CD at H. Suppose Student: Is B the midpoint of EF? Teacher: Right guess! Can you prove this? Student: Let us extend BA to meet CD at H. AB is perpendicular to EF at B as A is the center and EF is tangential to the circle at B. So AH is perpendicular to CD at H (as CD parallel EF).
Now H is the center of CD as A is the center of the circle and AH is perpendicular to the chord CD at H. As H is the midpoint of CD, so B is the midpoint of EF (as BH is parallel to CF as both are perpendicular to EF).
Teacher: Very nice. Now apply a little trigonometry. Alternatively you may use Pythagoras Theorem. Student: (A solution due to Tiya Chakrabarty). Let us use Pythagoras Theorem first.
Also AH + AB = s. Since AB = r and, from the right triangle AHC, \( AH = \sqrt{ r^2 - \frac{s^2}{4} } \),
we get \( s - r = \sqrt{ r^2 - \frac{s^2}{4} } \), and hence \( (s-r)^2 = r^2 - \frac{s^2}{4} \).
(A) 8; (B) 20; (C) 24; (D) 25;
Discussion:
Teacher: We have to maximize the number of 2's in the product of the five numbers chosen.
Student: So we have to choose numbers from 1 to 100 which have the largest number of 2's in their prime factorization. Like 64, 32, 16 etc.
Teacher: Right. So which five numbers will you choose?
Student: First 64, because it is the highest power of 2 (the 6th power) from 1 to 100. Next we choose 32. Then we can pick 96, because it also has five 2's. Finally we pick 16 and 48 (with four 2's each).
So total 24 two’s in the product.
Teacher: Excellent! So ANSWER is (C) .
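The pick can be brute-force checked in a few lines (a sketch; the helper `v2(n)` counts the factors of 2 in `n`):

```python
def v2(n):
    """Exponent of 2 in the prime factorization of n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Take the five numbers in 1..100 with the most factors of 2:
best = sorted(range(1, 101), key=v2, reverse=True)[:5]
print(sum(v2(n) for n in best))  # 24
```

Since the total number of factors of 2 is just the sum of the individual counts, picking the five largest counts is optimal.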
(A) zero for some ; (B) positive for all ; (C) negative for all ; (D) strictly increasing;
(14) Let A be the set of all points (h, k) such that the area of the triangle formed by (h, k), (5, 6) and (3, 2) is 12 square units. What is the least possible length of a line segment joining (0, 0) to a point in A?
(A) ; (B) ; (C) ; (D) ;
Teacher: There is a formula to compute the area of a triangle when the coordinates of its three vertices are given. Suppose are the three vertices; then the area is \( { \frac{|x_1 y_2 - x_1 y_3 + x_2 y_3 - x_2 y_1 + x_3 y_1 - x_3 y_2|}{2}} \). You may apply it here.
Student: Okay. Applying it we have two cases: k = 2h + 8 or k = 2h - 16. Now the distance of (h, k) from the origin is \( \sqrt{h^2 + k^2} \). Replacing k in terms of h we have two expressions for the distance: \( \sqrt{5h^2 + 32h + 64} \) (for k = 2h + 8) and \( \sqrt{5h^2 - 64h + 256} \) (for k = 2h - 16). I think we can separately compute the lowest value of these two expressions and use the lower of the two.
Teacher: Precisely so. One may use calculus or the normal 'completing the square' method here.
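A numerical cross-check of the two cases (a sketch; the expansions $h^2+(2h+8)^2 = 5h^2+32h+64$ and $h^2+(2h-16)^2 = 5h^2-64h+256$ are my own, derived from the two cases above):

```python
import numpy as np

h = np.linspace(-20, 20, 400001)
d1 = np.sqrt(5 * h**2 + 32 * h + 64)    # case k = 2h + 8
d2 = np.sqrt(5 * h**2 - 64 * h + 256)   # case k = 2h - 16
# Completing the square gives minima 8/sqrt(5) and 16/sqrt(5):
print(min(d1.min(), d2.min()))          # ~3.5777, i.e. 8/sqrt(5)
```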
(A) 1; (B) 2; (C) 3; (D) 4;
Teacher: There is a little problem with this problem! The statement is not very clear. Assume that the highest power of 3 that divides c is 1. (Otherwise c could have any power of 3 in its prime factorization, so the highest power of 3 dividing abc could be as large as you please.)
Student: Okay. Now as 3 divides c, it also divides . Hence it divides . Now any square can only be 0 or 1 mod 3. Here both and must be 0 mod 3 (because if one of them is 0 and the other is 1 mod 3, then their sum is 1 mod 3; if both are 1 mod 3, then their sum is 2 mod 3; both contradict the sum being 0 mod 3).
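The student's mod-3 case analysis can be verified exhaustively:

```python
# Squares mod 3 are only ever 0 or 1:
assert {n * n % 3 for n in range(3)} == {0, 1}

# If a^2 + b^2 is divisible by 3, then both a and b are:
for a in range(3):
    for b in range(3):
        if (a * a + b * b) % 3 == 0:
            assert a % 3 == 0 and b % 3 == 0
print("checked")
```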
(A) ; (B) { } ; (C) ; (D) ;
(A) and ;
(B) and ; (C) and ; (D) and ;
(A) 4; (B) 5; (C) 6; (D) 7;
(A) 63; (B) 70; (C) 126; (D) 144;
(A) ; (B) ; (C) ; (D) is a positive integer
(A) ; (B) ; (C) ; (D) ;
(22) Consider a cyclic trapezium whose circumcenter lies on one of the sides. If the ratio of the two parallel sides is 1:4, what is the ratio of the sum of the two oblique sides to the longer parallel side?
(A) ; (B) 3:2 ; (C) ; (D) ;
(A) f decreases upto some point and increases after that
(B) f increases upto some point and decreases after that (C) f increases initially, then decreases and then again increases (D) f decreases initially, then increases and then again decreases
(A) 64; (B) 100; (C) 200; (D) 560;
(A) exists and is equal to 3; (B) exists and is equal to e; (C) exists and is always equal to f(3) ; (D) need not always exist.
(A) 1; (B) ; (C) 2; (D) ;
(A) 0; (B) ; (C) ; (D) 6;
(i) ;
(ii) If then
Then,
(A) both (i) and (ii) are always true;
(B) (i) is always true, but (ii) may not always be true. (C) (ii) is always true, but (i) may not always be true. (D) neither (i) nor (ii) is always true.
(29) Let f be a function such that f''(x) exists, and f''(x) > 0 for all . For any point , let A(c) denote the area of the region bounded by y = f(x), the tangent to the graph of f at x = c, and the lines x = a and x = b. Then,
(A) A(c) attains its minimum at for any such f;
(B) A(c) attains its maximum at for any such f; (C) A(c) attains its minimum at both for any such f; (D) the points c where A(c) attains its minimum depend on f.
Discussion:
Teacher: This can be done by simple angle chasing. Focus on . Try to show that XY bisects .
Student: In triangle , BQ bisects and CN bisects . So Y must be the incenter of , implying XY bisects .
Now . Hence .
ANSWER: (C) |
Differential and Integral Equations, Volume 25, Number 7/8 (2012), 657-664.
Quasilinear equations involving nonlinear Neumann boundary conditions
Abstract
We study the multiplicity of positive solutions of the problem $$ -\Delta_p u+|u|^{p-2}u=0 $$ in a bounded smooth domain $\Omega\subset{\mathbb{R}}^N$, with a nonlinear boundary condition given by $$ |\nabla u|^{p-2}\partial u/\partial\nu=\lambda f(u) +\mu\varphi(x)|u|^{q-1}u, $$ where $f$ is continuous and satisfies some kind of $p$-superlinear condition at 0 and $p$-sublinear condition at infinity, $0<q<p-1$, and $\varphi$ is in $L^\beta(\partial\Omega)$ for some $\beta>1$. In addition, we consider the case $q=0$, where the nonlinear boundary condition becomes an elliptic inclusion. Our approach allows us to show that these problems have at least six nontrivial solutions, three positive and three negative, for some positive parameters $\lambda$ and $\mu$. The proof is based on variational arguments.
Article information Source Differential Integral Equations, Volume 25, Number 7/8 (2012), 657-664. Dates First available in Project Euclid: 20 December 2012 Permanent link to this document https://projecteuclid.org/euclid.die/1356012656 Mathematical Reviews number (MathSciNet) MR2975688 Zentralblatt MATH identifier 1265.35075 Citation
Iturriaga, Leonelo; Lorca, Sebastián; Saavedra, Eugenio; Ubilla, Pedro. Quasilinear equations involving nonlinear Neumann boundary conditions. Differential Integral Equations 25 (2012), no. 7/8, 657--664. https://projecteuclid.org/euclid.die/1356012656 |
Functiones et Approximatio Commentarii Mathematici, Volume 47, Number 1 (2012), 121-141.
Algebraic independence of certain numbers related to modular functions
Abstract
In previous papers the authors established a method for deciding the algebraic independence of a set $\{ y_1,\dots ,y_n \}$ when these numbers are connected with a set $\{ x_1,\dots ,x_n \}$ of algebraically independent parameters by a system $f_i(x_1,\dots ,x_n,y_1,\dots ,y_n) =0$ $(i=1,2,\dots ,n)$ of rational functions. Constructing algebraically independent parameters by Nesterenko's theorem, the authors successfully applied their method to reciprocal sums of Fibonacci numbers and determined all the algebraic relations between three $q$-series belonging to one of the sixteen families of $q$-series introduced by Ramanujan. In this paper we first give a short proof of Nesterenko's theorem on the algebraic independence of $\pi$, $e^{\pi\sqrt{d}}$ and a product of Gamma-values $\Gamma (m/n)$ at rational points $m/n$. Then we apply the method mentioned above to various sets of numbers. Our algebraic independence results include among others the coefficients of the series expansion of the Heuman-Lambda function, the values $P(q^r), Q(q^r)$, and $R(q^r)$ of the Ramanujan functions $P,Q$, and $R$, for $q\in \overline{\mathbb{Q}}$ with $0<|q|<1$ and $r=1,2,3,5,7,10$, and the values given by reciprocal sums of polynomials.
Article information Source Funct. Approx. Comment. Math., Volume 47, Number 1 (2012), 121-141. Dates First available in Project Euclid: 25 September 2012 Permanent link to this document https://projecteuclid.org/euclid.facm/1348578282 Digital Object Identifier doi:10.7169/facm/2012.47.1.10 Mathematical Reviews number (MathSciNet) MR2987116 Zentralblatt MATH identifier 1290.11109 Subjects Primary: 11J85: Algebraic independence; Gelʹfond's method Secondary: 11J89: Transcendence theory of elliptic and abelian functions 11J91: Transcendence theory of other special functions 11F03: Modular and automorphic functions Citation
Elsner, Carsten; Shimomura, Shun; Shiokawa, Iekata. Algebraic independence of certain numbers related to modular functions. Funct. Approx. Comment. Math. 47 (2012), no. 1, 121--141. doi:10.7169/facm/2012.47.1.10. https://projecteuclid.org/euclid.facm/1348578282 |
Let $i:S^1 \vee S^1 \rightarrow S^1 \times S^1$ be the inclusion of the figure eight in the torus. We can consider the following induced homomorphisms:
$i_\star :\pi_1 (S^1 \vee S^1) \rightarrow \pi_1 (S^1 \times S^1)$
$j_\star: H_1(S^1 \vee S^1) \rightarrow H_1 (S^1 \times S^1)$
If we compute these using Van Kampen's Theorem, they are just:
$i_\star :{\text{free group on two generators}} \rightarrow \mathbb{Z} \times \mathbb{Z} $
and since we know $H_1$ is the abelianization of $\pi_1$ we also have
$j_\star: {\text{free abelian group on two generators}} \rightarrow \mathbb{Z} \times \mathbb{Z} $
since $\mathbb{Z} \times \mathbb{Z}$ is already abelian.
Now, my question is, what more can I say about the homomorphisms $i_\star$ and $j_\star$ ? All I know is that $i_\star$ is not injective, since if $a$ and $b$ are the generators of the free group, then $i_\star (ab)=i_\star (a) i_\star (b)=i_\star (b) i_\star (a)=i_\star (ba)$ where the third equality follows from the fact that $\mathbb{Z} \times \mathbb{Z}$ is abelian, but $ab\neq ba$, so the map cannot be injective. I do not know anything more. Is there a way to explicitly describe the induced homomorphisms? If so, is this in general possible? Any help is appreciated. Thank you.
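One can watch this non-injectivity happen concretely with SymPy's free groups (a sketch; the helper `i_star` is my own, recording the signed exponent sums of $a$ and $b$, which is exactly what the map to $\mathbb{Z} \times \mathbb{Z}$ does on each generator):

```python
from sympy.combinatorics.free_groups import free_group

F, a, b = free_group("a, b")  # pi_1 of the figure eight

def i_star(w):
    """Image of a word in Z x Z: signed exponent sums of a and b."""
    ea = sum(e for (s, e) in w.array_form if str(s) == "a")
    eb = sum(e for (s, e) in w.array_form if str(s) == "b")
    return (ea, eb)

assert a * b != b * a                               # distinct in the free group
assert i_star(a * b) == i_star(b * a) == (1, 1)     # same image in Z x Z
assert i_star(a * b * a**-1 * b**-1) == (0, 0)      # commutator lies in the kernel
```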
How do we know that the randomness is not caused by the state
preparation itself?
It depends what you mean here. In some sense
it is caused by the state preparation: the "state preparation" generates the quantum state, which is the cause of the randomness in the measurement outcomes.
But what you probably mean is: how do we know that the randomness in the measurement outcomes is not just due to some "classical uncertainty", that is, how do we know that the state preparation procedure is not just generating half the time the state $\lvert\uparrow\rangle$ and half the time the state $\lvert\downarrow\rangle$?
In other words, how can we make sure that the received state is a pure state $\lvert\psi\rangle=\frac{1}{\sqrt2}(\lvert\uparrow\rangle+\lvert\downarrow\rangle)$ and not a mixture of the form $\rho=\frac{1}{2}(\lvert\uparrow\rangle\langle\uparrow\rvert + \lvert\downarrow\rangle\langle\downarrow\rvert)$? The answer is that, by just looking at the measurement outcome in a fixed basis, you can't. But what you can do is to see what you measure after a rotation of the state.
In this simple case you can for example apply a Hadamard gate to the state (what this means depends on what kind of system is being considered), and measure after this gate. A Hadamard gate is the unitary transformation $H=\frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}$, and you can easily check that $H\lvert\psi\rangle=\lvert\uparrow\rangle$. On the other hand, as you can verify, $H\rho H = \rho$. What this means is that
after the Hadamard, the measurement statistics will tell you whether your initial state was $\lvert\psi\rangle$ or $\rho$: if you always get the result corresponding to $\lvert\uparrow\rangle$ then you had the pure state $\lvert\psi\rangle$ (that is, a spin state prepared along the X axis), while if you still get half the time a result and half the time another result, it means that your initial state wasn't really what you expected (that is, the state preparation procedure was "cheating" by just feeding you sometimes $\lvert\uparrow\rangle$ and sometimes $\lvert\downarrow\rangle$).
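A small numerical illustration of this test (a sketch with NumPy; basis vectors $\lvert\uparrow\rangle=(1,0)$ and $\lvert\downarrow\rangle=(0,1)$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = (up + down) / np.sqrt(2)
rho_pure = np.outer(psi, psi)                              # |psi><psi|
rho_mix = 0.5 * (np.outer(up, up) + np.outer(down, down))  # classical mixture

def prob_up_after_H(rho):
    """Probability of the 'up' outcome after applying the Hadamard."""
    out = H @ rho @ H.conj().T
    return out[0, 0].real

print(prob_up_after_H(rho_pure))  # 1.0 -> deterministic
print(prob_up_after_H(rho_mix))   # 0.5 -> still random
```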
While in this case it's quite simple to distinguish the two cases, the more general problem of discriminating one state from another is nontrivial, and a vast literature has been devoted to its study (just google for
quantum state discrimination to get some references).
If one is instead only interested in certifying the
purity of a state, like in your case, then the problem is somewhat easier than the general quantum state discrimination problem, but still not really trivial in general. One interesting thing is that it turns out that the statistics generated by a random pure state is different from that generated by a random mixed state. This means that one can certify the purity of a state by just applying a random evolution to the state and looking at the output distribution of probabilities (see Beenakker et al. 2009 and Enk and Beenakker 2011).
or by a shot-to-shot fluctuation in the coupling to the measurement device?
This doesn't change the above argument much. You can just model this by saying that the measurement device (that is, the measurement basis) is fixed, while the state that is being prepared changes from shot to shot. Or you can do the opposite and say that the state prepared is fixed while the measurement basis (your "shot-to-shot alignment") changes.
A case in which a similar situation can make sense is the following: I have the state $\lvert\psi\rangle$, and this state undergoes a random unitary rotation (modeling your "shot-to-shot fluctuation") before the measurement. What can you expect to learn about $\lvert\psi\rangle$ in this situation? The answer is: not much. In particular, assuming the rotation is truly random, you can never say whether your state is $\lvert\uparrow\rangle$ or $\lvert\downarrow\rangle$ or $\frac{1}{\sqrt2}(\lvert\uparrow\rangle+\lvert\downarrow\rangle)$ or whatever else. What you
can still say is whether the state is pure or not, thanks to the same protocol employing random rotations mentioned above. |
First off, I should mention that I am a new contributor on this site and as such, please bear with me if this has been asked and answered before. I searched to the best of my ability to find similar questions and didn't find any.
The Question
Consider the definition of a deterministic finite automaton given below:
A deterministic finite automaton $M$ is an
ordered $5$-tuple $ (Q,\Sigma, \delta, q_0, F)$ where: $Q$ is a finite set of states; $\Sigma$ is a finite set of input symbols; $\delta\colon Q \times \Sigma \to Q$ is the transition function; $q_0$ is the initial state; $F$ is the set of final states.
My main question is: would it be inaccurate to define $M$ as the set $ \{Q,\Sigma, \delta, q_0, F\}$ of five elements? I arrived at this question when wondering whether a DFA must necessarily be
ordered, since $n$-tuples are, by definition, ordered. In other words, the tuple $(Q,\Sigma, \delta, q_0, F)$ is, by definition, different from $(Q,\Sigma, \delta, F,q_0)$. But what if we define its elements the same way as given in the definition above? Will it be a DFA? If it will, then can a DFA instead be defined as a set?
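For concreteness, here is a toy DFA written out component by component (my own example, not from the question): it accepts binary strings containing an even number of 1s.

```python
# The five components (Q, Sigma, delta, q0, F):
Q = {"even", "odd"}
Sigma = {"0", "1"}
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
q0 = "even"
F = {"even"}

def accepts(word):
    state = q0
    for symbol in word:
        state = delta[(state, symbol)]
    return state in F

print(accepts("1010"), accepts("1000"))  # True False
```

Note that the program works the same whichever order the five bindings are written in; what matters is which component plays which role, which is the point of the question.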
This was described by EIP 658 which was implemented in the Byzantium fork. The text of the EIP is here, though strangely it doesn't seem to have been formally finalised before the fork.
In any case, the relevant text is this:
For blocks where block.number >= METROPOLIS_FORK_BLKNUM, the
intermediate state root is replaced by a status code, 0 indicating
failure (due to any operation that can cause the transaction or
top-level call to revert) and 1 indicating success.
In terms of your question, then, 1 always equals success. I'm pretty certain that "revert" here doesn't mean "resulting from the
revert opcode", but means, essentially, any condition that causes the state to be reverted - including all the conditions that were formerly called "throws".
Now, note that EIP 658 grew out of EIP 98, which is also described here
EIP98 is authored by Vitalik:
Option 3 (update 2017.07.28: we are going with this one): For blocks
where block.number >= METROPOLIS_FORK_BLKNUM, the intermediate state
root parameter in the receipt should be set to a \x01 byte if the
outermost code execution succeeded, or a zero byte if the outermost
code execution failed.
This confirms that failed transactions (whatever the failure mode) result in 0, only successful should result in 1. It applies only to the "outermost code execution".
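So a client only needs to compare the receipt's status field against 1 (a sketch; `status` stands for the integer decoded from a post-Byzantium receipt):

```python
def tx_succeeded(status: int) -> bool:
    # EIP 658: 1 = outermost code execution succeeded,
    # 0 = state reverted (revert, out of gas, invalid opcode, ...).
    if status not in (0, 1):
        raise ValueError("expected a post-Byzantium status code (0 or 1)")
    return status == 1

print(tx_succeeded(1), tx_succeeded(0))  # True False
```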
Finally, for the ultimate authority, see the Yellow Paper update by Yoichi (not yet merged).
It defines a status code, s'.
It's a bit difficult to read in that form, but I think the relevant definition of the status code is this:
Line 775:
The account's associated code (identified as the fragment whose Keccak
hash is $\boldsymbol{\sigma}[c]_c$) is executed according to the
execution model (see section \ref{ch:model}). Just as with contract
creation, if the execution halts in an exceptional fashion (i.e. due
to an exhausted gas supply, stack underflow, invalid jump destination
or invalid instruction), then no gas is refunded to the caller and the
state is reverted to the point immediately prior to balance transfer
(i.e. $\boldsymbol{\sigma}$).
Line 782 I think says, if the state is thus reverted, then s' is zero:
$$ s' \equiv \begin{cases} 0 & \text{if} \quad \boldsymbol{\sigma}^{**} = \varnothing \\ 1 & \text{otherwise} \end{cases} $$
Other references to s' are in there, and may shed more light.
As Todd has already written an answer for me, maybe I can claim it as an Answer:
Exercise 1.1 in my book Practical Foundations of Mathematics (CUP 1999) reads,
When Bo Peep got too many sheep to see where each one was throughout
the day, she found a stick or a pebble for each individual sheep and
moved them from a pile outside the pen to another inside, or vice
versa, as the corresponding sheep went in or out.
Then one evening there was a storm, and the sheep came home too
quickly for her to find the proper objects, so for each sheep coming
in she just moved ANY one object. She moved all of the objects, but
she was still worried about the wolf. By the next morning she had
satisfied herself that the less careful method of reckoning was
sufficient. Explain her reasoning without the aid of numbers.
My reason for putting it in the book was to try to get some anthropologist to say when and what the original "proof" was, i.e. the cognitive basis of the long-universal belief that this is valid, which provides the justification of counting with numbers.
What I am trying to imagine is how one of our distant ancestors with the cognitive abilities but not the education of a mathematician might approach this. Of course they would not have formulated Peano Induction or Euclidean Infinite Descent. They would have an argument (that we would more or less accept as rigorous) that the result is true for three sheep, then four and five. After that they would use Induction in the naive epistemological sense to convince themselves that it is true for arbitrarily large sets.
It seems plausible that anyone who is challenged to come up with a proof would give the following (albeit non-constructive) proof.
Suppose that some sheep $s_0$ is missing in the evening.
Then the pebble $p_0$ that served as its "name" in the morning was
used for some other sheep $s_1$ in the evening.
But then $s_1$ must have been named by a different pebble $p_1$
in the morning, which named yet another sheep $s_2$ in the evening.
And so on.
All of the sheep $s_0$, $s_1$, $s_2$, ... are different individuals,
and likewise all of the pebbles $p_0$, $p_1$, $p_2$, ...
But, essentially as Euclid says in Book VII, Proposition 31,
this is impossible for a set of sheep.
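For any finite flock, the underlying principle, that a map between two finite sets of the same size is surjective exactly when it is injective, can be checked exhaustively (a sketch for n = 4):

```python
from itertools import product

n = 4
for f in product(range(n), repeat=n):       # every map sheep -> pebbles
    surjective = set(f) == set(range(n))    # every pebble was moved
    injective = len(set(f)) == n            # no pebble moved twice
    assert surjective == injective
print("verified for n =", n)
```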
By chance, this issue came up recently following an internal seminar by Martin Escardo in Birmingham (where I am now an Honorary Research Fellow). He was developing the foundations of arithmetic (in the setting of Homotopy Type Theory, though this was not essential) in such a way that $3\times 5=5\times 3$ could be seen in a primary-school fashion as transposing a rectangle.
He based this on a function $F:{\mathbb{N}}\to{\mathsf{Set}}$ with $F0=\emptyset$ and $F(\mathsf{succ}\ n)=F(n)\coprod{\mathbf{1}}$. In his treatment the most difficult Proposition is $$ F(n)\cong F(m) \Longrightarrow n=m, $$ which he deduced from the Lemma $$ X\coprod{\mathbf{1}}\cong Y\coprod{\mathbf{1}} \Longrightarrow X \cong Y. $$
This Lemma holds in any
lextensive category, i.e. one with finite limits and stable disjoint coproducts.The Proposition follows using Peano induction, since then $$ F(n+1)\cong F(m+1) \Longrightarrow F(n)\cong F(m) \Longrightarrow n=m \Longrightarrow n+1=m+1. $$
I think it is reasonable to suppose that Bo Peep could formulate this Lemma, but I feel it is more plausible that she would use the "infinite descent" argument than the Proposition.
I am not sure whether this answers the original question about justifying "the use of categories, or isomorphisms or equivalences", although maybe Martin's treatment of arithmetic does so.
According to Gardiner-Zoller (Quantum Noise), operators acting on the density matrix can be mapped via e.g. (I'm taking Wigner space as an example, but the same holds for P and Q)
$$a\rho\leftrightarrow\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)W( \alpha,\alpha^*)$$ $$\rho a^\dagger\leftrightarrow\left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right) W(\alpha,\alpha^*)$$
Below, an example is given using the P-function, from which it is clear that if multiple operators are applied on the left or the right of the density matrix, the same correspondences hold, as long as the operators closest to $\rho$ are applied first (i.e. the phase-space representation most to the right, so closest to $W$).
Now my question is: what if there are operators acting on both sides of $\rho$? In the simplest case of $a\rho a^\dagger$ this does not seem to be an issue, because $\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)\left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right) = \left(\alpha^*+\frac{1}{2}\frac{\partial}{\partial \alpha}\right)\left(\alpha+\frac{1}{2}\frac{\partial}{\partial\alpha^*}\right)$,
but it does lead to ambiguity for example for $aa\rho a^\dagger a^\dagger$.
I would expect that the proper way of doing this is still from the inside out, alternating operators from the left and the right; also because this way I obtain a result that is real. Is this correct? How to properly see this? |
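Treating $\alpha$ and $\alpha^*$ as independent variables, SymPy confirms that the two first-order differential operators commute, so at least $a\rho a^\dagger$ is unambiguous (a sketch):

```python
import sympy as sp

alpha, alphastar = sp.symbols("alpha alphastar")
W = sp.Function("W")(alpha, alphastar)

L = lambda f: alpha * f + sp.Rational(1, 2) * sp.diff(f, alphastar)  # a rho
R = lambda f: alphastar * f + sp.Rational(1, 2) * sp.diff(f, alpha)  # rho a^dagger

# The two operators commute, so the order does not matter here:
assert sp.expand(L(R(W)) - R(L(W))) == 0
print("operators commute")
```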
Let P be a Sylow p-subgroup of a finite group G and let H be a subgroup of G containing \( N_{G}(P) \). Prove that \( H = N_{G}(H) \).
Solution
Let \( P \in \mathrm{Syl}_p(G) \) and \( H \leq G \) be such that \( N_{G}(P) \subseteq H \).
Claim (Frattini's Argument):
If G is a finite group with a normal subgroup H, and \( P \in \mathrm{Syl}_p(H) \), then
\( G = H N_{G}(P) \),
where \( N_{G}(P) \) is the normalizer of P in G,
and \( H N_{G}(P) \) denotes the product of the group subsets H and \( N_{G}(P) \).
PROOF :
By Sylow's second theorem we know that any two Sylow p-subgroups are conjugate to each other, and here we are considering P as a Sylow p-subgroup of H.
So any two Sylow p-subgroups of H are conjugate in H. ………………………………….(1)
Now, for any \( g \in G \) we have \( gHg^{-1} = H \) (as \( H \trianglelefteq G \) is given), so \( gPg^{-1} \subseteq H \); but \( gPg^{-1} \) is also a group, and \( |gPg^{-1}| = |P| \).
So \( gPg^{-1} \) is also a Sylow p-subgroup of H.
So \( \exists h \in H \) such that \( gPg^{-1} = hPh^{-1} \) (by (1)), whence \( h^{-1}gPg^{-1}h = P \Rightarrow h^{-1}g \in N_{G}(P) \Rightarrow g \in HN_{G}(P) \). Hence \( G \subseteq HN_{G}(P) \).
Now clearly \( HN_{G}(P) \subseteq G \).
So we have \( G = HN_{G}(P) \),
which proves the claim.
Now we come back to our question.
If we consider \( G' = N_{G}(H) \), then \( H \trianglelefteq G' \),
and \( P \leq H \), where P is a Sylow p-subgroup of G, so it is a Sylow p-subgroup of G' (and of H) as well.
So by Frattini's Argument applied in \( G' \), we have \( G' = H N_{G'}(P) \). Since \( N_{G'}(P) \subseteq N_{G}(P) \subseteq H \), this gives \( G' = H \), i.e. \( N_{G}(H) = H \).
I just have the feeling that there must be some relation between Alexander duality and linking numbers, but I don't know what is that. Will anyone tell me anything about that? Or could anyone give some references? Thanks.
Not only is there a relation between them: in fact the linking number may be defined (and generalized to higher dimensions) via Alexander duality, as follows:
Let $M,N\subset \mathbb{R}^{m}$ be $p$ and $q$ dimensional manifolds (compact, connected, oriented and without boundary) embedded in $\mathbb{R}^{m}$ with $m=p+q+1$.
Alexander duality tells us
$$ H^{p}(M)\cong H_{q}(\mathbb{R}^{m}\setminus M) \quad (\cong \mathbb{Z})$$
(in particular the homology of $M$ does not depend on the embedding). Now consider the inclusion $i\colon N \to \mathbb{R}^{m}\setminus M$, which induces a map on homology
$$i_{*}\colon H_{q}(N)\to H_{q}(\mathbb{R}^{m}\setminus M)\cong \mathbb{Z}$$
This map then sends the fundamental class of $N$ (a fixed generator of the infinite cyclic group $H_{q}(N)$ defining the orientation of $N$) to
some integer times the generator of the infinite cyclic group $H_{q}(\mathbb{R}^{m}\setminus M)$ which is the image under the previous isomorphism of the fundamental class of $M$.
This integer is the linking number of $M$ and $N$.
You may want to check that this definition agrees with the classical one for the lower dimensions where it is defined. Namely, you can check that for two knots in space ($p=q=1$) you get the same integer (if you choose the sign carefully, and up to a sign in any case) with the classical definition:
Project both knots onto the plane in a nice way and for every time the first one passes under the second one, sum $+1$ or $-1$ depending on the orientation of the crossing.
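One can also check the classical definition numerically with the Gauss linking integral $\frac{1}{4\pi}\oint\oint \frac{(x-y)\cdot(dx\times dy)}{\lvert x-y\rvert^3}$ (a sketch; the two parametrized circles below form a Hopf link, so the result should be $\pm 1$, with the sign depending on orientations):

```python
import numpy as np

def linking_number(c1, c2, n=400):
    """Discretized Gauss linking integral for two closed curves in R^3."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = c1(t), c2(t)                        # shape (n, 3)
    dx = np.roll(x, -1, axis=0) - x            # secant approximations of dx
    dy = np.roll(y, -1, axis=0) - y
    total = 0.0
    for i in range(n):
        r = x[i] - y
        triple = np.sum(np.cross(dx[i], dy) * r, axis=1)   # (dx x dy) . r
        total += np.sum(triple / np.linalg.norm(r, axis=1) ** 3)
    return total / (4 * np.pi)

c1 = lambda t: np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
c2 = lambda t: np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)
print(abs(round(linking_number(c1, c2))))  # 1
```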
You can check that these definitions agree using for example the
Wirtinger presentation of $\pi_{1}(\mathbb{R}^{3}\setminus K)$, which then gives you a presentation of the homology on degree 1 (its abelianization).
Finally, a reference as requested:
Knots and links by Dale Rolfsen (see page 132). |
I have found a table of the logs of gamma functions at basic fractions, accurate to 60 decimal digits. It omits $\ln(\Gamma(1/2)) = \ln(\pi)/2$ and I want that number.
Where is a citable source that contains this number to high accuracy?
Project Gutenberg has published a compilation of 'Miscellaneous Mathematical Constants' which includes the natural log of pi to 2000 decimal places as:
log(Pi) natural logarithm of Pi to 2000 places.
1.1447298858494001741434273513530587116472948129153115715136230714721377698848 260797836232702754897077020098122286979891590482055279234565872790810788102868 252763939142663459029024847733588699377892031196308247567940119160282172273798 881265631780498236973133106950036000644054872638802232700964335049595118150662 372524683433912698965797514047770385779953998258425660228485014813621791592525 056707638686028076345688975051233436078143991414426429596712897781136526452345 041059007160818570824981188183186897672845928110257656875172422338337189273043 288217348651042761532375161028392221340143696717585616442473718780506046692056 283377310133621627451589875201512996545465739691528252391695852453793594601400 379956519666036538000112659858500129765699060744667455472671045084950668558743 390774251341592412652317771784917799588095767880510296444750901508911403278080 768337337938949488075152890091875363766086707435833345108139232535574067684327 431198049633999761803046221286361595859836404758009861799938264629277646275948 484896414107483132593462053635073046055030768215494444154778884559535228440047 850918217255915179900785243523837112867132342905566964492585582623118824223244 661476739136153339414264534600881979155478967757529878307593230499751706785370 666315222134751026417324918906534257373051835228316776877311442944368108997522 287634554909933469253981028398378467695079971965163008386496663274223886761392 944112379606529081463545502415193643368404005225615575618053680459613160686367 226297126848055518038239624057983138433955882483556816617339018195508924667782 042898879384623081953507082523699065543916029676565349509487102686726405036344 889957813954840804697878603723560031033518890166410542245140400821480026071893 924502077785635698810693233664357379481092927781936265980614204270094398298364 733767922501305495445975380037647617519082652294857728828349379913418698964043 483457091550460629912859614271432256377699794328889523074041463529466113313641 
884192574888189320796571991444939402534883228262813
Here it is, to 80 digits.
0.57236494292470008707171367567652935582364740645765578575681153573606\ 888494241304
You can cite me, "personal communication".
I know that you are asking for a table or software where you can just look this up, but this number is quickly computable to high accuracy with the right approach. (Assuming that you have an engine that can keep track of high precision in the first place.)
To speed up computations, use that $$\ln(\pi)=\ln\left(\frac{\pi}{22/7}\right)+\ln(2)+\ln(11)-\ln(7)$$ The latter three terms can be looked up to high precision, and now you have to find $\ln(7\pi/22)$, where $7\pi/22$ is very close to $1$. In fact $|1-7\pi/22|<0.00041$. So even the slowly converging Taylor series for $\ln$ can output a result quickly, assuming that you again use a table, this time to look up many digits of $\pi$.
$$\begin{align} \ln(1+(7\pi/22-1)) & = -\sum_{n=1}^\infty\frac{(-1)^n}{n}(7\pi/22-1)^n \end{align}$$ Since $(7\pi/22-1)<0$, we do not really have an alternating series, and we'll have to think about Taylor's error bound. The $n+1$st derivative of $\ln(1+x)$ is $(-1)^{n}n!/(1+x)^{n+1}$. On the interval $[-0.00041,0.00041]$, this is bounded in absolute value by $n!/0.99959^{n+1}$. So an error bound for the $n$th partial sum would be $$\frac{n!/0.99959^{n+1}}{(n+1)!}0.00041^{n+1}=\frac{1}{n+1}\left(\frac{0.00041}{0.99959}\right)^{n+1}$$ Using $n=17$ brings this under $10^{-62}$.
At present, I do not have access to an engine that could keep track of enough decimal precision, or else I would conclude my answer with the end result of this process. But here is the formula it yields, with an absolute error bound of less than $10^{-62}$:$$\ln(\pi)\approx-\sum_{n=1}^{17}\frac{(-1)^n}{n}(7\pi/22-1)^n+\ln(2)+\ln(11)-\ln(7)$$ |
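Putting this together with Python's `decimal` module (a sketch; the 80-digit value of $\pi$ is hard-coded, and `Decimal.ln()` supplies $\ln 2$, $\ln 11$ and $\ln 7$):

```python
from decimal import Decimal, getcontext

getcontext().prec = 70
pi = Decimal("3.1415926535897932384626433832795028841971693993751"
             "0582097494459230781640628620899")
x = 7 * pi / 22 - 1                       # |x| < 0.00041

# ln(1 + x) via 17 Taylor terms, as bounded above:
ln_7pi_22 = sum((-1) ** (n + 1) * x**n / n for n in range(1, 18))
ln_pi = ln_7pi_22 + Decimal(2).ln() + Decimal(11).ln() - Decimal(7).ln()
print(ln_pi)  # 1.14472988584940017414342735135305871164729481291531...
```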
Let $E$ be a finite set, let $2^E$ denote its power set, then it is well-known that $(2^E, \subseteq)$ is not only a poset, but even a poset with the meets equal to intersections and joins equal to unions.
From this it follows that every topology (family of open sets) on $E$ is a sub-lattice, as is every family of closed sets, since "arbitrary unions" reduce to "finite unions" and "arbitrary intersections" reduce to "finite intersections" when $2^E$ is finite.
The question in the title corresponds then to the converse of this fact, namely:
Question:For any finite set $E$, does any sub-lattice of $(2^E, \subseteq)$ define a topology?
Clearly being a sub-lattice implies closure under finite non-empty unions and finite non-empty intersections. But whether a sub-lattice has to be closed under empty unions and empty intersections, hence contain both $\emptyset$ and $E$ and therefore be a topology, is what makes my head hurt.

Issue 1: One will often read that the third axiom of open sets (or closed sets), that both $\emptyset$ and $E$ have to be open (respectively closed) sets, is supposed to be redundant, since closure under finite intersections (respectively finite unions) implies closure under the empty intersection, so that $E$ is open (respectively closure under the empty union, so that $\emptyset$ is closed).
However, this logic seems somewhat flawed, depending specifically on how the closure under finite unions/intersections axiom is stated. For definiteness consider the case of closed sets. One will often write that closed sets are closed under finite unions in the following manner:
If $X_1, X_2$ are both closed, then $X_1 \cup X_2$ is also closed.
Clearly,
by induction, it follows from this that all non-empty finite unions are again closed sets (the union of one set, case $n=1$, following from taking $X_1 = X_2$).
But $n=0<1$ and $n=0<2$; so how can closure under an empty union follow by induction from finite unions with $n=1,2$ as the base cases?
However, if one simply wrote "Closed sets are closed under finite unions", then it would be clear to me that the empty union should be included, because that is clearly a finite union.
Issue 2: The flats of a matroid with ground set $E$ always form a sub-lattice of $(2^E, \subseteq)$. So if this converse were true, it would mean that every family of flats forms a topology on the ground set.
However, in general matroid axiomatizations and topological space axiomatizations are supposed to be inequivalent. This answer gives an example of how the matroid analog of topological closed sets, flats, need not be closed under unions.
The Kuratowski closure axioms are also evidently different from the matroid closure axioms. Likewise the cyclic set axioms for matroids guarantee closure under finite unions, but not under finite intersections. The hyperplane axioms are similar but not the same as the closed base axioms, and the circuit axioms are similar but not the same as the open base axioms.
Finally, Oxley, in
Matroid Theory, only states that the least element of the flat lattice is $cl(\emptyset)$, which seems to imply that the (matroid) closure of the empty set is not always the empty set, hence $\emptyset$ is not always a flat thus not always the least element of the flat sub-lattice of $(2^E, \subseteq)$. In contrast, $cl(\emptyset)=\emptyset$ always for the topological (Kuratowski) closure, since the empty set is always closed, either per fiat or due to closure under finite unions.
(Edit: the lattice of flats is not in fact a sub-lattice of $(2^E, \subseteq)$, since the join operation is different. The meet operation is the same, though.)
Also, the claim in Issue 1 corresponds to one of the flat axioms being "$E$ is a flat", even though the second flat axiom is "$F_1, F_2$ flats implies $F_1 \cap F_2$ is a flat". I.e. the second axiom doesn't seem to imply closure under the empty intersection, hence not closure under all finite intersections. Issue 3: Footnote 6 at the bottom of page 10 here says that [emphasis mine]:
A lattice is a partial order in which any finite non-empty set has an infimum and a supremum. Equivalently, any two elements $x$ and $y$ have a supremum $x \lor y$ and an infimum $x \land y$.
In particular, based on this statement it does not seem like the empty set has to have an infimum (it can, but it doesn't have to), which for $(2^E, \subseteq)$ would seem to correspond to sub-lattices not needing to be closed under empty intersections, hence not needing to contain $E$. Likewise, it does not seem like the empty set has to have a supremum; in particular, sub-lattices of $(2^E, \subseteq)$ need not contain $\emptyset$.
Moreover, the statement that "the supremum and infimum only have to exist for finite non-empty sets in the lattice, equivalently any two elements are closed under meet and join" seems to correspond to my claim that the statement [$X_1, X_2$ closed $\implies$ $X_1 \cup X_2$ closed] cannot be used with induction to prove closure under empty unions.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor, export_graphviz
import graphviz
The dataset
To ease comparison with the former approaches, I stuck with the airline passengers dataset from before. Since tree-based estimators can only work with stationary data, we need to remove any form of non-stationarity. This is the same problem as with the Naive Bayes approach from [2], therefore the preprocessing is the same as for that model.
data = pd.read_csv("passengers.csv", header=0)
data.head()
   Month    International airline passengers: monthly totals in thousands. Jan 49 ? Dec 60
0  1949-01  112
1  1949-02  118
2  1949-03  132
3  1949-04  129
4  1949-05  121
data.set_index("Month",inplace=True)
plt.figure(figsize=(12,6))
plt.plot(data.values)
train_size = len(data) - 36
test_size = len(data) - train_size
train, test = data.iloc[:train_size], data.iloc[train_size:]
train_diffed = train.diff().dropna().values
test = test.values
t_train = np.arange(len(train_diffed)).reshape(-1,1)
t_test = np.arange(len(train_diffed),test_size+len(train_diffed)).reshape(-1,1)
plt.figure(figsize=(12,6))
plt.plot(train_diffed)
trend_removed = train_diffed.reshape(-1) / ((t_train+1)**(1/2)).reshape(-1)
plt.figure(figsize = (12,6))
plt.plot(trend_removed)
train_full = trend_removed[5:]
t_train = np.arange(len(train_full)).reshape(-1,1)
t_test = np.arange(len(train_full),test_size+len(train_full)).reshape(-1,1)
The model
Now comes the fun part. As an interesting twist, we will stay completely in the time-domain for the depending variables and won't employ any sort of autoregressive approach - i.e. we will not regress the present realization on past realizations like
$$X_t=f(X_{t-1},X_{t-2},...,X_1)$$
but rather go with
$$X_t=f(t)$$
The challenge when trying to use a tree model to regress on the time-index is obviously the continuous increase of $t$ once we leave the training data. Imagine building a Decision Tree with data from periods $[1,50]$ and wanting to forecast periods $[51,75]$. Per the inductive bias of the tree algorithm, predictions for times outside of the training period will be flat:
tree_model = DecisionTreeRegressor(max_depth=3, random_state=123)
tree_model.fit(t_train, train_full)
pred = tree_model.predict(t_test)
pred_mean = np.full(36,np.mean(train_full))
plt.figure(figsize=(12,6))
plt.plot(np.concatenate([train_full, pred]), label="Training data")
plt.plot(np.arange(len(train_full),36+len(train_full)), pred, label="Out of sample forecast")
plt.plot(np.full(int(len(train_full)+36),np.mean(train_full)), label="Unconditional Mean")
plt.legend()
In fact, each future time-index will fall into the exact same leaf, namely the one reached whenever $$t_{future} > \text{largest time-index among all binary split thresholds}$$
While we could assume that the flat line is the best possible prediction, we likely miss out on the obviously recurring patterns in the time-series. Also, the height of the line seems to be a much worse predictor than the unconditional mean of the time-series - not a good model so far.
What we want is to somehow express the periodic patterns in our time variable and use those in our tree model. A first solution would be to create new features by squashing the time-index through (co-)sine functions with different frequencies $p$:
$$g_{sin}(t)=sin(p\cdot t)$$
$$g_{cos}(t)=cos(p\cdot t)$$
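For illustration, such (co-)sine features could be built like this (a hypothetical sketch, written in terms of periods rather than raw frequencies $p$; the candidate periods 12 and 3 for monthly data are assumptions that would need tuning):

```python
import numpy as np

def fourier_features(t, periods=(12, 3)):
    """Map a time index onto sin/cos waves of the given periods."""
    t = np.asarray(t, dtype=float).reshape(-1, 1)
    cols = []
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    # shape (len(t), 2 * len(periods)); values repeat every p steps
    return np.concatenate(cols, axis=1)
```

Unlike the raw time index, these features take the same values in the training and forecasting periods, so a tree can generalize across them.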
This might indeed make sense and be a valid solution. However there is an even easier way that avoids having to find the right frequencies and - as a bonus - allows to make the resulting Decision Tree interpretable (as long as its size is small enough to be human readable).
The simple trick here is to create new features by using the modulo operator on $t$:
$$g_i^*(t)=t\,mod\,i,\quad i\in\mathbb{Z}^+$$
By doing so, we project time onto an integer circle that gets traversed every $i$ periods. We then have
$$g_i^*(t)\in\{0,...,i-1\}\quad\forall t$$
regardless of whether we are in the training or forecasting period. We can then create multiple features $g_i^*(t)$ by varying $i$ over some range. Let's implement the proposed procedure:
mod_train = np.concatenate([t_train%t for t in range(1,37)],1)
mod_test = np.concatenate([t_test%t for t in range(1,37)],1)
np.random.seed(123)
tree_model = DecisionTreeRegressor(max_depth=3, random_state=123)
tree_model.fit(mod_train, train_full)
pred_tree = tree_model.predict(mod_test)
plt.figure(figsize=(12,6))
plt.plot(np.concatenate([train_full, pred_tree]), label="Training data")
plt.plot(np.arange(len(train_full),36+len(train_full)), pred_tree, label="Out of sample forecast")
plt.plot(pred_mean, label="Unconditional Mean")
plt.legend()
The forecast looks reasonable - the obvious patterns from the transformed training data seem to be recognized by our model. Now we can evaluate on our actual test set (of course, we need to invert the initial transformation of our dataset first).
pred = (np.cumsum(pred_tree)*((t_test+1)**(1/2)).reshape(-1) + train.iloc[-1,0])
pred_mean = (np.cumsum(pred_mean)[:36]*((t_test+1)**(1/2)).reshape(-1) + train.iloc[-1,0])
plt.figure(figsize=(12,6))
plt.plot(test, label = "Out of sample test data")
plt.plot(pred, label = "Forecast")
plt.plot(pred_mean, label = "Unconditional mean")
plt.legend()
np.sqrt(np.mean((pred - test.reshape(-1))**2))
26.69749158609182
np.sqrt(np.mean((pred_mean - test.reshape(-1))**2))
75.27880388952589
The results on the test set look fine - our model clearly outperforms the naive forecast and is close to the former Naive Bayes model. Now let's output the exact model that the Decision Tree has learnt:
graphviz.Source(export_graphviz(tree_model, out_file = None, feature_names = ["Period=%s" %(i) for i in range(1,37)]))
We can see that the model learnt the rather obvious yearly pattern (Period=12), a reasonable quarterly pattern (Period=3), and a pattern that repeats every four months (Period=4). Interestingly, the model also learnt two patterns that aren't as obvious as the other three, namely a Period=13 and a Period=25 pattern. Although these two patterns don't contribute as much to the reduction of MSE in each node, it might be interesting to use this knowledge for further modeling.
Conclusion
In this rather short post, we looked at a reasonable way to use a Decision Tree for time-series forecasting. The proposed approach can be made quite sparse in terms of model parameters - in the simplest case we could go with a Decision Tree stump that splits the input space only once. This can be quite advantageous for avoiding overfitting on time-series problems where the amount of available training data is small. On the other hand, there is of course the easy interpretability of tree models, which allows us to easily explain our forecasts to potential stakeholders. Obviously, we could also add an autoregressive component or external regressors here to make our model more powerful. To enhance our predictive power while keeping the model interpretable, we could switch over to the RuleFit algorithm, which I explained here and also applied to a non-time-series problem here.
If you start with a monatomic gas then the only degrees of freedom available are the three translational degrees of freedom. Each of them absorbs $\tfrac{1}{2}kT$ of energy, so the specific heat (at constant volume) is $\tfrac{3}{2}k$ per atom or $\tfrac{3}{2}R$ per mole.
If you move to a diatomic molecule there are two rotational modes as well - only two extra modes because rotation about the axis of the molecule has energy levels too widely spaced to be excited at normal temperatures. Each of those two rotational degrees of freedom will soak up another $\tfrac{1}{2}kT$, giving a specific heat of $\tfrac{5}{2}k$ per molecule or $\tfrac{5}{2}R$ per mole.
But the rotational energy levels are quantised, with energies $E = 2B, 6B, 12B$ and so on above the ground state, where $B$ is the rotational constant for the molecule:
$$ B = \frac{\hbar^2}{2\mu d^2} $$
where $\mu$ is the reduced mass and $d$ is the bond length. So these rotational energy levels will only be populated when $kT$ is a lot greater than $B$ - say 10 to 100 times greater. You can look up the rotational constant of nitrogen, or it's easy enough to calculate, and the result is:
$$ B \approx 3.97 \times 10^{-23} \text{J} $$
which is about $3k$. So as long as the temperature is above say $30K$ the rotational modes will be excited and nitrogen will have a specific heat of $\tfrac{5}{2}R$. If you go down to temperatures of $3K$ and below then the specific heat will fall to $\tfrac{3}{2}R$ just like a monatomic gas.
The specific heat of nitrogen at constant volume is 0.743 kJ/(kg.K), and converting this to J/mole.K we get 20.8 J/(mole.K) and this is indeed 2.50R (to three significant figures).
The conformist mentions that the vibrations of the nitrogen molecule will contribute to the specific heat, and indeed they will. However the energy of the first vibrational mode is 2359 cm$^{-1}$, which converted to non-spectrogeek units is $4.7 \times 10^{-20}$ J or about $3400k$. So the vibrational mode isn't going to contribute to the specific heat until the temperature gets above 3400K. |
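The three numerical claims above are easy to verify (a quick sketch; the bond length $d \approx 1.098$ Å and the atomic mass of nitrogen are assumed standard values, not taken from the text):

```python
# Verify the rotational constant, molar heat capacity, and vibrational
# energy scale quoted above, using CODATA-style constants.
hbar = 1.054571817e-34   # J s
k = 1.380649e-23         # J/K (Boltzmann constant)
h = 6.62607015e-34       # J s
c = 2.99792458e10        # cm/s, for the wavenumber conversion
u = 1.66053907e-27       # kg, atomic mass unit

m_N = 14.003074 * u      # mass of one nitrogen atom
mu = m_N / 2             # reduced mass of N2 (two equal masses)
d = 1.098e-10            # N2 bond length in m (assumed standard value)

B = hbar**2 / (2 * mu * d**2)     # rotational constant, ~3.97e-23 J, about 3k
cv_molar = 0.743e3 * 0.0280134    # 0.743 kJ/(kg K) times molar mass of N2
E_vib = h * c * 2359              # 2359 cm^-1 -> ~4.7e-20 J, ~3400 K in units of k
```

Running this reproduces $B \approx 3.97\times10^{-23}$ J, a molar heat capacity of about 20.8 J/(mol K), i.e. $2.50R$, and a vibrational energy of roughly $3400k$.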
This is a Test of Mathematics Solution (from ISI Entrance). The book, Test of Mathematics at 10+2 Level is Published by East West Press. This problem book is indispensable for the preparation of I.S.I. B.Stat and B.Math Entrance.
If \(a_0, a_1, \cdots, a_n \) are real numbers such that $$ (1+z)^n = a_0 + a_1 z + a_2 z^2 + \cdots + a_n z^n $$ for all complex numbers z, then the value of $$ (a_0 - a_2 + a_4 - a_6 + \cdots )^2 + (a_1 - a_3 + a_5 - a_7 + \cdots )^2 $$ equals
(How to use this discussion: Do not read the entire solution at one go. First, read more on the Key Idea, then give the problem a try. Next, look into Step 1 and give it another try, and so on.)
Key Idea
This is a generic use case of the complex number \( i = \sqrt {-1} \) and the binomial theorem.
Step 1
Note that \( i^2 = -1 \). Also, geometrically speaking, \( i = (0,1) \). Hence adding \( (1,0) \) to \( i = (0,1) \) gives us the point \( (1, 1) \). The polar coordinates of this point are \( ( \sqrt 2, \frac{\pi}{4} ) \).
Try the problem with this hint before looking into step 2. Remember, no one learnt mathematics by looking at solutions.
Step 2
Replace \( z \) by \( i \). On the left hand side we have \( (1+i)^n = (\sqrt 2 , \frac {\pi}{4} )^n = (2^{n/2}, \frac{n \cdot \pi }{4} ) \) in polar form.
Now, replace \( z \) by \( i \) on the right hand side.
Replacing \( z \) by \( i \) on the right hand side we have $$(2^{n/2}, \frac{n \cdot \pi }{4} ) = a_0 + a_1 i + a_2 i^2 + a_3 i^3 + \cdots + a_n i^n .$$ This implies $$ (2^{n/2}, \frac{n \cdot \pi }{4} ) = a_0 - a_2 + a_4 - \cdots + i (a_1 - a_3 + a_5 - \cdots ) $$
Think now about what the following expression represents: $$ (a_0 - a_2 + a_4 - a_6 + \cdots )^2 + (a_1 - a_3 + a_5 - a_7 + \cdots )^2 $$
It represents the squared distance from the origin of the point \( (2^{n/2}, \frac{n \cdot \pi }{4} ) \) (in polar coordinates). That is simply \( (2^{n/2})^2 = 2^n \).
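The answer $2^n$ is easy to sanity-check numerically, since $a_k = \binom{n}{k}$ and the two alternating sums are just the real and imaginary parts of $(1+i)^n$ (a quick Python sketch; the function name is mine):

```python
from math import comb

def alternating_sums_squared(n):
    """Compute (a0 - a2 + a4 - ...)^2 + (a1 - a3 + a5 - ...)^2 for (1+z)^n."""
    a = [comb(n, k) for k in range(n + 1)]
    even = sum((-1) ** (k // 2) * a[k] for k in range(0, n + 1, 2))   # real part
    odd = sum((-1) ** ((k - 1) // 2) * a[k] for k in range(1, n + 1, 2))  # imag part
    return even ** 2 + odd ** 2

# equals |(1+i)^n|^2 = (sqrt 2)^(2n) = 2^n for every n
assert all(alternating_sums_squared(n) == 2 ** n for n in range(1, 16))
```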
Adding Gradient Noise Improves Learning for Very Deep Networks. Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens. 2015.
Paper summary by davidstutz: Neelakantan et al. study gradient noise for improving neural network training. In particular, they add Gaussian noise to the gradients in each iteration: $\tilde{\nabla}f = \nabla f + \mathcal{N}(0, \sigma^2)$, where the variance $\sigma^2$ is adapted throughout training as follows: $\sigma^2 = \frac{\eta}{(1 + t)^\gamma}$, where $\eta$ and $\gamma$ are hyper-parameters and $t$ is the current iteration. In experiments, the authors show that gradient noise has the potential to improve accuracy, especially given optimization. Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Adding Gradient Noise Improves Learning for Very Deep Networks. Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens. arXiv e-Print archive, 2015, via Local arXiv. Keywords: stat.ML, cs.LG
First published: 2015/11/21 (3 years ago) Abstract: Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
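As a minimal sketch of the technique described in the summary (the function name is mine; the defaults follow values reported in the paper, $\eta \in \{0.01, 0.3, 1.0\}$ and $\gamma = 0.55$):

```python
import numpy as np

def add_gradient_noise(grad, t, eta=0.3, gamma=0.55, rng=None):
    """Add annealed Gaussian noise to a gradient array.

    The noise variance sigma^2 = eta / (1 + t)**gamma decays with the
    training step t, so early training is noisy and late training is not.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = eta / (1 + t) ** gamma
    return grad + rng.normal(0.0, np.sqrt(sigma2), size=np.shape(grad))
```

In a training loop this would be applied to each parameter's gradient just before the optimizer update.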
Consider the $\textit{Divisor Summatory Function}$, $D(n)$, defined as $$ D(n) = \sum_{k=1}^{n}d(k) , $$ where $$ d(n) = \sum_{k \mid n} 1. $$
One can observe the following pattern in the values of $D(n)$,
$$ \lbrace{D(n)\rbrace}=\lbrace \overbrace{1,3,5,\;}^{3,odd}\overbrace{8,10,14,16,20,}^{5,even}\overbrace{23,27,29,35,37,41,45,}^{7,odd}\cdots \rbrace $$
where groups of odd elements alternate with groups of even elements and where the $n^{th}$ group has $2n-1$ elements (we can see that the pattern persists). Now, based on this (but it is not necessary that this pattern of $D(n)$ is verified for all $n$; we can only assume that such a similar pattern exists), and considering that any number can be written as $$ \begin{align*} n=p_{1}^{\alpha_{1}}\cdot p_{2}^{\alpha_{2}}\cdot p_{3}^{\alpha_{3}} \cdots p_{n}^{\alpha_{n}} \end{align*} $$ where $p_{i}$ are prime numbers, one can define the following arithmetical functions $$ a(n)= \begin{cases} 1, & \text{if all } D(\alpha_1), D(\alpha_2), \ldots, D(\alpha_n) \text{ are even} , \\\\ \\\\ 0, & \text{if one or more of the } D(\alpha_{i}) \text{ is odd}, \end{cases} $$ and $$b(n)\;=\; \begin{cases} (-1)^{n_1+n_2+\cdots+n_i}, &\text{if } \alpha_i = n_i^2, \\\\ \\\\ 0, & \text{if } \alpha_i \text{ is not of the form } n_i^2, \end{cases} $$ ($b(n)=$A197774) then we can define two Dirichlet series $A(s)=\sum_{k=1}^{\infty}\frac{a(k)}{k^{s}}$ with $a(1)=1$ and $B(s)=\sum_{k=1}^{\infty}\frac{b(k)}{k^{s}}$ with $b(1)=1$.
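The claimed grouping of the $D(n)$ values above is easy to reproduce numerically (a quick sketch using a divisor-count sieve; the function name is mine):

```python
def divisor_summatory(N):
    """Return [D(1), ..., D(N)] via a divisor-count sieve."""
    d = [0] * (N + 1)
    for k in range(1, N + 1):
        for m in range(k, N + 1, k):  # k divides m
            d[m] += 1
    D, total = [], 0
    for k in range(1, N + 1):
        total += d[k]
        D.append(total)
    return D

D = divisor_summatory(15)
# the first three groups: 3 odd values, 5 even values, 7 odd values
assert D == [1, 3, 5, 8, 10, 14, 16, 20, 23, 27, 29, 35, 37, 41, 45]
assert all(x % 2 == 1 for x in D[:3]) and all(x % 2 == 0 for x in D[3:8])
```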
Both of the Dirichlet series have, respectively, the following Euler's products$$A(s) = \prod_{p\in \mathbb{P}}\left(1+\frac{1}{p^{4s}}+\cdots+\frac{1}{p^{8s}}+\frac{1}{p^{16s}}+\cdots+\frac{1}{p^{24s}}+\cdots\right)$$and$$B(s) = \prod_{p\in \mathbb{P}}\left(1 - \frac{1}{p^{s}}+\frac{1}{p^{4s}}-\frac{1}{p^{9s}}+\frac{1}{p^{16s}}-\frac{1}{p^{25s}}+\frac{1}{p^{36s}}-\cdots\right)$$First we can see that $A(s)$ is absolutely convergent for $s>\frac{1}{4}$. Secondly we can observe that $B(s)$ is related to $\vartheta_{4}(0,x)=1-x+x^{4}-x^{9}+x^{16}\cdots$ (the Jacobi Theta function) by$$\begin{equation*}B(s) = \prod_{p\in \mathbb{P}}\left(\frac{1}{2} \vartheta_{4}(0,p^{-s})+1 \right)\end{equation*}$$and thirdly$$\begin{equation*}\zeta(s) = \frac{A(s)}{B(s)}\end{equation*}$$
We can think of $b(n)$ as a generalization of $\mu(n)$, the Möbius function, and we can assume that if $B(s)$ converges for $\Re{s}>\frac{1}{2}$ then $\zeta(s)$ has no zeros to the right of $\frac{1}{2}$, just as with the Mertens function, where$$\begin{equation*}\frac{1}{\zeta(s)} = s\int_{1}^{\infty}\frac{M(x)}{x^{s+1}}dx\end{equation*}$$similarly for $\zeta(s)$ we have$$\begin{equation*}\zeta(s) = \frac{A(s)}{ s\int_{1}^{\infty}\frac{B(x)}{x^{s+1}}dx}\end{equation*}$$where $M(x)$ is the Mertens function$$\begin{equation*}M(x)=\sum_{1\leq n \leq x}\mu(n)\end{equation*}$$and $B(x)$ is$$\begin{equation*}B(x)=\sum_{1 \leq n \leq x}b(n)\end{equation*}$$My question is: Were these Dirichlet series, $A(s)$ and $B(s)$, studied before and related to the $\zeta(s)$-function the way I did? Or is this something new?
Thanks. |