Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Challenge
Given an integer, \$s\$, as input, where \$s\geq 1\$, output the value of \$\zeta(s)\$ (where \$\zeta(x)\$ represents the Riemann Zeta Function).
Further information
\$\zeta(s)\$ is defined as:
$$\zeta(s) = \sum\limits^\infty_{n=1}\frac{1}{n^s}$$
You should output your answer to 5 decimal places (no more, no less). If the answer comes out to be infinity, you should output \$\infty\$ or equivalent in your language.
Riemann Zeta built-ins are allowed, but it's less fun to do it that way ;)
Examples
Outputs must be exactly as shown below
Input -> Output
1 -> ∞ or inf etc.
2 -> 1.64493
3 -> 1.20206
4 -> 1.08232
8 -> 1.00408
19 -> 1.00000
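Not a golfed answer, but a minimal reference sketch (assuming Python; the midpoint-rule tail estimate is our choice, used to make five decimal places reliable without summing millions of terms):

```python
def zeta(s: int) -> str:
    """Return ζ(s) to exactly 5 decimal places, or '∞' for s = 1."""
    if s == 1:
        return "∞"  # the harmonic series diverges
    N = 1000
    partial = sum(n ** -s for n in range(1, N + 1))
    # estimate the tail sum_{n>N} n^{-s} by the midpoint rule:
    # the integral of x^{-s} from N + 1/2 to infinity
    tail = (N + 0.5) ** (1 - s) / (s - 1)
    return f"{partial + tail:.5f}"
```

For example, `zeta(2)` yields `1.64493`, matching the table above.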
Bounty
As consolation for allowing built-ins, I will offer a 100-rep bounty to the shortest answer which does not use built-in zeta functions. (The green checkmark will still go to the shortest solution overall)
Winning
The shortest code in bytes wins. |
Big O: upper bound
“Big O” ($O$) is by far the most common one. When you analyse the complexity of an algorithm, most of the time, what matters is to have some upper bound on how fast the run time¹ grows when the size of the input grows. Basically we want to know that running the algorithm isn't going to take “too long”. We can't express this in actual time units (seconds), because that would depend on the precise implementation (the way the program is written, how good the compiler is, how fast the machine's processor is, …). So we evaluate what doesn't depend on such details, which is how much longer it takes to run the algorithm when we feed it bigger input. And we mainly care when we can be sure that the program is done, so we usually want to know that it will take such-and-such amount of time or less.
To say that an algorithm has a run time of $O(f(n))$ for an input size $n$ means that there exists some constant $K$ such that the algorithm completes in at most $K \, f(n)$ steps, i.e. the running time of the algorithm grows at most as fast as $f$ (up to a scaling factor). Writing $T(n)$ for the run time of the algorithm on input size $n$, $T(n) = O(f(n))$ informally means that $T(n) \le f(n)$ up to some scaling factor.
Lower bound
Sometimes, it is useful to have more information than an upper bound. $\Omega$ is the converse of $O$: it expresses that a function grows at least as fast as another. $T(n) = \Omega(g(n))$ means that $T(n) \ge K' g(n)$ for some constant $K'$, or to put it informally, $T(n) \ge g(n)$ up to some scaling factor.
When the running time of the algorithm can be determined precisely, $\Theta$ combines $O$ and $\Omega$: it expresses that the rate of growth of a function is known, up to a scaling factor. $T(n) = \Theta(h(n))$ means that $K h(n) \ge T(n) \ge K' h(n)$ for some constants $K$ and $K'$. Informally speaking, $T(n) \approx h(n)$ up to some scaling factor.
Further considerations
The “little” $o$ and $\omega$ are used far less often in complexity analysis. Little $o$ is stronger than big $O$; where $O$ indicates a growth that is no faster, $o$ indicates that the growth is strictly slower. Conversely, $\omega$ indicates a strictly faster growth.
I've been slightly informal in the discussion above. Wikipedia has formal definitions and a more mathematical approach.
Keep in mind that the use of the equal sign in $T(n) = O(f(n))$ and the like is a misnomer. Strictly speaking, $O(f(n))$ is a set of functions of the variable $n$, and we should write $T \in O(f)$.
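For concreteness, one standard formulation of the set that the notation names (with the usual threshold $n_0$, which the informal discussion above omits) is:

```latex
O(f) = \{\, g \mid \exists K > 0,\ \exists n_0,\ \forall n \ge n_0 :\ g(n) \le K \, f(n) \,\}
```

$\Omega$ and $\Theta$ have analogous definitions with $\ge$ and with a two-sided bound, respectively.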
Example: some sorting algorithms
As this is rather dry, let me give an example. Most sorting algorithms have a quadratic worst case run time, i.e. for an input of size $n$, the run time of the algorithm is $O(n^2)$. For example, selection sort has an $O(n^2)$ run time, because selecting the $k$th element requires $n-k$ comparisons, for a total of $n(n-1)/2$ comparisons. In fact, the number of comparisons is always exactly $n(n-1)/2$, which grows as $n^2$. So we can be more precise about the time complexity of selection sort: it is $\Theta(n^2)$.
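To make the claim concrete, here is a sketch (assuming Python; the instrumentation is ours) of selection sort with a comparison counter, which always reports exactly $n(n-1)/2$ comparisons regardless of the input:

```python
def selection_sort(items):
    """Sort by repeated selection of the minimum, counting element comparisons."""
    a = list(items)
    comparisons = 0
    n = len(a)
    for k in range(n - 1):
        smallest = k
        for j in range(k + 1, n):
            comparisons += 1  # one comparison per remaining candidate
            if a[j] < a[smallest]:
                smallest = j
        a[k], a[smallest] = a[smallest], a[k]
    return a, comparisons
```

Both a sorted and a reversed input of length 4 report 6 comparisons, i.e. $4 \cdot 3 / 2$.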
Now take merge sort. Merge sort is also $O(n^2)$. This is true, but not very precise. Merge sort in fact has a running time of $O(n \: \mathrm{lg}(n))$ in the worst case. Like selection sort, merge sort's workflow is essentially independent of the shape of the input, and its running time is always $n \: \mathrm{lg}(n)$ up to a constant multiplicative factor, i.e. it is $\Theta(n \: \mathrm{lg}(n))$.
Next, consider quicksort. Quicksort is more complex. It is certainly $O(n^2)$. Furthermore, the worst case of quicksort is quadratic: the worst case is $\Theta(n^2)$. However, the best case of quicksort (when the input is already sorted) is linear: the best we can say for a lower bound to quicksort in general is $\Omega(n)$. I won't repeat the proof here, but the average complexity of quicksort (the average taken over all possible permutations of the input) is $\Theta(n \: \mathrm{lg}(n))$.
There are general results on the complexity of sorting algorithms in common settings. Assume that a sorting algorithm can only compare two elements at a time, with a yes-or-no result (either $x \le y$ or $x > y$). Then it is obvious that any sorting algorithm's running time is always $\Omega(n)$ (where $n$ is the number of elements to sort), because the algorithm has to compare every element at least once to know where it will fit. This lower bound can be met, for example, if the input is already sorted and the algorithm merely compares each element with the next one and keeps them in order (that's $n-1$ comparisons). What is less obvious is that the maximum running time is necessarily $\Omega(n \: \mathrm{lg}(n))$. It's possible that the algorithm will sometimes make fewer comparisons, but there has to be some constant $K$ such that for any input size $n$, there is at least one input on which the algorithm makes more than $K n \: \mathrm{lg}(n)$ comparisons. The idea of the proof is to build the decision tree of the algorithm, i.e. to follow the decisions the algorithm takes from the result of each comparison. Since each comparison returns a yes-or-no result, the decision tree is a binary tree. There are $n!$ possible permutations of the input, and the algorithm needs to distinguish between all of them, so the decision tree has at least $n!$ leaves. Since the tree is binary, it takes a depth of $\Theta(\mathrm{lg}(n!)) = \Theta(n\:\mathrm{lg}(n))$ to fit that many leaves. The depth is the maximum number of decisions that the algorithm takes, so running the algorithm involves at least this many comparisons in the worst case: the maximum running time is $\Omega(n \: \mathrm{lg}(n))$.
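The key estimate $\mathrm{lg}(n!) = \Theta(n \: \mathrm{lg}(n))$, used implicitly above, follows from bounding the sum of logarithms on both sides:

```latex
\lg(n!) = \sum_{k=1}^{n} \lg k \le n \lg n,
\qquad
\lg(n!) \ge \sum_{k=\lceil n/2 \rceil}^{n} \lg k \ge \frac{n}{2} \lg \frac{n}{2},
```

so $\lg(n!)$ is squeezed between two constant multiples of $n \lg n$ for large $n$.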
¹ Or other resource consumption such as memory space. In this answer, I only consider running time. |
I have an equilateral triangle with unknown side $a$. The next thing I do is to pick a point $P$ inside the triangle. The distances are $|AP|=3$ cm, $|BP|=4$ cm, $|CP|=5$ cm.
It is the red triangle in the picture. The exercise is to calculate the area of the Equilateral triangle (without using law of cosine and law of sine, just with simple elementary argumentation).
The first thing I did was to reflect the point $A$ across the opposite side $a$, which gives me $D$. Afterwards I constructed another equilateral triangle $\triangle PP_1C$.
Now it is possible to say something about the angles, namely that $\angle ABD=120^{\circ}$, $\angle PBP_1=90^{\circ} \implies \angle APB=150^{\circ}$ and $\alpha+\beta=90^{\circ}$
Now I have no more ideas. Could you help me finish the proof to get $a$ and therefore the area of $\triangle ABC$? If you have alternative ideas to get the area without reflecting the point $A$, that would also be interesting. |
There is probably not an isomorphism.
First, note a reason to be suspicious: if you had such an isomorphism, couldn't you just iterate it to get an isomorphism $\overline{M}_{g,A}\cong\overline{\mathcal M}_{g,n}$? This would make the work of Brendan Hassett in his paper somewhat superfluous.
I claim there is a natural map $ \overline{M}_{g, A \cup \{a_{n+1}\} } \to \mathcal C_{g,A} $. To construct it, simply delete the last marked point. If the twisted canonical bundle ceases to be ample on the component the last marked point was on, contract it. Repeat until you have a weighted stable curve with $n$ marked points. Then choose from among the points of this curve the image of the last marked point under the contraction map.
However, this map often fails to be injective. Suppose there is some $a_i, a_j$ with $a_i+a_j\leq 1$, $a_i + a_j + a_{n+1} >1$. Consider a genus $g$ smooth curve glued to a single copy of $\mathbb P^1$ where points $i,j,n+1$ are on the $\mathbb P^1$ and the other points are distributed on the genus $g$ curve. Removing point $a_{n+1}$ causes the $\mathbb P^1$ to be contracted, and $a_i$ and $a_j$ become the same point. This happens regardless of the location of $a_i, a_j, a_{n+1}$ on the old $\mathbb P^1$. But there is a 1-parameter family of possible locations of these three points and the one node up to isomorphism. This curve is contracted to a point by the map $ \overline{M}_{g, A \cup \{a_{n+1}\} } \to \mathcal C_{g,A} $.
Because the other points can be chosen on the genus $g$ curve to kill all the automorphisms, passing to the coarse moduli space does not change anything in this example. |
This paper generalizes the negative binomial random variable by generating it from a sequence of first-kind dependent Bernoulli trials under the identity permutation. The PMF, MGF, and various moments are provided, and it is proven that the distribution is indeed an extension of the standard negative binomial random variable. We examine the effect of complete dependence of the Bernoulli trials on the generalized negative binomial random variable. We also show that the generalized geometric random variable is a special case of the generalized negative binomial random variable, but the generalized negative binomial random variable cannot be generated from a sum of i.i.d. generalized geometric random variables.
To download the paper with all proofs, click here
Introduction
A binomial random variable Z_n is constructed from a sequence of n Bernoulli random variables \{\epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{n}\}, and counts the number of 1s, or “successes”, in the sequence. Mathematically, a binomial random variable is given by Z_{n} = \sum_{i=1}^{n}\epsilon_{i}. A traditional binomial random variable requires an i.i.d. sequence of Bernoulli random variables. Korzeniowski [1] developed a generalized binomial distribution under the condition that the Bernoulli sequence is first-kind dependent.
A “different perspective”, as it were, on the binomial random variable is the negative binomial random variable. With a binomial random variable, we fix the number of trials and count the number of “successes”. Suppose now we fix the number of successes as k and continue to run Bernoulli trials until the kth success. The random number of trials necessary to get k successes is a negative binomial random variable, and may be formulated mathematically as V_k =\min_{n \geq k}(n:Z_n = k). The sequence is halted when the kth success appears, which will always be on the last trial. Thus, the event (V_{k} = n) is equivalent to (Z_{n-1} = k-1 \wedge \epsilon_{n} = 1). A standard negative binomial distribution is constructed from an i.i.d. sequence of Bernoulli random variables, just like the binomial random variable. The PMF is given by P(V_{k} = n) = {n-1 \choose k-1}p^{k}(1-p)^{n-k}.
We may also characterize the negative binomial distribution in a different way. Let Y denote the number of additional trials beyond the minimum possible k required for k successes. Since the trials are Bernoulli trials, Y denotes the random number of failures that will occur before the kth success is observed. Thus, if we denote y as the number of failures, n = k + y, where k is fixed, and thus the random variable Y with support \{0,1,2,\ldots\} is equivalent to the previous characterization of V_k. The PMF of Y is easily derived and given by

P(Y=y) = {k+y-1 \choose y}p^{k}(1-p)^{y}
A first-kind dependent sequence of Bernoulli trials is identically distributed but dependent ([1], [2], [4]), and thus can generate generalized versions of random variables that are functions of sequences of identically distributed Bernoulli or categorical random variables ([2], [3]). This paper generalizes the negative binomial distribution given above by allowing the Bernoulli sequence to be first-kind dependent.
Derivation of the PMF
Theorem 1: Let k\in\mathbb{N} be fixed, and \epsilon = \{\epsilon_{1},\epsilon_{2},\ldots\} be a sequence of first-kind dependent Bernoulli trials under the identity permutation with P(\epsilon_{k} = 1) = p and dependency coefficient \delta. Define q=1-p, p^{+} = p + \delta q, p^{-} = p-\delta p, q^{+} = q + \delta p, and q^{-} = q-\delta q. Let Z_{n} denote a generalized binomial random variable of length n. Let V_{k} denote the random variable that counts the number of first-kind dependent Bernoulli trials until the kth success. That is, V_k =\min_{n \geq k}(n:Z_n =k). Then the PMF of V_k is given by

P(V_{k} = n) = p{n-2 \choose k-2}(p^{+})^{k-1}(q^{-})^{n-k}+q{n-2\choose k-1}(p^{-})^{k}(q^{+})^{n-k-1}
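As a quick numerical sanity check (a sketch assuming Python; `gnb_pmf` is our name, not from the paper), the PMF of V_k sums to 1 over its support and collapses to the standard negative binomial when \delta = 0:

```python
from math import comb, isclose

def gnb_pmf(n, k, p, delta):
    """P(V_k = n): trial count to the k-th success, FK-dependent case (k >= 2)."""
    q = 1.0 - p
    pp, pm = p + delta * q, p - delta * p   # p+ = p + δq,  p- = p - δp
    qp, qm = q + delta * p, q - delta * q   # q+ = q + δp,  q- = q - δq
    first = p * comb(n - 2, k - 2) * pp ** (k - 1) * qm ** (n - k)
    second = q * comb(n - 2, k - 1) * pm ** k * qp ** (n - k - 1)
    return first + second

# probabilities over n = k, k+1, ... should sum to 1
total = sum(gnb_pmf(n, 3, 0.6, 0.2) for n in range(3, 400))
# delta = 0 should recover the standard negative binomial PMF
standard = comb(6, 2) * 0.6 ** 3 * 0.4 ** 4
```

The delta = 0 agreement is Pascal's rule in disguise: {n-2 \choose k-2} + {n-2 \choose k-1} = {n-1 \choose k-1}.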
It will be more helpful to characterize the generalized negative binomial random variable our alternative way, by letting V_{k} = Y+k, where Y is the random variable that counts the number of additional trials beyond the minimum possible k necessary to achieve the kth success or, equivalently, the number of failures in a sequence of FK-dependent Bernoulli variables with k successes.
The PMF of Y is given in the following corollary.
Corollary 1:
Let Y be the random variable described here, with support \{0,1,2,\ldots\}. Then Y is equivalent to V_{k}, and (substituting n = k+y into the PMF of V_{k}) the PMF of Y is given by P(Y=y) = p{k+y-2 \choose k-2}(p^{+})^{k-1}(q^{-})^{y}+q{k+y-2\choose k-1}(p^{-})^{k}(q^{+})^{y-1}
When \delta = 0, a FK-dependent Bernoulli sequence becomes a standard i.i.d. Bernoulli sequence. Thus, when \delta = 0, Y reverts to a standard negative binomial distribution.
Corollary 2:
Let Y be a generalized negative binomial distribution constructed via FK-dependency under the identity permutation with dependency coefficient \delta. When \delta = 0, Y is a standard negative binomial random variable.
The Moment Generating Function and Various Moments
Theorem 2:
The moment generating function of the generalized negative binomial distribution is given by

M_{Y}(t) = \frac{p(p^{+})^{k-1}}{(1-e^{t}q^{-})^{k-1}} + \frac{q(p^{-})^{k}e^{t}}{(1-e^{t}q^{+})^{k}}
Remark: This is indeed a generalization of the standard negative binomial distribution, as it is now a special case of the generalized negative binomial distribution when \delta = 0. To see this, we will show that the MGF of the generalized negative binomial distribution becomes the MGF for the standard negative binomial distribution.
When \delta = 0, a FK-dependent sequence reverts back to a standard i.i.d. sequence. That is, p^{+}=p^{-}=p, and q^{+} = q^{-}=q. So in this case,\begin{aligned}M_{Y}(t) &= \frac{p^{k}}{(1-e^{t}q)^{k-1}} + \frac{qp^{k}e^{t}}{(1-e^{t}q)^{k}}\\&= \frac{(1-e^{t}q)p^{k} + qp^{k}e^{t}}{(1-e^{t}q)^{k}}\\&=\frac{p^{k}}{(1-e^{t}q)^{k}}\end{aligned}
We may now derive the various moments of the generalized negative binomial distribution using the moment generating function.
Mean of generalized negative binomial distribution
The mean of the generalized negative binomial distribution is given by
\mu_{Y} = \text{E}[Y] = \frac{kpq + k\delta q^{2}-(k-1)\delta pq(1-\delta)}{p^{2}(1-\delta)+\delta pq(1-\delta)}

Remark: Note the reduction of the GNB mean to that of the standard negative binomial distribution when the sequence is independent. For \delta = 0, \text{E}[Y] = \frac{kq}{p}.

Variance of the generalized negative binomial distribution
After many attempts to distill the formula to a palatable expression, the variance of the generalized negative binomial distribution is given by\begin{aligned}\text{Var}[Y]&=\frac{kq}{(p+q\delta)^{2}(1-\delta)^{2}}\\&\quad+\dfrac{\delta(p^{3}q + kpq^{2}-kq^{3})}{p^{2}(1-\delta^{2})(p+q\delta)^{2}}+\dfrac{\delta^{2}(k^{2}pq + kq -3kpq^{2}-2p^{3}q)}{p^{2}(1-\delta^{2})(p+q\delta)^{2}}\\&\quad\quad+\dfrac{\delta^{3}pq(p^{2}-k-2kq)}{p^{2}(1-\delta^{2})(p+q\delta)^{2}}\end{aligned}
Remark: Once again, under independence, \text{Var}[Y] = \frac{kq}{p^{2}}, the variance of the standard negative binomial. Other higher order moments can also be obtained from the moment generating function and copious amounts of tedious arithmetic.

Other Considerations

The effect of complete dependence
It’s also worth exploring the other extreme: complete dependence. As illustrated in [2], complete dependence under FK-dependence implies that every Bernoulli trial will be identical to the outcome of the first trial. Thus, if \epsilon_{1} = 0, the entire sequence will be all 0s, and vice versa if \epsilon_{1} = 1. What does that mean for the generalized negative binomial distribution? If \epsilon_{1} = 0, the sequence will never end; k successes will never happen. On the other hand, if \epsilon_{1} = 1, then you are guaranteed to reach k successes in k trials. This results in both an infinite mean and variance, as seen by plugging \delta = 1 into the formulas above.
Exploring the PMF under \delta = 1, P(Y=0) = p, because P(\epsilon_{1} = 1) = p. If \epsilon_{1}= 1, and \delta = 1, then there will be only 1s in the sequence, and no 0s, and thus P(Y=0) = P(\epsilon_{1} = 1). If \epsilon_{1} = 0, then there are only 0s in the FK-dependent Bernoulli sequence, and no 1s. Thus, Y can only be \infty, and P(Y = \infty) = P(\epsilon_{1} = 0) = q.
Remark: When we say Y = \infty, we mean that the sequence of trials has no halting point. That is, the counting process never ends.
Thus, under complete dependence of the first kind, the support of Y has two points \{0,\infty\}, with probabilities p and q respectively. This is another way to confirm that Y will have infinite mean and variance in this case.
The negative binomial random variable as a sum of geometric random variables

The standard negative binomial distribution with k fixed successes can be derived as a sum of k independent standard geometric random variables. One shows this by verifying that the moment generating function of the standard negative binomial distribution is equal to the product of the moment generating functions of k i.i.d. standard geometric random variables. Moreover, it can also be shown that the standard geometric random variable is a special case of the standard negative binomial distribution when k = 1.
How much of this carries over to the generalized versions of both distributions? The generalized geometric distribution was introduced and detailed by Traylor in [2]. Here, we are concerned with the PMF of “Version 2” of the generalized geometric distribution, as the “shifted” generalized geometric distribution counts the number of failures prior to the first success, and is analogous to counting the number of failures in a sequence of trials before the kth success. We reproduce Proposition 2 from [2] here:
Proposition 2, [2]: Suppose \epsilon = (\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{n},\ldots) is a FK-dependent sequence of Bernoulli random variables. Let Z = X-1 be the count of failures prior to the first success. Then Z has a shifted generalized geometric distribution with PMF P(Z=0) = p and P(Z=z) = qp^{-}(q^{+})^{z-1} for z \geq 1.
We can quickly derive its MGF.

Proposition 1: The moment generating function of the shifted generalized geometric distribution is given by

M_{Z}(t) = p + \frac{e^{t}qp^{-}}{1-e^{t}q^{+}}

Generalized geometric RV as a special case of generalized negative binomial RV
It is true that, for k = 1, the generalized negative binomial distribution under FK-dependence reduces to the generalized geometric distribution. This is given in the following theorem.
Theorem 3:
When k=1, the generalized negative binomial random variable reduces to a generalized geometric random variable.
(To see this, simply plug in k=1 to the PMF of the generalized negative binomial distribution.)
Sum of independent generalized geometric random variables does not yield a generalized negative binomial random variable
Unlike the standard case, a sum of i.i.d. generalized geometric random variables does not yield a generalized negative binomial random variable. First, we note what we mean by a set of i.i.d. generalized geometric random variables. Suppose we have a set of generalized geometric random variables \{X_{1},X_{2},\ldots,X_{k}\}, each with the same p, q, and \delta, and all first-kind dependent. Thus, to say that each of these geometric random variables is mutually independent of the others is to say that nothing about the other geometric random variables has any probabilistic bearing on the variable in question. That is, the dependency structure is not changed or altered, and P(X_{i}|X_{j}) = P(X_{i}), i\neq j, i=1,2,\ldots,k. The Bernoulli random variables that make up each geometric random variable still remain FK-dependent among themselves. That is, if \epsilon_{i} = (\epsilon_{1}^{(i)},\epsilon_{2}^{(i)},\ldots,\epsilon_{n_{i}}^{(i)}) is the sequence of Bernoulli trials that comprises X_{i}, each \epsilon_{i} is FK-dependent among its elements, but independent of the other sequences \epsilon_{j}.
One can easily see that, if the generalized negative binomial distribution were able to be generated by the sum of i.i.d. generalized geometric random variables, then M_{Y}(t) = \prod_{i=1}^{k}M_{X_{i}}(t). But\begin{aligned}\prod_{i=1}^{k}M_{X_{i}}(t) &=\left(p + \frac{e^{t}qp^{-}}{1-e^{t}q^{+}}\right)^{k}\\&\neq \frac{p(p^{+})^{k-1}}{(1-e^{t}q^{-})^{k-1}} + \frac{q(p^{-})^{k}e^{t}}{(1-e^{t}q^{+})^{k}}\\&= M_{Y}(t)\end{aligned}
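This inequality is easy to confirm numerically (a sketch assuming Python; the function names are ours, not from the paper). With \delta = 0 the product of k geometric MGFs reproduces the negative binomial MGF, but a positive \delta breaks the equality:

```python
import math

def geo_mgf(t, p, delta):
    """MGF of the shifted generalized geometric variable Z."""
    q = 1 - p
    pm, qp = p - delta * p, q + delta * p   # p- and q+
    return p + math.exp(t) * q * pm / (1 - math.exp(t) * qp)

def gnb_mgf(t, k, p, delta):
    """MGF of the generalized negative binomial variable Y."""
    q = 1 - p
    pp, pm = p + delta * q, p - delta * p   # p+ and p-
    qp, qm = q + delta * p, q - delta * q   # q+ and q-
    return (p * pp ** (k - 1)) / (1 - math.exp(t) * qm) ** (k - 1) \
        + (q * pm ** k * math.exp(t)) / (1 - math.exp(t) * qp) ** k
```

For instance, at t = 0.1, k = 3, p = 0.6, the two sides agree when \delta = 0 and differ by a few hundredths when \delta = 0.3.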
Why is this? The answer is quite intuitive. The generalized negative binomial distribution under FK-dependence is one sequence under a FK dependency structure. That is, all Bernoulli trials after the first depend directly on the first. Summing generalized geometric random variables under FK dependence is equivalent to constructing a sequence of generalized geometric random variables, one after the other. Since these are themselves comprised of FK-dependent Bernoulli trials, each time a success is observed, the dependency structure “starts over” with the next geometric random variable.
For example, suppose the first geometric random variable has a success on the third trial. Then the fourth Bernoulli trial is starting an entirely new sequence of FK-dependent Bernoulli variables, and does not depend on the outcome of the first. This is not equivalent to the definition of a generalized negative binomial distribution.
This property held for the standard versions of the geometric and negative binomial random variables because every Bernoulli trial in each geometric sequence is i.i.d. Thus, there is no “transition” from one geometric random variable to another; it’s as if it was all one big sequence of Bernoulli trials to begin with. We lose that when we introduce dependency structures.
Conclusion
This paper introduced the generalized negative binomial distribution built from a sequence of FK- dependent random variables. The PMF, MGF, and various moments were derived. It was also noted that the generalized geometric distribution is a special case of the generalized negative binomial distribution, but one cannot construct a generalized negative binomial random variable from the sum of i.i.d. generalized geometric random variables.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
[1] Andrzej Korzeniowski. On correlated random graphs. Journal of Probability and Statistical Science, pages 43–58, 2013.
[2] Rachel Traylor. A generalized geometric distribution from vertically dependent categorical random variables. Academic Advances of the CTO, 2017.
[3] Rachel Traylor. A generalized multinomial distribution from dependent categorical random variables. Academic Advances of the CTO, 2017.
[4] Rachel Traylor and Jason Hathcock. Vertical dependency in sequences of categorical random variables. Academic Advances of the CTO, 2017. |
Let $X$ be a topological space.
Definitions:
$X$ is countably compact if every countable open cover of $X$ has a finite subcover or, equivalently, every sequence in $X$ has a cluster point.
$X$ is sequentially compact if every sequence in $X$ has a convergent subsequence.
$X$ is sequential if every sequentially closed set is closed.
It is known that if $X$ is countably compact + sequential + $T_2$ then $X$ is sequentially compact (see e.g. Engelking).
The proof goes like this: Let $x_n$ be a sequence in $X$. Since $X$ is countably compact, $x_n$ has a cluster point $x \in X$. If $\{ n \mid x_n = x \}$ is infinite then we have a constant subsequence of $x_n$, which is convergent. So assume that $\{ n \mid x_n = x \}$ is finite, so that there is some $n_0$ with $x_n \neq x$ for all $n \geq n_0$. Consider the set $A := \{ x_n \mid n \geq n_0 \} \setminus \{ x \}$. Then $A$ is not closed (its cluster point $x$ lies in $\overline{A} \setminus A$), and since $X$ is sequential, $A$ is not sequentially closed. Thus, there is a sequence $y_k \in A$ and $y \in X \setminus A$ such that $y_k \to y$. Since $X$ is $T_2$, the sequence $y_k$ is not eventually constant: otherwise $y_k \to y_N \in A$ for some $N \in \mathbb{N}$, and $y_k \to y \in X \setminus A$ would imply $y_N = y$, a contradiction. Thus, we have infinitely many distinct $y_k$ in $A$, which can finally be used to construct a convergent subsequence of $x_n$.
There are also other properties $\varphi$ such that countable compactness + $\varphi$ imply sequential compactness. As an example, $\varphi$ can be taken to be first-countable or even Fréchet-Urysohn (cluster points of injective sequences $x_n$ are accumulation points of the corresponding sets $x(\mathbb{N})$, thus lying in the closure and thus being able to be approximated by a sequence in $x(\mathbb{N})$ which can be used to generate a convergent subsequence of $x_n$). There is no need for an additional separation property.
In my eyes, the Fréchet-Urysohn property is not "too far" away from the sequential property and thus it is a little bit "strange" that sequentialness needs an additional separation property. By "too far" I mean that typical spaces that are sequential but not Fréchet-Urysohn are a little bit pathological (e.g. Arens-Fort space).
Questions:
Is there some deeper insight into why we need a separation property for sequentialness but not for Fréchet-Urysohn?
Is the separation property really needed, i.e. is there some sequential space which is countably compact but not sequentially compact?
Remark: In fact, for the uniqueness of the sequential limit we can reduce the $T_2$ separation property to the $US$ separation property (i.e. $X$ is sequentially Hausdorff), which lies strictly between $T_1$ and $T_2$. This gives a hint that $T_1$ should not be enough. |
For the full paper, which includes all proofs, download the pdf here
Abstract
(Editor’s note:) This paper represents the first installment of a masters thesis by Jonathan Johnson. This work introduces the notion of summation chains of sequences. It examines the sequence of sequences generated by partial sums and differences of terms in each level of the chain, looks at chains generated by functions, then introduces a formal definition and key formulae in the analysis of such chains.
Introduction
Given a complex-valued sequence (a_n)^{\infty}_{n=1}, the sequence of partial sums of (a_n) is given by the sequence (a_1,a_1+a_2,a_1+a_2+a_3,\ldots,\sum^n_{i=1}a_i,\ldots). The sequence of differences of (a_n) is given by the sequence (a_1, a_2-a_1,a_3-a_2,\ldots,a_n-a_{n-1},\ldots).
The processes of finding the sequence of partial sums and finding the sequence of differences of a sequence are inverses of each other so every sequence is the sequence of differences of its sequence of partial sums and the sequence of partial sums of its sequence of differences. Every sequence has a unique sequence of partial sums and a unique sequence of differences so it is always possible to find the sequence of partial sums of the sequence of partial sums and repeat the process ad infinitum. Similarly, we can find the sequence of differences of the sequence of differences and repeat ad infinitum. The result is a doubly infinite sequence or “chain” of sequences where each sequence is the sequence of partial sums of the previous sequence and the sequence of differences of the following sequence.
Example: Let a^{(0)} be the sequence defined by a^{(0)}_n=(-1)^{n+1} for all n\in\mathbb{N}. For all integers m>0, let a^{(m)} be the sequence of partial sums of a^{(m-1)}, and for all integers m<0, let a^{(m)} be the sequence of differences of a^{(m+1)}.

\begin{array}{r|ccccccccc}\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\a^{(2)} & 1 & 1 & 2 & 2 & 3 & 3 & 4 & 4 & \cdots\\a^{(1)} & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & \cdots\\a^{(0)} & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & \cdots\\a^{(-1)} & 1 & -2 & 2 & -2 & 2 & -2 & 2 & -2 & \cdots\\a^{(-2)} & 1 & -3 & 4 & -4 & 4 & -4 & 4 & -4 & \cdots\\\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\\end{array}
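The two operations are easy to express in code (a sketch assuming Python; the function names are ours) and to check against the example chain above:

```python
def partial_sums(seq):
    """(a_1, a_1+a_2, a_1+a_2+a_3, ...)"""
    out, total = [], 0
    for x in seq:
        total += x
        out.append(total)
    return out

def differences(seq):
    """(a_1, a_2-a_1, a_3-a_2, ...) — the inverse of partial_sums."""
    return [seq[0]] + [seq[i] - seq[i - 1] for i in range(1, len(seq))]
```

Applying `partial_sums` to the alternating sequence 1, -1, 1, -1, ... produces 1, 0, 1, 0, ..., and `differences` undoes it, as the chain requires.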
Every sequence can be used to create a unique chain of sequences. This paper studies properties of these chains and explores the relationship between sequences and the chains they create. In particular, the following questions are investigated:
How can the sequences in a summation chain be computed quickly? Clearly, every sequence in a chain of sequences can be computed by repeatedly finding sequences of partial sums of sequences of partial sums or finding sequences of differences of sequences of differences. It is useful, however, to be able to compute any sequence in a chain given the starting sequence, a^{(0)}, without having to compute all the sequences in between. Methods for computing chains are discussed on Page 3.
When do two given sequences appear in the same summation chain? When two sequences appear in the same chain, one sequence can be obtained by repeatedly finding the sequences of partial sums of sequences of partial sums of the other sequence. This process could take a long time, and it is not able to determine if two sequences do not appear in the same chain. (Editor’s note: The next installment presents a process to determine with certainty whether or not two sequences are in the same chain.)
How much information is needed to define a summation chain? Once a chain has been computed, it appears as an array of entries. Starting with a blank array, if some numbers are added to the array, can they be used to define the remaining entries uniquely? (Editor’s note: Later installments explore how much information in an array of numbers is needed to determine a chain.)
How are the convergent behaviors of sequences in a summation chain related? Can every sequence in a chain diverge? Can every sequence in a chain converge? (Editor’s note: The final installment investigates the nature of the limits of sequences in a chain.) |
In the time independent Schrödinger equation I have read that the wave function of a stationary state is $$\psi(x,t)=\psi(x)\,\mathrm e^{-iEt/\hbar}$$ This means that $$\rho(x,t)=|\psi(x,t)|^2=|\psi(x)|^2=\rho(x),$$ that is, $\rho$ is not a function of $t$. That's why it is called a stationary state.
Therefore, $$\left<x\right>=\int \psi^*x\,\psi \,\mathrm{d}x=\int \psi(x)^*x\,\psi(x)\, \mathrm{d}x=\text{constant}$$ and hence, since $\left<p\right>=m\,\frac{\mathrm d\left<x\right>}{\mathrm dt}$, we get $\left<p\right>=0$: the average value of momentum is zero.
What does this mean? Does this mean particle is in rest?
I think this is something like: the probability of going in one direction is equal to the probability of going in the opposite direction, like $MV-MV=0$. Is this intuition right? Or is something else happening? |
Research | Open Access

Oscillation and nonoscillation theorems of neutral dynamic equations on time scales

Advances in Difference Equations, volume 2019, Article number: 404 (2019)
Abstract
We present the oscillation criteria for the following neutral dynamic equation on time scales:
where \(C, P, Q\in C_{\mathit{rd}}([t_{0},\infty ),{\mathbb{R}}^{+})\), \({\mathbb{R}} ^{+}=[0,\infty )\), \(\gamma , \eta , \delta \in {\mathbb{T}}\) and \(\gamma >0\), \(\eta >\delta \geq 0\). New conditions for the existence of nonoscillatory solutions of the given equation are also obtained.
Introduction
In the past two decades, there has been shown a growing interest in the study of oscillation and stability of delay dynamic equations on time scales. Several excellent monographs [1,2,3,4,5] on the topic indeed reflect its popularity. Some recent results on oscillation and existence of nonoscillatory solutions for dynamic equations can be found in the articles [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23] and the references cited therein.
Motivated by aforementioned work, in this paper, we consider the following neutral dynamic equation on time scales:
where \(C, P, Q\in C_{\mathit{rd}}([t_{0},\infty ),{\mathbb{R}}^{+})\), \({\mathbb{R}} ^{+}= [0, \infty )\), \(C_{\mathit{rd}}\) denotes the class of right-dense continuous functions, \(\zeta , \eta , \delta \in {\mathbb{T}}\) and \(\zeta >0\), \(\eta >\delta \geq 0\). Some conditions for oscillation of Eq. (1) are obtained. We also discuss the existence of nonoscillatory solutions for Eq. (1).
A time scale is an arbitrary nonempty closed subset of the real numbers. We denote the time scale by the symbol \({\mathbb{T}}\). For \(t\in {\mathbb{T}}\) we define the forward jump operator \(\sigma :{\mathbb{T}} \rightarrow {\mathbb{T}}\) by \(\sigma (t):=\inf \{s\in {\mathbb{T}}: s > t \}\). Let \(C_{\mathit{rd}}({\mathbb{T}}, {\mathbb{R}})\) denote the space of functions which are right-dense continuous on \({\mathbb{T}}\). In addition, we define the interval \([t_{0},\infty )\) in \({\mathbb{T}}\) by \([t_{0},\infty ):=\{t \in {\mathbb{T}}: t_{0}\leq t<\infty \}\).
Definition 1.1
For \(h\geq 0\), we define the cylinder transformation \(\xi _{h}\) by
Definition 1.2
A solution of (1) is said to be oscillatory if it is neither eventually positive nor eventually negative, otherwise it is nonoscillatory.
Lemma 1.3

If \(f: {\mathbb{T}}\rightarrow {\mathbb{R}}\) is differentiable and \(f^{\Delta }\geq 0\), then f is nondecreasing on \({\mathbb{T}}\).

Lemma 1.4

If \(f: {\mathbb{T}}\rightarrow {\mathbb{R}}\) is differentiable at t, then f is continuous at t.

Oscillation
In this section, we derive the main results for oscillation of Eq. (1). For that, we assume the following conditions:
\((c_{1})\) :
\(0\leq C(t)+ \int ^{t-\delta }_{t-\eta }Q(s+\delta )\Delta s\leq 1 \);
\((c_{2})\) :
\(\bar{R}(t)=P(t)-Q(t-\eta +\delta )\geq 0\) and \(\liminf_{t\rightarrow \infty } \int _{t-\eta }^{t}\bar{R}(s)\Delta s>\gamma >0\).
The following lemmas are useful in proving the main results of this section.
Lemma 2.1

Assume that the conditions \((c_{1})\) and \((c_{2})\) are satisfied. Let \(y(t)\) be an eventually positive solution of (1) and let \(u(t)\) be as defined in (2). Then, eventually, \(u(t)>0\).

Proof
Since \(y(t)\) is an eventually positive solution of (1), there exists \(t_{1}\geq t_{0}\) such that \(y(t-m)>0\) for \(t\geq t_{1}\), where \(m=\max \{\zeta , \eta , \delta \}\). In view of (1) and (2), we get
which implies that \(u(t)\) is decreasing. Next, we shall show that \(u(t)>0\). If \(u(t)\rightarrow -\infty \) as \(t\rightarrow \infty \), then \(y(t)\) must be unbounded. Therefore there exists \(\{t_{n}'\}\) with \(t_{n}'\geq t_{2}\), \(t_{2}=t_{1}+m\) such that
and \(y(t_{n}')=\max_{t_{2}\leq t\leq t_{n}'}y(t)\). Hence, we have
In consequence, we get
which is a contradiction. Hence \(\lim_{t\rightarrow \infty }u(t)=l\) exists. As before, if \(y(t)\) is unbounded, then \(l\geq 0\). Now we consider the case when \(y(t)\) is bounded. Let \(\bar{l}= \limsup_{t\rightarrow \infty } y(t)=\lim_{t'\rightarrow \infty }y(t')\). Then
where \(y(\xi _{t'})=\max \{\{y(s):s\in (t'-\eta ,t'-\delta )\}, y(t'- \zeta )\}\). Hence, it follows that \(\xi _{t'}\to \infty \) as \(t'\to \infty \) and \(\limsup_{t'\to \infty }y(\xi _{t'})\leq \bar{l}\). Thus, we get
which, on taking superior limit, leads to \(\bar{l}-l\leq \bar{l}\). Therefore \(l\geq 0\). Hence \(u(t)>0\) eventually. The proof is complete. □
Lemma 2.2

Suppose that the conditions \((c_{1})\) and \((c_{2})\) hold and that \(y(t)\) is an eventually positive solution of (1) satisfying (2). Then the set \(\varLambda = \{ \lambda >0: u^{\Delta }(t)+ \lambda \bar{R}(t)u(t)\leq 0, \textit{eventually} \} \) is nonempty and there exists an upper bound of Λ which is independent of the solution \(y(t)\).

Proof
From the given assumptions, there exists a \(t_{1}\geq t_{0}\) such that \(y(t-m)>0\) for \(t\geq t_{1}\), where \(m=\max \{\zeta ,\eta ,\delta \}\). It follows from (2) that \(u(t)\leq y(t)\) for \(t\geq t_{1}\). Then
that is, \(\lambda =1\in \varLambda \). Therefore Λ is nonempty.
Let
By \((c_{2})\), we have \(k>0\), and there exists a \(t_{2}>t_{1}+m\) such that
Therefore, for any \(t\geq t_{2}\), there exists \(t^{*}>t>t^{*}-\eta \) such that
Integrating (4) from t to \(t^{*}\) and noting that \(u^{\Delta }(t)\leq 0\), \(u(t)>0\) for \(t\geq t_{2}\), we find that
which implies that
Next, integrating (4) from \(t^{*}-\eta \) to t, we get
Hence
Let us define
Since \(y(t-m)>0\), (6) implies that \(I\geq 0\). On the other hand, there exists a sequence \(\{t'_{n}\}\) such that \(t_{n}'\geq t_{2}\) and \(t'_{n}\rightarrow \infty \) as \(n\rightarrow \infty \) and
From (4), we have
where \(\xi _{n}\in [t_{n}'-\eta ,t_{n}']\), and \(\xi _{n}\to \infty \) as \(n\to \infty \). Hence, we can find an increasing subsequence of \(\{\xi _{n}\}\) and so, without loss of generality, we may assume that the sequence \(\{\xi _{n}\}\) itself is increasing. Let
Then we have
Since \(\{\xi _{n}\}\) is an increasing sequence of numbers, we get
Therefore
which implies that
that is,
that is,
From condition \((c_{2})\), (10) and the fact that \(I\geq 0\), we deduce that \(I=0\). Thus, we obtain
Hence there exists a sequence \(\{s_{n}\}\) with \(s_{n}\geq t_{2}+2m\), such that \(y(s_{n})\rightarrow 0\) as \(n \rightarrow \infty \) and \(y(s_{n}-\eta )=\min_{t_{2}\leq s\leq s_{n}-\eta }y(s)\) for \(n=1,2,\ldots \) . Then, from (4) for \(n=1,2,\ldots \) , we have
Hence
which implies that
Now we may assert that \(\frac{1}{2k^{3}}\notin \varLambda \). In fact, if \(\frac{1}{2k^{3}}\in \varLambda \), then, by the definition of Λ, there exists some \(T'\) such that, for all \(t\geq T'\), the following inequality holds true:
On the other hand, in view of the fact that \(y(s_{n})\rightarrow 0\) as \(n \rightarrow \infty \), from \(\{s_{n}\}\) we find some \(s_{n}'\) such that \(s_{n}'\geq T'\). Then it follows from (12) that
which contradicts (13). Therefore, \(\frac{1}{2k^{3}}\) is an upper bound of Λ which is independent of the solution \(y(t)\). The proof is complete. □

Theorem 2.3

Assume that the conditions \((c_{1})\) and \((c_{2})\) are satisfied. In addition, assume that there exist \(T \geq t_{1}+m\) and \(\lambda >0\) such that (14) holds. Then every solution of Eq. (1) is oscillatory.

Proof
On the contrary, let \(y(t)\) be a nonoscillatory solution of Eq. (1). Without loss of generality, it can be assumed that \(y(t)\) is an eventually positive solution. Moreover, let \(u(t)\) be the same as defined in (2) and the set Λ as given in Lemma 2.2. Then, by Lemma 2.2, we see that there exists a \(t_{2}\geq t_{0}\) such that
From condition (14), there exists a constant \(\alpha >1\) such that
Let \(\lambda _{0}\in \varLambda \). Then we shall show that \(\alpha \lambda _{0} \in \varLambda \). In fact, \(\lambda _{0}\in \varLambda \) implies that
Define
and note that \(w(t)\) is well defined. Let us introduce
and note that
Thus, \(\alpha \lambda _{0}\in \varLambda \). Repeating this procedure, one finds that \(\alpha ^{m}\lambda _{0}\in \varLambda \) for any integer m, which contradicts the boundedness of Λ. The proof is complete. □

Corollary 2.4

Assume that \(P(t)\geq 0\), \(\liminf_{t\rightarrow \infty }\int _{t- \eta }^{t}P(s)\Delta s>0\) and there exist T and \(\lambda >0\) such that the corresponding condition holds. Then every solution of the equation is oscillatory.

Nonoscillation
Here we derive some results for the existence of a positive solution of (1).
Lemma 3.1 Assume that (i)
\(\bar{R}(t)=P(t)-Q(t-\eta -\delta )\geq 0\);
(ii) the inequality
$$ C(t)z(t-\zeta )+ \int ^{t-\delta }_{t-\eta }Q(s+\delta )z(s)\Delta s + \int _{t-\eta }^{\infty }\bar{R}(s+\eta )z(s)\Delta s\leq z(t), \quad \textit{for } t\geq t_{1}, $$
(18)
has a continuous positive solution \(Z(t)\): \([t_{1}-m, \infty )\rightarrow (0,\infty )\) with \(\lim_{t\rightarrow \infty }Z(t)=0\).

Then the equation has a continuous positive solution \(y(t)\) with \(0< y(t)\leq Z(t)\) for \(t\geq t_{1}\).

Proof
Take \(T>t_{1}\) large enough so that \(z(t)>Z(t)\) for \(t\in [t_{1}-m,T)\). Define a set
and introduce an operator S on Ω as follows:
It is clear that \(S \varOmega \subset \varOmega \), and \(\omega _{1}, \omega _{2}\in \varOmega \) with \(\omega _{1}\leq \omega _{2}\) implies \(S\omega _{1}\leq S \omega _{2}\).
Define a sequence on Ω as
It is not difficult to prove that
Therefore, the sequence \(\{z _{k}(t)\}\) has a limiting function \(y(t)\) with \(\lim_{t\rightarrow \infty }z _{k} (t)=y(t)\) for \(t\in [t_{1}-m,\infty )\) and \(y(t)\) satisfies (19) by Lebesgue’s convergence theorem. It is easy to see that \(y(t)>0\) for \(t\in [t_{1}-m,T ]\) and hence \(y(t)>0\) for all \(t\in [t_{1}-m, \infty )\) with \(0< y(t)\leq Z(t)\). The proof is complete. □
Theorem 3.2 Assume that (i)
\(\bar{R}(t)=P(t)-Q(t-\eta -\delta )\geq 0\);
(ii) there exist \(T\geq t_{1}+m\) and \(\lambda ^{*}>0\) such that
$$ \begin{aligned}[b] &\sup_{t\geq T} \biggl\{ \frac{1}{\lambda ^{*}}\exp \biggl(- \int ^{t}_{t- \eta }\xi _{\mu }\bigl( -\lambda ^{*}\bar{R}(u)\bigr)\Delta u \biggr) +C(t-\eta ) \exp \biggl(- \int ^{t}_{t-\zeta }\xi _{\mu }\bigl( -\lambda ^{*}\bar{R}(s)\bigr) \Delta s \biggr) \\ &\quad {}+ \int ^{t-\delta }_{t-\eta }Q(s+\delta -\eta )\exp \biggl(- \int ^{t} _{s}\xi _{\mu }\bigl( -\lambda ^{*}\bar{R}(u)\bigr)\Delta u \biggr)\Delta s \biggr\} \leq 1. \end{aligned} $$
(20)

Then Eq. (1) has a positive solution \(y(t)\) with \(\lim_{t\rightarrow \infty }y(t)=0\).

Proof
Set
Obviously \(z(t)\) is well defined, positive and continuous. From the condition (20), for \(t\geq T\geq T-\eta \), we have
From (21), it is easy to see that \(z^{\Delta }(t)=-\lambda ^{*} \bar{R}(t+\eta )z(t)\), and hence we have
Thus the desired conclusion follows by Lemma 3.1. The proof is complete. □
References

1. Bohner, M., Georgiev, S.G.: Multivariable Dynamic Calculus on Time Scales. Springer, Berlin (2017)
2. Georgiev, S.G.: Fractional Dynamic Calculus and Fractional Dynamic Equations on Time Scales. Springer, Berlin (2018)
3. Martynyuk, A.A.: Stability Theory for Dynamic Equations on Time Scales. Springer, Berlin (2016)
4. Georgiev, S.G.: Integral Equations on Time Scales. Atlantis Press (2016)
5. Saker, S.: Oscillation Theory of Dynamic Equations on Time Scales: Second and Third Orders. Lap Lambert Academic Publishing (2010)
6. Agarwal, R.P., Bohner, M., Li, T., Zhang, C.: Comparison theorems for oscillation of second-order neutral dynamic equations. Mediterr. J. Math. 11, 1115–1127 (2014)
7. Bohner, M., Li, T.: Oscillation of second-order p-Laplace dynamic equations with a nonpositive neutral coefficient. Appl. Math. Lett. 37, 72–76 (2014)
8. Li, T., Saker, S.H.: A note on oscillation criteria for second-order neutral dynamic equations on isolated time scales. Commun. Nonlinear Sci. Numer. Simul. 19, 4185–4188 (2014)
9. Li, T., Zhang, C., Thandapani, E.: Asymptotic behavior of fourth-order neutral dynamic equations with noncanonical operators. Taiwan. J. Math. 18, 1003–1019 (2014)
10. Zhang, C., Agarwal, R.P., Bohner, M., Li, T.: Oscillation of second-order nonlinear neutral dynamic equations with noncanonical operators. Bull. Malays. Math. Sci. Soc. 38, 761–778 (2015)
11. Zhou, Y., Lan, Y.: Classification and existence of non-oscillatory solutions of second-order neutral delay dynamic equations on time scales. Nonlinear Oscil. 16(2), 191–206 (2013)
12. Agarwal, R.P., Bohner, M., Li, T., et al.: Oscillation criteria for second-order dynamic equations on time scales. Appl. Math. Lett. 31, 34–40 (2014)
13. Deng, X.H., Wang, Q.R., Zhou, Z.: Oscillation criteria for second order nonlinear delay dynamic equations on time scales. Appl. Math. Comput. 269, 834–840 (2015)
14. Deng, X.H., Wang, Q.R., Zhou, Z.: Generalized Philos-type oscillation criteria for second order nonlinear neutral delay dynamic equations on time scales. Appl. Math. Lett. 57, 69–76 (2016)
15. Senel, M.T., Utku, N., El-Sheikh, M.M.A., et al.: Kamenev-type criteria for nonlinear second-order delay dynamic equations. Hacet. J. Math. Stat. 47(2), 339–345 (2018)
16. Bohner, M., Hassan, T.S., Li, T.: Fite–Hille–Wintner-type oscillation criteria for second-order half-linear dynamic equations with deviating arguments. Indag. Math. 29(2), 548–560 (2018)
17. Hasil, P., Veselý, M.: Oscillation and non-oscillation results for solutions of perturbed half-linear equations. Math. Methods Appl. Sci. 41(9), 3246–3269 (2018)
18. Negi, S.S., Abbas, S., Malik, M.: Oscillation criteria of singular initial-value problem for second order nonlinear dynamic equation on time scales. Nonautonomous Dynamical Systems 5(1), 102–112 (2018)
19. Negi, S.S., Abbas, S., Malik, M., et al.: New oscillation criteria of special type second-order non-linear dynamic equations on time scales. Math. Sci. 12(1), 25–39 (2018)
20. Zhu, Z.Q., Wang, Q.R.: Existence of nonoscillatory solutions to neutral dynamic equations on time scales. J. Math. Anal. Appl. 335(2), 751–762 (2007)
21. Zhou, Y.: Nonoscillation of higher order neutral dynamic equations on time scales. Appl. Math. Lett. 94, 204–209 (2019)
22. Zhou, Y., Ahmad, B., Alsaedi, A.: Necessary and sufficient conditions for oscillation of second-order dynamic equations on time scales. Math. Methods Appl. Sci. 42, 4488–4497 (2019)
23. Zhou, Y., He, J.W., Ahmad, B., Alsaedi, A.: Necessary and sufficient conditions for oscillation of fourth order dynamic equations on time scales. Adv. Differ. Equ. 2019, 308 (2019)

Acknowledgements
This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia, under grant No. (DF-070-130-1441). The author, therefore, acknowledges with thanks the DSR technical and financial support.
Availability of data and materials
Not applicable.
Funding
This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (DF-070-130-1441).
Ethics declarations

Competing interests
The authors declare that they have no competing interests.
Additional information

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
As the title suggests, I'm looking for some clarifications on the computation of the Ext charts of some $A(1)$-modules arising as extensions of other modules. In particular, I have the following example I'd like to consider: $$0 \to \Sigma^3D \to \Bbb R P^{\infty} \to M \to 0$$
where $D$ is the $A(1)$-module described in Freed and Hopkins' paper at page 75 (table on the right), $M$ is the Moore space (whose $A(1)$-resolution can be found here), and $\Bbb R P^{\infty}$ is the infinite projective space. Here I draw the s.e.s. with the usual convention for the action of $Sq^1$ and $Sq^2$:
And here is what (as far as I can tell) the situation is at the level of the ASS:
The green arrows should be the map representing the connecting homomorphisms in the l.e.s. of ext-groups induced by the above s.e.s.
Since we know what the stable page of the ASS for $\Bbb R P^{\infty}$ should be, and there are no differentials for dimensional reasons, we see that the only non-trivial green arrows are the ones starting in positions $(5,1)$ and $(6,2)$.
This is what I don't understand: if the arrows represent the maps induced by the l.e.s. of the Ext's, then at least the first one $$\delta \colon \hom_{A(1)}(\Sigma^3D, \Bbb F_2) \to Ext^1_{A(1)}(M, \Bbb F_2)$$ shouldn't be trivial, since the s.e.s. above is non-split. In particular this would be the green arrow starting from position $(1,0)$, which we observed has to be zero. So what am I missing?
I apologise in advance if the question is stupid. |
What local forces cause the damage? Tidal forces. Tidal forces can only be neglected in extremely small regions, and you have an extended body. More details follow.
You suppose an extended body, bound together, interacting gravitationally with a black hole. With the center of mass staying outside the photon sphere, moving on a more than barely unbounded orbit. But a large enough extended body that part of it crosses the event horizon, but you want the Roche limit to be smaller than the event horizon.
I can see why people criticized the original question. If you define the Roche limit to be the distance at which a satellite breaks up from tidal forces, then it appears to be a contradiction on the face of it.
But I'll start with a warning about center-of-mass motions. When you are losing matter, the center of mass can move faster than light. Really. Imagine a very long train moving at a speed $v$, whose halfway link breaks (so the back half slows and stops); the center of mass of the still-moving front half then jumps to where the first 25th percentile (1st quartile) of the original train was. Then the link halfway through the remaining train breaks, and the new center of mass is where the original 12.5th percentile was. This can happen very quickly, so the center of mass can jump a large distance in a short amount of time. Center-of-mass motions are therefore not at all like real motions of real particles. I mention this warning because it seemed like you didn't know it (and it is true even in Newtonian physics), but it does mean you might be able to start within the photon sphere and still have part of your star escape, since the center of mass can move faster than light if you are willing to leave something behind. Don't get too excited about that (it isn't deep or fantastic), but it can keep you from painting yourself unnecessarily into a corner by falsely thinking centers of mass have to do things they don't have to do.
But you asked about the case where the center of mass stays out of the photon sphere. So lets do that. So now the question remains about whether the Roche limit can be inside the event horizon. If your center of mass is outside the photon sphere you are at areal coordinate 1.5 $R_S$ (where $R_S$ denotes the Schwarzschild "radius") or farther. You want the actual radius to be large enough to have the center stay outside the photon sphere yet have part go into the event horizon. That might look like you need a radius of 0.5 $R_S$ or more, but the distance between areal coordinate 1.5 $R_S$ and areal coordinate 1.0 $R_S$ is actually more than 0.5 $R_S$. Even if you haven't studied GR you can see that by looking at one of those funnel pictures and note that the areal coordinates is the circumference divided by $2\pi$ (by definition, you can also take the surface area and divide it by $4\pi$ and then take the square root), and since the funnel extends in a third direction it requires more distance travelled to get to larger or smaller areal coordinate than you'd expect if the areal coordinate were just distance from the center as it is in uncurved spacetimes. But since that was the same factor for the radius of the star and the radius of the black hole and they need to be about 50% of the size with each other to straddle the region from event horizon to photon sphere, it might not be anything to worry about.
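To make the "more than 0.5 $R_S$" claim concrete, here is a small numerical sketch of my own (units where $R_S = 1$): the proper radial distance from the horizon to the photon sphere is $\int_1^{1.5} (1-1/r)^{-1/2}\,dr$, and substituting $r = 1 + t^2$ tames the integrable singularity at the horizon:

```python
import math

def proper_distance(r_outer, n=100000):
    # Proper radial distance in the Schwarzschild geometry, in units of R_S:
    #   s = integral from r = 1 to r = r_outer of dr / sqrt(1 - 1/r).
    # With r = 1 + t^2 this becomes  s = integral of 2*sqrt(1 + t^2) dt
    # from t = 0 to t = sqrt(r_outer - 1), which is smooth.
    t_max = math.sqrt(r_outer - 1.0)
    dt = t_max / n
    # midpoint rule
    return sum(2.0 * math.sqrt(1.0 + ((i + 0.5) * dt) ** 2) for i in range(n)) * dt

s = proper_distance(1.5)
print(round(s, 3))  # roughly 1.52 R_S, i.e. about three times the naive 0.5 R_S
```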
But that scale, that the satellite is maybe near to 50% the size of the black hole. That itself does pose serious problem to the normal expression for the Roche limit. The normal expression assumes that the radius of satellite is much smaller than the distance between the centers. This is most definitely not going to hold in this case.
So normally you consider the tidal force as the difference in two forces, say from the center and edge. So if the centers are a distance $d$ apart, and the satellite has a radius $r$ then the difference of the forces on a small mass $u$ nearest the black hole is
$$\frac{GMu}{(d-r)^2}-\frac{GMu}{d^2}=\frac{GMu\,d^2-GMu(d-r)^2}{d^2(d-r)^2}\approx\frac{2GMur}{d^3},$$ where the approximation only holds when $r \ll d$, which doesn't hold in our case at all. Let's try to look at our situation. In our situation the mass closest to the black hole is actually inside the event horizon. Now, in GR gravity isn't a force, but we know that any force applied to the part inside will fail to keep it at rest, so there is a sense in which that force is infinite. So it is reasonable to only look at tidal forces outside the horizon.
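As a quick sanity check on that expansion (illustrative numbers, units $G = M = u = 1$): the $2GMur/d^3$ approximation is excellent when $r \ll d$ and badly off when $r$ is a sizable fraction of $d$, as in the situation here:

```python
def tidal_exact(d, r):
    # exact difference of the pulls at distance d - r and at distance d
    return 1 / (d - r) ** 2 - 1 / d ** 2

def tidal_approx(d, r):
    # the standard leading-order tidal estimate 2r/d^3
    return 2 * r / d ** 3

for d, r in [(100.0, 1.0), (3.0, 1.0)]:  # r << d, then r comparable to d
    exact, approx = tidal_exact(d, r), tidal_approx(d, r)
    print(d, r, exact, approx, approx / exact)
```

With $d = 100$, $r = 1$ the ratio is within a couple of percent of 1; with $d = 3$, $r = 1$ the approximation captures barely half of the exact difference.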
So let's look at near misses, where the passing star has parts graze the event horizon as close as we want. If that is enough to rip parts off, then we know the tidal forces are strong enough. We know that it is possible to escape the black hole if you are outside the horizon. But if the only way to fight out of the black hole is the binding of the satellite, then we can take that as fixed and see how close we have to graze the event horizon before we leave parts behind.
If the whole satellite stays outside of the photon sphere there should be no problem, and this is far from grazing. Contrary to popular mythology, the satellite is an extended body, so it can't just move on a background spacetime distorted by the black hole alone; the satellite really distorts spacetime itself as well. But that is hard to do correctly, and you said you didn't know much GR, so let's ignore that.
Once we ignore that, we can use a trick. Imagine a series of stars with the same center of mass but larger radii. The ones with a radius extending into the black hole lose part of their outer layer. That happens to all of them, regardless of how little they went over and how strongly they were bound together. And it was tidal forces that pulled them in, regardless of the binding. So the cohesive forces relative to the far-away center of the star weren't enough to keep the star parts on the black hole's side of the event horizon attached, and since the forces vary continuously they must get arbitrarily large. Now these forces are a relative force: they act between two regions; they are a differential force. For a non-extended body they would vanish. But tidal forces are real. The geometric version would be that, say, a blob of water minimizing its surface area finds less surface area if it bulges towards and away from a star. An extended body notices that spacetime is curved differently in different regions, and its mutual interactions act differently because of that than they do in flat space. So in a sense the star is tearing itself apart because of the way it accommodates the different geometry on its different sides.
It is also important that tidal forces disappear not just in small regions of space but only over small intervals of time too. The changing geometry can be a change from time to time as well. And the time part of spacetime can be, and is, curved too. In fact, for everyday situations that is the important effect.
So the tidal tearing apart doesn't happen right away. The first part of the star that dips in doesn't notice much at first, only really small tidal effects in that really small region for the short moment it crosses. But those small differences build up, because each part is just a bit off from its neighbor, and the star becomes stretched out. As the star pulls away, its different parts are ageing differently, which affects how they cohere together and propagate, so the outer parts pull away. If you go back to the water blob that found less surface area when it bulged towards and away from the black hole, you see that it pulls away more than normal, and as the center of mass tracks on, it is harder to drag those parts with it. It is all tidal forces, and yet it is also all the star ripping itself apart.
OK. So we could do that again and again for smaller and smaller stars; eventually we have a star that just barely has a part inside, and it is still torn apart. But now, if you removed that outer layer, the temperature and pressure of the remaining star wouldn't be that different (since the outer layer is so thin). And without that outer layer there is actually less pressure from that now-gone layer holding the now-outermost layer towards the remaining star, and the larger star was torn apart even when/if it had a fictional stronger (say, non-gravitational) binding. So it's bound less tightly and would have been stretched out even if bound more tightly. It gets torn apart. Technically you have to be more careful when making that last argument. You can find out how much that now-never-existing layer pushed down, choose a small region of space and time right when it was getting super close to the event horizon, and argue that the lack of a push now tells us how it moves. And by choosing a frame localized in time and space that is falling towards the black hole, we have that it only failed to cross when it had an extra push away from the event horizon, and now it does not have that extra push. It's more rigorous if you select actual masses and radii, but that doesn't matter for the final result. And it's technically not right at all if we don't let the star curve spacetime too and instead have its parts exert gravitational forces on each other. But if the satellite is a star and not a black hole or neutron star itself, that might be fairly accurate.
I'm not sure why you implicitly assumed the black hole to be larger than the star. If you had a star sitting there minding its own business and a very small black hole whizzed by but stayed a few Schwarzschild radii away from the center of the star, it would just thread through the star, remove a small tube of matter, and keep going. Obviously the center of mass of the star stayed out of the photon sphere, and part of it entered the event horizon and got eaten. And it was tidal forces: the local pull of the tiny black hole pulled in some matter because locally it was stronger than the pull of the further interior portions of the star.
In the paper Black Hole Entropy is Noether Charge, Wald related the black hole entropy to the Neother charge using the covariant phase space formalism. In proving this relation, Wald noticed that on the bifurcation surface, the Killing vector field $\xi^a$ for a stationary black hole vanishes, and at the same time, $\nabla_a\xi_b=\kappa\epsilon_{ab}$.
Here, $\kappa$ is the surface gravity, and satisfies $\xi^b\nabla_b\xi^a=\kappa\xi^a$. $\epsilon_{ab}$ is the binormal to the bifurcation surface, whose explicit expression was not given in the paper. My understanding is that I want to introduce another null vector field $n^a$ on the horizon such that $n^a\xi_a=-1$. $n^a$ is also normal to the cut of the horizon. Since $\xi^a$ is a normal vector on the horizon, $\xi_{[a}\nabla_b\xi_{c]}=0$, which implies that
$$ \nabla_a\xi_b=\xi_bv_a-\xi_av_b $$
where $v_a=n^b\nabla_b\xi_a$. Furthermore, one can decompose $v_a=-\kappa n_a+\hat v_a$ such that $\xi^a\hat v_a=n^a\hat v_a=0$. Therefore,
$$\nabla_a\xi_b=2\kappa\xi_{[a}n_{b]}+2\xi_{[b}\hat v_{a]}$$
Now, this equation is valid everywhere on the horizon. Apply it to the bifurcation surface where $\xi^a=0$, then
$$\nabla_a\xi_b=0!$$
Since $\xi^a=0$, $\xi_a=g_{ab}\xi^b=0$ should be correct, as on the horizon, the metric behaves well, although it might have components blowing up in some coordinates.
Please help me out with this! There must be something wrong with my calculation. |
Dear Uncle Colin,
I'm trying to sew a traditional football in the form of a truncated icosahedron. If I want a radius of 15cm, how big do the polygons need to be?
-- Plugging In Euler Characteristic's Excessive
Hello, PIECE, and thank you for your message!
Getting an exact answer to that is a little tricky, but we can come up with a pretty good approximation: if we assume the ball's surface area is the same as that of a sphere, we can work it out.
Now, a truncated icosahedron is made of 12 regular pentagons and 20 regular hexagons, all of the same side length, which I'll call $E$.
Finding the area of a regular polygon boils down to trigonometry: you can split any regular polygon with $n$ sides into $2n$ right-angled triangles with an apex angle of $\frac{\pi}{n}$ opposite a base of $\frac 12 E$. The area of each triangle is $\frac1 8 E^2 \cot\left(\frac{\pi}{n}\right)$, so the polygon area is $\frac{n}{4}E^2 \cot\left(\frac{\pi}{n}\right)$.
In particular, a regular pentagon has an area of $\frac{5}{4}E^2 \cot\left(\frac{\pi}{5}\right)$, and the hexagon is $\frac{6}{4}E^2 \cot\left(\frac{\pi}{6}\right)$, which is $\frac{3}{2}\sqrt{3}E^2$.
Multiplying the pentagon area by 12 and the hexagon area by 20 gives an unholy mess that gives a total surface area of about $72.607E^2$. Don't tell the Mathematical Ninja.
The surface area of a sphere of radius 15cm is $4\pi r^2 = 900\pi \,\mathrm{cm}^2$.
So, we have $900 \pi \approx 72.607E^2$, and we can rearrange to find $E^2 \approx 38.94 \mathrm{cm}^2$ and $E\approx 6.24$cm((The accurate answer is around 6.05cm, but it's even more of a pain to work out.)).
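For anyone who wants to check the arithmetic, here is a short Python verification of the $72.607$ coefficient and the resulting edge length (the helper name is my own):

```python
import math

def polygon_area_coeff(n):
    # area of a regular n-gon with side E is (n/4) * cot(pi/n) * E^2;
    # this returns the (n/4) * cot(pi/n) coefficient
    return (n / 4) / math.tan(math.pi / n)

# 12 pentagons and 20 hexagons of the truncated icosahedron
coeff = 12 * polygon_area_coeff(5) + 20 * polygon_area_coeff(6)

# set total area equal to that of a sphere of radius 15cm: coeff * E^2 = 900*pi
E = math.sqrt(900 * math.pi / coeff)
print(round(coeff, 3), round(E, 2))  # about 72.607 and about 6.24
```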
Hope that helps!
-- Uncle Colin |
This type of situation can be handled by a standard F-test for nested models. Since you want to test both of the parameters against a null model with fixed parameters, your hypotheses are:
$$H_0: \boldsymbol{\beta} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \quad \quad H_A: \boldsymbol{\beta} \neq \begin{bmatrix} 0 \\ 1 \end{bmatrix} .$$
The F-test involves fitting both models and comparing their residual sum-of-squares, which are:
$$SSE_0 = \sum_{i=1}^n (y_i-x_i)^2 \quad \quad \quad SSE_A = \sum_{i=1}^n (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2$$
The test statistic is:
$$F \equiv F(\mathbf{y}, \mathbf{x}) = \frac{n-2}{2} \cdot \frac{SSE_0 - SSE_A}{SSE_A}.$$
The corresponding p-value is:
$$p \equiv p(\mathbf{y}, \mathbf{x}) = \int \limits_{F(\mathbf{y}, \mathbf{x}) }^\infty \text{F-Dist}(r | 2, n-2) \ dr.$$
Implementation in R: Suppose your data is in a data-frame called DATA with variables called y and x. The F-test can be performed manually with the following code. In the simulated mock data I have used, you can see that the estimated coefficients are close to the ones in the null hypothesis, and the p-value of the test shows no significant evidence to falsify the null hypothesis that the true regression function is the identity function.
#Generate mock data (you can substitute your data if you prefer)
set.seed(12345);
n <- 1000;
x <- rnorm(n, mean = 0, sd = 5);
e <- rnorm(n, mean = 0, sd = 2/sqrt(1+abs(x)));
y <- x + e;
DATA <- data.frame(y = y, x = x);
#Fit initial regression model
MODEL <- lm(y ~ x, data = DATA);
#Calculate test statistic
SSE0 <- sum((DATA$y-DATA$x)^2);
SSEA <- sum(MODEL$residuals^2);
F_STAT <- ((n-2)/2)*((SSE0 - SSEA)/SSEA);
P_VAL <- pf(q = F_STAT, df1 = 2, df2 = n-2, lower.tail = FALSE);
#Plot the data and show test outcome
plot(DATA$x, DATA$y,
main = 'All Residuals',
sub = paste0('(Test against identity function - F-Stat = ',
sprintf("%.4f", F_STAT), ', p-value = ', sprintf("%.4f", P_VAL), ')'),
xlab = 'Dataset #1 Normalized residuals',
ylab = 'Dataset #2 Normalized residuals');
abline(lm(y ~ x, DATA), col = 'red', lty = 2, lwd = 2);
The summary output and plot for this data look like this:
summary(MODEL);
Call:
lm(formula = y ~ x, data = DATA)
Residuals:
Min 1Q Median 3Q Max
-4.8276 -0.6742 0.0043 0.6703 5.1462
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.02784 0.03552 -0.784 0.433
x 1.00507 0.00711 141.370 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.122 on 998 degrees of freedom
Multiple R-squared: 0.9524, Adjusted R-squared: 0.9524
F-statistic: 1.999e+04 on 1 and 998 DF, p-value: < 2.2e-16
F_STAT;
[1] 0.5370824
P_VAL;
[1] 0.5846198 |
Predicting Housing Prices with Linear Regression using Python, pandas, and statsmodels
In this post, we'll walk through building linear regression models to predict housing prices resulting from economic activity.
Future posts will cover related topics such as exploratory analysis, regression diagnostics, and advanced regression modeling, but I wanted to jump right in so readers could get their hands dirty with data. If you would like to see anything in particular, feel free to leave a comment below.
Let's dive in.
Article Resources

Notebook and Data: GitHub
Libraries: numpy, pandas, matplotlib, seaborn, statsmodels

What is Regression?

Linear regression is a model that predicts a linear relationship between the dependent variable (plotted on the vertical or Y axis) and the predictor variables (plotted on the X axis), producing a straight line, like so:
Linear regression will be discussed in greater detail as we move through the modeling process.
Variable Selection
For our dependent variable we'll use housing_price_index (HPI), which measures price changes of residential housing.
For our predictor variables, we use our intuition to select drivers of macro- (or “big picture”) economic activity, such as unemployment, interest rates, and gross domestic product (total productivity). For an explanation of our variables, including assumptions about how they impact housing prices, and all the sources of data used in this post, see here.
Reading in the Data with pandas
Before anything, let's get our imports for this tutorial out of the way.
The first import is just to change how tables appear in the accompanying notebook, the rest will be explained once they're used:
You can grab the data using the pandas read_csv method directly from GitHub. Alternatively, you can download it locally.
Once we have the data, invoke pandas' merge method to join the data together in a single dataframe for analysis. Some data is reported monthly, other data quarterly. No worries. We merge the dataframes on a certain column so each row is in its logical place for measurement purposes. In this example, the best column to merge on is the date column. See below.
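The merge step might look like the following minimal sketch. The two tiny frames here are hand-made stand-ins (their names, `hpi` and `unemp`, are my own; the values come from the sample table below), not the post's actual loading code:

```python
import pandas as pd

# Two series that share a 'date' column; merging on it aligns each
# observation in a single dataframe (an inner join keeps matching dates).
hpi = pd.DataFrame({
    "date": ["2011-01-01", "2011-04-01", "2011-07-01"],
    "housing_price_index": [181.35, 180.80, 184.25],
})
unemp = pd.DataFrame({
    "date": ["2011-01-01", "2011-04-01", "2011-07-01"],
    "total_unemployed": [16.2, 16.1, 15.9],
})

df = pd.merge(hpi, unemp, on="date")
print(df.shape)  # → (3, 3)
```

Merging on `date` works regardless of reporting frequency, as long as the dates that should line up match exactly.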
Let's get a quick look at our variables with pandas' head method. The headers in bold text represent the date and the variables we'll test for our model. Each row represents a different time period.
   date        sp500    consumer_price_index  long_interest_rate  housing_price_index  total_unemployed  more_than_15_weeks  not_in_labor_searched_for_work  multi_jobs  leavers  losers  federal_funds_rate  total_expenditures  labor_force_pr  producer_price_index  gross_domestic_product
0  2011-01-01  1282.62  220.22  3.39  181.35  16.2  8393  2800  6816  6.5  60.1  0.17  5766.7  64.2  192.7  14881.3
1  2011-04-01  1331.51  224.91  3.46  180.80  16.1  8016  2466  6823  6.8  59.4  0.10  5870.8  64.2  203.1  14989.6
2  2011-07-01  1325.19  225.92  3.00  184.25  15.9  8177  2785  6850  6.8  59.2  0.07  5802.6  64.0  204.6  15021.1
3  2011-10-01  1207.22  226.42  2.15  181.51  15.8  7802  2555  6917  8.0  57.9  0.07  5812.9  64.1  201.1  15190.3
4  2012-01-01  1300.58  226.66  1.97  179.13  15.2  7433  2809  7022  7.4  57.1  0.08  5765.7  63.7  200.7  15291.0
Usually, the next step after gathering data would be exploratory analysis. Exploratory analysis is the part of the process where we analyze the variables (with plots and descriptive statistics) and figure out the best predictors of our dependent variable.
For the sake of brevity, we'll skip the exploratory analysis. Keep in the back of your mind, though, that it's of utmost importance and that skipping it in the real world would preclude ever getting to the predictive section.
We'll use ordinary least squares (OLS), a basic yet powerful way to assess our model.

Ordinary Least Squares Assumptions

OLS measures the accuracy of a linear regression model.
OLS is built on assumptions which, if held, indicate the model may be the correct lens through which to interpret our data. If the assumptions don't hold, our model's conclusions lose their validity.
Take extra effort to choose the right model to avoid Auto-esotericism/Rube-Goldberg’s Disease.
Here are the OLS assumptions:
- Linearity: A linear relationship exists between the dependent and predictor variables. If no linear relationship exists, linear regression isn't the correct model to explain our data.
- No multicollinearity: Predictor variables are not collinear, i.e., they aren't highly correlated. If the predictors are highly correlated, try removing one or more of them. Since additional predictors are supplying redundant information, removing them shouldn't drastically reduce the Adj. R-squared (see below).
- Zero conditional mean: The average of the distances (or residuals) between the observations and the trend line is zero. Some will be positive, others negative, but they won't be biased toward a set of values.
- Homoskedasticity: The certainty (or uncertainty) of our dependent variable is equal across all values of a predictor variable; that is, there is no pattern in the residuals. In statistical jargon, the variance is constant.
- No autocorrelation (serial correlation): Autocorrelation is when a variable is correlated with itself across observations. For example, a stock price might be serially correlated if one day's stock price impacts the next day's stock price.
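A quick way to screen for the multicollinearity assumption is a correlation matrix of the predictors. This sketch uses made-up data, and the 0.9 cutoff is a common rule of thumb rather than anything from the post:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
gdp = rng.normal(size=100)
df = pd.DataFrame({
    "gross_domestic_product": gdp,
    # Nearly collinear with GDP on purpose:
    "total_expenditures": 0.95 * gdp + rng.normal(scale=0.1, size=100),
    # Independent of GDP:
    "total_unemployed": rng.normal(size=100),
})

corr = df.corr()
# A pairwise correlation near 1 flags potential multicollinearity.
print(corr.loc["gross_domestic_product", "total_expenditures"] > 0.9)  # → True
```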
Let's begin modeling.
Simple Linear Regression
Simple linear regression uses a single predictor variable to explain a dependent variable. A simple linear regression equation is as follows:
$$y_i = \alpha + \beta x_i + \epsilon_i$$
Where:
$y$ = dependent variable
$\beta$ = regression coefficient
$\alpha$ = intercept (expected mean value of housing prices when our independent variable is zero)
$x$ = predictor (or independent) variable used to predict Y
$\epsilon$ = the error term, which accounts for the randomness that our model can't explain.
Using statsmodels' ols function, we construct our model setting housing_price_index as a function of total_unemployed. We assume that an increase in the total number of unemployed people will have downward pressure on housing prices. Maybe we're wrong, but we have to start somewhere!

The code below shows how to set up a simple linear regression model with total_unemployed as our predictor variable.
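A sketch of what that statsmodels call looks like. The column names follow the post, but the data here is synthetic (the real series live in the linked notebook), so treat the numbers as illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the dataset: HPI falls as unemployment rises.
rng = np.random.default_rng(42)
total_unemployed = rng.uniform(8, 17, size=50)
housing_price_index = 250 - 8.0 * total_unemployed + rng.normal(scale=2, size=50)
df = pd.DataFrame({"housing_price_index": housing_price_index,
                   "total_unemployed": total_unemployed})

# R-style formula: dependent ~ predictor
model = smf.ols("housing_price_index ~ total_unemployed", data=df).fit()
print(model.summary())  # coefficient table, R-squared, confidence intervals
```

The fitted slope recovers something close to the true value of -8 used to generate the data.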
To explain: the Adj. R-squared indicates that 95% of the variation in housing prices can be explained by our predictor variable, total_unemployed.
The regression coefficient (coef) represents the change in the dependent variable resulting from a one unit change in the predictor variable, all other variables being held constant. In our model, a one unit increase in total_unemployed reduces housing_price_index by 8.33. In line with our assumptions, an increase in unemployment appears to reduce housing prices.
The standard error measures the accuracy of total_unemployed's coefficient by estimating the variation of the coefficient if the same test were run on a different sample of our population. Our standard error, 0.41, is low and therefore appears accurate.
The p-value is the probability of observing a decrease in housing_price_index at least as large as 8.33 per one unit increase in total_unemployed if, in reality, there were no relationship between the two variables; here that probability is effectively 0%. By convention, a result is called statistically significant when the p-value is below 0.05.
The confidence interval is a range within which our coefficient is likely to fall. We can be 95% confident that total_unemployed's coefficient will be within our confidence interval, [-9.185, -7.480].
Regression Plots
Please see the four graphs below.
The “Y and Fitted vs. X” graph plots the dependent variable against our predicted values with a confidence interval. The inverse relationship in our graph indicates that housing_price_index and total_unemployed are negatively correlated, i.e., when one variable increases the other decreases.
The “Residuals versus total_unemployed” graph shows our model's errors versus the specified predictor variable. Each dot is an observed value; the line represents the mean of those observed values. Since there's no pattern in the distance between the dots and the mean value, the OLS assumption of homoskedasticity holds.
The “Partial regression plot” shows the relationship between housing_price_index and total_unemployed, taking into account the impact of adding other independent variables on our existing total_unemployed coefficient. We'll see later how this same graph changes when we add more variables.
The Component and Component Plus Residual (CCPR) plot is an extension of the partial regression plot, but shows where our trend line would lie after adding the impact of our other independent variables on our existing total_unemployed coefficient. More on this plot here.
The next plot graphs our trend line (green), the observations (dots), and our confidence interval (red).
Multiple Linear Regression
Mathematically, multiple linear regression is:
$$Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_k x_k + \epsilon$$
We know that unemployment cannot entirely explain housing prices. To get a clearer picture of what influences housing prices, we add and test different variables and analyze the regression results to see which combinations of predictor variables satisfy OLS assumptions, while remaining intuitively appealing from an economic perspective.
We arrive at a model that contains the following variables: fed_funds, consumer_price_index, long_interest_rate, and gross_domestic_product, in addition to our original predictor, total_unemployed.
Adding the new variables decreased the impact of total_unemployed on housing_price_index. total_unemployed's impact is now more uncertain (its standard error increased from 0.41 to 2.399), and, since its p-value increased (from 0 to 0.943), it is less likely to influence housing prices.
Although total_unemployed may be correlated with housing_price_index, our other predictors seem to capture more of the variation in housing prices. The real-world interconnectivity among our variables can't be encapsulated by a simple linear regression alone; a more robust model is required. This is why our multiple linear regression model's results change drastically when introducing new variables.
That all our newly introduced variables are statistically significant at the 5% threshold, and that our coefficients follow our assumptions, indicates that our multiple linear regression model is better than our simple linear model.
The code below sets up a multiple linear regression with our new predictor variables.
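The statsmodels formula would simply list all five predictors (e.g. `housing_price_index ~ total_unemployed + fed_funds + consumer_price_index + long_interest_rate + gross_domestic_product`). Under the hood the fit solves the least-squares problem \(\hat{\beta} = (X^{T}X)^{-1}X^{T}y\); here is a minimal numpy check on synthetic data (the values are made up, and the column roles only mirror the post's variables):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Five synthetic predictors standing in for fed_funds, consumer_price_index,
# long_interest_rate, gross_domestic_product, and total_unemployed.
X = rng.normal(size=(n, 5))
true_beta = np.array([2.0, -1.0, 0.5, 3.0, -0.1])
y = 100 + X @ true_beta + rng.normal(scale=0.5, size=n)

# Prepend an intercept column and solve the least-squares problem.
X1 = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(beta_hat[1:])  # estimates close to true_beta
```

Statsmodels reports exactly these coefficients, plus the standard errors, p-values, and diagnostics discussed above.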
Another Look at Partial Regression Plots
Now let's plot our partial regression graphs again to visualize how the total_unemployed variable was impacted by including the other predictors. The lack of trend in the partial regression plot for total_unemployed (in the figure below, upper right corner), relative to the regression plot for total_unemployed (above, lower left corner), indicates that total unemployment isn't as explanatory as the first model suggested. We also see that the observations for the newly added variables are consistently closer to the trend line than the observations for total_unemployed, which reaffirms that fed_funds, consumer_price_index, long_interest_rate, and gross_domestic_product do a better job of explaining housing_price_index.
These partial regression plots reaffirm the superiority of our multiple linear regression model over our simple linear regression model.
Conclusion
We have walked through setting up basic simple linear and multiple linear regression models to predict housing prices resulting from macroeconomic forces and how to assess the quality of a linear regression model on a basic level.
To be sure, explaining housing prices is a difficult problem. There are many more predictor variables that could be used. And causality could run the other way; that is, housing prices could be driving our macroeconomic variables; and even more complex still, these variables could be influencing each other simultaneously.
I encourage you to dig into the data and tweak this model by adding and removing variables while remembering the importance of OLS assumptions and the regression results.
Most importantly, know that the modeling process, being based in science, is as follows: test, analyze, fail, and test some more.

Navigating Pitfalls
This post is an introduction to basic regression modeling, but experienced data scientists will find several flaws in our method and model, including:
- No lit review: While it's tempting to dive into the modeling process, ignoring the existing body of knowledge is perilous. A lit review might have revealed that linear regression isn't the proper model to predict housing prices. It also might have improved variable selection. And spending time on a lit review at the outset can save a lot of time in the long run.
- Small sample size: Modeling something as complex as the housing market requires more than six years of data. Our small sample size is biased toward the events after the housing crisis and is not representative of long-term trends in the housing market.
- Multicollinearity: A careful observer would've noticed the warnings produced by our model regarding multicollinearity. We have two or more variables telling roughly the same story, overstating the value of each of the predictors.
- Autocorrelation: Autocorrelation occurs when past values of a predictor influence its current and future values. Careful reading of the Durbin-Watson score would've revealed that autocorrelation is present in our model.
In a future post, we'll attempt to resolve these flaws to better understand the economic predictors of housing prices. |
Research Open Access Published: Two kinds of bifurcation phenomena in a quartic system. Advances in Difference Equations, volume 2015, Article number: 29 (2015)
Abstract
In this paper, the center conditions and the conditions for bifurcations of limit cycles from a third-order nilpotent critical point in a class of quartic systems are investigated. Taking the system coefficients as parameters, explicit expressions for the first 11 quasi-Lyapunov constants are deduced. As a result, we prove that 11 or 12 small-amplitude limit cycles could be created from the third-order nilpotent critical point by two different perturbations.
Introduction
For a given family of polynomial differential equations, the number of Lyapunov constants needed to solve the center-focus problem is related to the so-called cyclicity of the point. Many works have been devoted to study this problem; see [1–3].
As far as the maximum number of small-amplitude limit cycles is concerned, there have been many results. Bifurcating from an elementary center or an elementary focus, one of the best-known results is \(M(2)=3\), which was solved by Bautin in 1952 [4]. For \(n=3\), a number of results have been obtained. Around an elementary focus, James and Lloyd [5] considered a particular class of cubic systems to obtain eight limit cycles in 1991, and the systems were reinvestigated a couple of years later by Ning et al. [6] to find another solution with eight limit cycles. Yu and Corless [7] constructed a cubic system and combined symbolic and numerical computations to show nine limit cycles in 2009, which was confirmed by purely symbolic computation with all real solutions obtained in 2013 [8]. Another cubic system was also recently constructed by Lloyd and Pearson [9] to show nine limit cycles with purely symbolic computation. Recently, Yu and Tian showed that there could be 12 limit cycles around a singular point in a planar cubic-degree polynomial system [10]. For \(n=4\), Huang gave an example of a quartic system with eight limit cycles bifurcated from a fine focus [11]. In recent years, bifurcations of limit cycles from degenerate critical points were investigated intensively. In particular, for nilpotent critical points there are also many results on limit cycles; see [12, 13].
In this paper, we consider another quartic system,
We will show that 11 or 12 limit cycles can be bifurcated from the origin by different perturbations.
The rest of this paper is organized as follows. In Section 2, some preliminary results in [14] which are needed in the following sections will be given. In Section 3, the linear recursive formulas in [14] are used to compute the first 11 quasi-Lyapunov constants and then obtain the sufficient and necessary conditions for a center. In Section 4, one kind of different bifurcation is discussed to confirm that 11 limit cycles can bifurcate from quartic systems. In Section 5, another kind of interesting bifurcation phenomenon is discussed to confirm that 12 limit cycles can bifurcate from quartic systems.
To perform the computations in this paper, we have used the computer algebra system MATHEMATICA 7.
Preliminary knowledge
For convenience, in this section we present some results taken from [15] for the center-focus problem of third-order nilpotent critical points in planar dynamical systems. We introduce some notions and results; for more details, see [15].
The origin of a system is a third-order monodromic critical point if and only if the system can be written in the following form of a real autonomous planar system:
Theorem 2.1 For any positive integer s and a given real number sequence, one can construct successively the terms with the coefficients \(c_{\alpha\beta}\) satisfying \(\alpha\neq0\) of the formal series, such that where \(M_{k}(x, y)\) is a kth-degree homogeneous polynomial of x, y for all k and \(s\mu=0\), \(c_{\alpha\beta}\) and \(\omega_{m}(s, \mu)\) are constants which will be determined by (2.4).
Equation (2.4) is linear with respect to the function M, so that we can easily obtain the following recursive formulas for calculating \(c_{\alpha\beta}\) and \(\omega_{m}(s, \mu)\). Theorem 2.2 and for \(m\geq1\), \(\omega_{m}(s, \mu)\) can be uniquely determined by the recursive formula, where Note in (2.7) that the following coefficients: have been set. The mth-order quasi-Lyapunov constant is defined as
Clearly, the recursive formulas in Theorem 2.2 are linear with respect to all \(c_{\alpha\beta}\). Therefore, it is convenient to develop programs for computing quasi-Lyapunov constants by using computer algebraic system such as MATHEMATICA.
Quasi-Lyapunov constants and center conditions
According to Theorem 2.1, for system (1.1), we can find a positive integer s and a formal series \(M(x,y)=x^{4}+y^{2}+o(r^{4})\), such that (2.4) holds. Applying the recursive formulas in Theorem 2.2 to carry out the calculations, we have
Setting \(\omega_{7}=\omega_{9}=0\) yields \(c_{03}=0\) and \(s=1\).
Furthermore, with \(s=1\), we obtain the following results.
Proposition 3.1 For system (1.1), one can determine successively the terms of the formal series \(M(x,y)=x^{4}+y^{2}+o(r^{4})\), such that where \(\mu_{m}\) is the mth-order quasi-Lyapunov constant at the origin of system (1.1), \(m=1,2,\ldots,11\). Theorem 3.1 For system (1.1), the first 11 quasi-Lyapunov constants at the origin are given by In the above expressions of \(\mu_{k}\), for each \(k=2,\ldots,11\), \(\mu_{1}=\mu_{2}=\cdots=\mu_{k-1}=0\) have been set.
Theorem 3.1 directly gives the following assertion.
Proposition 3.2 The first 11 quasi-Lyapunov constants at the origin of system (1.1) are zero if and only if one of the following conditions is satisfied:
whose vector field is symmetric with respect to the y-axis.
which has an analytic first integral:
which has an analytic first integral
From Proposition 3.2 we have the following theorem.
Theorem 3.2

The first kind of multiple bifurcation of limit cycles
Now, we will prove that the perturbed system of (1.1) can generate 11 limit cycles enclosing an elementary node at the origin of the unperturbed system (1.1) when the third-order nilpotent critical point \(O(0,0)\) is an 11th-order weak focus.
Using the fact \(\mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}=\mu_{6}=\mu_{7}=\mu_{8}=\mu_{9}=\mu_{10}=0\), \(\mu _{11}\neq0\), we obtain the following.
Theorem 4.1 The origin of system (1.1) is an 11th-order weak focus if and only if \(a_{22}b_{03}b_{12} (2a_{22}+3b_{13})\neq0\) and

Proof
Solving \(\mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}=\mu_{6}=\mu_{7}=\mu_{8}=\mu_{9}=\mu _{10}=0\), we obtain the following relations:
while \(\mu_{11}\neq0\) implies
Next, we study the perturbed system of (1.1), given by
When the conditions in (4.2) hold, using the relationships \(\mu _{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}=\mu_{6}=\mu_{7}=\mu_{8}=\mu_{9}=\mu_{10}=0\), we can determine the values of
Hence, when the conditions in Theorem 4.1 are satisfied, we have
Further, Theorem 3.1.3 in [15] shows that if the origin of system (4.3)\(|_{\delta=\varepsilon=0}\) is a weak focus of order m, then, when \(0<\delta,\varepsilon\ll1\), (4.3) has at most m limit cycles in a neighborhood of the origin. Namely, the following theorem holds.

Theorem 4.2 If the origin of system (1.1) is an 11th-order weak focus, then, for \(0<\delta,\varepsilon\ll1\), system (4.3) has exactly 11 small-amplitude limit cycles enclosing the origin \(O(0,0)\), which is an elementary node, in a small neighborhood of the origin; see Figure 1.

The second kind of multiple bifurcation of limit cycles
In this section, we consider an interesting bifurcation of limit cycles which is different from the first kind of bifurcation discussed in the previous section. Consider the following perturbed system of (1.1):
System (5.1) is called a double perturbed system of system (1.1). When \(0<|\varepsilon|\ll1\), system (5.1) has three real singular points in the neighborhood of the origin, namely \(O(0, 0)\) and \(P_{1,2}(\pm\varepsilon, 0)\).
By using the following transformation:
we can shift \(P_{1,2}(\pm\varepsilon, 0)\) of system (5.1) to the origin and obtain a new system in the form of
where \(\Phi(\xi,\eta,\varepsilon,\delta)\) and \(\Psi(\xi,\eta,\varepsilon,\delta)\) are power series in \((\xi,\eta,\varepsilon,\delta)\) with nonzero convergence radius. So \(P_{1,2}(\pm\varepsilon,0)\) of (5.1) are fine foci when \(\delta\neq0\), and weak foci or centers when \(\delta=0\). In particular, for \(\delta=0\) and corresponding to \(P_{1,2}(\pm \varepsilon,0)\), system (5.1) is changed into the following system:
The first Lyapunov constant at the origin for system (5.3) is given by
when \(\varepsilon\rightarrow0\).
Similarly, summarizing the above results yields the following theorem.
Theorem 5.1 If the origin of system (1.1) is an 11th-order weak focus, then, choosing proper coefficients in system (1.1) and \(0< |\varepsilon|\ll1\), there exist 12 limit cycles, distributed as one limit cycle enclosing each of \(P_{1,2}(\pm \varepsilon,0)\) and ten limit cycles enclosing both \((\varepsilon,0)\) and \((-\varepsilon,0)\), in the neighborhood of the origin; see Figure 2.
The following result is easy to obtain from the above discussion.
Theorem 5.2 If \(\delta=0\), \(b_{21}=0\), \(a_{12}=-3b_{03}\), \(a_{22}=-\frac{3b_{13}}{2}\), there exist three centers \((0,0)\) and \(P(\pm\varepsilon,0)\) in (5.1).
We have studied an interesting bifurcation which, unlike the first kind, can generate 12 limit cycles by perturbing the quartic system with a nilpotent critical point.
References

1. Gasull, A, Torregrosa, J: A new algorithm for the computation of the Lyapunov constants for some degenerated critical points. Proceedings of the Third World Congress of Nonlinear Analysts. Nonlinear Anal. 47, 4479-4490 (2001)
2. Álvarez, MJ, Gasull, A: Monodromy and stability for nilpotent critical points. Int. J. Bifurc. Chaos 15, 1253-1265 (2005)
3. Chavarriga, J, García, I, Giné, J: Integrability of centers perturbed by quasi-homogeneous polynomials. J. Math. Anal. Appl. 211, 268-278 (1997)
4. Bautin, N: On the number of limit cycles which appear with the variation of coefficients from an equilibrium position of focus or center type. Mat. Sb. (N.S.) 30, 181-196 (1952)
5. James, EM, Lloyd, NG: A cubic system with eight small-amplitude limit cycles. IMA J. Appl. Math. 47, 163-171 (1991)
6. Ning, S, Ma, S, Kwek, KH, Zheng, Z: A cubic system with eight small-amplitude limit cycles. Appl. Math. Lett. 7, 23-27 (1994)
7. Yu, P, Corless, R: Symbolic computation of limit cycles associated with Hilbert’s 16th problem. Commun. Nonlinear Sci. Numer. Simul. 14, 4041-4056 (2009)
8. Chen, C, Corless, R, Maza, M, Yu, P, Zhang, Y: A modular regular chains method and its application to dynamical system. Int. J. Bifurc. Chaos 23, 1350154 (2013)
9. Lloyd, N, Pearson, J: A cubic differential system with nine limit cycles. J. Appl. Anal. Comput. 2, 293-304 (2012)
10. Yu, P, Tian, Y: Twelve limit cycles around a singular point in a planar cubic-degree polynomial system. Commun. Nonlinear Sci. Numer. Simul. 19, 2690-2705 (2014)
11. Huang, WT, Chen, AY: Bifurcation of limit cycles and isochronous centers for a quartic system. Int. J. Bifurc. Chaos 23, 1350172 (2013)
12. Li, F, Liu, YR, Li, HW: Center conditions and bifurcation of limit cycles at three-order nilpotent critical point in a septic Lyapunov system. Math. Comput. Simul. 12, 2595-2607 (2011)
13. Li, F, Wang, M: Bifurcation of limit cycles in a quintic system with ten parameters. Nonlinear Dyn. 71, 213-222 (2013)
14. Liu, Y, Li, J: On third-order nilpotent critical points: integral factor method. Int. J. Bifurc. Chaos 21, 1293-1309 (2011)
15. Liu, Y, Li, J: Some Classical Problems About Planar Vector Fields. Science Press, Beijing (2010) (in Chinese)
Acknowledgements
Jianlong Qiu and Feng Li are supported by the National Natural Science Foundation of China (Grant Nos. 11201211, 61273012, 11371373) and AMEP of Linyi University.
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript. |
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials. A feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) as \(p(z)=(z-a)^n q(z)\), where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose that there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k,\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\), so \(f\) is identically zero on \(B_r(\gamma(t_1))\); but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_2))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\). Since \(z_0\) was chosen to be arbitrary, we must conclude that \(f\) is identically zero on all of \(G\). A contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have that \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is non-zero and holomorphic on \(B_\rho(a)\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
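As a concrete illustration (my example, not from the source), take \(f(z) = \sin z\) and \(a = 0\): the smallest \(n\) with \(f^{(n)}(0) \neq 0\) is \(n = 1\), since \(\sin(0) = 0\) but \(\cos(0) = 1\), and the factorization reads

```latex
\sin z \;=\; z - \frac{z^{3}}{3!} + \frac{z^{5}}{5!} - \cdots
       \;=\; (z-0)^{1} \underbrace{\left( 1 - \frac{z^{2}}{3!} + \frac{z^{4}}{5!} - \cdots \right)}_{g(z)},
\qquad g(0) = 1 \neq 0.
```

Here \(g\) is holomorphic on all of \(\mathbb{C}\) and nonvanishing at \(0\), exactly as the construction above requires.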
References
Berg, C. (2016). Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
I came across this little problem recently: if \(X\) is a topological space with exactly two components, and given an equivalence relation \(\sim\), what can we say about its quotient space \(X/{\sim}\)? It turns out that \(X/{\sim}\) is connected if and only if there exist \(x,y\in X\), with \(x\) and \(y\) in separate components, such that \(x\sim y\).

Suppose first that there exist \(x,y\in X\) in separate components such that \(x\sim y\). Let \(C_1\) and \(C_2\) be the two components of \(X\) and let \(p: X \to X/{\sim}\) be the natural projection. Since \(p\) is a quotient map it is surely continuous, and since the image of a connected space under a continuous function is connected, we have that \(p(C_1)\) is connected and so is \(p(C_2)\). But since \(x\sim y\) we have that \(p(C_1)\cap p(C_2)\neq \varnothing\), so \(X/{\sim}\) consists of a single component, because \[p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},\] as wanted.

To show the reverse implication, we use the contrapositive of the statement and show: if there are no \(x\in C_1\) and \(y\in C_2\) with \(x\sim y\), then \(X/{\sim}\) is not connected. Assume the hypothesis and note that \(p(C_1)\) and \(p(C_2)\) are then disjoint connected subspaces whose union equals all of \(X/{\sim}\) (since \(p\) is surjective). But then the images of \(C_1\) and \(C_2\) under \(p\) are two components of \(X/{\sim}\), showing that \(X/{\sim}\) is not connected. As wanted.
It’s soon exam time, so I’m practicing proofs in complex analysis. Right now that means Cauchy’s integral formula for \(n\)’th derivatives.
Let \(G\) be a domain of the complex numbers and \(f: G\to \mathbb{C}\) a holomorphic function. We first want to show that \(f\) can be expressed as a power series, such that $$f(z)=\sum^\infty_{n=0} a_n(z-a)^n$$ for some \(a\in\mathbb{C}\); let \(B_\rho(a)\) be the largest open ball at \(a\) contained in \(G\). We claim that $$a_n = \frac{1}{2\pi i} \oint\frac{f(z)}{(z-a)^{n+1}}dz.$$ By the Cauchy integral formula we have, for a fixed \(z_0\in B_\rho(a)\), $$f(z_0)=\frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz$$ and by elementary calculations we can, for \(z\in \partial B_r(a)\) with \(|z_0-a|<r<\rho\), write $$\frac{1}{z-z_0} = \frac{1}{z-a} \frac{1}{1-\frac{z_0 -a}{z-a}}=\frac{1}{z-a}\sum^\infty_{n=0} \left(\frac{z_0-a}{z-a}\right)^n,$$ and from above we then have $$\begin{split} f(z_0)& = \frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz \\ &=\frac{1}{2\pi i}\oint\sum^\infty_{n=0} \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\ &=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\&=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)}{(z-a)^{n+1}}dz\,(z_0-a)^n\\ &=\sum^\infty_{n=0}a_n (z_0-a)^n,\end{split}$$ as wanted. We see that \(f\) is given by a power series and thus infinitely complex differentiable, and the derivatives are $$f^{(n)}(a)=\frac{n!}{2\pi i}\oint\frac{f(z)}{(z-a)^{n+1}}dz,$$ as desired.
The observant reader will have noticed that I didn’t check that the sums were uniformly convergent, which is needed in order to switch the sum and integral signs, but this is an easy application of the Weierstraß \(M\)-test.
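For completeness, the bound needed for the \(M\)-test can be sketched as follows (assuming, as above, that the contour is \(\partial B_r(a)\) with \(|z_0-a| < r < \rho\)): for \(z\in\partial B_r(a)\) we have \(|z-a|=r\), so

$$\left|\frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}\right| \le \frac{M}{r}\, q^{n}, \qquad M=\sup_{z\in\partial B_r(a)}|f(z)|,\quad q=\frac{|z_0-a|}{r}<1,$$

and since \(\sum_n (M/r)\,q^n < \infty\), the series converges uniformly on \(\partial B_r(a)\), justifying the interchange of \(\sum\) and \(\oint\).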
References
Berg, C. (2016).
Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen. |
I need to determine whether these two integrals converge or not:
$$\int_0^\infty \sin ( \sin (x) )dx$$ $$\int_0^\infty \frac{\sin ( \sin (x) )} xdx$$
I don't want a proof that computes the integrals! (I don't even know whether that is possible.)
I received some suggestions. I'll show you what I've done:
My attempts: For the first one, I used the fact that if an infinite series converges, then its terms converge to zero. I argued that the same should hold for the integral: since $\sin ( \sin (x) )$ does not have any limit at infinity, the integral can't converge.
What do you think of my attempt? If it is okay, do you have any other idea to solve my problem?
Then, a second attempt, following a suggestion:
$$ k \in \mathbb Z, \; x \in \left[ - \frac \pi 2 + 2 k \pi,\ \frac \pi 2 + 2 k \pi\right] : \quad \left| \frac {x- 2k \pi }{2} \right| \leq | \sin(x) | \leq | x - 2k \pi |. $$ But then I don't know what to do... I thought that maybe we could use the squeeze theorem, but I don't see how from there.
Finally, for the second integral, I have no clue at all... It was suggested that I compare the integral for $x$ and for $x+\pi$, and indeed:
$$\int_0^\infty \frac{\sin ( \sin (x + \pi) )} { x + \pi } dx = \int_0^\infty -\frac{\sin ( \sin (x) )} { x + \pi } dx $$
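As a purely numerical sanity check (not a proof), I also tabulated the partial integrals $F(T)=\int_0^T$: the first one keeps oscillating without settling, while the second appears to stabilize. A rough sketch with NumPy (the trapezoid rule and the cutoff values are my own arbitrary choices):

```python
import numpy as np

def partial_integral(T, f, n=200001):
    """Trapezoid-rule approximation of the integral of f over (0, T]."""
    x = np.linspace(1e-9, T, n)  # start slightly above 0
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

f1 = lambda x: np.sin(np.sin(x))        # first integrand
f2 = lambda x: np.sin(np.sin(x)) / x    # second integrand (tends to 1 near 0)

# f1 integrates to 0 over each full period (odd symmetry about pi), so its
# partial integrals oscillate forever; those of f2 settle down as T grows.
for T in (50, 100, 200, 400):
    print(T, partial_integral(T, f1), partial_integral(T, f2))
```

Of course the oscillation of the first column and the stabilization of the second are only suggestive; an actual proof needs the periodicity argument (for divergence) and a Dirichlet-type argument (for convergence).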
thank you for reading me :) |
Given some integer $k>0$, there are $O(x/\log^2 x)$ primes $p \le x$ such that $p+2k$ is also prime. It has been conjectured at least since Hardy-Littlewood that $$ \pi_{2k}(x) \sim c_{2k}\int_2^x\frac{dt}{\log^2t} $$ with $$ c_{2k}=2C_2\prod_{p\mid k,\,p>2}\frac{p-1}{p-2} $$ where $\pi_{2k}(x)$ is the count of such primes and $C_2$ is the twin prime constant (A005597). I'm interested in the upper-bound part.
I've seen a number of papers giving explicit constants $c$ such that $\pi_2(x) \le c\int_2^xdt/\log^2t$ for large enough $x$. What are the best known constants for $\pi_{2k}$? I assume that they're no closer to the (conjectured) optimal constants than our best bounds for $\pi_2$ (Wu [1] is within a factor of about 3.4).
I hope there has been some paper making this explicit? It would be fantastic to have a reference proving an upper bound of, say, $10c_{2k}$, but I'm open to whatever can be found. (Perhaps some paper has an upper bound which is unbounded as $k\to\infty$ but finite for all $k$.)
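For context, the conjectured asymptotic is easy to check empirically at small heights. A short sketch (my own code, not from any of the papers; the step count for the numerical integral is arbitrary) comparing $\pi_{2k}(x)$ with the Hardy-Littlewood prediction:

```python
import math

def sieve(n):
    """Byte table of primality up to n."""
    is_p = bytearray([1]) * (n + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = b"\x00" * len(range(i*i, n + 1, i))
    return is_p

def pi_2k(x, k=1):
    """Number of primes p <= x with p + 2k also prime."""
    is_p = sieve(x + 2 * k)
    return sum(1 for p in range(2, x + 1) if is_p[p] and is_p[p + 2 * k])

C2 = 0.6601618158  # twin prime constant (A005597)

def hl_estimate(x, k=1, steps=200000):
    """Hardy-Littlewood prediction c_{2k} * integral_2^x dt/log^2 t,
    with the integral done by the midpoint rule."""
    c = 2 * C2
    for q in range(3, 2 * k + 1):  # odd primes dividing k
        if k % q == 0 and all(q % d for d in range(2, int(q**0.5) + 1)):
            c *= (q - 1) / (q - 2)
    h = (x - 2) / steps
    return c * sum(h / math.log(2 + (i + 0.5) * h) ** 2 for i in range(steps))

x = 10**5
for k in (1, 2, 3):
    print(k, pi_2k(x, k), round(hl_estimate(x, k), 1))
```

At $x=10^5$ the counts land within a few percent of the prediction, which is of course consistent with, but no substitute for, an explicit upper bound.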
[1] J. Wu, Chen's double sieve, Goldbach's conjecture, and the twin prime problem,
Acta Arithmetica 114:3 (2004), pp. 215–273. arXiv:0705.1652 [math.NT] |
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 901, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, TBA
February 21, TBA
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
March 7, TBA
March 14, TBA
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}:
$$ trace(\rho(g))/dim(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
My favorite connection in mathematics (and an interesting application to physics) is a simple corollary from Hodge's decomposition theorem, which states:
On a (compact and smooth) Riemannian manifold $M$ with its Hodge-de Rham-Laplace operator $\Delta,$ the space of $p$-forms $\Omega^p$ can be written as the orthogonal sum (relative to the $L^2$ product) $$\Omega^p = \Delta \Omega^p \oplus \cal H^p = d \Omega^{p-1} \oplus \delta \Omega^{p+1} \oplus \cal H^p,$$ where $\cal H^p$ are the harmonic $p$-forms, and $\delta$ is the adjoint of the exterior derivative $d$ (i.e. $\delta = \text{(some sign)} \star d\star$, where $\star$ is the Hodge star operator). (The theorem follows from the fact that $\Delta$ is a self-adjoint, elliptic differential operator of second order, and is therefore Fredholm of index $0$.)
From this it is now easy to prove that every nontrivial de Rham cohomology class $[\omega] \in H^p$ has a unique harmonic representative $\gamma \in \cal H^p$ with $[\omega] = [\gamma]$. Please note the equivalence $$\Delta \gamma = 0 \Leftrightarrow d \gamma = 0 \wedge \delta \gamma = 0.$$
Besides that this statement implies easy proofs for Poincaré duality and what not, it motivates an interesting viewpoint on electro-dynamics:
Please be aware that from now on we consider the Lorentzian manifold $M = \mathbb{R}^4$ equipped with the Minkowski metric (so $M$ is neither compact nor Riemannian!). We are going to interpret $\mathbb{R}^4 = \mathbb{R} \times \mathbb{R}^3$ as a foliation of spacelike slices and the first coordinate as a time function $t$. So every point $(t,p)$ is a position $p$ in space $\mathbb{R}^3$ at the time $t \in \mathbb{R}$. Consider the lifeline $L \simeq \mathbb{R}$ of an electron in spacetime. Because the electron occupies a position which can't be occupied by anything else, we can remove $L$ from the spacetime $M$.
Though the theorem of Hodge does not hold for Lorentzian manifolds in general, it holds for $M \setminus L \simeq \mathbb{R}^4 \setminus \mathbb{R}$. The only non-vanishing cohomology space is $H^2$, with dimension $1$ (this statement has nothing to do with the metric on this space, it's pure topology - we just cut out the lifeline of the electron!). And there is a harmonic generator $F \in \Omega^2$ of $H^2$ that solves $$\Delta F = 0 \Leftrightarrow dF = 0 \wedge \delta F = 0.$$ But we can write every $2$-form $F$ as a unique decomposition $$F = E + B \wedge dt.$$ If we interpret $E$ as the classical electric field and $B$ as the magnetic field, then $d F = 0$ is equivalent to the first two Maxwell equations and $\delta F = 0$ to the last two.
So cutting out the lifeline of an electron gives you automagically the electro-magnetic field of the electron as a generator of the non-vanishing cohomology class. |
With the rise of the market economy, consumer culture, mass media, and information technology, people have been thrust into a graphic epoch almost overnight: an artistic era led by visual graphics.
This article is the first attempt to define the concept of the "grace touch"; starting from various elements of a painting's appearance, such as composition, figure arrangement, and color, it gives a detailed analysis of subjectivity, the sense of order, the sense of form, and the sense of the plane.
In the 1980s, artists seeking to resolve the problems of realism reconsidered the positive elements of traditional ink-wash painting. At the same time, modern Western painting was being introduced, and the language of line was thus reformed once again.
The proof is based on a variant of Moser's method using time-dependent vector fields.
We present two-sided singular value estimates for a class of convolution-product operators related to time-frequency localization.
The Balian-Low theorem (BLT) is a key result in time-frequency analysis, originally stated by Balian and, independently, by Low, as: If a Gabor system $\{e^{2\pi imbt} \, g(t-na)\}_{m,n \in \mathbb{Z}}$
Gabor Time-Frequency Lattices and the Wexler-Raz Identity
Gabor time-frequency lattices are sets of functions of the form $g_{m \alpha , n \beta} (t) =e^{-2 \pi i \alpha m t}g(t-n \beta)$ generated from a given function $g(t)$ by discrete translations in time and frequency. |
I have 3 components, $r$, $\theta$ and $\phi$, for an electric field in spherical coordinates (and the $\phi$ component happens to be zero). Let's say I just want to convert the $r$ component into Cartesian coordinates; it looks like:
$$ -\frac{0.058125 \cos\theta\sin^2\theta}{r^3} $$
How do I convert this into Cartesian coordinates?
Edit: Sorry, maybe I should have explained that this expression is one component of a vector, which I got using $E = -\nabla V$.
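If all that's wanted is the scalar $r$-component rewritten in Cartesian variables, substituting $r=\sqrt{x^2+y^2+z^2}$, $\cos\theta = z/r$ and $\sin^2\theta = (x^2+y^2)/r^2$ does it. A sketch with SymPy (note: this only rewrites the component in Cartesian variables; to get the Cartesian vector components $E_x, E_y, E_z$ of the field one must additionally project onto the Cartesian unit vectors, e.g. $E_z = E_r\cos\theta - E_\theta\sin\theta$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

cos_theta = z / r                  # cos(theta) in Cartesian variables
sin2_theta = (x**2 + y**2) / r**2  # sin^2(theta) in Cartesian variables

# the r-component from the question, rewritten in Cartesian variables;
# simplifies to -0.058125*z*(x**2 + y**2)/(x**2 + y**2 + z**2)**3
E_r = -0.058125 * cos_theta * sin2_theta / r**3
print(sp.simplify(E_r))
```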
Revision as of 14:42, 14 February 2012

Computer Help

This is a guide to the computer facilities, services and software available at the Math department of the University of Wisconsin. Most of the facilities are for the department's faculty, graduate students and staff. Those who do not have Math department accounts may use the two kiosk PCs located at the north B1 entrance (facing Ingraham Hall) or the PCs located in the Kleene Math Library on the B2 level of Van Vleck Hall. Our older guide is here.

Contents: 1 Accounts and Policies, 2 E-Mail, 3 Facilities, 4 Printing, 5 Remote Access, 6 TeX/LaTeX, 7 Unix

Accounts and Policies

Account Set Up
Set up your e-mail client with the following parameters:
Account Type: IMAP
Incoming and Outgoing Mail Server: mailhost.math.wisc.edu
IMAP Prefix: (leave blank)
Incoming Server Port: 993
Outgoing Server Port: 465
Check Use SSL

Dealing With Spam
All Math Department mail accounts are configured with spam filtering. Messages that are clearly spam are automatically placed into a folder named 'spam' within your account. Messages placed in this folder are automatically deleted after 15 days. Users are encouraged to check their spam folder periodically to ensure that valid messages (known as ham) are not being sent to this folder.
You can teach the spam filter to be more efficient by following the directions on the page titled, Teaching the spam filter to be more efficient.
Vacation Mail
Here is how to set up vacation notification for your math e-mail account. First, login to login.math.wisc.edu, then type cd /auto/mail/YOURNAME where YOURNAME is your username. Then add the following lines to the end of your .procmailrc file:
:0 c
| /usr/bin/vacation YOURNAME
where YOURNAME is your username. Edit/create the file
/auto/mail/YOURNAME/.vacation.msg
and put in there the message you want sent out while you are on vacation.
When you return from your vacation, just delete or comment out these lines in your .procmailrc file by placing a # at the beginning of each line.
:0 c
| /usr/bin/vacation YOURNAME
This will allow our spam filters to process your incoming mail before it gets to your mailbox and send your vacation message to only legitimate addresses.
Forwarding your e-mail

We do not recommend forwarding all your math e-mail to a non-UW account because of the following issues:
1. Possible violation of HIPAA rules on confidentiality of student records. For example, if you forward your e-mail to your Yahoo or Gmail account, what guarantees do you have that this mail will not be indexed or otherwise disseminated to third parties?
2. Record retention issues related to student grades, etc.
However, if you are leaving the department, you may want this information. To forward filtered mail to a new address, you must first ssh to login.math.wisc.edu and put lines like this in your /auto/mail/yourusername/.procmailrc file:

:0:
!myaddress@somewhere.com
(of course you should replace the last line with your forwarding address).
To forward mail to another account while keeping a copy on our server, put a line like this in your /auto/mail/yourusername/.forward file:

\myusername, myaddress@somewhere.com

Leaving the Department
If you leave the Math Department, your account will be deleted on September 1 after your departure. You will be notified about this on August 1. Here are some things you may want to do to prepare for this
Make a backup of your mail. This can be done by simply using the export feature of many mail programs. You can also use md2mb.
Start forwarding your e-mail to another account. To avoid getting spam, login to login.math.wisc.edu, then follow these steps:
cd /auto/mail/YOURUSERNAME
Edit .procmailrc with your favorite text editor (e.g. vim, emacs or pico).
Insert these lines at the end of .procmailrc:
:0:
!myaddress@somewhere.com (your forwarding address)
Notify the computer staff if you would like forwarding to continue (up to a year after your departure) after your account is deleted. Give them your forwarding address.

E-mail Web Forms
You can create a web page on your web site to allow visitors to send you e-mail messages. With an e-mail form, you can allow users to register for conferences, send feedback, or answer an on-line poll. With additional tools supplied by the Math Department, you can import these messages into a spreadsheet or database.
Facilities
The facilities and equipment described below are for use by UW Math department faculty and graduate students on the UW Madison Campus and, preferably, in Van Vleck hall.
Mobile Computers and Projectors
Instructors may borrow laptop computers and projectors for demonstrations in any Van Vleck classroom. This equipment is kept in the Math Library on the B2 classroom level. You may check them out for up to 4 hours using your UW ID card.
Ceiling Mounted Projectors
Classrooms B102, B107, B231 and B223 and the 901 seminar room each have a ceiling mounted projector. These projectors provide better displays than the mobile units. They can be used with a laptop computer. If you want to reserve one of these rooms, contact Sharon Paulson at paulson@math.wisc.edu. Keep in mind, though, that they're heavily booked and usually only available at the beginning or end of the day. The Math Department's computer staff maintain the projectors in 901 and B107. All the others are maintained by the UW Physical Plant. Please contact Derek Dombrowski about them. You will need an access code to use them and a key if you want to use the document camera or microphone with them. Here is Derek's contact information:
Derek Dombrowski
373A Bascom Hall
ddombrowski@fpm.wisc.edu
(608) 265-9697
(608) 516-5993

Computer Classroom
B107 is an instructional computer lab featuring 21 speedy (2.8 GHz, 2 GB RAM) Windows PCs --each PC has network access and Maple, Matlab, and MSOffice programs-- a ceiling-mounted projector connected to the instructor's computer, and an HP B&W Laserjet printer with duplexer. To reserve this room, please contact Joan Wendt at paulson@math.wisc.edu.
You can check out the key for the room from the Math Library.
Digital Camcorder
There is a Canon Vixia HG20 Cam Corder in the Math Library which Math faculty and Graduate students can check out to film Math Events such as conferences, PhD defenses, etc. Use your faculty/staff ID card to check it out.
Printing
Math Dept Printers
Location    Printer Name    Printer Type
3rd hall    3               Ricoh MPC 4501
4th hall    4               Ricoh MPC 4501
5th hall    5               Ricoh MPC 4501
6th hall    6               Ricoh MPC 4501
7th hall    7               Ricoh MPC 4501
8th hall    8               Ricoh MPC 4501
101B VV     a               HPLJ 4300
B127 VV     b               HPLJ 4300
During the summer of 2011, the department placed new Ricoh MPC 4501 printer/copier/scanners on floors 3-8. We also switched from LPRNG to CUPS (the Common Unix Printing System) on our unix print servers. What this means in practical terms is that users should become familiar with the System V lp commands. Previously, we used the Berkeley lpr command, and some of these commands will still work.
Here are the page charges for printing and copying:

Printing: 250 free B&W pages per month; 4 cents per B&W page beyond the free allowance; color pages cost 20 cents per page with no free allowance.
Copying: 200 free B&W pages per month; 4 cents per B&W page beyond the free allowance; 20 cents per color page with no free allowance.
Scanning: There is no charge for scanning a document and e-mailing it to yourself. However, users should not scan copyrighted material.

Supplies
If the printers run out of paper, please get more paper from the Copy Center on the second floor and place it in the printers. If you are unsure how to do this, ask the computer staff for assistance. For assistance with other problems (no toner, paper jams, etc. ) see Hieu Nguyen in 507 (for issues with the Ricoh copiers) and Sharon Paulson in the receptionists' office (for help with the printers in B127 and 101b).
See the cups guide for more detailed information on printing with cups. Click on the links below to learn how to use each function with the Ricoh copiers.
Only people with computer accounts in the UW Math Department will be allowed to use the Van Vleck Ricoh copiers. If you have a math account, you will receive a code to use for copying. These codes will be mailed out once a year in September after old accounts are deleted and new ones added.
NOTE: if you forget your copier code, login to one of the math department linux PCs and type whatsmypin.

1. Your copier code is only required for copying. Although the default display shows the copier login, you do not have to log in in order to print or scan. Just push the buttons at the left to select the scanner or printer function.
2. Your code can be used on any of the copiers on floors 3-8. Do not use the copiers on the second floor. They are reserved for the administrative staff.
3. After you have finished copying, do not touch the display. Your login will time out after 60 seconds.
4. Everyone with a math dept account is allowed 250 free copies and 250 free printed pages per month (on the union of all the copiers on floors 3-8). At the end of the month, these are zeroed out. You cannot carry over unused pages to the next month. Printing and copying charges are described at the URL above. Note that you will be charged for each side of a duplex printed page. You may also be charged more for larger pages.
5. How to create a multi-page PDF document: Most people will want to create a multipage PDF scan of their document (instead of the default, which is a single-page TIFF document). To do this, press the SCANNER button to the left of the display, select SEND FILE TYPE/NAME in the left hand column of the display, then select MULTI-PAGE -> PDF.

Using the Ricoh with Linux (command line printing)
Using a Ricoh Printer on a Macintosh
Using a Ricoh Printer on a PC
Troubleshooting

Remote Access

TeX/LaTeX
TeX and LaTeX are supported on the Math Department computers. To learn more about Typesetting with LaTeX we recommend the following site.
[math] \int_{[0, 1]^n} \left| \sum_{k = 1}^n \mathrm{e}^{2 \pi \mathrm{i} \, x_k} \right|^s \mathrm{d}\boldsymbol{x} [/math]
Unix

Manipulating PDF files
The pdftk toolkit provides several useful tools for manipulating PDF files without using Adobe Acrobat Pro. Here are some examples:
1. This command will split off the first 15 pages of the file NSFProposal.pdf and save it to 'front.pdf'. Substituting 'cat 16-end' for 'cat 1-15' will save the second half of the file.
pdftk NSFProposal.pdf cat 1-15 output front.pdf
2. This command will merge two (or more) pdf files:
pdftk 1.pdf 2.pdf 3.pdf cat output 123.pdf
You can find more examples at pdftk-examples |
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.
On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who
doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).
However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.
So,
has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?
Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?
Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
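For what it's worth, the per-qubit success probability $5/8$ of both strategies is easy to verify numerically. A small sketch (my own illustration, not from any paper): the counterfeiter measures each qubit in a fixed orthonormal basis and outputs two copies of the observed basis state; averaging over the four money states gives the per-qubit probability that both copies pass verification.

```python
import numpy as np

# the four Wiesner money states: |0>, |1>, |+>, |->
states = [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([1.0, 1.0]) / np.sqrt(2),
          np.array([1.0, -1.0]) / np.sqrt(2)]

def measure_and_copy(basis):
    """Average probability (over the four states) that BOTH copies pass
    the bank's verification, when the counterfeiter measures in `basis`
    (rows = orthonormal measurement vectors) and emits |b>|b>."""
    total = 0.0
    for psi in states:
        for b in basis:
            p_outcome = abs(b @ psi) ** 2     # probability of outcome b
            p_pass = abs(psi @ b) ** 2        # one copy passes verification
            total += p_outcome * p_pass ** 2  # both copies pass independently
    return total / len(states)

t = np.pi / 8
z_basis = np.eye(2)                            # strategy (b)
tilted = np.array([[np.cos(t), np.sin(t)],     # strategy (a), the pi/8 basis
                   [np.sin(t), -np.cos(t)]])

print(measure_and_copy(z_basis), measure_and_copy(tilted))  # both 0.625
```

Both come out to exactly $5/8$ per qubit, matching the closed-form computation $\sum_b |\langle b|\psi\rangle|^6$ averaged over the four states.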
This post has been migrated from (A51.SE)
Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
monty hall problem
this is a fun problem, especially for those that have not seen it before.
imagine you are a contestant on the game show,
Let's Make a Deal, with host monty hall. in the game you are playing, you are presented with three doors. behind one of the doors is a prize and behind the other two are goats (non-prizes, unless you really like goats). you, the contestant, are then to select one of the doors. after you have made your choice, monty hall then opens one of the other doors revealing a goat and asks you whether you would like to keep the door you originally selected or if you would like to change to the other door. what should you do?
when i taught this to fourth grade students, we played this game a number of times for them to get a feel for how it works. they got the hang of it very quickly. after a while, i asked what strategy did they adopt to win. most knew that the game did not really begin until monty hall revealed a door, but almost everyone thought the game was an even chance of winning or losing.
to help them a little bit, we kept a tally of who switched doors and won, who switched doors and lost, who stayed and won, and who stayed and lost. they had expected all the numbers to come out about equal, but what they found was that amongst people that switched doors, more of them won than lost, and amongst those that stayed, more of them lost than won.
at this point, they were starting to catch on but there were still a few students that were not convinced that after removing a door the odds were not even. so i changed the game a little. this time, instead of three doors, there were 1000 doors. after picking a door, monty hall would remove 998 of the doors which did not contain a prize. this left only two doors. at this point, the students knew that switching was the better strategy. they explained that when picking the first door, there was a 1/1000 chance of selecting the door with the prize, and a 999/1000 chance it was in one of the other doors. when we removed 998 doors, the odds did not change to 50%; instead, the one door remaining assumed the 999/1000 chance of containing the prize.
after this change in the game, all the students understood the original game. they correctly found that there was a 2/3 chance the prize was in the other door.
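a quick simulation also makes the 2/3 advantage concrete. this is an illustrative sketch (the helper name play is mine, not from the lesson):

```python
import random

def play(switch):
    """simulate one round of the three-door game; `switch` says whether
    the contestant changes doors after monty reveals a goat."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # monty opens a door that is neither the contestant's pick nor the prize
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

random.seed(0)
trials = 100_000
wins_switch = sum(play(True) for _ in range(trials)) / trials
wins_stay = sum(play(False) for _ in range(trials)) / trials
# wins_switch comes out near 2/3, wins_stay near 1/3
```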
to do this more precisely, we set up a few things. let $ \Omega = \{ (w_1, w_2) \mid w_i \in \{1,2,3\}, w_1 \neq w_2 \} $ where $w_1$ is the door the prize is behind and $w_2$ is the door monty hall opens. let $A_i = \{ (w_1, w_2) \mid w_1 = i \}$ be the event the prize is behind door $i$. the door monty hall picks depends on the door we pick, so let $D = \{ (w_1, w_2) \mid w_2 = 3\}$ be the event that monty hall opens door 3, and let $E = \{ (w_1, w_2) \mid w_2 = 1\}$ be the event that monty hall opens door 1, which he cannot do if door 1 is ours. we will need set $E$ later.
we want to calculate |
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
Introduction tutorial¶ The Task¶
MNIST is a dataset which consists of 70,000 handwritten digits. Each digit is a grayscale image of 28 by 28 pixels. Our task is to classify each of the images into one of the 10 categories representing the numbers from 0 to 9.
The Model¶
We will train a simple MLP with a single hidden layer that uses the rectifier activation function. Our output layer will consist of a softmax function with 10 units; one for each class. Mathematically speaking, our model is parametrized by \(\mathbf{\theta}\), defined as the weight matrices \(\mathbf{W}^{(1)}\) and \(\mathbf{W}^{(2)}\), and bias vectors \(\mathbf{b}^{(1)}\) and \(\mathbf{b}^{(2)}\). The rectifier activation function is defined as

\(\mathrm{ReLU}(\mathbf{x})_i = \max(0, \mathbf{x}_i)\)

and our softmax output function is defined as

\(\mathrm{softmax}(\mathbf{x})_i = \frac{e^{\mathbf{x}_i}}{\sum_j e^{\mathbf{x}_j}}\)

Hence, our complete model is

\(f(\mathbf{x}) = \mathrm{softmax}(\mathbf{W}^{(2)} \mathrm{ReLU}(\mathbf{W}^{(1)} \mathbf{x} + \mathbf{b}^{(1)}) + \mathbf{b}^{(2)})\)
Since the output of a softmax sums to 1, we can interpret it as a categorical probability distribution: \(f(\mathbf{x})_c = \hat p(y = c \mid \mathbf{x})\), where \(\mathbf{x}\) is the 784-dimensional (28 × 28) input and \(c \in \{0, ..., 9\}\) one of the 10 classes. We can train the parameters of our model by minimizing the negative log-likelihood i.e. the cross-entropy between our model’s output and the target distribution. This means we will minimize the sum of

\(l(f(\mathbf{x}), y) = -\sum_{c=0}^{9} \mathbf{1}_{(y = c)} \log f(\mathbf{x})_c\)

(where \(\mathbf{1}\) is the indicator function) over all examples. We use stochastic gradient descent (SGD) on mini-batches for this.
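As an aside, the model and loss above can be sketched in plain NumPy. This is only an illustration of the math, not Blocks code; all names and the random test batch are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
# Parameters theta = (W1, b1, W2, b2); shapes match the 784-100-10 MLP.
W1 = rng.normal(0.0, 0.01, (784, 100)); b1 = np.zeros(100)
W2 = rng.normal(0.0, 0.01, (100, 10));  b2 = np.zeros(10)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)                  # rectifier
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)          # softmax

def nll(x, y):
    # Cross-entropy: mean of -log p(y | x) over the mini-batch.
    p = forward(x)
    return -np.log(p[np.arange(len(y)), y]).mean()

x = rng.random((256, 784))
y = rng.integers(0, 10, size=256)
loss = nll(x, y)   # close to log(10) at this near-zero initialization
```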
Building the model¶
Blocks uses “bricks” to build models. Bricks are parametrized Theano operations. You can read more about them in the Building with bricks tutorial.
Constructing the model with Blocks is very simple. We start by defining the input variable using Theano.
Tip
Want to follow along with the Python code? If you are using IPython, enable the doctest mode using the special %doctest_mode command so that you can copy-paste the examples below (including the >>> prompts) straight into the IPython interpreter.
>>> from theano import tensor
>>> x = tensor.matrix('features')
Note that we picked the name 'features' for our input. This is important, because the name needs to match the name of the data source we want to train on. MNIST defines two data sources: 'features' and 'targets'.
For the sake of this tutorial, we will go through building an MLP the long way. For a much quicker way, skip right to the end of the next section. We begin with applying the linear transformations and activations.
We start by initializing bricks with certain parameters e.g. input_dim. After initialization we can apply our bricks on Theano variables to build the model we want. We’ll talk more about bricks in the next tutorial, Building with bricks.
>>> from blocks.bricks import Linear, Rectifier, Softmax
>>> input_to_hidden = Linear(name='input_to_hidden', input_dim=784, output_dim=100)
>>> h = Rectifier().apply(input_to_hidden.apply(x))
>>> hidden_to_output = Linear(name='hidden_to_output', input_dim=100, output_dim=10)
>>> y_hat = Softmax().apply(hidden_to_output.apply(h))
Loss function and regularization¶
Now that we have built our model, let’s define the cost to minimize. For this, we will need the Theano variable representing the target labels.
>>> y = tensor.lmatrix('targets')
>>> from blocks.bricks.cost import CategoricalCrossEntropy
>>> cost = CategoricalCrossEntropy().apply(y.flatten(), y_hat)
To reduce the risk of overfitting, we can penalize excessive values of the parameters by adding an \(L2\)-regularization term (also known as weight decay) to the objective function:

\(l(f(\mathbf{x}), y) = -\sum_{c=0}^{9} \mathbf{1}_{(y = c)} \log f(\mathbf{x})_c + \lambda_1 \|\mathbf{W}^{(1)}\|^2 + \lambda_2 \|\mathbf{W}^{(2)}\|^2\)
To get the weights from our model, we will use Blocks’ annotation features (read more about them in the Managing the computation graph tutorial).
>>> from blocks.roles import WEIGHT
>>> from blocks.graph import ComputationGraph
>>> from blocks.filter import VariableFilter
>>> cg = ComputationGraph(cost)
>>> W1, W2 = VariableFilter(roles=[WEIGHT])(cg.variables)
>>> cost = cost + 0.005 * (W1 ** 2).sum() + 0.005 * (W2 ** 2).sum()
>>> cost.name = 'cost_with_regularization'
Note
Note that we explicitly gave our variable a name. We do this so that when we monitor the performance of our model, the progress monitor will know what name to report in the logs.
Here we set \(\lambda_1 = \lambda_2 = 0.005\). And that’s it! We now have the final objective function we want to optimize.
But creating a simple MLP this way is rather cumbersome. In practice, we would have used the MLP class instead.
>>> from blocks.bricks import MLP
>>> mlp = MLP(activations=[Rectifier(), Softmax()], dims=[784, 100, 10]).apply(x)
Initializing the parameters¶
When we constructed the Linear bricks to build our model, they automatically allocated Theano shared variables to store their parameters in. All of these parameters were initially set to NaN. Before we start training our network, we will want to initialize these parameters by sampling them from a particular probability distribution. Bricks can do this for you.
>>> from blocks.initialization import IsotropicGaussian, Constant
>>> input_to_hidden.weights_init = hidden_to_output.weights_init = IsotropicGaussian(0.01)
>>> input_to_hidden.biases_init = hidden_to_output.biases_init = Constant(0)
>>> input_to_hidden.initialize()
>>> hidden_to_output.initialize()
We have now initialized our weight matrices with entries drawn from a normal distribution with a standard deviation of 0.01.
>>> W1.get_value()
array([[ 0.01624345, -0.00611756, -0.00528172, ..., 0.00043597, ...
Training your model¶
Besides helping you build models, Blocks also provides the main other features needed to train a model. It has a set of training algorithms (like SGD), an interface to datasets, and a training loop that allows you to monitor and control the training process.
After having configured Fuel, you can load the dataset.
>>> from fuel.datasets import MNIST
>>> mnist = MNIST(("train",))
Datasets only provide an interface to the data. For actual training, we will need to iterate over the data in minibatches. This is done by initiating a data stream which makes use of a particular iteration scheme. We will use an iteration scheme that iterates over our MNIST examples sequentially in batches of size 256.
>>> from fuel.streams import DataStream
>>> from fuel.schemes import SequentialScheme
>>> from fuel.transformers import Flatten
>>> data_stream = Flatten(DataStream.default_stream(
...     mnist,
...     iteration_scheme=SequentialScheme(mnist.num_examples, batch_size=256)))
The training algorithm we will use is straightforward SGD with a fixed learning rate.
>>> from blocks.algorithms import GradientDescent, Scale
>>> algorithm = GradientDescent(cost=cost, parameters=cg.parameters,
...     step_rule=Scale(learning_rate=0.1))
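The Scale(learning_rate=0.1) step rule amounts to the plain SGD update \(\mathbf{\theta} \leftarrow \mathbf{\theta} - 0.1 \, \nabla_{\mathbf{\theta}} \mathrm{cost}\). A minimal NumPy sketch of that update (names and values are illustrative, not Blocks internals):

```python
import numpy as np

def sgd_step(theta, grad, learning_rate=0.1):
    # One gradient-descent step: theta <- theta - learning_rate * grad.
    return theta - learning_rate * grad

theta = np.array([1.0, -2.0])   # illustrative parameter vector
grad = np.array([0.5, -0.5])    # illustrative gradient of the cost
theta = sgd_step(theta, grad)   # -> array([ 0.95, -1.95])
```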
During training we will want to monitor the performance of our model on a separate set of examples. Let’s create a new data stream for that.
>>> mnist_test = MNIST(("test",))
>>> data_stream_test = Flatten(DataStream.default_stream(
...     mnist_test,
...     iteration_scheme=SequentialScheme(
...         mnist_test.num_examples, batch_size=1024)))
In order to monitor our performance on this data stream during training, we need to use one of Blocks’ extensions, namely the DataStreamMonitoring extension.
>>> from blocks.extensions.monitoring import DataStreamMonitoring
>>> monitor = DataStreamMonitoring(
...     variables=[cost], data_stream=data_stream_test, prefix="test")
We can now use the MainLoop to combine all the different bits and pieces. We use two more extensions to make our training stop after a single epoch and to make sure that our progress is printed.
>>> from blocks.main_loop import MainLoop
>>> from blocks.extensions import FinishAfter, Printing
>>> main_loop = MainLoop(data_stream=data_stream, algorithm=algorithm,
...     extensions=[monitor, FinishAfter(after_n_epochs=1), Printing()])
>>> main_loop.run()
-------------------------------------------------------------------------------
BEFORE FIRST EPOCH
-------------------------------------------------------------------------------
Training status:
     epochs_done: 0
     iterations_done: 0
Log records from the iteration 0:
     test_cost_with_regularization: 2.34244632721
-------------------------------------------------------------------------------
AFTER ANOTHER EPOCH
-------------------------------------------------------------------------------
Training status:
     epochs_done: 1
     iterations_done: 235
Log records from the iteration 235:
     test_cost_with_regularization: 0.664899230003
     training_finish_requested: True
-------------------------------------------------------------------------------
TRAINING HAS BEEN FINISHED:
-------------------------------------------------------------------------------
Training status:
     epochs_done: 1
     iterations_done: 235
Log records from the iteration 235:
     test_cost_with_regularization: 0.664899230003
     training_finish_requested: True
     training_finished: True
Assuming that $m$ is a multiset of bitstrings where all bitstrings have the same length, let $D(m)$ denote the number of distinct elements in $m$. That is, $D(m)$ is the cardinality of the underlying set of $m$. For example, if $$m = \{00, 10, 11, 10, 11\},$$ then $D(m)=3$.
Let $F(x) = \text{Keccak-}f[1600](x)$, the block permutation function of SHA-3 (for $64$-bit words). We can define the following notation: $$\begin{array}{l} {F^0(x)} = x,\\ {F^1(x)} = F(x),\\ {F^2(x)} = F(F(x)),\\ {F^3(x)} = F(F(F(x))),\\ \ldots \end{array}$$
Assuming that $A$ and $B$ are two different natural numbers greater than or equal to $0$, let $G_{A, B}(x)$ denote a function defined as $$G_{A, B}(x) = F^A(x) \oplus F^B(x),$$
where $x$ denotes a $1600$-bit input and $\oplus$ denotes an XOR operation.
Assuming that $L = 2^{1600}$, let $S_i$ denote an $i$-th bitstring from a set of all possible $1600$-bit inputs:
$$\begin{array}{l} S_1 = 0^{1600},\\ S_2 = 0^{1599}1,\\ \ldots,\\ S_{L-1} = 1^{1599}0,\\ S_L = 1^{1600}.\\ \end{array}$$
Let $A$ and $B$ denote two
arbitrarily large, but different natural numbers (one of them is allowed to be equal to $0$). For example, $$A = 0, B = 1$$ or $$A = 2^{3456789}, B = 9^{876543210}$$ are valid pairs.
Then
$$\begin{array}{l} S_{A, B}[i] = G_{A, B}(S_i),\\ C_{A, B} = \{S_{A, B}[1], S_{A, B}[2], \ldots, S_{A, B}[L-1], S_{A, B}[L]\}.\\ \end{array}$$
The question: can we assume that $D(C_{A, B})$ is expected to be approximately equal to $$(1-1/e) \times 2^{1600} \approx 2.810560755\ldots \times 10^{481}$$ for all (or almost all) pairs of $A$ and $B$?
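For context, the $1-1/e$ factor is the heuristic for a uniformly random function: drawing $N$ values uniformly from a set of size $N$ hits a fraction $1-(1-1/N)^N \to 1-1/e$ of the set on average. A scaled-down simulation (with $N = 10^5$ standing in for $2^{1600}$, and a random function standing in for $G_{A,B}$; purely illustrative) shows the effect:

```python
import math
import random

# Stand-in for G_{A,B}: a uniformly random function on a (much smaller) domain.
random.seed(1)
N = 100_000
outputs = [random.randrange(N) for _ in range(N)]

distinct_fraction = len(set(outputs)) / N
expected = 1 - (1 - 1 / N) ** N        # tends to 1 - 1/e as N grows
assert abs(distinct_fraction - expected) < 0.01
assert abs(expected - (1 - 1 / math.e)) < 1e-4
```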
This paper discusses the notion of horizontal dependency in sequences of first-kind dependent categorical random variables. We examine the necessary and sufficient conditions for a sequence of first-kind dependent categorical random variables to be identically distributed when the conditional probability distribution of subsequent variables after the first are permuted from the identity permutation used in previous works. To read the full paper with all proofs, download here.

Introduction

Editor’s note: You can view the video explaining first-kind dependence in more detail here.
In [2], Traylor extended the notion of first-kind dependence first detailed by Korzeniowski in [1] to categorical random variables. Suppose we take a sequence of categorical random variables \epsilon = (\epsilon_{1},\epsilon_{2},\ldots, \epsilon_{n},\ldots) with K categories \{1,2,...,K\} with P(\epsilon_{1} = j) = p_{j}, j = 1,...,K, with \sum_{j=1}^{K}p_{j} = 1. Define the
dependency coefficient \delta \in [0,1]. Korzeniowski [1] and Traylor [2] defined the following quantities: \begin{aligned}p_{j}^{+} &:= p_{j} + \delta(\sum_{i\neq j}p_{i}) = p_{j} + \delta(1-p_{j})\\p_{j}^{-} &:= p_{j}-\delta p_{j} \end{aligned}
for j = 1,2,\ldots, K where we have the following identities.
p_{j}^{+} + \sum_{i \neq j}p_{i}^{-} = 1 \text{ for } j = 1,2,\ldots, K
Under first-kind dependence, \begin{aligned}P(\epsilon_{i} = j | \epsilon_{1} = j) &= p_{j}^{+} \\P(\epsilon_{i} = j | \epsilon_{1} \neq j)&= p_{j}^{-}\end{aligned}
for j = 1,2,\ldots K and i \geq 2. Figure 1 gives an illustration of standard first-kind dependence for a sequence of categorical random variables with 3 categories. The binary tree shows the allocation of probability mass at each new variable in the sequence given the previous outcomes. Note that standard first-kind dependence weights the probability of subsequent outcomes
in favor of the outcome of the first random variable, and away from the others. As an explicit example, the probability that \epsilon_{3} = 2 is p_{2}^{+} if \epsilon_{1} = 2, but p_{2}^{-} if \epsilon_{1} = 1 or 3.
Traylor [2] proved that P(\epsilon_{i} = j) = p_{j} for all i = 1,2,\ldots and for all j = 1,2,\ldots, K. That is, despite the dependence of subsequent categorical variables in the sequence on the first, all variables remain identically distributed. Traylor and Hathcock [3] also extended this notion to other types of
vertical dependence by creating a class of vertical dependency generators \mathscr{C} that generate other dependency structures in addition to first-kind dependence, such as sequential dependence. The class is defined as \mathscr{C} = \{\alpha : \mathbb{N}_{\geq 2} \to \mathbb{N} \,:\, \alpha(n) < n \text{ for all } n\}, so that for every n there exists some j \in \{1,\ldots,n-1\} with \alpha(n) = j.
The generating function for first-kind dependence is \alpha(n) \equiv 1, showing that every categorical random variable, starting with the second, assigns conditional probabilities based on the outcome of the first.
Regardless of vertical dependency structure, we have thus far assumed that we weight in favor of the outcome of \alpha(n). That is, if \epsilon_{\alpha(n)} = j, then P(\epsilon_{n} = j|\epsilon_{\alpha(n)} = j) = p_{j}^{+}, and P(\epsilon_{n} = j|\epsilon_{\alpha(n)} \neq j) = p_{j}^{-}. Under these conditions, any dependent categorical sequence generated by a function \alpha \in \mathscr{C} is identically distributed but dependent, as proven in [3].
This paper examines permuted weights and derives the conditions under which first-kind dependent categorical random variables remain identically distributed under permuted weights. We define permuted weighting in the next section.
Permuted Weighting
Suppose we take a permutation \sigma of \{1,2,...,K\}, so that \sigma : i \mapsto \sigma_{i}, i = 1,...,K. Now suppose that we take a first-kind dependent sequence of categorical random variables. Now, we define the permuted weights by modifying the conditional probabilities. For i \geq 2 \begin{aligned}P(\epsilon_{i} = j | \epsilon_{1} = \sigma^{-1}_{j}) &= p_{j}^{+} \\P(\epsilon_{i} = j | \epsilon_{1} \neq \sigma^{-1}_{j}) &= p_{j}^{-}\end{aligned}
for j = 1,2,\ldots, K. The vertical dependency (first-kind dependency in this case) is unchanged, in that \alpha(n) \equiv 1, but how those specific probability masses are assigned has been permuted.
Example: Suppose K=3, and let \sigma = (132). Then for i\geq 2,

\begin{aligned}P(\epsilon_{i} = 1 | \epsilon_{1} = 1) &= p_{1}^{-} & P(\epsilon_{i} = 1 | \epsilon_{1} = 2) &= p_{1}^{+}& P(\epsilon_{i} = 1 | \epsilon_{1} = 3) &= p_{1}^{-}\\ P(\epsilon_{i} = 2 | \epsilon_{1} = 1) &= p_{2}^{-} &P(\epsilon_{i} = 2 | \epsilon_{1} = 2) &= p_{2}^{-}& P(\epsilon_{i} = 2 | \epsilon_{1} = 3) &= p_{2}^{+}\\ P(\epsilon_{i} = 3 | \epsilon_{1} = 1) &= p_{3}^{+} & P(\epsilon_{i} = 3 | \epsilon_{1} = 2) &= p_{3}^{-}& P(\epsilon_{i} = 3 | \epsilon_{1} = 3) &= p_{3}^{-}\\ \end{aligned}
The tree for first-kind dependence with weights according to \sigma = (132) is given in Figure 2.
The following lemma gives the unconditional probability distribution of any categorical random variable \epsilon_{i}, i \geq 2 in a permuted first-kind dependent sequence. Note that P(\epsilon_{1} = j) = p_{j} by definition.
Lemma: Let \epsilon = (\epsilon_{1},\epsilon_{2},\ldots, \epsilon_{n}) be a first-kind dependent sequence of categorical random variables with permuted weights given by \sigma. Then

P(\epsilon_{n} = i) = p_{\sigma^{-1}(i)}p_{i}^{+} + p_{i}^{-}\sum_{j\neq \sigma^{-1}(i)}p_{j};\quad n > 1, \quad i = 1,2,\ldots,K
This lemma gives the unconditional probability distribution for the subsequent categorical random variables after \epsilon_{1} under permuted first-kind dependence. If \sigma is the identity permutation, then \sigma^{-1}(i) = i, and P(\epsilon_{n}=i)=p_{i}p_{i}^{+}+p_{i}^{-}\sum_{j \neq i}p_{j}=p_{i}
and we recover standard first kind dependence from [2]. Now the entire sequence is identically distributed. In general, this may not always be true. The following theorem gives necessary and sufficient conditions on the initial probability distribution to guarantee an identically distributed permuted sequence of first-kind dependent categorical random variables.
Theorem 1: Suppose \epsilon = (\epsilon_{1},\epsilon_{2},\ldots \epsilon_{n}) is a sequence of first-kind dependent categorical random variables with K categories. Let \sigma be a permutation of \{1,2,\ldots, K\} that permutes the conditional probabilities of \epsilon_{j}, j \geq 2 according to the outcome of \epsilon_{1}. That is, suppose

P(\epsilon_{j} = k | \epsilon_{1} = \sigma^{-1}(k)) = p_{k}^{+} \text{ and } P(\epsilon_{j} = k | \epsilon_{1} \neq \sigma^{-1}(k)) = p_{k}^{-}, \quad k = 1,2,\ldots, K

Then all variables \epsilon_{i} are identically distributed with P(\epsilon_{i} = k) = p_{k}, k= 1,2,\ldots, K if and only if p_{i} = p_{\sigma^{-1}(i)} for all i.
Returning to the example above, where K=3 and \sigma = (132), we have by Theorem 1 that the entire sequence is identically distributed if and only if p_{1} = p_{\sigma^{-1}(1)} = p_{2}, p_{2} = p_{3}, and p_{3} = p_{1}. In this case, we may conclude that only if p_{1} = p_{2} = p_{3} = p will the sequence remain identically distributed under first-kind dependence.
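The lemma's marginal formula can be checked numerically for this example. The following sketch (variable names are ours; \delta = 0.4 and the non-uniform initial distribution are arbitrary choices) conditions on the outcome of the first variable and compares against the closed form:

```python
# K = 3, sigma = (132): sigma maps 1 -> 3, 3 -> 2, 2 -> 1,
# so sigma^{-1}(1) = 2, sigma^{-1}(2) = 3, sigma^{-1}(3) = 1.
delta = 0.4                      # arbitrary dependency coefficient
p = {1: 0.2, 2: 0.3, 3: 0.5}     # arbitrary (non-uniform) initial distribution
sigma_inv = {1: 2, 2: 3, 3: 1}

p_plus  = {j: p[j] + delta * (1 - p[j]) for j in p}   # p_j^+
p_minus = {j: p[j] - delta * p[j]       for j in p}   # p_j^-

def marginal(i):
    # P(eps_n = i) for n >= 2, conditioning on the outcome of eps_1.
    return sum(p[k] * (p_plus[i] if k == sigma_inv[i] else p_minus[i])
               for k in p)

# Agrees with p_{sigma^{-1}(i)} p_i^+ + p_i^- * (1 - p_{sigma^{-1}(i)}).
for i in p:
    closed_form = p[sigma_inv[i]] * p_plus[i] + p_minus[i] * (1 - p[sigma_inv[i]])
    assert abs(marginal(i) - closed_form) < 1e-12

# The marginals still sum to 1, but differ from p: here p_1 != p_2,
# so by Theorem 1 the sequence is not identically distributed.
assert abs(sum(marginal(i) for i in p) - 1) < 1e-12
assert abs(marginal(1) - p[1]) > 0.01
```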
Example: Suppose now that K=4, and \sigma = (12)(34). Then by Theorem 1, a sequence \epsilon of these categorical random variables under first-kind dependence will be identically distributed if and only if p_{1} = p_{2} and p_{3} = p_{4}. Thus, in this case, p_{1} = p_{2} = p, and p_{3} = p_{4} = p', but p and p' need not necessarily be equal to ensure the sequence is identically distributed.
This example illustrates a corollary to Theorem 1. A permuted FK-dependent sequence of categorical random variables is identically distributed if and only if the probabilities within each disjoint cycle are equal.
Corollary: Let \sigma = \tau_{1}\tau_{2}\cdots \tau_{m} be a permutation decomposed into a product of disjoint cycles \tau_{j}. Suppose \epsilon is a first-kind dependent sequence of categorical random variables with weights permuted according to \sigma. Then the sequence is identically distributed if and only if the probabilities within each disjoint cycle are equal.

Remarks and Conclusion
The notion of permuted weights extends the possibilities for dependence among a sequence of categorical random variables beyond even the class of vertical dependency structures. This paper introduced the notion of horizontal dependence via permuting the modification of the subsequent outcome probabilities, giving the possibility of two different sequences with the same vertical dependency structure, but different horizontal dependency. Permuting the conditional probabilities away from the identity permutation requires the addition of restrictions on the initial probability distribution to retain the desired identically distributed nature of the sequence. Theorem 1 and Corollary 1 show that the necessary and sufficient condition to guarantee an identically distributed sequence of first-kind dependent categorical random variables is for the probabilities within each cycle to be equal.
The identity permutation decomposes into a product of K disjoint 1-cycles, \sigma = (1)(2)\cdots(K); each cycle contains a single element, so the condition on the probabilities holds trivially. Thus, we can see that this is a special case of Theorem 1 and Corollary 1. First-kind dependent sequences of categorical random variables under the identity permutation are always identically distributed, as shown in [2]. In fact, vertical dependency structures generated by \alpha \in \mathscr{C} from [3] are also always identically distributed under the identity permutation. At the other extreme, suppose \epsilon is a first-kind dependent sequence with permuted weights according to a \sigma consisting of a single K-cycle. Then the only way this sequence can be identically distributed is if p_{1} = \cdots = p_{K}. Future research will examine necessary and sufficient conditions on horizontal dependence to retain the identically distributed property of sequences of categorical random variables in other dependency structures.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

References
[1] Andrzej Korzeniowski. On correlated random graphs. Journal of Probability and Statistical Science, pages 43–58, 2013.
[2] Rachel Traylor. A generalized multinomial distribution from dependent categorical random variables. Academic Advances of the CTO, 2017.
[3] Rachel Traylor and Jason Hathcock. Vertical dependency in sequences of categorical random variables. AACTO, 2017.
During my second year of the International Baccalaureate: Diploma Programme, we were assigned the task of making a portfolio for a problem statement in Mathematics HL. While I don’t have the problem statement anymore, I’ll outline the major points of the task:
Find the solutions to $z^n - 1 = 0,\; z \in \mathbb{C}$. Plot these solutions on the Argand plane. Draw a tree diagram starting from the trivial solution $z = 1$ to every other root. Investigate the exercise, devise a conjecture and prove it.
When I first read through this exercise, I didn’t really expect anything interesting to show up. Well, I was horribly wrong; something really cool showed up which made me learn about a lot of little things in complex analysis as a result. I’ll start by solving the above points sequentially -
The solutions to $z^n - 1 = 0$ are obtained most easily through Euler’s form:
$$ z_k = \exp\left(\frac{2k\pi i}{n}\right),\;\;k = 0,…,n-1 $$
As they are the roots of unity, they appear as points on the unit circle in the Argand plane. For example, $z^5 - 1 = 0$ has the following solutions and tree diagram:
The easiest observation is: $$ \sum^{n-1}_{k = 0} z_k = 0 $$

Over the real numbers, $z^n - 1$ factorises as: $$ z^n - 1 = (z-1)\sum^{n-1}_{i = 0} z^i $$ And over the complex field it splits completely into linear factors: $$ z^n - 1 = \prod_{k=0}^{n-1}(z-z_k) = (z-1)\prod_{k=1}^{n-1}(z-z_k) $$

A conjecture based on trial and error is: $$ \prod_{k=1}^{n-1}|z_k - z_0| = \prod_{k=1}^{n-1}|1-z_k| = n$$ This states that the product of the distances from a selected root to each of the other roots is equal to the number of roots. To prove this, notice that: $$\sum^{n-1}_{i = 0} z^i = \prod_{k=1}^{n-1}(z-z_k) = \frac{z^n - 1}{z-1}$$ For the right-hand expression to make sense at $z=1$, one must remove the removable singularity by taking the limit: $$\sum^{n-1}_{i = 0} 1^i = \prod_{k=1}^{n-1}|1-z_k| = \lim_{z\rightarrow 1} \frac{z^n - 1}{z-1}$$ The first equality already proves the conjecture, since the sum evaluates to $n$. The last expression can also be evaluated using L’Hôpital’s rule, confirming the result: $$ \lim_{z\rightarrow 1} \frac{z^n - 1}{z-1} = \lim_{z\rightarrow 1} nz^{n-1} = n $$
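Before proving a conjecture like this, it is easy to test it numerically. Here is a small Python check (my own illustration, not part of the original portfolio; `n` is arbitrary):

```python
import cmath
import math

def distance_product(n):
    """prod_{k=1}^{n-1} |1 - z_k| over the n-th roots of unity z_k (n >= 2)."""
    roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
    assert abs(sum(roots)) < 1e-9      # the roots sum to zero
    prod = 1.0
    for z in roots[1:]:                # skip the trivial root z_0 = 1
        prod *= abs(1 - z)
    return prod
```

Running `distance_product(n)` for several values of `n` returns `n` up to floating-point error, which is exactly the conjectured identity.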
Research | Open Access | Published: On a kind of time optimal control problem of the heat equation. Advances in Difference Equations, volume 2018, Article number: 117 (2018)
Abstract
In this paper, we consider a kind of time-varying bang–bang property of time optimal boundary controls for the heat equation. The time-varying bang–bang property in the interior domain has been considered in some papers, but regarding the time optimal boundary control problem it is still unsolved. In this paper, we determine that there exists at least one solution to the time optimal boundary control problem with time-varying controls.
Introduction
Let \(\mathbb{R}_{+}=(0,+\infty)\), and let Ω be a nonempty open bounded domain in \(\mathbb{R}^{N}\) (\(N\geq1\)) with smooth boundary
∂Ω. Let \(\Gamma\subset\partial\Omega\) be a nonempty and open subset of ∂Ω. Consider the following controlled system:
Here, \(y_{0}\in L^{2}(\Omega)\) is a given function, and \(u\in L^{\infty}(\mathbb{R}_{+}; L^{2}(\Gamma))\) is the control. We denote the solution to (1.1) as \(y(\cdot; y_{0},u)\).
In this paper, we let
Denote the norm and inner product of \(L^{2}(\Omega)\) or \(L^{2}(\Gamma)\) as \(\|\cdot\|\) and \(\langle\cdot, \cdot\rangle\), respectively, and the open (or closed) ball of \(L^{2}(\Omega)\), with a center at 0 and a radius of \(r>0\), as \(B(0,r)\) (or \(\bar{B}(0,r)\)).
In industry and engineering, temperature control is an important kind of control, and time optimal control of the heat equation is a typical instance of it. Two related problems arise in this setting: the time optimal control problem and the norm optimal control problem. Both are important and interesting problems of optimal control theory, and both have been considered in [1].
Time optimal control problems were first discussed by Egorov in 1963, who proved a bang–bang property for his problem (see [2]); Fattorini studied them independently in 1964 (see [3]). Balakrishnan then proved a maximum principle for the optimal control which implies the bang–bang property (see [4]), Friedman treated the time optimal control problem in Banach spaces (see [5]), and in 1974 Fattorini established the maximum principle for some special Banach spaces. Many other authors have considered the time optimal control problem (see, e.g., [1, 6–15]). Regarding stochastic cases, norm optimal control problems were considered in [16, 17] for stochastic ordinary differential equations and in [18] for stochastic heat equations.
The reader can also refer to [19–21] for the equivalence of three kinds of optimal control problems. For some other interesting work, we refer the reader to [22–24]. The approximate controllability of system (1.1) has been studied in much work (see, e.g., [14, 25–28]). It is clear that, for each \(\varepsilon >0\), we have \(\|y(T; y_{0},0)\|\leq \varepsilon \) when
T is large enough.
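This decay of the uncontrolled solution is easy to see numerically. The sketch below is an illustration only, not part of the paper's argument: it integrates the free heat equation on \((0,1)\) with homogeneous Dirichlet data by an explicit finite-difference scheme and reports the \(L^2\)-norm; the grid size, time step, and initial datum are arbitrary choices of mine:

```python
import numpy as np

def free_heat_norm(T, M=100, dt=4e-5):
    """L2 norm at time T of the uncontrolled (u = 0) heat solution on (0,1),
    homogeneous Dirichlet boundary data, explicit Euler in time.
    Stability requires dt / h^2 <= 1/2; here dt/h^2 ~ 0.41."""
    x = np.linspace(0.0, 1.0, M + 2)[1:-1]      # interior grid points
    h = 1.0 / (M + 1)
    y = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)   # a sample y0
    for _ in range(int(T / dt)):
        ypad = np.concatenate(([0.0], y, [0.0]))          # boundary values 0
        y = y + dt * (ypad[2:] - 2 * y + ypad[:-2]) / h**2
    return np.sqrt(h) * np.linalg.norm(y)
```

The norm decays roughly like \(e^{-\pi^2 T}\) for this initial datum, so for any \(\varepsilon>0\) the threshold is reached once \(T\) is large enough.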
The bang–bang property has been studied in much work (see, e.g., [1, 11, 13, 14, 29–32]). However, regarding the time-varying bang–bang property, particularly in infinite-dimensional cases, there is only one paper [19] in which the authors considered a kind of time optimal control problem that involves an interior subset.
In this paper, we consider the time-varying bang–bang property of the heat equation that affects the boundary.
For a given function \(M(\cdot )\in L^{\infty}_{+}(\mathbb{R}_{+})\), we define
and
The time optimal control problem considered is as follows:
where \(\varepsilon>0\). In this problem, if \(u\in\mathcal{U}_{M(\cdot )}\) and \(y(T;y_{0}, u)\in\bar{B}(0,\varepsilon )\) for some \(T\in\mathbb{R}_{+}\), we call
u an admissible control; if \(T^{*}\in\mathbb{R}_{+}\) and \(u^{*}\in\mathcal{U}_{M(\cdot)}\) satisfy \(y(T^{*};y_{0}, u^{*})\in\bar{B}(0,\varepsilon )\), we call \(T^{*}\) and \(u^{*}\) the optimal time and a time optimal control, respectively.
If \(y_{0}\in\bar{B}(0,\varepsilon )\), then taking the control \(u=0\) obviously gives the optimal time \(T^{*}=0\); this case is trivial. Hence, throughout this paper, we assume that
from which we see that if \(T^{*}\) exists, then \(T^{*}>0\).
The main result of this paper establishes the following time-varying bang–bang property for problem (1.3).
Theorem 1.1 Assume that \(M(\cdot )\in L^{\infty}_{+}(\mathbb{R}_{+})\) and \(\varepsilon >0\). Then the following two conclusions are true: (i) There exist at least one optimal time and time optimal control for problem (1.3); (ii) Any time optimal control \(u^{*}\) for problem (1.3) satisfies the following time-varying bang–bang property: $$ \bigl\Vert u^{*}(t) \bigr\Vert _{L^{2}(\Gamma)}=M(t) \quad \textit{for a.e. } t\in \bigl(0, T^{*} \bigr) $$ (1.4) and $$ \bigl\Vert y \bigl(T^{*};y_{0}, u^{*} \bigr) \bigr\Vert _{L^{2}(\Omega)}= \varepsilon. $$ (1.5)
We organize this paper as follows. In Sect. 2, we prove the existence of optimal controls for problem (1.3) and discuss some properties of the optimal controls (see Lemma 2.1). Then we prove Theorem 1.1.
Existence of optimal control for (1.3) and its properties
Lemma 2.1 For problem (1.3), the following two conclusions are true: (i) There exists at least one optimal time \(T^{*}\) and time optimal control \(u^{*}\) for problem (1.3). (ii) Any time optimal control \(u^{*}\) for problem (1.3) satisfies the following property: $$ \bigl\Vert u^{*}(t) \bigr\Vert _{L^{2}(\Gamma)}=M(t)\quad \textit{for a.e. } t\in \bigl(0, T^{*} \bigr). $$ (2.1)
Proof
Let \(u=0\). Then, by the property of the heat equation, we have \(\| e^{\Delta T}y_{0}\|_{L^{2}(\Omega)}\rightarrow0\) as \(T\rightarrow\infty \), which implies that \(T^{*}<+\infty\) by the definition of \(T^{*}\) (see (1.3)).
Let \(\{T_{n}\}_{n=1}^{\infty}\) be a sequence with \(T_{n}\geq T_{n+1}\) for all \(n\in \mathbb{N}\) such that
where \(T^{*}\) is defined in (1.3). Then there exists a sequence \(\{u_{n}\}_{n\geq1}\subset\mathcal {U}_{M(\cdot)}\) such that
Denote
Then
Since
there exists a subsequence of \(\{\tilde{u}_{n}\}_{n\geq1}\), still denoted thus, and \(v^{*}\in L^{\infty}(0,T_{1};L^{2}(\Gamma))\) such that
According to (2.3), there is a subsequence of \(\{\tilde{u}_{n}\} _{n\geq1}\), still denoted thus, such that
where \(y(\cdot; y_{0},\tilde{v}^{*})\) is the solution to the following system:
Letting \(n\rightarrow\infty\), we obtain
which shows that \(v^{*}=\tilde{v}^{*}|_{(0,T^{*})}\) is an optimal control.
Next, we show that
By contradiction, there exist \(\delta_{0}>0\) and a measurable set \(E_{0}\subset(0,T^{*})\), with \(|E_{0}|>0\), such that
where \(|E_{0}|\) is the Lebesgue measure of \(E_{0}\). Then we have
According to (2.7), we can set
It is obvious that \(\zeta\in L^{\infty}(0,T; L^{2}(\Gamma))\). From (2.3), it is easily verified that
which contradicts (2.8).
(ii) Since \(y_{0}\notin\bar{B}(0,\varepsilon )\), we see that \(T^{*}>0\). The proof is carried out in the following three steps.
Step 1. We show that \(y(T^{*}; y_{0}, u^{*})\in\partial B(0,\varepsilon )\).
Otherwise, we have \(y(T^{*}; y_{0},u^{*})\in B(0,\varepsilon )\), i.e., \(\|y(T^{*}; y_{0},u^{*})\|_{L^{2}(\Omega)}<\varepsilon \). For each \(\delta>0\), we have
Noting that
as \(\delta\rightarrow0\),
as \(\delta\rightarrow0\) and
as \(\delta\rightarrow0\), we obtain
This, together with \(y(T^{*}; y_{0},u^{*})\in B(0,\varepsilon )\), implies that, for a sufficiently small \(\delta>0\), we have
This shows that \(T^{*}-\delta\) is also an optimal time in (1.3), which contradicts the definition of \(T^{*}\).
Step 2. We show that \(\mathcal{R}(y_{0},T^{*})\cap\bar{B}(0,\varepsilon )\) has only one point.
Otherwise, there exist \(u_{1}, u_{2}\in\mathcal{U}_{M(\cdot)}\) such that
and
Denote
It is clear that \({\hat{u}}\in\mathcal{U}_{M(\cdot)}\) and that
Step 3. We prove that any time optimal control \(u^{*}\) satisfies (2.1).
In fact, since \(\mathcal{R}(y_{0},T^{*})\cap\bar{B}(0,\varepsilon )\) has only one point, \(\{y(T^{*};y_{0}, u^{*})\}=\mathcal{R}(y_{0},T^{*})\cap\bar{B}(0,\varepsilon )\). Since \(\mathcal{R}(y_{0},T^{*})\) and \(\bar{B}(0,\varepsilon )\) are two convex sets, according to the Hahn–Banach theorem, there exists \(\eta^{*}\in L^{2}(\Omega)\setminus\{0\}\) such that
This shows that
i.e.,
Here,
and
Let \(E_{0}\) be the set of the Lebesgue points of \(\widetilde{u}^{*}(\cdot)\) and \(M(\cdot)\) in \((0, T^{*})\). For each \(t_{0}\in E_{0}\), let
where \(\zeta\in L^{2}(\Gamma)\), with \(\|\zeta\|_{L^{2}(\Gamma)}\leq1\), and \(\lambda \in(0,\min\{t_{0},T^{*}-t_{0}\})\). By (2.17), we have
Letting \(\lambda\rightarrow0+\), we obtain
This implies that
from which we obtain
This completes the proof of this lemma. □
Based on Lemma 2.1, we have the following result.
Corollary 2.2 Let \(u_{1}^{*}\) and \(u_{2}^{*}\) be two optimal controls for problem (1.3). Then \(u_{1}^{*}= u_{2}^{*}\). Proof of Theorem 1.1
Finally, we show that \(\|y(T^{*}; y_{0}, u^{*})\|_{L^{2}(\Omega)}=\varepsilon \).
By contradiction, we suppose that \(y(T^{*}; y_{0},u^{*})\in B(0,\varepsilon )\), i.e., \(\|y(T^{*}; y_{0},u^{*})\|_{L^{2}(\Omega)}<\varepsilon \). For \(\delta>0\), we obtain
Noting that
as \(\delta\rightarrow0\),
as \(\delta\rightarrow0\) and
as \(\delta\rightarrow0\), we obtain
This, together with \(y(T^{*}; y_{0},u^{*})\in B(0,\varepsilon )\), implies that, for a sufficiently small \(\delta>0\), we have
This shows that \(T^{*}-\delta\) is also an optimal time in (1.3), which contradicts the definition of \(T^{*}\). Hence, \(\|y(T^{*}; y_{0},u^{*})\|_{L^{2}(\Omega)}=\varepsilon \). □
References
1. Fattorini, H.O.: Time and norm optimal controls: a survey of recent results and open problems. Acta Math. Sci. Ser. B Engl. Ed. 31, 2203–2218 (2011)
2. Egorov, Yu.V.: Optimal control in Banach spaces. Dokl. Akad. Nauk SSSR 150, 241–244 (1963) (in Russian)
3. Fattorini, H.O.: Time-optimal control of solutions of operational differential equations. SIAM J. Control 2, 54–59 (1964)
4. Balakrishnan, A.V.: Optimal control problems in Banach spaces. SIAM J. Control 3, 152–180 (1965)
5. Friedman, A.: Optimal control for parabolic equations. J. Math. Anal. Appl. 18, 479–491 (1967)
6. Barbu, V.: Optimal Control of Variational Inequalities. Pitman, London (1984)
7. Cârja, O.: On the minimal time function for distributed control systems in Banach spaces. J. Optim. Theory Appl. 44, 397–406 (1984)
8. Fattorini, H.O.: Infinite-Dimensional Optimization and Control Theory. Encyclopedia of Mathematics and Its Applications, vol. 62. Cambridge University Press, Cambridge (1999)
9. Kunisch, K., Wang, L.: Time optimal control of the heat equation with pointwise control constraints. ESAIM Control Optim. Calc. Var. 19, 460–485 (2013)
10. Li, X., Yong, J.: Optimal Control Theory for Infinite Dimensional Systems. Birkhäuser Boston, Boston (1995)
11. Lü, Q.: Bang–bang principle of time optimal controls and null controllability of fractional order parabolic equations. Acta Math. Sin. Engl. Ser. 26, 2377–2386 (2010)
12. Micu, S., Roventa, I., Tucsnak, M.: Time optimal boundary controls for the heat equation. J. Funct. Anal. 263, 25–49 (2012)
13. Mizel, V., Seidman, T.: An abstract bang–bang principle and time optimal boundary control of the heat equation. SIAM J. Control Optim. 35, 1204–1216 (1997)
14. Wang, G.: \(L^{\infty}\)-Null controllability for the heat equation and its consequences for the time optimal control problem. SIAM J. Control Optim. 47, 1701–1720 (2008)
15. Yong, J.: Time optimal control for semilinear distributed parameter systems: existence theory and necessary conditions. Kodai Math. J. 14, 239–253 (1991)
16. Wang, Y., Yang, D.-H., Yong, J., Yu, Z.: Exact controllability of linear stochastic differential equations and related problems. Math. Control Relat. Fields 7, 305–345 (2017)
17. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Applications of Mathematics (New York), vol. 43. Springer, New York (1999)
18. Yang, D.-H., Zhong, J.: Observability inequality of backward stochastic heat equations for measurable sets and its applications. SIAM J. Control Optim. 54, 1157–1175 (2016)
19. Chen, N., Wang, Y., Yang, D.: Time-varying bang–bang property of time optimal controls for heat equation and its applications. Syst. Control Lett. 112, 18–23 (2018)
20. Gozzi, F., Loreti, P.: Regularity of the minimum time function and minimum energy problems: the linear case. SIAM J. Control Optim. 37, 1195–1221 (1999)
21. Wang, G., Zuazua, E.: On the equivalence of minimal time and minimal norm controls for internal controlled heat equations. SIAM J. Control Optim. 50, 2938–2958 (2012)
22. Guo, B.-Z., Yang, D.-H.: On convergence of boundary Hausdorff measure and application to a boundary shape optimization problem. SIAM J. Control Optim. 51, 253–272 (2013)
23. Guo, B.-Z., Yang, D.-H.: Some compact classes of open sets under Hausdorff distance and application to shape optimization. SIAM J. Control Optim. 50, 222–242 (2012)
24. Yang, D.-H.: Shape optimization of stationary Navier–Stokes equation over classes of convex domains. Nonlinear Anal., Theory Methods Appl. 71, 6202–6211 (2009)
25. Fernandez-Cara, E., Zuazua, E.: Null and approximate controllability for weakly blowing-up semilinear heat equations. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 17, 583–616 (2000)
26. Apraiz, J., Escauriaza, L., Wang, G., Zhang, C.: Observability inequalities and measurable sets. J. Eur. Math. Soc. 16, 2433–2475 (2014)
27. Guo, B.-Z., Xu, Y., Yang, D.-H.: Optimal actuator location of minimum norm controls for heat equation with general controlled domain. J. Differ. Equ. 261, 3588–3614 (2016)
28. Guo, B.-Z., Yang, D.-H.: Optimal actuator location for time and norm optimal control of null controllable heat equation. Math. Control Signals Syst. 27, 23–48 (2015)
29. Bellman, R., Glicksberg, I., Gross, O.: On the “bang–bang” control problem. Q. Appl. Math. 14, 11–18 (1956)
30. Loheac, J., Tucsnak, M.: Maximum principle and bang–bang property of time optimal controls for Schrödinger type systems. SIAM J. Control Optim. 51, 4016–4038 (2013)
31. Schmidt, E.J.P.G.: The “bang–bang” principle for the time-optimal problem in boundary control of the heat equation. SIAM J. Control Optim. 18, 101–107 (1980)
32. Wang, G., Xu, Y., Zhang, Y.: Attainable subspaces and the bang–bang property of time optimal controls for heat equations. SIAM J. Control Optim. 53, 592–621 (2015)
Acknowledgements
The author is grateful to the anonymous referees for their helpful comments and suggestions, which helped to greatly improve the presentation of this paper. There is no funding to acknowledge.
Ethics declarations Competing interests
The author declares to have no competing interests regarding the publication of this article.
Additional information Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Astrophysics > Solar and Stellar Astrophysics
Title: Godbillon-Vey Helicity and Magnetic Helicity in Magnetohydrodynamics
(Submitted on 16 Sep 2019)
Abstract: The Godbillon-Vey invariant occurs in homology theory and algebraic topology when conditions for a co-dimension 1 foliation of a 3D manifold are satisfied. The magnetic Godbillon-Vey helicity invariant in magnetohydrodynamics (MHD) is a higher order helicity invariant that occurs for flows in which the magnetic helicity density $h_m={\bf A}{\bf\cdot}{\bf B}={\bf A}{\bf\cdot}(\nabla\times{\bf A})=0$, where ${\bf A}$ is the magnetic vector potential and ${\bf B}$ is the magnetic induction. This paper obtains evolution equations for the magnetic Godbillon-Vey field $\boldsymbol{\eta}={\bf A}\times{\bf B}/|{\bf A}|^2$ and the Godbillon-Vey helicity density $h_{gv}=\boldsymbol{\eta}{\bf\cdot}(\nabla\times{\boldsymbol\eta})$ in general MHD flows in which either $h_m=0$ or $h_m\neq 0$. A conservation law for $h_{gv}$ occurs in flows for which $h_m=0$. For $h_m\neq 0$ the evolution equation for $h_{gv}$ contains a source term in which $h_m$ is coupled to $h_{gv}$ via the shear tensor of the background flow. The transport equation for $h_{gv}$ also depends on the electric field potential $\psi$, which is related to the gauge for ${\bf A}$ and takes its simplest form for the advected ${\bf A}$ gauge, in which $\psi={\bf A\cdot u}$, where ${\bf u}$ is the fluid velocity. An application of the Godbillon-Vey magnetic helicity to nonlinear force-free magnetic fields used in solar physics is investigated. The possible uses of the Godbillon-Vey helicity in zero helicity flows in ideal fluid mechanics, and in zero helicity Lagrangian kinematics of three-dimensional advection, are discussed. Submission history: From: Gary Webb. [v1] Mon, 16 Sep 2019 15:43:36 GMT (2814kb)
The correlation functions found in Barouch and McCoy's paper (PRA 3, 2137 (1971)) for the XX spin chain are derived using a method based on Wick's theorem. For the zz correlation function, this gives
$\langle \sigma_l^z \sigma_{l+R}^z \rangle = \langle \sigma_l^z \rangle^2 - G_R^2$
where for $R=1$, $G_1 = -\langle \sigma_l^x \sigma_{l+1}^x+ \sigma_l^y \sigma_{l+1}^y \rangle/2$.
If I calculate $\langle \sigma_l^z \sigma_{l+1}^z \rangle$ both explicitly and using the equation above for 8 qubits, I get different answers.
So is Wick's theorem still valid for 8 qubits, which means I've just made a mistake? Or is it valid only in the thermodynamic limit?
Thanks
Edit:
Thanks for your replies everyone. @lcv However, I haven't used the analytical diagonalisation for this - I have simply used Mathematica to diagonalise the 8 qubit chain numerically after substituting arbitrary values for the coupling strength, magnetic field and temperature. Hence it can't be an error in the diagonalisation. It is the thermal average I have calculated, that is $\langle \sigma^z_l \rangle=tr(\rho \sigma^z_l )$ where $\rho=e^{−H/T}/tr(e^{−H/T})$ and T is temperature. But in doing this, I find that $\langle \sigma^z_l \sigma^z_{l+R} \rangle \neq \langle \sigma^z_l \rangle^2 - G_1^2$ where I've defined $G_1$ above.
Edit2 (@marek @lcv @Fitzsimons @Luboš) I'm going to try to clarify - The open XX Hamiltonian in a magnetic field is
\begin{equation} H=-\frac{J}{2}\sum_{l=1}^{N-1} (\sigma^x_l \sigma^x_{l+1} + \sigma^y_l \sigma^y_{l+1})- B \sum_{l=1}^N \sigma^z_l \end{equation}
In Mathematica, I have defined the Pauli spin matrices, then the Hamiltonian for 8 qubits. I then put in values for $J$, $B$ and $T$, and calculate the thermal density matrix,
\begin{equation} \rho = \frac{e^{-H/T}}{tr(e^{-H/T})} \end{equation}
So now I have numerical density matrix. I then calculate $\langle \sigma^z_l \sigma_{l+1}^z \rangle=tr(\rho \sigma^z_l \sigma_{l+1}^z )$ using the definitions of the Pauli spin matrices and $\rho$.
Next I calculate $\langle \sigma_l^z \sigma_{l+R}^z \rangle$ using the result from Wick's theorem which gives $\langle \sigma_l^z \rangle^2 - G_R^2$ where for $R=1$, $G_1 = -\langle \sigma_l^x \sigma_{l+1}^x+ \sigma_l^y \sigma_{l+1}^y \rangle/2$. I again use the Pauli spin matrices I defined and the same numerical $\rho$ to calculate them.
But I get a different (numerical) answer for each of these. |
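To make the comparison concrete, here is a minimal numpy sketch of exactly this calculation (my own reconstruction, not the asker's Mathematica notebook; `N`, `J`, `B`, `T` and the site `l` are placeholder values, and the chain is open as in the Hamiltonian above):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(single, site, N):
    """Embed a single-site operator at position `site` in an N-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(N):
        out = np.kron(out, single if i == site else I2)
    return out

def xx_check(N=4, J=1.0, B=0.5, T=1.0, l=1):
    """Compare <sz_l sz_{l+1}> with the Wick-type expression <sz_l>^2 - G1^2."""
    H = sum(-J / 2 * (site_op(sx, i, N) @ site_op(sx, i + 1, N)
                      + site_op(sy, i, N) @ site_op(sy, i + 1, N))
            for i in range(N - 1))
    H = H + sum(-B * site_op(sz, i, N) for i in range(N))
    # thermal state rho = exp(-H/T)/Z via the eigendecomposition of Hermitian H
    w, v = np.linalg.eigh(H)
    boltz = np.exp(-(w - w.min()) / T)      # energy shift for numerical stability
    rho = (v * boltz) @ v.conj().T
    rho = rho / np.trace(rho).real
    ev = lambda A: np.trace(rho @ A).real   # thermal expectation value
    zz = ev(site_op(sz, l, N) @ site_op(sz, l + 1, N))
    G1 = -0.5 * (ev(site_op(sx, l, N) @ site_op(sx, l + 1, N))
                 + ev(site_op(sy, l, N) @ site_op(sy, l + 1, N)))
    wick = ev(site_op(sz, l, N)) ** 2 - G1 ** 2
    return zz, wick
```

Note one subtlety this sketch makes visible: on an open finite chain \(\langle \sigma^z_l \rangle\) depends on the site \(l\), whereas the Barouch–McCoy formula is derived in a translation-invariant setting, so a discrepancy between the two returned numbers does not by itself decide whether Wick's theorem failed.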
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared vector norm of \( A'x \), i.e. \( x'A A'x = \| A'x \|^2 \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x}
= \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \)
Because \( \frac{x}{\| x \|} \) is a unit vector, premultiplying it by \( A' \) scales and rotates it. Write \( \alpha = \| A'\frac{x}{\| x \|} \| \geq 0 \) for the resulting vector norm. Then \( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|}=\alpha \cos\beta \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between the vector before and after premultiplying by \( A' \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2\beta \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof. |
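As a quick numerical sanity check of the inequality (an illustration only; the matrix and vector are random and the matrix is deliberately non-symmetric):

```python
import numpy as np

rng = np.random.default_rng(42)

def check_inequality(n=6):
    """Return (lhs, rhs) of (x'Ax)^2 <= (x'AA'x)(x'x) for random A and x."""
    A = rng.standard_normal((n, n))   # arbitrary real square matrix
    x = rng.standard_normal(n)
    lhs = (x @ A @ x) ** 2
    rhs = (x @ A @ A.T @ x) * (x @ x)
    return lhs, rhs
```

Repeating the check many times never produces a violation, which matches the Cauchy–Schwarz argument above: \( x'Ax = (A'x)'x \le \|A'x\|\,\|x\| \).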
I thought it would be interesting to point out a geometric construction related to \(L_{\infty}\)-algebras. (See earlier post here.) Recall that given a Lie algebra \((\mathfrak{g}, [,] )\) one can associate with the dual vector space a linear Poisson structure known as the Lie-Poisson bracket. So, as a manifold, \((\mathfrak{g}^{*}, \{, \}) \) is a Poisson manifold. It is convenient to replace the “classical” condition of linearity with a graded condition: if we assign weight one to the coordinates on \(\mathfrak{g}^{*} \), then the Lie-Poisson bracket is of weight minus one.
The Lie Poisson bracket is very important in deformation quantisation (both formal and C*-algebraic). There are some nice theorems and results that I should point to at some later date.
Now, it is also known that one has an odd version of this known as the Lie-Schouten brackets on \(\Pi \mathfrak{g}^{*}\). The key difference is the shift in the Grassmann parity of the “linear” coordinates. Note that this all carries over to Lie super algebras with no problem. I will drop the prefix super from now on…
So, let us look at the situation for \(L_{\infty}\)-algebras. We understand these either as a series of higher order brackets on a vector space \(U\) that satisfy a higher order generalisation of the Jacobi identities, or, more conveniently, in terms of a homological vector field on the formal manifold \(\Pi U\).
Definition An \(L_{\infty}\)-algebra is a vector space \(V = \Pi U\) together with a homological vector field \(Q = (Q^{\delta} + \xi^{\alpha} Q_{\alpha}^{\delta} + \frac{1}{2!} \xi^{\alpha} \xi^{\beta} Q_{\beta \alpha}^{\delta} + \frac{1}{3!} \xi^{\alpha} \xi^{\beta} \xi^{\gamma} Q_{\gamma \beta \alpha}^{\delta} + \cdots) \frac{\partial}{\partial \xi^{\delta}}\),
where we have picked coordinates \(\{ \xi^{\alpha}\}\) on \(\Pi U\). Note that these coordinates are odd as compared to the coordinates on \(U\); thus we assign the Grassmann parity \(\widetilde{\xi^{\alpha}} = \widetilde{\alpha} + 1\). Note that \(Q\) is odd and that if we restrict to the quadratic part then we are back to Lie algebras.
I will simply state the result, rather than derive it.
Proposition Let \((\Pi U, Q)\) be an \(L_{\infty}\)-algebra. Then the formal manifold \(\Pi U^{*}\) has a homotopy Schouten algebra structure.
Let us pick local coordinates \(\{ \eta_{\alpha}\}\) on \(\Pi U^{*}\). Furthermore, we consider this as a graded manifold and attach a weight of one to each coordinate. A general function, a “multivector” has the form
\(X = \stackrel{0}{X} + X^{\alpha} \eta_{\alpha} + \frac{1}{2!}X^{\alpha \beta}\eta_{\beta} \eta_{\alpha} + \cdots \)
The higher Lie-Schouten brackets are given by
\((X_{1}, X_{2}, \cdots, X_{r}) = \pm Q_{\alpha_{r}\cdots \alpha_{1} }^{\beta}\eta_{\beta}\frac{\partial X_{1}}{\partial \eta_{\alpha_{1}}} \cdots \frac{\partial X_{r}}{\partial \eta_{\alpha_{r}}}\),
where we are being slack about the overall sign. Note that with respect to the natural weight the n-bracket has weight (1-n). Thus, not unexpectedly, restricting to n=2 gives an odd bracket of weight minus one: up to conventions this is the Lie-Schouten bracket of a Lie algebra.
The above collection of brackets forms an \(L_{\infty}\)-algebra in the “odd super” conventions that satisfies a derivation rule of the product of “multivectors”. Thus the nomenclature homotopy Schouten algebra and higher Lie-Schouten bracket.
A similar statement holds in terms of a homotopy Poisson algebra on \(U^{*}\). Here the brackets are skew-symmetric and of even/odd Grassmann parity for an even/odd number of arguments. (I prefer the odd conventions overall.)
Now this is quite a new construction and the technical exploration of this nice geometric construction awaits to be explored. How much of the geometric theory associated with Lie algebras and Lie groups carries over to \(L_{\infty}\)-algebras and \(\infty\)-groups is an open question.
Details can be found in Andrew James Bruce ” From \(L_{\infty}\)-algebroids to higher Schouten/Poisson structures”,
Reports on Mathematical Physics Vol. 67, (2011), No. 2 (also on the arXiv).
Also see earlier post here on Lie infinity algebroids. |
I am a future teacher and am interested in incorporating a mathematics research unit where my students do their own research on an unsolved problem. I have a couple of ideas on problems, but am having difficulty finding scholarly results/benefits/examples of doing a research unit within the secondary setting. Would someone be able to point me to some scholarly articles and papers that could be used as evidence to convince administrators? Thank you!
This kind of echoes Opal E's answer, but with some specific problems I used for one of my "research projects" when I was teaching a class of standard-level seniors (they were at the Algebra II / Precal level). I also agree that modern-day unsolved problems are extremely difficult to understand, and even the easy-sounding ones (Goldbach Conjecture, Collatz Conjecture, etc.) are much deeper than they let on, making students believe they know much more about the problem than they really do. (EDIT: for a specific example of this, I showed a very bright student the "legendary question #6" and he was convinced he had found a simple solution after only 30 minutes. Most times, students think a simple question means a simple solution.) Although, if you really do want to use unsolved problems, then this link could help get you started. They have unsolved problems for students to explore from every grade K-12. I haven't used their resources, but they do look like very interesting/engaging problems.
When I wanted to do a "research project," I wanted them to research the important landmark theorems of the past, the ones that marked major changes in the history or development of math. The students had to research what the problem meant and why it was important in the history/development of math. I don't remember all of the problems I used, but here's a short list:
1) Solving the Cubic Equation (eventually led to Galois Theory and Group Theory)
2) 4 Color Theorem (first rigorous proof completed via computer)
3) Bridges of Königsberg (Led to Graph Theory)
4) How easily can people factor a large number with the aid of a computer? (Leads to RSA encryption) Additionally, how can we tell if a large number is prime?
5) Is it true that for a given line and a point not on the line, there is only one line through the point parallel to the given line? (leads to the debate about the parallel postulate and the discovery of non-euclidean geometry).
6) How did the ancient mathematicians figure out the decimal representation of transcendental numbers? How do we know $\pi=3.141592...$? How do we find rational approximations, for example $\pi \approx \frac{22}{7} \approx \frac{333}{106} \approx \frac{355}{113}$
7) Which set has more numbers in it: a) The set of positive integers b) The set of all integers c) The set of numbers between 0 and 1? (Leads to the ideas of multiple infinities and how to show two sets have equal cardinality).
8) I think I also had a question about what an irrational number is: how do we know $\sqrt{2}$ is irrational? Why was Pythagoras so upset by this? Are most numbers irrational or rational?
I can't remember if I had other problems for them to look up; those are the ones I can remember. Some groups did better than others, but the point wasn't for them to understand the proof of the Four Color Theorem, but just know how that one theorem represented a big change in rigorous mathematics.
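As a side note on question 6 above: the classical approximations $\frac{22}{7}$, $\frac{333}{106}$ and $\frac{355}{113}$ are precisely the first continued-fraction convergents of $\pi$, and students can reproduce them with a short program. A Python sketch (my own classroom-style illustration, not part of the original assignment):

```python
import math
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of the real number x."""
    a, t = [], x
    for _ in range(n):                 # continued-fraction coefficients
        ai = int(t)
        a.append(ai)
        frac = t - ai
        if frac == 0:
            break
        t = 1.0 / frac
    # standard recurrence h_k = a_k h_{k-1} + h_{k-2}, same for k_k
    convs = []
    h_prev, h = 1, a[0]
    k_prev, k = 0, 1
    convs.append(Fraction(h, k))
    for ai in a[1:]:
        h, h_prev = ai * h + h_prev, h
        k, k_prev = ai * k + k_prev, k
        convs.append(Fraction(h, k))
    return convs

# convergents(math.pi, 4) -> [3, 22/7, 333/106, 355/113]
```

Each convergent is the best rational approximation of $\pi$ with denominator up to that size, which makes a nice discussion point for why $\frac{355}{113}$ is so unreasonably accurate.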
I have taught high school for two years, and am now a Ph.D. student in mathematics. I continue to teach undergraduate courses.
Some high schoolers do participate in mathematics research. For example, the AMS put out a notice regarding the PRIMES program for advanced high schoolers -- who, first of all, are passionate about mathematics (such a thing is necessary to do mathematics research), and second of all, have at least a calculus background. The article then details the problems which students tackle, and the difficulty of devising problems to which a high schooler with adequate mathematical maturity can be built up to over a few months. Unless you are going to be teaching at a highly specialized school, I strongly advise against attempting to get high school students to do mathematical research.
However, if you intend for them to learn "research skills" in generality, I believe that a mathematical history project that aligns with your course goals would be an education-research-supported unit. I refer you to "Using History in Mathematics Education" (Fauvel, 1991), which discusses the notion of using history in mathematics education and summarizes some methods of doing so. A more recent discussion of this topic is found in "History of Mathematics in Mathematics Education" (2016) which discusses the following:
• Which history is suitable, pertinent, and relevant to Mathematics Education (ME)?
• Which role can History of Mathematics (HM) play in ME?
• To what extent has HM been integrated in ME (curricula, textbooks, educational aids/resource material, teacher education)?
• How can this role be evaluated and assessed and to what extent does it contribute to the teaching and learning of mathematics?
In my personal and professional opinion, tackling historical problems is much more accessible to high school students, and can give them the experience of devising their own question and seeking an answer in a real-world context.
Another possible way you could get high school students having a research experience without needing advanced knowledge is via a multidisciplinary unit together with a science teacher. Mathematics, in particular statistics, is useful in answering many science problems. If your students have a statistics unit, you could have them be the "statistics team" and a science class be a "science team", and work together to produce a statistically and scientifically meaningful result for an end-of-year project. I did this together with an advanced science elective when I taught AP Statistics and it was very successful. I may add details later if requested.
Category Theory for Programmers Chapter 5: Products and Coproducts
Before starting, it can help to look at a few examples of posets.
Another thing that was probably a bit confusing, but worth distinguishing because the chapter did not do too well at it, was the definition of a product. I'll state it here exactly:
Let $\mathscr{A}$ be a category and $X,Y \in \mathscr{A}$. A product of $X$ and $Y$ consists of an object $P$ and maps $p_1:P \to X$, $p_2:P \to Y$ with the property that for all $A \in \mathscr{A}$ and all maps $f_1:A \to X$, $f_2:A \to Y$, there exists a unique map $\bar{f}:A \to P$ such that the graph (if you drew the mappings) commutes. The maps $p_1$ and $p_2$ are called projections.
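To make the universal property concrete in the same Java style as the exercise later in this post, here is a hypothetical sketch (the names `Pair`, `p1`, `p2`, and `factorize` are mine, not from the chapter): the product of $X$ and $Y$ is the pair type, and the unique map $\bar{f}$ is assembled from $f_1$ and $f_2$.

```java
import java.util.function.Function;

// Sketch of the categorical product in Java: Pair<X, Y> with projections
// p1, p2, and the unique mediating map built from f1 and f2.
public class ProductSketch {
    static final class Pair<X, Y> {
        final X fst;
        final Y snd;
        Pair(X fst, Y snd) { this.fst = fst; this.snd = snd; }
    }

    // p1 : P -> X and p2 : P -> Y
    static <X, Y> X p1(Pair<X, Y> p) { return p.fst; }
    static <X, Y> Y p2(Pair<X, Y> p) { return p.snd; }

    // Given f1 : A -> X and f2 : A -> Y, build the unique fbar : A -> P
    // satisfying p1 . fbar = f1 and p2 . fbar = f2.
    static <A, X, Y> Function<A, Pair<X, Y>> factorize(
            Function<A, X> f1, Function<A, Y> f2) {
        return a -> new Pair<>(f1.apply(a), f2.apply(a));
    }

    public static void main(String[] args) {
        Function<Integer, Integer> f1 = n -> n * n;   // A -> X
        Function<Integer, String>  f2 = n -> "#" + n; // A -> Y
        Pair<Integer, String> p = factorize(f1, f2).apply(3);
        System.out.println(p1(p) + " " + p2(p));      // 9 #3
    }
}
```

Any other candidate $A$ with maps into $X$ and $Y$ factors through `Pair` via `factorize(f1, f2)`, and that map is forced: its components must agree with `f1` and `f2`.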
What was not super clear was that a product on a partially ordered set is unique, and only on posets are you able to call it the product.
Also, in Category Theory for the Sciences, there was a slogan to help garner some intuition about coproducts:
Any time behavior is determined by cases, there is a coproduct involved.
Let \(x,y \in \mathbf{C}\) be terminal objects. Because \(x\) is terminal, \(\exists! f:y \to x\) and because \(y\) is terminal, \(\exists! g:x \to y\). The composition \(f \circ g:x \to x\) is an arrow, but by definition, there is one and only one arrow from \(x\) to \(x\), and being an object in a category, it must have an identity, so \(f \circ g = \mathrm{Id}_x\). Similarly, \(g \circ f:y \to y\) and there can only be one morphism from \(y\) to \(y\) and so \(g \circ f = \mathrm{Id}_y\). Therefore, \(x \cong y\). \(\blacksquare\)
Let \(c\) be a product of \(a\) and \(b\). This means we utilize the universal construction and get the relations \(c \to a\) and \(c \to b\), and for any other \(c' \to a\) and \(c' \to b\), we have a unique mapping \(c' \to c\).
So the question here is, given a poset, what is the product of objects? First step is to define what the relation of our poset is.
Let \(a \to b\) if \(a\) is an ancestor of \(b\). Then our relations read as:
\(c\) is an ancestor of \(a\), \(c\) is an ancestor of \(b\), and for any other ancestor \(c'\) of \(a\) and \(b\), we have that \(c'\) is an ancestor of \(c\). This makes \(c\) the most immediate ancestor of both \(a\) and \(b\) (and does not exclude \(c\) being \(a\) or \(b\)). However, this does not really work: if we look at the product of two siblings, we are saying the mother is an ancestor of the father and vice versa.
In the comments, a relationship of \(a\) is the boss of \(b\) was given by the author. In this case, we would have the product of two teammates to be their immediate boss, but this also assumes no person has more than one boss.
However, for a real example that works, let \(a \to b\) be given by \(a,b \in \mathbf{Set}\) and \(a \subseteq b\). Then we have that the product of \(a\) and \(b\) is a common subset \(c\) such that for any other common subset \(c'\), \(c' \subseteq c\). So the product is the largest subset of both.
Another way to think about it is looking at the composition of morphisms from the text: \(p' = p \circ m\), which looks like \(c' \to c \to b\) equaling \(c' \to b\), and it seems to say that for any path, there's an object you can insert in between that works for both \(a\) and \(b\).
The coproduct of two elements in a poset comes from flipping the relations around. In the case of subsets, the coproduct of sets \(a\) and \(b\) is the set \(c\) such that for any other set \(c'\) that contains both \(a\) and \(b\), we have \(c \subseteq c'\); that is, the coproduct is the smallest set that contains both \(a\) and \(b\). This is the opposite of the product, which is the largest set that is a subset of both \(a\) and \(b\). You could think of the product as the infimum (meet) of \(a\) and \(b\), whereas the coproduct is their supremum (join).
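As a small sketch of the subset poset in Java (using `java.util.Set`; the method names `product` and `coproduct` are mine): the product of two subsets is their intersection, and the coproduct is their union.

```java
import java.util.Set;
import java.util.TreeSet;

// Product/coproduct in the poset of subsets ordered by inclusion:
// the product of a and b is their intersection (largest common subset),
// the coproduct is their union (smallest common superset).
public class PosetSketch {
    static Set<Integer> product(Set<Integer> a, Set<Integer> b) {
        Set<Integer> meet = new TreeSet<>(a);
        meet.retainAll(b);   // intersection
        return meet;
    }

    static Set<Integer> coproduct(Set<Integer> a, Set<Integer> b) {
        Set<Integer> join = new TreeSet<>(a);
        join.addAll(b);      // union
        return join;
    }

    public static void main(String[] args) {
        Set<Integer> a = new TreeSet<>(Set.of(1, 2, 3));
        Set<Integer> b = new TreeSet<>(Set.of(2, 3, 4));
        System.out.println(product(a, b));    // [2, 3]
        System.out.println(coproduct(a, b));  // [1, 2, 3, 4]
    }
}
```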
Implement the equivalent of Haskell's Either.

public class HelloWorld {
    public static class Either<L, R> {
        public boolean isLeft;
        public final L left;
        public final R right;

        private Either(L left, R right) {
            this.isLeft = left != null;
            this.left = left;
            this.right = right;
        }

        public static <L, R> Either<L, R> left(L left) {
            return new Either<>(left, null);
        }

        public static <L, R> Either<L, R> right(R right) {
            return new Either<>(null, right);
        }
    }

    private static void test(Either<?, ?> something) {
        if (something.isLeft)
            System.out.println("Given left value");
        else
            System.out.println("Given right value");
    }

    public static void main(String[] args) {
        final Either<Integer, String> left = Either.left(37);
        final Either<Integer, String> right = Either.right("Hello, World");
        test(left);
        test(right);
    }
}
Showing Either is a better coproduct than int with the following injections:

int i(int n) { return n; }
int j(bool b) { return b ? 0 : 1; }

So given the arrows int → int and bool → int, for Either to be better, we need arrows int → Either and bool → Either, factoring through some arrow m:Either → int…

Given the two injections above, i and j, we have "too many" injections in that we're surjective but not injective. So it's possible to have overlap, as the domain is larger than the codomain (i.e., bool (2) + integers (2^64) vs. integers (2^64)). So in this instance, one is not really any better than the other.
Arnold Neumaier's answer is correct but doesn't seem to have included enough detail to convince people, so here is an answer with a more in-depth explanation.
We have two fundamental theories of physics: general relativity and the standard model of particle physics. The standard model has CPT symmetry, and general relativity has local time-reversal invariance. Although neither of these is technically the same as global time-reversal invariance, for the purposes of the following discussion it's sufficiently accurate to say that the laws of physics are time-reversal invariant. Sometimes you will hear people state this by saying that the "microscopic laws" are time-reversal invariant, the intention presumably being to exclude the second law of thermodynamics, which explicitly distinguishes a forward direction in time. But this is an anachronism, since the second law is no longer considered fundamental but derived.
The question that then arises is, how in the world can you derive a time-asymmetric theorem from time-symmetric assumptions?
Consider the simulation shown below. On the right we have a box that has three areas marked with three colors, and $N=100$ particles that are free to move around in the whole box. (The vertical lines at the boundaries are just visual -- the particles cross them freely.) The simulation was done using this applet. The particles are released at random positions, with random velocity vectors, and their motion is simulated using Newton's laws, which are time-reversal symmetric. The graph on the left shows the number of particles in each area as a function of time.
Since the particles are initially placed randomly, roughly one third of them are initially in each region. At any randomly chosen time, the number of particles $n$ in, say, the red region has a mean of $\bar{n}=N/3$ and a standard deviation of about $\sqrt{\bar{n}}\approx 6$. Once in a while we get unusually large fluctuations, such as the one marked with a green arrow at $t=19$.
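For anyone who wants to reproduce the scale of these fluctuations without the applet, here is a rough Monte Carlo sketch (the parameters, seed, and method names are my own choices): place $N = 100$ particles uniformly and record how many land in one third of the box, over many independent trials.

```java
import java.util.Random;

// Rough Monte Carlo sketch (not the applet from the text): drop N particles
// uniformly into the box and count how many land in one third of it, repeated
// over many trials, to estimate the mean and standard deviation of n.
public class FluctuationSketch {
    static double[] simulate(int n, int trials, long seed) {
        Random rng = new Random(seed);
        double sum = 0, sumSq = 0;
        for (int t = 0; t < trials; t++) {
            int count = 0;
            for (int i = 0; i < n; i++)
                if (rng.nextDouble() < 1.0 / 3.0) count++;  // lands in the red third
            sum += count;
            sumSq += (double) count * count;
        }
        double mean = sum / trials;
        double sd = Math.sqrt(sumSq / trials - mean * mean);
        return new double[] { mean, sd };
    }

    public static void main(String[] args) {
        double[] ms = simulate(100, 100_000, 42L);
        // mean comes out near N/3 ~ 33.3; sd is a handful of particles,
        // the same order as the text's sqrt(nbar) ~ 6 (the exact binomial
        // value is sqrt(N * (1/3) * (2/3)) ~ 4.7)
        System.out.printf("mean ~ %.1f, sd ~ %.1f%n", ms[0], ms[1]);
    }
}
```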
We can now state a derived law L:
(L) If we observe a statistically unlikely value of $n$ at some time $t_0$, there is a high probability that the values of $n$ both before and after $t_0$ (for $t\lesssim t_0-3$ and $t\gtrsim t_0+3$) are closer to the mean.
As $N$ gets larger and larger, L becomes more and more secure; the probability of seeing it violated becomes smaller and smaller. When $N$ gets as big as Avogadro's number, the probability of a violation becomes zero for all practical purposes.
This derived law is still completely time-reversal symmetric, so it doesn't appear to be quite the same as the second law of thermodynamics. But now consider the case where somebody artificially prepares the particles in the box so that they are all initially in the center. (If you run the applet at the link above, this is actually what it does.) The result is shown below.
An observer who doesn't know about the initial preparation of the system, and who only gets to see its behavior during the interval $0\lt t \lesssim 2$, will empirically arrive at a time-asymmetric "law" describing the behavior of the system: the system always evolves from high values of $n_{\text{black}}$ to lower ones. Not knowing the initial preparation of the system, but wishing to believe in a naturalistic theory of the operation of this little "universe," the observer might speculate that the initial, high value of $n$ was an extreme statistical fluctuation. Perhaps at $t\lesssim -2$ the system was in equilibrium. The observer can then explain everything in terms of the time-symmetric law L.
The same analysis applies to the conditions we observe in our universe, with some modifications:
The discussion in terms of $n$ can be replaced with a discussion in terms of the number $\Omega$ of accessible states for a given set of coarse-grained observables -- or we can talk about $\ln\Omega$ or $k\ln\Omega$, i.e., entropy, which is additive.
In the original statement of L we had a constant time 3, which was an estimate of the equilibration time for the toy model. For the real universe, this has to be replaced by some estimate of the equilibration time of the whole universe, which might be very long.
And finally, we have the role played by the mischievous person who secretly initialized the system with all the particles in the center. This naughty trickster was effectively setting a boundary condition. In our universe, this boundary condition consists of the fact that, for reasons unknown to us, our Big Bang had a surprisingly low entropy. If there were some naturalistic principle that the Big Bang should be a typical state rather than a very special one, then our universe should have started out already in a state of maximum entropy.
In the world around us, we see various arrows of time. There is a psychological arrow (we can remember the past, but not the future), a thermodynamic one (candles burn but never unburn), a cosmological one (the Big Bang was in our past, not our future), and various others such as a radiative one (we often observe outgoing spherical radiation patterns but never their time-reversed versions). All of these arrows of time reduce to the cosmological one, which arises from a boundary condition.
The OP asked:
Is a world with constant/decreasing entropy theoretically impossible?
No. In fact, the world that is overwhelmingly the most probable and natural is one in which the entropy is, always has been, and always will be the maximum possible -- but in such a universe there would not be hairless primates tapping on computer keyboards. It is also certainly possible to have a universe in which entropy is always higher in the past and lower in the future. In fact, our own universe is an example, if we simply interchange the arbitrary labels "past" and "future."
A longer discussion of these ideas, with lots of historical context, is given in Callender 2011. Historically, there has been a lot of debate and confusion on these issues, and unfortunately you will hear a lot of this confusion echoing down the halls a hundred years later, perhaps due to the tendency of textbooks to hew to tradition. For example, Ritz and Einstein had a debate in 1909 on the radiative arrow (as discussed in Callender and references therein). Ritz's position, that the radiative arrow is fundamental, is no longer viable.
References
Callender, Craig, "Thermodynamic Asymmetry in Time", The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/fall2011/entries/time-thermo |
Linear (tangential) velocity in a circular path increases as the radius increases and decreases as the radius decreases, so the angular velocity remains the same no matter what the change in radius is ($\omega = v/r$). However, when we talk about the conservation of angular momentum, we say that since the momentum is conserved, as we increase the radius the linear velocity must decrease to keep it constant (because $L = mvr$). That makes $v$ inversely related to the radius, which in turn makes the angular velocity inversely proportional to the square of the radius (when an ice skater brings his arms inwards, he rotates with greater angular velocity). But from what's stated above, it is clear that the angular velocity must remain the same regardless of what the radius is, isn't it?
It seems to me that you mix up some things. What you are relating to is the angular momentum of a single point particle,
$$ \vec{l} = mrv\vec{e}_z = mr^2\omega \vec{e}_z, $$ which I have written here directly depending on the angular velocity $\omega$. So you see that a point particle has a fixed angular velocity $\omega$ as long as $l$, $r$ and $m$ are fixed. To change it, a torque has to be applied. Actually, it is defined as the change of angular momentum ($D = \dot{l}$).
When you are referring to the ice skater, it is important that this deals with the rotation of a rigid body, i.e. a system of individual point masses. The total angular momentum of this system is obtained by integrating (or for a discrete set of particles, summed) over all infinitesimal contributions of the point particles. This obviously depends on the geometry of the object. The information about the object is contained in the so-called tensor of inertia, $$ \vec{L} = I \cdot \vec{\omega} $$ To put a long story short, the ice-skater increases $\omega$ when contracting herself because on the system as a whole, there is no torque. Let's assume the ice-skater is a cylinder, for which one can find $I = \frac{m}{2}r^2$, so that $L = \frac{m}{2} r^2 \omega$. When the ice-skater is reducing her radius, we find a $\omega$-gain of $$ \frac{\omega_2}{\omega_1} = \frac{r_1^2}{r_2^2}. $$ |
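The final ratio is easy to sanity-check numerically. Here is a tiny sketch (the numeric values are made up for illustration) of the $\omega$-gain under conservation of $L = \frac{m}{2}r^2\omega$.

```java
// Numeric sketch of the ice-skater formula: with L = (m/2) r^2 w conserved,
// the new angular velocity is w2 = w1 * r1^2 / r2^2 (the mass m cancels).
public class SkaterSketch {
    static double omegaAfter(double r1, double w1, double r2) {
        return w1 * (r1 * r1) / (r2 * r2);
    }

    public static void main(String[] args) {
        double w1 = 2.0;                       // rad/s with arms out, r1 = 1 m
        double w2 = omegaAfter(1.0, w1, 0.5);  // arms in, r2 = 0.5 m
        System.out.println(w2);                // 8.0 -- halving r quadruples w
    }
}
```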
In honesty, I wasn't expecting to enjoy this book. As a general rule, press releases that come my way -- unless they're very tightly targeted -- can expect to find their way into my 'spam' folder. Matthew Watkins' Secrets of Creation was a little lucky to escape that fate. Maybe… Read More →
"Things bounce six times higher on the moon!" -- James Corden, narrating Little Charley Bear. I have some serious problems with the quality of the physics shown on CBeebies. I'm happy to accept anthropomorphised animals, the idea that all of the presenters live in the CBeebies house, and I'll even entertain… Read More →
One of the many people I look forward to seeing at big MathsJam is @pozorvlak, Miles in real life, whose name means 'beware of the trams' in Czech. When he's not coding, pursuing the horrors of category theory, or juggling, he enjoys making his way up icy mountains. That's not… Read More →
In this month's action-packed podcast: Number of the podcast: 381,654,729, the only zero-less pandigital number such that the number formed by the first $n$ digits is divisible by $n$. This is Episode 24, so next month is our second anniversary. If you have any audio clips for us to include… Read More →
The great @mathsjem recently took advantage of snowy weather to post this question: I'll be using my favourite snowman-based maths question in my lesson on percentages tomorrow. #mathschat pic.twitter.com/EayP6KmaR3 — Jo Morgan (@mathsjem) January 29, 2015 On the one hand, it's quite nice: using the weather and a popular culture… Read More →
@onthisdayinmath asks: Is it just me or has "factorise" (with s or z) suddenly become a much more common term for "to factor" recently? Before I went to America, I had never seen factor used as a verb, at least in a mathematical context: you don't ration a denominator, so why… Read More →
A student asks: How do you integrate $\int_{0}^{\frac{\pi}{4}} \sec^4(x) \,\mathrm{d}x$? Yuk. Let me say that again for good measure: yuk. That's going to need a trigonometric identity and, I think, a substitution. But that's ok: we can do that. Let's roll up our sleeves. Step 1: get rid of… Read More →
At a recent MathsJam, @brownmaths -- who really should have known better -- showed up with a calculator. Dear oh dear. His excuse was that it was in his teaching satchel, and he sometimes needed it to work out trigonometric functions (the Mathematical Ninja rolled his eyes, but I said… Read More →
I am trying to solve the following differential equation: $$ \frac{\mathrm{d} f}{\mathrm{d} x} = \frac{x^2-2 a}{\sqrt{4k^2-(x^2-2 a)^2}}, $$ where $a$ and $k$ are constants ($k$ is known and $a$ is unknown). Integration of both sides with respect to $x$ gives
\begin{align*} f = \sqrt{\frac{2}{k-a}} \Bigg\{(a&-k)E\left[\arcsin \left(\frac{x}{\sqrt{2(a+k)}}\right), \frac{a+k}{a-k} \right] \\ &+k~F \left[\arcsin \left(\frac{x}{\sqrt{2(a+k)}}\right), \frac{a+k}{a-k} \right] \Bigg\}+b, \end{align*}
where $F$ is the incomplete elliptic integral of the first kind, $E$ is the incomplete elliptic integral of the second kind and $b$ is another constant.
The boundary conditions are
\begin{align*} f(0) &= \infty,\\ f_x(x_0) &= c,\\ f(x_0) &= y_0.\\ \end{align*}
Here $c$ and $y_0$ are given while $x_0$ is unknown. The constants $a$, $b$ and $x_0$ need to be determined as a part of the solution.
The problem arises when I attempt to implement the first boundary condition, i.e. $f(0) = \infty$. An elliptic integral $E(\psi,z)$ becomes large if the first argument $\psi$ is large. I think the only way to implement the above boundary condition is to make the first arguments of the elliptic integrals (in the equation for $f$) large. However, I don't know how to do it because the argument is given by an $\arcsin$ function. $\arcsin$ is multivalued (since sine is periodic) while the elliptic integrals are not. If $$\arcsin \left(\frac{x}{\sqrt{2(a+k)}}\right)= \theta +2~n~\pi,$$ which $n$ should I choose?
Can you recommend an alternative way of solving this equation?
Thanks a lot in advance. |
In this section, we consider our final family of discrete probability distributions. We begin with the definition.
Definition \(\PageIndex{1}\)
A random variable \(X\) has a Poisson distribution, with parameter \(\lambda>0\), if its probability mass function is given by
$$p(x) = P(X=x) = \frac{e^{-\lambda}\lambda^x}{x!}, \quad\text{for}\ x=0,1,2,\ldots.\label{Poissonpmf}$$
We write \(X\sim\text{Poisson}(\lambda)\).
The main application of the Poisson distribution is to count the number of times some event occurs over a fixed interval of time or space. More specifically, if the random variable \(X\) denotes the number of times the event occurs during an interval of length \(T\), and \(r\) denotes the average rate at which the event occurs per unit interval, then \(X\) has a Poisson distribution with parameter \(\lambda = rT\). Consider the following examples:
The number of customers arriving at McDonald's between 8 a.m. and 9 a.m. The number of calls made to 911 in South Bend on a Saturday. The number of accidents at a particular intersection during the month of June.
All of the examples above count the number of times something occurs over an interval of time. The next example gives a case where the interval is in space.
Example \(\PageIndex{1}\)
Suppose typos occur at an average rate of \(r = 0.01\) per page in the Friday edition of the New York Times, which is 45 pages long. Let \(X\) denote the number of typos on the front page. Then \(X\) has a Poisson distribution with parameter $$\lambda = 0.01\times1 = 0.01,\notag$$ since we are considering an interval of length one page (\(T=1\)). Thus, the probability that there is at least one typo on the front page is given by $$P(X\geq1) = P(\{X=0\}^c) = 1 - P(X=0) = 1 - \frac{e^{-0.01}(0.01)^0}{0!} = 1 - e^{-0.01} \approx 0.00995.\notag$$ Now, if we let random variable \(Y\) denote the number of typos in the entire paper, then \(Y\) has a Poisson distribution with parameter $$\lambda = 0.01\times45 = 0.45,\notag$$ since we are considering an interval of \(T=45\) pages. The probability that there are less than three typos in the entire paper is $$P(Y<3) = P(Y=0, 1,\ \text{or}\ 2) = \frac{e^{-0.45}(0.45)^0}{0!} + \frac{e^{-0.45}(0.45)^1}{1!} + \frac{e^{-0.45}(0.45)^2}{2!} \approx 0.98912.\notag$$
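The two probabilities in this example are easy to check with a few lines of code. Here is a sketch of the pmf from the definition above (the class and method names are arbitrary).

```java
// Sketch of the typo example: the Poisson pmf p(x) = e^{-lambda} lambda^x / x!,
// checked against the two probabilities computed in the text.
public class PoissonSketch {
    static double pmf(double lambda, int x) {
        double factorial = 1.0;
        for (int k = 2; k <= x; k++) factorial *= k;  // x!
        return Math.exp(-lambda) * Math.pow(lambda, x) / factorial;
    }

    public static void main(String[] args) {
        // Front page: lambda = 0.01, so P(X >= 1) = 1 - p(0)
        System.out.printf("%.5f%n", 1 - pmf(0.01, 0));  // 0.00995
        // Whole paper: lambda = 0.45, so P(Y < 3) = p(0) + p(1) + p(2)
        System.out.printf("%.5f%n",
                pmf(0.45, 0) + pmf(0.45, 1) + pmf(0.45, 2));  // 0.98912
    }
}
```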
The Poisson distribution is similar to all previously considered families of discrete probability distributions in that it counts the number of times something happens. However, the Poisson distribution is different in that there is not an act that is being repeatedly performed. In other words, there are no set trials, but rather a set window of time or space to observe. |
Probability Seminar
Revision as of 13:25, 22 April 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title: Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title: When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title: Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title: On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title: Functional Limit Laws for Recurrent Excited Random Walks
Abstract: Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title: Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title: Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title: Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title: Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. |
Face recognition by principal component analysis
In this article, June from Spiria Toronto explains the basics of facial recognition using the principal component analysis method.
1. Introduction
Machine learning has been a hot topic for a while. From discovering new music to finding clothes we might want to purchase, it is used in many aspects of our daily lives for many different purposes. However, many people are afraid to approach it due to its mathematical complexity. In this blog, I will briefly talk about one way to build facial recognition programs using a simple dimensionality-reduction method called principal component analysis.
2. Before We Start
What is principal component analysis? Principal component analysis is a statistical procedure that uses an orthogonal transformation to convert a set of correlated variables into a set of uncorrelated variables called principal components.
People without a mathematical background can get discouraged from reading formal definitions like this, but it is not as complicated as you think it is!
2.1. Orthogonal Transformation
Mathematical definition:
To understand orthogonal transformation, we first need to understand the inner product, also known as the dot product.
\(x_{1} \cdot x_{2} = |x_{1}||x_{2}|\cos{\theta}\)
This is the usual definition of the dot product that students learn during their academic career. However, we often forget what it stands for. Let’s go through each term and understand what it means. It is clear that \(|x_{1}||x_{2}|\) represents the magnitudes of the two vectors, but what does \(\cos{\theta}\) represent?
It is much easier to understand the term \(\cos{\theta}\) when we look at different situations. There are five cases to consider:
Case 1: when \(x_{1}\) and \(x_{2}\) lie in the same direction, \(\cos{\theta}\) is 1.
Case 2: when \(x_{1}\) and \(x_{2}\) lie generally in the same direction (angled between 0° and 90°), \(\cos{\theta}\) is between 0 and 1.
Case 3: when \(x_{1}\) and \(x_{2}\) are perpendicular to one another, \(\cos{\theta}\) is 0.
Case 4: when \(x_{1}\) and \(x_{2}\) lie generally in the opposite direction (angled between 90° and 180°), \(\cos{\theta}\) is between 0 and -1.
Case 5: when \(x_{1}\) and \(x_{2}\) lie in completely opposite directions, \(\cos{\theta}\) is -1.
Given these cases, \(\cos{\theta}\) represents the relationship between a pair of vectors, i.e. whether one vector affects the other negatively or positively. Thus, the dot product of two vectors is a scalar quantity that captures the relationship between them.
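As a quick illustration of this idea (not part of the original article; the function name is my own), the sign of the dot product alone tells us which of the cases above we are in:

```python
import numpy as np

def direction_relation(x1, x2):
    """Classify the angle between two vectors via the sign of cos(theta)."""
    cos_theta = np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2))
    if cos_theta > 0:
        return "same general direction"
    if cos_theta < 0:
        return "opposite general direction"
    return "perpendicular"

print(direction_relation(np.array([1.0, 0.0]), np.array([1.0, 1.0])))   # same general direction
print(direction_relation(np.array([1.0, 0.0]), np.array([0.0, 2.0])))   # perpendicular
print(direction_relation(np.array([1.0, 0.0]), np.array([-3.0, 0.0])))  # opposite general direction
```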
Now, we can start talking about orthogonal transformation. Multiplying a vector by a matrix returns another vector.
Take \(v = Au\), where \(v\), and \(u\) are vectors and \(A\) is a matrix.
Let’s try substituting this property into dot products.
\(x_{1} \cdot x_{2} = (Au_{1}) \cdot (Au_{2})\)
Dot products of vectors can be written as a matrix product:
\((Au_{1}) \cdot (Au_{2}) = (Au_{1})^{T}(Au_{2})\)
Using the properties of transposition,
\((Au_{1})^{T}(Au_{2}) = u_{1}^{T}A^{T}Au_{2}\)
\( \therefore x_{1} \cdot x_{2} = u_{1}^{T}A^{T}Au_{2} \)
If \(A\) is an orthogonal matrix, then the equation becomes:
\(x_{1} \cdot x_{2} = u_{1}^{T}u_{2} = u_{1} \cdot u_{2}\)
Surprise! The dot product’s properties carry over to the new set of vectors. This is called orthogonal transformation, and it means that the transformed vectors preserve all the geometry of the original vectors.
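A small numerical check of this fact (my own sketch, using a rotation matrix as the orthogonal matrix \(A\)):

```python
import numpy as np

# An orthogonal matrix: rotation by 30 degrees.
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u1 = np.array([2.0, 1.0])
u2 = np.array([-1.0, 3.0])

# Transformed vectors.
x1, x2 = A @ u1, A @ u2

# The dot product (hence all lengths and angles) is preserved.
print(np.dot(u1, u2))  # 1.0
print(np.dot(x1, x2))  # 1.0, up to floating-point rounding
```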
2.2 Principal Component Analysis
Now, we can finally get into principal component analysis (also known as PCA). The idea is to re-plot a set of data onto new, uncorrelated axes, known as principal components. These axes are created using an orthogonal transformation.
3. Step-by-Step Guide
3.1 Data Preprocessing
First, we need to prepare facial images with the same dimensions \([N \times M]\), which will be unraveled into a set of vectors. Images with the same background will have more accuracy since PCA looks for uncorrelated variables. These vectors will be stored as either a column or a row in a matrix.
\( \text{Image} = [\ N \times M \ ]\)
\(\text{Vectorized Image} = [\ ( N \cdot M ) \times 1\ ]\)
\(\text{Image Matrix}= [\ ( N \cdot M) \times NI \ ] = [\ VI_{1}, VI_{2}, VI_{3}, ...\ ]\)
where NI is the number of images and VI is a vectorized image.
Let’s look at a code example!
Conventionally, image vectors are saved as columns in a matrix but in my example, I saved them as a row (oops).
These are the dependent Python libraries:
Python 3.7 OpenCV Numpy TK PIL
import os
import numpy as np
import cv2
import tkinter as tk
from tkinter import filedialog
from PIL import Image

#%%
class ImageClass:
    # First it is going to grab the folder full of images.
    def __init__(self):
        # Grabbing the folder name.
        root = tk.Tk()
        root.withdraw()
        self.folder_name = filedialog.askdirectory()

    # Saving vectors in each row instead of column.
    def into_matrix(self, size, rows, array, folder_name):
        # Creating an empty list to save vectorized images into.
        i_vectors = []
        # Vectorizing all image files.
        for file_name in array:
            # Reading images and converting them into grayscale values.
            img_plt = Image.open(folder_name + "/" + str(file_name)).convert('L')
            img = np.array(img_plt, 'uint8')
            if img is not None:
                if img.ndim == 3:  # still a color image: convert to grayscale
                    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                i_vectors.append(np.ravel(img))
        return i_vectors

    def vectorize(self):
        # Finding the image size.
        test_file = os.listdir(self.folder_name)[0]
        test_image = Image.open(self.folder_name + "/" + test_file).convert('L')
        test_image = np.array(test_image, 'uint8')
        # Images in the folder.
        images = [file_name for file_name in os.listdir(self.folder_name)]
        # Channels will be RGB values.
        height, width = test_image.shape
        size = height * width
        # Creating a matrix of vectorized images.
        i_vectors = self.into_matrix(size, len(images), images, self.folder_name)
        return np.matrix(i_vectors), height, width
3.2 Performing PCA
Since we want to find the features with the most variance, we need to subtract the mean value from the image matrix. It is like centering all the points at the origin, so that the first principal component captures genuine variation rather than just the mean value of the image matrix.
\(\mu = \frac{1}{NI}\sum_{n=1}^{NI} \text{Image Matrix}_{n} \)
\(I_{i} = \text{Image Matrix}_{i} - \mu\)
Now, we can finally figure out the variance between the image vectors by calculating the covariance matrix.
Depending on how the image matrix is set up (in columns or in rows), you can reduce your calculation time by performing a matrix multiplication with respect to the number of images instead of the number of pixels within the image.
If \(I\) has dimension \([ NM \times NI ]\), then instead of the full \([ NM \times NM ]\) covariance \(\frac{1}{NI} I I^{T}\) we can work with the much smaller \([ NI \times NI ]\) matrix
\(C = \frac{1}{NI}\, I^{T} I\)
This reduces the calculation time without affecting the result, since we only pick a certain number of eigenvectors to create eigenfaces. Eigenfaces are then constructed by multiplying the image matrix with these eigenvectors.
import numpy as np
from numpy import linalg as LA

#%%
# Finding the covariance matrix.
# B.B^T is used instead of B^T.B since the latter would take too long to compute,
# and the leading eigenvectors of B^T.B can be recovered from those of B.B^T.
S = (1/(number_of_images-1))*np.dot(B, np.transpose(B))

# Finding the eigenvectors. I used the NumPy library for this.
w, v = LA.eig(S)

# Calculating the eigenfaces.
eigen_faces_float = np.dot(np.transpose(B), v)

# Clipping so the result can be translated into the uint8 data type.
eigen_faces = eigen_faces_float.clip(min=0)
eigen_faces = np.uint8(eigen_faces)
3.3 Reconstructing Faces Using Eigenfaces
Let’s use these eigenfaces for something fun! For example, I want to recreate my face below…
To do this, we need to calculate weights for each eigenvector to create the face we want. Weights can be calculated in this way:
for i in range(n_pca):
    weight[i] = np.dot(np.transpose(eigen_faces_norm[i]), testi_vectors_m[:,i])
    magnitude[i] = weight[i]**2

# The threshold value is going to be the magnitude.
TV = np.sqrt(np.sum(magnitude))
“n_pca” is the number of principal components. I selected three for this example, but you can use more than three eigenfaces. The more eigenfaces, the better the recreated image, but the less accurate the testing. This is because the analysis will account for more features than necessary to reconstruct the face.
The sum of the weights will be used as a threshold value to distinguish different faces. These weights will be used to reconstruct the faces.
for i in range(3):
    reconstruction[i] = weight[i]*np.transpose(eigen_faces_norm[i])
    reconstruction[i] = reconstruction[i].clip(min=0)
    reconstruction[i] = np.uint8(reconstruction[i])
We reconstructed the face! But why is it so different from the face that we were trying to create? Oh! We forgot to add the mean face to it. We need to add the mean face since we subtracted it from the data sets to find the different features.
Above is the mean face.
Above is the reconstructed face; it is pretty similar to the face that we were looking for.
3.4 Testing
We’re almost done. Now, we must run a set of probe images through the model to check whether we can detect different faces.
# Selecting the folder full of probe images.
probe_images = vc.ImageClass()
probe_images_m, prh, prw = probe_images.vectorize()
probe_images_m = probe_images_m - mean
probe_weights = np.zeros((n_pca,1))

#%%
for i in range(len(probe_images_m)):
    for c in range(n_pca):
        probe_weights[c] = np.dot(np.transpose(eigen_faces_norm[c]),
                                  np.transpose(probe_images_m[i]))
    weight_c = np.zeros((n_pca,1))
    for z in range(n_pca):
        weight_c[z] = (weight[z] - probe_weights[z])**2
    weight_c_min = np.sqrt(np.sum(weight_c))
    if weight_c_min <= np.sqrt(np.sum(magnitude)):
        print("yes")
    else:
        print("no")
Probe weights are calculated in the same way that the image weights were calculated. We just need to compare the magnitude of the training sets and the magnitude for each image in the test sets. |
Greek numerals are a system of representing numbers using the letters of the Greek alphabet. These alphabetic numerals are also known by names Ionic or Ionian numerals, Milesian numerals, and Alexandrian numerals. In modern Greece, they are still used for ordinal numbers and in situations similar to those in which Roman numerals are still used elsewhere in the West. For ordinary cardinal numbers, however, Greece uses Arabic numerals.
Contents
1 History
2 Description
3 Table
4 Higher numbers
5 Zero
6 See also
7 References
8 External links
History
The Minoan and Mycenaean civilizations' Linear A and Linear B alphabets used a different system, called Aegean numerals, which included specialized symbols for numbers: 𐄇 = 1, 𐄐 = 10, 𐄙 = 100, 𐄢 = 1000, and 𐄫 = 10000.
[1]
Attic numerals, which were later adopted as the basis for Roman numerals, were the first alphabetic set. They were acrophonic, derived (after the initial one) from the first letters of the names of the numbers represented. They ran Ι = 1, Π = 5, Δ = 10, Η = 100, Χ = 1000, and Μ = 10000. The values 50, 500, 5000, and 50000 were represented by the letter Π for 5 with minuscule forms of the powers of ten written in the top right corner.
[1] The same system was used outside of Attica, but the symbols varied with the local alphabets: in Boeotia, was 1000. [2]
The present system probably developed around Miletus in Ionia. 19th-century classicists placed its development in the 3rd century BC, the occasion of its first widespread use.
[3] More thorough modern archaeology has caused the date to be pushed back at least to the 5th century BC, [4] a little before Athens abandoned its pre-Euclidean alphabet in favor of Miletus's in 402 BC, and it may predate that by a century or two. [5] The present system uses the 24 letters adopted under Eucleides as well as three Phoenician and Ionic ones that were not carried over: digamma, koppa, and sampi. The position of those characters within the numbering system implies that the first two were still in use (or at least remembered as letters) while the third was not. The exact dating, particularly for sampi, is problematic since its uncommon value means the first attested representative near Miletus does not appear until the 2nd century BC [6] and its use is unattested in Athens until the 2nd century AD. [7] (In general, Athens resisted the use of the new numerals for the longest of any of the Greek states but had fully adopted them by c. AD 50. [2])
Description
Greek numerals are decimal, based on powers of 10. The units from 1 to 9 are assigned to the first nine letters of the old Ionic alphabet from alpha to theta. Instead of reusing these numbers to form multiples of the higher powers of ten, however, each multiple of ten from 10 to 90 was assigned its own separate letter from the next nine letters of the Ionic alphabet from iota to koppa. Each multiple of one hundred from 100 to 900 was then assigned its own separate letter as well, from rho to sampi.
[8] (The fact that this was not the traditional location of sampi or its possible predecessor san has led classicists to conclude that it was no longer in use even locally by the time the system was created.)
This alphabetic system operates on the additive principle in which the numeric values of the letters are added together to obtain the total. For example, 241 was represented as σμα (200 + 40 + 1). (It was not always the case that the numbers ran from highest to lowest: a 4th-century BC inscription at Athens placed the units to the left of the tens. This practice continued in Asia Minor well into the Roman period.
[2]) In ancient and medieval manuscripts, these numerals were eventually distinguished from letters using overbars: α, β, γ, etc. In medieval manuscripts of the Book of Revelation, the number of the Beast 666 is written as χξϛ (600 + 60 + 6). (Numbers larger than 1,000 reused the same letters but included various marks to note the change.)
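The additive principle is easy to put into code. The sketch below (my own illustration, not from the article) uses the modern single-character forms, with stigma for 6, koppa for 90, and sampi for 900, and covers the range 1–999 with a trailing keraia:

```python
# Values of the Greek alphabetic numerals, in modern single-character forms
# (stigma ϛ for 6, koppa ϟ for 90, sampi ϡ for 900).
UNITS    = dict(zip("αβγδεϛζηθ", range(1, 10)))
TENS     = dict(zip("ικλμνξοπϟ", range(10, 100, 10)))
HUNDREDS = dict(zip("ρστυφχψωϡ", range(100, 1000, 100)))

def to_greek(n):
    """Represent 1 <= n <= 999 in the additive alphabetic system."""
    assert 1 <= n <= 999
    out = ""
    for table in (HUNDREDS, TENS, UNITS):
        for letter, value in sorted(table.items(), key=lambda kv: -kv[1]):
            if n >= value:          # take at most one letter per decimal place
                out += letter
                n -= value
                break
    return out + "ʹ"                # trailing keraia (U+0374)

print(to_greek(241))  # σμαʹ  (200 + 40 + 1)
print(to_greek(666))  # χξϛʹ  (600 + 60 + 6)
```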
Although the Greek alphabet began with only majuscule forms, surviving papyrus manuscripts from Egypt show that uncial and cursive minuscule forms began early. These new letter forms sometimes replaced the former ones, especially in the case of the obscure numerals. The old Q-shaped koppa (Ϙ) began to be broken up ( and ) and simplified ( and ). The numeral for 6 changed several times. During antiquity, the original letter form of digamma () came to be avoided in favor of a special numerical one (). By the Byzantine era, the letter was known as episemon and written as or . This eventually merged with the sigma-tau ligature stigma ( or ).
In modern Greek, a number of other changes have been made. Instead of extending an overbar over an entire number, the
keraia (κεραία, lit. "hornlike projection") is marked to its upper right, a development of the short marks formerly used for single numbers and fractions. The modern keraia is a symbol (ʹ) similar to the acute accent (´) but has its own Unicode character, U+0374. Exclusive use of uppercase letters is also now standard. Alexander the Great's father Philip II of Macedon is thus known as Φίλιππος Βʹ in modern Greek. A lower left keraia (Unicode: U+0375, "Greek Lower Numeral Sign") is now standard for distinguishing thousands: 2015 is represented as ͵ΒΙΕʹ (2000 + 10 + 5).
The declining use of ligatures in the 20th century also means that stigma is frequently written as the separate letters ΣΤʹ, although a single
keraia is used for the group. [9]
The art of assigning Greek letters also being thought of as numerals and therefore giving words/names/phrases a numeric sum that has meaning through being connected to words/names/phrases of similar sum is called isopsephy (gematria).
Table
In modern form, the numerals and their values are:

Αʹ = 1          Ιʹ = 10    Ρʹ = 100    ͵Α = 1000
Βʹ = 2          Κʹ = 20    Σʹ = 200    ͵Β = 2000
Γʹ = 3          Λʹ = 30    Τʹ = 300    ͵Γ = 3000
Δʹ = 4          Μʹ = 40    Υʹ = 400    ͵Δ = 4000
Εʹ = 5          Νʹ = 50    Φʹ = 500    ͵Ε = 5000
Ϛʹ (ΣΤʹ) = 6    Ξʹ = 60    Χʹ = 600    ͵Ϛ = 6000
Ζʹ = 7          Οʹ = 70    Ψʹ = 700    ͵Ζ = 7000
Ηʹ = 8          Πʹ = 80    Ωʹ = 800    ͵Η = 8000
Θʹ = 9          Ϟʹ = 90    Ϡʹ = 900    ͵Θ = 9000

(Ancient and Byzantine manuscripts used lowercase letters, overbars, and older letter forms for 6, 90, and 900.) Alternatively, sub-sections of manuscripts are sometimes numbered 1 to 9 by the lowercase characters αʹ, βʹ, γʹ, δʹ, εʹ, ϛʹ, ζʹ, ηʹ, θʹ. In ancient Greek, myriad notation is used for numbers larger than 9,999; for example, \(\overset{\rho o \epsilon}{\mathrm{M}}\)͵εωοεʹ for 1,755,875 (175 myriads plus 5,875).
Higher numbers
In his text
The Sand Reckoner, the natural philosopher Archimedes gives an upper bound of the number of grains of sand required to fill the entire universe, using a contemporary estimation of its size. This would defy the then-held notion that it is impossible to name a number greater than that of the sand on a beach or on the entire world. In order to do that, he had to devise a new numeral scheme with much greater range.
Zero
Hellenistic astronomers extended alphabetic Greek numerals into a sexagesimal positional numbering system by limiting each position to a maximum value of 50 + 9 and including a special symbol for zero, which was also used alone like our modern zero, more than as a simple placeholder. However, the positions were usually limited to the fractional part of a number (called minutes, seconds, thirds, fourths, etc.) — they were not used for the integral part of a number. This system was probably adapted from Babylonian numerals by Hipparchus c. 140 BC. It was then used by Ptolemy (c. 140), Theon (c. 380) and Theon's daughter Hypatia (murdered 415).
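The fractional positions (minutes, seconds, thirds, …) are simply base-60 digits of the fractional part. A small sketch of the conversion (my own illustration; exact rational arithmetic is used to avoid floating-point rounding):

```python
from fractions import Fraction

def to_sexagesimal(x, places=3):
    """Split the fractional part of x into base-60 digits
    (minutes, seconds, thirds, ...), as the Hellenistic astronomers did."""
    x = Fraction(x)
    integer = int(x)
    frac = x - integer
    digits = []
    for _ in range(places):
        frac *= 60
        digits.append(int(frac))
        frac -= int(frac)
    return integer, digits

# 80 + 41/60 + 3/60^2, a chord length appearing in Ptolemy's table:
print(to_sexagesimal(Fraction(80) + Fraction(41, 60) + Fraction(3, 3600)))
# → (80, [41, 3, 0])
```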
In Ptolemy's table of chords, the first fairly extensive trigonometric table, there were 360 rows, portions of which looked as follows:
\begin{array}{ccc} \pi\varepsilon\varrho\iota\varphi\varepsilon\varrho\varepsilon\iota\tilde\omega\nu & \varepsilon\overset{\text{'}}\nu\vartheta\varepsilon\iota\tilde\omega\nu & \overset{\text{`}}\varepsilon\xi\eta\kappa\omicron\sigma\tau\tilde\omega\nu \\ \begin{array}{|l|} \hline \pi\delta\angle' \\ \pi\varepsilon \\ \pi\varepsilon\angle' \\ \hline \pi\stigma \\ \pi\stigma\angle' \\ \pi\zeta \\ \hline \end{array} & \begin{array}{|r|r|r|} \hline \pi & \mu\alpha & \gamma \\ \pi\alpha & \delta & \iota\varepsilon \\ \pi\alpha & \kappa\zeta & \kappa\beta \\ \hline \pi\alpha & \nu & \kappa\delta \\ \pi\beta & \iota\gamma & \iota\vartheta \\ \pi\beta & \lambda\stigma & \vartheta \\ \hline \end{array} & \begin{array}{|r|r|r|r|} \hline \circ & \circ & \mu\stigma & \kappa\varepsilon \\ \circ & \circ & \mu\stigma & \iota\delta \\ \circ & \circ & \mu\stigma & \gamma \\ \hline \circ & \circ & \mu\varepsilon & \nu\beta \\ \circ & \circ & \mu\varepsilon & \mu \\ \circ & \circ & \mu\varepsilon & \kappa\vartheta \\ \hline \end{array} \end{array}
Each number in the first column, labeled περιφερειῶν, is the number of degrees of arc on a circle. Each number in the second column, labeled εὐθειῶν, is the length of the corresponding chord of the circle, when the diameter is 120. Thus πδ represents an 84° arc, and the ∠' after it means one-half, so that πδ∠' means 84.5°. In the next column we see π μα γ, meaning 80 + 41/60 + 3/60². That is the length of the chord corresponding to an arc of 84.5° when the diameter of the circle is 120. The next column, labeled ἑξηκοστῶν, for "sixtieths", is the number to be added to the chord length for each 1′ (one-minute) increase in the arc, over the span to the next tabulated entry ½° away. Thus that last column was used for linear interpolation.
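The interpolation can be reproduced numerically. The sketch below (my own illustration; the helper names are not from the article) reads the 84.5° row, applies the per-minute "sixtieths" value, and compares the result against the exact chord length 120·sin(θ/2):

```python
import math

def sexag(*digits):
    """Value of a sexagesimal digit sequence: sexag(80, 41, 3) = 80 + 41/60 + 3/60**2."""
    return sum(d / 60**i for i, d in enumerate(digits))

# Row for 84.5 degrees in Ptolemy's table: chord = 80;41,3 and sixtieths = 0;0,46,25.
chord_845 = sexag(80, 41, 3)
per_minute = sexag(0, 0, 46, 25)

# Interpolate the chord of 84.75 degrees (15 arc-minutes past the tabulated row).
approx = chord_845 + 15 * per_minute
exact = 120 * math.sin(math.radians(84.75) / 2)
print(approx, exact)  # both about 80.88
```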
The Greek sexagesimal placeholder or zero symbol changed over time. The symbol used on papyri during the second century was a very small circle with an overbar several diameters long, terminated or not at both ends in various ways. Later, the overbar shortened to only one diameter, similar to our modern
o macron (ō) which was still being used in late medieval Arabic manuscripts whenever alphabetic numerals were used. But the overbar was omitted in Byzantine manuscripts, leaving a bare ο (omicron). This gradual change from an invented symbol to ο does not support the hypothesis that the latter was the initial of ουδέν meaning "nothing". [10] [11] Note that the letter ο was still used with its original numerical value of 70; however, there was no ambiguity, as 70 could not appear in the fractional part of a number, and zero was usually omitted when it was the integer.
Some of Ptolemy's true zeros appeared in the first line of each of his eclipse tables, where they were a measure of the angular separation between the center of the Moon and either the center of the Sun (for solar eclipses) or the center of Earth's shadow (for lunar eclipses). All of these zeros took the form 0 | 0 0, where Ptolemy actually used three of the symbols described in the previous paragraph. The vertical bar (|) indicates that the integral part on the left was in a separate column labeled in the headings of his tables as
digits (of five arc-minutes each), whereas the fractional part was in the next column labeled minute of immersion, meaning sixtieths (and thirty-six-hundredths) of a digit. [12]
See also
References
^ a b Samuel Verdan (20 Mar 2007). "Systèmes numéraux en Grèce ancienne: description et mise en perspective historique" (in French). Retrieved 2 Mar 2011.
^ a b c Heath, Thomas L. A Manual of Greek Mathematics, pp. 14 ff. Oxford Univ. Press (Oxford), 1931. Reprinted Dover (Mineola), 2003. Accessed 1 November 2013.
^ Thompson, Edward M. Handbook of Greek and Latin Palaeography, p. 114. D. Appleton (New York), 1893.
^ The Packard Humanities Institute (Cornell & Ohio State Universities). Searchable Greek Inscriptions: "IG I³ 1387" [also known as IG I² 760]. Accessed 1 November 2013.
^ Jeffery, Lilian H. The Local Scripts of Archaic Greece, pp. 38 ff. Clarendon (Oxford), 1961.
^ The Packard Humanities Institute (Cornell & Ohio State Universities). Searchable Greek Inscriptions: "Magnesia 4" [also known as Syll³ 695.b]. Accessed 1 November 2013.
^ The Packard Humanities Institute (Cornell & Ohio State Universities). Searchable Greek Inscriptions: "IG II² 2776". Accessed 1 November 2013.
^ Edkins, Jo (2006). "Classical Greek Numbers". Retrieved 29 Apr 2013.
^ Nick Nicholas (9 Apr 2005). "Numerals: Stigma, Koppa, Sampi". Retrieved 2 Mar 2011.
^
^ Raymond Mercier, Consideration of the Greek symbol 'zero'. PDF (1.32 MiB). Numerous examples.
^ Ptolemy's Almagest, translated by G. J. Toomer, Book VI (Princeton, NJ: Princeton University Press, 1998), pp. 306–7.
External links
The Greek Number Converter
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials. A feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) as \(p(z)=(z-a)^n q(z)\), where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose, for contradiction, that there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\), so \(f\) is identically zero on \(B_r(\gamma(t_1))\); but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\). Since \(z_0\) was chosen arbitrarily, we must conclude that \(f\) is identically zero on all of \(G\). A contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is non-zero at \(a\) and holomorphic on \(B_\rho(a)\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
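Numerically, the order \(n\) of a zero shows up as the power at which \(f(a+h)/h^n\) stops vanishing as \(h\to 0\), and the stabilized ratio is the cofactor value \(g(a)\). The sketch below is my own illustration (names and tolerances are assumptions), using \(\sin(z) - z\), which has a zero of order 3 at 0:

```python
import math

def zero_order(f, a, max_order=10, h=1e-2):
    """Estimate the order of the zero of f at a: the smallest n for which
    f(a + h)/h**n tends to a nonzero limit as h -> 0."""
    for n in range(1, max_order + 1):
        r1 = f(a + h) / h**n
        r2 = f(a + h/2) / (h/2)**n
        # If the ratio stabilizes at a nonzero value, we found n.
        if abs(r2) > 1e-8 and abs(r1 - r2) < 0.1 * abs(r2):
            return n
    raise ValueError("no zero of small order detected")

f = lambda t: math.sin(t) - t   # = -t^3/6 + t^5/120 - ..., zero of order 3 at 0
print(zero_order(f, 0.0))       # 3
print(f(1e-3) / 1e-9)           # about -1/6, the cofactor value g(0)
```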
References
Berg, C. (2016). Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
Suppose you have an open map \(p\) between topological spaces, and a subset \(A\) of \(p\)’s domain such that \(p(A)\) is open. Can you then conclude that \(A\) is open? Nope! Consider the following spaces \(X=\{x_1,x_2\}\) and \(Y=\{y_1,y_2\}\) with topologies \(\tau_X=\{\varnothing, X, \{x_1\}\}\) and \(\tau_Y=\{\varnothing,Y,\{y_1\}\}\), respectively, and let \(p: X\times Y\to X\) be the projection onto the first factor. This is an open map. If we consider \(A=X\times\{y_2\}\), we see that \(A\) is not open in \(X\times Y\), but \(p(A)=p(X\times\{y_2\})= X\), which is trivially open in \(X\).
I came across this little problem recently: If \(X\) is a topological space with exactly two components, and given an equivalence relation \(\sim\) what can we say about its quotient space \(X/{\sim}\)? It turns out that \(X/{\sim}\) is connected if and only if there exists \(x,y\in X\) where \(x\) and \(y\) are in separate components, such that \(x\sim y\).
Suppose first that there exist \(x\) and \(y\) in separate components with \(x\sim y\). Let \(C_1\) and \(C_2\) be the two components of \(X\) and let \(p: X \to X/{\sim}\) be the natural projection. Since \(p\) is a quotient map it is continuous, and since the image of a connected space under a continuous function is connected, both \(p(C_1)\) and \(p(C_2)\) are connected. But since \(x\sim y\) we have \(p(C_1)\cap p(C_2)\neq \varnothing\), so \(X/{\sim}\) consists of a single component, because \[p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},\] as wanted.
To show the reverse implication, we use the contrapositive of the statement and show: if \(x\not\sim y\) for all \(x\in C_1\) and \(y\in C_2\), then \(X/{\sim}\) is not connected. Assume the hypothesis and note that \(p(C_1)\) and \(p(C_2)\) are then disjoint connected subspaces whose union equals all of \(X/{\sim}\) (since \(p\) is surjective). But then the images of \(C_1\) and \(C_2\) under \(p\) are two components of \(X/{\sim}\), showing that \(X/{\sim}\) is not connected. As wanted.
It’s soon exam time, so I’m practicing proofs in complex analysis. Right now that means Cauchy’s integral formula for \(n\)’th derivatives.
Let \(G\) be a domain of the complex numbers and \(f: G\to \mathbb{C}\) a holomorphic function. We first want to show that \(f\) can be expressed as a power series around some \(a\in\mathbb{C}\), such that $$f(z)=\sum^\infty_{n=0} a_n(z-a)^n.$$ Let \(B_\rho(a)\) be the largest open ball at \(a\) contained in \(G\). We claim that $$a_n = \frac{1}{2\pi i} \oint\frac{f(z)}{(z-a)^{n+1}}\,dz.$$ By the Cauchy integral formula we have, for a fixed \(z_0\in B_\rho(a)\), $$f(z_0)=\frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}\,dz,$$ and by elementary calculations we can, for \(z\in \partial B_r(a)\), write $$\frac{1}{z-z_0} = \frac{1}{z-a} \frac{1}{1-\frac{z_0 -a}{z-a}}=\frac{1}{z-a}\sum^\infty_{n=0} \left(\frac{z_0-a}{z-a}\right)^n,$$ and from above we then have $$\begin{split} f(z_0)& = \frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}\,dz \\ &=\frac{1}{2\pi i}\oint\sum^\infty_{n=0} \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}\,dz\\ &=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}\,dz\\&=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)}{(z-a)^{n+1}}\,dz\,(z_0-a)^n\\ &=\sum^\infty_{n=0}a_n (z_0-a)^n,\end{split}$$ as wanted. We see that \(f\) is a power series and thus infinitely complex differentiable, and the derivatives are $$f^{(n)}(a)=\frac{n!}{2\pi i}\oint\frac{f(z)}{(z-a)^{n+1}}\,dz,$$ as desired.
The observant reader will have noticed that I didn’t check that the sums were uniformly convergent, which is needed in order to switch the sum and integral signs, but this is an easy application of the Weierstraß \(M\)-test.
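As a quick numerical sanity check of the final formula, one can discretize the contour integral on a circle around \(a\); the trapezoidal rule converges extremely fast for periodic analytic integrands. This is only an illustrative sketch (the test function, radius, and grid size are arbitrary choices):

```python
import cmath
import math

def contour_derivative(f, a, n, r=1.0, N=2000):
    """Approximate f^(n)(a) = n!/(2*pi*i) * the contour integral of
    f(z)/(z-a)^(n+1) over the circle |z - a| = r, discretized with
    the trapezoidal rule on N equally spaced points."""
    total = 0j
    for k in range(N):
        theta = 2 * math.pi * k / N
        z = a + r * cmath.exp(1j * theta)
        dz = 1j * r * cmath.exp(1j * theta) * (2 * math.pi / N)
        total += f(z) / (z - a) ** (n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

# every derivative of exp at 0 equals 1, so this should be close to 1
d3 = contour_derivative(cmath.exp, 0.0, 3)
```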
References
Berg, C. (2016). Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
Prove that $$\int_0^\infty\frac1{x^x}\, dx<2$$
Integration by parts is out of the question. If we let $f(x)=\dfrac1{x^x}$ and $g'(x)=1$ then $f'(x)=-x^{-x}(\ln x + 1)$ by logarithmic differentiation and $g(x)=x$. The integral $\int f'(x)g(x)\, dx$ looks even harder to evaluate.
I tried to use the formula $$(b-a)\inf_{x\in [a,b]}f(x)\le\int_a^b f(x)\,dx\le(b-a)\sup_{x\in [a,b]}f(x)$$ with $f(x)=\dfrac1{x^x}$ but since in this case we have $b=\infty$, this method is not possible. The Cauchy-Schwarz inequality wouldn't work either.
I then tried Frullani's integral $$\int_0^\infty\frac{f(ax)-f(bx)}x\, dx=(f(0)-f(\infty))\ln\frac ba$$ with $f(ax)-f(bx)=x^{1-x}$. However, I can't seem to find a continuous function $f$ such that it holds. Is there such a function?
Also, I've seen in a maths formula handbook the identity $$\int_0^1\frac1{x^x}\, dx=\sum_{k=1}^{\infty}\frac1{k^k}$$ Is there a way to prove this as well?
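The last identity is the classical "sophomore's dream", and it can at least be checked numerically before attempting a proof. A minimal sketch (the grid size is an arbitrary choice; note $x^{-x} = e^{-x\ln x} \to 1$ as $x \to 0^+$, so the integrand is harmless near $0$):

```python
import math

def f(x):
    # x^(-x) = exp(-x ln x); continuous on (0, 1] with limit 1 at 0
    return math.exp(-x * math.log(x))

# midpoint rule on (0, 1); midpoints avoid evaluating at x = 0
N = 200_000
h = 1.0 / N
integral = h * sum(f((i + 0.5) * h) for i in range(N))

# the series converges very fast; 19 terms are far more than enough
series = sum(1.0 / k**k for k in range(1, 20))
```

Both quantities come out near 1.29129, which at least makes the identity believable.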
This question already has an answer here:
“sinc-ing” integral
Supposedly the answer is 1 but I have no idea how to evaluate this thing analytically.
$$f(n) = \frac{2}{\pi} \int_{0}^{\infty} 2\cos(x) \cdot \frac{\sin(x)}{x} \cdot \frac{\sin(x/3)}{x/3} \cdot \cdots \cdot \frac{\sin(x/(2n+1))}{x/(2n+1)} dx$$
Any help would be appreciated.
Gradient projection methods are methods for solving bound constrained optimization problems. In solving bound constrained optimization problems, active set methods face criticism because the working set changes slowly; at each iteration, at most one constraint is added to or dropped from the working set. If there are \(k_0\) constraints active at the initial \(W_0\), but \(k_\theta\) constraints active at the solution, then at least \(| k_\theta - k_0 |\) iterations are required for convergence. This property can be a serious disadvantage in large problems if the working set at the starting point is vastly different from the active set at the solution. As a result, researchers have developed algorithms that allow the working set to undergo radical changes at each iteration, as well as interior-point algorithms that do not explicitly maintain a working set.
The gradient-projection algorithm is the prototypical method that allows large changes in the working set at each iteration. Given \(x_k\), this algorithm searches along the piecewise linear path
\[P[x_k - \alpha \nabla f(x_k)], \quad \alpha \geq 0,\] where \(P\) is the projection onto the feasible set. A new point \[x_{k+1} = P[x_k - \alpha_k \nabla f(x_k)]\] is obtained when a suitable \(\alpha_k > 0\) is found. For bound-constrained problems, the projection can be easily computed by setting \[[P(x)]_i = \mathrm{mid} \{ x_i, l_i, u_i \},\] where \(\mathrm{mid}\{ \cdot \}\) is the middle (median) element of a set. The search for \(\alpha_k\) has to be done carefully since the function \[\phi(\alpha) = f(P[x_k - \alpha \nabla f(x_k)])\] is only piecewise differentiable.
If properly implemented, the gradient-projection method is guaranteed to identify the active set at a solution in a finite number of iterations. After it has identified the correct active set, the gradient-projection algorithm reduces to the steepest-descent algorithm on the subspace of free variables. As a result, this method is invariably used in conjunction with other methods with faster rates of convergence.
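As an illustration of the two ingredients above (the componentwise median projection and the projected gradient step), here is a deliberately simplified loop with a fixed step length instead of a proper search for \(\alpha_k\); the objective and bounds are made up for the example:

```python
def project(x, l, u):
    # componentwise median: [P(x)]_i = mid{x_i, l_i, u_i}
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def gradient_projection(grad, x, l, u, alpha=0.1, iters=500):
    # fixed-step sketch of x_{k+1} = P[x_k - alpha * grad f(x_k)]
    for _ in range(iters):
        g = grad(x)
        x = project([xi - alpha * gi for xi, gi in zip(x, g)], l, u)
    return x

# f(x) = 0.5 * ||x - c||^2, so grad f(x) = x - c; the minimizer over
# the box [0,1]^3 is simply c clipped to the box
c = [3.0, -2.0, 0.5]
grad = lambda x: [xi - ci for xi, ci in zip(x, c)]
l, u = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
sol = gradient_projection(grad, [0.5, 0.5, 0.5], l, u)
```

For this convex quadratic the iterates converge to `[1.0, 0.0, 0.5]`, the projection of `c` onto the box; note how two bounds become active even though the starting point had none.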
Trust-region algorithms can be extended to bound-constrained problems. The main difference between the unconstrained and the bound-constrained version is that we now require the step \(s_k\) to be an approximate solution of the subproblem
\[\min \{ q_k(s) : \| D_ks \| \leq \Delta_k, l \leq x_k + s \leq u \},\] where \[q_k(s) = \nabla f(x_k)^Ts + \frac{1}{2} s^T B_ks.\]
An accurate solution to this subproblem is not necessary, at least in early iterations. Instead, we use the gradient-projection algorithm to predict a step \(s_k^C\) (the Cauchy step) and then require merely that our step, \(s_k\), satisfies the constraints in the trust-region subproblem with \(q_k(s_k) \leq q_k(s_k^C)\). An approach along these lines is used by VE08 and PORT 3. In the bound-constrained code in LANCELOT, the trust region is defined by the \(l_{\infty}\)-norm and \(D_k = I\), yielding the equivalent subproblem \[\min \{ q_k(s) : -\Delta_k e \leq s \leq \Delta_k e, \; l \leq x_k + s \leq u \},\] where \(e\) is the vector of all ones.
The advantage of strategies that combine the gradient-projection method with trust-region methods is that the working set is allowed to change rapidly, and yet eventually settle into the working set for the solution. LANCELOT uses this approach, together with special data structures that exploit the (group) partially separable structure of \(f\), to solve large bound-constrained problems.
Let $M$ and $N$ be smooth manifolds, and $p$ and $q$ be points on $M$ and $N$ respectively.
I want to show that $f:T_{(p,q)}(M\times N)\to T_pM\oplus T_qN$ defined as $$f(Z)=(d\pi_M(Z),d\pi_N(Z))$$ is a linear isomorphism.
(I am using the derivations approach to tangent space).
To establish the isomorphism, it suffices to show that $f(Z)=0$ implies $Z=0$: the map $f$ is linear, and both spaces have the same finite dimension, so injectivity implies bijectivity.
So let $f(Z)=0$ for some $Z\in T_{(p,q)}(M\times N)$.
Thus, by definition, it follows that $Z(\xi\circ \pi_M)=0$ and $Z(\zeta\circ \pi_N)=0$ for all $\xi\in \mathcal C^{\infty}(M)$ and $\zeta\in \mathcal C^{\infty}(N)$.
From here I need to show that $Z(\theta)=0$ for all $\theta \in \mathcal C^{\infty}(M\times N)$.
Can somebody see what to do to show the above?
The Tobit is a statistical model proposed by James Tobin (1958) to describe the relationship between a non-negative dependent variable $y_t$ and a set of exogenous variables $x_t$. The term Tobit was derived from Tobin's name by truncating it and adding -it, by analogy with the probit/logit models.
Suppose we use the Encuesta Nacional de Ingresos y Gastos de los Hogares (ENIGH, 2002) data from Mexico, as mentioned in Example 3, to examine purchases of cheese by Mexican households. We discover that more than 60% of the surveyed households did not report cheese purchases over the survey period. This is understandable given the survey covers purchases for only a 1-week period and the shelf-life of many cheeses is longer than the survey period. For descriptions of variables in the data file, see here.
In order to examine the determinants of Mexican households' cheese consumption while accounting for the many zero values, we set up the following latent regression model.
$$y^*_t = x'_t\beta + \mu_t.$$
The model supposes that there is a latent (i.e. unobservable) variable $y^*_t$. This variable linearly depends on a set of exogenous variables $x_t$ via a vector $\beta$, which determines the relationship between the exogenous variables $x_t$ and the latent variable $y^*_t$. (See Example 3 for a definition of vector $\beta$.) In addition, there is a normally distributed error term $\mu_t \sim N(0,\sigma^2)$ to capture random influences on this relationship. Note that the standard deviation $\sigma$ here is a parameter to be estimated in the Tobit model.
However, in reality, we often only observe that
\[ y_t = \left\{
\begin{array}{l l} y^*_t & \quad \text{if $y^*_t > \tau$}\\ \tau & \quad \text{otherwise,} \end{array} \right.\] where $\tau$ is the threshold value ($\tau = 0$ in this example). We could estimate the above model using the Tobit model specification and maximum likelihood techniques.
For a Tobit that is censored from below at $\tau$, the log-likelihood function sums the log of the probability density of $y_t$ over the uncensored observations ($y^*_t > \tau$) and the log of the probability mass over the censored ones ($y^*_t \le \tau$). When the threshold value $\tau = 0$, a standard statistical textbook such as Greene (2011) shows that the estimator $\hat{\beta}$ can be calculated by maximizing the following log-likelihood function $\ln\mathcal{L}(\beta)$:
$$\ln\mathcal{L}(\beta) = \sum_{y_t > 0} \ln\left[\frac{1}{\sigma}\,\phi\!\left(\frac{y_t - x'_t\beta}{\sigma}\right)\right] + \sum_{y_t = 0} \ln\left[1 - \Phi\!\left(\frac{x'_t\beta}{\sigma}\right)\right],$$
where $\Phi(\cdot)$ is the cumulative distribution function of a standard normal distribution, and $\phi(\cdot)$ is the corresponding density function.
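As a concrete reading of this likelihood, here is a hedged sketch of the censored-from-below log-likelihood using only the Python standard library (`math.erf` gives $\Phi$); the function and variable names are my own, not from the case study:

```python
import math

def norm_pdf(z):
    # standard normal density phi(z)
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    # standard normal CDF Phi(z) via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_loglik(beta, sigma, X, y, tau=0.0):
    """Log-likelihood of a Tobit model censored from below at tau.

    Uncensored observations (y_t > tau) contribute the scaled normal
    density of the residual; censored ones contribute the probability
    that the latent y*_t falls at or below tau."""
    ll = 0.0
    for x_t, y_t in zip(X, y):
        xb = sum(b * xi for b, xi in zip(beta, x_t))
        if y_t > tau:
            ll += math.log(norm_pdf((y_t - xb) / sigma) / sigma)
        else:
            ll += math.log(norm_cdf((tau - xb) / sigma))
    return ll
```

A quick sanity check: a single censored observation with $x'_t\beta = \tau$ and $\sigma = 1$ contributes $\ln\Phi(0) = \ln(1/2)$.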
To report standard regression outcomes such as t-statistics, p-values, and others as defined in Example 1, we need the estimated covariance matrix of the estimator $\hat{\beta}$, i.e., $\hat{V}_{\hat{\beta}}$, as we did in Example 3.
Check out the demo of example 5 to experiment with a discrete choice model for estimating and statistically testing the Tobit model.

Tobin, James. 1958. Estimation of relationships for limited dependent variables. Econometrica 26(1): 24-36.
ENIGH data. Available for download at Instituto Nacional de Estadística y Geografía.
Kalvelagen, Erwin. 2007. Least Squares Calculations with GAMS. Available for download at http://www.amsterdamoptimization.com/pdf/ols.pdf.
Greene, William. 2011. Econometric Analysis, 7th ed. Prentice Hall, Upper Saddle River, NJ.
\[ f' = f\left(\frac{v \pm v_{0}}{v \mp v_{s}}\right) \]
where $f'$ is the observed frequency, $f$ is the actual frequency, $v$ is the speed of sound ($v = 336 + 0.6T$, with $T$ the temperature in degrees Celsius), $v_{0}$ is the speed of the observer, and $v_{s}$ is the speed of the source. If the observer is approaching the source, use the top operator (the $+$) in the numerator, and if the source is approaching the observer, use the top operator (the $-$) in the denominator. If the observer is moving away from the source, use the bottom operator (the $-$) in the numerator, and if the source is moving away from the observer, use the bottom operator (the $+$) in the denominator.
Example problems
A. An ambulance, which is emitting a 40 Hz siren, is moving at a speed of 30 m/s towards a stationary observer. The speed of sound in this case is 339 m/s.
\[ f' = 40\left(\frac{339 + 0}{339 - 30}\right) \approx 43.88\ \text{Hz} \]
B. An M551 Sheridan, moving at 10 m/s, is following a Renault FT-17 which is moving in the same direction at 5 m/s and emitting a 30 Hz tone. The speed of sound in this case is 342 m/s.
\[ f' = 30\left(\frac{342 + 10}{342 + 5}\right) \approx 30.43\ \text{Hz} \]
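The two worked examples can be wrapped in a small helper; the sign convention encoded below is one choice among several, so treat it as a sketch rather than the only way to write it:

```python
def doppler(f, v, v_observer=0.0, v_source=0.0):
    """Observed frequency f' = f * (v + v_o) / (v - v_s).

    Convention used here: v_observer > 0 means the observer moves
    toward the source, and v_source > 0 means the source moves
    toward the observer; use negative values for motion away."""
    return f * (v + v_observer) / (v - v_source)

# Example A: 40 Hz siren approaching a stationary observer at 30 m/s
fA = doppler(40.0, 339.0, v_source=30.0)                    # 40 * 339/309
# Example B: observer closing at 10 m/s on a source receding at 5 m/s
fB = doppler(30.0, 342.0, v_observer=10.0, v_source=-5.0)   # 30 * 352/347
```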
Ultra-Sound
still to be completed

Ultrasound is sound which has too high a frequency for humans to hear. Some other animals can hear ultrasound though. Dog whistles are one example of ultrasound: we can’t hear the sound, but dogs can. Audible sound is in the frequency range between 20 Hz and 20 000 Hz. Anything above this is ultrasound, and anything below is called infrasonic.

Ultrasound also has medical applications. It can be used to generate images of the inside of the body; such an image is called a sonogram. Ultrasound is commonly used to look at fetuses in the womb.
This article or section is an undeveloped draft or outline. You can help to develop the work, or you can ask for assistance in the project room.
The Free High School Science Texts: A Textbook for High School Students Studying Physics.
Main Page – << Previous Chapter (Units) – Next Chapter >>
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
April 1996, Volume 2, Issue 2
Abstract:
In this paper we give a generalization of Bowen's equidistribution result for closed geodesics on negatively curved manifolds to rank one manifolds.
Abstract:
A one-step numerical scheme with variable time-steps is applied to an autonomous differential equation with a uniformly asymptotically stable set, which is compact but otherwise of arbitrary geometric shape. A Lyapunov function characterizing this set is used to show that the resulting nonautonomous difference equation generated by the numerical scheme has an absorbing set. The existence of a cocycle attractor consisting of a family of equivariant sets for the associated discrete time cocycle is then established and shown to be close in the Hausdorff separation to the original stable set for sufficiently small maximal time-steps.
Abstract:
In this paper we study global behaviors of solutions of initial value problem to wave equations with power nonlinearity. We shall derive space-time decay estimates according to decay rates of the initial data with low regularity (in classical sense). Indeed we can control $L^\infty$-norm of a solution in high dimension, provided the initial data are radially symmetric. This enables us to construct a global solution under suitable assumptions and to obtain an optimal estimate for a lifespan of a local solution.
Abstract:
In this paper we study a free boundary problem arising from a stress-driven diffusion in polymers. The main feature of the problem is that the mass flux of the penetrant is proportional to the gradient of the concentration and the gradient of the stress. A Maxwell-like viscoelastic relationship is assumed between the stress and the concentration. The phase change takes place on the interface between the glassy and rubbery states of the polymer and a Stefan-type of free boundary condition is imposed on the free boundary. It is shown that under certain conditions the problem has a unique weak solution.
Abstract:
The Sacker-Neimark-Mane result on persistent manifolds of autonomous systems is well-known: an invariant manifold is persistent iff it is normally hyperbolic. The persistent manifolds have the property of the local uniqueness. The paper gives conditions for the indestructibility of an invariant manifold without the supposition of its local uniqueness. These conditions are wider than the normally hyperbolicity conditions. Some examples are considered.
Abstract:
Multi-peaked solutions to a singularly perturbed elliptic equation on a bounded domain $\Omega$ are constructed, provided the distance function $d(x, \partial\Omega)$ has more than one strict local maximum.
Abstract:
In this paper we discuss the problem when the Liénard system $\dot{x}=y-F(x)$ and $\dot{y}=-g(x)$ has homoclinic trajectories or not. Some new criteria for the existence of periodic solutions of this system are also presented.
Abstract:
We study the bifurcations of stationary solutions in a class of coupled reaction-diffusion systems on 1-dimensional space where the steady-state system is $D_2$-symmetric and reversible with respect to two involutions.
Because the stationary patterns of such reaction-diffusion systems are the symmetric cycles of its steady-state system, we investigate the bifurcations of manifolds of symmetric cycles near equilibria in general $D_2$-symmetric reversible systems. This is done through an analysis of the bifurcation regimes at strong resonances using 1-dimensional universal unfoldings of $D_2$-symmetric reversible normal forms. We prove there are two disjoint manifolds at "odd" resonance and four disjoint manifolds at "even" resonance. The number of these disjoint manifolds, in turn, determines the number of different types of stationary patterns.
Applications of our analysis to the study of pattern formation in reaction-diffusion systems are illustrated with a predator-prey model arising from mathematical ecology. Numerical results are obtained as a verification of our analysis.
Abstract:
In this paper, we show the existence of stable and unstable periodic solutions for a semilinear parabolic equation
$\qquad\qquad \frac{\partial u}{\partial t}-\Delta_x u -\lambda_1 u +g(u) =s \phi + h$ in $ R\times \Omega$
$\qquad\qquad u(t,x) =0 $ on $R\times \partial \Omega$
$\qquad\qquad u(0,x)=u(2\pi, x)$ on $\Omega$
where $g$ is a continuous function on $R$, $\phi$ denotes the positive normalized eigenfunction corresponding to the first eigenvalue $\lambda_1$ of problem (L), $s \in R$, and $h \in C([0,2\pi],C^1_0(\overline{\Omega})).$
Abstract:
We consider initial and boundary value problems modelling the vibrations of a plate with piezoelectric actuator. The simplest model leads to the Bernoulli-Euler plate equation with right hand side given by a distribution concentrated in an interior curve multiplied by a real valued time function representing the voltage applied to the actuator. We prove that, generically with respect to the curve, the plate vibrations can be strongly stabilized and approximatively controlled by means of the voltage applied to the actuator.
This post will try to explain why, in the optimization of the Rayleigh quotient, one constrains the norm of \(x\) to be \(\|x\| = 1\).
Let's start with the definition: given a symmetric matrix \(A\), the Rayleigh quotient is defined as the function \(R(x)\) from \(\mathbb{R}^{d}\setminus\{0\}\) to \(\mathbb{R}\),
\begin{equation}
R(x) = \frac{x^T A x}{x^Tx} \end{equation}
The maximization gives the following equivalent expression
\begin{equation}
\arg \max_{x} \frac{x^T A x}{x^Tx} = \arg \max_{x} \; x^T A x \quad \textrm{s.t.} \quad \|x\|=1 \end{equation}
To untangle the norm constraint we can begin by remembering that a symmetric matrix is diagonalizable so we can write,
\begin{equation}
A=U \Lambda U^T = \sum_{i=1}^{d} \lambda_i v_i v_i^T, \end{equation}
where \(\lambda_i\) are the eigenvalues of \(A\) and \(v_i\) the orthonormal eigenvectors from the diagonalization. We can express \(x\) as a sum of the eigenvectors of \(A\), that is, \(x = \sum_{i=1}^{d} \alpha_i v_i\). This allows us to rewrite the quotient,
\begin{equation}
\begin{split} x^Tx & = \Big( \sum_{i=1}^{d} \alpha_i v_i \Big)^{T} \Big( \sum_{i=1}^{d} \alpha_i v_i \Big) \\ & = \{ \textrm{orthogonality + unit length} \} = \sum_{i=1}^{d} \alpha_i^2 \end{split} \end{equation}
\begin{equation}
\begin{split} x^T A x & = \Big( \sum_{i=1}^{d} \alpha_i v_i \Big)^{T} \Big( \sum_{i=1}^{d} \alpha_i A v_i \Big) = \Big( \sum_{i=1}^{d} \alpha_i v_i \Big)^{T} \Big( \sum_{i=1}^{d} \alpha_i \lambda_i v_i \Big) \\ & = \{ \textrm{orthogonality + unit length} \} = \sum_{i=1}^{d} \alpha_i^2 \lambda_i \end{split} \end{equation}
If we want to maximize the quotient we can now write the Rayleigh quotient as
\begin{equation}
R(x) = \frac{x^T A x}{x^Tx} = \frac{ \sum_{i=1}^{d} \alpha_i^2 \lambda_i}{\sum_{i=1}^{d} \alpha_i^2}. \end{equation}
We can see from the expression that the length of \(x\) does not matter. The maximization is in the direction of \(x\) and not in the length of \(x\), that is, if some \(x\) maximizes the quotient then so does any multiple of it, \(k \cdot x, \; k \neq 0\). This means that we can constrain \(x\) such that, \(\sum_{i=1}^{d} \alpha_i^2=1\), which means that we can reduce the problem to maximizing \(x^T A x\).
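Both claims (scale invariance, and that the top eigenvector attains the maximum) are easy to check numerically on a tiny example; the matrix below is an arbitrary choice with known eigenvalues 3 and 1:

```python
def rayleigh(A, x):
    # R(x) = (x^T A x) / (x^T x)
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return sum(xi * yi for xi, yi in zip(x, Ax)) / sum(xi * xi for xi in x)

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 3 (along [1,1]) and 1
x = [0.3, -0.7]

# scale invariance: R(kx) = R(x) for any k != 0
r1 = rayleigh(A, x)
r2 = rayleigh(A, [5 * xi for xi in x])

# the top eigenvector attains the maximum value lambda_max = 3
top = rayleigh(A, [1.0, 1.0])
```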
The same thing can be shown for the generalized Rayleigh quotient,
\begin{equation}
R(x) = \frac{x^T A x}{x^T B x} \end{equation}
by proceeding in the same manner.
It gives me great pleasure to hear about the feat that Voyager 1 achieved. I was wondering if Voyager 1/2 could collide with Asteroids (those,if present, outside our solar system) or any other matter present in its way. Did NASA know exactly what path was Voyager going to follow and the possible collisions it could encounter in its journey till date? Could there be any unforeseen object in its path?
The Voyager probes are outside the Kuiper belt now, and have a very long way to go before entering the Oort cloud. They are now in a place that is almost completely devoid of matter. Or at least I couldn't find any estimates as to how dense the solar system is there.
But what about when they where still in the Kuiper belt?
If Wikipedia is to be believed, the Kuiper belt has a mass of about 4.59*10^23 kg and is mostly made of ice. Let's say the average object is a sphere with a diameter of 1 m (put in a better estimate, if you want to). An ice sphere of 1 m diameter has a mass of 489 kg, meaning there are 9.385*10^20 such objects. Each has a cross section of 0.785 m^2. Let's say for simplicity, the Kuiper belt is a torus with a major radius of 40 AU and a minor radius of 10 AU. (I haven't found a good number for the north-south dimension of the Kuiper belt, sorry.) That means that its volume is 2.64*10^38 m^3, so there is an average free volume of 2.81*10^17 m^3 per object.

We can borrow from gas kinetics to get an average free path length that the Voyagers can fly before colliding with something. (I know this doesn't directly answer your question, but it gives us an idea.)

$$ \lambda = \frac{1}{n\sigma} $$ with $\frac{1}{n}$ the mean free volume and $\sigma$ the average cross section.

We get:

$$ \lambda \approx 3.59 \cdot 10^{17}\ \text{m} \approx 2\,392\,827\ \text{AU}. $$

Meaning that the Voyager probes could fly through the Kuiper belt several times the distance to Alpha Centauri before smashing into anything. Now keep in mind that material density is much lower where they are right now...
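For transparency, here is the whole estimate redone in a few lines (every input is the rough figure quoted above, so the output is an order-of-magnitude number at best):

```python
import math

AU = 1.496e11                       # metres per astronomical unit
mass_belt = 4.59e23                 # kg, assumed total Kuiper-belt mass
m_object = 489.0                    # kg, a 1 m diameter ice sphere
n_objects = mass_belt / m_object    # number of such objects

R, r = 40 * AU, 10 * AU             # torus major / minor radius
volume = 2 * math.pi**2 * R * r**2  # torus volume in m^3

sigma = math.pi * 0.5**2            # cross-section of a 1 m sphere, m^2
n = n_objects / volume              # number density, m^-3
lam = 1.0 / (n * sigma)             # mean free path, metres
lam_AU = lam / AU                   # roughly 2.4 million AU
```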
The probability of Voyager colliding with any matter any time soon is
unknown, but likely small.
We have no way of detecting small outer solar system objects, because they are small and far away. Therefore, we don't know how many of those bodies there are, and thus we cannot begin to estimate the probability quantitatively. But, space is big, so in all likelihood we can say that the probability is very small. |
There is something I don't understand at page 36 of these lecture notes (Author: Fiorenzo Bastianelli from the university of Bologna, title: Path integrals for fermions and supersymmetric quantum mechanics.) I'll summarize it here but I linked them anyway in case someone want to check them.
So we're trying to build a supersymmetric action, we work in the super space $D=1$ and $N=2$ with one spacetime coordinate $t$ and two Grassman coordinates $\theta$ and $\bar{\theta}$.
The generator of time translation is $$H= i \frac{\partial}{\partial t}$$ The generators of supersymmetry transformation, that are translations in the anticommuting directions are $$Q= \frac{\partial}{\partial \theta} + i \bar{\theta} \frac{\partial}{\partial t} $$ and $$\bar{Q}= \frac{\partial}{\partial \bar{\theta}} + i \theta \frac{\partial}{\partial t} $$ We define a scalar, Grassman even superfield $X(t,\theta, \bar{\theta})$ which, under supersymmetry transformation transforms in this way $$\delta_S X(t,\theta, \bar{\theta}) = (\epsilon \bar{Q} + \bar{\epsilon} Q)\, X(t,\theta, \bar{\theta}) $$
With $\epsilon$ and $\bar{\epsilon}$ Grassmann parameters.
Now we define covariant derivatives $$ D= \frac{\partial}{\partial \theta} - i \bar{\theta} \frac{\partial}{\partial t}$$ $$ \bar{D}= \frac{\partial}{\partial \bar{\theta}} - i \theta \frac{\partial}{\partial t} $$
so that the covariant derivative of a superfield is still a superfield, which means
$$ \delta_S DX = (\epsilon \bar{Q} + \bar{\epsilon} Q)\, DX $$
All commutators and anticommutators are null beside these ones $$ \{ Q,\bar{Q} \} = 2H$$ $$\{D,\bar{D}\} = -2i \partial_t$$
Now we say that a Lagrangian $L=L(X,DX,\bar{D}X)$ that depends only implicitly on the coordinates of the superspace, through the superfield and its covariant derivatives, can give you a supersymmetric action. This is because it transforms under a supersymmetry transformation as a total derivative. The exact form of the Lagrangian variation under a supersymmetry transformation is this:
$$ \delta_S L(X,DX, \bar{D}X) = (\epsilon \bar{Q} + \bar{\epsilon} Q) \, L(X,DX, \bar{D}X) $$
Now the things I don't understand are these two:
Why does the Lagrangian transform like that under a supersymmetry transformation? I am not able to prove it. I can provide a sketch of my attempt at working out its transformation if requested, but it doesn't really lead anywhere, I think.
Assuming that's the right transformation law of the Lagrangian, why is that a total derivative? It looks to me like it just transforms as a superfield, but I don't see why that's a total derivative.
In the classical Langevin theory of diamagnetism, an externally applied magnetic field $\vec{B}$ either increases or decreases the speed of orbital motion of the electrons. But there is yet another possibility in which the orientation of the orbits can change. This is because an orbit is a circulating current loop carrying a magnetic moment $\vec{m}$ and in presence of $\vec{B}$ it feels a torque $\vec{m}\times\vec{B}$. But this change in orientation is not considered in the Langevin theory. Why?
In the classical theory of diamagnetism, a fundamental assumption is that the orbit of the electron is not influenced by the external magnetic field. This orbit, therefore, will only contribute to the creation of a magnetic dipole moment that, in a material, will give rise to a magnetization vector and, consequently, it allows us to determine the expression of the magnetic susceptibility.
On the contrary, in the classical theory of paramagnetism, developed again by Langevin, we assume that each electron (for the sake of simplicity, in a hydrogen-like atom in which the nuclear magnetic moment is equal to zero) will have a certain intrinsic dipole moment that will align under the action of an external magnetic field: in this case, the orientation is explicitly taken into account.
What is astonishing, in this case, is that both the classical theories give results that are in good agreement with the experimental results.
However, it is possible to consider that, as you point out in your question, both the effects should in principle be present when considering the classical behavior of a single electron. Repeating therefore the calculations of these two models taking into account both effects, it is possible to show, as it was done by Van Vleck in 1932, that at the end we obtain a magnetic dipole moment for the diamagnetic theory and a magnetic dipole moment for the paramagnetic theory. Summing up these two contributions, we can obtain that the overall magnetic dipole moment is identically equal to zero: $$ \boldsymbol\mu_{tot} = \boldsymbol\mu_{diam} + \boldsymbol\mu_{param} = 0 $$ since their expressions cancel out.
It is possible to show that all these contributions will result identically equal to zero for any electron in every atom. This is consistent with the Bohr-van Leeuwen theorem, that states that magnetic effects cannot be described through classical physics (they should not just be there, if we do careful calculations!), thus requiring a fully quantum description.
Returning to your question, my answer is that we more or less deliberately neglect it to obtain a classical description that works (being in agreement with experimental results), since including it we obtain a result in clear contradiction with experiments. If this is not satisfactory, it is because describing these effects requires quantum mechanics.
Contents
Integral expression can be added using the
\int_{lower}^{upper}
command.
Note that an integral expression may look a little different in inline and display math mode: in inline mode the integral symbol and the limits are compressed.
LaTeX code Output
Integral $\int_{a}^{b} x^2 dx$ inside text
$$\int_{a}^{b} x^2 dx$$
To obtain double/triple/multiple integrals and cyclic integrals you must use the amsmath and esint (for cyclic integrals) packages.
LaTeX code Output
Like integral, sum expression can be added using the
\sum_{lower}^{upper}
command.
LaTeX code Output
Sum $\sum_{n=1}^{\infty} 2^{-n} = 1$ inside text
$$\sum_{n=1}^{\infty} 2^{-n} = 1$$
In similar way you can obtain expression with product of a sequence of factors using the
\prod_{lower}^{upper}
command.
LaTeX code Output
Product $\prod_{i=a}^{b} f(i)$ inside text
$$\prod_{i=a}^{b} f(i)$$
Limit expression can be added using the
\lim_{lower}
command.
LaTeX code Output
Limit $\lim_{x\to\infty} f(x)$ inside text
$$\lim_{x\to\infty} f(x)$$
In
inline math mode the integral/sum/product lower and upper limits are placed right of integral symbol. Similar is for limit expressions. If you want the limits of an integral/sum/product to be specified above and below the symbol in inline math mode, use the
\limits command before limits specification.
LaTeX code Output
Integral $\int_{a}^{b} x^2 dx$ inside text
Improved integral $\int\limits_{a}^{b} x^2 dx$ inside text
Sum $\sum_{n=1}^{\infty} 2^{-n} = 1$ inside text
Improved sum $\sum\limits_{n=1}^{\infty} 2^{-n} = 1$ inside text
Moreover, adding
\displaystyle beforehand will make the symbol large and easier to read.
On the other hand,
\mathlarger command (provided by
relsize package) is used to get bigger integral symbol in display.
For more information see the amsmath, esint, and relsize package documentation.
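Putting the commands from this page together, a minimal compilable document might look like the following sketch (only packages already mentioned above are assumed):

```latex
\documentclass{article}
\usepackage{amsmath}  % \iint, \iiint for multiple integrals
\usepackage{esint}    % extra cyclic integral signs such as \oiint
\usepackage{relsize}  % \mathlarger

\begin{document}
Inline with compressed limits: $\int_{a}^{b} x^2\,dx$,
and with forced limits: $\int\limits_{a}^{b} x^2\,dx$.
\[
  \iint_{D} f(x,y)\,dx\,dy \qquad
  \oiint_{S} \mathbf{F}\cdot d\mathbf{S} \qquad
  \mathlarger{\sum_{n=1}^{\infty}} 2^{-n} = 1
\]
\end{document}
```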
The nonlinear least-squares problem has the general form

$$\min \{ r(x) : x \in \mathbb{R}^n \}$$
where \(r \,\) is the function defined by \(r(x) = \frac{1}{2}\| f(x) \|_2^2\) for some vector-valued function \(f\) that maps \(\mathbb{R}^n \) to \(\mathbb{R}^m \).
Least-squares problems often arise in data-fitting applications. Suppose that some physical or economic process is modeled by a nonlinear function \(\phi \,\) that depends on a parameter vector \(x \) and time \(t \). If \(b_i \) is the actual output of the system at time \(t_i \), then the residual
$$\phi(x,t_i) - b_i \, \,$$ measures the discrepancy between the predicted and observed outputs of the system at time \(t_i \). A reasonable estimate for the parameter \(x\) may be obtained by defining the \(i\)th component of \(f \) by $$f_i(x) = \phi(x,t_i) - b_i \,$$ and solving the least-squares problem with this definition of \(f \).
From an algorithmic point of view, the feature that distinguishes least-squares problems from the general unconstrained optimization problem is the structure of the Hessian matrix of \(r \). The Jacobian matrix of \(f \),
$$f'(x) = \left( \partial_1 f(x), \ldots, \partial_n f(x) \right)$$ can be used to express the gradient of \(r \,\) since \(\nabla r(x) = f'(x)^T f(x)\). Similarly, \(f'(x) \) is part of the Hessian matrix $$\nabla^2 r(x) = f'(x)^T f'(x) + \sum_{i=1}^m f_i(x) \nabla^2 f_i(x).$$
To calculate the gradient of \(r \,\), we need to calculate the Jacobian matrix \(f'(x)\). Having done so, we know the first term in the Hessian matrix, namely \(f'(x)^Tf'(x) \,\) without doing any further evaluations. Nonlinear least-squares algorithms exploit this structure.
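To make the structure exploitation concrete, here is a hedged Gauss-Newton sketch: it keeps only the first term \(f'(x)^Tf'(x)\) of the Hessian and solves the normal equations \((J^TJ)\,d = -J^Tf\) at each step. The model, data, and starting point are invented for illustration:

```python
import math

def gauss_newton(t, b, x, iters=50):
    """Gauss-Newton for the model phi(x, t) = x[0] * exp(x[1] * t).

    Residuals are f_i(x) = phi(x, t_i) - b_i; each step solves the
    2x2 normal equations (J^T J) d = -J^T f by Cramer's rule."""
    x0, x1 = x
    for _ in range(iters):
        # residuals and Jacobian rows [df_i/dx0, df_i/dx1]
        f = [x0 * math.exp(x1 * ti) - bi for ti, bi in zip(t, b)]
        J = [(math.exp(x1 * ti), x0 * ti * math.exp(x1 * ti)) for ti in t]
        # assemble J^T J and the right-hand side -J^T f
        a11 = sum(j0 * j0 for j0, _ in J)
        a12 = sum(j0 * j1 for j0, j1 in J)
        a22 = sum(j1 * j1 for _, j1 in J)
        g0 = -sum(j0 * fi for (j0, _), fi in zip(J, f))
        g1 = -sum(j1 * fi for (_, j1), fi in zip(J, f))
        det = a11 * a22 - a12 * a12
        x0 += (a22 * g0 - a12 * g1) / det
        x1 += (a11 * g1 - a12 * g0) / det
    return x0, x1

# zero-residual data generated from the true parameters (2.0, -1.0)
t = [0.1 * i for i in range(10)]
b = [2.0 * math.exp(-1.0 * ti) for ti in t]
x0, x1 = gauss_newton(t, b, (1.0, 0.0))
```

Because the residuals vanish at the solution, the neglected second Hessian term is zero there and the iteration converges rapidly; on large-residual problems one would add a line search or trust region.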
In many practical circumstances, the first term, \(f'(x)^T f'(x) \,\) in the Hessian is more important than the second term, most notably when the residuals \(f_i(x) \,\) are small at the solution. Specifically, we say that a problem has small residuals if, for all \(x \,\) near a solution, the quantities
$$|f_i(x)| \| \nabla^2 f_i(x) \|, \quad i=1,2,\ldots,m$$ are small relative to the smallest eigenvalue of \(f'(x)^Tf'(x) \,\).
Notes and References
Nonlinear least-squares algorithms are discussed in the books of Bates and Watts [1]; Dennis and Schnabel [3]; Fletcher [4]; Gill, Murray, and Wright [5]; Nocedal and Wright [6]; and Seber and Wild [7]. The books by Bates and Watts [1] and by Seber and Wild [7] are written from a statistical point of view. Bates and Watts [1] emphasize applications, while Seber and Wild [7] concentrate on computational methods. Björck [2] discusses algorithms for linear least-squares problems in a comprehensive survey that covers, in particular, sparse least-squares problems and nonlinear least-squares problems.
[1] Bates, D. M. and Watts, D. G. 1988. Nonlinear Regression Analysis and Its Applications, John Wiley & Sons, Inc., New York.
[2] Björck, A. 1990. Least squares methods. In Handbook of Numerical Analysis, P. G. Ciarlet and J. L. Lions, eds., North-Holland, Amsterdam, pp. 465-647.
[3] Dennis, J. E. and Schnabel, R. B. 1983. Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, NJ.
[4] Fletcher, R. 1987. Practical Methods of Optimization, 2nd ed., John Wiley & Sons, Inc., New York.
[5] Gill, P. E., Murray, W., and Wright, M. H. 1981. Practical Optimization, Academic Press, New York.
[6] Nocedal, J. and Wright, S. J. 1999. Numerical Optimization, Springer-Verlag, New York.
[7] Seber, G. A. F. and Wild, C. J. 1989. Nonlinear Regression, John Wiley & Sons, Inc., New York.
Last updated: October 5, 2013 |
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble; it was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, and an equivalent process was identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 7, TBA
March 14, TBA
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). |
ISSN:
1531-3492
eISSN:
1553-524X
Discrete & Continuous Dynamical Systems - B
May 2014 , Volume 19 , Issue 3
Abstract:
In the present article, we study the numerical approximation of a system of Hamilton-Jacobi and transport equations arising in geometrical optics. We consider a semi-Lagrangian scheme. We prove the well posedness of the discrete problem and the convergence of the approximated solution toward the viscosity-measure valued solution of the exact problem.
Abstract:
We study the long-time behavior of the solutions to a nonlinear damped driven Schrödinger type equation on a strip. We prove that this behavior is described by a regular compact global attractor.
Abstract:
This paper is devoted to the study of the existence, uniqueness, continuous dependence and spatial behaviour of the solutions for the backward in time problem determined by the Type III with two temperatures thermoelastodynamic theory. We first show the existence, uniqueness and continuous dependence of the solutions. Instability of the solutions for the Type II with two temperatures theory is proved later. For the one-dimensional Type III with two temperatures theory, the exponential instability is also pointed out. We also analyze the spatial behaviour of the solutions. By means of the exponentially weighted Poincaré inequality, we are able to obtain a function that defines a measure on the solutions and, therefore, we obtain the usual exponential type alternative for the solutions of the problem defined in a semi-infinite cylinder.
Abstract:
In this study, we consider the traveling spots that were observed in the photosensitive Belousov-Zhabotinsky reaction experiment conducted by Mihailuk et al. in 2001. First, we introduce the interface equation by the singular limit analysis of a FitzHugh--Nagumo-type reaction-diffusion system. Then, we obtain the profile of the support of the solution. Next, we prove the uniqueness of the traveling spot by studying ordinary differential equations that describe its front and back. In addition, we provide an upper bound for the width of the spot. Furthermore, we compare the singular limit problem with the wave front interaction model proposed by Zykov and Showalter in 2005 and obtain traveling fingers.
Abstract:
In this paper, we consider global stability for a heroin model with two distributed delays. The basic reproduction number of the heroin spread is obtained, which completely determines the stability of the equilibria. Using the direct Lyapunov method with Volterra type Lyapunov function, we show that the drug use-free equilibrium is globally asymptotically stable if the basic reproduction number is less than one, and the unique drug spread equilibrium is globally asymptotically stable if the basic reproduction number is greater than one.
Abstract:
This paper formulates and analyzes an HIV-1 infection model with saturated infection rate. We first discuss the boundedness of the solution and the existence of the equilibrium. The local stability of the virus-free equilibrium and infected equilibrium is established by analyzing the roots of the characteristic equations. Furthermore, we study the global stability of the virus-free equilibrium and infected equilibrium by using suitable Lyapunov function and LaSalle's invariance principle, and obtain sufficient conditions for the global stability of the infected equilibrium. Finally, numerical simulations are presented to illustrate the main results.
Abstract:
In recent studies, global Hopf branches were investigated for a delayed model of HTLV-I infection with delay-independent parameters. It is shown in [8,9] that when stability switches occur, global Hopf branches tend to be bounded, and different branches can overlap to produce coexistence of stable periodic solutions. In this paper, we investigate global Hopf branches for delayed systems with delay-dependent parameters. Using a delayed predator-prey model as an example, we demonstrate that stability switches caused by varying the time delay are accompanied by bounded global Hopf branches. When multiple Hopf branches exist, they are nested, and their overlap produces coexistence of two or possibly more stable limit cycles.
Abstract:
By using Lyapunov functions and some recent estimates of Halanay type, new criteria are introduced for the global exponential stability of a class of cellular neural networks, with delay and periodic coefficients and inputs. The novelty of those criteria lies in the fact that they are very efficient in presence of oscillating coefficients, because they are given in average form.
Abstract:
Robust morphogen gradient formation is important for embryo development. Patterns of developmental tissue are encoded by the morphogen gradient that drives the process of cell differentiation in response to different morphogen levels. Experiments have shown that tissue patterning is robust with respect to morphogen overexpression. However, the mechanisms for this robust patterning remain unclear. The expansion-repression mechanism, which was proposed for achieving scaling of patterning with organ size, is a type of self-enhanced clearance through a non-local feedback regulation and may contribute to the robustness with respect to morphogen overexpression. In this paper, we study the role of the expansion-repression mechanism in morphogen gradient robustness through a two-equation model with general forms of feedback functions. We prove the existence of steady-state solutions, and, through model reduction and simplification, show that the expansion-repression mechanism is able to improve the robustness against changes in the morphogen production rate. However, this improvement is restricted by the biological requirement of multi-fate long-range morphogen gradient.
Abstract:
In this paper we consider a free boundary problem describing cell motility, which is a simple model of Umeda (see [11]). This model includes a non-local term and the interface equation with curvature. We prove that there exist at least two traveling waves of the model. First, we rewrite the problem into a fixed-point problem for a continuous map $T$ and then show that there exist at least two fixed points for the map $T$.
Abstract:
This paper is concerned with the stability of traveling front solutions for a viscous Fisher-KPP equation. By applying geometric singular perturbation method, special Evans function estimates, detailed spectral analysis and $C_0$ semigroup theories, each traveling front solution with wave speed $c<-2\sqrt{f^\prime(0)}$ is proved to be locally exponentially stable in some appropriate exponentially weighted spaces.
Abstract:
We study two systems of reaction diffusion equations with monostable or bistable type of nonlinearities and with free boundaries. These systems are used as multi-species competitive model. For two-species models, we prove the existence of traveling wave solutions, each of which consists of two semi-waves intersecting at the free boundary. For three-species models, we also obtain some traveling wave solutions. In this case, however, every traveling wave solution consists of two semi-waves and one compactly supported wave in between, each intersecting with its neighbors at the free boundaries.
Abstract:
In this paper, a kind of second-order two-scale (SOTS) computation is developed for an integrated heat transfer problem with conduction, convection and radiation in periodic porous materials, where the convection part is composed of long thin parallel pipes with periodic distribution, the conduction part is occupied by solid materials, and the radiation part is on the pipes' walls and the surfaces of cavities. First of all, by asymptotic expansion of the temperature field, the homogenization problem, first-order correctors and second-order correctors are obtained successively. Then, the error estimation of the second-order two-scale approximate solution is derived under some regularity hypothesis. Finally, the corresponding finite element algorithms are proposed and some numerical results are presented. The numerical tests indicate that the developed method can be successfully used for solving the integrated heat transfer problem, which can reduce the computational effort greatly.
Abstract:
In this work, two novel decoupling algorithms for the steady Stokes-Darcy model based on two-grid discretizations are proposed and analyzed. Optimal error estimates for these variables are presented. The two-grid decoupled scheme proposed by Mu and Xu (2007) is used to develop the two novel decoupling algorithms. For Algorithm 3.2, the optimal error estimates are obtained for both ${\bf{u}}_f,\ p_f$ and $\phi$ with mesh sizes satisfying $H=\sqrt{h}$. For Algorithm 3.3, the convergence of $\phi$ in the $H^1$-norm is improved from $H^2$ to $H^\frac{5}{2}$. Furthermore, the existing results in [17] are improved and supplemented. Finally, some numerical experiments are provided to show the efficiency and effectiveness of the developed algorithms.
I am not an expert in set theory, so this question could be trivial. I am sorry in that case.
Let $I$ be a set and $\{ X_i \}_{i \in I}$ be a collection of sets such that $X_i \neq \emptyset$ for all $i \in I$. The axiom of choice says precisely that the product $$ \prod_{i \in I} X_i $$ is not empty. The use of the word "choice" here is clear: to produce an element $(x_i)\in \prod_{i \in I} X_i$ we need to choose an $x_i \in X_i$ for all $i \in I$.
Now I am wondering: what if there is no choice at all? Namely, what if $X_i = \{x_i\}$ has exactly one element for each $i \in I$? Indeed, in this case $\prod_{i \in I} X_i$ has only one element, namely $(x_i)$, and there is no choice to make. So the question is the following:
Question: is the axiom of choice needed to prove that the product of a family of sets, each with exactly one element, is not empty?
Thanks! |
An orbital ring connected to the Earth by space elevators would reduce the cost of going to space to an amount comparable to an airplane ticket. This would cause a boom in the space tourism industry, and eventually millions and even billions of people and tons of cargo will be moving from the Earth’s surface to space annually, and vice versa. This would necessitate an expansion in our space-based infrastructure to include space-based solar panels, a lunar mass driver, the routine mining of asteroids, and especially enormous space habitats (for all those billions of people to live in) such as the Stanford Torus, the Bernal Sphere, or the O’Neill Cylinder. Orbital rings also allow you to build artificial planets and Dyson spheres, which would allow us to completely colonize the solar system. They would also allow us to build a Birch planet, a single planet with a surface area exceeding the total surface area of all the planets in the Milky Way galaxy.
In this lesson, we’ll give a brief catalog of the various different classes of planets in the universe. We'll discuss Pulsar planets, hot Jupiters, Super Earths, ice and water worlds, archipelago worlds, diamond worlds, and rogue planets. Most of the planets we’ll be discussing were discovered using the Kepler Space Telescope and the transit method. We once believed that the formation of planets was rare and that there probably weren’t many planets beyond our solar system. We couldn’t have been more wrong.
In this lesson, we’ll discuss the prospect of life in the Milky Way galaxy beyond the Earth. We'll begin by discussing the speculations made in a paper written by Carl Sagan about the possibility of life in Jupiter's atmosphere. From there, we shall derive a formula which describes the habitable zone of a star. Using this formula and data obtained by the Kepler Space Telescope, we can estimate the total number of "Earth-like" planets in the Milky Way. From there, we discuss the fraction of those planets on which simple and intelligent life evolve; then we'll discuss the fraction of those planets on which advanced communicating civilizations evolve and what fraction of those civilizations are communicating right now.
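A back-of-the-envelope version of such a habitable-zone formula can be sketched as follows. The square-root scaling comes from matching Earth's insolation, but the 0.95 and 1.37 AU boundary coefficients below are assumed values chosen for illustration, not taken from the lesson.

```python
import math

# Crude habitable-zone estimate: a planet receives Earth-like stellar
# flux at a distance (in AU) that scales as sqrt(L), where L is the
# star's luminosity in solar units. The boundary coefficients 0.95 and
# 1.37 AU are assumed illustrative values for a conservative zone.
def habitable_zone_au(L):
    inner = 0.95 * math.sqrt(L)
    outer = 1.37 * math.sqrt(L)
    return inner, outer

# For a Sun-like star (L = 1), Earth at 1 AU falls inside the zone.
inner, outer = habitable_zone_au(1.0)
```

With a catalog of stellar luminosities (e.g., from Kepler targets), such a formula is the first ingredient in counting candidate "Earth-like" planets.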
In this lesson, we’ll prove that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\). We'll prove this result by using the squeeze theorem and basic geometry, algebra, and trigonometry. In a future lesson, we'll learn why this result is important: knowing that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\) is required to find the derivatives of the sine and cosine functions. But we'll save that for a future lesson.
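A quick numerical sanity check of this limit (not a proof, just an illustration of the squeeze):

```python
import math

# Evaluate sin(theta)/theta for shrinking theta; the ratio should
# climb toward 1 from below, as the squeeze-theorem proof predicts.
thetas = (0.5, 0.1, 0.01, 1e-4, 1e-8)
ratios = [math.sin(t) / t for t in thetas]
```

Each successive ratio is closer to 1, matching the limit the lesson proves.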
For a vector field \(\vec{F}(x,y)\) defined at each point \((x,y)\) within the region \(R\) and along the continuous, piecewise-smooth, closed curve \(c\) such that \(R\) is the region enclosed by \(c\), we shall derive a formula (known as Green’s Theorem) which will allow us to calculate the line integral of \(\vec{F}(x,y)\) over the curve \(c\).
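A small numerical check of this statement: for the illustrative field \(\vec{F} = (-y/2,\ x/2)\) we have \(\partial Q/\partial x - \partial P/\partial y = 1\), so by Green's Theorem the line integral around the unit square should equal its area, 1. The field and region here are chosen for the example, not taken from the lesson.

```python
import numpy as np

# Midpoint-rule line integral of P dx + Q dy counterclockwise around
# the unit square [0,1] x [0,1].
def line_integral_unit_square(P, Q, n=2000):
    s = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n  # midpoints
    ds = 1.0 / n
    total = 0.0
    total += np.sum(P(s, 0.0 * s)) * ds          # bottom: (t, 0), dx = +ds
    total += np.sum(Q(1.0 + 0.0 * s, s)) * ds    # right:  (1, t), dy = +ds
    total -= np.sum(P(1.0 - s, 1.0 + 0.0 * s)) * ds  # top:  (1-t, 1), dx = -ds
    total -= np.sum(Q(0.0 * s, 1.0 - s)) * ds    # left:   (0, 1-t), dy = -ds
    return total

# F = (-y/2, x/2) gives dQ/dx - dP/dy = 1, so the integral is the area.
line_int = line_integral_unit_square(lambda x, y: -y / 2.0,
                                     lambda x, y: x / 2.0)
```

This is exactly the "area via a boundary integral" trick that follows from Green's Theorem.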
To find the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of that sphere, we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and the particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the other particle of mass \(m\) such that \(D\) is their separation distance.
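The shell result can be verified numerically by summing ring contributions over one shell (with \(G = m = 1\) for simplicity; the mass, radius, and distance values below are arbitrary choices for the check):

```python
import math

# Numerical check of the shell theorem: a thin spherical shell of mass
# M and radius R attracts an outside point at distance D exactly like a
# point mass M at the center, i.e. with force M / D**2 (G = m = 1).
def shell_force(M, R, D, n=20000):
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta                  # midpoint of ring band
        dM = 0.5 * M * math.sin(theta) * dtheta     # mass of the ring
        s2 = R * R + D * D - 2.0 * R * D * math.cos(theta)
        s = math.sqrt(s2)
        # By symmetry only the axial component of the force survives.
        total += dM * (D - R * math.cos(theta)) / (s2 * s)
    return total

F = shell_force(M=5.0, R=1.0, D=3.0)
```

The sum over rings converges to \(M/D^2\), exactly the point-mass result the lesson derives.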
In previous lessons, we learned that by taking the integral of some function \(f(x)\) we can find the area underneath that curve by summing the areas of infinitely many, infinitesimally skinny rectangles. In this lesson, we'll use the concept of a double integral to find the volume underneath any smooth and continuous surface \(f(x,y)\) by summing the volumes of infinitely many, infinitesimally skinny columns.
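A minimal sketch of this column-summing idea, using a midpoint rule on a grid; the surface \(f(x,y) = x^2 + y^2\) over the unit square is chosen for illustration (its exact integral is 2/3):

```python
# Approximate the volume under a surface f(x, y) over a rectangle by
# summing the volumes of many skinny columns, mirroring the definition
# of the double integral.
def double_integral(f, x0, x1, y0, y1, n=400):
    dx = (x1 - x0) / n
    dy = (y1 - y0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * dx           # midpoint of the i-th strip in x
        for j in range(n):
            y = y0 + (j + 0.5) * dy       # midpoint in y
            total += f(x, y) * dx * dy    # volume of one skinny column
    return total

# Volume under f(x, y) = x^2 + y^2 over the unit square; exact value 2/3.
vol = double_integral(lambda x, y: x ** 2 + y ** 2, 0.0, 1.0, 0.0, 1.0)
```

Refining the grid (larger `n`) drives the column sum toward the exact double integral.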
The first serious proposal in the scientific literature on terraforming other worlds was about terraforming Venus. The planetary scientist Carl Sagan imagined seeding the Venusian skies with photosynthetic microbes capable of converting Venus's \(CO_2\)-rich atmosphere into oxygen. Other proposals involve assembling a vast system of orbital mirrors capable of blocking the Sun's light and cooling Venus until this hot and hellish world became frigid enough for \(CO_2\) to rain out of its atmosphere. This "solleta" would also be capable of simulating an Earth day/night cycle. To create oceans and an active hydrosphere on Venus, we could hurl scores of icy asteroids from the Kuiper belt toward Venus; upon impacting the Venusian atmosphere, they would rapidly disintegrate, releasing enormous quantities of water vapor that would subsequently condense to form the first seas on Venus. Or perhaps Saturn's moon Enceladus, which contains a colossal subsurface ocean dwarfing that of the Earth, could be sacrificed toward the end of creating the first seas on Venus. But even if humans never terraform this hellish world, they could still live there, partially at least, by deploying thousands of blimps into the Venusian skies capable of supporting a long-term human presence of perhaps over a million people: Venusian sky cities. And eventually, after many millennia of terraforming Venus, a rich ecosystem of life, including us, could live on Venus's surface.
Category Theory for Programmers Chapter 18: Adjunctions Derive the naturality square for $\psi$, the transformation between the two contravariant functors: $a \to \mathscr{C}(La, b)$ and $a \to \mathscr{D}(a, Rb)$
Derive the counit $\epsilon$ starting from the hom-sets isomorphism in the second definition of the adjunction.
Complete the proof of equivalence of the two definitions of the adjunction.
Show that the coproduct can be defined by an adjunction. Start with the definition of the factorizer for a coproduct.
Show that the coproduct is the left adjoint of the diagonal functor.
Define the adjunction between a product and a function object in Haskell. |
By way of analogy, think of what happens when you blow up a balloon and let it go. It spins around, goes this way and that. A balloon rarely goes straight, without spinning. The thrust from a balloon rarely goes through the center of mass. It rotates and translates. Because the thrust vector itself turns with the rotating balloon, the translation is not along a straight line path.
Equations of motion
The equations of motion separate into translational and rotational parts by looking at things from the perspective of the center of mass and by ignoring motion of the center of mass within the vehicle. The equations of motion don't decouple if the fuel tanks are not at the center of mass (which they never are). I'll ignore this.
In this case, the center of mass translational motion is given by Newton's second law, $\vec F = m\vec a$, where $\vec F$ is the net force acting on the object, total thrust plus all other external forces. (Note: Not $F=\dot p$. That is valid for constant mass objects, but then again, so is $\vec F=m\vec a$.)
The rotational behavior is more complex. The rotational analog of $\vec F=m\vec a$ is $\vec T = I\vec \alpha$ in simple, freshman-level problems. That is not true in general, and it is not true for your object. Your object is boxy; it does not have a spherically symmetric mass distribution. This means that the inertia tensor is time varying from the perspective of an inertial frame. You don't want to go there. Much better is to look at things from the perspective of a frame fixed with respect to the rotating object. This is a non-inertial frame, so fictitious torques will arise.
For any vector quantity $\vec q$, the time derivative of the vector from the perspective of a co-moving inertial frame and this body-fixed frame are related via$$\left(\frac {d\vec q}{dt}\right)_\text{inertial} =\left(\frac {d\vec q}{dt}\right)_\text{body} + \vec \omega \times \vec q$$This is sometimes called the dynamical transport theorem (e.g., see section 2.2 of http://ocw.mit.edu/courses/mechanical-engineering/2-003sc-engineering-dynamics-fall-2011/newton2019s-laws-vectors-and-reference-frames/MIT2_003SCF11Kinematic.pdf).
Applying this to angular momentum $\vec L = \mathbf I\, \vec \omega$ yields $$\dot{\vec L}_{\text{inertial}} = \dot{\vec L}_{\text{body}} + \vec \omega \times (\mathbf I\, \vec \omega )$$The left-hand side is the external torque, including that from the thrusters. Once again ignoring mass depletion, the inertia tensor is constant in the body frame, yielding$$\dot {\vec \omega} = \mathbf I^{-1}\left(\vec \tau_{\text{ext}} - \vec \omega \times (\mathbf I\, \vec \omega ) \right)$$
That second term on the right hand side results in some bizarre motion: "Hence the jabberwockian sounding statement: the polhode rolls without slipping on the herpolhode lying in the invariant plane." (Goldstein,
Classical Mechanics).
Things get messier yet when mass depletion and motion of the center of mass within the body is brought into the picture.
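Ignoring mass depletion as above, the body-frame equation \(\dot{\vec\omega} = \mathbf I^{-1}\left(\vec\tau_{\text{ext}} - \vec\omega\times(\mathbf I\,\vec\omega)\right)\) can be integrated numerically. This is a rough sketch with made-up inertia values and a crude forward-Euler step, not a flight-quality propagator:

```python
import numpy as np

# Body-frame Euler equations for a rigid body:
#   omega_dot = I^{-1} (tau_ext - omega x (I omega))
# The principal-axis inertia values are invented for a boxy,
# non-symmetric body.
I = np.diag([1.0, 2.0, 3.0])
I_inv = np.linalg.inv(I)

def omega_dot(omega, tau_ext):
    # The cross-product term is the "fictitious torque" from working
    # in the rotating body frame.
    return I_inv @ (tau_ext - np.cross(omega, I @ omega))

def integrate(omega0, tau_ext, dt=1e-3, steps=1000):
    # Simple forward-Euler integration; fine for a short illustration,
    # too crude for real attitude propagation.
    omega = omega0.copy()
    for _ in range(steps):
        omega = omega + dt * omega_dot(omega, tau_ext)
    return omega

# Torque-free tumbling: omega wanders in the body frame even though
# the inertial-frame angular momentum is constant.
omega_final = integrate(np.array([0.1, 1.0, 0.1]), np.zeros(3))
```

Spinning exactly about a principal axis is an equilibrium of these equations; a slightly perturbed spin about the middle axis is the classic unstable "tennis racket" case.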
Question 1
You've just turned the vehicle into a kid's balloon. Not good.
Ideally, thrust will be straight through the center of mass, resulting in pure translational motion, or the net thrust will be zero (e.g., thrusters A and B firing in unison), resulting in pure rotational motion. That ideal never happens. Thrusters designed to produce translational acceleration inevitably produce a bit of rotational acceleration, thrusters designed to be used in pairs as a force couple inevitably produce a bit of translational acceleration. A real vehicle has to watch out for these undesired forces and torques and control for them.
Question 2
The simplified equations of motion are above.
Question 3
That's the subject of scifi movies. And vehicles with failed-on thrusters. Real thrusters are designed to fail off.
By way of analogy, what happens when you spin a CD too fast? It tears itself apart. Every once in a while that does happen: the CD disintegrates and the little chunks tear the daylights out of the CD drive. Space vehicles aren't made to be spun up to a high angular velocity. They would disintegrate, just like that CD, but at a much lower angular velocity.
Question 4
This is still a kid's balloon, just not as bad as your question #1. |
== Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison ==
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 7, TBA March 14, TBA March 21, Spring Break, No seminar March 28, Shamgar Gurevitch UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas AM). |
We deal with a tournament (draw an arrow from $i$ to $j$ whenever $a_{ij}=r$) and want to prove that your sum is maximized for an acyclic tournament $AC_n$.
Denote by $\deg(i)$ the out-degree of $i$. Then $$\prod_{i\in X,j\notin X}a_{ij}=r^{-\binom{k}2}\prod_{i\in X} r^{\deg(i)}=f\left(\sum_{i\in X}\deg(i)\right)$$ for the convex function $$f(t)=r^{-\binom{k}2+t}.$$
I claim that such a sum is maximized on $AC_n$ for
any convex function $f$. In other words, if we associate with our tournament $T_n$ the multiset $M_k(T_n)$ of $\binom{n}k$ numbers $\sum_{i\in X}\deg(i)$, where $X$ runs over $k$-subsets of $\{1,\dots,n\}$, then $M_k(AC_n)$ majorizes $M_k(T_n)$.
First, we establish this for $k=1$. This case means that the multiset of degrees of $T_n$ is majorized by the multiset $\{0,1,\dots,n-1\}$ (the degrees of $AC_n$). Indeed, for any $m=1,\dots,n$, the sum of degrees of any $m$ vertices of $T_n$ is at least $\binom{m}2=0+1+\dots+(m-1)$ (since the sum of degrees is at least the number of edges between these $m$ vertices). This means (by the very definition) that the multiset of degrees of $T_n$ is majorized by $0,1,\dots,n-1$.
Now it suffices to prove that if one multiset $A=\{a_1,\dots,a_N\}$ majorizes another multiset $B=\{b_1,\dots,b_N\}$, then the same holds for their $k$-wise sums (without repetitions: $a_{i_1}+\dots+a_{i_k}$, $i_1<\dots<i_k$). It follows from the following observation: $B$ is obtained from $A$ by a sequence of moves 'take two unequal elements and bring them closer together with the sum being fixed'. Each such move corresponds to similar changes of the multiset of $k$-wise sums.
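The claim can be sanity-checked by brute force for small $n$. The Python sketch below assumes $a_{ij}=r$ when there is an arrow $i\to j$ and $a_{ij}=1$ otherwise (this is what the displayed identity uses); it enumerates all tournaments on $n=4$ vertices and confirms that the acyclic tournament attains the maximum of $\sum_X \prod_{i\in X,\,j\notin X} a_{ij}$ for $k=2$, $r=2$:

```python
from itertools import combinations, product

def tournament_value(adj, n, k, r):
    """Sum over all k-subsets X of prod_{i in X, j not in X} a_ij,
    with a_ij = r if there is an arrow i -> j and a_ij = 1 otherwise."""
    total = 0.0
    for X in combinations(range(n), k):
        inside = set(X)
        p = 1.0
        for i in X:
            for j in range(n):
                if j not in inside and adj[i][j]:
                    p *= r
        total += p
    return total

n, k, r = 4, 2, 2.0
pairs = list(combinations(range(n), 2))
best = 0.0
for bits in product([0, 1], repeat=len(pairs)):
    # orient every pair one way or the other: one tournament per bit string
    adj = [[False] * n for _ in range(n)]
    for (i, j), b in zip(pairs, bits):
        if b:
            adj[i][j] = True
        else:
            adj[j][i] = True
    best = max(best, tournament_value(adj, n, k, r))

# the acyclic (transitive) tournament: arrow i -> j whenever i < j
acyclic = [[j > i for j in range(n)] for i in range(n)]
print(best == tournament_value(acyclic, n, k, r))  # True
```

The same loop, with larger ranges, checks other values of $n$, $k$ and $r>0$.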
As mentioned in the comments, this question has been answered for random points on the boundary of convex bodies, and even better, for all intrinsic volumes. Let me offer some references:
A good reference is:
Matthias Reitzner,
Random points on the boundary of smooth convex bodies, Trans. Amer. Math. Soc. 354, 2243-2278, 2002
Abstract:
The convex hull of $n$ independent random points chosen on the boundary of a convex body $K \subset \mathbb{R}^d$ according to a given density function is a random polytope. The expectation of its $i$-th intrinsic volume for $i=1, \dots, d$ is investigated. In the case that the boundary of $K$ is sufficiently smooth, asymptotic expansions for these expected intrinsic volumes as $n \to \infty$ are derived.
By Ross M. Richardson, Van H. Vu and Lei Wu there are two papers, which are very similar:
Random inscribing polytopes, European Journal of Combinatorics. Volume 28, Issue 8, Pages 2057–2071, November 2007
and
An Inscribing Model for Random Polytopes, Discrete & Computational Geometry, Volume 39, Issue 1-3, pp 469-499, March 2008
With the following abstract:
For convex bodies $K$ with $\mathcal{C}^2$ boundary in $\mathbb{R}^d$ , we explore random polytopes with vertices chosen along the boundary of $K$. In particular, we determine asymptotic properties of the volume of these random polytopes. We provide results concerning the variance and higher moments of this functional, as well as an analogous central limit theorem.
Another more recent reference is
Károly J. Böröczky, Ferenc Fodor, Daniel Hug,
Intrinsic volumes of random polytopes with vertices on the boundary of a convex body, Trans. Amer. Math. Soc. 365, 785-809, 2013, arxiv link
Let $K$ be a convex body in $\mathbb{R}^d$, let $j\in\{1, ..., d-1\}$, and let $\varrho$ be a positive and continuous probability density function with respect to the $(d-1)$-dimensional Hausdorff measure on the boundary $\partial K$ of $K$. Denote by $K_n$ the convex hull of $n$ points chosen randomly and independently from $\partial K$ according to the probability distribution determined by $\varrho$. For the case when $\partial K$ is a $C^2$ submanifold of $\mathbb{R}^d$ with everywhere positive Gauss curvature, M. Reitzner proved an asymptotic formula for the expectation of the difference of the $j$th intrinsic volumes of $K$ and $K_n$, as $n\to\infty$. In this article, we extend this result to the case when the only condition on $K$ is that a ball rolls freely in $K$.
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 7, TBA March 14, TBA March 21, Spring Break, No seminar March 28, Shamgar Gurevitch UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
I have seen in some places that people use formal verification and/or computer-aided verification for cryptography (tools like ProVerif, CryptoVerif, etc.).
How do these approaches work?
Disclaimer: I use Coq on a daily basis...
I have seen in some places that people use formal verification and/or computer-aided verification for cryptography.
To my knowledge, there aren't that many places that do such a thing.
First, let's define our concepts:
Formal verification: the act of proving the correctness of algorithms with respect to a certain formal specification or property, using formal methods of mathematics.
Computer-assisted proof: a proof that has been at least partially generated by computer.
Some examples (in cryptography):
And some reading:
In formal methods (whatever the domain) there are two approaches:
In most cases, the first method is the one used. Why? Because it is easier to start with a formal specification and then, incrementally, through proven steps, eventually get to the software (which is therefore proven). As an example, this certified compiler:
For example, in the B method, the process is the following: $$\text{DEFINE/SPECIFY} \rightarrow \text{PROVE} \rightarrow \text{REFINE} \rightarrow \text{PROVE} \rightarrow \ldots \rightarrow \text{IMPLEMENT}$$
Formal verification uses Hoare logic, a set of rules that allows us to reason about programs. It is based on sequent calculus and natural deduction. It relies on the Hoare triple:$$\{P\}\ C\ \{Q\}$$When the
precondition $\{P\}$ is met, the execution of the command $C$ establishes the postcondition $\{Q\}$. If the command $C$ takes a state $s$ to a state $s'$, written$$C: s \rightarrow s',$$then you have to prove:$$\forall\ s\ s',\ P[s] \implies C : s \rightarrow s' \implies Q[s']$$But we prefer the Hoare triple notation (lighter).
As an example, this is the rule for $SKIP$ (aka
do nothing):$$\frac{}{\{P\}\ SKIP\ \{P\}}$$
And this is the rule for the sequence: $$\frac{\{P\}\ C_1\ \{Q\}\ \ \ \ \ \ \ \{Q\}\ C_2\ \{R\}}{\{P\}\ C_1 ; C_2\ \{R\}}$$ This notation is read as: "If $\{P\}\ C_1\ \{Q\}$ (is $\text{True}$) and if $\{Q\}\ C_2\ \{R\}$, I can infer $\{P\}\ C_1 ; C_2\ \{R\}$."
A proof is therefore built from the bottom up: if you want to prove $\{P\}\ C_1 ; C_2\ \{R\}$, you will need to prove that there is a $Q$ such that $\{P\}\ C_1\ \{Q\}$ and $\{Q\}\ C_2\ \{R\}$.
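The semantics of a triple, $\forall s\, s',\ P[s] \implies C : s \rightarrow s' \implies Q[s']$, can be made concrete with a toy checker. This is only an illustrative sketch with hypothetical helper names (it checks the implication exhaustively over a small finite state space; real formal verification proves it for all states):

```python
def check_triple(pre, command, post, states=range(-50, 51)):
    """Check {pre} C {post} over a small finite state space: every state
    satisfying the precondition must, after running the command, satisfy
    the postcondition."""
    for x in states:
        s = {"x": x}
        if pre(s):
            s2 = command(dict(s))  # run C on a copy of the state
            if not post(s2):
                return False
    return True

def incr(s):  # the command  x := x + 1
    s["x"] += 1
    return s

# {x >= 0}  x := x + 1  {x > 0}  holds...
print(check_triple(lambda s: s["x"] >= 0, incr, lambda s: s["x"] > 0))  # True
# ...but {x >= 0}  x := x + 1  {x > 1}  fails at x = 0
print(check_triple(lambda s: s["x"] >= 0, incr, lambda s: s["x"] > 1))  # False
```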
Some readings on Hoare logic:
In formal methods, there are two kinds of tools:
The second kind is based on pure logic. You provide it with lemmas (helping theorems), and it will make sure that every step you take in your proof is correct. It is a very thorough process, and quite slow. But at the end you know how your proof works and that it is correct, because you did not miss a hypothesis. The process, which can be frustrating, will make sure that the proof works. The main proof assistants are the following: Coq, Isabelle, Agda, F* and HOL.
The tools you mentioned (ProVerif, CryptoVerif) are not dedicated to cryptographic primitives, but protocol verification. I do not know them, so I will not comment. Other tools on the same subject do exist. For example:
And yes, they are based on Coq.
In C, in order to verify that your code matches its specification (such as what has been done for $\text{SHA-}256$), one needs to extract the semantics from the code. This can be done using the Verified Software Toolchain or Why3. The first provides you with a Coq file from which you can start your proofs. The second, given ACSL-annotated C code, works with Frama-C and tries to prove the properties with the SMT provers named above. On failure, it generates goals and asks you to prove them with Coq.
I already mentioned the idea of proving protocols (cf. EasyCrypt et al.); for cryptographic proofs, the idea is basically the same: specify using logical properties.
Coq example (trivial to prove by hand for an undergrad student): $$\forall\ f\ g,\ f \text{ is injective} \implies g \text{ is injective} \implies g \circ f \text{ is injective}$$
Require Import Coq.Sets.Image.

Theorem injective_trans:
  forall A B C (f: A -> B) (g: B -> C) (h: A -> C),
  (forall x, h x = g (f x)) -> injective A B f -> injective B C g -> injective A C h.
Proof.
  intros A B C f g h H_h_equal_g_f H_f_injective H_g_injective.
  unfold injective in *.
  intros x y H_h_x_y.
  apply H_f_injective.
  apply H_g_injective.
  rewrite <- H_h_equal_g_f.
  rewrite <- H_h_equal_g_f.
  apply H_h_x_y.
Qed.
Here is a list of famous proven theorem in Coq.
Security bounds in Keccak have been proven with the aid of a computer. But their process was different: they exhaustively generated the trails and showed the properties. Thus it was computer-aided, but not verified.
And finally, a recent paper from FSE 2016: Verifiable side-channel security of cryptographic implementations: constant-time MEE-CBC
As SEJPM pointed out, this paper takes another kind of approach (more game-based). They created an analyzer that generated a cipher scheme satisfying the AE scheme. However, I don't know whether they proved their properties by hand or used a proof assistant (which might be possible, given that they used OCaml to code their analyzer).
I do not know if my answer has been helpful. I know it is not mainly focused on cryptography, but I hope it will give you some idea of how things work.
Formal verification is used to verify the security services of your algorithm or your protocol. It uses a specific high-level modeling specification language to describe your security solution, and uses back-end formal verification tools to check whether there are security breaches. The outcome of the formal verification will tell you whether your protocol is safe or unsafe. I suggest you have a look at AVISPA, which is one of the most commonly used tools to verify the security properties of internet protocols.
Possibly relevant: In the first chapter of
Where Mathematics Comes From: How The Embodied Mind Brings Mathematics Into Being, Lakoff & Núñez discuss a few examples of individuals who have suffered some kind of neurological damage that partially or completely disables their ability to perform numerical calculations. With respect to this question, the most interesting example is probably this one:
Not only is rote calculation localized separately from basic arithmetic abilities but algebraic abilities are localized separately from the capacity for basic arithmetic. Dehaene (1997) cites a patient with a Ph.D. in chemistry who has
acalculia, the inability to do basic arithmetic. For example, he cannot solve $2\cdot 3$, $7-3$, $9\div 3$, or $5\cdot 4$. Yet he can do abstract algebraic calculations. He can simplify $(a \cdot b) / (b \cdot a)$ into $1$ and $a \cdot a \cdot a$ into $a^3$, and could recognize that $(d/c) + a$ is not generally equal to $(d + a) / (c + a)$. Dehaene concludes that algebraic calculation and arithmetic calculation are processed in different brain regions.
The citation to Dehaene (1997) refers to the book
The Number Sense. The relevant passage from that book seems to be p. 199:
Up to now, this book has been concerned only with elementary arithmetic. But what about more advanced mathematical abilities, such as algebra? Should we postulate yet other neuronal networks dedicated to them? Recent discoveries by the Austrian neuropsychologist Margarete Hittmair-Delazer seem to suggest so. She has found that acalculic patients do not necessarily lose their knowledge of algebra. One of her patients, like Mrs. B, lost his memory of addition and multiplication tables following a left subcortical lesion. Yet he could still recalculate arithmetic facts by using sophisticated mathematical recipes that indicated an excellent conceptual mastery of arithmetic. For instance, he could still solve $7 \times 8$ as $7 \times 10 - 7 \times 2$. Another patient, who had a Ph.D. in chemistry, had become acalculic to the point of failing to solve $2\times 3$, $7-3$, $9\div 3$, or $5\times 4$. He could nevertheless still execute abstract formal calculations. Judiciously making use of the commutativity, associativity,and distributivity of arithmetic operations, he was able to simplify $\frac{a \times b}{b \times a}$ into $1$ or $a\times a\times a$ into $a^3$, and he recognized that the equation $\frac{d}{c} + a = \frac{d+a}{c+a}$ is generally false. Although this issue has been the matter of very little research to date, these two cases suggest, against all intuition, that the neuronal circuits that hold algebraic knowledge must be largely independent of the networks involved in mental calculation.
Regarding these passages, a couple of comments are in order:
I am not sure if acalculia and dyscalculia are the same phenomenon, or what exactly the relationship is between them. Nor is it clear from this brief excerpt whether the patient (the one with the Ph.D. in Chemistry) developed acalculia in adulthood, for example as the result of some injury, or whether it was a condition he had all along. (Hittmair-Delazer's first patient suffered from a "left subcortical lesion", but the Ph.D. in Chemistry is "another patient", of whom no details are provided.) The distinction seems significant, as there is (probably?) a big difference between someone learning algebraic reasoning despite suffering from acalculia, on the one hand, and someone retaining algebraic reasoning that they learned prior to the onset of acalculia, on the other. Algebraic reasoning and manipulation is one thing, but proof-based advanced-level mathematics at the level of an undergraduate analysis course is (maybe?) something else.
The primary source for this is probably one of the following two papers:
HITTMAIR-DELAZER, M., SAILER, U., and BENKE, T. Impaired arithmetic facts but intact conceptual knowledge – a single case study of dyscalculia.
Cortex, 31: 139-147, 1995.
HITTMAIR-DELAZER, M., SEMENZA, C., and DENES, G. Concepts and facts in calculation.
Brain, 117: 715-728, 1994.
Both of these papers are cited in the references of yet another paper:
Dehaene, S., & Cohen, L. (1997). Cerebral pathways for calculation: Double dissociation between rote verbal and quantitative knowledge of arithmetic.
Cortex, 33(2), 219-250.
I suspect any of these three papers might have additional references that could be helpful in understanding these phenomena.
Learning about Lagrangian and Hamiltonian mechanics introduced me to an entirely new way of solving physics problems. The first time I’d read about this topic was in The Principle of Least Action chapter in Vol. 2 of The Feynman Lectures on Physics. I was introduced to a different perspective of viewing the physical world, perhaps a more general one than Newton’s laws.
A famous example of a system whose equations of motion can be more easily attained using Lagrangian or Hamiltonian mechanics is the double pendulum. I saw a Wolfram Science animation of the system, but it didn’t have the right
a e s t h e t i c for me, and I wanted to write one of my own to investigate the system for various initial conditions and its chaotic behaviour.
The following shows the double pendulum system:
The Lagrangian of the system is:
$$ \mathcal{L} = T - V $$
$$ T = \frac{1}{2}m_1 l_1^2 \dot{\theta}_1^2 + \frac{1}{2}m_2\left[l_1^2 \dot{\theta}_1^2 + l_2^2 \dot{\theta}_2^2 + 2l_1 l_2 \dot{\theta}_1 \dot{\theta}_2 \cos(\theta_1 - \theta_2)\right]$$ $$ V = -(m_1 + m_2)gl_1\cos \theta_1 - m_2gl_2\cos\theta_2 $$
After a very long derivation, Hamilton’s equations can be obtained:
$$ \dot{\theta}_1 = \frac{l_2 p_{\theta_1} - l_1 p_{\theta_2}\cos(\theta_1 - \theta_2)}{l_1^2 l_2[m_1 + m_2\sin^2(\theta_1-\theta_2)]} $$
$$ \dot{\theta}_2 = \frac{l_1 (m_1 + m_2)p_{\theta_2} - l_2 m_2 p_{\theta_1}\cos(\theta_1 - \theta_2)}{l_1 l_2^2 m_2[m_1 + m_2\sin^2(\theta_1-\theta_2)]} $$ $$ \dot{p}_{\theta_1} = -(m_1 + m_2)gl_1\sin\theta_1 - C_1 + C_2$$ $$ \dot{p}_{\theta_2} = -m_2gl_2\sin\theta_2 + C_1 - C_2$$ $$ C_1 = \frac{p_{\theta_1}p_{\theta_2}\sin(\theta_1-\theta_2)}{l_1 l_2[m_1 + m_2\sin^2(\theta_1-\theta_2)]} $$ $$ C_2 = \frac{l_2^2 m_2 p_{\theta_1}^2 + l_1^2(m_1 + m_2)p_{\theta_2}^2 - 2 l_1 l_2 m_2 p_{\theta_1} p_{\theta_2} \cos(\theta_1 - \theta_2)}{2l_1^2 l_2^2[m_1 + m_2\sin^2(\theta_1-\theta_2)]^2}\sin[2(\theta_1 - \theta_2)] $$
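One way to sanity-check Hamilton's equations for this system is to integrate them numerically and watch the conserved total energy. The standalone Python sketch below mirrors the structure of the Lua code later in the post; the parameter values and step size are arbitrary test choices of mine, not ones from the simulator:

```python
import math

# test parameters (arbitrary): gravity, masses, rod lengths
g, m1, m2, l1, l2 = 9.8, 1.0, 1.0, 1.0, 1.0

def derivs(s):
    """Right-hand side of Hamilton's equations for (t1, t2, p1, p2)."""
    t1, t2, p1, p2 = s
    d = t1 - t2
    D = m1 + m2 * math.sin(d) ** 2
    C1 = p1 * p2 * math.sin(d) / (l1 * l2 * D)
    C2 = (m2 * (l2 * p1) ** 2 + (m1 + m2) * (l1 * p2) ** 2
          - 2 * m2 * l1 * l2 * p1 * p2 * math.cos(d)) \
         * math.sin(2 * d) / (2 * (l1 * l2 * D) ** 2)
    return (
        (l2 * p1 - l1 * p2 * math.cos(d)) / (l1 ** 2 * l2 * D),
        ((m1 + m2) * l1 * p2 - m2 * l2 * p1 * math.cos(d)) / (m2 * l1 * l2 ** 2 * D),
        -(m1 + m2) * g * l1 * math.sin(t1) - C1 + C2,
        -m2 * g * l2 * math.sin(t2) + C1 - C2,
    )

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(s)
    k2 = derivs([x + dt / 2 * k for x, k in zip(s, k1)])
    k3 = derivs([x + dt / 2 * k for x, k in zip(s, k2)])
    k4 = derivs([x + dt * k for x, k in zip(s, k3)])
    return [x + dt / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    """The Hamiltonian H = T + V, which should be conserved."""
    t1, t2, p1, p2 = s
    d = t1 - t2
    D = m1 + m2 * math.sin(d) ** 2
    T = (m2 * (l2 * p1) ** 2 + (m1 + m2) * (l1 * p2) ** 2
         - 2 * m2 * l1 * l2 * p1 * p2 * math.cos(d)) / (2 * m2 * (l1 * l2) ** 2 * D)
    V = -(m1 + m2) * g * l1 * math.cos(t1) - m2 * g * l2 * math.cos(t2)
    return T + V

s = [1.0, 1.0, 0.0, 0.0]  # initial angles (rad) and momenta
E0 = energy(s)
for _ in range(1000):
    s = rk4_step(s, 0.001)
print(abs(energy(s) - E0))  # tiny energy drift
```

A drift that shrinks roughly like $dt^4$ as the step is refined is the expected RK4 signature; a sign error in the equations would instead blow the energy up quickly.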
These are very formidable-looking equations, and it is almost impossible to determine the particle trajectories by solving these equations analytically! So how does one solve them for practical purposes? Numerical methods and programming. I used Lua to program the simulator, with the LÖVE framework for the graphics.
Since the only data structure in Lua is a table, I decided to see how I could make use of that property for this program. Lua doesn’t have functions to perform scalar multiplication or addition between tables, so I wrote some:
function directSum(a, b)
  local c = {}
  for i, v in pairs(a) do
    c[i] = a[i] + b[i]
  end
  return c
end

-- 't' rather than 'table', to avoid shadowing Lua's table library
function scalarMultiply(scalar, t)
  local output = {}
  for i, v in pairs(t) do
    output[i] = scalar * t[i]
  end
  return output
end
So now I can store values, such as the initial conditions and parameters of the system in a table and perform basic arithmetic operations between tables to change values. Now to implement the physics of the problem.
First, I defined a generator that randomly generates initial values (within a given range) of the masses of the bobs, the lengths of the rods, their angles with respect to the vertical, their initial angular velocities and calculated the momenta of the bobs. This is fed into a table called
data:
function Generator()
  local self = {}
  self.m1 = love.math.random( 3, 10 )
  self.m2 = love.math.random( 3, 10 )
  self.l1 = love.math.random( 3, 10 )
  self.l2 = love.math.random( 1, 10 )
  self.t1 = love.math.random( -6.28, 6.28 )
  self.t2 = love.math.random( -6.28, 6.28 )
  self.o1 = love.math.random( -4, 4 )
  self.o2 = love.math.random( -2, 2 )
  -- conjugate momenta computed from the angular velocities
  self.p1 = (self.m1 + self.m2) * (math.pow(self.l1, 2)) * self.o1
      + self.m2 * self.l1 * self.l2 * self.o2 * math.cos(self.t1 - self.t2)
  self.p2 = self.m2 * (math.pow(self.l2, 2)) * self.o2
      + self.m2 * self.l1 * self.l2 * self.o1 * math.cos(self.t1 - self.t2)
  return self
end

data = Generator()
Now we set up the equations of motion using a function called
Hamiltonian. It takes the initial values from
data to perform calculations, and a new table called
phase which consists of the phase space variables to update the angles and momenta over time:
function Hamiltonian(phase, data)
  local update = {}
  t1 = phase[1]
  t2 = phase[2]
  p1 = phase[3]
  p2 = phase[4]
  C0 = data.l1 * data.l2 * (data.m1 + data.m2 * math.pow(math.sin(t1 - t2), 2))
  C1 = (p1 * p2 * math.sin(t1 - t2)) / C0
  C2 = (data.m2 * (math.pow(data.l2 * p1, 2)) + (data.m1 + data.m2) * (math.pow(data.l1 * p2, 2))
      - 2 * data.l1 * data.l2 * data.m2 * p1 * p2 * math.cos(t1 - t2))
      * math.sin(2 * (t1 - t2)) / (2 * (math.pow(C0, 2)))
  -- Hamilton's equations: {theta1_dot, theta2_dot, p1_dot, p2_dot}
  update[1] = (data.l2 * p1 - data.l1 * p2 * math.cos(t1 - t2)) / (data.l1 * C0)
  update[2] = (data.l1 * (data.m1 + data.m2) * p2 - data.l2 * data.m2 * p1 * math.cos(t1 - t2))
      / (data.l2 * data.m2 * C0)
  update[3] = -(data.m1 + data.m2) * g * data.l1 * math.sin(t1) - C1 + C2
  update[4] = -data.m2 * g * data.l2 * math.sin(t2) + C1 - C2
  return update
end
All the required information with regard to the physics is now processed. To solve the differential equations, I implemented the Runge-Kutta method of order 4, performing operations on the tables using
directSum and
scalarMultiply. These operations take place in
Solver, which takes the time input
dt from LÖVE in
love.update().
function Solver(dt)
  local phase = {data.t1, data.t2, data.p1, data.p2}
  local k1 = Hamiltonian(phase, data)
  local k2 = Hamiltonian(directSum(phase, scalarMultiply(dt / 2, k1)), data)
  local k3 = Hamiltonian(directSum(phase, scalarMultiply(dt / 2, k2)), data)
  local k4 = Hamiltonian(directSum(phase, scalarMultiply(dt, k3)), data)
  local R = scalarMultiply(1 / 6 * dt, directSum(directSum(k1, scalarMultiply(2.0, k2)),
      directSum(scalarMultiply(2.0, k3), k4)))
  data.t1 = data.t1 + R[1]
  data.t2 = data.t2 + R[2]
  data.p1 = data.p1 + R[3]
  data.p2 = data.p2 + R[4]
end

-- LÖVE passes the frame time step dt to love.update
function love.update(dt)
  Solver(dt)
end
After setting up the graphics end, I obtain nice animations like this:
I’ll probably end up creating a new post with cool patterns emerging from this simulation, possibly checking for chaotic behaviour with initial conditions that are not so different from a previous state. |
Weeks-Chandler-Andersen perturbation theory
Revision as of 15:16, 21 June 2007
The Weeks-Chandler-Andersen perturbation theory is based on the following decomposition of the intermolecular pair potential (in particular, the [[Lennard-Jones model | Lennard-Jones]] potential). The reference system pair potential is given by (Eq. 4 Ref. 1):
:<math>
u_{\rm repulsive} (r) = \left\{
\begin{array}{ll}
u_{\rm LJ}(r) + \epsilon & {\rm if} \; r < 2^{1/6}\sigma \\
0 & {\rm if} \; r \ge 2^{1/6}\sigma
\end{array} \right.
</math>
and the perturbation potential is given by (Eq. 5 Ref. 1):
:<math>
u_{\rm attractive} (r) = \left\{
\begin{array}{ll}
-\epsilon & {\rm if} \; r < 2^{1/6}\sigma \\
u_{\rm LJ}(r) & {\rm if} \; r \ge 2^{1/6}\sigma
\end{array} \right.
</math>
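The split can be checked numerically: the two pieces sum back to the full Lennard-Jones potential everywhere, and the repulsive reference part goes continuously to zero at the cut $r = 2^{1/6}\sigma$ (the minimum of the potential). A small sketch (Python; the function names are illustrative choices of mine):

```python
import math

def u_lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def wca_split(r, eps=1.0, sigma=1.0):
    """WCA decomposition: (repulsive reference part, attractive perturbation)."""
    r_min = 2 ** (1 / 6) * sigma  # position of the LJ minimum, where u = -eps
    if r < r_min:
        return u_lj(r, eps, sigma) + eps, -eps
    return 0.0, u_lj(r, eps, sigma)

# the two parts sum back to the full potential everywhere
for r in (0.95, 1.0, 2 ** (1 / 6), 1.5, 2.5):
    rep, att = wca_split(r)
    assert abs(rep + att - u_lj(r)) < 1e-12
print("ok")
```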
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials. A feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) as \(p(z)=(z-a)^n q(z)\), where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_{k-1}))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\), so \(f\) is identically zero on \(B_r(\gamma(t_1))\). But since \(\gamma(t_2)\in\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\), all derivatives of \(f\) vanish at \(\gamma(t_2)\), so \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\). Since \(z_0\) was chosen to be arbitrary, we must conclude that \(f\) is identically zero on all of \(G\). A contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is holomorphic on \(B_\rho(a)\) with \(\tilde{f}(a)=\frac{f^{(n)}(a)}{n!}\neq 0\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
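As a concrete illustration (an added example, not in the original notes): take \(f(z)=1-\cos z\) and \(a=0\). Then \[f(0)=0,\qquad f'(0)=\sin 0 = 0,\qquad f''(0)=\cos 0 = 1\neq 0,\] so the smallest such \(n\) is \(2\), and indeed \[1-\cos z = z^2\left(\frac{1}{2!}-\frac{z^2}{4!}+\cdots\right)=z^2 g(z)\] with \(g(0)=\frac{1}{2}\neq 0\).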
References: Complex Analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
To solve this, we first express the problem in a new way: instead of having a laser bouncing between two mirrors, we have a laser moving straight through a different space. To accomplish this, we draw what you would see if you were standing in the setup: where the mirrors are, we place a new, reflected image of the whole space. That is, we make copies of the $1^{\circ}$ wedge by mirroring it over the rays that bound it. The result looks something like this (figure omitted), where the line segments represent the mirrors and the red line is the laser. This is basically what you'd see if you were standing inside such a setup (although you'd see many copies of yourself too). We may regard this as a sort of "covering space" for the old one, as each point in the old one is copied many times into the new one.
This makes the problem simple: how many consecutive line segments can the red line cross? Well, let's call the distance from the center of the circle to the laser $d$. And, for simplicity, let's call the radius of the inner circle $1$ and the outer circle $2$, since we only need to preserve the ratio between those measurements (which is 10m : 20m originally). Then, the laser crosses a line segment making an angle of $\theta$ with it if:$$\sin(\theta)\leq d\leq 2\sin(\theta)$$We thus require that the smallest value of $2\sin(\theta)$ over the lines we cross is at least the greatest value of $\sin(\theta)$ over those lines - otherwise, no suitable $d$ exists.
Using this inequality, suppose the leftmost line is at an angle of $\alpha$ to the laser, and the rightmost line is at an angle of $\beta$ (which will be greater than $\alpha$). We may assume that $|\alpha|\geq|\beta|$, as the two are related by reflection. Moreover, we may then write $\alpha=(n+c)^{\circ}$ where $n$ is an integer and $-\frac{1}2\leq c\leq \frac{1}2$ - meaning $\sin((90+c)^{\circ})$ will be the maximum value of $\sin(\theta)$ to appear. Then, all we need to satisfy for the intervals for $d$ to intersect is:$$\sin(90+c)\leq 2\sin(\alpha)$$$$\sin(90+c)\leq 2\sin(\beta)$$Now, I'm
sure there's an elegant way to solve this. I'm kind of a hack though, so we'll just make some guesses: suppose that $c$ were $\frac{1}2$. Then we can find that the integers $n$ satisfying $$\sin(90+\frac{1}2)\leq 2\sin(90+n+\frac{1}2)$$are exactly those in the interval $[-60,59]$, meaning we can, from any point, cross $60$ mirrors to either side of it (as $c=\frac{1}2$ is the most extreme value possible) - for a maximum of $121$ reflections.
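This count can be sanity-checked numerically (a Python sketch, not part of the original solution):

```python
import math

c = 0.5  # worst-case angular offset, in degrees
# integers n with sin(90+c) <= 2*sin(90+n+c): mirror images reachable by the laser
ns = [n for n in range(-180, 181)
      if math.sin(math.radians(90 + c)) <= 2 * math.sin(math.radians(90 + n + c))]
print(min(ns), max(ns), len(ns))  # -60 59 120
```

The 120 values of $n$ span 60 mirrors to either side of the starting point, consistent with the claimed maximum of 121 reflections.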
However, letting $c$ be zero: it's a fairly common fact from high-school trigonometry that $\sin(30^{\circ})=\sin(150^{\circ})=\frac{1}2$, so taking $\alpha=30^{\circ}$ means that we can hit every mirror making an angle between $30^{\circ}$ and $150^{\circ}$ with the laser - this achieves the maximum of $121$ reflections. To do this, we do as follows:
Aim the laser at the outer edge of one of the mirrors, impacting the mirror at a $30^{\circ}$ angle. It will reflect $121$ times.
A picture of this would probably just look like a lot of red lines. However, the same logic applies if the angle between the mirrors were $5^{\circ}$, and it predicts $25$ reflections. That looks like this (figure omitted): notice that it hits one mirror perpendicularly, and hence retraces its steps. In other words, some advice:
Don't look in the direction you point the laser.
Perhaps the phrase "don't reinvent the wheel" is overused. However, many newer disciplines, particularly in the technology sector, seem to insist on it. One thing physical engineers learned long ago was to study the world around them, work with it, and emulate it in their designs. Network engineering should be no different. In a technical report from 2011, Thomas Meyer and Christian Tschudin from the University of Basel describe a highly elegant natural flow management method [11] that exploits much of the hard work already done in chemical kinetics. They describe a scheduling approach that creates an artificial chemistry as an analogue of a queueing network and uses the Law of Mass Action to schedule events naturally. Their framework simplifies the analysis of such networks, regardless of the depth one wishes to go into. In addition, they show that a congestion control algorithm based on this framework is TCP-fair, and they give evidence of an actual implementation (a relief to practitioners).
This report will discuss their paper at length, with the goal of covering not only their work but also the underlying ideas. The paper requires knowledge of chemical kinetics, probability, queueing theory, and networking, so it should be of interest to specialists in these disciplines; a comprehensive discussion of the fundamentals it glosses over should also make wider dissemination more likely.
Note: the full PDF of the report is attached to each section of this series at the bottom.
Overall Review of the Paper
The paper is well written, readable, and quite clear despite the breadth of scope it contains. It’s a major benefit to the authors and readers that their theoretical model has been tested in a proof-of-concept network. The work shows much promise for the networking space as a whole.
Overview of the Paper and Chemistry Basics
Just as in chemistry and physics, packet flow in a network has microscopic behavior controlled by various protocols, and macro-level dynamics. We see this in queueing theory as well–we can study (typically in steady-state to help us out, but in transient state as well) the stochastic behavior of a queue, but find in many cases that even simple attempts to scale the analysis up to networks (such as retaining memorylessness) can become overwhelming. What ends up happening in many applied cases is a shift to an expression of the macro-level properties of the network in terms of average flow. The cost of such smoothing is an unpreparedness to model and thus deal effectively with erratic behavior. This leads to overprovisioning and other undesirable and costly design choices to mitigate those risks.
Meyer and Tschudin have adapted decades of work in the chemical and physical literature to take advantage of the Law of Mass Action, designing an artificial chemistry that takes an unconventional non-work-conserving approach to scheduling. Non-work-conserving queues add a delay to packets and have tended to be avoided for various reasons, typically efficiency. Put simply, they guarantee a constant wait time for a packet regardless of the number of packets in a queue by varying the processing rate with the fill level of the queue. The more packets in the queue, the faster the server processes those packets.
Law of Mass Action in Chemistry
If we have some chemical reaction with reactants A_{1}, A_{2},\ldots, A_{n} and products B_{1}, B_{2}, \ldots, B_{m}, and the reaction is only forward¹, then we may express the reaction as

A_{1} + A_{2} + \cdots + A_{n} \overset{k}{\longrightarrow} B_{1} + B_{2} + \cdots + B_{m}

where k is the rate constant. In a simple reaction A \to P, with P as the product, we can see the rate expressed nicely in a very basic differential equation form[9]:

-\frac{\text{d}c_{A}}{\text{d}t} = k\cdot c_{A}

This should actually look somewhat similar to problems seen in basic calculus courses as well. The rate of change of draining the reactant is a direct function of the current concentration.
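A quick numerical illustration (a sketch; the rate constant, initial concentration, and step size are arbitrary choices, not from the paper): forward-Euler integration of the decay equation tracks the analytic solution c_A(t) = c_A(0) e^{-kt}.

```python
import math

# Forward-Euler integration of dc/dt = -k*c (illustrative parameter values)
k, c, dt, t_end = 0.5, 1.0, 1e-4, 2.0
for _ in range(int(t_end / dt)):
    c -= k * c * dt           # drain at a rate proportional to concentration

analytic = math.exp(-k * t_end)
print(c, analytic)            # the two values agree to about 4 decimal places
```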
The reaction rate r_{f} of a forward reaction is proportional to the concentrations of the reactants:r_{f} = k_{f}c_{A_{1}}c_{A_{2}}\cdots c_{A_{n}}
for a set of reactants \{A_{i}\}.
The Queuing Analogue and Assumptions
Meyer and Tschudin[11] express the networking version of these chemical reactions in a very natural way. Packets are molecules. A molecular species is a queue, so molecules of species X go into queue X. The molecular species is a temporary buffer that stores particular packet types until they are consumed by a reaction (processed by some server in the queueing space). FIFO (first-in-first-out) discipline is assumed.
The figure above from the technical report shows how a small system of reactions looks in the chemical space and the queuing space. Where analysis and scheduling can get complicated is in the coupled nature of the two reactions. The servers both drain packets from queue Y, so they are required to coordinate their actions in some way. It’s important to note here that this equivalence rests on treating the queuing system as M/M/1 queues with a slightly modified birth-death process representation.
Typically, in an M/M/1 queue, the mean service rate is constant. That is, the service rate is independent of the state the birth-death process is in. However, if we model the Law of Mass Action using a birth-death process, we’d see that the rate of service (analogously, the reaction rate) changes depending on the fill-level of the queue (or concentration of the reactant). We’ll investigate this further in the next sections, discussing their formal analysis.
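The fill-level-dependent service rate can be illustrated with a small Gillespie-style simulation (a sketch with arbitrary rates, not the authors' code). For a queue served at total rate k·n when it holds n packets, Little's law predicts a mean sojourn time of 1/k independent of the arrival rate:

```python
import random

random.seed(1)

def mean_delay(arrival_rate, k, t_end=3000.0):
    """Simulate a mass-action queue: Poisson arrivals at `arrival_rate`,
    total service rate k * fill_level, FIFO. Returns mean packet sojourn time."""
    t, queue, delays = 0.0, [], []
    while t < t_end:
        rate = arrival_rate + k * len(queue)   # total event rate
        t += random.expovariate(rate)
        if random.random() < arrival_rate / rate:
            queue.append(t)                    # arrival: record entry time
        else:
            delays.append(t - queue.pop(0))    # departure: record sojourn
    return sum(delays) / len(delays)

# Mean delay stays near 1/k = 1.0 at both light and heavy load
print(mean_delay(2.0, 1.0), mean_delay(20.0, 1.0))
```

Both printed values sit near 1.0 even though the second queue carries ten times the load, which is exactly the constant-delay behavior described above.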
Related Work and Precedent
The authors noted that adding packet delay is not unheard of in the networking space. Delay Frame Queuing[12] utilizes non-work-conserving transmission at edge nodes in an ATM network in order to guarantee upper bounds on delay and jitter for virtual circuits. In 2008, Kamimura et al. proposed a Constant Delay Queuing policy that assigns a constant delay to each packet of a particular priority stream and forwards other best-effort packets during the delay[8].
Continuation
The next article will discuss the formal mathematical model of an artificial packet chemistry.
References
[1] Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
[2] Feinburg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
[3] Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
[4] Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
[5] Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
[6] Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
[7] Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
[8] Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
[9] Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
[10] McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
[11] Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
[12] Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
[13] Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd annual symposium on autonomous intelligent networks and systems (2003).

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
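A quick brute-force check of associativity for the multiplication rule above (a Python sketch; the choice $\delta = 5$ is arbitrary). Random sampling is of course evidence, not a proof:

```python
from fractions import Fraction as F
import random

delta = F(5)  # arbitrary choice of delta for illustration

def mul(p, q):
    # (a + b*sqrt(delta)) * (c + d*sqrt(delta)) per the rule above
    (a, b), (c, d) = p, q
    return (a * c + b * d * delta, b * c + a * d)

random.seed(0)
rnd = lambda: (F(random.randint(-9, 9)), F(random.randint(-9, 9)))
for _ in range(200):
    x, y, z = rnd(), rnd(), rnd()
    assert mul(mul(x, y), z) == mul(x, mul(y, z))
print("associativity holds on all samples")
```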
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = a_n\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^{n}}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
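As a concrete instance of those partial sums (a Python sketch; $b = 10$ gives Liouville's constant, and only the first few terms are computed):

```python
from fractions import Fraction
from math import factorial

b = 10
s, partials = Fraction(0), []
for M in range(1, 6):
    s += Fraction(1, b ** factorial(M))   # exact rational partial sum
    partials.append(s)

# the partial sums increase monotonically toward the limit L
assert all(x < y for x, y in zip(partials, partials[1:]))
print(float(partials[-1]))  # approximately 0.110001...
```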
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
I am implementing the techniques described in the classic Local Type Inference paper. Specifically, I am implementing the type argument synthesis algorithm from section 3.
My algorithm seems to mostly work, but it doesn’t seem to produce reasonable results when a quantified type variable appears in the
result of a function, but not in its arguments. For context, I’ve reproduced the $\text{App-InfAlg}$ rule here:
$$ \dfrac{\begin{align}\tt\Gamma \vdash f \in All(\overline{X}) \overline{T} \rightarrow R \qquad &\tt \Gamma \vdash \overline{e} \in \overline{S} \qquad \lvert\overline{X}\rvert > 0\\ \tt\emptyset \vdash_\overline{X}\overline{S} <: \overline{T} \Rightarrow \overline{C}&\qquad\tt \sigma \in \bigwedge \overline{C} \Downarrow R \end{align}} {\tt\Gamma \vdash f(\overline{e}) \in \sigma R \Rightarrow f[\sigma \overline{X}](\overline{e})} (\text{App-InfAlg}) $$
The most important piece here is the $\tt\emptyset \vdash_\overline{X}\overline{S} <: \overline{T} \Rightarrow \overline{C}$ premise, which invokes the constraint generation algorithm. Importantly, though, it
only generates constraints using $\tt\overline{S}$ and $\tt\overline{T}$, which correspond to the argument types (that is, the types to the left of the arrow). This is problematic for types like this, which include type variables that only appear in the result:
$$ \tt All(X, Y)(X) \rightarrow Y $$
Or, even more simply:
$$ \tt All(X)() \rightarrow X $$
In this case, my implementation happily infers the type of the above two functions to be $\tt Y$ and $\tt X$, respectively, which are clearly not valid types, since they are type variables that have escaped their scope!
My guess is that my implementation is wrong, and the algorithm accounts for this case. In that situation, I would expect the algorithm to either reject the applications or infer $\tt Bot$ as the result type. However, I don’t see how this could possibly be accounted for, since the algorithmic inference rule only uses $\tt R$ for the purposes of turning the constraints $\overline{C}$ into the substitution set $\sigma$.
How does the algorithm handle this situation?
Let $A\subset \mathcal{P}(\{1, \dots, n\}) $ and $ B \subset \{1, \dots, n \}$
We say $A$ shatters $B$ if $\forall y \subset B, \exists x \in A$ such that $x \cap B = y$.
I am asked to show that if $A$ does not shatter the sets $\{1,2,3\},\{2,3,4\},\ \dots, \{n-2,n-1,n\}, \{n-1,n,1\}, \{n,1,2\}$ and $n$ is a multiple of $3$, then $|A| \leq 7^{\frac{n}{3}}$
My current thinking is that, for each of these $3$-sets, we have to miss at least one of their subsets.
Specifically, for each $a \subset \{x,y,z\}$ there are $2^{n-3}$ subsets of $\{1,\dots,n\}$ that intersect with $\{x,y,z\}$ to give $a$. (Call the set of these $2^{n-3}$ subsets $C_{\{x,y,z\}}(a)$.) Hence if $A$ does not shatter $\{1,2,3\}$ because we are missing $a$, then $A$ cannot contain any of these $2^{n-3}$ subsets.
I want to say that there is some subset $B \subset \mathcal{P}(\{1,\dots,n\})$ of size $8\cdot 7^{\frac{n}{3}}$ such that we must have $A \subset B$, and we may only have at most $\frac{7}{8}$ of the elements of $B$. I suspect we have something like:
$B = \bigcup_{\{x,y,z\} \text{ mentioned earlier}}\bigcup_{a \subset \{x,y,z\}} C_{\{x,y,z\}}(a)$
However, at this point I am stuck and I'm not sure how to proceed. I can't think of a nice way to count the size of $B$ and show it is what I want, because I can't see an easy way to account for all of the overlaps occurring.
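For what it's worth, the bound is tight in the base case, which can be brute-forced (a Python sketch, encoding subsets of $\{1,2,3\}$ as bitmasks; not part of the question):

```python
from itertools import combinations

# n = 3: the only listed triple is {1,2,3}; A subset of P({1,2,3}) shatters it
# iff the traces {x ∩ {1,2,3} : x in A} hit all 8 subsets, i.e. iff A = P({1,2,3}).
universe = list(range(8))  # subsets of {1,2,3} as 3-bit masks
best = max(len(A)
           for k in range(len(universe) + 1)
           for A in combinations(universe, k)
           if {x & 0b111 for x in A} != set(range(8)))
print(best)  # 7, matching 7^(3/3), so the bound is tight for n = 3
```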
A basic fact about $3$ dimensional vectors is that the quantity
$\pm\det\left( \begin{array}{ccc} a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \\ c_{1} & c_{2} & c_{3} \\ \end{array} \right)$
is equal to the volume of the parallelepiped determined by the vectors $\vec{a}$, $\vec{b}$ and $\vec{c}$ where $\vec{a} = \langle a_1, a_2, a_3 \rangle$, $\vec{b} = \langle b_1, b_2, b_3\rangle$, and $\vec{c} = \langle c_1, c_2, c_3\rangle$. From e.g. the explicit formula for the determinant of a matrix in terms of its entries it is evident that this is the same as
$\pm\det\left( \begin{array}{ccc} a_{1} & b_{1} & c_{1} \\ a_{2} & b_{2} & c_{2} \\ a_{3} & b_{3} & c_{3} \\ \end{array} \right)$
so that the volume of the parallelepiped determined by $\vec{a}$, $\vec{b}$ and $\vec{c}$ is the same as the volume of the parallelepiped determined by $\langle a_1, b_1, c_1\rangle$, $\langle a_2, b_2, c_2\rangle$, and $\langle a_3, b_3, c_3\rangle$.
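The equality itself is easy to confirm numerically (a small sketch with an arbitrary example matrix; this is of course not the geometric proof being asked for):

```python
def det3(m):
    # cofactor expansion along the first row
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
    return a1*(b2*c3 - b3*c2) - a2*(b1*c3 - b3*c1) + a3*(b1*c2 - b2*c1)

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]      # arbitrary example vectors as rows
T = [list(col) for col in zip(*M)]          # the transpose: columns become rows
print(det3(M), det3(T))  # -3 -3
```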
My question is:
Is there a geometric proof that the volumes of the two parallelepipeds are the same?
Dear Uncle Colin, I have a problem with a limit! I need to figure out what $\left( \tan \left(x\right) \right)^x$ is as $x \rightarrow 0$. -- Brilliant Explanation Required Now! Our Understanding's Limited; L'Hôpital's Inept Right, BERNOULLI, stop badmouthing L'Hôpital and let's figure out this limit. It's clearly an indeterminate…
"That @ColinTheMathmo chap had a blog post on Stirling's approximation, too," said the student, spotting a chance to move the lesson away from his disappointing mock exam results. "Used it to work out 52!" "I saw it," said the Mathematical Ninja, polishing his weaponry smugly. "It... wasn't bad, exactly..." "But…
Dear Uncle Colin, I've been asked to find the last two digits of $19^{100}$. For what reason, I cannot tell. However, my calculator bums out before I get to $19^{10}$! -- Many Other Digits, Unfindable Last Ones Hi, there, MODULO! What do you know, the clue to your problem is…
There's not much of a story to this post, except for a few curiosities the decimal system throws up (largely as a result of the binomial expansion). Some time ago, I looked at some Fibonacci witchcraft: $\frac{1}{999,998,999,999} = 0.000\,000\, 000\,001\, 000\,001\, 000\,002\, 000\,003\, 000\,005\, 000\,008\,...$, neatly enumerating the Fibonacci sequence…
Dear Uncle Colin, I've been struggling to get my head around what happens if you chop infinity in two? Is half of infinity still infinity? Help! How Infinity Lies Beyond Every Reasonable Theory Hi, HILBERT! The short answer is yes: halving infinity gives you infinity. (Once you get to infinity,…
"I suppose," said the Mathematical Ninja, "I can allow you to put $20!$ into a calculator. There's absolutely no reason you should know that it turns out to be about $2.4 \times 10^{18}$." The student tapped the numbers in, frowned, thought for a moment and said "OK, I'll bite. How...?"…
Dear Uncle Colin I'm in year 9 and really annoyed: my little sister keeps beating me in maths tests! I've only ever beaten her once, and even then only by one point. It's shameful! I always do the exercises in the book and work really hard at revising, but I…
Most of the suvat equations are pretty easy to derive, as soon as you realise acceleration ($a$, assumed constant) is the derivative of velocity ($v$) with respect to time, and velocity is the derivative of position ($s$), also with respect to time. For example: $a = \diff{v}{t}$, $\int_0^t \ldots$…
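The $v$-$t$ derivation sketched there, continued in the standard way (a reconstruction, not the original post's text): \[ a = \diff{v}{t} \quad\Rightarrow\quad \int_0^t a\,\mathrm{d}t' = v - u \quad\Rightarrow\quad v = u + at, \] and then \[ s = \int_0^t v\,\mathrm{d}t' = \int_0^t \left(u + at'\right)\mathrm{d}t' = ut + \tfrac{1}{2}at^2. \]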
Spectral stiff problems in domains surrounded by thin stiff and heavy bands: Local effects for eigenfunctions

1. Departamento de Matemáticas, Estadística y Computación, Universidad de Cantabria, Avenida de los Castros s/n., Santander, 39005, Spain
2. Institute of Mechanical Engineering Problems, RAN, V.O. Bol'shoi pr., 61, St. Petersburg, 199178, Russian Federation
3. Departamento de Matemática Aplicada y Ciencias de la Computación, Universidad de Cantabria, Avenida de los Castros s/n, 39005 Santander, Spain

…density and stiffness constants are of order $O(\varepsilon^{-m-1})$ and $O(\varepsilon^{-1})$ respectively in this band, while they are of order $O(1)$ in $\Omega$; $m$ is a positive parameter and $\varepsilon \in (0,1)$, $\varepsilon\to 0$. Considering the range of the low, middle and high frequencies, we provide asymptotics for the eigenvalues and the corresponding eigenfunctions. For $m>2$, we highlight the middle frequencies, for which the corresponding eigenfunctions may be localized asymptotically in small neighborhoods of certain points of the boundary.

Mathematics Subject Classification: Primary: 35P05, 35P20; Secondary: 35B25, 73D30, 47A55, 47A75, 49R0.

Citation: Delfina Gómez, Sergey A. Nazarov, Eugenia Pérez. Spectral stiff problems in domains surrounded by thin stiff and heavy bands: Local effects for eigenfunctions. Networks & Heterogeneous Media, 2011, 6 (1): 1-35. doi: 10.3934/nhm.2011.6.1
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is?
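One way to visualise it, assuming the intended relation is $x=\pm\sqrt{R^2-b^2}\,\sin\sigma$, $z=R\cos\sigma$: holding $R$ fixed and letting $\sigma$ vary traces an ellipse with semi-axes $\sqrt{R^2-b^2}$ and $R$ and foci at $z=\pm b$. A quick numeric check (illustrative values only):

```python
import math

# Assumed defining relations: x = sqrt(R^2 - b^2) * sin(sigma), z = R * cos(sigma).
b, R = 1.0, 2.0  # illustrative values; R >= b

for sigma in (0.3, 1.1, 2.5):
    x = math.sqrt(R**2 - b**2) * math.sin(sigma)
    z = R * math.cos(sigma)
    # Constant-R curves are ellipses: x^2/(R^2 - b^2) + z^2/R^2 = 1 ...
    print(x**2 / (R**2 - b**2) + z**2 / R**2)  # approximately 1 each time
    # ... with foci at z = +/- b, since the focal distance is sqrt(R^2 - (R^2 - b^2)) = b.
```

So $R$ labels confocal ellipses and $\sigma$ plays the role of an angle along each one; as $R \to b$ the ellipse degenerates to the focal segment between $z=-b$ and $z=b$.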
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = 1-y$, where $y=\frac{1}{2z^2}-\frac{1}{4!z^4}+\dots$
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, a use of the inference rule MP (modus ponens) seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\dots,a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Summary: Given a \(n \times n\) grid of panes, the objective of the Abbott's Window puzzle is to maximize the number of lighted panes subject to the constraint that the number of lighted panes in every row, column, and diagonal is even.
Go to the Abbott's Window Puzzle page to test your skills at solving the puzzle.
Note that the text of this problem statement is taken from H.E. Dudeney (1917).
Once upon a time the Lord Abbott of St. Edmondsbury, in consequence of "devotions too strong for his head," fell sick and was unable to leave his bed. As he lay awake, tossing his head restlessly from side to side, the attentive monks noticed that something was disturbing his mind; but nobody dared ask what it might be, for the Abbott was of a stern disposition, and never would brook inquisitiveness. Suddenly he called for Father John, and that venerable monk was soon at the bedside.
"Father John," said the Abbott, "dost thou know that I came into this wicked world on a Christmas Even?"
The monk nodded assent.
"And have I not often told thee that, having been born on Christmas Even, I have no love for the things that are odd? Look there!"
The Abbott pointed to the large dormitory window, of which I give a sketch. The monk looked and was perplexed.
"Dost thou not see that the sixty-four lights add up to an even number vertically and horizontally, but that all the diagonal lines, except fourteen are of a number that is odd? Why is this?"
"Of a truth, my Lord Abbott, it is of the very nature of things, and cannot be changed."
"Nay, but it
shall be changed. I command thee that certain of the lights be closed this day, so that every line shall have an even number of lights. See thou that this be done without delay, lest the cellars be locked for a month and other grievous troubles befall thee."
Father John was at his wits' end, but after consultation with one who was learned in strange mysteries (integer programming), a way was found to satisfy the whim of the Lord Abbott. Which lights were blocked up, so that those which remained added up to an even number in every line horizontally, vertically, and diagonally, while the least possible obstruction of light was caused?
The man who was "learned in strange mysteries" pointed out to Father John that the orders of the Lord Abbott of St. Edmondsbury might be easily carried out by blocking up [a certain number] of the lights in the window. Father John held that the four corners should also be darkened, but the sage explained that it was desired to obstruct no more light than was absolutely necessary, and he said, anticipating Lord Dundreary, "A single pane can no more be in line with itself than one bird can go into a corner and flock in solitude. The Abbott's condition was that no diagonal
lines should contain an odd number of lights."
Therefore, the problem is as follows. Consider a window made up of a grid with 8x8 panes of glass. In each vertical and horizontal direction, there is an even number of glass panes (8). However, there are 14 diagonals with an even number of glass panes and 16 diagonals with an odd number of glass panes.
Darken the smallest number of glass panes such that all rows, columns, and diagonals (not including the corner squares as diagonals) have an even number of glass panes not darkened.
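Those diagonal counts can be reproduced in a few lines of Python — a sanity check on the problem statement, not part of Dudeney's text. Note that the 14/16 split counts all diagonals including the two length-1 corner cells in each direction:

```python
n = 8
grid = [[1] * n for _ in range(n)]  # 1 = lighted pane, fully lit window

def diagonal_sums(g):
    """Sums along all 2*(2n-1) diagonals, including the length-1 corners."""
    n = len(g)
    anti = [sum(g[i][s - i] for i in range(n) if 0 <= s - i < n) for s in range(2 * n - 1)]
    main = [sum(g[i][i - d] for i in range(n) if 0 <= i - d < n) for d in range(-(n - 1), n)]
    return anti + main

sums = diagonal_sums(grid)
odd = sum(1 for s in sums if s % 2 == 1)
print(odd, len(sums) - odd)  # 16 odd diagonals and 14 even ones, as stated
```

The same `diagonal_sums` helper can verify any candidate darkening pattern: a pattern is feasible exactly when every row, column, and diagonal sum is even.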
The Abbott's Window puzzle can be formulated mathematically as a mixed integer linear programming problem.
Parameters: \(n\) = number of rows/columns
Variables:
\(x_{ij}\) = 1 if the pane in row \(i\) and column \(j\) is lighted, 0 otherwise
\(y_i\) = integer variable for row \(i\)
\(w_j\) = integer variable for column \(j\)
\(z_{1k}\), \(z_{2k}\), \(z_{3k}\), \(z_{4k}\) = integer variables for the diagonals
Objective function: maximize the total number of panes with lights
maximize \(\sum_{i=1}^n \sum_{j=1}^n x_{ij}\)
Constraint 1: the number of lighted panes in row \(i\) must be even.
\(\sum_{j=1}^n x_{ij} = 2 y_i, \forall i=1..n\)
Constraint 2: the number of lighted panes in column \(j\) must be even.
\(\sum_{i=1}^n x_{ij} = 2 w_j, \forall j=1..n\)
Constraint 3: the number of lighted panes in each lower left diagonal must be even.
\(\sum_{j=1}^{n-k+1} x_{k+j-1,j} = 2 z_{1k}, \forall k=1..n-1\)
Constraint 4: the number of lighted panes in each upper right diagonal must be even.
\(\sum_{i=1}^{n-k+1} x_{i,k+i-1} = 2 z_{2k}, \forall k=2..n-1\)
Constraint 5: the number of lighted panes in each lower right diagonal must be even.
\(\sum_{i=k}^n x_{i,n-i+k} = 2 z_{3k}, \forall k=2..n-1\)
Constraint 6: the number of lighted panes in each upper left diagonal must be even.
\(\sum_{j=1}^k x_{k-j+1,j} = 2 z_{4k}, \forall k=2..n\)
To solve this mixed integer linear programming problem, we can use one of the NEOS Server solvers in the Mixed Integer Linear Programming (MILP) category.
Here we provide a GAMS model for the specific instance of an 8 x 8 grid of window panes. Note that the GAMS model describes the diagonal constraints explicitly, one for each diagonal, instead of the algebraic representation as in the formulation presented above. It is easy to change the model to a different size of \(n\): (1) in the 'set row /1*8/;' statement, change the number 8 to \(n\), (2) in the 'set D /1*13/;' statement, change the number 13 to the value of \((2(n-1) - 1)\), and (3) in the 'rightdiag(I,J,D) = yes$(ord(i)-ord(j)=ord(D)-7);' statement, change the number 7 to \((n-1)\).
Set row /1*8/;
alias(row, i, j);
Set D /1*13/ ;
Set leftdiag(I, J, D);
Set rightdiag(I, J, D);
leftdiag(I, J, D) = yes$(ord(i)+ord(j)=ord(D)+2);
rightdiag(I, J, D) = yes$(ord(i)-ord(j)=ord(D)-7);
Variables
z total number;
Binary Variables
x(i,j) set to one if pane is lit;
Integer Variables
y(i) number of entries in rows
w(j) number of entries in columns
z1(D) number of entries in each left diagonal
z2(D) number of entries in each right diagonal;
Equations
ans define objective function
rows(i) constraint for rows
cols(j) constraint for cols
diagL(D) constraint for left diagonal
diagR(D) constraint for right diagonal;
ans.. z =E= sum((i,j), x(i,j));
rows(i).. sum(j, x(i,j)) =E= 2*y(i);
cols(j).. sum(i, x(i,j)) =E= 2*w(j);
diagL(D).. sum((i,j)$(leftdiag(i, j, D)), x(i,j)) =E= 2*z1(D);
diagR(D).. sum((i,j)$(rightdiag(i, j, D)), x(i,j)) =E= 2*z2(D);
model abbott /all/;
solve abbott using mip maximizing z;
display x.l;
Back to the top
* H.E. Dudeney. Amusements in Mathematics. Thomas Nelson and Sons, 1917. Available at The Project Gutenberg eBook 16713.
* T. Hurlimann. Solving and Running Models through the Internet. lpl.unifr.ch/lpl/mainmodel.html
On page 5 of Landau & Lifshitz
Fluid Mechanics (2nd edition), the authors pose the following problem:
Write down the equations for one-dimensional motion of an ideal fluid in terms of the variables $a$, $t$, where $a$ (called a Lagrangian variable) is the $x$ coordinate of a fluid particle at some instant $t=t_0$.
The authors then go on to give their solutions and assumptions. Here are the important parts:
The coordinate $x$ of a fluid particle at an instant $t$ is regarded as a function of $t$ and its coordinate $a$ at the initial instant: $x=x(a,t)$.
For the condition of mass conservation the authors arrive at (where $\rho_0 = \rho(a)$ is the given initial density distribution):
$$ \rho\,\mathrm{d}x = \rho_0 \mathrm{d}a $$
or alternatively:
$$ \rho\left(\frac{\partial x}{\partial a}\right)_t = \rho_0 $$
Now the authors go on to write out Euler's equation, where I start to miss something. With the velocity of the fluid particle $v=\left(\frac{\partial x}{\partial t}\right)_a$ and $\left(\frac{\partial v}{\partial t}\right)_a$ the rate of change of the velocity of the particle during its motion, they write:
$$ \left(\frac{\partial v}{\partial t}\right)_a = -\frac{1}{\rho_0} \left(\frac{\partial p}{\partial a}\right)_t $$
How are the authors arriving at that equation?
In particular, when looking at Euler's equation: $$ \frac{\partial\mathbf{v}}{\partial t} + \left( \mathbf{v} \cdot \textbf{grad} \right) \mathbf{v} = - \frac{1}{\rho} \textbf{grad}\, p $$ what happens with the second term on the LHS $\left( \mathbf{v} \cdot \textbf{grad} \right) \mathbf{v}$? Why does it not appear in the authors' solution?
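One way to read this (my gloss, not from the book): a time derivative taken at fixed $a$ follows one and the same particle, so it already is the material derivative, and the convective term is absorbed into it:
$$ \left(\frac{\partial v}{\partial t}\right)_a = \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x}. $$
For the right-hand side, the chain rule combined with the mass-conservation relation $\rho\left(\frac{\partial x}{\partial a}\right)_t = \rho_0$ gives
$$ \left(\frac{\partial p}{\partial a}\right)_t = \left(\frac{\partial p}{\partial x}\right)_t \left(\frac{\partial x}{\partial a}\right)_t = \frac{\rho_0}{\rho}\left(\frac{\partial p}{\partial x}\right)_t, $$
so that $-\frac{1}{\rho}\frac{\partial p}{\partial x} = -\frac{1}{\rho_0}\left(\frac{\partial p}{\partial a}\right)_t$, which is the authors' equation.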
I'm trying to compute (numerically) the matrices of some simple quantum optical operations, which in principle are unitary. However, in my case they are unitary in an infinite-dimensional space, so I have to truncate them. The result is not necessarily unitary anymore, but if all the entries are correct up to the size that I choose, I'm happy.
So I compute the generator, truncate it to the size of my liking and then I exponentiate it, right? Nope. It doesn't work that way: the entries can be actually very wrong. In some cases they are almost correct, in some other cases they are all messed up.
Example 1: the beam splitter $\exp[i\theta(a^\dagger b + ab^\dagger)]$:
(1) compute $a$ and $a^\dagger$ up to dimension (say) $m$,
(2) multiply them with the Kronecker product,
(3) exponentiate.
result: the entries are almost right, except for the last row and column of both spaces as in this figure (for $m=4$):
The only correct parts are the ones in the white spaces. In this case the solution is to truncate $a$ and $a^\dagger$ to size $m+1$ and then throw away the wrong rows/columns.
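A minimal way to see where the truncation error lives, independent of the exponentiation step (a pure-Python sketch): the truncated ladder operators fail to satisfy $[a, a^\dagger]=1$ precisely in the last diagonal entry, the same region where the exponentiated generator comes out wrong.

```python
import math

# Truncated annihilation operator on an m-dimensional Fock space:
# <n-1| a |n> = sqrt(n), i.e. a[r][c] = sqrt(c) when c == r + 1.
m = 4
a = [[math.sqrt(c) if c == r + 1 else 0.0 for c in range(m)] for r in range(m)]
adag = [[a[c][r] for c in range(m)] for r in range(m)]  # transpose (entries are real)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)] for i in range(m)]

AAd = matmul(a, adag)
AdA = matmul(adag, a)
comm = [[AAd[i][j] - AdA[i][j] for j in range(m)] for i in range(m)]
print([comm[i][i] for i in range(m)])  # approximately [1, 1, 1, -(m-1)] = [1, 1, 1, -3]
```

The commutator equals the identity except for the $(m{-}1, m{-}1)$ entry, which is $1-m$ instead of $1$ — consistent with the observation that padding to size $m+1$ and discarding the last row/column repairs the result.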
Example 2: the single-mode squeezer $\exp[\frac{1}{2}(z^*a^2-z{a^\dagger}^2)]$
This is all a mess: as I increase the size of $a$, the entries of the final result (which are correctly placed in a "checkerboard pattern") seem to converge to their correct values, but in order to have the first (say) 4x4 block somewhat correct I have to truncate $a$ to $m\approx 50$ and then truncate the result of the exponentiation to a 4x4 size.
Am I doing this the wrong way? Eventually I would like to produce the matrices of rather non-linear operations, where the $a$ and $a^\dagger$ operators are raised to large powers, how do I know if I'm doing it right?
UPDATE: In the first case (the beamsplitter) the unitary is in $SU(2)$, which is compact and admits finite-dimensional irreps. So I can exponentiate them individually and from those I can build the truncated unitary 😁
In the second case (the squeezer) the unitary is in $SU(1,1)$ which is non-compact and in fact the Casimir operator has two infinite-dimensional eigenspaces: one corresponding to even and one to odd Fock states. Also for the two-mode squeezer the eigenspaces of the Casimir are infinite-dimensional (although countably infinite). So I can't use the multiplet method in this case. |
Euler's number, $e$ (about 2.718 281 828) is one of the most important numbers in maths -- both pure and applied. (Thinking about my final year university courses, the only one I'm pretty sure had no use for $e$ was History of Maths, and frankly that was an oversight.)
As a budding mathematical ninja, you're doubtless keen to learn how to estimate powers of $e$ for pleasure and profit. Unfortunately, only pleasure is likely to come of it, unless you become a maths tutor.
Now: it's easy enough to estimate $e$ -- it has a nice, memorable decimal expansion, and depending on how roughly you want to play, you can look at it as 3 (-10%), 2.7(+0.7%) or $\frac{30}{11}$ (-0.3%).
However, your estimates for powers of $e$ are likely to be off by more than you're used to in ninja maths -- this is one situation where small errors add up fast. If you're about as good as me, you'll be happy to get things within about 5%.
If you want to know $e^3$, you might pick the first one, and say 'it's 27 less 30% -- take away about 8, so 19. It's actually 20.08 -- not brilliant, but ok for a ballpark figure. It's not really obvious how to do $2.7^3$ -- unless you know that $27^3$ is about 19,700, so $2.7^3$, plus 2.1%, would be 19.7 plus about 0.4, or 20.1. That's bang on.
Alternatively, you can do $\frac{30^3}{11^3}$, which gives $\frac{27,000}{1,331}$. Multiplying top and bottom by 3 gives $\frac{81,000}{3,993}$; that's a shade over $\frac{81}{4}$, or 20.25 -- less about 1% to make up for the original estimate, making 20ish. The only limit is your number handling!
If you're hot on your natural logs, you can reverse-engineer powers of $e$ from there, too -- if you want to know $e^{3.5}$, you can ask '$\ln$ of what is 3.5? Well, that's $0.7 \times 5$, so it must be about $2^5$ or 32. (It's 33.11 -- not bad).
When you're working with negative powers of $e$, the fractional version ($e \approx \frac{30}{11}$) comes into its own - because $e^{-1}$ is just $\frac{11}{30}$, and it's easy to take powers of both of those.
For instance, $e^{-2}$ is just $\frac{11^2}{30^2}$, or $\frac{121}{900}$. That's a bit more than $\frac{40}{300}$, or 0.133. (In fact, it's 0.135). And $e^{-1}$ itself comes up a lot: $\frac{11}{30}$ is 0.367, while $e^{-1}$ is 0.368. I could live with that as an estimate! |
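Those percentage corrections are easy to check by machine — a quick sketch using the three working values from the post:

```python
import math

# Correction (in %) needed to turn each working value into e itself.
for approx in (3.0, 2.7, 30 / 11):
    print(approx, round(100 * (math.e / approx - 1), 2))  # roughly -9%, +0.7%, -0.3%

# Negative powers via e ~ 30/11: e^-2 ~ (11/30)^2 = 121/900, about 0.134 vs the true 0.135.
print((11 / 30) ** 2, math.exp(-2))
```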
For some peace of mind in a project, I am trying to prove two equations are somewhat equivalent. I have these two equations. $$ i_{1} = \frac{-I_M}{2}\frac{2i_2\left(1+e^{\left(\frac{2i_2R_E}{V_T}\right)}\right)+i_e\left(-1+e^{\left(\frac{2i_2R_E}{V_T}\right)}\right)}{2i_2\left(-1+e^{\left(\frac{2i_2R_E}{V_T}\right)}\right)+i_e\left(1+e^{\left(\frac{2i_2R_E}{V_T}\right)}\right)} $$
$$ i_1 = \frac{I_M}{2}\frac{\left(-1\pm e^{\left(\mp2\frac{i_2R_E + V_T\operatorname{Arctanh}\left[\frac{i_2}{i_e}\right]}{V_T}\right)}\right)}{\left(1\pm e^{\left(\mp 2\frac{i_2R_E + V_T\operatorname{Arctanh}\left[\frac{i_2}{i_e}\right]}{V_T}\right)}\right)} $$
I know these equations are equivalent because their Taylor series coefficients are pretty much the same except for one factor of 2 on the denominator. I have probably made a mistake with the formation of one of the two equations above, but I can't find it, and I think if I convert the top one to a form similar to the bottom one I can 'debug' the error. I also know the Arctanh to exponential identity gives a very similar form to the top one, (seen here 5th one down) but I don't know how to convert between the two with the extra variables in there.
I have tried many times but this is a bit beyond me. Does someone want to have a go at converting Eq. 1 to Eq. 2 or vice versa? Even if it's not close, it might give me some insight.
Thanks a lot.
Edit: added missing $V_T$ and minus sign into eq. 2. |
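Not a full derivation, but one identity does most of the bridging between the two forms: $\operatorname{Arctanh} y = \frac{1}{2}\ln\frac{1+y}{1-y}$, so the $V_T\operatorname{Arctanh}(i_2/i_e)$ term in Eq. 2's exponent contributes a pure ratio $\frac{i_e \mp i_2}{i_e \pm i_2}$. A numeric spot-check with made-up values for $i_2$ and $i_e$:

```python
import math

# Check that exp(-2 * artanh(i2/ie)) collapses to (ie - i2) / (ie + i2),
# via artanh(y) = (1/2) * ln((1 + y) / (1 - y)).  Values are illustrative only.
i2, ie = 0.3, 1.7
y = i2 / ie
lhs = math.exp(-2 * math.atanh(y))
rhs = (ie - i2) / (ie + i2)
print(lhs, rhs)  # equal
```

Pulling that factor out of Eq. 2's exponentials and clearing denominators should make it directly comparable with Eq. 1, which may expose where the stray factor of 2 enters.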
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
September 2011, Volume 31, Issue 3
Abstract:
We consider the cubic Szegő equation
$$i\partial_t u = \Pi(|u|^{2}u)$$
Abstract:
Let $f$ be a homeomorphism of the closed annulus $A$ that preserves the orientation, the boundary components and that has a lift $\tilde f$ to the infinite strip $\tilde A$ which is transitive. We show that, if the rotation number of $\tilde f$ restricted to both boundary components of $A$ is strictly positive, then there exists a closed nonempty connected set $\Gamma\subset\tilde A$ such that $\Gamma\subset]-\infty,0]\times[0,1]$, $\Gamma$ is unbounded, the projection of $\Gamma$ to $A$ is dense, $\Gamma-(1,0)\subset\Gamma$ and $\tilde{f}(\Gamma)\subset \Gamma.$ Also, if $p_1$ is the projection on the first coordinate of $\tilde A$, then there exists $d>0$ such that, for any $\tilde z\in\Gamma,$ $$\limsup_{n\to\infty}\frac{p_1(\tilde f^n(\tilde z))-p_1(\tilde z)}{n}<-d.$$
Abstract:
In this paper, we study the escape rate of infinite lattices of weakly coupled maps with uniformly expanding repeller. In particular, it is proved that the escape rate of spatially periodic approximations is extensive and grows linearly with the period size. The proof relies on symbolic dynamics and is based on the control of cumulative effects of perturbations in cylinder sets with distinct spatial periods. A piecewise affine diffusive example is presented that exhibits monotonic decay of the escape rate with coupling intensity.
Abstract:
For finitely generated groups, amenability and Følner properties are equivalent. However, contrary to a widespread idea, Kaimanovich showed that Følner condition does not imply amenability for discrete measured equivalence relations. In this paper, we exhibit two examples of $C^\infty$ foliations of closed manifolds that are Følner and non amenable with respect to a finite transverse invariant measure and a transverse invariant volume, respectively. We also prove the equivalence between the two notions when the foliation is minimal, that is all the leaves are dense, giving a positive answer to a question of Kaimanovich. The equivalence is stated with respect to transverse invariant measures or some tangentially smooth measures. The latter include harmonic measures, and in this case the Følner condition has to be replaced by $\eta$-Følner (where the usual volume is modified by the modular form $\eta$ of the measure).
Abstract:
In the current article we study complex cycles of higher multiplicity in a specific polynomial family of holomorphic foliations in the complex plane. The family in question is a perturbation of an exact polynomial one-form giving rise to a foliation by Riemann surfaces. In this setting, a complex cycle is defined as a nontrivial element of the fundamental group of a leaf from the foliation. In addition to that, we introduce the notion of a multi-fold cycle and show that in our example there exists a limit cycle of any multiplicity. Furthermore, such a cycle gives rise to a one-parameter family of cycles continuously depending on the perturbation parameter. As the parameter decreases in absolute value, the cycles from the continuous family escape from a very large subdomain of the complex plane.
Abstract:
We study focusing discrete nonlinear Schrödinger equations and present a novel variational existence proof for homoclinic standing waves (bright solitons). Our approach relies on the constrained maximization of an energy functional and provides the existence of two one-parameter families of waves with unimodal and even profile function for a wide class of nonlinearities. Finally, we illustrate our results by numerical simulations.
Abstract:
We study a special conjugacy class $\mathcal F$ of continuous piecewise monotone interval maps: with countably many laps, which are locally eventually onto and have common topological entropy $\log9$. We show that $\mathcal F$ contains a piecewise affine map $f_{\lambda}$ with a constant slope $\lambda$ if and only if $\lambda\ge 9$. Our result specifies the known fact that for piecewise affine interval leo maps with countably many pieces of monotonicity and a constant slope $\pm\lambda$, the topological (measure-theoretical) entropy is not determined by $\lambda$. We also consider maps from the class $\mathcal F$ preserving the Lebesgue measure. We show that some of them have a knot point (a point $x$ where Dini's derivatives satisfy $D^{+}f(x)=D^{-}f(x)= \infty$ and $D_{+}f(x)=D_{-}f(x)= -\infty$) in its fixed point $1/2$.
Abstract:
The results in this paper fit into a program to study the existence of periodic orbits, invariant cylinders and tori filled with periodic orbits in perturbed reversible systems. Here we focus on bifurcations of one-parameter families of periodic orbits for reversible vector fields in $\mathbb{R}^4$. The main used tools are normal forms theory, Lyapunov-Schmidt method and averaging theory.
Abstract:
We establish the existence of pullback attractors for the dynamical system associated to a globally modified model of the Navier-Stokes equations containing delay operators with infinite delay in a suitable weighted space. Actually, we are able to prove the existence of attractors in different classes of universes, one is the classical of fixed bounded sets, and the other is given by a tempered condition. Relationship between these two kind of objects is also analyzed.
Abstract:
In this article we apply (recently extended by Kato and Akin) an elegant method of Iwanik (which adopts independence relations of Kuratowski and Mycielski) in the construction of various chaotic sets. We provide ''easy to track'' proofs of some known facts and establish new results as well. The main advantage of the presented approach is that it is easy to verify each step of the proof, when previously it was almost impossible to go into all the details of the construction (usually performed as an inductive procedure). Furthermore, we are able extend known results on chaotic sets in an elegant way. Scrambled, distributionally scrambled and chaotic sets with relation to various notions of mixing are considered.
Abstract:
This paper presents a first rigorous study of the so-called large-scale semigeostrophic equations which were first introduced by R. Salmon in 1985 and later generalized by the first author. We show that these models are Hamiltonian on the group of $H^s$ diffeomorphisms for $s>2$. Notably, in the Hamiltonian setting an apparent topological restriction on the Coriolis parameter disappears. We then derive the corresponding Hamiltonian formulation in Eulerian variables via Poisson reduction and give a simple argument for the existence of $H^s$ solutions locally in time.
Abstract:
We consider the behavior of a modulated wave solution to an $\mathbb{S}^1$-equivariant autonomous system of differential equations under an external forcing of modulated wave type. The modulation frequency of the forcing is assumed to be close to the modulation frequency of the modulated wave solution, while the wave frequency of the forcing is supposed to be far from that of the modulated wave solution. We describe the domain in the three-dimensional control parameter space (of frequencies and amplitude of the forcing) where stable locking of the modulation frequencies of the forcing and the modulated wave solution occurs.
Our system is a simplest case scenario for the behavior of self-pulsating lasers under the influence of external periodically modulated optical signals.
Abstract:
For one-parameter families of piecewise expanding maps of the interval we establish sufficient conditions such that a given point in the interval is typical for the absolutely continuous invariant measure for a full Lebesgue measure set of parameters. In particular, we consider $C^{1,1}(L)$-versions of $\beta$-transformations, piecewise expanding unimodal maps, and Markov structure preserving one-parameter families. For families of piecewise expanding unimodal maps we show that the turning point is almost surely typical whenever the family is transversal.
Abstract:
We study the dynamics of homoclinic classes on three dimensional manifolds under the robust absence of dominated splittings. We prove that, $C^1$-generically, if such a homoclinic class contains a volume-expanding periodic point, then it contains a hyperbolic periodic point whose index (dimension of the unstable manifold) is equal to two.
Abstract:
This paper continues our work on local bifurcations for nonautonomous difference and ordinary differential equations. Here, it is our premise that constant or periodic solutions are replaced by bounded entire solutions as bifurcating objects in order to encounter right-hand sides with an arbitrary time dependence.
We introduce a bifurcation pattern caused by a dominant spectral interval (of the dichotomy spectrum) crossing the stability boundary. As a result, differing from the classical autonomous (or periodic) situation, the change of stability appears in two steps from uniformly asymptotically stable to asymptotically stable and finally to unstable. During the asymptotically stable regime, a whole family of bounded entire solutions occurs (a so-called "shovel"). Our basic tools are exponential trichotomies and a quantitative version of the surjective implicit function theorem yielding the existence of strongly center manifolds.
Abstract:
We establish a Harnack inequality of fractional Laplace equations without imposing sign condition on the coefficient of zero order term via the Moser's iteration and John-Nirenberg inequality.
Abstract:
We consider the dynamics of nondegenerate polynomial skew products on $\mathbb{C}^{2}$. The paper includes investigations of the existence of the Green and fiberwise Green functions of the maps, which induce generalized Green functions that are well-behaved on $\mathbb{C}^{2}$, and examples of the Green functions which are not defined on some curves in $\mathbb{C}^{2}$. Moreover, we consider the dynamics of the extensions of the maps to holomorphic or rational maps on weighted projective spaces.
Abstract:
In this paper, we show that there are many almost periodic solutions corresponding to full dimensional invariant tori for the semilinear quantum harmonic oscillators with Hermite multiplier $${\rm i}{u}_{t}-u_{xx}+x^2u + M_\xi u+\varepsilon |u|^{2m}u=0,\quad u\in C^1(\Bbb R,L^2(\Bbb R)),$$ where $m \geq 1$ is an integer. The proof is based on an abstract infinite dimensional KAM theorem.
Summary: Given an \(n \times m\) grid with numbered cells and forbidden cells, the objective of the Rogo puzzle is to find a loop of fixed length through the grid such that the sum of the numbers in the cells on the loop is maximized.
Rogo is a puzzle game that was developed in 2009 by Nicola Petty and Shane Dye, who are faculty members at the University of Canterbury in Christchurch, New Zealand. The Rogo website contains background information, instructions, and a daily Rogo puzzle. Rogo is also available for the iPhone, iPad, and iPod touch.
Given a rectilinear grid with numbered squares and forbidden squares, the objective of Rogo is to find a loop of fixed length (pre-specified) with the maximum score. The score is calculated by summing the numbers in the squares on the loop. The rules of the game are (1) the loop can start on any square, (2) the loop must end at the starting square, (3) the loop may contain only horizontal and vertical steps (diagonal steps are forbidden), (4) the loop may visit a square at most once, and (5) the loop may not include any forbidden squares. A loop with a length of 20 steps and a score of 44 is shown in the grid below.
An example of an illegal move is shown in the grid below; the move is illegal because the square with the "2" in the second column has already been visited on the loop.
An example of an illegal loop is shown below; the loop does not return to its starting point.
Given an \(n \times m\) grid with numbered cells and forbidden cells, the objective of Rogo is to find a loop of fixed length through the grid such that the sum of the numbers in the cells of the loop is maximized. The key to the formulation is writing constraints to enforce the loop requirements:
the loop must start and end at the same cell, each non-forbidden cell may be visited at most once, the loop must be exactly the number of specified steps.
Rogo is an example of a puzzle based on a well-known and extensively studied operations research problem called the
Traveling Salesman Problem (TSP). Given a list of cities and their pairwise distances, the objective of the TSP is to find the shortest possible tour that visits each city exactly once. Rogo is a special case of the TSP. Because Rogo has a limit on the number of steps and not every cell (city) can be visited, it is a subset-selection TSP. Also, because some of the cells in Rogo have a reward value, it is similar to the prize-collecting TSP.
The general formulation of the asymmetric TSP is provided below, where \(V\) is the set of cities. The binary variable \(x_{ij}\) takes the value of 1 if the salesman travels from city \(i\) to city \(j\), and \(d_{ij}\) is the distance from \(i\) to \(j\). The first constraint set specifies that exactly one edge leaves each city \(i\). The second constraint set specifies that exactly one edge enters each city \(j\). These two sets of constraints are not sufficient to enforce the loop conditions, however, since they allow for "subtours", which are disjoint loops. The third constraint set is the subtour elimination constraint set, which specifies that every proper subset \(S\) of cities may contain at most \(|S|-1\) edges.
TSP Formulation
Minimize \(\sum_{i \in V}\sum_{j \in V} d_{ij}*x_{ij}\)
subject to
\(\sum_{j \in V} x_{ij} = 1, \forall i \in V\)
\(\sum_{i \in V} x_{ij} = 1, \forall j \in V\)
\(\sum_{i \in S}\sum_{j \in S} x_{ij} \le |S| - 1, \forall S \subseteq V, 2 \le |S| \le |V| - 1\)
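Before turning to subtour elimination, the objective itself can be made concrete: on a tiny instance the optimal tour is found by simply enumerating all orderings. The sketch below (plain Python, on a hypothetical 4-city instance I made up) fixes city 0 as the start, which removes the rotational symmetry of a tour:

```python
from itertools import permutations
from math import dist

# Hypothetical instance: four cities at the corners of the unit square.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]

def tour_length(order):
    """Length of the closed tour 0 -> order[0] -> ... -> order[-1] -> 0."""
    path = [0, *order, 0]
    return sum(dist(cities[a], cities[b]) for a, b in zip(path, path[1:]))

# Fix city 0 as the start and enumerate the (n-1)! remaining orderings.
best = min(permutations(range(1, len(cities))), key=tour_length)
print(best, tour_length(best))  # the perimeter tour, length 4.0
```

Brute force is exact but only viable for a handful of cities, which is exactly why the integer-programming formulation above matters for anything larger.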
Note that since the formulation requires a constraint for each subset \(S\), there can be a very large (exponential) number of subtour elimination constraints. An alternate formulation includes the Miller-Tucker-Zemlin (MTZ) constraints. The formulation with the MTZ constraints is a weaker formulation but it includes a smaller (polynomial) number of constraints. The MTZ formulation is often sufficient for small instances. Let \(u_i\) be the position of city \(i\) in the tour. Then, the MTZ constraints are:
\(u_1 =1\)
\(2 \leq u_i \leq |V|, \forall i \neq 1\)
\(u_i - u_j + 1 \leq (|V|-1)(1 - x_{ij}), \forall 2 \leq i \neq j \leq |V|\)
Rogo Formulation
The TSP formulation with the MTZ constraints is a starting point for the formulation of Rogo. The following modifications are required:
There are forbidden cells in the grid, so not all arcs are allowed. A set of feasible arcs is defined. The loop does not need to visit every cell, so knowing the starting point is required. A binary variable, \(\delta_i\), is defined to indicate whether or not cell \(i\) is the starting point of the loop.
Sets
\(V\) = set of cells
\(A\) = set of feasible arcs
Parameters
\(n\) = number of steps
\(p_i\) = value of cell \(i\), \(\forall i \in V\)
Variables
\(x_{ij} = \left\{ \begin{array}{ll} 1 & \mbox{if arc \((i,j)\) is in the loop, \(\forall (i,j) \in A\)} \\ 0 & \mbox{otherwise} \end{array} \right. \)
\(y_i = \left\{ \begin{array}{ll} 1 & \mbox{if cell \(i\) is in the loop, \(\forall i \in V\)} \\ 0 & \mbox{otherwise} \end{array} \right. \)
\(\delta_i = \left\{ \begin{array}{ll} 1 & \mbox{if cell \(i\) is the starting point, \(\forall i \in V\)} \\ 0 & \mbox{otherwise} \end{array} \right. \)
\(u_i\) = position of cell \(i\) in the loop, \(\forall i \in V\)
Maximize \(\sum_{i \in V} p_i y_i\)
Subject to:
1. The loop must have \(n\) steps. \(\sum_{i \in V} y_i = n\)
2. If a cell is entered, it must be exited.
\(\sum_{j:(i,j) \in A} x_{ij} = y_i, \forall i \in V\)
\(\sum_{i:(i,j) \in A} x_{ij} = y_j, \forall j \in V\)
3. If cell \(i\) is the starting point, its relative position in the loop should be exactly one.
\(u_i - \delta_i \geq 0, \forall i \in V\)
\(u_i + (n - 1)*\delta_i \leq n, \forall i \in V\)
4. The loop can have only one starting point.
\(\sum_{i \in V} \delta_i = 1\)
5. The MTZ constraints are modified to accommodate that not every cell is visited.
\(u_i - u_j + n*(x_{ij} - \delta_j) \leq n-1, \forall i, j \in V, i \neq j\)
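As with the TSP, a tiny Rogo instance can be solved by exhaustive search instead of a MILP, which is a useful way to sanity-check the formulation. The sketch below (plain Python, on a hypothetical 3×3 instance I made up, with the centre cell forbidden) enumerates simple loops of exactly \(n\) cells by depth-first search:

```python
# Hypothetical 3x3 Rogo instance: None marks a forbidden cell.
grid = [[1, 2, 3],
        [8, None, 4],
        [7, 6, 5]]
steps = 8  # required loop length (number of cells on the loop)

rows, cols = len(grid), len(grid[0])

def neighbors(r, c):
    """Horizontally/vertically adjacent, non-forbidden cells."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
            yield nr, nc

def best_loop_score(start, cell, visited, score):
    """Best score over simple paths from `cell` that close back at `start`
    after exactly `steps` cells have been visited (None if impossible)."""
    if len(visited) == steps:
        return score if start in neighbors(*cell) else None
    best = None
    for nxt in neighbors(*cell):
        if nxt not in visited:
            sub = best_loop_score(start, nxt, visited | {nxt},
                                  score + grid[nxt[0]][nxt[1]])
            if sub is not None and (best is None or sub > best):
                best = sub
    return best

candidates = [best_loop_score(s, s, {s}, grid[s[0]][s[1]])
              for s in ((r, c) for r in range(rows) for c in range(cols))
              if grid[s[0]][s[1]] is not None]
print(max(v for v in candidates if v is not None))  # 36: the outer ring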
To solve this mixed integer linear programming problem, we can use one of the NEOS Server solvers in the Mixed Integer Linear Programming (MILP) category. Each MILP solver has one or more input formats that it accepts.
As an example, we provide a GAMS model for "Introductory Rogo #3" puzzle from the Rogo website.
set V /1*40/;
scalars nrow /5/, ncol /8/; scalar steps /12/;
alias(V,i,j);
set arc(i,j);
parameter p(V) /1 2, 2 0, 3 3, 4 0, 5 1, 6 0, 7 0, 8 2,
9 0, 10 3, 11 0, 12 0, 13 0, 14 0, 15 4, 16 0, 17 4, 18 0, 19 1, 20 0, 21 0, 22 4, 23 0, 24 1, 25 0, 26 0, 27 -100, 28 2, 29 2, 30 -100, 31 0, 32 0, 33 0, 34 0, 35 2, 36 0, 37 0, 38 2, 39 0, 40 0/;
arc(i,j) = yes$(abs(ord(i)-ord(j))=ncol or (ord(j)-ord(i))$(mod(ord(i),ncol) ne 0 and
mod(ord(j),ncol) ne 1)=1 or (ord(i)-ord(j))$(mod(ord(j),ncol) ne 0 and mod(ord(i),ncol) ne 1)=1);
binary variable
y(V) x(i,j) delta(v);
positive variable
u(V);
variable
obj;
equations
connections(i) connections2(j) loop_size mtz(i,j) assign1(V) assign2(V) objective assign3;
objective.. sum(v,p(v)*y(v)) =e= obj;
loop_size.. sum(V,y(V)) =e= steps;
connections(i).. sum(j$arc(i,j),x(i,j)) =e= y(i);
connections2(j).. sum(i$arc(i,j),x(i,j)) =e= y(j);
assign1(V).. u(V)-delta(V) =g= 0;
assign2(V).. u(V) +(steps-1)*delta(V) =l= steps;
assign3.. sum(V,delta(V)) =e= 1;
mtz(i,j)$(ord(i) ne ord(j)).. u(i)-u(j) + steps*(x(i,j)-delta(j)) =l= steps-1;
model project /all/;
solve project using mip max obj;
References
Balas, E. The prize collecting traveling salesman problem. Carnegie Mellon University, Pittsburgh, Pennsylvania. http://onlinelibrary.wiley.com/doi/10.1002/net.3230190602/pdf
Linderoth, J. Tools and Environments in Optimization (ISyE/CS 635). University of Wisconsin - Madison, May 2011.
Operations Research, Sudoku, Rogo, and Puzzles. Retrieved from Michael Trick's Operations Research Blog, http://mat.tepper.cmu.edu/blog/?p=1302
Petty, N. & Dye, S. (2010). Determining degree of difficulty in Rogo, a TSP-based paper puzzle. Proceedings of the 45th Annual Conference of the ORSNZ, November 2010.
Rogo - the New Puzzle Game (2011), by Creative Heuristics Ltd. Retrieved from http://www.rogopuzzle.co.nz/
Original contribution by Jackie Lamb and Nanjing Jian, May 2011. |
Let $a,b\in\mathbb{R}$ such that $a<b$ and $f\colon [a,b]\to \mathbb{R}$ a non-negative function. Is then $$\|f\|_1=\sup \{\int_{[a,b]}\tau(x)dx \mid \tau \text{ step function and } \tau\le f\} ?$$ I have seen this to be true under the additional assumption that $f$ is Riemann-integrable. Therefore, I initially tried to find a counterexample (with a non-Riemann-integrable function). But then I came across Lebesgue integrable implies Riemann integrable?, so the equality probably is true (note that simple functions are more general than step functions!). And now I don't know whether it's true or not (I tend to vote for false, but I still don't have a counterexample). I appreciate any help.
I'm afraid this is not true for all (Lebesgue) integrable functions.
Consider the indicator function on $[a,b]\setminus \mathbb{Q}$, namely $\mathbb{I}_{[a,b]\setminus \mathbb{Q}} : [a,b]\rightarrow \{0,1\}$ where $\mathbb{I}_{[a,b]\setminus \mathbb{Q}}(x)=1$ for $x\in [a,b]\setminus \mathbb{Q}$ and $0$ otherwise.
This function only differs from the indicator function on $[a,b]$ on countably many points, so $$ \int \mathbb{I}_{[a,b]\setminus \mathbb{Q}} = \int \mathbb{I}_{[a,b]}=b-a. $$ However clearly any step function $s:[a,b]\rightarrow \mathbb{R}$ with $s\leq \mathbb{I}_{[a,b]\setminus \mathbb{Q}}$ maps into $(-\infty, 0]$.
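The key fact in this argument — every interval of positive length contains a rational — can even be made constructive. The sketch below (plain Python with `fractions`; an illustration of the argument, not a proof) produces an explicit rational in any interval $(a,b)$, which is exactly where a step function lying below $\mathbb{I}_{[a,b]\setminus \mathbb{Q}}$ is forced to be $\leq 0$:

```python
from fractions import Fraction
from math import ceil, floor

def rational_in(a, b):
    """Return an explicit rational in the open interval (a, b), with b > a."""
    # Choose a denominator n so fine that consecutive multiples of 1/n
    # cannot skip over (a, b), then take the first multiple above a.
    n = ceil(2 / (b - a))
    m = floor(a * n) + 1
    return Fraction(m, n)

q = rational_in(0.1, 0.1000001)
print(q, 0.1 < q < 0.1000001)  # a rational strictly inside the interval
```

Since every piece of a step function contains such a rational, any step function below the indicator is $\leq 0$ on each piece, so its integral — and hence the supremum in the question — is $\leq 0$, far below $b-a$.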
It is true however for continuous functions on $[a,b]$, since these are Riemann-integrable. |
One of the best ways to shorten a proof in statistics or probability is to use conditioning arguments. I myself have used the Law of Total Probability extensively in my work, as well as other conditioning arguments in my PhD dissertation. Like many things in mathematics, there are subtleties that, if ignored, can cause quite a bit of trouble. It's a theme on which I almost feel like I sound preachy, because subtlety, nuance, and deliberation followed by cautious proceeding is about as old-fashioned as my MS Statistics 1.
One particularly good paper that discusses this was written by Michael Proschan and Brett Presnell in
The American Statistician in August 1998 titled “Expect the Unexpected from Conditional Expectation”. In it, the authors noted the following seemingly innocuous question posed on a statistics exam
If X and Y are independent standard normal random variables, what is the conditional distribution of Y given Y=X?
There are three approaches to this problem.
(1) Interpret the statement that Y=X by declaring a new random variable Z_{1} = Y-X where Z_{1}=0.
Here, the argument proceeds as follows: Y and Z_{1} have a bivariate normal distribution with \mu = (0,0), \sigma_{Y}^{2}=1, \sigma_{Z_{1}}^{2}=2, and correlation \rho = \tfrac{1}{\sqrt{2}}. Thus, we know that the conditional distribution of Y given Z_{1}=0 is itself normal with mean \mu_{Y}+\rho\frac{\sigma_{Y}}{\sigma_{Z_{1}}}(0-\mu_{Z_{1}})=0 and variance \sigma_{Y}^{2}(1-\rho^{2}) = \tfrac{1}{2}. Thus, the conditional density is f(y|Z_{1}=0) = \frac{1}{\sqrt{\pi}}e^{-y^{2}}
This was the expected answer. However, one creative student did a different argument:
(2) Interpret the statement that Y=X by declaring a new random variable Z_{2} = \tfrac{Y}{X} where Z_{2}=1.
This student used the Jacobian method
2 and transformed the variables via declaring s=y, z_{2}=\tfrac{y}{x} and finding the joint density of S and Z_{2}. The conditional density for S was then found by dividing the joint density by the marginal density of Z_{2} evaluated at z_{2}=1. The reason for the ratio is that the marginal density of Z_{2}, being the ratio of independent standard normal random variables, is a Cauchy distribution 3. Carrying the Jacobian computation through, this student's answer was f(y|Z_{2}=1) = |y|e^{-y^{2}}
Not the expected answer. This is a correct interpretation of the condition Y=X, so the calculations are correct. There was a third different answer, from a student who had taken a more advanced probability course.
(3) Interpret the statement Y=X as Z_{3} = 1 where Z_{3} = \mathbb{I}[Y=X]
Here \mathbb{I}(\cdot) is the indicator function, where the variable is 1 if the condition is met, and 0 otherwise. The argument here is that Y and Z_{3} are independent. Why? Z_{3} is a constant zero with probability 1, since \mathbb{P}(Y=X)=0 for independent continuous random variables. A constant is independent of any random variable. Thus the conditional distribution of Y given Z_{3}=1 must be the same as the unconditional distribution of Y, which is standard normal.
This is also a correct interpretation of the condition.
From the paper, “At this point the professor decided to drop the question from the exam and seek therapy.”
What happened?
At this point, both the paper did and we shall revisit exactly what conditional probability means. Suppose we have continuous random variables X and Y. We’ll usually write expressions like \mathbb{P}(Y\leq y|X=x) or \mathbb{E}(Y|X=x). However, an acute reader might already ask the question about conditioning on sets of probability 0. For a continuous random variable, the probability that we land on any specific real value x is indeed 0, which hearkens back to the measure-theoretic basis of probability. As it turns out, this little subtlety is indeed the culprit.
Formal definition of conditional distribution and conditional expectation
First, we take a look at the formal definition of conditional expectation.
Definition. A conditional expected value \mathbb{E}(Y|X) of Y given X is any Borel function g(X) = Z that satisfies
\mathbb{E}(Z\mathbb{I}(X \in B)) = \mathbb{E}(Y\mathbb{I}(X \in B))
for every Borel set B. Then g(x)
4 is the conditional expected value of Y given X=x and we can write \mathbb{E}(Y|X=x)
What this means is that the conditional expectation is actually defined as a random variable whose integral over any Borel set agrees with that of Y
5. Of import here is the fact that the conditional expectation is defined only in terms of an integral. From Billingsley, 1979, there always exists at least one such Borel function g, but the problem here is that there may be infinitely many. Each Borel function g that satisfies the above definition is called a version of \mathbb{E}(Y|X).
So let’s say we have two versions of \mathbb{E}(Y|X), called Z_{1} = g_{1}(X) and Z_{2} = g_{2}(X). Then we have that \mathbb{P}(Z_{1}=Z_{2})=1. Still seems pedantic, but if we look at this from a measure-theoretic perspective, this means that two versions are equal
except on a set N of x such that \mathbb{P}(X \in N)=0.
What does this have to do with conditional distributions?
For each fixed y we can find some function
6 G_{y}(x) = G(y|x) such that
\mathbb{E}(G(y|X)\mathbb{I}(X \in B)) = \mathbb{P}(Y \leq y, X \in B) for every Borel set B.
In other words
7, G(y|X) is a version of \mathbb{E}(\mathbb{I}(Y\leq y)|X). This last expression here is a conditional distribution of Y given X=x.
Notice I said “a” conditional distribution, and not “the” conditional distribution. The words were chosen carefully. This leads us into the Borel Paradox, and the answer to why all three answers of that exam question are technically correct.
The Borel Paradox
Also known as the Equivalent Event Fallacy, this paradox was noted by Rao (1988) and DeGroot (1986). If we attempt to condition on a set of probability (or measure) 0, then the conditioning may not be well-defined, which can lead to different conditional expectations and thus different conditional distributions.
In the exam question, Z_{1}, Z_{2} and Z_{3} are all equivalent events. The fallacy lies in assuming that this equivalence would mean that conditioning on, say, \{Y-X=0\} is the same as conditioning on the event \{Y/X=1\}. In almost all
8 cases, this is true. If the events in question have nonzero probability, then the professor’s assumption was true. However, when we’re conditioning on events that have zero probability, the classic Bayes’s formula interpretation doesn’t hold anymore, because the denominator is now 0. 9
If we have an event like \{Y=X\}, then from the previous section we know that there are infinitely many
versions of the associated conditional expectation. We have to think of conditioning on a random variable, not just on a value. There were three versions given above: Z_{1} = Y-X, which has value z_{1}=0; Z_{2} = Y/X, with value z_{2}=1; Z_{3} = \mathbb{I}(Y=X), with value z_{3} =1
Proschan and Presnell dig a bit deeper and discuss exactly where these equivalent interpretations diverge
10 and yield different conditional distributions. They also discuss the substitution fallacy, which again notes the consequences of having infinitely many versions of \mathbb{E}(Y|X), and why the convolution argument typically given to derive the distribution of the sum of independent random variables is nowhere near as air-tight as it appears. 11
What's the solution?
The Borel Paradox reared its ugly head because different yet correct interpretations of the conditioning events/sets of probability 0 yielded different conditional expectations and different conditional distributions. The core of the problem was those sets having probability 0. How do we avoid this? We actually would like to calculate \mathbb{E}(Y|X=x) (and conditional distributions that result from it), so how do we get around this paradox of infinite versions?
We take the analysts' favorite trick: sequences and limits. Take a sequence of sets (B_{n}) with \mathbb{P}(X \in B_{n})>0 that shrink down to \{x\}, i.e. B_{n} \downarrow \{x\}. Now we are only ever computing \mathbb{E}(Y|X \in B_{n}) on conditioning events of positive probability, and we define \mathbb{E}(Y|X=x) = \lim_{n\to\infty}\mathbb{E}(Y|X \in B_{n})
(I admit I’m seriously sidestepping some careful considerations of convergence and choices of sequence here. There is more subtlety and consideration in the analytic study of sequences of sets, but I don’t want to push too far down the rabbit hole here. The point is that we can avoid the paradox with care.)
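This shrinking-set recipe also shows concretely why answers (1) and (2) disagree: shrinking \{|Y-X|<\varepsilon\} and shrinking \{|Y/X - 1|<\varepsilon\} are different limits. A quick Monte Carlo sketch (plain Python; the target 0.5 is the conditional variance from answer (1), while the ratio-conditioned second moment coming out near 1.0 is my own calculation for the Jacobian interpretation, so treat it as an assumption to be checked):

```python
import random

random.seed(1)
eps, n = 0.05, 1_000_000
ys_diff, ys_ratio = [], []
for _ in range(n):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    if abs(y - x) < eps:                  # shrink the event {Y - X = 0}
        ys_diff.append(y * y)
    if x != 0 and abs(y / x - 1) < eps:   # shrink the event {Y / X = 1}
        ys_ratio.append(y * y)

m_diff = sum(ys_diff) / len(ys_diff)      # E[Y^2 | Y-X ~ 0], near 0.5
m_ratio = sum(ys_ratio) / len(ys_ratio)   # E[Y^2 | Y/X ~ 1], near 1.0
print(round(m_diff, 2), round(m_ratio, 2))
```

Two perfectly reasonable sequences of positive-probability events, shrinking to the "same" null event, give different conditional second moments — the Borel Paradox in simulation form.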
Conclusion
Subtleties and paradoxes occur all over mathematics. This doesn’t mean mathematics is broken, that all statistics are lies, or any other variation of frustrated exasperation I’ll hear when discussing these. What these subtleties, fallacies, and paradoxes do show is that careful consideration is paramount to the study, practice, and application of mathematics.
References
Billingsley, P. (1979), Probability and Measure (1st ed.), New York: Wiley
Casella, G. and Berger, R. (2002), Statistical Inference (2nd ed.), Wadsworth
DeGroot, M.H. (1986), Probability and Statistics (2nd ed.), Reading, MA: Addison-Wesley
Proschan, M.A. and Presnell, B. (1998), "Expect the Unexpected from Conditional Expectation," The American Statistician, 52, 248-252
Rao, M.M. (1988), "Paradoxes in Conditional Probability," Journal of Multivariate Analysis, 27, 434-446
Footnotes
1. Yep, I've had a job interview in the last year where the interviewer actually told me that.
2. As a brief aside, this method isn't just for probability. This shows up in multivariate calculus as well when we wish to change coordinates.
3. This distribution is particularly interesting, as another side note. It has no mean, nor variance. Both are undefined, but it does have a median and mode.
4. Note the small x here.
5. From Casella and Berger, 2002.
6. Again please note the small letters.
7. Now note the capital letter X, signifying a random variable.
8. See what I did there?
9. Let that be a lesson too. Bayesian everything isn't a silver bullet.
10. Pardon the language here, used colloquially.
11. This also has consequences in the study of random processes. Perhaps I'll write something on that specific error.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is?
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. The point $z = 0$ is: a removable singularity / a pole / an essential singularity / a non-isolated singularity. Since $\cos(\frac{1}{z})$ = $1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots$ $$ = (1-y), where\ \ y=\frac{1}{2z^2}+\frac{1}{4!...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
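For what it's worth, a quick numeric check of the polar data of $x+i$ (the value $x = 3$ is an arbitrary test point) also shows that $x^2+1$ is the squared modulus $|x+i|^2$, which may be where the book's unexpected answer comes from:

```python
import cmath
import math

x = 3.0                      # arbitrary test value
z = complex(x, 1.0)          # the number x + i

print(abs(z))                # sqrt(x^2 + 1) = sqrt(10), about 3.16228
print(abs(z) ** 2)           # x^2 + 1 = 10.0 -- the *squared* modulus

# The argument needed for log(x + i): arg(x + i) = pi/2 - arctan(x)
print(cmath.phase(z))                  # about 0.32175
print(math.pi / 2 - math.atan(x))      # same value
```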
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
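A minimal formal version of (1) can be sketched in Lean 4 (using only the core `Equivalence` structure): the equality lets us rewrite the goal, after which reflexivity closes it, so nothing beyond substitution and reflexivity is used.

```lean
-- If x = y, then x ∼ y: rewrite with the equality, then apply reflexivity.
example {X : Type} (r : X → X → Prop) (h : Equivalence r)
    {x y : X} (e : x = y) : r x y := by
  rw [e]
  exact h.refl y
```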
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
> C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
> C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
> C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
> C -2.0858929200000000 0.3286240400000000 0.3378783500000000
> C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
> C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
> C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, a use of the inference rule MP (modus ponens) seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
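A numeric illustration of that point, with hypothetical choices $f(x)=3x^2$ and $g(x)=x^2-x^3$ (both $\mathscr O(x^2)$ as $x\to 0$): the difference is again bounded by a constant times $x^2$, via the triangle inequality $|f-g| \le |f| + |g| \le (K_1+K_2)x^2$.

```python
# Hypothetical examples: f(x) = 3x^2 and g(x) = x^2 - x^3 are both O(x^2)
# as x -> 0, with |f(x)| <= 3x^2 and |g(x)| <= 2x^2 for |x| <= 1.
f = lambda x: 3 * x**2
g = lambda x: x**2 - x**3
K1, K2 = 3.0, 2.0

for x in [0.5, 0.1, 0.01, 0.001]:
    diff = abs(f(x) - g(x))
    # |f - g| <= |f| + |g| <= (K1 + K2) * x^2, so f - g is O(x^2) too
    assert diff <= (K1 + K2) * x**2
    print(x, diff / x**2)  # the ratio stays bounded (here it tends to 2)
```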
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: since $|f(z)|\leq 1$ for $|z|\leq 1$, we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ be zero, because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Let $\{e_n\}$ be an orthonormal basis in a Hilbert space $H$ and let $\{\lambda_n\}$ be a sequence of numbers.
Define the operator $$T:H \to H$$ by $$Tu=\sum^\infty_{n=1} \lambda_n \langle u,e_n \rangle e_n$$ where $u \in H$
I am trying to show that if $\lim_{n \to \infty} \lambda_n=0$ then $T$ is a compact operator.
Why does showing $$\|Tu-T_k u\|^2 \leq \sup_{n >k} \{|\lambda_n|^2\} \, \|u\|^2$$ tell us that $\|T-T_k\| \to 0$?
Where $T_k$ is the $k$-th partial sum in the definition of $T$.
I have the full proof for this but I am wondering why does showing $\lim_{n \to \infty} \lambda_n=0$ mean that $T$ is a compact operator?
How do you show that an operator is compact?
I have come across that statement that a compact operator is the limit of finite dimensional operators but I do not understand this statement.
What does this statement mean? |
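One way to see the statement concretely: in the basis $\{e_n\}$, $T$ acts diagonally with entries $\lambda_n$, and each $T_k$ is finite-rank (its range is spanned by $e_1,\dots,e_k$). The operator norm of $T - T_k$ is exactly $\sup_{n>k}|\lambda_n|$, which tends to 0 when $\lambda_n \to 0$, so $T$ is a norm limit of finite-rank operators. A NumPy sketch with the hypothetical choice $\lambda_n = 1/n$ and a truncated basis:

```python
import numpy as np

# Finite-dimensional model: T u = sum_n lambda_n <u, e_n> e_n is diagonal
# in the basis {e_n}. T_k keeps only the first k terms, so T - T_k is
# diagonal with entries (0, ..., 0, lambda_{k+1}, ...), and its operator
# norm is sup_{n>k} |lambda_n|, which -> 0 whenever lambda_n -> 0.
N = 1000                              # truncate the basis for the demo
lam = 1.0 / np.arange(1, N + 1)       # hypothetical lambda_n = 1/n -> 0
T = np.diag(lam)

for k in [10, 100, 500]:
    lam_k = np.where(np.arange(1, N + 1) <= k, lam, 0.0)
    Tk = np.diag(lam_k)                          # finite-rank: rank <= k
    tail_norm = np.linalg.norm(T - Tk, ord=2)    # spectral (operator) norm
    print(k, tail_norm)                          # equals lambda_{k+1} = 1/(k+1)
```

Since each $T_k$ has finite-dimensional range and $\|T - T_k\| \to 0$, $T$ inherits compactness as a norm limit of finite-rank operators, which is exactly the statement being asked about.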