One-Class Support Vector Machines for Anomaly Detection

In my research area (error detection in software systems), we typically want to perform two-class classification. For example, we want to determine whether the condition of a system is normal (class 1) or abnormal (class 2) based on some metrics. However, sometimes we only have training data of one class, typically of normal behavior. In those cases we cannot use traditional Support Vector Machines (SVMs), because they are designed for two-class classification problems. Support Vector Domain Description (SVDD) [1] is a technique that I have found useful for cases when we only have data of one class. It is essentially a modification of SVM to work in one-class scenarios. Here's a brief description of the technique, summarized from [1].

Support Vector Domain Description

SVDD was proposed in [1] for outlier or novelty detection. The basic idea is to find a hyper-sphere in the feature space that encloses all (or almost all) the training data with minimum volume. Mathematically, the aim is to minimize the following objective $ F(R,a,\xi_i) = R^2 + C \sum_{i=1}^N \xi_i $ where $ a $ is the center of the sphere, $ R $ is the radius, $ C $ controls the trade-off between simplicity (the volume of the sphere) and the number of errors (the number of target objects rejected), and the $ \xi_i $ are slack variables with the same role as in traditional SVM classification. This has to be minimized under the constraints: $ (x_i - a)^T (x_i - a) \leq R^2 + \xi_i\text{,} \qquad \xi_i \geq 0 $. By incorporating those constraints into the objective function, we get the Lagrangian [1]: $ L(R,a,\alpha_i,\xi_i) = R^2 + C \sum_{i=1}^N \xi_i - \sum_{i=1}^N \gamma_i \xi_i - \sum_{i=1}^N \alpha_i \left( R^2 + \xi_i - (x_i - a)^T (x_i - a) \right) $ with Lagrange multipliers $ \alpha_i, \gamma_i \geq 0 $.
By setting the derivatives with respect to the primal variables $ a $, $ \xi_i $ and $ R $ to zero, we get $ a = \sum_{i=1}^N \alpha_i x_i $, $ 0\leq \alpha_i \leq C $, and $ \sum_{i=1}^N \alpha_i = 1 $. Substituting these back into the Lagrangian, we obtain the following dual problem, in which we maximize with respect to the $ \alpha_i $: $ L = \sum_{i=1}^N \alpha_i (x_i^T \cdot x_i) - \sum_{i,j=1}^N \alpha_i \alpha_j (x_i^T \cdot x_j) $ with constraints $ \sum_{i=1}^N \alpha_i = 1 $ and $ 0 \leq \alpha_i \leq C $. Like in SVM, we can replace the inner products by any positive definite kernel $ K(x,x') $. The resulting hyper-sphere then encloses the data examples mapped into a high-dimensional feature space, and we maximize with respect to the $ \alpha_i $: $ L = \sum_{i=1}^N \alpha_i K(x_i,x_i) - \sum_{i,j=1}^N \alpha_i \alpha_j K(x_i,x_j). $ To test whether a new data point $ x $ is within the spherical region (i.e., whether it represents normal behavior), we evaluate the sign of $ y(x) = \sum_{i=1}^N \alpha_i K(x,x_i) + b $, where the threshold parameter $ b $ is determined as in traditional SVM: for each support vector $ x' $ we sum $ \alpha_i K(x',x_i) $, and then average the resulting values of $ b $. A new point $ x $ is said to be inside the sphere if $ y(x) \geq 0 $, and outside the sphere if $ y(x) < 0 $. In terms of my research area, if a new point is inside the sphere we say that the system is running normally; otherwise it is running abnormally.

Some Experiments

I have used the Gaussian kernel function $ K(x,x') = \text{exp}(-{\parallel x - x' \parallel}^2/{2 s^2}) $ to build the hyper-sphere using normal-behavior data from a real system. When training SVDD we basically have to find the best values for the parameters $ s $ and $ C $. Depending on these values, the shape of the hyper-sphere and the number of points allowed outside it will vary.
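The dual above is a small quadratic program, so it can be solved directly with an off-the-shelf optimizer. Below is a minimal Python sketch (my own, since the post's code is in Matlab) that maximizes the dual for the Gaussian kernel and tests a new point against the sphere radius; the data and the values of $s$ and $C$ are placeholders for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))      # stand-in for "normal behavior" training data
s, C = 1.0, 0.1                   # kernel width and trade-off (illustrative values)

def K(A, B):
    """Gaussian kernel matrix K(x, x') = exp(-||x - x'||^2 / (2 s^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

Kxx = K(X, X)
n = len(X)

# Maximize L = sum_i a_i K(x_i,x_i) - sum_ij a_i a_j K(x_i,x_j)
# subject to sum_i a_i = 1 and 0 <= a_i <= C (we minimize -L).
res = minimize(
    lambda a: -(a @ np.diag(Kxx) - a @ Kxx @ a),
    x0=np.full(n, 1.0 / n),
    jac=lambda a: -(np.diag(Kxx) - 2 * Kxx @ a),
    bounds=[(0.0, C)] * n,
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    method="SLSQP",
)
alpha = res.x

def dist2(x):
    """Squared feature-space distance from phi(x) to the sphere centre."""
    kx = K(x[None, :], X)[0]
    return 1.0 - 2.0 * alpha @ kx + alpha @ Kxx @ alpha   # K(x,x) = 1 here

# Unbounded support vectors (0 < alpha_i < C) lie exactly on the sphere,
# so R^2 is their (averaged) distance to the centre.
sv = (alpha > 1e-6) & (alpha < C - 1e-6)
if not sv.any():
    sv = alpha > 1e-6
R2 = np.mean([dist2(xi) for xi in X[sv]])

print("origin inside sphere:", dist2(np.zeros(2)) <= R2)
```

With the Gaussian kernel this formulation is equivalent to the usual one-class SVM, so in practice one would typically reach for an existing implementation such as scikit-learn's `OneClassSVM` rather than a hand-rolled solver.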
Below are some plots for different values of these parameters (the code is all implemented in Matlab). References [1] D. M. J. Tax and R. P. W. Duin, "Support vector domain description," Pattern Recogn. Lett., 20(11-13):1191–1199, 1999. --ilaguna 21:06, 20 April 2010 (EDT)
Mertens' third theorem is just the exponentiated version of the second theorem (without the bounds that Mertens proved for his second theorem): \begin{align}-\ln\Biggl(\ln n\prod_{p\leqslant n}\biggl(1 - \frac{1}{p}\biggr)\Biggr)&= -\ln \ln n - \sum_{p\leqslant n} \ln \biggl(1 - \frac{1}{p}\biggr)\\&= \Biggl(\sum_{p\leqslant n}\frac{1}{p} - \ln \ln n - M\Biggr) + \Biggl(M - \sum_{p\leqslant n} \biggl(\ln\biggl(1-\frac{1}{p}\biggr) + \frac{1}{p}\biggr)\Biggr),\end{align} where the first term converges to $0$ by Mertens' second theorem, and the second term converges to $\gamma$ by definition of $M$. Mertens' bounds in the second theorem and estimates for $$\sum_{p > n}\biggl(\ln\biggl(1-\frac{1}{p}\biggr)+\frac{1}{p}\biggr)$$ give you bounds for $$e^\gamma\ln n\prod_{p\leqslant n}\biggl(1-\frac{1}{p}\biggr),\tag{$\ast$}$$ and conversely bounds for that give you bounds for $$\left\lvert\sum_{p\leqslant n}\frac{1}{p} - \ln \ln n - M\right\rvert,\tag{$\ast\!\ast$}$$ but it is doubtful whether one can directly prove bounds for $(\ast)$ that give you back Mertens' bounds for $(\ast\ast)$. One can use Mertens' first theorem to derive the second via an integration by parts, Hardy and Wright for example do that, but don't give explicit bounds on $(\ast\ast)$. For $x > 0$ we define $$S(x) := \sum_{p\leqslant x} \frac{\ln p}{p}.$$ Mertens' first theorem tells us $$\lvert S(x) - \ln x\rvert \leqslant 2 + O(x^{-1}),$$ and we can write $$T(x) := \sum_{p\leqslant x} \frac{1}{p} = \int_{3/2}^x \frac{1}{\ln t}\,dS(t)$$ with a (Riemann/Lebesgue-) Stieltjes integral. 
Integration by parts yields \begin{align}T(x) &= \int_{3/2}^x \frac{1}{\ln t}\,dS(t)\\&= \frac{S(x)}{\ln x} - \frac{S(3/2)}{\ln \frac{3}{2}} - \int_{3/2}^x S(t)\,d\biggl(\frac{1}{\ln t}\biggr)\\&= \frac{S(x)}{\ln x} + \int_{3/2}^x \frac{S(t)}{t(\ln t)^2}\,dt\\&= \frac{S(x)}{\ln x} + \int_{3/2}^x \frac{dt}{t\ln t} + \int_{3/2}^x \frac{S(t) - \ln t}{t(\ln t)^2}\,dt\\&= \ln \ln x + \underbrace{1 - \ln \ln \frac{3}{2} + \int_{3/2}^\infty \frac{S(t) - \ln t}{t(\ln t)^2}\,dt}_M + \underbrace{\frac{S(x)-\ln x}{\ln x} - \int_x^\infty \frac{S(t)-\ln t}{t(\ln t)^2}\,dt}_{O\bigl(\frac{1}{\ln x}\bigr)}.\end{align} I'm not sure, however, whether one can get exactly Mertens' bounds on $(\ast\ast)$ easily from that. So in a way, Mertens' first theorem is the most powerful, since it implies the others, at least if we don't need explicit bounds for the differences.
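As a quick numeric illustration (not part of the argument), one can watch $(\ast)$ tend to $1$. This short Python snippet computes $e^\gamma \ln n \prod_{p\leqslant n}(1-1/p)$ with a simple sieve; the Euler–Mascheroni constant is hard-coded since it is not in the `math` module.

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

for n in (10 ** 3, 10 ** 5):
    prod = 1.0
    for p in primes_upto(n):
        prod *= 1.0 - 1.0 / p
    ratio = math.exp(GAMMA) * math.log(n) * prod
    print(n, ratio)   # tends to 1 as n grows
```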
Using the Laplace transform you have$$s^2 \hat{u} - \hat{u}_{xx} = 0$$with boundary conditions$$\hat{u}(0,s) = \hat{\phi}(s), \quad \hat{u}_x(\pi,s) = 0$$which leads to the solution$$\hat{u}(x,s) = \hat{\phi}(s) \frac{\cosh s(\pi-x)}{\cosh \pi s}$$This is fantastic modulo inverting the Laplace transform, which I think can be done, but I'm still not clear on the details. Another way would be taking $v(x,t) = u(x,t) - \phi(t)$ and using separation of variables. This is the workhorse and it's kind of foolproof, but it involves a lot of work. The most intuitive and, in my opinion, most beautiful way to solve the equation is using the fact that for any parallelogram $ABCD$ in the $xt$-plane bounded by four characteristic lines, the sums of the values of $u$ at opposite vertices are equal, that is,$$u(A) + u(C) = u(B) + u(D) \tag{1}$$If we divide the $xt$-plane into regions delimited by the characteristics as shown in the figure (regions I through IX) and take $A = (x,t) \in \mbox{II}$, $B = (0,t_B) \in \mbox{II}$, $C = (x_C,0) \in \mbox{I}$, $D=(x_D,t_D) \in \mbox{I}$, d'Alembert's solution implies that $$u(x,t) = 0, \quad (x,t) \in \mbox{I}$$and using $(1)$, we have that$$u(x,t) = \phi(t-x), \quad (x,t) \in \mbox{II}$$For III, no wave can reach the region, hence$$u(x,t) = 0, \quad (x,t) \in \mbox{III}$$For region IV, take $A = (x,t) \in \mbox{IV}$, $B = (0,t_B) \in \mbox{II}$, $C = (x_C,t_C) \in \mbox{I}$ and $D = (x_D,t_D) \in \mbox{III}$, and equation $(1)$ implies that$$u(x,t) = \phi(t-x), \quad (x,t) \in \mbox{IV}$$ For region V we take $A = (x,t) \in \mbox{V}$, $B = (0,t_B) \in \mbox{V}$, $C = (x_C,t_C) \in \mbox{III}$ and $D = (x_D,t_D) \in \mbox{III}$, and we have that$$u(x,t) = \phi(t-x), \quad (x,t) \in \mbox{V}$$ Region VI is more interesting: taking $A = (x,t) \in \mbox{VI}$, $B = (0,t_B) \in \mbox{II}$, $C = (x_C,t_C) \in \mbox{II}$ and $D = (\pi,t_D) \in \mbox{VI}$, equation $(1)$ implies that$$u(x,t) + u(x_C,t_C) = u(0,t_B) + u(\pi,t_D) \, \Longrightarrow$$$$u(x,t) = \phi(t_B) - 
\phi(t_C - x_C) + u(\pi, t_D)$$but\begin{align}t_B &= t - x \\t_D &= t + x - \pi\\x_C &= \pi - x\\t_C &= t - \pi\end{align}and then$$u(x,t) = \phi(t-x) - \phi(t + x -2\pi) + u(\pi,t + x - \pi)$$Now, using $u_x(\pi,t) = 0 = -2 \phi'(t - \pi) + u_t(\pi,t)$, we have$$u(x,t) = \phi(t-x) + \phi(t + x - 2\pi), \quad (x,t) \in \mbox{VI}$$ In region VII, taking $A = (x,t) \in \mbox{VII}$, $B = (0,t_B) \in \mbox{V}$, $C = (x_C,t_C) \in \mbox{IV}$ and $D = (\pi,t_D) \in \mbox{VI}$, equation $(1)$ implies that$$u(x,t) = \phi(t-x) + \phi(t + x - 2\pi), \quad (x,t) \in \mbox{VII}$$So far, it's easy to understand the results for all regions. For I and III there is no wave; for II, IV and V there is only the wave originating from the boundary $x = 0$. In VI there are two waves: the one originating in II plus its reflection off the boundary $x = \pi$. In VII, there is the wave from $x=0$ and the one from region VI. This logic tells us that in VIII there will be three waves: the one from the boundary $x = 0$, the one coming from VI, and its reflection. To see this, we take $A = (x,t) \in \mbox{VIII}$, $B = (0,t_B) \in \mbox{VIII}$, $C = (x_C,t_C) \in \mbox{VI}$ and $D = (\pi,t_D) \in \mbox{VI}$:\begin{align}u(x,t) &= \phi(t_B) - \phi(t_C - x_C) - \phi(t_C + x_C -2\pi) + 2\phi(t_D - \pi)\\&= \phi(t-x) + \phi(t + x - 2\pi) - \phi(t - x - 2\pi), \quad (x,t) \in \mbox{VIII}\end{align} Why the change of sign on the reflected wave, one might ask? The answer is simple: the boundary $x = 0$ is hard (Dirichlet), while the boundary at $x = \pi$ is soft (Neumann). 
For region IX, we take $A = (x,t) \in \mbox{IX}$, $B = (0,t_B) \in \mbox{V}$, $C = (x_C,t_C) \in \mbox{V}$ and $D = (\pi,t_D) \in \mbox{IX}$:\begin{align}u(x,t) &= \phi(t_B) - \phi(t_C-x_C) + u(\pi,t_D)\\&= \phi(t-x) - \phi(t + x - 2\pi) + u(\pi, t + x - \pi)\end{align}Again, using the boundary condition at $x = \pi$, we have $u(\pi,t) = 2\phi(t - \pi)$ and$$u(x,t) = \phi(t-x) + \phi(t + x - 2\pi), \quad (x,t) \in \mbox{IX}$$ There is clearly a pattern arising in the triangular regions. In the parallelograms a little more work must be done, but all in all, the table is set to propose a general solution by induction. Can you finish it off?
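The region VI formula can be sanity-checked numerically. The sketch below (plain Python, with an arbitrary smooth $\phi$) verifies by finite differences that $u(x,t)=\phi(t-x)+\phi(t+x-2\pi)$ satisfies $u_{tt}=u_{xx}$ and the Neumann condition $u_x(\pi,t)=0$.

```python
import math

phi = math.sin          # any smooth profile works for this check

def u(x, t):            # the region VI solution derived above
    return phi(t - x) + phi(t + x - 2 * math.pi)

h = 1e-4
x0, t0 = math.pi - 0.3, 7.0   # an arbitrary sample point

# wave equation: u_tt - u_xx ~ 0 by centred second differences
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
print(abs(utt - uxx))   # ~0 up to discretization error

# Neumann condition at x = pi: u_x(pi, t) = 0
ux = (u(math.pi + h, t0) - u(math.pi - h, t0)) / (2 * h)
print(abs(ux))          # ~0: the two phi terms cancel exactly at x = pi
```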
Subdivision Curves A subdivision curve is defined as the limit of recursive refinement of the input polyline $\mathbf x_i = \mathbf x_i^0$. Today's task is to implement three curve subdivision schemes from the lecture. For the sake of simplicity, we'll be working with closed curves only. Generally, one iteration of curve subdivision has the following form: Topological step: the curve is upsampled by inserting a new vertex between each two adjacent vertices. This doubles the number of vertices: if there were $n$ vertices, there are now $2n$ of them. Geometric step: new positions are computed for all vertices. There are two kinds of schemes: approximating schemes change the positions of old vertices, while interpolating schemes don't. Chaikin's scheme Introduced in 1974 by George Chaikin, this algorithm revolutionized the world of numerical geometry. The limit curve is a uniform quadratic B-spline. Corner-cutting A generalization of Chaikin's algorithm with two parameters $ 0 < a < b < 1 $. Setting $a=0.25, b=0.75$ gives Chaikin. The following example uses $a = 0.1, b = 0.6$: Four-point Unlike corner cutting, the four-point scheme is interpolating. ToDo Implement the three subdivision schemes. Experiment with different values of $a,b$ in corner cutting. Specifically, try using $b=a+\frac12$ and $b \neq a+\frac12$. What do you observe? The generalized four-point scheme uses the mask $[-\omega,\frac12+\omega,\frac12+\omega,-\omega]$ ($\omega=\frac1{16}$ in the original scheme). Modify your implementation of this algorithm to account for the tension parameter $\omega$ and try varying its value. You should get $\mathcal C^1$ limit curves for $\omega \in \left[0,(\sqrt 5 - 1)/8 \approx 0.154 \right]$.
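A compact NumPy sketch of the corner-cutting and generalized four-point schemes for closed polylines (function and variable names are my own; Chaikin is the $a=0.25$, $b=0.75$ case of corner cutting):

```python
import numpy as np

def corner_cut(pts, a=0.25, b=0.75, iterations=1):
    """Corner cutting: each edge (p, q) contributes the two new vertices
    (1-a) p + a q and (1-b) p + b q; a=0.25, b=0.75 is Chaikin's scheme."""
    pts = np.asarray(pts, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)   # q_i = p_{i+1}, wrapping (closed curve)
        new = np.empty((2 * len(pts), pts.shape[1]))
        new[0::2] = (1 - a) * pts + a * nxt
        new[1::2] = (1 - b) * pts + b * nxt
        pts = new
    return pts

def four_point(pts, w=1 / 16, iterations=1):
    """Generalized four-point scheme with tension w: inserted midpoints use
    the mask [-w, 1/2 + w, 1/2 + w, -w]; old vertices stay (interpolating)."""
    pts = np.asarray(pts, dtype=float)
    for _ in range(iterations):
        pm1, p1, p2 = (np.roll(pts, k, axis=0) for k in (1, -1, -2))
        mid = (0.5 + w) * (pts + p1) - w * (pm1 + p2)
        new = np.empty((2 * len(pts), pts.shape[1]))
        new[0::2] = pts                  # interpolating: keep the old vertices
        new[1::2] = mid
        pts = new
    return pts

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
print(corner_cut(square))        # 8 vertices after one Chaikin pass
print(four_point(square)[0::2])  # the old vertices are unchanged
```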
Nondeterministic Finite Automata CS390, Fall 2019 Abstract Next we consider an extension of our FA model that allows nondeterminism, a kind of parallel processing. We call the result a nondeterministic finite automaton (NFA). Curiously, we will show that although NFAs make it easier to describe some languages, the parallelism does not actually increase the power of our automata. These lecture notes are intended to be read in concert with Sections 2.3-2.6 of the text (Ullman). 1 Nondeterminism Nondeterminism usually refers to a process or computation that can have more than one result even when its input is fixed. In the real world, we often view things as non-deterministic because there are inputs that we cannot control (e.g., “If I throw this frisbee at a target, I expect it to hit, but if there’s a sudden gust of wind, I might miss.”) or cannot perceive (e.g., “I swear that I’ve used the exact same motion and strength to flip this coin each time, yet I can’t control whether it comes up heads or tails”). One of the revolutions in physics was quantum mechanics, which argued that the universe was not, as 19th-century physicists had believed, a complex but predictable “clockwork” mechanism, but was instead fundamentally based on probabilities. (Sometimes the electron jumps states, sometimes it doesn’t. Sometimes the cat lives. Sometimes it dies.) We can predict what is most probable, but we can hardly say with absolute certainty that anything definitely will or will not happen.[^There is a small but non-zero probability that all of the air molecules in the room with you right now will suddenly hop outside the nearest door or window, leaving you gasping. — Just doing my bit to help you sleep at night!] Ullman et al. consider two different ways of introducing non-determinism. One is to simply relax the rule that says that, for any given input, there can only be a single transition to another state. 
By allowing the same input to take you to two or more states, the overall behavior of the automaton becomes harder to predict, because it can be in multiple states at once, with each of those states contributing new transitions on subsequent inputs. A second way to introduce non-determinism is to allow instantaneous transitions from one state to others without waiting for the next input character. These are modeled as transitions on a special character $\epsilon$, and are therefore referred to as $\epsilon$-transitions. Now, $\epsilon$ is the symbol used earlier to denote an empty string, i.e., $\epsilon$ == "". But FAs don’t do transitions on entire strings; they do transitions on individual characters. So we’re playing a little game here. We introduce a special character that’s not in our regular alphabet. We’ll call it $\epsilon$ because, well, because we can. We then introduce a special rule so that, when translating strings from our augmented alphabet to the original alphabet, it behaves like the empty string $\epsilon$. For any strings $u$ and $v$, \[ u\epsilon v \rightarrow uv \] So this “translation” rule works as expected whether the “$\epsilon$” is the empty string or the special character. Then we pretend that all of our inputs (in the original alphabet) are augmented by placing a whole bunch 1 of these special characters before and after each of the original characters, e.g., the input $abc$ becomes $\epsilon\epsilon\ldots\epsilon a\epsilon\epsilon\ldots b\epsilon\epsilon\ldots c\epsilon\epsilon\ldots$. If we are willing to play all of these games, an NFA with $\epsilon$-transitions is actually just an “ordinary” NFA in which we can have transitions to multiple states on the same input. 1: As many as we have states in the automaton. 1.1 Example: Taking the union of two languages To see how this would be useful, consider the problem of finding an automaton that describes the union of two languages for which we already have FAs. 
For example, consider the language of all strings over the alphabet $\{a, b\}$ that begin with ‘a’. A DFA for this language is shown here. Then consider the language of strings (over the same alphabet) in which every ‘b’ is (eventually) followed by an ‘a’. An automaton for that language is shown here. Suppose that I want to construct an automaton for the union of these languages: the set of strings that begin with ‘a’ or that have every ‘b’ followed by an ‘a’. We can do that very easily by simply allowing those two automata to run “in parallel” on the same input. We add a new start state for the combined automaton, together with a pair of $\epsilon$-transitions that immediately transition into the original start states of the two DFAs. Notation warning: Many authors in this area use $\epsilon$ to represent empty strings and, therefore, simultaneous transitions in NFAs. Many others use $\lambda$ for the exact same purposes, however. As it happens, JFLAP defaults to $\lambda$. This default can be changed, however, from the Preferences menu. So, before we see a single “real” character, we transition into two new states, in effect “activating” both of the original automata. They will then start accepting input characters simultaneously. After any given input string, if either of the original DFAs would have been in a final state, our combined automaton will be in a final state as well. For example, after the input “abbb”, we will be in states B and Y, and state B is a final state. After the input “baa”, we will be in states C and Z, and Z is a final state. On the other hand, after the input “bb”, we would be in states C and Y, neither of which is final. Try running this NFA in JFLAP and verify that it works as I have claimed. 1.2 Example: Decomposition of Set Expressions In an earlier example on creating DFAs, we looked at how to systematically construct a DFA from simple set expressions. 
I stated then that “This whole approach will be easier for the NFAs we discuss later than for DFAs, because they allow sub-solutions to be linked together more easily.” Let’s take a look at that process. If I asked you, for example, to create an FA for, say, $\{01,101\}^*$, that breaks down into the *, a “loop” that has to come back to its starting state, and the body of the loop, consisting of a choice between a concatenation of two characters 0, 1, and a concatenation of three characters 1, 0, 1. So you could start by writing out the straight-line sequences for the two concatenations. Then, because the expression $\{01,101\}$ involves choosing either of these set elements, we tie them together at their beginning and their end to indicate a “union” or “either-or” structure. For DFAs, you may recall, tying these together required merging the beginning states, as shown here. Later we merged the end states as well. But when constructing an NFA, we can instead leave the original sequences untouched and join them using $\epsilon$-transitions, as shown here. This expresses the idea that “you can go to this subexpression or that subexpression” in an easily recognized form. Then we add the * in $\{01,101\}^*$ by simply connecting the end back to the beginning so that we can do any number of repetitions. Or we can, as we did for the union a moment ago, add a new begin node and end node, joined to the inner subexpression with $\epsilon$-transitions. Either of these last two automata will not only recognize the language $\{01,101\}^*$, but will do so in a way that can be easily recognized by someone reading the patterns in the NFA itself. We’ll formalize this approach in the next module. 2 Every NFA can be reduced to a DFA DFAs and NFAs are fundamentally equivalent. Every language that can be accepted by a DFA can also be accepted by some NFA, and vice-versa. It’s not hard to show that every language accepted by a DFA can be accepted by some NFA. 
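To make this concrete, here is a small Python sketch (state names are my own) that simulates the $\epsilon$-joined NFA for $\{01,101\}^*$: the two straight-line sequences are left untouched, tied together with $\epsilon$-transitions from a shared start state, and the ends are connected back to the beginning for the star.

```python
# states: S is both start and final; a1-a2-a3 spells "01", b1-b2-b3-b4 spells "101"
delta = {
    ("a1", "0"): {"a2"}, ("a2", "1"): {"a3"},
    ("b1", "1"): {"b2"}, ("b2", "0"): {"b3"}, ("b3", "1"): {"b4"},
}
eps = {"S": {"a1", "b1"}, "a3": {"S"}, "b4": {"S"}}   # union, plus loop back for *
start, finals = "S", {"S"}

def eclose(states):
    """All states reachable via epsilon-transitions only."""
    stack, out = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in out:
                out.add(r)
                stack.append(r)
    return out

def accepts(word):
    current = eclose({start})
    for ch in word:
        moved = set()
        for q in current:
            moved |= delta.get((q, ch), set())
        current = eclose(moved)
    return bool(current & finals)

for w in ("", "01", "101", "01101", "0", "11"):
    print(repr(w), accepts(w))
```

The simulation tracks the full set of active states, which is exactly the "in multiple states at once" reading of nondeterminism from section 1.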
That’s because an NFA is actually a relaxed form of the definition of a DFA. Every DFA is itself also an NFA – it’s just one in which we have not taken advantage of any of the extra possibilities permitted to the nondeterministic form. Proving that the language accepted by any NFA can also be accepted by some DFA is harder, but much more interesting. That’s because the proof amounts to an algorithm for converting an NFA into an equivalent DFA. I’m not going to type out all of the formal arguments from the text, but I do want to highlight some of the key ideas that lead to the conversion algorithm. First, of the five components that make up an NFA $A = (Q, \Sigma, \delta, q_0, F)$: The alphabet $\Sigma$ stays the same. The set of states $Q$ and the function $\delta$ that controls the transitions between them will, obviously, change. Our algorithm will be tasked with computing these. Similarly, the conversion algorithm will need to figure out what, in the new DFA, corresponds to the original starting state $q_0$ and which of the newly created states are to be final. The defining characteristic of an NFA is that it can be in multiple states at once. So the “labels” for our new DFA states will actually be sets of the original state labels from the NFA. For example, if we can find an input for which an NFA can simultaneously be in states A and Y, the generated DFA will have a state labeled AY. (This is a set of the original labels because order doesn’t matter. Having decided to label one DFA state ‘AY’, if we should ever be tempted to create a state ‘YA’, we will have to remember that these are actually the same state, because $\{A, Y\} = \{Y, A\}$.) So, generally, we will choose to simply list all such combined labels in ascending order. Now, in the absolute worst case, we might anticipate that, given the right inputs, we could drive an NFA into any possible combination of its states. 
That would mean that an NFA with $N$ states could translate into a DFA with as many as $2^N$ states – one for each possible subset of the set of states in the NFA. That doesn’t happen often – most practical problems see a much smaller growth in the number of states – but it actually illustrates a big reason why we use NFAs in the first place. They can allow us to describe a language in a much more compact form than if we were using a DFA. Once we understand that the labels of our new DFA states will be sets of labels from the original NFA, it is easy to see how the final states will be identified. Any DFA state whose label includes a final NFA state will, itself, be final. So if, for example, we were converting this NFA to a DFA and discovered, at the end, a state CY, that would not be final, because neither C nor Y is final in the NFA. But if our converted DFA has states CZ or BY, those would be final, because Z and B, respectively, are final. A key idea in the conversion algorithm is the $\epsilon$-closure of a set of states. The $\epsilon$-closure of a single state $s$ is the set consisting of $s$ and all states that can be reached from $s$ by following only $\epsilon$-transitions. The $\epsilon$-closure of an entire set of states $S$ is the union of the $\epsilon$-closures of all the elements of $S$. The conversion algorithm, referred to as a “subset construction”, then has the following structure: The starting state of the new DFA will be labeled with the $\epsilon$-closure of $q_0$, the starting state of the NFA. Add that to a queue of not-yet-analyzed states. Pick a state labeled $s_1 s_2 \ldots$ from that queue. For each symbol $a$ in the alphabet, consult the NFA to see what states it enters when starting from each of the states in the label. Take the union of all those, and then take the $\epsilon$-closure of that union. The result $t_1 t_2 \ldots$ is the label of a state in the new DFA. 
If $t_1 t_2 \ldots$ is not already a state in our DFA, add it to the DFA and to the queue of not-yet-analyzed states. Whether it was a new state or not, add a transition in our DFA from $s_1 s_2 \ldots$ to this $t_1 t_2 \ldots$. Repeat step 2 until the entire queue has been processed. If we were to convert this NFA, for example, we would start with a state \[ \epsilon\mbox{-closure}(\mbox{start}) = \{\mbox{start}, A, X\} \] We would then pick that state from the queue to analyze. Our alphabet is $\{a, b\}$, so starting with ‘a’, we would see: \[ \begin{align} \delta(\mbox{start},a) &= \{ \} \\ \delta(A,a) &= \{ B \} \\ \delta(X,a) &= \{ Z \} \\ & \\ \epsilon\mbox{-closure}(\{B, Z\})&= \{B, Z\} \end{align} \] So we add a new state and transition to our DFA. I’ve marked the state as final because at least one of its component NFA states is final. The state BZ goes into our queue for future processing. But first, we need to consider the other symbol in our alphabet: \[ \begin{align} \delta(\mbox{start},b) & = \{ \} \\ \delta(A,b) & = \{ C \} \\ \delta(X,b) & = \{ Y \} \\ & \\ \epsilon\mbox{-closure}(\{C, Y\}) & = \{C, Y\} \end{align} \] So we get another new state and transition, and CY goes into our queue. We’re done with our starting state now, so we pull a state from our queue. Next we will analyze BZ. \[ \begin{align} \delta(B,a) &= \{ B \} \\ \delta(Z,a) &= \{ Z \} \\ & \\ \epsilon\mbox{-closure}(\{B, Z\}) &= \{B, Z\} \end{align} \] We don’t get a new state this time, but we do add a transition: \[ \begin{align} \delta(B,b) &= \{ B \} \\ \delta(Z,b) &= \{ Y \} \\ & \\ \epsilon\mbox{-closure}(\{B, Y\}) &= \{B, Y\} \end{align} \] And we have a new state. Still remaining in our queue to be processed are CY and BY. You should end up with this. 3 Closing Thoughts: Nondeterminism and Programming When we begin learning to program, we place a high value on determinism. 
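The whole procedure fits in a few lines of Python. The sketch below reconstructs the union NFA from section 1.1 (the state names, and the transitions for C and Y, are inferred from the traces described in the text) and runs the subset construction on it.

```python
from collections import deque

# The union NFA from section 1.1: A/B/C accepts "begins with a",
# X/Y/Z accepts "every b eventually followed by an a", joined by
# eps-transitions from a new start state.
delta = {
    ("A", "a"): {"B"}, ("A", "b"): {"C"},
    ("B", "a"): {"B"}, ("B", "b"): {"B"},
    ("C", "a"): {"C"}, ("C", "b"): {"C"},
    ("X", "a"): {"Z"}, ("X", "b"): {"Y"},
    ("Y", "a"): {"Z"}, ("Y", "b"): {"Y"},
    ("Z", "a"): {"Z"}, ("Z", "b"): {"Y"},
}
eps = {"start": {"A", "X"}}
alphabet = "ab"
nfa_finals = {"B", "Z"}

def eclose(states):
    """States reachable by following only epsilon-transitions."""
    stack, out = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in eps.get(q, ()):
            if r not in out:
                out.add(r)
                stack.append(r)
    return frozenset(out)

# Subset construction: DFA states are (frozen)sets of NFA states.
dfa_start = eclose({"start"})
dfa_delta = {}
seen = {dfa_start}
queue = deque([dfa_start])
while queue:
    S = queue.popleft()
    for a in alphabet:
        moved = set()
        for q in S:
            moved |= delta.get((q, a), set())
        T = eclose(moved)
        dfa_delta[(S, a)] = T
        if T not in seen:
            seen.add(T)
            queue.append(T)
dfa_finals = {S for S in seen if S & nfa_finals}

def run(word):
    S = dfa_start
    for ch in word:
        S = dfa_delta[(S, ch)]
    return S in dfa_finals

for w in ("abbb", "baa", "bb"):
    print(w, run(w))
```

After "abbb" the DFA is in state {B, Y}, which is final, matching the trace in section 1.1; the construction discovers five DFA states in total rather than the worst-case $2^7$.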
We rely on the fact, for example, that if we repeatedly feed the same input to a program, that program will always give us the same output. Think how important that is to you generally when trying to test and debug your code! But, in the back of our minds, we understand that there are external influences that can change this deterministic behavior. If other processes on the same computer steal all of the available memory, for example, our usually reliable computer may crash in strange and unpredictable ways. As we begin to consider things like GUIs, the timing of inputs is often as important as the content of those inputs to the behavior and outputs of the program. Later we may start designing programs that themselves consist of multiple processes or threads that interact with one another. Because we can’t control when each process gets to use the CPU and for how long, we have only limited control over which of these threads will finish their respective tasks first. This leads to all kinds of unpredictable behaviors. Why do we even bother, if non-determinism can introduce so many problems? Well, for one thing, sometimes we actually value the unpredictability. Suppose you are playing a game against a computer opponent. How boring it would be if the computer always made exactly the same moves against you (even if you are repeating your own favorite sequence of moves). So game programmers often incorporate non-determinism or simulate it through the use of random number generators. More often, however, we design programs that way because the resulting designs are much, much simpler. In my earlier example, we saw that, taking advantage of NFAs, it’s very easy to take the union of two languages. By contrast, if we had to do so without using NFAs, the procedure would have been quite similar to the subset construction that we use to convert an NFA to a DFA. Now, the non-determinism that is provided by NFAs is a very limited form. 
It’s still an entirely predictable non-determinism. In fact, one might argue that an NFA is, despite its name, an entirely deterministic machine that we happen to view through a non-deterministic window. But that’s perfectly OK if it simplifies things for us. And, in the next chapter, NFAs will simplify things for us a lot!
$C_I$ is the set of points that belong to every $A_n$ with $n\in I$ and to no $A_n$ with $n\notin I$. For instance, $C_{\{2\}}$ is the set of points that are in $A_2$ but not in any other $A_n$. $C_{\{1,5\}}$ is the set of points that are in $A_1\cap A_5$ but not in any other $A_n$. To show that $$A_n=\bigcup_{|I|<\infty,n\in I}C_I\;,\tag{1}$$ you can try to show that each side of $(1)$ is a subset of the other. Show that if $I$ is finite, and $n\in I$, then $C_I\subseteq A_n$. (This is very straightforward.) Conclude that $$\bigcup_{|I|<\infty,n\in I}C_I\subseteq A_n\;.$$ Then try to show that if $a\in A_n$, there is a finite $I\subseteq\Bbb Z^+$ such that $a\in C_I$. This, however, need not be the case. If $a\in A_n$ for all $n$, for instance, then $a\notin C_I$ for any finite $I$. You will be able to prove this only if you have the additional hypothesis that each point is in only finitely many of the sets $A_n$. What you can prove, even without that extra hypothesis, is that $$A_n=\bigcup_{n\in I\subseteq\Bbb Z^+}C_I\;,$$ without any restriction on the size of $I$. The first inclusion above is still fine, and it is true that if $a\in A_n$, there is a (not necessarily finite) $I\subseteq\Bbb Z^+$ such that $a\in C_I$. Can you find it?
Motivation: Given the roots of the quadratic $2x^2+6x+7=0$, find a quadratic with roots $\alpha^2-1$ and $\beta^2-1$. I was able to solve this problem in two ways: Method 1: Sum of the roots $\alpha+\beta=-\frac{b}{a}$. Product of roots $\alpha\beta=\frac{c}{a}$. Hence $\alpha+\beta=-3$ and $\alpha\beta=\frac{7}{2}$. We want an equation with roots $\alpha^2-1$ and $\beta^2-1$. The sum of the roots of the new quadratic will be $\alpha^2-1+\beta^2-1=\alpha^2+\beta^2-2$. The product of the roots of the new quadratic will be $(\alpha^2-1)(\beta^2-1)=\alpha^2\beta^2-(\alpha^2+\beta^2)+1$. We are able to compute $\alpha^2+\beta^2$ as it is $(\alpha+\beta)^2-2\alpha\beta$, and so the problem is solved. Plugging in the numbers gives $4u^2+45=0$. Method 2: Let $u=\alpha^2-1\implies\alpha=\sqrt{u+1}$, but we know that $\alpha$ solves the original equation, so: $$\begin{align}2\alpha^2+6\alpha+7&=0\\2(u+1)+6\sqrt{u+1}+7&=0\\\sqrt{u+1}&=\frac{-2u-9}{6}\\u+1&=\frac{1}{36}(-2u-9)^2\\36u+36&=4u^2+36u+81\\0&=4u^2+45\end{align}$$ Question: The first method clearly uses the values of $\alpha$ and $\beta$, but the second seemingly only requires $\alpha$. How is this possible? Sure, one man's $\alpha$ is another's $\beta$, and so you could relabel, as the choice of $\alpha$ and $\beta$ is arbitrary. This is believable because of the symmetry involved in the new roots: $\alpha^2-1$ looks much like $\beta^2-1$. But I feel there must be more to this. Supposing one root of the new quadratic was $\alpha^2-1$ but the other was $\beta^3-2\beta$ or something worse? How would the second method know? This leads me to a more fundamental question. Are there only certain functions of the roots of an old quadratic for which we can find a new quadratic in this way? I suppose we could consider $(x-f(\alpha))(x-g(\beta))=0$ where $f$ and $g$ are the functions of the old roots, but then could we always compute these numerically? Thanks for taking the time to read this and for any contributions.
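A quick numeric check of both methods (pure Python, using the numbers from the problem): compute the roots of $2x^2+6x+7$, apply $u=\alpha^2-1$ to each, and confirm the images satisfy $4u^2+45=0$.

```python
import cmath

a, b, c = 2, 6, 7
disc = cmath.sqrt(b * b - 4 * a * c)       # the roots are complex here
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

residuals = []
for alpha in roots:
    u = alpha ** 2 - 1                     # image of the root under the transformation
    residuals.append(abs(4 * u ** 2 + 45))

print(residuals)   # both ~0: each transformed root solves 4u^2 + 45 = 0
```

More generally, the second method is elimination: for any polynomial $g$, eliminating $\alpha$ between $2\alpha^2+6\alpha+7=0$ and $u=g(\alpha)$ (e.g. via a resultant) yields a polynomial satisfied by $g(\alpha)$ for every root $\alpha$ of the original quadratic, which is why the method never needs to know which root was labeled $\alpha$.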
Let us start with a problem of the form $$(\mathcal{L} + k^2) u=0$$ with a set of given boundary conditions (Dirichlet, Neumann, Robin, periodic, Bloch-periodic). This corresponds to finding the eigenvalues and eigenvectors of some operator $\mathcal{L}$, under some geometry and boundary conditions. One can obtain a problem like this in acoustics, electromagnetism, elastodynamics, and quantum mechanics, for example. I know that one can discretize the operator using different methods, e.g., Finite Difference Methods to obtain $$[A]\{U\} = k^2 \{U\}$$ or Finite Element Methods to obtain $$[K]\{U\} = k^2 [M]\{U\} \enspace .$$ Some thoughts The Method of Manufactured Solutions is not useful in this case, since there is no source term to balance the equation. One can verify that the matrices $[K]$ and $[M]$ are well captured using a frequency-domain problem with a source term, e.g., $$[\nabla^2 + \omega^2/c^2] u(\omega) = f(\omega) \enspace ,\quad \forall \omega \in [\omega_\min, \omega_\max]$$ instead of $$[\nabla^2 + k^2] u = 0 \enspace ,$$ but this will not check for solver issues. Maybe one can compare solutions obtained with different methods, like FEM and FDM. Question What is the way to verify the solutions (eigenvalue-eigenvector pairs) of discretization schemes arising from numerical methods like FEM and FDM for eigenvalue problems?
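One common verification, complementary to manufactured solutions, is to use a configuration whose spectrum is known in closed form and check that the computed eigenvalues converge at the expected rate. A minimal NumPy sketch for the 1D Dirichlet Laplacian on $(0,\pi)$, whose exact eigenvalues are $k^2 = n^2$ (the grid size and tolerances are illustrative):

```python
import numpy as np

N = 500                      # number of interior grid points
h = np.pi / (N + 1)

# second-order finite-difference matrix for -d^2/dx^2 with u(0) = u(pi) = 0
A = (np.diag(np.full(N, 2.0))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h ** 2

vals = np.sort(np.linalg.eigvalsh(A))
for n in range(1, 6):
    print(n ** 2, vals[n - 1])   # computed eigenvalues approach n^2 as h -> 0
```

For FEM one would check $[K]\{U\}=k^2[M]\{U\}$ against the same analytic spectrum; repeating the run at several resolutions and confirming the $O(h^2)$ (or the method's nominal) convergence order exercises both the matrices and the eigensolver.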
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity, $|y|<0.8$, ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Background A binary decision tree $T$ is a rooted tree where each internal node (including the root) is labeled by an index $j \in \{1,..., n\}$ such that no path from root to leaf repeats an index, the leaves are labeled by outputs in $\{A,B\}$, and each edge is labeled by $0$ for the left child and $1$ for the right child. To apply a tree to an input $x$: (1) Start at the root. (2) If you are at a leaf, output the leaf label $A$ or $B$ and terminate. (3) Read the label $j$ of your current node; if $x_j = 0$ then move to the left child, and if $x_j = 1$ then move to the right child. Jump to step (2). The tree is used as a way to evaluate a function; in particular, we say a tree $T$ represents a total function $f$ if for each $x \in \{0,1\}^n$ we have $T(x) = f(x)$. The query complexity of a tree is its depth, and the query complexity of a function is the minimum depth over all trees that represent it. Problem Given a binary decision tree $T$, output a binary decision tree $T'$ of minimal depth such that $T$ and $T'$ represent the same function. Question What is the best known algorithm for this? Are any lower bounds known? What if we know that $\text{depth}(T') = O(\log \text{depth}(T))$? What about if we only require $T'$ to be of approximately minimal depth? Naive approach The naive approach, given $d = \text{depth}(T)$, is to recursively enumerate all binary decision trees of depth $d - 1$ while testing whether they evaluate to the same thing as $T$. This seems to require $O(\frac{d 2^n n!}{(n - d)!})$ steps (assuming that it takes $d$ steps to check what $T(x)$ evaluates to for an arbitrary $x$). Is there a better approach? Motivation This question is motivated by a previous question on the trade-off between query complexity and time complexity. In particular, the goal is to bound the time separation for total functions. We can make a tree $T$ from a time-optimal algorithm with runtime $t$, and then we would like to convert it to a tree $T'$ for a query-optimal algorithm.
Unfortunately, if $t \in O(n!/(n - d)!)$ (and often $d \in \Theta(n)$) the bottleneck is the conversion. It would be nice if we could replace $n!/(n - d)!$ by something like $2^d$.
What is the best way to simulate the short rate $r(t)$ in a simple one-factor Hull White process? Suppose I have $$ dr(t) = (\theta(t)-\alpha r(t))dt+\sigma dW_t $$ where $\theta(t)$ is calibrated to the swap curve, and the constants $\alpha$ and $\sigma$ are calibrated to caps using the closed-form solution for zero-coupon bond options. The best way I can think to do it is an Euler discretisation, that is: $$ r(t+\Delta t) = r(t) + \theta(t)\Delta t - \alpha r(t) \Delta t + \sigma \sqrt {\Delta t} Z $$ where $Z \sim N(0,1)$. In this case, I need $t$ to go from 0 to 10 years, ideally in 0.25 increments. But with Euler, I'd need to use a small $\Delta t$, so perhaps 0.025 or less? Once I have the path of $r(t)$, I can easily calculate $P(t,T)$ zero-coupon bonds. I'd appreciate any other ideas, or if someone could point me in the right direction. I'm quite new to rates modelling!
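The Euler scheme above can be sketched like this (the flat $\theta$ is a placeholder for illustration, not a calibrated curve):

```python
import numpy as np

def simulate_hw_euler(r0, theta, alpha, sigma, T=10.0, dt=0.025, seed=0):
    """Euler-Maruyama path of dr = (theta(t) - alpha*r) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        t = i * dt
        z = rng.standard_normal()
        r[i + 1] = r[i] + (theta(t) - alpha * r[i]) * dt + sigma * np.sqrt(dt) * z
    return r

# Toy run with a flat theta (purely illustrative, not a calibrated curve).
path = simulate_hw_euler(r0=0.02, theta=lambda t: 0.002, alpha=0.1, sigma=0.01)
print(len(path))  # 401 points for T=10, dt=0.025
```

Worth noting: because the one-factor Hull-White short rate is conditionally Gaussian, its transition distribution between any two dates is known in closed form, so one can also sample exactly on the coarse 0.25-year grid and avoid the small-$\Delta t$ requirement entirely.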
Category:Definitions/Powers (Abstract Algebra) Let $\paren {S, \circ}$ be a magma. Let $a \in S$. $\forall n \in \N_{>0}: \circ^n a = \begin{cases} a & : n = 1 \\ \paren {\circ^r a} \circ a & : n = r + 1 \end{cases}$ The mapping $\circ^n a$ is known as the $n$th power of $a$ (under $\circ$). Pages in category "Definitions/Powers (Abstract Algebra)" The following 8 pages are in this category, out of 8 total. P Definition:Power Associativity Definition:Power of Element Definition:Power of Element/Also defined as Definition:Power of Element/Magma Definition:Power of Element/Magma with Identity Definition:Power of Element/Notation Definition:Power of Element/Notation/Semigroup Definition:Power of Element/Semigroup
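The recursive definition above transcribes directly into code (an illustrative sketch, not part of the definition):

```python
def power(a, n, op):
    """n-th power of a under a binary operation op, per the recursive definition:
    power(a, 1) = a;  power(a, r+1) = power(a, r) op a."""
    if n == 1:
        return a
    return op(power(a, n - 1, op), a)

# Under multiplication this is a^n; under addition it is n*a.
print(power(2, 5, lambda x, y: x * y), power(2, 5, lambda x, y: x + y))  # 32 10
```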
Prove that closed and bounded subsets of metric spaces are compact My Attempted Proof Let $(X, d)$ be a metric space. Suppose $A \subset X$ is closed and bounded. Since $A$ is bounded, $\exists \ r > 0$ such that $d(x_1, x_2) \leq r$ for all $x_1, x_2 \in A$. Now let $B_d(x, r)$ be a neighbourhood of $x \in A$. Since $B_d(x, r)$ is open in the topology $\mathcal{T}_d$ induced by the metric $d$, we have $$\bigcup_{x\ \in\ A} B_d(x, r) \in \mathcal{T}_d.$$ Put $V = \bigcup_{x\ \in\ A} B_d(x, r)$; then $V$ is an open cover of $A$, and $A \subset V$, so that $A$ is compact. $\ \square$ Is my proof correct? If so, how rigorous is it?
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this, I can't believe I've forgotten it. it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc, but this is the biggest source of sense of purpose in the non-scholar. yeah there is this thing called the internet, and well yes, there are better books than others you can study from, provided they are not stolen from you by drug dealers. you should buy a textbook that they base university courses on if you can save for one. I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty prior to the idiots robbing me and taking that with them, which was a fantastic book to self-learn from, one of the best I've had actually. Yeah I wasn't happy about it either, it was more than $200 USD actually. well look, if you want my honest opinion, self-study doesn't exist. you are still being taught something by Euclid if you read his works, despite him having died a few thousand years ago, but he is as much a teacher as you'll get. and if you don't plan on reading the works of others, to maintain some sort of purity in the word self-study, well, no, you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton University or not. yeah, me neither, you are the only one I remember talking to on it, but I have been well and truly banned from this IP address for that forum now, which was, as you might have guessed, for being too polite and sensitive to delicate religious sensibilities. but no, it's not my forum, I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age, which according to your profile at the time you were. i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it. well yeah, it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time. i think i was still holding on to some sort of hope of a career in non-stupidity-related fields, which was at some point abandoned. @TedShifrin thanks for that. in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking. Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, and about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho, even the credits have expired. not the student debt though, so i think they are trying to hint i should go back and start from first year and double said debt, but im a terrible student, it really wasn't worthwhile the first time round considering my rate of attendance then, and how unlikely that would be different going back now. @BalarkaSen yeah, from the number theory i got into in my most recent years, it's bizarre how i almost became allergic to calculus. i loved it back then, and for some reason not quite so when i began focusing on prime numbers. What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even. A proof of this uses (basically) Fourier analysis, even though it looks a rather innocuous, albeit surprising, result in pure number theory. @BalarkaSen well, because it was what Wikipedia deemed my interests to be categorized as, i have simply told myself that is what i am studying. it really started with me horsing around, not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf. that reminds me, you were actually right, i don't know if i would have taken it well at the time tho. yeah, looks like i deleted the stack exchange question on it anyway. i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was, that's all i remember lol. @BalarkaSen oh, and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive. absolutely, me too, but would we have it any other way?
i mean, i know I'm like a dog chasing a car as far as any real "purpose" in learning is concerned; i think I'd be terrified if something didn't unfold into a myriad of new things I'm clueless about. @Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ has no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane. Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i. Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing. I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know. Cusp forms vanish at the cusps (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps. So it sort of makes sense. Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis. @BalarkaSen no, you didn't come across as pretentious at all. i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable. you shouldn't be so hard on your character, you are very humble considering your calibre. You probably don't realise how low the bar drops where integrity of character is concerned. trust me, you wouldn't have come as far as you clearly have if you were a know-it-all. it was actually the best thing for me to have met a 10 year old, at the age of 30, that was well beyond what I'll ever realistically become as far as math is concerned. someone like you is going to be accused of arrogance simply because you intimidate many. ignore the good majority of that, mate
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I have a RKF45 numerical integrator that simulates polymerization of proteins using CUDA. It does so by tracking the populations of discrete-length polymers, e.g. monomers, dimers, trimers, etc. all the way up to 8192-mers, using equations of the type that follow. $$\frac{dc_r}{dt}= k_nc_r^{n_c}+2k_a(c_{r-1}-c_r) + k_m(2\sum^{max}_{s=r+2}c_s-(r-1)c_r) $$ where $r$ represents the length of the polymer and $c_r$ its concentration at that given time. The $k$'s are the parameters with respect to which I'd like to minimize an objective function. These equations are integrated over about 20,000-30,000 timesteps. My goal is to start fitting the model to experimental data. The first type of fit I'm trying is fitting a value derived from these polymer populations at arbitrary timesteps. For instance, I want to calculate the average length of polymers at five different times in the simulation/integration and compare them to externally supplied data. The average length is derived for each timestep from the concentrations like this: $$L(t) = \frac{M(t)}{N(t)}$$ $$M(t) = \sum^{max}_{s=n_c}s \cdot c_s(t)$$ $$N(t) = \sum^{max}_{s=n_c}c_s(t)$$ The objective function I'm trying to minimize thus looks like this: $$y(t)=\sqrt{\sum_{fit data}(L_{calculated}-L_{experimental})^2}$$ I've already implemented a solution that computes this norm and minimizes it using Scipy's Nelder-Mead optimization method. It seems to work, but if it's possible to make it perform better, I'd like to make it happen. I want to try more robust and faster-converging methods like L-BFGS-B, especially because they also allow me to impose bounds on the parameters, which need to remain positive.
Now, the Runge–Kutta algorithm, for the unfamiliar, is described here: http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods Due to the nested nature of the Runge–Kutta stage calculations, such as $$k_1=f(y_n)$$ $$k_2=f(y_n+a_{21}k_1)$$ $$k_3=f(y_n+a_{31}k_1+a_{32}k_2)$$ I'm having issues figuring out how to calculate the Jacobian, which I believe would be $$J(t) = [\frac{dy}{dk_a} \frac{dy}{dk_m} \frac{dy}{dk_n}]$$ So, if I framed this question properly: is it possible, at some point during the RKF calculation, to construct a Jacobian to pass out to Scipy along with the objective function evaluation? I'd like to get an optimized optimization script up and running ASAP.
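One interim option while the analytic Jacobian is worked out: scipy's L-BFGS-B will finite-difference the gradient itself when none is supplied, and it accepts bounds. A minimal sketch with a stand-in objective (the toy model and "experimental" data below are invented for illustration; the real objective would call the CUDA integrator and compute $L(t)$):

```python
import numpy as np
from scipy.optimize import minimize

# Fake "experimental" average lengths at five sample times (illustrative only).
t_data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
L_exp = 2.0 * np.sqrt(t_data)

def objective(k):
    """Squared misfit for a hypothetical 2-parameter stand-in model."""
    ka, km = k
    L_calc = ka * np.sqrt(t_data) + km   # placeholder for the simulated L(t)
    return np.sum((L_calc - L_exp) ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="L-BFGS-B",
               bounds=[(1e-8, None), (1e-8, None)],  # keep rates positive
               options={"eps": 1e-6})                # finite-difference step
print(res.x, res.fun)
```

For an expensive simulator, each finite-difference gradient costs one extra integration per parameter; with only three rate constants that may still be far cheaper than propagating stage-by-stage sensitivities through the RKF45 loop.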
This is for homework, so please hints only. Also, I know that there are questions similar to this one, but those questions use a different definition of totally bounded. The definition my book uses for totally bounded is: A metric space $X$ is totally bounded if $\forall \epsilon > 0$ there is a finite subset $A \subset X$ such that $X = \bigcup_{a \in A} S_{\epsilon}(a)$, where $S_r(x_0) = \{x \in X: d(x,x_0) < r\}$. The difference here is that other questions only require containment, not equality. The question I have to answer is: Let $X$ be a metric space; show that $A \subset X$ is totally bounded $\iff$ $\bar{A}$ is totally bounded. I have shown that if $A$ is totally bounded then you can construct a finite cover that contains $\bar{A}$. My professor has said that I may need to show that $\forall \epsilon\ \exists \delta$ such that there exists a finite subset $C \subset \bar{A}$ with $\bar{A} = \bigcup_{c \in C} S_{\delta}(c)$. But I'm not sure how to proceed. I would really appreciate any help you can offer. Thank you.
There are $n$ students and $m$ courses. Each student $i$ wants to attend a subset $C_i \subseteq \{1,\ldots,m\}$ of the courses. There are $k$ time slots on a weekly schedule. The goal is to select a time slot for each course such that the students can attend as many courses as possible, that is, find a mapping $f: \{1,\ldots,m\} \rightarrow \{1,\ldots,k\}$ such that the following objective function is maximized: $$\sum_{i = 1}^n \big| \bigcup_{j \in C_i} \{f(j)\} \big| $$ For my (artificial) instance $n = 10^5, m = 5000, k = 25$, and any student wants to attend at most 16 courses. I have tried a hill climbing algorithm which moves in the search space by changing one course at a time in the mapping $f$ if it improves the objective function, until it can improve no more. This gives a local maximum, but I'd like to do better. I've tried to implement a simulated annealing approach with the same neighbourhood structure, but I haven't been able to tune the parameters to beat the hill climbing algorithm. I've read that the number of moves from any part of the search space to another part should be small. In my solution I need at most $5000$ moves; is this too much? If I make bigger changes to the mapping, the evaluation of the cost function becomes expensive, as I have to recompute the objective function for every student affected by the change. What kind of a neighbourhood structure would suit this problem best?
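The one-course-at-a-time hill climb described above can be sketched as follows, on a tiny invented instance (the sizes and the full re-evaluation of the objective are for clarity only; at the real scale one would update the objective incrementally per affected student):

```python
import random

random.seed(1)
n, m, k = 50, 12, 4                      # tiny illustrative instance
wants = [random.sample(range(m), random.randint(1, 5)) for _ in range(n)]

def objective(f):
    # A student attends at most one course per slot, so count distinct slots.
    return sum(len({f[j] for j in C}) for C in wants)

f = [random.randrange(k) for _ in range(m)]
improved = True
while improved:                           # repeat until no single move helps
    improved = False
    for j in range(m):                    # try moving one course at a time
        best, best_slot = objective(f), f[j]
        for slot in range(k):
            f[j] = slot
            if objective(f) > best:
                best, best_slot, improved = objective(f), slot, True
        f[j] = best_slot                  # keep the best slot found for j
print(objective(f))
```

On exit, no single-course reassignment can improve the objective, i.e. the schedule is a local maximum for this neighbourhood.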
Let $i : H \to G$ be the inclusion of a subgroup of finite index. The transfer map is a special homomorphism $V(i) : G^\mathrm{ab} \to H^\mathrm{ab}$. The usual ad hoc definition uses a set of representatives of $H$ in $G$, and then you have to check that it is independent of this choice and that it is a homomorphism at all. I think this definition is not enlightening at all (although it is, of course, useful for explicit calculations). A better one uses group homology. Namely, for a $G$-module $A$ there is a natural transformation $A_G \to \mathrm{res}^{G}_{H} A_H$, $[a] \mapsto \sum_{Hg \in H\backslash G} [ga]$, which extends to a natural transformation $H_*(G;A) \to H_*(H;\mathrm{res}^{G}_{H} A)$ (usually called corestriction or transfer). Now evaluate at $A = \mathbb{Z}$ and $* = 1$ to get $G^\mathrm{ab} \to H^\mathrm{ab}$. One can then calculate this map using the explicit isomorphisms and homotopy equivalences involved; but now you know by the general theory that it is a well-defined homomorphism. It also follows directly that the transfer is actually a functor $V : \mathrm{Grp}_{mf} \to \mathrm{Ab}^{\mathrm{op}}$ with object function $G \mapsto G^{\mathrm{ab}}$, where $\mathrm{Grp}_{mf}$ is the category whose objects are groups and whose morphisms are monomorphisms of finite index. I would like to know if there is an even more "abstract" definition. To be more precise: Is there a categorical characterization of the functor $V$ which only uses the adjunction $\mathrm{Grp} {\longleftarrow \atop \longrightarrow} \mathrm{Ab}$? Edit: There are many interesting answers so far which give, in fact, very "enlightening" definitions of the transfer. But I would also like to know if there is a purely categorical one, such as the one given by Ralph. Edit: A very interesting note by Daniel Ferrand is A note on transfer. There, a more general statement is proven (even in a topos setting): Let $G$ act freely on a set $X$ such that $X/G$ is finite with at least two elements.
Then there is an isomorphism of abelian groups $(\mathrm{Ver},\mathrm{sgn}) : {\mathrm{Aut}_{G}(X)}^{\mathrm{ab}} \cong G^{\mathrm{ab}} \times \mathbb{Z}/2$. It is natural with respect to $G$-isomorphisms. Here again I would like to ask if it is possible to characterize this isomorphism by its properties (instead of writing it down via choices, whose independence has to be shown afterwards). Proposition 7.1 in this paper includes the interpretation via determinants mentioned by Geoff in his answer, actually something more general: For w.l.o.g. abelian $G$ there is a commutative diagram $\begin{matrix} {\mathrm{Aut}_{G}(X)}^{\mathrm{ab}} & \cong & \mathrm{Aut}_{\mathbb{Z}G}(\mathbb{Z}X)^{\mathrm{ab}} \\ \downarrow & & \downarrow \\ G \times \mathbb{Z}/2 & \rightarrow & (\mathbb{Z} G)^{\times} \end{matrix}$ Thus we may think of the transfer and the signature as embedding the standard units into the group ring.
The construction of quasi-periodic solutions of quasi-periodic forced Schrödinger equation 1. Department of Mathematics, Nanjing University, Nanjing 210093, China 2. Department of Mathematics, Nanjing University, Nanjing 210093 $iu_t=u_{xx}-mu-f(\beta t,x)|u|^2 u,$ with the boundary conditions $u(t,0)=u(t,a\pi)=0, \ -\infty < t < \infty,$ where $m$ is real and $f(\beta t,x)$ is real analytic and quasi-periodic in $t$, satisfying the non-degeneracy condition $\lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T f(\beta t,x)\,dt\equiv f_0=$ const., $\quad 0\ne f_0 \in\mathbb R,$ with $\beta\in\mathbb R^b$ a fixed Diophantine vector. Mathematics Subject Classification: Primary: 70H08, 70H12; Secondary: 37J4. Citation: Lei Jiao, Yiqian Wang. The construction of quasi-periodic solutions of quasi-periodic forced Schrödinger equation. Communications on Pure & Applied Analysis, 2009, 8 (5) : 1585-1606. doi: 10.3934/cpaa.2009.8.1585
Let $f:\mathbb{N}^+ \to \mathbb{Q}^+$ satisfy $f(1) = 2016$ and $\displaystyle \sum_{i=1}^n f(i) = n^2 f(n)$ for $n>1$. If the value of $f(2016)$ can be represented as $\dfrac{a}{b}$, where $a$ and $b$ are coprime positive integers, what is the value of $a+b$?
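Subtracting consecutive partial sums gives $(n^2-1)f(n) = (n-1)^2 f(n-1)$, i.e. $f(n) = \frac{n-1}{n+1} f(n-1)$, which telescopes. A quick exact-arithmetic check of that recurrence:

```python
from fractions import Fraction

# (n^2 - 1) f(n) = (n-1)^2 f(n-1)  =>  f(n) = (n-1)/(n+1) * f(n-1)
f = Fraction(2016)
for n in range(2, 2017):
    f *= Fraction(n - 1, n + 1)
print(f, f.numerator + f.denominator)  # 2/2017 2019
```

The product telescopes to $f(N) = \frac{2 \cdot 2016}{N(N+1)}$, so $f(2016) = \frac{2}{2017}$ and $a+b = 2019$.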
All LTI systems possess the eigenfunction property for complex exponential inputs. That is (restricting our attention to periodic complex exponentials), if $e^{j\omega_k t}$ is an input to the LTI system, then the output is $H(j\omega_k) e^{j\omega_k t}$, where $H(j\omega_k)$ is the frequency response of the system evaluated at $\omega_k$. A linear constant-coefficient differential equation with zero initial conditions can represent a causal LTI system. Why is it, then, that when we talk about systems, specifically systems with dynamics described by differential equations, we speak of a "steady-state" sinusoidal output? This seems to imply that transients are always present. But if the system is LTI to begin with, would there be transients in the output if a sinusoidal signal, or complex exponential, is applied? It can be shown mathematically that the output of an LTI system when a complex exponential is applied is just a scaled version of the input, with no transient terms. What's going on here? "It can be shown mathematically that the output of an LTI system when a complex exponential is applied is just a scaled version of the input, with no transient terms." — that's for an input that has been going on since forever ago (and will continue forever into the future): if $$ x(t) = e^{j \omega_0 t} \qquad \forall t \in \mathbb{R} $$ then $$ y(t) = H(j \omega_0) e^{j \omega_0 t} = H(j \omega_0) x(t) $$ "What's going on here?" The problem is, sometimes we have to apply an input at some time we might call "$t=0$", and before the LTI system settles down to the $y(t)$ shown above, there are transients. But if the LTI system is stable (and not "marginally stable"), the transients will eventually die away.
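The answer's point can be illustrated numerically with a toy first-order system of my choosing, $H(s) = 1/(s+1)$, driven by $\cos(2t)$ switched on at $t=0$ from rest: the output initially differs from the frequency-response prediction, then converges to it as the $e^{-t}$ transient decays (simple Euler integration, for illustration only):

```python
import numpy as np

# First-order LTI system y' + y = x, i.e. H(s) = 1/(s+1), driven by cos(2t)
# switched on at t = 0 with y(0) = 0, so a transient is excited.
dt, T = 1e-4, 20.0
t = np.arange(0.0, T, dt)
y = np.zeros_like(t)
for i in range(len(t) - 1):                  # forward Euler integration
    y[i + 1] = y[i] + dt * (-y[i] + np.cos(2.0 * t[i]))

# Steady-state prediction from the frequency response H(j2) = 1/(1 + 2j).
H = 1.0 / (1.0 + 2.0j)
y_ss = np.abs(H) * np.cos(2.0 * t + np.angle(H))
late = t > 10.0                              # transient ~ e^{-t} is long gone
print(np.max(np.abs(y[late] - y_ss[late])))  # tiny: only steady state remains
```

At $t=0$ the simulated output and the eigenfunction formula disagree (that gap is the transient); for $t \gg 1$ they agree to within the integration error.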
We show that in the presence of the torsion tensor $S^k_{\phantom{k}ij}$, whose existence is required by the consistency of the conservation law for the total angular momentum of a Dirac particle in curved spacetime with relativistic quantum mechanics, the quantum commutation relation for the four-momentum is given by $[p_i,p_j]=2i\hbar S^k_{\phantom{k}ij}p_k$. We propose that this relation replaces the integration in the momentum space in Feynman diagrams with the summation over the discrete momentum eigenvalues. We derive a prescription for this summation that agrees with convergent integrals: $\int\frac{d^4p}{(p^2+\Delta)^s}\rightarrow 4\pi U^{s-2}\sum_{l=1}^\infty \int_0^{\pi/2} d\phi \frac{\sin^4\phi\,n^{s-3}}{[\sin\phi+U\Delta n]^s}$, where $n=\sqrt{l(l+1)}$ and $1/\sqrt{U}$ is a constant on the order of the Planck mass, determined by the Einstein-Cartan theory of gravity. We show that this prescription regularizes ultraviolet-divergent integrals in loop diagrams. We extend this prescription to tensor integrals and apply it to vacuum polarization. We derive a finite, gauge-invariant vacuum polarization tensor and a finite running coupling that agrees with the low-energy limit of the standard quantum electrodynamics. Including loops from all charged fermions, we find a finite value for the bare electric charge of an electron: $\approx -1.22\,e$. Torsional regularization, originating from the noncommutativity of the momentum and spin-torsion coupling, therefore provides a realistic, physical mechanism for eliminating infinities in quantum field theory: quantum electrodynamics with torsion is ultraviolet complete. arXiv 1712.09997
Kakeya problem Define a Kakeya set to be a subset [math]A[/math] of [math][3]^n\equiv{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements. General lower bounds Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Next, the Cartesian product of two Kakeya sets is another Kakeya set; hence, [math]k_{n+m} \leq k_m k_n[/math], implying that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. From Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below. We have [math]k_n(k_n-1)\ge 3(3^n-1)[/math] since for each [math]d\in {\mathbb F}_3^n\setminus\{0\}[/math] there are at least three ordered pairs of elements of a Kakeya set with difference [math]d[/math]. (I actually can improve the lower bound to something like [math]k_n\gg 3^{0.51n}[/math].) For instance, we can use the "bush" argument. There are [math]N := (3^n-1)/2[/math] different directions. Take a line in every direction, let E be the union of these lines, and let [math]\mu[/math] be the maximum multiplicity of these lines (i.e. the largest number of lines that are concurrent at a point). On the one hand, from double counting we see that E has cardinality at least [math]3N/\mu[/math].
On the other hand, by considering the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that E has cardinality at least [math]2\mu+1[/math]. If we minimise [math]\max(3N/\mu, 2\mu+1)[/math] over all possible values of [math]\mu[/math], we obtain approximately [math]\sqrt{6N} \approx 3^{(n+1)/2}[/math] as a lower bound on |E|, which is asymptotically better than [math](3/2)^n[/math]. Or, we can use the "slices" argument. Let [math]A, B, C \subset ({\Bbb Z}/3{\Bbb Z})^{n-1}[/math] be the three slices of a Kakeya set E. We can form a graph G between A and B by connecting [math]a \in A[/math] and [math]b \in B[/math] by an edge if there is a line in E joining [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b: (a,b) \in G \}[/math] is essentially C, while the difference set [math]\{a-b: (a,b) \in G \}[/math] is all of [math]({\Bbb Z}/3{\Bbb Z})^{n-1}[/math]. Using an estimate from this paper of Katz-Tao, we conclude that [math]3^{n-1} \leq \max(|A|,|B|,|C|)^{11/6}[/math], leading to the bound [math]|E| \geq 3^{6(n-1)/11}[/math], which is asymptotically better still. General upper bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. Question: can the upper bound be strengthened to [math]k_{n+1}\le 2k_n+1[/math]? Another construction uses the "slices" idea and a construction of Imre Ruzsa. Let [math]A, B \subset [3]^n[/math] be the set of strings with [math]n/3+O(\sqrt{n})[/math] 1's, [math]2n/3+O(\sqrt{n})[/math] 0's, and no 2's; let [math]C \subset [3]^n[/math] be the set of strings with [math]2n/3+O(\sqrt{n})[/math] 2's, [math]n/3+O(\sqrt{n})[/math] 0's, and no 1's, and let [math]E = \{0\} \times A \cup \{1\} \times B \cup \{2\} \times C[/math]. From Stirling's formula we have [math]|E| = (27/4 + o(1))^{n/3}[/math]. 
Now I claim that for most [math]t \in [3]^n[/math], there exists an algebraic line in the direction (1,t). Indeed, typically t will have [math]n/3+O(\sqrt{n})[/math] 0s, [math]n/3+O(\sqrt{n})[/math] 1s, and [math]n/3+O(\sqrt{n})[/math] 2s, thus [math]t = e + 2f[/math] where e and f are strings with [math]n/3 + O(\sqrt{n})[/math] 1s and no 2s, with the 1-sets of e and f being disjoint. One then checks that the line [math](0,f), (1,e), (2,2e+2f)[/math] lies in E. This already covers a positive fraction of all directions. One can use the random rotations trick to get the rest of the directions (losing a polynomial factor in n). Putting all this together, I think we have [math](3^{6/11} + o(1))^n \leq k_n \leq ( (27/4)^{1/3} + o(1))^n[/math], or [math](1.8207\ldots+o(1))^n \leq k_n \leq (1.88988\ldots+o(1))^n[/math].
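The small values quoted above ([math]k_1=3[/math], [math]k_2=7[/math]) can be checked by brute force: a minimal Kakeya set is a minimal union of one line per direction class (d and 2d determine the same lines). A sketch of such a check in Python — not from the original discussion, and all function names are ours:

```python
from itertools import product

def direction_classes(n):
    # One representative per direction class in F_3^n
    # (d and 2d determine the same set of lines).
    reps, seen = [], set()
    for d in product(range(3), repeat=n):
        if any(d) and d not in seen:
            reps.append(d)
            seen.add(d)
            seen.add(tuple(2 * x % 3 for x in d))
    return reps

def lines_in_direction(d, n):
    # All distinct algebraic lines {a, a+d, a+2d} in F_3^n.
    lines = set()
    for a in product(range(3), repeat=n):
        lines.add(frozenset(tuple((a[i] + t * d[i]) % 3 for i in range(n))
                            for t in range(3)))
    return list(lines)

def kakeya_min(n):
    # Smallest Kakeya set: minimal union of one line per direction class.
    choices = [lines_in_direction(d, n) for d in direction_classes(n)]
    return min(len(frozenset().union(*pick)) for pick in product(*choices))

print(kakeya_min(1), kakeya_min(2))  # 3 7
```

Note that this exhaustive search is only feasible in very low dimension: already for n=3 there are [math]9^{13}[/math] choices of one line per direction, so the values [math]k_3=13[/math] and [math]k_4\le 27[/math] mentioned above require a smarter search.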
Show that if $d$ is a metric for $X$, then $d'(x,y) =\frac{d(x,y)}{1+d(x,y)}$ is a bounded metric that gives the topology of $X$. On dbfin.com the solution reads as follows: Now, we show that $d'$ induces the same topology as $d$. Since $f(x)=\frac{x}{1+x}:\mathbb R^+\to[0,1)$ and $f^{-1}(y)=\frac{y}{1-y}:[0,1)\to\mathbb R^+$ are continuous, $d'=f\circ d$ and $d=f^{-1}\circ d'$, $d'$ is continuous in the $d$-topology, and $d$ is continuous in the $d'$-topology, implying that the topologies are the same. Which topologies are they talking about? I think they mean the coarsest topologies on $X \times X$ such that $d : X \times X \to \mathbb R$ and $d' : X \times X \to \mathbb R$ are continuous. Can anyone please correct me if I have gone wrong anywhere?
Now showing items 1-10 of 55 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... 
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Myronyuk V. V. Ukr. Mat. Zh. - 2016. - 68, № 8. - pp. 1080-1091 We establish the exact-order estimates of Kolmogorov and orthoprojective widths of anisotropic Besov classes of periodic functions of several variables in the spaces $L_q$. Ukr. Mat. Zh. - 2016. - 68, № 5. - pp. 634-643 We establish exact-order estimates for the Kolmogorov widths of the anisotropic Besov classes of periodic functions of many variables in the spaces $L_q,\; 1 \leq q \leq \infty$. Trigonometric Approximations and Kolmogorov Widths of Anisotropic Besov Classes of Periodic Functions of Several Variables Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1117–1132 We describe the Besov anisotropic spaces of periodic functions of several variables in terms of the decomposition representation and establish the exact-order estimates of the Kolmogorov widths and trigonometric approximations of functions from unit balls of these spaces in the spaces $L_q$. Approximation of Functions of Many Variables from the Classes $B_{p,θ}^{Ω} (ℝ^d)$ By Entire Functions of Exponential Type Ukr. Mat. Zh. - 2014. - 66, № 2. - pp. 244–258 We obtain the decomposition representation of the norm of functions of many variables from the spaces $B_{p,θ}^{Ω} (ℝ^d)$ and establish the exact order estimates for the approximations of functions from the unit balls of these spaces by entire functions of exponential type in the space $L_q (ℝ^d)$. Approximation of the classes $B^{\Omega}_{p, \theta}$ of periodic functions of many variables by Fourier sums in the space $L_p$ with $p = 1, \infty$ Ukr. Mat. Zh. - 2012. - 64, № 9. - pp. 1204-1213 We obtain an exact-order estimate for the deviation of Fourier sums of periodic functions of many variables from the classes $B^{\Omega}_{p, \theta}$ in the space $L_p$ for $p = 1, \infty$.
Q: Suppose $p_0, \dots , p_m$ are polynomials in $P_m(F)$ s.t. $p_j(2)=0$ for all $j$. Prove that $p_0, \dots , p_m$ is not linearly independent in $P_m(F)$. Edit: $P_m(F)$ is the vector space of all polynomials of degree at most $m$ over a field $F$ My Proof Suppose that $p_0, \dots , p_m$ is linearly independent. Now, consider the family $z, p_0, \dots , p_m$, where $z$ denotes the polynomial $z(x)=x$. Note: $1,x, \dots , x^m$ spans $P_m(F)$ and has $m+1$ elements. So, no linearly independent set in $P_m(F)$ has more than $m+1$ elements. So, $z, p_0, \dots , p_m$ is linearly dependent (it has $m+2$ elements). So, for some $\beta, \alpha_j \in F$ (not all $=0$), $\beta z + \alpha_0 p_0 + \dots +\alpha_m p_m = 0$. If $\beta = 0$, then $\alpha_0 p_0 + \dots +\alpha_m p_m = 0$, and since it is assumed that $p_0, \dots , p_m$ is linearly independent, this forces all $\alpha_j = 0$, which contradicts the fact that not all of the coefficients are zero. Hence, $\beta \neq 0$. Then, $z = (-1/\beta)(\alpha_0 p_0 + \dots +\alpha_m p_m)$, which is in $span\{p_0 , \dots , p_m\}$, so $z \in span\{p_0, \dots , p_m\}$. So, $\exists \alpha_j \in F$ such that $z=\alpha_0 p_0 + \dots + \alpha_m p_m$. But evaluating both sides at $2$: $z(2)=2$, while $p_j(2)=0$ for all $j$, so $2 = 0 + \dots + 0 = 0$, contradiction. SO, $p_0, \dots , p_m$ is not linearly independent. Is this proof sufficient? Are there any holes or inconsistencies that I should be worried about?
Moser-lower.tex \section{Lower bounds for the Moser problem}\label{moser-lower-sec} In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. The first lower bounds may be due to Koml\'{o}s \cite{komlos}, who observed that the sphere $S_{i,n}$ of elements with exactly $n-i$ entries equal to $2$ (see Section \ref{notation-sec} for the definition) is a Moser set, so that \begin{equation}\label{cin} c'_{n,3}\geq \vert S_{i,n}\vert \end{equation} holds for all $i$. Choosing $i=\lfloor \frac{2n}{3}\rfloor$ and applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq (C-o(1)) 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$; in fact \eqref{cin} gives \eqref{cpn3} with $C := \sqrt{\frac{9}{4\pi}}$. In particular $c'_{3,3} \geq 12, c'_{4,3}\geq 24, c'_{5,3}\geq 80, c'_{6,3}\geq 240$. Asymptotically, the best lower bounds we know of are still of this type, but the values can be improved by studying combinations of several spheres or semispheres, or by applying elementary results from coding theory. Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$, and $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^e$ (or $S_{i-1,n} \cup S_{i,n}^o$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^e$ are separated by a Hamming distance of at least two (recall Section \ref{notation-sec} for definitions). This leads to the lower bound \begin{equation}\label{cn3-low} c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}. 
\end{equation} It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lfloor \frac{2n+1}{3} \rfloor$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120; c'_{6,3} \geq 336$$ which gives the right lower bounds for $n=2,3$, but is slightly off for $n=4,5$. Asymptotically, Stirling's formula and \eqref{cn3-low} then give the lower bound \eqref{cpn3} with $C = \frac{3}{2} \times \sqrt{\frac{9}{4\pi}}$, which is asymptotically $50\%$ better than the bound \eqref{cin}. The work of Chv\'{a}tal \cite{chvatal1} already contained a refinement of this idea which we here translate into the usual notation of coding theory: Let $A(n,d)$ denote the size of the largest binary code of length $n$ and minimal distance $d$. Then \begin{equation}\label{cnchvatal} c'_{n,3}\geq \max_k \left( \sum_{j=0}^k \binom{n}{j} A(n-j, k-j+1)\right). 
\end{equation} With the following values for $A(n,d)$: {\tiny{ \[ \begin{array}{llllllll} A(1,1)=2&&&&&&&\\ A(2,1)=4& A(2,2)=2&&&&&&\\ A(3,1)=8&A(3,2)=4&A(3,3)=2&&&&&\\ A(4,1)=16&A(4,2)=8& A(4,3)=2& A(4,4)=2&&&&\\ A(5,1)=32&A(5,2)=16& A(5,3)=4& A(5,4)=2&A(5,5)=2&&&\\ A(6,1)=64&A(6,2)=32& A(6,3)=8& A(6,4)=4&A(6,5)=2&A(6,6)=2&&\\ A(7,1)=128&A(7,2)=64& A(7,3)=16& A(7,4)=8&A(7,5)=2&A(7,6)=2&A(7,7)=2&\\ A(8,1)=256&A(8,2)=128& A(8,3)=20& A(8,4)=16&A(8,5)=4&A(8,6)=2 &A(8,7)=2&A(8,8)=2\\ A(9,1)=512&A(9,2)=256& A(9,3)=40& A(9,4)=20&A(9,5)=6&A(9,6)=4 &A(9,7)=2&A(9,8)=2\\ A(10,1)=1024&A(10,2)=512& A(10,3)=72& A(10,4)=40&A(10,5)=12&A(10,6)=6 &A(10,7)=2&A(10,8)=2\\ A(11,1)=2048&A(11,2)=1024& A(11,3)=144& A(11,4)=72&A(11,5)=24&A(11,6)=12 &A(11,7)=2&A(11,8)=2\\ A(12,1)=4096&A(12,2)=2048& A(12,3)=256& A(12,4)=144&A(12,5)=32&A(12,6)=24 &A(12,7)=4&A(12,8)=2\\ A(13,1)=8192&A(13,2)=4096& A(13,3)=512& A(13,4)=256&A(13,5)=64&A(13,6)=32 &A(13,7)=8&A(13,8)=4\\ \end{array} \] }} Generally, $A(n,1)=2^n, A(n,2)=2^{n-1}, A(n-1,2e-1)=A(n,2e)$, and $A(n,d)=2$ if $d>\frac{2n}{3}$. The values were taken or derived from Andries Brouwer's table at\\ http://www.win.tue.nl/$\sim$aeb/codes/binary-1.html \textbf{include to references? or other book with explicit values of $A(n,d)$ } For $c'_{n,3}$ we obtain the following lower bounds: with $k=2$ \[ \begin{array}{llll} c'_{4,3}&\geq &\binom{4}{0}A(4,3)+\binom{4}{1}A(3,2)+\binom{4}{2}A(2,1) =1\cdot 2+4 \cdot 4+6\cdot 4&=42.\\ c'_{5,3}&\geq &\binom{5}{0}A(5,3)+\binom{5}{1}A(4,2)+\binom{5}{2}A(3,1) =1\cdot 4+5 \cdot 8+10\cdot 8&=124.\\ c'_{6,3}&\geq &\binom{6}{0}A(6,3)+\binom{6}{1}A(5,2)+\binom{6}{2}A(4,1) =1\cdot 8+6 \cdot 16+15\cdot 16&=344. 
\end{array} \] With $k=3$ \[ \begin{array}{llll} c'_{7,3}&\geq& \binom{7}{0}A(7,4)+\binom{7}{1}A(6,3)+\binom{7}{2}A(5,2) + \binom{7}{3}A(4,1)&=960.\\ c'_{8,3}&\geq &\binom{8}{0}A(8,4)+\binom{8}{1}A(7,3)+\binom{8}{2}A(6,2) + \binom{8}{3}A(5,1)&=2832.\\ c'_{9,3}&\geq & \binom{9}{0}A(9,4)+\binom{9}{1}A(8,3)+\binom{9}{2}A(7,2) + \binom{9}{3}A(6,1)&=7880. \end{array}\] With $k=4$ \[ \begin{array}{llll} c'_{10,3}&\geq &\binom{10}{0}A(10,5)+\binom{10}{1}A(9,4)+\binom{10}{2}A(8,3) + \binom{10}{3}A(7,2)+\binom{10}{4}A(6,1)&=22232.\\ c'_{11,3}&\geq &\binom{11}{0}A(11,5)+\binom{11}{1}A(10,4)+\binom{11}{2}A(9,3) + \binom{11}{3}A(8,2)+\binom{11}{4}A(7,1)&=66024.\\ c'_{12,3}&\geq &\binom{12}{0}A(12,5)+\binom{12}{1}A(11,4)+\binom{12}{2}A(10,3) + \binom{12}{3}A(9,2)+\binom{12}{4}A(8,1)&=188688.\\ \end{array}\] With $k=5$ \[ c'_{13,3}\geq 539168.\] It should be pointed out that these bounds are even numbers, so that $c'_{4,3}=43$ shows that one cannot generally expect this lower bound to give the optimum. The maximum value appears to occur for $k=\lfloor\frac{n+2}{3}\rfloor$, so that, using Stirling's formula and explicit bounds on $A(n,d)$, the best currently known value of the constant $C$ in equation \eqref{cpn3} can be worked out, but we refrain from doing this here. Using the Singleton bound $A(n,d)\leq 2^{n-d+1}$, Chv\'{a}tal \cite{chvatal1} proved that the expression on the right-hand side of \eqref{cnchvatal} is also $O\left( \frac{3^n}{\sqrt{n}}\right)$, so that the refinement described above gains only a constant factor over the initial construction. For $n=4$ the above does not yet give the exact value. The value $c'_{4,3}=43$ was first proven by Chandra \cite{chandra}. 
A uniform way of describing examples for the optimum values of $c'_{4,3}=43$ and $c'_{5,3}=124$ is the following: Let us consider the sets $$ A := S_{i-1,n} \cup S_{i,n}^e \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^o$. By the previous discussion we see that this is a Moser set, and we have the lower bound \begin{equation}\label{cnn} c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|. \end{equation} This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 3331, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 11111, 11333, 33311, 33331 \}$, we obtain $c'_{5,3} \geq 124$. \item By taking $n=6$, $i=5$, and $A' = \{ 111111, 111113, 111331, 111333, 331111, 331113\}$, we obtain $c'_{6,3} \geq 342$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$, but the bound for $n=6$ is inferior to the lower bound $c'_{6,3}\geq 344$ given above. A modification of the construction in \eqref{cn3-low} leads to a slightly better lower bound. Observe that if $B \subset \Delta_n$, then the set $A_B := \bigcup_{\vec a \in B} \Gamma_{a,b,c}$ is a Moser set as long as $B$ does not contain any ``isosceles triangles'' $(a+r,b,c+s), (a+s,b,c+r), (a,b+r+s,c)$ for any $r,s \geq 0$ not both zero; in particular, $B$ cannot contain any ``vertical line segments'' $(a+r,b,c+r), (a,b+2r,c)$. An example of such a set is provided by selecting $0 \leq i \leq n-3$ and letting $B$ consist of the triples $(a, n-i, i-a)$ when $a \neq 0 \mod 3$, $(a,n-i-1,i+1-a)$ when $a \neq 1 \mod 3$, $(a,n-i-2,i+2-a)$ when $a=0 \mod 3$, and $(a,n-i-3,i+3-a)$ when $a=2 \mod 3$. 
Asymptotically, this set occupies about two thirds of the spheres $S_{i,n}$, $S_{i+1,n}$ and one third of the spheres $S_{i+2,n}, S_{i+3,n}$, and (setting $i$ close to $n/3$) gives a lower bound \eqref{cpn3} with $C = 2 \times \sqrt{\frac{9}{4\pi}}$, which is thus superior to the previous constructions. An integer program was run to obtain the optimal lower bounds achievable by the $A_B$ construction (using \eqref{cn3}, of course). The results for $1 \leq n \leq 20$ are displayed in Figure \ref{nlow-moser}: \begin{figure}[tb] \centerline{ \begin{tabular}{|ll|ll|} \hline n & lower bound & n & lower bound \\ \hline 1 & 2 &11& 71766\\ 2 & 6 & 12& 212423\\ 3 & 16 & 13& 614875\\ 4 & 43 & 14& 1794212\\ 5 & 122& 15& 5321796\\ 6 & 353& 16& 15455256\\ 7 & 1017& 17& 45345052\\ 8 & 2902&18& 134438520\\ 9 & 8622&19& 391796798\\ 10& 24786& 20& 1153402148\\ \hline \end{tabular}} \caption{Lower bounds for $c'_{n,3}$ obtained by the $A_B$ construction.} \label{nlow-moser} \end{figure} More complete data, including the list of optimisers, can be found at {\tt http://abel.math.umu.se/~klasm/Data/HJ/}. This indicates that greedily filling in spheres, semispheres or codes is no longer the optimal strategy in dimensions six and higher. The lower bound $c'_{6,3} \geq 353$ was first located by a genetic algorithm: see Appendix \ref{genetic-alg}. \begin{figure}[tb] \centerline{\includegraphics{moser353new.png}} \caption{One of the examples of $353$-point sets in $[3]^6$ (elements of the set being indicated by white squares).} \label{moser353-fig} \end{figure} Actually it is possible to improve upon these bounds by a slight amount. Observe that if $B$ is a maximiser for the right-hand side of \eqref{cn3} (subject to $B$ not containing isosceles triangles), then any triple $(a,b,c)$ not in $B$ must be the vertex of a (possibly degenerate) isosceles triangle with the other vertices in $B$. 
If this triangle is non-degenerate, or if $(a,b,c)$ is the upper vertex of a degenerate isosceles triangle, then no point from $\Gamma_{a,b,c}$ can be added to $A_B$ without creating a geometric line. However, if $(a,b,c) = (a'+r,b',c'+r)$ is only the lower vertex of a degenerate isosceles triangle $(a'+r,b',c'+r), (a',b'+2r,c')$, then one can add any subset of $\Gamma_{a,b,c}$ to $A_B$ and still have a Moser set as long as no pair of elements in that subset is separated by Hamming distance $2r$. For instance, in the $n=10$ case, the set $$B = \{(0 0 10),(0 2 8 ), (0 3 7 ),(0 4 6 ),(1 4 5 ),(2 1 7 ),(2 3 5 ),(3 2 5 ),(3 3 4 ),(3 4 3 ),(4 4 2 ),(5 1 4 ),(5 3 2 ),(6 2 2 ),(6 3 1 ),(6 4 0 ),(8 1 1 ),(9 0 1 ),(9 1 0 ) \}$$ generates the lower bound $c'_{10,3} \geq 24786$ given above (and, up to reflection $a \leftrightarrow c$, is the only such set that does so); but by adding twelve elements from $\Gamma_{5,0,5}$ one can increase the lower bound slightly to $24798$. However, we have been unable to locate a lower bound which is asymptotically better than \eqref{cpn3}. Indeed, any method based purely on the $A_B$ construction cannot do asymptotically better than the previous constructions: \begin{proposition} Let $B \subset \Delta_n$ be such that $A_B$ is a Moser set. Then $|A_B| \leq (2 \sqrt{\frac{9}{4\pi}} + o(1)) \frac{3^n}{\sqrt{n}}$. \end{proposition} \begin{proof} By the previous discussion, $B$ cannot contain any pair of the form $(a,b+2r,c), (a+r,b,c+r)$ with $r>0$. In other words, for any $-n \leq h \leq n$, $B$ can contain at most one triple $(a,b,c)$ with $c-a=h$. From this and \eqref{cn3}, we see that $$ |A_B| \leq \sum_{h=-n}^n \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!}.$$ From the Chernoff inequality (or the Stirling formula computation below) we see that $\frac{n!}{a! b! c!} \leq \frac{1}{n^{10}} 3^n$ unless $a,b,c = n/3 + O( n^{1/2} \log^{1/2} n )$, so we may restrict to this regime, which also forces $h = O( n^{1/2} \log^{1/2} n )$. 
If we write $a = n/3 + \alpha$, $b = n/3 + \beta$, $c = n/3+\gamma$ and apply Stirling's formula $n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n}$, we obtain $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) - (\frac{n}{3}+\beta) \log (1 + \frac{3\beta}{n} ) - (\frac{n}{3}+\gamma) \log (1 + \frac{3\gamma}{n} ) ).$$ From Taylor expansion one has $$ (\frac{n}{3}+\alpha) \log (1 + \frac{3\alpha}{n} ) = \alpha + \frac{3}{2} \frac{\alpha^2}{n} + o(1)$$ and similarly for $\beta,\gamma$; since $\alpha+\beta+\gamma=0$, we conclude that $$ \frac{n!}{a! b! c!} = (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{2n} (\alpha^2+\beta^2+\gamma^2) ).$$ If $c-a=h$, then $\alpha^2+\beta^2+\gamma^2 = \frac{3\beta^2}{2} + \frac{h^2}{2}$. Thus we see that $$ \max_{(a,b,c) \in \Delta_n: c-a=h} \frac{n!}{a! b! c!} \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \exp( - \frac{3}{4n} h^2 ).$$ Using the integral test, we thus have $$ |A_B| \leq (1+o(1)) \frac{3^{3/2}}{2\pi n} 3^n \int_\R \exp( - \frac{3}{4n} x^2 )\ dx.$$ Since $\int_\R \exp( - \frac{3}{4n} x^2 )\ dx = \sqrt{\frac{4\pi n}{3}}$, we obtain the claim. \end{proof}
Geometric Meaning of the Geometric Mean The geometric mean of two positive numbers $a\;$ and $b\;$ is the (positive) number $g\;$ whose square equals the product $ab\;$: $g^{2} = ab\;$. While it is possible to (at least partially) adapt the definition to handle negative numbers, I do not believe this is ever done. The geometric mean then answers this question: given a rectangle with sides $a\;$ and $b\;$, find the side of the square whose area equals that of the rectangle. In all likelihood, this particular problem gave that number its commonly used name: the geometric mean. It appears in a more algebraic setting as the mean proportional $p\;$ between two numbers $a\;$ and $b\;$: $a : p = p : b.$ Euclid VI.13 gives a geometric construction of the mean proportional: Draw a semicircle on a diameter of length $a + b\;$ and a perpendicular to the diameter where the two segments join. The length of the perpendicular from the circumference to the diameter is exactly the geometric mean of $a\;$ and $b\;$. (In passing, there are two more terms in Euclid VI that relate to proportions like the above. The fourth proportional of the given numbers $a\;$, $b\;$, $c\;$ is the number $x\;$ such that $a : b = c : x.\;$ The third proportional of two numbers $a\;$ and $b\;$ is the number $y\;$ such that $a : b = b : y.\;$ Also, this same construction is used by Euclid II.14 to construct a square of the same area as a given rectangle.) The above construction of the mean proportional is based on a Corollary from Euclid VI.8: If in a right-angled triangle a perpendicular is drawn from the right angle to the base, then the straight line so drawn is a mean proportional between the segments of the base. However, this is not the only appearance of the mean proportional in a right triangle. For example, in a right triangle with the altitude to the hypotenuse drawn we may observe three similar triangles: the given one, and the smaller ones cut off by the altitude. 
The corollary to VI.8 is derived from the similarity of two small triangles. If we pair the big triangle with any of the smaller ones, we'll find that a leg of a right triangle is the mean proportional between its projection on the hypotenuse and the hypotenuse itself. In fact, the geometric mean makes quite frequent appearances in a variety of geometric situations. I'll mention a few. The length of the common tangent of two circles of diameters $a\;$ and $b\;$ that are tangent externally is the geometric mean of the diameters. The geometric mean appeared as a tangent to a circle in John Wallis' very first geometric interpretation of complex numbers. In the framework of the See-Saw Lemma, if a semicircle is inscribed between two perpendiculars to its diameter and a transversal tangent to the semicircle cuts segments of lengths $a\;$ and $b\;$ from the two lines, then the radius of the semicircle is the geometric mean of $a\;$ and $b\;$. In one of the simplest sangaku a square is inscribed into a right triangle. The process of inscribing a square continues for two more steps with the cut off triangles. Inscribe the incircles into three similar and similarly obtained triangles. Let their radii be $a\;$, $p,\;$ $b\;$ in decreasing order; then $p\;$ is the mean proportional of $a\;$ and $b\;$. Let $AB\;$ be a chord in a circle and $P\;$ a point on the circle. Draw perpendiculars $PQ,\;$ $PR,\;$ and $PS\;$ from $P\;$ to $AB\;$ and the tangents to the circle at $A\;$ and $B.\;$ Then $PQ^{2} = PR\cdot PS.$ Let points $C\;$ and $D\;$ lie on a semicircle with diameter $AB.\;$ Let $E\;$ be the intersection of $AC\;$ and $BD\;$ and $F\;$ the intersection of $AD$ and $BC.\;$ Let $EF\;$ meet the semicircle in $G\;$ and $AB\;$ in $H.\;$ Then $GH^{2} = EH\cdot FH$. [W. H. Besant, Conic Sections Treated Geometrically, George Bell & Sons, London, 1895, p. 28]. 
If from a point $Q\;$ tangents $QP,\;$ $QP'\;$ be drawn to a parabola, the two triangles $SPQ\;$ and $SQP'\;$ $(S\;$ the focus of the parabola), are similar, and $SQ\;$ is a mean proportional between $SP\;$ and $SP'.$ Produce $PQ\;$ to meet the axis in $T,\;$ and draw $SY,\;$ $SY'\;$ perpendicularly on the tangents. Then $Y\;$ and $Y'\;$ are points on the tangent at $A.\;$ $\begin{align} \angle SPQ &= \angle STY\\ &= \angle SYA\\ &= \angle SQP', \end{align}$ since $S, Y', Y, Q\;$ are points on a circle, and $SYA,\;$ $SQP'\;$ are in the same segment. Also, since the tangents drawn from any point to a conic subtend equal angles at the focus, $\angle PSQ = \angle QSP';\;$ therefore the triangles $PSQ,\;$ $QSP'\;$ are similar, and $SP:SQ = SQ:SP'.$ If two isosceles triangles $OTB\;$ and $OAT\;$ are similar, as in the diagram below, we get an easy proportion $OT/BO = AO/OT,\;$ meaning $OT\;$ is the geometric mean of $AO\;$ and $BO.\;$ In case the common base angle equals $72^{\circ}\;$ we have a dissection of the golden triangle; however, the geometric mean stays on even for pedestrian angles. The configuration of two isosceles triangles has been used for a fast construction of the geometric mean of two line segments. One consequence of Bui Quang Tuan's Lemma of equal areas is an assertion about the areas of triangles in this configuration: Namely, $[BDE]^{2} = [ABD][BCE],\;$ where $[X]\;$ denotes the area of shape $X.$ The diagonals of a trapezoid cut it into four triangles: two of them have equal areas, say $X,\;$ and if the areas of the other two are $M\;$ and $N\;$ then $X=\sqrt{M\cdot N}.$ Have you seen the geometric mean elsewhere? Let me know. Thank you. Related material
Bézier splines Bézier curves become harder to work with as the number of control points gets bigger. The main reason is their global nature – moving a single control point influences the whole curve. Do you know why? (Hint: take a look at the De Casteljau schema in the previous TP.) Also, more control points means a higher-degree polynomial, which quickly becomes impractical. Today, we'll be dealing with one possibility of overcoming this problem: Bézier splines. Informally, a spline is a collection of curves connected with some degree of smoothness. There is more than one way to define what it means for two curves to be smoothly connected. The most commonly used is the $\mathcal C^k$ smoothness. $\mathcal C^k$ smoothness A collection of $\mathcal C^k$-smooth splines, row-wise interpolating the same data, left to right $k=0,1,2$. Mathematically speaking, two parametric curves $\mathbf x_0(t), \mathbf x_1(t), t \in [0,1]$ are $\mathcal C^k$ smooth if they are also $\mathcal C^{k-1}$, and the following condition holds: $\frac{d^k}{dt^k}\mathbf x_0(1) = \frac{d^k}{dt^k}\mathbf x_1(0)$, meaning the two curves agree up to their $k$-th derivatives. Let's look at a particular case when $\mathbf x_0$ and $\mathbf x_1$ are Bézier curves. $\mathcal C^1$ quadratic spline Recall that a quadratic Bézier curve has the form $\mathbf x(t) = \sum_{i=0}^{2} \mathbf b_i B_i^2(t)$, where $\mathbf b_i$ are the control points and $B_i^2(t)$ are the quadratic Bernstein polynomials. Its first derivative $\dot{\mathbf x} = \frac{d}{dt} \mathbf x$ can be nicely written as $\dot{\mathbf x}(t) = 2 \sum_{i=0}^{1} (\mathbf b_{i+1} - \mathbf b_i) B_i^1(t)$. In fact, the derivative is also a Bézier curve, of degree $n-1$. Now, imagine the two Bézier curves $\mathbf x_0, \mathbf x_1$ are to be joined $\mathcal C^1$-smoothly. 
As $\mathcal C^1$ also includes $\mathcal C^0$, we have two conditions: $\mathbf x_0(1) = \mathbf x_1(0)$ and $\dot{\mathbf x}_0(1) = \dot{\mathbf x}_1(0)$. If $\mathbf b_i^0$ are the control points of $\mathbf x_0$, and $\mathbf b_i^1$ are the control points of $\mathbf x_1$, this comes down to $\mathbf b_2^0 = \mathbf b_0^1$ and $\mathbf b_2^0 - \mathbf b_1^0 = \mathbf b_1^1 - \mathbf b_0^1$, i.e. the shared endpoint is the midpoint of $\mathbf b_1^0$ and $\mathbf b_1^1$. ToDo$^1$ Your first task today will be to implement a $\mathcal C^1$ quadratic Bézier spline, interpolating a given sequence of points $\mathbf p_i, i=0,\dots,n$. To construct the spline, you'll need $n$ quadratic Bézier curves $\mathbf x_i, i=0,\dots,n-1$, each with three control points: $\mathbf b_0^i = \mathbf p_{i}$, $\mathbf b_1^i$, $\mathbf b_2^i = \mathbf p_{i+1}$. (Don't get confused: here, the upper indices have nothing to do with de Casteljau!) This means that for each curve, you only need to compute a single control point $\mathbf b_1^i$. Applying the formula from above for the joint between $\mathbf x_i$ and $\mathbf x_{i+1}$ we get $\mathbf p_{i+1} = \frac{1}{2}(\mathbf b_1^i + \mathbf b_1^{i+1})$, and from there, directly $\mathbf b_1^{i+1} = 2\,\mathbf p_{i+1} - \mathbf b_1^i$. This gives us an iterative way to compute all of the $\mathbf b_1^i$. You will need to manually fix $\mathbf b_1^0$. Try the midpoint $0.5(\mathbf p_0 + \mathbf p_1)$; later, you can change its position to see how it affects the computed spline. Implement the computation of control points of a quadratic interpolating Bézier spline for a given sequence of points $\mathbf p_i$ (function ComputeSplineC1). Evaluate and visualise for the available datasets. Try changing the position of $\mathbf b_1^0$. What happens? $\mathcal C^2$ cubic spline Splines, especially cubic splines, are very common in the world of digital geometry. The algorithm you just implemented (hopefully) works well, but it has one major drawback: it requires setting the first control point $\mathbf b_1^0$ by hand. That's no fun! That is why, to compute an interpolating cubic spline in this part, we will adopt a slightly different approach – solving a linear system. We'll do the math, crunch in the data, and let the solver do the work. 
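Before we do, the iterative $\mathcal C^1$ construction from ToDo$^1$ can be sketched as follows. This is one possible Python implementation — the function name ComputeSplineC1 comes from the TP, but the data layout and the sample points are our assumptions:

```python
import numpy as np

def ComputeSplineC1(points, b1_start=None):
    # C^1 quadratic Bezier spline interpolating `points`.
    # Returns one (b0, b1, b2) triple of control points per curve,
    # with b0 = p_i and b2 = p_{i+1}.
    p = np.asarray(points, dtype=float)
    n = len(p) - 1                       # number of quadratic pieces
    b1 = np.empty_like(p[:n])
    # The free inner point; the TP suggests the midpoint of p_0 and p_1.
    b1[0] = 0.5 * (p[0] + p[1]) if b1_start is None else b1_start
    for i in range(n - 1):
        # C^1 joint: p_{i+1} must be the midpoint of b1_i and b1_{i+1}.
        b1[i + 1] = 2.0 * p[i + 1] - b1[i]
    return [(p[i], b1[i], p[i + 1]) for i in range(n)]

def eval_quadratic(b0, b1, b2, t):
    # Quadratic Bezier curve via the Bernstein polynomials B_i^2(t).
    return (1 - t) ** 2 * b0 + 2 * (1 - t) * t * b1 + t ** 2 * b2

pts = [(0, 0), (1, 2), (3, 1), (4, 3)]  # hypothetical sample data
spline = ComputeSplineC1(pts)
```

By construction, the end tangent of each piece equals the start tangent of the next, which is exactly the $\mathcal C^1$ condition derived above; changing b1_start propagates through every subsequent inner control point, which is why its position visibly affects the whole spline.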
To do that, take a situation much like the one before: given a sequence of points $\mathbf p_i, i=0,\dots,n$, find a $\mathcal C^2$ cubic spline (i.e. $n$ cubic curves) which interpolates these datapoints. This time, there will be two unknown interior control points for each curve, not one as in the quadratic case. Again, let's take only two cubic curves to understand what's going on; $\mathbf x_0$ with control points $\mathbf b_i^0$ and $\mathbf x_1$ with control points $\mathbf b_i^1$. $\mathcal C^2$ means also $\mathcal C^1$: $$\mathbf b_3^0 = \mathbf b_0^1, \qquad \mathbf b_3^0 - \mathbf b_2^0 = \mathbf b_1^1 - \mathbf b_0^1,$$ but we already know how to write $\mathcal C^1$ in terms of control points! Let's add the $\mathcal C^2$ condition: $$\mathbf b_1^0 - 2\,\mathbf b_2^0 + \mathbf b_3^0 = \mathbf b_0^1 - 2\,\mathbf b_1^1 + \mathbf b_2^1.$$ Well, it's starting to look like a system, but all this indexing is confusing. So let's take a step back. Imagine we want to interpolate four points, i.e. we have three curves to compute. That's 10 unique control points in total. We'll denote those as $A,B,C,D,E,F,G,H,I,J$. (Phew.) The points to interpolate are $A,D,G,J$. Let's rewrite our conditions in terms of this notation. $\mathcal C^0$: $$A = \mathbf p_0, \quad D = \mathbf p_1, \quad G = \mathbf p_2, \quad J = \mathbf p_3.$$ Seems pretty obvious, right? We have two joints, that means two equations for $\mathcal C^1$: $$C - 2D + E = \mathbf 0, \qquad F - 2G + H = \mathbf 0.$$ And two equations for $\mathcal C^2$: $$B - 2C + 2E - F = \mathbf 0, \qquad E - 2F + 2H - I = \mathbf 0.$$ If you've counted well, this makes 8 equations for 10 points; we need two more equations to be able to solve the system. (In general, this is $(n+1)+(n-1)+(n-1)=3n-1$ equations for $3n+1$ points, and we still need two more equations.) One example is the so-called natural spline with vanishing second derivatives at the endpoints, i.e.: $$A - 2B + C = \mathbf 0, \qquad H - 2I + J = \mathbf 0,$$ where $\mathbf 0 = (0,0)$ is the zero vector. And that's it! Everything's much clearer using the matrix notation (blank spaces indicate zeros): $$\begin{pmatrix} 1 & & & & & & & & & \\ & & & 1 & & & & & & \\ & & & & & & 1 & & & \\ & & & & & & & & & 1 \\ & & 1 & -2 & 1 & & & & & \\ & & & & & 1 & -2 & 1 & & \\ & 1 & -2 & & 2 & -1 & & & & \\ & & & & 1 & -2 & & 2 & -1 & \\ 1 & -2 & 1 & & & & & & & \\ & & & & & & & 1 & -2 & 1 \end{pmatrix} \begin{pmatrix} A \\ B \\ C \\ D \\ E \\ F \\ G \\ H \\ I \\ J \end{pmatrix} = \begin{pmatrix} \mathbf p_0 \\ \mathbf p_1 \\ \mathbf p_2 \\ \mathbf p_3 \\ \mathbf 0 \\ \mathbf 0 \\ \mathbf 0 \\ \mathbf 0 \\ \mathbf 0 \\ \mathbf 0 \end{pmatrix}$$

ToDo$^2$ (bonus)

In the second, bonus part of today's TP, your task is to implement the $\mathcal C^2$ cubic spline as the solution of the above system. Implement the computation of the control points of a cubic interpolating Bézier spline for a given sequence of points $\mathbf p_i$ (function ComputeSplineC2).
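A possible sketch of the bonus task, hard-coded to the four-point system above and solved per coordinate with plain Gaussian elimination (the function names and the 1D data layout are my choices, not the TP's scaffolding):

```python
def solve(M, rhs):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        for k in range(n):
            if k != col and A[k][col] != 0.0:
                f = A[k][col] / A[col][col]
                A[k] = [a - f * b for a, b in zip(A[k], A[col])]
    return [A[i][-1] / A[i][i] for i in range(n)]

def compute_spline_c2(p):
    """Natural C^2 cubic spline through 4 points (one coordinate);
    returns the 10 control points A..J."""
    def row(**cols):  # keyword c0..c9 selects the column for A..J
        r = [0.0] * 10
        for name, v in cols.items():
            r[int(name[1:])] = v
        return r
    M = [row(c0=1), row(c3=1), row(c6=1), row(c9=1),      # interpolation
         row(c2=1, c3=-2, c4=1),                           # C^1 at D
         row(c5=1, c6=-2, c7=1),                           # C^1 at G
         row(c1=1, c2=-2, c4=2, c5=-1),                    # C^2 at D
         row(c4=1, c5=-2, c7=2, c8=-1),                    # C^2 at G
         row(c0=1, c1=-2, c2=1), row(c7=1, c8=-2, c9=1)]   # natural ends
    rhs = [p[0], p[1], p[2], p[3]] + [0.0] * 6
    return solve(M, rhs)
```

For collinear data such as $p = 0, 3, 6, 9$ the solver returns evenly spaced control points $0, 1, \dots, 9$, as expected for a spline that degenerates to a straight line.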
Evaluate and visualise for the available datasets. Compare the results with the $\mathcal C^1$ splines. What changed?

Resources & Trivia

Even if you don't realize it, you're using Bézier splines every day; in fact, you're using them right now! Among other things, they are used in typography to represent fonts: TrueType uses quadratic Bézier splines, while PostScript uses cubic Bézier splines. This very page is in fact a collection of some 6000 Bézier splines.
I have a set of $n$ matrices $M_i$, $1 \le i \le n$, which can each be written as the sum of three rank-1 matrices: $$ M_i = x_i x_i^T + y_i y_i^T + z_i z_i^T $$ for $x_i,y_i,z_i \in \mathbb R^p$. The vectors $x_i,y_i,z_i$ are orthogonal: $$ x_i^T y_i = x_i^T z_i = y_i^T z_i = 0 $$ I am looking for the solutions $A$, which form a subgroup $G \le GL(p,\mathbb R)$, satisfying: $$ A^T M_i A = M_i $$ Now $A = \pm I$ are trivial solutions to this equation, so I am looking for nontrivial solutions. Progress so far: We can make some progress for the special case $n = 1$. In this case, since $x_1,y_1,z_1$ are orthogonal, we can choose a basis in which $x_1 = e_1$, $y_1 = e_2$, $z_1 = e_3$. Then for some $w \in \mathbb R^p$,\begin{align}w^T M_1 w &= w^T e_1 e_1^T w + w^T e_2 e_2^T w + w^T e_3 e_3^T w \\&= (e_1^T w)^2 + (e_2^T w)^2 + (e_3^T w)^2 \\&= w_1^2 + w_2^2 + w_3^2\end{align}Similarly for the left-hand side, note that $e_j^T A = A_j^T$, where $A_j^T$ is the $j$-th row of $A$. Then we get\begin{align}w^T A^T M_1 A w &= w^T \left( A_1 A_1^T + A_2 A_2^T + A_3 A_3^T \right) w \\&= (A_1^T w)^2 + (A_2^T w)^2 + (A_3^T w)^2\end{align}Since this expression is equal to $w_1^2 + w_2^2 + w_3^2$, the upper 3-by-3 block of $A$ must be orthogonal in order to preserve the norm of the first three elements of $w$. Therefore, $A$ can be written in block form as$$A = \pmatrix{O(3) & 0 \\\textrm{Mat}(p-3,3,\mathbb R) & GL(p - 3, \mathbb R)}$$ Does anyone have any ideas how to get a solution for general $n$?
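A quick numerical sanity check of the $n=1$ block form for $p=4$ (pure-Python helpers; the particular rotation angle and the bottom-row entries are arbitrary choices of mine):

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

# For n = 1, p = 4 in the basis above: M_1 = diag(1, 1, 1, 0).
M1 = [[1.0 if i == j and i < 3 else 0.0 for j in range(4)] for i in range(4)]

# A candidate A in the block form: orthogonal 3x3 block (a rotation in the
# e1-e2 plane), zero top-right block, arbitrary bottom-left, invertible
# bottom-right entry.
t = 0.7
A = [[math.cos(t), -math.sin(t), 0.0, 0.0],
     [math.sin(t),  math.cos(t), 0.0, 0.0],
     [0.0,          0.0,         1.0, 0.0],
     [5.0,         -2.0,         3.0, 2.0]]

lhs = matmul(transpose(A), matmul(M1, A))  # should reproduce M1
```

Since $M_1$ annihilates the last row of $A$, the product reduces to the Gram matrix of the first three rows, which is the identity on the first block exactly when the top 3-by-3 block is orthogonal.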
Revista Matemática Iberoamericana, Volume 16, Issue 3, 2000, pp. 477–513. DOI: 10.4171/RMI/281. Published online: 2000-12-31. An elliptic semilinear equation with source term involving boundary measures: the subcritical case. Marie-Françoise Bidaut-Véron (1) and Laurent Vivier (2). (1) Université de Tours, France (2) Université de Toulon et du Var, La Garde, France. We study the boundary behaviour of the nonnegative solutions of the semilinear elliptic equation in a bounded regular domain $\Omega$ of $\mathbb R^N$ ($N \ge 2$), $$\Delta u + u^q = 0 \quad \text{in } \Omega,$$ $$u = \mu \quad \text{on } \partial\Omega,$$ where $1 < q < (N+1)/(N-1)$ and $\mu$ is a Radon measure on $\partial\Omega$. We give a priori estimates and existence results. They rely on the study of superharmonic functions in some weighted Marcinkiewicz spaces. Bidaut-Véron Marie-Françoise, Vivier Laurent: An elliptic semilinear equation with source term involving boundary measures: the subcritical case. Rev. Mat. Iberoam. 16 (2000), 477–513. doi: 10.4171/RMI/281
A natural number $$N$$ is said to be a prime number if it is divisible only by $$1$$ and itself. Primality testing checks whether a number is prime or not. This topic explains different algorithms available for primality testing.

Basic Method: This approach converts the definition of prime numbers directly into code. It checks whether any number less than a given number $$N$$ divides it. Observing the factors of any number, the check can be limited to numbers up to $$\sqrt N$$: the product of any two numbers greater than $$\sqrt N$$ can never equal $$N$$, so any composite $$N$$ has a divisor no larger than $$\sqrt N$$. A C++ function for the basic method is shown below.

int PrimeTest(int N) {
    if (N < 2) return 0;              // 0 and 1 are not prime
    for (int i = 2; i*i <= N; ++i) {  // trial division up to sqrt(N)
        if (N % i == 0) {
            return 0;
        }
    }
    return 1;
}

The function returns $$1$$ if $$N$$ is a prime number and $$0$$ for a composite number. It runs with a complexity of $$O(\sqrt N)$$. That implies this method can be used for numbers up to roughly $$10^{15}$$ to $$10^{16}$$ to determine primality in a reasonable amount of time.

One major application of prime numbers is cryptography. A standard cryptosystem, the RSA algorithm, uses prime numbers that are usually over $$1024$$ bits to ensure greater security. When dealing with such large numbers, the method above is clearly not good enough. Note also that it is not easy to work with such large numbers, especially when the operations performed during primality testing are / and %. Thus most primality testing algorithms in use can only determine whether a given number is a "probable prime" or composite. A couple of widely used algorithms are explained below.

Sieve of Eratosthenes: This is a simple algorithm for finding all the prime numbers up to a given number $$N$$. The algorithm takes all the numbers from $$2$$ to $$N$$, all initially unmarked, and starts from $$2$$.
If a number is unmarked, mark all its multiples $$\leq N$$ as composite. The performance can be improved by doing this only up to $$\sqrt N$$; all the numbers in the range $$[2,N]$$ that remain unmarked are primes. The reason we can stop after iterating only up to $$\sqrt N$$ is that every composite $$\leq N$$ has at least one prime factor $$\leq \sqrt N$$. Pseudocode for this algorithm is given below.

A[N] = {0}
for i from 2 to sqrt(N):
    if A[i] = 0:
        for j from 2 to N:
            if i*j > N: break
            A[i*j] = 1

In the final array, starting from 2, an index whose value is 0 is a prime; otherwise it is composite.

Fermat Primality Testing: This test is based on Fermat's Little Theorem. The theorem states that given a prime number $$p$$ and any number $$a$$ with $$0 < a < p$$, we have $$a^{p-1} \equiv 1 \pmod p$$. In Fermat primality testing, $$k$$ random integers $$X$$ are selected (each with $$0 < X < N$$). If the statement of Fermat's Little Theorem holds for all these $$k$$ values of $$X$$ for a given number $$N$$, then $$N$$ is said to be a probable prime. Pseudocode for Fermat primality testing is given below.

function FermatPrimalityTesting(int N):
    pick a number of rounds k   // not too small, not too large
    repeat k times:
        pick a random integer X in range (1, N-1)
        if X^(N-1) % N != 1:
            return composite
    return probably prime

Miller-Rabin Primality Testing: Similar to the Fermat primality test, the Miller-Rabin primality test can only determine that a number is a probable prime. It is based on the principle that if $$X^2 \equiv Y^2 \pmod N$$ but $$X \not\equiv \pm Y \pmod N$$, then $$N$$ is composite. The algorithm in simple steps, given a number $$N > 2$$ whose primality is to be tested:

Step 1: Write $$N-1 = 2^R \cdot D$$ with $$D$$ odd.
Step 2: Choose $$A$$ in range $$[2, N-2]$$.
Step 3: Compute $$X_0 = A^D \bmod N$$. If $$X_0$$ is $$\pm 1$$, $$N$$ may be prime (stop this round).
Step 4: Compute $$X_i = X_{i-1}^2 \bmod N$$.
If $$X_i = 1$$, $$N$$ is composite. If $$X_i = -1$$, $$N$$ may be prime (stop this round).
Step 5: Repeat Step 4 up to $$R-1$$ times.
Step 6: If $$-1$$ never appeared among the $$X_i$$, $$N$$ is composite.

Pseudocode for Miller-Rabin primality testing is given below.

function MillerRabin_PrimalityTesting(int N):
    if N % 2 = 0: return composite
    write N-1 as 2^R * D where D is odd
    pick a number of rounds k   // not too small, not too large
    LOOP: repeat k times:
        pick a random integer A in range [2, N-2]
        X = A^D % N
        if X = 1 or X = N-1:
            continue LOOP
        for i from 1 to R-1:
            X = X^2 % N
            if X = 1: return composite
            if X = N-1: continue LOOP
        return composite
    return probably prime

There are other methods too, like the AKS primality test (which is deterministic) and the Lucas primality test. A method called elliptic curve primality proving establishes with certainty that a given number is prime, unlike the probabilistic methods mentioned above.
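The Miller-Rabin pseudocode translates almost line for line into Python, using the built-in three-argument pow for modular exponentiation. One detail worth noting: with the fixed bases 2, 3, 5, 7 the test is known to be fully deterministic for $$N < 3{,}215{,}031{,}751$$; replacing them with random bases gives the probabilistic version described above:

```python
def miller_rabin(n, bases=(2, 3, 5, 7)):
    """Miller-Rabin primality test; deterministic for n < 3,215,031,751
    with the default bases."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        if a % n == 0:
            continue  # base coincides with n; skip
        x = pow(a, d, n)          # x_0 = a^d mod n
        if x == 1 or x == n - 1:
            continue              # this round passes
        for _ in range(r - 1):
            x = x * x % n         # repeated squaring
            if x == n - 1:
                break             # this round passes
        else:
            return False          # composite for sure
    return True                   # probably prime
```

For example, the Carmichael number 561 fools the plain Fermat test for many bases but is correctly reported composite here.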
Theorem: Suppose that $Y_1, \dots, Y_n \sim \mathcal{N}(\mu, \sigma^2)$ and these variables are mutually independent. Then $\overline{Y} = \frac{1}{n} \sum_{i=1}^n Y_i$ and $S^2 = \frac{1}{n-1}\sum_{i=1}^n (Y_i - \overline{Y})^2$ are independent variables. I have a problem with the proof of this theorem and I will only post the relevant sections. Proof: Define for $j= 1, \dots, n$: $X_j = \sigma Z_j + \mu$ where $Z_1, \dots , Z_n$ are independent variables that are standard normally distributed, i.e., $Z_i \sim \mathcal{N}(0,1)$. My book then argues that $\overline{X}$ and $S_X^2$ (defined similarly to the quantities in the theorem) are independent variables. How is this sufficient to conclude that $\overline{Y}$ and $S_{Y}^2$ are independent? Note that $X_j \sim Y_j$, and this should probably be used, but I can't see how. Thanks in advance.
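A quick numerical illustration of the step in question (the simulation is mine, not the book's): with $Y_j = \sigma Z_j + \mu$, the pair $(\overline{Y}, S_Y^2)$ is a deterministic function of $(\overline{Z}, S_Z^2)$, namely $\overline{Y} = \sigma\overline{Z} + \mu$ and $S_Y^2 = \sigma^2 S_Z^2$, so independence of the latter pair carries over to the former:

```python
import math
import random

random.seed(1)
mu, sigma = 3.0, 2.0
z = [random.gauss(0, 1) for _ in range(10)]   # standard normal sample
y = [sigma * zi + mu for zi in z]             # Y_j = sigma * Z_j + mu

def mean(v):
    return sum(v) / len(v)

def s2(v):
    m = mean(v)
    return sum((vi - m) ** 2 for vi in v) / (len(v) - 1)

# Ybar = sigma * Zbar + mu and S_Y^2 = sigma^2 * S_Z^2 hold exactly,
# so (Ybar, S_Y^2) is a function of the independent pair (Zbar, S_Z^2).
```

Functions of independent random variables are independent, which is the missing link.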
This question already has an answer here: How to calculate the closed form of the integral $$\int\limits_0^1 {\frac{{\int\limits_0^x {{{\left( {\arctan t} \right)}^2}dt} }}{{x\left( {1 + {x^2}} \right)}}} dx$$ Use integration by parts, taking $\displaystyle \int_{0}^{x} (\arctan t)^2 dt$ as the first function and $\dfrac{1}{x(x^2+1)}$ as the second function, noting that $\displaystyle \int \dfrac{dx}{x(x^2+1)} = \dfrac{1}{2} \ln\bigg(\dfrac{x^2}{x^2+1}\bigg).$ Hence, $I = \bigg[\dfrac{1}{2} \ln \bigg(\dfrac{x^2}{x^2+1}\bigg) \displaystyle \int_{0}^{x} (\arctan t)^2 dt \bigg]_{0}^{1} - \displaystyle \int_{0}^{1} \dfrac{1}{2} \ln \bigg(\dfrac{x^2}{x^2+1}\bigg) (\arctan x)^2 dx = \displaystyle \dfrac{1}{2}\int_{0}^{1} (\arctan x)^2 \ln\bigg(\dfrac{1+x^2}{2x^2}\bigg) dx.$ I am not able to simplify it further.
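The antiderivative used in the integration by parts is quick to sanity-check by numerical differentiation (a throwaway script of mine, not part of the derivation):

```python
import math

def F(x):
    # Claimed antiderivative: (1/2) ln(x^2 / (x^2 + 1))
    return 0.5 * math.log(x * x / (x * x + 1))

def f(x):
    # Integrand: 1 / (x (x^2 + 1))
    return 1.0 / (x * (x * x + 1))
```

A central difference of F should reproduce f at any interior point, confirming the partial-fraction step $\frac{1}{x(x^2+1)} = \frac{1}{x} - \frac{x}{x^2+1}$ behind it.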
To find the molarity $c$, you need to know the amount $n$ required for a titration: $$c(\ce{Na2S2O3}) = \frac{n(\ce{Na2S2O3})}{V(\ce{Na2S2O3})}$$ The unknown amount $n(\ce{Na2S2O3})$ can be found from the second balanced reaction: $$\ce{I2 + 2 Na2S2O3 -> 2 NaI + Na2S4O6}$$ $$n(\ce{Na2S2O3}) = 2\cdot n(\ce{I2})$$ Finding $n(\ce{I2})$ is trivial if you had written the first redox equation correctly, and that's where the problem in your solution arises. The incomplete first equation results in wrong stoichiometry and a false ratio between $\ce{KIO3}$ and $\ce{I2}$. The reaction takes place in acidic media (I added [diprotic] sulfuric acid, but the basicity of the acid doesn't matter), and the ratio between $\ce{KIO3}$ and $\ce{I2}$ won't be $1:1$:$^1$ $$\ce{KIO3 + 5 KI + 3 H2SO4 -> 3 I2 + 3 K2SO4 + 3 H2O}$$ $$n(\ce{I2}) = 3\cdot n(\ce{KIO3}) = \frac{3\cdot m(\ce{KIO3})}{M(\ce{KIO3})}$$ Now, with all the steps sorted out, the final expression for the molarity of hyposulfite can be rewritten algebraically and solved by plugging in the values: $$\begin{align}c(\ce{Na2S2O3}) &= \frac{2\cdot 3\cdot m(\ce{KIO3})}{M(\ce{KIO3})\cdot V({\ce{Na2S2O3})}}\\ &= \frac{2\cdot 3\cdot \pu{0.1 g}}{\pu{214.0 g mol-1}\cdot\pu{44e-3 L}} \\ &\approx \pu{0.064 mol L-1}\end{align}$$ $^1$ All you need to do to prove that is to write half-reactions for the redox process between iodate and iodide: $$\begin{align}\ce{2\overset{+5}{I}O3- + 12 H+ + 10 e- &-> \overset{0}{I}_2 + 6 H2O} & \tag{red}\\\ce{2\overset{-1}{I}^- &-> \overset{0}{I}_2 + 2 e-} &|\cdot 5 \tag{ox}\\\hline\ce{2 IO3- + 10I- + 12 H+ &-> 6 I2 + 6 H2O} \tag{redox}\end{align}$$ or, after dividing the total redox reaction by 2, finally $$\ce{IO3- + 5I- + 6 H+ -> 3 I2 + 3 H2O}$$
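The final plug-in can be sanity-checked with a few lines (variable names are mine; the mass, molar mass and titration volume are the ones used above):

```python
m_KIO3 = 0.1      # g of KIO3 weighed in
M_KIO3 = 214.0    # g/mol, molar mass of KIO3
V_thio = 44e-3    # L of thiosulfate solution used in the titration

n_KIO3 = m_KIO3 / M_KIO3   # mol of iodate
n_I2 = 3 * n_KIO3          # 3 mol I2 per mol KIO3 (balanced redox equation)
n_thio = 2 * n_I2          # 2 mol Na2S2O3 per mol I2
c_thio = n_thio / V_thio   # mol/L
```

Running this reproduces the quoted concentration of about 0.064 mol/L.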
If $T_f$ is a distribution, i.e. a linear functional, continuous according to the convergence defined here, defined on the space $K$ of the functions of class $C^\infty$ that are null outside a bounded interval (which is not the same for all functions), its derivative is defined as $$\frac{dT_f}{dx}(\varphi):=-T_f(\varphi')$$where $\varphi'$ is the derivative of $\varphi$. The symbolic writing $T_f(\varphi)=\int_{-\infty}^\infty f(x)\varphi(x)dx$ is often used to write such a functional, since, if $g$ is (Riemann or Lebesgue) integrable on every bounded interval, then $\int_{-\infty}^\infty g(x)\varphi(x)dx$ indeed is such a continuous functional. In this context, we can symbolically define the "derivative" $f'$, for any $T_f$, even if the symbolic writing $f$ does not refer to an integrable function, according to the expression$$\int_{-\infty}^\infty f'(x)\varphi(x)dx:=-\int_{-\infty}^\infty f(x)\varphi'(x)dx=:\frac{dT_f}{dx}(\varphi).$$ Let us come to my question. While studying physics, in particular the theory of electromagnetism and the derivation of the Biot-Savart law from Ampère's law, I always find the equality$$\nabla^2\left(\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}_0\|}\right)=-4\pi\delta(\boldsymbol{x}-\boldsymbol{x}_0)$$where $\nabla^2$ is the Laplacian$^1$. 
I suppose that, in the three-dimensional case, with $\phi:\mathbb{R}^3\to\mathbb{R}$, $\int_{\mathbb{R}^3}\frac{\partial f(\boldsymbol{x})}{\partial x_i}\phi(\boldsymbol{x}) dx_1dx_2dx_3$ is analogously defined as $\frac{\partial T_f}{\partial x_i}(\varphi)$, which I suppose to be analogously defined, in turn, as $-\int_{\mathbb{R}^3}f(\boldsymbol{x})\frac{\partial \phi(\boldsymbol{x})}{\partial x_i} dx_1dx_2dx_3$, although I say I suppose because I have not found a rigorous definition of such derivatives online or in printed texts; as to mathematical resources, I have studied Kolmogorov and Fomin's Элементы теории функций и функционального анализа (Elements of the Theory of Functions and Functional Analysis), which only focuses on the one-dimensional case $\varphi:\mathbb{R}\to\mathbb{R}$. Once a proper definition of such derivatives is fixed, how can it be proved that $\nabla^2(\|\boldsymbol{x}-\boldsymbol{x}_0\|^{-1})=-4\pi\delta(\boldsymbol{x}-\boldsymbol{x}_0)$? $^1$ The link contains a derivation of $\nabla^2(\|\boldsymbol{x}-\boldsymbol{x}_0\|^{-1})=-4\pi\delta(\boldsymbol{x}-\boldsymbol{x}_0)$ (equations (18)-(24)), but I do not understand it: I would understand it if we could apply Gauss's divergence theorem at (20), but I know it for functions of class $C^1(\mathring{A})$, $\overline{V}\subset\mathring{A}$, only, while $\nabla\left(\frac{1}{\|\boldsymbol{x}-\boldsymbol{x}_0\|}\right)$ is not even defined for $\boldsymbol{x}=\boldsymbol{x}_0$; the other derivation of the identity $\nabla^2(\|\boldsymbol{x}-\boldsymbol{x}_0\|^{-1})$ $=-4\pi\delta(\boldsymbol{x}-\boldsymbol{x}_0)$ that I have found uses a "weak limit", but it does not use the formal definition of the derivative of a distribution that I have written above. These are the only two references addressing what I am asking that I have managed to find.
I'm trying to solve $ \begin{cases} -u''=f \\ u(0)=0 \\ u(1)= \alpha \end{cases} $ with FEM using reference elements and local coordinates. So we have the global matrix $K_{ij}=\int_\Omega N_i'(x) N_j'(x)\,dx$. Computing each local matrix for a two-node element, I have $K^e(\xi)=\begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix}$ To compute its global equivalent, I use the substitution rule $ \int_{\phi(a)=-1}^{\phi(b)=1} f(\xi)d\xi = \int_a^b f(\phi(x)) \phi'(x) dx $ with $\xi=\phi(x)=\frac{2}{h}(x-x_c)$ and $\phi'(x)=\frac{2}{h}$. So basically I have $K^e=K^g\frac{2}{h}$, so $K^g=K^e\frac{h}{2}$. This result is wrong, and I don't know where I messed up. I'm supposed to have $K^g=K^e\frac{2}{h}$ Thanks :)
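For what it's worth, a direct numerical check in physical coordinates (linear shape functions $N_1 = 1 - x/h$, $N_2 = x/h$ on the element $[0,h]$; this sketch is mine, not from any particular FEM code) suggests the expected scaling $K^g = \frac{2}{h}K^e$ is right, because the shape-function derivatives also pick up a factor $\frac{2}{h}$ under the mapping, not just the integration measure:

```python
# Element [0, h] with linear shape functions N1 = 1 - x/h, N2 = x/h,
# so N1' = -1/h and N2' = 1/h (constants); the stiffness entries are
# K^g_ij = integral over [0, h] of Ni'(x) Nj'(x) dx = Ni' * Nj' * h.
h = 0.25
dN = [-1.0 / h, 1.0 / h]
Kg = [[dN[i] * dN[j] * h for j in range(2)] for i in range(2)]

Ke = [[0.5, -0.5], [-0.5, 0.5]]  # reference-element matrix from the question
```

Comparing Kg entry by entry against (2/h) * Ke shows they agree exactly.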
Yes, if the invertible elements in $B(X)/K(X)$ are dense then the Fredholm operators are dense. Let $x \in B(X)$ and consider $\pi(x) \in B(X)/K(X)$. By density of the set of invertible elements $I$ in $B(X)/K(X)$, there exists a (Cauchy) sequence $y_n \in I$ such that $y_n \rightarrow \pi(x)$ as $n \rightarrow \infty$. Using an argument similar to the one that shows that the quotient of a Banach space by a closed subspace is a Banach space, we can obtain a Cauchy sequence $x_n \in B(X)$ such that $\pi(x_n) = y_n$. Note that the $x_n \in B(X)$ are Fredholm since $\pi(x_n) \in I$. Completeness of $B(X)$ then implies that $x_n$ converges to some $x_\infty \in B(X)$, which must satisfy $\pi(x_\infty) = \pi(x)$. Thus, there exists some compact operator $k \in K(X)$ such that $x_\infty + k = x$. It follows that $x_n + k$ is a sequence of Fredholm operators that converges to $x$. Note that this proof doesn't use anything special about Fredholm operators but rather follows simply from properties of Banach spaces and their quotients. Also, the converse statement is straightforward to show.
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of the values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$) Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. 
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", and it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ.
Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
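The subgroup-of-every-order claim is easy to confirm by brute force: represent permutations of $\{0,1,2,3\}$ as tuples, close a generating set under composition, and count elements (the helper names and the particular generators are my choices):

```python
from itertools import product

def compose(p, q):
    # (p ∘ q)(i) = p(q(i))
    return tuple(p[qi] for qi in q)

def closure(gens):
    """Smallest subset of S_4 containing the identity and the generators
    that is closed under composition, i.e. the generated subgroup."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(elems, repeat=2)} - elems
        if not new:
            return elems
        elems |= new

# Cycle notation -> tuples acting on {0, 1, 2, 3}
t12   = (1, 0, 2, 3)   # (1 2)
c123  = (1, 2, 0, 3)   # (1 2 3)
c1234 = (1, 2, 3, 0)   # (1 2 3 4)
t13   = (2, 1, 0, 3)   # (1 3)
c124  = (1, 3, 2, 0)   # (1 2 4)

subgroups = {
    2: [t12], 3: [c123], 4: [c1234],
    6: [t12, c123],      # a copy of S_3
    8: [c1234, t13],     # dihedral 2-Sylow
    12: [c123, c124],    # A_4
    24: [t12, c1234],    # all of S_4
}
```

Counting the closure of each generating set recovers exactly the divisors 2, 3, 4, 6, 8, 12, 24.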
What is the Jacobian matrix? What are its applications? What is its physical and geometrical meaning? Can someone please explain with examples? Here is an example. Suppose you have two implicit differentiable functions $$F(x,y,z,u,v)=0,\qquad G(x,y,z,u,v)=0$$ and the functions, also differentiable, $u=f(x,y,z)$ and $v=g(x,y,z)$ such that $$F(x,y,z,f(x,y,z),g(x,y,z))=0,\qquad G(x,y,z,f(x,y,z),g(x,y,z))=0.$$ If you differentiate $F$ and $G$, you get \begin{eqnarray*} \frac{\partial F}{\partial x}+\frac{\partial F}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial F}{\partial v}\frac{\partial v}{\partial x} &=&0\qquad \\ \frac{\partial G}{\partial x}+\frac{\partial G}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial G}{\partial v}\frac{\partial v}{\partial x} &=&0. \end{eqnarray*} Solving this system you obtain $$\frac{\partial u}{\partial x}=-\frac{\det \begin{pmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial v} \end{pmatrix}}{\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}}$$ and similar for $\dfrac{\partial u}{\partial y}$, $\dfrac{\partial u}{\partial z}$, $\dfrac{\partial v}{\partial x}$, $\dfrac{\partial v}{\partial y}$, $\dfrac{\partial v}{\partial z}$. The compact notation for the denominator is $$\frac{\partial (F,G)}{\partial (u,v)}=\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}$$ and similar for the numerator.
Then $$\dfrac{\partial u}{\partial x}=-\dfrac{\dfrac{\partial (F,G)}{\partial (x,v)}}{\dfrac{\partial (F,G)}{\partial (u,v)}}$$ where $\dfrac{\partial (F,G)}{\partial (x,v)},\dfrac{\partial (F,G)}{\partial(u,v)}$ are Jacobians (after the 19th century German mathematician Carl Jacobi). The absolute value of the Jacobian of a coordinate system transformation is also used to convert a multiple integral from one system into another. In $\mathbb{R}^2$ it measures how much the unit area is distorted by the given transformation, and in $\mathbb{R}^3$ this factor measures the unit volume distortion, etc. Another example: the following coordinate transformation (due to Beukers, Calabi and Kolk) $$x=\frac{\sin u}{\cos v}$$ $$y=\frac{\sin v}{\cos u}$$ For this transformation you get (see Proof 2 in this collection of proofs by Robin Chapman) $$\dfrac{\partial (x,y)}{\partial (u,v)}=1-x^2y^{2}.$$ Jacobian sign and orientation of closed curves. Assume you have two small closed curves, one around $(x_0,y_0)$ and another around $(u_0,v_0)$, this one being the image of the first under the mapping $u=f(x,y)$, $v=g(x,y)$. If the sign of $\dfrac{\partial (x,y)}{\partial (u,v)}$ is positive, then both curves will be travelled in the same sense. If the sign is negative, they will have opposite senses. (See Oriented Regions and their Orientation.) The Jacobian $df_p$ of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$ at a point $p$ is its best linear approximation at $p$, in the sense that $f(p + h) = f(p) + df_p(h) + o(|h|)$ for small $h$. This is the "correct" generalization of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$, and everything we can do with derivatives we can also do with Jacobians. In particular, when $n = m$, the determinant of the Jacobian at a point $p$ is the factor by which $f$ locally dilates volumes around $p$ (since $f$ acts locally like the linear transformation $df_p$, which dilates volumes by $\det df_p$).
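The volume-dilation reading of the determinant is easy to check numerically, e.g. for polar coordinates $(r,\theta) \mapsto (r\cos\theta, r\sin\theta)$, whose Jacobian determinant is $r$ (the finite-difference step size below is a rough choice of mine):

```python
import math

def f(r, theta):
    # Polar-to-Cartesian map
    return (r * math.cos(theta), r * math.sin(theta))

def jacobian(r, theta, h=1e-6):
    # Forward-difference approximation of the 2x2 Jacobian matrix
    x0, y0 = f(r, theta)
    xr, yr = f(r + h, theta)
    xt, yt = f(r, theta + h)
    return [[(xr - x0) / h, (xt - x0) / h],
            [(yr - y0) / h, (yt - y0) / h]]

def det2(J):
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]
```

At the point $(r,\theta) = (2, 0.8)$ the numerical determinant comes out close to $2$: a small square of coordinate area $dr\,d\theta$ is mapped to a patch of area about $r\,dr\,d\theta$, which is exactly the factor appearing in $\iint f\,r\,dr\,d\theta$.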
This is the reason that the Jacobian appears in the change of variables formula for multivariate integrals, which is perhaps the basic reason to care about the Jacobian. For example this is how one changes an integral in rectangular coordinates to cylindrical or spherical coordinates. The Jacobian specializes to the most important constructions in multivariable calculus. It immediately specializes to the gradient, for example. When $n = m$ its trace is the divergence. And a more complicated construction gives the curl. The rank of the Jacobian is also an important local invariant of $f$; it roughly measures how "degenerate" or "singular" $f$ is at $p$. This is the reason the Jacobian appears in the statement of the implicit function theorem, which is a fundamental result with applications everywhere. In single variable calculus, if $f:\mathbb R \to \mathbb R$, then \begin{equation} f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}. \end{equation} A very useful way to think about $f'(x)$ is this: \begin{equation} \tag{$\spadesuit$} f(x + \Delta x) \approx f(x) + f'(x) \Delta x. \end{equation} One of the advantages of equation $(\spadesuit)$ is that it still makes perfect sense in the case where $f:\mathbb R^n \to \mathbb R^m$: \begin{equation} f(\underbrace{x}_{n \times 1} + \underbrace{\Delta x}_{n\times 1}) \approx \underbrace{f(x)}_{m \times 1} + \underbrace{f'(x)}_{?} \underbrace{\Delta x}_{n \times 1}. \end{equation} You see, if $f'(x)$ is now an $m \times n$ matrix, then this equation makes perfect sense. So, with this idea, we can extend the idea of the derivative to the case where $f:\mathbb R^n \to \mathbb R^m$. This is the first step towards developing calculus in a multivariable setting. The matrix $f'(x)$ is called the "Jacobian" of $f$ at $x$, but maybe it's more clear to simply call $f'(x)$ the derivative of $f$ at $x$. 
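The "best linear approximation" property is easy to check numerically. Below is a small sketch (my own illustration) for the polar-to-Cartesian map, whose Jacobian determinant $r$ is the familiar area-dilation factor:

```python
import numpy as np

def f(p):
    """Polar-to-Cartesian map (r, theta) -> (x, y)."""
    r, t = p
    return np.array([r * np.cos(t), r * np.sin(t)])

def jacobian(p):
    """Matrix of partial derivatives of f at p."""
    r, t = p
    return np.array([[np.cos(t), -r * np.sin(t)],
                     [np.sin(t),  r * np.cos(t)]])

p = np.array([2.0, 0.7])
h = np.array([1e-5, -2e-5])

# f(p + h) ~ f(p) + df_p(h), with error o(|h|)
err = np.linalg.norm(f(p + h) - f(p) - jacobian(p) @ h)
assert err < 1e-8

# det df_p = r: the local area-dilation factor
assert abs(np.linalg.det(jacobian(p)) - p[0]) < 1e-12
```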
The matrix $f'(x)$ allows us to approximate $f$ locally by a linear function (or, technically, an "affine" function). Linear functions are simple enough that we can understand them well (using linear algebra), and often understanding the local linear approximation to $f$ at $x$ allows us to draw conclusions about $f$ itself. (I know this is slightly late, but I think the OP may appreciate this) As an application, in the field of control engineering the use of Jacobian matrices allows the local (approximate) linearisation of non-linear systems around a given equilibrium point and so allows the use of linear systems techniques, such as the calculation of eigenvalues (and thus allows an indication of the type of the equilibrium point). Jacobians are also used in the estimation of the internal states of non-linear systems in the construction of the extended Kalman filter, and also if the extended Kalman filter is to be used to provide joint state and parameter estimates for a linear system (since this is a non-linear system analysis due to the products of what are then effectively inputs and outputs of the system). I found the most beautiful usage of Jacobian matrices in studying differential geometry, when one abandons the idea that analysis can be done "only on balls of $\mathbb{R}^n$". The definition of the tangent space at a point $p$ of a manifold $M$ can be given via the kernel of the Jacobian of a suitable submersion, or via the image of the differential of a suitable immersion from an open set $U\subseteq\mathbb{R}^{\dim M}$. Quite a simple example, but when I was an undergrad four years ago it gave me the "right" idea of what a linear transformation does in a differential (analytical) framework. This is not a rigorous explanation, but here is the best intuitive explanation/motivation for the Jacobian matrix. Start with an interval $[x_1,x_2] \subset \mathbb{R}$. What is a common measurement of space for this interval? It is length. 
To find the length of $[x_1,x_2]$, take $x_2-x_1$. Now suppose I define an invertible linear transformation $T:\mathbb{R} \rightarrow \mathbb{R}$, where $$T(x)=\begin{bmatrix}a\end{bmatrix}x,$$ where $\begin{bmatrix}a\end{bmatrix}$ is a $1\times 1$ matrix with a nonzero entry $a$. The image of $[x_1,x_2]$ under $T$ is the interval $[ax_1,ax_2]$ (for $a>0$; the endpoints swap if $a<0$), and its length is $|a|(x_2-x_1)$. Now we ask ourselves this question: how does the length of the new interval relate to the length of the old interval? The new interval is $|a|$ times as long as $[x_1,x_2]$. But notice that: $$|a|=\left |\det\begin{bmatrix}a\end{bmatrix}\right |.$$ Now suppose you are doing u-substitution to evaluate an integral of the form $$\int_{S} f(x) dx.$$ We define $x=x(u)$ and the differential $dx$ becomes $\frac{dx}{du}du$. If you view $dx$ and $du$ as vectors in $\mathbb{R}$, you get $$dx=\begin{bmatrix}\frac{dx}{du}\end{bmatrix}du.$$ The determinant of $\begin{bmatrix}\frac{dx}{du}\end{bmatrix}$ plays the same role as $a$ in that it is a scaling factor between different "infinitesimal" interval lengths. The higher dimensional analogue of the interval in $\mathbb{R}$ is a parallelepiped in $\mathbb{R}^n$. Measurement of space in $\mathbb{R}^n$ is the $n$-dimensional volume. If you define an invertible linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$, and if you write $T(x)=Ax$, where $A$ is an $n \times n$ matrix, the absolute value of $\det A$ scales the volume of a parallelepiped. 
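The scaling role of $dx/du$ described above can be checked numerically. Here is a small sketch (my own example): computing $\int_1^4 x^2\,dx = 21$ directly and again after substituting $x = u^2$, $dx = 2u\,du$:

```python
import numpy as np

def trapezoid(f, a, b, n=100000):
    """Composite trapezoid rule on [a, b]."""
    xs = np.linspace(a, b, n + 1)
    ys = f(xs)
    return (b - a) / n * (ys.sum() - 0.5 * (ys[0] + ys[-1]))

# direct: integral of x^2 over [1, 4] = 21
direct = trapezoid(lambda x: x**2, 1.0, 4.0)

# substituted: x = u^2 maps [1, 2] onto [1, 4];
# the integrand x^2 dx becomes u^4 * 2u du = 2u^5 du
substituted = trapezoid(lambda u: 2.0 * u**5, 1.0, 2.0)

assert abs(direct - 21.0) < 1e-6
assert abs(substituted - 21.0) < 1e-6
```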
Similarly, if you are dealing with the multidimensional integral: $$\int_{S}f(x_1,...,x_n)dx_1...dx_n$$ and wish to use the change of variables: $$x_i=x_i(u_1,...,u_n),1 \leq i \leq n$$ you can regard $dx=(dx_1,...,dx_n),du=(du_1,...,du_n)$ as vectors in $\mathbb{R}^n$ and relate them by $$dx=\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij}du.$$ The Jacobian matrix here is: $$\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij},$$ and the notation means the $i$th row and $j$th column entry is $\frac{\partial x_i}{\partial u_j}$. The absolute value of the determinant of the Jacobian matrix is a scaling factor between different "infinitesimal" parallelepiped volumes. Again, this explanation is merely intuitive. It is not rigorous as one would present it in a real analysis course. I don't know much about this, but I know the Jacobian is used in robotics programming for transforming between frames of reference. The equations become very simple, so moving from one frame to another to another is just the product of Jacobian matrices. A very short contribution for the applicability question: it is a matrix of partial derivatives. One of the applications is to find local solutions of a system of nonlinear equations. When you have a system of nonlinear equations, the $x$'s that solve the system are not easy to find, because it is difficult to invert the matrix of nonlinear coefficients of the system. However, you can take the partial derivatives of the equations, find the local linear approximation near some value, and then solve the system. Because the system becomes locally linear, you can solve it using linear algebra. The simplest answer I can give: the Jacobian matrix is used when a change of variables is required in more than one dimension. One of the explanations above illustrates this in the single-variable case.
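The local-linearisation idea in the last paragraph is exactly Newton's method for systems. A minimal sketch (illustrative, with a made-up system of two equations):

```python
import numpy as np

def F(p):
    """A sample nonlinear system: x^2 + y^2 = 4 and x*y = 1."""
    x, y = p
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(p):
    """Jacobian of F: matrix of partial derivatives."""
    x, y = p
    return np.array([[2.0 * x, 2.0 * y],
                     [y, x]])

# Newton iteration: at each step, solve the local linear
# system J(p) dp = -F(p) and update p.
p = np.array([2.0, 0.5])
for _ in range(20):
    p = p - np.linalg.solve(J(p), F(p))

assert np.allclose(F(p), 0.0, atol=1e-10)
```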
Title: Electric field structure and Field lines for two point charges +/- Post by: ahmedelshfie on April 27, 2010, 06:33:33 pm The following is a simulation created by prof Hwang, modified by Ahmed. Original project: Electric field structure and Field lines for two point charges +/- (http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=539.0) You can drag either charge and find out the new electric field distribution. -*- The field lines are calculated in real time. However, please remember that a field line is not the same as the trajectory of a test charge moving in the same field. You can add a test charge if the check box is checked and watch the trajectory of the test charge. It is easy to draw those vectors which represent the electric field at different positions, because we can calculate the electric field with $\vec{E}=\frac{kq_1}{r_1^2}\hat{r_1}+ \frac{kq_2}{r_2^2}\hat{r_2}$. However, do you know how those field lines were calculated? Hint: The tangent direction of an electric field line is the same as the direction of the electric field at that point. Title: Re: Electric field structure and Field lines for two point charges +/- Post by: ahmedelshfie on April 27, 2010, 08:22:26 pm Images from http://en.wikipedia.org/wiki/Magnetic_field
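Answering the hint above: a field line can be traced numerically by repeatedly stepping a short distance along the local field direction. A rough sketch of the idea (my own, with $k = 1$, unit charges, and made-up step sizes):

```python
import numpy as np

def e_field(p, charges):
    """Superposed Coulomb field of point charges, with k = 1."""
    E = np.zeros(2)
    for q, pos in charges:
        r = p - pos
        E += q * r / np.linalg.norm(r)**3
    return E

plus, minus = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
charges = [(+1.0, plus), (-1.0, minus)]

# start just off the positive charge, at launch angle 0.8 rad,
# and take small unit-tangent Euler steps along the field
p = plus + 0.05 * np.array([np.cos(0.8), np.sin(0.8)])
for _ in range(5000):
    E = e_field(p, charges)
    p = p + 0.01 * E / np.linalg.norm(E)
    if np.linalg.norm(p - minus) < 0.05:   # line terminates on the - charge
        break
```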
To comment on a paper published in the interactive scientific journal SOIL or in its discussion forum SOIL Discussions (SOILD), please choose one of the following two options: If you would like to submit an interactive comment (short comment, referee comment, editor comment, or author comment) for immediate non-peer-reviewed publication in the interactive discussion of a paper recently published in SOILD, please find this paper on the SOILD web page and follow the appropriate links there. Short comments can be submitted by every registered member of the scientific community (free registration is accessible via the login link). They should usually not be longer than 10 pages and have to be submitted within 6 weeks after publication of the discussion paper in SOILD. For details see types of interactive comments. The SOIL editorial board reserves the right to remove or to censor referee reports and any other comments if they contain personal insults or if they are not of substantial nature or of direct relevance to the issues raised in the manuscript under review. If you would like to contribute a peer-reviewed comment or reply, which continues the discussion of a scientific paper beyond the limits of immediate interactive discussion in SOILD, please be aware of the manuscript preparation guidelines. Such comments and replies undergo the same process of peer review, publication, and interactive discussion as full articles and technical notes. They are equivalent to the peer-reviewed comments and replies in traditional scientific journals and may achieve publication in SOIL if sufficiently substantial. If you want to use LaTeX commands in your comment, you need to "activate LaTeX commands" by clicking on the appropriate button just above the text input window. The following template is a simple ASCII text file with a typical layout for interactive comments and some frequently used LaTeX commands. 
It can be viewed, edited, copied, and pasted into the text field of the comment submission form using any standard text editor: comment_example.txt. LaTeX ignores extra spacing between words. If you want to force a line break, please use a double backslash \\ in the appropriate place. For separating paragraphs, use two hard returns. Italic text may be created by putting the text into curly braces with a \it after the opening brace. Typing "{\it This is italic} and this is not." will produce "This is italic and this is not." Remember to include the empty space between the \it and the rest of the text. Bold face text is produced in a similar way using \bf. Typing "{\bf This is bold} and this is not." will produce "This is bold and this is not." Again, remember to include the empty space between the \bf and the rest of the text. To create a subscript, type a dollar sign, an underscore, an open curly brace, the character(s) you want to be subscripted, a close curly brace, and another dollar sign. Typing "H$_{2}$SO$_{4}$" will produce "H₂SO₄" and typing "T$_{ice}$" will produce a "T" with "ice" as a subscript. Creating a superscript follows the same procedure, with the difference that you need to put a caret sign instead of the underscore. Typing "cm$^{-3}$" will produce "cm⁻³" and typing "T$^{ice}$" will produce a "T" with "ice" as a superscript. Some characters have a special function in LaTeX; if you want to use them as normal characters you need to put a backslash in front of them: \% \$ \& \# \_ \{ \} In particular, the percent sign is used to introduce commented text in LaTeX, so ALWAYS put the backslash in front of it or else some of your text will disappear. Greek symbols can be used by putting the special commands listed below between two dollar signs: \alpha, \beta, \gamma, \delta, \epsilon, \nu, \kappa, \lambda, \mu, \pi, \omega, \sigma, etc. Typing "$\mu$m" will produce "µm". Similarly, upper-case Greek letters can be produced: \Gamma, \Delta, \Lambda, \Sigma, \Omega, \Theta, \Phi, etc. 
Some frequently used mathematical symbols are produced in the same way as Greek symbols: Typing $<$ will produce <. Typing $>$ will produce >. Typing $=$ will produce =. Typing $\times$ will produce ×. Typing $\pm$ will produce ±. Typing $\sim$ will produce ~. Typing $^\circ$ will produce °. Typing $\rightarrow$ will produce an arrow pointing to the right, as frequently used in chemical reactions. Simple equations are produced by putting all numbers, symbols, and signs between two dollar signs. Typing "$E = m c^{2}$" will produce "E = mc²". Typing "$P_{t} = P_{0} A^{kT}$" will produce "Pₜ = P₀ Aᵏᵀ".
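Putting several of these commands together, a short comment fragment might look like this (a hypothetical example, not from the template file):

```latex
{\it Short comment} on the discussion paper \\
We measured T$_{ice}$ at $-5$ $^\circ$C and observed CO$_{2}$ fluxes of
about 2.1 $\mu$mol m$^{-2}$ s$^{-1}$ ($\pm$ 0.3, i.e. $\sim$15 \%).
```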
C.~D.~A.~Evans and J. D. Hamkins, “Transfinite game values in infinite chess,” Integers, vol. 14, Paper No. G2, 36 pp., 2014.

@ARTICLE{EvansHamkins2014:TransfiniteGameValuesInInfiniteChess,
    AUTHOR = {C.~D.~A.~Evans and Joel David Hamkins},
    TITLE = {Transfinite game values in infinite chess},
    JOURNAL = {Integers},
    FJOURNAL = {Integers Electronic Journal of Combinatorial Number Theory},
    YEAR = {2014},
    volume = {14},
    pages = {Paper No.~G2, 36},
    eprint = {1302.4377},
    archivePrefix = {arXiv},
    primaryClass = {math.LO},
    url = {http://jdh.hamkins.org/game-values-in-infinite-chess},
    ISSN = {1553-1732},
    MRCLASS = {03Exx (91A46)},
    MRNUMBER = {3225916},
}

In this article, C. D. A. Evans and I investigate the transfinite game values arising in infinite chess, providing both upper and lower bounds on the supremum of these values—the omega one of chess—denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\!\!\!\!\sim}}$ in the context of all positions, including those with infinitely many pieces. For lower bounds, we present specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$. The article is 38 pages, with 18 figures detailing many interesting positions of infinite chess. My co-author Cory Evans holds the chess title of U.S. National Master. 
Wästlund’s MathOverflow question | My answer there Let’s display here a few of the interesting positions. First, a simple new position with value $\omega$. The main line of play here calls for black to move his center rook up to arbitrary height, and then white slowly rolls the king into the rook for checkmate. For example, 1…Re10 2.Rf5+ Ke6 3.Qd5+ Ke7 4.Rf7+ Ke8 5.Qd7+ Ke9 6.Rf9#. By playing the rook higher on the first move, black can force this main line of play to have any desired finite length. There are further variations involving more black rooks and the white king. Next, consider an infinite position with value $\omega^2$. The central black rook, currently attacked by a pawn, may be moved up by black arbitrarily high, where it will be captured by a white pawn, which opens a hole in the pawn column. White may systematically advance pawns below this hole in order eventually to free up the pieces at the bottom that release the mating material. But with each white pawn advance, black embarks on an arbitrarily long round of harassing checks on the white king. Here is a similar position with value $\omega^2$, which we call, “releasing the hordes”, since white aims ultimately to open the portcullis and release the queens into the mating chamber at right. The black rook ascends to arbitrary height, and white aims to advance pawns, but black embarks on arbitrarily long harassing check campaigns to delay each white pawn advance. Next, by iterating this idea, we produce a position with value $\omega^2\cdot 4$. We have in effect a series of four such rook towers, where each one must be completed before the next is activated, using the “lock and key” concept explained in the paper. We can arrange the towers so that black may in effect choose how many rook towers come into play, and thus he can play to a position with value $\omega^2\cdot k$ for any desired $k$, making the position overall have value $\omega^3$. 
Another interesting thing we noticed is that there is a computable position in infinite chess, such that in the category of computable play, it is a win for white—white has a computable strategy defeating any computable strategy of black—but in the category of arbitrary play, both players have a drawing strategy. Thus, our judgment of whether a position is a win or a draw depends on whether we insist that players play according to a deterministic computable procedure or not. The basic idea for this is to have a computable tree with no computable infinite branch. When black plays computably, he will inevitably be trapped in a dead-end. In the paper, we conjecture that the omega one of chess is as large as it can possibly be, namely, the Church-Kleene ordinal $\omega_1^{CK}$ in the context of finite positions, and true $\omega_1$ in the context of all positions. Our idea for proving this conjecture, unfortunately, does not quite fit into two-dimensional chess geometry, but we were able to make the idea work in infinite **three-dimensional** chess. In the last section of the article, we prove: Theorem. Every countable ordinal arises as the game value of an infinite position of infinite three-dimensional chess. Thus, the omega one of infinite three dimensional chess is as large as it could possibly be, true $\omega_1$. Here is a part of the position. Imagine the layers stacked atop each other, with $\alpha$ at the bottom and further layers below and above. The black king had entered at $\alpha$e4, was checked from below and has just moved to $\beta$e5. Pushing a pawn with check, white continues with 1.$\alpha$e4+ K$\gamma$e6 2.$\beta$e5+ K$\delta$e7 3.$\gamma$e6+ K$\epsilon$e8 4.$\delta$e7+, forcing black to climb the stairs (the pawn advance 1.$\alpha$e4+ was protected by a corresponding pawn below, since black had just been checked at $\alpha$e4). 
The overall argument works in higher dimensional chess, as well as three-dimensional chess that has only finite extent in the third dimension $\mathbb{Z}\times\mathbb{Z}\times k$, for $k$ above 25 or so.
It is well-known that calibrating Heston to the vanilla market is not as easy as it seems: some parameters are "interdependent" and the objective function exhibits plateaus in the parameter space (at least in some dimensions of the parameter space, typically mean-reversion). A good reference on this is this 2017 paper by Cui et al. The authors mention: There are two possible approaches that one can seek to deal with this: the first is to scale the parameters to a similar order and search on a better-scaled objective function; the second is to decrease the tolerance level for the optimisation process, meaning to approach the very bottom of this objective function I am particularly interested in the first approach and was wondering which parameterisations you experts tend to use for your daily Heston calibrations. Is there a sound way to disentangle $\kappa$ and $\xi$, for instance? The variance process being CIR, the asymptotic variance of variance computes as $$\lim_{t \to \infty} \text{Var}_0^\Bbb{Q}[v_t] = \theta \frac{\xi^2}{2\kappa} $$ To disentangle the effects of $\kappa$ and $\xi$ on smile convexity, one could therefore reparametrise the Heston variance process as $$ dv_t = \kappa (\theta-v_t) dt + \xi^* \sqrt{\kappa} \sqrt{v_t} dW_t $$ where we have defined a new parameter $\xi^*$ such that $\xi = \xi^* \sqrt{\kappa}$. This parameter looks more natural since it leads to $\lim_{t \to \infty} \text{Var}_0^\Bbb{Q}[v_t] = \theta (\xi^*)^2/2$, which no longer depends on $\kappa$. Actually, I've found that this parametrisation was already proposed by Hans Buehler (see here, section 1.1.1. for a small discussion and equation (2) for the result). In some other presentations he mentions another reparametrisation where vol-of-vol appears in the drift (but the idea is the same IMO).
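The stationary variance of the CIR process, $\theta\xi^2/(2\kappa)$, can be sanity-checked by Monte Carlo. Below is a rough sketch with made-up parameters (satisfying the Feller condition) and a full-truncation Euler scheme; it is an illustration, not a production calibration tool:

```python
import numpy as np

# made-up CIR parameters; Feller condition 2*kappa*theta >= xi^2 holds
kappa, theta, xi = 2.0, 0.04, 0.3
v0, T, n_steps, n_paths = 0.04, 20.0, 2000, 20000
dt = T / n_steps

rng = np.random.default_rng(0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    vp = np.maximum(v, 0.0)   # full truncation: floor the variance at zero
    v = v + kappa * (theta - vp) * dt \
          + xi * np.sqrt(vp * dt) * rng.standard_normal(n_paths)

# stationary variance of the CIR process: theta * xi^2 / (2 * kappa)
target = theta * xi**2 / (2.0 * kappa)
assert abs(v.var() - target) / target < 0.15
```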
ECE662: Statistical Pattern Recognition and Decision Making Processes Spring 2008, Prof. Boutin Collectively created by the students in the class Contents 1 Lecture 24 Lecture notes 1.1 Minimum Spanning Tree Methods (MST) 1.2 Visualizations of hierarchical clustering 1.3 Agglomerate Algorithms for Hierarchical Clustering (from Distances) 1.4 Defining Distances Between Clusters 1.5 Related Websites Lecture 24 Lecture notes Minimum Spanning Tree Methods (MST) Kruskal's Algorithm The algorithm starts with an initially empty tree, and at each step it adds the minimum-cost edge that does not create a cycle. By repeating this process, it finds a minimum spanning tree. Prim's Algorithm Prim's algorithm grows a single connected tree, as opposed to Kruskal's algorithm. Kruskal's algorithm searches for the minimum-cost edge among all edges; Prim's algorithm starts with the minimum-cost edge, then greedily searches for minimum-cost edges among only the edges adjacent to the tree. When every node is included in the tree, the algorithm stops. Zahn's clustering algorithm (1971) Find the MST. Identify "inconsistent" edges in the MST and remove them, e.g. remove the longest edge (one edge removed => 2 clusters; two edges removed => 3 clusters). But the globally longest edge is not always the right one to remove. Instead, use local inconsistency: remove edges significantly larger than their neighborhood edges. Useful for visualization. Visualizations of hierarchical clustering Consider the following set of five 2D data points, which we seek to cluster hierarchically. We may represent the hierarchical clustering in various ways. One is by a Venn diagram, in which we circle the data points which belong to a cluster, then subsequently circle any clusters that belong to a larger cluster in the hierarchy. Another representation is a dendrogram. A dendrogram represents the clustering as a tree, with clusters that are more closely grouped indicated as siblings "earlier" in the tree. 
The dendrogram also includes a "similarity scale," which indicates the distance between the data points (clusters) which were grouped to form a larger cluster. For the example dataset above (with distances calculated as Euclidean distance), we have the following dendrogram: A third representation of hierarchical clustering is by using brackets. We bracket data points/clusters which are grouped into a cluster in a hierarchical fashion as follows: $ \{\{\{X_1,X_2\},\{X_3,X_4\}\},X_5\} $ Agglomerate Algorithms for Hierarchical Clustering (from Distances) Reference - Chapter 3.2 from Jain and Dubes Begin with a matrix of pairwise distances: $ \mathcal{D} = \begin{pmatrix} 0 & d_{1,2} & \cdots & d_{1,d} \\ d_{1,2} & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & d_{d-1,d} \\ d_{1,d} & \cdots & d_{d-1,d} & 0 \end{pmatrix} $. For simplicity, assume that the distances are pairwise distinct, and sort the distances in increasing order: $ d_1 < d_2 < \cdots < d_{d(d-1)/2} $ Begin with clusters $ S_1=\{X_1\}, S_2=\{X_2\}, \cdots, S_d=\{X_d\} $ Find the two nearest clusters $ S_{i_0}, S_{j_0} $ and merge them into a single cluster Repeat step 2 until the number of clusters reaches a pre-specified number (the number 1 corresponds to a dendrogram) Defining Distances Between Clusters Option 1 $ {dist}(S_i, S_j) = \min_{X \in S_i, X' \in S_j}{dist}(X, X') $ We call this Nearest Neighbor Distance (NOT a distance, since $ {dist}(S_i, S_j)=0 \not\Rightarrow S_i=S_j $) This distance choice implies a "Nearest Neighbor (NN) clustering algorithm." Note: At each level of clustering, we have a "single-link clustering" with threshold $ t_0= $ distance between the last two clusters that were merged. Note: If we continue until all $ X_i $'s are linked, we get the MST Option 2 $ {dist}(S_i, S_j) = \max_{X \in S_i, X' \in S_j}{dist}(X, X') $ We call this Farthest Neighbor (FN) Distance. The Farthest Neighbor algorithm is also known as complete-linkage clustering. 
The algorithm tries to estimate how far the clusters are from each other. To find the distance between two clusters, it computes the distance between the two farthest points (one from each cluster). Effect: increases cluster diameters as little as possible. At each level of clustering, you get a complete clustering. Interpretation of the NN and FN algorithms (Johnson-King 1967) Input proximity matrix $ \mathcal{D}=(d_{i,j}) $ Decide which cluster distance you will use (NN, FN, ...) Set scale = 0 Begin with $ S_1=\{X_1\},S_2=\{X_2\},\cdots,S_d=\{X_d\} $ (*) Find $ (S_{i_0},S_{j_0})=\operatorname{argmin}_{i \neq j}\,{dist}(S_i,S_j) $ from $ \mathcal{D} $ Merge $ S_{i_0} $ & $ S_{j_0} $ Update $ \mathcal{D} $ by deleting rows and columns $ i_0 $ and $ j_0 $; add a row and column for $ S_{d+1}=S_{i_0} \cup S_{j_0} $. Fill in the new row and column with the chosen cluster distance. If all objects belong to one single cluster, then stop. Else, go to (*). Generalization (Lance-Williams 1967) Update the similarity matrix using the formula $ {dist}(S_{i_0} \cup S_{j_0}, S_k) = \alpha_{i_0} d(S_{i_0},S_k)+\beta_{j_0} d(S_{j_0},S_k) + \gamma d(S_{i_0},S_{j_0}) + \delta |d(S_{i_0},S_k)-d(S_{j_0},S_k)| $ Related Websites Definition, Kruskal's algorithm, Prim's algorithm, Greedy algorithm, Minimum spanning tree, Prim-Jarnik's algorithm and example, Kruskal's algorithm and example, Borůvka's algorithm and example
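The agglomerative procedure above can be sketched directly in code. Here is a naive (cubic-time) illustration with the nearest-neighbor (single-link) cluster distance; the function name and test data are my own:

```python
import numpy as np

def agglomerate(D, k, link=min):
    """Agglomerative clustering from distance matrix D down to k clusters.
    link=min gives the nearest-neighbor (single-link) cluster distance;
    link=max gives the farthest-neighbor (complete-link) distance."""
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > k:
        # find the two nearest clusters under the chosen cluster distance
        best = min(
            ((link(D[i][j] for i in A for j in B), a, b)
             for a, A in enumerate(clusters)
             for b, B in enumerate(clusters) if b > a),
            key=lambda t: t[0],
        )
        _, a, b = best
        clusters[a] |= clusters[b]   # merge the two clusters ...
        del clusters[b]              # ... and delete the absorbed one
    return clusters

# five points on a line: two tight pairs and one outlier
pts = np.array([0.0, 1.0, 5.0, 6.0, 20.0])
D = np.abs(pts[:, None] - pts[None, :])
print(agglomerate(D, 3))   # [{0, 1}, {2, 3}, {4}]
```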
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
I will take pretty much the same approach that 9-BBN does except I will do it a little bit differently. I have never heard of setting the reactants to zero and I'm not very good at making matrices, so I'll just jump right into the systems of equations. First of all, define your coefficients $a,b,c,d,e$ for each of the chemicals involved, being the reactants that you start with and the products that you get from the reaction. $$\ce{a FeS2 + b O2 + c H2O -> d Fe(OH)3 + e H2SO4}$$ Now we keep a tally of the number of each atom in each molecule. A zero means that there are none of that atom in the compound (included for clarity); an equality sign replaces the reaction arrow separating products and reactants. Iron (Fe): $\displaystyle 1a + 0b + 0c = 1d + 0e \Longrightarrow a = d$ Sulfur (S): $\displaystyle 2a + 0b + 0c = 0d + 1e \Longrightarrow 2a = e$ Oxygen (O): $\displaystyle 0a + 2b + 1c = 3d + 4e \Longrightarrow 2b + c = 3d + 4e$ Hydrogen (H): $\displaystyle 0a + 0b + 2c = 3d + 2e \Longrightarrow 2c = 3d + 2e$ Now then, we have four equations and five variables $a,b,c,d,e$, so the system is underdetermined … except that we only want one solution: the lowest integer coefficients for each molecule. So we make one initial guess for any of the variables. We choose the simplest one, which is $a$, and set $a = 1$ as an initial guess. Note that $a \leq 0$ is forbidden. In iron (Fe), $a = d$; as $a = 1$, then $d = 1$ In sulfur (S), $2a = e$; as $a = 1$, then $e = 2$ What we know: $a = 1, b =\ ?, c =\ ?, d = 1, e = 2$ With what we know, we can only solve for $c$ in the hydrogen equation next. $c = (3d + 2e) / 2$; as $d = 1, e = 2$, then: $c = 7 / 2$ Solving the oxygen equation for $b$: $b = (3d + 4e - c) / 2$; given the known variables, $b = 15 / 4$ Solutions: $a = 1, b = 15 / 4, c = 7 / 2, d = 1, e = 2$ This will balance the equation according to conservation of mass. 
However, it makes more sense if you multiply $a$ through $e$ by $4$ so as to get the lowest whole number solutions. This is like scaling a recipe. We are synthesizing the same thing, though. Therefore, $a = 4, b = 15, c = 14, d = 4, e = 8$ Balanced Chemical Equation: $$\ce{4 FeS2 + 15 O2 + 14 H2O -> 4 Fe(OH)3 + 8 H2SO4}$$ Now that your equation is balanced I will show you a little something that I've derived. The formula for the complete combustion of every hydrocarbon alkane ($\ce{C_nH_{2n+2}}$), such as methane, ethane, propane, butane, pentane, etc., is this: $$\ce{C_nH_{2n+2} + $(3n + 1) / 2$ O2 -> n CO2 + (n + 1) H2O}$$ So if we are completely combusting (full airflow of oxygen, no major flickering of the flame producing a mixture of $\ce{CO}$ and $\ce{CO2}$) propane, which has the molecular formula $\ce{C3H8}$, where $n = 3$, its combustion is the following: $$\ce{C3H8 (g) + 5 O2 (g) -> 3 CO2 (g) + 4 H2O (g)}$$ I know the general complete combustion equation is true for every alkane because all alkanes have the formula $\ce{C_nH_{2n+2}}$, $\ce{O2}$ is always involved in combustion, and $\ce{CO2}$ & $\ce{H2O}$ are always the products of the complete combustion of alkanes. I think this is really cool given that most popular general chemistry equations are combustion equations! Additional information: I have a lot of ideas running through my head now so once you've determined the balanced chemical equation, you may be able to calculate the spontaneity of the reaction from the change in Gibbs Free Energy of reaction, which is useful for knowing whether the reaction is a waste of energy, that is, whether it consumes more energy than it produces. You can determine the percent yield after experimentation, i.e. how useful your reaction is, from the stoichiometric amount of expected product produced. 
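Returning to the pyrite equation: the hand elimination above can be automated, since conservation of each element gives a homogeneous linear system whose null space contains the coefficient vector. An illustrative sketch with `sympy` (my own code, not part of the answer):

```python
import sympy as sp
from functools import reduce

# rows: Fe, S, O, H; columns: FeS2, O2, H2O, Fe(OH)3, H2SO4
# (reactant columns positive, product columns negative)
A = sp.Matrix([
    [1, 0, 0, -1,  0],   # Fe
    [2, 0, 0,  0, -1],   # S
    [0, 2, 1, -3, -4],   # O
    [0, 0, 2, -3, -2],   # H
])

null = A.nullspace()[0]
coeffs = null / min(null)   # normalise so the smallest entry is 1 (fractions)

# scale by the least common denominator to get whole numbers
lcm = reduce(sp.lcm, [sp.fraction(x)[1] for x in coeffs])
integers = [x * lcm for x in coeffs]
print(integers)   # [4, 15, 14, 4, 8]
```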
But even with ideally $100~\%$ yield, are all the elements that you find in your desired product going on to form your product, or are they being wasted producing something else? The 2nd Principle of Green Chemistry, Atom Economy, states that Synthetic methods should be designed to maximize incorporation of all materials used in the process into the final product. (The American Chemical Society) So what's the point of your reaction? To make $\ce{Fe(OH)3}$? What percent of the atoms $\ce{Fe, O}$, and $\ce{H}$ are being incorporated as $\ce{Fe(OH)3}$, and how much is going off to form sulfuric acid instead? We can calculate that as the percent atom economy. $$\%\ \text{Atom Economy} = \frac{\text{mass of atoms in desired product}}{\text{mass of atoms in all reactants}} \times 100$$ where the units are grams per mole, but they cancel in the ratio. $$\begin{align}\%\ \text{Atom Economy} &= \frac{4 \times 106.866}{4 \times 119.965 + 15 \times 31.998 + 14 \times 18.015} \times 100\\&= \frac{427.464}{1212.04} \times 100\\&= 35.27~\%\end{align}$$ So if there were multiple ways to make $\ce{Fe(OH)3}$, $\%\ \text{Atom Economy}$ may be a useful factor to consider when choosing a synthesis. I think this is more important, though, in choosing a synthetic route where toxic byproducts are involved and you want to minimize the toxic byproducts produced, for the health of the earth and of the customer or patient, and also to avoid having to spend money deactivating and/or filtering out toxic byproducts in an industrial synthesis. Therefore, you want the synthesis with the highest $\%\ \text{Atom Economy}$. A link on Atom Economy: https://www.acs.org/content/acs/en/greenchemistry/what-is-green-chemistry/principles/gc-principle-of-the-month-2.html
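The atom-economy arithmetic above is easy to script. A tiny sketch reproducing the figure, with the molar masses as quoted in the text:

```python
# molar masses (g/mol) as used in the text
FeS2, O2, H2O, FeOH3 = 119.965, 31.998, 18.015, 106.866

desired = 4 * FeOH3                          # mass going into 4 Fe(OH)3
reactants = 4 * FeS2 + 15 * O2 + 14 * H2O   # mass of everything fed in
atom_economy = 100.0 * desired / reactants
print(round(atom_economy, 2))   # 35.27
```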
The asymptotic behavior of solutions of a semilinear parabolic equation

Department of Mathematics, Chonnam National University, Kwangju, 500-757, South Korea

We study the equation $$u_t=\Delta u - (u^q)_y- u^p, \quad p, q >1,$$ defined in the domain $Q=\{ (x, t): x=(x, y) \in \mathbf{R}^{N-1} \times \mathbf{R},\ t >0 \}$ with nonnegative initial data in $L^1( \mathbf{R}^N)$. We completely classify the asymptotic profiles of solutions as $t \to \infty$ according to the parameters $p$ and $q$. We use rescaling transformations and a priori estimates.

Keywords: asymptotic behaviour, self-similar solution, singular solution, semilinear heat equation.

Mathematics Subject Classification: 35B30, 35B40, 35K1.

Citation: Minkyu Kwak, Kyong Yu. The asymptotic behavior of solutions of a semilinear parabolic equation. Discrete & Continuous Dynamical Systems - A, 1996, 2 (4): 483-496. doi: 10.3934/dcds.1996.2.483
Edit: the spectral sequence argument to prove my theorem is given more concisely in Max's answer. The geometric intuition here is that if we're considering free actions of a topological group $G$ on a sphere $S^k$, then the quotient $S^k/G$ is "homotopically the same" as the classifying space $BG$ in degrees that are small relative to $k$ (in particular they have the same homotopy and homology groups in these degrees), essentially since $BG$ is a quotient of a contractible space by a free action, and $S^k$ is $(k-1)$-connected so it "looks contractible" below degree $k$. Computing the homology of $BG$ is usually done via a spectral sequence. In our case, when $G=\mathbb{Z}/n$ for $n> 1$, the homology of $BG$ is isomorphic to $\mathbb{Z}/n$ in every odd degree and is $0$ otherwise (see the bottom of my post), so this is one way of seeing where the torsion is coming from in $H_1$ and $H_3$ of $S^5/(\mathbb{Z}/n)$. If you're interested in seeing how the spectral sequence argument goes, I think I figured out what they were aiming for. I just noticed in one of your comments that you're very new to proofs in general, so this might be a bit advanced, but believe it or not it's relatively nice as far as spectral sequences go. My spectral sequence argument actually establishes the following: Theorem: If $G$ is a topological group with an action on $S^k$ such that the quotient map is a fibre bundle (for finite groups it's enough that the action is free), and $BG$ is the classifying space of principal $G$-bundles, then $$ H_p(S^k/G) \cong H_p(BG) \text{ for }p< k.$$ (In degree $p=k$ itself the top row of the spectral sequence can also contribute, so the bound $p<k$ is sharp in general.) You can also get this result via proving the analogue for homotopy groups first and then showing that implies the homology result. In our particular case of $\mathbb{Z}/n$ acting on $S^5$, this result combined with the spectral sequence computation in the Aside tells us that $H_1(X)\cong H_3(X) \cong \mathbb{Z}/n$, regardless of the free action we started with.
Proof via spectral sequence: Suppose $G$ is a topological group (for our case it will be $\mathbb{Z}/n$) and suppose it admits a free action on $S^k$ such that the quotient map $S^k \to X= S^k/G$ is a fibre bundle. This is in particular a fibration, so we could try to study its Leray-Serre spectral sequence, but it turns out a slightly different fibration is more convenient here. Instead, use the fact that this principal $G$-bundle is classified by a map to the classifying space $BG$, such that $$ S^k \to X \to BG $$ is equivalent to a fibration. If $G$ is a discrete group, the spectral sequence for this fibration is usually called the spectral sequence for a covering space. The $E^2$ page has groups $$ E^2_{p,q} \cong H_p(BG ; H_q(S^k)) $$ and it converges to $H_{p+q}(X)$. These groups can only be non-zero when $q =0$ or $k$, and the $q=0$ row is just a copy of $H_*(BG)$. If $\pi_1 BG\cong \pi_0 G$ is non-trivial, we need to use twisted coefficients when $q = k$, where the action of $\pi_0 G$ on $H_k(S^k)$ is induced by the original action on the space $S^k$, and so the $k$-th row of the $E^2$ page (along with the differentials) will depend on the particular action. What doesn't depend on the action is that the only possibly non-zero differentials are the $d_{k+1}$; the groups $\{E^r_{p,0}\}_{p< k}$ neither support nor receive any non-zero differentials for $r >1$, so they survive to the $E^\infty$ page, and since the $q=k$ row cannot contribute in total degrees below $k$, they give $\{H_p(X)\}_{p< k}$. The result follows since $E^2_{p,0}\cong H_p(BG)$. Aside: The integral homology of $BG$ where $G=\mathbb{Z}/n$ is $$ H_p(B\mathbb{Z}/n) \cong \begin{cases} \mathbb{Z} & \text{if}\, p=0 \\ \mathbb{Z}/n & \text{if}\, p \text{ is odd} \\ 0 &\text{otherwise} \end{cases} $$ This can be seen via another spectral sequence argument.
First you do the usual computation of the spectral sequence for $S^1 \to S^\infty \to \mathbb{CP}^\infty$ and find out that the $E^2$ page is concentrated in the first two rows, and at the same time compute the cohomology of $\mathbb{CP}^\infty$ and show that every non-trivial differential must be an isomorphism. Now restrict the $S^1$ action on $S^\infty$ to the subgroup $\mathbb{Z}/n$ and construct $BG$ as the quotient $S^\infty/G$, and notice that you get a map $BG \to \mathbb{CP}^\infty$ whose fibre is $S^1/(\mathbb{Z}/n) \cong S^1$. In fact this gives a fibration $$ S^1 \to B\mathbb{Z}/n \to \mathbb{CP}^\infty $$ and a map of fibrations from $S^1 \to S^\infty \to \mathbb{CP}^\infty$ which is a degree $n$ map on the fibres. By considering the induced map on spectral sequences you can deduce the result.
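As a concrete check, combining the theorem with the homology of $B\mathbb{Z}/n$ from the Aside reads off the low-degree homology of the quotient (a lens space) directly; the top degree $p=5$ lies outside the range the argument covers:

```latex
% Low-degree homology of X = S^5/(\mathbb{Z}/n), read off from H_p(B\mathbb{Z}/n)
% in degrees below 5:
H_p\bigl(S^5/(\mathbb{Z}/n)\bigr) \cong
\begin{cases}
\mathbb{Z}   & p = 0,\\
\mathbb{Z}/n & p = 1,\ 3,\\
0            & p = 2,\ 4.
\end{cases}
```

This matches the classical homology of lens spaces, so the torsion in $H_1$ and $H_3$ is forced for every free action.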
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE (Elsevier, 2017-11): Anisotropic flow measurements constrain the shear ($\eta/s$) and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
The question is to check which option holds true: there exists a map $f: \mathbb{Z}\rightarrow \mathbb{Q}$ that

1. is bijective and increasing
2. is onto and decreasing
3. is bijective and satisfies $f(n)\geq 0$ if $n\leq 0$
4. has uncountable image.

First of all, any subset of $\mathbb{Q}$ is countable, so there is no point in looking at the last option. Now, as both $\mathbb{Z}$ and $\mathbb{Q}$ are countable, there could be a possible bijective function. The first problem is that I could not think of a bijection (I am very sure one exists), and the second problem is that even if I find some function, will it satisfy the first or third possibilities? Please do not just give an answer; please give some hints and give me some time to think about it. Thank you :)
Backtesting and over-leverage are the bane of any systematic trader. A shout out to Peter, who raised both these points on the previous article in the series: Trading Numbers. So, what are the two issues we'll address in this week's article: Look-ahead bias, and is it really that bad? Kelly is always touted as optimal. Is it really that optimal? And to all those who don't like spoilers: (2) will be covered in more detail later in this article series, so close your eyes; but following Peter's comments, I couldn't help but include some detailed analysis here that I hadn't really considered earlier. Backtesting and Look Ahead Bias I hear you asking, "What's this look under the bed bias??" Well, remember in Trading Numbers, we used momentum to set up a strategy that easily outperformed the S&P 500. Recapping: on the last trading day of the month you looked back over the last year. If the S&P 500 had finished up, you bought, otherwise you were flat. Nice and simple. Now assume that you were a bit sloppy implementing this in your spreadsheet, and you let a look-ahead bias creep in. Let's see what this bias could have led us to believe: Wow! That's pretty impressive. Recall what it really looked like, however: That's a bummer. We overstated performance by 100%! What went wrong? For anybody who's implemented anything in Excel, you'll know how easy it is to get references mixed up. In this particular case the look-ahead bias entered by trading the 12th month of our lookback period. That is, rather than trading the 13th month, using the prior 12 months' information, we traded the last month of the 12-month lookback period, even though we had already used that month to form our trading decision. It's like assuming you have a crystal ball! The really insidious thing here is how 12-month momentum and 1-month momentum are so strongly correlated! You would have assumed that it wouldn't make that much difference!
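For readers who want to see the trap concretely, here's a minimal pandas sketch (using a made-up random-walk price series, not the article's data) of the correct signal alignment versus the leaky one; the only difference is a single `shift(1)`:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly price series, purely to illustrate the indexing trap.
rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, 240))))
rets = prices.pct_change()

# 12-month momentum flag: 1 if price is above its level a year ago.
signal = (prices > prices.shift(12)).astype(int)

# Correct: next month's return is traded on a signal formed from past data only.
correct = signal.shift(1) * rets

# Look-ahead bug: the flag already contains this month's close, yet we book
# this month's return -- the "crystal ball" version.
leaky = signal * rets
```

The leaky version typically looks far better in a backtest, which is exactly why this bug survives so long in spreadsheets.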
Now obviously a situation like this can only arise when you backtest. Unfortunately, you only go live after you have a good backtesting result, and so it's not surprising that, given a look-ahead bias makes backtests look so nice, this error keeps rearing its head. Either check your tests, or walk your system forward to see that you are indeed testing realistic rules. In-Sample Statistics So how did we fall prey to the look-ahead bias in the previous article, and what is our saving grace? As was pointed out, we risk-adjusted our 12-month momentum strategy. Here it is again: And to do that, we had to measure the risk of the S&P 500 and that of the momentum strategy and scale up the momentum strategy appropriately. And if you recall from the previous article, we did that by measuring the standard deviation of both strategies. But HANG ON! How can we do that if, in 1994, we have no clue how these strategies will pan out over the next 23 years?! That's where we got clonked. The scaling factor turned out to be 1.3x, based upon these "forward looking" risk measures (which were nothing more than the standard deviations of the P&L streams of either strategy). A possible solution is to use an expanding window for our risk measure, meaning at each point in time we measure the risk from that point all the way back to 1994 (the start of our simulated trading). This actually ends up giving us a much lower risk multiplier, and hence a much lower performance for the risk-adjusted momentum strategy. Indeed, an issue. Resolution There is, however, a saving grace for us in all this, and it actually gives us even more confidence in the momentum strategy. Our saving grace is that the stock markets didn't just magically appear in 1994! If you recall from the first article in this series, Building Profitable Trading Systems, we had data (albeit synthetic) going all the way back to 1871. So, let's try this.
Let's estimate the relative risk from 1871 until 1994 for both the S&P 500 and its momentum-filtered counterpart. It magically turns out that the relationship of 1.3x is stable! The risks from 1871 until 1994 were nearly identical to those for the period 1994 to 2017! Now that is remarkable indeed. It means two things: Our initial analysis still stands. Going forward, we can rest assured that we don't have to fiddle too much with our risk multiplier, since it has been stable at the same value (at least to O(0.1)) for the last 150 years. What else does this do for us? It underscores the stability of our approach. Going back to gaining confidence from statistical measures, it implies that structurally markets have stayed very similar over modern times. And this gives us confidence to proceed with this strategy. Over-Leverage, the Second Bane This leads nicely to the second point that was raised: what is the optimal leverage I should use? It turns out that simply risk-adjusting the momentum strategy does leave a lot of money on the table. Accepted wisdom recommends using a leverage factor equal to the Kelly criterion. This Kelly factor is usually evaluated using statistical properties of the return series. In particular: \( \lambda = \frac{\mu}{\sigma^2} \) where \(\mu\) and \(\sigma\) are the average rate of return and the standard deviation of our momentum strategy. There are some caveats here. Naively plugging in our mean and standard deviation estimates using this formula gives an optimal leverage factor of 8.8. This leads to: which is a cataclysmic result. And furthermore, how on earth is this possible? The simple answer: over-leverage. Let's define over-leverage: the naïve assumption that we have all available, all possible, information at our disposal, combined with the over-powering desire to ride the ragged edge of disaster. Obviously, this is a fallacious assumption and attitude.
(You'll see towards the end how this leads to the notion of a stop loss.) So how could we rectify our example above? Let's wind the clock back a bit, and work out Kelly from first principles. Principles: The returns of our asset / trading strategy are normally distributed. We can safely ignore any risk terms coming from higher-order moments of the normal distribution. The way these assumptions feed into Kelly is visible from the structure of the formula: it only includes the mean and the standard deviation of the return distribution. So, this leads us to the conclusion that maybe our momentum strategy isn't as normal as we might assume, and might have riskier higher-order moments than a normal distribution. Let's check this. Kurtosis, \(\kappa=1.8\), and skewness, \(\gamma_1=-0.25\). It's got fat tails, which isn't surprising for a momentum strategy, and interestingly enough a negative skew. Now, this is interesting because on the one hand you'd expect positive skew for momentum strategies (viz. the MAN AHL article: Positive Point of Skew); however, for stocks the skewness of the strategy tends to be negative (cf. this great article: Skewness Enhancement), indicating that you can fall off a cliff. So how do we deal with this? Let's go back to the derivation of Kelly. If our wealth process evolves like \({\displaystyle V=V_0\prod_i{(1+\lambda X_i)}}\), where \(\lambda\) is our leverage and \(X_i\) is the return over a time period, we can find the optimal leverage, the Kelly leverage, by optimizing the expected growth rate with respect to our leverage factor.
Writing out the expected growth rate: \({\displaystyle g(\lambda)=\mathbb{E}\left(\log\frac{V}{V_0}\right)=\sum_i \mathbb{E} \log(1+\lambda X_i) }\) and taking into account that our returns are identically distributed over all time-steps (iid), we can Taylor expand \(g(\lambda)\) as: \(g(\lambda) = \lambda \mathbb{E} X - \frac{1}{2}\lambda^2 \mathbb{E}X^2 + \frac{1}{3}\lambda^3 \mathbb{E}X^3 - \frac{1}{4} \lambda^4 \mathbb{E} X^4\) The optimization means taking the derivative of \(g(\lambda)\) with respect to \(\lambda\) and setting it equal to zero: \(g'(\lambda)=\mathbb{E}X - \lambda\mathbb{E}X^2 + \lambda^2\mathbb{E}X^3 - \lambda^3\mathbb{E}X^4 = 0\) In the case where we assume kurtosis and skew to be zero, the Kelly leverage \(\lambda\) ends up being our usual suspect. However, we obtain a cubic equation if we include the higher-order moments. Thank goodness the Italians found solutions for this in the 16th century (del Ferro, Tartaglia, Cardano, and Bombelli). Working out the four moments for our case of the momentum strategy and using the algebraic solution for a cubic, we obtain an optimal leverage factor of 7.35. Now this looks good; it's lower than before, and it seems to have noticed our fat tails and negative skew. But is it good enough?? NO! We still fall off a cliff. And I don't even have to look at a chart. Do you want to know why? Because in August 1998 the Russians went boom, and the stock market took a big nose dive. Our momentum portfolio lost 14% that month. And 14% x 7.35 is bigger than 100%, meaning with this leverage we would have lost it all. What went wrong with our super-duper maths? Simple: that event was a massive outlier. The next biggest loss for the momentum strategy is at 7%. This means that the kurtosis is even bigger than we estimated from our sample. So what solutions do we have? There are two possibilities in our case: We cheat!
Meaning that since our strategy derives from the underlying market, why not use the skew and the kurtosis of the S&P 500? This would allow us to benefit from the increased returns and lower volatility of the momentum strategy, but still take into account the potential for big losses from the S&P 500! We follow the madding crowd and use half-Kelly instead. For option (1) we obtain a leverage of 6.0, which is not too far from the optimal leverage of 6.32 (if you numerically fiddle with the numbers). For option (2) we'd obtain 4.4. What would the difference in performance be? Option (1) yields an 11,800x return, and option (2) a return of 3,900x. I would say that's ballpark similar! Forgetting for the time being the ludicrously high numbers (not that they are ridiculous), the real question arises: even if we used the S&P 500's fatter tails, we haven't really guarded against a cataclysmic event in the future: we simply don't have a crystal ball! Coming back full circle, this is where ultimately the protective stop comes in. Protective Stops The message is, as always, the old one: you can't forecast the future, and hence you have to put a stop in to protect yourself. However, here is where it gets more refined. The stop I'm talking about is one that protects your capital from disaster. I'm not talking trailing stops, or stops set at some arbitrary point where a trade signal has been negated. The cows haven't come home yet on the subject of where the ideal trading stop is. However, with respect to disaster recovery, it's clear you need them. The position depends on your risk appetite. There are people out there who are quite happy to stomach a 90% loss. Do you belong in that camp? Recap In this article we covered some foundational details of developing trading systems, in particular the dangers of backtesting and over-leverage: The look-ahead bias can creep into your analysis in the most devious of ways.
Always be on the look-out, especially if your equity curve is too much of a straight line in your tests. Over-leverage is a killer when you least need it: in the worst-case scenario. It's the reason places like LTCM, Amaranth, and Peloton had to book the losses they did (as did Howie Hubler at Morgan Stanley, and with him the rest of the US housing market). So, decide on a level at which you will get out, just for the sake of staying alive (as long as it's not at -100%!). In addition, we had some numerical excursions that were quite fun, and that have led to some better ways of calculating the usual leverage ratios touted out there. If you want to incorporate these, here is the extension of the previous Python code:

import matplotlib.pyplot as plt
import numpy as np
import math
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf
yf.pdr_override()

def cubic(a, b, c, d):
    # One real root of a*x^3 + b*x^2 + c*x + d = 0 via Cardano's formula
    d0 = b**2 - 3*a*c
    d1 = 2*(b**3) - 9*a*b*c + 27*(a**2) * d
    C = ((d1 + math.sqrt(d1**2 - 4*(d0**3))) / 2) ** (1/3)
    return -1/(3*a) * (b + C + d0/C)

def optimal_leverage(spy_ret, mom_ret):
    a1 = (spy_ret**4).mean()
    b1 = (spy_ret**3).mean()
    a2 = (mom_ret**4).mean()
    b2 = (mom_ret**3).mean()
    c = (mom_ret**2).mean()
    d = (mom_ret**1).mean()
    kelly = d / c
    lev_mom = cubic(-a2, b2, -c, d)
    lev_mom_spy = cubic(-a1, b1, -c, d)
    return (kelly, lev_mom, lev_mom_spy)

if __name__ == "__main__":
    data = pdr.get_data_yahoo('SPY', start='1990-01-01', end='2017-10-02', interval="1mo")
    c = data[["Adj Close"]]
    c["spy"] = c["Adj Close"] / c["Adj Close"].iloc[0]
    c["rets"] = c["spy"] / c["spy"].shift(1) - 1
    c["flag"] = np.where(c["Adj Close"] > c["Adj Close"].shift(12), 1, 0)
    c["mom_ret"] = c["flag"].shift(1) * c["rets"]
    (kelly, lev_mom, lev_mom_spy) = optimal_leverage(c["rets"], c["mom_ret"])
    print("Kelly: {}, Optimal leverage for momentum strategy: {}, "
          "Optimal leverage for momentum strategy using SPY "
          "kurtosis and skew: {}".format(kelly, lev_mom, lev_mom_spy))
    std_spy = c["rets"].std()
    std_mom = c["mom_ret"].std()
    fac = std_spy / std_mom
    c["mom"] = (1 + c["mom_ret"]).cumprod()
    c["mom_lev"] = (1 + c["mom_ret"] * fac).cumprod()
    plt.plot(c.index, c["spy"], label="spy")
    plt.plot(c.index, c["mom"], label="spy 12m - mom")
    plt.plot(c.index, c["mom_lev"], label="spy 12m lev")
    plt.grid()
    plt.title("SPY vs SPY 12 Month Momentum vs SPY Momentum Leveraged",
              fontdict={'fontsize': 24, 'fontweight': 'bold'})
    plt.legend(prop={'size': 16})
    plt.show()

See You Next Time… Next time round, we'll be continuing with a technical variant of mean-reversion for equities which has proven profitable over the last 25 years, and which doesn't let up. (Double promise! No detour foreseen.) In particular, we'll look at various ways of understanding market regimes. I'll also utilize our analysis of Kelly betting to give you a taster of what portfolio construction entails, and how to go about extracting as much value as you can out of your very own simple momentum / mean-reversion equity portfolio. So, until next time, Happy Trading. If you have enjoyed this post, follow me on Twitter and sign up to my Newsletter for weekly updates on trading strategies and other market insights below!
Why does the following DFA have (to have) the state $b_4$? Shouldn't states $b_1,b_2,b_3$ already cover "exactly two 1s"? Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state? $b_4$ is what is called a trap state, that is, a state that exists just so that all possible transitions are explicitly represented, even those that do not lead to a final state. It doesn't change the language that is being defined, and can be omitted for the sake of brevity. $b_4$ exists to cover the entire alphabet ($\{0,1\}$, in this case) for each state. While this is not strictly necessary, it is a hot topic of discussion in the field. By showing the complete graph, it is more obvious that a third '1' in your input string permanently moves you out of the accept state $b_3$. The formal definition of a DFA is $M = (Q, \Sigma, \delta, q_0, F)$, where $Q$ is the finite set of states, $\Sigma$ is the alphabet, $\delta$ is the transition function, $q_0 \in Q$ is the start state, and $F \subseteq Q$ is the set of final states. Note that $\delta \colon Q \times \Sigma \to Q$ is specified to be a function, i.e., it has to be defined for all states and symbols. The graphical depiction of the DFA is complete in this sense with $b_4$. Often such dead states are just omitted for the sake of clarity of the diagram; the reader is surely capable of adding them if required. Answering your question, I have to say (sadly) that it depends. It depends on the definition of DFA that you are using, because there appears to be no consensus on a unique definition. For example, I use a definition of the DFA where $\delta$ is a function. The next question is: is $\delta$ a total function or a partial function? Personally, when I use the term function I am referring to total functions by default.
But someone may disagree with me. More importantly, when I studied the definition of a DFA, my teacher told me that $\delta$ is a total function. Summarizing: I use a particular definition of a DFA where the $b_4$ state has to exist. I can skip drawing it out of laziness or for clarity, but I know it exists. Finally, to answer your question more precisely, we have to know what definition of DFA you use. Wouldn't state $b_4$ mean "more than two 1s", even if it doesn't trigger an accept state? The state $b_4$ means that if a word $\sigma$ has more than two "1"s, it will never reach an accepting state, so $\sigma\notin L = \{w\,|\, w \text{ contains exactly two ones}\}$.
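Since the original diagram isn't reproduced here, the following Python sketch assumes the natural state assignment ($b_1$: no 1s seen yet, $b_2$: one 1, $b_3$: two 1s, accepting, $b_4$: trap). It shows why a total $\delta$ forces $b_4$ to exist: every (state, symbol) pair needs an entry.

```python
# The transition function delta as a total function: every (state, symbol)
# pair is mapped somewhere, which is exactly why b4 must exist.
DELTA = {
    ("b1", "0"): "b1", ("b1", "1"): "b2",
    ("b2", "0"): "b2", ("b2", "1"): "b3",
    ("b3", "0"): "b3", ("b3", "1"): "b4",  # a third 1 falls into the trap
    ("b4", "0"): "b4", ("b4", "1"): "b4",  # trap state: no way back out
}
START, ACCEPT = "b1", {"b3"}

def accepts(word: str) -> bool:
    """Run the DFA on a word over {0, 1}."""
    state = START
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state in ACCEPT
```

Dropping the two $b_4$ rows would leave the language unchanged, but running the machine on a word with three 1s would then hit a missing transition instead of quietly rejecting: $\delta$ would no longer be total.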
In school, we learn that sin is "opposite over hypotenuse" and cos is "adjacent over hypotenuse". Later on, we learn the power series definitions of sin and cos. How can one prove that these two definitions are equivalent? Most of the proofs in elementary calculus textbooks use the definition of $\sin x$ via geometry to prove that the derivative of $\sin x$ is $\cos x$ (namely, the fact that $\lim_{x \to 0} \frac{ \sin x}{x} = 1$). Consequently, it follows that $\sin x$ and $\cos x$ are the two linearly independent solutions of $y'' = -y$. The power series are also two linearly independent solutions of this differential equation. Moreover, $\sin x$ and its derivative agree at zero with the power series for $\sin x$ and its derivative (no surprise: it's a Taylor series). The same goes for $\cos x$. By uniqueness of solutions to ordinary differential equations, this proves that $\sin x$ and $\cos x$ as defined in school are equal to their power series. (This is an expansion of Qiaochu's comment.) If you allow yourself a tiny bit of calculus ("$\sin x / x \to 1$" as "$x \to 0$") and apply some combinatorics, there's a really nice geometric interpretation of the terms of the power series for the functions. Consider this diagram and the polygonal "spiral" that starts at $P_0$ and closes in on the point $P$ (where $|P_0 P| = 1$). The horizontal segments $P_{2n} P_{2n+1}$ alternately overshoot and undershoot the length of the cosine segment; the vertical segments $P_{2n+1} P_{2n+2}$ do the same for the sine segment.
So, $\cos \theta = \sum_{n=0}^{\infty}(-1)^n | P_{2n} P_{2n+1} |$ $\sin \theta = \sum_{n=0}^{\infty} (-1)^n | P_{2n+1} P_{2n+2} |$ Now, the lengths $|P_{k} P_{k+1}|$ are equal to the lengths of the curves $|I_k|$, which constitute a series of successive involutes (with $I_0$ defined to be a segment, and $I_1$ defined to be an arc of the unit circle). Combinatorics and the calculus result I mentioned show that the involute lengths satisfy ... $|I_k| = \theta^k / k!$ ... so that the above are, in fact, power series. Interestingly, the same thing can be done with secant and tangent, using an involute zig-zag: where $\sec \theta = \sum_{n=0}^{\infty} | P_{2n} P_{2n+1} | = \sum_{n=0}^{\infty} | I_{2n} |$ $\tan \theta = \sum_{n=0}^{\infty} | P_{2n+1} P_{2n+2} | = \sum_{n=0}^{\infty} | I_{2n+1} |$ and the lengths $|I_k|$ turn out to be the appropriate multiples of powers of $\theta^k$. Reference to the argument for sine and cosine (attributed to Y. S. Chaikovsky, as reported by Leo Gurin), a complete discussion of the trickier argument for secant and tangent, and then a refinement of the argument for sine and cosine, are in my note "Zig-Zag Involutes, Up-Down Permutations, and Secant and Tangent" (PDF). BTW: I have not (yet) cracked the case for cosecant and cotangent. Robison, "A new approach to circular functions, π, and lim sin(x)/x", Math. Mag. 41.2 (March 1968), 66–70 [jstor]. In this paper it is shown that the addition law for cosine (and a couple other simple assumptions) uniquely determines cosine and sine. So then it's enough to prove geometrically that the high school functions satisfy that law (this is essentially Ptolemy's theorem), and prove that the power series functions satisfy it (using the binomial theorem and such manipulations). There is another proof that the derivative of sine is cosine that doesn't use the sandwich theorem mentioned by Qiaochu and Akhil above. 
Instead, one can use the definition of arcsine and the standard calculus formula for arc length in terms of an integral to show that $\arcsin x = \int_0^x (1 - t^2)^{-1/2}\,dt$. It follows that the derivative of arcsine is $(1 - x^2)^{-1/2}$, and (by the chain rule) one can use this fact to prove that the derivative of sine is cosine. In fact, I'm not sure why this proof is presented less frequently than the one via the sandwich theorem. The unit circle definition of sine is based on arc length, and in calculus we learn a formula for arc length based on integration. Why not connect these two concepts for a natural proof that the derivative of sine is cosine? As a rough outline, the circular definitions of sine and cosine (the y- and x-coordinates of the image of (1,0) under a rotation about the origin) lead to being able to differentiate sine and cosine, and once you know how to differentiate them (infinitely), Taylor's Theorem justifies that the power series is equal to the function. Call the high-school functions (defined by the right triangle inscribed in a unit circle, the angle being equal to the length of the arc of the circle) $\sin_h$ and $\cos_h$, and let $\sin_p$ and $\cos_p$ be the power series definitions. (Note that these functions are continuous and agree at the end points $0$ and $2\pi$.) Since $\sin_p^2(\theta)+\cos_p^2(\theta)=1$, the power series definitions also form a right triangle. Hence $\sin_h = \sin_p \circ \gamma$ and $\cos_h = \cos_p \circ \gamma$ for some parameterization $\gamma$. We know the power series definitions satisfy the arc length criterion, so $\gamma$ must be the identity function. To do this rigorously you have essentially no choice as to the logic of the proof. The first definition constructs (sin,cos) as a pair of functions on the unit circle X^2 + Y^2 = 1. The functions are just Y and X respectively. The second definition constructs another pair (Sin,Cos), of functions on the real line. The functions are specific power series.
To show that these pairs are "equal" has a unique meaning: to construct some local identification of the two spaces (a locally invertible parametrization of the circle by the line, and vice versa, i.e., a covering map) such that under this identification (sin,cos) corresponds to (Sin,Cos). This identification is itself unique: the rotationally invariant angle measure in the plane (Haar measure on SO(2,R) in its conventional normalization to have total length 2*Pi). If angle measure is already available by other means, such as length measurement on the circle, one gets a second construction of (sin,cos) through their inverse functions (using integrals) and one then has to check that arcsin^{-1} = Sin where the function on the left is an integral and the right is a power series. Otherwise we have only the map in one direction, (Cos(t),Sin(t)) from the line to the circle using power series, and one has to check that Cos^2(t) + Sin^2(t) = 1 and the local invertibility (everywhere nonzero velocity vector) of the map. This would define angle measure on the circle as "t", ie., the inverse of the power-series parametrization of the circle by the line. (Edited to remove Tex, click on the edit-history to see it with the TeX.) Akhil Mathew's answer is definitely the most beautiful and brief one, but here I have a different approach which I think will be the most effective to VISUALIZE the entire stuff. Using the power series definitions of the trigonometric functions, we can arrive at the following result: $$\frac{d}{dx}arccos(x)=-\frac{1}{\sqrt{1-x^2}}$$ Where $\mid x\mid\leqslant1$. 
Now we introduce a 2D cartesian co-ordinate system and plot $y=\sqrt{1-x^2}$ [it looks like the unit circle equation, but note that we're not taking $\pm\sqrt{1-x^2}$, and hence the figure we'll get will be rather a semicircle covering the 1st and the 2nd quadrants only; otherwise we'd have a great problem with the multi-valued function. Further, we take $\mid x\mid\leqslant1$ to avoid complex plotting]. Suppose a ray $\vec{OB}$ from the origin cuts the circumference at $B(x,\sqrt{1-x^2})$. Now we drop a perpendicular $BD$ from $B$ to the $X$ axis. Clearly, $D$ is at $(x,0)$. Using the formula for arc length, we can say that the length of the arc $BCA$ equals $$\int^1_x\sqrt{1+\left(\frac{d\sqrt{1-t^2}}{dt}\right)^2}dt$$$$=\int^1_x\frac{1}{\sqrt{1-t^2}}dt$$$$=[-\arccos(t)]^1_x$$$$=\arccos(x)$$ Now, by the definition of angle (in radians, of course), we have $s=r\theta$. Hence $$\theta=\angle BOA=\frac{s}{r}=\frac{\text{arc}\,BCA}{\text{radius}\,OB}=\frac{\arccos(x)}{1}=\arccos(x)$$$$\therefore \cos\angle BOA=x=\frac{x}{1}=\frac{OD}{OB}$$ And $$\sin\angle BOA=\sqrt{1-\cos^2\angle BOA}=\sqrt{1-\frac{OD^2}{OB^2}}=\frac{BD}{OB}$$ Isn't this what you wanted! N.B. We have deduced the interrelation of the two definitions for the first two quadrants only. If we go for the 3rd and 4th, we won't be successful, as the function $\arccos$ doesn't have any image to represent angles of those quadrants. You can generalize the above for all quadrants and also for angles greater than $2\pi$ using results (obtained from the power series definitions) like $\cos(\pi+\theta)=-\cos\theta$, $\cos(-\theta)=\cos\theta$, $\cos(2\pi+\theta)=\cos\theta$, etc. This was originally part of an answer HERE. I remember my math teacher in high school (1968) putting (something like) this on the board and saying that it was a very old proof. I've tried finding it but I haven't had any luck. It is by no means a rigorous proof. But I think it is very interesting nonetheless. Let's assume that n is an odd number.
Then $(\sin \theta + i \cos \theta)^{n} = \sum_{k=0}^{n} \binom{n}k (\sin \theta)^k (i \cos \theta)^{n-k}$ $\sin(n \theta) + i \cos (n \theta) = \sum_{k=0}^{n} \binom{n}k (\sin \theta)^k (i \cos \theta)^{n-k}$ $\displaystyle \sin(n \theta) = \sum_{k=0}^{(n-1)/2} \binom{n}{2k+1} (\sin \theta)^{2k+1} (-1)^k (\cos \theta)^{n-2k-1}$ Let $\theta \to 0$ in such a way that $n \theta \to x$ at the same time. This can be done by letting $\theta = \dfrac{x}{n}$ and then letting $n \to \infty$. Now we examine $\binom{n}{2k+1} (\sin \theta)^{2k+1}$ as $n \to \infty$. \begin{align*} \binom{n}{2k+1} (\sin \theta)^{2k+1} &= \dfrac{(n)(n-1)(n-2)\cdots (n-2k)}{(2k+1)(2k)(2k-1) \cdots (1)} (\sin \theta)^{2k+1}\\ &= \dfrac{(n) \sin \theta}{2k+1}\; \dfrac{(n-1) \sin \theta}{2k}\; \dfrac{(n-2) \sin \theta}{2k-1}\;\cdots \dfrac{(n - 2k) \sin \theta}{1}\\ &\to \dfrac{(n) \theta}{2k+1}\; \dfrac{(n-1) \theta}{2k}\; \dfrac{(n-2) \theta}{2k-1}\;\cdots \dfrac{(n - 2k) \theta}{1}\\ &\to \dfrac{x}{2k+1}\; \dfrac{x-\theta}{2k}\; \dfrac{x-2\theta}{2k-1}\;\cdots \dfrac{x - 2k \theta}{1}\\ &\to \dfrac{x^{2k+1}}{(2k+1)!} \end{align*} Since $\cos \theta \to 1$ as $\theta \to 0$, letting $n \to \infty$, we get $$\displaystyle \sin(x) = \sum_{k=0}^{\infty} (-1)^k \dfrac{x^{2k+1}}{(2k+1)!}$$
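As a quick numerical sanity check (in Python; not part of the original argument), the series obtained in the limit does agree with the usual sine function:

```python
import math

def sin_series(x, terms=20):
    """Partial sum of sum_k (-1)^k x^(2k+1) / (2k+1)!, the series derived above."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# compare against the library sine at a few points
for x in (0.1, 1.0, 2.5):
    assert abs(sin_series(x) - math.sin(x)) < 1e-12
```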
I am told that f has a continuous derivative and that $a \leq f(x) \leq b$ and $|f'(x)| < 1 \ \forall x \in [a,b]$ and I have to show that $f$ is a contraction. Now if I take any $x,y \in [a,b]$, the Mean-Value Theorem says that $\exists c \in (a,b)$ such that $$|f(x) - f(y)| = |f'(c)| |x-y|$$ and so clearly this is a contraction. However I haven't used the condition that $f$ has a continuous derivative or that $f$ is bounded by $a$ and $b$, why are these conditions necessary? My definition of contractive is that $|g(x) − g(y)| \leq a|x − y|$ for some real value $0 \leq a < 1$ and for all $x$, $y \in [a,b]$. Thanks
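To see where continuity of $f'$ enters, note that the pointwise bound $|f'(x)|<1$ alone does not hand you a single constant $a<1$ valid for all $x$; compactness plus continuity does. A sketch of the missing step (my wording, not the OP's):

```latex
% f' is continuous on the compact set [a,b], so |f'| attains its supremum:
% there exists c_0 \in [a,b] with
\alpha := \sup_{x\in[a,b]} |f'(x)| = |f'(c_0)| < 1 .
% The Mean Value Theorem then gives, for all x, y \in [a,b],
|f(x)-f(y)| = |f'(c)|\,|x-y| \le \alpha\,|x-y| ,
% so f is a contraction with constant \alpha. Without compactness or
% continuity the supremum could equal 1 even though |f'(x)| < 1 at every
% point, and no single contraction constant would exist. The condition
% a \le f(x) \le b makes f map [a,b] into itself, which is what the
% Banach fixed point theorem additionally needs.
```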
Description The [POPULATION] section is used to define a distribution for the population parameters. Scope The [POPULATION] section is used in Mlxtran models for simulation with Simulx. It allows one to take into account variability or uncertainty in population parameters. Mlxtran models for Monolix or Mlxplore do not need this section: in Monolix, population parameters are estimated, and in Mlxplore they must be given a single value. Inputs The inputs for the [POPULATION] section are the parameters that are declared in the input = { } list of the [POPULATION] section. These parameters are the typical values and standard deviations for the distribution of the population parameters. When the distribution represents the uncertainty of the population parameters, the typical value is the estimated value and the standard deviation is the standard error estimated by Monolix. Outputs There is no explicit output list. Every population parameter that has been defined in the [POPULATION] section can be an output. Outputs from the [POPULATION] section are inputs for the [LONGITUDINAL], [COVARIATE] or [INDIVIDUAL] sections. Output to input matching is made by matching the names of the random variables defined in the [POPULATION] section with the parameters appearing in the input = { } list of other sections. Usage The definition of a probability distribution for a population parameter is done with the EQUATION: and DEFINITION: blocks. The EQUATION: block contains mathematical equations (to do transformations, for instance) and the DEFINITION: block is used to define probability distributions.
The following syntax applies to define a probability distribution for the random variable X: DEFINITION: X = {distribution= distributionType, parameter1 = Val1, parameter2 = Val2} The arguments to the probability distribution definition are distributionType: is one of the following reserved keywords: normal, lognormal, logitnormal or probitnormal parameter1: is one of the following reserved keywords: mean or typical parameter2: is one of the following reserved keywords: sd or var Val1: is a double number or a parameter Val2: is a double number or a parameter The reserved keywords meanings are normal: normal distribution: \(X \sim \mathcal{N}(\mu, \sigma^2)\) lognormal: log-normal distribution: \(\log(X) \sim \mathcal{N}(\mu, \sigma^2)\) logitnormal: logit-normal distribution: \(\log\big(\frac{X}{1-X}\big) \sim \mathcal{N}(\mu, \sigma^2)\) probitnormal: probit-normal distribution: \(\Phi^{-1}(X) \sim \mathcal{N}(\mu, \sigma^2)\), where \(\Phi\) is the cumulative distribution function of the standard normal distribution. mean: is the mean \(\mu\) of the associated normal distribution typical: is the transformed mean sd: is the standard deviation \(\sigma\) of the associated normal distribution. If no variability is required, the user can set sd="no-variability". var: is the variance \(\sigma^2\) of the associated normal distribution These probability distributions are all Gaussian probability distributions that are defined through the existence of a monotonic transformation \(h\) such that \(h(X)\) is normally distributed. Notice that the mean, standard deviation, and variance arguments refer to the normally distributed variable \(h(X)\). In pharmacometrics it is more common to use the typical value of the distribution. This is achieved by using the keyword typical instead of mean in the definition of the random variable. The relationship between the mean value and the typical value is the following: \(\text{typical} = h^{-1}(\text{mean})\), where \(h\) is the transformation above (for instance \(h = \log\) for a log-normal distribution, so typical \(= e^{\text{mean}}\)). Thus, typical is in the variable referential, while mean is in the transformed referential. Example: model taking into account population parameter uncertainty The following model takes into account uncertainty on all population parameters.
Each population parameter is defined in [POPULATION] as a log-normal distribution, except the parameters of the error model which are defined with normal distributions.

[LONGITUDINAL]
input = {ka, V, Cl, a}

EQUATION:
; PK model definition
Cc = pkmodel(ka, V, Cl)

DEFINITION:
Concentration = {distribution=normal, prediction=Cc, errorModel=constant(a)}

;----------------------------------------------
[INDIVIDUAL]
input = {ka_pop, V_pop, Cl_pop, omega_ka, omega_V, omega_Cl}

DEFINITION:
ka = {distribution=logNormal, typical=ka_pop, sd=omega_ka}
V  = {distribution=logNormal, typical=V_pop,  sd=omega_V}
Cl = {distribution=logNormal, typical=Cl_pop, sd=omega_Cl}

;----------------------------------------------
[POPULATION]
input = {mean_V, sd_V, mean_ka, sd_ka, mean_Cl, sd_Cl, mean_omega_V, sd_omega_V, mean_omega_ka, sd_omega_ka, mean_omega_Cl, sd_omega_Cl, mean_a, sd_a}

EQUATION:
mean_log_V        = log(mean_V)
mean_log_ka       = log(mean_ka)
mean_log_Cl       = log(mean_Cl)
mean_log_omega_V  = log(mean_omega_V)
mean_log_omega_ka = log(mean_omega_ka)
mean_log_omega_Cl = log(mean_omega_Cl)
sd_log_V        = sqrt(log(1+(sd_V/mean_V)^2))
sd_log_ka       = sqrt(log(1+(sd_ka/mean_ka)^2))
sd_log_Cl       = sqrt(log(1+(sd_Cl/mean_Cl)^2))
sd_log_omega_V  = sqrt(log(1+(sd_omega_V/mean_omega_V)^2))
sd_log_omega_ka = sqrt(log(1+(sd_omega_ka/mean_omega_ka)^2))
sd_log_omega_Cl = sqrt(log(1+(sd_omega_Cl/mean_omega_Cl)^2))

DEFINITION:
V_pop    = {distribution = lognormal, mean = mean_log_V,        sd = sd_log_V}
ka_pop   = {distribution = lognormal, mean = mean_log_ka,       sd = sd_log_ka}
Cl_pop   = {distribution = lognormal, mean = mean_log_Cl,       sd = sd_log_Cl}
omega_V  = {distribution = lognormal, mean = mean_log_omega_V,  sd = sd_log_omega_V}
omega_ka = {distribution = lognormal, mean = mean_log_omega_ka, sd = sd_log_omega_ka}
omega_Cl = {distribution = lognormal, mean = mean_log_omega_Cl, sd = sd_log_omega_Cl}
a        = {distribution = normal,    mean = mean_a,            sd = sd_a}

The distribution for each parameter is defined with the mean and standard
deviation of the associated normal variable. For example, \(V_{pop}\) depends on two variables called here mean_log_V and sd_log_V, which are the mean and the standard deviation of the log-transformation of V: \(\log(V_{pop}) \sim \mathcal{N}(\text{mean_log_V},\text{sd_log_V}^2)\) mean_log_V and sd_log_V can be replaced in the inputs by the mean and standard deviation of \(V_{pop}\), called here mean_V and sd_V. Values for mean_V and sd_V can come for example from the estimated value and estimated standard error found for \(V_{pop}\) by Monolix. The following transformations are then defined in the block EQUATION: \(\text{mean_log_V} = \log(\text{mean_V}) \\ \text{sd_log_V} = \sqrt{\log(1+(\frac{\text{sd_V}}{\text{mean_V}})^2)} \) Alternatively, the transformation of mean_V into mean_log_V could be skipped and the distribution of \(V_{pop}\) in the block DEFINITION: could be written with typical instead of mean: V_pop = {distribution = lognormal, typical = mean_V, sd = sd_log_V}
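The mean/sd transformation used in the EQUATION: block above can be sketched in plain Python (illustrative only; the numeric values below are hypothetical, standing in for a Monolix estimate and its standard error):

```python
import math

def lognormal_population_params(mean, sd):
    """Transform the mean and sd of a log-normally distributed population
    parameter into the mean and sd of its log (the associated normal
    variable), following the EQUATION: block above."""
    mean_log = math.log(mean)
    sd_log = math.sqrt(math.log(1.0 + (sd / mean) ** 2))
    return mean_log, sd_log

# hypothetical estimate and standard error for V_pop
mean_log_V, sd_log_V = lognormal_population_params(8.0, 1.5)
```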
There are several ways to prove that $SEQ_{DFA}$ is decidable. One of them is by using the fact that $EQ_{DFA}$ is decidable. Assume that $L_1$ and $L_2$ are languages such that $L_1\leq_m L_2$ (that is, there is a mapping reduction from $L_1$ to $L_2$). It holds that if $L_2$ is decidable, then so is $L_1$. Hence, in order to prove that $SEQ_{DFA}$ is decidable, we reduce it to $EQ_{DFA}$. As hinted in the comment, we propose the following reduction. The Reduction: Let $C$ be a fixed DFA for the empty language, that is, $L(C) = \emptyset$. The reduction operates as follows. Given an instance $\langle A= \langle Q_{A}, \Sigma, q^{A}_0, \delta_{A}, F_{A}\rangle, B= \langle Q_{B}, \Sigma, q^{B}_0, \delta_{B}, F_{B}\rangle\rangle$ of $SEQ_{DFA}$, the reduction computes a product automaton, $D$, for $L(A)\cap L(B)^C$ and outputs $\langle D, C\rangle$. Correctness: $L(A)\subseteq L(B)$ iff $L(A)\cap L(B)^C = \emptyset$ iff $L(D) = \emptyset$ iff $L(D) = L(C)$ (the first equivalence mentioned in the comment is simple enough and thus is left for the reader). Computability: the only non-trivial part is to compute $D$ given $A$ and $B$. We claim that $D$ is defined by $D = \langle Q_{A}\times Q_{B}, \Sigma, (q^{A}_0, q^{B}_0), \delta_{D}, F_{D}\rangle$, where: 1) For every $(q, s)\in Q_{A}\times Q_{B}$ and $\sigma \in \Sigma$, $\delta_{D}$ is defined by: $$\delta_{D}((q, s),\sigma) = (\delta_{A}(q, \sigma), \delta_{B}(s, \sigma))$$ 2) $F_{D} = F_{A}\times (Q_{B}\setminus F_{B})$. Note that $D$ is a standard product automaton and thus for every finite word $w\in \Sigma^*$, we have that $\delta_{D}((q, s), w) = (\delta_{A}(q, w), \delta_{B}(s, w))$ (this can be proven by induction on $|w|$). Therefore, $w\in L(D)$ iff $\delta_{D}((q^{A}_0, q^{B}_0), w) \in F_{D}$ iff $(\delta_{A}(q^{A}_0, w),\delta_{B}(q^{B}_0, w) ) \in F_{A}\times (Q_{B}\setminus F_{B})$ iff $w\in L(A)$ and $w\in L(B)^C$ iff $w\in L(A)\cap L(B)^C$. Therefore $L(D) = L(A)\cap L(B)^C$ and thus we're done.
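A small Python sketch of the computability step (the reachable part of the product automaton plus an emptiness test; the two example DFAs are mine, not from the original):

```python
from collections import deque

def is_subset(delta_a, qa0, Fa, delta_b, qb0, Fb, alphabet):
    """Decide L(A) subset of L(B) via the reduction above: build the product
    DFA D for L(A) ∩ L(B)^C on the fly and check whether L(D) is empty.
    Transitions are dicts mapping (state, symbol) -> state."""
    start = (qa0, qb0)
    seen, queue = {start}, deque([start])
    while queue:
        q, s = queue.popleft()
        if q in Fa and s not in Fb:      # reachable state of F_D: L(D) nonempty
            return False
        for c in alphabet:
            nxt = (delta_a[(q, c)], delta_b[(s, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                          # no state of F_D reachable: L(D) empty

# A: strings ending in 'a';  B: strings containing 'a'
delta_a = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
delta_b = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}
assert is_subset(delta_a, 0, {1}, delta_b, 0, {1}, 'ab')      # ends-in-a ⊆ contains-a
assert not is_subset(delta_b, 0, {1}, delta_a, 0, {1}, 'ab')  # the converse fails
```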
I am very new to Mathematica and also new to this community. I have a problem given below. How do I express this in Mathematica? $H=[2,4,10,5,12,34,42,\cdots 12]$ is a vector of size $N$. $h_n$ is the $n$th element of $H$. For example, $h_3=10$ and $h_N=12$. $F(\theta)=h_{N/2+1}+2\sum_{l=1}^{N/2-1}h_{N/2-l}\cos (\theta l)$ I want to calculate $\int_{0.234}^{432}(F(\theta))^2 \, d\theta$ Any help will be appreciated!
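Not Mathematica, but here is a plain-Python sketch of the same computation (the example H and the quadrature step count are my own choices), which may help when translating to Mathematica:

```python
import math

H = [2, 4, 10, 5, 12, 34, 42, 12]   # example vector; N must be even
N = len(H)
h = lambda n: H[n - 1]              # 1-based indexing, as in the question

def F(theta):
    return h(N // 2 + 1) + 2 * sum(h(N // 2 - l) * math.cos(theta * l)
                                   for l in range(1, N // 2))

def integral_F_squared(a, b, steps=20000):
    """Composite Simpson's rule for the integral of F(theta)^2 over [a, b]."""
    dt = (b - a) / steps
    s = F(a) ** 2 + F(b) ** 2
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * F(a + i * dt) ** 2
    return s * dt / 3

value = integral_F_squared(0.234, 432)
```

In Mathematica, the analogous definition would use Sum for F and NIntegrate[F[t]^2, {t, 0.234, 432}] for the integral.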
To determine the volatility of a basket of stocks, I often use the following formula: $\sigma_{basket}^2=\sum_{i}\sum_{j}w_i w_j \sigma_i \sigma_j \rho_{ij}$ where the $\sigma$ are the constituents' volatilities and the $\rho_{ij}$ are the pair correlation coefficients of log-returns. Now, to simulate future prices, one might choose to start out with the initial basket value and then use $\sigma_{basket}$ to determine the distribution of future prices. The issue I have with this is the following: the sum of lognormally distributed random variables is not a lognormally distributed random variable. So while I assume that my naive approach may yield reasonable results, it is probably not entirely correct. Does anyone have a view on how good/bad of an approximation it is / where this breaks down / what workarounds there are? I am vaguely aware that there is a result by Brigo on this very topic, but I have not found it. Can anyone help with this please? Much appreciated ...
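The double sum gives the basket variance, so the volatility is its square root. A minimal Python sketch (the weights, vols, and correlation matrix are illustrative values of mine):

```python
import math

def basket_vol(w, vols, corr):
    """Basket volatility: sqrt(sum_ij w_i w_j sigma_i sigma_j rho_ij)."""
    n = len(w)
    var = sum(w[i] * w[j] * vols[i] * vols[j] * corr[i][j]
              for i in range(n) for j in range(n))
    return math.sqrt(var)

w = [0.5, 0.5]
vols = [0.20, 0.30]
corr = [[1.0, 0.4], [0.4, 1.0]]
sigma_b = basket_vol(w, vols, corr)
# with rho < 1 the basket vol is below the weighted average of the vols
assert sigma_b < 0.5 * (0.20 + 0.30)
```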
I want to numerically solve the diffusion equation $\partial_t u = D \partial_x^2 u$ in Fourier space, and can think of multiple ways to do it. Setup Option 1 Differentiating $u$ twice in Fourier space brings down two factors of $ik$ from the exponential $e^{i k x}$. $$\partial_t \tilde{u}_k = D (ik)^2 \tilde{u}_k$$ which can be discretised, for example, like: $$\frac{\tilde{u}^{n+1} - \tilde{u}^n}{\Delta t} = -Dk^2\left(\frac{\tilde{u}^{n+1} + \tilde{u}^{n}}{2} \right)$$ Option 2 Taking the second order finite difference equation in real space: $$\partial_x^2 u \approx \frac{u_{i+1} - 2 u_i + u_{i-1}}{\Delta x^2}$$ where $u_{i+1}$ and $u_i$ are $\Delta x $ apart. Fourier transforming this, remembering that a phase shift of $\Delta x$ is given by $e^{i k \Delta x}$, we get: $$\partial_x^2 \tilde{u}_k \approx \frac{e^{+i k \Delta x} -2 + e^{-i k \Delta x} }{\Delta x^2} \tilde{u}_k = 2 \frac{\cos(\Delta x k) - 1}{\Delta x^2} \tilde{u}_k\qquad(1)$$ Applying again to the diffusion equation: $$\frac{\tilde{u}^{n+1}_k - \tilde{u}^n_k}{\Delta t} = D\frac{2 (\cos(\Delta x k)-1)}{\Delta x^2} \left(\frac{\tilde{u}^{n+1}_k + \tilde{u}^{n}_k}{2} \right)$$ I came across Option 2 in a few places, for example, in this paper. Question Which of these (if either) should I be using, and why? I'm guessing the discrete nature of the FFT is important, as the approaches are equivalent as $\Delta x \to 0$, seen by Taylor expanding Eq. (1): $$2 \frac{\cos(\Delta x k) - 1}{\Delta x^2} = 2 \frac{1 - \frac{(\Delta x k)^2}{2} + \mathcal{O}(\Delta x^4) - 1}{\Delta x^2} = -k^2 + \mathcal{O}(\Delta x^2)$$ Any help is greatly appreciated! Update Thinking more about this, the properties of the DFT are actually important. The maximum $k$ is $k_{max} = \frac{2\pi}{L}\cdot\frac{N}{2} = \frac{\pi}{\Delta x}$ where $\Delta x = L / N$. So as $\Delta x \to 0$, $k_{max} \to \infty$. Therefore the two methods are different for large $k$ at any $\Delta x$: This looks like the latter option is applying some smoothing at high $k$.
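For what it's worth, a small Python check (with arbitrary parameters of my choosing; no FFT needed, since each mode evolves independently) showing that both symbols give stable Crank-Nicolson factors while differing strongly near the Nyquist mode:

```python
import math

def cn_factor(lam, D, dt):
    """Crank-Nicolson amplification factor for du/dt = D * lam * u."""
    return (1 + 0.5 * dt * D * lam) / (1 - 0.5 * dt * D * lam)

D, dt = 1.0, 1e-3
L, N = 2 * math.pi, 64
dx = L / N
for n in range(N // 2 + 1):
    k = 2 * math.pi * n / L
    lam1 = -k ** 2                                # Option 1: exact symbol
    lam2 = 2 * (math.cos(k * dx) - 1) / dx ** 2   # Option 2: finite-difference symbol
    # both symbols are <= 0, so both schemes damp every mode (|factor| <= 1)
    assert abs(cn_factor(lam1, D, dt)) <= 1.0
    assert abs(cn_factor(lam2, D, dt)) <= 1.0

# near the Nyquist mode the exact symbol is more than twice as negative:
# the finite-difference scheme damps high k much more weakly ("smoothing")
k_nyq = math.pi / dx
assert -k_nyq ** 2 < 2 * (2 * (math.cos(k_nyq * dx) - 1) / dx ** 2)
```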
$\arctan(x)$ is a rational multiple of $\pi$ if and only if the complex number $1+xi$ has the property that $(1+xi)^n$ is a real number for some positive integer $n$. It is fairly easy to show this isn't possible if $x$ is an integer with $|x|>1$. This result essentially falls out of the fact that $\mathbb Z[i]$ is a UFD, and the fact that the only primes in $\mathbb Z[i]$ which divide their conjugates are the rational primes $\equiv 3\pmod 4$ and the unit multiples of $1+i$. You can actually generalize this for all rationals, $|x|\neq 1$, by noting that $(q+pi)^n$ cannot be real for any $n$ if $(q,p)=1$ and $|qp|> 1$. So $\arctan(\frac{p}q)$ cannot be a rational multiple of $\pi$. Fuller proof: If $q+pi=z\in \mathbb Z[i]$, and $z^n$ is real, with $(p,q)=1$, then if $z=u\pi_1^{\alpha_1} \cdots \pi_r^{\alpha_r}$ is the Gaussian integer prime factorization of $z$ (with $u$ some unit), $z^n = u^n \pi_1^{n\alpha_1}\cdots\pi_r^{n\alpha_r}$. But if a Gaussian prime $\pi_i$ is a factor of a rational integer, $z^n$, then the conjugate, $\bar{\pi}_i$, must also be a factor of $z^n$, and hence must be a factor of $z$. But if $\pi_i$ and $\bar{\pi}_i$ are relatively prime, that means $\pi_i\bar{\pi}_i=N(\pi_i)$ must divide $z$, which means that $N(\pi_i)$ must divide $p$ and $q$, so $p$ and $q$ would not be relatively prime. So the only primes which can divide $q+pi$ are the primes which divide their own conjugates. But the only such primes are the rational integers $\equiv 3\pmod 4$, and $\pm1\pm i$. The rational integers are not allowed, since, again, that would mean that $(p,q)\neq 1$, so the only prime factors of $z$ can be $1+i$ (or its unit multiples). Since $(1+i)^2 = 2i$, $z$ can have at most one factor of $1+i$, so that means, finally, that $z\in\{\pm 1 \pm i, \pm 1, \pm i\}$. But then $|pq|=0$ or $|pq|=1$.
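The criterion can be explored computationally with exact Gaussian-integer arithmetic (a finite check only, of course, not a proof):

```python
def power_is_real(q, p, max_n=60):
    """Exact check whether (q + p*i)^n is real for some 1 <= n <= max_n,
    tracking the real and imaginary parts as Python integers."""
    a, b = 1, 0                       # a + b*i = (q + p*i)^0
    for _ in range(max_n):
        a, b = a * q - b * p, a * p + b * q   # multiply by q + p*i
        if b == 0:
            return True
    return False

# arctan(1) = pi/4: (1 + i)^4 = -4 is real
assert power_is_real(1, 1)
# arctan(2/1): no power of 1 + 2i is real, consistent with the |qp| > 1 case above
assert not power_is_real(1, 2)
```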
I'm trying to find the integral of $y = x\sin^2 (x^2)$. Can someone help please? I've tried converting it to $x(\frac{1}{2} -\frac{1}{2}\cos(2x^2))$ and using integration by parts but it doesn't seem to help. Thanks. Hint: First let $u=x^{2}$ and notice that $\sin^{2}(u)=\frac{1-\cos(2u)}{2}$ $x\left(\dfrac12-\dfrac12\cos(2x^2)\right)=\dfrac{x}2-\dfrac18\times4x\cos(2x^2)$ From there $4x\cos(2x^2)$ is of the form $u'\cos u$, and therefore is easy to integrate 1) $x \sin^2(x^2)dx = \frac{1}{2}\sin^2(x^2)d x^2$ 2) $\sin^2 t = \frac{1 - \cos 2t}{2}$
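Following the hints, with $u = x^2$ the antiderivative works out to $\frac{x^2}{4} - \frac{\sin(2x^2)}{8} + C$; a quick numerical check by differentiating it (Python):

```python
import math

def antideriv(x):
    """Candidate antiderivative x^2/4 - sin(2 x^2)/8 (constant omitted)."""
    return x ** 2 / 4 - math.sin(2 * x ** 2) / 8

def integrand(x):
    return x * math.sin(x ** 2) ** 2

# the central-difference derivative of the antiderivative matches the integrand
for x in (0.3, 1.1, 2.7):
    h = 1e-6
    deriv = (antideriv(x + h) - antideriv(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```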
To expand on @rodms answer, and explain what is going on, it might be useful to forget about probabilities for a minute. The multinomial distribution can be seen as a tool to study processes that partition a finite set into a fixed number of subsets. If you are familiar with algebra, it helps to think about the distribution of degrees in the monomials of multivariate polynomials (the consonance is not a coincidence). If you are familiar with analysis, think about the derived terms in the multivariate Taylor expansion. From these analogies, it should be clear that the number of ways to obtain $j$ realisations of a particular event amongst $N$ trials is $\binom{N}{j}$. Now going back to the probabilities, in order for this particular outcome to happen, we need some event $k$ to happen exactly $j$ times, for which the probability is $p_k^j (1-p_k)^{N-j}$. Hence the formula that @rodms gave you. But what you are asking goes one step further than that. There are many ways that $Z_j$ can be equal to any number $m$. To clarify: $\mathrm{Pr}(\ Z_j = m\ )$ is the probability that exactly $m$ events occur $j$ times. In order to obtain this formula, there is a second combinatorial layer; namely the combinations of $m$ events that would occur $j$ times, of which there are exactly $\binom{K}{m}$. Therefore, $\mathrm{Pr}(\ Z_j = m\ )$ is given by the sum over these combinations that the selected $m$ events jointly occur $j$ times. For any such combination of $m$ events with indices $i_1\neq ...\neq i_m$, the probability that $(X_{i_1} = j\ \text{ and }\ X_{i_2} = j\ ...\ \text{ and }\ X_{i_m} = j)$ if $mj\leq N$ is given by:$$\frac{N!}{(j!)^m\,(N-mj)!}\ \pi_{(i_1, ... , i_m)}^j (1- \sigma_{(i_1, ... , i_m)})^{N-mj}$$where $\pi_{(i_1, ... , i_m)} = \prod_{k=1}^m p_{i_k}$ and $\sigma_{(i_1, ... , i_m)} = \sum_{k=1}^m p_{i_k}$. This yields the (rather ugly) formula:$$\mathrm{Pr}(\ Z_j = m\ ) = \sum_{i_1\neq ...\neq i_m} \frac{N!}{(j!)^m\,(N-mj)!}\ \pi_{(i_1, ... , i_m)}^j (1- \sigma_{(i_1, ... , i_m)})^{N-mj}$$where this sum iterates over the $\binom{K}{m}$ combinations mentioned previously. (Strictly speaking, this sum also counts outcomes in which events beyond the selected $m$ occur exactly $j$ times, so for "exactly $m$" an inclusion-exclusion correction is needed.) If we further assume all $p_i$ equal, then this simplifies to:$$\mathrm{Pr}(\ Z_j = m\ ) = \binom{K}{m} \frac{N!}{(j!)^m\,(N-mj)!}\ p^{mj} (1- mp)^{N-mj}$$
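A brute-force check of the joint-occurrence probability for a small example (using the multinomial coefficient $N!/((j!)^m (N-mj)!)$ for $m$ selected events each occurring exactly $j$ times; the parameters below are arbitrary):

```python
from math import factorial

def multinomial_pmf(counts, probs):
    """Exact multinomial probability of the given count vector."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)
    p = float(coef)
    for c, pr in zip(counts, probs):
        p *= pr ** c
    return p

N, probs, j, m = 6, [0.2, 0.3, 0.5], 2, 2
# brute force over all outcomes: P(X1 = j and X2 = j)
brute = sum(multinomial_pmf((a, b, N - a - b), probs)
            for a in range(N + 1) for b in range(N + 1 - a)
            if a == j and b == j)
# closed form with the multinomial coefficient
coef = factorial(N) // (factorial(j) ** m * factorial(N - m * j))
closed = coef * (probs[0] * probs[1]) ** j * (1 - probs[0] - probs[1]) ** (N - m * j)
assert abs(brute - closed) < 1e-12
```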
I have a set of vectors that I am trying to predict from another set of vectors using a matrix $W$. To find this matrix, I decide I want to minimize the $\ell^2$ norm of the error, e.g.: $$ \text{find} \min_W \|y - Wx\|_2 \\ x,y \in \mathbb{C}^N \quad W\in \mathbb{C}^{N \times N} $$ Where $x$ and $y$ are respective vectors from their sets, and each pair is (I hypothesize) related to each other by $W$. I start by expanding this out: $$ \min_W \left[ (y - Wx)^H (y - Wx) \right] \\ = \min_W \left[ (y - Wx)^H y - (y - Wx)^H Wx \right] \\ = \min_W \left[ y^H y - x^H W^H y - y^H W x + x^H W^H W x \right] $$ Where $(\cdot)^H$ denotes Hermitian transpose. I want to take the derivative with respect to $W$ and set it equal to zero, but I'm having a hard time with the derivative as I'm not sure exactly how that works with Hermitian transposes. Taking a look at this page, it looks like I might be in trouble (e.g. I'm going about this the wrong way), but I wanted to pick your brains to see if you all had any ideas on how to move forward from here. Thank you all! EDIT Michael C. Grant has pointed out that this question is underdetermined, and he is right, if I only have a single $x$ and $y$. However I have a set of $x$ and $y$ vectors that I assume are related to each other by $W$ (and since right now I am working with simulations, I can generate as many pairs of $x$'s and $y$'s as I want). This is a kind of "system identification" problem, where I have input-output correspondences and I'm trying to understand how the math is derived.
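With several pairs stacked as columns of matrices $X$ and $Y$, the Frobenius-norm minimiser is $W = Y X^H (X X^H)^{-1}$ (assuming $X X^H$ is invertible). A stdlib-only sketch for $N = 2$ with synthetic data of my own:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def hermitian(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

def inv2(M):
    """Inverse of a 2x2 complex matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

random.seed(0)
rc = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))
W_true = [[rc() for _ in range(2)] for _ in range(2)]
X = [[rc() for _ in range(5)] for _ in range(2)]   # five x-vectors as columns
Y = matmul(W_true, X)                              # noiseless outputs

Xh = hermitian(X)
W_hat = matmul(matmul(Y, Xh), inv2(matmul(X, Xh)))  # W = Y X^H (X X^H)^{-1}
assert all(abs(W_hat[i][j] - W_true[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

With noiseless data the true $W$ is recovered exactly (up to round-off); with noisy data the same formula gives the least-squares fit.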
In Bishop's Pattern Recognition and Machine Learning (ISBN-13: 978-0387-31073-2), Bishop writes on page 86: This is an example of a rather common operation associated with Gaussian distributions, sometimes called ‘completing the square’, in which we are given a quadratic form defining the exponent terms in a Gaussian distribution, and we need to determine the corresponding mean and covariance. Such problems can be solved straightforwardly by noting that the exponent in a general Gaussian distribution $N(x|\mu,\Sigma)$ can be written: $$ -\dfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu) = -\dfrac{1}{2}x^T \Sigma^{-1} x + x^T \Sigma^{-1} \mu + \text{const}$$ Where $\mu$ is the mean of a multivariate Gaussian and $\Sigma$ is the covariance matrix. How was the above equation derived? Is there a name for this technique (Looking up completing the square doesn't yield anything)?
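For what it's worth, the identity is just the quadratic form multiplied out, with every term not involving $x$ absorbed into the constant (a sketch; $\Sigma$ is symmetric, so $x^T\Sigma^{-1}\mu = \mu^T\Sigma^{-1}x$):

```latex
-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)
  = -\tfrac{1}{2}\left( x^T\Sigma^{-1}x - x^T\Sigma^{-1}\mu
        - \mu^T\Sigma^{-1}x + \mu^T\Sigma^{-1}\mu \right)
  = -\tfrac{1}{2}x^T\Sigma^{-1}x + x^T\Sigma^{-1}\mu
        \;\underbrace{-\;\tfrac{1}{2}\mu^T\Sigma^{-1}\mu}_{\text{const}}
```

Reading the technique in reverse, matching the quadratic term of a given exponent to identify $\Sigma^{-1}$ and the linear term to identify $\Sigma^{-1}\mu$, is what Bishop means by "completing the square".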
Given a Polish space $X$, I note $C_b(X)$ the set of the continuous bounded functions with the norm of the uniform convergence, and $(C_b(X))^\star$ its topological dual with the weak-$*$ convergence $\sigma((C_b(X))^\star, C_b(X))$. To a Borel probability $P$ on $X$ we can associate a $l(P) \in (C_b(X))^\star$ by $$E_P[\phi]=\langle l(P) , \phi \rangle$$for every $\phi \in C_b(X)$, where $\langle,\rangle$ denotes the duality bracket. I further consider a sequence $(P_n)_{n\in \mathbb{N}}$ of Borel probabilities on $X$ which is such that $(l(P_n))$ converges in the weak-$*$ topology of $(C_b(X))^\star$ to a $m \in (C_b(X))^\star$. Obviously $$\langle m,G_1 \rangle=1$$ (where $G_1$ is the function $G_1(x)=1$ for any $x\in X$), and for any positive $\phi$ $$\langle m,\phi \rangle\geq 0$$ (the positive cone is closed). However, is it true that one can find a unique probability $P^\star$ on $X$ such that $$m=l(P^\star) \, ?$$ The problem could come from the $\sigma$-additivity.
I am trying to follow an algorithm that is described in Elementary Quantum Mechanics in 1D. I want to compute eigenenergies and eigenfunctions of bound states in the basic case of a rectangular potential well, shown on page 99 as a simple example. I am writing my code in MATLAB, and I can't get the same solution as shown on page 100. I have tried inspecting the algorithm and it seems OK to me, so I am suspecting I'm doing something wrong with the units or scaling or something like that... but I can't seem to see the problem there either! The algorithm is developed throughout pages 13-23. I have written the algorithm to be computed in general cases, and I have checked it with the Gaussian well too; it produces the same incorrect results... In summary, what I want to achieve is to get correct results for just one potential well. The algorithm in that case can be summed up as follows. In the general case in which you have a potential function approximated with many step functions, the transfer matrix would be: $$T_{0, N+1} = E^{-1}(V_0; a_0)K^{-1}(V_0) M_1 M_2 \cdots M_N K(V_{N+1})E(V_{N+1}; a_{N+1})$$ where the matrices $E^{-1}, K^{-1}, E, K$ and $M$'s are given in tables for cases where the input energy $E$ that is being calculated is lower or higher than the potential energy of the step in the specific region (table 2, page 11; the M matrices are given in table 3, page 16). I would love to use latex to write them here as well, but when I write something other than one-line latex code it just doesn't show (I'd appreciate a tip on that too...?) The author of the book decomposes the equations to make them simpler for computing. Generally, what needs to be done is to calculate the $M$ matrices and then pre-multiply and post-multiply them with the boundary conditions of the potential function. $M$ matrices are dependent on the value of the potential step in the corresponding well and its width.
In the simplest case of one rectangular potential well the regions are left and right of the well. The potential energy outside the well is 20 eV while inside it is 0 eV. The width of the well is 8 angstroms (or 0.8 nm). I need more reputation to post more than two links, and I think this book I posted is really good for reference, so I can't post what the solution looks like, but I can tell you that it's on page 100. So, getting back to the problem at hand. I would like to calculate the energy eigenvalues of the bound states. The criterion for bound states is that the energies that are being calculated need to be less than the energy of the potential well in its left and right boundary regions: $E<V_L,V_R$ where $V_L=V_R=20\,\text{eV}$ in this case. The MATLAB code makes a vector E with values slightly above 0 and slightly less than 20 eV. So, since the problem for the rectangular well is greatly simplified, it turns out the only thing you need to compute is this (page 99): $$ t_{11}(E) = \cos(ka) + 0.5 * \left(\frac{\kappa}{k} - \frac{k}{\kappa}\right)\sin(ka) $$ where $k = \sqrt{2mE/\hbar^2}$; in the general case it is $\sqrt{2m(E-V_i)/\hbar^2}$ for regions where $E>V_i$, but in this case the bottom of the well is at 0.
And $\kappa = \sqrt{2m(V_{L/R}-E)/\hbar^2}$. Here's the code:

%main.m
close all
clear all
clc
% parameters for constructing the rectangular potential well
N = 101;            % number of points
xMin = -0.8e-9;     % m
xMax = -xMin;
x1 = abs(xMin)/2;
U0 = 0;             % well depth
U = ones(N,1)*20;
x = linspace(xMin, xMax, N);
% creating the potential function U
for cn = 1:N
    if abs(x(cn)) <= x1, U(cn) = U0; end
end
% Test U
% plot(x, U, 'LineWidth', 3);
% stepWidth_and_their_Pots(U, x) returns the width dx of each step
% throughout the potential function U, as well as the corresponding potential values Ustep
[dx, Ustep] = stepWidth_and_their_Pots(U, x);
numE = 50;          % number of input energy levels E to compute
% computes the transfer matrix element t11(E) and plots it
V = transferMatrix(U, dx, Ustep, numE);

function [dx, Ustep] = stepWidth_and_their_Pots(U, x)
    flag = [];
    j = 1;
    for i = 1:length(U)-1
        if U(i) ~= U(i+1)
            flag(j) = i+1;
            if i == length(U)-1
                break;
            else
                j = j+1;
            end
        end
    end
    j = length(flag);
    dx = [];
    i = 1;
    for k = j:-1:1
        if k-1 == 0
            break;
        else
            dx(i) = x(flag(k)) - x(flag(k-1));
            i = i+1;
        end
    end
    Ustep = [];
    for i = 1:length(flag)-1
        Ustep(i) = U(flag(i));
    end
end

function V = transferMatrix(U, dx, Ustep, numE)
    % constants
    hbar = 1.055e-34;       % J.s
    q = 1.602e-19;          % C
    m0 = 9.1093897e-31;     % kg
    m = 0.067*m0;
    E = linspace(min(U) + 1e-3, U(1) - 1e-3, numE);
    V = [];
    for j = 1:length(E)
        kappa = sqrt(2*m/hbar^2 * q * (U(1) - E(j)));   % U(1)=U(N), and E is below both
        for i = 1:length(dx)
            if E(j) > Ustep(i)
                k = sqrt(2*m/hbar^2 * q * (E(j) - Ustep(i)));
                Ms = [cos(k*dx(i)), -(1/k)*sin(k*dx(i)); k*sin(k*dx(i)), cos(k*dx(i))];
            elseif E(j) < Ustep(i)
                k = sqrt(2*m/hbar^2 * q * (Ustep(i) - E(j)));
                Ms = [cosh(k*dx(i)), -(1/k)*sinh(k*dx(i)); -k*sinh(k*dx(i)), cosh(k*dx(i))];
            else
                k = 0;
                Ms = [1, -dx(i); 0, 1];
            end
        end
        t11 = 0.5 * (Ms(1,1) + Ms(2,2)) - 0.5*(kappa*Ms(1,2) + Ms(2,1)/kappa);
        %T = [exp(kappa*xMin) 0; 0, exp(-kappa*xMin)] * 0.5 * [1, -1/kappa; 1, 1/kappa] ...
        %    * Ms * [1, 1; -kappa, kappa] * [exp(-kappa*xMax) 0; 0, exp(kappa*xMax)];
        %test = cos(k*dx) + 0.5*(kappa/k - k/kappa)*sin(k*dx);
        V(j) = t11;
    end
    plot(E, V)
end

Can someone help? UPDATE EDIT: I checked my code and found an error in the mass value... it was 0.0067 instead of 0.067, which explains one flaw. But I still don't get the correct values. I have checked my units again and again, and I just can't seem to understand where the error is. So I have: $$\hbar = 1.055 \times 10^{-34}\ \mathrm{J\,s}$$ $$q = 1.602 \times 10^{-19}\ \mathrm{C}$$ $$m_0 = 9.1093897 \times 10^{-31}\ \mathrm{kg}$$ $$m = 0.067\,m_0 = 6.1033 \times 10^{-32}\ \mathrm{kg}$$ The length scale is in $\mathrm{nm}$ and my energies are in $\mathrm{eV}$. Following the formulas for $k$ and $\kappa$ I do get that their units are $\mathrm{m}^{-1}$ and that their magnitude is of order $10^{9}$, which is good since that is multiplied with lengths of order $\mathrm{nm}$, so the values in the matrices $Ms$ should be OK. But I'm still not getting the same results as in the book (page 100). Here's an updated picture, and I have updated my code for that stupid mistake in the mass $m$. I'm still not getting the correct results...
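As a cross-check on the single-well formula, here is a stdlib-only Python sketch of $t_{11}(E)$. Note the mass used below is an assumption: the book's example on page 100 may well assume the free-electron mass $m_0$ rather than the GaAs effective mass $0.067\,m_0$ used in the MATLAB code, which would change the number and position of the zeros, and is worth checking.

```python
import math

hbar = 1.055e-34        # J s
q    = 1.602e-19        # C (converts eV to J)
m    = 9.1093897e-31    # kg; free-electron mass (assumption, see note above)
a    = 0.8e-9           # well width, m
V0   = 20.0             # barrier height, eV

def t11(E):
    """t11(E) = cos(ka) + 0.5*(kappa/k - k/kappa)*sin(ka) for 0 < E < V0 (in eV)."""
    k     = math.sqrt(2 * m * q * E) / hbar          # inside the well
    kappa = math.sqrt(2 * m * q * (V0 - E)) / hbar   # in the outer regions
    return math.cos(k * a) + 0.5 * (kappa / k - k / kappa) * math.sin(k * a)

# bound-state energies are the zeros of t11(E); count sign changes on a grid
Es = [0.001 + i * (V0 - 0.002) / 2000 for i in range(2001)]
vals = [t11(E) for E in Es]
n_bound = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
assert n_bound >= 1     # a 20 eV deep, 0.8 nm wide well binds at least one state
```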
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg: 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of an interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $$ie(P_A+P_B)^{\mu}$$ External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv... I am now studying the history of the discovery of electricity, so I am searching for each scientist on Google, but I am not getting good answers on some of them. Can anyone suggest a good app for studying the history of these scientists? I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under a continuity assumption. My question is that would it be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate. 
If you want to experiment with lenses and optics, then you may use the Mitsubishi Renderer; it is made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that. And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt, Optimistic Nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction?
I just want to have an overview, as I might have the possibility of doing a master's thesis on the subject. I looked around a bit and it sounds cool, but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it.
Consider $A =\left( \begin{array}{ccc} -1 & 2 & 2\\ 2 & 2 & -1\\ 2 & -1 & 2\\ \end{array} \right)$. Find the eigenvalues of $A$. So I know the characteristic polynomial is: $$f_A(\lambda) = (-\lambda)^n+(trA)(-\lambda)^{n-1}+...+\det A$$ I found the $\det A = 27$, so the characteristic polynomial of $A$ is: $$-\lambda^3+3\lambda^2-c\lambda+27$$ However, the textbook I'm using doesn't give any method for finding the value of $c$. I have the solution to the problem, and the value of $c$ is in fact $9$, but is there any method to numerically solve for it? If we have a $4\times 4$ matrix, we'll end up with a characteristic polynomial of the form $$\lambda^4-tr A(\lambda)^3+c_1\lambda^2-c_2\lambda+\det A$$ Similarly, is there a method to solve for $c_1,c_2$ in this case?
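In general the coefficients of the characteristic polynomial are signed sums of principal minors: for the monic cubic $\lambda^3 + c_1\lambda^2 + c_2\lambda + c_3$ one has $c_1 = -\operatorname{tr}A$, $c_2 = $ sum of the $2\times 2$ principal minors, and $c_3 = -\det A$. A quick numerical sketch checking this for the matrix above (sign conventions differ between textbooks, which may account for the $\pm 9$ and $\pm 27$ bookkeeping):

```python
import numpy as np
from itertools import combinations

A = np.array([[-1., 2., 2.],
              [ 2., 2., -1.],
              [ 2., -1., 2.]])

# Coefficients of the monic characteristic polynomial
# lambda^3 + c1*lambda^2 + c2*lambda + c3, highest degree first:
coeffs = np.poly(A)

# c2 equals the sum of the 2x2 principal minors of A.
minors2 = sum(np.linalg.det(A[np.ix_(idx, idx)])
              for idx in combinations(range(A.shape[0]), 2))

print(coeffs)                  # approx [1, -3, -9, 27]
print(minors2)                 # approx -9
print(np.linalg.eigvalsh(A))   # eigenvalues, ascending
```

For a $4\times 4$ matrix the same pattern holds: $c_1$ and $c_2$ in the quartic are (up to sign) the sums of the $2\times 2$ and $3\times 3$ principal minors, so the same `combinations`-over-index-sets trick computes them.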
I've been doing some old exam problems and I've come across a problem that I've answered, but my gut is telling me that there's something I'm glossing over. Let $R$ be a commutative ring with identity and let $U$ be an ideal that is maximal among non-finitely generated ideals of $R$. I wish to show that $U$ is a prime ideal. Assume that $U$ is not prime. Let $x, y\not\in U$ be such that $xy\in U$. $U$ is contained in a maximal ideal $M$ and $xy\in M$, so either $x$ or $y$ is in $M$; assume $x\in M$. The condition $U\subset M$ then implies that there is a ring homomorphism $$\varphi: R/M\to R/U$$ Since $R/M$ is a field, $\varphi$ is injective. Hence, $\varphi(x)\in U$. This is a contradiction, so $U$ must be prime. The thing that worries me is that I never explicitly used the hypothesis that $U$ was not finitely generated or the result that $M$ must be finitely generated.
The notation for vector (a.k.a. cross) product in $\mathbb{R}^3$ I usually see is $\times$. However, some places use $\wedge$ instead, which IMHO creates a lot of confusion, as $\wedge$ usually is used for multiplication in the exterior algebra. E.g. I saw the following: let's prove the equivalence of linear dependence of three vectors $u_1,u_2,u_3$ in the space and linear dependence of their pairwise products $u_i\wedge u_j$: take $$ u_k\wedge \sum_{i<j} a_{ij} u_i\wedge u_j = 0 $$ and derive from this that $a_{ij}=0$, as $u_1\wedge u_2\wedge u_3=0$ means linear dependence of the $u_i$'s. (which would be fine if we talked about exterior product, but totally wrong for vector product, as it isn't even associative to begin with...) Are there any arguments for using $\wedge$ for cross product?
So, I believe you work in the category of $R$-modules for some commutative ring, or any abelian category with a monoidal structure and an internal hom. Otherwise, there is a priori no internal hom in the category of chain complexes. If $C^*=D^*$ is as you described, the internal hom you wrote is not correct. It is rather:$$\operatorname{hom}^*(C^*,C^*)=\operatorname{hom}(C^0,C^0)\times \operatorname{hom}(C^1,C^1)\rightarrow \operatorname{hom}(C^0,C^1)$$with differential being $(f,g)\mapsto df-gd$. Then, $\operatorname{hom}^*(C^*,C^*)\otimes C^*$ is the complex$$\begin{align}\operatorname{hom}(C^0,C^0)\times \operatorname{hom}(C^1,C^1)\otimes C^0\rightarrow\\ \operatorname{hom}(C^0,C^1)\otimes C^0\oplus (\operatorname{hom}(C^0,C^0)\times \operatorname{hom}(C^1,C^1))\otimes C^1\rightarrow& \operatorname{hom}(C^0,C^1)\otimes C^1\end{align}$$The first differential is given by $(f,g)\otimes c\mapsto (df-gd)\otimes c \oplus (f,g)\otimes dc$. The second differential is given by $u\otimes c_0\oplus (f,g)\otimes c_1\mapsto (df-gd)\otimes c_1$. The evaluation map is the map to the complex $C^0\rightarrow C^1$ given in degree 0 by $(f,g)\otimes c\mapsto f(c)$ and in degree 1 by $u\otimes c_0 \oplus (f,g)\otimes c_1\mapsto u(c_0)+g(c_1)$. Let us check that this is indeed a morphism of complexes; in other words, let us check that these maps commute with the differentials. There is only one commutative square to check:$$\require{AMScd}\begin{CD}\operatorname{hom}(C^0,C^0)\times \operatorname{hom}(C^1,C^1)\otimes C^0@>>>\operatorname{hom}(C^0,C^1)\otimes C^0\oplus (\operatorname{hom}(C^0,C^0)\times \operatorname{hom}(C^1,C^1))\otimes C^1\\@VVV@VVV\\C^0@>>>C^1\end{CD}$$The map going through the left arrow is $(f,g)\otimes c\mapsto f(c)\mapsto d(f(c))$, while the map going through the top arrow is $$(f,g)\otimes c\mapsto (df-gd)\otimes c\oplus (f,g)\otimes dc\mapsto d(f(c))-g(dc) + g(dc)=d(f(c))$$So the square is indeed commutative.
Hi PhysicsForums, I am calculating something related to a spheroidal membrane and want to ask you a question. I consider an oblate spheroid (oblate spheroidal coordinates can also be considered as a limiting case of ellipsoidal coordinates in which the two largest semi-axes are equal in length). In spheroidal coordinates, the relationship to Cartesian coordinates is [tex]x=a\sqrt{(1+u^2)(1-v^2)}\cos(\phi)[/tex] [tex]y=a\sqrt{(1+u^2)(1-v^2)}\sin(\phi)[/tex] [tex]z=a u v[/tex] Now, I want to know how to obtain the normal derivative to the surface of a spheroid, in terms of the derivatives with respect to u, v and [tex]\phi[/tex]. Thank you very much.
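Since oblate spheroidal coordinates are orthogonal, the outward normal to a surface of constant $u$ points along the $u$-direction, so the normal derivative is $\frac{\partial}{\partial n} = \frac{1}{h_u}\frac{\partial}{\partial u}$ with scale factor $h_u$. A sympy sketch (my own check, not from the thread) verifying $h_u = a\sqrt{(u^2+v^2)/(1+u^2)}$ for the parametrization above:

```python
import sympy as sp

a, u, v, phi = sp.symbols('a u v phi', positive=True)

# Cartesian coordinates of the oblate spheroidal parametrization from the post
x = a*sp.sqrt((1 + u**2)*(1 - v**2))*sp.cos(phi)
y = a*sp.sqrt((1 + u**2)*(1 - v**2))*sp.sin(phi)
z = a*u*v

# Scale factor h_u^2 = |dr/du|^2; on a surface u = const the normal
# derivative of a function f is (1/h_u) * df/du.
h_u_sq = sp.simplify(sum(c.diff(u)**2 for c in (x, y, z)))
print(h_u_sq)   # should simplify to a**2*(u**2 + v**2)/(u**2 + 1)
```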
I'm new to SOCP and want to try to get familiar with the format and how to solve it with cvxopt in python. However, for a simple toy example I'm struggling to get the right input format. The problem I want to solve has the form $$ \max_x c^T x$$ subject to $$ \| A_ix + b_i\|_2\le c_i^Tx + d_i, \quad i=1,\dots, m$$ $$Fx = g $$ The example I came up with has the following parameters: $c=(0.02, 0.06)$. Additionally I'm given a symmetric positive semi-definite matrix $$\Sigma=\begin{bmatrix} 0.000025 & 0.000046\\ 0.000046& 0.024025 \end{bmatrix} $$ and have the constraints: $$ \sqrt{x^T\Sigma x} \le d_{max}=0.05$$ $$ 0\le x_i \le 1, i = 1, 2$$ $$ \sum_{i=1}^2 x_i = 1$$ I then first transformed the constraints into the proper SOCP format. In a next step I want to make them cvxopt-acceptable. 1. Transform to SOCP The constraint $$ \sum_{i=1}^2 x_i = 1$$ is simple: take $F=\begin{bmatrix} 1 & 1\end{bmatrix}$ and $ g = 1$. Next, for the constraint $ \sqrt{x^T\Sigma x} \le d_{max}$ we use that for a positive semi-definite matrix $ \sqrt{x^T\Sigma x} = \|L^T x\|_2$, where $L$ is the lower-triangular Cholesky factor with $\Sigma = LL^T$. This means we have $A_1 = L^T$, $b_1=0$, $c_1 = 0$ and $d_1 = d_{max}$. The last constraints, of the type $ 0\le x_i \le 1, i = 1, 2$, can be converted into a proper SOCP format by taking $A_2 = 0, b_2 =0, c_2=(-1, 0)$ and $d_2 = 1$. This gives $ x_1 \le 1$. To get $0 \le x_1$ we take $A_3 = 0, b_3 =0, c_3=(1, 0)$ and $d_3 = 0$. We get two additional constraints for $x_2$ in the same way by changing $c_4 = (0, -1)$ and $c_5=(0, 1)$. 2. Transform SOCP to cvxopt format The cvxopt format is slightly different. The objective function and equality constraint are the same. However, the second order cone constraints have the form $$ G_kx + s_k = h_k$$ $$s_{k0}\ge \|s_{k1}\|_2$$ where $s_k=(s_{k0}, s_{k1})$. The solver needs as input the lists of matrices $G_k$ and vectors $h_k$.
If I'm not wrong, I transform each of the constraints $\| A_ix + b_i\|_2\le c_i^Tx + d_i$ to this format by taking $$ G_i =\begin{bmatrix} -c_i \\ -A_i \end{bmatrix}$$ and $$ h_i = \begin{bmatrix} d_i \\ b_i \end{bmatrix}$$ Python toy model I've wanted to test this in python. Unfortunately, it doesn't work as expected and I don't see exactly what is wrong. I'm not sure if it is the implementation or one of the transformations above. Here is the python toy example:

In [201]: import numpy as np
In [202]: import cvxopt as cpt
In [203]: Sigma = np.asarray([[0.000025, 0.000046],[0.00046, 0.024025]])
In [205]: np.linalg.eig(Sigma)
Out[205]: (array([2.41183657e-05, 2.40258816e-02]), array([[-0.99981638, -0.00191659], [ 0.01916244, -0.99999816]]))

As you can see, the matrix is indeed positive semi-definite.

In [37]: c = cpt.matrix(np.asarray([[0.02, 0.06]]).T)
In [38]: F = cpt.matrix(np.asarray([[1.0, 1.]]), tc='d')
In [39]: g = cpt.matrix(np.asarray([[1.0]]), tc='d')
In [40]: A0 = cpt.matrix(np.linalg.cholesky(Sigma))
In [41]: G1 = cpt.matrix(np.vstack(([[0, 0]],-A0)), tc='d')
In [42]: h1 = cpt.matrix(np.asarray([[0.05],[0.0], [0.0]]), tc='d')
In [43]: G2 = cpt.matrix(np.asarray([[-1, 0],[0, 0]]), tc='d')
In [44]: h2 = cpt.matrix(np.asarray([[1.0],[0.0]]), tc='d')
In [45]: G3 = cpt.matrix(np.asarray([[1, 0],[0, 0]]), tc='d')
In [46]: h3 = cpt.matrix(np.asarray([[0.0],[0.0]]), tc='d')
In [47]: G4 = cpt.matrix(np.asarray([[0, -1],[0, 0]]), tc='d')
In [48]: h4 = cpt.matrix(np.asarray([[1.0],[0.0]]), tc='d')
In [49]: G5 = cpt.matrix(np.asarray([[0, 1],[0, 0]]), tc='d')
In [50]: h5 = cpt.matrix(np.asarray([[0.0],[0.0]]), tc='d')
In [51]: Gq = [G1]
In [52]: [Gq.append(Gi) for Gi in [G2, G3, G4, G5]]
In [53]: hq = [h1]
In [54]: [hq.append(hi) for hi in [h2, h3, h4, h5]]

Gq and hq are lists, which is required by cvxopt.solvers.socp.
If I now call the solver I get:

In [55]: cpt.solvers.socp(-c, Gq=Gq, hq=hq, A=F, b=g)
     pcost       dcost       gap    pres   dres   k/t
 0: -3.9965e-02 -2.1105e+00  1e+01  2e+00  3e-16  1e+00
 1: -3.9369e-02 -1.3687e-02  5e+00  8e-01  1e-15  1e+00
 2: -3.3610e-02  1.0744e+02  1e+03  2e+00  2e-13  1e+02
 3: -3.3610e-02  1.0935e+04  1e+05  2e+00  4e-12  1e+04
 4: -3.3610e-02  1.0936e+06  1e+07  2e+00  1e-09  1e+06
Certificate of primal infeasibility found.
Out[55]: {'dual infeasibility': None, 'dual objective': 1.0, 'dual slack': 0.9204046569064762, 'gap': None, 'iterations': 4, 'primal infeasibility': None, 'primal objective': None, 'primal slack': None, 'relative gap': None, 'residual as dual infeasibility certificate': None, 'residual as primal infeasibility certificate': 5.782986313941418e-08, 'sl': None, 'sq': None, 'status': 'primal infeasible', 'x': None, 'y': <1x1 matrix, tc='d'>, 'zl': <0x1 matrix, tc='d'>, 'zq': [<3x1 matrix, tc='d'>, <2x1 matrix, tc='d'>, <2x1 matrix, tc='d'>, <2x1 matrix, tc='d'>, <2x1 matrix, tc='d'>]}

I don't understand why. Even if I use $d_{max}=0.2$ we should get a solution of $x_1 = 0, x_2=1$. Any help would be much appreciated. Please note that I want to use cvxopt, and I'm thankful for but not interested in answers / comments suggesting another module like cvxpy. For the real cases I want to work on after finishing some of the basic examples, I am restricted to cvxopt.
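One thing worth checking numerically, with plain numpy and independently of cvxopt: in the form $G_i = \begin{bmatrix}-c_i^T \\ -A_i\end{bmatrix}$, the first row of each $G$ must be minus $c_i$. Plugging the expected optimum $x = (0, 1)$ into the posted box-constraint $G/h$ pairs shows which cones it violates (a diagnostic sketch, my own reading of the issue):

```python
import numpy as np

def cone_ok(G, h, x):
    """Second-order cone membership: s = h - G @ x must satisfy s0 >= ||s1||."""
    s = h - G @ x
    return s[0] >= np.linalg.norm(s[1:]) - 1e-12

x = np.array([0.0, 1.0])          # the solution we expect for d_max = 0.2

# Box cones exactly as posted (A_i = 0, so the second row is all zeros):
G4_posted = np.array([[0., -1.], [0., 0.]]); h4 = np.array([1., 0.])
G5_posted = np.array([[0.,  1.], [0., 0.]]); h5 = np.array([0., 0.])

# With c_4 = (0,-1), d_4 = 1 (encoding x2 <= 1) the first row should be
# -c_4 = (0, 1), and with c_5 = (0,1), d_5 = 0 (encoding 0 <= x2) it
# should be -c_5 = (0,-1): the posted rows have the opposite sign.
G4_fixed = -G4_posted
G5_fixed = -G5_posted

print(cone_ok(G5_posted, h5, x))  # False: posted signs force x2 <= 0
print(cone_ok(G4_fixed,  h4, x) and cone_ok(G5_fixed, h5, x))  # True
```

If this reading is right, the posted `G2..G5` encode $-1 \le x_i \le 0$, which contradicts $x_1 + x_2 = 1$ and would match the "primal infeasible" certificate. Note also that with a Cholesky factor $L$ ($\Sigma = LL^T$) the cone matrix should be $L^T$, since $x^T\Sigma x = \|L^Tx\|_2^2$.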
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitted from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken account of. The idea is that if the possible relaxations between energy levels are restricted so that to relax from an excited state the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear up my confusion. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions addressed directly to the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Except for being off-topic, it seems clear to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen so that the exponents are multiples of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
I am reading "Introduction to Algorithms" by Thomas Cormen et al., particularly the theorem which says that, given an open-address hash table with load factor $\alpha = n/m < 1$, the expected number of probes in an unsuccessful search is at most $1/(1-\alpha)$, assuming uniform hashing. In the proof they take $p(i)$ to be the probability that exactly $i$ probes find occupied slots. Here $i$ ranges over $0,1,2,\dots$, and for $i > n$ we have $p(i) = 0$, since at most $n$ slots can be found already occupied. I have understood up to this point. Then it says that the expected number of probes in an unsuccessful search is $$1 + \sum_{i=0}^\infty i\,p(i)\,.$$ How is that so? If anything about the question is unclear, please comment. I hope I have explained the question properly. Any help would be appreciated. Thanks in advance.
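The identity is just conditioning on how many occupied slots are hit: the leading "1" is the final probe that lands on an empty slot (it always happens), and $\sum_i i\,p(i)$ is the expected number of occupied slots probed before it. A small simulation sketch of uniform hashing (all parameters assumed for illustration) agreeing with the $1/(1-\alpha)$ bound:

```python
import random

def unsuccessful_probes(m, n, rng):
    """Probe slots in a uniformly random order until an empty one is found."""
    occupied = [True] * n + [False] * (m - n)
    rng.shuffle(occupied)
    probes = 0
    for slot in rng.sample(range(m), m):   # uniform hashing: random probe order
        probes += 1
        if not occupied[slot]:
            return probes

rng = random.Random(0)
m, n = 100, 50
alpha = n / m
trials = 20000
avg = sum(unsuccessful_probes(m, n, rng) for _ in range(trials)) / trials

# The exact expectation is (m+1)/(m-n+1) ~ 1.98, below the bound 1/(1-alpha) = 2.
print(avg, 1 / (1 - alpha))
```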
This question already has an answer here: Find the number of ways of distributing 999 identical balls into 3 identical boxes. This number (999) is too huge to allow a case by case treatment. Ans: 83667 Hint: Break into 3 cases: 1. all boxes have the same number of balls; 2. exactly 2 boxes have the same number of balls; 3. all boxes have different numbers of balls. Cases 1 and 2 are trivial to solve. Case 3: solve assuming the boxes are different (this is a standard combinatorics problem), then reduce by the number of ways the distribution could be permuted across boxes (as the boxes are identical). Finally add the 3 cases, of course. Let $n_1 \le n_2 \le n_3$ be the number of balls in the three boxes in sorted order. Since the balls and boxes are identical, the configuration is uniquely specified by the triple ($n_1,n_2,n_3$). Let $m_1 = n_1$, $m_2 = n_2 - n_1$ and $m_3 = n_3 - n_2$. The condition $n_1 \le n_2 \le n_3$ becomes $m_1, m_2, m_3 \ge 0$. The problem at hand reduces to finding the number of solutions of $$ \color{red}{3m_1} + \color{green}{2m_2} + \color{blue}{m_3} = n_1 + n_2 + n_3 = N\quad\quad\text{ for }\quad (m_1,m_2,m_3) \in \mathbb{N}^3$$ when $N = 999$. Let $p_N$ be the number of ways of putting $N$ identical balls into $3$ identical boxes.
The above discussion tells us the OGF of $p_N$ equals $$f(z) \stackrel{def}{=} \sum_{N=0}^\infty p_N z^N = \frac{1}{ \color{red}{(1-z^3)} \color{green}{(1-z^2)} \color{blue}{(1-z)} }$$ With the help of a CAS, we can partial-fraction decompose the RHS and get $$\begin{align} f(z) &= \frac16\frac{1}{(1-z)^3} + \frac14 \frac{1}{(1-z)^2} + \frac{17}{72}\frac{1}{1-z} + \frac18\frac{1}{1+z} + \frac19\frac{2+z}{1+z+z^2}\\ &= \frac16\frac{1}{(1-z)^3} + \frac14 \frac{1}{(1-z)^2} + \frac{17}{72}\frac{1}{1-z} + \frac18\frac{1-z}{1-z^2} + \frac19\frac{2-z-z^2}{1-z^3} \end{align} $$ Notice $$\begin{align} \frac{1-z}{1-z^2} &= \sum_{k=0}^\infty d_k z^k\quad\text{ where }\quad d_k = \begin{cases} +1, & k \equiv 0 \pmod 2\\ -1, & k \equiv 1 \pmod 2\\ \end{cases}\\ \frac{2-z-z^2}{1-z^3} &= \sum_{k=0}^\infty e_k z^k\quad\text{ where }\quad e_k = \begin{cases} 2, & k \equiv 0 \pmod 3\\ -1, & \text{otherwise} \end{cases} \end{align} $$ Together with the following expansions for any $m \ge 1$, $$\frac{1}{(1-z)^m} = \sum_{k=0}^\infty \binom{k+m-1}{m-1} z^k$$ we get $$p_N = \frac16\binom{N+2}{2} + \frac14\binom{N+1}{1} + \frac{17}{72} + \frac18 d_N + \frac19 e_N = \frac{(N+1)(N+5) + f_N}{12} $$ where $\quad f_N = 12\left(\frac{17}{72} + \frac18 d_N + \frac19 e_N\right) = \begin{cases} 7, & N \equiv 0 \pmod 6,\\ 0, & N \equiv 1,5 \pmod 6\\ 3, & N \equiv 2,4 \pmod 6\\ 4, & N \equiv 3\pmod 6\\ \end{cases}$ Since $0 \le \frac{f_N}{12} < 1$ and $p_N$ is an integer, we can simplify the above formula to $$p_N = \left\lceil\frac{(N+1)(N+5)}{12}\right\rceil$$ Substituting $N = 999$ in this formula, the number of ways of placing $999$ identical balls into $3$ identical boxes equals $$p_{999} = \left\lceil\frac{1000 \times 1004}{12}\right\rceil = 83667$$
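Since bookkeeping in a derivation this long can easily slip, the closed form is worth cross-checking by brute-force enumeration; a sketch:

```python
def p_brute(N):
    """Count triples n1 <= n2 <= n3 of nonnegative integers summing to N."""
    count = 0
    for n1 in range(N // 3 + 1):
        for n2 in range(n1, (N - n1) // 2 + 1):
            count += 1          # n3 = N - n1 - n2 >= n2 is then automatic
    return count

def p_formula(N):
    # ceil((N+1)(N+5)/12) in exact integer arithmetic
    return ((N + 1) * (N + 5) + 11) // 12

assert all(p_brute(N) == p_formula(N) for N in range(200))
print(p_formula(999))   # 83667
```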
Let me reduce this (quite) well known claim to two (very) well known theorems: the Milnor-Svarc lemma; and the theorem that hyperbolicity is a quasi-isometry invariant. Along the way, there are a few simple facts about group actions that are used. If $\Gamma$ is a Cayley graph of $G$ then $G$ acts properly discontinuously and cocompactly on $\Gamma$, hence $\Gamma$ is quasi-isometric to $G$ (by the Milnor-Svarc lemma). Now consider the restricted action of the subgroup $G_0$ on $\Gamma$. Proper discontinuity is inherited under restriction of the action of $G$ to any subgroup (pretty obvious), hence the action of $G_0$ on $\Gamma$ is properly discontinuous. Cocompactness is inherited under restriction of the action of $G$ to any finite index subgroup, hence the action of $G_0$ on $\Gamma$ is cocompact (proof: if $K \subset \Gamma$ is a finite subgraph whose $G$-translates cover $\Gamma$, and if $g_1,..,g_N$ are left coset representatives of $G_0$ in $G$, then $(g_1 \cdot K) \cup \cdots \cup (g_N \cdot K)$ is a finite subgraph whose $G_0$-translates cover $\Gamma$). Again (by the Milnor-Svarc lemma) $\Gamma$ is quasi-isometric to $G_0$. Since quasi-isometry is an equivalence relation, it follows that $G$ is quasi-isometric to $G_0$. Since hyperbolicity is a quasi-isometry invariant, it follows that $G$ is hyperbolic if and only if $G_0$ is hyperbolic.
I was solving for the equation of a curve and I arrived at the following differential equation. However I do not know how to solve it: $\frac{dy}{dx}=\frac{-x+\sqrt{x^2+y^2}}{y}$ $$ y\,dy + x\,dx = dx \sqrt{x^2 +y^2} \to d (x^2+ y^2) = 2\, dx\sqrt{x^2+y^2}$$ Take $u=x^2+y^2$; then $ du = 2\sqrt{u}\, dx$.
I am considering using petsc4py instead of scipy.integrate.odeint (which is a wrapper for Fortran solvers) for a problem involving the solution of a system of ODEs. The problem has the potential to be stiff. Writing down its Jacobian is very hard. So far, I have been able to produce reasonable speed gains by writing the RHS functions in "something like C" (using either numba or Cython). I'd like to get even more performance out, hence my consideration of PETSc. Introducing a toy version of the problem: The Amoeba Consider the following figure, which defines the basics of an amoeba: So, an amoeba consists of $N$ nodes and edges, constrained to a 2D space. The positions of the nodes describing the amoeba evolve with time. Let the position of the $n$th ($n \in \{0, 1, ..., N-1\}$) node be described by a position vector, $\textbf{x}_n$: $$\textbf{x}_n = \left[\begin{array}{cc} x_{0,n} \\ x_{1,n} \end{array}\right]$$ For the purposes of simplification, assume that the amoeba lives in a world with first order equations of motion, so an applied force $\textbf{F}$ on a node produces a nodal velocity $\dot{\textbf{x}}$: $$ \textbf{F} = c \dot{\textbf{x}}$$ where $c$ is a constant I pick. We are very interested in tracking the position of each node, so immediately we produce $2N$ ODEs (one ODE for each spatial component) of interest (assume that edges are elastic springs): $$ \frac{\textrm{d}\textbf{x}_n}{dt} = \text{elastic edge forces} + \textbf{F}_{n, active}$$ $\textbf{F}_{n, active}$ is the force generated by the amoeba at node $n$. One can imagine a wide range of rules that could be devised to determine how $\textbf{F}_{n, active}$ is determined, and I suggest that we consider the following rules: 1) There are red balls and blue balls inside the amoeba. 2) Both red and blue balls have a "switched-on" (symbolized $r^*$/$b^*$) and a "switched-off" (symbolized $r$/$b$) state.
3) Both red and blue balls can be found either in the body of the amoeba, where they are always switched off, or at the nodes of the amoeba, where they can be switched on or switched off. 4) The total number of red balls, regardless of location or status, is a constant $R$, and similarly for blue balls, a constant $B$. 5) When red or blue balls are in the body, they are free to jump onto any node they want, but when they are on nodes, some of them might be induced to move to neighbouring nodes due to some sort of "diffusion". 6) Switched-on red balls at a node cause a force that attempts to reduce the area of the amoeba. 7) Switched-on blue balls at a node cause a force that attempts to increase the area of the amoeba. 8) Switched-on red balls tend to switch off blue balls that are switched on, and vice versa: switched-on blue balls tend to switch off red balls that are switched on. 9) ... Okay, you get my point. I can come up with a model that is fairly convoluted in terms of how the various variables of interest interact. Due to the large number of equations involved, it is already tedious to think about writing down a Jacobian. However, since we have access to a computer, it may be that we can write functions governing a particular interaction that do not have neat analytical forms (let alone whether or not their derivatives have neat analytical forms), so we might have a mess of piecewise functions needed to approximate them if we were still to try to produce a Jacobian... Things get even worse if my problem involves not just one amoeba, but multiple amoebae (which is when things ARE interesting), with new interaction rules enforcing that amoebae cannot violate each others' areas, and so on.
All the toy examples I see of PETSc time stepping problems have Jacobians defined, so I wonder whether I would even get a speed gain from switching to it, or whether one of the reasons my computational cost is high is precisely that I cannot provide a Jacobian function?
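As far as I know, an analytic Jacobian is not strictly required: PETSc's SNES/TS can approximate it by finite differences (e.g. the -snes_fd option) or by matrix coloring, at the cost of extra RHS evaluations. A library-agnostic sketch of the underlying idea, with a hypothetical RHS standing in for the amoeba equations:

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """Approximate J[i, j] = d f_i / d x_j by forward differences."""
    f0 = np.asarray(f(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - f0) / eps
    return J

# Hypothetical RHS (not the actual amoeba model) just to exercise the helper:
def rhs(x):
    return np.array([x[0]**2, x[0] * x[1]])

J = fd_jacobian(rhs, np.array([1.0, 2.0]))
print(J)   # approx [[2, 0], [2, 1]]
```

Each column costs one extra RHS evaluation, which is why stiff solvers that need many Jacobians benefit from coloring (grouping structurally independent columns) when the system is large and sparse, as a nodes-and-edges model typically is.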
I have a set of matrices $\{(A_i,D_i)\}$ for $i\in\{1,\ldots,n\}$, where: Each $D_i\in\mathbb{R}^{S\times S}$ is diagonal, and every entry on the main diagonal is non-negative. Each $A_i\in\mathbb{R}^{m\times m}$ is symmetric and positive definite, with every entry being non-negative. I want to find a matrix $B\in\mathbb{R}^{m\times S}$ such that: $$ A_i \approx B D_i B^T \;\;\;\forall\;\;\;1\leq i \leq n \\ B^TB\approx I $$ However, everything is noisy (meaning $A_i$ and $D_i$ always have some random perturbation). This means (I assume) that I need something like: $$ B^* = \arg\min_B \;\alpha||B^TB - I|| + \beta\sum_{i=1}^n || A_i - BD_iB^T|| $$ for some matrix norm (e.g. Frobenius) and (hyper-)parameters $\alpha,\beta\in\mathbb{R}$. I am not too picky about the exact formulation, however, so feel free to tweak it. For instance, perhaps there is a way to make the orthogonality a hard constraint. My question: how do I solve an optimization problem like the one above? What have I tried: well, this reminds me of an eigenvalue decomposition problem (especially if $n=1$), but I have a set of problems I'd like to satisfy simultaneously. One odd thing, though, is that the $D_i$ are known, or at least estimated (albeit with noise). My first thought was to rearrange this into a linear system somehow, but I have not been able to do so (so far). If there is some literature I can look into relating to this problem, that would be a more than good enough answer. My apologies for the lack of optimization/numerical linear algebra knowledge. Note: I have already tried to post this on math stack exchange, but to no avail.
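In case a baseline helps: the penalized objective (with squared Frobenius norms) is smooth in $B$, so plain gradient descent is one workable, if not fastest, approach. A numpy sketch on synthetic data; the step size, weights, and problem sizes are my own choices for illustration, not from any reference:

```python
import numpy as np

rng = np.random.default_rng(0)
m = S = 4
n = 3

# Synthetic ground truth: orthonormal B_true and nonnegative diagonal D_i,
# with A_i = B_true D_i B_true^T plus a little symmetric noise.
B_true, _ = np.linalg.qr(rng.standard_normal((m, S)))
Ds = [np.diag(rng.uniform(0.5, 1.5, S)) for _ in range(n)]
As = [B_true @ D @ B_true.T + 1e-3 * rng.standard_normal((m, m)) for D in Ds]
As = [(A + A.T) / 2 for A in As]

alpha, beta, lr = 1.0, 1.0, 0.002

def loss(B):
    return (alpha * np.linalg.norm(B.T @ B - np.eye(S))**2
            + beta * sum(np.linalg.norm(A - B @ D @ B.T)**2
                         for A, D in zip(As, Ds)))

def grad(B):
    # d/dB ||B^T B - I||_F^2 = 4 B (B^T B - I);
    # d/dB ||A - B D B^T||_F^2 = -4 (A - B D B^T) B D  (A symmetric, D diagonal)
    g = 4 * alpha * B @ (B.T @ B - np.eye(S))
    for A, D in zip(As, Ds):
        g -= 4 * beta * (A - B @ D @ B.T) @ B @ D
    return g

B = 0.1 * rng.standard_normal((m, S))
losses = [loss(B)]
for _ in range(3000):
    B -= lr * grad(B)
    losses.append(loss(B))
print(losses[0], losses[-1])   # the loss should drop substantially
```

For a hard orthogonality constraint, the same gradient can instead be projected onto the Stiefel manifold (optimization with orthogonality constraints), which is where the literature pointer probably lies: joint approximate diagonalization of several matrices.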
I need to evaluate the definite integral $$\int_0^{2\pi}\frac{1}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta \ \text{ for various}\ A,B \text{; with}\ A,B<<1.$$ Wolfram Alpha provides the following indefinite general solution:- $$\int \frac{1}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta = -(2/K) \tanh^{-1} ( \frac{A-(B-1)\tan(\theta/2)}{K}) $$ where $K = \sqrt{A^2 + B^2 -1}$. But I am having trouble checking it for the simple case when $A=B=0$ when I would expect the answer to be given by:- $$\int_0^{2\pi}\frac{1}{1 + 0 + 0}\, \mathrm{d}\theta = 2\pi.$$ I have approached the Wolfram Alpha solution thus:- $$ -(2/K) \tanh^{-1} ( \frac{\tan(2\pi/2)}{K}) +(2/K) \tanh^{-1} ( \frac{\tan(0/2)}{K}) $$ $$ -(2/K) \tanh^{-1} ( \frac{\tan(\pi)}{K}) +(2/K) \tanh^{-1} ( \frac{\tan(0)}{K}) $$ $$ -(2/K) \tanh^{-1} ( \frac{0}{K}) +(2/K) \tanh^{-1} ( \frac{0}{K}) $$ which gives the result of zero. I presume this error comes from trying to integrate across the range $0, 2\pi$ where the $\tan$ function has singularities at $\pi/2$ and $3\pi/2$. However when I try and break the integration into the three continuous ranges $0,\pi/2$ and $\pi/2,3\pi/2$ and $3\pi/2,2\pi$ I am still getting a result of zero thus:- $$ -(2/K) \tanh^{-1} ( \frac{\tan(2\pi/2)}{K}) +(2/K) \tanh^{-1} ( \frac{\tan(3\pi/4)}{K}) + $$ $$ -(2/K) \tanh^{-1} ( \frac{\tan(3\pi/4)}{K}) +(2/K) \tanh^{-1} ( \frac{\tan(\pi/4)}{K}) +$$ $$ -(2/K) \tanh^{-1} ( \frac{\tan(\pi/4)}{K}) +(2/K) \tanh^{-1} ( \frac{\tan(0)}{K}) $$ leading to $$ -(2/K) \tanh^{-1} ( \frac{0}{K}) +(2/K) \tanh^{-1} ( \frac{-1}{K}) + $$ $$ -(2/K) \tanh^{-1} ( \frac{-1}{K}) +(2/K) \tanh^{-1} ( \frac{1}{K}) +$$ $$ -(2/K) \tanh^{-1} ( \frac{1}{K}) +(2/K) \tanh^{-1} ( \frac{0}{K}) $$ which gives the same result of zero. I would be grateful if somebody could tell me where I am going wrong here? EDIT 1: I have accepted the solution provided kindly by Dr. MV. 
I have posted a related question which seeks to understand where my original evaluation of the definite integrand goes wrong. EDIT 2: In the related question, comments from user mickep pointed out that the wrong partitions had been used in the original evaluation. Using the correct partitions ($0...\pi$) and ($\pi...2\pi$) leads to the correct answer, for $A=B=0$, of $2\pi$ (as described in my self-answer to that same question). EDIT 3: It was pointed out by user mickep in comments to the related question that the Wolfram Alpha solution $$ -\left(\frac{2}{K_1}\right) \tanh^{-1} \left( \frac{A-(B-1)\tan(\theta/2)}{K_1}\right) $$ where $K_1= \sqrt{A^2 + B^2 -1}$, is not as friendly as an alternative solution (reported by user mickep), which is: $$ +\left(\frac{2}{K_2}\right) \arctan \left( \frac{A+(1-B)\tan(\theta/2)}{K_2}\right) $$ where $K_2 = \sqrt{1 - A^2 - B^2}$.
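For what it's worth, both antiderivatives can be cross-checked numerically: for $A^2+B^2<1$ the definite integral has the closed form $2\pi/\sqrt{1-A^2-B^2}$, and the trapezoidal rule (which is spectrally accurate for smooth periodic integrands) reproduces it to machine precision. A quick sketch, with the $A$, $B$ values picked arbitrarily:

```python
import numpy as np

def integral(A, B, N=4096):
    # uniform grid over one period; the endpoint is dropped because the
    # integrand is 2*pi-periodic (periodic trapezoidal rule)
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    return np.sum(1.0 / (1.0 + A * np.sin(theta) + B * np.cos(theta))) * (2.0 * np.pi / N)

def closed_form(A, B):
    return 2.0 * np.pi / np.sqrt(1.0 - A**2 - B**2)

# A = B = 0 recovers 2*pi, resolving the puzzle above
err0 = abs(integral(0.0, 0.0) - 2.0 * np.pi)
err1 = abs(integral(0.1, 0.2) - closed_form(0.1, 0.2))
```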
The hamiltonian $$H=AJ_z^2=A\left(\sum_j S_z^{(j)}\right)^2\tag1$$is a function of the individual $z$ spin projections $S_z^{(j)}$, and all of those commute. Therefore, the eigenstates will be product states of the form $$|\Psi\rangle=|a_1\rangle\otimes\cdots\otimes|a_N\rangle,\tag2$$ where each $|a_j\rangle$ is either $|\!\uparrow\rangle$ or $|\!\downarrow\rangle$. As such, the system is easily solvable, and you simply need to phrase your questions correctly. The expectation value $\langle J_x^2\rangle$, for example, is constant and equal to $N$ for all eigenstates of $H$ of the form (2) (though there are degeneracies and superpositions of such eigenstates may yet be squeezed). Edit: OK, I think I know what's confusing you. In particular, from your question edit: Starting with an initial (Q-)distribution in phase space, the distribution evolves with precession frequency proportional to $J_z$. But from there I do not see how the variance along $\hat z$ would change. The second paper starts off the atoms in a spin-coherent-state cloud along the $+x$ pole on the Bloch sphere, and then lets them evolve according to the hamiltonian (1). This means that points closer to the $+z$ pole have more positive energy, accumulate more phase on their up components than their down ones, and therefore rotate towards the right. Similarly, points closer to the $-z$ pole have more negative energy, accumulate more phase on their down components, and rotate towards the left. The net effect of this is to shear the cloud on the Bloch sphere, whilst preserving its height. In particular, the $z$ marginal, i.e. the distribution of ups and downs in a $z$ measurement, is not affected, as it should. As a consequence, the variance in $z$ is not (yet) affected. You can see, of course, that the cloud is now longer (in that its variance in $y$ is now much greater), and also thinner, in a slightly off-diagonal direction. 
Thus, to obtain squeezing in the $z$ direction, you need to rotate slightly about the $x$ direction. By how much, of course, is a question of exactly what the circumstances are, and it's a function of the product of $A$ and the interaction time; if you want the details, you should first give Kitagawa and Ueda's paper a thorough read, I think. This is quite a popular protocol for obtaining spin-squeezed states, I think: Prepare a spin coherent state, shear it using interactions, and finally rotate it to the measurement direction. The third step is crucial, because the shear does not affect the width along the interaction direction, which is what I was nagging you about in the first part of this answer. Once you rotate it, though, you can get reduced (or vastly increased) variances in any component.
Given triples of $n$ floating point values $$(\min_1, \max_1, w_1), \dots, (\min_n, \max_n, w_n)$$ and a value $V$, what is a good algorithm to assign values $v_i$ to each of the triples such that the following conditions hold? $\min_i \le v_i \le \max_i$. $\displaystyle\sum_{i=1}^n v_i = V$. $\dfrac{v_i}{V}$ is as close as possible to $\dfrac{w_i}{W}$, where $W = \displaystyle\sum_{i=1}^n w_i$, i.e., minimize $\displaystyle\sum_{i=1}^n \left| \frac{v_i}{V} - \frac{w_i}{W} \right|$. I understand the above can be solved via an LP solver, but I am looking for an algorithm that may not return the optimal assignment but that is deterministic, returns close-to-optimal solutions, runs in $O(n)$ time, and returns solutions that are stable in the sense that if two instances of the problem definition are "close to each other" then the solutions tend to be close to each other. It seems like there should be a greedy approach that performs sufficiently well, in which the first step is assigning the minima to $v_i$ and then proportioning out the "slack", but I am not sure how to deal with the maxima while doing this.
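One way to flesh out the greedy idea sketched in the question: start everyone at its minimum, hand out the remaining slack in proportion to the weights, clamp whoever would exceed its maximum, and repeat on the survivors. This is only a sketch, not the asked-for $O(n)$ algorithm; each pass clamps at least one item, so the worst case is $O(n^2)$, though a few passes usually suffice. Feasibility ($\sum_i \min_i \le V \le \sum_i \max_i$) is assumed:

```python
def proportional_fill(mins, maxs, w, V, tol=1e-12):
    """Greedy waterfilling: min_i <= v_i <= max_i, sum v_i = V,
    slack shared in proportion to w_i among unclamped items."""
    n = len(w)
    v = list(mins)                       # step 1: everyone gets its minimum
    free = set(range(n))
    slack = V - sum(v)
    while slack > tol and free:
        wsum = sum(w[i] for i in free)
        clamped = set()
        for i in free:
            share = slack * w[i] / wsum  # proportional share of the slack
            if v[i] + share >= maxs[i]:  # would overshoot its maximum
                clamped.add(i)
        if not clamped:                  # nobody clamps: distribute and stop
            for i in free:
                v[i] += slack * w[i] / wsum
            break
        for i in clamped:                # clamp, then retry with the rest
            v[i] = maxs[i]
        free -= clamped
        slack = V - sum(v)
    return v

v = proportional_fill([0, 0, 0], [1, 10, 10], [1, 1, 2], 8)
```

On this toy instance item 0 hits its cap of 1 and the remaining slack of 7 is split 1:2 between the other two items.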
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let $(V, b)$ be an $n$-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group $O(V, b)$ is a composition of at most $n$ reflections. The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor, but otherwise I don't see why it would be something natural to come up with. The only places I have used it are deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$ aka functions on $M$ with values in $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$ aka bundle-homs $TM \to E$) Then this is the 0-level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, the Riemann curvature tensor Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", and comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
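The order-6 claim is easy to machine-check. A small sketch, representing permutations of $\{0,1,2,3\}$ as tuples and generating the subgroup by naive closure:

```python
def compose(p, q):
    # (p . q)(i) = p[q[i]], composition of permutations written as tuples
    return tuple(p[i] for i in q)

def generated_subgroup(gens):
    # naive closure: keep multiplying until nothing new appears;
    # this terminates because S_4 is finite
    elems = {tuple(range(4))}          # start from the identity
    frontier = set(gens)
    while frontier:
        new = set()
        for g in frontier:
            for h in elems | frontier:
                for x in (compose(g, h), compose(h, g)):
                    if x not in elems and x not in frontier:
                        new.add(x)
        elems |= frontier
        frontier = new
    return elems

t = (1, 0, 2, 3)   # the transposition (1 2), acting on {0, 1, 2, 3}
c = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
H = generated_subgroup({t, c})   # a copy of S_3 fixing the last point
```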
If you got this from Rudin (it is Exercise 8, Ch. 2 in his Real & Complex Analysis), here is his personal answer (excerpted from Amer. Math Monthly, Vol. 90, No.1 (Jan 1983) pp. 41-42). He works with the unit interval $[0,1]$, but of course this can be extended to $\mathbb R$ by doing the same thing in each interval (and by scaling these replications appropriately you can get the final set with finite measure). Anyways, here's how it goes: "Let $I=[0,1]$, and let CTDP mean compact totally disconnected subset of $I$, having positive measure. Let $\langle I_n\rangle$ be an enumeration of all segments in $I$ whose endpoints are rational. Construct sequences $\langle A_n\rangle,\langle B_n\rangle$ of CTDP's as follows: Start with disjoint CTDP's $A_1$ and $B_1$ in $I_1$. Once $A_1,B_1,\dots,A_{n-1},B_{n-1}$ are chosen, their union $C_n$ is CTD, hence $I_n\setminus C_n$ contains a nonempty segment $J$ and $J$ contains a pair $A_n,B_n$ of disjoint CTDP's. Continue in this way, and put$$ A=\bigcup_{n=1}^{\infty}A_n. $$ If $V\subset I$ is open and nonempty, then $I_n\subset V$ for some $n$, hence $A_n\subset V$ and $B_n\subset V$. Thus$$ 0<m(A_n)\leq m(A\cap V)<m(A\cap V)+m(B_n)\leq m(V); $$the last inequality holds because $A$ and $B_n$ are disjoint. Done. The purpose of publishing this is to show that the highly computational construction of such a set in [another article] is much more complicated than necessary." Edit: In his excellent comment below, @ccc managed to isolate the necessary components of my solution, and after incorporating his observation it has been greatly simplified. (Actually, after trimming the fat, I've realized that it is actually not entirely dissimilar from Rudin's.) Here it is: Let $\{r_n\}$ be an enumeration of the rationals, let $V_1$ be a segment of finite length centered at $r_1$, and let $V_n$ be a segment of length $m(V_{n-1})/3$ centered at $r_n$. 
Set$$ W_n=V_n-\bigcup_{k=1}^{\infty}V_{n+k}, $$and observe that\begin{equation}m(W_n)\geq m(V_n)-\sum_{k=1}^{\infty}m(V_{n+k})=m(V_n)-m(V_n)\sum_{k=1}^{\infty}3^{-k}=\frac{m(V_n)}{2}.\end{equation}In particular, $m(W_n)>0$. For each $n$, choose a Borel set $A_n\subset W_n$ with $0<m(A_n)<m(W_n)$. Finally, put $A=\bigcup_{n=1}^{\infty}A_n$. Because $A_n\subset W_n$ and the $W_n$ are disjoint, $m(A\cap W_n)=m(A_n)$. That is to say,$$ 0<m(A\cap W_n)<m(W_n) $$for every $n$. But every interval contains a $W_n$, so $A$ meets the criteria, and has finite measure (specifically, $m(A)\leq\sum_n m(V_n)=\frac{3}{2} m(V_1)<\infty$). As a curiosity, here's my own "unnecessarily computational" way (though it's not quite as lengthy as that in the article Rudin was referring to), which I can't resist including because I slaved over it when I first came across this problem, before finding Rudin's solution: Let $\{r_n\}$ be an enumeration of the rationals, and put$$ V_n=\left(r_n-3^{-n-1},r_n+3^{-n-1}\right),\qquad W_n=V_n-\bigcup_{k=1}^{\infty}V_{n+k}. $$Observe that\begin{equation}m(W_n)>m(V_n)-\sum_{k=1}^{\infty}m(V_{n+k})=m(V_n)-m(V_n)\sum_{k=1}^{\infty}3^{-k}=\frac{m(V_n)}{2}.\qquad\qquad(1)\end{equation}(We have strict inequality because there exist rationals $r_i$, with $i>n$, in the complement of $V_n$.) For each $n$, let $K_n$ be a Borel set in $V_n$ with measure $m(K_n)=m(V_n)/2$. Finally, put$$A_n=W_n\cap K_n,\qquad A=\bigcup_{n=1}^{\infty}A_n.$$ To prove that $A$ has the desired property, it is enough to verify that the inequalities$$0<m(A\cap V_n)<m(V_n)\qquad\qquad(3)$$hold for every $n$. (This is because every interval contains a $V_n$.) For the left inequality, it is enough to prove that $m(A_n\cap V_n)=m(A_n)=m(W_n\cap K_n)>0$. This follows from the relations$$m(W_n\cup K_n)\leq m(V_n)<m(W_n)+m(K_n)=m(W_n\cup K_n)+m(W_n\cap K_n),$$the second inequality being a consequence of (1) and the fact that $m(K_n)=m(V_n)/2$.
For the right inequality of (3), observe that $V_n\subset W_i^c$ for $i<n$, and that therefore$$ m(A\cap V_n)=m\left(\bigcup_{k=0}^{\infty}A_{n+k}\cap V_n\right)\leq\sum_{k=0}^{\infty}m(K_{n+k}\cap V_n) $$$$ <\sum_{k=0}^{\infty}m(K_{n+k})=\sum_{k=0}^{\infty}\frac{m(V_{n+k})}{2}=\sum_{k=0}^{\infty}\frac{m(V_n)}{2^{k+1}}=m(V_n). $$The strict inequality above follows from three observations: (i) $m(K_i)>0$ for every $i$; (ii) $K_i\subset V_i$; and (iii) there exist neighborhoods $V_i$, with $i>n$, that are contained entirely in the complement of $V_n$. So $A$ meets the criteria (and also has finite measure).
When pricing a spread option on two different prices, one can use Kirk's approximation combined with Margrabe's formula (https://en.wikipedia.org/wiki/Margrabe%27s_formula). But what if I am pricing an option that involves 3 or 4 different prices? Is there a closed-form formula? Otherwise, how can I price it? The payoff of my strategy looks like this: $$\mathop{\mathbb{E}} \left[\left(\alpha K+\beta F_1(t_1) +\gamma F_1(t_2)+\zeta F_2(t_1) +\lambda F_2(t_2)\right)^+\right]$$ where $K$ is the strike, and $F_1$ and $F_2$ are the prices of two different assets.
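With four correlated lognormal legs there is no exact closed form in general (Kirk/Margrabe-style approximations cover two legs; moment matching or conditioning can extend them approximately), so the workhorse is Monte Carlo. A sketch under driftless Black-style lognormal dynamics; every number below (forwards, vols, correlation, weights) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical inputs: two forwards observed at two dates, correlation rho
F1_0, F2_0 = 100.0, 95.0
sig1, sig2, rho = 0.30, 0.25, 0.6
t1, t2 = 0.5, 1.0
alpha, K = 1.0, 5.0
beta, gamma, zeta, lam = 1.0, -0.5, -1.0, 0.5

n = 200_000
z = rng.standard_normal((4, n))
w1a, w2a = z[0], rho * z[0] + np.sqrt(1 - rho**2) * z[1]   # drivers up to t1
w1b, w2b = z[2], rho * z[2] + np.sqrt(1 - rho**2) * z[3]   # t1 -> t2 increment

def step(F, sig, dt, w):
    # one lognormal martingale step: E[F_next] = F
    return F * np.exp(-0.5 * sig**2 * dt + sig * np.sqrt(dt) * w)

F1_t1 = step(F1_0, sig1, t1, w1a)
F2_t1 = step(F2_0, sig2, t1, w2a)
F1_t2 = step(F1_t1, sig1, t2 - t1, w1b)
F2_t2 = step(F2_t1, sig2, t2 - t1, w2b)

payoff = np.maximum(
    alpha * K + beta * F1_t1 + gamma * F1_t2 + zeta * F2_t1 + lam * F2_t2, 0.0
)
price = payoff.mean()   # undiscounted; multiply by the discount factor as needed
```

By Jensen's inequality the price must exceed the intrinsic value $(\alpha K+(\beta+\gamma)F_1(0)+(\zeta+\lambda)F_2(0))^+$, which is a useful sanity check on any implementation.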
I have a situation where I'm interested in solving the same ODE many times, from different initial locations. More precisely, I'm interested in solving an ODE of the form \begin{align} \frac{dx}{dt} &= f(x) \\ x(0) &= a \in \mathbb{R}^n \end{align} where I know that this ODE has a $T$-periodic orbit $\mathcal{A}$, i.e for $a \in \mathcal{A}$, we have that \begin{align} x(0) &= a \implies x(T) = a \end{align} My problem is such that at each iteration $k$, I am given a point $a_k \in \mathcal{A}$, and need to output where the ODE takes $a_k$ after time $t$, i.e. \begin{align} \frac{dx}{dt} &= f(x) \\ x(0) &= a_k \\ \implies b_k &\triangleq x(t) \end{align} and I want to report $b_k$. My question is: is there a better way of doing this than using an ODE solver at every step $k$? For example, if possible, I would prefer to incur a one-off cost at the start of my algorithm which allows me to map directly from $a_k$ to $b_k$, rather than having to repeatedly solve the same problem. Additional details: The system I'm interested in is a two-dimensional Hamiltonian system, given in coordinates as \begin{align} \frac{dq}{dt} &= |p|^{\alpha - 1} \cdot \text{sign} (p) \\ \frac{dp}{dt} &= -|q|^{\beta - 1} \cdot \text{sign} (q) \end{align} where $\alpha, \beta > 1$ such that $1/\alpha + 1/\beta = 1$. The standard integrator for such systems is the leapfrog integrator, which I would use if I was solving for a single trajectory. As written, the system is non-smooth - I would rather not worry about that right now, and if it makes the question easier to answer, please feel free to assume that everything involved is smooth. When $\alpha = \beta = 2$ things are much easier, and I can handle this situation fine - I'm asking about the other cases.
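One way to get the "one-off cost, then cheap lookups" behaviour: since every $a_k$ lies on the same closed orbit $\mathcal{A}$, the flow restricted to $\mathcal{A}$ is just a time shift along a single precomputed trajectory. Tabulate one period once (with leapfrog, say), locate $a_k$ on the table, and shift the index by $t/\Delta t$. A sketch for the easy case $\alpha=\beta=2$, chosen here only because the exact flow is a rotation and the answer can be checked; the grid size is arbitrary:

```python
import numpy as np

# one-off cost: tabulate one period of the orbit with leapfrog
# (alpha = beta = 2, i.e. dq/dt = p, dp/dt = -q, period T = 2*pi)
T = 2.0 * np.pi
N = 4096
dt = T / N

q, p = 1.0, 0.0               # a point on the orbit
table = np.empty((N, 2))
for j in range(N):
    table[j] = q, p
    p -= 0.5 * dt * q         # half kick
    q += dt * p               # drift
    p -= 0.5 * dt * q         # half kick

def flow(a, t):
    """Per-query cost: locate a on the tabulated orbit, then shift by t."""
    j = np.argmin(np.sum((table - a) ** 2, axis=1))   # nearest table entry
    k = (j + int(round(t / dt))) % N                  # shift along the orbit
    return table[k]

b = flow(np.array([1.0, 0.0]), np.pi / 2)   # exact flow gives (0, -1)
```

The lookup error is $O(\Delta t)$ from the nearest-neighbour search; interpolating between table entries, or storing the arc-length/time parametrization, would reduce it further.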
Let $f(x) = \sum\limits_{n=0}^{\infty} a_n x^n$ be finite or infinite, where $x$ is a real number and $(a_n)_n$ is an infinite sequence of positive integers. For any integer $n$, let $(a_{L,n})_{L}$ be an infinite sequence of integers which equals $a_n$ for any $L$ large enough and for any $L$, let $$ f_L(x) = \sum\limits_{n=0}^{\infty} a_{L,n} x^n. $$ Is it always true that $$ \lim_{L \rightarrow \infty} f_L(x) = f(x)? $$ Is there a famous theorem that can be applied from which the identity follows? I can interpret "$(a_{L,n})$ equals $(a_n)$ for $L$ large enough" either as "for every $n$ there exists $k$ such that $a_{L,n}=a_n$ for $L>k$" or as "there exists $k$ such that for $L>k$ and all $n$, $a_{L,n}=a_n$". In the first case, your statement is false. Let $a_n=1$ for all $n$, and let $a_{L,n}=a_n$ for $n\neq L$ while $a_{L,L}=L!$. For each fixed $n$ we then have $a_{L,n}=a_n$ whenever $L>n$, and $\sum a_n x^n=\frac1{1-x}=f(x)$ for $-1<x<1$, but $f_L(x)=f(x)+(L!-1)x^L$. For any fixed $0<x<1$ we have $L!\,x^L\to\infty$, so $f_L(x)\to\infty\neq f(x)$. In the second case, your statement is true, as $a_{L,n}=a_n$ for $L>k$ and for all $n$. So for every $L>k$, $f_{L}(x)=f(x)$. Then of course $\lim_{L\to\infty} f_L(x)=f(x)$. Or do you have another definition in mind?
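A quick numerical illustration of why the pointwise-in-$n$ reading fails: perturb a single coefficient for each $L$ (here by $L!$, a choice made purely for this sketch). Every fixed coefficient is eventually correct, yet $f_L(x)$ runs away from $f(x)$:

```python
from math import factorial

# f(x) = sum_n x^n = 1/(1-x) for |x| < 1 (all a_n = 1);
# a_{L,n} agrees with a_n except a_{L,L} = L!, so each fixed coefficient
# is correct once L > n, yet
# f_L(x) = f(x) + (L! - 1) * x**L diverges for fixed 0 < x < 1.
x = 0.5
f = 1.0 / (1.0 - x)
f_L = [f + (factorial(L) - 1) * x**L for L in range(1, 31)]
```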
I've always wondered why one uses squared or absolute returns to determine whether volatility modeling is required for a return series. We understand that there are various tests for autocorrelation and conditional heteroskedasticity. However, I don't quite grasp the concept behind it. Can anyone kindly explain the statistical intuition behind using squared/absolute returns to determine whether a volatility representation is needed? Thank you. To simplify, consider the errors rather than the returns. The variance is effectively the average of the squared errors, while absolute deviation is the average of the absolute errors. So plotting the squared errors or absolute errors over time could give an indication of whether the variance or absolute deviation is constant over time. Since variance is more commonly the practical focus, one approach would be to simply regress the squared errors on $p$ of their lags. This is the ARCH($p$) model. GARCH($p$,$q$) introduces an additional term, which has the effect of reducing the need for $p$ to be large. Simple... because you are interested in deviations from a metric, and not whether it deviates above or below. The very definition of volatility is a "measure of deviation". Squaring returns or using the absolute values just eases the calculation to arrive at a deviation measure. Otherwise volatility would have to be calculated in other ways, as positive and negative returns would introduce side effects that affect the volatility computation. Also, since we can often assume the average of short-term returns in the long run to be zero, the historical volatility is equal to $\hat{\sigma}_T^2=\frac{\sum_{i=1}^T{r_i^2}}{T-1}$. So to study the volatility process we study the squared return process, which is a good proxy.
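The intuition shows up directly in simulation: for an ARCH-type process the returns themselves are serially uncorrelated, but their squares are not, and that is exactly what the squared-return diagnostics pick up. A sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate ARCH(1): r_t = sigma_t * e_t with sigma_t^2 = w + a * r_{t-1}^2
n, w, a = 20_000, 0.1, 0.5
r = np.zeros(n)
for t in range(1, n):
    sigma2 = w + a * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2) * rng.standard_normal()

def acf1(x):
    # lag-1 sample autocorrelation
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))

rho_returns = acf1(r)        # near zero: no linear predictability
rho_squared = acf1(r ** 2)   # clearly positive: volatility clustering
```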
This sounds like homework, and I do not wish to ruin the value of your homework for you by giving answers too quickly; the point of homework is to exercise and solidify the skills learned in the lesson. So I will approach this step by step; please follow along carefully, and try to find an answer yourself before I start considering possible answers. First, note that the problem makes heavy use of an asymmetrical binary relation, represented by $<$. This is apt to confuse the student, so I will instead use $\prec$, which probably has no particular associations for students at this level. The point is, we are not talking about the familiar “less than” relation, nor necessarily about numbers. We are instead talking about (1) some binary relation, which (2) is asymmetrical, i.e. $a\prec b$ is not the same as $b\prec a$, and which (3) is well-defined for pairs of elements drawn from some set. It doesn’t have to be a set of numbers; we could be talking about graphs, matrices, vectors, sets … universities … bananas … any number of things. Note, also, that $\prec$ does not have to necessarily be defined for all pairs of elements; there could be some pairs where it’s simply meaningless. More on that below. So. We need to find a set of elements, such that some relation $\prec$ is defined for at least some pairs of elements, and that satisfies some of these properties but not others. So let’s take a closer look at the properties. (1) $\forall x,y,z(x\prec y\land y\prec z\rightarrow x\prec z)$ (transitivity) Meaning, elements can to some extent be ordered: if $x$ is “before” or “over” or “colder than” or “hoopier than” $y$, and the same is true of $y$ with respect to $z$, then the same is true of $x$ w.r.t. $z$. Note that this doesn’t necessarily mean that all of the elements can be ordered relative to each other. For example, in a family or other tree structure, one node can be “an ancestor of” another, and this is transitive, but siblings, cousins, etc. 
cannot be ordered in this way. We say the set is partially ordered, but not necessarily totally ordered. (2) $\forall x\neg(x\prec x)$ (irreflexivity) This just means that no element can be its own "ancestor", thus ruling out, for example, directed graphs that contain any cycles. (3) $\forall x,y(x\prec y\lor x=y\lor y\prec x)$ (linearity) This means that the elements can all be placed on a line, so that the set is totally ordered. This is the one we need to avoid, so we can't use any set that is totally ordered by $\prec$. (4) $\forall x\exists y(x\prec y)$ This means that every element has some other element that is, e.g., "scarier than" it. Since (2) ruled out cycles, this implies that the set must be infinite; otherwise, some element (possibly more than one) would be as "scary" as possible, with nothing coming after it. Note that this doesn't necessarily work in both directions. For the set of natural numbers, every number has a successor, but $1$ does not have a predecessor. At this point, hopefully, the student's thinking has been loosened up a bit, and concrete ideas have become more abstract. So, for example, can we use the integers, or rationals, or real numbers, with $<$? No, because that satisfies all four properties, including (3) which we are trying to avoid. What about the natural numbers, with $\prec$ meaning "is a divisor of"? Does (1) hold? How about (2)? Ah, (2) does not hold, no good; every number divides itself. Okay then, "is a divisor of and is less than". Now (2) holds. (3)? (4)? How about members of a family and "is an ancestor of"? (1) is satisfied; (2) is satisfied, barring time-traveling incest; (3) fails, which is what we want; but (4) fails too, because not everyone has descendants. Consider an infinite directed graph in the shape of a pyramid. We have one "root" node, with three descendants. Each of those descendants likewise has three descendants of its own, and this continues forever. This is like the family above, but satisfies (4) as well.
How about sets? Subsets of the natural numbers, say, with $\prec$ defined as “is a proper subset of”. Does (1) hold? (2)? (3)? (4)?
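Once candidates are on the table, the first three properties are cheap to machine-check on a finite sample. A sketch for the "properly divides" relation on $\{1,\dots,50\}$; property (4) concerns the whole infinite set, but $x \prec 2x$ settles it by hand:

```python
def prec(x, y):
    # "x properly divides y"
    return x != y and y % x == 0

S = range(1, 51)
transitive  = all(prec(x, z) for x in S for y in S for z in S
                  if prec(x, y) and prec(y, z))
irreflexive = all(not prec(x, x) for x in S)
linear      = all(prec(x, y) or x == y or prec(y, x) for x in S for y in S)
# transitivity and irreflexivity hold, linearity fails (e.g. 2 vs 3),
# and (4) holds on all of N since x always properly divides 2x
```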
21 Threads found on edaboard.com: Cross Correlation Matlab hiii Sorry once again .... I could not found function 'modnorm' and 'qammod' in the function func_ofdm_generate....... Please post your complete code then only one can run this and give you some feedback..... Sorry for that I went to mathworks site I think these are the function from "Communications System Toolbox" ...... I don't have th Digital Signal Processing :: 10-09-2013 14:04 :: milind.a.kulkarni :: Replies: 4 :: Views: 1070 Hello everyone, I have a following problem: given cross- correlation function {C_{12}}{\left(\tau\right)} associated with a pair of time functions {f}_{1}(t) and {f}_{2}(t): {C_{12}}{\left(\tau\right)}={3 cos}^{2 }{\sigma}{t sin}{\sigma} {t} if {f}_{1}(t) =\frac{ 1}{2 }+\frac{1}{4}cos {\s Mathematics and Physics :: 07-01-2013 21:06 :: anderyza :: Replies: 1 :: Views: 1067 I need to implement following expressions for the GRPE in matlab lembda = inv(E*inv((A-T'*inv(R)*T))*E) *c; g = inv(A-T'*inv(R)*T)*E*lembda f = inv(R)*T*g where R:auto correlation of channel output T: cross correlation of channel out put and input data A:auto correlation of input data i only (...) Digital communication :: 09-25-2012 17:32 :: xplore29 :: Replies: 0 :: Views: 818 See what I am getting is that u dont know what cross correlation means... When there u find the cross correlation between signals A and B, U are actually judging the how closely A and B are related. =xcorr(A,B,maxlag); stem(lag,corr); what it does that with different- different lags compares A and B. and gives u the (...) 
Elementary Electronic Questions :: 05-16-2011 10:49 :: Conquer27 :: Replies: 3 :: Views: 15740

i want cross correlation matlab code for m-seq: x=; y=; and for gold codes: x=; y=; regards
Digital communication :: 05-05-2012 12:09 :: ek4m2005 :: Replies: 2 :: Views: 4041

hi, i want to find the autocorrelation & cross correlation in the horizontal, vertical & diagonal directions of the output image obtained after quantizing the image (this image is obtained by taking the FFT of the image & then finding its magnitude) into four levels (quantization is done by dividing the minimum-to-maximum (...)
Digital Signal Processing :: 09-23-2011 16:33 :: ganesh singadkar :: Replies: 0 :: Views: 2346

Can someone tell how to do the cross-correlation of two speech signals (each of 40,000 samples) in matlab without using the inbuilt function xcorr, and how do I find the correlation coefficient? Please be tolerant. Thanks in advance.
Digital Signal Processing :: 09-13-2011 04:32 :: electricalpeople :: Replies: 4 :: Views: 24577

Download this source code: Sagar's Lab: Object tracking using corss correlation
---------- Post added at 20:18 ---------- Previous post was at 20:16 ----------
Use the matlab image and video acquisition toolbox. study t
PC Programming and Interfacing :: 07-11-2011 18:18 :: sagar474 :: Replies: 7 :: Views: 3667

Hi, my problem is: using matlab, how can I plot the result of an fft in a bode diagram? I have defined a function that after many elaborations brings out two vectors and then takes the cross-correlation. Then I take the fft of the cross-correlation, obtaining complex conjugate numbers; I need to have the (...)
ASIC Design Methodologies and Tools (Digital) :: 06-17-2011 09:20 :: always84 :: Replies: 0 :: Views: 2558

I've added noise to an ecg signal, taking a specified level of snr as input and assuming the original signal is clean, and obtained the snr after denoising using cross correlation. Now I have to determine the snr of the original data. Are there any algorithms to compute snr when only the data is given? Any help, algorithm or matlab code for this calculation is (...)
Digital Signal Processing :: 03-26-2011 11:51 :: rsram :: Replies: 0 :: Views: 1009

Currently I want to make a matlab simulation of the LMMSE channel estimation method in OFDM systems. In the LMMSE method (Wiener filtering), to find the channel estimate we have to find the coefficient c = (Rxx^-1)*Rxy, where Rxx is the autocorrelation matrix and Rxy is the cross-correlation matrix. What I want to ask is how to find these (...)
Digital communication :: 11-14-2010 15:11 :: MatsuriSola :: Replies: 0 :: Views: 1635

Use the cross correlation command in matlab, i.e. xcorr(y,z). The peak will indicate the presence of the signal of interest in that signal.
Digital Signal Processing :: 07-07-2010 07:33 :: MHanif :: Replies: 1 :: Views: 2534

Hello, I'm trying to estimate the time delay between signals using cross correlations. Unfortunately I've encountered a few problems I'm unable to handle. Here is the code of what I've done. According to an article I've read, it is supposed to be working. I'm expecting, in the last figure, one peak which should indicate the delay between signals x1 and xr, but
Digital Signal Processing :: 04-10-2010 23:29 :: syrioosh :: Replies: 0 :: Views: 3544

Hi, the template is the pattern you want to find in the image. For example, if you are looking for circles in an image, a smaller image containing only a circle is the template. Once you calculate the cross-correlation between a template T and an image I, you obtain an image X.
The location of the pixel with the highest value in X will give you t
Digital Signal Processing :: 12-13-2009 12:08 :: s_cihan_tek :: Replies: 2 :: Views: 9712

Hi all, I am having a bit of a problem computing the cross correlation between two electric-field patterns of antennas. I will try to state my problem clearly: I have E-field values of an antenna for each value of phi and theta. The E-field values are complex numbers and are given as dB V/m. So in matlab I have a matrix of dimensions (...)
Electromagnetic Design and Simulation :: 10-14-2007 13:46 :: bilal5836 :: Replies: 1 :: Views: 1336

Hi all, I am having a bit of a problem computing the cross correlation between two electric-field patterns of antennas. I will try to state my problem clearly: I have E-field values of an antenna for each value of phi and theta. The E-field values are complex numbers and are given as dB V/m. So in matlab I have a matrix of dimensions (...)
RF, Microwave, Antennas and Optics :: 10-14-2007 13:42 :: bilal5836 :: Replies: 1 :: Views: 1504

Hi guys, I have 2 channels of signals (multiple frequencies, with time and voltage recorded) from 2 microphones at different locations. I would like to find out the magnitude ratio and phase difference between the 2 signals. Can anyone tell me the procedure for getting these? I know I need to do the fft and cross correlation, but I just don't know wha
Digital Signal Processing :: 10-01-2007 15:29 :: applejuice :: Replies: 2 :: Views: 3146

hi all, i need matlab autocorrelation and cross correlation function codes; can anybody help me please?
Digital communication :: 02-22-2007 12:18 :: tomshack :: Replies: 6 :: Views: 5326

Hi, in matlab you can use the command "xcorr" to find the correlation. The command takes one or two vectors and calculates the correlation and cross-correlation. But the problem is that the output is a vector!!! You have two solutions to find the correlation matrix. 1- change the elements of (...)
Elementary Electronic Questions :: 12-20-2005 22:50 :: dspman :: Replies: 3 :: Views: 3033

Hi abhi, from the matlab help: XCORR(...,MAXLAG) computes the (auto/cross) correlation over the range of lags -MAXLAG to MAXLAG, i.e., 2*MAXLAG+1 lags. If missing, the default is MAXLAG = M-1. Probably you are getting this message because the second argument is a scalar. Regards, Z
Digital Signal Processing :: 11-01-2005 21:24 :: zorro :: Replies: 1 :: Views: 5189
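Several of the threads above ask how to locate a time delay with cross-correlation. As a rough sketch in NumPy rather than MATLAB's xcorr (the signal length and delay below are made up for illustration), the peak of the full cross-correlation sits at the true lag:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                     # reference signal
delay = 37
y = np.concatenate([np.zeros(delay), x])[:500]   # x delayed by 37 samples

# Full cross-correlation of y against x; index k corresponds to
# lag k - (len(x) - 1), so the argmax recovers the delay.
corr = np.correlate(y, x, mode='full')
lag = corr.argmax() - (len(x) - 1)
print(lag)                                       # 37
```

The same idiom is what MATLAB's `[c,lags] = xcorr(y,x); lags(argmax(c))` pattern computes; with broadband signals the correlation peak is sharp, while for narrowband signals it is ambiguous up to a period.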
I have a question about the end of my proof of Problem 1.3.10 in Guillemin and Pollack's Differential Topology:

Generalization of the Inverse Function Theorem: Let $f: X \rightarrow Y$ be a smooth map that is injective on a compact submanifold $Z$ of $X$. Suppose that for all $x \in Z$, $$df_x: T_x(X) \rightarrow T_{f(x)}(Y)$$ is an isomorphism. Then $f$ maps $Z$ diffeomorphically onto $f(Z)$. Why? Prove that $f$ maps an open neighborhood of $Z$ in $X$ diffeomorphically onto an open neighborhood of $f(Z)$ in $Y$.

Suppose we have $$U_i = \bigcup_{z\in Z} B_i(z),$$ where each $B_i(z)$ is an open neighborhood centered at $z$, with $B_i(z) \rightarrow z$ as $i \rightarrow \infty$. Place on $X$ the metric induced by $\mathbb{R}^N$ and take $B_i(z) = B(z, 1/i)$ for each $i \in \mathbb{N}$. Clearly $Z \subset U_i$ and $U_i$ is open. If $f$ is not one-to-one on any neighborhood of $Z$, we can find sequences of points $\{a_i\}, \{b_i\}$ with $a_i, b_i \in U_i$ such that $a_i \neq b_i$ and $f(a_i) = f(b_i)$. Since $Z$ is compact, the sequences $\{a_i\}, \{b_i\}$ are bounded, so there exist convergent subsequences $a_i \to a$, $b_i \to b$. Then I got lost here: how can I show that $a, b \in Z$?
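For what it's worth, here is one standard way to finish this step, sketched under the setup above (the Euclidean metric $d$ on $X \subset \mathbb{R}^N$ and the balls $B(z, 1/i)$), after passing to a further subsequence:

```latex
a_i \in U_i \;\Longrightarrow\; \exists\, z_i \in Z \text{ with } d(a_i, z_i) < \tfrac{1}{i},
\qquad
d(a, z_i) \le d(a, a_i) + d(a_i, z_i) \xrightarrow{\; i\to\infty \;} 0 .
```

Hence $z_i \to a$, and since $Z$ is compact, in particular closed, the limit $a$ lies in $Z$; the same argument gives $b \in Z$.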
Are there any built-in functions for reducing lambda terms in Mathematica? If not, how might one do it? I've heard of Function, which is good for $\beta$-reduction, but it doesn't seem to properly handle $\alpha$-conversion. Regarding $\eta$-reduction, the best I could find was this snippet from A New Kind of Science (ref): rm[h_, v_] /; FreeQ[h, v] = k[h]

Specifically, what I need is the following: a function reduce[term_] which returns term after one step of reduction. It will do one of the following:
- $\alpha$-conversion (if necessary) followed by $\beta$-reduction
- $\eta$-reduction

Abstraction and application are represented as functions, each taking two arguments:
- Abstraction: $\lambda x.m$ is lam[x_, m_]
- Application: $n\ m$ is app[n_, m_]

Other details:
- If no reduction is possible, reduce returns term unchanged
- The reduction strategy is normal order
- For $\alpha$-conversion, you can choose any method to change the names of variables, such as appending a character (x → x´), incrementing a number (x1 → x2), or changing the name altogether (x → a)

Examples of use:

$\beta$-reduction:
reduce @ app[lam[x, lam[y, x]], a]
lam[y, a]

$\alpha$-conversion, then $\beta$-reduction:
reduce @ app[lam[x, lam[y, x]], y]
lam[y´, y]

More complex $\alpha$-conversion:
reduce @ lam[y´, app[lam[x, lam[y, x]], app[y´, y]]]
lam[y´, lam[y´´, app[y´, y]]] (* y became y´´ to avoid clashing with the first y´ *)

Normal order:
reduce @ lam[x, app[lam[y, y], x]]
lam[y, y] (* eta-reduction was done before beta-reduction *)

Lambda term in normal form:
reduce @ app[a, b]
app[a, b] (* Nothing can be reduced, so the term is left unchanged *)

If anything is difficult to understand, you're very welcome to ask about it.
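The requested behaviour can be prototyped outside Mathematica too. Below is a rough Python sketch of one normal-order step with capture-avoiding substitution; the tuple encoding of terms is my own choice, and it covers $\beta$-reduction with on-demand $\alpha$-conversion only (no $\eta$-reduction):

```python
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg)

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def fresh(name, avoid):
    # Alpha-conversion policy: append primes until the name is unused.
    while name in avoid:
        name += "'"
    return name

def subst(t, x, s):
    # Capture-avoiding substitution t[x := s].
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t                       # x is shadowed; nothing to substitute
    if y in free_vars(s):              # would capture: alpha-convert first
        z = fresh(y, free_vars(s) | free_vars(body))
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, s))

def reduce_step(t):
    # One leftmost-outermost (normal-order) beta step; returns t if normal.
    tag = t[0]
    if tag == 'app':
        if t[1][0] == 'lam':           # beta-redex at the head
            return subst(t[1][2], t[1][1], t[2])
        f = reduce_step(t[1])
        if f != t[1]:
            return ('app', f, t[2])
        return ('app', t[1], reduce_step(t[2]))
    if tag == 'lam':
        return ('lam', t[1], reduce_step(t[2]))
    return t
```

For instance, `reduce_step(('app', ('lam','x',('lam','y',('var','x'))), ('var','y')))` yields `('lam', "y'", ('var','y'))`, matching the second example above (with a prime instead of ´).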
Electronic Journal of Probability
Electron. J. Probab. Volume 22 (2017), paper no. 102, 40 pp.

Extreme statistics of non-intersecting Brownian paths

Abstract
We consider finite collections of $N$ non-intersecting Brownian paths on the line and on the half-line with both absorbing and reflecting boundary conditions (corresponding to Brownian excursions and reflected Brownian motions) and compute in each case the joint distribution of the maximal height of the top path and the location at which this maximum is attained. The resulting formulas are analogous to the ones obtained in [28] for the joint distribution of $\mathcal{M}=\max_{x\in\mathbb{R}}\big\{\mathcal{A}_2(x)-x^2\big\}$ and $\mathcal{T}=\operatorname{argmax}_{x\in\mathbb{R}}\big\{\mathcal{A}_2(x)-x^2\big\}$, where $\mathcal{A}_2$ is the Airy$_2$ process, and we use them to show that in the three cases the joint distribution converges, as $N\to\infty$, to the joint distribution of $\mathcal{M}$ and $\mathcal{T}$. In the case of non-intersecting Brownian bridges on the line, we also establish small deviation inequalities for the argmax which match the tail behavior of $\mathcal{T}$. Our proofs are based on the method introduced in [9, 6] for obtaining formulas for the probability that the top line of these line ensembles stays below a given curve, which are given in terms of the Fredholm determinant of certain “path-integral” kernels.

Article information
Source: Electron. J. Probab., Volume 22 (2017), paper no. 102, 40 pp.
Dates: Received: 10 January 2017; Accepted: 22 October 2017; First available in Project Euclid: 27 November 2017
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1511773232
Digital Object Identifier: doi:10.1214/17-EJP119
Mathematical Reviews number (MathSciNet): MR3733660
Zentralblatt MATH identifier: 06827079
Citation: Nguyen, Gia Bao; Remenik, Daniel. Extreme statistics of non-intersecting Brownian paths. Electron. J. Probab. 22 (2017), paper no. 102, 40 pp.
doi:10.1214/17-EJP119. https://projecteuclid.org/euclid.ejp/1511773232
Hardy and Wright, in the chapter on continued fractions, prove the following theorem: Let $x$ be an irrational number whose simple continued fraction representation is $[a_0,a_1,\ldots,a_n,\ldots]$ $\big(\text{i.e. } x=a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + {_\ddots}}}}\big)$. Then $\bigl|x-\frac{p_n}{q_n}\bigr| < \frac{1}{q_n^2}$, where $\frac{p_n}{q_n}$ is the $n^{\text{th}}$ convergent of $x$. In the section on approximation of irrationals by rationals, they say: given an irrational $x$ and an integer $q$, we want to know an upper bound for a positive number $\epsilon$ such that $\bigl|x-\frac{p}{q}\bigr| \le \epsilon$ for some integer $p$, i.e. a relation of the sort stated in the theorem above. My question is: why study this sort of relationship between $q$ and $\epsilon$ at all? Is it not good enough if I approximate an irrational to, say, $10$ decimal places?
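To see the theorem numerically, here is a short Python sketch (the choice of $\sqrt{2} = [1;2,2,2,\ldots]$ as the irrational is mine) that builds convergents via the standard recurrences $p_n = a_n p_{n-1} + p_{n-2}$, $q_n = a_n q_{n-1} + q_{n-2}$ and checks the $1/q_n^2$ bound:

```python
import math
from fractions import Fraction

def convergents(coeffs):
    # p_n = a_n p_{n-1} + p_{n-2},  q_n = a_n q_{n-1} + q_{n-2},
    # seeded with (p_{-1}, q_{-1}) = (1, 0) and (p_0, q_0) = (a_0, 1).
    p_prev, q_prev, p, q = 1, 0, coeffs[0], 1
    yield Fraction(p, q)
    for a in coeffs[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield Fraction(p, q)

x = math.sqrt(2)                       # sqrt(2) = [1; 2, 2, 2, ...]
for pq in convergents([1] + [2] * 8):
    # Each convergent beats the 1/q^2 bound from the theorem.
    assert abs(x - pq) < Fraction(1, pq.denominator ** 2)
```

The point of the $q$-versus-$\epsilon$ question is visible here: the convergents $3/2, 7/5, 17/12, \ldots$ achieve error smaller than $1/q^2$, which a generic denominator $q$ (as in a $10$-decimal-place truncation, where the error is only about $1/(2q)$ for $q = 10^{10}$) does not.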