As far as I know, the sun produces exclusively electron neutrinos ($\nu_e$). When the flux of solar neutrinos ($\nu_e$) is measured on the earth, a depletion is observed in the $\nu_e$ flux, i.e., some $\nu_e$'s have "disappeared" on their way from the sun to the earth. As far as I know, this conclusion is drawn by measuring only$^1$ the $\nu_e$ flux in the detectors. The explanation is that some of the neutrinos get morphed into $\nu_\mu$ and $\nu_\tau$. But to really test this hypothesis, a deficit in the $\nu_e$ flux is not enough. There must be an experiment in which the detector also measures the $\nu_\mu$ and $\nu_\tau$ fluxes. Only if the fluxes summed over all three flavors turn out to equal the expected flux can we be sure that solar neutrinos have undergone oscillation. Has that been achieved in experiments? $^1$The experiment carried out by Davis et al. at the Homestake mine detected $\nu_e$ through the inverse beta decay $\nu_e+{}^{37}{\rm Cl}\to e^-+{}^{37}{\rm Ar},$ and found that they were getting about one-third of the number of $\nu_e$ that were predicted from the solar models.
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site, to which I hope my university has a subscription, that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... 
can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or the whole) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was so that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door of the math department, about 30 feet away, and I saw the secretary coming out of the door. 
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus course does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets debate. I was always happy with the order-preserving + cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. 
When I look at the comments on Norbert's question, it seems that the comments together already give a sufficient answer to his first question - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
The simplest account of spontaneous symmetry breaking goes like this. Take a potential $V(\phi)$ with symmetric minima that are not at $\phi = 0$, like the Mexican hat potential shown in this site's logo. Since variations in the field cost energy due to the $(\partial_\mu \phi)^2$ term, minimum energy configurations have constant $\phi$. Therefore, the lowest energy states have $\phi$ equal to one of the minima of $V(\phi)$. Thus we have symmetry breaking, because the vacuum state (whichever one we choose) does not have the symmetry that $V$ had. In the quantum case, everything works the same, except the classical solution $\phi = c$ becomes $\langle \phi \rangle = c$. Then we have multiple vacuum states, each of which breaks the symmetry. I'm suspicious about the last assertion. Suppose $V$ has two minima, giving two degenerate vacuum states, $|+\rangle$ and $|-\rangle$. Quantum mechanics allows superposition, so can we not take $(|+\rangle + |-\rangle)/\sqrt{2}$ as our vacuum? This state does not break the symmetry at all.
There are a few conventions and intuitions here which perhaps it would help to have spelled out — $\def\ket#1{\lvert#1\rangle}\def\bra#1{\!\langle#1\rvert}$ Sign bits versus {0,1} bits The first step is to make what is sometimes called the 'great notational shift', and think of bits (even classical bits) as being encoded in signs. This is productive to do if what you're mostly interested in is the parities of bit strings, because bit-flips and sign-flips basically act the same way. We map $0 \mapsto +1$ and $1 \mapsto -1$, so that for instance the sequence of bits $(0,0,1,0,1)$ would be represented by the sequence of signs $(+1,+1,-1,+1,-1)$. Parities of sequences of bits then correspond to products of sequences of signs. For instance, just as we would recognise $0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 = 0$ as a parity computation, we may recognise $(+1)\cdot(+1)\cdot(-1)\cdot(+1)\cdot(-1) = +1$ as representing the same parity computation using the sign convention. Exercise. Compute the 'parity' of $(-1,-1,+1,-1)$ and of $(+1,-1,+1,+1)$. Are these the same? Parity checks using sign bits In the {0,1}-bit convention, parity checks have a nice representation as a dot-product of two boolean vectors, so that we can realise complicated parity computations as linear transformations. By shifting to sign-bits, we have inevitably lost the connection to linear algebra on a notational level, because we're taking products instead of sums. On a computational level, because this is only a shift in notation, we don't really have to worry too much. But on a pure mathematical level, we now have to think again a little about what we're doing with parity check matrices. When we use sign bits, we may still represent a 'parity check matrix' as a matrix of 0s and 1s, instead of signs ±1. Why? One answer is that a row vector describing a parity check of bits is of a different type than the sequence of bits themselves: it describes a function on data, not the data itself. 
The array of 0s and 1s now just requires a different interpretation — instead of linear coefficients in a sum, they correspond to exponents in a product. If we have sign bits $(s_1, s_2, \ldots, s_n) \in \{-1,+1\}^n$, and we want to compute a parity check given by a row-vector $(b_1, b_2, \ldots, b_n) \in \{0,1\}^n$, the parity check is then computed by $$ (s_1)^{b_1} \cdot (s_2)^{b_2} \cdot [\cdots] \cdot (s_n)^{b_n} \in \{-1,+1\},$$ where recall that $s^0 = 1$ for all $s$. As with {0,1}-bits, you can think of the row $(b_1,b_2,\ldots,b_n)$ as just representing a 'mask' which determines which bits $s_j$ make a non-trivial contribution to the parity computation. Exercise. Compute the result of the parity check $(0,1,0,1,0,1,0)$ on $(+1,-1,-1,-1,-1,+1,-1)$. Eigenvalues as parities. The reason why we would want to encode bits in signs in quantum information theory is because of the way that information is stored in quantum states — or more to the point, the way that we can describe accessing that information. Specifically, we may talk a lot about the standard basis, but the reason why it is meaningful is because we can extract that information by measurement of an observable. This observable could just be the projector $\ket{1}\bra{1}$, where $\ket{0}$ has eigenvalue 0 and $\ket{1}$ has eigenvalue 1, but it is often helpful to describe things in terms of the Pauli matrices. In this case, we would talk about the standard basis as the eigenbasis of the $Z$ operator, in which case we have $\ket{0}$ as the +1-eigenvector of $Z$ and $\ket{1}$ as the −1-eigenvector of $Z$. So: we have the emergence of sign-bits (in this case, eigenvalues) as representing the information stored in a qubit. And better still, we can do this in a way which is not specific to the standard basis: we can talk about information stored in the 'conjugate' basis, just by considering whether the state is an eigenstate of $X$, and what eigenvalue it has. 
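The mask convention just described can be sketched in a few lines of Python; `sign_parity` is a name chosen here for illustration, and the body is exactly the product $\prod_j (s_j)^{b_j}$ given above.

```python
# Parity check in the sign-bit convention: the product of s_j ** b_j,
# where the {0,1} row vector (b_1, ..., b_n) acts as a mask.
def sign_parity(signs, mask):
    result = 1
    for s, b in zip(signs, mask):
        result *= s ** b  # s ** 0 == 1, so unmasked positions contribute nothing
    return result

# The exercise above: parity check (0,1,0,1,0,1,0) on (+1,-1,-1,-1,-1,+1,-1).
print(sign_parity([+1, -1, -1, -1, -1, +1, -1], [0, 1, 0, 1, 0, 1, 0]))  # -> 1
```

Only the three masked positions contribute, and their signs multiply to +1 (even parity), in agreement with the XOR computation in the {0,1} convention.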
But more than this, we can talk about the eigenvalues of a multi-qubit Pauli operator as encoding parities of multiple bits — the tensor product $Z \otimes Z$ represents a way of accessing the product of the sign-bits, that is to say the parity, of two qubits in the standard basis. In this sense, the eigenvalue of a state with respect to a multi-qubit Pauli operator — if that eigenvalue is defined (i.e. in the case that the state is an eigenvector of the Pauli operator) — is in effect the outcome of a parity calculation of information stored in some choice of basis for each of the qubits. Exercise. What is the parity of the state $\ket{11}$ with respect to $Z \otimes Z$? Does this state have a well-defined parity with respect to $X \otimes X$? Exercise. What is the parity of the state $\ket{+-}$ with respect to $X \otimes X$? Does this state have a well-defined parity with respect to $Z \otimes Z$? Exercise. What is the parity of $\ket{\Phi^+} = \tfrac{1}{\sqrt 2}\bigl(\ket{00} + \ket{11}\bigr)$ with respect to $Z \otimes Z$ and $X \otimes X$? Stabiliser generators as parity checks. We are now in a position to appreciate the role of stabiliser generators as being analogous to a parity check matrix. Consider the case of the 7-qubit CSS code, with generators \begin{array} {|r|ccccccc|}\hline\scriptstyle\text{Generator} & & & & \!\!\!\!\!\!\!\!\!\scriptstyle\text{Tensor factors}\!\!\!\!\!\!\!\!\! 
& & & \\[-0.5ex] & \scriptstyle1 & \scriptstyle2 & \scriptstyle3 & \scriptstyle4 & \scriptstyle5 & \scriptstyle6 & \scriptstyle7 \\\hline\hline g_1 & & & & X & X & X & X \\\hline g_2 & & X & X & & & X & X \\\hline\hline g_3 & X & & X & & X & & X \\\hline g_4 & & & & Z & Z & Z & Z \\\hline g_5 & & Z & Z & & & Z & Z \\\hline g_6 & Z & & Z & & Z & & Z \\\hline\end{array} I've omitted the identity tensor factors above, as one might sometimes omit the 0s from a {0,1} matrix, and for the same reason: in a given stabiliser operator, the identity matrix corresponds to a tensor factor which is not included in the 'mask' of qubits for which we are computing the parity. For each generator, we are only interested in those tensor factors which are being acted on somehow, because those contribute to the parity outcome. Now, the 'codewords' (the encoded standard basis states) of the 7-qubit CSS code are given by $$\begin{align} \ket{0_L} \propto{}&{} \ket{0000000} + \ket{0001111} + \ket{0110011} + \ket{0111100} \\&+ \ket{1010101} + \ket{1011010} + \ket{1100110} + \ket{1101001} = \sum_{y \in C} \ket{y},\\[1ex] \ket{1_L} \propto{}&{} \ket{1111111} + \ket{1110000} + \ket{1001100} + \ket{1000011} \\&+ \ket{0101010} + \ket{0100101} + \ket{0011001} + \ket{0010110} = \sum_{y \in C} \ket{y \oplus 1111111},\end{align}$$ where $C$ is the code generated by the bit-strings $0001111$, $0110011$, and $1010101$. Notably, these bit-strings correspond to the positions of the $X$ operators in the generators $g_1$, $g_2$, and $g_3$. While those are stabilisers of the code (and represent parity checks as I've suggested above), we can also consider their action as operators which permute the standard basis. In particular, they will permute the elements of the code $C$, so that the terms involved in $\ket{0_L}$ and $\ket{1_L}$ will just be shuffled around. 
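The parity exercises above can be checked numerically. A small NumPy sketch, using only the standard single-qubit matrices (nothing here is specific to this answer's notation):

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# |11> has both sign bits equal to -1, so its Z(x)Z parity is (-1)(-1) = +1.
ket11 = np.kron(ket1, ket1)
print(np.allclose(ZZ @ ket11, ket11))        # True: +1-eigenvector of Z(x)Z

# |Phi+> is a +1-eigenvector of both Z(x)Z and X(x)X simultaneously.
phi_plus = (np.kron(ket0, ket0) + ket11) / np.sqrt(2)
print(np.allclose(ZZ @ phi_plus, phi_plus),
      np.allclose(XX @ phi_plus, phi_plus))  # True True
```

Note that $\ket{11}$ is not an eigenvector of $X \otimes X$ (which maps it to $\ket{00}$), so its $X \otimes X$ parity is undefined, exactly as the exercise suggests.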
The generators $g_4$, $g_5$, and $g_6$ above are all describing the parities of information encoded in standard basis states. The encoded basis states you are given are superpositions of codewords drawn from a linear code, and those codewords all have even parity with respect to the parity-check matrix from that code. As $g_4$ through $g_6$ just describe those same parity checks, it follows that the eigenvalue of the encoded basis states is $+1$ (corresponding to even parity). This is the way in which 'with the observation about the similarities between the parity check matrix and the generator the exercise is "self evident"' — because the stabilisers either manifestly permute the standard basis terms in the two 'codewords', or manifestly are testing parity properties which by construction the codewords will have. Moving beyond codewords The list of generators in the table you provide represents the first steps in a powerful technique, known as the stabiliser formalism, in which states are described using no more or less than the parity properties which are known to hold of them. Some states, such as standard basis states, conjugate basis states, and the perfectly entangled states $\ket{\Phi^+} \propto \ket{00} + \ket{11}$ and $\ket{\Psi^-} \propto \ket{01} - \ket{10}$, can be completely characterised by their parity properties. (The state $\ket{\Phi^+}$ is the only one which is a +1-eigenvector of both $X \otimes X$ and $Z \otimes Z$; the state $\ket{\Psi^-}$ is the only one which is a −1-eigenvector of both these operators.) These are known as stabiliser states, and one can consider how they are affected by unitary transformations and measurements by tracking how the parity properties themselves transform. 
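The claim that every codeword of $C$ passes all of the $Z$-type checks can be verified directly, since the parity-check masks of $g_4$ through $g_6$ coincide with the generators of $C$. A sketch in Python, representing bit-strings as integers:

```python
from itertools import product

gens = [0b0001111, 0b0110011, 0b1010101]  # X-positions of g1, g2, g3

# The code C: all GF(2) linear combinations of the three generators.
C = set()
for coeffs in product([0, 1], repeat=3):
    word = 0
    for c, g in zip(coeffs, gens):
        if c:
            word ^= g
    C.add(word)

# g4..g6 are Z-type checks on the same masks; a codeword passes a check
# when it has an even number of 1s on the masked positions.
assert len(C) == 8
assert all(bin(y & m).count('1') % 2 == 0 for y in C for m in gens)
print("all", len(C), "codewords have even parity on every check")
```

This works because the generator matrix is self-orthogonal over GF(2): each pair of generators overlaps in an even number of positions, and linearity extends that to all of $C$.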
For instance, a state which is stabilised by $X \otimes X$ before applying a Hadamard on qubit 1 will be stabilised by $Z \otimes X$ afterwards, because $(H \otimes I)(X \otimes X)(H \otimes I) = Z \otimes X$. Rather than transform the state, we transform the parity property which we know to hold of that state. You can use this also to characterise how subspaces characterised by these parity properties will transform. For instance, given an unknown state in the 7-qubit CSS code, I don't know enough about the state to tell you what state you will get if you apply Hadamards on all of the qubits, but I can tell you that it is stabilised by the generators $g_j' = (H^{\otimes 7}) g_j (H^{\otimes 7})$, which consist of \begin{array} {|r|ccccccc|}\hline\scriptstyle\text{Generator} & & & & \!\!\!\!\!\!\!\!\!\scriptstyle\text{Tensor factors}\!\!\!\!\!\!\!\!\! & & & \\[-0.5ex] & \scriptstyle1 & \scriptstyle2 & \scriptstyle3 & \scriptstyle4 & \scriptstyle5 & \scriptstyle6 & \scriptstyle7 \\\hline\hline g'_1 & & & & Z & Z & Z & Z \\\hline g'_2 & & Z & Z & & & Z & Z \\\hline\hline g'_3 & Z & & Z & & Z & & Z \\\hline g'_4 & & & & X & X & X & X \\\hline g'_5 & & X & X & & & X & X \\\hline g'_6 & X & & X & & X & & X \\\hline\end{array} This is just a permutation of the generators of the 7-qubit CSS code, so I can conclude that the result is also a state in that same code. There is one thing about the stabiliser formalism which might seem mysterious at first: you aren't really dealing with information about the states that tells you anything about how they expand as superpositions of the standard basis. You're just dealing abstractly with the generators. And in fact, this is the point: you don't really want to spend your life writing out exponentially long superpositions all day, do you? 
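The conjugation rule used above, $(H \otimes I)(X \otimes X)(H \otimes I) = Z \otimes X$, is easy to confirm numerically; a minimal NumPy check:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # H is its own inverse
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

HI = np.kron(H, I)  # Hadamard on qubit 1 only
print(np.allclose(HI @ np.kron(X, X) @ HI, np.kron(Z, X)))  # True
```

The single-qubit identity $HXH = Z$ does all the work here; tensoring with the untouched second factor gives $Z \otimes X$.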
What you really want are tools to allow you to reason about quantum states which require you to write things out as linear combinations as rarely as possible, because any time you write something as a linear combination, you are (a) making a lot of work for yourself, and (b) preferring some basis in a way which might prevent you from noticing some useful property which you can access using a different basis. Still: it is sometimes useful to reason about 'encoded states' in error correcting codes — for instance, in order to see what effect an operation such as $H^{\otimes 7}$ might have on the codespace of the 7-qubit code. What should one do instead of writing out superpositions? The answer is to describe these states in terms of observables — in terms of parity properties — to fix those states. For instance, just as $\ket{0}$ is the +1-eigenstate of $Z$, we can characterise the logical state $\ket{0_L}$ of the 7-qubit CSS code as the +1-eigenstate of $$ Z_L = Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z$$ and similarly, $\ket{1_L}$ as the −1-eigenstate of $Z_L$. (It is important that $Z_L = Z^{\otimes 7}$ commutes with the generators $\{g_1,\ldots,g_6\}$, so that it is possible to be a +1-eigenstate of $Z_L$ at the same time as having the parity properties described by those generators.) This also allows us to move swiftly beyond the standard basis: using the fact that $X^{\otimes 7}$ anticommutes with $Z^{\otimes 7}$ the same way that $X$ anticommutes with $Z$, and also that $X^{\otimes 7}$ commutes with the generators $g_i$, we can describe $\ket{+_L}$ as being the +1-eigenstate of $$ X_L = X \otimes X \otimes X \otimes X \otimes X \otimes X \otimes X,$$ and similarly, $\ket{-_L}$ as the −1-eigenstate of $X_L$. We may say that the encoded standard basis is, in particular, encoded in the parities of all of the qubits with respect to $Z$ operators; and the encoded 'conjugate' basis is encoded in the parities of all of the qubits with respect to $X$ 
operators. By fixing a notion of encoded operators, and using this to indirectly represent encoded states, we may observe that $$ (H^{\otimes 7}) \,X_L\, (H^{\otimes 7}) = Z_L, \quad (H^{\otimes 7}) \,Z_L\, (H^{\otimes 7}) = X_L, $$ which is the same relation as obtains between $X$ and $Z$ with respect to conjugation by Hadamards; this allows us to conclude that for this encoding of information in the 7-qubit CSS code, $H^{\otimes 7}$ not only preserves the codespace but is an encoded Hadamard operation. Thus we see that the idea of observables as a way of describing information about quantum states in the form of sign bits — and in particular tensor products as a way of representing information about parities of bits — plays a central role in describing how the CSS code generators represent parity checks, and also in how we can describe properties of error correcting codes without reference to basis states.
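The commutation facts relied on above (that $X_L$ and $Z_L$ each commute with all six generators, while anticommuting with one another) can be checked without writing any $128 \times 128$ matrices, using the standard binary symplectic representation of Pauli operators. A sketch, assuming the generator layout from the table:

```python
import numpy as np

def bits(s):
    return np.array([int(c) for c in s])

zero = np.zeros(7, dtype=int)
masks = [bits('0001111'), bits('0110011'), bits('1010101')]

# A Pauli operator is a pair (x | z) over GF(2); two Paulis P and Q
# commute iff x1.z2 + z1.x2 = 0 (mod 2).
def commutes(p, q):
    (x1, z1), (x2, z2) = p, q
    return (x1 @ z2 + z1 @ x2) % 2 == 0

gens = [(m, zero) for m in masks] + [(zero, m) for m in masks]  # g1..g6
X_L = (np.ones(7, dtype=int), zero)
Z_L = (zero, np.ones(7, dtype=int))

print(all(commutes(X_L, g) for g in gens))  # True
print(all(commutes(Z_L, g) for g in gens))  # True
print(commutes(X_L, Z_L))                   # False: X_L and Z_L anticommute
```

Each check mask has weight 4 (even), so $X_L$ and $Z_L$ commute with every generator, while the all-ones vectors overlap in 7 (odd) positions, reproducing the $X$/$Z$ anticommutation.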
I learned that, at frequencies corresponding to harmonics, standing waves are formed. But what actually happens at other frequencies? Won't the reflected wave superimpose with the original wave? Do any other phenomena happen? After the transients die out, the air in the pipe will always vibrate with the same frequency as the driving frequency, but the amplitude will be very small, essentially because your driving at one moment will cancel out the effect of the driving at the next. In practice, that means you won't hear anything at all. It's analogous to somebody trying to pump a swing, but moving their legs faster or slower than the swing "wants" to go. All that happens is that they wiggle around a bit at the bottom. Let the length of the pipe be $L$. Two closed ends: The molecules at the ends cannot move freely due to the boundary imposed on them. Waves with wavelength $\lambda = 2L/n$ ($n$ a positive integer) will have nodes at the ends and so will not be interrupted in their propagation by the boundary. Other wavelengths will lose energy at the boundary through molecular collisions and will dissipate over time. So non-harmonic wavelengths will simply die out (fairly quickly). One closed end: The difference in the environment at the open end of the pipe (restricted by hard walls in two directions inside, unrestricted in all directions outside) acts as a sort of boundary. This time the wavelengths are $\lambda = 4L/n$ with $n$ an odd integer. This is because one end is open, so there is an antinode at the open end (instead of a node). The behavior of non-harmonic waves would be similar to that of a pipe with two closed ends. Dissipation: Sound waves in air are longitudinal waves (also called compression waves) made up of the repeated compression and expansion of the air molecules. 
That means that, although the net displacement of the air molecules is zero (they return to their equilibrium positions after the wave passes), the molecules must move in order to transmit the wave. A standing wave is a superposition of waves traveling in opposite directions and has nodes every half wavelength, where the movement of the molecules in one direction is canceled by movement in the opposite direction. In other words, the molecules at the nodes don't move. If a wave has a wavelength such that exactly $n$ half-wavelengths fit in your pipe of length $L$ (as is the case for harmonic wavelengths), you will have nodes at the ends of the pipe. This means that the air at the ends of the pipe does not move, and all the movement happens between the ends. In this case little of the wave's energy is lost to the environment. If a wave has a wavelength such that a whole number of half-wavelengths does not fit in your pipe (as is the case for non-harmonic wavelengths), you will not have a node at the end. This means that the air molecules at the end are moving. There is a boundary here, and the air molecules collide with this boundary and transmit to it their energy. In this way the wave loses its energy to the environment and dies out. Here is an idealized mathematical model for how a waveguide may select certain wavenumbers $k=\frac{2\pi}{\lambda}$. Consider for simplicity a 1D periodic waveguide of length $L$ with a (complex) monochromatic wave $$ y(x,t)~=~Ae^{i(kx-\omega t)}\sum_{n\in\mathbb{Z}} e^{inkL}~=~2\pi A e^{i(kx-\omega t)}III(kL),$$ where $$III(\theta)~=~\delta(\theta+2\pi \mathbb{Z}) $$ is the Dirac comb/Shah function. We see that a wave is allowed iff the length $L$ is a multiple of the wavelength $\lambda$.
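The wavelength conditions described above are easy to tabulate. A short sketch, with pipe length and speed of sound as assumed illustrative values:

```python
L = 1.0    # pipe length in metres (assumed)
v = 343.0  # speed of sound in air, m/s (approximate, at 20 degrees C)

# Two closed ends: nodes at both ends, so lambda = 2L/n for n = 1, 2, 3, ...
closed_closed = [2 * L / n for n in range(1, 5)]

# One closed end: node at the closed end, antinode at the open end,
# so lambda = 4L/n with n odd.
closed_open = [4 * L / n for n in (1, 3, 5, 7)]

print([round(v / lam, 2) for lam in closed_closed])  # harmonic frequencies, Hz
print([round(v / lam, 2) for lam in closed_open])
```

Note the familiar consequence: a pipe with one open end sounds an octave lower than a fully closed (or fully open) pipe of the same length, and supports only the odd harmonics of that fundamental.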
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ["The 'path' comes into existence only because we observe it."] Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time", where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am, I will have to adapt ;) I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of ten billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. 
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know @ooolb It's unlikely to be $-\infty$, since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis, and not as a necessity, would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. Since I come from a rock n' roll background, the first thing is that I prefer tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? My sense is: you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The Challenge Write a program or function that takes no input and outputs a vector of length \$1\$ in a theoretically uniform random direction. This is equivalent to a random point on the sphere described by $$x^2+y^2+z^2=1$$ resulting in a distribution like this. Output Three floats from a theoretically uniform random distribution for which the equation \$x^2+y^2+z^2=1\$ holds true to precision limits. Challenge remarks The random distribution needs to be theoretically uniform. That is, if the pseudo-random number generator were to be replaced with a true RNG over the real numbers, it would result in a uniform random distribution of points on the sphere. Generating three random numbers from a uniform distribution and normalizing them is invalid: there will be a bias towards the corners of the three-dimensional space. Similarly, generating two random numbers from a uniform distribution and using them as spherical coordinates is invalid: there will be a bias towards the poles of the sphere. Proper uniformity can be achieved by algorithms including but not limited to: Generate three random numbers \$x\$, \$y\$ and \$z\$ from a normal (Gaussian) distribution around \$0\$ and normalize them. Generate three random numbers \$x\$, \$y\$ and \$z\$ from a uniform distribution in the range \$(-1,1)\$. Calculate the length of the vector by \$l=\sqrt{x^2+y^2+z^2}\$. Then, if \$l>1\$, reject the vector and generate a new set of numbers. Else, if \$l \leq 1\$, normalize the vector and return the result. 
Generate two random numbers \$i\$ and \$j\$ from a uniform distribution in the range \$(0,1)\$ and convert them to spherical coordinates like so: \begin{align}\theta &= 2 \times \pi \times i\\\\\phi &= \cos^{-1}(2\times j -1)\end{align} so that \$x\$, \$y\$ and \$z\$ can be calculated by \begin{align}x &= \cos(\theta) \times \sin(\phi)\\\\y &= \sin(\theta) \times \sin(\phi)\\\\z &= \cos(\phi)\end{align} Generate three random numbers \$x\$, \$y\$ and \$z\$ from a Provide in your answer a brief description of the algorithm that you are using. Read more on sphere point picking on MathWorld. Output examples [ 0.72422852 -0.58643067 0.36275628][-0.79158628 -0.17595886 0.58517488][-0.16428481 -0.90804027 0.38532243][ 0.61238768 0.75123833 -0.24621596][-0.81111161 -0.46269121 0.35779156]
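The first algorithm in the list (three Gaussian deviates, normalised) can be sketched in Python; `random_unit_vector` is a name chosen here for illustration, not part of the challenge:

```python
import math
import random

def random_unit_vector():
    """Uniform random direction via the Gaussian method: the 3D normal
    distribution is rotationally symmetric, so the normalised sample
    is uniform on the unit sphere."""
    while True:
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        norm = math.sqrt(x * x + y * y + z * z)
        if norm > 1e-12:  # guard against the vanishingly rare near-zero vector
            return (x / norm, y / norm, z / norm)

v = random_unit_vector()
print(v)
print(sum(c * c for c in v))  # squared norm is 1 up to float precision
```

The rotational symmetry argument is what makes this valid where naive normalisation of uniform deviates is not: the joint density depends only on the distance from the origin, never on the direction.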
Edit — I've revised this answer to make some small improvements in the commands, to tidy up the commands for drawing the wires for instance, because it seemed worthwhile. Flattering as it is to have this answer be the accepted one for the time being, I think I should point out that the quantikz package (see Daftwullie's answer below) and the qpic package (as pointed out in cnada's answer below) are both libraries with reasonably complete interfaces, and so better for people looking for a quick and simple solution. The code below is probably more suitable for people who are comfortable with TikZ, and might like to tweak their circuit diagrams with TikZ commands, but who wouldn't mind having some macros to streamline drawing their diagrams. — Nice as it might be, for the moment I have no ambition to write a LaTeX package to make these macros available with a nice interface for all purposes (but anyone else who would like to is welcome, if they give me some of the credit). Snippet See below for all of the code used to generate this example: the following commands are just the ones used to draw the circuit itself. (This snippet involves macros which I have defined for the purpose of this post, which I also define below.) 
% define initial positions of the quantum wires
\xdef\dy{1.25}
\defwire (A) at (0);
\defwire (B) at ({-\dy});
\defwire (C) at ({-2*\dy});

% draw wires
\xdef\dt{0.8}
\drawwires [\dt] (15);
\node at ($(B-0)!0.5!(B-1)$) {$/$};
\node at ($(C-0)!0.5!(C-1)$) {$/$};

% draw gates
\gate (B-2) [H^{\otimes n}];
\ctrlgate (B-3) (C-3) [U];
\virtgate (A-3);
\gate (B-4) [\mathit{FT}^\dagger];
\ctrlgate (B-7) (A-7) [R];
\virtgate (C-7);
\gate (B-10) [\mathit{FT}];
\ctrlgate (B-11) (C-11) [U^\dagger];
\gate (B-12) [H^{\otimes n}];
\virtgate (A-12);
\meas (A-14) [Z];

% draw input and output labels
\inputlabel (A-0) [\lvert 0 \rangle];
\inputlabel (B-0) [\lvert 0 \rangle^{\otimes n}];
\inputlabel (C-0) [\lvert b \rangle];
\outputlabel (A-15) [\lvert 1 \rangle];
\outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
\outputlabel (C-15) [\lvert x \rangle];

Result

Preamble

You will need a pre-amble which contains at least the amsmath package, as well as the tikz package. You may not need all of the tikz libraries below, but they don't hurt. Be sure to include the commands involving layers.

\documentclass[a4paper,10pt]{article}
\usepackage{amsmath}
\usepackage{tikz}
\usetikzlibrary{shapes,arrows,calc,positioning,fit}
\pgfdeclarelayer{background}
\pgfsetlayers{background,main}

For the purposes of this post, I've defined some ad-hoc macros to make reading the coded circuit easier for public consumption. (The macro format is not exactly good LaTeX practise, but I define them this way in order for the syntax to be more easily read and for it to stand out.) The parameters for dimensions in these gates were chosen to look good in your sample-circuit, and were found by trial-and-error: you can change them to change the appearance of your circuit.

The first is a simple macro to draw a gate.

\def\gate (#1) [#2]{%
  \node [
    draw=black, fill=white,
    inner sep=0pt,
    minimum width=2.5em,
    minimum height=2em,
    outer sep=1ex
  ] (#1) at (#1) {$#2$}%
}

The second is a macro to draw an 'invisible' gate.
This is not really a command which is important for the circuit itself, but helps for the placement of background frames.

\def\virtgate (#1){%
  \node [
    draw=none, fill=none,
    minimum width=2.5em,
    minimum height=2em,
    outer sep=1ex
  ] (#1) at (#1) {};
}

The third is a macro to draw a controlled gate. This command works well enough for your example circuit, but doesn't allow you to draw a CNOT. (Exercise for the reader proficient in TiKZ: make a \CNOT command.)

\def\ctrlgate (#1) (#2) [#3]{%
  \filldraw [black] (#1) circle (2pt) -- (#2);
  \gate (#2) [#3]
}

The fourth is a macro to draw a "measurement" box. I think it is perfectly reasonable to want to specify an explicit basis or observable for the measurement, so I allow an argument to specify that.

\def\meas (#1) [#2]{%
  \node [
    draw=black, fill=white,
    inner sep=2pt,
    label distance=-5mm,
    minimum height=2em,
    minimum width=2em
  ] (meas) at (#1) {};
  \draw ($(meas.south) + (-.75em,1.5mm)$) arc (150:30:.85em);
  \draw ($(meas.south) + (0,1mm)$) -- ++(.8em,1em);
  \node [
    anchor=north west,
    inner sep=1.5pt,
    font=\small
  ] at (meas.north west) {#2};
}

I define two short macros to produce the labels for the inputs and outputs of wires.

\def\inputlabel (#1) [#2]{%
  \node at (#1) [anchor=east] {$#2$}
}
\def\outputlabel (#1) [#2]{%
  \node at (#1) [anchor=west] {$#2$}
}

The macros above are all looking for co-ordinates at which to place the gates. I also define macros to define "wires", which have regularly spaced co-ordinates where gates can be located. The first is a macro which allows you to define a named wire (such as A, B, x3, etc.) and its vertical position in the circuit diagram (these diagrams are left-to-right by default, which you can change most easily using the rotate option of the tikzpicture environment.)
\def\defwire (#1) at (#2){%
  \ifx\qmwires\empty
    \edef\qmwires{#1}%
  \else
    \edef\qmwires{\qmwires,#1}%
  \fi
  \coordinate (#1-0) at ($(0,#2)$)%
}

Having defined a collection of wires, the following command then draws all of them, starting from the same left-most starting point and ending at the same right-most ending point, with increments by a fixed amount (given in the square brackets) and for a given number of time slices. This defines a sequence of 'time-slice' co-ordinates for each wire: for a wire A, it defines the co-ordinates A-0, A-1, and so forth up until A-t (where t is the value of the second argument).

\def\drawwires [#1] (#2);{%
  \xdef\u{0}
  \foreach \t in {0,...,#2} {%
    \foreach \l in \qmwires {%
      \coordinate (\l-\t) at ($(\l-\u) + (#1,0)$);
      \draw (\l-\u) -- (\l-\t);
    }
    \xdef\u{\t}
  }
}

The final macro is one to draw a background frame for different stages in your circuit. It takes an argument specifying which gates (including the invisible virtual 'gates') are meant to belong to the frame.

\def\bgframe [#1]{%
  \node [%
    draw=black,
    fill=yellow!40!gray!30!white,
    fit=#1
  ] {}%
}

The circuit diagram itself

Now to begin drawing your circuit.

\begin{document}
\begin{tikzpicture}

We start by defining the relative positions of the wires. (For convenience, I do this using a macro to define the spacing between them, that I can quickly change to adjust the spacing.) Below, I define three wires: A, B, and C.

\let\qmwires\empty
% define initial positions of the quantum wires
\xdef\dy{1.25}
\defwire (A) at (0);
\defwire (B) at ({-\dy});
\defwire (C) at ({-2*\dy});

We now draw the circuit, using the command to draw the wires and define the co-ordinates on the wire, and placing gates independently of one another according to those co-ordinates.
% draw circuit
\xdef\dt{0.8}
\drawwires [\dt] (15);
\node at ($(B-0)!0.5!(B-1)$) {$/$};
\node at ($(C-0)!0.5!(C-1)$) {$/$};
\gate (B-2) [H^{\otimes n}];
\ctrlgate (B-3) (C-3) [U];
\virtgate (A-3);
\gate (B-4) [\mathit{FT}^\dagger];
\ctrlgate (B-7) (A-7) [R];
\virtgate (C-7);
\gate (B-10) [\mathit{FT}];
\ctrlgate (B-11) (C-11) [U^\dagger];
\gate (B-12) [H^{\otimes n}];
\virtgate (A-12);
\meas (A-14) [Z];

% draw input and output labels
\inputlabel (A-0) [\lvert 0 \rangle];
\inputlabel (B-0) [\lvert 0 \rangle^{\otimes n}];
\inputlabel (C-0) [\lvert b \rangle];
\outputlabel (A-15) [\lvert 1 \rangle];
\outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
\outputlabel (C-15) [\lvert x \rangle];

Annotations for the circuit

The rest of the circuit diagram is literally commentary. We can do this using a combination of plain-old TiKZ nodes, and the \bgframe macro which I defined above. (Annotations are a little less predictable, so I don't have a good way of making them as systematic as the earlier parts of the circuit, so general TiKZ commands are a reasonable approach unless you know how to make your annotations uniform.)
First the annotations for the stages of the circuit:

% draw annotations
\node [minimum height=4ex] (annotate-1) at ($(A-3) + (0,1)$) {\textit{Phase estimation}};
\node [minimum height=4ex] (annotate-2) at ($(A-7) + (0,1)$) {\textit{$\smash{R(\tilde\lambda^{-1})}$ rotation}};
\node [minimum height=4ex] (annotate-3) at ($(A-11) + (0,1)$) {\textit{Uncompute}};
\node (annotate-a) at ($(C-3) + (0,-1.25)$) {\textit{(a)}};
\node (annotate-b) at ($(C-7) + (0,-1.25)$) {\textit{(b)}};
\node (annotate-c) at ($(C-11) + (0,-1.25)$) {\textit{(c)}};

Next, the annotations for the registers, at the input:

\node (A-in-annotate) at ($(A-0) + (-3em,0)$) [anchor=east]
  {\parbox{4.5em}{\centering Ancilla register $S$ }};
\node (B-in-annotate) at ($(B-0) + (-3em,0)$) [anchor=east]
  {\parbox{4.5em}{\centering Clock \\ register $C$ }};
\node (C-in-annotate) at ($(C-0) + (-3em,0)$) [anchor=east]
  {\parbox{4.5em}{\centering Input \\ register $I$ }};

Finally, the frames for the stages of the circuit.

% draw frames for stages of the circuit
\begin{pgfonlayer}{background}
  \bgframe [(annotate-1)(B-2)(B-4)(C-3)];
  \bgframe [(annotate-2)(B-7)(C-7)];
  \bgframe [(annotate-3)(B-10)(B-12)(C-11)];
\end{pgfonlayer}

And that's the end of the circuit.

\end{tikzpicture}
\end{document}
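Taking up the \CNOT exercise posed earlier: here is one untested sketch, in the same ad-hoc style as the macros above (the target radius and drawing choices are mine; it relies on the calc library already loaded in the preamble):

```latex
% Hypothetical \CNOT companion to \ctrlgate: control dot at #1, target at #2.
% Untested sketch; adjust the 0.6em radius to taste.
\def\CNOT (#1) (#2){%
  \filldraw [black] (#1) circle (2pt);
  \draw (#1) -- (#2);
  \draw (#2) circle (0.6em);
  \draw ($(#2) + (0,-0.6em)$) -- ($(#2) + (0,0.6em)$);
}
```

The wire segment into the circle's centre plus the vertical diameter produce the usual crossed-circle target symbol.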
I’ve added a new library to Incanter called incanter.latex that adds the ability to include LaTeX formatted equations as annotations and subtitles in charts. The library is based on the fantastically useful JLaTeXMath library. The following examples require Incanter version 1.2.2-SNAPSHOT or greater. Add the following dependency to your project.clj file:

[incanter "1.2.2-SNAPSHOT"]

Load the necessary libraries.

(use '(incanter core stats charts latex))

Define the latex-formatted equation; I’ll use the str function so I can break the equation across multiple lines. Notice that I have to use two backslashes where I would only need one if I were working directly in LaTeX; this is because the backslash is an escape character in Clojure/Java strings.

(def eq (str "f(x)=\\frac{1}{\\sqrt{2\\pi \\sigma^2}}"
             "e^{\\frac{-(x - \\mu)^2}{2 \\sigma^2}}"))

The equation can be rendered as an image with the latex function. The rendered equation can then be viewed in a window or saved as a png file with the view and save functions respectively.

(view (latex eq))
(save (latex eq) filename)

Use the add-latex function to add an annotation to a chart. The following example adds the above equation to a function-plot of the Normal PDF.

(doto (function-plot pdf-normal -3 3)
  (add-latex 0 0.1 eq)
  view)

Use the add-latex-subtitle function to add a rendered LaTeX equation as a subtitle to the chart (this particular chart does not have a main title).

(doto (function-plot pdf-normal -3 3)
  (add-latex-subtitle eq)
  view)

The complete code for the above examples can be found here.
Let $V$ be an $n$-dimensional $\mathbf{Q}_p$-vector space with a continuous action of $\operatorname{Gal}(\bar{L}/L)$, where $L$ is a complete discretely valued field of characteristic zero with perfect residue field of characteristic $p$.

Question: Is there one standard definition of what it means for $V$ to be ordinary, and if so, what is it?

The reason I ask is that I have seen a few different definitions that don't seem to quite coincide (and perhaps this is just the state of things). For example, in Ralph Greenberg's Iwasawa Theory for $p$-adic Representations, he requires there to be a filtration $$\cdots \subseteq F^{i+1}V\subseteq F^iV\subseteq\cdots$$ of $V$ by $G_L$-stable subspaces satisfying the following conditions:

(i) $F^iV=0$ for $i \gg 0$
(ii) $F^iV=V$ for $i \ll 0$
(iii) the inertia group of $G_L$ acts by $\chi_p^i$ on $F^iV/F^{i+1}V$, where $\chi_p$ is the $p$-adic cyclotomic character

I guess it is implicit in (iii) that any jump in the filtration gives a $1$-dimensional quotient. Greenberg proves in this paper (at least for $L=\mathbf{Q}_p$) that such a representation is Hodge-Tate, but in his proof, it seems that he is not requiring $F^iV/F^{i+1}V$ to be $\leq 1$-dimensional, because he calls this dimension $h_i$ and proves that this quotient, when tensored up to $\mathbf{C}_p$ (the completion of $\bar{\mathbf{Q}}_p$), is isomorphic to $\mathbf{C}_p(i)^{h_i}$ (at least I think this is what he does).

This definition seems to me (unless I'm missing something, which is entirely possible) to differ slightly from the one given in Tom Weston's Iwasawa Invariants of Galois Deformations (where he takes $L$ to be a finite extension of $\mathbf{Q}_p$). He calls $V$ nearly ordinary if there is a composition series $$0=V^0\subsetneq V^1\subsetneq\cdots\subsetneq V^n=V$$ of $V$ as a $\mathbf{Q}_p[G_L]$-module.
He says that if $V$ is Hodge-Tate, then for each $i$ there is an open subgroup of inertia and an integer $m_i$ such that the open subgroup acts on $V^i/V^{i-1}$ by $\chi_p^{m_i}$. He then calls $V$ ordinary if $m_1\geq m_2\geq\cdots\geq m_n$.

It seems to me that if the Hodge-Tate weights (the $m_i$) are distinct, and I can take the open subgroup for each $i$ to be the entire inertia group, then Weston's definition of ordinary implies Greenberg's, but if the $m_i$ are not all distinct, then it doesn't seem to work. Does Greenberg's definition force the Hodge-Tate weights to all appear with multiplicity one?

Finally, I'm pretty certain I've seen a $2$-dimensional $V$ (at least when $V$ is attached to a $p$-ordinary modular form) called ordinary if it has a $1$-dimensional unramified $G_L$-quotient (if I'm not mistaken, Greenberg's definition, in the $2$-dimensional case, reduces to the existence of a one-dimensional $G_L$-quotient that is a Tate twist of an unramified character). This use of the term "ordinary" makes sense to me because it is satisfied by the $p$-adic representation attached to an elliptic curve over $L$ with good, ordinary reduction (and perhaps this is the origin of the term). I apologize if there are mistakes in the above, or if I've failed to see some obvious equivalences. I'm sort of just learning some of this stuff.
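For what it's worth, in the two-dimensional case described in the last paragraph (an elliptic curve $E/L$ with good ordinary reduction), Greenberg's filtration has a single jump; stated from memory rather than from either reference, it looks like this:

```latex
% V = V_p(E) for E/L with good ordinary reduction; I_L denotes inertia.
% F^i V = V for i <= 0, F^1 V a G_L-stable line, F^i V = 0 for i >= 2, with
\[
  0 \longrightarrow F^1 V \longrightarrow V \longrightarrow V/F^1 V
    \longrightarrow 0,
  \qquad
  I_L \text{ acts by } \chi_p \text{ on } F^1 V
  \text{ and trivially on } V/F^1 V.
\]
% So V/F^1 V is the one-dimensional unramified quotient (up to unramified
% twist), matching both descriptions of "ordinary" in the question.
```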
Please help to solve this question:

Instability of a difference scheme under small perturbations does not exclude the possibility that in special cases the scheme converges towards the correct function, if no errors are permitted in the data or the computation. In particular let $f(x)=e^{\alpha x}$ with a complex constant $\alpha$. Show that for fixed $x,t$ and any fixed positive $\lambda = k/h$ whatsoever both the expressions $(3.9)$ and $(3.14)$ converge for $n\rightarrow \infty$ towards the correct limit $e^{\alpha (x-ct)}$. (This is consistent with the Courant-Friedrichs-Lewy test, since for an analytic $f$ the values of $f$ in any interval determine those at the point $\xi=x-ct$ uniquely.)

The PDE is a scalar linear conservation law $$ v_t + c v_x = 0$$ with initial data $v(x,0) = f(x)$, and the method of finite differences is used. The equations are $$\begin{aligned} v(x,t)&=v(x,nk) \\ & = \dots \\ &={\sum_{m=0}^{n}}\binom{n}{m}(1+\lambda c)^m(-\lambda c)^{n-m}f (x+(n-m)h) \end{aligned} \tag{3.9}$$ and $$v(x,t)=v(x,nk)={\sum_{m=0}^{n}}\binom{n}{m}(1-\lambda c)^m(\lambda c)^{n-m}f (x-(n-m)h) \, . \tag{3.14}$$

This is Problem 3 on page 8 of the book by Fritz John [1]. Please help. Thanks

[1] F. John: Partial Differential Equations, 4th Edition, Springer, 1991.
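A sanity check one can run: for $f(x)=e^{\alpha x}$ the binomial theorem collapses $(3.14)$ to $e^{\alpha x}\bigl(1-\lambda c+\lambda c\,e^{-\alpha h}\bigr)^n$, and since $h = t/(\lambda n)$ the base is $1 - \alpha c t/n + O(n^{-2})$, so the $n$-th power tends to $e^{-\alpha c t}$; the same manipulation applies to $(3.9)$. Below is a direct numerical evaluation of the sum $(3.14)$, with parameters of my own choosing:

```python
import cmath
import math

def v_314(alpha, c, lam, x, t, n):
    """Evaluate the sum (3.14) directly for f(s) = exp(alpha * s)."""
    k = t / n            # time step: t = n * k
    h = k / lam          # mesh width: lam = k / h
    f = lambda s: cmath.exp(alpha * s)
    return sum(math.comb(n, m) * (1 - lam * c) ** m * (lam * c) ** (n - m)
               * f(x - (n - m) * h) for m in range(n + 1))

alpha, c, lam, x, t = 0.7 + 0.3j, 1.0, 0.5, 1.0, 0.5
exact = cmath.exp(alpha * (x - c * t))
err200 = abs(v_314(alpha, c, lam, x, t, 200) - exact)
err800 = abs(v_314(alpha, c, lam, x, t, 800) - exact)
# err800 is roughly a quarter of err200: the error behaves like 1/n
```

The error shrinking like $1/n$ is what the expansion above predicts, since the neglected term in the exponent is $\alpha^2/(8n)$ for these parameters.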
I'm reading about induced representations for research. Particularly, I'm trying to get a firm grasp on the finite group case before venturing on to the locally compact case. I've been looking at Wikipedia and more or less get the idea, with one somewhat significant issue: why should we look to $\bigoplus g_iV$ (where the $g_i$ are coset representatives for $H$) when defining the induced representation? Why shouldn't the induced representation of $G$ act simply on $V$, or even on $V^{|G|}$, instead of $[G:H]$ copies of $V$?

For a general pair $G, H$ and a representation $V$ of $H$ there may be no way to extend the action of $H$ on $V$ to an action of $G$. By making the $[G:H]$ copies we have enough wiggle room for $G$ to act. More or less I like to think of it as follows: Take $[G : H]$ copies of the representation, indexed by the cosets of $H$. When you act by something in $H$ you act on each copy separately, and when you act by something else you permute the copies according to how that element permutes the cosets of $H$. This isn't quite correct, as other elements of $G$ not in $H$ will act on the separate copies as well as permute them, but I think it gives good intuition for what is going on. The wikipedia article gives a more explicit formula for how a general $g \in G$ acts.

This is a more advanced view. More generally, if you have a ring homomorphism $\phi:R_1\to R_2$, and $M$ is a left $R_1$-module, then there is a left $R_2$-module that is the "induced module": $$R_2\otimes_{R_1} M.$$ The case $H<G$ with $R_1=\mathbb C[H], R_2=\mathbb C[G]$ is the case of induced group representations. In that case, $R_2$ is generated by $[G:H]$ elements when it is considered as an $R_1$-module, so this is $[G:H]$ times the 'dimension' of $M$. If you look at the categories $R_1\text{-Mod}$ and $R_2\text{-Mod}$, there is an obvious restriction functor $F_{\phi}:R_2\text{-Mod}\to R_1\text{-Mod}$.
I believe, but I'm not sure, that $M\mapsto R_2\otimes_{R_1} M$ is the adjoint of that functor, but I wouldn't swear that is true.

Induction of modules is in a sense related to extension by scalars. Recall that any ring morphism $f \colon R \to S$ induces a functor from left $R$-modules to left $S$-modules by sending an $R$-module $M$ to $S \otimes_R M$, where $S$ is viewed as an $(S, R)$-bimodule via $f$. So the extension-by-scalars functor is actually just a left tensor functor $L \otimes_R -$ for some $(S, R)$-bimodule $L$. Now suppose that $G$ is a finite group, $H$ a subgroup of $G$, and $K$ some field. We have a $KH$-module $N$ and want to obtain a $KG$-module in the most universal way. Viewing the group ring $KH$ as a subring of $KG$ and then extending by scalars suggests that $KG \otimes_{KH} N$ is our candidate. In fact, you can prove that $\text{Ind}_H^G N \cong KG \otimes_{KH} N$ as $(KG, KH)$-bimodules and more generally that the induction functor $\text{Ind}_H^G \colon KH\text{-Mod} \to KG\text{-Mod}$ between the respective module categories is naturally isomorphic to the left tensor functor $KG \otimes_{KH} -$. It should also be noted that the other basic representation-theory operations of restriction, inflation, and deflation can similarly be defined as tensor functors.

Induced representations are a special case of extension of scalars. Say you're doing linear algebra on a vector space $V$ defined over a field $k$. Often it's useful to have an algebraically closed field of scalars, since then we can decompose a space into generalized eigenspaces of a given linear map, so what do we do if our scalars aren't algebraically closed? We extend them! To this end, let $L/K$ be any extension of fields: we want the extension-of-scalars-from-$K$-to-$L$ functor to take the free $K$-vector space on a set $X$ and return the free $L$-vector space on the set $X$.
Thus, if our space is comprised of elements which are $K$-linear combinations of a given set of basis vectors, then after extension of scalars the same is true only we have $L$-linear combinations. There is a way to do this independent of choosing a basis, which is to tensor against $L$ over $K$, i.e. $V_L:=L\otimes_KV$. The tensor product is essentially a way to "pretend multiply" vectors in $V$ by scalars in $L$ and allow ourselves linear combinations of these "pretend products." One easily checks this makes $V_L$ an $L$-vector space. The same can be done with $R$-modules. (Vector spaces are modules over fields.) If $S$ is a ring which contains the ring $R$, and $M$ is any $R$-module, then $S\otimes_RM$ is formed by "pretending" to multiply elements of $M$ by scalars in $S$ (subject to the proviso that scalars in $R$ act the same way they did originally on $M$) and then adding these pretend products, and this turns $S\otimes_RM$ into a module over the bigger ring $S$. Linear representations of a group $H$ over a field $k$ are basically $k[H]$-modules. If we want to extend the action of $H$ on a $k$-space $V$ to an action of $G$ on it, we need to extend the scalars of $k[H]$ to the scalars of $k[G]$. This is achieved by $k[G]\otimes_{k[H]}V$. It is comprised of linear combinations of vectors $gv$ for $g\in G$, $v\in V$, subject to $(ab)v=a(bv)$, $(a+b)v=av+bv$, $a(v+w)=av+aw$, and the rule that $hv$ for $h\in H$ is exactly as it's defined when $V$ is a $k[H]$-module. To a category theorist, we notice that "restriction of the action of $G$ to the action of $H$" is basically a forgetful functor from the category of representations of $G$ to the category of representations of $H$. 
The notion of "free" or "universal" constructions is captured in categorical language as "adjoints to forgetful functors," so if the opposite of restricting actions is inducing them upwards, then we can define the representations of $G$ induced from a representation of $H$ via applying such an adjoint. Frobenius reciprocity (the $\hom$ version) essentially states that $k[G]\otimes_{k[H]}V$ is the left adjoint applied to a representation $V$ of $H$ to induce a representation of $G$. Whenever there is some definition (or in this case, more precisely a construction), it is indeed a good idea to ask yourself "why does it have to be exactly this way". To answer such a question, one should start by figuring out what the constructed object should satisfy. In this case, we have a group $G$, a subgroup $H$ and a representation $V$ of $H$. What we want is a representation of $G$, and we would of course like it to have some sort of relation to $V$. Since we have a very nice way to get a representation of $H$ from a representation of $G$ (restriction), this should probably somehow play into this relation. I will discuss three possible relations one might wish for: The overly optimistic, the overly pessimistic, and the "just right". The overly optimistic: If we were to wish for the very best relation we could get, we would like our induced representation $W$ to be such that restricting $W$ to $H$ gives back $V$. This would obviously give us the very best possible relation we could ever hope for, but unfortunately, $G$ need not have any representation with this property (in fact, determining when this is the case is a very interesting topic in the representation theory of finite groups). 
The overly pessimistic: A relation that we should at least require is that if $V$ is simple, then when we restrict the induced module to $H$, $V$ is either a submodule or a quotient (note that I have not specified anything about the groups or the field we work over, so these need not be equivalent). However, as we will shortly see, we can get something even better than this, and we might as well get all that we can.

The "just right": In the overly pessimistic version, we were interested in submodules or quotients. But often a more useful thing to consider will be $\operatorname{Hom}$-spaces between modules. In this context, having a certain simple submodule is the same as having a non-zero homomorphism from said simple module (and having it as a quotient is then the same with the arrow in the other direction). So in this way, the overly pessimistic version becomes a statement about certain $\operatorname{Hom}$-spaces being non-zero. To make this more general, we have an $H$-module $V$ and we want a $G$-module $W$ such that whenever we have a $G$-module $M$, we have some sort of comparison between the spaces $\operatorname{Hom}_H(V,M)$ and $\operatorname{Hom}_G(W,M)$ (or with the entries switched). So what sort of comparison do we want? Well, these are vector spaces, so how about asking that they have the same dimension? This turns out to be just the right thing to ask, since it is on the one hand a very strong condition, but on the other, we can actually construct such a $W$ (that the induced module satisfies this is known as Frobenius reciprocity). Note that to recover the overly pessimistic version, we can take $M = W$ in the above.

As a final note, I would like to add a bit of notation.
If we denote restriction from $G$ to $H$ by $\operatorname{res}_H^G$ and induction from $H$ to $G$ by $\operatorname{ind}_H^G$, then the above condition becomes $\operatorname{Hom}_H(V,\operatorname{res}_H^G M)\cong\operatorname{Hom}_G(\operatorname{ind}_H^G V,M)$, or in other words that induction is left adjoint to restriction (or right adjoint if we switch the entries), at least when everything behaves nicely as functors, which indeed it tends to do. Also worth noting is that the above gives two possibilities for what we might require of our "induction". A "slightly more optimistic" version could be to require both to hold. And indeed, for finite groups over $\mathbb{C}$, this is what we get, but in general the two need not give the same thing (in fact, we might not always be sure they both exist). For example, when dealing with algebraic groups, we will usually be more interested in the version mentioned in parentheses, since this is somewhat better behaved.
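The coset-translate description in the first answer can be made completely concrete on a small example. Below (a sketch of mine: the choice $G = S_3$, $H = A_3$ and all names are illustrative) a one-dimensional character $\chi$ of $A_3$ with $\chi((0\,1\,2)) = \omega$ is induced to a two-dimensional representation of $S_3$, using the standard matrix formula $\rho(g)_{ij} = \chi(g_i^{-1} g g_j)$ when $g_i^{-1} g g_j \in H$ and $0$ otherwise:

```python
import cmath
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations of {0,1,2} stored as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = list(permutations(range(3)))           # S3
r = (1, 2, 0)                              # a 3-cycle
omega = cmath.exp(2j * cmath.pi / 3)
chi = {(0, 1, 2): 1, r: omega, compose(r, r): omega ** 2}  # character of A3
reps = [(0, 1, 2), (1, 0, 2)]              # coset representatives g_1, g_2

def induced(g):
    """Matrix of g on the induced representation: entry (i, j) is
    chi(g_i^{-1} g g_j) when that element lies in H = A3, else 0."""
    M = [[0, 0], [0, 0]]
    for i, gi in enumerate(reps):
        for j, gj in enumerate(reps):
            h = compose(inverse(gi), compose(g, gj))
            if h in chi:
                M[i][j] = chi[h]
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Elements of $H$ act diagonally on the two copies (e.g. the 3-cycle gives $\operatorname{diag}(\omega, \omega^2)$), while the transposition swaps them, exactly the "permute the copies by cosets" picture; the dimension is $[G:H]\cdot\dim V = 2$.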
Under the auspices of the Computational Complexity Foundation (CCF) In this paper we prove two results about $AC^0[\oplus]$ circuits. We show that for $d(N) = o(\sqrt{\log N/\log \log N})$ and $N \leq s(N) \leq 2^{dN^{1/4d^2}}$ there is an explicit family of functions $\{f_N:\{0,1\}^N\rightarrow \{0,1\}\}$ such that $f_N$ has uniform $AC^0$ formulas of depth $d$ and size at most $s$; $f_N$ does not have $AC^0[\oplus]$ formulas of depth $d$ and size $s^{\varepsilon}$, where $\varepsilon$ is a fixed absolute constant. This gives a quantitative improvement on the recent result of Limaye, Srinivasan, Sreenivasaiah, Tripathi, and Venkitesh, (STOC, 2019), which proved a similar Fixed-Depth Size-Hierarchy theorem but for $d \ll \log \log N$ and $s \ll \exp(N^{1/2^{\Omega(d)}})$. As in the previous result, we use the Coin Problem to prove our hierarchy theorem. Our main technical result is the construction of uniform size-optimal formulas for solving the coin problem with improved sample complexity $(1/\delta)^{O(d)}$ (down from $(1/\delta)^{2^{O(d)}}$ in the previous result). In our second result, we show that randomness buys depth in the $AC^0[\oplus]$ setting. Formally, we show that for any fixed constant $d\geq 2$, there is a family of Boolean functions that has polynomial-sized randomized uniform $AC^0$ circuits of depth $d$ but no polynomial-sized (deterministic) $AC^0[\oplus]$ circuits of depth $d$. Previously Viola (Computational Complexity, 2014) showed that an increase in depth (by at least $2$) is essential to avoid superpolynomial blow-up while derandomizing randomized $AC^0$ circuits. We show that an increase in depth (by at least $1$) is essential even for $AC^0[\oplus]$. As in Viola's result, the separating examples are promise variants of the Majority function on $N$ inputs that accept inputs of weight at least $N/2 + N/(\log N)^{d-1}$ and reject inputs of weight at most $N/2 - N/(\log N)^{d-1}$. Many typos and minor errors corrected. 
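For readers meeting it for the first time: in the coin problem one must decide, from independent flips, whether a coin lands heads with probability $1/2+\delta$ or $1/2-\delta$. The information-theoretic benchmark against which the formula sizes above are measured is that a majority vote over $\Theta(1/\delta^2)$ flips already succeeds with high probability; this is easy to verify exactly (parameters mine, unrelated to the circuit classes in the abstract):

```python
from math import comb

def majority_error(n, p):
    """Probability that fewer than n/2 of n independent p-biased flips
    come up heads, i.e. that a majority vote reads the bias as negative."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n // 2))

delta = 0.1
n = int(4 / delta ** 2)             # Theta(1/delta^2) samples
err = majority_error(n, 0.5 + delta)
# err is tiny: majority almost always detects the +delta bias
```

By symmetry the same bound holds for the $1/2-\delta$ coin, so the majority estimator's overall error probability is small with $O(1/\delta^2)$ samples; the paper's contribution concerns achieving comparable sample complexity with small low-depth formulas.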
Calculate the heat produced when a strip of $\ce{Mg}$ metal with a mass of $0.0801$ gram is reacted with $50.0$ ml of $1.0$ M $\ce{HCl}$, raising the temperature of the water by $7.6\ ^\circ C$ (change in temp. $= 7.6\ ^\circ C$). Calculate the heat produced when a mole of this metal is used.

I have this so far: $q = m \times c \times \Delta T$, with $m = 0.0801$ gram, and change in temp ($\Delta T$) $= (7.6 - 4.184 = 3.416)$ ($T_f - T_i$). Is that right? And I have no idea what the specific heat ($c$) is. Is it the specific heat of $\ce{Mg}$? What do I do with the $50.0$ ml of $1.0$ M $\ce{HCl}$? Thanks
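A sketch of how such calorimetry problems are usually set up (this is my reading of the intended exercise, not a verified answer key): the heat goes into the roughly $50\ \mathrm{g}$ of solution, $c$ is the specific heat of water, $4.184\ \mathrm{J\,g^{-1}\,{}^\circ C^{-1}}$ (so $4.184$ is a specific heat, not an initial temperature), and $\Delta T = 7.6\ ^\circ C$ as stated:

```python
# Hedged sketch of the standard calorimetry bookkeeping, not a checked answer key.
c_water = 4.184          # J / (g * degC): specific heat of the dilute solution
mass_solution = 50.0     # g: treating 50.0 mL of dilute HCl as ~50 g of water
dT = 7.6                 # degC: the stated temperature rise (T_f - T_i)
M_Mg = 24.305            # g/mol: molar mass of magnesium

q = mass_solution * c_water * dT   # heat absorbed by the solution, in J

# 50.0 mL x 1.0 M gives 0.050 mol HCl, while Mg + 2 HCl needs only
# 2 x 0.0033 mol, so Mg is the limiting reagent and sets the scale:
moles_Mg = 0.0801 / M_Mg
q_per_mole = q / moles_Mg          # J released per mole of Mg, ~4.8e5 J/mol
```

Under these assumptions $q \approx 1.59\ \mathrm{kJ}$ and the molar value is on the order of $480\ \mathrm{kJ/mol}$, but check against your course's conventions (e.g. whether the calorimeter's heat capacity is neglected).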
Lower bounds for linear bandits turn out to be more nuanced than the finite-armed case. The big difference is that for linear bandits the shape of the action-set plays a role in the form of the regret, not just the …

Recall that in the adversarial contextual $K$-action bandit problem, at the beginning of each round $t$ a context $c_t\in \Ctx$ is observed. The idea is that the context $c_t$ may help the learner to choose a better action. This led …

In most bandit problems there is likely to be some additional information available at the beginning of rounds and often this information can potentially help with the action choices. For example, in a web article recommendation system, where the goal …

In the post on adversarial bandits we proved two high probability upper bounds on the regret of Exp-IX. Specifically, we showed: Theorem: There exists a policy $\pi$ such that for all $\delta \in (0,1)$ for any adversarial environment $\nu\in [0,1]^{nK}$, …

A stochastic bandit with $K$ actions is completely determined by the distributions of rewards, $P_1,\dots,P_K$, of the respective actions. In particular, in round $t$, the distribution of the reward $X_t$ received by a learner choosing action $A_t\in [K]$ is $P_{A_t}$, …
On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian

1. Facultad de Matemáticas, Universidad Católica de Chile, Casilla 306, Correo 22 - Santiago
2. Department of Mathematics, Pontificia Universidad Católica de Chile, Casilla 306, Correo 22, Santiago

(P) $\qquad\qquad\qquad -\Delta u=K(|x|)f(u),\quad x\in \mathbb R^n.$

Here $K$ is a positive $C^1$ function defined in $\mathbb R^+$ and $f\in C[0,\infty)$ has one zero at $u_0>0$, is non positive and not identically 0 in $(0,u_0)$, and it is locally lipschitz, positive and satisfies some superlinear growth assumption in $(u_0,\infty)$.

Mathematics Subject Classification: 37C4.

Citation: C. Cortázar, Marta García-Huidobro. On the uniqueness of ground state solutions of a semilinear equation containing a weighted Laplacian. Communications on Pure & Applied Analysis, 2006, 5 (4) : 813-826. doi: 10.3934/cpaa.2006.5.813
First, I give my motivation to ask this question. The generalised Neumann trace can be defined as $$ {}_{H^{-1/2}(\partial\Omega)}\langle\frac{\partial u}{\partial{\mathbf{n}}},v\rangle_{H^{1/2}(\partial\Omega)} ={}_{H^{-1}(\Omega)}\langle\Delta u,v\rangle_{H^1(\Omega)}-\int_{\Omega}\nabla u\cdot\nabla v. $$ But this involves an integral over the volume of $\Omega$, which does not really look like a trace to me. In particular, if we substitute this definition into Green's representation formula, we obtain a useless identity like $0=0$. Then I found a theorem in Girault-Raviart's book that says $H(\mathrm{div},\Omega)$ always has a normal trace in $H^{-1/2}(\partial\Omega)$, by smooth approximation. This is the first time I have seen such a smooth approximation result for negative order Sobolev spaces. I searched on the web and in books, but I found only that $C_c^\infty(\mathbb{R}^n)$ is dense in $H^{-s}(\mathbb{R}^n)$; I cannot find similar results on Lipschitz domains. A reference would be appreciated. Second, I also saw a result in McLean's book. It says that any $f$ in an integer negative order Sobolev space $W^{-m,p}(\Omega)$ has a representation $$f=\sum_{|\alpha|\leq m}\partial^{\alpha}f_{\alpha} \mbox{ with }f_{\alpha}\in L^p{(\Omega)}.$$ But he says nothing about negative real order Sobolev spaces. I would like a reference for similar results on negative real order Sobolev spaces.
I am considering a function $f(x,y)$ with all the appropriate assumptions so that what comes next is well defined. I think we have the equivalence: $$\mathrm{d}f=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y=0\Leftrightarrow \frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=0$$ What is the exact mathematical argument behind this? How should the quantities $\mathrm{d}x$ and $\mathrm{d}y$ be treated? Partial answer: Your equivalence is not true in general. Let's say $f(x,y)=x^{0.5}\cdot y^{0.5} \ \ \forall x,y>0$. Then we have the partial derivatives $$\frac{\partial f}{\partial x}=0.5\cdot x^{-0.5}\cdot y^{0.5}, \quad \frac{\partial f}{\partial y}=0.5\cdot x^{0.5}\cdot y^{-0.5}. $$ Let us inspect the point $x=y=1$: $$\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y=0\Rightarrow 0.5dx+0.5dy=0\Rightarrow \boxed{dx=-dy}$$ So here $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial y}=0.5\neq 0$, and several combinations of $dx$ and $dy$ are possible. You have to understand the real nature of $df, dx$ and $dy$, and be precise in your statement. First of all, you have to understand that $df$ is the differential at a point $(x_0,y_0)\in\mathbb{R}^2$, so at the least you should write $df= \frac{\partial f}{\partial x}(x_0,y_0)\mathrm{d}x+\frac{\partial f}{\partial y}(x_0,y_0)\mathrm{d}y$. Now $df, dx$ and $dy$ are linear forms on $\mathbb{R}^2$. The notation $dx$ denotes the linear form $(h,k)\in\mathbb{R}^2\mapsto h\in\mathbb{R}$, while $dy$ is the linear form $(h,k)\in\mathbb{R}^2\mapsto k\in\mathbb{R}$. Then $df$ is a linear combination of the linear forms $dx$ and $dy$. It is easy to check that $dx$ and $dy$ are linearly independent. In particular, $df=0$ is equivalent to both coefficients of the combination being $0$.
If you prefer a direct argument (which amounts to what I said previously), we may rewrite the equality as follows: $$df(h,k)=\frac{\partial f}{\partial x}(x_0,y_0)h+\frac{\partial f}{\partial y}(x_0,y_0)k=0$$ for all $(h,k)\in\mathbb{R}^2$. Setting $(h,k)=(1,0)$, then $(h,k)=(0,1)$ yields the desired equivalence.
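The distinction the two answers draw (one direction versus all directions) can be checked mechanically; here is a small sympy sketch of my own, not part of the original answers:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.sqrt(x) * sp.sqrt(y)

# partial derivatives of f(x, y) = x^0.5 * y^0.5
fx = sp.diff(f, x)
fy = sp.diff(f, y)

vals = {x: 1, y: 1}
# at (1, 1) both partials are 1/2, not 0 ...
assert fx.subs(vals) == sp.Rational(1, 2)
assert fy.subs(vals) == sp.Rational(1, 2)

# ... yet df vanishes on the particular direction (h, k) = (1, -1)
df_dir = fx.subs(vals) * 1 + fy.subs(vals) * (-1)
assert df_dir == 0

# df = 0 as a *linear form* means df(h, k) = 0 for every (h, k);
# evaluating on the basis directions (1, 0) and (0, 1) recovers both partials
h, k = sp.symbols('h k')
df = fx.subs(vals) * h + fy.subs(vals) * k
assert df.subs({h: 1, k: 0}) == fx.subs(vals)
assert df.subs({h: 0, k: 1}) == fy.subs(vals)
```

So "$df$ vanishes in one direction" and "$df=0$ as a linear form" are genuinely different statements, which is the crux of both answers.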
Particle Moving Along y Axis on Surface of Rotating Ellipse. An ellipse rotates in the \[xy\] plane, and a particle on the surface of the ellipse is made to move on the surface as the ellipse rotates. What will be the velocity and acceleration of the particle? If the minor and major axes have lengths \[b, \; a\] respectively, the coordinates of a point on the surface of the ellipse, with origin at one focus, are \[(r, \theta )\] in polar coordinates with \[r= \frac{b^2}{a(1- e\cos \theta )}\] and the radial velocity is \[\frac{dr}{dt}= - \frac{eb^2 \dot{\theta} \sin \theta }{a(1-e \cos \theta )^2}. \] The radial acceleration is \[\frac{d^2r}{dt^2}= - \frac{eb^2 (\ddot{\theta} \sin \theta + \dot{\theta}^2 \cos \theta )}{a(1-e \cos \theta )^2} + \frac{2e^2b^2 \dot{\theta}^2 \sin^2 \theta}{a(1-e \cos \theta )^3}. \]
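These derivatives can be verified symbolically; the following sympy check is my own sketch (note that the second acceleration term comes out with a plus sign):

```python
import sympy as sp

t = sp.symbols('t')
a, b, e = sp.symbols('a b e', positive=True)
th = sp.Function('theta')(t)

# r = b^2 / (a (1 - e cos(theta)))
r = b**2 / (a * (1 - e * sp.cos(th)))

# first derivative: dr/dt
dr = sp.diff(r, t)
dr_claimed = -e * b**2 * sp.diff(th, t) * sp.sin(th) / (a * (1 - e * sp.cos(th))**2)
assert sp.simplify(dr - dr_claimed) == 0

# second derivative: d^2r/dt^2 (two terms, via product/chain rule)
d2r = sp.diff(r, t, 2)
d2r_claimed = (-e * b**2 * (sp.diff(th, t, 2) * sp.sin(th) + sp.diff(th, t)**2 * sp.cos(th))
               / (a * (1 - e * sp.cos(th))**2)
               + 2 * e**2 * b**2 * sp.diff(th, t)**2 * sp.sin(th)**2
               / (a * (1 - e * sp.cos(th))**3))
assert sp.simplify(d2r - d2r_claimed) == 0
```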
Journal of Symbolic Logic, Volume 60, Issue 4 (1995), 1208-1241. Minimal Realizability of Intuitionistic Arithmetic and Elementary Analysis. Abstract: A new method of "minimal" realizability is proposed and applied to show that the definable functions of Heyting arithmetic (HA)--functions $f$ such that HA $\vdash \forall x\exists!yA(x, y)\Rightarrow$ for all $m, A(m, f(m))$ is true, where $A(x, y)$ may be an arbitrary formula of $\mathscr{L}$(HA) with only $x, y$ free--are precisely the provably recursive functions of the classical Peano arithmetic (PA), i.e., the $< \varepsilon_0$-recursive functions. It is proved that, for prenex sentences provable in HA, Skolem functions may always be chosen to be $< \varepsilon_0$-recursive. The method is extended to intuitionistic finite-type arithmetic, $HA^\omega_0$, and elementary analysis. Generalized forms of Kreisel's characterization of the provably recursive functions of PA and of the no-counterexample-interpretation for PA are consequently derived. First available in Project Euclid: 6 July 2007. Permanent link: https://projecteuclid.org/euclid.jsl/1183744873. Mathematical Reviews number (MathSciNet): MR1367206. Zentralblatt MATH identifier: 0854.03054. Citation: Damnjanovic, Zlatan. Minimal Realizability of Intuitionistic Arithmetic and Elementary Analysis. J. Symbolic Logic 60 (1995), no. 4, 1208--1241.
Under the auspices of the Computational Complexity Foundation (CCF) A $k$-LIN instance is a system of $m$ equations over $n$ variables of the form $s_{i[1]} + \dots + s_{i[k]} = 0$ or $1$ modulo 2 (each involving $k$ variables). We consider two distributions on instances in which the variables are chosen independently and uniformly but the right-hand sides are different. In a noisy planted instance, the right-hand side is obtained by evaluating the system on a random planted solution and adding independent noise with some constant bias to each equation; whereas in a random instance, the right-hand side is uniformly random. Alekhnovich (FOCS 2003) conjectured that the two are hard to distinguish when $k = 3$ and $m = O(n)$. We give a sample-efficient reduction from solving noisy planted $k$-LIN instances to distinguishing them from random instances. Suppose that $m$-equation, $n$-variable instances of the two types are efficiently distinguishable with advantage $\epsilon$. We show that $O(m \cdot (m/\epsilon)^{2/k})$-equation, $n$-variable noisy planted $k$-LIN instances are efficiently solvable with probability $\exp(-\tilde{O}((m/\epsilon)^{6/k}))$. Our solver has worse success probability but better sample complexity than Applebaum's (SICOMP 2013). The solver is based on a new approximate local list-decoding algorithm for the $k$-XOR code at large distances. The $k$-XOR encoding of a function $F\colon \Sigma \to \{-1, 1\}$ is its $k$-th tensor power $F^k(x_1, \dots, x_k) = F(x_1)\cdots F(x_k)$. Given oracle access to a function $G$ that $\mu$-correlates with $F^k$, our algorithm outputs the description of a message that $(\mu^{1/k} - \epsilon)$-correlates with $F$ with probability $\exp(-\tilde{O}(k^2\mu^{-2/k}\epsilon^{-2}))$. Previous decoders have a worse dependence on $\mu$ (Levin, Combinatorica 1987) or do not apply to subconstant $\mu^{1/k}$. We also prove a new XOR lemma for this parameter regime.
The decoder and its analysis rely on a new structure-versus-randomness dichotomy for Boolean-valued functions over product sets.
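As a toy illustration of the objects in the abstract (just the definitions of $F^k$ and $\mu$-correlation, not the list-decoding algorithm), one can build a noisy $G$ and measure its correlation with the tensor power:

```python
import itertools
import random

# The k-XOR encoding of F: Sigma -> {-1, 1} is F^k(x1,...,xk) = F(x1)...F(xk).
# Flipping each value of F^k independently with probability delta yields a G
# whose correlation with F^k is about 1 - 2*delta.
random.seed(0)
Sigma = range(8)
k = 3
F = {x: random.choice([-1, 1]) for x in Sigma}

def Fk(xs):
    prod = 1
    for x in xs:
        prod *= F[x]
    return prod

delta = 0.1
points = list(itertools.product(Sigma, repeat=k))
G = {xs: -Fk(xs) if random.random() < delta else Fk(xs) for xs in points}

# empirical correlation mu = E[G(x) * F^k(x)] over the product set
mu = sum(G[xs] * Fk(xs) for xs in points) / len(points)
print(mu)  # close to 1 - 2*delta = 0.8
```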
Let $A_n$ be the adjacency matrix of the Cayley graph $\text{Cay}(S_n,C_n)$ where $C_n \subseteq S_n$ is the conjugacy class of $n$-cycles of the symmetric group $S_n$. Since the generating set of this Cayley graph is a conjugacy class, it is not too difficult to use the representation theory of $S_n$ to elegantly count the number of nonzero eigenvalues of $A$: $$ \text{rank}(A_n) = \binom{n-1}{0}^2 + \binom{n-1}{1}^2 + \cdots + \binom{n-1}{n-1}^2 = \binom{2(n-1)}{n-1}.$$ I am interested in the rank of $A_n$ modulo $p$ where $p$ is an odd prime for all $n$. One way to determine this would be to compute the Smith Normal Form of $A_n$ (over $\mathbb{Z}$). Let $D_n = \text{diag}(s_1,s_2,\cdots,s_r,0,\cdots,0)$ such that $s_i | s_{i+1}$ for all $1 \leq i < r := \text{rank}(A)$ be the Smith Normal Form of $A_n$. Computations for small $n$ show that the nonzero $s_i$'s are all powers of 2, which might suggest that $\text{rank}_p(A_n) = \text{rank}(A_n)$ for all $n$ and odd primes $p$. It seems unlikely that one can divine unimodular matrices $U_n,V_n$ such that $D_n = U_nA_nV_n$ for all $n$, so I would like to think of $A_n$ as an endomorphism of the group algebra $\mathbb{F}_p[S_n]$ and perhaps use $p$-modular representation theory of $S_n$ to say something about the image of $A_n$. (Here, we are assuming $p$ is small, i.e., $p \mid n!$, so $\mathbb{F}_p[S_n]$ is not semisimple.) Generally speaking, working with modular representations of $S_n$ is also difficult; however, the image of $A_n$ (in the characteristic 0 case) is the direct sum of the hook-shaped Specht modules, which are pretty well-understood, even in the modular case. In particular, Peel (1971) showed for odd primes $p$ that the hook-shaped Specht modules $S^{(n-k,1^k)}_{\mathbb{F}_p}$ are simple when $p \not \mid n$ and determined their composition series when $p \mid n$. 
Experimentally, if one picks $b \in S^{(n-k,1^k)}_{\mathbb{F}_p}$ to be an $(n-k,1^k)$-standard polytabloid (which is a $\{0,\pm1 \}$-valued vector, well-defined for any Specht module over any field), then $A_nx = b$ indeed has a solution over $\mathbb{F}_p$ for small $k$ and $n$ and odd primes $p$. In the case that $p \not \mid n$, because $A_nx = b$ has a solution, it follows that $A_nx = b'$ has a solution for any $b' \in S^{(n-k,1^k)}_{\mathbb{F}_p}$, as $S^{(n-k,1^k)}_{\mathbb{F}_p}$ is irreducible by Peel's result. Here, we are "using the modular representation theory of $S_n$", but the problem is that showing a solution $x$ of $A_nx = b$ exists over $\mathbb{F}_p$ for all $0 \leq k < n$ and odd primes $p$ seems to involve similar row-operation-type calculations as putting $A_n$ into Smith Normal Form. My (open-ended) question is whether there is a more clever way to leverage such information about the modular representation theory of $S_n$ that would circumvent row and column operations to say something about the Smith Normal Form of $A_n$ or $\text{rank}_p(A_n)$ for odd primes $p$. EDIT Here's the SNF for small $n$ (thanks Dima for verifying these): $n = 2$: the SNF is $1^2$; $n = 3$: the SNF is $1^42^2$; $n = 4$: the SNF is $1^{8}2^{12}$; $n = 5$: the SNF is $1^{16}2^{52}8^2$; $n = 6$: the SNF is $1^{32}2^{200}8^{20}$; $n = 7$: the SNF is $1^{64}2^{728}4^{2}8^{128}16^2$. (I have not gone beyond $n=7$, as this would take some time.)
I'm following Classical Mechanics, 5th Edition by Tom W.B. Kibble and Frank H. Berkshire. I'm following it since I'm interested in studying physics (although I am doing it at home myself). I've worked through quite a range of chapters in the book but skipped a lot of chapter two since I was having so much trouble understanding it (although a lot of the rest was fine). The section I'm struggling with is where they solve the harmonic oscillator equation. Equation 2.13 ($m\ddot{x} + kx = 0$) is a linear differential equation; that is, one involving only linear terms in $x$ and its derivatives. Such equations have the important property that their solutions satisfy the superposition principle: if $x_1(t)$ and $x_2(t)$ are solutions, then so is any linear combination $$x(t)=a_1x_1(t)+a_2x_2(t), \quad[2.15]$$ where $a_1$ and $a_2$ are constants; for, clearly, $$m\ddot{x}+kx = a_1(m\ddot{x_1}+kx_1) + a_2(m\ddot{x_2}+kx_2) = 0$$ This makes sense so far - they're just talking about the property of solutions to differential equations. Moreover if $x_1$ and $x_2$ are linearly independent solutions, then [2.15] is the general solution. Since the equation is of second order, we could obtain its solution by integrating twice, and the general solution must therefore contain just two arbitrary constants of integration. So to find the general solution, all we have to do is find any two independent solutions $x_1(t)$ and $x_2(t)$. Let us first consider the case where $k < 0$, so that $V(x)$ has a maximum at $x = 0$. Then, the differential equation can be written, $$\ddot{x} - p^2x = 0, \quad p = \sqrt{\frac{-k}{m}}$$ This is the part I don't understand. Where on earth did $p$ come from? What is $p$? How did they actually solve this to get the general solution $x = \frac{1}{2}Ae^{pt} + \frac{1}{2}Be^{-pt}$? They then go on to treat the case where $k > 0$, and this doesn't make sense either. No matter how many times I read through it, I don't get it.
Would it be better to understand the solution with complex numbers rather than these two different scenarios for k? Thanks for any help.
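Not from the book, but a quick sympy sanity check of both cases ($k<0$ gives growing/decaying exponentials, $k>0$ gives sines and cosines) looks like this:

```python
import sympy as sp

t = sp.symbols('t')
p, w = sp.symbols('p omega', positive=True)
A, B = sp.symbols('A B')
x = sp.Function('x')

# k < 0 case: with p = sqrt(-k/m), the equation m x'' + k x = 0 becomes x'' - p^2 x = 0
sol = sp.dsolve(x(t).diff(t, 2) - p**2 * x(t), x(t))
print(sol)  # exponential solutions in exp(p*t) and exp(-p*t)

# any combination A e^{pt} + B e^{-pt} solves it, for arbitrary constants A, B
cand = A * sp.exp(p * t) + B * sp.exp(-p * t)
assert sp.simplify(cand.diff(t, 2) - p**2 * cand) == 0

# k > 0 case: with omega = sqrt(k/m), the equation becomes x'' + omega^2 x = 0,
# solved by sines and cosines (equivalently, complex exponentials e^{±i omega t})
cand2 = A * sp.cos(w * t) + B * sp.sin(w * t)
assert sp.simplify(cand2.diff(t, 2) + w**2 * cand2) == 0
```

Here $p$ is just an abbreviation chosen so the equation takes the clean form $\ddot{x} = p^2 x$; guessing $x = e^{\lambda t}$ gives $\lambda^2 = p^2$, hence $\lambda = \pm p$ and the two independent solutions $e^{pt}$, $e^{-pt}$.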
This is an update from Cheenta Research Track (Geometric Group Theory group). The group is comprised of Ashani Dasgupta and Sambuddha Majumdar. Learn more about Research Track here. Reference Texts: Metric Spaces of Non-Positive Curvature by Bridson and Haefliger; Algebraic Topology by Hatcher; Contemporary Abstract Algebra by Gallian; Aspects of Topology by Charles O. Christenson et al. Reference Papers: R-trees in topology, geometry, and group theory by Mladen Bestvina. Apology: This is a close reading of developments in Geometric Group Theory. I have learnt most of it from my doctoral advisor. In this work group, we are investigating some aspects of this vast field. Sketch of discussion (till 13th May, 2019): Suppose X is a set, and assume that there is a rule to measure the distance between points in the set. This is loosely what a metric space is. (Read more about metric spaces and point set topology in Christenson.) Isometries are maps from X to X which preserve distance: if A and B are two points in X, then f is an isometry when the distance between A and B is the same as the distance between f(A) and f(B), that is, |A-B| = |f(A) - f(B)|. We consider the set of all isometries of X. This is a group (learn more about groups from Gallian). Proper Action: Let \( \Gamma\) be a group acting by isometries on a metric space X. The action is said to be proper (alternatively, "\( \Gamma \) acts properly on X") if each point has a small enough ball about it such that all but finitely many members of \( \Gamma \) move that ball disjointly from itself. We went ahead and proved Proposition 8.5 from Haefliger rigorously. Suppose a group \( \Gamma \) acts properly by isometries on the metric space X. Then: (1) For each \( x \in X \), there exists \( \epsilon > 0 \) such that if \( \gamma \cdot B (x, \epsilon ) \cap B (x, \epsilon) \neq \emptyset \) then \( \gamma \in \Gamma_x \), the stabilizer of x. (2) The distance between orbits in X defines a metric on the space \( \Gamma \backslash X \) of \( \Gamma \)-orbits.
(3) If the action is proper and free, then the natural projection \( p : X \to \Gamma \backslash X \) is a covering map and a local isometry. (4) If a subspace Y of X is invariant under the action of a subgroup \( H \subseteq \Gamma \), then the action of H on Y is proper. (5) If the action of \( \Gamma \) is cocompact, then there are only finitely many conjugacy classes of isotropy subgroups in \( \Gamma \). The discussion involved rigorous proofs and definitions of relevant terms. We backtracked through some ideas from topology (covering space theory) and group theory (isotropy groups, conjugacy classes). What lies ahead: We will review covering space theory in some detail and finally understand the Schwarz–Milnor theorem (notion of quasi-isometry).
Problem Statement Let's run an election. $i \in \text{voters}$ $j \in \text{candidates}$ $x_j \in \{ 0, 1 \}$ The candidate is chosen by setting this to 1. This is the election result. $b_{i,j} \in [0,1]$ Ballot of voter i for candidate j. Voter gives bigger numbers if he likes the candidate. This is the input to the election. Now, how do we choose $x_j$ so that the best group of candidates win? Here is one optimization. edit 2: Actually this is a better problem. Maximize $Z$ subject to $\begin{array}{ll} \forall _{j } , & \sum_{i} f_i * b_{ij}^2 * x_j \geq Z * x_j & \text{ Minimum winning score is maximized.} \\ \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j = 1 & \text{The weight of each voter is the same.} \leq \text{works too}\\ & \sum_{j } x_j = N & \text{ The number of seats is N.} \end{array} $ When I try this in Gurobi, it complains, "Q matrix is not positive semi-definite". However, I can set an upper bound on f and then it will work. Also, I can linearize something: $ \forall _{j } , \quad N*(x_j-1)+\sum_{i} f_i * b_{ij}^2 \geq Z \quad \text{ Minimum winning score is maximized.} $ Gurobi does solve this, though it takes a lot of time. I wish I could linearize all the constraints so there is no $f*x$ term. It is also possible to just say $\text{Maximize} \quad {\displaystyle \min_j \sum_i \frac{b_{ij}^2}{\sum_j x_j*b_{ij}}}$ but I'm not sure this helps, though it does get rid of f. 
Here's another related problem $ \begin{array}{lll} \text{Minimize} & { \max_{i } \sum_{j } f_i * b_{ij} * x_j} & \text{The representation of each voter is fair.} \\ \text{Subject to} & {\forall \ j } \ \ \sum_{i } f_i * b_{ij}^2 *x_j \geq x_j \ \ \ & \text{The value of each winning seat is the same.} \end{array} $ old stuff below edit 1: I realize a better problem to solve is this one: Maximize $Z$ subject to $\begin{array}{ll} \forall _{j } , & \sum_{i} f_i * b_{ij} * x_j \geq Z * x_j & \text{ Minimum winning score is maximized.} \\ \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j = 1 & \text{The weight of each voter is the same.} \leq \text{works too}\\ & \sum_{j } x_j = N & \text{ The number of seats is N.} \end{array} $ When I try this in Gurobi, it complains, "Q matrix is not positive semi-definite". The model and explanation below are old but helpful in understanding the problem above. $\begin{array}{ll} \text{maximize } \text{ } \text{ } \sum_{i} & \sum_{j} f_i * b_{ij} * x_j \text{ } \text{ } \text{ } &\text{Total score is maximized.} \\ \text{ subject to } \text{ } \text{ } \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j \leq 1 & \text{The weight of each voter is the same, basically.}\\ \text{ and subject to} & \sum_{j } x_j = N & \text{The number of seats is N.} \end{array} $ What is this $f_i$? It lets you vote for multiple candidates. So if two of the candidates that you like end up winning, half your vote went to one and half to the other. Basically, $f_i$ is a way to divide, but using multiplication: $f_i \in (0,1]$. How to Help I'm glad Gurobi will do this problem. I was able to implement it, but it is too slow and I want it to go faster. I want to know what Gurobi is doing. I have Gurobi's log file but it is hard to interpret. I also have my code. In the example I am running, I have 216 voters, 10 candidates, and 5 winners. It takes 42 seconds. What is this problem related to, and are there different forms to implement it?
It is a kind of load balancing where the loading is factorized to $f_i * x_j$ instead of $x_{i,j}$. It is a binary problem in $x_j$ and it is also continuous in $f_i$. This is a committee selection problem. It's also almost a binary quadratic problem except it has this additional $f_i$, which is continuous. There could be a simplification of $f_i$ because it is either 1 or $\frac{1}{\sum_{j} b_{ij} * x_j}$. Maybe this quadratic constraint can be simplified.
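One way to get a feel for the objective is to brute-force tiny instances. The sketch below is my own reading of the model (not the Gurobi code): it uses the simplification just mentioned, $f_i = 1/\sum_{j \in W} b_{ij}$, and scores a committee $W$ by its minimum winning-seat score $Z$.

```python
from itertools import combinations

# 4 voters x 4 candidates, approval-style ballots; elect N = 2 winners.
b = [
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
]
N = 2

def Z(winners):
    """Minimum winning-seat score, with f_i = 1 / (voter i's total winner support)."""
    f = []
    for row in b:
        s = sum(row[j] for j in winners)
        if s == 0:
            return None  # some voter supports no winner: f_i undefined here
        f.append(1.0 / s)
    return min(sum(f[i] * b[i][j]**2 for i in range(len(b))) for j in winners)

feasible = [W for W in combinations(range(len(b[0])), N) if Z(W) is not None]
best = max(feasible, key=Z)
print(best, Z(best))  # the two feasible committees {0,3} and {1,2} both score Z = 2.0
```

This enumeration is exponential in the number of candidates, which is exactly why a MIP solver is needed at the 10-candidate, 5-seat scale mentioned above.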
We address the semistable reduction conjecture of Abramovich and Karu: we prove that every surjective morphism of complex projective varieties can be modified to a semistable one. The key ingredient is a combinatorial result on triangulating lattice Cayley polytopes. Joint work with Karim Adiprasito and Michael Temkin. The lecture consists of two parts: a 30-minute algebro-geometric introduction by Michael Temkin, followed by a one-hour talk by Gaku Liu about the key combinatorial result. The purpose of this talk is to survey several results from Hjorth's theory of turbulent Polish group actions. We will start by discussing certain classification problems associated with Borel equivalence relations, and present the notions of Borel reductions and smooth relations, and the $E_0$ dichotomy theorem of Harrington-Kechris-Louveau. For $\kappa < \lambda$ infinite cardinals, let us consider the following generalization of the Löwenheim–Skolem theorem: "For every algebra with countably many operations over $\lambda^+$ there is a sub-algebra with order type exactly $\kappa^+$." We will discuss the consistency and inconsistency of some global versions of this statement and present some open questions. Model theory and geometry of fields with automorphism: I will review some of the model-theoretic geometry of difference varieties, and some open problems. A difference variety is defined by polynomial equations with an additional operator $\sigma$ interpreted as a field automorphism.
Abstract: The goal of this (and the next) talk is to introduce automorphic L-functions for GL(n) and other split groups, and to discuss some of their properties and some conjectures. Key words: L-functions, Langlands dual group, modular forms. Abstract: The starting point of the geometric approach to the theory of automorphic forms over function fields is a beautiful observation of Weil asserting that there is a natural bijection between the two-sided quotient GL(n,F)\GL(n,A)/GL(n,O) and the set of isomorphism classes of rank n vector bundles on a curve. The goal of my talk will be to explain this result and to give some applications. Key words: adeles and ideles in the function field case, algebraic curves, line and vector bundles on curves, Picard group, Riemann-Roch theorem. Last week we discussed what it means for a functor to be a "sheaf" in the etale topology. Our goal now will be to complete the definition of algebraic stacks and to give examples. Key words: algebraic stacks, faithfully flat morphisms, faithfully flat descent, moduli spaces of vector bundles on curves. The main goal of this talk will be to define algebraic stacks and to give examples. Our main example will be the moduli "space" of vector bundles on a smooth projective curve. Key words: groupoids, Grothendieck topologies, etale and smooth morphisms of schemes, G-torsors, algebraic stacks. Having defined the standard automorphic L-function for GL(n) in the first talk, we now proceed to the definition of L-functions for general split groups and representations of the Langlands dual group (which will be discussed as well).
I then want to discuss some results and conjectures regarding these L-functions. Key words: L-functions, Langlands dual group, modular forms. Abstract: The goal of this talk will be to explain what algebraic stacks are and why they naturally appear. If time permits, we will start discussing our main example of moduli spaces of vector bundles on a smooth projective curve. Key words: groupoids, Grothendieck topologies, etale and smooth morphisms of schemes, algebraic stacks. Title: Local (L-, \epsilon- and \gamma-) factors, and converse theorems. Abstract: Our first goal will be to define local (L-, \epsilon- and \gamma-) factors and to study their properties. These factors are needed to formulate the local Langlands correspondence for GL(n), which was outlined two weeks ago. We will do it first for supercuspidal representations of GL(n) and then for local Galois representations, that is, for representations of Gal(\bar{F}/F), where F is a local field. Let F be a non-Archimedean local field. In the representation theory of GL_n(F), one of the basic problems is to characterize its irreducible representations up to isomorphism. There are many invariants (e.g., epsilon factors, L-functions, gamma factors, depth, etc.) that we can attach to a representation of GL_n(F). Roughly, the local converse problem is to find the smallest subcollection of twisted local \gamma-factors which classifies the irreducible admissible representations of GL_n(F) up to isomorphism. First I am going to recall basic facts about vector bundles on smooth projective curves. Then we will talk about moduli "spaces" of vector bundles on curves. If time permits, we will also talk about related "spaces" like Hecke stacks and moduli "spaces" of shtukas. Key words: Riemann-Roch theorem for curves, vector bundles on curves, degree.
I will elaborate on Fedor Petrov's comment. Interchanging the order of summation and using the binomial theorem, we remain with $$\frac{k!}{N^{n+1-k}}\sum_{i=0}^{N-1} \omega_N^{-ki} \sum_{j=0}^{n} \binom{n}{j} \omega_N^{ji}=$$$$(*)\qquad\frac{k!}{N^{n+1-k}}\sum_{i=0}^{N-1} \omega_N^{(N-k)i} (1+\omega_N^i)^n.$$ Let $f(x):=x^{N-k}(1+x)^n=\sum_{l} a_l x^l$. Your sum is $$\frac{k!}{N^{n-k}} \frac{\sum_{i=0}^{N-1} f(\omega_N^i)}{N}.$$ Now, using the formula for the geometric sum $\sum_{i=0}^{N-1} (\omega_N^{i})^{\ell}$ ($0$ when $N \nmid \ell$, and $N$ otherwise), we find that your sum is essentially just the sum of certain coefficients of $f(x)$: $$\frac{k!}{N^{n-k}} \sum_{l \equiv 0 \mod N} a_l = \frac{k!}{N^{n-k}}\sum_{j \equiv k \mod N} \binom{n}{j}.$$ This trick is sometimes called a "roots of unity filter". The best closed form seems to be expression $(*)$ - for fixed $N$ this is a simple, finite sum to evaluate and understand (try $N=2$), and it also allows one to perform asymptotic analysis - the term $i=0$ contributes the majority of the sum ($2^n$ times a simple expression), while the other terms contribute an exponentially smaller term in $n$. Thinking of $N,k$ as fixed, and ignoring the outer term, your sum is a linear combination of $N$ geometric sequences: $$\sum_{i=0}^{N-1} \lambda_i c_i^n,$$ where $$c_i=1+\omega_N^i, \qquad \lambda_i = \omega_N^{(N-k)i}.$$ Such a sequence necessarily satisfies a homogeneous recurrence relation, whose coefficients come from a polynomial vanishing on all the $c_i$'s simultaneously. Since the $c_i$'s are algebraic integers, a recurrence exists with integer coefficients.
We can find it: Since $x^N-1$ vanishes on $\omega_N^i=c_i-1$, the polynomial $$(x-1)^N-1=x^N+\sum_{j=1}^{N-1}x^{N-j}\binom{N}{j}(-1)^{j}+((-1)^N-1)$$ vanishes on the $c_i$'s, and hence the sequence $$S(n):=\frac{1}{N}\sum_{i=0}^{N-1} \omega_N^{(N-k)i} (1+\omega_N^i)^n=\sum_{j \equiv k \mod N} \binom{n}{j}$$ satisfies the following linear homogeneous recurrence relation with integer coefficients: $$S(n) = \sum_{j=1}^{N-1}(-1)^{j-1} \binom{N}{j}S(n-j) + (1+(-1)^{N-1})S(n-N).$$ If you want a reference for all of this, I suggest this short, elementary paper by Konvalina and Liu.
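Both the roots-of-unity filter and the integer recurrence are easy to check numerically; the following Python sketch is mine, not part of the answer:

```python
import cmath
from math import comb

def S_direct(n, N, k):
    """S(n) = sum of binom(n, j) over j ≡ k (mod N)."""
    return sum(comb(n, j) for j in range(k % N, n + 1, N))

def S_filter(n, N, k):
    """Roots-of-unity filter: (1/N) * sum_i w^{(N-k)i} (1 + w^i)^n."""
    w = cmath.exp(2j * cmath.pi / N)
    total = sum(w**((N - k) * i) * (1 + w**i)**n for i in range(N))
    return round((total / N).real)

N, k = 3, 1
for n in range(12):
    assert S_filter(n, N, k) == S_direct(n, N, k)

# recurrence with integer coefficients, valid for n >= N:
# S(n) = sum_{j=1}^{N-1} (-1)^{j-1} C(N,j) S(n-j) + (1 - (-1)^N) S(n-N)
for n in range(N, 12):
    rhs = sum((-1)**(j - 1) * comb(N, j) * S_direct(n - j, N, k) for j in range(1, N))
    rhs += (1 - (-1)**N) * S_direct(n - N, N, k)
    assert S_direct(n, N, k) == rhs
```

For $N=3$, $k=1$ the recurrence reads $S(n) = 3S(n-1) - 3S(n-2) + 2S(n-3)$, e.g. $S(4) = 3\cdot3 - 3\cdot2 + 2\cdot1 = 5 = \binom{4}{1} + \binom{4}{4}$.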
tl;dr: How could the 2018-08-30 Soyuz MS-09 / ISS leak be so slow? Answer: By being about 2 millimeters in diameter! @DavidHammen's comment converts 0.8 mbar/hr to about 0.8 m^3/hr air loss rate, presumably at standard conditions. Let's see how that's done, how it checks against "a 2 mm hole", and what it means if there were no response of any kind (human or make-up air). He uses the first order relationships $$ \frac{\dot{p}}{p}=\frac{\dot{m}}{m}=\frac{\dot{V}}{V} = 0.8 \times 10^{-3}/hr$$ where I'm guessing $p$ is the pressure of the remaining air (assuming no make-up air and no change in temperature, which is reasonable considering the air is in intimate contact with so much solid surface area), $m$ is the mass of the remaining air, and $V$ is the equivalent volume of the remaining air if it were at standard conditions. An ISS pressurized volume of about 938 m^3 (matches values on the internet) times $0.8 \times 10^{-3}/hr$ does indeed give about 0.8 m^3/hr! Now let's see what a 2 mm hole in a thin plate is expected to do. I found two online calculators, although they may have somewhat different assumptions, and the hole has some depth (the wall thickness of the Soyuz at this location) and side roughness, but still we can try. http://www.efunda.com/formulae/fluids/calc_orifice_flowmeter.cfm#calc https://www.tlv.com/global/TI/calculator/air-flow-rate-through-orifice.html The first gives 2.74E-04 m^3/sec or 1.0 m^3/hr, almost identical to the quoted 0.8 m^3/hr value, and the second gives something at least close: about 3 m^3/hr. So barring any make-up air, that's about a 1% drop in pressure every ten hours. That's enough to be alarming, and it would probably trigger an alarm in less than ten hours, since strikes by meteoroids and debris are so likely over time that one would expect the ISS to be hyper-vigilant about leaks. So to answer the question: How could the 2018-08-30 Soyuz MS-09 / ISS leak be so slow? The answer is: By being about 2 millimeters in diameter!
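For comparison with those calculators, here is a rough choked-flow estimate of my own; the discharge coefficient $C_d = 0.6$ is an assumption for a thin sharp-edged plate, and the online tools likely use slightly different models:

```python
import math

# Choked (sonic) flow of cabin air through a small orifice.
d = 2e-3               # hole diameter, m
A = math.pi * (d / 2)**2
p0 = 101325.0          # upstream (cabin) pressure, Pa
T0 = 293.0             # cabin temperature, K
gamma, R = 1.4, 287.0  # air: ratio of specific heats, gas constant J/(kg K)
Cd = 0.6               # assumed discharge coefficient for a sharp-edged hole

# standard choked-flow mass-flow formula
mdot = (Cd * A * p0 * math.sqrt(gamma / (R * T0))
        * (2 / (gamma + 1))**((gamma + 1) / (2 * (gamma - 1))))

rho0 = p0 / (R * T0)          # cabin air density, ~1.2 kg/m^3
Q_m3hr = mdot / rho0 * 3600   # volumetric loss rate at cabin conditions
print(Q_m3hr)  # on the order of 1 m^3/hr, bracketed by the calculators' 1.0 and 3 m^3/hr
```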
For any natural number $n=\sum_{i=0}^k a_i 10^i$ with $0\le a_i\le 9$, define $S(n)=\sum_{i=0}^k a_i$. Then $$n^2=\left(\sum_{i=0}^k a_i 10^i\right)^2=\sum_{j=0}^{2k}\left(\sum_{i=0}^j a_i a_{j-i} \right)10^j,$$ where $a_i=0$ if $i>k$. Then $$S(n^2)\le \sum_{j=0}^{2k}\sum_{i=0}^j a_i a_{j-i} = \left(\sum_{i=0}^k a_i\right)^2=S(n)^2,$$ and we have equality if and only if $c_j:=\sum_{i=0}^j a_i a_{j-i}<10$ for all $j$. It follows that if $S(n^2)=S(n)^2$, then $a_i<4$ for all $i$. In fact, if $a_j\ge 4$, then $$c_{2j}=a_j^2+\sum_{i=0}^{j-1}a_i a_{2j-i}\ge 16,$$ which is impossible. It also follows that if $S(n^2)=S(n)^2$ and some $a_j=3$, then $a_i\le 1$ for all $i\ne j$. In fact, if $a_j=3$ and $a_i\ge 2$ for some $i\ne j$, then $$c_{j+i}=2a_j a_i+\sum_{\underset{l\ne i,j} {l=0} }^{j+i}a_l a_{j+i-l}\ge 12,$$ which is impossible. If we now set $L(j)=\# \{i :\ a_i=j\}$ (which depends on $n$), then, if $S(n)^2=S(n^2)=100$, by the previous results we necessarily have $$(L(1),L(2),L(3))\in\{(10,0,0),(8,1,0),(6,2,0),(4,3,0),(2,4,0),(0,5,0),(7,0,1)\}.$$ In principle you can now try these combinations in order to see which satisfy $c_j<10$ for all $j$. For example, the smallest example with $(L(1),L(2),L(3))=(10,0,0)$ is $n=10\ 111\ 111\ 111$. Using an exhaustive search with Mathematica, one finds that the smallest example with $(L(1),L(2),L(3))=(8,1,0)$ is $n=1101111211$ (which is the example you mentioned, and the absolute smallest example), and that the smallest example with $(L(1),L(2),L(3))=(4,3,0)$ is $n=1121102002$. Higher examples require too much time using Mathematica, but I think that one can prove that the smallest example with $(L(1),L(2),L(3))=(0,5,0)$ is $n=2000020002022$. For this one can use the fact that if $S(n^2)=S(n)^2$ and $a_{j_1}=a_{j_2}=a_{j_3}=2$ for some $j_1<j_2<j_3$, then $j_2-j_1\ne j_3-j_2$.
In fact, if $a_{j_1}=a_{j_2}=a_{j_3}=2$ and $j_2-j_1= j_3-j_2$ for some $j_1<j_2<j_3$, then $$c_{2j_2}=(a_{j_2})^2+2a_{j_1} a_{j_3}+\sum_{\overset {l=0}{l\ne j_1,j_2,j_3} }^{2j_2}a_l a_{2j_2-l}\ge 12,$$ which is impossible. I am also pretty sure that the smallest example with $(L(1),L(2),L(3))=(7,0,1)$ is $n=10101111013$ and that the smallest example with $(L(1),L(2),L(3))=(6,2,0)$ is $n=10101011122$. For $(L(1),L(2),L(3))=(2,4,0)$ I didn't find (nor purposely search for) a smallest example. Maybe there are better approaches than trying all possibilities by hand (or by computer). ${\bf{Edit:}}$ If you set $$L(k)=\min\{n: S(n)^2=S(n^2)=k^2\}$$ then $L(1)=1$, $L(2)=2$, $L(3)=3$, $L(4)=13$, $L(5)=113$, $L(6)=1113$, $L(7)=11113$, $L(8)=1011113$, $L(9)=101011113$, $L(10)=1101111211$, $L(11)\le 1001101111211$.
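The listed values of $L(k)$ are quick to verify, and for small $k$ the minimality can be confirmed by brute force; a short Python check of the table above:

```python
def S(n):
    """Decimal digit sum."""
    return sum(int(d) for d in str(n))

# claimed smallest n with S(n)^2 = S(n^2) = k^2, from the Edit above
L = {1: 1, 2: 2, 3: 3, 4: 13, 5: 113, 6: 1113, 7: 11113,
     8: 1011113, 9: 101011113, 10: 1101111211}

# each listed n really satisfies S(n) = k and S(n^2) = k^2
for k, n in L.items():
    assert S(n) == k and S(n * n) == k * k

# brute-force minimality for small k (larger k would take too long this way)
for k in range(1, 6):
    n = 1
    while not (S(n) == k and S(n * n) == k * k):
        n += 1
    assert n == L[k]
print("all checks pass")
```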
For a language $L$ with pumping length $p$, and a string $s\in L$, the pumping lemmas are as follows: Regular version:If $|s| \geq p$, then $s$ can be written as $xyz$, satisfying the following conditions: $|y|\geq 1$ $|xy|\leq p$ $ \forall i\geq 0: xy^iz\in L$ Context-free version:If $|s| \geq p$, then $s$ can be written as $uvxyz$, satisfying the following conditions: $|vy|\geq 1$ $|vxy|\leq p$ $ \forall i\geq 0: uv^ixy^iz\in L$ My question is this: Why do we have condition 2 in the lemma (for either case)? I understand that condition 1 essentially says that the "pumpable" (meaning nullable or arbitrarily repeatable) substring has to have some nonzero length, and condition 3 says that the pumpable substring can be repeated arbitrarily many times without deriving an invalid string (with respect to $L$). I'm not sure what the second condition means or why it is important. Is there a simple but meaningful example to illustrate its importance?
CNO Cycle: 4H $\rightarrow$ He 4 protons (i.e. hydrogen nuclei) combine to form a helium nucleus while releasing 26.7 MeV of energy. This is the net result of any of the various fusion pathways from hydrogen to helium. Three helium nuclei combine to form a carbon nucleus, while releasing 7.4 MeV of energy. Since it takes three CNO reactors (and 12 protons) to make a carbon nucleus, we now have a net gain of 87.5 MeV. See the link; this is a long chain of reactions, each successively adding a helium nucleus to get to a bigger element, until we get to Fe-52. The total energy released by this chain is 80.6 MeV. Taking into account the 87.5 MeV to form the initial carbon and 11 $\times$ 26.7 MeV to form all of the helium, the net energy gain from fusion is 462 MeV, divided by 52 initial protons. Now, Fe-52 is not the most stable nuclide, and you could theoretically get more energy by reacting up to Fe-56 or Ni-62 or something. But I wasn't able to find a clear path for fusion up to that point. In the real world, creation of these elements is a result of an equilibrium between various fusion reactions and photodisintegration and such. I think this energy estimate is the best for your purposes. Energy released by accretion This is much more difficult to estimate, because there are a lot of factors here, and it depends strongly on the size of your black hole and the shape of the accretion disk. However, reworking an estimate of luminosity based on mass transfer rate into a black hole gives:$$E = \frac{\mu m}{R},$$ where $\mu$ is the standard gravitational parameter for the black hole, $m$ is the mass of the object falling into it, and $R$ is the radius of the accretion disk. Let's take the black hole at the center of our galaxy as an example. I calculate $\mu$ to be about $5.7\times10^{26}\ \mathrm{m^3/s^2}$ and $R$ about $7.5\times10^{12}$ meters (~50 AU). Therefore, each AMU generates about 0.8 MeV as it falls into the accretion disk. Consider this a pretty rough estimate.
The problem here is that much of this kinetic energy is either a. carried into the event horizon by the falling particle or b. radiated into the event horizon by the accretion disk. Either way, much of the released energy is unusable. Conclusion You get about 9 MeV from fusion per AMU of protons that you throw into this process, and less than 0.8 MeV from accretion per AMU of protons. Converting to J and kg, we get 870 TJ per kg from fusion, and less than 77 TJ per kg from accretion. So, you are looking at something in the range of 900 TJ per kg of hydrogen.
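The bookkeeping above can be redone in a few lines (a sketch; the constants come from the answer, and the small difference from the quoted 870 TJ figure is just the answer's rounding of 8.9 MeV up to 9 MeV):

```python
AMU_KG = 1.66053906660e-27    # kg per atomic mass unit
MEV_J  = 1.602176634e-13      # joules per MeV

# Fusion: the net gain of 462 MeV spread over 52 initial protons.
fusion_mev_per_amu = 462 / 52                        # about 8.9 MeV per AMU
fusion_j_per_kg = fusion_mev_per_amu * MEV_J / AMU_KG

# Accretion: E = mu*m/R with the Sgr A* numbers quoted above.
mu = 5.7e26                   # m^3/s^2, standard gravitational parameter
R = 7.5e12                    # m, accretion-disk radius (~50 AU)
accretion_j_per_amu = mu * AMU_KG / R
accretion_mev_per_amu = accretion_j_per_amu / MEV_J  # about 0.79 MeV per AMU
accretion_j_per_kg = mu / R

print(fusion_j_per_kg / 1e12, accretion_j_per_kg / 1e12)  # ~858 and ~76 TJ/kg
```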
Background Fomin and Zelevinsky have introduced cluster algebras in an influential article. To define a cluster algebra, Fomin and Zelevinsky have defined a mutation of seeds. Here, a seed $(\mathbf{x},B)$ consists of a cluster $\mathbf{x}=(x_1,x_2,\ldots,x_n)$ of certain elements (which are called cluster variables) and a skew-symmetrisable integer $n\times n$ matrix $B$. By definition, the cluster algebra is generated by all cluster variables in all seeds that are obtained from a given initial seed by a sequence of mutations. A skew-symmetrisable integer $2\times 2$ matrix has the form $$B=\pm\begin{pmatrix}0&a\\\\-b&0\end{pmatrix}$$ for some natural numbers $a,b\geq 1$. Note that the two possible choices yield isomorphic cluster algebras that we denote by $\mathcal{A}(a,b)$. We can parametrise the cluster variables in $\mathcal{A}(a,b)$ by the set of integers, so that we obtain cluster variables $x_i$, with $i\in\mathbb{Z}$, and clusters $(x_{i-1},x_i)$, with $i\in\mathbb{Z}$. The equation \begin{align*} x_{i-1}x_{i+1}=\begin{cases}x_i^a+1,& \textrm{if } i \textrm{ is even}, \\\\ x_i^b+1,&\textrm{if } i \textrm{ is odd,}\end{cases} \end{align*} describes the mutation from the cluster $(x_{i-1},x_i)$ to the cluster $(x_i,x_{i+1})$. There are two kinds of cluster algebras: cluster algebras of finite type and cluster algebras of infinite type. We declare a cluster algebra to be of finite type if it admits only finitely many cluster variables. Fomin and Zelevinsky have furthermore classified cluster algebras of finite type by finite type root systems. We therefore see that (coefficient-free) cluster algebras (without frozen variables) of finite type are in bijection with Dynkin diagrams of type $A_n (n \geq 1)$, $B_n (n\geq 2)$, $C_n (n\geq 3)$, $D_n (n\geq 4)$, $E_n (n=6,7,8)$, $F_4$, and $G_2$. The classification theorem implies that the cluster algebra $\mathcal{A}(a,b)$ from above is of finite type if and only if $ab<4$.
In this case, the sequence $(x_i)_{i\in\mathbb{Z}}$ is periodic. Dynkin diagrams satisfy a crystallographic condition. In the context of cluster algebras, the crystallographic condition yields integer entries in the $B$-matrix. On the other hand, finite Coxeter groups are in bijection with Coxeter-Dynkin diagrams. Coxeter-Dynkin diagrams do not necessarily satisfy a crystallographic condition. Examples of non-crystallographic Coxeter groups are dihedral groups (with Coxeter-Dynkin diagram $I_2(m)$ with $m=5$ or $m\geq 7$) and the symmetry group of the icosahedron (with Coxeter-Dynkin diagram $H_3$). Recurrences associated with non-crystallographic root systems The Dynkin diagrams associated with the cluster algebras $\mathcal{A}(1,1)$, $\mathcal{A}(2,1)$ and $\mathcal{A}(3,1)$ are $A_2$, $B_2$ and $G_2$, respectively. Viewed as Coxeter-Dynkin diagrams, the corresponding Coxeter groups are the dihedral symmetry groups of the equilateral triangle, the square and the regular hexagon. More generally, the Coxeter-Dynkin diagram associated with the dihedral group of symmetries of the regular $m$-gon, for some $m\geq 3$, consists of two vertices that are joined by an edge of weight $a=4\cos^2(\frac{\pi}{m})$. Putting $b=1$, we can define a sequence $(x_i)_{i\in\mathbb{Z}}$ as above. In the case $m=5$ (where we have $a=4\cos^2(\frac{\pi}{5})=\frac12(3+\sqrt{5})\approx2.618033988$) my computer algebra system has randomly chosen various starting values $x_1$ and $x_2$ from the interval $(0,1)$ and computed the first few terms numerically. It turns out that after 14 steps, we always get close to our starting values, as the following example illustrates: 0.9449133500 0.2109364289 1.076295668 9.843229446 370.8734823 37.77962145 36.31787856 0.9877779910 0.05419694189 1.067240769 40.32970628 38.72575663 356.3222202 9.226991316 0.9462529267 0.2109303955 The same phenomenon also holds for other values of $m$, and the number of steps seems to be either $m+2$ or $2(m+2)$.
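The experiment is easy to reproduce with a short script (my own sketch; `dihedral_orbit` is a made-up helper name, and the parity convention matches the mutation rule above, with the mutated index starting at $i=2$):

```python
import math

def dihedral_orbit(x1, x2, m, steps):
    """Iterate x_{i-1} x_{i+1} = x_i^a + 1 (i even), x_i + 1 (i odd),
    with a = 4*cos(pi/m)**2 and b = 1, starting from the cluster (x1, x2)."""
    a = 4 * math.cos(math.pi / m) ** 2
    seq = [x1, x2]
    for i in range(2, 2 + steps):         # i indexes the variable being mutated
        prev, cur = seq[-2], seq[-1]
        e = a if i % 2 == 0 else 1.0
        seq.append((cur ** e + 1) / prev)
    return seq

# m = 5: after 2(m+2) = 14 steps the sequence nearly returns to its start.
seq = dihedral_orbit(0.9449133500, 0.2109364289, m=5, steps=14)
print(seq[0], seq[1])     # starting values
print(seq[14], seq[15])   # close, but apparently not exactly equal
```

With the starting values quoted above, `seq[2]` through `seq[15]` reproduce the fourteen numbers in the table, and the mismatch after 14 steps is of order $10^{-3}$, consistent with the suspicion that the sequence is only approximately periodic.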
Questions: What's going on? I have computed the first terms in the sequence for $m=5$ in exact form, and it seems unlikely to me that the sequence is periodic on the nose. (Although I might have to try harder; perhaps the deviations in the experiments come from rounding errors.) Fomin and Reading and other authors have generalised the Lie theoretic combinatorics of cluster algebras to general root systems. Have some authors also generalised cluster algebras to general root systems? For example, does a sophisticated version of the Laurent phenomenon hold in this case?
Volume and Area of Torricelli's Trumpet In an article on Paradoxes of Infinity I mentioned a $3D$ figure known as Torricelli's Trumpet, also called Gabriel's Horn, whose surface area is infinite but whose volume is finite. Below I shall establish these facts. Torricelli's Trumpet is the surface of revolution obtained by rotating the graph of the function $\displaystyle f(x)=\frac{1}{x}$ on the interval $[1,\infty)$ around the $x$-axis. The volume of Torricelli's Trumpet is given by the integral $\displaystyle V=\pi\int_{1}^{\infty}f^{2}(x)dx=\pi\int_{1}^{\infty}\frac{dx}{x^2};$ its area by the integral $\displaystyle S=2\pi\int_{1}^{\infty}f(x)\sqrt{1+[f'(x)]^{2}}dx=2\pi\int_{1}^{\infty}\frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx.$ Both integrals are improper in that they are taken over infinite intervals. Improper integrals of this sort are, by definition, the limits of integrals over finite intervals: $\displaystyle\int_{1}^{\infty}=\lim_{a\rightarrow\infty}\int_{1}^{a}.$ If the limit does not exist (or is infinite) the improper integral is said to diverge, otherwise it's convergent. The volume integral is the easier of the two: $\displaystyle V=\lim_{a\rightarrow\infty}\pi\int_{1}^{a}\frac{dx}{x^2}.$ Computing it gives $\displaystyle V=\lim_{a\rightarrow\infty}\pi\left(-\frac{1}{x}\right)\bigg|_{1}^{a}=\lim_{a\rightarrow\infty}\pi\left(1-\frac{1}{a}\right)=\pi.$ Concerning the area integral, to prove the claim that it is infinite, we do not actually need to calculate the integral but only to estimate its growth.
This is not difficult: $\displaystyle S=2\pi\lim_{a\rightarrow\infty}\int_{1}^{a}\frac{1}{x}\sqrt{1+\frac{1}{x^4}}dx\ge 2\pi\lim_{a\rightarrow\infty}\int_{1}^{a}\frac{dx}{x}.$ Thus, $\displaystyle S\ge 2\pi\lim_{a\rightarrow\infty}\ln x\bigg|_{1}^{a}=2\pi\lim_{a\rightarrow\infty}\ln a=\infty.$
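Both facts can be sanity-checked numerically (a sketch of mine using a plain trapezoid rule; the cutoffs and step count are arbitrary):

```python
from math import pi, sqrt, log

def trapezoid(f, lo, hi, n=100_000):
    """Composite trapezoid rule for the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * (f(lo) / 2 + sum(f(lo + k * h) for k in range(1, n)) + f(hi) / 2)

for a in (10, 100, 1000):
    V = trapezoid(lambda x: pi / x**2, 1, a)                        # volume up to x = a
    S = trapezoid(lambda x: 2 * pi / x * sqrt(1 + 1 / x**4), 1, a)  # area up to x = a
    print(a, V, S, 2 * pi * log(a))   # V approaches pi; S stays above 2*pi*ln(a)
```

As the cutoff $a$ grows, the truncated volume approaches $\pi$ while the truncated area tracks the divergent lower bound $2\pi\ln a$.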
Search Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Probability density function Revision as of 09:46, 11 February 2014 In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function that describes the relative likelihood for this random variable to take on a given value. The probability of the random variable falling within a particular range of values is given by the integral of this variable's density over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms "probability distribution function"[1] and "probability function"[2] have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values, or it may refer to the cumulative distribution function, or it may be a probability mass function rather than the density.
Further confusion of terminology exists because density function has also been used for what is here called the "probability mass function".[3] Absolutely continuous univariate distributions A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable $X$ has density $f_X$, where $f_X$ is a non-negative Lebesgue-integrable function, if: $$\Pr[a \le X \le b] = \int_a^b f_X(x)\,dx.$$ Hence, if $F_X$ is the cumulative distribution function of $X$, then: $$F_X(x) = \int_{-\infty}^x f_X(u)\,du,$$ and (if $f_X$ is continuous at $x$) $$f_X(x) = \frac{d}{dx} F_X(x).$$ Intuitively, one can think of $f_X(x)\,dx$ as being the probability of $X$ falling within the infinitesimal interval $[x, x+dx]$. Formal definition A random variable $X$ with values in a measurable space $(\mathcal{X}, \mathcal{A})$ (usually $\mathbf{R}^n$ with the Borel sets as measurable subsets) has as probability distribution the measure $X_*P$ on $(\mathcal{X}, \mathcal{A})$: the density of $X$ with respect to a reference measure $\mu$ on $(\mathcal{X}, \mathcal{A})$ is the Radon–Nikodym derivative: $$f = \frac{dX_*P}{d\mu}.$$ That is, $f$ is any measurable function with the property that: $$\Pr[X \in A] = \int_{X^{-1}A} dP = \int_A f\,d\mu$$ for any measurable set $A \in \mathcal{A}$. Discussion In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). Note that it is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost everywhere unique.
Further details Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \tfrac12]$ has probability density $f(x) = 2$ for $0 \le x \le \tfrac12$ and $f(x) = 0$ elsewhere. The standard normal distribution has probability density $$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$ If a random variable $X$ is given and its distribution admits a probability density function $f$, then the expected value of $X$ (if the expected value exists) can be calculated as $$\operatorname{E}[X] = \int_{-\infty}^\infty x\, f(x)\, dx.$$ Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. A distribution has a density function if and only if its cumulative distribution function $F(x)$ is absolutely continuous. In this case: $F$ is almost everywhere differentiable, and its derivative can be used as probability density: $$\frac{d}{dx} F(x) = f(x).$$ If a probability distribution admits a density, then the probability of every one-point set $\{a\}$ is zero; the same holds for finite and countable sets. In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: If $dt$ is an infinitely small number, the probability that $X$ is included within the interval $(t, t+dt)$ is equal to $f(t)\,dt$, or: $$\Pr(t < X < t+dt) = f(t)\,dt.$$ Link between discrete and continuous distributions It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. For example, let us consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability ½ each.
The density of probability associated with this variable is: $$f(t) = \frac12\big(\delta(t+1) + \delta(t-1)\big).$$ More generally, if a discrete variable can take $n$ different values among real numbers, then the associated probability density function is: $$f(t) = \sum_{i=1}^n p_i\, \delta(t - x_i),$$ where $x_1, \ldots, x_n$ are the discrete values accessible to the variable and $p_1, \ldots, p_n$ are the probabilities associated with these values. This substantially unifies the treatment of discrete and continuous probability distributions. For instance, the above expression allows for determining statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous distribution of the probability. Families of densities It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by $\mu$ and $\sigma^2$ respectively, giving the family of densities $$f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$ It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution.
Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density, however, is trickier and requires more work: see the section below on change of variables. Densities associated with multiple variables For continuous random variables $X_1, \ldots, X_n$, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the $n$ variables, such that, for any domain $D$ in the $n$-dimensional space of the values of the variables $X_1, \ldots, X_n$, the probability that a realisation of the set variables falls inside the domain $D$ is $$\Pr(X_1, \ldots, X_n \in D) = \int_D f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$ If $F(x_1, \ldots, x_n) = \Pr(X_1 \le x_1, \ldots, X_n \le x_n)$ is the cumulative distribution function of the vector $(X_1, \ldots, X_n)$, then the joint probability density function can be computed as a partial derivative $$f(x) = \frac{\partial^n F}{\partial x_1 \cdots \partial x_n}\bigg|_x.$$ Marginal densities For $i = 1, 2, \ldots, n$, let $f_{X_i}(x_i)$ be the probability density function associated with variable $X_i$ alone.
This is called the "marginal" density function, and can be deduced from the probability density associated with the random variables $X_1, \ldots, X_n$ by integrating over all values of the $n-1$ other variables: $$f_{X_i}(x_i) = \int f(x_1, \ldots, x_n)\, dx_1 \cdots dx_{i-1}\, dx_{i+1} \cdots dx_n.$$ Independence Continuous random variables $X_1, \ldots, X_n$ admitting a joint density are all independent from each other if and only if $$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n).$$ Corollary If the joint probability density function of a vector of $n$ random variables can be factored into a product of $n$ functions of one variable $$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n)$$ (where each $f_i$ is not necessarily a density), then the $n$ variables in the set are all independent from each other, and the marginal probability density function of each of them is given by $$f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x)\, dx}.$$ Example This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call $\vec R$ a 2-dimensional random vector of coordinates $(X, Y)$: the probability to obtain $\vec R$ in the quarter plane of positive $x$ and $y$ is $$\Pr(X > 0, Y > 0) = \int_0^\infty \int_0^\infty f_{X,Y}(x, y)\, dx\, dy.$$ Dependent variables and change of variables If the probability density function of a random variable $X$ is given as $f_X(x)$, it is possible (but often not necessary; see below) to calculate the probability density function of some variable $Y = g(X)$. This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape $f_{g(X)} = f_Y$ using a known (for instance uniform) random number generator. If the function $g$ is monotonic, then the resulting density function is $$f_Y(y) = \left|\frac{d}{dy} g^{-1}(y)\right| f_X\big(g^{-1}(y)\big).$$ Here $g^{-1}$ denotes the inverse function. This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, $$|f_Y(y)\, dy| = |f_X(x)\, dx|,$$ or $$f_Y(y) = \left|\frac{dx}{dy}\right| f_X(x) = \left|\frac{d}{dy} g^{-1}(y)\right| f_X\big(g^{-1}(y)\big).$$ For functions which are not monotonic, the probability density function for $y$ is $$f_Y(y) = \sum_{k=1}^{n(y)} \left|\frac{d}{dy} g_k^{-1}(y)\right| f_X\big(g_k^{-1}(y)\big),$$ where $n(y)$ is the number of solutions in $x$ for the equation $g(x) = y$, and $g_k^{-1}(y)$ are these solutions.
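A quick Monte Carlo sanity check of the monotonic change-of-variables formula (my own sketch, not part of the article; the choice $X \sim \mathrm{Uniform}(0,1)$ with $g(x) = x^2$ is an assumed example):

```python
import random
from math import sqrt

# X ~ Uniform(0,1) and Y = g(X) = X^2, with g monotonic on (0,1).
# The change-of-variables formula predicts
#     f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y)/dy| = 1 / (2*sqrt(y))  on (0,1),
# so P(a <= Y <= b) should equal the integral of 1/(2*sqrt(y)), i.e.
# sqrt(b) - sqrt(a).

random.seed(0)
samples = [random.random() ** 2 for _ in range(200_000)]

a, b = 0.25, 0.64
empirical = sum(a <= y <= b for y in samples) / len(samples)
predicted = sqrt(b) - sqrt(a)    # = 0.3
print(empirical, predicted)      # the two values agree to about two decimals
```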
It is tempting to think that in order to find the expected value $\operatorname{E}[g(X)]$ one must first find the probability density $f_{g(X)}$ of the new random variable $Y = g(X)$. However, rather than computing $$\operatorname{E}[Y] = \int_{-\infty}^\infty y\, f_{g(X)}(y)\, dy,$$ one may find instead $$\operatorname{E}[g(X)] = \int_{-\infty}^\infty g(x)\, f_X(x)\, dx.$$ The values of the two integrals are the same in all cases in which both $X$ and $g(X)$ actually have probability density functions. It is not necessary that $g$ be a one-to-one function. In some cases the latter integral is computed much more easily than the former. Multiple variables The above formulas can be generalized to variables (which we will again call $y$) depending on more than one other variable. $f(x_1, \ldots, x_n)$ shall denote the probability density function of the variables that $y$ depends on, and the dependence shall be $y = g(x_1, \ldots, x_n)$. Then, the resulting density function is $$f_Y(y) = \int_{g(x_1,\ldots,x_n)=y} \frac{f(x_1, \ldots, x_n)}{\sqrt{\sum_{j=1}^n \left(\frac{\partial g}{\partial x_j}\right)^2}}\, dV,$$ where the integral is over the entire $(n-1)$-dimensional solution of the subscripted equation and the symbolic $dV$ must be replaced by a parametrization of this solution for a particular calculation; the variables $x_1, \ldots, x_n$ are then of course functions of this parametrization. This derives from the following, perhaps more intuitive representation: Suppose $\mathbf x$ is an $n$-dimensional random variable with joint density $f$. If $\mathbf y = H(\mathbf x)$, where $H$ is a bijective, differentiable function, then $\mathbf y$ has density $g$: $$g(\mathbf y) = f\big(H^{-1}(\mathbf y)\big)\, \left|\det\frac{dH^{-1}(\mathbf z)}{d\mathbf z}\bigg|_{\mathbf z = \mathbf y}\right|,$$ with the differential regarded as the Jacobian of the inverse of $H$, evaluated at $\mathbf y$. Using the delta-function (and assuming independence) the same result is formulated as follows. If the probability density functions of independent random variables $X_i$, $i = 1, 2, \ldots, n$ are given as $f_{X_i}(x_i)$, it is possible to calculate the probability density function of some variable $Y = G(X_1, X_2, \ldots, X_n)$.
The following formula establishes a connection between the probability density function of $Y$, denoted by $f_Y(y)$, and the $f_{X_i}(x_i)$, using the Dirac delta function: $$f_Y(y) = \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty f_{X_1}(x_1) \cdots f_{X_n}(x_n)\, \delta\big(y - G(x_1, \ldots, x_n)\big)\, dx_1 \cdots dx_n.$$ Sums of independent random variables (Not to be confused with Mixture distribution.) It is possible to generalize the previous relation to a sum of $N$ independent random variables, with densities $U_1, \ldots, U_N$: $$f_{U_1 + \cdots + U_N}(x) = \big(f_{U_1} * \cdots * f_{U_N}\big)(x).$$ This can be derived from a two-way change of variables involving $Y = U+V$ and $Z = V$, similarly to the example below for the quotient of independent random variables. Products and quotients of independent random variables Given two independent random variables $U$ and $V$, each of which has a probability density function, the density of the product $Y = UV$ and quotient $Y = U/V$ can be computed by a change of variables. Example: Quotient distribution To compute the quotient $Y = U/V$ of two independent random variables $U$ and $V$, define the following transformation: $$Y = U/V, \qquad Z = V.$$ Then, the joint density $p(y,z)$ can be computed by a change of variables from $U,V$ to $Y,Z$, and $Y$ can be derived by marginalizing out $Z$ from the joint density. The inverse transformation is $$U = YZ, \qquad V = Z.$$ The absolute value of the Jacobian determinant of this transformation is $$\left|\det\begin{pmatrix} \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \end{pmatrix}\right| = \left|\det\begin{pmatrix} z & y \\ 0 & 1 \end{pmatrix}\right| = |z|.$$ Thus: $$p(y,z) = p_U(yz)\, p_V(z)\, |z|.$$ And the distribution of $Y$ can be computed by marginalizing out $Z$: $$p(y) = \int_{-\infty}^\infty p_U(yz)\, p_V(z)\, |z|\, dz.$$ Note that this method crucially requires that the transformation from $U,V$ to $Y,Z$ be bijective. The above transformation meets this because $Z$ can be mapped directly back to $V$, and for a given $V$ the quotient $U/V$ is monotonic. This is similarly the case for the sum $U+V$, difference $U-V$ and product $UV$. Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables. Example: Quotient of two standard normals Given two standard normal variables $U$ and $V$, the quotient can be computed as follows. First, the variables have the following density functions: $$p(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}, \qquad p(v) = \frac{1}{\sqrt{2\pi}} e^{-v^2/2}.$$ We transform as described above: $$Y = U/V, \qquad Z = V.$$ This leads to: $$\begin{align} p(y) &= \int_{-\infty}^{\infty} p_U(yz)\, p_V(z)\, |z|\, dz \\ &= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-y^2 z^2/2}\, \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\, |z|\, dz \\ &= \int_{-\infty}^{\infty} \frac{1}{2\pi} e^{-(y^2+1) z^2/2}\, |z|\, dz \\ &= \frac{1}{\pi(y^2+1)}. \end{align}$$ This is a standard Cauchy distribution.
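The marginalization step can be checked numerically. This sketch (mine, not from the article; the truncation of the $z$-integral at `zmax` is an assumption justified by the Gaussian tail) compares the integral against the standard Cauchy density $1/(\pi(1+y^2))$:

```python
from math import exp, pi

def quotient_density(y, zmax=12.0, n=60_000):
    """Trapezoid evaluation of p(y) = ∫ (1/2π) e^{-(y²+1)z²/2} |z| dz
    over z in [-zmax, zmax]; the tails beyond zmax are negligible."""
    h = 2 * zmax / n
    total = 0.0
    for k in range(n + 1):
        z = -zmax + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * exp(-(y * y + 1) * z * z / 2) * abs(z) / (2 * pi)
    return total * h

for y in (0.0, 1.0, 2.5):
    print(y, quotient_density(y), 1 / (pi * (1 + y * y)))  # the columns agree
```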
See also Density estimation Likelihood function List of probability distributions Probability mass function Secondary measure Bibliography Pierre-Simon Laplace (1812). Théorie Analytique des Probabilités. The first major treatise blending calculus with probability theory. Andrey Kolmogorov (1950). Foundations of the Theory of Probability. The modern measure-theoretic foundation of probability theory; the original German version (Grundbegriffe der Wahrscheinlichkeitsrechnung) appeared in 1933. Chapters 7 to 9 are about continuous variables.
Let $f:X\to Y$ be a map between connected CW complexes and $k\geq 0$ an integer. I am confused by the definition of $k$-connectivity or, more fundamentally, by what induces a long exact sequence of homotopy groups. My favourite definition for $k$-connectivity is this: $f$ is called $k$-connected if the homotopy fiber $F$ of $f$ is $(k-1)$-connected, meaning that $\pi_i(F)=0$ for all $i$ with $0\leq i\leq k-1$. Of course, this definition is only reasonable for connected spaces $X$ and $Y$. I know that for $F\to X\to Y$, there is a long exact sequence\begin{equation}\ldots\to\pi_i(F)\to\pi_i(X)\to\pi_i(Y)\to\ldots\to \pi_0(X)\to\pi_0(Y)\end{equation}by arguments about the homotopy fiber $F$. I like to define the relative homotopy groups $\pi_i(Y,A)$ as $\pi_{i-1}(F)$ and one gets from the above long exact sequence a long exact sequence for the relative homotopy groups. Now Wikipedia defines an inclusion $f:X\hookrightarrow Y$ to be $k$-connected if its homotopy cofiber $C$ (= mapping cone) is $k$-connected, meaning that $\pi_i(C)=0$ for all $i$ with $0\leq i\leq k$. Even worse for me, the same Wikipedia article asserts a long exact sequence \begin{equation}(*)\hspace{10ex} \pi_i(X)\to\pi_i(Y)\to \pi_i(C) \end{equation} (however this is prolonged to the left and to the right). My main question is: How do the two definitions of $k$-connectivity relate? Maybe however, my problem of understanding begins even earlier: How do $\pi_i(C)$ and $\pi_i(Y,X)$ (from the definition above) relate? I was able to show that the relation \begin{equation} conn(F)+1=conn(C) \end{equation} holds for simply connected $X$ and $Y$. This means that, for simply connected $X$ and $Y$, the two definitions of $k$-connectivity of $f$ coincide if there is really an exact sequence (*). But what happens when $X$ and $Y$ are not simply connected but only connected?
The optimal sampling-based motion planning algorithm $\text{RRT}^*$ (described in this paper) has been shown to yield collision-free paths which converge to the optimal path as planning time increases. However, as far as I can see, the optimality proofs and experiments have assumed that the path cost metric is Euclidean distance in configuration space. Can $\text{RRT}^*$ also yield optimality properties for other path quality metrics, such as maximizing minimum clearance from obstacles throughout the path? To define minimum clearance: for simplicity, we can consider a point robot moving about in Euclidean space. For any configuration $q$ that is in the collision-free configuration space, define a function $d(q)$ which returns the distance between the robot and the nearest C-obstacle. For a path $\sigma$, the minimum clearance $\text{min_clear}(\sigma)$ is the minimum value of $d(q)$ for all $q \in \sigma$. In optimal motion planning, one might wish to maximize minimum clearance from obstacles along a path. This would mean defining some cost metric $c(\sigma)$ such that $c$ increases as the minimum clearance decreases. One simple function would be $c(\sigma) = \exp(-\text{min_clear}(\sigma))$. In the first paper introducing $\text{RRT}^*$, several assumptions are made about the path cost metric so that the proofs hold; one of the assumptions concerned additivity of the cost metric, which doesn't hold for the above minimum clearance metric. However, in the more recent journal article describing the algorithm, several of the prior assumptions weren't listed, and it seemed that the minimum clearance cost metric might also be optimized by the algorithm. Does anyone know if the proofs for the optimality of $\text{RRT}^*$ can hold for a minimum clearance cost metric (perhaps not the one I gave above, but another which has the same minimum), or if experiments have been performed to support the algorithm's usefulness for such a metric?
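For concreteness, the clearance-based cost can be sketched as follows (my own illustration, with disc obstacles for a point robot in the plane; the names `d`, `min_clear`, and `cost` simply mirror the notation above):

```python
import math

# Disc obstacles are (center, radius) pairs; a path is a list of waypoint
# configurations.  This discretizes min_clear over the waypoints only.

def d(q, obstacles):
    """Distance from configuration q to the nearest C-obstacle."""
    return min(math.dist(q, c) - r for c, r in obstacles)

def min_clear(path, obstacles):
    """min_clear(σ): minimum of d(q) over the waypoints of the path."""
    return min(d(q, obstacles) for q in path)

def cost(path, obstacles):
    """c(σ) = exp(-min_clear(σ)), which grows as the clearance shrinks."""
    return math.exp(-min_clear(path, obstacles))

obstacles = [((0.0, 0.0), 1.0)]
near = [(-2.0, 1.2), (0.0, 1.2), (2.0, 1.2)]   # grazes the disc (clearance 0.2)
far  = [(-2.0, 3.0), (0.0, 3.0), (2.0, 3.0)]   # stays 2.0 away
print(cost(near, obstacles) > cost(far, obstacles))   # True: grazing costs more
```

Note that this cost is determined by a single bottleneck waypoint rather than accumulating along the path, which is exactly the non-additivity the question raises.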
With the definitions in the OP, this is false. It is OK if the Banach space $B$ is separable and $(\Omega,\mathcal F, P)$ is an arbitrary probability space. It is OK if the Banach space $B$ is arbitrary and $(\Omega,\mathcal F,P)$ is a perfect measure space. But for arbitrary $B$ and $(\Omega,\mathcal F, P)$, it can fail. It can fail in many different ways. (A theorem of Charles Stegall: if $(\Omega,\mathcal F,P)$ is a perfect probability space, $B$ is a metric space, and $f : \Omega \to B$ is $(\mathcal F, \mathcal B)$-measurable, then there is a set $\Omega_1 \subseteq \Omega$ of measure $1$ such that $f(\Omega_1)$ is separable.)

Here is the simplest way in which it may fail. Write $\mathcal B = \mathrm{Borel}(B)$. Let $L^p(\Omega,B)$ be the set of all functions $f : \Omega \to B$ such that $f$ is $(\mathcal F, \mathcal B)$-measurable, and$$\int_\Omega \|f(\omega)\|^p\;dP(\omega) < \infty .$$ It is possible that there are $f,g \in L^p(\Omega,B)$ such that $f+g \notin L^p(\Omega,B)$ because $f+g$ is not even $(\mathcal F , \mathcal B)$-measurable.

Example I

Let $T$ be a discrete space with cardinal $\frak{a} > 2^{\aleph_0}$. Let $B = l^2(T)$, that is, a Hilbert space with orthonormal basis of cardinal $\frak{a}$. For each $t \in T$ let $e_t \in l^2(T)$ be defined by: $e_t(t) = 1$ and $e_t(s) = 0$ if $t\ne s$. This system of "unit vectors" is an orthonormal basis of the space $B$. Let $\Omega = T \times T$ be the Cartesian square. Let $\mathrm{Borel}(T)$ be the Borel sigma-algebra on $T$, which is of course the power set of $T$. Let the sigma-algebra $\mathcal{F} = \mathrm{Borel}(T) \otimes \mathrm{Borel}(T)$ be the product sigma-algebra. The reason for requiring that $\mathrm{card}(T) > 2^{\aleph_0}$ is so that the diagonal$$\Delta := \{(t,t) \in \Omega : t \in T\},$$although closed, is not in $\mathcal F$. See HERE. We do not care what the probability measure $P$ is. (In an extreme case it could even be the point mass at a single point.) Finally we are ready.
Define $f : \Omega \to B$ by$$f\big((u,v)\big) = e_u.$$That is: given $\omega = (u,v)$ in $\Omega$, we take its first component and use the corresponding unit vector. Similarly, define $g : \Omega \to B$ by$$g\big((u,v)\big) = -e_v,$$using the second component and a minus sign. I claim that $f, g \in L^p(\Omega,B)$ but $f+g$ is not.

First: $f$ is $(\mathcal F, \mathcal B )$-measurable. Indeed, if $Q \in \mathcal B$ is Borel, then $f^{-1}(Q) \in \mathcal F$ because $f^{-1}(Q) = \widetilde{Q} \times T \in \mathcal F$ where $\widetilde{Q} = \{t \in T : e_t \in Q\}$. So $f$ is $(\mathcal F, \mathcal B )$-measurable. Similarly $g$ is $(\mathcal F, \mathcal B )$-measurable. Next,$$\int_\Omega \|f(\omega)\|^p\,dP(\omega) = 1 < \infty.$$(Regardless of what the probability measure $P$ is, the integral of the constant $1$ is $1$.) So $f \in L^p(\Omega,B)$. Similarly, $g \in L^p(\Omega,B)$.

Now we claim the sum $f+g$ is not measurable. Indeed, even more, we claim that $\{\omega\in \Omega : f(\omega)+g(\omega) = 0\} \notin\mathcal F$. (Since $\{0\}$ is closed, this shows $f+g$ is not measurable.) Indeed,$$\{\omega : f(\omega) + g(\omega) = 0\} = \{(u,v) : e_u-e_v = 0\} =\{(u,v) : u=v\} = \Delta.$$As noted above, $\Delta \notin \mathcal F$.

End of Example I
ISSN: 1937-1632 eISSN: 1937-1179 Discrete & Continuous Dynamical Systems - S August 2012, Volume 5, Issue 4 Issue on Variational Methods in Nonlinear Elliptic Equations

Abstract: The present issue intends to provide an exposition of very recent topics and results in the qualitative study of nonlinear elliptic equations or systems such as, e.g., existence, multiplicity, and comparison principles. Emphasis is put on variational techniques, combined with topological arguments and sub-super-solution methods, in both a smooth and non-smooth framework. The collected papers investigate a wide range of questions. Let us mention for instance multiple solutions to elliptic equations and systems in bounded or unbounded domains, sub-super-solutions of elliptic problems whose relevant energy functionals can be non-differentiable, singular elliptic equations, asymptotically critical problems on higher dimensional spheres, local $C^1$-minimizers versus local $W^{1,p}$-minimizers. Each contribution is original and thoroughly reviewed.

Abstract: We study a class of nonlocal eigenvalue problems related to certain boundary value problems that arise in many application areas. We construct a nondecreasing and unbounded sequence of eigenvalues that yields nontrivial critical groups for the associated variational functional using a nonstandard minimax scheme that involves the $\mathbb{Z}_2$-cohomological index. As an application we prove a multiplicity result for a class of nonlocal boundary value problems using Morse theory.

Abstract: The aim of this paper is to investigate elliptic variational-hemivariational inequalities on unbounded domains. In particular, by using a recent critical point theorem, existence results of at least two nontrivial solutions are established.

Abstract: The aim of this paper is to investigate an ordinary fourth-order hemivariational inequality.
By using non-smooth variational methods, infinitely many solutions satisfying this type of inequality are guaranteed, whenever the potential of the nonlinear term has a suitable growth condition or convenient oscillatory assumptions at zero or at infinity. As a consequence, a multiplicity result for non-smooth fourth-order boundary value problems is pointed out.

Abstract: The existence of multiple weak solutions for a class of elliptic Navier boundary problems involving the $p$-biharmonic operator is investigated. Our approach is chiefly based on critical point theory.

Abstract: Using a multiple critical points theorem for locally Lipschitz continuous functionals, we establish the existence of at least three distinct solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian.

Abstract: The existence of four solutions, one negative, one positive, and two sign-changing (namely, nodal), for a Neumann boundary-value problem with right-hand side depending on a positive parameter is established. Proofs make use of sub- and super-solution techniques as well as Morse theory.

Abstract: We prove the existence of three non-zero periodic solutions for an ordinary differential inclusion. Our approach is variational and based on a multiplicity theorem for the critical points of a nonsmooth functional, which extends a recent result of Ricceri.

Abstract: We prove a multiplicity result for a perturbed gradient-type system defined on strip-like domains. The approach is based on a recent Ricceri-type three critical point theorem.

Abstract: In this paper we study the wavefront-like phase transition of solutions of a parabolic nonlinear boundary value problem used to model phase transitions in the theory of boiling liquids. Using weak supersolutions we provide bounds for the propagation speed of such a phase transition. Also we construct stable supersolutions to initial configurations which have locally supercritical values.
Abstract: This paper is about an alternate variational inequality formulation for the boundary value problem $$ \begin{array}{l} -{\rm div} (a(|\nabla u|) \nabla u) + \partial_u G(x,u) \ni 0 \;\mbox{ in } \;\Omega , \\ u=0 \;\mbox{ on } \;\partial\Omega , \end{array} $$ where the principal part may have non-polynomial or very slow growth. As a consequence of this formulation, we can apply abstract nonsmooth linking theorems to study the existence and multiplicity of nontrivial solutions to the above problem.

Abstract: The aim of this paper is to use a variational approach in order to obtain the existence of non-trivial weak solutions of a quasilinear elliptic equation not in divergence form, in dimension $N=3$. Moreover, we prove that our solution is $C^{1, \alpha}(\overline\Omega)$ and also locally $C^{2, \alpha}(\overline\Omega)$ for a suitable $\alpha\in (0,1)$.

Abstract: For a quasilinear elliptic system, the existence of two extremal solutions with components of opposite constant sign is established. If the system has a variational structure, the existence of a third nontrivial solution is shown.

Abstract: We consider a nonlinear Dirichlet boundary value problem involving the $p(x)$-Laplacian and a concave term. Our main result shows the existence of at least three nontrivial solutions. We use truncation techniques and the method of sub- and supersolutions.

Abstract: We study a class of nonlinear elliptic equations with subcritical growth and Dirichlet boundary condition. Our purpose in the present paper is threefold: (i) to establish the effect of a small perturbation in a nonlinear coercive problem; (ii) to study a Dirichlet elliptic problem with lack of coercivity; and (iii) to consider the case of a monotone nonlinear term with subcritical growth.
This last feature enables us to use a dual variational method introduced by Clarke and Ekeland in the framework of Hamiltonian systems associated with a convex Hamiltonian and applied by Brezis to the qualitative analysis of large classes of nonlinear partial differential equations. Connections with the mountain pass theorem are also made in the present paper.

Abstract: In this paper we study elliptic equations with a nonlinear conormal derivative boundary condition involving nonstandard growth terms. By means of the localization method and De Giorgi's iteration technique we derive global a priori bounds for weak solutions of such problems.
Discrete & Continuous Dynamical Systems - S October 2012, Volume 5, Issue 5 Issue on recent progress on the long time behavior of coherent structures in discrete and continuous models

Abstract: Partial differential equations viewed as dynamical systems on an infinite-dimensional space describe many important physical phenomena. Lately, an unprecedented expansion of this field of mathematics has found applications in areas as diverse as fluid dynamics, nonlinear optics and network communications, combustion and flame propagation, to mention just a few. In addition, there have been many recent advances in the mathematical analysis of differential difference equations with applications to the physics of Bose-Einstein condensates, DNA modeling, and other physical contexts. Many of these models support coherent structures such as solitary waves (traveling or standing), as well as periodic wave solutions. These coherent structures are very important objects when modeling physical processes and their stability is essential in practical applications. Stable states of the system attract dynamics from all nearby configurations, while the ability to control coherent structures is of practical importance as well. This special issue of Discrete and Continuous Dynamical Systems is devoted to the analysis of nonlinear equations of mathematical physics with a particular emphasis on existence and dynamics of localized modes. The unifying idea is to predict the long time behavior of these solutions. Three of the papers deal with continuous models, while the other three describe discrete lattice equations.
Abstract: It is the purpose of this paper to prove error estimates for the approximate description of macroscopic wave packets in infinite periodic chains of coupled oscillators by modulation equations, like the Korteweg-de Vries (KdV) or the nonlinear Schrödinger (NLS) equation. The proofs are based on a discrete Bloch wave transform of the underlying infinite-dimensional system of coupled ODEs. After this transform, the existing proof of the approximation theorem for the NLS approximation, used for the approximate description of oscillating wave packets in dispersive PDE systems, transfers almost line for line. In contrast, the proof of the approximation theorem for the KdV approximation of long waves is less obvious. In a special situation we prove a first approximation result.

Abstract: We study the Cauchy problem for the focusing time-dependent Schrödinger-Hartree equation $$i \partial_t \psi + \triangle \psi = -({|x|^{-(n-2)}}\ast |\psi|^{\alpha})|\psi|^{\alpha - 2} \psi, \quad \alpha\geq 2,$$ for space dimension $n \geq 3$. We prove the existence of solitary wave solutions and give conditions for formation of singularities in dependence of the values of $\alpha\geq 2$ and the initial data $\psi(0,x)=\psi_0(x)$.

Abstract: We investigate the spectrum of the linear operator coming from the sine-Gordon equation linearized about a travelling kink-wave solution. Using various geometric techniques as well as some elementary methods from ODE theory, we find that the point spectrum of such an operator is purely imaginary provided the wave speed $c$ of the travelling wave is not $\pm 1$. We then compute the essential spectrum of the same operator.

Abstract: We describe relations between the Evans function, a modern tool in the study of stability of traveling waves and other patterns for PDEs, and the classical Weyl-Titchmarsh function for singular Sturm-Liouville differential expressions and for matrix Hamiltonian systems.
Also, for the scalar Schrödinger equation, we discuss the related issue of approximating eigenvalue problems on the whole line by those on finite segments.

Abstract: Asymptotic stability of localized modes in the discrete nonlinear Schrödinger equation was earlier established for septic and higher-order nonlinear terms by using Strichartz estimates. We use here pointwise dispersive decay estimates to push down the lower bound for the exponent of the nonlinear terms.

Abstract: We present a discrete model of resonant scattering of waves by an open periodic waveguide. The model elucidates a phenomenon common in electromagnetics, in which the interaction of plane waves with embedded guided modes of the waveguide causes sharp transmission anomalies and field amplification. The ambient space is modeled by a planar lattice and the waveguide by a linear periodic lattice coupled to the planar one along a line. We show the existence of standing and traveling guided modes and analyze a tangent bifurcation, in which resonance is initiated at a critical coupling strength where a guided mode appears, beginning with a single standing wave and splitting into a pair of waves traveling in opposing directions. Complex perturbation analysis of the scattering problem in the complex frequency and wavenumber domain reveals the complex structure of the transmission coefficient at resonance.
Edit: In case there is no solution to the original question, I have modified it to enrich the question.

We would like to ask how to inflate a specific $H^3(Q, \mathbb{R} /\mathbb{Z})$ cocycle of a finite group $Q$ into a coboundary, in the following two cases involving the quaternion group and the dihedral group:

Inflate the 3-cocycle $\alpha_{1}$ of $Q=Z_2$ via the dihedral group $G=D_8$ of order 8.

Inflate the 3-cocycle $\alpha_{2}$ of $Q=Z_2 \times Z_2$ via the quaternion group $G=H_8$ of order 8.

Consider the cocycles $\alpha_1(g_a,g_b, g_c) \in H^3(Z_2, \mathbb{R} /\mathbb{Z})$ and $\alpha_2((g_{a1},g_{a2}),(g_{b1},g_{b2}),(g_{c1},g_{c2})) \in H^3(Z_2 \times Z_2, \mathbb{R} /\mathbb{Z})$ in the 3rd cohomology groups of $Z_2$ and $Z_2 \times Z_2$ respectively, where $g_a,g_b,g_c \in Z_2$ and $(g_{a1},g_{a2}),(g_{b1},g_{b2}),(g_{c1},g_{c2})\in Z_2 \times Z_2$. Let both $\alpha_1$ and $\alpha_2$ be of cup-product form: $$\alpha_1(g_a,g_b, g_c)=(-1)^{g_{a}g_{b}g_{c}},$$ $$\alpha_2((g_{a1},g_{a2}),(g_{b1},g_{b2}),(g_{c1},g_{c2}))=(-1)^{g_{a1}g_{b2}g_{c2}}.$$

Question: How can we trivialize the $H^3(Z_2, \mathbb{R} /\mathbb{Z})$ cocycle $\alpha_1$ and the $H^3(Z_2 \times Z_2, \mathbb{R} /\mathbb{Z})$ cocycle $\alpha_2$ into coboundaries, $$\alpha_1= \delta \beta_1,$$ $$\alpha_2= \delta \beta_2,$$ in the larger groups, the dihedral $D_8$ and the quaternion $H_8$ respectively, by finding the explicit 2-cochains $\beta_1$ and $\beta_2$? Here we regard the cocycles as pulled back along the group homomorphisms $$D_8 \to Z_2 \text{ and } H_8 \to Z_2 \times Z_2.$$

For $D_8$ we can use either the fact that $D_8/Z_4=Z_2$ or that $D_8/(Z_2)^2=Z_2$. We call $D_8=G$, with $Z_4$ or $(Z_2)^2=N$ the normal subgroup and $Z_2=Q$ the quotient group. For $H_8$ we use the fact that $H_8/Z_2=Z_2 \times Z_2$; we call $H_8=G$, with $Z_2=N$ the normal subgroup and $Z_2 \times Z_2=Q$ the quotient group. In both cases $G/N=Q$.
In particular, beyond the explicit 2-cochains, I am also interested in the relation to the Lyndon-Hochschild-Serre spectral sequence and its $d_2$ differential, the homomorphism $d_2:H^1(Q,H^1(N,\mathbb{R} /\mathbb{Z}))\to H^3(Q,\mathbb{R} /\mathbb{Z})$ on the $E_2$ page, and in whether other $d_n$ differentials are required to determine the inflation of the cocycle.

P.S. The original post on Math.SE received little attention, so I decided to try MO.
Physicists! Bah…! They deal with mathematics on an as-needed basis. They regularly divide by infinitesimals, mindlessly exchange the order of integration and differentiation, and carelessly sum only parts of an infinite series expansion. Rigor is dead and buried, as long as they end up with meaningful and useful answers! But isn't it weird that while the road to becoming a theoretical physicist includes numerous rigorous math courses like algebra, measure theory, geometry, and analysis, most physicists still opt for the shortcut and get away with it?

Here we will take an interest in the field of mathematical analysis, which investigates mathematical functions, their derivatives and integrals by dealing with the infinitely small. The point of this post is to emphasize that rigor can take on more or less pedagogical forms, by exploring a little-known but perhaps more intuitive reformulation of mathematical analysis, known as the hyperreals…

A Historical Prelude

Historically, the idea of taking something quite small, but finite, and then turning it infinitely small has been employed by many mathematicians throughout the ages, including works by Archimedes, Fermat, Euler, Bolzano, and Cauchy. The greatest result of this approach is calculus, where first Newton and especially Leibniz introduced positive quantities smaller than any real number and called them "fluxions" or "infinitesimals". But infinitesimals ended up making serious formalistic mathematicians, like David Hilbert, uneasy. Infinity and infinitesimals do not represent real numbers. So how can we do calculations with numbers that do not exist? It was Karl Weierstrass who, following up on the work of Bolzano, answered this riddle by doing away with the infinitesimals entirely. Instead he introduced the concept of limits and the method of epsilon-delta reasoning – a method so strong that it is currently being celebrated in every calculus textbook and in many Analysis-101 courses.
However, I'm sure that most people (physicists included) have felt that this rigid approach to calculus could not be the full story. How was a successful tool like calculus built on such shaky ground? And how did it manage to stand there majestically for centuries until somebody eventually took the time to secure the foundations? And how can it be that all the physics shortcuts have not left physicists in a logic-less ruin?

Abraham Robinson was the first to consider taking the concept of infinitesimals seriously. He expanded the field of real numbers with unlimited numbers (the infinite) as well as infinitesimals. He needed a fancy name for his construction, and today it is known as the field of the hyperreals. We are going to take a close look at it, following the most common approach, based on something called ultrafilters.

Ultrafilters

For the construction of the hyperreals we need ultrafilters. They will help us ensure that the hyperreals form an ordered field of numbers. A free ultrafilter $U$ on the natural numbers, $\mathbb{N}$, is a set of subsets of $\mathbb{N}$ with the following properties:

1. $U$ only contains infinite subsets of $\mathbb{N}$.
2. For two elements in $U$ (let us call them $X_1$ and $X_2$), the intersection of those two sets must also belong to $U$, so $X_1 \cap X_2 \in U$.
3. Consider some infinite subset $X_1$ which belongs to $U$. All supersets of $X_1$ (all $X_2 \supset X_1$) then also belong to this ultrafilter.
4. If $X \notin U$ then the complement of $X$ (written as $\mathbb{N} \setminus X$) belongs to $U$.

As a simple example consider the empty set $\emptyset = \{\}$. The empty set is finite and does not belong in an ultrafilter (per statement 1). The complement of the empty set (which is the full set $\mathbb{N} \setminus \emptyset = \mathbb{N}$) then instead belongs to $U$ (per statement 4).
On the other hand: Consider the infinite subset containing all the even numbers, $\mathbb{N}_e$, and the subset containing all the odd numbers, $\mathbb{N}_o$. The intersection of those two subsets is $\mathbb{N}_e \cap \mathbb{N}_o = \emptyset$, meaning that they cannot both belong to our ultrafilter (per statement 2). Because the two sets are the complements of each other, exactly one of them must belong to $U$ (per statement 4), but we are free to choose which one! So an ultrafilter is not unique; in fact there are many different free ultrafilters on $\mathbb{N}$, and the construction below depends on which one we fix.

The hyperreals

After this small intermission, we are ready to construct the hyperreals. This particular construction uses sequences. A sequence is a function on the positive integers, and we write them in the following way, $$\underline{a} = \langle a_1, a_2, a_3, \ldots \rangle.$$ The hyperreals are constructed from infinite sequences of reals, $\underline{a} \in \mathbb{R}^{\mathbb{N}}$. The real numbers themselves can easily be represented as infinite sequences with constant elements. So the number 2 is written $$\langle 2, 2, 2, \ldots \rangle.$$ But we may dream up many other "non-real" sequences, like these: $$\langle 1, 2, 3, \ldots \rangle, \qquad \langle 1, 0, 1, 0, \ldots \rangle.$$ We want the hyperreal numbers to be ordered, so any two hyperreal numbers can be compared and found to be either larger or smaller than each other. The main problem with our sequences is that they cannot be compared to each other in a consistent way. Is $\langle 1, 0, 1, 0, \ldots \rangle$ greater or smaller than $\langle 2, 2, 2, \ldots \rangle$? And what about $\langle 1, 2, 3, \ldots \rangle$ compared to $\langle 2, 2, 2, \ldots \rangle$? If I change a single element of a sequence, is this new sequence then greater, smaller or equivalent to the original sequence? In order to extract something usable, we must "thin out" the forest of possible sequences. We do this by letting two sequences represent the same hyperreal number if "most" of their elements are identical. For a consistent definition of "most" we return to the ultrafilter construction.
We say that two sequences are equivalent, $\underline{r} \equiv \underline{s}$, if and only if the places at which they share elements form an infinite set that belongs to our ultrafilter, $U$: $$\underline{r} \equiv \underline{s} \iff \langle r = s \rangle := \{ j : r_j = s_j \} \in U.$$ Note that we sneakily introduced the brackets $\langle \cdots \rangle$ to signify that we are transforming a relation between two sequence representations of hyperreal numbers into the set of indices at which the sequence elements obey that same relation. Using the properties of the ultrafilter, it is easy to show that this construction is transitive and defines an equivalence relation. The resulting equivalence classes define the hyperreals: $$*\mathbb{R} = \mathbb{R}^{\mathbb{N}} / \equiv.$$ Hyperreal numbers may be manipulated by first choosing sample sequences from the relevant equivalence classes, then performing element-wise operations on those sample sequences, and finally representing the resulting hyperreal as the equivalence class of the resulting sequence. If that was too convoluted, here come the simple definitions of addition and multiplication: $$[\underline{r}] \oplus [\underline{s}] = [\langle r_1 + s_1, r_2 + s_2, \ldots \rangle], \qquad [\underline{r}] \odot [\underline{s}] = [\langle r_1 s_1, r_2 s_2, \ldots \rangle].$$ We are now ready to show that this careful construction allows us to order the hyperreals. Assume that one sequence, $r$, for "a large part" is smaller than another sequence, $s$. Again we take "for a large part" to mean that the sequence indices at which $r$ is smaller than $s$ belong to our ultrafilter, so $\langle r < s \rangle = \{ j : r_j < s_j \} \in U$. It follows almost automatically that $r \not\equiv s$, because otherwise the empty intersection $\langle r = s \rangle \cap \langle r < s \rangle = \emptyset$ would belong to $U$ (by use of ultrafilter properties 1 and 2). When applied a little more rigorously this shows that the hyperreals, $(*\mathbb{R}, \oplus, \odot)$, indeed form an ordered field.

The reals

As we already anticipated, the real numbers are embedded in the hyperreals, represented by the classes of the constant sequences.
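A genuinely free ultrafilter cannot be written down explicitly (its existence relies on the axiom of choice), but the element-wise arithmetic and the "true on a large set of indices" comparisons above can still be sketched in code. The following Python toy (my own illustration, not any standard library) represents a hyperreal by one sample sequence, i.e. a function from the index $j$ to the $j$-th element, and cheats by testing a relation only on a window of large indices. This is sound whenever the relation holds on a cofinite set, since every free ultrafilter contains all cofinite sets:

```python
# A hyperreal is represented by one sample sequence j -> r_j (1-indexed).
def std(x):
    """The standard number *x = [<x, x, x, ...>]."""
    return lambda j: x

def add(r, s):   # element-wise addition, representing [r] (+) [s]
    return lambda j: r(j) + s(j)

def mul(r, s):   # element-wise multiplication, representing [r] (.) [s]
    return lambda j: r(j) * s(j)

def holds_eventually(rel, r, s, window=range(10_000, 10_100)):
    """Heuristic stand-in for '<r rel s> belongs to U': check the relation
    on a window of large indices (valid when the index set is cofinite)."""
    return all(rel(r(j), s(j)) for j in window)

eps = lambda j: 1.0 / j       # an infinitesimal: <1, 1/2, 1/3, ...>
omega = lambda j: float(j)    # an unlimited number: <1, 2, 3, ...>
lt = lambda a, b: a < b

assert holds_eventually(lt, std(0.0), eps)              # 0 < eps
assert holds_eventually(lt, eps, std(0.001))            # eps below a small positive real
assert holds_eventually(lt, add(eps, eps), std(0.001))  # eps + eps is still infinitesimal
assert holds_eventually(lt, std(100.0), omega)          # omega exceeds this real
assert holds_eventually(lt, omega, mul(omega, omega))   # omega < omega^2
```

Only the window check is fake: with an actual ultrafilter every comparison between two hyperreals would be decided, even for sequences like the alternating one, which is neither eventually $0$ nor eventually $1$.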
Let us introduce a map from the reals to the hyperreals, $* : \mathbb{R} \rightarrow *\mathbb{R}$, such that $$*x = [\langle x, x, x, \ldots \rangle].$$ We refer to these hyperreal numbers as standard. They directly equip the hyperreals with neutral elements for addition, $*0$, and multiplication, $*1$. You may remember this alternating sequence we looked at earlier: $$\langle 1, 0, 1, 0, \ldots \rangle.$$ It directly exhibits the cleverness of the ultrafilter construction. This sequence overlaps with the real number $*0$ on all the even sites, and with the real $*1$ on all the odd sites. From our discussion of ultrafilters, we know that the set of even numbers, $\mathbb{N}_e$, and the set of odd numbers, $\mathbb{N}_o$, cannot belong to the same ultrafilter. This means that for some choices of ultrafilter the alternating sequence will belong to the $*0$ equivalence class, and in others it will belong to the $*1$ class.

Infinitesimals

The hyperreals also contain a lot of numbers in the vicinity of the standard numbers. Consider for example the hyperreal $s = [\underline{s}]$ defined, for some fixed real number $x$, by the sequence $$\underline{s} = \langle x + 1, x + \tfrac{1}{2}, x + \tfrac{1}{3}, \ldots \rangle.$$ This sequence only intersects sample sequences from the standard number classes a finite number of times. This makes the corresponding hyperreal number, $s = [\underline{s}]$, decidedly non-standard. It is easy to show that $\langle *x < s \rangle = \mathbb{N}$. More interestingly, $s$ squeezes in between $*x$ and any other standard number $y > x$, because $\langle s < y \rangle \in U$. The difference between those two hyperreal numbers, $s - *x$, is now smaller than any positive real number, and we may refer to it as infinitesimal. There exists a whole plethora of infinitesimals, and they can be compared to each other, so one is smaller and one is bigger, even though they are all infinitesimal. When two hyperreal numbers, $s$ and $r$, are (only) separated by an infinitesimal, we write that $s \simeq r$. We can then introduce two useful functions.
The halo of a hyperreal contains the infinitesimal cloud around it, $$\mathrm{hal}(r) = \{ s \in *\mathbb{R} : s \simeq r \}.$$ Similarly, we say that two hyperreals, $s$ and $r$, are a limited distance apart whenever their difference is limited (not unlimited), and we write $s \sim r$. The classes of this equivalence relation are called the galaxies, and we write $$\mathrm{gal}(r) = \{ s \in *\mathbb{R} : s \sim r \}.$$ We may take a look at the hyperreal infinitesimals through the infinitesimal microscope, a pedagogical representation of the hyperreal number line originally introduced by Jerome Keisler. Focusing on a particular hyperreal, $r$, the microscope magnifies its halo.

Unlimited numbers

The unlimited numbers form another interesting family of hyperreal numbers. Consider for example the hyperreal number represented by $$n = [\langle 1, 2, 3, \ldots \rangle].$$ It is clearly non-standard (its sample sequences only overlap real number sequences at a finite number of places), and it is also greater than any real number: for any real $y$, the set $\langle *y < n \rangle$ is cofinite and therefore belongs to $U$. All properties which we normally associate with the infinite. However, like the infinitesimals, there are many different unlimited numbers within the hyperreals. Consider for example $$m = [\langle 1, 4, 9, \ldots \rangle].$$ Here $m$ is also unlimited, and in addition $m > n$, which may confuse or comfort you, depending on your mathematical standpoint.

Hey, wait a minute… What have we just done? It seems we have taken the sequences familiar from the Cauchy-sequence construction of the reals and turned those sequences into numbers themselves… or entities? Is that all there is to it? Well, yes, in part. But if you glance back, you may notice that it is not at all obvious. The ultrafilter construction is a necessary complication, and this complication partly explains why the hyperreals were not built earlier. Whether or not you like this construction is of course a subjective matter. Currently, the main advantage of non-standard analysis is that it allows you to think differently about calculus. Instead of thinking about a process where finite elements shrink to zero, the hyperreals allow for a direct construction.
In essence it makes it easier to extend properties of finite systems into continuous cases. Let me note that there exist many other approaches to non-standard analysis: axiomatic, like Internal Set Theory or Alternative Set Theory, as well as constructive, like the surreal numbers or the superreal numbers.

How to use it?

We now leave the safe shore of pretended rigor and jump into computation. We will do so with two simple examples, but first we need:

The transfer principle: Any appropriately formulated statement is true of $*\mathbb{R}$ if and only if it is true of $\mathbb{R}$.

This principle is a necessity in non-standard analysis. For the ultrapower construction of the hyperreals, which we have considered here, the principle follows from Łoś's theorem. In order to transfer our results from the hyperreals back to the reals, we define the shadow, $\mathrm{sh}(r)$, which maps a limited hyperreal number onto its closest real number. The function effectively removes any stray infinitesimals. We may do all our work within the hyperreals, and only return to the reals at the very end of our computation.

The derivative

We start simple. Assuming, somewhat haphazardly, that simple functions may easily be transferred to the hyperreals, we define the derivative of a function as the shadow $$f'(x) = \mathrm{sh}\left( \frac{*f(*x + \epsilon) - *f(*x)}{\epsilon} \right),$$ where $\epsilon$ is a hyperreal infinitesimal. We now drop the star extensions for brevity. Take the cubic function, $f(x) = x^3$. The non-standard derivation then follows almost automatically: $$\frac{(x + \epsilon)^3 - x^3}{\epsilon} = \frac{3 x^2 \epsilon + 3 x \epsilon^2 + \epsilon^3}{\epsilon}.$$ Canceling terms and performing the overall division gives the well-known result $$f'(x) = \mathrm{sh}\left(3 x^2 + 3 x \epsilon + \epsilon^2\right) = 3 x^2.$$

The intermediate value theorem

In order to contrast the standard epsilon-delta approach with our novel non-standard method, we turn to the proof of the intermediate value theorem (also known as Bolzano's theorem). If you want to, you can take a look at the standard proof before continuing. The theorem considers a continuous function $f$ on the interval $[a, b]$.
It states that for any $d$ in between $f(a)$ and $f(b)$, there exists $c \in [a,b]$ such that $f(c) = d$. Without loss of generality we assume that $f(a) < f(b)$. Initially, we divide the interval $[a,b]$ into an integer number, $N$, of pieces of equal length, $\delta_N = (b-a)/N$. Consider now the first division point, $s_N$, where $f(s_N) > d$. Then logically $f(s_N - \delta_N) \leq d$. Next, consider the division of the interval into a number of segments given by an unlimited hyperinteger, $M \in *\mathbb{N}$. Due to the transfer principle there still exists a smallest division point, $s_M$, where $f(s_M) > d$. However, $\delta_M = (b-a)/M$ is now infinitesimal, and this shows that $s_M \simeq s_M - \delta_M$. By continuity $f(s_M) \simeq f(s_M - \delta_M) \simeq d$. So the real number we are looking for is actually $c = \mathrm{sh}(s_M)$. Q.E.D.

This proof also ends this short section on the application of the hyperreals. As a side note, we introduced continuity on the hyperreals as $r \simeq s \Rightarrow f(r) \simeq f(s)$. If you are interested in a rigorous derivation, you can find it in the source material.

Sources

I am indebted to the well-written article "Infinitesimals: History & Application" by Joel A. Tropp. If you want to delve further into nonstandard analysis, you should give it a good read. Also the textbook "Elementary Calculus: An Infinitesimal Approach" by H. Jerome Keisler provided a sound foundation for my understanding of the hyperreals. Wikipedia also has several articles about non-standard analysis, although I'd rather recommend the two sources above for further study.

I hope you have enjoyed this small excursion into the hyperreals. Now at least you have an answer ready when pesky mathematicians question your logic. I have also read that people are working on applying non-standard reasoning to distributions and to concepts in mathematical physics like the Feynman path integral. I may (or may not) cover that in a later post…
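The derivative computation above has a computable cousin: dual numbers $a + b\epsilon$ with $\epsilon^2 = 0$, the truncated-infinitesimal arithmetic behind forward-mode automatic differentiation. This is not the hyperreal field ($\epsilon$ here is nilpotent rather than invertible, so there is no division by $\epsilon$ and no shadow map), but the cubic example works out the same way, with the first-order coefficient playing the role of $\mathrm{sh}\big((f(x+\epsilon)-f(x))/\epsilon\big)$. A small Python sketch of my own:

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0 (value a, infinitesimal part b)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.a + o.a, self.b + o.b)

    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + b1 a2) eps
        o = self._coerce(other)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def derivative(f, x):
    """First-order coefficient of f(x + eps): the analogue of the shadow."""
    return f(Dual(x, 1.0)).b

assert derivative(lambda t: t * t * t, 2.0) == 12.0   # f(x) = x^3, f'(2) = 3*2^2
```

For $f(x) = x^3$ the multiplication rule reproduces exactly the cancellation above: $(x+\epsilon)^3 = x^3 + 3x^2\epsilon$ once $\epsilon^2$ terms vanish.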
I recently found a paper by Subhash Kak that introduces teleportation protocols requiring a lower classical communication cost (at the price of more quantum resources). I thought it'd be better to write a separate answer. Kak discusses three protocols; two of them use 1 cbit and the last one requires 1.5 cbits. But the first two protocols are in a different setting, i.e. the entangled particles are initially in Alice's lab (and a few local operations are performed), and then one of the entangled particles is transferred to Bob's lab; this is unlike the standard setting, where the entangled particles are pre-shared between Alice and Bob before the protocol is even started. Interested people can go through those protocols that use only 1 cbit. I'll try to explain the last protocol, which uses only 1.5 cbits (fractional cbits). There are four particles, namely $X, Y, Z$ and $U$. $X$ is the unknown particle (or state) that has to be teleported from Alice's lab to Bob's lab. $X, Y$ and $Z$ are with Alice, and $U$ is with Bob. Let $X$ be represented as $\alpha|0\rangle + \beta|1\rangle$, such that $|\alpha|^2+|\beta|^2=1$. The three particles $Y, Z$ and $U$ are in the pure entangled state $|000\rangle+|111\rangle$ (leaving out the normalization constants for now). So, the initial state of the whole system is:$$\alpha|0000\rangle + \beta|1000\rangle + \alpha|0111\rangle + \beta|1111\rangle$$ Step 1: Apply chained XOR transformations on $X, Y$ and $Z$: (i) XOR the states of $X$ and $Y$, (ii) XOR the states of $Y$ and $Z$.
The $XOR$ unitary is given by:$$XOR =\left[{\begin{array}{cccc}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0\end{array}}\right].$$ In other words, the state transformations are the following:$$|00\rangle \rightarrow |00\rangle \\|01\rangle \rightarrow |01\rangle \\|10\rangle \rightarrow |11\rangle \\|11\rangle \rightarrow |10\rangle \\$$ After Step 1, the state of the whole system is:$$\alpha|0000\rangle + \beta|1110\rangle + \alpha|0101\rangle + \beta|1011\rangle$$ Step 2: Apply a Hadamard transform on the state of $X$.$$\alpha(|0000\rangle + |1000\rangle) + \beta(|0110\rangle - |1110\rangle) + \alpha(|0101\rangle + |1101\rangle) + \beta(|0011\rangle - |1011\rangle)$$ Step 3: Alice measures the state of $X$ and $Y$. On simplifying the above representation, we get$$|00\rangle(\alpha|00\rangle + \beta|11\rangle) + |01\rangle(\alpha|01\rangle + \beta|10\rangle) + |10\rangle(\alpha|00\rangle - \beta|11\rangle) + |11\rangle(\alpha|01\rangle - \beta|10\rangle).$$ Step 4: Depending on Alice's measurement outcome, appropriate unitaries are applied on $Z$ (by Alice) and $U$ (by Bob). (a) If Alice gets $|00\rangle$, then both Alice and Bob do nothing. (b) If Alice gets $|10\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob does nothing. (c) If Alice gets $|01\rangle$, then Alice does nothing and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$. (d) If Alice gets $|11\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$.
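For what it's worth, Steps 1 and 2 above are easy to verify numerically. The sketch below is my own check (not code from Kak's paper); it builds the four-qubit state vector in the order $X, Y, Z, U$, applies the two XOR gates and the Hadamard, and reproduces the amplitudes quoted after Step 2:

```python
# Numerical check of Steps 1-2 (my own sketch, not from Kak's paper).
# Qubit order X, Y, Z, U; qubit 0 (X) is the leftmost bit of the basis label.
import numpy as np

def cnot(state, control, target, n=4):
    """XOR gate: flip `target` on the basis states where `control` is 1."""
    out = np.zeros_like(state)
    cmask = 1 << (n - 1 - control)
    tmask = 1 << (n - 1 - target)
    for idx in range(len(state)):
        out[idx ^ tmask if idx & cmask else idx] += state[idx]
    return out

def hadamard(state, qubit, n=4):
    """Hadamard on one qubit of an n-qubit state vector."""
    out = np.zeros_like(state)
    mask = 1 << (n - 1 - qubit)
    for idx in range(len(state)):
        if idx & mask:  # |1> -> (|0> - |1>)/sqrt(2)
            out[idx ^ mask] += state[idx] / np.sqrt(2)
            out[idx] -= state[idx] / np.sqrt(2)
        else:           # |0> -> (|0> + |1>)/sqrt(2)
            out[idx] += state[idx] / np.sqrt(2)
            out[idx | mask] += state[idx] / np.sqrt(2)
    return out

alpha, beta = 0.6, 0.8  # any real amplitudes with alpha^2 + beta^2 = 1

# (alpha|0> + beta|1>)_X (x) (|000> + |111>)_{YZU} / sqrt(2)
psi = np.zeros(16)
psi[0b0000] = psi[0b0111] = alpha / np.sqrt(2)
psi[0b1000] = psi[0b1111] = beta / np.sqrt(2)

psi = cnot(psi, 0, 1)   # Step 1(i):  XOR the states of X and Y
psi = cnot(psi, 1, 2)   # Step 1(ii): XOR the states of Y and Z
psi = hadamard(psi, 0)  # Step 2:     Hadamard on X

# Non-zero amplitudes match the expression after Step 2 (with a 1/2 factor).
print({format(i, '04b'): round(float(x), 3)
       for i, x in enumerate(psi) if abs(x) > 1e-12})
```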
Basically, $\left[{\begin{array}{cc}1 & 0 \\0 & 1 \end{array}}\right]$, $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$, $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$ and $\left[{\begin{array}{cc}0 & 1 \\-1 & 0 \end{array}}\right]$ can be appropriately used to alter the combined state of $Z$ and $U$ so that it becomes $\alpha|00\rangle + \beta|11\rangle$. Note that if Alice gets $|01\rangle$ or $|11\rangle$, then Bob has to apply some unitary so that the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$. Step 5: Apply a Hadamard transform on the state of $Z$. After applying the unitaries, the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$ (as mentioned above). So, after Step 5, the combined state of $Z$ and $U$ is, $$\alpha|00\rangle + \alpha|10\rangle + \beta|01\rangle - \beta|11\rangle \\= |0\rangle(\alpha|0\rangle + \beta|1\rangle) + |1\rangle(\alpha|0\rangle - \beta|1\rangle).$$ Step 6: Alice measures the state of $Z$. Based on her measurement, she transmits one classical bit of information to Bob so that he can use an appropriate unitary to obtain the unknown state! Discussion: So, how does the protocol require $1.5$ cbits of classical communication? Clearly, Step 6 uses 1 cbit, and in Step 4, it is easy to notice that for two outcomes (namely, $|10\rangle$ or $|00\rangle$), Bob need not apply any unitary. Bob has to apply some unitary (specified prior to the protocol; say $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$) if Alice gets the other two outcomes, and in those scenarios, Alice sends one cbit indicating that the unitary is to be used by Bob. So, it is said that this amounts to an average communication cost of 0.5 cbits (because 50% of the time, Bob need not apply any unitary). Hence, the whole protocol requires only 1.5 cbits. But, Alice must send that 1 cbit whether or not she gets those outcomes, right?
Alice and Bob could try to agree on a particular time (after the protocol) at which Alice sends that 1 cbit, so that if Bob doesn't receive the classical bit by that time, he knows that he need not apply any unitary. But such time-dependent protocols are, in general, not allowed due to relativistic consequences (otherwise, you could even make the standard protocol use timing to convey information and reduce the classical communication cost to 1 cbit; for example, send one cbit at $t_1$, or send one cbit at $t_2$). So, Alice must send that cbit every time, right? In that case, the protocol requires 2 cbits (one in Step 4 and another in Step 6). I thought it'd be good if there was a discussion on this particular part.
Now showing items 1-6 of 6 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
The definition of partial function doesn't exclude the possibility that the function is total. On the contrary, every total function is a fortiori a partial function. Perhaps the terminology is unfortunate. A function from $D$ to $R$ is a subset $f$ of $D \times R$ such that for all $d \in D$ there exists a unique $r \in R$ (denoted $f(d)$) such that $(d,r) \in f$. The domain of $f$ is $\operatorname{dom} f = D$. A partial function from $D$ to $R$, where $\bot \notin R$, is a function $f$ from $D$ to $R \cup \{\bot\}$. When $f(d) = \bot$, we say that $f$ is undefined at $d$. The domain of $f$ is $\operatorname{dom} f = \{d \in D : f(d) \neq \bot\}$. If $\operatorname{dom} f = D$ then $f$ is total. For every partial function $f$, $f|_{\operatorname{dom} f}$ is a total function. As you can see, a partial function is a function which can be undefined for some of the inputs; a total function is a partial function which happens to be defined everywhere. Now regarding computability. Every program computes some partial function $f$ from $\mathbb{N}$ to $\mathbb{N}$; if the program doesn't halt on some input $x$, then $f$ is undefined on $x$, in other words, $f(x) = \bot$. A computable function is one computed by an algorithm which always halts.
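In programming terms, a partial function from $D$ to $R$ is often modelled exactly this way: adjoin an explicit "undefined" value to the codomain. A small Python sketch of the definitions above, with None playing the role of $\bot$:

```python
# A partial function modelled with an explicit "undefined" value (my own sketch).
from typing import Optional

def partial_sqrt(x: int) -> Optional[float]:
    """A partial function from the integers to the reals:
    None plays the role of the bottom element, i.e. f is undefined for x < 0."""
    if x < 0:
        return None  # f(x) = bottom: f is undefined at x
    return x ** 0.5

def domain_of_definition(f, candidates):
    """dom f = {d : f(d) != bottom}, restricted to a finite candidate set."""
    return {d for d in candidates if f(d) is not None}

print(domain_of_definition(partial_sqrt, range(-3, 4)))  # {0, 1, 2, 3}
```

The analogy to computability is direct: a program that fails to halt on input $x$ corresponds to $f(x) = \bot$, except that non-termination, unlike returning None, cannot be detected from the outside.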
Schützenberger promotion, studied (for example) in Richard Stanley, Promotion and Evacuation, 2009, is a permutation of the set of all linear extensions of a finite poset. Since one can identify the linear extensions of a poset with saturated chains of order ideals in that poset, this allows one to also view Schützenberger promotion as a permutation of the set of the latter. The famous promotion of standard Young tableaux is a particular case of this. Striker-Williams promotion, defined in Jessica Striker, Nathan Williams, Promotion and Rowmotion, arXiv:1108.1172v3, Definition 4.13, is a permutation of the set of all order ideals (not saturated chains of order ideals!) of a so-called "rc poset" (which is a poset with a map into $\mathbb Z^2$ satisfying certain conditions, best viewed as a way to draw its Hasse diagram on a grid; see below or §4.2 of Striker-Williams for an exact definition). Apparently people are considering these two promotions to be closely related. However, the only direct relation I am aware of is Striker-Williams Theorem 4.12, which bijects Schützenberger promotion on standard tableaux on a two-rowed Young diagram with Striker-Williams promotion on a poset which looks like a triangle grid. Questions: 1. Is this really the only relation? Is promotion of standard Young tableaux of a Young diagram with more than $2$ rows not a (known) case of Striker-Williams promotion? 2. I've seen some kind of promotion on semistandard Young tableaux being mentioned on the internet. Assuming it's not a typo, how is that defined? Appendix: Let me define the two notions involved for the sake of completeness. Probably the sources quoted give better definitions... Definition of Schützenberger promotion: Let $P$ be a finite poset. Let $\mathcal L\left(P\right)$ denote the set of all linear extensions of $P$. We define a map $\partial : \mathcal L\left(P\right)\to \mathcal L\left(P\right)$ as follows: Let $f \in \mathcal L\left(P\right)$ be a linear extension. 
We set $p=\left|P\right|$, and we view $f$ as a function $P\to\left\lbrace 1,2,...,p\right\rbrace$, i.e., as a labelling of the elements of $P$ by the numbers $1$, $2$, ..., $p$ (we get this labelling by labelling every element $v\in P$ with the number $\left| \left\lbrace w\in P \ \mid \ f\left(w\right)\leq f\left(v\right) \right\rbrace \right|$). Define a (dynamic) map $g:P\to\mathbb Z$ by $g = f$ (we will be modifying $g$, while $f$ remains static). If $p=0$, do nothing. Else, set $u$ to be the element of $P$ labelled $1$ (that is, the smallest element of $P$ with respect to $g$), and do the following loop: While there exists an element of $P$ covering $u$: let $v$ be the smallest (with respect to $g$) among the elements of $P$ covering $u$ (that is, the element $w$ of $P$ covering $u$ with smallest $g\left(w\right)$); slide the label of $v$ down to $u$ (that is, set $g\left(u\right)$ to be $g\left(v\right)$, accepting that $g$ will temporarily fail to be injective); set $u = v$. Endwhile. After the end of this loop, label $u$ with $p+1$ (that is, set $g\left(u\right) = p+1$), and then subtract $1$ from each label (i.e., replace $g$ by $g-\mathbf{1}$, where $\mathbf{1}$ is the constant function $P\to\mathbb Z,\ v\mapsto 1$). The resulting $g$ is called the promotion of $f$, and denoted by $\partial f$. (It is more common to call it $f\partial$, so that $\partial$ is seen as a map acting from the right.) Definition of Striker-Williams promotion: Let $P$ be a finite poset. Let $J\left(P\right)$ denote the set of all order ideals of $P$. For every $p\in P$, define a map $t_p : J\left(P\right) \to J\left(P\right)$ as follows: Let $I \in J\left(P\right)$. If $I \bigtriangleup \left\lbrace p\right\rbrace$ (with $\bigtriangleup$ standing for "symmetric difference") is an order ideal of $P$, set $t_p\left(I\right) = I \bigtriangleup \left\lbrace p\right\rbrace$. Otherwise, set $t_p\left(I\right) = I$.
Let $\mathbb Z^2_{\operatorname*{ev}}$ denote the $\mathbb Z$-submodule of $\mathbb Z^2$ spanned by $\left(1,1\right)$ and $\left(2,0\right)$. In other words, let $\mathbb Z^2_{\operatorname*{ev}}$ be the set of all $\left(x,y\right)\in\mathbb Z^2$ for which $x+y$ is even. Now, let $P$ be a finite rc-poset; this means a poset along with a map $\pi : P \to \mathbb Z^2_{\operatorname*{ev}}$ such that whenever an element $p_1$ of $P$ covers an element $p_2$ of $P$, we have $\pi\left(p_1\right)-\pi\left(p_2\right) \in \left\lbrace \left(-1,1\right), \left(1,1\right) \right\rbrace$. (See §4.2 of Striker-Williams for some good pictures of what this means.) For every $p\in P$, let $\pi_1\left(p\right)$ denote the first coordinate of $\pi\left(p\right)$. Now, consider the composition of the maps $t_p$ in decreasing order of $\pi_1\left(p\right)$ (the relative order of the $t_p$ for distinct $p$ having the same $\pi_1\left(p\right)$ does not matter). This composition is Striker-Williams promotion.
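The toggles $t_p$ and their composition are easy to implement directly from the definition. The sketch below is my own illustration on a hypothetical 2-chain rc poset (it is not code from Striker-Williams):

```python
# A sketch of Striker-Williams toggles on order ideals (my own illustration).
# The poset is given by its order relation as pairs (a, b) meaning a <= b,
# and the rc-embedding pi maps each element to a point of Z^2.

def is_order_ideal(subset, le_pairs):
    # Downward closed: if b is in the subset and a <= b, then a is in it too.
    return all(a in subset for (a, b) in le_pairs if b in subset)

def toggle(p, ideal, le_pairs):
    """The map t_p: symmetric difference with {p} if the result is an ideal."""
    candidate = ideal ^ frozenset([p])
    return candidate if is_order_ideal(candidate, le_pairs) else ideal

def sw_promotion(ideal, le_pairs, pi):
    """Compose the toggles t_p in decreasing order of the first coordinate
    of pi (the relative order within one column does not matter)."""
    for p in sorted(pi, key=lambda q: pi[q][0], reverse=True):
        ideal = toggle(p, ideal, le_pairs)
    return ideal

# Hypothetical example: the 2-chain a < b, drawn with pi(a)=(0,0), pi(b)=(1,1).
le = {('a', 'a'), ('b', 'b'), ('a', 'b')}
pi = {'a': (0, 0), 'b': (1, 1)}
orbit = [frozenset()]
for _ in range(3):
    orbit.append(sw_promotion(orbit[-1], le, pi))
print(orbit)  # the three order ideals form one orbit: {}, {a}, {a,b}, {}
```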
Let me rephrase the question (and Ilya's answer). Given an arbitrary collection $X_i$ of schemes, is the functor (on affine schemes, say) $Y \mapsto \prod_i Hom(Y, X_i)$ representable by a scheme? If the $X_i$ are all affine, the answer is yes, as explained in the statement of the question. More generally, any filtered inverse system of schemes with essentially affine transition maps has an inverse limit in the category of schemes (this is in EGA IV.8). The topology in that case is the inverse limit topology, by the way. It is easy to come up with examples of infinite products of non-separated schemes that are not representable by schemes. This is because any scheme has a locally closed diagonal. In other words, if $Y \rightrightarrows Z$ is a pair of maps of schemes then the locus in $Y$ where the two maps coincide is locally closed in $Y$. Suppose $Z$ is the affine line with a doubled origin. Every distinguished open subset of an affine scheme $Y$ occurs as the locus where two maps $Y \rightrightarrows Z$ agree. Let $X = \prod_{i = 1}^\infty Z$. Every countable intersection of distinguished open subsets of $Y$ occurs as the locus where two maps $Y \rightarrow X$ agree. Not every countable intersection of open subsets is locally closed, however, so $X$ cannot be a scheme. Since the diagonal of an infinite product of separated schemes is closed, a more interesting question is whether an infinite product of separated schemes can be representable by a scheme. Ilya's example demonstrates that the answer is no. Let $Z = \mathbf{A}^2 - 0$. This represents the functor that sends $Spec A$ to the set of pairs $(x,y) \in A^2$ generating the unit ideal. The infinite product $X = \prod_{i = 1}^\infty Z$ represents the functor sending $A$ to the set of infinite collections of pairs $(x_i, y_i)$ generating the unit ideal. Let $B$ be the ring $\mathbf{Z}[x_i, y_i, a_i, b_i]_{i = 1}^\infty / (a_i x_i + b_i y_i = 1)$. There is an obvious map $Spec B \rightarrow X$. 
Any (nonempty) open subfunctor $U$ of $X$ determines an open subfunctor of $Spec B$, and this must contain a distinguished open subset defined by the invertibility of some $f \in B$. Since $f$ can involve at most finitely many of the variables, the open subset determined by $f$ must contain the pre-image of some open subset $U'$ in $\prod_{i \in I} Z$ for some finite set $I$. Let $I'$ be the complement of $I$. If we choose a closed point $t$ of $U'$ then $U$ contains the pre-image of $t$ as a closed subfunctor. Since the pre-image of $t$ is $\prod_{i \in I'} Z \cong X$ this shows that any open subfunctor of $X$ contains $X$ as a closed subfunctor. In particular, if $X$ is a scheme, any non-empty open affine contains a scheme isomorphic to $X$ as a closed subscheme. A closed subscheme of an affine scheme is affine, so if $X$ is a scheme it is affine. Now we just have to show $X$ is not an affine scheme. It is a subfunctor of $W = \prod_{i = 1}^\infty \mathbf{A}^2$, so if $X$ is an affine scheme, it is locally closed in $W$. Since $X$ is not contained in any closed subset of $W$ except $W$ itself, this means that $X$ is open in $W$. But then $X$ can be defined in $W$ using only finitely many of the variables, which is impossible. Edit: Laurent Moret-Bailly pointed out in the comments below that my argument above for this last point doesn't make sense. Here is a revision: Suppose to the contrary that $X$ is an affine scheme. Then the morphism $p : X \rightarrow X$ that projects off a single factor is an affine morphism. If we restrict this map to a closed fiber then we recover the projection from $Z$ to a point, which is certainly not affine. Therefore $X$ could not have been affine in the first place.
I'm trying to prove that $L = \{ w : w \neq w^R \}$ over $\Sigma = \{0,1\}$ is context-free. Define $G = (\{S,T\}, \Sigma, R, S)$ where $R$ consists of $S \to 0S0 \mid 1S1 \mid 0T1 \mid 1T0$ and $T \to 0T \mid 1T \mid \varepsilon$. Now I want to show that $\mathcal{L}(G) = L$. One direction is fine: given $w\in \mathcal{L}(G)$, the derivation must have used $S \to 0T1$ or $S \to 1T0$ at some point, and therefore $w \neq w^R$. But given $w \in L$, all I know is $w \neq w^R$. How can I prove $w\in \mathcal{L}(G)$?
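Not an answer, but a quick empirical sanity check that can build confidence before attempting the proof: enumerate all derivations of $G$ up to a fixed length and compare with $L$ directly. This is my own sketch; the length bound of 7 is arbitrary:

```python
# Empirical check (my own sketch) that L(G) and L = {w : w != w^R}
# agree on all strings up to a given length, by enumerating derivations.
from functools import lru_cache
from itertools import product

def gen_T(maxlen):
    # T -> 0T | 1T | eps derives every string over {0,1} (up to maxlen here).
    return {''.join(w) for n in range(maxlen + 1)
            for w in product('01', repeat=n)}

@lru_cache(maxsize=None)
def gen_S(maxlen):
    """All terminal strings of length <= maxlen derivable from S."""
    if maxlen < 2:
        return frozenset()
    out = set()
    for t in gen_T(maxlen - 2):   # S -> 0T1 | 1T0
        out.add('0' + t + '1')
        out.add('1' + t + '0')
    for s in gen_S(maxlen - 2):   # S -> 0S0 | 1S1
        out.add('0' + s + '0')
        out.add('1' + s + '1')
    return frozenset(out)

N = 7
language = set(gen_S(N))
all_words = {''.join(q) for n in range(N + 1) for q in product('01', repeat=n)}
target = {w for w in all_words if w != w[::-1]}
print(language == target)  # True for all strings of length <= 7
```

The check also suggests the proof idea: for $w \in L$, look at the first position $i$ where $w$ and $w^R$ differ; the prefix before $i$ mirrors the suffix, which is exactly what repeated applications of $S \to 0S0 \mid 1S1$ followed by one $S \to 0T1 \mid 1T0$ produce.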
Bespectacled Eyeballs Extension What is this about? Problem Circle $O(I)$ and $O(J)$ are centered at $O$ and are through $I$ and $J,$ respectively. Circle $Q(K)$ and $Q(L)$ are centered at $Q$ and are through $K$ and $L,$ respectively. $OL$ and $OL'$ are tangent to $Q(L);$ $QI$ and $QI'$ are tangent to $O(I).$ Further, $J=OL\cap O(J),$ $G=OL'\cap O(J),$ $K=QI\cap Q(K),$ $H=QI'\cap Q(K).$ Also, $\displaystyle\frac{OI}{OJ}=\frac{QL}{QK}.$ Prove that $GHKJ$ is a rectangle. Solution Observe that when the given ratio is $1$ the problem reduces to the Eyeball theorem. Define $J'=OL\cap O(I),$ $G'=OL'\cap O(I),$ $K'=QI\cap Q(L),$ $H'=QI'\cap Q(L).$ Then, by the Eyeball theorem, $G'H'K'J'$ is a rectangle. From $\displaystyle\frac{OI}{OJ}=\frac{QL}{QK}$ we also have $\displaystyle\frac{J'G'}{JG}=\frac{K'H'}{KH}$ while, due to symmetry, $J'G'\parallel JG$ and $K'H'\parallel KH.$ This implies that $GHKJ$ is also a rectangle. (This is an example of a problem in which a particular case implies the more general one.) Acknowledgment The above has been submitted by Dao Thanh Oai (Vietnam) in private communication. Related material Problems with Ophthalmological Connotations
Dynamical Models of the Excitations of Nucleon Resonances Abstract The development of a dynamical model for investigating the nucleon resonances using the reactions of meson production from $\pi N$, $\gamma N$, $N(e,e')$, and $N(\nu,l)$ reactions is reviewed. The results for the $\Delta(1232)$ state are summarized and discussed. The progress in investigating higher mass nucleon resonances is reported. Authors: T. Sato; T.-S. H. Lee Affiliations: Argonne National Lab. (ANL), Argonne, IL (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States) Publication Date: 2009 Research Org.: Thomas Jefferson National Accelerator Facility, Newport News, VA (United States) Sponsoring Org.: USDOE OSTI Identifier: 979699 Report Number(s): JLAB-THY-09-946; DOE/OR/23177-0635 Journal ID: ISSN 0954-3899; TRN: US1003377 Grant/Contract Number: AC05-06OR23177 Resource Type: Accepted Manuscript Journal Name: Journal of Physics. G, Nuclear and Particle Physics Additional Journal Information: Journal Volume: 36; Journal Issue: 7 Publisher: IOP Publishing Country of Publication: United States Language: English Subject: 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS; MESONS; NUCLEONS; PRODUCTION Citation T. Sato and T.-S. H. Lee, "Dynamical Models of the Excitations of Nucleon Resonances," Journal of Physics G: Nuclear and Particle Physics 36, 073001 (2009). doi:10.1088/0954-3899/36/7/073001. https://www.osti.gov/servlets/purl/979699 Citation information provided by Web of Science
To enable the calculation of the stress tensor you have to add stress to the compilation options of potfit. The stress tensor has to be given in the configuration file with the prefix #S. If no #S line is found for a configuration, then the calculation of stresses is disabled for this configuration. potfit and IMD calculate the virial stress, which is based on a generalization of the virial theorem of Clausius [1] for gas pressure. It can be written as $$\boldsymbol\sigma^{\text{virial}} = \frac{1}{2\Omega}\sum_i\sum_{j\neq i}\boldsymbol r_{ij}\otimes\boldsymbol f_{ij}$$ where $\Omega$ is the total volume, $i$ and $j$ are the atomic indices, $\boldsymbol r_{ij}=\boldsymbol r_j-\boldsymbol r_i$, and $\boldsymbol f_{ij}$ is the interatomic force applied on atom $i$ by atom $j$, $$\boldsymbol f_{ij}=\frac{\partial\phi(r_{ij})}{\partial r_{ij}}\frac{\boldsymbol r_{ij}}{r_{ij}}.$$ The deviations of the stress tensor components are added to the target function like the forces and energies, $$Z_F(\boldsymbol\xi)= \sum_{k=1}^{3N}u_k(F_k(\boldsymbol\xi)-F_k^0)^2 + \sum_{l=1}^{M}w_E(E_l(\boldsymbol\xi)-E_l^0)^2 + \sum_{l=1}^{6M}w_S(S_l(\boldsymbol\xi)-S_l^0)^2$$ with the stress weight $w_S$. 1. Clausius R. J. E.: On a mechanical theorem applicable to heat. Phil. Mag. 40 122–7, 1870
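To illustrate the virial formula above, here is a standalone sketch (my own, not potfit code) that evaluates the pairwise sum for a tiny cluster with an assumed Lennard-Jones pair potential; the parameters and positions are arbitrary:

```python
# Standalone illustration of the virial stress sum (not potfit code).
# The Lennard-Jones potential and its parameters are illustrative assumptions.
import numpy as np

def lj_force(rij, epsilon=1.0, sigma=1.0):
    """Pairwise force following the convention in the text, with r_ij = r_j - r_i:
    f_ij = dphi/dr * r_ij / r, for phi(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    r = np.linalg.norm(rij)
    dphi_dr = 4.0 * epsilon * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)
    return dphi_dr * rij / r

def virial_stress(positions, volume):
    """sigma = 1/(2*Omega) * sum_i sum_{j != i} r_ij (outer product) f_ij."""
    stress = np.zeros((3, 3))
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[j] - positions[i]
            stress += np.outer(rij, lj_force(rij))
    return stress / (2.0 * volume)

pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.2, 0.3]])
s = virial_stress(pos, volume=100.0)
# Each term is proportional to r_ij (outer) r_ij, so the tensor is symmetric.
print(np.allclose(s, s.T))  # True
```

Note that only 6 of the 9 components are independent, which is why the target function above sums over $6M$ stress entries.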
What is the reason, historically, that the letter $m$ is used to denote the slope of a line? According to Wolfram MathWorld, there's no consensus. Some think it may have come from the French monter, meaning to climb, but this is just speculation $-$ it's likely just a trend that caught on. The article I linked to contains a greater elaboration and some examples of where it's not used. Not a reason, but a lovely coincidence is that the higher dimensional analogue of a slope is a matrix. $$ \vec{y} = M(\vec{x}) $$ would be the higher dimensional analogue of $$ y = mx $$ where $\vec{y} \in \mathbb{R}^m$, $\vec{x} \in \mathbb{R}^n$, and $M$ is an $m \times n$ matrix. So I like to think of the "m" as standing for "matrix". It is not known why the letter m was chosen for slope; the choice may have been arbitrary. John Conway has suggested m could stand for "modulus of slope." One high school algebra textbook says the reason for m is unknown, but remarks that it is interesting that the French word for "to climb" is monter. However, there is no evidence to make any such connection. Descartes, who was French, did not use m. In Mathematical Circles Revisited (1971), mathematics historian Howard W. Eves suggests "it just happened."
The differential equation $y''+\omega^2 y=0$ has the general solution $$y=A\cos{(\omega t)}+B\sin{(\omega t)}$$ By taking $$A=R\cos{(\omega t_0)}$$ and $$B=R\sin{(\omega t_0)}$$ we can rewrite the general solution as $$y=R\cos{(\omega(t-t_0))}$$ However, this is also a solution: $$y=\alpha\cos(\omega(t-t_0))+\beta\sin(\omega(t-t_0))$$ QUESTION: I understand we can rewrite this last solution, $y=\alpha\cos(\omega(t-t_0))+\beta\sin(\omega(t-t_0))$, in the form $y=A\cos{(\omega t)}+B\sin{(\omega t)}$ (using some trig identities). But... Can we also rewrite $y=\alpha\cos(\omega(t-t_0))+\beta\sin(\omega(t-t_0))$ in the form $y=R\cos{(\omega(t-t_0))}$? If so, how?
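Not a full answer, but a numerical check of the natural candidate rewriting: take $R=\sqrt{\alpha^2+\beta^2}$ and a shifted time offset $t_1 = t_0 + \operatorname{atan2}(\beta,\alpha)/\omega$. This is my own sketch; all test values are arbitrary:

```python
# Numerical check (my own sketch) that the combination collapses to a single
# cosine with the same R-form but a *different* time offset t1.
import numpy as np

omega, t0 = 2.0, 0.7      # arbitrary test values
alpha, beta = 1.5, -0.8

# Candidate rewriting: alpha*cos(w(t-t0)) + beta*sin(w(t-t0)) = R*cos(w(t-t1))
R = np.hypot(alpha, beta)                     # R = sqrt(alpha^2 + beta^2)
t1 = t0 + np.arctan2(beta, alpha) / omega     # shifted offset

t = np.linspace(0.0, 10.0, 1001)
lhs = alpha * np.cos(omega * (t - t0)) + beta * np.sin(omega * (t - t0))
rhs = R * np.cos(omega * (t - t1))
print(np.allclose(lhs, rhs))  # True
```

The underlying identity is $\alpha\cos\theta+\beta\sin\theta = R\cos(\theta-\psi)$ with $R\cos\psi=\alpha$, $R\sin\psi=\beta$; substituting $\theta=\omega(t-t_0)$ absorbs $\psi$ into the time offset.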
I hope nobody minds that I exhume this question, but I found it interesting that it is possible to obtain this integral by a relatively straightforward contour integration method. Observe, following the question opener and using parity, that we can rewrite the integral as $$\frac{1}{2}\int^{\infty}_{-\infty}\frac{1}{1+t^2}\frac{1}{1+\sin^2(t)}\,dt$$ It is now easy to show that the poles are $$t_{\pm}=\pm i\\t_{n\pm}=\pi n\pm i\,\text{arcsinh}(1)$$ so we have two isolated poles and the rest lie on two straight lines parallel to the real axis. Because the integrand, interpreted as a complex function, vanishes as $|z|\rightarrow\infty$, we can choose a semicircle closed in the upper half plane as an integration contour. We find $$I=\pi i\sum_{n=-\infty}^{\infty}\text{res}(t_{n+})+\pi i\, \text{res}(t_{+})$$ where the residues are given by$$\text{res}(t_{+})=\frac{i}{2}\frac{1}{ \sinh^2(1)-1}\\\text{res}(t_{n+})=\frac{-i}{2\sqrt{2}}\frac{1}{1+(n \pi+i\, \text{arcsinh}(1))^2}$$ Therefore the integral reduces to the following sum $$I=\frac{\pi}{2\sqrt{2}} \sum_{n=-\infty}^{\infty} \frac{1}{1+(n \pi+i\, \text{arcsinh}(1))^2} -\frac{\pi}{2}\frac{1}{ \sinh^2(1)-1}$$ Using a partial fraction decomposition together with the Mittag-Leffler expansion of $\coth(x)$, this can be rewritten as $$I=\frac{\pi}{4\sqrt{2}} \sum_{n=-\infty}^{\infty}\left( \frac{-i}{-i+n \pi+ i\,\text{arcsinh}(1)}+ \frac{i}{i+n \pi+ i\,\text{arcsinh}(1)}\right)-\frac{\pi}{2}\frac{1}{ \sinh^2(1)-1}=\\\frac{\sqrt{2} \pi}{8} \left( \coth \left(1-\text{arcsinh}(1)\right)+ \coth \left(1+\text{arcsinh}(1)\right)\right)-\frac{\pi}{2}\frac{1}{ \sinh^2(1)-1}\\$$ Or $$I\approx 1.16353$$ which matches the claimed result. One can also compute this explicitly, noting that $\text{arcsinh}(1)=\log(1+\sqrt{2})$ (*).
But this is rather tedious so I just leave this step to the reader and conclude that$$I=\frac{\pi}{2\sqrt{2}}\,\frac{e^2+3-2\sqrt{2}}{e^2-3+2\sqrt{2}}$$ Appendix Just to give some details of the last part of the calculations: Using (*), the part stemming from the sum is $$\frac{\pi}{4\sqrt{2}}\left(\frac{ \frac{1+\sqrt{2}}{e}+\frac{e}{1+\sqrt{2}}}{ \frac{e}{1+\sqrt{2}}-\frac{1+\sqrt{2}}{e}}+\frac{\left(1+\sqrt{2}\right)e+\frac{1}{\left(1+\sqrt{2}\right) e} }{\left(1+\sqrt{2}\right) e-\frac{1}{\left(1+\sqrt{2}\right) e}}\right)=\\\frac{\left(e^4-1\right) \pi }{2 \sqrt{2} \left(1-6 e^2+e^4\right)}$$ The part of the single pole gives $$\frac{\pi }{2 \left(\left(\frac{e}{2}-\frac{1}{2 e}\right)^2-1\right)}=\frac{2 e^2 \pi }{1-6 e^2+e^4}$$ Combining both terms (the single-pole part enters with a minus sign) and factorizing then yields the desired result
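The closed form can also be checked numerically. The sketch below is my own: it integrates up to $200\pi$ with a dense trapezoidal rule and approximates the tail by replacing $1/(1+\sin^2 t)$ with its period average $1/\sqrt{2}$:

```python
# Numerical sanity check of the closed form (my own sketch).
import numpy as np

def f(t):
    # Integrand of I = \int_0^\infty dt / ((1+t^2)(1+sin^2 t))
    return 1.0 / ((1.0 + t * t) * (1.0 + np.sin(t) ** 2))

# Trapezoidal rule up to T = 200*pi, plus a tail estimate obtained by
# replacing 1/(1+sin^2 t) by its average over a period, 1/sqrt(2).
T = 200 * np.pi
t = np.linspace(0.0, T, 2_000_001)
y = f(t)
h = t[1] - t[0]
I_num = h * (y.sum() - 0.5 * (y[0] + y[-1]))
I_num += (np.pi / 2 - np.arctan(T)) / np.sqrt(2)

e2 = np.exp(2.0)
I_closed = np.pi / (2 * np.sqrt(2)) * (e2 + 3 - 2 * np.sqrt(2)) / (e2 - 3 + 2 * np.sqrt(2))
print(abs(I_num - I_closed) < 1e-3)  # True; both are approximately 1.16353
```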
Since the force is central, angular momentum is conserved, and so the angular motion can be solved in terms of the radial motion. Using the conventions of the spinless-earth starting point you mention, that is $\omega^2=g/R$, for $R$ the radius thereof, we see the earth's motion is a small perturbation, as the 24 hr period is so much larger than the canonical $2\pi/\omega=84.5$ mins. In fact, the centrifugal barrier will eject the neutrino ball so the effective period is more like half the spinless case. It will come up ahead of where it was released---it will have effectively spun faster than the earth. The conserved angular momentum is $L=m r^2 \dot{\theta}$, which it pays to re-write in terms of a dimensionless variable $a$. The vanishing-$a$ limit is the spinless earth, while $a=2$ corresponds to the notional earth spinning so fast as to give the neutrino ball the escape velocity. The particle's mass $m$ is a canard that factors out.$$L\equiv \sqrt{a}\, m \omega R^2 .$$Thus, given $r(t)$, integrating $\dot{\theta}=L/(mr^2)= \sqrt{a} \omega R^2/r(t)^2$ completely specifies the angular motion. For the "real" earth, $\dot{\theta}_R/\omega=\sqrt{a}\sim 0.06$, small. In these variables, the energy of the ball for $r\leq R$ (only!) is $$E=\frac{m\dot{r}^2}{2}+\frac{mg}{2R} r^2+\frac{m a\omega^2R^4}{2r^2} .$$ The equation of motion, in terms of the dimensionless $x = r/R$, is then the nonlinear Ermakov equation$$\ddot{x} = -\omega^2 x + \frac{a \omega^2}{x^3},$$whose first integral is $~~\dot{x}^2/\omega^2+x^2+a/x^2=2E/(\omega^2 mR^2)=1+a$. Consider the solution $$x=\sqrt{\sin^2(\omega t +\pi/2) + a/(1-a) } ~~ \sqrt{1-a} .$$It has a suitable $a\to 0$ limit, the spinless oscillation $r=R\sin (\omega t +\pi/2)$, and $x(t=0)=1$. Nevertheless, it will have exactly the same 84.5 min period as the spinless case, since $a$ and $\omega$ are independent.
It should not be a surprise that the periodic motion amounts to a closed elliptic orbit, of course: it is just an orbit in a harmonic potential, until surfacing (and splicing into a Kepler potential...). It is also apparent that the neutrino ball will never get to the center, as the minimum $r$ will be $\sim R \sqrt{a}$ for small $a$, the effective centrifugal barrier. For the "real" earth, this perigee would be ~0.06 R. Perhaps somebody with graphics juices and time to lavish could plot the integrated $r$ - $\theta$ orbits viewed from the North pole. Note the enormous eccentricity: $e\sim 1-2 \sqrt{a}$ !
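The closed-form solution can be checked against direct numerical integration of the Ermakov equation. The sketch below is my own (a simple RK4 integrator in units where $\omega=1$; the value $a=0.0036$, i.e. $\sqrt{a}=0.06$, mimics the "real earth" figure quoted above):

```python
# Verify the closed-form Ermakov solution by direct integration (my own sketch).
import numpy as np

omega, a = 1.0, 0.0036  # units with omega = 1; sqrt(a) = 0.06 as in the text

def accel(x):
    # Ermakov equation: x'' = -omega^2 x + a omega^2 / x^3
    return -omega**2 * x + a * omega**2 / x**3

def rk4(x0, v0, dt, steps):
    """Classic fourth-order Runge-Kutta for the second-order ODE."""
    x, v = x0, v0
    xs = [x]
    for _ in range(steps):
        k1x, k1v = v, accel(x)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        xs.append(x)
    return np.array(xs)

dt, steps = 2e-4, 50_000
ts = np.arange(steps + 1) * dt
numeric = rk4(1.0, 0.0, dt, steps)              # x(0) = 1, x'(0) = 0
closed = np.sqrt((1 - a) * np.cos(omega * ts)**2 + a)  # solution from the text
print(np.max(np.abs(numeric - closed)) < 1e-6)  # True
```

The minimum of the numerical trajectory also lands on the perigee $x_{\min}=\sqrt{a}=0.06$ claimed above.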
Polarization¶ Spontaneous polarization of ferroelectric BaTiO3 Introduction¶ Ferroelectric (FE) materials have a spontaneous electric polarization that can be reversed by the application of an external electric field. FE materials find applications in capacitors, ferroelectric random access memory (RAM), and more recently in ferroelectric tunnel junctions (FTJ) displaying giant electroresistance effects [1] [2]. One of the most studied FE materials is barium titanate (BaTiO3), which is the topic for this tutorial. Before continuing with the calculations, let us briefly summarize some central theoretical concepts first. Modern theory of polarization¶ The theoretical understanding of FE materials is described by the so-called modern theory of polarization [3]. It is common to divide the polarization of a material into electronic and ionic parts. The latter is calculated using a simple classical electrostatic sum of point charges, \(\mathbf{P}_i = \frac{|e|}{\Omega}\sum_\nu Z^\nu_\mathrm{ion}\,\mathbf{r}^\nu,\) where \(Z^\nu_\mathrm{ion}\) and \(\mathbf{r}^\nu\) are the valence charge and position vector of atom \(\nu\), \(\Omega\) is the unit cell volume, and the sum runs over all ions in the unit cell. The electronic contribution to the polarization is obtained as [3] \(\mathbf{P}_e = -\frac{2|e|}{(2\pi)^3}\sum_n \int \mathrm{d}^2 k_\perp \,\mathrm{Im}\int_0^{G_\parallel} \mathrm{d}k_\parallel\, \langle u_{\mathbf{k},n}|\partial_{k_\parallel}|u_{\mathbf{k},n}\rangle,\) where the sum runs over occupied bands, \(k_\parallel\) is parallel to the direction of polarization, and \(G_\parallel\) is a reciprocal lattice vector in the same direction. The states \(|u_{\mathbf{k},n}\rangle\) are the cell-periodic parts of the Bloch functions, \(\psi_{\mathbf{k},n}(\mathbf{r}) = u_{\mathbf{k},n}(\mathbf{r})e^{i\mathbf{k}\cdot\mathbf{r}}\). The last integral is known as the Berry phase. The integral over the perpendicular directions can easily be converged with a small number of k-points. The number of k-points in the parallel direction should be larger, however. The total polarization is simply the sum of the electronic and ionic contributions, \(\mathbf{P} = \mathbf{P}_e + \mathbf{P}_i.\) An important finding in Ref. [3] was that the polarization is a multivalued quantity, and in fact forms a lattice.
The reason is that the electronic polarization \(\mathbf{P}_e\) is determined by the Berry phase, which is only defined modulo \(2\pi\). Likewise, the ionic contribution \(\mathbf{P}_i\) would attain a different value if all ionic positions were displaced by a lattice constant in either direction. The polarization is thus a periodic function, and the period is called the polarization quantum, \(\mathbf{P}_q^j=\frac{|e|\mathbf{R}^j}{\Omega}\), where \(|e|\) is the electronic charge, \(\mathbf{R}^j\) is the lattice vector \(j\), and \(\Omega\) is the unit cell volume. Given the multivalued nature of the polarization, it is perhaps not surprising that only differences in polarization, \(\Delta \mathbf{P}\), between two different structures are well-defined quantities. ATK computes and reports the electronic and ionic contributions separately, and also reports the polarization quantum.

Important

Note that the implementation does not work for metallic systems, and orthogonal cells should be preferred when possible. Usage in 2D systems and with non-orthogonal unit cells requires thorough testing of the settings and results.

Spontaneous polarization of ferroelectric BaTiO3¶

The BaTiO3 crystal structure¶

Barium titanate (BaTiO3) has a tetragonal crystal structure at room temperature, where the unit cell is slightly elongated in the c-direction. An internal stress further shifts the fractional coordinates in the c-direction away from their high-symmetry positions. In this tutorial we use the experimental lattice constants and coordinates as obtained from the Inorganic Crystal Structure Database (ICSD) [4]. The structure is given in the QuantumATK format below:

# Set up lattice
lattice = SimpleTetragonal(3.9945*Angstrom, 4.0335*Angstrom)

# Define elements
elements = [Barium, Titanium, Oxygen, Oxygen, Oxygen]

# Define coordinates
fractional_coordinates = [[ 0.  , 0.  , 0.       ],
                          [ 0.5 , 0.5 , 0.51427  ],
                          [ 0.5 , 0.5 , 0.974477 ],
                          [ 0.5 , 0.  , 0.487618 ],
                          [ 0.  , 0.5 , 0.487618 ]]

# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )

Setting up the calculation¶

In this section you will set up a DFT calculation using the local density approximation (LDA) for the BaTiO3 crystal and calculate the polarization. You will use QuantumATK for the calculation, and it is recommended that you go through the Basic QuantumATK Tutorial to become familiar with the basic work flow.

Start up QuantumATK and create a new project for this tutorial, using a new, empty directory. Select the text for the BaTiO3 structure in the Python script above and drag it onto the Script Generator icon. The tool will interpret the script and open up with the imported geometry.

Tip

Alternatively, you can save the script to a file in the project directory and drag and drop the file onto the Script Generator from the QuantumATK main window.

Next do the following steps:

Double-click the New Calculator block in the “Script” panel to open the calculator widget. Set the k-point sampling to 5x5x5; the other default settings are fine.

The next step is to adjust the settings for the polarization analysis. Double-click the Polarization block. Increase the number of k-points on the diagonal to 20. This is the number of k-points along the lines of integration and needs to be relatively high. You should always check for convergence by comparing calculations with different numbers of k-points. The other k-points with values of 5 correspond to the number of transverse k-points used for averaging over the Brillouin zone. The polarization values converge relatively fast with respect to the number of transverse k-points, and we thus use the default value.

You have now finished the script setup. Save the script as “BaTiO3_lda.py”. Send the script to the Job Manager and run the job. After a few minutes the calculation has finished and you can inspect the results.
Analyzing the results¶

To inspect the calculated polarization reported in the log file, scroll down to the end of the log file, where you will find a report as shown below.

+------------------------------------------------------------------------------+
| Polarization                                                                 |
+------------------------------------------------------------------------------+
| Electronic fractional polarization.                                          |
| Values wrapped to the interval [-0.5,0.5]                                    |
|        [ -1.25164671e-15 ]                                                   |
| Pe=    [ -6.42868666e-16 ]                                                   |
|        [ -4.71901310e-01 ]                                                   |
+------------------------------------------------------------------------------+
| Ionic fractional polarization.                                               |
| Values wrapped to the interval [-0.5,0.5]                                    |
|        [  0.00000000e+00 ]                                                   |
| Pi=    [  0.00000000e+00 ]                                                   |
|        [ -2.44642000e-01 ]                                                   |
+------------------------------------------------------------------------------+
| Total fractional polarization. Pt = Pe + Pi.                                 |
| Values wrapped to the interval [-0.5,0.5]                                    |
|        [ -1.25164671e-15 ]                                                   |
| Pt=    [ -6.42868666e-16 ]                                                   |
|        [  2.83456690e-01 ]                                                   |
+------------------------------------------------------------------------------+
| Total cartesian polarization.                                                |
|        [ -1.24465114e-15 ]                                                   |
| Pt=    [ -6.39275613e-16 ] C/Meter**2                                        |
|        [  2.84624464e-01 ]                                                   |
+------------------------------------------------------------------------------+
| Polarization quantum.                                                        |
|        [  9.94410906e-01 ]                                                   |
| Pq=    [  9.94410906e-01 ] C/Meter**2                                        |
|        [  1.00411976e+00 ]                                                   |
+------------------------------------------------------------------------------+

Tip

The results can also be inspected by selecting the polarization object in the file “BaTiO3_lda.hdf5” on the LabFloor and clicking Show Text Representation...

Notes¶

The output contains five calculated quantities:

First, the electronic fractional polarization \(\mathbf{P}_e\) is calculated from the Berry phase obtained from the occupied bands, as described in Modern theory of polarization. The three values correspond to the x, y, and z directions.
The second quantity, \(\mathbf{P}_i\), is the purely ionic fractional polarization, \(\mathbf{P}_i = \sum_j Z_j^{ion}\tau_j\), where \(Z_j^{ion}\) and \(\tau_j\) are the valence charge and fractional coordinate of atom \(j\).

The third quantity, \(\mathbf{P}_t\), is the total fractional polarization, which is the sum of the electronic and ionic parts. As discussed in Modern theory of polarization, the polarization is a multivalued quantity, and therefore all fractional polarizations are wrapped to the interval [-0.5,0.5], which explains the sign change of the polarization in the z-direction: the sum \(\mathbf{P}_e(z) + \mathbf{P}_i(z) = -0.717\) is outside the range and is thus wrapped by adding a fractional polarization quantum (equal to 1), i.e. \(\mathbf{P}_t(z) = -0.717 + 1 = 0.283\).

The fourth quantity is the total polarization \(\mathbf{P}_t\) in Cartesian coordinates, expressed in units of C/m2.

The fifth quantity is the polarization quantum \(\mathbf{P}_q\) introduced in Modern theory of polarization. Note that \(\mathbf{P}_t\) is small compared to \(\mathbf{P}_q\).

According to the modern theory of polarization [3], only the difference in polarization between two configurations is a well-defined property. In order to calculate the spontaneous polarization of tetragonal BaTiO3, it is thus necessary to also compute the polarization of the centrosymmetric, undistorted structure given below:

# Set up lattice
lattice = SimpleTetragonal(3.9945*Angstrom, 4.0335*Angstrom)

# Define elements
elements = [Barium, Titanium, Oxygen, Oxygen, Oxygen]

# Define coordinates
fractional_coordinates = [[ 0.0 , 0.  , 0.  ],
                          [ 0.5 , 0.5 , 0.5 ],
                          [ 0.5 , 0.5 , 1.0 ],
                          [ 0.5 , 0.  , 0.5 ],
                          [ 0.0 , 0.5 , 0.5 ]]

# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )

You may repeat the above steps to calculate the polarization for this structure.
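The wrapping and the fractional-to-Cartesian conversion can be checked directly against the log-file numbers. A minimal plain-Python sketch (not part of the tutorial scripts; it assumes that, for this orthogonal cell, each Cartesian component equals the fractional component times the corresponding polarization quantum):

```python
def wrap(p):
    """Wrap a fractional polarization component to the interval [-0.5, 0.5]."""
    return (p + 0.5) % 1.0 - 0.5

# z-components taken from the log file above
Pe_z = -4.71901310e-01   # electronic fractional polarization
Pi_z = -2.44642000e-01   # ionic fractional polarization
Pq_z =  1.00411976       # polarization quantum, C/m^2

Pt_z = wrap(Pe_z + Pi_z)        # -0.7165... wraps to 0.2835...
Pt_z_cartesian = Pt_z * Pq_z    # Cartesian value in C/m^2

print(Pt_z, Pt_z_cartesian)
```

Both numbers reproduce the log-file values, 2.83456690e-01 and 2.84624464e-01, to the printed precision.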
The results are that all polarization components are zero. The spontaneous polarization of tetragonal BaTiO3 thus corresponds to the values reported above for the distorted structure. The calculated value for the total Cartesian polarization in the z-direction, \(\mathbf{P}_t(z)=0.284\) C/m2, compares well with the experimental value of 0.26 C/m2 [5].

References¶

[1]
[2]
[3]
[4]
[5]
Resistivity calculations using the MD-Landauer method¶

Version: 2017.0

The bulk resistivity is an intrinsic property quantifying how strongly a material disrupts a flux of electronic current. In pristine metals, the main cause of the resistivity at room temperature is electron-phonon interaction, i.e., electron scattering due to the interaction with the vibrations of the lattice. The higher the temperature, the larger the vibrational amplitude of the lattice and hence the scattering probability, leading to a linear increase of the resistivity with temperature.

This case study introduces the MD-Landauer method, a conceptually simple methodology for calculating the temperature-dependent resistivity based on combined molecular-dynamics (MD) and electronic-transport calculations within the framework of the Landauer formalism (see the tutorial Transport calculations with QuantumATK). Despite its simplicity, the MD-Landauer method has several advantages over other, more rigorous methods for calculating bulk resistivities (see, e.g., Refs. [cBMO+02] and [cSPS+10], and also the tutorial Phonon-limited mobility in graphene using the Boltzmann transport equation), as it naturally takes into account anharmonic effects in the description of lattice vibrations, and is applicable to non-crystalline systems as well.

The case study consists of three sections, through which we reproduce some results for bulk gold recently reported in the paper “Electron-phonon scattering from Green’s function transport combined with Molecular Dynamics: Applications to mobility predictions” [cMPS+17]. Section 1. Theory and numerical procedure provides some theoretical and methodological details. Section 2. Calculation setup then describes how to set up an MD-Landauer calculation in QuantumATK. Finally, Sec. 3. Data analysis is devoted to explaining how to analyze the data files and compute the resistivity.
Note

The numerical parameters employed in the calculations below have been tuned to achieve a compromise between computational efficiency and accuracy. The results will hence differ slightly from those presented in Ref. [cMPS+17].

Note

QuantumWise case studies are primarily directed at experienced users of QuantumATK. Instructions are deliberately concise in order to focus mostly on the science.

1. Theory and numerical procedure¶

The figure above displays an illustrative example of a device configuration modeling bulk gold, for which the MD-Landauer calculations are performed. There are two semi-infinite electrodes and, in between, a central region which, importantly, consists of three parts: the MD region with length \(\mathcal{L}\), and a copy of the electrode on either side of it. An incoming flux of electrons from one of the electrodes interacts with phonons in the MD region, and is transmitted with a certain probability to the other electrode.

Assuming that the electrodes and their replicas always remain in equilibrium at temperature \(T\), MD calculations are performed by allowing the atoms to move only in the MD region for a few nanoseconds, starting with random initial velocities based on the Maxwell-Boltzmann distribution. At the end of each MD calculation, the transmission coefficient is computed as

\(\mathscr{T}[E;{\bf x}(T),\mathcal{L}] = {\rm Tr}\left[\Gamma_{\rm L}(E)\, G^{\rm r}[E;{\bf x}(T),\mathcal{L}]\, \Gamma_{\rm R}(E)\, G^{\rm a}[E;{\bf x}(T),\mathcal{L}]\right],\)

where \(G^{\rm r/a}[E;{\bf x}(T),\mathcal{L}]\) represents the retarded/advanced Green’s function, and \(\Gamma_{\rm L/R}(E)\) is the broadening (linewidth) function of the left/right electrode (see, e.g., Ref. [cDat97] for details of the derivation). Note that the Green’s functions, and thus the transmission coefficient, are functions of \(E\) and depend parametrically on the set of atomic displacement coordinates \({\bf x}(T)\) as well as on \(\mathcal{L}\).
The Landauer formula then provides the conductance as a function of \(\mathcal{L}\) and \(T\):

\(\mathscr{G}(\mathcal{L},T) = \frac{2e^2}{h}\int \mathrm{d}E \left(-\frac{\partial n_{\rm F}(E,\epsilon_{\rm F},T)}{\partial E}\right) \big\langle \mathscr{T}[E;{\bf x}(T),\mathcal{L}] \big\rangle,\)

where \(n_{\rm F}(E,\epsilon_{\rm F},T)\) represents the Fermi-Dirac distribution function with \(\epsilon_{\rm F}\) being the Fermi energy, and \(\langle\cdots\rangle\) means taking the average over a set of MD calculations. The resistance is defined as the inverse of the conductance, i.e.,

\(R(\mathcal{L},T) = \frac{1}{\mathscr{G}(\mathcal{L},T)} = R_{\rm c}(T) + \rho_{\rm 1D}(T)\,\mathcal{L},\)

where the second equality defines the one-dimensional (1D) resistivity \(\rho_{\rm 1D}(T)\) and the contact resistance \(R_{\rm c}(T)\). The bulk resistivity is then given by

\(\rho(T) = \rho_{\rm 1D}(T)\, A,\)

where \(A\) is the cross section of the device normal to the transport direction.

Dividing the procedure into three steps as follows will help clarify how to compute the 1D and bulk resistivities:

Calculation of the lattice geometries: For a given value of \(T\) and a set of values of \(\mathcal{L}\), several MD calculations with random initial velocities are performed. For each calculation, the last snapshot of the MD run is taken.

Calculation of the resistance: For each set of snapshots with a given length \(\mathcal{L}\), the averaged transmission coefficient is computed, which is then used to calculate the conductance \(\mathscr{G}(\mathcal{L},T)\) and the resistance \(R(\mathcal{L},T)\).

Calculation of the finite-temperature resistivity: The 1D and bulk resistivities at the temperature \(T\) are calculated.

The rest of this case study details each step, as well as the procedure for constructing a device configuration of bulk gold.

2. Calculation setup¶

2.1. Construction of the DeviceConfiguration¶

Tip

This subsection presents a detailed how-to guideline for preparing a device configuration of bulk gold, and finally provides a QuantumATK Python script, Au100_L3.py. To save time, one may just download the script and use it.

Then use the tool as follows to create a 3x3 Au[100] bulk supercell with a thickness of 10 layers:

Create a [100] surface by setting h=1 and k=l=0.

Define the surface lattice vectors by \({\bf v}_1=3{\bf u}_1\) and \({\bf v}_2=3{\bf u}_2\).
Select “Periodic and normal (electrode)” and set the thickness to 10 layers.

Finally, use the plugin to create a DeviceConfiguration. Set the length of both the right and left electrodes to the minimum value, 4.07825 Å. Save the device configuration as “Au100_L3.py”, where “L3” indicates \(\mathcal{L}=3\) in terms of the number of electrode repetitions.

Following a similar procedure, one can easily construct the device configurations for \(\mathcal{L}=4\) and \(5\), and save them with proper file names.

2.2. Setup of the script¶

In the section Global IO options, change the name of the output file to “Au100_L3_T300.nc”. The parameters in each block should then be specified as follows.

MD section¶

Open the New Calculator block, then select ATK-ForceField from the available calculators, and set “EAM_Au_Sheng_2011” [cSKC+11] in Parameter set.

Note

For QuantumATK versions older than 2017, the ATK-ForceField calculator can be found under the name ATK-Classical.

In the MolecularDynamics block, choose the Langevin thermostat [cGRdV+12] and set the rest of the MD parameters as shown below.

Important

In order to obtain a random distribution of initial velocities in the MD simulations, remember to set the initial velocity type to Maxwell-Boltzmann.

Landauer calculation section¶

Important

Remember to untick “No SCF iteration” to carry out the calculations self-consistently.

Note

The choice of the “Krylov” method is of particular importance, as it accelerates the convergence of the SCF calculations.

2.3. Edit the python script¶

By making full use of the GUI provided by QuantumATK, the device configuration has so far been constructed with the necessary calculators and analyzers attached.
The MD-Landauer method for computing the conductance, resistance, and resistivity, however, requires performing the calculation several times for each given pair of \(\mathcal{L}\) and \(T\), in order to obtain the averaged transmission coefficient \(\langle\mathscr{T}[E;{\bf x}(T),\mathcal{L}]\rangle\). To this end, the python script needs to be slightly modified by hand.

By clicking the icon, send the script to the Editor. Highlight the whole script and press the Tab key on your keyboard to indent every line. Then edit the script as shown below, and save it as “Au100_L3_T300.py”. To make sure that there is no mistake in your code, download Au100_L3_T300.py and compare the two scripts.

 1  # -*- coding: utf-8 -*-
 2
 3  for run in range(0,10):
 4
 5      # -------------------------------------------------------------
 6      # Two-probe Configuration
 7      # -------------------------------------------------------------
 8
 9      # -------------------------------------------------------------
10      # Left Electrode
11      # -------------------------------------------------------------
12
13      # Set up lattice
14      vector_a = [8.65127469112, 0.0, 0.0]*Angstrom
15      vector_b = [0.0, 8.65127469112, 0.0]*Angstrom
16      vector_c = [0.0, 0.0, 4.07825]*Angstrom
17      left_electrode_lattice = UnitCell(vector_a, vector_b, vector_c)

Note

Since the MD and Landauer sections are all placed inside a loop, this script evaluates the transmission coefficient 10 times, starting with different initial velocities, and saves each result with a unique ID number (gIDxxx) attached.

Note

Following exactly the same procedure, except employing different values of \(\mathcal{L}\), e.g., \(\mathcal{L}=4\) and 5, one can easily prepare the scripts for a systematic set of MD-Landauer calculations at \(T=300\) K. To save time, one may download Au100_L4_T300.py and Au100_L5_T300.py, and use them.

3. Data analysis¶

Click the icon, send the script to the Job Manager, and run it. The calculation takes 5-6 hours on 8 cores.
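Conceptually, the averaging of the 10 transmission spectra produced by the loop is just an energy-point-wise mean over the snapshots. A minimal illustration with made-up numbers (the real analysis script reads the spectra from the .nc files):

```python
# Each inner list is one transmission spectrum T(E) from one MD snapshot
# (values are made up purely for illustration).
runs = [
    [0.91, 0.88, 0.93],
    [0.89, 0.90, 0.95],
    [0.90, 0.92, 0.91],
]

# Energy-point-wise average <T(E)> over the MD snapshots
avg = [sum(values) / len(values) for values in zip(*runs)]
print(avg)
```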
The set of transmission coefficients should then be averaged for the analysis. This section hence focuses on how to analyze the results and obtain the 1D and bulk resistivities.

3.1. Transmission functions, conductance, and resistance¶

A window will pop up and display all the transmission functions \(\big\{\mathscr{T}[E;{\bf x}(T),\mathcal{L}]\big\}_{{\bf x}(T)}\) for \(T=300\) K and \(\mathcal{L}=3\). To compute their average, download the script analysis.py and run it as atkpython analysis.py in the same directory as the .nc files. The analysis will finish in a few seconds, and provide a list of information about the length of the MD region, and the averaged conductance and resistance:

There also appears a pop-up window, in which black lines show the transmission coefficients as before, and a thick red line represents their average \(\langle\mathscr{T}[E;{\bf x}(T),\mathcal{L}]\rangle\).

3.2. Resistivity¶

To compute the resistance at \(T=300\) K also for \(\mathcal{L}=\) 0, 4, and 5, download the scripts Au100_L0_T0.py, Au100_L4_T300.py, and Au100_L5_T300.py. Run them as atkpython Au100_Lx_Txxx.py, and analyze the results. One will then obtain a list of resistances, indicating the linear dependence of the resistance on \(\mathcal{L}\):

Length of the MD-region, \(\mathcal{L}\)                      Resistance
(# of electrode repetitions)        (Å)                       (Ω)
0                                   0                         1482.9
3                                   12.235                    1535.1
4                                   16.313                    1555.2
5                                   20.391                    1571.9

Note

The resistance for \(\mathcal{L}=0\) is computed using the script Au100_L0_T0.py, which performs no MD calculation and is hence named with “T0”. This script aims at evaluating the contact resistance.

Let us then compute the 1D and bulk resistivities, which requires the value of the cross section.
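The analysis performed by resistivity.py can be sketched in plain Python: a linear least-squares fit of \(R\) versus \(\mathcal{L}\) yields the contact resistance (intercept) and \(\rho_{\rm 1D}\) (slope), and multiplying the slope by the cross-section area gives the bulk resistivity. This is only a sketch of the underlying formulae using the table values above, not the actual script:

```python
# Resistance vs MD-region length, from the table above
lengths = [0.0, 12.235, 16.313, 20.391]          # L in Angstrom
resistances = [1482.9, 1535.1, 1555.2, 1571.9]   # R in Ohm

# Linear least-squares fit: R(L) = R_contact + rho_1D * L
n = len(lengths)
mean_L = sum(lengths) / n
mean_R = sum(resistances) / n
rho_1d = (sum((L - mean_L) * (R - mean_R) for L, R in zip(lengths, resistances))
          / sum((L - mean_L) ** 2 for L in lengths))   # slope, Ohm/Angstrom
r_contact = mean_R - rho_1d * mean_L                   # intercept, Ohm

# Bulk resistivity rho = rho_1D * A, with the cross section taken from
# the electrode lattice vectors (8.65127469112 Angstrom squared)
area = 8.65127469112 ** 2                              # Angstrom^2
rho_bulk = rho_1d * area * 1e-10                       # Ohm*m  (1 Angstrom = 1e-10 m)

print(r_contact, rho_1d, rho_bulk)
```

With these inputs, the fit gives a contact resistance of about 1483 Ω, a 1D resistivity of about 4.4 Ω/Å, and a bulk resistivity of about 3.3e-8 Ω·m.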
Noting the primitive vectors defined in lines 14 and 15 of “Au100_L3_T300.py”, i.e.,

14      vector_a = [8.65127469112, 0.0, 0.0]*Angstrom
15      vector_b = [0.0, 8.65127469112, 0.0]*Angstrom

the cross section in this case is given by \(A = \|\texttt{vector\_a}\| \times \|\texttt{vector\_b}\| = 74.84\) Å\(^2\). Using this value, and also the values of \(\mathcal{L}\) and resistance listed in the table above, one can evaluate the 1D and bulk resistivities at \(T=300\) K. To this end, download resistivity.py, and, after entering the values in the table into the INPUT SECTION of the script, run it as atkpython resistivity.py.

 6  #--------------------------------------------------------------------#
 7  # INPUT SECTION                                                      |
 8  #--------------------------------------------------------------------#
 9  cs_area = 8.65127469112**2
10  lengths_list = [0,12.235,16.313,20.391]
11  resistances_list = [1482.9,1535.1,1555.2,1571.90]
12  output_filename = 'resistivity.png'

The script interpolates the data points by means of the least-squares method and provides the values of the contact resistance, as well as the 1D and bulk resistivities at \(T=300\) K:

A red diamond in the figure below marks the bulk resistivity at \(T=300\) K, and is compared with purple points representing the results of the MD-Landauer calculations shown in Fig. 7(b) of Ref. [cMPS+17]. Because of the reduced numerical settings employed in this case study, there is a small difference. The figure also displays black squares representing the experimental results [cLid97], showing the qualitative agreement with the MD-Landauer results.

References¶

[cBMO+02] M. Brandbyge, J.-L. Mozos, P. Ordejón, J. Taylor, and K. Stokbro. Density-functional method for nonequilibrium electron transport. Phys. Rev. B, 65:165401, Mar 2002. doi:10.1103/PhysRevB.65.165401.

[cDat97] S. Datta. Electronic Transport in Mesoscopic Systems. Cambridge University Press, 1997.
Cambridge Studies in Semiconductor Physics and Microelectronic Engineering.

[cGRdV+12] N. Goga, A. J. Rzepiela, A. H. de Vries, S. J. Marrink, and H. J. C. Berendsen. Efficient algorithms for Langevin and DPD dynamics. Journal of Chemical Theory and Computation, 8(10):3637–3649, 2012. PMID: 26593009. doi:10.1021/ct3000876.

[cLid97] D. R. Lide. Handbook of Chemistry and Physics. New York: CRC Press, 75th edition, 1997.

[cMPS+17] T. Markussen, M. Palsgaard, D. Stradi, T. Gunst, M. Brandbyge, and K. Stokbro. Electron-phonon scattering from Green’s function transport combined with molecular dynamics: Applications to mobility predictions. Phys. Rev. B, 95:245210, June 2017. URL: https://arxiv.org/abs/1701.02883, doi:10.1103/PhysRevB.95.245210.

[cSKC+11] H. W. Sheng, M. J. Kramer, A. Cadien, T. Fujita, and M. W. Chen. Highly optimized embedded-atom-method potentials for fourteen fcc metals. Phys. Rev. B, 83:134118, April 2011. doi:10.1103/PhysRevB.83.134118.

[cSPS+10] K. Stokbro, D. E. Petersen, S. Smidstrup, A. Blom, M. Ipsen, and K. Kaasbjerg. Semiempirical model for nanoscale device simulations. Phys. Rev. B, 82:075420, Aug 2010. doi:10.1103/PhysRevB.82.075420.
Elastic constants¶

QuantumATK provides a very simple way to set up and execute calculations of the elastic constants for arbitrary bulk configurations. The method is general and can easily be used with density functional theory (ATK-DFT), semi-empirical methods (ATK-SE), or classical potentials (ATK-ForceField). In this tutorial, you will learn how to calculate elastic constants with QuantumATK.

Note

In a bulk solid, the elastic constants relate the linear response of the stress tensor \(\boldsymbol{\sigma}\) to an external strain \(\boldsymbol{\varepsilon}\) on the system. The elastic constants thereby describe the directional stiffness of the material under specific types of deformations. More general moduli, such as the bulk, shear, or Young’s modulus, can also be calculated. All of these quantities characterize the mechanical properties of a solid. In addition, elastic constants and moduli are often employed as fitting observables for the parameterization of classical potentials.

Methodology¶

The stress and strain tensors are always symmetric 3x3 matrices, and they can therefore be expressed more compactly as 6-vectors, using the so-called Voigt notation:

\(\boldsymbol{\sigma} = (\sigma_{xx},\, \sigma_{yy},\, \sigma_{zz},\, \sigma_{yz},\, \sigma_{xz},\, \sigma_{xy})^{\rm T}\)

and

\(\boldsymbol{\varepsilon} = (\varepsilon_{xx},\, \varepsilon_{yy},\, \varepsilon_{zz},\, 2\varepsilon_{yz},\, 2\varepsilon_{xz},\, 2\varepsilon_{xy})^{\rm T}.\)

The linear response of the stress vector to a given strain vector can then be written as

\(\boldsymbol{\sigma} = \boldsymbol{C}\,\boldsymbol{\varepsilon},\)

where the symmetric 6x6 matrix \(\boldsymbol{C}\) contains the elastic constants. Depending on the crystal symmetry, the number of independent entries in \(\boldsymbol{C}\) can be reduced further. For instance, in a cubic crystal only three entries, \(C_{11}\), \(C_{12}\), and \(C_{44}\), are independent:

\(\boldsymbol{C} = \begin{pmatrix} C_{11} & C_{12} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{11} & C_{12} & 0 & 0 & 0 \\ C_{12} & C_{12} & C_{11} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{44} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{44} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{44} \end{pmatrix}.\)

To obtain the elastic constants, one applies small deformations, \(\eta\), to the simulation cell along selected strain vectors, and calculates the resulting stress vectors. The linear stress contribution is obtained by fitting the \(\sigma_i(\eta)\) curves of each Voigt stress component, for every strain vector.
The independent elastic constants are then calculated as the least-squares solution to a linear system of equations, taking the crystal symmetry into account.

Note

For the calculation of elastic constants, QuantumATK employs the Lagrangian strain and stress tensors instead of the corresponding physical tensors.

To minimize the number of stress calculations, QuantumATK uses the Universal Linearly-Independent Coupling Strain (ULICS) vectors (Ref. [YZY10]). For each strain vector, typically three deformations (\(-\eta\), 0, \(+\eta\)), centered at the reference configuration (\(\eta=0\)), are applied.

Important

For configurations with more than one atom in the cell, the atomic positions of each strained cell must be optimized before the stress is calculated; QuantumATK automatically takes care of this. You should, however, ensure that the cell vectors (and atomic positions) of the reference configuration are well optimized before starting the calculation of the elastic constants.

Calculating elastic constants using classical potentials¶

You will now perform an elastic constant calculation based on classical potentials, using ATK-ForceField. The advantage of classical potentials, apart from the calculation speed, is that most properties, such as energies, forces, and stress, are smooth functions of the coordinates. This makes the calculation of elastic constants particularly robust with respect to the settings used for calculating the strain. Two examples will be considered:

A bulk silicon crystal, using the Stillinger–Weber potential [SW85].

SiO2 \(\alpha\)-quartz, using the “Pedone2006 Fe2” potential [PMM+06].

Bulk silicon¶

Note

For QuantumATK versions older than 2017, the ATK-ForceField calculator can be found under the name ATK-Classical.

Add an optimization block to the script and edit it: select sufficiently accurate settings for the maximum force and maximum stress, e.g.
0.001 eV/Å for the force and 0.001 eV/Å\(^3\) for the stress; uncheck the “Constrain cell” check box to enable optimization of the cell vectors; select LBFGS as the optimizer method.

The first set of parameters and options refers to the stress/strain calculation:

Parameter: \(\eta_{max}\)
Specifies the maximum deformation magnitude applied to the cell. A sufficiently large value should be chosen to achieve a significant variation in stress, but small enough to remain essentially within the linear regime. The default value of 0.002 is in most cases a good choice. Elastic constants calculated with classical potentials are in general not too sensitive to this parameter.

Parameter: \(n_\eta\)
Specifies the number of intermediate deformations between \(-\eta_{max}\) and \(+\eta_{max}\) for every strain vector. A higher value, together with a higher-order fit (as specified by the next parameter), may help to filter out possible non-linear contributions, but also increases the number of calculations that need to be performed. The default value is 3 and usually works well.

Parameter: Fit order
Specifies the highest polynomial order in the stress vs. \(\eta\) fitting procedure, which should be smaller than \(n_\eta\). For the evaluation of the elastic constants, only the linear contribution is used.

Option: Enable symmetry
Allows QuantumATK to detect the crystal symmetry and calculate only the independent constants for this lattice symmetry. Otherwise, all 21 constants of the upper triangle of the \(\boldsymbol{C}\) matrix are treated as independent.

The remaining settings are related to the force optimization (of the atomic coordinates) that can be invoked before the stress of each strained system is calculated:

Option: Optimizer
Sets the optimizer method.
If you select “None”, no force optimization is carried out before the stress is calculated, which is the fastest, but least accurate, option. If there is only one atom in the unit cell, there is no need for an optimization of the internal coordinates, and the optimizer is automatically disabled.

Parameters for force and stress minimization
The remaining parameters control the optimizer settings. These options are in fact the same as in the OptimizeGeometry object. While most of these parameters are only relevant in special cases, the “Max forces” parameter should be chosen sufficiently accurate. The default value of 0.005 eV/Å (which is 10x lower than the usual default in QuantumATK) represents a good balance between accuracy and efficiency. Since the example at hand uses fast ATK-ForceField calculations, you can easily afford to reduce this value even further, to 0.001 eV/Å, to achieve a higher precision.

Execute the calculation by sending the script to the Job Manager, again using the button. You may be asked to save the script again. Click the button to start the job. It takes around five seconds to finish on a laptop.
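The fitting step described above can be illustrated with a synthetic example. Below, a stress curve \(\sigma(\eta) = C\eta + q\eta^2\) is sampled at the three default deformations, and the linear coefficient is recovered; with three symmetric points, the linear term of an order-2 polynomial fit reduces to a central difference, so the quadratic contamination cancels exactly. (Plain Python; C and q are made-up numbers, not QuantumATK output.)

```python
eta_max = 0.002                    # default maximum deformation
etas = [-eta_max, 0.0, eta_max]    # the three deformations per strain vector

# Synthetic stress response with a deliberate non-linear contamination:
# sigma(eta) = C*eta + q*eta**2  (C in GPa; q is an arbitrary quadratic term)
C, q = 161.02, 5000.0
sigmas = [C * e + q * e ** 2 for e in etas]

# For three symmetric points, the linear coefficient of the order-2 fit
# is the central difference, which cancels the quadratic term:
C_fit = (sigmas[2] - sigmas[0]) / (2 * eta_max)
print(C_fit)
```

The recovered linear coefficient equals the input C to machine precision, which is why a small, symmetric set of deformations suffices in practice.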
Analysis of the results¶

You can find information about the calculation and results in the log file. As you can see, QuantumATK has correctly recognized the cubic symmetry of the silicon crystal and calculates only the three independent elastic constants:

+------------------------------------------------------------------------------+
| Calculating elastic constants for the given bulk configuration               |
+------------------------------------------------------------------------------+
| Detected space group number: 227                                             |
| Detected lattice symmetry: Cubic                                             |
| This lattice symmetry has 3 independent elastic constants:                   |
|     C11   C12   C44                                                          |
+------------------------------------------------------------------------------+

The elastic constants matrix is printed at the bottom of the log file, and is in excellent agreement with literature results using the Stillinger–Weber potential [Cow88]:

Tip

The log file also contains a more detailed analysis of the elastic constant matrix. It includes the elastic compliance matrix, which is the inverse of the elastic constants matrix. More general moduli, such as the bulk, shear, and Young’s modulus, are also calculated from the elastic constants. Be aware that there are three different definitions of the bulk and shear modulus, according to the slightly different formulae from Voigt and Reuss. All of them are listed in the log output for comparison.

SiO2 quartz¶

You can use the same method to compute the elastic properties of more complex crystals, such as SiO2 \(\alpha\)-quartz. Follow the same steps as above and select the “Pedone2006_Fe2” potential, which is from Ref. [PMM+06].
Accurate long-range electrostatics¶

After setting up the script in the Script Generator, change the Script Details setting in the Global IO field from “Minimal” to “Show defaults”, and transfer the script to the Editor in order to make one small modification. In the calculator block, you will find that all the pair potentials are now individually listed and added to the potential set (this is a result of the “Show defaults” setting).

The Pedone potential includes long-range electrostatic interactions. You can manually increase the cutoff distance for these interactions (r_cut). To achieve a high accuracy of the stress calculation, you should change this parameter from the default of 9.0 Å to a larger value of 15 Å, to account for the long-range nature of the Coulomb potential:

potentialSet.setCoulombSolver(CoulombDSF(r_cut=15.0*Angstrom, alpha=0.2))
calculator = TremoloXCalculator(parameters=potentialSet)
calculator.setInternalOrdering("default")

This will affect the speed of the calculations, but since the elastic constants calculation does not involve extensively long simulations, you will hardly notice it.

Results¶

Performing the calculation will give the following results:

+------------------------------------------------------------------------------+
| Elastic Constants in GPa                                                     |
+------------------------------------------------------------------------------+
|     86.55      8.71     11.16    -18.36      0.00      0.00                  |
|               86.55     11.16     18.36      0.00      0.00                  |
|                        106.67      0.00      0.00      0.00                  |
|                                   49.41      0.00      0.00                  |
|                                             49.41    -18.36                  |
|                                                       38.92                  |
+------------------------------------------------------------------------------+

Due to the rhombohedral symmetry, there are six independent constants: \(C_{11}, C_{12}, C_{13}, C_{14}, C_{33}\), and \(C_{44}\). All values reproduce very well the results of the original publication, Ref. [PMM+06].

Calculate elastic constants using DFT¶

Finally, you will perform a calculation of the elastic constants of silicon using density functional theory.
The framework outlined above can be used again – you just need to replace the calculator such that it uses ATK-DFT with GGA exchange-correlation. To obtain numerically precise stress differences between the strained configurations, we generally have to employ slightly tighter calculator settings than the defaults:

Choose ATK-DFT as the calculator, and set the following parameters in Basic:

- set the grid mesh cut-off to 300 Hartree;
- set the k-point sampling to 15x15x15.

In the Iteration control parameters tab:

- set the tolerance parameter to 1.0e-08;
- set the damping factor to 0.5 and the number of history steps to 10.

In the Basis set/exchange correlation tab use the PBEsol functional, which offers very reliable DFT predictions of bulk moduli and related quantities:

- choose GGA for the exchange-correlation type;
- select PBEsol in the list of predefined functionals.

The settings in the OptimizeGeometry and ElasticConstants blocks should be almost identical to the ones chosen for the classical potentials, except that the \(\eta_{max}\) value in the ElasticConstants settings should be 0.003 (which means that a larger strain will be employed, resulting in a more pronounced change in the stress).

This calculation takes only about 10 minutes to finish. The calculated elastic constants are

+------------------------------------------------------------------------------+
| Elastic Constants in GPa                                                     |
+------------------------------------------------------------------------------+
| 161.02   65.22   65.22    0.00    0.00    0.00 |
|         161.02   65.22    0.00    0.00    0.00 |
|                 161.02    0.00    0.00    0.00 |
|                          75.68    0.00    0.00 |
|                                  75.68    0.00 |
|                                          75.68 |
+------------------------------------------------------------------------------+

The results are in reasonably good agreement with the experimental values, Ref. [MBBT51], which are listed below:

References

[Cow88] E. Roger Cowley. Lattice dynamics of silicon with empirical many-body potentials. Phys. Rev. Lett., 60:2379–2381, Jun 1988. doi:10.1103/PhysRevLett.60.2379.
[MBBT51] H. J. McSkimin, W. L. Bond, E. Buehler, and G. K. Teal. Measurement of the elastic constants of silicon single crystals and their thermal coefficients. Phys. Rev., 83:1080–1080, Sep 1951. doi:10.1103/PhysRev.83.1080. [PMM+06] (1, 2, 3) Alfonso Pedone, Gianluca Malavasi, M. Cristina Menziani, Alastair N. Cormack, and Ulderico Segre. A new self-consistent empirical interatomic potential model for oxides, silicates, and silica-based glasses. The Journal of Physical Chemistry B, 110(24):11780–11795, 2006. doi:10.1021/jp0611018. [SW85] F. H. Stillinger and T. A. Weber. Computer simulation of local order in condensed phases of silicon. Phys. Rev. B, 31:5262–5271, Apr 1985. doi:10.1103/PhysRevB.31.5262. [YZY10] R. Yu, J. Zhu, and H.Q. Ye. Calculations of single-crystal elastic constants made simple. Computer Physics Communications, 181(3):671 – 675, 2010. doi:10.1016/j.cpc.2009.11.017.
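Stepping back to the mechanics of the ElasticConstants study itself: it amounts to a central finite difference of the stress with respect to strain. A self-contained toy version, with a known linear stiffness standing in for the DFT stress engine (all numbers hypothetical):

```python
import numpy as np

# Toy version of an elastic-constants workflow: recover the stiffness as a
# central finite difference of stress w.r.t. strain. C_true (hypothetical
# numbers, Voigt notation, 3x3 sub-block) stands in for the stress engine.
C_true = np.array([[161.0, 65.0, 65.0],
                   [65.0, 161.0, 65.0],
                   [65.0, 65.0, 161.0]])

def stress(strain):
    """Linear-elastic 'calculator': sigma = C @ epsilon."""
    return C_true @ strain

eta = 0.003  # strain amplitude, analogous to eta_max above
C_fd = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = eta
    # C_ij = (sigma_i(+eta) - sigma_i(-eta)) / (2 * eta)
    C_fd[:, j] = (stress(e) - stress(-e)) / (2.0 * eta)

assert np.allclose(C_fd, C_true)
```

For a truly linear model the finite difference is exact; in a real DFT calculation the strain amplitude trades off anharmonic contamination (too large) against numerical noise in the stress (too small), which is why the tutorial tightens the SCF tolerance.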
Let $p \in \mathbb N$ be a prime and let $$Q_p := \left\{ x \in \mathbb Q : x = \frac{k}{p^n} \text{ for some } k \in \mathbb Z \text{ and } n \in \mathbb N \right\}.$$ Show that $Q_p / \mathbb Z$ is divisible as a $\mathbb Z$-module. How should I proceed? Please help me. Thank you in advance.
What is the gap $\log(n+1)-\log(n)$ between the logarithms of consecutive integers? That is, to what precision must logarithms be computed in order to determine the integers correctly? $\log(n+1)-\log n=\log(1+\frac1n)$. Using the Taylor series for $\log(1+x)$, this is $$\frac1n-\frac1{2n^2}+\frac1{3n^3}-\cdots\approx\frac1n.$$ Especially Lime's answer is absolutely correct and a very good approach. Let me show a conceptually somewhat simpler one, that only uses the derivative, namely $(\log x)'=1/x$. This is a strictly decreasing function, so $\log x$ is concave. In particular, its graph is below the tangent line at any point of the graph. We obtain $\log (n+1)< \log n + 1/n$ and $\log n< \log (n+1) - 1/(n+1)$. To sum up: $$\frac{1}{n+1} < \log (n+1)-\log n < \frac{1}{n}$$ This is rather similar to previous answers, but I think it's still worth pointing out. You're asking about the slope of a chord of the graph of $\log x$, the chord joining $(n,\log n)$ to $(n+1,\log(n+1))$. By the mean value theorem, this equals the slope of the tangent line, $1/x$, at some $x$ between $n$ and $n+1$. Just added for your curiosity. In the same spirit as in other answers, instead of Taylor series, you could consider Padé approximants and get things such as $$\log(n+1)-\log n=\log\left(1+\frac1n\right) \approx \frac{2}{2 n+1}$$ $$\log(n+1)-\log n=\log\left(1+\frac1n\right) \approx \frac{6 n+3}{6 n^2+6n+1}$$ $$\log(n+1)-\log n=\log\left(1+\frac1n\right) \approx \frac{60 n^2+60 n+11 } {60 n^3+90 n^2+36 n+3 }$$ $$\log(n+1)-\log n=\log\left(1+\frac1n\right) \approx \frac{420 n^3+630 n^2+260 n+25 }{420 n^4+840 n^3+540 n^2+120 n+6 }$$ These are respectively equivalent to Taylor series to $O\left(\frac{1}{n^3}\right)$, $O\left(\frac{1}{n^5}\right)$, $O\left(\frac{1}{n^7}\right)$ and $O\left(\frac{1}{n^9}\right)$. Here's a geometric way to get a good approximation for $\ln(n+1)-\ln(n)$: Use the fact that $\ln(x)=\int_1^x \frac1t dt$. (For $x>0$.) Then $\ln(n+1)-\ln(n)$ is the area under the curve $f(x)=\frac1x$ from $n$ to $n+1$.
We are looking for the area over an interval of length $1$. So numerically the area should be equal to the 'average' height. Because the $\frac1x$ function is strictly decreasing, we get a good approximation to that 'average' height by evaluating the $\frac1x$ function at the midpoint of the interval (namely at $n+\frac12$). So $\ln(n+1)-\ln(n)\approx \frac{1}{n+\frac12}$, with the approximation getting better and better as $n$ gets large.
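The bounds and approximations above are easy to verify numerically; a small sketch (note that the first Padé approximant $\frac{2}{2n+1}$ is exactly the midpoint value $\frac{1}{n+1/2}$):

```python
import math

# Compare log(n+1) - log(n) with the approximations discussed above.
n = 100
gap = math.log(n + 1) - math.log(n)

assert 1.0 / (n + 1) < gap < 1.0 / n        # bounds from concavity
mid = 1.0 / (n + 0.5)                       # midpoint rule = 2/(2n+1)
assert abs(gap - mid) < abs(gap - 1.0 / n)  # midpoint is far closer

print(gap, mid, 1.0 / n)
```

The error of the midpoint approximation is of order $1/n^3$, versus $1/n^2$ for the crude estimate $1/n$, consistent with the Padé/Taylor comparison above.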
I think that a good way to understand the Levi-Civita connection is to say that it is the Ehresmann connection in $TTM$ obtained from the linearization of the geodesic flow by a natural geometric construction. I described this construction in my answer to this MO question, but I'll do so again with some improvements. Dynamic construction. Let $c(t)$ be an orbit of the geodesic flow in $TM$, consider the vertical subspaces $V(t)$ in $TTM$ along $c(t)$ and bring them back to the tangent space of $TM$ at the point $c(0)$ by using the differential of the flow. You get a family of (Lagrangian) subspaces $l(t) := D\phi_{-t}(V(t))$ in the symplectic vector space $T_{c(0)}TM$. Now forget you ever had a geodesic flow: all that you need is the curve of subspaces. A bit of differential projective geometry---described below---shows that you also get a second curve $h(t)$ of (Lagrangian) subspaces in $T_{c(0)}TM$ that is transversal to $l(t)$. The subspace $h(0)$ is the horizontal subspace of the connection and $T_{c(0)}TM = l(0) \oplus h(0)$ is the decomposition into vertical and horizontal subspaces. Projective construction. Now I'll describe as succinctly as possible the projective-geometric construction that underlies both the Levi-Civita connection and the Schwartzian derivative. For the details of what follows see this paper. What's new in the description here is that I explicitly use the Springer resolution (which Duran and I used implicitly in the paper). First we need two remarks on the geometry of the Grassmannian $G_n(\mathbb{R}^{2n})$ of $n$-dimensional subspaces in $\mathbb{R}^{2n}$. 1. The tangent space of $G_n(\mathbb{R}^{2n})$ at a subspace $\ell$ is canonically identified with the space of linear maps from $\ell$ to $\mathbb{R}^{2n}/\ell$ or, equivalently, with the space $(\mathbb{R}^{2n}/\ell) \otimes \ell^*$.
Since $\mathbb{R}^{2n}/\ell$ and $\ell$ have the same dimension, we may distinguish a class of differentiable curves $\gamma$ on the Grassmannian by requiring that at each instant $t$ their velocities are invertible linear maps from $\gamma(t)$ to $\mathbb{R}^{2n}/\gamma(t)$. These curves are called fanning or regular. Using that the cotangent space of $G_n(\mathbb{R}^{2n})$ at a subspace $\ell$ is canonically isomorphic to $\ell \otimes (\mathbb{R}^{2n}/\ell)^*$, we can lift every fanning curve $\gamma(t)$ to a curve on the cotangent bundle of the Grassmannian by $t \mapsto (\dot{\gamma}(t))^{-1}$. 2. Consider the action of the linear group $GL(2n;\mathbb{R})$ on the Grassmannian $G_n(\mathbb{R}^{2n})$ and lift it to an action on its cotangent bundle. The moment map of this action takes values in the set of nilpotent matrices. Now consider a fanning curve $\gamma(t)$ on the Grassmannian $G_n(\mathbb{R}^{2n})$ and lift it to the curve $(\dot{\gamma}(t))^{-1}$ on its cotangent bundle. Use the moment map to obtain a curve $F(t)$ of nilpotent matrices. Note that everything we have done is $GL(2n,\mathbb{R})$-equivariant. Finally we come to the little miracle: the time derivative of $F(t)$ is a curve of reflections $\dot{F}(t)$ (i.e., $\dot{F}(t)^2 = I$) whose $-1$-eigenspace is the curve of subspaces $\gamma(t)$ and whose $1$-eigenspace defines a "horizontal curve" $h(t)$ equivariantly attached to $\gamma(t)$. This is the construction that yields the Levi-Civita connection (and what is behind the formalisms of Grifone and Foulon for connections of second order ODE's on manifolds). Differentiate $F(t)$ a second time to find the Schwartzian derivative. Geometrically, it just describes how the curve $h(t)$ moves with respect to $\gamma(t)$. For comparison, recall that the curvature of a connection is obtained by differentiating (i.e., bracketing) horizontal vector fields and projecting onto the vertical bundle.
Given a polynomial $a(x)$ of degree at most $242$ over $\mathbb{Z}_{487}$, I'd like to choose distinct values $x_0, x_1, \ldots, x_{242} \in \mathbb{Z}_{487}$, such that I'll be able to calculate $a(x_j)$ for all $j = 0, \ldots, 242$ by calculating three polynomials $a_0(x), a_1(x), a_2(x)$ of degree at most $80$ for $81$ different values of $x$. So first of all I divided $a(x)$ into sub-polynomials, simply by: $$\begin{align*} a(x)=x^{162}\cdot&\left( a_{242}\cdot x ^ {80}+...+a_{163}\cdot x+a_{162} \right)+\\ & x^{81}\cdot\left( a_{161}\cdot x ^ {80}+...+a_{82}\cdot x+a_{81} \right)+\\ &\left( a_{80}\cdot x ^ {80}+...+a_{1}\cdot x+a_{0} \right)\end{align*}$$ Now I have shown that $2^{243} \equiv 1 \pmod{487}$, and that for every $1\leq a \leq 242$: $2^a \not\equiv 1 \pmod{487}$. I wanted to use this so that I can choose distinct $x_i$ by $x_i = 2^i \bmod 487$ for all $0\leq i \leq 242$. Now I thought of taking triples, and for that I had two ideas: Choose triples $x_i,x_{i+1},x_{i+2}$ for each $0\leq i\leq 242$ such that $i \equiv 0 \pmod 3$. Choose triples $x_i,x_{i+81},x_{i+162}$ for each $0 \leq i \leq 80$. In any case, given a trio, I wanted to calculate only $a_2(x_i)$, $a_1(x_i)$, $a_0(x_i)$, and then use them to calculate $a(x_i)$, $a(x_{i'})$, $a(x_{i''})$, where $i'$ and $i''$ will be the next in my trio (depending on which idea is chosen). The problem is, I either got confused and all that's left is obvious, or I just can't seem to make those last few steps. If I'm on the right track, can someone help me here? Or perhaps I should choose other trios? Or if not, maybe there's a hint that can help me out?
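Not an answer, but a quick numerical check of the splitting $a(x)=a_0(x)+x^{81}a_1(x)+x^{162}a_2(x)$ and of the claimed order of $2$ modulo $487$ (the coefficients are random, for illustration only):

```python
import random

# Verify the decomposition a(x) = a0(x) + x^81 * a1(x) + x^162 * a2(x)
# over Z_487, with random illustrative coefficients a_0, ..., a_242.
p = 487
random.seed(0)
a = [random.randrange(p) for _ in range(243)]

def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x^i) mod p by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# 487 = 7 mod 8, so 2 is a quadratic residue mod 487, hence 2^243 = 1
assert pow(2, 243, p) == 1

for i in (0, 1, 80, 200):
    x = pow(2, i, p)
    a0, a1, a2 = (horner(a[0:81], x), horner(a[81:162], x),
                  horner(a[162:243], x))
    split = (a0 + pow(x, 81, p) * a1 + pow(x, 162, p) * a2) % p
    assert split == horner(a, x)
```

With $x_i = 2^i$, note that $x_i^{81} = x_{i+81}$ and $x_i^{162} = x_{i+162}$ (indices mod 243), which is what makes the second choice of triples attractive.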
Let $M$ come from an ensemble of $N\times N$ matrices. The Wigner surmise is the density function $p^W_0(s)=\frac{\pi}{2}se^{-\pi s^2/4}$. From a random matrix point of view, we can write $p^W_0(s)=\frac{d^2}{ds^2}E((0,s))$, where $E(I)$ is the eigenvalue gap probability: $M$ has no eigenvalues in the interval $I$. $E(I)$ and its derivatives are intimately related to the correlations between nearest neighbors. Question 1: What are known random matrix ensembles which have their eigenvalue gap probability (in the limit) exactly equal to the Wigner surmise? To be specific: either all $N\times N$ matrices have the Wigner surmise OR the limiting eigenvalue distribution is exactly the Wigner surmise. Question 2: What are known interacting-particle systems which have their particle gap probability (in the limit) exactly equal to the Wigner surmise? To be specific: either all $N\times N$ particle systems exhibit the Wigner surmise OR the limiting particle distribution is exactly the Wigner surmise. One example that I've seen is the real Ginibre ensemble, which takes a random Gaussian matrix and focuses only on the real eigenvalues. Then the probability of there being an even number of eigenvalues in $[0,s]$ matches the Wigner surmise. This is equivalent to certain statistics of creation/annihilation processes on the line. In addition to this, there are some statistical physics spin systems which seem to give an exact surmise as well. Unfortunately I'm not an expert in the latter area. Some more background: It's a well known fact that many random matrix ensembles exhibit a (limiting) density function of the form: $$p_0(s)=\frac{2u(\pi^2 s^2/4)}{s}\exp\left(-\int_0^{\pi^2 s^2/4}\frac{u(t)}{t}dt\right),$$ where $u$ satisfies a Painlevé equation and of which $p^W_0(s)$ is a special case. So in short, $p^W_0(s)$ is usually an approximation, not an exact answer.
One can certainly derive some conditions on $u$ and the resulting Painlevé equation to get an exact Wigner surmise, but this doesn't tell us which random matrix ensembles it comes from.
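As a basic sanity check, the surmise $p^W_0(s)=\frac{\pi}{2}se^{-\pi s^2/4}$ is normalized and has unit mean spacing, which a direct numerical integration confirms:

```python
import numpy as np

# Check numerically that the Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4)
# integrates to 1 and has unit mean spacing.
s = np.linspace(0.0, 12.0, 200_001)
p = (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

h = s[1] - s[0]
total = float(np.sum(0.5 * (p[1:] + p[:-1])) * h)                 # trapezoid rule
mean = float(np.sum(0.5 * (s[1:] * p[1:] + s[:-1] * p[:-1])) * h)

assert abs(total - 1.0) < 1e-6
assert abs(mean - 1.0) < 1e-6
```

(This normalization, density 1 and mean 1, is exactly what makes the surmise comparable across ensembles after unfolding the spectrum.)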
Timeline of prime gap bounds Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments Aug 10 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) May 14 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. May 21 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations May 28 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] May 30 59,470,640 (Morrison) 58,885,998? (Tao) 59,093,364 (Morrison) 57,554,086 (Morrison) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m May 31 2,947,442 (Morrison) 2,618,607 (Morrison) 48,112,378 (Morrison) 42,543,038 (Morrison) 42,342,946 (Morrison) Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] Jun 1 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] Jun 2 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) Jun 3 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison) 4,802,222 (Morrison) Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. Jun 4 1/224?? (v08ltu) 1/240?? (v08ltu) 4,801,744 (Sutherland) 4,788,240 (Sutherland) Uses asymmetric version of the Hensley-Richards tuples Jun 5 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz) 4,717,560 (Sutherland) 397,110? 
(Sutherland) 4,656,298 (Sutherland) 389,922 (Sutherland) 388,310 (Sutherland) 388,284 (Castryck) 388,248 (Sutherland) 387,982 (Castryck) 387,974 (Castryck) [math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance. [math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve Jun 6 387,960 (Angeltveit) 387,904 (Angeltveit) Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. Jun 7 26,024? (v08ltu) 387,534 (pedant-Sutherland) Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland) 285,752 (pedant-Sutherland) values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here. An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired. Jun 12 22,951 (Tao/v08ltu) 22,949 (Harcos) 249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu) 6,329? (Harcos) 6,329 (v08ltu) 60,830?
(Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu) 5,672? (v08ltu) 5,459? (v08ltu) 5,454? (v08ltu) 5,453? (v08ltu) 60,740 (xfxie) 58,866? (Sun) 53,898? (Sun) 53,842? (Sun) A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu) 5,453? (v08ltu) 5,452? (v08ltu) 53,774? (Sun) 53,672*? (Sun) Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao) [math]148\varpi + 33\delta \lt 1[/math]? (Tao) Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu) 1,467 (v08ltu) 12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu) [math]140\varpi + 32 \delta \lt 1[/math]? (Tao) 1,268? (v08ltu) 10,206? 
(Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes) 1,007? (Hannes) 10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen) [math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao) 962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao) 873? (Hannes) Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao) Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao) 632 (Harcos) 4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? 
(Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422? (Engelsma) 12 [EH] (Maynard) Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard) 5 [EH] (Maynard) 600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard) 582#*? (Nielsen) 59,451 [m=2]#? (Nielsen) 42,392 [m=2]? (Nielsen) 356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie) 448#*? (Nielsen) 43,134 [m=2]#? (Nielsen) 698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? (Sutherland) Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland) Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen) 10,000,000? [m=3] (Tao) 1,700,000? [m=3] (Tao) 38,000? [m=2] (Tao) 300#? (Clark-Jarvis) 182,087,080? [m=3] (Sutherland) 179,933,380?
[m=3] (Sutherland) More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20 55#? (Nielsen) 36,000? [m=2] (xfxie) 175,225,874? [m=3] (Sutherland) 27,398,976? [m=3] (Sutherland) Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck) 75,000,000? [m=4] (Castryck) 3,400,000,000? [m=5] (Castryck) 5,511? [EH] [m=3] (Sutherland) 2,114,964#? [m=3] (Sutherland) 309,954? [EH] [m=5] (Sutherland) 395,154? [m=2] (Sutherland) 1,523,781,850? [m=4] (Sutherland) 82,575,303,678? [m=5] (Sutherland) A numerical precision issue was discovered in the earlier m=4 calculations Dec 23 41,589? [EH] [m=4] (Sutherland) 24,462,774? [m=3] (Sutherland) 1,512,832,950? [m=4] (Sutherland) 2,186,561,568#? [m=4] (Sutherland) 131,161,149,090#? [m=5] (Sutherland) Dec 24 474,320? [EH] [m=4] (Sutherland) 1,497,901,734? [m=4] (Sutherland) Dec 28 474,296? [EH] [m=4] (Sutherland) Jan 2 2014 474,290? [EH] [m=4] (Sutherland) Jan 6 54? (Nielsen) 270? (Clark-Jarvis) Jan 8 4 [GEH] (Nielsen) 8 [GEH] (Nielsen) Using a "gracefully degrading" lower bound for the numerator of the optimisation problem. Calculations confirmed here. Jan 9 474,266? [EH] [m=4] (Sutherland) Jan 28 395,106? [m=2] (Sutherland) Jan 29 3 [GEH] (Nielsen) 6 [GEH] (Nielsen) A new idea of Maynard exploits GEH to allow for cutoff functions whose support extends beyond the unit cube Feb 9 Jan 29 results confirmed here Feb 17 53?# (Nielsen) 264?# (Clark-Jarvis) Managed to get the epsilon trick to be computationally feasible for medium k Legend: ? - unconfirmed or conditional ?? 
- theoretical limit of an analysis, rather than a claimed record
* - is majorized by an earlier but independent result
# - bound does not rely on Deligne's theorems
[EH] - bound is conditional on the Elliott-Halberstam conjecture
[GEH] - bound is conditional on the generalized Elliott-Halberstam conjecture
[m=N] - bound on intervals containing N+1 consecutive primes, rather than two
strikethrough - values relied on a computation that has now been retracted

See also the article on Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math].
Ivanyos, Gábor; Qiao, Youming; Subrahmanyam, K Venkata
Constructive Non-Commutative Rank Computation Is in Deterministic Polynomial Time
pdf-format: LIPIcs-ITCS-2017-55.pdf (0.6 MB)

Abstract

Let $\mathcal{B}$ be a linear space of matrices over a field $\mathbb{F}$ spanned by $n\times n$ matrices $B_1, \dots, B_m$. The non-commutative rank of $\mathcal{B}$ is the minimum $r\in \mathbb{N}$ such that there exists $U\leq \mathbb{F}^n$ satisfying $\dim(U)-\dim(\mathcal{B}(U))\geq n-r$, where $\mathcal{B}(U):=\mathrm{span}(\cup_{i\in[m]} B_i(U))$. Computing the non-commutative rank generalizes some well-known problems including the bipartite graph maximum matching problem and the linear matroid intersection problem. In this paper we give a deterministic polynomial-time algorithm to compute the non-commutative rank over any field $\mathbb{F}$. Prior to our work, such an algorithm was only known over the rational number field $\mathbb{Q}$, a result due to Garg et al. [GGOW]. Our algorithm is constructive and produces a witness certifying the non-commutative rank, a feature that is missing in the algorithm from [GGOW]. Our result is built on techniques which we developed in a previous paper [IQS1], with a new reduction procedure that helps to keep the blow-up parameter small. There are two ways to realize this reduction. The first involves constructivizing a key result of Derksen and Makam [DM2] which they developed in order to prove that the null cone of matrix semi-invariants is cut out by generators whose degree is polynomial in the size of the matrices involved. We also give a second, simpler method to achieve this. This gives another proof of the polynomial upper bound on the degree of the generators cutting out the null cone of matrix semi-invariants. Both the invariant-theoretic result and the algorithmic result rely crucially on the regularity lemma proved in [IQS1].
In this paper we improve on the constructive version of the regularity lemma from [IQS1] by removing a technical coprime condition that was assumed there. BibTeX - Entry @InProceedings{ivanyos_et_al:LIPIcs:2017:8166, author = {G{\'a}bor Ivanyos and Youming Qiao and K Venkata Subrahmanyam}, title = {{Constructive Non-Commutative Rank Computation Is in Deterministic Polynomial Time}}, booktitle = {8th Innovations in Theoretical Computer Science Conference (ITCS 2017)}, pages = {55:1--55:19}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-029-3}, ISSN = {1868-8969}, year = {2017}, volume = {67}, editor = {Christos H. Papadimitriou}, publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik}, address = {Dagstuhl, Germany}, URL = {http://drops.dagstuhl.de/opus/volltexte/2017/8166}, URN = {urn:nbn:de:0030-drops-81667}, doi = {10.4230/LIPIcs.ITCS.2017.55}, annote = {Keywords: invariant theory, non-commutative rank, null cone, symbolic determinant identity testing, semi-invariants of quivers}} Keywords: invariant theory, non-commutative rank, null cone, symbolic determinant identity testing, semi-invariants of quivers Seminar: 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) Issue Date: 2017 Date of publication: 24.11.2017
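For intuition on the matching connection mentioned in the abstract: Lovász's classical reduction encodes a bipartite graph as a matrix of independent indeterminates whose (commutative) rank equals the maximum matching size. A heuristic sketch with a random substitution (the graph and values are made up; this is not the paper's deterministic non-commutative rank algorithm):

```python
import numpy as np

# Lovász: max matching size of a bipartite graph = rank of its Edmonds
# matrix (one independent indeterminate per edge, zero elsewhere).
# Substituting random values preserves the rank with high probability.
rng = np.random.default_rng(1)

# Edges on {u0,u1,u2} x {v0,v1,v2}; the maximum matching has size 2.
edges = [(0, 0), (1, 0), (2, 1)]

A = np.zeros((3, 3))
for i, j in edges:
    A[i, j] = rng.integers(1, 10**6)

assert np.linalg.matrix_rank(A) == 2  # equals the maximum matching size
```

The non-commutative rank replaces this randomized, commutative evaluation by a quantity defined via shrunk subspaces, which the paper shows how to compute (with a witness) deterministically.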
I asked a question on Physics Stack Exchange but no one answered the question and I didn't get enough views on it. I am asking it on QCSE because the question is related to experimental quantum computation realized through NMR. For an ensemble of identical atoms in a superposition state $|\phi\rangle=\alpha|S_z^+\rangle+\beta|S_z^-\rangle$, where $|\alpha|^2$ and $|\beta|^2$ are not equal (although $|\alpha|^2 + |\beta|^2 = 1$) and give the populations of $|S_z^+\rangle$ and $|S_z^-\rangle$ respectively, we can drive a transition between the two basis states, and we call this phenomenon population transfer (please correct me if I am wrong). Mathematically, the field-induced transitions change the values of $|\alpha|^2$ and $|\beta|^2$. However, there can be one more phenomenon going on here, called **polarization transfer**, which is distinct from population transfer because polarization really counts the total spin magnetization of a state in the context of NMR and EPR, and maybe quantum optics too. Now the third phenomenon, **coherence transfer** (no classical analogue), can't occur between two states but needs a three-level system, say $|1\rangle$, $|2\rangle$ and $|3\rangle$: the field-driven transitions between states $|1\rangle$ & $|2\rangle$ and states $|1\rangle$ & $|3\rangle$ somehow create a transition between states $|2\rangle$ & $|3\rangle$ even though there is no driving field of frequency corresponding to the $|2\rangle$ & $|3\rangle$ transition. The last two phenomena written in bold are what I do not understand. Any insight into them will be very helpful and mathematics over them will be highly appreciated. This link also tries to explain the difference between population transfer, polarization transfer and coherence transfer through density matrices, but I cannot see much physical explanation of the phenomena.
Note: Cross-posted on Physics SE. Hi, I'm studying the quantum phase estimation algorithm from this book: M.A. Nielsen, I.L. Chuang, "Quantum Computation and Quantum Information", Cambridge Univ. Press (2000) [~p. 221]. The authors define $b$ as the integer in the range $0$ to $2^t-1$ such that $\frac{b}{2^t}$ is the best $t$-bit approximation to $\varphi$ (the phase that we want to estimate). From the first part of the circuit we have this state: $$\frac{1}{2^{t/2}} \sum\limits_{k=0}^{2^t-1} e^{2 \pi i \varphi k}|k\rangle$$ Applying the inverse quantum Fourier transform we have: $$\frac{1}{2^t} \sum\limits_{k,l=0}^{2^t-1} e^{\frac{-2\pi i k l}{2^t}} e^{2 \pi i \varphi k} |l\rangle$$ Then they define $\alpha_l$ as the amplitude of $|(b+l) \bmod{2^t}\rangle$. Then we want to bound the probability of obtaining a value of $m$ such that $|m-b|>e$: $$\sum\limits_{-2^{t-1} < l \le -(e+1)} |\alpha_l|^2 + \sum\limits_{e+1 \le l \le 2^{t-1}} |\alpha_l|^2$$ I understand the summation limits involving $e$, but not where the limits $\pm 2^{t-1}$ come from.
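The amplitudes $\alpha_l$ can be computed directly for small $t$; a numerical sketch (the choices $t=6$ and $\varphi=0.3$ are arbitrary, with $\varphi$ not exactly representable in $t$ bits) showing the probability mass concentrates at $b$:

```python
import numpy as np

# Numerically reproduce the amplitudes after the inverse QFT for small t.
t = 6
N = 2**t
phi = 0.3              # phase to estimate (not exactly a t-bit fraction)
b = round(phi * N) % N # b / 2^t is the best t-bit approximation to phi

k = np.arange(N)
# amplitude of |m>: (1/N) * sum_k exp(2 pi i k (phi - m/N))
amp = np.array([np.exp(2j * np.pi * k * (phi - m / N)).sum() / N
                for m in range(N)])
prob = np.abs(amp) ** 2

assert abs(prob.sum() - 1.0) < 1e-10  # the inverse QFT is unitary
assert prob[b] > 0.4                  # mass at b is at least 4/pi^2
```

The ranges $-2^{t-1} < l \le 2^{t-1}$ simply enumerate each residue $(b+l) \bmod 2^t$ exactly once, with $l$ chosen in the window centred at $0$ so that $|l|$ measures the circular distance from $b$.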
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $$ie(P_A+P_B)^{\mu}$$ External Boson: $1$; Photon: $\epsilon_{\mu}$. Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google but I am not getting good answers on some of them. So I want to ask you to suggest a good app for studying the history of scientists? I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$ under the assumption which fulfilled continuity. My question is that would it be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused about why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of infinite length to reach the event horizon from a hovering ship." From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago So in Q is for Quantum there's a box called PETE that has a 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white, and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (the possible outcomes, I suppose). For example a white ball coming into a PETE box will have output misty WB (it can come out as white or black). But the misty of a black ball is W-B or -WB (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, which is very accurate.
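The role of the minus sign can be sketched with a toy linear-algebra model (my own illustration, not taken from the book): represent W and B as basis vectors and a PETE box as a matrix sending W to W + B and B to W − B (normalised). Two PETE boxes then compose to the identity, which is exactly the white-in/white-out behaviour; a sign-free box could not do this:

```python
import numpy as np

W = np.array([1.0, 0.0])          # white ball
B = np.array([0.0, 1.0])          # black ball

# PETE box as described: W -> W + B, B -> W - B (normalised)
PETE = np.array([[1, 1],
                 [1, -1]]) / np.sqrt(2)

# Hypothetical sign-free box: W -> W + B, B -> W + B (note: not even unitary)
NAIVE = np.array([[1, 1],
                  [1, 1]]) / np.sqrt(2)

print(PETE @ (PETE @ W))          # two PETE boxes restore a definite white ball
print(NAIVE @ (NAIVE @ W))        # without the minus, the output stays mixed
```

So the minus is what lets the B-through-both-boxes paths interfere destructively and cancel, leaving a definite colour after two boxes.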
If you want to experiment with lenses and optics, then you may use Mitsubishi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/complexity/inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt optimistic nihilism youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. One form of the mnemonic is done by counting on the knuckles of one hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master's thesis on the subject. I looked around a bit and it sounds cool, but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it.
An Arithmetic Sequence is a sequence in which each term differs from the previous one by the same fixed number. It can also be referred to as an arithmetic progression. For example, 2, 5, 8, 11, … 10, 20, 30, 40, … 6, 4, 2, 0, -2, … Algebraic Definition of Arithmetic Sequence If \( \{u_n\} \) is arithmetic, then \( u_{n+1} - u_n = d \) for all positive integers \( n \), where \( d \) is a constant called the common difference. If \( a, \ b \) and \( c \) are any consecutive terms of an arithmetic sequence then \( \begin{aligned} \displaystyle \require{color} b - a &= c - b &\color{green} \text{equating common differences} \\ 2b &= a + c \\ \therefore b &= \frac{a+c}{2} \\ \end{aligned} \\ \) So, the middle term is the arithmetic mean of the terms on either side of it. The General Term Formula Suppose the first term of an arithmetic sequence is \( u_1 \) and the common difference is \( d \). Then \( u_n = u_1 + (n-1)d \) This formula can also be written in the following form: \( T_n = a + (n-1)d \) where the first term of the arithmetic sequence is \( a \) and the common difference is \( d \). Practice Questions of Arithmetic Sequence Question 1 Consider the arithmetic sequence 4, 7, 10, 13, …; find a formula for the general term \( u_n \).
\( \begin{aligned} \displaystyle u_1 &= 4 &\color{green} \text{the first term} \\ 7 - 4 &= 3 \\ 10 - 7 &= 3 \\ 13 - 10 &= 3 \\ d &= 3 &\color{green} \text{the common difference} \\ \therefore u_n &= 4 + (n-1) \times 3 \\ &= 4 + 3n - 3 \\ &= 1 + 3n \\ \end{aligned} \\ \) Question 2 Find the 100th term of the arithmetic sequence 101, 97, 93, 89, … \( \begin{aligned} \displaystyle u_1 &= 101 &\color{green} \text{the first term} \\ 97 - 101 &= -4 \\ 93 - 97 &= -4 \\ 89 - 93 &= -4 \\ d &= -4 &\color{green} \text{the common difference} \\ u_n &= 101 + (n-1) \times -4 \\ &= 101 - 4n + 4 \\ &= 105 - 4n \\ \therefore u_{100} &= 105 - 4 \times 100 \\ &= -295 \\ \end{aligned} \\ \) Question 3 Find \( x \) given that \( 3x+1, \ x,\) and \( -3 \) are consecutive terms of an arithmetic sequence. \( \begin{aligned} \displaystyle x &= \frac{(3x+1)+(-3)}{2} \\ 2x &= 3x-2 \\ \therefore x &= 2 \\ \end{aligned} \\ \) Question 4 Find the general term \( u_n \) for an arithmetic sequence with \( u_4 = 3 \) and \( u_7 = -12 \). \( \begin{aligned} \displaystyle u_4 &= u_1 + 3d = 3 \color{green} \cdots \text{(1)} \\ u_7 &= u_1 + 6d = -12 \color{green} \cdots \text{(2)} \\ (1) - (2) \\ -3d &= 15 \\ d &= -5 \\ u_1 + 3 \times (-5) &= 3 &\color{green} \text{substitute } d=-5 \text{ into (1)} \\ u_1 &= 18 \\ u_n &= 18 + (n-1) \times (-5) \\ &= 18 - 5n + 5 \\ \therefore u_n &= 23 - 5n \\ \end{aligned} \\ \) Question 5 Insert four numbers between 3 and 23 so that all six numbers are in arithmetic sequence. \( \begin{aligned} \displaystyle \require{color} \text{Six numbers are } 3, \ 3+d, \ 3+2d, \ 3+3d, \ 3+4d, \ 3+5d. &\color{green} \ \ d \text{ is the common difference} \\ 3 + 5d &= 23 \\ 5d &= 20 \\ d &= 4 \\ 3+4, \ 3+2 \times 4, \ 3+3 \times 4, \ 3+4 \times 4 \\ \therefore 7, \ 11, \ 15, \ 19 \\ \end{aligned} \\ \)
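The worked answers above can be spot-checked with a few lines of code (a throwaway sketch using the general term formula \( u_n = u_1 + (n-1)d \)):

```python
def u(n, u1, d):
    """n-th term of an arithmetic sequence: u_n = u1 + (n - 1) * d."""
    return u1 + (n - 1) * d

# Question 1: 4, 7, 10, 13, ... with d = 3, so u_n = 1 + 3n
print([u(n, 4, 3) for n in range(1, 5)])        # [4, 7, 10, 13]

# Question 2: u_1 = 101, d = -4, so u_100 = 105 - 4*100
print(u(100, 101, -4))                          # -295

# Question 4: u_1 = 18, d = -5 recovers u_4 = 3 and u_7 = -12
print(u(4, 18, -5), u(7, 18, -5))               # 3 -12
```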
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki:- The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on an $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative — why would someone come up with something like that? I mean it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, lands you at a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. Why is that? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle-homs $TM \to E$) Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a level-$0$ exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, the Riemann curvature tensor Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\Big[\frac{1}{2}\big(\frac{X_{1}+\dots+X_{n}}{n}\big)^2\Big] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ.
Remember, girls are not as significant as you might think; design an attachment for a cordless drill and a fleshlight that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
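The claim that $S_4$ has a subgroup of every order dividing 24 is small enough to brute-force (a sketch; the generator choices mirror the discussion above, with permutations of $\{0,1,2,3\}$ written as tuples of images):

```python
def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations of {0,1,2,3} as tuples of images."""
    return tuple(p[i] for i in q)

def closure(gens):
    """Subgroup generated by gens: close a finite set under composition."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

witnesses = {
    1:  [],
    2:  [(1, 0, 2, 3)],                     # <(0 1)>
    3:  [(1, 2, 0, 3)],                     # <(0 1 2)>
    4:  [(1, 2, 3, 0)],                     # <(0 1 2 3)>
    6:  [(1, 0, 2, 3), (1, 2, 0, 3)],       # S_3 acting on {0,1,2}
    8:  [(1, 2, 3, 0), (2, 1, 0, 3)],       # dihedral 2-Sylow
    12: [(1, 2, 0, 3), (1, 3, 2, 0)],       # A_4 = <(0 1 2), (0 1 3)>
    24: [(1, 0, 2, 3), (1, 2, 3, 0)],       # all of S_4
}
for d, gens in witnesses.items():
    print(d, len(closure(gens)))
```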
[WP 34s] Simple sum procedure 06-13-2015, 07:19 AM (This post was last modified: 06-16-2015 12:56 AM by Marcio.) Post: #1 [WP 34s] Simple sum procedure Hello all, Would you be kind enough to explain how the \(\sum\) works, especially with functions like \(\frac{1}{x}\)? For example: Compute the following: \(\sum_{1}^{10}\frac{1}{x}\) Answer \(\approx\) 2.92896825397 I tried creating a program as simple as 1/x but somehow the routine fills the stack with zeros. Thanks 06-13-2015, 08:42 AM Post: #2 RE: [WP 34s] Simple sum procedure Enter this program: Code: 001: LBL 00 Return to run mode and enter: 10.001 ∑ 00 The result is 2.92896825396854 Pauli 06-13-2015, 12:36 PM (This post was last modified: 06-13-2015 01:03 PM by Marcio.) Post: #3 RE: [WP 34s] Simple sum procedure Hello Paul, Would you please point me to a reference where I can read about this method of entering data in detail? I thought it was as simple as \(1\) ENTER \(10\) \(\sum\) \(00\). That is probably why the routine was filling the stack with zeros and returning \(+\infty\) and an error message. Thank you. **EDIT: From a quick scan of the 34s user's manual, I saw that the data should be entered using the cccccc.fffii format. Alright, if that is how it is supposed to work, allow me to ask a couple of questions: How do I separate the lower limit from the increment? Do those 3 f's mean the number cannot be greater than 999? Marcio 06-13-2015, 12:57 PM Post: #4 RE: [WP 34s] Simple sum procedure Here is a reference: http://sourceforge.net/projects/wp34s/fi...f/download on page 112 06-13-2015, 01:05 PM Post: #5 RE: [WP 34s] Simple sum procedure OK. Thank you. Questions answered! 06-13-2015, 08:04 PM Post: #6 RE: [WP 34s] Simple sum procedure (06-13-2015 12:57 PM)Thomas_Sch Wrote: Here is a reference: I didn't find the v3.1 manual's remarks in that section helpful at all, particularly because it had no example and contained an error.
But the same section on page 174 of the v3.3 manual explained it very well. 06-13-2015, 10:13 PM Post: #7 RE: [WP 34s] Simple sum procedure (06-13-2015 08:04 PM)striegel Wrote: That's correct. (06-13-2015 12:57 PM)Thomas_Sch Wrote: Here is a reference: I didn't find the v3.1 manual's remarks in that section helpful at all, particularly because it had no example and contained an error. But the same section on page 174 of the v3.3 manual explained it very well. But the manual v3.3 is not available for all users without ordering, so I had chosen the manual v3.1 instead. 06-14-2015, 06:53 AM (This post was last modified: 06-14-2015 06:55 AM by Thomas Klemm.) Post: #8 RE: [WP 34s] Simple sum procedure (06-13-2015 08:04 PM)striegel Wrote: I didn't find the v3.1 manual's remarks in that section helpful at all WP 34S pocket reference Σ lbl x → r Compute a sum using the routine specified at LBL. Initially, X contains the loop control number in the format cccccc.fffii and the sum is set to 0. Each run through the routine specified by lbl computes a summand. Then, this summand is added to the sum; the operation then decrements cccccc by ii and repeats until cccccc ≤ fff. HTH Thomas
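For readers without the manual at hand, the loop-control format can be emulated in a few lines (a sketch; the exact stop test is inferred from the thread's worked example, where 10.001 sums 1/x for x = 10 down to 1):

```python
def wp34s_sum(loop_control, routine):
    """Rough emulation of the WP 34s Sigma instruction.

    loop_control uses the cccccc.fffii format: start value cccccc,
    limit fff, decrement ii (00 means a default step of 1).
    """
    ccc = int(loop_control)
    frac = round((loop_control - ccc) * 100000)   # the five fractional digits fffii
    fff, ii = divmod(frac, 100)
    ii = ii or 1
    total = 0.0
    while ccc >= fff:            # assumption: the limit value itself is included
        total += routine(ccc)
        ccc -= ii
    return total

print(wp34s_sum(10.001, lambda x: 1 / x))        # the thread's H_10 example
```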
Let's look at all the different criteria (reflexivity, symmetry or antisymmetry, and transitivity) for the first relation, $aR_1b\Leftrightarrow a=2b$: a) Is it reflexive, or antireflexive, or neither? For this we want to test $aR_1a$ — or in other words, whether $a=2a$. This can hold if $a=0$, but otherwise it never holds. So on $\Bbb{Z}$ the relation is neither reflexive nor antireflexive; but on $\Bbb{N}$ (the set of 'whole numbers', not counting 0) it would be antireflexive, since $a=2a$ can never hold there. b) Is it symmetric, or antisymmetric, or neither? Here we want to compare $aR_1b$ and $bR_1a$ where $a$ and $b$ are distinct values; in other words, can we have both $a=2b$ and $b=2a$? Dividing by 2, the latter is equivalent to $a=\frac{1}{2}b$, and the only way we can have both $a=2b$ and $a=\frac{1}{2}b$ is to have $b=0$, in which case $a=0$ — but since we said that $a$ and $b$ had to be distinct, this can't hold. In other words, if we have $aR_1b$ for $a\neq b$ we can never have $bR_1a$; the relation is antisymmetric. c) Is it transitive? (Note that we never ask about 'antitransitivity', even though the concept hypothetically makes sense). For this, we want to figure out if $(aR_1b)\wedge(bR_1c)\implies(aR_1c)$ — in other words, if $a=2b$ and $b=2c$, is $a=2c$? Well, if $a=2b$ and $b=2c$, then we have $a=2b=2(2c)=4c$, and $4c=2c$ only when $c=0$; so, for instance, $4\,R_1\,2$ and $2\,R_1\,1$ but not $4\,R_1\,1$, and the relation is not transitive. Since this has the feel of homework, I'll let you do the second one yourself, but it should be relatively straightforward (as long as you know a little bit about divisibility).
For looking at reflexivity, you'll have special cases again where $a=0$ and also where $a=\pm1$ to keep an eye out for; for symmetry you should be able to use the divisibility to say something about the relative order of $a$ and $b$; and for transitivity you should try to figure out how to 'staple' the two conditions $a^2|b$ and $b^2|c$ together to see if you can figure out whether $a^2|c$.
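For exercises like these, a brute-force check over a small slice of $\Bbb Z$ is a handy sanity test before writing the proof (a throwaway sketch; $R_1$ is the relation $a = 2b$ discussed above):

```python
def properties(rel, dom):
    """Check the standard relation properties of rel over the finite domain dom."""
    pairs = [(a, b) for a in dom for b in dom if rel(a, b)]
    return {
        "reflexive":     all(rel(a, a) for a in dom),
        "antireflexive": not any(rel(a, a) for a in dom),
        "symmetric":     all(rel(b, a) for a, b in pairs),
        "antisymmetric": all(not rel(b, a) for a, b in pairs if a != b),
        "transitive":    all(rel(a, c) for a, b in pairs
                             for c in dom if rel(b, c)),
    }

Z = range(-10, 11)
props = properties(lambda a, b: a == 2 * b, Z)
print(props)
# R_1 comes out antisymmetric but neither reflexive, symmetric, nor transitive
```

Note that a finite check can only refute a property or lend confidence; the proof still has to handle all of $\Bbb Z$.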
Let $f\colon X \to Y$ and $g\colon Y \to X$ be functions. Assume $g \circ f$ is bijective. Prove $f$ is injective and $g$ is surjective. Approach: if $g \circ f$ is bijective then $g \circ f$ is one-to-one; if $g \circ f$ is bijective then $g \circ f$ is onto. So we know $$(g \circ f)(a_1)=(g \circ f)(a_2) \text{ implies } a_1=a_2$$ $$\forall b\in X, \exists a\in X \text{ such that } (g \circ f)(a)=b$$ so we have $$g(f(a_1))=g(f(a_2))$$ $$g(f(a))=b$$ From here, I am stuck. What's the next thing to consider?
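A hint for how the two displayed facts get used (a sketch, not the only route):

```latex
% f injective: start from f(a_1) = f(a_2), not from g \circ f.
f(a_1) = f(a_2)
  \;\Longrightarrow\; g(f(a_1)) = g(f(a_2))        % apply g to both sides
  \;\Longrightarrow\; (g \circ f)(a_1) = (g \circ f)(a_2)
  \;\Longrightarrow\; a_1 = a_2.                    % g \circ f is one-to-one
% g surjective: given b \in X, surjectivity of g \circ f gives a with
% (g \circ f)(a) = b; then y := f(a) \in Y satisfies g(y) = b.
```

In other words, injectivity of $f$ borrows the injectivity of $g \circ f$, and surjectivity of $g$ borrows its surjectivity, with $f(a)$ as the witness.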
If we stretch the definition of a Feynman diagram, then yes: the technique can be applied to any problem where you use perturbation theory. But if by Feynman diagram you mean the exact same philosophy behind QFT, then in principle the answer is no: it only works in those problems where you have the same algebraic structure as QFT. In QFT, there are two approaches to Feynman diagrams: canonical quantisation and path integrals. The latter can be summarised in the gaussian-type integrals$$g_n\equiv\int_{-\infty}^{+\infty}x^{2n}\mathrm e^{-\frac{1}{2}ax^2}\,\mathrm dx=\sqrt{\frac{2\pi}{a}}a^{-n}(2n-1)!! \tag{1}$$ Feynman diagrams arise when we try to compute $g_n$ using combinatorial arguments together with the generating integral$$z_j=\int_{-\infty}^{+\infty} \mathrm e^{-\frac{1}{2}ax^2+jx}\,\mathrm dx \tag{2}$$where, for example, $g_1=[\partial_j^2 z_j]_{j=0}$. For more details, see Mathematical Ideas and Notions of Quantum Field Theory. This means: whenever you can invoke combinatorics in some problem at hand, then in principle you can use diagrams to represent the different combinations, which in turn means you can restate the problem through Feynman diagrams. Canonical quantisation, on the other hand, deals with operators with the algebraic structure$$[a_i,a_j]=0\qquad [a_i,a_j^\dagger]\sim \delta_{ij} \tag{3}$$ In this case, you can use this algebraic structure to prove Wick's theorem, which is (almost) the same as Feynman diagrams. This means: if you have any theory on some Hilbert space, where the operators satisfy $(3)$ (e.g., some Sturm-Liouville problem), then in principle you can use Feynman diagrams to solve problems. The trivial example is, of course, the differential operator $a:L^2(\mathbb R)\to L^2(\mathbb R)$$$a f(x)=\left(x+\frac{\mathrm d}{\mathrm dx}\right)f(x) \tag{4}$$i.e., the ladder operator of the quantum harmonic oscillator.
You can use Feynman diagrams to calculate any "expectation value"$$\int_{-\infty}^{+\infty} p(x) f(x)\,\mathrm dx \tag{5}$$where $p(x)$ is any polynomial. As the eigenfunctions of $a^\dagger a$ are (Hermite) polynomials times a gaussian exponential, we get $(1)$ back. To sum up: if we extend the notion of Feynman diagrams, then I guess you could use them for most problems (though I'm not sure of the usefulness of this). On the other hand, Feynman diagrams in the standard sense can only be used if you are doing the same thing they were invented for, that is, path integrals, or something that shares the same algebraic structure. A nice example of the use of Feynman diagrams for general problems is given in Solving Classical Field Equations, where the author explains how diagrams can be used to solve non-linear PDEs perturbatively. In this sense, one could say many problems in physics can be solved using Feynman diagrams, because of the ubiquity of differential equations in physics. Another example of the use of Feynman diagrams for perturbation theory is given in this nice post by QMechanic, where you can see that Feynman diagrams can be used in (non-relativistic) quantum mechanics to simplify the evaluation of higher-order terms in perturbation theory.
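Equation (1) is easy to sanity-check numerically (a throwaway sketch; the integration grid and the value of $a$ are arbitrary choices):

```python
import numpy as np

def double_factorial(m):
    """(2n - 1)!! with the convention (-1)!! = 1."""
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

a = 1.7                                        # arbitrary positive width parameter
x = np.linspace(-12.0, 12.0, 200001)           # integrand is ~0 at the endpoints
h = x[1] - x[0]
numeric, closed = [], []
for n in range(4):
    # Riemann sum of x^(2n) exp(-a x^2 / 2) vs the closed form in (1)
    numeric.append(float((x ** (2 * n) * np.exp(-a * x ** 2 / 2)).sum() * h))
    closed.append(np.sqrt(2 * np.pi / a) * a ** (-n) * double_factorial(2 * n - 1))
print(list(zip(numeric, closed)))
```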
One method that was suggested to me is to look at a scree plot and check for an "elbow" to determine the correct number of PCs to use. But if the plot is not clear, does R have a calculation to determine the number? fit <- princomp(mydata, cor=TRUE) The following article, Component retention in principal component analysis with application to cDNA microarray data by Cangelosi and Goriely, gives a rather nice overview of the standard rules of thumb for detecting the number of components in a study (scree plot, proportion of total variance explained, average eigenvalue rule, log-eigenvalue diagram, etc.). Most of them are quite straightforward to implement in R. In general, if your scree plot is very inconclusive then you just need to "pick your poison". There is no absolute right or wrong for any data, as in reality the number of PCs to use actually depends on your understanding of the problem. The only data-set you can "really" know the dimensionality of is the one you constructed yourself. :-) Principal components at the end of the day provide the optimal decomposition of the data under an RSS metric (where as a by-product you get each component to represent a principal mode of variation), and including or excluding a given number of components dictates your perception about the dimensionality of your problem. As a matter of personal preference, I like Minka's approach on this, Automatic choice of dimensionality for PCA, which is based on a probabilistic interpretation of PCA; but then again, you get into the game of trying to model the likelihood of your data for a given dimensionality. (The link provides Matlab code if you wish to follow this rationale.) Try to understand your data more, e.g.:
Do you really believe that 99.99% of your data-set's variation is due to your model's covariates? If not, you probably don't need to include dimensions that exhibit such a small proportion of total variance. Do you think that in reality a component reflects variation below a threshold of just noticeable differences? That again probably means that there is little relevance in including that component in your analysis. In any case, good luck and check your data carefully. (Plotting them works wonders also.) There has been very nice subsequent work on this problem in the past few years since this question was originally asked and answered. I highly recommend the following paper by Gavish and Donoho: The Optimal Hard Threshold for Singular Values is 4/sqrt(3) Their result is based on asymptotic analysis (i.e. there is a well-defined optimal solution as your data matrix becomes infinitely large), but they show impressive numerical results indicating that the asymptotically optimal procedure works for small and realistically sized datasets, even under different noise models. Essentially, the optimal procedure boils down to estimating the noise, $\sigma$, added to each element of the matrix. Based on this you calculate a threshold and remove principal components whose singular value falls below the threshold. For a square $n \times n$ matrix, the proportionality constant 4/sqrt(3) shows up as suggested in the title: $$\lambda = \frac{4\sigma\sqrt{n}}{\sqrt{3}}$$ They also explain the non-square case in the paper. They have a nice code supplement (in MATLAB) here, but the algorithms would be easy to implement in R or anywhere else: https://purl.stanford.edu/vg705qn9070 Caveats: The problem with Kaiser's criterion (all eigenvalues greater than one) is that the number of factors extracted is usually about one third the number of items or scales in the battery, regardless of whether many of the additional factors are noise.
Parallel analysis and the scree criterion are generally more accurate procedures for determining the number of factors to extract (according to classic texts by Harman and Ledyard Tucker, as well as more recent work by Wayne Velicer).
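The hard-threshold rule quoted above is easy to try outside MATLAB. A minimal sketch (mine, not from the authors' code supplement) that assumes you already have the singular values, e.g. from an SVD, and an estimate of the noise level $\sigma$:

```python
import math

def gavish_donoho_keep(singular_values, sigma, n):
    # Hard threshold lambda = 4*sigma*sqrt(n)/sqrt(3) for a square n x n
    # data matrix with i.i.d. noise of standard deviation sigma.
    thresh = 4 * sigma * math.sqrt(n) / math.sqrt(3)
    return [s for s in singular_values if s > thresh]

# Made-up singular values for illustration (say, from a 100 x 100 matrix):
kept = gavish_donoho_keep([50.0, 30.0, 10.0, 5.0], sigma=1.0, n=100)
```

In R, the same filter is a one-liner applied to `svd(mydata)$d`.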
Use the method of separation of variables to find a solution $u = u(x, y)$ to the PDE $$ \frac{\partial^2 u }{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 $$ on the infinite strip $(-\infty, \infty) \times(-1, 0)$ subject to the boundary conditions $ \frac{\partial u}{\partial y}(x, 0) = \cos(2x)$ and $\frac{\partial u}{\partial y}(x, -1) = 0.$ Attempt at the solution: Suppose $u(x, y) = f(x)g(y)$, then $$ f''(x) = -\lambda f(x), \ \ g''(y) = \lambda g(y). $$ For $\lambda > 0:$ $$ g(y) = c_1 \cosh(\sqrt \lambda y)+c_2\sinh(\sqrt \lambda y) $$ and $$ f(x) = d_1 \sin(\sqrt \lambda x) + d_2 \cos(\sqrt \lambda x) $$ The boundary condition at $y = -1$ implies $c_2 = c_1 \tanh(\sqrt \lambda)$. For $\lambda = 0:$ $$g(y) = e_1 y + e_2$$ and $$f(x) = f_1 x + f_2$$ The boundary conditions imply $g(y) = e_2.$ For $\lambda < 0:$ $$ g(y) = g_1 \cos(\sqrt{-\lambda}\, y) + g_2 \sin(\sqrt{-\lambda}\, y) $$ and $$f(x) = h_1 \cosh(\sqrt{-\lambda}\, x) + h_2 \sinh(\sqrt{-\lambda}\, x)$$ The boundary conditions imply $g_2 = -g_1 \tan \sqrt{-\lambda}.$ Due to the boundary condition on $y = 0$ I tried $\lambda = 4$ and obtained (after some algebra): $$ u(x, y) = c_1 d_2 \cosh(2y)\cos(2x) + c_2 d_2 \sinh(2y) \cos(2x). $$ After solving for $c_1, c_2,$ and $d_2$ I got $$ u(x, y) = \frac{1}{2} \tanh(2) \cosh(2y) \cos(2x) + \frac{1}{2} \sinh(2y) \cos(2x) $$ Question: This function satisfies $\Delta u = 0$ on the domain and the boundary condition at $y = 0$ but not at $y = -1$. I suspect I made an algebra error but I cannot find where it happened. Before continuing, is this ad-hoc approach best? Is there a more systematic way to find the correct values for $\lambda$?
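Not part of the original attempt: a quick finite-difference check (my own sketch) that locates the algebra slip, suggesting the first coefficient should be $\tfrac12\coth(2)$ rather than $\tfrac12\tanh(2)$; with $\coth$ both Neumann conditions hold:

```python
import math

def u_y(x, y, A, h=1e-5):
    # du/dy by central differences for the candidate
    # u(x, y) = A*cosh(2y)*cos(2x) + (1/2)*sinh(2y)*cos(2x)
    u = lambda yy: (A * math.cosh(2 * yy) + 0.5 * math.sinh(2 * yy)) * math.cos(2 * x)
    return (u(y + h) - u(y - h)) / (2 * h)

A_tanh = 0.5 * math.tanh(2)   # the coefficient obtained in the attempt
A_coth = 0.5 / math.tanh(2)   # candidate replacement: coth instead of tanh

# BC at y = 0 holds for any A: u_y(x, 0) = cos(2x).
# BC at y = -1 vanishes only for the coth coefficient, since
# 2*A*sinh(-2) + cosh(-2) = 0 exactly when A = cosh(2)/(2*sinh(2)).
```

This suggests the error is a $\tanh$/$\coth$ mix-up when solving the linear system for the coefficients.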
Use the definition of the derivative to find $f'(x)$ if $f(x) = \frac{3}{x^{0.5}+2} , x>0$ To begin with, the definition is $$f'(x) = \lim_{h \rightarrow 0}\frac{f(x+h)-f(x)}{h}$$ Thus, this is what I have so far, $$f'(x) = \lim_{h\rightarrow0} \frac{f(x+h)-f(x)}{h}= \lim_{h\rightarrow0} \frac{\frac{3}{(x+h)^{0.5}+2} - \frac{3}{(x^{0.5})+2)}}{h}$$ I don't know how to simplify this.. But using the quotient rule, I found that the derivative is $$\frac{-3}{2x^{0.5}(x^{0.5}+2)^2}$$
Problem: Let the sequence \( \{ a_n\} _{n \ge 1 } \) be defined by $$ a_n = \tan n \theta $$ where \( \tan \theta = 2 \). Show that for all \(n\), \( a_n \) is a rational number which can be written with an odd denominator. Discussion: This is simple induction. The claim is true for n = 1 (clearly \( \tan 1\theta = \tan \theta = 2 = \frac {2}{1} =\) a rational number with an odd denominator). Assume the claim is true for n = k; that is, suppose \( \tan k \theta = \frac{p}{2t+1} \), a rational number with odd denominator (since the denominator is odd we can express it as 2t + 1). Of course we know this to be valid for k = 1. Next we show it for n = k + 1: \( \displaystyle { \tan (k+1) \theta = \tan (k \theta + \theta) \\ = \frac { \tan k \theta + \tan \theta }{ 1- \tan \theta \tan k \theta } \\ = \frac { \frac {p}{2t +1} + 2 }{ 1 - 2 \frac{p}{2t+1} } \\ = \frac { p+ 4t + 2} { 1 + 2t - 2p} \\ = \frac {p+4t + 2} { 1 + 2(t-p) }} \) Since p and t are integers, the ratio we found is rational. The denominator is 1 + 2(t-p), which is even plus 1, hence odd. (Proved.)
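The induction can be machine-checked with exact rational arithmetic (my own sketch). `Fraction` keeps everything in lowest terms, and a number that has some representation with an odd denominator must also have an odd reduced denominator, since the reduced denominator divides any other:

```python
from fractions import Fraction

# a_{n+1} = tan((n+1)theta) = (a_n + 2) / (1 - 2 a_n), with a_1 = tan(theta) = 2,
# by the tangent addition formula.  (1 - 2 a_n cannot vanish: a_n = 1/2 would
# contradict the odd-denominator invariant.)
a = Fraction(2)
values = [a]
for _ in range(19):
    a = (a + 2) / (1 - 2 * a)
    values.append(a)
```

The first few terms are $2,\ -\tfrac43,\ \tfrac{2}{11},\ \dots$, all with odd denominators.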
Hello! I'm a senior in high school and I plan on getting a bachelor's in computer engineering. I want to minor in astronomy (or double major, although I heard engineering is very demanding so I would prefer to minor in astronomy). When I complete the four years, and if I realize that I want to be an... I have a notion that students involved in contest math/physics since high school 'develop' a better ability to pick up concepts (not in the context of contest math/physics) quicker and solve relatively more 'complex' problems. In high school I heard about contest math but never really immersed... First I worked out the dispersion relations, which is pretty easy: ##M \ddot x_j = K x_{j-1} + K x_{j+1} - 2K x_j -mg \frac {x_j} {l} ## (all t-derivatives) We know ##x_j## will be in the form ##Ae^{ijka}e^{-i\omega t}## so the above becomes: ## -\omega^2M = K (e^{-ika}+e^{ika}-2)-\frac {g}... Again I am really confused, but I just put the travelling wave as: ##\psi(x,t) = Dcos(kx- \omega t)## for positive x, ##\psi(x,t) = Dcos(kx+ \omega t)## for negative x. Then I simply differentiated and plugged in ##x=0## ##F(t) = - T D k sin(\omega t)## and from this ## \langle P \rangle = T D^2... So I decided to go for two degrees instead of one, so the goal is two associate degrees. One in Physics and the other in Mathematics; anyone have any advice on this math-heavy course of action I have set for myself? I am currently a junior in high school and recently, my guidance counselor has been asking me a lot of questions about what I want to major in. Within the last 6-8 months, I have been leaning heavily towards a math major. That was until I started my calc class this year...I'm in AP Calc BC... So I am a high school freshman student this school year. We have this career guidance presentation every once in a while by our counselors. I wanted to be an engineer but I realized something. I don't know which engineer I'd want to pursue. I've limited it down to two. Mechatronics or...
Hey guys, I decided to join this page since it helped me out a lot with my physics-related homework. Tbh I suck at physics and I try to work hard to learn but I just don't get it at all. Anyways, looking forward to helping each other out here :) 1. Homework Statement: Solve the following differential equations/initial value problems: (cos x) y' + (sin x) y = sin 2x 2. Homework Equations: I've been attempting to use the trig ID sin 2x = 2 sin x cos x. I am also trying to solve this problem by using p(x)/P(x) and Q(x) 3. The Attempt at a... 1. Homework Statement: Solve the following differential equations/initial value problem: y^(4) - y'' - 2y' + 2y = 0 Hint: e^-x sin x is a solution 2. Homework Equations: I was attempting to solve this problem by using a characteristic equation. 3. The Attempt at a Solution: y'''' - y'' - 2y' + ... I am a Nepalese 12th grade student and I will be applying to a few US colleges for the Class of 2023. I am seeking to pursue a PhD in some field of physics that will pique my interest by the end of grads. I won the national physics olympiad and would be participating in IPhO now had it not been... Hi! I'm about to take my senior year in high school, and I was originally planning on taking electrical engineering plus some extra classes in physics. I like to work with my hands and to create things, so engineering is sort of a no-brainer for me, but it's also very important for me to... Translation: A man with a mental illness tried to kill himself with a shot in the head. Instead of dying, the bullet struck the part of the brain that caused the mental disorder; he healed and became a great student in college. I know that this page is not for these things. But I have a...
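The first homework equation above is linear, and although the attempt is cut off, the standard integrating-factor route can be sketched. The closed form below is my own derivation (divide by cos x, use the integrating factor sec x), not taken from the thread, and is checked numerically:

```python
import math

def y(x, C=3.0):
    # Candidate solution y = cos(x) * (C - 2*ln(cos x)) of
    # (cos x) y' + (sin x) y = sin 2x on (-pi/2, pi/2); C is the
    # free constant of integration.
    return math.cos(x) * (C - 2 * math.log(math.cos(x)))

x, h = 0.5, 1e-5
dy = (y(x + h) - y(x - h)) / (2 * h)          # numerical y'
residual = math.cos(x) * dy + math.sin(x) * y(x) - math.sin(2 * x)
```

The residual is at the level of finite-difference error, consistent with the closed form solving the ODE.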
As I’m doing more research and talking to my college advisor I learned that I don’t need a master’s degree to get into grad school and earn a doctorate, and that my field of study (Biomedical Physics) has various common classes with the Advanced Physics major. Would it be worth more to take an... Hello y'all! I'm a high school student and I'm looking for some advice:) I am very interested in physics, quantum mechanics in particular. I'm a bit of an overachiever, and I usually overthink everything. I come from a family of doctors, and although they don't pressure me into following in... Hello, I am a high school junior looking at going into a career in physics. Over the past few years, I have found that I enjoy physics, specifically in "understanding" the atom, and the mind boggling ideas of what happens at that level. I think I want to study particle physics or quantum... Hi everyone, I am a 10th grader studying in India. I want to know what it will take to get into universities like Harvard, MIT, etc. I have been getting 10 CGPA from class 6th to 9th, and I hope to score well in my boards. I am also a 5th grade pianist (Trinity) and I hope to complete up to 7... I am about to register for my final year of school and I have 21 more credits to go. Time wise, they all work together, but I was wondering if everyone thinks that 21 credits is a lot? I have never taken so many credits before in one semester. I could push graduation back one more semester but... I began to study mathematics in college because I was fascinated by the subject and have a naturally high degree of curiosity for it along with the rest of the sciences--but stress, depression and other factors take away that curiosity as I progress through each semester, leaving me bored... I was planning on going to university the coming academic year, just for the sake of learning and pushing myself to see what my limits are. I realize my chances of succeeding are slim.
I'm not a genius, and I do not think I have natural smarts for solving abstract problems. I have to work hard... I am currently in 9th grade and have been into astronomy and basic physics for a few years now. I'm thinking about (once enrolled obviously) majoring in physics. Only problems are: 1: Pretty good at math but struggle a bit and end up forgetting a lot of information (at least in algebra) and I... Sophomore Intended Mechanical Engineering here. I have a pretty nice schedule this time around, all STEM classes. Most are easy but I have Differential Equations, Physics II, and Statics which are the hard ones. I have a lot of free time to study over the next few weeks and wanted to try to... I'm a rather slow learner relative to other students. I take the understanding of the material to be of greater importance than rote memorization of it. If anyone looks at the forgetting curve of memorized material, you can appreciate how inefficient rote memorization is. I'm also quite lazy in... Last month, I graduated from college with a music degree...a musical theatre degree, to be specific. (Yeah yeah, I know.) I am already itching to learn more, and wanted to focus on physics, which I've been very interested in since high school. I thought about switching majors in college, but... I was admitted to a very good undergraduate physics program (Columbia University), but I think I would like to take a gap year (defer my admission) to prepare myself better. I do feel a bit intimidated as I come from Colombia and I would prefer it if I could further my studies before going to... I'm going to be taking 18 units in the fall. Classes are linear algebra, advanced calculus, statistics, and computer programming (c++). I usually work a lot but this time I've been thinking of talking to work and only working short hours on weekends, maybe 4 hours each for friday, saturday, and... Hello, I am a high school senior in both IB physics and IB computer science.
I have been accepted to college at UNC Chapel Hill next year, and I am trying to decide what to major in. I enjoy learning about physics a lot, but I also love computer science, and it seems like it has more job... I'm debating between Berkeley and UCSD and I am not sure what to do. I plan to go to grad school, and I am worried that it might be difficult to get a lot of research done at UCB, whereas I've heard it's easier to get into research at UCSD. Furthermore I'd start a semester late at UCB because I... Hi everyone, just a quick question about college applications. I'm in a STEM program at a public school, and some of the classes that I take are weighted as AP even though they are not official AP classes. Some people that I've talked to have seemed to disregard these as just the school being...
$$\frac{\left(\beta_{2}+\beta_{1}\right)^{\alpha_{1}+\alpha_{2}-1}B\left(\alpha_{1},\alpha_{2}\right)}{\beta_{1}^{\alpha_{1}}\beta_{2}^{\alpha_{2}}}=\frac{1}{\alpha_{1}\beta_{2}}{_{2}F_{1}}\left({1-\alpha_{2},1\atop 1+\alpha_{1}};-\tfrac{\beta_{1}}{\beta_{2}}\right)+\frac{1}{\alpha_{2}\beta_{1}}{_{2}F_{1}}\left({1-\alpha_{1},1\atop 1+\alpha_{2}};-\tfrac{\beta_{2}}{\beta_{1}}\right)$$ where $B(\alpha_{1},\alpha_{2})=\frac{\Gamma\left(\alpha_{1}\right)\cdot\Gamma\left(\alpha_{2}\right)}{\Gamma\left(\alpha_{1}+\alpha_{2}\right)}$ is the Beta function. To me it's interesting because: It has 4 independent parameters/variables. It relates $\frac{\beta_{1}}{\beta_{2}}$ and $\frac{\beta_{2}}{\beta_{1}}$. The 1 in the upper term can be manipulated by Hypergeometric relations or Generalized Hypergeometric relations: • DLMF 15.5: https://dlmf.nist.gov/15.5 • DLMF 16.3(i): https://dlmf.nist.gov/16.3.i If this type of identity is known I would be interested in any information. Is it possible that the original probability problem (see below) could be a source of new Hypergeometric identities? Background: While kibitzing on a paper by Aaron Hendrickson I noticed an equivalent form arising from the calculation of conditional probabilities for the gamma-difference distribution. Specifically, for $Y\sim\mathcal{GD}(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})$, which is the distribution of the difference of independent gamma variables, we have the following results. $$\operatorname{pr}(Y\leq0)=C_{Y}\frac{\Gamma(\alpha_{1}+\alpha_{2})}{\beta_{2}\alpha_{1}}{_{2}F_{1}}\left({1-\alpha_{2},1\atop 1+\alpha_{1}};-\tfrac{\beta_{1}}{\beta_{2}}\right),$$ $$\operatorname{pr}(Y\geq0)=C_{Y}\frac{\Gamma(\alpha_{1}+\alpha_{2})}{\beta_{1}\alpha_{2}}{_{2}F_{1}}\left({1-\alpha_{1},1\atop 1+\alpha_{2}};-\tfrac{\beta_{2}}{\beta_{1}}\right),$$ where $$C_{Y}=\beta_{1}^{\alpha_{1}}\beta_{2}^{\alpha_{2}}(\beta_{1}+\beta_{2})^{1-\alpha_{1}-\alpha_{2}},$$ for $\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}>0$.
Since $\operatorname{pr}(Y\leq0)+\operatorname{pr}(Y\geq0)=1$ (the overlap $\{Y=0\}$ has measure zero), the identity above follows. As far as I can tell the statement is also true numerically, but I have no idea how to prove it directly in terms of Hypergeometric functions.
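One cheap numerical check (my own sketch): choose $\alpha_1=2$, $\alpha_2=3$ so that $1-\alpha_1$ and $1-\alpha_2$ are negative integers; then both ${}_2F_1$ series terminate and the identity can be tested in floating point without any special-function library:

```python
import math

def hyp2f1(a, b, c, z, nterms=60):
    # Gauss series sum_k (a)_k (b)_k / (c)_k * z^k / k!.
    # Exact here because a is a negative integer, so the series terminates.
    total, term = 0.0, 1.0
    for k in range(nterms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Arbitrary test values; the alphas are chosen so 1 - alpha_i are negative integers.
a1, a2, b1, b2 = 2, 3, 1.5, 0.7

lhs = (b1 + b2) ** (a1 + a2 - 1) * beta_fn(a1, a2) / (b1 ** a1 * b2 ** a2)
rhs = (hyp2f1(1 - a2, 1, 1 + a1, -b1 / b2) / (a1 * b2)
       + hyp2f1(1 - a1, 1, 1 + a2, -b2 / b1) / (a2 * b1))
```

For these parameters both sides come out to $234256/92610$ exactly, so the floating-point difference is at machine precision.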
The Annals of Mathematical Statistics Ann. Math. Statist. Volume 43, Number 5 (1972), 1727-1731. Convergence in Distribution of Random Measures Abstract Let $(S, \mathscr{I})$ be a measurable space, $M$ the set of all finite measures on $\mathscr{I}, \mathscr{F}_M$ the $\sigma$-algebra generated by the family of all measurable cylindrical sets $\cap^k_{i=1} \{\mu \in M: \mu(A_i) \leqq a_i\}$. With each probability measure $P$ on $\mathscr{F}_M$ the family $\{P_{A_1,\cdots,A_k}\}$ of all finite-dimensional probability measures of the cylindrical sets is associated. The following problem is considered: Given a sequence $P^{(n)}$ of probability measures on $\mathscr{F}_M$ such that each sequence $P^{(n)}_{A_1,\cdots,A_k}$ converges weakly to a $k$-dimensional probability measure $P_{A_1,\cdots,A_k}$, does the family $\{P_{A_1,\cdots,A_k}\}$ generate a probability measure $P$ on $\mathscr{F}_M?$ It is proved that the answer is affirmative if $(S, \mathscr{I})$ is the Euclidean $n$-space with the $\sigma$-algebra of Borel sets. Article information Source Ann. Math. Statist., Volume 43, Number 5 (1972), 1727-1731. Dates First available in Project Euclid: 27 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aoms/1177692410 Digital Object Identifier doi:10.1214/aoms/1177692410 Mathematical Reviews number (MathSciNet) MR362426 Zentralblatt MATH identifier 0249.60003 JSTOR links.jstor.org Citation Jirina, Miloslav. Convergence in Distribution of Random Measures. Ann. Math. Statist. 43 (1972), no. 5, 1727--1731. doi:10.1214/aoms/1177692410. https://projecteuclid.org/euclid.aoms/1177692410
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2.
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Assume I have some positive numbers $a_1,\ldots,a_n$ and a number $k \in \mathbb{N}$. I want to partition these numbers into exactly $k$ sets $A_1,\ldots,A_k$ such that the weighted arithmetic mean $$\text{cost}(A_1,\ldots,A_k)=\sum_{i=1}^{k}\frac{|A_i|}{n}c(A_i)$$ is minimal, where $c(A_i)=\sum_{a \in A_i}a$ is simply the sum of all numbers in $A_i$. Is there actually a polynomial-time algorithm for this, or is the problem NP-hard? I tried to reduce it to some NP-hard problems but didn't get anywhere, especially because the numbers are positive and thus in an optimal partition big sets need to have smaller weight, which seems to be some kind of balancing problem instead of a packing problem (which I am more familiar with).
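For experimenting with small instances, a brute-force sketch (mine, not from any reference). One useful observation, labeled as my own: the cost can be rewritten as $\frac1n\sum_a a\cdot|A(a)|$, where $|A(a)|$ is the size of the set containing $a$, which is why large numbers prefer small sets:

```python
from itertools import product

def cost(blocks):
    # cost = sum_i |A_i|/n * c(A_i); equivalently (1/n) sum_a a*|block of a|
    n = sum(len(b) for b in blocks)
    return sum(len(b) / n * sum(b) for b in blocks)

def brute_force_min(nums, k):
    # Try every assignment of each number to one of k labelled sets,
    # keeping only assignments where no set is empty.
    best = None
    for labels in product(range(k), repeat=len(nums)):
        if len(set(labels)) != k:
            continue
        blocks = [[a for a, lab in zip(nums, labels) if lab == j] for j in range(k)]
        c = cost(blocks)
        if best is None or c < best:
            best = c
    return best
```

For example, for `[1, 2, 3]` and `k = 2` the optimum is the partition `{3}, {1, 2}` with cost 3.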
My question is: how does the expansion of the conditional probability density $P_{1|1}(y_1,t_1|y_2,t_1+\tau)$ get put into the form of (6.23) of Reichl$^1$? The term with the minus sign at the front seems a bit mysterious; could some explanation of it be given as part of an answer? Reichl says (I quote): Let us expand the conditional probability density $P_{1|1}(y_1,t_1|y_2,t_1+\tau)$ in a Taylor series for small $\tau$, in such a way that to each order in $\tau$ the normalization of $P_{1|1}(y_1,t_1|y_2,t_1+\tau) $ (cf. Eq. 6.13) is preserved. If we note that $P_{1|1}(y_1,t_1|y_2,t_1)=\delta(y_1-y_2)$ (cf. Eq. 6.12) we obtain \begin{align} P_{1|1}(y_1,t_1|y_2,t_1+\tau)&=\delta(y_1-y_2)-\tau\int dy W_{t_1}(y_1,y)\delta(y_1-y_2) \\ &+\tau W_{t_1}(y_1,y_2)\tag{6.23}\end{align} End of quote. NB: (6.12) and (6.13) are as follows $$P_1(y_2,t_2) =\int P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) dy_1 \tag{6.12}$$ $$\int P_{1|1}(y_1,t_1|y_2,t_2) dy_2=1 \tag{6.13} $$ Reference: 1) Reichl, L.E., A Modern Course in Statistical Physics, Arnold, London, 1980. Other Info I thought I might encourage answers if I first provided some of the hard grind. I must say typing up math equations is hard work; it's a pity I've no secretary. First some $\delta$ function material. The defining property, the $f(0)$ property, is, for any continuous $f(x)$ $$ \int^\infty _{-\infty}f(x) \delta (x) dx = f(0)$$ Also, \begin{align} \int^\infty _{-\infty}f(x) \delta (x-x_0) dx &= f(x_0) \nonumber \\ \int^\infty _{-\infty}f(x,y) \delta (x-x_0) dx &= f(x_0,y) \nonumber \end{align} With similar results for functions with more arguments.
A function with four arguments, two taking fixed values, like $P_{1|1}(y_1,t_1|y_2,t_1)$, can be represented as a new function with only two arguments; let us, in this case, call this function $h(y_1,y_2)$ with rule $$h(y_1,y_2)=P_{1|1}(y_1,t_1|y_2,t_1)$$ So \begin{align} h(y_1,y_2)&=0 , \quad y_1\neq y_2 \nonumber \\ \int^\infty _{-\infty} h(y_1,y_2) dy_2 &= 1 \nonumber \\ \int^\infty _{-\infty} h(y_1,y_2) f(y_2)dy_2 &= f(y_1) \int^\infty _{-\infty} h(y_1,y_2) dy_2 = f(y_1) \nonumber \end{align} We have that $h(y_1,y_2)$ behaves like $\delta (y_1-y_2)$ in an integral, so we put $$h(y_1,y_2)=\delta (y_1-y_2)=P_{1|1}(y_1,t_1|y_2,t_1) \tag{1}$$ The above material explains about $P_{1|1}(y_1,t_1|y_2,t_1)$ and part of it may also be useful later. Taylor expanding $P_{1|1}(y_1,t_1|y_2,t_1+\tau)$ to first order \begin{align} P_{1|1}(y_1,t_1|y_2,t_1+\tau)&= P_{1|1}(y_1,t_1|y_2,t_1) +\tau \left. \frac{\partial P_{1|1}(y_1,t_1|y_2,t)}{\partial t} \right|_{t=t_1} \nonumber\\ &+ f( y_1,t_1,y_2,\tau) \nonumber \end{align} where $ f( y_1,t_1,y_2,\tau)$ is a correction term to ensure normalisation in the sense of (6.13) $$\int P_{1|1}(y_1,t_1|y_2,t_1 +\tau) dy_2=1 $$ Let $$ \left. \frac{\partial P_{1|1}(z_1,t_1|z_2,t)}{\partial t} \right|_{t=t_1}= W_{t_1}(z_1,z_2) $$ Then \begin{align} \left. \frac{\partial P_{1|1}(y_1,t_1|y_2,t)}{\partial t} \right|_{t=t_1}= W_{t_1}(y_1,y_2) \nonumber\\ \left. \frac{\partial P_{1|1}(y_1,t_1|y,t)}{\partial t} \right|_{t=t_1}= W_{t_1}(y_1,y) \nonumber \end{align} Hence our Taylor expansion may be written as \begin{align} P_{1|1}(y_1,t_1|y_2,t_1+\tau)&=\delta(y_1-y_2)+\tau W_{t_1}(y_1,y_2) \nonumber\\ &+ f( y_1,t_1,y_2,\tau) \tag{2} \end{align} Somehow, we need to prove/justify that $$ f( y_1,t_1,y_2,\tau)= - \tau \int^\infty _{-\infty}dy W_{t_1}(y_1,y) \delta (y_1-y_2) \tag{3} $$
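A discrete sanity check of why the minus term is needed (my own illustration, not from Reichl): on a grid with spacing $h$, $\delta(y_1-y_2)$ becomes $(1/h)$ times the identity matrix, and with the correction term (3) every row of the discretized expansion (6.23) still integrates to 1, for an arbitrary $W$:

```python
import random

# Discretize y on m grid points with weight h; delta(y1 - y2) -> (1/h) * I.
m, h, tau = 5, 0.1, 0.01
random.seed(0)
W = [[random.random() for _ in range(m)] for _ in range(m)]  # arbitrary rates

P = [[(1.0 / h if i == j else 0.0)                                   # delta(y1 - y2)
      - tau * (1.0 / h if i == j else 0.0) * sum(W[i][k] * h for k in range(m))
      # ^ the "-tau * (integral of W) * delta" correction term (3)
      + tau * W[i][j]                                                # +tau * W
      for j in range(m)] for i in range(m)]

# "integral over y2" = row sum weighted by h; the correction makes this 1.
row_sums = [sum(P[i][j] * h for j in range(m)) for i in range(m)]
```

Dropping the middle term breaks the row sums by exactly $\tau\int W\,dy$, which is the whole point of (3).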
The question is to check which option holds true: there exists a map $f: \mathbb{Z}\rightarrow \mathbb{Q}$ such that $f$ (1) is bijective and increasing; (2) is onto and decreasing; (3) is bijective and satisfies $f(n)\geq 0$ if $n\leq 0$; (4) has uncountable image. First of all, any subset of $\mathbb{Q}$ is countable, so there is no point in looking at the last option. Now, as both $\mathbb{Z}$ and $\mathbb{Q}$ are countable, there could be a bijection. The first problem is I could not think of an explicit bijection (I am very sure one exists), and the second problem is, even if I find some function, will it satisfy the first or third possibility? Please do not just give an answer; give some hints and let me think about it. Thank you :)
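Hint rather than a full answer: an explicit bijection exists, built from any enumeration of the positive rationals. The sketch below (my construction choice, not from the question) uses the Calkin–Wilf successor function:

```python
from fractions import Fraction
from math import floor

def next_cw(q):
    # Calkin-Wilf successor: starting from 1, visits every positive
    # rational exactly once.
    return 1 / (2 * floor(q) - q + 1)

def positive_rationals(count):
    q, out = Fraction(1), []
    for _ in range(count):
        out.append(q)
        q = next_cw(q)
    return out

def z_to_q(n):
    # Bijection Z -> Q: 0 -> 0, n -> q_n and -n -> -q_n for n > 0.
    if n == 0:
        return Fraction(0)
    q = Fraction(1)
    for _ in range(abs(n) - 1):
        q = next_cw(q)
    return q if n > 0 else -q
```

Whether such an $f$ can also be made increasing is a separate question; think about what an order-preserving bijection would have to do between two consecutive integers.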
Let $A$ be an $n\times n$ positive integer-valued matrix, that is, every entry of $A$ is a positive integer. Let $\lambda$ be the Perron-Frobenius eigenvalue and $x = (x_1,...,x_n)^T$ the corresponding positive probability eigenvector: $\sum_i x_i =1, \ x_i > 0$. Denote by $H(x)$ the additive subgroup of $\mathbb R$ whose generators are the coordinates of $x, \ H(x) = < x_1,...,x_n >$. Fix any integer $k \geq 1$ and consider the set of positive integer-valued matrices $\mathcal B_k$ formed by all $n\times n$ matrices $B$ satisfying the following conditions: $\lambda^k$ is the Perron-Frobenius eigenvalue for $B$, and if $By = \lambda^k y,\ \sum_i y_i = 1, y_i >0,$ then $H(y) = H(x)$. My questions are as follows. (1) Is there an algorithm describing all matrices from $\mathcal B_k$? (2) How can one find at least one matrix $B$ in the set $\mathcal B_k$ different from $PA^kP^{-1}$ where $P$ is a permutation matrix? Comments: (i) The case when $\lambda $ is an integer is not interesting, so one can assume that $\lambda$ is an algebraic number. (ii) I asked a similar question before, but this one seems formulated in a more precise form. (iii) Of course, (2) is simpler than (1), and I actually need a constructive answer to (2). I'll be glad to see any comments, suggestions, references.
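For experiments, a dependency-free sketch (mine): power iteration recovers $\lambda$ and the probability eigenvector, and confirms that $A^k$ has Perron eigenvalue $\lambda^k$ with the same eigenvector $x$, so $H(y)=H(x)$ holds trivially for $B=A^k$; the interesting members of $\mathcal B_k$ are exactly the ones this construction cannot produce. The matrix below is an arbitrary example with irrational $\lambda = 1+\sqrt2$:

```python
import math

def mat_vec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def mat_mat(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def perron(A, iters=200):
    # Power iteration, normalizing the iterate to sum to 1 so the limit is
    # the probability eigenvector; for x with sum 1, sum(A x) -> lambda.
    n = len(A)
    x = [1.0 / n] * n
    lam = 0.0
    for _ in range(iters):
        y = mat_vec(A, x)
        lam = sum(y)
        x = [yi / lam for yi in y]
    return lam, x

A = [[1, 2], [1, 1]]                # Perron eigenvalue 1 + sqrt(2)
lam, x = perron(A)
lam2, x2 = perron(mat_mat(A, A))    # A^2: eigenvalue lam^2, same eigenvector
```

Here $x = (2-\sqrt2,\ \sqrt2-1)$, so $H(x)$ is the group generated by these two coordinates for both $A$ and $A^2$.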
After all, functors are well known! When you convert an object into another kind, surely functions in-between those objects also transform into a new breed. Let’s look at one example. Objects: topological spaces (X); functions between objects: continuous maps (f). Objects: groups (G); functions between objects: group homomorphisms (\( \phi \)). We wish to convert X into G (spaces into groups). If by some magical tool we can do such a conversion, clearly that process will also affect the respective types of functions. We will want the f’s to get converted into \(\phi \)’s. The word ‘convert’ is a bit strong. In fact, it is much better to use ‘extract’ instead. Here is the recipe for such a conversion. Start with two reasonably nice topological spaces X, Y. Choose preferred points in each space; suppose \( x \in X \) and \( y \in Y \). Now draw loops in X starting (and ending) at x (do the same with Y). We will declare two loops to be the same if one of them can be ‘pushed nicely’ into another. In the picture, the black and red loops are the same. But the red and green are not! Why not? Because when you start pushing the green loop toward the red one, you are forced to give up! There is a hole in your way. We have a set of loops now. Actually, each element of the set of loops is a representative of a bunch of loops. Each pair in that bunch can be pushed easily into one another. Declare G = { set of (bunches of) loops in X based at x }. What is the identity loop in G? The one that does nothing but sits at the base point. Formally, it is the constant map \( \alpha \) from a closed interval [0, 1] to the base point x, \( \alpha(t) = x \ \forall\, t\in[0,1] \). How do you combine two loops? You go along one first and then go along the other one. Clearly, this ‘combined’ loop hits the base point at least twice. But that is okay. What is the inverse loop? Just go along the same loop in the reverse direction.
So we have extracted a group from a space! Let us call this group \(G_X \) corresponding to the space X. Next: what happens to the functions? There were functions between topological spaces. There are functions between groups. We converted \( X \to G_X \). Similarly we may extract \( Y \to G_Y \). We are interested in a function f between X and Y. $ X \overset {f} \rightarrow Y $ After all, the loops are ‘living inside’ the space X and the space Y. When f takes points in X to points in Y, it is reasonable to expect that it may take loops to loops (that is, members of \( G_X \) to \( G_Y \)). Does it? Let us look at functions that take the preferred base point x of the space X to the preferred base point y of the space Y. What are loops? Take a segment. Bend it so that its endpoints meet at a point. Formally, it is a map \( \alpha: [0, 1] \to X \) such that \( \alpha (0) = \alpha (1) = x \) where x is the base point at which the loop is glued. Of course, instead of [0, 1], we can use any other closed interval. [0,1] is just more convenient. Anyway, so we have a formal definition of a loop. Now ‘hit’ the image of [0, 1] under \( \alpha \) with the function f. \( f \cdot \alpha \) is a map from [0, 1] to Y! After all \( f( \alpha (0) ) = f(x) = y \) (remember we are looking at functions f such that f(x) = y; that is, one that takes base point to base point). Also \( f( \alpha (1) ) = f(x) = y \). This is because \( \alpha (0) = \alpha (1) = x \). Therefore the loop \( \alpha \) in X has now become a loop \( f \cdot \alpha \) in Y. Wow! Things are looking good so far. f sends loops to loops. But if you remember, our group members are bunches of loops that can be pushed into one another. Does f send a bunch to a bunch? It does! Here is how: Suppose a particular bunch contains a Red loop (R) and a Green loop (G). That is, the red loop can be pushed into the green one.
f sends R to some loop f(R) and G to some loop f(G) in Y (we know it sends loops to loops). Are f(R) and f(G) members of the same bunch? If that happens we will be really happy! Because that would mean not only are loops becoming loops under the effect of f, but a bunch is becoming another bunch. We can make this happen using a very simple method: Push the loop R a little toward the loop G. Suppose at some moment of this motion the position of loop R is the loop R’. Clearly, the loop f(R) becomes f(R’) in this process. Keep on pushing R toward G. At each moment, f(R) will also deform into some loop inside space Y. Finally, R reaches G. But then f(R) reaches f(G)! Why? Because at each moment (of pushing activity), the function f was transporting the interim loop inside X to a loop inside Y. So the initial loop f(R) was also getting pushed somewhere. Since the loops inside X, in the pushing process, end up at G, the transported loops end up at f(G). You have guessed it right. A little more mathematical rigor is warranted here. However, our goal is to remain conversational. So if the ‘intuition’ is clear, then the rest can be achieved. So the ‘pushing the loop’ process inside the space X indirectly ‘pushes the image loops’ in the target space Y. Therefore bunches go to bunches. Functor: Clearly f sends \( \alpha \) (a loop inside space X) to \( f \cdot \alpha \) (a loop inside space Y). Moreover, it sends bunches to bunches. We have found a map between the groups \( G_X \) and \( G_Y \) (remember that members of \(G_X\) are the bunches of loops inside X, that members of \(G_Y\) are the bunches of loops inside Y, and that f takes bunches to bunches). Let us call the map from \( G_X \to G_Y \) given by \( [\alpha] \to [ f \cdot \alpha ] \), by \(f_*\). We are using the square brackets to indicate a ‘bunch’ instead of a ‘single loop’. We need to clarify whether \(f_*\) is a group homomorphism. We also want to know what happens to the identity element of \(G_X\) under \(f_*\).
\(f_*\) is a group homomorphism. Why? \(f_* ([\alpha]\cdot [\beta]) = [f \cdot (\alpha \cdot \beta)] \). That is, we are applying f to the combined loop \(\alpha \cdot \beta \). But this is the same as applying f on \(\alpha\) and then on \(\beta\) and then combining them inside Y; that is, performing \( (f \cdot \alpha) \cdot (f \cdot \beta) \). But this is in the bunch \( [f \cdot \alpha] \cdot [f \cdot \beta] = f_* ([\alpha])\cdot f_*( [\beta]) \). Using similar arguments, the identity element of \(G_X\) goes to the identity element of \(G_Y \). So clearly f (a function between spaces) gets converted into a group homomorphism \(f_* \) (a function between groups). But we have more! This ‘starring’ process is now converting maps between spaces into homomorphisms between groups (and of course spaces into groups). However, there are two more properties of this ‘starring’ process. The identity map between spaces gets converted into the identity group homomorphism. If you compose f and g (two maps between spaces) and then compute the star, it turns out you could have computed the stars of those maps first and then composed them. That is $ (f \cdot g)_{*} = f_* \cdot g_* $ It is an exercise to ‘say’ what these statements mean! Can you articulate it? The identity map between spaces gets converted into the identity group homomorphism. This one is easy, but we need to be a little careful. Here both spaces are the same: X. We are examining \( X \overset{f} \to X \) where f(x) = x for all \( x \in X \). Now it becomes absolutely obvious. After all, f takes each point to itself. Clearly, it will map each loop to itself and hence each bunch of loops to the same bunch of loops, and so on. Hence \(f_* \) is the identity group homomorphism from \( G_X \to G_X \). $ (f \cdot g)_{*} = f_* \cdot g_* $ Again we need to be careful about domains and co-domains. Here we have potentially three spaces involved!
$ X \overset {g} \rightarrow Y \overset {f} \rightarrow Z $ The recipe is simple: Start with a loop R in X. g takes that loop R to g(R), which happens to be a loop in Y. f takes the loop g(R) to the space Z, where the image is again a loop: f(g(R)). Is this in the same bunch as \( f_*(g_* ([R])) \)? But this is obvious because \( g_*([R]) = [g \cdot R ] \), and then \( f_*(g_*([R])) = f_*([g (R) ] ) = [ f(g(R)) ] \). $ \begin{array}{ccc} X & \overset {f} \rightarrow & Y \\ \downarrow & & \downarrow \\ G_X & \overset{f_*} \rightarrow & G_Y \end{array} $ The first row of the diagram has spaces and a map between the spaces. The second row has groups and a homomorphism between the groups. Our process extracted the second row from the first row. Not only that, it did so quite naturally: The identity map between spaces got converted into the identity homomorphism between groups. The composition of maps between spaces led to the composition of the respective homomorphisms. This ‘process’ is a functor.
Parabolic singular limit of a wave equation with localized boundary damping 1. Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Madrid 28040, Spain $\qquad\qquad \qquad\qquad \epsilon u_{t t} -\Delta u + \lambda u =f $ on $\Omega \times (0,T)$ $(P_{\epsilon, \lambda, \Gamma_0})\qquad\qquad u_t + \frac{\partial u}{\partial \vec{n}} =g$ on $\Gamma_1 \times (0,T) $ $\qquad\qquad u=0 $ on $\Gamma_0 \times (0,T)$ where $0< \epsilon \leq \epsilon_0$, $\Omega \subset \mathbb R^N$ is a regular open connected set, $\lambda \geq 0$ and $\Gamma = \Gamma_0\cup \Gamma_1$ is a partition of the boundary of $\Omega$. We will also consider the case where $\Gamma_0$ is empty (see below for more precise assumptions on $\lambda$, $\Omega$ and $\Gamma_0$, $\Gamma_1$). For this problem the corresponding formal singular perturbation at $\epsilon =0$ is $\qquad\qquad \qquad\qquad -\Delta u + \lambda u =f$ on $\Omega \times (0,T) $ $(P_{0, \lambda, \Gamma_0}) \qquad\qquad u_t + \frac{\partial u}{\partial \vec{n}} =g$ on $\Gamma_1 \times (0,T) $ $\qquad\qquad u=0 $ on $ \Gamma_0 \times (0,T)$ We are here concerned with the well-posedness of both problems in the non-homogeneous case, i.e. $f=f(t,x)$, $g=g(t,x)$, and with the convergence, as $\epsilon$ approaches $0$, of the solutions of $(P_{\epsilon, \lambda, \Gamma_0})$ to solutions of $(P_{0, \lambda, \Gamma_0})$. Mathematics Subject Classification: 35L10, 35J0. Citation: Aníbal Rodríguez-Bernal, Enrique Zuazua. Parabolic singular limit of a wave equation with localized boundary damping. Discrete & Continuous Dynamical Systems - A, 1995, 1 (3) : 303-346. doi: 10.3934/dcds.1995.1.303
In the context of reinforcement learning, a policy, $\pi$, is often defined as a function from the space of states, $\mathcal{S}$, to the space of actions, $\mathcal{A}$, that is, $\pi : \mathcal{S} \rightarrow \mathcal{A}$. This function is the "solution" to a problem, which is represented as a Markov decision process (MDP), so we often say that $\pi$ is a solution to the MDP. In general, we want to find the optimal policy $\pi^*$ for each MDP $\mathcal{M}$, that is, for each MDP $\mathcal{M}$, we want to find the policy which would make the agent behave optimally (that is, obtain the highest "cumulative future discounted reward", or, in short, the highest "return"). It is often the case that, in RL algorithms, e.g. Q-learning, people mention "policies" like $\epsilon$-greedy, greedy, soft-max, etc., without ever mentioning whether these policies are solutions to some MDP. It seems to me that these are two different types of policies: for example, the "greedy policy" always chooses the action with the highest expected return, no matter which state we are in; similarly for the "$\epsilon$-greedy policy"; on the other hand, a policy which is a solution to an MDP is a map between states and actions. What then is the relation between a policy which is the solution to an MDP and a policy like $\epsilon$-greedy? Is a policy like $\epsilon$-greedy a solution to any MDP? How can we formalise a policy like $\epsilon$-greedy in a similar way to how I formalised a policy which is the solution to an MDP? I understand that "$\epsilon$-greedy" can be called a policy because, in fact, in algorithms like Q-learning, it is used to select actions (i.e. it allows the agent to behave), and this is the fundamental definition of a policy.
Focus on the right problem I suspect you will find that the most important challenge is not finding a trilateration ordering, but rather choosing a strategy that minimizes the localization error. It is easy to find a trilateration ordering, as I will describe below. In contrast, minimizing the error is more challenging. Trilateration might actually not be the best way to minimize the error. For instance, when trying to localize a node, it is possible that you might get better accuracy by using four neighbors (e.g., doing multiple trilaterations) and averaging. Also, the order in which you do the localization might matter. If you choose a bad order, errors might accumulate. So, I suspect you might not be focusing on the right problem. Answering your question as posed I'm going to answer your problem as posed, even though I'm not sure it is the right problem. There is a trivial, straightforward algorithm to solve your problem. You simply enumerate all possibilities. That can be done as follows: Let $S$ denote the set of nodes that are currently localized. Initially, $S$ includes just the three seed nodes. Repeat the following, until $S$ includes all nodes (i.e., until $S=V$): find a node $v \in V \setminus S$ that is adjacent to at least three nodes in $S$, localize $v$ by trilateration from those neighbors, and add $v$ to $S$. The running time is $O(n^5)$, i.e., polynomial in the size of the graph. With standard optimizations, the running time can be reduced significantly. For instance, you can use a worklist $W$ of nodes that are adjacent to some element of $S$ but not in $S$ themselves (i.e., $W= \{w \in V \setminus S : \exists s \in S . w \in N(s)\}$). You can update this worklist efficiently. You can also use a priority queue to sort the elements of $W$ by their "degree" (i.e., the number of nodes in $S$ they are adjacent to), and you can update this ordering efficiently. But since you didn't list any specific running time requirements, this should be sufficient.
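A minimal Python sketch of this greedy ordering (the trilateration step itself is omitted; the adjacency data and node labels below are illustrative):

```python
def trilateration_order(neighbors, seeds):
    """Greedy version of the enumeration above: repeatedly localize any node
    with at least three already-localized neighbors (so trilateration is
    possible). `neighbors` maps each node to the set of adjacent nodes.
    Returns a valid ordering, or None if some nodes can never be localized."""
    S = set(seeds)
    order = list(seeds)
    progress = True
    while progress and S != set(neighbors):
        progress = False
        for v in sorted(set(neighbors) - S):
            if len(neighbors[v] & S) >= 3:
                S.add(v)
                order.append(v)
                progress = True
    return order if S == set(neighbors) else None

# Toy graph: nodes 1-3 are seeds; 4 touches all three seeds, 5 touches 2, 3, 4.
adj = {1: {4}, 2: {4, 5}, 3: {4, 5}, 4: {1, 2, 3, 5}, 5: {2, 3, 4}}
print(trilateration_order(adj, [1, 2, 3]))   # [1, 2, 3, 4, 5]
```

The repeated full scan keeps the sketch short; the worklist and priority-queue optimizations described above would replace the inner `for` loop.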
A better approach I suspect that a better approach is to try to use statistical methods to estimate a global best-fit for the location of all sensors, simultaneously. For instance, you might use least-squares methods. Let me illustrate what I mean. Suppose that you perform a bunch of measurements of the distances between pairs of sensors; create a graph $G=(V,E)$, where there is an edge $(v,w)$ if you have measured the distance between sensor $v$ and sensor $w$. Let $m(v,w)$ be the measured distance between $v$ and $w$. Now our goal is to find locations of all the sensors that best fit these measurements. In particular, we are looking for a map $L : V \to \mathbb{R}^2$, where $L(v)$ represents the location of sensor $v$. We are given the locations of three seed nodes, say $v_1,v_2,v_3$. Also, we are given approximate distance measures between pairs of nodes that are connected by an edge in $G$. We want to find a location map $L$ that minimizes the total error of the measurements. Thus, the cost of a location map $L$ might be given by the total squared error: $$\text{cost}(L) = \sum_{(v,w) \in E} (d(L(v),L(w)) - m(v,w))^2.$$ Here $d(p,q)$ represents the Euclidean distance between the two points $p,q \in \mathbb{R}^2$. Now we want to find the location map $L$ that minimizes $\text{cost}(L)$, subject to the requirement that $L(v_1),L(v_2),L(v_3)$ have their given values (the locations of the three seed nodes). This is an optimization problem that can be readily solved using standard methods, e.g., least-squares fitting. Your problem then decomposes into two parts: Select a set of measurements to perform. Solve the optimization problem, e.g., using least-squares best fit, to find the location of all sensors.
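Here is a rough sketch of the second part using `scipy.optimize.least_squares` on a small made-up instance (all coordinates, edges, noise levels, and the initial guess below are illustrative assumptions, not part of the problem as posed):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy instance: nodes 0-2 are the seeds with known locations; nodes 3-4 are
# unknown; m(v, w) are noisy range measurements along the graph edges.
true = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [3.0, 2.0], [1.0, 4.0]])
edges = [(0, 3), (1, 3), (2, 3), (0, 4), (2, 4), (3, 4)]
rng = np.random.default_rng(1)
m = {e: np.linalg.norm(true[e[0]] - true[e[1]]) + rng.normal(0.0, 0.01)
     for e in edges}

seeds = true[:3]                       # L(v_1), L(v_2), L(v_3) are pinned

def residuals(flat):
    # Stack the fixed seed locations with the current guess for nodes 3-4,
    # and return d(L(v), L(w)) - m(v, w) over all measured edges.
    L = np.vstack([seeds, flat.reshape(-1, 2)])
    return [np.linalg.norm(L[v] - L[w]) - m[(v, w)] for v, w in edges]

fit = least_squares(residuals, x0=[2.0, 1.0, 1.0, 3.0])  # rough initial guess
est = fit.x.reshape(-1, 2)
print(est)   # should recover roughly [[3, 2], [1, 4]], up to measurement noise
```

The cost being minimized is exactly the total squared error above; pinning the seeds is done simply by excluding their coordinates from the optimization variables.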
A simple approach to decide on a set of measurements might be: each sensor looks at the set of other sensors that are within its range, and selects 5 of them at random (and ensures that each sensor has its distance measured against at least 3 other sensors, to ensure its location can be uniquely determined; hopefully most will be measured against more than 3 other sensors). Of course, you could adjust the parameter 5 based upon empirical accuracy/performance tradeoffs. You could also imagine more sophisticated methods, e.g., where you perform a series of measurements, solve the optimization, look at the average error for each sensor (as a crude measure of how accurate its location might be), then perform a few additional measurements (e.g., only for sensors whose average location error is too high), and solve the optimization problem again with the additional data. I think you might find that this is a more effective and robust way to solve your localization problem. I also encourage you to look at the literature on localization in sensor networks. There is lots of prior work on this problem. You know the saying: a month in the field can save you a week in the library. So, spend the week in the library, and first inform yourself about what has already been tried in the literature.
If a quadratic equation $ax^2+bx+c=0$ has more than two roots, then it is an identity, i.e. it is true for all values of $x$ and $a=b=c=0$. What is a proof of this? Let the three roots be $x_1,x_2,x_3$. Method $1$: Let $f(x) = ax^2+bx+c$. Since $x_1$ and $x_2$ are roots, this means $f(x) = (x-x_1)(x-x_2)g(x)$. Since $f(x)$ has degree $2$, this forces $g(x)$ to be a constant, say $k$. Further, we have $f(x_3) = 0$. This means $k(x_3-x_1)(x_3-x_2) = 0$. Since $x_3 \neq x_1$ and $x_3 \neq x_2$, this forces $k$ to be zero. Hence, $f(x) \equiv 0$ for all $x$. Method $2$: Here we shall assume that the $x_i$'s are distinct. This means we have\begin{align}ax_1^2 + bx_1 + c & = 0\\ax_2^2 + bx_2 + c & = 0\\ax_3^2 + bx_3 + c & = 0\end{align}where the $x_i$'s are distinct. We now have a linear system for $a,b,c$ with the right-hand side being zero. Writing it in matrix form$$\begin{bmatrix}x_1^2 & x_1 & 1\\x_2^2 & x_2 & 1\\x_3^2 & x_3 & 1\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$The determinant of $\begin{bmatrix}x_1^2 & x_1 & 1\\x_2^2 & x_2 & 1\\x_3^2 & x_3 & 1\end{bmatrix}$ is $(x_1-x_2)(x_2-x_3)(x_1-x_3)$, which is non-zero since the $x_i$'s are distinct. This means the only solution for $\begin{bmatrix}a\\b\\c\end{bmatrix}$ is $$\begin{bmatrix}a\\b\\c\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$ This is not true in general. The polynomial $x^2+1$ has more than two zeroes over the quaternions $\mathbb{H}$, but is not identically zero. I suppose you assume implicitly that the domain is a field?
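As a quick sanity check of Method $2$ (an aside, not part of the original argument), the determinant factorization is easy to verify numerically for a sample choice of distinct $x_i$:

```python
import numpy as np

# Numerical check of Method 2: for distinct x_i the determinant of the
# coefficient matrix factors into a product of differences, hence is nonzero.
x1, x2, x3 = 1.0, 2.0, 4.0
M = np.array([[x1**2, x1, 1.0],
              [x2**2, x2, 1.0],
              [x3**2, x3, 1.0]])
det = np.linalg.det(M)
product = (x1 - x2) * (x2 - x3) * (x1 - x3)
print(det, product)   # equal (here -6) and nonzero, as the argument requires
```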
Over a field every nonzero polynomial of degree $n$ has at most $n$ zeroes, see here. The proof uses the Vandermonde matrix. Hence if a quadratic polynomial has more than two roots it must be identically zero. Here's a more elementary proof. It's more cumbersome than other answers here, but it gets the job done... Assume there are three distinct roots of your quadratic $r_1, r_2, r_3$. Then we have \begin{align*} ar_1^2 + br_1 + c &= 0\\ ar_2^2 + br_2 + c &= 0\\ ar_3^2 + br_3 + c &= 0 \end{align*} Subtracting the second equation from the first, we get $a(r_1^2 - r_2^2) + b(r_1-r_2) = 0$, which implies $(r_1-r_2)[a(r_1+r_2)+b] = 0$. Doing the same thing with the first and third equations, we get the system \begin{align*} (r_1-r_2)[a(r_1+r_2)+b] &= 0 \\ (r_1-r_3)[a(r_1+r_3)+b] &= 0 \end{align*} Since $r_1,r_2,r_3$ are unequal, this implies \begin{align*} a(r_1+r_2)+b &= 0 \\ a(r_1+r_3)+b &= 0 \end{align*} Subtracting the first equation from the second, we get $a(r_3-r_2) = 0$. Since $r_2 \not= r_3$, we have $a=0$. Which is huge. Plugging this back into the system at the top, we have \begin{align*} br_1 + c &= 0\\ br_2 + c &= 0\\ br_3 + c &= 0 \end{align*} If we had $b\not= 0$, this would imply $\frac{-c}{b} = r_1 = r_2 = r_3$, which is a clear contradiction. Therefore, $b = 0$, and we are left with $c=0$. Therefore, we have shown $a=b=c=0$.
Starting from the beginning (a very good place to start, after all), the state $\left| 0\right\rangle^{\otimes n}\left| -\right\rangle$ is input into $H^{\otimes n}\otimes I$ (here, called the 'Fourier sample'). This generates the state $$\left(\sum_{x\in\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\right)\left|-\right\rangle = \frac{1}{2^{n/2}}\left(\left|0\right\rangle + \left|1\right\rangle\right)^{\otimes n}\left|-\right\rangle.$$ Now, we apply the operation $U_f$ (in this case, the bit oracle) to give $$U_f\left(\sum_{x\in\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\right)\left|-\right\rangle = \sum_{x\in\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\left|-\oplus f\left(x\right)\right\rangle.$$ The first point to note is that $\oplus$ is the classical XOR operation. What this gives is actually the phase oracle, so that we get $$\left(\sum_{x\in\{0,1\}^n}\frac{1}{2^{n/2}}\left(-1\right)^{f\left(x\right)}\left|x\right\rangle\right)\left|-\right\rangle.$$ This is because $U_f\left|x\right\rangle\left(\left|0\right\rangle - \left|1\right\rangle\right) = \left|x\right\rangle\left(\left|f\left(x\right)\right\rangle - \left|1\oplus f\left(x\right)\right\rangle\right) = \left(-1\right)^{f\left(x\right)}\left|x\right\rangle\left(\left|0\right\rangle - \left|1\right\rangle\right)$. This is the 'set up a superposition...' point - all this means is to perform the operations required to set the qubits in the above state, which is a superposition of all possible states (with phase factors, in this case). In this case, this is just Hadamard, followed by a phase oracle.
Now, $x$ is just a classical bit string $x = x_1 x_2 \cdots x_n$, so $$H\left|x_i\right\rangle = \frac{1}{\sqrt{2}}\left(\left|0\right\rangle + \left(-1\right)^{x_i}\left|1\right\rangle\right) = \frac{1}{\sqrt{2}}\sum_{y\in\left\lbrace0, 1\right\rbrace}\left(-1\right)^{x_i.y}\left|y\right\rangle.$$ This gives the property $$H^{\otimes n}\left| x\right\rangle = \frac{1}{2^{n/2}}\sum_{y\in\left\lbrace0, 1\right\rbrace^n}\left(-1\right)^{x.y}\left|y\right\rangle.$$ This gives the final state as $$\frac{1}{2^n}\left(\sum_{x, y\in\{0,1\}^n}\left(-1\right)^{f\left(x\right) \oplus x.y}\left|y\right\rangle\right)\left|-\right\rangle.$$ We know that $f\left(x\right) = u.x = x.u$, giving $\left(-1\right)^{f\left(x\right) \oplus x.y} = \left(-1\right)^{x.\left(u\oplus y\right)}$. Summing over the $x$ terms gives that $\sum_x\left(-1\right)^{x.\left(u\oplus y\right)} = 0,\, \forall\, u\oplus y \neq 0$. This means that we're left with the term for $u\oplus y = 0$, which means that $u=y$, giving the output as $\left|u\right\rangle\left|-\right\rangle$, which is measured to obtain $u$. This is where the power of quantum computing comes into play - in less mathematical terms, applying the Hadamard transformation is performing a rotation on the qubit states to get into the state $\left|+\right\rangle^{\otimes n}$. You then rotate each qubit in this superposition state using an operation equivalent to XOR (in this new basis), so that when performing the Hadamard transformation again, you're now just rotating back onto the state $\left|u\right\rangle$. Another way of looking at this is to consider it as a reflection or inversion that achieves the same result. As for "why we want to set up a superposition": the point is that, using superposition, we can do this to all the qubits at the same time, instead of having to individually check each qubit as in the classical case.
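The whole derivation can be checked with a small state-vector simulation (an illustrative numpy sketch; the $\left|-\right\rangle$ ancilla is left implicit, since its only role is to turn the bit oracle into the phase oracle):

```python
import numpy as np
from functools import reduce

def bernstein_vazirani(u_bits):
    """Simulate H^n -> phase oracle (-1)^(u.x) -> H^n applied to |0...0>."""
    n = len(u_bits)
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H = reduce(np.kron, [H1] * n)                  # H^n as a 2^n x 2^n matrix
    u = int("".join(map(str, u_bits)), 2)
    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # |0...0>
    state = H @ state                              # uniform superposition
    # Phase oracle: flip the sign of |x> when u.x = 1 (mod 2).
    phases = np.array([(-1) ** bin(x & u).count("1") for x in range(2 ** n)])
    return H @ (phases * state)                    # Fourier sample again

out = bernstein_vazirani([1, 0, 1])
print(format(int(np.argmax(np.abs(out))), "03b"))  # prints "101": the hidden u
```

All amplitude except that of $\left|u\right\rangle$ cancels exactly, matching the $\sum_x (-1)^{x.(u\oplus y)} = 0$ argument above.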
Defining parameters Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \) Weight: \( k \) = \( 1 \) Character orbit: \([\chi]\) = 4000.cd (of order \(40\) and degree \(16\)) Character conductor: \(\operatorname{cond}(\chi)\) = \( 800 \) Character field: \(\Q(\zeta_{40})\) Newforms: \( 0 \) Sturm bound: \(600\) Trace bound: \(0\) Dimensions The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).

                     Total   New   Old
Modular forms          160    96    64
Cusp forms               0     0     0
Eisenstein series      160    96    64

The following table gives the dimensions of subspaces with specified projective image type.

Projective image type   \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension                  0         0         0         0
This is a Test of Mathematics Solution Subjective 62 (from ISI Entrance). The book, Test of Mathematics at 10+2 Level, is published by East West Press. This problem book is indispensable for the preparation of I.S.I. B.Stat and B.Math Entrance. Also visit: I.S.I. & C.M.I. Entrance Course of Cheenta. Consider the system of equations x + y = 2, ax + y = b. Find conditions on a and b under which (i) the system has exactly one solution; (ii) the system has no solution; (iii) the system has more than one solution. For the linear equations \( a_{11} x + a_{12} y = b_1 \\ a_{21} x + a_{22} y = b_2 \): \(a_{11} \times a_{22} - a_{12} \times a_{21} \neq 0 \) implies there is a unique solution. \(a_{11} \times a_{22} - a_{12} \times a_{21} = 0 \) implies there is either no solution or infinitely many solutions: no solution if \( \frac{a_{11}}{a_{21}} = \frac{a_{12}}{a_{22}} \neq \frac{b_1}{b_2} \), and infinitely many solutions otherwise. Here \(a_{11} = a_{12} = 1\), \(b_1 = 2\), \(a_{21} = a\), \(a_{22} = 1\), \(b_2 = b\), so the determinant is \(1 - a\). Hence: (i) exactly one solution when \(a \neq 1\); (ii) no solution when \(a = 1\) and \(b \neq 2\); (iii) infinitely many solutions when \(a = 1\) and \(b = 2\).
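The case analysis can be checked numerically by comparing matrix ranks (a small illustrative script; the sample values of a and b are arbitrary):

```python
import numpy as np

# Classify the system x + y = 2, a x + y = b by comparing the rank of the
# coefficient matrix with the rank of the augmented matrix (Rouché-Capelli).
def classify(a, b):
    A = np.array([[1.0, 1.0], [a, 1.0]])
    Ab = np.column_stack([A, [2.0, b]])
    rA, rAb = np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab)
    if rA == rAb == 2:
        return "unique"
    if rA < rAb:
        return "none"
    return "infinitely many"

print(classify(3, 1), classify(1, 5), classify(1, 2))
# -> unique none infinitely many
```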
I was wondering about the differences between electricity and magnetism in the context of Maxwell's equations. When I thought it over, I came to the conclusion that the only difference between the two is that magnetic monopoles do not exist. Is this right? Next one. I searched for the equations with magnetic monopoles and found them at Wikipedia. They seem quite symmetrical (except for the constants, of course), apart from two major differences: It is $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t} - \mu_0\mathbf{j}_{\mathrm m}$, but $\nabla \times \mathbf{B} = \mu_0 \epsilon_0 \frac{\partial \mathbf{E}} {\partial t} + \mu_0 \mathbf{j}_{\mathrm e}$. This means that the induced "magnetic emf" (if I may call it that) is produced by changing electric fields and currents in the exact opposite sense (I mean direction) to the counterpart phenomenon of electromagnetic induction. Why is that? Is there a Lenz law for "magnetic emf" induction also? Also, the Lorentz force on magnetic charges is $\mathbf {F}= q_\mathrm m (\mathbf {B} - {\mathbf v} \times \frac {\mathbf {E}} {c^2})$. Why does this minus sign appear in the force on magnetic charges when it does not appear in the Lorentz force on electric charges?
Importance Weighted Autoencoders and Jackknife Methods This blog post is a summary of two papers: Burda et al. (2015) and Nowozin (2018). We first give a quick overview of variational inference and variational autoencoders (VAE), which approximate the posterior distribution by a simpler distribution and maximize the evidence lower bound (ELBO). Blei et al. (2017) and Zhang et al. (2018) are excellent surveys of variational inference. Next, the importance weighted autoencoder (IWAE) is introduced, and its properties are presented. Finally, we describe jackknife variational inference (JVI) as a way to reduce the bias of IWAE estimators. Variational autoencoders Consider a Bayesian model involving the (set of) latent variable $z$ and observation $x$, where the joint density can be decomposed into $$ p(x, z) = p(x | z) p(z), $$ where $p(x | z)$ is the likelihood and $p(z)$ is the prior distribution of the latent variable $z$. The posterior distribution $p(z|x)$ is of central interest in Bayesian inference. However, it is often intractable, and approximate inference is required. Variational inference aims to approximate the posterior distribution by a variational distribution and to derive a lower bound of the marginal log-likelihood of the data, $\log p(x)$. The variational autoencoder (VAE) is a type of amortized variational inference method. Here, “amortized” means that the variational distribution $q(z|x)$ is parametrized by a function of $x$ whose parameters are shared across all observations. We first rewrite the marginal log-likelihood by introducing the variational distribution $q(z|x)$: \begin{align} \log p(x) &= \log \int p(x|z) p(z) \mathrm{d} z \newline &= \log \int \frac{p(x|z) p(z)}{q(z|x)} q(z|x) \mathrm{d} z \newline &= \log \mathbb{E}_{q(z|x)} \left[ \frac{p(x|z) p(z)}{q(z|x)} \right].
\label{eq:marginal} \tag{1} \end{align} Since the logarithm is concave, moving it inside the expectation via Jensen’s inequality gives the following lower bound: $$ \log p(x) \ge \mathbb{E}_{q(z|x)} \log \left[ \frac{p(x|z) p(z)}{q(z|x)} \right]. \label{eq:elbo} \tag{2} $$ The right-hand side is called the evidence lower bound (ELBO), denoted by $\mathcal{L}$. The inference problem then becomes an optimization problem: find a variational distribution $q(z|x)$ that maximizes $\mathcal{L}$. There are at least three ways of rewriting the ELBO: \begin{align} \mathcal{L} &= \log p(x) - D_{\mathrm{KL}} ( q(z|x) \parallel p(z|x) ) \newline &= \mathbb{E}_{q(z|x)} \log p(x,z) + \mathbb{E}_{q(z|x)} [-\log q(z|x)] \newline &= \mathbb{E}_{q(z|x)} \log p(x|z) - D_{\mathrm{KL}} ( q(z|x) \parallel p(z) ) \end{align} The first equation shows that the difference between the marginal log-likelihood $\log p(x)$ and the ELBO $\mathcal{L}$ is the KL divergence between $q(z|x)$ and $p(z|x)$. When these two distributions are identical (almost everywhere), the marginal log-likelihood equals the ELBO. In the second equation, the first term represents the “energy” and the second term represents the entropy of $q(z|x)$. The energy term encourages $q(z|x)$ to focus probability mass where the joint probability $p(x, z)$ is large. The entropy encourages $q(z|x)$ to spread the probability mass out to avoid concentrating on one location. The third equation is a more explicit representation of the standard architecture of a variational autoencoder. The variational distribution $q(z|x)$ and the likelihood function $p(x|z)$ are represented by an encoder network and a decoder network, respectively. Furthermore, $q(z|x)$ and $p(z)$ are often assumed to be multivariate independent Gaussian so that their KL divergence is of closed form. A simple Monte Carlo estimator of the ELBO $\mathcal{L}$ approximates the expectation in Equation \ref{eq:elbo} by the sample mean.
Let $z_i$, for $i=1, \ldots, k$, be independent samples drawn from $q(z|x)$, then the estimator is $$ \widehat{\mathcal{L}}_k^{\mathrm{ELBO}} := \frac{1}{k} \sum_{i=1}^k \log \left[ \frac{p(x|z_i) p(z_i)}{q(z_i|x)} \right]. $$ It is obvious that the estimator is unbiased, i.e., $\mathbb{E}_{z_i \sim q(z|x)} \widehat{\mathcal{L}}_k^{\mathrm{ELBO}} = \mathcal{L}$. Importance weighted autoencoders What we have described so far is first to define the ELBO $\mathcal{L}$ as a lower bound of $\log p(x)$, and then to estimate it by $\widehat{\mathcal{L}}_k^{\mathrm{ELBO}}$. An alternative approach is to approximate the expectation (inside the logarithm function) in Equation \ref{eq:marginal} by Monte Carlo, which leads to the importance weighted autoencoders (IWAE) estimator: $$ \widehat{\mathcal{L}}_k^{\mathrm{IWAE}} := \log \left[ \frac{1}{k} \sum_{i=1}^k \frac{p(x|z_i) p(z_i)}{q(z_i|x)} \right]. $$ Note the difference between $\widehat{\mathcal{L}}_k^{\mathrm{ELBO}}$ and $\widehat{\mathcal{L}}_k^{\mathrm{IWAE}}$. If we denote $\mathcal{L}_k := \mathbb{E}_{z_i \sim q(z|x)} \widehat{\mathcal{L}}_k^{\mathrm{IWAE}}$, then by Jensen’s inequality, $$ \mathcal{L}_k \le \log \left[ \mathbb{E}_{z_i \sim q(z|x)} \frac{1}{k} \sum_{i=1}^k \frac{p(x|z_i) p(z_i)}{q(z_i|x)} \right] = \log p(x). $$ In other words, the expectation of $\widehat{\mathcal{L}}_k^{\mathrm{IWAE}}$ is also a lower bound of $\log p(x)$. When $k=1$, the ELBO and IWAE estimators are equivalent. It can be shown that $\mathcal{L}_k$ is tighter than $\mathcal{L}$ when $k>1$: $$ \mathcal{L} = \mathcal{L}_1 \le \mathcal{L}_2 \le \cdots \le \log p(x), $$ and $$ \lim_{k \to \infty} \mathcal{L}_k = \log p(x). $$ Unsurprisingly, $\widehat{\mathcal{L}}_k^{\mathrm{IWAE}}$ also converges in probability to $\log p(x)$ as $k\to \infty$. 
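The gap between the two estimators is easy to see numerically. The following sketch uses a toy conjugate Gaussian model (my choice for illustration, not from the papers) where $\log p(x)$ is known in closed form, together with a deliberately mismatched $q(z|x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: z ~ N(0, 1), x|z ~ N(z, 1), so p(x) = N(x; 0, 2).
x = 1.5

def log_norm(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

# Mismatched q(z|x) = N(x/2, 0.8); the true posterior is N(x/2, 1/2),
# so both lower bounds sit strictly below log p(x).
q_mean, q_var = x / 2, 0.8

def log_w(k, reps):
    """reps-by-k matrix of log importance weights log[p(x|z) p(z) / q(z|x)]."""
    z = rng.normal(q_mean, np.sqrt(q_var), size=(reps, k))
    return log_norm(x, z, 1.0) + log_norm(z, 0.0, 1.0) - log_norm(z, q_mean, q_var)

reps = 200_000
elbo = log_w(1, reps).mean()                                # E[ELBO estimator]
iwae8 = np.log(np.exp(log_w(8, reps)).mean(axis=1)).mean()  # L_8
exact = log_norm(x, 0.0, 2.0)                               # log p(x)
print(f"ELBO {elbo:.3f} <= L_8 {iwae8:.3f} <= log p(x) {exact:.3f}")
```

With this setup the ordering $\mathcal{L} \le \mathcal{L}_8 \le \log p(x)$ is visible directly, and the ELBO gap matches the KL divergence ($\approx 0.065$) between $q(z|x)$ and the true posterior.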
A more detailed asymptotic analysis shows that $$ \mathcal{L}_k = \log p(x) - \frac{\mu_2}{2 \mu^2} \frac{1}{k} + \left( \frac{\mu_3}{3\mu^3} - \frac{3\mu_2^2}{4\mu^4} \right) \frac{1}{k^2} + O(k^{-3}), $$ where $\mu$ and $\mu_j$ are the expectation and the $j$-th central moment of $p(x|z_i) p(z_i) / q(z_i|x)$ with $z_i \sim q(z|x)$, respectively. An interesting perspective on the IWAE is that $\widehat{\mathcal{L}}_k^{\mathrm{IWAE}}$ can be regarded as an estimator of $\log p(x)$. As shown above, the estimator is consistent but biased, and the bias is of order $O(k^{-1})$. The remaining sections reduce the bias to higher order in $1/k$, so that the estimator is closer to the marginal log-likelihood when $k$ is large. Jackknife resampling The jackknife is a resampling technique that can be used to estimate the bias of an estimator and further to reduce the bias. Let $\widehat{T}_n$ be a consistent but biased estimator of $T$, evaluated on $n$ samples. Assume the expectation $\mathbb{E} (\widehat{T}_n)$ can be written as an asymptotic expansion as $n \to \infty$: $$ \mathbb{E} (\widehat{T}_n) = T + \frac{a_1}{n} + \frac{a_2}{n^2} + O(n^{-3}). $$ Then the bias of $\widehat{T}_n$ is of order $O(n^{-1})$. A debiased estimator $\widetilde{T}_{n,1}$ can be defined as follows: $$ \widetilde{T}_{n,1} := n \widehat{T}_n - (n-1) \widehat{T}_{n-1}. $$ The idea is that the first-order term is canceled by taking the difference. \begin{align} \mathbb{E} (\widetilde{T}_{n,1}) &= n \mathbb{E} (\widehat{T}_n) - (n-1) \mathbb{E} (\widehat{T}_{n-1}) \newline &= n \left( T + \frac{a_1}{n} + \frac{a_2}{n^2} + O(n^{-3}) \right) \newline &\qquad - (n-1) \left( T + \frac{a_1}{n-1} + \frac{a_2}{(n-1)^2} + O(n^{-3}) \right) \newline &= T + \frac{a_2}{n} - \frac{a_2}{n-1} + O(n^{-2}) \newline &= T + O(n^{-2}). \end{align} The bias of $\widetilde{T}_{n,1}$ is of order $O(n^{-2})$ instead of $O(n^{-1})$.
When $n$ is large, $\widetilde{T}_{n,1}$ has a lower bias than $\widehat{T}_n$. The estimator $\widehat{T}_{n-1}$ can be calculated on any $n-1$ samples. In practice, given $n$ samples, it is evaluated on the $n$ “leave-one-out” subsets of size $n-1$, and the average of the $n$ estimates is used in place of $\widehat{T}_{n-1}$, which reduces the variance of the estimator. The above debiasing method can be further generalized to higher orders. For example, let $$ \widetilde{T}_{n,2} := \frac{n^2}{2} \widehat{T}_n - (n-1)^2 \widehat{T}_{n-1} + \frac{(n-2)^2}{2} \widehat{T}_{n-2}, $$ then $$ \mathbb{E} ( \widetilde{T}_{n,2} ) = T + O(n^{-3}), $$ that is, the bias of $\widetilde{T}_{n,2}$ is in the order of $O(n^{-3})$. More generally, for $$ \widetilde{T}_{n,m} := \sum_{j=0}^m c(n, m, j) \widehat{T}_{n-j}, $$ where $$ c(n, m, j) = (-1)^j \frac{(n-j)^m}{(m-j)! j!}, $$ the bias is in the order of $O(n^{-(m+1)})$. Jackknife variational inference The application of the jackknife method to the IWAE estimator should be straightforward. The jackknife variational inference (JVI) estimator is defined as follows: $$ \widehat{\mathcal{L}}_{k,1}^{\mathrm{JVI}} := k \widehat{\mathcal{L}}_k^{\mathrm{IWAE}} - (k-1) \widehat{\mathcal{L}}_{k-1}^{\mathrm{IWAE}}, $$ and more generally, $$ \widehat{\mathcal{L}}_{k,m}^{\mathrm{JVI}} := \sum_{j=0}^m c(k, m, j) \widehat{\mathcal{L}}_{k-j}^{\mathrm{IWAE}}. $$ The bias of $\widehat{\mathcal{L}}_{k,m}^{\mathrm{JVI}}$, as an estimator of $\log p(x)$, is thus in the order of $O(k^{-(m+1)})$. Again, the IWAE estimator $\widehat{\mathcal{L}}_{k-j}^{\mathrm{IWAE}}$ can be evaluated on a single subset of samples of size $k-j$, or by the average of that on all subsets of size $k-j$. In the latter case, the computational cost is significant since $\sum_{j=0}^m {k \choose j}$ could be large; the time complexity is bounded by $$ O \left( k e^m \left( \frac{k}{m} \right)^m \right). $$ In practice, the algorithm is feasible only for small values of $m$. 
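The coefficient formula can be sanity-checked numerically: the $c(n, m, j)$ must sum to one (so the constant term $T$ survives) and must annihilate the $a_p/n^p$ bias terms for $p = 1, \ldots, m$. A short check (illustrative, with an arbitrary $n$):

```python
from math import factorial

def c(n, m, j):
    # Coefficients from the post: c(n, m, j) = (-1)^j (n-j)^m / ((m-j)! j!)
    return (-1) ** j * (n - j) ** m / (factorial(m - j) * factorial(j))

n = 50
for m in (1, 2, 3):
    coeffs = [c(n, m, j) for j in range(m + 1)]
    # The constant term T is preserved: the coefficients sum to 1 ...
    assert abs(sum(coeffs) - 1.0) < 1e-9
    # ... while the a_p / n^p bias terms cancel for p = 1, ..., m.
    for p in range(1, m + 1):
        assert abs(sum(cf / (n - j) ** p for j, cf in enumerate(coeffs))) < 1e-9
print("ok")
```

For $m = 1$ this reduces exactly to the $n \widehat{T}_n - (n-1) \widehat{T}_{n-1}$ construction above.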
Other variations of JVI are also provided by Nowozin (2018), at the cost of higher variance of the estimator. References Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518), 859-877. Burda, Y., Grosse, R., & Salakhutdinov, R. (2015). Importance weighted autoencoders. International Conference on Learning Representations. Nowozin, S. (2018). Debiasing evidence approximations: On importance-weighted autoencoders and jackknife variational inference. International Conference on Learning Representations. Zhang, C., Butepage, J., Kjellstrom, H., & Mandt, S. (2018). Advances in variational inference. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Under the auspices of the Computational Complexity Foundation (CCF)

In [20] Goldwasser, Grossman and Holden introduced pseudo-deterministic interactive proofs for search problems, in which a powerful prover can convince a probabilistic polynomial-time verifier that a solution to a search problem is canonical. They studied search problems for which polynomial-time algorithms are not known and for which many solutions are possible. They showed that whereas there exists a constant-round pseudo-deterministic proof for graph isomorphism, where the canonical solution is the lexicographically smallest isomorphism, the existence of pseudo-deterministic interactive proofs for NP-hard problems would imply the collapse of the polynomial-time hierarchy.

In this paper, we turn our attention to doubly-efficient pseudo-deterministic proofs for polynomial-time search problems: pseudo-deterministic proofs with the extra requirement that the prover runtime is polynomial and the verifier runtime to verify that a solution is canonical is significantly lower than the complexity of finding any solution, canonical or otherwise. Naturally, this question is particularly interesting for search problems for which a lower bound on the worst-case complexity is known or has been widely conjectured.

We show doubly-efficient pseudo-deterministic proofs for a host of natural problems whose complexity has long been conjectured. In particular:

We show a doubly-efficient pseudo-deterministic proof for linear programming, where the canonical solution which the prover provides is the lexicographically greatest optimal solution of the LP. To this end, we show how, through perturbing the linear program and strong duality, this solution can be both computed efficiently by the prover and verified by the verifier.
The time of the verifier is $O(d^2)$ for a linear program with integer data and at most $d$ variables and constraints, whereas the time to solve such a linear program is $\tilde{O}(d^{\omega})$ by randomized algorithms [11], where $\omega$ is the exponent of fast matrix multiplication.

We show a doubly-efficient pseudo-deterministic proof for 3-SUM and problems reducible to 3-SUM, where the prover is an $O(n^2)$-time algorithm and the verifier takes time $\tilde{O}(n^{1.5})$.

We show a doubly-efficient pseudo-deterministic proof for the hitting set problem, where the verifier runs in time $\tilde{O}(m)$ and the prover runs in time $\tilde{O}(m^2)$, where $m = \sum_{S \in \mathcal{S}} |S| + \sum_{T \in \mathcal{T}} |T|$ for input collections of sets $\mathcal{S}, \mathcal{T}$.

We show a doubly-efficient pseudo-deterministic proof for the Zero Weight Triangle problem, where the verifier runs in time $\tilde{O}(n^{2 + \omega/3})$ and the prover runs in randomized time $\tilde{O}(n^3)$. The Zero Weight Triangle problem is equivalent, under sub-cubic reductions, to the All-Pairs Shortest Path problem, a well-studied problem that is the foundation of many hardness results in graph algorithms [39, 38].
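To make the notion of a canonical solution concrete (this toy sketch only illustrates canonicity and is not the paper's protocol; the paper's prover and verifier are far more efficient than this brute force), one can single out, say, the lexicographically smallest triple of indices summing to zero in a 3-SUM instance:

```python
def canonical_3sum(a):
    """Return the lexicographically smallest index triple (i, j, k),
    i < j < k, with a[i] + a[j] + a[k] == 0, or None if no triple exists.
    Brute force on purpose: iterating i, then j, then k in increasing
    order makes the canonical (lexicographic) choice obvious."""
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if a[i] + a[j] + a[k] == 0:
                    return (i, j, k)
    return None
```

For `[-5, 1, 2, 3, 4]` both `(0, 1, 4)` and `(0, 2, 3)` are zero-sum triples, and the canonical choice is `(0, 1, 4)`; a pseudo-deterministic proof must always single out one such solution, no matter which random choices the prover makes.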
To get an action of $S_3$ in which $\langle(1,2,3)\rangle$ is the stabilizer of a point, take $X=\{1,-1\}$, and let $S_3$ act by parity: even permutations fix both $1$ and $-1$, odd permutations swap $1$ and $-1$. Then the stabilizer of a single point, $1$, is $A_3 = \langle (1,2,3)\rangle$.

In general, if $H$ is a subgroup of $G$, then $G$ acts on the left cosets of $H$ by left multiplication, $g\cdot xH = gxH$. The stabilizer of the coset $H$ is $H$ itself.

If you have a partition of the cosets so that $G$ acts on the partition, let $K = \{ g\in G\mid gH\text{ is in the same block as }H\}$ be the collection of all elements that represent cosets in the same part of the partition as $H$. Then $K$ certainly contains $H$. I claim that $K$ is a subgroup of $G$: for if $x\in K$, then $xH$ is in the same block of the partition as $H$, hence $x^{-1}(xH)$ is in the same block as $x^{-1}H$. But $x^{-1}(xH) = H$, so $x^{-1}H$ is also in the same block of the partition, hence $x^{-1}\in K$. That is, $K$ is closed under inverses. And if $x,y\in K$, then also $x^{-1}\in K$, so the cosets $x^{-1}H$ and $yH$ lie in the same block; multiplying by $x$, we conclude that $H$ and $xyH$ are in the same block, so $xy\in K$. Thus, $K$ is a subgroup, $H\leq K \leq G$.

Conversely, if $K$ is a subgroup, $H\leq K\leq G$, then $K$ induces a partition of the cosets into blocks for the action of $G$: just take the cosets of $K$, and partition each into cosets of $H$. So systems of blocks in the action of $G$ on the left cosets of $H$ correspond to subgroups $K$, $H\leq K\leq G$.

Hence, if $H$ is a maximal subgroup, then the action of $G$ on the cosets is primitive, since the only possible systems of blocks are the trivial and total blocks, corresponding to $H$ and to $G$. In particular, every maximal subgroup of $G$ is the stabilizer of a point in a primitive action of $G$, namely, $H$ is the stabilizer of $H$ in the (left) action of $G$ on the left cosets of $H$ in $G$.
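The first example can be checked mechanically. The sketch below uses 0-indexed permutations of $\{0,1,2\}$ in one-line (tuple) form rather than the cycle notation above, and confirms that the stabilizer of the point $1$ under the parity action on $\{1,-1\}$ is exactly $A_3$:

```python
from itertools import permutations

def sign(p):
    """Parity of a permutation given in one-line notation:
    +1 for even (counts inversions), -1 for odd."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

S3 = list(permutations(range(3)))

# S_3 acts on X = {1, -1} by p . x = sign(p) * x:
# even permutations fix both points, odd permutations swap them.
stab = [p for p in S3 if sign(p) * 1 == 1]   # stabilizer of the point 1
A3 = [p for p in S3 if sign(p) == 1]          # the alternating group A_3

assert stab == A3 and len(A3) == 3
```

Here `(1, 2, 0)` and `(2, 0, 1)` are the two 3-cycles, so `A3` is indeed the subgroup generated by the cycle written $(1,2,3)$ above.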
Carles Simó
University of Barcelona
Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain

Publications:

Fontich E., Simó C., Vieiro A.
On the “Hidden” Harmonics Associated to Best Approximants Due to Quasi-periodicity in Splitting Phenomena
2018, vol. 23, no. 6, pp. 638-653

Abstract: The effects of quasi-periodicity on the splitting of invariant manifolds are examined. We have found that some harmonics that could be expected to be dominant in some ranges of the perturbation parameter actually are nondominant. It is proved that, under reasonable conditions, this is due to the arithmetic properties of the frequencies.

Martinez R., Simó C.
Invariant Manifolds at Infinity of the RTBP and the Boundaries of Bounded Motion
2014, vol. 19, no. 6, pp. 745-765

Abstract: Invariant manifolds of a periodic orbit at infinity in the planar circular RTBP are studied. To this end we consider the intersection of the manifolds with the passage through the barycentric pericenter. The intersections of the stable and unstable manifolds have a common even part, which can be seen as a displaced version of the two-body problem, and an odd part which gives rise to a splitting. The theoretical formulas obtained for a Jacobi constant $C$ large enough are compared to direct numerical computations, showing improved agreement when $C$ increases. A return map to the pericenter passage is derived, and using an approximation by standard-like maps, one can predict the location of the boundaries of bounded motion. This result is compared to numerical estimates, again improving for increasing $C$. Several anomalous phenomena are described.

Miguel N., Simó C., Vieiro A.
From the Hénon Conservative Map to the Chirikov Standard Map for Large Parameter Values
2013, vol. 18, no. 5, pp. 469-489

Abstract: In this paper we consider conservative quadratic Hénon maps and Chirikov’s standard map, and relate them in some sense.
First, we present a study of some dynamical properties of orientation-preserving and orientation-reversing quadratic Hénon maps, concerning the stability region, the size of the chaotic zones, their evolution with respect to parameters, and the splitting of the separatrices of fixed and periodic points, plus its role in the preceding aspects. Then the phase space of the standard map, for large values of the parameter $k$, is studied. There are some stable orbits which appear periodically in $k$ and are scaled in a certain way. Using this scaling, we show that the dynamics around these stable orbits is one of the above Hénon maps plus a small error, which tends to vanish as $k \to \infty$. Elementary considerations about diffusion properties of the standard map are also presented.

Puig J., Simó C.
Resonance tongues in the quasi-periodic Hill–Schrödinger equation with three frequencies
2011, vol. 16, no. 1-2, pp. 61-78

Abstract: In this paper we investigate numerically the Hill equation $x'' + (a + b\,q(t))x = 0$, where $q(t) = \cos t + \cos\sqrt{2}\,t + \cos\sqrt{3}\,t$ is a quasi-periodic forcing with three rationally independent frequencies. It appears, also, as the eigenvalue equation of a Schrödinger operator with quasi-periodic potential. Massive numerical computations were performed for the rotation number and the Lyapunov exponent in order to detect open and collapsed gaps and resonance tongues. Our results show that the quasi-periodic case with three independent frequencies is very different not only from the periodic analogs, but also from the case of two frequencies. Indeed, for large values of $b$ the spectrum contains open intervals at the bottom. From a dynamical point of view, we give numerical evidence of the existence of open intervals of $a$, for large $b$, where the system is nonuniformly hyperbolic: the system does not have an exponential dichotomy but the Lyapunov exponent is positive.
In contrast with the region with zero Lyapunov exponents, both the rotation number and the Lyapunov exponent do not seem to have square-root behavior at the endpoints of gaps. The rate of convergence to the rotation number and the Lyapunov exponent in the nonuniformly hyperbolic case is also seen to be different from the reducible case.

Vitolo R., Broer H. W., Simó C.
Quasi-periodic bifurcations of invariant circles in low-dimensional dissipative dynamical systems
2011, vol. 16, no. 1-2, pp. 154-184

Abstract: This paper first summarizes the theory of quasi-periodic bifurcations for dissipative dynamical systems. Then it presents algorithms for the computation and continuation of invariant circles and of their bifurcations. Finally, several applications are given for quasi-periodic bifurcations of Hopf, saddle-node and period-doubling type.

Martinez R., Simó C.
Non-Integrability of Hamiltonian Systems Through High Order Variational Equations: Summary of Results and Examples
2009, vol. 14, no. 3, pp. 323-348

Abstract: This paper deals with non-integrability criteria, based on differential Galois theory and requiring the use of higher-order variational equations. A general methodology is presented to deal with these problems. We display a family of Hamiltonian systems which require the use of order-$k$ variational equations, for arbitrary values of $k$, to prove non-integrability. Moreover, using third-order variational equations we prove the non-integrability of a nonlinear spring-pendulum problem for the values of the parameter that cannot be decided using first-order variational equations.

Simó C.
Invariant curves of analytic perturbed nontwist area preserving maps
1998, vol. 3, no. 3, pp. 180-195

Abstract: Area-preserving maps close to integrable but not satisfying the twist condition are studied. The existence of invariant curves is proved, but they are no longer graphs with respect to the angular variable.
Beyond the generic, codimension-1 case, several higher-codimension cases are studied. Meandering curves, higher-order meandering and labyrinthic curves show up. Several examples illustrate that this behavior occurs in very simple families of maps.

Giorgilli A., Lazutkin V. F., Simó C.
Visualization of a Hyperbolic Structure in Area Preserving Maps
1997, vol. 2, nos. 3-4, pp. 47-61

Abstract: We present a simple method which displays a hyperbolic structure in the phase space of an area-preserving map. The method is illustrated for the case of the Carleson standard map. As follows from our experiments, the structure of the chaotic zone for the standard map is different from the one found for systems of Anosov type.
I asked a similar question about the QED Lagrangian, but I guess the question wasn't clear enough, since I didn't get any correct answers. So I'll try to ask it in a different way: how does one write the QED Lagrangian using Weyl spinors for left-handed electrons interacting with a photon (for example, a left-handed electron emitting a photon)?

If one writes an electron in terms of its Weyl spinors,
\begin{equation}
\Psi_{e} = \begin{pmatrix} \chi \\ \eta^{\dagger} \end{pmatrix},
\end{equation}
then $\bar{\Psi}_{e}=(\eta \;\; \chi^{\dagger})$. Now, the QED interaction term is $\bar{\Psi}_{e}\gamma^{\mu}A_{\mu}\Psi_{e}$, which yields the following two terms:
$$\chi^{\dagger}\bar{\sigma}^{\mu}A_{\mu}\chi - \eta^{\dagger}\bar{\sigma}^{\mu}A_{\mu}\eta. \tag{1}$$

Assuming:

$\chi$: left-handed electron
$\eta^{\dagger}$: right-handed electron
$\eta$: left-handed positron
$\chi^{\dagger}$: right-handed positron

the Lagrangian in (1) implies the following two vertices:

(right-handed positron)---photon---(left-handed electron)
(right-handed electron)---photon---(left-handed positron)

So, where are the following types of vertices?

(left-handed electron)---photon---(left-handed electron)
(right-handed electron)---photon---(right-handed electron)
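For reference, here is the step I used to get to (1), in the Weyl (chiral) basis, where $\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix}$:

```latex
\begin{align}
\bar{\Psi}_e \gamma^\mu A_\mu \Psi_e
  &= \begin{pmatrix} \eta & \chi^\dagger \end{pmatrix}
     \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix}
     \begin{pmatrix} \chi \\ \eta^\dagger \end{pmatrix} A_\mu \\
  &= \left( \eta\,\sigma^\mu \eta^\dagger
          + \chi^\dagger \bar{\sigma}^\mu \chi \right) A_\mu \\
  &= \chi^\dagger \bar{\sigma}^\mu A_\mu \chi
   - \eta^\dagger \bar{\sigma}^\mu A_\mu \eta,
\end{align}
```

where the last line uses the standard two-component identity $\eta\,\sigma^\mu \eta^\dagger = -\eta^\dagger \bar{\sigma}^\mu \eta$ for anticommuting fields.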