A quaternion algebra is a generalization of the classical Hamiltonian quaternions. It is a $4$-dimensional algebra over a field $F$ with basis $1,i,j,ij$ subject to the relations $$ i^2 = a, \quad j^2 = b, \quad ji = -ij $$ for some $a,b \in F^\times$. (This is the definition for $\operatorname{char}(F) \neq 2$, anyway.) There are quite a few questions on quaternion algebras, among others. Some of these posts are tagged with (quaternions), but according to its description the (quaternions) tag only seems to apply to the Hamiltonian quaternions: "For questions about the quaternions: a noncommutative four dimensional division algebra over the real numbers." (The full description is here.) At the moment, there are 648 questions with a (quaternions) tag. Most probably refer to the Hamiltonian quaternions, but some (such as a couple of the above links) are about general quaternion algebras. I see three possible courses of action: (1) Expand the scope of the (quaternions) tag to include quaternion algebras. (2) Create a new (quaternion-algebras) tag separate from the existing (quaternions) tag. (3) Do nothing, if one thinks that there are not enough questions on this topic to merit a new tag and that quaternion algebras are too different from the Hamiltonian quaternions for an expansion of the existing tag description. Which do you think is the best choice? I am leaning toward (2), because I get the sense that many questions with a (quaternions) tag are about interpreting Hamiltonian quaternions as rotations, while most questions about quaternion algebras relate to number theory.
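As a concrete aside (independent of the tagging question), the relations above determine the whole multiplication table of the algebra. Here is a small sketch; the parameter values $a = 2$, $b = 3$ are illustrative:

```python
# Multiplication in the quaternion algebra with basis 1, i, j, k = ij over F,
# where i^2 = a, j^2 = b, ji = -ij (hence k^2 = -ab, ik = aj, kj = bi, ...).
# An element w + x*i + y*j + z*k is stored as the tuple (w, x, y, z).
def qmul(p, q, a, b):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 + a*x1*x2 + b*y1*y2 - a*b*z1*z2,
            w1*x2 + x1*w2 - b*y1*z2 + b*z1*y2,
            w1*y2 + y1*w2 + a*x1*z2 - a*z1*x2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(i, j, 2, 3))   # (0, 0, 0, 1)  = k
print(qmul(j, i, 2, 3))   # (0, 0, 0, -1) = -k, the anticommutation relation
```

Taking $a = b = -1$ recovers the Hamiltonian quaternions.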
I think that the question is sufficiently precise if we consider a realistic meaning of the word "inconsistent". Even nowadays, for non-logicians the adjective "inconsistent" doesn't really mean "containing a contradiction" (that is only the technical meaning given by modern mathematical logic), but rather "not acceptable to a large or important part of the scientific community". Even nowadays, some of our work in some parts of modern mathematics is not accepted as sufficiently rigorous by other parts. Such work is then perceived only as an insufficiently precise "way of arguing". Therefore, these "foreign argumentations" are perceived as potentially inconsistent, and need a different reformulation to be accepted. I know of relationships of this type between some parts of geometry and analysis, to mention only one example. The same problem occurs in the relationship between (some parts of) physics and mathematics, because these two disciplines are really completely different "games": in physics the most important achievement is the existence of a dialectic between formulas and a part of nature, even if the related mathematics lacks formal clarity and is hence not accepted by several mathematicians. Analogously, early calculus was consistent in the sense that the community accepted its "ways of arguing" and discovered statements which could be verified as true through a dialogue with other parts of knowledge: physics and geometrical intuition first and foremost. Since in early calculus formal manipulation (in the modern sense of manipulation of symbols, without reference to intuition) was surely weak, the dialectic between proofs and intuition was surely stronger (I mean statistically, in the distribution of 17th-century mathematicians). In my opinion, this is the reason for the discovery of true statements, even if the related proofs are perceived as "weak" nowadays.
Once the great triumvirate of Cantor, Dedekind, and Weierstrass decided that it was time to take a step further, the notion of "inconsistent" changed for this important part of the community and hence, sooner or later, for all the others. From the point of view of rules of inference as well, the consistency of early calculus has to be understood in the sense of a dialectic between different parts of knowledge and acceptance by the related scientific community. Therefore, in this sense, in my opinion early calculus is as consistent as our (and the future's) calculus. I agree with Joel that "we are not in a qualitatively different situation": probably in the near future all proofs will be computer assisted, in the sense that all the missing steps will be checked by a computer (whose software will be verified, once again, by a large part of the community) and we will only need to provide the main steps. Necessarily, articles will change in nature and, I hope, they will be more focused on the ideas and intuitions thanks to which we were able to create the results we present. Therefore, young students in the future will probably read our papers in disgust, saying: "How were they able to understand how all these results were created? These papers seem like phone books: def, lem, thm, cor, def, lem, thm, cor... without any explanation of discovery rules, and with several missing formal steps!" Finally, I think that only formally, but not conceptually, early calculus may look similar to NSA or SDG. In my opinion, one of the main reasons for the limited diffusion of NSA is that its techniques are perceived as "voodoo" by the many modern mathematicians (the majority) who base their work on the dialogue between formal mathematics and informal intuition. Too frequently, the lack of intuition is too strong in both theories. For example, for a person like Cauchy, what is the intuitive meaning of the standard part of the sine of an infinite number (NSA)?
For people like Bernoulli, what is the intuitive meaning of properties like $x\le0$ and $x\ge0$ for every infinitesimal, and $\neg\neg\exists h$ such that $h$ is infinitesimal (but not necessarily: there exists an infinitesimal; SDG)? Moreover, as soon as discontinuous functions appeared in the calculus, the natural reaction of almost every working mathematician (of the 17th century and of today) looking at the microaffinity axiom is not to change logic by switching to the intuitionistic one, but to change this axiom by inserting a restriction on the quantifier "for every $f:R\longrightarrow R$". The apparently inconsistent argumentation of setting $h\ne0$ and finally $h=0$ can be faithfully formalized using classical calculus rather than these theories of infinitesimals. We can say that $f:R\longrightarrow R$ (here $R$ is the usual Archimedean real field) is differentiable at $x$ if there exists a function $r:R\times R\longrightarrow R$ such that $f(x+h)=f(x)+h\cdot r(x,h)$ and such that $r$ is continuous at $h=0$. It is easy to prove that this function $r$ is unique. Therefore, we can assume $h\ne0$, calculate freely to discover the unique form of the function $r(x,h)$ for $h\ne0$, and in the final formula set $h=0$, because $r$ is clearly continuous for all the examples of functions of the early calculus. This is called the Fermat-Reyes method, and it can be proved also for generalized functions like Schwartz distributions (and hence for an isomorphic copy of the space of all continuous functions). Moreover, in my opinion, both Cauchy and Bernoulli would have perfectly understood this method and the related intuition. On the contrary, they would not have been able to understand all the intuitive inconsistencies they could easily find in both NSA and SDG.
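As a concrete illustration of this "assume $h\ne0$, compute, then set $h=0$" method, with the hypothetical choice $f(x)=x^2$:

```python
# Fermat-Reyes style differentiation for f(x) = x**2: write
# f(x + h) = f(x) + h * r(x, h). For h != 0 this forces
# r(x, h) = ((x + h)**2 - x**2) / h = 2*x + h, and the continuous
# extension of r to h = 0 yields the derivative.
def r(x, h):
    return 2 * x + h   # simplified form, valid for all h including h = 0

x, h = 3.0, 0.1
assert abs(r(x, h) - ((x + h)**2 - x**2) / h) < 1e-9  # agrees for h != 0
print(r(x, 0))   # set h = 0 in the final formula: f'(3) = 6.0
```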
As is usual, let $N(n)$ denote the maximum size of a set of mutually orthogonal Latin squares of order $n$. I am wondering what results hold that bound $N(n)$ from above; the only ones I can think of are the following:

- $N(n)\leq n-1$ for all $n\geq 2$, with equality if $n$ is a prime power. This is well known.
- $N(6)=1$. This is also quite famous.
- $N(10)\leq 8$. This was done using a computer search. (Source)
- If $n\equiv 1$ or $2 \pmod 4$, and $n$ is not a sum of two squares, then $N(n)< n-1$. This is the Bruck-Ryser theorem from 1949, stated in terms of Latin squares instead of projective planes.

Are there any other results of this sort? I know of many results bounding $N(n)$ from below (mainly Beth's result that $N(n)\geq n^{1/14.8}$ if $n$ is large enough, and several results of the form "if $n\geq n_\nu$ then $N(n)\geq \nu$"), but neither I nor anyone I know can add to this list, and I haven't had much luck on Google either.
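For concreteness, here is a small sketch of the definitions involved (orthogonality via superimposed ordered pairs), using the standard construction that attains $N(n)=n-1$ for prime $n$, shown here for $n=3$:

```python
from itertools import product

def is_latin(sq):
    """Check that each row and each column is a permutation of 0..n-1."""
    n = len(sq)
    return (all(sorted(row) == list(range(n)) for row in sq)
            and all(sorted(col) == list(range(n)) for col in zip(*sq)))

def orthogonal(p, q):
    """Two Latin squares are orthogonal iff superimposing them
    yields every ordered pair of symbols exactly once."""
    n = len(p)
    pairs = {(p[i][j], q[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n

# For prime n, the squares L_c[i][j] = (i + c*j) % n with c = 1..n-1
# are mutually orthogonal; here n = 3 and c = 1, 2.
p = [[(i + j) % 3 for j in range(3)] for i in range(3)]
q = [[(i + 2 * j) % 3 for j in range(3)] for i in range(3)]
print(orthogonal(p, q))   # True: these two squares realize N(3) = 2
```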
Electrons are neither created nor destroyed. The electrons already exist in the metal. The metal consists of layers of positive ions (depicted by orange circles in the picture) through which the electrons (depicted by cyan circles in the picture) are in continuous motion. Hence, the metal is electrically neutral overall. Where did the electrons come from? The electrons escape from the metal atoms, leaving those atoms positively charged. The delocalized electrons form a "sea" of electrons in the metal. What is electric current? Electric current is defined as the amount of charge moving across a cross-section of a conductor per unit time: $$I = \frac{\Delta q}{\Delta t}$$ To have a current in a metal, you need a net movement of charge. What causes electric current? In the first figure, the circuit is open. The electrons are free to move around randomly (due to thermal energy). They keep bumping into atoms and changing direction; there is no orderly motion of electrons, so on average there is zero net movement of charge. In the second figure, the circuit is closed, i.e. the metal conductor has been connected to a voltage source (an electric generator or a battery). The potential difference across the conductor exerts a force on the electrons which causes them, on average, to move from the region of lower potential to the region of higher potential. Of course, the electrons still collide with the atoms, but the potential difference (technically, the electric field) keeps restoring the orderly motion. Therefore, there is a net charge flow in one direction, which constitutes the current. So no charge is created or destroyed; charge is simply moved. It is similar to water in a pipe: the water does not flow unless there is a pressure/height difference across the ends of the pipe, yet the water molecules inside the pipe are in continuous motion.
An important point to note here is that the electrons move really fast (millions of meters per second), but at the same time they collide with many atoms. Therefore, the net velocity averaged over time is very small. We call this velocity the drift velocity; in copper, it is of the order of millimeters per second. $$I = ne v_d A$$ where $n$ is the number of electrons per unit volume of the conductor, $v_d$ is the drift velocity, and $A$ is the cross-sectional area of the conductor. What does a generator do? An electric generator creates an E.M.F. (voltage/potential difference) across a conductor, which drives a current. How does a generator work? An electric generator works on the principle of electromagnetic induction: if the magnetic flux through a coil of wire is changing, an E.M.F. (voltage) is induced in the coil. Magnetic flux is a measure of the magnetic field lines passing through a surface. The magnitude of the induced E.M.F. is given by Faraday's law, $$\text{E.M.F} = \oint\vec{E}\cdot d{\vec{l}} = -\frac{d\phi_B}{dt}$$ where $\phi_B$ is the magnetic flux through the coil. The induced E.M.F. is such that the induced current produces a magnetic flux of its own which opposes the change in magnetic flux; this is known as Lenz's law. You can go even deeper to explain why a changing magnetic flux induces an electric current: a changing magnetic flux produces an electric field, and this electric field drives the induced current. If you are new to magnetism, refer to this answer, which gives an easy way to understand magnetostatics if you are comfortable with electrostatics. How are voltage and current related? In simple DC circuits, they are related by Ohm's law, $$V = IR$$ where $R$ is the resistance of the conductor.
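The drift-velocity formula $I = n e v_d A$ above gives the order of magnitude directly; the current and wire diameter below are illustrative values, not taken from the answer:

```python
from math import pi

# Rough drift-velocity estimate for a copper wire from I = n*e*v_d*A.
e = 1.602e-19   # electron charge, C
n = 8.5e28      # conduction electrons per m^3 in copper (about one per atom)
I = 10.0        # current, A (illustrative)
d = 1.0e-3      # wire diameter, m (illustrative)
A = pi * (d / 2) ** 2          # cross-sectional area, m^2
v_d = I / (n * e * A)
print(v_d)      # ~1e-3 m/s: on the order of a millimeter per second
```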
Why does a black hole attract matter with such a huge amount of force? Does its mass increase on becoming a black hole? Is it due to its volume decreasing? In the formula for gravitational force, $\frac{Gm_1 m_2}{r^2}$, there is no mention of the volume of the bodies, just their masses. The amount of matter present must be the same as before the collapse of the star, so why does gravity increase so much? Black holes exert no more gravitational force than other matter of the same mass. They are just so compact that light can't escape. When stars collapse, they typically lose some matter, which means the mass actually decreases during the collapse. You are correct to note that gravity is independent of the volume of the object. Near a black hole, Newtonian physics starts to break down, which is where general relativity comes in, but from afar $\frac{GMm}{r^2}$ continues to hold true. As for "the amount of matter present must be the same as before the collapse of the star, so why does its gravity increase manifold?": it doesn't. If you replaced our sun with a black hole of the same mass, it wouldn't change the Earth's orbit. Everything would stay the same - except for the noticeable lack of sunlight, of course. You're right that the Newtonian formula for the gravitational force $$ F = G\frac{Mm}{R^2}$$ does not say anything about the size of the attractor - the distance $R$ is measured from the center of mass of the attracting body. So one might ask what happens when $R\rightarrow 0$? The force appears to grow without limit. But we've forgotten that day-to-day attractors, like the Earth and the sun, have a nonzero size. If we fly toward the sun, we will plunge into the surface (where $R=R_0$ is the radius of the sun) long before $R=0$. Once we're inside, the gravitational force (for roughly uniform density) becomes $$ F = \frac{GMm}{R_0^2} \cdot \frac{R}{R_0}$$ which goes smoothly to zero as we reach the center.
On the other hand, a black hole is special because there is no surface to plunge into. Rather than emanating from a volume (which we could get inside, at which point the gravity would start to decrease), the force from a black hole appears to come from a single point. It therefore becomes possible to get closer and closer to the source of gravity, and the gravitational force grows without bound. The weird stuff starts to happen when we get to a radial distance on the order of the Schwarzschild radius $$R_s = \frac{2GM}{c^2}$$ Plugging in the values for the sun, we find that $R_s \approx 2$ miles - but of course, the sun's radius is about 200,000 times that distance. Approaching this in a Newtonian sense, if you equate the escape velocity from a body to the speed of light, you will get yourself a black hole by definition: $$ v_{escape} = \sqrt { \frac{2GM} {R} } = v_{light} = 299\,792\,458 \ m/s $$ Note that while the ratio $ \frac {M}{R} $ is not the density itself, for a fixed mass it grows as the body is compressed; this is why, when a body is dense enough, it can become a black hole in this sense. You are talking about black holes in a Newtonian / astrophysical sense. But black holes are a purely general-relativistic phenomenon. In particular, if we just consider non-rotating black holes, the existence of such solutions is guaranteed by Birkhoff's theorem, namely: every spherically symmetric vacuum spacetime is static. The resulting spacetime is the Schwarzschild solution, which is the original "black hole" solution of Einstein's field equations. The black hole harbors a genuine curvature singularity at $r=0$, which one sees by computing the Kretschmann scalar: $K = R^{abcd}R_{abcd} \sim r^{-6}$. In this context, one cannot naively talk about such solutions of Einstein's equations with terms like "force", "volume", or "mass".
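The numbers quoted above are easy to reproduce:

```python
# Schwarzschild radius of the sun, R_s = 2*G*M/c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

R_s = 2 * G * M_sun / c**2
print(R_s)          # ~2.95e3 m, i.e. about 1.8 miles
print(R_sun / R_s)  # ~2.4e5: the sun's radius is roughly 200,000 R_s
```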
A separate question is that of how a star / astrophysical object can collapse into a black hole, and for that you must use the TOV equation: https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation Integrating this equation / solving the ODE, one obtains for the pressure of the spherically symmetric object at distance $r$: $p(r) = \mu \left[ \frac{\sqrt{1-r_s / R} - \sqrt{1 - r_s r^2 / R^3}}{ \sqrt{1 - r_s r^2 / R^3} - 3 \sqrt{1 - r_s / R}}\right]$ where $\mu$ is the mass density of the object and $r_s = 2GM/c^2$ is the Schwarzschild radius. Now, look at this result: it is only valid for $r \leq R$. The central pressure, which is what you're concerned with, is: $p(0) = \mu \left[ \frac{\sqrt{1 - r_s / R} - 1}{1 - 3 \sqrt{1 - r_s /R}} \right]$. Since $\mu > 0$ by assumption and the numerator is always negative, this becomes negative, $p(0) < 0$, exactly when the denominator is positive, i.e. when $1 - 3 \sqrt{1 - r_s / R} > 0$, which rearranges to $R < \frac{9}{8} r_s$. This inequality is essentially the collapse condition: your object will collapse into a "black hole" as soon as it is satisfied, because no static pressure profile can support it. You essentially use G.R. to see when the pressure of the object you are trying to "collapse" becomes negative. For a spherically symmetric object, the TOV equation thus shows that collapse into a "black hole" occurs for $R < \frac{9}{8} r_s$, where $r_s$ is the Schwarzschild radius. The spherically symmetric case is nice because of symmetry; for general geometries it is slightly more difficult. This hopefully answers your question. (On a side note, one can always have black holes without any such notion of collapse: just take Minkowski spacetime and cut out a hole, and you'll get the Schwarzschild metric!)
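The sign change of the central pressure can be checked numerically from the $p(0)$ formula above; the two radii (in units of $r_s$) are illustrative:

```python
from math import sqrt

def p0_over_mu(R_over_rs):
    """Central pressure p(0)/mu from the interior solution above,
    as a function of R / r_s (the radius in units of the Schwarzschild radius)."""
    x = 1.0 / R_over_rs                       # x = r_s / R
    return (sqrt(1 - x) - 1) / (1 - 3 * sqrt(1 - x))

print(p0_over_mu(2.0))    # positive: a static star is possible
print(p0_over_mu(1.05))   # negative: R < (9/8) r_s, so the object must collapse
```

The denominator vanishes exactly at $R = \frac{9}{8} r_s$, where the central pressure diverges.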
I'm constructing a two-wheeled balancing robot which uses a PID controller. I've tuned my parameters in numerical simulations of a continuous inverted-pendulum-on-a-cart system, so that the simulated inverted pendulum balances by controlling the horizontal (linear) cart acceleration $\ddot{x}$. Now that I've done this, I want to take the next step and turn my PID control commands into electrical commands to a DC motor that produce the desired linear acceleration $\ddot{x}$. However, I'm not sure how exactly to do this for my specific robot's motors. Are there experimental tests I should run to determine how to convert PID commands into DC motor acceleration commands? Or is there a formula to do this based on the motor's specifications? Update The non-linear dynamic equation I'm using is $$L\ddot{\theta}=g\sin(\theta)+\ddot{x}(t)\cos(\theta)+Ld(t)$$ where $\ddot{x}(t)$ is the linear acceleration, $g$ is the acceleration due to gravity, $\ddot{\theta}$ is the angular acceleration, and $d(t)$ is an external disturbance to the system. To simplify things, I've linearized the equation around $\theta\approx0$, yielding $$L\ddot{\theta}=g\theta+\ddot{x}(t)+Ld(t)$$ I've assumed that the only control input is the cart's linear acceleration $\ddot{x}(t)$, and chose this control command as $\ddot{x}(t)=K_1\theta(t) + K_2\int_0^t\theta(\tau)\, d\tau + K_3\dot{\theta}(t)$, where the $K_i$ are the PID gains.
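For what it's worth, the linearized closed loop above can be sanity-checked with a minimal Euler simulation; the gains and parameters below are illustrative choices, not tuned values from the question:

```python
# Euler simulation of the linearized closed loop
#   L*thdd = g*th + xdd,  with PID command xdd = K1*th + K2*int(th) + K3*thd.
# Parameters and gains are illustrative (L = 1 m, disturbance d(t) = 0).
g, L, dt = 9.81, 1.0, 0.001
K1, K2, K3 = -30.0, -10.0, -8.0

th, thd, th_int = 0.1, 0.0, 0.0     # start tilted by 0.1 rad, at rest
for _ in range(10000):              # simulate 10 s
    xdd = K1 * th + K2 * th_int + K3 * thd   # PID acceleration command
    thdd = (g * th + xdd) / L
    th += thd * dt
    thd += thdd * dt
    th_int += th * dt

print(th)   # tilt has been driven close to zero
```

A quick Routh check of the closed-loop characteristic polynomial $s^3 - K_3 s^2 - (g + K_1)s - K_2$ confirms these gains are stabilizing.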
Defining parameters
Level: \( N = 20 = 2^{2} \cdot 5 \)
Weight: \( k = 10 \)
Nonzero newspaces: \( 3 \)
Newforms: \( 5 \)
Sturm bound: \( 240 \)
Trace bound: \( 1 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{10}(\Gamma_1(20))\).

                    Total   New   Old
Modular forms         118    65    53
Cusp forms             98    57    41
Eisenstein series      20     8    12

Decomposition of \(S_{10}^{\mathrm{new}}(\Gamma_1(20))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.

Label      \(\chi\)                  Newform      Dimension   \(\chi\) degree
20.10.a    \(\chi_{20}(1, \cdot)\)   20.10.a.a    1           1
                                     20.10.a.b    2
20.10.c    \(\chi_{20}(9, \cdot)\)   20.10.c.a    4           1
20.10.e    \(\chi_{20}(3, \cdot)\)   20.10.e.a    2           2
                                     20.10.e.b    48
How does one check whether a symmetric $4\times4$ matrix is positive semi-definite? What if this matrix also has a rank deficiency: say it is rank 3? Another method is to check that there are no negative pivots in row reduction (after taking into account the possibility of 0's on the diagonal). The procedure can be written recursively as follows:

1) If $A$ is $1 \times 1$, then it is positive semidefinite iff $A_{11} \ge 0$. Otherwise:
2) If $A_{11} < 0$, then $A$ is not positive semidefinite.
3) If $A_{11} = 0$, then $A$ is positive semidefinite iff the first row of $A$ is all 0 and the submatrix obtained by deleting the first row and column is positive semidefinite.
4) If $A_{11} > 0$, for each $j > 1$ subtract $A_{j1}/A_{11}$ times row 1 from row $j$, and then delete the first row and column. Then $A$ is positive semidefinite iff the resulting matrix is positive semidefinite.

Since the matrix is symmetric, the eigenvalues will be real. Calculate the eigenvalues and see if they are all $\geq 0$. If this is true, the matrix is positive semidefinite. As stated above, Sylvester's criterion doesn't work in this case, so you can't simply check the four leading principal minors. However, it does suffice to check that all 15 of the principal minors are nonnegative. See here for a reference. Another basic approach involves symmetric row reduction, using operations of the following type: perform a row operation, and then immediately perform the corresponding column operation. For example, you can multiply any row by a constant, as long as you immediately multiply the corresponding column by the same constant. Note that this multiplies the diagonal entry by the square of the constant. Using these operations, you can use a variant of Gaussian elimination to reduce any symmetric matrix to a diagonal matrix with 1's, 0's, and -1's along the diagonal. A matrix is positive semidefinite if and only if the resulting diagonal entries are all 0's and 1's. Let's say your matrix is $A$.
You can check the eigenvalues. If all eigenvalues are $\geq 0$, the matrix is positive semi-definite (if all eigenvalues are $>0$, it is positive definite). It might be possible to use the Gershgorin circle theorem instead of calculating the eigenvalues explicitly: if every diagonal element is positive and is larger than or equal to the sum of the absolute values of the other elements in the same row (or column), then the matrix is positive semi-definite. You can try to find a positive semi-definite matrix $B$ such that $B^2 = A$ (such a $B$ is unique); this is in general done using the diagonalization of the matrix, so it will probably be easier to just calculate the eigenvalues. You cannot use a modification of Sylvester's criterion ("all leading principal minors are non-negative") to determine positive semidefiniteness. If $A$ is $4 \times 4$ and rank 3, it has 0 as an eigenvalue (since there exists a nonzero vector $v$ such that $Av = 0v$). This does not affect the positive semi-definiteness of the matrix, but it will not be positive definite. One quick first check: if any element $A_{ii}$ on the diagonal is negative, the matrix has a negative eigenvalue (for real symmetric matrices only, of course). Perform the Cholesky decomposition of the matrix, say $\mathbf{A}=\mathbf{L}\mathbf{L}^{\textrm{T}}$; if the equation $\mathbf{v}\mathbf{L}=0$ has a unique solution, then $\mathbf{A}$ is positive semi-definite. Here is a test that you can perform relatively easily by hand. Since positive semidefinite matrices must have nonnegative diagonal entries, suppose $M$ is a real symmetric matrix that has a nonnegative diagonal. If $M$ has at most one positive diagonal entry, then $M\succeq0$ if and only if $M$ is a diagonal matrix. If $M$ has two (or more) positive diagonal entries, permute the rows and columns of $M$ so that its first two diagonal entries are positive. Partition $M$ as $\pmatrix{A&B^\top\\ B&C}$, where $A,B,C$ are $2\times2$ submatrices.
Then $M\succeq0$ if and only if $A\succeq0$ and $C-BA^{-1}B^\top\succeq0$ (see Schur complement). I suppose you know how to check whether a $2\times2$ matrix is positive semidefinite or not. You can use a modified version of Sylvester's criterion which requires all the principal minors to be non-negative (instead of only the leading principal minors, as in the criterion for positive definiteness). You can check the Wikipedia article on Sylvester's criterion for reference. I am quoting the statement here: "An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the leading principal minors: a Hermitian matrix M is positive-semidefinite if and only if all principal minors of M are nonnegative."
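The recursive pivot test described earlier in this thread (steps 1 through 4) can be sketched as follows; the tolerance is an illustrative choice for floating-point input:

```python
def is_psd(A, tol=1e-12):
    """Recursive pivot test for positive semidefiniteness of a symmetric
    matrix A (list of lists). Handles rank-deficient matrices via the
    zero-pivot case."""
    n = len(A)
    if A[0][0] < -tol:                       # negative pivot: not PSD
        return False
    if A[0][0] <= tol:                       # zero pivot: whole row must vanish
        if any(abs(v) > tol for v in A[0]):
            return False
        return n == 1 or is_psd([row[1:] for row in A[1:]], tol)
    if n == 1:
        return True
    # Positive pivot: eliminate column 0 below the pivot, then recurse
    # on the trailing (n-1) x (n-1) submatrix.
    B = [[A[j][k] - A[j][0] / A[0][0] * A[0][k] for k in range(1, n)]
         for j in range(1, n)]
    return is_psd(B, tol)

# A rank-3, 4x4 PSD example versus an indefinite one:
print(is_psd([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]))  # True
print(is_psd([[1, 2], [2, 1]]))                                          # False
```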
The choice of postulates is somewhat arbitrary in the sense that, given a set of postulates, you can almost always find an alternative set. The choice is guided by subjective criteria such as simplicity, closeness to experiment, or theoretical elegance. However, there are situations where some postulates/theorems do not make sense. For instance, $[\hat{x},\hat{p}] = i\hbar$ makes no sense in the Wigner & Moyal formulation of quantum mechanics, neither as a postulate nor as a theorem, because this formulation of quantum mechanics does not use operators: "The chief advantage of the phase space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby 'freeing' the quantization of the 'burden' of the Hilbert space." Although the phase-space formulation of quantum mechanics does not use commutation relations, they can still be obtained as a theorem when one makes the transition from the general phase-space state to the configuration-space wavefunction: $W(p,x;t) \rightarrow \Psi(x;t)$. Precisely, an explicit derivation of $[\hat{x},\hat{p}] = i\hbar$ is given in my paper "Positive definite phase space quantum mechanics".
I recently found a paper by Subhash Kak that introduces teleportation protocols requiring a lower classical communication cost (at the expense of more quantum resources). I thought it'd be better to write a separate answer. Kak discusses three protocols; two of them use 1 cbit and the last one requires 1.5 cbits. But the first two protocols are in a different setting, i.e. the entangled particles are initially in Alice's lab (where a few local operations are performed), and then one of the entangled particles is transferred to Bob's lab; this is unlike the standard setting, where the entangled particles are pre-shared between Alice and Bob before the protocol is even started. Interested people can go through those protocols that use only 1 cbit. I'll try to explain the last protocol, which uses only 1.5 cbits (fractional cbits). There are four particles, namely $X, Y, Z$ and $U$. $X$ is the unknown particle (or state) that has to be teleported from Alice's lab to Bob's lab. $X, Y$ and $Z$ are with Alice, and $U$ is with Bob. Let $X$ be represented as $\alpha|0\rangle + \beta|1\rangle$, such that $|\alpha|^2+|\beta|^2=1$. The three particles $Y, Z$ and $U$ are in the pure entangled state $|000\rangle+|111\rangle$ (leaving out the normalization constants for now). So, the initial state of the whole system is:$$\alpha|0000\rangle + \beta|1000\rangle + \alpha|0111\rangle + \beta|1111\rangle$$ Step 1: Apply chained XOR transformations on $X, Y$ and $Z$: (i) XOR the states of $X$ and $Y$; (ii) XOR the states of $Y$ and $Z$.
The $XOR$ unitary is given by:$$XOR =\left[{\begin{array}{cccc}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0\end{array}}\right].$$ In other words, the state transformations are the following:$$|00\rangle \rightarrow |00\rangle \\|01\rangle \rightarrow |01\rangle \\|10\rangle \rightarrow |11\rangle \\|11\rangle \rightarrow |10\rangle \\$$ After Step 1, the state of the whole system is:$$\alpha|0000\rangle + \beta|1110\rangle + \alpha|0101\rangle + \beta|1011\rangle$$ Step 2: Apply a Hadamard transform to the state of $X$.$$\alpha(|0000\rangle + |1000\rangle) + \beta(|0110\rangle - |1110\rangle) + \alpha(|0101\rangle + |1101\rangle) + \beta(|0011\rangle - |1011\rangle)$$ Step 3: Alice measures the state of $X$ and $Y$. On simplifying the above representation, we get$$|00\rangle(\alpha|00\rangle + \beta|11\rangle) + |01\rangle(\alpha|01\rangle + \beta|10\rangle) + |10\rangle(\alpha|00\rangle - \beta|11\rangle) + |11\rangle(\alpha|01\rangle - \beta|10\rangle).$$ Step 4: Depending on Alice's measurement outcome, appropriate unitaries are applied to $Z$ (by Alice) and $U$ (by Bob). (a) If Alice gets $|00\rangle$, then both Alice and Bob do nothing. (b) If Alice gets $|10\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob does nothing. (c) If Alice gets $|01\rangle$, then Alice does nothing and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$. (d) If Alice gets $|11\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$.
Basically, $\left[{\begin{array}{cc}1 & 0 \\0 & 1 \end{array}}\right]$, $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$, $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$ and $\left[{\begin{array}{cc}0 & 1 \\-1 & 0 \end{array}}\right]$ can be appropriately used to alter the combined state of $Z$ and $U$ so that it becomes $\alpha|00\rangle + \beta|11\rangle$. Note that if Alice gets $|01\rangle$ or $|11\rangle$, then Bob has to apply some unitary so that the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$. Step 5: Apply a Hadamard transform to the state of $Z$. After applying the unitaries, the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$ (as mentioned above). So, after Step 5, the combined state of $Z$ and $U$ is $$\alpha|00\rangle + \alpha|10\rangle + \beta|01\rangle - \beta|11\rangle \\= |0\rangle(\alpha|0\rangle + \beta|1\rangle) + |1\rangle(\alpha|0\rangle - \beta|1\rangle).$$ Step 6: Alice measures the state of $Z$. Based on her measurement, she transmits one classical bit of information to Bob so that he can use an appropriate unitary to obtain the unknown state! Discussion: So, how does the protocol require only $1.5$ cbits of classical communication? Clearly, Step 6 uses 1 cbit, and in Step 4 it is easy to notice that for two outcomes (namely, $|10\rangle$ or $|00\rangle$), Bob need not apply any unitary. Bob has to apply some unitary (specified prior to the protocol; say $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$) if Alice gets the other two outcomes, and in those scenarios Alice sends one cbit indicating that the unitary is to be used by Bob. So, it is argued that this amounts to a communication cost of 0.5 cbits (because 50% of the time, Bob need not apply any unitary). Hence, the whole protocol requires only 1.5 cbits. But Alice must send that 1 cbit whether or not she gets those outcomes, right?
One might think Alice and Bob could agree on a particular time (after the protocol) at which Alice sends that 1 cbit, so that if Bob doesn't receive a classical bit by that time, he knows that he need not apply any unitary. But such time-dependent protocols are, in general, not allowed, due to relativistic consequences (otherwise, you could even make the standard protocol use timing to convey information and reduce the classical communication cost to 1 cbit; for example, send one cbit at $t_1$, or send one cbit at $t_2$). So Alice must send that cbit every time, right? In that case, the protocol requires 2 cbits (one in Step 4 and another in Step 6). I thought it'd be good if there was a discussion on this particular part.
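The algebra in Steps 1 and 2 above can be double-checked with a tiny statevector simulation; the qubit order is $X, Y, Z, U$, the state is left unnormalized exactly as in the write-up, and the amplitudes 0.6 and 0.8 are arbitrary test values:

```python
from math import sqrt

def cnot(state, c, t):
    """Apply XOR (CNOT) with control qubit c and target qubit t to a state
    given as a dict {bitstring: amplitude}."""
    out = {}
    for b, amp in state.items():
        if b[c] == '1':
            b = b[:t] + ('1' if b[t] == '0' else '0') + b[t+1:]
        out[b] = out.get(b, 0) + amp
    return out

def hadamard(state, q):
    """Apply a Hadamard transform to qubit q."""
    out = {}
    for b, amp in state.items():
        for v in '01':
            sign = -1 if b[q] == '1' and v == '1' else 1
            nb = b[:q] + v + b[q+1:]
            out[nb] = out.get(nb, 0) + sign * amp / sqrt(2)
    return out

alpha, beta = 0.6, 0.8
state = {'0000': alpha, '0111': alpha, '1000': beta, '1111': beta}
state = cnot(state, 0, 1)       # Step 1(i): XOR X into Y
state = cnot(state, 1, 2)       # Step 1(ii): XOR Y into Z
print(sorted(state.items()))    # the four terms listed after Step 1
state = hadamard(state, 0)      # Step 2: Hadamard on X
```

Grouping the post-Hadamard terms by the first two bits reproduces the four measurement branches of Step 3.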
I'm trying to find the maximum independent set of a bipartite graph. I found the following in some notes ("May 13, 1998 - University of Washington - CSE 521 - Applications of network flow"): Problem: Given a bipartite graph $G = (U,V,E)$, find an independent set $U' \cup V'$ which is as large as possible, where $U' \subseteq U$ and $V' \subseteq V$. A set is independent if there are no edges of $E$ between elements of the set. Solution: Construct a flow graph on the vertices $U \cup V \cup \{s,t\}$. For each edge $(u,v) \in E$ there is an infinite capacity edge from $u$ to $v$. For each $u \in U$, there is a unit capacity edge from $s$ to $u$, and for each $v \in V$, there is a unit capacity edge from $v$ to $t$. Find a finite capacity cut $(S,T)$, with $s \in S$ and $t \in T$. Let $U' = U \cap S$ and $V' = V \cap T$. The set $U' \cup V'$ is independent since there are no infinite capacity edges crossing the cut. The size of the cut is $|U - U'| + |V - V'| = |U| + |V| - |U' \cup V'|$. Thus, in order to make the independent set as large as possible, we make the cut as small as possible. So let's take this as the graph:

A - B - C
    |
D - E - F

We can split this into a bipartite graph as follows: $(U,V)=(\{A,C,E\},\{B,D,F\})$. We can see by brute-force search that the sole maximum independent set is $A,C,D,F$. Let's try to work through the solution above. The constructed flow network's adjacency matrix would be: $$\begin{matrix} & s & t & A & B & C & D & E & F \\ s & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ t & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ A & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ B & 0 & 1 & \infty & 0 & \infty & 0 & \infty & 0 \\ C & 1 & 0 & 0 & \infty & 0 & 0 & 0 & 0 \\ D & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ E & 1 & 0 & 0 & \infty & 0 & \infty & 0 & \infty \\ F & 0 & 1 & 0 & 0 & 0 & 0 & \infty & 0 \\ \end{matrix}$$ Here is where I am stuck: the smallest finite-capacity cut I see is a trivial one, $(S,T) =(\{s\},\{t,A,B,C,D,E,F\})$, with a capacity of 3.
Using this cut leads to an incorrect solution: $$ U' = U \cap S = \{\}$$ $$ V' = V \cap T = \{B,D,F\}$$ $$ U' \cup V' = \{B,D,F\}$$ whereas we expected $U' \cup V' = \{A,C,D,F\}$. Can anyone spot where I have gone wrong in my reasoning/working?
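To experiment, I sketched the construction in code (the Edmonds-Karp implementation and all names are mine) for this exact graph; it builds the flow network described in the notes and reads the independent set off a minimum cut:

```python
# Sketch: the min-cut construction from the notes, run on the example graph
# A-B-C and D-E-F joined by the edge B-E, with U = {A,C,E}, V = {B,D,F}.
from collections import deque

INF = float('inf')

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp; returns (max flow value, S-side of a min cut)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            # No augmenting path left: S = residual-reachable vertices.
            return flow, set(parent)
        # Find the bottleneck capacity along the path, then push flow.
        bottleneck, v = INF, t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck

U, V = {'A', 'C', 'E'}, {'B', 'D', 'F'}
edges = [('A', 'B'), ('C', 'B'), ('E', 'B'), ('E', 'D'), ('E', 'F')]  # oriented U -> V
cap = {x: {} for x in U | V | {'s', 't'}}
for u in U:
    cap['s'][u] = 1          # unit edge s -> u
for v in V:
    cap[v]['t'] = 1          # unit edge v -> t
for u, v in edges:
    cap[u][v] = INF          # infinite edge across the bipartition

flow, S = max_flow_min_cut(cap, 's', 't')
independent = (U & S) | (V - S)      # U' = U∩S, V' = V∩T = V - S
print(flow, sorted(independent))     # -> 2 ['A', 'C', 'D', 'F']
```

It finds a cut of capacity 2 (smaller than the trivial cut of capacity 3 above) and recovers the expected independent set.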
We have seen how exponential functions can be used to model a variety of growth and decay situations, including the growth of populations and the decay of radioactive substances. In this lesson we consider more growth and decay problems, focusing particularly on how logarithms can be used in their solution.

Population Growth

Example 1 The area $A_{t}$ affected by the insects is given by $A_{t}=200 \times 2^{0.5t}$ m$^2$, where $t$ is the number of days after the initial observation. Find the number of days taken for the affected area to reach 1000 m$^2$. \( \begin{align} \displaystyle 200 \times 2^{0.5t} &= 1000 \\ 2^{0.5t} &= \dfrac{1000}{200} \\ &= 5 \\ 0.5t &= \log_{2}{5} \\ t &= \dfrac{1}{0.5}\log_{2}{5} \\ t &= 2 \times \dfrac{\log_{10}{5}}{\log_{10}{2}} \\ t &= 4.64 \cdots \\ \end{align} \) Therefore it takes approximately $4.64$ days, that is, $5$ whole days.

Financial Growth

An amount $A_{1}$ is invested at a fixed rate per compounding period. In this case the value of the investment after $n$ periods is given by $A_{n+1}=A_{1} \times r^{n}$, where $r$ is the multiplier corresponding to the given rate of interest. In order to find $n$ algebraically, it is required to use $\textit{logarithms}$.

Example 2 $500 is invested in an account that pays 4.5% per annum, interest compounded annually. Find how long it takes to reach $5000. \( \begin{align} \displaystyle A_{n+1} &= 5000 \\ A_{1} &= 500 \\ r &= 104.5\% \\ &= 1.045 \\ A_{n+1} &= A_{1} \times r^{n} \\ 5000 &= 500 \times 1.045^{n} \\ 1.045^{n} &= \dfrac{5000}{500} \\ &= 10 \\ n &= \log_{1.045}{10} \\ &= \dfrac{\log_{10}{10}}{\log_{10}{1.045}} \\ &= 52.311 \cdots \\ \end{align} \) Therefore it takes $53$ years.

Decay

Example 3 The mass $M_{t}$ of radioactive substance remaining after $t$ years is given by $M_{t}=6000 \times e^{-0.05t}$ grams. Find the time taken for the mass to halve.
\( \begin{align} \displaystyle 6000 \times e^{-0.05t} &= 3000 \\ e^{-0.05t} &= 3000 \div 6000 \\ &= 0.5 \\ -0.05t &= \log_{e}{0.5} \\ t &= -\dfrac{1}{0.05}\log_{e}{0.5} \\ &= 13.862 \cdots \\ \end{align} \) Therefore it takes around $14$ years.
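All three computations are easy to verify numerically using the change-of-base identity $\log_b{x} = \ln{x} / \ln{b}$; a quick sketch:

```python
# Numerical check of the three worked examples above.
import math

# Example 1: 200 * 2^(0.5t) = 1000  =>  t = 2 * log2(5)
t1 = 2 * math.log(5, 2)
print(round(t1, 2))    # ≈ 4.64

# Example 2: 500 * 1.045^n = 5000  =>  n = log_{1.045}(10)
n = math.log(10) / math.log(1.045)
print(round(n, 3))     # ≈ 52.311

# Example 3: 6000 * e^(-0.05t) = 3000  =>  t = -ln(0.5) / 0.05
t3 = -math.log(0.5) / 0.05
print(round(t3, 3))    # ≈ 13.863
```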
Answer The solution set is $$\{180^\circ+720^\circ n,n\in Z\}$$ Work Step by Step $$\sin\frac{\theta}{2}=1$$ 1) First, we solve the equation over the interval $[0^\circ,360^\circ)$ - For $\sin\frac{\theta}{2}=1$ over the interval $[0^\circ, 360^\circ)$, there is one value of $\frac{\theta}{2}$ where $\sin\frac{\theta}{2}=1$, which is $90^\circ$. Therefore, $$\frac{\theta}{2}=\{90^\circ\}$$ (Be careful: the angle we are solving the equation for is $\frac{\theta}{2}$, not $\theta$.) 2) Solve the equation for all solutions. The sine function has period $360^\circ$, so we add integer multiples of $360^\circ$ to all solutions found in part 1) for $\frac{\theta}{2}$: $$\frac{\theta}{2}=\{90^\circ+360^\circ n,n\in Z\}$$ Finally, we find the solutions for $\theta$, which form the solution set: $$\theta=\{180^\circ+720^\circ n,n\in Z\}$$
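A quick numerical spot check (the range of $n$ is arbitrary) that every angle in the solution set satisfies the original equation:

```python
# Check that theta = 180 + 720n satisfies sin(theta/2) = 1 for several n.
import math

ok = all(
    abs(math.sin(math.radians((180 + 720 * n) / 2)) - 1) < 1e-12
    for n in range(-5, 6)
)
print(ok)   # True
```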
Newform invariants

Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = 4\sqrt{3}\). We also show the integral \(q\)-expansion of the trace form. For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists. This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{4}^{\mathrm{new}}(\Gamma_0(128))\): \(T_{3}^{2} - 4 T_{3} - 44\) and \(T_{5}^{2} + 4 T_{5} - 188\).
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...

Considering this pseudo-code of a bubble sort:

FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...

Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:

allocation of one block
freeing a previously allocated block which is not used anymore.

Also, as a requiremen...

Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...

People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...

Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...

I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post.
I never encountered the problem on other Stac...

This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...

There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...

Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation is introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...

Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping): $$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...

I expect to see pseudo code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...

Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.

In superscalar execution, where the branch prediction is very important, and the delay is mainly in execution rather than in fetch.
In the instruction pipeline, where fetching is more of a problem since the inst...

Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...

NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...

I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436 however, the au...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science? Example: "How do I get the symme...

EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:

$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$

EPAL is the 'bane' of many parsing algorithms: I have yet to enc...

Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...

Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.

Inductive LTree : Set := Node : list LTree -> LTree.

The naive way of d...

I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...

To what extent are questions about proof assistants on-topic? I see four main classes of questions:

Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.
Proving theorems in a way that can be automated in the chosen formal setting.
Writing a co...

Should topics in applied CS be on topic?
These are not really considered part of TCS; examples include:

Computer architecture (Operating system, Compiler design, Programming language design)
Software engineering
Artificial intelligence
Computer graphics
Computer security

Source: http://en.wik...

I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.

I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes) with an assumed global counting variable. So far I have...

It's a known fact that every LTL formula can be expressed by a Büchi $\omega$-automaton. But, apparently, Büchi automata are a more powerful, expressive model. I've heard somewhere that Büchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...

Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise. Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...

One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using multiple stacks or tapes, have been shown to be equi...

Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Let $\mathfrak{g}$ be a finite-dimensional complex Lie algebra and let $R \subset \mathbb{C}$ be a subring. Say that $\mathfrak{g}$ is defined over $R$ if there exists a basis $x_1, ... x_n$ for $\mathfrak{g}$ such that the structure constants $c_{ijk}$ of the bracket$$[x_i, x_j] = \sum_k c_{ijk} x_k$$all lie in $R$. It is classical that all semisimple $\mathfrak{g}$ are defined over $\mathbb{Z}$. But this is also true for some non-semisimple $\mathfrak{g}$ such as the Lie algebra of $n \times n$ upper triangular or strictly upper triangular matrices. In fact, I don't know an example of such a $\mathfrak{g}$ which isn't defined over $\mathbb{Z}$ although I would be surprised if they didn't exist. Can someone construct one or prove that they don't exist? If they do exist, is a weaker statement true? For example, are all such $\mathfrak{g}$ defined over a number field?
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol

no i mean, i'm following it by context actually

so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side

@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...

@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels

@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC

@IlaRossi i would imagine that this is in goerss--jardine?
ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes

@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81

@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)

@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it

@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world

@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit.
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$

@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism

Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality

@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).

There are two "opposite" functors: $$ op_\Delta\colon sSet\to sSet$$ and $$op_s\colon sCat\to sCat.$$ The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad

It's enough to show everything works for generating cofaces and codegeneracies

the codegeneracies are free, the 0 and nth cofaces are free

all of those can be done treating frak{C} as a black box

the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit

for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex

In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).

> Thus, using appropriate tags one can increase one's chances that users competent to answer the question, or just interested in it, will notice the question in the first place.

Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc.) or, worse, just newly created tags, one might miss a chance to give visibility to one's question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.

You are asking about topics far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself.
(Other than the possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking questions here. (Although it seems that you're doing fine.)

@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable.

You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags

I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
Let's consider a study on traumatic brain injury (TBI), which contributes to just under a third (30.5%) of all injury-related deaths in the US and is caused by a blow to the head. Figure (1) shows the 133 accelerometer readings taken over 55.2 milliseconds. The dashed line represents the impulse function which denotes the blow to the head. Figure (1) The laws of motion tell us that the acceleration f(t) can be modelled by a second-order linear ordinary differential equation (ODE) with input a unit pulse u(t) representing the blow to the head, shown as the dashed line in Figure (1). This ODE \begin{equation}\label{ODE} \frac{\textrm{d}^2f(t)}{\textrm{d}t^2} = \beta_{0} f(t) + \beta_{1} \frac{\textrm{d}f(t)}{\textrm{d}t} + \alpha_{0} u(t) \end{equation} contains three parameters $\beta_{0},\beta_{1}$ and $\alpha_{0}$, which convey the rate of the restoring force (as $t \rightarrow \infty,$ the acceleration will tend to revert back to zero), the rate of the friction force (as $t \rightarrow \infty,$ the oscillations in the acceleration reduce to zero) and the rate of force from the unit pulse. While there are several methods for estimating ODE parameters with partially observed data, they are invariably subject to several problems including high computational cost, sensitivity to initial values or large sampling variability. We propose a method called Data2LD (Data to Linear Dynamics) that overcomes these issues and produces estimates of the ODE parameters that have less bias, a smaller sampling variance and a ten-fold improvement in computation. The final parameter estimates with 95% confidence intervals are $\hat{\beta_{0}} = -0.056 \pm 0.002,$ $\hat{\beta_{1}} = -0.150 \pm 0.018$ and $\hat{\alpha_{0}} = 0.395 \pm 0.032,$ indicating that the acceleration is an under-damped process; after the blow to the head, the acceleration will oscillate with a decreasing amplitude that will quickly decay to zero.
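To make the estimated dynamics concrete, here is a minimal stdlib-only sketch (the pulse timing, step size and time horizon are my assumptions for illustration, not taken from the study) that integrates $f'' = \beta_0 f + \beta_1 f' + \alpha_0 u(t)$ with the reported point estimates using classical RK4, and confirms the under-damped, decaying response:

```python
# RK4 integration of the second-order ODE with the reported estimates.
beta0, beta1, alpha0 = -0.056, -0.150, 0.395

def u(t):
    # Unit pulse over the first time unit (an assumed pulse shape).
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def deriv(t, y):
    # State y = [f, f']; returns [f', f''].
    f, fp = y
    return [fp, beta0 * f + beta1 * fp + alpha0 * u(t)]

def rk4_step(y, t, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
    k3 = deriv(t + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
    k4 = deriv(t + h,   [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

y, t, h = [0.0, 0.0], 0.0, 0.01
trace = []
while t < 200.0:              # long enough to see the oscillation die out
    y = rk4_step(y, t, h)
    t += h
    trace.append(y[0])

peak = max(abs(v) for v in trace)
tail = max(abs(v) for v in trace[-1000:])   # last 10 time units
print(peak > 10 * tail)       # amplitude has decayed: True
```

With these signs, $\beta_0 < 0$ gives a restoring force and $\beta_1 < 0$ gives damping, matching the under-damped behaviour described above.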
Figure (2) Figure (2) shows the accelerometer readings of the brain tissue before and after a series of five blows to the head, indicated by the circles, together with the fitted curve produced by Data2LD (solid line), the 95% confidence interval for this curve (dashed line) and the 95% prediction interval (grey region). We can see that the fitted curve approximating the ODE solution provides an adequate description of the acceleration of the brain tissue.
Let $\mu(X) =1$. Let $f,g \in L^1(X)$ be two positive functions satisfying $f(x) g(x)>1$ for almost all $x$. Then $$\left(\int f ~dx\right) \left(\int g~dx\right) \geq 1.$$ Show also that if $f,g\in L^2(X)$ with $\int f ~dx= 0$, then $$\left(\int fg~dx\right)^2 \leq \left[ \int g^2 ~dx - \left(\int g~dx\right)^2 \right] \int f^2~dx.$$ I think I have to use Hölder's inequality for both questions. For the first question, since $\mu(X) =1$ and $fg>1$ a.e., $1\lt \int fg~dx$. How do I apply Hölder's inequality?
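One standard route for the first part (a sketch of mine, using the Cauchy-Schwarz inequality, i.e. Hölder with $p=q=2$, applied to $\sqrt{f}$ and $\sqrt{g}$): since $fg > 1$ a.e. and $\mu(X) = 1$,

$$1 = \left(\int 1 ~dx\right)^{2} \leq \left(\int \sqrt{fg} ~dx\right)^{2} = \left(\int \sqrt{f}\,\sqrt{g} ~dx\right)^{2} \leq \left(\int f ~dx\right)\left(\int g~dx\right).$$

For the second part, the same inequality applied to $f$ and $g - \int g~dx$ gives the stated bound, since $\int f\left(g - \int g~dx\right)dx = \int fg~dx$ when $\int f~dx = 0$, and $\int \left(g - \int g~dx\right)^2 dx = \int g^2~dx - \left(\int g~dx\right)^2$ when $\mu(X)=1$.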
A simple Google search led me to a chapter in the proceedings Public-Key Cryptography and Computational Number Theory edited by Kazimierz Alster, Jerzy Urbanowicz, Hugh C. Williams, which contains some results and references which might be interesting in connection with your question; in particular it lists some known results, which I have copied below. While the Fermat numbers $F_m$ have long been of great interest to mathematicians, the generalized Fermat numbers $$F_m(a,b)=a^{2^m}+b^{2^m}, \qquad \gcd(a,b)=1$$ and the more special case $b = 1$, were not seriously studied until the 1960s; [part of the text skipped] The observations above are explained by the results in this subsection; they were recently published by I. Jiménez Calvo [38]. Theorem 3.1 (Jiménez Calvo). Let $p=k\cdot 2^n+1$ be a prime, where $k$ is odd and $n=n'2^l$, with $n'>3$ odd. If $p$ divides the Fermat number $F_m=2^{2^m}+1$, then it also divides the generalized Fermat number $$F_{m-l}(k,1)=k^{2^{m-l}}+1.$$ To put the next result of Jiménez Calvo into perspective, we quote from [9] the following generalization of the Euler-Lucas theorem (Theorem 2.1): Theorem 3.2 (Björn and Riesel). Let $p=k\cdot 2^n+1$ be a prime, where $k$ is odd. Suppose that $p\mid F_m(a,b)$ and $u \equiv a/b \pmod p$ is a $2^t$-th power residue but not a $2^{t+1}$-th power residue $\pmod p$. Then $n=m+t+1$. For a proof, see [9]. A partial converse is given by the following Theorem 3.3 (Jiménez Calvo). Let $p=k\cdot 2^n+1$ be a prime, where $k$ is odd. Let $v:=\operatorname{ord}_2(n)$, and $u$ be such that $2$ is a $2^u$-th power residue but not a $2^{u+1}$-th power residue $\pmod p$. Furthermore, suppose that $n$, $k$ have a common divisor $d>3$, and if $k' := k/d$, then $2$ is a $k'$-th power residue $\pmod p$. Then $p\mid k^{2^m}+1$ with $m=n-u-v-1$. As an immediate consequence (in the case $k' = 1$) we get the following result. By the supplementary laws of quadratic reciprocity we always have $u\ge1$. Corollary 3.4.
Let $p=k\cdot 2^n+1$ be a prime, with $k$ odd and $k\mid n$. Then $$p\mid k^{2^m}+1 \qquad\text{for some}\qquad m\le n-2.$$ [9] Björn, A., Riesel, H. Factors of generalized Fermat numbers. Math. Comp. 67 (1998), 441-446. DOI: 10.1090/S0025-5718-98-00891-6 [38] Jiménez Calvo, I., A note on factors of generalized Fermat numbers. Appl. Math. Lett. 13.6 (2000), 1-5. DOI: 10.1016/S0893-9659(00)00045-8 Some other references related to this topic are given there.
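As a quick numerical sanity check of Theorem 3.1 (an example of my own choosing, not from the chapter): the classical factor $p = 641 = 5\cdot 2^7+1$ of $F_5$ has $k=5$ and $n = 7 = n'$ odd, so $l = 0$, and the theorem predicts $641 \mid F_{5-0}(5,1) = 5^{2^5}+1$:

```python
# Verify Theorem 3.1 for p = 641 = 5*2^7 + 1, the classical factor of F_5.
p, k, m, l = 641, 5, 5, 0

assert (pow(2, 2 ** m, p) + 1) % p == 0        # 641 divides F_5 = 2^32 + 1
assert (pow(k, 2 ** (m - l), p) + 1) % p == 0  # 641 divides 5^32 + 1
print("Theorem 3.1 checks out for p = 641")
```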
To represent a quantum computer's state, all the qubits contribute to one state vector (this is one of the major differences between quantum and classical computing as I understand it). My understanding is that it's possible to measure only one qubit out of a system of multiple qubits. How does measuring that one qubit affect the whole system (specifically, how does it affect the state vector)? There are a lot of different ways of looking at qubits, and the state vector formalism is just one of them. In a general linear-algebraic sense a measurement is a projection onto a basis. Here I will provide insight with an example from the Pauli observable point of view, that is, the usual circuit model of QC. Firstly, it's of interest which basis the state vector is being provided in -- every measurement operator comes with a set of eigenstates, and whatever measurements you look at (e.g. $X, Y, Z, XX, XZ$, etc.) determine the basis that might be best for you to write the state vector in. The easiest way to answer your question is if you know which basis is of interest to you, and more importantly, whether it commutes with the measurement you just made. So for simplicity's sake, let's say you start with two coupled qubits in an arbitrary state written in the $Z$-basis for both qubits: $$| \psi \rangle = a | 0_{Z} \rangle \otimes | 0_{Z} \rangle +b | 0_{Z} \rangle \otimes | 1_{Z} \rangle + c | 1_{Z} \rangle \otimes | 0_{Z} \rangle + d | 1_{Z} \rangle \otimes | 1_{Z} \rangle $$ The simplest possible measurements you could make would be $Z_{1}$, that is the $Z$ operator on the first qubit, followed by $Z_{2}$, the $Z$ operator on the second qubit. What does measurement do? It projects the state into one of the eigenstates. You can think of this as eliminating all possible answers that are inconsistent with the one we just measured.
For instance, say we measure $Z_{1}$ and obtain the outcome $1$; then the resulting state would be: $$| \psi \rangle = \frac{1}{\sqrt{|c|^{2} +|d|^{2}}} \left(c | 1_{Z} \rangle \otimes | 0_{Z} \rangle + d | 1_{Z} \rangle \otimes | 1_{Z} \rangle \right) $$ Note that the coefficient out front is just for renormalization. So our probability of measuring $Z_{2}=0$ is $\frac{|c|^{2}}{|c|^{2} +|d|^{2}}$. Note this is different from the probability we had in the initial state, which was $|a|^{2}+|c|^{2}$. Suppose the next measurement you make does not commute with the previous one, however. This is trickier because you have to implement a change of basis on the state vector in order to understand the probabilities. With Pauli measurements, though, it tends to be easy since the eigenbases relate in a nice way, that is: $$| 0_{Z} \rangle = \frac{1}{\sqrt{2}} (|0_{X}\rangle + |1_{X} \rangle )$$ $$| 1_{Z} \rangle = \frac{1}{\sqrt{2}} (|0_{X}\rangle - |1_{X} \rangle )$$ A good way to check your understanding: What is the probability of measuring $X= +1$ after the $Z_{1}=1$ measurement above? What is the probability if we have not made the $Z_{1}$ measurement? Then a more complicated question is to look at product operators that act on both qubits at once, for instance, how does a measurement of $Z_{1}Z_{2}=+1$ affect the initial state? Here $Z_{1}Z_{2}$ measures the product of the two operators. Suppose that, prior to measurement, your $n$-qubit system is in some state $\lvert \psi \rangle \in \mathcal H_2^{\otimes n}$, where $\mathcal H_2 \cong \mathbb C^2$ is the Hilbert space of a single qubit. Write $$ \lvert \psi \rangle = \sum_{x \in \{0,1\}^n} u_x \lvert x \rangle $$ for some coefficients $u_x \in \mathbb C$ such that $\sum_x \lvert u_x \rvert^2 = 1$. If you are measuring the first qubit in the standard basis, define $$\begin{aligned} \lvert \varphi_0 \rangle &= \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\!
u_{0x'} \,\lvert0\rangle \lvert x' \rangle, \\ \lvert \varphi_1 \rangle &= \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! u_{1x'} \,\lvert1\rangle \lvert x' \rangle,\end{aligned}$$ and let $\lvert \psi_0 \rangle = \lvert \varphi_0 \rangle \big/\! \sqrt{\langle \varphi_0 \vert \varphi_0 \rangle}\,$ and $\,\lvert \psi_1 \rangle = \lvert \varphi_1 \rangle \big/\! \sqrt{\langle \varphi_1 \vert \varphi_1 \rangle}\,$. It is not too difficult to show that, if you measure the first qubit and obtain the state $\lvert 0 \rangle$, the state of the entire system "collapses" to $\lvert \psi_0 \rangle$, and if you obtain $\lvert 1 \rangle$ what you obtain is $\lvert \psi_1 \rangle$. This is broadly analogous to the idea of conditional probability distributions: you might think of $\lvert \psi_0 \rangle$ as the state of the system conditioned on the first qubit being $\lvert 0 \rangle$, and $\lvert \psi_1 \rangle$ as the state of the system conditioned on the first qubit being $\lvert 1 \rangle$ (except of course that the story is a bit more complicated, on account of the fact that the first qubit is not "secretly" in either the state $0$ or $1$). The above is not strongly dependent on measuring the first qubit: we can define $\lvert \varphi_0 \rangle$ and $\lvert \varphi_1 \rangle$ in terms of fixing any particular bit in the bit string $x$ to either $0$ or $1$, summing over only those components which are consistent with either the choice $0$ or $1$, and proceeding as above. The above is also not strongly dependent on measuring in the standard basis, as Emily indicates. 
If we wish to consider measuring the first qubit in the basis $\lvert \alpha \rangle, \lvert \beta \rangle$, where $\lvert \alpha \rangle = \alpha_0 \lvert 0 \rangle + \alpha_1 \lvert 1 \rangle$ and $\lvert \beta \rangle = \beta_0 \lvert 0 \rangle + \beta_1 \lvert 1 \rangle$, we define $$\begin{aligned} \lvert \varphi_0 \rangle &= \Bigl(\lvert \alpha \rangle\!\langle \alpha \lvert \otimes I^{\otimes n-1}\Bigr)\lvert \psi\rangle = \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! \bigl(\alpha_0^\ast u_{0x'} + \alpha_1^\ast u_{1x'}\bigr) \,\lvert\alpha\rangle \lvert x' \rangle\,, \\ \lvert \varphi_1 \rangle &= \Bigl(\lvert \beta\rangle\!\langle \beta \lvert \otimes I^{\otimes n-1}\Bigr)\lvert \psi\rangle = \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! \bigl(\beta_0^\ast u_{0x'} + \beta_1^\ast u_{1x'}\bigr) \,\lvert\beta\rangle \lvert x' \rangle\,, \end{aligned}$$ and then proceed as above. Less formally stated than the other answers, but for beginners I like the intuitive method outlined by Prof. Vazirani in this video. Suppose you have a general two-qbit state: $|\psi\rangle = \begin{bmatrix} \alpha_{00} \\ \alpha_{01} \\ \alpha_{10} \\ \alpha_{11} \end{bmatrix} = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$ Now suppose you measure the most-significant (leftmost) qbit in the computational basis (as in, collapse it to either $|0\rangle$ or $|1\rangle$). There are two questions we might ask: What is the probability that the measured qbit collapses to $|0\rangle$? What about $|1\rangle$? What is the state of the 2-qbit system after measurement? For the first question, the intuitive answer is this: take the sum of squares of all amplitudes associated with the value for which you want to find the probability of collapse.
So, if you want to know the probability of the measured qbit collapsing to $|0\rangle$, you'd look at the amplitudes associated with cases $|00\rangle$ and $|01\rangle$, because those are the cases where the measured qbit is $|0\rangle$. Thus: $P[|0\rangle] = |\alpha_{00}|^2 + |\alpha_{01}|^2$ Similarly, for $|1\rangle$ you look at the amplitudes associated with cases $|10\rangle$ and $|11\rangle$, so: $P[|1\rangle] = |\alpha_{10}|^2 + |\alpha_{11}|^2$ As for the state of the 2-qbit system after measurement, what you do is cross out all the components of the superposition which are inconsistent with the answer you got. So, if you measured $|0\rangle$, then the state after measurement is: $\require{cancel} |\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \cancel{\alpha_{10}|10\rangle} + \cancel{\alpha_{11}|11\rangle} = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle$ However, this state is not normalized - the sum of squares does not add up to 1, and so you have to normalize it: $|\psi\rangle = \frac{\alpha_{00}|00\rangle + \alpha_{01}|01\rangle}{\sqrt{|\alpha_{00}|^2 + |\alpha_{01}|^2}}$ Similarly, if you measured $|1\rangle$ then you'd get: $\require{cancel} |\psi\rangle = \cancel{\alpha_{00}|00\rangle} + \cancel{\alpha_{01}|01\rangle} + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle = \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$ Normalized: $|\psi\rangle = \frac{\alpha_{10}|10\rangle + \alpha_{11}|11\rangle}{\sqrt{|\alpha_{10}|^2 + |\alpha_{11}|^2}}$ And that's how you calculate the action of measuring one qbit in a multi-qbit state, in the simplest case!
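The recipe above is easy to check numerically. A small sketch (the amplitudes here are arbitrary illustrative values, normalized before use):

```python
import numpy as np

# General 2-qbit state |psi> = a00|00> + a01|01> + a10|10> + a11|11>,
# with arbitrary amplitudes normalized to a unit vector.
amps = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
psi = amps / np.linalg.norm(amps)
a00, a01, a10, a11 = psi

# Probability that the leftmost qbit collapses to |0> or |1>:
p0 = abs(a00) ** 2 + abs(a01) ** 2
p1 = abs(a10) ** 2 + abs(a11) ** 2
assert np.isclose(p0 + p1, 1.0)

# Post-measurement state given outcome |0>: cross out the inconsistent
# components and renormalize.
psi_after_0 = np.array([a00, a01, 0, 0]) / np.sqrt(p0)
assert np.isclose(np.linalg.norm(psi_after_0), 1.0)
```

With these amplitudes, $p_0 = 5/30$ and $p_1 = 25/30$, matching the sum-of-squares rule directly.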
Here is a proof that $\lim_{x\to2} 3x^2 = 12$.

We are given some $\epsilon > 0$, and we need to find $\delta$ such that $0 < |x-2| < \delta \Rightarrow |3x^2 - 12| < \epsilon$. The inequality $|3x^2 - 12| < \epsilon$ will be more useful if it is in terms of $x-2$ rather than $x$, since the inequality $0 < |x-2| < \delta$ is in terms of $x-2$. For simplicity, let $z = x-2$. Then we wish to find $\delta$ such that $0 < |z| < \delta \Rightarrow |3(z+2)^2 - 12| < \epsilon$, which simplifies to $0 < |z| < \delta \Rightarrow |3z^2 + 12z| < \epsilon$.

However, we know that $|3z^2 +12z|\leq|3z^2|+|12z|=3z^2 + 12|z|$. So it suffices to find $\delta$ such that $0 < |z| < \delta \Rightarrow 3z^2 + 12|z| < \epsilon$. If $ 0 < |z| < \delta$, then $3z^2 + 12|z| < 3\delta^2 + 12\delta = 3\delta(4 + \delta)$. Thus it suffices to choose $\delta$ such that $3\delta(4 + \delta) < \epsilon$.

The $4 + \delta$ term is somewhat annoying. We can make it simpler by assuming that $\delta \leq 1$. If we assume that $\delta \leq 1$, then $4+\delta \leq 5$, and the inequality that we need becomes simply $3\delta(4+\delta) \leq 15\delta < \epsilon$. To force this to be true, we select $\delta = \frac{\epsilon}{15}$. (In the unlikely event that $\epsilon > 15$, we can just take $\delta = 1$.) We then conclude that $0 < |z| < \delta \Rightarrow |3x^2 - 12| < 3\delta(4 + \delta) \leq 15\delta=\epsilon$. Thus, for any $\epsilon < 15$, we have found that $\delta = \frac{\epsilon}{15}$ satisfies the $\delta$-$\epsilon$ condition: $0 < |x-2| < \delta \Rightarrow |3x^2 - 12| < \epsilon$, and hence we have established that $\lim_{x\to2} 3x^2 = 12$.

What I don't understand is why, just because something is greater than $|3z^2 + 12z|$, it suffices to find $\delta$ using it. I mean, by that logic couldn't I say something like $|3z^2 + 12z| \leq |3z^2 + 12z| + z^{5000000}$, so it suffices to find $\delta$ such that $0 < |z| < \delta \Rightarrow|3z^2 + 12z| + z^{5000000} < \epsilon$? Any help would be appreciated
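As a spot check of the proof's choice (numerical sampling is not a proof, just a sanity check), one can verify that $\delta = \min(1, \epsilon/15)$ really does force $|3x^2 - 12| < \epsilon$ on sampled points with $0 < |x-2| < \delta$:

```python
# Spot check (not a proof): delta = min(1, epsilon/15) should force
# |3x^2 - 12| < epsilon whenever 0 < |x - 2| < delta.
def delta_works(epsilon, samples=1000):
    delta = min(1.0, epsilon / 15.0)
    for k in range(1, samples + 1):
        t = delta * k / (samples + 1)   # 0 < t < delta
        for x in (2.0 + t, 2.0 - t):    # test both sides of 2
            if not abs(3 * x * x - 12) < epsilon:
                return False
    return True

assert all(delta_works(eps) for eps in (1e-3, 0.5, 3.0, 15.0, 100.0))
```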
Why is $\sqrt{i} = \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i$?

Note by Saad Haider, 6 years ago

Sort by:

$i=e^{i\pi/2} \Rightarrow \sqrt{i}=e^{i\pi/4}=\cos(\pi/4)+i\sin(\pi/4)=\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}i$

remember taking root will give both positive and negative solutions

I completely agree, though you forgot to put in the $i$ until the end: $i=e^{\pi i/2}$

Voted up. :D

Sorry about that. Fixed. Thanks! :)

Actually, $i$ has two square roots, just as any nonzero real number does. These are the two roots of $z^2=e^{\pi i/2}$. One of the roots is $e^{\pi i/4}=\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}i$, as noted previously. The other is $e^{5\pi i/4}=\frac{\sqrt{2}}{2}-\frac{\sqrt{2}}{2}i$. Doubling the argument of both of these complex numbers and squaring the modulus indeed gives $e^{\pi i/2}$.
There is some method with which we agree on which root we're talking about with radicals, making radicals ambiguous but fractional exponents not so. Sadly, I don't know exactly how we decide. Could anyone enlighten me?

The "principal root" of $x$ describes the positive root, so if you're given $\sqrt{4}$ then it is asking for the positive answer, which is $2$ and not $-2$. But when dealing with complex numbers it gets... well, more complex. Neither root is more "important" than the other, so you don't tend to worry about it as much.

@Michael Tong – and for higher roots? (e.g. $\sqrt[5]{2}$)

You're missing a minus sign: $e^{5 \pi i / 4} = -\frac{\sqrt{2}}{2} - \frac{\sqrt{2}}{2}i$

Let $\sqrt{i} = x$. We have $x^{2} = i$. Converting $i$ to polar form, we have $x^{2} = \cos \frac{\pi}{2} + i\sin \frac{\pi}{2}$. Using De Moivre's theorem we have $x_{1} = \cos \frac{\pi}{4} + i\sin \frac{\pi}{4} = \frac{\sqrt2}{2} + \frac{\sqrt2}{2} i$ and $x_{2} = \cos \frac{5\pi}{4} + i\sin \frac{5\pi}{4} = -\frac{\sqrt2}{2} -\frac{\sqrt2}{2} i$.

$\sqrt{i}$ is a complex number, so it can be written in the form $a+bi$ where $a$ and $b$ are real numbers. Squaring gives $(a^2-b^2)+2abi$. It follows that $2ab=1$ and $a^2=b^2$. From the second we have that $a=b$ or $a=-b$. Substituting in the first, we get $a^2=\frac{1}{2}$ or $a^2=-\frac{1}{2}$. So $a=\pm \frac{1}{2} \sqrt{2}$ or $a=\pm \frac{1}{2} \sqrt{2}\,i$, and $b=\pm \frac{1}{2} \sqrt{2}$ or $b=\mp \frac{1}{2} \sqrt{2}\,i$. Using these in our original expression $a+bi$, we get only two distinct solutions: $\frac{1}{2} \sqrt{2} + \frac{1}{2} \sqrt{2}\,i$ and $- \frac{1}{2} \sqrt{2} - \frac{1}{2} \sqrt{2}\,i$.
I think usually the first is chosen as THE root of $i$, but only because it's closest to $(1,0)$.
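Both roots discussed above can be confirmed with Python's `cmath` module, whose `sqrt` returns the principal root:

```python
import cmath

# The principal square root returned by cmath agrees with e^{i*pi/4}:
r = cmath.sqrt(1j)
assert cmath.isclose(r, cmath.exp(1j * cmath.pi / 4))
assert cmath.isclose(r.real, 2 ** 0.5 / 2)
assert cmath.isclose(r.imag, 2 ** 0.5 / 2)

# The other square root is its negative, e^{5*i*pi/4}:
assert cmath.isclose(-r, cmath.exp(5j * cmath.pi / 4))
# Squaring either root recovers i (up to floating-point rounding):
assert cmath.isclose(r ** 2, 1j)
assert cmath.isclose((-r) ** 2, 1j)
```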
NEGF formalism¶ Device configuration¶ This section describes the most important aspects of the formalism in QuantumATK for simulating device systems using the non-equilibrium Green’s function (NEGF) method. For a more detailed description of the theory you may consult some of the many research papers on the subject [BMO+02], [STBG05], [HJ08], [Dat97]. A typical device configuration is illustrated in Fig. 102. The system can be divided into three regions: left, central, right. The implementation relies on the so-called screening approximation, which assumes that the properties of the left and right regions, the electrodes, can be described by solving a bulk problem for the fully periodic electrode cell. The screening approximation will formally be fulfilled when the current through the system is sufficiently small that the electrode regions can be described by an equilibrium electron distribution. Fig. 102 illustrates how the electrode regions must be extended into the central region, in order to screen out the perturbations from the scatterer (in this case the benzene molecule) inside the device. For metallic electrodes it is usually sufficient to extend the electrodes 5-10 Å into the central region. Non-equilibrium electron distribution¶ The left and right regions are equilibrium systems with periodic boundary conditions, and the properties of these systems are obtained using a conventional electronic structure calculation. The challenge in calculating the properties of a device system lies in the calculation of the non-equilibrium electron distribution in the central region. The assumption is that the system is in a steady state such that the electron density of the central region is constant in time. The electron density is given by the occupied eigenstates of the system. 
Since the chemical potential is different in the two electrodes, the contribution from each electrode to the total electron density in the central region must be calculated independently: the contribution from the left (\(n^L\)) and right (\(n^R\)) electrodes can be obtained by calculating the scattering states, which are the eigenstates of the system when scattering boundary conditions are used. Fig. 103 illustrates a left-moving scattering state with origin in the right electrode. The left and right densities are now calculated by summing up the occupied scattering states. The scattering states of the system are calculated by first calculating the Bloch states in the electrodes and subsequently solving the Schrödinger equation of the central region using the Bloch states as matching boundary conditions. The non-equilibrium Green’s function (NEGF) method¶ Instead of using the scattering states to calculate the non-equilibrium electron density, QuantumATK uses the non-equilibrium Green’s function (NEGF) method; the two approaches are formally equivalent and give identical results [BMO+02]. The electron density is given in terms of the electron density matrix, as described in section Electron density. We divide the density matrix into left and right contributions, \(D = D^L + D^R\). The left density matrix contribution is calculated using the NEGF method as [BMO+02] \[D^L = \int \rho^L(\varepsilon)\, f\!\left(\frac{\varepsilon - \mu_L}{k_B T_L}\right) d\varepsilon,\] where \[\rho^L(\varepsilon) = \frac{1}{2\pi}\, G(\varepsilon)\, \Gamma^L(\varepsilon)\, G^\dagger(\varepsilon)\] is the spectral density matrix. Note that while there is a non-equilibrium electron distribution in the central region, the electron distribution in the electrode is described by a Fermi function \(f\) with an electron temperature \(T_L\). In this equation, \(G\) is the retarded Green’s function, and \[\Gamma^L = i\left(\Sigma^L - (\Sigma^L)^\dagger\right)\] is the broadening function of the left electrode, given in terms of the left electrode self energy, \(\Sigma^L\). A similar equation exists for the right density matrix contribution. The following section describes the calculation of \(G\) and \(\Sigma\) in more detail.
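As a rough illustration of these objects — not the QuantumATK implementation, which uses sparse storage and self-consistent DFT Hamiltonians — the Green's function, broadening functions, and spectral density matrix can be assembled for a toy tight-binding chain. The orthogonal basis, nearest-neighbour hopping, and wide-band-limit self-energies below are all simplifying assumptions made for this sketch:

```python
import numpy as np

# Toy model: 4-site chain, orthogonal orbitals (S = I), hopping t,
# wide-band-limit self-energies Sigma = -i*gamma/2 on the end sites
# that couple to the left/right electrodes (assumptions, for brevity).
N, t, gamma, E, eta = 4, -1.0, 0.5, 0.3, 1e-9
H = t * (np.eye(N, k=1) + np.eye(N, k=-1))
S = np.eye(N)
Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = -0.5j * gamma
Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = -0.5j * gamma

# Retarded Green's function of the central region at energy E:
G = np.linalg.inv((E + 1j * eta) * S - H - Sigma_L - Sigma_R)

# Broadening functions Gamma = i(Sigma - Sigma^dagger) and the left
# spectral density matrix rho_L = G Gamma_L G^dagger / (2*pi):
Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
rho_L = (G @ Gamma_L @ G.conj().T) / (2 * np.pi)

# The spectral density is Hermitian and positive semi-definite:
assert np.allclose(rho_L, rho_L.conj().T)
assert np.all(np.linalg.eigvalsh(rho_L) > -1e-12)
```

The same ingredients also yield the transmission coefficient discussed later in this section, via `np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real`.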
Retarded Green’s function¶ The key quantity to calculate is the retarded Green’s function matrix, \[G(\varepsilon) = \left[(\varepsilon + i \delta_+) S - H\right]^{-1},\] where \(\delta_+\) is an infinitesimal positive number. \(S\) and \(H\) are the overlap and Hamiltonian matrices, respectively, of the entire system. The Green’s function is only required for the central region and can be calculated from the Hamiltonian of the central region by adding the electrode self energies: \[G(\varepsilon) = \left[(\varepsilon + i \delta_+) S_{CC} - H_{CC} - \Sigma^L(\varepsilon) - \Sigma^R(\varepsilon)\right]^{-1}.\] Therefore, the calculation of the Green’s function of the central region at a specific energy requires the inversion of the Hamiltonian matrix of the central region. In QuantumATK this matrix is stored in a sparse format, and the inversion is done using an \(\mathcal{O}(N)\) algorithm [PSrensenH+08]. Self energy¶ The self energy describes the effect of the electrode states on the electronic structure of the central region. The self energy can be calculated from the electrode Hamiltonian. QuantumATK provides 4 different methods for calculating the self energy: DirectSelfEnergy uses an exact diagonalization of the Hamiltonian for calculating the self energy [SLJB99]. RecursionSelfEnergy uses an iterative scheme for calculating the self energy [LSLSR85], which gives essentially the same results as the direct method but is faster for some systems. SparseRecursionSelfEnergy uses the same iterative scheme as RecursionSelfEnergy, but exploits inherent sparsity. This provides a smaller memory footprint, as well as increased performance for all but the smallest systems. KrylovSelfEnergy is an approximate method which is an order of magnitude faster than the other methods, but may in rare cases give inaccurate results [SrensenHP+08], [SrensenHP+09]. Complex contour integration¶ The energy integral needed to obtain the density matrix within the NEGF framework is evaluated through a complex contour integration. It is divided into two parts: an integral over equilibrium states and an integral over non-equilibrium states.
The integral over the equilibrium states can be done using two different methods: SemiCircleContour: Defines a semi-circular complex contour that gives high computational efficiency [BMO+02]. OzakiContour: Evaluation of the complex contour integral is based on the residue theorem and a continued-fraction representation of the Fermi–Dirac distribution [Oza07][ONK10]. This is a highly stable method, but is also less efficient than SemiCircleContour. The integral over the non-equilibrium states must be performed along the real axis, and uses the RealAxisContour method. Note that for large biases, this is often the most demanding part of the calculation. Lastly, one of two different contour methods must be chosen (usually just left at the default value): SingleContour uses a single contour for the calculation. This is appropriate for small biases, and is the fastest method. DoubleContour uses a double contour for the calculation. This gives the best stability and must be used for high biases. The method can also handle bound states inside the bias window. See also ContourParameters. Spill-in terms¶ In terms of the density matrix, \(D\), the electron density of the central region is given by \[n(\mathbf{r}) = \sum_{ij} D_{ij}\, \phi_i(\mathbf{r})\, \phi_j(\mathbf{r}),\] where the \(\phi_i\) are the basis orbitals. The Green’s function of the central region gives the density matrix of the central region, \(D_{CC}\); however, to calculate the density correctly close to the central cell boundaries, the terms involving \(D_{LL},D_{LC},D_{CR},D_{RR}\) are also needed. These terms are denoted spill-in terms [SMB+16]. ATK includes all the spill-in terms, both for calculating the electron density \(n ({\bf r})\) and the Hamiltonian integral. This gives additional stability and well-behaved convergence in the device algorithm [SMB+16]. Effective potential¶ Once the non-equilibrium density is obtained, the next step in the self-consistent calculation is the calculation of the effective potential as the sum of the exchange-correlation and electrostatic Hartree potentials.
The calculation of the exchange-correlation potential is straightforward since it is a local or semi-local function of the density. However, the calculation of the electrostatic Hartree potential requires some additional consideration for a device system. The following describes the calculation of the Hartree potential in more detail. The starting point is the calculation of the self-consistent Hartree potential in the left and right electrodes. The Hartree potential of a bulk system is defined up to an arbitrary constant. However, in a device setup the Hartree potentials of the two electrodes are aligned through their chemical potentials (i.e. their Fermi levels), since these are related by the applied bias: The Hartree potential of the central region is obtained by solving the Poisson equation, using the bulk-like Hartree potentials of the electrodes as boundary conditions at the interfaces between the electrodes and the central region. QuantumATK offers 5 different methods for solving the Poisson equation of a device system: FastFourier2DSolver uses a Fourier transform in the directions perpendicular to the transport direction, and a real space multigrid method in the transport direction. This is a fast and accurate method and it is the default in QuantumATK. MultigridSolver uses a real space multigrid method in all directions. This method is slower and slightly less accurate than the FastFourier2DSolver. However, it is very flexible and allows for different types of boundary conditions, implicit solvents, and the use of metallic and dielectric spatial regions to simulate gates. DirectSolver uses a direct real space solver method in all directions. This method is both accurate and fast, but requires significantly more memory than the other solver methods. Similar to the MultigridSolver, it is very flexible and allows for different types of boundary conditions, implicit solvents, and the use of metallic and dielectric spatial regions to simulate gates. 
ParallelConjugateGradientSolver uses an iterative solver based on the conjugate gradient method. Similar to the MultigridSolver, it allows for different types of boundary conditions, implicit solvents, and the use of metallic and dielectric spatial regions to simulate gates. This method is similar to MultigridSolver in terms of accuracy and outperforms it when running calculations in parallel, while requiring much less memory than DirectSolver. FastFourierSolver uses a Fourier transform in all directions. This is a very fast method and is still used in some other implementations of the NEGF method. It requires that the electrodes are identical, but even for such so-called homogeneous systems the method can in some cases be problematic, especially at finite bias, and it is recommended not to use this method for device configurations. Total energy and forces¶ A device system is an open system where charge can flow in and out of the central region from the left and right reservoirs. Since the particle number is not conserved, it is necessary to use a grand canonical potential to describe the energetics of the system [Tod98]: \[\Omega = E - \mu_L N_L - \mu_R N_R,\] where \(N_{L/R}\) is the number of electrons contributed to the central region from the left/right electrode. Due to the screening approximation, the central region will be charge neutral, and therefore \(N_L + N_R = N\), where \(N\) is the ionic charge in the central region. Thus, at no applied bias, we have \(\mu_L = \mu_R\) and the particle terms in \(\Omega\) will be constant when atoms are moved in the central region. However, at finite bias, \(\mu_L \neq \mu_R\), and the particle terms in \(\Omega\) will be important. The forces are given by \[\vec{F}_i = -\frac{\partial \Omega}{\partial \vec{R}_i}.\] It can be shown that the calculation of this force is identical to the calculation of the equilibrium force, as described in Total energy and forces.
In the non-equilibrium case it is however required that the density and energy density matrices are calculated within the NEGF framework [BMO+02], [Tod98], [ZRSH11]. See also TotalEnergy. Transmission coefficient¶ When the self-consistent non-equilibrium density matrix has been obtained, it is possible to calculate various transport properties of the system. One of the most notable is the TransmissionSpectrum, from which you can obtain the current and differential conductance. The transmission amplitude \(t_k\) is defined as the fraction of a scattering state \(k\) which propagates through the device. The transmission coefficient at energy \(\varepsilon\) is obtained by summing up the transmission amplitudes of all the states at this energy, \[T(\varepsilon) = \sum_k |t_k(\varepsilon)|^2.\] The transmission coefficient may also be obtained from the retarded Green’s function using \[T(\varepsilon) = \mathrm{Tr}\!\left[\Gamma^L(\varepsilon)\, G(\varepsilon)\, \Gamma^R(\varepsilon)\, G^\dagger(\varepsilon)\right],\] and this is how it is calculated in QuantumATK. The transmission amplitude of individual scattering states may be obtained through the TransmissionEigenvalues. Electrical current¶ To calculate the current you must first calculate a TransmissionSpectrum, and then from this extract the electrical current using the current() method on the object. This approach has the advantage that once the TransmissionSpectrum is calculated, it is fast to calculate the current for different electrode temperatures [SMB+16]. See TransmissionSpectrum for more details. References¶ [BMO+02] (1, 2, 3, 4, 5) M. Brandbyge, J.-L. Mozos, P. Ordejón, J. Taylor, and K. Stokbro. Density-functional method for nonequilibrium electron transport. Phys. Rev. B, 65:165401, Mar 2002. doi:10.1103/PhysRevB.65.165401. [Dat97] S. Datta. Electronic Transport in Mesoscopic Systems. Cambridge University Press, 1997. Cambridge Studies in Semiconductor Physics and Microelectronic Engineering. [HJ08] H. Haug and A.-P. Jauho. Quantum Kinetics in Transport and Optics of Semiconductors. Volume 123. Springer-Verlag Berlin Heidelberg, 2 edition, 2008. Springer Series in Solid-State Sciences.
doi:10.1007/978-3-540-73564-9. [LSLSR85] M. P. Lopez Sancho, J. M. Lopez Sancho, and J. Rubio. Highly convergent schemes for the calculation of bulk and surface Green functions. J. Phys. F: Metal Physics, 15(4):851, 1985. URL: http://stacks.iop.org/0305-4608/15/i=4/a=009. [ONK10] T. Ozaki, K. Nishio, and H. Kino. Efficient implementation of the nonequilibrium Green function method for electronic transport calculations. Phys. Rev. B, 81:035116, 2010. doi:10.1103/PhysRevB.81.035116. [Oza07] Taisuke Ozaki. Continued fraction representation of the Fermi–Dirac function for large-scale electronic structure calculations. Phys. Rev. B, 75:035123, 2007. doi:10.1103/PhysRevB.75.035123. [PSrensenH+08] D. E. Petersen, H. H. B. Sørensen, P. C. Hansen, S. Skelboe, and K. Stokbro. Block tridiagonal matrix inversion and fast transmission calculations. J. Comput. Phys., 227(6):3174–3190, 2008. doi:10.1016/j.jcp.2007.11.035. [SLJB99] S. Sanvito, C. J. Lambert, J. H. Jefferson, and A. M. Bratkovsky. General Green's-function formalism for transport calculations with spd Hamiltonians and giant magnetoresistance in Co- and Ni-based magnetic multilayers. Phys. Rev. B, 59:11936–11948, May 1999. doi:10.1103/PhysRevB.59.11936. [STBG05] K. Stokbro, J. Taylor, M. Brandbyge, and H. Guo. Ab-initio based Non-equilibrium Green's Function Formalism for Calculating Electron Transport in Molecular Devices, pages 117–151. Springer, 2005. [SMB+16] (1, 2, 3) D. Stradi, U. Martinez, A. Blom, M. Brandbyge, and K. Stokbro. General atomistic approach for modeling metal-semiconductor interfaces using density functional theory and nonequilibrium Green's function. Phys. Rev. B, 93:155302, Apr 2016. doi:10.1103/PhysRevB.93.155302. [SrensenHP+08] H. H. B. Sørensen, P. C. Hansen, D. E. Petersen, S. Skelboe, and K. Stokbro. Krylov subspace method for evaluating the self-energy matrices in electron transport calculations. Phys. Rev. B, 77:155301, Apr 2008. doi:10.1103/PhysRevB.77.155301. [SrensenHP+09] H. H. B.
Sørensen, P. C. Hansen, D. E. Petersen, S. Skelboe, and K. Stokbro. Efficient wave-function matching approach for quantum transport calculations. Phys. Rev. B, 79:205322, May 2009. doi:10.1103/PhysRevB.79.205322. [Tod98] (1, 2) T. N. Todorov. Local heating in ballistic atomic-scale contacts. Philosophical Magazine Part B, 77(4):965–973, 1998. doi:10.1080/13642819808206398. [ZRSH11] R. Zhang, I. Rungger, S. Sanvito, and S. Hou. Current-induced energy barrier suppression for electromigration from first principles. Phys. Rev. B, 84:085445, Aug 2011. doi:10.1103/PhysRevB.84.085445.
In a quadrilateral ABCD, it is given that AB = AD = 13, BC = CD = 20, BD = 24. If r is the radius of the circle inscribable in the quadrilateral, then what is the integer closest to r?

But how do you construct this? First, notice that the quadrilateral is a kite. The axis diagonal AC of a kite perpendicularly bisects the other diagonal BD (prove this!). If X is the point of intersection, then it is easy to show \( \Delta AXB \equiv \Delta AXD \), implying BX = XD = 12. Finally, to find the incenter, draw the angle bisector of \( \angle ADC\). Wherever it meets AC, call that point I. It is the incenter (prove this too!). To draw the incircle, drop a perpendicular from I to AD and draw a circle taking that perpendicular segment as radius, and I as the center. (Notice that \( \Delta AXB \) is right angled. Hence \( AX^2 + BX^2 = AB^2 \), or \( AX^2 = 13^2 - 12^2 = 5^2 \).)

Any polygon with an inscribable circle (a circle that touches each of its sides) has this beautiful property: \( Area = r \times s \), where r = radius of the inscribed circle and s = semiperimeter (half of the perimeter). Why is that true? Well, there is a simple argument that works for all polygons (in which circles can be inscribed). However, we will describe the special case of this kite: Drop perpendiculars from incenter I to each of the sides of the kite. Since the incircle touches each side, each side is a tangent to the incircle. Hence these perpendicular segments are clearly the radii of the incircle (after all, this was our method of construction in the first place). Join ID and IB. The kite is thus divided into four triangles: red, green, yellow and blue. Notice that the areas of these triangles are \( \frac{1}{2} \times r \times AB, \frac{1}{2} \times r \times BC, \frac{1}{2} \times r \times CD, \frac{1}{2} \times r \times DA \) respectively. Adding the areas of the 4 triangles we get the area of the kite, which is \( \frac{1}{2} \times r \times (AB + BC + CD + DA ) = r \times s \). Thus the area of the kite is inradius times semiperimeter.
The exact same argument holds for any polygon with an inscribable circle. The semiperimeter is \( \frac{1}{2} \times (13 + 13 + 20 + 20) = 33 \). Hence the area of the kite is 33r. But we also know that the area of the kite is half the product of the diagonals (or just separately find the areas of the triangles ABC and ADC and add them). This gives the area to be \( \frac{1}{2} \times 24 \times 21 = 252 \). Equating, we have 33r = 252, or r ≈ 7.6. Its closest integer is 8.
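The computation above can be replayed in a few lines (a sketch of the same arithmetic, not a general incircle routine):

```python
# Worked example: AB = AD = 13, BC = CD = 20, BD = 24.
AB, BC, BD = 13, 20, 24
BX = BD // 2                  # axis diagonal bisects BD, so BX = 12
AX = (AB**2 - BX**2) ** 0.5   # right triangle AXB: AX = 5
CX = (BC**2 - BX**2) ** 0.5   # right triangle CXB: CX = 16
AC = AX + CX                  # full diagonal: 21

area = 0.5 * BD * AC          # half the product of the diagonals: 252
s = (2 * AB + 2 * BC) / 2     # semiperimeter: 33
r = area / s                  # inradius = area / semiperimeter
print(round(r))               # prints 8
```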
Moral of the story: Two stored values may be swapped arithmetically with 4 or fewer variable references.

Puzzle of the story: Can you exemplify the moral? (With 10 or fewer symbols in all.)

The story: Once upon a puzzle there was a dear little user — affectionately called Little Red Solving Hood by the villagers — who was sent to Grandparent’s house with a basket of goodies that included a couple of real numbers, X and Y, as variables with stored values that could be revised. The basket was unbalanced, though, so Little Red stopped along the path outside Bit Bad Wolf’s Swapadero to exchange the numbers’ values. Bit Bad Wolf’s big mouth flashed a big bad smile. “Why don’t you step inside and just let me swap those numbers without moving either one.” $\require{begingroup}\begingroup \def \K { \kern-.6em } \def \_ #1{ \kern1em \raise-.5ex{\underline{\kern1em \raise.5ex{#1} \kern1em}} \kern1em } \def \* {{\oplus }} \def \X {{ \sf X}} \def \x {{\sf\normalsize \unicode {120377}}} \def \Y {{ \sf Y}} \def \y {{ \sf\normalsize \unicode{120378}}} \def \= #1{ \rlap{\raise1.3ex{~~~\,{#1}\,}} ~~ \gets ~~ } \def \( { \raise .5ex{ \big( } } \def \) { \raise.5ex{ \big) } } $ $$ \small\sf\begin{array}{c} \sf Action && \sf Variable && \_{\X} &\_{\Y}\\[-1ex] &&\sf\raise-.5ex{references}&& & \\[-1ex] && && \x & \y \\[.2ex] \X \=~ \X\,\*\,\Y && \sf 3 &&\x\,\*\,\y & \y \\[.2ex] \Y \=~ \X\,\*\,\Y && \sf 3 && \x\,\*\,\y & \x \\[.2ex] \X \=~ \X\,\*\,\Y&& \sf \_3 && \y & \x \\ && \sf 9 && & \end{array} \kern-2.5em $$ “And if 9 variable references are too many for sweet little delicious you, how about 6?” $$ \small\sf\begin{array}{c} \sf Action && \sf Variable && \_{\X} &\_{\Y}\\[-1ex] &&\sf\raise-.5ex{references}&& & \\[-1ex] && && \x & \y \\[.2ex] \X \=\* \Y && \sf 2 &&\x\,\*\,\y & \y \\[.2ex] \Y \=\* \X && \sf 2 && \x\,\*\,\y & \x \\[.2ex] \X \=\* \Y && \sf \_2 && \y & \x \\ && \sf 6 && & \end{array} \kern-4em $$ The wolf thought that $\small\rlap{\raise1.3ex{\,~\oplus}}\gets$ (self-revising
augmented assignment) might catch Little Red unawares; each action had exactly the same effect as before, merely with fewer scary symbols and variable references. But Little Red Solving Hood had a smart little mouth and mind. “Eat bits and die, Bad Wolf. That’s for binary numbers, which need only 4 references anyway.” $$ \small\sf\begin{array}{c} \sf Action && \sf Variable && \_{\X} & \_{\Y} \\[-1ex] &&\sf\raise-.5ex{references}&& & \\[-1ex] && && \x & \y \\[.2ex] \X \=\* \Y \=\* \X \=\* \Y && \sf \_4 && \y & \x \\ && \sf 4 && & \end{array} \kern1.7em $$ Bit Bad Wolf nodded sheepishly. After all, these assignments follow right-associative precedence. $$ \small\sf \X \=\* \Y \=\* \X \=\* \Y \qquad {\Large \equiv} \qquad \X \=\* \( ~ \Y \=\* \( ~ \X \=\* \Y ~ \) ~ \) \kern3.1em $$ “Besides, all the goodies I need can be found in the basket.” $$ \small \= ~ \K \= ~ \K \= ~ ~ \= + \K \= + \K \= + ~ \= - \K \= - \K \= - ~ \=\times \K \=\times \K \=\times ~ \=\div \K \=\div \K \=\div \raise-2ex\strut \kern2.8em \\ \small ~~~ + ~~ + ~~ + ~~~ - ~~ - ~~ - ~~~ \times ~~ \times ~~ \times ~~~ \div ~~ \div ~~ \div \sf ~~~~ 0 ~~ 1 ~~ 2 ~~ 3 ~~ 4 ~~ 5 ~~ 6 ~~ 7 ~~ 8 ~~ 9 \kern3.8em $$ With that, Little Red swapped the values of X and Y by constructing a formula with 4 total variable references (only to X and/or Y, like the wolf’s designs) along with 6 goodies from those just listed. “Oh my! What a big frown you have, Wolfie,” jeered our sweet little user before skipping off. With what little formula did Little Red Solving Hood swap those values of X and Y? $\endgroup$
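The wolf's 6-reference binary route is easy to verify; Python has no chained augmented assignment, so the three XOR steps are written out explicitly (the 4-reference arithmetic formula is left to the puzzle):

```python
# XOR swap: three self-revising assignments, two references each.
x, y = 0b1011, 0b0110   # arbitrary example values (11 and 6)
x ^= y                  # x now holds x0 ^ y0
y ^= x                  # y now holds the original x0
x ^= y                  # x now holds the original y0
assert (x, y) == (0b0110, 0b1011)
```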
This is a matter of understanding what you're dealing with. You're asked to differentiate $x^2+y^2=1$. An equation isn't a differentiable function, therefore the equation can't be differentiated. Now comes the 'translating the problem' part. The equation $x^2+y^2=1$ 'defines a function'; more precisely, there exists a function $g\colon U\to V$ such that $x^2+(g(x))^2=1$, for some sets $U$ and $V$. (A lot can be said about $g, U$ and $V$.) Let's assume for the time being that $g$ is differentiable. Now what the problem is actually asking you to do is to differentiate both sides of $x^2+(g(x))^2=1$, yielding $2x+2g(x)g'(x)=0$. All this is simply the Implicit Function Theorem. The details can be checked on the link. In two dimensions the theorem goes as follows: Let $D\subseteq \Bbb R^2$ be an open set and let $f\colon D \to \Bbb R$ be a class $C^1$ function. Given $a\in \Bbb R$, suppose there exists $(x_0, y_0)\in D$ such that $f(x_0, y_0)=a$ and $f_y(x_0, y_0)\neq 0$. Then there are open intervals $U$ and $V$ with the property that there exists a class $C^1$ function $g\colon U\to V$ such that $\forall x\in U\left(f(x,g(x))=a\right)$. Furthermore, defining $h\colon U\to \Bbb R, x\mapsto f(x,g(x))$, the chain rule yields $\forall x\in U(h'(x)=f_x(x,g(x))+f_y(x,g(x))g'(x)=0)$.
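A quick numerical check of the resulting formula $g'(x) = -x/g(x)$ on the upper half of the unit circle (a finite-difference sketch, not part of the theorem):

```python
import math

# On the upper unit circle, g(x) = sqrt(1 - x**2), and differentiating
# x**2 + g(x)**2 = 1 gives 2x + 2 g(x) g'(x) = 0, i.e. g'(x) = -x/g(x).
def g(x):
    return math.sqrt(1 - x ** 2)

x, h = 0.6, 1e-6
finite_diff = (g(x + h) - g(x - h)) / (2 * h)   # central difference
formula = -x / g(x)                             # -0.6 / 0.8 = -0.75
assert abs(finite_diff - formula) < 1e-6
```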
Under the auspices of the Computational Complexity Foundation (CCF)

In this paper we prove two results about $AC^0[\oplus]$ circuits. We show that for $d(N) = o(\sqrt{\log N/\log \log N})$ and $N \leq s(N) \leq 2^{dN^{1/d^2}}$ there is an explicit family of functions $\{f_N:\{0,1\}^N\rightarrow \{0,1\}\}$ such that $f_N$ has uniform $AC^0$ formulas of depth $d$ and size at ...
For example we have the vector $8i + 4j - 6k$; how can we find a unit vector perpendicular to this vector? Let $\vec{v}=x\vec{i}+y\vec{j}+z\vec{k}$ be a vector perpendicular to yours. Their inner product (the dot product $\vec{u}\cdot\vec{v}$) should be equal to 0, therefore: $$8x+4y-6z=0 \tag{1}$$ Choose, for example, $x$ and $y$, and find $z$ from equation (1). In order to make its length equal to 1, calculate $\|\vec{v}\|=\sqrt{x^2+y^2+z^2}$ and divide $\vec{v}$ by it. Your unit vector would be: $$\vec{u}=\frac{\vec{v}}{\|\vec{v}\|}$$ Every answer here gives the equation $8a+4b-6c=0$. None mentions that this equation represents a plane perpendicular to the given vector. I am sure that the omission was an oversight of each respondent. But it deserves mention and emphasis. In the plane perpendicular to any vector, the set of vectors of unit length forms a circle. So answers will vary. The vectors $(-1,2,0)^t$ and $(3,0,4)^t$ can be chosen as a basis for the solution space of the plane (check: $8(-1)+4(2)=0$ and $8(3)-6(4)=0$). You can divide each by its length, $\sqrt{5}$ and $5$ respectively, orthonormalize the pair (Gram-Schmidt), and take a trigonometric combination $\cos\theta\,\vec u_1+\sin\theta\,\vec u_2$ to get the general unit solution. Congrats on 10'000+ views! I'd like to combine the above fine answers into an algorithm. Given a vector $\vec x$ not identically zero, one way to find $\vec y$ such that $\vec x^T \vec y = 0$ is: start with $\vec y' = \vec 0$ (all zeros); find $m$ such that $x_m \neq 0$, and pick any other index $n \neq m$; set $y'_n = x_m$ and $y'_m = -x_n$, setting potentially two elements of $\vec y'$ non-zero (maybe one if $x_n=0$, doesn't matter); and finally normalize your vector to unit length: $\vec y = \frac{\vec y'}{\|\vec y'\|}.$ (I'm referring to the $n$th element of a vector $\vec v$ as $v_n$.)
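The component-swap algorithm just described is easy to code; here is a small sketch (the function name is mine):

```python
import math

def perpendicular_unit(x):
    """Unit vector orthogonal to the nonzero vector x, built by the
    component-swap recipe described above."""
    m = next(i for i, v in enumerate(x) if v != 0)   # some x_m != 0
    n = (m + 1) % len(x)                             # any other index
    y = [0.0] * len(x)
    y[n] = x[m]
    y[m] = -x[n]        # x . y = x_m * (-x_n) + x_n * x_m = 0
    norm = math.sqrt(sum(c * c for c in y))
    return [c / norm for c in y]

u = perpendicular_unit([8, 4, -6])
print(u)   # orthogonal to 8i + 4j - 6k, with length 1
```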
An automated procedure: take a standard basis vector $\vec e_k$ which is not parallel to $\vec v$, and form the cross product $\vec e_k\times\vec v$, which is guaranteed to be orthogonal to $\vec v$. The crux of the method is to take the $\vec e_k$ which is "the least parallel" to $\vec v$, i.e. the one that minimizes the dot product: take the index $k$ corresponding to the smallest $|v_k|$, and you are on the safe side. Update: in $d$ dimensions, take the standard basis vector $\vec e_k$ that forms the smallest dot product (in absolute value) with $\vec v$ and normalize the vector $$\vec e_k-\frac{\vec e_k\cdot\vec v}{\|\vec v\|^2}\vec v=\vec e_k-\frac{v_k}{\|\vec v\|^2}\vec v.$$ Two steps: First, find a vector $a\,{\bf i}+b\,{\bf j}+c\,{\bf k}$ that is perpendicular to $8\,{\bf i}+4\,{\bf j}-6\,{\bf k}$. (Set the dot product of the two equal to 0 and solve. You can actually set $a$ and $b$ equal to 1 here, and solve for $c$.) Then divide that vector by its length to make it a unit vector. This unit vector will still be perpendicular to $8\,{\bf i}+4\,{\bf j}-6\,{\bf k}$. A vector $v=ai+bj+ck$ is perpendicular to $w=8i+4j-6k$ if and only if $$v\cdot w=8a+4b-6c=0.$$ So for example, we could choose $a=1,b=1,c=2$, so that $v=i+j+2k$. But this is not a unit vector: $$\|v\|=\sqrt{a^2+b^2+c^2}=\sqrt{1^2+1^2+2^2}=\sqrt{6}.$$ However, for any number $t$, it is the case that $\|tv\|=|t|\cdot\|v\|$, and $(tv)\cdot w=t(v\cdot w)$. This shows us how to modify our vector $v$ to get a unit vector that still retains the property of being perpendicular to $w$. Specifically, $$u=\frac{1}{\sqrt{6}}\cdot v=\left(\frac{1}{\sqrt{6}}\right)i+\left(\frac{1}{\sqrt{6}}\right)j+\left(\frac{2}{\sqrt{6}}\right)k$$ satisfies $$\|u\|=\frac{1}{\sqrt{6}}\|v\|=\frac{1}{\sqrt{6}}\cdot\sqrt{6}=1,$$ so that $u$ is a unit vector, and $$u\cdot w=\frac{1}{\sqrt{6}}(v\cdot w)=\frac{1}{\sqrt{6}}\cdot0=0,$$ so that $u$ is perpendicular to $w=8i+4j-6k$. You are just looking for a vector $(x,y,z)$ s.t. $8x+4y-6z=0$.
Take $(1,-2,0)$ for example, and then divide it by its norm to make it a unit vector. Another method: from any vector not collinear with the first, you can apply the Gram-Schmidt process to obtain an orthogonal vector. THEORY: Suppose we are given a vector $A$ and we need to find a vector $B$ that is perpendicular to $A$. We know that $A\cdot B=0$ (since the angle between them is 90° and $\cos(90°)=0$). The algorithm described below can be applied to 2D as well as 3D vectors. Given $A = 8i+4j-6k$, there are three steps. Step 1: start with the vector $B = i+j+k$. Step 2: divide each coefficient of $B$ by the corresponding coefficient of $A$: $$B=(1/8)i+(1/4)j+(1/{-6})k.$$ (At this point each term of the dot product $A\cdot B$ equals 1, so $A\cdot B=3$.) Step 3: multiply any one of the three coefficients by $-2$; here I have multiplied the coefficient of $k$ (it can be done with $i$ or $j$ as well): $$B=(1/8)i+(1/4)j+(1/3)k.$$ (That term of the dot product becomes $-2$, so $A\cdot B = 1+1-2 = 0$.) Now for the unit vector we divide $B$ by its magnitude, which in this case is $$|B|= \sqrt{(1/8)^2 +(1/4)^2 +(1/3)^2},\qquad \hat B=\frac{B}{|B|}.$$ $B$ is the vector perpendicular to $A$, and $\hat B$ is the required unit vector. Verification: $A\cdot B=0$. :) In addition to Yves Daoust's answer: in $d$ dimensions, take any vector $w$ which is not parallel to $v$; then $$u=w-\frac{v^Tw}{\|v\|^2}v$$ is orthogonal to the vector $v$. It doesn't have to be a standard basis vector (or the one that forms the smallest dot product). You need to find $a\,i + b\,j + c\,k$ so that its dot product with $8i +4j -6k$ is 0. That means $8a + 4b - 6c = 0$. You need to choose $a,b,c$ satisfying the above. For example, you can choose $a = 1$, $b = 1$, $c = 2$.
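Both the Gram-Schmidt projection and the "least parallel basis vector plus cross product" recipes from the answers above can be sketched as follows (function names are mine):

```python
def perp_by_projection(v, w):
    """Gram-Schmidt step: remove from w its component along v;
    u = w - (v.w / |v|^2) v is orthogonal to v (for w not parallel to v)."""
    vv = sum(a * a for a in v)
    vw = sum(a * b for a, b in zip(v, w))
    return [b - (vw / vv) * a for a, b in zip(v, w)]

def perp_3d(v):
    """Cross v with the standard basis vector e_k that is 'least
    parallel' to v, i.e. the one with the smallest |v_k|."""
    k = min(range(3), key=lambda i: abs(v[i]))
    e = [0.0, 0.0, 0.0]
    e[k] = 1.0
    return [e[1] * v[2] - e[2] * v[1],
            e[2] * v[0] - e[0] * v[2],
            e[0] * v[1] - e[1] * v[0]]

v = [8.0, 4.0, -6.0]
for u in (perp_by_projection(v, [1.0, 1.0, 1.0]), perp_3d(v)):
    assert abs(sum(a * b for a, b in zip(u, v))) < 1e-9
print("both constructions are orthogonal to v")
```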
We have $$\begin{align} &\Gamma\Big(1~+~x\Big)~\cdot~\Gamma\Big(1-x\Big)~=~\frac{\pi x}{\sin\pi x} \\\\ &\Gamma\Big(\tfrac12+x\Big)~\cdot~\Gamma\Big(\tfrac12-x\Big)~=~\frac\pi{\cos\pi x} \\\\ &\Gamma\Big(\tfrac14+x\Big){\bigg/}\Gamma\Big(\tfrac14-x\Big)~=~\pi^{2x}\cdot\frac{\zeta\Big(\tfrac12-2x\Big)}{\zeta\Big(\tfrac12+2x\Big)} \end{align}$$ I am wondering whether yet another similar relation might also exist, involving either $\Gamma\Big(\tfrac18\pm x\Big)$ or $\Gamma\Big(\tfrac13\pm x\Big).$
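The first two identities can be spot-checked numerically with the standard library; the third involves the Riemann zeta function, which would need an arbitrary-precision package (this snippet is my own check, not part of the question):

```python
import math

# Double-precision spot check of the first two reflection-type identities.
for x in (0.1, 0.3, 0.45):
    lhs1 = math.gamma(1 + x) * math.gamma(1 - x)
    rhs1 = math.pi * x / math.sin(math.pi * x)
    lhs2 = math.gamma(0.5 + x) * math.gamma(0.5 - x)
    rhs2 = math.pi / math.cos(math.pi * x)
    assert abs(lhs1 - rhs1) < 1e-12 * rhs1
    assert abs(lhs2 - rhs2) < 1e-12 * rhs2
print("both identities hold to double precision")
```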
Nonlinear dynamical behavior giving rise to complex oscillations is found in many biological, chemical, and physical systems with purely temporal dynamics—such as the “chemical clock” of the Briggs-Rauscher reaction—or spatial dynamics, as in the Turing patterns on animals’ skin and fur. However, some of the most fascinating oscillatory patterns occur in spatiotemporal nonlinear systems. For example, one can mathematically model moving fronts from steady-state changes in chemical precipitations via a dynamical system with one unstable and two stable fixed points between the phase space’s two basins of attraction. Researchers can describe propagating pulses—such as the human waves in a stadium, electrical depolarization waves on the heart, or fire fronts—as excitable systems coupled in space, with one stable fixed point in phase space. These systems have a threshold of excitation and can only be excited again after returning to the fixed point, thus exhibiting an unexcitable refractory period. One can model such behavior with a reaction-diffusion (RD) system, described by at least two coupled nonlinear differential equations in space that balance a reaction-type mechanism with a diffusion-like transport process. Ilya Prigogine coined the term “dissipative structures” and received the Nobel Prize in Chemistry for his pioneering work on thermodynamic systems far from equilibrium. Fire is a good example—with great visual value—of an excitable system. For example, one can consider an oil lamp with no fire to be in the rest state (a stable fixed point of the system); this will not change unless perturbed with a flame. The lamp then burns (activation) until all of the oil in the container is gone. Or—if the oil is very viscous and takes time to diffuse up to the lamp’s tip—the lamp will burn until it finishes the available oil from the wick (see Figure 1).
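As a brief aside, the front-propagation mechanism mentioned above (one unstable fixed point between two stable ones) can be illustrated with a minimal one-variable bistable reaction-diffusion model, the Nagumo equation; it is a standard textbook example, and all parameter values here are my illustrative choices, not from the article:

```python
# Nagumo equation  u_t = D u_xx + u (1 - u)(u - a),  0 < a < 1/2:
# u = 0 and u = 1 are stable fixed points, u = a is unstable, and a
# front of the u = 1 state invades u = 0 at speed ~ sqrt(2 D) (1/2 - a).
D, a = 1.0, 0.25
dx, dt, nx = 0.5, 0.05, 200          # dt < dx^2 / (2 D) for stability
u = [1.0 if i * dx < 10 else 0.0 for i in range(nx)]   # front near x = 10

for _ in range(2000):                # integrate to t = 100
    lap = [(u[max(i - 1, 0)] - 2 * u[i] + u[min(i + 1, nx - 1)]) / dx ** 2
           for i in range(nx)]       # no-flux boundaries via clamping
    u = [u[i] + dt * (D * lap[i] + u[i] * (1 - u[i]) * (u[i] - a))
         for i in range(nx)]

# The front has moved right by roughly 0.35 * 100 = 35 length units:
print(u[60] > 0.9, u[160] < 0.1)
```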
Then a refractory period ensues, a time when the wick cannot burn again until enough oil is re-absorbed up the wick. Figure 1. Oil candle as an excitable system where no fire is considered at rest. Once ignited, it stays in an excited state until the oil in the wick is consumed, at which point it remains in a refractory period and cannot be reignited unless the oil is replenished. In space, the dynamics of fire as an excitable system can yield propagating waves with interesting dynamics, as evidenced in the subsequent examples. Example 1 In 1972, Richard Rothermel derived an equation for the speed of fire propagation along a slope using large wooden tripods on an incline, and determined a \(\tan^2\)-dependence of the fire speed on the slope angle \(\theta\) [1]. In 2015, Christian Punckt, Pablo S. Bodega, Prabha Kaira, and Harm H. Rotermund developed a model to examine forest fires in a laboratory setting, and published experimental and computational results for planar matchstick arrays [2]. We combined the matchstick array with the sloped condition and created diamond-like matchstick patterns on three-dimensional-printed models with \(3.3\) millimeter (mm) holes, which kept a constant match distance of \(5.0\) mm along the horizontal (\(\Delta x=\textrm{const.}\)). We are using the larger (length \(l = (58\pm1)\) mm) Diamond Strike on Box matches (see Figure 2a). Figure 2. 2a. Example of two-dimensional match grids with a constant distance along the horizontal, arranged on a base that has an incline from 0 to 45 degrees. 2b. Setup of the array and camera. 2c. Example of fire propagating on the array. After initiating a fire front and determining its velocity using the Tracker Video Analysis and Modeling Tool (see Figure 2b), we confirmed the expected speed-slope dependence of fire fronts propagating up or down the hill, with a cutoff slope value above which no fire front can exist.
Rothermel’s \(v = \tan^2(\theta)\) relationship was confirmed when fitting negative and positive slopes separately. Combining all propagation speeds from \(-40°\) (downhill) to \(+40°\) (uphill) was best fit by an exponential function of the form \(v=(13.0\pm0.8)+(-1.4\pm0.9)\,e^{(0.05\pm0.01)\theta}\) in mm/s. Keeping the distance between the matchstick heads constant in the vertical direction (\(\Delta z = \textrm{const.}\)) or along the slope (\(\Delta r=\textrm{const.}\)) significantly changed the propagation dynamics. We found a quadratic curve fit \(v=(9\pm1)+(0.01\pm0.04)\theta+(0.003\pm0.002)\theta^2\) for the \(\Delta z\)-models, and two separate fit functions for \(\theta<0\) \((v=(2.9\pm0.8)\tan^2\theta+(6.3\pm0.7))\) and \(\theta>0\) \((v=(2.8\pm0.6)\tan^2\theta+(8.1\pm0.4))\). We will need to obtain more experimental data in order to draw final conclusions. We also discovered a general decrease in fire propagation speed after the manufacturer switched from the red match heads to the “greenlight” matches, which is currently under further investigation in a planar system. We plan to extend the experimental investigation of match-type propagation speed dependence to Diamond Strike Anywhere and Ohio Blue Tip Strike on Box matches, and to study the differences using a cellular automaton model and a continuous RD model with non-isotropic conductivity. Example 2 Scientists have devoted extensive research to the discovery of simple media that display continuous propagating waves and chaotic properties. A candle wick that draws flammable oil from a reservoir is one such simple system. By varying oil viscosity and wick material, the system exhibits refractoriness and nonlinear restitution dynamics under constant periodic reignition. When arranged in a row, these candles can ignite their neighbors and create propagating fire waves that beautifully display complex spatial dynamics. Figure 3. A one-dimensional square with fire everywhere at once (too excitable).
In 1913, George Ralph Mines published two articles describing electrical excitation waves as the source of the heartbeat [3-4], followed by an article in 1914 that linked abnormal excitation waves to tachycardia and fibrillation [5]. In 2016, a special issue of the Journal of Physiology commemorated Mines’ seminal work on cardiac nonlinear dynamics. We used fire-retardant canvas, aluminum tracks, and a viscous oil mixture to create a one-dimensional oil-candle ring to visualize some of the electrical excitation wave behaviors Mines described. The square aluminum tracks (side length of \(15\) cm) pressed the flat canvas (height of \(30\) mm) into a square ring that exposed \(5\) mm of the canvas. We placed the ring in a baking sheet soaking in an oil mixture (composed of Fluka mineral oil (light) and paraffin oil of different viscosities) at the bottom of an aluminum pan. If the wick in the oil bath is too short or the oil is too thick, the system is below the excitation threshold to start a fire and cannot produce a wave. In the opposite case of too much oil or oil that is too thin, the oil diffusion into the one-dimensional wick occurs too quickly and the whole system ignites and burns along the circumference — as desired in normal candle systems (see Figure 3). With the right conditions, the system is above the threshold and a short segment of the canvas ignites, creating two fire fronts that move in opposite directions. If one direction is extinguished, a single flame moves around the one-dimensional candle ring (see the upper-left corner of the path in Figure 4). Figure 4. A one-dimensional square with one propagating flame. Due to the slow diffusion of oil into the candle wick after the flame has passed, the same location can be excited again when the flame reaches the point after one “rotation.” We have therefore reproduced experiments of “reentrant” waves in cardiac tissue using a fire model.
Experimentally, we obtained a maximum of three to four fire front revolutions before the wick burned down too much to create a flame. As an extension, we will employ the experimental one-dimensional fire ring to create a fire spiral, as shown in Jan Totz’s (Technical University Berlin) simulation in StarCraft II on YouTube. Using a “wick grid” will create a rotating fire spiral: another table-top analogue to wave phenomena found on the heart muscle and many other biological, chemical, and physical systems. Niklas Manz presented this work during a minisymposium at the 2019 SIAM Conference on Applications of Dynamical Systems, which took place last month in Snowbird, Utah. Acknowledgments: This work was supported by the National Science Foundation through grant DMR-1560093 and CMMI-1762553, and by the College of Wooster’s Sophomore Research Program. References [1] Rothermel, R.C. (1972). A mathematical model for predicting fire spread in wildland fuels (USDA Forest Service Research Paper INT-115). Ogden, Utah: Intermountain Forest and Range Experiment Station. [2] Punckt, C., Bodega, P.S., Kaira, P., & Rotermund, H.H. (2015). Wildfires in the Lab: Simple Experiment and Models for the Exploration of Excitable Dynamics. J. Chem. Educ., 92(8), 1330-1337. [3] Mines, G.R. (1913). On functional analysis by the action of electrolytes. J. Physiol., 46(3), 188-235. [4] Mines, G.R. (1913). On dynamic equilibrium in the heart. J. Physiol., 46(4-5), 349-383. [5] Mines, G.R. (1914). On circulating excitations in heart muscles and their possible relation to tachycardia and fibrillation. Trans R. Soc. Can. (Series III, Section IV), 8, 43-52.
In A simplified NP-complete MAXSAT problem, a reduction is given from Min Vertex Cover to MAX-2SAT by replacing each vertex $x_i$ by a single-variable clause, and each edge by a two-variable clause: \begin{align} \Phi = \left(\bigwedge_{i=1}^n x_i\right) \wedge \left(\bigwedge_{\lbrace i,j\rbrace \in E} (\overline{x}_i \vee \overline{x}_j)\right) \end{align} This basically makes sense to me, because the QUBO version of Vertex Cover is to maximize: \begin{align} L = \sum_{i=1}^N x_i - \sum_{\lbrace i,j\rbrace \in E} x_ix_j \end{align} and QUBO can be converted to MAX-2SAT quite simply. However, I would like to know how the reverse transformation works. How do you go from MAX-2SAT to Vertex Cover? I don't actually know if this is an unsolved problem or not, but I figure it shouldn't be since they are both NP-complete. Would it be as simple/tedious as trying to force an arbitrary MAX-2SAT instance into the same form as $\Phi$? I don't know if that can be done though.
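For the forward direction, building $\Phi$ from a graph is mechanical; a small sketch (function names mine) that also brute-forces the optimum on a triangle graph, where the minimum vertex cover has size $\tau = 2$ and the best assignment satisfies $|E| + n - \tau = 3 + 3 - 2 = 4$ clauses:

```python
def vc_to_max2sat(n, edges):
    """Phi from the question: a unit clause (x_i) per vertex and a
    clause (!x_i v !x_j) per edge; +i means x_i, -i means its negation."""
    return ([[i] for i in range(1, n + 1)] +
            [[-i, -j] for (i, j) in edges])

def satisfied(clauses, true_vars):
    """Number of clauses satisfied when exactly true_vars are true."""
    def lit(l):
        return (l > 0) == (abs(l) in true_vars)
    return sum(any(lit(l) for l in c) for c in clauses)

# Triangle graph: n = 3, edges on every pair, minimum cover size tau = 2.
clauses = vc_to_max2sat(3, [(1, 2), (1, 3), (2, 3)])
subsets = [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
best = max(satisfied(clauses, s) for s in subsets)
print(best)   # 4, attained by setting x_i true iff i is NOT in a minimum cover
```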
Group cohomology of dihedral group:D8 (revision as of 01:28, 9 October 2011)

This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8. View group cohomology of particular groups | View other specific information about dihedral group:D8

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action): First homology group: first homology group for trivial group action equals tensor product with abelianization. Second homology group: Hopf's formula for the Schur multiplier (formula for second homology group for trivial group action in terms of Schur multiplier and abelianization). General: universal coefficients theorem for group homology | homology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group homology

Over the integers

The homology groups with coefficients in the ring of integers are as follows:

$$\! H_p(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & \qquad p = 0 \\ \mathbb{Z}/2\mathbb{Z}, & \qquad p \equiv 1 \pmod 4 \\ \mathbb{Z}/8\mathbb{Z}, & \qquad p \equiv 3 \pmod 4 \\ 0, & \qquad p \ne 0,\ p \ \operatorname{even}\end{array}\right.$$

As a sequence (starting $p = 0$), the first few homology groups, per the formula above, are:

$p$:   0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
$H_p$: $\mathbb{Z}$ | $\mathbb{Z}/2\mathbb{Z}$ | $0$ | $\mathbb{Z}/8\mathbb{Z}$ | $0$ | $\mathbb{Z}/2\mathbb{Z}$ | $0$ | $\mathbb{Z}/8\mathbb{Z}$ | $0$

Over an abelian group

Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action): First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms. Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization. In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers | cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology

Over the integers

The cohomology groups with coefficients in the ring of integers are as follows: PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Cohomology ring with coefficients in integers: PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Second cohomology groups and extensions

Second cohomology groups for trivial group action

Group acted upon | Order | Second part of GAP ID | Second cohomology group for trivial group action | Extensions | Cohomology information
cyclic group:Z2 | 2 | 1 | elementary abelian group:E8 | direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16 | second cohomology group for trivial group action of D8 on Z2
cyclic group:Z4 | 4 | 1 | ? | ? | second cohomology group for trivial group action of D8 on Z4
I was messing around with numbers and I made the following conjecture: Conjecture: Let $\pi_n$ be the $n^{\text{th}}$ perfect number; let $p_a$ be the prime after $\pi_n$ and $p_b$ the prime before $\pi_n$. Then one may always see that$$\begin{align}\pi_n^{ \ \ 2}+p_a^{ \ \ 2}+p_b^{ \ \ 2}&=\sum_{i=1}^4x_i^{ \ 2}\tag{$\exists x_i\in\mathbb{Z}^+$} \\ \Leftrightarrow \pi_n + p_a + p_b&\geqslant\sum_{i=1}^4x_i\tag{with equality $\Leftrightarrow n = 1$}\end{align}$$ (Developed this conjecture from here.) Does anybody know how to prove/disprove this? I know that by Lagrange's Four-Square Theorem, the first equality is true. However, I am not particularly good at inequalities, strict and nonstrict. I know that every perfect number found thus far is of the form $2^{q-1}\left(2^q-1\right)$ with $q$ and $2^q-1$ both prime. (It is not proven whether or not there exists an odd perfect number, which by definition could not have this form.) Thus, $\pi_n^{ \ \ 2} = 4^{q-1}(4^q - 2^{q+1}+1)$, but this does not really help me. So, I made a new substitution: $$\pi_n = (2^q-1)^2 - \sum_{k=1}^{2^{q-1}-1}(4k-1) = (2^q-1)^2 - \left(2^{q-1}-1\right)\left(2^q-1\right).\tag{provable by induction}$$ This looks much more appropriate to work with. Squaring this and simplifying, $$\begin{align}\pi_n^{ \ \ 2} &= \left((2^q-1)^2-\left(2^{q-1}-1\right)\left(2^q-1\right)\right)^2 \\ &=\cdots \tag{lots of working out}\\ &= 4^{q-1}(2^q-1)^2.\end{align}$$ But this is exactly what we had before. What do I do in order to (dis)prove my conjecture? Thank you in advance.
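The substitution above can at least be spot-checked mechanically for the first few Mersenne exponents (a quick sketch of mine, not a proof):

```python
# Spot check: for Mersenne exponents q, the perfect number
# 2^(q-1) (2^q - 1) equals (2^q - 1)^2 - sum_{k=1}^{2^(q-1)-1} (4k - 1).
for q in (2, 3, 5, 7, 13):
    perfect = 2 ** (q - 1) * (2 ** q - 1)
    claimed = (2 ** q - 1) ** 2 - sum(4 * k - 1 for k in range(1, 2 ** (q - 1)))
    assert perfect == claimed
print("substitution identity holds for q = 2, 3, 5, 7, 13")
```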
There is a system of $N$ non-interacting particles (ideal gas). The Hamiltonian of a system of free particles is given by: $$H = \sum_{i=1}^{N}\frac{p_{i}^2}{2m} + \sum_{i=1}^{N} \psi(q_i)$$ where to the kinetic term we have added a confining potential: $$\psi(q_i) = \begin{cases} 0 & q_i \in V\\ \infty & \text{otherwise}\end{cases}$$ which keeps the particles inside the volume $V$. First of all, some definitions: $$\Omega (E, V, N) = \int \frac{d\Gamma}{N!\,\hbar^{3N}}\, \Theta(E - H(\Gamma))$$ where $d\Gamma = dp_1\cdots dp_N\,dq_1\cdots dq_N$ ($p$ is momentum and $q$ position) and $\Theta$ is the step function. I want to compute the microcanonical phase space volume for an ideal gas. To do so I have to solve the following integral: $$\Omega (E, V, N) = \int \frac{d\Gamma}{N!\,\hbar^{3N}}\, \Theta(E - H(\Gamma)) = \int dq_1\cdots \int dq_N\int \frac{dp_1\cdots dp_N}{N!\,\hbar^{3N}}\,\Theta\Big(E - \sum_i\frac{p_{i}^2}{2m}\Big)$$ I know the solution follows as: $$\int dq_1\cdots \int dq_N\int \frac{dp_1\cdots dp_N}{N!\,\hbar^{3N}}\,\Theta\Big(E - \sum_i\frac{p_{i}^2}{2m}\Big) = \frac{V^N}{N!\,\hbar^{3N}}\int_{\sum_i \frac{p_{i}^2}{2m} \leq E} dp_1\cdots dp_N = \frac{V^N}{N!\,\hbar^{3N}}\, V_{3N}\!\left(\sqrt{2mE}\right)$$ where $V_{3N}(R)$ denotes the volume of a ball of radius $R$ in $3N$ dimensions. But I do not really understand how we can go from having the step function to just the integral over $dp_1\cdots dp_N$.
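The step function is simply the indicator of the region $\sum_i p_i^2 \le 2mE$, a ball of radius $\sqrt{2mE}$ in momentum space, which is why it collapses to a plain integral over that ball. A quick Monte Carlo illustration for a single particle in three dimensions (illustrative units; my own sketch, not part of the question):

```python
import math, random

random.seed(1)
m, E = 1.0, 1.0
R = math.sqrt(2 * m * E)        # Theta(E - p^2/2m) = 1  iff  |p| <= R

# Monte Carlo integral of the step function over the cube [-R, R]^3:
trials = 200_000
hits = sum(1 for _ in range(trials)
           if sum(random.uniform(-R, R) ** 2 for _ in range(3)) <= R * R)
mc_volume = (2 * R) ** 3 * hits / trials

exact = 4.0 / 3.0 * math.pi * R ** 3     # volume of the 3-ball of radius R
print(mc_volume, exact)                  # agree to about a percent
```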
Background: I work on an SPDE problem where, in order to apply Prokhorov's theorem, I need that some measure space is a Polish space. Additionally, it would be good if that space is a Banach space. Earlier today I was reading the book: Malek, Necas, Rokyta, Ruzicka - Weak and Measure-valued Solutions to Evolutionary PDEs, 1996, and I have a question from Subsection 1.2.8, titled Radon measures. The definitions given below are taken from the same book. On the one hand, the space of Radon measures is defined as: $$M(\mathbb{R}^d)\equiv \{ \mu : C_0 (\mathbb{R}^d) \rightarrow \mathbb{R};\ \mu \text{ linear s.t. } \exists c>0,\ |\mu (f)|\leq c \|f\|_{\infty},\ \forall f \in \mathcal{D}(\mathbb{R}^d)\}.$$ Here $C_0(\mathbb{R}^d)\equiv \{ u \in C(\mathbb{R}^d): \lim_{|x|\rightarrow \infty} u(x) = 0 \}$ and $C_0(\mathbb{R}^d)=\overline{\mathcal{D}(\mathbb{R}^d)}^{\|\cdot\|_{\infty}}$. As usual, $\mathcal{D}(\Omega)$ stands for the space of functions from $C^{\infty}( \overline{\Omega})$ with compact support in $\Omega$. If we further define $\|\mu\|_{M(\mathbb{R}^d)}\equiv \sup\{|\mu(f)|: f \in \mathcal{D}(\mathbb{R}^d),\ \|f\|_{\infty}\leq 1 \}$, then the space $(M(\mathbb{R}^d), \| \cdot \|_{M(\mathbb{R}^d)})$ is a Banach space. On the other hand, let $\Omega$ be a bounded domain. We denote by $M(\Omega)$ the space of Radon measures defined as the dual space of $C(\overline{\Omega})$. Also in this case we know that $L^1(\Omega)\hookrightarrow M(\Omega)$ (and we know that $L^1(\Omega)$ is separable). My questions are: Is the space of Radon measures separable - in the case $\Omega \subset \mathbb{R}^d$ and in the case $\mathbb{R}^d$? Or, to be more precise, is it a Polish space? I have searched in a few books and in the questions here but I didn't find any concrete answer (I may have missed something).
Maybe some subspace of the space of Radon measures is Polish? I've read somewhere that the space of positive Radon measures is Polish but didn't find any book to confirm that. Are there some other spaces of measure-valued functions that are Polish (besides the spaces mentioned above)? I usually avoid dealing with measure-valued spaces so I don't know much about them. Help with this would be great (and I definitely need it). Thanks in advance.
I couldn't find any bounds on the Beta function, defined for $a_1,\ldots,a_n$ positive by $$B(a_1,\ldots,a_n) = \prod_i \Gamma(a_i) \Big/ \Gamma\Big(\sum_i a_i\Big),$$ where $\Gamma(x)$ is the gamma function. Are there any useful lower and upper bounds for this function, where the bounds depend on $\sum_i a_i$ and the $a_i$ themselves perhaps? I am not talking about asymptotic bounds (but if you are aware of one that does not use Stirling's approximation, that could also be useful perhaps).
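Not a bound, but for experimenting with candidate bounds it helps to evaluate $B$ stably in log space via the log-gamma function (a small sketch of mine):

```python
from math import lgamma, exp

def log_beta(a):
    """log B(a_1, ..., a_n) = sum_i log Gamma(a_i) - log Gamma(sum_i a_i)."""
    return sum(lgamma(x) for x in a) - lgamma(sum(a))

# n = 2 sanity check: B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b),
# and B(2, 3) = 1! * 2! / 4! = 1/12.
print(exp(log_beta([2.0, 3.0])))   # 0.0833...
```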
What sort of values [for the differential amplifier resistors] should I use? A rule of thumb is that the input impedance of the differential amplifier should be at least ten times the output impedance of the accelerometer in order to avoid signal loss (the differential amplifier would ideally have infinite input impedance). The block diagram of the ADXL335 datasheet suggests that the output impedance is about \$32\text{k}\Omega\$, so you'd need high valued resistors. The gain of the differential amplifier is set by the ratio of resistors (an example derivation is here). You need to set the resistors in your schematic to $$\frac{R_2}{R_1} = \frac{R_4}{R_3} = \text{desired gain}$$ The problem with this is that you need to adjust two resistors to adjust the gain. You can solve both of these problems with an instrumentation amplifier. Conceptually, it's a difference amplifier with a pair of op amp buffers on each input: The op amp buffers give you much higher input impedance than the difference amplifier alone, and the architecture allows the gain to be set by changing only one resistor. You can construct an in amp out of discrete op amps, but an IC in amp will have less gain error because ICs have better matching of the resistors. IC manufacturers offer a wide variety of instrumentation amplifiers so you should be able to find one which meets your needs. As a bonus, IC in amps provide a reference pin which you can use to provide an offset to your output (e.g. by the bias voltage). In the above in amp schematic, the reference pin replaces the ground connection to \$R_3\$. An explanation for how to use this pin can usually be found in the in amp's datasheet; for example, the AD8221 datasheet says: As shown in Figure 43, the reference terminal, REF, is at one end of a 10 kΩ resistor. The output of the instrumentation amplifier is referenced to the voltage on the REF terminal; this is useful when the output signal needs to be offset to a precise midsupply level. 
For example, a voltage source can be tied to the REF pin to level-shift the output so that the AD8221 can interface with an ADC. The allowable reference voltage range is a function of the gain, input, and supply voltage. The REF pin should not exceed either +VS or –VS by more than 0.5 V. For best performance, source impedance to the REF terminal should be kept low, because parasitic resistance can adversely affect CMRR and gain accuracy. You'd probably need to add a simple op amp buffer to the bias voltage so that the source impedance on the REF terminal is low enough. what voltage should I supply to the op amps - is \$\pm 5\text{ V}\$ sufficient for an accelerometer output of 3.3V maximum? It might be. It depends on the amplifier you choose. You need to make sure that the amplifier can operate on \$\pm 5\text{ V}\$, that its inputs will stay within its input common mode range, and that its output can swing close enough to its supply voltages. Consult its datasheet. To handle the gain requirement of 1 to 32 in binary steps, I'd suggest using a programmable gain amplifier with binary weighted gains. For example, the PGA205 is a programmable gain instrumentation amplifier (both the in amp and programmable gain combined) with a gain selection of 1, 2, 4, or 8. Add another programmable gain operational amplifier with binary weighted gains (e.g. the LTC6910-2 to the output of the first one to achieve an overall gain of 1 to 32).
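As an aside, if you do build the plain difference amplifier, choosing the $R_2/R_1 = R_4/R_3$ ratio mentioned earlier from a standard resistor series can be automated; a rough sketch (the E12 series and the decade window are my illustrative choices):

```python
# Choose a standard-value resistor pair whose ratio R2/R1 best matches a
# desired difference-amplifier gain.  E12 series; the decade range
# 1 k to 8.2 M is an arbitrary illustrative window.
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]
values = [v * 10 ** d for d in range(3, 7) for v in E12]

def best_pair(gain):
    return min(((r1, r2) for r1 in values for r2 in values),
               key=lambda p: abs(p[1] / p[0] - gain))

r1, r2 = best_pair(8.0)
print(r1, r2, r2 / r1)   # e.g. nominal 1.5k and 12k give a ratio of 8
```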
I'm working on a programming project. For that project I have a triangle with points $A,B,C$, where $A(a_1,a_2,a_3)$, $B(b_1,b_2,b_3)$, $C(c_1,c_2,c_3)$. Given the coordinates of the points $A,B$ and $C$, I want to find the coordinates of the orthocenter, circumcenter and incenter, and the points where the perpendicular bisectors, altitudes and angle bisectors meet the sides $AB,BC,CA$. I believe there are formulas for each of these things. I tried looking online, but I couldn't find anything, so I'm asking you - are there formulas for these things, and if so what are they? Of course there are formulas, but it is probably easier to derive them than to find them online. The derivations become much easier if you work with vectors and take point $A$ to be $(0,0,0)$ (that is, translating by subtracting $(a_1, a_2, a_3)$ from all of the points until the very end, when you add it back). Let's take the meeting points of the altitudes with the sides first, and look for the point $P$ where the altitude from $C$ meets $AB$ (the other two cases are easy switches of $A$, $B$ and $C$ once you know that case). Let $AB = \vec{b}$, $AC = \vec{c}$ and $AP = \vec{p}$. Then because $P$ is on (extended) line $AB$, $$\vec{p} = k\vec{b}$$ for some scalar $k$.
And since $CP \perp AP$, $$ \vec{c}-\vec{p} \perp \vec{b} \rightarrow (\vec{c} - k\vec{b})\cdot \vec{b} = 0 \rightarrow k = \frac{\vec{c}\cdot \vec{b}}{|b|^2} = \frac{b_1 c_1 + b_2 c_2 + b_3 c_3}{b_1^2 + b_2^2 + b_3^2} $$ $$\vec{p} = \frac{\vec{c}\cdot \vec{b}}{|b|^2}\vec{b} = \frac{b_1 c_1 + b_2 c_2 + b_3 c_3}{b_1^2 + b_2^2 + b_3^2}\left( b_1, b_2, b_3\right) $$ Translating back the the original coordinates this gives $$ \left( a_1 + k (b_1-a_1), a_2 + k (b_2-a_2), a_2 + k (b_2-a_2) \right) $$ with $$ k=\frac{(b_1-a_1) (c_1-a_1) + (b_2-a_2) (c_2-a_2) + (b_3-a_3) (c_3-a_3)}{(b_1-a_1)^2 + (b_2-a_2)^2 + (b_3-a_3)^2} $$ (You can see how to do the translation to original coordinates from this; from here forward I will only show the work in the coordinates with $A$ at the origin.) As long as we are working with altitudes, let's do the orthocenter next: We start with $Q$, the foot of the altitude on $AC$ which by the same reasoning as above is at $$ \vec{q} = k_b\vec{c} $$ where $$ k_b = \frac{\vec{b}\cdot \vec{c}}{|c|^2} = $$ and for notational symmetry we write the $k$ given above as $$ k_c = \frac{\vec{c}\cdot \vec{b}}{|b|^2} = $$ Line $BQ$ is described by $\vec{b} + \alpha (\vec q - \vec{b}) $ and line $CP$ is described by $\vec{c} + \beta (\vec p - \vec{c}) $. Setting these equal, we have: $$\begin{array}{l} \alpha\vec{q} + (1-\alpha)\vec{b} = \beta\vec{p} + (1-\beta)\vec{c} \\ \alpha k_b \vec{c} + (1-\alpha)\vec{b} = \beta k_c \vec{b} + (1-\beta)\vec{c} \\ (1-\alpha - \beta k_b) \vec{b} = (1-\beta - \alpha k_c) \vec{c} \end{array} $$ and since $\vec{b}$ and $\vec{c}$ are not linearly dependent, this can only be true if $$\left\{ \begin{array}{l} 1-\alpha - \beta k_b = 0\\ 1-\beta - \alpha k_c =0 \end{array} \right. $$ then $$ \left\{ \begin{array}{l} \alpha = \frac{1-k_b}{1-k_c}\\ \beta = \frac{1-k_c}{1-k_b} \end{array} \right. 
$$ so the orthocenter is at $$ \vec{b} + \alpha (\vec q - \vec{b}) = \vec{b} + \frac{1-k_c}{1-k_b k_c}(k_b\vec{c} - \vec{b}) $$ On the computer you calculate $k_b$ and $k_c$ and then combine them in this way. The points where the perpendicular bisectors meet the sides are trivial in this scheme: they are just the midpoints, e.g. $\vec{b}/2$ on $AB$. While we have the midpoints, the medians (vertex to opposite midpoint) meet at the centroid: on the line from $\vec{c}$ to $\vec{b}/2$ and also on the line from $\vec{b}$ to $\vec{c}/2$ we have $$ \begin{array}{l} \alpha \vec{c} + (1-\alpha)\vec{b}/2 = \beta \vec{b} + (1-\beta)\vec{c}/2 \\ \left( \alpha - \frac{1-\beta}{2} \right) \vec{c}= \left( \beta - \frac{1-\alpha}{2} \right) \vec{b} \end{array} $$ and as before, each of those coefficients must be zero, so $$ \begin{array}{l} \beta = \frac{1-\alpha}{2} \\ \alpha - \frac{1}{2} + \frac{1-\alpha}{4} = 0 \\ \alpha = \frac{1}{3} \end{array} $$ and the centroid is at $$ \frac{\vec{b}+\vec{c}}{3} $$ (Note these lines are medians, not perpendicular bisectors, so this point is the centroid. The circumcenter is instead the point $\vec o$ equidistant from all three vertices, which you get from $|\vec o|^2 = |\vec o - \vec b|^2 = |\vec o - \vec c|^2$, i.e. by solving $2\,\vec{o}\cdot\vec{b} = |\vec b|^2$ and $2\,\vec{o}\cdot\vec{c} = |\vec c|^2$ for $\vec o$ in the plane of the triangle.) The same sort of techniques work to find the other points. Probably your professor wanted you to do these calculations as part of your project, so I won't finish it all for you. The hardest one will be the incenter, which is the intersection of the angle bisectors.
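Putting the derivation above into code, here is a small Python sketch (the function and variable names are my own, not from the question). It computes the feet of the two altitudes, the orthocenter, and the centroid; the orthocenter is found by solving the line-intersection system numerically, so it serves as a check on the closed-form coefficients.

```python
import numpy as np

def triangle_points(A, B, C):
    """Feet of the altitudes from C and B, the orthocenter, and the centroid.

    Works in 3D by translating A to the origin, as in the derivation above.
    """
    A, B, C = map(np.asarray, (A, B, C))
    b, c = B - A, C - A                      # vectors AB and AC

    k_c = np.dot(c, b) / np.dot(b, b)        # foot of altitude from C onto AB
    P = A + k_c * b
    k_b = np.dot(b, c) / np.dot(c, c)        # foot of altitude from B onto AC
    Q = A + k_b * c

    # Orthocenter: intersection of lines B->Q and C->P, via least squares
    # on  B + s*(Q - B) = C + t*(P - C).
    M = np.column_stack([Q - B, -(P - C)])
    s, t = np.linalg.lstsq(M, C - B, rcond=None)[0]
    H = B + s * (Q - B)

    G = (A + B + C) / 3                      # centroid (medians meet here)
    return P, Q, H, G
```

For a right triangle with the right angle at $A$, e.g. $A=(0,0,0)$, $B=(4,0,0)$, $C=(0,3,0)$, both altitude feet and the orthocenter land on $A$, as expected.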
As I understand it, stars emit visible light across the spectral classes OBAFGKMRNS, with surface temperatures in the range of $10^3$–$10^4$ K. Yet materials such as steel emit similar frequencies at much lower temperatures; red-hot is around 800 K. Why the difference? I thought black-body radiation applies to all materials and environments. I am an interested amateur. The peak wavelength at which a body emits light is governed by Wien's displacement law, which states that this wavelength is inversely proportional to the temperature, as $$\lambda \, T=\text{const}\approx 0.003\text{ m K}.$$ More graphically, in the stellar-surface sort of temperature range, the curves look like this: [figure: black-body spectral radiance vs. wavelength for several temperatures]. You'll notice that although the short-wavelength cut-off is rather sharp, bodies still emit light at wavelengths shorter than the Wien peak wavelength. Thus for steel at 800 K the peak wavelength is at about 3.7 $\mu\text{m}$, in the mid-infrared. The total radiance in the visible range is then suppressed roughly as a power of $(700\,\text{nm})/(3.7\,\mu\text{m})\approx 0.2$, so it's about 1% or less. Thus, when hot irons glow red, what you're seeing is the very edge of the spectral radiance distribution. The bulk of the emission is as heat in the IR - as you can tell if you put your hand anywhere nearby!
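The numbers quoted in the answer are easy to reproduce from Wien's law; a minimal sketch (using the more precise value of Wien's constant, 2.898×10⁻³ m·K, rather than the rounded 0.003):

```python
# Peak emission wavelength from Wien's displacement law: lambda * T = b_W.
b_W = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_m(T_kelvin):
    """Wavelength (in metres) of the black-body spectral radiance peak."""
    return b_W / T_kelvin

# Steel glowing red at ~800 K peaks in the mid-infrared:
print(peak_wavelength_m(800) * 1e6)   # ~3.6 micrometres
# A Sun-like surface (~5800 K) peaks in the visible:
print(peak_wavelength_m(5800) * 1e9)  # ~500 nanometres
```

So the 800 K peak sits a factor of five past the red edge of the visible band, which is why only the extreme short-wavelength tail of the glow is visible.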
Oftentimes when doing an analysis, it is important to put the results in the context of the loss. For example, a small effect that is cheaply implemented might be the best use of resources. Using Bayesian modeling and loss functions we can better assess the impact and provide better information for decision-making when it comes to allocation of scarce resources (especially in the world of small effect sizes). This is a line that I heard in a talk that James Savage gave at an NYC R Conference 1 in which he discussed the importance of integrating over your loss function. I think that this is an infinitely important topic. Statisticians, myself included, often marvel over the most parsimonious model with the highest explained variance or best predictive power. Additionally, when it comes to assessing and verifying the impact of an intervention, ensuring that one has accounted for as many of the confounding factors as possible is important for uncovering the LATE or ATE (depending on the context). However, each of these approaches leaves out one of the most important parts of the equation: the practical implications of the error. We live in a world that is messy. People are multitudes, and not all things that count can be counted and included in a model. Additionally, we may find that some of our interventions aren't slam dunks. For example, a treatment effect may be positive only 75% of the time. What does that mean for policy? Should we scrap the program because we didn't reach the almighty 95% threshold? 2 Loss functions allow us to quantify the cost of doing something. What is the lost opportunity of pursuing a given outcome, and how does this relate to the value generated by the given intervention? 3 While this also takes some time to think through and generate, representing the impact through a loss function is a great exercise. Bayesian inference provides a great way to marry our loss function with our model of the intervention.
With Bayesian inference, we have the wonderful ability to draw from our posterior distribution. These draws represent the distribution around our intervention effect. By evaluating the loss function at these draws we can then generate a distribution of our potential losses. You can then integrate over these losses and see what the net loss would be. If it is favorable, then by definition it is a good approach and one that minimizes loss. With Bayesian modeling you can introduce subjective knowledge through priors. This allows for powerful modeling in the presence of smaller samples and noisier effects. Additionally, because parameters are interpreted as random values, we can speak in terms of distributions. By drawing from these posteriors and evaluating our loss function, we capture the "many worlds" of the intervention and provide a much clearer picture of the loss. Here I am going to step through a simple example. This is a little trivial, but it will illustrate Bayesian inference, loss function generation, and evaluation. As always we start with a data generating process. Let's assume that we are measuring the effect of some intervention \(\tau\) in the presence of a confounder \(x\) and some response \(y\), with normally distributed errors. Mathematically we can write this as: \[y_i \sim N(\alpha+\beta_1x_i+\tau t_i,\ \sigma^2)\] where \(t_i\) indicates treatment. Now that we have a model for the above DGP, we can generate some fake data in R.

set.seed(42)
n <- 100L # Participants
x <- rnorm(n, 5, 1)
treat <- rep(c(0, 1), n / 2) # Half receive treatment
treat_effect <- 1
beta_1 <- 5
alpha <- 1
y <- rnorm(n, x * beta_1 + treat_effect * treat + alpha)

Now we have our fake data following our DGP above. From the density plot we don't really see too much of a shift.

tibble(x, treat, y) %>%
  ggplot(aes(y, fill = as.factor(treat))) +
  geom_density(alpha = .2)

Now we can build our loss function. This is a bit contrived, but it says that if the treatment effect is zero or negative there is a penalty of 20.
my_loss <- function(x) {
  if (x > 1) {
    -1 / log(x)
  } else if (x > 0) {
    x
  } else {
    20
  }
}

I always find it is a little easier to graph these things:

loss_vals <- map_dbl(seq(-3, 3, .1), my_loss)
plot(seq(-3, 3, .1), loss_vals, main = "Loss Function", xlab = "Treatment Effect")

Now we can build our model in Stan using rstan. I will add a function that echoes the loss function specified above. On each draw it will calculate our loss.

functions {
  /**
   * loss_function
   *
   * @param x a real value (a draw of the treatment effect)
   */
  real loss_function(real x) {
    real output;
    if (x > 1)
      output = -1 / log(x);
    else if (x > 0)
      output = x;
    else
      output = 20;
    return output;
  }
}
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] status;
  vector[N] y;
}
// The parameters accepted by the model.
parameters {
  real alpha;
  real beta;
  real treatment;
  real<lower=0> sigma;
}
// The model to be estimated: 'y' is normally distributed with mean
// 'alpha + beta * x + treatment * status' and standard deviation 'sigma'.
model {
  y ~ normal(alpha + beta * x + treatment * status, sigma);
}
generated quantities {
  real loss = loss_function(treatment);
}

Then we can format our data for Stan.

stan_dat <- list(
  N = n,
  x = x,
  y = y,
  status = treat)

And then we can run our model:

fit1 <- sampling(reg_analysis, data = stan_dat, chains = 2, iter = 1000, refresh = 0)

Here we would normally do some diagnostics, but we know that we have properly captured the model, so I will skip this step. Now let's look at our parameter estimates. Everything looks good and we were able to capture our original parameter values.

print(fit1)

Inference for Stan model: 00a709f1647c0a785913c03b317c7c55.
2 chains, each with iter=1000; warmup=500; thin=1; post-warmup draws per chain=500, total post-warmup draws=1000.
            mean se_mean    sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
alpha       0.73    0.02  0.47  -0.19   0.43   0.72   1.02   1.71   485    1
beta        5.02    0.00  0.09   4.85   4.96   5.03   5.08   5.20   490    1
treatment   1.13    0.01  0.19   0.75   1.00   1.13   1.26   1.49   717    1
sigma       0.92    0.00  0.07   0.80   0.87   0.92   0.96   1.07   697    1
loss      -10.48    2.35 72.76 -56.15  -7.87  -4.31   0.65   0.98   962    1
lp__      -41.28    0.08  1.56 -45.49 -42.03 -40.87 -40.17 -39.48   359    1

Samples were drawn using NUTS(diag_e) at Wed Sep 18 22:00:09 2019. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).

Now, more importantly, let's pull out our treatment effect and loss values:

results <- data.frame(
  treatment = fit1 %>% extract("treatment") %>% as.vector(),
  loss = fit1 %>% extract("loss") %>% as.vector()
)

Here we can summarise our estimated treatment effect and its associated distribution, recall our loss function, and then examine our loss function evaluated at our values of treatment effect.

par(mfrow = c(1, 3))
hist(results$treatment, main = "Histogram of Treatment Effect", col = "grey", breaks = 30)
plot(seq(-3, 3, .1), loss_vals, main = "Loss Function")
plot(x = results$treatment, y = results$loss, main = "Treatment vs Loss")

Finally, we can sum our evaluated loss function and see where we land:

sum(results$loss)
[1] -10477.65

What does it mean? Dividing by the 1,000 post-warmup draws gives an expected loss of about -10.48 per draw, matching the posterior mean of loss in the summary. Based on our loss function and treatment effect, it has a negative loss. So it is a slam dunk!

2. Here you can take this as 95% confidence intervals, 95% credible intervals, whatever you choose. Richard McElreath has some interesting/funny criticisms of this vestige of our digits (literally ten fingers).↩
3. In the case of prediction problems this could be the penalty for a false positive or, conversely, for a false negative.↩
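The core of the workflow — average the loss over posterior draws — can be mimicked outside Stan in a few lines. A Python sketch, where I simply pretend the posterior for the treatment effect is the normal summarized above (an assumption for illustration, not a refit of the model):

```python
import numpy as np

rng = np.random.default_rng(42)

def loss(x):
    """Same piecewise loss as my_loss() in the post."""
    if x > 1:
        return -1 / np.log(x)
    elif x > 0:
        return x
    return 20.0

# Stand-in posterior for the treatment effect: normal(1.13, 0.19),
# matching the Stan posterior mean and sd quoted above.
draws = rng.normal(1.13, 0.19, size=1000)

# Monte Carlo estimate of the expected loss under the posterior.
expected_loss = np.mean([loss(d) for d in draws])
print(expected_loss)  # negative on average: the intervention looks favorable
```

The heavy negative tail comes from draws just above 1, where $-1/\log(x)$ blows up; that is exactly why the posterior mean of the loss is so much more negative than its median in the Stan summary.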
As an initial remark, the Schrödinger equation can be solved exactly for a variety of potentials, not just "hydrogen-like" atoms - the cases of the harmonic potential, the Morse potential, or the Pöschl-Teller potential immediately come to mind, but there are multiple other ones as well that don't have entries on Wikipedia. Quantization of angular momentum follows exactly from the commutation relations of the angular momentum operators $\hat L_x,\hat L_y$ and $\hat L_z$. These commutation relations do not depend on the number of particles in the system. The simplest example would be two spin-1/2 particles, made famous by various versions of Bell's theorem. The total spin (or angular momentum) of this 2-particle system remains quantized and it can only take the values $0$ or $1$ depending on how the state is prepared. Another example is the nuclear $su(3)$ model - not so commonly used in chemistry but still quite useful for multi-particle nuclei. The 3-dimensional harmonic oscillator can be solved using the $su(3)$ Lie algebra, and the angular momentum operators are in this algebra. It is also "easy" to construct multiparticle $su(3)$ states using Lie algebraic techniques (just an extension of Clebsch-Gordan technology for angular momentum). For these multiparticle states, angular momenta of the individual constituents are combined in the usual way and remain quantized. More generally, when the potential is not central, angular momentum will not be conserved for individual single-particle states, and so states will not necessarily be eigenstates of angular momentum. An example of this is the Nilsson model for deformed nuclei. This does not mean that basis states with good angular momentum quantum number cannot be used to start the calculations, just that the final states will be a linear combination of those single-particle basis states, with $\Delta \ell\ne 0$ admixtures.
Of course, since the total Hamiltonian must be a rotational scalar, the final eigenstates of $H$ can be chosen to have good angular momentum. Angular momentum remains quantized since the total angular momentum operators still satisfy the usual commutation relations.
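The two-spin-1/2 claim is easy to verify numerically: build the total spin operators on the 4-dimensional two-particle space and diagonalize $\hat S^2$. A short numpy sketch (with $\hbar = 1$):

```python
import numpy as np

# Spin-1/2 operators: Pauli matrices divided by 2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# Total spin S = S1 (x) I + I (x) S2 on the two-particle Hilbert space
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sy = np.kron(sy, I2) + np.kron(I2, sy)
Sz = np.kron(sz, I2) + np.kron(I2, sz)

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
eig = np.round(np.linalg.eigvalsh(S2), 10)
print(sorted(eig))  # [0.0, 2.0, 2.0, 2.0] = s(s+1) for s = 0 (singlet) and s = 1 (triplet)
```

The spectrum of $\hat S^2$ contains only $s(s+1) = 0$ and $2$, i.e. total spin 0 or 1, exactly as the answer states.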
Kiepert's Triangles Graduate to Ears of Arbitrary Shape What is this about? Problem On the sides of $\Delta ABC$ construct similarly oriented similar triangles $ABN_{c},$ $BCN_a,$ and $CAN_b.$ Prove that the centroids of triangles $ABC$ and $N_{a}N_{b}N_{c}$ coincide. Hint Solution In the complex plane, let the vertices of $\Delta ABC$ correspond to complex numbers $\alpha,$ $\beta,$ $\gamma.$ Then $N_{a}=\gamma + t(\beta -\gamma),$ $N_{b}=\alpha + t(\gamma -\alpha),$ $N_{c}=\beta + t(\alpha -\beta),$ for some complex $t$ that determines a base angle and the corresponding side length of the formed triangles. The centroid of $\Delta ABC$ is given by $(\alpha +\beta +\gamma)/3$ while that of $\Delta N_{a}N_{b}N_{c}$ is $\displaystyle \begin{align} &\frac{[\gamma + t(\beta -\gamma)] + [\alpha + t(\gamma -\alpha)] + [\beta + t(\alpha -\beta)]}{3} \\ &= \frac{\alpha +\beta +\gamma}{3}, \end{align} $ since the $t$-terms cancel in the sum. Acknowledgment I am sincerely grateful to Grégoire Nicollier, who has kindly pointed out that the statement proved for Kiepert's isosceles triangles has an immediate extension to similar triangles of arbitrary shape, as above.
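The cancellation in the proof can be checked numerically for random vertices and a random complex parameter $t$ (a sanity check only, not part of the proof):

```python
# Numerical check that the two centroids coincide for arbitrary complex t.
import random

random.seed(1)
def rc():
    """A random complex number in a box around the origin."""
    return complex(random.uniform(-5, 5), random.uniform(-5, 5))

alpha, beta, gamma, t = rc(), rc(), rc(), rc()

Na = gamma + t * (beta - gamma)
Nb = alpha + t * (gamma - alpha)
Nc = beta + t * (alpha - beta)

g1 = (alpha + beta + gamma) / 3      # centroid of ABC
g2 = (Na + Nb + Nc) / 3              # centroid of NaNbNc
print(abs(g1 - g2) < 1e-12)          # True: the t-terms cancel exactly
```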
Lunar Lander bug? 06-10-2014, 09:02 PM Post: #1 Lunar Lander bug? I was playing with the lunar lander game on my 29C and took a look at the source code. I can't figure out the terminal velocity calculation and I was wondering if someone could walk me through it. The program is here. Lines 46-50 indicate that a burn b accelerates the craft by a=2b-5. This indicates that gravity is -5. It's also easy to deduce that each burn lasts 1 second. So when you burn the last of the fuel, what is the crash velocity? To figure this out, recognize that you get a 1-second burn consisting of all the fuel (a=2b-5), followed by a free fall that ends in a crash landing (a=-5). Using the equations provided: \begin{equation}a_0=2b-5\\ v_1 = v_0+a_0t = v_0+2b-5\\ x_1 = x_0+v_0t+\frac{1}{2}a_0t^2 = x_0+v_0+b-2.5\\ v_f^2 = v_1^2 + 2a_1(x_f-x_1) = v_1^2 + 2a_1(0-x_1) = v_1^2+2\cdot(-5)\cdot(-x_1)\\ v_f = -\sqrt{v_1^2+10x_1}\end{equation} Lines 41-44 compare the burn amount to the remaining fuel. If you're burning the last of the fuel it goes to label 6 at line 68. Lines 69-74 add b-2.5 to x. Lines 75-77 add 2b-5 to v. Lines 78-86 compute vf from x. I don't understand lines 69-74. It seems to me that it should subtract v from x also. Am I missing something or is this a bug? Thanks, Dave 06-11-2014, 07:28 AM Post: #2 RE: Lunar Lander bug? (06-10-2014 09:02 PM)David Hayden Wrote: I don't understand lines 69-74. It seems to me that it should subtract v from x also. Am I missing something or is this a bug? But in some cases that might turn out negative and then you have a problem when taking the square root in line 85. You could change the program so that in lines 41-44 you just use the remaining fuel if the entry was bigger than that (no branch to LBL 6), calculate the new values, check whether we crashed and check whether we still have fuel. Only then loop back. Otherwise calculate the final crash velocity. With this approach the redundant calculations after LBL 6 could be avoided.
Cheers Thomas PS: The LBL 9 in line 40 could probably be removed.
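David's final-burn equations are easy to play with directly. A hypothetical Python sketch (names are mine) that includes the $v_0$ term in the altitude update, as his equation $x_1 = x_0+v_0+b-2.5$ requires, and guards against the negative-radicand case Thomas points out:

```python
import math

def crash_velocity(x0, v0, b):
    """Final (downward) velocity after a 1-second burn of b units of fuel
    followed by free fall to the surface.

    Gravity is -5 and a burn of b gives acceleration a = 2b - 5,
    matching the listing discussed in the thread.
    """
    v1 = v0 + 2 * b - 5                    # velocity after the burn
    x1 = x0 + v0 + b - 2.5                 # altitude after the burn (v0 included)
    if x1 <= 0:
        # Already at or below the surface during the burn;
        # the free-fall formula (and the square root) no longer applies.
        return None
    return -math.sqrt(v1 * v1 + 10 * x1)   # from v_f^2 = v_1^2 + 10*x_1
```

For example, starting at altitude 100 with velocity -10 and burning the last 5 units of fuel gives $v_1 = -5$, $x_1 = 92.5$ and a crash velocity of $-\sqrt{950} \approx -30.8$.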
...because it is a moral issue... It's not the first time that Nima Arkani-Hamed has given a "totally negative" talk on a similar issue but it's a fun time. Twelve days ago he came to the Space Telescope Science Institute (STScI) in Baltimore, Maryland – the terrestrial headquarters for Hubble (past) and the James Webb Telescope (future) and a loose part of Johns Hopkins University – and had the following things to say. Video: Don't Modify Gravity—Understand It! (video, 85 minutes) He is introduced by a host who enumerates some prizes Nima has received, including a nice one from Milner, and mentions that the only person over there who would understand the talk is Mario Livio, who is just recovering from an illness (fortunately for his health, unfortunately for the readiness of the audience). Nima claims to have a special moral right to give an entirely negative talk about the sinfulness of modifications of gravity because he's been a dirty sinner himself. ;-) So that's why he has the credentials to crawl from his hole, confess, and repent. Of course, he means it's wrong to modify gravity at long distances. Of course, GR isn't adequate at super short distances, an issue he later (around 15:00) talks about, too. Gravity just becomes an intrinsically strong force over there (at the Planck scale). Nima debunks the usual vague philosophical attitudes about general relativity's being justified by beauty, the sexiness of geometry, and similar stuff. Instead, GR works because it's inevitable; it's dictated by consistency on top of special relativity and quantum mechanics. This provable inevitability shows how deep our knowledge of physics has become. He sketches his plans to discuss the cosmological constant problem and learn from the epic failure of every single attempt to solve the problem by modifying gravity at long distances.
He divides the theories into the deeply flawed ones that he would spend his time with, anyway; and complete idiocies that are not worth talking about at all, despite a thousand deluded papers about them. Experience has repeatedly shown that physicists have to be radically conservative instead of conservatively radical. Well, in this case, he claims that the conservatism leads to the landscape. OK, his more technical comments start with the proof that gauge theories and gravity are inevitable. Nima attacks the claims in the popular books that "we don't know how to combine gravity and quantum mechanics". Obviously, neutron interferometry may be done and measured (and used to show that Erik Verlinde's entropic gravity is crap). Obviously, we may talk about gravity and quantum mechanics in the same sentence – otherwise we couldn't have discovered quantum mechanics on planet Earth (where we have gravity) at all. In fact, we may even calculate the leading quantum correction to Newton's law:\[ F = \frac{G m_1 m_2}{r^2} \left( 1 - \frac{27}{2\pi^2}\frac{G\hbar}{r^2c^3} +\dots \right) \] We can answer questions like that! The effect is minuscule but we may still calculate it. Quantum mechanics is a robust framework that can deal with anything. The real difficulty – said properly – is that we don't know what happens with various terms in gravity at short distances, which is why GR needs a replacement. I couldn't agree with him more. Note that Nima attacks not only really crappy popular books such as Lee Smolin's ones but even the really mainstream ones such as Brian Greene's books. Needless to say, I think that Brian's confusing presentation of the conflict is biased towards his completely flawed tendency to modify quantum mechanics whenever he sees a vague opportunity or excuse for that. ;-) Fine. Nima returns to long distances. Photons would produce a wrong negative-norm polarization. Gauge invariance ("redundancy", using his more favorite words) is needed.
The same goes for the diffeomorphism redundancy and gravity and spin-2 fields. These gauge redundancies aren't real symmetries – they're artifacts of our imagination. He continues with Weinberg's arguments from the 1960s that the interactions with added photons – regardless of the nature of the theory – have to depend on the polarization vectors for these interactions to remain nonzero at long distances. Now, pure gauge polarizations have to decouple (the interaction amplitude going to zero), which implies that the total charge is conserved and therefore, there has to be the "symmetry". The same with spin-2. You don't have to know anything about the "culture" of gravity or the world. The spin-2 field needs to decouple the bad polarizations. This forces the conservation laws for the momentum in this case. As I have explained many times, the allowed spins of massless/light particles are \(0,1/2,1,3/2,2\) and the last three choices inevitably arrive with increasingly constrained gauge redundancies. Nima says many things I repeat many times. Someone comes to you and tells you that he may have a completely new spiritual viewpoint on gravity, based on some torsion, wakalixes, or any other word. We just don't need to care about these vague words and fog! We may study what the theory predicts for the interactions and whatever it is, it will either agree with the "gauge descriptions" above or violate the general postulates of relativity or quantum mechanics. That's it, we're finished; "anything goes" is just wrong. At 35:40, he begins to talk about the cosmological constant problem, the most frequent excuse for modifications of gravity at long distances. Everything is Planckian in the fundamental theory, so should be the density or curvature of empty space, but it is empirically 123 orders of magnitude smaller. There are contributions to the C.C. that are Planckian, after all, and they must cancel with the huge relative accuracy (classical terms against quantum corrections etc.).
The cancellation isn't an inconsistency, however ludicrous it may seem. People used to believe in the fantasy that there was a deep, so far unknown reason why the C.C. ultimately exactly cancels. Its nonzero value made this scenario far less plausible: it almost cancels but there must still be a tiny leftover. Oops. Nima compares this situation to the fantasies in the 1930s when the first divergent loop corrections to QED were seen. It was believed that those infinities were fantasies and that a redefinition of QED would ultimately make all of them zero. Well, they turned out to be finite but they were definitely not zero. Loop corrections at every order matter and are nonzero, although much smaller than the naive (infinite) value. As a stumbling block, the case of the C.C. is more serious; as a stimulation of progress, the story of the loop corrections to the magnetic moment was far more dramatic. When written as an energy density, the C.C. is the fourth power of an inverse millimeter. So some new physics could be appearing at the millimeter scale. Except that we have looked and there seems to be nothing over there. (We think that the analogous solution does explain the hierarchy problem, the unbearable lightness of the Higgs' being, however.) Now, at the level of linguistics, a great idea to explain why the C.C. is so small is to "degravitate" the vacuum energy – all the modes of the fields, or those whose wavelengths are shorter than a millimeter (this refinement is needed to preserve the tested behavior of gravity at distances longer than 1 mm). Everything else gravitates but this form of energy just happens to exert no gravitational influence, the proposal says. However, physics only starts once you begin to convert the word-level ideas to equations and 99.9% of the word-level ideas simply fail at that point. A slightly clever mechanism to realize similar ideas was "dumping curvature in the bulk" of a higher-dimensional space.
Of course it's easy to "solve" the C.C. problem if you're allowed to modify the behavior of gravity, e.g. by changing the exponent in Newton's power law. The discussion turns to examples in lower-dimensional gravity (3D gravity with deficit angles), too. A modification of the idea is based on Randall-Sundrum, who managed to "trap" gravity on the brane. At the beginning, it looked like the tower of apparently massless Kaluza-Klein modes of the graviton might provide you with a loophole. However, one may ultimately see that the RS theories end up being effectively equivalent to normal four-dimensional theories, GR with massless matter. I know that Lisa doesn't like these "RS is equivalent to..." comments but I am afraid (or happy to see) that Nima is right. It's not just the full-fledged AdS/CFT methods that argue in this direction. More exotic ideas (53:00) came from DGP, Dvali-Gabadadze-Porrati (it's a messed up GDP; I really disliked those things from the beginning and without an interruption because they seemed so contrary to all the lessons I learned from string theory). The Einstein-Hilbert term is weighted by a coefficient that makes gravity matter only on our brane; it's extra 4D gravity added on top of the normal 5D gravity. Your humble correspondent thinks it's inconsistent at the quantum level (swampland). Surely in string theory, it looks like gravity always affects "all of space", the "\(D\)-dimensional space with all the dimensions one can find". At any rate, Nima shows that if the crossover scale obeys the inequality needed to dump the curvature etc., gravity on the brane will be modified at millimeter-like distances, too. The deficit angle didn't work because it can't go over \(2\pi\) or so. With an excess angle, this limitation seems to be lifted. Things look fine. However, if you're diligent, you derive the propagator in your theory and alas, it is modified, too. Some \(2\pi\) becomes \(2\pi+|T|\), in some units; the tension is added.
For a large tension, the tension term dominates. You recalculate at what distances gravity starts to behave properly and you find out that these distances must be larger than the Hubble scale (the size of the Universe). Too bad. ;-) Model after model after model fails like that. Why do all of them fail? It's useful to hit your head with a hammer 10 or 20 times; then you may start to get the message. ;-) The reason behind the failure of all these things is relativity. Relativity relates space and time. If you modify physics at some distance scale, you modify it at some time scale, too – or break relativity or causality. More technically, your stress-energy tensor is \(T^{\mu\nu}={\rm diag}(\rho,p,p,p)\). The different components have different sources – the C.C., visible matter, radiation – and there's simply no relativistically invariant way to "isolate" the piece that you would like to make non-gravitating without spoiling the proper behavior of matter and radiation at the same time! The only way to avoid the conclusion would be to modify gravity by "knowing in advance" what the impact would be, and that would violate causality. At 1:00:00, he switches to the question whether one may modify gravity at all – a purely theoretical question. Can one make the graviton a bit massive, etc.? Every modification of gravity has to add a new degree of freedom, typically a scalar field that couples to \(T_\mu^\mu\). That's morally nothing but a boring extension of GR with scalars such as Brans-Dicke etc., something that's been around since the 1940s. If you add the scalars, you modify gravity in a boring way – e.g. you modify the bending of light so that you are in conflict with the observations at the same time, unless you boringly choose the coefficients of the extra terms to be much smaller than one (obnoxious imitators/deformations of GR that can't solve its fundamental problems such as the C.C. problem). Exciting possibilities are those where the scalar fields are more exotic, e.g.
massive gravity or Galileons or ghosts or fields with specific non-linear self-interactions. Lots of phenomenology, but as I always said, these theories lead to deep conflicts with locality and/or thermodynamics. At any rate, Nima promotes the Galileons. The scalar fields are equipped not just with the symmetry \(\phi\to\phi+c\) but also \(\phi\to\phi+v_\mu x^\mu\), a "Galilean symmetry" but acting on fields rather than \(\vec x\). Lunar phenomenology, fun; but to make the story short, some radial modes propagate superluminally. Despite formal Lorentz invariance, the "nice structure" is inseparable from the lethal acausality. Similarly, "higgsed gravity" allows you to produce "black hole hair", which is enough to create a perpetual motion machine of the second kind by making a cycle between two distinct event horizons of a black hole valid for two different particle species/fields. Too bad. Nima spent 8 years of his life with similar things, he says. Gravity is much more constraining than other types of dynamics. He declares that eternal inflation plus the multiverse are the only possible approach to the C.C. problem given the failures discussed above. I disagree with that. I don't have to modify gravity at all but the selection that makes the C.C. tiny may still reject the existence of any multiverse. Of course, I agree with Nima that one has to be radically conservative instead of a left-wing as*hole because the declaration that the previous knowledge was "just wrong and may be scratched" has never been successful. Even quantum mechanics overthrew classical physics in a very respectful way – when it explained some surprising features of classical physics that should have been asked about even by classical physicists, e.g. "Why can the equations of motion be derived from a Hamiltonian as well as from the principle of least action?" No trashing here.
Radical conservatism – the right attitude – means that one is prepared to push the tested principles as far as they can go, and only when one becomes absolutely sure that there is a problem may one think about modifications. Nima repeated his slogan, "Don't modify gravity, just understand it." And the host proposed not to modify Nima's talk by stupid questions; instead, let's drink some beer, which is what they are conservatively good at.
This is actually part of an exercise I found in W. Hodges' A Shorter Model Theory (p. 147): Let $\mathcal{L}$ be a first-order language and $T$ a theory in $\mathcal{L}$. Also let $\mathcal{A}$ and $\mathcal{B}$ be models of $T$ and $\mathcal{C}$ an $\mathcal{L}$-structure such that $\mathcal{A} \subseteq \mathcal{C} \subseteq \mathcal{B}$. If $T$ is equivalent to a set of $\exists \forall$-sentences and $\mathcal{A} \preccurlyeq_2 \mathcal{B}$ (i.e. for every $\exists \forall$-formula $\phi(\bar{x})$ of $\mathcal{L}$ and every tuple $\bar{a}$ of elements of $\mathcal{A}$, $\mathcal{B} \models \phi(\bar{a})$ implies $\mathcal{A} \models\phi(\bar{a})$), then $\mathcal{C}$ is also a model of $T$. I think if we can show that $\mathcal{C}\models \exists \bar{x}\forall\bar{y} \phi(\bar{x},\bar{y})$ whenever $\mathcal{A},\mathcal{B} \models\exists \bar{x}\forall\bar{y} \phi(\bar{x},\bar{y})$ with $\mathcal{A} \subseteq \mathcal{C} \subseteq \mathcal{B}$, then we are done. But I have no idea how to get this conclusion. Any hints/help are welcome. Thank you!
Suppose one inertial observer measures a rod at rest w.r.t. him and another observer is moving w.r.t. the rod. We then say that the length will be shorter for the moving observer, but at the instants the first observer is measuring the length, the second observer doesn't even get the length of the rod; he just gets the distance between two points in space after Lorentz transformations, because simultaneity is a relative concept. So how is it a length contraction in the literal sense? Isn't it a misnomer? High-energy muons shower down on Earth from the upper atmosphere. Muons have a mean lifetime of approximately 2.2 microseconds in the laboratory. The distance muons need to travel from the upper atmosphere to the Earth is approximately 15 km. Let's consider two situations: 1. Before taking length contraction into account: A muon travelling close to the speed of light would be expected to travel approximately 660 m before decaying. Hence we wouldn't expect to measure any muons at ground level. However, the measured flux of muons at ground level is actually 1 cm$^{-2}$ min$^{-1}$... so where are these muons coming from? 2. After taking length contraction into account: If you consider a muon with energy 20 GeV, it has a length contraction factor of 189, so the distance from the atmosphere to the Earth that a 20 GeV muon observes is not 15 km; it is 79 m! The length has contracted. Hence you would expect the majority of muons at this energy to survive - which is what is observed. The second observer is not supposed to care about explaining the result of the first observer's measurement. He has to do his own measurement. If both measure in the same way, the resulting length is shorter for the second observer. [...] is it a length contraction in literal sense? Isn't it a misnomer ?
The short answer is that people who speak of "length contraction" (or likewise of "time dilation") are thereby not strictly and exclusively referring to proper quantities, but they are making (therefore) "improper statements" (in a specific technical sense. It is therefore not really considered odious to make such "improper statements" in the context of RT; but it is certainly possible, less confusing, and thus preferable to strictly stick to "proper statements" instead). To explain in more detail: Suppose one inertial observer measures a rod at rest w.r.t. him Of course, the two ends of this rod (let's call them $A$ and $B$) can and should be considered observers in their own right; and they themselves, first of all, should have been able (at least in principle) to determine that they were at rest to each other. Consequently it can be said that $A$ and $B$ are characterized by a distance from each other: "distance $AB$". (Using terminology which permits making "improper statements", and which thus allows one to distinguish "improper" from "proper" in the first place, the distance $AB$ would also be called the "proper length of rod $AB$".) and another observer is moving w.r.t. rod. Let's give this other observer an explicit name, too: say $J$. We require of course that $J$ moved uniformly (straight, without acceleration) w.r.t. $A$ and $B$ (and others, too, who were at rest w.r.t. $A$ and $B$). If so, there are many additional observers identifiable (say $K$, $P$, $Q$ ...) who were at rest w.r.t. $J$ (and who consequently were moving w.r.t. $A$ and $B$, just as $J$ was). Now, of particular interest here is the case that $J$ moved "along the rod"; say first passing $A$ and subsequently passing $B$. Then let $P$ be the observer (at rest w.r.t. $J$) who also moved "along the rod", first passing $A$ and subsequently passing $B$, such that $J$'s indication of passing $B$ was simultaneous to $P$'s indication of passing $A$. And let $Q$ be the observer (at rest w.r.t.
$J$) who also moved "along the rod", first passing $A$ and subsequently passing $B$, such that $B$'s indication of passing $J$ was simultaneous to $A$'s indication of passing $Q$. From those setup conditions, together with Einstein's definition of how to measure "simultaneity" of indications between suitable pairs of participants, follows the value for the distance ratio $\frac{JP}{JQ} = (1 - \beta^2)$, where the number $\beta$ quantifies the speed at which the rod (participants $A$ and $B$) and $J$, $P$, $Q$ etc. were moving against each other, in comparison to the speed of light. (I may add an explicit derivation later as an appendix.) As far as the motion of $J$, $P$, $Q$ etc. w.r.t. $A$ and $B$ can be considered equivalent to the motion of $A$ and $B$ w.r.t. $J$, $P$, $Q$ etc. (and in particular, if the value of the refractive index in the region containing these participants was found as $n = 1$) then the corresponding distance relations may be considered as mutually equivalent as well, viz. $\frac{JP}{AB} = \frac{AB}{JQ}$, and therefore $\frac{JP}{AB} = \sqrt{ \frac{JP}{AB} \frac{AB}{JQ} } = \sqrt{ \frac{JP}{JQ} } = \sqrt{ 1 - \beta^2 }$. $JP$ denotes first of all plainly the distance of $J$ and $P$ from each other. Of course there is some particular relation to $A$ and $B$ due to the setup prescription above (especially due to the requirement that $J$'s indication of passing $B$ was simultaneous to $P$'s indication of passing $A$). As shown above: the distance $JP$ is not equal to the distance $AB$ (if $\beta \ne 0$). We then say that length will be shorter for moving observer [...] That's an "improper statement" since participants $J$ and $P$ are plainly other participants than $A$ and $B$ (if $\beta \ne 0$). Nevertheless, this "improper statement" refers precisely to the setup prescription, calculations, and results shown above. Relativistic optical distortions are from the observer's point of view.
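The distance relations derived above can be sanity-checked numerically (a sketch; $\beta = 0.6$ is an arbitrary illustration value):

```python
import math

# beta is an arbitrary illustration value; AB is the rod's proper length.
beta = 0.6
AB = 1.0

# The two relations derived above:
#   JP/JQ = 1 - beta^2   and   JP/AB = AB/JQ,
# together give JP/AB = sqrt(1 - beta^2).
JP = AB * math.sqrt(1 - beta**2)
JQ = AB / math.sqrt(1 - beta**2)

assert abs(JP / JQ - (1 - beta**2)) < 1e-12
print(JP)  # the contracted length: 0.8 for beta = 0.6
```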
Additional POV effects occur, http://bkocay.cs.umanitoba.ca/Students/Theory.html James Terrell, "The Terrell Effect" Am. J. Phys. 57(1) 9–10 (1989) http://www.youtube.com/watch?v=JQnHTKZBTI4 For a relativistic broomstick fitting within a shorter width barn, it depends upon the observer. Perception is malleable.
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. 
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Proof of Snell's Law Suppose the speed in medium 1 is \[v_1\] and the speed in medium 2 is \[v_2\]. Then the time taken to pass from point A to B above is \[t= \frac{d_1}{v_1}+ \frac{d_2}{v_2}= \frac{\sqrt{a^2+x^2}}{v_1} + \frac{\sqrt{b^2+(c-x)^2}}{v_2}\]. The time is minimised when \[\frac{dt}{dx} = 0\]. Since \[\sin \theta_1 = \frac{x}{\sqrt{a^2+x^2}}\] and \[\sin \theta_2 = \frac{c-x}{\sqrt{b^2+(c-x)^2}}\], we have \[\frac{dt}{dx}= \frac{x}{v_1 \sqrt{a^2+x^2}} - \frac{c-x}{v_2 \sqrt{b^2 + (c-x)^2}}= \frac{\sin \theta_1}{v_1}- \frac{\sin \theta_2}{v_2}=0\]. Hence \[\frac{\sin \theta_1}{v_1}= \frac{\sin \theta_2}{v_2}\]. This is one version of Snell's law. Because the number of waves passing each point per second - the frequency \[f\] - cannot change, and because \[v= f \lambda\], where \[\lambda\] is the wavelength, the speed is proportional to the wavelength, so we can write Snell's Law as \[\frac{\sin \theta_1}{\lambda_1}= \frac{\sin \theta_2}{\lambda_2}\].
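The stationarity argument can be verified numerically: minimise the travel time over the crossing point and check that the two \[\frac{\sin\theta}{v}\] ratios agree at the minimum (a sketch; the geometry values and speeds are arbitrary illustration choices):

```python
import math

# Geometry and speeds are arbitrary illustration values, not from the text.
a, b, c = 1.0, 2.0, 3.0     # A is a above the boundary, B is b below, c apart
v1, v2 = 1.0, 0.7           # wave speeds in medium 1 and medium 2

def travel_time(x):
    # Time from A to B when the ray crosses the boundary at horizontal x.
    return math.sqrt(a**2 + x**2) / v1 + math.sqrt(b**2 + (c - x)**2) / v2

# Minimise travel_time on [0, c] by ternary search (t is convex in x).
lo, hi = 0.0, c
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# At the minimum, Snell's law should hold.
sin1 = x / math.sqrt(a**2 + x**2)
sin2 = (c - x) / math.sqrt(b**2 + (c - x)**2)
assert abs(sin1 / v1 - sin2 / v2) < 1e-6
```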
This question is somewhat related to one of my previous questions: Fibonorial of a fractional or complex argument. Recall the definition of harmonic numbers: $$H_n=\sum_{k=1}^n\frac1k=1+\frac12+\,...\,+\frac1n\tag1$$ Obviously, harmonic numbers satisfy the following functional equation: $$H_n-H_{n-1}=\frac1n\tag2$$ The definition $(1)$ is valid only for $n\in\mathbb N$, but it can be generalized to all positive indices. There are several equivalent ways to do this: $$H_a=\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a}\right)=\int_0^1\frac{1-x^a}{1-x}\,dx=\frac{\Gamma'(a+1)}{\Gamma(a+1)}+\gamma\tag3$$ This generalized definition gives a real-analytic function (that can be extended to a complex-analytic one if needed) and still satisfies the functional equation $(2)$ even for non-integer values of $a$. Now, consider the product of harmonic numbers: $$P_n=\prod_{k=1}^nH_k=H_1\,H_2\,...H_n=1\times\left(1+\frac12\right)\times\,...\times\left(1+\frac12+\,...+\frac1n\right)\tag4$$ The numerators and denominators of the terms of this sequence appear as A097423 and A097424 in the OEIS. Obviously, the following functional equations hold: $$\frac{P_n}{P_{n-1}}=H_n,\quad\quad\frac{P_n}{P_{n-1}}-\frac{P_{n-1}}{P_{n-2}}=\frac1n\tag5$$ I'm looking for a continuous generalization $P_a$ of the discrete sequence $P_n$, which is real-analytic for all $a>0$ and satisfies the functional equations $(5)$. Could you suggest a way to construct such a function? Is there a series or integral representation for it? Can we generalize it to complex arguments? Update: It seems we can use the same trick that is used to define the $\Gamma$-function using a limit involving factorials of integers: $$P_a=\lim_{n\to\infty}\left[\left(H_n\right)^a\cdot\prod_{k=1}^n\frac{H_k}{H_{a+k}}\right]=\frac1{H_{a+1}}\cdot\prod_{n=1}^\infty\frac{\left(H_{n+1}\right)^{a+1}}{\left(H_n\right)^a\,H_{a+n+1}}\tag6$$
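A crude numerical sketch of the proposed limit formula $(6)$, using the series $(3)$ for $H_a$ and the functional equation $(2)$ to step from $H_a$ to $H_{a+k}$ (convergence in $n$ is only logarithmic, so this is illustrative rather than efficient):

```python
import math

def H(a, terms=100_000):
    # Generalised harmonic number via the series in (3):
    # H_a = sum_{k>=1} (1/k - 1/(k+a)); truncation error is roughly a/terms.
    return sum(1.0 / k - 1.0 / (k + a) for k in range(1, terms + 1))

def P(a, n=5000):
    # Truncated version of the limit formula (6):
    #   P_a ~ (H_n)^a * prod_{k=1}^n H_k / H_{a+k}.
    # Uses H_x - H_{x-1} = 1/x to build H_{a+k} incrementally; a rough
    # sketch for checking the formula, not an efficient evaluator.
    Hk = 0.0            # running H_k
    Hak = H(a)          # running H_{a+k}, starting from H_a
    log_prod = 0.0
    for k in range(1, n + 1):
        Hk += 1.0 / k
        Hak += 1.0 / (a + k)
        log_prod += math.log(Hk) - math.log(Hak)
    return math.exp(a * math.log(Hk) + log_prod)

print(P(2.0))  # should be close to P_2 = H_1 * H_2 = 3/2
```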
There is a formula for the size of the conjugacy class of the symmetric group corresponding to each way a given integer can be split into parts, to be found for example here: Groupprops: Conjugacy class size formula in symmetric group: Suppose $n$ is a natural number and $\lambda$ is an unordered integer partition of $n$ such that $\lambda$ has $a_j$ parts of size $j$ for each $j$. In other words, there are $a_1$ $1$s, $a_2$ $2$s, $a_3$ $3$s, and so on. Let $c$ be the conjugacy class in the symmetric group of degree $n$ comprising the elements whose cycle type is $\lambda$, i.e., those elements whose cycle decomposition has $a_j$ cycles of length $j$ for each $j$. Then: $$ |c| = \frac{n!}{\prod_j (j)^{a_j}(a_j!)} $$ The ways to split $4$ are $1+1+1+1=2+1+1=2+2=3+1=4$, where each corresponds to a conjugacy class of $S_4$; e.g. $2+1+1$ is a 2-cycle and two 1-cycles. The order of the elements of a conjugacy class can be calculated as the least common multiple of the addends, e.g. $\operatorname{lcm}(2,1,1)=2$. Let's do the first of your examples: $r(2)>r(4)$. $2+1+1$ and $2+2$ have $\operatorname{lcm}(2,1,1)=\operatorname{lcm}(2,2)=2$ and $4$ has $\operatorname{lcm}(4)=4$. Therefore $$\frac{4!}{[(2)^1(1!)][(1)^2(2!)]} + \frac{4!}{(2)^2(2!)} > \frac{4!}{(4)^1(1!)}\\6 + 3 > 6$$
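The formula can be cross-checked against a brute-force enumeration of $S_4$ (a small sketch; `cycle_type` and `class_size` are helper names introduced here):

```python
from collections import Counter
from itertools import permutations
from math import factorial, prod

def cycle_type(p):
    # Cycle type (sorted cycle lengths) of a permutation of {0, ..., n-1}
    # given as a tuple p with i -> p[i].
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def class_size(n, lam):
    # |c| = n! / prod_j j^{a_j} (a_j!)  for a partition lam of n.
    a = Counter(lam)
    return factorial(n) // prod(j ** a[j] * factorial(a[j]) for j in a)

# Enumerate S_4 and compare actual conjugacy class sizes with the formula.
counts = Counter(cycle_type(p) for p in permutations(range(4)))
assert all(counts[lam] == class_size(4, lam) for lam in counts)

# The example from the text: 6 elements of type 2+1+1, 3 of type 2+2,
# 6 of type 4, so r(2) = 6 + 3 > 6 = r(4).
print(counts[(1, 1, 2)], counts[(2, 2)], counts[(4,)])
```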
Under the auspices of the Computational Complexity Foundation (CCF) A fundamental question of complexity theory is the direct product question. Namely whether the assumption that a function $f$ is hard on average for some computational class (meaning that every algorithm from the class has small advantage over random guessing when computing $f$) entails that computing $f$ on ... A basic question in complexity theory is whether the computational resources required for solving k independent instances of the same problem scale as k times the resources required for one instance. We investigate this question in various models of classical communication complexity. We define a new measure, the subdistribution bound, ... Hardness amplification is the fundamental task of converting a $\delta$-hard function $f : \{0,1\}^n \to \{0,1\}$ into a $(1/2-\epsilon)$-hard function $Amp(f)$, where $f$ is $\gamma$-hard if small circuits fail to compute $f$ on at least a $\gamma$ fraction of the inputs. Typically, $\epsilon,\delta$ are small (and $\delta=2^{-k}$ captures the case ... We give a polynomial time algorithm that computes a decomposition of a finite group G given in the form of its multiplication table. That is, given G, the algorithm outputs two subgroups A and B of G such that G is the direct product of A ... In this work, we introduce a framework to study the effect of random operations on the combinatorial list decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural ... In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem $\Pi$ is direct product feasible if it is possible to efficiently aggregate any $k$ instances of $\Pi$ and form one large instance ...
WHAT ARE AMPHIPROTIC SPECIES? Note by Abhijeet Verma 4 years, 9 months ago Amphiprotic species are those which can act as both an acid and a base by accepting or donating a proton. For example, $HSO_4^-$ is an amphiprotic species. Working as a base (although the equilibrium constant $K_b$ is very small in this reaction, as $H_2SO_4$ is a strong acid). Working as an acid (the equilibrium constant is better in this case, as $H_2SO_4$ is a dibasic strong acid, and it furnishes both $H^+$ easily). Just checked on Google. Both our definitions are correct. These are the species that can donate a proton and can act as both acids and bases. They must also be capable of accepting a proton @Pranjal Jain – Well, not necessarily. Your statement is true only when talking about Arrhenius acids, but not about the other two types. @Abhineet Nayyar – Look it up on Wikipedia! Amphoteric species behave as both acids and bases!
While amphiprotic species are those amphoteric species which do so by accepting and donating $H^+$. These species are the name given to those acids, especially Arrhenius acids, which are capable of releasing an $H^+$ ion. For example HCl is an amphiprotic acid, as it can readily release an $H^+$ ion. $(SO_4)^{2-}$ is not an amphiprotic species, as it does not have an $H^+$ ion to release. Are you sure, dude? I guess you are mistaken! Well, if you are saying that they can act as both acids and bases, then what are amphoteric species like aluminium oxide and zinc oxide?? @Abhineet Nayyar – Yeah! They are some typical amphoteric species! $Al_2O_3+NaOH\rightarrow NaAlO_2+H_2O$, $Al_2O_3+HCl \rightarrow AlCl_3+H_2O$. HCl is not amphiprotic!!
Well done to Amina from Greenacre Public School, Australia and Carl who sent us their solutions to this problem. Amina found the ratio between the different organisms in the food chain and used it to find the answer to the first part of the problem: a. The ratio is: $$ \begin{align} \begin{array}{ccccccc} \text{Bushes} &:& \text{Caterpillars}&:& \text{Birds}&:& \text{Wildcats}\\ 1 & :& 30 & :& 3 & :& 1 \\ 500 & :& 15000 & :& 1500 & :& 500 \end{array} \end{align} $$Therefore, if there are $500$ bushes, there will be $15000$ caterpillars, $1500$ birds and $500$ wildcats. Carl sent us the following solutions to the other parts of the problem: b. Each predator gets $1-0.25=0.75=75\%$ of the energy from the previous level of the food chain. So the caterpillars eat the bushes, and get $75\%$ of the energy. The birds eat the caterpillars and get $75\%$ of the caterpillars' energy, which is only $0.75^2=0.5625=56.25\%$ of the energy from the bush. So the wildcats get $75\%$ of the birds' energy, which is $0.75^3=0.421875=42.1875\%$ of the energy from the bush. If the wildcats became vegetarian and ate the bush, they would get $75\%$ of the energy instead. So the wildcats could get $\frac{0.75-0.421875}{0.421875}=\frac{7}{9}=77.\dot{7}\%$ more energy by eating the bushes if they were vegetarian. c. We still have $500$ bushes but now we only have $10000$ caterpillars, so we would then have: $$ \frac{10000}{10} = 1000 \text{ birds}\\ \frac{1000}{3} = 333 \text{ wildcats} $$This means that $500-333=167$ wildcats would starve due to a lack of food. d. The wildcats only need to eat $56.25\%$ of a bush, so there is more food for the wildcats because a bush can feed $\frac{1}{0.5625} = \frac{16}{9} = 1.\dot{7}$ wildcats. So $500$ bushes would feed $500 \times \frac{16}{9} = 888$ wildcats. e. In this (arguably fairer) situation $500$ bushes would feed $7500$ caterpillars and $444$ wildcats. This would allow for $750$ birds, which would then feed another $250$ wildcats.
So in total there would be $500$ bushes, $7500$ caterpillars, $750$ birds and $694$ wildcats.
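The fractions in parts (b) and (d) can be reproduced exactly (a quick sketch using Python's `fractions` module; the 75% transfer efficiency is taken from the problem statement):

```python
from fractions import Fraction

# Each level of the food chain passes on 75% of its energy.
eff = Fraction(3, 4)

through_chain = eff ** 3       # bush -> caterpillar -> bird -> wildcat
assert through_chain == Fraction(27, 64)          # 42.1875% of a bush's energy

gain = (eff - through_chain) / through_chain      # extra energy if vegetarian
assert gain == Fraction(7, 9)                     # 77.7...% more

# Part (d): a wildcat needs 0.75^2 = 56.25% of a bush, so each bush
# feeds 16/9 wildcats.
wildcats_per_bush = 1 / eff ** 2
print(int(500 * wildcats_per_bush))               # wildcats fed by 500 bushes
```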
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
That means we can have actual fancy mathematical equations expressed as such without resorting to pseudocode, uneditable \$\LaTeX\$ screenshots, etc. MathJax is derived from LaTeX, but not exactly equal to it. (It's generally off topic on TeX Stack Exchange.) What's this do? Basically, we can fancify our equations: $$ E = mc^2 $$ We can wrap our equations in \$ ... \$ (for an inline equation: \$c^2 = a^2 + b^2\$), or if we want it to take up its own lines or be multiline we can use the $$ ... $$ delimiters instead. This also lets us write equations with some significant visual complexity: $$ \begin{align} \vec{v} &= \begin{pmatrix} x \\ y \\ z \end{pmatrix} \\ \vert{\vec{v}}\vert &= \sqrt{x^2 + y^2 + z^2} \end{align} $$ The rule of thumb is that LaTeX makes extremely complex stuff simple, and extremely simple stuff complex. :) A cheat sheet to the syntax is available here: MathJax basic tutorial and quick reference. A Game Development-specific MathJax guide is being created here by the community: Game Development MathJax Cookbook
Ex. 2, Chap. 6 in do Carmo's Riemannian Geometry says $\mathrm{T}^2=\mathbf{x}(\mathbb{R}^2)\subset \mathrm{S}^3\subset \mathbb{R}^4$ is a torus with sectional curvature zero in the induced metric, where $\mathbf{x}:\mathbb{R}^2\to\mathbb{R}^4$ is given by$$\mathbf{x}(\theta, \varphi) = \frac{1}{\sqrt{2}}(\cos\theta, \sin\theta,\cos\varphi,\sin\varphi)$$ Since $\mathrm{T}^2$ is two-dimensional, the sectional curvature is indeed the Gaussian curvature, which is intrinsic. But in the usual case $\mathrm{T}^2\subset \mathbb{R}^3$ we know there are points with positive and negative Gaussian curvature in $\mathrm{T}^2$, and this is different from the claim of the exercise. I am confused why the curvatures in these two cases are different. Since the metrics induced from $\mathrm{S}^3$ and $\mathbb{R}^3$ are the same, the Gaussian curvature should coincide in these two immersions.
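To see the flatness directly, one can compute the metric induced by $\mathbf{x}$ (a short sketch of the standard computation for this so-called Clifford torus): $$\mathbf{x}_\theta = \tfrac{1}{\sqrt{2}}(-\sin\theta,\cos\theta,0,0),\qquad \mathbf{x}_\varphi = \tfrac{1}{\sqrt{2}}(0,0,-\sin\varphi,\cos\varphi),$$ so that $$E=\langle\mathbf{x}_\theta,\mathbf{x}_\theta\rangle=\tfrac12,\qquad F=\langle\mathbf{x}_\theta,\mathbf{x}_\varphi\rangle=0,\qquad G=\langle\mathbf{x}_\varphi,\mathbf{x}_\varphi\rangle=\tfrac12.$$ The induced metric is $\tfrac12(d\theta^2+d\varphi^2)$, with constant coefficients, hence $K\equiv 0$. In particular, this flat metric is not isometric to the metric induced on a torus of revolution in $\mathbb{R}^3$: the two embeddings induce different metrics, so their Gaussian curvatures need not agree.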
Electronic structure of NiO with DFT+U¶ Version: 2017 By tuning the empirical Hubbard parameter U, one can sometimes obtain the correct band gap for semiconductors even with LDA or GGA. This tutorial shows how to approach this type of calculation by taking NiO as an example, and at the same time it also introduces the density of states (DOS) functionality in QuantumATK. Introduction¶ The self-interaction error is probably the most serious drawback of the LDA and GGA approximations to the exchange-correlation energy. This self-interaction can be described as the spurious interaction of an electron with itself. It has two main consequences: Electrons are over-delocalized; Band gaps in semiconductors and insulators are predicted to be much lower than their real counterparts. The mean-field Hubbard correction, popularly called DFT+U, is a semi-empirical correction which tries to improve on these deficiencies. In DFT+U an additional energy term, \[E_{U} = \sum_\mu \frac{U_\mu}{2}\,\mathrm{Tr}\left[ n_\mu \left( 1 - n_\mu \right) \right],\] is added to the exchange-correlation energy [CdG05]. In this equation, \(n_\mu\) is the projection onto an atomic shell and \(U_\mu\) is the value of the "Hubbard U" for that shell. The \(E_{U}\) energy term is zero for a fully occupied or unoccupied shell, and positive for a fractionally occupied shell. The energy is thereby lowered if states become fully occupied. This may happen if the energy levels move away from the Fermi level, i.e. increasing the band gap, or if the broadening of the states is decreased, i.e. the electrons are localized. In this way, the Hubbard U improves on the deficiencies of LDA and GGA. The NiO crystal has a band gap that comes out much too low in LDA and GGA, and it is one of the standard examples of how the DFT+U approximation can be used to improve the description of the electronic structure of solids [SLP99]. In this tutorial you will compare the DFT and DFT+U models for this system using the GGA. The electronic structure of NiO calculated with DFT¶ NiO has an fcc crystal structure with two atoms in the unit cell.
The Ni atoms have a net magnetic moment and form an anti-ferromagnetic arrangement in the (111) direction of the fcc cell. The structure can be described by a rhombohedral unit cell with 4 atoms in the basis [CdG05]. The structure is given below in the QuantumATK format:

# Set up lattice
lattice = Rhombohedral(5.138*Angstrom, 33.5573*Degrees)
# Define elements
elements = [Nickel, Oxygen, Nickel, Oxygen]
# Define coordinates
fractional_coordinates = [[ 0.  , 0.  , 0.  ],
                          [ 0.25, 0.25, 0.25],
                          [ 0.5 , 0.5 , 0.5 ],
                          [ 0.75, 0.75, 0.75]]
# Set up configuration
bulk_configuration = BulkConfiguration(
    bravais_lattice=lattice,
    elements=elements,
    fractional_coordinates=fractional_coordinates
    )

Copy the script and save it as NiO.py, or download it directly: NiO.py. Note The rhombohedral unit cell vectors are given as \(( 1,\frac{1}{2},\frac{1}{2}) a\), where \(a\) is the fcc lattice constant. The lengths of the rhombohedral unit cell vectors are therefore given by \(\sqrt{\frac{3}{2}} a\), and are in accordance with the experimental fcc lattice constant of 4.19 Å. Setting up the calculation¶ You will in this section set up a spin-polarized DFT calculation using the spin-polarized version of the generalized gradient approximation (SGGA) for the NiO electronic structure and calculate the Mulliken population and density of states. If you are not familiar with the workflow of QuantumATK you are recommended to first go through the Basic QuantumATK Tutorial. Start up QuantumATK and drag the script NiO.py onto the Builder. The NiO crystal will be added to the Stash. Tip Alternatively you can drag the script NiO.py directly onto the Scripter from the QuantumATK Project Files list. Change the default output file name to NiO_sgga.hdf5 and add the following blocks to the script: Note When an even sampling grid is used, the grid is automatically shifted to make sure that the \(\Gamma\)-point is included. The automatic shift can be avoided by unticking the option "Shift to \(\Gamma\)".
Select User spin – this will allow you to individually set the spin on each atom. Set opposite spins on the two nickel atoms and no spin on the oxygen atoms as illustrated below. Important The initial spin on each atom is given relative to the atomic spin of the element as obtained by Hund's rule. For nickel the electronic configuration of the atom is [Ar]3d\(^8\)4s\(^2\) (see the periodic table in the ATK Reference Manual). The 3d shell is fractionally occupied, and only this shell will contribute to the spin of the atom. According to Hund's rule the 3d shell has 5 electrons in the up direction and 3 electrons in the down direction, giving a total atomic spin of 2\(\mu_B\) for nickel. Finally, open the DensityOfStates block and set the k-points sampling to 10x10x10. In general, one should choose a quite dense k-points grid for DOS analyses, in order to capture possible sharp features in the density of states. Save the script as NiO_sgga.py, but do not close the Scripter window – you will need it again later. Performing the calculation¶ Analysing the results¶ Mulliken Population¶ To inspect the Mulliken population reported in the calculation log file, scroll down to the end of the log file and you will find a report as shown below.
The Mulliken Population Report in the log file lists, for each atom, the number of electrons per spin and per orbital. For atom 0 (Ni) it shows a total of 9.244 spin-up and 7.892 spin-down electrons, with the difference concentrated in the 3d shell (4.887 up vs. 3.533 down); the full per-orbital breakdown is printed in the log file. Tip The Mulliken population reports the numbers of electrons per spin and orbital, as well as the orbital sum for each atom. Note that oxygen atoms are not polarized while the two nickel atoms are polarized in opposite directions, thus forming an anti-ferromagnetic arrangement. The polarization can be calculated from the difference between the number of electrons in the spin-up channel (9.244) and that in the spin-down channel (7.892). The resulting value of 1.35\(\mu_B\) is in good agreement with other DFT calculations [CdG05]. Projected density of states¶ To investigate the NiO projected density of states (PDOS), select the DensityOfStates item on the LabFloor and click the 2D Plot tool in the right-hand panel. You may need to zoom in a little on the plot; use the left mouse button for this. Tip The plot shows the total density of states of the spin-up channel with a black line and minus the result for the spin-down channel with a red line.
If you de-select Flip spin-down in the Options menu, the density of states of the spin-down channel will also be plotted on the positive axis. The total DOS shows no difference between the two spin channels. However, you saw from the Mulliken population that the nickel atoms are spin polarized! To inspect the projected DOS corresponding to just one nickel atom, select a nickel atom with the left-hand button of the mouse, as illustrated below. The PDOS is simply the total DOS projected onto the selected nickel atom. The expected difference between the spin-up and spin-down DOS channels is now apparent. Tip You can also create combined projections by selecting multiple atoms (use the left-hand button of the mouse while holding Ctrl) and more than one shell. The calculation predicts a band gap of ~0.8 eV, which is much smaller than the experimental value of 4.0 eV [AAL97]. In the next chapter you will see how the description of the band gap is improved with the DFT+U approximation. DFT+U calculation for the NiO crystal¶ You will now perform a DFT+U calculation of the NiO crystal, using U = 4.6 eV for the nickel d-states, as proposed in [CdG05]. Calculations¶ Change the default output file name to NiO_sgga_u.nc. Change the Script detail to Show defaults. Switch to the Basis set/exchange correlation tab. Change the option Hubbard U from Disabled to Onsite. In the Basis set section, click Set in the column on the right-hand side on the row corresponding to nickel, and set the Hubbard U parameter for the 3d orbital to 4.6 eV. In the editor locate the line where the nickel basis is defined and check that the Hubbard U parameter is set to 4.6 eV for the nickel_3d and nickel_3d_0 orbitals.
NickelBasis = BasisSet(
    element=PeriodicTable.Nickel,
    orbitals=[nickel_3s, nickel_3p, nickel_4s, nickel_3d, nickel_3p_0, nickel_3d_0, nickel_4p],
    occupations=[2.0, 6.0, 0.0, 8.0, 1.0, 1.0, 0.0],
    hubbard_u=[0.0, 0.0, 0.0, 4.6, 0.0, 4.6, 0.0]*eV,
    dft_half_parameters=Automatic,
    filling_method=SphericalSymmetric,
    onsite_spin_orbit_split=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]*eV,
    pseudopotential=NormConservingPseudoPotential("normconserving/sg15/gga/28_Ni.upf"),
)

Analyzing the results¶

First inspect the Mulliken population in the log file. You should find a magnetic moment of 1.80 \(\mu_B\) on a nickel atom, which is in good agreement with the experimental result of 1.64 – 1.9 \(\mu_B\) [AAL97]. To determine the band gap, inspect the printed density of states in the log file. You should find that the DOS is zero in the range -1.69 to 1.69 eV, corresponding to a band gap of 3.38 eV. This is much higher than the SGGA value of 0.8 eV, and in better agreement with the experimental value of 4.0 eV [AAL97].

Comparing the DFT and DFT+U projected DOS¶

The final step is to compare the nickel PDOS obtained with DFT and DFT+U. Here you will use Python scripting to perform the analysis. The script dos-comparision.py performs the analysis.
It is shown below:

# read in the dos object
dos = nlread('NiO_lsda.hdf5', DensityOfStates)[0]
# generate some energies
energies = numpy.linspace(-5, 5, 400)*eV
# calculate the spectrum
n0_up = dos.tetrahedronSpectrum(energies=energies, spin=Spin.Up, projection_list=ProjectionList([0]))
n0_down = dos.tetrahedronSpectrum(energies=energies, spin=Spin.Down, projection_list=ProjectionList([0]))
e = dos.energies()
# do the same for LSDA+U
dos_u = nlread('NiO_lsda_u.hdf5', DensityOfStates)[0]
n0_up_u = dos_u.tetrahedronSpectrum(energies=energies, spin=Spin.Up, projection_list=ProjectionList([0]))
n0_down_u = dos_u.tetrahedronSpectrum(energies=energies, spin=Spin.Down, projection_list=ProjectionList([0]))
# plot the spectrum using pylab
import pylab
# first plot the up component with dots
pylab.plot(e.inUnitsOf(eV), n0_up.inUnitsOf(eV**-1), 'k:', label='SGGA')
# now plot the down component with negative values and dots
pylab.plot(e.inUnitsOf(eV), -1.*n0_down.inUnitsOf(eV**-1), 'k:')
# now plot the LSDA+U up component with a solid line
pylab.plot(e.inUnitsOf(eV), n0_up_u.inUnitsOf(eV**-1), 'k', label='SGGA+U')
# now plot the LSDA+U down component with negative values and a solid line
pylab.plot(e.inUnitsOf(eV), -1.*n0_down_u.inUnitsOf(eV**-1), 'k')
# show legends
pylab.legend()
pylab.xlabel("Energy (eV)")
pylab.ylabel("DOS (1/eV)")
pylab.show()

Download the script and execute it using the Job Manager. The following plot is produced, illustrating the projected DOS for the nickel atom obtained using SGGA and SGGA+U. Notice the large difference in band gap between the two calculations (the region of zero DOS around the Fermi level, at 0 eV energy).

Note

The plotting is based on the matplotlib package, which is part of QuantumATK; see Plotting using pylab for more information.

References¶

[AAL97] Vladimir I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein.
First-principles calculations of the electronic structure and spectra of strongly correlated systems: the LDA + U method. Journal of Physics: Condensed Matter, 9(4):767, 1997. doi:10.1088/0953-8984/9/4/002.

[CdG05] M. Cococcioni and S. de Gironcoli. Linear response approach to the calculation of the effective interaction parameters in the LDA+U method. Phys. Rev. B, 71:035105, Jan 2005. doi:10.1103/PhysRevB.71.035105.

[SLP99] A. B. Shick, A. I. Liechtenstein, and W. E. Pickett. Implementation of the LDA+U method using the full-potential linearized augmented plane-wave basis. Phys. Rev. B, 60:10763–10769, Oct 1999. doi:10.1103/PhysRevB.60.10763.
A Straightedge Construction of the Midpoint of a Chord Common to Two Circles

Solution

In the diagram below, $\angle \beta=\angle \gamma,$ as two inscribed angles subtended by the same chord $OL$ (not shown). For the same reason, $\angle \gamma=\angle\delta,$ so that, by transitivity, $\angle \beta=\angle\delta.$ It follows that $AN=BN.$ Similarly, $AO=BO.$ It follows that $AOBN$ is a kite in which the diagonal $NO$ is perpendicular to the diagonal $AB$ and passes through the midpoint of the latter.

Acknowledgment

The problem is by Thanos Kalogerakis, who kindly informed me of the problem and later communicated to me the solution by Nikoz Fragkakis.
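The diagram itself is not reproduced here, but the key fact behind the kite argument — any two points $N$ and $O$ equidistant from $A$ and $B$ lie on the perpendicular bisector of $AB$, so the line $NO$ is perpendicular to $AB$ and meets it at its midpoint — is easy to sanity-check numerically. The coordinates below are made up for illustration and are not part of the original page:

```python
# Numeric sanity check of the kite property: if AN = BN and AO = BO, then
# N and O both lie on the perpendicular bisector of AB, so the line NO is
# perpendicular to AB and passes through its midpoint M.
import math

A, B = (0.0, 0.0), (4.0, 2.0)
M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)   # midpoint of AB
perp = (-(B[1] - A[1]), B[0] - A[0])         # direction perpendicular to AB

def pt(t):
    """A point on the perpendicular bisector of AB."""
    return (M[0] + t * perp[0], M[1] + t * perp[1])

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

N, O = pt(1.3), pt(-0.7)                     # arbitrary choices for N and O

assert math.isclose(dist(A, N), dist(B, N))  # AN = BN
assert math.isclose(dist(A, O), dist(B, O))  # AO = BO
# NO is perpendicular to AB (dot product vanishes):
dot = (N[0] - O[0]) * (B[0] - A[0]) + (N[1] - O[1]) * (B[1] - A[1])
assert abs(dot) < 1e-9
# N, M, O are collinear, so NO passes through the midpoint M:
cross = (N[0] - M[0]) * (O[1] - M[1]) - (N[1] - M[1]) * (O[0] - M[0])
assert abs(cross) < 1e-9
print("kite property verified; NO passes through the midpoint of AB")
```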
Prove that if $m$ is an integer bigger than 11 then there always exist 2 different composite integers $x$ and $y$ such that $m=x+y$.

Note by Christian Warjri, 3 years ago

Sort by:

If we take $m \equiv 0 \pmod 4$, we can take $x=8$ and $y$ as the required multiple of 4. Note that 16 has to be written as 10+6 (since $8+8$ does not use two different composites). If we take $m \equiv 1 \pmod 4$, we can take $x=9$ and $y$ as the required multiple of 4. If we take $m \equiv 2 \pmod 4$, we can take $x=10$ and $y$ as the required multiple of 4. If we take $m \equiv 3 \pmod 4$, we can take $x=15$ and $y$ as the required multiple of 4.

Great! Depending on the intent of the question, you might be missing one small case. Note: most people do not consider 0 a composite number.

Oh right! 15 can be written as $6+9$.

Will you please help me out in solving a question: there is a right-angled isosceles triangle ABC with the 90-degree angle at B and AC as the hypotenuse.
There are two points D and E on AC such that AD:DE:EC = 3:5:4. Prove that $\angle DBE = 45^\circ$.

$m>11,\ m = 11 + a;\ a \in Q,\ a = \pm n,\ n \in Q,\ m = x+y,\ x = 11,\ y = \pm n$

Can you explain what you are trying to do? I have several concerns:

Ahh, is this sum that tough? I am doomed.

@Christian Warjri – Nope, I am pointing out that the solution makes no sense, and does not answer the problem at all. It's actually a pretty easy problem, just give it a try. What have you tried? Where did you get stuck?

@Calvin Lin – oh, pretty sure I didn't notice the condition that it needs to be a composite number

@Viki Zeta – 13 = 9 + 4
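The mod-4 recipe above is easy to check by brute force. Here is a quick script (not from the thread) that verifies it, with the two exceptional cases 15 and 16 handled as discussed:

```python
# Brute-force check of the mod-4 recipe: every integer m > 11 should be
# expressible as a sum of two *different* composite numbers.

def is_composite(n):
    return n > 3 and any(n % d == 0 for d in range(2, int(n**0.5) + 1))

def split(m):
    """Return (x, y) with x != y, both composite, x + y == m, per the recipe."""
    x = {0: 8, 1: 9, 2: 10, 3: 15}[m % 4]
    if m == 16:   # 16 - 8 = 8 would repeat x, so use 10 + 6 instead
        x = 10
    if m == 15:   # 15 - 15 = 0 is not composite, so use 6 + 9 instead
        x = 6
    return x, m - x

for m in range(12, 10001):
    x, y = split(m)
    assert x != y and x + y == m and is_composite(x) and is_composite(y), m
print("verified for 12 <= m <= 10000")
```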
TIFR 2015 Problem 2 Solution is a part of the TIFR entrance preparation series. The Tata Institute of Fundamental Research is India's premier institution for advanced research in Mathematics. The Institute runs a graduate programme leading to the award of Ph.D., Integrated M.Sc.-Ph.D. as well as M.Sc. degrees in certain subjects. The image is the front cover of the book Introduction to Real Analysis by R.G. Bartle and D.R. Sherbert. This book is very useful for the preparation of the TIFR Entrance. Also Visit: College Mathematics Program of Cheenta

Problem: Let \(f: \mathbb{R} \to \mathbb{R} \) be a continuous function. Which of the following can not be the image of \((0,1]\) under \(f\)?

A. {0} B. \((0,1)\) C. \([0,1)\) D. \([0,1]\)

Discussion: If \(f\) is the constant function mapping everything to 0, which is continuous, then the image set is {0}.

Suppose that \(f((0,1])=(0,1)\). Since \(f\) is continuous on all of \(\mathbb{R}\), the set \(f([0,1])\) is compact, and \(f([0,1]) = f((0,1]) \cup \{f(0)\} = (0,1) \cup \{f(0)\}\). Moreover, \(f(0) = \lim_{n} f(1/n)\) lies in the closure of \((0,1)\), that is, in \([0,1]\). So \(f([0,1])\) is a subset of \([0,1]\) that omits at least one of the endpoints 0 and 1; hence it is not closed, contradicting compactness. So \((0,1)\) can not be the image of \((0,1]\) under \(f\).

Define \(f(x)=1-x\). Then \(f((0,1])= [0,1)\).

Define \(f(x)=0\) for \(x\in [0,\frac{1}{2}] \) and \(f(x)= 2(x-\frac{1}{2}) \) for \(x\in [\frac{1}{2} ,1 ] \). \(f\) is continuous on \([0,\frac{1}{2}] \) and \( [\frac{1}{2} ,1 ] \), and the two definitions agree at the common point, so by the pasting lemma \(f\) is continuous on \( [0,1] \). And the image of \((0,1] \) is \([0,1]\).

TIFR 2015 Problem 2 Solution is concluded.

Chatuspathi

What is this topic: Real Analysis
What are some of the associated concepts: Continuous Function, Metric Space
Book Suggestions: Introduction to Real Analysis by R.G. Bartle, D.R. Sherbert
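As a numeric illustration (not part of the original post), the constructions for options A, C and D can be sampled on a fine grid in \((0,1]\); the approximate images behave as claimed, with option C approaching but never attaining 1:

```python
# Sample each candidate f on a fine grid in (0, 1] and inspect the
# approximate image. Grid resolution is an arbitrary choice.
import numpy as np

x = np.linspace(1e-6, 1.0, 200_001)

f_A = np.zeros_like(x)                        # constant 0 -> image {0}
f_C = 1.0 - x                                 # image [0, 1): 0 attained at x = 1
f_D = np.where(x <= 0.5, 0.0, 2 * (x - 0.5))  # piecewise   -> image [0, 1]

assert f_A.min() == f_A.max() == 0.0
assert f_C.min() == 0.0 and f_C.max() < 1.0   # 1 is approached, never attained
assert f_D.min() == 0.0 and f_D.max() == 1.0
print("grid images consistent with {0}, [0,1) and [0,1]")
```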
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah, it does seem unreasonable to expect a finite presentation. Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then every element of the orthogonal group O(V, b) is a composition of at most n reflections. How is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms. This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$. Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$. So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = the signed sum of the values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$. The infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$. But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$. For the general case, $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube. Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor, (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$. You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point. Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$, which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with. (Technical point: note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$, aka functions on $M$ with values in $E$, aka sections of $E \to M$), and denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$, aka bundle-homs $TM \to E$). Then this is the level-$0$ exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-$0$ exterior derivative of a bundle-valued theory of differential forms. So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection. Alright, so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$. Voila, the Riemann curvature tensor. Well, that's what it is called when $E = TM$, so that $s = Z$ is some vector field on $M$. In general this is the bundle curvature. Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements $v$ of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form"; it comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form. (The cotangent bundle is naturally a symplectic manifold.) Yeah. So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. The torsion tensor!! So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up. If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$? Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and $z$-axes. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group. Everything about $S_4$ is encoded in the cube, in a way. The same can be said of $A_5$ and the dodecahedron, say.
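The brute-force claim is easy to confirm computationally. Here is a quick script (not from the chat) that builds each subgroup by closing a set of generators under composition; the specific generators for orders 8 and 12 are standard choices (a dihedral 2-Sylow and $A_4$), not taken from the discussion above:

```python
# Check that S_4 has a subgroup of every order dividing 24. Permutations are
# tuples p with p[i] = image of i; composition is (p*q)(i) = p[q[i]].

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def closure(gens):
    """Subgroup generated by gens, computed by repeated multiplication."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

def cyc(*cycle):
    """Permutation consisting of a single cycle, 0-based."""
    p = list(range(4))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return tuple(p)

orders = {
    1:  closure([]),
    2:  closure([cyc(0, 1)]),                   # <(1 2)>
    3:  closure([cyc(0, 1, 2)]),                # <(1 2 3)>
    4:  closure([cyc(0, 1, 2, 3)]),             # <(1 2 3 4)>
    6:  closure([cyc(0, 1), cyc(0, 1, 2)]),     # <(1 2), (1 2 3)> = S_3
    8:  closure([cyc(0, 1, 2, 3), cyc(0, 2)]),  # a dihedral 2-Sylow
    12: closure([cyc(0, 1, 2), cyc(1, 2, 3)]),  # two 3-cycles generate A_4
    24: closure([cyc(0, 1), cyc(0, 1, 2, 3)]),  # S_4 itself
}
assert all(len(H) == d for d, H in orders.items())
print("subgroup of each order dividing 24:", sorted(orders))
```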
I want to know how I can have the nice ▷ ($\triangleright$) comment symbol of algorithmicx in algorithm2e.

\SetKwComment{Comment}{<start>}{<end>} defines a macro \Comment{text comment} which writes text comment between <start> and <end>. Note that <start> or <end> can be empty. It also defines a \Comment*{side comment text} macro which allows you to put a comment on the same line as the code. This macro can take various options to control its behaviour:

- \Comment*[r]{side comment text} puts the end of line mark (; by default) and the side comment text just after, right justified, then ends the line. It is the default.
- \Comment*[l]{side comment text} same thing, but the side comment text is left justified.
- \Comment*[h]{side comment text} puts the side comment right after the text. No end of line mark is put, and the line is not terminated (it is up to you to put \; to end the line).
- \Comment*[f]{side comment text} same as the previous one but with the side comment text right justified.

Here's an example of the above:

\documentclass{article}
\usepackage{algorithm2e}
\SetKwComment{Comment}{$\triangleright$\ }{}
\begin{document}
\begin{algorithm}[H]
  \SetAlgoLined
  \KwData{this text}
  \KwResult{how to write algorithm with \LaTeX2e }
  initialization\;
  \While{not at end of this document}{
    read current\;
    \eIf{understand}{
      go to next section \Comment*[r]{Some comment}
      current section becomes this one\;
    }{
      go back to the beginning of current section\;
    }
  }
  \caption{How to write algorithms}
\end{algorithm}
\end{document}

If you wish to reformat the comment font, you can adjust \CommentSty. In algorithm2e this is done via (for example) \SetCommentSty{itshape} to obtain an \itshape (italics) comment.

\documentclass{article}
\usepackage{algorithm,algpseudocode}
\begin{document}
\begin{algorithm}
  \caption{Euclid's algorithm}\label{euclid}
  \begin{algorithmic}[1]
    \Procedure{Euclid}{$a,b$}\Comment{The g.c.d. of a and b}
      \State $r\gets a\bmod b$
      \While{$r\not=0$}\Comment{We have the answer if r is 0}
        \State $a\gets b$
        \State $b\gets r$
        \State $r\gets a\bmod b$
      \EndWhile\label{euclidendwhile}
      \State \textbf{return} $b$\Comment{The gcd is b}
    \EndProcedure
  \end{algorithmic}
\end{algorithm}
\end{document}
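As a minimal illustration of the \SetCommentSty remark above, the following sketch (my own combination of the two commands, not from the original answer) typesets the triangleright side comments in italics:

```latex
% Minimal sketch: italic algorithm2e comments via \SetCommentSty,
% combined with the triangleright comment marker defined earlier.
\documentclass{article}
\usepackage{algorithm2e}
\SetKwComment{Comment}{$\triangleright$\ }{}
\SetCommentSty{itshape}   % typeset all comments in italics
\begin{document}
\begin{algorithm}[H]
  \SetAlgoLined
  $x \gets 0$ \Comment*[r]{an italic side comment}
  \caption{Comment styling demo}
\end{algorithm}
\end{document}
```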
[1] oai:arXiv.org:1910.06212 [pdf] - 1977908 An Extreme-mass Ratio, Short-period Eclipsing Binary Consisting of a B Dwarf Primary and a Pre-main Sequence M Star Companion Discovered by KELT Submitted: 2019-10-14 We present the discovery of KELT J072709+072007 (HD 58730), a very low mass ratio ($q \equiv M_2/M_1 \approx 0.08$) eclipsing binary (EB) identified by the Kilodegree Extremely Little Telescope (KELT) survey. We present the discovery light curve and perform a global analysis of four high-precision ground-based light curves, the Transiting Exoplanets Survey Satellite (TESS) light curve, radial velocity (RV) measurements, Doppler Tomography (DT) measurements, and the broad-band spectral energy distribution (SED). Results from the global analysis are consistent with a fully convective ($M_2 = 0.253^{+0.021}_{-0.017}\ M_{\odot}$) M star transiting a late-B primary ($M_1 = 3.348^{+0.057}_{-0.082}\ M_{\odot};\ T_{\rm eff,1} = 12000^{+580}_{-530}\ {\rm K}$). We infer that the system is younger than 272 Myr ($3\sigma$), and the M star mass and radius are consistent with values from a pre-main sequence isochrone of comparable age. We separately and analytically fit for the variability in the out-of-eclipse TESS phase curve, finding good agreement between these results and those from the global analysis. Such systems are valuable for testing theories of binary star formation and understanding how the environment of a star in a close-but-detached binary affects its physical properties. In particular, we examine how a star's properties in such a binary might differ from the properties it would have in isolation. [2] oai:arXiv.org:1910.06233 [pdf] - 1977909 Trans-Planckian Censorship Conjecture and Non-thermal post-inflationary history Submitted: 2019-10-14 The recently proposed Trans-Planckian Censorship Conjecture (TCC) can be used to constrain the energy scale of inflation. The conclusions however depend on the assumptions about the post-inflationary history of the Universe. E.g.
in the standard case of a thermal post-inflationary history in which the Universe stays radiation dominated at all times from the end of inflation to the epoch of radiation matter equality, TCC has been used to argue that the Hubble parameter during inflation, $H_{\inf}$, is below ${\cal O}(0.1)~{\rm GeV}$. Cosmological scenarios with a non-thermal post-inflationary history are well-motivated alternatives to the standard picture and it is interesting to find out the possible constraints which TCC imposes on such scenarios. In this work, we find out the amount of enhancement of the TCC compatible bound on $H_{\inf}$ if the post-inflationary history before nucleosynthesis was non-thermal. We then argue that if TCC is correct, for a large class of scenarios, it is not possible for the Universe to have undergone a phase of moduli domination. [3] oai:arXiv.org:1910.06236 [pdf] - 1977910 The CALIFA view on stellar angular momentum across the Hubble sequence Submitted: 2019-10-14 [Abridged] We present the apparent stellar angular momentum of 300 galaxies across the Hubble sequence, using integral-field spectroscopic data from the CALIFA survey. Adopting the same $\lambda_\mathrm{R}$ parameter previously used to distinguish between slow and fast rotating early-type (elliptical and lenticular) galaxies, we show that spiral galaxies as expected are almost all fast rotators. Given the extent of our data, we provide relations for $\lambda_\mathrm{R}$ measured in different apertures, including conversions to long-slit 1D apertures. Our sample displays a wide range of $\lambda_\mathrm{Re}$ values, consistent with previous IFS studies. The fastest rotators are dominated by relatively massive and highly star-forming Sb galaxies, which preferentially reside in the main star-forming sequence. These galaxies reach $\lambda_\mathrm{Re}$ values of $\sim$0.85, are the largest galaxies at a given mass, and display some of the strongest stellar population gradients.
Compared to the population of S0 galaxies, our findings suggest that fading may not be the dominant mechanism transforming spirals into lenticulars. Interestingly, we find that $\lambda_\mathrm{Re}$ decreases for late-type Sc and Sd spiral galaxies, with values that on occasion put them in the slow-rotator regime. While for some of them this can be explained by their irregular morphologies and/or face-on configurations, others are edge-on systems with no signs of significant dust obscuration. The latter are typically at the low-mass end, but this does not explain their location in the classical ($V/\sigma$,$\varepsilon$) and ($\lambda_\mathrm{Re}$,$\varepsilon$) diagrams. Our initial investigations, based on dynamical models, suggest that these are dynamically hot disks, probably influenced by the observed important fraction of dark matter within R$_\mathrm{e}$. [4] oai:arXiv.org:1910.06237 [pdf] - 1977911 Near-IR Spectroscopic Studies of Galaxies at z~1-3 Submitted: 2019-10-14 The ISM comprises multiple components, including molecular, neutral, and ionized gas, and dust, which are related to each other mainly through star formation - some are fuel for star formation (molecular gas) while some are the products of it (ionized gas, dust). To fully understand the physics of star formation and its evolution throughout cosmic time, it is crucial to measure and observe different ISM components of galaxies out to high redshifts. I will review the current status of near-IR studies of galaxies during the peak of star formation activity (z~1-3). Using rest-frame optical emission lines, we measure dust, star formation, and gaseous properties of galaxies. JWST will advance such studies by probing lower luminosities and higher redshifts, owing to its significantly higher sensitivity. Incorporating ALMA observations of cold dust and molecular gas at z>1 will give us a nearly complete picture of the ISM in high-redshift galaxies over a large dynamic range in mass.
[5] oai:arXiv.org:1910.06256 [pdf] - 1977912 Carbon-Deficient Red Giants Submitted: 2019-10-14 Carbon-deficient red giants (CDRGs) are a rare class of peculiar red giants, also called "weak G-band" or "weak-CH" stars. Their atmospheric compositions show depleted carbon, a low 12C/13C isotopic ratio, and an overabundance of nitrogen, indicating that the material at the surface has undergone CN-cycle hydrogen-burning. I present Stromgren uvby photometry of nearly all known CDRGs. Barium stars, having an enhanced carbon abundance, exhibit the "Bond-Neff effect"--a broad depression in their energy distributions at ~4000 A, recently confirmed to be due to the CH molecule. This gives Ba II stars unusually low Stromgren c1 photometric indices. I show that CDRGs, lacking CH absorption, exhibit an "anti-Bond-Neff effect": higher c1 indices than normal red giants. Using precise parallaxes from Gaia DR2, I plot CDRGs in the color-magnitude diagram (CMD) and compare them with theoretical evolution tracks. Most CDRGs lie in a fairly tight clump in the CMD, indicating initial masses in the range ~2 to 3.5 Msun, if they have evolved as single stars. It is unclear whether they are stars that have just reached the base of the red-giant branch and the first dredge-up of CN-processed material, or are more highly evolved helium-burning stars in the red-giant clump. About 10% of CDRGs have higher masses of ~4 to 4.5 Msun, and exhibit unusually high rotational velocities. I show that CDRGs lie at systematically larger distances from the Galactic plane than normal giants, possibly indicating a role of binary mass-transfer and mergers. CDRGs continue to present a major puzzle for our understanding of stellar evolution. [6] oai:arXiv.org:1910.06263 [pdf] - 1977913 Magnetized Neutron Stars Submitted: 2019-10-14 In this work we show the results for numerical solutions of the relativistic Grad-Shafranov equation for a typical neutron star with 1.4 solar masses.
We have studied the internal magnetic field considering both the poloidal and toroidal components, as well as the behavior of the field lines parametrized by the ratio between these components of the field. [7] oai:arXiv.org:1910.06272 [pdf] - 1977914 CMB anisotropy and BBN constraints on pre-recombination decay of dark matter to visible particles Submitted: 2019-10-14 Injection of high energy electromagnetic particles around the recombination epoch can modify the standard recombination history and therefore the CMB anisotropy power spectrum. Previous studies have put strong constraints on the amount of electromagnetic energy injection around the recombination era (redshifts $z\lesssim 4500$). However, energy injected in the form of energetic ($>$ keV) visible standard model particles is not deposited instantaneously. The considerable delay between the time of energy injection and the time when all energy is deposited to background baryonic gas and CMB photons, together with the extraordinary precision with which the CMB anisotropies have been measured, means that CMB anisotropies are sensitive to energy that was injected much before the epoch of recombination. We show that the CMB anisotropy power spectrum is sensitive to energy injection even at $z = 10000$, giving stronger constraints compared to big bang nucleosynthesis and CMB spectral distortions. We derive, using Planck CMB data, the constraints on long-lived unstable particles decaying at redshifts $z\lesssim 10000$ (lifetime $\tau_X\gtrsim 10^{11}$s) by explicitly evolving the electromagnetic cascades in the expanding Universe, thus extending previous constraints to lower particle lifetimes. We also revisit the BBN constraints and show that the delayed injection of energy is important for BBN constraints. We find that the constraints can be weaker by a factor of a few to almost an order of magnitude, depending on the energy, when we relax the quasi-static or on-the-spot assumptions.
[8] oai:arXiv.org:1910.06274 [pdf] - 1977915 Emulating the Global 21-cm Signal from Cosmic Dawn and Reionization Submitted: 2019-10-14 The 21-cm signal of neutral hydrogen is a sensitive probe of the Epoch of Reionization, Cosmic Dawn and the Dark Ages. Currently operating radio telescopes have ushered in a data-driven era of 21-cm cosmology, providing the first constraints on the astrophysical properties of sources that drive this signal. However, extracting astrophysical information from the data is highly non-trivial and requires the rapid generation of theoretical templates over a wide range of astrophysical parameters. To this end emulators are often employed, with previous efforts focused on predicting the power spectrum. In this work we introduce 21cmGEM -- the first emulator of the global 21-cm signal from Cosmic Dawn and the Epoch of Reionization. The smoothness of the output signal is guaranteed by design. We train neural networks to predict the cosmological signal based on a seven-parameter astrophysical model, using a database of $\sim$30,000 simulated signals. We test the performance with a set of $\sim$2,000 simulated signals, showing that the relative error in the prediction has an r.m.s. of 0.0159. The algorithm is efficient, with a running time per parameter set of 0.16 sec. Finally, we use the database of models to check the robustness of relations between the features of the global signal and the astrophysical parameters that we previously reported. In particular, we confirm the prediction that the coordinates of the maxima of the global signal, if measured, can be used to estimate the Ly$\alpha$ intensity and the X-ray intensity at early cosmic times.
[9] oai:arXiv.org:1910.06285 [pdf] - 1977916 No Snowball on Habitable Tidally Locked Planets with a Dynamic Ocean Submitted: 2019-10-14 Terrestrial planets orbiting within the habitable zones of M-stars are likely to become tidally locked in a 1:1 spin:orbit configuration and are prime targets for future characterization efforts. An issue of importance for the potential habitability of terrestrial planets is whether they could experience snowball events (periods of global glaciation). Previous work using an intermediate complexity atmospheric Global Climate Model (GCM) with no ocean heat transport suggested that tidally locked planets would smoothly transition to a snowball, in contrast with Earth, which has bifurcations and hysteresis in climate state associated with global glaciation. In this paper, we use a coupled ocean-atmosphere GCM (ROCKE-3D) to model tidally locked planets with no continents. We chose this configuration in order to consider a case that we expect to have high ocean heat transport. We show that including ocean heat transport does not reintroduce the snowball bifurcation. An implication of this result is that a tidally locked planet in the habitable zone is unlikely to be found in a snowball state for a geologically significant period of time. [10] oai:arXiv.org:1910.06305 [pdf] - 1977917 The impact of relativistic effects on the 3D Quasar-Lyman-$\alpha$ cross-correlation Submitted: 2019-10-14 We study the impact of relativistic effects in the 3-dimensional cross-correlation between the Lyman-$\alpha$ forest and quasars. Apart from the relativistic effects, which are dominated by the Doppler contribution, several systematic effects are also included in our analysis (intervening metals, unidentified high column density systems, transverse proximity effect and the effect of UV fluctuations).
We compute the signal-to-noise ratio for the Baryon Oscillation Spectroscopic Survey (BOSS), the extended Baryon Oscillation Spectroscopic Survey (eBOSS) and the Dark Energy Spectroscopic Instrument (DESI) surveys, showing that DESI will be able to detect the Doppler contribution in a Large Scale Structure (LSS) survey for the first time, with an S/N $>7$ for $r_{\rm min} > 10$ Mpc$/h$, where $r_{\rm min}$ denotes the minimum comoving separation between sources. We demonstrate that several physical parameters, introduced to provide a full modelling of the cross-correlation function, are affected by the Doppler contribution. By using a Fisher matrix approach, we establish that if the Doppler contribution is neglected in the data analysis, the derived parameters will be shifted by a non-negligible amount for the upcoming surveys.
Consider a continuous map $f : B^2 \rightarrow \mathbb{R}^2$ such that $f(S^1) \subset S^1$ and $\deg(f|_{S^1}) \ne 0.$ Prove that $B^2 \subset \operatorname{im}(f).$ [Note: Here, $B^2$ is the closed unit disk, and the degree of a function is defined as in this question.] I don't know how to prove the statement. Informally, I see that the winding number tells us that it's true, but I don't know how the fundamental group and the degree encode the information that the winding number gives. The above idea comes from the following thread, but I don't know how to completely translate the solution into "purely" algebraic-topology terms. In my course, all the theory of the "Fundamental Group" and the "Degree" has been taught without any mention of the winding number, and everything I learned about the winding number is from a Complex Analysis course (i.e. with holomorphic functions). Thanks everyone!
From a geometric measure theory perspective, it is standard to define Radon measures $\mu$ to be Borel regular measures that give finite measure to any compact set. Of course, their connection with linear functionals is very important, but in all the references I know, they start with a notion of a Radon measure and then prove representation theorems that represent linear functionals by integration against Radon measures. Here are some examples: $\color{blue}{I:}$ Evans and Gariepy's Measure Theory and Fine Properties of Functions states it this way: An [outer] measure $\mu$ on $X$ is regular if for each set $A \subset X$ there exists a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A)=\mu(B)$. A measure $\mu$ on $\Bbb{R}^n$ is called Borel if every Borel set is $\mu$-measurable. A measure $\mu$ on $\Bbb{R}^n$ is Borel regular if $\mu$ is Borel and for each $A\subset\Bbb{R}^n$ there exists a Borel set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$. A measure $\mu$ on $\Bbb{R}^n$ is a Radon measure if $\mu$ is Borel regular and $\mu(K) < \infty$ for each compact set $K\subset \Bbb{R}^n$. $\color{blue}{II:}$ In De Lellis' very nice exposition of Preiss' big paper, he doesn't even define Radon explicitly, but rather talks about Borel regular measures that are also locally finite, by which he means $\mu(K) < \infty$ for all compact $K$. His Borel regular is a bit different in that he only considers measurable sets -- $\mu$ is Borel regular if any measurable set $A$ is contained in a Borel set $B$ such that $\mu(A) = \mu(B)$. (I am referring to Rectifiable Sets, Densities and Tangent Measures by Camillo De Lellis.) $\color{blue}{III:}$ In Leon Simon's Lectures on Geometric Measure Theory, he defines Radon measures on locally compact and separable spaces to be those that are Borel regular and finite on compact subsets.
$\color{blue}{IV:}$ Federer 2.2.5 defines Radon measures to be measures $\mu$, over a locally compact Hausdorff space, that satisfy the following three properties: If $K\subset X$ is compact, then $\mu(K) < \infty$. If $V\subset X$ is open, then $V$ is $\mu$-measurable and $\hspace{1in} \mu(V) = \sup\{\mu(K): K\text{ is compact, } K\subset V\}$ If $A\subset X$, then $\hspace{1in} \mu(A) = \inf\{\mu(V): V\text{ is open, } A\subset V\}$ Note: it is a theorem (actually, Corollary 1.11 in Mattila's Geometry of Sets and Measures in Euclidean Spaces) that a measure is Radon à la Federer if and only if it is Borel regular and locally finite. I.e., {Federer Radon} $\Leftrightarrow$ {Simon or Evans and Gariepy Radon}. (I am referring of course to Herbert Federer's 1969 text Geometric Measure Theory.) $\color{blue}{V:}$ For comparison, Folland (in his real analysis book) defines things a bit differently. For example, he defines regularity differently than the first, third and fourth texts above. In those, a measure $\mu$ is regular if for any $A\subset X$ there is a $\mu$-measurable set $B$ such that $A\subset B$ and $\mu(A) = \mu(B)$. In Folland, a Borel measure $\mu$ is regular if all Borel sets are approximated from the outside by open sets and from the inside by compact sets. I.e. if $\hspace{1in}\mu(B) = \inf \{\mu(V): V\text{ is open, } B\subset V\}$ and $\hspace{1in}\mu(B) = \sup \{\mu(K): K\text{ is compact, } K\subset B\}$ for all Borel $B\subset X$. Folland's definition of Radon is very similar to Federer's but not quite the same: A measure $\mu$ is Radon if it is a Borel measure that satisfies: If $K\subset X$ is compact, then $\mu(K) < \infty$. If $V\subset X$ is open, then $\hspace{1in} \mu(V) = \sup\{\mu(K): K\text{ is compact, } K\subset V\}$ If $A\subset X$ and $A$ is Borel, then $\hspace{1in} \mu(A) = \inf\{\mu(V): V\text{ is open, } A\subset V\}$ ... and by Borel measure, Folland means a measure whose measurable sets are exactly the Borel sets.
Discussion: Why choose one definition over another? Partly personal preference -- I prefer the typical approach taken in geometric measure theory, starting with an outer measure and progressing to Radon measures à la Evans and Gariepy or Simon or Federer or Mattila. It seems, somehow, more natural and harmonious with the Caratheodory criterion and Caratheodory construction used to generate measures, like the Hausdorff measures. With this approach, for example, sets with an outer measure of 0 are automatically measurable. Another reason not to use the more restrictive definition 2 (in the question above): it makes sense to require that continuous images of Borel sets be measurable. But all we know is that continuous maps map Borel sets to Suslin sets. And there are Suslin sets which are not Borel! If we use the definition of Borel regular, as in I, III and IV above, then Suslin sets are measurable. There is a very nice discussion of this in section 1.7 of Krantz and Parks' Geometric Integration Theory -- see that reference for the definition of Suslin sets. (Krantz and Parks is yet another text I could have added to the above list that agrees with I, III, and IV as far as Radon, Borel regular, etc. goes.)
Let's start by looking at: $$\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert.$$ We can rewrite this by plugging in the definition of $G_{t:t+n}$: $$\begin{aligned}& \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert \\ =& \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ R_{t + 1} + \gamma R_{t + 2} + \dots + \gamma^{n - 1} R_{t + n} + \gamma^n V_{t + n - 1}(S_{t + n}) \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert \\ =& \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ R_{t:t+n} + \gamma^n V_{t + n - 1}(S_{t + n}) \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert,\end{aligned}$$ where $R_{t:t+n} \doteq R_{t + 1} + \gamma R_{t + 2} + \dots + \gamma^{n - 1} R_{t + n}$. If you go all the way back to page 58 of the book, you can see the definition of $v_{\pi}(s)$: $$\begin{aligned}v_{\pi}(s) &\doteq \mathbb{E}_{\pi} \left[ \sum_{k = 0}^{\infty} \gamma^k R_{t + k + 1} \mid S_t = s \right] \\ &= \mathbb{E}_{\pi} \left[ R_{t:t+n} + \gamma^n \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \mid S_t = s \right] \\ &= \mathbb{E}_{\pi} \left[ R_{t:t+n} \mid S_t = s \right] + \gamma^n \mathbb{E}_{\pi} \left[ \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \mid S_t = s \right]\end{aligned}$$ Using this, we can continue rewriting where we left off above: $$\begin{aligned}& \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ R_{t:t+n} + \gamma^n V_{t + n - 1}(S_{t + n}) \mid S_t = s \right] - v_{\pi}(s) \Bigr\rvert \\ =& \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ \gamma^n V_{t + n - 1}(S_{t + n}) \mid S_t = s \right] - \gamma^n \mathbb{E}_{\pi} \left[ \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \mid S_t = s \right] \Bigr\rvert \\ =& \gamma^n \max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ V_{t + n - 1}(S_{t + n}) - \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \mid S_t = s \right] \Bigr\rvert\end{aligned}$$ Because the absolute value function is convex, we can use Jensen's inequality to show that the absolute value
of an expectation is less than or equal to the expectation of the corresponding absolute value: $$\left| \mathbb{E} \left[ X \right] \right| \leq \mathbb{E} \left[ \left| X \right| \right].$$ This means that: $$\begin{aligned}\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} - v_{\pi}(s) \mid S_t = s \right] \Bigr\rvert &\leq \gamma^n \max_s \mathbb{E}_{\pi} \left[ \Bigl\lvert V_{t + n - 1}(S_{t + n}) - \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \Bigr\rvert \mid S_t = s \right]\end{aligned}$$ Now, the important trick here is to see that: $$\max_s \mathbb{E}_{\pi} \left[ \Bigl\lvert V_{t + n - 1}(S_{t + n}) - \sum_{k = 0}^{\infty} \gamma^k R_{t + n + k + 1} \Bigr\rvert \mid S_t = s \right] \leq \max_s \mathbb{E}_{\pi} \left[ \Bigl\lvert V_{t + n - 1}(S_{t}) - \sum_{k = 0}^{\infty} \gamma^k R_{t + k + 1} \Bigr\rvert \mid S_t = s \right]$$ I'm skipping the formal steps to show that this is the case to save space, but the intuition is that: The left-hand side of this inequality involves finding an $S_t = s$ such that some function of $S_{t + n}$ is maximized, whereas the right-hand side involves finding an $S_t = s$ such that exactly the same function of $S_{t}$ is maximized. In the left-hand side, selecting an $S_t = s$ implicitly induces a probability distribution over multiple possible states $S_{t + n}$, given by $S_t$, the environment's transition dynamics, and the policy $\pi$. Intuitively, this is more "restrictive" for the $\max$ operator: it does not have the "freedom" to directly select a single state $S_{t + n}$ such that the function of $S_{t + n}$ is maximized. The right-hand side is free to choose a single state $S_t = s$ equal to an "optimal" $S_{t+n}$ from the left-hand side, but it is also free to make even better choices that might never be reachable with certainty after $n$ steps on the left-hand side.
We can use this to rewrite the previous inequality we had (where we might be making the right-hand side a bit bigger than it was, but that's fine: it already was an upper bound, so the inequality will still hold): $$\begin{aligned}\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} - v_{\pi}(s) \mid S_t = s \right] \Bigr\rvert &\leq \gamma^n \max_s \mathbb{E}_{\pi} \left[ \Bigl\lvert V_{t + n - 1}(S_{t}) - \sum_{k = 0}^{\infty} \gamma^k R_{t + k + 1} \Bigr\rvert \mid S_t = s \right] \\ &= \gamma^n \max_s \mathbb{E}_{\pi} \left[ \Bigl\lvert V_{t + n - 1}(S_{t}) - v_{\pi}(s) \Bigr\rvert \mid S_t = s \right].\end{aligned}$$ After this rewriting we've got a hidden $\mathbb{E}_{\pi}$ "inside" another $\mathbb{E}_{\pi}$ (because the definition of $v_{\pi}(s)$ contains an $\mathbb{E}_{\pi}$), which I suppose is kind of ugly... but mathematically harmless. The maximum of a random variable is an upper bound on the expectation of that random variable, so we can get rid of the expectation in the right-hand side (again potentially increasing the right-hand side, which again is still fine since it's already an upper bound anyway): $$\begin{aligned}\max_s \Bigl\lvert \mathbb{E}_{\pi} \left[ G_{t:t+n} - v_{\pi}(s) \mid S_t = s \right] \Bigr\rvert &\leq \gamma^n \max_s \Bigl\lvert V_{t + n - 1}(s) - v_{\pi}(s) \Bigr\rvert,\end{aligned}$$ which we can finally rewrite to Equation (7.3) in the book by moving the subtraction of $v_{\pi}(s)$ outside of the expectation on the left-hand side of the inequality (which is fine because, as I already mentioned above, the definition of $v_{\pi}(s)$ itself contains another $\mathbb{E}_{\pi}$ anyway).
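As a sanity check, the final bound can be verified numerically on a small example. The sketch below uses a hypothetical 3-state Markov reward process under a fixed policy (the transition matrix, rewards, and value estimate are arbitrary numbers, not from the book): it computes $\mathbb{E}_\pi[G_{t:t+n} \mid S_t = s]$ in closed form via powers of the transition matrix and confirms the contraction inequality of Equation (7.3).

```python
import numpy as np

# Hypothetical 3-state Markov reward process induced by a fixed policy
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])          # transition matrix (rows sum to 1)
r = np.array([1.0, -0.5, 2.0])           # expected one-step reward per state
gamma, n = 0.9, 4

# Exact value function: v_pi = (I - gamma P)^{-1} r
v_pi = np.linalg.solve(np.eye(3) - gamma * P, r)

# An arbitrary value estimate V_{t+n-1}
V = np.array([0.0, 10.0, -3.0])

# E[G_{t:t+n} | S_t = s] = sum_{k<n} gamma^k (P^k r)(s) + gamma^n (P^n V)(s)
G = sum(gamma**k * np.linalg.matrix_power(P, k) @ r for k in range(n)) \
    + gamma**n * np.linalg.matrix_power(P, n) @ V

lhs = np.max(np.abs(G - v_pi))                   # worst-case n-step error
rhs = gamma**n * np.max(np.abs(V - v_pi))        # gamma^n * worst estimate error
assert lhs <= rhs + 1e-12                        # Equation (7.3) holds
```

Because $P^n$ is a stochastic matrix, averaging $V - v_\pi$ over $n$-step transitions can only shrink its maximum absolute value, which is exactly the intuition argued above.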
Given a gamma-posterior distribution $p(\theta\mid y)$ I want to compute the posterior distribution for the log-odds: $\log\frac{\theta}{1-\theta}$. I tried to solve it with the change of variables (hopefully this is a good start). Solution approach: $p_{\theta}(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\theta^{-(\alpha+1)}e^{-\frac{\beta}{\theta}}$ $\phi = \log\frac{\theta}{1-\theta}$; $h(\phi) = \frac{e^\phi}{e^\phi+1}$; $h'(\phi)=\frac{e^\phi}{(1+e^\phi)^2}$ $p_\phi(\phi) = p_\theta(h(\phi))\times \lvert h'(\phi)\rvert$ $p_\phi(\phi)= \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\left(\frac{e^\phi}{1+e^\phi}\right)^{-(\alpha+1)}e^{-\frac{\beta(1+e^\phi)}{e^\phi}}\right)\frac{e^\phi}{(1+e^\phi)^2}$ Questions: Is the posterior distribution for the log-odds derived correctly? (Maybe not 100% mathematically correct but mostly I care about the solution (: ) Would it be correct to write the log-odds distribution like that? $p_\phi(\phi) \sim \mathrm{Gamma}\!\left(\frac{e^\phi}{e^\phi+1} \,\middle|\, \alpha, \beta\right)\times \frac{e^\phi}{(1+e^\phi)^2}$ Is there a way of simulating the log-odds posterior distribution out of the given posterior distribution without doing the analytics? Thank you already in advance!
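For the last question: yes — no analytics are needed, since one can simply draw $\theta$ samples from the posterior and transform each draw. A minimal numpy sketch, with hypothetical posterior parameters $\alpha = 10$, $\beta = 2$ (the density written in the question is the inverse-gamma form, so $\theta$ is drawn as the reciprocal of a gamma variate; the log-odds transform additionally assumes essentially all posterior mass lies in $(0,1)$, which holds for these illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 10.0, 2.0            # hypothetical posterior parameters

# If X ~ Gamma(shape=alpha, rate=beta), then theta = 1/X has the
# inverse-gamma density written in the question (numpy uses scale = 1/rate).
theta = 1.0 / rng.gamma(alpha, scale=1.0 / beta, size=200_000)

# The log-odds transform needs theta in (0, 1); for these parameters the
# posterior mass outside (0, 1) is negligible, but filter to be safe.
theta = theta[(theta > 0.0) & (theta < 1.0)]

phi = np.log(theta / (1.0 - theta))   # Monte Carlo draws from p_phi
print(phi.mean(), phi.std())
```

A histogram of `phi` then approximates the transformed posterior and can be compared against the analytic density derived above.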
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which is measured at 4.5V by my multimeter) but the voltage multiplier doesn't seem to work. I tried first making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler, for example, the voltage starts from around 6V and goes down by around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be OK on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part of a time-dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics; who can help? The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system.
The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: The output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output into a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler will output DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked it after that and the astable multivibrator works. I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh 1 billion tons.... something so "simple" turns out to be hard as duck. In Peskin's book on QFT the sum over zero point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that, experimentally, we always obtain an infinite spectrum. @AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that.
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics. I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array" @ACuriousMind What confuses me is the interpretation of Peskin of this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H".
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite, on account of two facts: first, the ground state energy is infinite; second, the energy difference is defined by subtracting the ground state energy from a higher level energy. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization. The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities; also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles. Overall it's an infinite constant added to the particle that we throw away, just as in QFT. @bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why the experiment didn't predict such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line. [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't, if I forgot to include the SP3 installer).

Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question in meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Except for being off-topic, it seems clear enough to understand, doesn't it?

Physics.stackexchange currently uses MathJax 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.

I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...

@0ßelö7 I don't really care for the functional-analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)

Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen to be a multiple power of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that, in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above.

If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is $$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...

@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavy. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.

@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer.
The problem in the title is to be proven, and while proving $\mathrm{coNP}\subset\mathrm{NP}$ is rather clear given the assumptions (see below), I fail to see a way to prove $\mathrm{NP}\subset\mathrm{coNP}$. My main idea is to prove that $L$ is also $\mathrm{NP}$-complete (or rather $\mathrm{NP}$-hard, as it is in $\mathrm{NP}$), but I don't see a way to prove this, since it is not sufficient that a subset of $\mathrm{NP}$ is reducible to $L$; thus I would be very thankful for any idea towards a proof.

Addendum 1: I am aware of "if $L\in NP\cap Co-NP$ is NP-Hard, then $NP=Co-NP$", but in this source only the obvious implication is proven.

Addendum 2 - my proof of the first subset relation: All languages $K\in\mathrm{coNP}$ are reducible to the $\mathrm{coNP}$-complete language $L$; thus, if $L\in\mathrm{NP}$, every $K$ is reducible to a language in $\mathrm{NP}$, and the implication $K\in\mathrm{coNP}\Rightarrow K\in\mathrm{NP}$, and thus $\mathrm{coNP}\subset\mathrm{NP}$, holds.
Contrary to DaftWullie's answer, it is possible to implement a CNOT gate in a photonic system with 100% efficiency. However, there are caveats to this - it depends on what's used as the qubits (or, as this is a photonic system, potentially qudits) in the system.

KLM: A photon as a qubit

The first thing that most people think of in terms of photonic qubits is polarisation. In this case, postselection (/heralding) is generally required. This was theoretically shown to be possible by Knill, Laflamme and Milburn (KLM) in 2001. Within a couple of years, the first probabilistic photonic CNOT gate was demonstrated by O'Brien et al. (arXiv version) in an equivalent scheme, as shown in figure 1.

Figure 1: Circuit diagram of a probabilistic 2-photon CNOT gate. Each photon (control and target) is encoded from polarisation to spatial modes. After postselecting on a single photon in $C_{\text{out}}$ and a single photon in $T_{\text{out}}$: when the control photon, $C$, is in spatial mode $C_0$, the identity operation is performed on the target photon, $T$, while when $C$ is in $C_1$, the NOT (X) operation is performed on $T$. The gate has a probability of success of 1/9 and uses beamsplitters. Image taken from Figure 1a of O'Brien et al.

One variation of this is to use a nonlinear phase shift to make a deterministic version of this gate. While the above may not sound overly great for the prospects of optical quantum computing, encoding a qubit as the polarisation/2 spatial modes of a photon is far from the only way to perform optical quantum computing.

Reck: Many modes make... many dimensions

One other such method was proposed before KLM by Reck et al. (shown in figure 2) and has since been improved upon by Clements et al. In this scheme, a single photon is encoded in some number, $d$, of spatial modes. This is equivalent to a $\log_2 d$-qubit system and can be used to implement any unitary.
For a 2-qubit system, this is equivalent to having 4 spatial modes labelled $\left|00\right>, \left|01\right>, \left|10\right>$ and $\left|11\right>$, and a CNOT operation is equivalent to swapping the bottom 2 modes $\left(\left|10\right> \text{ and } \left|11\right>\right)$.

Figure 2: Image of a 6-mode Reck-scheme chip, which can be used to implement a deterministic 'CNOT' gate. Uses phase shifters and beam splitters to build up a unitary evolution over the modes of the system. Image taken from Figure 1 of Carolan et al.

Of course, it's not quite that simple and, due to requiring an exponential number of modes, the Reck scheme isn't generally considered to be overly scalable. That leaves us with the (final) two options1: continuous-variable nonlinear optics and measurement-based quantum computing.

Continuous variable: Just keep squeezing

As detailed in my answer here, continuous-variable QC also offers a universal gate set which can be used to make arbitrary unitaries, in theory at least. Unfortunately, as more squeezing is still required, an experimental realisation of this is yet to occur.

And now for something completely different: Measurement-based

Another scheme that hasn't been experimentally achieved, yet shows potential, is measurement-based QC. Instead of performing CNOT gates during the unitary evolution that defines a circuit, the entangling operations occur as part of the state preparation of the system. As per Ewert and Loock (arXiv version), the current idea of doing this involves generating small clusters of entangled photons, then entangling these into larger clusters using fusion gates, as shown in figure 3.

Figure 3: Diagram of a 75%-efficient fusion gate. Inputting the state $\left|\Upsilon_1\right> = \frac{1}{\sqrt 2}\left(\left|20\right> + \left|02\right>\right)$ allows for the detection of higher-dimensional states. These can then be cascaded to detect larger and larger cluster states.
The probabilistic measurement is equivalent to an entangling operation, similar to a CNOT gate. Image taken from Figure 1 of Ewert and Loock.

1 There are a number of variations of the different schemes used, and work is constantly being done to improve upon them.
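The Reck-scheme observation above - that, with two qubits carried by one photon across four spatial modes, a CNOT is just a permutation of modes - is easy to check numerically. A minimal sketch (amplitudes only; no losses or particular hardware assumed):

```python
import numpy as np

# Two qubits carried by a single photon spread over four spatial modes,
# ordered |00>, |01>, |10>, |11>.  A CNOT is then the permutation that
# swaps the last two modes (|10> <-> |11>).
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# The permutation is unitary, so in this encoding it is deterministic.
assert np.allclose(cnot.conj().T @ cnot, np.eye(4))

state = np.array([0, 0, 1, 0], dtype=complex)   # photon in the |10> mode
flipped = cnot @ state                          # photon ends up in |11>
```

The exponential cost mentioned above is visible here: $n$ qubits need $2^n$ spatial modes, which is why the scheme is not considered scalable.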
Group Theory

New submissions for Tue, 15 Oct 19

[1] arXiv:1910.05638 [pdf, ps, other]
Title: Coset Posets of Infinite Groups
Comments: 12 pages
Subjects: Group Theory (math.GR)
We consider the coset poset associated with the families of proper subgroups, proper subgroups of finite index, and proper normal subgroups of finite index. We investigate under which conditions those coset posets have contractible geometric realizations.

[2] arXiv:1910.05718 [pdf, ps, other]
Title: Logarithmic bounds for the diameters of some Cayley graphs
Comments: A preliminary version
Subjects: Group Theory (math.GR); Number Theory (math.NT)
Let $\mathcal S \subset{\text{SL}(d,\mathbb Z)\ltimes \mathbb Z^d}$ or $\mathcal S \subset\text{SL}(d,\mathbb Z)\times\cdots\times \text{SL}(d,\mathbb Z)$ be a finite symmetric set. We show that if $\Lambda=\langle\mathcal S\rangle$ is Zariski-dense, then the diameter of the Cayley graph $\mathrm{Cay}(\Lambda/\Lambda(q),\pi_q(\mathcal S))$ is $O(\log q)$, where $q$ is an arbitrary positive integer, $\pi_q: \Lambda\rightarrow\Lambda/\Lambda(q)$ is the canonical congruence projection, and the implied constant depends only on $\mathcal S$.

[3] arXiv:1910.05805 [pdf, ps, other]
Title: On Pro-$2$ Identities of $2\times2$ Linear Groups
Comments: 40 pages
Subjects: Group Theory (math.GR)
Let $\hat{F}$ be a free pro-$p$ non-abelian group, and let $\Delta$ be a local commutative complete ring with a maximal ideal $I$ such that $\textrm{char}(\Delta/I)=p$. In [Zu], Zubkov showed that when $p\neq2$, the pro-$p$ congruence subgroup $GL_{2}^{1}(\Delta)=\ker(GL_{2}(\Delta)\overset{\Delta\to\Delta/I}{\longrightarrow}GL_{2}(\Delta/I))$ admits a pro-$p$ identity, i.e. there exists an element $1\neq w\in\hat{F}$ that vanishes under any continuous homomorphism $\hat{F}\to GL_{2}^{1}(\Delta)$. In this paper we investigate the case $p=2$.
The main result is that when $\textrm{char}(\Delta)=2$, the pro-$2$ group $GL_{2}^{1}(\Delta)$ admits a pro-$2$ identity. This result was obtained by the use of trace identities that originate in PI-theory.

[4] arXiv:1910.05822 [pdf, ps, other]
Title: Cheeger-Gromoll Splitting Theorem for groups
Comments: 20 pages, 2 figures. Comments are welcome
Subjects: Group Theory (math.GR); Geometric Topology (math.GT); Metric Geometry (math.MG)
We study a notion of curvature for finitely generated groups which plays the role of Ricci curvature for Riemannian manifolds. We prove an analog of the Cheeger-Gromoll splitting theorem. As a consequence, we give a geometric characterization of virtually abelian groups. We also explore the relation between this notion of curvature and the growth of groups.

[5] arXiv:1910.05855 [pdf, other]
Title: Stallings automata for free-times-abelian groups: intersections and index
Comments: 33 pages, 23 figures
Subjects: Group Theory (math.GR)
We extend the classical Stallings theory (describing subgroups of free groups as automata) to direct products of free and abelian groups: after introducing enriched automata (i.e., automata with extra abelian labels), we obtain an explicit bijection between subgroups and a certain type of such enriched automata, which - as happens in the free group - is computable in the finitely generated case. This approach provides a neat geometric description of (even non-finitely-generated) intersections of finitely generated subgroups within this non-Howson family. In particular, we give a geometric solution to the subgroup intersection problem and the finite index problem, providing recursive bases and transversals respectively.
Cross-lists for Tue, 15 Oct 19

[6] arXiv:1910.05468 (cross-list from math.CO) [pdf, ps, other]
Title: On $A_1^2$ restrictions of Weyl arrangements
Comments: 28 pages
Subjects: Combinatorics (math.CO); Group Theory (math.GR)
Let $\mathcal{A}$ be a Weyl arrangement in an $\ell$-dimensional Euclidean space. The freeness of restrictions of $\mathcal{A}$ was first settled by a case-by-case method by Orlik-Terao (1993), and later by a uniform argument by Douglass (1999). Prior to this, Orlik-Solomon (1983) had completely determined the exponents of these arrangements by exhaustion. A classical result due to Orlik-Solomon-Terao (1986) asserts that the exponents of any $A_1$ restriction, i.e., the restriction of $\mathcal{A}$ to a hyperplane, are given by $\{m_1,\ldots, m_{\ell-1}\}$, where $\exp(\mathcal{A})=\{m_1,\ldots, m_{\ell}\}$ with $m_1 \le \cdots\le m_{\ell}$. As a next step after Orlik-Solomon-Terao towards understanding the exponents of the restrictions, we will investigate the $A_1^2$ restrictions, i.e., the restrictions of $\mathcal{A}$ to subspaces of the type $A_1^2$. In this paper, we give a combinatorial description of the exponents of the $A_1^2$ restrictions and describe bases for the modules of derivations in terms of the classical notion of related roots by Kostant (1955).

[7] arXiv:1910.05690 (cross-list from math.RT) [pdf, ps, other]
Title: Periodicity in the cohomology of finite general linear groups via q-divided powers
Comments: 17 pages
Subjects: Representation Theory (math.RT); Group Theory (math.GR)
We show that $\bigoplus_{n \ge 0} {\mathrm H}^t({\bf GL}_n({\bf F}_q), {\bf F}_\ell)$ canonically admits the structure of a module over the $q$-divided power algebra (assuming $q$ is invertible in ${\bf F}_{\ell}$), and that, as such, it is free and (for $q \neq 2$) generated in degrees $\le t$. As a corollary, we show that the cohomology of a finitely generated ${\bf VI}$-module in non-describing characteristic is eventually periodic in $n$.
We apply this to obtain a new result on the cohomology of unipotent Specht modules.

[8] arXiv:1910.05764 (cross-list from math.RA) [pdf, ps, other]
Title: Almost PI algebras are PI
Comments: 13 pages
Subjects: Rings and Algebras (math.RA); Group Theory (math.GR)
We define the notion of an almost polynomial identity of an associative algebra $R$, and show that its existence implies the existence of an actual polynomial identity of $R$. A similar result is also obtained for Lie algebras and Jordan algebras. We also prove related quantitative results for simple and semisimple algebras.

[9] arXiv:1910.05955 (cross-list from math.AG) [pdf, ps, other]
Title: K3 surfaces with maximal finite automorphism groups containing $M_{20}$
Comments: 15 pages
Subjects: Algebraic Geometry (math.AG); Group Theory (math.GR)
It was shown by Mukai that the maximum order of a finite group acting faithfully and symplectically on a K3 surface is $960$ and that the group is isomorphic to the group $M_{20}$. Then Kondo showed that the maximum order of a finite group acting faithfully on a K3 surface is $3\,840$ and this group contains the Mathieu group $M_{20}$ with index four. Kondo also showed that there is a unique K3 surface on which this group acts faithfully, which is the Kummer surface Km$(E_i\times E_i)$. In this paper we describe two more K3 surfaces admitting a big finite automorphism group of order $1\,920$; both groups contain $M_{20}$ as a subgroup of index 2. We show moreover that these two groups and the two K3 surfaces are unique. This result was shown independently by S. Brandhorst and K. Hashimoto in a forthcoming paper, with the aim of classifying all the finite groups acting faithfully on K3 surfaces with maximal symplectic part.
[10] arXiv:1910.05987 (cross-list from math.CO) [pdf, ps, other]
Title: Distance formulas in Bruhat-Tits building of $\mathrm{SL}_d(\mathbb{Q}_p)$
Authors: Dominik Lachman
Comments: 22 pages
Subjects: Combinatorics (math.CO); Group Theory (math.GR)
We study the distance on the Bruhat-Tits building of the group $\mathrm{SL}_d(\mathbb{Q}_p)$ (and its other combinatorial properties). Coding its vertices by certain matrix representatives, we introduce a way to build formulas with combinatorial meanings. In Theorem 1, we give an explicit formula for the graph distance $\delta(\alpha,\beta)$ of two vertices $\alpha$ and $\beta$ (without having to specify their common apartment). Our main result, Theorem 2, then extends the distance formula to a formula for the smallest total distance of a vertex from a given finite set of vertices. In the appendix we consider the case of $\mathrm{SL}_2(\mathbb{Q}_p)$ and give a formula for the number of edges shared by two given apartments.

Replacements for Tue, 15 Oct 19

[11] arXiv:1610.06728 (replaced) [pdf, ps, other]
[12] arXiv:1902.03201 (replaced) [pdf, ps, other]
Title: On dimension of product of groups
Authors: Alexander Dranishnikov
Subjects: Group Theory (math.GR); Geometric Topology (math.GT)
[13] arXiv:1904.13388 (replaced) [pdf, ps, other]
Title: Principal and Doubly Homogeneous Quandles
Authors: Marco Bonatto
Comments: This is a post-print version which contains an enhanced proof of Theorem 4.8
Subjects: Group Theory (math.GR)
[14] arXiv:1901.01737 (replaced) [pdf, ps, other]
[15] arXiv:1901.07030 (replaced) [pdf, ps, other]
Title: Variations on the theme of Zariski's Cancellation Problem
Authors: Vladimir L. Popov
Comments: 17 pages. Presentation in the former Section 4 (Example 2) amended. The construction in the former Section 9 generalized
Subjects: Algebraic Geometry (math.AG); Group Theory (math.GR)
Defining parameters

Level: \( N \) = \( 63 = 3^{2} \cdot 7 \)
Weight: \( k \) = \( 2 \)
Nonzero newspaces: \( 10 \)
Newforms: \( 17 \)
Sturm bound: \( 576 \)
Trace bound: \( 4 \)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(63))\).

                    Total   New   Old
Modular forms         192   131    61
Cusp forms             97    87    10
Eisenstein series      95    44    51

Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(63))\)

We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
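The Sturm bound quoted above can be recomputed from the standard formula \( \frac{k}{12}\,[\mathrm{SL}_2(\mathbb{Z}):\Gamma_1(N)] \), with \( [\mathrm{SL}_2(\mathbb{Z}):\Gamma_1(N)] = N^2\prod_{p\mid N}(1-p^{-2}) \) for \( N > 2 \). A short sketch, assuming that index formula:

```python
# Sturm bound for M_k(Gamma_1(N)), N > 2, assuming the index formula
# [SL2(Z) : Gamma_1(N)] = N^2 * prod_{p | N} (1 - 1/p^2).
def sturm_bound(N, k):
    index = N * N
    p, n = 2, N
    while n > 1:                # factor N by trial division
        if n % p == 0:
            index = index * (p * p - 1) // (p * p)
            while n % p == 0:
                n //= p
        p += 1
    return k * index // 12

print(sturm_bound(63, 2))  # -> 576, matching the table
```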
Below is a more general approach. Suppose that we have two weak acids, $\ce{HA}$ and $\ce{HB}$. The initial concentrations are $C^0_\ce{HA}$ and $C^0_\ce{HB}$, and their acid constants are ${K_\ce{a}}_\ce{(HA)}$ and ${K_\ce{a}}_\ce{(HB)}$. Suppose also that the volumes, $V_\ce{HA}$ and $V_\ce{HB}$, are additive. So we have:

Reactions

$$\ce{HA + H2O <=> H3O+ + A-}\qquad {K_\ce{a}}_\left(\ce{HA}\right)=\frac{\ce{[H3O+][A-]}}{\ce{[HA]}}\tag{1}\label{eq:KAcidHA}$$ $$\ce{HB + H2O <=> H3O+ + B-}\qquad {K_\ce{a}}_\left(\ce{HB}\right)=\frac{\ce{[H3O+][B-]}}{\ce{[HB]}}\tag{2}\label{eq:KAcidHB}$$ $$\ce{2 H2O <=> H3O+ + OH-}\qquad K_\ce{w}=\ce{[H3O+][OH-]}\tag{3}\label{eq:KWater}$$

Mass balance

$$C_\ce{HA} = \frac{C^0_\ce{HA}V_\ce{HA}}{V_\ce{HA} + V_\ce{HB}}=\ce{[HA] + [A-]}\tag{4}\label{eq:MassBalanceHA}$$ $$C_\ce{HB} = \frac{C^0_\ce{HB}V_\ce{HB}}{V_\ce{HA} + V_\ce{HB}}=\ce{[HB] + [B-]}\tag{5}\label{eq:MassBalanceHB}$$

Charge balance

$$\ce{[H3O+] = [OH-] + [A-] + [B-]}\tag{6}\label{eq:ChargeBalance}$$

Substituting equations ($\ref{eq:KAcidHA}$)–($\ref{eq:MassBalanceHB}$) into ($\ref{eq:ChargeBalance}$), we have: $$\ce{[H3O+]} = \frac{K_\ce{w}}{\ce{[H3O+]}} + \frac{C_\ce{HA}{K_\ce{a}}_\left(\ce{HA}\right)}{\ce{[H3O+]} + {K_\ce{a}}_\left(\ce{HA}\right)} + \frac{C_\ce{HB}{K_\ce{a}}_\left(\ce{HB}\right)}{\ce{[H3O+]} + {K_\ce{a}}_\left(\ce{HB}\right)}\tag{7}\label{eq:GeneralEquation}$$ or, as a polynomial, \begin{align}\begin{split} &\ce{[H3O+]}^4\\+&\ce{[H3O+]}^3\left({K_\ce{a}}_\left(\ce{HA}\right) + {K_\ce{a}}_\left(\ce{HB}\right)\right)\\+&\ce{[H3O+]}^2\left[{K_\ce{a}}_\ce{(HA)}{{K_\ce{a}}_\ce{(HB)}} -\left(C_\ce{HA}{K_\ce{a}}_\ce{(HA)}+C_\ce{HB}{{K_\ce{a}}_\ce{(HB)}}\right)-K_\ce{w} \right]\\-&\ce{[H3O+]}\big[\big(C_\ce{HA}+C_\ce{HB}\big){K_\ce{a}}_\ce{(HA)}{{K_\ce{a}}_\ce{(HB)}} + K_\ce{w}\left({K_\ce{a}}_\ce{(HA)}+{{K_\ce{a}}_\ce{(HB)}}\right)\big]\\-&{K_\ce{a}}_\left(\ce{HA}\right){K_\ce{a}}_\left(\ce{HB}\right)K_\ce{w}\\=&\ 0\end{split}\tag{8}\label{eq:GeneralPol}\end{align} This single equation
will exactly solve any equilibrium problem involving the mixture of any two monoprotic acids, at any concentrations (as long as they're not much higher than about $\pu{1 mol L-1}$) and any volumes. Depending on the $K_\ce{a}$ values, we can also obtain simpler versions. Equation ($\ref{eq:GeneralPol}$) can be simplified by noting that the constant term, ${K_\ce{a}}_\left(\ce{HA}\right){K_\ce{a}}_\left(\ce{HB}\right)K_\ce{w}$, is negligible: \begin{align}\begin{split} &\ce{[H3O+]}^3\\+&\ce{[H3O+]}^2\left({K_\ce{a}}_\left(\ce{HA}\right) + {K_\ce{a}}_\left(\ce{HB}\right)\right)\\+&\ce{[H3O+]}\left[{K_\ce{a}}_\ce{(HA)}{{K_\ce{a}}_\ce{(HB)}} -\left(C_\ce{HA}{K_\ce{a}}_\ce{(HA)}+C_\ce{HB}{{K_\ce{a}}_\ce{(HB)}}\right)-K_\ce{w} \right]\\-&\big[\big(C_\ce{HA}+C_\ce{HB}\big){K_\ce{a}}_\ce{(HA)}{{K_\ce{a}}_\ce{(HB)}} + K_\ce{w}\left({K_\ce{a}}_\ce{(HA)}+{{K_\ce{a}}_\ce{(HB)}}\right)\big]\\=&\ 0\end{split}\tag{9}\label{eq:GeneralPolSimp1}\end{align} Equation ($\ref{eq:GeneralPolSimp1}$) can be simplified further by noting that the terms containing ${K_\ce{a}}_\left(\ce{HA}\right){K_\ce{a}}_\left(\ce{HB}\right)$ are negligible and by disregarding the autoionization of water. \begin{align}\ce{[H3O+]}^2+\ce{[H3O+]}\left({K_\ce{a}}_\ce{(HA)} + {K_\ce{a}}_\ce{(HB)}\right)-\left(C_\ce{HA}{K_\ce{a}}_\ce{(HA)}+C_\ce{HB}{{K_\ce{a}}_\ce{(HB)}}\right)= 0\tag{10}\label{eq:GeneralPolSimp2}\end{align} Equation ($\ref{eq:GeneralPolSimp2}$) can be solved as usual.
$$\ce{[H3O+]}=\frac{-\left({K_\ce{a}}_\ce{(HA)} + {K_\ce{a}}_\ce{(HB)}\right)+\sqrt{\left({{K_\ce{a}}_\ce{(HA)} + {K_\ce{a}}_\ce{(HB)}}\right)^2+4\left(C_\ce{HA}{K_\ce{a}}_\ce{(HA)}+C_\ce{HB}{{K_\ce{a}}_\ce{(HB)}}\right)}}{2}$$ or using the initial concentrations $$\ce{[H3O+]}=\frac{-\left({K_\ce{a}}_\ce{(HA)} + {K_\ce{a}}_\ce{(HB)}\right)+\sqrt{\left({{K_\ce{a}}_\ce{(HA)} + {K_\ce{a}}_\ce{(HB)}}\right)^2+4\left(\displaystyle\frac{C^0_\ce{HA}V_\ce{HA}{K_\ce{a}}_\ce{(HA)}}{V_\ce{HA} + V_\ce{HB}}+\frac{C^0_\ce{HB}V_\ce{HB}{K_\ce{a}}_\ce{(HB)}}{V_\ce{HA} + V_\ce{HB}}\right)}}{2}$$ Replacing $C^0_\ce{HA}=\pu{0.01 mol L-1}$, $C^0_\ce{HB}=\pu{0.01 mol L-1}$, $V_\ce{HA}=\pu{0.050 L}$ and $V_\ce{HB}=\pu{0.050 L}$, and using $\text{p}K_\ce{a}=3.77$ for formic acid and $\text{p}K_\ce{a}=4.756$ for acetic acid, we have $$\ce{pH}=3.06$$
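The final number is easy to check numerically; a minimal sketch of the last step, using only the simplified quadratic (10) with the same constants as above:

```python
import math

# Mixing two weak monoprotic acids; solve the simplified quadratic (10):
#   x^2 + x*(Ka1 + Ka2) - (C1*Ka1 + C2*Ka2) = 0,   x = [H3O+]
Ka1, Ka2 = 10**-3.77, 10**-4.756        # formic and acetic acids
C0, V = 0.01, 0.050                     # initial conc (mol/L), volume (L)
C1 = C2 = C0 * V / (V + V)              # dilution on mixing -> 0.005 mol/L

S = Ka1 + Ka2
Q = C1 * Ka1 + C2 * Ka2
h = (-S + math.sqrt(S * S + 4 * Q)) / 2  # positive root
pH = -math.log10(h)
print(round(pH, 2))  # -> 3.06
```

(Solving the full quartic (8) instead shifts the answer by less than 0.01 pH units here, since the autoionization terms are negligible at this acidity.)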
Preliminaries

Consider $U = U(V,T, p)$. However, assuming that it is possible to write an equation of state of the form $p = f(V,T)$, I don't have to explicitly address the $p$ dependence of $U$, and I can write the following differential: $$\mathrm{d}U = \underbrace{\left ( \frac{\partial U}{\partial V} \right)_T}_{\pi_T} \mathrm{d}V + \underbrace{\left ( \frac{\partial U}{\partial T} \right)_V}_{C_v} \mathrm{d}T \tag{1}$$ so one writes $$ \mathrm{d}U = \pi_T \mathrm{d}V + C_v\mathrm{d}T \tag{2}$$ Also, for an ideal gas, the internal pressure vanishes: $\pi_T = 0$. Additionally, for changes in internal energy at constant pressure, $$ \left (\frac{\partial U}{\partial T}\right)_p = \pi_T \left (\frac{\partial V}{\partial T}\right)_p +C_v \tag{3}$$ Here, I take the opportunity to define two new quantities, namely $\alpha$ and $\kappa_T$ (the expansion coefficient and the isothermal compressibility, respectively): $$\alpha = \frac{1}{V}\left (\frac{\partial V}{\partial T}\right)_p$$ and $$\kappa_T = \frac{-1}{V}\left (\frac{\partial V}{\partial p}\right)_T$$ Rewriting (3) using $\alpha$: $$ \left (\frac{\partial U}{\partial T}\right)_p = \alpha \pi_T V +C_v \tag{4} $$ Now, although the constant-volume heat capacity is defined as $C_v = \left (\frac{\partial U}{\partial T} \right)_v$, if we let $\pi_T = 0$ in equation (4), we get $C_v = \left (\frac{\partial U}{\partial T} \right)_p$. This holds true for a perfect gas, and one can quickly obtain the desired relation at this stage.
Derivation: Difference between constant volume and constant pressure heat capacities for a perfect gas Consider, $$C_p - C_v = \overbrace{\left (\frac{\partial H}{\partial T} \right)_p}^{\text{definition of} \ C_p} - \overbrace{\left (\frac{\partial U}{\partial T} \right)_v}^{\text{definition of} \ C_v} \tag{5}$$ Introducing, $H = U+ pV = U+nRT$, and exploiting $C_v = \left (\frac{\partial U}{\partial T} \right)_v = \left (\frac{\partial U}{\partial T} \right)_p$, equation (5) yields: $$ C_p - C_v = \left (\frac{\partial (U+nRT)}{\partial T} \right )_p - \left (\frac{\partial U}{\partial T} \right )_p = nR$$ Derivation: Difference between constant volume and constant pressure heat capacities (general case) However, as my contribution to this discussion I would like to derive a relation between heat capacities that is universally true for any substance, not just a perfect gas. So let's return to equation (5): $$C_p - C_v = \overbrace{\left (\frac{\partial H}{\partial T} \right)_p}^{\text{definition of} \ C_p} - \overbrace{\left (\frac{\partial U}{\partial T} \right)_v}^{\text{definition of} \ C_v} \tag{5}$$ Here, we substitute $H = U + pV $ and obtain, $$ C_p -C_v = \overbrace{\left( \frac{\partial U}{\partial T} \right)_p}^{\text{evaluated in} \ (4)}+ \left( \frac{\partial (pV)}{\partial T} \right)_p - C_v \tag{6}$$ The first partial derivative was already taken care of in equation (4). 
For the second one, since the derivative is to be evaluated at constant pressure, we can do the following $$\left( \frac{\partial (pV)}{\partial T} \right)_p = p \overbrace{\left( \frac{\partial V}{\partial T} \right)_p}^{\alpha V}$$ Putting all of this together, one obtains $$C_p -C_v = \alpha \pi_T V + \alpha pV = \alpha(p+ \pi_T)V \tag{7}$$ At this stage, I will make use of the following relation (derived in additional comments) $$\pi_T = T \left (\frac{\partial p}{\partial T}\right )_v - p$$ After substituting this into (7) we get: $$ C_p -C_v = \alpha T V \left( \frac{\partial p}{\partial T} \right)_V \tag{8}$$ I wish to transform the last remaining partial derivative, and to do so I consider $ V = V(T,p) $ which yields the following differential $$ \mathrm{d}V = \left( \frac{\partial V}{\partial T} \right)_p \mathrm{d}T + \left( \frac{\partial V}{\partial p} \right)_T \mathrm{d}p $$ At constant volume, $\mathrm{d}V = 0$ so one gets, $$\left( \frac{\partial V}{\partial T} \right)_p \mathrm{d}T = - \left( \frac{\partial V}{\partial p} \right)_T \mathrm{d}p$$ $$\overbrace{ \left( \frac{\partial V}{\partial T} \right)_p}^{\alpha V} = \overbrace{-\left( \frac{\partial V}{\partial p} \right)_T}^{\kappa_T V} \left( \frac{\partial p}{\partial T} \right)_V$$ Note: One can avoid all of this work, and simply invoke the Euler Chain Rule Rearranging, $$\left( \frac{\partial p}{\partial T} \right)_V = \frac{\alpha}{\kappa_T}$$ We can finally substitute this into (8) to get $$C_p -C_v = \frac{\alpha^2 TV}{\kappa_T} \tag{9}$$ This is true for any substance, not just a perfect gas. Now, for a perfect gas $ pV = nRT$ holds true, and thus $\alpha = \frac{1}{T}$ and $\kappa_T = \frac{1}{p}$. 
Making these substitutions into (9), we get our desired result $$C_p -C_v = nR $$ Additional Comments This might seem like an unnecessarily complex, not to mention convoluted, way to get to the desired result (especially in light of a much simpler method presented by @orthocresol); however, I think that deriving the expression for a general case first, and then reducing it to the special case, is illuminating. Moreover, in spirit and approach, it is not that far from what @orthocresol did. Physical Significance of certain terms/quantities $\pi_T$ is called the internal pressure (it has the dimensions of pressure) and is a consequence of the interactions between molecules. For an ideal gas it is necessarily zero. $\alpha$, i.e. the expansion coefficient, is the fractional change in volume that accompanies a rise in temperature. A large value of $\alpha$ implies that the sample responds very strongly to changes in temperature. Similarly, $\kappa_T$ is a measure of the response to a change in pressure. The negative sign ensures that $\kappa_T$ is a positive quantity, because a pressure increase causes a decrease in volume ($\mathrm{d}V$ is negative). Since equation (9) holds true for any substance, for solids and liquids one might be tempted to say $C_p \approxeq C_V$ because $\alpha$ is small for solids and liquids. However, one must be careful, because $\kappa_T$ can be small as well, which makes the fraction $\frac{\alpha^2}{\kappa_T}$ large. In other words, even though only a little work has to be done to push back the atmosphere when a solid expands, a great deal of work will go into pulling the atoms apart. Supplementary Derivation: For a system where $N$ doesn't change, the fundamental equation of thermodynamics is: $$\mathrm{d}U = T\mathrm{d}S -p\mathrm{d}V$$ This suggests that $U = U(S,V)$.
Thus, one can write the following differential and, after comparing to the one above, can equate $T$ and $-p$ (as indicated by the annotations) with the partial derivatives given below: $$\mathrm{d}U = \underbrace{\left ( \frac{\partial U}{\partial S}\right)_V}_{T} \mathrm{d}S + \underbrace{\left ( \frac{\partial U}{\partial V}\right)_S}_{-p}\mathrm{d}V$$ Moreover, dividing both sides of the fundamental equation by $\mathrm{d}V$ (yeah, I know) and imposing the constraint of constant temperature, we can manipulate it into the following form: $$\overbrace{\left( \frac{\partial U}{\partial V} \right)_T}^{\pi_T} = \overbrace{\left ( \frac{\partial U}{\partial S}\right)_V}^{T}\left ( \frac{\partial S}{\partial V}\right)_T + \overbrace{\left ( \frac{\partial U}{\partial V}\right)_S}^{-p}$$ Thus, we have $$ \pi_T = T\left ( \frac{\partial S}{\partial V}\right)_T - p$$ Invoking the Maxwell relation $$ \left ( \frac{\partial S}{\partial V}\right)_T = \left ( \frac{\partial p}{\partial T}\right)_V$$ one finally gets $$\pi_T = T \left (\frac{\partial p}{\partial T}\right )_v - p$$
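Both of the key identities used above — $\pi_T = T(\partial p/\partial T)_V - p$ vanishing for a perfect gas, and equation (9) reducing to $C_p - C_v = nR$ — can be checked symbolically. A quick SymPy sketch:

```python
import sympy as sp

n, R, T, p = sp.symbols('n R T p', positive=True)
Vsym = sp.symbols('V', positive=True)

# Perfect gas written both ways: V(T, p) and p(T, V)
V = n * R * T / p
p_of_TV = n * R * T / Vsym

# Internal pressure: pi_T = T*(dp/dT)_V - p  -> 0 for a perfect gas
pi_T = sp.simplify(T * sp.diff(p_of_TV, T) - p_of_TV)

# Expansion coefficient and isothermal compressibility
alpha = sp.simplify(sp.diff(V, T) / V)       # expect 1/T
kappa_T = sp.simplify(-sp.diff(V, p) / V)    # expect 1/p

# Equation (9): Cp - Cv = alpha^2 * T * V / kappa_T  -> n*R
cp_minus_cv = sp.simplify(alpha**2 * T * V / kappa_T)
print(pi_T, alpha, kappa_T, cp_minus_cv)
```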
I am having trouble making a full analogy between the Lorentz algebra representations in Quantum Field Theory (QFT) and the SU(2) representations in Quantum Mechanics (QM). To make my point, I will write a few things that I think are true in the case of QM. We first start by looking at the rotation matrices in Classical Mechanics, represented by matrices $R \in SO(3)$. Then, we associate unitary matrices $D(R)$ with $R$, and these matrices form the group $SU(2)$. Now, we look at the algebra of $SU(2)$ to find the fundamental commutation relations among the generators of $D(R)$, namely $$[J_i,J_j] = i\epsilon_{ijk}J_k$$ Then we look for different representations of these generators, characterized by different angular momenta (which define the dimension of the vector space on which the generators act). The representation that we use then also gives an explicit expression for our unitary matrices $D(R)$, via $$D(R) = \exp\left(-\frac{i\,\vec{J}\cdot\hat{n}\,\phi}{\hbar}\right),$$ where $\phi$ is the rotation angle about the axis $\hat{n}$. Also, I can define vectors and tensors by this unitary matrix $D(R)$. For instance, a vector $V^i$ transforms as $$D(R)^{-1} V^i D(R) = R_{\:j}^i V^j.$$ Now, I want to similarly understand QFT's case with the Lorentz group. (I am currently following the QFT text by Srednicki.) I start with Lorentz matrices $\Lambda$, and associate them with unitary operators $U(\Lambda)$. I have a similar definition of a 4-vector in QFT as in QM: $$U(\Lambda)^{-1} V^\mu U(\Lambda) = \Lambda_{\:\nu}^\mu V^\nu.$$ I can also define the generators of $U(\Lambda)$, $M^{\mu\nu}$, and derive their fundamental commutation relations, $$[M^{\mu\nu},M^{\rho\sigma}]=\cdots.$$ Now, making a complete analogy with QM, I expect to find representations of $M^{\mu\nu}$ and the representation of $U(\Lambda)$ by exponentiating $M^{\mu\nu}$. But instead, we proceed by looking for representations of $\Lambda$, rather than of $U(\Lambda)$ as in QM.
For instance, for the left Weyl-spinor representation, I find a representation $L(\Lambda)$: $$U(\Lambda)^{-1} \psi_a(x) U(\Lambda) = L_a^{\:b}(\Lambda) \psi_b(\Lambda^{-1} x).$$ Now, I have a generator $S_L$ (which, unlike in QM, need not be Hermitian), which gives $L(\Lambda)$ when exponentiated (rather than $U(\Lambda)$, unlike in QM). I do not get an explicit expression for $U(\Lambda)$ (unlike in QM), so I do not know what to think of it or of its generators $M^{\mu\nu}$. For instance, I get expressions that involve both $M^{\mu\nu}$ and $S_L^{\mu\nu}$ (whereas in QM, since I looked for a representation of $D(R)$ rather than of $R$, the quantities analogous to $M^{\mu\nu}$ and $S_L^{\mu\nu}$ were the same thing). I do know that there is no finite-dimensional unitary representation of the Lorentz algebra, so I think that must be the missing piece in my understanding. I would like to make a complete analogy with QM; could anyone please be of help? Thank you.
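The non-unitarity can be made concrete in the 2-dimensional Weyl case: rotation generators exponentiate to unitary matrices, but boost generators exponentiate to Hermitian, non-unitary ones. A small numerical sketch, taking the generators along $z$ with $\hbar = 1$ (signs and conventions here are illustrative, not any particular textbook's):

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Rotation about z by angle theta: exp(-i theta sigma_z / 2)
theta = 0.3
rot = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_z

# Boost along z with rapidity eta: exp(eta sigma_z / 2)
eta = 0.7
boost = np.cosh(eta / 2) * I2 + np.sinh(eta / 2) * sigma_z

unitary_rot = np.allclose(rot @ rot.conj().T, I2)        # True
unitary_boost = np.allclose(boost @ boost.conj().T, I2)  # False
```

The boost matrix is $\mathrm{diag}(e^{\eta/2}, e^{-\eta/2})$, visibly not unitary, which is why $L(\Lambda)$ and the unitary $U(\Lambda)$ cannot coincide in any finite-dimensional representation.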
Let $g_1, \ldots, g_M$ be the basic gates that you are allowed to use. For these purposes, $\operatorname{CNOT}_{12}$ and $\operatorname{CNOT}_{13}$ etc. are treated as separate gates. So $M$ depends polynomially on $n$, the number of qubits. The precise dependence involves details of the sorts of gates you use and how $k$-local they are. For example, if there are $x$ single-qubit gates and $y$ two-qubit gates that don't depend on order, like $CZ$, then $M = xn+\binom{n}{2}y$. A circuit is then a product of those generators in some order. But there are multiple circuits that do nothing, like $\operatorname{CNOT}_{12} \operatorname{CNOT}_{12} = \mathrm{Id}$. Those give relations on the group. That is, it is a group presentation $\langle g_1, \ldots, g_M \mid R_1, \cdots \rangle$ with many relations that we do not know. The problem we wish to solve is: given a word in this group, what is the shortest word that represents the same element? For general group presentations, this is hopeless. The sort of group presentation where this problem is accessible is called automatic. But we can consider a simpler problem. If we throw out some of the $g_i$, then the words from before take the form $w_1 g_{i_1} w_2 g_{i_2} \cdots w_k$, where each of the $w_i$ is a word only in the remaining letters. If we manage to make the $w_i$ shorter using the relations that don't involve the thrown-out $g_i$, then we will have made the entire circuit shorter. This is akin to the optimization of the CNOTs on their own made in the other answer. For example, if there are three generators and the word is $aababbacbbaba$, but we don't want to deal with $c$, we will instead shorten $w_1=aababba$ and $w_2=bbaba$ to $\hat{w}_1$ and $\hat{w}_2$. We then put them back together as $\hat{w}_1 c \hat{w}_2$, and that is a shortening of the original word.
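To make the splitting step concrete, here is a small Python sketch of my own. It assumes, purely for illustration, that every letter we keep is an involution, so the only relation applied inside each segment is $g^2 = 1$:

```python
def reduce_involutions(word):
    """Shorten a word using only the relations g*g = identity."""
    out = []
    for g in word:
        if out and out[-1] == g:
            out.pop()          # cancel an adjacent g g pair
        else:
            out.append(g)
    return out

def shorten_around(word, frozen):
    """Split the word at the 'frozen' letters we don't want to touch,
    shorten each segment independently, then reassemble in order."""
    result, segment = [], []
    for g in word:
        if g in frozen:
            result += reduce_involutions(segment) + [g]
            segment = []
        else:
            segment.append(g)
    return result + reduce_involutions(segment)

print("".join(shorten_around(list("aababbacbbaba"), {"c"})))
```

On the example word, with $a$ and $b$ treated as involutions, the two segments collapse to $\hat{w}_1 = b$ and $\hat{w}_2 = aba$, giving $bcaba$.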
So WLOG (without loss of generality), let's suppose we are already in that situation: $\langle g_1, \ldots, g_M \mid R_1, \cdots \rangle$, where we now use all the gates specified. Again, this is probably not an automatic group. But what if we throw out some of the relations? Then we will have another group with a quotient map down to the one we really want. For example, the group $\langle g_1, g_2 \mid - \rangle$ with no relations is a free group, but if you then impose $g_1^2=\mathrm{id}$ as a relation, you get the free product $\mathbb{Z}_2 \star \mathbb{Z}$, and there is a quotient map from the former to the latter, reducing the number of $g_1$'s in each segment modulo $2$. The relations we throw out will be chosen so that the group upstairs (the source of the quotient map) is automatic by design. If we only use the relations that remain and shorten the word, it will still be a shorter word for the quotient group (the target of the quotient map). It just won't be optimal there, but its length will be $\leq$ the length it started with. That was the general idea; how can we turn it into a specific algorithm? How do we choose the $g_i$ and the relations to throw out in order to get an automatic group? This is where knowledge of the kinds of elementary gates we typically use comes in. There are a lot of involutions, so keep only those. Pay careful attention to the fact that these are only the elementary involutions: if your hardware has a hard time swapping qubits that are vastly separated on your chip, this means writing the word in only the swaps you can do easily, and reducing that word to be as short as possible. For example, suppose you have the IBM configuration. Then $s_{01},s_{02},s_{12},s_{23},s_{24},s_{34}$ are the allowed gates. If you wish to do a general permutation, decompose it into the allowed $s_{ij}$ factors. That gives a word in the group $\langle s_{01},s_{02},s_{12},s_{23},s_{24},s_{34} \mid R_1, \cdots \rangle$ that we wish to shorten.
Note that these don't have to be the standard involutions. You can throw in $R(\theta) X R(\theta)^{-1}$ in addition to $X$, for example. Think of the Gottesman-Knill theorem, but in an abstract manner that makes it easier to generalize, such as using the property that under short exact sequences, if you have finite complete rewriting systems for the two sides, then you get one for the middle group. That comment is unnecessary for the rest of the answer, but it shows how you can build up bigger, more general examples from the ones in this answer. The relations that are kept are only those of the form $(g_i g_j)^{m_{ij}} = 1$. This gives a Coxeter group, and it is automatic. In fact, we don't even have to start from scratch to code up the algorithm for this automatic structure. It is already implemented in Sage (Python-based) as general-purpose functionality. All you have to do is specify the $m_{ij}$, and the remaining implementation is already done; you might do some speedups on top of that. $m_{ij}$ is really easy to compute because of the locality properties of the gates. If the gates are at most $k$-local, then the computation of $m_{ij}$ can be done on a Hilbert space of dimension at most $2^{2k-1}$, because if the supports of the two gates don't overlap at all, you know immediately that $m_{ij}=2$ ($m_{ij}=2$ is exactly the statement that $g_i$ and $g_j$ commute). You also only have to compute fewer than half of the entries: the matrix $m_{ij}$ is symmetric and has $1$'s on the diagonal ($m_{ii}=1$ since $g_i g_i = 1$). Also, most of the entries are related by renaming the involved qubits, so if you know the order of $(\operatorname{CNOT}_{12} H_1)$, you know the order of $(\operatorname{CNOT}_{37} H_3)$ without doing the computation over again. That took care of all relations that involve at most two distinct gates (proof: exercise). The relations that involved $3$ or more gates were all thrown out. We now put them back in: once the Coxeter reduction is done, one can perform Dehn's greedy algorithm using these extra relations.
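As a concrete illustration of computing $m_{ij}$, here is a sketch of my own (not from the answer above) that finds the order of a product of two gate unitaries by repeated multiplication. It assumes $(g_i g_j)^{m_{ij}} = 1$ holds exactly as matrices, i.e. without a leftover global phase:

```python
import numpy as np

def product_order(A, B, max_order=64, tol=1e-9):
    """Smallest m with (A B)^m = identity, or None if not found."""
    P = A @ B
    M = np.eye(len(P), dtype=complex)
    for m in range(1, max_order + 1):
        M = M @ P
        if np.allclose(M, np.eye(len(P)), atol=tol):
            return m
    return None

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

print(product_order(H, X))  # 8: HX has eigenvalues exp(+-i pi/4)
print(product_order(X, Z))  # 4: X and Z anticommute, so (XZ)^2 = -1
```

For gates acting on disjoint qubits no computation is needed at all, since they commute and $m_{ij}=2$, which is the locality shortcut mentioned above.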
If there was a change, we knock the word back up to run through the Coxeter group again. This repeats until there are no changes. Each time, the word either gets shorter or stays the same length, and we are only using algorithms with linear or quadratic behaviour. This is a rather cheap procedure, so we might as well run it and make sure we didn't do anything stupid. If you want to test it out yourself, give the number of generators as N, the length K of the random word you're trying out, and the Coxeter matrix as m:

```python
# Sage code: reduce a random word in the Coxeter group defined by m
edge_list = []
for i1 in range(N):
    for j1 in range(i1):
        edge_list.append((j1 + 1, i1 + 1, m[i1, j1]))
G3 = Graph(edge_list)
W3 = CoxeterGroup(G3)
s3 = W3.simple_reflections()
word = [choice([1..N]) for k in range(K)]
print(word)
wTesting = s3[word[0]]
for o in word[1:]:
    wTesting = wTesting * s3[o]
word = wTesting.coset_representative([]).reduced_word()
print(word)
```

An example with N=28 and K=20; the first two lines are the input unreduced word, the next two are the reduced word. I hope I didn't make a typo when entering the $m$ matrix.

[26, 10, 13, 16, 15, 16, 20, 22, 21, 25, 11, 22, 25, 13, 8, 20, 19, 19, 14, 28]
['CNOT_23', 'Y_1', 'Y_4', 'Z_2', 'Z_1', 'Z_2', 'H_1', 'H_3', 'H_2', 'CNOT_12', 'Y_2', 'H_3', 'CNOT_12', 'Y_4', 'X_4', 'H_1', 'Z_5', 'Z_5', 'Y_5', 'CNOT_45']
[14, 8, 28, 26, 21, 10, 15, 20, 25, 11, 25, 20]
['Y_5', 'X_4', 'CNOT_45', 'CNOT_23', 'H_2', 'Y_1', 'Z_1', 'H_1', 'CNOT_12', 'Y_2', 'CNOT_12', 'H_1']

For generators like $T_i$ that we put back, we only restore the relations $T_i^n = 1$ and the fact that $T_i$ commutes with gates that do not involve qubit $i$. This lets us make the $w_i$ in the decomposition $w_1 g_{i_1} w_2 g_{i_2} \cdots w_k$ from before as long as possible. We want to avoid situations like $X_1 T_2 X_1 T_2 X_1 T_2 X_1$. (In Clifford+T one often seeks to minimize the T-count.) For this part, the directed acyclic graph showing the dependencies is crucial; this is a problem of finding a good topological sort of the DAG.
That is done by adjusting precedence when one has a choice of which vertex to take next. (I wouldn't waste time optimizing this part too hard.) If the word is already close to optimal length, there is not much to do, and this procedure won't help. But as the most basic example of what it finds: if you have multiple subroutines and you forgot there was an $H_i$ at the end of one and an $H_i$ at the beginning of the next, it will get rid of that pair. This means you can black-box common routines with greater confidence that when you put them together, the obvious cancellations will all be taken care of. It also finds cancellations that aren't as obvious; those come from the relations with $m_{ij} \neq 1,2$.
Abstract: For a toric Fano manifold $X$ denote by $Crit(X) \subset (\mathbb{C}^{\ast})^n$ the solution scheme of the Landau-Ginzburg system of equations of $X$. Examples of toric Fano manifolds with $rk(Pic(X)) \leq 3$ which admit full strongly exceptional collections of line bundles were recently found by various authors. For these examples we construct a map $E : Crit(X) \rightarrow Pic(X)$ whose image $\mathcal{E}=\left \{ E(z) \vert z \in Crit(X) \right \}$ is a full strongly exceptional collection satisfying the M-aligned property. That is, under this map, the groups $Hom(E(z),E(w))$ for $z,w \in Crit(X)$ are naturally related to the structure of the monodromy group acting on $Crit(X)$. Date: Wed, 16/12/2015 - 11:00 to 14:30 Location: Ross building, Hebrew University (Seminar Room 70A)
Geometric Mean In Trapezoid The following engaging fact was brought to my attention by Miguel Ochoa Sanchez from Peru: Given a trapezoid $ABCD,\;$ $AB\parallel CD.\;$ The diagonals of the trapezoid cut the figure into four triangles, with the areas as shown below. Then $X\cdot Y=M\cdot N$ and, since $X=Y,\;$ also $X=\sqrt{M\cdot N}.$ Proof Denote the pieces of the diagonals $a,b,c,d,\;$ as shown in the diagram below, and let $\alpha\;$ be the angle between the diagonals: Then $\begin{align} X&=\frac{1}{2}ad\sin\alpha,\\ N&=\frac{1}{2}ba\sin\alpha,\\ Y&=\frac{1}{2}cb\sin\alpha,\\ M&=\frac{1}{2}dc\sin\alpha.\\ \end{align}$ Now $X\cdot Y=\frac{1}{4}abcd\cdot\sin^2\alpha=M\cdot N.$ There is more to it. By Euclid I.37, $X+M=[\Delta ACD]=[\Delta BCD]=Y+M,\;$ so that $X=Y\;$ and, finally, $X=\sqrt{M\cdot N}.\;$ Copyright © 1996-2018 Alexander Bogomolny
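The identity is also easy to check numerically. Below is a small Python sketch (my own; the coordinates are an arbitrary trapezoid with $AB\parallel CD$) that locates the intersection of the diagonals and computes the four areas with the shoelace formula:

```python
import numpy as np

def tri_area(p, q, r):
    """Shoelace formula for the area of a triangle."""
    return 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))

A, B, C, D = (0, 0), (4, 0), (3, 2), (1, 2)   # AB parallel to CD

# intersection P of diagonals AC and BD: solve A + t(C-A) = B + s(D-B)
Mmat = np.array([[C[0]-A[0], B[0]-D[0]],
                 [C[1]-A[1], B[1]-D[1]]], dtype=float)
rhs = np.array([B[0]-A[0], B[1]-A[1]], dtype=float)
t, s = np.linalg.solve(Mmat, rhs)
P = (A[0] + t*(C[0]-A[0]), A[1] + t*(C[1]-A[1]))

N_ = tri_area(A, B, P)   # triangle on side AB
M_ = tri_area(C, D, P)   # triangle on side CD
X_ = tri_area(B, C, P)   # lateral triangle
Y_ = tri_area(D, A, P)   # lateral triangle

print(X_, Y_, np.sqrt(M_ * N_))
```

For this trapezoid the two lateral areas come out equal and both equal the geometric mean of the other two, as the proof predicts.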
Exploring Logistic Regression

In this post I will explore logistic regression, a technique used to predict class membership given some set of parameters. The method of exploration will be through an example, in which I will build a classifier to predict candidate acceptance into ML University (fictional) given their performance on two entrance exam scores. For the sake of brevity I will leave goodness-of-fit testing and regularization for a future post.

Getting to know ML University's Admittance and Exam Data

Usually the first step in the analysis process is to get familiarized with the format and integrity of the data. In this case the data is already in well-formed csv (comma separated value) format, and I know beforehand that the first two columns contain the first and second exam scores, and the third column contains the applicant's admittance (yes=1, no=0). So the first step will be to read in the data and list some summary statistics. It is worth noting that the first step would normally be to look at the raw data file to determine what format it is in, and what data types should be used for each column. In this case I know the data format beforehand, so I have skipped this step.
```python
# Numeric
import numpy as np
import pandas as pd

# Stats & ML
import sklearn.linear_model as linear_model
import scipy.optimize as opt
from scipy.stats import logistic
from sklearn import datasets
from sklearn import metrics
from sklearn.linear_model import LogisticRegression

# Plotting
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context("poster")
sns.set_style("darkgrid")

data = pd.read_csv('ex2data1.txt', header=None,
                   names=['exam1', 'exam2', 'admittance'])
data.describe()
```

```
       exam1       exam2       admittance
count  100.000000  100.000000  100.000000
mean    65.644274   66.221998    0.600000
std     19.458222   18.582783    0.492366
min     30.058822   30.603263    0.000000
25%     50.919511   48.179205    0.000000
50%     67.032988   67.682381    1.000000
75%     80.212529   79.360605    1.000000
max     99.827858   98.869436    1.000000
```

Observations from the initial inspection

From the summary statistics we can see that:

- The data set contains 100 records.
- The minimum and maximum scores for both tests are bounded by [30, 100].
- The mean and standard deviation are close in value for both exams.

The next step will be to plot the data to determine whether a linear classifier is appropriate for classifying this dataset.

```python
fig, ax = plt.subplots()
ax.plot(data[data['admittance'] == 1]['exam1'],
        data[data['admittance'] == 1]['exam2'], 'o', ms=8.0)
ax.plot(data[data['admittance'] == 0]['exam1'],
        data[data['admittance'] == 0]['exam2'], 'x', mew=2, ms=8.0)
ax.set_title('ML Uni Applicant Acceptance')
ax.legend(['Admitted', 'Rejected'], loc='upper right', frameon=True)
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()
```

The Hypothesis Function

The primary goal of our hypothesis is to predict an applicant's likelihood of getting into ML Uni given their entrance grades. In other words, it should return a number between 0 and 1 representing the likelihood of acceptance.
$$ 0 \leq h_{\theta}(x) \leq 1 $$

The commonly used function for this is the sigmoid function: $$ g(z) = \frac{1}{1+e^{-z}} $$

Visualizing the Sigmoid Function

A plot of the sigmoid function will better serve to illustrate why it is a good choice for this problem.

```python
# The sigmoid function
sigmoid = lambda x: 1/(1+np.exp(-x))
x = np.linspace(-10, 10, 50)

# The plot
fig, ax = plt.subplots()
ax.plot(x, [sigmoid(i) for i in x])
ax.set_title('The Sigmoid (Logistic) Function')
ax.set_xlabel('z')
ax.set_ylabel('g(z)')
```

From the plot we can see that the sigmoid function saturates quickly away from 0, and that its range is the interval (0,1). Thus we can interpret the output of the sigmoid function as a probability. As with linear regression, the sigmoid function will be parameterized by a linear function of the form: $$ \theta x = \theta_0 x_0 + \theta_1 x_1 + \dotsb + \theta_{n-1} x_{n-1} + \theta_n x_n, \quad x_0 = 1 $$$$ h_{\theta}(x) = \frac{1}{1+e^{-\theta x}} $$

The Cost Function for Logistic Regression

Now that the hypothesis has been determined, the next step is to determine a cost function. Unlike linear regression, the squared-error cost function cannot be used, due to the non-convexity that occurs when it is combined with the sigmoid function. Instead, the following cost function is used for logistic regression: $$\mathrm{Cost}(h_{\theta}(x^{(i)})) =\begin{cases} -\log(h_{\theta}(x^{(i)})) & \quad \text{if} \ y = 1 \\ -\log(1-h_{\theta}(x^{(i)})) & \quad \text{if} \ y = 0 \end{cases}$$

Cost Plot: Case where the Applicant is Accepted (y = 1)

As before, a plot will serve to illustrate how the cost function penalizes wrong predictions. The first plot of the cost function is for the case where the target is one (y = 1). This represents the case where the applicant is accepted.
```python
# Cost for the y=1 case
cost_label_one = lambda z: -np.log(z)

# The plot
ax2 = plt.subplot()
ax2.plot([sigmoid(i) for i in x], [cost_label_one(sigmoid(i)) for i in x])
ax2.set_title(r'Cost for y=1')
ax2.set_xlabel(r'$g(z)$')
ax2.set_ylabel(r'$Cost(g(z))$')
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 5])
plt.show()
```

We can see that the cost function for y=1 has the desired behavior: the cost grows as the prediction nears zero, and shrinks as the prediction approaches one, the target prediction.

Cost Plot: Case where the Applicant is Not Accepted (y = 0)

```python
# Cost for the y=0 case
cost_label_zero = lambda z: -np.log(1-z)

# The plot
ax2 = plt.subplot()
ax2.plot([sigmoid(i) for i in x], [cost_label_zero(sigmoid(i)) for i in x])
ax2.set_title(r'Cost for y=0')
ax2.set_xlabel(r'$g(z)$')
ax2.set_ylabel(r'$Cost(g(z))$')
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 5])
plt.show()
```

Preparing the Cost Function for Minimization

Now that the hypothesis and cost function have been determined, the next few steps will prepare them for the minimization process.

Alternate Representation of the Cost Function

Below is a simpler, yet equivalent, representation of the piecewise cost function: $$ J(\theta) = -\frac{1}{m} \sum_{i = 1}^{m}{\Bigg[y^{(i)}\log\big(h_{\theta} (x^{(i)})\big) + (1-y^{(i)})\log\big(1-h_{\theta}(x^{(i)})\big)\Bigg]} $$ As with linear regression, to facilitate computation the cost function will be transformed into an equivalent matrix expression: $$ J(\theta) = \frac{1}{m}\bigg[ -y^{T} \log(g(X\theta)) -(1-y)^{T} \log(1-g(X\theta))\bigg] $$

Gradient for the Minimization Process

In this implementation a scipy minimization function will be used to find the optimal parameters for the hypothesis function. This minimization function will be provided a reference to a Jacobian gradient function that returns both the cost and the gradient.
It’s worth noting that the gradient is nearly identical to the one that appears in the theta update calculation for linear regression; the only difference is that the hypothesis $g(X\theta)$ is now the sigmoid. Below is the gradient equation in matrix form. $$ \nabla J(\theta) = \frac{1}{m}X^{T}\big(g(X\theta) - y\big) $$

Minimizing the Cost Function

Preprocessing

To minimize the cost function, the training data must be extracted into the design matrix (X) and target vector (y). The initial parameters (t) for the minimization function are also chosen.

```python
m = len(data['exam1'])
X = np.array([np.ones(m), data['exam1'].values, data['exam2'].values]).T
y = np.array([data['admittance'].values]).reshape(m, 1)
t = np.array([1.0, 1.0, 1.0])
```

Defining the gradient and cost function for minimization

The scipy minimization function will be provided with a Jacobian gradient function that computes the cost and gradient. Its definition is below.

```python
def jacobian_gradient(theta, X, y):
    """
    Calculate and return the cost function and gradient for the given thetas.

    Args:
        theta (numpy 1-D ndarray): Learning parameters
        X (numpy ndarray): Design Matrix
        y (numpy 1-D ndarray): Target vector

    Returns:
        tuple (float, numpy 1-D ndarray): The calculated cost, and gradient.
    """
    m, n = X.shape

    # Note: The minimization function flattens the first argument (theta),
    # yet our calculations require that theta be a column vector. To work
    # around this, the 1-D (n,) theta vector is reshaped into a column
    # vector (n, 1).
    t = theta.reshape(n, 1)

    h = 1.0 / (1.0 + np.exp(-X.dot(t)))
    cost = ((-y) * np.log(h) - (1 - y) * np.log(1 - h)).sum() / m
    gradient = X.T.dot(h - y) / m

    # Note: The minimization function requires that the returned gradient be
    # a 1-D array. To satisfy this requirement the gradient is flattened.
    return (cost, gradient.flatten())
```

Running the Minimization Function

With the Jacobian gradient defined, all that remains is to call the minimization function with the required parameters: jacobian_gradient: the function called in each iteration of the minimization process to calculate the cost and gradient. x0=t: the initial parameters used in the minimization process. method='BFGS': the chosen minimization algorithm (Broyden–Fletcher–Goldfarb–Shanno). jac=True: flag indicating that the Jacobian gradient function will return both the cost and the gradient. args=(X, y): the second and third parameters passed to the Jacobian gradient function (note: x0 is the first param).

```python
# Note: x0 must be a 1-D array.
results = opt.minimize(jacobian_gradient, x0=t, method='BFGS', jac=True, args=(X, y))
```

Minimization Results

The minimization function returns an OptimizeResult object, which contains several attributes relating to the minimization process. The two relevant ones in this example are: success: True if the optimizer completed successfully, otherwise False. x: the optimized parameters.

```python
print(results)
```

```
      fun: 0.2034977015895529
  message: 'Optimization terminated successfully.'
  success: True
      jac: array([  9.71782335e-10,  -1.67659643e-06,   1.04965204e-06])
     njev: 67
        x: array([-25.16131376,   0.20623151,   0.20147148])
     nfev: 67
 hess_inv: array([[  2.80828345e+03,  -2.19770831e+01,  -2.31843128e+01],
       [ -2.19770831e+01,   1.85866635e-01,   1.69146733e-01],
       [ -2.31843128e+01,   1.69146733e-01,   2.06301018e-01]])
```

Plotting the Decision Boundary

A contour plot will be used to visualize the output of the prediction function with the optimal parameters in place. This will give us a better idea of the likelihood distribution across various grades.
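As a side check (my own addition, not part of the original workflow), an analytic gradient of this kind can be validated against a central finite-difference approximation on small synthetic data before trusting the optimizer with it:

```python
import numpy as np

def cost_grad(theta, X, y):
    """Logistic-regression cost and analytic gradient (1-D theta, y)."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-X @ theta))
    cost = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
    return cost, X.T @ (h - y) / m

# small synthetic dataset: intercept column plus two random features
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
y = (rng.random(20) < 0.5).astype(float)
theta = rng.normal(size=3)

_, grad = cost_grad(theta, X, y)

# central finite differences, one coordinate direction at a time
eps = 1e-6
numeric = np.array([
    (cost_grad(theta + eps * e, X, y)[0] - cost_grad(theta - eps * e, X, y)[0]) / (2 * eps)
    for e in np.eye(3)
])
print(np.max(np.abs(grad - numeric)))
```

If the two disagree by more than finite-difference noise, the analytic gradient has a bug and the optimizer will converge slowly or to the wrong answer.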
```python
# Prediction function used to plot the hypothesis
@np.vectorize
def predict(x1, x2, t0, t1, t2):
    return 1.0 / (1.0 + np.exp(-(t0 + t1 * x1 + t2 * x2)))

x_min, x_max = 30, 100
y_min, y_max = 30, 100
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50), np.linspace(y_min, y_max, 50))

theta = []
for i, t in enumerate(results.x):
    theta.append(np.empty((xx.shape[0], xx.shape[1])))
    theta[i].fill(t)

z = predict(xx, yy, theta[0], theta[1], theta[2])
z = z.reshape(xx.shape)

ax = plt.gca()
ax.plot(data[data['admittance'] == 1]['exam1'],
        data[data['admittance'] == 1]['exam2'], 'o', ms=8.0)
ax.plot(data[data['admittance'] == 0]['exam1'],
        data[data['admittance'] == 0]['exam2'], 'x', mew=2, ms=8.0)
ax.set_title('ML Uni Applicant Acceptance')
c = ax.contourf(xx, yy, z, cmap='RdYlBu', alpha=0.5)
c1 = ax.contour(xx, yy, z, colors='black', alpha=0.30)
plt.clabel(c1, fmt='%2.1f', fontsize=18, inline=True)
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
ax.legend(['Admitted', 'Rejected'], loc='upper right', frameon=True)
f = plt.gcf()
ax_c = f.colorbar(c)
ax_c.set_label("$p(y = 1)$")
plt.show()
```

Using Scikit-Learn for Logistic Regression

In practice, machine learning libraries are used to implement logistic regression. In this case I will use scikit-learn to reduce the above steps to two lines of code.

```python
logreg = linear_model.LogisticRegression(C=1.0)
print(logreg.fit(X[:, 1:3], y.flatten()))
```

```
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, penalty='l2', random_state=None, tol=0.0001)
```

Plotting the Decision Boundary

```python
result = np.array([logreg.intercept_[0], logreg.coef_[0][0], logreg.coef_[0][1]])

x_min, x_max = 30, 100
y_min, y_max = 30, 100
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50), np.linspace(y_min, y_max, 50))

theta = []
for i, t in enumerate(result):
    theta.append(np.empty((xx.shape[0], xx.shape[1])))
    theta[i].fill(t)

z = predict(xx, yy, theta[0], theta[1], theta[2])
z = z.reshape(xx.shape)

ax = plt.gca()
ax.plot(data[data['admittance'] == 1]['exam1'],
        data[data['admittance'] == 1]['exam2'], 'o', ms=8.0)
ax.plot(data[data['admittance'] == 0]['exam1'],
        data[data['admittance'] == 0]['exam2'], 'x', mew=2, ms=8.0)
ax.set_title('ML Uni Applicant Acceptance')
c = ax.contourf(xx, yy, z, cmap='RdYlBu', alpha=0.5)
c1 = ax.contour(xx, yy, z, colors='black', alpha=0.30)
plt.clabel(c1, fmt='%2.1f', fontsize=16, inline=True)
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
ax.legend(['Admitted', 'Rejected'], loc='upper right', frameon=True)
f = plt.gcf()
ax_c = f.colorbar(c)
ax_c.set_label("$p(y = 1)$")
plt.show()
```

The results are nearly identical, with a fraction of the work. In the next post I will explore regularizing logistic regression.
ATK-SE

Introduction

ATK-SemiEmpirical (ATK-SE) can model the electronic properties of molecules, crystals and devices using both self-consistent and non-self-consistent tight-binding models. In this chapter, the implemented tight-binding models based on the Slater–Koster model and the extended Hückel model are presented. The Slater–Koster tight-binding model follows closely the DFTB formalism described in [EPJ+98], and it is recommended that this paper, and [SPS+10], are cited in publications using the SemiEmpiricalCalculator and DeviceSemiEmpiricalCalculator with the SlaterKosterHamiltonianParametrization in QuantumATK. The extended Hückel model in ATK-SE is described in [SPS+10], and it is recommended that this paper is cited in publications using the SemiEmpiricalCalculator and DeviceSemiEmpiricalCalculator with the HuckelHamiltonianParametrization in QuantumATK. In ATK-SE, the non-self-consistent part of the tight-binding Hamiltonian is parametrized using a two-center approximation, i.e. the matrix elements only depend on the distance between two atoms and are independent of the positions of the other atoms. In the extended Hückel model, the matrix elements are described in terms of overlaps between Slater orbitals on each site. In this way, the matrix elements can be defined by very few parameters. In the Slater–Koster model, the distance-dependence of the matrix elements is given as a numerical function; this gives higher flexibility but also makes the fitting procedure more difficult. The self-consistent part of the calculation is identical for both SE models. The density matrix is calculated from the Hamiltonian using non-equilibrium Green's functions for device systems, while for molecules and crystals it is calculated by diagonalization. The density matrix defines the real-space electron density, and consequently the Hartree potential can be obtained by solving the Poisson equation.
The following describes the details of the mathematical formalism behind the implementation. For a list of the built-in parameter sets in ATK-SE, see Built-in parameter sets in ATK-SE. In addition to these, it is possible to directly use parameter sets downloaded from the DFTB website, or to define your own Slater–Koster table or extended Hückel basis set in the QuantumATK input scripts.

Background information

Non-self-consistent Hamiltonian

The Hamiltonian is expanded in a basis of local atomic orbitals (an LCAO expansion), where \(Y_{lm}\) is a spherical harmonic and \(R_{nl}\) is a radial function. Typically, the atomic orbitals used in the LCAO expansion have a close resemblance to the atomic eigenfunctions.

Onsite terms

With this form of the basis set, the onsite elements are given by where \(E_i\) is an adjustable parameter that is often close to the atomic eigenenergy.

Offsite terms in the extended Hückel model

The central object in the extended Hückel model is the overlap matrix. To calculate this integral, the form of the basis functions must be specified. In the extended Hückel model, the basis functions are parametrized by Slater orbitals. The LCAO basis is described by the adjustable parameters \(\eta_1\), \(\eta_2\), \(C_1\), and \(C_2\). These parameters must be defined for each angular shell of valence orbitals for each element. The overlap matrix defines the Hamiltonian, where \(E_i\) is the onsite orbital energy and \(\beta_i\) is a Hückel fitting parameter (often chosen to be 1.75).

Weighting schemes

There are two variants of the weighting schemes for the orbital energies of the offsite Hamiltonian. The scheme used above, where \(\alpha = (E_i - E_j)/(E_i + E_j)\).

Offsite Hamiltonian in the Slater–Koster model

The overlap matrix is given by pairwise integrals between the different basis functions. These integrals can be pre-calculated for all relevant distances and different orbital combinations, and stored in so-called Slater–Koster tables.
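To make the Slater-orbital parametrization more concrete, here is a small sketch of my own (not QuantumATK code) of a normalized single-zeta Slater radial function \(R(r) \propto r^{n-1}e^{-\eta r}\), together with a numerical check of the normalization \(\int_0^\infty R^2 r^2\,dr = 1\). The normalization constant \((2\eta)^{n+1/2}/\sqrt{(2n)!}\) follows from the standard integral \(\int_0^\infty r^{2n} e^{-2\eta r}\,dr = (2n)!/(2\eta)^{2n+1}\):

```python
import math
import numpy as np

def slater_radial(n, eta):
    """Normalized Slater radial part R(r) = N r^(n-1) exp(-eta r),
    with N = (2 eta)^(n + 1/2) / sqrt((2n)!)."""
    norm = (2 * eta) ** (n + 0.5) / math.sqrt(math.factorial(2 * n))
    return lambda r: norm * r ** (n - 1) * np.exp(-eta * r)

R = slater_radial(2, 1.6)   # e.g. an n=2 shell with an assumed eta = 1.6

# numerical check of the normalization by the trapezoid rule
r = np.linspace(0.0, 60.0, 200_001)
dr = r[1] - r[0]
f = (R(r) * r) ** 2
norm_check = float(np.sum((f[:-1] + f[1:])) * 0.5 * dr)
print(norm_check)
```

A double-zeta basis function of the kind described above would combine two such radial functions with weights \(C_1\) and \(C_2\) for exponents \(\eta_1\) and \(\eta_2\).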
The Slater–Koster table stores the distance-dependent parameters \(s(d, Z_1, Z_2, l_1, l_2, m)\), where \(d\) is the distance, \(Z_1\), \(Z_2\) the element types, \(l_1\), \(l_2\) the angular momenta of the two orbitals, and the index \(m \le \min(l_1, l_2)\). From the Slater–Koster tables, the overlap matrix elements are given by where \(\alpha\) are the Slater–Koster expansion coefficients. In the Slater–Koster model, it is assumed that the Hamiltonian also has a pairwise form, and a Slater–Koster table is generated for the Hamiltonian matrix elements. This table may be generated by calculating Hamiltonian matrix elements for a set of dimer distances, or by simply fitting matrix elements to the band structure for different lattice constants. In ATK-SE, the Slater–Koster table is constructed either by providing the path to a directory containing compatible Slater–Koster files (see DFTBDirectory and HotbitDirectory), or directly using the SlaterKosterTable class. Note that the extended Hückel model is a Slater–Koster model too, but with a special fitting procedure for the Hamiltonian matrix elements.

Self-consistent Hamiltonian

In the self-consistent semi-empirical models in QuantumATK, the electron density is computed using the tight-binding model as described above. The density gives rise to a Hartree potential \(V_H\). The calculation of the Hartree potential is described in detail in the section on The Hartree Potential. The Hartree potential is included through an additional term in the Hamiltonian.

Electron density

The electron density is given by the occupied eigenfunctions, where \(f_\alpha\) is the occupation of the level denoted by \(\alpha\). For finite-temperature calculations, the occupations are determined by the Fermi-Dirac distribution \(f_\alpha = \frac{1}{1 + e^{(\epsilon_\alpha - \epsilon_F)/kT}}\), with \(\epsilon_\alpha\) the energy of the eigenstate \(\psi_\alpha\), \(\epsilon_F\) the Fermi level, and \(T\) the electron temperature.
However, other smooth distributions may be introduced in order to help speed up convergence (see Occupation Methods). The eigenstates in the Slater orbital basis can be written as where the total number of electrons, \(N=\int_V n(\mathbf{r}) \, \mathrm{d}\mathbf{r}\), is given by where \(D_{ij} = \sum_{\alpha} f_\alpha c_{\alpha i}^* c_{\alpha j}\) is the density matrix.

An approximate atom-based electron density

In practice, a simple approximation is used for the electron density. To this end, we introduce the Mulliken population for shell \(l\) of atom number \(\mu\), and write the total number of electrons as a sum of atomic contributions, \(N=\sum_\mu \sum_{l \in \mu} m_l\). The radial dependence of each atomic-like density is represented by a Gaussian function, and the total induced charge in the system is approximated by where \(\delta m_l = m_l - Z_\mu\) is the total charge for shell \(l\) of atom \(\mu\), i.e. the sum of the valence electron charge \(m_l\) and the ionic charge \(-Z_\mu\). To see the significance of the width \(\alpha_l\) of the Gaussian orbital, consider the electrostatic potential from a single Gaussian density at position \(\mathbf{R}_\mu\): The onsite value of the Hartree potential is \(V_{H}(\mathbf{R}_\mu)=(m_l-Z_\mu) U_l\), where \(U_l= 2 \sqrt{ \frac{\alpha_l}{\pi}}\) is the onsite Hartree shift. In ATK-SE, it is the specified value of \(U_l\) that is used to determine the width \(\alpha_l\) of the Gaussian using the above relation.

Onsite Hartree shift parameters

The shell-dependent onsite Hartree shift \(U_l\) can be obtained from an atomic calculation. \(U_l\) is related to the linear shift of the eigenenergy \(\varepsilon_l\) of shell \(l\) as a function of the shell occupation \(q_l\). Thus, \(U_l\) can be obtained by performing atomic calculations with different values of \(q_l\). In QuantumATK, it is recommended to use the same onsite Hartree shift parameter for the s- and p-shells of each atom.
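The relation between the Gaussian width and the onsite Hartree shift can be checked directly. For a unit Gaussian charge density \(n(r) = (\alpha/\pi)^{3/2} e^{-\alpha r^2}\), the electrostatic potential in atomic units is \(V(r) = \operatorname{erf}(\sqrt{\alpha}\,r)/r\), whose \(r \to 0\) limit is \(2\sqrt{\alpha/\pi}\). A minimal sketch of my own verifying this (the value of \(\alpha\) is an arbitrary test choice):

```python
import math

alpha = 0.3  # Gaussian width parameter (arbitrary test value, atomic units)

def v_gaussian(r):
    """Electrostatic potential of a unit Gaussian charge density."""
    return math.erf(math.sqrt(alpha) * r) / r

U = 2.0 * math.sqrt(alpha / math.pi)  # predicted onsite Hartree shift
print(v_gaussian(1e-4), U)  # the two agree in the r -> 0 limit
```

This is exactly the relation \(U_l = 2\sqrt{\alpha_l/\pi}\) that ATK-SE inverts to determine the Gaussian width from the specified onsite Hartree shift.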
ATK provides a database for \(U_l\) calculated using DFT and the PBE functional (see Generalized-gradient approximation (GGA)). Access to the data is through the function ATK_U. For backwards compatibility, the HoffmanHückelParameters and MullerHückelParameters do not use the ATK_U database, but use special values of the electrostatic parameter \(U\); see Element data.

Spin polarization

The inclusion of spin in the tight-binding Hamiltonian follows the scheme in [KohlerFH+07]. The following spin-dependent term is added to the Hamiltonian: where the sign in the equation depends on the spin. The spin splitting \(dE_{l}\) of shell \(l\) is calculated from the spin-dependent Mulliken populations \(\mu_l\) of each shell at the local site.

Onsite spin-split parameters

The shell-dependent spin splitting strength \(W_{ll'}\) can be obtained from a spin-polarized atomic calculation [KohlerFH+07]. Since \(W_{l l'}\) enters symmetrically in the Hamiltonian, it is convenient to symmetrize it. ATK provides a database for \(\bar{W}_{l l'}\). Access to the data is through the function ATK_W.
Tight-binding total energy

The total energy of the tight-binding model is the sum of the following contributions:

\[E = E_{\rm H^0} + E_{\rm \delta H} + E_{\rm ext} + E_{\rm spin} + E_{\rm pp}.\]

\(E_{\rm H^0}\) is the one-electron energy of the non-self-consistent Hamiltonian, given by
\[E_{\rm H^0} = \sum_{ij} D_{ij} H^0_{ij}.\]

\(E_{\rm\delta H}\) is the electrostatic difference energy,
\[E_{\rm\delta H} = \frac{1}{2}\int V^H_0(\mathbf{r}) \delta n(\mathbf{r})\,\mathrm{d}\mathbf{r}.\]

\(E_{\rm ext}\) is the electrostatic interaction between the electrons and an external field,
\[E_{\rm ext} = \int V^{\rm ext}(\mathbf{r}) \delta n(\mathbf{r})\,\mathrm{d}\mathbf{r}.\]

\(E_{\rm spin}\) is the spin polarization energy [KohlerFH+07],
\[E_{\rm spin} = -\frac{1}{2} \sum_\mu \sum_{l \in \mu} \sum_{l' \in \mu} W(Z_{\mu}, l,l') m_l m_{l'}.\]

\(E_{\rm pp}\) is the repulsive energy from a pair potential between each atom pair, \(V^{\rm pp}(Z_{\mu}, Z_{\mu'},R_{\mu, \mu'})\),
\[E_{\rm pp} = \sum_{\mu < \mu'} V^{\rm pp}(Z_{\mu}, Z_{\mu'},R_{\mu, \mu'}).\]

It is optional to add this last term to the tight-binding model; it does not affect the electronic structure. The tight-binding model will, however, not give sensible geometries without a repulsive pair potential.

Important: In the current version of QuantumATK, only the DFTB and Hotbit parameter sets contain a repulsive pair-potential term, and only these methods can be used for geometry optimizations.

Parameters

Parameters for the Slater–Koster method

The Slater–Koster parameters can be provided either through the SlaterKosterTable class or through various 3rd-party formats. The supported 3rd-party formats are the DFTB Slater–Koster files from the DFTB consortium and the Hotbit Slater–Koster files from the Hotbit consortium. Tables showing which elements are covered by the parameters are given here: Slater–Koster basis sets.

Shipped DFTB and Hotbit parameters

The current version of ATK-SE is shipped with a number of DFTB-style parameters from the CP2K and Hotbit consortia. We recommend that these resources are cited if the parameters are used in publications.
These basis sets are most easily set up using the QuantumATK interface.

Shipped Slater–Koster Table parameters

A number of orthogonal tight-binding parameters are provided. The parameters are from Vogl et al. [VHD83] and Jancu et al. [JSBB98], and it is recommended that these papers are cited if the parameters are used in publications. These basis sets can easily be set up using QuantumATK.

Parameters for the extended Hückel method

The parameters \(\eta_1\), \(\eta_2\), \(C_1\), \(C_2\), and \(E\) must be defined for each valence orbital, while \(\beta\) and \(U\) only depend on the element type. Different parameter sets are provided with ATK-SE, but it is also possible to provide user-defined parameters in the input file using the HuckelBasisParameters class. The tables below provide a mapping between the symbols in the equations and the corresponding keywords.

Symbol | HuckelBasisParameters keyword
\(E_i\) | ionization_potential
\(\beta\) | wolfsberg_helmholtz_constant
\(U\) | onsite_hartree_shift
\(W\) | onsite_spin_split
\(E^{\mathrm{VAC}}\) | vacuum_level

Symbol | SlaterOrbital parameter
\(n\) | principal_quantum_number
\(l\) | angular_momentum
\(\eta\) | slater_coefficients
\(C\) | weights

The current version of QuantumATK comes with built-in Hoffmann and Müller parameter sets, which are appropriate for organic molecules. For crystalline structures, both metals and organic materials like graphene, parameters from J. Cerdá are provided. When using these parameters, [CS00] should be referenced. Tables with the available parameter sets can be found here: Extended Hückel basis sets.

To combine parameters from different sources, it is important to make sure that they use the same energy zero level, in order to obtain correct charge transfers. This can be achieved by ensuring that the crystals have the correct work function and the molecules the correct ionisation energies.
For this purpose, an additional parameter \(E^{\mathrm{VAC}}\) is introduced, which shifts the energy of the vacuum level. If a calculation with \(E^{\mathrm{VAC}}=0\) eV has a work function of 6.5 eV, then by setting \(E^{\mathrm{VAC}}=-1.5\) eV all bands shift rigidly upwards by 1.5 eV, and the work function becomes 5.0 eV.

Note: The Hückel parameters have been fitted for non-self-consistent calculations. To use the parameters in self-consistent calculations, the self-consistent onsite shifts must be compensated by a reverse shift of the vacuum_levels.

[ABTH78] J. H. Ammeter, H. B. Buergi, J. C. Thibeault, and R. Hoffmann. Counterintuitive orbital mixing in semiempirical and ab initio molecular orbital calculations. J. Am. Chem. Soc., 100(12):3686–3692, 1978. doi:10.1021/ja00480a005.

[CS00] J. Cerdá and F. Soria. Accurate and transferable extended Hückel-type tight-binding parameters. Phys. Rev. B, 61:7965–7971, 2000. doi:10.1103/PhysRevB.61.7965.

[EPJ+98] M. Elstner, D. Porezag, G. Jungnickel, J. Elsner, M. Haugk, Th. Frauenheim, S. Suhai, and G. Seifert. Self-consistent-charge density-functional tight-binding method for simulations of complex materials properties. Phys. Rev. B, 58:7260–7268, 1998. doi:10.1103/PhysRevB.58.7260.

[JSBB98] J.-M. Jancu, R. Scholz, F. Beltram, and F. Bassani. Empirical spds* tight-binding calculation for cubic semiconductors: general method and material parameters. Phys. Rev. B, 57:6493–6507, 1998. doi:10.1103/PhysRevB.57.6493.

[KF08] K. Kaasbjerg and K. Flensberg. Strong polarization-induced reduction of addition energies in single-molecule nanojunctions. Nano Lett., 8(11):3809–3814, 2008. doi:10.1021/nl8021708.

[KohlerFH+07] C. Köhler, T. Frauenheim, B. Hourahine, G. Seifert, and M. Sternberg. Treatment of collinear and noncollinear electron spin within an approximate density functional based method. J. Phys. Chem. A, 111(26):5622–5629, 2007. doi:10.1021/jp068802p.

[SPS+10] K. Stokbro, D. E. Petersen, S. Smidstrup, A. Blom, M. Ipsen, and K. Kaasbjerg. Semiempirical model for nanoscale device simulations. Phys. Rev. B, 82:075420, 2010. doi:10.1103/PhysRevB.82.075420.

[VHD83] P. Vogl, H. P. Hjalmarson, and J. D. Dow. A semi-empirical tight-binding theory of the electronic structure of semiconductors. J. Phys. Chem. Solids, 44(5):365–378, 1983. URL: http://www.sciencedirect.com/science/article/pii/0022369783900641.

[WH78] M.-H. Whangbo and R. Hoffmann. Counterintuitive orbital mixing. J. Chem. Phys., 1978. doi:10.1063/1.435677.

[WH52] M. Wolfsberg and L. Helmholz. The spectra and electronic structure of the tetrahedral ions MnO4−, CrO4−−, and ClO4−. J. Chem. Phys., 1952. doi:10.1063/1.1700580.
Yes, provided that $e^{X}$ is integrable. It is because $\sigma(Y)=\sigma(e^{Y})$. Proof: Clearly $e^{Y}$ is $\sigma(Y)$-measurable, so $\sigma(e^{Y})\subseteq\sigma(Y)$. On the other hand, $Y=\ln\left(e^{Y}\right)$, which is $\sigma(e^{Y})$-measurable, so $\sigma(Y)\subseteq\sigma(e^{Y})$.

Guide: Note that $$M_X(t)=E[e^{Xt}]=\sum P(X=x)e^{xt}.$$ Hence I can read off from the first term of the MGF that $P(X=-2)=\frac16$. Try to read off the other terms and you should be able to answer the question.

To justify the interchange of limit and integral, estimate the difference quotient as follows: let $|t|<\delta<1$, $$\left|\frac{e^{-t(\mu-x)}-1}{t}\right|e^{-x}\leq e^{-x}\sup_{|t|<\delta}|x-\mu|e^{-t(\mu-x)}\leq e^{-(x-\delta|x-\mu|)}|x-\mu|,$$ which is integrable. Now apply the DCT to find $$\lim_{t\to 0}\int_0^\infty \frac{e^{-t(\mu-x)}-1}{...

I don't understand why one would use the MGF for this.

You get it from the definition: $P(Y<c)=P(X<\frac {c-a} {b-a}) =\frac {c-a} {b-a}$ for $c$ between $a$ and $b$, and this is the definition of the uniform distribution.

From the question "Also, what will be the limits? Will it be from x to 1 or from 0 to 1?" I will assume that the proper joint density is $$f_{X,Y}(x,y) = 8xy\cdot\mathsf 1_{0<x<y<1}.$$ To find the moment-generating function of $Y$, we first need to determine its marginal distribution. To do this, we integrate over all possible values of $X$: $$...
For $y\geqslant 0$ we have $1\geqslant e^{-y}=\sum_{k=0}^{\infty}(-1)^k y^k/k!$; integrating this $m$ times, we get $$\frac{y^m}{m!}\geqslant\sum_{k=0}^{\infty}\frac{(-1)^k y^{k+m}}{(k+m)!}\underset{k+m=j}{=}(-1)^m\sum_{j=m}^{\infty}(-1)^j\frac{y^j}{j!}.$$

Now you have to apply the formula for the partial sum of a geometric series (the given hint): $$\sum_{k=1}^n r^k=r\cdot \frac{r^n-1}{r-1}.$$ For $|r|<1$ the series converges: $\sum\limits_{k=1}^{\infty} r^k=\lim\limits_{n \to \infty }r\cdot \frac{r^n-1}{r-1}=r\cdot \frac{0-1}{r-1}=\frac{r}{1-r} \qquad (*)$. Next we simplify the sum: $$M_{X}(t) = p\sum_{...
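The "read off the probabilities from the MGF" trick in the answers above can be made concrete numerically. The sketch below uses a hypothetical pmf with support \(\{-2, 0, 1\}\) (only \(P(X=-2)=\tfrac16\) comes from the answer; the rest is made up for illustration), evaluates the MGF at a few points, and solves a small linear system to recover the pmf:

```python
import numpy as np

support = np.array([-2.0, 0.0, 1.0])     # hypothetical support of X
probs = np.array([1/6, 1/3, 1/2])        # hypothetical pmf; P(X=-2)=1/6 as in the answer

def mgf(t):
    """M_X(t) = E[e^{Xt}] = sum_x P(X=x) e^{xt}."""
    return np.sum(probs * np.exp(support * t))

# "Reading off" the pmf: evaluate the MGF at as many points as support values
# and solve the linear system M(t_k) = sum_i P(X=x_i) e^{x_i t_k}.
ts = np.array([0.0, 0.5, 1.0])
A = np.exp(np.outer(ts, support))        # A[k, i] = e^{x_i t_k}
m = np.array([mgf(t) for t in ts])
recovered = np.linalg.solve(A, m)

assert np.allclose(recovered, probs)
assert np.isclose(mgf(0.0), 1.0)         # probabilities sum to 1
```

In the symbolic setting of the answer the recovery is immediate: the coefficient of \(e^{xt}\) in the MGF is exactly \(P(X=x)\).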
Let's call a set of points good if every pair of points in the set are an integer distance apart. Let's call a circle small if its radius is less than 7. A good set lies on the boundary of a small circle. How many points can the set contain?

This is not a proper answer! I'll guess that the number is 6. But this is just a guess. (EDIT: now with ugly proof) First off, any three-point subset of a good set is an integer-sided triangle, and the circumradius of each such triangle must be the same. The formula, by the way, is $$r=\frac{abc}{\sqrt{(a+b+c)(a+b-c)(c+a-b)(b+c-a)}}.$$ So to start we are looking for different triples of integers which give the same value of $r$ when you put them in this formula ($r$ doesn't need to be an integer). I cheated a bit here: I clicked on OP's profile, and saw a previous problem, and a comment about a certain hexagon whose side lengths are 5 or less. I see by trial and error that the equilateral triangle (7,7,7) matches (3,5,7) and (3,7,8) and (5,7,8), with $r = \frac{7}{\sqrt{3}}$, which is just over 4. You can make a skewed hexagon out of this by taking two equilateral triangles of side length 7 to make a Star of David. Now rotate one triangle until the side lengths of the outer hexagon are 3-5-3-5-3-5, and this will make the diagonals 8, so everything is an integer and this is a "good" set. I think this is OP's hexagon. I also see by trial and error that the next equilateral triangle that can match anything else is (13,13,13), with $r = \frac{13}{\sqrt{3}}$, which is just past OP's critical value of 7.
This is a puzzle and not just a math question. It could turn out that in fact there is a heptagon solution containing no equilateral triangle. But probably OP's hexagon is involved, and OP chose 7 for a reason. Again, this is just a guess based on assumptions about OP :-) not a proof. ADDED: Well, since nobody is answering... I found all the different triples of integers up to 13 which have the same circumradius. There are a bunch of pairs of triples, and then three other higher-order coincidences: first, the one I found already; second, (4,4,7), (4,6,8), (4,8,8), (6,7,8), with $r = \frac{16}{\sqrt{15}}$, just a bit bigger than the previous $r$; and third, (3,8,10), (3,12,12), (8,8,12), and (8,10,12), with $r=\frac{16}{\sqrt{7}}$, just over 6. Trial and error trying to fit these around a circle shows that these both make pentagons. The first has sides 4-4-6-4-6; one of the diagonals has length 7 and the other 4 have length 8. The other has sides 3-8-8-8-8; two diagonals have length 10, three have length 12. Both are "good" sets. To finish, we need to rule out the pairs. { EDIT: Example: (2,3,4) and (2,4,4) are the smallest pair, $r=\frac{8}{\sqrt{15}}$, which is just over 2. This makes a quadrilateral with sides of length 2-3-2-4 and both diagonals of length 4. BTW we also know it fits on a circle by Ptolemy's theorem, since $2 \cdot 2 + 3 \cdot 4 = 4 \cdot 4$. } Well, a subset of a good set is a good set, so we just need to show that any heptagon would need more than two different (i.e. not congruent) kinds of triangle. Uhhhh. Well, it really looks like it ought to be true, and I want it to be true, so I'll just call that "obvious" ;-) So my guess was right. But maybe OP had a more elegant proof in mind, and didn't intend for me to hax0r it with my 1337 programming skillz (by that I mean, I made a spreadsheet.. hey don't judge)
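The spreadsheet search described above is easy to reproduce in a few lines. This sketch groups all integer-sided triangles with sides up to 13 by their exact squared circumradius (rational arithmetic avoids floating-point ties) and confirms the coincidence at \(r = 7/\sqrt{3}\) used in the answer:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from collections import defaultdict

def r_squared(a, b, c):
    """Squared circumradius of an integer-sided triangle, as an exact rational,
    or None if (a, b, c) with a <= b <= c violates the triangle inequality."""
    s = (a + b + c) * (a + b - c) * (c + a - b) * (b + c - a)
    if s <= 0:
        return None
    return Fraction((a * b * c) ** 2, s)

groups = defaultdict(list)
for a, b, c in combinations_with_replacement(range(1, 14), 3):
    r2 = r_squared(a, b, c)
    if r2 is not None:
        groups[r2].append((a, b, c))

# The coincidence used in the answer: (7,7,7), (3,5,7), (3,7,8), (5,7,8)
# all share r = 7/sqrt(3), i.e. r^2 = 49/3.
assert sorted(groups[Fraction(49, 3)]) == [(3, 5, 7), (3, 7, 8), (5, 7, 8), (7, 7, 7)]
```

Filtering `groups` for keys with more than two triples also reproduces the two pentagon coincidences listed above.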
I will consider non-noisy observations, i.e. $y=f(x)$. Let's say we have the following data set of 5 training examples with one of the examples duplicated: $(1,2,3,4,4)$ maps to $(2,4,6,8,8)$. Since for GPR we have to invert a kernel matrix, and a kernel matrix containing duplicate inputs will not be invertible, we should remove duplicate training examples when doing GPR with non-noisy observations. Am I right in my reasoning? Kindly comment.

The duplicate data add no additional information, and rank-deficiency in the kernel matrix is fatal to the process. Removing them has literally no inferential consequence. That said, numerically, the kernel matrix $K$ will occasionally become numerically singular if some points are too close together (but not necessarily identical). In this scenario, you can either identify and deal with the problem points (deletion, merging them, whatever) or you can add some (small) noise: $\hat{K}=K+\epsilon I$. Usually $\epsilon=10^{-6}$ is sufficient for me. Alternatively, you can perform a spectral decomposition of $K$ and then, for each eigenvalue $\lambda_i$, replace it with $\hat{\lambda_i}=\max{\{\lambda_i, \epsilon\lambda_{\max}\}}$ for some small $\epsilon$. The idea here is that you've effectively pinned the smallest eigenvalue of the matrix relative to the largest, and this may be a more "minimal" intervention into the matrix. This is an area where I'm not sure there are any good solutions. The numerical component of the problem is considered in more detail on this thread:
Trefoil knot
Common name: Overhand knot
Arf invariant: 1
Braid length: 3
Braid no.: 2
Bridge no.: 2
Crosscap no.: 1
Crossing no.: 3
Genus: 1
Hyperbolic volume: 0
Stick no.: 6
Tunnel no.: 1
Unknotting no.: 1
Conway notation: [3]
A–B notation: 3_1
Dowker notation: 4, 6, 2
Last/Next: 0_1 / 4_1
Other: alternating, torus, fibered, pretzel, prime, reversible, tricolorable, twist

In topology, a branch of mathematics, the trefoil knot is the simplest example of a nontrivial knot. The trefoil can be obtained by joining together the two loose ends of a common overhand knot, resulting in a knotted loop. As the simplest knot, the trefoil is fundamental to the study of mathematical knot theory, which has diverse applications in topology, geometry, physics, chemistry and magic. The trefoil knot is named after the three-leaf clover (or trefoil) plant.

Descriptions

The trefoil knot can be defined as the curve obtained from the following parametric equations: \[x = \sin t + 2 \sin 2t, \qquad y=\cos t - 2 \cos 2t, \qquad z=-\sin 3t.\] The (2,3)-torus knot is also a trefoil knot. The following parametric equations give a (2,3)-torus knot lying on the torus \((r-2)^2+z^2 = 1\): \[x = (2+\cos 3t)\cos 2t, \qquad y=(2+\cos 3t )\sin 2t, \qquad z=\sin 3t.\]

Form of trefoil knot without visual three-fold symmetry

Any continuous deformation of the curve above is also considered a trefoil knot. Specifically, any curve isotopic to a trefoil knot is also considered to be a trefoil. In addition, the mirror image of a trefoil knot is also considered to be a trefoil. In topology and knot theory, the trefoil is usually defined using a knot diagram instead of an explicit parametric equation.
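As a quick sanity check of the (2,3)-torus-knot parametrization, one can verify numerically that every sampled point of the curve satisfies the torus equation \((r-2)^2 + z^2 = 1\), where \(r\) is the distance from the \(z\)-axis:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1000)

# (2,3)-torus-knot parametrization from the text.
x = (2 + np.cos(3 * t)) * np.cos(2 * t)
y = (2 + np.cos(3 * t)) * np.sin(2 * t)
z = np.sin(3 * t)

# Every point lies on the torus (r - 2)^2 + z^2 = 1.
r = np.hypot(x, y)
assert np.allclose((r - 2) ** 2 + z ** 2, 1.0)
```

This works because \(r = 2 + \cos 3t\) by construction, so the torus equation reduces to \(\cos^2 3t + \sin^2 3t = 1\).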
In algebraic geometry, the trefoil can also be obtained as the intersection in \(\mathbb{C}^2\) of the unit 3-sphere \(S^3\) with the complex plane curve of zeroes of the complex polynomial \(z^2 + w^3\) (a cuspidal cubic). If one end of a tape or belt is turned over three times and then pasted to the other, a trefoil knot results. [1]

Symmetry

The trefoil knot is chiral, in the sense that a trefoil knot can be distinguished from its own mirror image. The two resulting variants are known as the left-handed trefoil and the right-handed trefoil. It is not possible to deform a left-handed trefoil continuously into a right-handed trefoil, or vice versa. (That is, the two trefoils are not isotopic.) Though the trefoil knot is chiral, it is also invertible, meaning that there is no distinction between a counterclockwise-oriented trefoil and a clockwise-oriented trefoil. That is, the chirality of a trefoil depends only on the over and under crossings, not the orientation of the curve. An overhand knot becomes a trefoil knot by joining the ends.

Nontriviality

The trefoil knot is nontrivial, meaning that it is not possible to "untie" a trefoil knot in three dimensions without cutting it. From a mathematical point of view, this means that a trefoil knot is not isotopic to the unknot. In particular, there is no sequence of Reidemeister moves that will untie a trefoil. Proving this requires the construction of a knot invariant that distinguishes the trefoil from the unknot. The simplest such invariant is tricolorability: the trefoil is tricolorable, but the unknot is not. In addition, virtually every major knot polynomial distinguishes the trefoil from an unknot, as do most other strong knot invariants.

Classification

In knot theory, the trefoil is the first nontrivial knot, and is the only knot with crossing number three. It is a prime knot, and is listed as \(3_1\) in the Alexander–Briggs notation. The Dowker notation for the trefoil is 4 6 2, and the Conway notation for the trefoil is [3].
The trefoil can be described as the (2,3)-torus knot. It is also the knot obtained by closing the braid \(\sigma_1^3\). The trefoil is an alternating knot. However, it is not a slice knot, meaning that it does not bound a smooth 2-dimensional disk in the 4-dimensional ball; one way to prove this is to note that its signature is not zero. Another proof is that its Alexander polynomial does not satisfy the Fox–Milnor condition. The trefoil is a fibered knot, meaning that its complement in \(S^3\) is a fiber bundle over the circle \(S^1\). In the model of the trefoil as the set of pairs \((z,w)\) of complex numbers such that \(|z|^2+|w|^2=1\) and \(z^2+w^3=0\), this fiber bundle has the Milnor map \(\phi(z,w)=(z^2+w^3)/|z^2+w^3|\) as its fibration, and a once-punctured torus as its fiber surface. Since the knot complement is Seifert fibered with boundary, it has a horizontal incompressible surface, which is also the fiber of the Milnor map.

Invariants

The Alexander polynomial of the trefoil knot is \[\Delta(t) = t - 1 + t^{-1},\] and the Conway polynomial is \[\nabla(z) = z^2 + 1.\] [2] The Jones polynomial is \[V(q) = q^{-1} + q^{-3} - q^{-4},\] and the Kauffman polynomial of the trefoil is \[L(a,z) = za^5 + z^2a^4 - a^4 + za^3 + z^2a^2-2a^2.\] The knot group of the trefoil is given by the presentation \[\langle x,y \mid x^2=y^3 \rangle\] or equivalently \[\langle x,y \mid xyx=yxy \rangle.\] [3] This group is isomorphic to the braid group with three strands.

Trefoils in religion and culture

As the simplest nontrivial knot, the trefoil is a common motif in iconography and the visual arts. For example, the common form of the triquetra symbol is a trefoil, as are some versions of the Germanic Valknut. (Gallery: an ancient Norse Mjöllnir pendant with trefoils; a tightly-knotted triquetra; a metallic Valknut in the shape of a trefoil; a trefoil knot used in aTV's logo.) In modern art, the woodcut Knots by M. C. Escher depicts three trefoil knots whose solid forms are twisted in different ways.
[4]

References

^ Shaw, George Russell (1933). Knots: Useful & Ornamental, p. 11. ISBN 978-0-517-46000-9.
^ "3_1", The Knot Atlas.
^ Weisstein, Eric W., "Trefoil Knot", MathWorld. Accessed: May 5, 2013.
^ The Official M.C. Escher Website — Gallery — "Knots"

External links

Wolfram Alpha: (2,3)-torus knot
Discrete & Continuous Dynamical Systems - B, March 2007, Volume 7, Issue 2 (ISSN: 1531-3492, eISSN: 1553-524X)

Abstract: In this paper, we analyze theoretically an age-structured population model with cannibalism. The model is nonlinear in that cannibalism decreases the birth rate based on total population density. We use degree theory to prove the existence of a unique solution. We also investigate the asymptotic stability of the solutions, and prove, under special hypotheses, local and global attractivity of a unique nontrivial steady state. We convert the problem to a delay differential equation and prove that quasiconvergence leads to global attraction. Some numerical simulations are presented exhibiting sustained oscillations, which may occur when the hypotheses of the theoretical analysis are not satisfied.

Abstract: In this paper we propose the analysis of the incompressible non-homogeneous Navier-Stokes equations with a nonlinear outflow boundary condition. This kind of boundary condition appears to be, in some situations, a useful way to perform numerical computations of the solution to the unsteady Navier-Stokes equations when the Dirichlet data are not given explicitly by the physical context on a part of the boundary of the computational domain. The boundary condition we propose, following previous works in the homogeneous case, is a relationship between the normal component of the stress and the outflow momentum flux taking into account inertial effects. We prove the global existence of a weak solution to this model both in 2D and 3D. In particular, we show that the nonlinear boundary condition under study holds for such a solution in a weak sense, even though the normal component of the stress and the density may not have traces in the usual sense.

Abstract: This paper is devoted to the study of travelling wave solutions for a simple epidemic model.
This model consists of a single scalar equation with age-dependence and spatial structure. We prove the existence of travelling waves for a continuum of admissible wave speeds, as well as some qualitative properties, like exponential decay and monotonicity with respect to the direction of the front's propagation. Our proofs extensively use the comparison principle, which allows us to construct suitable sub- and super-solutions or to use the classical sliding method to obtain qualitative properties of the wave front.

Abstract: The standard Melnikov method for analyzing the onset of chaos in the vicinity of a separatrix is used to explore the possibility of suppression of chaos in a certain class of dynamical systems. For a given dynamical system we apply an external perturbation, which we call the stabilizing perturbation, with the goal that after its action the chaos present in the system is suppressed. We apply this method to the nonlinear pendulum as a paradigm, and obtain some analytical expressions for the corresponding external perturbations that eliminate chaotic behavior. Numerical simulations of the pendulum show complete agreement with the analytical results.

Abstract: In this paper, we study a two-dimensional Burgers–Korteweg-de Vries-type equation with higher-order nonlinearities. A class of solitary wave solutions is obtained by means of the Divisor Theorem, which is based on the ring theory of commutative algebra. Our result indicates that the presentation of the traveling wave solution in [J. Phys. A (Math. Gen.) 35 (2002) 8253–8265] is incorrect; an explanation as to why this is so is given.

Abstract: In systems governing two-dimensional turbulence, surface quasi-geostrophic turbulence (more generally $\alpha$-turbulence), two-layer quasi-geostrophic turbulence, etc., there often exist two conservative quadratic quantities, one "energy"-like and one "enstrophy"-like.
In a finite inertial range there are in general two spectral fluxes, one associated with each conserved quantity. We derive here an inequality comparing the relative magnitudes of the "energy" and "enstrophy" fluxes for finite or infinitesimal dissipations, and for hyper- or hypo-viscosities. When this inequality is satisfied, as is the case in 2D turbulence, where the energy flux contribution to the energy spectrum is small, the subdominant part will be effectively hidden. In sQG turbulence, it is shown that the opposite is true: the downscale energy flux becomes the dominant contribution to the energy spectrum. A combination of these two behaviors appears to be the case in 2-layer QG turbulence, depending on the baroclinicity of the system.

Abstract: In this paper we give a rigorous mathematical proof of the instability of stationary radial flame ball solutions of a three-dimensional free boundary model which describes the combustion of a gaseous mixture with dust in a microgravity environment.

Abstract: In this paper, we introduce a class of one-dimensional non-autonomous dynamical systems that allow an explicit study of their orbits, of the associated variational equations, as well as of certain types of bifurcations. In a special case, the model class can be transformed into the non-autonomous Beverton-Holt equation. We use these model functions for analyzing various notions of non-autonomous transcritical and pitchfork bifurcations that have been recently proposed in the literature.

Abstract: An age-structured $s$-$i$-$s$ epidemic model with random diffusion is studied. The model is described by a system of nonlinear and nonlocal integro-differential equations. Finite differences along the characteristics in the age-time domain, combined with Galerkin finite elements in the spatial domain, are used in the approximation.
It is shown that a positive periodic solution to the discrete system resulting from the approximation can be generated if the initial condition is fertile. It is proved that the endemic periodic solution is globally stable once it exists.

Abstract: We discuss the applicability of Kolmogorov's theorem on the existence of invariant tori to the real Sun-Jupiter-Saturn system. Using computer algebra, we construct a Kolmogorov normal form defined in a neighborhood of the actual orbit in the phase space, giving sharp evidence of the convergence of the algorithm. While not a rigorous proof, we consider our calculation a strong indication that Kolmogorov's theorem applies to the motion of the two biggest planets of our solar system.

Abstract: A family of delay-differential models of the glucose-insulin system is introduced, whose members adequately represent the Intra-Venous Glucose Tolerance Test and allied experimental procedures of diabetological interest. All the models in the family admit positive bounded unique solutions for any positive initial condition and are persistent. The models agree with the physics underlying the experiments, and they all present a unique positive equilibrium point. Local stability is investigated in a pair of interesting member models: one, a discrete-delay differential system; the other, a distributed-delay system reducing to an ordinary differential system evolving on a suitably defined extended state space. In both cases conditions are given on the physical parameters in order to ensure the local asymptotic stability of the equilibrium point. These conditions are always satisfied, given the actual parameter estimates obtained experimentally. A study of the global stability properties is performed, but while from simulations it could be conjectured that the models considered are globally asymptotically stable, the sufficient stability criteria, formally derived, are not actually satisfied for physiological parameter values.
Given the practical importance of the models studied, further analytical work may be of interest to conclusively characterize their behavior.

Abstract: We show that in the limit of small Rossby number $\varepsilon$, the primitive equations of the ocean (OPEs) can be approximated by "higher-order quasi-geostrophic equations" up to an exponential accuracy in $\varepsilon$. This approximation assumes well-prepared initial data and is valid for a timescale of order one (independent of $\varepsilon$). Our construction uses the Gevrey regularity of the OPEs and a classical method to bound errors in higher-order perturbation theory.

Abstract: In this paper I will investigate the bifurcation and asymptotic behavior of solutions of the Swift-Hohenberg equation and the generalized Swift-Hohenberg equation with the Dirichlet boundary condition on a one-dimensional domain $(0,L)$. I will also study the bifurcation and stability of patterns in the $n$-dimensional Swift-Hohenberg equation with the odd-periodic and periodic boundary conditions. It is shown that each equation bifurcates from the trivial solution to an attractor $\mathcal A_\lambda$ when the control parameter $\lambda$ crosses $\lambda_{c}$, the principal eigenvalue of $(I+\Delta)^2$. The local behavior of solutions and their bifurcation to an invariant set near higher eigenvalues are analyzed as well.

Abstract: Let $f:M\to M$ be a continuous map of a locally compact metric space. Models of interacting populations often have a closed invariant set $\partial M$ that corresponds to the loss or extinction of one or more populations. The dynamics of $f$ subject to bounded random perturbations for which $\partial M$ is absorbing are studied. When these random perturbations are sufficiently small, almost sure absorption (i.e. extinction) for all initial conditions is shown to occur if and only if $M\setminus \partial M$ contains no attractors for $f$.
Applications to evolutionary bimatrix games and uniform persistence are given. In particular, it is shown that random perturbations of evolutionary bimatrix game dynamics result in almost sure extinction of one or more strategies.
A Simple Solution to a Difficult Sangaku Problem
Nikolaos Dergiades, Thessaloniki, Greece, May 15, 2017

Solution

Since the incircles of triangles $AJC,\,$ $AJD\,$ are symmetric relative to $AJ,\,$ the same holds for $AC,\,$ $AD,\,$ and hence $AC=AD=a,\,$ $CJ=JD=c.\,$ If $AJ=d,\,$ $DB=b,\,$ then $BC=a+b.$ From $\Delta CDB,\,$ $\displaystyle r=\frac{2[CDB]}{CD+DB+BC}=\frac{[CDB]}{\displaystyle \frac{a}{2}+b+c}.$ From $\Delta CAJ,\,$ $\displaystyle r=\frac{2[CAJ]}{a+c+d}=\frac{[CAD]}{a+c+d}.$ Hence, $\displaystyle \frac{b}{a}=\frac{[CDB]}{[CAD]}=\frac{\displaystyle \frac{a}{2}+b+c}{\displaystyle a+c+d}=\frac{\displaystyle \frac{a}{2}+c}{\displaystyle c+d},$ from which (1) $\displaystyle b=\frac{a(a+2c)}{2(c+d)}.$ Stewart's theorem gives $CA^2\cdot DB+CB^2\cdot AD=CD^2\cdot AB+AD\cdot DB\cdot AB,$ i.e., $a^2b+(a+b)^2a=4c^2(a+b)+ab(a+b),\,$ from which (2) $\displaystyle b=\frac{a(4c^2-a^2)}{2(a^2-2c^2)}.$ From (1) and (2) we conclude that $a^2-2c^2\ne 0\,$ and $\displaystyle c+d=\frac{a^2-2c^2}{2c-a}.$ In $\Delta CAJ,\,$ the Pythagorean theorem gives $a^2-c^2=d^2,\,$ so that $a^2-2c^2=(d+c)(d-c)\,$ and $\displaystyle a^2-2c^2=\frac{a^2-2c^2}{2c-a}\cdot\frac{a^2+2ac-6c^2}{2c-a},$ implying $3a=5c,\,$ or $a=5x,\,$ $c=3x,\,$ $d=4x\,$ and, since $a=(c-r)+(d-r),\,$ we get $\displaystyle r=x=\frac{d}{4}=\frac{AJ}{4}.$
Copyright © 1996-2018 Alexander Bogomolny
First, I loaded the file of the monthly data, isolated the third temperature-like column, the global ocean temperature anomaly, and calculated the linear regressions. It's straightforward to use one simple Mathematica command to compute the slope of the linear regression, but the non-trivial addition I made was an estimate of the error margin of the resulting slope. My logic is that for different initial and final months of the interval, you get different slopes. Then you draw the histogram, and the width of this histogram approximately informs you about the error margin of the slope. So I picked the interval from the \(i\)-th month of the dataset through the \(j\)-th month from the end of the dataset and allowed \(i,j\) to be integers between one and fifty. One gets 2,500 different slopes from the linear regression. When the month-on-month average slope is multiplied by 1,200 to get the warming per century, those 2,500 slopes are distributed along the following histogram. One may easily compute the mean value 1.33 °C per century while the root-mean-square width of the curve is 0.12 °C.\[ \frac{\Delta T}{\Delta t}\sim (1.33\pm 0.12)\ {}^\circ {\rm C} / {\rm century} \] It's also possible to replace the number 50 by another number of months, like 60, and the qualitative conclusions are unchanged. This really means that in the data from the last 33 years – when the observed warming trend was faster than in longer or earlier intervals, so we're likely to get an overestimate – the warming trend was just 1.35 °C per century with a relatively small error margin. In particular, we can't exclude that the "right" warming trend is below 1 °C per century. However, we can rather reliably exclude the hypothesis that the centennial trend exceeds 2 °C per century.
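The endpoint-variation procedure can be re-sketched in Python on synthetic data (the UAH series itself is not reproduced here; the 1.3 °C/century trend and the noise level below are made-up stand-ins chosen only to illustrate the estimator):

```python
import random
import statistics

# Synthetic monthly anomalies: a linear trend of 1.3 °C/century plus
# Gaussian noise stands in for the UAH ocean-temperature column.
random.seed(0)
n_months = 400
true_slope = 1.3 / 1200.0  # °C per month
series = [true_slope * t + random.gauss(0.0, 0.1) for t in range(n_months)]

def ols_slope(y):
    """Least-squares slope of y against 0..len(y)-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((x - xbar) * (v - ybar) for x, v in enumerate(y))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Vary the first month i and the number j of months dropped from the end,
# collect the 2,500 slopes, and quote mean ± RMS width, as in the text.
trends = [ols_slope(series[i : n_months - j]) * 1200
          for i in range(50) for j in range(50)]
mean = statistics.mean(trends)
width = statistics.pstdev(trends)
print(f"{mean:.2f} ± {width:.2f} °C/century")
```

With a long window and modest noise the recovered mean lands near the injected trend, and the spread of the 2,500 slopes plays the role of the error margin.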
If you need it, here is the very simple Mathematica code I used:

a = Import["http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt", "Table"];
aaa = a[[2 ;; -12]][[All, 3]];
more = a[[2 ;; -12]][[All, 5]];
laaa = Length[aaa]
ListLinePlot[more]
trends = {};
For[i = 1, i <= 50, i++,
 For[j = 1, j <= 50, j++,
  morekus = more[[i ;; -j]];
  trend = D[Normal[LinearModelFit[morekus, x, x]], x]*1200;
  trends = trends~Join~{trend};
 ]
]
Histogram[trends]
avtrend = Total[trends]/2500
Sqrt[Total[trends^2 - avtrend^2]/2500]

I chose the sea temperatures because they seem to be less variable in the short run; the nearly white noise apparently contributing to the temperatures has a smaller prefactor. The latest, May 2013 temperature anomaly, which sits at –0.01 °C, wasn't incorporated into the calculation yet. It would reduce the trends, but only by a very tiny amount. Needless to say, nothing guarantees that the underlying trend – implicitly assumed above to be linear – will remain constant in the future. Climatologists often naively describe the temperatures as a combination of a superfast, nearly white noise (very high frequencies of the randomness) and a superslow, nearly linear and permanent increase of the temperature (very low frequencies). In reality, there are contributions from many characteristic intermediate timescales, including one or two years or so from El Niño cycles, decades from PDO and AMO and all these things, and probably many other sub-centennial, near-centennial, and multi-centennial cycles, some of which are more regular or periodic or predictable than others, while others are chaotic. Even the assumption that the cherry-picked high trend is going to continue doesn't look worrisome in any sense, and the underlying trends are safely lower than the lower end point of the IPCC interval. Note that the IPCC is expected to publish the fifth report, AR5, later in 2013.
I am not too curious what will happen but I am still approximately infinitesimally curious how the usual talking points by these mostly dishonest hired guns will change or not change relatively to AR4. ;-)
We denote the proportion of consecutive occurrences of a pattern \(\pi\) in a permutation \(\sigma\) by \(\widetilde{\text{c$-$}occ}(\pi, \sigma) = \frac{\text{# of consecutive occurrences of $\pi$ in $\sigma$}}{|\sigma|}.\) We consider the consecutive-pattern limiting sets, called the feasible region for consecutive patterns, defined for every \(k\in\mathbb{N}\) by \(P_k := \left\{ \vec{v}\in[0,1]^{\mathcal{S}_k} \,\middle|\, \exists (\sigma^m)_{m\in\mathbb{N}} \text{ such that } |\sigma^m|\to\infty \text{ and } \widetilde{\text{c$-$}occ}(\pi, \sigma^m ) \to \vec{v}_{\pi}, \forall \pi\in\mathcal{S}_k \right\}.\) We are able to obtain a full description of the feasible region \(P_k\) as the cycle polytope of a specific graph, called the overlap graph \(Ov(k)\). Definition: The graph \(Ov(k)\) is a directed multigraph with labeled edges, where the vertices are the elements of \(\mathcal{S}_{k-1}\) and for every \(\pi\in\mathcal{S}_{k}\) there is an edge labeled by \(\pi\) from the pattern induced by the first \(k-1\) indices of \(\pi\) to the pattern induced by the last \(k-1\) indices of \(\pi\). We display below the overlap graph \(Ov(4)\): the six vertices of the graph are painted in red and the edges are drawn as labeled arrows. Note that in order to obtain a clearer picture we did not draw multiple edges, but we use multiple labels (for example, the edge from 231 to 312 is labeled with the permutations 3412 and 2413 and should be thought of as two distinct edges labeled with 3412 and 2413 respectively). Definition: Let \(G=(V,E)\) be a directed multigraph. For each non-empty cycle \(\mathcal{C}\) in \(G\), define \(\vec{e}_{\mathcal{C}}\in \mathbb{R}^{E}\) so that \((\vec{e}_{\mathcal{C}})_e := \frac{\text{# of occurrences of $e$ in $\mathcal{C}$}}{|\mathcal{C}|}, \quad \text{for all}\quad e\in E.\) We define the cycle polytope of \(G\) to be the polytope \(P(G) := \text{conv}\{\vec{e}_{\mathcal{C}} \,|\, \mathcal{C} \text{ is a simple cycle of } G\}\). Our first main result is the following. Theorem: \(P_k\) is the cycle polytope of the overlap graph \(Ov(k)\). Its dimension is \(k! - (k-1)!
\) and its vertices are given by the simple cycles of \(Ov(k)\). In addition, we also determine the equations that describe the polytope. In the picture below you can see the four-dimensional polytope \(P_3\) given by the six patterns of size three. We highlight in light blue one of the six three-dimensional facets of \(P_3\). This facet is a pyramid with square base. The polytope itself is a four-dimensional pyramid whose base is the highlighted facet. In order to prove the above theorem, we first prove general results for cycle polytopes of directed multigraphs and then transfer them to the specific case of overlap graphs. Specifically, we are able to prove the following. Theorem: The cycle polytope of a strongly connected directed multigraph \(G=(V,E)\) has dimension \(|E|-|V|\). We also determine the equations defining the polytope and we show that all its faces can be identified with specific subgraphs of \(G\). This gives us a description of the face poset of the polytope. Further, the computation of the dimension is generalized to any cycle polytope, even those that do not come from strongly connected graphs. In the following picture we display the face structure of the cycle polytope of a graph. On the left-hand side of the picture (inside the dashed black ball) we have a graph \(G\) with two vertices and five edges. On the right-hand side, we draw the associated cycle polytope \(P(G)\), which is a pyramid with square base. The blue dashed balls show the simple cycles corresponding to the five vertices of the polytope. We also underline the relation between two edges of the polytope (in purple and orange respectively), a face (in green), and the corresponding subgraphs. Note that, for example, the graph corresponding to the green face is just the union of the three graphs corresponding to the vertices of that face.
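To make the quantity \(\widetilde{\text{c$-$}occ}\) concrete, here is a small Python sketch (not from the paper) that counts how many windows of \(k\) consecutive entries of \(\sigma\) are order-isomorphic to a pattern \(\pi\), and divides by \(|\sigma|\); the example permutation is an arbitrary choice.

```python
from itertools import permutations

def pattern_of(window):
    """Relative-order pattern of a sequence, as a tuple over 1..k."""
    ranks = sorted(range(len(window)), key=lambda i: window[i])
    out = [0] * len(window)
    for rank, i in enumerate(ranks, start=1):
        out[i] = rank
    return tuple(out)

def c_occ(pi, sigma):
    """Proportion of consecutive occurrences of pattern pi in sigma,
    i.e. (# windows of length |pi| order-isomorphic to pi) / |sigma|."""
    k, n = len(pi), len(sigma)
    hits = sum(1 for i in range(n - k + 1)
               if pattern_of(sigma[i : i + k]) == tuple(pi))
    return hits / n

sigma = (2, 1, 4, 3, 6, 5, 8, 7)   # an alternating permutation, for example
# Over all patterns of size 3, the proportions sum to (n - k + 1) / n.
props = {pi: c_occ(pi, sigma) for pi in permutations(range(1, 4))}
print(props[(1, 3, 2)], props[(2, 1, 3)], sum(props.values()))
```

For this \(\sigma\) only the patterns 132 and 213 occur consecutively, each in three of the six windows, so each gets proportion 3/8.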
ISSN: 1531-3492 eISSN: 1553-524X Discrete & Continuous Dynamical Systems - B, November 2009, Volume 12, Issue 4 Abstract: We study the existence of travelling-waves and local well-posedness in a subspace of $C_b^1(\R)$ for a nonlinear evolution equation recently proposed by Andrew C. Fowler to describe the dynamics of dunes. The travelling-waves we obtained, however, were more bore-like than solitary-wave-like. Abstract: Explosive instabilities in spatially discrete reaction-diffusion systems are studied. We identify classes of initial data developing singularities in finite time and obtain predictions of the blow-up times, whose accuracy is checked by comparison with numerical solutions. We present averaged and local blow-up estimates. Local blow-up results show that it is possible to have blow-up after blow-up. Conditions excluding or implying blow-up at space infinity are discussed. Abstract: We study the asymptotic behavior of the solution of the Laplace equation in a domain perforated along the boundary. Assuming that the boundary microstructure is random, we construct the limit problem and prove the homogenization theorem. Moreover, we apply these results to some spectral problems. Abstract: Cancer is one of the greatest killers in the world, particularly in western countries. A lot of medical research effort is devoted to cancer, and mathematical modeling must be considered as an additional tool for physicians and biologists to understand cancer mechanisms and to determine adapted treatments. Metastases account for much of the seriousness of cancer. In 2000, Iwata et al. [9] proposed a model which describes the evolution of an untreated metastatic tumor population. We provide here a mathematical analysis of this model, which brings us to the determination of a Malthusian rate characterizing the exponential growth of the population. We provide as well a numerical analysis of the PDE given by the model.
Abstract: We construct an auto-validated algorithm that calculates a close-to-identity change of variables which brings a general saddle point into a normal form. The transformation is robust in the underlying vector field, and is analytic on a computable neighbourhood of the saddle point. The normal form is suitable for computations aimed at enclosing the flow close to the saddle, and the time it takes a trajectory to pass it. Several examples illustrate the usefulness of this method. Abstract: In this paper, we answer the question under which conditions the porous-medium equation with convection and with periodic boundary conditions possesses gradient-type Lyapunov functionals (first-order entropies). It is shown that weighted sums of first-order and zeroth-order entropies are Lyapunov functionals if the weight for the zeroth-order entropy is sufficiently large, depending on the strength of the convection. This provides new a priori estimates for the convective porous-medium equation. The proof is based on an extension of the algorithmic entropy construction method, which relies on systematic integration by parts, formulated as a polynomial decision problem. Abstract: Biological invasion theory is an important subject in biological control, environmental preservation, and the propagation of infectious diseases. I propose a propagation speed of traveling waves induced by an invasion of alien species for two-prey, one-predator models in which the commensalism induced by a predator between two prey species is considered. I investigate the spreading phenomenon and the minimal propagation speed for the two cases in which the invading species is a single species or more than one species. By numerical simulations and mathematical analysis, I conclude that the minimal speed depends only on the mobility of the invasive species – indeed, on that of a single invader species even if two invader species invade at the same time.
It is also shown that the commensalism via the predator species affects the spreading phenomena and the propagation speed, which depend on the type and the number of invasive species. Abstract: We formulate and analyze a deterministic mathematical model which incorporates some basic epidemiological features of the co-dynamics of malaria and tuberculosis. Two sub-models, namely the malaria-only and TB-only sub-models, are considered first. Sufficient conditions for the local stability of the steady states are presented. Global stability of the disease-free steady state does not hold because the two sub-models exhibit backward bifurcation. The dynamics of the dual malaria-TB-only sub-model is also analyzed. It has different dynamics from the malaria-only and TB-only sub-models: the dual malaria-TB-only model has no positive endemic equilibrium whenever $R_{MT}^d<1$; its disease-free equilibrium is globally asymptotically stable whenever the reproduction number for dual malaria-TB co-infection only satisfies $R_{MT}^d<1$; and it does not exhibit the phenomenon of backward bifurcation. Graphical representations of this phenomenon are shown, while numerical simulations of the full model are carried out in order to determine whether the two diseases will co-exist whenever their partial reproduction numbers exceed unity. Finally, we perform sensitivity analysis on the key parameters that drive the disease dynamics in order to determine their relative importance to disease transmission. Abstract: We consider an S-I(-R) type infectious disease model where the susceptibles differ by their susceptibility to infection. This model presents several challenges. Even existence and uniqueness of solutions is non-trivial. Further, it is difficult to linearize about the disease-free equilibrium in a rigorous way. This makes disease persistence a necessary alternative to linearized instability in the superthreshold case.
Application of dynamical systems persistence theory faces the difficulty of finding a compact attracting set. One can work around this obstacle by using integral equations and limit equations, making it a special case of a persistence theory where the state space is just a set. Abstract: We derive an age-structured population model for the growth of a single species on a 2-dimensional (2D) lattice strip with Neumann boundary conditions. We show that the dynamics of the mature population is governed by a lattice reaction-diffusion system with delayed global interaction. Using the theory of asymptotic speed of spread and monotone traveling waves for monotone semiflows, we obtain the asymptotic speed of spread $c^*$, the nonexistence of traveling wavefronts with wave speed $0 < c < c^*$, and the existence of a traveling wavefront connecting the two equilibria $w\equiv 0$ and $w\equiv w^+$ for $c\geq c^*$. Abstract: In this paper, we study the error estimate of the $\theta$-scheme for the backward stochastic differential equation $y_t=\varphi(W_T)+\int_t^Tf(s,y_s)ds-\int_t^Tz_sdW_s$. We show that this scheme is of first-order convergence in $y$ for general $\theta$. In particular, for the case of $\theta=\frac{1}{2}$ (i.e., the Crank-Nicolson scheme), we prove that this scheme is of second-order convergence in $y$ and first-order in $z$. Some numerical examples are also given to validate our theoretical results.
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Amanuel Fessahatsion Articles written in Pramana – Journal of Physics Volume 68 Issue 6 June 2007 pp 943-958 Research Articles The $\Lambda \Lambda$ binding energies ($B_{\Lambda \Lambda}$) of the s- and p-shell hypernuclei are calculated variationally in the cluster model, and the multidimensional integrations are performed using Monte Carlo. A variety of phenomenological 𝛬-core potentials consistent with the 𝛬-core energies and a wide range of simulated s-state $\Lambda \Lambda$ potentials are taken as input. The $B_{\Lambda \Lambda}$ of $_{\Lambda \Lambda}^{6}$He is explained, and $_{\Lambda \Lambda}^{5}$He and $_{\Lambda \Lambda}^{5}$H are predicted to be particle stable in the $\Lambda \Lambda$-core model. The $_{\Lambda\Lambda}^{10}$Be in the $\Lambda \Lambda \alpha \alpha$ model is overbound for combinations of $\Lambda \Lambda$ and $\Lambda \alpha$ potentials. A phenomenological dispersive three-body force, $V_{\Lambda \alpha \alpha}$, consistent with the $B_{\Lambda}$ of $_{\Lambda}^{9}$Be in the $\Lambda \alpha \alpha$ model underbinds $_{\Lambda \Lambda}^{10}$Be. The incremental $\Delta B_{\Lambda \Lambda}$ values for the s- and p-shell cannot be reconciled, consistent with the finding of earlier analyses.
I've been studying quantum mechanics and classical mechanics for a little while now, and I still don't feel as though I fully understand the motivation for some of our choices in Heisenberg mechanics. For example, it clearly isn't a coincidence that the classical observables (functions of coordinates and their conjugate momenta) and the quantum observables (Hermitian operators) seem to form analogous Lie algebras with the Poisson bracket and commutator respectively. But it isn't clear to me why this is true. Is there some deep meaning contained in this statement? Or is it more indicative of the fact that in constructing a quantum model of the universe we took substantial inspiration from our intuition and previous study of classical mechanics? Along these same lines, what motivates the move from classical functions on phase space to Hermitian operators? I understand why operators corresponding to observables must be self-adjoint (the eigenvalues must be real), but I don't understand what motivates the move to operators in general. Why would we expect that operators on a Hilbert space would give physical predictions? Part of my confusion here may also come from the fact that it isn't entirely clear to me what exactly these operators do in all cases. For example, I get that $\langle \psi | \hat{x} | \psi \rangle$ corresponds to the expected position of a particle in state $|\psi\rangle$, but it's much less obvious what the $\hat{x}$ operator does to a state in general. In some cases (such as $J_\pm$ when considering angular momentum), it's clear what the operator does to a state (raises or lowers eigenstates of $J_z$), but in all these cases the operator is non-Hermitian. Perhaps the answer to this question is simply that the model gives accurate predictions and so we use it, but I'm wondering if there's a better way to think about these things.
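For reference, the correspondence the question gestures at can be stated concretely; this is standard background (Dirac's canonical quantization prescription), not a full answer to the question:

```latex
\{x, p\} = 1
\qquad\text{and, under Dirac's rule}\quad
\{A, B\} \;\longmapsto\; \frac{1}{i\hbar}\,\bigl[\hat{A}, \hat{B}\bigr],
\qquad
[\hat{x}, \hat{p}] = i\hbar .
```

The classical Poisson-bracket Lie algebra is carried to the commutator Lie algebra of operators, which is exactly the structural analogy noted above.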
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Description Jets are modified in relativistic heavy-ion collisions due to jet-medium interactions. Measurements of jet-medium modifications have so far been obscured by the large underlying anisotropic flow background. In this analysis we devise a novel method to subtract the flow background using data. We select events with a large recoil momentum ($P_x$) within a pseudorapidity ($\eta$) window of $0.5<|\eta|<1$ from a high-$p_T$ trigger particle to enhance the away-side jet population. Di-hadron azimuthal correlations are analyzed with associated particles in two $\eta$ ranges ($-0.5<\eta<0$ and $0<\eta<0.5$) symmetric about midrapidity, one ("close-region") close to and the other ("far-region") far away from the $P_x$ selection $\eta$ window. The away-side jet contributes to the close-region but not as much to the far-region due to the large $\eta$ gap, while the flow contributions are equal. Assuming the $\Delta\phi$ shape of jet-like correlations does not depend on $\Delta\eta$, the correlation difference measures the away-side jet shape with the anisotropic flow background cleanly subtracted. The away-side jet correlation width is studied in Au+Au collisions at $\sqrt{s_{_\mathrm{NN}}}=200$ GeV as a function of centrality and associated particle $p_T$. The width is found to increase with centrality at modest to high associated particle $p_T$. The increase can arise from jet-medium modifications, event averaging of away-side jets deflected by medium flow, and/or simply nuclear $k_T$ broadening. To further discriminate various physics mechanisms, a three-particle correlation analysis is conducted with robust flow background subtraction, also using data. Based on this analysis we discuss possible physics mechanisms of away-side broadening of jet-like correlations. Presentation type: Oral
Hoping someone can help with this. It's a simple question, but I can't seem to find the answer anywhere: I'm looking at the basic Michelson interferometer experiment, where you measure the wavelength of a laser source by changing the relative path lengths, using the movable mirror. The equation I keep coming across for this is $\lambda = \frac{2d}{N}$. But when I try to work that out for myself, I get the same equation with the refractive index in there. My derivation: the optical path length is $OPL = nL$. For constructive interference between the two paths of the interferometer, you need $\Delta OPL = N\lambda$ (where $N$ is the number of fringes you 'count'). If I change the location of the movable mirror by a length $d$, then $\Delta OPL = 2nd$ (twice $d$ because the light traverses that path twice). So $2dn = N\lambda$ and $\lambda = \frac{2dn}{N}$. However, the 'standard' equation I see in online lab manuals is $\lambda = \frac{2d}{N}$. Is $n$ just neglected because it's close to 1 for air? Or is there something deeper here?
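A quick numeric sketch shows how large the discrepancy between the two formulas actually is. The mirror displacement, fringe count, and $n \approx 1.00027$ for room-temperature air below are assumed round numbers, not values from the question:

```python
# How much does the refractive index of air shift the inferred wavelength?
n_air = 1.00027          # assumed round value for room-temperature air
d = 0.158e-3             # mirror displacement in metres (hypothetical reading)
N = 500                  # fringe count (hypothetical)

lam_vacuum_formula = 2 * d / N        # the "standard" lab-manual formula
lam_with_n = 2 * d * n_air / N        # the formula including the medium

rel_diff = (lam_with_n - lam_vacuum_formula) / lam_vacuum_formula
print(f"{lam_vacuum_formula*1e9:.1f} nm vs {lam_with_n*1e9:.1f} nm "
      f"({rel_diff:.2%} difference)")
```

The relative difference is just $n - 1$, a few parts in $10^4$ for air, which is why lab manuals typically drop it.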
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...