From this circuit, and from reading Farhi et al., I added the following: a 5th qubit to serve as a readout (ancilla) qubit for measuring the predicted label; a Pauli-X gate so that the readout qubit starts in $|1\rangle$, following Farhi's paper, before measuring the predicted label; an n-bit Toffoli gate to measure the ansatz indirectly, i.e. to probe the effect of the $(\beta, \gamma)$ in the circuit as these are changed; and a Hadamard layer before the operations shown in the picture above. I added this first Hadamard layer in order to apply the optimization algorithm successfully; without it, the optimization never converges. Having a readout qubit that can measure the ansatz, and following the QNN paper, one defines loss$(\theta,z) = 1 - l(z)\,P(\phi)$, where $P(\phi)$ is the result of measuring the output of the readout qubit as you change $\beta, \gamma$, and $l(z)$ is the label of the quantum bit. It turns out that, for the purposes of implementation, this relation can be stated as loss$(\theta,z) \approx 1 - [P(\phi)]^2$. I have applied this technique to this circuit, and while in some cases the technique performs well, in other instances I see this output: the plot on the left shows the loss function as the circuit parameters are varied; the plot on the right shows the state of the readout qubit during the optimization process. Presumably, since the readout qubit is converted to $|1\rangle$ after the X gate, minimizing the loss measured in this system should return the readout qubit to this state. That said, if you find this assumption questionable too, please let me know. And it can get much worse. So, here are my questions. Clearly the loss function shows something is not right with this setup. Granted, the circuit itself may be flawed; for one, I am not sure whether acting with a Pauli-X on the readout qubit and then an n-Toffoli gate was the right thing to do, and after these results I am even less sure. How do you "debug" a circuit with this in mind?
As a general question: given that this toy model makes it clear one cannot willy-nilly apply QNN techniques to any random circuit, what criteria, if any, must a Hamiltonian or random quantum circuit satisfy to be worth applying this technique to? Are there better ways to extract information from the circuit for an optimization method than the n-Toffoli approach I took? For example, I know it is a silly question, but can you illustrate how useful it can be to apply measurement gates to all the qubits? Suppose you are shown this picture of the circuit: how can one take the circuit layout and work back to the Hamiltonian for the circuit? I ask because I am both fascinated and a bit confused by how the Ising, Fermi-Hubbard, and many other models are represented by a Hamiltonian that can also be expressed as a quantum circuit through an ansatz. Just for reference, the optimization algorithm I implemented is the one discussed in Farhi et al., but with Adam swapped out for SGD. A paper out of Google X two weeks ago showed it could be successfully applied to machine learning applications on both quantum circuits and tensor networks. Sorry for the long post. Thank you for reading, and thanks in advance for your feedback.
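To make the loss concrete, here is a minimal classical toy sketch of minimizing loss $\approx 1 - P^2$ with finite-difference SGD. This is not a quantum simulation: the surrogate readout probability `P(theta)`, the starting point, and the learning rate are all assumptions for illustration, not the circuit from the post.

```python
import math

# Toy stand-in for the circuit: P(theta) plays the role of the probability
# of reading the ancilla in |1>. In the real setup this comes from measuring
# the readout qubit; here we use a smooth surrogate (an assumption).
def P(theta):
    return math.cos(theta / 2) ** 2

def loss(theta):
    # loss(theta, z) ~ 1 - [P(phi)]^2, as in the post
    return 1.0 - P(theta) ** 2

def sgd(theta, lr=0.3, steps=200, eps=1e-4):
    for _ in range(steps):
        # central finite-difference gradient estimate (the parameter-shift
        # rule would be the quantum-native choice on real hardware)
        g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * g
    return theta

theta = sgd(1.2)
print(loss(theta))  # approaches 0 as P -> 1, i.e. readout returns to |1>
```

If the real loss landscape behaved like this surrogate, the readout probability would be driven back to 1; the plots in the post suggest the actual landscape is much less benign.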
Also read: Determinant Formulas, Mean/Median/Mode Formulas, Set Theory. Matrix Algebra: The matrix has emerged as a great mathematical tool that simplifies our work to a great extent. Where previously we knew only long, direct methods of calculation, this tool makes them easy. With the concept of matrix algebra we can obtain compact and simple methods for solving systems of linear equations and other algebraic calculations. Matrix algebra can be treated like a puzzle game; you should enjoy playing it. A matrix is an arrangement of numbers, symbols, or expressions in several rows and columns; by definition, a matrix is an ordered rectangular array of numbers or functions. Let's see an example: \(\begin{bmatrix} 2 & 5 & 1\\ 7 & 9 & 3\\ -4 & 5 & 6 \end{bmatrix}\) The horizontal lines, read left to right, are the rows of the matrix above; the vertical lines, read top to bottom, are its columns. Order of a matrix: a matrix with m rows and n columns is said to be of order m × n. We can express a matrix of order m × n as A = \( [a_{ij}]_{m \times n} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} _{m \times n}\) Also note that 1 ≤ i ≤ m, 1 ≤ j ≤ n, and i, j ∈ ℕ.
Types of matrices: 1. Row matrix: \([a_{ij}]_{1 \times n}\), e.g. \(\begin{bmatrix} a & b & c \end{bmatrix}\) 2. Column matrix: \([a_{ij}]_{m \times 1}\), e.g. \(\begin{bmatrix} a\\ b \end{bmatrix}\) 3. Square matrix: m = n, e.g. \(\begin{bmatrix} a & b\\ c & d \end{bmatrix}\) 4. Diagonal matrix: \([a_{ij}]_{m \times m}\) with \(a_{ij} = 0\) when i ≠ j, e.g. \(\begin{bmatrix} a & 0 & 0\\ 0 & b & 0 \\ 0 & 0 & c\\ \end{bmatrix}\) 5. Scalar matrix: \([a_{ij}]_{n \times n}\) with \(a_{ij} = 0\) when i ≠ j and \(a_{ij} = a\) when i = j, e.g. \(\begin{bmatrix} a & 0 & 0\\ 0 & a & 0 \\ 0 & 0 & a\\ \end{bmatrix}\) 6. Identity matrix: \([a_{ij}]_{n \times n}\) with \(a_{ij} = 1\) when i = j and \(a_{ij} = 0\) when i ≠ j, e.g. \(\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1\\ \end{bmatrix}\) 7. Zero matrix: every element equals 0, e.g. \(\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0 \\ 0 & 0 & 0\\ \end{bmatrix}\) Addition and subtraction of matrices: in matrix algebra, the addition or subtraction of two matrices is only possible when both matrices are of the same order. Addition is element-wise: add each element of one matrix to the corresponding element of the other, i.e. \(a_{ij} + b_{ij} = c_{ij}\): \(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\) + \(\begin{bmatrix} e & f \\ g & h \end{bmatrix}\) = \(\begin{bmatrix} a+e & b + f \\ c + g & d + h \end{bmatrix}\) Subtraction works the same way, element by element, i.e. \(a_{ij} - b_{ij} = d_{ij}\): \(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\) – \(\begin{bmatrix} e & f \\ g & h \end{bmatrix}\) = \(\begin{bmatrix} a – e & b – f \\ c – g & d – h \end{bmatrix}\) Matrix multiplication: multiplication in matrix algebra comes in two types. Scalar multiplication: if A = \([a_{ij}]_{m \times n}\) is a matrix and k is a scalar, then kA is the matrix obtained by multiplying each element of A by the scalar k.
Two matrices A and B can be multiplied if and only if the number of columns of A equals the number of rows of B. Note: \(A_{m \times n} \times B_{n \times q} = M_{m \times q}\). We can understand matrix multiplication by the following rule: \(\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}_{3 \times 3} \times \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23}\\ b_{31} & b_{32} & b_{33} \end{bmatrix}_{3 \times 3}\) =\(\begin{bmatrix} (a_{11}\times b_{11} + a_{12}\times b_{21}+ a_{13}\times b_{31}) & (a_{11}\times b_{12} + a_{12}\times b_{22}+ a_{13}\times b_{32}) & (a_{11}\times b_{13} + a_{12}\times b_{23}+ a_{13}\times b_{33}) \\ (a_{21}\times b_{11} + a_{22}\times b_{21}+ a_{23}\times b_{31}) & (a_{21}\times b_{12} + a_{22}\times b_{22}+ a_{23}\times b_{32}) & (a_{21}\times b_{13} + a_{22}\times b_{23}+ a_{23}\times b_{33})\\ (a_{31}\times b_{11} + a_{32}\times b_{21}+ a_{33}\times b_{31}) & (a_{31}\times b_{12} + a_{32}\times b_{22}+ a_{33}\times b_{32}) & (a_{31}\times b_{13} + a_{32}\times b_{23}+ a_{33}\times b_{33}) \end{bmatrix}_{3 \times 3}\) Properties of matrix algebra: two matrices A and B are equal, A = \([a_{ij}]\) = \([b_{ij}]\) = B, only if (i) A and B are of the same order, and (ii) \(a_{ij} = b_{ij}\) for all possible values of i and j. Scalar multiple: kA = \(k[a_{ij}]_{m \times n} = [k(a_{ij})]_{m \times n}\). Negative of a matrix: –A = (–1)A, and A – B = A + (–1)B. Matrix commutativity: A + B = B + A. Matrix associativity: (A + B) + C = A + (B + C), where A, B and C are of the same order. Distributive laws: k(A + B) = kA + kB, where A and B are of the same order and k is a constant; (k + l)A = kA + lA, where k and l are constants.
Matrix multiplication properties: (i) A(BC) = (AB)C, (ii) A(B + C) = AB + AC, (iii) (A + B)C = AC + BC. Transpose of a matrix: if A = \([a_{ij}]_{m \times n}\), then A′ or \(A^T\) = \([a_{ji}]_{n \times m}\). Properties of the transpose: (A′)′ = A, (kA)′ = kA′, (A + B)′ = A′ + B′, (AB)′ = B′A′. Types of matrices under transposition: Symmetric matrix: A is a symmetric matrix only if A′ = A. Skew-symmetric matrix: A is a skew-symmetric matrix only if A′ = –A. Note: any square matrix can be represented as the sum of a symmetric and a skew-symmetric matrix. Inverse of a matrix: if A and B are two square matrices such that AB = BA = I, then B is the inverse matrix of A, denoted \(A^{-1}\), and A is likewise the inverse of B. The inverse of a square matrix, if it exists, is always unique; this is a great feature of matrix algebra. It is given by \(A^{-1} = \frac{adj\: A}{|A|}\). Matrix algebra problem: add the following matrices: \(\begin{bmatrix} 1 & 4\\ 2 & 9\\ 6 & 11 \end{bmatrix}_{3 \times 2}\) + \(\begin{bmatrix} 2 & 5\\ 7 & 16\\ 9 & 17 \end{bmatrix}_{3 \times 2}\) Solution: since the numbers of rows and columns of the first matrix equal those of the second matrix, matrix addition is possible: \(\begin{bmatrix} 1 & 4\\ 2 & 9\\ 6 & 11 \end{bmatrix}_{3 \times 2}\) + \(\begin{bmatrix} 2 & 5\\ 7 & 16\\ 9 & 17 \end{bmatrix}_{3 \times 2}\) = \(\begin{bmatrix} 3 & 9\\ 9 & 25\\ 15 & 28 \end{bmatrix}_{3 \times 2}\) We added all the corresponding elements.
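The operations above can be checked quickly in NumPy; the matrices below are small made-up examples, not taken from the text.

```python
import numpy as np

A = np.array([[1, 4], [2, 9]])
B = np.array([[2, 5], [7, 16]])

# addition and subtraction are element-wise (same order required)
S = A + B
D = A - B

# matrix product: number of columns of A must equal number of rows of B
P = A @ B

# transpose identity (AB)' = B'A'
assert np.array_equal(P.T, B.T @ A.T)

# inverse via adj A / |A| agrees with np.linalg.inv when det != 0
detA = np.linalg.det(A)              # 1*9 - 4*2 = 1
adjA = np.array([[9, -4], [-2, 1]])  # 2x2 adjugate: swap diagonal, negate off-diagonal
assert np.allclose(np.linalg.inv(A), adjA / detA)

print(S)
```

Trying `A + np.ones((3, 2))` raises a shape error, which is NumPy enforcing the same-order rule stated above.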
Let $x_n$ be a sequence of real numbers such that $$ \sum_{n=1}^\infty |x_n|^2 = \infty. $$ Then there exists a sequence $y_n$ of complex numbers such that $$ \sum_{n=1}^\infty |y_n|^2 < \infty \text{ and } \sum_{n=1}^\infty y_n x_n = \infty. $$ This problem looks innocent enough, but I cannot seem to figure out how to start. I was thinking that if $N_n$ is a sequence of integers such that $\sum_{k=1}^{N_n} x_k^2 > n$, then I can set $L_n := \sum_{k=1}^{N_n} x_k^2$ and $y_n := x_n/L_n$. Then for $n \geq N_n$ I have $1/L_n < 1/n$. I am not really sure how to go on. I was thinking that I could maybe get something like the harmonic series for the series over the product, and something like the series over $1/n^2$ for the series over $y_n^2$. How to do this is just not clear to me yet. Edit 1: Added that $x_n$ should be real and $y_n$ complex, and added absolute values. Context: This problem arose in the last lecture of the measure and integration theory class that I just took. The lecturer introduced, rather cursorily, measures derived from weak* limits of Riesz products. One defines the functions $$ \mu_{(a_j)}^{n} = \prod_{j = 1}^{n} (1 + a_j \cos(\lambda_j x)) $$ where $(a_j)$ is a sequence of real numbers such that $|a_j| \leq 1$ and $(\lambda_j)$ is a lacunary sequence, i.e. one such that $\lambda_{j+1}/\lambda_{j} \geq 3$, or more concretely: if a real number $x$ can be written as $$ x = \sum_{j = 1}^{+\infty} \epsilon_j \lambda_j $$ with $\epsilon_j \in \{-1, 0, +1\}$, then there is a unique sequence $(\epsilon_j)$ which produces $x$ in combination with $(\lambda_j)$. Now one can show that the weak* limit of these $\mu_{(a_j)}^{n}$ is a measure, which we call $\mu_{(a_j)}$, since it still depends on the sequence $(a_j)$.
The problem arose in the proof of the following theorem: let $(a_j)$ and $(b_j)$ be two sequences as above and $\mu_{(a_j)}$ and $\mu_{(b_j)}$ the associated measures; then these measures are mutually singular if $\sum |a_j - b_j|^2 = \infty$. In the proof one gets to the point where the above property is needed, with $x_n = a_n - b_n$. It seems the literature on this topic consists mostly of papers at a very advanced level, to which my restricted knowledge of measure theory, functional analysis and algebra does not give me access...
Good morning. I am trying to find the arc length of the cardioid $r=2\sin{\theta}-2$. After plugging in $r$ and $\frac{dr}{d\theta}$ into the arc length formula we get $$s=\int_\alpha^\beta\sqrt{(2\sin\theta-2)^2+(2\cos\theta)^2}d\theta=2\sqrt{2}\int_\alpha^\beta{\sqrt{1-\sin\theta}} d\theta$$ It appears from a graphing calculator that the bounds of integration cover the principal angles $0\le\theta<2\pi$. Using an identity $$\sqrt{1-\sin\theta}=\sqrt{\frac{1-\sin^2\theta}{1+\sin\theta}}=\frac{\cos\theta}{\sqrt{1+\sin\theta}}$$ This gives me the integral $$s=2\sqrt{2}\int_0^{2\pi}{\frac{\cos\theta}{\sqrt{1+\sin\theta}}} d\theta$$ So now to solve I make a $u$-substitution. Let $u=1+\sin\theta$. Then $du=\cos\theta d\theta$. But if $\theta=0$, $u=1$. And if $\theta=2\pi$, then also $u=1$. This creates a situation where the bounds of integration are the same, which makes a calculation of $0$ for the final answer. So where did I go wrong?
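For reference, a quick numerical check of the arc-length integral before the identity is applied can flag the problem: the integral of $2\sqrt{2}\sqrt{1-\sin\theta}$ over $[0,2\pi]$ is not 0, so the manipulation afterwards must have dropped a sign somewhere. This sketch assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

# Arc-length integrand of r = 2*sin(theta) - 2, before any identity is used
integrand = lambda t: 2 * np.sqrt(2) * np.sqrt(1 - np.sin(t))

# The integrand has a kink at theta = pi/2 (where 1 - sin(theta) = 0),
# so we pass that point to quad as a breakpoint.
length, err = quad(integrand, 0, 2 * np.pi, points=[np.pi / 2])
print(length)  # 16.0: a cardioid with a = 2 has total length 8a = 16
```

The nonzero value 16 against the computed 0 points at the identity step: $\sqrt{\cos^2\theta} = |\cos\theta|$, not $\cos\theta$, so the substitution must be split over intervals where $\cos\theta$ keeps one sign.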
It's a very simple problem. I put it into wolfram alpha and it gave me a hugely complicated answer. This makes me think that I am supposed to be doing it differently. I believe I am supposed to use the residue theorem for this answer but have no idea how. $$\int_{0}^{\infty}\frac{x^2}{x^4+16}dx = 2\pi i \sum_{n=0}^{\infty} Res\big(\frac{x^2}{x^4+16}\big)$$ I have no idea what Res() is or how to use this in any way. Any and all help is appreciated, thank you. Sorry, I typed the equation in wrong, I just corrected it from $\frac{x^4}{x^2+16}$ to $\frac{x^2}{x^4+16}$
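For what it's worth, here is a sketch of how the residue theorem applies, assuming the standard choice of a large semicircular contour closed in the upper half-plane; note the sum runs only over the poles enclosed by the contour, not over $n = 0$ to $\infty$.

```latex
% Poles of f(z) = z^2/(z^4+16) in the upper half-plane: the simple poles
% z_1 = 2e^{i\pi/4} and z_2 = 2e^{3i\pi/4}. For a simple pole of p/q,
% Res = p(z_k)/q'(z_k):
\[
\operatorname{Res}_{z=z_k} \frac{z^2}{z^4+16}
  = \frac{z_k^2}{4z_k^3} = \frac{1}{4z_k},
\qquad
\frac{1}{4z_1} + \frac{1}{4z_2}
  = \frac{e^{-i\pi/4} + e^{-3i\pi/4}}{8}
  = \frac{-i\sqrt{2}}{8}.
\]
% The residue theorem then gives the integral over the whole real line,
% and evenness of the integrand halves it:
\[
\int_{-\infty}^{\infty} \frac{x^2}{x^4+16}\,dx
  = 2\pi i \cdot \frac{-i\sqrt{2}}{8} = \frac{\pi\sqrt{2}}{4},
\qquad
\int_{0}^{\infty} \frac{x^2}{x^4+16}\,dx = \frac{\pi\sqrt{2}}{8}.
\]
```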
Algorithm 1) Step 1: obtain a sample $y$ from distribution $Y$ and a sample $u$ from $U(0,1)$. Step 2: check whether $u < f(y)/(M g(y))$. If true, accept $y$ as a sample from $f$; else, reject the value of $y$ and return to the sampling step. Algorithm 2) There are model parameters $\theta$ described by a prior $\pi(\theta)$, and a forwards-simulation model for the data $x$, defined by $\pi(x|\theta)$. It is clear that a simple algorithm for simulating from the desired posterior $\pi(\theta|x)$ can be obtained as follows. First simulate from the joint distribution $\pi(\theta,x)$ by simulating $\theta^\star\sim\pi(\theta)$ and then $x^\star\sim \pi(x|\theta^\star)$. This gives a sample $(\theta^\star,x^\star)$ from the joint distribution. A simple rejection algorithm which rejects the proposed pair unless $x^\star$ matches the true data $x$ clearly gives a sample from the required posterior distribution. https://darrenjw.wordpress.com/2013/03/31/introduction-to-approximate-bayesian-computation-abc/ 1. Sample $\theta^\star \sim \pi(\theta^\star)$. 2. Sample $x^\star\sim \pi(x^\star|\theta^\star)$. 3. If $x^\star=x$, keep $\theta^\star$ as a sample from $\pi(\theta|x)$; otherwise reject. 4. Return to step 1.
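Algorithm 1 can be sketched in a few lines of Python. The concrete choices here are assumptions for illustration: the target $f$ is the Beta(2,2) density $f(x) = 6x(1-x)$, the proposal $g$ is Uniform(0,1) (so $g(y) = 1$), and $M = 1.5$ bounds $f/g$ on $[0,1]$ since $f$ peaks at $6 \cdot \tfrac12 \cdot \tfrac12 = 1.5$.

```python
import random

random.seed(0)

def f(x):
    # target density: Beta(2,2)
    return 6 * x * (1 - x)

M = 1.5  # bound on f(y)/g(y) for the uniform proposal

def rejection_sample(n):
    samples = []
    while len(samples) < n:
        y = random.random()        # step 1: y ~ g, here Uniform(0,1)
        u = random.random()        # step 1: u ~ Uniform(0,1)
        if u < f(y) / (M * 1.0):   # step 2: accept iff u < f(y)/(M g(y)), g(y)=1
            samples.append(y)      # accepted y is a draw from f
    return samples

xs = rejection_sample(20000)
mean = sum(xs) / len(xs)
print(mean)  # Beta(2,2) has mean 1/2, so this should be close to 0.5
```

The acceptance rate is $1/M = 2/3$ here; a looser bound $M$ wastes proposals, which is the same pathology that makes the exact-match step of Algorithm 2 impractical for all but tiny discrete data.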
Hey guys, I want to build a strong and straight plan for my next years of studying so that, once finished, I am able to do something on my own and come up with crazy ideas and actually test them, build some awesome algorithms, all that cool stuff. But I'm kind of stumbling, so it would be nice if someone...

Please see this page and give me advice: https://physics.stackexchange.com/questions/499269/simultanious-eigenstate-of-hubbard-hamiltonian-and-spin-operator-in-two-site-mod Known fact 1: if two operators ##A## and ##B## commute, ##[A,B]=0##, they have simultaneous eigenstates. That means...

In this problem I am supposed to treat the shelf as a weak perturbation. However, it doesn't give us what the perturbation H' is. At the step, V(x) = Vo, but that is all that is given and isn't needed to determine H'. This isn't in a weak magnetic field, so wouldn't you use H' = qEx and then...

Using the fact that Pa ∝ |α|^2 and Pb ∝ |β|^2, we get Pa = k|α|^2 and Pb = k|β|^2. Since the probabilities of measuring the two states must add up to 1, we have Pa + Pb = 1, so k = 1/(|α|^2 + |β|^2). Substituting this into Pa and Pb, we get Pa = |α|^2/(|α|^2 + |β|^2) and Pb = |β|^2/(|α|^2 + |β|^2)...

Is there a relationship between the momentum operator matrix elements and the following: <φ|dH/dkx|ψ>, where kx is the Bloch wave number, such that if I have the latter calculated for the x direction as a matrix, I can get the momentum operator matrix elements from it?

Two questions, where the 1st is related to a previous discussion regarding these couplings: the selection rules for LS coupling are quite clear; they are based on calculating the compatible electric dipole matrix element. However, in the case of jj coupling we end up with different selection rules...

1) I know that the binding energy is the energy that holds a nucleus together (which corresponds to the mass defect, E = mc^2). But what does it mean when we are talking about the binding energy of an electron (e.g. binding energy = -Z^2 R/n^2)? Some websites say that "binding energy = - ionization...

Can anyone elaborate on Deutsch's attempt to solve the incoherence problem? He postulates a continuously infinite set of universes, together with a preferred measure on that set. And so when a measurement occurs, the proportion of universes in the original branch that end up on a given branch...

Is it possible that evolution happens in quantum jumps, as no intermediate lifeforms were ever found? Analogous to an electron jumping from a lower energy level to a higher energy level without intermediary states.

Hi there, a question from a biologist with a very poor background in physics, but willing to understand quantum physics. I think quantum entanglement shocks everyone, even if it has been proven right. I would love to know if there is any hypothesis or crazy theory out there to explain why or how...

Hello all, the second quantization of a general electromagnetic field assumes the energy density integration to be performed inside a box in 3D space. Someone mentioned to me recently that the physical significance of the actual volume used is that it should be chosen based on the detector used...

Let me present what I think is the understanding of a particular situation in quantum mechanics, and ask people to tell me whether I am right or wrong. To say that everything happens randomly in QM would be misleading at best; we get at least statistical predictions. But discussions such as the...

1. Homework Statement: How should I calculate the expectation value of the momentum of an electron in the ground state of the hydrogen atom? 2. Homework Equations 3. The Attempt at a Solution: I am trying to apply the p operator, i.e. ##-i\hbar d/dx##, to ##\psi## and integrate it from 0 to infinity...

In @A. Neumaier's excellent Physics FAQ, he notes under "Are electrons pointlike/structureless?" that "Physical, measurable particles are not points but have extension. By definition, an electron without extension would be described exactly by the 1-particle Dirac equation, which has a...

1. Homework Statement: Is the following matrix a state operator? If it is a state operator, is it a pure state? And if so, find the state vectors for the pure state. If you don't see the image, here is the matrix, which is 2x2, in MATLAB code: [9/25 12/25; 12/25 16/25] 2. Homework...

I'm having trouble trying to find the expansion coefficients of a superposition of a Gaussian wave packet. First I'm decomposing a Gaussian wave packet $$\psi(\textbf{r},0) = \frac{1}{(2\pi)^{3/4}\sigma^{3/2}}\text{exp}\left[ -\frac{(\textbf{r} - \textbf{r}_0)^2}{4\sigma^2} + i\textbf{k}_0...

...to give a number? https://ocw.mit.edu/courses/physics/8-04-quantum-physics-i-spring-2016/lecture-notes/MIT8_04S16_LecNotes5.pdf On page 6, it says, "Matrix mechanics was worked out in 1925 by Werner Heisenberg and clarified by Max Born and Pascual Jordan. Note that, if we were to write xˆ...

1. Homework Statement: Find the wave packet Ψ(x, t) if φ(k) = A for k0 − ∆k ≤ k ≤ k0 + ∆k and φ(k) = 0 for all other k. The system's dispersion relation is ω = vk, where v is a constant. What is the wave packet's width? 2. Homework Equations: I solved for Ψ(x, t): $$\Psi(x,t) =...

1. Homework Statement: Does the n = 2 state of a quantum harmonic oscillator violate the Heisenberg Uncertainty Principle? 2. Homework Equations: $$\sigma_x\sigma_p = \frac{\hbar}{2}$$ 3. The Attempt at a Solution: I worked out the solution for the second state of the harmonic oscillator...

Or does ontological probability exist? I was reading an article that came up in my google searches ( https://breakingthefreewillillusion.com/ontic-probability-doesnt-exist/ ); ignore the free-will philosophy stuff. But the author makes the claim that ontological probability simply does not...

I see this has already been discussed, but the old threads are closed. EPR before EPR: a 1930 Einstein-Bohr thought experiment revisited. "In this example, Einstein presents a paradox in QM suggesting that QM is inconsistent, while Bohr attempts to save consistency of QM by combining QM with the...
I know once a photon hits an electron it moves from the ground state to an excited state, then comes back down to the ground state by releasing that energy as a photon. But do the electron (and the atom) also increase in kinetic energy, or does the energy get released too fast before it has a chance to increase the electron's (and atom's) kinetic energy? Electrons, photons and nuclei are described by quantum mechanical equations. A photon can interact with a free electron, scattering elastically or inelastically; the latter is called Compton scattering, where part of the energy of the photon turns into kinetic energy of the electron. An electron bound to a nucleus forms an atom, and usually occupies the ground energy level. If a photon with the appropriate energy hits the atom, covering the spacing of the atom's energy levels, the system goes to a higher excited energy level: the electron is located at a higher energy level, and the momentum of the photon is transferred to the atom. In the semiclassical Bohr model one can say that the electron has higher energy, though the rigorous quantum mechanical solution is about probable values of the energy if measured. When the atom relaxes to the lower level by emitting a photon, momentum has to be conserved and the whole atom has to take part in the exercise. The incoming and outgoing photons will not have the same direction, since the decay is probabilistic. Consider an atom with two levels, initially at rest. The energy of the ground state is $E_{g}=0$ and the energy of the excited state is $E_{e}=\hbar \omega_{0}$. A photon of wavevector $\vec{k}$ and frequency $\omega$ is aimed at the atom. To see what happens, we need to write the equations for the conservation of energy and momentum. If we assume the atom absorbs the photon, then from the conservation of momentum $$\hbar\vec{k}=m\vec{v}$$ where $m$ is the mass of the atom and $\vec{v}$ is its velocity.
Moreover, from conservation of energy, $$\hbar\omega=\hbar\omega_{0}+\frac{1}{2}mv^2$$ Using the dispersion relation of the photon, $\omega=ck$, these two equations can be solved. But even without solving them you can see some interesting results. First, you clearly observe that the absorption of the photon causes the atom to gain velocity. This means that your statement is correct. More interestingly, because the atom gains velocity and velocity is related to energy, the photon must have a frequency larger than the frequency of the transition for absorption to happen. This is because some of the energy goes into the internal degrees of freedom of the atom (the excited state), and some goes into the external degrees of freedom (the velocity of the center of mass). Momentum is conserved, so the momentum of the atom must be different after the photon is absorbed. Since the momentum changes, the kinetic energy of the atom changes as well. Even after a photon is emitted and the electron returns to the ground state, the atom does not usually end up with the same kinetic energy it started with, since the photon is probably emitted in a different direction.
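As a sketch, eliminating $v$ between the two conservation laws above (with $\omega = ck$) makes the recoil shift explicit:

```latex
% From \hbar\vec{k} = m\vec{v} and \hbar\omega = \hbar\omega_0 + \tfrac12 mv^2:
\[
\hbar\omega = \hbar\omega_0 + \frac{(\hbar k)^2}{2m}
            = \hbar\omega_0 + \frac{\hbar^2\omega^2}{2mc^2}
\quad\Longrightarrow\quad
\omega \approx \omega_0\left(1 + \frac{\hbar\omega_0}{2mc^2}\right),
\]
% to first order in the small quantity \hbar\omega_0 / mc^2. The absorbed
% photon is blue-shifted relative to the bare transition by the recoil
% energy \hbar^2 k^2 / 2m, exactly the kinetic energy the atom gains.
```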
Munkres in his book states: Theorem 30.3. Suppose that $X$ has a countable basis; then every open covering of $X$ contains a countable subcollection covering $X$. $\textbf{Proof.}$ Let $\{B_n\}$ be a countable basis and $\mathcal{A}$ an open cover of $X$. For each positive integer $n$ for which it is possible, choose an element $A_n$ of $\mathcal{A}$ containing the basis element $B_n$. The collection $\mathcal{A'}$ of the sets $A_n$ is countable, since it is indexed by a subset $J$ of the positive integers. Furthermore, it covers $X$: given a point $x \in X$, we can choose an element $A$ of $\mathcal{A}$ containing $x$. Since $A$ is open, there is a basis element $B_n$ such that $x \in B_n \subset A$. Because $B_n$ lies in an element of $\mathcal{A}$, the index $n$ belongs to the set $J$, so $A_n$ is defined; since $A_n$ contains $B_n$, it contains $x$. Thus $\mathcal{A'}$ is a countable subcollection of $\mathcal{A}$ that covers $X$. My first doubt is where he states that $A_n$ is indexed by $J \subset \mathbb{Z}^+$: why is this true? My second doubt is about the construction of $\mathcal{A'}$: he states that $\mathcal{A'}$ is the collection of the sets $A_n$, but we could have $A, A^* \in \mathcal{A'}$ such that $B_n \subset A \ \cap \ A^*$. In this case, I think we need to keep only one of these sets in $\mathcal{A'}$ to ensure that $\mathcal{A'}$ is countable, but how exactly do I do this? Thanks in advance!
Suppose we have an extended real (countably) infinite sequence $(x_n)$, and consider all of its possible subsequences $(x_{n_k})$. We can then form the set $$A = \{a\in \overline{\mathbb{R}}:x_{n_k}\rightarrow a \text{ for some subsequence}\}.$$ Must this set necessarily be finite? Otherwise we have countably many numbers that the original sequence approaches arbitrarily closely, countably many times, in its tail. Is there some kind of argument that this would require an uncountable sequence? My thought is that we could take the supposed countable set and choose some $\epsilon>0$ so that no two limits live in the same $\epsilon$-ball. Far enough down each subsequence, no element in the tail of one subsequence can also be in the tail of another. From here, is the argument like Cantor's diagonalization? Limits and their subsequences: $a_1$: $x_{n_1}, x_{n_2}, x_{n_3}, \dots$; $a_2$: $x_{m_1}, x_{m_2}, x_{m_3}, \dots$; $a_3$: $x_{o_1}, x_{o_2}, x_{o_3}, \dots$ We would need uncountably many limits for these subsequences to be unique, and this contradicts the original sequence being countable.
Theoretical efforts have been made to study the nontrivial properties of complex networks, such as clustering, scale-free degree distributions, community structures, etc. Reciprocity is another quantity, used specifically to characterize directed networks: link reciprocity measures the tendency of vertex pairs to form mutual connections between each other. [1] Motivation: In real network problems, people are interested in determining the [1] How is it defined? Traditional definition: a traditional way to define the reciprocity $r$ is as the ratio of the number of links pointing in both directions, $L^{\leftrightarrow}$, to the total number of links $L$ [6]: $$r = \frac{L^{\leftrightarrow}}{L}$$ With this definition, $r = 1$ for a purely bidirectional network and $r = 0$ for a purely unidirectional one; real networks have intermediate values between 0 and 1. However, this definition of reciprocity has some defects. It cannot tell the relative difference in reciprocity compared with a purely random network with the same numbers of vertices and edges: the useful information from reciprocity is not the value itself, but whether mutual links occur more or less often than expected by chance. Besides, in networks containing self-loops (links starting and ending at the same vertex), the self-loops should be excluded when calculating $L$. Garlaschelli and Loffredo's definition: in order to overcome the defects of the above definition, Garlaschelli and Loffredo defined reciprocity as the correlation coefficient between the entries of the adjacency matrix of a directed graph ($a_{ij} = 1$ if there is a link from $i$ to $j$, and $a_{ij} = 0$ if not): $$\rho \equiv \frac{\sum_{i \neq j} (a_{ij} - \bar{a})(a_{ji} - \bar{a})}{\sum_{i \neq j} (a_{ij} - \bar{a})^2},$$ where the average value is $$\bar{a} \equiv \frac{\sum_{i \neq j} a_{ij}}{N(N-1)} = \frac{L}{N(N-1)}.$$
$\bar{a}$ measures the ratio of observed to possible directed links (the link density); self-loops are now excluded from $L$ because $i \neq j$. The definition can be written in the following simple form: $$\rho = \frac{r - \bar{a}}{1 - \bar{a}}$$ The new definition of reciprocity gives an absolute quantity which directly allows one to distinguish between reciprocal ($\rho > 0$) and antireciprocal ($\rho < 0$) networks, with mutual links occurring more and less often than at random, respectively. If all links occur in reciprocal pairs, $\rho = 1$; if $r = 0$, then $\rho = \rho_{min}$, where $$\rho_{min} \equiv \frac{-\bar{a}}{1 - \bar{a}}$$ This is another advantage of using $\rho$: it incorporates the idea that complete antireciprocity is more statistically significant in networks with larger density, while it has to be regarded as a less pronounced effect in sparser networks. References: 1. D. Garlaschelli and M. I. Loffredo, Patterns of Link Reciprocity in Directed Networks, Phys. Rev. Lett. 93, 268701 (2004). 2. M. E. J. Newman, S. Forrest, and J. Balthrop, Phys. Rev. E 66, 035101(R) (2002). 3. R. Albert, H. Jeong, and A.-L. Barabási, Nature (London) 401, 130 (1999). 4. D. Garlaschelli and M. I. Loffredo, Phys. Rev. Lett. 93, 188701 (2004). 5. V. Zlatić, M. Božičević, H. Štefančić, and M. Domazet, Phys. Rev. E 74, 016115 (2006). 6. M. E. J. Newman, S. Forrest, and J. Balthrop, Phys. Rev. E 66, 035101(R) (2002).
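Both definitions above can be computed directly from an adjacency matrix. Here is a short sketch; the 3-node example graph is made up for illustration.

```python
import numpy as np

def reciprocity(A):
    """Return (r, rho) for a directed adjacency matrix A, self-loops excluded."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    np.fill_diagonal(A, 0)           # exclude self-loops, as the text requires
    L = A.sum()                      # total number of directed links
    L_mutual = (A * A.T).sum()       # links belonging to a reciprocal pair
    r = L_mutual / L                 # traditional definition r = L<->/L
    a_bar = L / (N * (N - 1))        # link density a-bar
    rho = (r - a_bar) / (1 - a_bar)  # Garlaschelli-Loffredo definition
    return r, rho

# 3-node example: 1 <-> 2 is mutual, 2 -> 3 is one-way
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 0, 0]]
r, rho = reciprocity(A)
print(r, rho)  # r = 2/3, rho = 1/3: mutual links occur more often than chance
```

With $L = 3$ and $N = 3$ the density is $\bar{a} = 1/2$, so $\rho = (2/3 - 1/2)/(1 - 1/2) = 1/3 > 0$, i.e. a reciprocal network under the second definition.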
In a five-team tournament, each team plays one game with every other team. Each team has a $50\%$ chance of winning any game it plays. (There are no ties.) Let $\dfrac{m}{n}$ be the probability that the tournament will produce neither an undefeated team nor a winless team, where $m$ and $n$ are relatively prime integers. Find $m+n$. The probability that one team wins all its games is $5\cdot (\frac{1}{2})^4=\frac{5}{16}$. Similarly, the probability that one team loses all its games is $\frac{5}{16}$. I did this much, but what should I do after that to reach the final answer? I'm confused. Can someone explain?
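The missing step is inclusion-exclusion: the events "some team is undefeated" and "some team is winless" overlap, so their probabilities cannot simply be added. A brute-force enumeration of all $2^{10}$ outcomes (an illustration, not the intended pen-and-paper route) confirms the inclusion-exclusion answer:

```python
from fractions import Fraction
from itertools import combinations, product

# 5 teams, each pair plays once: 10 games, 2^10 equally likely outcomes.
teams = range(5)
games = list(combinations(teams, 2))

good = 0
for outcome in product([0, 1], repeat=len(games)):
    wins = [0] * 5
    for (a, b), bit in zip(games, outcome):
        wins[a if bit else b] += 1   # bit=1: a beats b; bit=0: b beats a
    # "neither undefeated nor winless": no team with 4 wins or 0 wins
    if 4 not in wins and 0 not in wins:
        good += 1

p = Fraction(good, 2 ** len(games))
print(p, p.numerator + p.denominator)  # 17/32 and m + n = 49
```

By hand: $P(\text{undefeated or winless}) = \frac{5}{16} + \frac{5}{16} - 5 \cdot 4 \cdot (\frac12)^7 = \frac{15}{32}$ (the cross term fixes 7 games, since the undefeated-vs-winless game is shared), giving $1 - \frac{15}{32} = \frac{17}{32}$.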
Okay, now we've got all the machinery set up to study co-design diagrams with feedback. Today let's consider a very simple one. I'll start without feedback. I seem to like examples from business and economics for these purposes: this describes someone who buys bread and then sells it, perhaps at a higher price. This is described by the composite of two feasibility relations: $$ \mathrm{Purchase} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ and $$ \mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ where \(\mathbb{N}\) is the set of natural numbers given its usual ordering \(\le\). Be careful about which way these feasibility relations go: \( \mathrm{Purchase}(j,k) = \texttt{true}\) if you can purchase \(j\) loaves of bread for \(k\) dollars. \( \mathrm{Sell}(i,j) = \texttt{true} \) if you can make \(i\) dollars selling \(j\) loaves of bread. The variable at right is the 'resource', while the variable at left describes what you can obtain using this resource. For example, in purchasing bread, \( \mathrm{Purchase}(j,k) = \texttt{true}\) if starting with \(k\) dollars as your 'resource' you can buy \(j\) loaves of bread. This is an arbitrary convention, but it's the one in the book! When we compose these we get a feasibility relation $$ \mathrm{Purchase} \mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} $$ (and again, there's an annoying arbitrary choice of convention in the order here). I haven't said what the feasibility relations \( \mathrm{Purchase}\) and \( \mathrm{Sell}\) actually are: they could be all sorts of things. But let's pick something specific, so you can do some computations with them. Let's keep it very simple: say you can buy a loaf of bread for \(\$2\) and sell it for \(\$3\). Puzzle 218. Write down a formula for the feasibility relation \(\mathrm{Purchase}.\) Puzzle 219.
Write down a formula for the feasibility relation \(\mathrm{Sell}.\) Puzzle 220. Compute the composite feasibility relation \( \mathrm{Purchase} \mathrm{Sell}\). (Hint: we discussed composing feasibility relations in Lecture 58.) That was just a warmup. Now let's introduce feedback! Now you can reinvest some of the money you make to buy more loaves of bread! That creates a 'feedback loop'. Obviously this changes things dramatically: now you can start with a little money and keep making more. But how does the mathematics work now? First, you'll notice this feedback loop has a cap at left and a cup at right. I defined these last time. But this feedback loop also involves two feasibility relations called \(\hat{\textstyle{\sum}}\) and \(\check{\textstyle{\sum}}\). We use the one at left, $$ \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} ,$$ to say that the money we reinvest (which loops back), plus the money we take as profit (which comes out of the diagram at left), equals the money we make by selling bread. We use the one at right, $$ \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} ,$$ to say that the money we have reinvested (which has looped around), plus the new money we put in (which comes into the diagram at right), equals the money we use to purchase bread. These two feasibility relations are both built from the monotone function $$ \textstyle{\sum} \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N} $$ defined in the obvious way: $$ \textstyle{\sum}(m,n) = m + n .$$ Remember, we saw in Lecture 65 that any monotone function \(F \colon \mathcal{X} \to \mathcal{Y} \) gives two feasibility relations, its 'companion' \(\hat{F} \colon \mathcal{X} \nrightarrow \mathcal{Y}\) and its 'conjoint' \(\check{F} \colon \mathcal{Y} \nrightarrow \mathcal{X}\). Puzzle 221.
Give a formula for the feasibility relation \( \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} \). In other words, say when \(\hat{\textstyle{\sum}}(a,b,c) = \texttt{true}\). Puzzle 222. Give a formula for the feasibility relation \( \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} \). And now finally for the big puzzle that all the others were leading up to: Puzzle 223. Give a formula for the feasibility relation described by this co-design diagram: You can guess the answer, and then you can work it out systematically by composing and tensoring the feasibility relations defined by the boxes, the cap and the cup! This is a good way to make sure you understand everything I've been talking about lately.
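For Puzzles 218–220, a small Python sketch can check a guessed formula for the composite. This assumes the $\$2$/$\$3$ prices from the lecture and that the composite is feasible when some intermediate number of loaves $j$ makes both halves feasible:

```python
# Sketch of Puzzles 218-220, assuming bread costs $2 to buy and sells
# for $3, and that the composite feasibility relation is
# (Purchase then Sell)(i, k) = exists j. Sell(i, j) and Purchase(j, k).

def purchase(j, k):
    """True if j loaves can be purchased with k dollars."""
    return 2 * j <= k

def sell(i, j):
    """True if i dollars can be made by selling j loaves."""
    return i <= 3 * j

def purchase_sell(i, k, max_j=1000):
    """Composite: some number of loaves j makes the whole plan feasible."""
    return any(sell(i, j) and purchase(j, k) for j in range(max_j + 1))

# With k dollars you can buy floor(k/2) loaves and sell them for
# 3 * floor(k/2) dollars, so the composite should match i <= 3 * (k // 2).
for k in range(10):
    for i in range(20):
        assert purchase_sell(i, k) == (i <= 3 * (k // 2))
print("composite matches i <= 3*(k//2)")
```

The candidate closed form `i <= 3*(k//2)` is my guess for Puzzle 220, verified here only on a finite range.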
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these: Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\). So, for now our resources will form a "preorder", as defined in Lecture 3. Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying: reflexivity: \(x \le x\) for all \(x \in X\). transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\). All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\). What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it. It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\). Definition. A monoid is a set \(X\) equipped with an operation \(\otimes : X \times X \to X\) and an element \(I \in X\) such that these laws hold: the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\) the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\). You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders: Definition.
A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying: $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast! You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders: The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder. Same for the set \(\mathbb{Q}\) of rational numbers. Same for the set \(\mathbb{Z}\) of integers. Same for the set \(\mathbb{N}\) of natural numbers. Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it. But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way? Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64. 
Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets? Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
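For Puzzle 60, a finite spot-check of all three groups of axioms is easy to script. This is only evidence on a small range, not a proof (the real argument is that multiplication of naturals is associative, unital, and monotone in each argument):

```python
# Finite spot-check of the monoidal-preorder axioms for (N, <=, *, 1)
# over a small range of natural numbers.
from itertools import product

N = range(0, 8)

# monoid laws
for x, y, z in product(N, repeat=3):
    assert (x * y) * z == x * (y * z)          # associativity
for x in N:
    assert 1 * x == x == x * 1                 # left and right unit laws

# compatibility of <= with the monoid operation
for x, xp, y, yp in product(N, repeat=4):
    if x <= xp and y <= yp:
        assert x * y <= xp * yp
print("(N, <=, *, 1) passes on this range")
```

For Puzzle 61, the same compatibility check fails over the reals: \(-1 \le 0\) and \(-1 \le 0\), yet \((-1)\cdot(-1) = 1 \not\le 0 = 0 \cdot 0\).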
Consider the Lagrangian: The vertex Feynman rule I obtained from this Lagrangian is $$i\lambda \left[(p\cdot k)g^{\mu \nu} - p^{\mu}k^{\nu} \right]$$ where $p^{\mu}$ and $k^{\nu}$ are the photon momenta. When I consider the 1-loop diagrams of the 4-photon interaction with incoming momenta $p_1^{\mu}, p_2^{\nu}$ and outgoing momenta $p_3^{\rho}, p_4^{\sigma}$, I obtain the corresponding Feynman amplitude: $$ \begin{alignat}{4} & \Big(\frac{s}{2} & \Big)^2 (g^{\mu \nu}g^{\rho \sigma}) & \int \frac{\mathrm{d}^4k}{\left(2\pi\right)^4}\frac{1}{k^2\left(k+p_s\right)^2} \\ +~ & \Big(\frac{t}{2} & \Big)^2 (g^{\mu \rho}g^{\nu \sigma}) & \int \frac{\mathrm{d}^4k}{\left(2\pi\right)^4}\frac{1}{k^2\left(k+p_t\right)^2} \\ +~ & \Big(\frac{u}{2} & \Big)^2 (g^{\mu \sigma}g^{\rho \nu}) & \int \frac{\mathrm{d}^4k}{\left(2\pi\right)^4}\frac{1}{k^2\left(k+p_u\right)^2} \end{alignat} $$ where $p_s = p_1 + p_2$, $p_t = p_1 - p_3$, and $p_u = p_1 - p_4$. My problem is that I cannot verify whether my answer satisfies the Ward identity, which states that the amplitude should vanish when I replace one of the polarization vectors with its momentum. Is my amplitude wrong, is my Feynman rule wrong, or am I right and just need to work harder to check?
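One thing worth checking numerically before attacking the loop: the index placement in the vertex itself. Assuming the coupling is of the $\phi F_{\mu\nu}F^{\mu\nu}$ type (the Lagrangian was lost from the post), the transverse combination is $(p\cdot k)g^{\mu\nu} - k^{\mu}p^{\nu}$, with each photon's momentum contracted against the *other* photon's index; the variant $p^{\mu}k^{\nu}$ quoted above is not transverse. A quick check with random off-shell momenta:

```python
# Numeric transversality check of a two-photon vertex, assuming a
# phi F_{mu nu} F^{mu nu} type coupling.  Claim tested: the combination
# (p.k) g^{mu nu} - k^mu p^nu vanishes when contracted with p_mu,
# while the index placement p^mu k^nu does not (off shell).
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0, -1.0])      # mostly-minus metric

def dot(a, b):
    return a @ g @ b

p = rng.normal(size=4)                    # generic off-shell momenta
k = rng.normal(size=4)

V_good = dot(p, k) * g - np.outer(k, p)   # V^{mu nu} = (p.k) g - k^mu p^nu
V_post = dot(p, k) * g - np.outer(p, k)   # the rule quoted in the question

# contract the mu index with p_mu (lower the index with the metric first)
ward_good = (g @ p) @ V_good
ward_post = (g @ p) @ V_post

print(np.abs(ward_good).max())    # numerically zero: transverse
print(np.abs(ward_post).max())    # order one: not transverse off shell
```

This only tests the single vertex, not the full box amplitude, but if the vertex itself fails transversality no rearrangement of the three channels will rescue the Ward identity.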
I've been told that for any gauge group of the SM, the running of the corresponding coupling constant, $g$, is given by: $$ \frac{dg}{d\ln Q} = \frac{b\,g^3}{16\pi^2} $$ where $$ b = -\frac{11}{3}C_2(A) + \sum\Bigg[\frac{2}{3}T(R_f) + \frac{1}{3}T(R_s) \Bigg] $$ and $$ C_2(A) = \begin{cases} N,\ {\rm for\ }SU(N)\\ 0,\ {\rm for\ }U(1) \end{cases}, \qquad T(R_f) = \begin{cases}\frac{1}{2},\ {\rm Weyl\ spinors\ in\ fundamental\ repr.\ of\ }SU(N)\\ N,\ {\rm Weyl\ spinors\ in\ adjoint\ repr.\ of\ }SU(N)\\ Y_f^2,\ {\rm for\ }U(1) \end{cases} $$ $Y_f$ is the hypercharge of the corresponding field. $T(R_s)$ takes the same values as $T(R_f)$ for each complex scalar in the corresponding representation. I'm trying to obtain the correct $b$ terms for each group, obtaining: $$ b(SU(3)) = -(11/3)·3 + 6·(2/3)·(1/2) + (2/3)·(1/2)·3 + (2/3)·(1/2)·3 = -7 $$ On the RHS, reading from left to right, we find the term corresponding to $C_2(A)$, followed by the term for QCD quark triplets, SU(2) lepton doublets, SU(2) right parts. This result is the same as Cheng and Li's, so maybe it's correct. I'm not sure because I have used $T(R_f) = 1/2$ for right-handed fields. For SU(2): $$ b(SU(2)) = -(11/3)·2 + 3·(2/3)·(1/2) + 3·(2/3)·(1/2) + 9·(2/3)·(1/2) + 3·6·(2/3)·(1/2) = 11/3 $$ Again, from left to right: the $C_2(A)$ part, 3 lepton doublets, 3 right-handed leptons, 3 quark doublets with 3 colours each, and 6 flavours with 3 colours each for right-handed quarks. For U(1): $$ b(U(1)) = (2/3)·\{3[3(2·(1/6)^2 + (2/3)^2 + (1/3)^2)] + 2(1/2)^2 + 1\} + 2(1/2)^2(1/3) = 41/6 $$ Here, $\{···\}$ counts the 3 families with 3 colour copies for left and right quarks and the 3 families for left leptons and right charged leptons. The last contribution is the Higgs, which counts as 2 complex scalars. Checking with Cheng and Li, section 14.3, I know the 2nd and 3rd values are incorrect. Actually, for the 2nd one we should have a result less than zero, and for the 3rd, $b(U(1)) = 4$. What am I doing wrong?
Find this formula here: https://en.wikipedia.org/wiki/Beta_function_(physics)#SU(N)_Non-Abelian_gauge_theory
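The key point for $SU(2)$ is that right-handed fields are $SU(2)$ singlets and contribute $T(R)=0$; only the doublets (and the Higgs scalar doublet) count. A sketch of the bookkeeping under the stated convention, with exact fractions:

```python
# One-loop SM beta coefficients from the formula above, assuming the
# convention b = -(11/3) C2(A) + (2/3) sum T(R_f) + (1/3) sum T(R_s),
# summing over Weyl fermions and complex scalars.
from fractions import Fraction as F

ng = 3            # generations
T_fund = F(1, 2)  # T(R) for the fundamental of SU(N)

# SU(3): 2 Weyl triplets per quark flavour, 6 flavours; no coloured scalars
b3 = -F(11, 3) * 3 + F(2, 3) * T_fund * (2 * 6)

# SU(2): per generation 1 lepton doublet + 3 coloured quark doublets;
# right-handed singlets contribute nothing; 1 Higgs doublet
b2 = -F(11, 3) * 2 + F(2, 3) * T_fund * (ng * (1 + 3)) + F(1, 3) * T_fund

# U(1): sum of Y^2 over Weyl fermions (Q = T3 + Y convention); per
# generation: Q_L (6 states, Y=1/6), u_R (3, Y=2/3), d_R (3, Y=-1/3),
# L_L (2, Y=-1/2), e_R (1, Y=-1); Higgs: 2 states with Y=1/2
yf2 = ng * (6 * F(1, 6)**2 + 3 * F(2, 3)**2 + 3 * F(1, 3)**2
            + 2 * F(1, 2)**2 + 1)
ys2 = 2 * F(1, 2)**2
b1 = F(2, 3) * yf2 + F(1, 3) * ys2

print(b3, b2, b1)   # -7 -19/6 41/6
```

This reproduces $b(SU(2)) = -19/6 < 0$, as expected. The $41/6$ for $U(1)$ is correct in this normalization; in the GUT normalization $g_1^2 = (5/3)g'^2$ it becomes $\frac{3}{5}\cdot\frac{41}{6} = \frac{41}{10} \approx 4.1$, which may be the source of the quoted value of $4$.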
I have two questions related to having fixed effects in the DD model. I have a treatment that occurs at different times (e.g., 2001, 2005, etc.). I want to fit a DD model, so I standardize the treatment years to year "0" as the treatment time. To control for treatment-year heterogeneity, I included the true year fixed effects. $y_{it} = \beta_0 + \beta_1 \text{Treat} + \beta_2 \text{After} + \beta_3 (\text{Treat $\cdot$ After}) + \eta (\text{Year Fixed Effects})+ \gamma C_{it} + \epsilon_{it}$ Question 1: Is there anything wrong with this model? Question 2: Is there an issue with including time-constant fixed effects in this DD model? For example, what if I include i-level fixed effects ($\alpha_i$) and/or group indicators at the i level (e.g., male/female or race)? I realize that DD cancels out time-constant i-level fixed effects, but what if I include them here again?
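A simulation can make the mechanics of the model concrete. One subtlety worth noting: if "After" is zero for every control unit, then After and Treat·After are perfectly collinear, so the sketch below (all names illustrative, not from the question) assigns controls a placebo adoption year. It assumes statsmodels' formula API:

```python
# Minimal simulation sketch of a DiD with calendar-year fixed effects,
# assuming statsmodels' formula interface.  Control units get a placebo
# adoption year so that After varies for them too (otherwise After and
# Treat x After are collinear).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for unit in range(200):
    treat = int(unit < 100)
    adopt = rng.choice([2001, 2005])           # staggered timing
    for year in range(1998, 2009):
        after = int(year >= adopt)
        y = (0.5 * treat                       # level difference
             + 0.1 * (year - 1998)             # common trend (year FE job)
             + 2.0 * treat * after             # true DiD effect
             + rng.normal())
        rows.append(dict(y=y, treat=treat, after=after, year=year))
df = pd.DataFrame(rows)

# Treat, After, their interaction, plus calendar-year fixed effects
m = smf.ols("y ~ treat * after + C(year)", data=df).fit()
print(round(m.params["treat:after"], 2))       # close to the true 2.0
```

On Question 2, the sketch also illustrates the textbook point: adding a time-constant group indicator alongside Treat changes nothing about $\beta_3$, while unit dummies $\alpha_i$ absorb Treat itself (it becomes collinear with them) but leave the interaction estimable.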
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far I recall, being a long term powder gamer myself, powder game does not really have a diffusion like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and subjected to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that effects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule. 
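The two-state rule sketched at the end of that chat (A expires into B after 10 steps; B revives into A next to an A) is easy to prototype on a grid. A purely illustrative toy, not OE-Cake or Powder Game:

```python
# Tiny grid implementation of the rule from the chat: A-particles turn
# into B after 10 steps; B-particles turn back into A when adjacent
# (4-neighbourhood) to an A.
import numpy as np

N, STEPS = 32, 50
A, B = 0, 1
state = np.full((N, N), B)
age = np.zeros((N, N), dtype=int)
state[N // 2, N // 2] = A                  # seed one A in a sea of B

for _ in range(STEPS):
    is_a = state == A
    # does any of the 4 neighbours contain an A?
    near_a = np.zeros_like(is_a)
    near_a[1:, :] |= is_a[:-1, :]
    near_a[:-1, :] |= is_a[1:, :]
    near_a[:, 1:] |= is_a[:, :-1]
    near_a[:, :-1] |= is_a[:, 1:]

    age[is_a] += 1
    expired = is_a & (age >= 10)           # A -> B after 10 steps
    revived = (state == B) & near_a        # B -> A next to an A
    state[expired] = B
    age[expired] = 0
    state[revived] = A

print("A particles after", STEPS, "steps:", int((state == A).sum()))
```

Whether this particular rule produces soliton-like travelling structures I can't say without running it at scale, but it is the right shape of model for the excitable-medium patterns the chat is gesturing at.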
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a... Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-) What is the maximum distance for eavesdropping pure sound waves?And what kind of device i need to use for eavesdropping?Actually a microphone with a parabolic reflector or laser reflected listening devices available on the market but is there any other devices on the planet which should allow ... and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles The documentary then showed one of the bird's eye view of the farmlands (which pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct column and rows of tiled farmlands to the left and top of the main grid. 
They are the index arrays and they notate the range of indices of the tensor array. In some tiles, there's a swirl of dirt mound; they represent components with nonzero curl, and in others grass grew. Two blue steel bars were visible laying across the grid, holding up a triangular pool of water. Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it. At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends. Reality check: I have been to London, but not Belgium. Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of different order. Presumably one can formulate it (using an example of a 4th-order tensor) as follows: $$A^{\alpha}{}_{\beta}{}_{\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ are taken from a subset of the range the $\alpha,\beta$ indices run over.
For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$. However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers. @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or he took a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see that the meaning of "campaign" here is well-defined enough. And yes, it is a bit of a source of fear for me that maybe my behavior could also be construed as "campaigning for my own caging".
S → aS | aSbS | (empty), where the alphabet is {a,b}; in other words, the set of strings where any prefix has at least as many 'a's as 'b's. A grammar that can do this unambiguously is: $S \to aS \mid A S \mid \epsilon$ $A \to a AAb \mid \epsilon$ Every b is associated with an a in front of it, and anything between these is also associated in the same way, so there is always balance.
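One way to gain confidence that the original grammar really generates exactly the prefix-balanced strings is exhaustive cross-checking on short strings. A memoized recognizer for S → aS | aSbS | ε, compared against the prefix condition:

```python
# Cross-check sketch: enumerate all strings over {a, b} up to length 7
# and compare "every prefix has at least as many a's as b's" against
# membership in the language of S -> aS | aSbS | (empty).
from functools import lru_cache
from itertools import product

def prefix_ok(w):
    bal = 0
    for c in w:
        bal += 1 if c == "a" else -1
        if bal < 0:
            return False
    return True

@lru_cache(maxsize=None)
def derives_S(w):
    if w == "":
        return True                       # S -> empty
    if w[0] != "a":
        return False                      # every production starts with a
    if derives_S(w[1:]):                  # S -> aS
        return True
    for i in range(1, len(w)):            # S -> aSbS, split at the b
        if w[i] == "b" and derives_S(w[1:i]) and derives_S(w[i + 1:]):
            return True
    return False

for n in range(8):
    for w in map("".join, product("ab", repeat=n)):
        assert derives_S(w) == prefix_ok(w)
print("grammar matches the prefix condition up to length 7")
```

The same harness can be pointed at the unambiguous grammar in the answer, or extended to count parse trees per string to exhibit the ambiguity of the original grammar directly (e.g. counting the distinct splits instead of short-circuiting on the first).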
Let $A$ be a complex Banach algebra, and suppose that the spectrum of some $x \in A$ is not connected. Show that $A$ contains a non-trivial idempotent $z$, for which the following also holds: $A = A_0 \oplus A_1$, where $$A_0 = \{x : zx = 0\}, \quad A_1 = \{x: zx = x\}.$$ I have been trying to define a function with value $1$ on one part of the spectrum and value $0$ on the other part, and using the functional calculus $$\tilde{f}(x) = \frac{1}{2\pi i} \int_\Gamma f(\lambda)(\lambda e - x)^{-1} d\lambda.$$ I know that the holomorphic functional calculus is a homomorphism, so if $\Phi_x$ is defined as $\Phi_x(f) = \tilde{f}(x)$ then $(\tilde{f}(x))^2 = \Phi_x(f)\cdot\Phi_x(f) = \Phi_x(f \cdot f)$. But we only know that $f$ has the property $f\cdot f = f$ on $\sigma(x)$, whereas we integrate over a contour $\Gamma$ around $\sigma(x)$, which can also pass through points that are not in $\sigma(x)$, right? I think I'm missing something. Also, for the second part I'm clueless. Similar question as here, but I didn't really understand the answer to it: Let $A$ be a Banach algebra. Suppose that the spectrum of $x\in A$ is not connected. Prove that $A$ contains a nontrivial idempotent $z$.
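The worry about the contour can be resolved by noting that $f$ should be chosen holomorphic on an open *neighbourhood* of the spectrum, not just on $\sigma(x)$ itself. A sketch of the idempotency step, assuming the disconnected spectrum splits into two disjoint compact pieces:

```latex
% sigma(x) = K_0 \sqcup K_1 with K_0, K_1 compact, disjoint, nonempty.
% Choose disjoint open sets U_0 \supseteq K_0 and U_1 \supseteq K_1, and set
f = \begin{cases} 0 & \text{on } U_0 \\ 1 & \text{on } U_1 \end{cases}
% Then f is locally constant, hence holomorphic, on U = U_0 \cup U_1, and
f \cdot f = f \quad \text{on all of } U, \text{ not merely on } \sigma(x).
% Taking the contour \Gamma \subset U \setminus \sigma(x) winding once
% around sigma(x), the homomorphism property of \Phi_x gives
z = \Phi_x(f), \qquad z^2 = \Phi_x(f \cdot f) = \Phi_x(f) = z .
```

So the identity $f\cdot f = f$ holds on the whole domain of integration, and $z$ is idempotent; it is non-trivial because $f$ is neither identically $0$ nor identically $1$ on $\sigma(x)$ (compare with $\Phi_x(1) = e$ and $\Phi_x(0) = 0$).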
Why in the following proof is $$\sum_nA_n(X_n,X_m)=A_m(X_m,X_m)?$$ The author says it's because of orthogonality, but orthogonality means $(f,g)=\int_a^b fg\,dx=0$. So how does orthogonality help to prove it? Could someone explain this please? Thanks. To elaborate on other answers/comments, observe that $$ \sum_{n}A_{n}(X_{n}, X_{m}) = A_{1}(X_{1}, X_{m}) + A_{2}(X_{2}, X_{m}) + \ldots + A_{m}(X_{m}, X_{m}) + \ldots $$ and all of the terms where the index of $A$ is not $m$ are zero. So, $$ \sum_{n}A_{n}(X_{n}, X_{m}) = 0 + 0 + \ldots + 0 + A_{m}(X_{m}, X_{m}) + 0 +\ldots = A_{m}(X_{m}, X_{m}). $$ Because $ (X_n,X_m)=0$ if $n\ne m$. That's the orthogonality.
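A numeric illustration may help: with a concrete orthogonal family such as $X_n(x) = \sin(n\pi x)$ on $[0,1]$ (my choice for illustration; the argument works for any orthogonal family), every cross term in the sum vanishes and only the $n=m$ term survives:

```python
# Numeric illustration of the collapsing sum, using the orthogonal
# family X_n(x) = sin(n pi x) on [0, 1] and a midpoint-rule
# approximation of the inner product (f, g) = integral of f g over [0, 1].
import numpy as np

N = 20000
x = (np.arange(N) + 0.5) / N            # midpoint grid on [0, 1]

def inner(f, g):
    return np.mean(f * g)               # approximates the integral

X = [np.sin(n * np.pi * x) for n in range(1, 6)]
A = [2.0, -1.0, 0.5, 3.0, -2.0]         # arbitrary coefficients
m = 2                                   # single out X_3

lhs = sum(A[n] * inner(X[n], X[m]) for n in range(5))
rhs = A[m] * inner(X[m], X[m])
print(round(lhs, 3), round(rhs, 3))     # 0.25 0.25 -- only n = m survives
```

The full sum and the single $n=m$ term agree (up to discretization error), exactly as the orthogonality argument predicts.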
Abbreviation: CPoMon A commutative partially ordered monoid is a partially ordered monoid $\mathbf{A}=\langle A,\cdot,1,\le\rangle$ such that $\cdot$ is commutative: $xy=yx$. Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be commutative partially ordered monoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$, $h(1)=1$, and $x\le y\Longrightarrow h(x)\le h(y)$. [[Abelian partially ordered groups]] expansion [[Partially ordered monoids]] supervariety [[Commutative monoids]] subreduct
Answer: Let AB be the height of the water level and CD be the height of the hill. Then, in \[\Delta\,ABC\]: \[\tan 30{}^\circ =\frac{10}{y} \implies y=10\sqrt{3} \quad \text{(i)}\] In \[\Delta\,ADE\]: \[\tan 60{}^\circ =\frac{x}{y} \implies y=\frac{x}{\sqrt{3}} \quad \text{(ii)}\] From (i) and (ii), we get \[\frac{x}{\sqrt{3}}=10\sqrt{3} \implies x=10\times 3=30\text{ m}\] \[\therefore\] The distance of the hill from the ship is \[10\sqrt{3}\text{ m}\] and the height of the hill is \[30+10=40\text{ m}\].
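The same two right-triangle relations can be checked numerically, assuming the 10 m water-level height and the 30°/60° angles from the problem:

```python
# The two right-triangle relations solved numerically.
import math

h_lower = 10.0                              # height AB above water (m)
y = h_lower / math.tan(math.radians(30))    # from tan 30 = 10 / y
x = y * math.tan(math.radians(60))          # from tan 60 = x / y

print(round(y, 2))          # 17.32  (= 10*sqrt(3), distance to the hill)
print(round(x + h_lower))   # 40     (total height of the hill)
```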
I was reading a book with this question in it: Q. Find a quadratic polynomial, the sum of whose zeroes is 7 and their product is 12. Hence find the zeroes of the polynomial. Sol. Let the polynomial be $ax^2+bx+c$ and suppose its zeroes are $\alpha$ and $\beta$. Therefore, sum of zeroes $=\alpha+\beta=\frac{-b}{a}=7$ and product of zeroes $=\alpha\beta=\frac{c}{a}=12$. Take $a=LCM(12,1)=12$. Therefore $b=-7a=-7\times12=-84$ and $c=12a=12\times12=144$. So the polynomial is $12x^2-84x+144$ $\cdots$ Why have we taken $a=LCM(12,1)$? If we had taken $a=1$ then we could have got the answer without any kind of calculation. Is there any real reason for taking $a=LCM(12,1)$?
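Indeed, any nonzero $a$ gives a valid answer, since scaling a polynomial does not move its zeroes. A quick check that the book's choice and $a=1$ agree:

```python
# Check that a = 1 works just as well as a = LCM(12, 1) = 12:
# both x^2 - 7x + 12 and 12x^2 - 84x + 144 have zeroes 3 and 4.
import math

def roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return sorted(((-b - d) / (2 * a), (-b + d) / (2 * a)))

assert roots(1, -7, 12) == [3.0, 4.0]
assert roots(12, -84, 144) == [3.0, 4.0]
print("same zeroes; sum 7, product 12")
```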
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
I was wondering how it is possible to see from the $SU(3)$ gauge theory alone that gluons carry two color charges: $g\overline{b}$ etc. Some background: The W-bosons (pre-symmetry breaking) form an $SU(2)$ triplet and carry the corresponding weak isospin $1, 0, -1$. After SSB/Higgs the charged $W^\pm$ bosons can be identified with complex linear combinations of the $W^{1,2}$ bosons, and therefore the corresponding term in the Lagrangian is $U(1)$ invariant, i.e. the $W^\pm$ carry electric charge, too. For a local $SU(3)$ gauge theory, 8 gauge fields (the gluon fields) are needed, exactly as in the $SU(2)$ case: one for each generator $\lambda_a$. Consequently one introduces "matrix gauge fields" $$ A^\mu = A^\mu_a \lambda_a$$ which can be seen as elements of the corresponding Lie algebra, because the $\lambda_a$ form a basis and the expression above can be seen as an expansion of $A^\mu$ in terms of this basis. The transformation behaviour is the same for all $SU(N)$ theories: $$ A^\mu \rightarrow U A^\mu U^\dagger + \frac{i}{g} (\partial_\mu U) U^{-1} $$ As usual the fermions transform according to the fundamental representation, i.e. for $SU(3)$ they are arranged in triplets. Each row represents a different color as explained in the answer here (What IS Color Charge?, which quotes Griffiths). Therefore a red fermion, for example, is $$ c_{red} = \left(\begin{array}{c} f \\0\\0\end{array}\right) $$ where $f$ is the usual Dirac spinor. An anti-red fermion would be $$ c_{\bar{red}} = \left(\begin{array}{c} \bar f \\0\\0\end{array}\right) $$ The red fermion transforms according to the fundamental rep $F$, the anti-red fermion according to the conjugate rep $F^\star$. This is a difference to $SU(2)$, because $SU(2)$ has only real representations and therefore the normal and conjugate reps are equivalent (why is it enough that they are equivalent? The conjugate rep for $SU(2)$ is different but considered equivalent because $r = U \bar r U^{-1}$ for some unitary matrix $U$.
Any thoughts on this would be great, too), i.e. there is no anti-isospin. I guess this is the reason the $W$ bosons do not carry anti-charge: simply because there is no anti-charge for $SU(2)$. Now at which point can we see that the gluons carry both color charge and anti-color charge? Is it because the matrix gluon fields defined above are part of the Lie algebra and therefore transform according to the adjoint rep of the group, $A \rightarrow g A g^\dagger$, which could be seen as transforming according to the rep and the anti-rep at the same time (or could be seen as a completely nonsensical idea of mine ;) )? Why does the gluon octet not get charges assigned like the $SU(2)$ triplet, which would mean the gluons carry different values of one strong charge? (Analogous to $1, 0, -1$ for the weak isospin of the $W$ triplet.) Any thoughts or ideas would be awesome!
For a vector $\vec{v} = (x, y)$, define $|v| = \sqrt{x^2 + y^2}$. Allen had a bit too much to drink at the bar, which is at the origin. There are $n$ vectors $\vec{v_1}, \vec{v_2}, \cdots, \vec{v_n}$. Allen will make $n$ moves. As Allen's sense of direction is impaired, during the $i$-th move he will either move in the direction $\vec{v_i}$ or $-\vec{v_i}$. In other words, if his position is currently $p = (x, y)$, he will either move to $p + \vec{v_i}$ or $p - \vec{v_i}$. Allen doesn't want to wander too far from home (which happens to also be the bar). You need to help him figure out a sequence of moves (a sequence of signs for the vectors) such that his final position $p$ satisfies $|p| \le 1.5 \cdot 10^6$ so that he can stay safe. The first line contains a single integer $n$ ($1 \le n \le 10^5$), the number of moves. Each of the following lines contains two space-separated integers $x_i$ and $y_i$, meaning that $\vec{v_i} = (x_i, y_i)$. We have $|v_i| \le 10^6$ for all $i$. Output a single line containing $n$ integers $c_1, c_2, \cdots, c_n$, each of which is either $1$ or $-1$. Your solution is correct if the value of $p = \sum_{i = 1}^n c_i \vec{v_i}$ satisfies $|p| \le 1.5 \cdot 10^6$. It can be shown that a solution always exists under the given constraints.

Example 1: input "3 / 999999 0 / 0 999999 / 999999 0", output "1 1 -1". Example 2: input "1 / -824590 246031", output "1". Example 3: input "8 / -67761 603277 / 640586 -396671 / 46147 -122580 / 569609 -2112 / 400 914208 / 131792 309779 / -850150 -486293 / 5272 721899", output "1 1 1 1 1 1 1 -1".
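One well-known approach to problems of this shape (not part of the statement, just a sketch) is randomized greedy: shuffle the vectors, and for each one pick the sign that keeps the running position closer to the origin, retrying with a fresh shuffle if the final position exceeds the bound. With $|v_i| \le 10^6$ this succeeds with high probability on each attempt.

```python
import random

def choose_signs(vectors, limit=1.5e6, max_tries=100):
    """Pick signs c_i in {1, -1} so that |sum c_i v_i| <= limit."""
    n = len(vectors)
    order = list(range(n))
    signs = [1] * n
    for _ in range(max_tries):
        random.shuffle(order)
        x = y = 0.0
        for i in order:
            vx, vy = vectors[i]
            # pick the sign that keeps the running position closer to the origin
            if (x + vx) ** 2 + (y + vy) ** 2 <= (x - vx) ** 2 + (y - vy) ** 2:
                signs[i] = 1
                x, y = x + vx, y + vy
            else:
                signs[i] = -1
                x, y = x - vx, y - vy
        if x * x + y * y <= limit * limit:
            return signs
    return signs  # fallback; extremely unlikely to be reached

print(choose_signs([(999999, 0), (0, 999999), (999999, 0)]))
```

For the first sample the final position ends up at distance $999999 \le 1.5 \cdot 10^6$ regardless of the shuffle order.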
Vector Algebra class 12 Formulas: A quantity which has both magnitude and direction is said to be a vector quantity. It is denoted by an arrow whose direction indicates the vector's direction and whose length represents its magnitude. Its symbol is \(\overrightarrow{A}\) and its magnitude is written as |A|. Position Vector and Magnitude: We write the position vector of any point P(x, y, z) as \(\overrightarrow{OP}\) (= \(\overrightarrow{r}\)) = x\(\hat{i}\) + y\(\hat{j}\) + z\(\hat{k}\). Its magnitude is \(\sqrt{x^{2} + y^{2} + z^{2}}\). Direction Ratios: The scalar components of a vector are its direction ratios; they represent its projections along the respective axes. Relation between Direction Ratios and Direction Cosines: The magnitude (r), direction ratios (a, b, c) and direction cosines (l, m, n) of any vector are related as: l = \(\frac{a}{r}\), m = \(\frac{b}{r}\), n = \(\frac{c}{r}\) Triangle Law: The vector sum of the three sides of a triangle taken in order is \(\overrightarrow{0}\). Dividing a line segment in a given ratio: Internally: The position vector of a point R dividing the line segment joining the points P and Q (with position vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) respectively) in the ratio m : n is: \(\frac{n\overrightarrow{a} + m\overrightarrow{b}}{m + n}\)
Externally: Similarly, the position vector of R dividing the segment externally is \(\frac{n\overrightarrow{a} - m\overrightarrow{b}}{m - n}\) Scalar product: For two vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) with angle θ between them, the scalar product is: \(\overrightarrow{a}.\overrightarrow{b}\) = |\(\overrightarrow{a}\)||\(\overrightarrow{b}\)|cosθ Equivalently, cosθ = \(\frac{\overrightarrow{a}.\overrightarrow{b}}{|\overrightarrow{a}||\overrightarrow{b}|}\) Vector product: The cross product of two vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) with angle θ between them is: \(\overrightarrow{a}\) x \(\overrightarrow{b}\) = |\(\overrightarrow{a}\)||\(\overrightarrow{b}\)| sinθ \(\hat{n}\) Here, \(\hat{n}\) is a unit vector perpendicular to the plane of \(\overrightarrow{a}\) and \(\overrightarrow{b}\). Properties of vector algebra: If we write the vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\) in component form as \(\overrightarrow{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k}\), \(\overrightarrow{b} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}\), and let λ be any scalar, then: \(\overrightarrow{a} + \overrightarrow{b} = (a_1 + b_1)\hat{i} + (a_2 + b_2)\hat{j} + (a_3 + b_3)\hat{k}\) \(\lambda\overrightarrow{a} = (\lambda a_1)\hat{i} + (\lambda a_2)\hat{j} + (\lambda a_3)\hat{k}\) \(\overrightarrow{a} . \overrightarrow{b} = a_1 b_1 + a_2 b_2 + a_3 b_3\) \(\vec{a} \times \vec{b} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ a_{1} & a_{2} & a_{3} \\ b_{1} & b_{2} & b_{3} \end{vmatrix}\) These formulas will help you with problem-solving assessment in vector algebra class 12.
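The component formulas above are easy to sanity-check numerically; a small Python sketch (the two vectors are arbitrary examples, not from the text), verifying the Lagrange identity \(|\vec a \times \vec b|^2 + (\vec a \cdot \vec b)^2 = |\vec a|^2 |\vec b|^2\), which follows from the sinθ and cosθ formulas:

```python
import numpy as np

# example component vectors a = a1 i + a2 j + a3 k, b = b1 i + b2 j + b3 k
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

dot = a @ b             # a1 b1 + a2 b2 + a3 b3
cross = np.cross(a, b)  # determinant with rows (i j k), (a1 a2 a3), (b1 b2 b3)

# |a x b|^2 + (a.b)^2 = |a|^2 |b|^2, since sin^2 + cos^2 = 1
lhs = cross @ cross + dot**2
rhs = (a @ a) * (b @ b)
print(dot, cross, lhs == rhs)
```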
Suppose we have a tower of field extensions $F \subseteq E \subseteq K \subseteq \overline{F}$. Is it true in general that $|G(K/F)| = |G(K/E)| \cdot |G(E/F)|$? I was able to verify some specific examples, like $\mathbb{Q}(\sqrt[3]{2}, \omega)$, the splitting field of $x^3-2$, and another extension, but how could I show that this holds in general for all such towers of extensions?
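One route (assuming $K/F$ is finite Galois and $E/F$ is normal, so that all three groups have the expected orders) combines the tower law for degrees with the fundamental theorem of Galois theory; a sketch:

```latex
% Tower law: if \{u_i\} is an E-basis of K and \{v_j\} an F-basis of E,
% then \{u_i v_j\} is an F-basis of K, hence
[K:F] = [K:E]\,[E:F].
% For K/F Galois, |G(K/F)| = [K:F] and |G(K/E)| = [K:E];
% if E/F is also Galois, restriction of automorphisms gives the exact sequence
1 \longrightarrow G(K/E) \longrightarrow G(K/F)
  \xrightarrow{\;\sigma \mapsto \sigma|_E\;} G(E/F) \longrightarrow 1,
% so the orders multiply: |G(K/F)| = |G(K/E)| \cdot |G(E/F)|.
```

Note that without normality of $E/F$ the order identity can fail: for $F=\mathbb{Q}$, $E=\mathbb{Q}(\sqrt[3]{2})$, $K=\mathbb{Q}(\sqrt[3]{2},\omega)$ one has $|G(E/F)|=1$ while $|G(K/F)|=6$ and $|G(K/E)|=2$.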
In some book about continuum mechanics I read that from the principle of virtual work follows the balance of rotational momentum when $\delta \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{r}, \; \boldsymbol{\delta \varphi} = \boldsymbol{\mathsf{const}}$ ($\boldsymbol{r}$ is the location vector, $\delta \boldsymbol{r}$ is its variation; $\boldsymbol{\delta \varphi}$ is not a variation, it is just denoted like one, presumably because it is small enough to make $\delta \boldsymbol{r}$ infinitesimal). Then it is written there, without any explanation, that $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. I know that $\boldsymbol{E}$ is the bivalent “metric unit identity” tensor (the one which is neutral to the dot product operation), and that $\boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{E}$. And that $\boldsymbol{a} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{a} \:\: \forall\boldsymbol{a}$, with no minus here. To get a minus, transposing is needed: $\left( \boldsymbol{E} \times \boldsymbol{\delta \varphi} \right)^{\mathsf{T}} \! = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. Thus I can’t see why $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$ has a minus sign. For constant $\boldsymbol{\delta \varphi}$, $\boldsymbol{\nabla} \boldsymbol{\delta \varphi} = {^2\boldsymbol{0}}$ (the bivalent zero tensor). Isn’t it true that $\boldsymbol{\nabla} \! \left( \boldsymbol{\delta \varphi} \times \boldsymbol{r} \right) = \boldsymbol{\delta \varphi} \times \boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{\delta \varphi}$? Searching for how to get the gradient of the cross product of two vectors gives the gradient of the dot product, the divergence ($\boldsymbol{\nabla} \cdot$) of the cross product, and many other relations. But no gradient of the cross product $\boldsymbol{\nabla} \!
\left( \boldsymbol{a} \times \boldsymbol{b} \right) = \ldots$ Is it impossible, or is it unknown how to find it? At least for the case when the first vector is constant. Update: by “gradient” I mean the tensor product with “nabla” $\boldsymbol{\nabla}$: $\operatorname{^{+1}grad} \boldsymbol{A} \equiv \boldsymbol{\nabla} \! \boldsymbol{A}$, where $\boldsymbol{A}$ may be a tensor of any valence (and I don’t use “$\otimes$” or any other symbol for the tensor product). Nabla (the differential Hamilton operator) is $\boldsymbol{\nabla} \equiv (\sum_i)\, \boldsymbol{r}^i \partial_i$, $\:(\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = \boldsymbol{E} \,\Leftrightarrow\, \boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^{i}_{j}$ (Kronecker’s delta), $\,\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (basis vectors), $\,\partial_i \equiv \frac{\partial}{\partial q^i}$, $\:\boldsymbol{r}(q^i)$ is the location vector, and $q^i$ $(i = 1, 2, 3)$ are coordinates.
Let us have five points at $x = -2,-1,0,1,2$ with ordinates equal to $y_i$. I want to derive the formula for $a_0$ such that the parabola $y = a_0 + a_1x + a_2x^2$ fits the points best in the ordinary least squares sense: $$(y_{-2}-a_0 +2a_1-4a_2)^2 + (y_{-1} - a_0 + a_1 -a_2)^2 + (y_{0} -a_0)^2 + (y_{1} -a_0 - a_1 - a_2)^2 + (y_{2} - a_0 - 2a_1 - 4a_2)^2 \to \min \ \text{w.r.t. } a_0, a_1 \text{ and } a_2.$$ I know the answer $a_0 = \frac{1}{35} [ -3 y_{-2} + 12 y_{-1} + 17 y_0 + 12 y_1 -3 y_2]$. It was given in a lecture and the proof was left as an exercise. The background of the question is time series smoothing by a weighted moving average: we substitute every $y$ with $a_0$ to get a smoother time series. However, I don't see how to derive it other than by taking three derivatives, getting 3 linear equations and solving them; it looks like there is a simpler proof for $a_0$ in particular that I am missing. If we denote every summand being squared as $S_i$, then we get a system of linear equations: $$ \begin{cases} S_{-2} + S_{-1} + S_0 + S_1 + S_2 = 0 \\ 2S_{-2}+S_{-1} - S_1 - 2S_2 = 0 \\ 4S_{-2}+S_{-1}+S_{1}+4S_2 = 0 \end{cases}$$ But is there a faster proof? The original slide is given below for reference
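The stated weights can at least be verified directly from the normal equations; a short numpy sketch (the design matrix is the usual Vandermonde-style matrix for a quadratic at these five abscissae):

```python
import numpy as np

# design matrix for a0 + a1 x + a2 x^2 at x = -2, -1, 0, 1, 2
x = np.array([-2, -1, 0, 1, 2])
X = np.column_stack([np.ones(5), x, x**2])

# least-squares solution a = (X^T X)^{-1} X^T y; the first row of
# (X^T X)^{-1} X^T gives the weights multiplying y to produce a0
W = np.linalg.inv(X.T @ X) @ X.T
weights_a0 = W[0]

print(weights_a0 * 35)  # -> [-3. 12. 17. 12. -3.]
```

Because $\sum x_i = \sum x_i^3 = 0$ here, $a_1$ decouples and $a_0$ involves only $\sum y_i$ and $\sum x_i^2 y_i$, which is the "simpler proof" the symmetry suggests.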
My question is in the title: Do black holes have a moment of inertia? I would say that it is: $$I ~\propto~ M R_S^2,$$ where $R_S$ is the Schwarzschild radius, but I cannot find anything in the literature. The angular velocity of a Kerr black hole with mass $M$ and angular momentum $J$ is $$ \Omega = \frac{J/M}{2M^2 + 2M \sqrt{M^2 - J^2/M^2}} $$ The moment of inertia of an object can be thought of as a map from the object's angular velocity to its angular momentum. However, here we see that the relationship between these two quantities is non-linear. If we want to think of moment of inertia in the usual sense, we should linearise the above equation. When we do so, we find the relationship $$ J = 4 M^3 \Omega \qquad (\mathrm{to\ first\ order})$$ And so the moment of inertia is $$ I = 4 M^3 $$ In other words, the expression you guessed is correct, and the constant of proportionality is unity. Note that since the Schwarzschild radius of a black hole is merely twice its mass, and since the only two parameters that describe the black hole are its mass and angular momentum, any linear relationship between the angular velocity and angular momentum of our black hole must be of the form $J = k\, M R_S^2\, \Omega$ on dimensional grounds. Note that $G = c = 1$ throughout. EDIT. As pointed out in the comments, it's not obvious how one should define the angular velocity of a black hole. At the risk of being overly technical, we can do this as follows. First consider the Killing vector field $\xi = \partial_t + \Omega \partial_\phi$ (using Boyer-Lindquist coordinates), where $\Omega$ is defined to be as above.
The orbits, or integral curves, of this vector field are the lines $\phi = \Omega t + \mathrm{const.}$, which correspond to rotation at angular velocity $\Omega$ with respect to a stationary observer at infinity. One can show that this vector field is tangent to the event horizon, and its orbits lying on the event horizon are geodesics. These geodesics hence rotate at angular velocity $\Omega$ (with respect to an observer at infinity), and hence it is natural to interpret the quantity $\Omega$ as the angular velocity of the black hole. Whether it is possible to make a more definite statement than this I do not know. Moments of inertia are defined about a given axis of rotation. Moment of inertia is the name given to rotational inertia, the rotational analog of mass for linear motion. It appears in the relationships for the dynamics of rotational motion. The moment of inertia must be specified with respect to a chosen axis of rotation There exist rotating black holes. A rotating black hole is a black hole that possesses angular momentum. In particular, it rotates about one of its axes of symmetry.
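The linearisation step in the answer can be checked symbolically; a minimal sketch in geometric units ($G = c = 1$), using sympy:

```python
import sympy as sp

M, J = sp.symbols('M J', positive=True)

# Kerr angular velocity as given in the answer (geometric units)
Omega = (J / M) / (2 * M**2 + 2 * M * sp.sqrt(M**2 - J**2 / M**2))

# expand to first order in J: Omega ~ J / (4 M^3), i.e. J ~ 4 M^3 Omega
linear = sp.series(Omega, J, 0, 2).removeO()
print(sp.simplify(linear - J / (4 * M**3)))  # -> 0
```

At $J = 0$ the square root reduces to $M$, the denominator to $4M^3$ after the $1/M$ factor, and the moment of inertia $I = 4M^3 = M R_S^2$ follows.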
Logistic regression can be described as a linear combination $$ \eta = \beta_0 + \beta_1 X_1 + ... + \beta_k X_k $$ that is passed through the link function $g$: $$ g(E(Y)) = \eta $$ where the link function is a logit function $$ E(Y|X,\beta) = p = \text{logit}^{-1}( \eta ) $$ where $Y$ takes only values in $\{0,1\}$ and the inverse logit function transforms the linear combination $\eta$ to this range. This is where classical logistic regression ends. However, if you recall that $E(Y) = P(Y = 1)$ for variables that take only values in $\{0,1\}$, then $E(Y | X,\beta)$ can be considered as $P(Y = 1 | X,\beta)$. In this case, the logit function output can be thought of as the conditional probability of "success", i.e. $P(Y=1|X,\beta)$. The Bernoulli distribution describes the probability of observing a binary outcome with some parameter $p$, so we can describe $Y$ as $$ y_i \sim \text{Bernoulli}(p) $$ So with logistic regression we look for some parameters $\beta$ that together with the independent variables $X$ form a linear combination $\eta$. In classical regression $E(Y|X,\beta) = \eta$ (we assume the link function to be the identity function); however, to model $Y$ that takes values in $\{0,1\}$ we need to transform $\eta$ so that it fits in the $[0,1]$ range. Now, to estimate logistic regression in a Bayesian way you pick some priors for the $\beta_i$ parameters as with linear regression (see Kruschke et al., 2012), then use the logit function to transform the linear combination $\eta$, and use its output as the $p$ parameter of the Bernoulli distribution that describes your $Y$ variable. So, yes, you actually use the equation and the logit link function the same way as in the frequentist case, and the rest (e.g. choosing priors) works like estimating linear regression the Bayesian way. The simple approach for choosing priors is to choose Normal distributions (but you can also use other distributions, e.g.
$t$- or Laplace distributions for a more robust model) for the $\beta_i$'s, with parameters $\mu_i$ and $\sigma_i^2$ that are preset or taken from hierarchical priors. Now, having the model definition, you can use software such as JAGS to perform Markov Chain Monte Carlo simulation for you to estimate the model. Below I post JAGS code for a simple logistic model (check here for more examples).

model {
  # setting up priors
  a ~ dnorm(0, .0001)
  b ~ dnorm(0, .0001)
  for (i in 1:N) {
    # passing the linear combination through the logit function
    logit(p[i]) <- a + b * x[i]
    # likelihood function
    y[i] ~ dbern(p[i])
  }
}

As you can see, the code directly translates to the model definition. What the software does is draw some values from the Normal priors for a and b, then use those values to estimate p, and finally use the likelihood function to assess how likely your data is given those parameters (this is where you use Bayes theorem; see here for a more detailed description). The basic logistic regression model can be extended to model the dependency between the predictors using a hierarchical model (including hyperpriors). In this case you can draw the $\beta_i$'s from a Multivariate Normal distribution that enables us to include information about the covariance $\boldsymbol{\Sigma}$ between the independent variables $$ \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \sim \mathrm{MVN} \left(\begin{bmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_k \end{bmatrix},\begin{bmatrix} \sigma^2_0 & \sigma_{0,1} & \ldots & \sigma_{0,k} \\ \sigma_{1,0} & \sigma^2_1 & \ldots &\sigma_{1,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k,0} & \sigma_{k,1} & \ldots & \sigma^2_k\end{bmatrix}\right)$$ ...but this is going into details, so let's stop right here. The "Bayesian" part in here is choosing priors, using Bayes theorem and defining the model in probabilistic terms. See here for a definition of "Bayesian model" and here for some general intuition on the Bayesian approach.
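For readers without JAGS, the same model can be estimated with a hand-rolled random-walk Metropolis sampler; this is only an illustrative sketch (the data are synthetic, generated from assumed true values $a=-1$, $b=2$, and the priors match dnorm(0, .0001), i.e. Normal with variance $10^4$):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data from a known logistic model (a = -1, b = 2)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 2 * x))))

def log_posterior(a, b):
    eta = a + b * x
    # Bernoulli log-likelihood with logit link
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    # weak Normal(0, 100^2) priors, matching dnorm(0, .0001)
    logprior = -(a**2 + b**2) / (2 * 100.0**2)
    return loglik + logprior

# random-walk Metropolis
samples = []
a, b = 0.0, 0.0
lp = log_posterior(a, b)
for _ in range(20000):
    a_new, b_new = a + 0.2 * rng.normal(), b + 0.2 * rng.normal()
    lp_new = log_posterior(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:
        a, b, lp = a_new, b_new, lp_new
    samples.append((a, b))

post = np.array(samples[5000:])  # drop burn-in
print(post.mean(axis=0))         # posterior means, close to (-1, 2) here
```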
What you can also notice is that defining models is pretty straightforward and flexible with this approach. Kruschke, J. K., Aguinis, H., & Joo, H. (2012). The time has come: Bayesian methods for data analysis in the organizational sciences. Organizational Research Methods, 15(4), 722-752. Gelman, A., Jakulin, A., Pittau, G.M., and Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360–1383.
Let $R$ be a commutative ring with only one prime ideal. I want to show that every element of $R$ is either a unit or nilpotent, or equivalently, that the nilradical is the unique maximal/prime ideal. Is there a way to prove this without using the fact that the nilradical is in fact EQUAL to the intersection of all prime ideals, instead of just contained in that intersection? I ask because all the solutions I've seen posted to this Dummit and Foote problem used that fact, which had not been proved yet in the book. I would try this approach. It uses, however, localizations, which is (if I remember correctly) the only tool needed for proving that the nilradical is the intersection of all prime ideals. Let $a \in R$ be some non-invertible element. Thus, it is contained in the only maximal (= the only prime) ideal of $R$. Consider the localization $S^{-1}R$ of $R$, where $S=\{a^k \; | \; k \in \mathbb{N}\}$. Now the "correspondence theorem for localizations" says that prime ideals of $S^{-1}R$ are in one-to-one correspondence with prime ideals of $R$ not intersecting $S$. But since $a$ is a member of the only prime ideal of $R$, it follows that $S^{-1}R$ must be the zero ring (it is a unital ring with no prime ideals, hence no maximal ideals). Thus, we have $(1/1)=(0/1)$ in $S^{-1}R$, which by definition means that $a^n=a^n(1-0)=0$ in $R$ for some $n \in \mathbb{N}$. Another approach (a more elementary one): I have a feeling this is just a rephrasing of the proof above; however, it may be more transparent. Let $a$ be a non-invertible element which is not nilpotent. Then $a$ is contained in some maximal ideal $M$, which is prime. Consider the family of ideals $$\mathcal{M}=\{I \;|\; a^k \notin I\; \forall k\}$$ ordered by inclusion. Since $a$ is not nilpotent, $0 \in \mathcal{M}$, hence the collection is nonempty. It is clearly closed under taking unions of chains of ideals. Hence, by Zorn's lemma, it contains some maximal element $P$. The claim is that $P$ is a prime ideal.
Consider two elements $x,y \in R \setminus P$. Then we have $xR+P, yR+P \supsetneq P$. Since $P$ was maximal in $\mathcal{M}$, it follows that $$a^m=xr+p, a^n=ys+q$$ for some $n,m \in \mathbb{N},\;\; r,s \in R, \;\; p,q \in P$. Then $$a^{n+m}=xyrs+xrq+ysp+pq,$$ where $xrq+ysp+pq \in P$. It follows that $xyrs \notin P$, since $a^{n+m}\notin P$. Thus, $xy \notin P$. We have proved that $P$ is a prime ideal not containing $a$. In particular, $M$ and $P$ are two distinct prime ideals in $R$. Thus, assuming $R$ has only one prime ideal, all non-invertible elements must be nilpotent.
Identity of Group is Unique. Theorem: Let $(G, \circ)$ be a group with identity $e$. Then $e$ is unique. Proof 1: The result follows by applying the result Identity of Monoid is Unique. $\blacksquare$ Proof 2: Suppose $f$ is also an identity. Then $e = e \circ f$ (as $f$ is an identity) $= f$ (as $e$ is an identity). So $e = f$ and there is only one identity after all. $\blacksquare$ Proof 3: For all $a, b \in G$ there exists a unique $x \in G$ such that $a x = b$, and a unique $y \in G$ such that $y a = b$. Setting $b = a$, this becomes: there exists a unique $x \in G$ such that $a x = a$, and a unique $y \in G$ such that $y a = a$. These $x$ and $y$ are both $e$, by definition of identity element. $\blacksquare$
Abbreviation: MultSlat. A multiplicative semilattice is a structure $\mathbf{A}=\langle A,\vee,\cdot\rangle$ of type $\langle 2,2\rangle$ such that: $\langle A,\vee\rangle$ is a semilattice, and $\cdot$ distributes over $\vee$: $x(y\vee z)=xy\vee xz$, $(x\vee y)z=xz\vee yz$. Let $\mathbf{A}$ and $\mathbf{B}$ be multiplicative semilattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$. Number of members: $f(1)=1$. [[Lattice-ordered semigroups]] [[Semilattices]] reduced type
In the linear sigma model, the Lagrangian is given by $$ \mathcal{L} = \frac{1}{2}\sum_{i=1}^{N} \left(\partial_\mu\phi^i\right)\left(\partial^\mu\phi^i\right) +\frac{1}{2}\mu^2\sum_{i=1}^{N}\left(\phi^i\right)^2-\frac{\lambda}{4}\left(\sum_{i=1}^{N}\left(\phi^i\right)^2\right)^2 \tag{11.65} $$ (for example see Peskin & Schroeder page 349). When perturbatively computing the effective action for this Lagrangian the derivative $ \frac{\delta^2\mathcal{L}}{\delta\phi^k(x)\delta\phi^l(x)} $ needs to be computed (for instance, Eq. (11.67) in P&S): $$ \frac{\delta^2\mathcal{L}}{\delta\phi^k(x)\delta\phi^l(x)} ~=~ -\partial^2\delta^{kl} +\mu^2\delta^{kl}-\lambda\left[\phi^i\phi^i\delta^{kl}+2\phi^k\phi^l\right].\tag{11.67}$$ My question is, how is one supposed to handle the derivative term? This seems to be completely implicit in the presentation of P&S, but from what I could gather it should go like so: 1) Because we are computing the effective action, $\mathcal{L}$ is actually under an integral and we can replace $\left(\partial_\mu\phi^i\right)\left(\partial^\mu\phi^i\right)$ with $-\left(\partial^\mu\partial_\mu\phi^i\right)\phi^i=-\left(\partial^2\phi^i\right)\phi^i$ using integration by parts (Stokes' theorem). 2) Then when performing the first derivative I get $\frac{\delta}{\delta\phi^l}\left[-\left(\partial^2\phi^i\right)\phi^i\right]=-\partial^2\phi^l$. 3) It is the second derivative I get stuck at, for as far as I can see, $\frac{\delta}{\delta\phi^k}\left[-\partial^2\phi^l\right]=0$, for there is only dependence on the 2nd derivative of $\phi^l$ and not $\phi^l$ itself. If, as is usual in field theory, the field and its derivatives are treated as independent dynamical variables, then the second derivative should also be an independent dynamical variable. How is it explained then, that the result of this computation should be $\frac{\delta}{\delta\phi^k}\left[-\partial^2\phi^l\right]=-\delta^{kl}\partial^2$?
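If I may sketch the standard resolution (this is my reading, not P&S verbatim): under functional differentiation the field and its derivatives are not independent; the derivative operator passes through the variation via $\delta\phi^i(x)/\delta\phi^k(y)=\delta^{ik}\,\delta^{(4)}(x-y)$:

```latex
\frac{\delta}{\delta\phi^k(y)}\left[-\partial_x^2\,\phi^l(x)\right]
  = -\partial_x^2\,\frac{\delta\phi^l(x)}{\delta\phi^k(y)}
  = -\,\delta^{kl}\,\partial_x^2\,\delta^{(4)}(x-y).
```

P&S suppress the delta function and read the result as the operator kernel $-\delta^{kl}\partial^2$, which is the first term of Eq. (11.67). The independence of field and derivative is a feature of the Euler-Lagrange formalism (partial derivatives of $\mathcal{L}$), not of functional derivatives of the action.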
Article: On Sobolev Regularity of Mass Transport and Transportation Inequalities. We study Sobolev a priori estimates for the optimal transportation $T = \nabla \Phi$ between probability measures $\mu=e^{-V} \, dx$ and $\nu=e^{-W} \, dx$ on $\mathbb{R}^d$. Assuming uniform convexity of the potential $W$ we show that $\int \| D^2 \Phi\|^2_{HS} \, d\mu$, where $\|\cdot\|_{HS}$ is the Hilbert-Schmidt norm, is controlled by the Fisher information of $\mu$. In addition, we prove a similar estimate for the $L^p(\mu)$-norms of $\|D^2 \Phi\|$ and obtain some $L^p$-generalizations of the well-known Caffarelli contraction theorem. We establish a connection of our results with the Talagrand transportation inequality. We also prove a corresponding dimension-free version for the relative Fisher information with respect to a Gaussian measure. This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems” which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study. Let γ be a Gaussian measure on a locally convex space X and H be the corresponding Cameron-Martin space. It has been recently shown by L. Ambrosio and A. Figalli that the linear first-order transportational PDE on X admits a weak solution under broad assumptions.
Applying transportation of measures via triangular maps we prove a similar result for a large class of non-Gaussian probability measures ν on $\mathbb{R}^{\infty}$, under the main assumption of integrability of logarithmic derivatives of ν. We also show uniqueness of the solution for a wide class of measures. This class includes uniformly log-concave Gibbs measures and certain product measures. A sharp Poincaré-type inequality is derived for the restriction of the Gaussian measure on the boundary of a convex set. In particular, it implies a Gaussian mean-curvature inequality and a Gaussian iso-second-variation inequality. The new inequality is nothing but an infinitesimal equivalent form of Ehrhard’s inequality for the Gaussian measure. While Ehrhard’s inequality does not extend to general CD(1, ∞) measures, we formulate a sufficient condition for the validity of Ehrhard-type inequalities for general measures on $\mathbb{R}^n$ via a certain property of an associated Neumann-to-Dirichlet operator. A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for the manufacturing industry located in another region with another node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is followed by the set rule of control. For such a model, one must determine possible modes of cargo transportation and describe their properties.
This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a “correct” extension of solutions of the system of differential equations to a class of quasi-solutions having the distinctive feature of gaps at a countable number of points. It was possible, numerically using the fourth-order Runge–Kutta method, to build these quasi-solutions and determine their rate of growth. Let us note that on the technical side the main complexity consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, the sizes of the gaps (jumps) of solutions on a number of parameters of the model characterizing the rule of control, the technologies for transportation of cargo and the intensity of supply of cargo at a node station. Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k-rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers.
As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugation action of G on itself. Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny $\widetilde{G} \to G$ is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$, where T is a maximal torus of G and W the Weyl group.
Bulletin of the American Physical Society 2017 Fall Meeting of the APS Division of Nuclear Physics Volume 62, Number 11 Wednesday–Saturday, October 25–28, 2017; Pittsburgh, Pennsylvania Session KG: Mini-Symposium on Fundamental Symmetries II Chair: Nadia Fomin, University of Tennessee Room: Marquis A Friday, October 27, 2017 2:00PM - 2:12PM KG.00001: Nab: a precise study of unpolarized neutron beta decay Dinko Pocanic Nab is a program of measurements of unpolarized neutron decays at the Spallation Neutron Source, Oak Ridge, TN. Nab aims to determine $a$, the $e$--$\nu$ correlation with precision of $\delta a/a = 10^{-3}$, and $b$, the Fierz interference term, with uncertainty $\delta b \simeq 3\times 10^{-3}$. The set of available observables overconstrains neutron beta decay in the Standard Model (SM), opening the door to searches for evidence of possible SM extensions. Projected Nab results will lead to a new precise determination of the ratio $\lambda=G_A/G_V$, and to significant reductions in the allowed limits for both right- and left-handed scalar and tensor currents. Alternatively, Nab may detect a discrepancy from SM predictions consistent with certain realizations of supersymmetry. A long asymmetric spectrometer, optimized to achieve the required narrow proton momentum response function, is currently under construction. The apparatus is to be used in follow-up measurements (ABba experiment) of asymmetry observables $A$ and $B$ in polarized neutron decay. Nab is planned for beam readiness in 2018. We discuss the experiment's motivation, expected reach, design and method, and update its overall status. 
Friday, October 27, 2017 2:12PM - 2:24PM KG.00002: The Nab Spectrometer, Precision Field Mapping, and Associated Systematic Effects Jason Fry The Nab experiment will make precision measurements of $a$, the $e$-$\nu$ correlation parameter, and $b$, the Fierz interference term, in neutron beta decay, aiming to deliver an independent determination of the ratio $\lambda = G_A / G_V$ to sensitively test CKM unitarity. Nab utilizes a novel, long asymmetric spectrometer to measure the proton TOF and electron energy. We extract $a$ from the slope of the measured TOF distribution for different electron energies. A reliable relation of the measured proton TOF to $a$ requires detailed knowledge of the effective proton pathlength, which in turn imposes further requirements on the precision of the magnetic fields in the Nab spectrometer. The Nab spectrometer, magnetometry, and associated systematics will be discussed. Friday, October 27, 2017 2:24PM - 2:36PM KG.00003: The Electric Field in the Neutron Decay Region of the Nab Experiment Huangxing Li The Nab collaboration will determine two parameters in free neutron beta decay: (a) the electron-antineutrino correlation coefficient $a$ to $|\delta a /a| \le 10^{-3}$ and (b) the Fierz interference term $b$ to $|\delta b| \le 3\times10^{-3}$. Part (a) will be done with a measurement of the two-dimensional electron energy and proton time-of-flight spectrum in the neutron beta decay. We will discuss the requirements for the electric field in the neutron decay region to achieve the desired experimental uncertainty. We will present our solution: an electrode system made from materials with low work function variations, and its characterization with a Kelvin probe. 
Friday, October 27, 2017 2:36PM - 2:48PM KG.00004: Neutronics Studies for the Nab Experiment Elizabeth Scott The Nab experiment at the Spallation Neutron Source at ORNL aims to measure the neutron beta decay electron-neutrino correlation coefficient "a" and the Fierz interference term "b" with competitive precision. In Nab, the parameter "a" is extracted from the proton momentum and electron energy using an asymmetric magnetic spectrometer and two large-area highly pixelated Si detectors. To achieve $10^{-3}$ accuracy, there must be low background rates compared to our 1 kHz signal rates. The background is primarily reduced by using coincidence detection of the electron and proton from the decay. However, further reduction is still necessary. Neutron and gamma rates in the Si detectors can lead to false coincidences. The majority of this background radiation can be reduced by well-designed collimation and shielding. The collimation design was done with McStas and the background shielding with MCNP6 (Monte Carlo N-Particle 6). Neutrons are absorbed by $^{6}$Li-loaded materials or borated polyethylene, and gammas are shielded close to the spectrometer with nonmagnetic materials such as lead and stainless steel. I will present the shielding design and MCNP6 results. Friday, October 27, 2017 2:48PM - 3:00PM KG.00005: Determining a Limit on the Nab Timing Systematic Aaron Sprow Weak decay angular correlations such as that of the electron and neutrino from neutron decay can be used in precise tests of the Standard Model. The Nab experiment at the Spallation Neutron Source will determine the electron-neutrino correlation coefficient, $a$, to a fractional uncertainty of $1\times10^{-3}$ by considering the proton spectra for a fixed electron energy. Each proton energy will be determined via a time-of-flight measurement relative to the fast arrival of the coincident electron. 
Parametric studies show that a timing systematic of $\delta t_{\textrm{pe}}=300\,\textrm{ps}$ can saturate the Nab error budget. Presented in this talk will be a discussion of the simulation and experimental efforts designed to understand the timing systematic, and their results. Friday, October 27, 2017 3:00PM - 3:12PM KG.00006: Programming a Detector Emulator on NI's FlexRIO Platform Michelle Gervais, Christopher Crawford, Aaron Sprow Recently digital detector emulators have been on the rise as a means to test data acquisition systems and analysis toolkits from a well understood data set. National Instruments' PXIe-7962R FPGA module and Active Technologies AT-1212 DAC module provide a customizable platform for analog output. Using a graphical programming language, we have developed a system capable of producing two time-correlated channels of analog output which sample unique amplitude spectra to mimic nuclear physics experiments. This system will be used to model the Nab experiment, in which a prompt beta decay electron is followed by a slow proton according to a defined time distribution. We will present the results of our work and discuss further development potential. Friday, October 27, 2017 3:12PM - 3:24PM KG.00007: Overview of the Calcium-45 Beta Spectrum Measurement at Los Alamos National Laboratory Camen Royse One smoking gun of BSM physics would be the observation of a non-zero Fierz interference term, a feature in the beta spectrum produced by scalar and tensor couplings. Calcium-45 is an almost ideal candidate with which to search for a Fierz term. It is a pure beta emitter with a low endpoint of 256 keV and a simple decay scheme, with a $7/2- \rightarrow 7/2-$ g.s. to g.s. branching ratio of 99.9981(11)\%. Isospin selection rules ensure the decay is greater than about 98.5\% pure Gamow-Teller and the integrated effect of the weak magnetism over the entire spectrum is expected to be only 0.13\%. 
An experiment designed to precisely measure the beta spectrum of Ca-45 has been run over the past two summers at Los Alamos National Laboratory. The experiment is composed of a 4$\pi$-capture magnetic spectrometer between two segmented arrays of hexagonal silicon detectors (similar to the Nab experiment), a helium gas cooling system, front end electronics and amplifiers, and a data acquisition system which synchronizes the timing of the signals coming from both detector arrays. Data are analyzed to account for the pile-up of signals and other physical and calibration factors. An overview of the design and execution of the experiment, as divided into the above topics, will be presented. Friday, October 27, 2017 3:24PM - 3:36PM KG.00008: Status of the 45Ca beta spectrum measurement at Los Alamos National Laboratory Noah Birge Although the Standard Model describes fundamental particle interactions to high precision, neutrino flavor oscillations, the observed baryon asymmetry, and the complete absence of gravity from the model make it clear that there is important physics it does not describe, so-called beyond-the-Standard-Model (BSM) physics. A nonzero Fierz interference term for beta decay is one candidate for BSM physics. This effect essentially manifests in the form of a distortion of the beta decay electron energy spectrum. $^{45}$Ca is a particularly appealing nucleus with which to attempt a measurement of the interference term, as it is a pure beta emitter. A program is in progress to perform this measurement at the Los Alamos National Lab. The 2017 run incorporates cold helium gas to cool the two detector systems. A similar system will be implemented in the Nab experiment, so this experiment also serves as an early prototype for Nab. Results of cooling tests and their effects on detector performance using a waveform analysis, as well as preliminary energy spectra, will be presented. 
Friday, October 27, 2017 3:36PM - 3:48PM KG.00009: Waveform Analysis Optimization for the $^{45}$Ca Beta Decay Experiment Ryan Whitehead The $^{45}$Ca experiment is searching for a non-zero Fierz interference term, which would imply a tensor type contribution to the low-energy weak interaction, possibly signaling Beyond-the-Standard-Model (BSM) physics. Beta spectrum measurements are being performed at LANL, using the segmented, large area, Si detectors developed for the Nab and UCNB experiments. $10^{9}$ events have been recorded, with 38 of the 254 pixels instrumented, during the summers of 2016 and 2017. An important step to extracting the energy spectra is the correction of the waveform for pile-up events. A set of analysis tools has been developed to address this issue. A trapezoidal filter has been characterized and optimized for the experimental waveforms. This filter is primarily used for energy extraction, but, by adjusting certain parameters, it has been modified to identify pile-up events. The efficiency varies with the total energy of the particle and the amount deposited with each detector interaction. Preliminary results of this analysis will be presented.
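The trapezoidal shaping mentioned in the last abstract can be illustrated with a minimal sketch. This is not the collaboration's actual filter (their implementation is not given here); it is the simplest boxcar-difference trapezoidal shaper, with function name and parameters of my own choosing, whose response to a step pulse is a trapezoid with rise time k and flat top m — the flat-top amplitude gives the pulse energy, and distortions of the flat top can flag pile-up:

```python
import numpy as np

def trapezoidal_filter(v, k, m):
    """Boxcar-difference trapezoidal shaper: rise time k, flat top m.

    The kernel averages the most recent k samples and subtracts the
    average of k samples taken m samples earlier.  For a step of
    amplitude A the output ramps up over k samples, sits at A for m
    samples, then ramps back down to zero.
    """
    kernel = np.concatenate([np.full(k, 1.0 / k),
                             np.zeros(m),
                             np.full(k, -1.0 / k)])
    return np.convolve(v, kernel)

# synthetic step-like "waveform" of amplitude 2.5 starting at sample 50
v = np.zeros(300)
v[50:] = 2.5
y = trapezoidal_filter(v, k=20, m=10)
print(y.max())   # flat-top amplitude recovers the step height, 2.5
```

Real detector pulses decay exponentially rather than stepping, so production filters add a pole-zero deconvolution stage; the sketch above only shows the shaping idea.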
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find... 
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really appreciate it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 You are welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it only has documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. 
As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technically, the method you mentioned that Google uses is an HTTP GET request, but for mathematics a GET request may not be appropriate, since a math query has structure; usually a developer would instead use an HTTP POST request with a JSON-encoded body. This makes development much easier, because JSON is richly structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two solutions for the "query link" problem you raised. The first is to use the browser back/forward buttons to navigate the query history. @MartinSleziak The second is to use the command-line tool 'curl' to get search results for a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort.) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked at the top, with different symbols such as "a" or "b" ranked after the exact matches. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes: Greek letters are tokenized the same way as ordinary letters. @MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I also observe this issue. The only workaround is to use the arrow keys to move the cursor to the rightmost position and hit '^' so the cursor goes into the upper-bound field. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange. 
This is a very small number, but I will index more posts/pages once the search engine's efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions; currently I just hope more developers get to know this project. Indeed, this is my side project, so development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago "What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. 
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - let Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the supposed names of the three kings. As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they also have other interpretations in Poland. 
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will be happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). 
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as $(f \star g)(t) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\, g(\tau+t)\,d\tau$, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause a temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 To reduce the data points needed for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?) 
The x axis is the index in the array -- so I have 200 time series Each one is equally spaced, 1e-9 seconds apart The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are The solid blue line is the abs(shear strain) and is valued on the right axis The dashed blue line is the result from scipy.signal.correlate And is valued on the left axis So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time Related: Why don't we just ban homework altogether? Banning homework: vote and documentation We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. 
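The three numbered questions above have concrete answers that a short sketch can show (the signals and names here are synthetic, not from the original thread): a 'full' correlation of two length-N series has 2N-1 samples, which is why 200-sample inputs give a result "400 time steps long" (399, exactly); a non-zero baseline can make the raw correlation negative even for visibly correlated signals, so subtract the means first; and the lag is the argmax position re-indexed so that the center of the output corresponds to zero lag:

```python
import numpy as np

N = 200
t = np.arange(N)
x = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)   # a pulse centered at sample 80
d = 7
y = np.roll(x, d)                           # the same pulse, delayed by 7 samples

# subtract the means first: a constant offset contributes a baseline term
# to the correlation that can swamp (or flip the sign of) the real peak
xc = x - x.mean()
yc = y - y.mean()

c = np.correlate(yc, xc, mode='full')       # length 2N - 1 = 399
lags = np.arange(-(N - 1), N)               # lag axis for the 'full' output
lag = lags[np.argmax(c)]
print(len(c), lag)                          # 399, and the recovered 7-sample delay
```

The same re-indexing applies to scipy.signal.correlate; recent SciPy versions also provide scipy.signal.correlation_lags to build the lag axis for you.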
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no, because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
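Regarding the $\operatorname{Tr}\log$ identity quoted a few messages up: as stated it is only true up to a sign and an $\epsilon$-dependent (divergent) constant, which is presumably what the "how rigorous is this" worry is about. A quick numerical sketch of my own (not from the chat), using $\int_\epsilon^\infty \frac{ds}{s}\,e^{-s\lambda} = E_1(\epsilon\lambda) \approx -\gamma - \ln(\epsilon\lambda)$ for small $\epsilon$, checks the regularized relation $\operatorname{Tr}\log \mathcal{A} = -\int_\epsilon^\infty \frac{ds}{s}\operatorname{Tr}e^{-s\mathcal{A}} - N(\gamma + \ln\epsilon) + O(\epsilon)$ for an $N \times N$ positive-definite matrix:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive-definite test matrix
lam = np.linalg.eigvalsh(A)

eps = 1e-8
# regularized proper-time integral, evaluated eigenvalue by eigenvalue:
# ∫_eps^∞ ds/s Tr e^{-sA} = Σ_i E1(eps * λ_i)
integral = exp1(eps * lam).sum()

# Tr log A plus the eps-dependent constant should cancel it (up to O(eps))
tr_log_A = np.log(lam).sum()
approx = -tr_log_A - len(lam) * (np.euler_gamma + np.log(eps))
print(abs(integral - approx))        # tiny: the identity holds once regularized
```

The point of the sketch is that the bare $\epsilon \to 0$ limit diverges like $-N\ln\epsilon$; in the zeta-function or heat-kernel treatments this constant is what gets subtracted.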
Astrophysics > Cosmology and Nongalactic Astrophysics Title: Impact of kinetic and potential self-interactions on scalar dark matter (Submitted on 3 Jun 2019 (this version), latest version 22 Aug 2019 (v2)) Abstract: We consider models of scalar dark matter with a generic interaction potential and non-canonical kinetic terms of the K-essence type that are subleading with respect to the canonical term. We analyse the low-energy regime and derive, in the nonrelativistic limit, the effective equations of motion. In the fluid approximation they reduce to the conservation of matter and to the Euler equation for the velocity field. We focus on the case where the scalar field mass $10^{-21}\,{\rm eV} \ll m \lesssim 10^{-4} \, {\rm eV}$ is much larger than for fuzzy dark matter, so that the quantum pressure is negligible on cosmological and galactic scales, while the self-interaction potential and non-canonical kinetic terms generate a significant repulsive pressure. At the level of cosmological perturbations, this provides a dark-matter density-dependent speed of sound. At the non-linear level, the hydrostatic equilibrium obtained by balancing the gravitational and scalar interactions implies that virialized structures have a solitonic core of finite size depending on the speed of sound of the dark matter fluid. For the most relevant potential, $\lambda_4 \phi^4/4$, or K-essence with a $(\partial \phi)^4$ interaction, the size of such stable cores cannot exceed 60 kpc. Structures with a density contrast larger than $10^6$ can be accommodated with a speed of sound $c_s\lesssim 10^{-6}$. We also consider the case of a cosine self-interaction, as an example of a bounded non-polynomial self-interaction. This gives similar results in low-mass and low-density halos, whereas solitonic cores are shown to be absent in massive halos. 
Submission history
From: Patrick Valageas [view email]
[v1] Mon, 3 Jun 2019 12:10:54 GMT (235kb)
[v2] Thu, 22 Aug 2019 08:57:54 GMT (235kb)
Abbreviation: DLO
A distributive lattice with operators is a structure $\mathbf{A}=\langle A,\vee,\wedge,f_i\ (i\in I)\rangle$ such that
$\langle A,\vee,\wedge\rangle$ is a distributive lattice
$f_i$ is join-preserving in each argument: $f_i(\ldots,x\vee y,\ldots)=f_i(\ldots,x,\ldots)\vee f_i(\ldots,y,\ldots)$
Let $\mathbf{A}$ and $\mathbf{B}$ be distributive lattices with operators of the same signature. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a distributive lattice homomorphism and preserves all the operators: $h(f_i(x_0,\ldots,x_{n-1}))=f_i(h(x_0),\ldots,h(x_{n-1}))$
Example 1:
In Galí chapter 2 we have the following constraint in the classical monetary model: $P_t C_t + Q_t B_t \leq B_{t-1} + W_t N_t-T_t$. It is then treated as an equality. My question is therefore: are we assuming that $\partial U/\partial B_t >0$, so that households achieve the optimum at equality? Another question: in general, if I have the problem $\max E_0 \sum_{t=0}^{\infty}\beta^t U(C_t,N_t)$, should I solve the problem $\max \sum_{t=0}^{\infty}\beta^t U(C_t,N_t)$ and then take expectations of the relations I obtain by optimizing? For example, if we add the constraint $P_t C_t + Q_t B_t \leq B_{t-1} + W_t N_t-T_t$ to these problems, then in the problem without expectation I get $Q_t=\beta \frac{\partial U/\partial C_{t+1}}{\partial U/\partial C_{t}} \frac{P_t}{P_{t+1}}$, and taking $E_t$ I obtain the Euler equation. Is the procedure of solving the problem without the expectation and then taking expectations correct? EDIT: If it is incorrect to drop the expectation and optimize, then I am confused about this: How to relate real rate of return on capital to bond interest rate: Lagrangian. The procedure used to solve the problem there also seems to drop the expectation. Second, if I use $\frac{\partial }{\partial C_t}E[U]=E\left[\frac{\partial U}{\partial C_t}\right]$, I obtain $Q_t=\beta\frac{ E_0\left[\partial U/\partial C_{t+1} \right]}{E_0\left[\partial U/\partial C_{t} \right]}\frac{P_t}{P_{t+1}}$, because I cannot get rid of $E_0$. Nevertheless, this would be equivalent to $Q_t=\beta E_0 \left[\frac{\partial U/\partial C_{t+1} }{\partial U/\partial C_{t} }\right]\frac{P_t}{P_{t+1}}$ if $E_0 \left[ \frac{\partial U/\partial C_{t+1} }{\partial U/\partial C_{t} }\right]=\frac{ E_0\left[\partial U/\partial C_{t+1} \right]}{E_0\left[\partial U/\partial C_{t} \right]}$. Is this because $C_t$ is independent of $C_{t+1}$?
Third, I realized that if I solve $\max \sum_{t=0}^{\infty}\beta^t E_t U(C_t,N_t)$ subject to the same constraint as before, and proceed with $\frac{\partial }{\partial C_t}E[U]=E\left[\frac{\partial U}{\partial C_t}\right]$, I obtain the correct Euler equation. Is this the correct problem to optimize? But then what about Galí's formulation?
In this post you can find the algebraic steps that lead to the (standard) result mentioned in Varian's book. Now, let's assume that, in a specific market, the consumer's preferences are such that they lead to a constant elasticity demand curve, with elasticity lower than unity in absolute terms, $|\eta| < 1$, for example $$Q^d = AP^{\eta}, \quad -1 <\eta < 0$$ Also, let's assume that for historical or institutional reasons this market is a monopoly. From the post mentioned above we have that profit maximization by the monopolist requires that $$P^* = \frac {|\eta|}{|\eta|-1} MC \tag{1}$$ where $$\eta = \frac {\partial Q }{ \partial P}\cdot \frac {P}{Q} \Rightarrow \frac {\partial Q }{ \partial P} = \eta \cdot \frac {Q}{P} \tag{2}$$ and $MC$ is marginal cost. Obviously, this price is negative in our case, and so meaningless. We don't need to go into sophisticated constrained maximization procedures to see what happens here: the profit function is $$\pi = P\cdot Q(P) - C(Q(P)) \tag{3}$$ and its derivative with respect to price is $$\frac {\partial \pi}{\partial P} = Q + P\frac {\partial Q }{ \partial P} - MC\cdot \frac {\partial Q }{ \partial P} \tag{4}$$ Using $(2)$ we get $$ \frac {\partial \pi}{\partial P}=Q + P\cdot \eta \cdot \frac {Q}{P} - MC\cdot \eta \cdot \frac {Q}{P} $$ $$\implies \frac {\partial \pi}{\partial P}= Q\cdot \left [1 + \eta - \eta \cdot \frac {MC}{P}\right]$$ $$\implies \frac {\partial \pi}{\partial P}= Q\cdot \left [1 - |\eta| + |\eta| \cdot \frac {MC}{P}\right] \tag{5}$$ From $(5)$ we see that $$|\eta| < 1 \implies \frac {\partial \pi}{\partial P} > 0,\;\; \forall P >0 \tag{6}$$ So a profit maximizing monopolist would theoretically have the tendency to increase the price to "infinity", sending the quantity supplied to zero. Note that the revenue function here is $$R = P\cdot Q^d = P\cdot AP^{\eta} = AP^{1-|\eta|},$$ which is increasing in $P$, while costs fall as $Q^d$ falls.
So indeed profits would tend to infinity by selling less and less for higher and higher price. What markets could be described by such tendencies?
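As a quick sanity check, the sign claim in $(6)$ is easy to verify numerically. The sketch below uses made-up parameter values (a hypothetical $A$, $\eta$ and a linear cost function $C(Q) = mc \cdot Q$, so marginal cost is constant):

```python
import numpy as np

# Hypothetical parameters: constant-elasticity demand with |eta| < 1,
# and linear costs C(Q) = mc * Q so that MC = mc is constant.
A, eta, mc = 10.0, -0.5, 2.0

P = np.linspace(1.0, 500.0, 10_000)   # a grid of prices
Q = A * P ** eta                       # demand: Q^d = A * P^eta
profit = P * Q - mc * Q                # pi = P*Q(P) - C(Q(P))

# Consistent with (6): profit is strictly increasing in price everywhere.
print(bool(np.all(np.diff(profit) > 0)))  # True
```

With $|\eta| = 0.5$ the profit $\pi = 10\sqrt{P} - 20/\sqrt{P}$ grows without bound in $P$, which is exactly the pathology described above.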
Measurement of proton-dissociative diffractive photoproduction of vector mesons at large momentum transfer at HERA
Abstract. Diffractive photoproduction of vector mesons, \(\gamma p \to V Y\), where Y is a proton-dissociative system, has been measured in e+p interactions with the ZEUS detector at HERA using an integrated luminosity of 25 pb\(^{-1}\). The differential cross section, \(d\sigma/dt\), is presented for \(1.2 < -t < 12\,{\rm GeV}^2\), where t is the square of the four-momentum transferred to the vector meson. The data span the range in photon-proton centre-of-mass energy, W, from 80 GeV to 120 GeV. The t distributions are well fit by a power law, \(d\sigma/dt \propto (-t)^{-n}\). The slope of the effective Pomeron trajectory, measured from the W dependence of the \(\rho^0\) and \(\phi\) cross sections in bins of t, is consistent with zero. The ratios of \(d\sigma_{\gamma p \to \phi Y}/dt\) to \(d\sigma_{\gamma p \to \rho^0 Y}/dt\) and of \(d\sigma_{\gamma p \to J/\psi Y}/dt\) to \(d\sigma_{\gamma p \to \rho^0 Y}/dt\) increase with increasing \(-t\). Decay-angle analyses for \(\rho^0\), \(\phi\) and \(J/\psi\) mesons have been carried out. For the \(\rho^0\) and \(\phi\) mesons, contributions from single and double helicity flip are observed. The results are compared to expectations of theoretical models.
I guess what you're looking for is the following circuit. Here, $b_1,b_2,b_3,b_4 \in \{0,1\}$, and $\oplus$ is addition modulo $2$. Here, the fifth qubit is used as an auxiliary, or ancilla qubit. It starts at $|0\rangle$ and ends in $|0\rangle$ when the circuit is applied. Let me elaborate on how this circuit works. The idea is to first of all check whether the first two qubits are in state $|1\rangle$. This can be done using a single Toffoli gate, and the result is stored in the auxiliary qubit. Now, the problem reduces to flipping qubit $4$, whenever qubits $3$ and the auxiliary qubit are in $|1\rangle$. This can also be achieved using one application of a Toffoli gate, namely the middle one in the circuit shown above. Finally, the last Toffoli gate serves to uncompute the temporary result that we stored in the auxiliary qubit, such that the state of this qubit returns to $|0\rangle$ after the circuit is applied. In the comment section, the question arose whether it is possible to implement such a circuit using only Toffoli gates, without using auxiliary qubits. This question can be answered in the negative, as I will show here. We want to implement the $CCCNOT$-gate, which acts on four qubits. We can define the following matrix (the matrix representation of the Pauli-$X$-gate):$$X = \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$$Furthermore, we denote the $N$-dimensional identity matrix by $I_N$. Now, we observe that the matrix representation of the $CCCNOT$-gate, acting on four qubits, is given by the following $16 \times 16$ matrix:$$CCCNOT = \begin{bmatrix}I_{14} & 0 \\ 0 & X\end{bmatrix}$$Hence, we can determine its determinant:$$\det(CCCNOT) = -1$$Now consider the matrix representation of the Toffoli gate, acting on the first three qubits of a $4$-qubit system. 
Its matrix representation is written as (where we used the Kronecker product of matrices):$$Toffoli \otimes I_2 = \begin{bmatrix}I_6 & 0 \\ 0 & X\end{bmatrix} \otimes I_2 = \begin{bmatrix}I_{12} & 0 \\ 0 & X \otimes I_2\end{bmatrix} = \begin{bmatrix}I_{12} & 0 & 0 \\ 0 & 0 & I_2 \\ 0 & I_2 & 0\end{bmatrix}$$Calculating its determinant yields:$$\det(Toffoli \otimes I_2) = 1$$The Toffoli gates can also act on different qubits of course. Suppose we let the Toffoli gate act on the first, second and fourth qubit, where the fourth qubit is the target qubit. Then we obtain the new matrix representation from the one displayed above by swapping the columns corresponding to the states that differ only in the third and fourth qubit, i.e., $|0001\rangle$ with $|0010\rangle$, $|0101\rangle$ with $|0110\rangle$, etc. The important thing to note here, is that the number of swaps of columns is even, and hence that the determinant remains unchanged. As we can write every permutation of qubits as a sequence of consecutive permutations of just $2$ qubits (that is, $S_4$ is generated by the transpositions in $S_4$), we find that for all Toffoli gates, applied to any combination of control and target qubits, its matrix representation has determinant $1$. The final thing to note is that the determinant commutes with matrix multiplication, i.e., $\det(AB) = \det(A)\det(B)$, for any two matrices $A$ and $B$ compatible with matrix multiplication. Hence, it now becomes apparent that applying multiple Toffoli gates in sequence never creates a circuit whose matrix representation has a determinant different from $1$, which in particular implies that the $CCCNOT$-gate cannot be implemented using only Toffoli gates on $4$ qubits. The obvious question, now, is what changes when we do allow an auxiliary qubit. 
We find the answer when we write out the action of the $CCCNOT$-gate on a $5$-qubit system:$$CCCNOT \otimes I_2 = \begin{bmatrix}I_{14} & 0 \\ 0 & X\end{bmatrix} \otimes I_2 = \begin{bmatrix}I_{28} & 0 & 0 \\ 0 & 0 & I_2 \\ 0 & I_2 & 0\end{bmatrix}$$If we calculate this determinant, we find:$$\det(CCCNOT \otimes I_2) = 1$$Hence, the determinant of the $CCCNOT$-gate acting on $5$ qubits is $1$, instead of $-1$. This is why the previous argument is not valid for $5$ qubits, as we already knew because of the explicitly constructed circuit the OP asked for.
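The determinant bookkeeping above is straightforward to check numerically. Here is a small sketch (the helper `controlled_x` is my own, not from any library; it just builds the permutation matrix of an X gate controlled on a set of qubits):

```python
import numpy as np

def controlled_x(n_qubits, controls, target):
    """Permutation matrix of an X on `target`, controlled on all qubits in `controls`."""
    dim = 2 ** n_qubits
    U = np.eye(dim)
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if all(bits[c] == 1 for c in controls):
            j = i ^ (1 << (n_qubits - 1 - target))  # flip the target bit
            U[i, i] = 0.0
            U[i, j] = 1.0
    return U

cccnot = controlled_x(4, [0, 1, 2], 3)    # CCCNOT on 4 qubits
toffoli_4 = controlled_x(4, [0, 1], 2)    # Toffoli ⊗ I_2 on 4 qubits

print(round(np.linalg.det(cccnot)))                          # -1
print(round(np.linalg.det(toffoli_4)))                       #  1
print(round(np.linalg.det(controlled_x(5, [0, 1, 2], 3))))   #  1 (CCCNOT ⊗ I_2)
```

The three determinants reproduce the argument: any product of Toffoli matrices on 4 qubits has determinant $1$, so it can never equal the determinant $-1$ of the CCCNOT, while on 5 qubits the obstruction disappears.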
A first remark This same phenomenon of 'control' qubits changing states in some circumstances also occurs with controlled-NOT gates; in fact, this is the entire basis of eigenvalue estimation. So not only is it possible, it is an important fact about quantum computation that it is possible. It even has a name: a "phase kick", in which the control qubits (or more generally, a control register) incurs relative phases as a result of acting through some operation on some target register.$\def\ket#1{\lvert#1\rangle}$ The reason why this happens Why should this be the case? Basically it comes down to the fact that the standard basis is not actually as important as we sometimes describe it as being. Short version. Only the standard basis states on the control qubits are unaffected. If the control qubit is in a state which is not a standard basis state, it can in principle be changed. Longer version — Consider the Bloch sphere. It is, in the end, a sphere — perfectly symmetric, with no one point being more special than any other, and no one axis more special than any other. In particular, the standard basis is not particularly special. The CNOT operation is in principle a physical operation. To describe it, we often express it in terms of how it affects the standard basis, using the vector representations$$ \ket{00} \to {\scriptstyle \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{01} \to {\scriptstyle \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{10} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}}\,,\quad\ket{11} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}}$$— but this is just a representation. 
This leads to a specific representation of the CNOT transformation:$$\mathrm{CNOT}\to{\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}}\,.$$and for the sake of brevity we say that those column vectors are the standard basis states on two qubits, and that this matrix is a CNOT matrix. Did you ever do an early university mathematics class, or read a textbook, where it started to emphasise the difference between a linear transformation and matrices — where it was said, for example, that a matrix could represent a linear transformation, but was not the same as a linear transformation? The situation with CNOT in quantum computation is one example of how this distinction is meaningful. The CNOT is a transformation of a physical system, not of column vectors; the standard basis states are just one basis of a physical system, which we conventionally represent by $\{0,1\}$ column vectors. What if we were to choose to represent a different basis — say, the X eigenbasis — by $\{0,1\}$ column vectors, instead? Suppose that we wish to represent $$\begin{aligned}\ket{++} \to{}& [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{+-} \to{}& [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{-+} \to{}& [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]^\dagger\,,\\\ket{--} \to{}& [\, 0 \;\; 0 \;\; 0 \;\; 1 \,]^\dagger \,.\end{aligned}$$This is a perfectly legitimate choice mathematically, and because it is only a notational choice, it doesn't affect the physics — it only affects the way that we would write the physics. It is not uncommon in the literature to do analysis in a way equivalent to this (though it is rare to explicitly write a different convention for column vectors as I have done here). 
We would have to represent the standard basis vectors by:$$ \ket{00} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}\,,\quad\ket{01} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}}\,,\quad\ket{10} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ -1 \\ -1 \end{bmatrix}}\,,\quad\ket{11} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}}\,.$$Again, we're using the column vectors on the right only to represent the states on the left. But this change in representation will affect how we want to represent the CNOT gate. A sharp-eyed reader may notice that the vectors which I have written on the right just above are the columns of the usual matrix representation of $H \otimes H$. There is a good reason for this: what this change of representation amounts to is a change of reference frame in which to describe the states of the two qubits. In order to describe $\ket{++} = [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger$, $\ket{+-} = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger$, and so forth, we have changed our frame of reference for each qubit by a rotation which is the same as the usual matrix representation of the Hadamard operator — because that same operator interchanges the $X$ and $Z$ observables, by conjugation. 
This same frame of reference will apply to how we represent the CNOT operation, so in this shifted representation, we would have$$\begin{aligned}\mathrm{CNOT} \to \tfrac{1}{4}{}\,{\scriptstyle\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}\,\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\,\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}}\,=\,{\scriptstyle\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}}\end{aligned}$$which — remembering that the columns now represent $X$ eigenstates — means that the CNOT performs the transformation$$ \begin{aligned}\mathrm{CNOT}\,\ket{++} &= \ket{++} , \\\mathrm{CNOT}\,\ket{+-} &= \ket{--}, \\\mathrm{CNOT}\,\ket{-+} &= \ket{-+} , \\\mathrm{CNOT}\,\ket{--} &= \ket{+-} .\end{aligned} $$Notice here that it is only the first, 'control' qubit whose state changes; the target is left unchanged. Now, I could have shown this same fact a lot more quickly without all of this talk about changes in reference frame. In introductory courses in quantum computation in computer science, a similar phenomenon might be described without ever mentioning the words 'reference frame'. But I wanted to give you more than a mere calculation. I wanted to draw attention to the fact that a CNOT is in principle not just a matrix; that the standard basis is not a special basis; and that when you strip these things away, it becomes clear that the operation realised by the CNOT clearly has the potential to affect the state of the control qubit, even if the CNOT is the only thing you are doing to your qubits. The very idea that there is a 'control' qubit is one centered on the standard basis, and embeds a prejudice about the states of the qubits that invites us to think of the operation as one-sided.
But as a physicist, you should be deeply suspicious of one-sided operations. For every action there is an equal and opposite reaction; and here the apparent one-sidedness of the CNOT on standard basis states is belied by the fact that, for X eigenbasis states, it is the 'target' which unilaterally determines a possible change of state of the 'control'. You wondered whether there was something at play which was only a mathematical convenience, involving a choice of notation. In fact, there is: the way in which we write our states with an emphasis on the standard basis, which may lead you to develop a non-mathematical intuition of the operation only in terms of the standard basis. But change the representation, and that non-mathematical intuition goes away. The same thing which I have sketched for the effect of CNOT on X-eigenbasis states, is also going on in phase estimation, only with a different transformation than CNOT. The 'phase' stored in the 'target' qubit is kicked up to the 'control' qubit, because the target is in an eigenstate of an operation which is being coherently controlled by the first qubit. On the computer science side of quantum computation, it is one of the most celebrated phenomena in the field. It forces us to confront the fact that the standard basis is only special in that it is the one we prefer to describe our data with — but not in how the physics itself behaves.
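The change-of-frame computation above can be reproduced in a few lines of numpy (a sketch; `H` is the usual Hadamard matrix, and conjugating by $H \otimes H$ rewrites CNOT in the X eigenbasis):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# H is its own inverse, so conjugation needs no dagger.
HH = np.kron(H, H)
cnot_x = HH @ CNOT @ HH
print(np.round(cnot_x).astype(int))   # the matrix computed above: control and target swap roles

# Directly: CNOT|--> = |+->, i.e. the 'control' qubit flips, the target is untouched.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
print(np.allclose(CNOT @ np.kron(minus, minus), np.kron(plus, minus)))  # True
```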
$\alpha _n ^n-1=0$, $\alpha _n=e^{2 \pi i/n}$ $$f(x_1,x_2,x_3,\ldots,x_n)=(x_1+\alpha _n x_2+ \alpha _n ^2 x_3+\cdots+\alpha _n ^{n-1} x_n)^n$$ I have read in Jim Brown's paper on page 5 that Lagrange showed: If $n=3$ then $f(x_1,x_2,x_3)$ can take at most 2 different values over all permutations of $(x_1,x_2,x_3)$. If $n=4$ then $f(x_1,x_2,x_3,x_4)$ can take at most 3 different values over all permutations of $(x_1,x_2,x_3,x_4)$. If $n=5$ then $f(x_1,x_2,x_3,x_4,x_5)$ can take at most 6 different values over all permutations of $(x_1,x_2,x_3,x_4,x_5)$. But there is no proof of how he obtained this result. According to the paper, it was an important result for the insolubility of the quintic via radicals. I therefore searched for Lagrange's paper (his 1771 paper, Reflections on the Algebraic Theory of Equations) on the internet, but I could not find it. Jim Brown's paper does not mention the general solution for $n$. What is the general formula for how many different values $f(x_1,x_2,x_3,\ldots,x_n)$ can take over all permutations of $(x_1,x_2,x_3,\ldots,x_n)$? Any idea how to find the general formula for $n$? Or, if it is not possible for all $n$, at least a way to easily prove the cases $n=3$, $n=4$ and $n=5$? (I tried: $n=3$ is relatively easy but needs a lot of calculation in the classic approach via binomial expansion.) Could you please help me approach the problem without using group theory? I need a proof in an algebraic way. Also welcome are any links that show how Lagrange proved the cases $n=3$, $n=4$ and $n=5$. Note: I am trying to understand deeply how Abel and Ruffini showed the insolubility of the quintic via radicals. The problem is also related to my other question, which showed that $f$ is not a symmetric function for $n>2$, and that $f(x_1,x_2,x_3,\ldots,x_n)=f(x_n,x_1,x_2,\ldots,x_{n-1})=f(x_{n-1},x_n,x_1,\ldots,x_{n-2})=\cdots=f(x_2,x_3,x_4,\ldots,x_n,x_1)$ (in total, $n$ permutations of $f$ are equal to each other); it means at least $n$ values coincide among all $n!$ permutations of $(x_1,x_2,x_3,\ldots,x_n)$.
Thanks a lot for your answers and your advice.

UPDATE: I completed the proof for $n=3$ and would like to share my way. All permutations for $n=3$ are:

$1)$ $f(x_1,x_2,x_3)=(x_1+\alpha _3 x_2+ \alpha _3 ^2 x_3)^3$

$2)$ $f(x_3,x_1,x_2)=(x_3+\alpha _3 x_1+ \alpha _3 ^2 x_2)^3=\alpha _3 ^3(x_3+\alpha _3 x_1+ \alpha _3 ^2 x_2)^3=(\alpha _3 x_3+\alpha _3 ^2 x_1+ x_2)^3$

$3)$ $f(x_2,x_3,x_1)=(x_2+\alpha _3 x_3+ \alpha _3 ^2 x_1)^3=\alpha _3 ^3(x_2+\alpha _3 x_3+ \alpha _3 ^2 x_1)^3=(\alpha _3 x_2+ \alpha _3 ^2 x_3+x_1)^3$

$4)$ $f(x_1,x_3,x_2)=(x_1+\alpha _3 x_3+ \alpha _3 ^2 x_2)^3$

$5)$ $f(x_2,x_1,x_3)=(x_2+\alpha _3 x_1+ \alpha _3 ^2 x_3)^3=\alpha _3 ^3(x_2+\alpha _3 x_1+ \alpha _3 ^2 x_3)^3=(\alpha _3x_2+\alpha _3 ^2 x_1+ x_3)^3$

$6)$ $f(x_3,x_2,x_1)=(x_3+\alpha _3 x_2+ \alpha _3 ^2 x_1)^3=\alpha _3 ^3 (x_3+\alpha _3 x_2+ \alpha _3 ^2 x_1)^3=(\alpha _3 x_3+\alpha _3 ^2 x_2+ x_1)^3$

It can easily be seen that Permutation 1 = Permutation 3 and Permutation 2 = Permutation 3, thus Permutation 1 = Permutation 2 = Permutation 3. Likewise, Permutation 4 = Permutation 6 and Permutation 5 = Permutation 6, thus Permutation 4 = Permutation 5 = Permutation 6. So for $n=3$ the function can take 2 different values: Permutation 1 $= f(x_1,x_2,x_3)$ and Permutation 4 $= f(x_1,x_3,x_2)$.

To test with the inputs $x_1=1, x_2=2, x_3=0$: we know very well that $1+\alpha _3+\alpha _3 ^2=0$ and $\alpha _3=e^{2 \pi i/3}=-\frac{1}{2}+ i\frac{\sqrt{3}}{2}$.

Permutation 1: $f(x_1,x_2,x_3)=f(1,2,0)=(1+2\alpha _3)^3=1+6\alpha _3+12\alpha _3 ^2+8=-3-6\alpha _3=-3i\sqrt{3}$

Permutation 4: $f(x_1,x_3,x_2)=f(1,0,2)=(1+2\alpha _3 ^2)^3=1+6\alpha _3^2+12\alpha _3 +8=3+6\alpha _3=3i\sqrt{3}$

As you can see from the example above, Permutation 1 and Permutation 4 are not always the same.
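These counting claims are also easy to probe by brute force. Here is a small sketch (the function name `resolvent_values` is my own) that reproduces the 2 distinct values for $n=3$ found by hand above:

```python
import cmath
from itertools import permutations

def resolvent_values(xs):
    """Distinct values of (x1 + a*x2 + ... + a^(n-1)*xn)^n over all n! permutations,
    where a = exp(2*pi*i/n)."""
    n = len(xs)
    a = cmath.exp(2j * cmath.pi / n)
    vals = set()
    for p in permutations(xs):
        v = sum(a ** k * p[k] for k in range(n)) ** n
        vals.add((round(v.real, 6), round(v.imag, 6)))  # round away float noise
    return vals

print(len(resolvent_values([1, 2, 0])))  # 2, matching the n = 3 proof: the values ±3i√3
```

The same function can be used to experiment with $n=4$ and $n=5$ and various inputs.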
Errors in the Conclusions of Special Relativity
A false start does not necessarily mean a wrong ending, so let's have a look at the conclusions of Special Relativity. The following equations are two of its conclusions: $$\tau = t\sqrt{1-v^2/c^2}$$ (a clock in motion runs slower) and $$V = \frac{v+w}{1+vw/c^2}.$$ The first equation is about time dilation, which effectively puts a limit on speed in the universe. If there is no limit for length and no limit for time, why do we need a limit for speed? If the numbering system in mathematics did not have infinity but used a large number as its limit, it would have been broken many times. If the speed of light c is the speed limit in the universe, does not the c+v in the paper exceed that limit? If a conclusion breaks its precondition, does that not mean the theory is wrong? The second equation is the velocity combination rule under Special Relativity. Under this transformation the principle of relativity is broken. Only with a linear velocity combination rule, like the Galilean one, can the influence of the reference frame be completely removed by differentiating velocity with respect to time, if the reference frame is in uniform motion. If the principle of relativity works so well in this universe, as introduced by Einstein himself and as can be seen from Part I, Section 05 of [2], why do we have to break it? This breaking of the principle of relativity is not a surprise. The essence of Einstein's work is an attempt to build a bridge between relative motion and absolute motion. As relative motion is the base of Newtonian mechanics, which in turn is the foundation of the principle of relativity, any tiny change to the rule of relative motion will destroy the principle of relativity. So, without going through its mathematical details, we can safely claim that Special Relativity is wrong.
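For reference, the second equation quoted above can be evaluated directly. A small sketch (in units with $c=1$) showing what the composition rule actually outputs for various inputs:

```python
def compose_velocities(v, w, c=1.0):
    """The quoted velocity combination rule: V = (v + w) / (1 + v*w/c**2)."""
    return (v + w) / (1 + v * w / c ** 2)

# Sample the rule for various speeds below c = 1:
for v in (0.1, 0.5, 0.9, 0.99):
    print(round(compose_velocities(v, 0.9), 5))
```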
While reading the article http://inspirehep.net/record/61135, I came across the concept of "closed set under renormalization". The definition they give is the following. In any renormalizable field theory, let $A_1$, ..., $A_n$ be a set of monomials in the fundamental fields and their derivatives. Change the original Lagrangian of the theory like \begin{equation} \cal{L}\rightarrow\cal{L}+\underset{a}{\sum}A_{a}{\cal J}_{a}\,, \end{equation} where $\cal{J}$'s are arbitrary functions of space and time. Let us define $\Gamma_a^n$ as the Fourier transform of the variational derivative of the $n$-point Green's function in the fundamental field with respect to $\cal{J}_a$, evaluated at $\cal{J}_a$ equals zero. If we add appropriate counterterms in the Lagrangian, we can make the $\Gamma$'s finite. If these counterterms are also in the set $A_1$, ..., $A_n$, we say that the set is closed under renormalization. I have tried 'Renormalization' by Collins to look for this concept, but I couldn't find much information there. EDIT: Just for your information, the comment on page 145 around Eq.6.2.13 was all I could find in 'Renormalization'. Q1: In the reference, they frequently use the expression "simple power counting shows that the following set of operators are closed under renormalization". It is hard for me to guess what kind of power counting they have in mind. For example in the massive $\lambda\phi^4$ theory, they say that the following operators are closed under renormalization: \begin{equation} \left\{ g_{\mu\nu}\phi^{2},g_{\mu\nu}\partial_{\lambda}\phi\partial^{\lambda}\phi,g_{\mu\nu}\phi\square\phi,\partial_{\mu}\phi\partial_{\nu}\phi,\phi\partial_{\mu}\partial_{\nu}\phi,g_{\mu\nu}\phi^{4}\right\}\,. \end{equation} What could be the logic to arrive at this conclusion? 
Q2: They also say as a consequence of the BPH(Bogoliubov, Parasuik, and Hepp) theorem, given a set of operators closed under renormalization, we can find a set of cut-off independent functions $R_a^n$, such that \begin{equation} \Gamma_{a}^{n}=\underset{b}{\sum}c_{ab}R_{b}^{n}\,, \end{equation} where $c$'s are constant, possibly cut-off-dependent, coefficients. This statement seems very important when proving finiteness of $\Gamma$'s using Ward identities. It is the first time for me to see this. For example, I couldn't find information on this on textbooks such as Peskin, Schwartz. Could anyone guide me where I could find out more about this? (I have tried to read the original paper(https://link.springer.com/content/pdf/10.1007%2FBF01773358.pdf), but it is quite an old paper and I find it very hard to follow.) I would also appreciate any hints or intuitions to understand this relation.
To simplify things a bit, let's take a single qubit and a single qutrit for comparison. First, the amplitude damping channel (giving e.g. emission of a photon) for a qubit is $\mathcal E\left(\rho\right) = E_0\rho E_0^\dagger + E_1\rho E_1^\dagger$, where $$E_0 = \begin{pmatrix}1 && 0 \\ 0 &&\sqrt{1-\gamma}\end{pmatrix}, \quad E_1 = \begin{pmatrix} 0 && \sqrt \gamma \\ 0 &&0\end{pmatrix},$$ with $E_0$ being required for normalisation. This can also be written as $E_0 = \left|0\rangle\langle 0\right| + \sqrt{1-\gamma}\left|1\rangle\langle 1\right|$ and $E_1 = \sqrt{\gamma}\left|0\rangle\langle 1\right|$. When $\gamma = 1$, the amplitude damping channel gives the state $$\rho = \begin{pmatrix} 1 && 0 \\ 0 && 0\end{pmatrix} = \left|0\rangle\langle 0\right|$$ with certainty. However, this only applies at $0$ temperature, so to represent what happens at finite temperature, the generalised amplitude damping channel needs to be used. This has $\mathcal E\left(\rho\right) = \sum_kE_k\rho E_k^\dagger$, where $$E_0 = \sqrt p\begin{pmatrix}1 && 0 \\ 0 &&\sqrt{1-\gamma}\end{pmatrix}, \quad E_1 = \sqrt p\begin{pmatrix} 0 && \sqrt \gamma \\ 0 &&0\end{pmatrix},$$ $$E_2 = \sqrt{1-p}\begin{pmatrix}\sqrt{1-\gamma} && 0 \\ 0 &&1\end{pmatrix}, \quad E_3 = \sqrt{1-p}\begin{pmatrix} 0 && 0 \\ \sqrt \gamma &&0\end{pmatrix}.$$ Each of these operators can be read off by its entries: a diagonal element in the upper left (bottom right) causes the amplitude of the upper left (bottom right) corner to decrease (here, this is really just a normalisation factor), while a single off-diagonal element in the top right (bottom left) causes loss (gain) from the bottom right (top left) to the top left (bottom right) element of the density matrix. Now when $\gamma = 1$, this gives the state $$\rho = \begin{pmatrix}p && 0\\ 0 && 1-p\end{pmatrix} = p\left|0\rangle\langle 0\right| + \left(1-p\right)\left|1\rangle\langle 1\right|.$$ This shows that after a long time, the loss and gain cancel (i.e.
no more loss or gain occurs), although this state is not very useful for computation, so let's try extending this a bit and adding another qubit. Let's go a bit further than that and take a pair of coupled spin-half fermions (such as a pair of electrons). This gives 4 states - the singlet state $S = \frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right> - \left|\downarrow\uparrow\right>\right)$. It also gives a subspace of triplet states $T = \left\lbrace \left|\uparrow\uparrow\right>, \frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right> + \left|\downarrow\uparrow\right>\right), \left|\downarrow\downarrow\right>\right\rbrace = \left\lbrace T_0, T_1, T_2\right\rbrace$. Describing this as a qudit with $d=4$ and using an equivalent 'qudit generalised amplitude damping channel' gives a few possible interactions, as in the qubit case: loss from the triplet subspace to the singlet state; gain from the singlet state to the triplet subspace; amplitude damping within the triplet subspace. By itself, this still doesn't help very much, so let's place this pair of spin-half particles in the centre of a larger system (a 'spin bath', here used as the environment, mediating the interaction) and allow it to interact. As the states in the triplet subspace are symmetric and it is in the centre of a bath, the probability rate of amplitude damping on the first qubit will, on average, equal the rate of amplitude damping on the second qubit. This means that, instead of having a single qudit amplitude damping channel, there are two copies of the same generalised amplitude damping channel, which reduces the number of possible interactions.
In the limit of long time and taking $p=1/2$ (this is just setting the system to a certain non-zero temperature), these are, ignoring normalisation: gain on the $\left|\downarrow\downarrow\right> = T_2$ state: $$T_2\rightarrow T_2' \propto T_2 + \beta T_1 + \beta^2T_0$$ loss on the $\left|\uparrow\uparrow\right> = T_0$ state: $$T_0\rightarrow T_0' \propto T_0 + \beta T_1 + \beta^2T_2$$ gain and loss on the $\frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right>\pm \left|\downarrow\uparrow\right>\right)$ states ($S$ and $T_1$ states): $$T_1\rightarrow T_1' \propto \left(1+\beta^2\right)T_1 + \beta \left(T_0+T_2\right)$$ $$S\rightarrow S' \propto\left(1-\beta^2\right) S.$$ This shows that oscillations in the triplet subspace occur instead of decay, meaning that the triplet subspace is a decoherence free subsystem and can be used as a qutrit. In reality, interactions are more complicated and there will be other types of noise, so there will still be some decoherence, but the reasoning is still the same in that pairing two spin-half particles allows for the triplet state to be used as a decoherence free subsystem (or at least have less decoherence than a qubit) to mitigate the effects of some types of noise.
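The single-qubit generalised amplitude damping channel described earlier can be simulated directly to check the claimed fixed point. A numpy sketch (helper names are mine):

```python
import numpy as np

def gad_kraus(gamma, p):
    """Kraus operators of the single-qubit generalised amplitude damping channel."""
    E0 = np.sqrt(p) * np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    E1 = np.sqrt(p) * np.array([[0, np.sqrt(gamma)], [0, 0]])
    E2 = np.sqrt(1 - p) * np.array([[np.sqrt(1 - gamma), 0], [0, 1]])
    E3 = np.sqrt(1 - p) * np.array([[0, 0], [np.sqrt(gamma), 0]])
    return [E0, E1, E2, E3]

def apply_channel(kraus, rho):
    """E(rho) = sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

p, gamma = 0.3, 1.0                        # example temperature parameter p
rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # start in |+><+|
rho_out = apply_channel(gad_kraus(gamma, p), rho)
print(np.round(rho_out, 3))                # diag(p, 1-p): loss and gain balance
```

For $\gamma = 1$ any input state is sent to $p\left|0\rangle\langle 0\right| + (1-p)\left|1\rangle\langle 1\right|$, exactly the finite-temperature equilibrium state given above.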
Abbreviation: NVLGrp
A normal valued lattice-ordered group (or normal valued $\ell$-group) is a lattice-ordered group $\mathbf{L}=\langle L, \vee, \wedge, \cdot, ^{-1}, e\rangle$ that satisfies $(x\vee x^{-1})(y\vee y^{-1}) \le (y\vee y^{-1})^2(x\vee x^{-1})^2$
Let $\mathbf{L}$ and $\mathbf{M}$ be $\ell$-groups. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $f:L\rightarrow M$ that is a homomorphism: $f(x\vee y)=f(x)\vee f(y)$ and $f(x\cdot y)=f(x)\cdot f(y)$. Remark: It follows that $f(x\wedge y)=f(x)\wedge f(y)$, $f(x^{-1})=f(x)^{-1}$, and $f(e)=e$
Classtype: variety 1)
Equational theory:
Quasiequational theory:
First-order theory: hereditarily undecidable 2) 3)
Locally finite: no
Residual size:
Congruence distributive: yes (see lattices)
Congruence modular: yes
Congruence n-permutable: yes, $n=2$ (see groups)
Congruence regular: yes (see groups)
Congruence uniform: yes (see groups)
Congruence extension property:
Definable principal congruences:
Equationally def. pr. cong.:
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
None
1) W. Charles Holland, The largest proper variety of lattice-ordered groups, Proceedings of the AMS, 57(1), 1976, 25–28
2) Yuri Gurevic, Hereditary undecidability of a class of lattice-ordered Abelian groups, Algebra i Logika Sem., 6, 1967, 45–62
3) Stanley Burris, A simple proof of the hereditary undecidability of the theory of lattice-ordered abelian groups, Algebra Universalis, 20, 1985, 400–401, http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/HerUndecLOAG.pdf
Mathematically, applying a FIR with impulse response $h_\mathrm{lpf}[n]$ to a digital signal is convolution: $y = x * h_\mathrm{lpf}$, or, thanks to the properties of the (discrete) Fourier transform, $Y = X\cdot H_\mathrm{lpf}$, as convolution becomes multiplication. Now, making a bandpass out of a low pass can be modeled by shifting the frequency response $H_\mathrm{lpf}$ in the frequency domain. "Shifting" can be represented by a convolution of the low pass filter with a Dirac impulse at the desired center frequency: $H_\mathrm{bpf}= H_\mathrm{lpf} * \delta_{f_\mathrm{center}}$ Again, convolution in the frequency domain becomes multiplication in the time domain. The inverse (discrete) Fourier transform of a Dirac at $f_\mathrm{center}$ is a complex oscillation $e^{j2\pi f_\mathrm{center}n}$, so this becomes $y[n] = x * (h_\mathrm{lpf} \cdot e^{j2\pi f_\mathrm{center}n})$
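This low-pass-to-band-pass trick is easy to check numerically. A minimal sketch (assuming a windowed-sinc design and normalized frequencies, i.e. cutoffs given as fractions of the sample rate; the specific filter parameters are illustrative, not from the text):

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    # Windowed-sinc lowpass; cutoff is a fraction of the sample rate (0..0.5).
    n = np.arange(num_taps) - (num_taps - 1) / 2
    return 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(num_taps)

def bandpass_from_lowpass(h_lpf, f_center):
    # Shift H_lpf up to f_center by multiplying the taps with e^{j 2 pi f n}.
    n = np.arange(len(h_lpf))
    return h_lpf * np.exp(2j * np.pi * f_center * n)

h_lpf = lowpass_fir(101, 0.05)               # passband |f| < 0.05
h_bpf = bandpass_from_lowpass(h_lpf, 0.25)   # passband centred at f = 0.25

# Inspect the magnitude response: near unity at the new centre, tiny at DC.
H = np.fft.fft(h_bpf, 4096)
freqs = np.fft.fftfreq(4096)
mag_at_center = abs(H[np.argmin(abs(freqs - 0.25))])
mag_at_dc = abs(H[0])
```

Note that the modulated taps are complex; for a real-valued filter one would modulate with a cosine instead, which places passbands at $\pm f_\mathrm{center}$.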
Abbreviation: CORng

A commutative ordered ring is an ordered ring $\mathbf{A}=\langle A,+,-,0,\cdot,\le\rangle$ such that $\cdot$ is commutative: $xy=yx$

Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is an order-preserving homomorphism: $h(x + y)=h(x) + h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$, and $x\le y\Longrightarrow h(x)\le h(y)$.

A … is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that … $...$ is …: $axiom$ $...$ is …: $axiom$

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[Ordered fields]] expansion
[[Ordered rings]] supervariety
[[Commutative rings]] subreduct
The rotation matrix $$\pmatrix{ \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta}$$ has complex eigenvalues $\{e^{\pm i\theta}\}$ corresponding to eigenvectors $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$. The real eigenvector of a 3d rotation matrix has a natural interpretation as the axis of rotation. Is there a nice geometric interpretation of the eigenvectors of the $2 \times 2$ matrix? Lovely question! There is a kind of intuitive way to view the eigenvalues and eigenvectors, and it ties in with geometric ideas as well (without resorting to four dimensions!). The matrix is unitary (more specifically, it is real, so it is called orthogonal) and so there is an orthogonal basis of eigenvectors. Here, as you noted, it is $\pmatrix{1 \\i}$ and $\pmatrix{1 \\ -i}$; let us call them $v_1$ and $v_2$. They form a basis of $\mathbb{C^2}$, and so we can write any element of $\mathbb{R^2}$ in terms of $v_1$ and $v_2$ as well, since $\mathbb{R^2}$ is a subset of $\mathbb{C^2}$. (And we normally think of rotations as occurring in $\mathbb{R^2}$! Please note that $\mathbb{C^2}$ is a two-dimensional vector space with components in $\mathbb{C}$ and need not be considered as four-dimensional, with components in $\mathbb{R}$.) We can then represent any vector in $\mathbb{R^2}$ uniquely as a linear combination of these two vectors $x = \lambda_1 v_1 + \lambda_2v_2$, with $\lambda_i \in \mathbb{C}$. So if we call the linear map that the matrix represents $R$: $$R(x) = R(\lambda_1 v_1 + \lambda_2v_2) = \lambda_1 R(v_1) + \lambda_2R(v_2) = e^{i\theta}\lambda_1 (v_1) + e^{-i\theta}\lambda_2(v_2) $$ In other words, when working in the basis $\{v_1,v_2\}$: $$R \pmatrix{\lambda_1 \\\lambda_2} = \pmatrix{e^{i\theta}\lambda_1 \\ e^{-i\theta}\lambda_2}$$ And we know that multiplying a complex number by $e^{i\theta}$ is an anticlockwise rotation by $\theta$. 
So the rotation of a vector when represented in the basis $\{v_1,v_2\}$ is the same as just rotating the individual components of the vector in the complex plane! Tom Oldfield's answer is great, but you asked for a geometric interpretation so I made some pictures. The pictures will use what I called a "phased bar chart", which shows complex values as bars that have been rotated. Each bar corresponds to a vector component, with length showing magnitude and direction showing phase. An example: The important property we care about is that scaling a vector corresponds to the chart scaling or rotating. Other transformations cause it to distort, so we can use it to recognize eigenvectors based on the lack of distortions. (I go into more depth in this blog post.) So here's what it looks like when we rotate <0, 1> and <i, 0>: Those diagrams are not just scaling/rotating, so <0, 1> and <i, 0> are not eigenvectors. However, they do incorporate horizontal and vertical sinusoidal motion. Any guesses what happens when we put them together? Trying <1, i> and <1, -i>: There you have it. The phased bar charts of the rotated eigenvectors are being rotated (corresponding to the components being phased) as the vector is turned. Other vectors get distorted charts when you turn them, so they aren't eigenvectors. The simplest answer to your question is perhaps yes. The eigenvectors of a genuinely complex eigenvalue are necessarily complex. Therefore, there is no real vector which is an eigenvector of the matrix. Ignoring of course the nice cases $\theta=0, \pi$, the rotation always does more than just rescale a vector. On the other hand, if we view the matrix as a rotation on $\mathbb{C}^2$ then the eigenvectors you give show the directions in which the matrix acts as a rescaling operator in the complex space $\mathbb{C}^2$. I hope someone has a better answer; I would like to visualize complex two-space. 
Here is a geometric interpretation, in which the eigenvector condition appears as a commutation relation: http://www-po.coas.oregonstate.edu/~rms/notes/rot_eig.html A related curious fact is that moving from affine to projective, the two complex directions fixed by any rotation define two points at infinity (of projective coordinates $[0:\pm i:1]$) called cyclic points. Then it is easy to check that every circle passes through these cyclic points. This should be compared to Bezout's theorem--which says (in particular) that any two conics intersect in 4 points (counting multiplicities)--and the fact that it is impossible to get more than 2 intersection points of affine circles.
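Returning to the eigenpairs in the original question, they take only a couple of lines to verify numerically (a quick check, not part of the geometric arguments above):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v1 = np.array([1, 1j])    # claimed eigenvector for eigenvalue e^{+i theta}
v2 = np.array([1, -1j])   # claimed eigenvector for eigenvalue e^{-i theta}

err1 = np.linalg.norm(R @ v1 - np.exp(1j * theta) * v1)
err2 = np.linalg.norm(R @ v2 - np.exp(-1j * theta) * v2)
```

Both residuals are zero to machine precision, confirming the pairing of $(1, i)$ with $e^{i\theta}$ and $(1, -i)$ with $e^{-i\theta}$ for this sign convention of the matrix.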
An article entitled "Experimental data from a quantum computer verifies the generalized Pauli exclusion principle" by Scott E. Smart, David I. Schuster, and David A. Mazziotti has just appeared. In the study, they generate sets of random pure states of 3 fermions in 6 orbitals and examine their 1-RDMs (reduced density matrices). For this purpose, they employ "the IBM Quantum Experience devices (ibmqx4 and ibmqx2), available online, in particular, the 5-transmon quantum computing device". Can one adapt their approach to testing the "separability probability" question (arXiv:quant-ph/9804024) in any form (fermionic, or otherwise)? In particular, can the conjectures that the two-qubit separability probabilities are $\frac{8}{33}$, $\frac{25}{341}$ or $1-\frac{256}{27 \pi^2}$, depending upon the choice of Hilbert-Schmidt, Bures or operator monotone function $\sqrt{x}$ measures (arXiv:1701.01973, arXiv:1901.09889), be evaluated? If one could generate random (using Fubini-Study measure) pure four-qubit states, and find the reduced two-qubit systems, then perhaps one could examine the Hilbert-Schmidt instance (p. 422 of the Bengtsson-Zyczkowski monograph). I see that there is 2011 and subsequent work of J. A. Miszczak (arXiv:1102.4598) concerning the (Mathematica) generation of random numbers through quantum processes, and their use in the production of random quantum states (such as I am seeking)--through standard (Ginibre-ensemble/random matrix theory) algorithms. But, I think, this is (presently?) comparatively slow in relation to pseudo-random means. Also, can the random states be generated "more directly" with the IBM devices? (Smart, Schuster, and Mazziotti prepare initial pure states and then perform "arbitrary unitary transformations" ["generated on the quantum computer"] to obtain random pure states.)
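As a purely classical baseline for comparison: the Hilbert-Schmidt conjecture $\frac{8}{33}$ can already be probed by pseudo-random sampling, since Hilbert-Schmidt measure corresponds to $\rho = GG^\dagger/\mathrm{tr}(GG^\dagger)$ with $G$ a square complex Ginibre matrix, and for two qubits the PPT criterion is exactly equivalent to separability. A sketch (NumPy; the sample size is chosen for a rough check):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hilbert-Schmidt measure: rho = G G^dagger / tr(G G^dagger), G complex Ginibre
G = rng.normal(size=(n, 4, 4)) + 1j * rng.normal(size=(n, 4, 4))
rho = G @ np.conj(np.swapaxes(G, 1, 2))
rho /= np.trace(rho, axis1=1, axis2=2).real[:, None, None]

# Partial transpose on the second qubit; for two qubits PPT <=> separable
pt = rho.reshape(n, 2, 2, 2, 2).transpose(0, 1, 4, 3, 2).reshape(n, 4, 4)
min_eig = np.linalg.eigvalsh(pt)[:, 0]    # eigenvalues come back ascending
sep_frac = float(np.mean(min_eig >= 0))
```

With $10^5$ samples the separable fraction lands within roughly $\pm 0.005$ of $8/33 \approx 0.2424$. The open question above is whether the sampling itself can be pushed onto the quantum devices.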
It’s been a while since my last post. I would have probably had more time for extracurricular projects if I didn’t spend so much of my life waiting for C++ applications to compile! In fact I’ve created a tool to help with exactly that…

The problem

C/C++ projects often compile painfully slowly. A large cause of this problem is “#include” statements. One included header drags in others, which drag in yet more – one combinatorial explosion later you’re left twiddling your thumbs waiting for a project to compile (since hundreds of headers can take a while to open and compile). One way to avoid headers dragging too many other headers in with them is to use forward declaration and dynamic allocation. This allows us to remove some include directives from headers. Removing an unnecessary include from a popular header can really help compile times.

Define costly

What would be nice is to find “costly” include directives in headers automatically – one could then think about using forward declarations or other refactoring to remove them. We will first need a good definition of cost. We want a definition of the “cost” of an include directive which formalises the notion of the number of “file open” operations avoided during compilation of the entire project if we omit that include directive. It makes things much simpler later on to create a formal definition capturing this notion. With a little inspection hopefully you can see that this does the trick:

Definitions

Let \(S\) be our set of source files and \(H\) be our set of headers. Let an include graph for \(S\) and \(H\) be a directed graph \(G=(V,E)\) where \(V = S \cup H \) and $$E=\{(u,v) \in V \times V : \text{ file } u \text{ includes file } v \}.$$ An include is an edge in our graph, that is \((u,v) \in E\) (which implies file \(u\) has an “#include v” statement). 
Let the set of reachable files from \(w \in V \) be $$R(G,w)=\{x \in V : \text{ there is a path from } w \text{ to } x \text{ in } G\}.$$ Let there be an include \((u,v) \in E\) and a file \( w \in V \). Let \(G'\) be the include graph \(G\) but with include \((u,v)\) removed. Then the partial cost of include \((u,v)\) w.r.t. file \(w\) is $$C_p((u,v),w) = \left| R(G,w) \right| - \left| R(G',w) \right|.$$ That is, the partial cost of an include with respect to a file is the number of files no longer reachable from that file if the include is removed. The cost of include \((u,v)\) is $$ C(u,v) = \Sigma_{w \in S} C_p((u,v),w).$$ That is, the cost of an include is the sum of the partial costs of that include over all source files.

Example

Some example costs for a particular include graph should make the definition more clear. For this include graph: We get the following include costs:

Cost: 6, from: ("e.h","f.h")
Cost: 5, from: ("src/b.cpp","g.h")
Cost: 4, from: ("src/d.cpp","e.h")
Cost: 4, from: ("src/a.cpp","e.h")
Cost: 4, from: ("f.h","j.h")
Cost: 4, from: ("e.h","g.h")
Cost: 3, from: ("src/c.cpp","e.h")
Cost: 3, from: ("g.h","e.h")
Cost: 2, from: ("g.h","i.h")
Cost: 0, from: ("src/d.cpp","i.h")
Cost: 0, from: ("src/c.cpp","f.h")
Cost: 0, from: ("src/a.cpp","i.h")
Cost: 0, from: ("h.h","j.h")

The above cost output was actually generated with include-wrangler – let’s have a quick look at how.

Some Haskell

Here is a short description of the core code of include-wrangler. You can see the full code on the github page. We can represent an include graph in Haskell with the following datatype:

-- An includes graph is just a map from verts to list of verts.
-- (Use list instead of Set since we want to preserve include order!)
data IncludesGraph v = IGraph (Map.Map v [v])

We want to be able to do a depth first search on a graph to find reachable files.

edgeMap (IGraph em) = em
edgesFrom graph v = (edgeMap graph) Map.! v

-- Depth first search on an includes graph from v. 
-- Follows the same "search" order that C++ preprocessor would. Avoids cycles by
-- recording visited list - so assumes every include is guarded by ifdefs/pragma once.
-- Returns a set of vertices of the include graph.
dfs' graph v visited = next follow where
  follow = filter (\v -> not $ Set.member v visited) $ edgesFrom graph v
  descend u = dfs' graph u (Set.insert u visited)
  next [] = visited
  next (u:_) = dfs' graph v $ descend u

dfs graph v = dfs' graph v (Set.fromList [v])

Now we can express the above definitions of the cost of an include statement easily.

-- Remove edge (v,u) from an include graph.
removeEdge (IGraph em) (v,u) = IGraph $ Map.adjust (filter ((/=) u)) v em

-- Remove node v from an include graph.
removeNode (IGraph em) v = IGraph $ Map.map (filter ((/=) v)) $ Map.delete v em

-- The "cost" of an include directive w.r.t file w.
icost' graph (u,v) w = (Set.size $ dfs graph w) - (Set.size $ dfs (removeEdge graph (u,v)) w)

-- The "cost" of an include directive w.r.t. a list of files s (i.e. list of .cpp files)
icost graph s (u,v) = sum $ map (icost' graph (u,v)) $ s

The rest of the include wrangler code deals with: Reading user input (for source files and include directories). Constructing the include graph from source files. And finally calculating and outputting the cost of every include directive in the project using the “icost” function.

Other features

Include wrangler has some other features. It outputs costs for entire header files using a similar definition of “cost” as for includes. That is, the cost of a header file is the number of file open operations we can avoid during compilation if we remove that header from the project. Expensive header files could be good candidates for precompiled headers or general refactoring to reduce dependencies. It outputs the include graph in graphviz format. This means you can produce nice images of your graph like the one above, or perform further analysis on the include graph. 
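For readers who don’t speak Haskell, the same cost definition fits in a few lines of Python (the file names here are made up for illustration; this is not part of include-wrangler):

```python
def reachable(edges, w):
    # Files reachable from w (w itself included), by depth-first search.
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        for v in edges.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def include_cost(edges, sources, u, v):
    # C(u,v) = sum over sources w of |R(G,w)| - |R(G',w)|, where G' lacks (u,v).
    pruned = {a: [b for b in bs if (a, b) != (u, v)] for a, bs in edges.items()}
    return sum(len(reachable(edges, w)) - len(reachable(pruned, w))
               for w in sources)

# A made-up toy project: two sources and a small chain of headers.
edges = {
    "a.cpp": ["x.h"],
    "b.cpp": ["x.h", "y.h"],
    "x.h": ["y.h"],
    "y.h": ["z.h"],
}
sources = ["a.cpp", "b.cpp"]
```

Here removing the include ("x.h","y.h") costs 2 (a.cpp loses sight of y.h and z.h, while b.cpp still reaches both via its direct include), and ("b.cpp","y.h") is a free removal with cost 0.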
There are probably many other useful pieces of information that could be extracted by analysing the include graph – I may add things as and when I have a need.

How to use it

Head over to the github page to download include-wrangler and for instructions on how to build and run the application on your own projects.

Final thoughts

Tools to do what include wrangler does already existed, but none of them were quite right for me. The commercial tools I found which could provide similar functionality all had some of the following problems: Expensive. Operated only as Visual Studio plugins. Offered a ton of features that would get in the way. I had no luck finding any open source software that would do the job (please let me know in the comments if there is something out there!) Include-wrangler was created to “scratch an itch”. Once it solved the particular problem I was having I regarded it as “done”, so there are a few rough edges: it doesn’t fail particularly gracefully if a file/directory does not exist (although it tells you enough to fix things), has no “options” or fancy user interface, and does not run as fast as it could. That said, I have found it quite useful on a “real world” large codebase, and so has one of my coworkers.
I can find this using the fact that $\sin(\sin^{-1}(x)) = x$, for all $x\in[-1,1].$ Now, differentiate. $$\frac{d}{d\sin^{-1}(x)}\sin(\sin^{-1}(x))\cdot \frac{d}{dx} \sin^{-1}(x)= \frac{d}{dx} x= 1$$ $$\cos(\sin^{-1}(x))\cdot \frac{d}{dx} \sin^{-1}(x) = 1$$ $$\frac{d}{dx} \sin^{-1}(x) = \frac{1}{\cos(\sin^{-1}(x))}$$ $$\frac{d}{dx} \sin^{-1}(x) = \frac{1}{\sqrt{1-\sin^2(\sin^{-1}(x))}}$$ $$\frac{d}{dx} \sin^{-1}(x) = \frac{1}{\sqrt{1-x^2}}$$ However, what if I wanted to differentiate this like $\ \sin^{-1}(\sin(x))$ without knowing the fact that $\ \frac{d}{dx}\sin^{-1}(x) = \frac{1}{\sqrt{1-x^2}}$ ? Is there a solution for it? I keep getting stuck at a certain step when I try this...
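Whichever derivation you end up with, a finite-difference evaluation is a handy way to confirm the closed form (a quick numerical sanity check):

```python
import math

def deriv(f, x, h=1e-6):
    # symmetric (central) difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
numeric = deriv(math.asin, x)
closed_form = 1 / math.sqrt(1 - x**2)
```

At $x=0.3$ the two values agree to well beyond eight decimal places.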
Let's suppose we have $$\gcd(a,b)\cdot\gcd(b,c)\cdot\gcd(c,a)\cdot \gcd(a,b,c) = abc$$ and look at a prime $p$ dividing at least one of $a,b,c$. Suppose $p$ divides at most two of the three, say $p\nmid c$, and $a = p^\alpha\cdot a'$, $b = p^\beta\cdot b'$ with $p\nmid a'b'$. Then on the left hand side, $p$ does only occur in $\gcd(a,b)$, with exponent $\min\{\alpha,\beta\}$. But on the right hand side, it occurs with exponent $\alpha+\beta = \min\{\alpha,\beta\} + \max\{\alpha,\beta\}$, and since the exponents must be equal, it follows that $\max\{\alpha,\beta\} = 0$, contradicting the assumption that $p$ divides at least one of $a,b,c$. So every prime dividing at least one of $a,b,c$ must divide all three. Let the exponents of $p$ be $\alpha \leqslant \beta \leqslant \gamma$. Then on the left hand side, $p$ occurs with the exponent $$\alpha + \beta + \alpha + \alpha = 3\alpha + \beta,$$ and on the right it occurs with the exponent $\alpha + \beta + \gamma$. It follows that $\gamma = 2\alpha$. Furthermore, the condition that none of $a,b,c$ shall be an integer multiple of any other implies that for each pair $(x,y)$ of two of the numbers, there is at least one prime $p_{xy}$ that occurs with a larger exponent in the prime factorisation of $x$ than it occurs in the prime factorisation of $y$. Any prime that is not one of the $p_{xy}$ can be removed from all of $a,b,c$, and leads to a solution with smaller numbers, and smaller $a+b+c$, so the primes involved in the minimal solution are precisely the $$\{p_{ab},p_{ac},p_{ba},p_{bc},p_{ca},p_{cb}\}.$$ That set must contain at least two primes, and it contains at most six. It is clear that for the minimal solution, the set must contain the $k$ smallest primes, $2 \leqslant k \leqslant 6$. Finding the minimal solution is then a small amount of work even brute-forcing.
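The final brute force is indeed small. A sketch (the search bound of 150 is justified by the argument above: every element of a minimal solution is divisible by $2\cdot3\cdot5=30$, and the exponent pattern forces the largest element to be at most $2\cdot3\cdot5^2=150$):

```python
from math import gcd

best = None
for a in range(2, 151):
    for b in range(a + 1, 151):
        if b % a == 0:
            continue
        for c in range(b + 1, 151):
            # none of a, b, c may be an integer multiple of another
            if c % a == 0 or c % b == 0:
                continue
            if gcd(a, b) * gcd(b, c) * gcd(c, a) * gcd(a, gcd(b, c)) == a * b * c:
                if best is None or a + b + c < best[0]:
                    best = (a + b + c, a, b, c)
```

This finds $(a,b,c) = (60, 90, 150) = (2^2\cdot3\cdot5,\ 2\cdot3^2\cdot5,\ 2\cdot3\cdot5^2)$ with $a+b+c = 300$, matching the exponent pattern $(\alpha,\beta,\gamma)=(1,1,2)$ for each of the three primes.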
Since the purpose here is presumably to obtain some valid and useful estimate of $\theta$, the prior distribution should be consistent with the specification of the distribution of the population from which the sample comes. This does NOT in any way mean that we "calculate" the prior using the sample itself -this would nullify the validity of the whole procedure. We do know that the population from which the sample comes is a population of i.i.d. uniform random variables each ranging in $[0,\theta]$. This is a maintained assumption and is part of the prior information that we possess (and it has nothing to do with the sample, i.e. with a specific realization of a subset of these random variables). Now assume that this population consists of $m$ random variables, (while our sample consists of $n<m$ realizations of $n$ random variables). The maintained assumption tells us that$$\max_{i=1,...,n}\{X_i\}\le \max_{j=1,...,m}\{X_j\} \le \theta$$ Denote for compactness $\max_{i=1,...,n}\{X_i\} \equiv X^*$. Then we have $\theta \ge X^*$ which can also be written $$\theta = cX^*\qquad c\ge 1$$ The density function of the $\max$ of $N$ i.i.d Uniform r.v.'s ranging in $[0,\theta]$ is $$f_{X^*}(x^*) = N\frac {(x^*)^{N-1}}{\theta^N} $$ for the support $[0,\theta]$, and zero elsewhere. Then by using $\theta = cX^*$ and applying the change-of-variable formula we obtain a prior distribution for $\theta$ that is consistent with the maintained assumption:$$f_p(\theta) = N\frac {(\frac{\theta}{c})^{N-1}}{\theta^N}\frac 1c = \frac {N}{c^N} \theta^{-1}\qquad \theta \in [x^*, \infty]$$ which may be improper if we don't specify the constant $c$ suitably. But our interest lies in having a proper posterior for $\theta$, and also, we do not want to restrict the possible values of $\theta$ (beyond the restriction implied by the maintained assumption). So we leave $c$ undetermined. 
Then writing $\mathbf X = \{x_1,..,x_n\}$ the posterior is $$f(\theta \mid \mathbf X)\; \propto\; \theta^{-N}\frac {N}{c^N} \theta^{-1} \Rightarrow f(\theta \mid \mathbf X) = A\frac {N}{c^N} \theta^{-(N+1)}$$ for some normalizing constant A. We want $$\int_{S_{\theta}}f(\theta \mid \mathbf X)d\theta =1 \Rightarrow \int_{x^*}^{\infty}A\frac {N}{c^N} \theta^{-(N+1)}d\theta =1$$ $$\Rightarrow A\frac {N}{c^N}\frac {1}{-N}\theta^{-N}\Big |_{x^*}^{\infty} = 1 \Rightarrow A = (cx^*)^N$$ Inserting into the posterior$$f(\theta \mid \mathbf X) = (cx^*)^N\frac {N}{c^N} \theta^{-(N+1)} = N(x^*)^N\theta^{-(N+1)} $$ Note that the undetermined constant $c$ of the prior distribution has conveniently cancelled out. The posterior summarizes all the information that the specific sample can give us regarding the value of $\theta$. If we want to obtain a specific value for $\theta$ we can easily calculate the expected value of the posterior,$$E(\theta\mid \mathbf X) = \int_{x^*}^{\infty}\theta N(x^*)^N\theta^{-(N+1)}d\theta = -\frac{N}{N-1}(x^*)^N\theta^{-N+1}\Big |_{x^*}^{\infty} = \frac{N}{N-1}x^*$$ Is there any intuition in this result? Well, as the number of $X$'s increases, the more likely is that the maximum realization among them will be closer and closer to their upper bound, $\theta$ - which is exactly what the posterior mean value of $\theta$ reflects: if, say, $N=2 \Rightarrow E(\theta\mid \mathbf X) = 2x^*$, but if $N=10 \Rightarrow E(\theta\mid \mathbf X) = \frac{10}{9}x^*$.This shows that our tactic regarding the selection of the prior was reasonable and consistent with the problem at hand, but not necessarily "optimal" in some sense.
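As a numerical sanity check of the normalisation and the posterior mean (a sketch with simulated data; the truncation of the integral at $100x^*$ is arbitrary and discards only about $10^{-20}$ of the mass):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, N = 5.0, 10
x_star = rng.uniform(0, theta_true, size=N).max()

# Posterior density f(theta | X) = N (x*)^N theta^{-(N+1)} on [x*, infinity)
theta = np.linspace(x_star, 100 * x_star, 1_000_000)
f = N * x_star**N * theta ** (-(N + 1))

# Trapezoidal integration for the total mass and the posterior mean
dx = np.diff(theta)
mass = np.sum((f[1:] + f[:-1]) / 2 * dx)
g = theta * f
mean = np.sum((g[1:] + g[:-1]) / 2 * dx)
```

The mass comes out as $1$ and the mean as $\frac{N}{N-1}x^*$, as derived above.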
Abbreviation: SymRel

A symmetric relation is a structure $\mathbf{X}=\langle X,R\rangle$ such that $R$ is a binary relation on $X$ (i.e. $R\subseteq X\times X$) that is symmetric: $xRy\Longrightarrow yRx$

Remark: This is a template. If you know something about this class, click on the ``Edit text of this page'' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.

Let $\mathbf{X}$ and $\mathbf{Y}$ be symmetric relations. A morphism from $\mathbf{X}$ to $\mathbf{Y}$ is a function $h:X\rightarrow Y$ that is a homomorphism: $xR^{\mathbf X} y\Longrightarrow h(x)R^{\mathbf Y}h(y)$

Example 1:

Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.

$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$

[[Directed graphs]] supervariety
Yes, it is true and follows from usual AM-GM. For example$$\begin{aligned}&\sqrt[k]{a_2\cdots a_{k+1}}+\sqrt[k]{a_1a_3\cdots a_{k+1}}+\sqrt[k]{a_1a_2a_4\cdots a_{k+1}}+\cdots+\sqrt[k]{a_1a_2\cdots a_k}\\&\ge (k+1)\cdot\sqrt[(k+1)k]{a_1^ka_2^k\cdots a_{k+1}^k}=(k+1)\cdot\sqrt[k+1]{a_1a_2\cdots a_{k+1}}.\end{aligned}\tag{1}$$ Note: the $m$-th term is $\sqrt[k]{\dfrac{a_1a_2\dots a_{k+1}}{a_m}}$. The needed inequality follows by adding all similar inequalities. Let's look at an example for $n=4,k=2$:$$\begin{aligned}\sqrt{a_2a_3}+\sqrt{a_1a_3}+\sqrt{a_1a_2}&\ge 3\sqrt[3]{a_1a_2a_3},\\\sqrt{a_2a_4}+\sqrt{a_1a_4}+\sqrt{a_1a_2}&\ge 3\sqrt[3]{a_1a_2a_4}\\\sqrt{a_3a_4}+\sqrt{a_1a_4}+\sqrt{a_1a_3}&\ge 3\sqrt[3]{a_1a_3a_4}\\\sqrt{a_3a_4}+\sqrt{a_2a_4}+\sqrt{a_2a_3}&\ge 3\sqrt[3]{a_2a_3a_4}\end{aligned}$$Each of the square roots occurs $n-k=4-2=2$ times (which I incorrectly counted as $n-k+1$ in the comment). So how do we know for sure in general? There are two ways to see this. The precise way: for each $i_1<i_2<\cdots <i_k$, the term $\sqrt[k]{a_{i_1}a_{i_2}\cdots a_{i_k}}$ occurs exactly in the inequalities for $a_{i_1},a_{i_2},\cdots, a_{i_k},a_{\color{red}{h}}$, one for each $\color{red}{h} \neq i_1,i_2,\dots, i_k$, i.e. $n-k$ times. The intuitive way: since (1) is totally symmetric, summing it over all $(k+1)$-element subsets shows that the mean of the $k$-th roots must be larger than or equal to the mean of the $(k+1)$-th roots.

Big Edit

I just realized that the OP asked for the cyclic, not symmetric, sum. Shame on me. But that obviously is not a valid inequality when the $a_i$ are all equal, just as @Macavity originally mentioned. For example, when $n=4, k=2$ and all $a_i=1$, the RHS is $1$ while the LHS is $4/6$.
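The bookkeeping is easy to confirm numerically: summing (1) over all $(k+1)$-element subsets gives $(n-k)\sum_{|S|=k}\sqrt[k]{\prod S} \ \ge\ (k+1)\sum_{|S|=k+1}\sqrt[k+1]{\prod S}$, which a random test bears out (a quick check):

```python
from itertools import combinations
from math import prod
import random

random.seed(0)
n, k = 5, 2
a = [random.uniform(0.1, 10) for _ in range(n)]

# (n-k) * sum of k-th roots over k-subsets >= (k+1) * sum of (k+1)-th roots
lhs = (n - k) * sum(prod(s) ** (1 / k) for s in combinations(a, k))
rhs = (k + 1) * sum(prod(s) ** (1 / (k + 1)) for s in combinations(a, k + 1))
```

Equality holds exactly when all the $a_i$ coincide, since then both sides reduce to $(n-k)\binom{n}{k} = (k+1)\binom{n}{k+1}$ copies of the common value.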
What is the solution for $2y''+3y'+5y=7x^2+9x+11$? Thanks.

Note by Ce Die, 4 months ago

Sort by:

$$2y'' + 3y' + 5y = 7x^2 + 9x + 11$$

The homogeneous equation is:

$$2y'' + 3y' + 5y = 0$$

The general solution to the homogeneous equation has the form:

$$y = c \, e^{\alpha x}$$

Plugging this back into the homogeneous equation yields a complex conjugate solution pair:

$$2 \alpha^2 y + 3 \alpha y + 5y = 0 \\ 2 \alpha^2 + 3 \alpha + 5 = 0 \\ \alpha_1 = \frac{-3 - \sqrt{31}\, i}{4} \qquad \alpha_2 = \frac{-3 + \sqrt{31}\, i}{4}$$

The solution to the homogeneous equation is:

$$y_H = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x}$$

Now for the particular solution associated with the polynomial, it is evident that this must be a second-order polynomial:

$$y_P = A x^2 + B x + C$$

Plugging into the differential equation:

$$2(2A) + 3(2A x + B) + 5(A x^2 + B x + C) = 7x^2 + 9x + 11 \\ 5A x^2 + (6A + 5B)x + 4A + 3B + 5C = 7x^2 + 9x + 11 \\ 5A = 7 \implies A = \frac{7}{5} \qquad 6A + 5B = 9 \implies B = \frac{3}{25} \qquad 4A + 3B + 5C = 11 \implies C = \frac{126}{125}$$

The particular solution is:

$$y_P = \frac{7}{5} x^2 + \frac{3}{25} x + \frac{126}{125}$$

The total solution is:

$$y = y_H + y_P = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x} + \frac{7}{5} x^2 + \frac{3}{25} x + \frac{126}{125}$$

To solve for the constants $c_1$ and $c_2$, solve initial-value equations for the function and its derivative.

End of first part

Suppose instead that the differential equation had been:

$$2y'' + 3y' + 5y = 7x^3 + 11x^2 + 13x + 9$$

The homogeneous solution is identical to the previous problem. The particular solution has the form:

$$y_P = A x^3 + B x^2 + Cx + D$$

$$2(6 A x + 2B) + 3(3 A x^2 + 2 B x + C) + 5(A x^3 + B x^2 + Cx + D) = 7x^3 + 11x^2 + 13x + 9 \\ 5A = 7 \implies A = \frac{7}{5} \qquad 9A + 5B = 11 \implies B = -\frac{8}{25} \\ 12A + 6B + 5C = 13 \implies C = -\frac{47}{125} \qquad 4B + 3C + 5D = 9 \implies D = \frac{1426}{625}$$

The total solution is therefore:

$$y = y_H + y_P = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x} + \frac{7}{5} x^3 -\frac{8}{25} x^2 -\frac{47}{125} x + \frac{1426}{625}$$

Log in to reply

Thank you, and also how to solve $2y''+3y'+5y=7x^3+11x^2+13x+9$? Thanks

The process is the same as for the other problem. The only difference is that the particular solution is a third-order polynomial instead of a second-order polynomial.

Please give me a solution for this

@Ce Die – Ok, I will post it a bit later

@Steven Chase – thank you, I hope to see it

@Ce Die – I have added it

@Steven Chase – thank you so much, you enlighten me! :)

@Ce Die – You're welcome

How about this: $2y''+3y'+5y=7x^4$, how to solve this? Thanks

Since the general procedure is the same as for the other two, I'll leave this one to you

Is this correct if for the 4th degree $y'' = 8Ax^2+6Bx+2C$, $y'=4Ax^3+3Bx^2+2Cx+D$ and $y = Ax^4+Bx^3+Cx^2+Dx+E$? I hope it's right. Thanks

Correct, except that the first term in your $y''$ equation should have a coefficient of $12$ instead of $8$

$y''=12Ax^2+6Bx+2C$, is this correct?

@Ce Die – Yes, it is

For $x^5$ the $y_p$ is $y''=15Ax^3+12Bx^2+6Cx+2D$? Is this correct, Sir?

Look again at the coefficient on your $x^3$ term

I think it's 18?
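The coefficient matching in the two worked solutions above can be double-checked with exact rational arithmetic (a quick verification sketch, not part of the original thread):

```python
from fractions import Fraction as F

# Second-order case: 2y'' + 3y' + 5y with y_P = A x^2 + B x + C
A, B, C = F(7, 5), F(3, 25), F(126, 125)
quad = (5 * A, 6 * A + 5 * B, 4 * A + 3 * B + 5 * C)   # x^2, x, constant

# Third-order case: y_P = A x^3 + B x^2 + C x + D
A3, B3, C3, D3 = F(7, 5), F(-8, 25), F(-47, 125), F(1426, 625)
cubic = (5 * A3,
         9 * A3 + 5 * B3,
         12 * A3 + 6 * B3 + 5 * C3,
         4 * B3 + 3 * C3 + 5 * D3)                     # x^3, x^2, x, constant
```

Both tuples reproduce the right-hand-side coefficients $(7, 9, 11)$ and $(7, 11, 13, 9)$ exactly.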
We sit in a gravitational potential, so there should be a blue shift on the CMB light from the potential of the Milky Way. Is this blue shift dependent on direction? Is it being subtracted from the CMB? Or is it simply too small to be measurable? The gravitational potential of the Milky Way will cause a blue shift, not a red shift. This happens because relative to an observer far from the Milky Way the gravitational potential within it causes a time dilation, i.e. here on Earth our clocks run very slightly slower than clocks out in intergalactic space. Since our clocks run slower, the frequency of light coming from intergalactic space is slightly increased. As it happens I have described the time dilation for observers within the Milky Way in my answer to Why isn't the center of the galaxy "younger" than the outer parts? From experimental data we have the following approximate formula for the gravitational potential energy (per unit mass) inside the Milky Way: $$ \Phi = -\frac{GM}{\sqrt{r^2 + (a + \sqrt{b^2 + z^2})^2}} \tag{1} $$ where $r$ is the radial distance, $z$ is the height above the disk, $a = 6.5$ kpc and $b = 0.26$ kpc. The time dilation is well described by the weak field equation: $$ \frac{\Delta t_r}{\Delta t_\infty} = \sqrt{1 - \frac{2|\Phi|}{c^2}} $$ According to NASA the Sun lies about 8 kpc from the centre of the Milky Way, so $r = 8$ kpc, and according to the Astronomy SE we are about 20 pc away from the plane, so $z = 20$ pc. Finally let's guesstimate the mass of the Milky Way as $10^{12}$ solar masses (it's a guesstimate because we don't know how much dark matter the Milky Way contains). Plugging in all these numbers gives us: $$ |\Phi| \approx 4 \times 10^{11} \,\text{joules/kg} $$ And plugging this into our equation for the time dilation gives: $$ \frac{\Delta t_r}{\Delta t_\infty} = 0.999995 $$ Taking the reciprocal of this to estimate the blueshift we find the frequency of the CMB is blue shifted by a factor of $1.000005$.
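The arithmetic above is straightforward to reproduce (a sketch using the same constants as the text, including the guesstimated mass):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
kpc = 3.086e19           # kiloparsec, m
c = 2.998e8              # speed of light, m/s

M = 1e12 * M_sun                    # guesstimated Milky Way mass
r, z = 8 * kpc, 0.02 * kpc          # Sun's position
a, b = 6.5 * kpc, 0.26 * kpc        # potential parameters from equation (1)

phi = G * M / math.sqrt(r**2 + (a + math.sqrt(b**2 + z**2))**2)
dilation = math.sqrt(1 - 2 * phi / c**2)
blueshift = 1 / dilation
```

This reproduces $|\Phi| \approx 4\times10^{11}\,\mathrm{J/kg}$, a time-dilation factor of $0.999995$, and a blueshift factor of about $1.000005$, i.e. a few parts per million.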
Suppose that we expand our idea of context-free grammar rules to allow regular expressions of terminals on the right-hand side. For example, consider $G_1$: $\begin{align*} S & \rightarrow (a \mid b) S (c \mid d) \\ S & \rightarrow (a \mid b) A (c \mid d) \\ A & \rightarrow (f \mid g)^* \end{align*} $ Then the language of $G_1$ is the following: $$L(G_1) = \{(a \mid b)^n (f \mid g)^* (c \mid d)^n \mid n > 0\}$$ Give a standard CFG $G_1'$ that has the same language as $G_1$. Is your grammar $G_1'$ weakly equivalent to $G_1$, strongly equivalent to $G_1$, or both? Why? Secondly, how can I transform any CFG with regular expressions of terminals on the right-hand side to a normal context-free grammar?
Take the Poincaré group for example. The conservation of rest mass $m_0$ is generated by the invariance with respect to $p^2 = -\partial_\mu\partial^\mu$. Now if one simply claims: The state where the expectation value of a symmetry generator equals the conserved quantity must be stationary, one obtains $$\begin{array}{rl} 0 &\stackrel!=\delta\langle\psi|p^2-m_0^2|\psi\rangle \\ \Rightarrow 0 &\stackrel!= (\square+m_0^2)\psi(x),\end{array}$$ that is, the Klein-Gordon equation. Now I wonder: is this generally a possible quantization? Does this, e.g., yield the Dirac equation for $s=\frac12$ when applied to the square of the Pauli-Lubanski pseudo-vector $W_{\mu}=\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} M^{\nu \rho} P^{\sigma}$ (which has the expectation value $-m_0^2\, s(s+1)$)?
Abbreviation: Fld A field is a commutative ring with identity $\mathbf{F}=\langle F,+,-,0,\cdot,1\rangle$ such that $\mathbf{F}$ is non-trivial: $0\ne 1$ and every non-zero element has a multiplicative inverse: $x\ne 0\Longrightarrow \exists y\,(x\cdot y=1)$ Remark: The inverse of $x$ is unique, and is usually denoted by $x^{-1}$. Let $\mathbf{F}$ and $\mathbf{G}$ be fields. A morphism from $\mathbf{F}$ to $\mathbf{G}$ is a function $h:F\rightarrow G$ that is a homomorphism: $h(x+y)=h(x)+h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(1)=1$ Remark: It follows that $h(0)=0$ and $h(-x)=-h(x)$. Example 1: $\langle\mathbb{Q},+,-,0,\cdot,1\rangle$, the field of rational numbers with addition, subtraction, zero, multiplication, and one. $0$ is a zero for $\cdot$: $0\cdot x=0$ and $x\cdot 0=0$. For each prime-power order $p^m$ there exists exactly one field (up to isomorphism) of that order, called the Galois field $GF(p^m)$.
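The inverse condition is easy to test by brute force on the modular rings $\mathbb{Z}_n$; a small illustrative check (not part of the page itself) showing that $\mathbb{Z}_n$ satisfies the Fld axioms exactly when $n$ is prime, consistent with the $GF(p)$ case of the Galois-field statement:

```python
def is_field(n):
    """Check whether Z_n (with mod-n arithmetic) satisfies the Fld axioms:
    non-trivial (0 != 1) and every non-zero x has a multiplicative inverse."""
    if n < 2:
        return False  # trivial ring: 0 == 1
    return all(any(x * y % n == 1 for y in range(n)) for x in range(1, n))

# Z_p is a field exactly for prime p (these are the Galois fields GF(p));
# Z_6 is not: 2, 3, and 4 have no inverse mod 6.
print([n for n in range(2, 12) if is_field(n)])  # [2, 3, 5, 7, 11]
```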
What is a binomial? A binomial is an algebraic expression of the sum (+) or the difference (-) of two terms. Quick Review: Let’s review some terms and expressions that might help us understand these. Monomial examples: $5y, 8x^2, -2x$ Binomial examples: $-3x^2-2, 9y-2y^2$ Polynomial examples: $8x^2+3x-2, 12x^2+11x+2$ What is a polynomial? Polynomials are algebraic expressions that include real numbers (positive, negative, large, small, whole, or decimal numbers) and variables (x, y, etc.). They include more than one term and are the sum of monomials. They are usually also written in decreasing order of exponents. Term Polynomial or Not? Why? $8x^2+3x-2$ Polynomial $8x^{-3}-7y-2$ NOT a polynomial The exponent is negative ($x^{-3}$) $8x^2+8x-\frac{2}{x}$ NOT a polynomial Has division by a variable ($\frac{2}{x}$) Now...how do we multiply binomials? When multiplying binomials, you can use the FOIL method. For instance, to find the product of 2 binomials, you’ll add the products of the First terms, the Outer terms, the Inner terms, and the Last terms. FIRST: multiply the first term in each set of parentheses OUTER: multiply the outer term in each set of parentheses INNER: multiply the inner term in each set of parentheses LAST: multiply the last term in each set of parentheses Example 1 Let’s work out this problem. $$(3x-7)(5x+6)$$ 1. Identify the FOIL terms \begin{align*} \color{#FF0000}{\text{First}}&:3x\cdot5x\\ \color{#FF8C00}{\text{Outer}}&:3x\cdot6\\ \color{#008000}{\text{Inner}}&:(-7)\cdot5x\\ \color{black}{\text{Last}}&:(-7)\cdot6\\ \end{align*} 2. Multiply the terms \begin{align*} \color{#FF0000}{\text{First}}&:3x\cdot5x=\color{#FF0000}{15x^2}\\ \color{#FF8C00}{\text{Outer}}&:3x\cdot6=\color{#FF8C00}{18x}\\ \color{#008000}{\text{Inner}}&:(-7)\cdot5x=\color{#008000}{-35x}\\ \color{black}{\text{Last}}&:(-7)\cdot6=\color{black}{-42}\\ \end{align*} 3. Combine like terms \begin{align*} \color{#FF8C00}{18x}\color{#008000}{-35x} &= -17x \end{align*} 4. 
Combine all terms in decreasing order \begin{align*} \color{#FF0000}{15x^2}\color{#FF8C00}{-17x}\color{black}{-42} \end{align*} Final Answer: $$15x^2-17x-42$$ Example 2 $$(5+4x)(3+2x)$$ $$(5+4x)(3+2x)$$ $$= \color{#FF0000}{(5\cdot3)} + \color{#FF8C00}{(5\cdot2x)} + \color{#008000}{(4x\cdot3)} + \color{black}{(4x\cdot2x)}$$ $$= \color{#FF0000}{15} + \color{#FF8C00}{10x} + \color{#008000}{12x} + \color{black}{8x^2}$$ $$= \color{#FF0000}{15} + \color{#FF8C00}{22x}+ \color{black}{8x^2}$$ $$= \color{black}{8x^2}+ \color{#FF8C00}{22x}+ \color{#FF0000}{15}$$ Final answer: $$8x^2+22x+15$$ Additional Resources Our Math Tutors recommend the following websites for help: Paul's Online Notes: Polynomials
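The FOIL procedure is just polynomial multiplication in which products with the same exponent are combined; a short illustrative sketch (the coefficient-list representation is my own choice, not part of the lesson):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first.
    For two binomials this carries out exactly the four FOIL products."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # like terms (same exponent) combine here
    return out

# (3x - 7)(5x + 6): coefficients [constant, x] -> [-7, 3] and [6, 5]
print(poly_mul([-7, 3], [6, 5]))   # [-42, -17, 15], i.e. 15x^2 - 17x - 42
# (5 + 4x)(3 + 2x)
print(poly_mul([5, 4], [3, 2]))    # [15, 22, 8], i.e. 8x^2 + 22x + 15
```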
Category : 5th Class FRACTION AND DECIMALS FUNDAMENTALS Types of Fraction Proper fractions. Example: \[\frac{1}{3},\frac{2}{3},\frac{4}{5}\] Improper fractions. Example: \[\frac{5}{3},\frac{6}{5},\frac{7}{4},\frac{7}{7}\] etc. Mixed fractions. Example: \[1+\frac{2}{3}\] is written as \[1\frac{2}{3}\], \[2+\frac{1}{5}\] is written as \[2\frac{1}{5}\] Like fractions. Example: \[\frac{1}{8},\frac{2}{8},\frac{5}{8}\] etc. In all the above fractions the denominators are equal, so they are like fractions. Unlike fractions. Example: \[\frac{1}{3},\frac{1}{5},\frac{5}{8},\frac{3}{7}\] etc. Equivalent fractions. Example: \[\frac{1}{5},\frac{2}{10},\frac{3}{15}\] etc. In the above fractions the value of each fraction is equal, so they are equivalent fractions. Decimal fractions. Example: \[\frac{3}{10},\frac{1}{100},\frac{1}{1000}\] Fraction of a fraction. Example: \[\frac{1}{2}\] of \[\frac{3}{8}\], \[\frac{1}{3}\] of \[\frac{4}{7}\] etc. Continued fractions. Example: \[4+\frac{1}{1+\frac{1}{1+\frac{2}{3}}},\,\,2+\frac{1}{1-\frac{1}{1-\frac{1}{3}}}\] etc. Addition of fractions Sum of like fractions \[=\frac{\text{sum}\,\,\text{of}\,\,\text{numerators}}{\text{common}\,\,\text{denominator}}\] Example: \[\frac{3}{7}+\frac{4}{7}=\frac{3+4}{7}=\frac{7}{7}=1,\] \[\frac{3}{8}+\frac{7}{8}=\frac{10}{8}=\frac{5}{4}=1\frac{1}{4}\] etc. Sum of the unlike fractions \[\frac{2}{5}\] and \[\frac{1}{3}\]: LCM of 5 and 3 = 15 Now, \[\frac{2}{5}=\frac{2\times 3}{5\times 3}=\frac{6}{15}\] and \[\frac{1}{3}=\frac{1\times 5}{3\times 5}=\frac{5}{15}\] Then, \[\frac{6}{15}+\frac{5}{15}=\frac{11}{15}\] Subtraction of like fractions \[=\frac{\text{Difference between the numerators}}{\text{common denominator}}\] Examples: \[\frac{6}{5}-\frac{2}{5}=\frac{6-2}{5}=\frac{4}{5}\] \[\frac{8}{3}-\frac{1}{3}=\frac{7}{3}=2\frac{1}{3}\] etc. Difference of \[\frac{3}{5}\] and \[\frac{1}{2}\]: LCM of 5 and 2 = 10 \[\frac{3}{5}=\frac{3\times 2}{5\times 2}=\frac{6}{10}\] \[\frac{1}{2}=\frac{1\times 5}{2\times 5}=\frac{5}{10}\] Now, \[\frac{6}{10}-\frac{5}{10}=\frac{6-5}{10}=\frac{1}{10}\] Multiplication of a fraction by a whole number. 
\[=\frac{\text{Numerator of the fraction}\times \text{whole number}}{\text{Denominator of the fraction}}\] Multiply \[\frac{2}{5}\] by 4 We have, \[\frac{2}{5}\times 4=\frac{2\times 4}{5}=\frac{8}{5}=1\frac{3}{5}\] Multiplication of a fraction by a fraction \[=\frac{\text{Product of their numerators}}{\text{Product of their denominators}}\] Multiply \[\frac{3}{10}\] by \[\frac{7}{8}\] We have, \[\frac{3}{10}\times \frac{7}{8}=\frac{21}{80}\] Division of a fraction by a fraction Example: \[\frac{3}{5}\div \frac{7}{8}=\frac{3}{5}\times \frac{8}{7}=\frac{24}{35}\] Division of a fraction by a whole number Example: Divide \[\frac{7}{8}\div 14\] \[\therefore \]\[\frac{7}{8}\times \frac{1}{14}=\frac{1}{16}\] Decimals Example: 0.7, 1.68, 9.357 Example: In 468.23, the whole part is 468 and the decimal part is 0.23. It is read as four hundred sixty-eight point two three. Example: 1.23 has two decimal places and 1.417 has three decimal places. Example: 5.321, 6.932, 5.834 are like decimals because each has 3 decimal places. Example: 5.41, 6.232, 9.2314 are unlike decimals because each has a different number of decimal places. Equivalent Decimals Let there be two decimal numbers with different numbers of digits after the decimal point. To the number having fewer decimal places, we add an appropriate number of zeros at the extreme right so that the two numbers have the same number of decimal places. The two numbers are then called equivalent decimals. Example: Let 9.6 and 8.324 be two numbers. Now, 9.6 can be written as 9.600 so that 9.600 and 8.324 both have 3 decimal places. Hence 9.600 and 8.324 are equivalent decimals. Similarly, 10.32 = 10.320 7.3 = 7.300 9.142 = 9.142 All the above decimals are equivalent decimals. Addition of Decimal Numbers Add 7.35 and 5.26 We have, \[7.35+5.26=12.61\] Subtraction of Decimal Numbers Step I: If the given decimal numbers are unlike decimals, write them as like decimals. 
Step II: Write the smaller decimal number under the larger decimal number. Step III: Subtract as usual, ignoring the decimal points. Step IV: Finally, put the decimal point in the difference under the decimal points of the given numbers. Example: Subtract 13.74 from 80.4 On converting the given numbers into like decimals we get 13.74 and 80.40. Writing the decimals in columns and subtracting, we get \[\therefore \]\[80.40-13.74=66.66\] Multiplication of a Decimal Number by a whole number Step I: Multiply as whole numbers, ignoring the decimals. Step II: Count the number of decimal places in the decimal number. Step III: Show the same number of decimal places in the product. Example: Multiply 6.238 by 6 First we multiply 6238 by 6. Since the given decimal number has 3 decimal places, the product will have 3 decimal places. So, \[6.238\times 6=37.428\] Multiplication of two Decimal Numbers Example: Multiply 12.8 by 1.2 Sum of decimal places in the given decimals \[=1+1=2\] So, place the decimal point in the product so as to have 2 decimal places \[\therefore \]\[12.8\times 1.2=15.36\] (a) On multiplying a decimal number by 10, the decimal point moves one place to the right: \[10\times 0.212=2.12,\,\,10\times 3.163=31.63.\] (b) On multiplying a decimal number by 100, the decimal point moves two places to the right: \[100\times 0.4321=43.21,\,\,100\times 7.832=783.2\]. (c) On multiplying a decimal number by 1000, the decimal point moves three places to the right: \[1000\times 0.2312=231.2\] \[1000\times 0.12=120\] \[1000\times 7.3=7300\] Division of Decimal Numbers Consider the dividend as a whole number and perform the division. When the division of the whole-number part of the decimal is complete, place the decimal point in the quotient and continue with the division as in the case of whole numbers. 
Example: Divide 337.5 by 15 \[\therefore \,\,337.5\div 15=22.5\] Example: \[\frac{13}{10}=1.3,\,\,\frac{151}{100}=1.51,\,\,\frac{1321}{1000}=1.321\] Division of a decimal number by another decimal number Example: Divide 21.97 by 1.3 \[21.97\div 1.3=\frac{21.97\times 10}{1.3\times 10}=\frac{219.7}{13}=16.9\] Division of a whole number by a decimal number Example: Divide 68 by 0.17 We have \[\frac{68}{0.17}=\frac{68\times 100}{0.17\times 100}=\frac{6800}{17}=400\] Converting a Decimal into a vulgar fraction Example: \[0.13=\frac{13}{100},\,\,0.17=\frac{17}{100}\] etc.
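The worked fraction and decimal examples above can be verified mechanically; a quick check using Python's fractions module (a sanity check, not part of the lesson):

```python
from fractions import Fraction

# Addition of unlike fractions via a common denominator
assert Fraction(2, 5) + Fraction(1, 3) == Fraction(11, 15)
# Subtraction of unlike fractions
assert Fraction(3, 5) - Fraction(1, 2) == Fraction(1, 10)
# Multiplication of a fraction by a fraction
assert Fraction(3, 10) * Fraction(7, 8) == Fraction(21, 80)
# Division: invert and multiply
assert Fraction(3, 5) / Fraction(7, 8) == Fraction(24, 35)
assert Fraction(7, 8) / 14 == Fraction(1, 16)

# Decimal divisions from the text (shift both numbers to avoid a decimal divisor)
assert 337.5 / 15 == 22.5
assert abs(21.97 * 10 / 13 - 16.9) < 1e-9
assert 68 * 100 / 17 == 400
print("all checks pass")
```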
Abbreviation: RLGrp A representable lattice-ordered group (or representable $\ell$-group) is a lattice-ordered group $\mathbf{L}=\langle L, \vee, \wedge, \cdot, ^{-1}, e\rangle$ that satisfies the identity $(x\wedge y)^2 = x^2\wedge y^2$ Let $\mathbf{L}$ and $\mathbf{M}$ be $\ell$-groups. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $f:L\rightarrow M$ that is a homomorphism: $f(x\vee y)=f(x)\vee f(y)$ and $f(x\cdot y)=f(x)\cdot f(y)$. Remark: It follows that $f(x\wedge y)=f(x)\wedge f(y)$, $f(x^{-1})=f(x)^{-1}$, and $f(e)=e$ Every representable $\ell$-group is a subdirect product of totally ordered groups. Properties: Classtype: variety. First-order theory: hereditarily undecidable 1) 2). Locally finite: no. Congruence distributive: yes (see lattices). Congruence modular: yes. Congruence n-permutable: yes, $n=2$ (see groups). Congruence regular: yes (see groups). Congruence uniform: yes (see groups). Amalgamation property: no 3). Strong amalgamation property: no 4). (Equational theory, quasiequational theory, residual size, congruence extension property, definable principal congruences, equationally definable principal congruences, and whether epimorphisms are surjective are not filled in on the source page.) References: 1) Yuri Gurevich, Hereditary undecidability of a class of lattice-ordered Abelian groups, Algebra i Logika Sem. 6 (1967), 45–62. 2) Stanley Burris, A simple proof of the hereditary undecidability of the theory of lattice-ordered abelian groups, Algebra Universalis 20 (1985), 400–401, http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/HerUndecLOAG.pdf 3) A. M. W. Glass, D. Saracino and C. Wood, Non-amalgamation of ordered groups, Math. Proc. Camb. Phil. Soc. 95 (1984), 191–195. 4) Mona Cherri and Wayne B. Powell, Strong amalgamation of lattice ordered groups and modules, International J. Math. & Math. Sci. 16 (1993), no. 1, 75–80, http://www.hindawi.com/journals/ijmms/1993/405126/abs/ doi:10.1155/S0161171293000080
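Since every representable $\ell$-group is a subdirect product of totally ordered groups, the defining identity can be sanity-checked in a concrete totally ordered group. In additive notation for $(\mathbb{Z}, +, \min, \max)$, the identity $(x\wedge y)^2 = x^2\wedge y^2$ reads $2\min(x,y) = \min(2x, 2y)$; a quick illustrative check (the additive translation is my own):

```python
import itertools

# In the totally ordered abelian group (Z, +), x ∧ y = min(x, y) and
# "squaring" is doubling, so the RLGrp identity (x ∧ y)^2 = x^2 ∧ y^2
# becomes 2*min(x, y) == min(2*x, 2*y).
for x, y in itertools.product(range(-5, 6), repeat=2):
    assert 2 * min(x, y) == min(2 * x, 2 * y)
print("identity holds on the sample")
```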
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$ will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent a reasonable time searching and did not find... 
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that, as nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples)... Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. 
As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would alternatively use an HTTP POST request, with JSON encoding. This makes development much easier because JSON is rich-structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two solutions for the "query link" problem you addressed. First is to use the browser back/forward buttons to navigate among the query history. @MartinSleziak Second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (It just needs some extra effort though.) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top, with different symbols such as "a", "b" ranked after the exact match. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users to input greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you can: greek letters are tokenized to the same thing as normal alphabets. @MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on Math Stack Exchange. 
This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is enlarging the index and publishing. @MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago "What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all, it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. 
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the initial letters of the names of the three wise men. As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they also have other interpretations in Poland. 
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will be happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). 
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
A common task in text mining is document clustering. There are other ways to cluster documents. However, for this vignette, we will stick with the basics. The example below shows the most common method, using TF-IDF and cosine distance. Let’s read in some data and make a document term matrix (DTM) and get started.

library(textmineR)

# load nih_sample data set from textmineR
data(nih_sample)

# create a document term matrix
dtm <- CreateDtm(doc_vec = nih_sample$ABSTRACT_TEXT, # character vector of documents
                 doc_names = nih_sample$APPLICATION_ID, # document names
                 ngram_window = c(1, 2), # minimum and maximum n-gram length
                 stopword_vec = c(stopwords::stopwords("en"), # stopwords from tm
                                  stopwords::stopwords(source = "smart")), # this is the default value
                 lower = TRUE, # lowercase - this is the default value
                 remove_punctuation = TRUE, # punctuation - this is the default
                 remove_numbers = TRUE, # numbers - this is the default
                 verbose = FALSE, # turn off status bar for this demo
                 cpus = 2) # default is all available cpus on the system

# construct the matrix of term counts to get the IDF vector
tf_mat <- TermDocFreq(dtm)

First, we must re-weight the word counts in the document term matrix. We do this by multiplying the term frequency (in this case, count of words in documents) by an inverse document frequency (IDF) vector. textmineR calculates IDF for the \(i\)-th word as \[\begin{align} IDF_i = ln\big(\frac{N}{\sum_{j = 1}^N C(word_i, doc_j)}\big) \end{align}\] where \(N\) is the number of documents in the corpus. By default, when you multiply a matrix with a vector, R multiplies the vector to each column. For this reason, we need to transpose the DTM before multiplying the IDF vector. Then we transpose it back to the original orientation. The next step is to calculate cosine similarity and change it to a distance. We’re going to use some linear algebra to do this. The dot product of two positive-valued, unit-length vectors is the cosine similarity between the two vectors. 
For a deeper explanation of the math and logic, read this article. R’s various clustering functions work with distances, not similarities. We convert cosine similarity to cosine distance by subtracting it from \(1\). This works because cosine similarity is bound between \(0\) and \(1\). While we are at it, we’ll convert the matrix to a dist object. The last step is clustering. There are many clustering algorithms out there. My preference is agglomerative hierarchical clustering using Ward’s method as the merge rule. Compared to other methods, such as k-means, hierarchical clustering is computationally inexpensive. In the example below, I choose to cut the tree at \(10\) clusters. This is a somewhat arbitrary choice. I often prefer to use the silhouette coefficient. You can read about this method here. Performing this is an exercise I’ll leave to the reader. It might be nice to get an idea of what’s in each of these clusters. We can use the probability difference method from above. The code chunk below creates a summary table of clusters. Each cluster’s size and the top 5 words are represented.

# create a summary table of the top 5 words defining each cluster
cluster_summary <- data.frame(cluster = unique(clustering),
                              size = as.numeric(table(clustering)),
                              top_words = sapply(cluster_words, function(d){
                                paste(names(d)[order(d, decreasing = TRUE)][1:5],
                                      collapse = ", ")
                              }),
                              stringsAsFactors = FALSE)

cluster size top_words
1       23   risk, health, diabetes, intervention, treatment
2        4   hiv, inflammation, env, testing, study
3       38   cell, infection, determine, cells, function
4        7   research, program, cancer, disparities, students
5        8   cancer, brain, imaging, tumor, metastatic
6        3   microbiome, crc, gut, psoriasis, gut_microbiome
7        3   cdk, nmdar, nmdars, calpain, tefb
8        7   research, core, center, support, translational
9        3   lung, ipf, expression, cells, methylation
10       4   mitochondrial, metabolic, redox, ros, bde

You may want a word cloud to visualize each cluster. 
Using the wordcloud package, we plot cluster 100 below.
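The reweighting, normalization, and cosine-distance steps described in this vignette are compact enough to sketch outside R as well; below is an illustrative pure-Python version of the same math on a made-up toy matrix (it does not use textmineR):

```python
import math

# toy document-term count matrix: 3 documents x 4 terms (made-up data)
dtm = [[2, 1, 0, 0],
       [1, 1, 1, 0],
       [0, 0, 2, 3]]
n_docs = len(dtm)

# IDF_i = ln(N / number of documents containing word i)
doc_freq = [sum(1 for row in dtm if row[j] > 0) for j in range(len(dtm[0]))]
idf = [math.log(n_docs / df) for df in doc_freq]

# TF-IDF: multiply each term count by that term's IDF weight
tfidf = [[c * w for c, w in zip(row, idf)] for row in dtm]

# unit-normalize rows; the dot product of two rows is then their cosine similarity
def unit(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

unit_rows = [unit(row) for row in tfidf]
cos_sim = [[sum(a * b for a, b in zip(u, v)) for v in unit_rows] for u in unit_rows]

# cosine distance = 1 - cosine similarity (valid here: all entries are non-negative)
cos_dist = [[1 - s for s in row] for row in cos_sim]
print(cos_dist)
```

Each document is at distance 0 from itself, and documents sharing more weighted terms end up closer together; the resulting distance matrix is what a hierarchical clusterer would consume.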
This proof seems odd to me. I have come to the conclusion that I will use induction. I would like to see a smoother way or just some improvements on my technique. Let $f:A\rightarrow B$ be a function between two finite sets of equal cardinality. Show that $f$ is surjective if and only if it is injective. To start, I will show that a surjection implies an injection using induction. I will dismiss the cases that both sets are empty or contain one element as being trivial (essentially vacuously true). Assume $|A| \geq 2$, $|B| \geq 2$, $|A| = n = |B|$, and $f:A \rightarrow B$ is a surjection. For the base case, let $n = 2$. There are two elements in both $A$ and $B$. Due to surjection, every element $b \in B$ must be mapped to, through $f$, by at least one element $a \in A$. If each of the two elements in $B$ were mapped to by the same element in $A$, the definition of function would be violated. Therefore, they are mapped to by unique elements in $A$. Thus, for $f(p), f(q) \in B$, if $f(p) = f(q)$, it must be true that $p = q$ so $f$ is injective. Now assume that the surjection implies an injection for $n \geq 2$. We must show this to be true for $|A| = n + 1 = |B|$. Since it is true for $|A| = n = |B|$, the $n + 1$ case represents the addition of one new element to both $A$ and $B$. The new element in $B$ cannot be mapped to any other element in $A$ except for the new one. If mapped to by an old one, the definition of function would be violated. It must be mapped to by something since $f$ is surjective, hence it must be the new element. Finally, the new element in $A$ cannot be mapped to an old element in $B$ because it is unique and the previous $B$ was shown to be injective. $$\blacksquare$$ This is a very wordy and awkward proof in my opinion. I have been out of proofs for a long time. I would like to see one that is more clear or seek validation if there isn't. I know that I have only completed half of the proof and have yet to go the other way.
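The statement being proved can also be sanity-checked exhaustively for small sets; a quick brute-force illustration (not a substitute for the proof):

```python
import itertools

# For f: A -> B with |A| = |B| = n, encode f as the tuple of its images.
def check(n):
    A = range(n)
    for images in itertools.product(A, repeat=n):   # every function A -> A
        injective = len(set(images)) == n           # no two inputs share an image
        surjective = set(images) == set(A)          # every element is hit
        assert injective == surjective              # equivalent when |A| == |B|
    return True

print(all(check(n) for n in range(1, 5)))  # True
```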
In the book Elements of Statistical Learning, in Chapter 7 (page 228), the training error is defined as: $$ \overline{err} = \frac{1}{N}\sum_{i=1}^{N}{L(y_i,\hat{f}(x_i))} $$ whereas the in-sample error is defined as $$ Err_{in} = \frac{1}{N}\sum_{i=1}^{N}{E_{Y^0}[L(Y_{i}^{0},\hat{f}(x_i))|\tau]} $$ The $Y^0$ notation indicates that we observe $N$ new response values at each of the training points $x_i$, $i = 1, 2, \ldots, N$. This seems to be exactly the same as the training error, because the training error is also calculated by computing the response at the training points using the fitted estimate $\hat{f}(x)$. I have checked this and this explanation of this concept, but could not understand the difference between training error and in-sample error, and why the optimism is not always 0: $$ op\equiv Err_{in}-\overline{err} $$ So how are the errors $Err_{in}$ and $\overline{err}$ different, and what is the intuitive understanding of optimism in this context? Additionally, what does the author mean by "usually biased downward" in the statement: This is typically positive since $\overline{err}$ is usually biased downward as an estimate of prediction error. while describing optimism (Elements of Statistical Learning, page 229)
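The difference only shows up in expectation: the training responses $y_i$ are the very values $\hat f$ was fit to, while the $Y^0_i$ are fresh draws at the same $x_i$. A small simulation with ordinary least squares makes the positive optimism visible (an illustration under assumptions of my own choosing: the linear model, noise level, and sample size are arbitrary, not from the book):

```python
import random

random.seed(0)
N, reps, sigma = 30, 300, 1.0
xs = [i / N for i in range(N)]

def ols_fit(x, y):
    """Least-squares fit of y = a + b*x, returned as a predictor function."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return lambda t: a + b * t

def mse(y, yhat):
    return sum((yi - yh) ** 2 for yi, yh in zip(y, yhat)) / len(y)

optimism = 0.0
for _ in range(reps):
    y_train = [xi + random.gauss(0, sigma) for xi in xs]   # training responses
    f_hat = ols_fit(xs, y_train)
    preds = [f_hat(xi) for xi in xs]
    err_train = mse(y_train, preds)                        # training error (err-bar)
    y_new = [xi + random.gauss(0, sigma) for xi in xs]     # fresh Y^0 at the same x_i
    err_in = mse(y_new, preds)                             # one draw of the in-sample error
    optimism += (err_in - err_train) / reps

print(f"average optimism ~ {optimism:.3f} (theory: 2*sigma^2*p/N = {2 * sigma**2 * 2 / N:.3f})")
```

The training error reuses the noise the fit already chased, so it is biased downward; averaged over fresh draws the gap is positive, in line with the $op = \frac{2}{N}\sum_i \mathrm{Cov}(\hat y_i, y_i)$ result in the book.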
Analyzing the Viscous and Thermal Damping of a MEMS Micromirror Micromirrors have two key benefits: low power consumption and low manufacturing costs. For this reason, many industries use micromirrors for a wide range of MEMS applications. To save time and money when designing micromirrors, engineers can accurately account for thermal and viscous damping and analyze device performance via the COMSOL Multiphysics® software. The Many Applications of Micromirrors Picture a micromirror as a single string on a guitar. The string is so light and thin that when you pluck it, the surrounding air dampens the string’s motion, bringing it to a standstill. Micromirrors have a wide variety of potential applications. For instance, these mirrors can be used to control optic elements, an ability that makes them useful in the microscopy and fiber optics fields. Micromirrors are found in scanners, heads-up displays, medical imaging, and more. Additionally, MEMS systems sometimes use integrated scanning micromirror systems for consumer and telecommunications applications. When developing a micromirror actuator system, engineers need to account for its dynamic vibrating behavior and damping, both of which greatly affect the operation of the device. Simulation provides a way to analyze these factors and accurately predict system performance in a timely and cost-efficient manner. To perform an advanced MEMS analysis, you can combine features in the Structural Mechanics Module and Acoustics Module, two add-on products to the COMSOL Multiphysics simulation platform. Let’s take a look at frequency-domain (time-harmonic) and transient analyses of a vibrating micromirror. Performing a Frequency-Domain Analysis of a Vibrating Micromirror We model an idealized system that consists of a vibrating silicon micromirror — which is 0.5 by 0.5 mm with a thickness of 1 μm — surrounded by air. 
A key parameter in this model is the penetration depth; i.e., the thickness of the viscous and thermal boundary layers. In these layers, energy dissipates via viscous drag and thermal conduction. The thickness of the viscous and thermal layers is characterized by the following penetration depth scales: $$ \delta_\textrm{v} = \sqrt{\frac{\mu}{\pi f \rho}}, \qquad \delta_\textrm{t} = \sqrt{\frac{\kappa}{\pi f \rho C_\textrm{p}}} $$ where $f$ is the frequency, $\rho$ is the fluid density, $\mu$ is the dynamic viscosity, $\kappa$ is the coefficient of thermal conduction, $C_\textrm{p}$ is the heat capacity at constant pressure, and $\textrm{Pr} = \mu C_\textrm{p}/\kappa$ is the nondimensional Prandtl number relating the two scales. For air, when the system is excited at a frequency of 10 kHz (which is typical for this model), the viscous and thermal scales are 22 µm and 18 µm, respectively. These are comparable to the geometric scales, like the mirror thickness, meaning that thermal and viscous losses must be included. Moreover, in real systems, the mirrors may be located near surfaces or in close proximity to each other, creating narrow regions where the damping effects are accentuated. The frequency-domain analysis provides insight into the frequency response of the system, including the location of the resonance frequencies, Q-factor of the resonance, and damping of the system. The micromirror model geometry, showing the symmetry plane, fixed constraint, and torquing force components. In this example, we use three separate interfaces: The Shell interface to model the solid micromirror, available in the Structural Mechanics Module The Thermoviscous Acoustics, Frequency Domain interface to model the air domain around the mirror, available in the Acoustics Module The Pressure Acoustics, Frequency Domain interface to truncate the computational domain, available in the Acoustics Module By modeling the detailed thermoviscous acoustics and using the Thermoviscous Acoustics, Frequency Domain interface, we can explicitly include thermal and viscous damping while solving the full linearized Navier-Stokes, continuity, and energy equations.
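As a quick numerical check of the quoted boundary-layer scales, the penetration depth formulas can be evaluated directly; this is a sketch using assumed standard air properties at roughly 20 °C, not the material data from the model files, so the exact values may differ slightly from those quoted in the text.

```python
# Viscous and thermal penetration depths in air at 10 kHz, using the
# standard boundary-layer scales delta = sqrt(mu / (pi f rho)) and
# sqrt(kappa / (pi f rho Cp)).  Air properties below are assumed values.
import math

f = 10e3         # excitation frequency [Hz]
rho = 1.2        # air density [kg/m^3]
mu = 1.81e-5     # dynamic viscosity [Pa*s]
kappa = 0.026    # thermal conductivity [W/(m*K)]
Cp = 1005.0      # heat capacity at constant pressure [J/(kg*K)]

d_visc = math.sqrt(mu / (math.pi * f * rho))            # viscous penetration depth
d_therm = math.sqrt(kappa / (math.pi * f * rho * Cp))   # thermal penetration depth

print(f"viscous boundary layer: {d_visc * 1e6:.1f} um")   # ~22 um, as quoted
print(f"thermal boundary layer: {d_therm * 1e6:.1f} um")
```

Both scales come out at a few tens of microns at 10 kHz, i.e., comparable to the geometric features of the device, which is why the thermoviscous losses cannot be neglected.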
In doing so, we accomplish one of the main goals for this model: accurately calculating the damping experienced by the mirror. To set up and combine the three interfaces, we use the Acoustics-Thermoviscous Acoustics Boundary and Thermoviscous-Acoustics-Structure Boundary multiphysics couplings. We then solve the model using a frequency-domain sweep and an eigenfrequency study. These analyses enable us to study the resonance frequency of the mirror under a torquing load in the frequency domain. Results of the Frequency-Domain Analysis Let's take a look at the displacement of the micromirror at a frequency of 10 kHz when exposed to the torquing force. In this scenario, the displacement mainly occurs at the edges of the device. To view displacement in a different way, we also plot the response at the tip of the micromirror over a range of frequencies. Micromirror displacement at 10 kHz for phase 0 (left) and the absolute value of the z-component of the displacement field at the micromirror tip (right). Next, let's view the acoustic temperature variations (left image below) and acoustic pressure distribution (right image below) for the micromirror at a frequency of 11 kHz. As we can see, the maximum and minimum temperature fluctuations occur opposite to one another and there is an antisymmetric pressure distribution. The temperature fluctuations are closely related to the pressure fluctuations through the equation of state. Note that the temperature fluctuations fall to zero at the surface of the mirror, where an isothermal condition is applied. The temperature gradient near the surface gives rise to the thermal losses. Temperature fluctuation field within the thermoviscous acoustics domain (left) and the pressure isosurfaces (right). The two animations below show a dynamic extension of the frequency-domain data using the time-harmonic nature of the solution.
Both animations depict the mirror movement in a highly exaggerated manner, with the first one showing an instantaneous velocity magnitude in a cross section and the second showing the acoustic temperature fluctuations. These results indicate that there are high-velocity regions close to the edge of the micromirror. We determine the extent of this region into the air via the scale of the viscous boundary layer (viscous penetration depth). We can also identify the thermal boundary layer or penetration depth using the same method. Animation of the time-harmonic variation in the local velocity. Animation of the time-harmonic variation in the acoustic temperature fluctuations. When the problem is formulated in the frequency domain, eigenmodes or eigenfrequencies can also be identified. From the eigenfrequency study (also performed in the model), we can determine the vibrating modes, shown in the animation below (only half the mirror is shown as symmetry applies). Our results show that the fundamental mode is around 10.5 kHz, with higher modes at 13.1 kHz and 39.5 kHz. The complex value of the eigenfrequency is related to the Q-factor of the resonance and thus the damping. (This relationship is discussed in detail in the Vibrating Micromirror model documentation.) Animation of the first three vibrating modes of the micromirror. Transient Analysis of Viscous and Thermal Damping in a Micromirror As of version 5.3a of the COMSOL® software, a different take on this example solves for the transient behavior of the micromirror. Using the same geometry, we extend the frequency-domain analysis into a transient analysis. To achieve this, we swap the frequency-domain interfaces with their corresponding transient interfaces and adjust the settings of the transient solver. In the simulation, the micromirror is actuated for a short time and exhibits damped vibrations. 
The resulting model includes some of the most advanced air and gas damping mechanisms that COMSOL Multiphysics has to offer. For instance, the Thermoviscous Acoustics, Transient interface generates the full details for the viscous and thermal damping of the micromirror from the surrounding air. In addition, by coupling the transient perfectly matched layer capabilities of pressure acoustics to the thermoviscous acoustics domain, we can create efficient nonreflecting boundary conditions (NRBCs) for this model in the time domain. Results of the Transient Analysis Let's start with the displacement results. The 3D results (left image below) visualize the displacement of the micromirror and the pressure distribution at a given time. We also generate a plot (right image below) to illustrate the damped vibrations caused by thermal and viscous losses. The green curve represents the undamped response of the micromirror when the surrounding air is not coupled to the mirror movement. The time-domain simulations make it possible to study transients of the system, like the decay time, and the response of the system to an anharmonic forcing. Micromirror displacement and pressure distribution (left) and the transient evolution of the mirror displacement (right). We can also examine the acoustic temperature variations surrounding the micromirror. The isothermal condition at the micromirror surface produces an acoustic thermal boundary layer. As with the frequency-domain example, the highest and lowest temperatures are located opposite to one another. In addition, by calculating the acoustic velocity variations of the micromirror, we see that a no-slip condition at the micromirror surface results in a viscous boundary layer. Acoustic temperature variations (left) as well as acoustic velocity variations for the x-component (center) and z-component (right).
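Earlier, the complex eigenfrequency was said to encode the Q-factor of the resonance. As a minimal sketch of that relation, one can use the standard formula $Q = |\mathrm{Re}\,f| / (2\,|\mathrm{Im}\,f|)$; the eigenfrequency value below is a made-up illustration near the fundamental mode, not a result extracted from the model.

```python
# Estimate the Q-factor of a resonance from a complex eigenfrequency
# f = f_re + i*f_im using the standard relation Q = |f_re| / (2*|f_im|).
# The eigenfrequency below is illustrative only, not a model result.
f_eig = complex(10.5e3, 25.0)   # hypothetical damped mode near 10.5 kHz

Q = abs(f_eig.real) / (2 * abs(f_eig.imag))
print(f"Q-factor: {Q:.0f}")     # 210 for this illustrative value
```

A larger imaginary part (stronger damping from the surrounding air) directly lowers Q, which is why resolving the thermoviscous losses matters for predicting the resonance sharpness.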
Next Steps These examples demonstrate that we can analyze micromirrors using advanced modeling features available in the Acoustics Module in combination with the Structural Mechanics Module. For more details on modeling micromirrors, check out the tutorials below.
I need to prove that: $$\sum_{n=1}^{\infty} (-1)^{n+1}\log\left(1+\frac{1}{n}\right)$$ is convergent, but not absolutely convergent. I tried the ratio test: $$\left|\frac{a_{n+1}}{a_n}\right| = \frac{\log\left(1+\frac{1}{n+1}\right)}{\log\left(1+\frac{1}{n}\right)}$$ I know that this ratio converges to $1$, so the ratio test is inconclusive: I cannot conclude anything about convergence or divergence from it. Also, for the series without the $(-1)^{n+1}$ factor the ratio test gives the same limit $1$, so it does not help with absolute convergence either.
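Not a proof, but a quick numerical experiment (my own sketch) makes the expected behaviour visible. Two facts are useful here: $\log(1+1/n) = \log\frac{n+1}{n}$ telescopes, so the partial sum of absolute values is exactly $\log(N+1)$, and the alternating sum converges to $\log(\pi/2)$ via the Wallis product.

```python
# Numerical check: the alternating partial sums settle near log(pi/2),
# while the series of absolute values grows like log(N+1) without bound.
import math

N = 10_000
alt = sum((-1) ** (n + 1) * math.log(1 + 1 / n) for n in range(1, N + 1))
absolute = sum(math.log(1 + 1 / n) for n in range(1, N + 1))

print(f"alternating partial sum: {alt:.6f}")       # close to log(pi/2) ~ 0.451583
print(f"absolute partial sum:    {absolute:.4f}")  # ~ log(N+1), still growing
```

The alternating-series error bound guarantees the partial sum is within $\log(1+1/(N+1)) \approx 10^{-4}$ of the limit here, while the absolute partial sum equals $\log(N+1) \approx 9.21$ and diverges as $N \to \infty$.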
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these: Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\). So, for now our resources will form a "preorder", as defined in Lecture 3. Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying: reflexivity: \(x \le x\) for all \(x \in X\); transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\). All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\). What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it. It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\). Definition. A monoid is a set \(X\) equipped with an operation \(\otimes : X \times X \to X\) and an element \(I \in X\) such that these laws hold: the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\); the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\). You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders: Definition.
A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying: $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast! You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders: The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder. Same for the set \(\mathbb{Q}\) of rational numbers. Same for the set \(\mathbb{Z}\) of integers. Same for the set \(\mathbb{N}\) of natural numbers. Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it. But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way? Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64. 
Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets? Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
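As a computational companion to Puzzles 60 and 61, here is a sketch (my own, not part of the lecture) that brute-force checks all the monoidal preorder laws for \((\mathbb{N}, \le, \cdot, 1)\) on a finite sample of natural numbers. A finite check is of course evidence, not a proof.

```python
# Brute-force check of the monoidal preorder laws for (N, <=, *, 1)
# on a finite slice of the natural numbers.
from itertools import product

ELEMS = range(0, 8)                 # a finite slice of N
UNIT = 1
tensor = lambda a, b: a * b
leq = lambda a, b: a <= b

# preorder laws
assert all(leq(x, x) for x in ELEMS)                                   # reflexivity
assert all(leq(x, z) for x, y, z in product(ELEMS, repeat=3)
           if leq(x, y) and leq(y, z))                                 # transitivity
# monoid laws
assert all(tensor(tensor(x, y), z) == tensor(x, tensor(y, z))
           for x, y, z in product(ELEMS, repeat=3))                    # associativity
assert all(tensor(UNIT, x) == x == tensor(x, UNIT) for x in ELEMS)     # unit laws
# compatibility: x <= x' and y <= y' imply x*y <= x'*y'
assert all(leq(tensor(x, y), tensor(x2, y2))
           for x, x2, y, y2 in product(ELEMS, repeat=4)
           if leq(x, x2) and leq(y, y2))
print("all monoidal preorder laws hold on the sample")
```

Replacing `ELEMS` with a sample containing negative numbers makes the compatibility assertion fail (for instance \(-2 \le 1\) and \(-2 \le 1\), but \((-2)(-2) = 4 \not\le 1\)), which is relevant to Puzzle 61.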
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
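That two-state rule is concrete enough to sketch directly. Below is a toy implementation (my own, not OE-Cake or Powder Game) on a grid with synchronous updates: type-A cells turn into type B after 10 steps, and type-B cells turn back into A when a 4-neighbour holds an A.

```python
# Toy excitable-medium automaton for the rule described above:
# A -> B after LIFETIME steps; B -> A when adjacent to an A.
import numpy as np

A, B = 0, 1
LIFETIME = 10

def step(state, age):
    """One synchronous update; decisions are based on the old state."""
    new_state = state.copy()
    new_age = age + 1
    # does any 4-neighbour hold an A? (non-periodic edges)
    is_a = state == A
    near_a = np.zeros_like(state, dtype=bool)
    near_a[1:, :] |= is_a[:-1, :]
    near_a[:-1, :] |= is_a[1:, :]
    near_a[:, 1:] |= is_a[:, :-1]
    near_a[:, :-1] |= is_a[:, 1:]
    # B next to an A becomes A again (age reset)
    revive = (state == B) & near_a
    new_state[revive] = A
    new_age[revive] = 0
    # A that has lived out its LIFETIME steps becomes B
    expire = (state == A) & (new_age >= LIFETIME)
    new_state[expire] = B
    return new_state, new_age

rng = np.random.default_rng(1)
state = rng.integers(0, 2, (64, 64))
age = rng.integers(0, LIFETIME, (64, 64))
for _ in range(50):
    state, age = step(state, age)
```

Starting from random states and ages, the expiry timer plays the role of the "extinction" clock and the adjacency rule plays the role of diffusion, so wave-like activity patterns can propagate across the grid.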
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lord's and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think about your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? Actually a microphone with a parabolic reflector or laser reflected listening devices are available on the market, but are there any other devices on the planet which should allow ... and endless whiteboards get doodled with boxes, grids circled red markers and some scribbles The documentary then showed a bird's eye view of the farmlands (which pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid.
They are the index arrays and they notate the range of index of the tensor array In some tiles, there's a swirl of dirt mound, they represent components with nonzero curl and in others grass grew Two blue steel bars were visible lying across the grid, holding up a triangle pool of water Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be tensors of different order Presumably one can formulate it (using an example of a 4th order tensor) as follows: $$A^{\alpha}{}_{\beta\,\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the range of the $\alpha,\beta$ indices.
For example, to encode a patch of nonzero curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be $\{2,3\}$. However, even if the indices only take certain values, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than random integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10-year-long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict into the IRL world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is no longer about Ron Maimon, I can't see the meaning of "campaign" as well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as "campaigning for my caging".
Hi, Can someone provide me some self-study reading material for Condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database, that could be the first step. Next, one might be looking for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown. Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters one needs to consider to simulate them. I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than that of all possible moves in Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that superimposing a region where spacetime is frame-dragged in the clockwise direction on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge) Well. I'm a beginner in the study of General Relativity ok?
My knowledge about the subject is based on books like Schutz, Hartle, Carroll, and introductory papers. My knowledge of quantum mechanics is still poor. So, what I meant about "Gravitational Double slit experiment" is: is there a gravitational analogue of the double slit experiment, for gravitational waves? @JackClerk the double slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like with water waves and light, will we see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true, though back in high school, regardless of the code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice. I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy. I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university, which means remotely running another environment, I found an older version of MATLAB). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
In lectures on effective field theory the professor wanted to find the correction to the four point vertex in massless $\phi^4$ theory by calculating the diagram, $\hspace{6cm}$ We consider the zero external momentum limit and denote $p$ as the momentum in the loop. Then we get, \begin{align} \int \frac{ d ^d p }{ (2\pi)^d }\frac{1}{p ^4 } & = \frac{ i }{ 16 \pi ^2 } ( 4\pi ) ^\epsilon \, \Gamma ( \epsilon ) \, \mu ^{ - 2 \epsilon } \\ & = \frac{ i }{ 16 \pi ^2 } \left( \frac{1}{ \epsilon _{ UV}} - \gamma + \log 4\pi - \log \mu ^2 \right) \\ & = \frac{ i }{ 16 \pi ^2 } \left( \frac{1}{ \epsilon _{ UV}} - \frac{1}{ \epsilon _{ IR}} \right) \end{align} where we introduced $\mu$ as an IR cut-off and then trade $\log \mu ^2 $ for a $\frac{1}{\epsilon_{IR}}$ pole. This is fine, however the professor then goes on to say that this diagram is zero since the two divergences cancel. Why would this be the case? The two divergences arise for completely different reasons. The UV divergence is due to a UV cutoff (possibly from new high energy particles arising at some high up scale) and the second is a consequence of studying a massless theory. For more context the lecture notes are available here under Effective Field Theory (Eq. 4.17)
Geometrical optics Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction, or instrument, useful in approximating the paths along which light propagates in certain classes of circumstances. The simplifying assumptions of geometrical optics include that light rays: propagate in rectilinear paths as they travel in a homogeneous medium bend, and in particular circumstances may split in two, at the interface between two dissimilar media follow curved paths in a medium in which the refractive index changes may be absorbed or reflected. Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. Explanation A slightly more rigorous definition of a light ray follows from Fermat's principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. [1] Geometrical optics is often simplified by making the paraxial approximation, or "small angle approximation." The mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications. [2] Reflection Glossy surfaces such as mirrors reflect light in a simple, predictable way.
This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. [3] This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. (The magnification of a flat mirror is equal to one.) The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. [3] Refraction Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n_1 and another medium with index of refraction n_2. 
In such situations, Snell's Law describes the resulting deflection of the light ray: n_1\sin\theta_1 = n_2\sin\theta_2\ where \theta_1 and \theta_2 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. This phenomenon is also associated with a changing speed of light, as seen from the definition of index of refraction provided above, which implies: v_1\sin\theta_2\ = v_2\sin\theta_1 where v_1 and v_2 are the wave velocities through the respective media. [3] Various consequences of Snell's Law include the fact that for light rays traveling from a material with a high index of refraction to a material with a low index of refraction, it is possible for the interaction with the interface to result in zero transmission. This phenomenon is called total internal reflection and allows for fiber optics technology. As light signals travel down a fiber optic cable, they undergo total internal reflection, allowing for essentially no light to be lost over the length of the cable. It is also possible to produce polarized light rays using a combination of reflection and refraction: When a refracted ray and the reflected ray form a right angle, the reflected ray has the property of "plane polarization". The angle of incidence required for such a scenario is known as Brewster's angle. [3] Snell's Law can be used to predict the deflection of light rays as they pass through "linear media" as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. Additionally, since different frequencies of light have slightly different indexes of refraction in most materials, refraction can be used to produce dispersion spectra that appear as rainbows. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton.
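Snell's law, the critical angle for total internal reflection, and Brewster's angle are all one-line computations. The following is an illustrative sketch (the function names are my own, not from any optics library):

```python
import math

def refract(n1, n2, theta1_deg):
    """Snell's law n1*sin(theta1) = n2*sin(theta2); returns the
    refraction angle in degrees, or None when the incidence angle
    exceeds the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Incidence angle above which total internal reflection occurs
    (defined only for n1 > n2, i.e. going from denser to rarer medium)."""
    return math.degrees(math.asin(n2 / n1))

def brewster_angle(n1, n2):
    """Incidence angle at which the reflected and refracted rays are
    perpendicular, so the reflected light is plane-polarized."""
    return math.degrees(math.atan(n2 / n1))
```

For light leaving glass (n ≈ 1.5) into air the critical angle comes out near 42 degrees, which is why 45-degree prisms can act as perfect mirrors inside binoculars.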
[3] Some media have an index of refraction which varies gradually with position and, thus, light rays curve through the medium rather than travel in straight lines. This effect is what is responsible for mirages seen on hot days where the changing index of refraction of the air causes the light rays to bend creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Material that has a varying index of refraction is called a gradient-index (GRIN) material and has many useful properties used in modern optical scanning technologies including photocopiers and scanners. The phenomenon is studied in the field of gradient-index optics. [4] A device which produces converging or diverging light rays due to refraction is known as a lens. Thin lenses produce focal points on either side that can be modeled using the lensmaker's equation. [5] In general, two types of lenses exist: convex lenses, which cause parallel light rays to converge, and concave lenses, which cause parallel light rays to diverge. The detailed prediction of how images are produced by these lenses can be made using ray-tracing similar to curved mirrors. Similarly to curved mirrors, thin lenses follow a simple equation that determines the location of the images given a particular focal length (f) and object distance (S_1): \frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f} where S_2 is the distance associated with the image and is considered by convention to be negative if on the same side of the lens as the object and positive if on the opposite side of the lens. [5] The focal length f is considered negative for concave lenses. Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. 
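The thin-lens equation and sign conventions stated above can be wrapped in a few lines. This is a hedged sketch with hypothetical helper names, following the convention in the text (S_2 negative on the object's side, f negative for concave lenses):

```python
def image_distance(f, s1):
    """Solve the thin-lens equation 1/s1 + 1/s2 = 1/f for s2.
    Positive s2: real image on the far side of the lens;
    negative s2: virtual image on the object's side."""
    if s1 == f:
        return float('inf')  # object at the focal point: rays emerge parallel
    return 1.0 / (1.0 / f - 1.0 / s1)

def magnification(f, s1):
    """M = -s2/s1 = f/(f - s1); M < 0 means an inverted (real) image."""
    return f / (f - s1)
```

A convex lens with f = 10 cm and an object at 30 cm gives a real, inverted, half-size image 15 cm beyond the lens; the same object in front of a concave f = -10 cm lens gives an upright virtual image at -7.5 cm, matching the qualitative description in the text.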
With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. Likewise, the magnification of a lens is given by M = - \frac{S_2}{S_1} = \frac{f}{f - S_1} where the negative sign is given, by convention, to indicate an upright image for positive values and an inverted image for negative values. Similar to mirrors, upright images produced by single lenses are virtual while inverted images are real. [3] Lenses suffer from aberrations that distort images and focal points. These are due both to geometrical imperfections and to the changing index of refraction for different wavelengths of light (chromatic aberration). [3] Underlying mathematics As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations. In this short-wavelength limit, it is possible to approximate the solution locally by u(t,x) \approx a_0(t,x) e^{i\varphi(t,x)/\varepsilon}, where the phase \varphi(t,x)/\varepsilon can be linearized to recover a large wavenumber k := \nabla_x \varphi and frequency \omega := -\partial_t \varphi, so that locally the solution looks like the plane wave a(t,x)e^{i(k\cdot x - \omega t)}. The amplitude a_0 satisfies a transport equation. The small parameter \varepsilon\, enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too.
In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation, where short-wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis. A simple example Starting with the wave equation for (t,x) \in \mathbb{R}\times\mathbb{R}^n L(\partial_t, \nabla_x) u := \left( \frac{\partial^2}{\partial t^2} - c(x)^2 \Delta \right)u(t,x) = 0, \;\; u(0,x) = u_0(x),\;\; u_t(0,x) = 0 assume an asymptotic series solution of the form u(t,x) \sim a_\varepsilon(t,x)e^{i\varphi(t,x)/\varepsilon} = \sum_{j=0}^\infty i^j \varepsilon^j a_j(t,x) e^{i\varphi(t,x)/\varepsilon}. One checks that L(\partial_t,\nabla_x)\left(a_\varepsilon(t,x)\, e^{i\varphi(t,x)/\varepsilon}\right) = e^{i\varphi(t,x)/\varepsilon} \left( \left(\frac{i}{\varepsilon} \right)^2 L(\varphi_t, \nabla_x\varphi)a_\varepsilon + \frac{2i}{\varepsilon} V(\partial_t,\nabla_x)a_\varepsilon + \frac{i}{\varepsilon} (a_\varepsilon L(\partial_t,\nabla_x)\varphi) + L(\partial_t,\nabla_x)a_\varepsilon \right) with V(\partial_t,\nabla_x) := \frac{\partial \varphi}{\partial t} \frac{\partial}{\partial t} - c^2(x)\sum_j \frac{\partial \varphi}{\partial x_j} \frac{\partial}{\partial x_j} Plugging the series into this equation and equating powers of \varepsilon, the most singular term O(\varepsilon^{-2}) satisfies the eikonal equation (in this case called a dispersion relation), 0 = L(\varphi_t,\nabla_x\varphi) = (\varphi_t)^2 - c(x)^2(\nabla_x \varphi)^2. To order \varepsilon^{-1}, the leading-order amplitude must satisfy the transport equation 2V a_0 + (L\varphi)a_0 = 0. With the definitions k := \nabla_x \varphi, \omega := -\varphi_t, the eikonal equation is precisely the dispersion relation that results by plugging the plane wave solution e^{i(k\cdot x - \omega t)} into the wave equation.
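The last statement is easy to verify numerically: applying the wave operator to the plane wave leaves a factor -\omega^2 + c^2 k^2, which vanishes exactly on the dispersion relation \omega = ck. A small finite-difference sketch of my own, in one spatial dimension:

```python
import cmath

c, k = 2.0, 3.0
omega = c * k                      # dispersion relation omega = c*k

def u(t, x):
    """Plane wave e^{i(kx - omega t)}."""
    return cmath.exp(1j * (k * x - omega * t))

h = 1e-4
def second_derivative(fn, s):
    """Central second-difference approximation."""
    return (fn(s + h) - 2 * fn(s) + fn(s - h)) / h**2

t0, x0 = 0.3, 0.7
# Wave operator u_tt - c^2 u_xx evaluated at one point: ~0 on shell.
residual = second_derivative(lambda t: u(t, x0), t0) \
         - c**2 * second_derivative(lambda x: u(t0, x), x0)
```

Choosing omega away from c*k leaves a residual of order (omega^2 - c^2 k^2) instead of zero, which is exactly the eikonal/dispersion statement.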
The value of this more complicated expansion is that plane waves cannot be solutions when the wavespeed c is non-constant. However, it can be shown that the amplitude a_0 and phase \varphi are smooth, so that on a local scale there are plane waves. To justify this technique, the remaining terms must be shown to be small in some sense. This can be done using energy estimates and an assumption of rapidly oscillating initial conditions. It also must be shown that the series converges in some sense. References Arthur Schuster, An Introduction to the Theory of Optics, London: Edward Arnold, 1904. Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides 1. Hugh D. Young (1992). University Physics 8e. Addison-Wesley. Chapter 35. E. W. Marchand, Gradient Index Optics, New York, NY: Academic Press, 1978. Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. Chapters 5 & 6. Further reading Archive.org: "The Light of the Eyes and the Enlightened Landscape of Vision", a manuscript, in Arabic, about geometrical optics, dating from the 16th century. Theory of Systems of Rays, W.R. Hamilton, Transactions of the Royal Irish Academy, Vol. XV, 1828. English translations of some early books and papers: H. Bruns, "Das Eikonal"; M. Malus, "Optique"; J. Plucker, "Discussion of the general form for light waves"; E. Kummer, "General theory of rectilinear ray systems"; E. Kummer, presentation on optically-realizable rectilinear ray systems; R. Meibauer, "Theory of rectilinear systems of light rays"; M. Pasch, "On the focal surfaces of ray systems and the singularity surfaces of complexes"; A. Levistal, "Research in geometrical optics"; F. Klein, "On the Bruns eikonal"; R. Dontot, "On integral invariants and some points of geometrical optics"; T. de Donder, "On the integral invariants of optics". Fundamentals of Photonics, Module on Basic Geometrical Optics.
2. Series 18. Year Post deadline: - Upload deadline: - 1. Moses's miracle Moses came to the Red Sea and said: "Let the water part and let us cross on dry ground to the promised land." Then he stepped into the waves and they opened. What force did Moses have to exert if he moved the Jews across the Red Sea in this way? Assume the sea to be 1 wide and 20 deep. (Invented by Jarda Trnka while reading the Bible.) 3. helicopter For a helicopter to levitate it needs a motor of power P. What is the power P' of a helicopter which is a scale copy of the previous one at scale 1:2? Assume the efficiency of the rotor to be 100%. (The problem was taken from the International Physics Olympiad in Canada.) 4. desperate shipwrecked people Shipwrecked people at the north pole are trying to make a cup of coffee. Advise them how to boil the water so as to get as much hot water as possible, if there are only 3 ways to boil it: A rechargeable battery of internal resistance $2R$ is connected directly to a heating element of resistance $R$. The same battery is connected in series with the heating element and a capacitor. Each time the capacitor is fully charged, it is disconnected and reconnected into the circuit with reversed polarity. The same battery is used to charge a capacitor, and the capacitor then powers the heating element. (Invented by Matouš Ringel while making coffee on a trip.) P. unexpected obstacle The driver of a car moving at speed $v$ suddenly notices that he is heading for the middle of a concrete wall of width $2d$ and is at distance $l$ from the wall. The coefficient of friction between the tyres and the road surface is $f$. What is the best manoeuvre to avoid the otherwise inevitable accident? Decide what the maximum velocity is at which the crash can still be avoided. (Occurred to Pavel Augustinský while driving.) S. Newton's kinematics equations Write down and solve the kinematics equations for a mass point in the gravitational field of the Earth. Orient the coordinate system so that $x$ and $y$ are horizontal and $z$ is vertical, pointing upwards.
The starting position is $\textbf{r}_{0} = (0,0,h)$, the starting velocity is $\textbf{v}_{0} =(v_{0}\cos\alpha,0,v_{0}\sin\alpha)$. A man with a gun sits in a chair rotating about a vertical axis at frequency $f=1\;\mathrm{Hz}$. The target rotates together with the chair (it is fixed to it). The man shoots a bullet at speed $v=300\;\mathrm{km} \cdot \mathrm{h}^{-1}$ from the rotational axis directly at the middle of the target. At what place will the bullet pass through the target? Solve it both in the non-inertial system and in the inertial system. The distance from the centre of rotation to the middle of the target is $l=3\;\mathrm{m}$; air friction is negligible. State the dependence of the speed of the mass point on its position in the gravitational field of the Sun. (Set by Honza Prachař.)
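As a quick sanity check on the rotating-target problem (my own back-of-the-envelope estimate, not the official solution): in the inertial frame the bullet flies straight while the target turns during the flight, so to first order the impact point is offset from the bullseye by the arc the target sweeps in the flight time.

```python
import math

# Rotating-chair problem, viewed from the inertial frame.
v = 300 / 3.6        # bullet speed: 300 km/h in m/s
l = 3.0              # axis-to-target distance (m)
f = 1.0              # rotation frequency (Hz)

t_flight = l / v                     # time for the bullet to reach radius l
dphi = 2 * math.pi * f * t_flight    # angle the target turns meanwhile
offset = l * dphi                    # small-angle arc: offset from the bullseye
```

This gives a flight time of 0.036 s and an offset of roughly 0.68 m on the target, directed against the sense of rotation; in the co-rotating frame the same displacement appears as the Coriolis deflection of the bullet.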
The Friedmann-Robertson-Walker (FRW) metric in the comoving coordinates $(t,r,\theta,\varphi)$, which describes a homogeneous and isotropic universe, is $$ ds^2\,= -dt^2+\frac{a(t)^2}{1-kr^2}\,dr^2 + a(t)^2 r^2\,\Big( d\theta^2+\sin^2 \!\theta \,d\varphi^2 \Big) $$ where $k$ is the curvature normalized into $\{-1\,,0\,,+1\}$, which refers to an open, flat and closed universe, respectively; and $a(t)$ is the scale factor. My question is: this FRW metric is NOT asymptotically flat at spatial infinity $r\to+\infty$, is it? Thus, we cannot calculate the so-called ADM mass (Arnowitt-Deser-Misner), right? If so, how does one get the mass of the matter content from the metric? Note: I do not mean the trivial $m=\rho V$; I mean the mass obtained from the FRW metric. The matter content determines the geometry/metric, and conversely the metric reflects the matter content. So I am trying to recover the material mass (not including the gravitational energy) from the FRW metric.
3. Series 18. Year Post deadline: - Upload deadline: - 2. bay watch A lifeguard (plavcik) is standing at distance $D$ from the beach. He suddenly sees a drowning blonde girl (blondynka) at distance $D$ out in the sea (see fig. 1). The lifeguard can run at maximum speed $v$ and swim at maximum speed $v/2$. The distance from the lifeguard to the edge of the beach is given by the following equation $$d(\phi) = \frac{D}{3}\left( 8\cos\phi - 2\sqrt{16\cos^2\phi - 12\cos\phi - 3} - 3\right)\,,$$ where $\phi$ is the angle blonde-lifeguard-beach. What is the optimal trajectory for the lifeguard to save her? 4. with the glider over the Channel One of the well-known glider pilots decided to cross the English Channel. In Calais he rented a plane to take him to a height of h = 3 km and from there glided directly towards England. As every pilot knows, a glider's sink speed $v_{kles}$ depends on its forward speed $v_{dop}$ as shown in image 2. What is the optimal speed to achieve the longest flight? When the pilot is 3/4 of the way to England, a strong wind starts to blow from England towards France at a speed of $10\;\mathrm{m}\cdot\mathrm{s}^{-1}$. What is the optimal speed now? What is the maximum wind speed that still allows him to reach England? And what wind speed allows a safe return to France? (Invented by Matouš.) P. the tower of push-carts What are the accelerations of the first and the hundredth push-cart in image 3 (counting from the bottom)? There is an infinite number of push-carts; only the first four are depicted in the image. The mass of the bottom one is $m$, the mass of the second is $m/2$, and the mass of the weight attached to it is $m/2$ as well. The next cart has mass $m/4$ and its weight $m/4$, etc. Assume that the weights are attached to the carts and do not move horizontally relative to them. The friction between the carts is negligible. S. Lagrange's equations of the first kind Consider a mass point suspended on a massless string. Introduce Cartesian coordinates and write down the equations of motion for the mass point.
Write Lagrange's equations of the first kind for the mass point from part a). Show that they are equivalent to the equation of the mathematical pendulum, $$\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}t^{2}} + \frac{g}{l} \sin \varphi = 0,$$ where $\varphi$ is the angular displacement from equilibrium. A small body rests at the top of a hemisphere and starts to slide down. Using Lagrange's equations of the first kind, calculate the height at which the body takes off from the hemisphere. (Hint: the body takes off when $\lambda = 0$.) (The authors of the series.)
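The pendulum equation above is easy to probe numerically; the following is my own illustrative sketch (not part of the problem set), using a simple Euler-Cromer integrator. For small amplitudes the measured period approaches the textbook value 2*pi*sqrt(l/g), and it grows with amplitude.

```python
import math

def pendulum_period(phi0, g=9.81, l=1.0, dt=1e-5):
    """Measure the period of d^2(phi)/dt^2 + (g/l)*sin(phi) = 0 by
    releasing the pendulum from rest at angle phi0 > 0 and timing the
    first zero crossing (a quarter period), with Euler-Cromer steps."""
    phi, omega, t = phi0, 0.0, 0.0
    while phi > 0:
        omega -= (g / l) * math.sin(phi) * dt
        phi += omega * dt
        t += dt
    return 4.0 * t
```

With phi0 = 0.01 rad the result is within a hundredth of a second of 2*pi*sqrt(1/9.81) ≈ 2.006 s, while phi0 = 2 rad gives a noticeably longer period, illustrating the amplitude dependence hidden by the small-angle approximation.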
Let $\mathrm{SO}(1,d-1)_{+}$ be the restricted Lorentz Group in $d$ dimensions. Are there projective irreducible representations of this group that do not descend from a representation of $\mathrm{C}\ell_{1,d-1}$? In other words, it is known that any representation of the Clifford algebra induces a representation of the corresponding $\mathrm{Spin}$ group; is the converse true, i.e., does any representation of the $\mathrm{Spin}$ group correspond to some representation of the corresponding Clifford algebra? Any set of matrices $\{\gamma^\mu\}$ satisfying $$ \gamma^{(\mu}\gamma^{\nu)}=\eta^{\mu\nu}\tag1 $$ leads to a set of matrices $S^{\mu\nu}:=\frac i2\gamma^{[\mu}\gamma^{\nu]}$ satisfying $$ [S^{\mu\nu},S^{\rho\sigma}]=\eta^{\mu\rho}S^{\nu\sigma}+\text{perm.}\tag2 $$ My question is: is it true that for any set of matrices $\{S^{\mu\nu}\}$ satisfying $(2)$ we will have a set of matrices $\{\gamma^\mu\}$ satisfying $(1)$? Note: when considering projective representations of this group, only two phases are possible, $\pm1$. Needless to say, here I am asking about those corresponding to $-1$. For the other sign the answer is obvious.
Is there a relationship between regression and linear discriminant analysis (LDA)? What are their similarities and differences? Does it make any difference if there are two classes or more than two classes? I take it that the question is about LDA and linear (not logistic) regression. There is a considerable and meaningful relation between linear regression and linear discriminant analysis. In case the dependent variable (DV) consists of just 2 groups, the two analyses are actually identical. Although the computations are different and the results, regression and discriminant coefficients, are not the same, they are exactly proportional to each other. Now for the more-than-two-groups situation. First, let us state that LDA (its extraction stage, not classification) is equivalent (linearly related results) to canonical correlation analysis if you turn the grouping DV into a set of dummy variables (with one redundant of them dropped out) and do canonical analysis with the sets "IVs" and "dummies". The canonical variates on the side of the "IVs" set that you obtain are what LDA calls "discriminant functions" or "discriminants". So, how then is canonical analysis related to linear regression? Canonical analysis is in essence a MANOVA (in the sense of "multivariate multiple linear regression" or "multivariate general linear model") deepened into the latent structure of relationships between the DVs and the IVs. The variation in the two sets is decomposed, through their inter-relations, into latent "canonical variates". Let us take the simplest example, Y vs X1 X2 X3. Maximization of the correlation between the two sides is linear regression (if you predict Y by the Xs) or, which is the same thing, MANOVA (if you predict the Xs by Y). The correlation is unidimensional (with magnitude R^2 = Pillai's trace) because the lesser set, Y, consists of just one variable. Now let us take these two sets: Y1 Y2 vs X1 X2 X3. The correlation being maximized here is 2-dimensional because the lesser set contains 2 variables.
The first and stronger latent dimension of the correlation is called the 1st canonical correlation, and the remaining part, orthogonal to it, the 2nd canonical correlation. So, MANOVA (or linear regression) just asks what the partial roles (the coefficients) of the variables are in the whole 2-dimensional correlation of the sets; while canonical analysis goes below that to ask what the partial roles of the variables are in the 1st correlational dimension, and in the 2nd. Thus, canonical correlation analysis is multivariate linear regression deepened into the latent structure of the relationship between the DVs and IVs. Discriminant analysis is a particular case of canonical correlation analysis (see exactly how). So, here was the answer about the relation of LDA to linear regression in the general case of more than two groups. Note that my answer does not at all see LDA as a classification technique. I was discussing LDA only as an extraction-of-latents technique. Classification is the second, stand-alone stage of LDA (I described it here). @Michael Chernick was focusing on it in his answers. Here is a reference to one of Efron's papers: The Efficiency of Logistic Regression Compared to Normal Discriminant Analysis, 1975. Another relevant paper is Ng & Jordan, 2001, On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes. And here is an abstract of a comment on it by Xue & Titterington, 2008, that mentions O'Neill's papers related to his PhD dissertation: Comparison of generative and discriminative classifiers is an ever-lasting topic. As an important contribution to this topic, based on their theoretical and empirical comparisons between the naïve Bayes classifier and linear logistic regression, Ng and Jordan (NIPS 841---848, 2001) claimed that there exist two distinct regimes of performance between the generative and discriminative classifiers with regard to the training-set size.
In this paper, our empirical and simulation studies, as a complement of their work, however, suggest that the existence of the two distinct regimes may not be so reliable. In addition, for real world datasets, so far there is no theoretically correct, general criterion for choosing between the discriminative and the generative approaches to classification of an observation $x$ into a class $y$; the choice depends on the relative confidence we have in the correctness of the specification of either $p(y|x)$ or $p(x, y)$ for the data. This can be to some extent a demonstration of why Efron (J Am Stat Assoc 70(352):892---898, 1975) and O'Neill (J Am Stat Assoc 75(369):154---160, 1980) prefer normal-based linear discriminant analysis (LDA) when no model mis-specification occurs but other empirical studies may prefer linear logistic regression instead. Furthermore, we suggest that pairing of either LDA assuming a common diagonal covariance matrix or the naïve Bayes classifier and linear logistic regression may not be perfect, and hence it may not be reliable for any claim that was derived from the comparison between LDA or the naïve Bayes classifier and linear logistic regression to be generalised to all generative and discriminative classifiers. There are a lot of other references on this that you can find online. The purpose of this answer is to explain the exact mathematical relationship between linear discriminant analysis (LDA) and multivariate linear regression (MLR). It will turn out that the correct framework is provided by reduced-rank regression (RRR). We will show that LDA is equivalent to RRR of the whitened class indicator matrix on the data matrix. Notation Let $\newcommand{\X}{\mathbf X}\X$ be the $n\times d$ matrix with data points $\newcommand{\x}{\mathbf x}\x_i$ in rows and variables in columns. Each point belongs to one of the $k$ classes, or groups. Point $\x_i$ belongs to class number $g(i)$.
Let $\newcommand{\G}{\mathbf G}\G$ be the $n \times k$ indicator matrix encoding group membership as follows: $G_{ij}=1$ if $\x_i$ belongs to class $j$, and $G_{ij}=0$ otherwise. There are $n_j$ data points in class $j$; of course $\sum n_j = n$. We assume that the data are centered and so the global mean is equal to zero, $\newcommand{\bmu}{\boldsymbol \mu}\bmu=0$. Let $\bmu_j$ be the mean of class $j$. LDA The total scatter matrix $\newcommand{\C}{\mathbf C}\C=\X^\top \X$ can be decomposed into the sum of between-class and within-class scatter matrices defined as follows: \begin{align} \C_b &= \sum_j n_j \bmu_j \bmu_j^\top \\ \C_w &= \sum(\x_i - \bmu_{g(i)})(\x_i - \bmu_{g(i)})^\top. \end{align} One can verify that $\C = \C_b + \C_w$. LDA searches for discriminant axes that have maximal between-group variance and minimal within-group variance of the projection. Specifically, first discriminant axis is the unit vector $\newcommand{\w}{\mathbf w}\w$ maximizing $\w^\top \C_b \w / (\w^\top \C_w \w)$, and the first $p$ discriminant axes stacked together into a matrix $\newcommand{\W}{\mathbf W}\W$ should maximize the trace $$\DeclareMathOperator{\tr}{tr} L_\mathrm{LDA}=\tr\left(\W^\top \C_b \W (\W^\top \C_w \W)^{-1}\right).$$ Assuming that $\C_w$ is full rank, LDA solution $\W_\mathrm{LDA}$ is the matrix of eigenvectors of $\C_w^{-1} \C_b$ (ordered by the eigenvalues in the decreasing order). This was the usual story. Now let us make two important observations. First, within-class scatter matrix can be replaced by the total scatter matrix (ultimately because maximizing $b/w$ is equivalent to maximizing $b/(b+w)$), and indeed, it is easy to see that $\C^{-1} \C_b$ has the same eigenvectors. Second, the between-class scatter matrix can be expressed via the group membership matrix defined above. Indeed, $\G^\top \X$ is the matrix of group sums. 
To get the matrix of group means, it should be multiplied by a diagonal matrix with $n_j$ on the diagonal; it's given by $\G^\top \G$. Hence, the matrix of group means is $(\G^\top \G)^{-1}\G^\top \X$ (the attentive reader will notice that this is a regression formula). To get $\C_b$ we need to take its scatter matrix, weighted by the same diagonal matrix, obtaining $$\C_b = \X^\top \G (\G^\top \G)^{-1}\G^\top \X.$$ If all $n_j$ are identical and equal to $m$ ("balanced dataset"), then this expression simplifies to $\X^\top \G \G^\top \X / m$. We can define the normalized indicator matrix $\newcommand{\tG}{\widetilde {\mathbf G}}\tG$ as having $1/\sqrt{n_j}$ where $\G$ has $1$. Then for both balanced and unbalanced datasets, the expression is simply $\C_b = \X^\top \tG \tG^\top \X$. Note that $\tG$ is, up to a constant factor, the whitened indicator matrix: $\tG = \G(\G^\top \G)^{-1/2}$. Regression For simplicity, we will start with the case of a balanced dataset. Consider linear regression of $\G$ on $\X$. It finds $\newcommand{\B}{\mathbf B}\B$ minimizing $\| \G - \X \B\|^2$. Reduced-rank regression does the same under the constraint that $\B$ should be of the given rank $p$. If so, then $\B$ can be written as $\newcommand{\D}{\mathbf D} \newcommand{\F}{\mathbf F} \B=\D\F^\top$ with both $\D$ and $\F$ having $p$ columns. One can show that the rank-two solution can be obtained from the rank-one solution by keeping the first column and adding an extra column, etc. To establish the connection between LDA and linear regression, we will prove that $\D$ coincides with $\W_\mathrm{LDA}$. The proof is straightforward. For the given $\D$, the optimal $\F$ can be found via regression: $\F^\top = (\D^\top \X^\top \X \D)^{-1} \D^\top \X^\top \G$. Plugging this into the loss function, we get $$\| \G - \X \D (\D^\top \X^\top \X \D)^{-1} \D^\top \X^\top \G\|^2,$$ which can be written as a trace using the identity $\|\mathbf A\|^2=\mathrm{tr}(\mathbf A \mathbf A^\top)$.
After easy manipulations we get that the regression is equivalent to maximizing (!) the following scary trace: $$\tr\left(\D^\top \X^\top \G \G^\top \X \D (\D^\top \X^\top \X \D)^{-1}\right),$$ which is actually nothing else than $$\ldots = \tr\left(\D^\top \C_b \D (\D^\top \C \D)^{-1}\right)/m \sim L_\mathrm{LDA}.$$ This finishes the proof. For unbalanced datasets we need to replace $\G$ with $\tG$. One can similarly show that adding ridge regularization to the reduced rank regression is equivalent to the regularized LDA. Relationship between LDA, CCA, and RRR In his answer, @ttnphns made a connection to canonical correlation analysis (CCA). Indeed, LDA can be shown to be equivalent to CCA between $\X$ and $\G$. In addition, CCA between any $\newcommand{\Y}{\mathbf Y}\Y$ and $\X$ can be written as RRR predicting whitened $\Y$ from $\X$. The rest follows from this. Bibliography It is hard to say who deserves the credit for what is presented above. There is a recent conference paper by Cai et al. (2013) On The Equivalent of Low-Rank Regressions and Linear Discriminant Analysis Based Regressions that presents exactly the same proof as above but creates the impression that they invented this approach. This is definitely not the case. Torre wrote a detailed treatment of how most of the common linear multivariate methods can be seen as reduced rank regression, see A Least-Squares Framework for Component Analysis, 2009, and a later book chapter A unification of component analysis methods, 2013; he presents the same argument but does not give any references either. This material is also covered in the textbook Modern Multivariate Statistical Techniques (2008) by Izenman, who introduced RRR back in 1975. The relationship between LDA and CCA apparently goes back to Bartlett, 1938, Further aspects of the theory of multiple regression -- that's the reference I often encounter (but did not verify). 
The relationship between CCA and RRR is described in Izenman, 1975, Reduced-rank regression for the multivariate linear model. So all of these ideas have been around for a while. Linear regression and linear discriminant analysis are very different. Linear regression relates a dependent variable to a set of independent predictor variables. The idea is to find a function linear in the parameters that best fits the data. It does not even have to be linear in the covariates. Linear discriminant analysis, on the other hand, is a procedure for classifying objects into categories. For the two-class problem it seeks to find the best separating hyperplane for dividing the groups into two categories. Here best means that it minimizes a loss function that is a linear combination of the error rates. For three or more groups it finds the best set of hyperplanes (k-1 for the k-class problem). In discriminant analysis the hyperplanes are linear in the feature variables. The main similarity between the two is the term "linear" in the titles.
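The two-group claim made in the first answer, that regression and discriminant coefficients are exactly proportional, is easy to check numerically. The sketch below is my own illustration in plain NumPy (variable names are mine, not from any LDA library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian groups in three dimensions.
X0 = rng.normal(0.0, 1.0, size=(100, 3))
X1 = rng.normal(1.0, 1.0, size=(120, 3))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(120)]

# Fisher discriminant direction: within-class scatter inverse times mean difference.
Sw = (X0 - X0.mean(0)).T @ (X0 - X0.mean(0)) \
   + (X1 - X1.mean(0)).T @ (X1 - X1.mean(0))
w_lda = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

# OLS regression of the 0/1 group indicator on X (with an intercept).
A = np.column_stack([np.ones(len(X)), X])
w_reg = np.linalg.lstsq(A, y, rcond=None)[0][1:]   # drop the intercept

# The two coefficient vectors are collinear (cosine similarity = 1).
cos = w_lda @ w_reg / (np.linalg.norm(w_lda) * np.linalg.norm(w_reg))
```

The proportionality is exact in finite samples, not just asymptotically: the total scatter differs from the within-class scatter by a rank-one term along the mean difference, so by the Sherman-Morrison formula both procedures invert onto the same direction.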
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if "not typesetting" includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording? @barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out. 
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} make a small html file that looks like <!... @baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier. 
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
Let $I$ be a finite set of items and $\mathcal{M} = \{M \mid M \subseteq I\}$ be a set of subsets of $I$. The task is to find the biggest subset $\tilde{\mathcal{M}} \subseteq \mathcal{M}$ whose elements are pairwise disjoint: $$\text{arg } \max_{\tilde{\mathcal{M}} \subseteq \mathcal{M}} |\tilde{\mathcal{M}}| \text{ with } M_1 \cap M_2 = \emptyset \text{ for all distinct } M_1, M_2 \in \tilde{\mathcal{M}}$$ What is an efficient algorithm to do so? (If $|I| \approx 10\,000$ and $|\mathcal{M}| \approx 100$.)

Example

Let $I = \{a, b, c, d, e\}$ be the set of items and $$\mathcal{M} = \{A, B, C, D\}$$ with $$ \begin{align} A &= \{a\}\\ B &= \{b, c\}\\ C &= \{a, c\}\\ D &= \{b, d\} \end{align} $$ Then $$\tilde{\mathcal{M}} = \{A, B\}$$ is one of the solutions. There is no way to have $|\tilde{\mathcal{M}}| \geq 3$.

Ideas

Brute force

    M_tilde_max = None
    for Mc_tilde in powerset(Mc):
        if Mc_tilde is pairwise disjoint and (M_tilde_max is None or |M_tilde_max| < |Mc_tilde|):
            M_tilde_max = Mc_tilde

This has complexity $\mathcal{O}(2^{|\mathcal{M}|})$.

Apriori

If a set $\tilde{\mathcal{M}}$ contains only pairwise disjoint elements, then all of its subsets have this property too. So one can first find all sets of size 1 with this property (which are simply all sets), then all sets of size 2, then all of size 3, ... This has the same worst-case time complexity as brute force, as in the worst case $\tilde{\mathcal{M}}$ is just equal to $\mathcal{M}$. It might be much better in some scenarios, if one knows that it will be smaller. However, the space complexity is quite bad here.

More

I guess one could pre-compute for all $|\mathcal{M}|^2$ combinations of two sets whether they are disjoint or not. After that, the table can be used and $\mathcal{M}$ and $I$ don't have to be touched at all. This is the maximum satisfiability problem: each set $M_i \in \mathcal{M}$ is a boolean variable $x_i$. If $M_i$ and $M_j$ are disjoint, then $(x_i \lor x_j)$ is added. 
If not, then either $(x_i \lor \neg x_j)$ or $(\neg x_i \lor x_j)$ can be added (I think it doesn't matter which one?). This way, a conjunctive normal form can be produced. All clauses have only two elements, so I think there might be an efficient algorithm for it?

Context

This question is more of a brain-teaser for me. It started with the following problem: I would like to generate a tree of categories for Wikimedia Commons. The problem is that categories in Wikimedia Commons are not disjoint. For example, for https://commons.wikimedia.org/wiki/Category:Rosa there are the categories "Roses by location", "Rosa by month" and "Roses by photographer", which are not what I want. However, to filter those I think just removing categories with the substring " by " might be enough. I have to check it, though.
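The pre-computed pairwise-disjointness table from the "Ideas" section can be combined with the brute-force search in a short sketch. This is only for small instances (it is still exponential in $|\mathcal{M}|$); the function name is mine:

```python
from itertools import combinations

def largest_disjoint_family(sets):
    """Return a largest sub-family of pairwise disjoint sets.

    Uses a precomputed |M| x |M| disjointness table, then searches
    subsets from largest to smallest (exponential worst case).
    """
    sets = [frozenset(s) for s in sets]
    n = len(sets)
    # Pairwise table: disjoint[i][j] is True iff sets i and j don't intersect.
    disjoint = [[not (sets[i] & sets[j]) for j in range(n)] for i in range(n)]
    for k in range(n, 0, -1):
        for idx in combinations(range(n), k):
            if all(disjoint[i][j] for i, j in combinations(idx, 2)):
                return [sets[i] for i in idx]
    return []

# The worked example from the question:
A, B, C, D = {'a'}, {'b', 'c'}, {'a', 'c'}, {'b', 'd'}
sol = largest_disjoint_family([A, B, C, D])  # e.g. [{'a'}, {'b', 'c'}]
```

Since the table only records conflicts between pairs of sets, the search never has to touch $I$ again, matching the observation in the question.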
During my PhD thesis, I partly worked on the problem of automatic, accurate test data generation. In order to be complete and self-contained, I addressed all kinds of data types, including strings. This article is the first of a little series that aims at showing how to generate accurate and relevant strings under several constraints.

What is a regular expression?

We are talking about formal language theory here. In the known world, there are four kinds of languages. More formally, in 1956 the Chomsky hierarchy was formulated, classifying grammars (which define languages) into four levels:

- unrestricted grammars, matching languages known as Turing languages; no restriction,
- context-sensitive grammars, matching contextual languages,
- context-free grammars, matching algebraic languages, based on stack automata,
- regular grammars, matching regular languages.

Each level includes the next level. The last level is the "weakest", which must not sound negative here. Regular expressions are often used because of their simplicity and also because they solve most problems we encounter daily. A regular expression is a small language with very few operators and, most of the time, a simple semantics. For instance, ab(c|d) means: a word (a piece of data) starting with ab and followed by c or d. We also have quantification operators (also known as repetition operators), such as ?, * and +. We also have {x,y} to define a repetition between x and y times. Thus, ? is equivalent to {0,1}, * to {0,} and + to {1,}. When y is missing, it means $+\infty$, so unbounded (or, more exactly, bounded by the limits of the machine). So, for instance, ab(c|d){2,4}e? means: a word starting with ab, followed 2, 3 or 4 times by c or d (so cc, cd, dc, ccc, ccd, cdc and so on) and potentially followed by e. The goal here is not to teach you regular expressions; this is just a tiny reminder. There are plenty of regular expression languages. 
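As a quick sanity check of these quantifier semantics, the running example can be exercised in Python's re module (a PCRE-like engine, used here purely for illustration):

```python
import re

pattern = re.compile(r'ab(c|d){2,4}e?')

# Words the expression should accept: 2 to 4 repetitions of c or d,
# optionally followed by a single e.
assert pattern.fullmatch('abcd')
assert pattern.fullmatch('abdcce')
assert pattern.fullmatch('abcdcd')

# Words it should reject.
assert pattern.fullmatch('abc') is None      # only one repetition
assert pattern.fullmatch('abcdcdc') is None  # five repetitions
assert pattern.fullmatch('abccee') is None   # e may appear at most once
```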
You might know POSIX regular expressions or Perl Compatible Regular Expressions (PCRE). Forget the first one, please: the syntax and the semantics are too limited. PCRE is the regular language I recommend all the time. Behind every formal language there is a graph. A regular expression is compiled into a Finite State Machine (FSM). I am not going to draw and explain them, but it is interesting to know that behind a regular expression there is a basic automaton. No magic.

Why focus on regular expressions?

This article focuses on regular languages instead of other kinds of languages because we use them very often (even daily). I am going to address context-free languages in another article; be patient, young padawan. The needs and constraints with other kinds of languages are not the same, and more complex algorithms must be involved. So we are going easy for the first step.

Understanding PCRE: lex and parse them

The Hoa\Compiler library provides both $LL(1)$ and $LL(k)$ compiler-compilers. The documentation describes how to use it. We discover that the $LL(k)$ compiler comes with a grammar description language called PP. What does it mean? It means, for instance, that the grammar of PCRE can be written with the PP language and that Hoa\Compiler\Llk will transform this grammar into a compiler. That's why we call them "compilers of compilers". Fortunately, the Hoa\Regex library provides the grammar of the PCRE language in the hoa://Library/Regex/Grammar.pp file. Consequently, we are able to analyze regular expressions written in the PCRE language! Let's try in a shell at first with the hoa compiler:pp tool: $ echo 'ab(c|d){2,4}e?' 
| hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --visitor dump
>  #expression
>  >  #concatenation
>  >  >  token(literal, a)
>  >  >  token(literal, b)
>  >  >  #quantification
>  >  >  >  #alternation
>  >  >  >  >  token(literal, c)
>  >  >  >  >  token(literal, d)
>  >  >  >  token(n_to_m, {2,4})
>  >  >  #quantification
>  >  >  >  token(literal, e)
>  >  >  >  token(zero_or_one, ?)

We read that the whole expression is composed of a single concatenation of two tokens, a and b, followed by a quantification, followed by another quantification. The first quantification is an alternation of (a choice between) two tokens, c and d, repeated 2 to 4 times. The second quantification is the e token that can appear zero or one time. Pretty simple. The final output of the Hoa\Compiler\Llk\Parser class is an Abstract Syntax Tree (AST). The documentation of Hoa\Compiler explains all that stuff; you should read it. The $LL(k)$ compiler is cut into very distinct layers in order to improve hackability. Again, the documentation teaches us that we have four levels in the compilation process: lexical analyzer, syntactic analyzer, trace and AST. The lexical analyzer (also known as lexer) transforms the textual data being analyzed into a sequence of tokens (formally known as lexemes). It checks whether the data is composed of the right pieces. Then, the syntactic analyzer (also known as parser) checks that the order of tokens in this sequence is correct (formally we say that it derives the sequence; see the Matching words section to learn more). Still in the shell, we can get the result of the lexical analyzer by using the --token-sequence option; thus:

$ echo 'ab(c|d){2,4}e?' | hoa compiler:pp hoa://Library/Regex/Grammar.pp 0 --token-sequence
  #  token name    token value  offset
-----------------------------------------
  0  literal       a            0
  1  literal       b            1
  2  capturing_    (            2
  3  literal       c            3
  4  alternation   |            4
  5  literal       d            5
  6  _capturing    )            6
  7  n_to_m        {2,4}        7
  8  literal       e            12
  9  zero_or_one   ?            13
 10  EOF                        15

This is the sequence of tokens produced by the lexical analyzer. The tree is not yet built because this is only the first step of the compilation process. However, it is always interesting to understand these different steps and see how it works. Now we are able to analyze any regular expression in the PCRE format! The result of this analysis is a tree. You know what is fun with trees? Visiting them.

Visiting the AST

Unsurprisingly, each node of the AST can be visited thanks to the Hoa\Visitor library. Here is an example with the "dump" visitor:

use Hoa\Compiler;
use Hoa\File;

// 1. Load grammar.
$compiler = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);

// 2. Parse a data.
$ast = $compiler->parse('ab(c|d){2,4}e?');

// 3. Dump the AST.
$dump = new Compiler\Visitor\Dump();
echo $dump->visit($ast);

This program will print the same AST dump we have previously seen in the shell. How to write our own visitor? A visitor is a class with a single visit method. Let's try a visitor that pretty-prints a regular expression, i.e. transforms ab(c|d){2,4}e? into:

ab(
    c
    |
    d
){2,4}e?

Why a pretty printer? First, it shows how to visit a tree. Second, it shows the structure of the visitor: we filter by node ID (#expression, #quantification, token etc.) and we apply the respective computations. A pretty printer is often a good way to become familiar with the structure of an AST. Here is the class. It catches only the constructions needed for the given example:

use Hoa\Visitor;

class PrettyPrinter implements Visitor\Visit
{
    public function visit (
        Visitor\Element $element,
        &$handle = null,
        $eldnah = null
    ) {
        static $_indent = 0;

        $out    = null;
        $nodeId = $element->getId();

        switch ($nodeId) {
            // Reset indentation and…
            case '#expression':
                $_indent = 0;

            // … visit all the children.
            case '#quantification':
                foreach ($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah);
                break;

            // One new line between each child of the concatenation.
            case '#concatenation':
                foreach ($element->getChildren() as $child)
                    $out .= $child->accept($this, $handle, $eldnah) . "\n";
                break;

            // Add parentheses and increase indentation.
            case '#alternation':
                $oout    = [];
                $pIndent = str_repeat('    ', $_indent);
                ++$_indent;
                $cIndent = str_repeat('    ', $_indent);

                foreach ($element->getChildren() as $child)
                    $oout[] = $cIndent . $child->accept($this, $handle, $eldnah);

                --$_indent;
                $out .= $pIndent . '(' . "\n" .
                        implode("\n" . $cIndent . '|' . "\n", $oout) . "\n" .
                        $pIndent . ')';
                break;

            // Print token value verbatim.
            case 'token':
                $tokenId    = $element->getValueToken();
                $tokenValue = $element->getValueValue();

                switch ($tokenId) {
                    case 'literal':
                    case 'n_to_m':
                    case 'zero_or_one':
                        $out .= $tokenValue;
                        break;

                    default:
                        throw new RuntimeException(
                            'Token ID ' . $tokenId . ' is not well-handled.'
                        );
                }
                break;

            default:
                throw new RuntimeException(
                    'Node ID ' . $nodeId . ' is not well-handled.'
                );
        }

        return $out;
    }
}

And finally, we apply the pretty printer on the AST as previously seen:

$compiler = Compiler\Llk\Llk::load(
    new File\Read('hoa://Library/Regex/Grammar.pp')
);
$ast = $compiler->parse('ab(c|d){2,4}e?');
$prettyprint = new PrettyPrinter();
echo $prettyprint->visit($ast);

Et voilà ! Now, let's put all that stuff together!

Isotropic generation

We can use Hoa\Regex and Hoa\Compiler to get the AST of any regular expression written in the PCRE format. We can use Hoa\Visitor to traverse the AST and apply computations according to the type of nodes. Our goal is to generate strings based on regular expressions. What kind of generation are we going to use? There are plenty of them: uniform random, smallest, coverage based… The simplest is isotropic generation, also known as random generation. But "random" says nothing: what is the repartition, do we have any uniformity? Isotropic means each choice will be resolved randomly and uniformly. Uniformity has to be defined: does it include the whole set of nodes or just the immediate children of the node? 
Isotropic means we consider only immediate children. For instance, if a node #alternation has $c$ immediate children, the probability of the choice $C$ of one child is: $P(C) = \frac{1}{c}$. Yes, simple as that! We can use the Hoa\Math library, which provides the Hoa\Math\Sampler\Random class to sample uniformly random integers and floats. Ready?

Structure of the visitor

The structure of the visitor is the following:

use Hoa\Visitor;
use Hoa\Math;

class IsotropicSampler implements Visitor\Visit
{
    protected $_sampler = null;

    public function __construct ( Math\Sampler $sampler )
    {
        $this->_sampler = $sampler;

        return;
    }

    public function visit (
        Visitor\Element $element,
        &$handle = null,
        $eldnah = null
    ) {
        switch ($element->getId()) {
            // …
        }
    }
}

We set a sampler and we start visiting and filtering nodes by their node ID. The following code will generate a string based on the regular expression contained in the $expression variable:

$expression = '…';
$ast = $compiler->parse($expression);
$generator = new IsotropicSampler(new Math\Sampler\Random());
echo $generator->visit($ast);

We are going to change the value of $expression step by step until we reach ab(c|d){2,4}e?.

Case of #expression

A node of type #expression has only one child. Thus, we simply return the computation of this node:

case '#expression':
    return $element->getChild(0)->accept($this, $handle, $eldnah);
    break;

Case of token

We consider only one type of token for now: literal. A literal can contain an escaped character, can be a single character or can be . (which means everything). We consider only a single character for this example (spoiler: the whole visitor already exists). Thus:

case 'token':
    return $element->getValueValue();
    break;

Here, with $expression = 'a'; we get the string a.

Case of #concatenation

A concatenation is just the computation of all children joined into a single string. 
Thus:

case '#concatenation':
    $out = null;

    foreach ($element->getChildren() as $child)
        $out .= $child->accept($this, $handle, $eldnah);

    return $out;
    break;

At this step, with $expression = 'ab'; we get the string ab. Totally crazy.

Case of #alternation

An alternation is a choice between several children. All we have to do is to select a child based on the probability given above. The number of children of the current node can be known thanks to the getChildrenNumber method. We are also going to use the integer sampler. Thus:

case '#alternation':
    $childIndex = $this->_sampler->getInteger(
        0,
        $element->getChildrenNumber() - 1
    );

    return $element->getChild($childIndex)
                   ->accept($this, $handle, $eldnah);
    break;

Now, with $expression = 'ab(c|d)'; we get the strings abc or abd at random. Try several times to see for yourself.

Case of #quantification

A quantification is an alternation of concatenations. Indeed, e{2,4} is strictly equivalent to ee|eee|eeee. We have only two quantifications in our example: ? and {x,y}. We are going to find the values of x and y and then choose at random between these bounds. Let's go:

case '#quantification':
    $out = null;
    $x   = 0;
    $y   = 0;

    // Filter the type of quantification.
    switch ($element->getChild(1)->getValueToken()) {
        // ?
        case 'zero_or_one':
            $y = 1;
            break;

        // {x,y}
        case 'n_to_m':
            $xy = explode(
                ',',
                trim($element->getChild(1)->getValueValue(), '{}')
            );
            $x  = (int) trim($xy[0]);
            $y  = (int) trim($xy[1]);
            break;
    }

    // Choose the number of repetitions.
    $max = $this->_sampler->getInteger($x, $y);

    // Concatenate.
    for ($i = 0; $i < $max; ++$i)
        $out .= $element->getChild(0)->accept($this, $handle, $eldnah);

    return $out;
    break;

Finally, with $expression = 'ab(c|d){2,4}e?'; we can get the following strings: abdcce, abdc, abddcd, abcde etc. Nice, isn't it? Want more? 
for ($i = 0; $i < 42; ++$i)
    echo $generator->visit($ast), "\n";

/**
 * Could output:
 * abdce
 * abdcc
 * abcdde
 * abcdcd
 * abcde
 * abcc
 * abddcde
 * abddcce
 * abcde
 * abcc
 * abdcce
 * abcde
 * abdce
 * abdd
 * abcdce
 * abccd
 * abdcdd
 * abcdcce
 * abcce
 * abddc
 */

Performance

It is difficult to give numbers because they depend on a lot of parameters: your machine configuration, the PHP VM, whether other programs are running etc. But I have generated 1 million ($10^6$) strings in less than 25 seconds on my machine (an old MacBook Pro), which is pretty reasonable.

Conclusion and surprise

So, yes, now we know how to generate strings based on regular expressions! Supporting the whole PCRE format is difficult. That's why the Hoa\Regex library provides the Hoa\Regex\Visitor\Isotropic class, which is a more advanced visitor. The latter supports classes, negative classes, ranges, all quantifications, all kinds of literals (characters, escaped characters, types of characters — \w, \d, \h… —) etc. Consequently, all you have to do is:

use Hoa\Regex;

// …
$generator = new Regex\Visitor\Isotropic(new Math\Sampler\Random());
echo $generator->visit($ast);

This algorithm is used in Praspel, a specification language I designed during my PhD thesis. More specifically, this algorithm is used inside realistic domains. I am not going to explain it today, but it allows me to introduce the "surprise".

Generate strings based on regular expressions in atoum

atoum is an awesome unit test framework. You can use the Atoum\PraspelExtension extension to use Praspel and therefore realistic domains inside atoum. You can use realistic domains to validate and to generate data; they are designed for that. Obviously, we can use the Regex realistic domain. This extension provides several features, including sample, sampleMany and predicate, to respectively generate one datum, generate many data and validate a datum based on a realistic domain. 
To declare a regular expression, we must write:

$regex = $this->realdom->regex('/ab(c|d){2,4}e?/');

And to generate a datum, all we have to do is:

$datum = $this->sample($regex);

For instance, imagine you are writing a test called test_mail and you need an email address:

public function test_mail ( )
{
    $this
        ->given(
            $regex   = $this->realdom->regex('/[\w\-_]+(\.[\w\-\_]+)*@\w\.(net|org)/'),
            $address = $this->sample($regex),
            $mailer  = new \Mock\Mailer(…),
        )
        ->when($mailer->sendTo($address))
        ->then
            ->…
}

It is easy to read, fast to execute and helps to focus on the logic of the test instead of on the test data (also known as fixtures). Note that most of the time the regular expressions are already in the code (maybe as constants). It is therefore easier to write and to maintain the tests. I hope you enjoyed this first part of the series :-)! This work has been published in the International Conference on Software Testing, Verification and Validation: Grammar-Based Testing using Realistic Domains in PHP.
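The three visitor cases described in the article (concatenation, alternation, quantification) translate directly to other languages. Here is a hedged Python sketch of isotropic sampling over a hand-built AST for ab(c|d){2,4}e? — the node shapes are my own simplification, not Hoa's:

```python
import random

def sample(node, rng=random):
    """Isotropically sample a word from a tiny regex AST."""
    kind = node[0]
    if kind == 'lit':        # token: emit the literal as-is
        return node[1]
    if kind == 'cat':        # concatenation: join samples of all children
        return ''.join(sample(child, rng) for child in node[1:])
    if kind == 'alt':        # alternation: choose one child uniformly
        return sample(rng.choice(node[1:]), rng)
    if kind == 'rep':        # quantification: repeat the child x..y times
        _, child, x, y = node
        return ''.join(sample(child, rng) for _ in range(rng.randint(x, y)))
    raise ValueError('unknown node kind: %r' % kind)

# Hand-built AST for ab(c|d){2,4}e?, mirroring the dump shown earlier.
ast = ('cat',
       ('lit', 'a'),
       ('lit', 'b'),
       ('rep', ('alt', ('lit', 'c'), ('lit', 'd')), 2, 4),
       ('rep', ('lit', 'e'), 0, 1))

word = sample(ast)
```

Each call resolves every alternation and quantification uniformly among its immediate children, which is exactly the isotropic rule $P(C) = 1/c$ from the article.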
I'd like to expand my earlier comment into a little essay on the severe practical difficulties in performing the suggested experiment. I'm going to start by asserting that we don't care if the experiment is a "two-slit" per se. It is sufficient that it is a diffractive scattering experiment of some kind. However, we do care about having

- spatial resolution good enough to distinguish which scattering site (or slit) was the one on the path of the alleged particle, and
- the ability to run the experiment at low rate so that we can exclude multi-projectile or beam/beam interactions as the source of any interference that we observe (though it's going to turn out that we never even get far enough for this to matter...).

Now let's get down to designing the beast. To start with, we should note for any casual readers that the diagrams you see in pop-sci treatments are not even remotely to scale: a typical classroom demonstration kit for use with lasers has the slits set less than $1\,\mathrm{mm}$ apart and uses projection distances of several meters or more to get fringes that are separated by a few centimeters, or else uses much closer-set slits to get large angles. The angular separation between maxima is on the order of$$ \Delta \theta = \frac{\lambda}{d} \,,$$where $\lambda$ is the relevant wavelength and $d$ is the scattering-site (or slit) separation. Allowing that the distance from the scattering surface to the projection surface is $\ell$, the spatial separation is (in the small-angle approximation)$$ \Delta x = \ell \, \Delta \theta = \frac{\ell}{d} \lambda \,.$$ Anna has suggested doing the experiment with electrons, which means that we're interested in the de Broglie wavelength, here written $\lambda = \hbar/p$, and measuring their position en route with a tracking detector of some kind. The tracking detector's spatial resolution is going to be the big barrier here. Let's start by considering a liquid-argon TPC, because it is a hot technology just now. 
Spatial resolution down to about $1 \,\mathrm{mm}$ should be achievable without any breakthrough in technology (typical devices have $3$-$5\,\mathrm{mm}$ resolution). That sets our value for $d$. Now, to observe an interference pattern, we need a detector resolution at least four times finer than the spatial resolution. Assume for the sake of argument that I use a detector with a $20 \,\mathrm{\mu{}m}$ spatial resolution. Maybe an MCP or a silicon tracker. That sets $\Delta x = 4(20 \,\mathrm{\mu{}m})$. I also assume that I need $\ell$ to be at least $2d$ to be able to track the particle between the scattering and projection planes. Probably an under-estimate, so be it. Now I can compute the properties of the necessary electron source$$\begin{align*}p &= \frac{\hbar}{\lambda} \\&= \frac{\hbar\ell}{d \, \Delta x} \tag{1}\\&= 2\frac{\hbar}{\Delta x}\\&= \frac{7 \times 10^{-22} \,\mathrm{MeV \, s}}{40 \times 10^{-6} \,\mathrm{m}}\\&= \frac{7 \times 10^{-22} \,\mathrm{MeV}}{7 \times 10^{-12} c} \\&= 10^{-10} \,\mathrm{MeV/c}\\&= 10^{-4} \,\mathrm{eV/c} \,,\end{align*}$$which is safely non-relativistic, so we have a beam energy of $5 \times 10^{-9}\,\mathrm{eV^2}/(m_e c^2)$, and the tracking medium will completely mess up the experiment. By choosing a $20\,\mathrm{m}$ flight path between scattering and detection and getting down to, say, the $10\,\mathrm{\mu{}m}$ scale for $d$, we can get beam momenta up to $10^3\,\mathrm{eV}$, which at least gives us beam energies of about $1\,\mathrm{eV}$. But how are you going to track a $1\,\mathrm{eV}$ electron without scattering it? I'm sure you can get better spatial resolution in silicon, but I don't think you can get the beam energy up high enough to pass a great enough distance through the tracking medium to actually make the measurement. 
The fundamental problem here is the tension between the desire to track the electron on its route, which forces you to use nearly human scales for parts of the detector, and the presence of that pesky $\hbar$ in the numerator of equation (1), which is driving the necessary beam momentum down. The usual method of getting diffractive effects is just to make $d$ small and $\ell$ large enough to compensate for the $\hbar$, but our desire to track the particles works against us there, both by putting a floor on our attempts to shrink $d$ and because longer flight paths mean more sensitivity to scattering by the tracking medium.
The Annals of Probability Ann. Probab. Volume 30, Number 4 (2002), 1539-1575. Characterization of stationary measures for one-dimensional exclusion processes Abstract The product Bernoulli measures $\nu_\alpha$ with densities $\alpha$, $\alpha\in [0,1]$, are the extremal translation invariant stationary measures for an exclusion process on $\mathbb{Z}$ with irreducible random walk kernel $p(\cdot)$. Stationary measures that are not translation invariant are known to exist for finite range $p(\cdot)$ with positive mean. These measures have particle densities that tend to 1 as $x\to\infty$ and tend to 0 as $x\to -\infty$; the corresponding extremal measures form a one-parameter family and are translates of one another. Here, we show that for an exclusion process where $p(\cdot)$ is irreducible and has positive mean, there are no other extremal stationary measures. When $\sum_{x<0} x^2 p(x) =\infty$, we show that any nontranslation invariant stationary measure is not a blocking measure; that is, there are always either an infinite number of particles to the left of any site or an infinite number of empty sites to the right of the site. This contrasts with the case where $p(\cdot)$ has finite range and the above stationary measures are all blocking measures. We also present two results on the existence of blocking measures when $p(\cdot)$ has positive mean, and $p(y)\leq p(x)$ and $p(-y)\leq p(-x)$ for $1\leq x\leq y$. When the left tail of $p(\cdot)$ has slightly more than a third moment, stationary blocking measures exist. When $p(-x)\leq p(x)$ for $x>0$ and $\sum_{x<0}x^2p(x)<\infty$, stationary blocking measures also exist. Article information Source Ann. Probab., Volume 30, Number 4 (2002), 1539-1575. 
Dates First available in Project Euclid: 10 December 2002 Permanent link to this document https://projecteuclid.org/euclid.aop/1039548366 Digital Object Identifier doi:10.1214/aop/1039548366 Mathematical Reviews number (MathSciNet) MR1944000 Zentralblatt MATH identifier 1039.60086 Citation Bramson, Maury; Liggett, Thomas M.; Mountford, Thomas. Characterization of stationary measures for one-dimensional exclusion processes. Ann. Probab. 30 (2002), no. 4, 1539--1575. doi:10.1214/aop/1039548366. https://projecteuclid.org/euclid.aop/1039548366
Suppose we have the following count data:

| X     | Freq |
|-------|------|
| 0     | 18   |
| 1     | 32   |
| 2     | 17   |
| 3     | 10   |
| 4     | 5    |
| 5     | 2    |
| TOTAL | 84   |

We want to construct a likelihood ratio test to see if a Poisson distribution is suitable to describe the data. The likelihood and log-likelihood equations for a Poisson distribution are: $$ L(\lambda) = \prod_{y=1}^{84} \frac{e^{-\lambda}\lambda^y}{y!} $$ $$ l(\lambda) = \sum_{y=1}^{84}\bigg(-\lambda + y\log(\lambda) - \log(y!) \bigg)$$ We can solve for the MLE $\hat{\lambda}$ as follows: $$ \frac{dl(\lambda)}{d\lambda} = \sum_{y=1}^{84}\bigg(-1 + \frac{y}{\lambda}\bigg) = 0 \rightarrow \hat{\lambda} = \frac{\sum_{y=1}^{84}y}{84} = \frac{\sum_{i=0}^{5}i \, f_i}{84} = \frac{126}{84} = 1.5 $$ The alternative hypothesis is that the data follow a multinomial distribution. The likelihood and log-likelihood equations for the multinomial distribution are: $$ L(p_0,p_1,p_2,p_3,p_4,p_5) = {n\choose{f_0,f_1,f_2,f_3,f_4,f_5}} p_0^{f_0} p_1^{f_1} p_2^{f_2} p_3^{f_3} p_4^{f_4} p_5^{f_5} $$ $$ l(p_0,p_1,p_2,p_3,p_4,p_5) = \log{n\choose{f_0,f_1,f_2,f_3,f_4,f_5}} + \sum_{i=0}^{5} f_i \log(p_i) $$ Using the method of Lagrange multipliers (with constraint $\sum_{i=0}^{5} p_i = 1$), it can be shown that the MLEs $\hat{p_i}$ are equal to: $$ \hat{p_i} = \frac{f_i}{n} $$ I know that the likelihood ratio statistic is defined as $ D = -2(l_0 - l_A) \sim \chi_{(6-1)-1}^2 = \chi_4^2$, where $l_A = l(\hat{p}_0,\hat{p}_1,\hat{p}_2,\hat{p}_3,\hat{p}_4,\hat{p}_5)$, i.e. the log-likelihood of the multinomial distribution. However, I am not sure what $l_0$ should be. Do I assume that the distribution under the null hypothesis is multinomial with cell probabilities $p_i$ given by the Poisson pmf (Case 1), or do I assume that the null hypothesis follows the Poisson distribution directly (Case 2)? I've tried both and I do not get the same answer. 
Case 1: $ l_0 = \log{n\choose{f_0,f_1,f_2,f_3,f_4,f_5}} + \sum_{i=0}^{5} f_i \log\bigg(\frac{e^{-\hat{\lambda}}\hat{\lambda}^i}{i!}\bigg) $

Case 2: $ l_0 = \sum_{y=1}^{84}\bigg(-\hat{\lambda} + y\log(\hat{\lambda}) - \log(y!) \bigg) $
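A small numerical sketch of Case 1 with the data above (my own code, not from the question; the multinomial coefficient is common to $l_0$ and $l_A$, so it cancels in $D$ and is dropped from both):

```python
from math import exp, factorial, log

freq = {0: 18, 1: 32, 2: 17, 3: 10, 4: 5, 5: 2}
n = sum(freq.values())                          # 84
lam = sum(x * f for x, f in freq.items()) / n   # MLE: 126/84 = 1.5

def pois(x, lam):
    """Poisson pmf at x."""
    return exp(-lam) * lam ** x / factorial(x)

# Case 1: null = multinomial with Poisson cell probabilities.
# (The log multinomial coefficient cancels between l0 and lA.)
l0 = sum(f * log(pois(x, lam)) for x, f in freq.items())
lA = sum(f * log(f / n) for x, f in freq.items())

D = -2 * (l0 - lA)   # likelihood ratio statistic, compare to chi^2_4
```

Since the saturated multinomial MLEs $\hat{p}_i = f_i/n$ maximize the multinomial likelihood, $l_A \geq l_0$ and $D \geq 0$; Case 2 adds the Poisson mass above $X=5$ and the dropped coefficient, which is why the two cases give different numbers.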
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$ A Modeling Introduction to Deep Learning In this post, I’d like to introduce you to some basic concepts of deep learning (DL) from a modeling perspective. 
I’ve tended to stay away from “intro” style blog posts because:

- There are so, so many of them.
- They’re hard to keep in focus.

That said, I was presenting on BERT for a discussion group at work. This was our first DL paper, so I needed to warm-start a technical audience with a no-frills intro to modeling with deep nets. So here we are, trying to focus what this post will be:

- It will presume a technically sophisticated reader.
- No machine learning (ML) background is assumed.
- The main goal is to set the stage for future discussion about BERT.

Basically, this is me typing up those notes. Note that the above leaves questions about optimization and generalization squarely out of scope.

The Parametric Model

Deep learning is a tool for the generic task of parametric modeling. Parametric modeling (PM) is a term I am generously applying from statistical estimation theory that encapsulates a broad variety of ML buzzwords, including supervised, unsupervised, reinforcement, and transfer learning. In the most general sense, a parametric model \(M\) accepts some vector of parameters \(\theta\) and describes some structure in a random process. Goodness, what does that mean? Structure in a random process is everything that differentiates it from noise. But what’s “noise”? When we fix the model \(M\), we’re basically saying there are only some classes of structure we’re going to represent, and everything else is what we consider noise. The goal is to pick a “good” model and find parameters for it.

A Simple Example

For instance, let’s take a simple random process: iid draws from the normal distribution \(z\sim \mathcal{D}= N(\mu, \sigma^2)\) with an unknown mean \(\mu\) and variance \(\sigma^2\). We’re going to try to capture the richest possible structure over \(z\), its actual distribution. One model might be the unit normal, \(M(\theta)=N(\theta, 1)\).
Then our setup, and potential sources of error, look like this: What I call parametric and model mismatch are also known as estimation and approximation error (Bottou and Bousquet 2007). Here, we have one of the most straightforward instances of PM, parameter estimation (we’re trying to estimate \(\mu\)).

Revisiting our definitions

What constitutes a “good” model? Above, we probably want to call models with \(\theta\) near \(\mu\) good ones. But in other cases, it’s not so obvious what makes a good model. One of the challenges in modeling in general is articulating what we want. This is done through a loss function \(\ell\), where we want models with small losses. In other words, we’d like to find a model \(M\) and related parameters \(\theta\) where \[ \E_{z\sim \mathcal{D}}\ha{\ell(z, M(\theta))} \] is as small as possible (here, for our iid process). Note that in some cases, this doesn’t have to be the same as the loss function used during optimization to find \(\theta\), but that’s another discussion (there are several reasons for the two to differ).

Another Example

Now let’s jump into another modeling task, supervised learning. Here:

- Our iid random process \(\mathcal{D}\) will be generating pairs \(\pa{\text{some image}, \text{“cat” or “dog”}}\).
- The structure we want to capture is that all images of dogs happen to be paired with the label \(\text{“dog”}\), and analogously so for cats.
- We’ll gloss over what our model is for now.

A loss that captures what we want for our desired structure would be the zero-one loss, which is \(1\) when we’re wrong and \(0\) when we’re right. Let’s fix some model and parameters, which take an image and label it as a cat or dog (so \(M(\theta)\) is a function itself), and then let’s see how it does on our loss function.

OK, so why Deep Learning?

This post was intentionally structured in a way that takes the attention away from DL.
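To make the parameter-estimation setup concrete, here is a minimal sketch (the numbers and the helper name `empirical_loss` are made up for illustration): we draw samples from the true process and pick the \(\theta\) minimizing the empirical squared loss, which for the unit-normal model is just the sample mean.

```python
import numpy as np

# True process: z ~ N(mu, sigma^2); model family: M(theta) = N(theta, 1).
rng = np.random.default_rng(0)
mu, sigma = 3.0, 1.0
z = rng.normal(mu, sigma, size=10_000)

def empirical_loss(theta):
    # Sample average of ell(z, M(theta)) = (z - theta)^2.
    return np.mean((z - theta) ** 2)

# The empirical minimizer of squared loss is the sample mean,
# which estimates mu.
theta_hat = z.mean()
```

Since the empirical loss is \((\text{sample variance}) + (\bar z - \theta)^2\), any \(\theta\) other than the sample mean does strictly worse, which is the "find parameters with small loss" story in miniature.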
DL is a means to achieve the above PM goals. It's a means to an end, and being able to reason about higher-level modeling concerns is crucial to understanding the tool. So, DL is an approach to building models \(M\), and it studies how to find good parameters \(\theta\) for those models.

Deep Learning Models

A DL model is anything that vaguely resembles the following: many parameterized functions composed together to create one function. A function is usually good enough to capture most structure we're interested in within random processes, given sufficiently sophisticated inputs and outputs. The inputs and outputs of this function can be (not exhaustively):

- fixed-width multidimensional arrays (casually known as tensors, sort of)
- embeddings (numerical translations) of categories (like all the words in the English dictionary)
- variable-width tensors

The parameters this function takes (which differ from its inputs and affect what the function looks like) are fixed-width tensors. I haven't seen variable-width parameters in DL models, except in some Bayesian interpretations (Hinton 1993).

The Multi-Layer Perceptron

Our prototypical example of a neural network is the Multi-Layer Perceptron, or MLP, which takes a numerical vector input to a numerical vector output. For a parameter vector \(\theta=\mat{\theta_1& \theta_2&\cdots&\theta_L}\), which contains parameters for our \(L\) layers, an MLP looks like: \[ M(\theta)= x\mapsto f_{\theta_L}^{(L)}\circ f_{\theta_{L-1}}^{(L-1)}\circ\cdots\circ f_{\theta_1}^{(1)}(x)\,, \] and we define each layer as \[ f_{\theta_i}^{(i)}(x)=\max(0, W_ix+b_i)\,. \] The parameters \(W_i, b_i\) are set by the contents of \(\theta_i\). This is the functional form of linear transforms followed by nonlinearities. It describes what's going on in this image:

Why DL?
While it might be believable that functions in general make for great models that could capture structure in a lot of phenomena, why have these particular parameterizations of functions taken off recently? This is basically the only part of this post that has to do with DL specifically, and most of it's out of scope. In my opinion, it boils down to three things. Deep learning simultaneously:

- is flexible in terms of how many functions it can represent for a fixed parameter size;
- lets us find so-called low-loss estimates of \(\theta\) fairly quickly;
- has working regularization strategies.

Flexibility

The MLP format above might seem strange, but this linearity-followed-by-nonlinearity happens to be particularly expressive in terms of the number of different functions we can represent with a small set of parameters. The fact that a sufficiently wide neural network can well-approximate smooth functions is well known (the Universal Approximation Theorem), but what's of particular interest is how linear increases in the depth of a network exponentially increase its expressiveness (Montúfar et al. 2014). An image from the cited work demonstrates how composition with nonlinearities increases expressiveness. Here, with an absolute-value nonlinearity, we can reflect the input space onto itself through composition. This means we double the number of linear regions in our neural net by adding a layer.

Efficiency

One of the papers that kicked off the DL craze was AlexNet (Krizhevsky 2012), and one of the reasons for its success was that we could efficiently compute the value of a neural network \(M(\theta)\) on a particular image \(x\) using specialized hardware.
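The forward computation being discussed is genuinely simple. As a minimal runnable sketch (the layer sizes and the helper name `mlp_forward` are made up for illustration), the MLP from the previous section is a few lines of NumPy:

```python
import numpy as np

def mlp_forward(x, params):
    """Apply f^(L) o ... o f^(1), where each layer is f(x) = max(0, Wx + b)."""
    for W, b in params:
        x = np.maximum(0.0, W @ x + b)
    return x

# Random parameters for a 4 -> 8 -> 8 -> 2 MLP (dimensions are arbitrary).
rng = np.random.default_rng(0)
dims = [4, 8, 8, 2]
params = [(rng.standard_normal((dout, din)), rng.standard_normal(dout))
          for din, dout in zip(dims[:-1], dims[1:])]

y = mlp_forward(rng.standard_normal(4), params)
```

Note the cost is one matrix-vector product per layer, which is exactly why this maps so well onto specialized hardware.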
Not only does the simple composition of simple functions enable fast forward computation of the model value \(M(\theta)(x)\), but because the operations can be expressed as a directed acyclic graph of almost-everywhere-differentiable functions, one can quickly compute reverse-mode automatic derivatives \(\partial_\theta M(\theta)(x)\) in just about the same amount of time. This is a very happy coincidence. We can compute the functional value of a neural net and its derivative in time linear in the parameter size, and we have a lot of parameters. Here, efficiency matters a lot for the inner loop of the optimization (which uses derivatives with SGD) to find "good" parameters \(\theta\). This efficiency, in turn, enabled a lot of successful research.

Generalization

Finally, neural networks generalize well. This means that, given a training set of examples, they are somehow able to have low loss on unseen examples coming from the same random process, just by training on a (possibly altered, or regularized) loss over the given examples. This is particularly counterintuitive for nets due to their expressivity, which is typically at odds with generalization in traditional ML analyses. Various explanations have been proposed, but none of them are completely satisfying yet.

Next time

We'll review the Transformer, and what it does. That'll set us up for some BERT discussion.
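As a coda to the efficiency discussion above, here is a tiny hand-worked sketch of reverse-mode differentiation for a single ReLU layer (all values are chosen arbitrarily for illustration): the analytic gradient simply gates the incoming gradient by where the pre-activation is positive, and we can check it against central finite differences.

```python
import numpy as np

def layer(W, b, x):
    # One MLP layer: max(0, Wx + b).
    return np.maximum(0.0, W @ x + b)

def grad_b(W, b, x, g):
    # Reverse-mode gradient of L = g . layer(W, b, x) with respect to b:
    # the ReLU gate passes g through where the pre-activation is positive.
    pre = W @ x + b
    return g * (pre > 0)

W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.5, -0.3])          # pre-activations: [0.5, -0.3, 0.2]
b = np.zeros(3)
g = np.array([1.0, 2.0, 3.0])

analytic = grad_b(W, b, x, g)

# Central finite differences; safe here since no pre-activation is near 0,
# so we never straddle the ReLU kink.
eps = 1e-6
numeric = np.array([
    (g @ layer(W, b + eps * np.eye(3)[i], x)
     - g @ layer(W, b - eps * np.eye(3)[i], x)) / (2 * eps)
    for i in range(3)
])
# analytic == numeric == [1., 0., 3.]
```

The reverse pass reuses the forward pass's intermediate values, which is the "about the same amount of time" claim in miniature.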
@RussellBorogove's answer mentions temperatures of roughly 3700 K and 1800 K in the combustion chamber and exhaust of a big rocket engine. The canonical Wikipedia plot of velocity and temperature for a de Laval nozzle is shown below as a schematic representation only. Ignoring some aspects of gas theory, we can estimate the thermal velocity of a molecule using $$v_T = \sqrt{\frac{k_\mathrm B T}{m}}.$$ As a test, using air or nitrogen ($m=28 \times 1.673\times10^{-27}\ \mathrm{kg}$) at $293\ \mathrm K$ with Boltzmann constant $k_\mathrm B = 1.381\times 10^{-23}\ \mathrm{J/K}$ gives $297\ \mathrm{m/s}$, which agrees with the speed of sound (a good rough indicator of average thermal velocity). Thermal velocities (in m/s):

species mass (kg) 293 K 1800 K 3700 K ------- --------- ------ ------ ------ H2 3.346E-27 1100 2700 3900 CO2 7.361E-26 230 580 830

The thermal velocity will be isotropic (the same in all directions), but the exhaust velocity is directed mostly in one direction. The Maxwell-Boltzmann distribution projected onto one dimension is given by $$f(v)\,\mathrm dv = \left( \frac{m}{2 \pi k_\mathrm B T}\right)^{1/2} \exp\left(-\frac{mv^2}{2 k_\mathrm B T}\right)\,\mathrm dv$$ The directed exhaust velocity might be close to zero in the combustion chamber, and roughly $3600\ \mathrm{m/s}$ in the nozzle. Assuming that the engine burns methane and a little bit of H2 is formed in the exhaust in order to fulfill the terms of the question, this plot shows the resulting estimated velocity distributions. For each species, the wide curve centered at zero represents the condition in the combustion chamber, and the narrower curve offset to the right represents the axial velocity exiting the nozzle. The transverse velocity distribution would look similar, except that the offset for the exhaust curve would be closer to zero and depend on distance from the axis and details of under/over expansion.
That's a more complicated calculation with too many special cases to address at this level of approximation.

import numpy as np
import matplotlib.pyplot as plt

twopi = 2 * np.pi
kB = 1.381E-23   # Boltzmann constant (J/K)
mp = 1.673E-27   # proton mass (kg)

def f(v, v0, m, T):
    """1D Maxwell-Boltzmann distribution shifted by bulk velocity v0."""
    term_1 = np.sqrt(m / (twopi * kB * T))
    term_2 = (m * (v - v0)**2) / (2 * kB * T)
    return term_1 * np.exp(-term_2)

temps = np.array([3700, 1800])   # chamber, exhaust (K)
v0s = np.array([0, 3600])        # bulk (directed) velocities (m/s)
m_H2, m_CO2 = mp * np.array([2, 44])

v = np.linspace(-15000, 15000, 301)
f_H2 = [f(v, v0, m_H2, T) for (T, v0) in zip(temps, v0s)]
f_CO2 = [f(v, v0, m_CO2, T) for (T, v0) in zip(temps, v0s)]

fig = plt.figure()
for i, (curves, name) in enumerate(zip((f_H2, f_CO2), ('H2', 'CO2'))):
    ax = fig.add_subplot(2, 1, i + 1)
    for curve in curves:
        ax.plot(v, curve)
    ax.set_title(name, fontsize=18)
    ax.get_yaxis().set_visible(False)  # no labels or ticks
    ax.set_xlabel('velocity (m/s)', fontsize=16)
plt.show()

Source
I want to perform some interval operations, and for addition, subtraction, and logic/shift operators, that works very well. The only problem I have is multiplication. An interval $[a, b]$ denotes all two's complement numbers $x$ with the property $a \leq x \leq b$. An interval operation means that if I have a binary operation $\circ$ and two intervals $[a, b]$ and $[c, d]$, then $[a, b] \circ [c, d] = [e, f]$ means that for arbitrary $x \in [a, b]$ and $y \in [c, d]$: $$x \circ y \in [e, f].$$ Additionally, I want the most precise, or at least a very precise, interval. "Most precise" means that there are values $w,x \in [a, b]$ and $y,z \in [c, d]$ for which $w \circ y = e$ and $x \circ z = f$. An example of an interval operation: $A = [7,14]$, $B = [-6, 77]$, $A + B = [1, 91]$. It's correct, because no value outside of $[1, 91]$ can be reached when adding numbers from $A$ and $B$. It's also precise, because $7+(-6) = 1$ and $14+77 = 91$. It seems impossible to find an efficient algorithm that handles all the overflows correctly and finds the precise (or at least a good) interval. Is there a good algorithm?
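For reference, here is a sketch of the standard techniques this question starts from, not an answer to the wraparound-precise problem: when overflow is ignored, the precise product interval is spanned by the four corner products; a common conservative fallback for fixed-width two's complement (32-bit assumed below, and the function names are made up) widens to the full range whenever a true product could leave the representable range.

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1  # assuming 32-bit two's complement

def interval_mul(a, b, c, d):
    """Precise [e, f] for unbounded integers: since x*y is monotone in each
    argument once signs are fixed, the extremes occur at interval endpoints."""
    prods = (a * c, a * d, b * c, b * d)
    return min(prods), max(prods)

def interval_mul_i32(a, b, c, d):
    """Conservative 32-bit version: if any true product can leave the
    representable range, wraparound could land anywhere, so widen to top."""
    lo, hi = interval_mul(a, b, c, d)
    if lo < INT_MIN or hi > INT_MAX:
        return INT_MIN, INT_MAX
    return lo, hi

# Example: [7, 14] * [-6, 77] -> (-84, 1078), hit by 14*(-6) and 14*77.
print(interval_mul(7, 14, -6, 77))
```

The conservative widening is exactly the imprecision the question complains about; a precise wraparound-aware rule would have to track which residues modulo 2^32 are reachable, which is where the difficulty lies.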
The Planck function that describes blackbody emission is a function of temperature and wavelength:$$B_\lambda(T)=\frac{2hc^2}{\lambda^5}\cdot\frac1{e^{hc/\lambda k_BT}-1}$$Due to the temperature dependence, blackbodies at different temperatures have different emissions. The graph below, from Wikipedia, shows the drastic changes for blackbodies at temperatures of 3000 K, 4000 K, and 5000 K (also shown is the Rayleigh-Jeans regime, where $B_\lambda(T)\propto\lambda^{-4}$, which blows up to infinity at low wavelengths). The spectral class of the sun is G2V. The G2 signifies that the surface temperature of the sun is about 5800 K, not the 5250 C (about 5520 K) in your diagram. Thus, the emissions observed from the sun should be larger than those of the modeled blackbody you show. Plotted below is the Planck function for a 5520 K emitter, a 5777 K emitter, and a 5800 K emitter. Both the 5777 K and 5800 K blackbodies have a peak that is about 30% larger than that of the 5520 K blackbody (left axis is W/sr/m$^3$, bottom axis is $\mu$m). From Jim in the comments:

Don't forget temperature differences across the surface, light from hotter depths that eventually finds its way out, other light-producing phenomena (spectral emission from excited electrons recombining with atoms then ionizing again, photons created in scattering, decay, annihilation, etc. processes), etc. A star is a complex system with a lot happening all the time. It's safe to assume that blackbody radiation (while the primary source) isn't the only source of radiation.

It appears that your image fits the curve to the larger wavelengths rather than to the peak; this is where your confusion lies. If you fit the peaks, then surely $\varepsilon\leq1$ is satisfied (so long as you note the above comment from Jim, that the sun really isn't a pure blackbody).
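The "about 30% larger peak" claim can be checked directly from the Planck function above; by Wien's law the peak value of $B_\lambda$ scales as $T^5$, so the expected ratio is $(5777/5520)^5 \approx 1.26$. A minimal sketch (constants rounded to four digits, grid chosen for illustration):

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI units

def planck(lam, T):
    """Spectral radiance B_lambda(T) in W/sr/m^3; expm1 avoids cancellation."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(0.1e-6, 3e-6, 20000)  # 0.1 to 3 microns

def peak(T):
    return planck(lam, T).max()

ratio = peak(5777) / peak(5520)  # roughly (5777/5520)^5, about 1.26
```

This lands a bit under 30%, consistent with the rougher "about 30%" figure quoted for the plotted curves.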
The table below shows the ISO A series paper sizes (in mm) from A0 through A10. The most common one is A4, of which the exact size is 210 × 297 mm. How is that size determined?

A0 841 × 1189 A1 594 × 841 A2 420 × 594 A3 297 × 420 A4 210 × 297 A5 148 × 210 A6 105 × 148 A7 74 × 105 A8 52 × 74 A9 37 × 52 A10 26 × 37

Amazingly, you can exactly compute all these sizes from the following two simple rules. The area of a sheet of A0 is 1 m², with the actual paper size rounded to the nearest mm. When you fold a page in two, the size of the folded page has to be equal to the next smaller size (for example, a folded A3 must be the size of an A4), with the actual paper size rounded down to the nearest mm.

Applying the Rules

Rule 1 is simple: you have to start from some basic size, and 1 m² is a nicely round number. Rule 2 exactly determines the aspect ratio of a sheet, since there is only one aspect ratio for which it holds. Why is that so? If you fold a sheet of size \(w\times h\), you get a sheet of size \(h/2\times w\). If you want the aspect ratio of both sheets to be the same, you need \[\frac{w}{h}=\frac{h/2}{w},\] which means that \(h/w=\sqrt{2}\). Hence, the ratio between the width and the height of a page has to be \(1/\sqrt{2}\). Rule 2 allows for easy scaling. For example, a copier can shrink an A3 page onto A4 paper without fiddling with margins. With the area of a sheet of A0 and its aspect ratio known, we can now compute its exact size. We have that \(hw=10^6\,\mathrm{mm^2}\) and \(h=\sqrt{2}w\). It follows that \(\sqrt{2}w^2=10^6\), from which we get \(w=841\,\mathrm{mm}\) (after rounding to the nearest mm). From \(hw=10^6\,\mathrm{mm^2}\) and \(w=h/\sqrt{2}\), it follows that \(h=1189\,\mathrm{mm}\) (again after rounding to the nearest mm). This makes the size of a sheet of A0 841 × 1189 mm. The dimensions of the smaller sizes are then computed one after the other by dividing the largest dimension by two (and rounding down).
For A1, we get \(\left\lfloor{1189/2}\right\rfloor=594\), so A1 is 594 × 841 mm. Continuing in this way, we arrive at 210 × 297 mm for A4. The following Python script computes sizes A0 through A10.

from __future__ import print_function
from __future__ import division

width = int(1000 / 2 ** (1 / 4) + 0.5)
height = int(1000 * 2 ** (1 / 4) + 0.5)
for i in range(11):
    print('A' + str(i), '=', width, 'x', height, 'mm')
    width, height = height, width
    width //= 2

Some Other Interesting Tidbits

The weight of paper is typically expressed in g/m², which immediately provides us with the weight of a sheet of A0. For a typical weight of 80 g/m², we get exactly 5 g for a sheet of A4, since its area is that of A0 halved four times, so divided by 16. When following the rules through for even smaller sizes than A10, we arrive at the smallest possible size being A19. Curiously enough, A19 is actually a square of 1 × 1 mm. I hope nobody is crazy enough to actually produce A19 paper. The next smaller size, A20, would then be one-dimensional (0 × 1 mm), which is not physically realizable, and all following smaller sizes are 0 × 0 mm. But I'll stop there, since we have clearly descended into the realm of silliness.
Newspace parameters Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \) Weight: \( k \) = \( 1 \) Character orbit: \([\chi]\) = 3600.j (of order \(2\) and degree \(1\)) Newform invariants Self dual: No Analytic conductor: \(1.79663404548\) Analytic rank: \(0\) Dimension: \(2\) Coefficient field: \(\Q(i)\) Coefficient ring: \(\Z[a_1, \ldots, a_{13}]\) Coefficient ring index: \( 2 \) Projective image \(D_{2}\) Projective field Galois closure of \(\Q(\zeta_{12})\) Artin image size \(16\) Artin image $D_4:C_2$ Artin field Galois closure of 8.0.2916000000.5 The \(q\)-expansion and trace form are shown below. Character Values We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/3600\mathbb{Z}\right)^\times\). \(n\) \(577\) \(901\) \(2801\) \(3151\) \(\chi(n)\) \(-1\) \(1\) \(1\) \(-1\) For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. Label \(\iota_m(\nu)\) \( a_{2} \) \( a_{3} \) \( a_{4} \) \( a_{5} \) \( a_{6} \) \( a_{7} \) \( a_{8} \) \( a_{9} \) \( a_{10} \) 1999.1 0 0 0 0 0 0 0 0 0 1999.2 0 0 0 0 0 0 0 0 0 Char. orbit Parity Mult. Self Twist Proved 1.a Even 1 trivial yes 3.b Odd 1 CM by \(\Q(\sqrt{-3}) \) yes 4.b Odd 1 CM by \(\Q(\sqrt{-1}) \) yes 12.b Even 1 RM by \(\Q(\sqrt{3}) \) yes 5.b Even 1 yes 15.d Odd 1 yes 20.d Odd 1 yes 60.h Even 1 yes This newform can be constructed as the kernel of the linear operator \(T_{7} \) acting on \(S_{1}^{\mathrm{new}}(3600, [\chi])\).