I'm a physicist with very little knowledge of chemistry. Recently a chem colleague left a molecular model in my classroom, so I brought it back to her, and as I was walking down the hall with it, a hydroxy group started spontaneously swinging around in circles. This made me curious about whether such motions could actually be observed. After asking around and doing some web searching, my understanding is that this sort of thing is called an internal rotation, and usually there is a barrier to rotation, which is modeled by a potential $V(\phi)=(1/2)\sum V_n(1-\cos n\phi)$.
If I'm understanding the history correctly, then originally people thought that the spectrum would be that of a free rotor, but that produced incorrect thermodynamic predictions. Then in 1937 Pitzer published a famous paper, "Thermodynamics of Gaseous Hydrocarbons," in which he showed that you could get correct thermodynamic predictions if you put in a barrier to rotation.
The classic textbook example seems to be ethane. I found this explanation of the physics (di Lauro, "Floppy Molecules," in Rotational Structure in Molecular Infrared Spectra, 2013), which I only partly understand. He gives the following theoretical spectrum of energy states:
So at low energies we have roughly a set of vibrational energy levels with energies $\hbar\omega (\nu+1/2)$, which makes sense to me because I guess we're sitting in a minimum of the potential where the geometry is staggered. He says, "The high levels, on the contrary, correlate and eventually almost coincide with the free internal rotor eigenfunctions, whose energy increases parabolically as $AK_i^2$, where $K_i$ is the eigenvalue of $J_\gamma$, in $\hbar$ units." This also sort of makes sense to me, because if you put in a lot of energy, you shouldn't see the effect of the little hills and valleys in the potential $V(\phi)$, so it should act more like a free rotor.
Do the units of inverse cm mean $1/\lambda$, or $2\pi/\lambda$? If the former, then $1/\lambda= 200\ \text{cm}^{-1}$ corresponds to $\hbar\omega_0/k\approx 300$ kelvin, so it makes sense that the barrier to rotation would have significant thermodynamic effects at temperatures on the order of room temperature or below, since the rotational-vibrational modes are not available. Am I getting this right?
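A quick stdlib check of that conversion (my own arithmetic, using CODATA constant values): a spectroscopic wavenumber $\tilde\nu = 1/\lambda$ corresponds to a temperature $T = hc\tilde\nu/k_B$, and $200\ \text{cm}^{-1}$ indeed comes out near room temperature.

```python
# Convert a wavenumber (cm^-1) to an equivalent temperature T = h*c*nu/k_B.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e10     # speed of light in cm/s, to match cm^-1 units
k_B = 1.380649e-23    # Boltzmann constant, J/K

def wavenumber_to_kelvin(nu_cm):
    return h * c * nu_cm / k_B

T = wavenumber_to_kelvin(200.0)
print(round(T, 1))  # ≈ 288 K, i.e. on the order of room temperature
```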
But why is it necessary to place this fictitious rotational band on top of the $\nu=1$ state, rather than $\nu=0$? I would think that the fictitious band should be built from the ground state; it wouldn't match the actual energies at low energies, and then at high energies they would match up.
Clojure Linear Algebra Refresher (3) - Matrix Transformations
June 13, 2017
A Clojure programmer will immediately feel at home with linear transformations - functions are also transformations!
Linear transformations preserve the mathematical structure of a vector space. Just like functions define transformations, matrices define a kind of linear transformation - matrix transformations. I often spot programmers using matrices and vectors as dumb data structures, writing their own loops, or accessing elements one by one in a haphazard fashion. Matrices are useful data structures, but using them as transformations is what really gives them power. This is something that is very well understood in computer graphics, but is often neglected in other areas.
Before I continue, a few reminders:
These articles are not stand-alone. You should follow along with a linear algebra textbook. I recommend Linear Algebra With Applications, Alternate Edition by Gareth Williams (see more in part 1 of this series). The intention is to connect the dots from a math textbook to Clojure code, rather than explain math theory or teach you basics of Clojure. Please read Clojure Linear Algebra Refresher (1) - Vector Spaces, and, optionally, Clojure Linear Algebra Refresher (2) - Eigenvalues and Eigenvectors first. Include Neanderthal library in your project to be able to use ready-made high-performance linear algebra functions.
This text covers the first part of chapter 6 of the alternate edition of the textbook.
Transformations
Consider a Clojure function (fn [x] (+ (* 3 x x) 4)). The set of allowed values for x is called the domain of the function. For the value 2, for example, the result is 16. We say that the image of 2 is 16. In linear algebra, the term transformation is used, rather than the term function.
As an example, take the transformation \(T\) from \(R^3\) to \(R^2\) defined by \(T(x,y,z) = (2x,y-z)\) that the textbook I'm using presents. The domain, \(R^3\), is the set of all real vectors with dimension 3, while the codomain, the set of valid results, is \(R^2\), the set of all real vectors with dimension 2. The image of any vector can be found by filling concrete values into the formula for \(T\). The image of \((1,4,-2)\) is \(((2\times1),(4-(-2)))\), that is, \((2,6)\).
We could write a function in Clojure that does exactly that:
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv mm nrm2 dot axpy]]
           [math :refer [sqrt sin cos pi]]])

(defn transformation-1 [x]
  (dv (* 2.0 (x 0))
      (- (x 1) (x 2))))

(transformation-1 (dv 1 4 -2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   2.00    6.00 ]
Although this way of working with vectors seems natural to a programmer, I'll show you a much easier method. And much faster if you have lots of those numbers!
In general, a transformation \(T\) of \(R^n\) (domain) into \(R^m\) (codomain) is a rule that assigns to each vector \(\mathbf{u}\) in \(R^n\) a unique vector \(\mathbf{v}\) (image) in \(R^m\); we write \(T(\mathbf{u})=\mathbf{v}\). We can also use the term mapping, which is more familiar to functional programmers.

Dilations, Contractions, Reflections, Rotations
Let's look at some useful geometric transformations, and find out that they can be described by matrices. For graphical representations of these examples, see the textbook, of course. Here, I concentrate on showing you the code.

Consider a transformation in \(R^2\) that simply scales a vector by some positive scalar \(r\). It maps every point in \(R^2\) into a point \(r\) times as far from the origin. If \(r>1\) it moves the point further from the origin (dilation), and if \(0< r < 1\), closer to the origin (contraction).
This transformation can be written in matrix form:\begin{equation} T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right) = \begin{bmatrix} r & 0\\ 0 & r \end{bmatrix} \begin{bmatrix}x\\y\\\end{bmatrix} \end{equation}
For example, when \(r=3\):
(mv (dge 2 2 [3 0 0 3]) (dv 1 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   3.00    6.00 ]
By multiplying our transformation matrix with a vector, we dilated the vector!
Consider reflection: it maps every point in \(R^2\) into its mirror image in the x-axis.

(mv (dge 2 2 [1 0 0 -1]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   3.00   -2.00 ]

Rotation about the origin through an angle \(\theta\) is a bit more complicated to guess. After recalling a bit of knowledge about trigonometry, we can convince ourselves that it can be described by a matrix of sines and cosines of that angle.
If we'd like to do a rotation of \(\pi/2\), we'd use \(\sin\pi/2 = 1\) and \(\cos\pi/2 = 0\). For vector \((3,2)\):
(mv (dge 2 2 [0 1 -1 0]) (dv 3 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  -2.00    3.00 ]

Matrix transformations
Not only can we construct matrices that represent transformations; it turns out that every matrix defines a transformation!
According to the textbook definition: let \(A\) be an \(m\times{n}\) matrix, and \(\mathbf{x}\) an element of \(R^n\). \(A\) defines a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\) of \(R^n\) (domain) into \(R^m\) (codomain). The resulting vector \(A\mathbf{x}\) is the image of the transformation.

Note that the matrix dimensions m and n correspond to the dimensions of the codomain (m, the number of rows) and the domain (n, the number of columns).
Matrix transformations have the following geometrical properties:
They map line segments into line segments (or points); if \(A\) is invertible, they also map parallel lines into parallel lines.
Example 1 from page 248 illustrates this. Let \(T:R^2\rightarrow{R^2}\) be a transformation defined by the matrix \(A=\begin{bmatrix}4&2\\2&3\end{bmatrix}\). Determine the image of the unit square under the transformation.
The code:
(let [a (dge 2 2 [4 2 2 3])
      p (dv 1 0)
      q (dv 1 1)
      r (dv 0 1)
      o (dv 0 0)]
  [(mv a p) (mv a q) (mv a r) (mv a o)])
[#RealBlockVector[double, n:2, offset: 0, stride:1]
 [   4.00    2.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   6.00    5.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   2.00    3.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   0.00    0.00 ]]
And, we got a parallelogram, since \(A\) is invertible. Check that as an exercise; a matrix is invertible if its determinant is non-zero (you can use the det function).
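Worked out explicitly (my arithmetic, using the standard \(2\times 2\) determinant formula):

\begin{equation} det(A) = \begin{vmatrix}4&2\\2&3\end{vmatrix} = 4\times 3 - 2\times 2 = 8 \neq 0 \end{equation}

so \(A\) is indeed invertible, and the image of the unit square is a non-degenerate parallelogram.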
Something bugs the programmer in me, though: what if we wanted to transform many points (vectors)? Do we use this pedestrian way, or do we put those points in some sequence and use good old Clojure higher-order functions such as map, reduce, filter, etc.? Let's try this.
(let [a (dge 2 2 [4 2 2 3])
      points [(dv 1 0) (dv 1 1) (dv 0 1) (dv 0 0)]]
  (map (partial mv a) points))
(#RealBlockVector[double, n:2, offset: 0, stride:1]
 [   4.00    2.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   6.00    5.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   2.00    3.00 ]
 #RealBlockVector[double, n:2, offset: 0, stride:1]
 [   0.00    0.00 ])
I could be pleased with this code. But I am not. I am not, because we only picked up the low-hanging fruit, and left a lot of simplicity and performance on the table. Consider this:
(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])]
  (mm a square))
#RealGEMatrix[double, mxn:2x4, layout:column, offset:0]
▥       ↓       ↓       ↓       ↓       ┓
→       4.00    6.00    2.00    0.00
→       2.00    5.00    3.00    0.00
┗                                       ┛
By multiplying matrix \(A\) with matrix \((\vec{p},\vec{q},\vec{r},\vec{o})\), we did the same operation as transforming each vector separately.
I like this approach much more.
It's simpler. Instead of maintaining disparate points of a unit square, we can treat them as one entity. If we still want access to individual points, we just call the col function.

It's faster. Instead of calling mv four times, we call mm once. In addition to that, our data sits next to each other, in a structure that is cache-friendly. This can give huge performance yields when we work with large matrices.
This might be obvious in graphics programming, but I've seen so much data-crunching code that uses matrices and vectors as dumb data structures that I think this point is worth reiterating again and again.
Composition of Transformations
I hope that every Clojure programmer will immediately understand function composition with the comp function:

(let [report+ (comp (partial format "The result is: %f") sqrt +)]
  (report+ 2 3))
The result is: 2.236068
By composing the +, sqrt and format functions together, we got a function that transforms the input equivalently. Or, more formally, \((f\circ{g}\circ{h})(x)=f(g(h(x)))\).
Matrices are transformations just like functions are transformations; it's no surprise that matrices (as transformations) can also be composed!
Let's consider \(T_1(\mathbf{x}) = A_1(\mathbf{x})\) and \(T_2(\mathbf{x}) = A_2(\mathbf{x})\). The composite transformation \(T=T_2\circ{T_1}\) is given by \(T(\mathbf{x})=T_2(T_1(\mathbf{x}))=T_2(A_1(\mathbf{x}))=A_2 A_1(\mathbf{x})\).
Thus, the composed matrix transformation \(A_2\circ A_1\) is defined by the matrix product \(A_2{A_1}\). It can be extended naturally to a composition of \(n\) matrix transformations. \(A_n\circ{A_{n-1}}\circ\dots\circ{A_1}=A_n A_{n-1}\dots A_1\)
Take a look at the Clojure code for example 2 (page 250) from the textbook. Three matrices are given: one defines a reflection, another a rotation, and the third a dilation. By composing them, we get one complex transformation that does all three transformations combined.
(let [pi2 (/ pi 2)
      reflexion (dge 2 2 [1 0 0 -1])
      rotation (dge 2 2 [(cos pi2) (sin pi2) (- (sin pi2)) (cos pi2)])
      dilation (dge 2 2 [3 0 0 3])]
  (def matrix-t (mm dilation rotation reflexion)))

matrix-t
#RealGEMatrix[double, mxn:2x2, layout:column, offset:0]
▥       ↓       ↓       ┓
→       0.00    3.00
→       3.00   -0.00
┗                       ┛
The image of the point \((1,2)\) is:
(mv matrix-t (dv 1 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   6.00    3.00 ]

Orthogonal Transformations
Recall from Clojure Linear Algebra Refresher (1) - Vector Spaces that an orthogonal matrix is an invertible matrix for which \(A^{-1} = A^t\). An orthogonal transformation is a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\), where \(A\) is orthogonal. Orthogonal transformations preserve norms, angles, and distances. They preserve the shapes of rigid bodies.
I'll illustrate this with example 3 (page 252):
(let [sqrt12 (sqrt 0.5)
      a (dge 2 2 [sqrt12 (- sqrt12) sqrt12 sqrt12])
      u (dv 2 0)
      v (dv 3 4)
      tu (mv a u)
      tv (mv a v)]
  [[(nrm2 u) (nrm2 tu)]
   [(/ (dot u v) (* (nrm2 u) (nrm2 v))) (/ (dot tu tv) (* (nrm2 tu) (nrm2 tv)))]
   [(nrm2 (axpy -1 u v)) (nrm2 (axpy -1 tu tv))]])
[[2.0 2.0]
 [0.6 0.6000000000000001]
 [4.123105625617661 4.1231056256176615]]
We can see that norms, angles, and distances are preserved (with rather tiny differences due to rounding errors in floating-point operations).
Translation and Affine Transformation
There are transformations in vector spaces that are not truly matrix transformations, but are useful nevertheless. I'll show you why these transformations are not linear transformations in the next post; for now, let's look at how they are done. Translation is a transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = \mathbf{u} + \mathbf{v}\). This transformation slides points in the direction of vector \(\mathbf{v}\).

In Clojure, translation is done with the good old axpy function.

(axpy (dv 1 2) (dv 4 2))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[   5.00    4.00 ]

Affine transformation is a transformation \(T:R^n \rightarrow R^n\) defined by \(T(\mathbf{u}) = A\mathbf{u} + \mathbf{v}\). It can be interpreted as a matrix transformation followed by a translation. See the textbook for more details; you can do a Clojure example as a customary trivial exercise left to the reader. Hint: the mv function can do affine transformations.

To be continued…
These were the basics of matrix transformations. In the next post, we will explore linear transformations more. I hope this was interesting and useful to you. If you have any suggestions, feel free to send feedback - this can make the next guides better.
Happy hacking!
I've proven the following "theorem":
Let $I \subset \mathbb{R}$ be an interval, $(f_n: I \rightarrow \mathbb{R})_{n \in \mathbb{N}}$ be a family of continuous functions converging pointwise to a continuous function $f: I \rightarrow \mathbb{R}$ on $I$. Then: $(f_n)_{n \in \mathbb{N}}$ is equicontinuous on I.
Now my problem is that in Equicontinuity of a pointwise convergent sequence of monotone functions with continuous limit, the $f_n$ additionally have to be monotonic. So is my proof a generalization, or am I just missing something? Here is my proof:
Proof: Let $\epsilon > 0$. Observe first: \begin{equation} | f_n(x) - f_n(y) | \leq |f_n(x) - f(x)| + |f_n(y) - f(y)| + |f(x)- f(y)| \end{equation} Now there is, by pointwise convergence of $(f_n)_{n \in \mathbb{N}}$, an $N \in \mathbb{N}$ such that for all $n \geq N$ we have $|f_n(x) - f(x)|<\frac{\epsilon}{3}$ and $|f_n(y) - f(y)| < \frac{\epsilon}{3}$. Further, there is a $\delta > 0$ such that $|f(x) - f(y)| < \frac{\epsilon}{3}$ for $|x-y| < \delta$ by continuity of $f$. Hence we have shown that there is an $N \in \mathbb{N}$ and a $\delta > 0$ such that for all $n \geq N$ \begin{equation} |f_n(x) - f_n(y)| < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon \end{equation} holds. Now let $n < N$. Then, by continuity of $f_n$, there is a $\delta_n$ such that $|x-y| < \delta_n$ implies $|f_n(x) - f_n(y)| < \epsilon$. Setting \begin{equation} \tilde{\delta} = \min_{n < N} \delta_n \end{equation} (which exists and is greater than $0$) we obtain that for all $n < N$ the following holds: \begin{equation} |x - y| < \tilde{\delta} \Rightarrow |f_n(x) - f_n(y) | < \epsilon \end{equation} Setting now $\hat{\delta} = \min \{\delta, \tilde{\delta} \}$ we have that for all $n \in \mathbb{N}$ the following holds: \begin{equation} |x-y| < \hat{\delta} \Rightarrow |f_n(x)- f_n(y) | < \epsilon \end{equation} Hence we have shown that for all $\epsilon > 0$ there is a $\hat{\delta} > 0$ such that for all $n \in \mathbb{N}$ we have that $|x-y| < \hat{\delta}$ implies $|f_n(x) - f_n(y)| < \epsilon$.
How does one graph this?
I am trying to parametrize $x^2+y^2+\sin(4x)+\sin(4y)=4$.
I need to find a way of taking the intersections between $x^2+y^2+\sin(4x)+\sin(4y)=4$, and $\tan(nx)$. As n increases from $0\le{n}\le{2\pi}$, I can take the following in coordinate-form....
$$(n,\text{The x-intersection value})$$ $$(n,\text{The y-intersection value})$$
Finally I need to take the following to graph its parametric derivative. Which is...
$$\frac{({\text{The x-intersection value}})^2+4\cos(4(\text{The x-intersection value}))}{-(\text{The y-intersection value})^2-4\cos(4(\text{The y-intersection}))}$$
I have little knowledge of how to use Sage. If someone can help, I'll be thankful.
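Not a full Sage answer, but here is a plain-Python sketch (function names are mine) of one way to parametrize the closed curve by angle: along any ray $(r\cos t, r\sin t)$, the left-hand side is $r^2-4$ plus two sine terms bounded by $2$ in absolute value, so it is negative at $r=1$ and positive at $r=3$ for every $t$, and the radius $r(t)$ can be found by bisection.

```python
import math

def f(x, y):
    # implicit curve: f(x, y) = 0
    return x * x + y * y + math.sin(4 * x) + math.sin(4 * y) - 4.0

def r_of_theta(theta):
    # g(1) <= -1 < 0 and g(3) >= 3 > 0 for every theta, so bisection applies
    g = lambda r: f(r * math.cos(theta), r * math.sin(theta))
    lo, hi = 1.0, 3.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# sample the parametrized curve (x(t), y(t)) at a few angles
curve = [(r_of_theta(t) * math.cos(t), r_of_theta(t) * math.sin(t))
         for t in (0.0, 1.0, 2.5, 4.0)]
```

The sampled points can then be fed to Sage's plotting (or any plotting library), and the same root-finding idea works for intersecting the curve with $\tan(nx)$.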
\(f(x)\) is a polynomial of degree 99. For exactly 100 (out of 101) integer values ranging from 0 to 100, we have \(f(x) = \frac {1}{x+1}\). Also, \(f(101) = 0\). For what value of \(a\), \(0 \leq a \leq 100\), is \(f(a) \neq \frac {1}{a+1}\)?
Clojure Linear Algebra Refresher (4) - Linear Transformations
June 15, 2017
Now that we got ourselves acquainted with matrix transformations, the next obvious step is to generalize our knowledge with linear transformations. This post covers a bit more theoretical ground, so I will refer you to look things up in your linear algebra textbook more often than in the Matrix Transformations post. There will also be less code, since the computations that we'd need are similar to what I've already shown you. That's a good thing though; it indicates that we covered the most important things already, and just need more understanding of the math theory and applications, and a finer grasp of the details of high-performance numerical computations.
Before I continue, a few reminders:
These articles are not stand-alone. You should follow along with a linear algebra textbook. I recommend Linear Algebra With Applications, Alternate Edition by Gareth Williams (see more in part 1 of this series). The intention is to connect the dots from a math textbook to Clojure code, rather than explain math theory or teach you basics of Clojure. Please read Clojure Linear Algebra Refresher (3) - Matrix Transformations first. Include Neanderthal library in your project to be able to use ready-made high-performance linear algebra functions.
This text covers the second part of chapter 6 of the alternate edition of the textbook.
Linear Transformations
Recall from the Vector Spaces post that a vector space has two operations: addition and scalar multiplication. Consider these matrix transformations: \(T(\mathbf{u} + \mathbf{v}) = A(\mathbf{u} + \mathbf{v}) = A\mathbf{u}+A\mathbf{v} = T(\mathbf{u})+T(\mathbf{v})\), and \(T(c\mathbf{u}) = A(c\mathbf{u}) = cA\mathbf{u} = cT(\mathbf{u})\). From this, it's easy to understand the textbook definition:
Let \(\mathbf{u}\text{ and }\mathbf{v}\) be vectors in \(R^n\) and \(c\) be a scalar. A transformation \(T:R^n\rightarrow{R^n}\) is a linear transformation if \(T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u})+T(\mathbf{v})\) and \(T(c\mathbf{u}) = cT(\mathbf{u})\).
These properties tell us that all linear transformations preserve addition and scalar multiplication. Every matrix transformation is linear, but translations and affine transformations are not (I'll show you later that there is a workaround).
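For a concrete check of the addition property (the numbers are mine; the matrix is the one from the unit-square example in the previous post), take \(A=\begin{bmatrix}4&2\\2&3\end{bmatrix}\), \(\mathbf{u}=\begin{bmatrix}1\\0\end{bmatrix}\), and \(\mathbf{v}=\begin{bmatrix}0\\1\end{bmatrix}\):

\begin{equation} A(\mathbf{u}+\mathbf{v}) = \begin{bmatrix}4&2\\2&3\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}6\\5\end{bmatrix} = \begin{bmatrix}4\\2\end{bmatrix} + \begin{bmatrix}2\\3\end{bmatrix} = A\mathbf{u} + A\mathbf{v} \end{equation}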
We can refer to a transformation where the domain and codomain are the same (such as \(R^n\rightarrow{R^n}\)) as an operator.
In the previous post, I (and the textbook author) used ad hoc ways of arriving at matrices that describe transformations such as dilations, rotations, and reflections. Now, we will learn a formal method.
Let's start with the example, but first require the namespaces that we're going to need:
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv mm col axpy copy]]
           [linalg :refer [ev! sv! trf tri]]
           [math :refer [cos sin pi]]])
The example 3 from page 259 of the textbook tries to find a matrix that describes the following transformation: \(T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right)=\begin{bmatrix}2x+y\\3y\end{bmatrix}\).
Here's what we'll do in Clojure: first we find the effect of \(T\) on the standard basis of the domain \(R^n\), and then form a matrix whose columns are those images. Easy!
(let [t! (fn [v]
           (v 0 (+ (* 2 (v 0)) (v 1)))
           (v 1 (* 3 (v 1))))
      a (dge 2 2 [1 0 0 1])]
  (t! (col a 0))
  (t! (col a 1))
  a)
#RealGEMatrix[double, mxn:2x2, layout:column, offset:0]
▥       ↓       ↓       ┓
→       2.00    1.00
→       0.00    3.00
┗                       ┛
Excuse the imperative code in function t!, but we've found the matrix that represents the linear transformation. Please keep in mind that vectors and matrices are functions that can retrieve or update values at a specific index, but that these functions are usually boxed. In tight loops, you'd want to use entry and entry! from the uncomplicate.neanderthal.real namespace, or some other operation optimized for performance.
Let \(T\) be a linear transformation on \(R^n\), \(\left\{\mathbf{e_1}, \mathbf{e_2}, \dots , \mathbf{e_n}\right\}\) the standard basis (see the Vector Spaces post), and \(\mathbf{u}\) an arbitrary vector in \(R^n\): \(\mathbf{e_1}=\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}\), \(\mathbf{e_2}=\begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix}\), …, \(\mathbf{e_n}=\begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix}\), and \(\mathbf{u}=\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}\).

We can express \(\mathbf{u}\) as a linear combination of \({\mathbf{e_1}, \mathbf{e_2}, \dots , \mathbf{e_n}}\): \(\mathbf{u} = c_1\mathbf{e_1}+c_2\mathbf{e_2}+\dots+c_n\mathbf{e_n}\). If we fill that in \(T(\mathbf{u})\), and apply the properties of linear transformations (look this up in the textbook, or do your own pen and paper exercise), we find that \(A=[T(\mathbf{e_1})\cdots T(\mathbf{e_n})]\). \(A\) is called the standard matrix of \(T\).
The previous definition of linear transformations in \(R^n\) can be expanded for any vector space (not only \(R^n\)).
Homogeneous coordinates
Translation and affine transformation (see the previous post) are not linear transformations and cannot be composed, yet they are very useful operations. Programmers in computer graphics understand the value of linear algebra and utilize it well. We can see an example of how the problem of translation and affine transformation composability have been overcome in that field, and keep that in mind when we develop solutions that use more than 3 dimensions.
It turns out that homogeneous coordinates can describe points in a plane in a way that lets matrix multiplication be used for translations. In homogeneous coordinates, a third component of 1 is added to each coordinate, and the transformations are defined by the following matrices:

point \(\begin{bmatrix}x\\y\\1\end{bmatrix}\)
rotation \(A=\begin{bmatrix}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{bmatrix}\)
reflection \(B=\begin{bmatrix}1&0&0\\0&-1&0\\0&0&1\end{bmatrix}\)
dilation/contraction \(C=\begin{bmatrix}r&0&0\\0&r&0\\0&0&1\end{bmatrix}\), \(r>0\)
translation \(E=\begin{bmatrix}1&0&h\\0&1&k\\0&0&1\end{bmatrix}\)
If we, for example, wanted to do a dilation followed by translation and then a rotation, we would use the \(AEC\) matrix.
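To make that concrete (my own numbers, not from the textbook): with dilation \(r=2\), translation \(h=k=1\), and rotation \(\theta=\pi/2\),

\begin{equation} AEC = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix} \begin{bmatrix}1&0&1\\0&1&1\\0&0&1\end{bmatrix} \begin{bmatrix}2&0&0\\0&2&0\\0&0&1\end{bmatrix} = \begin{bmatrix}0&-2&-1\\2&0&1\\0&0&1\end{bmatrix} \end{equation}

which sends \(\begin{bmatrix}x\\y\\1\end{bmatrix}\) to \(\begin{bmatrix}-2y-1\\2x+1\\1\end{bmatrix}\): scale first, then translate, then rotate, exactly in that order.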
The textbook I recommended offers us a practical example (example 7, page 264): There is a triangle, and we would like to rotate it, but not about the origin, but about a point \(P(h,k)\). Thus, the solution is to translate \(P\) to origin, rotate, and then translate \(P\) back. In the example, I implement the rotation through an angle \(\pi/2\) about the point \(P(5,4)\).
(let [t1 (fn [p]
           (dge 3 3 [1 0 0 0 1 0 (- (p 0)) (- (p 1)) 1]))
      r (fn [theta]
          (let [c (cos theta)
                s (sin theta)]
            (dge 3 3 [c s 0 (- s) c 0 0 0 1])))
      t2 (fn [p]
           (dge 3 3 [1 0 0 0 1 0 (p 0) (p 1) 1]))
      t2rt1 (fn [p theta]
              (mm (t2 p) (r theta) (t1 p)))]
  (t2rt1 (dv 5 4) (/ pi 2)))
#RealGEMatrix[double, mxn:3x3, layout:column, offset:0]
▥       ↓       ↓       ↓       ┓
→       0.00   -1.00    9.00
→       1.00    0.00   -1.00
→       0.00    0.00    1.00
┗                               ┛

Kernel, Range, and the Rank/Nullity Theorem
Besides domain and codomain, linear transformations are associated with another two important vector spaces: kernel and range.

With a transformation \(T:U\rightarrow{V}\):

The kernel, \(ker(T)\), is the set of vectors in \(U\) that are mapped into the zero vector of \(V\). The range is the set of vectors in \(V\) that are the images of vectors in \(U\).
This section does not contain anything new that I could show you in Clojure code, but it is important for grasping the theory. Therefore, I'll only mention the important stuff, and direct you to the textbook.
Example 2 (page 273) demonstrates how to find the kernel, by solving a system of equations \(T(\mathbf{x}) = A\mathbf{x} = \mathbf{0}\). The first thing that we could do is to try to use the function for solving linear systems of equations sv!. That won't help us much; since our system is homogeneous, sv! can only give us the trivial solution \(\mathbf{x}=\mathbf{0}\). Fortunately, in this example, \(A\) is a square matrix, so we can exploit the equality \(A\mathbf{x}=\lambda\mathbf{x}\) between eigenvectors \(\mathbf{x}\) and eigenvalue(s) \(\lambda=0\), if such \(\lambda\) exists (since this is a textbook example, I'd be surprised if it didn't). See Clojure Linear Algebra Refresher (2) - Eigenvalues and Eigenvectors if you need to refresh your Eigen-fu.
(let [a (dge 3 3 [1 0 1 2 -1 1 3 1 4])
      eigenvectors (dge 3 3)]
  (def lambdas (ev! a nil eigenvectors))
  (def eigen eigenvectors)
  eigenvectors)
#RealGEMatrix[double, mxn:3x3, layout:column, offset:0]
▥       ↓       ↓       ↓       ┓
→      -0.64   -0.96    0.71
→      -0.13    0.19   -0.71
→      -0.76    0.19    0.00
┗                               ┛
Let's see whether any of these eigenvectors is our solution.
lambdas
#RealGEMatrix[double, mxn:3x2, layout:column, offset:0]
▥       ↓       ↓       ┓
→       5.00    0.00
→      -0.00    0.00
→      -1.00    0.00
┗                       ┛
The second eigenvalue is \(0\), so the second column, \((-0.96,0.19,0.19)\), is the normalized basis of the one-dimensional subspace of \(R^3\) that is the kernel we were looking for.
(seq (col eigen 1))
(-0.9622504486493764 0.1924500897298754 0.1924500897298751)
For a linear transformation \(T\), \(dim\, ker(T) + dim\, range(T) = dim\, domain(T)\). Also, \(dim\, range(T) = rank(A)\). The kernel of \(T\) is also called the null space. \(dim\, ker(T)\) is called the nullity, and \(dim\, range(T)\) is the rank of the transformation. The previous equality is referred to as the rank/nullity theorem.

Systems of Linear Equations
As we've already seen in the previous section, there is a close connection between linear transformations and systems of linear equations. System \(A\mathbf{x}=\mathbf{y}\) can be written as \(T(\mathbf{x})=\mathbf{y}\).
Note that the solution of the homogeneous system \(A\mathbf{x}=\mathbf{0}\) is the kernel of the transformation, as we've already shown.
There is an interesting relation between an element of the kernel \(\mathbf{z}\), a particular solution \(\mathbf{x_1}\) of the nonhomogeneous system \(A\mathbf{x}=\mathbf{y}\), and every other solution \(\mathbf{x}\) of that system: \(\mathbf{x}=\mathbf{z}+\mathbf{x_1}\)!
Look the details up in the textbook; here I'm showing you example 2 (page 286) in Clojure. We can use the sv! function to solve this non-homogeneous system.
(let [a (dge 3 3 [1 0 1 1 -1 1 3 1 4])
      b (dge 3 1 [11 -2 9])]
  (sv! a b)
  (def a-solution (col b 0))
  a-solution)
#RealBlockVector[double, n:3, offset: 0, stride:1]
[  17.00   -0.00   -2.00 ]
Now, knowing the kernel from the previous section, and having this particular solution, we can construct all solutions to this system:
(let [z (col eigen 1)
      x1 a-solution]
  (defn y [r]
    (axpy r z x1)))

(y 1.33)
#RealBlockVector[double, n:3, offset: 0, stride:1]
[  15.72    0.26   -1.74 ]
As an exercise, you can check whether those vectors are really solutions (and inform me if they're not).
(let [a (dge 3 3 [1 0 1 1 -1 1 3 1 4])
      b (dge 3 4 [11 -2 9 3 -2 1 4 5 -6 -1 0 1])]
  (sv! a b))
#RealGEMatrix[double, mxn:3x4, layout:column, offset:0]
▥       ↓       ↓       ↓       ↓       ┓
→      17.00    9.00   49.00   -9.00
→      -0.00   -0.00  -15.00    2.00
→      -2.00   -2.00  -10.00    2.00
┗                                       ┛
I haven't killed seven in one blow, like the Brave Little Tailor, only four. But I could have, easily, even more!
One-to-One and Inverse Transformations
This section covers only the theory, so I'll just summarize the results, and direct you to the textbook for details.
A transformation is one-to-one if each element in the range is the image of exactly one element in the domain.
A transformation \(T\) is invertible if there is a transformation \(S\) such that \(S(T(\mathbf{u}))=\mathbf{u}\) and \(T(S(\mathbf{u}))=\mathbf{u}\) for every \(\mathbf{u}\).
The following statements are equivalent:
\(T\) is invertible. \(T\) is nonsingular (\(det(A)\neq 0\)). \(T\) is one-to-one. \(ker(T)=\mathbf{0}\). \(ran(T)=R^n\). \(T^{-1}\) is linear. \(T^{-1}\) is defined by \(A^{-1}\).

Coordinate Vectors
Let \(U\) be a vector space, \(B = \left\{\mathbf{u_1}, \mathbf{u_2}, \cdots, \mathbf{u_n}\right\}\) a basis, \(\mathbf{u}\) a vector in \(U\), and \(a_1, a_2, \cdots, a_n\) scalars such that \(\mathbf{u} = a_1\mathbf{u_1} + a_2\mathbf{u_2} + \cdots + a_n\mathbf{u_n}\). The column vector \(\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}\) is called the coordinate vector of \(\mathbf{u}\) relative to \(B\), and \(a_1, a_2, \cdots, a_n\) are called the coordinates of \(\mathbf{u}\).
Column vectors are more convenient to work with when developing the theory. In Clojure code, no distinction is made. The same vector can be used either as a column or as a row vector.
The standard bases are the most convenient bases to work with. To express a vector in coordinates relative to another base, we need to solve a system of linear equations. One special case is an orthonormal basis. In that case, the coordinate vector is \(\mathbf{v}_B = \begin{bmatrix}\mathbf{v}\cdot \mathbf{u_1}\\\mathbf{v}\cdot \mathbf{u_2}\\\vdots\\\mathbf{v}\cdot \mathbf{u_n}\end{bmatrix}\).

Change of Basis
In general, if \(B\) and \(B'\) are bases of \(U\), and \(\mathbf{u}\) is a vector in \(U\) having coordinate vectors \(\mathbf{u_B}\) and \(\mathbf{u_{B'}}\) relative to \(B\) and \(B'\), then \(\mathbf{u_{B'}} = P \mathbf{u_B}\), where \(P\) is a transition matrix from \(B\) to \(B'\): \(P = [(\mathbf{u_1})_{B'} \cdots (\mathbf{u_n})_{B'}]\).
The change from a nonstandard basis to the standard basis is easy to do - the columns of \(P\) are the columns of the first basis! (example 4, page 294, from the textbook):
(let [b (dge 2 2 [1 2 3 -1])
      b' (dge 2 2 [1 0 0 1])
      p (copy b)]
  (mv p (dv 3 4)))
#RealBlockVector[double, n:2, offset: 0, stride:1]
[  15.00    2.00 ]
A more interesting case is when neither basis is standard. In that case, we can use the standard basis as an intermediate basis. Let \(B\) and \(B'\) be bases, \(S\) the standard basis, and \(P\) and \(P'\) transition matrices to the standard basis. Then, transition from \(B\) to \(B'\) can be accomplished by transition matrix \(P'^{-1}P\), and from \(B'\) to \(B\) with \(P^{-1}P'\).
Here's example 5 (page 295) from the textbook worked out in Clojure code:
(let [p (dge 2 2 [1 2 3 -1])
      p' (dge 2 2 [3 1 5 2])
      p-1 (tri (trf p))
      p'-1 (tri (trf p'))]
  (mv (mm p'-1 p) (dv 2 1)))
=> #RealBlockVector[double, n:2, offset: 0, stride:1] [ -5.00 4.00 ]
This is equal to the result from the book. Nice.
It's not over
We wrapped up what we started in the previous post on matrix transformations with the general concept of linear transformations. In the next post, we'll take a step back and cover a bit of more basic ground - the technicalities of matrices.
I hope this was interesting and useful to you. If you have any suggestions, feel free to send feedback - it can make the next guides better.
WHY?
A bilinear model can capture rich relations between two vectors. However, the computational complexity of a bilinear model is huge due to its high dimensionality. To make bilinear models more applicable, this paper suggests low-rank bilinear pooling using the Hadamard product.
WHAT?
A bilinear model has a huge N x M matrix W. Instead of the huge matrix W, decomposed matrices U and V can be used to reduce computation. Also, instead of building a separate model for each output feature, a matrix P can be used to capture multiple features.
f_i = \sum^N_{j=1}\sum^M_{k=1} w_{ijk}x_j y_k + b_i = \mathbf{x}^T\mathbf{W}_i\mathbf{y} + b_i\\= \mathbf{x}^T\mathbf{U}_i\mathbf{V}_i^T\mathbf{y} + b_i = \mathbb{1}^T(\mathbf{U}_i^T\mathbf{x}\circ\mathbf{V}_i^T\mathbf{y}) + b_i\\\mathbf{f} = \mathbf{P}^T(\mathbf{U}^T\mathbf{x}\circ\mathbf{V}^T\mathbf{y}) + \mathbf{b}
The full model includes bias vectors for each linear projection. Also, non-linear activation functions can increase representational power. The activation can be applied either right after the linear mappings or after the Hadamard product. Finally, a shortcut connection can be applied, as in residual learning. Applying these techniques to the low-rank bilinear model gives a generalized form of MRN.
\mathbf{f} = \mathbf{P}^T((\mathbf{U}^T\mathbf{x} + \mathbf{b}_x)\circ(\mathbf{V}^T\mathbf{y}+ \mathbf{b}_y)) + \mathbf{b}\\\mathbf{f} = \mathbf{P}^T(\sigma(\mathbf{U}^T\mathbf{x})\circ\sigma(\mathbf{V}^T\mathbf{y})) + \mathbf{b}\\\mathbf{f} = \mathbf{P}^T(\sigma(\mathbf{U}^T\mathbf{x}\circ\mathbf{V}^T\mathbf{y})) + \mathbf{b}\\\mathbf{f} = \mathbf{P}^T(\sigma(\mathbf{U}^T\mathbf{x})\circ\sigma(\mathbf{V}^T\mathbf{y})) + h_x(\mathbf{x}) + h_y(\mathbf{y}) + \mathbf{b}\\
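The core low-rank bilinear pooling step, f = P^T(σ(U^T x) ∘ σ(V^T y)) + b, is cheap to sketch in plain Python. The dimensions, matrices, and tanh activation below are illustrative choices for the sketch, not values from the paper:

```python
import math

def matvec_T(M, v):
    """Compute M^T v, where M is given as a list of rows."""
    rows, cols = len(M), len(M[0])
    return [sum(M[i][j] * v[i] for i in range(rows)) for j in range(cols)]

def low_rank_bilinear(x, y, U, V, P, b, act=math.tanh):
    """f = P^T (act(U^T x) ∘ act(V^T y)) + b, where ∘ is the Hadamard product."""
    hx = [act(t) for t in matvec_T(U, x)]  # project x into the d-dim joint space
    hy = [act(t) for t in matvec_T(V, y)]  # project y into the same space
    h = [a * c for a, c in zip(hx, hy)]    # Hadamard product replaces the full W_i
    return [f_j + b_j for f_j, b_j in zip(matvec_T(P, h), b)]

# Tiny example: x in R^2, y in R^3, rank d = 2, one output feature.
U = [[1.0, 0.0], [0.0, 1.0]]               # 2 x 2
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # 3 x 2
P = [[1.0], [1.0]]                         # 2 x 1
f = low_rank_bilinear([1.0, 2.0], [1.0, 1.0, 0.0], U, V, P, [0.0])
```

The point of the factorization is visible in the shapes: instead of one N x M matrix W_i per output feature, only the thin projections U, V and the pooling matrix P are stored.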
This mechanism can be used to extract information in visual question answering. The model captures multimodal attention in each layer. Let q be the question embedding and F the image feature matrix.
\alpha = softmax(\mathbf{P}^T_{\alpha}(\sigma(\mathbf{U}^T_{\mathbf{q}}\mathbf{q}\cdot\mathbb{1}^T)\odot\sigma(\mathbf{V}^T_{\mathbf{F}}\mathbf{F})))\\\hat{\mathbf{v}} = \parallel_{g = 1}^{G}\sum^{S^2}_{s=1}\alpha_{g,s}\mathbf{F}_s\\p(a|\mathbf{q}, \mathbf{F}; \Theta) = softmax(\mathbf{P}^T_{O}(\sigma(\mathbf{W}^T_{\mathbf{q}}\mathbf{q})\odot\sigma(\mathbf{V}^T_{\hat{\mathbf{v}}}\hat{\mathbf{v}})))\\\hat{a} = argmax p(a|\mathbf{q}, \mathbf{F}; \Theta)
This model is called Multimodal Low-rank Bilinear Attention Networks (MLB). With residual connections, it is called Multimodal Attention Residual Networks (MARN).
So?
MARN achieved SOTA results in VQA.
Critic
Applying multimodality in attention seems creative. Visualization of the attention maps would be nice.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE
(Springer, 2013-07)
The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ...
Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV
(American Physical Society, 2013-01)
Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
What is an example of a group $G$ which
1- is finitely generated by $S$,
2- does not have property (T),
3- admits infinitely many finite quotients which do not factor through a homomorphism $G \to H$ for some [fixed & infinite] property (T) group $H$
4- all the Cayley graphs (w.r.t. $S$) of those finite quotients are $\epsilon$-expanders (for some fixed $\epsilon >0$)
The question is motivated by the
false assertion: "a group has property (T) if and only if all its finite quotients are $\epsilon$-expanders".
There are "simple" answers if one does not put the factor condition in 3. For example, one could consider $A \times H$ where $A$ is a finitely generated simple amenable group (as proven/constructed by Juschenko-Monod/Matui) and $H$ is some residually finite property (T) group ($SL_3(\mathbb{Z})$ being the canonical choice). Because $A$ is amenable, this does not have (T), and its finite quotients (which all come from $H$) are $\epsilon$-expanders.
Actually, to enlarge the list of examples of this form, a side question would be: what are the finitely generated groups which do not have property (T) and do not have finite non-trivial quotients?
This question already has an answer here:
Let $T$ be a linear operator on the finite dimensional vector space $V$. Suppose $T$ has a cyclic vector. Prove that if $U$ is any linear operator which commutes with $T$, then $U$ is a polynomial in $T$.
Let $v$ be a cyclic vector for $T$. Show that there exists a polynomial of degree at most $\dim(V)-1$ such that $Uv=P(T)v$. (Expand $Uv$ in the cyclic basis.) Then use $UT^k=T^kU$ to show that $U$ and $P(T)$ coincide on the cyclic basis of $V$.
If $v$ is a cyclic vector, then $U$ is determined just by its image $U(v)$ of $v$, since commutation implies the relation $U(T^k(v))=T^k(U(v))$ that fixes it on other elements of the spanning set $\{\,T^k(v)\mid k\in\Bbb N\,\}$ of $V$. With $n=\dim(V)$ we can write $U(v)=\sum_{0\leq k<n}c_kT^k(v)$ and then $P=\sum_{0\leq k<n}c_kX^k$, and therefore have $P[T](v)=U(v)$; but then $P[T]$ is an operator commuting with $T$ and having the same image of $v$ as $U$ does, so $U=P[T]$ by our opening sentence.
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
In the opening scene of "The Euclid Alternative", we see Sheldon (Jim Parsons) demanding that Leonard (Johnny Galecki) drive him around to run various errands, after Leonard has spent a night in the lab using the new Free Electron Laser to perform X-ray diffraction experiments. On the whiteboard in the background, we can see equations that describe a rolling ball problem.
Rolling motion plays an important role in many familiar situations, so this type of motion receives considerable attention in many introductory mechanics courses in physics and engineering. One of the more challenging aspects to grasp is that rolling (without slipping) is a combination of both translation and rotation, where the point of contact is instantaneously at rest. The equations on the whiteboard describe the velocity at the point of contact on the ground, at the center of the object, and at the top of the object.
Pure Translational Motion
When an object undergoes pure translational motion, all of its points move with the same velocity as the center of mass: each moves at the same speed and in the same direction, \(v_{\textrm{cm}}\).
Pure Rotational Motion
In the case of a rotating body, the speed of any point on the object depends on how far away it is from the axis of rotation, in this case the center. For rolling without slipping, the rotational speed at the edge must match the translational speed \(v_{\textrm{cm}}\). All these points moving at different speeds may seem to pose a problem, but we know something else: the object's angular velocity.
The angular speed tells us how fast an object rotates. In this case, we know that all points on the object's surface complete a revolution in the same time. In physics, we express this with the equation \begin{equation} \omega=\frac{v}{r} \end{equation} where \(\omega\) is the angular speed. We can rewrite this equation to give the speed of any point a distance \(r\) from the center: \begin{equation} v(r)=\omega r \end{equation} If we look at the center, where \(r=0\), we expect the speed to be zero. When we plug zero into the above equation that is exactly what we get: \begin{equation} v(0)= \omega \times 0 = 0 \label{eq:zero} \end{equation} If we know the object's speed, \(v_{\textrm{cm}}\), and the object's radius, \(R\), a little algebra lets us define \(\omega\) as \[\omega=\frac{v_{\textrm{cm}}}{R}\] or the speed at the edge, \(v(R)\), as \begin{equation} v_{\textrm{cm}}=v(R) = \omega R \label{eq:R} \end{equation}

Putting it all Together
To determine the absolute speed of any point of a rolling object we must add the translational and rotational velocities together. Some of the rotational velocities point in the opposite direction from the translational velocity and must be subtracted. As daunting as this looks, we can reduce the problem to what we see on the whiteboard. Here the boys look at three key points: the point of contact with the ground (\(P\)), the center of the object (\(C\)), and the top of the object (\(Q\)).
We have done most of the legwork at this point and now the rolling ball problem is easier to solve.
At point \(Q\)
At point \(Q\), we know the translational speed to be \(v_{\textrm{cm}}\) and the rotational speed to be \(v(R)\). So the total speed at that point is
\begin{equation} v = v_{\textrm{cm}} + v(R) \label{eq:Q1} \end{equation} Looking at equation \eqref{eq:R}, we can write \(v(R)\) as \begin{equation} v(R) = \omega R \end{equation} Putting this into \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(R) \\ & = v_{\textrm{cm}} + \omega R \\ & = v_{\textrm{cm}} + \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} + v_{\textrm{cm}} = 2v_{\textrm{cm}} \end{aligned} which looks almost exactly like Leonard's board, so we must be doing something right.

At point \(C\)
At point \(C\) we know the rotational speed to be zero (see equation \eqref{eq:zero}).
Putting this back into equation \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(r) \\ & = v_{\textrm{cm}} + v(0) \\ & = v_{\textrm{cm}} + \omega \cdot 0 \\ & = v_{\textrm{cm}} + 0 \\ & = v_{\textrm{cm}} \end{aligned} Again we get the same result as the board.

At point \(P\)
At the point of contact with the ground, \(P\), we don’t expect a wheel to be moving (unless it skids or slips). If we look at our diagrams, we see that the rotational speed is in the opposite direction to the translational speed and its magnitude is
\begin{aligned} v(R) & = -\omega R \\ & = -\frac{v_{\textrm{cm}}}{R}\cdot R \\ & = -v_{\textrm{cm}} \end{aligned} It is negative because the velocity is in the opposite direction. Equation \eqref{eq:Q1} becomes \begin{aligned} v & = v_{\textrm{cm}} - \omega R \\ & = v_{\textrm{cm}} - \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} - v_{\textrm{cm}} = 0 \end{aligned} Not only do we get the same result for the rolling ball problem we see on the whiteboard, but it is what we expect. When a rolling ball, wheel, or other object doesn't slip or skid, the point of contact is stationary.

Cycloid and the Rolling ball
If we were to trace the path drawn by a point on the ball we would get something known as a cycloid. The rolling ball problem is an interesting one, and the reason it is studied is that the body undergoes two types of motion at the same time: pure translation and pure rotation. This means that the point that touches the ground, the contact point, is stationary, while the top of the ball moves twice as fast as the center. It seems somewhat counter-intuitive, which is why we don't often think about it, but imagine if at the point of contact our car's tires weren't stationary but moved. We'd slip and slide and not go anywhere fast. But that is another problem entirely.
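The cycloid also gives a quick numerical check of the three whiteboard results. Parametrizing a rim point as \(x = v_{\textrm{cm}} t - R\sin\theta\), \(y = R - R\cos\theta\) with rolling angle \(\theta = v_{\textrm{cm}} t / R\), differentiating gives the speed computed below (a sketch; the function name is mine, not from the episode or the show's whiteboard):

```python
import math

def rim_speed(v_cm, theta):
    """Speed of a point on the rim of a wheel rolling without slipping,
    when the point sits at rolling angle theta (0 = contact point P,
    pi = top point Q). Obtained by differentiating the cycloid."""
    vx = v_cm * (1 - math.cos(theta))  # translational plus horizontal rotational part
    vy = v_cm * math.sin(theta)        # vertical rotational part
    return math.hypot(vx, vy)

# Contact point P is instantaneously at rest; top point Q moves at 2*v_cm.
p_speed = rim_speed(3.0, 0.0)      # -> 0.0
q_speed = rim_speed(3.0, math.pi)  # -> 6.0
```

The center, which is not a rim point, simply moves at \(v_{\textrm{cm}}\), so all three whiteboard values are recovered.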
I'd like to pose a question about uniform sampling on the surface of a sphere.
I searched this site, and uniform sampling on a sphere surface seems to be quite a common problem. The common solution is to take two uniform random numbers $u$ and $v$ between 0 and 1, then compute the spherical coordinates as follows:
$\theta = 2\pi u$, $\phi = \cos^{-1}(2v - 1)$.
However, in my application I am required to sample a point uniformly on a small patch of the surface, namely with $\theta$ between $\theta_{min}$ and $\theta_{max}$ and with $\phi$ between $\phi_{min}$ and $\phi_{max}$. I could re-pick random numbers until these requirements are met, but since the patch can be quite small and efficiency is an issue (this is for a ray-tracer), I would like to know if there is an algebraic solution that involves limiting the range of the random numbers $u$ and $v$.
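Since $\theta$ is linear in $u$ and $\cos\phi$ is linear in $v$, restricting each random number to the sub-interval that maps onto the patch does give an algebraic rejection-free sampler. A sketch of that idea (function and variable names are mine, not from any particular ray-tracer):

```python
import math
import random

def sample_sphere_patch(theta_min, theta_max, phi_min, phi_max, rng=random):
    """Area-uniform sample on the unit-sphere patch
    theta in [theta_min, theta_max], phi in [phi_min, phi_max]."""
    # theta = 2*pi*u, so restricting u to [theta_min, theta_max]/(2*pi)
    # is the same as drawing theta uniformly on [theta_min, theta_max].
    theta = rng.uniform(theta_min, theta_max)
    # phi = acos(2v - 1): area-uniform means cos(phi) is uniform, so draw
    # cos(phi) directly on [cos(phi_max), cos(phi_min)] (cos is decreasing).
    cos_phi = rng.uniform(math.cos(phi_max), math.cos(phi_min))
    phi = math.acos(cos_phi)
    return theta, phi

theta, phi = sample_sphere_patch(0.0, math.pi / 4, math.pi / 3, math.pi / 2)
```

Every draw lands in the patch, so no samples are wasted, which matters when the patch is small.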
This article is aimed at relatively new LaTeX users. It is written particularly for my own students, with the aim of helping them to avoid making common errors. The article exists in two forms: this WordPress blog post and a PDF file generated by LaTeX, both produced from the same Emacs Org file. Since WordPress does not handle LaTeX very well I recommend reading the PDF version.
1. New Paragraphs
In LaTeX a new paragraph is started by leaving a blank line. Do not start a new paragraph by using \\ (it merely terminates a line). Indeed you should almost never type \\, except within environments such as array, tabular, and so on.
2. Math Mode
Always type mathematics in math mode (as $..$ or \(..\)), to produce "$y = f(x)$" instead of "y = f(x)", and "the dimension $n$" instead of "the dimension n". For displayed equations use $$..$$, \[..\], or one of the display environments (see Section 12).
Punctuation should appear outside math mode for inline equations; otherwise the spacing will be incorrect. Here is an example.
Correct:
The variables $x$, $y$, and $z$ satisfy $x^2 + y^2 = z^2$.
Incorrect:
The variables $x,$ $y,$ and $z$ satisfy $x^2 + y^2 = z^2.$
For displayed equations, punctuation should appear as part of the display. All equations must be punctuated, as they are part of a sentence.

3. Mathematical Functions in Roman
Mathematical functions should be typeset in roman font. This is done automatically for the many standard mathematical functions that LaTeX supports, such as \sin, \tan, \exp, \max, etc.

If the function you need is not built into LaTeX, create your own. The easiest way to do this is to use the amsmath package and type, for example,

\usepackage{amsmath} ... % In the preamble.
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\inert}{Inertia}
Alternatively, if you are not using the amsmath package you can type

\def\diag{\mathop{\mathrm{diag}}}

4. Maths Expressions
Ellipses (dots) are never explicitly typed as "…". Instead they are typed as \dots for baseline dots, as in $x_1,x_2,\dots,x_n$, or as \cdots for vertically centered dots, as in $x_1 + x_2 + \cdots + x_n$.

Type $i$th instead of $i'th$ or $i^{th}$. (For some subtle aspects of the use of ellipses, see How To Typeset an Ellipsis in a Mathematical Expression.)
Avoid using \frac to produce stacked fractions in the text; write the slashed form, e.g. $n^3/3$ flops rather than $\frac{n^3}{3}$ flops.

For "much less than", type \ll, not <<. Similarly, "much greater than" is typed as \gg. If you are using angle brackets to denote an inner product use \langle and \rangle:

incorrect: $<x,y>$
correct: $\langle x,y \rangle$
5. Text in Displayed Equations
When a displayed equation contains text such as "subject to $x \ge 0$", instead of putting the text in \mathrm put the text in an \mbox, as in \mbox{subject to $x \ge 0$}. Note that \mbox switches out of math mode, and this has the advantage of ensuring the correct spacing between words. If you are using the amsmath package you can use the \text command instead of \mbox.

Example:

$$ \min\{\, \|A-X\|_F: \mbox{$X$ is a correlation matrix} \,\}. $$

6. BibTeX
Produce your bibliographies using BibTeX, creating your own bib file. Note three important points.
"Export citation" options on journal websites rarely produce perfect bib entries. More often than not the entry has an improperly cased title, an incomplete or incorrectly accented author name, improperly typeset maths in the title, or some other error, so always check and improve the entry. If you wish to cite one of my papers download the latest version of njhigham.bib (along with strings.bib supplied with it) and include it in your \bibliography command.
Decide on a consistent format for your bib entry keys and stick to it. In the format used in the Numerical Linear Algebra group at Manchester a 2010 paper by Smith and Jones has key smjo10, a 1974 book by Aho, Hopcroft, and Ullman has key ahu74, while a 1990 book by Smith has key smit90.
7. Spelling Errors and LaTeX Errors
There is no excuse for your writing to contain spelling errors, given the wide availability of spell checkers. You'll need a spell checker that understands LaTeX syntax.
There are also tools for checking LaTeX syntax. One that comes with TeX Live is lacheck, which describes itself as "a consistency checker for LaTeX documents". Such a tool can point out possible syntax errors, or semantic errors such as unmatched parentheses, and warn of common mistakes.
8. Quotation Marks
LaTeX has a left quotation mark, \lq, and a right quotation mark, \rq, typed as the single left and right quotes on the keyboard, respectively. A left or right double quotation mark is produced by typing two single quotes of the appropriate type. The keyboard double quotation mark character itself produces the same as two right quotation marks. Example: "hello" is typed as \lq\lq hello \rq\rq.
9. Captions
Captions go above tables but below figures. So put the \caption command at the start of a table environment but at the end of a figure environment. The \label statement should go after the \caption statement (or it can be put inside it), otherwise references to that label will refer to the subsection in which the label appears rather than the figure or table.
10. Tables
LaTeX makes it easy to put many rules, some of them double, in and around a table, using \cline, \hline, and the | column formatting symbol. However, it is good style to minimize the number of rules. A common task for journal copy editors is to remove rules from tables in submitted manuscripts.
11. Source Code
LaTeX source code should be laid out so that it is readable, in order to aid editing and debugging, to help you to understand the code when you return to it after a break, and to aid collaborative writing. Readability means that the logical structure should be apparent, in the same way as when indentation is used in writing a computer program. In particular, it is a good idea to start new sentences on new lines, which makes it easier to cut and paste them during editing, and also makes a diff of two versions of the file more readable.
Example:
Good:
$$
U(\zbar) = U(-z) =
  \begin{cases}
    -U(z),   & z\in D, \\
    -U(z)-1, & \mbox{otherwise}.
  \end{cases}
$$
Bad:
$$U(\zbar) = U(-z) = \begin{cases}-U(z), & z\in D, \\ -U(z)-1, & \mbox{otherwise}. \end{cases}$$

12. Multiline Displayed Equations
For displayed equations occupying more than one line it is best to use the environments provided by the amsmath package. Of these, align (and align* if equation numbers are not wanted) is the one I use almost all the time. Example:
\begin{align*}
  \cos(A) &= I - \frac{A^2}{2!} + \frac{A^4}{4!} + \cdots,\\
  \sin(A) &= A - \frac{A^3}{3!} + \frac{A^5}{5!} - \cdots,
\end{align*}
Others, such as gather and aligned, are occasionally needed.

Avoid using the standard environment eqnarray, because it doesn't produce as good results as the amsmath environments, nor is it as versatile. For more details see the article Avoid Eqnarray.
13. Synonyms
This final category concerns synonyms and is a matter of personal preference. I prefer \ge and \le to the equivalent \geq and \leq (why type the extra characters?).

I also prefer to use $..$ for math mode instead of \(..\) and $$..$$ for display math mode instead of \[..\]. My preferences are the original TeX syntax, while the alternatives were introduced by LaTeX. The slashed forms are obviously easier to parse, but this is one case where I prefer to stick with tradition. If dollar signs are good enough for Don Knuth, they are good enough for me!
I don't think many people use LaTeX's verbose \begin{math}..\end{math} or \begin{displaymath}..\end{displaymath}.

Also note that \begin{equation*}..\end{equation*} (for unnumbered equations) exists in the amsmath package but not in LaTeX itself.
WHY?
Goal-oriented dialogue tasks require two agents (a questioner and an answerer) to communicate to solve the task. Previous supervised learning and reinforcement learning approaches struggled to generate appropriate questions due to the complexity of forming a sentence. This paper suggests an information-theoretic approach to the task.
WHAT?
The answerer model for VQA is a simple neural network, the same as in previous methods. In the earlier SL and RL methods, the questioner had two RNN-based models, one for generating a question and one for guessing the answer image. In contrast, the questioner of AQM uses a mathematical calculation instead of RNN models.
The questioner of AQM first generates question candidates to choose from (the Q-sampler). Next, the questioner calculates the information gain of each question candidate and chooses the question with the greatest information gain.
I[C, A_t; q_t, a_{1:t-1}, q_{1:t-1}]\\= H[C; a_{1:t-1}, q_{1:t-1}] - H[C|A_t; q_t, a_{1:t-1}, q_{1:t-1}]\\= \sum_{a_t}\sum_c p(c|a_{1:t-1}, q_{1:t-1})p(a_t|c, q_t, a_{1:t-1}, q_{1:t-1}) \ln \frac{p(a_t|c, q_t, a_{1:t-1}, q_{1:t-1})}{p(a_t|q_t, a_{1:t-1}, q_{1:t-1})}\\p(c|a_{1:t}, q_{1:t}) \propto p(c)\prod_{j=1}^t p(a_j|c, q_j, a_{1:j-1}, q_{1:j-1})
Since computing the posterior requires the answerer's answer distribution, the questioner approximates it.
\hat{p}(a_t|c, q_t, a_{1:t-1}, q_{1:t-1}) \propto \tilde{p}'(c)\prod_{j=1}^t \tilde{p}(a_j|c, q_j, a_{1:j-1}, q_{1:j-1})
The questioner selects the question that maximizes the information gain, based on this approximate answer distribution and the resulting posterior.
q_t^* = argmax_{q_t \in Q} \tilde{I}[C, A_t; q_t, a_{1:t-1}, q_{1:t-1}]\\= argmax_{q_t \in Q} \sum_{a_t}\sum_c \hat{p}(c|a_{1:t-1}, q_{1:t-1})\tilde{p}(a_t|c, q_t, a_{1:t-1}, q_{1:t-1}) \ln \frac{\tilde{p}(a_t|c, q_t, a_{1:t-1}, q_{1:t-1})}{\tilde{p}'(a_t|q_t, a_{1:t-1}, q_{1:t-1})}\\\tilde{p}'(a_t|q_t, a_{1:t-1}, q_{1:t-1}) = \sum_c\hat{p}(c|a_{1:t-1}, q_{1:t-1})\cdot\tilde{p}(c, q_t, a_{1:t-1}, q_{1:t-1})
The algorithm for AQM’s questioner is as follows.
In practice, there are some implementation options. First, the Q-sampler that generates question candidates can be randQ, which samples questions at random, or countQ, which samples questions with the least correlation. Second, YOLO9000 is used to pick the list of candidate objects, and the prior over these candidates is set to 1/N. Third, the answerer model picks the answer independently of the history. The approximation of the answerer model, on the other hand, can vary: it can be trained independently of the answerer model (indA) or trained from the answers of the answerer model (depA). While these two share the same training dataset, indAhalf and depAhalf do not share the dataset.
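The information-gain criterion that drives question selection can be illustrated with a toy calculation. The two-class, yes/no distributions below are made up for the sketch, not taken from the paper, and the history conditioning is dropped:

```python
import math

def information_gain(prior, likelihoods):
    """I[C, A; q] = sum_a sum_c p(c) p(a|c,q) ln( p(a|c,q) / p(a|q) ),
    where p(a|q) = sum_c p(c) p(a|c,q)."""
    n_answers = len(next(iter(likelihoods.values())))
    gain = 0.0
    for a in range(n_answers):
        p_a = sum(prior[c] * likelihoods[c][a] for c in prior)  # marginal p(a|q)
        for c in prior:
            p_ac = likelihoods[c][a]
            if p_ac > 0.0:
                gain += prior[c] * p_ac * math.log(p_ac / p_a)
    return gain

prior = {"cat": 0.5, "dog": 0.5}
# p(answer | class, question) for two hypothetical candidate questions:
q_discriminative = {"cat": [0.9, 0.1], "dog": [0.1, 0.9]}  # answers split the classes
q_uninformative  = {"cat": [0.5, 0.5], "dog": [0.5, 0.5]}  # answers reveal nothing

candidates = [("discriminative", q_discriminative), ("uninformative", q_uninformative)]
best = max(candidates, key=lambda kv: information_gain(prior, kv[1]))[0]
```

As expected, the uninformative question has zero gain, so the argmax rule picks the question whose answers actually separate the candidate classes.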
So?
AQM outperformed previous methods in goal-oriented dialogue tasks such as MNIST Counting Dialog and GuessWhat?!. GuessWhat?! is a goal-oriented dialogue task in which the questioner needs to guess an object in an image by asking questions to the answerer, while only the answerer knows the answer.
Is there a well-known Lagrangian that, writing the corresponding eq of motion, gives the Klein-Gordon Equation in QFT? If so, what is it?
What is the canonical conjugate momentum? I derive the same result as in two sources separately, but with opposite sign, and I am starting to suspect that the error could be in the Lagrangian I am departing from.
Is there any difference in the answers to those two questions if you choose (+---) or (-+++)? If so, which one?
Yes. The standard scalar field which all QFT books (e.g. Peskin & Schroeder, Zee) start with yields the KG equation. For that reason it is also called the Klein-Gordon field. The Lagrangian (density) is \begin{align} \mathcal{L} = \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2} m^2 \phi^2. \end{align} Here the metric is (+ - - -).
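As a quick check, substituting this Lagrangian into the Euler–Lagrange equation for fields, using $\partial\mathcal{L}/\partial(\partial_\mu\phi) = \partial^\mu\phi$ and $\partial\mathcal{L}/\partial\phi = -m^2\phi$, reproduces the Klein–Gordon equation:

```latex
\partial_\mu\!\left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}\right)
- \frac{\partial \mathcal{L}}{\partial \phi}
= \partial_\mu \partial^\mu \phi + m^2 \phi
= (\Box + m^2)\,\phi = 0 .
```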
By definition it is $\pi = \frac{\partial \mathcal{L}}{\partial (\partial_0 \phi)}$. This gives $\pi = \partial_0 \phi$.
It is purely convention, there is no right choice. The only difference in using a different metric will be in how we write things down - any quantities that involve contraction with the metric $\eta_{\mu \nu}$ will change by a minus sign. For example in the Lagrangian, using the metric (- + + +), the first term is changed to $-\frac{1}{2} \partial_\mu \phi \partial^\mu \phi$. But this is still equal to $\frac{1}{2}(\partial_t^2 \phi - \nabla^2 \phi)$ regardless of which metric we use. |
Say I observe n univariate random variables $X_1, \dots, X_n$ that are each $N(\mu, \sigma^2)$ with common correlation $\rho$. Is it possible that these are jointly normal? If so, what are the conditions, and how would I know whether they are jointly normal?
There are no conditions based only on the marginal pdfs that can ensure joint normality. Let $\phi(\cdot)$ denote the standard normal density. Then, if $X$ and $Y$ have joint pdf$$f_{X,Y}(x,y) = \begin{cases} 2\phi(x)\phi(y), & x \geq 0, y \geq 0,\\2\phi(x)\phi(y), & x < 0, y < 0,\\0, &\text{otherwise},\end{cases}$$then $X$ and $Y$ are (positively) correlated standard normal random variables (work out the marginal densities to verify this if it is not immediately obvious) that do not have a bivariate joint normal density. So, given only that $X$ and $Y$ are correlated standard normal random variables, how can we tell whether $X$ and $Y$ have the joint pdf shown above or the bivariate joint normal density with the same correlation coefficient ?
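The counterexample density is easy to sample from, which makes the point tangible: draw two independent half-normal magnitudes and give them a common random sign. A short sketch (the function name is mine):

```python
import random

def sample_same_sign_pair():
    """Draw (X, Y) from f(x,y) = 2*phi(x)*phi(y) on the quadrants
    where x and y share a sign (the counterexample density above)."""
    s = 1.0 if random.random() < 0.5 else -1.0  # common sign for both coordinates
    x = s * abs(random.gauss(0.0, 1.0))
    y = s * abs(random.gauss(0.0, 1.0))
    return x, y
```

Samples never land in the opposite-sign quadrants, which is impossible for a nondegenerate bivariate normal with $|\rho|<1$, yet each marginal is exactly standard normal.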
In the opposite direction, if $X$ and $Y$ are independent random variables (note the utter lack of mention of normality of $X$ and $Y$) and $X+Y$ is normal, then $X$ and $Y$ are normal random variables (Feller, Chapter XV.8, Theorem 1). |
As the neutron is not point-like, consider it to have a continuous charge distribution $\rho(\mathbf{r})$ confined to a volume $\Omega$. The electric dipole moment is then given by
$\mathbf{D}=\int_\Omega \mathbf{r}'\,\rho(\mathbf{r}')\,d^3r'$
where the coordinates are measured from the centre of mass of the distribution. For a charged particle, this definition implies that for $\mathbf{D} \neq\mathbf{0}$ the "centre of charge" is displaced from the centre of mass of the distribution. For a distribution which has no net charge, that is
$Q=\int_\Omega \rho(\mathbf{r}) d^3r=0$
this definition implies that there is a greater positive charge on one side of the distribution and a correspondingly greater negative charge on the other side.
Consider now that your particle has angular momentum $\mathbf{J}$ and that its orientation is given by $m$ (the eigenvalue of the $\hat{J}_z$ operator) relative to the $\hat{\mathbf{z}}$ axis. Notice that the only way to know the orientation of your charge distribution ("particle") is by the orientation of the angular momentum.
As a consequence, $\mathbf{J}$ and $\mathbf{D}$ must transform in the same way under parity $P$ and time reversal $T$ if $\mathbf{D} \neq \mathbf{0}$ and if $P$ and $T$ are symmetries. But $\mathbf{D}$ changes sign under $P$ whereas $\mathbf{J}$ does not, so $\mathbf{D}$ must vanish if there is $P$ symmetry. Similarly, $\mathbf{D}$ does not change sign under $T$ but $\mathbf{J}$ does, so $\mathbf{D}$ has to vanish if there is $T$ symmetry. Hence, if the neutron electric dipole moment is not zero, we will have a violation of $P$ and $T$ symmetry.
Remark: This argument only applies to particles with non-zero dipole moment.
Experimental searches for the neutron electric dipole moment can be found in:
Smith et al., Phys. Rev. 108, 120 (1957) [link to paper].
Baker et al., Phys. Rev. Lett. 97, 131801 (2006) [link to paper].
The upper bound in the last one for $|\mathbf{D}|$ is $2.9 \cdot 10^{-26}$ e cm.
D.
EDIT: As David said below, there is no $CPT$ violation in the hypothetical case of having $PT$ violation [= existence of a non-zero electric dipole moment]. |
Counting curves, and the stable length of currents
; Parlier, Hugo ;
E-print/Working paper (2016)
Let $\gamma_0$ be a curve on a surface $\Sigma$ of genus $g$ and with $r$ boundary components and let $\pi_1(\Sigma)\curvearrowright X$ be a discrete and cocompact action on some metric space. We study the asymptotic behavior of the number of curves $\gamma$ of type $\gamma_0$ with translation length at most $L$ on $X$. For example, as an application, we derive that for any finite generating set $S$ of $\pi_1(\Sigma)$ the limit $$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\#\{\gamma\text{ of type }\gamma_0\text{ with }S\text{-translation length}\le L\}$$ exists and is positive. The main new technical tool is that the function which associates to each curve its stable length with respect to the action on $X$ extends to a (unique) continuous and homogeneous function on the space of currents. We prove that this is indeed the case for any action of a torsion free hyperbolic group.
Short closed geodesics with self-intersections
; Parlier, Hugo
E-print/Working paper (2016)
Our main point of focus is the set of closed geodesics on hyperbolic surfaces. For any fixed integer $k$, we are interested in the set of all closed geodesics with at least $k$ (but possibly more) self-intersections. Among these, we consider those of minimal length and investigate their self-intersection numbers. We prove that their intersection numbers are upper bounded by a universal linear function in $k$ (which holds for any hyperbolic surface). Moreover, in the presence of cusps, we get bounds which imply that the self-intersection numbers behave asymptotically like $k$ for growing $k$. |
Homogenization for a nonlinear wave equation in domains with holes of small capacity
1. Department of Mathematics, State University of Maringá, 87020-900 Maringá, PR, Brazil
2. Department of Mathematics, State University of Maringá, Avenida Colombo, 5790, 87020-900 Maringá, PR, Brazil
The paper studies the nonlinear wave equation
$\partial_{t t} u_{\varepsilon} - \Delta u_{\varepsilon} + \partial_t F(u_{\varepsilon}) = 0$ in $\Omega_{\varepsilon}\times(0,+\infty),$
where $\Omega_{\varepsilon}$ is a domain containing holes with small capacity. In the context of optimal control, this semilinear hyperbolic equation was studied by Lions (1980) through a theory of ultra-weak solutions. Combining his arguments with the abstract framework proposed by Cioranescu and Murat (1982) for the homogenization of elliptic problems, a new approach is presented to solve the above nonlinear homogenization problem. In the linear case, this improves early classical results by Cioranescu, Donato, Murat and Zuazua (1991).
Mathematics Subject Classification: Primary: 35B27; Secondary: 35B4. Citation: M. M. Cavalcanti, V. N. Domingos Cavalcanti, D. Andrade, T. F. Ma. Homogenization for a nonlinear wave equation in domains with holes of small capacity. Discrete & Continuous Dynamical Systems - A, 2006, 16 (4): 721-743. doi: 10.3934/dcds.2006.16.721
|
Find all the positive integers $m$ such that $$p_{m}\geq 2m$$
where $(p_{m})$ is the sequence of prime numbers
I have no idea how to start.
Note that $7 < 2\cdot 4$, while $11 > 2\cdot 5$, and every prime after that is odd.
We have $p_n\ge n\log(n)\ge 2n$ for all $n\ge 8$. This follows from the standard estimates, e.g., see here.
@Dietrich Burde's answer does the hard part of proving the high-$m$ behavior.
However, the correct answer to the problem is
$$ m \in \{1\} \cup \{\, n \mid n \geq 5 \,\} $$ |
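The answer is easy to verify numerically for small $m$ (and the asymptotic estimate above covers the rest). A quick sieve-based check:

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [n for n, is_p in enumerate(sieve) if is_p]

# p_m >= 2m should hold exactly for m = 1 and every m >= 5
primes = primes_up_to(10000)
holds = [m for m in range(1, len(primes) + 1) if primes[m - 1] >= 2 * m]
```

Only $m = 2, 3, 4$ fail, matching the set above.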
Two classes of time-series models, which are somewhat similar to each other, spring to mind as potentially being useful in this context; ARMAX models and ARDL models. Let me set the scene by jogging your memory and reminding you what these models look like (in brief, so please jog!).
ARMAX
An ARMAX model would provide you with a framework to explain the number of new jobs not only by the index of business development (the explanatory variable), but also by its own lagged values (AR terms) and disturbances (MA terms). This kind of model, known as an ARMAX(p,q) model, can take the following form $$y_{t} = \beta x_{t} + \phi_{1}y_{t-1} + \cdots + \phi_{p}y_{t-p} + \epsilon_{t} + \theta_{1}\epsilon_{t-1} + \cdots + \theta_{q}\epsilon_{t-q}$$ where the number of new jobs and the index of business development are denoted by $y_{t}$ and $x_{t}$, respectively. My favourite online reference for these models is this Rob Hyndman blog post and I recommend checking it out for more details (subtleties).
ARDL
An ARDL model would provide you with a framework to explain the number of new jobs by its own lags (AR terms) and by both contemporaneous and lagged values of the index of business development. This kind of model, known as an ARDL(p,q) model, can be written in the form$$y_{t} = \delta + \sum_{i=1}^{p} \alpha_{i}y_{t-i} + \sum_{j=0}^{q} \beta_{j}x_{t-j} + u_{t}$$where $u_{t}$ is an error term. As you can see, it is similar to ARMAX; constant terms can be included or not in either.
For purposes of demonstration, let's consider an ARDL(1,1) model to see the potential usefulness of this particular class of models in the present context. The ARDL(1,1) model can be expressed as follows$$y_{t} = \delta + \alpha_{1}y_{t-1} + \beta_{0}x_{t} + \beta_{1}x_{t-1} + u_{t}.$$
Nested within this model are some nice special cases with interesting interpretations! I'll mention just a few which could be useful to you and I'll leave a reference in case you want to track down the others. Note that the special cases can be extended into the more general ARDL(p,q) model.
- Static regression ($\alpha_{1} = \beta_{1} = 0$): $$y_{t} = \delta + \beta_{0}x_{t} + u_{t}$$
- Leading-indicator model ($\alpha_{1} = \beta_{0} = 0$): $$y_{t} = \delta + \beta_{1}x_{t-1} + u_{t}$$
- Partial adjustment model ($\beta_{1} = 0$): $$y_{t} = \delta + \alpha_{1}y_{t-1} + \beta_{0}x_{t} + u_{t}$$
- Error correction model ($\beta_{0} + \beta_{1} = 1 - \alpha_{1}$): $$\Delta y_{t} = \delta + \beta_{0}\Delta x_{t} + (\alpha_{1}-1)(y_{t-1}-x_{t-1}) + u_{t}$$
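To see the ARDL(1,1) machinery in action, here is a self-contained sketch that simulates the model and recovers its coefficients by plain OLS via the normal equations. The parameter values are made up for illustration.

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(len(y))) for i in range(k)]
    for i in range(k):                       # forward elimination, partial pivot
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

def simulate_ardl11(n, delta, alpha1, beta0, beta1, sigma_u, seed=0):
    """y_t = delta + alpha1*y_{t-1} + beta0*x_t + beta1*x_{t-1} + u_t."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [0.0] * n
    for t in range(1, n):
        y[t] = (delta + alpha1 * y[t - 1] + beta0 * x[t]
                + beta1 * x[t - 1] + rng.gauss(0, sigma_u))
    return x, y

x, y = simulate_ardl11(5000, delta=0.2, alpha1=0.5, beta0=1.0, beta1=0.3,
                       sigma_u=0.1)
X = [[1.0, y[t - 1], x[t], x[t - 1]] for t in range(1, len(y))]
est = ols(X, y[1:])   # estimates [delta, alpha1, beta0, beta1]
```

With a reasonably long sample the OLS estimates land close to the true $(\delta, \alpha_1, \beta_0, \beta_1)$.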
OK, so there we have a bunch of models that could be useful, but in order to build the empirical model some investigation is required. In other words, to select a model you'll need to employ some strategy.
Note that the nested nature of the ARDL model provides a modelling environment conducive to the general-to-specific approach of time-series econometric modelling, which is associated with David F. Hendry, so there's one place to look.
Hopefully that has jump started your brain enough to proceed, but let me end with some suggestions (listed in no particular order).
- The cross correlation function (CCF) can be used to see if one variable leads another. Uncovering leading behaviour would help justify a leading-indicator model, for example, but the data will have to speak.
- Perform (non-)stationarity tests, both formal and informal. It looks as though you'll need to induce stationarity (although this depends; see the next point).
- Test for cointegration if necessary.
- Use the following time-domain tools to help identify structure in each time series: the autocorrelation function (ACF), partial autocorrelation function (PACF), and residual autocorrelation function. These tools will help specify the final model and, in the latter case, also help perform diagnostic checks. Also, eyeballing the data in levels will not show you many of the hidden dynamics that these tools may bring to light.
- Don't rule out the ARIMA class of models, but also consider other explanatory variables if you have reason to. If you really want to explain, consider using theory as a guide to build a proper econometric model; are new jobs a function of firm birth/death rates, tourist visits, something else, etc.?
- If you want to revert to a univariate framework, there are automated procedures in R that can be used to build both ARIMA and ETS models.
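As one concrete tool from the list above, the CCF is simple enough to compute by hand; a minimal sketch (function name is mine):

```python
def ccf(x, y, max_lag):
    """Sample cross-correlation of x with lagged y: corr(x_t, y_{t-k})
    for k = 0..max_lag. A large value at a positive k suggests y leads x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = []
    for k in range(max_lag + 1):
        num = sum((x[t] - mx) * (y[t - k] - my) for t in range(k, n))
        out.append(num / (sx * sy))
    return out
```

A spike at lag $k>0$ in `ccf(jobs, index, max_lag)` would be evidence that the business index leads new jobs by $k$ periods.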
Based on the information provided, that's all I can say. Fundamentally, some modelling will have to be done on your behalf. I hope I've helped. |
Christoffel
I followed equations (3.5)-(3.10) carefully, because I had fallen into the same trap as before on one step, and found a sign error in Carroll's (3.10), which is the important equation for the transformation of the connection. The error is fairly obvious: (3.10) comes from (3.9), and a + term has moved to the other side of the equation without changing sign. It was also confirmed by notes I found at Physics 171 and by the proof of Carroll's (3.26), see below. I struggled with that proof (and part 2 of exercise 1) for too long. Eventually I found it after coming across an erroneous proof on another website, which nevertheless gave me a great new indexing trick (note d). I have corrected the error here.
I still do not understand Carroll's third rule for covariant derivatives, that they commute with contractions, but he never seems to use it. Its meaning provoked a discussion on Physics Forums which did not help me. In another discussion my false assumption about commuting partial derivatives was exposed.
The three most important equations here are$$
{\mathrm{\Gamma }}^{\nu '}_{\mu '\lambda '}=\frac{\partial x^{\mu }}{\partial x^{\mu '}}\frac{\partial x^{\lambda }}{\partial x^{\lambda '}}\frac{\partial x^{\nu '}}{\partial x^{\nu }}{\mathrm{\Gamma }}^{\nu }_{\mu \lambda }-\frac{\partial x^{\mu }}{\partial x^{\mu '}}\frac{\partial x^{\lambda }}{\partial x^{\lambda '}}\frac{{\partial }^2x^{\nu '}}{\partial x^{\mu }\partial x^{\lambda }}
$$ and $$
{\mathrm{\Gamma }}^{\nu '}_{\mu '\lambda '}=\frac{\partial x^{\mu }}{\partial x^{\mu '}}\frac{\partial x^{\lambda }}{\partial x^{\lambda '}}\frac{\partial x^{\nu '}}{\partial x^{\nu }}{\mathrm{\Gamma }}^{\nu }_{\mu \lambda }+\frac{\partial x^{\nu '}}{\partial x^{\lambda }}\frac{{\partial }^2x^{\lambda }}{\partial x^{\mu '}\partial x^{\lambda '}}
$$ which are alternatives for the transformation of a connection. The first one is Carroll's (3.10), corrected.
The third is Carroll's (3.27). He writes it is "one of the most important equations in this subject; commit it to memory." It is for a torsion-free (##{\mathrm{\Gamma }}^{\lambda }_{\mu \nu }={\mathrm{\Gamma }}^{\lambda }_{\nu \mu }##) metric-compatible (##{\mathrm{\nabla }}_{\rho }g_{\mu \nu }=0##) connection and is$$
{\mathrm{\Gamma }}^{\sigma }_{\mu \nu }=\frac{1}{2}g^{\sigma \rho }\left({\partial }_{\mu }g_{\nu \rho }+{\partial }_{\nu }g_{\rho \mu }-{\partial }_{\rho }g_{\mu \nu }\right)
$$For some reason Carroll writes ##{\mathrm{\Gamma }}^{\lambda }_{\mu \nu }={\mathrm{\Gamma }}^{\lambda }_{\nu \mu }## as ##{\mathrm{\Gamma }}^{\lambda }_{\mu \nu }={\mathrm{\Gamma }}^{\lambda }_{(\mu \nu )}## which is the same but more complicated. The brackets are the symmetrisation operator.
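The "most important equation" (3.27) can be sanity-checked numerically: compute the metric derivatives by finite differences and contract with the inverse metric. A small sketch for a 2D metric, tested on the round unit sphere where $\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta$ and $\Gamma^\phi_{\theta\phi} = \cot\theta$:

```python
import math

def christoffel_2d(metric, x, h=1e-5):
    """Christoffel symbols Gamma[s][m][n] for a 2D metric, via
    Gamma^s_{mn} = (1/2) g^{sr} (d_m g_{nr} + d_n g_{rm} - d_r g_{mn}),
    with partial derivatives taken by central finite differences."""
    def d(k):  # d_k g_{ij} at x
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        gp, gm = metric(xp), metric(xm)
        return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)]
                for i in range(2)]
    g = metric(x)
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]
    dg = [d(0), d(1)]
    return [[[0.5 * sum(ginv[s][r] * (dg[m][n][r] + dg[n][r][m] - dg[r][m][n])
                        for r in range(2))
              for n in range(2)] for m in range(2)] for s in range(2)]

def sphere(x):
    # Round unit sphere in (theta, phi) coordinates: g = diag(1, sin^2 theta)
    return [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]
```

Evaluating `christoffel_2d(sphere, [0.7, 0.3])` reproduces the textbook values to finite-difference accuracy.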
Read all 12 pages at Commentary 3.2 Christoffel Symbol.pdf. |
I want to count the number of ways one can produce $k$ subsets of a larger set $V$ of size $n$ such that 1) each subset $X_i$ has size at most $b$, 2) the subsets do not have a common element, and 3) the subsets altogether cover the elements of the larger set $V$.
More specifically, given a set $V$ where $|V|=n$, how many possibilities is there to choose subsets $X_1, ..., X_k \subseteq V$ such that
1) $|X_i| \leq b$,
2) $X_i \cap X_j = \emptyset$ for all $i \neq j$,
3) $\bigcup_{i \in \{1,...,k\}} X_i = V$,
Two possibilities are different if they do not consist of exactly the same subsets. The order of the subsets doesn't matter.
If finding the exact number of possibilities is hard, an upper bound suffices.
Can anyone suggest an approach to counting such possibilities? |
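For small instances, brute force over all set partitions gives the exact count. Here I read the conditions as "at most $k$ nonempty blocks", so that empty $X_i$ absorb the "exactly $k$" requirement; a sketch:

```python
def partitions(elems):
    """Yield all set partitions of the list elems (blocks are lists)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        # put `first` into an existing block...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ...or into a new block of its own
        yield part + [[first]]

def count_covers(n, k, b):
    """Number of ways to split an n-set into at most k unordered,
    pairwise-disjoint, nonempty blocks, each of size at most b,
    covering everything."""
    return sum(1 for p in partitions(list(range(n)))
               if len(p) <= k and all(len(block) <= b for block in p))
```

With $k \geq n$ and $b \geq n$ this recovers the Bell numbers, which is a handy sanity check against any closed form or upper bound.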
I am learning geometric algebra, and came across the following identity (edited according to Andrey's comments below):
$$ (a\wedge b)\cdot(c\wedge d) = (a \cdot d)(b\cdot c) - (a \cdot c)(b \cdot d)$$
How can one prove this without using indices? For example, starting from the geometric algebra definitions $$a\cdot b=\dfrac {ab+ba}{2}=b\cdot a$$
and $$a\wedge b=\dfrac {ab-ba}{2}$$ |
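Not a proof, but a useful numeric check of the identity (and of the sign convention): represent $a\wedge b$ by its components $(a\wedge b)_{ij}=a_ib_j-a_jb_i$. The scalar part of the product of two bivectors is then $-\tfrac12\sum_{ij}A_{ij}B_{ij}$, so that $(e_1\wedge e_2)\cdot(e_1\wedge e_2)=-1$. Function names are mine.

```python
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def wedge(u, v):
    """Components (u ^ v)_{ij} = u_i v_j - u_j v_i."""
    n = len(u)
    return [[u[i] * v[j] - u[j] * v[i] for j in range(n)] for i in range(n)]

def bivector_dot(A, B):
    """Scalar part of the geometric product of two bivectors:
    -1/2 * sum_ij A_ij B_ij with the component convention above."""
    return -0.5 * sum(A[i][j] * B[i][j]
                      for i in range(len(A)) for j in range(len(A)))

rng = random.Random(1)
a, b, c, d = ([rng.uniform(-1, 1) for _ in range(3)] for _ in range(4))
lhs = bivector_dot(wedge(a, b), wedge(c, d))
rhs = dot(a, d) * dot(b, c) - dot(a, c) * dot(b, d)
```

The two sides agree to machine precision for arbitrary vectors, matching the sign stated in the identity.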
Given the equation
$$E(t)=A \exp(-bt)$$
where $A$ and $b$ are constants, $E$ is energy and $t$ is time: if there is an error of, say, 1.5 percent in the measured value of $t$, what is the error in the value of the energy? How can I find it?
First and foremost, this looks like an error propagation problem. You are given an equation and some measurement of error.
Formal error propagation is given approximately by the following formula (ignoring all covariance):
$$ \sigma_f^2 = \sum_i \sigma_{x_i}^2 \left(\frac{\partial f(x_i)}{\partial x_i} \right)^2$$
Where $\sigma_f$ is the total error that should be propagated, $\sigma_{x_i}$ is the error on the given varying element $x_i$ (in your case, the $t$ variable), and $f(x_i)$ is the function through which you are trying to propagate the error (in your case, the equation $E(t) = A \exp\left( -bt \right)$).
Because you only have one term in your equation which has an error, namely $t$, with relative error $\sigma_t/t = 0.015$, you can solve the error propagation formula for your new error.
The corresponding article on error propagation on Wikipedia has much more detailed and formal information about this subject. In particular, see the section of pre-calculated error propagation formulas. Of interest is the following:
$$ f = a \exp\left(bA\right) \qquad \Rightarrow \qquad \sigma_f^2 \approx f^2 \left(b \sigma_A \right)^2$$
Given the real value $A$, with error $\sigma_A$; the exactly known real-valued constants $a,b$ where $\sigma_a = \sigma_b = 0$. Here $f$ denotes the resulting value of the original function $f(x_i)$ (see above).
Please note that it is hard to do error propagation with the function that you provided without the numbers themselves, that is, $t$ and its result $f(t)$. For other functional arrangements, it may be easier. |
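Since the question gives no numbers, here is a sketch with made-up values of $A$, $b$ and $t$, propagating the error via a finite-difference derivative. For $E = A e^{-bt}$ the relative error works out to $\sigma_E/E = b\,\sigma_t$:

```python
import math

def propagate(f, x, sigma_x, h=1e-6):
    """One-variable error propagation: sigma_f ~ |df/dx| * sigma_x,
    with the derivative estimated by a central difference."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * sigma_x

# Illustrative values only (the question provides none)
A, b_const, t = 2.0, 0.4, 5.0
E = lambda t: A * math.exp(-b_const * t)
sigma_t = 0.015 * t                 # a 1.5% relative error in t
sigma_E = propagate(E, t, sigma_t)
relative = sigma_E / E(t)           # equals b * sigma_t = b * t * 0.015
```

So a 1.5% timing error translates into a $b t \times 1.5\%$ relative energy error, which grows with how many decay constants have elapsed.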
Let $M$ be a von Neumann algebra acting on a Hilbert space $H$, and let $\tau$ be a faithful tracial state on $M$.
What is the relation between the GNS representation of $(M,\tau)$ and the original action of $M$ on $H$?
More precisely, let $M_\tau$ denote the Hilbert space obtained by completing $M$ with the inner product $\langle x,y\rangle_\tau=\tau(y^*x)$ (since $\tau$ is faithful, this inner product is nondegenerate), and $M$ acts faithfully (again since $\tau$ is a faithful state) on $M_\tau$ by left multiplication. Let $\varphi_\tau:M\to B(M_\tau)$ denote this representation.
My question is: is $\varphi_\tau(M)$ a von Neumann algebra on $M_\tau$? If so, $\tau$ becomes a vector state under the identification $M\simeq\varphi_\tau(M)$: $\tau(u)=\langle u(1_M),1_M\rangle_\tau$.
In case $M$ admits a cyclic vector $x$ for which $\tau(u)=\langle u(x),x\rangle$, the result is true, but the last comment above becomes useless. In fact, in this case the actions of $M$ on $H$ and on $M_\tau$ are unitarily equivalent (see Murphy, Theorem 5.1.4). |
I am familiar with regular substitution but I am confused on what my book writes here:
Evaluate $\int_{0}^{\infty} x^{m} e^{-ax^{n}} \, dx$, where $m$, $n$, and $a$ are positive constants.
Letting $ax^{n} = y$, the integral becomes
$$\int_{0}^{\infty} \left\{\left(\frac{y}{a}\right)^{1/n}\right\}^{m} e^{-y} \, d\left\{\left(\frac{y}{a}\right)^{1/n}\right\} = \frac{1}{na^{(m + 1)/n}} \int_{0}^{\infty} y^{(m + 1)/n - 1} e^{-y} \, dy = \frac{1}{na^{(m + 1)/n} }\Gamma \left(\frac{m + 1}{n}\right)$$
It is from Schaum's Outline of Advanced Calculus. I thought I understood regular substitution, but this is really confusing me. What is going on here? Why is the integral being solved this way, and why keep $d(\text{a term})$ inside the integral?
Can anyone please help explain it?
Thanks |
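One way to convince yourself the substitution is right: $y = ax^n$ gives $dx = \frac{1}{n}a^{-1/n}\,y^{1/n-1}\,dy$, so $x^m\,dx = \frac{1}{n\,a^{(m+1)/n}}\,y^{(m+1)/n-1}\,dy$, and the closed form can be checked against a direct numerical quadrature (function names are mine):

```python
import math

def integral_numeric(m, n_, a, upper=30.0, steps=100000):
    """Midpoint rule for the integral of x^m * exp(-a x^n) over [0, upper];
    the tail beyond `upper` is negligible for these parameters."""
    h = upper / steps
    return sum(((i + 0.5) * h) ** m * math.exp(-a * ((i + 0.5) * h) ** n_)
               for i in range(steps)) * h

def integral_gamma(m, n_, a):
    """Closed form Gamma((m+1)/n) / (n * a^((m+1)/n)) from the substitution."""
    return math.gamma((m + 1) / n_) / (n_ * a ** ((m + 1) / n_))
```

For example, $m=2$, $n=2$, $a=1$ gives $\Gamma(3/2)/2 = \sqrt{\pi}/4$, and the quadrature agrees.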
I'll jump into the question and then back off into qualifications and context
Using the definition of a definite integral as the limit of Riemann sums, what is the best way (or what are the very good ways) to establish the result $\int_a^b x^p\,dx=\frac{b^{p+1}-a^{p+1}}{p+1}$ without building a general theory of integrals?
Context: For better or worse, a common sequence in teaching integral calculus for the first or second time is to define the definite integral as a limit of Riemann sums. One then notes that this ( like the limit definition of derivatives) is effective for proving theorems but not very practical for specific calculations. So soon one demonstrates or implies that the definition is valid and then gets to the Fundamental Theorem of Calculus. However it is traditional to use the definition to evaluate $\int_a^bx^pdx$ for $p=0,1,2$ and perhaps $p=3$ using the lovely sum of cubes formula.
It might be tempting to prove the formula in greater generality without using the fundamental theorem (most students are less excited at this prospect than one might expect!). In my early days as a TA I came up with an approach which I thought was great. The students were not impressed and I have not used it since. Anyway, I have not seen it elsewhere although I am confident it is nothing novel. I am not going to reveal it right away just to see if it shows up. I realize that is questionable manners here on MO but I will put it up in a day or two, I just want to see what shows up first.
Here is a very brief sketch of two approaches I have seen:
1) Let $S_p(n)=\sum_{k=0}^n k^p.$ The explicit formulas for $p=0,1,2$ and also $p=3$ are attractive and not bad to prove by induction. Given an explicit formula for $S_p$ one easily evaluates the usual equal subinterval sums for $\int_0^1x^pdx$ and then extends to $\int_a^b.$ But $S_p$ gets more tedious for larger $p$. A clever method of Pascal allows one to use strong induction, the binomial theorem and telescoping sums to derive an explicit formula for $S_{p}$ for larger values of $p$, limited only by one's patience and stamina:
Take $(k+1)^{p+1}-k^{p+1}=\sum_1^{p+1}\binom{p+1}{j}k^{p+1-j}$ and sum for $k$ from $0$ to $n$ to get $(n+1)^{p+1}-0^{p+1}=\sum_1^{p+1}\binom{p+1}{j}S_{p+1-j}(n).$ Since we know everything except $S_p,$ the rest is algebra! This is quickly unpleasant and the final results are not as aesthetic as the first cases. HOWEVER, for the desired application we only need to establish that $S_p(n)=\frac{n^{p+1}}{p+1}+\frac{n^p}{2}+O(n^{p-1})$ That is not hard and shows that $n$ subdivisions yield $\frac{1}{p+1}-\frac{1}{2n} \lt \int_0^1x^pdx \lt \frac{1}{p+1}+\frac{1}{2n}$.
Notes: This is valid for $p$ a non-negative integer. Knowing enough about Bernoulli numbers allows explicit formulas, but I am interested in fairly elementary methods. I think that one involves the series expansion for $e^x$.
2) Due to Fermat: Partition into subintervals using points forming a geometric rather than arithmetic progression. I've seen this in two forms:
2.1) Choose $0 \lt a \lt b$ and divide using $a \lt ar \lt ar^2 \lt \cdots\lt ar^N=b$ so $r=\sqrt[N]{b/a}.$ The widths of the intervals form a geometric progression of common ratio $r$. The values of $x^p$ at the points of division form a geometric progression of common ratio $r^p.$ Thus the sum of rectangle areas using left endpoints gives as a lower bound for $\int_a^b x^p\,dx$ the geometric series with $N$ terms, first term $a^{p+1}(r-1)$ and ratio $r^{p+1}.$ Using right-hand endpoints gives a similar upper bound with first term $a^{p+1}(r-1)r^p.$ With very little effort one arrives at
$$ \frac{(b^{p+1}-a^{p+1})(r-1)}{r^{p+1}-1} \lt \int_a^bx^pdx \lt \frac{(b^{p+1}-a^{p+1})(r-1)r^p}{r^{p+1}-1}.$$
If $p+1$ is a positive integer we have
$$ \frac{b^{p+1}-a^{p+1}}{1+r+r^2+\cdots+r^p} \lt \int_a^bx^pdx \lt \frac{(b^{p+1}-a^{p+1})r^p}{1+r+r^2+\cdots+r^p}.$$
Now let $N$ go to infinity sending $r$ to $1$ and squeezing to $\int_a^bx^pdx=\frac{b^{p+1}-a^{p+1}}{p+1}.$
This particular approach requires $0 \lt a.$ It is an easy extra step to extend the result to rational values of $p$ (except for the challenging $p=-1$) using $\frac{r^{p/q}-1}{r-1}=\frac{u^p-1}{u-1}/\frac{u^q-1}{u-1}$ for $u=r^{1/q}.$
2.2) Similar, except now divide the interval $[0,b]$ using a value $0 \lt r \lt 1$ and infinitely many points $\cdots \lt br^3 \lt br^2 \lt br \lt b.$ Now one has infinite geometric series and the rest proceeds as before, letting $r$ increase to $1.$
So that is the flavor of what I am asking about. I do not think this is a big list question unless there are a large number of approaches I have not seen.
CONTINUED To recap, we already know the answer, $\frac{b^{p+1}-a^{p+1}}{p+1}$, which we want for the area $A$ of the region under $x^p$ for $a \le x \le b$; we just want to prove it. (Assume for ease that $0 \lt a$.) A partition $P$ of $[a,b]$ is a sequence $a=x_0 \lt x_1 \lt \cdots \lt x_n=b$. The mesh $m(P)$ of $P$ is $\max(x_{i}-x_{i-1}).$ (There is rarely a reason to have unequal intervals, but Fermat gave one.) We use the sub-intervals, in two ways, as the bases of an assemblage of rectangles with heights determined by the endpoints. Since $x^p$ is monotonic, one is covered by the region and the other covers it. So the two areas provide a lower and an upper bound.
$$ \sum_1^nx_{i-1}^p(x_i-x_{i-1}) \lt A \lt \sum_1^nx_{i}^p(x_i-x_{i-1})$$ If we manage to compute or bound these bounds and show that, when the mesh goes to zero, they have a common limit (the one we expect), we are done. The actual bounds we compute are of value only for the interesting, but secondary, topic of speed of convergence. And anyway, if $m(P) \lt \epsilon$ then the difference between the two bounds is less than $(b-a)(b^p-(b-\epsilon)^p),$ which converges to zero. (For $p \lt 0$ use $a^p-(a+\epsilon)^p$.)
So I propose to instead assign to each sub-interval $[u,v]$ the height $h(u,v)=\frac{v^{p+1}-u^{p+1}}{(p+1)(v-u)}$ and "compute" $\sum_1^nh(x_{i-1},x_i)(x_i-x_{i-1})$ which immediately collapses to, of course, $\frac{b^{p+1}-a^{p+1}}{p+1}.$
Establishing that this has any relevance requires showing that the height $h(u,v)$ is between $u^p$ and $v^p$. This is easy in practice if one simplifies. If you simplify first, then the whole thing looks like magic until you see what was done.
So for $p=5$, obviously $$u^5 \lt \frac{v^5+v^4u+v^3u^2+v^2u^3+vu^4+u^5}{6} \lt v^5.$$ OK, so what? Why not use the average, the geometric mean or $\left(\frac{u+v}{2}\right)^5$? Well, $(v-u)h(u,v)=\frac{v^6-u^6}{6}$ so $\sum_1^nh(x_{i-1},x_i)(x_i-x_{i-1})$ collapses to $\frac{b^6-a^6}{6}$.
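The collapse is easy to watch numerically: for any partition whatsoever, the sum built from the mean-value height $h$ telescopes to the exact answer. A short check for $p=5$:

```python
import random

def h(u, v, p):
    """Mean-value height: h(u, v) = (v^{p+1} - u^{p+1}) / ((p+1)(v - u))."""
    return (v ** (p + 1) - u ** (p + 1)) / ((p + 1) * (v - u))

def riemann_with_h(points, p):
    """Sum h(x_{i-1}, x_i) * (x_i - x_{i-1}); telescopes exactly."""
    return sum(h(points[i - 1], points[i], p) * (points[i] - points[i - 1])
               for i in range(1, len(points)))

rng = random.Random(0)
a_, b_, p = 1.0, 3.0, 5
pts = sorted([a_, b_] + [rng.uniform(a_, b_) for _ in range(20)])
total = riemann_with_h(pts, p)   # equals (b^6 - a^6)/6 for ANY partition
```

Unlike the usual left- and right-endpoint sums, no limit is taken: the value is already exact, and the only work is checking $u^p \lt h(u,v) \lt v^p$.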
About as easily
$\frac{1}{\sqrt{v}} \lt \frac{2}{\sqrt{u}+\sqrt{v}} \lt \frac{1}{\sqrt{u}}$
$\frac{1}{v^2} \lt \frac{1}{uv} \lt \frac{1}{u^2}$
$\frac{1}{v^4} \lt \left( \frac{1}{v^3u}+\frac{1}{v^2u^2}+\frac{1}{vu^3}\right)/3 \lt \frac{1}{u^4}$
It is slightly more fun to show that
$\sqrt{u} \lt \frac{2(v+\sqrt{vu}+u)}{3(\sqrt{v}+\sqrt{u})} \lt \sqrt{v}.$
SO:
Is this line of argument valid? Is it interesting? Have you seen it before?
To its credit I'll say that it does not show preference to any particular partition and uses nothing more complex than the two historic treatments above (although maybe it benefits from a modern frame of reference.) Also, rather than carefully converging to the correct answer as the partition evolves, it just starts there and stays unaffected. I don't immediately see that it can be applied to any other definite integrals. But the case of $x^p$ has a certain primary importance. |
I’ve been blown away, and immensely gratified, by the amazing response people have had to my post on Lanchester’s Laws and 40K. So I thought I’d do a brief followup based on a modification first suggested by WestRider, author of the blog Cascadian Grimdark, and asked by another commenter: What if some Imperial ships can’t be turned, because they blow their drives or what have you?
Basically, how often do the crews of the beleaguered Imperial vessels need to do the Space Marine equivalent of that most Guard-like of commands: “On my position, fire for effect.”?
Now, there’s only one example of this in Ruinstorm. The battle barge Samothrace, the Ultramarines flagship, realizes what is happening. Its captain decides on the Theoretical: “Nuts to that.” and thus the Practical: a salvo of danger-close Cyclonic Torpedoes which damages the Veritas Ferrum and completely consumes the Samothrace, preventing it from becoming a revenant vessel.
How many Imperial ships would have to make a similar sacrifice, in some way wholly obliterating themselves before they became revenants, to reverse the victory of the Pilgrim Fleet that came out of the 1000 : 650 ship scenario? Which, for reference, is below:
This is pretty natural to modify in the equations I gave previously:
$$\frac{dImperium}{dt}=-\beta*Chaos$$
and
$$\frac{dChaos}{dt}=-\alpha*Imperium + \beta*Chaos*\gamma$$
We’ve introduced a new parameter, $\gamma$, which we’ll treat as the efficiency with which a dying Imperial ship turns into a revenant. In the previous analysis, this was implicitly 100%; $\gamma = 0.5$ would mean 50% of ships turn, etc.
As it turns out, for the scenario in the previous post, the critical value for $\gamma$ is 0.889. At that level of efficiency, the battle ends up looking like this:
Still a victory for the Pilgrim Fleet, but a much harder fought one, with the Imperials remaining in the fight nearly twice as long, and having much diminished the Chaos force. Below that threshold, even a little bit (in this case, 0.888)?
A pretty subtle change and you get a very different result, with a surviving (albeit brutally mauled) Imperial fleet. And you don’t have to go much higher above it for things to change in the other direction. Here’s a picture at 0.90:
A very small increase in the efficiency of the “revenant conversion” cuts the battle time by 25% and nearly doubles the number of surviving Chaos vessels.
This is just another illustration showing just how big of a deal outnumbering is – a subtle push towards a more or less efficient system for converting ships ends up having a massive impact on the outcome.
It definitely has to be most ships though – even conversion rates that would, narratively, seem horrific, like say 50%, don’t really challenge the Imperial fleet at all.
So there you have it. When facing demon fleets, make sure you self-destruct good and hard if it seems like things are going pear shaped. |
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
I know, correct me if I am wrong, that the functions $H_n(x)\exp(-x^2/2)$ form a complete basis in $L^2(\mathbb{R},dx)$, where $H_n(x)$ is the $n$th Hermite polynomial. This must be true also for $x^n\exp(-x^2/2)$ with $n\in\mathbb{N}_0$. Does someone know a proof of the latter or can give me a reference? Since I am not a mathematician, I will really appreciate it if the proof contains all the details that might puzzle a non-mathematician.
I also have the problem to choose functions $f_k$ such that
\begin{equation} \int_{-\infty}^{+\infty}f_k(x)x^n\exp(-x^2)dx = \delta_{kn} \mbox{.} \end{equation}
Does anyone know a method to construct the $f_k$s? |
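For the second question, here is a constructive numerical sketch (an illustration under an explicit assumption, not a proof): truncate to the span of $1, x, \ldots, x^{N-1}$, build the Gram matrix of the monomials under the weight $e^{-x^2}$ (its entries are just Gaussian moments), and invert it. The resulting $f_k$ are then biorthogonal to the monomials within the truncated span only:

```python
import numpy as np
from math import gamma

def gauss_moment(m):
    """∫ x^m e^{-x^2} dx over the real line: 0 for odd m, Γ((m+1)/2) for even m."""
    return 0.0 if m % 2 else gamma((m + 1) / 2)

N = 6  # truncation order: biorthogonality holds only against x^0 ... x^{N-1}
G = np.array([[gauss_moment(m + n) for n in range(N)] for m in range(N)])
C = np.linalg.inv(G)  # coefficients: f_k(x) = sum_j C[k, j] * x^j

# numerical check that ∫ f_k(x) x^n e^{-x^2} dx = δ_{kn} on the truncated span
xs = np.linspace(-10.0, 10.0, 200001)
dx = xs[1] - xs[0]
weight = np.exp(-xs**2)
F = np.array([sum(C[k, j] * xs**j for j in range(N)) for k in range(N)])
vals = np.array([[np.sum(F[k] * xs**n * weight) * dx for n in range(N)]
                 for k in range(N)])
print(np.allclose(vals, np.eye(N), atol=1e-6))  # True
```

To have a single family of $f_k$ valid against all powers $x^n$ at once one would need the full infinite construction, e.g. via expansions in Hermite polynomials, which are orthogonal under exactly this weight.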
Homework Statement: Two identical uniform triangular metal plates held together by light rods. Calculate the x coordinate of the centre of mass of the two-plate object, given that the mass per unit area of the plates is 1.4 g/cm² and the total mass is 25.2 g.
Homework Equations: -
Not sure where I went wrong here, can anyone help me out on this? Thanks.
EDIT: Reformatted my request.
Diagram:
So as far as I know to calculate the center of mass for x, I have to use the following equation:
COM(x):
##\frac{1}{M}\int x dm##
And I also figured that to find center of mass, I will have to sum the mass of the 2 plates by 'cutting' them into stripes, giving me the following formula:
##dm = \mu * dx * y## where ##\mu## is the mass per unit area.
So subbing in the above equation into the first, I get:
##\frac{1}{M}\int x (\mu * dx *y) ##
##\frac{\mu}{M}\int xy dx##
Since the 2 triangles are identical, I can assume triangle on the left has equation ##y = 1/4x +4##
This is the part where I'm not sure. Do I calculate each triangle's center of mass, sum them and divide by 2? Or am I supposed to use another method?
Regardless, suppose I am correct:
COM for right triangle:
##\frac{\mu}{M}\int_{4}^{16}x(\frac{1}{4}x+4) dx## = 8 (expected)
COM for left triangle:
##\frac{\mu}{M}\int_{-11}^{1}x(-\frac{1}{4}x+4) dx## = 5.63...
Total COM = ##(8+5.63)/2## which is wrong :(
Thanks
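As a sanity check of the strip method itself (a hypothetical right triangle with vertices (0,0), (b,0), (b,h) — not the triangles from the problem, whose diagram is missing here), a numerical sketch reproduces the well-known result ##x_{cm} = 2b/3##. Note also that the combined COM of two plates is the mass-weighted average ##(m_1 x_1 + m_2 x_2)/(m_1 + m_2)##, which reduces to the simple average only because the plates are identical:

```python
import numpy as np

# hypothetical right triangle with vertices (0,0), (b,0), (b,h); x_cm should be 2b/3
b, h = 3.0, 2.0
xs = np.linspace(0.0, b, 300001)
dx = xs[1] - xs[0]
y = h * xs / b                   # strip height at position x
M = np.sum(y) * dx               # total area; mu cancels in the ratio
x_cm = np.sum(xs * y) * dx / M   # strip method: (1/M) * ∫ x dm with dm = mu*y*dx
print(round(x_cm, 3))            # 2.0, i.e. 2b/3
```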
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
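That back-of-envelope number checks out (a rough sketch only — in a real typing stream the 7-letter windows overlap, so this is just an order-of-magnitude estimate):

```python
p = (1 / 26) ** 7   # chance a given 7-letter block of uniform random letters spells COVFEFE
print(p)             # ≈ 1.2e-10
print(1 / p)         # ≈ 8.0e9 letters typed before one expected hit
```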
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The package CircuiTikz provides a set of macros for naturally typesetting electrical and electronic networks. This article explains basic usage of this package.
CircuiTikz includes several nodes that can be used with standard tikz syntax.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{circuitikz}
\begin{document}
\begin{center}
\begin{circuitikz}
    \draw (0,0) to[variable cute inductor] (2,0);
\end{circuitikz}
\end{center}
\end{document}
To use the package it must be imported with \usepackage{circuitikz} in the preamble. Then the environment circuitikz is used to typeset the diagram with tikz syntax. In the example a node called variable cute inductor is used.
As mentioned before, to draw electrical network diagrams you should use tikz syntax; the examples even work if the environment tikzpicture is used instead of circuitikz. Below a more complex example is presented.
\begin{center}
\begin{circuitikz}[american voltages]
    \draw (0,0)
      to [short, *-] (6,0)
      to [V, l_=$\mathrm{j}{\omega}_m \underline{\psi}^s_R$] (6,2)
      to [R, l_=$R_R$] (6,4)
      to [short, i_=$\underline{i}^s_R$] (5,4)
      (0,0) to [open, v^>=$\underline{u}^s_s$] (0,4)
      to [short, *- ,i=$\underline{i}^s_s$] (1,4)
      to [R, l=$R_s$] (3,4)
      to [L, l=$L_{\sigma}$] (5,4)
      to [short, i_=$\underline{i}^s_M$] (5,3)
      to [L, l_=$L_M$] (5,0);
\end{circuitikz}
\end{center}
The nodes short, V, R and L are presented here, but there are a lot more. Some of them are presented in the next section.
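As a further illustration of these bipoles, a minimal loop combining V, R and L might look like the following sketch (the labels U, R_1 and L_1 are made up for the example):

```latex
\begin{center}
\begin{circuitikz}
    \draw (0,0)
      to[V, l=$U$]   (0,2)   % voltage source on the left
      to[R, l=$R_1$] (2,2)   % resistor across the top
      to[L, l=$L_1$] (2,0)   % inductor on the right
      to[short]      (0,0);  % wire closing the loop
\end{circuitikz}
\end{center}
```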
Below most of the elements provided by CircuiTikz are listed: Monopoles, Bipoles, Diodes and dynamical bipoles.
For more information see: |
WHY?
Two approximation methods, variational inference and MCMC, have different advantages: usually, variational inference is fast while MCMC is more accurate.
Note
Markov Chain Monte Carlo (MCMC) is an approximation method for estimating a variable. MCMC first samples a random draw $z_0$ and then draws a chain of variables from a stochastic transition operator $q$:
$$z_t \sim q(z_t|z_{t-1},x)$$
It is proved that $z_T$ will ultimately converge to the true posterior $p(z|x)$ with sufficiently many samplings.

WHAT?
The variational lower bound contains the posterior of the latent variable, $q(z|x)$. This paper suggests estimating $q(z|x)$ more accurately with MCMC using auxiliary random variables, and applying variational inference to the ELBO with auxiliary variables:
$$y = (z_0, z_1, \ldots, z_{t-1})$$
$$\mathcal{L}_{aux} = \mathbb{E}_{q(y,z_T|x)}\big[\log[p(x,z_T)\,r(y|x,z_T)] - \log q(y, z_T|x)\big]$$
$$\mathcal{L}_{aux} = \mathcal{L} - \mathbb{E}_{q(z_T|x)}\big\{D_{KL}[q(y|z_T,x)\,\|\,r(y|z_T,x)]\big\} \leq \mathcal{L} \leq \log p(x)$$
If we assume that the auxiliary inference distribution also has a Markov structure,
$$r(z_0, \ldots, z_{t-1}|x, z_T) = \prod^T_{t=1}r_t(z_{t-1}|x, z_t)$$
$$\log p(x) \geq \mathbb{E}_q[\log p(x, z_T) - \log q(z_0, \ldots, z_T|x) + \log r(z_0, \ldots, z_{t-1}|x, z_T)] = \mathbb{E}_q\Big[\log\frac{p(x, z_T)}{q(z_0|x)} + \sum_{t=1}^T\log\frac{r_t(z_{t-1}|x, z_t)}{q_t(z_t|x,z_{t-1})}\Big]$$
Since the auxiliary variational lower bound cannot be calculated analytically, we estimate the MCMC lower bound by sampling from the transitions $q_t$ and the inverse model $r_t$.
The gradient of the resulting MCMC lower bound can be calculated via the reparameterization trick.
One of the most efficient MCMC methods is Hamiltonian Monte Carlo (HMC). HMC introduces auxiliary momentum variables $v$ with the same dimension as $z$:
$$H(v, z) = \tfrac{1}{2} v^T M^{-1}v - \log p(x,z)$$
$$v_t' \sim q(v_t'|x, z_{t-1})$$
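The Hamiltonian dynamics underlying HMC are typically simulated with a leapfrog integrator. A minimal generic sketch (not the paper's code; unit mass matrix $M = I$ assumed):

```python
import numpy as np

def leapfrog(z, v, grad_logp, step=0.1, n_steps=16):
    """One HMC trajectory via leapfrog integration of
    H(v, z) = 0.5 * v^T v - log p(z), assuming unit mass matrix M = I."""
    v = v + 0.5 * step * grad_logp(z)     # initial half step for momentum
    for _ in range(n_steps - 1):
        z = z + step * v                  # full step for position
        v = v + step * grad_logp(z)       # full step for momentum
    z = z + step * v                      # last full position step
    v = v + 0.5 * step * grad_logp(z)     # final half step for momentum
    return z, v

# example: standard normal target, log p(z) = -z^2/2, so grad log p(z) = -z
z, v = leapfrog(np.array([1.0]), np.array([0.0]), lambda z: -z)
print(0.5 * float(z[0]**2 + v[0]**2))  # ≈ 0.5: leapfrog nearly conserves H
```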
So?
A convolutional VAE with Hamiltonian Variational Inference (HVI), using 16 leapfrog steps and 800 hidden nodes, achieves slightly worse results than DRAW.
Critic
Many MCMC algorithm variants could be used to improve the result.
The Dirac delta function can be defined as $$\delta(x)=\frac{1}{2\pi}\int_{-\infty}^\infty e^{itx}dt$$ From this we see that the Dirac delta function has units of $x^{-1}$.
How do we represent the units in cases like the momentum eigenvectors which, when units are included, is represented as $$\frac{1}{\sqrt{2\pi\hbar\cdot(kg^{-1}m^{-1}s)}}e^{\frac{\iota px}{\hbar}}$$ or $$\frac{1}{\sqrt{2\pi\hbar}}e^{\frac{\iota px}{\hbar}}\cdot(kg^{\frac{1}{2}}m^{\frac{1}{2}}s^{-\frac{1}{2}})\,?$$
Is there a preferred way to write the units(not constrained to SI units, any other system including natural units too) or do we just leave them out, though it would be dimensionally inconsistent without implied units.
Sources I can find for the momentum eigenvector ignore the units of the delta function without even mentioning.
P.S. The problem comes when trying to normalize the momentum eigenfunctions.
Define $$\psi_p(x)=Ae^{\frac{\iota px}{\hbar}}$$ Over here, $A$ has units of $m^{-\frac{1}{2}}$.
Normalizing it,$$\int_{-\infty}^\infty\psi^*_{p_1}(x)\psi_{p_2}(x)dx$$$$=|A|^2\int_{-\infty}^\infty e^{\frac{\iota \left(p_2-p_1\right)x}{\hbar}}dx$$$$=|A|^22\pi\hbar\delta\left(p_2-p_1\right)$$Therefore, ignoring consistency of units, and assuming $A$ is positive,$$A=\frac{1}{\sqrt{2\pi\hbar}}$$Notice that the units do not match.
Abstract
We study the level-spacings distribution for eigenvalues of large $N\times N$ matrices from the classical compact groups in the scaling limit when the mean distance between nearest eigenvalues equals $1$.
Defining $\eta_N(s)$ as the number of nearest-neighbour spacings greater than $s>0$ (respectively, smaller than $s>0$), we prove a functional limit theorem for the process $(\eta_N(s)-\mathbb{E}\eta_N(s))/N^{1/2}$, giving weak convergence of this distribution to some Gaussian random process on $[0,\infty)$.
The limiting Gaussian random process is universal for all classical compact groups. It is Hölder continuous with any exponent less than $1/2$. Similar results can be obtained for the $n$-level-spacings distribution. |
The authors wish to thank Antoine Cerfon and Dimitrios Andriopoulos of the Courant Institute at New York University for pointing out an error in our manuscript (Catto, Pusztai & Krasheninnikov 2015). This mistake only affects the results when a toroidal magnetic field is present. It arises because our Grad–Shafranov equation (3.11) should be corrected to read
where the only change is to insert the missing $\text{e}^{-\chi}$ multiplying the toroidal magnetic field term proportional to $b^{2}$, since it cannot have any density dependence. There is also a typographical error in (3.6): there, $B_{o}$ should be replaced by $B_{Po}$, with $B_{Po}=-\alpha\psi_{o}/R_{o}^{2}$. The equations shown here and the material in quotes are the corrected content. The references remain the same as in the publication.
‘Consequently, we expect $\alpha+2<0$ unless $C^{2}b^{2}>\beta\omega^{2}\text{e}^{-g-\omega^{2}(1-C^{-1})}$, with $C=1$ if $g=0$’.
In § 4 the only change occurs in the penultimate sentence that becomes the following ‘For example, we do not consider the limit $C^{2}b^{2}\text{e}^{g+\omega^{2}(1-C^{-1})}>\beta\omega^{2}\sim \beta g/2\gg \beta\gg 1,$ which requires a toroidal magnetic field but allows the magnetic field to vanish at infinity’.
‘Using the vacuum magnetic field solution for the $\alpha=-2$ root of $H=1-\mu^{2}$, we find that the $b^{2}\sim \beta g\sim \beta\omega^{2}\ll 1\sim \beta$ corrections to that result must satisfy’
‘For $g\ll 1$ only a weak density departure from cylindrical symmetry is allowed, giving’
‘Based on (3.17) we expect $b^{2}>\beta\omega^{2}$ is required for a solution that keeps $0>\alpha>-2$, thereby making the poloidal magnetic field fall off at large distances and pinch in slightly at the equatorial plane. Indeed, in this small $g$ limit, finite $b^{2}>\beta\omega^{2}$ seems to be required to find a numerical solution for $\alpha>-2$’.
Next, equation (5.4) and the remainder of the paragraph that it appears in should be corrected to read as follows
‘These results are the same as in Catto & Krasheninnikov (2015) except the toroidal field term has been retained and it further enhances the pinching in of the flux surfaces at the equatorial plane. The disk thickness from $\text{e}^{-g\mu^{2}/2}$ is as given by (4.10). Result (5.4) is verified by a numerical solution which is imperceptibly different from that shown in figure 6(a,b) for $\beta=0.001$, $g=100$, $\omega^{2}=40$ and $b^{2}=0.05$. Analytically we find $\alpha=-1.949$ and $\Delta/R=0.14$ and a sensitive numerical solution is found for $\alpha=-1.951214$ and $\Delta/R=0.12$. We only need $b^{2}>\beta\omega^{2}$ in this limit to satisfy $\alpha+2>0$ from (3.17) due to the exponential $g$ factor and the use of the vacuum solution for $H$ away from the equatorial plane. Equation (5.4) remains valid in the strict Keplerian case $g=2\omega^{2}\gg 1$, where we can evaluate the integrals in (5.2) a little more carefully to find $\alpha+2\simeq b^{2}/[1+\beta(\pi/2g)^{1/2}]$, which is consistent with (5.4) when $\beta\ll g^{1/2}$. The numerical solution confirms that this strict Keplerian case is a valid limit’.
‘Catto & Krasheninnikov (2015) also find a disk solution localized to the equatorial plane by considering $g>2\omega^{2}$ and then allowing $g-2\omega^{2}\gg 1\gg \beta$, so that the exponential dependence $\text{e}^{\chi}$ in the Grad–Shafranov equation provides the desired localization about $\mu=0$ for the assumed small $\beta$ terms. Therefore, we modify their treatment to find disk solutions with strong poloidal variation, but with the toroidal magnetic field retained to satisfy (3.17). This constraint was not considered in Catto & Krasheninnikov (2015). To begin, we need to find a solution in the disk different from the cylindrical solution $H=(1-\mu^{2})^{-\alpha/2}$ valid outside the disk. We find this inner disk solution by considering the approximate Grad–Shafranov equation’
‘where now both rotation and gravity enter the exponential density dependence, for which we use’
‘When $g-2\omega^{2}\gg 1\sim \alpha+2>0$ we obtain strong exponential decay away from the equatorial plane. Very near the equatorial plane $\text{d}^{2}H/\text{d}\mu^{2}<0$ in (5.5) if $g/2>\omega^{2}+\alpha+2>0$ with $\alpha<0$, but once the right-hand side of (5.5) decays away then the $b^{2}$ term can grow. For $\alpha+2\sim 1$ and $b^{2}$ not too large this growth occurs far enough away from the equatorial plane that the $b^{2}$ term may be ignored in the disk. These observations suggest, in agreement with Catto & Krasheninnikov (2015), that solutions that are strongly localized to the equatorial plane in the presence of gravity are not possible for $g<2\omega^{2}$ and $0>\alpha>-2$ since the rotation is too strong for the plasma to be gravitationally confined’.
‘Continuing as in Catto & Krasheninnikov (2015), we multiply (5.5) (with the $b^{2}$ term ignored) by $\text{d}H/\text{d}\mu$ and integrate from $H=1$ (at $\mu=0$) to $H<1$ (for $\mu^{2}>0$) to find for $\alpha+2\sim 1$’
‘where we select the negative root to make $\text{d}H/\text{d}\mu<0$. Using $\int \text{d}x/\sqrt{1-\text{e}^{-x}}=2\,\tanh ^{-1}\sqrt{1-\text{e}^{-x}}$ we obtain’
‘where $\sigma\equiv (g-2\omega^{2})\sqrt{\beta}$ and the upper (lower) sign is for $\mu>0$ ($\mu<0$). A solution strongly localized at the equatorial plane is found for $-x=(g-2\omega^{2})(1-H)/\alpha\gg 1$ that results in only a small departure from the gravity free solution $H=(1-\mu^{2})^{-\alpha/2}$ that remains an adequate approximation in the outer region. The behaviour $x\approx \mp \sigma\mu\approx \mp \sigma z/R$ implies a disk width $\Delta=R/\sigma$ so that $\sigma\gg 1$ is required’.
‘Using (5.7) and (5.8) on the right-hand side of the integral constraint (5.1), with the cylindrical solution $H=(1-\mu^{2})^{-\alpha/2}$ inserted on the left-hand side, yields the approximate result’
‘Gravity is assumed negligible outside the disk in this $\beta\ll 1$ limit, where the solution becomes cylindrical (with $C\simeq 1$). Then (5.9) is in agreement with (3.14) provided we assume $1\sim b^{2}\gg \beta\omega^{2}\sim \sqrt{\beta}$ so the outer solution is well approximated by $H=(1-\mu^{2})^{-\alpha/2}$. The plasma disk width is given by’
‘requiring $1/(g-2\omega^{2})^{2}\ll \beta\ll 1$. Strict Keplerian motion is not allowed in this low $\beta$ thin disk limit’.
‘The new figure 7(a,b) shows the flux surfaces, density contours and $H$ for $\beta=0.01$, $g=120$, $\omega^{2}=10$ and $b^{2}=1$, for which the analytic results give $\Delta/R=0.1$ and $\alpha=-1.5$. The numerical solution gives $\Delta/R=0.0938$ and $\alpha=-1.4715$. In (b) we also plot $H=(1-\mu^{2})^{-\alpha/2}$ for reference. In this case the density decreases with radius (since $\alpha>-1.5$). For the same parameters but with $b^{2}=1$ we find $\alpha=-1.33$ and $\Delta/R=0.1$ versus the numerical values of $\alpha=-1.314$ and $\Delta/R=0.094$. We have not found solutions for $\alpha\rightarrow 0$, as claimed in Catto & Krasheninnikov (2015), even for finite $b^{2}$’.

Acknowledgements
Work supported by the US Department of Energy grants DE-FG02-91ER-54109 at MIT and DE-FG02-04ER54739 at UCSD and by the International Career Grant of Vetenskapsrådet (Dnr. 330-2014-631). |
A scalar field is one which is unchanged under rotation. But how do we decide whether a given field (e.g., the temperature in a room) is unchanged under rotation or not? We measure the temperature at every point $P$ in two coordinate systems at some time $t$: in one, $P$ has coordinates $(x,y,z)$; in a rotated frame, $P$ has coordinates $(x^\prime,y^\prime,z^\prime)$. If we find $$T(x,y,z,t)=T^\prime(x^\prime,y^\prime,z^\prime,t)$$ for all points $P$, we will call it a scalar field.
To check whether it is unchanged under a Galilean boost or Lorentz boost, do we also need to perform experiments, or can one exclude or establish whether the scalar-like behaviour holds under these transformations even without measurements? Stated differently, I mean:
$\bullet$ Does one expect $$T(x,y,z,t)=T^\prime(x^\prime,y^\prime,z^\prime,t)$$ under Galilean boost where $\vec{r}^\prime=\vec{r}-\vec{V}t$ and $t^\prime=t$?
$\bullet$ Does one expect $$T(x,y,z,t)=T^\prime(x^\prime,y^\prime,z^\prime,t^\prime)$$ where the primed coordinates ${x^\prime}^\mu$ and unprimed coordinates $x^\mu$ are related by Lorentz boost? |
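The rotation test described above can be sketched numerically. This is an illustration only, with a hypothetical temperature field chosen to depend on position solely through the distance from the origin, so its functional form is rotation-invariant; it says nothing about the boost questions:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical temperature field depending on position only through x^2+y^2+z^2
def T(x, y, z, t):
    return 300.0 + 20.0 * np.exp(-(x**2 + y**2 + z**2)) * (1 + 0.1 * t)

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random orthogonal matrix
p = rng.normal(size=3)                         # a point P in the original frame
p_rot = Q @ p                                  # coordinates of P in the rotated frame

# the same functional form evaluated at the rotated coordinates gives the same value
print(np.isclose(T(*p_rot, 1.0), T(*p, 1.0)))  # True
```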
Please note that the recommended version of Scilab is 6.0.2. This page might be outdated.
See the recommended documentation of this function.

damp
Natural frequencies and damping factors.
Calling Sequence

[wn,z] = damp(sys)
[wn,z] = damp(P [,dt])
[wn,z] = damp(R [,dt])

Parameters

sys
A linear dynamical system (see syslin).
P
An array of polynomials.
R
An array of real or complex floating point numbers.
dt
A non-negative scalar, with default value 0.
wn
vector of floating point numbers in increasing order: the natural pulsation in rad/s.
z
vector of floating point numbers: the damping factors.
Description
The denominator of a second order continuous time transfer function with complex poles can be written as
s^2 + 2*z*wn*s + wn^2

where z is the damping factor and wn the natural pulsation.
If sys is a continuous time system, [wn,z] = damp(sys) returns in wn the natural pulsation (in rad/s) and in z the damping factors of the poles of the linear dynamical system sys. The wn and z arrays are ordered by increasing pulsation.
If sys is a discrete time system, [wn,z] = damp(sys) returns in wn the natural pulsation (in rad/s) and in z the damping factors of the continuous time equivalents of the poles of sys. The wn and z arrays are ordered by increasing pulsation.
[wn,z] = damp(P) returns in wn the natural pulsation (in rad/s) and in z the damping factors of the set of roots of the polynomials stored in the P array. If dt is given and nonzero, the roots are first converted to their continuous time equivalents. The wn and z arrays are ordered by increasing pulsation.
[wn,z] = damp(R) returns in wn the natural pulsation (in rad/s) and in z the damping factors of the set of roots stored in the R array. If dt is given and nonzero, the roots are first converted to their continuous time equivalents. wn(i) and z(i) are the natural pulsation and damping factor of R(i).
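The mapping from a root to its (wn, z) pair can be sketched in plain Python. This is an illustration of the definition above (for a root s of s^2 + 2*z*wn*s + wn^2, wn = |s| and z = -Re(s)/|s|), not Scilab's actual implementation:

```python
def damp(roots):
    """Natural pulsations wn (rad/s) and damping factors z of continuous-time
    roots: for a root s, wn = |s| and z = -Re(s)/|s|, sorted by increasing wn."""
    pairs = sorted((abs(s), -s.real / abs(s)) for s in roots)
    wn = [p[0] for p in pairs]
    z = [p[1] for p in pairs]
    return wn, z

# Roots of s^2 + 2*z*wn*s + wn^2 with wn = 2, z = 0.5: s = -1 +/- i*sqrt(3)
wn, z = damp([complex(-1, 3**0.5), complex(-1, -3**0.5)])
```

Recovering wn = 2 and z = 0.5 from these roots confirms the second-order form quoted in the description.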
Examples

s = %s;
num = 22801 + 4406.18*s + 382.37*s^2 + 21.02*s^3 + s^4;
den = 22952.25 + 4117.77*s + 490.63*s^2 + 33.06*s^3 + s^4;
h = syslin('c', num/den);
[wn,z] = damp(h)
The following example illustrates the effect of the damping factor on the frequency response of a second order system.
s = %s;
wn = 1;
clf();
Z = [0.95 0.7 0.5 0.3 0.13 0.0001];
for k = 1:size(Z, '*')
  z = Z(k);
  H = syslin('c', 1 + 5*s + 10*s^2, s^2 + 2*z*wn*s + wn^2);
  gainplot(H, 0.01, 1)
  p = gce();
  p = p.children;
  p.foreground = k;
end
title("$\frac{1+5 s+10 s^2}{\omega_n^2+2\omega_n\xi s+s^2}, \quad \omega_n=1$")
legend('$\xi = '+string(Z)+'$')
plot(wn/(2*%pi)*[1 1], [0 70], 'r') // Natural pulsation
Computing the natural pulsations and damping ratios for a set of roots:
[wn,z] = damp((1:5)+%i) |
Answer
Stone house: 9 hours Wood house: 3 hours
Work Step by Step
We divide the value of Q by the rate at which heat is absorbed to find: $ \Delta t = \frac{ \Delta Q}{10^5} = \frac{75 \times 2000 \times .2 \times 30}{10^5 } = 9 \ hours$ For the wood house: $\Delta t = \frac{ \Delta Q}{10^5} = \frac{15 \times 2000 \times .33 \times 30}{10^5 } = 3 \ hours$ |
Almost everyone I know says that "backprop is just the chain rule." Although that's basically true, there are some subtle and beautiful things about automatic differentiation techniques (including backprop) that will not be appreciated with this dismissive attitude.
This leads to a poor understanding. As I have ranted before: people do not understand basic facts about autodiff.
Evaluating \(\nabla f(x)\) is provably as fast as evaluating \(f(x)\). Code for \(\nabla f(x)\) can be derived by a rote program transformation, even if the code has control flow structures like loops and intermediate variables (as long as the control flow is independent of \(x\)). You can even do this "automatic" transformation by hand!

Autodiff \(\ne\) what you learned in calculus
Let's try to understand the difference between autodiff and the type of differentiation that you learned in calculus, which is called symbolic differentiation.

I'm going to use an example from Justin Domke's notes,
If we were writing a program (e.g., in Python) to compute \(f\), we'd take advantage of the fact that it has a lot of repeated evaluations for efficiency.
def f(x):
    a = exp(x)
    b = a**2
    c = a + b
    d = exp(c)
    e = sin(c)
    return d + e
Symbolic differentiation would have to use the "flat" version of this function, so no intermediate variable \(\Rightarrow\) slow.
Automatic differentiation lets us differentiate a program with intermediate variables.
The rules for transforming the code for a function into code for the gradient are really minimal (fewer things to memorize!). Additionally, the rules are more general than in the symbolic case because they handle a superset of programs.
Quite beautifully, the program for the gradient has exactly the same structure as the function, which implies that we get the same runtime (up to some constant factors).
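To make the "same structure" claim concrete, here is a hand-written reverse-mode gradient for the \(f(x)\) above (my own sketch of the manual transformation, not the code from the linked gist; the adjoint names `da`…`de` are invented for illustration, and the result can be checked against a finite difference):

```python
import math

def f(x):
    a = math.exp(x)
    b = a**2
    c = a + b
    d = math.exp(c)
    e = math.sin(c)
    return d + e

def grad_f(x):
    # forward pass: identical structure to f
    a = math.exp(x)
    b = a**2
    c = a + b
    d = math.exp(c)
    e = math.sin(c)
    # reverse pass: one adjoint (d output / d variable) per intermediate
    dd = 1.0
    de = 1.0
    dc = dd * math.exp(c) + de * math.cos(c)  # c feeds both d and e
    db = dc * 1.0                             # c = a + b
    da = dc * 1.0 + db * 2 * a                # c = a + b and b = a**2
    dx = da * math.exp(x)                     # a = exp(x)
    return dx
```

Note that the reverse pass visits the assignments in the opposite order, with one adjoint line per intermediate variable, so the runtime matches that of `f` up to a constant factor.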
I won't give the details of how to execute the backpropagation transform on the program. You can get that from Justin Domke's notes and many other good resources. Here's some code that I wrote to accompany the f(x) example, which has a bunch of comments describing the manual "automatic" differentiation process on f(x).
Autodiff by the method of Lagrange multipliers
Let's view the intermediate variables in our optimization problem as simple equality constraints in an equivalent constrained optimization problem. It turns out that the de facto method for handling constraints, the method of Lagrange multipliers, recovers exactly the adjoints (intermediate derivatives) in the backprop algorithm!
Here's our example from earlier written in this constraint form:
The general formulation
The first set of constraints (\(1, \ldots, d\)) is a little silly. They are only there to keep our formulation tidy. The variables in the program fall into three categories:
input variables (\(\boldsymbol{x}\)): \(x_1, \ldots, x_d\)

intermediate variables (\(\boldsymbol{z}\)): \(z_i = f_i(z_{\alpha(i)})\) for \(1 \le i \le n\), where \(\alpha(i)\) is a list of indices from \(\{1, \ldots, n-1\}\) and \(z_{\alpha(i)}\) is the subvector of variables needed to evaluate \(f_i(\cdot)\). Minor detail: take \(f_{1:d}\) to be the identity function.

output variable (\(z_n\)): we assume that our program has a single scalar output variable, \(z_n\), which represents the quantity we'd like to maximize.
The relation \(\alpha\) is a dependency graph among variables. Thus, \(\alpha(i)\) is the list of incoming edges to node \(i\) and \(\beta(j) = \{ i: j \in \alpha(i) \}\) is the set of outgoing edges. For now, we'll assume that the dependency graph given by \(\alpha\) is ① acyclic: no \(z_i\) can transitively depend on itself; ② single-assignment: each \(z_i\) appears on the left-hand side of exactly one equation. We'll discuss relaxing these assumptions in § Generalizations.
The standard way to solve a constrained optimization is to use the method of Lagrange multipliers, which converts a constrained optimization problem into an unconstrained problem with a few more variables \(\boldsymbol{\lambda}\) (one per constraint), called Lagrange multipliers.

The Lagrangian
To handle constraints, let's dig up a tool from our calculus class, the method of Lagrange multipliers, which converts a constrained optimization problem into an unconstrained one. The unconstrained version is called "the Lagrangian" of the constrained problem. Here is its form for our task,
Optimizing the Lagrangian amounts to solving the following nonlinear system of equations, which give necessary, but not sufficient, conditions for optimality,
Let's look a little closer at the Lagrangian conditions by breaking up the system of equations into salient parts, corresponding to which variable types are affected.
Intermediate variables (\(\boldsymbol{z}\)): Optimizing the multipliers—i.e., setting the gradient of the Lagrangian w.r.t. \(\boldsymbol{\lambda}\) to zero—ensures that the constraints on intermediate variables are satisfied.
We can use forward propagation to satisfy these equations, which we may regard as a block-coordinate step in the context of optimizing the \(\mathcal{L}\).
Lagrange multipliers (\(\boldsymbol{\lambda}\), excluding \(\lambda_n\)): Setting the gradient of \(\mathcal{L}\) w.r.t. the intermediate variables equal to zero tells us what to do with the intermediate multipliers.
Clearly, \(\frac{\partial f_i(z_{\alpha(i)})}{\partial z_j} = 0\) for \(j \notin \alpha(i)\), which is why the \(\beta(j)\) notation came in handy. By assumption, the local derivatives, \(\frac{\partial f_i(z_{\alpha(i)})}{\partial z_j}\) for \(j \in \alpha(i)\), are easy to calculate—we don't even need the chain rule to compute them because they are simple function applications without composition. Similar to the equations for \(\boldsymbol{z}\), solving this linear system is another block-coordinate step.
Key observation: The last equation for \(\lambda_j\) should look very familiar: it is exactly the equation used in backpropagation! It says that we sum the \(\lambda_i\) of nodes that immediately depend on \(j\), where we scale each \(\lambda_i\) by the derivative of the function that directly relates \(i\) and \(j\). You should think of the scaling as a "unit conversion" from derivatives of type \(i\) to derivatives of type \(j\).

Output multiplier (\(\lambda_n\)): Here we follow the same pattern as for intermediate multipliers.

Input multipliers (\(\boldsymbol{\lambda}_{1:d}\)): Our dummy constraints give us \(\boldsymbol{\lambda}_{1:d}\), which are conveniently equal to the gradient of the function we're optimizing:
Of course, this interpretation is only precise when ① the constraints are satisfied (\(\boldsymbol{z}\) equations) and ② the linear system on multipliers is satisfied (\(\boldsymbol{\lambda}\) equations).
Input variables (\(\boldsymbol{x}\)): Unfortunately, there is no closed-form solution for how to set \(\boldsymbol{x}\). For this we resort to something like gradient ascent. Conveniently, \(\nabla_{\!\boldsymbol{x}} f(\boldsymbol{x}) = \boldsymbol{\lambda}_{1:d}\), which we can use to optimize \(\boldsymbol{x}\)!

Generalizations
We can think of these equations for \(\boldsymbol{\lambda}\) as a simple linear system of equations, which we are solving by back-substitution when we use the backpropagation method. The reason why back-substitution is sufficient for the linear system (i.e., we don't need a full linear system solver) is that the dependency graph induced by the \(\alpha\) relation is acyclic. If we had needed a full linear system solver, the solution would take \(\mathcal{O}(n^3)\) time instead of linear time, seriously blowing up our nice runtime!
This connection to linear systems is interesting: It tells us that we can compute global gradients in cyclic graphs. All we'd need is to run a linear system solver to stitch together local gradients! That is exactly what the implicit function theorem says!
Cyclic constraints add some expressive power to our "constraint language," and it's interesting that we can still efficiently compute gradients in this setting. An example of a general type of cyclic constraint is
where \(g\) can be any smooth multivariate function of the intermediate variables! Of course, allowing cyclic constraints comes at the cost of a more-difficult analogue of "the forward pass" to satisfy the \(\boldsymbol{z}\) equations (if we want to keep it a block-coordinate step). The \(\boldsymbol{\lambda}\) equations are now a linear system that requires a linear solver (e.g., Gaussian elimination).
Example use cases:
Bi-level optimization: Solving an optimization problem with another one inside it. For example, gradient-based hyperparameter optimization in machine learning. The implicit function theorem manages to get gradients of hyperparameters without needing to store any of the intermediate states of the optimization algorithm used in the inner optimization! This is a huge memory saver since direct backprop on the inner gradient descent algorithm would require caching all intermediate states. Yikes!
Cyclic constraints are useful in many graph algorithms. For example, computing gradients of edge weights in a general finite-state machine or, similarly, computing the value function in a Markov decision process.
Other methods for optimization?
The connection to Lagrangians brings tons of algorithms for constrained optimization into the mix! We can imagine using more general algorithms for optimizing our function and other ways of enforcing the constraints. We see immediately that we could run optimization with adjoints set to values other than those that backprop would set them to (i.e., we can optimize them like we'd do in other algorithms for optimizing general Lagrangians).
Summary
Backprop does not directly fall out of the rules for differentiation that you learned in calculus (e.g., the chain rule).
This is because it operates on a more general family of functions: programs which have intermediate variables. Supporting intermediate variables is crucial for implementing both functions and their gradients efficiently.
I described how we could use something we did learn from calculus 101, the method of Lagrange multipliers, to support optimization with intermediate variables.
It turned out that backprop is a particular instantiation of the method of Lagrange multipliers, involving block-coordinate steps for solving for the intermediates and multipliers.
I also described a neat generalization to support cyclic programs, and I hinted at ideas for doing optimization a little differently, deviating from the de facto block-coordinate strategy.

Further reading
After working out the connection between backprop and the method of Lagrange multipliers, I discovered the following paper, which beat me to it. I don't think my version is too redundant.
Yann LeCun. (1988) A Theoretical Framework from Back-Propagation.
Ben Recht has a great blog post that uses the implicit function theorem to derive the method of Lagrange multipliers. He also touches on the connection to backpropagation.
Ben Recht. (2016) Mechanics of Lagrangians.
Tom Goldstein's group took the Lagrangian view of backprop and used it to design an ADMM approach for optimizing neural nets. The ADMM approach can run massively in parallel and can leverage highly optimized solvers for subproblems. This work nicely demonstrates that understanding automatic differentiation—in the broader sense that I described in this post—facilitates the development of novel optimization algorithms.
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein. (2018) Training Neural Networks Without Gradients: A Scalable ADMM Approach.
The backpropagation algorithm can be cleanly generalized from values to functionals!
Alexander Grubb and J. Andrew Bagnell. (2010) Boosted Backpropagation Learning for Training Deep Modular Networks.
Code
I have coded up and tested the Lagrangian perspective on automatic differentiation that I presented in this article. The code is available in this gist. |
I'm not too familiar with Mathematica graphics, but I would like to produce a "ring-list" with other lists pointing to a center list. Here is an example from the many lists I've generated (each list is infinite, I think):
$$ \left( \begin{array}{c} \{2,5,11,23,47,19,13,3,7\} \\ \{3,7,5,11,23,47,19,13\} \\ \{5,11,23,47,19,13,3,7\} \\ \{7,5,11,23,47,19,13,3\} \\ \{11,23,47,19,13,3,7,5\} \\ \{13,3,7,5,11,23,47,19\} \\ \{17,7,5,11,23,47,19,13,3\} \\ \{19,13,3,7,5,11,23,47\} \\ \{23,47,19,13,3,7,5,11\} \\ \{29,59,17,7,5,11,23,47,19,13,3\} \\ \end{array} \right) $$
*it won't let me put in the LaTeX because it appears as code.
As you can tell, there are duplicates in this list. The first list is the "root" cycle. I would like each of the other lists to point to its respective connection to this "root" list.
Here is the type of thing I'm envisioning:
Pardon my poor paint skills (and lack of knowledge). If you're curious, I'm generating arithmetic prime sequences with the following algorithm:
primeCycle[x_] := Module[{},
  cycleList = {};
  h = x;
  AppendTo[cycleList, h];
  h = Last[FactorInteger[2*h + 1]][[1]];
  While[! MemberQ[cycleList, h],
    AppendTo[cycleList, h];
    h = Last[FactorInteger[2*h + 1]][[1]];
  ];
  cycleList]
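For cross-checking outside Mathematica, here is a rough Python translation of the same iteration (a sketch; the trial-division `largest_prime_factor` helper is my own stand-in for `Last[FactorInteger[...]][[1]]` and is only suitable for small inputs):

```python
def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division (fine for small n)."""
    p, best = 2, 1
    while p * p <= n:
        while n % p == 0:
            best, n = p, n // p
        p += 1
    return n if n > 1 else best

def prime_cycle(x):
    """Follow h -> largest prime factor of 2*h + 1 until a value repeats,
    mirroring the primeCycle routine above."""
    cycle = [x]
    h = largest_prime_factor(2 * x + 1)
    while h not in cycle:
        cycle.append(h)
        h = largest_prime_factor(2 * h + 1)
    return cycle

root = prime_cycle(2)  # reproduces the first "root" row of the list above
```

Starting from 2 this walks 2 → 5 → 11 → 23 → 47 → 19 → 13 → 3 → 7 and stops when 5 reappears, matching the first row of the displayed array.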
I plan on investigating more than $2\cdot h+1$, but I'm not able to compile enough data by hand. The hope is that maybe I learn something interesting.
I believe that for any function $A\cdot h \pm b$ ($A$ prime and $b<(A-1)/2$), there is always a "root" cycle (as depicted above).
I also think it may be interesting to investigate other functional forms, but I plan on sticking with simple functions for now.
Thanks! |
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is: a removable singularity / a pole / an essential singularity / a non-isolated singularity. Since $\cos(1/z) = 1 - \frac{1}{2z^2} + \frac{1}{4!z^4} - \cdots = (1-y)$, where $y = \frac{1}{2z^2} + \frac{1}{4!\ldots}$
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly.
This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity, an inference rule and a use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Let us consider the parameter p of population proportion. For instance, we might want to know the proportion of males within a total population of adults when we conduct a survey. A test of proportion will assess whether or not a sample from a population represents the true proportion from the entire population.
Critical Value Approach Section
The steps to perform a test of proportion using the critical value approach are as follows:
State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\).

Calculate the test statistic:
\[z=\frac{\hat{p}-p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}\]
where \(p_0\) is the null hypothesized proportion i.e., when \(H_0: p=p_0\)
Determine the critical region.
Make a decision. Determine if the test statistic falls in the critical region. If it does, reject the null hypothesis. If it does not, do not reject the null hypothesis.
Example S.6.1
Newborn babies are more likely to be boys than girls. A random sample found 13,173 boys were born among 25,468 newborn children. The sample proportion of boys was 0.5172. Is this sample evidence that the birth of boys is more common than the birth of girls in the entire population?
Here, we want to test
\(H_0: p=0.5\)
\(H_A: p>0.5\)
The test statistic
\[\begin{align} z &=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\\
&=\frac{0.5172-0.5}{\sqrt{\frac{0.5(1-0.5)}{25468}}}\\ &= 5.49 \end{align}\]
We will reject the null hypothesis \(H_0: p = 0.5\) if \(\hat{p} > 0.5052\) or equivalently if Z > 1.645
Here's a picture of such a "critical region" (or "rejection region"):
It looks like we should reject the null hypothesis because:
\[\hat{p}= 0.5172 > 0.5052\]
or equivalently since our test statistic Z = 5.49 is greater than 1.645.
Our Conclusion: We say there is sufficient evidence to conclude boys are more common than girls in the entire population.

\(p\)-value Approach Section
Next, let's state the procedure in terms of performing a proportion test using the p-value approach. The basic procedure is:

State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\).

Set the level of significance \(\alpha\).

Calculate the test statistic:
\[z=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\]
Calculate the p-value.
Make a decision. Check whether to reject the null hypothesis by comparing the p-value to \(\alpha\). If the p-value < \(\alpha\) then reject \(H_0\); otherwise do not reject \(H_0\).

Example S.6.2
Let's investigate by returning to our previous example. Again, we want to test
\(H_0: p=0.5\)
\(H_A: p>0.5\)
The test statistic
\[\begin{align} z &=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\\
&=\frac{0.5172-0.5}{\sqrt{\frac{0.5(1-0.5)}{25468}}}\\ &= 5.49 \end{align}\]
The p-value is represented in the graph below:
\[P = P(Z \ge 5.49) = 0.0000 \cdots \doteq 0\]
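These numbers can be reproduced with a few lines of Python (a sketch using only the standard library; `proportion_z` and `normal_sf` are my own helper names, and `scipy.stats.norm.sf` would serve equally well for the tail probability):

```python
from math import erf, sqrt

def proportion_z(p_hat, p0, n):
    """One-sample z statistic for a test of a proportion."""
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

def normal_sf(x):
    """Upper-tail probability P(Z >= x) for a standard normal, via erf."""
    return 0.5 * (1 - erf(x / sqrt(2)))

z = proportion_z(0.5172, 0.5, 25468)  # about 5.49
p_value = normal_sf(z)                # on the order of 1e-8, essentially 0
```

With such a tiny p-value, the comparison against \(\alpha = 0.05\) is not close, which matches the conclusion below.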
Our Conclusion: Because the p-value is smaller than the significance level \(\alpha = 0.05\), we can reject the null hypothesis. Again, we would say that there is sufficient evidence to conclude boys are more common than girls in the entire population at the \(\alpha = 0.05\) level.
As should always be the case, the two approaches, the critical value approach and the p-value approach, lead to the same conclusion. |
Abstract
We introduce the notion of a complex cell, a complexification of the cells/cylinders used in real tame geometry. For $\delta \in (0,1)$ and a complex cell $\mathcal{C}$, we define its holomorphic extension $\mathcal{C}\subset \mathcal{C}^\delta$, which is again a complex cell. The hyperbolic geometry of $\mathcal{C}$ within $\mathcal{C}^\delta$ provides the class of complex cells with a rich geometric function theory absent in the real case. We use this to prove a complex analog of the cellular decomposition theorem of real tame geometry. In the algebraic case we show that the complexity of such decompositions depends polynomially on the degrees of the equations involved.
Using this theory, we refine the Yomdin-Gromov algebraic lemma on $C^r$-smooth parametrizations of semialgebraic sets: we show that the number of $C^r$ charts can be taken to be polynomial in the smoothness order $r$ and in the complexity of the set. The algebraic lemma was initially invented in the work of Yomdin and Gromov to produce estimates for the topological entropy of $C^\infty$ maps. For analytic maps our refined version, combined with work of Burguet, Liao and Yang, establishes an optimal refinement of these estimates in the form of tight bounds on the tail entropy and volume growth. This settles a conjecture of Yomdin, who proved the same result in dimension two in 1991. A self-contained proof of these estimates using the refined algebraic lemma is given in an appendix by Yomdin.
The algebraic lemma has more recently been used in the study of rational points on algebraic and transcendental varieties. We use the theory of complex cells in these two directions. In the algebraic context we refine a result of Heath-Brown on interpolating rational points in algebraic varieties. In the transcendental context we prove an interpolation result for (unrestricted) logarithmic images of subanalytic sets. |
In this and the following chapters, we apply the general theory of linear models to various special cases. This chapter considers the analysis of one-way ANOVA models. A one-way ANOVA model can be written
$$y_{ij} = \mu + \alpha_{i} + e_{ij}, \quad i = 1, \cdots, t, \quad j = 1, \cdots, N_i,$$
where
$${\rm E}(e_{ij}) = 0, \quad {\rm Var}(e_{ij}) = \sigma^2, \quad {\rm and} \quad {\rm Cov}(e_{ij}, e_{i^\prime j^\prime}) = 0 \ {\rm when}\ (i, j) \neq (i^\prime, j^\prime).$$

For finding tests and confidence intervals, the $e_{ij}$s are assumed to have a multivariate normal distribution. Here $\alpha_i$ is an effect for $y_{ij}$ belonging to the $i$th group of observations. Group effects are often called treatment effects because one-way ANOVA models are used to analyze completely randomized experimental designs. |
WHY?
While conventional deep learning models have performed well on inference about individual entities, few models have been proposed that focus on inference about the relations among entities. The relational network aimed to capture relational information from images. In this paper, the relational recurrent neural network tries to improve earlier memory-augmented neural network models to capture relations among memories.
WHAT?
Assume that we have a memory matrix M. To capture interactions among these memories, Multi-Head Dot Product Attention (MHDPA) is used (also known as self-attention).
Linear projections are used to make queries (Q), keys (K), and values (V) for each memory. Dot products of queries and keys are normalized to obtain attention weights. The weighted average of values is used as the next memory.
$$Q = MW^q,\quad K = MW^k,\quad V = MW^v$$
$$A_{\theta}(M) = \mathrm{softmax}\left(\frac{MW^q(MW^k)^T}{\sqrt{d_k}}\right)MW^v = \tilde{M}$$
This composes one head of MHDPA. Assuming h is the number of heads in MHDPA, F-dimensional memories are projected to F/h dimensions for each head, and the heads are concatenated to represent the next memory. New inputs for the memory can be encoded easily without adding more parameters.
$$\tilde{M} = \mathrm{softmax}\left(\frac{MW^q([M;x]W^k)^T}{\sqrt{d_k}}\right)[M;x]W^v$$
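For concreteness, a single attention head of the update above can be sketched in NumPy (a toy illustration, not the paper's code: the weight matrices are random stand-ins, and this ignores the h-way head split and the [M; x] concatenation for new inputs):

```python
import numpy as np

def attend(M, Wq, Wk, Wv):
    """One head of dot-product attention over a memory matrix M (n x F):
    softmax(M Wq (M Wk)^T / sqrt(d_k)) M Wv."""
    Q, K, V = M @ Wq, M @ Wk, M @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # softmax over keys: each row becomes an attention distribution
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
n, F = 4, 8
M = rng.normal(size=(n, F))
Wq, Wk, Wv = (rng.normal(size=(F, F)) for _ in range(3))
M_next = attend(M, Wq, Wk, Wv)  # updated memory, same shape as M
```

Each row of the updated memory is a convex combination of the value vectors, which is how every memory slot gets to "read" all the others in one step.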
The resulting memory can be used as the memory cell in an LSTM. A row-wise MLP with layer normalization is used for $g_{\psi}$.
$$s_{i,t} = (h_{i,t-1}, m_{i,t-1})\\f_{i,t} = W^f x_t + U^f h_{i,t-1} + b^f\\i_{i,t} = W^i x_t + U^i h_{i,t-1} + b^i\\o_{i,t} = W^o x_t + U^o h_{i,t-1} + b^o\\m_{i,t} = \sigma(f_{i,t} + \tilde{b}^f)\circ m_{i, t-1} + \sigma(i_{i,t})\circ g_{\psi}(\tilde{m}_{i,t})\\h_{i,t} = \sigma(o_{i,t})\circ \tanh(m_{i,t})\\s_{i,t+1} = (m_{i,t}, h_{i,t})$$
So?
The RRN achieved good performance on the Nth-farthest vector task, program evaluation, reinforcement learning tasks, and language modeling.
Critic
Beautiful, but I'm not sure how this works. Needs more experiments. |
Since the wrong 2012 papers, I have been encouraging
They try to clarify some previous ideas about the relationship between the holographic dictionary in quantum gravity on one side; and quantum error correction in quantum information science on the other side.
The paper is only four two-column pages long which is why you may want to read it in its entirety and I won't try to reproduce the technicalities here.
But a broader point is that the gauge symmetries of the boundary theory used to be considered "completely irrelevant" for the dual bulk description but it is becoming very likely that they actually do play a role in the bulk physics, too. And the precise mechanism that allows these gauge symmetries to play a role resembles the "quantum erasure correction" and "quantum secret sharing", concepts that are known and studied by the folks in the quantum information science, except that quantum gravity automatically seems to do these things in a smarter way than what they were able to find out so far!
In particular, Mintun et al. say that the bulk operator \(\Phi(0)\) at the center of the AdS (or any point) has to commute with the spatially separated operators, thanks to the bulk locality. Via the AdS/CFT dictionary, it means that \(\Phi(0)\) has to commute with all boundary local operators \({\mathcal O} (x_i)\), those restricted to a single site.
That condition may sound weird – it looks as if the bulk had nothing to do with the boundary and required "new" degrees of freedom. Except that this troublesome conclusion isn't right. No one is saying that the bulk operators have to be constructed from the local, one-site operators on the boundary. If you allow contracting operators from several sites at the boundary, you will find out that solutions exist – and there are actually many of them. Those solutions are referred to as the "precursor" operators.
Note that Polchinski has been intrigued by precursors in AdS/CFT for more than 15 years.
In the usual picture of AdS/CFT, both sides have some local or gauge symmetries. The boundary theory has an \(SU(N)\) Yang-Mills symmetry. The SUGRA description of the bulk has diffeomorphisms, gauge symmetries for the \(p\)-form gauge fields, and so on. Normally, we say that only the physical states on both sides – all the kinematical states modulo all the gauge symmetries – are supposed to match. And the unphysical states are some junk that depends on the description.
However, this paper says that even the unphysical states, especially the gauge-non-invariant states on the boundary, have some important application on the other side, especially in the bulk, because the bulk operators may be more naturally written using unphysical operators on the boundary etc. Those non-gauge-invariant operators would carry slightly more information but if you have just slightly gauge-non-invariant operators, you may find the corresponding right gauge-invariant operator and the procedure is analogous to quantum error correction.
This new paper has already made me think about various connections with ideas in my research or research of others but most of the details remain confusing. So let me sketch just two of these relationships and do so briefly.
Connections between the paper and some seemingly inequivalent ideas
First, two weeks ago, I wrote a blog post about the monster group in quantum gravity and I also mentioned a relationship between black hole microstates and the monodromies around the Euclideanized horizon. Some people think that I was completely kidding but I was not. ;-)
The strongest claim is that the volume of the whole (infinite-dimensional etc.) gauge group reduced to the event horizon will be nothing else than \(\exp(S_{BH})\), the exponentiated black hole entropy – essentially because the trip around the thermal circle is allowed to return you to the same state, up to any gauge transformation. (Well, the monodromies actually don't label all pure microstates but some particular mixed or ER-EPR states but I don't want to go into details here.) So every theory with gravitational degrees of freedom has to have a certain amount of symmetries. There are no black hole horizons in the paper by Mintun et al. but I do think that the degree of ambiguity of the precursor operators they acknowledge is mathematically analogous to the microstates of a black hole – or an event horizon, to be more general – and their work also implies something about the black hole code, too.
Second, I believe that there are intense overlooked relationships between the work by Mintun et al. (and older papers they built upon) and the formalism of Papadodimas and Raju. To be slightly more specific, look at and below equation 5 in Mintun et al.. A commutator isn't zero but it is "effectively" zero – in this case, when acting on gauge-invariant states, for some reason. This is similar to comments in Papadodimas-Raju in and below equation 9 in which the commutator of \({\mathcal O}\) and \(\tilde{\mathcal O}\) "effectively" vanishes – when included in a product of operators with a small enough number of factors.
In both cases, some commutators are claimed to be "much smaller than generic" because of certain special circumstances. If the claims are basically analogous, there is a relationship between the gauge invariance of (a priori more general, gauge-non-invariant) boundary operators; and between operators that keep the pre-existing bulk (black hole...) microstate the same (because they only allow a limited number of local field excitations).
Another general point. Both your humble correspondent and (later) Polchinski have been nervous about the main ambiguity in the Papadodimas-Raju construction. There are many ways how the black hole interior operators may be represented in terms of the boundary CFT operators. There's this state dependence (which I am not anxious about but Polchinski is) but it is not the only source of ambiguity: one has to pick a convention on ensembles and related things, too.
Now, Mintun, Polchinski, and Rosenhaus see something similar in their construction. In the boundary CFT (the same side), they also see many precursor operators to represent the bulk fields. They're equivalent when acting on gauge-invariant operators. However, it could be an important unexpected point that
the black hole interior operators are dual to gauge-non-invariant operators in the boundary CFT, and if that's the case, much of the new content in the papers by Mintun et al. and Papadodimas-Raju could actually be the same, at least morally! If the quote above is right, we face a potential "intermediate" answer to the question whether the AdS/CFT correspondence contains predictions for the observations performed by the infalling observer. It may be the case that the precise predictions for the black hole interior may be extracted from the boundary CFT – but only if you add something like a "gauge choice", too.
Many details remain confusing, as you can see, but I do think that we are getting closer to a picture that teaches us a great deal about the character of the "holographic code" in quantum gravity, as well as the way the black hole interior (and its local operators) is interpreted in terms of the boundary CFT (a non-gravitational or bulk-non-local description of the microstates). Effectively or approximately vanishing operators due to gauge symmetries, and the boundary gauge symmetries themselves, seem to play a vital role in this final picture that is going to be found rather soon.
And that's my prophecy. |
\small \bold{\color{blue} \dot{m} = CA \ \sqrt[]{k \rho_{in} P_{in} { \left ( \frac{2}{k+1} \right ) }^{ \frac{k+1}{k-1}}} \ = \ CAP_{in} \ \sqrt[]{\left ( \frac{kM}{Z_{in} RT_{in}} \right ) { \left ( \frac{2}{k+1} \right ) }^{ \frac{k+1}{k-1}}} }
\small {\color{black} \dot{m} } = mass flow rate, kg/s.
\small {\color{black} C } = discharge coefficient, dimensionless, usually about 0.72.
\small {\color{black} A } = discharge hole cross-sectional area, m².
\small {\color{black} k } = heat capacity ratio \(c_p/c_v\) of the gas, dimensionless.
\small {\color{black} \rho_{in}} = real gas (total) density at total pressure \(P_{in}\) and total temperature \(T_{in}\), kg/m³.
\small {\color{black} T_{in} } = absolute upstream total temperature of the gas, K.
\small {\color{black} M } = gas molecular weight (dimensionless).
\small {\color{black} R } = universal gas constant, (Pa·m³)/(kmol·K) = 8.314472×10³.
\small {\color{black} Z_{in}} = the gas compressibility factor at \(P_{in}\) and \(T_{in}\); for an ideal gas, \(Z_{in} = 1\). |
Let's say I have a ball attached to a string and I'm spinning it above my head. If it's going fast enough, it doesn't fall. I know there's centripetal acceleration that's causing the ball to stay in a circle but this doesn't have to do with the force of gravity from what I understand. Shouldn't the object still be falling due to the force of gravity?
We have the ball orbiting at a distance $R$ from the centre of rotation and the string inclined at angle $\theta$ with respect to the horizontal.
Two main forces act on the ball: gravity $mg$ ($m$ is the mass of the ball, $g$ the Earth's gravitational acceleration) and $F_c$, the centripetal force needed to keep the ball spinning at a constant rate. $F_c$ is given by:
$$F_c=\frac{mv^2}{R},$$
where $v$ is the orbital velocity, i.e. the speed of the ball on its circular trajectory.
Trigonometry also tells us that if $T$ is the tension in the string, then:
$$T\cos\theta=F_c.$$
Similarly, since the ball is not moving in the vertical direction, the upward component of the tension, $F_{up}$, must balance gravity:
$$T\sin\theta=F_{up}=mg.$$
From this relation we can infer:
$$T=\frac{mg}{\sin\theta}.$$
And so:
$$\frac{mg}{\tan\theta}=F_c=\frac{mv^2}{R}.$$
Or:
$$\tan\theta=\frac{gR}{v^2}.$$
From this it follows that for small $\tan\theta$, and thus small $\theta$, we need large $v$; at lower $v$, $\theta$ increases. Also note that $\theta$ is independent of the mass $m$.
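As a quick numerical check of $\tan\theta=gR/v^2$ (the speeds and radius below are illustrative assumptions):

```python
import math

g = 9.81  # m/s^2, gravitational acceleration

def string_angle(v, R):
    """Angle of the string below horizontal, from tan(theta) = g*R / v**2."""
    return math.atan(g * R / v**2)

# Illustrative numbers: a circle of radius R = 1 m
theta_fast = string_angle(v=10.0, R=1.0)  # fast spin: string nearly horizontal
theta_slow = string_angle(v=2.0, R=1.0)   # slow spin: string droops much more
```

The faster the ball moves, the smaller the droop angle; the mass never enters.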
The string is at a slight angle to horizontal $\theta$. It is not exactly horizontal. The slight angle is such that the tension in the string exactly counteracts gravity, $T\sin(\theta)=m g$. So, there is actually a force acting upwards that counteracts gravity, and it is supplied by the string.
You're right that if $\theta=0$ exactly, there would be a problem and the object would necessarily fall a bit.
I appreciate that this has already been answered correctly, but I thought it may be worth adding a simplistic summary:
When the ball is spinning, there is a force acting on it which pushes it away from the centre of rotation. The only way it can get further away from that point is by moving upwards (because the string stops it from moving outwards without moving upwards). So if the force pushing the ball out is greater than the force pulling it down (gravity), it will rise.
I differ with all of the explanations above. If you are spinning a ball horizontally and let go of the string, it would fall at once if it is in a vacuum/airless place. In the other scenario, which is the realistic one, it does not fall because the ball stirs the air around it away, thus creating a lower air pressure zone in the plane where it is being rotated. So the air below it creates an upward pressure to hold the rotating object.
This is the theory behind how a helicopter works. By the way, have you heard of the Ninja weapon Shuriken?
protected by Qmechanic♦ Nov 6 '15 at 11:13 |
Consider two points $$P\left( {{x_1},{y_1}} \right)$$ and $$Q\left( {{x_2},{y_2}} \right)$$, then: 01. The distance formula \[\left| {PQ} \right| = \sqrt…
Geometry
01. The equation of a circle having the center at $$O\left( {0,0} \right)$$ and radius $$r$$ is \[{x^2} + {y^2}…
Some important results and formulas regarding the polar equation of a conic are listed here. 1. The polar equation of a…
1. $$A = \frac{1}{2}b \cdot h$$, where $$b$$ is the base and $$h$$ is the altitude of the triangle. 2. The… |
In my previous question (Integrating Joint Normal Distribution in R3), I describe a joint normal PDF with three independent random variables of $\mathbf N_3(0,\mathbf I)$. The joint pdf is $p(x,y,z)=p(x)p(y)p(z)={(1/\sqrt {2\pi})}^3e^{-{(x^2+y^2+z^2})/2}$ and the distribution I'm interested in becomes $$P(r<r_o)={(1/\sqrt {2\pi})}^3\int _{x^2+y^2+z^2<r_o^2}e^{-{(x^2+y^2+z^2})/2} dxdydz $$ This can be integrated through change of variables and use of integration by parts to
(Updated/Corrected) \begin{align} \ P(r<r_o) & = {(1/\sqrt {2\pi})}^3 \int _{0 \le r \le r_o} 4\pi r^2 e^{-r^2/2} dr \\ & = erf(\frac{r_o}{\sqrt{2}}) - \sqrt{ \frac{2}{\pi}} r_o e^{- \frac{r_o^{2}}{2}} \end{align}
On the other hand, I would expect $P(r<r_o) \equiv P(r^2<r_o^2)$
This is where colleagues expect a degree-3 $\chi^2$ distribution to be applicable. For this I used the formula from Wikipedia for the cumulative distribution of the $\chi^2$ with k=3 for three degrees of freedom:
$$ F(x;k)=\frac{\gamma(\frac{k}{2},\frac{x}{2})}{\Gamma(\frac{k}{2})}$$
Using $x=r^2$, I used this Matlab code to model it:
chi2Dist = gammainc(r.^2/2, 3/2,'lower') / gamma(3/2);
A plot comparing the two CDFs is shown below, and apart from the obvious scaling error in the $\chi^2$ CDF, they look like a match. I'd like some ideas on how to show they are the same (or not). (Also, how did I get the scale wrong on the $\chi^2$ CDF?) |
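A quick numerical check (a Python sketch added here, standard library only) that the two CDFs are identical once $x=r_o^2$; it also suggests the scale error: MATLAB's `gammainc` is already regularized, so dividing by `gamma(3/2)` again shrinks the curve by a factor of $\Gamma(3/2)\approx 0.886$.

```python
import math

def cdf_radial(r0):
    """P(r < r0) from the integration above."""
    return math.erf(r0 / math.sqrt(2)) - math.sqrt(2 / math.pi) * r0 * math.exp(-r0**2 / 2)

def cdf_chi2_3(x):
    """Chi-square CDF with k = 3: gamma(3/2, x/2) / Gamma(3/2), using the
    closed form gamma(3/2, t) = (sqrt(pi)/2)*erf(sqrt(t)) - sqrt(t)*exp(-t)."""
    t = x / 2
    lower = (math.sqrt(math.pi) / 2) * math.erf(math.sqrt(t)) - math.sqrt(t) * math.exp(-t)
    return lower / math.gamma(1.5)  # Gamma(3/2) = sqrt(pi)/2

# Substituting x = r0**2 reproduces the erf expression term by term,
# so the two CDFs agree exactly, not just visually.
```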
For each first-order $\sigma \,$-formula $\varphi(y,x_{1}, \ldots, x_{n}) \,,$ the axiom of choice implies the existence of a function $f_{\varphi}: M^n\to M$ such that, for all $a_{1}, \ldots, a_{n} \in M$, either $M\models\varphi(f_{\varphi} (a_1, \dots, a_n), a_1, \dots, a_n)$ or $M\models\neg\exists y \varphi(y, a_1, \dots, a_n) \,.$
Applying the axiom of choice again we get a function from the first order formulas $\varphi$ to such functions $f_{\varphi} \,.$
The family of functions $f_{\varphi}$ gives rise to a preclosure operator $F \,$ on the power set of $M \,$ $F(A) = \{b \in M \mid b = f_{\varphi}(a_1, \dots, a_n); \, \varphi \in \sigma ; \, a_1, \dots, a_n \in A \}$
for $A \subseteq M \,.$
Iterating $F \,$ countably many times results in a closure operator $F^{\omega} \,.$ Taking an arbitrary subset $A \subseteq M$ such that $\left\vert A \right\vert = \kappa$, and having defined $N = F^{\omega}(A) \,,$ one can see that also $\left\vert N \right\vert = \kappa \,.$ $N \,$ is an elementary substructure of $M \,$ by the Tarski–Vaught test. The trick used in this proof is essentially due to Skolem, who introduced function symbols for the Skolem functions $f_{\varphi}$ into the language. One could also define the $f_{\varphi}$ as partial functions such that $f_{\varphi}$ is defined if and only if $M \models \exists y \varphi(y,a_1,\dots,a_n) \,.$ The only important point is that $F \,$ is a preclosure operator such that $F(A) \,$ contains a solution for every formula with parameters in $A \,$ which has a solution in $M \,$ and that $\left\vert F(A) \right\vert \leq \left\vert A \right\vert + \left\vert \sigma \right\vert + \aleph_0 \,.$ (Downward Lowenheim-Skolem theorem proof)
The first question is, how does $F^\omega$ become a closure operator?
The second question is, why is $\left\vert F(A) \right\vert \leq \left\vert A \right\vert + \left\vert \sigma \right\vert + \aleph_0 \,$? |
Let $T_{X}$ be the full transformation semigroup on $X$. For $\alpha$, $\beta \in T_{X}$ $$\alpha \mathcal{R}\beta \text { if and only if there exist }\gamma,\gamma' \in T_{X}:\alpha\gamma=\beta\gamma' .$$
This question, which looks trivial, took my course mates and me about an hour. We argued that, by definition, $\alpha \mathcal{R}\beta$ implies $\alpha T_{X}^1=\beta T_{X}^1$.
So, there exist $\gamma,\gamma' \in T_{X}$ such that $\alpha\gamma=\beta\gamma'$. Hence the result.
But our professor rejected our proof since $\gamma,\gamma' \in T_{X}$ not in $T_{X}^1$ as given in the statement of the problem. The lecture notes by Tero Harju are here, chapter 5 page 52.
Note that in any semigroup $S$ the relations $\mathcal{L}$, $\mathcal{R}$ and $\mathcal{J}$ are defined by $$x \mathcal{L}y \Leftrightarrow S^1x=S^1y$$ $$x \mathcal{R}y \Leftrightarrow xS^1=yS^1$$ $$x \mathcal{J}y \Leftrightarrow S^1xS^1=S^1yS^1$$
The set $T_{X}$ is the set of all mappings from $X$ to $X$, known as the full transformation semigroup on $X$, with the operation of composition of mappings. |
Tricks to evaluate Gaussian Integral
We want to evaluate \begin{align} I=\int_{-\infty}^{\infty}e^{-mx^2}dx \label{eq:1} \end{align} for $m>0$. But hey, $x$ is just a dummy variable. If we change it to $y$ this integration will still be the same.
\begin{align} I=\int_{-\infty}^{\infty}e^{-my^2}dy \label{eq:2} \end{align} Just for fun, let's multiply $\eqref{eq:1}$ and $\eqref{eq:2}$: \begin{align} I^2=\int_{-\infty}^{\infty}e^{-mx^2}dx \int_{-\infty}^{\infty}e^{-my^2}dy \end{align} Since the two integrals are independent, we can combine them into a double integral: \begin{align} I^2=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}e^{-m(x^2+y^2)}dx dy \end{align} This equation looks like something that could be evaluated in polar coordinates. That's what we are going to do next. Remember that the change of variables brings in the Jacobian factor $r$, i.e. $dx\,dy = r\,dr\,d\theta$, and the regular old polar coordinate relation is $$r^2=x^2+y^2$$ So, let's go ahead and convert this integral into polar coordinates. The limits for $r$ will be $(0,\infty)$ and the limits for $\theta$ will be $(0,2\pi)$.
So \begin{align} I^2=\int_{0}^{2\pi} \int_{0}^{\infty}re^{-mr^2}dr d\theta \end{align} Look, the term $re^{-mr^2}$ does not depend on $\theta$, so the $\theta$ integral just gives a factor of $2\pi$: \begin{align} I^2=2\pi \int_{0}^{\infty}re^{-mr^2}dr \end{align} Hmm, looks better. Maybe we can try the substitution method this time. Let $$u=r^2$$ \begin{align*} \dfrac{du}{dr}&=2r\\ \Rightarrow dr &=\dfrac{du}{2r} \end{align*} And the limits will still be $(0,\infty)$. So \begin{align*} I^2 &=2\pi \int_{0}^{\infty}re^{-mu} \dfrac{du}{2r}\\ &=\pi \int_{0}^{\infty}e^{-mu} du\\ \end{align*} Looks awesome. Now this integral is really easy to evaluate, isn't it? Let's evaluate it: \begin{align*} I^2 &=\pi \dfrac{1}{m}\left(-e^{-mu}\bigg|_0^{\infty}\right)\\ \Rightarrow I^2 &=\dfrac{\pi}{m}\\ \Rightarrow I &=\sqrt{\dfrac{\pi}{m}} \end{align*} And that's how you evaluate the Gaussian integral. |
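As a sanity check on the derivation above, a short Python sketch (an illustrative addition) compares a brute-force numerical value of the integral with $\sqrt{\pi/m}$:

```python
import math

def gaussian_integral_numeric(m, half_width=10.0, n=200_000):
    """Trapezoidal estimate of the integral of exp(-m*x**2) over the real line.

    The integrand beyond |x| = half_width/sqrt(m) is negligible for the
    tolerances used here, so truncating the domain is safe.
    """
    a, b = -half_width / math.sqrt(m), half_width / math.sqrt(m)
    h = (b - a) / n
    total = 0.5 * (math.exp(-m * a * a) + math.exp(-m * b * b))
    total += sum(math.exp(-m * (a + i * h) ** 2) for i in range(1, n))
    return total * h

m = 2.0
numeric = gaussian_integral_numeric(m)
exact = math.sqrt(math.pi / m)  # the closed form derived above
```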
The incubation time for a breed of chicks is normally distributed with a mean of 20 days and standard deviation of approximately 1 day. Look at the figure below and answer the following questions. If 1000 eggs are being incubated, how many chicks do we expect will hatch in the following time periods? (Note: In this problem, let us agree to think of a single day or a succession of days as a continuous interval of time. Assume all eggs eventually hatch.)
(a) in 18 to 22 days: ___ chicks
(b) in 19 to 21 days: ___ chicks
(c) in 20 days or fewer: ___ chicks
(d) in 17 to 23 days: ___ chicks
\(\text{Let }\Phi(x) \text{ be the CDF of the standard normal distribution and } n \text{ the incubation time. Then}\\ P[n \leq N] = \Phi\left(\dfrac{N-\mu}{\sigma}\right)\\ P[n > N] = 1-P[n \leq N] \\ P[N_l \leq n \leq N_h] = \Phi\left(\dfrac{N_h-\mu}{\sigma}\right) - \Phi\left(\dfrac{N_l-\mu}{\sigma}\right)\)
now just dump your numbers in |
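Dumping the numbers in with Python's standard library (a sketch added here for concreteness; the counts follow the 68-95-99.7 rule):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma, eggs = 20.0, 1.0, 1000

def expected_chicks(lo, hi):
    """Expected number of hatches in the interval [lo, hi] days."""
    p = Phi((hi - mu) / sigma) - Phi((lo - mu) / sigma)
    return round(eggs * p)

a = expected_chicks(18, 22)               # mu +/- 2 sigma
b = expected_chicks(19, 21)               # mu +/- 1 sigma
c = round(eggs * Phi((20 - mu) / sigma))  # 20 days or fewer: half the eggs
d = expected_chicks(17, 23)               # mu +/- 3 sigma
# → a, b, c, d = 954, 683, 500, 997
```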
How to prove the following conclusion:
[For any infinite set $S$,there exists a bijection $f:S\to S \times S$] implies the Axiom of choice.
Can you give a proof without the theory of ordinal numbers.
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.
I will assume you are familiar with the definitions of cardinality and the ordering of cardinals, even in the absence of choice. If anything is unclear please let me know, and I will see to elaborate if needed.
Fact I:If $\kappa$ is a cardinal, and $\aleph$ is any $\aleph$-number (i.e. a well-orderable cardinal), if $\kappa\le\aleph$ then $\kappa$ can be well ordered as well.
(This is a very nice exercise, in case you do not see it right away).
Definition: (Hartogs number)For an infinite set $\kappa$ we denote $\aleph(\kappa)$ the least ordinal which cannot be injected into $\kappa$.
When assuming the axiom of choice, $\aleph(\kappa)$ is $|\kappa|^+$ (that is the successor cardinal of $|\kappa|$). we have, if so, that $\aleph(\omega)=\omega_1$.
Without the axiom of choice this gets slightly more complicated, since there are sets which cannot be well ordered. In a way, Hartogs number measures how much of the set can we well order.
Consider, for example, the case of $A$ being an amorphous set (that is, $A$ is infinite, and every $B\subseteq A$ is either finite or $A\setminus B$ is finite). Since $A$ is infinite every finite cardinal can be embedded into it. However $\aleph_0\nleq |A|$, therefore $\aleph(A)=\omega$.
Fact II:For every infinite set $A$, the ordinal $\aleph(A)$ is an initial ordinal, i.e. an $\aleph$ cardinal.
Otherwise there is $f\colon \aleph(A)\to\alpha$ which is an injection, and $g\colon\alpha\to A$ which is an injection. A composition of injective functions is injective, therefore $g\circ f\colon \aleph(A)\to A$ is an injection of $\aleph(A)$ into $A$, which is a contradiction to the definition of $\aleph(A)$.
Lemma:If $\kappa$ is an infinite cardinal and $\aleph_\alpha$ is an $\aleph$-number, and $$\kappa+\aleph_\alpha=\kappa\cdot\aleph_\alpha$$ Then $\kappa\le\aleph_\alpha$ or $\kappa\ge\aleph_\alpha$.
Before proving this lemma, let us consider this useful corollary:
Corollary:If $\kappa+\aleph(\kappa)=\kappa\cdot \aleph(\kappa)$ then $\kappa$ can be well ordered. Proof: Since $\aleph(\kappa)$ cannot be injected into $\kappa$, in particular we have that $\aleph(\kappa)\nleq\kappa$, therefore $\kappa<\aleph(\kappa)$ and so it can be well ordered. Proof of Lemma: Let $A$ be of cardinality $\kappa$ and $P$ of cardinality $\aleph_\alpha$, without loss of generality we can assume the two sets are disjoint.
Since $|A\times P|=|A\cup P|$, we can also assume there are two disjoint sets $|A'|=|A|$ and $|P'|=|P|$ such that $A\times P=A'\cup P'$.
Since we divide an infinite set into two parts, at least one of the following is bound to occur:
There exists some $a\in A$ such that $\langle a,p\rangle\in A'$ for every $p\in P$, and then we have that $\kappa\ge\aleph_\alpha$ (by the map $p\mapsto\langle a,p\rangle$).
If there is no $a$ as above, then for every $a\in A$ there is some $p\in P$ such that $\langle a,p\rangle\notin A'$. For every $a\in A$ let $p_a\in P$ be the minimal $p\in P$ such that $\langle a,p\rangle\in P'$, and so the injective function $a\mapsto\langle a,p_a\rangle$ is an injective function of $A$ into $P'$, and therefore $\kappa\le\aleph_\alpha$.
Now the main theorem, proved by Tarski.
Theorem:Suppose for every infinite set $A$ it holds $|A|=|A\times A|$, then every set can be well ordered. Proof: Let $\kappa$ be an infinite cardinal. Consider $\kappa+\aleph(\kappa)$. Our assumption gives us:
$$\kappa+\aleph(\kappa)=(\kappa+\aleph(\kappa))^2=\kappa^2+2\kappa\cdot \aleph(\kappa)+\aleph(\kappa)^2\ge\kappa\cdot \aleph(\kappa)\ge\kappa+\aleph(\kappa)$$
Where the last $\ge$ sign is due to the function from $\kappa\cup \aleph(\kappa)$: $$x\mapsto\begin{cases} \langle x,0\rangle & x\in\kappa,x\neq k\\ \langle t,x\rangle & x\in \aleph(\kappa)\\ \langle k,1\rangle & x=k\end{cases}$$ (For fixed $t,k\in\kappa$ and $t\neq k$. Note that we use the fact $\kappa$ is infinite, therefore it has at least two elements).
By the Corollary we have that $\kappa$ can be well ordered.
Pointed out to me on the MathOverflow post of this question, that there was a request to avoid the ordinals. I did think about it, a little bit.
I do remember that studying this proof originally, as well as the proof that "If $\kappa$ is such that $|A|\le\kappa<|\mathcal P(A)|$, then $|A|=\kappa$" implies choice, offers no immediate intuition as to why the proof works. The use of Hartogs number looks like a magic trick.
However, since the first time I had studied this proof I have gained more than a bite of intuition with regards to the axiom of choice. It seems that this is a good place to slightly give the intuitive explanation for this proof.
I will start by saying that I do not believe that any proof of this theorem will be both concise and reveal the intuition behind it. So even if I were to think really hard and transform this proof into a choice function sort of proof, I doubt it will be any more revealing in its intuition, which is why I have decided to write the following instead -- it seems more profitable to the reader.
It is a rather well known fact that the axiom of choice is equivalent to the assertion that every set can be well ordered. In the absence of choice there are sets that cannot be well ordered. Hartogs number is a way to measure how much of the set can be well ordered.
The axiom of choice is equivalent, as well, to the assertions that the cardinalities of any two sets can be compared. This is exactly due to the idea of Hartogs number - if we can compare a cardinal with its Hartogs then it can be well ordered.
So essentially what we try to have is "enough" comparability to deduce the well ordering principle. This is exactly what the tricky lemma gives us.
The idea behind the proof of the lemma is that if we break the multiplication into a sum then we can find a way to compare the sets. We can replace the requirement that the second set is well ordered simply by "has a choice function on its power set" (this, however, is equivalent to saying that the set can be well ordered).
From this the theorem follows, proving that we can compare every infinite set with its Hartogs number, therefore implying the set can be well ordered.
Let $A$ be an arbitrary infinite set. I will show that $A$ has a choice function, i.e., a function $a:\mathcal{P}(A)\setminus\{\varnothing\}\to A$ such that $a(X) \in X$ for every nonempty $X \subseteq A$.
First, pick a set $B$ such that:
there is a choice function $b:\mathcal{P}(B)\setminus\{\varnothing\}\to B$, and
there is no injection from $B$ into $A$.
(For example, take $B$ to be the set of all isomorphism classes of wellorderings of subsets of $A$.) Assume further that $A \cap B = \varnothing$.
By hypothesis, there is an injection $f:(A\cup B)^2\to(A\cup B)$. Given $x \in A$, there must be a $y \in B$ such that $f(x,y) \in B$, otherwise $y \in B \mapsto f(x,y)$ would be an injection from $B$ into $A$. We may thus define a function $g:A \to B^2$ by $g(x) = (y,f(x,y))$ where $y = b(\{u \in B : f(x,u) \in B\})$, i.e., for each $x \in A$, $g(x)$ picks a pair $(y,z) \in B^2$ such that $f(x,y) = z$. Note that $g$ must be an injection: if $g(x) = (y,z) = g(x')$ then $f(x,y) = z = f(x',y)$ and hence $x = x'$ since $f$ is an injection.
Now observe that $B^2$ has a choice function just like $B$. Namely, let $c:\mathcal{P}(B^2)\setminus\{\varnothing\}\to B^2$ be defined by $c(X) = (y,z)$ where $$y = b(\{u \in B : (\exists v \in B)((u,v) \in X)\})$$ and $$z = b(\{v \in B : (y,v) \in X \})$$ for every nonempty $X \subseteq B^2$. It follows that $A$ has a choice function too. Namely, let $a:\mathcal{P}(A)\setminus\{\varnothing\}\to A$ be defined by $$a(X) = g^{-1}(c(\{g(x) : x \in X\}))$$ for every nonempty $X \subseteq A$.
Remark. The proof of the existence of the set $B$ outlined above relies on the theory of wellorderings but not on the theory of ordinals. In fact, the above argument can be carried out in the theory Z (= ZF minus the Replacement Axiom). It is well known that there are models of Z with very few ordinals, e.g., $V_{\omega+\omega}$ is a model of Z. |
I have the following double integral:
$$ \int_{0}^{\infty}\int_{0}^{\infty}\frac{130000\, e^{-0.36k^2}\, r\sin(kr)\sin(kR)}{(72000+e^{2r})\,k^2R}\,dr\,dk$$
The integration is done first in $r$ and then in $k$, finally giving a function of $R$. The integral cannot be done analytically, so I used the following commands to build a table and the plot:
fc[k_?NumericQ, r_?NumericQ] := 130000*Exp[-0.36 k^2]*r*Sin[k r]*Sin[k R]/((72000 + Exp[2 r])*k^2*R)
and then:
tb=Table[{R,NIntegrate[ fc[k,r],{k,0,Infinity},{r,0,Infinity},Method -> {"LevinRule", "Points" -> 5}, PrecisionGoal -> 2, MaxRecursion -> 50]}, {R, 0.1, 20, 0.1}]
So finally I got the following plot:
I have to find the x-coordinate for a given y-coordinate, i.e. solve an equation: say, for y=20 I should find approximately x=10. However, I don't know how to solve an equation involving a table or this kind of integral, and later I will have to add another horrific integral like the one here and do the same thing. Moreover, the integration is already taking too long with just one integral! Can anybody help with these issues? Thanks in advance :D |
Parameters: $ k > 0\, $ degrees of freedom
Support: $ x \in [0; +\infty)\, $
pdf: $ \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2}\, $
cdf: $ \frac{\gamma(k/2,x/2)}{\Gamma(k/2)}\, $
Mean: $ k\, $
Median: approximately $ k-2/3\, $
Mode: $ k-2\, $ if $ k\geq 2\, $
Variance: $ 2\,k\, $
Skewness: $ \sqrt{8/k}\, $
Kurtosis: $ 12/k\, $
Entropy: $ \frac{k}{2}\!+\!\ln(2\Gamma(k/2))\!+\!(1\!-\!k/2)\psi(k/2) $
mgf: $ (1-2\,t)^{-k/2} $ for $ 2\,t<1\, $
Characteristic function: $ (1-2\,i\,t)^{-k/2}\, $
In probability theory and statistics, the chi-square distribution (also chi-squared distribution), or χ² distribution, is one of the theoretical probability distributions most widely used in inferential statistics, i.e. in statistical significance tests. It is useful because, under reasonable assumptions, easily calculated quantities can be proved to have distributions that approximate to the chi-square distribution if the null hypothesis is true. If $ X_1, \dots, X_k $ are independent, normally distributed random variables with means $ \mu_i $ and standard deviations $ \sigma_i $, then $ Z = \sum_{i=1}^k \left(\frac{X_i-\mu_i}{\sigma_i}\right)^2 $ is distributed according to the chi-square distribution. This is usually written
$ Z\sim\chi^2_k $
The chi-square distribution has one parameter: $ k $ - a positive integer which specifies the number of degrees of freedom (i.e. the number of $ X_i $)
The chi-square distribution is a special case of the gamma distribution.
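The defining property above is easy to check empirically; a minimal Python sketch (an illustration added here, with an assumed sample size), verifying that the sum of $k$ squared standard normals has mean $k$ and variance $2k$:

```python
import random

# Empirical check: sums of k squared standard normals should have
# mean k and variance 2k, matching chi-square with k degrees of freedom.
random.seed(0)
k, n = 3, 200_000
samples = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k)) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# mean should be close to k = 3 and var close to 2k = 6
```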
The best-known situations in which the chi-square distribution is used are the common chi-square tests for goodness of fit of an observed distribution to a theoretical one, and of the independence of two criteria of classification of qualitative data. However many other statistical tests lead to a use of this distribution, for example Friedman's analysis of variance by ranks.
Properties
The chi-square probability density function is
$ f(x;k)= \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2} $
where $ x \ge 0 $ and $ f_k(x) = 0 $ for $ x \le 0 $. Here $ \Gamma $ denotes the Gamma function. The cumulative distribution function is:
$ F(x;k)=\frac{\gamma(k/2,x/2)}{\Gamma(k/2)}\, $
where $ \gamma(k,z) $ is the incomplete Gamma function.
Tables of this distribution — usually in its cumulative form — are widely available (see the External links below for online versions), and the function is included in many spreadsheets (for example OpenOffice.org calc or Microsoft Excel) and all statistical packages.
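When no such package is at hand, the cumulative distribution function can be computed directly from the standard series for the regularized lower incomplete gamma function; a minimal Python sketch (the function names are mine, not from the article):

```python
import math

def chi2_cdf(x, k, terms=200):
    """Chi-square CDF F(x;k) = gamma(k/2, x/2) / Gamma(k/2), via the standard
    series P(a, t) = t**a * exp(-t) * sum_{n>=0} t**n / Gamma(a + n + 1)."""
    if x <= 0:
        return 0.0
    a, t = k / 2.0, x / 2.0
    total, term = 0.0, 1.0 / math.gamma(a + 1.0)
    for n in range(terms):
        total += term
        term *= t / (a + n + 1.0)
    return (t ** a) * math.exp(-t) * total

# Sanity check: for k = 2 the chi-square distribution is exponential,
# so F(x;2) should equal 1 - exp(-x/2).
```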
If $ p $ independent linear homogeneous constraints are imposed on the $ k $ variables $ X_i $, the distribution of $ Z $ conditional on these constraints is $ \chi^2_{k-p} $, justifying the term "degrees of freedom". The characteristic function of the chi-square distribution is
$ \phi(t;k)=(1-2it)^{-k/2}\, $
The chi-square distribution has numerous applications in inferential statistics, for instance in chi-square tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables divided by their respective degrees of freedom.
The normal approximation
If $ X\sim\chi^2_k $, then as $ k $ tends to infinity, the distribution of $ X $ tends to normality. However, the tendency is slow (the skewness is $ \sqrt{8/k} $ and the kurtosis is $ 12/k $) and two transformations are commonly considered, each of which approaches normality faster than $ X $ itself:
Fisher showed that $ \sqrt{2X} $ is approximately normally distributed with mean $ \sqrt{2k-1} $ and unit variance.
Wilson and Hilferty showed in 1931 that $ \sqrt[3]{X/k} $ is approximately normally distributed with mean $ 1-2/(9k) $ and variance $ 2/(9k) $.
Expanding the Wilson and Hilferty approximation gives the median as approximately $ k-\frac{2}{3}+\frac{4}{27k}-\frac{8}{729k^2} $
Note that 2 degrees of freedom leads to an exponential distribution.
The information entropy is given by:
$ H = \int_{-\infty}^\infty f(x;k)\ln(f(x;k)) dx = \frac{k}{2} + \ln \left( 2 \Gamma \left( \frac{k}{2} \right) \right) + \left(1 - \frac{k}{2}\right) \psi(k/2) $
where $ \psi(x) $ is the Digamma function.
Related distributions
$ X \sim \mathrm{Exponential}(\lambda = 2) $ is an exponential distribution if $ X \sim \chi_2^2 $ (with 2 degrees of freedom).
$ Y \sim \chi_k^2 $ is a chi-square distribution if $ Y = \sum_{m=1}^k X_m^2 $ for independent, normally distributed $ X_m \sim N(0,1) $.
If the $ X_i\sim N(\mu_i,1) $ have nonzero means, then $ Y = \sum_{m=1}^k X_m^2 $ is drawn from a noncentral chi-square distribution.
$ Y \sim \mathrm{F}(\nu_1, \nu_2) $ is an F-distribution if $ Y = (X_1 / \nu_1)/(X_2 / \nu_2) $ where $ X_1 \sim \chi_{\nu_1}^2 $ and $ X_2 \sim \chi_{\nu_2}^2 $ are independent with their respective degrees of freedom.
$ Y \sim \chi^2(\bar{\nu}) $ is a chi-square distribution if $ Y = \sum_{m=1}^N X_m $ where $ X_m \sim \chi^2(\nu_m) $ are independent and $ \bar{\nu} = \sum_{m=1}^N \nu_m $.
If $ X $ is chi-square distributed, then $ \sqrt{X} $ is chi distributed.
chi-square distribution: $ \sum_{i=1}^k \left(\frac{X_i-\mu_i}{\sigma_i}\right)^2 $
noncentral chi-square distribution: $ \sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2 $
chi distribution: $ \sqrt{\sum_{i=1}^k \left(\frac{X_i-\mu_i}{\sigma_i}\right)^2} $
noncentral chi distribution: $ \sqrt{\sum_{i=1}^k \left(\frac{X_i}{\sigma_i}\right)^2} $
This page uses Creative Commons Licensed content from Wikipedia (view authors).
In the opening scene of "The Euclid Alternative", Sheldon (Jim Parsons) demands that Leonard (Johnny Galecki) drive him around to run various errands, after Leonard has spent the night in the lab using the new Free Electron Laser to perform X-ray diffraction experiments. On the whiteboard in the background, we can see equations that describe a rolling ball problem.
Rolling motion plays an important role in many familiar situations, so this type of motion receives considerable attention in introductory mechanics courses in physics and engineering. One of the more challenging ideas to grasp is that rolling (without slipping) is a combination of both translation and rotation, where the point of contact is instantaneously at rest. The equations on the whiteboard describe the velocity at the point of contact with the ground, at the center of the object, and at the top of the object.
Pure Translational Motion
When an object undergoes pure translational motion, all of its points move with the same velocity as the center of mass: every point moves at the same speed and in the same direction, \(v_{\textrm{cm}}\).
Pure Rotational Motion
In the case of a rotating body, the speed of any point on the object depends on how far it is from the axis of rotation; in this case, the center. For rolling without slipping, the speed of a point on the edge, relative to the center, equals the translational speed \(v_{\textrm{cm}}\). We may think that all these points moving at different speeds poses a problem, but we know something else: the object's angular velocity.
The angular speed tells us how fast an object rotates. In this case, we know that all points along the object's surface complete a revolution in the same time. In physics, we define this by the equation: \begin{equation} \omega=\frac{v}{r} \end{equation} where \(\omega\) is the angular speed. We can rearrange this to give the speed of any point at a distance \(r\) from the center: \begin{equation} v(r)=\omega r \end{equation} If we look at the center, where \(r=0\), we expect the speed to be zero. When we plug zero into the above equation, that is exactly what we get: \begin{equation} v(0)= \omega \times 0 = 0 \label{eq:zero} \end{equation} If we know the object's speed, \(v_{\textrm{cm}}\), and the object's radius, \(R\), a little algebra lets us define \(\omega\) as: \[\omega=\frac{v_{\textrm{cm}}}{R}\] or the speed at the edge, \(v(R)\), to be: \begin{equation} v_{\textrm{cm}}=v(R) = \omega R \label{eq:R} \end{equation}

Putting it all Together
To determine the absolute speed of any point of a rolling object we must add both the translational and rotational speeds together. We see that some of the rotational velocities point in the opposite direction from the translational velocity and must be subtracted. As horrifying as this looks to do, we can reduce the problem somewhat to what we see on the whiteboard. Here we see the boys reduce the problem and look at three key areas, the point of contact with the ground (\(P)\), the center of the object, (\(C\)) and the top of the object (\(Q\)).
We have done most of the legwork at this point and now the rolling ball problem is easier to solve.
At point \(Q\)
At point \(Q\), we know the translational speed to be \(v_{\textrm{cm}}\) and the rotational speed to be \(v(R)\). So the total speed at that point is
\begin{equation} v = v_{\textrm{cm}} + v(R) \label{eq:Q1} \end{equation} Looking at equation \eqref{eq:R}, we can write \(v(R)\) as \begin{equation} v(R) = \omega R \end{equation} Putting this into \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(R) \\ & = v_{\textrm{cm}} + \omega R \\ & = v_{\textrm{cm}} + \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} + v_{\textrm{cm}} = 2v_{\textrm{cm}} \end{aligned} which looks almost exactly like Leonard's board, so we must be doing something right.

At point \(C\)
At point \(C\) we know the rotational speed to be zero (see equation \eqref{eq:zero}).
Putting this back into equation \eqref{eq:Q1}, we get \begin{aligned} v & = v_{\textrm{cm}} + v(r) \\ & = v_{\textrm{cm}} + v(0) \\ & = v_{\textrm{cm}} + \omega \cdot 0 \\ & = v_{\textrm{cm}} + 0 \\ & = v_{\textrm{cm}} \end{aligned} Again we get the same result as the board. At point \(P\)
At the point of contact with the ground, \(P\), we don’t expect a wheel to be moving (unless it skids or slips). If we look at our diagrams, we see that the rotational speed is in the opposite direction to the translational speed and its magnitude is
\begin{aligned} v(R) & = -\omega R \\ & = -\frac{v_{\textrm{cm}}}{R}\cdot R \\ & = -v_{\textrm{cm}} \end{aligned} It is negative because the velocity is in the opposite direction. Equation \eqref{eq:Q1} becomes \begin{aligned} v & = v_{\textrm{cm}} + v(R) \\ & = v_{\textrm{cm}} - \omega R \\ & = v_{\textrm{cm}} - \frac{v_{\textrm{cm}}}{R}\cdot R \\ & = v_{\textrm{cm}} - v_{\textrm{cm}} = 0 \end{aligned} Not only do we get the same result as the rolling ball problem on the whiteboard, but it is what we expect: when a rolling ball, wheel or object doesn't slip or skid, the point of contact is stationary.

Cycloid and the Rolling ball
If we were to trace the path drawn by a point on the ball, we would get something known as a cycloid. The rolling ball problem is an interesting one, and the reason it is studied is that the body undergoes two types of motion at the same time: pure translation and pure rotation. This means that the point that touches the ground, the contact point, is stationary while the top of the ball moves twice as fast as the center. It seems somewhat counter-intuitive, which is why we don't often think about it, but imagine if, at the point of contact, our car's tires weren't stationary but moved. We'd slip and slide and not get anywhere fast. But that is another problem entirely.
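The three whiteboard results can be collapsed into one line of arithmetic. A tiny sketch (my own illustration; the function and variable names are made up):

```python
# Speed of a point on a wheel rolling without slipping.
# The rotational contribution is omega * r_signed, where r_signed is
# +R at the top (Q), 0 at the center (C), and -R at the contact point (P).
def point_speed(v_cm, R, r_signed):
    omega = v_cm / R          # rolling-without-slipping condition
    return v_cm + omega * r_signed

v_cm, R = 3.0, 0.5
print(point_speed(v_cm, R, +R))  # top Q: 2 * v_cm = 6.0
print(point_speed(v_cm, R, 0))   # center C: v_cm = 3.0
print(point_speed(v_cm, R, -R))  # contact P: 0.0
```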
Let $\Phi_n(x) \in \mathbb{Z}[x]$ denote the $n$-th cyclotomic polynomial, and let $\mathbb{F}_q$ be the finite field with $p^k = q$ elements ($p$ prime). Let $\Phi'_n(x)$ be the reduction of $\Phi_n(x)$ mod $p$ (i.e., $\Phi'_n(x)$ is the image of $\Phi_n(x)$ in $\mathbb{F}_q[x]$). Is it true that $\Phi'_n(x)$ is the $n$-th cyclotomic polynomial in $\mathbb{F}_q$; that is, are the roots of $\Phi_n'(x)$ precisely the primitive $n$-th roots of unity in $\mathbb{F}_q$? If so, I want to prove this is the case, but I'm not sure how I'd go about doing this.
You need $\gcd(p,n)=1$, for otherwise there are no roots of unity of order $n$ in any field $\Bbb{F}_q$, $q=p^k$. Basically this is because $1$ is the only root of $x^p-1=(x-1)^p$.
But, assuming $\gcd(n,p)=1$, the claim is true. If $\alpha\in\Bbb{F}_q$ is a root of unity of order $n$ (implying $n\mid q-1$), then the powers $\alpha^k$, $0<k<n$, $\gcd(k,n)=1$, are also of order $n$. Therefore there are $\phi(n)$ such elements in $\Bbb{F}_q$. Because they all have order $n$, they are zeros of $x^n-1$, but not of any $x^d-1$ with $d\mid n$, $d<n$.
In the ring $\Bbb{Q}[x]$ we have the factorization $$ x^n-1=\prod_{d\mid n}\Phi_d(x), $$ and we know that $\gcd(\Phi_{d_1},\Phi_{d_2})=1$ whenever $d_1\neq d_2$. Because reduction modulo $p$ is a homomorphism of polynomial rings we get the factorization in $\Bbb{F}_p[x]$ $$ x^n-1=\prod_{d\mid n}\Phi'_d(x).\qquad(*) $$ As we assumed $\gcd(n,p)=1$ we have, for $f(x)=x^n-1$, $$\gcd(f(x),f'(x))= \gcd(x^n-1,nx^{n-1})=1,$$ so the zeros of $x^n-1$ in any field of characteristic $p$ are simple. Therefore we still have no common factors, and $\gcd(\Phi_{d_1}'(x),\Phi_{d_2}'(x))=1$ whenever $d_1\neq d_2$.
Any root of unity $\alpha$ of order $n$ in $\Bbb{F}_q$ is a zero of the left-hand side of $(*)$, so it must also be a zero of one of the factors $\Phi_d'(x)$. Because $\Phi_d'(x)\mid x^d-1$, it follows that $\alpha$ cannot be a zero of $\Phi_d'(x)$ for any proper divisor $d\mid n$. Therefore $\Phi_n'(\alpha)=0$.
Because $\deg\Phi_n(x)=\phi(n)$ equals the number of roots of unity of order $n$ in $\Bbb{F}_q$, we can conclude that $$ \Phi_n'(x)=\prod_{\alpha\in\Bbb{F}_q\ \text{of order $n$}}(x-\alpha). $$
What changes from the characteristic-zero setting is that the polynomials $\Phi_n(x)$ are usually no longer irreducible. $\Phi_n(x)$ remains irreducible after reduction modulo $p$ if and only if $p$ is a generator of the multiplicative group of residue classes $\Bbb{Z}_n^*$.
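The statement can be checked computationally for small cases. A stdlib-only sketch (the function names are mine): build $\Phi_n$ over $\mathbb{Z}$ by the divisor recursion $\Phi_n = (x^n-1)/\prod_{d\mid n,\,d<n}\Phi_d$, reduce mod $p$, and compare its roots in $\mathbb{F}_p$ (the prime-field case $k=1$) with the elements of multiplicative order $n$:

```python
def poly_divexact(a, b):
    """Exact division of integer polynomials (coefficients low-degree
    first, b monic); asserts the remainder is zero."""
    a = a[:]
    q = [0] * (len(a) - len(b) + 1)
    for i in reversed(range(len(q))):
        q[i] = a[i + len(b) - 1]
        for j, bj in enumerate(b):
            a[i + j] -= q[i] * bj
    assert all(c == 0 for c in a[:len(b) - 1])
    return q

def cyclotomic(n):
    """Coefficients of Phi_n over Z via Phi_n = (x^n - 1) / prod Phi_d."""
    num = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            num = poly_divexact(num, cyclotomic(d))
    return num

def roots_match_orders(n, p):
    """True iff the roots of Phi_n mod p in F_p are exactly the
    elements of multiplicative order n."""
    phi = [c % p for c in cyclotomic(n)]
    roots = {x for x in range(p)
             if sum(c * pow(x, i, p) for i, c in enumerate(phi)) % p == 0}
    order_n = {x for x in range(1, p)
               if next(k for k in range(1, p) if pow(x, k, p) == 1) == n}
    return roots == order_n

print(cyclotomic(12))             # x^4 - x^2 + 1
print(roots_match_orders(12, 13)) # roots of Phi_12 mod 13 = {2, 6, 7, 11}
```

For $n=8$, $p=5$ both sets are empty (the order-8 elements live in $\mathbb{F}_{25}$, not $\mathbb{F}_5$), so the check is vacuously true there as well.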
I am using a PIC micro with a 10-bit ADC to take readings from an analog signal with a frequency below 300 Hz. However, that analog signal is in the range of -2 V to +2 V. How can I condition the signal to get it into a usable range (assuming the input to the ADC has to be positive)? Also, I do not have a positive and negative power supply.
Important note: this answer was posted to solve the problem for a -20 V to +20 V input, because that was what was originally asked. It's a clever method, but it doesn't work if the input voltage range stays between the rails.
You'll have to scale the voltage with a resistor divider so that you get a voltage between -2.5V and +2.5V, and add 2.5V. (I'm presuming a 5V power supply for your PIC).
The following calculation looks long, but that's only because I explain every step in detail. In reality it's so easy that you can do it in your head in no time.
First this:
R1 is the resistor between \$V_{IN}\$ and \$V_{OUT}\$,
R2 is the resistor between \$+5V\$ and \$V_{OUT}\$, and R3 is the resistor between \$V_{OUT}\$ and \$GND\$.
How many unknowns do we have? Three: R1, R2 and R3. Not quite: we can choose one value freely, and the other two depend on that one. Let's choose R3 = 1k. The mathematical way to find the other values is to create a set of two simultaneous equations from two (\$V_{IN}\$, \$V_{OUT}\$) pairs, and solve for the unknown resistor values. Any (\$V_{IN}\$, \$V_{OUT}\$) pairs will do, but we'll see that we can tremendously simplify things by carefully choosing those pairs, namely the extreme values: (\$+20V\$, \$+5V\$) and (\$-20V\$, \$0V\$).
First case: \$V_{IN} = +20V\$, \$V_{OUT}=+5V\$
Note that (and this is the key to the solution!) both ends of R2 see \$+5V\$, so there's no voltage drop, and therefore no current through R2. That means that \$I_{R1}\$ has to be the same as \$I_{R3}\$ (KCL). \$I_{R3}=\dfrac{+5V-0V}{1k\Omega}=5mA=I_{R1}\$. We know the current through R1, and also the voltage over it, so we can calculate its resistance: \$R1=\dfrac{+20V-5V}{5mA}=3k\Omega\$. Found our first unknown!
Second case: \$V_{IN} = -20V\$, \$V_{OUT}=0V\$
The same thing as with R2 happens now with R3: no voltage drop, so no current. Again according to KCL, now \$I_{R1}\$ = \$I_{R2}\$. \$I_{R1}=\dfrac{-20V-0V}{3k\Omega}=6.67mA=I_{R2}\$. We know the current through R2, and also the voltage over it, so we can calculate its resistance: \$R2=\dfrac{+5V-0V}{6.67mA}=0.75k\Omega\$. Found our second unknown!
So a solution is: \$R1 = 3k\Omega, R2 = 0.75k\Omega, R3 = 1k\Omega\$.
Like I said, it's only the ratio between these values that is important, so I might as well pick \$R1 = 12k\Omega, R2 = 3k\Omega, R3 = 4k\Omega\$. We can check this solution against another (\$V_{IN}\$, \$V_{OUT}\$) pair, e.g. (\$0V\$, \$+2.5V\$). R1 and R3 are now in parallel (they both have \$+2.5V-0V\$ over them), so when we calculate their combined value we find \$0.75k\Omega\$, exactly the value of R2, and the value we needed to get \$+2.5V\$ from \$+5V\$! So our solution is indeed correct. [QC stamp goes here]
The last thing to do is to connect \$V_{OUT}\$ to the PIC's ADC. ADCs often have rather low input resistances, so this may disturb our carefully calculated equilibrium. Nothing to worry about, however, we simply have to increase R3 so that \$R3 // R_{ADC} = 1k\Omega\$. Suppose \$R_{ADC} = 5k\Omega\$, then \$\dfrac{1}{1k\Omega}=\dfrac{1}{R3}+\dfrac{1}{R_{ADC}}=\dfrac{1}{R3}+\dfrac{1}{5k\Omega}\$ From this we find \$R3=1.25k\Omega\$.
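The whole calculation can be cross-checked numerically. A small sketch of my own (resistances in kΩ): by superposition, \$V_{OUT}\$ is the conductance-weighted average of the three source voltages, and with the values found above it maps ±20 V onto the 0–5 V rail span:

```python
# Three-resistor level shifter: R1 to the signal, R2 to +5 V, R3 to ground.
# V_out is the conductance-weighted average of the three source voltages.
def vout(v_in, r1=3.0, r2=0.75, r3=1.0, vdd=5.0):
    g1, g2, g3 = 1 / r1, 1 / r2, 1 / r3
    return (v_in * g1 + vdd * g2 + 0.0 * g3) / (g1 + g2 + g3)

for v in (-20.0, 0.0, 20.0):
    # maps -20 V -> 0 V, 0 V -> 2.5 V, +20 V -> 5 V
    print(v, "->", round(vout(v), 6))
```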
edit OK, that was clever and very simple, even if I say so myself. ;-) But why wouldn't this work if the input voltage stays between the rails? In the above situations we always had a resistor with no current flowing through it, so that, following KCL, the current coming into the \$V_{OUT}\$ node via one resistor would leave via the other one. That meant that one voltage had to be higher than \$V_{OUT}\$ and the other lower. If both voltages were lower, current could only flow away from that node, and KCL forbids that.
The easiest way is to use a "resistor divider".
You didn't say what voltage this PIC is running at, and therefore what the A/D input range is, so let's use 5V for the example. Your input voltage range is 40V, and the output 5V, so you need something that attenuates by at least 8. You also need the result to be centered on 1/2 Vdd, which is 2.5V, whereas your input voltage is centered on 0V.
This can be accomplished with 3 resistors. One end of all three resistors is connected together and to the PIC A/D input pin. The other end of R1 goes to the input signal, R2 goes to Vdd, and R3 goes to ground. The resistor divider is formed by R1 and the parallel combination of R2 and R3. You can adjust R2 and R3 to center the resulting range at 2.5V, but for simplicity of explanation we'll live with a little bit of asymmetry and attenuate a little bit more to make sure both ends are limited to the Vss-Vdd range.
Let's say the PIC wants the analog signal to have an impedance of 10 kΩ or less. Again for simplicity, let's make R2 and R3 20 kΩ. The impedance feeding the PIC will be no more than the parallel combination of those, which is 10 kΩ. To get attenuation of 8, R1 needs to be 7 times R2//R3, which is 70 kΩ. However, since the result won't be exactly symmetric, we need to attenuate a little more to make sure -20V in won't result in less than 0V into the PIC. That actually requires attenuation of 9, so R1 must be at least 8 times R2//R3, which is 80 kΩ. The standard value of 82 kΩ will allow for some slop and margin, but you still get most of the A/D range to measure the original signal.
Added:
Here is an example of finding the exact solution to a similar problem. This has no asymmetry and has a particular specified output impedance. This form of solution can always be used when the A/D range is wholly within the input voltage range.
This is the standard circuit for that. You need to scale the resistor values for your required impedance.
If the signal is not DC, or if a DC reference isn't important, the signal can be capacitively coupled to the input of the ADC.
Alternatively, if your ground for the PIC is floating, you could tie your signal ground to 1/2 VDD of the PIC.
The following circuit should do the job:
3.3V
  |
  \
  / 1k
  \
  |
  +-- ADC input
  |
  \
  / 1k
  \
  |
  +-- Signal input (-2V to +2V)
It's a potential divider. At -2V, the output will be 0.65V; at +2V, 2.65V.
All noise on the 3.3V rail will get transferred to the input, so use a good voltage reference to reduce this problem.
This will work with other supplies too, but the offset will shift.
Thomas' voltage adder with two identical resistors is indeed simple, but has the disadvantage that the input range to the ADC is reduced, which means that noise has a bigger influence. Also, the lower limit is at 0.65V. If your microcontroller doesn't have a \$V_{ADCREF-}\$ input (most controllers don't), that part of the input range remains unused.
This is easy to fix: choose the resistor ratio so that \$V_{ADC}\$ will be 0V if the input is -2V. For a \$V_{DD}\$ of 5V this means the input resistor should be 2/5 of the pull-up resistor. At 2V input \$V_{ADC}\$ will be 2.86V. Set \$V_{ADCREF+}\$ to this level, and the -2V to +2V will cover the full ADC range.
If your \$V_{DD}\$ = 3.3V the input resistor should be about 61% (\$\frac{2V}{3.3V}\$) of the pull-up. At +2V in, \$V_{ADC}\$ will be 2.49V.
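Both resistor ratios can be verified in a few lines (my own sketch; only the ratio of input resistor to pull-up matters, so the absolute values below are arbitrary):

```python
# Two-resistor offset divider: input resistor to the signal, pull-up to Vdd.
# With r_in / r_up = 2 V / Vdd, an input of -2 V maps to 0 V at the ADC pin.
def v_adc(v_in, vdd):
    r_in, r_up = 2.0, vdd          # any pair with ratio 2/vdd works
    g_in, g_up = 1 / r_in, 1 / r_up
    return (v_in * g_in + vdd * g_up) / (g_in + g_up)

print(round(v_adc(-2.0, 5.0), 3))  # 0.0
print(round(v_adc(+2.0, 5.0), 3))  # 2.857
print(round(v_adc(+2.0, 3.3), 3))  # 2.491
```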
Let $A$ and $B$ be closed, bounded, disjoint subsets of $\mathbb{R}^p$.
Now, this is not a metric, but define $\delta$ like this: $$ \delta = \inf V, $$ where $$ V = \{ \| a-b \| \mid a \in A \text{ and } b \in B \} $$
Show that $\delta$ is strictly greater than $0$.
Here's my reasoning: Since $A$ and $B$ are closed and bounded, $V$ will be closed and bounded. Because $V$ is closed and bounded, it will contain its infimum. Because $A$ and $B$ are disjoint, $0$ can't be a member of $V$. Because $V$ only contains positive members of $\mathbb{R}$, $\delta$ cannot be $0$ and must therefore be strictly positive.
The part where I'm confused is the first sentence. Why exactly will $V$ be closed?
I'm not entirely sure of the extent of their power, whether it is limited to simple compound manipulation, but theoretically, the amount of energy in a 10m sphere is more than he will ever need.
Matter can be converted directly to energy, for example, during matter-antimatter collisions, so if we know the amount of matter in the area around him, we know the amount of energy.
Let's say that he is standing on the ground, with a 10m hemisphere of air above him, and a 10m hemisphere of dirt below him.
Well, the density of air (at sea level) is $1.225~\mathrm{kg/m^3}$ and the density of dirt is about $1000~\mathrm{kg/m^3}$.
The volume of a hemisphere can be calculated as:
$$V = \frac{\frac43\pi r^3}2$$
so, with a 10 m radius, it is$$\frac{\frac43\pi \times 10^3}2$$which is about $2094 ~\mathrm{m^3}$.
Now, we can calculate the mass of the air around him:$$1.225 \times 2094 \approx 2565~\mathrm{kg}$$and dirt$$1000 \times 2094 = 2094000~\mathrm{kg}$$
So, in total the mage has about $2096565 ~\mathrm{kg}$ of matter around him.
Plug into equation to get energy:$$E = mc^2\\~\\~\\E = 2096565 \times (3 \times 10^8)^2$$which my calculator tells me is about $1.887 \times 10^{23} ~ \mathrm J$.
So, your mage has access to roughly$$188,700,000,000,000,000,000,000$$joules of energy. Hurl a fireball of that and you'd wipe out the planet. To put that into context, the energy released by the Little Boy atomic bomb was $6.276 \times 10^{13}$ joules.
The energy your mage has access to is enormous, equivalent to about three billion nukes.
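The arithmetic can be redone in a few lines (a sketch, using the hemisphere volume \( \tfrac{2}{3}\pi r^3 \) and the densities assumed above):

```python
import math

# E = m c^2 for a 10 m sphere around the mage:
# top half air, bottom half dirt.
r = 10.0                               # sphere radius, m
v_hemi = (4/3) * math.pi * r**3 / 2    # hemisphere volume, ~2094 m^3
rho_air, rho_dirt = 1.225, 1000.0      # kg/m^3
mass = (rho_air + rho_dirt) * v_hemi   # total mass, kg
c = 3e8                                # speed of light, m/s (rounded)
energy = mass * c**2                   # joules
little_boy = 6.276e13                  # Little Boy yield, J
print(f"{v_hemi:.0f} m^3, {mass:.3e} kg, {energy:.3e} J, "
      f"{energy / little_boy:.2e} bombs")
```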
Brilliant should give an option for chat among its users so that they can discuss studies and share their views all over the world. This would increase research in various fields!
Note by Ravi Prakash Roy 6 years, 2 months ago
Lol, i made nothing show up. But what i wanted to do was this: https://brilliant.org/slackin/ Or i can make it a hyperlink to make it look better, cause why not? Press here
No, brilliant is for learning, let's leave social networking for chatting ;) Plus chatting could cause server crashes and lag.
really it is a good idea....
my idea is that....most of them have been introduced to the live challenge right.....now if it is a group chat,the person would not dare to ask a brilliant question which would actually be exposed to everyone...isn't it?now it is not possible to have a lot of members on a group chat...if it is divided into various group chats with a limited amount of members in each group then it would be good...this is the basic thought i can give but there is a hole in it......--> if the limit of a group chat is 22 members and there are 23 members online...what would the 23rd person do??the solution - if he simply posts it to the discussion that i am alone in the group chat...the limit of the group can be extended to 24 members by the person who started the group chat.....and yes,there shouldn't be only two people in a group..perhaps he should also post "can anyone enter the group chat?" in the discussion.....if my language can't be understood or any problem in the way i told...just tell me.. :) :D
Brilliant can actually make a chat in Hall.com, just like AoPS users have done :) Then people can chat with each other, hold group discussions, and the moderators of Brilliant can keep watch if they discuss any weekly challenge problem :)
For me..I think it's okay to have an option of having chat among users. I believe that all users here didn't just make an account to have big points but to enhance their (as well as I) skills in Math, Geometry, Physics, Com. Scie, and etc.. And because of this reason, users are probably competitive enough for them not to share his/her solutions or answers on some difficult problems.
how to sign up in the competitions like hunger games
Just click Competitions on top of the page.
I would instead suggest a way to make the discussion forum more "social" - such as the addition of tabs including, but not limited to: science sections, maths sections, miscellaneous sections, etc, and of course any other sub-groups within sections if needed.
This offers distinct advantages from a chat system. For example, forum posts are much easier to moderate than chat rooms.This ensures minimal "cheating" (i.e posting of Brilliant problems). Furthermore, "trolling" will not present as much of a threat - a common problem with chat rooms.
Anyway, just my two cents. But I think that a chat system would be largely unnecessary given these other viable alternatives.
Those are some good ideas. I'd like to add that it would be cool if each post showed a view count, and perhaps even who viewed it, e.g. whether the staff has seen it or not.
yeah TIM,,,lets see if brilliant work on it or not????
yes stefan,,you are right,,,but atleast it should show who are online ???
Of course it will. Stefan just suggested making sub-divisions in the chats relating to maths,science,etc so that it will be more user-friendly and for better moderation.
For this purpose don't we have a discussion page...
hi man , what is this game?
absolutely not a good idea
Why not? I think it is rather good.
What if, if they discuss brilliant questions???
Look around friend,in fb groups,math.stack and in other sites questions of brilliant are posted.Cheating can't ever be completely stopped.As sreejato suggested there should be a complaint option.
But it's silly if Brilliant let the cheat happen inside Brilliant it self. However i think Briliant staff control is needed.
@Hafizh Ahsan Permana – umm cheating is cheating,inside or outside,does it really matter?
@Soham Chanda – If we limit the ease which a person can cheat, then yes. Chat rooms are an easy place to cheat (and hard to moderate), especially if there are 1000+ people all on-line, chatting at once.
In order to cope with this problem, I would suggest adding a complaint feature in the chat section. If someone asks others answers to live Brilliant problems, others can lodge a complaint against that cheat, which would expose their conversation to the Brilliant staff. The concerned staff members would then look into their conversation and take appropriate measures against the cheat. Anyways, this is just a suggestion and I don't know whether this is feasible or not.
you are right sreejato, actually there is no space for such cheap work like asking questions, because brilliant is not just about getting points by hook or by crook. But a chat feature can be used for interaction between two brilliant maths addicts. This chat must show who is online, so that they can share their views on a concept!
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126 Detaljert visning - Lignende elementer |
This question actually has a very easy and rigorous answer. Having which-path information "available" is just a crude way of saying that the system is correlated with
something else. Usually, this is because the system has been decohered in whatever basis corresponds to the possible paths, which is usually the position basis. In your case, the photon is actually never put into a coherent local superposition, and so interference will not be seen. Instead, the SPDC process essentially creates a Bell state in which one photon is thrown away. Schematically, the situation you describe is as follows. The splitting process is
$\vert S \rangle \otimes \vert I \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle \otimes \vert I_L \rangle + \vert S_R \rangle \otimes \vert I_R \rangle \Big ] = \vert \psi \rangle \qquad \qquad \qquad (1)$
where $S$ and $I$ stand for the signal and idler photons, respectively, and $L$ and $R$ stand for the left and right path. The reduced state of the signal photon is
$\rho^{(S)}=\mathrm{Tr}_I\Big[\vert \psi \rangle \langle \psi \vert \Big]$
(If you don't know what $\mathrm{Tr}$ means, or what a density matrix is, you absolutely must go learn about them. It doesn't take that long, and is crucial for understanding this question.) The measurement performed by the apparatus is essentially a measurement in the basis $\{ \vert \pm \rangle = (\vert S_L \rangle \pm \vert S_R \rangle)/\sqrt{2} \} $. Here, getting a "plus" result in the laboratory means seeing the photon near a peak on the screen, and a "minus" result is seeing it in a trough.
You can check that measuring $\rho^{(S)}$ in the $\{ \vert \pm \rangle \}$ basis (or, in fact, any basis at all) gives equal probability of either outcome. This means no interference pattern, since photons are evenly spread over peaks and troughs. In particular, this is true no matter what happens to the idler photon; it could be carefully measured, or thrown away.
On the other hand, if you simply send the photon into a double slit experiment by sending it through a small hole and allowing the photon to enter either slit without being correlated with anything else, the evolution looks like
$\vert S \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle + \vert S_R \rangle \Big ] \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2)$
which doesn't involve a second photon that "knows" anything. In this case, a measurement in the $\{ \vert \pm \rangle \}$ basis gives "plus" with certainty (or near certainty), meaning we see an interference pattern because all (or most) of the photons only land at the peaks.
Finally, suppose we place a second particle like a spin-up electron in front of the right slit such that the electron's spin flips if and only if the photon brushes by it on the way through the right slit. In this case we'd get
$\frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle + \vert S_R \rangle \Big ] \otimes \vert e_\uparrow \rangle \to \frac{1}{\sqrt{2}} \Big [ \vert S_L \rangle \otimes \vert e_\uparrow \rangle + \vert S_R \rangle \otimes \vert e_\downarrow \rangle \Big ] \qquad \qquad (3)$
Now, although nothing has really happened to the signal photon as it passed through the right slit--it doesn't, say, get slowed down or deflected--the electron now knows where the photon is. In fact, this state is identical to the first one we considered, except with the electron in place of the idler photon. If we make a measurement on the signal photon, we now get either outcome with equal likelihood, meaning the interference pattern is lost.
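These reduced-state claims are easy to verify numerically. Below is a minimal sketch (the two-dimensional path space and the variable names are my own simplification; for the single-photon case a spectator environment state is tensored on, so both cases share one routine): it builds the pure two-particle states of equations (1)/(3) and (2), traces out the second system, and computes the probabilities of the $\pm$ outcomes.

```python
import numpy as np

# Photon path states |S_L>, |S_R> and a two-level "environment"
# (the idler photon of eq. (1) or the electron of eq. (3)).
SL, SR = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

plus = (SL + SR) / np.sqrt(2)    # "peak" outcome |+>
minus = (SL - SR) / np.sqrt(2)   # "trough" outcome |->

def reduced_probs(psi):
    """Return P(+), P(-) for the photon after tracing out the environment."""
    rho = np.outer(psi, psi.conj())                          # |psi><psi|
    rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # partial trace
    return (plus @ rho_S @ plus).real, (minus @ rho_S @ minus).real

# Eq. (1)/(3): photon entangled with the environment
entangled = (np.kron(SL, up) + np.kron(SR, dn)) / np.sqrt(2)
# Eq. (2): coherent superposition, environment untouched
uncorrelated = np.kron((SL + SR) / np.sqrt(2), up)

print(reduced_probs(entangled))     # (0.5, 0.5): no interference
print(reduced_probs(uncorrelated))  # (1.0, 0.0): full interference
```

The entangled state's reduced density matrix comes out maximally mixed, $\rho^{(S)}=I/2$, which is why *any* measurement basis gives 50/50.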
The process of the electron getting entangled with the photon is known as decoherence. (Note that we only use that word when the electron is lost, as it usually is. If the electron were still accessible and could potentially be brought back to interact again with the photon, we'd just say they had become entangled.) Decoherence is the key process, and plays a fundamental role in understanding how "classicality" arises in a fundamentally quantum world. Edit:
Make sure not to confuse two possible situations. The first is where the momenta of the idler and signal photons are correlated, and the slits are positioned to simply select for one of two possible outcomes, corresponding to equation (1) above:
The second is where the signal photon's spread over $L$ and $R$ is not caused by an initial event correlating it with the idler photon, but simply by its own coherent spreading when it is restricted to pass through a small hole, corresponding to equation (2):
Note here that there is no violation of conservation of momentum, a subtle (for beginners) consequence of the infinite-dimensional aspect of the photon's Hilbert space. (The fact that the two-slit experiment is the canonical example for introducing quantum weirdness is unfortunate because of these complications.) When the photon is confined to a small initial slit, it necessarily has a wide transverse momentum spread.
It might be helpful to concatenate these two cases:
Here, the idler photon is initially entangled with the signal photon, but the wall with the single slit destroys the signal photon for the $X/R_1$ outcome. When $Y/L_1$ happens, the signal photon can now be sent through 2 slits to produce an interference pattern. The idler photon's direction $X$ vs. $Y$ was correlated with the signal photon's $L_1$ vs. $R_1$, but it is never correlated with $L_2$ vs. $R_2$.
Warning: Boring technical stuff that’s only here because I needed it for the model in the Helicopter Money paper, and there are no other good references online.
All the existing resources I've found on the internet are either vague or inconsistent in their σ/ρ notation (where \(ρ=\frac{σ-1}{σ}\) and σ is the elasticity of substitution; ρ makes the exponents in the utility function look cleaner, though the exponents in the demand function are correspondingly messier), or don't derive the result with the coefficients. Almost none go step by step (with the exception of this video) or carry the coefficients through with exponents, and literally none derive a demand function for more than two goods.
We’re going to do all of these: a fully general derivation of demand functions from an
n-good CES utility function, carrying through the actual elasticity of substitution as a parameter. I’ll use sum notation throughout, which you can easily expand to a definite number of goods. It’s worth noting though that the elasticity of substitution has to be the same between all pairs of goods, otherwise there’s no fully general form.
We start by writing our CES function this way, raising each coefficient to the power 1/σ and summing over a set of n goods. You might wonder why we're raising the coefficients to an exponent too. It'll make our demand function slightly cleaner in the end, and since it's a parameter, you can just define \(α_n = β_n^{1/σ}\) if you'd rather not carry the exponent around.
(1) $$U=\left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ} \right)^\frac{σ}{σ-1} $$
A function of this form means that the elasticity of substitution between any pair of goods is σ. Our budget constraint, then, is
(2) $$I=\sum_nP_nG_n$$
So we want to maximize (1) subject to (2). We set up our Lagrangian and differentiate with respect to each good plus λ, which gives us n+1 first-order conditions.
(3) $$\mathcal{L} = \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ} \right)^\frac{σ}{σ-1} + λ\left(I-\sum_nP_nG_n\right)$$
Since both parts of the Lagrangian are sums, and the parameters of the various goods are all siloed into their own terms, this is actually fairly straightforward to differentiate with respect to any G variable. So we'll pick any two goods, say a and b, and differentiate with respect to \(G_a\) and \(G_b\).
To get the derivative of the first part of the Lagrangian, remember the chain rule for differentiating f(g(x)): \(\frac{∂ f}{∂ x} = \frac{∂ f}{∂ g}\frac{∂ g}{∂ x}\). Our f(g) will be \(g^\frac{σ}{σ-1}\), and our g(x) will be the sum inside the parentheses. Carrying down the \(\frac{σ}{σ-1}\) exponent from \(\frac{∂ f}{∂ g}\) cancels the \(\frac{σ-1}{σ}\) exponent carried down from the \(G_n\) in \(\frac{∂ g}{∂ x}\). This gives us:
(4) $$\frac{∂\mathcal{L}}{∂G_a} = \left(\frac{β_a}{G_a}\right)^\frac{1}{σ} \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ}\right)^\frac{1}{σ-1} - \lambda P_a = 0$$
(5) $$\frac{∂\mathcal{L}}{∂ G_b} = \left(\frac{β_b}{G_b}\right)^\frac{1}{σ} \left(\sum_n β_n^{1/σ}G_n^\frac{σ-1}{σ}\right)^\frac{1}{σ-1} - \lambda P_b = 0$$
(6) $$\frac{∂\mathcal{L}}{∂ \lambda} = I-\sum_nP_nG_n = 0$$
We have \(λP_a\) and \(λP_b\) on the right-hand sides of (4) and (5). Dividing (4) by (5) cancels both λ and the common sum factor, leaving the relative price of \(P_a\) to \(P_b\):
(7) $$\frac{P_a}{P_b} = \frac{(β_a/G_a)^\frac{1}{σ}}{(β_b/G_b)^\frac{1}{σ}} = \left(\frac{β_a G_b}{β_b G_a}\right)^\frac{1}{σ}$$
The right hand side is the marginal rate of substitution between \(G_a\) and \(G_b\).
Solving for \(G_b\), we have
(8) $$G_b = \frac{β_bG_a}{β_a}\left(\frac{P_a}{P_b}\right)^σ$$
And similarly for \(G_c\), etc. What we're aiming to do here is to replace every other \(G_n\) in the budget constraint with an expression in terms of \(G_a\), so that we can solve for \(G_a\) alone.
So, substituting (8) and its brothers back into the budget constraint (2) gives us
(9) $$I=P_aG_a + \sum_{n\neq a}P_n\frac{β_nG_a}{β_a}\left(\frac{P_a}{P_n}\right)^σ$$
$$= G_a \sum_{n}\frac{β_n}{β_a}P_a^σ P_n^{1-σ}$$
We’ve substituted every
G term of the sum, except the original G a term, with an expression in terms of
All that remains is to solve for \(G_a\).
(10) $$ G_a = \frac{I}{\sum_{n}\frac{β_n}{β_a}P_a^σ P_n^{1-σ}} $$
$$= \frac{I P_a^{-σ}}{\sum_{n}\frac{β_n}{β_a}P_n^{1-σ}}$$
So there’s our demand function for
G a, and |
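As a sanity check of (10), here is a short numerical sketch (the parameter values are arbitrary examples of my own): it evaluates the demand formula for every good at once, then verifies the budget constraint (2) and the first-order condition (7).

```python
import numpy as np

sigma = 2.0                        # elasticity of substitution σ (example value)
beta = np.array([0.5, 0.3, 0.2])   # share parameters β_n (example values)
P = np.array([1.0, 2.0, 4.0])      # prices P_n
I = 100.0                          # income

# Equation (10): G_a = I * P_a^{-σ} / Σ_n (β_n/β_a) P_n^{1-σ}
#              = I * β_a * P_a^{-σ} / Σ_n β_n P_n^{1-σ}
G = I * beta * P**(-sigma) / np.sum(beta * P**(1.0 - sigma))

# Budget constraint (2) holds exactly
assert np.isclose(P @ G, I)

# FOC (7): P_a/P_b = (β_a G_b / (β_b G_a))^{1/σ} for every pair (a, b)
ratio = (np.outer(beta, G) / np.outer(G, beta)) ** (1.0 / sigma)
assert np.allclose(ratio, np.outer(P, 1.0 / P))
```

The second assertion checks the marginal-rate-of-substitution condition for all pairs of goods simultaneously, which is exactly what the derivation above guarantees.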
If $a,b,c$ are sides of a triangle prove that-
$$\frac a{c+a-b}+\frac b{a+b-c}+\frac c{b+c-a}\geq3$$
I am having trouble approaching this problem, as the sides are not given as integers. How do I approach it?
Thanks for any help!
By using the Ravi transformation let $a=x+y$, $b=y+z$ and $c=z+x$, then \begin{align} \frac a{c+a-b}+\frac b{a+b-c}+\frac c{b+c-a}&=\frac{x+y}{2x}+\frac{y+z}{2y}+\frac{z+x}{2z}\\ &=\frac{3}{2}+\frac{1}{2}\left(\frac{y}{x}+\frac{z}{y}+\frac{x}{z}\right) \end{align} Now use the AM-GM inequality.
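For completeness, the AM-GM step: the three ratios multiply to 1, so

$$\frac{y}{x}+\frac{z}{y}+\frac{x}{z}\ge 3\sqrt[3]{\frac{y}{x}\cdot\frac{z}{y}\cdot\frac{x}{z}}=3,$$

and therefore the original sum is at least \(\frac{3}{2}+\frac{1}{2}\cdot 3=3\), with equality iff \(x=y=z\), i.e. for an equilateral triangle.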
Please help me with these:
1 We started launching the Green Campaign a week ago
>We have.............
2 The road is too narrow for the volume of traffic to travel
>The road is not.........
3 We are having some workers repaint the fence
>We are.................
4 People made these toys from organic material
>These toys.............
5 They may have removed all of the waste paper last week
>All of the waste paper........
6 the children used to play traditional games long ago
>Traditional games....................
Maciej carries a bookmark with this table on it to help him remember his secret four digit number. If his secret number is 8526, all he has to do is to remember the words HELP. To retrieve his number, he looks up the letters of the word HELP and finds the corresponding digits in the top row of the table. Another example: The word LOVE can be used to help Maciej remember the secret number 2525. Maciej has to remember a new secret number. Only three of the following words produce this new number. Which one does not?
1 2 3 4 5 6 7 8 9 0
A B C D E F G H I J
K L M N O P Q R S T
U V W X Y Z
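In this table, each letter stands for the digit at the top of its column (with the tenth column read as 0), so a letter's digit is just its alphabet position taken mod 10. A quick sketch for checking candidate words (the function name is my own):

```python
def word_to_number(word: str) -> str:
    """Map letters to digits via the bookmark table: A→1, ..., I→9, J→0, K→1, ..."""
    return "".join(str((ord(c) - ord("A") + 1) % 10) for c in word.upper())

print(word_to_number("HELP"))  # 8526
print(word_to_number("LOVE"))  # 2525
```

Running `word_to_number` on each of the four candidate words reveals the one that encodes a different number.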
Please help me with these too:
I. Rewrite the sentences below so that it has a similar meaning to the first, beginning with the words given
1. I find his handwriting very hard to read
-> I have....................
2. He got down to writing a letter as soon as he returned from his work
-> No sooner.................................
3. "If I were you, I wouldn't accept his marriage proposal", said Nam to Lan
-> Nam................
4. No matter how hard I tried, I could not open the window
-> Try.......................
5. Please don't ask me that question
-> I'd rather.............
II. Finish the second sentence so that it has the same meaning as the first one, using the given words. Do not change the given word
1. The fridge is complete empty (LEFT)
->
2. It is pointless to have that old typewriter repaired (WORTH)
->
3. Frank never pays any attention to my advice. (NOTICE)
->
4. John only understood very little of what the teacher said (HARDLY)
->
5. Her ability to run a company really impresses me (IMPRESSED)
->
For non-negative real numbers a, b, c with a + b + c = 3, prove that: \(\left(3abc+1\right)\left(a^2b+b^2c+c^2a\right)\ge12abc\)
The width of a rectangular plot of land is reduced by 8 m, giving a new rectangular plot; after that, the length is increased by 8 m. Find the area of the plot after the length and width are changed, given that the rectangular strip cut off has twice the area of the rectangular strip added.
Given two functions \(f\left(x\right)=\sqrt{25x^2-30x+9}\) and \(g\left(y\right)=y\). How many values of a are there such that \(f\left(a\right)=g\left(a\right)+7\)?
Nguyễn Linh Chi 08/08/2019 at 09:44
\(f\left(a\right)=\sqrt{25a^2-30a+9}\)
\(g\left(a\right)=a\)
We have the following:
\(f\left(a\right)=g\left(a\right)+7\)
\(\Leftrightarrow\sqrt{25a^2-30a+9}=a+7\)
\(\Leftrightarrow\left\{{}\begin{matrix}25a^2-30a+9=\left(a+7\right)^2\\a+7\ge0\end{matrix}\right.\)
\(\Leftrightarrow\left\{{}\begin{matrix}a\ge-7\\24a^2-44a-40=0\end{matrix}\right.\Leftrightarrow\left\{{}\begin{matrix}a\ge-7\\\left[{}\begin{matrix}a=\dfrac{5}{2}\\a=-\dfrac{2}{3}\end{matrix}\right.\end{matrix}\right.\Leftrightarrow\left[{}\begin{matrix}a=\dfrac{5}{2}\\a=-\dfrac{2}{3}\end{matrix}\right.\)
Finally, there are two values of a.
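A quick numerical check of both roots (a sketch; the squaring step can introduce extraneous roots, so it is worth confirming each one in the original equation):

```python
import math

def f(x):
    return math.sqrt(25 * x**2 - 30 * x + 9)

# Each candidate root must satisfy f(a) = a + 7
for a in (5 / 2, -2 / 3):
    assert math.isclose(f(a), a + 7)
print("both roots check out")
```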
Find all positive integer values of n such that \(\dfrac{n-17}{n+23}\) is the square of a rational number.
Lux Arcadia 14/06/2019 at 14:20
A fraction \(\dfrac{a}{b}\) (with integers \(a\ge 0\), \(b>0\)) is the square of a rational number exactly when \(ab\) is a perfect square, so we need \((n-17)(n+23)\) to be a perfect square; for positive integers \(n<17\) the fraction is negative, hence no solutions there.
\((n-17)(n+23)=(n+3)^2-400\), so we need \((n+3)^2-400=m^2\) for some integer \(m\ge 0\), i.e. \((n+3-m)(n+3+m)=400\).
The two factors have the same parity (their sum is even) and their product is even, so both are even: write \(n+3-m=2u\), \(n+3+m=2v\) with \(uv=100\), \(u\le v\), and \(n+3=u+v\). The factor pairs \((u,v)=(1,100),(2,50),(4,25),(5,20),(10,10)\) give \(n=98,49,26,22,17\).
Indeed \(\dfrac{81}{121}=\left(\dfrac{9}{11}\right)^2\), \(\dfrac{32}{72}=\left(\dfrac{2}{3}\right)^2\), \(\dfrac{9}{49}=\left(\dfrac{3}{7}\right)^2\), \(\dfrac{5}{45}=\left(\dfrac{1}{3}\right)^2\) and \(\dfrac{0}{40}=0^2\), so \(n\in\{17,22,26,49,98\}\).
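As a sanity check, a brute-force search (a sketch): a fraction in lowest terms is a rational square exactly when its numerator and denominator are both perfect squares, and `Fraction` reduces to lowest terms automatically.

```python
from fractions import Fraction
from math import isqrt

def is_square(k: int) -> bool:
    return isqrt(k) ** 2 == k

def is_rational_square(fr: Fraction) -> bool:
    # Fraction stores lowest terms with a positive denominator
    return fr >= 0 and is_square(fr.numerator) and is_square(fr.denominator)

# (n+3-m)(n+3+m) = 400 forces n+3 <= 101, so range 1..200 is exhaustive
hits = [n for n in range(1, 201) if is_rational_square(Fraction(n - 17, n + 23))]
print(hits)  # [17, 22, 26, 49, 98]
```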
Given two functions \(f\left(x\right)=5x+1\) and \(g\left(x\right)=ax+3\). Find the value of g(1) if \(a=f\left(2\right)-f\left(-1\right)\)
How many values of the whole number m such that the function \(y=\left(2016-m^2\right)x+3\) is increasing?
Tôn Thất Khắc Trịnh 13/06/2019 at 03:46
For the function to be increasing, \(2016-m^2\) must be positive, i.e. 2016 must be greater than \(m^2\). Therefore, m has to range from \(-\sqrt{2016}\) to \(\sqrt{2016}\); in other words, m has to be between -44.899 and 44.899. Since m is a whole number, the minimal value of m is -44 and the maximum is 44. To calculate the number of values, we use this formula: \(N=\frac{44-(-44)}{1}+1=89\)
To conclude, there are 89 values of m such that the function is increasing.
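The count can be confirmed by brute force (a sketch; the search range is an arbitrary bound wide enough to contain all solutions):

```python
# m is an integer ("whole number") and 2016 - m^2 must be positive
values = [m for m in range(-100, 101) if 2016 - m * m > 0]

assert min(values) == -44 and max(values) == 44
print(len(values))  # 89
```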
Published on Monday, 13 May 2019. Written by sebastien.popoff.
\(
\def\ket#1{{\left|{#1}\right\rangle}} \def\bra#1{{\left\langle{#1}\right|}} \def\braket#1#2{{\left\langle{#1}|{#2}\right\rangle}} \) The Speckle-Correlation Transmission Matrix
Measuring the optical phase is a ubiquitous challenge in optics. Through a linear scattering medium, one can always link the output optical field to the input one using the transmission matrix. However, one still has to measure the phase of the complex output field. In [K. Lee and Y. Park, Nat. Commun. 7 (2016)] the authors introduce a technique to reconstruct a complex optical field using a thin diffuser. Once the matrix is calibrated, only an intensity measurement is required to reconstruct the amplitude and the phase of the complex optical field.
Let's consider a scattering sample. Using the various techniques partially covered in the previous tutorials, one can measure its transmission matrix \(\mathbf{T}\). If we call \(u_p\), \(p\in [1...M]\), the input basis elements, the input field can be expressed as
$${E_{in}} = \sum_p^M c_p u_p$$
The resulting output complex field after propagation through the medium reads
$$E_{out} = \sum_p^M c_p t_p$$
where \(t_p\) is the \(p^\text{th}\) column of the transmission matrix \(\mathbf{T}\), i.e. the complex output field corresponding to the injection of the \(p^\text{th}\) element \(u_p\) of the input basis.
The problem is the following: knowing the transmission matrix \(\mathbf{T}\), how can we recover the complex input field \(E_{in}\), represented by its coefficients \(c_p\), by measuring only the output intensity \(I_{out}=E_{out}E_{out}^*\)?
In [K. Lee and Y. Park, Nat. Commun. 7 (2016)], the authors introduced an operator they call the speckle-correlation transmission matrix \(\mathbf{Z}\), represented by its coefficients
$$
\mathbf{Z}_{pq}=\frac{1}{\left\langle\lvert t_p\rvert^2\right\rangle_r\left\langle\lvert t_q\rvert^2\right\rangle_r} \left[ \left\langle t_p^* t_q E_{out}^* E_{out} \right\rangle_r-\left\langle t_p^* t_q\right\rangle_r\left\langle E_{out}^* E_{out}\right\rangle_r \right] $$
\(\left\langle .\right\rangle_r\) denotes spatial averaging over the output positions.
Each element of this matrix takes the form of a 4-field correlator. What is very remarkable is that, unlike standardly studied correlators, it does not involve two intensities, but one intensity (\(E_{out}^* E_{out}\)) and two different fields (\(t_p^*\) and \(t_q\)). Using some assumptions about the transmission matrix (Gaussian distribution) and the speckle patterns, we obtain
$$
\mathbf{Z}_{pq}\approx c_p c_q^* $$
It follows that this matrix has a rank equal to one and its unique singular vector is the incident field in the input basis representation \(c_p\), \(p\in [1...M]\).
Experimentally, after calibration of the transmission matrix \(\mathbf{T}\), one can measure the output intensity pattern \(I_{out}\) for a given unknown complex field and calculate the elements of the matrix \(\mathbf{Z}\). A singular value decomposition is performed, and the singular vector corresponding to the largest singular value is taken as the measure of the input field.
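The whole pipeline is easy to simulate. The sketch below is my own illustration (the sizes, the random seed, and the i.i.d. complex Gaussian model for \(\mathbf{T}\) are assumptions, and the constant normalization prefactor of \(\mathbf{Z}\) is dropped since it does not affect the singular vectors): it builds \(\mathbf{Z}\) from an intensity-only measurement and recovers the input coefficients \(c_p\), up to a global phase, from the leading singular vector.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 16, 4000   # input modes, output speckle grains (illustrative sizes)

# i.i.d. complex Gaussian transmission matrix; column p is t_p
T = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2 * N)

c = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # unknown input field
I_out = np.abs(T @ c) ** 2                                # intensity-only measurement

# Z_pq = <t_p* t_q I_out>_r - <t_p* t_q>_r <I_out>_r  (spatial average over r)
Z = (T.conj().T * I_out) @ T / N - (T.conj().T @ T / N) * I_out.mean()

# Z ≈ const · c c†, so the leading singular vector is c up to a global phase
u, s, vh = np.linalg.svd(Z)
fidelity = abs(np.vdot(u[:, 0], c)) / np.linalg.norm(c)
print(fidelity)  # close to 1
```

The finite fidelity gap comes purely from the finite number of speckle grains: the 4-field average converges to \(c_p c_q^*\) only as \(N\to\infty\).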
This approach, introduced in [K. Lee and Y. Park, Nat. Commun, 7 (2016)], was then successfully applied for lens-less holographic microscopy [Y. Baek, K. Lee and Y. Park, Phys. Rev. Appl., 7 (2016)] by the same group. It was then used for sending optical signals through a thin diffuser using a multiplexing scheme based on the orbital angular momentum in [L. Gong, Q. Zhao, H. Zhang, X.-Y. Hu, K. Huang, J.-M. Yang and Y.-M. Li, Light Sci. Appl., 8 (2019)].
First, let us fix some definitions.
Definition: an $n$-simplex is an $n$-dimensional polytope which is the convex hull of its $n+1$ vertices. Importantly, no vertex is contained in the convex hull of the other vertices.
Definition: a simplicial complex $K$ is a set of simplices such that any face of a simplex of $K$ is also in $K$, and the intersection of any two simplices $\sigma_1, \sigma_2$ is a face of both $\sigma_1$ and $\sigma_2$.
Part (a): In order to show that these vertices actually form the vertices of a simplex, we must check that no vertex is contained in the convex hull of any of the others. Suppose that $b_k=(x_{0,k},x_{1,k},\cdots,x_{n,k})$ (in $v_i$ coordinates) is contained in the convex hull of the other $b_j$. This would mean that there's a nontrivial linear dependence relation on the set $\{b_j\}$.
But if we have some linear relation $\sum_{i=0}^n a_ib_i=0$, we can replace each $b_i$ by its defining linear combination (such as $\frac{1}{n+1}(v_0+\cdots+v_n)$ for the full barycenter) and obtain a linear relation on the $v_i$. But this is clearly impossible, as the vectors $v_i$ are independent since we assumed they were the vertices of a simplex. Some quick calculation shows that if the coefficients of the $v_i$ are all zero, then so too are the coefficients $a_i$.
Part (b): For this, we need to check that the union of these simplices is the entire space and that any two simplices intersect in simplices.
To do the first, I'll give a method for determining which simplex a given point lies in. Represent an arbitrary point in our simplex as a linear combination of the $v_i$, i.e. $x=a_0v_0+\cdots+a_nv_n$. I claim that it lies in the simplex determined by $b_{\sigma_0},\cdots,b_{\sigma_n}$, where $\sigma_i$ is the face spanned by the greatest $i+1$ elements of the set $\{v_j\}$, ordered by $v_i\geq v_j$ if $a_i\geq a_j$ with ties broken by the lexicographic ordering. Note that this always returns an answer, so every point in our original simplex is in at least one simplex of the barycentric subdivision.
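This point-location rule can be checked numerically. The sketch below (standard simplex, random point, and the explicit coefficient formula are my own choices) verifies that a point $x=\sum a_i v_i$ is a convex combination $\sum_k c_k b_k$ of the chain of barycenters of its sorted coordinates, with weights $c_k = k(a_{(k)}-a_{(k+1)})\ge 0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                          # a 3-simplex in R^4 with vertices e_0, ..., e_3
V = np.eye(n + 1)              # rows are the vertices v_i

a = rng.random(n + 1)
a /= a.sum()                   # barycentric coordinates of a random point
x = a @ V

order = np.argsort(-a)         # vertex indices by decreasing coefficient
# b_k = barycenter of the face spanned by the k largest-coefficient vertices
b = np.array([V[order[:k]].mean(axis=0) for k in range(1, n + 2)])

a_sorted = np.append(a[order], 0.0)
coeffs = np.arange(1, n + 2) * (a_sorted[:-1] - a_sorted[1:])

assert np.all(coeffs >= -1e-12)        # convex weights (nonnegative)
assert np.isclose(coeffs.sum(), 1.0)   # summing to one
assert np.allclose(coeffs @ b, x)      # x lies in that subdivision simplex
```

The identity behind the assertions is a telescoping sum: $\sum_k k(a_{(k)}-a_{(k+1)})\cdot\frac1k\sum_{j\le k}v_{(j)} = \sum_j a_{(j)}v_{(j)} = x$.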
Next, we need to show that if any collection of simplices intersect, they intersect in a simplex. Since a simplex is determined entirely by its vertices, the intersection set is determined by the vertices the simplices have in common. But this means that the intersection of simplices is exactly the simplex whose vertices are the intersection of the vertex sets of the simplices we're intersecting. (Since these form a subset of the vertices of a simplex, they again determine a simplex.)
Part (d): Apply (b) to see that the barycentric subdivision forms a simplicial complex. The geometric realizations are the same because the behavior on the barycentric subdivision is still totally determined by the behavior of the points inherited from the original simplex. This is a moral-of-the-story answer, because I'm not sure what definition of geometric realization you're working with (the one I know is defined as a functor from simplicial sets to compactly generated Hausdorff topological spaces; if you post your definition and you're still interested, I can add more to this.)
Finally, I've noticed from your recent questions that you seem to be in the midst of self-studying a lot of the foundations for algebraic topology. I highly recommend actually taking a course in the subject and discussing issues with your professor and coursemates: live interaction with peers and teachers is a much more reliable and useful tool than consulting Stack Exchange every time you have a potential issue.
Microscopic processes involving particles proceed differently if forced to go backwards.

One of them is Babar the Elephant. Don't ask me which one – I would guess it's the daddy. Instead, I can offer you Peter F.'s elephant who can paint an elephant with a flower. Physical Review Letters just published a paper, Observation of Time Reversal Violation in the B0 Meson System (arXiv, July 2012), by the BaBar collaboration at Stanford's SLAC that directly proves the violation of T, or the time-reversal symmetry. Even though the result isn't new anymore, the publication was an opportunity for some vibrations in the media.
What did they do?
They studied B-mesons – the same particles whose decays were recently claimed to send supersymmetry to the hospital. Mesons are particles constructed out of 1 quark and 1 antiquark, roughly speaking, and "B" means that bottom quarks and/or antiquarks are involved. The high frequency of the letter "B" in "BaBar" has the same reason. In fact, "BaBar" is \(B\bar B\) [bee-bee-bar] as pronounced by Babar the Elephant.
The BaBar Collaboration looked for various processes in which \(B^0\) and \(\bar B^0\), two "flavor eigenstates" of the neutral B-mesons, transform either to \(J/\psi K_L^0\) (called \(B_+\)) or \(c\bar cK_S^0\) (called \(B_-\)). And if I simplify things just a little bit, statistics applied to 468 million entangled \(B\bar B\)-pairs produced in \(\Upsilon (4S)\) decays showed that some "asymmetries" that should be zero if T were a symmetry were decidedly nonzero.
One may say that the transformation of \(B^0\) into \(B_-\) was detectably faster than the inverse process. We often talk about 2-sigma or 3-sigma "bumps" and 5 standard deviations is a threshold for a "discovery". So you may want to ask how many sigmas these BaBar folks actually have to claim that the microscopic laws have an inherent arrow of time. Their signal is actually 14 sigma or 17 sigma, depending on the details, so the probability of a false positive is something like \(10^{-43}\). Compare it with the 10-percent risk of false positives tolerated in soft scientific disciplines.
Note that one doesn't need an exponentially huge amount of data. If most of the errors are statistical in character, you only need about a 10 times greater dataset to go from 5 standard deviations to 15 standard deviations. Just 10 times more data and the risk of a false positive drops from \(10^{-6}\) to \(10^{-43}\).
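The scaling is simple to see: for a fixed relative signal, statistical significance grows like the square root of the dataset size,

\[
\text{significance}\;\propto\;\frac{S}{\sqrt{B}}\;\propto\;\sqrt{N},\qquad \sqrt{10}\times 5\,\sigma\;\approx\;15.8\,\sigma,
\]

so ten times the data takes a 5-sigma signal to roughly 15 sigma.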
Reminding you of C, P, T, CP, CPT, and all that
Our bodies (and many other things) are "almost" left-right symmetric. For a long time, physicists believed (and most laymen probably still believe) that the fundamental particles in Nature had to be left-right-symmetric as well, and behave in a left-right symmetric manner, too. And if some particles (such as amino acids) are left-right-asymmetric (look like a screw), there must exist their mirror images with exactly the same masses and other properties.
This assumption seemed to be satisfied by all the phenomena known before the 1950s. But experiments in the 1950s showed that this left-right symmetry that physicists call "parity" and denote "P" is actually violated in Nature. There are particles that are spinning much like the wheels of a car going forward – but they prefer to shoot to the left side, not right side, for no apparent reason. While \(SO(3)\) is a symmetry, \(O(3)\) is not.
Left-right-asymmetric physics (physicists say "chiral" physics, referring to the word "kheir/χειρ" for a "hand" because the left hand and the right hand differ which is why we use various hands for various right-hand rules) is easily constructed using spinors, mathematical objects generalizing vectors that may be described as "square roots of vectors". In particular, in 3+1 dimensions, or any even total dimensionality, one may write down equations for "Weyl [chiral] spinors" that will force a particle to resemble a left-handed screw, or right-handed screw, but forbid its opposite motion.
And indeed, all the neutrinos are left-handed while the antineutrinos – and it's the antineutrino that you get by a decay of a neutron – are always right-handed. Nature has an inherent left-right asymmetry built into it. Note that this correlation also violates the C symmetry, or the charge-conjugation symmetry that replaces particles by antiparticles. If you act with C on a (possible) left-handed neutrino, you get a left-handed antineutrino which is not allowed.
For a decade, people thought that a more sophisticated symmetry, CP, that you obtain by the simultaneous action of P and C, is obeyed by Nature. If you mirror-reflect all the objects and particles and replace all particles by their antiparticles, you should get another allowed state, one that has the same mass/energy and behaves in the "same way".
However, in the 1960s, even this CP-symmetry was found to be violated. The spectrum of allowed objects is pretty much CP-symmetric in Nature and in all Lagrangian quantum field theories we may write down, but the pairs related by CP behave differently. The complex phase in the CKM matrix is the only truly established source of CP-violation we know in Nature. New physical effects such as supersymmetry imply that new sources of CP-violation probably exist. They're probably also badly needed to obtain the high matter-antimatter asymmetry that had to exist when the Cosmos was young, before almost everything annihilated, so that we're still here. But no clear proofs of other sources of CP-violation are available at this moment, although some hints of discrepancies exist.
So C and P are not symmetries; they are violated even by the spectrum of allowed objects. CP is allowed by the spectrum of allowed objects but the dynamics (especially mixing angles etc.) imply that it is not an exact symmetry. As you can see, the CP-violation is even weaker than the C-violation and the P-violation.
But there is a combination of operations that has to be a symmetry in every relativistic quantum field theory, the CPT-symmetry. This fact was proved by Wolfgang Pauli and is called the CPT-theorem. The CPT operation does C and P at the same moment and it also performs the time reversal – it reverts the direction of the arrow of time.
Note that among C, P, T, only T is an "antilinear operator", which means that \[
T\ket{\lambda \psi} = \lambda^* T\ket\psi
\] including the asterisk which means complex conjugation (that's the reason for the prefix "anti-"). Various combinations of C, P, T are linear or antilinear depending on whether T is included. Note that the complex conjugation is needed for the time reversal already in ordinary non-relativistic quantum mechanics because the complex conjugation is the only sensible way to change \(\exp(+ipx/\hbar)\) to \(\exp(-ipx/\hbar)\), i.e. to change the sign of the momentum \(p\) – and the velocity \(v=dx/dt\) – which is needed for particles to evolve backwards.
Why does CPT have to be a symmetry in every relativistic quantum field theory – and almost certainly an exact symmetry in string theory as well? It's because it may be interpreted as the "rotation of the spacetime by 180 degrees", which is a symmetry because it belongs to the Lorentz group analytically extended to complex values of the parameters (which is allowed).
Work in the momentum space and extend the time coordinate to imaginary values\[
t\to \tau = it.
\] Analytically continue all fields or Green's functions and amplitudes (as functions of the momenta, to be kosher, because only as functions of the momenta, the functions are holomorphic) to the imaginary values of the time component. Now, the 4-dimensional spacetime with points \((x,y,z,\tau)\) becomes a Euclidean 4-dimensional space.
The rotations between \(z\) and \(\tau\) are nothing else than \(tz\)-boosts extended to imaginary values of the "boost rapidity". By the analyticity, if the ordinary real boosts are symmetries, so must be the imaginary boosts. The imaginary rapidity is nothing else than the ordinary angle. Take the angle to be \(\pi\). This will revert the sign of both \(\tau\) and \(z\) – which means that it will perform both P and T. Now, if you analytically continue it back, the effect is clearly nothing else than the flipping of signs of \(t\) and \(z\), so you naively get the PT transformation and prove it is a symmetry because it is just a \(\pi\)-rotation.
However, you actually get a CPT transformation. Purely geometrically, by looking at the shape of the world lines, you can't distinguish PT from CPT because C only acts "internally" and doesn't change the shape of the world lines etc. The reason why the rotation by 180 degrees is CPT and not just PT is that the reflection of T also reverts the "arrow" on all the world lines, and particles moving backwards in time are actually antiparticles. (I could formulate an equivalent argument more mathematically and convincingly, but it's enough here, I hope.)
So CPT is always a symmetry. If you replace all particles by antiparticles; change their configuration to its mirror image; and invert the sign of all the velocities, then the subsequent evolution in time will look like exactly the evolution of the original system backwards in time (reflected in space as well and enjoying the inverted labels for all particles/antiparticles).
Because CP isn't a symmetry and CPT is a symmetry, T – which is a composition of the CP and CPT transformations, a composition of a symmetry and a non-symmetry – clearly refuses to be a symmetry, too. That's also why they could directly detect a violation of T in the BaBar experiment.
This has nothing to do with the arrow of time in statistical physics. Thank God, even Sean Carroll knows and acknowledges this fact.
I must emphasize that these effects are only large enough in special systems interacting via the weak interactions and they're tiny, anyway. In reality, we know the "arrows of time" that have been discussed many times on this blog. We forget but rarely "unforget", eggs break but not unbreak, we mostly get older but not younger, the heat goes from warmer bodies to cooler ones but not vice versa, friction slows down vehicles but doesn't speed them up, and so on. Decoherence produces nearly diagonal density matrices out of pure and coherent states but the opposite process – emergence of quantum coherence out of decoherent chaos – doesn't occur.
These manifestations of the "arrow of time" have nothing whatsoever to do with the violation of T that was discussed at the beginning of the article and that was experimentally demonstrated by BaBar. The microscopic BaBar-like T-violation is neither a necessary nor a sufficient condition for the existence of the arrow of time in thermodynamics etc.
Even if you had microscopically time-reversal-symmetric laws of physics, they would produce time-reversal-asymmetric macroscopic laws with friction and the second law. It's because the origin of all these "macroscopic asymmetries" is in the logical arrow of time – the fact that the probabilities of evolution between ensembles of microstates have to be averaged over initial states but summed over final states, so the initial states and final states have to be treated differently, because of the basic laws of logic and probability calculus.
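The average-over-initial, sum-over-final asymmetry can be illustrated with a toy model. This is purely a made-up six-state example of mine (the permutation and the macrostates are arbitrary), sketching how asymmetric macroscopic transition probabilities arise from perfectly reversible microscopic dynamics:

```python
# Toy model: six microstates evolved by a permutation, i.e. perfectly
# reversible microscopic dynamics (an arbitrary made-up example).
perm = [3, 0, 4, 5, 1, 2]

def macro_prob(A, B):
    """Probability that macrostate A evolves into macrostate B:
    average over initial microstates, sum over final ones."""
    return sum(1 for a in A if perm[a] in B) / len(A)

small, large = [0], [1, 2, 3]
forward = macro_prob(small, large)   # 1.0
backward = macro_prob(large, small)  # 1/3
```

Even though the microscopic map is invertible, the macroscopic probabilities differ by the factor \(|B|/|A|\), precisely because initial states are averaged while final states are summed.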
Again, the microscopic T-violation isn't a necessary condition for the entropy to increase and for other signs of the arrow of time in the macroscopic world around us.
The opposite relationship is also wrong; the microscopic T-violation wouldn't be sufficient for the macroscopic one, either. If you tried to deny the existence of the logical arrow of time, the BaBar-like T-violation in the microscopic laws of physics wouldn't be sufficient to produce the "huge" asymmetries between the processes that go in one direction and those that (usually don't) proceed in the opposite direction simply because the microscopic T-violation is far too weak and doesn't have a "uniform arrow" that would give the future its futureness and award the past with its pastness, anyway.
I plan to dedicate an article to statistical physics in the foreseeable future again. Right now, one must emphasize that the experimental detection of the T-violation is a detection of an asymmetry in the fundamental equations of physics that apply when the initial state and the final state are fully specified and known – so ignorance, the main prerequisite needed for thermodynamics to emerge, is absent.
In the idealised case, the answer to this is slightly surprising. The fact that the mass of a rocket must include the mass of its fuel is embodied in the rocket equation, $$\Delta v = v_e \ln\frac{m_i}{m_f},$$where $m_i$ is the initial mass of the rocket (including fuel, payload and everything else), and $m_f$ is the final mass, including the payload but much less fuel. $v_e$ is the effective exhaust velocity, which we might as well assume stays fixed for a given type of rocket, and $\Delta v$ is essentially the velocity change required to reach escape velocity, which we'll also assume stays constant.
The above equation does not include the acceleration due to gravity, which is of course an important factor. This is because (as is usually done) it's included in the $\Delta v$ term, which includes the velocity you lose to gravitational acceleration as the rocket ascends. You can put in the gravitational acceleration explicitly and the result doesn't change, as I'll show below.
Rearranging the rocket equation gives us$$m_i = m_f e^{\Delta v/v_e},$$which tells us the amount of fuel (the majority of $m_i$) we need to lift a mass $m_f$. You can see that this is exponential in $\Delta v$, meaning that if we want to go a little bit faster we need a much bigger rocket. This is called "the tyranny of the rocket equation."
In this case we don't want to go faster, we just want to send more stuff, i.e. we want to increase $m_f$. But the equation is not exponential in $m_f$, it's
linear. Therefore if we ignore any changes in rocket design that would be needed to increase its size, we can conclude that if you want to double the payload, you only need to double the size of the rocket, not quadruple it.
If we want to do this more precisely, we should include gravitational acceleration in the rocket equation. As per this answer by Asad to another question, this gives us$$\Delta v = v_e \ln \frac{m_i}{m_f} - g\left(\frac{m_f}{\dot m}\right),$$where $g$ is the acceleration due to gravity and $\dot m$ is the rate at which fuel is burned, which we assume is constant over time. According to the reasoning in Asad's answer, we end up with$$m_i = m_f \left(\exp\left(\frac{\Delta v + g\left(\frac{m_f}{\dot m}\right)}{v_e}\right) -1\right)^{-1},$$where $\Delta v$ is now the true velocity change needed rather than the effective value that also absorbs the gravity losses. In Asad's answer, he assumes that $\dot m$ stays constant as you change $m_f$, and he concludes that there is a strong limit to the size of a rocket. But in fact if you were going to make a rocket twice the size, it wouldn't make sense to keep $\dot m$ the same. To take it to an extreme, imagine building something the size of a Saturn V that burns fuel at the same rate as a hobby rocket. It obviously wouldn't be able to lift itself off the launch pad, and nobody would consider building such a design. So let's instead assume that the burn rate is proportional to the size of the rocket. This means that $\frac{m_f}{\dot m}$ is a constant, and the equation as a whole is still of the form$$m_i = m_f \times \text{a constant},$$so it's linear in $m_f$.
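The linearity is easy to see numerically. Here is a small sketch using the simple exponential form with a fixed gravity-loss term; all the numbers ($\Delta v$, exhaust velocity, burn time) are illustrative assumptions of mine, and the only point is that $m_f$ enters as an overall linear factor:

```python
import math

def initial_mass(m_f, delta_v=11_200.0, v_e=4_500.0, g=9.81, burn_time=150.0):
    """Initial mass needed to give a final mass m_f the velocity change
    delta_v, with gravity losses g * burn_time added; burn_time stands
    for the ratio m_f / mdot, held constant as the rocket is scaled."""
    return m_f * math.exp((delta_v + g * burn_time) / v_e)

# Doubling the payload doubles the required rocket; it does not square it.
ratio = initial_mass(2_000.0) / initial_mass(1_000.0)  # exactly 2.0
```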
In fact none of this is really all that surprising after all, because if you want to send twice the mass you could always just use two rockets of the original size. By just strapping those rockets next to each other you've got one of twice the size that can send twice the payload. Moreover, it burns fuel at twice the rate, just as I assumed above. There's no reason that wouldn't work in principle. (Though in practice it would be another matter of course!)
If the equation had been exponential in $m_f$ then there would have been a point at which increasing the payload mass would require an unreasonable amount of extra fuel, and that would have imposed a strong practical limit on rocket size. But since it's linear this doesn't really happen. The limits on rocket size are not due to an exponential increase in propellant mass, but to the engineering challenges in building a structure of that size and complexity that won't fail under the violent conditions of a rocket launch.
These include factors to do with the way the strength of a structure scales with its size and (I imagine) practical issues involved in getting fuel where it needs to be at the right time. In this respect the factors that limit the size of rockets are quite similar to the factors that limit the size of buildings. |
Existence and uniform decay for the Euler-Bernoulli viscoelastic equation with nonlocal boundary dissipation
1.
Departamento de Matemática - Universidade Estadual de Maringá, 87020-900 Maringá - PR, Brazil
$u_{t t} +\Delta^2 u-\int_0^t g(t-\tau) \Delta^2 u(\tau)d\tau = 0\quad$ in $\Omega \times (0,\infty)$
subject to nonlinear boundary conditions is considered. We prove existence and uniform decay rates of the energy by assuming a nonlinear and nonlocal feedback acting on the boundary and provided that the kernel of the memory decays exponentially.
Mathematics Subject Classification:74K20, 74D10, 34B15, 93D2. Citation:Marcelo Moreira Cavalcanti. Existence and uniform decay for the Euler-Bernoulli viscoelastic equation with nonlocal boundary dissipation. Discrete & Continuous Dynamical Systems - A, 2002, 8 (3) : 675-695. doi: 10.3934/dcds.2002.8.675
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' [roughly: "The 'path' only comes into existence because we observe it."] Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7\approx 8$ billion characters before COVFEFE has a good chance of appearing.
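For what it's worth, the estimate can be made exact: since COVFEFE has no proper prefix that is also a suffix, the expected waiting time in a uniformly random stream of capital letters is exactly $26^7$ characters (this is the standard pattern-waiting-time fact, not something from the chat):

```python
ALPHABET = 26
pattern = "COVFEFE"

# Probability that a fixed 7-letter window equals the pattern:
p = (1 / ALPHABET) ** len(pattern)  # about 1.2e-10

# COVFEFE has no nontrivial "border" (proper prefix == suffix), so the
# expected number of letters typed until it first appears is 26**7.
expected_wait = ALPHABET ** len(pattern)  # 8,031,810,176
```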
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? The best sense I can make of it is: you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
...in the past, on short timescales, it has therefore fluctuated rapidly...
Honza [=Jan] U. sent me the following one-hour April 2013 talk by Prof Murry Salby of Australia's Macquarie University:
This astrophysicist and atmospheric scientist has a rather impressive publication record. At the beginning, I was a bit discouraged by Pierre Gosselin's summary that suggested that Salby was making some widespread elementary errors about the direct attribution of CO$_2$ emissions according to their isotopic composition (the extra CO$_2$ we see in the atmosphere generally has a very different composition than the CO$_2$ when we emitted it, because the carbon is being quickly recycled all the time while chemistry doesn't care about the differences between isotopes but it's still true that our additions of CO$_2$ have increased the CO$_2$ concentration).
But I was wrong, Salby isn't doing these particular trivial mistakes and when I ultimately listened to the talk, it looked rather impressive.
He employs various types of statistical models, Fourier transformations, and other things to decode the relationships between CO$_2$ and the temperature. Of course, the temperature is mainly the driver of CO$_2$ and CO$_2$ follows – to say the least, that's the dominant relationship during the glaciation cycles (the time scales from tens to hundreds of thousands of years).
Those things wouldn't be new and I wouldn't listen to another 1-hour talk that just discusses whether CO$_2$ was the cause or the consequence during glaciation cycles. Of course it was the consequence. Whoever still acts as if he were misunderstanding these basic issues is either a hopelessly brainwashed moron or an amazingly dishonest demagogue or both.
But Salby said much more than that.
He argued that the anomalous CO$_2$ concentration may be approximated as the integral of the anomalous temperature, \[
\Delta\,{\rm conc}({\rm CO}_2) = \alpha \int dt\,\Delta T,
\] which sort of explains why it seems to be rising so smoothly (if we ignore the nearly periodic seasonal variations). But Salby has also presented some evidence that the ice record heavily underestimates the fluctuations of the CO$_2$ concentration, especially the high-frequency (short-period) oscillations that occurred a long time ago. If that's true, it's pretty likely that concentrations above 400 ppm may have been rather mundane even before the industrial activity.
Using the Fourier methods, he argues that there is a phase shift of 90 degrees between the temperature and CO$_2$ pretty much at all frequencies. I am not quite seeing how this may be true because at least in the glaciation cycles, i.e. at the 10,000-year time scale, these two quantities are pretty much in sync. How does the phase shift move to 90 degrees for shorter time scales?
And his discussion of the different isotopic composition (C12 vs C13) of the fossil fuels and the present plant life is sophisticated, not the kind of silly caricature I was led to expect. At any rate, Salby concludes that the excess CO$_2$ is caused by the integrated or accumulated positive temperature anomaly in 1920-1940 and 1980-2000 or so and these positive anomalies may be interpreted as noise, not results of any trends.
That sounds nice except that I think it's obvious that the CO$_2$ we have added to the atmosphere has led to some increased CO$_2$ concentrations and the latter increase is comparable to 50% of the former (airborne fraction etc.) – it's not negligible. It doesn't matter that there are 50 times more important contributions to the CO$_2$ atmospheric budget as well. Despite these dominant contributions, a small surplus simply can't become completely invisible.
If you were thinking whether you should listen to that talk, my recommendation is probably Yes. Despite the fact that he is trying to deny some obvious facts – if I understand the discussion about the attribution of an elevated CO$_2$ well and if I am right about its imperfections – he is also saying lots of new things and offering many sophisticated methods that you may want to know about.
At the end, Salby offers some criticisms of the climate models that I only partly agree with. Concerning the agreeable conclusions, he says that the prevailing climate models show CO$_2$ and the temperature essentially as the same thing; in the real world, they're not the same thing at all. These two claims – and their paramount contrast – are self-evidently true.
To mention the propositions I don't quite share, he says that theories can't ever be tested against the past data; tests of predictions of the future are always needed. I disagree with that. It's a historical coincidence whether some data were collected before a theory was written down or after that. A theory is always constructed or chosen according to the data in the past and then it gives predictions for other phenomena as well. Those phenomena may be data to be collected in the future but also additional data that may be collected about the past. Regardless of the timing, such data may be used to strengthen or weaken our confidence in the theory (or rule it out).
See also replies by MeteoLCD (more detailed review than mine!), Anthony Watts' 100+ commenters, The Hockey Schtick, Climate Depot, Tall Bloke, and – from the crazy side of the aisle – John Cook (I agree with some of the criticism) and Deltoid who calls Salby "unhelpful" (for "the cause") LOL. ;-) |
We investigate the claim that second exterior derivative of any form A always vanishes:
$$\mathrm{d}(\mathrm{d}A)\mathrm{=0}$$This is Carroll's equation (2.80). He then tells us that this is due to the definition of the exterior derivative and the fact that partial derivatives commute, ##{\partial }_{\alpha }{\partial }_{\beta }={\partial }_{\beta }{\partial }_{\alpha }##. From the definition we get
$$\large\mathrm{d(dA)}=\left(p+1\right)p{\partial }_{[{\mu }_1}{\partial }_{[{\mu }_2}A_{{\mu }_3\dots {\mu }_{p+1}]]}$$This contains nested antisymmetrisation operators which we met in Exercise 2.08. The expansion of the equation contains ##p!\left(p+1\right)!## terms in total containing permutations of the indices ##{\mu }_1{\mu }_2{\mu }_3\dots {\mu }_{p+1}##. If ##p## were ##10## that would be ##10!\times 11!\approx 1.4\times {10}^{14}## terms.
First I exercised my permutation skills with ##2, 3, 4## and ##n## indices to get the drift of the proof. I was then able to expand groups of ##p+1## terms of ##\mathrm{d}(\mathrm{d}A)##. Each one vanished due to ##{\partial }_{\alpha }{\partial }_{\beta }={\partial }_{\beta }{\partial }_{\alpha }##. It did not depend on the antisymmetry of ##A##. This is rather like the fact that the wedge product of a 2-form and a 1-form does not depend on the antisymmetry of the 2-form as we discovered in Commentary 2.9 Differential forms.pdf.
We also note that
$$\large A_{[{\mu }_1}B_{[{\mu }_2}C_{{\mu }_3\dots {\mu }_{p+1}]]}=0$$for any rank 1 tensors ##A,B## and any tensor ##C## and the up/down position of any index in the tensors is immaterial.
Read all 5 pages at Commentary 2.9 Second exterior derivative.pdf. |
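The symmetry argument can also be checked numerically in the simplest case, a 1-form in three dimensions. In the sketch below (mine, a toy check rather than the general proof), the array `S` stands in for ##{\partial }_a{\partial }_b A_c##: it is only required to be symmetric in ##a,b##, and the antisymmetry of ##A## is never used, matching the observation above.

```python
import itertools
import random

def sign(perm):
    # Parity of a permutation given as a tuple of positions.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

n = 3
# S[a][b][c] plays the role of the second derivative d_a d_b A_c:
# symmetric in (a, b) because partial derivatives commute, and
# completely arbitrary in c.
rng = random.Random(0)
S = [[[0.0] * n for _ in range(n)] for _ in range(n)]
for a in range(n):
    for b in range(a, n):
        for c in range(n):
            S[a][b][c] = S[b][a][c] = rng.random()

def antisym(a, b, c):
    """Total antisymmetrization of S over its three indices."""
    idx = (a, b, c)
    return sum(sign(p) * S[idx[p[0]]][idx[p[1]]][idx[p[2]]]
               for p in itertools.permutations(range(3))) / 6.0

# Every component vanishes, purely because S is symmetric in (a, b):
# permutations that differ by swapping the first two slots cancel in pairs.
```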
Answer
(a)$\tau=(\frac{L}{2})mg$ (b)0 (c)$\tau=(\frac{L}{2})mg$
Work Step by Step
We know that $\tau=r\times F$. (a) $\tau=(\frac{L}{2})mg$. (b) $\tau=(0)mg=0$. (c) $\tau=-(\frac{L}{2})mg$, whose magnitude is $(\frac{L}{2})mg$; the minus sign indicates the opposite sense of rotation.
These kinds of questions literally plagued me for a long while. So, I feel your pain when you feel like you need some outside affirmation of whether your remarks are correct.
Before moving on, however, let's define some notation. We define the set $\mathbb{R}^n$ as the set of $n$-tuples of real numbers. That is, elements of $\mathbb{R}^n$ look like $(a_1,a_2,\dots, a_n)$ where the $a_i$ are in the set $\mathbb{R}$. We define $M_{n\times 1}(\mathbb{R})$ as the set of $n\times 1$ matrices with elements from the set $\mathbb{R}$. Also let's set $M_{m\times n}(F)$ to be the set of $m\times n$ matrices with entries from the underlying set of the field $F$. Also, the little operator $^t$ is just the matrix transpose.
The first thing that will help you understand the problem you're having is that you must remember that a vector space is so much more than a set. A vector space is an ordered triple $(V, F, \cdot)$ such that $V$ is an abelian group, $F$ is a field, and $\cdot$ (scalar multiplication) is a map from the underlying sets of $V$ and $F$ to the underlying set of $V$. There are also compatibility axioms regarding the addition operations in $V$ and $F$, and compatibility axioms regarding scalar multiplication with multiplication in $F$. The things that we call 'vectors' are the elements of the underlying set of $V$.
The most detail that one usually delves into when specifying a vector space is naming the underlying set of $V$ and the underlying set of $F$. This is because often the group, field, and scalar operations are self-evident and a pain to typeset. For example, when we say that $\mathbb{R}$ is a vector space over itself, the group operation and the field addition are the same: real addition; and scalar multiplication coincides with the field multiplication: real multiplication.
Now you should note that we can begin to discuss bases and dimensions. These discussions do not depend on the representation of the vectors of your vector space---only on the vectors themselves. And as the natural numbers can be built up set-theoretically, we can talk about a vector space being finite dimensional (it has a basis equinumerous with a finite ordinal).
If all this layering were not bad enough, when we deal with finite dimensional vector spaces we often opt to write the vectors of a vector space not in their set-theoretic representation but rather as a coordinate vector. This is where my hell began, and I think it is where yours is too.
Let's assume we are dealing with a finite dimensional vector space $V$ over some field $F$. Let's say the dimension is $n$. Now we note that $M_{n\times 1}(F)$ also forms a vector space over $F$ (with matrix addition and entrywise scalar multiplication). A coordinate representation of $V$ is a linear isomorphism from $V$ to $M_{n\times 1}(F)$, and the coordinate vector of a vector is the image of the vector under this linear isomorphism. Here the problems start, because there are so very many different linear isomorphisms from $V$ to $M_{n\times 1}(F)$. In fact, for each ordered basis of $V$ and each ordered basis of $M_{n\times 1}(F)$ there is an isomorphism which maps the first onto the other in the same order.
Although this might seem like a deterrent, it is extremely nice computationally---especially when we set up some rules of thumb. One important rule of thumb: there is a canonical coordinate representation of $\mathbb{R}^n$. It takes the vector $(1,0,0,\dots,0)$ and carries it onto the vector $[1,0,0,\dots,0]^t$, and we call the latter the coordinate vector of the former. Another rule of thumb: we assume that the ordered basis we choose for $M_{n\times 1}(F)$, the images of the ordered basis of $V$, is, in order: $[1,0,\dots,0]^t, [0,1,\dots,0]^t,\dots,[0,\dots,0,1]^t$.
What do I mean by 'it is nice computationally'? As I'm sure you've worked with, setting up a coordinate representation allows us to codify a linear transformation as a matrix in $M_{m\times n}(F)$. That is, when we have two vector spaces $V$ and $W$ and a linear transformation $T:V\rightarrow W$ AND we have chosen a coordinate representation of $V$ with $M_{n\times 1}(F)$ and a coordinate representation of $W$ with $M_{m\times 1}(F)$, we can codify $T$ as an element of $M_{m\times n}(F)$. And we can realize the application of $T$ to an element in $V$ as matrix multiplication between an element of $M_{m\times n}(F)$ and the coordinate representation of the vector. This way of explanation also reveals that a matrix is dependent on the two ordered bases chosen for $V$ and $W$.
Another rule of thumb: when $V=W$ we assume that the basis we are using to represent the elements of $V$ as coordinate vectors does not change (though there is no a priori reason to assume it; it's just convention).
Now let's directly tackle your questions. It appears your operator $[\cdot]_\mathcal{C}$ is the linear isomorphism I talked about above which assigns an element of $\mathbb{R}^n$ to its coordinate vector representation. I personally do not codify $n\times 1$ matrices as ordered $n$-tuples. In my theory, you do not have $v=[v]_\mathcal{C}$. You do get things which look similar. To typify what I mean let's take an example. Let's take $(1,0,\dots, 0)\in\mathbb{R}^n$. Then$$[(1,0,\dots,0)]_\mathcal{C}=[1,0,\dots,0]^t$$
And as I said, in my theory, I do not codify $n\times 1$ matrices as $n$-tuples. So this function is not the identity. Everything else in your post, I agree with however.
I hope I have explained enough and that I have saved you from future turmoil.
ADDENDUM
Perhaps if I give an example you can see why the conventions we use are important. Let's take the vector space $V=\mathbb{R}^2$. There are many bases for $V$. The standard basis is $\{(1,0), (0,1)\}$ (in this order). But let's work with another basis. Let's set $\mathcal{B}=\{(2,0), (1,1)\}$ (in this order). This basis was not chosen for any particular reason other than that it's different from the standard basis. Now let's take the vector $v=(3,1)$ in $\mathbb{R}^2$. What is $v$'s coordinate vector? That is, what is $[(3,1)]_\mathcal{B}$? We have that $[\cdot]_\mathcal{B}$ is the linear operator from $\mathbb{R}^2$ to $M_{2\times 1}(\mathbb{R})$ which takes the vector $(2,0)$ to the vector $[1,0]^t$ and the vector $(1,1)$ to the vector $[0,1]^t$. Thus
$$[(3,1)]_\mathcal{B}=[(2,0)+(1,1)]_\mathcal{B}=[(2,0)]_\mathcal{B}+[(1,1)]_\mathcal{B}=[1,0]^t+[0,1]^t=[1,1]^t$$
Thus when we work in the basis $\mathcal{B}$ the coordinate vector of $(3,1)$ is $[1,1]^t$.
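The computation above is easy to mechanize. Here is a small sketch (the function name is my own, and it hard-codes the two-dimensional case via Cramer's rule):

```python
def coords_in_basis(v, b1, b2):
    """Coordinates (c1, c2) of v in the ordered basis {b1, b2} of R^2,
    i.e. solve c1*b1 + c2*b2 = v by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (c1, c2)

# The example from the text: B = {(2,0), (1,1)}, v = (3,1).
print(coords_in_basis((3, 1), (2, 0), (1, 1)))  # prints (1.0, 1.0)
```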
This all works because there is a convention that the operator $[\cdot]_\mathcal{B}$ carries $\mathcal{B}$ to $\{[1,0]^t,[0,1]^t\}$. But what if $[\cdot]_\mathcal{B}$ carried $\mathcal{B}$ to a different basis? Let's take $\{[2,0]^t,[1,1]^t\}$ as an example. Then we get something which does not follow convention:
$$[(3,1)]_\mathcal{B}=[(2,0)+(1,1)]_\mathcal{B}=[(2,0)]_\mathcal{B}+[(1,1)]_\mathcal{B}=[2,0]^t+[1,1]^t=[3,1]^t$$
You see that we don't get something very different: in this case the coordinate vector is just $v$ again. This is the reason you might think that $v=[v]_\mathcal{C}$ in your example (especially if you implement $n\times 1$ matrices as $n$-tuples), because the conventions force us to take a basis for $\mathbb{R}^n$ onto the standard basis for $M_{n\times 1}(\mathbb{R})$. And when we have chosen the standard basis for $\mathbb{R}^n$, the coordinate vector representations of vectors look almost exactly alike.
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Basically 2 strings, $a>b$, which go into the first box, which does division to output $b,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, otherwise feeds $b,r$ back into the division box.
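The two boxes described here are just Euclid's algorithm. A minimal sketch (working with integers rather than strings):

```python
def gcd(a, b):
    # The "division box": replace (a, b) by (b, r) where a = b*q + r
    # and 0 <= r < b, until the "check box" sees a zero remainder;
    # the last nonzero value (held in a) is the gcd.
    while b != 0:
        a, b = b, a % b
    return a
```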
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Kaj Hansen
As an undergraduate, I studied mathematics at the University of Georgia. In early 2014, I put together a short series of expository videos on Ramsey theory that can be viewed here (produced and published by my good friend Eddie Beck).
I'm active primarily in the point-set topology and various abstract algebra tags. Here's a handful of my less run-of-the-mill contributions to this site:
Finding primitive elements for finite extensions of $\mathbb{Q}$: a Galois-theoretic technique Constructing connected spaces with arbitrarily many path components On symmetric polynomials Square-and-multiply: an algorithm for computationally efficient exponentiation in a given semigroup On the infinite dihedral group $D_{\infty}$ Finding Galois groups $\cong S_n$ via generating sets: an example If $\text{Gal}(p) \cong G_1$ and $\text{Gal}(q) \cong G_2$, when is $\text{Gal}(pq) \cong G_1 \times G_2$? Visualizing ring homomorphisms The intersection of two compact sets need not be compact A continuous function $f:[a,b] \to \mathbb{R}$ is Riemann integrable Quotient spaces are ill-behaved with respect to separation axioms The box topology on infinite products: problems with continuity The Galois group of an irreducible, rational, cubic polynomial is determined by its discriminant A one-point connectification of any topological space A simple application of the Banach fixed-point theorem Proving that closed & open subsets of locally compact Hausdorff spaces are locally compact—without too much machinery Friendly logarithms: Example 1, Example 2, Example 3
Outside of math, I am interested in existentialism, Christianity, and exploring the nature of consciousness. I also hold music in the highest regard. I (at least try to) listen to a wide variety of genres, from minimalist / ambient to folk to psychedelic rock, with a particular soft spot for "extreme" metal—especially death and black.
If you want to connect with me elsewhere, I play chess here under the username Kaj_Hansen and Starcraft II (main-racing as Terran) under the "BattleTag" MementoMori#11653. Feel free to add me.
Athens, GA
|
Range of a matrix
The range of an m × n matrix A is the span of the n columns of A. In other words, for
\[ A = [ a_1 \ a_2 \ a_3 \ \ldots \ a_n ] \]
where \(a_1 , a_2 , a_3 , \ldots , a_n\) are m-dimensional vectors,
\[ range(A) = R(A) = span(\{a_1, a_2, \ldots , a_n \} ) = \{ v \mid v = \sum_{i = 1}^{n} c_i a_i , \ c_i \in \mathbb{R} \} \]
The dimension (number of linearly independent columns) of the range of A is called the rank of A. So if the 6 × 3 matrix B has a 2-dimensional range, then \(rank(B) = 2\).
For example
\[C =\begin{pmatrix}
1 & 4 & 1\\ -8 & -2 & 3\\ 8 & 2 & -2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2\\ y_3 \end{pmatrix}\]
C has a rank of 3, because \(x_1\), \(x_2\) and \(x_3\) are linearly independent.
Nullspace
The nullspace of an m \(\times\) n matrix A is the set of all n-dimensional vectors that are mapped to the m-dimensional zero vector (the vector where every entry is 0) when multiplied by A. This is often denoted as
\[N(A) = \{ v \mid Av = 0 \}\]
The dimension of the nullspace of A is called the nullity of A. So if the 6 \(\times\) 3 matrix B has a 1-dimensional nullspace, then \(nullity(B) = 1\).
The range and nullspace of a matrix are closely related. In particular, for an m \(\times\) n matrix A,
\[\{w \mid w = u + v, u \in R(A^T), v \in N(A) \} = \mathbb{R}^{n}\]
\[R(A^T) \cap N(A) = \{0\}\]
This leads to the
rank--nullity theorem, which says that the rank and the nullity of a matrix sum together to the number of columns of the matrix. To put it into symbols:
\[A \in \mathbb{R}^{m \times n} \Rightarrow rank(A) + nullity(A) = n\]
For example, if B is a 4 \(\times\) 3 matrix and \(rank(B) = 2\), then from the rank--nullity theorem, one can deduce that
\[rank(B) + nullity(B) = 2 + nullity(B) = 3 \Rightarrow nullity(B) = 1\]
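The rank--nullity bookkeeping above is easy to check numerically; a small NumPy sketch with a made-up matrix:

```python
import numpy as np

# A 4x3 matrix whose second row is twice the first, so only 2 rows are independent.
B = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 0., 0.]])

rank = np.linalg.matrix_rank(B)
nullity = B.shape[1] - rank   # rank-nullity: rank + nullity = number of columns
print(rank, nullity)          # → 2 1
```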
Projection
The projection of a vector x onto the vector space J, denoted by Proj(x, J), is the vector \(v \in J\) that minimizes \(\vert x - v \vert\). Often, the vector space J one is interested in is the range of the matrix A, and the norm used is the Euclidean norm. In that case
\[Proj(x,R(A)) = \{ v \in R(A) \mid \vert x - v \vert_2 \leq \vert x - w \vert_2 \ \forall w \in R(A) \}\]
In other words
\[Proj(x,R(A)) = \operatorname{argmin}_{v \in R(A)} \vert x - v \vert_2\] |
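The argmin characterization of the projection onto R(A) can be computed with a least-squares solve; a NumPy sketch (A and x are made up):

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])        # R(A) is the xy-plane inside R^3
x = np.array([3., 4., 5.])

# argmin_{v in R(A)} |x - v|_2: solve the least-squares problem A c ≈ x,
# then the projection is v = A c.
c, *_ = np.linalg.lstsq(A, x, rcond=None)
proj = A @ c
print(proj)                     # → [3. 4. 0.]
```

The residual x - proj is orthogonal to every column of A, which is the defining property of the Euclidean projection.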
Let $\{a_n\}$ be a sequence such that
1. $a_n\geq 0$ for all $n$
2. $\{a_n\}$ is monotonically decreasing
3. $\sum_{n=1}^\infty a_n$ converges
Is it true that, as $n\rightarrow\infty$, $$n\log n\;a_n\rightarrow 0\;?$$
Given the hypotheses, we can show that $n a_n\rightarrow 0$ as $n\rightarrow\infty$. This follows since $$0\leq 2na_{2n}\leq 2(S_{2n}-S_n)\quad\text{ and }\quad 0\leq(2n+1)a_{2n+1}\leq 2(S_{2n+1}-S_n)+a_{2n+1}$$
I've been trying to adapt this approach to $n\log n\; a_n$, but it has been fruitless so far. I've been working with inequalities that involve $\log$, but they each seem to be too 'weak'; in that, I end up with a product sequence with one part going to $0$ and the other going to $\infty$. I also tried condensation, but I can't determine whether the general term of that new series forms a monotonically decreasing sequence. I also have not been able to come up with a counter-example.
Any help in resolving the question in either direction is appreciated.
UPDATE
RRL's approach solves every case where $\liminf n\log n\;a_n>0$, but we still haven't resolved the case where $\liminf n\log n\; a_n=0$ and $\limsup n\log n\; a_n>0$.
On a serendipitous note, I was reading through one of my books on analysis and the result for $na_n\rightarrow 0$ was posed as a problem. It came with a footnote that $1/n$ cannot be replaced by a function that approaches $0$ more quickly. I take this to include $1/(n\log n)$. However, I'm having trouble finding the exact reference the author cites. The book I found this in is "Elementary Real and Complex Analysis" by Georgi E. Shilov (first printed in 1973), and the only direction he gives is the name "A.S. Nemirovski". So, any help directing me to this reference would be a big help as well. |
Firstly, my question may be related to a similar question here: Are complex determinants for matrices possible and if so, how can they be interpreted?
I am using: $$ \left(\begin{array}{cc} a&b\\ c&d \end{array}\right)^{-1} = {1 \over a d - b c} \left(\begin{array}{rr} d&-b\\ -c&a \end{array}\right)~,~~\text{ where } ad-bc \ne 0. $$ which is a very well known way to calculate the inverse of a 2×2 matrix. My problem is interpreting the significance of a complex determinant (i.e. the denominator on the right hand side of the '='). I've always assumed you'd take the magnitude of the complex determinant in this case?
The reason why I am asking is I am writing a function in the C programming language which should be able to take real (imaginary part = 0) and complex values of $a, b, c$ and $d$.
If I were to take the magnitude of the complex number in the denominator this wouldn't be a problem, but for cases where the real part of the determinant turns out to be negative and the imaginary part is equal to zero, would it be correct to take the magnitude, as it would lead to a sign change in the elements of the inverse matrix?
E.g. Determinant $ = -2 +j0$, so: $abs(-2 + j0) = 2$, which would change the signs of the elements of my inverse matrix. However, if I were to work this out with paper and pen, I would simply treat my "complex" determinant as a real number and not bother taking the magnitude or the absolute value, thus maintaining the '-' in the real part (determinant $=-2$).
Many thanks
EDIT: Say the determinant is $z$. So, $z^{-1} = r^{-1}(\cos \theta + j \sin \theta)^{-1}$, where $r = |z|$ and $\theta = \arg(z)$. If $z_{imag} = 0$, then: $$ z^{-1} = r^{-1} \cos\theta $$
$$ z^{-1} =1/r = 1/|z| $$ The line above is obviously not correct and is the source of my confusion! |
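For what it's worth, in code the cleanest resolution of the question above is to divide by the complex determinant itself and never take a magnitude; complex division is well defined, and the negative-real case then comes out with the right signs automatically. A minimal Python sketch (Python's built-in complex type stands in for C's `double complex`; the function name is mine):

```python
def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] with real or complex entries.
    The determinant is used as-is: no abs() or magnitude is taken."""
    det = a * d - b * c
    if det == 0:
        raise ZeroDivisionError("matrix is singular")
    return (d / det, -b / det, -c / det, a / det)

# Real matrix with determinant -2: the signs come out correctly.
print(inv2x2(1, 1, -1, -3))  # → (1.5, 0.5, -0.5, -0.5)
```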
Since I spend a lot of time on solving sparse linear equation systems, I am also a user of sparse matrix reordering methods. My claim to fame is that I have implemented approximate minimum degree myself and it is used in MOSEK.
Below I summarize some interesting links to graph partitioning software:
It is very common to use a BLAS library to perform linear algebra operations such as dense matrix times dense matrix multiplication, which can be performed using the dgemm function. The advantages of BLAS are that
it is a well defined standard,
and hardware vendors such as Intel supply tuned versions.
Now at MOSEK, my employer, we use the Intel MKL library, which includes a BLAS implementation. It really helps us deliver good floating point performance. Indeed we use a sequential version of Intel MKL but call it from potentially many threads using Cilk Plus. This works well due to the well designed BLAS interface. However, there is one rotten apple in the basket and that is error handling.
Here I will summarize why the error handling in the BLAS standard is awful from my perspective.
First of all, why can errors occur when you do the dgemm operation, if we assume the dimensions of the matrices are correct and ignore issues with NaNs and the like? Well, in order to obtain good performance the dgemm function may allocate additional memory to store smallish matrices that fit into the cache. I.e. the library uses a blocked version to improve the performance.
Oh wait, that means it can run out of memory, and then what? The BLAS standard error handling is to print a message to stderr or something along that line.
Recall that dgemm is embedded deep inside MOSEK, which might be embedded deep inside a third party program. This implies an error message printed to stderr does not make sense to the user. Also the user would NOT like us to terminate the application with a fatal error. Rather we want to know that an out-of-space situation happened and terminate gracefully. Or do something to lower the space requirement, e.g. use fewer threads.
What is the solution to this problem? The only solution offered is to replace a function named xerbla that gets called when an error happens. The idea is that the function can set a global flag indicating an error happened. This might be a reasonable solution if the program is single threaded. Now instead assume you use a single-threaded dgemm (from say Intel MKL) but call it from many threads. Then first of all you have to introduce a lock (a mutex) around the global error flag, leading to performance issues. Next, it is hard to figure out which of the dgemm calls failed. Hence, you have to fail them all. What a pain.
Why is the error handling so primitive in BLAS libraries? I think the reasons are:
BLAS is an old Fortran based standard.
For many years BLAS routines would not allocate storage. Hence, dgemm would never fail unless the dimensions were wrong.
BLAS was proposed by academics, who do not care so much about error handling. I mean, if you run out of memory you just buy a bigger supercomputer and rerun your computations.
If BLAS had been invented today it would most likely have been designed in C, and then all functions would have returned an error code. I know dealing with error codes is a pain too, but that would have made error reporting much easier for those who wanted to do it properly.
I found the talk "Plain Threads are the GOTO of today's computing" by Hartmut Kaiser very interesting, because I have been working on improving the multithreaded code in MOSEK recently and am also thinking about how MOSEK should deal with all the cores in future CPUs. I agree with Hartmut that something other than plain threads is needed.
First a clarification: conic quadratic optimization and second order cone optimization are the same thing. I prefer the name conic quadratic optimization though.
Frequently it is asked on the internet what the computational complexity of solving conic quadratic problems is. Or the related question: what is the complexity of the algorithms implemented in MOSEK, SeDuMi or SDPT3?
To the best of my knowledge almost all open source and commercial software employ a primal-dual interior-point algorithm using for instance the so-called Nesterov-Todd scaling.
A conic quadratic problem can be stated on the form
\[\begin{array}{lccl}\mbox{min} & \sum_{j=1}^d (c^j)^T x^j & & \\\mbox{st} & \sum_{j=1}^d A^j x^j & = & b \\& x^j \in K^j & & \\\end{array}\]
where \(K^j\) is an \(n^j\)-dimensional quadratic cone. Moreover, I will use \(A = [A^1,\ldots, A^d ]\) and \(n=\sum_j n^j\). Note that \(d \leq n\). First observe that the problem cannot be solved exactly on a computer using floating-point numbers, since the solution might be irrational. This is in contrast to linear problems, which always have a rational solution if the data is rational.
Using for instance the primal-dual interior-point algorithm, the problem can be solved to \(\varepsilon\) accuracy in \(O(\sqrt{d} \ln(\varepsilon^{-1}))\) interior-point iterations, where \(\varepsilon\) is the accepted duality gap. The most famous variant having that iteration complexity is based on Nesterov and Todd's beautiful work on symmetric cones.
Each iteration requires the solution of a linear system with the coefficient matrix\[ \label{neweq}\left [ \begin{array}{cc}H & A^T \\A & 0 \\\end{array}\right ] \mbox{ (*)}\]This is the most expensive operation, and it can be done in \(O(n^3)\) operations using Gaussian elimination, so we end up with the complexity \(O(n^{3.5}\ln(\varepsilon^{-1}))\).
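For concreteness, here is a tiny dense instance of the system (*) assembled and solved with Gaussian elimination in NumPy (H, A and the right-hand side are made up):

```python
import numpy as np

H = np.array([[2., 0.],
              [0., 2.]])            # toy Hessian block (symmetric)
A = np.array([[1., 1.]])            # one equality constraint
rhs = np.array([1., 1., 1.])        # arbitrary right-hand side

# Assemble the coefficient matrix [[H, A^T], [A, 0]] of (*).
K = np.block([[H, A.T],
              [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, rhs)       # dense Gaussian elimination, O(n^3)
```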
That is the theoretical result. In practice the algorithms usually work much better, because they normally finish in something like 10 to 100 iterations and rarely employ more than 200 iterations. In fact, if the algorithm requires more than 200 iterations then typically numerical issues prevent the software from solving the problem.
Finally, a conic quadratic problem is typically sparse, and that implies the linear system mentioned above can be solved much faster when the sparsity is exploited. Figuring out how to solve the linear equation system (*) in the lowest complexity when exploiting sparsity is NP-hard, and therefore optimization software only employs various heuristics, such as minimum degree ordering, that help cut the cost of each iteration. If you want to know more, then read my Mathematical Programming publication mentioned below. One important fact is that it is impossible to predict the iteration complexity without knowing the problem structure and then doing a complicated analysis of that. I.e. the iteration complexity is not a simple function of the number of constraints and variables unless A is completely dense.
To summarize, in practice primal-dual interior-point algorithms solve a conic quadratic problem in less than 200 times the cost of solving the linear equation system (*).
So can the best proven polynomial complexity bound be proven for software like MOSEK? In general the answer is no, because the software employs a bunch of tricks that speed up the practical performance but unfortunately destroy the theoretical complexity proof. In fact, it is commonly accepted that if the algorithm is implemented strictly as theory suggests then it will be hopelessly slow.
I have spent a lot of time on implementing interior-point methods, as documented by the Mathematical Programming publication, and my view is that the practical implementations are very close to the theory. |
These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).
*Exercise 1*
A positive charge is located at position $\vec{r}_s$. This charge is the source of an electric field that we are going to simulate with arrows.
Our goal is to determine the E field at a point P, at position $\vec{r}_P = (x_P,y_P,z_P)$. This is called the "field point." The electric field at point $\vec{r}_P$ due to the source [charge] at $\vec{r}_s$ is $\vec{E} = \frac{k q\hat{r}}{r^2}$. The vector $\vec{r}$ points from the source to the field point.
This program will be effectively two-dimensional, so we will take $z_P = z_S = 0.$
1. Open the template for Exercise 1, which shows a sphere (representing charge) placed at a position $\vec{r}_s.$ The field point P is shown at $\vec{r}_P.$ The arrows represent the position vector of each.
2. Create an arrow that starts at the source (not the origin) and ends at the field point P. This arrow is the $\vec{r}$ that appears in the equation for $\vec{E}$.
3. The electric field due to a positive charge should be in the same direction as $\vec{r}$. Looking at the arrow you've created, make sure that it is in the same direction as what you would expect for the E field at point P.
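Since the exercises are platform-agnostic, here is a minimal NumPy sketch of the Exercise 1–2 computation (the charge value is the one used later in Exercise 3; the positions are made up):

```python
import numpy as np

k = 8.99e9                         # Coulomb constant, N m^2 / C^2
q = 2e-10                          # source charge, C
rS = np.array([0.1, 0.0, 0.0])     # source position (made-up values)
rP = np.array([0.4, 0.4, 0.0])     # field point P

r_vec = rP - rS                    # arrow from the source to the field point
r_mag = np.linalg.norm(r_vec)      # |r|
r_hat = r_vec / r_mag              # unit vector; |r_hat| = 1 by construction
E = k * q * r_hat / r_mag**2       # E = k q r_hat / r^2, along r_hat for q > 0
print(r_mag, E)
```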
*Exercise 2*
1. Once your arrow $\vec{r}$ is in the correct position, have your program calculate and print its magnitude (denoted |$\vec{r}$| or simply $r$).
2. The unit vector is defined as $\hat{r} = \vec{r}/r$. Create an arrow that represents $\hat{r}$.
3. What magnitude will any unit vector have?
4. Check your new arrow for plausibility (use print statements as needed). What should be true of its length? Of its direction?
*Exercise 3*
Start a new program with the template for Exercise 3. The goal is to create a game that is similar to Minesweeper. In Minesweeper, a grid of boxes hides landmines, and the goal is to figure out the locations of the mines by clicking on the boxes around them, without uncovering the mines themselves (which explode if they are uncovered). Here, we start with an electric field due to a distribution of point charges (the "mines"). As in Minesweeper, the field is covered by a grid of boxes.

As a piece of the field is revealed with each click, the users can guess where the charges are located.
1. Run the program. Notice that clicking a box reveals a field arrow. The first click takes a few seconds to work, so be patient.
2. Notice that the field is not physically correct; it is just a constant field. Change the field so that it represents the electric field of a charge $q = 2\cdot 10^{-10} C$ (which can be represented as 2e-10). Hint: You'll need to define r and rhat, but rP and rS are already defined for you in the code.
3. Run your code and edit your program until a plausible result is obtained.
4. Predict how this visualization would be different if the charge was negative.
Test your predictions.
*Exercise 4*
1. Add a second point charge to the simulation. Adjust the electric field so that it is the superposition of the E fields of the individual point charges. In other words, each box will still only have one arrow, with axis given by $\vec{E} = \vec{E_1}+\vec{E_2}$.
2. Check the output for plausibility:
* What do you expect to see if the program happens to place the two charges at the same location? (You'll only see one sphere - but how will the arrows be affected?)
* Test the situations where the two charges are the same sign
* Test two opposite charges (color-code them so you know which is positive).
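The superposition step above can be sketched the same way; each grid point just sums the single-charge fields (the positions and charges here are made up):

```python
import numpy as np

def E_point(q, rS, rP, k=8.99e9):
    """Field at rP due to a point charge q at rS: k q r_hat / r^2."""
    r = rP - rS
    return k * q * r / np.linalg.norm(r)**3

# A +/- pair placed symmetrically about the origin, field point on the y axis.
rP = np.array([0.0, 0.2, 0.0])
E_total = (E_point(+2e-10, np.array([-0.1, 0.0, 0.0]), rP)
           + E_point(-2e-10, np.array([+0.1, 0.0, 0.0]), rP))
print(E_total)   # y components cancel by symmetry; the field points in +x
```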
_Optional_
* Add in functionality that prevents the two charges from being placed at the same grid point. (For those with programming experience.)
* Play around with the number of grid points. If you have a large number of grid points, you may need to increase the scene range to accommodate all the boxes. You also may want to add more charges.
* Since the actual game Minesweeper stops the game when a mine is triggered, you could have the program stop if the user clicks a charge and show a notification that the game is lost. As written, all the template does is turn that grid point yellow when the mine is triggered. This is boring - can you make the triggering of a mine more exciting? Make everything explode or something? |
The answer to the first question is yes.
Let $V$ be a finite dimensional complex vector space, and let $\langle\cdot,\cdot\rangle:V\times V\rightarrow\Bbb{C}$ be an inner product. Fix a basis $\{e_1,\ldots,e_n\}$, so that for any two vectors $u,v\in V$, we have
$$\langle u,v\rangle=\left\langle\sum_{j=1}^n\alpha_je_j,\sum_{k=1}^n\beta_ke_k\right\rangle=\sum_{j=1}^n\sum_{k=1}^n\alpha_j\bar{\beta}_k\langle e_j,e_k\rangle$$
So if we define $M=(M_{ij})_{n\times n}=(\langle e_i,e_j\rangle)_{n\times n}$, we have exactly the expression
$$\langle u,v\rangle = u^\intercal M \bar{v}$$
as desired. It is easy to check that $M$ is Hermitian - this follows from conjugate symmetry of the inner product: $\overline{\langle x,y\rangle}=\langle y,x\rangle$. Positive definiteness follows from the positivity of the inner product - $\langle x,x\rangle>0$ for all $x\in V\backslash\{0\}$.
For the second claim, it suffices to show the identity $x^\intercal M \bar{y}=y^*\overline{M} x$. This is straightforward:
$$x^\intercal M\bar{y}=\sum_j\alpha_j\sum_kM_{jk}\bar{\beta}_k=\sum_k\bar{\beta}_k\sum_jM_{jk}\alpha_j=y^*\overline{M}x$$
Notice of course that we must then use the matrix $\overline{M}$, i.e. the same matrix won't work. (Thanks to @user1551 for pointing this out) |
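The identity $\langle u,v\rangle = u^\intercal M\bar v$ is easy to sanity-check numerically; a NumPy sketch with a made-up (non-orthonormal) basis of $\mathbb{C}^2$:

```python
import numpy as np

e = [np.array([1.0, 0.0]), np.array([1.0, 1.0j])]   # made-up basis of C^2
# Gram matrix M[i, j] = <e_i, e_j>, conjugate-linear in the second slot;
# np.vdot(y, x) computes sum_k x_k * conj(y_k) = <x, y>.
M = np.array([[np.vdot(e[j], e[i]) for j in range(2)] for i in range(2)])

alpha = np.array([2.0, 1.0j])   # coordinates of u in the basis
beta = np.array([1.0, 3.0])     # coordinates of v in the basis
u = alpha[0] * e[0] + alpha[1] * e[1]
v = beta[0] * e[0] + beta[1] * e[1]

lhs = np.vdot(v, u)                 # <u, v> computed directly
rhs = alpha @ M @ np.conj(beta)     # u^T M v-bar in coordinates
print(lhs, rhs)                     # the two agree; M is Hermitian
```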
I'm looking for existing papers studying a variation of the Einstein equation that does not rely on the annoying matter conservation identity:
$$ {T^{\mu \nu}}_{;\nu} = 0 $$
And instead tries to equate the divergence-free Einstein tensor with a sum of $T_{\mu \nu}$ plus some gravitational energy tensor $Y_{\mu \nu}$:
$$ G_{\mu \nu} = 8 \pi G (T_{\mu \nu} + \theta Y_{\mu \nu}) $$
Where the $\theta$ factor is a parameter of the ansatz.
Let me explain why this ansatz should be physically interesting: the vanilla version of the Einstein equation is built on the assumption that conversion between gravitational and non-gravitational energy never happens. If you cherry-pick the gravitational energy tensor to be, say, the Landau-Lifshitz tensor:
$$ Y^{\mu \nu} = (\sqrt{-g}_{; \alpha \beta}) ( g^{\mu \nu} g^{\alpha \beta} - g^{\mu \alpha} g^{\nu \beta} ) $$
(notice this is not the pseudo tensor variant; those derivatives are covariant)
this tensor vanishes in the weak-field limit and is non-zero only at second order in the metric, so it would match most astronomical observations that agree with GR in the weak-field limit. It would be interesting to see what predictions this produces in the nonlinear regime. In fact, the above argument applies just as well to any meaningful gravitational tensor whose non-zero corrections are of second order (or smaller).
Any thoughts? |
Christoffel Symbol
In maths that is
Question: Verify the consequences of metric compatibility: If
\begin{align}
{\mathrm{\nabla }}_{\sigma}g_{\mu \nu }=0 & \phantom {10000}(1) \\
\end{align}then (a)\begin{align}
{\mathrm{\nabla }}_{\sigma}g^{\mu \nu }=0 & \phantom {10000}(2) \\
\end{align}and (b)\begin{align}
{\mathrm{\nabla }}_{\lambda}{\varepsilon }_{\mu \nu \sigma \rho }=0 & \phantom {10000}(3) \\
\end{align} I am not sure if we are assuming ##{\mathrm{\Gamma }}^{\tau }_{\lambda \mu }={\mathrm{\Gamma }}^{\tau }_{ \mu \lambda }## or not.
Answer: Part (a) was quite simple but I struggled with part (b) until 23 March and had to give up. Along the way I had lots of practice at index manipulation, I reacquainted myself with Cramer's rule for solving simultaneous equations, proved (b) on the surface of a sphere, found the 'dynamite' version of Carroll's streamlined matrix determinant equation (2.66) and added some equation shortcut keys to my keyboard. The time was not wasted.
We make frequent use here of the fact that ##g_{\mu \nu }g^{\mu \rho }\mathrm{=}{\delta}^{\rho }_{\nu }## and the indexing effect of the Kronecker delta: ##{\delta}^{\lambda }_{\beta }\mathrm{\Gamma }^{\mu }_{\sigma \lambda }={\mathrm{\Gamma }}^{\mu }_{\sigma \beta }## because we are summing over ##\lambda ## and the only non-zero term is when ##\beta =\lambda ##. In this case ##\mathrm{\Gamma }## can be replaced by any symbol or tensor of any rank.
Here is the full effort Ex 3.01 Consequences of metric compatibility.pdf (7 pages of which 4 might be worth looking at). |
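For comparison, part (a) follows in two lines from ##{\mathrm{\nabla }}_{\sigma}{\delta }^{\mu }_{\rho }=0## and the product rule:

```latex
0 = \nabla_{\sigma}\delta^{\mu}_{\rho}
  = \nabla_{\sigma}\left(g^{\mu\nu}g_{\nu\rho}\right)
  = g_{\nu\rho}\,\nabla_{\sigma}g^{\mu\nu} + g^{\mu\nu}\,\nabla_{\sigma}g_{\nu\rho}
  = g_{\nu\rho}\,\nabla_{\sigma}g^{\mu\nu}
```

Contracting with ##g^{\rho \lambda }## then gives ##{\mathrm{\nabla }}_{\sigma}g^{\mu \lambda }=0##, which is (2).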
I need to model the following problem:
In the next semester, the school plans to replace the tutorials with self-organized student groups. Your task is to find the best possible partition of students into groups and to appoint a group leader for each group. To achieve this task, you are given a list of students. Each student needs to be assigned to exactly one group, and each group needs to have a (student) leader. Every student can be a leader, but we can have only one per group. A leader needs to be assigned to the group he is leading. The number of groups needs to be at least g and at most G. The quality of the groups depends on two criteria. First, the collaboration quality of a student i in a group with leader j is given by collaboration[i,j]. Furthermore, each student i has a certain leadership quality given by leadership[i]. You have to maximize the overall collaboration quality plus the sum of the leadership qualities of the leaders.
I have come up with the following and was wondering if I'm missing something or if someone can show me in the right direction:
I used the variable $x_{i,j}$ if a student $i$ is assigned to the group with leader $j$ ($x_{i,j}=1$) or not ($x_{i,j}=0$). The variable $y_j$ shall denote whether student $j$ is a leader ($=1$) or not ($=0$).
I assumed that there are $n$ students and $m$ leaders. Then I came up with the following LP:
max $\sum_{i=1}^n \sum_{j=1}^m x_{i,j} \cdot \text{collaboration}_{i,j} + \sum_{j=1}^m y_j \cdot \text{leadership}_j$
s.t.
$\sum_{j=1}^m x_{i,j} = 1 \qquad \forall i \in \{1,\ldots,n\}$
$g \leq \sum_{j=1}^my_j \leq G$
$x_{i,j} \leq y_j \qquad \forall i \in \{1,\ldots,n\} \quad \forall j \in \{1,\ldots,m\}$
$x_{i,j} \in \{0,1\} \qquad \forall i \in \{1,\ldots,n\} \quad \forall j \in \{1,\ldots,m\}$
$y_j \in \{0,1\} \qquad \forall j \in \{1,\ldots,m\}$
This is as far as I have gotten. I know I'm missing the connection between $x_{i,j}$ and $y_{j}$. Also I'm not sure about my function I want to maximize. Help would be appreciated.
EDIT: After the comment I updated the constraints. We have a small example where we can test our model, and I'm not able to reproduce the correct answer with the model above. Any suggestions as to where my error is?
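For a tiny instance, the model as stated (plus the "leader is assigned to their own group" requirement from the problem text, which is the missing link between $x$ and $y$) can be brute-forced to cross-check an ILP solver; a pure-Python sketch with made-up data:

```python
from itertools import combinations

def best_partition(collab, leadership, g, G):
    """Enumerate leader sets L with g <= |L| <= G; each leader j gets
    x[j][j] = 1 (assigned to their own group), every other student goes to
    the leader maximizing collaboration.  Returns (best value, assignment)."""
    n = len(leadership)
    best_val, best_assign = float("-inf"), None
    for size in range(g, G + 1):
        for L in combinations(range(n), size):
            total = sum(leadership[j] for j in L)
            assign = {}
            for i in range(n):
                j = i if i in L else max(L, key=lambda l: collab[i][l])
                assign[i] = j
                total += collab[i][j]
            if total > best_val:
                best_val, best_assign = total, assign
    return best_val, best_assign

collab = [[5, 1, 0],
          [1, 5, 0],
          [0, 0, 5]]
leadership = [10, 1, 1]
print(best_partition(collab, leadership, g=1, G=2))  # → (22, {0: 0, 1: 0, 2: 2})
```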
Answer
$\frac{4R}{3\theta}\sin(\theta/2)$ from the tip of the slice.
Work Step by Step
We know that the area is: $A=\frac{\pi R^2 \theta}{2\pi}=\frac{ R^2 \theta}{2}$ Thus, using the geometric center of the slice of pizza, we find that the center of mass is: $y_{cm}=\frac{(2/3)R^3\sin(\theta/2)}{ R^2 (\theta/2)}$ $y_{cm}=\frac{4R}{3\theta}\sin(\theta/2)$ from the tip of the slice. |
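The closed form is easy to verify by numerically integrating $\bar x = \frac{1}{A}\int r\cos\phi \; r\,dr\,d\phi$ over the slice (tip at the origin, bisector along the x axis); a small midpoint-rule sketch:

```python
from math import cos, sin

def sector_com_numeric(R, theta, n=300):
    """Midpoint-rule centroid distance of a circular sector from its tip."""
    num = area = 0.0
    dr, dphi = R / n, theta / n
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            phi = -theta / 2 + (j + 0.5) * dphi
            dA = r * dr * dphi          # polar area element
            area += dA
            num += r * cos(phi) * dA    # x-coordinate weighted by area
    return num / area

R, theta = 1.0, 1.2
closed_form = (4 * R / (3 * theta)) * sin(theta / 2)
print(sector_com_numeric(R, theta), closed_form)   # the two agree closely
```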
WHY?
This paper first proves that the expressiveness of a language model is restricted by the softmax and then suggests a way to overcome this limit.
WHAT?
The last part of a language model usually consists of a softmax layer applied to the product of a context vector (h) and a word embedding (w).
P_{\theta}(x|c) = \frac{\exp\mathbf{h}_c^T\mathbf{w}_x}{\sum_{x'}\exp \mathbf{h}_c^T\mathbf{w}_{x'}}
This paper formulates language modeling as a matrix factorization problem. To do this, three matrices can be defined: the context vectors, the word embeddings, and the log probabilities of the true data distribution.
\mathbf{H}_{\theta} = \begin{bmatrix}\mathbf{h}^T_{c_1}\\ \mathbf{h}^T_{c_2}\\ \vdots\\ \mathbf{h}^T_{c_N}\\ \end{bmatrix},\quad \mathbf{W}_{\theta} = \begin{bmatrix}\mathbf{w}^T_{x_1}\\ \mathbf{w}^T_{x_2}\\ \vdots\\ \mathbf{w}^T_{x_M}\\ \end{bmatrix},\quad \mathbf{A} = \begin{bmatrix}\log P^*(x_1|c_1) & \log P^*(x_2|c_1) & \cdots & \log P^*(x_M|c_1) \\ \log P^*(x_1|c_2) & \log P^*(x_2|c_2) & \cdots & \log P^*(x_M|c_2) \\ \vdots & \vdots & \ddots & \vdots \\ \log P^*(x_1|c_N) & \log P^*(x_2|c_N) & \cdots & \log P^*(x_M|c_N) \\ \end{bmatrix}
Also, we can define the set of matrices formed by applying row-wise shifts to A.
F(\mathbf{A}) = \{\mathbf{A} + \mathbf{\Lambda}\mathbf{J}_{N,M} \mid \mathbf{\Lambda} \text{ is diagonal and } \mathbf{\Lambda} \in \mathbb{R}^{N \times N}\}
We can derive two properties of this set: F(A) contains all possible logit matrices of the true data distribution, and all matrices in F(A) have similar rank, with the maximum rank difference being 1. If we want $\mathbf{H}_{\theta}\mathbf{W}_{\theta}^T$ to be in F(A), it must have rank as large as A. However, the rank of $\mathbf{H}_{\theta}\mathbf{W}_{\theta}^T$ is strictly upper-bounded by the embedding size d.
\mathbf{H}_{\theta}\mathbf{W}_{\theta}^T = \mathbf{A}'\\ d \geq \min_{\mathbf{A}'\in F(\mathbf{A})} \operatorname{rank}(\mathbf{A}')
This proves the softmax bottleneck: the softmax layer does not have the capacity to express the true data distribution if the dimension d is too small.
To solve this problem, this paper suggests a mixture of softmaxes, which has improved expressiveness. Since the resulting log-probability matrix is a nonlinear function of the context vectors and word embeddings, its rank is not restricted to d.
P_{\theta}(x|c) = \sum^K_{k=1}\pi_{c,k}\frac{\exp(\mathbf{h}_{c,k}^T\mathbf{w}_x)}{\sum_{x'}\exp(\mathbf{h}_{c,k}^T\mathbf{w}_{x'})} \quad \text{s.t.} \quad \sum^K_{k=1}\pi_{c,k} = 1\\\pi_{c,k} = \frac{\exp(\mathbf{w}^T_{\pi,k}\mathbf{g}_c)}{\sum_{k'=1}^K \exp(\mathbf{w}^T_{\pi,k'}\mathbf{g}_c)},\qquad \mathbf{h}_{c,k} = \tanh(\mathbf{W}_{h,k}\mathbf{g}_c)\\\hat{\mathbf{A}}_{\text{MoS}} = \log \sum_{k=1}^K \mathbf{\Pi}_k \exp(\mathbf{H}_{\theta,k}\mathbf{W}_{\theta}^T)
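A NumPy sketch of why the mixture escapes the rank bound; random weights stand in for a trained model, and `K`, `d`, and the sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, d, K = 50, 50, 4, 3    # contexts, vocabulary, embedding size, mixtures

W = rng.normal(size=(M, d))                # shared word embeddings
H = rng.normal(size=(K, N, d))             # one context matrix per mixture
pi = rng.dirichlet(np.ones(K), size=N)     # mixture weights pi_{c,k} per context

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# P(x|c) = sum_k pi_{c,k} * softmax(H_k W^T)_{c,x}
P = np.einsum('nk,knm->nm', pi, softmax(H @ W.T))
A_mos = np.log(P)

# The log of a sum of softmaxes is nonlinear, so the rank escapes d.
assert np.linalg.matrix_rank(A_mos) > d
```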
So?
The perplexity of MoS on Penn Treebank, WikiText, and the 1B Word dataset showed clearly improved performance over plain softmax. Even though MoS is 2-3 times slower to compute, it was better at making context-dependent predictions.
Critique
Mixture of Softmaxes seems compelling, but there may be a more computationally efficient way of achieving the same effect.
In the set of SUSY notes I'm following, the Pauli operator is given as: ${(\sigma^\mu)}_{\alpha\dot{\alpha}} = (I_2, \sigma^1, \sigma^2, \sigma^3)$.
The antisymmetric tensor that lowers and raises indices is defined as: $\epsilon^{\alpha\beta} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$
When we define the barred Pauli operator by applying this twice,
${(\bar{\sigma}^\mu)}^{\dot{\alpha}\alpha} = \epsilon^{\alpha\beta}\epsilon^{\dot{\alpha}\dot{\beta}} {(\sigma^\mu)}_{\beta\dot{\beta}} = (I_2, -\sigma^1, -\sigma^2, -\sigma^3)$.
I really struggle with spinor indices, and keep trying to think of them as matrices, which I'm sure is incredibly wrong. For example, working out ${(\bar{\sigma}^0)}^{\dot{\alpha}\alpha} = \epsilon^{\alpha\beta}\epsilon^{\dot{\alpha}\dot{\beta}} {(\sigma^0)}_{\beta\dot{\beta}}$, I have written
$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^{\alpha\beta}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}^{\dot{\alpha}\dot{\beta}}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}_{\beta\dot{\beta}} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}^{\dot{\alpha}\alpha} = (-I_2)^{\dot{\alpha}\alpha}$,
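To make my attempt concrete, here is the numerical check I tried in NumPy. I'm assuming the double contraction translates to the matrix product $\epsilon\,(\sigma^\mu)^T\epsilon^T$ (transposing $\sigma^\mu$ so both summed indices sit adjacent), and that translation is exactly the step I'm unsure about:

```python
import numpy as np

eps = np.array([[0, 1], [-1, 0]])             # epsilon^{alpha beta}
sigma = [np.eye(2),                           # sigma^0 = I_2
         np.array([[0, 1], [1, 0]]),          # sigma^1
         np.array([[0, -1j], [1j, 0]]),       # sigma^2
         np.array([[1, 0], [0, -1]])]         # sigma^3

# Assumed matrix translation of eps^{alpha beta} eps^{adot bdot} (sigma^mu)_{beta bdot}:
#   sigma_bar^mu = eps . (sigma^mu)^T . eps^T
sigma_bar = [eps @ s.T @ eps.T for s in sigma]

assert np.allclose(sigma_bar[0], np.eye(2))   # I_2 survives with no minus sign
assert all(np.allclose(sigma_bar[i], -sigma[i]) for i in (1, 2, 3))
```

With this ordering I do get $(I_2, -\sigma^1, -\sigma^2, -\sigma^3)$, so my minus sign on $I_2$ may come from contracting in a different order.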
whereas looking at the notes I should not have a minus sign. How does applying the epsilon tensor leave the first element of the Pauli operator ($I_2$) intact while applying a minus sign to the rest of the barred tensor? Or is this just a convention that we define?
Apologies if there are any errors in the TeX, still learning! Any help in this area would be great; I really think I'm going about this wrong by trying to write things out as matrices when contracting spinor indices.
Explaining and Harnessing Adversarial Examples — Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy, 2014
Paper summary by davidstutz

Goodfellow et al. introduce the fast gradient sign method (FGSM) to craft adversarial examples and further provide a possible interpretation of adversarial examples by considering linear models. FGSM is a gradient-based, one-step method for generating adversarial examples. In particular, letting $J$ be the objective optimized during training and $\epsilon$ be the maximum $\infty$-norm of the adversarial perturbation, FGSM computes

$x' = x + \eta = x + \epsilon \text{sign}(\nabla_x J(x, y))$

where $y$ is the label for sample $x$. The $\text{sign}$ operation is applied element-wise here. The applicability of this method is shown in several examples, and it is commonly used in related work.

In the remainder of the paper, Goodfellow et al. discuss a linear interpretation of why adversarial examples exist. Specifically, considering the dot product

$w^T x' = w^T x + w^T \eta$

it becomes apparent that the perturbation $\eta$ – although insignificant on a per-pixel level (i.e. smaller than $\epsilon$) – can influence the activation of a single neuron significantly. What is more, this effect is more pronounced the higher the dimensionality of $x$. Additionally, many network architectures today use $\text{ReLU}$ activations, which are essentially linear.

Goodfellow et al. conduct several more experiments; I want to highlight the conclusions of some of them:

- Training on adversarial samples can be seen as regularization. Based on experiments, it is more effective than $L_1$ regularization or adding random noise.
- The direction of the perturbation matters most. Adversarial samples might be transferable because similar models learn similar functions, so these directions are similarly effective across models.
- Ensembles are not necessarily resistant to perturbations.

Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
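The FGSM update above can be sketched in a few lines of NumPy. The logistic "model", its weights, and the dimensions here are made up purely to illustrate the perturbation rule, not the trained networks from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

w = rng.normal(size=100)   # toy linear model weights (illustrative only)
x = rng.normal(size=100)   # input sample
y = 1.0                    # its label

def grad_loss(x, y, w):
    # gradient of a logistic loss J(x, y) with respect to the input x
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_loss(x, y, w))   # x' = x + eps * sign(grad_x J)

# every component moves by exactly eps, so the infinity-norm of the
# perturbation is eps, matching the constraint in the paper
assert np.isclose(np.abs(x_adv - x).max(), eps)
```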
First published: 2014/12/20. Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
#### Problem addressed
A fast way of finding adversarial examples, and a hypothesis for why adversarial examples exist.

#### Summary
This paper tries to explain why adversarial examples exist; the adversarial example is defined in another paper \cite{arxiv.org/abs/1312.6199}. Adversarial examples are kind of counter-intuitive because they are normally visually indistinguishable from the original example, but lead to very different predictions from the classifier. For example, let sample $x$ be associated with the true class $t$. A classifier (in particular a well-trained dnn) can correctly predict $x$ with high confidence, but with a small perturbation $r$, the same network will predict $x+r$ as a different, incorrect class, also with high confidence.

This paper explains that the existence of such adversarial examples is due more to low model capacity in high-dimensional spaces than to overfitting, and provides some empirical support for that. It also shows a new method that can reliably generate adversarial examples really fast using the 'fast sign' method. Basically, one can generate an adversarial example by taking a small step in the sign direction of the gradient of the objective. They also showed that training along with adversarial examples helps the classifier to generalize.

#### Novelty
A fast method to generate adversarial examples reliably, and a linear hypothesis for those examples.

#### Datasets
MNIST

#### Resources
Talk of the paper: https://www.youtube.com/watch?v=Pq4A2mPCB0Y

#### Presenter
Yingbo Zhou